
Query:

Scholar name: Yin Baocai (尹宝才)

Total: 78 pages of results
Graph Neural Networks with Soft Association between Topology and Attribute CPCI-S
Journal article | 2024, 9260-9268 | THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8

Abstract :

Graph Neural Networks (GNNs) have shown great performance in learning representations for graph-structured data. However, recent studies have found that the interference between topology and attribute can lead to distorted node representations. Most GNNs are designed based on homophily assumptions, so they cannot be applied to graphs with heterophily. This research critically analyzes the propagation principles of various GNNs and the corresponding challenges from an optimization perspective. A novel GNN called Graph Neural Networks with Soft Association between Topology and Attribute (GNN-SATA) is proposed. Different embeddings are utilized to gain insights into attributes and structures while establishing their interconnections through soft association. Further, as integral components of the soft association, a Graph Pruning Module (GPM) and a Graph Augmentation Module (GAM) are developed. These modules dynamically remove or add edges to the adjacency relationships so that the model better fits graphs with homophily or heterophily. Experimental results on homophilic and heterophilic graph datasets convincingly demonstrate that the proposed GNN-SATA effectively captures more accurate adjacency relationships and outperforms state-of-the-art approaches. Notably, on the heterophilic graph dataset Squirrel, GNN-SATA achieves a 2.81% improvement in accuracy while utilizing merely 27.19% of the original number of adjacency relationships. Our code is released at https://github.com/wwwfadecom/GNN-SATA.
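The pruning/augmentation idea can be illustrated with a minimal sketch: score each node pair by attribute similarity, drop existing edges with low scores (GPM-like) and add missing edges with high scores (GAM-like). The function name, cosine scoring, and hard thresholds below are illustrative assumptions, not the paper's learned soft association.

```python
import numpy as np

def refine_adjacency(A, X, drop_thresh=0.2, add_thresh=0.8):
    """Illustrative edge pruning/augmentation based on attribute similarity.

    A : (n, n) binary adjacency matrix
    X : (n, d) node attribute matrix
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                        # pairwise cosine similarity
    np.fill_diagonal(S, 0.0)
    pruned = A * (S >= drop_thresh)      # GPM-like: drop dissimilar edges
    added = (A == 0) & (S >= add_thresh) # GAM-like: add very similar pairs
    return pruned + added.astype(float)
```

On a heterophilic edge (connected nodes with dissimilar attributes) the edge is removed, while an unconnected pair of similar nodes gains an edge.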

Cite:

Copy from the list or export to your reference manager.

GB/T 7714 Yang, Yachao, Sun, Yanfeng, Wang, Shaofan, et al. Graph Neural Networks with Soft Association between Topology and Attribute [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024: 9260-9268.
MLA Yang, Yachao, et al. "Graph Neural Networks with Soft Association between Topology and Attribute." THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8 (2024): 9260-9268.
APA Yang, Yachao, Sun, Yanfeng, Wang, Shaofan, Guo, Jipeng, Gao, Junbin, Ju, Fujiao, et al. Graph Neural Networks with Soft Association between Topology and Attribute. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, 9260-9268.
Modality Perception Learning-Based Determinative Factor Discovery for Multimodal Fake News Detection SCIE
Journal article | 2024 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

Abstract :

The dissemination of fake news, often fueled by exaggeration, distortion, or misleading statements, significantly jeopardizes public safety and shapes social opinion. Although existing multimodal fake news detection methods focus on multimodal consistency, they occasionally neglect modal heterogeneity, missing the opportunity to unearth the most related determinative information concealed within fake news articles. To address this limitation and extract more decisive information, this article proposes the modality perception learning-based determinative factor discovery (MoPeD) model. MoPeD optimizes the steps of feature extraction, fusion, and aggregation to adaptively discover determinants within both unimodality features and multimodality fusion features for the task of fake news detection. Specifically, to capture comprehensive information, the dual encoding module integrates a modal-consistent contrastive language-image pre-training (CLIP) pretrained encoder with a modal-specific encoder, catering to both explicit and implicit information. Motivated by the prompt strategy, the output features of the dual encoding module are complemented by learnable memory information. To handle modality heterogeneity during fusion, the multilevel cross-modality fusion module is introduced to deeply comprehend the complex implicit meaning within text and image. Finally, for aggregating unimodal and multimodal features, the modality perception learning module gauges the similarity between modalities to dynamically emphasize decisive modality features based on the cross-modal content heterogeneity scores. The experimental evaluations conducted on three public fake news datasets show that the proposed model is superior to other state-of-the-art fake news detection methods.
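The aggregation step above gauges similarity between modalities to emphasize decisive features. A minimal sketch of that idea follows; the scoring rule (cosine consistency, softmax over three candidates) and the function name are illustrative assumptions, not MoPeD's actual modality perception learning module.

```python
import numpy as np

def modality_weights(text_feat, image_feat, fused_feat, tau=1.0):
    """Weight unimodal and fused features by cross-modal heterogeneity."""
    t = text_feat / np.linalg.norm(text_feat)
    v = image_feat / np.linalg.norm(image_feat)
    consistency = float(t @ v)           # in [-1, 1]
    heterogeneity = 1.0 - consistency    # high when modalities disagree
    # Heterogeneous pairs lean on unimodal cues; consistent pairs on fusion.
    logits = np.array([heterogeneity, heterogeneity, consistency]) / tau
    w = np.exp(logits) / np.exp(logits).sum()
    feats = np.stack([text_feat, image_feat, fused_feat])
    return w, (w[:, None] * feats).sum(axis=0)
```

When text and image agree, the fused feature dominates the aggregate; when they conflict, the unimodal features are emphasized instead.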

Keyword :

modality perception learning; cross-modal analysis; Semantics; Data mining; Adaptive prompt learning; Encoding; multimodal fake news detection; Motorcycles; Fake news; Feature extraction; Visualization

Cite:


GB/T 7714 Wang, Boyue, Wu, Guangchao, Li, Xiaoyan, et al. Modality Perception Learning-Based Determinative Factor Discovery for Multimodal Fake News Detection [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.
MLA Wang, Boyue, et al. "Modality Perception Learning-Based Determinative Factor Discovery for Multimodal Fake News Detection." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024).
APA Wang, Boyue, Wu, Guangchao, Li, Xiaoyan, Gao, Junbin, Hu, Yongli, Yin, Baocai. Modality Perception Learning-Based Determinative Factor Discovery for Multimodal Fake News Detection. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.
Attention-Bridged Modal Interaction for Text-to-Image Generation SCIE
Journal article | 2024, 34 (7), 5400-5413 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract :

We propose a novel text-to-image generation network, the Attention-bridged Modal Interaction Generative Adversarial Network (AMI-GAN), to better explore modal interaction and perception for high-quality image synthesis. AMI-GAN contains two novel designs: an Attention-bridged Modal Interaction (AMI) module and a Residual Perception Discriminator (RPD). In the AMI module, we design a multi-scale attention mechanism to exploit semantic alignment, fusion, and enhancement between text and image, better refining the details and contextual semantics of the synthesized image. In the RPD, we design a multi-scale information perception mechanism with a novel information adjustment function that encourages the discriminator to better perceive visual differences between real and synthesized images; the discriminator in turn drives the generator to improve the visual quality of the synthesized image. Based on these designs, we build two versions: a single-stage generation framework (AMI-GAN-S) and a multi-stage generation framework (AMI-GAN-M). The former can synthesize high-resolution images thanks to its low computational cost; the latter can synthesize images with realistic detail. Experimental results on two widely used T2I datasets show that our AMI-GANs achieve competitive performance on the T2I task.

Keyword :

Layout; Generative adversarial network; attention-bridged modal interaction; residual perception discriminator; Visualization; Computational modeling; text-to-image synthesis; Generators; Task analysis; Semantics; Image synthesis

Cite:


GB/T 7714 Tan, Hongchen, Yin, Baocai, Xu, Kaiqiang, et al. Attention-Bridged Modal Interaction for Text-to-Image Generation [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (7): 5400-5413.
MLA Tan, Hongchen, et al. "Attention-Bridged Modal Interaction for Text-to-Image Generation." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.7 (2024): 5400-5413.
APA Tan, Hongchen, Yin, Baocai, Xu, Kaiqiang, Wang, Huasheng, Liu, Xiuping, Li, Xin. Attention-Bridged Modal Interaction for Text-to-Image Generation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (7), 5400-5413.
Multi-Level Interaction Based Knowledge Graph Completion SCIE
Journal article | 2024, 32, 386-396 | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING
WoS CC Cited Count: 6

Abstract :

With the continuous emergence of new knowledge, a Knowledge Graph (KG) typically suffers from the incompleteness problem, hindering the performance of downstream applications. Thus, Knowledge Graph Completion (KGC) has attracted considerable attention. However, existing KGC methods usually capture coarse-grained information by directly interacting with the entity and relation, ignoring the important fine-grained information in them. To capture this fine-grained information, in this paper we divide each entity/relation into several segments and propose a novel Multi-Level Interaction (MLI) based KGC method, which simultaneously interacts with the entity and relation at the fine-grained and coarse-grained levels. The fine-grained interaction module applies the Gated Recurrent Unit (GRU) mechanism to guarantee sequentiality between segments, which facilitates fine-grained feature interaction without obviously sacrificing model complexity. Moreover, the coarse-grained interaction module designs a High-order Factorized Bilinear (HFB) operation to facilitate the coarse-grained interaction between the entity and relation by applying a tensor-factorization-based multi-head mechanism, which still effectively reduces its parameter scale. Experimental results show that the proposed method achieves state-of-the-art performance on the link prediction task over five well-established knowledge graph completion benchmarks.
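The factorized bilinear idea can be sketched compactly: project the entity and relation into a shared factor space, interact them with an elementwise product (a rank-restricted bilinear form), and sum-pool per head. The function name and the choice of `U`, `V` as plain projection matrices are illustrative stand-ins for the paper's HFB parameters.

```python
import numpy as np

def factorized_bilinear(e, r, U, V, heads=4):
    """Factorized bilinear interaction between entity e and relation r.

    U, V : (k, d) projection matrices, k divisible by `heads`.
    Returns one pooled score per head.
    """
    z = (U @ e) * (V @ r)      # elementwise product in factor space
    z = z.reshape(heads, -1)   # split factors across heads
    return z.sum(axis=1)       # sum-pool each head
```

Compared with a full bilinear form e^T W r (with a dense d x d x k tensor W), the factorization needs only two k x d matrices, which is how the multi-head mechanism keeps the parameter scale small.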

Keyword :

representation learning; knowledge graph embedding; attention network; Knowledge graph completion

Cite:


GB/T 7714 Wang, Jiapu, Wang, Boyue, Gao, Junbin, et al. Multi-Level Interaction Based Knowledge Graph Completion [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32: 386-396.
MLA Wang, Jiapu, et al. "Multi-Level Interaction Based Knowledge Graph Completion." IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING 32 (2024): 386-396.
APA Wang, Jiapu, Wang, Boyue, Gao, Junbin, Hu, Simin, Hu, Yongli, Yin, Baocai. Multi-Level Interaction Based Knowledge Graph Completion. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32, 386-396.
Center Focusing Network for Real-Time LiDAR Panoptic Segmentation CPCI-S
Conference paper | 2023, 13425-13434 | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR)

Abstract :

LiDAR panoptic segmentation facilitates an autonomous vehicle to comprehensively understand the surrounding objects and scenes and is required to run in real time. The recent proposal-free methods accelerate the algorithm, but their effectiveness and efficiency are still limited owing to the difficulty of modeling non-existent instance centers and the costly center-based clustering modules. To achieve accurate and real-time LiDAR panoptic segmentation, a novel center focusing network (CFNet) is introduced. Specifically, the center focusing feature encoding (CFFE) is proposed to explicitly understand the relationships between the original LiDAR points and virtual instance centers by shifting the LiDAR points and filling in the center points. Moreover, to leverage the redundantly detected centers, a fast center deduplication module (CDM) is proposed to select only one center for each instance. Experiments on the SemanticKITTI and nuScenes panoptic segmentation benchmarks demonstrate that our CFNet outperforms all existing methods by a large margin and is 1.6 times faster than the most efficient method.
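Selecting one center per instance from redundant detections can be sketched as a greedy distance-based suppression: keep the highest-scoring candidate and discard all others within a radius. The greedy rule, the radius value, and the function name are illustrative assumptions about the CDM, not its exact implementation.

```python
import numpy as np

def deduplicate_centers(centers, scores, radius=0.5):
    """Greedy center deduplication.

    centers : (n, 2) candidate instance centers
    scores  : (n,) confidence per candidate
    Returns indices of the kept centers, one per spatial cluster.
    """
    order = np.argsort(-scores)  # highest confidence first
    kept = []
    for i in order:
        c = centers[i]
        # Suppress candidates that fall within `radius` of a kept center.
        if all(np.linalg.norm(c - centers[j]) > radius for j in kept):
            kept.append(i)
    return sorted(kept)
```

Two nearby candidates collapse into the single higher-confidence one, while a distant candidate survives as a separate instance center.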

Cite:


GB/T 7714 Li, Xiaoyan, Zhang, Gang, Wang, Boyue, et al. Center Focusing Network for Real-Time LiDAR Panoptic Segmentation [J]. CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 13425-13434.
MLA Li, Xiaoyan, et al. "Center Focusing Network for Real-Time LiDAR Panoptic Segmentation." CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) (2023): 13425-13434.
APA Li, Xiaoyan, Zhang, Gang, Wang, Boyue, Hu, Yongli, Yin, Baocai. Center Focusing Network for Real-Time LiDAR Panoptic Segmentation. CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, 13425-13434.
Multimodal driver distraction detection using dual-channel network of CNN and Transformer SCIE
Journal article | 2023, 234 | EXPERT SYSTEMS WITH APPLICATIONS
WoS CC Cited Count: 2

Abstract :

Distracted driving has become one of the main contributors to traffic accidents. It is therefore of great interest for intelligent vehicles to establish a distraction detection system that can continuously monitor driver behavior and respond accordingly. Although significant progress has been made in existing research, most methods focus on extracting either local features or global features while ignoring the other. To make full use of both, we integrate multi-source perception information and propose a novel dual-channel feature extraction model based on CNN and Transformer. To improve the model's fitting ability on time-series data, the CNN channel and Transformer channel are modeled separately using the mid-point residual structure. The scaling factors in the residual structure are regarded as hyperparameters, and a penalized validation method based on bilevel optimization is introduced to obtain the optimal values automatically. Extensive experiments and comparison with state-of-the-art methods on a multimodal dataset of driver distraction validate the effectiveness of the proposed method.
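A scaled residual connection makes clear why the scaling factor is worth treating as a hyperparameter. The convex blend below is a minimal sketch under the assumption that "mid-point residual" mixes the identity path and the transformed path with a tunable weight; whether the paper uses exactly this form is an assumption.

```python
import numpy as np

def midpoint_residual(x, f, alpha=0.5):
    """Blend the identity path and the transform f(x) with weight alpha.

    alpha = 0   -> pure skip connection (output is x)
    alpha = 1   -> pure transform (output is f(x))
    alpha = 0.5 -> the "mid-point" between the two paths
    """
    return (1.0 - alpha) * x + alpha * f(x)
```

In the paper's setting, alpha would be selected automatically by the penalized-validation bilevel optimization rather than fixed by hand.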

Keyword :

Convolutional neural network; Driver distraction detection; Transformer; Multimodal; Hyperparameter optimization

Cite:


GB/T 7714 Mou, Luntian, Chang, Jiali, Zhou, Chao, et al. Multimodal driver distraction detection using dual-channel network of CNN and Transformer [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2023, 234.
MLA Mou, Luntian, et al. "Multimodal driver distraction detection using dual-channel network of CNN and Transformer." EXPERT SYSTEMS WITH APPLICATIONS 234 (2023).
APA Mou, Luntian, Chang, Jiali, Zhou, Chao, Zhao, Yiyuan, Ma, Nan, Yin, Baocai, et al. Multimodal driver distraction detection using dual-channel network of CNN and Transformer. EXPERT SYSTEMS WITH APPLICATIONS, 2023, 234.
HyperGraph based human mesh hierarchical representation and reconstruction from a single image SCIE CPCI-S
Journal article | 2023, 115, 339-347 | COMPUTERS & GRAPHICS-UK
WoS CC Cited Count: 3

Abstract :

Reconstructing 3D human mesh from monocular images has been extensively studied. However, existing non-parametric reconstruction methods are inefficient when modeling vertex relationships concerning human information because they generally adopt a uniform template mesh. To this end, this paper proposes a novel hypergraph-based human mesh hierarchical representation that enables the expression of vertices at the body, part, and vertex perspectives, corresponding to global, local, and individual levels, respectively. A novel HyperGraph Attention-based human mesh reconstruction network (HGaMRNet) is then put forward, which mainly consists of two modules and can efficiently capture human information from different granularities of the mesh. Specifically, the first module, Body2Parts, decouples a body into local parts and leverages a Mix-Attention (MAT) based feature encoder to learn visual cues and semantic information of the parts, capturing complex human kinematic relationships from monocular images. The second module, Part2Vertices, transfers part features to the corresponding vertices through an adaptive incidence matrix and utilizes a HyperGraph Attention network to update the vertex features. This is conducive to learning the fine-grained morphological information of a human mesh. Altogether, supported by the hypergraph-based hierarchical representation of the human mesh, HGaMRNet properly balances the effects of neighboring vertices from different levels and ultimately improves the reconstruction accuracy of the human mesh. Experiments conducted on both the Human3.6M and 3DPW datasets show that HGaMRNet outperforms most existing image-based human mesh reconstruction methods.
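The incidence-matrix message passing at the heart of the Part2Vertices idea can be sketched with plain averaging: pool vertex features into each hyperedge (a body part), then distribute hyperedge features back to their member vertices. The averaging stands in for the paper's attention weights, and the incidence matrix here is a fixed binary one rather than the paper's adaptive version.

```python
import numpy as np

def hypergraph_update(X, H):
    """One vertex -> hyperedge -> vertex message-passing round.

    X : (n, d) vertex features
    H : (n, m) incidence matrix, H[v, e] = 1 if vertex v is in hyperedge e
    """
    De = H.sum(axis=0)                  # hyperedge degrees
    Dv = H.sum(axis=1)                  # vertex degrees
    edge_feat = (H.T @ X) / De[:, None] # pool vertices into parts
    return (H @ edge_feat) / Dv[:, None]  # distribute parts back to vertices
```

A vertex shared by two parts (e.g. a joint between limbs) receives the mean of both part features, which is the smoothing effect the hierarchy is meant to balance.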

Keyword :

Human mesh reconstruction; Hierarchical representation; Mix-attention; HyperGraph; HyperGraph attention

Cite:


GB/T 7714 Hao, Chenhui, Kong, Dehui, Li, Jinghua, et al. HyperGraph based human mesh hierarchical representation and reconstruction from a single image [J]. COMPUTERS & GRAPHICS-UK, 2023, 115: 339-347.
MLA Hao, Chenhui, et al. "HyperGraph based human mesh hierarchical representation and reconstruction from a single image." COMPUTERS & GRAPHICS-UK 115 (2023): 339-347.
APA Hao, Chenhui, Kong, Dehui, Li, Jinghua, Liu, Caixia, Yin, Baocai. HyperGraph based human mesh hierarchical representation and reconstruction from a single image. COMPUTERS & GRAPHICS-UK, 2023, 115, 339-347.
Multi-Concept Representation Learning for Knowledge Graph Completion SCIE
Journal article | 2023, 17 (1) | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA
WoS CC Cited Count: 5

Abstract :

Knowledge Graph Completion (KGC) aims at inferring missing entities or relations by embedding them in a low-dimensional space. However, most existing KGC methods generally fail to handle the complex concepts hidden in triplets, so the learned embeddings of entities or relations may deviate from the true situation. In this article, we propose a novel Multi-concept Representation Learning (McRL) method for the KGC task, which mainly consists of a multi-concept representation module, a deep residual attention module, and an interaction embedding module. Specifically, instead of the single-feature representation, the multi-concept representation module projects each entity or relation to multiple vectors to capture the complex conceptual information hidden in them. The deep residual attention module simultaneously explores the inter- and intra-connection between entities and relations to enhance the entity and relation embeddings corresponding to the current contextual situation. Moreover, the interaction embedding module further weakens the noise and ambiguity to obtain the optimal and robust embeddings. We conduct the link prediction experiment to evaluate the proposed method on several standard datasets, and experimental results show that the proposed method outperforms existing state-of-the-art KGC methods.
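Projecting each entity to multiple vectors and selecting among them by context can be sketched with simple dot-product attention. The function name and the attention rule are illustrative stand-ins for McRL's multi-concept representation module and deep residual attention.

```python
import numpy as np

def concept_attention(entity_concepts, context):
    """Select a context-dependent embedding from several concept vectors.

    entity_concepts : (k, d) concept vectors for one entity
    context         : (d,) context vector (e.g. from the current triplet)
    """
    logits = entity_concepts @ context
    w = np.exp(logits - logits.max())  # numerically stable softmax
    w = w / w.sum()
    return w @ entity_concepts         # attention-weighted combination
```

The same entity thus yields different embeddings in different triplets: the concept most aligned with the current relational context dominates the mixture.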

Keyword :

attention network; Knowledge graph completion; multi-concept representation

Cite:


GB/T 7714 Wang, Jiapu, Wang, Boyue, Gao, Junbin, et al. Multi-Concept Representation Learning for Knowledge Graph Completion [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2023, 17 (1).
MLA Wang, Jiapu, et al. "Multi-Concept Representation Learning for Knowledge Graph Completion." ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 17.1 (2023).
APA Wang, Jiapu, Wang, Boyue, Gao, Junbin, Hu, Yongli, Yin, Baocai. Multi-Concept Representation Learning for Knowledge Graph Completion. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2023, 17 (1).
Graph structure learning layer and its graph convolution clustering application SCIE
Journal article | 2023, 165, 1010-1020 | NEURAL NETWORKS
WoS CC Cited Count: 3

Abstract :

To learn the embedding representation of graph-structured data corrupted by noise and outliers, existing graph structure learning networks usually follow a two-step paradigm, i.e., constructing a "good" graph structure and achieving message passing for signals supported on the learned graph. However, data corrupted by noise may make the learned graph structure unreliable. In this paper, we propose an adaptive graph convolutional clustering network that alternately adjusts the graph structure and node representation layer-by-layer with back-propagation. Specifically, we design a Graph Structure Learning layer before each Graph Convolutional layer to learn the sparse graph structure from the node representations, where the graph structure is implicitly determined by the solution to the optimal self-expression problem. This is one of the first works to use an optimization process as a Graph Network layer, which is clearly different from the function operations in traditional deep learning layers. An efficient iterative optimization algorithm is given to solve the optimal self-expression problem in the Graph Structure Learning layer. Experimental results show that the proposed method can effectively defend against the negative effects of inaccurate graph structures. The code is available at https://github.com/HeXiax/SSGNN.
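The self-expression idea behind the layer can be sketched in closed form: reconstruct each node from the other nodes, min_C ||X - CX||^2 + lam ||C||^2, and use |C| as an affinity/adjacency. The ridge closed form below replaces the paper's iterative solver and learned sparsity, so treat it as a conceptual sketch only.

```python
import numpy as np

def self_expression_graph(X, lam=0.1):
    """Build an affinity graph from the ridge self-expression solution.

    X : (n, d) node representations
    Solves C = G (G + lam I)^{-1} with G = X X^T, then symmetrizes |C|.
    """
    n = X.shape[0]
    G = X @ X.T
    C = G @ np.linalg.inv(G + lam * np.eye(n))
    np.fill_diagonal(C, 0.0)             # forbid trivial self-expression
    return (np.abs(C) + np.abs(C).T) / 2 # symmetric affinity matrix
```

Nodes with similar representations receive large mutual coefficients, so the recovered graph connects same-cluster nodes even when the input adjacency is noisy.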

Keyword :

Graph convolutional network; Subspace clustering; Graph structure learning

Cite:


GB/T 7714 He, Xiaxia, Wang, Boyue, Li, Ruikun, et al. Graph structure learning layer and its graph convolution clustering application [J]. NEURAL NETWORKS, 2023, 165: 1010-1020.
MLA He, Xiaxia, et al. "Graph structure learning layer and its graph convolution clustering application." NEURAL NETWORKS 165 (2023): 1010-1020.
APA He, Xiaxia, Wang, Boyue, Li, Ruikun, Gao, Junbin, Hu, Yongli, Huo, Guangyu, et al. Graph structure learning layer and its graph convolution clustering application. NEURAL NETWORKS, 2023, 165, 1010-1020.
MVMA-GCN: Multi-view multi-layer attention graph convolutional networks SCIE
Journal article | 2023, 126 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
WoS CC Cited Count: 4

Abstract :

The accuracy of graph representation learning is highly dependent on the precise characterization of node relationships. However, representing the complex and diverse networks of the real world using a single type of node or link is challenging, often resulting in incomplete information. Moreover, different types of nodes and links convey rich information, which makes it difficult to design a graph network that can integrate diverse links. This paper introduces a novel multi-view and multi-layer attention model designed to optimize node embeddings for semi-supervised node classification. The proposed model exploits various types of inter-node links and employs the Hilbert-Schmidt independence criterion to maximize the dissimilarity between distinct node relationships. Furthermore, the multi-layer attention mechanism is used to discern the impact of different neighboring nodes and the relationships between various node relationships. The performance of the proposed model, MVMA-GCN, was assessed on numerous real-world multi-view datasets. MVMA-GCN consistently outperformed existing models, demonstrating superior accuracy in semi-supervised classification tasks. We have made our code publicly available to ensure the reproducibility of our results.
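The Hilbert-Schmidt independence criterion mentioned above has a simple empirical estimator with linear kernels: HSIC(X, Y) = trace(K H L H) / (n-1)^2, where K and L are Gram matrices and H is the centering matrix. A minimal sketch (the use of linear kernels here is an assumption; kernel choice varies):

```python
import numpy as np

def hsic(X, Y):
    """Empirical HSIC with linear kernels.

    X : (n, d1) view-1 features, Y : (n, d2) view-2 features.
    Returns 0 when one view carries no (centered) variation; larger
    values indicate stronger statistical dependence between the views.
    """
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    K, L = X @ X.T, Y @ Y.T              # linear-kernel Gram matrices
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```

Minimizing this quantity between two views' relationship representations pushes them toward independence, which is how "maximizing dissimilarity between distinct node relationships" is typically operationalized.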

Keyword :

Graph analysis; Semi-supervised classification; Graph convolutional networks; Graph neural networks

Cite:


GB/T 7714 Zhang, Pengyu, Zhang, Yong, Wang, Jingcheng, et al. MVMA-GCN: Multi-view multi-layer attention graph convolutional networks [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 126.
MLA Zhang, Pengyu, et al. "MVMA-GCN: Multi-view multi-layer attention graph convolutional networks." ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 126 (2023).
APA Zhang, Pengyu, Zhang, Yong, Wang, Jingcheng, Yin, Baocai. MVMA-GCN: Multi-view multi-layer attention graph convolutional networks. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 126.
Address: BJUT Library (100 Pingleyuan, Chaoyang District, Beijing 100124, China). Contact: 010-67392185
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.