
Query:

Scholar name: 贾熹滨

Multi-scale Main-auxiliary MRI Sequences Fusion Based U-Net for Focal Lesion Segmentation SCIE
Journal article | 2025, 19 (3), 926-949 | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS

Abstract :

Multi-parameter Magnetic Resonance Imaging (MRI) plays a significant role in clinical diagnosis by providing complementary information from auxiliary sequences to the main sequence. As a result, recent studies have focused on using multiple MRI sequences for focal lesion segmentation. Nevertheless, a challenge arises because some MRI sequences lack manual labels due to blurred lesion boundaries. We therefore propose a multi-scale main-auxiliary sequences fusion based U-Net that utilizes the complementary information from both the unlabeled auxiliary sequences and the labeled main sequence for focal lesion segmentation. On one hand, to effectively augment the feature discriminability of the main sequence with the aid of auxiliary sequences, this paper proposes a multi-scale main-auxiliary sequences adaptive fusion based encoding path, in which a Main-Auxiliary Sequences Adaptive Fusion (MASAF) module and its enhanced version, the Auxiliary Sequences Contribution-Aware Fusion (ASCAF) module, are put forward to achieve main-auxiliary fusion at each scale. On the other hand, a multi-scale boundary feature enhancement based decoding path is devised to decode the multi-scale fused features and obtain the final segmentation result. To improve the capability of discriminating obscure lesion areas from healthy tissue, a Dual Attention Module is applied at each scale to enhance the features via a channel attention module and a spatial attention module. Comprehensive experiments on a self-collected clinical focal liver lesion dataset and the public BraTS 2015 dataset demonstrate that the proposed method outperforms comparative methods in both quantitative and visual analyses.
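The channel-plus-spatial attention described above can be illustrated with a short sketch. This is a minimal PyTorch example of a generic dual attention block, assuming SE-style channel gating and a 7x7 convolutional spatial gate; it is not the paper's exact MASAF/ASCAF fusion or boundary enhancement modules.

```python
# Minimal sketch of a dual attention block (channel attention followed by spatial attention).
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each feature channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: re-weight each location from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial_gate(pooled)

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)    # fused main-auxiliary features at one scale
    print(DualAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```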

Keyword :

Dual Attention; Multiple MRI Sequences Fusion; Focal Lesion Segmentation; Medical and Biological Imaging

Cite:


GB/T 7714 Jia, Xibin, Yang, Chuanxu, Yang, Yifan et al. Multi-scale Main-auxiliary MRI Sequences Fusion Based U-Net for Focal Lesion Segmentation [J]. | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2025, 19 (3): 926-949.
MLA Jia, Xibin et al. "Multi-scale Main-auxiliary MRI Sequences Fusion Based U-Net for Focal Lesion Segmentation". | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS 19.3 (2025): 926-949.
APA Jia, Xibin, Yang, Chuanxu, Yang, Yifan, Qian, Chen, Wang, Luo, Yang, Zhenghan et al. Multi-scale Main-auxiliary MRI Sequences Fusion Based U-Net for Focal Lesion Segmentation. | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2025, 19 (3), 926-949.
A Few-Shot Medical Image Segmentation Network with Boundary Category Correction CPCI-S
Journal article | 2024, 14434, 371-382 | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X

Abstract :

Accurate medical image segmentation is the foundation of clinical imaging diagnosis and 3D image reconstruction. However, medical images often have low contrast between target objects, are greatly affected by organ movement, and suffer from limited annotated samples. To address these issues, we propose a few-shot medical image segmentation network with boundary category correction, named the Boundary Category Correction Network (BCC-Net). Within the overall medical few-shot learning framework, we first propose the Prior Mask Generation Module (PRGM) and the Multi-scale Feature Fusion Module (MFFM). PRGM can better localize the query target, while MFFM can adaptively fuse the support-set prototype, the prior mask, and the query-set features at different scales to address the spatial inconsistency between the support set and the query set. To improve segmentation accuracy, we construct an additional base-learning branch, which, together with the meta-learning branch, forms the Boundary Category Correction Framework (BCCF). It corrects the boundary category of the meta-learning branch's prediction mask by predicting the regions of the base categories in the query set. Experiments are conducted on the mainstream ABDMR and ABD-CT public medical image segmentation datasets, with comparative analysis and ablation studies against a variety of existing state-of-the-art few-shot segmentation methods. The results demonstrate the effectiveness of the proposed method, which significantly enhances segmentation performance on medical images.
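The support-set prototype and prior-mask ideas mentioned above follow a common few-shot segmentation pattern, sketched below: a class prototype is obtained by masked average pooling over support features, and a query prior map comes from cosine similarity to that prototype. This is a generic illustration, not the exact PRGM/MFFM modules of BCC-Net.

```python
# Prototype extraction and prior-map generation for few-shot segmentation (generic sketch).
import torch
import torch.nn.functional as F

def masked_average_pooling(support_feat, support_mask):
    # support_feat: (B, C, H, W); support_mask: (B, 1, H', W') with values in {0, 1}
    mask = F.interpolate(support_mask, size=support_feat.shape[-2:], mode="nearest")
    proto = (support_feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)
    return proto  # (B, C) class prototype

def prior_map(query_feat, proto):
    # Cosine similarity between every query location and the support prototype.
    q = F.normalize(query_feat, dim=1)                         # (B, C, H, W)
    p = F.normalize(proto, dim=1).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
    return (q * p).sum(dim=1, keepdim=True)                    # (B, 1, H, W), values in [-1, 1]

if __name__ == "__main__":
    sf, qf = torch.randn(1, 256, 32, 32), torch.randn(1, 256, 32, 32)
    sm = (torch.rand(1, 1, 128, 128) > 0.5).float()
    print(prior_map(qf, masked_average_pooling(sf, sm)).shape)
```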

Keyword :

Prior Mask; Medical image segmentation; Feature Fusion; Boundary Category Correction

Cite:


GB/T 7714 Xu, Zeyu, Jia, Xibin, Guo, Xiong et al. A Few-Shot Medical Image Segmentation Network with Boundary Category Correction [J]. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434: 371-382.
MLA Xu, Zeyu et al. "A Few-Shot Medical Image Segmentation Network with Boundary Category Correction". | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X 14434 (2024): 371-382.
APA Xu, Zeyu, Jia, Xibin, Guo, Xiong, Wang, Luo, Zheng, Yiming. A Few-Shot Medical Image Segmentation Network with Boundary Category Correction. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434, 371-382.
A novel dual-channel graph convolutional neural network for facial action unit recognition SCIE
Journal article | 2023, 166, 61-68 | PATTERN RECOGNITION LETTERS
WoS CC Cited Count: 7

Abstract :

Facial Action Unit (AU) recognition is a challenging problem, as subtle muscle movements produce diverse AU representations. Recently, AU relations have been utilized to assist AU recognition and improve the understanding of AUs. Nevertheless, simply adopting regular Bayesian networks or the relations between AUs and emotions is not enough for modelling complex AU relations. To provide a quantitative measurement of AU relations using knowledge from FACS, we propose an AU relation quantization autoencoder. Moreover, to cope with the diversity of AUs arising from individual representation differences and other environmental influences, we propose a dual-channel graph convolutional neural network (DGCN) that captures both inherent and random AU relations. The first channel is a FACS-based relation graph convolution channel (FACS-GCN) that embeds prior knowledge of FACS and adapts the network to the inherent AU-dependent relations. The second channel is a data-learning-based relation graph convolution channel (DLR-GCN) built on metric learning, which provides robustness to individual differences and environmental changes. Comprehensive experiments have been conducted on three public datasets: CK+, RAF-AU and DISFA. The experimental results demonstrate that the proposed DGCN can extract the hidden relations well, thereby achieving strong performance in AU recognition.
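The dual-channel idea can be sketched as one graph convolution over a fixed prior adjacency (standing in for FACS-derived AU relations) plus one over a data-driven adjacency learned from feature similarity, with the two outputs summed. The adjacency contents and the fusion rule here are illustrative assumptions, not the paper's exact FACS-GCN/DLR-GCN design.

```python
# Minimal sketch of a dual-channel graph convolution over AU nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualChannelGCN(nn.Module):
    def __init__(self, in_dim, out_dim, prior_adj):
        super().__init__()
        self.register_buffer("prior_adj", prior_adj)  # (N, N) fixed prior relation graph
        self.w_prior = nn.Linear(in_dim, out_dim)
        self.w_learn = nn.Linear(in_dim, out_dim)

    def forward(self, x):                             # x: (N, in_dim), one node per AU
        # Channel 1: propagate over the prior relation graph.
        h_prior = self.prior_adj @ self.w_prior(x)
        # Channel 2: adjacency from pairwise feature similarity (data-driven relations).
        learned_adj = F.softmax(x @ x.t() / x.shape[1] ** 0.5, dim=-1)
        h_learn = learned_adj @ self.w_learn(x)
        return F.relu(h_prior + h_learn)

if __name__ == "__main__":
    num_aus, feat_dim = 12, 64
    prior = torch.eye(num_aus)                        # placeholder for FACS co-occurrence relations
    print(DualChannelGCN(feat_dim, 32, prior)(torch.randn(num_aus, feat_dim)).shape)
```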

Keyword :

Metric learning; GCN; FACS; Dual-channel graph convolutional neural network; AU relation

Cite:


GB/T 7714 Jia, Xibin, Xu, Shaowu, Zhou, Yuhan et al. A novel dual-channel graph convolutional neural network for facial action unit recognition [J]. | PATTERN RECOGNITION LETTERS, 2023, 166: 61-68.
MLA Jia, Xibin et al. "A novel dual-channel graph convolutional neural network for facial action unit recognition". | PATTERN RECOGNITION LETTERS 166 (2023): 61-68.
APA Jia, Xibin, Xu, Shaowu, Zhou, Yuhan, Wang, Luo, Li, Weiting. A novel dual-channel graph convolutional neural network for facial action unit recognition. | PATTERN RECOGNITION LETTERS, 2023, 166, 61-68.
Graph Contrastive Learning with Constrained Graph Data Augmentation SCIE
Journal article | 2023, 55 (8), 10705-10726 | NEURAL PROCESSING LETTERS
WoS CC Cited Count: 2

Abstract :

Studies on graph contrastive learning, an effective form of self-supervision, have achieved excellent experimental performance. Most existing methods generate two augmented views and then perform feature learning on the two views by maximizing semantic consistency. Nevertheless, it remains challenging to generate optimal views that help graph contrastive learning reveal the essential association relations among nodes. Considering that extremely high mutual information between views tends to have a negative effect on model training, a good choice is to add constraints to the graph data augmentation process. This paper proposes two constraint principles, low dissimilarity priority (LDP) and mutual exclusion (ME), to mitigate the mutual information between views and compress its redundant parts. The LDP principle aims to reduce the mutual information between views at the global scale, while the ME principle reduces it at the local scale; the two principles are opposed in approach and suited to different situations. Without loss of generality, the two proposed principles are applied to two well-performing graph contrastive methods, i.e. GraphCL and GCA, and experimental results on 20 public benchmark datasets show that models aided by the two constraint principles achieve higher recognition accuracy.
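For context, the view-to-view consistency objective in GraphCL-style training is commonly an NT-Xent contrastive loss, sketched below; the LDP/ME constraint principles act on how the augmented views are generated, not on this loss. The temperature value is an illustrative assumption.

```python
# NT-Xent contrastive loss between two augmented views of the same batch of graphs.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.2):
    # z1, z2: (N, D) embeddings of the two views of the same N graphs.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.shape[0]
    z = torch.cat([z1, z2], dim=0)                         # (2N, D)
    sim = z @ z.t() / temperature                          # (2N, 2N) similarity matrix
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))             # exclude self-similarity
    # For sample i, the positive is its other view (i + n or i - n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    v1, v2 = torch.randn(32, 128), torch.randn(32, 128)
    print(nt_xent(v1, v2).item())
```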

Keyword :

Augmentation principle; Self-supervised learning; Graph data augmentation; Contrastive learning; Graph representation learning

Cite:


GB/T 7714 Xu, Shaowu, Wang, Luo, Jia, Xibin. Graph Contrastive Learning with Constrained Graph Data Augmentation [J]. | NEURAL PROCESSING LETTERS, 2023, 55 (8): 10705-10726.
MLA Xu, Shaowu et al. "Graph Contrastive Learning with Constrained Graph Data Augmentation". | NEURAL PROCESSING LETTERS 55.8 (2023): 10705-10726.
APA Xu, Shaowu, Wang, Luo, Jia, Xibin. Graph Contrastive Learning with Constrained Graph Data Augmentation. | NEURAL PROCESSING LETTERS, 2023, 55 (8), 10705-10726.
Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model SCIE
Journal article | 2022, 16 (1), 16-37 | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS
WoS CC Cited Count: 5

Abstract :

Accurate liver segment segmentation based on radiological images is indispensable for the preoperative analysis of liver tumor resection surgery. However, most existing segmentation methods cannot be used directly for this task because of the challenge of exact edge prediction, with some tiny and slender vessels serving as the clinical segmentation criterion. To address this problem, we propose a novel deep-learning-based segmentation model called the Boundary-Aware Dual Attention Liver Segment Segmentation Model (BADA). This model improves the segmentation accuracy of liver segments by enhancing the edges, including the vessels that serve as segment boundaries. In our model, a dual gated attention is proposed, composed of a spatial attention module and a semantic attention module. The spatial attention module enhances the weights of key edge regions by attending to salient intensity changes, while the semantic attention amplifies the contribution of filters that extract more discriminative feature information by weighting the significant convolution channels. We also build a liver segment dataset of 59 clinical cases with dynamic contrast-enhanced MRI (Magnetic Resonance Imaging) of the portal vein phase, annotated by several professional radiologists. Compared with several state-of-the-art methods and baseline segmentation methods, we achieve the best results on this clinical liver segment segmentation dataset, with Mean Dice, Mean Sensitivity and Mean Positive Predictive Value reaching 89.01%, 87.71% and 90.67%, respectively.
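The reported evaluation metrics can be computed from binary masks as shown below; this is a standard formulation sketch, not code from the paper.

```python
# Dice, sensitivity (recall), and positive predictive value (precision) from binary masks.
import numpy as np

def segmentation_metrics(pred, gt):
    # pred, gt: binary arrays of the same shape (1 = segment, 0 = background).
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)
    sensitivity = tp / (tp + fn + 1e-8)   # fraction of the ground-truth region recovered
    ppv = tp / (tp + fp + 1e-8)           # fraction of predicted voxels that are correct
    return dice, sensitivity, ppv

if __name__ == "__main__":
    pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
    gt = np.zeros((64, 64), dtype=np.uint8); gt[12:42, 12:42] = 1
    print(segmentation_metrics(pred, gt))
```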

Keyword :

attention mechanism; boundary-aware; liver segment; segmentation model

Cite:


GB/T 7714 Jia, Xibin, Qian, Chen, Yang, Zhenghan et al. Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model [J]. | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2022, 16 (1): 16-37.
MLA Jia, Xibin et al. "Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model". | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS 16.1 (2022): 16-37.
APA Jia, Xibin, Qian, Chen, Yang, Zhenghan, Xu, Hui, Han, Xianjun, Rene, Hao et al. Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model. | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2022, 16 (1), 16-37.
Deep-Learning-Based Coronary Artery Calcium Detection from CT Image SCIE
Journal article | 2021, 21 (21) | SENSORS
WoS CC Cited Count: 12

Abstract :

One of the most common methods for diagnosing coronary artery disease is the coronary artery calcium score CT. However, the current diagnostic method using the coronary artery calcium score CT requires considerable time, because the radiologist must manually check the CT images one by one and verify the exact range. In this paper, three CNN models are applied to 1200 normal cardiovascular CT images and 1200 CT images in which calcium is present in the cardiovascular system. We conduct the experimental test by classifying the CT image data into the original coronary artery calcium score CT images containing the entire rib cage, cardiac segmented images that cut out only the heart region, and cardiac cropped images created by dividing the segmented cardiac images into nine sub-parts and enlarging them. In the experimental test to determine the presence of calcium in a given CT image using the Inception ResNet v2, VGG, and ResNet 50 models, the highest accuracy of 98.52% was obtained when the cardiac cropped image data were used with the ResNet 50 model. It is therefore expected that, with further research, both the detection of the presence of calcium and the automation of the calcium score analysis for each coronary artery calcium score CT will become possible.
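The ResNet-50 classification setup described above can be sketched with torchvision as below: load a pretrained backbone and replace the final layer with a two-class head (calcium present / absent). Dataset loading and the full training loop are omitted, and the weights argument assumes torchvision 0.13 or newer; this is an illustrative sketch, not the authors' code.

```python
# Fine-tuning sketch: binary calcium classification with a pretrained ResNet-50.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")    # downloads ImageNet weights on first use
model.fc = nn.Linear(model.fc.in_features, 2)        # two classes: calcium vs. normal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch standing in for cropped cardiac CT images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```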

Keyword :

VGG; calcium detection; inception resnet V2; resnet-50; coronary artery calcium score CT; deep learning; image classification

Cite:


GB/T 7714 Lee, Sungjin, Rim, Beanbonyka, Jou, Sung-Shick et al. Deep-Learning-Based Coronary Artery Calcium Detection from CT Image [J]. | SENSORS, 2021, 21 (21).
MLA Lee, Sungjin et al. "Deep-Learning-Based Coronary Artery Calcium Detection from CT Image". | SENSORS 21.21 (2021).
APA Lee, Sungjin, Rim, Beanbonyka, Jou, Sung-Shick, Gil, Hyo-Wook, Jia, Xibin, Lee, Ahyoung et al. Deep-Learning-Based Coronary Artery Calcium Detection from CT Image. | SENSORS, 2021, 21 (21).
An unsupervised person re-identification approach based on cross-view distribution alignment SCIE
Journal article | 2021, 15 (11), 2693-2704 | IET IMAGE PROCESSING
WoS CC Cited Count: 4

Abstract :

Unsupervised clustering is a popular solution for unsupervised person re-identification (re-ID). However, due to the influence of cross-view differences, the resulting cluster labels are not accurate. To solve this problem, an unsupervised re-ID method based on cross-view distribution alignment (CV-DA) is proposed to reduce the influence of cross-view differences in the unsupervised setting. Specifically, building on a popular unsupervised clustering approach, the density-based clustering algorithm DBSCAN is used to obtain pseudo labels. By calculating the similarity scores of images in the target domain and the source domain, the similarity distributions of different camera views are obtained and aligned under a pseudo-label consistency constraint. The cross-view distribution alignment constraint is used to guide the clustering process towards more reliable pseudo labels. Comprehensive comparative experiments are conducted on two public datasets, i.e. Market-1501 and DukeMTMC-reID. The comparative results show that the proposed method outperforms several state-of-the-art approaches, with mAP reaching 52.6% and rank-1 accuracy 71.1%. To prove the effectiveness of the proposed CV-DA, the proposed constraint is added to two advanced re-ID methods; the experimental results demonstrate that mAP and rank-1 increase by approximately 0.5-2% after using the cross-view distribution alignment constraint compared with the associated original methods without CV-DA.
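The pseudo-labelling step described above is commonly implemented by clustering extracted re-ID features with DBSCAN and treating cluster indices as pseudo identity labels; the sketch below uses scikit-learn, with eps and min_samples values that are illustrative assumptions rather than the paper's settings.

```python
# DBSCAN pseudo-labelling sketch for unsupervised person re-ID.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

features = np.random.rand(500, 2048)     # stand-in for CNN features of 500 person images
features = normalize(features)           # L2-normalize so Euclidean distance tracks cosine similarity

labels = DBSCAN(eps=0.6, min_samples=4, metric="euclidean").fit_predict(features)

num_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # -1 marks unclustered noise points
print(f"{num_clusters} pseudo identities, {(labels == -1).sum()} unclustered samples")
```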

Cite:


GB/T 7714 Jia, Xibin, Wang, Xing, Mi, Qing. An unsupervised person re-identification approach based on cross-view distribution alignment [J]. | IET IMAGE PROCESSING, 2021, 15 (11): 2693-2704.
MLA Jia, Xibin et al. "An unsupervised person re-identification approach based on cross-view distribution alignment". | IET IMAGE PROCESSING 15.11 (2021): 2693-2704.
APA Jia, Xibin, Wang, Xing, Mi, Qing. An unsupervised person re-identification approach based on cross-view distribution alignment. | IET IMAGE PROCESSING, 2021, 15 (11), 2693-2704.
The effectiveness of data augmentation in code readability classification SCIE
Journal article | 2021, 129 | INFORMATION AND SOFTWARE TECHNOLOGY
WoS CC Cited Count: 16

Abstract :

Context: Training deep learning models for code readability classification requires large datasets of quality pre-labeled data. However, it is almost always time-consuming and expensive to acquire readability data with manual labels. Objective: We thus propose to introduce data augmentation approaches to artificially increase the size of the training set, in order to reduce the risk of overfitting caused by the lack of readability data and, ultimately, to further improve classification accuracy. Method: We create transformed versions of code snippets by manipulating the original data in aspects such as comments, indentation, and the names of classes, methods, and variables, based on domain-specific knowledge. In addition to these basic transformations, we also explore the use of Auxiliary Classifier GANs to produce synthetic data. Results: To evaluate the proposed approach, we conduct a set of experiments. The results show that the classification performance of deep neural networks can be significantly improved when they are trained on the augmented corpus, achieving a state-of-the-art accuracy of 87.38%. Conclusion: We consider the findings of this study as primary evidence of the effectiveness of data augmentation in the field of code readability classification.
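The kind of label-preserving transformations mentioned in the Method section (comment removal, re-indentation, identifier renaming) can be sketched as simple text rewrites, illustrated here on a Python snippet. The helper names are illustrative assumptions; the paper's transformations target its own corpus and additionally use AC-GAN synthesis, which is not shown.

```python
# Simple code-readability augmentation transforms (illustrative sketch).
import re

def strip_comments(code: str) -> str:
    # Drop full-line comments.
    return "\n".join(line for line in code.splitlines() if not line.strip().startswith("#"))

def reindent(code: str, width: int = 2) -> str:
    # Rewrite 4-space indentation with a different indent width.
    out = []
    for line in code.splitlines():
        stripped = line.lstrip(" ")
        level = (len(line) - len(stripped)) // 4
        out.append(" " * (width * level) + stripped)
    return "\n".join(out)

def rename_identifier(code: str, old: str, new: str) -> str:
    # Whole-word rename of a variable/method name.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

snippet = "def total(values):\n    # sum all values\n    result = 0\n    for v in values:\n        result += v\n    return result\n"
augmented = rename_identifier(reindent(strip_comments(snippet)), "result", "acc")
print(augmented)
```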

Keyword :

Generative adversarial network; Deep learning; Data augmentation; Empirical software engineering; Code readability classification

Cite:


GB/T 7714 Mi, Qing, Xiao, Yan, Cai, Zhi et al. The effectiveness of data augmentation in code readability classification [J]. | INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 129.
MLA Mi, Qing et al. "The effectiveness of data augmentation in code readability classification". | INFORMATION AND SOFTWARE TECHNOLOGY 129 (2021).
APA Mi, Qing, Xiao, Yan, Cai, Zhi, Jia, Xibin. The effectiveness of data augmentation in code readability classification. | INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 129.
金字塔原型对齐的轻量级小样本语义分割网络 (Lightweight few-shot semantic segmentation network based on pyramid prototype alignment) CSCD
Journal article | 2021, 47 (05), 455-462, 519 | 北京工业大学学报 (Journal of Beijing University of Technology)

Abstract :

Few-shot image semantic segmentation is a challenging problem in computer vision: the goal is to predict segmentation masks for images of unseen classes using only one or a few existing images with dense segmentation annotations. For this task, a lightweight few-shot image semantic segmentation network based on pyramid prototype alignment is proposed. First, building on the depthwise separable convolutions and inverted residual structure of MobileNetV2, the network extracts features through a pyramid pooling module, preserving both high- and low-dimensional information and obtaining features at different scales. Meanwhile, by performing mutual alignment between the support-set prototypes and the query set, the network can learn more information from the support set and make full use of the support-set information as feedback. Extensive experiments on the PASCAL-5^i dataset show that the mean performance of the proposed network on 1-way 1-shot and 1-way 5-shot ...
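The pyramid pooling step described above can be sketched as follows: backbone features are pooled to several grid sizes, projected, upsampled, and concatenated with the input. This is a minimal PSPNet-style illustration with assumed bin sizes and channel widths, not the paper's exact lightweight design.

```python
# Pyramid pooling module sketch: multi-scale pooled context appended to backbone features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, channels, bins=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(channels, channels // len(bins), kernel_size=1))
            for b in bins
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        pooled = [F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
                  for branch in self.branches]
        return torch.cat([x] + pooled, dim=1)    # original features plus multi-scale context

if __name__ == "__main__":
    feats = torch.randn(1, 320, 32, 32)          # e.g. MobileNetV2 backbone features
    print(PyramidPooling(320)(feats).shape)      # torch.Size([1, 640, 32, 32])
```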

Keyword :

Convolutional neural network; Multi-scale; Few-shot semantic segmentation; Prototype alignment regularization; Pyramid pooling; Lightweight network

Cite:


GB/T 7714 贾熹滨, 李佳. 金字塔原型对齐的轻量级小样本语义分割网络 [J]. | 北京工业大学学报, 2021, 47 (05): 455-462, 519.
MLA 贾熹滨 et al. "金字塔原型对齐的轻量级小样本语义分割网络". | 北京工业大学学报 47.05 (2021): 455-462, 519.
APA 贾熹滨, 李佳. 金字塔原型对齐的轻量级小样本语义分割网络. | 北京工业大学学报, 2021, 47 (05), 455-462, 519.
金字塔原型对齐的轻量级小样本语义分割网络 (Lightweight few-shot semantic segmentation network based on pyramid prototype alignment) CQVIP
Journal article | 2021, 47 (5), 455-462, 519 | 北京工业大学学报 (Journal of Beijing University of Technology)

Abstract :

金字塔原型对齐的轻量级小样本语义分割网络 (Lightweight few-shot semantic segmentation network based on pyramid prototype alignment)

Keyword :

Prototype alignment regularization; Convolutional neural network; Lightweight network; Few-shot semantic segmentation; Multi-scale; Pyramid pooling

Cite:


GB/T 7714 贾熹滨, 李佳. 金字塔原型对齐的轻量级小样本语义分割网络 [J]. | 北京工业大学学报, 2021, 47 (5): 455-462, 519.
MLA 贾熹滨 et al. "金字塔原型对齐的轻量级小样本语义分割网络". | 北京工业大学学报 47.5 (2021): 455-462, 519.
APA 贾熹滨, 李佳. 金字塔原型对齐的轻量级小样本语义分割网络. | 北京工业大学学报, 2021, 47 (5), 455-462, 519.