Query:
Scholar name: 贾熹滨
Abstract :
Accurate medical image segmentation is the foundation of clinical imaging diagnosis and 3D image reconstruction. However, medical images often have low contrast between target objects, are greatly affected by organ movement, and suffer from limited annotated samples. To address these issues, we propose a few-shot medical image segmentation network with boundary category correction, named the Boundary Category Correction Network (BCC-Net). Within the overall medical few-shot learning framework, we first propose the Prior Mask Generation Module (PRGM) and the Multi-scale Feature Fusion Module (MFFM). PRGM better localizes the query target, while MFFM adaptively fuses the support-set prototype, the prior mask, and the query-set features at different scales to solve the problem of spatial inconsistency between the support set and the query set. To improve segmentation accuracy, we construct an additional base-learning branch, which, together with the meta-learning branch, forms the Boundary Category Correction Framework (BCCF). It corrects the boundary category of the meta-learning branch's prediction mask by predicting the regions of the base categories in the query set. Experiments are conducted on the mainstream ABD-MR and ABD-CT public medical image segmentation datasets. Comparative analysis and ablation experiments are performed against a variety of existing state-of-the-art few-shot segmentation methods. The results demonstrate the effectiveness of the proposed method, which significantly enhances segmentation performance on medical images.
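The prior-mask idea described in the abstract can be illustrated with a generic few-shot segmentation building block: a support prototype obtained by masked average pooling, and a cosine-similarity prior over query features. This is a hedged sketch of the general technique, not BCC-Net's code; all names and shapes are illustrative assumptions.

```python
import numpy as np

def masked_average_pooling(feat, mask):
    """Support prototype: average the C-dim features over foreground pixels.
    feat: (C, H, W) support feature map; mask: (H, W) binary foreground mask."""
    denom = mask.sum() + 1e-8
    return (feat * mask[None]).sum(axis=(1, 2)) / denom  # (C,)

def prior_mask(query_feat, prototype):
    """Cosine similarity between each query pixel and the prototype,
    rescaled to [0, 1] as a coarse localization prior."""
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, -1)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    p = prototype / (np.linalg.norm(prototype) + 1e-8)
    sim = (p @ q).reshape(H, W)   # values in [-1, 1]
    return (sim + 1.0) / 2.0      # values in [0, 1]

rng = np.random.default_rng(0)
support = rng.standard_normal((16, 8, 8))
mask = np.zeros((8, 8)); mask[2:6, 2:6] = 1.0
proto = masked_average_pooling(support, mask)
prior = prior_mask(rng.standard_normal((16, 8, 8)), proto)
assert proto.shape == (16,) and prior.shape == (8, 8)
```

The prior map would then be fed, together with the prototype and multi-scale query features, into a fusion module such as the paper's MFFM.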
Keyword :
Prior Mask; Medical image segmentation; Feature Fusion; Boundary Category Correction
Cite:
GB/T 7714 | Xu, Zeyu, Jia, Xibin, Guo, Xiong, et al. A Few-Shot Medical Image Segmentation Network with Boundary Category Correction [J]. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434: 371-382. |
MLA | Xu, Zeyu, et al. "A Few-Shot Medical Image Segmentation Network with Boundary Category Correction". | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X 14434 (2024): 371-382. |
APA | Xu, Zeyu, Jia, Xibin, Guo, Xiong, Wang, Luo, Zheng, Yiming. A Few-Shot Medical Image Segmentation Network with Boundary Category Correction. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434, 371-382. |
Abstract :
Studies on graph contrastive learning, an effective form of self-supervision, have achieved excellent experimental performance. Most existing methods generate two augmented views and then perform feature learning on them by maximizing semantic consistency. Nevertheless, it is still challenging to generate optimal views that facilitate graph construction revealing the essential association relations among nodes through graph contrastive learning. Considering that extremely high mutual information between views tends to have a negative effect on model training, a good choice is to add constraints to the graph data augmentation process. This paper proposes two constraint principles, low dissimilarity priority (LDP) and mutual exclusion (ME), to mitigate the mutual information between views and compress its redundant parts. The LDP principle aims to reduce the mutual information between views at the global scale, while the ME principle reduces it at the local scale. The two are opposite in approach and each is appropriate in different situations. Without loss of generality, the two proposed principles are applied to two well-performing graph contrastive methods, i.e. GraphCL and GCA, and experimental results on 20 public benchmark datasets show that the models aided by the two proposed constraint principles achieve higher recognition accuracy.
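The ME principle can be illustrated with a toy edge-dropping augmenter in which the two views never drop the same edge, so every edge survives in at least one view and the views share less redundant structure. This is a hedged sketch of the general idea, not the authors' implementation; `mutually_exclusive_views` and its parameters are hypothetical.

```python
import numpy as np

def mutually_exclusive_views(edges, drop_ratio=0.2, seed=0):
    """Generate two augmented edge lists under a mutual-exclusion constraint:
    the edge sets dropped from view 1 and view 2 are disjoint, reducing the
    (local) mutual information shared by the two views. edges: (E, 2) int array."""
    rng = np.random.default_rng(seed)
    E = len(edges)
    k = int(E * drop_ratio)
    perm = rng.permutation(E)
    drop1, drop2 = set(perm[:k]), set(perm[k:2 * k])  # disjoint by construction
    view1 = edges[[i for i in range(E) if i not in drop1]]
    view2 = edges[[i for i in range(E) if i not in drop2]]
    return view1, view2

edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4], [4, 0],
                  [0, 2], [1, 3], [2, 4], [3, 0], [4, 1]])
v1, v2 = mutually_exclusive_views(edges, drop_ratio=0.2)
assert len(v1) == len(v2) == 8  # 2 of 10 edges dropped per view
```

Because the dropped sets are disjoint, the union of the two views always covers the original edge set, which is the property the ME constraint exploits.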
Keyword :
Augmentation principle; Self-supervised learning; Graph data augmentation; Contrastive learning; Graph representation learning
Cite:
GB/T 7714 | Xu, Shaowu, Wang, Luo, Jia, Xibin. Graph Contrastive Learning with Constrained Graph Data Augmentation [J]. | NEURAL PROCESSING LETTERS, 2023, 55 (8): 10705-10726. |
MLA | Xu, Shaowu, et al. "Graph Contrastive Learning with Constrained Graph Data Augmentation". | NEURAL PROCESSING LETTERS 55.8 (2023): 10705-10726. |
APA | Xu, Shaowu, Wang, Luo, Jia, Xibin. Graph Contrastive Learning with Constrained Graph Data Augmentation. | NEURAL PROCESSING LETTERS, 2023, 55 (8), 10705-10726. |
Abstract :
Facial Action Unit (AU) recognition is a challenging problem, where subtle muscle movements bring diverse AU representations. Recently, AU relations have been utilized to assist AU recognition and improve the understanding of AUs. Nevertheless, simply adopting regular Bayesian networks or the relations between AUs and emotions is not enough for modelling complex AU relations. To provide a quantitative measurement of AU relations using knowledge from FACS, we propose an AU relation quantization autoencoder. Moreover, to cope with the diversity of AUs generated from individual representation differences and other environmental impacts, we propose a dual-channel graph convolutional neural network (DGCN) that captures both inherent and random AU relations. The first channel is a FACS-based relation graph convolution channel (FACS-GCN) embedding prior knowledge of FACS; it adjusts the network to the inherent AU dependency relations. The second channel is a data-learning-based relation graph convolution channel (DLR-GCN) based on metric learning; it provides robustness to individual differences and environmental changes. Comprehensive experiments have been conducted on three public datasets: CK+, RAF-AU, and DISFA. The experimental results demonstrate that our proposed DGCN extracts the hidden relations well, thereby achieving strong performance in AU recognition.
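The dual-channel layout can be sketched as two vanilla GCN layers, one driven by a fixed prior-knowledge adjacency and one by a data-learned adjacency, with their outputs fused. The averaging fusion below is an assumption for illustration; the paper's FACS-GCN/DLR-GCN design is more elaborate.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2}(A + I)D^{-1/2} used by vanilla GCNs."""
    A_hat = A + np.eye(len(A))
    d_inv = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
    return d_inv @ A_hat @ d_inv

def dual_channel_gcn(X, A_prior, A_learned, W1, W2):
    """One ReLU GCN layer per channel; fuse by averaging (an assumption)."""
    h1 = np.maximum(normalize_adj(A_prior) @ X @ W1, 0.0)
    h2 = np.maximum(normalize_adj(A_learned) @ X @ W2, 0.0)
    return 0.5 * (h1 + h2)

rng = np.random.default_rng(1)
n_au, d_in, d_out = 5, 8, 4
X = rng.standard_normal((n_au, d_in))                 # one node per AU
A_prior = (rng.random((n_au, n_au)) > 0.5).astype(float)
A_prior = np.maximum(A_prior, A_prior.T)              # symmetric prior graph
A_learned = (rng.random((n_au, n_au)) > 0.5).astype(float)
A_learned = np.maximum(A_learned, A_learned.T)        # symmetric learned graph
out = dual_channel_gcn(X, A_prior, A_learned,
                       rng.standard_normal((d_in, d_out)),
                       rng.standard_normal((d_in, d_out)))
assert out.shape == (n_au, d_out)
```

In the paper, `A_prior` would encode FACS co-occurrence knowledge and `A_learned` would come from metric learning over the data.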
Keyword :
Metric learning; GCN; FACS; Dual-channel graph convolutional neural network; AU relation
Cite:
GB/T 7714 | Jia, Xibin, Xu, Shaowu, Zhou, Yuhan, et al. A novel dual-channel graph convolutional neural network for facial action unit recognition [J]. | PATTERN RECOGNITION LETTERS, 2023, 166: 61-68. |
MLA | Jia, Xibin, et al. "A novel dual-channel graph convolutional neural network for facial action unit recognition". | PATTERN RECOGNITION LETTERS 166 (2023): 61-68. |
APA | Jia, Xibin, Xu, Shaowu, Zhou, Yuhan, Wang, Luo, Li, Weiting. A novel dual-channel graph convolutional neural network for facial action unit recognition. | PATTERN RECOGNITION LETTERS, 2023, 166, 61-68. |
Abstract :
Accurate liver segment segmentation based on radiological images is indispensable for the preoperative analysis of liver tumor resection surgery. However, most existing segmentation methods cannot be used directly for this task because of the challenge of exact edge prediction, where tiny and slender vessels serve as the clinical segmentation criterion. To address this problem, we propose a novel deep-learning-based segmentation model, called the Boundary-Aware Dual Attention Liver Segment Segmentation Model (BADA). This model improves the segmentation accuracy of liver segments by enhancing the edges, including the vessels that serve as segment boundaries. In our model, a dual gated attention mechanism is proposed, which is composed of a spatial attention module and a semantic attention module. The spatial attention module enhances the weights of key edge regions by attending to salient intensity changes, while the semantic attention module amplifies the contribution of filters that extract more discriminative feature information by weighting the significant convolution channels. We also build a liver segment dataset of 59 clinical cases with portal venous phase dynamic contrast-enhanced MRI (Magnetic Resonance Imaging), annotated by several professional radiologists. Compared with several state-of-the-art and baseline segmentation methods, we achieve the best results on this clinical liver segment segmentation dataset, with Mean Dice, Mean Sensitivity, and Mean Positive Predictive Value reaching 89.01%, 87.71%, and 90.67%, respectively.
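The two gates can be approximated by chaining a spatial gate (pixel-wise, from the channel-pooled map) and a channel gate (squeeze-and-excite style, from global average pooling). This is a simplified numpy stand-in, not BADA's implementation; the learnable projections are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Per-pixel gate from the channel-pooled map; emphasizes regions
    with salient intensity changes (e.g. vessel edges). feat: (C, H, W)."""
    pooled = feat.mean(axis=0)            # (H, W)
    return feat * sigmoid(pooled)[None]   # broadcast gate over channels

def channel_attention(feat):
    """Per-channel gate from global average pooling; reweights filters
    that carry more discriminative responses."""
    squeeze = feat.mean(axis=(1, 2))      # (C,)
    return feat * sigmoid(squeeze)[:, None, None]

def dual_gated_attention(feat):
    return channel_attention(spatial_attention(feat))

x = np.random.default_rng(2).standard_normal((8, 16, 16))
y = dual_gated_attention(x)
assert y.shape == x.shape
```

Since both gates lie in (0, 1), the module attenuates rather than amplifies raw activations; a real network would learn projections before each gate.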
Keyword :
attention mechanism; boundary-aware; liver segment; segmentation model
Cite:
GB/T 7714 | Jia, Xibin, Qian, Chen, Yang, Zhenghan, et al. Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model [J]. | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2022, 16 (1): 16-37. |
MLA | Jia, Xibin, et al. "Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model". | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS 16.1 (2022): 16-37. |
APA | Jia, Xibin, Qian, Chen, Yang, Zhenghan, Xu, Hui, Han, Xianjun, Rene, Hao, et al. Boundary-Aware Dual Attention Guided Liver Segment Segmentation Model. | KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2022, 16 (1), 16-37. |
Abstract :
Unsupervised clustering is a popular solution for unsupervised person re-identification (re-ID). However, due to the influence of cross-view differences, the clustering labels are not accurate. To solve this problem, an unsupervised re-ID method based on cross-view distribution alignment (CV-DA) is proposed to reduce the influence of cross-view differences. Specifically, building on a popular unsupervised clustering approach, density-based clustering (DBSCAN) is used to obtain pseudo labels. By calculating the similarity scores of images in the target domain and the source domain, the similarity distribution of each camera view is obtained and aligned under the consistency constraint of the pseudo labels. The cross-view distribution alignment constraint then guides the clustering process to obtain more reliable pseudo labels. Comprehensive comparative experiments are conducted on two public datasets, i.e. Market-1501 and DukeMTMC-reID. The results show that the proposed method outperforms several state-of-the-art approaches, with mAP reaching 52.6% and rank-1 reaching 71.1%. To further demonstrate the effectiveness of CV-DA, the proposed constraint is added to two advanced re-ID methods; their mAP and rank-1 increase by roughly 0.5%-2% compared with the associated original methods without CV-DA.
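The alignment constraint can be sketched as moment matching between two camera views' similarity-score distributions against a shared gallery. The loss below is a hypothetical stand-in for the paper's constraint, kept deliberately simple.

```python
import numpy as np

def view_similarity_distribution(query_feats, gallery_feats):
    """Cosine similarity scores of one camera view's images against a
    reference gallery; the flattened scores act as that view's distribution."""
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return (q @ g.T).ravel()

def distribution_alignment_loss(scores_a, scores_b):
    """First/second moment matching between two views' score distributions
    (a simplified stand-in for the CV-DA constraint)."""
    return ((scores_a.mean() - scores_b.mean()) ** 2
            + (scores_a.std() - scores_b.std()) ** 2)

rng = np.random.default_rng(3)
gallery = rng.standard_normal((20, 32))
cam_a = rng.standard_normal((10, 32))   # features from camera view A
cam_b = rng.standard_normal((12, 32))   # features from camera view B
loss = distribution_alignment_loss(
    view_similarity_distribution(cam_a, gallery),
    view_similarity_distribution(cam_b, gallery))
assert loss >= 0.0
```

Minimizing such a loss during training would push the two views' score distributions together, which is the intuition behind using the alignment to stabilize DBSCAN pseudo labels.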
Cite:
GB/T 7714 | Jia, Xibin, Wang, Xing, Mi, Qing. An unsupervised person re-identification approach based on cross-view distribution alignment [J]. | IET IMAGE PROCESSING, 2021, 15 (11): 2693-2704. |
MLA | Jia, Xibin, et al. "An unsupervised person re-identification approach based on cross-view distribution alignment". | IET IMAGE PROCESSING 15.11 (2021): 2693-2704. |
APA | Jia, Xibin, Wang, Xing, Mi, Qing. An unsupervised person re-identification approach based on cross-view distribution alignment. | IET IMAGE PROCESSING, 2021, 15 (11), 2693-2704. |
Abstract :
Non-invasive quantitative analysis of lesions by combining radiological imaging and artificial intelligence is an important research direction in smart healthcare. For the non-invasive quantitative estimation of the differentiation degree of hepatocellular carcinoma (HCC), and drawing on radiologists' clinical reading experience, we propose a self-attention-guided multi-sequence fusion model for the non-invasive discrimination of the histological differentiation degree of HCC. Taking multiple sequences of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) as input, the model learns the weights of each temporal sequence and of each sequence's multi-layer scan slices for the differentiation discrimination task, weighting the temporal and spatial features with good discriminative power to improve discrimination performance. ...
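Sequence-level self-attention fusion can be sketched as a softmax-weighted sum over per-sequence feature vectors. The scoring vector here is a hypothetical stand-in for the learned attention; the paper's model also weights slices within each sequence.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(seq_feats, w):
    """Weight each MRI sequence's feature vector by an attention score and
    fuse by weighted sum. seq_feats: (S, D); w: (D,) scoring vector."""
    scores = seq_feats @ w                 # (S,) one score per sequence
    alpha = softmax(scores)                # attention weights over sequences
    fused = (alpha[:, None] * seq_feats).sum(axis=0)
    return alpha, fused

rng = np.random.default_rng(4)
feats = rng.standard_normal((4, 16))       # e.g. 4 DCE-MRI sequences
alpha, fused = attention_fuse(feats, rng.standard_normal(16))
assert np.isclose(alpha.sum(), 1.0) and fused.shape == (16,)
```

The learned `alpha` makes the contribution of each sequence (e.g. portal venous vs. arterial phase) explicit, which matches the abstract's goal of weighting discriminative temporal features.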
Keyword :
self-attention mechanism; hepatocellular carcinoma grading; computer-aided diagnosis; dynamic contrast-enhanced magnetic resonance imaging; multi-sequence fusion
Cite:
GB/T 7714 | 贾熹滨, 孙政, 杨大为, et al. 自注意力指导的多序列融合肝细胞癌分化判别模型 [J]. | 工程科学学报, 2021, 43 (09): 1149-1156. |
MLA | 贾熹滨, et al. "自注意力指导的多序列融合肝细胞癌分化判别模型". | 工程科学学报 43.09 (2021): 1149-1156. |
APA | 贾熹滨, 孙政, 杨大为, 杨正汉. 自注意力指导的多序列融合肝细胞癌分化判别模型. | 工程科学学报, 2021, 43 (09), 1149-1156. |
Abstract :
Few-shot image semantic segmentation is a challenging problem in computer vision, whose goal is to predict segmentation masks for images of unseen classes using one or a few images with dense segmentation annotations. For this task, we propose a lightweight few-shot image semantic segmentation network based on pyramid prototype alignment. First, built on the depthwise separable convolutions and inverted residual structure of MobileNetV2, the network extracts features through a pyramid pooling module, preserving both high- and low-dimensional information and obtaining features at different scales. Meanwhile, by mutually aligning the support-set prototypes and the query set, the network learns more from the support set and makes full use of its feedback. Extensive experimental results on the PASCAL-5^i dataset show that the mean of the proposed network on 1-way 1-shot and 1-way 5-shot ...
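Pyramid pooling can be sketched by average-pooling a feature map to several grid sizes and tiling each pooled map back to full resolution, concatenated along channels. This toy numpy version assumes H and W are divisible by every bin size and is only a minimal stand-in for the module described above.

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Average-pool a (C, H, W) map to each bxb grid, tile back to HxW,
    and concatenate with the original map along channels."""
    C, H, W = feat.shape
    outs = [feat]
    for b in bins:
        # block-average to a (C, b, b) grid
        pooled = feat.reshape(C, b, H // b, b, W // b).mean(axis=(2, 4))
        # tile each grid cell back to its block
        outs.append(np.repeat(np.repeat(pooled, H // b, axis=1), W // b, axis=2))
    return np.concatenate(outs, axis=0)    # (C * (1 + len(bins)), H, W)

x = np.random.default_rng(5).standard_normal((8, 8, 8))
y = pyramid_pool(x)
assert y.shape == (32, 8, 8)
```

The concatenated output carries both local detail (the original map) and context at several scales, which is what the prototype alignment then operates on.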
Keyword :
convolutional neural network; multi-scale; few-shot semantic segmentation; prototype alignment regularization; pyramid pooling; lightweight network
Cite:
GB/T 7714 | 贾熹滨, 李佳. 金字塔原型对齐的轻量级小样本语义分割网络 [J]. | 北京工业大学学报, 2021, 47 (05): 455-462,519. |
MLA | 贾熹滨, et al. "金字塔原型对齐的轻量级小样本语义分割网络". | 北京工业大学学报 47.05 (2021): 455-462,519. |
APA | 贾熹滨, 李佳. 金字塔原型对齐的轻量级小样本语义分割网络. | 北京工业大学学报, 2021, 47 (05), 455-462,519. |
Abstract :
One of the most common methods for diagnosing coronary artery disease is the coronary artery calcium score CT. However, the current diagnostic method using the coronary artery calcium score CT requires considerable time, because the radiologist must manually check the CT images one by one and verify the exact range. In this paper, three CNN models are applied to 1200 normal cardiovascular CT images and 1200 CT images in which calcium is present in the cardiovascular system. We conduct the experiments by classifying the CT image data into the original coronary artery calcium score CT images containing the entire rib cage, cardiac segmented images that cut out only the heart region, and cardiac cropped images created by segmenting the cardiac images into nine sub-parts and enlarging them. In the tests to determine the presence of calcium in a given CT image using the Inception-ResNet v2, VGG, and ResNet-50 models, the highest accuracy of 98.52% was obtained when the cardiac cropped image data was applied with the ResNet-50 model. We therefore expect that further research will make it possible both to detect the simple presence of calcium and to automate the calcium score analysis for each coronary artery calcium score CT.
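The nine-part cardiac cropping can be sketched as a 3x3 grid split of a slice; in the paper each crop is then enlarged before classification. Dimensions here are illustrative, not the study's actual image sizes.

```python
import numpy as np

def nine_crops(img, grid=3):
    """Split a square CT slice into a grid x grid set of sub-images,
    mirroring the nine-part cardiac cropping. Assumes the side lengths
    are divisible by `grid`."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    return [img[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(grid) for c in range(grid)]

img = np.arange(81, dtype=float).reshape(9, 9)   # toy 9x9 "slice"
crops = nine_crops(img)
assert len(crops) == 9 and all(c.shape == (3, 3) for c in crops)
```

Each crop would then be upscaled to the classifier's input resolution, which is why the cropped variant exposes finer calcium detail than the full rib-cage image.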
Keyword :
VGG; calcium detection; inception resnet V2; resnet-50; coronary artery calcium score CT; deep learning; image classification
Cite:
GB/T 7714 | Lee, Sungjin, Rim, Beanbonyka, Jou, Sung-Shick, et al. Deep-Learning-Based Coronary Artery Calcium Detection from CT Image [J]. | SENSORS, 2021, 21 (21). |
MLA | Lee, Sungjin, et al. "Deep-Learning-Based Coronary Artery Calcium Detection from CT Image". | SENSORS 21.21 (2021). |
APA | Lee, Sungjin, Rim, Beanbonyka, Jou, Sung-Shick, Gil, Hyo-Wook, Jia, Xibin, Lee, Ahyoung, et al. Deep-Learning-Based Coronary Artery Calcium Detection from CT Image. | SENSORS, 2021, 21 (21). |
Abstract :
Context: Training deep learning models for code readability classification requires large datasets of quality pre-labeled data. However, it is almost always time-consuming and expensive to acquire readability data with manual labels. Objective: We thus propose to introduce data augmentation approaches to artificially increase the size of the training set, with the goal of reducing the risk of overfitting caused by the lack of readability data and further improving classification accuracy. Method: We create transformed versions of code snippets by manipulating the original data in aspects such as comments, indentation, and the names of classes/methods/variables, based on domain-specific knowledge. In addition to these basic transformations, we also explore the use of Auxiliary Classifier GANs to produce synthetic data. Results: To evaluate the proposed approach, we conduct a set of experiments. The results show that the classification performance of deep neural networks can be significantly improved when they are trained on the augmented corpus, achieving a state-of-the-art accuracy of 87.38%. Conclusion: We consider the findings of this study as primary evidence of the effectiveness of data augmentation in the field of code readability classification.
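Two of the surface transformations the Method section mentions (comment removal and identifier renaming) can be sketched on a toy snippet. The helper names are hypothetical, not from the paper's tooling, and real augmenters would use a proper parser rather than regexes.

```python
import re

def strip_comments(snippet: str) -> str:
    """Remove end-of-line '#' comments - one of the surface transformations
    (comments, indentation, identifier names) used to grow the corpus."""
    return "\n".join(re.sub(r"\s*#.*$", "", line) for line in snippet.splitlines())

def rename_identifier(snippet: str, old: str, new: str) -> str:
    """Rename a variable/method with word-boundary matching so substrings
    inside other identifiers are left untouched."""
    return re.sub(rf"\b{re.escape(old)}\b", new, snippet)

src = "total = 0  # accumulator\nfor n in nums:\n    total += n  # add"
aug = rename_identifier(strip_comments(src), "total", "acc")
assert aug == "acc = 0\nfor n in nums:\n    acc += n"
```

Each transformed snippet keeps the original readability label, so the augmented corpus grows without extra manual annotation.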
Keyword :
Generative adversarial network; Deep learning; Data augmentation; Empirical software engineering; Code readability classification
Cite:
GB/T 7714 | Mi, Qing, Xiao, Yan, Cai, Zhi, et al. The effectiveness of data augmentation in code readability classification [J]. | INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 129. |
MLA | Mi, Qing, et al. "The effectiveness of data augmentation in code readability classification". | INFORMATION AND SOFTWARE TECHNOLOGY 129 (2021). |
APA | Mi, Qing, Xiao, Yan, Cai, Zhi, Jia, Xibin. The effectiveness of data augmentation in code readability classification. | INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 129. |