Query:
Scholar name: 段立娟
Abstract :
Accurately detecting surface defects is crucial for maintaining the quality of steel products. The existing methods often struggle with identifying small defects in complex scenes. To overcome this limitation, we propose a high-dimensional decoupled frequency attention network (HDFA-Net), which features a novel HDFA module. This module considers frequency domain feature information at both the channel and spatial levels and uniquely decouples feature representations into low-frequency components and high-frequency components, conveying global contextual information and highlighting local details, respectively. This innovative approach enables the network to apply distinct attention mechanisms to each frequency domain, significantly enhancing the ability to detect small defects. The HDFA-Net architecture comprises three main subnetworks: a backbone for feature extraction, a neck for multiscale feature interaction, and a head for precise defect localization. The experimental results demonstrate that HDFA-Net outperforms the state-of-the-art defect detection methods, highlighting the effectiveness of the HDFA module in improving the detection accuracy.
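As a rough illustration of the frequency-decoupling idea described in the abstract, the sketch below splits a feature map into a pooled low-frequency branch and a residual high-frequency branch and applies separate attention to each. It is a minimal, assumption-based sketch, not the authors' HDFA module: the pooling-based split, the channel/spatial attention choices, and all layer sizes are illustrative only.

```python
# Minimal sketch of a frequency-decoupled attention block (assumed design,
# not the authors' exact HDFA module): a pooling-based low-pass branch carries
# global context, the residual high-pass branch carries local detail, and each
# branch gets its own attention before the two are recombined.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyDecoupledAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention for the low-frequency (global-context) branch.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention for the high-frequency (local-detail) branch.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-pass: blur by pooling, then upsample back; high-pass: the residual.
        low = F.interpolate(F.avg_pool2d(x, 2), size=x.shape[-2:],
                            mode="bilinear", align_corners=False)
        high = x - low
        low = low * self.channel_att(low)     # emphasize informative channels globally
        high = high * self.spatial_att(high)  # emphasize defect-like local positions
        return x + low + high                 # residual fusion of both branches

if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(FrequencyDecoupledAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```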
Keyword :
Defect detection; attention; Steel surface; High-dimensional decoupled frequency
Cite:
GB/T 7714 | Liang, Fangfang , Wang, Zhaoyang , Ma, Wei et al. HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection [J]. | MEASUREMENT , 2025 , 242 . |
MLA | Liang, Fangfang et al. "HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection" . | MEASUREMENT 242 (2025) . |
APA | Liang, Fangfang , Wang, Zhaoyang , Ma, Wei , Liu, Bo , En, Qing , Wang, Dong et al. HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection . | MEASUREMENT , 2025 , 242 . |
Abstract :
Few-shot object detection involves detecting novel objects with only a few training samples, but so few samples can hardly cover the bias of a new class in the deep model. To address this issue, we use self-supervision to expand sample coverage and provide more observation angles for new classes. In this paper, we propose a multi-task approach that combines self-supervision with few-shot learning to exploit the complementarity of the two domains. Specifically, self-supervision serves as an auxiliary task that improves the detection performance of the main few-shot learning task. Moreover, to make self-supervision better suited to few-shot object detection, we introduce a denoising module that expands the positive and negative samples and a team module for precise positioning. The denoising module expands the positive and negative samples and accelerates model convergence using contrastive denoising training, while the team module applies location constraints for precise localization, improving detection accuracy. Experiments on the PASCAL VOC and COCO datasets demonstrate the effectiveness of our method on the few-shot object detection task and achieve promising results, highlighting the potential of combining self-supervision with few-shot learning to improve object detection when annotated data is limited.
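The multi-task idea of pairing the detection objective with a self-supervised auxiliary loss can be sketched as below. This is a simplified, hypothetical example: the rotation-prediction auxiliary task, the linear heads, and the loss weighting stand in for the paper's actual detector, denoising module, and team module.

```python
# Minimal sketch of the multi-task idea (assumed, simplified): the main few-shot
# detection loss is combined with an auxiliary self-supervised loss (here a
# simple rotation-prediction task) computed on the same backbone features.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.det_head = nn.Linear(feat_dim, num_classes)  # stand-in for a detection head
        self.rot_head = nn.Linear(feat_dim, 4)            # auxiliary task: 0/90/180/270 deg

    def forward(self, feats):
        return self.det_head(feats), self.rot_head(feats)

def multitask_loss(det_logits, det_targets, rot_logits, rot_targets, aux_weight=0.5):
    # The auxiliary self-supervision is weighted so it assists rather than dominates.
    ce = nn.functional.cross_entropy
    return ce(det_logits, det_targets) + aux_weight * ce(rot_logits, rot_targets)

if __name__ == "__main__":
    head = MultiTaskHead(256, 21)
    feats = torch.randn(8, 256)
    det_logits, rot_logits = head(feats)
    loss = multitask_loss(det_logits, torch.randint(0, 21, (8,)),
                          rot_logits, torch.randint(0, 4, (8,)))
    print(loss.item())
```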
Keyword :
Few-shot object detection; End-to-End Detector; Self-supervised learning
Cite:
GB/T 7714 | Zhang, Guangyong , Duan, Lijuan , Wang, Wenjian et al. Multi-task Self-supervised Few-Shot Detection [J]. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII , 2024 , 14436 : 107-119 . |
MLA | Zhang, Guangyong et al. "Multi-task Self-supervised Few-Shot Detection" . | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII 14436 (2024) : 107-119 . |
APA | Zhang, Guangyong , Duan, Lijuan , Wang, Wenjian , Gong, Zhi , Ma, Bian . Multi-task Self-supervised Few-Shot Detection . | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII , 2024 , 14436 , 107-119 . |
Abstract :
Sleep staging plays a significant role in diagnosing sleep-related diseases and assessing sleep quality. Recent years have witnessed remarkable advances in deep learning for automatic sleep staging. However, the classification accuracy for certain stages remains unsatisfactory due to imbalanced sleep data, and the sleep stages are easily confused with one another, further lowering classification accuracy. To address these issues, we propose a multi-task, feature-reconstruction sleep staging method comprising a feature extraction module and a multi-task module. In the multi-task module, the main task branch encodes the features extracted by the feature extraction module for sleep staging, while the auxiliary task branch randomly masks the features before performing the same encoding. Subsequently, guided by the classification results of the main task, a dimensional reconstruction is employed to reconstruct the original features from the encoded features of confused sleep stages. The overall loss, combining the classification loss and the reconstruction loss, constrains the model. We evaluate the proposed model on three public datasets, and the results demonstrate that the method effectively addresses the low classification accuracy of easily confused sleep stages.
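A minimal sketch of the masked-feature reconstruction auxiliary task is given below. It assumes simple linear encoder/decoder layers and a fixed loss weighting, and it omits the paper's stage-confusion-guided reconstruction; it only illustrates how a classification loss and a reconstruction loss can jointly train a shared encoder.

```python
# Minimal sketch of the feature-masking / reconstruction idea (assumed,
# simplified): the main branch classifies the extracted features, while an
# auxiliary branch encodes a randomly masked copy and tries to reconstruct the
# original features; both losses train the shared encoder.
import torch
import torch.nn as nn

class ReconstructionStager(nn.Module):
    def __init__(self, feat_dim=128, num_stages=5, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_stages)
        self.decoder = nn.Linear(feat_dim, feat_dim)  # reconstructs original features

    def forward(self, feats, labels):
        logits = self.classifier(self.encoder(feats))           # main task
        mask = (torch.rand_like(feats) > self.mask_ratio).float()
        recon = self.decoder(self.encoder(feats * mask))        # auxiliary task
        cls_loss = nn.functional.cross_entropy(logits, labels)
        rec_loss = nn.functional.mse_loss(recon, feats)
        return logits, cls_loss + 0.5 * rec_loss                # combined objective

if __name__ == "__main__":
    model = ReconstructionStager()
    x, y = torch.randn(16, 128), torch.randint(0, 5, (16,))
    logits, loss = model(x, y)
    print(logits.shape, loss.item())
```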
Keyword :
Sleep stage classification; multi-task learning; feature reconstruction
Cite:
GB/T 7714 | Zhang, Chengyu , Duan, Lijuan , Ma, Bian et al. A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging [J]. | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 , 2024 : 60-64 . |
MLA | Zhang, Chengyu et al. "A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging" . | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 (2024) : 60-64 . |
APA | Zhang, Chengyu , Duan, Lijuan , Ma, Bian , Gong, Zhi . A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging . | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 , 2024 , 60-64 . |
Abstract :
This invention concerns a multi-channel EEG signal recognition method based on a graph attention network and sparse coding. The multi-channel EEG signals are first preprocessed to obtain a set of multi-channel EEG data samples. Each sample is then divided into five sub-frequency bands, and two feature extraction methods are used to construct five brain functional networks. The five networks are subsequently fused: their node features are concatenated to form the fused brain-functional node features, while the five connectivity features are averaged and then thresholded to remove invalid connections, yielding the fused brain-functional connectivity features. The fused network is passed through a graph attention network model to recover the true brain functional connectivity features, and an autoencoder reduces the dimensionality of and enhances the sparse connectivity features. Finally, the two kinds of features are fused and classified. The invention achieves the highest classification accuracy.
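The band-fusion step described above (concatenating node features across the five bands, averaging and thresholding the connectivity matrices) can be sketched as follows. The array shapes, channel count, and threshold value are assumptions for illustration.

```python
# Minimal sketch of the band-fusion step (assumed shapes): node features from
# the five band-specific networks are concatenated, while the five connectivity
# matrices are averaged and thresholded to drop weak/invalid connections.
import numpy as np

def fuse_band_networks(node_feats, conn_mats, threshold=0.2):
    """node_feats: list of 5 arrays (n_channels, d); conn_mats: list of 5 (n, n)."""
    fused_nodes = np.concatenate(node_feats, axis=1)        # (n_channels, 5*d)
    fused_conn = np.mean(np.stack(conn_mats), axis=0)       # element-wise average
    fused_conn[np.abs(fused_conn) < threshold] = 0.0        # remove weak connections
    return fused_nodes, fused_conn

if __name__ == "__main__":
    n, d = 62, 16   # e.g. 62 EEG channels, 16-dim node features (assumed)
    nodes = [np.random.rand(n, d) for _ in range(5)]
    conns = [np.random.rand(n, n) for _ in range(5)]
    fn, fc = fuse_band_networks(nodes, conns)
    print(fn.shape, fc.shape)  # (62, 80) (62, 62)
```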
Cite:
GB/T 7714 | 段立娟 , 邹鑫宇 , 乔元华 . 一种基于图注意力网络和稀疏编码的静息态多通道脑电信号识别方法 : CN202310152183.8[P]. | 2023-02-22 . |
MLA | 段立娟 et al. "一种基于图注意力网络和稀疏编码的静息态多通道脑电信号识别方法" : CN202310152183.8. | 2023-02-22 . |
APA | 段立娟 , 邹鑫宇 , 乔元华 . 一种基于图注意力网络和稀疏编码的静息态多通道脑电信号识别方法 : CN202310152183.8. | 2023-02-22 . |
Abstract :
This invention discloses a multimodal, multiscale sleep staging method that alleviates class confusion in the N1 stage. The raw sleep data are preprocessed to obtain sleep data samples. To address the scarcity of N1-stage data, an overlap-based data augmentation algorithm generates additional N1 samples, reducing the impact of the small N1 class on sleep staging. To address the underuse of the sleep data, a multimodal multiscale feature extraction module processes each modality differently and applies multiscale extraction to the EEG modality for fine-grained features, improving feature effectiveness, providing an initial remedy for the difficulty of distinguishing the N1 stage, and raising N1 classification accuracy. To address the confusion of N1 with the N2 and REM stages, contrastive learning makes features of the same stage more similar and features of different stages relatively less similar, further improving the separability of N1. The invention achieves the highest N1 accuracy in the sleep staging task.
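The overlap-based augmentation for the scarce N1 stage can be illustrated with the sketch below. The epoch length, sampling rate, and overlap ratio are assumed values; in practice the slicing would be applied to N1 segments only.

```python
# Minimal sketch of an overlap-based augmentation for the scarce N1 stage
# (assumed implementation): new epochs are cut from the signal with a stride
# smaller than the epoch length, so consecutive samples overlap.
import numpy as np

def overlap_augment(signal, epoch_len, overlap=0.5):
    """signal: 1-D array; returns overlapping epochs of length epoch_len."""
    stride = int(epoch_len * (1.0 - overlap))
    starts = range(0, len(signal) - epoch_len + 1, stride)
    return np.stack([signal[s:s + epoch_len] for s in starts])

if __name__ == "__main__":
    eeg = np.random.randn(30 * 100 * 10)           # 10 epochs of 30 s at 100 Hz (assumed)
    epochs = overlap_augment(eeg, epoch_len=3000)  # roughly twice as many overlapping epochs
    print(epochs.shape)
```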
Cite:
GB/T 7714 | 段立娟 , 尹悦 . 一种改善N1期类别混淆的多模态多尺度睡眠分期方法 : CN202310152184.2[P]. | 2023-02-22 . |
MLA | 段立娟 et al. "一种改善N1期类别混淆的多模态多尺度睡眠分期方法" : CN202310152184.2. | 2023-02-22 . |
APA | 段立娟 , 尹悦 . 一种改善N1期类别混淆的多模态多尺度睡眠分期方法 : CN202310152184.2. | 2023-02-22 . |
Abstract :
A dual-teacher sleep-staging feature transfer method based on knowledge distillation and domain adaptation, belonging to the fields of signal processing and pattern recognition. Sleep EEG and EOG signals are first preprocessed to obtain multimodal sleep signal data samples. For every channel of each sample in the source and target domains, Morlet wavelet transforms at different resolutions extract time-frequency features, which are then fed to a source-domain teacher and a target-domain teacher for pretraining. When training the student, the two teachers, with their feature extractors frozen, provide guidance, constraining the student to learn features shared by the source and target domains as well as target-domain-specific features. Experiments show that the proposed model makes full use of the data's features for feature transfer, performs well even when target-domain data are scarce, and effectively addresses the accuracy drop that existing automatic sleep staging methods suffer on new datasets.
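A minimal sketch of the dual-teacher guidance is shown below: two frozen teachers supervise the student through softened-logit KL terms added to the usual cross-entropy. The temperature, loss weighting, and equal treatment of the two teachers are assumptions; the wavelet feature extraction and pretraining stages are omitted.

```python
# Minimal sketch of dual-teacher distillation (assumed, simplified): two frozen
# teachers (pretrained on source and target domains) guide the student via
# KL divergence on softened logits, alongside the supervised loss.
import torch
import torch.nn.functional as F

def dual_teacher_loss(student_logits, src_teacher_logits, tgt_teacher_logits,
                      labels, T=2.0, alpha=0.5):
    def kd(teacher_logits):
        return F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits.detach() / T, dim=1),   # teachers stay frozen
            reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return (1 - alpha) * ce + alpha * 0.5 * (kd(src_teacher_logits) + kd(tgt_teacher_logits))

if __name__ == "__main__":
    s = torch.randn(8, 5, requires_grad=True)
    loss = dual_teacher_loss(s, torch.randn(8, 5), torch.randn(8, 5),
                             torch.randint(0, 5, (8,)))
    print(loss.item())
```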
Cite:
GB/T 7714 | 段立娟 , 张岩 . 基于知识蒸馏和域自适应的双教师睡眠分期特征迁移方法 : CN202310189447.7[P]. | 2023-02-22 . |
MLA | 段立娟 et al. "基于知识蒸馏和域自适应的双教师睡眠分期特征迁移方法" : CN202310189447.7. | 2023-02-22 . |
APA | 段立娟 , 张岩 . 基于知识蒸馏和域自适应的双教师睡眠分期特征迁移方法 : CN202310189447.7. | 2023-02-22 . |
Abstract :
Few-shot semantic segmentation intends to predict pixel-level categories using only a few labeled samples. Existing few-shot methods focus primarily on categories sampled from the same distribution, yet this assumption cannot always be guaranteed, and the resulting domain shift significantly reduces the performance of few-shot learning. To remedy this problem, we propose an interesting and challenging cross-domain few-shot semantic segmentation task, in which the training and test tasks are performed on different domains. Specifically, we first propose a meta-memory bank to improve the generalization of the segmentation network by bridging the domain gap between source and target domains. The meta-memory stores intra-domain style information from source domain instances and transfers it to target samples. Subsequently, we adopt a new contrastive learning strategy to explore the knowledge of different categories during the training stage, where the negative and positive pairs are obtained from the proposed memory-based style augmentation. Comprehensive experiments demonstrate that our proposed method achieves promising results on cross-domain few-shot semantic segmentation tasks on the COCO-20(i), PASCAL-5(i), FSS-1000, and SUIM datasets.
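The style-memory idea (storing source-domain style statistics and injecting them into target features) can be sketched with an AdaIN-style transfer, as below. The bank layout, the use of per-channel mean/std as "style", and the indexing are simplifying assumptions rather than the paper's exact meta-memory design.

```python
# Minimal sketch of memory-based style transfer (assumed, AdaIN-like): channel
# statistics harvested from source-domain features are stored in a memory bank
# and later injected into target features to bridge the style gap.
import torch

class StyleMemory:
    def __init__(self):
        self.bank = []  # list of (mean, std) pairs, one per stored source instance

    def write(self, feat):  # feat: (C, H, W) source-domain feature map
        self.bank.append((feat.mean(dim=(1, 2)), feat.std(dim=(1, 2))))

    def transfer(self, feat, idx=0, eps=1e-5):
        mu, sigma = self.bank[idx]
        f_mu = feat.mean(dim=(1, 2), keepdim=True)
        f_sigma = feat.std(dim=(1, 2), keepdim=True)
        normalized = (feat - f_mu) / (f_sigma + eps)          # strip target style
        return normalized * sigma[:, None, None] + mu[:, None, None]  # inject source style

if __name__ == "__main__":
    mem = StyleMemory()
    mem.write(torch.randn(64, 32, 32))           # store a source-domain style
    out = mem.transfer(torch.randn(64, 32, 32))  # restyle a target feature
    print(out.shape)
```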
Cite:
GB/T 7714 | Wang, Wenjian , Duan, Lijuan , Wang, Yuxi et al. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer [J]. | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 : 7055-7064 . |
MLA | Wang, Wenjian et al. "Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer" . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022) : 7055-7064 . |
APA | Wang, Wenjian , Duan, Lijuan , Wang, Yuxi , En, Qing , Fan, Junsong , Zhang, Zhaoxiang . Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 , 7055-7064 . |
Abstract :
This invention concerns an adaptive, fast, unsupervised feature selection method for face recognition, aimed at the difficulty of analyzing high-dimensional face images that typically contain many meaningless and redundant features. Specifically, an adaptive fast density-peak clustering method first clusters the face image features; a feature importance evaluation function is then defined, and the most representative feature in each cluster is selected and added to the feature subset, completing feature selection. Implementing the invention yields a more accurate feature subset and faster feature selection.
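The cluster-then-select scheme can be sketched as follows. For brevity the sketch substitutes k-means for the patent's adaptive fast density-peak clustering and uses distance to the cluster center as the representativeness score; both substitutions are assumptions.

```python
# Minimal sketch of the cluster-then-select idea (assumed, simplified): features
# are clustered by similarity (k-means here stands in for the patent's adaptive
# density-peak clustering) and the feature closest to each cluster center is
# kept as that cluster's representative.
import numpy as np
from sklearn.cluster import KMeans

def select_features(X, n_select=10, seed=0):
    """X: (n_samples, n_features); returns indices of selected feature columns."""
    feat_vectors = X.T                                   # treat each feature as a point
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(feat_vectors)
    selected = []
    for k in range(n_select):
        members = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(feat_vectors[members] - km.cluster_centers_[k], axis=1)
        selected.append(members[np.argmin(dists)])       # most central feature per cluster
    return sorted(selected)

if __name__ == "__main__":
    X = np.random.rand(200, 64)                          # e.g. 64-dim face descriptors (assumed)
    print(select_features(X, n_select=8))
```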
Cite:
GB/T 7714 | 段立娟 , 解晨瑶 , 张文博 et al. 一种应用于人脸识别的自适应快速无监督特征选择方法 : CN202210183736.1[P]. | 2022-02-25 . |
MLA | 段立娟 et al. "一种应用于人脸识别的自适应快速无监督特征选择方法" : CN202210183736.1. | 2022-02-25 . |
APA | 段立娟 , 解晨瑶 , 张文博 , 乔元华 . 一种应用于人脸识别的自适应快速无监督特征选择方法 : CN202210183736.1. | 2022-02-25 . |
Abstract :
This invention provides a feature extraction method for fMRI brain functional connectivity data based on salient, sparse, strongly associated features. Drawing on the idea of spatial self-attention, it extracts features from salient regions of the fMRI data and sparsifies features from non-salient regions, and then exploits the strong associations among different salient-region features to address the high sample dimensionality, excessive redundant features, and underused feature-association information that arise during feature extraction. To objectively evaluate the proposed model, it is validated on the ABIDE and ADHD datasets. Experimental results show that the proposed feature extraction method effectively improves the classification accuracy of fMRI brain functional connectivity data.
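One way to picture the salient-keep / sparsify step is the sketch below: a learned score ranks region features, the top fraction passes through unchanged, and the rest is shrunk toward zero. The scoring layer, keep ratio, shrink factor, and the 116-region input size are illustrative assumptions, not the patented design.

```python
# Minimal sketch of the salient-keep / sparsify idea (assumed, simplified):
# a self-attention-style score ranks functional-connectivity features, the
# top fraction is kept intact and the remainder is shrunk toward zero.
import torch
import torch.nn as nn

class SalientSparsifier(nn.Module):
    def __init__(self, dim, keep_ratio=0.2, shrink=0.1):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # learned saliency score per region feature
        self.keep_ratio = keep_ratio
        self.shrink = shrink

    def forward(self, x):                # x: (batch, regions, dim)
        s = torch.sigmoid(self.score(x)).squeeze(-1)     # (batch, regions)
        k = max(1, int(self.keep_ratio * x.size(1)))
        topk = s.topk(k, dim=1).indices
        mask = torch.full_like(s, self.shrink)
        mask.scatter_(1, topk, 1.0)                      # salient regions keep weight 1
        return x * mask.unsqueeze(-1)

if __name__ == "__main__":
    x = torch.randn(4, 116, 64)          # e.g. 116 brain regions (assumed atlas size)
    print(SalientSparsifier(64)(x).shape)
```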
Cite:
GB/T 7714 | 段立娟 , 李明 , 张文博 et al. 一种基于显著稀疏强关联的fmri脑功能连接数据特征提取方法 : CN202210090736.7[P]. | 2022-01-26 . |
MLA | 段立娟 et al. "一种基于显著稀疏强关联的fmri脑功能连接数据特征提取方法" : CN202210090736.7. | 2022-01-26 . |
APA | 段立娟 , 李明 , 张文博 , 乔元华 . 一种基于显著稀疏强关联的fmri脑功能连接数据特征提取方法 : CN202210090736.7. | 2022-01-26 . |
Abstract :
This invention discloses a memory-mechanism-based few-shot cross-domain segmentation method that not only reduces the model's dependence on large numbers of annotated samples but also effectively improves its adaptability to different environments. The segmentation model is first trained on public datasets; during this stage a meta-metric mechanism reduces the dependence on data labels, and read/write operations store stylized knowledge carrying domain information in a memory module. When the model is deployed, the knowledge stored in the memory module is injected into the samples to be segmented in the new environment, improving the model's generalization across environments and allowing it to complete segmentation in new scenes. By loading the domain-generalization knowledge captured during training into new, sample-scarce environments, the invention narrows the gap between data distributions of different environments, enables the deep model to cope effectively with new environments where annotated data are scarce, and extends the generalization and usability of deep segmentation models.
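The meta-metric (prototype matching) component mentioned above can be sketched as follows: a class prototype is pooled from support features under the support mask, and query pixels are scored by cosine similarity to it. Feature sizes are assumed, and the memory read/write machinery is omitted here.

```python
# Minimal sketch of prototype-based (metric) matching for few-shot segmentation
# (assumed, simplified): masked average pooling yields a class prototype from
# the support image, and query pixels are scored by cosine similarity to it.
import torch
import torch.nn.functional as F

def masked_average_pool(feat, mask):
    """feat: (C, H, W); mask: (H, W) in {0,1}. Returns the class prototype (C,)."""
    mask = mask.unsqueeze(0).float()
    return (feat * mask).sum(dim=(1, 2)) / (mask.sum() + 1e-6)

def query_similarity(query_feat, prototype):
    """query_feat: (C, H, W). Returns per-pixel cosine similarity (H, W)."""
    return F.cosine_similarity(query_feat, prototype[:, None, None], dim=0)

if __name__ == "__main__":
    sup_feat, sup_mask = torch.randn(256, 32, 32), (torch.rand(32, 32) > 0.5).long()
    qry_feat = torch.randn(256, 32, 32)
    proto = masked_average_pool(sup_feat, sup_mask)
    print(query_similarity(qry_feat, proto).shape)   # torch.Size([32, 32])
```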
Cite:
GB/T 7714 | 段立娟 , 王文健 , 公智 et al. 基于记忆力机制的跨域小样本图像语义分割方法 : CN202210707799.2[P]. | 2022-06-20 . |
MLA | 段立娟 et al. "基于记忆力机制的跨域小样本图像语义分割方法" : CN202210707799.2. | 2022-06-20 . |
APA | 段立娟 , 王文健 , 公智 , 乔元华 . 基于记忆力机制的跨域小样本图像语义分割方法 : CN202210707799.2. | 2022-06-20 . |