
Query:

Scholar name: 段立娟 (Duan, Lijuan)

HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection SCIE
Journal article | 2025, 242 | MEASUREMENT

Abstract :

Accurately detecting surface defects is crucial for maintaining the quality of steel products. The existing methods often struggle with identifying small defects in complex scenes. To overcome this limitation, we propose a high-dimensional decoupled frequency attention network (HDFA-Net), which features a novel HDFA module. This module considers frequency domain feature information at both the channel and spatial levels and uniquely decouples feature representations into low-frequency components and high-frequency components, conveying global contextual information and highlighting local details, respectively. This innovative approach enables the network to apply distinct attention mechanisms to each frequency domain, significantly enhancing the ability to detect small defects. The HDFA-Net architecture comprises three main subnetworks: a backbone for feature extraction, a neck for multiscale feature interaction, and a head for precise defect localization. The experimental results demonstrate that HDFA-Net outperforms the state-of-the-art defect detection methods, highlighting the effectiveness of the HDFA module in improving the detection accuracy.
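To make the decoupling idea concrete, here is a minimal PyTorch-style sketch of a frequency-decoupled attention block. The pooling-based low/high-frequency split, the channel/spatial attention layout, and all layer sizes are illustrative assumptions, not the authors' HDFA implementation.

```python
# Minimal sketch of a frequency-decoupled attention idea, assuming PyTorch.
# The low/high-frequency split via average pooling and the attention layout
# are illustrative assumptions, not the paper's actual HDFA module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyDecoupledAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention for the low-frequency (global context) branch.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention for the high-frequency (local detail) branch.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-frequency component: heavily smoothed feature map.
        low = F.adaptive_avg_pool2d(x, 1).expand_as(x)
        # High-frequency component: residual local detail.
        high = x - low
        low = low * self.channel_att(low)     # global context weighting
        high = high * self.spatial_att(high)  # local detail weighting
        return low + high

feats = torch.randn(2, 64, 32, 32)
out = FrequencyDecoupledAttention(64)(feats)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```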

Keyword :

Defect detection; attention; Steel surface; High-dimensional decoupled frequency

Cite:


GB/T 7714 Liang, Fangfang , Wang, Zhaoyang , Ma, Wei et al. HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection [J]. | MEASUREMENT , 2025 , 242 .
MLA Liang, Fangfang et al. "HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection" . | MEASUREMENT 242 (2025) .
APA Liang, Fangfang , Wang, Zhaoyang , Ma, Wei , Liu, Bo , En, Qing , Wang, Dong et al. HDFA-Net: A high-dimensional decoupled frequency attention network for steel surface defect detection . | MEASUREMENT , 2025 , 242 .
Multi-task Self-supervised Few-Shot Detection CPCI-S
Journal article | 2024, 14436, 107-119 | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII

Abstract :

Few-shot object detection involves detecting novel objects with only a few training samples. However, such a small number of samples can hardly cover the variability of a new class in a deep model. To address this issue, we use self-supervision to expand the coverage of samples and provide more observation angles for new classes. In this paper, we propose a multi-task approach that combines self-supervision with few-shot learning to exploit the complementarity of the two domains. Specifically, self-supervision serves as an auxiliary task that improves the detection performance of the main few-shot learning task. Moreover, to make self-supervision better suited to few-shot object detection, we introduce a denoising module that expands the positive and negative samples and a team module for precise localization. The denoising module expands the positive and negative samples and accelerates model convergence through contrastive denoising training, while the team module exploits location constraints for precise localization to improve detection accuracy. Our experiments on the PASCAL VOC and COCO datasets demonstrate the effectiveness of the method on the few-shot object detection task, achieving promising results. These results highlight the potential of combining self-supervision with few-shot learning to improve object detection models when annotated data is limited.
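As an illustration of the multi-task idea, the sketch below combines a main detection loss with an auxiliary self-supervised contrastive loss over positive/negative sample embeddings. The InfoNCE form, the temperature, and the weight `aux_weight` are assumptions for illustration, not the paper's training recipe.

```python
# Sketch: main few-shot detection loss plus an auxiliary self-supervised
# contrastive loss over proposal embeddings. InfoNCE form and weights are
# illustrative assumptions, not the authors' exact formulation.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature: float = 0.1):
    """Contrastive loss: pull anchor/positive together, push negatives away."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (anchor * positive).sum(-1, keepdim=True)   # (B, 1)
    neg_logits = anchor @ negatives.t()                      # (B, N)
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    labels = torch.zeros(anchor.size(0), dtype=torch.long)   # positive sits at index 0
    return F.cross_entropy(logits, labels)

def total_loss(det_loss, anchor, positive, negatives, aux_weight=0.5):
    # Main task drives detection; the auxiliary task regularizes the features.
    return det_loss + aux_weight * info_nce(anchor, positive, negatives)

det_loss = torch.tensor(1.2)              # placeholder detection loss value
emb = lambda n: torch.randn(n, 128)       # toy proposal embeddings
print(total_loss(det_loss, emb(4), emb(4), emb(16)))
```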

Keyword :

Few-shot object detection; End-to-End Detector; Self-supervised learning

Cite:


GB/T 7714 Zhang, Guangyong , Duan, Lijuan , Wang, Wenjian et al. Multi-task Self-supervised Few-Shot Detection [J]. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII , 2024 , 14436 : 107-119 .
MLA Zhang, Guangyong et al. "Multi-task Self-supervised Few-Shot Detection" . | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII 14436 (2024) : 107-119 .
APA Zhang, Guangyong , Duan, Lijuan , Wang, Wenjian , Gong, Zhi , Ma, Bian . Multi-task Self-supervised Few-Shot Detection . | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII , 2024 , 14436 , 107-119 .
A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging CPCI-S
Journal article | 2024, 60-64 | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024

Abstract :

Sleep staging plays a significant role in diagnosing sleep-related diseases and assessing sleep quality. Recent years have witnessed remarkable advances in deep learning for automatic sleep staging. However, the classification accuracy for certain stages remains unsatisfactory due to imbalanced sleep data. Moreover, some sleep stages are easily confused with each other, further lowering classification accuracy. To address these issues, we propose a multi-task, feature-reconstruction-based sleep staging method comprising a feature extraction module and a multi-task module. In the multi-task module, the main task branch encodes the features extracted by the feature extraction module for sleep staging, while the auxiliary task branch randomly masks the features before performing the same encoding. Based on the classification results of the main task, a dimensional reconstruction then recovers the original features from the encoded features for the confused sleep stages. The overall loss, combining the classification loss and the reconstruction loss, constrains the model. We evaluate the proposed model on three public datasets, and the results demonstrate its effectiveness in raising the low classification accuracy of easily confused sleep stages.
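A minimal sketch of the two-branch design follows: the main branch classifies sleep stages from extracted features, while the auxiliary branch encodes randomly masked features and reconstructs the originals, with both losses combined. Layer sizes, the mask ratio, and the simple linear encoder/decoder are illustrative assumptions.

```python
# Sketch of the multi-task sleep-staging idea: classification main task plus
# masked-feature reconstruction auxiliary task. Dimensions and mask ratio are
# illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSleepStager(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, n_stages=5, mask_ratio=0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Linear(feat_dim, hidden)      # shared encoder
        self.classifier = nn.Linear(hidden, n_stages)   # main task head
        self.decoder = nn.Linear(hidden, feat_dim)      # reconstruction head

    def forward(self, feats, labels):
        # Main task: classify sleep stages from the extracted features.
        cls_logits = self.classifier(torch.relu(self.encoder(feats)))
        cls_loss = F.cross_entropy(cls_logits, labels)

        # Auxiliary task: randomly mask features, encode, reconstruct originals.
        mask = (torch.rand_like(feats) > self.mask_ratio).float()
        encoded = torch.relu(self.encoder(feats * mask))
        recon_loss = F.mse_loss(self.decoder(encoded), feats)

        # Overall loss combines classification and reconstruction terms.
        return cls_loss + recon_loss, cls_logits

model = MultiTaskSleepStager()
feats = torch.randn(8, 128)                 # features from the extraction module
labels = torch.randint(0, 5, (8,))          # toy sleep-stage labels
loss, logits = model(feats, labels)
print(loss.item(), logits.shape)
```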

Keyword :

Sleep stage classification; multi-task learning; feature reconstruction

Cite:


GB/T 7714 Zhang, Chengyu , Duan, Lijuan , Ma, Bian et al. A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging [J]. | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 , 2024 : 60-64 .
MLA Zhang, Chengyu et al. "A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging" . | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 (2024) : 60-64 .
APA Zhang, Chengyu , Duan, Lijuan , Ma, Bian , Gong, Zhi . A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging . | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 , 2024 , 60-64 .
A software source-code vulnerability detection method based on a hybrid graph neural network incoPat zhihuiya
Patent | 2022-03-18 | CN202210274334.2

Abstract :

This invention concerns a software source-code vulnerability detection method based on a hybrid graph neural network, addressing the loss of internal structural and semantic information during source-code processing and the resulting poor vulnerability detection performance. The method represents each source file as an information-enhanced code property graph, vectorizes the graph, and feeds it into a graph convolutional network to obtain a local feature matrix and into a gated graph neural network to obtain a global feature matrix. The local and global feature matrices are concatenated and passed to a classifier, which outputs the detection result. The approach effectively preserves the structural and semantic information inside the source code. During training, a focal loss assigns different weights to positive and negative samples, preventing the model from overfitting the far more numerous non-vulnerable class and improving vulnerability detection performance.
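Two pieces of this pipeline lend themselves to a short sketch: concatenating the local (GCN) and global (gated GNN) feature matrices before classification, and a binary focal loss that down-weights the abundant non-vulnerable class. The alpha/gamma values, feature dimensions, and linear classifier are illustrative assumptions.

```python
# Sketch: fuse local (GCN) and global (gated GNN) graph features, then train a
# binary classifier with focal loss. Alpha, gamma, and dimensions are
# illustrative assumptions, not the patent's settings.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: (1 - p_t)^gamma focuses training on hard examples."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

local_feats = torch.randn(16, 64)    # per-graph features from the GCN branch
global_feats = torch.randn(16, 64)   # per-graph features from the gated GNN branch
fused = torch.cat([local_feats, global_feats], dim=1)   # (16, 128)

classifier = torch.nn.Linear(128, 1)
logits = classifier(fused).squeeze(1)
targets = torch.randint(0, 2, (16,)).float()   # 1 = vulnerable, 0 = clean
print(focal_loss(logits, targets))
```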

Cite:


GB/T 7714 段立娟 , 徐泽鑫 , 陈军成 . 一种基于混合图神经网络的软件源码漏洞检测方法 : CN202210274334.2[P]. | 2022-03-18 .
MLA 段立娟 et al. "一种基于混合图神经网络的软件源码漏洞检测方法" : CN202210274334.2. | 2022-03-18 .
APA 段立娟 , 徐泽鑫 , 陈军成 . 一种基于混合图神经网络的软件源码漏洞检测方法 : CN202210274334.2. | 2022-03-18 .
A MOBA game equipment recommendation method based on graph attention networks incoPat zhihuiya
Patent | 2022-02-22 | CN202210164356.3

Abstract :

This invention belongs to the field of recommender systems and proposes an equipment recommendation method for Multiplayer Online Battle Arena (MOBA) games based on graph attention networks. First, a Transformer-based local and global attention feature extraction method performs fine-grained extraction of the multi-attribute features of the teams in a match, so that the model considers both allied assistance information and enemy constraint information when recommending equipment, enabling effective information exchange. Second, a global multi-aggregation method based on graph attention networks updates the aggregated features in depth by computing influence-factor weights, continually reinforcing the hero-hero and hero-equipment interactions. The method clearly outperforms previous approaches on the Precision and MAP metrics, yielding more accurate and effective equipment recommendations for MOBA games.
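The influence-factor-weighted aggregation can be pictured as a graph-attention layer over a hero-equipment interaction graph. The single-head GAT layer below is a plain illustration of that mechanism; the graph construction, adjacency, and feature sizes are placeholders, not the patent's actual design.

```python
# Illustrative single-head graph-attention aggregation over a hero-equipment
# graph, assuming PyTorch. Graph construction and dimensions are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.W(x)                                   # (N, out_dim)
        n = h.size(0)
        # Pairwise attention logits e_ij = a([h_i || h_j]).
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))      # attend only to neighbours
        alpha = torch.softmax(e, dim=-1)                # influence-factor weights
        return alpha @ h                                # aggregate neighbour features

nodes = torch.randn(12, 32)               # hero and equipment node features
adj = (torch.rand(12, 12) > 0.5).float()  # toy adjacency matrix
adj.fill_diagonal_(1.0)                   # keep self-loops so softmax is defined
out = GATLayer(32, 16)(nodes, adj)
print(out.shape)  # torch.Size([12, 16])
```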

Cite:


GB/T 7714 段立娟 , 李舒欣 , 张文博 et al. 一种基于图注意力网络的MOBA游戏装备推荐方法 : CN202210164356.3[P]. | 2022-02-22 .
MLA 段立娟 et al. "一种基于图注意力网络的MOBA游戏装备推荐方法" : CN202210164356.3. | 2022-02-22 .
APA 段立娟 , 李舒欣 , 张文博 , 乔元华 . 一种基于图注意力网络的MOBA游戏装备推荐方法 : CN202210164356.3. | 2022-02-22 .
An adaptive fast unsupervised feature selection method for face recognition incoPat zhihuiya
Patent | 2022-02-25 | CN202210183736.1

Abstract :

This invention concerns an adaptive fast unsupervised feature selection method for face recognition, addressing the difficulty of analyzing high-dimensional face images that often contain large numbers of meaningless and redundant features. Specifically, an adaptive fast density-peak clustering method first clusters the face image features; a feature importance evaluation function is then defined, and the most representative feature in each feature cluster is selected and added to the feature subset, completing the feature selection. The invention yields a more precise feature subset and faster feature selection.
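The cluster-then-select idea can be sketched in a few lines. Note that, purely to keep the example short, the sketch substitutes k-means for the patent's adaptive density-peak clustering and uses feature variance as a toy importance score; both substitutions are assumptions, not the patented method.

```python
# Sketch of cluster-then-select feature selection: group feature columns into
# clusters and keep one representative per cluster. k-means and the variance
# score stand in for the patent's density-peak clustering and importance function.
import numpy as np
from sklearn.cluster import KMeans

def select_features(X: np.ndarray, n_clusters: int = 10) -> np.ndarray:
    """Return indices of one representative feature per cluster."""
    feat_vectors = X.T                                    # cluster columns, not rows
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feat_vectors)
    importance = X.var(axis=0)                            # toy importance score
    selected = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        selected.append(members[np.argmax(importance[members])])
    return np.sort(np.array(selected))

X = np.random.rand(200, 512)          # 200 face images, 512-dim features
print(select_features(X, n_clusters=20))
```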

Cite:


GB/T 7714 段立娟 , 解晨瑶 , 张文博 et al. 一种应用于人脸识别的自适应快速无监督特征选择方法 : CN202210183736.1[P]. | 2022-02-25 .
MLA 段立娟 et al. "一种应用于人脸识别的自适应快速无监督特征选择方法" : CN202210183736.1. | 2022-02-25 .
APA 段立娟 , 解晨瑶 , 张文博 , 乔元华 . 一种应用于人脸识别的自适应快速无监督特征选择方法 : CN202210183736.1. | 2022-02-25 .
A feature extraction method for fMRI brain functional connectivity data based on salient sparse strong correlation incoPat zhihuiya
Patent | 2022-01-26 | CN202210090736.7

Abstract :

This invention provides a feature extraction method for fMRI brain functional connectivity data based on salient sparse strong correlation. Drawing on the idea of spatial self-attention, the method extracts features from the salient regions of the fMRI data and sparsifies the features of non-salient regions, then exploits the strong correlations among the salient region features to address the high sample dimensionality, excessive redundant features, and under-use of feature correlation information that arise during feature extraction. To evaluate the proposed model objectively, it is validated on the ABIDE and ADHD datasets. Experimental results show that the proposed feature extraction method effectively improves the classification accuracy on fMRI brain functional connectivity data.
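The salient-sparse step can be illustrated as: compute self-attention saliency over brain-region features, keep the top-k salient regions, and zero out the rest. The dot-product attention, the keep ratio, and the ROI/feature sizes are illustrative assumptions.

```python
# Sketch of the salient-sparse idea: self-attention saliency over brain regions,
# keep the most salient regions, sparsify (zero out) the others.
import torch

def salient_sparse_features(x: torch.Tensor, keep_ratio: float = 0.2):
    """x: (batch, regions, dim) functional-connectivity features."""
    # Dot-product self-attention scores between brain regions.
    attn = torch.softmax(x @ x.transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
    saliency = attn.mean(dim=-1)                       # (batch, regions)
    k = max(1, int(keep_ratio * x.size(1)))
    topk = saliency.topk(k, dim=1).indices
    mask = torch.zeros_like(saliency).scatter_(1, topk, 1.0)
    return x * mask.unsqueeze(-1)                      # non-salient regions zeroed

feats = torch.randn(4, 116, 64)       # 4 subjects, 116 ROIs, 64-dim features
sparse = salient_sparse_features(feats)
print((sparse.abs().sum(-1) > 0).sum(dim=1))  # 23 salient regions kept per subject
```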

Cite:


GB/T 7714 段立娟 , 李明 , 张文博 et al. 一种基于显著稀疏强关联的fmri脑功能连接数据特征提取方法 : CN202210090736.7[P]. | 2022-01-26 .
MLA 段立娟 et al. "一种基于显著稀疏强关联的fmri脑功能连接数据特征提取方法" : CN202210090736.7. | 2022-01-26 .
APA 段立娟 , 李明 , 张文博 , 乔元华 . 一种基于显著稀疏强关联的fmri脑功能连接数据特征提取方法 : CN202210090736.7. | 2022-01-26 .
Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer CPCI-S
Journal article | 2022, 7055-7064 | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)
WoS CC Cited Count: 19

Abstract :

Few-shot semantic segmentation intends to predict pixel-level categories using only a few labeled samples. Existing few-shot methods focus primarily on categories sampled from the same distribution. Nevertheless, this assumption cannot always be ensured, and in practice the domain shift problem significantly reduces the performance of few-shot learning. To remedy this problem, we propose an interesting and challenging cross-domain few-shot semantic segmentation task, where the training and test tasks are performed on different domains. Specifically, we first propose a meta-memory bank to improve the generalization of the segmentation network by bridging the domain gap between source and target domains. The meta-memory stores the intra-domain style information from source domain instances and transfers it to target samples. Subsequently, we adopt a new contrastive learning strategy to explore the knowledge of different categories during the training stage. The negative and positive pairs are obtained from the proposed memory-based style augmentation. Comprehensive experiments demonstrate that our proposed method achieves promising results on cross-domain few-shot semantic segmentation tasks on the COCO-20(i), PASCAL-5(i), FSS-1000, and SUIM datasets.
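The memory-based style transfer can be pictured as storing per-instance channel statistics from source-domain features and re-normalizing target features with a retrieved style, AdaIN-style. The memory structure, random retrieval rule, and feature sizes below are illustrative assumptions, not the paper's meta-memory design.

```python
# Sketch of memory-based style transfer in feature space: store (mean, std)
# channel statistics from source features, re-style target features with a
# retrieved style. Memory size and retrieval rule are illustrative assumptions.
import torch

class StyleMemory:
    def __init__(self):
        self.styles = []                      # list of (mean, std) pairs

    def update(self, feats: torch.Tensor):
        """feats: (B, C, H, W) source-domain features."""
        mean = feats.mean(dim=(2, 3))         # (B, C)
        std = feats.std(dim=(2, 3)) + 1e-6
        self.styles.extend(zip(mean, std))

    def transfer(self, feats: torch.Tensor) -> torch.Tensor:
        """Re-style target features with a randomly retrieved stored style."""
        mean, std = self.styles[torch.randint(len(self.styles), (1,)).item()]
        f_mean = feats.mean(dim=(2, 3), keepdim=True)
        f_std = feats.std(dim=(2, 3), keepdim=True) + 1e-6
        normalized = (feats - f_mean) / f_std
        return normalized * std.view(1, -1, 1, 1) + mean.view(1, -1, 1, 1)

memory = StyleMemory()
memory.update(torch.randn(8, 64, 32, 32))              # source-domain batch
styled = memory.transfer(torch.randn(2, 64, 32, 32))   # target-domain batch
print(styled.shape)  # torch.Size([2, 64, 32, 32])
```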

Cite:


GB/T 7714 Wang, Wenjian , Duan, Lijuan , Wang, Yuxi et al. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer [J]. | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 : 7055-7064 .
MLA Wang, Wenjian et al. "Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer" . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022) : 7055-7064 .
APA Wang, Wenjian , Duan, Lijuan , Wang, Yuxi , En, Qing , Fan, Junsong , Zhang, Zhaoxiang . Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 , 7055-7064 .
Context-aware network for RGB-D salient object detection EI
Journal article | 2021, 111 | Pattern Recognition

Abstract :

Convolutional neural networks (CNNs) have shown unprecedented success in object representation and detection. Nevertheless, CNNs lack the capability to model context dependencies among objects, which are crucial for salient object detection. As the long short-term memory (LSTM) is advantageous in propagating information, in this paper, we propose two variant LSTM units for the exploration of contextual dependencies. By incorporating these units, we present a context-aware network (CAN) to detect salient objects in RGB-D images. The proposed model consists of three components: feature extraction, context fusion of multiple modalities, and context-dependent deconvolution. The first component extracts hierarchical features from the color and depth images using CNNs. The second component fuses high-level features with a variant LSTM to model multi-modal spatial dependencies in contexts. The third component, embedded with another variant LSTM, models local hierarchical context dependencies of the fused features at multiple scales. Experimental results on two public benchmark datasets show that the proposed CAN achieves state-of-the-art performance for RGB-D stereoscopic salient object detection.
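The context-propagation idea can be sketched by fusing RGB and depth features and letting an LSTM pass information across spatial positions. The raster-scan ordering and the single plain LSTM below are simplifications standing in for the paper's two variant LSTM units; all sizes are assumptions.

```python
# Sketch of RGB-D context fusion: concatenate color and depth features, then
# propagate spatial context with an LSTM over the flattened grid. This is a
# simplification of the paper's variant LSTM units, not their implementation.
import torch
import torch.nn as nn

class ContextFusion(nn.Module):
    def __init__(self, channels=64, hidden=64):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)

    def forward(self, rgb_feat, depth_feat):
        x = self.fuse(torch.cat([rgb_feat, depth_feat], dim=1))  # (B, C, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)        # (B, H*W, C) raster-scan order
        ctx, _ = self.lstm(seq)                   # propagate spatial context
        return ctx.transpose(1, 2).view(b, -1, h, w)

rgb = torch.randn(2, 64, 16, 16)      # high-level color features
depth = torch.randn(2, 64, 16, 16)    # high-level depth features
out = ContextFusion()(rgb, depth)
print(out.shape)  # torch.Size([2, 64, 16, 16])
```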

Keyword :

Object detection; Stereo image processing; Convolutional neural networks; Object recognition; Benchmarking; Long short-term memory

Cite:


GB/T 7714 Liang, Fangfang , Duan, Lijuan , Ma, Wei et al. Context-aware network for RGB-D salient object detection [J]. | Pattern Recognition , 2021 , 111 .
MLA Liang, Fangfang et al. "Context-aware network for RGB-D salient object detection" . | Pattern Recognition 111 (2021) .
APA Liang, Fangfang , Duan, Lijuan , Ma, Wei , Qiao, Yuanhua , Miao, Jun , Ye, Qixiang . Context-aware network for RGB-D salient object detection . | Pattern Recognition , 2021 , 111 .
Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays SCIE
Journal article | 2021, 31 (10), 4570-4588 | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL
WoS CC Cited Count: 6

Abstract :

Multidirectional associative memory neural networks (MAMNNs) are constructed to simulate many-to-many associations and are widely applied in many fields. It is important to explore the global stability of the periodic solution of MAMNNs. In this paper, MAMNNs with discontinuous activation functions and mixed time-varying delays are considered. Firstly, we investigate the conditions for the existence of the periodic solution using the Mawhin-like coincidence theorem, and a special connecting weight matrix is constructed to prove the existence of the periodic solution. Secondly, the uniqueness and global exponential stability of the periodic solution are explored for the non-self-connected system by introducing a Lipschitz-like condition. Finally, numerical simulations are given to illustrate the effectiveness of our main results.
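To make the setting concrete, one common form of a multidirectional associative memory network with mixed delays (a discrete time-varying delay plus a distributed delay) is sketched below. This generic form only illustrates the class of systems discussed and is an assumption, not necessarily the exact model analyzed in the paper.

```latex
\[
\dot{x}_{ki}(t) = -a_{ki}\,x_{ki}(t)
  + \sum_{p \ne k}\sum_{j} b_{ij}^{kp}\, f_{pj}\!\big(x_{pj}(t-\tau_{ij}(t))\big)
  + \sum_{p \ne k}\sum_{j} d_{ij}^{kp} \int_{t-\sigma}^{t} f_{pj}\big(x_{pj}(s)\big)\,ds
  + I_{ki},
\]
% x_{ki}: state of neuron i in field k; f_{pj}: possibly discontinuous activations;
% tau_{ij}(t): bounded time-varying delays; sigma: bound of the distributed delay.
```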

Keyword :

periodic solution; mixed time-varying delays; multidirectional associative memory neural networks; global exponential stability; discontinuous activation functions

Cite:


GB/T 7714 Zhang, Yan , Qiao, Yuanhua , Duan, Lijuan et al. Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays [J]. | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL , 2021 , 31 (10) : 4570-4588 .
MLA Zhang, Yan et al. "Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays" . | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL 31 . 10 (2021) : 4570-4588 .
APA Zhang, Yan , Qiao, Yuanhua , Duan, Lijuan , Miao, Jun , Zhang, Jiajia . Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays . | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL , 2021 , 31 (10) , 4570-4588 .