
Query:

Scholar name: 段立娟 (Duan Lijuan)

Multi-task Self-supervised Few-Shot Detection CPCI-S
Journal Article | 2024, 14436, 107-119 | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII

Abstract:

Few-shot object detection aims to detect novel objects from only a few training samples, but so few samples can hardly cover the variation of a new class in a deep model. To address this issue, we use self-supervision to expand the coverage of the samples and provide more observation angles for new classes. In this paper, we propose a multi-task approach that combines self-supervision with few-shot learning to exploit the complementarity of the two domains. Specifically, self-supervision serves as an auxiliary task that improves the detection performance of the main few-shot learning task. Moreover, to make self-supervision better suited to few-shot object detection, we introduce a denoising module that expands the positive and negative samples and a team module for precise positioning. The denoising module expands the positive and negative samples and accelerates model convergence through contrastive denoising training; the team module exploits location constraints for precise localization, improving detection accuracy. Experiments on the PASCAL VOC and COCO datasets demonstrate the effectiveness of our method on the few-shot object detection task, achieving promising results. These results highlight the potential of combining self-supervision with few-shot learning to improve object detection models in scenarios where annotated data is limited.
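
To make the multi-task coupling concrete, the sketch below shows the generic pattern in PyTorch. It is a hypothetical illustration, not the authors' code: rotation prediction stands in for the unspecified self-supervised task, the denoising and team modules are omitted, and the 0.5 auxiliary-loss weight is an assumption.

```python
# Hypothetical sketch: a shared backbone feeds a detection head (main task)
# and a self-supervised head (auxiliary task). Rotation prediction is a
# stand-in; the paper's denoising and team modules are not reproduced.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskDetector(nn.Module):
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.backbone = nn.Sequential(               # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.det_head = nn.Linear(64, num_classes)   # stand-in detection head
        self.ssl_head = nn.Linear(64, 4)             # predicts one of 4 rotations

    def forward(self, x):
        feats = self.backbone(x)
        return self.det_head(feats), self.ssl_head(feats)

model = MultiTaskDetector()
images = torch.randn(8, 3, 64, 64)
det_labels = torch.randint(0, 20, (8,))
rot = torch.randint(0, 4, (8,))                      # random rotation targets
rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                       for img, k in zip(images, rot)])

det_logits, _ = model(images)
_, ssl_logits = model(rotated)
loss = F.cross_entropy(det_logits, det_labels) \
     + 0.5 * F.cross_entropy(ssl_logits, rot)        # auxiliary weight is a guess
loss.backward()
```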

Keyword:

Few-shot object detection; End-to-End Detector; Self-supervised learning

Cite:


GB/T 7714: Zhang, Guangyong, Duan, Lijuan, Wang, Wenjian, et al. Multi-task Self-supervised Few-Shot Detection [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436: 107-119.
MLA: Zhang, Guangyong, et al. "Multi-task Self-supervised Few-Shot Detection." PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII 14436 (2024): 107-119.
APA: Zhang, Guangyong, Duan, Lijuan, Wang, Wenjian, Gong, Zhi, Ma, Bian. Multi-task Self-supervised Few-Shot Detection. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436, 107-119.
A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging CPCI-S
Journal Article | 2024, 60-64 | 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024

Abstract:

Sleep staging plays a significant role in diagnosing sleep-related diseases and assessing sleep quality. Recent years have witnessed remarkable advances in deep learning for automatic sleep staging. However, classification accuracy for certain stages remains unsatisfactory because sleep data are imbalanced; moreover, some sleep stages are easily confused with one another, further lowering accuracy. To address these issues, we propose a multi-task, feature-reconstruction sleep staging method comprising a feature extraction module and a multi-task module. In the multi-task module, the main task branch encodes the features extracted by the feature extraction module for sleep staging, while the auxiliary task branch randomly masks the features before performing the same encoding. Then, based on the classification results of the main task, a dimensional reconstruction rebuilds the original features from the encoded features for the confused sleep stages. The overall loss, combining the classification loss and the reconstruction loss, constrains the model. We evaluate the proposed model on three public datasets, and the results demonstrate its effectiveness in addressing low classification accuracy on easily confused sleep stages.
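
The masking-plus-reconstruction structure can be sketched compactly. The snippet below is a minimal sketch assuming PyTorch; the layer sizes, the 30% mask ratio, and the 0.1 reconstruction weight are illustrative assumptions, and the paper's stage-conditioned dimensional reconstruction is simplified to plain feature reconstruction.

```python
# Minimal sketch: the main branch classifies encoded features; the auxiliary
# branch encodes randomly masked features and a decoder reconstructs the
# originals. The total loss combines classification and reconstruction terms.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, n_stages, batch = 128, 5, 16

encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
classifier = nn.Linear(64, n_stages)                 # main task head
decoder = nn.Linear(64, feat_dim)                    # reconstructs features

features = torch.randn(batch, feat_dim)              # from a feature extractor
labels = torch.randint(0, n_stages, (batch,))

# Main task: classify sleep stages from encoded features.
logits = classifier(encoder(features))

# Auxiliary task: mask a random 30% of feature dimensions, then reconstruct.
mask = (torch.rand_like(features) > 0.3).float()
recon = decoder(encoder(features * mask))

loss = F.cross_entropy(logits, labels) + 0.1 * F.mse_loss(recon, features)
loss.backward()
```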

Keyword:

Sleep stage classification; multi-task learning; feature reconstruction

Cite:


GB/T 7714: Zhang, Chengyu, Duan, Lijuan, Ma, Bian, et al. A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging [J]. 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024, 2024: 60-64.
MLA: Zhang, Chengyu, et al. "A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging." 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024 (2024): 60-64.
APA: Zhang, Chengyu, Duan, Lijuan, Ma, Bian, Gong, Zhi. A Feature-Reconstruction Based Multi-task Learning Model for Sleep Staging. 2024 4TH INTERNATIONAL CONFERENCE ON INFORMATION COMMUNICATION AND SOFTWARE ENGINEERING, ICICSE 2024, 2024, 60-64.
Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer CPCI-S
Journal Article | 2022, 7055-7064 | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)
WoS CC Cited Count: 12

Abstract:

Few-shot semantic segmentation aims to predict pixel-level categories using only a few labeled samples. Existing few-shot methods focus primarily on categories sampled from the same distribution, but this assumption does not always hold, and in practice domain shift significantly reduces the performance of few-shot learning. To remedy this, we propose an interesting and challenging cross-domain few-shot semantic segmentation task, where the training and test tasks are performed on different domains. Specifically, we first propose a meta-memory bank that improves the generalization of the segmentation network by bridging the domain gap between source and target domains: the meta-memory stores intra-domain style information from source-domain instances and transfers it to target samples. We then adopt a new contrastive learning strategy to explore the knowledge of different categories during training, with negative and positive pairs obtained from the proposed memory-based style augmentation. Comprehensive experiments demonstrate that the proposed method achieves promising results on cross-domain few-shot semantic segmentation across the COCO-20i, PASCAL-5i, FSS-1000, and SUIM datasets.
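
The memory-based style augmentation can be read as storing per-channel feature statistics (a common proxy for "style") and re-applying them AdaIN-style. The sketch below, assuming PyTorch, is a hedged reading of that mechanism; the shapes, the one-slot memory, and the choice of mean/std statistics are assumptions, not the paper's exact design.

```python
# Hedged sketch: a memory bank of per-channel feature statistics from the
# source domain, and an AdaIN-style transfer of a stored style onto
# target-domain features to build augmented views.
import torch

def style_stats(feat):                      # feat: (B, C, H, W)
    mu = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.std(dim=(2, 3), keepdim=True) + 1e-5
    return mu, std

memory_mu, memory_std = [], []              # the "meta-memory bank"

source_feat = torch.randn(4, 64, 32, 32)    # source-domain features
mu, std = style_stats(source_feat)
memory_mu.append(mu.mean(0))                # store an averaged style slot
memory_std.append(std.mean(0))

# Transfer a stored source style onto target-domain features.
target_feat = torch.randn(4, 64, 32, 32)
t_mu, t_std = style_stats(target_feat)
normalized = (target_feat - t_mu) / t_std
stylized = normalized * memory_std[0] + memory_mu[0]  # augmented view, usable
                                                      # as a contrastive pair
```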

Cite:


GB/T 7714: Wang, Wenjian, Duan, Lijuan, Wang, Yuxi, et al. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer [J]. CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 7055-7064.
MLA: Wang, Wenjian, et al. "Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer." CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022): 7055-7064.
APA: Wang, Wenjian, Duan, Lijuan, Wang, Yuxi, En, Qing, Fan, Junsong, Zhang, Zhaoxiang. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer. CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, 7055-7064.
Bifurcation analysis and global dynamics in a predator-prey system of Leslie type with an increasing functional response SCIE
Journal Article | 2021, 455 | ECOLOGICAL MODELLING
WoS CC Cited Count: 7

Abstract:

The dynamical behaviors of a Leslie-type predator-prey system are explored when the functional response is increasing for both predator and prey. Qualitative and quantitative methods based on stability theory, bifurcation theory, and numerical simulation are adopted. It is shown that the system is dissipative and permanent and that its solutions are bounded. Global stability of the unique positive equilibrium is investigated by constructing a Dulac function and applying the Poincaré-Bendixson theorem. The bifurcation behaviors are further explored and the number of limit cycles is determined. By calculating the first Lyapunov number and the first two focus values, it is proved that the positive equilibrium is not a center but a weak focus of multiplicity at most two, so the system undergoes Hopf bifurcation and Bautin bifurcation. The normal form of the Bautin bifurcation is also obtained by passing to the associated complex system. Numerical simulations confirm the validity of the theoretical results.
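
For orientation, a Leslie-type system couples logistic prey growth with a predator whose carrying capacity is proportional to prey density. A generic form is shown below; the specific increasing functional response analyzed in the paper is not reproduced here, so p(x) is a placeholder assumed only to satisfy p'(x) > 0.

```latex
\begin{aligned}
\frac{dx}{dt} &= r\,x\left(1 - \frac{x}{K}\right) - p(x)\,y, \\
\frac{dy}{dt} &= s\,y\left(1 - \frac{y}{h\,x}\right), \qquad p'(x) > 0,
\end{aligned}
```

where x is prey density, y is predator density, r and s are intrinsic growth rates, K is the prey carrying capacity, and h·x is the prey-dependent carrying capacity of the predator.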

Keyword:

Hopf bifurcation; Limit cycle; Bautin bifurcation; Predator-prey system; Global stability

Cite:


GB/T 7714: Shang, Zuchong, Qiao, Yuanhua, Duan, Lijuan, et al. Bifurcation analysis and global dynamics in a predator-prey system of Leslie type with an increasing functional response [J]. ECOLOGICAL MODELLING, 2021, 455.
MLA: Shang, Zuchong, et al. "Bifurcation analysis and global dynamics in a predator-prey system of Leslie type with an increasing functional response." ECOLOGICAL MODELLING 455 (2021).
APA: Shang, Zuchong, Qiao, Yuanhua, Duan, Lijuan, Miao, Jun. Bifurcation analysis and global dynamics in a predator-prey system of Leslie type with an increasing functional response. ECOLOGICAL MODELLING, 2021, 455.
Attention-Aware and Semantic-Aware Network for RGB-D Indoor Semantic Segmentation EI
Journal Article | 2021, 44 (2), 275-291 | Chinese Journal of Computers

Abstract:

Semantic segmentation, a research hotspot in computer vision, assigns every pixel to a semantic class. As a fundamental problem in scene understanding, it is widely used in various intelligent tasks. With the success of convolutional neural networks (CNNs) in many computer vision applications, fully convolutional networks (FCNs) have shown great potential on RGB semantic segmentation. The task nevertheless remains challenging due to complex scene types, severe object occlusions, and varying illumination. With the availability of consumer RGB-D sensors such as the RealSense 3D Camera and Microsoft Kinect, RGB images and depth information can now be captured simultaneously. Depth describes 3D geometric information that may be missing in RGB-only images, and it can significantly reduce classification errors and improve segmentation accuracy. To make effective use of both RGB and depth information, an efficient multi-modal fusion method is crucial. By fusion stage, current RGB-D feature fusion methods fall into three types: early, late, and middle fusion. Most previous studies, however, fail to exploit the complementary information between the two modalities: they simply fuse RGB and depth features by equal-weight concatenation or summation, which extracts no complementary information and suppresses modality-specific information. In addition, the semantic information shared by the high-level features of the two modalities is not taken into account, although it is very important for fine-grained semantic segmentation. To solve these problems, this paper presents a novel Attention-aware and Semantic-aware Multi-modal Fusion Network (ASNet) for RGB-D semantic segmentation. The network effectively fuses multi-level RGB-D features through Attention-aware Multi-modal Fusion (AMF) blocks and Semantic-aware Multi-modal Fusion (SMF) blocks. In the AMF blocks, a cross-modal attention mechanism lets RGB and depth features guide and optimize each other through their complementary characteristics, yielding feature representations rich in spatial location information. The SMF blocks model the semantic interdependencies between the modalities by integrating semantically associated feature channels of the RGB and depth features, extracting more precise semantic representations. The two blocks are integrated into a two-branch encoder-decoder architecture that restores image resolution gradually via consecutive up-sampling and combines low-level and high-level features through skip connections to achieve high-resolution prediction. To optimize training, deep supervision is applied over multi-level decoder features. Experimental results on two challenging public RGB-D indoor semantic segmentation datasets, SUN RGB-D and NYU Depth v2, show that the network outperforms existing RGB-D semantic segmentation methods, improving mean accuracy and mean IoU by 1.9% and 1.2%, respectively.
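
The cross-modal attention in the AMF block can be pictured as each modality gating the other. Below is a minimal sketch assuming PyTorch; the channel-attention gates and layer sizes are illustrative stand-ins, not the paper's exact design.

```python
# Minimal sketch: each modality produces a channel-attention vector that
# gates the other modality's features before the two streams are fused.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.rgb_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                      nn.Conv2d(channels, channels, 1),
                                      nn.Sigmoid())
        self.dep_gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                      nn.Conv2d(channels, channels, 1),
                                      nn.Sigmoid())

    def forward(self, rgb, depth):
        rgb_refined = rgb * self.dep_gate(depth)    # depth guides RGB
        dep_refined = depth * self.rgb_gate(rgb)    # RGB guides depth
        return rgb_refined + dep_refined            # fused multi-modal feature

fusion = CrossModalFusion()
fused = fusion(torch.randn(2, 64, 56, 56), torch.randn(2, 64, 56, 56))
```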

Keyword:

Semantics; Convolutional neural networks; Image resolution; Convolution; Computer vision; Decoding; Cameras; Semantic Web

Cite:


GB/T 7714: Duan, Li-Juan, Sun, Qi-Chao, Qiao, Yuan-Hua, et al. Attention-Aware and Semantic-Aware Network for RGB-D Indoor Semantic Segmentation [J]. Chinese Journal of Computers, 2021, 44 (2): 275-291.
MLA: Duan, Li-Juan, et al. "Attention-Aware and Semantic-Aware Network for RGB-D Indoor Semantic Segmentation." Chinese Journal of Computers 44.2 (2021): 275-291.
APA: Duan, Li-Juan, Sun, Qi-Chao, Qiao, Yuan-Hua, Chen, Jun-Cheng, Cui, Guo-Qin. Attention-Aware and Semantic-Aware Network for RGB-D Indoor Semantic Segmentation. Chinese Journal of Computers, 2021, 44 (2), 275-291.
Context-sensitive zero-shot semantic segmentation model based on meta-learning SCIE
Journal Article | 2021, 465, 465-475 | NEUROCOMPUTING
WoS CC Cited Count: 5

Abstract:

Zero-shot semantic segmentation requires models with a strong image understanding ability. Most current solutions are based on direct mapping or generation. These schemes are effective for zero-shot recognition, but they cannot fully transfer the visual dependence between objects in the more complex scenarios of semantic segmentation. More importantly, predictions become seriously biased toward the seen categories of the training set, making it difficult to recognize unseen categories accurately. In view of these two problems, we propose a novel zero-shot semantic segmentation model based on meta-learning. Observing that a purely semantic-space representation has limitations for zero-shot learning, we build on the original semantic migration by first migrating the shared information in the visual space through an added context module, and then migrating it in the joint visual-semantic dual space. To address the bias problem, we improve the adaptability of the model parameters by adjusting the dual-space parameters through meta-learning, so that segmentation succeeds even for new categories without reference samples. Experiments show that our algorithm outperforms the existing best methods for zero-shot segmentation on the Pascal-VOC 2012, Pascal-Context, and Coco-stuff datasets.
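
For context, the semantic-space half of such a model typically projects per-pixel visual features into a word-embedding space and scores them against class embeddings, so an unseen class needs only an embedding. The sketch below, assuming PyTorch and word2vec-sized embeddings, shows only this generic projection step; the paper's context module, dual-space migration, and meta-learned parameter adjustment are not reproduced.

```python
# Generic zero-shot segmentation scoring: project visual features into a
# semantic embedding space, then classify each pixel by cosine similarity
# to (possibly unseen) class embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, n_classes = 300, 21                 # word2vec-sized embeddings
project = nn.Conv2d(64, embed_dim, 1)          # visual -> semantic projection
class_embeds = F.normalize(torch.randn(n_classes, embed_dim), dim=1)

feats = torch.randn(2, 64, 32, 32)             # backbone features
sem = F.normalize(project(feats), dim=1)       # (B, 300, H, W), unit norm
logits = torch.einsum("bdhw,cd->bchw", sem, class_embeds)  # cosine scores
pred = logits.argmax(dim=1)                    # per-pixel class map
```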

Keyword:

Semantic-segmentation; Zero-shot learning; Context; Meta-learning

Cite:


GB/T 7714: Wang, Wenjian, Duan, Lijuan, En, Qing, et al. Context-sensitive zero-shot semantic segmentation model based on meta-learning [J]. NEUROCOMPUTING, 2021, 465: 465-475.
MLA: Wang, Wenjian, et al. "Context-sensitive zero-shot semantic segmentation model based on meta-learning." NEUROCOMPUTING 465 (2021): 465-475.
APA: Wang, Wenjian, Duan, Lijuan, En, Qing, Zhang, Baochang. Context-sensitive zero-shot semantic segmentation model based on meta-learning. NEUROCOMPUTING, 2021, 465, 465-475.
The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface EI
Journal Article | 2021, 9, 67405-67416 | IEEE Access

Abstract:

A breakthrough in electroencephalogram (EEG) signal classification for brain-computer interfaces (BCI) would set off another technological revolution in human-computer interaction. Because collected EEG is a nonstationary signal with strong randomness, effective feature extraction and data mining techniques are urgently required for EEG classification in BCI. In this paper, new bionic whale optimization algorithms (WOA) are proposed to drive improved extreme learning machine (ELM) algorithms for EEG classification in BCI. Two improved WOA-ELM algorithms are designed to compensate for the random weight initialization of the basic ELM. First, the top several individuals are selected and vote on decisions, avoiding misjudgment based on a single best individual. Second, the initial connection weights and biases between the input-layer and hidden-layer nodes are optimized by WOA through the bubble-net attacking strategy (BNAS) and the shrinking encircling mechanism (SEM), and different regularization mechanisms are introduced in different layers to generate an appropriately sparse weight matrix, improving the generalization performance of the algorithm. As the comparative results show, the average accuracy of the proposed method reaches 93.67%, better than other methods on the BCI dataset.
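
The division of labor in WOA-ELM is that ELM's output weights have a closed-form solution, so the whale optimizer only searches over the input-side weights and biases that basic ELM would set randomly. The sketch below, assuming NumPy, illustrates that split on a toy objective; the population size, iteration count, multi-individual voting, and per-layer regularization described above are omitted or assumed.

```python
# Compact sketch: a whale-optimization loop (shrinking encircling + spiral
# "bubble-net" moves) searches ELM input weights/biases; output weights are
# solved in closed form via the pseudoinverse inside the fitness function.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))                    # toy EEG feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)
n_hidden = 20

def elm_error(params):
    W = params[: 8 * n_hidden].reshape(8, n_hidden)  # input weights
    b = params[8 * n_hidden:]                        # hidden biases
    H = np.tanh(X @ W + b)                           # hidden activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return np.mean((H @ beta - y) ** 2)

dim, pop, iters = 8 * n_hidden + n_hidden, 15, 30
whales = rng.standard_normal((pop, dim))
best = min(whales, key=elm_error).copy()

for t in range(iters):
    a = 2 - 2 * t / iters                            # shrinking encircling factor
    for i in range(pop):
        A = a * (2 * rng.random(dim) - 1)
        C = 2 * rng.random(dim)
        if rng.random() < 0.5:                       # shrinking encircling (SEM)
            whales[i] = best - A * np.abs(C * best - whales[i])
        else:                                        # spiral bubble-net move (BNAS)
            l = rng.uniform(-1, 1)
            D = np.abs(best - whales[i])
            whales[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
    best = min(*whales, best, key=elm_error).copy()

print("best training MSE:", elm_error(best))
```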

Keyword:

Bionics; Machine learning; Electroencephalography; Brain computer interface; Biomedical signal processing; Data mining; Human computer interaction

Cite:


GB/T 7714: Lian, Zhaoyang, Duan, Lijuan, Qiao, Yuanhua, et al. The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface [J]. IEEE Access, 2021, 9: 67405-67416.
MLA: Lian, Zhaoyang, et al. "The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface." IEEE Access 9 (2021): 67405-67416.
APA: Lian, Zhaoyang, Duan, Lijuan, Qiao, Yuanhua, Chen, Juncheng, Miao, Jun, Li, Mingai. The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface. IEEE Access, 2021, 9, 67405-67416.
基于注意力感知和语义感知的RGB-D室内图像语义分割算法 (Attention-Aware and Semantic-Aware Network for RGB-D Indoor Semantic Segmentation) CSCD
Journal Article | 2021, 44 (02), 275-291 | 计算机学报 (Chinese Journal of Computers)
CNKI Cited Count: 3

Abstract:

In recent years, fully convolutional neural networks have substantially improved the accuracy of semantic segmentation. However, owing to the complexity of indoor environments, indoor scene semantic segmentation remains a challenging problem. With the advent of depth sensors, researchers have begun to use depth information to improve segmentation results. Most previous studies simply fuse RGB features and depth features by equal-weight concatenation or summation, failing to fully exploit the complementary information between the two. This paper proposes ASNet (Attention-aware and Semantic-aware Network), which effectively fuses multi-level RGB and depth features by introducing an attention-aware multi-modal fusion module and a semantic-aware multi-modal fusion module. In the attention-aware multi-modal fusion module, we design...

Keyword:

Deep learning; RGB-D semantic segmentation; Convolutional neural network; Attention model; Multi-modal fusion

Cite:


GB/T 7714: 段立娟, 孙启超, 乔元华, et al. 基于注意力感知和语义感知的RGB-D室内图像语义分割算法 [J]. 计算机学报, 2021, 44 (02): 275-291.
MLA: 段立娟, et al. "基于注意力感知和语义感知的RGB-D室内图像语义分割算法." 计算机学报 44.02 (2021): 275-291.
APA: 段立娟, 孙启超, 乔元华, 陈军成, 崔国勤. 基于注意力感知和语义感知的RGB-D室内图像语义分割算法. 计算机学报, 2021, 44 (02), 275-291.
Convolution Tells Where to Look CPCI-S
Journal Article | 2021, 13022, 16-28 | PATTERN RECOGNITION AND COMPUTER VISION, PT IV

Abstract:

Many attention models have been introduced to boost the representational power of convolutional neural networks (CNNs). Most are self-attention models that generate an attention mask from the current features, such as spatial attention and channel attention models. However, these models may not achieve good results when the current features are the low-level features of a CNN. In this work, we propose a new lightweight attention unit, the feature difference (FD) model, which uses the difference between two feature maps to generate the attention mask. The FD module can be integrated into most state-of-the-art CNNs, such as ResNets and VGG, simply by adding shortcut connections, introducing no additional parameters or layers. Extensive experiments show that the FD model improves baseline performance on four benchmarks: CIFAR10, CIFAR100, ImageNet-1K, and PASCAL VOC. Notably, ResNet44 with the FD model (6.10% error) achieves better results than ResNet56 (6.24% error) while having 29% fewer parameters.
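
Because the FD mask comes from a difference of existing feature maps, it really is parameter-free. A minimal sketch assuming PyTorch is shown below; the sigmoid squashing and the resize-then-subtract handling of mismatched resolutions are assumptions about details the abstract leaves open.

```python
# Parameter-free sketch: the attention mask is the sigmoid-squashed
# difference between two feature maps from adjacent stages, applied to the
# current features through a shortcut connection.
import torch
import torch.nn.functional as F

def fd_attention(feat_prev, feat_curr):
    # Resize the earlier map if the spatial sizes differ, then use the
    # difference between the two maps as a mask over the current features.
    if feat_prev.shape[-2:] != feat_curr.shape[-2:]:
        feat_prev = F.interpolate(feat_prev, size=feat_curr.shape[-2:])
    mask = torch.sigmoid(feat_curr - feat_prev)      # no learnable parameters
    return feat_curr * mask + feat_curr              # masked path + shortcut

out = fd_attention(torch.randn(2, 64, 28, 28), torch.randn(2, 64, 28, 28))
```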

Keyword:

Feature representation; Attention model; Image classification

Cite:


GB/T 7714: Xu, Fan, Duan, Lijuan, Qiao, Yuanhua, et al. Convolution Tells Where to Look [J]. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022: 16-28.
MLA: Xu, Fan, et al. "Convolution Tells Where to Look." PATTERN RECOGNITION AND COMPUTER VISION, PT IV 13022 (2021): 16-28.
APA: Xu, Fan, Duan, Lijuan, Qiao, Yuanhua, Chen, Ji. Convolution Tells Where to Look. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022, 16-28.