
Query:

Scholar name: 段立娟 (Duan, Lijuan)

Total: 23 pages of results
Multi-task Self-supervised Few-Shot Detection CPCI-S
Journal article | 2024, 14436, 107-119 | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII

Abstract :

Few-shot object detection aims to detect novel objects from only a few training samples, but a handful of samples can hardly cover the variability of a new class in a deep model. To address this issue, we use self-supervision to expand the coverage of the samples and provide more observation angles for new classes. In this paper, we propose a multi-task approach that combines self-supervision with few-shot learning to exploit the complementarity of the two domains. Specifically, we use self-supervision as an auxiliary task to improve the detection performance of the main few-shot learning task. Moreover, to make self-supervision better suited to few-shot object detection, we introduce a denoising module, which expands the positive and negative samples, and a team module for precise positioning. The denoising module expands the positive and negative samples and accelerates model convergence through contrastive denoising training; the team module exploits location constraints for precise localization, improving detection accuracy. Experiments on the PASCAL VOC and COCO datasets demonstrate the effectiveness of our method, achieving promising results, and highlight the potential of combining self-supervision with few-shot learning to improve object detection when annotated data is limited.
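
The multi-task objective can be pictured as a weighted sum of the detection loss and a self-supervised agreement term. The sketch below is a minimal illustration under that reading; `ssl_head`, the cosine-agreement term, and the 0.5 weight are hypothetical stand-ins, not the paper's actual modules.

```python
import torch
import torch.nn.functional as F

# Hypothetical multi-task objective: main few-shot detection loss plus a
# self-supervised auxiliary term computed from two augmented views.
def multi_task_loss(det_loss, ssl_head, views_a, views_b, ssl_weight=0.5):
    z1 = ssl_head(views_a)   # embeddings of the first augmented view
    z2 = ssl_head(views_b)   # embeddings of the second augmented view
    # agreement term standing in for the paper's contrastive denoising training
    ssl_loss = 1.0 - F.cosine_similarity(z1, z2, dim=1).mean()
    return det_loss + ssl_weight * ssl_loss
```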

Keyword :

Few-shot object detection; End-to-End Detector; Self-supervised learning

Cite:


GB/T 7714: Zhang, Guangyong, Duan, Lijuan, Wang, Wenjian, et al. Multi-task Self-supervised Few-Shot Detection [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436: 107-119.
MLA: Zhang, Guangyong, et al. "Multi-task Self-supervised Few-Shot Detection." PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII 14436 (2024): 107-119.
APA: Zhang, Guangyong, Duan, Lijuan, Wang, Wenjian, Gong, Zhi, Ma, Bian. Multi-task Self-supervised Few-Shot Detection. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436, 107-119.

Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer CPCI-S
Journal article | 2022, 7055-7064 | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022)
WoS CC Cited Count: 9

Abstract :

Few-shot semantic segmentation aims to predict pixel-level categories using only a few labeled samples. Existing few-shot methods focus primarily on categories sampled from the same distribution; nevertheless, this assumption cannot always be guaranteed, and the resulting domain shift significantly reduces the performance of few-shot learning. To remedy this, we propose an interesting and challenging cross-domain few-shot semantic segmentation task, in which the training and test tasks are performed on different domains. Specifically, we first propose a meta-memory bank that improves the generalization of the segmentation network by bridging the domain gap between source and target domains. The meta-memory stores intra-domain style information from source-domain instances and transfers it to target samples. Subsequently, we adopt a new contrastive learning strategy to explore the knowledge of different categories during training, obtaining the negative and positive pairs from the proposed memory-based style augmentation. Comprehensive experiments demonstrate that our method achieves promising results on cross-domain few-shot semantic segmentation across the COCO-20(i), PASCAL-5(i), FSS-1000, and SUIM datasets.
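
The style-transfer step can be read as re-stylizing target features with statistics retrieved from the memory bank. Below is a minimal sketch under that assumption; the channel-wise mean/std representation and the function names are hypothetical, not the paper's exact design.

```python
import torch

# Re-stylize target-domain features with channel-wise statistics retrieved
# from a (hypothetical) meta-memory bank of source-domain styles.
def transfer_style(target_feat, memory_mean, memory_std, eps=1e-5):
    # target_feat: (N, C, H, W); memory_mean, memory_std: (C,)
    mu = target_feat.mean(dim=(2, 3), keepdim=True)
    sigma = target_feat.std(dim=(2, 3), keepdim=True) + eps
    normalized = (target_feat - mu) / sigma                # strip target style
    return (normalized * memory_std.view(1, -1, 1, 1)
            + memory_mean.view(1, -1, 1, 1))               # apply source style
```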

Cite:


GB/T 7714: Wang, Wenjian, Duan, Lijuan, Wang, Yuxi, et al. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer [J]. CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022: 7055-7064.
MLA: Wang, Wenjian, et al. "Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer." CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022): 7055-7064.
APA: Wang, Wenjian, Duan, Lijuan, Wang, Yuxi, En, Qing, Fan, Junsong, Zhang, Zhaoxiang. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer. CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, 7055-7064.

基于注意力感知和语义感知的RGB-D室内图像语义分割算法 (An Attention-Aware and Semantic-Aware Semantic Segmentation Algorithm for RGB-D Indoor Images) CSCD
Journal article | 2021, 44(02), 275-291 | 计算机学报 (Chinese Journal of Computers)
CNKI Cited Count: 3

Abstract :

In recent years, fully convolutional networks have markedly improved the accuracy of semantic segmentation. However, owing to the complexity of indoor environments, semantic segmentation of indoor scenes remains a challenging problem. With the advent of depth sensors, researchers have begun to exploit depth information to improve segmentation results. Most previous work simply fuses RGB features and depth features through equally weighted concatenation or summation, failing to fully exploit the complementary information between the two. This paper proposes ASNet (Attention-aware and Semantic-aware Network), which effectively fuses multi-level RGB and depth features by introducing an attention-aware multimodal fusion module and a semantic-aware multimodal fusion module. In the attention-aware multimodal fusion module, we design...
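
As a rough illustration of attention-weighted multimodal fusion (in contrast to the equal-weight concatenation or summation criticized above), the sketch below gates the two modalities with a learned per-channel weight; the gating design is an assumption for illustration, not ASNet's actual module.

```python
import torch
import torch.nn as nn

# Gated RGB-D fusion: a per-channel attention weight decides how much the
# depth features complement the RGB features.
class AttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),               # global context
            nn.Conv2d(2 * channels, channels, 1),  # joint RGB-D descriptor
            nn.Sigmoid(),                          # per-channel weight in (0, 1)
        )

    def forward(self, rgb_feat, depth_feat):
        w = self.gate(torch.cat([rgb_feat, depth_feat], dim=1))
        # attention-weighted sum instead of an equal-weight sum
        return w * rgb_feat + (1 - w) * depth_feat
```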

Keyword :

Deep learning; RGB-D semantic segmentation; Convolutional neural networks; Attention models; Multimodal fusion

Cite:


GB/T 7714: 段立娟, 孙启超, 乔元华, et al. 基于注意力感知和语义感知的RGB-D室内图像语义分割算法 [J]. 计算机学报, 2021, 44(02): 275-291.
MLA: 段立娟, et al. "基于注意力感知和语义感知的RGB-D室内图像语义分割算法." 计算机学报 44.02 (2021): 275-291.
APA: 段立娟, 孙启超, 乔元华, 陈军成, 崔国勤. 基于注意力感知和语义感知的RGB-D室内图像语义分割算法. 计算机学报, 2021, 44(02), 275-291.

Context-sensitive zero-shot semantic segmentation model based on meta-learning SCIE
Journal article | 2021, 465, 465-475 | NEUROCOMPUTING
WoS CC Cited Count: 5

Abstract :

Zero-shot semantic segmentation requires models with a strong image-understanding ability. The majority of current solutions are based on direct mapping or generation. These schemes are effective for zero-shot recognition, but they cannot fully transfer the visual dependence between objects in the more complex scenarios of semantic segmentation. More importantly, the predictions become seriously biased toward the seen categories in the training set, which makes it difficult to accurately recognize unseen categories. In view of these two problems, we propose a novel zero-shot semantic segmentation model based on meta-learning. We observe that a pure semantic-space representation has certain limitations for zero-shot learning. Therefore, on top of the original semantic transfer, we first transfer the shared information in the visual space by adding a context module, and then transfer it in the joint visual-semantic dual space. At the same time, to address the bias problem, we improve the adaptability of the model parameters by adjusting the dual-space parameters through meta-learning, so that segmentation succeeds even for new categories without reference samples. Experiments show that our algorithm outperforms the best existing methods for zero-shot segmentation on the Pascal-VOC 2012, Pascal-Context and Coco-stuff datasets.
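
To make the shared-space idea concrete, the sketch below shows the generic zero-shot segmentation step of classifying pixel embeddings against class word embeddings in a common space; the projection and the names are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn.functional as F

# Classify every pixel embedding against class word embeddings in a shared
# space, so unseen classes require no visual training samples.
def zero_shot_segment(pixel_feats, class_embeds, proj):
    # pixel_feats: (N, C, H, W); class_embeds: (K, D); proj maps C -> D
    n, c, h, w = pixel_feats.shape
    feats = proj(pixel_feats.permute(0, 2, 3, 1).reshape(-1, c))
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(class_embeds, dim=1)
    logits = feats @ protos.t()                # cosine similarity per class
    return logits.argmax(dim=1).view(n, h, w)  # per-pixel class labels
```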

Keyword :

Semantic segmentation; Zero-shot learning; Context; Meta-learning

Cite:


GB/T 7714: Wang, Wenjian, Duan, Lijuan, En, Qing, et al. Context-sensitive zero-shot semantic segmentation model based on meta-learning [J]. NEUROCOMPUTING, 2021, 465: 465-475.
MLA: Wang, Wenjian, et al. "Context-sensitive zero-shot semantic segmentation model based on meta-learning." NEUROCOMPUTING 465 (2021): 465-475.
APA: Wang, Wenjian, Duan, Lijuan, En, Qing, Zhang, Baochang. Context-sensitive zero-shot semantic segmentation model based on meta-learning. NEUROCOMPUTING, 2021, 465, 465-475.

Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting SCIE
Journal article | 2021, 190, 976-1002 | MATHEMATICS AND COMPUTERS IN SIMULATION
WoS CC Cited Count: 1

Abstract :

In this paper, a Gause-type predator-prey system with constant-yield prey harvesting and a monotonically increasing functional response is proposed and investigated. We focus on the influence of the harvesting rate on the predator-prey system. First, the equilibria corresponding to different situations are investigated, together with their stability. Bifurcations are then explored at the nonhyperbolic equilibria, and we give conditions for the occurrence of two saddle-node bifurcations by analyzing the emergence, coincidence and annihilation of equilibria. We calculate the Lyapunov number and focal values to determine the stability and the number of limit cycles generated by supercritical, subcritical and degenerate Hopf bifurcations. Furthermore, the system is unfolded to explore the repelling and attracting Bogdanov-Takens bifurcations by perturbing two bifurcation parameters near the cusp. It is shown that there exists one limit cycle, one homoclinic loop, or two limit cycles for different parameter values. The system is therefore sensitive to both the constant-yield prey harvesting rate and the initial values of the species. Finally, we run numerical simulations to verify the theoretical analysis.
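
For orientation, a generic Gause-type system with a constant-yield harvesting term can be simulated as below; the functional response and all parameter values are placeholders and do not reproduce the paper's exact model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic Gause-type predator-prey system with constant-yield prey
# harvesting h and a monotone increasing functional response p(x).
def model(t, state, r=1.0, K=10.0, a=1.0, b=1.0, c=0.8, d=0.5, h=0.2):
    x, y = state                           # prey, predator densities
    p = a * x / (1.0 + b * x)              # increasing, saturating response
    dx = r * x * (1 - x / K) - p * y - h   # logistic growth minus predation and harvest
    dy = c * p * y - d * y                 # predator growth and mortality
    return [dx, dy]

sol = solve_ivp(model, (0.0, 200.0), [2.0, 1.0])
print(sol.y[:, -1])  # long-run densities for this initial condition
```

Because the harvesting term is constant rather than proportional, trajectories starting from different initial densities can end in qualitatively different regimes, which is the sensitivity the abstract describes.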

Keyword :

Degenerate Hopf bifurcation; Harvesting; Bogdanov-Takens bifurcation; Predator-prey system; Limit cycle

Cite:


GB/T 7714: Shang, Zuchong, Qiao, Yuanhua, Duan, Lijuan, et al. Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting [J]. MATHEMATICS AND COMPUTERS IN SIMULATION, 2021, 190: 976-1002.
MLA: Shang, Zuchong, et al. "Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting." MATHEMATICS AND COMPUTERS IN SIMULATION 190 (2021): 976-1002.
APA: Shang, Zuchong, Qiao, Yuanhua, Duan, Lijuan, Miao, Jun. Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting. MATHEMATICS AND COMPUTERS IN SIMULATION, 2021, 190, 976-1002.

Convolution Tells Where to Look CPCI-S
Journal article | 2021, 13022, 16-28 | PATTERN RECOGNITION AND COMPUTER VISION, PT IV

Abstract :

Many attention models have been introduced to boost the representational power of convolutional neural networks (CNNs). Most of them are self-attention models that generate an attention mask from the current features, such as spatial attention and channel attention. However, these models may not perform well when the current features are low-level CNN features. In this work, we propose a new lightweight attention unit, the feature difference (FD) module, which utilizes the difference between two feature maps to generate the attention mask. The FD module can be integrated into most state-of-the-art CNNs, such as ResNets and VGG, simply by adding shortcut connections, introducing no additional parameters or layers. Extensive experiments show that the FD module improves the baselines on four benchmarks: CIFAR10, CIFAR100, ImageNet-1K, and PASCAL VOC. Notably, ResNet44 with the FD module (6.10% error) achieves better results than ResNet56 (6.24% error) while having 29% fewer parameters.
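
A minimal sketch of the idea, assuming the mask is a sigmoid of the difference between the current feature map and an earlier one delivered by a shortcut connection (channels assumed to match); the exact wiring in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Parameter-free feature-difference attention: the mask comes from the
# difference between the current feature map and an earlier one.
class FeatureDifference(nn.Module):
    def forward(self, feat, earlier_feat):
        if earlier_feat.shape[-2:] != feat.shape[-2:]:
            # resize the shortcut features to the current resolution
            earlier_feat = F.adaptive_avg_pool2d(earlier_feat, feat.shape[-2:])
        mask = torch.sigmoid(feat - earlier_feat)  # no learnable parameters
        return feat * (1 + mask)                   # re-weight, keep a residual path
```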

Keyword :

Feature representation; Attention model; Image classification

Cite:


GB/T 7714: Xu, Fan, Duan, Lijuan, Qiao, Yuanhua, et al. Convolution Tells Where to Look [J]. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022: 16-28.
MLA: Xu, Fan, et al. "Convolution Tells Where to Look." PATTERN RECOGNITION AND COMPUTER VISION, PT IV 13022 (2021): 16-28.
APA: Xu, Fan, Duan, Lijuan, Qiao, Yuanhua, Chen, Ji. Convolution Tells Where to Look. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022, 16-28.

TMD-FS: Improving Few-Shot Object Detection with Transformer Multi-modal Directing CPCI-S
Journal article | 2021, 13022, 447-458 | PATTERN RECOGNITION AND COMPUTER VISION, PT IV

Abstract :

Few-shot object detection (FSOD) is a vital and challenging task whose aim is to detect unseen object classes from a few annotated samples. However, the discriminative semantic information present in a new category is not well represented in most existing approaches. To address this issue, we propose a new few-shot object detection model named TMD-FS with Transformer multi-modal directing, in which the lost discriminative information is mined through multi-modal semantic alignment. Specifically, we convert the multi-modal information into a mixed sequence and map the visual and semantic information into a shared embedding space. Moreover, we propose a Semantic Visual Transformer (SVT) module to incorporate and align the visual and semantic embeddings. Finally, the distance between the visual and semantic embeddings is minimized under an attention loss. Experimental results demonstrate that the model's performance improves significantly with few samples, and it achieves state-of-the-art performance as the number of samples increases.
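
As a hedged illustration of visual-semantic alignment, the sketch below minimizes the distance between paired visual and semantic embeddings with a contrastive-style loss; it is a stand-in for, not a reproduction of, the SVT module and attention loss.

```python
import torch
import torch.nn.functional as F

# Contrastive-style alignment: pull each visual embedding toward its own
# class semantics and away from the other classes in the batch.
def alignment_loss(visual_embed, semantic_embed, temperature=0.07):
    v = F.normalize(visual_embed, dim=1)    # (N, D)
    s = F.normalize(semantic_embed, dim=1)  # (N, D), row i pairs with v[i]
    logits = v @ s.t() / temperature
    labels = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, labels)
```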

Keyword :

Few-shot object detection; Transformer; Multi-modal semantic fusion

Cite:


GB/T 7714: Yuan, Ying, Duan, Lijuan, Wang, Wenjian, et al. TMD-FS: Improving Few-Shot Object Detection with Transformer Multi-modal Directing [J]. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022: 447-458.
MLA: Yuan, Ying, et al. "TMD-FS: Improving Few-Shot Object Detection with Transformer Multi-modal Directing." PATTERN RECOGNITION AND COMPUTER VISION, PT IV 13022 (2021): 447-458.
APA: Yuan, Ying, Duan, Lijuan, Wang, Wenjian, En, Qing. TMD-FS: Improving Few-Shot Object Detection with Transformer Multi-modal Directing. PATTERN RECOGNITION AND COMPUTER VISION, PT IV, 2021, 13022, 447-458.

The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface EI
Journal article | 2021, 9, 67405-67416 | IEEE Access

Abstract :

A breakthrough in electroencephalogram (EEG) signal classification for brain computer interfaces (BCI) would set off another technological revolution in human computer interaction. Because the collected EEG is a nonstationary signal with strong randomness, effective feature extraction and data mining techniques are urgently required for EEG classification in BCI. In this paper, new bionic whale optimization algorithms (WOA) are proposed to enhance improved extreme learning machine (ELM) algorithms for EEG classification in BCI. Two improved WOA-ELM algorithms are designed to compensate for the deficiency of random weight initialization in the basic ELM. First, the top several best individuals are selected and vote on decisions, avoiding misjudgment based on a single best individual. Second, the initial connection weights and biases between the input-layer and hidden-layer nodes are optimized by WOA through the bubble-net attacking strategy (BNAS) and the shrinking encircling mechanism (SEM), and different regularization mechanisms are introduced in different layers to generate an appropriately sparse weight matrix, improving the generalization performance of the algorithm. As the comparison results show, the average accuracy of the proposed method reaches 93.67%, better than that of other methods on the BCI dataset.
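
For context, the ELM core that WOA is said to optimize looks roughly like this: random input weights and biases, hidden-layer activations, and closed-form output weights via the pseudo-inverse. In the paper, WOA replaces the random draw of W and b with an optimized one; that search itself is omitted here, so this is a sketch of the baseline only.

```python
import numpy as np

# Core ELM: random hidden-layer parameters, closed-form output weights.
def elm_train(X, T, hidden=64, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))  # input weights (WOA would tune these)
    b = rng.normal(size=hidden)                # hidden biases (WOA would tune these)
    H = np.tanh(X @ W + b)                     # hidden-layer activations
    beta = np.linalg.pinv(H) @ T               # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```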

Keyword :

Bionics; Machine learning; Electroencephalography; Brain computer interface; Biomedical signal processing; Data mining; Human computer interaction

Cite:


GB/T 7714: Lian, Zhaoyang, Duan, Lijuan, Qiao, Yuanhua, et al. The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface [J]. IEEE Access, 2021, 9: 67405-67416.
MLA: Lian, Zhaoyang, et al. "The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface." IEEE Access 9 (2021): 67405-67416.
APA: Lian, Zhaoyang, Duan, Lijuan, Qiao, Yuanhua, Chen, Juncheng, Miao, Jun, Li, Mingai. The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface. IEEE Access, 2021, 9, 67405-67416.

A deep multimodal feature learning network for RGB-D salient object detection EI
Journal article | 2021, 92 | Computers and Electrical Engineering

Abstract :

In this paper, we propose a deep multimodal feature learning (DMFL) network for RGB-D salient object detection. Color and depth features are first extracted from low-level to high-level features using a CNN. The high-layer features are then shared and concatenated to construct a joint feature representation of the two modalities. The fused features are embedded into a high-dimensional metric space to express the salient and non-salient parts, and a new objective function, consisting of a cross-entropy loss and a metric loss, is proposed to optimize the model. Both pixel-level and attribute-level discriminative features are learned for semantic grouping to detect the salient objects. Experimental results show that the proposed model achieves promising performance, improving on conventional methods by about 1% to 2%.
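
A minimal sketch of the combined objective, assuming a pixel-wise cross-entropy term plus a simple margin-based metric term over salient/non-salient embeddings; the margin, weighting, and mask handling are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

# Joint objective: pixel-wise cross-entropy plus a margin-based metric term
# that separates salient from non-salient embeddings in the metric space.
def dmfl_loss(logits, labels, embeds, salient_mask, margin=1.0, alpha=0.5):
    # logits: (M, 2); labels: (M,); embeds: (M, D); salient_mask: (M,) bool
    ce = F.cross_entropy(logits, labels)
    pos = embeds[salient_mask].mean(dim=0)   # mean salient embedding
    neg = embeds[~salient_mask].mean(dim=0)  # mean non-salient embedding
    metric = F.relu(margin - torch.norm(pos - neg))
    return ce + alpha * metric
```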

Keyword :

Object recognition; Set theory; Topology; Object detection; Deep learning; Feature extraction

Cite:


GB/T 7714: Liang, Fangfang, Duan, Lijuan, Ma, Wei, et al. A deep multimodal feature learning network for RGB-D salient object detection [J]. Computers and Electrical Engineering, 2021, 92.
MLA: Liang, Fangfang, et al. "A deep multimodal feature learning network for RGB-D salient object detection." Computers and Electrical Engineering 92 (2021).
APA: Liang, Fangfang, Duan, Lijuan, Ma, Wei, Qiao, Yuanhua, Miao, Jun. A deep multimodal feature learning network for RGB-D salient object detection. Computers and Electrical Engineering, 2021, 92.