Query:
Scholar name: 段立娟 (Duan Lijuan)
Abstract :
Few-shot object detection involves detecting novel objects from only a few training samples. However, such limited samples can hardly cover the bias of a new class in the deep model. To address this issue, we use self-supervision to expand the coverage of the samples and provide more observation angles for new classes. In this paper, we propose a multi-task approach that combines self-supervision with few-shot learning to exploit the complementarity of the two domains. Specifically, self-supervision serves as an auxiliary task that improves the detection performance of the main few-shot learning task. Moreover, to make self-supervision better suited to few-shot object detection, we introduce a denoising module that expands the positive and negative samples, and a team module for precise positioning. The denoising module expands the positive and negative samples and accelerates model convergence using contrastive denoising training; the team module applies location constraints for precise localization, improving detection accuracy. Experimental results on the PASCAL VOC and COCO datasets demonstrate the effectiveness of our method on the few-shot object detection task, achieving promising results, and highlight the potential of combining self-supervision with few-shot learning to improve object detection when annotated data is limited.
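The multi-task setup this abstract describes can be sketched as a weighted sum of the main detection loss and a contrastive self-supervised auxiliary loss. The InfoNCE-style form, the function names, and the auxiliary weight below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over L2-normalised feature vectors."""
    def unit(v):
        return v / np.linalg.norm(v)
    anchor, positive = unit(anchor), unit(positive)
    negatives = np.stack([unit(n) for n in negatives])
    pos_sim = anchor @ positive / temperature    # similarity to the positive sample
    neg_sims = negatives @ anchor / temperature  # similarities to negative samples
    logits = np.concatenate([[pos_sim], neg_sims])
    # Cross-entropy with the positive pair as the target class (log-sum-exp form).
    return -pos_sim + np.log(np.sum(np.exp(logits)))

def multitask_loss(det_loss, ssl_loss, aux_weight=0.5):
    """Main few-shot detection loss plus a down-weighted self-supervised term."""
    return det_loss + aux_weight * ssl_loss
```

The auxiliary loss is kept small relative to the detection loss so the self-supervised task regularises, rather than dominates, the shared backbone.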
Keyword :
Few-shot object detection; End-to-End Detector; Self-supervised learning
Cite:
GB/T 7714 | Zhang, Guangyong , Duan, Lijuan , Wang, Wenjian et al. Multi-task Self-supervised Few-Shot Detection [J]. | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII , 2024 , 14436 : 107-119 . |
MLA | Zhang, Guangyong et al. "Multi-task Self-supervised Few-Shot Detection" . | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII 14436 (2024) : 107-119 . |
APA | Zhang, Guangyong , Duan, Lijuan , Wang, Wenjian , Gong, Zhi , Ma, Bian . Multi-task Self-supervised Few-Shot Detection . | PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII , 2024 , 14436 , 107-119 . |
Abstract :
Few-shot semantic segmentation intends to predict pixel-level categories using only a few labeled samples. Existing few-shot methods focus primarily on categories sampled from the same distribution; nevertheless, this assumption cannot always be ensured, and the resulting domain shift significantly reduces the performance of few-shot learning. To remedy this problem, we propose an interesting and challenging cross-domain few-shot semantic segmentation task, where the training and test tasks are drawn from different domains. Specifically, we first propose a meta-memory bank to improve the generalization of the segmentation network by bridging the domain gap between source and target domains. The meta-memory stores the intra-domain style information from source domain instances and transfers it to target samples. Subsequently, we adopt a new contrastive learning strategy to explore the knowledge of different categories during the training stage. The negative and positive pairs are obtained from the proposed memory-based style augmentation. Comprehensive experiments demonstrate that our proposed method achieves promising results on cross-domain few-shot semantic segmentation tasks on the COCO-20(i), PASCAL-5(i), FSS-1000, and SUIM datasets.
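The style transfer the abstract describes (storing source-domain style information in a memory and imposing it on target features) resembles an AdaIN-style statistic swap. This sketch assumes channel-wise mean and standard deviation are the "style", which is an illustrative choice rather than the paper's exact formulation.

```python
import numpy as np

def transfer_style(feat, mem_mean, mem_std, eps=1e-5):
    """Whiten a feature map with its own channel statistics, then
    re-colour it with statistics drawn from the memory bank.
    feat: (C, H, W); mem_mean, mem_std: (C,)"""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    normalised = (feat - mu) / (sigma + eps)
    return normalised * mem_std[:, None, None] + mem_mean[:, None, None]
```

After the swap, the feature map carries the memorised domain's first- and second-order channel statistics while keeping its spatial content.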
Cite:
GB/T 7714 | Wang, Wenjian , Duan, Lijuan , Wang, Yuxi et al. Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer [J]. | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 : 7055-7064 . |
MLA | Wang, Wenjian et al. "Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer" . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) (2022) : 7055-7064 . |
APA | Wang, Wenjian , Duan, Lijuan , Wang, Yuxi , En, Qing , Fan, Junsong , Zhang, Zhaoxiang . Remember the Difference: Cross-Domain Few-Shot Semantic Segmentation via Meta-Memory Transfer . | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) , 2022 , 7055-7064 . |
Abstract :
Sleep staging is one of the important methods for the diagnosis and treatment of sleep diseases. However, manual staging is laborious and time-consuming, so computer-assisted sleep staging is necessary. Most existing sleep staging research using hand-engineered features relies on prior knowledge of sleep analysis, and usually a single-channel electroencephalogram (EEG) is used for the sleep staging task. Prior knowledge is not always available, and a single-channel EEG signal cannot fully represent the patient's sleeping physiological state. To tackle these two problems, we propose an automatic sleep staging network model based on data adaptation and multimodal feature fusion using EEG and electrooculogram (EOG) signals. A 3D-CNN is used to extract the time-frequency features of EEG at different time scales, and an LSTM is used to learn the frequency evolution of EOG. The nonlinear relationship between the high-layer features of EEG and EOG is fitted by a deep probabilistic network. Experiments on SLEEP-EDF and a private dataset show that the proposed model achieves state-of-the-art performance. Moreover, the prediction results are in accordance with expert diagnosis.
Keyword :
sleep stage classification; deep learning; HHT; multimodal physiological signals; fusion networks
Cite:
GB/T 7714 | Duan, Lijuan , Li, Mengying , Wang, Changming et al. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion [J]. | FRONTIERS IN HUMAN NEUROSCIENCE , 2021 , 15 . |
MLA | Duan, Lijuan et al. "A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion" . | FRONTIERS IN HUMAN NEUROSCIENCE 15 (2021) . |
APA | Duan, Lijuan , Li, Mengying , Wang, Changming , Qiao, Yuanhua , Wang, Zeyu , Sha, Sha et al. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion . | FRONTIERS IN HUMAN NEUROSCIENCE , 2021 , 15 . |
Abstract :
An attention-aware and semantic-aware semantic segmentation algorithm for RGB-D indoor images (基于注意力感知和语义感知的RGB-D室内图像语义分割算法)
Keyword :
deep learning; multimodal fusion; attention model; convolutional neural network; RGB-D semantic segmentation
Cite:
GB/T 7714 | 段立娟 , 孙启超 , 乔元华 et al. 基于注意力感知和语义感知的RGB-D室内图像语义分割算法 [J]. | 计算机学报 , 2021 , 44 (2) : 275-291 . |
MLA | 段立娟 et al. "基于注意力感知和语义感知的RGB-D室内图像语义分割算法" . | 计算机学报 44 . 2 (2021) : 275-291 . |
APA | 段立娟 , 孙启超 , 乔元华 , 陈军成 , 崔国勤 . 基于注意力感知和语义感知的RGB-D室内图像语义分割算法 . | 计算机学报 , 2021 , 44 (2) , 275-291 . |
Abstract :
In recent years, fully convolutional neural networks have substantially improved the accuracy of semantic segmentation. However, owing to the complexity of indoor environments, indoor scene semantic segmentation remains a challenging problem. With the advent of depth sensors, researchers have begun to leverage depth information to improve segmentation performance. Most previous work simply fuses RGB features and depth features with equally weighted concatenation or summation, failing to fully exploit the complementary information between the two. This paper proposes ASNet (Attention-aware and Semantic-aware Network), which effectively fuses multi-level RGB and depth features by introducing an attention-aware multimodal fusion module and a semantic-aware multimodal fusion module. In the attention-aware multimodal fusion module, we design...
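The contrast the abstract draws, between equal-weight fusion of RGB and depth features and a learned, attention-gated fusion, can be sketched as follows. The sigmoid gate over pooled channel statistics is a generic attention mechanism used for illustration, not ASNet's actual module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def naive_fuse(rgb, depth):
    """Equal-weight summation: the baseline the paper argues is too crude."""
    return 0.5 * (rgb + depth)

def attention_fuse(rgb, depth, w_rgb, w_depth, bias=0.0):
    """A per-channel gate computed from pooled statistics decides how much
    each modality contributes; the output is a convex combination.
    rgb, depth: (C, H, W) feature maps."""
    gate = sigmoid(w_rgb * rgb.mean(axis=(1, 2)) +
                   w_depth * depth.mean(axis=(1, 2)) + bias)  # (C,)
    g = gate[:, None, None]
    return g * rgb + (1 - g) * depth
```

Because the gate lies in (0, 1), the fused feature always stays inside the elementwise envelope of the two modalities, unlike an unconstrained sum.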
Keyword :
deep learning; RGB-D semantic segmentation; convolutional neural network; attention model; multimodal fusion
Cite:
GB/T 7714 | 段立娟 , 孙启超 , 乔元华 et al. 基于注意力感知和语义感知的RGB-D室内图像语义分割算法 [J]. | 计算机学报 , 2021 , 44 (02) : 275-291 . |
MLA | 段立娟 et al. "基于注意力感知和语义感知的RGB-D室内图像语义分割算法" . | 计算机学报 44 . 02 (2021) : 275-291 . |
APA | 段立娟 , 孙启超 , 乔元华 , 陈军成 , 崔国勤 . 基于注意力感知和语义感知的RGB-D室内图像语义分割算法 . | 计算机学报 , 2021 , 44 (02) , 275-291 . |
Abstract :
The breakthrough of electroencephalogram (EEG) signal classification for brain-computer interfaces (BCI) will set off another technological revolution in human-computer interaction. Because the collected EEG is a nonstationary signal with strong randomness, effective feature extraction and data mining techniques are urgently required for EEG classification in BCI. In this paper, new bionic whale optimization algorithm (WOA) variants are proposed to improve extreme learning machine (ELM) algorithms for EEG classification in BCI. Two improved WOA-ELM algorithms are designed to compensate for the deficiency of random weight initialization in the basic ELM. First, the top several best individuals are selected and vote on decisions, avoiding misjudgment based on a single best individual. Second, the initial connection weights and biases between the input-layer and hidden-layer nodes are optimized by WOA through the bubble-net attacking strategy (BNAS) and shrinking encircling mechanism (SEM), and different regularization mechanisms are introduced in different layers to generate an appropriately sparse weight matrix that promotes the generalization performance of the algorithm. As the comparative results show, the average accuracy of the proposed method reaches 93.67%, better than other methods on the BCI dataset. © 2013 IEEE.
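The ELM half of the method is standard and easy to sketch: random input weights and biases (the parameters WOA would optimise in the paper) and a closed-form, ridge-regularised solve for the output weights. The tanh activation, the parameter names, and the regularisation strength below are illustrative assumptions.

```python
import numpy as np

def train_elm(X, Y, n_hidden=64, reg=1e-2, rng=None):
    """Basic regularised ELM. The random W and b here are what the paper's
    WOA would tune; the output weights beta have a closed-form solution."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Ridge-regularised least squares: beta = (H'H + reg*I)^-1 H'Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only beta is solved for, training is a single linear solve; the quality of the random (or WOA-optimised) hidden layer determines how well the linear readout can separate the classes.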
Keyword :
Bionics; Machine learning; Electroencephalography; Brain computer interface; Biomedical signal processing; Data mining; Human computer interaction
Cite:
GB/T 7714 | Lian, Zhaoyang , Duan, Lijuan , Qiao, Yuanhua et al. The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface [J]. | IEEE Access , 2021 , 9 : 67405-67416 . |
MLA | Lian, Zhaoyang et al. "The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface" . | IEEE Access 9 (2021) : 67405-67416 . |
APA | Lian, Zhaoyang , Duan, Lijuan , Qiao, Yuanhua , Chen, Juncheng , Miao, Jun , Li, Mingai . The Improved ELM Algorithms Optimized by Bionic WOA for EEG Classification of Brain Computer Interface . | IEEE Access , 2021 , 9 , 67405-67416 . |
Abstract :
Weakly supervised video object segmentation (WSVOS) is a vital yet challenging task in which the aim is to segment pixel-level masks with only category labels. Existing methods still have certain limitations, e.g., difficulty in comprehending appropriate spatiotemporal knowledge and an inability to explore common semantic information with category labels. To overcome these challenges, we formulate a novel framework by integrating multisource saliency and incorporating an exemplar mechanism for WSVOS. Specifically, we propose a multisource saliency module to comprehend spatiotemporal knowledge by integrating spatial and temporal saliency as bottom-up cues, which can effectively eliminate disruptions due to confusing regions and identify attractive regions. Moreover, to our knowledge, we make the first attempt to incorporate an exemplar mechanism into WSVOS by proposing an adaptive exemplar module to process top-down cues, which can provide reliable guidance for co-occurring objects in intraclass videos and identify attentive regions. Our framework, which comprises the two aforementioned modules, offers a new perspective on directly constructing the correspondence between bottom-up cues and top-down cues when ground-truth information for the reference frames is lacking. Comprehensive experiments demonstrate that the proposed framework achieves state-of-the-art performance.
Keyword :
Task analysis; video object segmentation; Motion segmentation; Object segmentation; Annotations; spatiotemporal saliency; Feature extraction; exemplar mechanism; Spatiotemporal phenomena; Weakly supervised learning; Training
Cite:
GB/T 7714 | En, Qing , Duan, Lijuan , Zhang, Zhaoxiang . Joint Multisource Saliency and Exemplar Mechanism for Weakly Supervised Video Object Segmentation [J]. | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2021 , 30 : 8155-8169 . |
MLA | En, Qing et al. "Joint Multisource Saliency and Exemplar Mechanism for Weakly Supervised Video Object Segmentation" . | IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021) : 8155-8169 . |
APA | En, Qing , Duan, Lijuan , Zhang, Zhaoxiang . Joint Multisource Saliency and Exemplar Mechanism for Weakly Supervised Video Object Segmentation . | IEEE TRANSACTIONS ON IMAGE PROCESSING , 2021 , 30 , 8155-8169 . |
Abstract :
Multidirectional associative memory neural networks (MAMNNs) are constructed to simulate the many-to-many association, and they are applied widely in many fields. It is important to explore the global stability of the periodic solution of MAMNNs. In this paper, MAMNNs with discontinuous activation functions and mixed time-varying delays are considered. Firstly, we investigate the conditions for the existence of the periodic solution by using the Mawhin-like coincidence theorem, and a special connecting weight matrix is constructed to prove the existence of the periodic solution. Secondly, the uniqueness and global exponential stability of the periodic solution are explored for the non-self-connected system by introducing a Lipschitz-like condition. Finally, numerical simulations are given to illustrate the effectiveness of our main results.
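A generic form of the system class studied here (not the paper's exact equations) is an m-field MAMNN whose fields are pairwise connected, with a discrete time-varying delay and a distributed delay supplying the "mixed" delays:

```latex
\dot{x}_i^{(k)}(t) = -d_i^{(k)} x_i^{(k)}(t)
  + \sum_{l \neq k} \sum_{j} \Big[ a_{ij}^{(kl)} f_j\big(x_j^{(l)}(t)\big)
  + b_{ij}^{(kl)} f_j\big(x_j^{(l)}(t - \tau_{ij}(t))\big)
  + c_{ij}^{(kl)} \int_{t-\sigma}^{t} f_j\big(x_j^{(l)}(s)\big)\,ds \Big]
  + I_i^{(k)}(t),
```

where the activations f_j may be discontinuous and the inputs I_i^{(k)} are periodic, matching the setting in which the Mawhin-like coincidence theorem is applied.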
Keyword :
periodic solution; mixed time-varying delays; multidirectional associative memory neural networks; global exponential stability; discontinuous activation functions
Cite:
GB/T 7714 | Zhang, Yan , Qiao, Yuanhua , Duan, Lijuan et al. Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays [J]. | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL , 2021 , 31 (10) : 4570-4588 . |
MLA | Zhang, Yan et al. "Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays" . | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL 31 . 10 (2021) : 4570-4588 . |
APA | Zhang, Yan , Qiao, Yuanhua , Duan, Lijuan , Miao, Jun , Zhang, Jiajia . Periodic dynamics of multidirectional associative neural networks with discontinuous activation functions and mixed time delays . | INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL , 2021 , 31 (10) , 4570-4588 . |
Abstract :
In this paper, a Gause-type predator-prey system with constant-yield prey harvesting and a monotone increasing functional response is proposed and investigated. We focus on the influence of the harvesting rate on the predator-prey system. First, equilibria corresponding to different situations are investigated, along with their stability. Then bifurcations are explored at nonhyperbolic equilibria, and we give the conditions for the occurrence of two saddle-node bifurcations by analyzing the emergence, coincidence and annihilation of equilibria. We calculate the Lyapunov number and focal values to determine the stability and the number of limit cycles generated by supercritical, subcritical and degenerate Hopf bifurcations. Furthermore, the system is unfolded to explore the repelling and attracting Bogdanov-Takens bifurcations by perturbing two bifurcation parameters near the cusp. It is shown that there exists one limit cycle, one homoclinic loop, or two limit cycles for different parameter values. The system is therefore sensitive to both the constant-yield prey harvesting rate and the initial values of the species. Finally, we run numerical simulations to verify the theoretical analysis. (C) 2021 International Association for Mathematics and Computers in Simulation (IMACS). Published by Elsevier B.V. All rights reserved.
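Systems of this class can be simulated directly to illustrate the role of the harvesting term. This sketch assumes logistic prey growth and a Holling type II response as one monotone increasing choice (the paper's exact functional response and parameter values are not given here), with the constant yield h subtracted from the prey equation.

```python
import numpy as np

def rk4_step(f, state, dt):
    """One classical Runge-Kutta step for an autonomous ODE system."""
    k1 = f(state)
    k2 = f(state + dt / 2 * k1)
    k3 = f(state + dt / 2 * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def predator_prey(r=1.0, K=10.0, m=1.0, a=1.0, c=0.5, d=0.2, h=0.1):
    """Gause-type system with Holling II response and constant-yield
    prey harvesting h. Returns the vector field f((x, y))."""
    def f(s):
        x, y = s
        dx = r * x * (1 - x / K) - m * x * y / (a + x) - h  # prey
        dy = y * (-d + c * m * x / (a + x))                 # predator
        return np.array([dx, dy])
    return f
```

Sweeping h in such a simulation is a quick way to observe the qualitative changes (equilibria appearing or colliding, cycles emerging) that the bifurcation analysis classifies rigorously.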
Keyword :
Degenerate Hopf bifurcation; Harvesting; Bogdanov-Takens bifurcation; Predator-prey system; Limit cycle
Cite:
GB/T 7714 | Shang, Zuchong , Qiao, Yuanhua , Duan, Lijuan et al. Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting [J]. | MATHEMATICS AND COMPUTERS IN SIMULATION , 2021 , 190 : 976-1002 . |
MLA | Shang, Zuchong et al. "Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting" . | MATHEMATICS AND COMPUTERS IN SIMULATION 190 (2021) : 976-1002 . |
APA | Shang, Zuchong , Qiao, Yuanhua , Duan, Lijuan , Miao, Jun . Bifurcation analysis in a predator-prey system with an increasing functional response and constant-yield prey harvesting . | MATHEMATICS AND COMPUTERS IN SIMULATION , 2021 , 190 , 976-1002 . |
Abstract :
In this paper, we propose a deep multimodal feature learning (DMFL) network for RGB-D salient object detection. The color and depth features are first extracted from low-level to high-level using a CNN. The features at the high layer are then shared and concatenated to construct a joint feature representation of the modalities. The fused features are embedded into a high-dimensional metric space to express the salient and non-salient parts. A new objective function, consisting of cross-entropy and metric loss, is proposed to optimize the model. Both pixel- and attribute-level discriminative features are learned for semantic grouping to detect the salient objects. Experimental results show that the proposed model achieves promising performance, improving on conventional methods by about 1% to 2%. © 2021 Elsevier Ltd
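The joint objective named here (cross-entropy plus a metric loss in an embedding space) can be sketched generically. The pairwise contrastive-margin form of the metric loss and the weight alpha are assumptions for illustration, not the paper's exact loss.

```python
import numpy as np

def cross_entropy(p, y, eps=1e-12):
    """Binary cross-entropy for per-pixel saliency probabilities."""
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def metric_loss(emb, labels, margin=1.0):
    """Pull same-label embeddings together, push different-label ones
    at least `margin` apart (one common form of metric loss)."""
    total, pairs = 0.0, 0
    for i in range(len(emb)):
        for j in range(i + 1, len(emb)):
            d = np.linalg.norm(emb[i] - emb[j])
            if labels[i] == labels[j]:
                total += d ** 2                      # attract same-label pairs
            else:
                total += max(0.0, margin - d) ** 2   # repel different-label pairs
            pairs += 1
    return total / pairs

def joint_loss(p, y, emb, labels, alpha=0.5):
    """Combined objective: pixel-level cross-entropy plus weighted metric loss."""
    return cross_entropy(p, y) + alpha * metric_loss(emb, labels)
```

The cross-entropy term supervises per-pixel predictions while the metric term shapes the embedding space so salient and non-salient features form separable groups.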
Keyword :
Object recognition; Set theory; Topology; Object detection; Deep learning; Feature extraction
Cite:
GB/T 7714 | Liang, Fangfang , Duan, Lijuan , Ma, Wei et al. A deep multimodal feature learning network for RGB-D salient object detection [J]. | Computers and Electrical Engineering , 2021 , 92 . |
MLA | Liang, Fangfang et al. "A deep multimodal feature learning network for RGB-D salient object detection" . | Computers and Electrical Engineering 92 (2021) . |
APA | Liang, Fangfang , Duan, Lijuan , Ma, Wei , Qiao, Yuanhua , Miao, Jun . A deep multimodal feature learning network for RGB-D salient object detection . | Computers and Electrical Engineering , 2021 , 92 . |