
Query:

Scholar name: Yang Jinfu (杨金福)

12 pages of results in total.
Few-shot classification based on manifold metric learning SCIE
Journal article | 2024, 33(1) | JOURNAL OF ELECTRONIC IMAGING

Abstract :

Few-shot classification aims to classify samples with a limited quantity of labeled training data, and it can be widely applied in practical scenarios such as wastewater treatment plants and healthcare. Compared with traditional methods, existing deep metric-based algorithms have excelled in few-shot classification tasks, but some issues remain to be investigated. Although current standard convolutional networks can extract expressive deep features, they do not fully exploit the relationships among input sample attributes. Two problems arise here: (1) how to extract more expressive features and transform them into attributes, and (2) how to obtain the optimal combination of sample class attributes. This paper proposes a few-shot classification method based on manifold metric learning (MML), with the feature space embedded in the manifold of symmetric positive definite (SPD) matrices, to overcome the above limitations. First, significant features are extracted using the proposed joint dynamic convolution module. Second, the definition and properties of geodesic strict convexity on Riemannian manifolds are used to minimize the proposed MML loss function and obtain the optimal attribute correlation matrix A. We theoretically prove that the MML loss is geodesically strictly convex on the SPD manifold and obtain the globally optimal solution. Extensive experimental results on popular datasets show that our proposed approach outperforms other state-of-the-art methods.
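The abstract describes embedding features in the SPD manifold and measuring distances there. As an illustrative sketch only (not the authors' MML module), covariance descriptors with a log-Euclidean metric are a standard way to work with SPD-valued features; the function names and the `eps` ridge are assumptions for this example:

```python
import numpy as np

def covariance_descriptor(features, eps=1e-5):
    """Map a set of feature vectors (n_samples x dim) to an SPD matrix.

    A small ridge keeps the covariance strictly positive definite.
    """
    cov = np.cov(features, rowvar=False)
    return cov + eps * np.eye(cov.shape[0])

def log_euclidean_distance(a, b):
    """Distance between two SPD matrices under the log-Euclidean metric."""
    def logm_spd(m):
        # eigh is valid for symmetric matrices; log acts on the eigenvalues
        w, v = np.linalg.eigh(m)
        return (v * np.log(w)) @ v.T
    return np.linalg.norm(logm_spd(a) - logm_spd(b), ord="fro")
```

The log-Euclidean metric flattens the SPD manifold through the matrix logarithm, so distances can be computed with ordinary Frobenius norms while still respecting positive definiteness.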

Keyword :

dynamic convolution, metric learning, few-shot classification, symmetric positive definite manifold

Cite:

GB/T 7714: Shang, Qingzhen, Yang, Jinfu, Ma, Jiaqi, et al. Few-shot classification based on manifold metric learning [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33(1).
MLA: Shang, Qingzhen, et al. "Few-shot classification based on manifold metric learning." JOURNAL OF ELECTRONIC IMAGING 33.1 (2024).
APA: Shang, Qingzhen, Yang, Jinfu, Ma, Jiaqi, Zhang, Jiahui. Few-shot classification based on manifold metric learning. JOURNAL OF ELECTRONIC IMAGING, 2024, 33(1).
Monocular visual-inertial odometry leveraging point-line features with structural constraints SCIE
Journal article | 2023 | VISUAL COMPUTER

Abstract :

Structural geometry constraints, such as perpendicularity, parallelism, and coplanarity, widely exist in man-made scenes, especially Manhattan scenes. By fully exploiting these structural properties, we propose a monocular visual-inertial odometry (VIO) system using point and line features with structural constraints. First, a coarse-to-fine vanishing-point estimation method with line-segment consistency verification is presented to classify lines accurately into structural and non-structural lines at low computational cost. Then, to obtain precise estimates of the camera pose and the positions of 3D landmarks, a cost function that combines structural line constraints with the feature reprojection residual and the inertial measurement unit residual is minimized under a sliding-window framework. For the geometric representation of lines, Plücker coordinates and the orthonormal representation are used for 3D line transformation and non-linear optimization, respectively. Extensive evaluations on two public datasets verify that the proposed system achieves better localization accuracy and robustness than other state-of-the-art VIO systems, with acceptable time consumption.
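The Plücker representation mentioned above encodes a 3D line as a direction plus a moment about the origin. A minimal sketch (the function name is an assumption for illustration):

```python
import numpy as np

def plucker_from_points(p, q):
    """Plücker coordinates (n, d) of the 3D line through points p and q.

    d = q - p is the line direction; n = p x q is the moment of the line
    about the origin. Valid Plücker coordinates satisfy n . d = 0.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.cross(p, q), q - p
```

The orthogonality constraint n . d = 0 is what makes Plücker coordinates over-parameterized, which is why a minimal orthonormal representation is typically used during non-linear optimization.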

Keyword :

Structural line, Vanishing point, Structural constraints, Visual-inertial odometry

Cite:

GB/T 7714: Zhang, Jiahui, Yang, Jinfu, Ma, Jiaqi. Monocular visual-inertial odometry leveraging point-line features with structural constraints [J]. VISUAL COMPUTER, 2023.
MLA: Zhang, Jiahui, et al. "Monocular visual-inertial odometry leveraging point-line features with structural constraints." VISUAL COMPUTER (2023).
APA: Zhang, Jiahui, Yang, Jinfu, Ma, Jiaqi. Monocular visual-inertial odometry leveraging point-line features with structural constraints. VISUAL COMPUTER, 2023.
Eliminating short-term dynamic elements for robust visual simultaneous localization and mapping using a coarse-to-fine strategy SCIE
Journal article | 2022, 31(5) | JOURNAL OF ELECTRONIC IMAGING

Abstract :

Visual simultaneous localization and mapping (VSLAM) is one of the fundamental technologies enabling intelligent robots to perceive their environment. Many works have focused on proposing comprehensive, integrated systems based on the static-environment assumption. However, elements whose motion status changes frequently, namely short-term dynamic elements, can significantly degrade system performance. It is therefore important to cope with short-term dynamic elements so that VSLAM systems become more adaptable to dynamic scenes. This paper proposes a coarse-to-fine elimination strategy for short-term dynamic elements based on a motion status check (MSC) and a feature points update (FPU). First, an object detection module is designed to obtain semantic information and screen out potential short-term dynamic elements. Then, an MSC module is proposed to judge the true status of these elements and ultimately determine whether to eliminate them. In addition, an FPU module is introduced to update the extracted feature points by computing the dynamic region factor, improving the robustness of the VSLAM system. Quantitative and qualitative experiments on two challenging public datasets demonstrate that our method effectively eliminates the influence of short-term dynamic elements and outperforms other state-of-the-art methods. (c) 2022 SPIE and IS&T
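The coarse screening step above can be sketched as discarding feature points that fall inside detected dynamic-object boxes. This is a simplified stand-in (function name and box format are assumptions); the paper's MSC module additionally verifies the true motion status before eliminating anything:

```python
def filter_dynamic_features(keypoints, dynamic_boxes):
    """Coarse step: drop keypoints inside boxes of detected dynamic objects.

    keypoints: list of (x, y); dynamic_boxes: list of (x1, y1, x2, y2).
    Returns the kept keypoints and a dynamic-region factor, here taken
    as the fraction of keypoints that were removed.
    """
    def in_box(pt, box):
        x, y = pt
        x1, y1, x2, y2 = box
        return x1 <= x <= x2 and y1 <= y <= y2

    kept = [p for p in keypoints
            if not any(in_box(p, b) for b in dynamic_boxes)]
    factor = 1 - len(kept) / len(keypoints) if keypoints else 0.0
    return kept, factor
```

A high factor signals that much of the frame is dynamic, which is the kind of cue an FPU-style module can use to decide how aggressively to refresh the feature set.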

Keyword :

visual simultaneous localization and mapping, motion status check, short-term dynamic elements, feature points update, coarse-to-fine

Cite:

GB/T 7714: Fu, Fuji, Yang, Jinfu, Zhang, Jiahui, et al. Eliminating short-term dynamic elements for robust visual simultaneous localization and mapping using a coarse-to-fine strategy [J]. JOURNAL OF ELECTRONIC IMAGING, 2022, 31(5).
MLA: Fu, Fuji, et al. "Eliminating short-term dynamic elements for robust visual simultaneous localization and mapping using a coarse-to-fine strategy." JOURNAL OF ELECTRONIC IMAGING 31.5 (2022).
APA: Fu, Fuji, Yang, Jinfu, Zhang, Jiahui, Ma, Jiaqi. Eliminating short-term dynamic elements for robust visual simultaneous localization and mapping using a coarse-to-fine strategy. JOURNAL OF ELECTRONIC IMAGING, 2022, 31(5).
Multimodal based attention-pyramid for predicting pedestrian trajectory SCIE
Journal article | 2022, 31(5) | JOURNAL OF ELECTRONIC IMAGING

Abstract :

The goal of pedestrian trajectory prediction is to predict pedestrians' future trajectories from their historical ones. Multimodal information in the historical trajectory, especially visual information and position coordinates, is conducive to perception and positioning. However, most current algorithms ignore the significance of this multimodal information. We cast pedestrian trajectory prediction as a multimodal problem in which the historical trajectory is divided into image and coordinate information. Specifically, we apply a fully connected long short-term memory network (FC-LSTM) and a convolutional LSTM (ConvLSTM) to process the location coordinates and visual information, respectively, and then fuse the two streams with a multimodal fusion module. An attention-pyramid social interaction module is then built on the fused information to adaptively reason about the complex spatial and social relations between the target and its neighbors. The proposed approach is validated on different experimental verification tasks, on which it achieves better accuracy than its counterparts. (c) 2022 SPIE and IS&T

Keyword :

trajectory prediction, recurrent neural network, multimodal fusion, attention mechanism

Cite:

GB/T 7714: Yan, Xue, Yang, Jinfu, Liu, Yubin, et al. Multimodal based attention-pyramid for predicting pedestrian trajectory [J]. JOURNAL OF ELECTRONIC IMAGING, 2022, 31(5).
MLA: Yan, Xue, et al. "Multimodal based attention-pyramid for predicting pedestrian trajectory." JOURNAL OF ELECTRONIC IMAGING 31.5 (2022).
APA: Yan, Xue, Yang, Jinfu, Liu, Yubin, Song, Lin. Multimodal based attention-pyramid for predicting pedestrian trajectory. JOURNAL OF ELECTRONIC IMAGING, 2022, 31(5).
Cross-modal Video Moment Retrieval Based on Enhancing Significant Features
Journal article | 2022, 44(12), 4395-4404 | JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY

Abstract :

With the continuous development of video acquisition equipment and technology, the number of videos has grown rapidly, and accurately finding target moments within massive video collections is a challenging retrieval task. Cross-modal video moment retrieval finds the moment matching a text query in a video database. Existing works focus mostly on matching the text with the moment while ignoring the context of adjacent moments, so feature relations are insufficiently expressed. In this paper, a novel moment retrieval network is proposed that highlights significant features through residual channel attention. At the same time, a temporal adjacent network is designed to capture the context information of adjacent moments. Experimental results show that the proposed method achieves better performance than mainstream methods based on candidate matching and on video-text feature relations.
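A minimal numpy sketch of residual channel attention in the squeeze-and-excitation style, assuming this is the flavor of attention the paper uses; the weight matrices `w1` and `w2` are hypothetical learned parameters, not values from the paper:

```python
import numpy as np

def residual_channel_attention(x, w1, w2):
    """SE-style residual channel attention on a (C, H, W) feature map.

    Squeeze: global average pool per channel; excite: two-layer MLP with a
    sigmoid; then rescale each channel and add the input back (residual).
    w1: (C, C_mid), w2: (C_mid, C).
    """
    squeeze = x.mean(axis=(1, 2))                   # (C,) per-channel stats
    hidden = np.maximum(squeeze @ w1, 0.0)          # ReLU
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))  # sigmoid gate, (C,)
    return x + x * weights[:, None, None]           # residual rescaling
```

The residual connection means the attention branch only modulates the input rather than replacing it, which keeps gradients well behaved when the gate saturates.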

Keyword :

Temporal adjacent network, Feature relationship, Cross-modal video moment retrieval, Residual channel attention

Cite:

GB/T 7714: Yang, Jinfu, Liu, Yubin, Song, Lin, et al. Cross-modal Video Moment Retrieval Based on Enhancing Significant Features [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2022, 44(12): 4395-4404.
MLA: Yang, Jinfu, et al. "Cross-modal Video Moment Retrieval Based on Enhancing Significant Features." JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY 44.12 (2022): 4395-4404.
APA: Yang, Jinfu, Liu, Yubin, Song, Lin, Yan, Xue. Cross-modal Video Moment Retrieval Based on Enhancing Significant Features. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2022, 44(12), 4395-4404.
Dense Face Network: A Dense Face Detector Based on Global Context and Visual Attention Mechanism
Journal article | 2022, 19(3), 247-256 | MACHINE INTELLIGENCE RESEARCH

Abstract :

Face detection has made tremendous strides thanks to convolutional neural networks, but dense face detection remains an open challenge due to large variations in face scale, tiny faces, and serious occlusion. This paper presents a robust dense face detector that uses global context and a visual attention mechanism to significantly improve detection accuracy. Specifically, a global context fusion module with top-down feedback is proposed to improve the ability to identify tiny faces, and a visual attention mechanism is employed to address occlusion. Experimental results on the public face datasets WIDER FACE and FDDB demonstrate the effectiveness of the proposed method.

Keyword :

Face detection, global context, deep learning, computer vision, attention mechanism

Cite:

GB/T 7714: Song, Lin, Yang, Jin-Fu, Shang, Qing-Zhen, et al. Dense Face Network: A Dense Face Detector Based on Global Context and Visual Attention Mechanism [J]. MACHINE INTELLIGENCE RESEARCH, 2022, 19(3): 247-256.
MLA: Song, Lin, et al. "Dense Face Network: A Dense Face Detector Based on Global Context and Visual Attention Mechanism." MACHINE INTELLIGENCE RESEARCH 19.3 (2022): 247-256.
APA: Song, Lin, Yang, Jin-Fu, Shang, Qing-Zhen, Li, Ming-Ai. Dense Face Network: A Dense Face Detector Based on Global Context and Visual Attention Mechanism. MACHINE INTELLIGENCE RESEARCH, 2022, 19(3), 247-256.
PSA-GRU: Modeling Person-Social Twin-Attention Based on GRU for Pedestrian Trajectory Prediction CPCI-S
Conference paper | 2021, 8151-8157 | 2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC)

Abstract :

This paper addresses the path prediction problem for multiple pedestrians interacting in dynamic scenes, a significant challenge given the complexity and subjectivity of pedestrian movements. Forecasting a pedestrian's future trajectory requires considering both subjective intention and social interaction information. Previous methods ignore the fact that important position nodes in historical trajectories can reflect the subjective intention behind complex trajectories. In this work, we present a Person-Social Twin-Attention network based on the Gated Recurrent Unit (PSA-GRU), which fully utilizes important position nodes of the personal historical trajectory and social interaction information between pedestrians. In our approach, the person-attention encoding module extracts the most salient parts of the personal history trajectory and helps the model learn where and how pedestrians will move. Meanwhile, the social-attention pooling module helps maintain distance among different pedestrians and simulates the dynamic interaction of all pedestrians in real scenes. PSA-GRU also uses GRUs to improve computational efficiency. Experimental results demonstrate that the prediction accuracy and efficiency of our proposed model are both greatly improved on the UCY and ETH datasets.
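The idea of weighting salient position nodes of a history can be sketched as plain dot-product attention over a sequence of GRU hidden states. This is a generic illustration, not the paper's exact twin-attention module; all names here are assumptions:

```python
import numpy as np

def temporal_attention(hidden_states, query):
    """Softmax attention over a sequence of hidden states (T, D).

    Scores each past position against a query vector and returns the
    attention-weighted context vector (D,).
    """
    scores = hidden_states @ query                 # (T,) similarity scores
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    return weights @ hidden_states                 # weighted sum of states
```

With a zero query all positions receive equal weight, so the context reduces to the mean hidden state; a learned query instead concentrates weight on the most informative position nodes.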

Keyword :

Trajectory Prediction, Computer Vision, Gated Recurrent Neural Network, Attention Mechanism

Cite:

GB/T 7714: Yan, Xue, Yang, Jinfu, Song, Lin, et al. PSA-GRU: Modeling Person-Social Twin-Attention Based on GRU for Pedestrian Trajectory Prediction [C]. 2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021: 8151-8157.
MLA: Yan, Xue, et al. "PSA-GRU: Modeling Person-Social Twin-Attention Based on GRU for Pedestrian Trajectory Prediction." 2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC) (2021): 8151-8157.
APA: Yan, Xue, Yang, Jinfu, Song, Lin, Liu, Yubin. PSA-GRU: Modeling Person-Social Twin-Attention Based on GRU for Pedestrian Trajectory Prediction. 2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, 8151-8157.
A lightweight network with attention decoder for real-time semantic segmentation SCIE
Journal article | 2021, 38(7), 2329-2339 | VISUAL COMPUTER

Abstract :

As an important task in scene understanding, semantic segmentation requires a large amount of computation to achieve high performance. With the rise of autonomous systems in recent years, it has become crucial to trade off accuracy against speed. In this paper, we propose a novel asymmetric encoder-decoder network to address this problem. In the encoder, we design a Separable Asymmetric Module, which combines depth-wise separable asymmetric convolution with dilated convolution to greatly reduce computation cost while maintaining accuracy. In the decoder, an attention mechanism is used to further improve segmentation performance. Experimental results on the Cityscapes and CamVid datasets show that the proposed method achieves a better balance between segmentation precision and speed than state-of-the-art semantic segmentation methods. Specifically, our model obtains mean IoU of 72.5% and 66.3% on the Cityscapes and CamVid test datasets, respectively, with fewer than 1M parameters.
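The saving from depth-wise separable asymmetric convolution can be illustrated by counting weights. This is a back-of-the-envelope sketch under the usual factorization (biases ignored; dilation does not change parameter counts), not the paper's exact module:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def separable_asymmetric_params(c_in, c_out, k):
    """Depth-wise k x 1 and 1 x k convolutions followed by a 1 x 1
    point-wise convolution, as used in lightweight encoders."""
    depthwise = c_in * k + c_in * k   # k x 1 plus 1 x k, one filter per channel
    pointwise = c_in * c_out          # 1 x 1 conv mixes channels
    return depthwise + pointwise
```

For 128 input and output channels with k = 3, the standard convolution needs 147,456 weights while the separable asymmetric version needs 17,152, roughly an 8.6x reduction, which is how such encoders stay under 1M parameters.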

Keyword :

Encoder-decoder structure, Depth-wise separable asymmetric convolution, Dilated convolution, Semantic segmentation, Attention mechanism

Cite:

GB/T 7714: Wang, Kang, Yang, Jinfu, Yuan, Shuai, et al. A lightweight network with attention decoder for real-time semantic segmentation [J]. VISUAL COMPUTER, 2021, 38(7): 2329-2339.
MLA: Wang, Kang, et al. "A lightweight network with attention decoder for real-time semantic segmentation." VISUAL COMPUTER 38.7 (2021): 2329-2339.
APA: Wang, Kang, Yang, Jinfu, Yuan, Shuai, Li, Mingai. A lightweight network with attention decoder for real-time semantic segmentation. VISUAL COMPUTER, 2021, 38(7), 2329-2339.
Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN SCIE
Journal article | 2021, 59(10), 2037-2050 | MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING

Abstract :

A motor imagery EEG (MI-EEG) signal is often selected as the driving signal in an active brain-computer interface (BCI) system, and recognizing MI-EEG images via convolutional neural networks (CNNs) has become a popular research direction. This raises two issues: maintaining the integrity of the time-frequency-space information in MI-EEG images and exploring the feature fusion mechanism in the CNN. However, information is excessively compressed in present MI-EEG images, and sequential CNNs are unfavorable for the comprehensive utilization of local features. In this paper, a multidimensional MI-EEG imaging method is proposed, based on time-frequency analysis and the Clough-Tocher (CT) interpolation algorithm. The time-frequency matrix of each electrode is generated via continuous wavelet transform (WT), and the relevant frequency section is extracted and divided into nine submatrices, whose longitudinal sums and lengths are calculated along the frequency and time directions successively to produce a 3 x 3 feature matrix for each electrode. The feature matrix of each electrode is then interpolated to coincide with its corresponding coordinates, yielding a WT-based multidimensional image, called WTMI. Meanwhile, a multilevel and multiscale feature fusion convolutional neural network (MLMSFFCNN) is designed for WTMI, which has dense information, a low signal-to-noise ratio, and a strong spatial distribution. Extensive experiments are conducted on the BCI Competition IV 2a and 2b datasets, yielding accuracies of 92.95% and 97.03% under 10-fold cross-validation, respectively, which exceed those of state-of-the-art imaging methods. The kappa values and p values demonstrate that our method has lower class skew and error costs. The experimental results demonstrate that WTMI can fully represent the time-frequency-space features of MI-EEG and that MLMSFFCNN improves the collection of multiscale features and the fusion recognition of general and abstract features for WTMI.
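The nine-submatrix reduction described above can be sketched as splitting an electrode's time-frequency matrix into a 3 x 3 grid of blocks. Here each block is summarised by its mean energy, a simplified stand-in for the paper's sum-and-length computation:

```python
import numpy as np

def block_feature_matrix(tf_matrix):
    """Condense an electrode's time-frequency matrix into a 3 x 3 descriptor.

    The matrix is split into a 3 x 3 grid of submatrices (frequency rows,
    time columns) and each block is reduced to its mean value.
    """
    tf = np.asarray(tf_matrix, float)
    out = np.empty((3, 3))
    for i, row_band in enumerate(np.array_split(tf, 3, axis=0)):
        for j, block in enumerate(np.array_split(row_band, 3, axis=1)):
            out[i, j] = block.mean()
    return out
```

One such 3 x 3 descriptor per electrode can then be placed at the electrode's 2D scalp coordinates and interpolated into an image, which is the role CT interpolation plays in building WTMI.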

Keyword :

Wavelet transform, Brain-computer interface, Machine learning, MI-EEG imaging method, Convolutional neural network

Cite:

GB/T 7714: Li, Ming-ai, Han, Jian-fu, Yang, Jin-fu. Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN [J]. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2021, 59(10): 2037-2050.
MLA: Li, Ming-ai, et al. "Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN." MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING 59.10 (2021): 2037-2050.
APA: Li, Ming-ai, Han, Jian-fu, Yang, Jin-fu. Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2021, 59(10), 2037-2050.
Current Status and Whole-Process Supervision Practice of Hazardous Chemical Management in University Laboratories (高校实验室危险化学品管理现状与全过程监管实践) CQVIP
Journal article | 2021, 40(3), 297-300 | 实验室研究与探索 (Research and Exploration in Laboratory)


Keyword :

Hazardous chemicals, Laboratory safety, Whole-process supervision

Cite:

GB/T 7714: 何淼, 赵明, 韩光宇, et al. 高校实验室危险化学品管理现状与全过程监管实践 [J]. 实验室研究与探索, 2021, 40(3): 297-300.
MLA: 何淼, et al. "高校实验室危险化学品管理现状与全过程监管实践." 实验室研究与探索 40.3 (2021): 297-300.
APA: 何淼, 赵明, 韩光宇, 杨金福. 高校实验室危险化学品管理现状与全过程监管实践. 实验室研究与探索, 2021, 40(3), 297-300.
Address: BJUT Library, 100 Pingleyuan, Chaoyang District, Beijing 100124, China. Contact: 010-67392185
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.