
Query:

Scholar name: 王立春 (Wang, Lichun)

Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot SCIE
Journal Article | 2022, 7 (4), 8574-8581 | IEEE ROBOTICS AND AUTOMATION LETTERS
WoS CC Cited Count: 1

Abstract:

When robots carry out tasks, selecting an appropriate tool is necessary. Current research ignores the fine-grained characteristics of tasks and mainly focuses on whether the task can be completed; little consideration is paid to the object being manipulated, which affects task completion quality. To support task-oriented fine-grained tool recommendation, this paper proposes, based on common sense knowledge, a Fine-grained Tool-Task Graph (FTTG) to describe multi-granularity semantics of tasks, tools, objects being manipulated, and the relationships among them. According to FTTG, a Fine-grained Tool-Task (FTT) dataset is constructed by labeling images of tools and manipulated objects with the defined semantics. A baseline method named Fine-grained Tool Recommendation Network (FTR-Net) is also proposed. FTR-Net gives coarse-grained and fine-grained semantic predictions by simultaneously learning the common and special features of the tools and manipulated objects. At the same time, FTR-Net constrains the distance between the features of a well-matched tool and object to be smaller than that of an unmatched pair. The constraint and the special features ensure that FTR-Net provides fine-grained tool recommendations; the constraint and the common features ensure that FTR-Net provides coarse-grained tool recommendations when the fine-grained tools are not available. Experiments show that FTR-Net can recommend tools consistent with common sense both on test datasets and in real situations.
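
As a rough illustration of the distance constraint described above, the sketch below implements a triplet-style margin loss in PyTorch. The function name, margin value, and embedding sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def matching_margin_loss(obj_feat, matched_tool, unmatched_tool, margin=1.0):
    # Push d(object, matched tool) below d(object, unmatched tool) by `margin`,
    # mirroring the constraint that well-matched pairs lie closer in feature space.
    d_pos = F.pairwise_distance(obj_feat, matched_tool)
    d_neg = F.pairwise_distance(obj_feat, unmatched_tool)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random 128-d embeddings for a batch of 8 object/tool pairs.
obj, pos, neg = (torch.randn(8, 128) for _ in range(3))
loss = matching_margin_loss(obj, pos, neg)
```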

Keywords:

data sets for robotic vision; deep learning for visual perception; computer vision for automation

Cite:

GB/T 7714: Xin, Jianjia, Wang, Lichun, Wang, Shaofan, et al. Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (4): 8574-8581.
MLA: Xin, Jianjia, et al. "Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot". IEEE ROBOTICS AND AUTOMATION LETTERS 7.4 (2022): 8574-8581.
APA: Xin, Jianjia, Wang, Lichun, Wang, Shaofan, Liu, Yukun, Yang, Chao, Yin, Baocai. Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (4), 8574-8581.
Research on Ideological-Political Construction of Science and Engineering Courses Based on Computer Graphics Teaching Practice (基于计算机图形学教学实践的理工科课程思政建设研究)
Journal Article | 2021, (09), 15-18 (4 pages) | 计算机教育 (Computer Education)

Abstract:

Addressing the difficulty of implementing "silent and pervasive" ideological-political education in the content and delivery of science and engineering courses, this paper takes computer graphics as an example and proposes an overall approach to course-level ideological-political construction. On the basis of describing the concrete construction process and its results, it distills overall construction principles and presents a generalized strategy for ideological-political construction in science and engineering courses, offering ideas for carrying out such construction more widely and effectively in science and engineering universities.

Keywords:

computer graphics; curriculum-based ideological-political education; scientific methodology; ideological-political construction of science and engineering courses; intension and extension

Cite:

GB/T 7714: 孔德慧, 李敬华, 王立春, et al. 基于计算机图形学教学实践的理工科课程思政建设研究 [J]. 计算机教育, 2021, (09): 15-18.
MLA: 孔德慧, et al. "基于计算机图形学教学实践的理工科课程思政建设研究". 计算机教育 (09) (2021): 15-18.
APA: 孔德慧, 李敬华, 王立春, 张勇, 孙艳丰. 基于计算机图形学教学实践的理工科课程思政建设研究. 计算机教育, 2021, (09), 15-18.
Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition SCIE
Journal Article | 2021, 15 (2) | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA
WoS CC Cited Count: 6

Abstract:

Multi-view human action recognition remains a challenging problem due to large view changes. In this article, we propose a transfer learning-based framework called the transferable dictionary learning and view adaptation (TDVA) model for multi-view human action recognition. In the transferable dictionary learning phase, TDVA learns a set of view-specific transferable dictionaries enabling the same actions from different views to share the same sparse representations, which can transfer features of actions from different views to an intermediate domain. In the view adaptation phase, TDVA comprehensively analyzes global, local, and individual characteristics of samples, and jointly learns balanced distribution adaptation, locality preservation, and discrimination preservation, aiming at transferring sparse features of actions from different views from the intermediate domain to a common domain. In other words, TDVA progressively bridges the distribution gap among actions from various views through these two phases. Experimental results on the IXMAS, ACT4², and NUCLA action datasets demonstrate that TDVA outperforms state-of-the-art methods.
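
The first phase lends itself to a compact sketch. Under the simplifying assumption that one shared sparse code matrix A is learned for all views (ISTA updates for A, least-squares refits for each view-specific dictionary), a minimal NumPy version might look like the following; all names and hyperparameters are invented, and this is not the paper's objective in full.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def shared_sparse_coding(views, n_atoms=64, lam=0.1, iters=50, seed=0):
    """views: list of (d_v x N) matrices, one per camera view, same N samples."""
    rng = np.random.default_rng(seed)
    dicts = [rng.standard_normal((X.shape[0], n_atoms)) for X in views]
    dicts = [D / np.linalg.norm(D, axis=0, keepdims=True) for D in dicts]
    A = np.zeros((n_atoms, views[0].shape[1]))  # sparse code shared by all views
    for _ in range(iters):
        # ISTA step on the shared code: sum the data-fit gradients over views.
        grad = sum(D.T @ (D @ A - X) for D, X in zip(dicts, views))
        L = sum(np.linalg.norm(D, 2) ** 2 for D in dicts)  # Lipschitz bound
        A = soft_threshold(A - grad / L, lam / L)
        # Least-squares refit of each view-specific dictionary (heuristic:
        # renormalizing columns keeps atoms on the unit sphere).
        dicts = [X @ np.linalg.pinv(A) for X in views]
        dicts = [D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12) for D in dicts]
    return dicts, A
```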

Keywords:

sparse representation; action recognition; transfer learning; multi-view

Cite:

GB/T 7714: Sun, Bin, Kong, Dehui, Wang, Shaofan, et al. Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition [J]. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2021, 15 (2).
MLA: Sun, Bin, et al. "Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition". ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 15.2 (2021).
APA: Sun, Bin, Kong, Dehui, Wang, Shaofan, Wang, Lichun, Yin, Baocai. Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition. ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2021, 15 (2).
Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition SCIE
Journal Article | 2021, 23, 2857-2867 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 7

Abstract:

Sparse representation is a powerful tool in many visual applications, since images can be represented effectively and efficiently with a dictionary. Conventional dictionary learning methods usually treat each training sample equally, which leads to degraded recognition performance when samples from the same category are distributed dispersedly. This is because the dictionary focuses more on easy samples (highly clustered samples), while hard samples (widely distributed samples) are easily ignored. As a result, test samples that exhibit high dissimilarity to most intra-category samples tend to be misclassified. To circumvent this issue, this paper proposes a simple and effective hardness-aware dictionary learning (HADL) method, which treats training samples discriminatively based on the AdaBoost mechanism. Instead of learning one optimal dictionary, HADL learns a set of dictionaries and corresponding sub-classifiers jointly in an iterative fashion. In each iteration, HADL learns a dictionary and a sub-classifier, and updates the sample weights based on the classification errors given by the current sub-classifier: correctly classified samples are assigned small weights, while incorrectly classified samples are assigned large weights. Through the iterated learning procedure, the hard samples are associated with different dictionaries. Finally, HADL combines the learned sub-classifiers linearly to form a strong classifier, which effectively improves the overall recognition accuracy. Experiments on well-known benchmarks show that HADL achieves promising classification results.
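
The reweighting loop described here is essentially AdaBoost. A minimal sketch, assuming binary ±1 labels and a hypothetical fit_dictionary_classifier callback standing in for the joint dictionary/sub-classifier learning step (not the paper's code):

```python
import numpy as np

def hadl_boosting(X, y, fit_dictionary_classifier, rounds=5):
    """y in {-1, +1}; fit_dictionary_classifier(X, y, w) -> predict fn (hypothetical)."""
    n = X.shape[0]
    w = np.full(n, 1.0 / n)           # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        clf = fit_dictionary_classifier(X, y, w)
        miss = clf(X) != y            # which samples this round got wrong
        err = np.clip(np.dot(w, miss.astype(float)), 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)           # sub-classifier weight
        w *= np.exp(alpha * np.where(miss, 1.0, -1.0))  # raise hard, lower easy
        w /= w.sum()
        ensemble.append((alpha, clf))
    # Strong classifier: sign of the alpha-weighted vote of all sub-classifiers.
    return lambda Xt: np.sign(sum(a * c(Xt) for a, c in ensemble))
```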

Keywords:

dictionaries; training; classification; boosting; visualization; AdaBoost; dictionary learning; task analysis; face recognition; sparse representation

Cite:

GB/T 7714: Wang, Lichun, Li, Shuang, Wang, Shaofan, et al. Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23: 2857-2867.
MLA: Wang, Lichun, et al. "Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition". IEEE TRANSACTIONS ON MULTIMEDIA 23 (2021): 2857-2867.
APA: Wang, Lichun, Li, Shuang, Wang, Shaofan, Kong, Dehui, Yin, Baocai. Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23, 2857-2867.
Pedestrian Detection Based on Two-Stream UDN SCIE
Journal Article | 2020, 10 (5) | APPLIED SCIENCES-BASEL
WoS CC Cited Count: 5

Abstract:

Featured Application: This paper can be applied to autonomous vehicles and driving assistance systems.

Pedestrian detection is the core of the driver assistance system: it collects road conditions through the radars or cameras on the vehicle, judges whether there is a pedestrian in front of the vehicle, supports decisions such as raising an alarm, automatically slowing down, or emergency stopping to keep pedestrians safe, and improves security while the vehicle is moving. Suffering from weather, lighting, clothing, large pose variations, and occlusion, current pedestrian detection is still some distance from practical application. In recent years, deep networks have shown excellent performance in image detection, recognition, and classification, and some researchers have employed deep networks for pedestrian detection and achieved great progress; however, deep networks need huge computational resources, which makes them difficult to put into practical use. In real scenarios of autonomous vehicles, computation ability is limited, so a shallow network such as UDN (Unified Deep Network) is a better choice, since it performs well while consuming fewer computational resources. Based on UDN, this paper proposes a new network model named two-stream UDN, which adds another branch to remedy traditional UDN's inability to distinguish trees and telegraph poles from pedestrians. The new branch accepts the upper third of the pedestrian image as input; this partial image has less deformation, more stable features, and characteristics more distinguishable from other objects. The proposed two-stream UDN is fed multiple input features, including the HOG (Histogram of Oriented Gradients) feature, Sobel feature, color feature, and foreground regions extracted by the GrabCut segmentation algorithm; compared with the original input of UDN, these fused features are more conducive to pedestrian detection. Two-stream UDN is trained in two steps: first, the two sub-networks are trained until convergence; then, the results of the two subnets are fused as the final result and fed back to the two subnets to fine-tune network parameters synchronously. To improve performance, Swish is adopted as the activation function for faster training, and positive samples are mirrored and rotated by small angles to balance the positive and negative samples.
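
As a rough sketch of the two-stream idea and of Swish (x * sigmoid(x)), the toy PyTorch model below runs one branch on the full candidate window and another on its upper third, then fuses the two scores. All layer sizes and the late-fusion choice are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)   # the Swish activation named in the abstract

def branch():
    # A deliberately shallow branch, in the spirit of UDN's small footprint.
    return nn.Sequential(
        nn.Conv2d(3, 16, 5, padding=2), Swish(), nn.AdaptiveAvgPool2d(8),
        nn.Flatten(), nn.Linear(16 * 8 * 8, 1))

class TwoStreamDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.full_branch = branch()    # sees the whole candidate window
        self.upper_branch = branch()   # sees only the upper third

    def forward(self, window):
        upper = window[:, :, : window.shape[2] // 3, :]  # crop the upper third
        score = self.full_branch(window) + self.upper_branch(upper)  # late fusion
        return torch.sigmoid(score)    # pedestrian probability per window

probs = TwoStreamDetector()(torch.randn(4, 3, 96, 48))  # 4 candidate windows
```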

Keywords:

two-stream nets; pedestrian detection; network training; Unified Deep Net

Cite:

GB/T 7714: Wang, Wentong, Wang, Lichun, Ge, Xufei, et al. Pedestrian Detection Based on Two-Stream UDN [J]. APPLIED SCIENCES-BASEL, 2020, 10 (5).
MLA: Wang, Wentong, et al. "Pedestrian Detection Based on Two-Stream UDN". APPLIED SCIENCES-BASEL 10.5 (2020).
APA: Wang, Wentong, Wang, Lichun, Ge, Xufei, Li, Jinghua, Yin, Baocai. Pedestrian Detection Based on Two-Stream UDN. APPLIED SCIENCES-BASEL, 2020, 10 (5).
Discriminative matrix-variate restricted Boltzmann machine classification model SCIE
Journal Article | 2020, 27 (5), 3621-3633 | WIRELESS NETWORKS

Abstract:

The Matrix-variate Restricted Boltzmann Machine (MVRBM), a variant of the Restricted Boltzmann Machine, has demonstrated excellent capacity for modelling matrix variables. However, MVRBM is still an unsupervised generative model and is usually used for feature extraction or for initializing deep neural networks; when MVRBM is used for classification, additional classifiers must be added. To make MVRBM itself supervised, this paper proposes improved MVRBMs for classification, which can classify 2D data directly and accurately. To this end, on one hand, a classification constraint is added to MVRBM, yielding the Matrix-variate Restricted Boltzmann Machine Classification Model (ClassMVRBM). On the other hand, a Fisher discriminant analysis criterion for matrix-style variables is proposed and applied to the hidden variable, so the extracted features are more discriminative and the classification performance of ClassMVRBM is enhanced. We call the novel model the Matrix-variate Restricted Boltzmann Machine Classification Model with Fisher Discriminant Analysis (ClassMVRBM-MVFDA). Experimental results on some publicly available databases demonstrate the superiority of the proposed models. In particular, the image classification accuracy of ClassMVRBM is higher than that of the conventional unsupervised RBM, its variants, and the supervised Restricted Boltzmann Machine Classification Model (ClassRBM) for vector variables, and the proposed ClassMVRBM-MVFDA performs even better than the supervised ClassMVRBM and vectorial RBM-FDA.
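
For orientation, a matrix-variate RBM energy with a label layer is commonly written in a bilinear form like the one below. The notation (factor matrices U, V, biases B, C, and per-class label parameters d_y, W_y) is an assumed illustration consistent with the abstract, not the paper's exact formulation:

$$E(X, H, y) = -\operatorname{tr}\!\left(H^{\top} U X V\right) - \operatorname{tr}\!\left(B^{\top} X\right) - \operatorname{tr}\!\left(C^{\top} H\right) - d_y - \operatorname{tr}\!\left(W_y^{\top} H\right)$$

With a label term of this kind, classification can proceed directly from the class-conditional free energy, $p(y \mid X) \propto \exp(-F(X, y))$, without attaching an external classifier.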

Keywords:

MVRBM; RBM; ClassMVRBM-MVFDA; ClassMVRBM

Cite:

GB/T 7714: Li, Jinghua, Tian, Pengyu, Kong, Dehui, et al. Discriminative matrix-variate restricted Boltzmann machine classification model [J]. WIRELESS NETWORKS, 2020, 27 (5): 3621-3633.
MLA: Li, Jinghua, et al. "Discriminative matrix-variate restricted Boltzmann machine classification model". WIRELESS NETWORKS 27.5 (2020): 3621-3633.
APA: Li, Jinghua, Tian, Pengyu, Kong, Dehui, Wang, Lichun, Wang, Shaofan, Yin, Baocai. Discriminative matrix-variate restricted Boltzmann machine classification model. WIRELESS NETWORKS, 2020, 27 (5), 3621-3633.
Matrix-variate variational auto-encoder with applications to image process SCIE
Journal Article | 2020, 67 | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
WoS CC Cited Count: 3

Abstract:

The Variational Auto-Encoder (VAE) is an important probabilistic technique for modelling 1D vectorial data. However, when applying the VAE model to 2D images, vectorization is necessary, which may lead to the curse of dimensionality and lose valuable spatial information. To avoid these problems, we propose a novel VAE model based on matrix variables, named the Matrix-variate Variational Auto-Encoder (MVVAE). In this model, the input, hidden, and latent variables are all in matrix form, so the inherent spatial structure of 2D images can be better maintained and utilized. In particular, the latent variable is assumed to follow a matrix Gaussian distribution, which is more suitable for describing 2D images. The variational inference process for solving the weights and the posterior of the latent variable is given. Experiments are designed for three real-world applications: reconstruction, denoising, and completion. The experimental results demonstrate that MVVAE performs better than VAE and other probabilistic methods for modelling and processing 2D data. (C) 2020 Elsevier Inc. All rights reserved.
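
A minimal sketch of the matrix-Gaussian reparameterization implied here, assuming the usual matrix-normal factorization Z ~ MN(M, AAᵀ, BBᵀ); the shapes and factor choices are illustrative only, not the paper's parameterization.

```python
import torch

def sample_matrix_normal(M, A, B):
    # If E has i.i.d. N(0, 1) entries, then M + A @ E @ B.T follows a matrix
    # normal with mean M, row covariance A A^T, and column covariance B B^T.
    E = torch.randn_like(M)
    return M + A @ E @ B.T

M = torch.zeros(28, 28)       # toy latent mean for a 28x28 image code
A = 0.5 * torch.eye(28)       # toy row-covariance factor
B = 0.5 * torch.eye(28)       # toy column-covariance factor
Z = sample_matrix_normal(M, A, B)   # differentiable w.r.t. M, A, B
```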

Keywords:

image denoising; face completion; variational inference; variational autoencoder; matrix Gaussian distribution

Cite:

GB/T 7714: Li, Jinghua, Yan, Huixia, Gao, Junbin, et al. Matrix-variate variational auto-encoder with applications to image process [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 67.
MLA: Li, Jinghua, et al. "Matrix-variate variational auto-encoder with applications to image process". JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION 67 (2020).
APA: Li, Jinghua, Yan, Huixia, Gao, Junbin, Kong, Dehui, Wang, Lichun, Wang, Shaofan, et al. Matrix-variate variational auto-encoder with applications to image process. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 67.
Matrix-Variate Restricted Boltzmann Machine Classification Model EI
Conference Paper | 2019, 295 LNICST, 486-497 | 11th EAI International Conference on Simulation Tools and Techniques, SIMUTools 2019

Abstract:

Recently, the Restricted Boltzmann Machine (RBM) has demonstrated excellent capacity for modelling vector variables. A variant of RBM, the Matrix-variate Restricted Boltzmann Machine (MVRBM), extends this ability and can model matrix-variate data directly without a vectorization process. However, MVRBM is still an unsupervised generative model and is usually used for feature extraction or for initializing deep neural networks; when MVRBM is used for classification, additional classifiers are necessary. This paper proposes a Matrix-variate Restricted Boltzmann Machine Classification Model (ClassMVRBM) to classify 2D data directly. In the novel ClassMVRBM, a classification constraint is introduced into MVRBM. On one hand, the features extracted by MVRBM become more discriminative; on the other hand, the proposed model can be used directly for classification. Experiments on some publicly available databases demonstrate that the classification performance of ClassMVRBM is largely improved, resulting in higher image classification accuracy than the conventional unsupervised RBM, its variants, and the Restricted Boltzmann Machine Classification Model (ClassRBM). © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2019.

Keywords:

matrix algebra; deep neural networks; classification (of information); image enhancement

Cite:

GB/T 7714: Li, Jinghua, Tian, Pengyu, Kong, Dehui, et al. Matrix-Variate Restricted Boltzmann Machine Classification Model [C]. 2019: 486-497.
MLA: Li, Jinghua, et al. "Matrix-Variate Restricted Boltzmann Machine Classification Model". (2019): 486-497.
APA: Li, Jinghua, Tian, Pengyu, Kong, Dehui, Wang, Lichun, Wang, Shaofan, Yin, Baocai. Matrix-Variate Restricted Boltzmann Machine Classification Model. (2019): 486-497.
An Image Recognition Method Based on a Discriminative Matrix-Variate Restricted Boltzmann Machine (一种基于判别矩阵变量受限玻尔兹曼机的图像识别方法) incoPat
Patent | 2019-04-15 | CN201910297655.2

Abstract:

This invention discloses an image recognition method based on a discriminative matrix-variate restricted Boltzmann machine model. A discrimination-based matrix-variate restricted Boltzmann machine, denoted DisMVRBM, is used for 2D image classification. The model can model images directly without vectorization, preserving the structural information of the original samples. Compared with MVRBM, this model adds a label layer, meaning that label information is incorporated into the feature extraction process; the extracted features are therefore discriminative, which improves classification performance. Moreover, because of the added label layer, the model can be used directly as a standalone classifier without chaining other classifiers, eliminating the fine-tuning stage required for training additional classifiers.

Cite:

GB/T 7714: 尹宝才, 田鹏宇, 李敬华, et al. 一种基于判别矩阵变量受限玻尔兹曼机的图像识别方法: CN201910297655.2 [P]. 2019-04-15.
MLA: 尹宝才, et al. "一种基于判别矩阵变量受限玻尔兹曼机的图像识别方法": CN201910297655.2. 2019-04-15.
APA: 尹宝才, 田鹏宇, 李敬华, 孔德慧, 王立春. 一种基于判别矩阵变量受限玻尔兹曼机的图像识别方法: CN201910297655.2. 2019-04-15.
A Clustering Processing Method for Noisy Images (一种有噪声图像的聚类处理方法) incoPat
Patent | 2019-03-04 | CN201910159122.8

Abstract:

Disclosed is a clustering processing method for noisy images that makes image clustering more robust. The method constructs a deep variational autoencoder-based subspace clustering model, DVAESC, which introduces into the variational autoencoder (VAE) framework a self-expressive layer over the mean parameters describing the data's probability distribution, so that an affinity matrix can be learned effectively and spectral clustering can then be performed.
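
A minimal sketch of such a self-expressive layer, assuming a standard formulation (latent codes re-expressed as combinations of each other via a learned zero-diagonal coefficient matrix C, with |C| + |C|ᵀ serving as the affinity for spectral clustering); all names and sizes are invented for illustration, not the patent's implementation.

```python
import torch
import torch.nn as nn

class SelfExpressiveLayer(nn.Module):
    def __init__(self, n_samples):
        super().__init__()
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, Z):
        # Zero the diagonal so each code cannot trivially reconstruct itself.
        C = self.C - torch.diag(torch.diag(self.C))
        return C @ Z, C

Z = torch.randn(100, 32)                 # latent mean vectors of 100 images
layer = SelfExpressiveLayer(100)
Z_hat, C = layer(Z)
loss = ((Z_hat - Z) ** 2).sum() + C.abs().sum()   # reconstruction + sparsity
affinity = (C.abs() + C.abs().T).detach()          # adjacency for spectral clustering
```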

Cite:

GB/T 7714: 李敬华, 闫会霞, 孔德慧, et al. 一种有噪声图像的聚类处理方法: CN201910159122.8 [P]. 2019-03-04.
MLA: 李敬华, et al. "一种有噪声图像的聚类处理方法": CN201910159122.8. 2019-03-04.
APA: 李敬华, 闫会霞, 孔德慧, 王立春, 尹宝才. 一种有噪声图像的聚类处理方法: CN201910159122.8. 2019-03-04.