
Query:

Scholar name: 王立春 (Wang Lichun)

Total: 6 pages
Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot SCIE
Journal article | 2022, 7 (4), 8574-8581 | IEEE ROBOTICS AND AUTOMATION LETTERS

Abstract :

When robots carry out tasks, selecting an appropriate tool is necessary. Current research ignores the fine-grained characteristics of tasks and focuses mainly on whether the task can be completed; little consideration is given to the object being manipulated, which affects the quality of task completion. To support task-oriented fine-grained tool recommendation, this paper proposes the Fine-grained Tool-Task Graph (FTTG), based on common sense knowledge, to describe multi-granularity semantics of tasks, tools, objects being manipulated, and the relationships among them. According to FTTG, a Fine-grained Tool-Task (FTT) dataset is constructed by labeling images of tools and manipulated objects with the defined semantics. A baseline method named Fine-grained Tool Recommendation Network (FTR-Net) is also proposed. FTR-Net gives coarse-grained and fine-grained semantic predictions by simultaneously learning the common and special features of the tools and manipulated objects. At the same time, FTR-Net constrains the distance between the features of a well-matched tool and object to be smaller than that of unmatched pairs. The constraint and the special features ensure that FTR-Net provides fine-grained tool recommendations; the constraint and the common features ensure coarse-grained recommendations when fine-grained tools are unavailable. Experiments show that FTR-Net recommends tools consistent with common sense both on test datasets and in real situations.
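The matched-versus-unmatched distance constraint described in the abstract can be sketched as a margin-based loss. This is an illustrative NumPy sketch, not the paper's exact formulation; the function name and margin value are hypothetical:

```python
import numpy as np

def matching_margin_loss(tool_feat, obj_feat, matched, margin=1.0):
    """Hinge-style constraint of the kind FTR-Net's abstract describes:
    matched tool/object feature pairs should be closer than unmatched
    ones by at least `margin`. Illustrative sketch only."""
    d = np.linalg.norm(tool_feat - obj_feat, axis=1)  # pairwise feature distances
    # matched pairs: penalize their distance directly; unmatched pairs:
    # penalize only when they fall inside the margin
    loss = np.where(matched, d ** 2, np.maximum(0.0, margin - d) ** 2)
    return loss.mean()
```

Minimizing a loss of this shape pulls matched tool-object features together while pushing unmatched ones at least `margin` apart.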

Keyword :

data sets for robotic vision; deep learning for visual perception; computer vision for automation

Cite:


GB/T 7714 Xin, Jianjia , Wang, Lichun , Wang, Shaofan et al. Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot [J]. | IEEE ROBOTICS AND AUTOMATION LETTERS , 2022 , 7 (4) : 8574-8581 .
MLA Xin, Jianjia et al. "Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot" . | IEEE ROBOTICS AND AUTOMATION LETTERS 7 . 4 (2022) : 8574-8581 .
APA Xin, Jianjia , Wang, Lichun , Wang, Shaofan , Liu, Yukun , Yang, Chao , Yin, Baocai . Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot . | IEEE ROBOTICS AND AUTOMATION LETTERS , 2022 , 7 (4) , 8574-8581 .
Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition SCIE
Journal article | 2021, 15 (2) | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA

Abstract :

Multi-view human action recognition remains a challenging problem due to large view changes. In this article, we propose a transfer learning-based framework, the transferable dictionary learning and view adaptation (TDVA) model, for multi-view human action recognition. In the transferable dictionary learning phase, TDVA learns a set of view-specific transferable dictionaries that enable the same actions from different views to share the same sparse representations, which transfers features of actions from different views to an intermediate domain. In the view adaptation phase, TDVA comprehensively analyzes the global, local, and individual characteristics of samples, and jointly learns balanced distribution adaptation, locality preservation, and discrimination preservation, aiming to transfer sparse features of actions of different views from the intermediate domain to a common domain. In other words, TDVA progressively bridges the distribution gap among actions from various views through these two phases. Experimental results on the IXMAS, ACT4(2), and NUCLA action datasets demonstrate that TDVA outperforms state-of-the-art methods.
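The core idea of the first phase, view-specific dictionaries constrained to share one code matrix, can be illustrated with a toy alternating least-squares sketch. This is an assumption-laden simplification: the paper uses sparse coding, which this sketch omits, and all names here are hypothetical:

```python
import numpy as np

def shared_code_dictionaries(views, n_atoms, n_iter=20, seed=0):
    """Toy illustration of TDVA's first phase: learn one view-specific
    dictionary D_v per view such that every view is reconstructed from a
    single shared code matrix S. Plain alternating least squares; the
    sparsity constraint of the actual method is omitted."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((n_atoms, views[0].shape[1]))
    for _ in range(n_iter):
        # update each view-specific dictionary for the current shared codes
        Ds = [v @ np.linalg.pinv(S) for v in views]
        # update the shared codes from the stacked least-squares problem
        S = np.linalg.pinv(np.vstack(Ds)) @ np.vstack(views)
    return Ds, S
```

Because the codes S are common to all views, features from every view land in the same intermediate representation, which is the bridging effect the abstract describes.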

Keyword :

sparse representation; action recognition; transfer learning; multi-view

Cite:


GB/T 7714 Sun, Bin , Kong, Dehui , Wang, Shaofan et al. Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition [J]. | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA , 2021 , 15 (2) .
MLA Sun, Bin et al. "Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition" . | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 15 . 2 (2021) .
APA Sun, Bin , Kong, Dehui , Wang, Shaofan , Wang, Lichun , Yin, Baocai . Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition . | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA , 2021 , 15 (2) .
Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition SCIE
Journal article | 2021, 23, 2857-2867 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract :

Sparse representation is a powerful tool in many visual applications, since images can be represented effectively and efficiently with a dictionary. Conventional dictionary learning methods usually treat each training sample equally, which degrades recognition performance when samples from the same category are widely dispersed. This is because the dictionary focuses more on easy samples (highly clustered samples), while hard samples (widely distributed samples) are easily ignored. As a result, test samples that exhibit high dissimilarity to most intra-category samples tend to be misclassified. To circumvent this issue, this paper proposes a simple and effective hardness-aware dictionary learning (HADL) method, which treats training samples discriminatively based on the AdaBoost mechanism. Instead of learning one optimal dictionary, HADL learns a set of dictionaries and corresponding sub-classifiers jointly in an iterative fashion. In each iteration, HADL learns a dictionary and a sub-classifier, and updates the weights based on the classification errors given by the current sub-classifier: correctly classified samples are assigned small weights, while incorrectly classified samples are assigned large weights. Through this iterated learning procedure, the hard samples become associated with different dictionaries. Finally, HADL combines the learned sub-classifiers linearly to form a strong classifier, which effectively improves the overall recognition accuracy. Experiments on well-known benchmarks show that HADL achieves promising classification results.
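The reweighting step described in the abstract follows the standard AdaBoost mechanism, which can be sketched as below. This is an illustrative sketch of the generic AdaBoost update, not HADL's exact implementation; function names are hypothetical:

```python
import numpy as np

def classifier_alpha(err):
    """Standard AdaBoost vote weight for a sub-classifier whose
    weighted error is `err` (assumed 0 < err < 0.5)."""
    return 0.5 * np.log((1.0 - err) / err)

def update_sample_weights(weights, correct, alpha):
    """AdaBoost-style reweighting of the kind HADL uses between
    iterations: correctly classified samples are down-weighted and
    misclassified ('hard') samples are up-weighted, so the next
    dictionary/sub-classifier pair focuses on the hard samples."""
    w = weights * np.exp(np.where(correct, -alpha, alpha))
    return w / w.sum()  # renormalize to a probability distribution
```

A known property of this update is that, after renormalization, the misclassified samples carry half of the total weight, which is what forces the next dictionary to concentrate on them.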

Keyword :

dictionaries; training; classification; boosting; visualization; AdaBoost; dictionary learning; task analysis; face recognition; sparse representation

Cite:


GB/T 7714 Wang, Lichun , Li, Shuang , Wang, Shaofan et al. Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2021 , 23 : 2857-2867 .
MLA Wang, Lichun et al. "Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition" . | IEEE TRANSACTIONS ON MULTIMEDIA 23 (2021) : 2857-2867 .
APA Wang, Lichun , Li, Shuang , Wang, Shaofan , Kong, Dehui , Yin, Baocai . Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition . | IEEE TRANSACTIONS ON MULTIMEDIA , 2021 , 23 , 2857-2867 .
Research on Ideological and Political Education in Science and Engineering Curricula Based on Computer Graphics Teaching Practice (基于计算机图形学教学实践的理工科课程思政建设研究)
Journal article | 2021, (09), 15-18 | 计算机教育 (Computer Education)

Abstract :

Given the difficulty of implementing subtle, unobtrusive ideological and political education within the teaching content and methods of science and engineering courses, this paper takes computer graphics as an example and proposes an overall approach to curriculum-based ideological and political construction. Building on a description of the specific construction process and its results, it distills overall construction principles and presents a generalized strategy for ideological and political construction in science and engineering curricula, providing ideas for carrying out such construction more broadly and effectively in science and engineering universities.

Keyword :

computer graphics; curriculum-based ideological and political education; scientific methodology; ideological and political construction of science and engineering curricula; connotation and extension

Cite:


GB/T 7714 孔德慧 , 李敬华 , 王立春 et al. 基于计算机图形学教学实践的理工科课程思政建设研究 [J]. | 计算机教育 , 2021 , (09) : 15-18 .
MLA 孔德慧 et al. "基于计算机图形学教学实践的理工科课程思政建设研究" . | 计算机教育 09 (2021) : 15-18 .
APA 孔德慧 , 李敬华 , 王立春 , 张勇 , 孙艳丰 . 基于计算机图形学教学实践的理工科课程思政建设研究 . | 计算机教育 , 2021 , (09) , 15-18 .
Pedestrian Detection Based on Two-Stream UDN SCIE
Journal article | 2020, 10 (5) | APPLIED SCIENCES-BASEL
WoS CC Cited Count: 2

Abstract :

Featured Application: This paper can be applied to autonomous vehicles and driver assistance systems.

Pedestrian detection is the core of the driver assistance system, which collects road conditions through the radars or cameras on the vehicle, judges whether there is a pedestrian in front of the vehicle, supports decisions such as raising an alarm, automatically slowing down, or emergency stopping to keep pedestrians safe, and improves security while the vehicle is moving. Suffering from weather, lighting, clothing, large pose variations, and occlusion, current pedestrian detection is still some distance from practical application. In recent years, deep networks have shown excellent performance in image detection, recognition, and classification. Some researchers have employed deep networks for pedestrian detection and achieved great progress, but deep networks need huge computational resources, which makes them difficult to put into practical use. In real autonomous-vehicle scenarios, computation ability is limited; thus, shallow networks such as UDN (Unified Deep Networks) are a better choice, since they perform well while consuming fewer computation resources. Based on UDN, this paper proposes a new network model named two-stream UDN, which adds another branch to resolve traditional UDN's inability to distinguish trees and telegraph poles from pedestrians. The new branch accepts the upper third of the pedestrian image as input; this partial image has less deformation, stable features, and characteristics more distinguishable from other objects. The proposed two-stream UDN is fed multi-input features, including the HOG (Histogram of Oriented Gradients) feature, Sobel feature, color feature, and foreground regions extracted by the GrabCut segmentation algorithm. Compared with the original input of UDN, the multi-input features are more conducive to pedestrian detection, since the fused HOG features and salient objects are more significant for the task. Two-stream UDN is trained in two steps: first, the two sub-networks are trained until they converge; then, the results of the two subnets are fused as the final result and fed back to the two subnets to fine-tune the network parameters synchronously. To improve performance, Swish is adopted as the activation function to obtain faster training, and positive samples are mirrored and rotated by small angles to balance the positive and negative samples.
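The Swish activation adopted here has a simple closed form, x · sigmoid(x); a minimal sketch:

```python
import numpy as np

def swish(x):
    """Swish activation: x * sigmoid(x) = x / (1 + exp(-x)).
    Smooth and non-monotonic, unlike ReLU; adopted in the paper
    for faster training."""
    return x / (1.0 + np.exp(-x))
```

For large positive inputs Swish approaches the identity, and for large negative inputs it approaches zero, so it behaves like a smoothed ReLU.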

Keyword :

two-stream nets; pedestrian detection; network training; Unified Deep Net

Cite:


GB/T 7714 Wang, Wentong , Wang, Lichun , Ge, Xufei et al. Pedestrian Detection Based on Two-Stream UDN [J]. | APPLIED SCIENCES-BASEL , 2020 , 10 (5) .
MLA Wang, Wentong et al. "Pedestrian Detection Based on Two-Stream UDN" . | APPLIED SCIENCES-BASEL 10 . 5 (2020) .
APA Wang, Wentong , Wang, Lichun , Ge, Xufei , Li, Jinghua , Yin, Baocai . Pedestrian Detection Based on Two-Stream UDN . | APPLIED SCIENCES-BASEL , 2020 , 10 (5) .
Matrix-variate variational auto-encoder with applications to image process SCIE
Journal article | 2020, 67 | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION
WoS CC Cited Count: 1

Abstract :

Variational Auto-Encoder (VAE) is an important probabilistic technique for modeling 1D vectorial data. However, applying the VAE model to 2D images requires vectorization, which may incur the curse of dimensionality and lose valuable spatial information. To avoid these problems, we propose a novel VAE model based on matrix variables, named the Matrix-variate Variational Auto-Encoder (MVVAE). In this model, the input, hidden, and latent variables are all in matrix form, so the inherent spatial structure of 2D images can be better maintained and utilized. In particular, the latent variable is assumed to follow a matrix Gaussian distribution, which is more suitable for describing 2D images. The variational inference process for solving the weights and the posterior of the latent variable is given. Experiments are designed for three real-world applications: reconstruction, denoising, and completion. The results demonstrate that MVVAE performs better than VAE and other probabilistic methods for modeling and processing 2D data.
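The matrix Gaussian assumed for the latent variable can be sampled with the standard construction below. This is a generic sketch of the matrix normal distribution, not MVVAE's inference code; the function name is hypothetical:

```python
import numpy as np

def sample_matrix_gaussian(M, U, V, rng=None):
    """Draw a sample from the matrix Gaussian MN(M, U, V): M is the
    n x p mean matrix, U an n x n row covariance, V a p x p column
    covariance. Uses the standard construction X = M + A Z B^T with
    U = A A^T, V = B B^T, and Z an i.i.d. standard-normal matrix."""
    if rng is None:
        rng = np.random.default_rng(0)
    A = np.linalg.cholesky(U)  # row-covariance factor
    B = np.linalg.cholesky(V)  # column-covariance factor
    Z = rng.standard_normal(M.shape)
    return M + A @ Z @ B.T
```

The row covariance U couples entries within a column and V couples entries within a row, which is why this distribution captures 2D spatial structure that a vectorized Gaussian would flatten away.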

Keyword :

image denoising; face completion; variational inference; variational autoencoder; matrix Gaussian distribution

Cite:


GB/T 7714 Li, Jinghua , Yan, Huixia , Gao, Junbin et al. Matrix-variate variational auto-encoder with applications to image process [J]. | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION , 2020 , 67 .
MLA Li, Jinghua et al. "Matrix-variate variational auto-encoder with applications to image process" . | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION 67 (2020) .
APA Li, Jinghua , Yan, Huixia , Gao, Junbin , Kong, Dehui , Wang, Lichun , Wang, Shaofan et al. Matrix-variate variational auto-encoder with applications to image process . | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION , 2020 , 67 .
Discriminative matrix-variate restricted Boltzmann machine classification model SCIE
Journal article | 2020, 27 (5), 3621-3633 | WIRELESS NETWORKS

Abstract :

Matrix-variate Restricted Boltzmann Machine (MVRBM), a variant of the Restricted Boltzmann Machine, has demonstrated excellent capacity for modeling matrix variables. However, MVRBM is an unsupervised generative model, usually used for feature extraction or for initializing deep neural networks; when it is used for classification, additional classifiers must be added. To make MVRBM itself supervised, this paper proposes improved MVRBMs for classification, which can classify 2D data directly and accurately. On one hand, a classification constraint is added to MVRBM to obtain the Matrix-variate Restricted Boltzmann Machine Classification Model (ClassMVRBM). On the other hand, a Fisher discriminant analysis criterion for matrix-style variables is proposed and applied to the hidden variable, so the extracted features are more discriminative and further enhance the classification performance of ClassMVRBM. We call this model the Matrix-variate Restricted Boltzmann Machine Classification Model with Fisher discriminant analysis (ClassMVRBM-MVFDA). Experimental results on several publicly available databases demonstrate the superiority of the proposed models: the image classification accuracy of ClassMVRBM is higher than that of the conventional unsupervised RBM, its variants, and the supervised Restricted Boltzmann Machine Classification Model (ClassRBM) for vector variables. In particular, the proposed ClassMVRBM-MVFDA performs better than the supervised ClassMVRBM and the vectorial RBM-FDA.

Keyword :

MVRBM; RBM; ClassMVRBM-MVFDA; ClassMVRBM

Cite:


GB/T 7714 Li, Jinghua , Tian, Pengyu , Kong, Dehui et al. Discriminative matrix-variate restricted Boltzmann machine classification model [J]. | WIRELESS NETWORKS , 2020 , 27 (5) : 3621-3633 .
MLA Li, Jinghua et al. "Discriminative matrix-variate restricted Boltzmann machine classification model" . | WIRELESS NETWORKS 27 . 5 (2020) : 3621-3633 .
APA Li, Jinghua , Tian, Pengyu , Kong, Dehui , Wang, Lichun , Wang, Shaofan , Yin, Baocai . Discriminative matrix-variate restricted Boltzmann machine classification model . | WIRELESS NETWORKS , 2020 , 27 (5) , 3621-3633 .
Effective human action recognition using global and local offsets of skeleton joints SCIE
Journal article | 2019, 78 (5), 6329-6353 | MULTIMEDIA TOOLS AND APPLICATIONS
WoS CC Cited Count: 6

Abstract :

Human action recognition based on 3D skeleton joints is an important yet challenging task. While much research is devoted to 3D action recognition, it mainly suffers from two problems: complex model representation and low implementation efficiency. To tackle these problems, we propose an effective and efficient framework for 3D action recognition using a global-and-local histogram representation model. Our method consists of a global-and-local featuring phase, a saturation-based histogram representation phase, and a classification phase. The featuring phase captures the global and local features of each action sequence using, respectively, the joint displacement between the current frame and the first frame, and the joint displacement between pairwise fixed-skip frames. The saturation-based histogram representation phase captures the histogram representation of each joint, considering the motion independence of joints and the saturation of each histogram bin. The classification phase measures the histogram-to-class distance of each joint. In addition, we produce a novel action dataset called the BJUT Kinect dataset, which consists of multi-period motion clips with intra-class variations. We compare our method with many state-of-the-art methods on the BJUT Kinect, UCF Kinect, Florence 3D action, MSR-Action3D, and NTU RGB+D datasets. The results show that our method achieves both higher accuracy and higher efficiency for 3D action recognition.
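The global and local offset features described in the featuring phase reduce to two displacement computations, sketched below under the assumption that a sequence is an array of per-frame joint coordinates; the function name and skip value are illustrative:

```python
import numpy as np

def offset_features(seq, skip=5):
    """Global and local joint offsets as described in the abstract:
    the global feature is each frame's joint displacement from the
    first frame; the local feature is the displacement between frames
    a fixed skip apart. `seq` has shape (T, J, 3): T frames, J joints,
    xyz coordinates. Sketch of the featuring phase only."""
    global_feat = seq - seq[0]             # offset from the first frame
    local_feat = seq[skip:] - seq[:-skip]  # offset between fixed-skip frames
    return global_feat, local_feat
```

The global offsets summarize overall body displacement over the sequence, while the fixed-skip offsets capture short-term motion, which is why the two are combined.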

Keyword :

action recognition; offsets; skeleton joints; histogram representation; Naive-Bayes-Nearest-Neighbor

Cite:


GB/T 7714 Sun, Bin , Kong, Dehui , Wang, Shaofan et al. Effective human action recognition using global and local offsets of skeleton joints [J]. | MULTIMEDIA TOOLS AND APPLICATIONS , 2019 , 78 (5) : 6329-6353 .
MLA Sun, Bin et al. "Effective human action recognition using global and local offsets of skeleton joints" . | MULTIMEDIA TOOLS AND APPLICATIONS 78 . 5 (2019) : 6329-6353 .
APA Sun, Bin , Kong, Dehui , Wang, Shaofan , Wang, Lichun , Wang, Yuping , Yin, Baocai . Effective human action recognition using global and local offsets of skeleton joints . | MULTIMEDIA TOOLS AND APPLICATIONS , 2019 , 78 (5) , 6329-6353 .
Gesture Recognition Based on a Two-Channel Hybrid 3D-2D RBM Model (基于双通道混合3D-2D RBM模型的手势识别) CSCD PKU
Journal article | 2019, 45 (05), 428-435 | 北京工业大学学报 (Journal of Beijing University of Technology)
CNKI Cited Count: 4

Abstract :

To capture the inherent spatio-temporal representation of gestures in video-based dynamic gesture recognition, a 3D-2D restricted Boltzmann machine (RBM) model is proposed to model the spatio-temporal correlations in gesture video data. In particular, to better describe the spatio-temporal features of dynamic gestures, a hybrid feature representation combining traditional hand-crafted features with the 3D-2D RBM is proposed: Canny-2D HOG appearance features and optical-flow-2D HOG motion features are first extracted, and the 3D-2D RBM then learns latent high-level spatio-temporal semantic features of the dynamic gestures, improving their descriptive power. A two-channel fusion decision combining gesture appearance discrimination and motion discrimination improves on single-channel classification. Experiments on the public Cambridge gesture dataset verify the effectiveness and superiority of the proposed method…

Keyword :

dynamic gesture recognition; histogram of oriented gradients; 3D-2D restricted Boltzmann machine; optical flow

Cite:


GB/T 7714 李敬华 , 淮华瑞 , 孔德慧 et al. 基于双通道混合3D-2D RBM模型的手势识别 [J]. | 北京工业大学学报 , 2019 , 45 (05) : 428-435 .
MLA 李敬华 et al. "基于双通道混合3D-2D RBM模型的手势识别" . | 北京工业大学学报 45 . 05 (2019) : 428-435 .
APA 李敬华 , 淮华瑞 , 孔德慧 , 王立春 , 孙艳丰 . 基于双通道混合3D-2D RBM模型的手势识别 . | 北京工业大学学报 , 2019 , 45 (05) , 428-435 .
Matrix-Variate Restricted Boltzmann Machine Classification Model EI
Conference paper | 2019, 295 LNICST, 486-497 | 11th EAI International Conference on Simulation Tools and Techniques, SIMUTools 2019

Abstract :

Recently, the Restricted Boltzmann Machine (RBM) has demonstrated excellent capacity for modeling vector variables. A variant of RBM, the Matrix-variate Restricted Boltzmann Machine (MVRBM), extends this ability and can model matrix-variate data directly without a vectorization process. However, MVRBM is an unsupervised generative model, usually used for feature extraction or for initializing deep neural networks; when it is used for classification, additional classifiers are necessary. This paper proposes a Matrix-variate Restricted Boltzmann Machine Classification Model (ClassMVRBM) to classify 2D data directly. In the novel ClassMVRBM, a classification constraint is introduced to MVRBM: on one hand, the features extracted by MVRBM become more discriminative; on the other hand, the model can be used for classification directly. Experiments on several publicly available databases demonstrate that the classification performance of ClassMVRBM is largely improved, yielding higher image classification accuracy than the conventional unsupervised RBM, its variants, and the Restricted Boltzmann Machine Classification Model (ClassRBM).

Keyword :

matrix algebra; deep neural networks; classification (of information); image enhancement

Cite:


GB/T 7714 Li, Jinghua , Tian, Pengyu , Kong, Dehui et al. Matrix-Variate Restricted Boltzmann Machine Classification Model [C] . 2019 : 486-497 .
MLA Li, Jinghua et al. "Matrix-Variate Restricted Boltzmann Machine Classification Model" . (2019) : 486-497 .
APA Li, Jinghua , Tian, Pengyu , Kong, Dehui , Wang, Lichun , Wang, Shaofan , Yin, Baocai . Matrix-Variate Restricted Boltzmann Machine Classification Model . (2019) : 486-497 .

Address: BJUT Library (100 Pingleyuan, Chaoyang District, Beijing 100124, China). Contact: 010-67392185
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.