Query:
Scholar name: 王立春
Abstract :
When a robot carries out a task, selecting an appropriate tool is necessary. Current research ignores the fine-grained characteristics of tasks and mainly focuses on whether the task can be completed; little consideration is given to the object being manipulated, which affects the quality of task completion. To support task-oriented fine-grained tool recommendation, this paper proposes a Fine-grained Tool-Task Graph (FTTG), built on common sense knowledge, to describe the multi-granularity semantics of tasks, tools, manipulated objects, and the relationships among them. According to FTTG, a Fine-grained Tool-Task (FTT) dataset is constructed by labeling images of tools and manipulated objects with the defined semantics. A baseline method named Fine-grained Tool Recommendation Network (FTR-Net) is also proposed. FTR-Net gives coarse-grained and fine-grained semantic predictions by simultaneously learning the common and special features of tools and manipulated objects. At the same time, FTR-Net constrains the feature distance of a well-matched tool-object pair to be smaller than that of an unmatched pair. This constraint, together with the special features, enables FTR-Net to provide fine-grained tool recommendations; together with the common features, it enables coarse-grained recommendations when the fine-grained tools are not available. Experiments show that FTR-Net recommends tools consistent with common sense both on the test dataset and in real situations.
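As a rough illustration of the distance constraint described in the abstract, the PyTorch sketch below shows a triplet-style margin loss that pushes the feature distance of a matched tool-object pair below that of an unmatched pair; the function name, margin value, and the assumption that the features come from the network's two branches are ours, not the authors' code.

```python
# Minimal sketch (not the authors' implementation) of the matched-vs-unmatched
# distance constraint: d(tool, matched object) + margin <= d(tool, unmatched object).
import torch
import torch.nn.functional as F

def tool_object_margin_loss(tool_feat, matched_obj_feat, unmatched_obj_feat, margin=1.0):
    d_pos = F.pairwise_distance(tool_feat, matched_obj_feat)    # matched pair distance
    d_neg = F.pairwise_distance(tool_feat, unmatched_obj_feat)  # unmatched pair distance
    return F.relu(d_pos - d_neg + margin).mean()                # hinge on the gap
```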
Keyword :
data sets for robotic vision; deep learning for visual perception; computer vision for automation
Cite:
GB/T 7714 | Xin, Jianjia , Wang, Lichun , Wang, Shaofan et al. Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot [J]. | IEEE ROBOTICS AND AUTOMATION LETTERS , 2022 , 7 (4) : 8574-8581 . |
MLA | Xin, Jianjia et al. "Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot" . | IEEE ROBOTICS AND AUTOMATION LETTERS 7 . 4 (2022) : 8574-8581 . |
APA | Xin, Jianjia , Wang, Lichun , Wang, Shaofan , Liu, Yukun , Yang, Chao , Yin, Baocai . Recommending Fine-Grained Tool Consistent With Common Sense Knowledge for Robot . | IEEE ROBOTICS AND AUTOMATION LETTERS , 2022 , 7 (4) , 8574-8581 . |
Abstract :
Multi-view human action recognition remains a challenging problem due to large view changes. In this article, we propose a transfer learning-based framework, the transferable dictionary learning and view adaptation (TDVA) model, for multi-view human action recognition. In the transferable dictionary learning phase, TDVA learns a set of view-specific transferable dictionaries that enable the same actions from different views to share the same sparse representations, transferring action features from different views into an intermediate domain. In the view adaptation phase, TDVA comprehensively analyzes the global, local, and individual characteristics of samples and jointly learns balanced distribution adaptation, locality preservation, and discrimination preservation, aiming to transfer the sparse action features of different views from the intermediate domain to a common domain. In other words, TDVA progressively bridges the distribution gap among actions from various views through these two phases. Experimental results on the IXMAS, ACT4^2, and NUCLA action datasets demonstrate that TDVA outperforms state-of-the-art methods.
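As a hedged sketch of the first (transferable dictionary learning) phase, the NumPy code below alternates an ISTA update of a sparse code matrix Z shared by all views with least-squares updates of the view-specific dictionaries D_v, i.e. it minimizes sum_v ||X_v - D_v Z||_F^2 + lam*||Z||_1; the solver, atom count, and regularization weight are assumptions, not the paper's algorithm.

```python
# Views are feature matrices X_v of shape (d_v, n) describing the same n action
# samples observed from different views; they share one sparse code matrix Z.
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def shared_sparse_coding(views, n_atoms=64, lam=0.1, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    n_samples = views[0].shape[1]
    dicts = [rng.standard_normal((X.shape[0], n_atoms)) for X in views]
    Z = np.zeros((n_atoms, n_samples))
    for _ in range(iters):
        # ISTA step on the shared sparse codes Z (step size from the Lipschitz constant)
        step = 1.0 / max(sum(np.linalg.norm(D, 2) ** 2 for D in dicts), 1e-12)
        grad = sum(D.T @ (D @ Z - X) for D, X in zip(dicts, views))
        Z = soft_threshold(Z - step * grad, step * lam)
        # least-squares update of each view-specific dictionary, columns normalized
        G = Z @ Z.T + 1e-6 * np.eye(n_atoms)
        dicts = [X @ Z.T @ np.linalg.inv(G) for X in views]
        dicts = [D / np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12) for D in dicts]
    return dicts, Z
```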
Keyword :
sparse representation; action recognition; transfer learning; multi-view
Cite:
GB/T 7714 | Sun, Bin , Kong, Dehui , Wang, Shaofan et al. Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition [J]. | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA , 2021 , 15 (2) . |
MLA | Sun, Bin et al. "Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition" . | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA 15 . 2 (2021) . |
APA | Sun, Bin , Kong, Dehui , Wang, Shaofan , Wang, Lichun , Yin, Baocai . Joint Transferable Dictionary Learning and View Adaptation for Multi-view Human Action Recognition . | ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA , 2021 , 15 (2) . |
Abstract :
Sparse representation is a powerful tool in many visual applications, since images can be represented effectively and efficiently with a dictionary. Conventional dictionary learning methods usually treat each training sample equally, which degrades recognition performance when samples from the same category are widely dispersed. This is because the dictionary focuses more on easy samples (highly clustered samples), while hard samples (widely distributed samples) are easily ignored. As a result, test samples that are highly dissimilar to most intra-category samples tend to be misclassified. To circumvent this issue, this paper proposes a simple and effective hardness-aware dictionary learning (HADL) method, which treats training samples discriminatively based on the AdaBoost mechanism. Instead of learning one optimal dictionary, HADL learns a set of dictionaries and corresponding sub-classifiers jointly in an iterative fashion. In each iteration, HADL learns a dictionary and a sub-classifier and updates the sample weights based on the classification errors of the current sub-classifier: correctly classified samples receive small weights, while incorrectly classified samples receive large weights. Through this iterated learning procedure, hard samples are associated with different dictionaries. Finally, HADL combines the learned sub-classifiers linearly to form a strong classifier, which effectively improves the overall recognition accuracy. Experiments on well-known benchmarks show that HADL achieves promising classification results.
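The boosting scaffold the abstract describes can be sketched as below; sklearn's RidgeClassifier stands in for the (dictionary, sub-classifier) pair learned in each round, and the round count and SAMME-style updates are assumptions rather than the paper's dictionary-learning solver.

```python
# Weighted rounds: misclassified (hard) samples get larger weights, and the
# sub-classifiers are combined linearly with weights alpha, as in AdaBoost.
import numpy as np
from sklearn.linear_model import RidgeClassifier

def boosted_ensemble(X, y, n_rounds=5):
    classes = np.unique(y)
    w = np.full(len(y), 1.0 / len(y))            # equal sample weights to start
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = RidgeClassifier().fit(X, y, sample_weight=w)   # stand-in sub-learner
        miss = clf.predict(X) != y
        err = np.clip(np.average(miss, weights=w), 1e-10, 1 - 1e-10)
        alpha = np.log((1 - err) / err) + np.log(len(classes) - 1)  # multi-class SAMME
        w = w * np.exp(alpha * miss)             # hard samples grow, easy ones shrink
        w = w / w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas, classes

def predict_ensemble(learners, alphas, classes, X):
    votes = np.zeros((len(X), len(classes)))
    for clf, a in zip(learners, alphas):
        votes[np.arange(len(X)), np.searchsorted(classes, clf.predict(X))] += a
    return classes[votes.argmax(axis=1)]
```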
Keyword :
dictionaries; training; classification; boosting; visualization; AdaBoost; dictionary learning; task analysis; face recognition; sparse representation
Cite:
GB/T 7714 | Wang, Lichun , Li, Shuang , Wang, Shaofan et al. Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2021 , 23 : 2857-2867 . |
MLA | Wang, Lichun et al. "Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition" . | IEEE TRANSACTIONS ON MULTIMEDIA 23 (2021) : 2857-2867 . |
APA | Wang, Lichun , Li, Shuang , Wang, Shaofan , Kong, Dehui , Yin, Baocai . Hardness-Aware Dictionary Learning: Boosting Dictionary for Recognition . | IEEE TRANSACTIONS ON MULTIMEDIA , 2021 , 23 , 2857-2867 . |
Abstract :
In view of the difficulty of weaving ideological and political education into the content and teaching methods of science and engineering courses in a subtle, unobtrusive way, this paper takes computer graphics as an example and proposes an overall approach to such curriculum construction. Based on a description of the concrete construction process and its results, it distills overall construction principles, presents a generalized strategy for ideological and political education in science and engineering courses, and provides ideas for carrying out this construction more widely and effectively in science and engineering institutions.
Keyword :
computer graphics; curriculum-based ideological and political education; scientific methodology; construction of ideological and political education in science and engineering courses; connotation and extension
Cite:
GB/T 7714 | 孔德慧 , 李敬华 , 王立春 et al. 基于计算机图形学教学实践的理工科课程思政建设研究 [J]. | 计算机教育 , 2021 (09) : 15-18 . |
MLA | 孔德慧 et al. "基于计算机图形学教学实践的理工科课程思政建设研究" . | 计算机教育 09 (2021) : 15-18 . |
APA | 孔德慧 , 李敬华 , 王立春 , 张勇 , 孙艳丰 . 基于计算机图形学教学实践的理工科课程思政建设研究 . | 计算机教育 , 2021 (09) , 15-18 . |
Abstract :
A fine-grained tool recommendation method and apparatus based on an interaction-task knowledge graph. The method recommends tools for fine-grained tasks and can effectively retrieve a substitute tool when the optimal tool is not available. The method comprises: (1) building an Interaction Task Knowledge Graph (ITKG) that defines the multi-granularity semantics of interaction tasks, tools, and manipulated objects; (2) recommending tools adapted to fine-grained tasks through an interaction tool recommendation network, IT-Net; (3) constraining the coarse-grained and fine-grained semantic prediction losses of tools and manipulated objects so that IT-Net learns their common and special features; (4) constraining the embedding-feature distance between a tool and a manipulated object that match a fine-grained task to be smaller than the distance for a tool and object that do not match, so that IT-Net learns the feature relationships of tools and manipulated objects adapted to fine-grained tasks.
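A rough PyTorch sketch of step (3) is given below: a shared feature is split into a common branch used for the coarse-grained label and a special branch that, combined with the common feature, predicts the fine-grained label. The module name, layer sizes, and use of plain linear layers are our assumptions, not the patented network.

```python
import torch
import torch.nn as nn

class CoarseFineHead(nn.Module):
    """Hypothetical two-branch head: common features drive coarse-grained prediction,
    common + special features drive fine-grained prediction."""
    def __init__(self, feat_dim, n_coarse, n_fine, hidden=256):
        super().__init__()
        self.common = nn.Linear(feat_dim, hidden)     # common (shared) feature
        self.special = nn.Linear(feat_dim, hidden)    # special (fine-grained) feature
        self.coarse_cls = nn.Linear(hidden, n_coarse)
        self.fine_cls = nn.Linear(2 * hidden, n_fine)

    def forward(self, feat):
        c = torch.relu(self.common(feat))
        s = torch.relu(self.special(feat))
        coarse_logits = self.coarse_cls(c)
        fine_logits = self.fine_cls(torch.cat([c, s], dim=1))
        return coarse_logits, fine_logits  # both supervised with cross-entropy losses
```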
Cite:
GB/T 7714 | 王立春 , 信建佳 , 王少帆 et al. 基于交互任务知识图谱的细粒度工具推荐方法及装置 : CN202111310620.1[P]. | 2021-11-04 . |
MLA | 王立春 et al. "基于交互任务知识图谱的细粒度工具推荐方法及装置" : CN202111310620.1. | 2021-11-04 . |
APA | 王立春 , 信建佳 , 王少帆 , 李敬华 , 孔德慧 , 尹宝才 . 基于交互任务知识图谱的细粒度工具推荐方法及装置 : CN202111310620.1. | 2021-11-04 . |
Abstract :
A method for mining anomalous resident travel patterns from taxi OD data, belonging to the fields of intelligent transportation and data mining. To better uncover taxi passengers' travel regularities and to dig deeper into anomalous patterns in resident travel, the invention proposes a high-dimensional sparse tensor decomposition approach: multi-dimensional information including time, longitude/latitude, and functional-zone attributes is organized into a tensor model, which is then decomposed into low-rank and sparse components. The key technical problems to be solved include: partitioning the study area into functional zones and assigning the corresponding data to them; organizing the time, longitude/latitude, and functional-zone data into a tensor model; performing a low-rank plus sparse decomposition of the tensor, extracting the low-rank and sparse components and applying Tucker decomposition to each; and visualizing the resulting factor matrices to intuitively present passenger travel patterns.
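A hedged sketch of the decomposition step, assuming the tensorly library: an OD tensor (for example indexed by time, origin zone, and destination zone) is split into a low-rank part for routine travel and a sparse residual for anomalous travel by simple alternation, with Tucker factors available for visualization. The rank, threshold, and iteration count are illustrative choices, not the patented settings.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def low_rank_plus_sparse(T, rank=(4, 6, 6), sparse_thresh=0.1, iters=10):
    """T: a dense numpy OD tensor. Returns the routine part L, the anomalous part S,
    and the Tucker core/factor matrices of L for visualization."""
    S = np.zeros_like(T)
    for _ in range(iters):
        core, factors = tucker(tl.tensor(T - S), rank=rank)        # low-rank (routine) fit
        L = tl.to_numpy(tl.tucker_to_tensor((core, factors)))
        S = np.where(np.abs(T - L) > sparse_thresh, T - L, 0.0)    # keep large residuals as anomalies
    return L, S, core, factors
```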
Cite:
GB/T 7714 | 王立春 , 张彬 , 王少帆 et al. 基于出租车OD数据的非常态居民出行模式挖掘方法 : CN202110120448.7[P]. | 2021-01-28 . |
MLA | 王立春 et al. "基于出租车OD数据的非常态居民出行模式挖掘方法" : CN202110120448.7. | 2021-01-28 . |
APA | 王立春 , 张彬 , 王少帆 , 孔德慧 , 尹宝才 . 基于出租车OD数据的非常态居民出行模式挖掘方法 : CN202110120448.7. | 2021-01-28 . |
Abstract :
The invention relates to a dynamic gesture recognition method based on a 3D-Ghost module that is trained with multiple modalities and tested with a single modality. RGB data and depth data are used to train the overall network, which adopts a parallel two-stream cooperative learning structure intended to improve learning by transferring knowledge between the networks of different modalities: stream m recognizes dynamic gestures from RGB data and stream n from depth data. After training, RGB data can be fed into stream m, or depth data into stream n, for dynamic gesture recognition. Each stream adopts an I3D network improved by adding an attention module, replacing part of the 3D convolution layers with 3D-Ghost modules, and modifying all Inception-V1 sub-modules.
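The 3D-Ghost module mentioned above can be sketched in PyTorch as follows: a primary 3D convolution produces a few intrinsic feature maps and a cheap depthwise 3D convolution generates the remaining "ghost" maps, which is cheaper than a full Conv3d of the same output width; the class name and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn

class Ghost3D(nn.Module):
    """Hypothetical 3D Ghost block: primary conv + cheap depthwise conv, concatenated."""
    def __init__(self, in_ch, out_ch, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_ch = -(-out_ch // ratio)              # ceil(out_ch / ratio) intrinsic maps
        new_ch = init_ch * (ratio - 1)             # ghost maps from the cheap branch
        self.primary = nn.Sequential(
            nn.Conv3d(in_ch, init_ch, kernel_size, padding=kernel_size // 2, bias=False),
            nn.BatchNorm3d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv3d(init_ch, new_ch, dw_size, padding=dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm3d(new_ch), nn.ReLU(inplace=True))
        self.out_ch = out_ch

    def forward(self, x):
        y1 = self.primary(x)
        y2 = self.cheap(y1)
        return torch.cat([y1, y2], dim=1)[:, :self.out_ch]
```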
Cite:
GB/T 7714 | 李敬华 , 刘润泽 , 孔德慧 et al. 一种基于3D-Ghost模块的多模态训练单模态测试的动态手势识别方法 : CN202110544122.7[P]. | 2021-05-19 . |
MLA | 李敬华 et al. "一种基于3D-Ghost模块的多模态训练单模态测试的动态手势识别方法" : CN202110544122.7. | 2021-05-19 . |
APA | 李敬华 , 刘润泽 , 孔德慧 , 王少帆 , 王立春 , 尹宝才 . 一种基于3D-Ghost模块的多模态训练单模态测试的动态手势识别方法 : CN202110544122.7. | 2021-05-19 . |
Abstract :
The invention discloses a Gini-index-guided self-training semantic segmentation method. The Gini index is used as the criterion for selecting more accurate pseudo-labels, introducing more reliable supervision; self-training is then performed with these reliable pseudo-labels. By measuring uncertainty and assigning pseudo-labels accordingly, correct supervision is introduced during training, the gap between the source and target domains is reduced, and semantic labeling accuracy is improved.
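As a minimal sketch of the selection criterion (the threshold and tensor shapes are assumptions), the Gini impurity of each softmax prediction can be computed and only low-impurity predictions kept as pseudo-labels:

```python
import torch

def select_pseudo_labels(probs, gini_threshold=0.2):
    """probs: (N, C) softmax outputs on target-domain samples or pixels.
    Gini impurity 1 - sum_c p_c^2 is low for confident predictions; keep those only."""
    gini = 1.0 - (probs ** 2).sum(dim=1)
    labels = probs.argmax(dim=1)
    mask = gini < gini_threshold          # reliable predictions become pseudo-labels
    return labels, mask
```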
Cite:
GB/T 7714 | 王立春 , 胡玉杰 , 王少帆 et al. 一种基尼指数引导的基于自训练的语义分割方法 : CN202110318561.6[P]. | 2021-03-25 . |
MLA | 王立春 et al. "一种基尼指数引导的基于自训练的语义分割方法" : CN202110318561.6. | 2021-03-25 . |
APA | 王立春 , 胡玉杰 , 王少帆 , 孔德慧 , 李敬华 , 尹宝才 . 一种基尼指数引导的基于自训练的语义分割方法 : CN202110318561.6. | 2021-03-25 . |
Abstract :
The invention discloses an unsupervised domain-adaptive semantic segmentation method: a neural network is trained on source-domain images; the trained network is used to compute pseudo-labels for target-domain images; the network is then retrained with the source-domain images and the pseudo-labeled target-domain images, further improving pseudo-label accuracy and optimizing the network's generalization ability. By using self-training to extract high-confidence target-domain pseudo-labels with the trained network, the method compensates for the lack of supervision in the target domain; compared with other methods, it enriches the information of the target-domain data and strengthens the network's ability to learn from it. The method focuses on class-level domain discrepancy: it measures the class correlations of the predictions on the source and target domains and constrains the two domains' class correlations to be consistent, which reduces the class-level discrepancy between the domains and improves generalization. Its performance surpasses that of other unsupervised domain-adaptive semantic segmentation methods.
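A hedged sketch of the class-level constraint: class-correlation matrices are computed from the softmax predictions on each domain and their difference is penalized. The column normalization and MSE penalty are our choices for illustration, not the patented formulation.

```python
import torch
import torch.nn.functional as F

def class_correlation(probs):
    """probs: (N, C) softmax predictions; returns a (C, C) class-correlation matrix."""
    p = F.normalize(probs, dim=0)         # normalize each class column
    return p.t() @ p

def correlation_consistency_loss(source_probs, target_probs):
    """Encourage the source and target class correlations to agree."""
    return F.mse_loss(class_correlation(source_probs), class_correlation(target_probs))
```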
Cite:
GB/T 7714 | 王立春 , 高宁 , 王少帆 et al. 一种无监督领域自适应语义分割方法 : CN202110026447.6[P]. | 2021-01-08 . |
MLA | 王立春 et al. "一种无监督领域自适应语义分割方法" : CN202110026447.6. | 2021-01-08 . |
APA | 王立春 , 高宁 , 王少帆 , 孔德慧 , 李敬华 , 尹宝才 . 一种无监督领域自适应语义分割方法 : CN202110026447.6. | 2021-01-08 . |
Abstract :
An RGB-D information-complementary semantic segmentation method, belonging to the field of image segmentation. Addressing the problem that existing methods using RGB and depth information consider only one-directional complementation, the invention proposes an RGB-D semantic segmentation network structure in which RGB and depth information complement each other bidirectionally, layer by layer, with the aim of improving semantic segmentation performance.
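A minimal PyTorch sketch of the bidirectional, layer-wise complementation (the module name and 1x1 projections are assumptions): each stream receives a projected residual from the other stream before the next encoder stage.

```python
import torch
import torch.nn as nn

class CrossComplement(nn.Module):
    """Hypothetical bidirectional RGB-D complement block applied at one encoder layer."""
    def __init__(self, channels):
        super().__init__()
        self.rgb_from_depth = nn.Conv2d(channels, channels, kernel_size=1)
        self.depth_from_rgb = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, rgb_feat, depth_feat):
        rgb_out = rgb_feat + self.rgb_from_depth(depth_feat)    # depth complements RGB
        depth_out = depth_feat + self.depth_from_rgb(rgb_feat)  # RGB complements depth
        return rgb_out, depth_out
```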
Cite:
GB/T 7714 | 王立春 , 顾娜娜 , 王少帆 et al. 一种RGB-D信息互补的语义分割方法 : CN202111009283.2[P]. | 2021-08-31 . |
MLA | 王立春 et al. "一种RGB-D信息互补的语义分割方法" : CN202111009283.2. | 2021-08-31 . |
APA | 王立春 , 顾娜娜 , 王少帆 , 杨臣 , 信建佳 , 尹宝才 . 一种RGB-D信息互补的语义分割方法 : CN202111009283.2. | 2021-08-31 . |