Abstract:
In Human-Robot Interaction, the human is at the center of the cooperation process. The robot continuously observes the cooperator's behavior through multi-modal information, infers the human's operation intention, and plans its own interaction trajectory in advance. Understanding human intention and predicting human motion therefore play an important role in human-robot interaction. This paper presents a framework for human intention reasoning and trajectory prediction based on multi-modal information. It consists of two parts. First, the robot infers the human's operation intention by fusing four kinds of information: the human behavior recognized by a GCN-LSTM model, the object categories detected by YOLO, the relations between verbs and the objects and targets of interest extracted by dependency parsing of spoken sentences, and the grasp pose of handheld objects estimated by GGCNN. Then, once the human's intention is understood, KMP (Kernelized Movement Primitives) is applied to the partially observed trajectory the human is currently executing to predict the remainder of the motion, in preparation for robot trajectory planning in subsequent human-robot interaction tasks. We validate the framework on several datasets, and the test results demonstrate its effectiveness. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
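As a rough illustration of the trajectory-prediction step described in the abstract (not the authors' implementation), the core idea behind KMP-style prediction can be sketched with plain kernel ridge regression: fit a time-indexed regressor to a demonstration trajectory, then query it at future time steps once part of a new motion has been observed. All function names, kernel parameters, and the synthetic trajectory below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, ell=0.1):
    # Squared-exponential kernel between two sets of time points.
    d = a[:, None] - b[None, :]
    return np.exp(-(d ** 2) / (2 * ell ** 2))

def fit_kernel_regressor(t_train, y_train, ell=0.1, lam=1e-4):
    # Kernel ridge regression weights: alpha = (K + lam*I)^{-1} y.
    K = rbf_kernel(t_train, t_train, ell)
    return np.linalg.solve(K + lam * np.eye(len(t_train)), y_train)

def predict(t_query, t_train, alpha, ell=0.1):
    # Evaluate the learned trajectory model at new time points.
    return rbf_kernel(t_query, t_train, ell) @ alpha

# Demonstration trajectory: a stand-in for a recorded human reach motion.
t_demo = np.linspace(0.0, 1.0, 50)
y_demo = np.sin(np.pi * t_demo)

alpha = fit_kernel_regressor(t_demo, y_demo)

# After observing the first 40% of a new motion, query the remainder.
t_future = np.linspace(0.4, 1.0, 30)
y_pred = predict(t_future, t_demo, alpha)
```

A full KMP additionally conditions on the observed partial trajectory and propagates covariance; this sketch only conveys the kernel-regression backbone of that prediction step.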
ISSN: 1865-0929
Year: 2023
Volume: 1787 CCIS
Page: 389-399
Language: English
WoS CC Cited Count: 36
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0