Abstract:
Robotic manipulation in object-stacking scenes is one of the most important research topics in robotics. Reasoning about object relationships and deriving an intelligent manipulation order are particularly significant for more advanced interaction between a robot and its environment. However, many existing methods focus on individual object features and ignore contextual information, which makes efficiently reasoning about manipulation relationships a great challenge. In this paper, we introduce a novel graph-based visual manipulation relationship reasoning architecture that directly outputs object relationships and manipulation order. Our model first extracts features and detects objects from RGB images, then adopts a Graph Convolutional Network (GCN) to aggregate contextual information between objects. Moreover, a relationship filtering network is built to prune object pairs before reasoning, improving the efficiency of relation reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning object relationships in object-stacking scenes.
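The abstract outlines three components: per-object feature extraction, GCN-based context aggregation, and a filtering network that prunes object pairs before relation reasoning. The sketch below (PyTorch, not the authors' code) illustrates the latter two steps under stated assumptions; the layer sizes, class names, and overlap-based adjacency are illustrative, not taken from the paper.

# Minimal sketch, assuming 256-d object features from a detector.
# GCNLayer aggregates contextual information over detected objects;
# PairFilter scores (subject, object) pairs so low-scoring pairs can
# be skipped during manipulation-relation reasoning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph-convolution step: X' = ReLU(A_hat @ X @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, feats, adj):
        # feats: (N, in_dim) object features; adj: (N, N) binary edges.
        # Add self-loops and row-normalize so each node averages the
        # features of itself and its neighbors.
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        adj = adj / adj.sum(dim=1, keepdim=True)
        return F.relu(self.linear(adj @ feats))

class PairFilter(nn.Module):
    """Scores every object pair; pairs under a threshold are pruned
    before the (more expensive) relation-reasoning stage."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim),
                                 nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, feats):
        n = feats.size(0)
        # Concatenate features of every (subject, object) pair.
        pairs = torch.cat([feats.unsqueeze(1).expand(n, n, -1),
                           feats.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return torch.sigmoid(self.mlp(pairs)).squeeze(-1)  # (N, N)

# Usage on 5 detected objects (adjacency here is random; in practice
# it could come from, e.g., bounding-box overlap):
feats = torch.randn(5, 256)
adj = (torch.rand(5, 5) > 0.5).float()
ctx = GCNLayer(256, 256)(feats, adj)   # context-aware features
keep = PairFilter(256)(ctx) > 0.5      # candidate pairs to reason over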
Source:
2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)
ISSN: 2161-4393
Year: 2021
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0