Abstract:
Many detection methods used in robotic grasping tasks do not fully consider the multiscale information of graspable objects, which reduces the accuracy of grasp detection in complex scenarios. To overcome this problem, we propose a new hybrid grasp-detection network with a skip-connected encoder-decoder structure, called HBGNet. To realize the complete fusion of feature information across scales, a multiscale skip-connection mechanism is designed that adds the features of the various convolutional neural network (CNN) blocks in the encoder to the corresponding concat blocks in the decoder. The HBGNet encoder is designed as a multiscale hybrid (MCT) encoder that combines multiple CNN blocks and transformer layers to acquire high- and low-level features simultaneously. The proposed HBGNet was trained and tested on the publicly available Cornell and Jacquard grasping datasets, achieving accuracies of 99.75% and 97.36%, respectively. Its performance was further demonstrated through comparison and ablation experiments. A real-world grasping experiment on the AUBO-i5 robotic platform verified the generalization ability of HBGNet, with an average grasp success rate of 97.1%. The experimental results indicate that HBGNet can fully acquire multiscale information about grasped objects using hybrid networks and can effectively complete grasp-detection tasks in cluttered scenes in a generalizable manner.
Source:
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
ISSN: 0018-9456
Year: 2025
Volume: 74
JCR Impact Factor: 5.600 (JCR@2022)
ESI Highly Cited Papers on the List: 0