Abstract:
Although robotic grippers are now widely used in industry, most still lack the tactile perception needed for dexterous manipulation, such as grasping an unknown object with appropriate force. To give grippers access to multiple types of tactile information, we combine the gripper with a dual-modal vision-based tactile sensor in our experiments. Unlike existing texture recognition experiments, we build our own texture dataset of 12 kinds of samples using the novel tactile transducer. We then compare K-Nearest Neighbor (KNN) with a Residual Network (ResNet); the experimental results show that the accuracy of KNN is only 66.11%, while the accuracy of ResNet, based on a deep convolutional neural network, reaches 100.00%. In addition, to detect the contact force, we exploit the nonlinear fitting capability of a BP neural network to establish the mapping between the two-dimensional displacement image of the markers and the three-dimensional (3D) force vector. Experiments demonstrate that the sensor predicts the force within a 4% margin of error. © 2021 IEEE.
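The force-estimation idea in the abstract can be sketched in a few lines: a BP (backpropagation-trained) neural network regresses the 3D force vector from a flattened vector of 2D marker displacements. This is an illustrative sketch only, not the authors' code: the marker count, network size, and the synthetic displacement/force data below are all assumptions introduced for demonstration; the paper trains on real marker-displacement images from the sensor.

```python
# Illustrative sketch (not the paper's implementation): map 2D marker
# displacements to a 3D force vector with a small BP neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical setup: 8 tracked markers, each contributing a (dx, dy)
# displacement, flattened into a 16-dimensional input vector per sample.
n_samples, n_markers = 500, 8
X = rng.normal(size=(n_samples, 2 * n_markers))

# Synthetic ground truth: an arbitrary nonlinear relation standing in for
# the real displacement-to-force physics of the sensor.
W = rng.normal(size=(2 * n_markers, 3))
y = np.tanh(X @ W)  # columns play the role of (Fx, Fy, Fz)

# One hidden layer trained by backpropagation, i.e. a classic BP network.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])

pred = model.predict(X[400:])
print(pred.shape)  # one 3D force estimate per held-out sample
```

In the paper's setting, `X` would come from tracking marker positions in the tactile images, and `y` from a calibrated force/torque reference during data collection.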
Year: 2021
Page: 19-25
Language: English
ESI Highly Cited Papers on the List: 0