Abstract:
Estimating the full 3D hand shape and pose from a single RGB image is essential for virtual reality and human-computer interaction applications, but it remains a challenging task. While current hand reconstruction methods have achieved high accuracy, they often come with heavy computational costs, limiting their ability to perform real-time inference. To tackle this issue, we present a novel learning-based network architecture that consists of two key components: (1) a 3D hand joint position detection module that efficiently extracts joint positions from hand images, and (2) an inverse kinematics module that precisely predicts joint rotations in a single feed-forward pass. Comprehensive evaluations across several datasets demonstrate that our method generates accurate and realistic 3D hand meshes while outperforming existing methods in 3D hand pose estimation. © 2024 IEEE.
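The abstract describes a two-stage pipeline: a joint-position detection module followed by a learned inverse kinematics module that regresses joint rotations in one feed-forward pass. The sketch below illustrates that general structure only; the class names, backbone layers, joint count, and rotation parameterization are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch of a two-stage "joint detection -> learned IK" pipeline.
# All architectural details (layer sizes, 21 joints, 16 rotated joints,
# axis-angle output) are assumptions, not the authors' actual design.
import torch
import torch.nn as nn


class JointDetector(nn.Module):
    """Maps an RGB hand crop to 3D joint positions (hypothetical backbone)."""

    def __init__(self, num_joints=21):
        super().__init__()
        self.num_joints = num_joints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_joints * 3)

    def forward(self, img):
        # img: (B, 3, H, W) -> (B, num_joints, 3) joint positions
        return self.head(self.backbone(img)).view(-1, self.num_joints, 3)


class InverseKinematicsNet(nn.Module):
    """Regresses per-joint rotations from joint positions in a single pass."""

    def __init__(self, num_joints=21, num_rotations=16):
        super().__init__()
        self.num_rotations = num_rotations
        self.mlp = nn.Sequential(
            nn.Linear(num_joints * 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_rotations * 3),  # axis-angle per articulated joint
        )

    def forward(self, joints):
        # joints: (B, num_joints, 3) -> (B, num_rotations, 3) joint rotations
        return self.mlp(joints.flatten(1)).view(-1, self.num_rotations, 3)


if __name__ == "__main__":
    img = torch.randn(1, 3, 224, 224)           # single RGB hand crop
    joints = JointDetector()(img)                # (1, 21, 3) joint positions
    rotations = InverseKinematicsNet()(joints)   # (1, 16, 3) joint rotations
    print(joints.shape, rotations.shape)
```

In such a design, the predicted rotations would typically drive a parametric hand model to recover the final mesh; that step is omitted here since the abstract does not specify it.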
Year: 2024
Pages: 238-245
Language: English
ESI Highly Cited Papers on the List: 0