
Author:

Zuhe Li | Mengze Xue | Yuhao Cui | Boyi Liu | Ruochong Fu | Haoran Chen | Fujiao Ju

Abstract:

Traditional human pose estimation methods typically rely on complex models and algorithms. Lite-HRNet achieves excellent performance while reducing model complexity, but it extracts features at a relatively single scale, which can lower keypoint localization accuracy in crowded and complex scenes. To address this issue, we propose a lightweight human pose estimation model based on a joint channel coordinate attention mechanism. The model provides a powerful information interaction channel that allows features of different resolutions to interact more effectively, improving the robustness and accuracy of pose estimation in complex scenes. The joint channel coordinate attention mechanism enables the model to retain key information more effectively, thereby enhancing keypoint localization accuracy. We also redesign the lightweight basic module, using the shuffle module and the joint channel coordinate attention mechanism to replace the spatial weight calculation module in the original Lite-HRNet. This new module improves network computation speed and reduces the total number of parameters while preserving model accuracy, achieving a balance between performance and efficiency. We compare the model with current mainstream methods on the COCO and MPII datasets. The experimental results show that it effectively reduces the number of parameters and the computational complexity while maintaining high accuracy.
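The coordinate attention idea the abstract refers to can be illustrated in miniature: instead of pooling a feature map to a single global descriptor, it pools along each spatial axis separately so that positional information is retained in both directions. The sketch below is a hypothetical, simplified NumPy illustration of that pooling-and-reweighting pattern, not the paper's actual module (the real mechanism includes learned convolutions and channel interactions that are omitted here).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x):
    """Toy coordinate-attention sketch on a (C, H, W) feature map.

    Pools along each spatial axis separately, so positional
    information along height and width is preserved, then
    reweights the input per row and per column.
    """
    c, h, w = x.shape
    # Direction-aware pooling: averaging over W gives a (C, H)
    # descriptor; averaging over H gives a (C, W) descriptor.
    pool_h = x.mean(axis=2)              # (C, H)
    pool_w = x.mean(axis=1)              # (C, W)
    # Attention weights along each axis. A learned 1x1 convolution
    # would normally transform these; a plain sigmoid keeps the
    # sketch self-contained.
    a_h = sigmoid(pool_h)[:, :, None]    # (C, H, 1), broadcast over W
    a_w = sigmoid(pool_w)[:, None, :]    # (C, 1, W), broadcast over H
    return x * a_h * a_w
```

Because the two weight maps are rank-one along complementary axes, each output position is scaled by a row factor and a column factor, which is what lets this style of attention encode "where" as well as "what" at low cost.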

Keyword:

deep learning; lightweight module; attention mechanism; human pose estimation

Source :

Electronics

ISSN: 2079-9292

Year: 2023

Issue: 1

Volume: 13

Impact Factor: 2.900 (JCR@2022)

ESI Highly Cited Papers on the List: 0

30 Days PV: 8
