Author:

Ma, Nan | Wu, Zhixuan | Feng, Yifan | Wang, Cheng | Gao, Yue

Indexed by:

EI, Scopus, SCIE

Abstract:

Recently, action recognition has attracted considerable attention in the field of computer vision. In dynamic circumstances and complicated backgrounds, problems such as object occlusion, insufficient light, and weak correlation among human body joints cause the accuracy of skeleton-based human action recognition to be very low. To address this issue, we propose a Multi-View Time-Series Hypergraph Neural Network (MV-TSHGNN). The framework is composed of two main parts: the construction of a multi-view time-series hypergraph structure and the learning process of multi-view time-series hypergraph convolutions. Specifically, given the multi-view video sequence frames, we first extract the joint features of actions from different views. Then, limb-component and adjacent-joint spatial hypergraphs are constructed from the joints of different views at the same time, and temporal hypergraphs are constructed from the joints of the same view at consecutive times; these hypergraphs establish high-order semantic relationships and cooperatively generate complementary action features. After that, we design a multi-view time-series hypergraph neural network to efficiently learn the features of the spatial and temporal hypergraphs and effectively improve the accuracy of skeleton-based action recognition. To evaluate the effectiveness and efficiency of MV-TSHGNN, we conduct experiments on the NTU RGB+D, NTU RGB+D 120, and imitating-traffic-police-gestures datasets. The experimental results indicate that our proposed model achieves new state-of-the-art performance.
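To make the hypergraph construction described in the abstract more concrete, the Python sketch below builds (1) spatial hyperedges that group the joints of one frame into limb components, (2) temporal hyperedges that link the same joint across consecutive frames of one view, and (3) applies a generic HGNN-style hypergraph convolution X' = ReLU(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta). This is only a minimal sketch under stated assumptions: the 25-joint NTU skeleton, the particular limb grouping, the feature sizes, and the helper names (spatial_incidence, temporal_incidence, hypergraph_conv) are illustrative and are not the paper's exact MV-TSHGNN layers; the adjacent-joint hyperedges and the cross-view fusion mentioned in the abstract would be built analogously but are omitted here.

import numpy as np

N_JOINTS = 25          # NTU RGB+D skeleton size
T_FRAMES = 4           # a short clip, for illustration
FEAT_DIM = 3           # per-joint feature (e.g., 3D coordinates)

# Assumed limb-component grouping of the 25 NTU joints (hypothetical split).
LIMB_GROUPS = [
    [0, 1, 20, 2, 3],            # trunk + head
    [4, 5, 6, 7, 21, 22],        # left arm + left hand
    [8, 9, 10, 11, 23, 24],      # right arm + right hand
    [12, 13, 14, 15],            # left leg
    [16, 17, 18, 19],            # right leg
]

def spatial_incidence(n_joints, groups):
    # Incidence matrix H (joints x hyperedges) for one frame:
    # one hyperedge per limb component.
    H = np.zeros((n_joints, len(groups)))
    for e, joints in enumerate(groups):
        H[joints, e] = 1.0
    return H

def temporal_incidence(n_joints, n_frames):
    # Incidence matrix over all frames of one view:
    # one hyperedge per joint, connecting that joint across consecutive frames.
    n_nodes = n_joints * n_frames
    H = np.zeros((n_nodes, n_joints))
    for j in range(n_joints):
        for t in range(n_frames):
            H[t * n_joints + j, j] = 1.0
    return H

def hypergraph_conv(X, H, Theta, edge_w=None):
    # Generic hypergraph convolution (HGNN-style), used here as a stand-in
    # for the paper's spatial/temporal hypergraph convolution layers.
    n_edges = H.shape[1]
    w = np.ones(n_edges) if edge_w is None else np.asarray(edge_w)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(H @ w + 1e-8))   # vertex degrees
    De_inv = np.diag(1.0 / (H.sum(axis=0) + 1e-8))        # hyperedge degrees
    out = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt @ X @ Theta
    return np.maximum(out, 0.0)                            # ReLU

# Joint features for one view: (T * N_JOINTS) nodes with FEAT_DIM features each.
X = np.random.randn(T_FRAMES * N_JOINTS, FEAT_DIM)

# Spatial hypergraph convolution, applied frame by frame.
Hs = spatial_incidence(N_JOINTS, LIMB_GROUPS)
Theta_s = np.random.randn(FEAT_DIM, 16)
X_spatial = np.concatenate(
    [hypergraph_conv(X[t * N_JOINTS:(t + 1) * N_JOINTS], Hs, Theta_s)
     for t in range(T_FRAMES)], axis=0)

# Temporal hypergraph convolution over the whole clip of this view.
Ht = temporal_incidence(N_JOINTS, T_FRAMES)
Theta_t = np.random.randn(16, 16)
X_out = hypergraph_conv(X_spatial, Ht, Theta_t)
print(X_out.shape)   # (T_FRAMES * N_JOINTS, 16): per-view embedding before multi-view fusion

In this sketch each view is processed independently; in the full method the per-view embeddings would then be combined (e.g., by a fusion or pooling step) before classification, which is not shown here.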

Keyword:

spatial hypergraphs; temporal hypergraphs; multi-view time-series hypergraph neural network; representation learning; skeleton-based action recognition

Author Community:

  • [ 1 ] [Ma, Nan]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 2 ] [Wu, Zhixuan]Beijing Union Univ, Coll Robot, Beijing Key Lab Informat Serv Engn, Beijing 100101, Peoples R China
  • [ 3 ] [Wang, Cheng]Beijing Union Univ, Coll Robot, Beijing Key Lab Informat Serv Engn, Beijing 100101, Peoples R China
  • [ 4 ] [Feng, Yifan]Tsinghua Univ, Sch Software, BNRist, BLBCI,THUIBCS, Beijing 100084, Peoples R China
  • [ 5 ] [Gao, Yue]Tsinghua Univ, Sch Software, BNRist, BLBCI,THUIBCS, Beijing 100084, Peoples R China

Reprint Author's Address:

  • [Gao, Yue]Tsinghua Univ, Sch Software, BNRist, BLBCI, THUIBCS, Beijing 100084, Peoples R China


Source:

IEEE TRANSACTIONS ON IMAGE PROCESSING

ISSN: 1057-7149

Year: 2024

Volume: 33

Page: 3301-3313

Impact Factor: 10.600 (JCR@2022)

Cited Count:

WoS CC Cited Count: 4

SCOPUS Cited Count: 8

ESI Highly Cited Papers on the List: 0
