Abstract:
Existing group activity recognition methods generally use optical flow images to represent motion within videos, which often fail to capture the movements of individuals accurately. In this paper, we explore the effectiveness of richer kinds of motion information for group activity recognition. We propose a novel multi-scale MOtion-based relational reasoning framework for Group Activity Recognition (MOGAR). It combines joint motion (intra-individual level) with trajectory (individual level) and individual position (inter-individual level) to acquire a richer activity representation. Specifically, the framework involves two branches: the trajectory branch utilizes individuals' trajectories and positions to extract motion features at the individual and inter-individual levels, while the joint branch extracts motion features at the intra-individual level. Furthermore, gated recurrent units (GRUs) and Transformers are employed to enhance the corresponding features through a gating mechanism and a self-attention mechanism, respectively. The features from the two branches are concatenated for group activity recognition. Experiments on two public datasets demonstrate that our method achieves competitive performance and has potential benefits in terms of computational complexity. © 2024 Elsevier Ltd
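The fusion described in the abstract (a GRU-enhanced branch and a self-attention-enhanced branch, concatenated for classification) can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation: the dimensions, random initialization, identity attention projections, and mean pooling are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step; W, U, b each pack (update, reset, candidate) params."""
    Wz, Wr, Wh = W
    Uz, Ur, Uh = U
    bz, br, bh = b
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)  # candidate state
    return (1 - z) * h + z * h_tilde

def self_attention(X):
    """Single-head scaled dot-product self-attention (identity Q/K/V)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ X

# Hypothetical sizes: T frames, N persons, D feature dimension.
T, N, D = 4, 3, 8
traj_feats = rng.standard_normal((T, D))   # trajectory/position features per frame
joint_feats = rng.standard_normal((N, D))  # joint-motion features per person

# Trajectory branch: GRU unrolled over time.
W = [rng.standard_normal((D, D)) * 0.1 for _ in range(3)]
U = [rng.standard_normal((D, D)) * 0.1 for _ in range(3)]
b = [np.zeros(D) for _ in range(3)]
h = np.zeros(D)
for t in range(T):
    h = gru_step(traj_feats[t], h, W, U, b)

# Joint branch: Transformer-style self-attention over persons, then pooling.
joint_repr = self_attention(joint_feats).mean(axis=0)

# Fusion: concatenate the two branch representations for group activity recognition.
group_repr = np.concatenate([h, joint_repr])
print(group_repr.shape)  # (16,)
```

A classifier head (e.g. a linear layer plus softmax over activity classes) would then consume `group_repr`; that head, like everything above, is an assumed detail not specified in the abstract.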
Source:
Engineering Applications of Artificial Intelligence
ISSN: 0952-1976
Year: 2025
Volume: 139
Impact Factor: 8.000 (JCR@2022)
Cited Count:
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0