Abstract:
Object tracking serves as a prerequisite and foundation for higher-level driving tasks and has broad application prospects in fields such as intelligent logistics and autonomous driving. In this paper, we propose a deep reinforcement learning-based object tracking control network model that incorporates attention and Long Short-Term Memory (LSTM) mechanisms. The Asynchronous Advantage Actor-Critic (A3C) algorithm is employed for multi-threaded, synchronous, unsupervised training of the tracking network model, resulting in end-to-end dynamic object tracking control. Additionally, the Gradient-weighted Class Activation Mapping (Grad-CAM) method is used to analyze the interpretability of the network model. Experimental results demonstrate that introducing the attention saliency mechanism and the LSTM temporal mechanism enables the network to focus effectively on obstacle and target locations, enhancing its attentional capacity and interpretability. The trustworthiness of the object tracking model is thereby improved in terms of both tracking performance and interpretability. © 2023 IEEE.
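To make the described architecture concrete, below is a minimal PyTorch sketch of an actor-critic network combining a convolutional encoder, a spatial attention (saliency) layer, and an LSTM cell, of the kind that A3C worker threads would train. The layer sizes, the attention formulation, the action dimensionality, and all class and variable names here are assumptions for illustration only; the paper's actual network configuration is not given in this record.

```python
# Illustrative sketch only: layer sizes, attention form, and action count are
# assumed, not taken from the paper indexed above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLSTMActorCritic(nn.Module):
    def __init__(self, in_channels=3, num_actions=5, hidden_size=256):
        super().__init__()
        # Convolutional encoder for the observed image frame
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        # Spatial attention: a 1x1 conv produces a saliency map that re-weights
        # the feature map (target/obstacle regions receive larger weights)
        self.attn = nn.Conv2d(64, 1, kernel_size=1)
        self.fc = nn.Linear(64 * 7 * 7, hidden_size)
        # LSTM cell carries temporal context across consecutive frames
        self.lstm = nn.LSTMCell(hidden_size, hidden_size)
        # Actor head (policy over discrete control actions) and critic head (state value)
        self.actor = nn.Linear(hidden_size, num_actions)
        self.critic = nn.Linear(hidden_size, 1)

    def forward(self, obs, hx, cx):
        feat = self.conv(obs)                      # (B, 64, 7, 7) for an 84x84 input
        attn_map = torch.sigmoid(self.attn(feat))  # (B, 1, 7, 7) saliency weights
        feat = feat * attn_map                     # attention-weighted features
        x = F.relu(self.fc(feat.flatten(1)))
        hx, cx = self.lstm(x, (hx, cx))
        return self.actor(hx), self.critic(hx), (hx, cx)

if __name__ == "__main__":
    model = AttentionLSTMActorCritic()
    obs = torch.zeros(1, 3, 84, 84)
    hx = cx = torch.zeros(1, 256)
    logits, value, _ = model(obs, hx, cx)
    print(logits.shape, value.shape)  # torch.Size([1, 5]) torch.Size([1, 1])
```

In an A3C-style setup, each worker thread would hold a copy of such a model, roll out the policy in its own environment instance, and push gradients to shared parameters; the attention map produced by `self.attn` is also the kind of intermediate activation that a Grad-CAM analysis can visualize.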
ISSN: 2161-8070
Year: 2023
Volume: 2023-August
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0