Abstract:
Deep neural networks (DNNs) have advanced autonomous driving, but their lack of transparency remains a major obstacle to real-world application. Attribution methods, which aim to explain DNN decisions, offer a potential solution. However, existing methods, primarily designed for image classification models, often suffer from performance degradation and require specialized algorithmic adjustments when applied to the diverse models in autonomous driving. To address this challenge, we introduce a universally applicable representation of traffic scenes, forming the basis for our unified attribution method. Specifically, we leverage the first-order Taylor expansion at a specific hidden layer, i.e., the product of gradients and feature maps, to represent abstract traffic scene information. This representation guides both the optimization of attribution path generation and the attribution computation, enabling consistent and effective attributions for both lane-change prediction and vision-based control models. Experiments on two distinct autonomous driving models demonstrate that our approach outperforms state-of-the-art methods in explanation accuracy and robustness, advancing the interpretability of DNN-based autonomous driving models. © 2025 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
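The gradient × feature-map representation described in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the two-layer network, its weights, and the input vector below are all illustrative stand-ins, used only to show that the first-order Taylor term at a hidden layer (gradient of the output with respect to the hidden features, multiplied elementwise by those features) decomposes the model output over hidden units.

```python
import numpy as np

# Hypothetical two-layer network: x -> h = ReLU(W1 @ x) -> y = w2 @ h.
# All shapes and weights here are illustrative assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
w2 = rng.normal(size=4)        # linear output head
x = rng.normal(size=3)         # a stand-in "traffic scene" input

h = np.maximum(W1 @ x, 0.0)    # hidden feature map
y = w2 @ h                     # scalar model output

# Gradient of the output w.r.t. the hidden features; for this linear
# head it is simply w2.
grad_h = w2

# First-order Taylor term at the hidden layer: gradient x feature map.
# Each entry attributes part of the output to one hidden unit, and for
# this linear head the terms sum back to the output exactly.
representation = grad_h * h
assert np.isclose(representation.sum(), y)
```

In a deep model the gradient would come from backpropagation and the decomposition is only a first-order approximation, but the same elementwise product serves as the layer-level representation the abstract refers to.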
Source: IEEE Transactions on Intelligent Transportation Systems
ISSN: 1524-9050
Year: 2025
Impact Factor: 8.500 (JCR@2022)
ESI Highly Cited Papers on the List: 0