Abstract:
Given the critical need for more reliable autonomous driving systems, explainability has become a key focus within the research community. In autonomous driving models, even minor perception differences can significantly influence the decision-making process, and this influence often diverges markedly from human cognition. However, understanding the specific reasons why a model decides to stop or to continue forward remains a significant challenge. This paper presents an attribution-guided visualization method that explores the triggers behind decision shifts, providing clear insights into the underlying "why" and "why not" of such decisions. We propose a cumulative layer fusion attribution method that identifies the parameters most critical to decision-making. These attributions then inform the visualization optimization: attribution-guided weights are applied to the crucial generation parameters, ensuring that decision changes are driven only by modifications to critical information. Furthermore, we develop an indirect regularization method that improves visualization quality without requiring additional hyperparameters. Experiments on large datasets demonstrate that our method produces insightful visualization explanations and outperforms state-of-the-art methods in both qualitative and quantitative evaluations.
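The following is an illustrative sketch, not the paper's implementation: it shows the general idea of gating a visualization-optimization update so that only attribution-critical elements are allowed to change. The gradient-times-input scoring and top-k masking here are assumptions standing in for the paper's cumulative layer fusion attribution, whose details the abstract does not specify; `model`, `target_idx`, and `keep_ratio` are hypothetical names.

```python
# Illustrative sketch only. Attribution scoring (gradient x input) and
# top-k gating are assumptions, not the paper's exact method.
import torch

def attribution_weights(model, x, target_idx, keep_ratio=0.1):
    """Score input elements by |gradient x input|; keep the top fraction."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_idx]          # scalar logit of the decision
    score.backward()
    attr = (x.grad * x).abs().flatten()
    k = max(1, int(keep_ratio * attr.numel()))
    mask = torch.zeros_like(attr)
    mask[attr.topk(k).indices] = 1.0         # 1 = decision-critical element
    return mask.view_as(x)

def guided_step(model, x, target_idx, mask, lr=0.05):
    """One update where only attribution-critical elements are modified."""
    x = x.clone().requires_grad_(True)
    loss = -model(x)[0, target_idx]          # push toward the target decision
    loss.backward()
    with torch.no_grad():
        return x - lr * mask * x.grad        # gate the update by attribution
```

Iterating `guided_step` would, under these assumptions, shift the model's decision while leaving non-critical regions of the input untouched, which is the role the attribution-guided weights play in the abstract's description.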
Source:
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
ISSN: 1524-9050
Year: 2024
Issue: 3
Volume: 26
Page: 4165-4177
Impact Factor: 8.500 (JCR@2022)
ESI Highly Cited Papers on the List: 0