Abstract:
In recent years, end-to-end autonomous driving has become an emerging research direction in the field of autonomous driving. This approach maps road images captured by the vehicle's camera directly to the vehicle's control decisions. We propose a spatiotemporal neural network model with a visual attention mechanism to predict vehicle control decisions in an end-to-end manner. The model combines a CNN and an LSTM to extract spatial and temporal features from road image sequences, and its visual attention mechanism helps the model focus on important regions of the image. We evaluated the model in The Open Racing Car Simulator (TORCS); the experiments show that our model predicts driving decisions better than a simple CNN model and that the visual attention mechanism improves the performance of the end-to-end autonomous driving model. © 2020 IEEE.
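For illustration, the sketch below shows one common way such an architecture can be assembled: a per-frame CNN produces a grid of spatial features, a soft attention layer weights the spatial locations, and an LSTM aggregates the attended features over the frame sequence before a regression head outputs a control value. This is a minimal, hypothetical PyTorch sketch; the layer sizes, attention form, and output head are assumptions for exposition and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionCNNLSTM(nn.Module):
    """Illustrative CNN + LSTM with soft spatial attention for
    end-to-end control prediction (hypothetical, not the paper's exact model)."""

    def __init__(self, hidden_size=128, n_outputs=1):
        super().__init__()
        # CNN backbone: extracts a grid of spatial feature vectors per frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 64, 3, stride=2), nn.ReLU(),
        )
        self.feat_dim = 64
        # Attention scorer: one score per spatial location.
        self.attn = nn.Linear(self.feat_dim, 1)
        # LSTM aggregates the attended feature over the frame sequence.
        self.lstm = nn.LSTM(self.feat_dim, hidden_size, batch_first=True)
        # Regression head, e.g. a steering command.
        self.head = nn.Linear(hidden_size, n_outputs)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        x = self.cnn(frames.flatten(0, 1))            # (b*t, C, h, w)
        x = x.flatten(2).transpose(1, 2)              # (b*t, h*w, C)
        alpha = F.softmax(self.attn(x), dim=1)        # weights over spatial locations
        context = (alpha * x).sum(dim=1)              # attended feature, (b*t, C)
        context = context.view(b, t, self.feat_dim)   # restore the time axis
        out, _ = self.lstm(context)                   # (b, t, hidden)
        return self.head(out[:, -1])                  # predict from the last time step


# Example: a batch of 2 sequences of 5 RGB frames at 66x200 pixels.
if __name__ == "__main__":
    model = AttentionCNNLSTM()
    preds = model(torch.randn(2, 5, 3, 66, 200))
    print(preds.shape)  # torch.Size([2, 1])
```

The attention weights `alpha` can also be reshaped back to the feature-map grid and visualized as a heatmap over the input frame, which is the usual way to inspect which road regions the model attends to.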
Year: 2020
Page: 2649-2653
Language: English
SCOPUS Cited Count: 8
ESI Highly Cited Papers on the List: 0