Abstract:
In this paper, we propose the vertical attention and spatial attention network (VSANet), a semantic segmentation method based on Deeplabv3+ and attention modules, to improve semantic segmentation accuracy on autonomous driving road scene images. The contribution is twofold. First, we introduce a spatial attention module (SAM) after the atrous convolution, which captures richer spatial context information. Second, analysis of road scene images reveals considerable differences in the pixel-level class distribution across horizontal bands of the image; we therefore introduce a vertical attention module (VAM), which segments road scene images more effectively. Extensive experiments show that the proposed model improves segmentation accuracy by 1.94% over the Deeplabv3+ baseline on the Cityscapes test set. © 2021 IEEE.
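The record does not include the authors' code. As a rough illustration of the two modules the abstract describes, the following minimal PyTorch sketch pairs a CBAM-style spatial attention block with a per-row "vertical" attention block; the class names, kernel sizes, and pooling choices are assumptions for illustration, not the paper's actual implementation.

# Hypothetical sketch of the two attention modules named in the abstract.
# Layer sizes, pooling choices, and placement are assumptions.
import torch
import torch.nn as nn

class SpatialAttentionModule(nn.Module):
    """CBAM-style spatial attention: reweight each spatial position using a
    map built from channel-wise average and max pooling."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)           # (N, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values     # (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn

class VerticalAttentionModule(nn.Module):
    """Learn a per-row reweighting, reflecting the observation that
    horizontal bands of a road scene (sky, buildings, road surface)
    have very different pixel-level class distributions."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        row_desc = x.mean(dim=3)                        # pool over width: (N, C, H)
        attn = torch.sigmoid(self.conv(row_desc)).unsqueeze(3)  # (N, C, H, 1)
        return x * attn                                 # broadcast over width

if __name__ == "__main__":
    feats = torch.randn(2, 256, 65, 129)   # e.g. features after atrous convolution
    feats = SpatialAttentionModule()(feats)
    feats = VerticalAttentionModule(256)(feats)
    print(feats.shape)                      # torch.Size([2, 256, 65, 129])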
ISSN: 2693-2814
Year: 2021
Page: 255-259
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 17
ESI Highly Cited Papers on the List: 0