
Author:

Jing, Y. | Zhang, T. | Liu, Z. | Hou, Y. | Sun, C.

Indexed by:

EI; Scopus; SCIE

Abstract:

Road extraction from remote sensing images is important for navigation, urban planning, traffic management and other fields. Deep learning methods have achieved great success in computer vision tasks, so applying them to road extraction from remote sensing images can significantly improve extraction accuracy. However, existing methods generally suffer from low road extraction accuracy, slow training, high computational complexity, and poor road topology connectivity. To address these issues, we propose a Swin-ResUNet+ structure that uses the Swin-Transformer paradigm to extract roads from remote sensing images. Specifically, we construct an Edge Enhancement module based on residual connections and add it to each stage of the encoder, allowing the network to capture edge information in remote sensing images. Building on the Edge Enhancement module, the Swin-ResUNet+ structure better captures the topology of roads. On the Massachusetts road dataset, our model has the lowest computational cost with an accuracy decrease of less than one percent. On the DeepGlobe2018 road dataset, our model not only has the lowest computational complexity but also achieves the highest mIOU, mDC, mPA and F1-score. In short, Swin-ResUNet+ achieves a much better trade-off between accuracy and efficiency than previous CNN-based and Transformer-based methods. © 2023 Elsevier Inc.
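Note: the abstract describes an Edge Enhancement module built on a residual connection and attached to each encoder stage of a Swin-Transformer-based encoder. The paper's implementation is not reproduced on this page, so the following is only a minimal PyTorch sketch under assumed design choices: the fixed Laplacian edge filter, the depthwise convolution, and the names EdgeEnhancementBlock and channels are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class EdgeEnhancementBlock(nn.Module):
    # Hypothetical residual block: extract an edge response from stage features
    # and add it back to the input (residual connection), as one possible way to
    # realize the "Edge Enhancement module" described in the abstract.
    def __init__(self, channels: int):
        super().__init__()
        # Fixed Laplacian kernel as a simple, non-learned edge detector (assumed choice).
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        self.edge = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                              groups=channels, bias=False)
        self.edge.weight.data.copy_(lap.expand(channels, 1, 3, 3))
        self.edge.weight.requires_grad = False
        # Learned fusion of the edge response before it is added back to the input.
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: stage feature map of shape (N, channels, H, W).
        return x + self.fuse(self.edge(x))

Usage would be one block per encoder stage, e.g. EdgeEnhancementBlock(channels=96) applied to the first stage's feature map; the stage width used here is an assumption, not a value reported in the paper.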

Keyword:

Road extraction; UNet; Swin-Transformer structure; Semantic segmentation; Remote sensing image; Edge enhancement module

Author Community:

  • [ 1 ] [Jing Y.]Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 2 ] [Zhang T.]Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 3 ] [Liu Z.]Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 4 ] [Hou Y.]Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 5 ] [Sun C.]CSIRO Data61, PO Box 76, Epping, 1710, NSW, Australia

Reprint Author's Address:

Email:


Related Keywords:

Source :

Computer Vision and Image Understanding

ISSN: 1077-3142

Year: 2023

Volume: 237

Impact Factor: 4.500 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE;

ESI HC Threshold: 19

Cited Count:

WoS CC Cited Count: 63

SCOPUS Cited Count: 10

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:


Affiliated Colleges:
