Abstract:
Due to their ability to capture long-range dependencies, transformer models based on the self-attention mechanism have been introduced for hyperspectral image classification. However, the self-attention mechanism provides only spatial adaptability and ignores channel adaptability, so it cannot fully extract the complex spectral-spatial information in hyperspectral images. To tackle this problem, we propose a novel spectral-spatial large kernel attention network (SSLKA) for hyperspectral image classification. SSLKA consists of two consecutive cooperative spectral-spatial attention blocks with large convolution kernels, which efficiently extract features in the spectral and spatial domains simultaneously. In each cooperative spectral-spatial attention block, a spectral attention branch and a spatial attention branch generate their respective attention maps, and the extracted spatial features are then fused with the spectral features. With large kernel attention, the network enhances classification performance by fully exploiting local contextual information, capturing long-range dependencies, and remaining adaptive in the channel dimension. Experimental results on widely used benchmark datasets show that our method achieves higher classification accuracy, in terms of overall accuracy, average accuracy, and Kappa coefficient, than several state-of-the-art methods.
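The abstract itself contains no code; the following is a minimal PyTorch sketch of the ideas it describes: a decomposed large-kernel spatial attention combined with a channel (spectral) attention branch, with the two branches fused. All class names are hypothetical, and the spectral branch in particular is an assumption (a squeeze-excitation-style channel gate), not the authors' actual design.

```python
import torch
import torch.nn as nn

class LargeKernelAttention(nn.Module):
    """Sketch of decomposed large-kernel attention: a 5x5 depth-wise conv,
    a 7x7 depth-wise dilated conv (dilation 3), and a 1x1 conv together
    approximate a large receptive field, capturing local context and
    long-range dependencies; the 1x1 conv mixes channels, giving the
    channel adaptability the abstract emphasizes."""
    def __init__(self, channels: int):
        super().__init__()
        self.dw = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
        self.dw_dilated = nn.Conv2d(channels, channels, 7, padding=9,
                                    groups=channels, dilation=3)
        self.pw = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x  # attention map modulates the input

class SpectralSpatialAttentionBlock(nn.Module):
    """Hypothetical cooperative block: a spectral (channel) attention branch
    and a spatial large-kernel attention branch whose outputs are fused
    by addition. The fusion rule is an assumption for illustration."""
    def __init__(self, channels: int):
        super().__init__()
        # Assumed spectral branch: global pooling + 1x1 convs yield
        # per-band attention weights (squeeze-excitation style).
        self.spectral = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.GELU(),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = LargeKernelAttention(channels)

    def forward(self, x):
        spectral_feat = x * self.spectral(x)  # spectral attention map applied
        spatial_feat = self.spatial(x)        # spatial attention map applied
        return spectral_feat + spatial_feat   # fuse the two branches

if __name__ == "__main__":
    # A batch of two 64-band hyperspectral patches of 11x11 pixels.
    x = torch.randn(2, 64, 11, 11)
    block = SpectralSpatialAttentionBlock(64)
    print(block(x).shape)  # torch.Size([2, 64, 11, 11])
```

Per the abstract, two such blocks would be stacked consecutively before a classifier head; the exact kernel sizes, fusion, and head are not specified in this record.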
Source: IEEE Transactions on Geoscience and Remote Sensing
ISSN: 0196-2892
Year: 2024
Volume: 62
Page: 1-1
Impact Factor: 8.200 (JCR@2022)
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 10
ESI Highly Cited Papers on the List: 0