Remote Sensing, Vol. 15, Pages 2696: Spectral-Swin Transformer with Spatial Feature Extraction Enhancement for Hyperspectral Image Classification


Remote Sensing doi: 10.3390/rs15102696

Authors:
Yinbin Peng
Jiansi Ren
Jiamei Wang
Meilin Shi

Hyperspectral image (HSI) classification has rich applications in many fields. Over the past few years, convolutional neural network (CNN)-based models have demonstrated strong performance in HSI classification. However, CNNs are inadequate at capturing long-range dependencies, while the spectral dimension of an HSI can be viewed as long-sequence information. Consequently, more and more researchers are turning their attention to the transformer, which excels at processing sequential data. In this paper, a spectral shifted-window self-attention based transformer (SSWT) backbone network is proposed, which improves the extraction of local features compared with the classical transformer. In addition, a spatial feature extraction module (SFE) and a spatial position encoding (SPE) are designed to strengthen the spatial feature extraction of the transformer: the SFE module addresses the transformer's deficiency in capturing spatial features, and the proposed SPE compensates for the loss of spatial structure that HSI data suffer when fed into the transformer. We ran extensive experiments on three public datasets and compared the proposed model with several strong deep learning models. The results demonstrate that the proposed approach is effective and that the proposed model outperforms other advanced models.
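The core mechanism the abstract names, shifted-window self-attention along the spectral dimension, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the single-head attention, and the omission of the boundary attention mask used in the original Swin design are all simplifying assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_window_attention(x, window_size, shift=0):
    """Self-attention restricted to non-overlapping windows along the
    spectral (sequence) axis, x: (seq_len, dim), seq_len divisible by
    window_size. A cyclic shift before partitioning (as in Swin) lets a
    second layer mix tokens across the window boundaries of the first.
    Simplification: single head, no learned Q/K/V projections, and no
    mask for the wrapped-around boundary tokens."""
    seq_len, dim = x.shape
    if shift:
        x = np.roll(x, -shift, axis=0)              # cyclic shift
    out = np.empty_like(x)
    for start in range(0, seq_len, window_size):
        w = x[start:start + window_size]            # tokens in one window
        scores = w @ w.T / np.sqrt(dim)             # scaled dot-product
        out[start:start + window_size] = softmax(scores) @ w
    if shift:
        out = np.roll(out, shift, axis=0)           # undo the shift
    return out

# Two consecutive layers: regular windows, then windows shifted by half
# the window size, so information propagates across window boundaries.
x = np.random.default_rng(0).standard_normal((16, 8))
y = spectral_window_attention(x, window_size=4, shift=0)
z = spectral_window_attention(y, window_size=4, shift=2)
```

With `shift=0`, token 0 can only attend to tokens 0..3; after the shifted layer, its receptive field spans neighboring windows as well, which is how the shifted-window scheme builds longer-range spectral context layer by layer.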

MDPI Publishing.