Remote Sensing, Vol. 15, Pages 2689: Dual-Stream Feature Extraction Network Based on CNN and Transformer for Building Extraction

Remote Sensing doi: 10.3390/rs15102689

Authors:
Liegang Xia
Shulin Mi
Junxia Zhang
Jiancheng Luo
Zhanfeng Shen
Yubin Cheng

Automatically extracting 2D buildings from high-resolution remote sensing images is among the most active research directions in remote sensing information extraction. Semantic segmentation based on CNNs or transformers has greatly improved building extraction accuracy. A CNN excels at local feature extraction, but its ability to capture global features is limited, which can lead to incorrect and missed building detections. Transformer models, by contrast, benefit from a global receptive field but perform poorly at extracting local features, resulting in coarse local detail in extracted buildings. In this paper, we propose a CNN- and transformer-based dual-stream feature extraction network (DSFENet) for accurate building extraction. In the encoder, convolution extracts local building features while the transformer provides a global representation of the buildings. This effective combination of local and global features greatly enhances the network’s feature extraction ability. We validated DSFENet on the Google Image dataset and the ISPRS Vaihingen dataset, where it achieved the best accuracy compared to other state-of-the-art models.
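To make the dual-stream idea concrete, here is a minimal, illustrative sketch of combining a local (convolution-like) stream with a global (attention-like) stream. This is not the authors' DSFENet implementation; the function names, the 3x3 mean filter standing in for learned convolution, and the single-head self-attention standing in for the transformer branch are all assumptions chosen for brevity.

```python
import numpy as np

def local_stream(x, k=3):
    """Local stream: a k-by-k mean filter as a stand-in for CNN convolution."""
    h, w = x.shape
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def global_stream(x):
    """Global stream: single-head self-attention over flattened pixels,
    standing in for the transformer branch's global receptive field."""
    v = x.reshape(-1, 1).astype(float)           # (N, 1) pixel tokens
    scores = v @ v.T / np.sqrt(v.shape[1])       # (N, N) pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over all pixels
    return (attn @ v).reshape(x.shape)

def dual_stream(x):
    """Fuse the two streams by stacking them along a channel axis."""
    return np.stack([local_stream(x), global_stream(x)], axis=0)

x = np.arange(16, dtype=float).reshape(4, 4)
feat = dual_stream(x)
print(feat.shape)  # (2, 4, 4): one local channel, one global channel
```

The key point the sketch illustrates is that each output channel sees the same input through a different receptive field: the local channel depends only on a small neighborhood, while every element of the global channel depends on all input pixels.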

MDPI Publishing.