Research Article

Extracting urban impervious surface based on optical and SAR images cross-modal multi-scale features fusion network

Article: 2301675 | Received 06 Sep 2023, Accepted 31 Dec 2023, Published online: 10 Jan 2024
 

ABSTRACT

Monitoring the spatiotemporal distribution of urban impervious surface is essential for measuring the urbanization process. Optical and synthetic aperture radar (SAR) images are key data sources for urban impervious surface extraction. Because cities are highly heterogeneous scenes, extracting urban impervious surface from a single data source encounters an accuracy bottleneck imposed by the limitations of single-modal feature representation, which fusing the two data sources can help overcome. However, existing studies have mostly fused optical and SAR (optical-SAR) images directly by layer stacking for urban impervious surface extraction, without accounting for the modal differences between them, and thus cannot fully exploit the complementarity of the two modalities. Therefore, this study proposes a cross-modal multi-scale features fusion segmentation network (CMFFNet) for optical-SAR images for urban impervious surface extraction. A cross-modal features fusion (CMFF) module is designed in the proposed CMFFNet to fully exploit the complementary information of optical-SAR images. Additionally, we propose a multi-scale features fusion (MSFF) module to fuse multi-scale features of optical-SAR images, taking into account the multi-scale characteristics of urban impervious surface. Experimental results demonstrate that the proposed CMFFNet outperforms current mainstream methods for extracting impervious surface.
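The abstract does not detail the internal design of the CMFF and MSFF modules. Purely as an illustration of the general idea (cross-modal re-weighting of optical and SAR features, followed by multi-scale aggregation), a minimal PyTorch-style sketch is given below; the class names, channel-attention gating, and additive multi-scale merging are assumptions made for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFusion(nn.Module):
    """Hypothetical cross-modal fusion block: each modality is re-weighted
    by a channel-attention gate computed from the other modality."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate_from_sar = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.gate_from_opt = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_opt: torch.Tensor, f_sar: torch.Tensor) -> torch.Tensor:
        # Optical features gated by SAR context, SAR features gated by
        # optical context, then fused by element-wise addition.
        return f_opt * self.gate_from_sar(f_sar) + f_sar * self.gate_from_opt(f_opt)


class MultiScaleFusion(nn.Module):
    """Hypothetical multi-scale fusion block: fused features from several
    encoder stages are projected to a common width, upsampled to the finest
    resolution, and summed."""

    def __init__(self, in_channels_per_scale, out_channels: int):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels_per_scale]
        )

    def forward(self, feats):
        target_size = feats[0].shape[-2:]  # finest (shallowest) feature map
        merged = None
        for f, proj in zip(feats, self.proj):
            f = proj(f)
            if f.shape[-2:] != target_size:
                f = F.interpolate(f, size=target_size, mode="bilinear",
                                  align_corners=False)
            merged = f if merged is None else merged + f
        return merged


# Example: fuse 64-channel optical/SAR features, then merge two scales.
opt, sar = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
cmff = CrossModalFusion(64)
fused_fine = cmff(opt, sar)                      # (1, 64, 128, 128)
fused_coarse = torch.randn(1, 128, 64, 64)       # deeper-stage fusion (assumed)
msff = MultiScaleFusion([64, 128], out_channels=64)
seg_features = msff([fused_fine, fused_coarse])  # (1, 64, 128, 128)
```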

Data availability statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [grant number 42090012]; Key R&D Project of the Sichuan Science and Technology Plan [grant number 2022YFN0031]; Sichuan Science and Technology Program [grant number 2023YFN0022]; Zhizhuo Research Fund on Spatial-Temporal Artificial Intelligence [grant number ZZJJ202202]; the Special Fund of Hubei Luojia Laboratory [grant number 220100009]; and the Zhuhai Industry-University Research Cooperation Project of China [grant number ZH22017001210098PWC].