Research Article

Water extraction from optical high-resolution remote sensing imagery: a multi-scale feature extraction network with contrastive learning

Article: 2166396 | Received 25 Aug 2022, Accepted 04 Jan 2023, Published online: 10 Jan 2023

ABSTRACT

Accurate knowledge of the spatiotemporal distribution of water bodies is of great importance in ecological and environmental studies. Recently, convolutional neural networks (CNNs) have been widely used for this purpose due to their powerful feature extraction ability. However, CNN-based methods face two limitations in water body extraction. First, the large variations in both the spatial and spectral characteristics of water bodies require that CNN-based methods be able to extract multi-scale features and exploit multi-layer features. Second, collecting enough samples for the training phase of a CNN is difficult. Therefore, this paper proposes a multi-scale feature extraction network (MSFENet) for water extraction, whose advantages stem from two distinct features: (1) a multi-scale feature extractor (MSFE) is designed to extract multi-layer, multi-scale features of water bodies; (2) contrastive learning (CL) is adopted to reduce the required sample size. Experimental results show that the MSFE effectively improves the extraction of small water bodies, and that CL significantly improves extraction accuracy when the training sample size is small. Compared with other methods, MSFENet achieves the highest F1-score and kappa coefficient on two datasets. Furthermore, spectral variability analysis shows that MSFENet is more robust than other neural networks under spectral variation.
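The abstract names two components: a multi-scale feature extractor and a contrastive learning objective. As a rough illustration only, the sketch below shows one common realization of each idea in PyTorch: parallel dilated convolutions fused by a 1×1 convolution for multi-scale context, and an InfoNCE-style contrastive loss. The module structure, dilation rates, and loss form here are assumptions for illustration, not the paper's actual MSFENet design.

```python
# Illustrative sketch only; the paper's exact MSFE and CL designs may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused by a
    1x1 convolution (a common multi-scale design; assumed, not the paper's spec)."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [F.relu(b(x)) for b in self.branches]        # one feature map per scale
        return F.relu(self.fuse(torch.cat(feats, dim=1)))    # fuse scales channel-wise

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: pull (anchor, positive) embeddings
    together, push the anchor away from negatives. Inputs are (N, D) and (M, D)."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos_logits = (anchor * positive).sum(dim=1, keepdim=True)  # (N, 1) similarities
    neg_logits = anchor @ negatives.t()                        # (N, M) similarities
    logits = torch.cat([pos_logits, neg_logits], dim=1) / temperature
    # The positive pair sits at column 0, so the target class is 0 for every row.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)

# Tiny smoke test of the assumed components.
block = MultiScaleBlock(in_ch=4, out_ch=16)
print(block(torch.randn(1, 4, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
print(info_nce_loss(torch.randn(8, 32), torch.randn(8, 32), torch.randn(64, 32)))
```

In such a setup, a lower sample requirement comes from the contrastive objective supervising the embedding space with pairs rather than dense labels; again, how MSFENet constructs its positive and negative pairs is not specified in the abstract.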

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The GF-2 images are freely available from the Gaofen Image Dataset (GID): https://x-ytong.github.io/project/GID.html. The LoveDA dataset is freely available at https://github.com/Junjue-Wang/LoveDA. The relabeled images and code that support the findings of this study are available from the corresponding author upon reasonable request.

Additional information

Funding

This work was supported by the National Natural Science Foundation of China under Grant 41871372.