ABSTRACT
Parcel-level farmland information contains rich spatial distribution and boundary details, which are crucial for digital agriculture and agricultural resource surveys. However, the spatial complexity and heterogeneity of features in high-resolution imagery make it difficult to obtain parcel-level information quickly and accurately. In addition, existing methods do not sufficiently account for spatial topological information, particularly for blurred boundaries. Here, we develop a multi-task network model to extract parcel-level cropland information. Specifically, the model consists of a cascaded multi-task network integrating semantic segmentation and edge detection, a refinement network that enforces local edge connectivity, and an integrated fusion model. To validate the model, two representative experiments were conducted in Denmark (Europe) and Chongqing (Asia) using high-resolution remote sensing images from Sentinel-2 (10 m) and Google Earth (0.53 m) as data sources. The results show that our proposed model outperforms the baseline models. This study is expected to provide important support for the design of new global agricultural information management systems in the future.
Acknowledgments
We thank the Danish Agency for Agriculture for providing Danish Land Parcel Identification System (LPIS) data (https://collections.eurodatacube.com/denmark-lpis/). This study is funded by the Key Laboratory of Land Satellite Remote Sensing Application, Ministry of Natural Resources of the People’s Republic of China under grant KLSMNR-202106. We thank the anonymous reviewers for their valuable comments and suggestions.
Disclosure statement
The authors declare that they have no conflicts of interest related to this work.
Data availability statement
The code of our model is available at https://github.com/SonwYang/SLP-cropland-parcel-extraction, and the training dataset is available from the authors upon reasonable request.
Authorship contribution statement
Leilei Xu, Fei Peng, Yongxing Wu, Peng Yang and Jia Xu: conceptualization and methodology; Leilei Xu, Peng Yang, Juanjuan Yu, Shiran Song and Hao Chen: visualization; Fei Peng and Leilei Xu: formal analysis; Leilei Xu, Peng Yang and Fei Peng: original draft; Yongxing Wu and Fei Peng: review and editing, supervision. All authors contributed equally to the discussion of the manuscript.