Research Article

Let the loss impartial: a hierarchical unbiased loss for small object segmentation in high-resolution remote sensing images

Article: 2254473 | Received 23 Dec 2022, Accepted 29 Aug 2023, Published online: 05 Sep 2023
 

ABSTRACT

Progress in optical remote sensing technology presents both an opportunity and a challenge for the small object segmentation task. The gap between human visual cognition and machine behavior still imposes an inherent constraint on the interpretation of small but key objects in large-scale remote sensing scenes. This paper characterizes this gap as a bias of the machine against the small object segmentation task, called scale-induced bias. Scale-induced bias degrades the performance of conventional remote sensing image segmentation methods. This paper therefore applies a straightforward but innovative insight to mitigate it. Specifically, we propose a universal impartial loss that takes a hierarchical approach to alleviating two sub-problems separately: a pixel-level statistical methodology removes the bias between the background and small objects, and an emendation vector alleviates the bias among small object categories. Extensive experiments show that our method is fully compatible with existing segmentation architectures; armed with the hierarchical unbiased loss, these architectures achieve satisfactory improvement. The proposed method is validated on two benchmark remote sensing image datasets, where it achieves competitive performance and narrows the gap between human visual cognition and machine behavior.
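The page does not reproduce the loss itself, but the two-level idea in the abstract (a statistical reweighting between background and foreground, then an emendation vector among the small-object classes) can be sketched roughly as a reweighted cross-entropy. This is a minimal illustrative sketch, not the authors' formulation: the function name, the inverse-frequency statistic, and the power-law emendation rule (`alpha`) are all assumptions introduced here for illustration.

```python
import numpy as np

def hierarchical_unbiased_loss(probs, labels, n_classes, background=0, alpha=0.5):
    """Sketch of a two-level reweighted cross-entropy (illustrative only).

    Level 1: down-weight the dominant background class using inverse
    pixel-frequency statistics. Level 2: a hypothetical emendation vector
    rebalances the remaining (small-object) classes among themselves.

    probs:  (H, W, C) softmax outputs; labels: (H, W) integer class map.
    """
    freq = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
    freq /= freq.sum()
    # Level 1: inverse-frequency weights counter the background/foreground bias.
    weights = 1.0 / np.maximum(freq, 1e-6)
    # Level 2: an assumed emendation rule softens the spread of weights
    # among the non-background (small-object) classes.
    fg = np.arange(n_classes) != background
    emend = weights[fg] ** alpha
    weights[fg] = emend / emend.mean() * weights[fg].mean()
    weights /= weights.mean()
    # Weighted negative log-likelihood of the true class at each pixel.
    pix_w = weights[labels]
    p_true = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    return float(np.mean(-pix_w * np.log(np.maximum(p_true, 1e-12))))
```

In this sketch the loss drops into any segmentation architecture that already emits per-pixel softmax probabilities, which mirrors the abstract's claim that the hierarchical unbiased loss is compatible with existing segmentation structures.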

Acknowledgments

All authors sincerely thank the reviewers and editors for their suggestions and opinions, which improved this article.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

We thank the providers of the datasets. The data presented in this study are openly available at http://www2.isprs.org/commissions/comm3/wg4/semantic-labeling.html and https://captain-whu.github.io/iSAID/index.html. The data used to support this work will be provided at https://github.com/CVFishwgy/HU-loss.

Additional information

Funding

This research was funded by the National Natural Science Foundation of China under [Grant 62072391] and [Grant 62066013] and the Graduate Science and Technology Innovation Fund Project of Yantai University under [Grant GGIFYTU2320].