Research Article

An unsupervised semantic segmentation method that combines the ImSE-Net model with SLICm superpixel optimization

Article: 2341970 | Received 21 Jul 2023, Accepted 07 Apr 2024, Published online: 16 Apr 2024
 

ABSTRACT

In the field of remote sensing, supervising the training of fully convolutional networks for semantic segmentation with large amounts of labeled image data is expensive, yet training with only a small amount of labeled data degrades network performance. This paper proposes an unsupervised semantic segmentation method that combines the ImSE-Net model with SLICm superpixel optimization. First, the ImSE-Net model extracts semantic features from the image to produce a coarse semantic segmentation. Then, the SLICm superpixel segmentation algorithm partitions the input image into superpixels. Finally, an unsupervised semantic segmentation model (UGLS) combines the high-level abstract semantic features with the detailed information carried by the superpixels to produce edge-optimized segmentation results. Experimental results show that, compared with other semantic segmentation algorithms, our method handles unbalanced regions such as object boundaries more effectively and achieves better segmentation results with higher semantic consistency.
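The overall idea described in the abstract, a coarse network prediction refined against superpixel boundaries, can be illustrated with a minimal sketch. The snippet below is not the authors' UGLS model or their SLICm implementation: it assumes scikit-image's plain SLIC as a stand-in for SLICm, replaces the learned UGLS fusion with simple majority voting inside each superpixel, and uses a hypothetical function name (refine_with_superpixels).

```python
import numpy as np
from skimage.segmentation import slic  # plain SLIC, used here as a stand-in for SLICm


def refine_with_superpixels(image, coarse_labels, n_segments=400, compactness=10.0):
    """Snap a coarse per-pixel label map to superpixel boundaries by majority voting.

    image:         H x W x 3 RGB image (float or uint8).
    coarse_labels: H x W array of non-negative integer class labels,
                   e.g. the rough output of a segmentation network.
    Returns an H x W label map whose regions follow superpixel edges.
    """
    # Over-segment the image into superpixels.
    superpixels = slic(image, n_segments=n_segments,
                       compactness=compactness, start_label=0)

    refined = np.empty_like(coarse_labels)
    for sp_id in np.unique(superpixels):
        mask = superpixels == sp_id
        # Assign every pixel in this superpixel the most frequent coarse label it covers.
        refined[mask] = np.bincount(coarse_labels[mask]).argmax()
    return refined
```

In the paper, the fusion of high-level semantic features with superpixel detail is performed by the UGLS model rather than by voting; the sketch only shows why aligning labels to superpixels sharpens object boundaries.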

Author contributions

Acquisition of the financial support for the project leading to this publication: H.N. and H.L. Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data: Z.Y., X.W. and K.Y. Preparation, creation, and/or presentation of the published work by those from the original research group, specifically critical review, commentary, or revision, including pre- or post-publication stages: H.N. and Z.Y.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The code used in this study is available by contacting the corresponding author.

Additional information

Funding

This research was funded by the Scientific and Technological Innovation Team of Universities in Henan Province, grant number 22IRTSTHN008.