Research Article

Parallel algorithm for multi-viewpoint viewshed analysis on the GPU grounded in target cluster segmentation

Article: 2308707 | Received 12 Oct 2023, Accepted 17 Jan 2024, Published online: 25 Jan 2024

ABSTRACT

Viewshed analysis is an important method in GIS spatial analysis. The computational resources required rise sharply as the number of viewpoints increases, making viewshed analysis inefficient in multi-viewpoint scenes. Building on the shadow map algorithm, we propose a parallel algorithm for multi-viewpoint viewshed analysis based on target cluster segmentation, designed for large-scale, complex three-dimensional scenes with many viewpoints. Target cluster segmentation is achieved by combining uniform grid division with K-means spatial clustering, and visibility is determined by comparing depth values. A customized graphics processing unit (GPU) rendering pipeline executes the algorithm efficiently in parallel. Experimental results indicate that, compared with an algorithm based solely on the shadow map, the proposed algorithm improves efficiency by 6.252%, 8.280%, and 9.047% on average for viewshed analysis with 150, 300, and 600 viewpoints, respectively. It also eliminates most shadow acne, clearly improving accuracy. The algorithm handles both terrain and surface features, fully exploits the parallel computing capability of the GPU, and avoids interpolation of the digital elevation model (DEM). It significantly improves the efficiency of analysis in large-scale, multi-viewpoint scenes.
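As an illustration of the two core steps summarized above, the following is a minimal CPU sketch, not the authors' GPU implementation: it groups target points into spatial clusters with K-means and then tests visibility in the shadow-map style, comparing each target's distance from the viewpoint against the depth stored at its projected pixel. The function and parameter names (including the assumed `project` mapping from world coordinates to depth-map pixels) are hypothetical.

```python
# Illustrative sketch only (CPU, NumPy/scikit-learn); the paper's method runs
# these steps inside a customized GPU rendering pipeline. Names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans


def segment_targets(targets_xyz, n_clusters=8, random_state=0):
    """Group target points into spatial clusters (a stand-in for the paper's
    uniform grid division combined with K-means spatial clustering)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    labels = km.fit_predict(targets_xyz)
    return labels, km.cluster_centers_


def visibility_by_depth(targets_xyz, viewpoint, depth_map, project, bias=1e-3):
    """Shadow-map style test: a target is visible from the viewpoint if its
    distance to the viewpoint does not exceed the depth stored at its
    projected pixel. `project` maps world coordinates to (row, col) indices
    in depth_map and is assumed to come from the rendering setup."""
    dist = np.linalg.norm(targets_xyz - viewpoint, axis=1)
    rows, cols = project(targets_xyz)
    stored = depth_map[rows, cols]
    # A small depth bias is one common (simplistic) way to suppress
    # self-occlusion artifacts ("shadow acne").
    return dist <= stored + bias
```

In the paper, the depth comparison is executed in parallel on the GPU for the targets of each cluster; the depth bias shown here is only a common workaround for shadow acne and is not claimed to be the authors' mechanism, which the abstract attributes to the cluster segmentation approach itself.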

Acknowledgements

The authors would like to thank the anonymous peer reviewers for their helpful and constructive reviews of the manuscript's content.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Additional information

Funding

This work was supported by the National Natural Science Foundation of China [grant number 42172330].