Research Article

Full-automatic high-precision scene 3D reconstruction method with water-area intelligent complementation and mesh optimization for UAV images

Article: 2317441 | Received 15 Nov 2023, Accepted 06 Feb 2024, Published online: 16 Feb 2024
 

ABSTRACT

Fast, high-precision 3D modeling of urban scenes is foundational data infrastructure for the digital earth and smart cities. However, owing to challenges such as difficult image matching over water areas, data redundancy, and insufficient observations, existing fully automatic 3D modeling methods often produce models with missing water areas, many small holes, and insufficient local accuracy. To overcome these challenges, a fully automatic high-precision scene 3D reconstruction method with intelligent water-area complementation on depth maps and mesh optimization is proposed. First, SfM was used to calculate image poses, and PatchMatch was used to generate initial depth maps. Second, a simplified GAN extracted water-area masks, and ray tracing was used to automatically complete water-area depth values with high precision. Third, a fully connected CRF optimized the water areas and their surroundings in the depth maps. Fourth, high-precision 3D point clouds were obtained through depth-map fusion based on clustering culling and depth least squares. A mesh was then generated and refined using similarity measurement and vertex gradients. Finally, high-precision scene 3D models without missing water areas or holes were generated. The results show that, compared with the state-of-the-art ContextCapture, the proposed method improves model completeness by 14.3%, raises average accuracy by 14.5%, and improves processing efficiency by 63.6%.
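The abstract does not give the exact formulation of the ray-tracing step that completes water-area depth values. A minimal sketch of the underlying geometry, assuming the water surface is modeled as a horizontal plane at a known height (an assumption for illustration; the function name and interface are hypothetical, not from the paper):

```python
import numpy as np

def water_depths_by_ray_plane(cam_center, ray_dirs, water_height):
    """Intersect per-pixel viewing rays with a horizontal water plane
    z = water_height, returning the ray parameter t (the completed
    depth) and the 3D intersection points.

    cam_center : (3,) camera center in world coordinates
    ray_dirs   : (N, 3) unit viewing-ray directions in world coordinates
    """
    # For a ray p(t) = C + t * d, the plane z = h gives
    # C_z + t * d_z = h  =>  t = (h - C_z) / d_z.
    t = (water_height - cam_center[2]) / ray_dirs[:, 2]
    points = cam_center + t[:, None] * ray_dirs
    return t, points
```

For example, a camera 100 m above the water looking straight down recovers a depth of 100 m for a nadir ray, and a longer depth for oblique rays; rays parallel to the plane (zero z-component) would need to be masked out in practice.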

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available from the corresponding author, upon reasonable request.

Additional information

Funding

This work is supported by the National Natural Science Foundation of China [grant number 42101449], the Natural Science Foundation of Hubei Province, China [grant number 2022CFB773], the Key Research and Development Project of Jinzhong City, China [grant number Y211006], the Science and Technology Program of Southwest China Research Institute of Electronic Equipment [grant number JS20200500114], the Chutian Scholar Program of Hubei Province, the Yellow Crane Talent Scheme, the Research Program of Wuhan University-Huawei Geoinformatics Innovation Laboratory [grant number K22-4201-011], the Key Laboratory of Urban Land Resources Monitoring and Simulation, Ministry of Land and Resources [grant number KF-2022-07-003] and the CRSRI Open Research Program [grant number CKWV20231167/KF].