Research Article

Reconstruction of a large-scale realistic three-dimensional (3-D) mountain forest scene for radiative transfer simulations

Article: 2261993 | Received 10 May 2023, Accepted 18 Sep 2023, Published online: 30 Sep 2023

ABSTRACT

The realistic three-dimensional (3D) forest scene is an important input to 3D radiative transfer simulations, which are essential for analyzing the reflective properties of forest canopies. Previous studies used the voxel as the essential element for reconstructing 3D forest scenes, but they mainly focused on small flat areas and ignored the wood components. This study introduces a novel approach for reconstructing a realistic 3D mountain forest scene by incorporating branches into the voxel crown. To determine the optimal voxel size for simulating Bidirectional Reflectance Factors (BRFs) in a temperate deciduous mountain forest, this study reconstructed the forest scene using eight different voxel sizes, ranging from 30 to 100 cm with a step of 10 cm. Two forest scenes were examined to evaluate the impact of branches on the radiative transfer simulations: one with branch voxels and one without branches. The radiative transfer simulation is conducted using an efficient Monte Carlo path-tracing algorithm implemented in the LargE-Scale remote sensing data and image Simulation framework (LESS) model, facilitating high-quality, large-scale simulations of forested environments. The findings revealed that the optimal voxel size for simulating BRFs at 30 m resolution is approximately 90 cm, smaller than the 100 cm used in flat areas. This study emphasized the significant impact of branches on the BRF simulations and underscored their critical role in scene reconstruction. The impact of branches is two-fold: the branches themselves increase the simulated BRFs, whereas their shadows decrease them. Moreover, the effects of branches and their shadows diminish as the voxel size increases. The simulated spectral albedo exhibits maximum deviations of 0.71% and 1.04% in the red and NIR wavebands, respectively, while remaining below 0.2% in the blue waveband.
Furthermore, the study suggests that if the precise branch architecture is unknown, constructing branches of the first generation is recommended to achieve better results. Additionally, the results demonstrate that the proposed scene achieves greater accuracy and robustness than both the ellipsoid-based and the boundary-based scenes. The findings of this study can help researchers better understand the underlying mechanisms driving the reflective properties of forest canopies, which can inform future studies and improve the accuracy of forest monitoring and ecological modeling.

1 Introduction

Essential climate variables (ECVs) furnish the empirical evidence needed to comprehend and forecast climate evolution, direct mitigation and adaptation strategies, evaluate risks, attribute climatic events to underlying causes, and serve as the foundation for climate services (Bojinski et al. Citation2014). ECVs, such as the fraction of absorbed photosynthetically active radiation (fAPAR) and leaf area index (LAI), are widely studied using remote sensing (RS) data. Nevertheless, RS data cannot directly detect the biophysical, biochemical, and energy parameters of ECVs. Instead, researchers must infer these parameters by establishing relationships between the parameters and optical properties, such as reflectance and transmittance, using empirical and physical models (Chandrasekhar Citation1960). Therefore, it is essential to understand the interaction between the forest and light to accurately interpret vegetation RS data and develop inversion algorithms (Gastellu-Etchegorry, Martin, and Gascon Citation2004).

Radiative Transfer Models (RTMs) possess the capability to simulate the interaction between forests and light (Widlowski, Cote, and Beland Citation2014). The representation of a forest in RTMs is crucial, as it significantly influences their performance (Janoutova et al. Citation2019; Li et al. Citation2018; Widlowski, Cote, and Beland Citation2014). Traditional one-dimensional radiative transfer models and geometric-optical models often simplify the crown structures as horizontally uniform canopies or simple 3D shapes, such as cone archetypes or ellipsoids (Li and Strahler Citation1992; Verhoef Citation1984). However, these simplified representations cannot capture the intricate spatial variations in canopy structure, including gaps, heterogeneous foliage distribution, and complex branching patterns (Xie et al. Citation2018). Consequently, they face challenges in accurately simulating the intricate propagation of light through the forest, resulting in limitations in capturing the fine-scale interactions between light and vegetation (Liu et al. Citation2022). In contrast, three-dimensional RTMs are based on realistic forest scene structures, enabling accurate simulations of optical properties from the crown to the plot scale. Unlike other models, three-dimensional RTMs minimize the requirement for scene simplification, thereby offering a more precise depiction of the heterogeneity of the land surface and the internal structure of the vegetation canopy (Gastellu-Etchegorry et al. Citation2015; Qi et al. Citation2019). However, three-dimensional RTMs require substantial prior information, such as tree structure parameters, which increases the storage space and computational time needed to execute the model. Consequently, it is crucial to reconstruct the forest scene in a more lightweight manner to accelerate RTM execution.

Numerous studies have utilized the Xfrog (Lintermann and Deussen Citation1999) or Arbaro (Weber and Penn Citation1995) software to reconstruct realistic forest scenes by employing inventory data or crown properties derived from airborne laser scanning (ALS) for generating explicit tree models (Widlowski et al. Citation2015). Moreover, certain studies have employed field-measured parameters and parametric modeling of plant growth and topology to construct explicit tree models using software like OnyxTree (www.onyxtree.com) (Jianbo et al. Citation2017; Woodgate et al. Citation2015). However, these parametric methods are limited in their capacity to accurately depict the intricate structure of forests at smaller scales, thereby restricting their broader applications (Liu et al. Citation2022). Therefore, enhanced methodologies for reconstructing forest scenes are needed to furnish more precise and comprehensive information, thereby improving radiative transfer modeling.

Light detection and ranging (LiDAR) sheds new light on highly accurate and realistic forest scene reconstruction. LiDAR can penetrate the crown interior and capture detailed 3D structure information of the canopy, making it an ideal tool for modeling realistic forest scenes with high levels of detail (Brown Citation2014; Brown et al. Citation2015). In reconstructing forest scenes, two main approaches have been pursued: individual-tree-based and voxel-based methods. Individual-tree-based methods typically involve four steps: (1) individual tree segmentation; (2) wood-leaf separation; (3) branch model reconstruction; and (4) leaf addition (Akerblom et al. Citation2018; Calders et al. Citation2018). This method aims to explicitly represent each tree's stem, branches, and leaves by modeling them as geometric objects resembling their actual shape and assigning specific optical properties to each component (Beland and Kobayashi Citation2021; Cifuentes et al. Citation2018). However, the explicit reconstruction of each object in a forest necessitates a substantial amount of information and modeling effort, posing challenges for the application of the individual-tree-based method in large forest stands, such as those spanning over 300 m × 300 m (Akerblom and Kaitaniemi Citation2021).

In contrast, voxel-based methods provide a more practical solution for reconstructing large forest stands. This approach involves dividing the forest into smaller cubes, or voxels, filled with a turbid medium. These voxels consist of small flat facets with defined angle distributions, area volume densities, and spectral properties. Voxel-based methods do not require the segmentation of individual trees, simplifying the forest scene reconstruction and enabling efficient computation of RTMs (Jianbo et al. Citation2017; Kukenbrink et al. Citation2021). However, the top-down scanning approach used by ALS makes it challenging to capture the details of branches, thus limiting the accuracy of bidirectional reflectance factors (BRFs) simulated with RTMs (Malenovsky et al. Citation2008; Widlowski, Cote, and Beland Citation2014). Despite significant progress in the field, the feasibility of voxel-based methods in mountain forests remains uncertain. The optimal voxel size and the impact of branches require in-depth exploration. The topography of mountain forests can significantly affect radiative transfer by changing the orientation of the target, altering the total optical depth, and more (Hu and Li Citation2022). Therefore, further research is crucial to determine the most suitable reconstruction method for mountain forest scenes.

This study focuses on two key objectives: 1) to quantify the impact of 3D mountain forest reconstruction (voxel-based methods with vs. without branches) on the accuracy of the RTM in a temperate deciduous mountain forest; 2) to provide guidance for selecting an appropriate voxel size in the context of temperate deciduous mountain forests. To address these objectives, this study proposes an efficient voxel-based method for reconstructing a large mountain forest with branches. The branches are generated using the OnyxTree software, utilizing tree height and crown diameter information obtained from UAV laser scanning (ULS). The forest canopy is reconstructed through subdivision into small voxels. Subsequently, the accuracy of the RTM based on the two voxel models (with and without branches) is evaluated under different voxel sizes and spatial resolutions, and the factors contributing to any deviation are analyzed.

2 Study area and materials

2.1 WangLang National Nature Reserve

The WangLang National Nature Reserve (103°55′-104°10′E, 32°49′-33°02′N), situated in the SiChuan Province of China, spans an area of approximately 332.97 km² and boasts a varied topography with elevations ranging from 2428 to 4869 meters. The primary objective of the Reserve is to protect the natural habitat of wildlife, such as pandas and golden monkeys. Previous studies have extensively investigated this area (Chen et al. Citation2020; Kang, Wang, and Li Citation2017; Xie et al. Citation2022). The Reserve exhibits a predominantly southeast-to-northwest slope with a gradient exceeding 30°. It experiences a temperate climate with an average annual temperature of 2.9°C and an annual precipitation of 859.9 mm. The primary experiment plot was chosen within the Reserve as a nine-hectare study site measuring 300 × 300 meters. This study site consists of a dense forest primarily composed of deciduous broadleaf trees, including red birch (Betula albosinensis Bruk) and rough birch (Betula utilis), with a smaller presence of coniferous trees such as fir (Abies fabri) and spruce (Picea asperata Mast.). The location of the study site is illustrated in Figure 1.

Figure 1. The topography, ULS point cloud, and unmanned aerial vehicle image of the study area.


2.2 Field measurements

Accurate branch models require field-measured tree diameter at breast height (DBH) and tree heights. In our study, DBH was measured at 1.3 meters above the local ground level using a diameter tape. In the field, we randomly measured the heights of trees with a DBH >5 cm and observed that they were primarily concentrated within the range of 10–30 meters. A moving sample strategy was used to select the LAI measurement plots in order to reduce the impact of measurement disturbance (Fang et al. Citation2014). Following this strategy, we initially chose one plot at the center of the study area. Subsequently, three additional plots were selected based on considerations of spatial accessibility, the homogeneity of vegetation types, and comprehensive regional coverage. Field LAI values were obtained with the LAI-2200 canopy analyzer from 26 to 31 August 2021 across four forest plots measuring 20 m × 20 m. We conducted nine measurements in each plot and averaged them to obtain that plot's LAI (Fang et al. Citation2019). Finally, the mean LAI of the four plots, 3.6, was taken to represent the study area.

2.3 ULS data acquisition

The ULS was conducted over the study area on 28 August 2021, utilizing a Livox Avia scanner mounted on a FeiMa D2000 aircraft. The scanner's NIR laser was operated at a repetition rate of 240 kHz, enabling a maximum of three returns per pulse. The flight lines had a 60% overlap to ensure complete coverage of the study area. An approximate target point density of 200 points/m² was achieved, offering a high-resolution perspective of the forest canopy (Hu et al. Citation2021). The sensor was flown at an altitude of 400 m above the ground level, ensuring high-precision data collection with a horizontal accuracy of 2 cm and a vertical accuracy of 3 cm.

2.4 Unmanned aerial vehicle image acquisition

In this study, an airborne flight campaign was conducted on 27 August 2021, to capture aerial images of the study region. The flight was conducted using a DJI P4 Multispectral aircraft with a high-resolution color sensor. The aircraft was flown 400 m above the ground, capturing imagery with a spatial resolution of approximately 0.3 m. The flight occurred during optimal lighting conditions, with a low solar zenith angle from 15:56 to 16:07 local time. The high-quality aerial images collected during this campaign were utilized to validate radiative transfer simulations.

2.5 Landsat-8 OLI data

The Landsat-8 Surface Reflectance imagery was obtained from the United States Geological Survey (https://earthexplorer.usgs.gov/). The Level-2 products, which provide surface reflectance data, were utilized in this study. The images were acquired on 23 August 2021, which closely coincided with the collection of the ULS data. The path and row numbers of the imagery were 130 and 37, respectively. These images were used to evaluate the quality of the simulated image and the optimal voxel size for reconstructing the mountain forest scene.

3 Methods

The study comprises four main steps, illustrated in Figure 2: data preprocessing, realistic 3D mountain forest scene reconstruction, radiative transfer simulation, and accuracy assessment.

Figure 2. Workflow of the study. In this figure, LAD represents leaf area density, and UAV stands for unmanned aerial vehicle.


3.1 Data preprocessing

The ULS data underwent several preprocessing steps, including outlier removal, filtering, canopy height model generation, and individual tree segmentation, as described in Guo et al. (Citation2017). Outlier removal was carried out to eliminate noise resulting from system errors and weather conditions. Filtering aimed to differentiate ground and nonground points (i.e. crown points in our study) in the ULS data. For this purpose, the cloth simulation filtering (CSF) method was employed, which is well suited for mountainous areas (Zhang et al. Citation2016). Finally, ordinary kriging was used to generate a 0.5-meter resolution digital elevation model (DEM) and digital surface model (DSM) from the ground and crown points (Guo et al. Citation2010). The canopy height model (CHM) was derived by subtracting the DEM from the DSM.
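The final step above is a simple raster subtraction. A minimal sketch, assuming aligned 0.5 m DEM and DSM grids as NumPy arrays (the elevation values below are illustrative, not the study's data):

```python
import numpy as np

def canopy_height_model(dsm: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """CHM = DSM - DEM, clipped at zero to suppress negative artifacts
    that can arise from interpolation noise near steep terrain."""
    if dsm.shape != dem.shape:
        raise ValueError("DSM and DEM grids must align")
    return np.clip(dsm - dem, 0.0, None)

dsm = np.array([[2430.0, 2441.5], [2435.2, 2450.0]])  # surface elevations (m)
dem = np.array([[2428.0, 2429.5], [2430.2, 2431.0]])  # ground elevations (m)
chm = canopy_height_model(dsm, dem)                   # canopy heights (m)
```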

Individual trees were segmented from the CHM using a watershed algorithm as described in Chen et al. (Citation2006). The minimum and maximum tree heights were set to 10 m and 30 m, respectively, and the minimum and maximum crown diameters were set to 2 m and 10 m, respectively, based on the realistic situation in the study area. This process successfully segmented a total of 921 trees. Visual inspection revealed a high consistency between the segmented trees' outlines and the CHM image (Figure 3).
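As a simplified sketch of the seeding step of such a watershed segmentation, the snippet below detects candidate treetops as local maxima of the CHM above the 10 m minimum tree height used in the study. A full pipeline (e.g. the Chen et al. 2006 algorithm) would then grow crown regions from these seeds; the toy CHM here is illustrative only.

```python
import numpy as np
from scipy import ndimage

def detect_treetops(chm, min_height=10.0, window=3):
    """Label local maxima of a (lightly smoothed) CHM as treetop seeds."""
    smoothed = ndimage.uniform_filter(chm, size=window)
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=window)
    seeds = local_max & (chm >= min_height)       # enforce minimum tree height
    labels, n_trees = ndimage.label(seeds)        # one label per candidate tree
    return labels, n_trees

chm = np.zeros((9, 9))
chm[2, 2] = 18.0   # one crown apex (m)
chm[6, 6] = 25.0   # a second apex (m)
labels, n_trees = detect_treetops(chm)
```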

Figure 3. Individual tree segmentation display.


3.2 Realistic 3D mountain forest scene reconstruction

An individual tree serves as the fundamental unit of the forest ecosystem. While individual trees may exhibit variations in morphology and structure, those belonging to the same species often share similar structural characteristics. Utilizing typical branch models to represent individual trees with similar structures in the study area significantly simplifies the forest scene reconstruction process and reduces storage and computational requirements. The 3D branch models were generated using the OnyxTree software (www.onyxtree.com) based on a parametric algorithm, enabling the creation of realistic branch models from parameters such as tree height and crown diameter. From the field measurements and the individual tree segmentation results, we found that tree heights were mainly concentrated between 10 and 30 meters, with crown diameters falling within the range of 2 to 10 meters. As a result, 294 branch models were generated covering a range of tree heights (10 m to 30 m with a step of 1 m) and crown diameters (2 m to 15 m with a step of 1 m). The spectral properties of the branches were assigned from the optical database of the LargE-Scale remote sensing data and image Simulation framework (LESS) as the default birch_branch properties.

The crown models were reconstructed using a voxel-based method. A voxel represents a cube unit filled with a finite number of leaves, each defined by leaf area density, leaf angle distribution, and spectral properties. The leaf area density was computed using the Beer-Lambert law from transmittance, as outlined in Eq. (1) (Beland, Widlowski, and Fournier Citation2014; Grau et al. Citation2017; Vincent et al. Citation2017). This approach provides a comprehensive and efficient method for reconstructing 3D crown models.

(1) LAD_cal = −ln(P_gap) / (G(θ) × s)

where LAD_cal represents leaf area density and s is the voxel size used in the crown reconstruction. G(θ) is the leaf projection function, which can be computed from a given leaf angle distribution (Nilson Citation1971). This study assumed that the leaves conformed to a spherical distribution, so G(θ) is set to 0.5 (Weiss et al. Citation2004). P_gap is the gap fraction, defined as the proportion of light transmitted through the interspaces between leaves and branches. The gap fraction can be computed using Equation (2) (Hosoi and Omasa Citation2006):

(2) P_gap = (n_ground + n_crown_below) / (n_ground + n_crown_below + n_crown)

where n_ground is the number of ground points vertically below the voxel, n_crown_below is the number of crown points vertically below the voxel, and n_crown is the number of crown points within the voxel. The leaf area density for a given crown can be calculated by combining Equations (1) and (2). The plot LAI is then determined using Equation (3):

(3) LAI_cal = Σ(LAD_cal × s) / (n_len × n_wid)

where LAI_cal represents the LAI calculated for the study area, and n_len and n_wid are the number of voxels along the length and width of the scene, respectively. To ensure that the LAI of the forest scene remains the same when different voxel sizes are used to reconstruct the crown, LAD_cal needs to be calibrated to LAD_true, as proposed by Schneider et al. (Citation2014):

(4) LAD_true = (LAI_true / LAI_cal) × LAD_cal

where LAD_true is the true LAD in each voxel, and LAI_true represents the true LAI of the study area.
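Equations (1)-(4) can be sketched end to end as follows. G = 0.5 reflects the spherical leaf angle distribution assumed in the study; the point counts and the tiny 2 × 2 voxel scene are illustrative values, not the study's data.

```python
import math

def gap_fraction(n_ground, n_crown_below, n_crown):
    # Eq. (2): fraction of pulses passing through the voxel
    return (n_ground + n_crown_below) / (n_ground + n_crown_below + n_crown)

def leaf_area_density(p_gap, voxel_size, g=0.5):
    # Eq. (1): Beer-Lambert inversion, LAD_cal = -ln(P_gap) / (G * s)
    return -math.log(p_gap) / (g * voxel_size)

def scene_lai(lad_per_voxel, voxel_size, n_len, n_wid):
    # Eq. (3): per-voxel contributions summed over the scene footprint
    return sum(lad * voxel_size for lad in lad_per_voxel) / (n_len * n_wid)

def calibrate_lad(lad_per_voxel, lai_true, lai_cal):
    # Eq. (4): rescale every voxel so the scene matches the measured LAI
    scale = lai_true / lai_cal
    return [lad * scale for lad in lad_per_voxel]

s = 0.9                                                      # 90 cm voxels
p = gap_fraction(n_ground=40, n_crown_below=20, n_crown=40)  # P_gap = 0.6
lad = leaf_area_density(p, s)                                # one voxel's LAD
lads = [2.0, 4.0, 3.0, 1.0]                  # LAD_cal for a toy 2 x 2 scene
lai_cal = scene_lai(lads, s, n_len=2, n_wid=2)
lads_true = calibrate_lad(lads, lai_true=3.6, lai_cal=lai_cal)
```

After calibration, recomputing the scene LAI from `lads_true` returns the field-measured value of 3.6.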

The spectral properties of the leaves were obtained from the LOPEX93 database, accessible online at http://opticleaf.ipgp.fr/index.php?page=database. In this study, the spectral properties of the European birch were used, as it closely resembles most species found in the study area. The voxel models were created with various leaf area densities, ranging from 0.1 to 5 m2/m3 with a step of 0.1 m2/m3. Each leaf element within the voxel was simulated as a rectangle with 0.1 m × 0.1 m dimensions. The voxel model with the leaf area density value most similar to the result calculated using Equation (4) was selected and placed on the DEM.
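The library lookup described above amounts to snapping each calibrated LAD to the nearest precomputed density. A minimal sketch, assuming a library spanning 0.1 to 5.0 m²/m³ in 0.1 steps as stated in the text:

```python
def nearest_library_lad(lad, lo=0.1, hi=5.0, step=0.1):
    """Snap a calibrated LAD (Eq. 4) to the closest precomputed
    voxel-model density, clipping to the library's range."""
    snapped = round(lad / step) * step
    return round(min(max(snapped, lo), hi), 1)

# Illustrative calibrated LAD values, including out-of-range cases
picks = [nearest_library_lad(v) for v in (0.04, 1.137, 4.96, 7.2)]
```

Values below 0.1 or above 5.0 are clipped to the library's bounds rather than rejected, mirroring a nearest-match selection.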

3.3 Radiative transfer simulation

This study employed the LESS model to simulate the radiative transfer of light within the forest environment. LESS, a 3D radiative transfer model, employs ray tracing techniques to simulate the absorption, reflection, and transmission of incident light within the scene (Qi et al. Citation2019; Yan et al. Citation2021, Citation2021). The model generates simulation data using the input parameters of the 3D realistic forest scene, observation geometry, and illumination conditions.

To assess the impact of the branches on the radiative transfer simulation, this study constructed two types of forest scenes: voxel-based scenes with and without branches. The scene with branches includes branch models, crown voxel models, and the DEM, whereas the scene without branches consists only of crown voxel models and the DEM. To determine the optimal voxel size in a temperate deciduous mountain forest, this study reconstructed the forest scene with eight different voxel sizes ranging from 30 to 100 cm with a step of 10 cm. Previous studies have shown that the accuracy of simulated reflectance decreases when the voxel size exceeds 100 cm (Liu et al. Citation2022; Widlowski, Cote, and Beland Citation2014). As a result, voxels larger than 100 cm were not considered in this study. The sun zenith and azimuth used in the RTM simulation were set to those of the Landsat 8 image, namely 39.764° and 131.928°, respectively. The view zenith, azimuth, and sensor height in the RTM simulation were set to 0°, 180°, and 705 km, respectively. The RTM simulation utilized wavebands covering the blue band (450–510 nm), red band (630–680 nm), and near-infrared (NIR) band (845–885 nm). The optical properties of the leaves, branches, and understory used in the RTM simulation are presented in Table 1. To avoid errors from rays escaping the scene near the edges, this study replicated the nine-hectare scene 100 times to create a sizable virtual scene.
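For reference, the satellite-simulation settings above can be collected in one place. The dictionary below is only a hedged summary of the stated values; it is not the LESS model's actual configuration format or API.

```python
# Simulation parameters as stated in the text (not a LESS config file)
satellite_sim = {
    "sun_zenith_deg": 39.764,      # from the Landsat 8 acquisition
    "sun_azimuth_deg": 131.928,
    "view_zenith_deg": 0.0,
    "view_azimuth_deg": 180.0,
    "sensor_height_m": 705_000,    # 705 km
    "bands_nm": {"blue": (450, 510), "red": (630, 680), "nir": (845, 885)},
    "voxel_sizes_cm": list(range(30, 101, 10)),  # 30 ... 100 cm
    "scene_replicates": 100,       # tile the 9 ha scene to avoid edge losses
}
```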

Table 1. Reflectance (r) and transmittance (t) values at several wavelengths (nm) for the leaf (l), branch (b), and understory (u) objects used in the realistic forest scene.

The scene used for simulating the UAV images was consistent with that used for the simulated satellite images. The sun zenith and azimuth used in the simulation were set to match those of the acquired UAV image, namely 21.752° and 130.228°, respectively. The view zenith, azimuth, and sensor height in the simulation were set to 0°, 180°, and 400 m, respectively.

3.4 Accuracy assessment

The UAV image was utilized as a reference to assess the accuracy of the simulated geometrical shapes of the trees and shadows, since it only provides digital number information. The quantitative accuracy of the RTM simulation was evaluated by comparing it with the Landsat 8 reflectance image.

The impact of branches on the radiative transfer simulation was examined by comparing the forest scene with branches to the scene without branches. Bidirectional Reflectance Factors (BRFs), spectral albedo, and vegetation indices (VIs) at the top of the canopy (TOC) were employed to assess the influence of branches on the radiative properties of the forest. BRFs represent the ratio of the radiant flux reflected from the surface to that reflected from an ideal diffuse reference panel under specific illumination and viewing conditions. Spectral albedo is the ratio of reflected to incident energy in a specific waveband over the scene. The accuracy of voxel-based radiative transfer (RT) simulations for practical remote sensing applications was evaluated by quantifying the deviation of VI values between the forest scenes with and without branches. The normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI) were used as proxies for assessing vegetation abundance and vigor. The impact of branches was evaluated by computing the normalized differences (δ) of the simulated BRFs, spectral albedo, and VIs using the following equation:

(5) δ = (v − r) / r

where v and r are the simulated values from the forest scenes with and without branches, respectively.
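Equation (5) and the two vegetation indices can be sketched as below. The band reflectances used in the example are illustrative numbers, not simulation output; the EVI coefficients are the standard MODIS-style constants, assumed here since the text does not state them.

```python
def normalized_difference(v, r):
    # Eq. (5): delta = (v - r) / r, v from the with-branch scene
    return (v - r) / r

def ndvi(nir, red):
    # Normalized difference vegetation index
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Enhanced vegetation index with standard coefficients (assumption)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

# Illustrative BRF values for the with- and without-branch scenes
delta = normalized_difference(v=0.306, r=0.300)
```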

4 Results and analysis

4.1 Visualization of the forest scene with different voxel sizes

Figure 4 presents a series of realistic scene reconstructions using various voxel sizes. The series demonstrates that the number of voxels required to represent the study area decreases as the voxel size increases, from 4,835,523 voxels at a voxel size of 30 cm to 531,441 at 100 cm. The figure also demonstrates the impact of voxel size on representing gaps within the crown. The gaps are retained more reliably when the voxel size is small, such as 30 cm. When the voxel size increases to 40 cm, only the larger gaps within the crown remain visible. The gaps within the crown become nearly undetectable at voxel sizes larger than 80 cm.

Figure 4. Graphical depiction of realistic 3D mountain forest scene reconstruction based on different voxel sizes and the original point cloud. The plot size is 30 m × 30 m, which is a part of the study area. The colors do not represent actual optical properties.


Although not evident in Figure 4, the mean leaf area index (LAI) per voxel of the study area decreases as the voxel size increases. The decrease in LAI-per-voxel values can be attributed to the leaf-free volume within the tree crown. As the voxel size increases from 30 cm to 100 cm, increasing gaps within the crowns are incorporated into the voxels. Consequently, this substantially increases the total downward projected area of the voxelized tree crowns, leading to a smaller mean LAI per voxel.

4.2 The best voxel size to reconstruct the mountain forest scene

The relationship between the voxel size and the accuracy of the simulated BRFs is illustrated in Figure 5. The results indicate an initial increase in accuracy with increasing voxel size, followed by a peak and a subsequent decrease. The low initial accuracy can be attributed to the limitations of UAV LiDAR, which is less precise than terrestrial LiDAR. For smaller voxel sizes, such as 20 cm, gaps may appear in the reconstructed scene due to the lower laser point density, resulting in less accurate leaf representation. Increasing the voxel size fills these gaps, leading to a reconstructed scene that more closely resembles a realistic forest. Our study observed a decrease in accuracy when the voxel size reached 90 cm. This decrease can be attributed to the increasing impact of clumping effects within individual voxels, as previously reported by Sinoquet et al. (Citation2005).

Figure 5. The R-square of BRFs between simulated images based on scene reconstruction and Landsat 8 image as a function of voxel size. BRF simulations were carried out in the red band (655 nm) and NIR band (865 nm) and based on the with branch voxel forest scene. The simulated sun zenith (39.764°) and sun azimuth (131.928°) were the same as those of the Landsat 8 image.


This study proposes using 90 cm voxels to reconstruct the forest scene in mountainous areas, smaller than the sizes typically used or recommended in flatter areas (Cao et al. Citation2021; Liu et al. Citation2022). In contrast to flatter areas, where the distance between a voxel and the terrain remains constant, the distance between the voxel and the terrain in mountainous areas varies due to the slope. This variation arises from changes in the vertical position of the voxel relative to the ground surface caused by the terrain slope. Employing large voxels in such complex terrain leads to a coarser reconstructed scene compared to flatter areas. Therefore, it is recommended to use a smaller voxel size to achieve a more accurate representation of the forest in mountainous areas.

4.3 The optimal branch complexity to reconstruct the realistic forest scene

The complexity of branches is a critical factor in determining the level of detail for representing branches in forest modeling. To examine this parameter, this study explored four levels of branch complexity: without branches, first-generation branches, branches with only a stem, and branches with secondary branching (Figure 6). These were denoted as without branches, first branches, simple branches, and complex branches, respectively. Figure 7 shows that the simulated BRFs based on the first branches achieve the highest accuracy among the four configurations. Compared to the simulated BRFs based on the without-branch scene, the accuracy of the simulated BRFs based on the first branches improved from 0.727 to 0.7283 in the red band and from 0.7364 to 0.7404 in the NIR band. However, since the branches were generated solely from tree height and crown diameter, the disparity between the complex and realistic branches increases, leading to lower accuracy. It is important to note that the point cloud used in this study may already contain branch information, which is treated as leaves during the inversion and reconstruction process. Hence, including additional branches could introduce discrepancies in the results. The results depicted in Figure 7 indicate that the simulated BRFs based on complex branches have lower accuracy than those based on first-generation branches. Specifically, the simulated BRFs based on complex branches exhibit higher values in the red band than those based on first-generation branches because, as branch complexity increases, more light reaches the branches; branch reflectance is higher than understory reflectance, leading to higher simulated BRFs. However, when the branch structure is simplified to a cylinder, it becomes challenging for light to reach the branches, resulting in simulated BRFs similar to those based on the without-branch voxel model.
Therefore, the simulated BRFs based on simple branches differ only slightly from those based on the without-branch voxel model. In conclusion, when the precise branch architecture is unknown, it is recommended to simulate first-generation branches to obtain more accurate results.

Figure 6. Schematic diagram of the same tree with different branch complexity: (a) without branch; (b) first branches; (c) simple branches; (d) complex branches. The height and crown diameter are 12 m and 7 m respectively. They were generated by OnyxTree software.


Figure 7. Pixel-wise comparisons between simulated BRF and Landsat 8 BRF in the red and NIR band with different branch complexity; the simulated BRFs are based on (a) without branch, (b) first branches, (c) simple branches and (d) complex branches; the simulations were all in the same illumination and view condition, and the voxel size is 90 cm.


4.4 Impact of branches on radiative transfer simulation

4.4.1 Impact of branches on BRFs simulation

Figure 8 illustrates the normalized differences in simulated Bidirectional Reflectance Factors (BRFs) between the with-branch and without-branch models at various voxel sizes. The results demonstrate that branches have two opposing effects on the simulated BRFs. First, the branches themselves increase the simulated BRFs because their reflectance is higher than that of the understory. This effect is more prominent in the red waveband than in the NIR waveband: the branch reflectance in the red waveband (0.1033) is 2.28 times the understory reflectance (0.04512), whereas in the NIR waveband the branch reflectance (0.4606) is only 1.57 times the understory reflectance (0.2932). Moreover, the combination of the branches' relatively high reflectance and the strong absorption by foliage pigments further increases the branch contribution to the BRF in the red waveband. Second, the shadows cast by branches reduce the simulated BRFs.

Figure 8. Spatial pattern of the normalized differences between simulated BRF based on with branch and without branch forest scene in the red and NIR band with different voxel sizes; (a1-d1) difference images in the red band with the voxel size of 30 cm, 50 cm, 70 cm and 90 cm, respectively; (a2-d2) difference images in the NIR band with the voxel size of 30 cm, 50 cm, 70 cm and 90 cm, respectively. The red color represents that the simulated BRF based on with branch model is higher than the without branch model; the blue color represents that the simulated BRF based on with branch model is lower than the without branch model. All simulations were set with the same illumination and view condition.


To further investigate the impact of the branches on BRF simulations, we conducted a quantitative analysis of the mean normalized BRF differences. Specifically, the positive and negative differences were calculated separately and labeled REDbranch and NIRbranch for the positive differences (the red parts in Figure 8) and REDshadow and NIRshadow for the negative differences (the blue parts in Figure 8). Initially, this study extracted the regions where δ is greater than 3 and where δ is smaller than −1 in the 30 cm reconstruction (as shown in Figure 8). These regions represent the direct impact of the branches themselves and of the branch shadows, respectively. Subsequently, the mean BRF difference was calculated within each of the two regions.
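The thresholding described above can be sketched in a few lines of NumPy. This is only an illustrative reading of the procedure: `branch_effect_stats` and its arguments are our names, δ is assumed to be the normalized BRF difference expressed in percent, and the with-branch and without-branch BRF images are assumed to be co-registered arrays.

```python
import numpy as np

def branch_effect_stats(brf_with, brf_without, branch_thresh=3.0, shadow_thresh=-1.0):
    """Split the normalized BRF difference delta (in percent) into the mean
    contribution of branches themselves (delta > branch_thresh, the red
    parts) and of branch shadows (delta < shadow_thresh, the blue parts),
    following the thresholds applied to the 30 cm reconstruction."""
    brf_with = np.asarray(brf_with, dtype=float)
    brf_without = np.asarray(brf_without, dtype=float)
    delta = 100.0 * (brf_with - brf_without) / brf_without
    branch_mask = delta > branch_thresh   # branches increase the BRF here
    shadow_mask = delta < shadow_thresh   # branch shadows decrease it here
    return delta[branch_mask].mean(), delta[shadow_mask].mean()
```

The same function would be applied per waveband to obtain REDbranch/REDshadow and NIRbranch/NIRshadow.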

Our results, shown in Figure 9, indicate that the effects of both the branches themselves and the branch shadows decrease as the voxel size increases. This is because the gaps within the crowns are filled, hindering light from penetrating to the branches; consequently, the branches have minimal impact on the simulated BRFs. This trend is also evident in 2. Notably, when the voxel size reaches 90 cm, NIRbranch changes from positive to negative because the vegetation beneath the branches transitions from understory to leaves; since the reflectance of leaves in the NIR waveband (0.4764) is higher than that of branches (0.4606), the sign of the BRF difference reverses. Furthermore, the magnitude of NIRshadow is almost twice that of NIRbranch. Hence, if the reflectance of branches is similar to that of the understory, the shadows cast by branches will have the greater impact on the simulated BRFs. Overall, our study offers valuable insights into the quantitative effects of branches on BRF simulations.

Figure 9. The mean normalized BRF differences between the with-branch and without-branch forest scenes as a function of voxel size. REDbranch, NIRbranch, REDshadow, and NIRshadow denote the effects of the branches themselves and of the branch shadows on the BRF simulation in the red and NIR wavebands, respectively.


4.4.2 Impact of branches on albedo simulation

Our findings demonstrate that the impact of branches on the simulated spectral albedo decreases as the voxel size increases, consistent with the reflectance trends described above (as shown in Figure 10). More specifically, the deviation of the spectral albedo is minimal in the blue waveband, at merely 0.2%. The maximum deviations are 0.71% and 1.04% in the red and NIR wavebands, respectively, both occurring at a voxel size of 30 cm. The minimum deviation in the red waveband is 0.13%, occurring at a voxel size of 90 cm, while the minimum deviation in the NIR waveband is 0.27%, occurring at a voxel size of 100 cm (Figure 10).
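As a minimal sketch of how these per-band deviations can be tabulated (the function name and the voxel-size-by-waveband array layout are our assumptions, not the paper's implementation):

```python
import numpy as np

def albedo_deviation_percent(albedo_with, albedo_without):
    """Normalized spectral albedo deviation (in percent) between the
    with-branch and without-branch simulations. Inputs are arrays
    indexed by (voxel_size, waveband); returns the per-band maximum
    and minimum deviation over the tested voxel sizes."""
    a_w = np.asarray(albedo_with, dtype=float)
    a_wo = np.asarray(albedo_without, dtype=float)
    dev = 100.0 * np.abs(a_w - a_wo) / a_wo
    return dev.max(axis=0), dev.min(axis=0)
```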

Figure 10. Normalized spectral albedo deviation between with branch and without branch forest scene under different voxel sizes.


These results demonstrate that the impact of branches on spectral albedo varies with wavelength, with the most significant impact observed in the red and NIR wavebands. The ideal voxel size for simulating spectral albedo is also wavelength-dependent, resulting in distinct minimal deviations across the blue, red and NIR wavebands at varying voxel sizes. In detail, the blue waveband exhibits the smallest deviation at a voxel size of 30 cm, while the red waveband shows the smallest deviation at a voxel size of 90 cm. In contrast, the NIR waveband demonstrates the smallest deviation at a voxel size of 100 cm. Therefore, researchers and practitioners should carefully consider the wavelength-dependent effects of branches when reconstructing 3D forest scenes and simulating spectral albedo using voxel-based radiative transfer simulations.

4.4.3 Impact of branches on vegetation indices simulation

The simulated NDVI and EVI were analyzed at various voxel sizes, and the results are presented in Figure 11. As the voxel size increases, the normalized NDVI difference transitions from positive to negative on the principal plane. On both the principal and orthogonal planes, the normalized NDVI differences remain below 0.2% for all voxel sizes and view zenith angles (VZAs). The EVI differences for voxel sizes of 30 and 50 cm are negative and exceed 0.4% for all VZAs on both planes. Near the hot spot direction, the normalized differences of NDVI and EVI change sharply because only the branches themselves, not their shadows, affect the BRF in that direction. In the hot spot direction, the normalized NDVI deviation reaches its local maximum, while the normalized EVI deviation reaches its local minimum.

Figure 11. The normalized difference of NDVI and EVI of the with branch forest scene simulation, compared with the without branch forest scene simulation. (a) difference on the principal plane and (b) on the orthogonal plane. The sun zenith and azimuth are 30° and 100° respectively.


It is also worth noting that NDVI is nonlinear and saturates in high-biomass vegetation areas (Gitelson 2004; Huete et al. 2002). The sensitivity of NDVI to LAI weakens once LAI exceeds a threshold value, usually between 2 and 3 (Carlson and Ripley 1997). In contrast, EVI was developed to enhance the vegetation signal and exhibits improved sensitivity in high-biomass regions. These contrasting sensitivities explain the difference in deviations: because the LAI of this forest plot is 3.6, NDVI is less sensitive than EVI here, and therefore NDVI differs less than EVI between the with-branch and without-branch simulations.
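The two indices follow their standard definitions, with the MODIS EVI coefficients of Huete et al. (2002); a minimal sketch (function names are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients
    (Huete et al. 2002)."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def normalized_index_difference(idx_with, idx_without):
    """Normalized difference (in percent) of a vegetation index between
    the with-branch and without-branch simulations."""
    return 100.0 * (idx_with - idx_without) / idx_without
```

Because the EVI denominator includes the red and blue reflectances with large coefficients, it saturates later than the NDVI ratio, which is why EVI stays sensitive at the plot's LAI of 3.6.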

4.5 Spatial structures of the simulated visible image

Figure 12 presents the simulated image of the study area, which closely resembles the actual landscape in the distribution of individual trees when compared with the UAV image. Additionally, the subplot accurately depicts the tree shadows, consistent with those observed in the UAV image. Nevertheless, the simulated image exhibits two minor discrepancies. First, it appears coarser than the UAV image because the voxel size used is larger than the resolution of the UAV image. Second, the colors of the two images differ slightly, which can be attributed to differences in optical properties between the reconstructed scene elements and the real forest.

Figure 12. Comparison between UAV image and simulated visible image: (a) UAV image; (b) simulated image. The simulated visible image was based on the 90 cm voxel model with branches.


5 Discussion

5.1 Advantages of the proposed method compared with ellipsoid and boundary-based scene

To highlight the strengths of the proposed scene reconstruction method for radiative transfer simulations, this study conducted a comparative analysis against two alternative methods: the ellipsoid-based method and the boundary-based method (Figure 13). The ellipsoid-based approach models individual tree point clouds with ellipsoids, whereas the boundary-based approach encapsulates clusters of crown points with arbitrarily shaped boundaries. For the boundary-based approach, an alpha value of 2 was assigned (Qi et al. 2022). The branch structures and positions remained consistent with those in the proposed method, and the ellipsoid and boundary surfaces were assigned the optical properties of leaves.
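As a rough illustration of the ellipsoid-based idea, an axis-aligned ellipsoid can be derived for an individual crown point cloud from its bounding box. This is only a crude stand-in for the actual fitting procedure used in the compared method, and `fit_crown_ellipsoid` is a hypothetical name:

```python
import numpy as np

def fit_crown_ellipsoid(points):
    """Crude axis-aligned ellipsoid for a single-tree crown point cloud:
    center at the bounding-box center, semi-axes at half the bounding-box
    extents along x, y, and z."""
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    center = (lo + hi) / 2.0
    semi_axes = (hi - lo) / 2.0
    return center, semi_axes
```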

Figure 13. Schematic diagram of (a) ellipsoid-based scene and (b) boundary-based scene.


The proposed scene reconstruction method exhibits superior BRF accuracy compared with the other two methods (Figure 14). It surpasses the ellipsoid-based and boundary-based methods in flexibility and accuracy when simulating remote sensing images, especially for the intricate forest canopies prevalent in mountain areas. The accuracy of the simulated BRFs based on the ellipsoid scene is significantly lower than that of the proposed method, primarily because approximating the forest crown with an ellipsoid oversimplifies its shape. Although the accuracy of the simulated BRFs based on the boundary scene is comparable to that of the proposed scene, those BRFs still exhibit significant overestimation compared with the actual Landsat 8 images, highlighting the robustness of the proposed method.

Figure 14. Pixel-wise comparisons between simulated BRF and Landsat 8 BRF in the red and NIR band with different scene reconstruction approaches; the simulated BRFs are based on (a) the ellipsoid scene and (b) the boundary scene. The simulations were all in the same illumination and view condition.


5.2 Limitations in the proposed method and prospects for future work

It is important to note that the branch models used in this study were generated with OnyxTree software and may differ from real branches. Nevertheless, incorporating branch models is vital for accurate radiative transfer simulations, as shown in Section 4.4. Previous studies have demonstrated that neglecting branches in simplified canopy structures can produce simulated reflectance errors exceeding 50% at a simulation resolution of 1 × 1 m (Widlowski, Cote, and Beland 2014). Additionally, Malenovsky et al. (2008) observed that incorporating woody elements in canopy reflectance simulations can induce reflectance variations of up to 4% in the near-infrared band and 2% in the red band at a resolution of 0.4 m. Numerous studies have endeavored to improve the precision of branch models by reconstructing them from terrestrial and backpack LiDAR (Fan et al. 2020; Hackenberg et al. 2014; Raumonen et al. 2013). Although acquiring branch point cloud data with airborne LiDAR remains challenging, future studies should explore integrating multi-source LiDAR data to reconstruct forest scenes that are more representative of reality (Dai et al. 2019; Shao et al. 2022).

This study focused solely on reconstructing the crowns of broadleaf trees, disregarding the conifer trees present in the study area. Future work should improve forest characterization by employing deep learning for tree species classification, using point cloud data alone or in conjunction with hyperspectral data (Modzelewska, Fassnacht, and Sterenczak 2020; Wang et al. 2023; Xi et al. 2020). In this way, the reconstructed forest scene can incorporate multiple tree species, enhancing the overall precision of the analysis. Furthermore, the study used a default database to assign identical optical properties to all elements in the large scene, including leaf reflectance and transmittance, branch reflectance, and understory reflectance. This simplification fails to capture the spatially varying optical properties of real forests; we therefore recommend that future studies collect field spectral data and assign them to the scene elements to improve simulation accuracy. Finally, this study addressed only mountain forest scene reconstruction, and the results may vary in other environments depending on parameters such as topography, fractional vegetation cover, and tree type. Further studies should thus assess the proposed methodology across a wider range of forest types to gain a more comprehensive understanding of its applicability and effectiveness.

6 Conclusion

This study introduced a novel approach for reconstructing 3D mountain forest scenes by incorporating branches into the voxel crown. As the voxel size increases, the accuracy of the simulated BRFs first increases and then decreases; the optimal voxel size for mountain forest reconstruction is approximately 90 cm, yielding R2 values of 0.7283 and 0.7404 in the red and NIR bands, respectively. Notably, this voxel size is smaller than that used in flat areas. The study also demonstrated the substantial influence of branches on simulated BRFs. Branches have a dual effect: on the one hand, they exhibit higher reflectance than the understory or leaves and thus increase the BRFs by reflecting incident light; on the other hand, they cast shadows that reduce the light reaching the ground or other surfaces, decreasing the BRFs in shaded areas. Additionally, the branches' effect on BRFs depends on the voxel size used in the simulations, with both the branches and their shadows exhibiting diminishing impact in the red and NIR wavebands as the voxel size increases. Regarding the simulated spectral albedo, the maximum deviations are 0.71% and 1.04% in the red and NIR wavebands, whereas the deviation remains below 0.2% in the blue waveband; the minimum deviation in the red waveband is 0.13%. Compared with the ellipsoid- and boundary-based scenes, the proposed scene offers notable advantages in accuracy and robustness.

In conclusion, our study provides valuable insights into the impact of 3D forest structure representation in RTMs and emphasizes the significance of incorporating branches in the reconstruction of mountain forests. Further studies should assess the performance of RTMs using 3D models from various forest types to enhance our understanding of the impact of forest structure representation on RTMs.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (Grant No.2020YFA0608702) and National Natural Science Foundation of China (Grant No. 41631180 and Grant No. 42271398).

Disclosure statement

No potential conflict of interest was reported by the authors.

Data availability statement

The data that support the findings of the study area are available from the first author, [Xiaohan Lin, [email protected]], upon reasonable request.

Additional information

Funding

This work was supported by the [National Key Research and Development Program of China] under Grant [number 2020YFA0608702] and [National Natural Science Foundation of China] under Grant [number 41631180, 42271398].

References

  • Akerblom, M., and P. Kaitaniemi. 2021. “Terrestrial Laser Scanning: A New Standard of Forest Measuring and Modelling?” Annals of Botany 128 (6): 653–18. https://doi.org/10.1093/aob/mcab111.
  • Akerblom, M., P. Raumonen, E. Casella, M. I. Disney, F. M. Danson, R. Gaulton, L. A. Schofield, and M. Kaasalainen. 2018. “Non-Intersecting Leaf Insertion Algorithm for Tree Structure Models.” Interface Focus 8 (2): 14. https://doi.org/10.1098/rsfs.2017.0045.
  • Beland, M., and H. Kobayashi. 2021. “Mapping Forest Leaf Area Density from Multiview Terrestrial Lidar.” Methods in Ecology and Evolution 12 (4): 619–633. https://doi.org/10.1111/2041-210x.13550.
  • Beland, M., J. L. Widlowski, and R. A. Fournier. 2014. “A Model for Deriving Voxel-Level Tree Leaf Area Density Estimates from Ground-Based LiDar.” Environmental Modelling & Software 51:184–189. https://doi.org/10.1016/j.envsoft.2013.09.034.
  • Bojinski, S., M. Verstraete, T. C. Peterson, C. Richter, A. Simmons, and M. Zemp. 2014. “The Concept of Essential Climate Variables in Support of Climate Research, Applications, and Policy.” Bulletin of the American Meteorological Society 95 (9): 1431–1443. https://doi.org/10.1175/bams-d-13-00047.1.
  • Brown, A. J. 2014. “Equivalence Relations and Symmetries for Laboratory, LIDAR, and Planetary Mueller Matrix Scattering Geometries.” Journal of the Optical Society of America A-Optics Image Science and Vision 31 (12): 2789–2794. https://doi.org/10.1364/josaa.31.002789.
  • Brown, A. J., T. I. Michaels, S. Byrne, W. B. Sun, T. N. Titus, A. Colaprete, M. J. Wolff, G. Videen, and C. J. Grund. 2015. “The Case for a Modern Multiwavelength, Polarization-Sensitive LIDAR in Orbit Around Mars.” Journal of Quantitative Spectroscopy & Radiative Transfer 153:131–143. https://doi.org/10.1016/j.jqsrt.2014.10.021.
  • Calders, K., N. Origo, A. Burt, M. Disney, J. Nightingale, P. Raumonen, M. Akerblom, Y. Malhi, and P. Lewis. 2018. “Realistic Forest Stand Reconstruction from Terrestrial LiDar for Radiative Transfer Modelling.” Remote Sensing 10 (6): 15. https://doi.org/10.3390/rs10060933.
  • Cao, B., Q. Jianbo, E. Chen, Q. Xiao, Q. Liu, and L. Zengyuan. 2021. “Fine Scale Optical Remote Sensing Experiment of Mixed Stand Over Complex Terrain (FOREST) in the Genhe Reserve Area: Objective, Observation and a Case Study.” International Journal of Digital Earth 14 (10): 1411–1432. https://doi.org/10.1080/17538947.2021.1968047.
  • Carlson, T. N., and D. A. Ripley. 1997. “On the Relation Between NDVI, Fractional Vegetation Cover, and Leaf Area Index.” Remote Sensing of Environment 62 (3): 241–252. https://doi.org/10.1016/s0034-4257(97)00104-1.
  • Chandrasekhar, S. 1960. Radiative Transfer. New York: Dover.
  • Chen, Q., D. Baldocchi, P. Gong, and M. Kelly. 2006. “Isolating Individual Trees in a Savanna Woodland Using Small Footprint Lidar Data.” Photogrammetric Engineering & Remote Sensing 72 (8): 923–932. https://doi.org/10.14358/pers.72.8.923.
  • Chen, X. Y., X. R. Wang, J. Q. Li, and D. W. Kang. 2020. “Species Diversity of Primary and Secondary Forests in Wanglang Nature Reserve.” Global Ecology and Conservation 22. https://doi.org/10.1016/j.gecco.2020.e01022.
  • Cifuentes, R., D. Van der Zande, C. Salas-Eljatib, J. Farifteh, and P. Coppin. 2018. “A Simulation Study Using Terrestrial LiDar Point Cloud Data to Quantify Spectral Variability of a Broad-Leaved Forest Canopy.” Sensors 18 (10): 11. https://doi.org/10.3390/s18103357.
  • Dai, W. X., B. S. Yang, X. L. Liang, Z. Dong, R. G. Huan, Y. S. Wang, and W. Y. Li. 2019. “Automated Fusion of Forest Airborne and Terrestrial Point Clouds Through Canopy Density Analysis.” Isprs Journal of Photogrammetry & Remote Sensing 156:94–107. https://doi.org/10.1016/j.isprsjprs.2019.08.008.
  • Fang, H. L., W. J. Li, S. S. Wei, and C. Y. Jiang. 2014. “Seasonal Variation of Leaf Area Index (LAI) Over Paddy Rice Fields in NE China: Intercomparison of Destructive Sampling, LAI-2200, Digital Hemispherical Photography (DHP), and AccuPar Methods.” Agricultural and Forest Meteorology 198:126–141. https://doi.org/10.1016/j.agrformet.2014.08.005.
  • Fang, H. L., Y. H. Zhang, S. S. Wei, W. J. Li, Y. C. Ye, T. Sun, and W. W. Liu. 2019. “Validation of Global Moderate Resolution Leaf Area Index (LAI) Products Over Croplands in Northeastern China.” Remote Sensing of Environment 233. https://doi.org/10.1016/j.rse.2019.111377.
  • Fan, G. P., L. L. Nan, F. X. Chen, Y. Q. Dong, Z. M. Wang, H. Li, and D. Y. Chen. 2020. “A New Quantitative Approach to Tree Attributes Estimation Based on LiDar Point Clouds.” Remote Sensing 12 (11). https://doi.org/10.3390/rs12111779.
  • Gastellu-Etchegorry, J. P., E. Martin, and F. Gascon. 2004. “DART: A 3D Model for Simulating Satellite Images and Studying Surface Radiation Budget.” International Journal of Remote Sensing 25 (1): 73–96. https://doi.org/10.1080/0143116031000115166.
  • Gastellu-Etchegorry, J. P., T. G. Yin, N. Lauret, T. Cajgfinger, T. Gregoire, E. Grau, J. B. Feret, et al. 2015. “Discrete Anisotropic Radiative Transfer (DART 5) for Modeling Airborne and Satellite Spectroradiometer and LIDAR Acquisitions of Natural and Urban Landscapes.” Remote Sensing 7 (2): 1667–1701. https://doi.org/10.3390/rs70201667.
  • Gitelson, A. A. 2004. “Wide Dynamic Range Vegetation Index for Remote Quantification of Biophysical Characteristics of Vegetation.” Journal of Plant Physiology 161 (2): 165–173. https://doi.org/10.1078/0176-1617-01176.
  • Grau, E., S. Durrieu, R. Fournier, J. P. Gastellu-Etchegorry, and T. G. Yin. 2017. “Estimation of 3D Vegetation Density with Terrestrial Laser Scanning Data Using Voxels. A Sensitivity Analysis of Influencing Parameters.” Remote Sensing of Environment 191:373–388. https://doi.org/10.1016/j.rse.2017.01.032.
  • Guo, Q. H., W. K. Li, H. Yu, and O. Alvarez. 2010. “Effects of Topographic Variability and Lidar Sampling Density on Several DEM Interpolation Methods.” Photogrammetric Engineering & Remote Sensing 76 (6): 701–712. https://doi.org/10.14358/pers.76.6.701.
  • Guo, Q., Y. Su, T. Hu, X. Zhao, F. Wu, Y. Li, J. Liu, et al. 2017. “An Integrated UAV-Borne Lidar System for 3D Habitat Mapping in Three Forest Ecosystems Across China.” International Journal of Remote Sensing 38 (8–10): 2954–2972. https://doi.org/10.1080/01431161.2017.1285083.
  • Hackenberg, J., C. Morhart, J. Sheppard, H. Spiecker, and M. Disney. 2014. “Highly Accurate Tree Models Derived from Terrestrial Laser Scan Data: A Method Description.” Forests 5 (5): 1069–1105. https://doi.org/10.3390/f5051069.
  • Hosoi, F., and K. Omasa. 2006. “Voxel-Based 3-D Modeling of Individual Trees for Estimating Leaf Area Density Using High-Resolution Portable Scanning Lidar.” IEEE Transactions on Geoscience & Remote Sensing 44 (12): 3610–3618. https://doi.org/10.1109/tgrs.2006.881743.
  • Huete, A., K. Didan, T. Miura, E. P. Rodriguez, X. Gao, and L. G. Ferreira. 2002. “Overview of the Radiometric and Biophysical Performance of the MODIS Vegetation Indices.” Remote Sensing of Environment 83 (1–2): 195–213. https://doi.org/10.1016/s0034-4257(02)00096-2.
  • Hu, G. Y., and A. N. Li. 2022. “BOST: A Canopy Reflectance Model Suitable for Both Continuous and Discontinuous Canopies Over Sloping Terrains.” IEEE Transactions on Geoscience & Remote Sensing 60:19. https://doi.org/10.1109/tgrs.2022.3226460.
  • Hu, T. Y., X. L. Sun, Y. J. Su, H. C. Guan, Q. H. Sun, M. Kelly, and Q. H. Guo. 2021. “Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications.” Remote Sensing 13 (1). https://doi.org/10.3390/rs13010077.
  • Janoutova, R., L. Homolova, Z. Malenovsky, J. Hanus, N. Lauret, and J. P. Gastellu-Etchegorry. 2019. “Influence of 3D Spruce Tree Representation on Accuracy of Airborne and Satellite Forest Reflectance Simulated in DART.” Forests 10 (3): 35. https://doi.org/10.3390/f10030292.
  • Jianbo, Q., D. Xie, D. Guo, and G. Yan. 2017. “A Large-Scale Emulation System for Realistic Three-Dimensional (3-D) Forest Simulation.” IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing 10 (11): 4834–4843. https://doi.org/10.1109/JSTARS.2017.2714423.
  • Kang, D. W., X. R. Wang, and J. Q. Li. 2017. “Resting Site Use of Giant Pandas in Wanglang Nature Reserve.” Scientific Reports 7. https://doi.org/10.1038/s41598-017-14315-x.
  • Kukenbrink, D., F. D. Schneider, B. Schmid, J. P. Gastellu-Etchegorry, M. E. Schaepman, and F. Morsdorf. 2021. “Modelling of Three-Dimensional, Diurnal Light Extinction in Two Contrasting Forests.” Agricultural and Forest Meteorology 296:13. https://doi.org/10.1016/j.agrformet.2020.108230.
  • Li, W. K., Q. H. Guo, S. L. Tao, and Y. J. Su. 2018. “VBRT: A Novel Voxel-Based Radiative Transfer Model for Heterogeneous Three-Dimensional Forest Scenes.” Remote Sensing of Environment 206:318–335. https://doi.org/10.1016/j.rse.2017.12.043.
  • Lintermann, B., and O. Deussen. 1999. “Interactive Modeling of Plants.” IEEE Computer Graphics and Applications 19 (1): 56–65. https://doi.org/10.1109/38.736469.
  • Li, X. W., and A. H. Strahler. 1992. “Geometric-Optical Bidirectional Reflectance Modeling of the Discrete Crown Vegetation - Effect of Crown Shape and Mutual Shadowing.” IEEE Transactions on Geoscience & Remote Sensing 30 (2): 276–292. https://doi.org/10.1109/36.134078.
  • Liu, C., K. Calders, F. Meunier, J. P. Gastellu‐Etchegorry, J. Nightingale, M. Disney, N. Origo, W. Woodgate, and H. Verbeeck. 2022. “Implications of 3D Forest Stand Reconstruction Methods for Radiative Transfer Modeling: A Case Study in the Temperate Deciduous Forest.” Journal of Geophysical Research Atmospheres 127 (14). https://doi.org/10.1029/2021jd036175.
  • Malenovsky, Z., E. Martin, L. Homolova, J. P. Gastellu-Etchegorry, R. Zurita-Milla, M. E. Schaepman, R. Pokorny, J. Clevers, and P. Cudlin. 2008. “Influence of Woody Elements of a Norway Spruce Canopy on Nadir Reflectance Simulated by the DART Model at Very High Spatial Resolution.” Remote Sensing of Environment 112 (1): 1–18. https://doi.org/10.1016/j.rse.2006.02.028.
  • Modzelewska, A., F. E. Fassnacht, and K. Sterenczak. 2020. “Tree Species Identification within an Extensive Forest Area with Diverse Management Regimes Using Airborne Hyperspectral Data.” International Journal of Applied Earth Observation and Geoinformation 84:13. https://doi.org/10.1016/j.jag.2019.101960.
  • Nilson, T. 1971. “A Theoretical Analysis of the Frequency of Gaps in Plant Stands.” Agricultural Meteorology 8 (1): 25. https://doi.org/10.1016/0002-1571(71)90092-6.
  • Qi, J. B., D. H. Xie, J. Y. Jiang, and H. G. Huang. 2022. “3D Radiative Transfer Modeling of Structurally Complex Forest Canopies Through a Lightweight Boundary-Based Description of Leaf Clusters.” Remote Sensing of Environment 283:18. https://doi.org/10.1016/j.rse.2022.113301.
  • Qi, J. B., D. H. Xie, T. G. Yin, G. J. Yan, J. P. Gastellu-Etchegorry, L. Y. Li, W. M. Zhang, X. H. Mu, and L. K. Norford. 2019. “LESS: LargE-Scale Remote Sensing Data and Image Simulation Framework Over Heterogeneous 3D Scenes.” Remote Sensing of Environment 221:695–706. https://doi.org/10.1016/j.rse.2018.11.036.
  • Raumonen, P., M. Kaasalainen, M. Akerblom, S. Kaasalainen, H. Kaartinen, M. Vastaranta, M. Holopainen, M. Disney, and P. Lewis. 2013. “Fast Automatic Precision Tree Models from Terrestrial Laser Scanner Data.” Remote Sensing 5 (2): 491–520. https://doi.org/10.3390/rs5020491.
  • Schneider, F. D., R. Letterer, F. Morsdorf, J. P. Gastellu-Etchegorry, N. Lauret, N. Pfeifer, and M. E. Schaepman. 2014. “Simulating Imaging Spectrometer Data: 3D Forest Modeling Based on LiDar and in situ Data.” Remote Sensing of Environment 152:235–250. https://doi.org/10.1016/j.rse.2014.06.015.
  • Shao, J., W. Yao, P. Wan, L. Luo, P. Wang, L. Yang, J. Lyu, and W. Zhang. 2022. “Efficient Co-Registration of UAV and Ground LiDar Forest Point Clouds Based on Canopy Shapes.” International Journal of Applied Earth Observation and Geoinformation 114:103067. https://doi.org/10.1016/j.jag.2022.103067.
  • Sinoquet, H., G. Sonohat, J. Phattaralerphong, and C. Godin. 2005. “Foliage Randomness and Light Interception in 3-D Digitized Trees: An Analysis from Multiscale Discretization of the Canopy.” Plant, Cell & Environment 28 (9): 1158–1170. https://doi.org/10.1111/j.1365-3040.2005.01353.x.
  • Verhoef, W. 1984. “Light-Scattering by Leaf Layers with Application to Canopy Reflectance Modeling - the SAIL Model.” Remote Sensing of Environment 16 (2): 125–141. https://doi.org/10.1016/0034-4257(84)90057-9.
  • Vincent, G., C. Antin, M. Laurans, J. Heurtebize, S. Durrieu, C. Lavalley, and J. Dauzat. 2017. “Mapping Plant Area Index of Tropical Evergreen Forest by Airborne Laser Scanning. A Cross-Validation Study Using LAI2200 Optical Sensor.” Remote Sensing of Environment 198:254–266. https://doi.org/10.1016/j.rse.2017.05.034.
  • Wang, B., J. Y. Liu, J. N. Li, and M. Z. Li. 2023. “UAV LiDAR and Hyperspectral Data Synergy for Tree Species Classification in the Maoershan Forest Farm Region.” Remote Sensing 15 (4). https://doi.org/10.3390/rs15041000.
  • Weber, J. P., and J. Penn. 1995. “Creation and Rendering of Realistic Trees.” In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques. New York: Association for Computing Machinery. https://doi.org/10.1145/218380.218427.
  • Weiss, M., F. Baret, G. J. Smith, I. Jonckheere, and P. Coppin. 2004. “Review of Methods for in situ Leaf Area Index (LAI) Determination Part II. Estimation of LAI, Errors and Sampling.” Agricultural and Forest Meteorology 121 (1–2): 37–53. https://doi.org/10.1016/j.agrformet.2003.08.001.
  • Widlowski, J. L., J. F. Cote, and M. Beland. 2014. “Abstract Tree Crowns in 3D Radiative Transfer Models: Impact on Simulated Open-Canopy Reflectances.” Remote Sensing of Environment 142:155–175. https://doi.org/10.1016/j.rse.2013.11.016.
  • Widlowski, J. L., C. Mio, M. Disney, J. Adams, I. Andredakis, C. Atzberger, J. Brennan, et al. 2015. “The Fourth Phase of the Radiative Transfer Model Intercomparison (RAMI) Exercise: Actual Canopy Scenarios and Conformity Testing.” Remote Sensing of Environment 169:418–437. https://doi.org/10.1016/j.rse.2015.08.016.
  • Woodgate, W., M. Disney, J. D. Armston, S. D. Jones, L. Suarez, M. J. Hill, P. Wilkes, M. Soto-Berelov, A. Haywood, and A. Mellor. 2015. “An Improved Theoretical Model of Canopy Gap Probability for Leaf Area Index Estimation in Woody Ecosystems.” Forest Ecology and Management 358:303–320. https://doi.org/10.1016/j.foreco.2015.09.030.
  • Xie, X. Y., J. Tian, C. L. Wu, A. N. Li, H. A. Jin, J. H. Bian, Z. J. Zhang, X. Nan, and Y. Jin. 2022. “Long-Term Topographic Effect on Remotely Sensed Vegetation Index-Based Gross Primary Productivity (GPP) Estimation at the Watershed Scale.” International Journal of Applied Earth Observation and Geoinformation 108. https://doi.org/10.1016/j.jag.2022.102755.
  • Xie, D. H., X. Y. Wang, J. B. Qi, Y. M. Chen, X. H. Mu, W. M. Zhang, and G. J. Yan. 2018. “Reconstruction of Single Tree with Leaves Based on Terrestrial LiDAR Point Cloud Data.” Remote Sensing 10 (5). https://doi.org/10.3390/rs10050686.
  • Xi, Z. X., C. Hopkinson, S. B. Rood, and D. R. Peddle. 2020. “See the Forest and the Trees: Effective Machine and Deep Learning Algorithms for Wood Filtering and Tree Species Classification from Terrestrial Laser Scanning.” ISPRS Journal of Photogrammetry and Remote Sensing 168:1–16. https://doi.org/10.1016/j.isprsjprs.2020.08.001.
  • Yan, G. J., Q. Chu, Y. Y. Tong, X. H. Mu, J. B. Qi, Y. J. Zhou, Y. N. Liu, et al. 2021. “An Operational Method for Validating the Downward Shortwave Radiation Over Rugged Terrains.” IEEE Transactions on Geoscience and Remote Sensing 59 (1): 714–731. https://doi.org/10.1109/tgrs.2020.2994384.
  • Yan, K., Y. M. Zhang, Y. Y. Tong, Y. L. Zeng, J. B. Pu, S. Gao, L. Y. Li, et al. 2021. “Modeling the Radiation Regime of a Discontinuous Canopy Based on the Stochastic Radiative Transport Theory: Modification, Evaluation and Validation.” Remote Sensing of Environment 267:112728. https://doi.org/10.1016/j.rse.2021.112728.
  • Zhang, W. M., J. B. Qi, P. Wan, H. T. Wang, D. H. Xie, X. Y. Wang, and G. J. Yan. 2016. “An Easy-To-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation.” Remote Sensing 8 (6). https://doi.org/10.3390/rs8060501.