Research Article

Denoising and classification of urban ICESat-2 photon data fused with Sentinel-2 spectral images

Pages 4346-4367 | Received 22 May 2023, Accepted 09 Oct 2023, Published online: 18 Oct 2023

ABSTRACT

The ICESat-2 (Ice, Cloud, and Land Elevation Satellite-2) collects Earth surface elevation data with high precision on a global scale. However, the collected photon data contain a large amount of background noise caused by sunlight, cloud reflection, and other factors. Effective denoising and accurate classification of photon point clouds across different scenes are therefore crucial for subsequent applications. This study proposes a random forest-based method for denoising and classifying ICESat-2 photon data in urban areas by fusing spectral features from Sentinel-2 images with spatial distribution features from the photon data. The experimental results show that the method can effectively identify various types of photons. Compared with the reference data, the overall accuracy of photon denoising and classification is 95.97% on average, and the average kappa coefficient is 94.18%. Further analysis demonstrates that adding Sentinel-2 spectral information effectively improves the classification accuracy of photon point clouds in urban areas, and that fusing photon LiDAR data with optical images is a promising solution for improving classification accuracy.

1. Introduction

The rapid development of cities since the beginning of the twenty-first century has changed urban structure tremendously (Li et al. 2021). Investigating the dynamic changes of cities is critical for urban planning and management, as well as for sustainable development (Chen et al. 2021). Remote sensing is an efficient and cost-effective technology for monitoring urban dynamic changes (Griffiths et al. 2010). Traditional urban land cover classification studies primarily use multispectral images as the data source (Gong and Howarth 1990, 1992; Yin et al. 2011). However, the complex extraction process faces challenges such as the 'same substance, different spectra' and 'same spectra, different substances' phenomena, which hinder the accurate classification of urban scenes (Li and Gong 2016; Rosso, Ustin, and Hastings 2005).

Space-borne LiDAR is an active remote sensing technique with high vertical measurement accuracy and penetration capability that can repeatedly collect accurate 3D data of the Earth. It has been widely used to detect the surface structure of the Earth and other planets, and can also serve as a new data source for studying urban dynamic changes (Yang et al. 2019; Zhou et al. 2015). In 2003, NASA launched the ICESat/GLAS satellite (Ice, Cloud, and Land Elevation Satellite, Geoscience Laser Altimeter System), which has been successfully applied to building extraction, height inversion, and change monitoring in urban areas (Gong et al. 2011; Voegtle and Steinle 2003). After this first generation of space-borne LiDAR satellites, NASA launched the second generation, ICESat-2/ATLAS (Ice, Cloud, and Land Elevation Satellite-2, Advanced Topographic Laser Altimeter System), in 2018. Compared to GLAS, ATLAS obtains photon point cloud data with higher density and a smaller ground footprint diameter (Abdalati et al. 2010; Mulverhill et al. 2022; Neuenschwander and Magruder 2016). Although ICESat-2 is not designed for urban applications, its excellent global altimetric capability allows it to obtain important three-dimensional position information of typical ground objects in urban scenes, which can be used to accurately monitor surface change (Zhao, Wu, Li, et al. 2023; Zhao, Wu, Shu, et al. 2022b). For example, Lao et al. (2021) used ICESat-2 data to estimate building heights in urban areas, and quantitative evaluation demonstrated strong consistency between the estimated heights and heights from terrestrial laser scanning (TLS) data, with root mean square errors (RMSE) between 0.3 and 0.45 m.

When the ICESat-2/ATLAS satellite collects data, ATLAS records all photon events and generates point clouds. Because the laser pulses emitted and detected by ATLAS are weak signals, it is difficult to distinguish whether photon events originate from ground objects, solar radiation, atmospheric scattering, or the instrument itself, resulting in a significant amount of noise in the collected photon point clouds (Neumann et al. 2019). Therefore, the effective removal of noise photons and the accurate classification of objects such as vegetation, buildings, and water from urban signal photons are crucial for subsequent thematic applications such as building height inversion (Lian et al. 2022). Currently, most existing photon point cloud processing methods treat denoising and classification as two separate processes, and the majority of studies focus on forest scenes (Neuenschwander and Magruder 2019; Wang et al. 2023; Zhu et al. 2020).

For denoising of photon data, most approaches distinguish signal photons from the ICESat-2 photon data by analyzing the difference in spatial distribution between noise and signal photons, and three types of methods are commonly used: (1) Denoising algorithms based on density spatial clustering. Density-based methods such as OPTICS (Ordering Points To Identify the Clustering Structure, Ankerst et al. 1999) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise, Ester et al. 1996) are the most commonly used algorithms for distinguishing signal from noise. Recently, improved OPTICS (Zhu et al. 2021) and DBSCAN (Zhang, Zhou, and Luo 2022) algorithms have been used for photon data denoising. They are effective for filtering noisy photons in small areas, but the denoising results are highly sensitive to the input parameters, making them unsuitable for complex scenes over large areas (Wang, Pan, and Glennie 2016; Zhang and Kerekes 2015; Zheng et al. 2023). (2) Denoising algorithms based on raster images. In such algorithms, the photon point clouds are first rasterized into a two-dimensional image whose pixels contain density information about the photon points, and image processing methods are then applied to filter out noisy photons. In this process, some useful information may be lost during rasterization, which affects the subsequent use of the signal photons (Awadallah et al. 2013; Chen and Pang 2015; Magruder et al. 2012). (3) Denoising algorithms based on local statistical analysis. These algorithms first compute feature parameters for each photon using local spatial statistics, and then identify noise and signal photons by setting corresponding thresholds on these parameters. Several studies have indicated that these methods are better suited to filtering photon point cloud data in forested areas (Nie et al. 2018; Zhu et al. 2018). However, these methods are heavily influenced by the threshold settings, and choosing an effective threshold remains a challenge in the denoising process (Tang et al. 2016; Yang et al. 2021).

For the classification of photon point clouds, most existing studies focus on separating the vegetation canopy surface and the ground surface in forest areas by analyzing the spatial distribution characteristics of the photon point clouds (Chen et al. 2019; Gwenzia et al. 2016; Moussavi et al. 2014; Popescu et al. 2018). In addition, machine learning algorithms are often used to classify ICESat-2 photon point clouds in various scenes with promising results, such as shoreline classification (Liu et al. 2022), classification of burned and unburned forest areas (Liu, Popescu, and Malambo 2020), and land cover classification (Pan et al. 2022). However, the forest, sea, and bare land scenes mentioned above are relatively homogeneous, and similar objects are strongly aggregated, so determining the photon categories of different surface objects is relatively easy. Urban areas contain more object categories, such as buildings, vegetation, water, and ground, and the same objects often exhibit different characteristics, which makes it difficult to distinguish multiple object categories using a single data feature (Awrangjeb, Zhang, and Fraser 2012). As a result, it is critical to synthesize features from multiple data sources for the denoising and classification of photon point cloud data in urban areas.

To achieve accurate classification of ICESat-2 photon point clouds in urban areas, we propose a random forest-based method for denoising and classifying photon point clouds that fuses spectral features from Sentinel-2 images with spatial distribution features from ICESat-2 photon data. First, a coarse denoising algorithm combining Principal Component Analysis (PCA) and grid division is proposed to filter out obvious noise photons based on differences in spatial density. Second, 13 photon features and 11 Sentinel-2 spectral features are extracted by analyzing the structural characteristics of the urban scene; after feature optimization, the optimal combination of features is chosen as the input to the random forest classifier. Finally, fine denoising and classification are integrated into one process to identify different objects from the photon point cloud, and the classification results are compared with reference data for qualitative and quantitative evaluation of the proposed method.

2. Materials

2.1. Study area

The study area lies in the urban area of Jiaozuo City, Henan Province (35.16°N–35.26°N, 113.25°E–113.28°E). Jiaozuo City, with a total area of about 4071 km², is located in the northwestern part of Henan. The city is bordered by the Yellow River to the south and the Taihang Mountains to the northwest, and is an important part of the Central Plains urban agglomeration. As shown in Figure 1, the research area is covered by multiple ICESat-2/ATLAS ground tracks, and the ground tracks contain buildings, bare land, vegetation, and water areas, indicating that typical elements of urban scenes are well represented.

Figure 1. ICESat-2 photon data distribution in the urban area of Jiaozuo, Henan, China. The four red squares in the figure represent the four typical ground objects contained in the ICESat-2 tracks.


2.2. Data

The data used in this study are photon point clouds, multispectral images, and airborne LiDAR point clouds. A detailed description of the data is given in Table 1.

  1. ICESat-2 photon point cloud data. ICESat-2/ATLAS has been collecting photon data since October 2018, and the National Snow and Ice Data Center (NSIDC) published 21 standard ICESat-2/ATLAS data products in May 2019, organized into levels L0, L1, L2, and L3 and labeled ATL00–ATL21 (with no ATL05) in that order. The ATL03 global geolocated photon data product is an L2 product that records positioning information for each photon event, including acquisition time, latitude, longitude, along-track distance, and ellipsoidal height, as well as a confidence value indicating whether each photon is a signal or noise photon (Neumann et al. 2022). In this study, ATL03 photon data within the study area from March 2019 to September 2020 were collected, and ATL03 photon point cloud data of eight beams under different acquisition conditions (day/night, strong beam/weak beam) were selected through data quality filtering (a minimal read-out sketch for these per-photon fields follows this list).

  2. Sentinel-2 multispectral imagery. Sentinel-2 is a high-resolution multispectral imaging mission launched by the European Space Agency (ESA), consisting of two satellites, 2A and 2B, whose complementary orbits give a combined revisit cycle of five days, with ground resolutions of 10, 20, and 60 m. The mission covers 13 spectral bands in the visible, near-infrared, and short-wave infrared ranges, which are used for land monitoring to accurately identify urban boundaries and provide data support for urban area classification (https://sentinel.esa.int/documents/247904/685211/sentinel-2_user_handbook and https://directory.eoportal.org/web/eoportal/satellite-missions/cmissions/copernicus-sentinel-2). Sentinel-2 L1C data from June 2019 to September 2020 were chosen to align the acquisition period as closely as possible with the ATL03 photon data.

  3. Airborne LiDAR point cloud. The airborne LiDAR point cloud was collected by a FEIMA D2000 unmanned aerial vehicle (UAV) flight platform carrying the D-LiDAR2000 lightweight airborne LiDAR system. The D-LiDAR2000 system supports triple echoes, with a laser point frequency of 240 kpts/s and a ranging accuracy of 2 cm. The data used in this study were collected in the ICESat-2/ATLAS satellite transit area in April 2021, with a UAV flight height of 120 m and an average point cloud density of 135 points/m² in the survey area. The ICESat-2 photon and airborne LiDAR point clouds were matched and compared in the UTM planar coordinate system, while high-resolution remote sensing images from Google Earth were used to assist in manually labeling the ATL03 photon point cloud. The labeled photon point cloud is used as reference data to verify and evaluate the accuracy of the proposed method.
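As a minimal illustration of the per-photon fields described in item 1 (not the authors' code), the sketch below reads one beam of an ATL03 HDF5 granule with h5py. The dataset names follow the public ATL03 product dictionary; the beam label and file name are placeholders.

```python
import h5py

def read_atl03_beam(granule_path, beam="gt1l"):
    """Read the basic per-photon fields of one ATL03 beam into arrays."""
    with h5py.File(granule_path, "r") as f:
        heights = f[beam]["heights"]
        return {
            "lat": heights["lat_ph"][:],      # latitude (deg)
            "lon": heights["lon_ph"][:],      # longitude (deg)
            "h": heights["h_ph"][:],          # ellipsoidal height (m)
            # along-track distance relative to the start of the 20 m segment;
            # add geolocation/segment_dist_x for the cumulative distance
            "at_dist": heights["dist_ph_along"][:],
            # signal_conf_ph has one column per surface type; column 0 = land
            "signal_conf": heights["signal_conf_ph"][:, 0],
        }

# Example with a placeholder granule name:
# photons = read_atl03_beam("ATL03_example_granule.h5", beam="gt1l")
```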

Table 1. Description of the data used.

3. Methods

In this study, we propose a method of photon point cloud classification by fusing features extracted from ICESat-2 photons and Sentinel-2 imagery, which mainly includes three steps: (1) data pre-processing, (2) feature extraction from the photon point cloud and Sentinel-2 imagery, and (3) photon point cloud classification and accuracy evaluation. The flow chart of the proposed method is illustrated in Figure 2.

Figure 2. Flow chart of the proposed method.


3.1. Data preprocessing

The data preprocessing mainly contains coarse denoising of photon data and multispectral image preprocessing.

3.1.1. Coarse denoising

Coarse denoising is a process to filter out a large number of noise photons and find the rough location of the signal photons. In ATL03 data, the densities of signal and noise photons differ in the along-track and elevation directions: signal photons are densely distributed, while noise photons are sparse. Principal Component Analysis (PCA) is a feature extraction technique based on linear mapping that is commonly used for unsupervised dimensionality reduction (Wang et al. 2022). According to Cunningham and Ghahramani (2015), the two-dimensional photon point cloud can be projected onto a straight line in two-dimensional space by using PCA, which preserves the majority of the information in the original photon point cloud. Therefore, the PCA method is first adopted in this study for coarse denoising of the photon point clouds. A schematic diagram of the process is shown in Figure 3.

Figure 3. Schematic diagram of photon point cloud coarse denoising process.


The following are the detailed processing steps:

  1. Grid division. The raw photon point cloud is divided into grids: in the along-track direction, the photon point cloud is divided at 100 m intervals; in the elevation direction, each column is divided into 10 intervals (Neuenschwander et al. 2022; Popescu et al. 2018; Zhu et al. 2018). The number of intervals in the elevation direction is an empirical value determined through extensive testing and comparative analysis. This procedure is shown in Figure 3(a).

  2. Dimensionality reduction by PCA. For the grids in each column, the grid with the maximum number of photons is selected as the interest grid, and the points in the interest grid are processed with the PCA algorithm. First, the zero-mean method (Zhao et al. 2022a) is used to centralize the two-dimensional coordinates (along-track distance and elevation) in each interest grid, and a covariance matrix is computed from the centralized data. Second, the eigenvalues and eigenvectors of the covariance matrix are calculated to construct a transformation matrix, which is determined by the eigenvector corresponding to the largest eigenvalue. Finally, the transformation matrix is applied to project the two-dimensional photon points onto a straight line in two-dimensional space. The interest grids and line points determined by PCA are shown in Figure 3(b).

  3. Determination of elevation range. Within each along-track interest grid, the maximum and minimum elevations of the PCA line points are used to construct a rough elevation range for the signal photons. To avoid filtering out signal photons in this step, we extend the rough elevation range by 40 m upwards and 20 m downwards (Zhu et al. 2018), which gives the final elevation range of the coarse signal photons. This procedure is depicted in Figure 3(c).

  4. Rejection of noise photons. The obvious noise photons are rejected by filtering out the photons outside the final elevation range, as shown in Figure 3(d). A minimal code sketch of the whole coarse-denoising procedure is given below.
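The four steps above can be condensed into the following sketch. It is a minimal re-implementation under our own assumptions (photons given as NumPy arrays of along-track distance and height), not the authors' code; the 100 m column width, 10 elevation intervals, and the +40 m/−20 m buffers are taken from the text.

```python
import numpy as np

def coarse_denoise(at_dist, height, col_width=100.0, n_elev_bins=10,
                   up_buffer=40.0, down_buffer=20.0):
    """Coarse denoising by grid division and PCA; returns a boolean signal mask."""
    keep = np.zeros(at_dist.size, dtype=bool)
    col_edges = np.arange(at_dist.min(), at_dist.max() + col_width, col_width)

    for lo, hi in zip(col_edges[:-1], col_edges[1:]):
        in_col = (at_dist >= lo) & (at_dist < hi)
        h_col = height[in_col]
        if h_col.size < 2 or h_col.min() == h_col.max():
            continue
        # step 1: split the column into elevation bins; the densest bin is the interest grid
        elev_edges = np.linspace(h_col.min(), h_col.max(), n_elev_bins + 1)
        counts, _ = np.histogram(h_col, bins=elev_edges)
        k = int(np.argmax(counts))
        in_grid = in_col & (height >= elev_edges[k]) & (height <= elev_edges[k + 1])
        if in_grid.sum() < 2:
            continue

        # step 2: PCA - project the interest-grid photons onto their first principal axis
        pts = np.column_stack([at_dist[in_grid], height[in_grid]])
        centred = pts - pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centred, rowvar=False))
        axis = eigvecs[:, np.argmax(eigvals)]
        line_pts = pts.mean(axis=0) + np.outer(centred @ axis, axis)

        # step 3: rough elevation range from the PCA line points, with buffers
        h_lo = line_pts[:, 1].min() - down_buffer
        h_hi = line_pts[:, 1].max() + up_buffer

        # step 4: keep only the photons of this column inside the final range
        keep |= in_col & (height >= h_lo) & (height <= h_hi)

    return keep
```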

3.1.2. Image preprocessing

The Sentinel-2 data used in this study are ESA-released L1C multispectral data, which are top-of-atmosphere reflectance products that have been orthorectified and geometrically refined but not atmospherically corrected (Drusch et al. 2012). Therefore, prior to spectral feature extraction for the random forest classifier, the Sen2cor plug-in released by ESA is used to generate radiometrically calibrated and atmospherically corrected bottom-of-atmosphere reflectance products. Meanwhile, all bands are resampled to 20 m, the same as the along-track segment length of the ATL03 data (Neumann et al. 2022). Finally, the Sentinel-2 images are temporally and spatially filtered, then de-clouded, mosaicked, and cropped to produce the synthetic images that are the prerequisite for spectral feature extraction.
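As a hedged sketch of this preprocessing step (the product path and output names are placeholders, and the exact Sen2cor invocation used by the authors is not stated in the paper), Sen2cor can be driven from the command line and individual bands resampled to 20 m with rasterio:

```python
import subprocess
import rasterio
from rasterio.enums import Resampling

# Run Sen2cor on an L1C product to obtain a bottom-of-atmosphere L2A product.
# L2A_Process is the Sen2cor entry point; the product path is hypothetical.
subprocess.run(["L2A_Process", "S2A_MSIL1C_example_product.SAFE"], check=True)

def resample_band_to_20m(band_path, out_path):
    """Resample a single band raster to a 20 m pixel size (bilinear)."""
    with rasterio.open(band_path) as src:
        scale = src.res[0] / 20.0               # e.g. a 10 m band -> 0.5
        height = int(src.height * scale)
        width = int(src.width * scale)
        data = src.read(out_shape=(src.count, height, width),
                        resampling=Resampling.bilinear)
        transform = src.transform * src.transform.scale(
            src.width / width, src.height / height)
        profile = src.profile
        profile.update(height=height, width=width,
                       transform=transform, driver="GTiff")
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(data)
```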

3.2. Feature extraction

After coarse denoising, a large number of noise photons far away from the signal photons are removed and the coarse signal photons are obtained. The noise photons that remain close to the signal photons are then identified by a random forest classification model, which is constructed using the optimized features extracted from the ATL03 photon data and Sentinel-2 imagery. Meanwhile, the remaining signal photons are classified into ground, water, buildings, and vegetation by the same classification model.

3.2.1. Extraction of ICESat-2 photon features

In the along-track direction, ATL03 photon data in each 20 m segment present different spatial distribution characteristics for different ground objects (Li et al. 2020). Based on this, 13 photon features are extracted from the coarse signal photons, as listed in Table 2.

Table 2. Features extracted from ATL03 photon data and description.

In Table 2, the 13 selected features are divided into two categories: (1) Descriptive indicators obtained directly from the corresponding fields of the ATL03 photon data product, which in this study include the ellipsoidal height above the WGS-84 ellipsoid (HAE), the along-track distance of the photon points (AT_dist), signal confidence, and solar elevation. (2) Statistical indicators computed from the ATL03 photon data. First, the photon data are divided into 20 m × 20 m grids in the along-track distance and elevation directions, numbered consecutively from left to right and top to bottom, and the serial number of the grid to which each photon belongs is recorded (Grid_number). Then, the difference between the height of each photon point in each 20 m window and the mean, median, skewness, kurtosis, and different quantiles of the heights of all photon points in that window is calculated. The statistical indicators therefore include the grid serial number (Grid_number), the mean difference (Dist_mean), the difference in median (Dist_median), the differences in quantiles (Dist_p25, Dist_p50, Dist_p75, Dist_p95), the difference in skewness (SKEW), and the difference in kurtosis (KURT). These features all carry characteristic information about the vertical structure of the ground surface and can distinguish between different types of targets with varying spatial densities.
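A minimal sketch of the statistical indicators is given below. It assumes the coarse signal photons are available as NumPy arrays and follows the wording above (each indicator is the photon height minus the corresponding window statistic); the grid serial number and the descriptive indicators read directly from ATL03 are omitted for brevity, and this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def photon_window_features(at_dist, height, window=20.0):
    """Per-photon statistical indicators within 20 m along-track windows."""
    names = ["Dist_mean", "Dist_median", "Dist_p25", "Dist_p50",
             "Dist_p75", "Dist_p95", "SKEW", "KURT"]
    feats = {k: np.full(at_dist.size, np.nan) for k in names}
    win_idx = np.floor((at_dist - at_dist.min()) / window).astype(int)

    for w in np.unique(win_idx):
        sel = win_idx == w
        h = height[sel]
        feats["Dist_mean"][sel] = h - h.mean()
        feats["Dist_median"][sel] = h - np.median(h)
        for q, name in zip((25, 50, 75, 95),
                           ("Dist_p25", "Dist_p50", "Dist_p75", "Dist_p95")):
            feats[name][sel] = h - np.percentile(h, q)
        # per the description above: difference from the window skewness/kurtosis
        feats["SKEW"][sel] = h - skew(h)
        feats["KURT"][sel] = h - kurtosis(h)
    return feats
```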

3.2.2. Extraction of Sentinel-2 image features

After geographic registration of the Sentinel-2 imagery, the spectral values corresponding to each ATL03 photon can be extracted at the photon location and used to calculate spectral indices. In this study, 7 spectral bands of the Sentinel-2 images are selected as image features, along with 4 spectral indices: the normalized difference vegetation index (NDVI), two normalized difference built-up indices (NDBI11 and NDBI12), and the modified normalized difference water index (MNDWI); the specific information is shown in Table 3.

Table 3. Features extracted from Sentinel-2 images and description.

As shown in Table 3, 7 bands of the Sentinel-2 images (B3, B4, B5, B6, B8a, B11, and B12) are first selected for the classification of the ATL03 photons. In addition, the NDVI index is calculated from bands B8a and B4, because it can effectively identify vegetation in urban areas (Yang et al. 2019). The modified normalized difference water index (MNDWI) is derived from bands B3 and B11, which enhances the contrast between water and buildings and facilitates the accurate extraction of water information in urban areas. Building pixels have higher brightness values in bands B11 and B12 (Liu et al. 2019); these two bands are therefore combined with band 8 to construct two NDBI indices, NDBI11 and NDBI12. Both NDBI11 and NDBI12 better reflect the building categories in the city and also help characterize other object categories.
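The four indices can be computed from the band reflectance sampled at each photon location. The formulas below are the standard definitions; whether the paper pairs the SWIR bands with B8 or B8a for NDBI is not fully specified, so the NIR band choice here is an assumption.

```python
import numpy as np

def spectral_indices(b3, b4, b8, b8a, b11, b12, eps=1e-10):
    """Standard NDVI / MNDWI / NDBI formulas on per-photon reflectance arrays."""
    ndvi = (b8a - b4) / (b8a + b4 + eps)      # vegetation (B8a, B4)
    mndwi = (b3 - b11) / (b3 + b11 + eps)     # water vs. built-up contrast (B3, B11)
    ndbi11 = (b11 - b8) / (b11 + b8 + eps)    # built-up, SWIR-1 vs. NIR
    ndbi12 = (b12 - b8) / (b12 + b8 + eps)    # built-up, SWIR-2 vs. NIR
    return ndvi, mndwi, ndbi11, ndbi12
```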

3.3. Classification and accuracy evaluation

The random forest algorithm is an ensemble learning algorithm that uses multiple decision trees to discriminate and classify the data, and it can also provide an importance evaluation of each feature used in the classification. After the photon and spectral feature vectors have been extracted as described in Section 3.2, 70% of the ATL03 coarse signal photons derived from the data preprocessing are randomly chosen as the training samples and the remaining 30% as the test data. The training samples are divided into two groups: daytime beams and nighttime beams. Classification models are obtained by training on the two sets of samples individually with the random forest algorithm, and the obtained models are then used to predict the photon categories of the daytime and nighttime test data, respectively. In this study, the coarse signal photons are classified into five categories: noise, bare ground, buildings, water, and vegetation. Meanwhile, to reduce the complexity and generalization error of the training model, feature importance ranking and feature correlation analysis are conducted during the classification process, and the 24 features extracted from the photon data and Sentinel-2 images are successively reduced by removing, one at a time, the feature with the lowest importance and high correlation with the remaining features, with the reduced feature set fed into the random forest model for retraining. By this means, an optimized combination of features is derived for the classification of the coarse signal photons.
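A minimal training sketch with scikit-learn is shown below. The 70/30 split, the separate daytime/nighttime models, and the five classes follow the description above; the number of trees and other hyperparameters are our own placeholder choices, not values reported in the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

def train_beam_group(X, y, random_state=0):
    """Train one random forest on one beam group (all daytime or all nighttime beams).

    X : (n_photons, n_features) matrix of photon and spectral features
    y : integer labels for {noise, bare ground, buildings, water, vegetation}
    """
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.7, stratify=y, random_state=random_state)
    rf = RandomForestClassifier(n_estimators=500, n_jobs=-1,
                                random_state=random_state)
    rf.fit(X_train, y_train)
    y_pred = rf.predict(X_test)
    print("OA:", accuracy_score(y_test, y_pred),
          "kappa:", cohen_kappa_score(y_test, y_pred))
    return rf

# Daytime and nighttime beams are trained as two separate models:
# rf_day = train_beam_group(X_day, y_day)
# rf_night = train_beam_group(X_night, y_night)
```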

When random forests are used for classification, a reference dataset needs to be produced for the construction of the classification model and the performance evaluation of the proposed method. To this end, the coarse signal photons are manually labeled by using FEIMA D-LiDAR2000 airborne LiDAR data and Google Earth high-resolution remote sensing images as the reference.

4. Results and accuracy evaluation

4.1. Coarse denoising results

The ICESat-2/ATLAS ATL03 photon point cloud data contain a significant amount of widely distributed background noise. Figure 4(a and c) show the point cloud profiles of the raw ATL03 photon data in daytime and nighttime, respectively, together with the grid-based PCA one-dimensional line points. The raw ATL03 photon data contain a significant amount of background noise, especially in the daytime. Figure 4(b and d) show the corresponding coarse denoising results for the daytime and nighttime photon beams obtained with the proposed method. The rough location of the signal photons can be found and a large number of noise photons removed by using PCA and grid division, which reduces the difficulty of the subsequent photon classification.

Figure 4. Coarse denoising results of the raw photon point cloud. (a) and (c) show the raw photon point cloud profiles of daytime and nighttime together with the grid-based PCA one-dimensional line points, and (b) and (d) are the corresponding coarse denoising results.


4.2. Qualitative evaluation of the classification

Next, using the trained random forest model and the optimized feature parameters, the photon point cloud classification results for the urban areas were generated (Figures 5 and 6).

Figure 5. Classification results of coarse signal photon point clouds during daytime and nighttime. Figure (b) is an enlarged view of the blue square region in Figure (a), and Figure (d) is an enlarged view of the blue square region in Figure (c). The two blue squares in Figures (b) and (d) show some regions where the photon class is misclassified.


Figure 6. Classification results of typical urban objects. The magenta lines in the four figures represent the original ICESat-2 ground track.


Figure 5 shows the point cloud profiles of the classification results. Figure 5(a and c) show the classification results of the two photon beams after coarse denoising for daytime and nighttime, respectively. As shown in Figure 5(a and c), signal photons located on various objects can be effectively identified in both daytime and nighttime. Furthermore, local enlargements of the classification results are shown in Figure 5(b and d), which indicate that buildings (yellow), ground (black), water (blue), vegetation (green), and the remaining noise photons (red) can be correctly classified. In Figure 6, the classification results for the various types of photons are overlaid on high-resolution images of the study area. The geographic locations of the various types of signal photons match the corresponding objects on the high-resolution images very well. These experimental results demonstrate that the proposed method can effectively classify the photon point cloud in urban areas and further identify the noise remaining after the coarse denoising process. However, the blue rectangles in Figure 5(b and d) show some photons that are misclassified. In Figure 5(b), some noise photons above the top of the building are misclassified as vegetation photons. In Figure 5(d), some photons between the top of the building and the ground are also misclassified as vegetation photons.

4.3. Quantitative evaluation of the classification

For quantitative evaluation of the proposed method, a total of eight ATL03 photon beams in the test area were classified, and the classification results are shown in Table 4. Two evaluation metrics were used to quantitatively evaluate the proposed method: overall accuracy and kappa coefficient.
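For reference, the two metrics follow the standard confusion-matrix definitions (our formulation, not reproduced from the paper). For a confusion matrix with entries $n_{ij}$ over $k$ classes and $N$ photons in total,

$$\mathrm{OA}=\frac{1}{N}\sum_{i=1}^{k} n_{ii}, \qquad \kappa=\frac{p_o-p_e}{1-p_e}, \qquad p_o=\mathrm{OA}, \qquad p_e=\frac{1}{N^{2}}\sum_{i=1}^{k} n_{i+}\,n_{+i},$$

where $n_{i+}$ and $n_{+i}$ are the row and column totals for class $i$.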

Table 4. Quantitative evaluation of the photon classification results.

The eight beams of photon data in the test area are named by acquisition date and beam number, and are divided into four categories: strong beam in the daytime, weak beam in the daytime, strong beam in the nighttime, and weak beam in the nighttime. As can be seen from the experimental results in Table 4, for the urban photon point cloud, the denoising and classification accuracy of the strong laser beams is higher than that of the weak laser beams, and the accuracy at nighttime is higher than in the daytime. In particular, the strong beam at night achieves the highest accuracy while the weak beam in the daytime achieves the lowest. However, regardless of day or night, strong or weak beams, the overall classification accuracy exceeds 94%. The average overall accuracy of the photon point cloud classification for the eight beams is 95.97%, with an average kappa coefficient of 94.18%.

Several studies on land cover classification have been conducted using ICESat-2 data. Liu, Popescu, and Malambo (2020) classified burned and unburned forest areas using ICESat-2 ATL08 data products based on random forest and logistic regression methods, which achieved 83% and 76% overall classification accuracy, respectively. Li et al. (2020) used the ICESat-2 ATL08 data product for land cover classification; for four types of surface cover, the overall accuracy of the strong and weak laser beams was better than 85%, and the kappa coefficient was greater than 70%. Compared with these studies, the proposed method achieved better performance for the classification of the urban ATL03 photon point cloud.

4.4. Evaluation of the denoising result

To evaluate the denoising performance of the proposed method, the noise photons identified by the coarse denoising and classification processes are combined, and all other photons are treated as signal photons. The accuracy is verified by comparison with the reference data. In addition, the denoising accuracy of the ATL08 data product (Neuenschwander et al. 2022) is used as a comparison.

Figure 7 shows the evaluation results of the proposed method and the ATL08 algorithm for denoising photon point clouds in urban areas. Four evaluation metrics were used to quantitatively evaluate the proposed method: precision, recall, F1_score, and kappa coefficient.
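A minimal evaluation sketch with scikit-learn is shown below (our own, under the assumption that photons are relabeled as a binary signal/noise problem before scoring):

```python
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, cohen_kappa_score)

def denoising_metrics(y_true_is_signal, y_pred_is_signal):
    """Binary denoising evaluation: 1 = signal photon, 0 = noise photon.

    Noise labels from coarse denoising and from the classifier are merged
    beforehand; every other photon counts as signal.
    """
    return {
        "precision": precision_score(y_true_is_signal, y_pred_is_signal),
        "recall": recall_score(y_true_is_signal, y_pred_is_signal),
        "f1_score": f1_score(y_true_is_signal, y_pred_is_signal),
        "kappa": cohen_kappa_score(y_true_is_signal, y_pred_is_signal),
    }
```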

Figure 7. Comparison of denoising results between the proposed method and ATL08 algorithm.


From Figure 7, it can be seen that the proposed method outperforms the ATL08 algorithm for denoising urban photon point clouds. Both algorithms have high precision, recall, and F1_score values, indicating that they are effective at identifying noise photons in urban photon data. According to the kappa agreement results, the method proposed in this study has a high kappa coefficient, which indicates that its denoising results are in good agreement with the reference data and are highly reliable. The lower kappa coefficient of the ATL08 algorithm indicates a higher degree of chance agreement when distinguishing noise photons from signal photons. The ATL08 algorithm was developed to extract topography and canopy height from the ATL03 photon point clouds (Neuenschwander et al. 2022), so its applicability may be limited in urban areas with mixed buildings and vegetation. Especially for the nighttime beams with fewer noise photons, the spatial structure of the urban photon point cloud is more distinct, which makes the ATL08 algorithm more prone to chance agreement when distinguishing signal and noise photons.

5. Discussion

5.1. Feature selection

Based on the ICESat-2 and Sentinel-2 data, a total of 24 features are extracted in this study. Figure 8 shows the importance ranking of the features extracted from the eight ATL03 photon beams, indicated by columns of different colors. From Figure 8, we can see that the ICESat-2 features are more important than the Sentinel-2 features in the classification process. In urban scenes, the vertical structure of the various ground objects differs significantly, and ICESat-2 data can better describe this vertical structure; therefore, the features from the ICESat-2 data show a relatively high contribution to the photon classification.

Figure 8. Importance ranking of features. The columns of different colors indicate the importance of each feature for different photon beams.


For the features obtained from ICESat-2 data, the elevation (HAE) and Signal confidence of each photon are the most important features for the classification of the eight beams of ATL03 photon data. The importance of skewness difference (SKEW) and kurtosis difference (KURT) for photons in the 20 m window is also relatively high. Among the Sentinel-2 spectral variables, BAND11 (short-wave infrared, SWIR) and BAND8 (visible and near-infrared, VNIR) show relatively high importance, which are the most significant spectral variables in the classification of photon point clouds. In addition, the MNDWI and NDBI12 spectral indices also make contributions to the classification, because they are effective for the distinction of water and buildings.

Figure 9 shows the correlations of all 24 ICESat-2 and Sentinel-2 features for the eight photon beams. From Figure 9, we can see that most of the features are significantly correlated with other features, but the correlation coefficients differ considerably. There are strong correlations among the ICESat-2 features, the Sentinel-2 bands, and the spectral indices, and the majority of ICESat-2 photon statistical features are strongly correlated with the photon elevation (HAE). However, feature variables including BAND11, Solar elevation, Dist_mean, and SKEW are less correlated with the other feature variables, indicating that they can provide additional information for classifying photons into different types such as building, water, and vegetation photons.
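The Pearson correlation matrix underlying Figure 9 can be reproduced in principle with pandas; the sketch below is illustrative (the feature names and the 0.9 redundancy cut-off are placeholders, not values from the paper):

```python
import pandas as pd

def feature_correlations(feature_table: pd.DataFrame) -> pd.DataFrame:
    """Pearson correlation matrix of the ICESat-2 and Sentinel-2 features.

    feature_table: one row per photon, one column per feature
    (e.g. HAE, Dist_mean, SKEW, BAND11, MNDWI, ...).
    """
    return feature_table.corr(method="pearson")

# Highly correlated pairs are candidates for removal during feature optimization:
# corr = feature_correlations(features_df)
# redundant_pairs = (corr.abs() > 0.9) & (corr.abs() < 1.0)
```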

Figure 9. Correlation analysis with Pearson test at 0.05 confidence level, where circle size indicates the strength of the correlation and red/blue color indicates positive/negative correlation.


According to the importance analysis (Figure 8) and correlation analysis (Figure 9), the 24 classifier features were successively reduced by removing, one at a time, the feature with the lowest importance and high correlation with the remaining features, and the reduced feature sets were then fed into the random forest classifier for model training. A minimal sketch of this elimination loop is given below.
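The sketch is our re-implementation under simplifying assumptions: removal is driven by importance alone, and the correlation criterion described above is not re-implemented.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def backward_feature_elimination(X_train, y_train, X_test, y_test, feature_names):
    """Successively remove the least-important feature and record accuracy.

    Returns a list of (n_features, overall_accuracy, kappa, kept_features).
    """
    kept = list(feature_names)
    history = []
    while kept:
        cols = [feature_names.index(f) for f in kept]
        rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
        rf.fit(X_train[:, cols], y_train)
        y_pred = rf.predict(X_test[:, cols])
        history.append((len(kept),
                        accuracy_score(y_test, y_pred),
                        cohen_kappa_score(y_test, y_pred),
                        list(kept)))
        # drop the feature with the lowest importance for the next round
        kept.pop(int(np.argmin(rf.feature_importances_)))
    return history
```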

Figure 10 shows the changes in overall accuracy and kappa coefficient for one daytime photon beam and one nighttime photon beam with different numbers of features. As can be seen from Figure 10(a), for the daytime photon data the overall accuracy fluctuates only slightly as the number of features decreases from 24 to 13, reaching its maximum of 96.34% with 19 features. As the number of features decreases further from 13 to 1, the overall accuracy drops from 96.34% to 62.04%. The kappa coefficient changes with a similar magnitude and reaches a maximum of 95.06% with 19 features. From Figure 10(b), the overall accuracy and kappa coefficient of the nighttime beam show an overall decreasing trend, but the change is small; both reach their maximum values, 96.90% and 95.44%, respectively, with 23 features. When the number of features decreases from 6 to 1, the classification accuracy changes significantly, with the minimum overall accuracy and kappa coefficient being 72.53% and 59.60%, respectively.

Figure 10. Overall accuracy and kappa coefficient of photon classification with the different number of features.


The classification accuracy curves for all eight ATL03 photon beams (daytime and nighttime) in this study vary in a manner similar to Figure 10. However, the numbers of features at which the overall accuracy and kappa coefficient reach their maxima are 16, 18, 19, and 14 for the four daytime beams and 21, 23, 18, and 20 for the four nighttime beams, respectively (see ). Following a comprehensive analysis of the accuracy changes for the eight photon beams, a combination of ten features with high importance and low correlation is selected for the classifier: HAE, AT_dist, Solar elevation, Grid_number, Dist_mean, Dist_median, Dist_p75, SKEW, KURT, and BAND11. These ten features are fed into the random forest classifier to build the classification model and predict photon categories. By this means, the average overall accuracy of denoising and classification for the eight photon beams is 95.19%, and the average kappa coefficient is 93.81%.

5.2. Misclassification analysis

Figure 11 shows the confusion matrices of two photon beams for daytime and nighttime. We can see that the proportion of correctly classified photons is high for all five types, with water having the highest classification accuracy, followed by ground, building, noise, and vegetation in that order. The overall classification results of the daytime and nighttime beams are similar for each photon category, but there is a noticeable difference in the correct classification of noise photons: for the nighttime beam, noise photons are confused with ground and building photons.

Figure 11. Confusion matrix for daytime and nighttime photon beams regarding the classification of five types of photons.


Table 5 shows the producer's and user's accuracy for the daytime and nighttime photon beams regarding the classification of the five types of photons. As can be seen in Table 5, the classification accuracy of water photons is high, with producer's and user's accuracy above 99% for both daytime and nighttime. For the noise photons, which show considerable disparities in the classification results, the producer's and user's accuracy for the daytime beam are 97.64% and 97.54%, respectively, while those for the nighttime beam are 95.79% and 83.31%, respectively. In addition, the classification results of vegetation photons for the daytime beam are relatively poor, with producer's and user's accuracy of 88.42% and 83.31%, respectively.

Table 5. Producer's accuracy and User's accuracy for daytime and nighttime photon beams regarding the classification of five types of photons.
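Producer's and user's accuracy correspond to per-class recall and precision, and can be computed from the confusion matrix as in the sketch below (our formulation, not the authors' code):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def producer_user_accuracy(y_true, y_pred, labels):
    """Per-class producer's accuracy (recall) and user's accuracy (precision).

    In scikit-learn's convention, confusion-matrix rows are the reference (true)
    labels and columns are the predicted labels.
    """
    cm = confusion_matrix(y_true, y_pred, labels=labels).astype(float)
    producer = np.diag(cm) / cm.sum(axis=1)   # correct / reference totals
    user = np.diag(cm) / cm.sum(axis=0)       # correct / predicted totals
    return {lab: (p, u) for lab, p, u in zip(labels, producer, user)}
```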

We attribute the above misclassifications to the following reasons. First, the confusion between noise photons and building photons may be due to the weak penetration of ATLAS in building areas, where some noise photons are recorded below the roofs of buildings. The ICESat-2 photon features and Sentinel-2 spectral features of these noise photons are similar to those of the signal photons on building roofs, so they are misidentified as building photons. Second, vegetation photons are easily confused with ground photons in both daytime and nighttime. We attribute this to the large amount of low shrubby vegetation in urban areas: the ground is shaded by the vegetation canopy, and the photons from the low vegetation are very close to the ground photons, so the two categories are misidentified. Furthermore, a potential source of misclassification lies in the reference data, which were obtained manually with the help of airborne LiDAR data and Google Earth images; category labeling errors may occur in areas where the photon point cloud is sparse, leading to errors in the final accuracy verification.

5.3. Classification accuracy with different feature combinations

In this study, seven experiments are designed to compare different feature combinations: only the ICESat-2 elevation (HAE); all Sentinel-2 raw band features; HAE and all Sentinel-2 raw band features; all Sentinel-2 spectral features; all ICESat-2 photon point cloud features; all Sentinel-2 spectral features and all ICESat-2 photon point cloud features; and the optimal combination of Sentinel-2 and ICESat-2 features. The seven feature combinations are used for random forest classification, and Figure 12 shows the classification results of two photon beams for daytime and nighttime.

Figure 12. Classification accuracy of daytime and nighttime beams by using different feature combinations.


From Figure 12, we can see that the ICESat-2 features outperform the Sentinel-2 spectral features for the classification, and the classification accuracy of the nighttime photon beam is higher than that of the daytime beam. We analyze the reasons as follows. First, the daytime photon data contain more noise photons, which are easily confused with signal photons. Second, the significant differences in the vertical structure of various ground objects in urban areas make the ICESat-2 features contribute more to the classification, which is consistent with the feature importance ranking obtained during the classification process. In addition, because urban objects are densely distributed, each Sentinel-2 image pixel may contain multiple ground objects, such as buildings, vegetation, and ground; photons falling within such a pixel share the same spectral value but belong to different objects, which reduces the contribution of the Sentinel-2 spectral features in the photon classification process.

From Figure 12, we can also see that the elevation information of the photon data has a great influence on the classification of the urban photon point cloud. Compared to using only the raw band information, the overall accuracy and kappa coefficient can be improved by 15%−41% when HAE and the raw bands are combined to construct the classifier. Compared to using all ICESat-2 features or all Sentinel-2 features separately, combining all ICESat-2 and Sentinel-2 features improves the overall accuracy and kappa coefficient by 4%−6%. With feature optimization, the classification accuracy improves further with the optimal combination of features: the overall accuracy and kappa coefficient of the daytime photon beam are improved by 0.3% and 0.77%, and those of the nighttime photon beam by 0.03% and 0.02%, respectively. This result demonstrates that feature optimization can further improve classification accuracy while increasing classification efficiency. In general, feature fusion of photon LiDAR and optical images is a promising solution for the classification of photon data.

6. Conclusion

In this study, a random forest-based method was proposed for denoising and classifying photon point clouds in urban areas by fusing features extracted from ICESat-2 and Sentinel-2 data. First, a coarse denoising algorithm that combines grid division and PCA was used to quickly filter out obvious noise photons. Second, the optimal feature combination was selected through feature importance and correlation analysis, which improves the efficiency and accuracy of denoising and classification in complex urban scenes. Finally, the fine denoising and classification of the photon point clouds were integrated into one process, so that feature variables from the photon data and the multispectral imagery could be tightly fused. Experimental results confirmed that the proposed method performs well for denoising and classifying photon data in urban areas, with an average overall accuracy of 95.97% and an average kappa coefficient of 94.18%. Further analysis demonstrates that adding Sentinel-2 spectral information effectively improves the classification accuracy of photon point clouds in urban areas, and that combining photon LiDAR data and optical images is a promising way to improve classification accuracy.

However, the proposed algorithm has some limitations. In areas with dense surface coverage, the ICESat-2 laser pulses may not penetrate the cover to reach the ground surface, resulting in missing or sparse ground photons, which makes it difficult to accurately identify ground photons in urban areas. In future research, we will improve the algorithm to achieve more effective photon denoising and higher classification accuracy. Furthermore, the denoising and classification of photon point clouds in urban areas can be combined with other space-borne LiDAR data, such as the Global Ecosystem Dynamics Investigation (GEDI), to achieve more refined classification of urban targets and to generate thematic products such as urban building heights.

Acknowledgments

We would like to thank all anonymous reviewers and editors for many constructive comments on the manuscript.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was supported by the State Key Project of National Natural Science Foundation of China–Key projects of joint fund for regional innovation and development [grant number U22A20566], the National Natural Science Foundation of China [grant number 42271365], and the Fundamental Research Funds for the Universities of Henan Province [grant number NSFRF220203].

References

  • Abdalati, W., H. J. Zwally, R. Bindschadler, B. Csatho, S. L. Farrell, H. A. Fricker, D. Harding, et al. 2010. “The ICESat-2 Laser Altimetry Mission.” Proceedings of the IEEE 98 (5): 735–775. https://doi.org/10.1109/JPROC.2009.2034765.
  • Ankerst, M., M. M. Breunig, H. P. Kriegel, and J. Sander. 1999. “OPTICS: Ordering Points to Identify the Clustering Structure.” ACM SIGMOD Record 28 (2): 49–60. https://doi.org/10.1145/304181.304187.
  • Awadallah, M. S., S. Ghannam, L. Abbott, and A. M. Ghanem. 2013. “Active Contour Models for Extracting Ground and Forest Canopy Curves from Discrete Laser Altimeter Data.” In Proceedings of 13th International Conference on LiDAR Applications for Assessing Forest Ecosystems, 1–8. Beijing, People’s Republic of China.
  • Awrangjeb, M., C. Zhang, and C. S. Fraser. 2012. “Automatic Reconstruction of Building Roofs through Effective LiDAR and Multispectral Imagery Integration.” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences 1:203–208. https://doi.org/10.5194/isprsannals-I-3-203-2012.
  • Chen, B. W., and Y. Pang. 2015. “A Denoising Approach for Detection of Canopy and Ground from ICESat-2's Airborne Simulator Data in Maryland, USA.” Applied Optics and Photonics China 9671:383–387. https://doi.org/10.1117/12.2202777.
  • Chen, B. W., Y. Pang, Z. Y. Li, P. North, J. Rosette, G. Sun, J. Suárez, I. Bye, and H. Lu. 2019. “Potential of Forest Parameter Estimation Using Metrics from Photon Counting LiDAR Data in Howland Research Forest.” Remote Sensing 11 (7): 856. https://doi.org/10.3390/rs11070856.
  • Chen, R. S., Z. Q. Zhao, D. Xu, and Y. Chen. 2021. “Progress of Research on Sustainable Development Index for Cities and Urban Agglomerations.” Progress in Geography 40 (1): 61–72. https://doi.org/10.18306/dlkxjz.2021.01.006.
  • Cunningham, J. P., and Z. Ghahramani. 2015. “Linear Dimensionality Reduction: Survey, Insights, and Generalizations.” Machine Learning Research 16 (1): 2859–2900. https://doi.org/10.48550/arXiv.1406.0873.
  • Drusch, M., U. D. Bello, S. Carlier, O. Colin, V. Fernandez, F. Gascon, B. Hoersch, et al. 2012. “Sentinel-2: ESA's Optical High-Resolution Mission for GMES Operational Services.” Remote Sensing of Environment 120:25–36. https://doi.org/10.1016/j.rse.2011.11.026.
  • eoPortal – Earth Observation Directory & News [DB/OL]. https://directory.eoportal.org/web/eoportal/satellite-missions/cmissions/copernicus-sentinel-2.
  • Ester, M., H.-P. Kriegel, J. Sander, and X. Xu. 1996. “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise.” In Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, 226–231. Portland, OR: AAAI Press.
  • Gong, P., and P. J. Howarth. 1990. “The Use of Structural Information for Improving Land-cover Classification Accuracies at the Rural-urban Fringe.” Photogrammetric Engineering and Remote Sensing 56:67–73.
  • Gong, P., and P. J. Howarth. 1992. “Land-use Classification of SPOT HRV Data Using a Cover-Frequency Method.” International Journal of Remote Sensing 13 (8): 1459–1471. https://doi.org/10.1080/01431169208904202.
  • Gong, P., Z. Li, H. B. Huang, G. Q. Sun, and L. Wang. 2011. “ICESat GLAS Data for Urban Environment Monitoring.” IEEE Transactions on Geoscience Remote Sensing 49 (3): 1158–1172. https://doi.org/10.1109/TGRS.2010.2070514.
  • Griffiths, P., P. Hostert, O. Gruebner, and S. Linden. 2010. “Mapping Megacity Growth with Multi-sensor Data.” Remote Sensing of Environment 114 (2): 426–439. https://doi.org/10.1016/j.rse.2009.09.012.
  • Gwenzia, D., M. A. Lefsky, V. P. Suchdeo, and D. J. Harding. 2016. “Prospects of the ICESat-2 Laser Altimetry Mission for Savanna Ecosystem Structural Studies Based on Airborne Simulation Data.” ISPRS Journal of Photogrammetry and Remote Sensing 118:68–82. https://doi.org/10.1016/j.isprsjprs.2016.04.009.
  • Lao, J. Y., C. Wang, X. X. Zhu, X. H. Xi, S. Nie, J. L. Wang, F. Cheng, and G. Q. Zhou. 2021. “Retrieving Building Height in Urban Areas Using ICESat-2 Photon-Counting LiDAR Data.” International Journal of Applied Earth Observation and Geoinformation 104:1569–8432. https://doi.org/10.1016/j.jag.2021.102596.
  • Li, X., and P. Gong. 2016. “Urban Growth Models: Progress and Perspective.” Science Bulletin 61 (21): 1637–1650. https://doi.org/10.1007/s11434-016-1111-1.
  • Li, B. B., H. Xie, X. H. Tong, D. Ye, K. P. Sun, and M. Li. 2020. “Land Cover Classification Using ICESat-2 Data with Random Forest.” Infrared and Laser Engineering 49 (11): 20200292. https://doi.org/10.3788/IRLA20200292.
  • Li, L., J. Zhu, G. Cheng, and B. Zhang. 2021. “Detecting High-rise Buildings from Sentinel-2 Data Based on Deep Learning Method.” Remote Sensing 13 (20): 4073. https://doi.org/10.3390/rs13204073.
  • Lian, W. Q., G. Zhang, H. Cui, Z. W. Chen, S. D. Wei, C. Y. Zhu, and Z. G. Xie. 2022. “Extraction of High-accuracy Control Points Using ICESat-2 ATL03 in Urban Areas.” International Journal of Applied Earth Observation and Geoinformation 115:103116. https://doi.org/10.1016/j.jag.2022.103116.
  • Liu, C. D., J. Li, Q. H. Tang, J. W. Qi, and X. H. Zhou. 2022. “Classifying the Nunivak Island Coastline Using the Random Forest Integration of the Sentinel-2 and ICESat-2 Data.” Land 11 (2): 240. https://doi.org/10.3390/land11020240.
  • Liu, M., S. Popescu, and L. Malambo. 2020. “Feasibility of Burned Area Mapping Based on ICESat−2 Photon Counting Data.” Remote Sensing 12 (1): 24. https://doi.org/10.3390/rs12010024.
  • Liu, Z. L., Q. B. Zhang, D. P. Yue, Y. G. Hao, and K. Su. 2019. “Extraction of Urban Built-up Areas Based on Sentinel-2A and NPP-VIIRS Nighttime Light Data.” Remote Sensing for Land Resources 31 (4): 227–234. https://doi.org/10.6046/gtzyyg.2019.04.29.
  • Magruder, L. A., M. E. Wharton, K. D. Stout, and A. L. Neuenschwander. 2012. “Noise Filtering Techniques for Photon-counting LiDAR Data.” Proceedings of the SPIE 8379, Laser Radar Technology and Applications XVII 8379. https://doi.org/10.1117/12.919139.
  • Moussavi, M. S., W. Abdalati, T. Scambos, and A. Neuenschwander. 2014. “Applicability of an Automatic Surface Detection Approach to Micro-pulse Photon-counting Lidar Altimetry Data: Implications for Canopy Height Retrieval from Future ICESat-2 Data.” International Journal of Remote Sensing 35 (13): 5263–5279. https://doi.org/10.1080/01431161.2014.939780.
  • Mulverhill, C., N. C. Coops, T. Hermosilla, J. C. White, and M. A. Wulderb. 2022. “Evaluating ICESat-2 for Monitoring, Modeling, and Update of Large Area Forest Canopy Height Products.” Remote Sensing of Environment 271:112919. https://doi.org/10.1016/j.rse.2022.112919.
  • Neuenschwander, A., and L. Magruder. 2016. “The Potential Impact of Vertical Sampling Uncertainty on ICESat-2/ATLAS Terrain and Canopy Height Retrievals for Multiple Ecosystems.” Remote Sensing 8 (12): 1039. https://doi.org/10.3390/rs8121039.
  • Neuenschwander, A. L., and L. A. Magruder. 2019. “Canopy and Terrain Height Retrievals with ICESat-2: A First Look.” Remote Sensing 11 (14): 1721. https://doi.org/10.3390/rs11141721.
  • Neuenschwander, A., K. Pitts, B. Jelley, J. Robbins, J. Markel, S. Popescu, R. Nelson, et al. 2022. “Ice, Cloud, and Land Elevation Satellite (ICESat-2) Project Algorithm Theoretical Basis Document (ATBD) for Land-Vegetation Along-Track Products (ATL08).” Version 6. ICESat-2 Project. https://doi.org/10.5067/8ANPSL1NN7YS.
  • Neumann, T. A., A. Brenner, D. Hancock, J. Robbins, A. Gibbons, J. Lee, K. Harbeck, J. Saba, S. Luthcke, and T. Rebold. 2022. “Ice, Cloud, and Land Elevation Satellite (ICESat-2) Project Algorithm Theoretical Basis Document (ATBD) for Global Geolocated Photons ATL03.” Version ICESat-2 Project. https://doi.org/10.5067/GA5KCLJT7LOT.
  • Neumann, T. A., A. J. Martino, T. Markus, S. Bae, M. R. Bock, A. C. Brenner, K. M. Brunt, et al. 2019. “The Ice, Cloud, and Land Elevation Satellite–2 Mission: A Global Geolocated Photon Product Derived from the Advanced Topographic Laser Altimeter System.” Remote Sensing of Environment 233:111325. https://doi.org/10.1016/j.rse.2019.111325.
  • Nie, S., C. Wang, X. H. Xi, S. Z. Luo, G. Y. Li, J. Y. Tian, and H. T. Wang. 2018. “Estimating the Vegetation Canopy Height Using Micro-pulse Photon-counting LiDAR Data.” Optics Express 26 (10): A520–A540. https://doi.org/10.1364/OE.26.00A520.
  • Pan, J. Y., C. Wang, J. L. Wang, F. Gao, Q. W. Liu, J. P. Zhang, and Y. C. Deng. 2022. “Land Cover Classification Using ICESat-2 Photon Counting Data and Landsat 8 OLI Data: A Case Study in Yunnan Province, China.” IEEE Geoscience and Remote Sensing Letters 19:1–5. https://doi.org/10.1109/LGRS.2022.3209725.
  • Popescu, S. C., T. Zhou, R. Nelson, A. Neuenschwander, R. Sheridan, L. Narine, and K. M. Walsh. 2018. “Photon Counting LiDAR: An Adaptive Ground and Canopy Height Retrieval Algorithm for ICESat-2 Data.” Remote Sensing of Environment 208:154–170. https://doi.org/10.1016/j.rse.2018.02.019.
  • Rosso, P. H., S. L. Ustin, and A. Hastings. 2005. “Mapping Marshland Vegetation of San Francisco Bay, California, Using Hyperspectral Data.” International Journal of Remote Sensing 26 (23): 5169–5191. https://doi.org/10.1080/01431160500218770.
  • SUHET. 2015. Sentinel-2 User Handbook. [ESA Standard Document]. Paris, France: European Space Agency. https://sentinel.esa.int/documents/247904/685211/sentinel-2_user_handbook.
  • Tang, H., A. Swatantran, T. Barrett, P. DeCola, and R. Dubayah. 2016. “Voxel-Based Spatial Filtering Method for Canopy Height Retrieval from Airborne Single-Photon Lidar.” Remote Sensing 8 (9): 771. https://doi.org/10.3390/rs8090771.
  • Voegtle, T., and E. Steinle. 2003. “On the Quality of Object Classification and Automated Building Modeling Based on Laserscanning Data.” IAPRS 34, part 3/W13, Dresden, 149–155.
  • Wang, S. F., C. Liu, W. Y. Li, S. J. Jia, and H. Yue. 2023. “Hybrid Model for Estimating Forest Canopy Heights Using Fused Multimodal Spaceborne LiDAR Data and Optical Imagery.” International Journal of Applied Earth Observation and Geoinformation 122:103431. https://doi.org/10.1016/j.jag.2023.103431.
  • Wang, S. S., F. P. Nie, Z. Wang, R. Wang, and X. L. Li. 2022. “Robust Principal Component Analysis via Joint Reconstruction and Projection.” IEEE Transactions on Neural Networks and Learning Systems, 1–15. https://doi.org/10.1109/TNNLS.2022.3214307.
  • Wang, X. A., Z. G. Pan, and C. Glennie. 2016. “A Novel Noise Filtering Model for Photon-counting Laser Altimeter Data.” IEEE Geoscience and Remote Sensing Letters 13 (7): 947–951. https://doi.org/10.1109/LGRS.2016.2555308.
  • Yang, P. F., H. Q. Fu, J. J. Zhu, Y. Li, and C. C. Wang. 2021. “An Elliptical Distance Based Photon Point Cloud Filtering Method in Forest Area.” IEEE Geoscience and Remote Sensing Letters 19:1–5. https://doi.org/10.1109/LGRS.2021.3124612.
  • Yang, X. B., C. Wang, X. H. Xi, P. Wang, Z. Lei, W. F. Ma, and S. Nie. 2019. “Extraction of Multiple Building Heights Using ICESat/GLAS Full-Waveform Data Assisted by Optical Imagery.” IEEE Geoscience and Remote Sensing Letters 16 (12): 1914–1918. https://doi.org/10.1109/LGRS.2019.2911967.
  • Yin, J., Z. Yin, H. D. Zhong, S. Y. Xu, X. M. Hu, J. Wang, and J. P. Wu. 2011. “Monitoring Urban Expansion and Land use/Land Cover Changes of Shanghai Metropolitan Area During the Transitional Economy (1979–2009) in China.” Environmental Monitoring and Assessment 177 (1–4): 609–621. https://doi.org/10.1007/s10661-010-1660-8.
  • Zhang, J. S., and J. Kerekes. 2015. “An Adaptive Density-based Model for Extracting Surface Returns from Photon-counting Laser Altimeter Data.” IEEE Geoscience and Remote Sensing Letters 12 (4): 726–730. https://doi.org/10.1109/LGRS.2014.2360367.
  • Zhang, X., Y. N. Zhou, and J. C. Luo. 2022. “Deep Learning for Processing and Analysis of Remote Sensing Big Data: A Technical Review.” Big Earth Data 6 (4): 527–560. https://doi.org/10.1080/20964471.2021.1964879.
  • Zhao, B. T., X. Dong, Y. C. Guo, X. F. Jia, and Y. R. Huang. 2022a. “PCA Dimensionality Reduction Method for Image Classification.” Neural Processing Letters 54:347–368. https://doi.org/10.1007/s11063-021-10632-5.
  • Zhao, Y., B. Wu, Q. X. Li, L. Yang, H. C. Fan, J. P. Wu, and B. L. Yu. 2023. “Combining ICESat-2 Photons and Google Earth Satellite Images for Building Height Extraction.” International Journal of Applied Earth Observation and Geoinformation 117:103213. https://doi.org/10.1016/j.jag.2023.103213.
  • Zhao, Y., B. Wu, S. Shu, L. Yang, J. P. Wu, and B. L. Yu. 2022b. “Evaluation of ICESat-2 ATL03/08 Surface Heights in Urban Environments Using Airborne LiDAR Point Cloud Data.” IEEE Geoscience and Remote Sensing Letters 19:1–5. https://doi.org/10.1109/LGRS.2021.3127540.
  • Zheng, X. B., C. P. Hou, M. Y. Huang, D. Ma, and M. D. Li. 2023. “A Density and Distance-based Method for ICESat-2 Photon-counting Data Denoising.” IEEE Geoscience and Remote Sensing Letters 20:1–5. https://doi.org/10.1109/LGRS.2023.3249960.
  • Zhou, Y. H., F. Qiu, A. A. Ali, and S. A. Mohammed. 2015. “ICESat Waveform-based Land-cover Classification Using a Curve Matching Approach.” International Journal of Remote Sensing 36 (1): 36–60. https://doi.org/10.1080/01431161.2014.990648.
  • Zhu, X. X., S. Nie, C. Wang, X. H. Xi, and Z. Y. Hu. 2018. “A Ground Elevation and Vegetation Height Retrieval Algorithm Using Micro-pulse Photon-counting Lidar Data.” Remote Sensing 10 (12): 1962. https://doi.org/10.3390/rs10121962.
  • Zhu, X. X., S. Nie, C. Wang, X. H. Xi, J. S. Wang, D. Li, and H. Y. Zhou. 2021. “A Noise Removal Algorithm Based on OPTICS for Photon-counting LiDAR Data.” IEEE Geoscience and Remote Sensing Letters 18 (8): 1471–1475. https://doi.org/10.1109/LGRS.2020.3003191.
  • Zhu, X. X., C. Wang, S. Nie, F. F. Pan, X. H. Xi, and Z. Y. Hu. 2020. “Mapping Forest Height Using Photon-counting LiDAR Data and Landsat 8 OLI Data: A Case Study in Virginia and North Carolina, USA.” Ecological Indicators 114:106287. https://doi.org/10.1016/j.ecolind.2020.106287.