Research Article

A MWCMLAI-Net method for LAI inversion in maize and rice using GF-3 and Lutan radar data

Article: 2341128 | Received 27 Dec 2023, Accepted 03 Apr 2024, Published online: 11 Apr 2024

ABSTRACT

This study aims to alleviate the problems of unsatisfactory inversion accuracy and weak model stability in quantitative remote sensing inversion of LAI. The properties and complex scattering mechanisms of SAR data determine the choice of polarization combinations and frequencies. This paper proposes an improved water cloud model combined with a deep neural network (MWCMLAI-Net) for high-precision inversion. Polarimetric GF-3 (C-band) and Lutan (L-band) data were used to investigate the potential of SAR images to estimate LAI, a strong indicator of crop productivity. The study selected Xiangfu District, in the eastern part of Kaifeng City, Henan Province, as the test area and investigated the LAI of maize and rice. The RVIFreeman model, the backscattering coefficient extracted by the modified water cloud model (MWCM), and the LAI obtained by MWCM inversion were used as inputs, and the MWCMLAI-Net inversion of LAI was constructed. The results showed that the model's inverted LAI fitting accuracies for maize and rice at the three fertility periods were better than those of the other models, with R2 above 0.8516 and RMSE below 0.3999 m2/m2. The addition of noise did not noticeably affect the results.

1. Introduction

The leaf area index (LAI), broadly defined as the sum of the one-sided areas of green leaves per unit ground area, is an important parameter for describing vegetation growth. It plays an important role in the carbon and water cycles of vegetation. Crops are an important part of the ecosystem, and LAI is an important indicator of crop yield, photosynthetic capacity, and growth and health status (Ma and Liang 2022; Stephen et al. 2015; Yan et al. 2019). In this paper, LAI is defined as half of the total leaf area per unit surface area (Li and Mu 2021). Therefore, rapid and accurate inversion of LAI is of great research significance for agricultural monitoring and biogeochemical cycles. Although traditional ground-based measurements of LAI can obtain localized, site-specific LAI values, these methods are time-consuming, which severely restricts their practical application when repeated monitoring over large regions is needed. Remote sensing technology can effectively monitor vegetation LAI on a regional or global scale due to its large coverage and high timeliness (Pasolli et al. 2015; Wei et al. 2017; Xiao et al. 2016).

Based on optical remote sensing images, the main approaches to LAI remote sensing inversion are optical vegetation index models and physical models, but the inversion accuracy of both is subject to the presence of clouds in the optical images, which leads to poor reliability and universality (Jin et al. 2019; Liu, Liu, and Chen 2012). Synthetic aperture radar (SAR), however, has significant advantages: it operates day and night in all weather, can penetrate clouds and fog, and can penetrate the dense vegetation canopy to a certain degree. Thus, SAR has greater potential for application in the inversion and monitoring of vegetation parameters (Thota et al. 2018). Researchers have studied the application of SAR sensors in agricultural monitoring. Some have used C-band SAR data, such as ERS-1, ERS-2, RADARSAT-1, Sentinel-1, and GF-3 (Suraj et al. 2022; Wang et al. 2023). Another group uses L-band SAR data, such as JERS-1, ALOS-PALSAR, ALOS-2 PALSAR-2, and Lutan. C-band microwaves interact more with the upper canopy, whereas the longer L-band wavelength penetrates deeper, so its response results from greater interaction with the lower canopy and the soil and from scattering directly from the soil. Furthermore, bistatic radar signatures have emerged as a new area of microwave remote sensing with enormous potential for monitoring and retrieving vegetation and geophysical parameters of various land covers and soil moisture. In a bistatic system, the transmitter and receiver are physically separated, which provides multi-dimensional information about the target and can enhance the target radar cross-section due to the geometry (Shubham et al. 2023; Suraj et al. 2022).

There are three methods for LAI remote sensing inversion using SAR imagery. The first establishes an empirical model between crop parameters and the polarization parameters of radar images by statistical means; it is simple and easy to invert but lacks theoretical support (Chakraborty, Manjunath, and Panigrahy 2005; Shao et al. 2001). The second establishes a mechanism model by simulating the physical process of the radar beam entering the crop canopy and below. The mechanism model better describes radar backscattering characteristics and the interaction mechanism with the crop canopy, but the input parameters are cumbersome and the inverse solution is difficult. The third approach uses the semi-empirical water cloud model (WCM), which has some physical significance and is simplified for LAI calculations, showing advantages in crop monitoring (Beriaux et al. 2013; Graham and Harris 2003). Based on this, researchers have begun to consider optimization algorithms for the water cloud model coefficients. Prevot, Champion, and Guyot (1993) used a quasi-Newton method to solve for the water cloud model coefficients. Yang et al. (2016) used a genetic algorithm, and the results showed that the root mean square error (RMSE) of the coefficients obtained by this method is smaller than that of the traditional gradient continuous unconstrained algorithm. However, this model requires many parameters. In nonparametric approaches, by contrast, a model is adjusted to predict a variable of interest using a training dataset of input-output pairs, which come from concurrent measurements of the parameter and the corresponding reflectance/radiance observations. Several nonparametric regression algorithms are available in the machine learning literature, and they have recently been introduced for biophysical parameter retrieval.

Typically, machine learning methods are able to cope with the strong nonlinearity of the functional dependence between the biophysical parameter and the observed reflected radiance. They may therefore be suitable candidates for operational applications. Several studies have demonstrated that machine learning algorithms are effective for modeling vegetation LAI using remotely sensed data and field measurements. Artificial neural networks (ANN) fit complex, high-dimensional, and nonlinear data well and achieve high accuracy. Support vector machines (SVM) similarly support high-dimensional inputs in regression models, and they need fewer training samples than ANN. Random forests (RF) offer high precision, fast computation, and robustness in parameter estimation, and they can rank variables according to their importance in LAI estimation. Deep learning, a family of machine learning methods that mimic the human learning system, has emerged as an effective tool for many agricultural applications with better performance. Deep convolutional neural networks (CNN) have proven particularly successful for image classification and regression applications, and they are computationally efficient. Researchers have used image-based convolutional neural networks to obtain more accurate results than classical approaches. CNNs outperform classical machine learning methods due to their inherent capability to extract a large number of features and to perform prediction tasks accurately. Deep learning techniques learn through data mining, showing that deep learning provides a way to discover relationships between variables in a high-dimensional space. Coupling a physical mechanism with a deep learning model for LAI inversion can not only handle complex physical characteristics, improve the theory, and optimize the system, but also compensate for the limited interpretability of machine learning and improve the efficiency and accuracy of the model.

In summary, LAI model inversion based on optical images is vulnerable to cloud coverage, leading to low inversion accuracy, while deep learning methods have great application potential in quantitative LAI inversion. The research presented here examines the potential of multi-polarization GF-3 (C-band) and HH-polarization Lutan (L-band) data for LAI estimation over rice and maize canopies. This paper proposes MWCMLAI-Net, which takes the polarization-decomposed radar vegetation index, the backscattering coefficient extracted by the modified water cloud model (MWCM), and the LAI obtained from the MWCM inversion as input factors. The network extracts scattering features through a convolutional layer, reduces dimensionality through a pooling layer, and uses fully connected layers to perform nonlinear mapping from scattering features to LAI. The proposed model shows excellent noise immunity and stability.

2. Materials and methods

2.1. Materials

2.1.1. Study area

The study area is in Xiangfu District (34°30′ – 34°56′N, 114°07′ – 114°43′E) in the eastern part of Kaifeng City, Henan Province. The region lies in the central part of the East Yu Plain on the south bank of the Yellow River, belonging to the Yellow River alluvial fan plain. It has an average annual precipitation of 627.5 mm and a maximum summer temperature of 42.9°C. It is an important rice production area in China and has a typical warm-temperate continental monsoon climate. The location of the study region and the distribution of sampling points are shown in Figure 1.

Figure 1. Location of the study region and distribution of sampling points.

2.1.2. Satellite data collection and processing

This study used GF-3 SAR and Lutan SAR data over the study area. GF-3 SAR data were downloaded from the China Platform of Earth Observation System (CPEOS). The GF-3 SAR data are C-band, with incidence angles ranging from 20° to 41°; the transit times were 9 May, 5 June, and 16 July 2023; the operating frequency is 5.4 GHz; and the spatial resolution is 8 m. Lutan SAR data were downloaded from the Natural Resources Satellite Remote Sensing Cloud Service Platform (NRSRSCSP). The Lutan SAR data are L-band, with incidence angles ranging from 10° to 60°; the transit times were 24 May, 22 June, and 24 July 2023; the operating frequency is 1.26 GHz; and the spatial resolution is 3 m. The GF-3 SAR and Lutan SAR data were processed as follows:

The first step is radiometric correction. The raw DN values were converted into sigma nought values based on the lookup tables (LUT) provided with the GF-3 SAR and Lutan SAR data using Eq. (1), and backscatter was obtained by radiometric correction based on the radar incidence angle:

(1) $\sigma^0 = \dfrac{DN^2}{A_\sigma^2}$

where $A_\sigma$ is the transformation coefficient for radar image pixels, obtained for areas falling between points in the LUT, and $\sigma^0$ is the backscatter coefficient, usually called sigma nought. Then, filtering was performed using a 5 × 5 Boxcar filter to reduce the noise in the radar data (Liu, Liu, and Chen 2012). Next is geometric correction: the image was geometrically corrected according to the parameters in the data header file, with orthorectification using TanDEM-X digital elevation model (DEM) data. Finally, the image was co-registered to an image with a fine correction history.
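To make the preprocessing concrete, a minimal Python sketch of the DN-to-sigma-nought conversion of Eq. (1) and the 5 × 5 Boxcar filtering is given below; the handling of the calibration coefficient is simplified, and the geometric correction and co-registration steps (done in SAR processing software) are not shown.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dn_to_sigma0(dn, a_sigma):
    """Convert raw DN values to linear sigma nought using Eq. (1).
    `a_sigma` is the (interpolated) calibration coefficient from the LUT."""
    return (dn.astype(np.float64) ** 2) / (a_sigma ** 2)

def boxcar_filter(sigma0, size=5):
    """5 x 5 Boxcar (moving-average) filter to suppress speckle noise."""
    return uniform_filter(sigma0, size=size)
```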

2.1.3. Field data observation

The experiment analyzed the LAI inversion of rice and maize at three growth stages (Table 1). The selected acquisition dates were May 12, June 17, and July 20, 2023, almost synchronous with the satellite transit times. Fifty rice and maize quadrats with a size of 16 m × 16 m were selected, and within each quadrat three spots with the same growth status were chosen. LAI was measured with an LAI-2200 canopy analyzer (LI-COR, USA). Four areas were selected within a sample plot, and the mean of three measurements was taken as the LAI of the plot. Four sub-sample points were randomly selected to collect soil samples from 0 to 5 cm and 0 to 10 cm within each sample point, and the gravimetric moisture content was determined in the laboratory using the oven-drying method and, combined with the soil bulk density, converted into volumetric water content. The coordinates of the sample points were measured by a Tianbao hand-held GPS with a planar positioning accuracy better than 1 m during all ground sampling.
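For reference, the conversion from gravimetric to volumetric water content mentioned above is not written out in the text; it is typically computed as

$\theta_v = \theta_g \cdot \dfrac{\rho_b}{\rho_w}$

where $\theta_g$ is the gravimetric water content, $\rho_b$ is the soil bulk density, and $\rho_w \approx 1\ \mathrm{g\,cm^{-3}}$ is the density of water (assumed here).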

Table 1. GF-3 and Lutan SAR acquisitions and LAI measurement dates.

2.2. Methods

The properties and complex scattering mechanisms of GF-3 (C-band) and Lutan (L-band) SAR data determine the choice of polarization combinations and frequencies. Modeling was performed by examining the correlation of maize and rice LAI with GF-3 fully polarized SAR data and Lutan HH-polarized SAR data. The correlation between the polarization-decomposed radar vegetation index RVIFreeman and LAI was analyzed for the GF-3 SAR and Lutan SAR data, respectively. The correlation between the total backscattering coefficient σ0 obtained from the improved water cloud model and the measured LAI was also analyzed for the GF-3 SAR and Lutan SAR data, respectively. Finally, the backscattering coefficients obtained from the improved water cloud model, the LAI obtained from the improved water cloud model inversion, and the RVIFreeman index were input to the deep learning LAI-Net model, which takes advantage of the network's ability to train on a small number of samples to obtain better LAI inversion accuracy, as shown in Figure 2.

Figure 2. Flowchart of leaf area index inversion by MWCMLAI-Net.

2.2.1. Radar vegetation indices for Freeman-Durden polarization decomposition

To eliminate the effect of insufficient reflectivity caused by topographic complexity, Xiangfu District, Kaifeng City, Henan Province, was selected as the experimental area, where backscattering originates mainly from canopy volume scattering, secondary scattering between leaves or stems and the ground, and ground surface scattering. Therefore, the Freeman-Durden polarization decomposition method was introduced to construct the vegetation index. Freeman and Durden constructed a polarization decomposition based on three simple scattering mechanism models, odd (surface) scattering, even (double-bounce) scattering, and volume scattering (Freeman and Durden 1998), by decomposing the covariance matrix of the fully polarized SAR data to obtain the scattering power components of the three mechanisms. The total covariance matrix $T_3$ is given by

(2) $T_3 = F_v C_v + F_d C_d + F_s C_s$

where $C_v$, $C_d$, and $C_s$ represent the volume, secondary (double-bounce), and surface scattering, respectively, and $F_v$, $F_d$, and $F_s$ correspond to the contributions of the three scattering mechanisms. In this paper, the polarization decomposition index RVIFreeman was defined from $C_v$ as

(3) $RVI_{Freeman} = C_v / (C_v + C_d + C_s)$

When the radar-irradiated area is bare ground, the volume scattering component $C_v$ tends to 0, and RVIFreeman also tends to 0. When the observation area contains more woods or grass, the energy of the radar wave penetrating the vegetation canopy and producing single scattering at the ground surface is reduced, and the double-bounce component from the ground back to tree trunks or vegetation stems becomes smaller; at this time both $C_d$ and $C_s$ are reduced, whereas dense vegetation increases $C_v$, so the value of RVIFreeman becomes larger, tending to 1. In the Freeman polarization decomposition, the total scattering power Span is

(4) $Span = |S_{HH}|^2 + 2|S_{HV}|^2 + |S_{VV}|^2$

(5) $RVI_{Freeman} = \dfrac{8|S_{HV}|^2}{|S_{HH}|^2 + 2|S_{HV}|^2 + |S_{VV}|^2}$

and the backscattering coefficient $\sigma^0$ can be expressed as

(6) $\sigma^0 = \dfrac{4\pi}{A}|S_{ij}|^2$

where $A$ is the radar-irradiated area. The radar vegetation index constructed in this paper is based on GF-3 and Lutan SAR imagery. Because of the large soil moisture content of the underlying surface of rice and maize in the study area, the model of Shen et al. (2015) was adopted and improved to simulate the backscattering of vegetation in the study area, yielding better simulation results. For the two-layer structure of vegetation in the study area, the total backscattering is considered a linear combination of volume scattering from stems and leaves, ground surface scattering, and secondary reflection between leaves or stems and the ground. The vegetation leaves were modeled as narrow ellipses, the stems as infinite-length dielectric cylinders, the surface with the integral equation model, and the vegetation dielectric constant was obtained from the Debye-Cole dual-dispersion relation model. GF-3 SAR imagery is fully polarized, so RVIGF3Freeman is expressed as Eq. (7); Lutan SAR imagery is HH-polarized, so RVILutanFreeman is expressed as Eq. (8):

(7) $RVI_{GF3-Freeman} = \dfrac{8\sigma^0_{HV}}{\sigma^0_{HH} + 2\sigma^0_{HV} + \sigma^0_{VV}}$

(8) $RVI_{Lutan-Freeman} = \dfrac{8\sigma^0_{HH}}{\sigma^0_{HH} + 2\sigma^0_{HH} + \sigma^0_{HH}}$
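As a small illustration, a Python sketch of Eqs. (7) and (8) applied to linear-scale backscatter arrays is given below; the array names are hypothetical and the inputs are assumed to already be calibrated sigma nought values in linear (power) units.

```python
import numpy as np

def rvi_gf3_freeman(s0_hh, s0_hv, s0_vv):
    """RVI from fully polarized GF-3 backscatter (linear power), Eq. (7)."""
    return 8.0 * s0_hv / (s0_hh + 2.0 * s0_hv + s0_vv)

def rvi_lutan_freeman(s0_hh):
    """HH-only Lutan variant, Eq. (8); every term reduces to the HH backscatter."""
    return 8.0 * s0_hh / (s0_hh + 2.0 * s0_hh + s0_hh)
```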

2.2.2. MWCM

The water cloud model (WCM) was proposed by Attema and Ulaby (1978). Based on it, the expression is improved as follows to effectively distinguish different crops:

(9) $\sigma^0 = \sigma_{veg}(\theta) + \tau^2(\theta)\,\sigma_{soil}(\theta)$

(10) $\sigma_{veg}(\theta) = A \cdot M_1 \cdot \cos(\theta) \cdot [1 - \tau^2(\theta)], \quad M_1 = LC_1$

(11) $\tau^2(\theta) = \exp[-2B \cdot M_2 / \cos(\theta)], \quad M_2 = LC_2$

where the total backscatter $\sigma^0$ is expressed as the incoherent sum of the backscatter from vegetation, $\sigma_{veg}(\theta)$, and the backscatter from the underlying surface, $\sigma_{soil}(\theta)$, which is attenuated by the vegetation layer through the two-way attenuation factor $\tau^2(\theta)$; $\theta$ is the incidence angle of the radar waves. $M_1$ and $M_2$ are the vegetation canopy parameters of maize and rice, respectively, $L$ is the LAI, $C_1$ and $C_2$ are crop parameters, and $A$ and $B$ are model parameters.

Mehdi et al. (2015) combined the bare soil and water cloud models, simplified the model using LAI as the crop parameter, and constructed the relationship between LAI (i.e. M), soil water content, and radar backscatter for each polarization. To avoid the large uncertainty in the water cloud model estimate of soil water content caused by remote sensing LAI inversion error, the LAI was replaced by the radar vegetation index to obtain an improved radar-index water cloud model with the following equations:

(12) $\sigma_{soil}(\theta) = D \cdot MS + E$

(13) $f_{VI} = RVI$

(14) $\sigma^0 = f_{VI} \cdot A \cdot M_1 \cdot \cos(\theta) \cdot [1 - \tau^2(\theta)] + (1 - f_{VI}) \cdot \exp[-2B \cdot M_2 / \cos(\theta)] \cdot (D \cdot MS + E)$

where $D$ is the sensitivity of the radar to soil moisture, $E$ is the backscattering coefficient of the soil, and $MS$ is the SMC from the SVRM. $C_v$, $C_d$, $C_s$, and $C_h$ represent the volume, secondary, surface, and spiral scattering, respectively. MWCM is a function of input parameters that quantify crop growth and soil surface conditions.
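To show how the forward model hangs together, a minimal sketch of Eqs. (9) to (14) is given below. The parameter names follow the text; the reading of $M_1 = LC_1$ and $M_2 = LC_2$ as products, and the use of calibrated coefficients passed in as arguments, are assumptions.

```python
import numpy as np

def mwcm_sigma0(lai, theta_deg, ms, rvi, A, B, C1, C2, D, E):
    """Sketch of the modified water cloud model (Eqs. 9-14): total backscatter
    in linear units, given LAI, incidence angle, soil moisture MS, and RVI."""
    theta = np.radians(theta_deg)
    m1, m2 = lai * C1, lai * C2                        # canopy descriptors, one reading of M1 = LC1, M2 = LC2
    tau2 = np.exp(-2.0 * B * m2 / np.cos(theta))       # two-way attenuation, Eq. (11)
    sigma_veg = A * m1 * np.cos(theta) * (1.0 - tau2)  # vegetation term, Eq. (10)
    sigma_soil = D * ms + E                            # soil term, Eq. (12)
    f_vi = rvi                                         # Eq. (13)
    return f_vi * sigma_veg + (1.0 - f_vi) * tau2 * sigma_soil  # Eq. (14)
```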

The lookup table (LUT) method directly compares the backscattering coefficients simulated by the radiative transfer model with the image backscattering coefficients (Jochem et al. 2019). It is a widely used and simple inversion method for vegetation parameter retrieval from remotely sensed data. The first step is to screen the input parameters that are sensitive to the target parameters as free parameters, sample them uniformly over their value ranges and step sizes to generate multiple parameter combinations, and finally input the different parameter combinations into the radiative transfer model to establish the correspondence between the parameter combinations and the model outputs. The settings are shown in Table 2.

Table 2. The range of all parameters in generating look up table.

In this paper, the Laplace distribution (LP) is used as the cost function of the LUT (John and Ren 2012), defined as follows:

(15) $LP = \sum_{i=1}^{n} |\sigma_{obs} - \sigma_{mod}|$

where $\sigma_{obs}$ is the backscattering value calculated from the SAR image, $\sigma_{mod}$ is the backscattering value simulated by MWCM, and $n$ is the number of pixels in the image.
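A minimal sketch of how the LUT search could be wired to the cost function of Eq. (15), reusing the hypothetical mwcm_sigma0 function sketched above; the grid range and step are placeholders rather than the values of Table 2.

```python
import numpy as np

def lut_invert_lai(sigma_obs, theta_deg, ms, rvi, params, lai_grid):
    """Simulate sigma0 for each candidate LAI and return the LAI that minimizes
    the absolute-difference (Laplace) cost of Eq. (15). `params` is a dict of
    calibrated coefficients A, B, C1, C2, D, E."""
    costs = []
    for lai in lai_grid:
        sigma_mod = mwcm_sigma0(lai, theta_deg, ms, rvi, **params)
        costs.append(np.sum(np.abs(sigma_obs - sigma_mod)))
    return lai_grid[int(np.argmin(costs))]

# Example usage with hypothetical values:
# lai_grid = np.arange(0.0, 7.0, 0.05)
# best_lai = lut_invert_lai(sigma_obs, 35.0, 0.25, 0.6, params, lai_grid)
```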

2.2.3. LAI-Net

2.2.3.1. LAI-Net model structure

LAI-Net takes the backscattering coefficients extracted by MWCM as input, with a size of 1 × 244, and generates LAI as output, with a size of 1 × 1. The network consists of two convolutional layers, one pooling layer, and three fully connected layers. The two convolutional layers, with a kernel size of 1 × 3 and a stride of 2, extract feature information from the backscatter coefficient data while narrowing the gap between the dimension of the feature information and the dimension of the LAI output. The first convolutional layer has 1 input channel and 4 convolutional kernels; the second has 4 input channels and 16 convolutional kernels, and its output is activated by the ReLU function to enhance the nonlinearity of the network (Digvijay, Santanu, and Guanghui 2018; Yarotsky 2016). The output of the convolutional layers is fed to the pooling layer to reduce dimensionality, with a pooling size of 1 × 3 and a stride of 3. Finally, the output of the pooling layer is fed to the fully connected layers. The number of neurons decreases layer by layer: the first, second, and third fully connected layers have 32, 8, and 1 neurons, respectively. The ReLU function activates the first and second fully connected layers, and the Sigmoid function activates the third. In addition, dropout regularization is introduced between the input of the fully connected block and the first fully connected layer to prevent overfitting.
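As one concrete reading of this description, a minimal PyTorch sketch of the LAI-Net architecture follows; details not stated in the text (the dropout rate, the absence of padding, and rescaling of the Sigmoid output to the physical LAI range) are assumptions.

```python
import torch
import torch.nn as nn

class LAINet(nn.Module):
    """Sketch of LAI-Net: two 1-D conv layers, one pooling layer, three FC layers."""
    def __init__(self, in_len=244):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=3, stride=2),   # 1st conv: 1 -> 4 channels
            nn.ReLU(),
            nn.Conv1d(4, 16, kernel_size=3, stride=2),  # 2nd conv: 4 -> 16 channels
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=3, stride=3),      # pooling for dimensionality reduction
        )
        # Infer the flattened feature length with a dummy forward pass.
        with torch.no_grad():
            n_flat = self.features(torch.zeros(1, 1, in_len)).numel()
        self.regressor = nn.Sequential(
            nn.Dropout(p=0.5),                 # dropout before the first FC layer (rate assumed)
            nn.Linear(n_flat, 32), nn.ReLU(),  # FC 1
            nn.Linear(32, 8), nn.ReLU(),       # FC 2
            nn.Linear(8, 1), nn.Sigmoid(),     # FC 3; output in (0, 1), rescale to LAI range outside
        )

    def forward(self, x):                      # x: (batch, 1, 244)
        f = self.features(x)
        return self.regressor(f.flatten(1))
```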

2.2.3.2. LAI-Net model training

First, samples are randomly selected from the sample totality as training samples, including backscatter coefficients and LAI. Then, the network hyperparameters are set: the initial learning rate η = 0.001, the batch size batchsize = 100, and the number of iterations epoch = 2000. The network parameters are trained with the Adam algorithm, with the Adam hyperparameters left at the PyTorch defaults. The learning rate declines in a step pattern: during the first 1,000 iterations, η is multiplied by 0.6 every 250 iterations; after that, η is multiplied by 0.6 every 1,000 iterations. Finally, the training samples are fed into the LAI-Net network, and forward propagation and backpropagation update the network parameters, which comprise the weights and bias terms of the convolutional layers and of the fully connected layers.
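A sketch of this training loop under those hyperparameters is shown below; the MSE loss and the exact milestone schedule are assumptions based on one reading of the step pattern described above, and `X`/`y` are hypothetical tensors of backscatter features (N, 1, 244) and LAI scaled to (0, 1) of shape (N, 1).

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_lai_net(model, X, y, epochs=2000, batch_size=100, lr=1e-3):
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # PyTorch default betas/eps
    # One reading of the step schedule: lr x 0.6 every 250 epochs up to epoch 1000,
    # then every 1000 epochs thereafter.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[250, 500, 750, 1000, 2000], gamma=0.6)
    loss_fn = torch.nn.MSELoss()               # loss function assumed, not stated in the text
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb), yb)      # forward propagation
            loss.backward()                    # backpropagation
            optimizer.step()                   # update conv and FC weights and biases
        scheduler.step()
    return model
```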

2.2.3.3. LAI-Net model validation

The samples remaining in the sample totality after removing the training samples are used for validation in the network accuracy test. First, the validation samples are input to the trained LAI-Net model to obtain inverted LAI values. Then, the coefficient of determination (R2) and the root mean square error (RMSE) are used to quantify the correlation and difference between the inverted LAI and the measured LAI. Finally, the distribution of the differences between inverted and measured LAI is plotted for visual characterization.
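For completeness, a small sketch of these two accuracy metrics as used throughout the results; the array names are hypothetical.

```python
import numpy as np

def r2_rmse(lai_pred, lai_obs):
    """Coefficient of determination and root mean square error between
    inverted and measured LAI (1-D arrays, m^2/m^2)."""
    lai_pred, lai_obs = np.asarray(lai_pred), np.asarray(lai_obs)
    ss_res = np.sum((lai_obs - lai_pred) ** 2)
    ss_tot = np.sum((lai_obs - lai_obs.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((lai_obs - lai_pred) ** 2))
    return r2, rmse
```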

3. Results and discussion

3.1. LAI Inversion by RVIFreeman Model

The Freeman-Durden polarization decomposition of the GF-3 SAR and Lutan SAR data was performed to analyze the correlation between the three scattering components and the vegetation LAI of rice and maize, as shown in Tables 3 and 4. The correlation between the radar vegetation index RVIFreeman defined in this paper and LAI is shown in Figures 3 and 4.

Figure 3. Regression Analysis of Radar Vegetation Index RVIFreeman and LAI Constructed from GF-3 SAR Image. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Figure 4. Regression Analysis of Radar Vegetation Index RVIFreeman and LAI Constructed from Lutan SAR Image. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Table 3. Accuracy evaluation of the Freeman-Durden polarization decomposition.

Table 4. Accuracy evaluation of the Freeman-Durden polarization decomposition.

From Tables 3 and 4, it can be seen that the volume scattering component has the highest correlation with LAI. In contrast, secondary scattering predominates among the scattering components at lower LAI, because the underlying surface in the experimental area has a very high water content and is flat, and in some areas it is even open water, which makes it very easy for sparse vegetation to form dihedral-angle reflections.

From Figures 3 and 4, we see that, overall, the correlation of the RVIFreeman model is greater than that of the volume scattering component Fv. Compared with the C-band, the maize LAI inversion accuracy at the L-band was higher for all three fertility periods, owing to the ability of the L-band to penetrate further into the crop canopy. However, for rice, which has lower biomass, the L-band inversion accuracy was lower than that of the C-band, mainly because canopy-soil interactions affect the retrieval accuracy.

3.2. LAI Inversion by MWCM

3.2.1. LAI inversion by the backscattering coefficient of MWCM

Based on the GF-3 SAR images, MWCM was employed to extract the backscattering values of the HH, HV, and VV polarization modes, and their correlation with the measured LAI was established, as shown in Figure 5 and Table 5. Based on the Lutan SAR images, MWCM was employed to extract the backscattering values of the HH polarization mode, and the correlation with the measured LAI was established, as shown in Figure 6 and Table 6.

Figure 5. Correlation analysis between the backscattering coefficient model based on GF-3 SAR imagery and LAI. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Figure 6. Correlation analysis between the backscattering coefficient model based on Lutan SAR imagery and LAI. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Table 5. Evaluation of backscatter coefficient model accuracy based on GF-3 SAR image.

Table 6. Evaluation of backscatter coefficient model accuracy based on Lutan SAR image.

From Figures 5 and 6 and Tables 5 and 6, none of the three backscattering coefficients correlates strongly with LAI on its own, due to the complexity of the radar-vegetation interaction mechanism and the influence of surface conditions. However, changes in the quantity, geometry, and distribution of leaves, stems, and ears had significant effects on the SAR backscatter. The responses of the backscattering coefficients and decomposition parameters to rice and maize plant growth were further interpreted (He et al. 2019). For the GF-3 SAR image, the correlation between the backscattering coefficient and LAI is higher for HV polarization than for HH and VV polarization, because cross-polarization is mainly related to scattering from the vegetation volume, whereas co-polarization better reflects the secondary scattering and the surface scattering from the ground. For HH polarization, the correlation between the backscatter coefficient extracted from the Lutan satellite and maize LAI is higher than that of the GF-3 SAR image, mainly due to the stronger penetration of the longer wavelength. Therefore, in the high LAI region, the main component of the total scattering power is the volume scattering component, and the secondary scattering and surface scattering are weak. In the low LAI region, however, the radar scattering is more complex and strong secondary scattering components form easily, which weakens the sensitivity of the volume scattering ratio to LAI.

3.2.2. LAI inversion by MWCM

Based on the GF-3 SAR image, since the cross-polarization backscattering coefficient is mainly related to vegetation scattering, the parameters in MWCM were determined by the nonlinear least squares method under HV cross-polarization. Based on the Lutan SAR image, since the image was acquired only in HH polarization, the parameters in MWCM were determined by the nonlinear least squares method under HH polarization, as shown in Tables 7 and 8. MWCM was then used to invert LAI with the help of the least squares algorithm and the LUT method, as shown in Figures 7 and 8.

Figure 7. Correlation analysis between MWCM and LAI based on GF-3 SAR image. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Figure 8. Correlation analysis between MWCM and LAI based on Lutan SAR image. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Table 7. Model parameters at polarizations.

Table 8. Model parameters at polarizations.

From Figures 7 and 8, the fitting accuracies of maize LAI based on the Lutan SAR images at the three fertility stages were higher than those of the GF-3 SAR images, in the order booting stage (R2 = 0.8451), jointing stage (R2 = 0.8290), and heading stage (R2 = 0.8050), which proved that the L-band is more suitable for inverting maize LAI. The fitting accuracies of rice LAI based on the GF-3 SAR images at the three fertility stages were higher than those of the Lutan SAR images, in the order jointing (R2 = 0.8296), booting (R2 = 0.8209), and heading (R2 = 0.8063), which proved that the C-band is more suitable for inverting rice LAI. In conclusion, when LAI becomes large, the C-band signal saturates easily, leading to low inversion accuracy for maize, so the C-band is better suited to inverting rice LAI. Compared with the C-band, the model inverts maize LAI better at the longer L-band because of its stronger penetration into the canopy.

3.3. LAI inversion by MWCMLAI-Net

In this study, MWCMLAI-Net was adopted to invert the LAI of rice and maize at three growth stages, and regression analysis was carried out separately, as shown in Figures 9 and 10.

Figure 9. Correlation analysis between MWCMLAI-Net and LAI based on GF-3 SAR image. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Figure 10. Correlation analysis between MWCMLAI-Net and LAI based on Lutan SAR image. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

As can be seen from Figures 9 and 10, the MWCMLAI-Net inversion accuracy of maize and rice LAI is the highest of all models for the three fertility periods. This is due to the network structure and the parameter update method. The model's advantage is that it combines convolutional, pooling, and fully connected layers to sequentially perform backscattering feature extraction, dimensionality reduction, and nonlinear regression. The parameters of the MWCMLAI-Net model include those of the convolutional layers and those of the fully connected layers; the convolutional layer parameters are used for feature extraction, and the fully connected layer parameters are used for the nonlinear mapping from backscattering features to LAI. Owing to the large number of model parameters and the distinct function of each network layer, the nonlinear relationship between backscattering coefficients and LAI can be fully approximated.

3.4. Noise immunity analysis of MWCMLAI-Net

To check the noise immunity of MWCMLAI-Net in quantitative LAI inversion, speckle noise was added to the GF-3 SAR and Lutan SAR data to simulate noisy data, and quantitative LAI inversion was carried out on these data. The results of the numerical experiments are shown in Tables 9 and 10.
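As an illustration of how such a test could be simulated (the text does not specify the noise model), a sketch adding unit-mean multiplicative gamma speckle to linear-scale backscatter; the number of looks is an assumed setting.

```python
import numpy as np

def add_speckle(sigma0_linear, looks=4, seed=0):
    """Simulate multiplicative speckle by drawing unit-mean gamma noise;
    `looks` is the assumed equivalent number of looks."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=sigma0_linear.shape)
    return sigma0_linear * noise
```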

Table 9. LAI of maize estimation accuracy with MWCMLAI-Net (with noise).

Table 10. LAI of rice estimation accuracy with MWCMLAI-Net (with noise).

As can be seen from Tables 9 and 10, the R2 of the quantitative inversion of maize and rice LAI using MWCMLAI-Net still exceeds 0.85 when noise is included. Therefore, the MWCMLAI-Net proposed in this paper has good stability and noise immunity, which ensures high-precision inversion results for maize and rice LAI.

3.5. MWCMLAI-Net verification

To verify the reliability of the LAI inversion model, the LAI values obtained by MWCMLAI-Net inversion were examined with the measured LAI taken as the true value. Regression analysis was conducted between the remaining 30 true values (those not used as modeling samples) and the LAI values inverted by MWCMLAI-Net to obtain the correlation between the LAI inversion values and the true values in each growth stage, as shown in Figures 11 and 12.

Figure 11. Based on GF-3 SAR image, the MWCMLAI model and LAI verification analysis. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Figure 12. Based on Lutan SAR image, the MWCMLAI model and LAI verification analysis. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

As Figures 11 and 12 show, the fitting accuracy of maize LAI based on the Lutan SAR images at the three fertility stages was higher than that of the GF-3 SAR images, in the order booting stage (R2 = 0.8855), jointing stage (R2 = 0.8837), and heading stage (R2 = 0.8797). The fitting accuracies of rice LAI based on the GF-3 SAR images at the three fertility stages were higher than those of the Lutan SAR images, in the order booting stage (R2 = 0.8900), jointing stage (R2 = 0.8689), and heading stage (R2 = 0.8507). The conclusions obtained by MWCMLAI-Net inversion of LAI for maize and rice at the three growth stages were consistent with the model inversion conclusions. These results indicate that the GF-3 SAR-based LAI inversion model for rice at the three growth stages can truly reflect the growth and changes of rice, and the Lutan SAR-based LAI inversion model for maize at the three growth stages can truly reflect the growth and changes of summer maize.

3.6. MWCMLAI-Net cross validation

To further verify the accuracy of the inversion model, the LAI images of rice and maize at the three typical growth stages obtained by inversion were resampled to 500 m, and 100 points were randomly selected. Regression analysis was performed between the LAI based on the GF-3 SAR and Lutan SAR images (GF-3 LAI and Lutan LAI) and the MODIS leaf area index product (MOD15A2, MODIS LAI) of the corresponding time phase, respectively, to obtain the correlation at the three growth stages of rice and maize, as shown in Figures 13 and 14.

Figure 13. Correlation analysis between GF-3 LAI and MODIS LAI. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

Figure 14. Correlation analysis between Lutan LAI and MODIS LAI. (a – c) are from the jointing, booting, and heading stages of rice, respectively. (d – f) are from the jointing, booting, and heading stages of maize, respectively.

As can be seen from Figures 13 and 14, GF-3 LAI and Lutan LAI at the three growth stages of rice showed good consistency with MODIS LAI. Based on the GF-3 SAR images, the fitting accuracy was, in descending order, jointing stage, heading stage, and booting stage; based on the Lutan SAR images, it was heading stage, jointing stage, and booting stage. GF-3 LAI and Lutan LAI at the three growth stages of maize also showed good consistency with MODIS LAI. Based on the GF-3 SAR images, the fitting accuracy was, in descending order, jointing stage, heading stage, and booting stage; based on the Lutan SAR images, it was jointing stage, heading stage, and booting stage. This shows that MODIS LAI products can reflect the growth status of crops when high accuracy is not required.

3.7. Mapping of LAI inversion by MWCMLAI-Net

Based on the verified reliability of each inversion model, MWCMLAI-Net was employed to generate spatial distribution maps of maize and rice LAI at the three growth stages, as shown in Figures 15 and 16.

Figure 15. Based on GF-3 SAR image, LAI images inverted by MWCMLAI-Net. (a – c) are from the jointing, booting, and heading stages, respectively.

Figure 16. Based on Lutan SAR image, LAI images inverted by MWCMLAI-Net. (a – c) are from the jointing, booting, and heading stages, respectively.

As can be seen from Figure 15, based on GF-3 SAR images, LAI values of maize in the jointing stage ranged from 0.2 to 1.5 and were mostly between 0 and 1.3, with an average value of 1.22. At the booting stage, LAI values of maize ranged from 2 to 3 and were mostly concentrated between 2.0 and 2.5, with an average value of 2.35. At the heading stage, LAI values of maize ranged from 4 to 6 and were mostly concentrated between 4 and 5, with an average value of 4.35. At the jointing stage, LAI values of rice ranged from 0 to 1.5 and were mostly between 0.8 and 1.0, with an average value of 0.86. At the booting stage, LAI values of rice ranged from 2 to 3 and were mostly between 2.0 and 2.3, with an average value of 2.15. At the heading stage, LAI values of rice ranged from 4 to 5 and were mostly concentrated between 4.0 and 4.6, with an average value of 4.35.

As shown in Figure 16, based on Lutan SAR images, LAI values of maize in the jointing stage ranged from 0.2 to 1.5 and were mostly between 0.8 and 1.4, with an average value of 1.26. At the booting stage, LAI values of maize ranged from 2 to 3 and were mostly between 2.00 and 2.78, with an average value of 2.65. At the heading stage, LAI values of maize ranged from 4 to 5 and were mostly concentrated between 4.5 and 5, with an average value of 4.78. At the jointing stage, LAI values of rice ranged from 0 to 1.5 and were mostly between 0.6 and 1.3, with an average value of 0.72. At the booting stage, LAI values of rice ranged from 2 to 3 and were mostly between 2.0 and 2.2, with an average value of 2.13. At the heading stage, LAI values of rice ranged from 4 to 5 and were mostly concentrated between 3.8 and 4.3, with an average value of 4.12.

Therefore, the changes in LAI over the three growth periods were consistent with the changes in actual farmland LAI in the core study area over time. The larger LAI values of maize were mainly concentrated in the northeast, while the larger LAI values of rice were mainly concentrated in the southwest, because the planting time there was earlier than in other regions, so the plants grew more luxuriantly. GF-3 SAR and Lutan SAR remote sensing data have strong LAI retrieval ability, and their high temporal and spatial resolution can make them important data sources for agricultural remote sensing research in place of traditional medium-resolution remote sensing data.

Over the last decades, progress in satellite SAR data has provided an environment for further research on crop biophysical parameters. Deep learning has shown significant potential in broad areas, and the use of these methods to solve remote sensing problems has grown recently. The MWCMLAI-Net inverted LAI fitting accuracies of maize and rice for the three fertility periods were better than those of the other models, with R2 above 0.8516 and RMSE below 0.3999 m2/m2. Compared to other deep learning methods (Castro-Valdecantos et al. 2022; Liu et al. 2021), the MWCMLAI-Net model showed the best performance and simplicity in data preparation and application. The MWCMLAI-Net model takes the output of the convolutional layers as the input of the pooling layer for dimensionality reduction and effectively captures the informative features of the data. Dropout regularization is introduced between the input of the fully connected block and the first fully connected layer to effectively prevent overfitting of the model.

Deep learning has already been considered a promising tool for solving different kinds of problems in agriculture. However, there are several limitations in the application of the proposed model. For instance, in this study the parameter calibration process is performed manually, so the obtained parameters might not be optimal. In the future, automatic parameter optimization could be strengthened through a combination of different global optimization algorithms.

4. Conclusions

In this study, we applied the MWCMLAI-Net model to GF-3 and Lutan SAR images to invert the maize and rice LAI for three fertility periods in Xiangfu District, east of Kaifeng City, Henan Province, and validated the results using the measured LAI data. The experimental results show that:

  1. The polarization-decomposed radar vegetation index, the backscattering coefficient extracted by the modified water cloud model (MWCM), and the LAI obtained from the MWCM inversion each fit the measured LAI with relatively low accuracy on their own; they are therefore used as the inputs to the MWCMLAI-Net model.

  2. The MWCMLAI-Net model inverted LAI fitting accuracies of maize and rice for the three fertility periods were better than those of the other models, with R2 above 0.8516 and RMSE below 0.3999 m2/m2. In constructing the MWCMLAI-Net model, the convolutional, pooling, and fully connected layers are combined to extract backscattering features, reduce dimensionality, and perform nonlinear regression analysis. Owing to forward propagation and backpropagation in the network, it learns the data features better and predicts reliably.

  3. In addition, the noise immunity of the network model was verified, and the results show that it still provides high-precision LAI inversion despite the noise.

Considering these results and the ability to acquire SAR data regardless of cloud cover or illumination, both C- and L-band sensors can act as reliable data sources to monitor the productivity of maize and rice crops. Although the data collected during the experiment were extensive, further validation of these models is necessary to ensure that the coefficients are robust and applicable spatially and temporally.

Acknowledgement

This study was partially funded by the Henan Remote Sensing Institute. The authors would like to thank the field crews who participated in the experiment.

Data availability statement

The codes are available from the corresponding author, upon reasonable request.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This research was funded by the 2016 National Key Research and Development Plan [grant number 2016YFC0803103], Research on Key Technology of Agricultural Remote Sensing Monitoring [grant number 12210243], and the Henan Provincial University Innovation Team Support Plan [grant number 14IRTSTHN026].

References

  • Attema, E. P. W., and F. T. Ulaby. 1978. “Vegetation Modeled as a Water Cloud.” Radio Science 13 (2): 357–364. https://doi.org/10.1029/RS013i002p00357.
  • Beriaux, E., C. Lucau-Danila, E. Auquiere, and P. Defourny. 2013. “Multiyear Independent Validation of the Water Cloud Model for Retrieving Maize Leaf Area Index from SAR Time Series.” International Journal of Remote Sensing 34 (12): 4156–4181. https://doi.org/10.1080/01431161.2013.772676.
  • Castro-Valdecantos, P., O. E. Apolo-Apolo, M. Pérez-Ruiz, and G. Egea. 2022. “Leaf Area Index Estimations by Deep Learning Models Using RGB Images and Data Fusion in Maize.” Precision Agriculture 23: 1949–1966. https://doi.org/10.1007/s11119-022-09940-0.
  • Chakraborty, M., K. R. Manjunath, and S. Panigrahy. 2005. “Rice Crop Parameter Retrieval Using Multi-Temporal, Multi-Incidence Angle Radarsat SAR Data.” ISPRS Journal of Photogrammetry and Remote Sensing 59 (5): 310–322. https://doi.org/10.1016/j.isprsjprs.2005.05.001.
  • Digvijay, B., S. D. Santanu, and L. Guanghui. 2018. “Complexity of Training ReLU Neural Network.” Discrete Optimization, https://doi.org/10.48550/arXiv.1809.10787.
  • Freeman, A., and S. L. Durden. 1998. “A Three-Component Scattering Model for Polarimetric SAR Data.” IEEE Transactions on Geoscience and Remote Sensing 36 (3): 963–973. https://doi.org/10.1109/36.673687.
  • Graham, A. J., and R. Harris. 2003. “Extracting Biophysical Parameters from Remotely Sensed Radar Data: A Review of the Water Cloud Model.” Progress in Physical Geography: Earth and Environment 27 (2): 217–229. https://doi.org/10.1191/0309133303pp378ra.
  • He, Z., S. Li, Y. Wang, Y. Hu, and F. Chen. 2019. “Assessment of Leaf Area Index of Rice for a Growing Cycle Using Multi-Temporal C-Band PolSAR Datasets.” Remote Sensing 11 (22): 2640. https://doi.org/10.3390/rs11222640.
  • Jin, H. A., A. N. Li, W. X. Xu, Z. Q. Xiao, J. Y. Jiang, and H. Z. Xue. 2019. “Evaluation of Topographic Effects on Multiscale Leaf Area Index Estimation Using Remotely Sensed Observations from Multiple Sensors.” ISPRS Journal of Photogrammetry and Remote Sensing 154: 176–188. https://doi.org/10.1016/j.isprsjprs.2019.06.008.
  • Jochem, V., M. Zbyněk, V. Christiaan, C. V. Gustau, G. E. Jean-Philippe, L. Philip, N. Peter, and M. Jose. 2019. “Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods.” Surveys in Geophysics 40: 589–629. https://doi.org/10.1007/s10712-018-9478-y.
  • John, P., and H. N. Ren. 2012. “Stein's method and the Laplace distribution.” Mathematics, https://doi.org/10.48550/arXiv.1210.5775.
  • Li, W. H., and X. H. Mu. 2021. “Using Fractal Dimension to Correct Clumping Effect in Leaf Area Index Measurement by Digital Cover Photography.” Agricultural and Forest Meteorology 311: 108695. https://doi.org/10.1016/j.agrformet.2021.108695.
  • Liu, X. N., and B. Cheng. 2012. “Polarimetric SAR speckle filtering for high-resolution SAR images using Radarsat-2 POLSAR SLC data.” Computer Vision in Remote Sensing, International Conference on IEEE. https://doi.org/10.1109/CVRS.2012.6421284.
  • Liu, S. B., X. L. Jin, C. W. Nie, S. Y. Wang, X. Yu, M. H. Cheng, M. C. Shao, et al. 2021. “Estimating Leaf Area Index Using Unmanned Aerial Vehicle Data: Shallow vs. Deep Machine Learning Algorithms.” Plant Physiology 187 (3): 1551–1576. https://doi.org/10.1093/plphys/kiab322.
  • Liu, Y., R. G. Liu, and J. M. Chen. 2012. “Retrospective Retrieval of Long-Term Consistent Global Leaf Area Index (1981–2011) from Combined AVHRR and MODIS Data.” Journal of Geophysical Research: Biogeosciences 117 (G4): G04003. https://doi.org/10.1029/2012jg002084.
  • Ma, H., and S. L. Liang. 2022. “Development of the GLASS 250-m Leaf Area Index Product (Version 6) from MODIS Data Using the Bidirectional LSTM Deep Learning Model.” Remote Sensing of Environment 273: 112985. https://doi.org/10.1016/j.rse.2022.112985.
  • Mehdi, H., H. Mc Nairn, A. Merzouki, and P. Anna. 2015. “Estimation of Leaf Area Index (LAI) in Corn and Soybeans Using Multi-Polarization C- and L-Band Radar Data.” Remote Sensing of Environment 170: 77–89. https://doi.org/10.1016/j.rse.2015.09.002.
  • Pasolli, L., S. Asam, M. Castelli, L. Bruzzone, G. Wohlfahrt, M. Zebisch, and C. Notarnicola. 2015. “Retrieval of Leaf Area Index in Mountain Grasslands in the Alps from MODIS Satellite Imagery.” Remote Sensing of Environment 165: 159–174. https://doi.org/10.1016/j.rse.2015.04.027.
  • Prevot, L., I. Champion, and G. Guyot. 1993. “Estimating Surface Soil Moisture and Leaf Area Index of a Wheat Canopy Using a Dual-Frequency (C and X Bands) Scatterometer.” Remote Sensing of Environment 46 (3): 331–339. https://doi.org/10.1016/0034-4257(93)90053-Z.
  • Shao, Y., X. T. Fan, H. Liu, J. H. Xiao, S. Ross, B. Brisco, R. Brown, and G. Staples. 2001. “Rice Monitoring and Production Estimation Using Multitemporal RADARSAT.” Remote Sensing of Environment 76 (3): 310–325. https://doi.org/10.1016/S0034-4257(00)00212-1.
  • Shen, G. Z., J. J. Liao, H. D. Guo, and J. Liu. 2015. “Poyang Lake Wetland Vegetation Biomass Inversion Using Polarimetric RADAR-SAT-2 Synthetic Aperture Radar Data.” Journal of Applied Remote Sensing 9: 096077. https://doi.org/10.1117/1.JRS.9.096077.
  • Shubham, K. S., P. Rajendra, K. S. Prashant, A. Y. Suraj, P. Y. Vijay, and S. Jyoti. 2023. “Incorporation of First-Order Backscattered Power in Water Cloud Model for Improving the Leaf Area Index and Soil Moisture Retrieval Using Dual-Polarized Sentinel-1 SAR Data.” Remote Sensing of Environment 296: 113756. https://doi.org/10.1016/j.rse.2023.113756.
  • Stephen, R. H., T. Ralf, P. Marion, C. T. Edgar, N. Reuben, and M. E. Robert. 2015. “The Relationship Between Leaf Area Index and Microclimate in Tropical Forest and oil Palm Plantation: Forest Disturbance Drives Changes in Microclimate.” Agricultural and Forest Meteorology 201: 187–195. https://doi.org/10.1016/j.agrformet.2014.11.010.
  • Suraj, A. Yadav, P. Rajendra, P. Y. Vijay, V. Bhagyashree, K. S. Shubham, S. Jyoti, and K. S. Prashant. 2022. “Far-field Bistatic Scattering Simulation for Rice Crop Biophysical Parameters Retrieval Using Modified Radiative Transfer Model at X- and C-Band.” Remote Sensing of Environment 272: 112959. https://doi.org/10.1016/j.rse.2022.112959.
  • Thota, S., K. Dheeraj, S. S. Hari, and Parul Patel. 2018. “Advances in Radar Remote Sensing of Agricultural Crops: A Review.” International Journal on Advanced Science, Engineering and Information Technology 8: 1126. https://doi.org/10.18517/ijaseit.8.4.5797.
  • Wang, R., J. M. Chen, L. M. He, J. Liu, J. L. Shang, J. G. Liu, and T. F. Dong. 2023. “A Novel Semi-Empirical Model for Crop Leaf Area Index Retrieval Using SAR co- and Cross-Polarizations.” Remote Sensing of Environment 296: 113727. https://doi.org/10.1016/j.rse.2023.113727.
  • Wei, X. Q., X. F. Gu, Q. Y. Meng, T. Yu, X. Zhou, Z. Wei, K. Jia, and C. M. Wang. 2017. “Leaf Area Index Estimation Using Chinese GF-1 Wide Field View Data in an Agriculture Region.” Sensors 17 (7): 1593. https://doi.org/10.3390/s17071593.
  • Xiao, Z. Q., T. T. Wang, S. L. Liang, and R. Sun. 2016. “Estimating the Fractional Vegetation Cover from GLASS Leaf Area Index Product.” Remote Sensing 8 (4): 337. https://doi.org/10.3390/rs8040337.
  • Yan, G. J., R. H. Hu, J. H. Luo, M. Weiss, H. L. Jiang, X. H. Mu, D. H. Xie, and W. M. Zhang. 2019. “Review of Indirect Optical Measurements of Leaf Area Index: Recent Advances, Challenges, and Perspectives.” Agricultural and Forest Meteorology 265: 390–411. https://doi.org/10.1016/j.agrformet.2018.11.033.
  • Yang, Z., K. Li, Y. Shao, B. Brisco, and L. Liu. 2016. “Estimation of Paddy Rice Variables with a Modified Water Cloud Model and Improved Polarimetric Decomposition Using Multi-Temporal RADARSAT-2 Images.” Remote Sensing 8: 878. https://doi.org/10.3390/rs8100878.
  • Yarotsky, D. 2016. “Error Bounds for Approximations with Deep ReLU Networks.” Neural Networks 94: 103–114. https://doi.org/10.1016/j.neunet.2017.07.002.