
Deep learning-based super-resolution for harmful algal bloom monitoring of inland water

Article: 2249753 | Received 07 Feb 2023, Accepted 13 Aug 2023, Published online: 01 Sep 2023

ABSTRACT

Harmful algal blooms (HABs) frequently occur in inland waters, making it challenging to comprehend the spatiotemporal features of algal dynamics. Recently, remote sensing has been applied to effectively detect algal spatiotemporal behavior in expansive water bodies. However, limited image sensor resolution can hinder understanding of the spatiotemporal features of relatively small water bodies, and few studies have enhanced the resolution of remote sensing images to investigate inland water quality. Therefore, this study applied deep learning-based super-resolution (SR) to transform 20 m satellite imagery into 5 m imagery comparable to airborne data. After performing atmospheric correction on the acquired images, we adopted SR methodologies using a super-resolution convolutional neural network (SRCNN) and a super-resolution generative adversarial network (SRGAN) to estimate the chlorophyll-a (Chl-a) concentration in the Geum River of South Korea. Both methods generated SR images of water reflectance at 665, 705, and 740 nm. A two-band ratio algorithm using the 665 and 705 nm wavelengths was then applied to the reflectance images to estimate Chl-a concentration maps. The SRCNN model outperformed SRGAN and bicubic interpolation, with a peak signal-to-noise ratio (PSNR), mean square error (MSE), and structural similarity index measure (SSIM) of 24.47 dB, 0.0074, and 0.74, respectively, for the validation dataset. SR maps from the SRCNN provided more detailed spatial information on Chl-a in the Geum River than the original satellite images. These findings demonstrate the potential of deep learning-based SR algorithms to provide further information on algal dynamics for inland water management with remote sensing imagery.

1. Introduction

Harmful algal bloom (HAB) phenomena degrade inland water quality and aquatic ecosystems by releasing toxic and odorous compounds, such as microcystin and 2-methylisoborneol (Baek et al. Citation2022; Gerber Citation1977). Recently, the size and duration of algal blooms have increased owing to rapid urbanization, global warming, and climate change, in connection with increasing nutrient loading and warmer water (O’Neil et al. Citation2012). Previous studies have indicated that water monitoring campaigns providing quantitative and qualitative information on algal blooms should be conducted to understand HABs and mitigate the degradation of inland water quality (Jang et al. Citation2022). However, traditional in situ monitoring struggles to capture the spatiotemporal distribution of algal dynamics because it is conducted at specific times and locations in rivers and reservoirs (Park, Tae Kim, and Hyoung Lee Citation2020). Therefore, advanced monitoring of spatiotemporal variations is vital for preventing HABs in water quality management.

Remote sensing from airborne and satellite platforms has been introduced to acquire spatiotemporal features of algal dynamics in expansive water bodies (Pyo et al. Citation2022). These multidimensional data capture HAB phenomena through water spectral reflectance, from which the chlorophyll-a (Chl-a) concentration can be estimated (Hong et al. Citation2021). Lin et al. (Citation2018) utilized satellite imagery to identify cyanobacterial blooms in a eutrophic lake. He et al. (Citation2020) estimated Chl-a concentration using Chl-a retrieval algorithms with satellite-derived reflectance. However, remote sensing images face challenges from imaging sensor limitations in spatial, spectral, and temporal resolution (Yang et al. Citation2015). Satellite remote sensing is utilized for a wide range of environmental monitoring but generally provides low spectral and spatial resolution, making it difficult to detect features in relatively small regions (Tao et al. Citation2019).

Super-resolution (SR) technology enhances image quality by reconstructing high-resolution images from low-resolution imagery, providing complementary spatial information (Yang et al. Citation2019). Recently, SR algorithms have progressed with deep learning based on convolutional neural network (CNN) and generative adversarial network (GAN) models (Dong et al. Citation2015; Ledig et al. Citation2017). These deep learning algorithms provide high-resolution imagery that addresses the low spatial resolution of satellite data (Yang et al. Citation2015). SR algorithms have also been applied to obtain optical properties associated with water quality. Zhang and Huang (Citation2011) used a machine learning method to improve the spatial resolution of satellite visible bands. Su et al. (Citation2021) utilized a CNN-based model to super-resolve subsurface temperature imagery on a global scale using satellite remote sensing data. Although remote sensing with SR techniques can be useful for environmental monitoring, relatively few studies have applied SR algorithms to HAB monitoring.

Here, we propose deep learning algorithms for the SR of satellite imagery of the Geum River, South Korea. Our study adopted three SR methods: bicubic interpolation, a super-resolution convolutional neural network (SRCNN), and a super-resolution generative adversarial network (SRGAN). Each method generated SR imagery from which the Chl-a concentration could be estimated using inland water reflectance. The main objectives of our study were to (1) conduct airborne and satellite remote sensing to acquire the spatiotemporal features of algal dynamics in an expansive water body, (2) perform single-image super-resolution to generate water reflectance and compare the performance of the SR methods, and (3) acquire a fine-resolution map of the Chl-a distribution using a bio-optical algorithm and SR imagery.

2. Materials and methods

2.1. Study area

The Geum River is the third largest river in the mid-western region of the Republic of Korea. Figure 1 shows the Geum River basin, which reaches the neighboring sea around the Korean Peninsula (N 36.35°–36.52°, E 127.48°–127.60°). It supplies water to surrounding cities, such as those in Chungcheong province, for municipal, domestic, agricultural, and industrial use. The basin area and length of the Geum River are 9,912.15 km2 and 360.70 km, respectively (Lee et al. Citation2018). There are nine intake stations and several industrial complexes along the mainstream of the Geum River. Moreover, this region is dominated by a monsoon climate associated with intense rainfall (Kim et al. Citation2022). Over the last three decades, mean annual temperature and precipitation have been recorded as 10.9°C and 1,295 mm, respectively, with rainfall concentrated from June to August (Choi et al. Citation2021). For this reason, the Geum River experiences annual HABs due to the inflow of non-point and point sources from intensive runoff and industrial complexes (Lee et al. Citation2016). In this study, we chose three representative regions along the Geum River basin, as shown in Figure 1.

Figure 1. Location of the Geum River: (a) downstream, (b) midstream, and (c) upstream of the Geum River in South Korea.

2.2. Research overview

For HABs monitoring, we used remote sensing imagery with enhanced resolution and water reflectance to provide further spatiotemporal information on algal dynamics. We applied deep learning-based SR to estimate the Chl-a concentration in three steps: (1) input data preparation (Figure 2a), (2) deep learning-based SR training (Figure 2b), and (3) estimation of Chl-a concentration (Figure 2c). Two monitoring campaigns were conducted using an airborne platform to measure hyperspectral high-resolution (HR) images. Additionally, we collected Sentinel-2 multispectral low-resolution (LR) satellite images. The hyperspectral and multispectral images were atmospherically corrected with dedicated software to reduce effects related to the adjacency effect, heterogeneous land surfaces, water vapor, and aerosols. Subsequently, water surface reflectance bands, including B04 (665 nm), B05 (705 nm), and B06 (740 nm), were prepared as multispectral input data for the deep learning models. For efficient SR training, the input data were normalized. The SRCNN and SRGAN were then applied to perform single-image SR from LR to HR images. The Chl-a concentration was estimated from the generated SR reflectance imagery by applying a bio-optical algorithm that uses spectral information related to Chl-a biomass. Finally, SR Chl-a maps were generated and compared to assess the feasibility of deep learning-based super-resolution for water monitoring.

Figure 2. Research flowchart for achieving SR from LR imagery using deep learning-based SR algorithms to acquire a fine-resolution map of the Chl-a distribution: (a) image processing to prepare LR and HR input data; (b) application of deep learning-based SR, including the SRCNN and SRGAN models; (c) performance evaluation of the SR images generated by the SRCNN and SRGAN and generation of the Chl-a distribution maps.

2.3. Data acquisition

In this study, we conducted two monitoring campaigns (airborne and satellite) and collected hyperspectral and multispectral images of the Geum River on 30 September 2019 and 24 October 2020. Airborne hyperspectral imagery was captured using an AISA Eagle sensor (SPECIM Inc., Finland) installed perpendicularly on a Cessna 208 multipurpose aircraft (Figure S1). The airborne campaigns were performed under specific conditions, including a flying altitude of 3 km, a monitoring window of 3 h starting at 8:30 AM, and fair weather with low wind speed. The spectral range of the hyperspectral imagery was 400–970 nm, with spectral and spatial resolutions of 4 nm and 2 m, respectively (Table S1). The hyperspectral dataset comprised a total of 47 sections across the Geum River monitoring campaigns. The multispectral images were Sentinel-2 Level-1C products downloaded from the Sentinels Scientific Data Hub (ESA, https://scihub.copernicus.eu/). The Sentinel-2 satellites orbit at a mean altitude of 786 km, providing continuous remote sensing imagery with a five-day revisit frequency (Lanorte et al. Citation2019). The multispectral instrument measures 13 optical bands ranging from 443 to 2,290 nm at spatial resolutions of 10 m, 20 m, and 60 m (Tables S1 and S2). We collected two multispectral images of the Geum River with 20 m resolution, with cloud cover percentages of 0.93% and 0.00%, respectively.

2.4. Airborne and satellite image preprocessing

Image processing comprised geometric and atmospheric corrections of the airborne images. Geometric correction was applied to the hyperspectral imagery to reduce the geometric distortion of remote sensing images (Luan et al. Citation2014). Atmospheric correction was then applied to eliminate atmospheric and illumination effects from the hyperspectral imagery using the Atmospheric and Topographic Correction 4 (ATCOR 4) software (Tuominen and Lipping Citation2011). ATCOR 4 calculates the radiative transfer function by adopting the Moderate Resolution Atmospheric Transmission version 6 (MODTRAN6) model, computing the optical parameters according to the weather and observation conditions (Richter and Schläpfer Citation2002). These images were treated as HR imagery and were resampled to a spatial resolution of 5 m using a weighted average of pixels.
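As an illustration, the aggregation from the native 2 m airborne pixels to the 5 m HR grid can be performed with average resampling. The sketch below uses rasterio; the file name is hypothetical and the study's exact resampling tooling is not specified.

```python
import rasterio
from rasterio.enums import Resampling

# Aggregate the 2 m airborne hyperspectral mosaic to a 5 m HR grid using
# pixel-averaged resampling ("weighted average of pixels" in the text).
# The file name below is hypothetical.
with rasterio.open("airborne_atcor4_2m.tif") as src:
    scale = 2.0 / 5.0  # 2 m pixels -> 5 m pixels
    hr_5m = src.read(
        out_shape=(
            src.count,
            int(src.height * scale),
            int(src.width * scale),
        ),
        resampling=Resampling.average,  # average of contributing source pixels
    )
```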

The Sentinel-2 reflectance data were distributed as Level-1C products containing top-of-atmosphere (TOA) reflectance, which is influenced by atmospheric effects including aerosol particles, water vapor, ozone, and clouds (Nazeer et al. Citation2021). The TOA reflectance was converted to bottom-of-atmosphere (BOA) reflectance using the Sen2Cor processor within the Sentinel Application Platform (SNAP) software to reduce atmospheric effects (Mueller-Wilm, Devignot, and Pessiot Citation2019). The Sen2Cor processor supports terrain, cirrus, and atmospheric correction as well as scene classification of Sentinel-2 Level-1C products (Main-Knorn et al. Citation2017). Finally, water pixels were separated from non-water pixels in the imagery using water indices (Mondejar and Tongco Citation2019).
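A minimal sketch of such water masking is shown below. It assumes the normalized difference water index (NDWI) computed from green and NIR bands with a zero threshold; the specific index and threshold used in the study are not stated, so these are illustrative choices.

```python
import numpy as np

def water_mask(green, nir, threshold=0.0):
    """Separate water from non-water pixels with a water index.

    Sketch using NDWI = (Green - NIR) / (Green + NIR); the exact index
    and threshold used in the study are assumptions.
    """
    ndwi = (green - nir) / np.maximum(green + nir, 1e-6)  # guard zero division
    return ndwi > threshold  # True for water pixels

# Usage: mask non-water pixels in a reflectance band before SR training,
# e.g. with Sentinel-2 green (B03) and NIR (B08) BOA arrays:
# b04_water = np.where(water_mask(b03, b08), b04, np.nan)
```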

2.5. Super-resolution of satellite imagery using deep-learning models

Our study applied deep learning models to perform super-resolution of the satellite imagery. The SR algorithms adopted CNN- and GAN-based models. CNN models are widely used for multidimensional imagery, extracting meaningful image features through forward and backward propagation during training (Naranjo-Torres et al. Citation2020). In a CNN, convolutional layers with kernels move along the input data and learn image features by updating weights and biases. The GAN was designed to generate new data using two neural networks competing with each other (Goodfellow et al. Citation2014): a generator that produces images and a discriminator that distinguishes real images from generated ones. In this study, we implemented the CNN-based SRCNN and GAN-based SRGAN models to enhance the resolution of multidimensional imagery containing the water reflectance bands, from which Chl-a concentration maps were calculated. Prior to the simulation of the deep learning models, we applied max normalization to rescale the dataset to a range of zero to one. The input data were then divided into training and validation sets of approximately 60% and 40%, corresponding to the two monitoring campaigns. The remote sensing data were then fed into the deep learning models to increase the spatial resolution of the LR imagery. This study used the Python 3.6 programming language and the TensorFlow API version 2.5.0 for the deep learning simulations. Our models were run on an Intel® Core i9-11900K 3.50 GHz processor, an NVIDIA GeForce RTX 3090 graphics card, and 128 GB of DDR4 random-access memory.
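The preprocessing described above reduces to two small operations. The sketch below shows a plausible implementation: the patch array shapes and the campaign-wise 60/40 assignment follow the text, while the function names are ours.

```python
import numpy as np

def max_normalize(x):
    """Max normalization: rescale reflectance to the range [0, 1]."""
    return x / np.max(x)

def prepare_campaign(lr_patches, hr_patches):
    """Normalize co-registered LR (20 m) / HR (5 m) patch arrays.

    The study assigned the 2019 campaign to training and the 2020 campaign
    to validation (roughly a 60/40 split); patch arrays are assumed to have
    shape (n_patches, height, width, 3 bands).
    """
    return max_normalize(lr_patches), max_normalize(hr_patches)
```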

2.5.1 Super-resolution CNN (SRCNN)

Our study implemented super-resolution using the SRCNN, a well-established algorithm for enhancing image resolution (Ahn, Kang, and Sohn Citation2018; Kim, Kwon Lee, and Mu Lee Citation2016). The SRCNN was proposed as a type of CNN for single-image super-resolution: it directly learns an end-to-end mapping between LR images as input and enhanced-resolution images as output (Dong et al. Citation2015). We designed a stack of satellite imagery as input data so that the SRCNN model could learn the features of water reflectance (Figure 3a). Feature vectors were extracted from the multidimensional LR imagery using kernels of size 5 × 5, 32 in number. To extract the water reflectance features, residual (ResNet-style) blocks containing convolutional layers, a batch normalization layer, and a PReLU activation function were used in the SRCNN structure (Kaiming et al. Citation2016). The model then up-sampled the LR imagery using convolutional layers and an up-sampling layer with a 4× upscaling factor to increase image quality. Finally, our model reconstructed high-quality images with the water reflectance bands at increased resolution, providing further information for remote sensing. To minimize the loss between the SR and HR images, the SRCNN model used a mean square error (MSE) loss function during training:

(1) $L(\theta) = \frac{1}{n}\sum_{i=1}^{n}\left\| F(Y_i;\theta) - X_i \right\|^{2}$

Figure 3. Description of the SRCNN and SRGAN models: (a) the SRCNN model; (b) and (c) the generator and discriminator in the SRGAN model, respectively.

where n is the number of training samples, F(Y_i; θ) is the SR image generated by the network with parameters θ from the LR satellite input Y_i, and X_i is the corresponding HR image (Dong et al. Citation2015). The loss was minimized using stochastic gradient descent with standard backpropagation (Leibe et al. Citation2016), and the Adam optimizer was used to update the weights, reducing the SRCNN model error between the SR and HR images.
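To make the architecture concrete, the following is a minimal TensorFlow/Keras sketch of an SRCNN-style network matching the description above (5 × 5 kernels with 32 filters, residual blocks with batch normalization and PReLU, 4× up-sampling, MSE loss with Adam). The number of residual blocks and the choice of up-sampling operator are assumptions not stated in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_srcnn(channels=3):
    """Sketch of an SRCNN-style network for 3-band reflectance patches."""
    inputs = tf.keras.Input(shape=(None, None, channels))    # LR reflectance stack
    x = layers.Conv2D(32, 5, padding="same")(inputs)         # 5x5 kernels, 32 filters
    x = layers.PReLU(shared_axes=[1, 2])(x)
    for _ in range(4):                                       # residual feature extraction
        skip = x
        x = layers.Conv2D(32, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.PReLU(shared_axes=[1, 2])(x)
        x = layers.Add()([x, skip])
    x = layers.UpSampling2D(size=4)(x)                       # 20 m -> 5 m (4x factor)
    x = layers.Conv2D(32, 3, padding="same")(x)
    x = layers.PReLU(shared_axes=[1, 2])(x)
    outputs = layers.Conv2D(channels, 5, padding="same")(x)  # SR reflectance bands
    return tf.keras.Model(inputs, outputs)

model = build_srcnn()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")  # Eq. (1)
```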

2.5.2 Super-Resolution Generative Adversarial Network (SRGAN)

The SRGAN model is a type of GAN that enhances the resolution of LR imagery by combining two neural networks, a generator and a discriminator, which respectively generate SR images and distinguish between SR and HR images (Ledig et al. Citation2017). The SRGAN model is shown in Figure 3b,c. The generator network, consisting of ResNet blocks, produces an SR image from the LR satellite image, while the discriminator network distinguishes between SR and HR images. The generator is thus trained to produce realistic SR images that deceive the discriminator (Goodfellow et al. Citation2014). The SRGAN is represented by the value function V of the two networks, which is evaluated over the SR images produced by the generator and judged by the discriminator:

(2) $\min_{G}\max_{D} V(D, G) = \mathbb{E}_{x}\left[\log D(x)\right] + \mathbb{E}_{z}\left[\log\left(1 - D(G(z))\right)\right]$

where G is the generator; D is the discriminator, trained to maximize log D(x) for HR images x; G(z) is the generator output for an LR input z, trained to minimize the pixel-wise error detected by D; E_x is the expected value over all HR images; and E_z is the expected value over all inputs z to the generator.

In our study, we defined the total loss function l^SR as the sum of a content loss and an adversarial loss (Equation 3). The content loss l^SR_MSE is based on the MSE value, the most widely applied optimization target for SR (Equation 4). The adversarial loss l^SR_Gen drives the generator to produce realistic SR images that deceive the discriminator (Equation 5). The SRGAN loss is calculated with the following equations:

(3) $l^{SR} = l_{MSE}^{SR} + 10^{-3}\, l_{Gen}^{SR}$
(4) $l_{MSE}^{SR} = \frac{1}{r^{2}WH}\sum_{x=1}^{rW}\sum_{y=1}^{rH}\left(I_{x,y}^{HR} - G_{\theta_G}\!\left(I^{LR}\right)_{x,y}\right)^{2}$
(5) $l_{Gen}^{SR} = \frac{1}{n}\sum_{i=1}^{n}\left(1 - D_{\theta_D}\!\left(G_{\theta_G}\!\left(I^{LR}\right)\right)\right)$

where the total loss l^SR combines the content loss l^SR_MSE and the adversarial loss l^SR_Gen; I_{x,y} is the reflectance value at pixel (x, y) of the HR image I^HR or the LR image I^LR; W and H are the image width and height in pixels; r is the 4× upscaling factor; θ_G and θ_D denote the weights and biases of the generator and discriminator; and n is the number of image samples.
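A minimal sketch of these loss terms in TensorFlow follows. The generator loss mirrors Equations (3)–(5) as written above, and the discriminator loss uses the standard binary cross-entropy form of Equation (2); the function names and the exact discriminator objective are our assumptions.

```python
import tensorflow as tf

def srgan_generator_loss(sr, hr, disc_sr):
    """Generator loss of Eq. (3): content (MSE) plus weighted adversarial term.

    sr, hr  : generated SR and reference HR image batches in [0, 1]
    disc_sr : discriminator probabilities D(G(I_LR)) for the SR batch
    """
    l_mse = tf.reduce_mean(tf.square(hr - sr))  # content loss, Eq. (4)
    l_gen = tf.reduce_mean(1.0 - disc_sr)       # adversarial loss, Eq. (5)
    return l_mse + 1e-3 * l_gen                 # total loss, Eq. (3)

def srgan_discriminator_loss(disc_hr, disc_sr):
    # Binary cross-entropy form of the value function in Eq. (2):
    # label HR images as real (1) and generated SR images as fake (0).
    bce = tf.keras.losses.BinaryCrossentropy()
    return bce(tf.ones_like(disc_hr), disc_hr) + bce(tf.zeros_like(disc_sr), disc_sr)
```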

2.6. Generation of super-resolution map of Chlorophyll-a concentration

This study generated spatial distribution maps from the SR imagery using a bio-optical algorithm derived from the optical properties of Chl-a to detect and estimate algal blooms in the surface water system (Pyo et al. Citation2016). The SR algorithms generated enhanced-resolution imagery to which the bio-optical algorithm was applied, determining pigment concentration from the apparent optical properties of inland water reflectance (Mishra, Schaeffer, and Keith Citation2014). We applied a two-band ratio algorithm, a typical semi-empirical algorithm for Chl-a estimation that uses the 665 nm and 705 nm spectral bands related to Chl-a concentration (Gitelson et al. Citation2009; Moses et al. Citation2009). The Chl-a concentration can be estimated with the following band-ratio relationship:

(6) $\text{Chl-a}\left[\mathrm{mg/m^{3}}\right] \propto \frac{R_{rs}(\lambda_2)}{R_{rs}(\lambda_1)}$

where the Chl-a concentration [mg/m3] is proportional to the ratio of the remote sensing reflectance R_rs [sr−1] at λ2 and λ1, corresponding to B05 (705 nm) and B04 (665 nm), respectively. This study produced Chl-a concentration ratio maps using SR imagery containing the spectral information associated with Chl-a.
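Given SR reflectance arrays for B04 and B05, the ratio map of Equation (6) is a single array operation. The sketch below assumes NaN-masked non-water pixels and guards against division by zero; the helper name is hypothetical.

```python
import numpy as np

def chla_ratio_map(b04, b05, eps=1e-6):
    """Two-band ratio proxy for Chl-a, Eq. (6): Rrs(705 nm) / Rrs(665 nm).

    b04, b05 : 2-D arrays of water reflectance at 665 nm and 705 nm;
    non-water pixels are assumed to be NaN.
    """
    ratio = b05 / np.maximum(b04, eps)  # guard against division by zero
    return np.where(np.isnan(b04) | np.isnan(b05), np.nan, ratio)
```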

2.7. Performance evaluation

We applied evaluation metrics to quantify image quality: the MSE, the peak signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM). Image quality between the HR and SR images was compared using the MSE and PSNR, which are widely applied indices (Sara, Akter, and Shorif Uddin Citation2019), while the SSIM compares the structural similarity of two images (Dou et al. Citation2020). The metrics were computed with the following equations:

(7) $\mathrm{MSE} = \frac{1}{M \times N \times O}\sum_{x=1}^{M}\sum_{y=1}^{N}\sum_{z=1}^{O}\left(I_{x,y,z}^{HR} - I_{x,y,z}^{SR}\right)^{2}$
(8) $\mathrm{PSNR} = 20\log_{10}\!\left(\frac{MAX_f}{\sqrt{\mathrm{MSE}}}\right)$
(9) $\mathrm{SSIM}\!\left(I^{SR}, I^{HR}\right) = \frac{\left(2\mu_{I^{SR}}\mu_{I^{HR}} + C_1\right)\left(2\sigma_{I^{SR}I^{HR}} + C_2\right)}{\left(\mu_{I^{SR}}^{2} + \mu_{I^{HR}}^{2} + C_1\right)\left(\sigma_{I^{SR}}^{2} + \sigma_{I^{HR}}^{2} + C_2\right)}$

where M and N indicate the numbers of rows and columns of the super-resolved image, O is the number of image channels, MAX_f indicates the peak signal level in the image data, μ_{I^SR} and μ_{I^HR} represent the mean values of the images I^SR and I^HR, σ_{I^SR I^HR} is their covariance, σ_{I^SR} and σ_{I^HR} are their standard deviations, and C_1 and C_2 are constants that avoid division by zero. In the absence of any difference between the SR and HR imagery, the MSE is zero and the PSNR is infinite.
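For reference, Equations (7) and (8) can be computed directly, and Equation (9) is available off the shelf. The sketch below assumes reflectance normalized to [0, 1], so MAX_f = 1.

```python
import numpy as np

def mse(hr, sr):
    # Eq. (7): mean squared error over rows (M), columns (N), and channels (O).
    return np.mean((hr - sr) ** 2)

def psnr(hr, sr, max_val=1.0):
    # Eq. (8); reflectance normalized to [0, 1] gives MAX_f = 1.
    m = mse(hr, sr)
    return np.inf if m == 0 else 20.0 * np.log10(max_val / np.sqrt(m))

# SSIM (Eq. 9) is available in scikit-image, for example:
# from skimage.metrics import structural_similarity
# ssim = structural_similarity(hr, sr, data_range=1.0, channel_axis=-1)
```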

3. Results and discussion

3.1. Spatial variability of water reflectance in the processed satellite and airborne data

We transformed the TOA reflectance to BOA reflectance using the Sen2Cor library, which applies atmospheric correction to the visible (VIS), near-infrared (NIR), and shortwave infrared (SWIR) bands to decrease atmospheric effects (Figures 4 and 5) (Louis et al. Citation2016). The TOA reflectance contains the spectral information recorded during the monitoring campaigns, comprising water reflectance as well as aerosol and gas molecule contributions, whereas the BOA product has been atmospherically corrected to remove effects from water vapor and aerosol optical thickness (Main-Knorn et al. Citation2017). The average reflectance decreased from TOA to BOA after atmospheric correction. In particular, the average reflectance of B01 (443 nm) and B02 (490 nm) was reduced by approximately 0.09 and 0.06, respectively (Figures 4 and 5); these bands are strongly affected by aerosols and gaseous molecules (Kokhanovsky Citation2008). However, the average reflectance in the B09 (945 nm) region increased slightly from TOA to BOA because wavelengths between 935 nm and 955 nm are influenced concurrently by aerosol and water vapor effects. The selected spectral values from B01 (443 nm) to B09 (945 nm) presented low reflectance, from 0.0004 to 0.1350 for the lowest TOA values, approximately 1.5 times greater than the corresponding BOA values (Figures 4 and 5). The TOA reflectance values for the VIS bands (B01–B04) were within the range of 0.04–0.13, whereas the values in the NIR and SWIR bands were within 0.0008–0.05 (Figures 4 and 5). This implies that the surface reflectance is inflated by atmospheric effects such as aerosols and water vapor.

Figure 4. Reflectance spectra from 465–955 nm in the Geum River. The dash-dot lines represent the highest and lowest reflectance values for September 30, 2019. The solid line with blue markers indicates the mean reflectance values for the satellite imagery bands (B01–B09) in the downstream (a, d, and g), midstream (b, e, and h), and upstream (c, f, and i) regions of the Geum River.

Figure 5. Reflectance spectra from 465–955 nm in the Geum River. The dash-dot lines represent the highest and lowest reflectance values for October 24, 2020. The black line with blue markers indicates the mean reflectance values for the satellite imagery bands (B01–B09) in the downstream (a, d, and g), midstream (b, e, and h), and upstream (c, f, and i) regions of the Geum River.

3.2. Super-resolution of deep learning models

3.2.1 Training and validation of SRCNN

We developed an SRCNN model to enhance image resolution using LR satellite reflectance spectra and HR airborne imagery. The SRCNN increased the spatial resolution of the 20 m satellite imagery to the 5 m resolution of the airborne imagery by performing single-image super-resolution that minimizes the pixel-wise error between the SR and HR imagery. For the B04 (665 nm), B05 (705 nm), and B06 (740 nm) bands of the Sentinel-2 satellite, Figures 6–9 and S2–S3 show the generated SR images and reflectance maps for representative areas of the Geum River, together with quantitative PSNR results. The SR imagery generated by the SRCNN is relatively similar to the HR reflectance values and follows the spatial patterns of the HR imagery. Moreover, the SRCNN model showed a PSNR ranging from 23.28 to 29.49 dB for the spatial distribution images of B04 (665 nm); for the validation dataset, the model performance was evaluated as 22.86, 25.48, and 27.89 dB for the three representative regions (Figure 8). For B05, the PSNR values ranged from 22.10 to 30.43 dB for the training data and from 17.81 to 30.13 dB for the validation dataset (Figures 7 and 9). The SRCNN model produces a single super-resolved image, which can provide preliminary information on remote sensing reflectance, and it learns effectively by adopting a pixel loss for network optimization (Dong et al. Citation2015). Galar et al. (Citation2019) presented a CNN-based SRCNN model for producing high-spatial-resolution Sentinel-2 imagery. Huang et al. (Citation2017) demonstrated super-resolution reconstruction of real-world remote sensing imagery using an SRCNN-style model that increased the resolution of Sentinel-2 images. Müller et al. (Citation2020) likewise increased the resolution of multispectral satellite imagery with a deep convolutional neural network.

Compared with the SRCNN, the bicubic interpolation method produced results of relatively poor quality. Bicubic interpolation performs poorly for small and narrow inland waters because it is calculated from the nearest pixel values, which include no-data values from non-water pixels in the imagery. The interpolation method showed lower PSNR values than the SRCNN, ranging from 11.78 to 16.57 dB for the training data and from 8.09 to 25.02 dB for validation. Moreover, the SR images produced by interpolation were blurry and lacked detail compared with the SRCNN results for remote sensing monitoring. The bicubic method is directly affected by the low resolution of the satellite imagery because the interpolation produces HR images by convolution using the average of pixels in the nearest 4 × 4 neighborhood (Viaña-Borja and Ortega-Sánchez Citation2019). Keys (Citation1981) introduced the cubic convolution interpolation method to estimate missing pixels using the weighted average of nearby pixels with known values; however, such interpolation-based approaches produce overly blurry images in which fine details vanish.
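For comparison, the bicubic baseline amounts to a single resize call, sketched below in TensorFlow; masking no-data pixels beforehand is our suggestion, following the issue with non-water pixels noted above.

```python
import tensorflow as tf

def bicubic_upscale(lr_patch, scale=4):
    """Bicubic baseline: upscale an (h, w, c) LR patch by `scale` (20 m -> 5 m).

    No-data (NaN) pixels from non-water areas should be masked beforehand,
    since the bicubic kernel mixes them into neighboring water pixels.
    """
    h, w = lr_patch.shape[0], lr_patch.shape[1]
    sr = tf.image.resize(lr_patch[tf.newaxis, ...],
                         size=(h * scale, w * scale),
                         method="bicubic")
    return sr[0]
```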

3.2.2 Training and validation of SRGAN

The overall visual comparison of the SR images generated by the SRGAN is presented in Figures 6–9, together with the model performance values. For B04 (665 nm), the SRGAN showed a PSNR ranging from 14.74 to 19.95 dB for the training dataset (Figure 6); in validation, the PSNR values for B04 ranged from 14.70 to 25.35 dB (Figure 8). Furthermore, the SR images from the SRGAN model for the B05 (705 nm) and B06 (740 nm) bands were evaluated using the PSNR. For B05 (705 nm), the SRGAN model yielded PSNR values ranging from 18.84 to 25.83 dB for the training data and from 17.19 to 23.71 dB for the validation data (Figures 7 and 9). Salgueiro Romero, Marcello, and Vilaplana (Romero, Luis, and Vilaplana Citation2020) applied a deep learning algorithm to accomplish single-image super-resolution, increasing a 10-m spatial resolution to 2 m using a GAN-based SR model.

Figures 6–9 and S2–S3 present a comparison of the spatial distributions from the SRGAN and bicubic interpolation methods for training and validation. Over all reflectance bands, the training dataset showed average PSNRs of 17.8, 21.2, and 22.2 dB for the SRGAN, versus 14.7, 13.5, and 14.1 dB for the bicubic method, for B04 (665 nm), B05 (705 nm), and B06 (740 nm), respectively. The SR algorithms super-resolved the 20-m imagery to 5 m. Compared with the interpolation method, the deep learning-based SR images better captured the absorption features associated with Chl-a concentration, providing preliminary information for monitoring HAB phenomena. This implies that the deep learning-based SR models generate finer spatial resolution images than bicubic interpolation for monitoring narrow and small rivers and reservoirs. Galar et al. (Citation2020) applied a GAN model to the RGB and NIR bands of Sentinel-2 imagery to enhance the resolution from 10 m to 5 m or 2.5 m. However, the SRGAN results exhibit checkerboard artifacts, which are inevitably produced from noise in real-world images (Figures 6–9) (Wang, Chen, and Hoi Citation2020). Such artifacts could undermine the stability of enhanced monitoring using real-world imagery with deep learning-based SR algorithms. Kim et al. (Citation2020) showed that a GAN-based model avoided the checkerboard effect by combining interpolation and convolutional modules, resulting in stable and enhanced image quality. To produce stable SR images from real-world data, the SRGAN model's image quality might therefore be enhanced by combining an interpolation module and additional input images.

Figure 6. Training results of super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the Geum River basin for B04 (665 nm) on September 30, 2019. The red squares indicate zoomed-in views of representative images from the SR methods. The red circles mark checkerboard artifacts in the visual results.

Figure 7. Training results of super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the Geum River basin for B05 (705 nm) on September 30, 2019. The red squares indicate zoomed-in views of representative images from the SR methods. The red circles mark checkerboard artifacts in the visual results.

Figure 8. Validation results of super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the Geum River basin for B04 (665 nm) on October 24, 2020. The red squares indicate zoomed-in views of representative images from the SR methods. The red circles mark checkerboard artifacts in the visual results.

Figure 9. Validation results of super-resolved imagery for representative areas: (a) downstream, (b) midstream, and (c) upstream of the Geum River basin for B05 (705 nm) on October 24, 2020. The red squares indicate zoomed-in views of representative images from the SR methods. The red circles mark checkerboard artifacts in the visual results.

3.2.3 Performance comparison of SRCNN with SRGAN

This study compared the CNN- and GAN-based SR techniques; their training and validation performance is presented in Table 1. The SR images generated using the deep learning models were similar to the airborne reflectance at all sites. The SRCNN model achieved the best performance, with the highest PSNR and SSIM values, averaging 25.21 dB and 0.79, respectively (Table 1). This implies that the SRCNN model is suitable for generating SR images from remote sensing data (Chang and Luo Citation2019). A previous study showed that such a model maintained the spectral radiometry of the SR imagery relative to the LR imagery after super-resolution. Additionally, the SRCNN performs end-to-end mapping, reconstructing super-resolved imagery from image features extracted between the LR and HR input datasets (Dong et al. Citation2015). For the SRGAN model, the PSNR, MSE, and SSIM averaged 21.08 dB, 0.0111, and 0.61, respectively. The deep learning-based SR results for the Geum River were better than those obtained with the interpolation method, implying that deep learning SR algorithms can be used to characterize water bodies in remote sensing (Wang, Bayram, and Sertel Citation2022). Deep learning SR algorithms employ various loss functions and architectures in a data-driven learning process between LR and HR images (Ledig et al. Citation2017). However, real-world spectral imagery is generally non-uniform and varies in its characterization of water bodies (Honggang et al. Citation2022). Thus, the SRGAN model generated imagery of relatively low quality compared with the end-to-end mapping of the SRCNN model. Moreover, the SRGAN results often exhibited checkerboard patterns caused by the deconvolution in the generator (Zhao et al. Citation2019). This means that the model architecture should be chosen appropriately for the input data, as excessive complexity imposes a large computing burden on model simulation. Xia et al. (Citation2021) likewise showed that model complexity can negatively affect simulation through excessive computation relative to the input data, so an appropriate model structure is required to achieve suitable performance.

Table 1. Comparison of interpolation method, SRGAN, SRCNN, and HR images.

3.3 Fine resolution map of Chlorophyll a distribution from SRCNN and SRGAN

In this study, we estimated the Chl-a concentration ratio using the bio-optical algorithm with the SR images, as shown in Figures 10 and 11. The bio-optical algorithm was applied to the reflectance at B04 (665 nm) and B05 (705 nm), which is associated with Chl-a, producing spatial distribution maps of the concentration ratio from the band ratio (B05/B04) of the HR and SR images (Figures 10 and 11). Previous work reported a positive relationship between Chl-a concentration and this bio-optical algorithm, with coefficient of determination and root-mean-square error values of 0.75 and 24.64, as described in detail by Hong et al. (Citation2022). Comparing the spatial distribution maps, the SRCNN imagery for the training and validation datasets (panels d, i, and n of Figures 10 and 11) showed spatial distribution patterns of the Chl-a ratio similar to those of the HR imagery (panels a, f, and k). These results imply that SR imagery provides preliminary information for monitoring Chl-a concentration in the context of water quality (Figures 10 and 11). Su et al. (Citation2021) similarly suggested a CNN-based SR model for remote sensing imagery that provides higher-resolution spatial data to observe mesoscale phenomena in the subsurface temperature field. As shown in panels (e), (j), and (o), the SRGAN model underestimated the Chl-a concentration. Moreover, the SRGAN results showed checkerboard patterns caused by the transposed convolutional layers (Lei, Shi, and Zou Citation2019). The deeper the SRGAN network, the more difficult it is to train and to restore fine texture details for super-resolution (Yang et al. Citation2019). Cai, Meng, and Ho (Citation2020) stated that it is difficult for deeper networks to achieve SR reconstruction of high-resolution imagery in the real world, often resulting in incorrect SR simulation imagery.

Figure 10. Spatial distribution maps of the Chl-a concentration ratio estimated by applying the band ratio (B05/B04) to the different SR algorithms. The maps show September 30, 2019, for the downstream (a–e), midstream (f–j), and upstream (k–o) regions of the Geum River.

Figure 11. Spatial distribution maps of the Chl-a concentration ratio estimated by applying the band ratio (B05/B04) to the different SR algorithms. The maps show October 24, 2020, for the downstream (a–e), midstream (f–j), and upstream (k–o) regions of the Geum River.

3.4 Super-resolution using deep learning for water remote sensing

This study employed deep learning-based SR models to generate high-resolution remote sensing imagery from LR satellite imagery. The SR imagery, coupled with a bio-optical algorithm, was used to estimate the Chl-a concentration ratio for water remote sensing. The CNN-based algorithm, specifically the SRCNN, demonstrated remarkable performance in enhancing the resolution of satellite imagery. The SRCNN effectively extracted the distinctive features of remote sensing images, resulting in SR imagery that closely matched the quality of the HR imagery (Chen et al. Citation2016). In particular, the SRCNN model exhibited the highest PSNR and SSIM values and the lowest MSE value. Such SR algorithms, which enhance LR satellite imagery toward HR quality, can provide supplementary information for monitoring HABs. Gargiulo et al. (Citation2019) implemented deep learning algorithms using a CNN model to generate SR images and extract remote sensing image features, enhancing the spatial resolution of Sentinel-2 images from the original 20 m to 10 m. Therefore, CNN-based SR models are effective for generating super-resolved imagery of small and narrow inland waters, advancing the detection of HAB phenomena for water quality management (Kaiming et al. Citation2016; Yang et al. Citation2019).

Our study may be limited by the small number of monitoring campaigns available for validation. This limitation reflects the difficulty of collecting airborne and satellite data over inland water simultaneously. Satellite and airborne monitoring entail time lags dictated by their respective schedules: satellites traverse specific orbits at defined intervals while capturing images of inland water surface reflectance, whereas airborne campaigns are executed under predetermined conditions, including flying altitude, monitoring time, and weather. This temporal disparity between satellite and airborne monitoring complicates the validation of deep learning-based SR research on real-world monitoring images and SR imagery. Wang et al. (Citation2021) performed super-resolution analysis using 40 aerial images because they considered that real-world images contain a high signal-to-noise ratio. Further studies should therefore establish additional data acquisition and monitoring areas.

The fine-resolution distribution of Chl-a was estimated as a proxy indicator for HAB-related phenomena. However, owing to non-linear relationships influenced by habitat-specific factors and interactions, relying solely on the Chl-a distribution may be insufficient for accurate assessment of HAB phenomena. Wang et al. (Citation2023) proposed a data-based inferential model to characterize the variability of Chl-a and its relationship with the occurrence of algal blooms. They also emphasized the influence of repeated blooms on other biogeochemical factors, including salinity, in triggering Chl-a. Consequently, they suggested that analyses of bloom indicators should consider the uncertainties and spatial distribution of blooms to account for multiple triggering factors. Additionally, multiple trigger factors could be investigated to address the uncertainty and sensitivity of Chl-a estimates attributable to variations in algal bloom phenomena. Pianosi et al. (Citation2016) showed that sensitivity analyses in environmental modeling can identify dominant parameters and support uncertainty assessments. Therefore, algal bloom-specific indicators based on remote sensing might require the dominant bands of satellite imagery to account for the sensitivity factors attributed to variations in algal blooms. Together, these results provide crucial insights into applying deep learning-based super-resolution and remote sensing to overcome the spatial resolution challenges arising from equipment limitations and to provide further information for water management.

4. Conclusion

This study assessed whether remote sensing with deep learning models can provide preliminary information for monitoring water quality during HAB phenomena. To achieve remote sensing super-resolution in the Geum River, we applied SR algorithms based on CNN and GAN architectures to enhance LR imagery toward HR quality. LR satellite and HR airborne spectral images were employed for model training. We performed airborne remote sensing of HABs to monitor eutrophic phenomena, using the 2019 and 2020 monitoring campaigns to train and validate the deep learning models, respectively. The deep learning models were developed to super-resolve LR imagery to HR images using the retrieved reflectance information. Furthermore, we estimated Chl-a concentration maps using a two-band ratio algorithm with the SR imagery. The major findings of our study are as follows:

  1. Remote sensing can provide the spatiotemporal distribution of water quality for water resources management. The results showed that atmospheric effects such as aerosols and water vapor influence the measurement of water reflectance for water quality monitoring.

  2. Among the deep learning-based SR algorithms, the SRCNN model performed best, with the highest PSNR and SSIM and the lowest MSE evaluation metrics.

  3. The generated SR images provided preliminary information by estimating the Chl-a concentration with a bio-optical algorithm, which could be applied to monitor HAB phenomena for water quality management in narrow and small water bodies.

In dealing with water quality issues related to eutrophication phenomena, this study shows that remote sensing with deep learning-based SR has significant potential to provide further information associated with algal dynamics. Moreover, our study contributes to overcoming the spatial resolution limitations of inland water remote sensing for water quality monitoring.

Author contributions

D.H.K. designed the modeling and wrote the manuscript. S.M.H. and A.A. (Ph.D.) assisted in developing deep learning algorithms. S.H.P. (Ph.D.), K.H.Y. (Ph.D.), and K.H.K (Ph.D.) participated in the data collection and performed the image processing. J.C.P. (Ph.D.) and K.H.C. (Professor) revised the manuscript draft. All authors read and approved the final manuscript.

Acknowledgments

This research was supported by the Water Environmental and Infrastructure Research Program (NIER-2021-01-01-058) funded by the National Institute of Environmental Research. This work is also partially supported by MSIT through Sejong Science Fellowship, funded by National Research Foundation of Korea (NRF) [No.2021R1C1C2010703].

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available from the corresponding author, J.C.Pyo, upon reasonable request.

Correction Statement

This article has been republished with minor changes. These changes do not impact the academic content of the article.

Additional information

Funding

This work was supported by the National Institute of Environmental Research [NIER-2021-01-01-058]; National Research Foundation of Korea [2021R1C1C2010703].

References

  • Ahn, N., B. Kang, and K. A. Sohn. 2018. “Fast, accurate, and lightweight super-resolution with cascading residual network.“ In Proceedings of the European conference on computer vision (ECCV) (pp. 252–21). https://doi.org/10.1007/978-3-030-01249-6_16
  • Baek, S.-S., E.-Y. Jung, J. Pyo, Y. Pachepsky, H. Son, and K. Hwa Cho. 2022. “Hierarchical Deep Learning Model to Simulate Phytoplankton at Phylum/Class and Genus Levels and Zooplankton at the Genus Level.” Water Research 218:118494. https://doi.org/10.1016/j.watres.2022.118494.
  • Cai, J., Z. Meng, and C. M. Ho. 2020. “Residual channel attention generative adversarial network for image super-resolution and noise reduction.“ In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (pp. 454–455). https://doi.org/10.1109/CVPRW50498.2020.00235
  • Chang, Y., and B. Luo. 2019. “Bidirectional Convolutional LSTM Neural Network for Remote Sensing Image Super-Resolution.” Remote Sensing 11 (20): 2333. https://doi.org/10.3390/rs11202333.
  • Chen, Y., H. Jiang, C. Li, X. Jia, and P. Ghamisi. 2016. “Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks.” IEEE Transactions on Geoscience and Remote Sensing 54 (10): 6232–6251. https://doi.org/10.1109/TGRS.2016.2584107.
  • Choi, H., C.-M. Lee, D. Chan Koh, and Y. Yeol Yoon. 2021. “Recharge and Spatial Distribution of Groundwater Hydrochemistry in the Geum River Basin, South Korea.” Journal of Radioanalytical and Nuclear Chemistry 330 (2): 397–412. https://doi.org/10.1007/s10967-021-07807-8.
  • Dong, C., C. Change Loy, H. Kaiming, and X. Tang. 2015. “Image Super-Resolution Using Deep Convolutional Networks.” IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (2): 295–307. https://doi.org/10.1109/TPAMI.2015.2439281.
  • Dou, X., C. Li, Q. Shi, and M. Liu. 2020. “Super-Resolution for Hyperspectral Remote Sensing Images Based on the 3D Attention-SRGAN Network.” Remote Sensing 12 (7): 1204. https://doi.org/10.3390/rs12071204.
  • Galar, M., R. Sesma, C. Ayala, L. Albizua, and C. Aranda. 2020. “Super-Resolution of Sentinel-2 Images Using Convolutional Neural Networks and Real Ground Truth Data.” Remote Sensing 12 (18): 2941. https://doi.org/10.3390/rs12182941.
  • Galar, M., R. Sesma, C. Ayala, and C. Aranda. 2019. “SUPER-RESOLUTION for SENTINEL-2 IMAGES.” International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences XLII-2/W16:95–102. https://doi.org/10.5194/isprs-archives-XLII-2-W16-95-2019.
  • Gargiulo, M., A. Mazza, R. Gaetano, G. Ruello, and G. Scarpa. 2019. “Fast Super-Resolution of 20 M Sentinel-2 Bands Using Convolutional Neural Networks.” Remote Sensing 11 (22): 2635. https://doi.org/10.3390/rs11222635.
  • Gerber, N. N. 1977. “Three Highly Odorous Metabolites from an Actinomycete: 2-Isopropyl-3-Methoxy-Pyrazine, Methylisoborneol, and Geosmin.” Journal of Chemical Ecology 3 (4): 475–482. https://doi.org/10.1007/BF00988190.
  • Gitelson, A. A., D. Gurlin, W. J. Moses, and T. Barrow. 2009. “A Bio-Optical Algorithm for the Remote Estimation of the Chlorophyll-A Concentration in Case 2 Waters.” Environmental Research Letters 4 (4): 045003. https://doi.org/10.1088/1748-9326/4/4/045003.
  • Goodfellow, I., J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. 2014. “Generative Adversarial Nets.” Advances in Neural Information Processing Systems 27. https://proceedings.neurips.cc/paper_files/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf.
  • Hong, S. M., S.-S. Baek, D. Yun, Y.-H. Kwon, H. Duan, J. Pyo, and K. Hwa Cho. 2021. “Monitoring the Vertical Distribution of HABs Using Hyperspectral Imagery and Deep Learning Models.” Science of the Total Environment 794:148592. https://doi.org/10.1016/j.scitotenv.2021.148592.
  • Honggang, C., X. He, L. Qing, Y. Wu, C. Ren, R. E. Sheriff, and C. Zhu. 2022. “Real-World Single Image Super-Resolution: A Brief Review.” Information Fusion 79:124–145. https://doi.org/10.1016/j.inffus.2021.09.005.
  • Hong, S. M., K. Hwa Cho, S. Park, T. Kang, M. Sung Kim, G. Nam, and J. Pyo. 2022. “Estimation of Cyanobacteria Pigments in the Main Rivers of South Korea Using Spatial Attention Convolutional Neural Network with Hyperspectral Imagery.” GIScience & Remote Sensing 59 (1): 547–567. https://doi.org/10.1080/15481603.2022.2037887.
  • Huang, N., Y. Yang, J. Liu, X. Gu, and H. Cai. 2017. “Single-Image Super-Resolution for Remote Sensing Data Using Deep Residual-Learning Neural Network.“ Neural Information Processing 622–30. https://doi.org/10.1007/978-3-319-70096-0_64.
  • Su, H., A. Wang, T. Zhang, T. Qin, X. Du, and X.-H. Yan. 2021. “Super-Resolution of Subsurface Temperature Field from Remote Sensing Observations Based on Machine Learning.” International Journal of Applied Earth Observation and Geoinformation 102:102440. https://doi.org/10.1016/j.jag.2021.102440.
  • Jang, W., Y. Park, J. Pyo, S. Park, J. Kim, J. H. Kim, K. H. Cho, J.-K. Shin, and S. Kim. 2022. “Optimal Band Selection for Airborne Hyperspectral Imagery to Retrieve a Wide Range of Cyanobacterial Pigment Concentration Using a Data-Driven Approach.” Remote Sensing 14 (7): 1754. https://doi.org/10.3390/rs14071754.
  • He, J., Y. Chen, J. Wu, D. A. Stow, and G. Christakos. 2020. “Space-Time Chlorophyll-A Retrieval in Optically Complex Waters That Accounts for Remote Sensing and Modeling Uncertainties and Improves Remote Estimation Accuracy.” Water Research 171:115403. https://doi.org/10.1016/j.watres.2019.115403.
  • Kaiming, H., X. Zhang, S. Ren, and J. Sun. 2016. “Deep Residual Learning for Image Recognition.” Paper presented at the Proceedings of the IEEE conference on computer vision and pattern recognition, 27-30 June 2016. IEEE. Las Vegas, NV, USA. https://doi.org/10.1109/CVPR.2016.90
  • Keys, R. 1981. “Cubic Convolution Interpolation for Digital Image Processing.” IEEE Transactions on Acoustics, Speech, and Signal Processing 29 (6): 1153–1160. https://doi.org/10.1109/TASSP.1981.1163711.
  • Kim, S., M. Kim, H. Kim, S.-S. Baek, W. Kim, S. D. Kim, and K. Hwa Cho. 2022. “Chemical Accidents in Freshwater: Development of Forecasting System for Drinking Water Resources.” Journal of Hazardous Materials 432:128714. https://doi.org/10.1016/j.jhazmat.2022.128714.
  • Kim, J., J. Kwon Lee, and K. Mu Lee. 2016. “Accurate Image Super-Resolution Using Very Deep Convolutional Networks.” Paper presented at the Proceedings of the IEEE conference on computer vision and pattern recognition, 27-30 June 2016. IEEE: Las Vegas, NV, USA. https://doi.org/10.1109/CVPR.2016.182
  • Kim, G., J. Park, K. Lee, J. Lee, J. Min, B. Lee, D. K. Han, and K. Hanseok 2020. “Unsupervised Real-World Super Resolution with Cycle Generative Adversarial Network and Domain Discriminator.” Paper presented at the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 14-19 June 2020, Seattle, WA, USA. IEEE. https://doi.org/10.1109/CVPRW50498.2020.00236 .
  • Kokhanovsky, A. A. 2008. Aerosol Optics: Light Absorption and Scattering by Particles in the Atmosphere. Springer Science & Business Media.
  • Lanorte, A., G. Cillis, G. Calamita, G. Nolè, A. Pilogallo, B. Tucci, and F. De Santis. 2019. “Integrated Approach of RUSLE, GIS and ESA Sentinel-2 Satellite Data for Post-Fire Soil Erosion Assessment in Basilicata Region (Southern Italy).” Geomatics, Natural Hazards and Risk 10 (1): 1563–1595. https://doi.org/10.1080/19475705.2019.1578271.
  • Ledig, C., L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, and Z. Wang. 2017. “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.” Paper presented at the Proceedings of the IEEE conference on computer vision and pattern recognition, 21-26 July 2017, Honolulu, HI, USA. IEEE. https://doi.org/10.1109/CVPR.2017.19.
  • Lee, J., C.-G. Kim, J. Eun Lee, N. Won Kim, and H. Kim. 2018. “Application of Artificial Neural Networks to Rainfall Forecasting in the Geum River Basin, Korea.” Water 10 (10): 1448. https://doi.org/10.3390/w10101448.
  • Lee, J., J. Yoon, I. Choi, H. Joo, B. Lim, and S. Lee. 2016. “Vertical Distribution of Harmful Cyanobacteria in the Daechung Reservoir.” Journal of Korean Society on Water Environment 1:464–465.
  • Leibe, B., J. Matas, N. Sebe, and M. Welling, eds. 2016. Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, the Netherlands, October 11–14, 2016, Proceedings, Part IV. Vol. 9908. Springer.
  • Lei, S., Z. Shi, and Z. Zou. 2019. “Coupled Adversarial Training for Remote Sensing Image Super-Resolution.” IEEE Transactions on Geoscience and Remote Sensing 58 (5): 3633–3643. https://doi.org/10.1109/TGRS.2019.2959020.
  • Lin, Q., C. Hu, P. M. Visser, and R. Ma. 2018. “Diurnal Changes of Cyanobacteria Blooms in Taihu Lake as Derived from GOCI Observations.” Limnology and Oceanography 63 (4): 1711–1726. https://doi.org/10.1002/lno.10802.
  • Louis, J., V. Debaecker, B. Pflug, M. Main-Knorn, J. Bieniarz, U. Mueller-Wilm, E. Cadau, and F. Gascon. 2016. “Sentinel-2 Sen2Cor: L2A Processor for Users.” Paper presented at the Living Planet Symposium 2016, Prague, Czech Republic.
  • Luan, K., X. Tong, Y. Ma, R. Shu, W. Xu, and X. Liu. 2014. “Geometric Correction of PHI Hyperspectral Image without Ground Control Points.” Paper presented at the IOP Conference Series: Earth and Environmental Science, 22-26 April 2013, Beijing, China. https://doi.org/10.1088/1755-1315/17/1/012193.
  • Main-Knorn, M., B. Pflug, J. Louis, V. Debaecker, U. Müller-Wilm, and F. Gascon. 2017. “Sen2Cor for Sentinel-2.” Paper presented at Image and Signal Processing for Remote Sensing XXIII, SPIE.
  • Mishra, D. R., B. A. Schaeffer, and D. Keith. 2014. “Performance Evaluation of Normalized Difference Chlorophyll Index in Northern Gulf of Mexico Estuaries Using the Hyperspectral Imager for the Coastal Ocean.” GIScience & Remote Sensing 51 (2): 175–198. https://doi.org/10.1080/15481603.2014.895581.
  • Mondejar, J. P., and A. F. Tongco. 2019. “Near Infrared Band of Landsat 8 as Water Index: A Case Study Around Cordova and Lapu-Lapu City, Cebu, Philippines.” Sustainable Environment Research 29 (1): 1–15. https://doi.org/10.1186/s42834-019-0016-5.
  • Moses, W. J., A. A. Gitelson, S. Berdnikov, and V. Povazhnyy. 2009. “Satellite Estimation of Chlorophyll-A Concentration Using the Red and NIR Bands of MERIS—The Azov Sea Case Study.” IEEE Geoscience and Remote Sensing Letters 6 (4): 845–849. https://doi.org/10.1109/LGRS.2009.2026657.
  • Mueller-Wilm, U., O. Devignot, and L. Pessiot. 2019. “Sen2Cor Software Release Note.” S2-PDGS-MPC-L2A-SRN-V2.8.0. ESA, January 2019.
  • Müller, M. U., N. Ekhtiari, R. M. Almeida, and C. Rieke. 2020. “Super-Resolution of Multispectral Satellite Images Using Convolutional Neural Networks.” ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-1 (2020): 33–40.
  • Naranjo-Torres, J., M. Mora, R. Hernández-García, R. J. Barrientos, C. Fredes, and A. Valenzuela. 2020. “A Review of Convolutional Neural Network Applied to Fruit Image Processing.” Applied Sciences 10 (10): 3443. https://doi.org/10.3390/app10103443.
  • Nazeer, M., C. Olayinka Ilori, M. Bilal, J. Elizabeth Nichol, W. Wu, Z. Qiu, and B. Krishna Gayen. 2021. “Evaluation of Atmospheric Correction Methods for Low to High Resolutions Satellite Remote Sensing Data.” Atmospheric Research 249:105308. https://doi.org/10.1016/j.atmosres.2020.105308.
  • O’Neil, J. M., T. W. Davis, M. A. Burford, and C. J. Gobler. 2012. “The Rise of Harmful Cyanobacteria Blooms: The Potential Roles of Eutrophication and Climate Change.” Harmful Algae 14:313–334. https://doi.org/10.1016/j.hal.2011.10.027.
  • Park, J., K. Tae Kim, and W. Hyoung Lee. 2020. “Recent Advances in Information and Communications Technology (ICT) and Sensor Technology for Monitoring Water Quality.” Water 12 (2): 510. https://doi.org/10.3390/w12020510.
  • Pianosi, F., K. Beven, J. Freer, J. W. Hall, J. Rougier, D. B. Stephenson, and T. Wagener. 2016. “Sensitivity Analysis of Environmental Models: A Systematic Review with Practical Workflow.” Environmental Modelling & Software 79:214–232.
  • Pyo, J., S. H. Ha, Y. A. Pachepsky, H. Lee, R. Ha, G. Nam, M. S. Kim, J. Im, and K. Hwa Cho. 2016. “Chlorophyll-A Concentration Estimation Using Three Different Bio-Optical Algorithms, Including a Correction for the Low-Concentration Range: The Case of the Yiam Reservoir, Korea.” Remote Sensing Letters 7 (5): 407–416. https://doi.org/10.1080/2150704X.2016.1142680.
  • Pyo, J., S. Min Hong, J. Jang, S. Park, J. Park, J. Hoon Noh, and K. Hwa Cho. 2022. “Drone-Borne Sensing of Major and Accessory Pigments in Algae Using Deep Learning Modeling.” GIScience & Remote Sensing 59 (1): 310–332. https://doi.org/10.1080/15481603.2022.2027120.
  • Richter, R., and D. Schläpfer. 2002. “Geo-Atmospheric Processing of Airborne Imaging Spectrometry Data. Part 2: Atmospheric/Topographic Correction.” International Journal of Remote Sensing 23 (13): 2631–2649. https://doi.org/10.1080/01431160110115834.
  • Salgueiro Romero, L., J. Marcello, and V. Vilaplana. 2020. “Super-Resolution of Sentinel-2 Imagery Using Generative Adversarial Networks.” Remote Sensing 12 (15): 2424. https://doi.org/10.3390/rs12152424.
  • Sara, U., M. Akter, and M. Shorif Uddin. 2019. “Image Quality Assessment Through FSIM, SSIM, MSE and PSNR—A Comparative Study.” Journal of Computer and Communications 7 (3): 8–18. https://doi.org/10.4236/jcc.2019.73002.
  • Tao, L., J. Wang, Y. Zhang, Z. Wang, and J. Jiang. 2019. “Satellite Image Super-Resolution via Multi-Scale Residual Deep Neural Network.” Remote Sensing 11 (13): 1588. https://doi.org/10.3390/rs11131588.
  • Tuominen, J., and T. Lipping. 2011. “Atmospheric Correction of Hyperspectral Data Using Combined Empirical and Model Based Method.” Paper presented at the Proceedings of the 7th European Association of Remote Sensing Laboratories SIG-Imaging Spectroscopy Workshop, Edinburgh, Scotland, UK.
  • Viaña-Borja, S. P., and M. Ortega-Sánchez. 2019. “Automatic Methodology to Detect the Coastline from Landsat Images with a New Water Index Assessed on Three Different Spanish Mediterranean Deltas.” Remote Sensing 11 (18): 2186. https://doi.org/10.3390/rs11182186.
  • Wang, P., B. Bayram, and E. Sertel. 2022. “A Comprehensive Review on Deep Learning Based Remote Sensing Image Super-Resolution Methods.” Earth-Science Reviews 232:104110. https://doi.org/10.1016/j.earscirev.2022.104110.
  • Wang, Z., J. Chen, and S. C. Hoi. 2020. “Deep Learning for Image Super-Resolution: A Survey.” IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (10): 3365–3387. https://doi.org/10.1109/TPAMI.2020.2982166.
  • Wang, H., E. Galbraith, and M. Convertino. 2023. “Algal Bloom Ties: Spreading Network Inference and Extreme Eco-Environmental Feedback.” Entropy 25 (4): 636.
  • Wang, C., R. Zhu, Y. Bai, P. Zhang, and H. Fan. 2021. “Single-Frame Super-Resolution for High Resolution Optical Remote-Sensing Data Products.” International Journal of Remote Sensing 42 (21): 8099–8123. https://doi.org/10.1080/01431161.2021.1971790.
  • Hu, X., L. Chu, J. Pei, W. Liu, and J. Bian. 2021. “Model Complexity of Deep Learning: A Survey.” Knowledge and Information Systems 63 (10): 2585–2619. https://doi.org/10.1007/s10115-021-01605-0.
  • Yang, D., Z. Li, Y. Xia, and Z. Chen. 2015. “Remote Sensing Image Super-Resolution: Challenges and Approaches.” Paper presented at the 2015 IEEE International Conference on Digital Signal Processing (DSP), 21-24 July 2015, Singapore. https://doi.org/10.1109/ICDSP.2015.7251858.
  • Yang, W., X. Zhang, Y. Tian, W. Wang, J.-H. Xue, and Q. Liao. 2019. “Deep Learning for Single Image Super-Resolution: A Brief Review.” IEEE Transactions on Multimedia 21 (12): 3106–3121. https://doi.org/10.1109/TMM.2019.2919431.
  • Zhang, H., and B. Huang. 2011. “Scale Conversion of Multi Sensor Remote Sensing Image Using Single Frame Super Resolution Technology.” Paper presented at the 2011 19th International Conference on Geoinformatics, 24-26 June 2011, Shanghai, China. IEEE. https://doi.org/10.1109/GeoInformatics.2011.5980856.
  • Zhao, G., M. Zhang, J. Liu, and J.-R. Wen. 2019. “Unsupervised Adversarial Attacks on Deep Feature-Based Retrieval with GAN.” arXiv preprint arXiv:1907.05793.