
An improved color consistency optimization method based on the reference image contaminated by clouds

Article: 2259559 | Received 01 May 2023, Accepted 12 Sep 2023, Published online: 19 Sep 2023

ABSTRACT

Optimizing color consistency across multiple images is a crucial step in creating accurate digital orthophoto maps (DOMs). However, current color balance methods that rely on a reference image are susceptible to cloud and cloud shadow interference, making it challenging to ensure color fidelity and a uniform color transition between images. To address these issues, an improved method for color consistency optimization has been proposed to enhance image quality using optimized low-resolution reference images. Initially, the original image is utilized to reconstruct areas affected by clouds or cloud shadows on the reference image. For seamless cloning, a Poisson blending algorithm is employed to minimize color differences between reconstructed and other regions. Subsequently, based on a weighting approach, the high-frequency information obtained through Gaussian and bilateral filtering is superimposed to smooth the image boundary and ensure color continuity between images. Finally, local linear models are constructed to correct image color based on the optimized reference and down-sampled images. To validate the robustness of this approach, we tested it on two challenging datasets covering a wide area. Compared to state-of-the-art methods, our approach offers significant advantages in both quantitative indicators and visual quality.

1. Introduction

Digital Orthophoto Maps (DOMs) are crucial in land cover classification, change detection, and national geographic feature extraction (Deng et al. Citation2018; Jalal et al. Citation2019; Yoo and Lee Citation2016). The generation of DOM usually involves the seamless fusion of multiple images with extended coverage. However, remote sensing images are typically captured by different sensors at varying angles and under different imaging conditions, resulting in substantial differences in radiation across images. Color inconsistencies may severely affect the visual quality of mosaic images. To overcome this challenge and obtain seamless mosaic images, extensive research has been conducted by many scholars.

Currently, two primary methods are employed for color correction, namely color transfer and color consistency optimization (Xia, Yao, and Gao Citation2019). The color transfer method can be divided into two steps: first, a reference image is selected manually or automatically, and then image hues are transferred from the reference image to other images following the shortest path algorithm. Therefore, the focus of this method lies in the selection of reference images and the determination of color transfer paths. Unlike color transfer, the key to color consistency optimization lies in optimizing color consistency across multiple images globally, which is implemented by minimizing a designed energy function.

In some advanced color transfer technologies, in addition to adjusting image colors, the quality of the original image, such as texture details, is also well preserved. Su et al. (Citation2014) improved image quality by utilizing multi-scale detail processing and self-learning filtering to minimize the normalized Kullback-Leibler distance, which allowed them to achieve color fidelity and a seamless appearance of detail while suppressing corruptive artifacts. Similarly, Wang et al. (Citation2017) accurately transferred image tones based on the L0-norm gradient-preserving algorithm and similarity-preserving color mapping to maintain the similarity and detail of images as much as possible. In addition, some scholars have made different attempts to select the reference images used in color transfer methods. For instance, Zhang et al. (Citation2017) used absolute radiometric calibration images as reference images. Xie et al. (Citation2018) selected an image subset, rather than a single image, from a weighted image graph as the reference. Pastucha et al. (Citation2022) used image position relationships to group similar images and then selected the largest group as the reference group. Although these algorithms achieve more robust color transfer by using more accurate references, their results are vulnerable to external noise interference and may not produce high-quality output images. Therefore, some color transfer methods based on reliable features and robust correspondence have been proposed. For example, Hwang et al. (Citation2019) strengthened the probabilistic modeling and spatial constraints of color transfer in 3D color space to eliminate the impact of mismatches, spatially varying illumination, and noise on the resulting image. He et al. (Citation2019) adopted semantically meaningful dense correspondence to improve the accuracy of color transfer between images with perceptually similar semantic structures. In addition, Oskarsson (Citation2021) proposed a feature-based method that performs robust color transfer fitting for data containing coarse outliers.

As mentioned earlier, the reference image and transfer path are critical to the color transfer algorithms. However, the scientific selection of reference images remains an unresolved issue. Additionally, the shortest transfer path cannot avoid accumulated error, which limits the application of color transfer algorithms in large-scale datasets. The color consistency optimization approach effectively avoids the selection of reference images and error accumulation by utilizing energy function optimization for color correction.

In some mainstream color consistency optimization methods, much attention has been paid to developing algorithms that integrate the benefits of local and global optimizations (Liu et al. Citation2020, Citation2021; Yu et al. Citation2017; Zhang et al. Citation2020). These approaches adopt a whole-to-part idea, which effectively considers the color consistency of both the whole and the part. However, they are susceptible to ground feature alterations and geometric deviations between images. To eliminate the negative impact of pixel changes, some methods use pseudo-invariant features (PIF) or robust models to fit the color correspondence between images to improve overall color consistency (Li et al. Citation2022; Moghimi et al. Citation2022). The existing PIF recognition methods, such as those based on multiple rules (Moghimi et al. Citation2021; Xu et al. Citation2021) and key point descriptors (Kim and Han Citation2021; Moghimi et al. Citation2021, Citation2022; Varish et al. Citation2023), are mostly data-oriented in their design. Therefore, the applicability of these methods to various types of data still needs to be studied. Due to incomplete visual quality control, the images generated by color correction algorithms based on PIF may not be of high quality. In order to optimize color consistency while maintaining image quality, some scholars have conducted in-depth research. For instance, Niu et al. (Citation2019) used a coarse-to-fine strategy and guided filtering to achieve global and local color consistency while retaining the original image’s structural information. Liu et al. (Citation2019) proposed a gradient preservation-based color correction approach for image structure consistency that uses local mapping to correct the approximate region and combines region mapping, feature extraction, and gradient optimization to optimize image color difference while protecting image structure. Furthermore, Xia et al. (Citation2019) and Xia et al. (Citation2017) used the parametric quadratic spline model to transform the color consistency constraint into an effective parameter expression and designed an energy function to achieve the maintenance of image quality and optimization of color consistency between images. To enhance image contrast and reduce color inconsistencies, Li et al. (Citation2022) used original color information to increase image contrast, which produces high-quality corrected images even when the image contrast is extremely low.

Currently, using external color references is a mainstream method for correcting the color of images that cover a wide range of areas. For example, Zhou (Citation2015) conducted gamma correction on the image by leveraging the calculated local average of the original image and the external reference image. Yu et al. (Citation2016) proposed a color balance approach that employs a color reference library. Cui et al. (Citation2021) simulated the image color distribution by extracting low-frequency information from an external low-resolution reference image and achieved automatic color correction using a model that combines defogging and radiometric correction. These methods make full use of the advantages of color references and avoid the problems of “two-body” (Chen et al. Citation2014) and color error propagation that exist in traditional color consistency optimization approaches. However, they may encounter issues when dealing with interfering factors, such as clouds and cloud shadows in the reference image.

From the aforementioned studies, it is evident that several algorithms have been proposed and substantial progress has been made in eliminating color differences and improving image quality. However, the effectiveness of these algorithms, which rely on external references, is vulnerable to incorrect color mapping due to pixel variations between the reference and original images. Therefore, the objective of this paper is to introduce an improved color consistency optimization method based on a low-resolution reference image that is contaminated by clouds.

Our research contributions can be summarized as follows:

  1. We creatively use Poisson blending to reduce the content changes between the reference and original images caused by various factors, thereby obtaining a more accurate color reference and preserving the fidelity of ground-feature colors in the image. This provides a new way of thinking about the incorrect color mapping that affects current reference-based color correction methods.

  2. The proposed image boundary filling and high-frequency information weighting effectively smooth the high-frequency information at image boundaries, achieving a uniform color transition between images.

The remaining parts of this article are organized as follows: Section 2 describes the proposed algorithm in detail. Section 3 provides details on the experiments conducted. Finally, Section 4 presents the conclusions.

2. Proposed method

As shown in Figure 1, the proposed method consists of two steps aimed at optimizing color inconsistencies between images. The first step uses two optimization modules to enhance the quality of the low-resolution reference image and of the down-sampled image obtained from the original image. In the second step, the resulting images are generated based on the optimized reference and down-sampled images. Below, we provide a detailed description of these two steps.

Figure 1. The flow chart of the proposed method, with two blue modules highlighted as the major improvements.


2.1. Optimize the reference image

Our objective at this stage is to acquire a reference image whose high-frequency content is comparable to that of the original image. To achieve this, we superimpose the high frequency $H_{src}^{down}$ of the down-sampled image $I_{src}^{down}$ onto the low frequency $L_{ref}$ of the reference image $I_{ref}$. This process yields an image $I_{dst}^{down}$ whose low frequency aligns precisely with the color distribution of the reference image, while its high frequency is derived from the original image. To obtain a superior-quality $I_{dst}^{down}$, we adopt two optimization modules to process $L_{ref}$ and $H_{src}^{down}$, respectively. In the following sections, we present a detailed account of the processing flow of these two modules.

2.1.1. Optimize $L_{ref}$ based on Poisson blending

Clouds and their accompanying shadows are an unavoidable presence in low-resolution reference images (Wang et al. Citation2023). If left unaddressed, these disturbances contaminate the low-frequency information extracted from the reference image, which in turn degrades the quality of the resulting image. To demonstrate the impact of clouds and their shadows on color consistency optimization, reference images containing these interferences are used in this paper. The corresponding experimental results are shown in Figure 2.

Figure 2. An example illustrating the influence of clouds and their shadows on the color correction results. The red box is used to indicate the area that contains clouds and cloud shadows. (c) the result is generated by the unimproved algorithm using texture information extracted from (a) and tonal information obtained from (b).


Figure 2 shows that when clouds and their shadows appear in the reference image, even if these interfering factors do not exist in the corresponding regions of the original image, they appear at the corresponding locations of the generated image after color consistency optimization. Careful analysis revealed that the clouds and cloud shadows on the reference and input images cause pixel changes between the two. These varying pixels result in wrong color mappings, which in turn give the resultant image unnatural colors: regions without clouds exhibit cloudy tones, while regions with clouds do not.

To solve this problem, we employ Poisson blending (Fang et al. Citation2019; Hu et al. Citation2020) to reconstruct the content of the original image on the reference image, reducing the varying pixels and thus improving the accuracy of the color mapping. It is worth noting that the significance of adopting Poisson blending is not merely the removal of clouds and cloud shadows, but the reduction of varying pixels. Therefore, the proposed method can be further applied to inter-image content variations caused by other factors, e.g. human activities and seasonal changes. As shown in Figure 3, we used Poisson blending on the reference image to reconstruct the clouds of the original image, reducing the effect of content changes between images on the color correction results. The figure shows the significant content changes between the original and reference images caused by the clouds. If the clouds are not reconstructed on the reference image, the color correction results will exhibit discordant colors in areas where clouds are present; specifically, bright clouds will be rendered in darker colors. With Poisson blending, there is no significant content change between the output image and the original image, which effectively improves the quality of the color correction results.

Figure 3. An example of reconstructing the content (e.g. clouds) in the original image using a Poisson blending strategy.


Why is Poisson blending used to minimize content changes between images? Because Poisson blending achieves seamless cloning between images: even when there are large color differences between the reference image and the original image, the method can seamlessly reconstruct the content of the original image on the reference image. In the next section, we briefly introduce the processing flow of Poisson blending, taking as an example the reconstruction of the original image's content in the cloud- and shadow-contaminated regions of the reference image. The cloud and cloud shadow masks required by the algorithm are extracted manually, and a schematic diagram of the algorithm is shown in Figure 4.

Figure 4. The diagrammatic sketch of an approach for repairing areas polluted by clouds and cloud shadows based on Poisson blending. The target image and the patch represent the low-resolution reference image and the original image to be color corrected, respectively. The goal is to restore the contaminated areas on the reference image using information from the original image.


Initially, we perform a convolution operation on the patch and target images using the convolution kernels $k_1 = [0, -1, 1]$ and $k_1^T$ to compute the gradient maps in the x and y directions. Subsequently, the gradients within the cloud and cloud shadow regions of the target image are removed based on the obtained mask image, while the gradients within the corresponding regions of the patch image are retained. The results of this mask operation are then superimposed to generate the gradient map of the image to be reconstructed. After that, we perform a convolution operation using the convolution kernels $k_2 = [-1, 1, 0]$ and $k_2^T$ on the gradient map to compute its partial derivative maps in the x and y directions. The partial derivative maps in both directions are then superimposed to generate the divergence of the image to be reconstructed. Finally, the following Poisson equation is built and solved based on the divergence and the original image (Pérez, Gangnet, and Blake Citation2003):

(1) $\Delta f|_{D} = \operatorname{div} \mathbf{V}, \quad \text{with } f|_{\partial D} = f_{\partial D}$

where $\Delta$ is the Laplace operator; $\operatorname{div} \mathbf{V}$ represents the divergence of the gradient field $\mathbf{V}(\alpha, \beta)$, i.e. $\operatorname{div} \mathbf{V} = \partial\alpha/\partial x + \partial\beta/\partial y$; and $f|_{\partial D} = f_{\partial D}$ is the Dirichlet boundary condition. The solution of the Poisson equation is the image to be reconstructed.

In order to deepen our understanding of the process of establishing and solving Poisson's equation, we illustrate it in detail with a diagram. As shown in Figure 5, the pixel values at the boundaries of the contaminated region are known. After determining the divergence of the image to be reconstructed, we can use the divergence formula to generate the following system of equations:

(2) $\begin{aligned}
[f_2(2)+f_2(5)+f_1(c)+f_1(b)] - 4f_1(a) &= \operatorname{div}(a)\\
[f_2(3)+f_1(a)+f_1(d)+f_2(8)] - 4f_1(b) &= \operatorname{div}(b)\\
[f_1(a)+f_2(9)+f_2(14)+f_1(d)] - 4f_1(c) &= \operatorname{div}(c)\\
[f_1(b)+f_1(c)+f_2(15)+f_2(12)] - 4f_1(d) &= \operatorname{div}(d)
\end{aligned}$

Since all quantities in the system except $f_1(a)$, $f_1(b)$, $f_1(c)$, and $f_1(d)$ are known, the system has a unique solution. Therefore, the values of the tainted pixels can be calculated by solving the equations. In summary, regardless of the size of the contaminated region, a Poisson equation can be established to reconstruct it as long as the pixel values at the boundaries of the contaminated region (the Dirichlet boundary conditions) and the pixel divergence values within it are known.
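For illustration, the following is a minimal single-band sketch (Python, NumPy/SciPy) of this pipeline, assuming a boolean mask that does not touch the image border; the helper name `poisson_reconstruct` is ours, not part of the original implementation. It mixes the gradient fields with the kernels $k_1$ and $k_2$ described above and solves the sparse 5-point system of equation (2):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_reconstruct(target, patch, mask):
    """Rebuild the masked pixels of `target` from the gradients of `patch`.

    target, patch : 2-D arrays of the same shape (one band).
    mask          : bool array, True on the contaminated region; assumed
                    not to touch the image border.
    """
    target = target.astype(np.float64)
    patch = patch.astype(np.float64)

    # Forward-difference gradients (kernels k1 = [0, -1, 1] and k1^T).
    def grad(img):
        gx, gy = np.zeros_like(img), np.zeros_like(img)
        gx[:, :-1] = img[:, 1:] - img[:, :-1]
        gy[:-1, :] = img[1:, :] - img[:-1, :]
        return gx, gy

    tx, ty = grad(target)
    px, py = grad(patch)
    gx = np.where(mask, px, tx)          # keep patch gradients inside the mask
    gy = np.where(mask, py, ty)

    # Divergence via backward differences (kernels k2 = [-1, 1, 0] and k2^T).
    div = np.zeros_like(target)
    div[:, 1:] += gx[:, 1:] - gx[:, :-1]
    div[1:, :] += gy[1:, :] - gy[:-1, :]

    # Assemble the sparse 5-point system of equation (2) over masked pixels.
    ys, xs = np.nonzero(mask)
    idx = -np.ones(mask.shape, dtype=int)
    idx[ys, xs] = np.arange(len(ys))
    A = lil_matrix((len(ys), len(ys)))
    b = div[ys, xs].copy()
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = -4.0
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = 1.0   # unknown neighbour f1
            else:
                b[k] -= target[ny, nx]    # known boundary value f2 (Dirichlet)
    out = target.copy()
    out[ys, xs] = spsolve(A.tocsr(), b)   # unique solution of the system
    return out
```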

It is worth noting that, according to the Poisson blending principle, the quality of the reconstructed image is sensitive to the pixel values around the contaminated region. To obtain more robust results, we use the morphological dilation operation to extend the boundary of the extracted mask so that the contaminated area is completely covered. Poisson blending is then used to reconstruct the content of the original image on the reference image. Finally, Gaussian filtering is applied to the reference image to separate out its low-frequency information; we choose Gaussian filtering because it is scale and rotation invariant and effectively removes most noise that follows a normal distribution.
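A hedged sketch of these surrounding steps using OpenCV is given below. Here cv2.seamlessClone, OpenCV's built-in Poisson editor, stands in for the solver above, and the structuring-element size and Gaussian sigma are illustrative assumptions rather than values from the paper:

```python
import cv2
import numpy as np

def repair_reference(reference_bgr, original_bgr, cloud_mask):
    """Repair the reference with the original's content. Images are assumed
    co-registered and the same size; cloud_mask is uint8 (255 on cloud or
    shadow pixels, 0 elsewhere)."""
    # 1. Morphological dilation so the mask fully covers the polluted area;
    #    the 7x7 structuring element is an illustrative choice.
    mask = cv2.dilate(cloud_mask, np.ones((7, 7), np.uint8))
    # 2. Poisson blending: clone the original's masked content onto the
    #    reference. The anchor is the mask's bounding-rect centre so the
    #    clone stays co-registered.
    x, y, bw, bh = cv2.boundingRect(mask)
    repaired = cv2.seamlessClone(original_bgr, reference_bgr, mask,
                                 (x + bw // 2, y + bh // 2), cv2.NORMAL_CLONE)
    # 3. Gaussian filtering separates out the low frequency L_ref; the
    #    sigma used here is an assumed value, not one from the paper.
    low_freq = cv2.GaussianBlur(repaired, (0, 0), sigmaX=15)
    return repaired, low_freq
```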

Figure 5. Illustration of the Poisson equation’s establishment and solving. The pixels a, b, c, and d are contaminated by the cloud and its shadow. f1 is described as an unknown intensity function in the contaminated region. f2 is defined as a known intensity function on the unpolluted area of image T.


2.1.2. Optimize $H_{src}^{down}$ based on Gaussian and bilateral filtering

To ensure that the reference image and the original image have the same resolution, we use an average down-sampling technique, described as follows. Assume the width and height of the original image are W and H; based on the resolution relationship between the original and down-sampled images, the width and height of the down-sampled image can be calculated, denoted as w and h. The original image is then divided into multiple image blocks of size $R \times C$, where $R = \lfloor W/w \rfloor + 1$ and $C = \lfloor H/h \rfloor + 1$. Finally, we calculate the mean of the pixels within each image block and use this value as the pixel value of the down-sampled image. Since pixels located at the image boundaries have larger gradient values, $H_{src}^{down}$ extracted directly from $I_{src}^{down}$ by Gaussian filtering has larger values at the corresponding locations. This can lead to obvious splicing traces between the color-corrected images. To demonstrate this problem, we conducted a set of experiments, as shown in Figure 6.
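Before turning to the boundary problem illustrated in Figure 6, here is a minimal sketch of the block-average down-sampling just described; for robustness it splits the image into h × w near-equal blocks rather than using the exact R × C block size, which is a simplification of the paper's scheme:

```python
import numpy as np

def mean_downsample(img, w, h):
    """Down-sample a single-band image of size H x W to h x w by block means."""
    H, W = img.shape[:2]
    out = np.empty((h, w), dtype=np.float64)
    ys = np.linspace(0, H, h + 1, dtype=int)   # block edges along rows
    xs = np.linspace(0, W, w + 1, dtype=int)   # block edges along columns
    for i in range(h):
        for j in range(w):
            # The mean of each block becomes one pixel of the result.
            out[i, j] = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return out
```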

Figure 6. An example is given to illustrate the effect of extracting high-frequency information from an image directly using Gaussian filtering on the results after color correction.


From Figure 6, it can be clearly seen that the image in the red-box region shows obvious stitching traces. To alleviate this problem, a boundary smoothing strategy combining Gaussian filtering and bilateral filtering is proposed. The method has two core elements: first, modifying the invalid values around the image; second, weighting the high-frequency information extracted through Gaussian and bilateral filtering.

Inevitably, there are some invalid zero values around the processed DOM image. Due to these zero values, the convolution results of Gaussian filtering for pixels located at the DOM boundaries tend to be low, which in turn gives the extracted high-frequency information large gradient values at the corresponding positions. To alleviate this problem, we modify the zero values based on the idea of boundary filling, reducing the gradient values of pixels located at the image boundary. Boundary filling can be categorized into four types: zero filling, constant filling, mirror filling, and repeat filling. It is worth noting that constant filling and zero filling are ineffective at reducing the gradient values of the pixels at the boundary. Therefore, the image can be preprocessed by mirror or repeat filling before the two filtering methods are used to extract its high-frequency information. The process of modifying the invalid values around the image based on repeat filling is described in detail below. It consists of two steps. Suppose the size of a remote sensing image is M × N.

In step 1, each row of pixels is traversed in left-to-right, top-to-bottom order, starting from row 0 until row M − 1. While traversing each row, the column numbers of the zero-valued pixels are recorded first. These column numbers form n mutually disjoint sets, $Q_i = \{ x_i \mid x_i \in [p_i, q_i],\ p_i < q_i,\ x_i, p_i, q_i \in [0, N-1] \cap \mathbb{N} \},\ i \in \{1, 2, \ldots, n\}$, where $p_j > q_{j-1} + 1,\ j \in \{2, 3, \ldots, n\}$. Then, the values of the pixels whose column numbers belong to set $Q_i$ are modified according to the following rules.

  1. If $p_i = 0$ and $q_i = N - 1$ hold, record the row number of the current row and do not modify its pixel values.

  2. If $p_i - 1 < 0$, then $S = q_i + 1$; otherwise, $S = p_i - 1$. If $q_i + 1 \geq N$, then $E = p_i - 1$; otherwise, $E = q_i + 1$. After that, the pixels with column numbers $p_i, p_i + 1, \ldots, \lfloor (p_i + q_i)/2 \rfloor$ are replaced with the pixel whose column number is $S$, and the pixels with column numbers $\lfloor (p_i + q_i)/2 \rfloor + 1, \lfloor (p_i + q_i)/2 \rfloor + 2, \ldots, q_i$ are replaced with the pixel whose column number is $E$. Notably, to prevent noise interference, we often replace the invalid pixels with the mean of the non-zero pixels around the pixel with column number $S$ or $E$.

In step 2, the row numbers recorded in the first step likewise form n non-intersecting sets. Based on the same idea as in step 1, the values of $S$ and $E$ are calculated first, after which the pixels in rows $p_i, p_i + 1, \ldots, \lfloor (p_i + q_i)/2 \rfloor$ are replaced by the pixels in row $S$, and the pixels in rows $\lfloor (p_i + q_i)/2 \rfloor + 1, \lfloor (p_i + q_i)/2 \rfloor + 2, \ldots, q_i$ are replaced by the pixels in row $E$.
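A hedged sketch of step 1 (the row-wise replacement) follows; the mean-of-neighbours refinement mentioned in rule 2 is omitted for brevity, and the run boundaries are our reading of the set notation above:

```python
import numpy as np

def fill_zero_runs_rows(img):
    """Step 1: repeat filling of zero runs along each row (single band)."""
    out = img.copy()
    M, N = out.shape
    full_rows = []                                  # rows that are all zero
    for r in range(M):
        cols = np.flatnonzero(out[r] == 0)
        if cols.size == 0:
            continue
        # Split the zero column numbers into maximal runs [p_i, q_i].
        runs = np.split(cols, np.flatnonzero(np.diff(cols) > 1) + 1)
        for run in runs:
            p, q = int(run[0]), int(run[-1])
            if p == 0 and q == N - 1:               # rule 1: whole row invalid
                full_rows.append(r)
                continue
            S = q + 1 if p - 1 < 0 else p - 1       # left anchor column
            E = p - 1 if q + 1 >= N else q + 1      # right anchor column
            mid = (p + q) // 2
            out[r, p:mid + 1] = out[r, S]           # left half copies pixel S
            out[r, mid + 1:q + 1] = out[r, E]       # right half copies pixel E
    return out, full_rows                           # full rows handled in step 2
```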

Although the step of modifying the invalid values around the image proves to be effective in reducing the gradient values of the pixels at the image boundaries, the robustness of this technique is limited and is not sufficient to deal with complex image stitching scenarios. Since bilateral filtering has the effect of preserving the edges of the image, the gradient values in the high-frequency information obtained based on this method are small. Reasonable weighting of the high-frequency information with small gradient values and the high-frequency information extracted by Gaussian filtering can help to further smooth the gradient values of the pixels located at the image boundaries. The steps for determining the weight values are as follows: First, a weight matrix of the same size as the high-frequency information is generated, and all the matrix values are initialized to 1. Subsequently, the weight matrix is convolved using the convolution kernel used in the Gaussian filter. The result of this process is considered the final weight value. Finally, the optimized high-frequency information HO is obtained by the following equation:

(3) $H_O = W \circ H_G + (1 - W) \circ H_B$

where $\circ$ represents element-wise (point-wise) multiplication of matrices; $H_G$ and $H_B$ represent the high-frequency information extracted by Gaussian and bilateral filtering, respectively; and $W$ is the weight matrix.

The important parameters of the Gaussian and bilateral filtering involved in this subsection are described below. The key parameters of Gaussian filtering are the size of the convolution kernel and the standard deviation of the Gaussian distribution. The convolution kernel size $ksize$ is set to one-twentieth of the image diagonal length, and the standard deviation $\sigma$ is calculated according to the following equation:

(4) $\sigma = 0.3 \times \big( (ksize - 1) \times 0.5 - 1 \big) + 0.8$

In addition, the key parameters of bilateral filtering include the size of the convolution kernel and the standard deviations of the Gaussian distributions in the coordinate space and color space, denoted as $Ksize$, $\sigma_{space}$, and $\sigma_{color}$, respectively, where $Ksize = ksize$, $\sigma_{space} = 50$, and $\sigma_{color} = 12.5$; these are all empirical values.
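A minimal sketch of the weighting in equation (3) with the parameter settings above is given below. Note that the zero (constant) border mode in the weight-matrix convolution is our assumption: with replicated borders the weights would remain identically 1, so the taper near the image boundary would disappear.

```python
import cv2
import numpy as np

def optimized_high_frequency(img):
    """Compute H_O of equation (3) for a single-band float image."""
    img = img.astype(np.float32)
    h, w = img.shape[:2]
    ksize = int(np.hypot(h, w) / 20) | 1            # 1/20 of the diagonal, odd
    sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8     # equation (4)
    # High frequency = image minus its low-pass version.
    HG = img - cv2.GaussianBlur(img, (ksize, ksize), sigma)
    # Bilateral filtering preserves edges, so HB has small gradients there.
    # A large d (= ksize) makes bilateralFilter slow; kept for fidelity.
    HB = img - cv2.bilateralFilter(img, ksize, sigmaColor=12.5, sigmaSpace=50)
    # Weight matrix: all-ones image convolved with the same Gaussian kernel.
    # With zero borders, W drops below 1 only near the image boundary,
    # which is exactly where HB should dominate.
    g = cv2.getGaussianKernel(ksize, sigma)
    W = cv2.sepFilter2D(np.ones((h, w), np.float32), -1, g, g,
                        borderType=cv2.BORDER_CONSTANT)
    return W * HG + (1.0 - W) * HB                  # equation (3)
```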

2.2. Color correction

During relative radiometric correction, it is generally assumed that the pixel brightness values of remote sensing images acquired at different times conform to a linear relationship:

(5) $I_{dst}(r, c) = a \times I_{src}(r, c) + b$

where $I_{dst}(r, c)$ and $I_{src}(r, c)$ represent the pixel values at $(r, c)$ in the target and original images, respectively; $r$ and $c$ are the row and column numbers of the pixel; and $a$ and $b$ are the parameters of the linear model. It is worth noting that the down-sampled result $I_{dst}^{down}$ of the target image mentioned in this section refers to the optimized reference image discussed in Section 2.1. Given the wide coverage of satellite images, it is challenging to fit the color differences of the various regions of an image with a single linear model. To address this issue, we propose a linear model construction method based on image blocks. Specifically, for each image block of size w × h, we assume that the pixels in the block adhere to a linear model, which can be expressed by equation (6).

(6) $\sum_{r=1}^{h} \sum_{c=1}^{w} I_{dst}(r, c) = \sum_{r=1}^{h} \sum_{c=1}^{w} \left[ a \times I_{src}(r, c) + b \right]$

Dividing both sides of equation (6) by the number of pixels (w × h) yields equation (7):

(7) $\bar{I}_{dst} = a \times \bar{I}_{src} + b$

where $\bar{I}_{dst}$ and $\bar{I}_{src}$ represent the mean intensities of the image blocks in the target and original images, respectively. Moreover, when dark-toned objects are present in an image block, the block's brightness mainly originates from diffuse light, and in theory it hardly changes after color correction. That is, when $I_{src}(r, c) \to 0$, then $I_{dst}(r, c) \to 0$, and substituting this into equation (6) yields $b \to 0$. Thus, for the image block, it holds that:

(8) $\bar{I}_{dst} \approx a \times \bar{I}_{src} \;\;\Rightarrow\;\; a \approx \dfrac{\bar{I}_{dst}}{\bar{I}_{src}}$

The gain coefficient $a$ of the linear model can be calculated from the corresponding pixels of $I_{src}^{down}$ and $I_{dst}^{down}$, as $I_{src}^{down}$ is obtained by mean down-sampling the original image. To avoid interference from brightness burst points and abnormal values caused by bright ground objects, the gain coefficients of pixels in $I_{src}^{down}$ and $I_{dst}^{down}$ that do not conform to the 3σ principle are set to 1. The abnormal values in the gain coefficient map are then further eliminated based on the 3σ principle, with any abnormal value set to 1. Furthermore, for a source image, its gain coefficient map, denoted as $A$, can be obtained through bilinear up-sampling. Similarly, $I_{dst}^{down}$ and $I_{src}^{down}$ can be up-sampled to the size of the source image via bilinear interpolation. As the up-sampled images are relatively smooth and fit the original image well, they provide the low-frequency information of the target and source images, denoted $L_{dst}$ and $L_{src}$, respectively. Equation (9) can then be derived from equation (5):

(9) $I_{dst} = A \times (I_{src} - L_{src}) + A \times L_{src} + b$

Since $b$ is small relative to $A \times L_{src}$, equation (8) gives $A \times L_{src} + b \approx L_{dst}$, so equation (9) reduces to equation (10):

(10) $I_{dst} = A \times (I_{src} - L_{src}) + L_{dst}$

Formula (10) enables the computation of the color correction outcome for each image. The resultant image comprises the low and high frequencies derived from the color-corrected reference image and stretched original image, respectively. It is important to highlight that color image processing requires prior conversion of the color space from RGB to YCbCr and subsequent band-by-band image correction.
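As an end-to-end illustration, here is a hedged single-band sketch of equation (10); the outlier handling is simplified to a single 3σ pass on the gain map (the paper applies the rule in two stages), and the OpenCV resize calls are our reading of the bilinear up-sampling described above:

```python
import cv2
import numpy as np

def correct_band(I_src, I_src_down, I_dst_down):
    """Correct one band of the full-resolution image I_src (equation (10))."""
    eps = 1e-6
    # Equation (8): per-pixel gain from the two low-resolution images.
    A_down = I_dst_down.astype(np.float64) / (I_src_down.astype(np.float64) + eps)
    # Gains outside the 3-sigma range are reset to 1 (simplified single pass).
    mu, sd = A_down.mean(), A_down.std()
    A_down[np.abs(A_down - mu) > 3 * sd] = 1.0
    H, W = I_src.shape
    up = lambda im: cv2.resize(im.astype(np.float32), (W, H),
                               interpolation=cv2.INTER_LINEAR)
    # Up-sampled images serve as the low-frequency layers L_src and L_dst.
    A, L_src, L_dst = up(A_down), up(I_src_down), up(I_dst_down)
    return A * (I_src - L_src) + L_dst              # equation (10)
```

For color images, each band would be corrected in turn after converting from RGB to YCbCr, as described above.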

Figure 7 depicts two examples that visually illustrate the effectiveness of the proposed approach. The left column compares the color-corrected result (c), generated from a reference image (b) that contains clouds and cloud shadows, with the original image (a). In the red-box area of (c), clouds and cloud shadows appear. Moreover, in the blue-box area, there are significant changes in ground features between (b) and (a), which makes the color of the blue-box area in (c) appear unnatural. In contrast, the result (d) generated by our improved approach using the low-resolution reference image shows significant advantages over (c). The right column of Figure 7 demonstrates another example, where apparent stitching traces appear in the red-box area of (c). Our method addresses this issue effectively, as evidenced by (d). Overall, the proposed approach can effectively smooth the image boundary and reduce the impact of clouds and their shadows in the reference image.

Figure 7. Input images (a); low-resolution reference images from the LocaSpace Viewer software (b); (c) and (d) are images processed by methods without and with the two optimization modules, respectively. The red boxes highlight areas with mosaic traces, clouds, or cloud shadows. The blue box highlights areas with significant changes in ground features between (a) and (b).


3. Experiments

3.1. Experimental datasets

The study compares the performance of various approaches for optimizing color consistency on two datasets, TAIHU and CHINA, provided by the Land Satellite Remote Sensing Application Center, Ministry of Natural Resources of P.R. China. Prior to color correction, all images in the datasets were converted into a unified coordinate system. Subsequently, the method described in Hong et al. (Citation2022) was used to convert the images into 8-bit images, whose color consistency was then optimized to visualize the algorithmic results. Due to the time-consuming nature of optimizing the source images in the CHINA dataset, these images were down-sampled by local averaging, which changed the image resolution from 2.1 m to 50 m.

The CHINA dataset comprises 6,766 ZY3-01/02 images, while the TAIHU dataset contains 41 ZY3-01 images, collectively covering a significant portion of China. ZY3-01 and ZY3-02 are the first and second satellites in China's Resource 3 series, respectively. Given the varied ground object types and complex imaging conditions, the images, acquired between 2012 and 2017, exhibit obvious color inconsistency, making these datasets representative benchmarks for measuring the effectiveness of diverse algorithms. Table 1 contains detailed and specific information about the datasets. Figure 8(a) provides a schematic image of the study area, whereas Figure 8(c) depicts the external reference image obtained from the LocaSpace Viewer software, with a resolution of 363 × 306 m. In addition, the cloud and cloud shadow masks used in the method were extracted manually.

Figure 8. Research data and areas (a) a schematic image of the study area; (b) original and reference images superimposed together; (c) a low-resolution reference image.


Table 1. Detailed and specific information about the datasets.

3.2. Evaluation metrics

This study employs three objective indicators to quantitatively evaluate the effect of color consistency optimization methods. The first measure is the quality considering color Euclidean distance (QCCED), which assesses the impact of different approaches on image quality and the color difference between images. The QCCED value is directly proportional to the color consistency; thus, higher values indicate better color consistency. The second indicator is structural similarity (SSIM), which evaluates the similarity of the structure, contrast, and brightness between two images, with a range between 0 and 1. Higher values indicate less structural difference between the two images. The last indicator is the one-dimensional image entropy (OIE), which calculates the information richness of aggregated features in the gray-level distribution. A higher level of information richness is indicative of better image quality. It is crucial to note that during the quantitative assessment of color images, the CED value in the QCCED indicator should be computed in the CIELAB color space.

(1) Quality Considering Color Euclidean Distance (QCCED):

For two overlapped images Ia and Ib, this metric is calculated as:

(11) $\mathrm{QCCED}(I_a, I_b) = \dfrac{\mathrm{MG}(I_a) + \mathrm{MG}(I_b)}{\mathrm{CED}(I_a, I_b) + c}$

where $c$ is a constant with a value of 1; $I_a$ and $I_b$ are overlapping images after color correction; $\mathrm{MG}(\cdot)$ is the average gradient of an image; and $\mathrm{CED}(\cdot)$ denotes the color Euclidean distance between the overlapping areas of the two images.

(2) Structural Similarity (SSIM)

The calculation formula is as follows:

(12) $\mathrm{SSIM}(I_a, I_b) = S(I_a, I_b) \cdot C(I_a, I_b) \cdot L(I_a, I_b)$

where $S(I_a, I_b)$, $C(I_a, I_b)$, and $L(I_a, I_b)$ are the structure, contrast, and luminance similarities between images $I_a$ and $I_b$, respectively.

(3) 1-D Image Entropy (OIE)

This indicator is defined as:

(13) $\mathrm{OIE} = -\sum_{k=0}^{255} p_k \log_2 p_k$

where $p_k$ denotes the frequency of pixels with value $k$. For more detailed information on the above indicators, please refer to Hong et al. (Citation2022).
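For concreteness, a minimal sketch of the OIE and QCCED computations follows; the finite-difference average gradient and the per-pixel CIELAB Euclidean distance are our reading of MG and CED under the definitions above:

```python
import numpy as np

def oie(gray):
    """One-dimensional image entropy of an 8-bit single-band image (eq. (13))."""
    p = np.bincount(gray.ravel(), minlength=256) / gray.size
    p = p[p > 0]                                  # skip empty bins (0*log 0 = 0)
    return -np.sum(p * np.log2(p))

def mean_gradient(gray):
    """Average gradient magnitude, our reading of MG."""
    gy, gx = np.gradient(gray.astype(np.float64))
    return np.mean(np.hypot(gx, gy))

def qcced(lab_a, lab_b, gray_a, gray_b, c=1.0):
    """QCCED of equation (11) over the co-registered overlap of two images."""
    ced = np.mean(np.linalg.norm(lab_a - lab_b, axis=-1))  # CIELAB distance
    return (mean_gradient(gray_a) + mean_gradient(gray_b)) / (ced + c)
```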

3.3. Comparative experiments

To assess the effectiveness of our proposed algorithm, we compared it with two state-of-the-art methodologies. The first approach, global tilt adjustment (GTA) (Yu et al. Citation2017), compensates for neighboring image contrast, color, and brightness. This method has been integrated into the OrthoVista module in Inpho 7.0 software. The second method, block adjustment-based radiometric normalization (BARN) (Zhang et al. Citation2020), optimizes global color difference using block adjustment, classifies pixels roughly based on the normalized difference vegetation index (NDVI), and further reduces local color inconsistencies in images using a block adjustment strategy for similar pixels. Since the experimental data lacks the near-infrared band, we modified the BARN method and retained only the strategy for optimizing global color difference. Furthermore, for the CHINA dataset with a large number of images, we grouped images and used the BARN strategy to select a reference image for each group. These two methods are currently the mainstream color correction methods and hold significant comparative value. We evaluated the experimental results both qualitatively and quantitatively.

Figure 9 displays the complete mosaic image of the TAIHU dataset after processing with the different methods. The image in (b), processed using the BARN method, exhibits a uniform hue but noticeable stitching traces and color differences in the areas marked by red boxes. Furthermore, this method necessitates the selection of a reference image, tying the hue of the entire mosaic to that reference. The color inconsistencies between images corrected using the GTA approach are substantially reduced. Nevertheless, the GTA approach relies on the images themselves, resulting in significant color differences in the regions marked by red boxes in (c). Even so, visual analysis suggests that the color consistency of (c) is markedly improved compared to (b). Overall, the image in (d), processed using the proposed method, has a consistent color distribution and optimal color consistency, equivalent to that of the reference image.

Table 2 presents the quantitative results obtained by the various algorithms on the TAIHU dataset. The statistical analysis indicates that the QCCED and CED indicators of images processed by our method are significantly superior to those of the GTA and BARN methods, in line with the visual assessment. However, there is a slight discrepancy in the OIE indicator compared with the GTA approach, and relative to the BARN methodology our approach exhibits a noticeable gap in the SSIM indicator. This is because the color distribution of the selected low-resolution reference image differs significantly from that of the original image, which reduces the structural similarity between the resulting and original images.

Table 2. Quantitative results of diverse methodologies on the TAIHU dataset.

Figure 9. Comparison of the results of diverse methods on the TAIHU dataset. (a) input images; (b) and (c) results obtained by the BARN and GTA methodologies, respectively; (d) the result generated by our approach. To exhibit inconsistencies, the red boxes highlight areas with mosaic traces and color differences.


Figure 10 presents the evaluation of the diverse methods on the CHINA dataset, which encompasses a vast portion of China. At a macroscale, it can be observed that, despite the considerable color disparities between the source images, all approaches achieve considerable enhancements. For an objective appraisal, we extracted the images of regions A, B, and C in Figure 10, which exhibit substantial color variations due to sensor distortion and imaging discrepancies. These three sets of experiments allow us to accurately evaluate the performance of the various algorithms on data covering different regions. The quantitative outcomes of the diverse methodologies in the three regions are provided in Table 3. In Figures 11-13, we use a pseudo-color scheme instead of a grayscale scheme to better show the color differences between images.

Table 3. Quantitative results from diverse methodologies in three areas of the CHINA dataset.

Figure 11 depicts the outcome of applying the various color correction approaches to area A of the CHINA dataset. Prior to processing, noticeable disparities in contrast and brightness among the source images led to an uneven light and dark distribution across the entire mosaic image. Our result in Figure 11(d) displays significant superiority over the other algorithms, mitigating the discrepancies in image brightness and contrast, smoothing the color transition between images, and optimizing the light and dark distribution of the mosaic. The GTA result in Figure 11(c) excels in correcting image contrast and brightness, but the color transition between images is stark, particularly in the red boxes. Additionally, the BARN result in Figure 11(b) demonstrates marked overexposed areas within the blue box and apparent stitching traces in the red box. Since the BARN approach necessitates the selection of multiple reference images when processing a large-area dataset, prominent brightness differences between the reference images may result in substantial brightness differences across the entire study area after processing. The results in Table 3 demonstrate that the proposed methodology outperforms the other approaches in the CED and QCCED indicators. Regarding the OIE indicator, the performance of our results is comparable to that of the original image. As mentioned earlier, the significant difference between our results and the original image in the SSIM indicator is attributable to the substantial color inconsistency between the original and reference images. In addition, the tone gradient in our result is caused by a tone gradient in the reference image.

Figure 10. Experimental results on the CHINA dataset. (a) input images; (b) and (c) results corrected by the BARN and GTA methods, respectively; (d) the result generated by our approach. To accurately evaluate algorithms, we selected the results located in the A, B, and C regions from the CHINA dataset for quantitative analysis.


Figure 11. Comparison of results obtained upon using images in area A. (a) input images; (b) and (c) results obtained by the BARN and GTA methodologies, respectively; (d) the result generated by our approach. To display inconsistencies, a pseudo-color scheme is utilized. The red boxes indicate a clear color difference, whereas the blue boxes indicate overexposed areas.


Figure 12. Experimental results obtained upon using images in area B. (a) input images; (b) and (c) results corrected by the BARN and GTA methodologies, respectively; (d) the result generated by our approach. A pseudo-color scheme is employed to depict the color discrepancies in the results. The red boxes indicate obvious color discontinuity, and the blue boxes highlight overexposed areas.


The results obtained by processing the images of region B with the different approaches are presented in Figure 12. It is evident that the overall visual effect of our result in Figure 12(d) is superior to that of the others. The BARN result in Figure 12(b) exhibits a notable brightness difference in the blue box, and apparent stitching traces remain in the red box. Although the GTA result in Figure 12(c) does not show an obviously uneven brightness distribution, a visible color transition between images is still present, as highlighted in the red box. In summary, Figure 12(d) outperforms the other methods in terms of visual effect. In the quantitative evaluation, although GTA performs better than our method in the CED indicator, our method achieves the best performance in the QCCED indicator, indicating that the clarity of our result is considerably better.

Figure 13 illustrates the results of the color correction methods applied to the region C dataset. On the whole, all methods show significant improvements. However, the BARN and GTA results still exhibit color inconsistencies in the areas marked by the red boxes, indicating the limitations of these approaches. As previously discussed, the brightness differences in the region highlighted by the blue box in Figure 13(b) are caused by significant brightness differences between the reference images. Overall, our method produces the most visually satisfying outcome, as depicted in Figure 13(d). This observation is supported by the quantitative results presented in Table 3.

Figure 13. Experimental results obtained using images in area C. (a) input images; (b) and (c) results corrected by the BARN and GTA methodologies, respectively; (d) the result generated by our approach. To exhibit differences, a pseudo-color scheme is used to display the results. The red boxes represent areas with significant color discrepancies, whereas the blue boxes highlight underexposed regions.


In summary, the performance of the BARN methodology is dependent on the reference image chosen, and this selection is especially crucial when processing images with wide coverage. Conversely, when processing simple scene images, the GTA method is generally more effective. However, since the GTA method requires the optimization of all original images simultaneously, it may not produce the most optimal results for a specific region. Moreover, the color information in the image overlap region often fails to conform to the simple models at different stages. As a result, neither of these two methodologies can achieve satisfactory results for images captured under complex imaging conditions. By utilizing the optimized low-resolution reference image, our approach exhibits excellent comprehensive capabilities and produces visually satisfactory outcomes.

To further visualize the effect of the proposed method, Figure 14 shows the experimental results obtained on sample images from the TAIHU dataset and regions A, B, and C. These images are selected from the areas marked by red boxes in the preceding figures. From Figure 14, it can be seen that there are obvious stitching marks between the original images due to color differences, and that very obvious stitching seams remain after processing with the BARN and GTA algorithms. Thanks to the edge smoothing strategy based on Gaussian and bilateral filtering, our method effectively realizes a uniform color transition between images, and its overall color consistency is optimal.

Figure 14. Experimental results obtained using sample images from the TAIHU and CHINA datasets. (a) from left to right, there are the original images from the TAIHU dataset, followed by the images generated by BARN, GTA, and our method; (b), (c), and (d) are images from regions A, B, and C, respectively; from top to bottom, there are the input images, followed by the images corrected by BARN, GTA, and our approach.


4. Conclusion

In order to address the issue of existing algorithms being susceptible to cloud and shadow disturbances in reference images, we propose a color consistency optimization approach that utilizes optimized low-resolution reference images to enhance the quality of remote sensing images. Our methodology comprises three main steps. Firstly, we use Poisson blending to repair areas affected by clouds and cloud shadows, with good results even when using original images with radiation differences as patches. Secondly, we employ a boundary smoothing strategy based on Gaussian and bilateral filtering to achieve uniform color transitions between images. Finally, we use local linear models based on the optimized reference image to reduce color inconsistencies between images. To evaluate the efficacy of our proposed methodology, we selected two regions with different land covers and compared our approach with two state-of-the-art algorithms.

The following conclusions can be drawn:

  1. Quantitative and qualitative evaluations conducted on two datasets with diverse landforms and regions have demonstrated that our approach is both practical and robust.

  2. When compared to two state-of-the-art algorithms, our approach demonstrated an average decrease of 5.324 and 1.205 in the CED indicator, respectively, and a slight increase of 0.142 and 0.443 in the QCCED metric, respectively.

  3. Experimental findings demonstrate that our proposed methodology outperforms other existing approaches in terms of maintaining color consistency and smooth transitions between images.

However, there are limitations to our method, which are summarized as follows:

  1. The quality of the image generated by the proposed algorithm is largely influenced by the quality of the reference image.

  2. The efficiency of producing masks of the changed regions between the reference and original images, which are currently extracted manually, limits the fully automatic processing capability of the proposed method.

We aim to address these limitations in future research.

Acknowledgments

The authors would like to express gratitude to the anonymous reviewers for their valuable comments and suggestions, which helped improve the quality of this paper. The authors would like to thank the Land Satellite Remote Sensing Application Center, Ministry of Natural Resources of P.R. China, for providing ZY3-01/02 satellite imagery.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Additional information

Funding

This study was supported in part by the National Natural Science Foundation of China under Grants 42241164, 41871325, and the National Key R&D Program of China under Grant 2018YFB0505400.

References

  • Chen, C., Z. Chen, M. Li, Y. Liu, L. Cheng, and Y. Ren. 2014. “Parallel Relative Radiometric Normalisation for Remote Sensing Image Mosaics.” Computers & Geosciences 73:28–20. https://doi.org/10.1016/j.cageo.2014.08.007.
  • Cui, H., G. Zhang, T. Wang, X. Li, and J. Qi. 2021. “Combined Model Color-Correction Method Utilizing External Low-Frequency Reference Signals for Large-Scale Optical Satellite Image Mosaics.” IEEE Transactions on Geoscience and Remote Sensing 59 (6): 4993–5007. https://doi.org/10.1109/TGRS.2020.3018591.
  • Deng, Z., H. Sun, S. Zhou, J. Zhao, L. Lei, and H. Zou. 2018. “Multi-Scale Object Detection in Remote Sensing Imagery with Convolutional Neural Networks.” ISPRS Journal of Photogrammetry and Remote Sensing 145:3–22. https://doi.org/10.1016/j.isprsjprs.2018.04.003.
  • Fang, F., T. Wang, Y. Fang, and G. Zhang. 2019. “Fast Color Blending for Seamless Image Stitching.” IEEE Geoscience and Remote Sensing Letters 16 (7): 1115–1119. https://doi.org/10.1109/LGRS.2019.2893210.
  • He, M., J. Liao, D. Chen, L. Yuan, and P. V. Sander. 2019. “Progressive Color Transfer with Dense Semantic Correspondences.” ACM Transactions on Graphics 38 (2): 1–18. https://doi.org/10.1145/3292482.
  • Hong, Z., C. Xu, X. Tong, S. Liu, R. Zhou, H. Pan, Y. Zhang, Y. Han, J. Wang, and S. Yang. 2022. “Efficient Global Color, Luminance, and Contrast Consistency Optimization for Multiple Remote Sensing Images.” IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing 16:622–637. https://doi.org/10.1109/JSTARS.2022.3229392.
  • Hu, C., L. Z. Huo, Z. Zhang, and P. Tang. 2020. “Multi-Temporal Landsat Data Automatic Cloud Removal Using Poisson Blending.” IEEE Access 8:46151–46161. https://doi.org/10.1109/ACCESS.2020.2979291.
  • Hwang, Y., J.-Y. Lee, I. S. Kweon, and S. J. Kim. 2019. “Probabilistic Moving Least Squares with Spatial Constraints for Nonlinear Color Transfer Between Images.” Computer Vision and Image Understanding 180:1–12. https://doi.org/10.1016/j.cviu.2018.11.001.
  • Jalal, R., Z. Iqbal, M. Henry, G. Franceschini, M. S. Islam, M. Akhter, Z. T. Khan, et al. 2019. “Toward Efficient Land Cover Mapping: An Overview of the National Land Representation System and Land Cover Map 2015 of Bangladesh.” IEEE Journal of Selected Topics in Applied Earth Observations & Remote Sensing 12 (10): 3852–3861. https://doi.org/10.1109/JSTARS.2019.2903642.
  • Kim, T., and Y. Han. 2021. “Integrated Preprocessing of Multitemporal Very-High-Resolution Satellite Images via Conjugate Points-Based Pseudo-Invariant Feature Extraction.” Remote Sensing 13 (19): 3990. https://doi.org/10.3390/rs13193990.
  • Li, Y., L. Li, J. Yao, M. Xia, and H. Wang. 2022. “Contrast-Aware Color Consistency Correction for Multiple Images.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15:4941–4955. https://doi.org/10.1109/jstars.2022.3183188.
  • Liu, K., T. Ke, P. Tao, J. He, K. Xi, and K. Yang. 2020. “Robust Radiometric Normalization of Multitemporal Satellite Images via Block Adjustment without Master Images.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 13:6029–6043. https://doi.org/10.1109/jstars.2020.3028062.
  • Liu, P., Y. Niu, J. Chen, and Y. Shi. 2019. “Color Correction for Stereoscopic Images Based on Gradient Preservation.” In Intelligent Computing: Proceedings of the 2019 Computing Conference, London, United Kingdom, 1001–1011.
  • Liu, X., G. Zhou, W. Zhang, and S. Luo. 2021. “Study on Local to Global Radiometric Balance for Remotely Sensed Imagery.” Remote Sensing 13 (11): 2068. https://doi.org/10.3390/rs13112068.
  • Li, Y., H. Yin, J. Yao, H. Wang, and L. Li. 2022. “A Unified Probabilistic Framework of Robust and Efficient Color Consistency Correction for Multiple Images.” ISPRS Journal of Photogrammetry and Remote Sensing 190:1–24. https://doi.org/10.1016/j.isprsjprs.2022.05.009.
  • Moghimi, A., T. Celik, A. Mohammadzadeh, and H. Kusetogullari. 2021. “Comparison of Keypoint Detectors and Descriptors for Relative Radiometric Normalization of Bitemporal Remote Sensing Images.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14:4063–4073. https://doi.org/10.1109/JSTARS.2021.3069919.
  • Moghimi, A., A. Mohammadzadeh, T. Celik, and M. Amani. 2021. “A Novel Radiometric Control Set Sample Selection Strategy for Relative Radiometric Normalization of Multitemporal Satellite Images.” IEEE Transactions on Geoscience and Remote Sensing 59 (3): 2503–2519. https://doi.org/10.1109/TGRS.2020.2995394.
  • Moghimi, A., A. Mohammadzadeh, T. Celik, B. Brisco, and M. Amani. 2022. “Automatic Relative Radiometric Normalization of Bi-Temporal Satellite Images Using a Coarse-To-Fine Pseudo-Invariant Features Selection and Fuzzy Integral Fusion Strategies.” Remote Sensing 14 (8): 1777. https://doi.org/10.3390/rs14081777.
  • Moghimi, A., A. Sarmadian, A. Mohammadzadeh, T. Celik, M. Amani, and H. Kusetogullari. 2022. “Distortion Robust Relative Radiometric Normalization of Multitemporal and Multisensor Remote Sensing Images Using Image Features.” IEEE Transactions on Geoscience & Remote Sensing 60:1–20. https://doi.org/10.1109/TGRS.2021.3063151.
  • Niu, Y., X. Zheng, T. Zhao, and J. Chen. 2019. “Visually Consistent Color Correction for Stereoscopic Images and Videos.” IEEE Transactions on Circuits and Systems for Video Technology 30 (3): 697–710. https://doi.org/10.1109/TCSVT.2019.2897123.
  • Oskarsson, M. 2021. “Robust Image-To-Image Color Transfer Using Optimal Inlier Maximization.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 786–795.
  • Pastucha, E., E. Puniach, W. Gruszczyński, P. Ćwiąkała, W. Matwij, and H. S. Midtiby. 2022. “Relative Radiometric Normalisation of Unmanned Aerial Vehicle Photogrammetry‐Based RGB Orthomosaics.” Photogrammetric Record 37 (178): 228–247. https://doi.org/10.1111/phor.12413.
  • Pérez, P., M. Gangnet, and A. Blake. 2003. “Poisson Image Editing.” ACM Transactions on Graphics 22 (3): 313–318. https://doi.org/10.1145/882262.882269.
  • Su, Z., K. Zeng, L. Liu, B. Li, and X. Luo. 2014. “Corruptive Artifacts Suppression for Example-Based Color Transfer.” IEEE Transactions on Multimedia 16 (4): 988–999. https://doi.org/10.1109/tmm.2014.2305914.
  • Varish, N., M. K. Hasan, A. Khan, A. T. Zamani, V. Ayyasamy, S. Islam, and R. Alam. 2023. “Content-Based Remote Sensing Image Retrieval Method Using Adaptive Tetrolet Transform Based GLCM Features.” Journal of Intelligent & Fuzzy Systems 44 (6): 9627–9650. https://doi.org/10.3233/JIFS-224083.
  • Wang, Z., D. Zhou, X. Li, L. Zhu, H. Gong, and Y. Ke. 2023. “Virtual Image-Based Cloud Removal for Landsat Images.” GIScience & Remote Sensing 60 (1): 2160411. https://doi.org/10.1080/15481603.2022.2160411.
  • Wang, D., C. Zou, G. Li, C. Gao, Z. Su, and P. Tan. 2017. “ℒ0 Gradient-Preserving Color Transfer.” Computer Graphics Forum 36 (7): 93–103. https://doi.org/10.1111/cgf.13275.
  • Xia, M., J. Yao, and Z. Gao. 2019. “A Closed-Form Solution for Multi-View Color Correction with Gradient Preservation.” ISPRS Journal of Photogrammetry and Remote Sensing 157:188–200. https://doi.org/10.1016/j.isprsjprs.2019.09.004.
  • Xia, M., J. Yao, R. Xie, M. Zhang, and J. Xiao. 2017. “Color Consistency Correction Based on Remapping Optimization for Image Stitching.” In Proceedings of the IEEE international conference on computer vision workshops, Venice, Italy. 2977–2984.
  • Xie, R., M. Xia, J. Yao, and L. Li. 2018. “Guided Color Consistency Optimization for Image Mosaicking.” ISPRS Journal of Photogrammetry and Remote Sensing 135:43–59. https://doi.org/10.1016/j.isprsjprs.2017.11.012.
  • Xu, H., Y. Wei, X. Li, Y. Zhao, and Q. Cheng. 2021. “A Novel Automatic Method on Pseudo-Invariant Features Extraction for Enhancing the Relative Radiometric Normalization of High-Resolution Images.” International Journal of Remote Sensing 42 (16): 6153–6183. https://doi.org/10.1080/01431161.2021.1934912.
  • Yoo, E. J., and D.-C. Lee. 2016. “True Orthoimage Generation by Mutual Recovery of Occlusion Areas.” GIScience & Remote Sensing 53 (2): 227–246. https://doi.org/10.1080/15481603.2015.1128629.
  • Yu, L., Y. Zhang, M. Sun, X. Zhou, and C. Liu. 2017. “An Auto-Adapting Global-To-Local Color Balancing Method for Optical Imagery Mosaic.” ISPRS Journal of Photogrammetry and Remote Sensing 132:1–19. https://doi.org/10.1016/j.isprsjprs.2017.08.002.
  • Yu, L., Y. Zhang, M. Sun, and X. Zhu. 2016. “Colour Balancing of Satellite Imagery Based on a Colour Reference Library.” International Journal of Remote Sensing 37 (24): 5763–5785. https://doi.org/10.1080/01431161.2016.1249306.
  • Zhang, X., R. Feng, X. Li, H. Shen, and Z. Yuan. 2020. “Block Adjustment-Based Radiometric Normalization by Considering Global and Local Differences.” IEEE Geoscience and Remote Sensing Letters 19:1–5. https://doi.org/10.1109/LGRS.2020.3031398.
  • Zhang, Y., L. Yu, M. Sun, and X. Zhu. 2017. “A Mixed Radiometric Normalization Method for Mosaicking of High-Resolution Satellite Imagery.” IEEE Transactions on Geoscience and Remote Sensing 55 (5): 2972–2984. https://doi.org/10.1109/TGRS.2017.2657582.
  • Zhou, X. 2015. “Multiple Auto-Adapting Color Balancing for Large Number of Images.” The International Archives of Photogrammetry, Remote Sensing & Spatial Information Sciences 40 (7): 735. https://doi.org/10.5194/isprsarchives-XL-7-W3-735-2015.