Article

A Prior-Knowledge-Based Generative Adversarial Network for Unsupervised Satellite Cloud Image Restoration

Liling Zhao, Xiaoao Duanmu and Quansen Sun
1 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2 School of Automation, Nanjing University of Information Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(19), 4820; https://doi.org/10.3390/rs15194820
Submission received: 1 August 2023 / Revised: 27 September 2023 / Accepted: 28 September 2023 / Published: 4 October 2023

Abstract: High-quality satellite cloud images are of great significance for weather diagnosis and prediction. However, many of these images are degraded by relative motion, atmospheric turbulence, instrument noise, and other factors, and this degradation cannot be completely corrected during the satellite imaging process. It is therefore necessary to further improve satellite cloud image quality for real applications. In this study, we propose an unsupervised image restoration model with a two-stage network: the first stage, the Prior-Knowledge-based Generative Adversarial Network (PKKernelGAN), learns the blur kernel, and the second stage, the Zero-Shot Deep Residual Network (ZSResNet), improves the image quality. In PKKernelGAN, we propose a satellite cloud imaging loss function, a novel objective function that brings the optimization of a generative model into the prior-knowledge domain. In ZSResNet, we build a dataset that pairs the original satellite cloud images, treated as high-quality (HQ) images, with low-quality (LQ) images generated by the blur kernel learned by PKKernelGAN. These innovations lead to a more efficient local structure for satellite cloud image restoration. The original dataset of our experiment is from the Sunflower 8 satellite provided by the Japan Meteorological Agency; it is divided into training and testing sets to train and test PKKernelGAN. ZSResNet is then trained on the “LQ–HQ” image pairs generated by PKKernelGAN. Extensive experiments demonstrate that, compared with other supervised and unsupervised deep learning models for image restoration, our model achieves better performance on different datasets.

1. Introduction

High-quality images are urgently desired in many applications such as surveillance, medical imaging, satellite imaging, and face recognition. Satellite cloud images are important characteristic data for detecting and identifying cloud shapes, structures, brightness, and texture. A high-quality satellite cloud image will undoubtedly provide better data for atmospheric science research on cloud location, intensity, and development prediction. However, due to the limitations of satellite sensor performance, equipment stability, the external environment, data transmission, and other realistic conditions, the quality of satellite cloud images is often degraded, and image detail and texture information are blurred.
Image restoration aims to recover a high-quality image from a given blurred low-quality image. Several reviews [1,2] have summarized image restoration algorithms. Image restoration was first proposed by Harris in the form of spectral extrapolation [3], but it was not widely recognized at first. In 1984, Tsai and Huang [4] proposed a method for restoring a single high-resolution image from low-resolution image sequences, and this technology was then widely studied. In 2014, influenced by deep learning, Dong et al. [5] proposed SRCNN, the first model based on a deep neural network, which uses a three-layer network to simulate the traditional pipeline and achieved the best peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) at the time. Subsequently, supervised models based on deep neural networks, such as VDSR [6], EDSR [7], SRGAN [8], CycleGAN [9], WESPE [10], and DBPI [11], have been proposed and have attained good natural image restoration. However, these methods usually need paired HQ–LQ samples as the training dataset, and it is difficult to acquire such paired samples in real scenarios. Due to the domain gap, these methods produce unpleasant artifacts and fail on real-world images. Recently, ZSSR [12], RealSR [13], BSRGAN [14], and RealESRGAN [15] have recognized the significance of the real-world image restoration task. They propose realistic degradation frameworks for image restoration, which contain kernel estimation methods to preserve the original domain attributes.
In the field of satellite image analysis, many deep-learning-based networks have achieved good performance on satellite cloud images. Nayak et al. [16] proposed a model based on high- and low-frequency cloud image information. Keshk et al. [17] introduced several image networks trained on cloud images. Fu [18] proposed a TV-L1 decomposition algorithm for infrared cloud image restoration. Jin [19] proposed a coupled dictionary learning algorithm, which changed the update strategy of dictionary pairs and introduced an optimal orthogonal matching pursuit algorithm to obtain high-resolution cloud images. Zhou [20] presented an infrared cloud image restoration algorithm based on sparse representation, which makes full use of the structural similarity information contained in infrared cloud image blocks to improve the resolution. Su [21] optimized the global feature information based on a convolutional neural network and achieved better restoration results than interpolation and sparse methods. The above deep-neural-network-based algorithms have made certain contributions to research on satellite cloud image restoration. However, these algorithms use ideal downsampling kernels to simulate the degradation process of satellite cloud images when constructing training datasets. Therefore, it is necessary to estimate the degradation process of meteorological satellite cloud images and propose deep-learning-based image restoration methods corresponding to the degradation characteristics.
As is known, image restoration is typically a highly ill-posed problem since many solutions exist. In general, strong prior knowledge is needed for ill-posed problems. In this paper, a GAN model constrained by the physical imaging prior knowledge of a satellite is proposed for blur kernel estimation. More specifically, we introduce PKKernelGAN, which estimates the blur kernel that best preserves the distribution of the LQ image. The Generator is trained to produce a blurred image such that the Discriminator cannot distinguish between the patch distribution of the blurred image and the patch distribution of the original image. In other words, the Generator is trained to fool the Discriminator into believing that all the patches of the blurred image are actually taken from the original image. The trained Generator is then a blur kernel estimator that generates LQ images with the kernel. PKKernelGAN is fully unsupervised, requiring no training data other than the input image itself. A visual comparison of a cloud image, showing the low-quality image, the high-quality image, and the image restored by our method, is given in Figure 1; the restored image is visibly improved over the low-quality input. The main contributions of this paper are:
  • Physical imaging prior-knowledge introduction. We construct a degradation matrix based on satellite physical imaging prior knowledge for PKKernelGAN training;
  • Prior-knowledge loss function. We propose a satellite imaging loss function, which is a novel objective function that brings satellite physical imaging prior knowledge into the optimization process;
  • A benchmark dataset. We build a dataset that contains the original satellite cloud image acting as a high-quality image paired with low-quality images generated by the blur kernel from PKKernelGAN.
In the following sections, we first review the related works in Section 2. In Section 3, the network architecture and loss function are discussed. Quantitative and qualitative comparisons, among some representative methods and the proposed method, are included in Section 4. Finally, Section 5 concludes this paper.

2. Related Work

2.1. Degradation Process

Image restoration is a task that involves recovering a high-quality (HQ) image from a low-quality (LQ) image. The LQ image is considered the output of a degradation process, which can be represented as follows:
$$I_{LQ} = D(I_{HQ}) + N.$$
The goal of image restoration is to estimate the HQ image $I_{HQ}$ from the observed LQ image $I_{LQ}$ and to reverse the effects of the degradation process $D(\cdot)$ and the noise $N$. This is typically achieved through advanced image restoration techniques, such as super-resolution, deblurring, denoising, and inpainting, which leverage deep learning methods and optimization algorithms to improve the visual quality and fidelity of the reconstructed images.
In practical applications, it is challenging to obtain accurate image blur kernels, and a single low-quality image may correspond to multiple different high-quality images after restoration. Therefore, accurately estimating the blur kernel and employing stable algorithms for image restoration is a highly challenging task. Accordingly, in this work, we consider modeling the satellite imaging process as a constraint to guide the restoration of meteorological satellite images, taking it as a degradation factor.
During the process of receiving meteorological satellite data, cloud images often suffer from image quality degradation issues, such as unclear grayscale levels, low resolution, and the presence of noise. These degradations are caused by factors such as the accuracy of the receiving instrument and atmospheric turbulence [20]. In particular, in the on-orbit operation, various movements of the satellite platform and onboard components can cause jitter in the pointing direction of the camera. This can result in image shifts during the integration imaging process, thereby affecting the image quality. This physical process can often be approximated as a superposition of several sinusoidal vibrations [22], as shown in Formula (2):
$$x(t) = A_1 \sin(2\pi f_1 t + \phi_1) + A_2 \sin(2\pi f_2 t + \phi_2) + \cdots + A_n \sin(2\pi f_n t + \phi_n),$$
where $x(t)$ represents the total signal as a function of time $t$, $A_i$ is the amplitude of the $i$-th sinusoidal vibration, $f_i$ is the frequency of the $i$-th sinusoidal vibration in Hertz (cycles per second), and $\phi_i$ is the initial phase of the $i$-th sinusoidal vibration.
During the push-broom imaging process of meteorological satellites, the pixel values of each row in the image are acquired at different imaging moments. Therefore, the vibration-induced image shift during the integration imaging process not only causes image blurring, but also results in the offset of the integral center position of pixels. The offset amounts for pixels in each row may also differ to some extent. This leads to the irregularity in the sampling positions of satellite detector pixels, resulting in geometric distortions in the image. The mathematical expression for the above-mentioned irregular sampling can be described as Formula (3):
$$\varepsilon(x) = \sum_{i=1}^{M} \frac{A_i}{a} \cos\!\left(2\pi \frac{f_i}{1/T_{\mathrm{int}}}\, x + \phi_i\right),$$
where $\varepsilon$ is the offset of the pixel sampling position $x$, $a$ is the pixel size, $M$ is the number of harmonics, $A_i$ is the amplitude of the $i$-th sinusoidal vibration, $T_{\mathrm{int}}$ is the camera single-level integration time, $f_i$ is the vibration frequency component, and $\phi_i$ is the initial phase of the $i$-th harmonic.
Let us assume that the ideal sampling position (m, n) of the detector pixel in the image plane λ can be represented as Formula (4):
$$\lambda_{mn} = (x_{mn},\, y_{mn}) = (m,\, n).$$
The offset of the pixel center position caused by the vibration-induced image shift can be represented as Formula (5):
$$\varepsilon(m, n) = \big(\varepsilon_x(m, n),\ \varepsilon_y(m, n)\big).$$
Then, the actual sampling position $\Lambda$ of the pixels in the image plane can be represented as Formula (6), where $M \times N$ is the image size:
$$\Lambda = \Omega + \varepsilon(\Omega) = \big\{\big(m + \varepsilon_x(m, n),\ n + \varepsilon_y(m, n)\big)\big\}_{m=1,\, n=1}^{M,\, N},$$
where Ω represents the ideal sampling position, as shown in Formula (7):
$$\Omega = \big([1, M] \times [1, N]\big) \cap \mathbb{Z}^2.$$
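To make the vibration and irregular-sampling model of Formulas (2)–(7) concrete, the following minimal NumPy sketch computes the row-wise sampling offsets and the perturbed sampling grid. All parameter values (amplitudes, frequencies, integration time, image size) are illustrative assumptions rather than measured characteristics of any particular satellite, and only an along-track offset is modeled.

```python
import numpy as np

# Illustrative (assumed) vibration parameters for M = 2 harmonics:
A   = np.array([0.8, 0.3])        # amplitudes A_i (same length units as the pixel size)
f   = np.array([15.0, 60.0])      # vibration frequencies f_i in Hz
phi = np.array([0.0, np.pi / 4])  # initial phases phi_i
a     = 1.0                       # pixel size (normalized), assumed
T_int = 1e-3                      # single-level integration time per row in seconds, assumed

M_img, N_img = 64, 64             # image size M x N

# Formula (3): along-track offset of the sampling position for each image row
# (f_i * T_int is the number of vibration cycles per imaged row).
rows = np.arange(1, M_img + 1)
eps_x = sum((A[i] / a) * np.cos(2 * np.pi * f[i] * T_int * rows + phi[i])
            for i in range(len(A)))

# Formulas (4)-(7): ideal grid Omega and vibration-perturbed sampling grid Lambda.
m_grid, n_grid = np.meshgrid(rows, np.arange(1, N_img + 1), indexing="ij")
Lambda_x = m_grid + eps_x[:, None]   # row positions shifted by the vibration
Lambda_y = n_grid                    # cross-track offset assumed to be zero here

print(Lambda_x[:3, 0])               # first few perturbed row-sampling positions
```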
From the description of the satellite imaging process above, it is evident that the process of image degradation is complex. After satellite launch, it is even more difficult to obtain a definitive image degradation model. Therefore, it is hard to achieve good results by using image restoration methods based on image degradation models. In view of this, this paper will focus on how to directly estimate the complex satellite imaging blur kernels from the obtained cloud images, train deep networks for satellite cloud image restoration, and improve the image restoration effectiveness.

2.2. Kernel Generative Adversarial Networks

Generative adversarial networks (GANs) have been widely studied since 2014. A GAN consists of two models: a generator and a discriminator. High-quality reconstructed images can be achieved through the adversarial training of the generator and discriminator networks. In 2019, KernelGAN [23] was proposed, a deep-learning-based approach for estimating blur kernels from degraded images that is useful in the context of image restoration tasks. The main idea of KernelGAN is to use a Generative Adversarial Network (GAN) to learn the distribution of image blur kernels from a set of real degraded images. The objective of KernelGAN is defined as:
$$G^*(I_{LQ}) = \arg\min_{G}\max_{D}\left\{ \mathbb{E}_{x \sim \mathrm{patches}(I_{LQ})}\big[\,|D(x) - 1| + |D(G(x))|\,\big] + \mathcal{R} \right\},$$
where $I_{LQ}$ is the input image, $G$ is the generator, and $D$ is the discriminator. In Formula (8), $\mathcal{R}$ is the regularization term on the LQ kernel resulting from the generator $G$:
$$\mathcal{R} = \alpha L_{\mathrm{sum\_to\_1}} + \beta L_{\mathrm{boundaries}} + \gamma L_{\mathrm{sparse}} + \delta L_{\mathrm{center}},$$
such that
$$L_{\mathrm{sum\_to\_1}} = \Big| 1 - \sum_{i,j} k_{i,j} \Big|, \quad
L_{\mathrm{boundaries}} = \sum_{i,j} \big| k_{i,j} \cdot m_{i,j} \big|, \quad
L_{\mathrm{sparse}} = \sum_{i,j} \big| k_{i,j} \big|^{1/2}, \quad
L_{\mathrm{center}} = \Big\| (x_0, y_0) - \frac{\sum_{i,j} k_{i,j}\cdot(i, j)}{\sum_{i,j} k_{i,j}} \Big\|_2,$$
where:
  • $L_{\mathrm{sum\_to\_1}}$ encourages the blur kernel $k$ to sum to 1;
  • $L_{\mathrm{boundaries}}$ penalizes non-zero values close to the boundaries, where $m$ is a constant mask of weights growing exponentially with distance from the center of $k$;
  • $L_{\mathrm{sparse}}$ encourages sparsity, preventing the network from over-smoothing kernels;
  • $L_{\mathrm{center}}$ encourages the center of mass of $k$ to lie at the center of the kernel, where $(x_0, y_0)$ denotes the center indices.
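As a concrete illustration of these regularizers, a minimal PyTorch sketch is given below. The weighting coefficients and the exact form of the exponential boundary mask are assumptions made for illustration; they are not the values used in KernelGAN or in this work.

```python
import torch

def kernelgan_regularizers(k, alpha=0.5, beta=0.5, gamma=5.0, delta=1.0):
    """Sketch of the KernelGAN regularization terms for a 2-D kernel k of shape (H, W).
    The coefficients are illustrative assumptions, not the published values."""
    H, W = k.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=k.dtype),
                            torch.arange(W, dtype=k.dtype), indexing="ij")
    center = torch.tensor([(H - 1) / 2.0, (W - 1) / 2.0], dtype=k.dtype)

    # L_sum_to_1: the kernel should integrate to 1.
    l_sum = torch.abs(1.0 - k.sum())

    # L_boundaries: penalize mass far from the center (assumed exponential distance mask).
    dist = torch.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    mask = torch.exp(dist) - 1.0
    l_boundaries = torch.abs(k * mask).sum()

    # L_sparse: encourage a sparse, non-over-smoothed kernel.
    l_sparse = (k.abs() + 1e-8).pow(0.5).sum()

    # L_center: the kernel's center of mass should coincide with its geometric center.
    mass = k.sum() + 1e-8
    com = torch.stack([(k * ys).sum() / mass, (k * xs).sum() / mass])
    l_center = torch.norm(center - com, p=2)

    return alpha * l_sum + beta * l_boundaries + gamma * l_sparse + delta * l_center
```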
The training process of KernelGAN involves minimizing the above objective function, allowing the generator to produce blur kernels that are similar to real blur kernels, which can then be used for image restoration and enhancement tasks. However, when it comes to satellite cloud images, directly applying KernelGAN still cannot effectively learn their complex degradation process. This calls for the further introduction of constraint conditions to guide the learning process of the KernelGAN model.

3. Proposed Method

As analyzed in Section 2.1, the imaging process of a meteorological satellite is complex. Various factors, such as the satellite system operation, orbit altitude, position, and atmospheric turbulence, may introduce interference, which is transmitted along with the satellite images. Therefore, the degradation process of cloud images cannot simply be regarded as a typical downsampling process. Accurately estimating these interference factors leads to a more precise understanding of the degradation process of satellite cloud images and improves the performance of cloud image restoration algorithms.
In this paper, we propose the two-stage model shown in Figure 2. The first stage estimates the blur kernels using the Prior-Knowledge-based Kernel Estimation Generative Adversarial Network (PKKernelGAN), which incorporates satellite imaging prior knowledge. The second stage trains the Zero-Shot Deep Residual Network for Image Restoration (ZSResNet) on the “LQ–HQ” image-pair dataset created from the blur kernels estimated by PKKernelGAN.
The first stage is PKKernelGAN, which is responsible for estimating the blur kernels based on the prior knowledge of the satellite imaging physics process. We utilize a generative adversarial network (GAN) as the backbone for the blur kernel estimation. By considering the satellite imaging prior knowledge (discussed in Section 2.1) as a loss function for the network training, the generator network learns the pixel distribution characteristics of satellite cloud images and generates degraded images based on this distribution as inputs for the discriminator. Then, the discriminator outputs a label matrix and, for each pixel in the matrix, makes a judgment on whether the image patch comes from the original or not. After training, the generator is the estimator for the input image blur kernel.
The second stage is ZSResNet, which is responsible for satellite cloud image restoration. ZSResNet employs the blur kernel estimator produced by PKKernelGAN to perform degradation operations on the original satellite cloud images. These degraded images, together with the original cloud images, form a paired “LQ–HQ” dataset. Then, following the zero-shot learning approach, ZSResNet, an unsupervised model for satellite cloud image restoration, is trained on these “LQ–HQ” image pairs.

3.1. PKKernelGAN

3.1.1. Models

We developed the PKKernelGAN network model for the estimation of the satellite image blur kernel. Given a low-quality input image, the generator (PKKernel-G), shown in Figure 3a, is responsible for learning the blur kernel, while the discriminator (PKKernel-D), shown in Figure 3b, is trained to distinguish between real degraded images and images degraded by the learned blur kernel. The training objective of the generator is achieved when the discriminator finds it challenging to distinguish generated degraded images from real degraded images. At this point, the generator itself acts as the estimated blur kernel network.
PKKernel-G is a generative network composed of five layers of “Convolution+ReLU” convolutional layers. The kernel sizes for the first three layers are 7 × 7, 5 × 5, and 3 × 3, respectively, while the rest of the layers have 1 × 1 kernel sizes. Its purpose is to estimate the blur kernel for the cloud image. On the other hand, PKKernel-D is a discriminative network composed of one layer of 7 × 7 convolution, five layers of “Convolution + SpecNorm + BatchNorm + ReLU” 1 × 1 convolutions, and one layer of “Convolution + SpecNorm + Sigmoid” activation. Its function is to discriminate the similarity between the generated degraded images and the original degraded images.
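A minimal PyTorch sketch of the two networks described above is shown below. The channel widths, strides, input channel count, and the placement of spectral normalization on the first discriminator layer are assumptions made for illustration; only the layer counts, kernel sizes, and layer types follow the text.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class PKKernelG(nn.Module):
    """Generator sketch: five Conv+ReLU layers with kernel sizes 7, 5, 3, 1, 1.
    Channel width (64) and single-channel input are assumptions."""
    def __init__(self, channels=64):
        super().__init__()
        sizes = [7, 5, 3, 1, 1]
        layers, in_ch = [], 1  # single-channel infrared cloud image assumed
        for i, ks in enumerate(sizes):
            out_ch = 1 if i == len(sizes) - 1 else channels
            layers += [nn.Conv2d(in_ch, out_ch, ks, padding=ks // 2), nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)

class PKKernelD(nn.Module):
    """Discriminator sketch: a 7x7 conv, five 1x1 Conv+SpecNorm+BatchNorm+ReLU blocks,
    then a 1x1 Conv+SpecNorm+Sigmoid head producing a per-pixel real/fake label map."""
    def __init__(self, channels=64):
        super().__init__()
        layers = [spectral_norm(nn.Conv2d(1, channels, 7, padding=3))]
        for _ in range(5):
            layers += [spectral_norm(nn.Conv2d(channels, channels, 1)),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [spectral_norm(nn.Conv2d(channels, 1, 1)), nn.Sigmoid()]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)
```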

3.1.2. Loss Function

In order to preserve the spatial structure of the LQ image and improve the visual quality of the restoration image, we define a loss function as the constraint in PKKernelGAN training for a better cloud image blur kernel generator network.
We constructed a loss function $L_{PK}$ consisting of four components: the summation loss, boundary loss, sparsity loss, and imaging degradation loss. The expression is as follows:
$$L_{PK} = \alpha L_{\mathrm{sum\_to\_1}} + \beta L_{\mathrm{boundaries}} + \gamma L_{\mathrm{sparse}} + \lambda L_{\mathrm{cloud}},$$
where $\alpha$, $\beta$, $\gamma$, and $\lambda$ represent the weight coefficients of the loss terms, and $m, n$ denote the pixel coordinates of the blur kernel. $L_{\mathrm{sum\_to\_1}}$ is the summation loss; its purpose is to encourage the blur kernel $k$, the satellite imaging blur kernel output by the PKKernelGAN network, to sum to 1. $L_{\mathrm{boundaries}}$ is the boundary loss; its purpose is to penalize non-zero values that are close to the boundaries, using a constant weight mask that grows exponentially as the distance from the center of $k$ increases. $L_{\mathrm{sparse}}$ is the sparsity loss; its purpose is to encourage sparsity, preventing the network from generating excessively smooth kernels.
In particular, L c l o u d is the imaging degradation loss. The specific calculation method is as follows:
$$L_{\mathrm{cloud}} = \mathrm{VAR}(k, k_{\mathrm{cloud}}) = \big|\, \mathrm{var}(k(m,n)) - \mathrm{var}(k_{\mathrm{cloud}}(m,n)) \,\big| = \left| \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\big(k(m,n)-\bar{k}\big)^{2} - \frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}\big(k_{\mathrm{cloud}}(m,n)-\bar{k}_{\mathrm{cloud}}\big)^{2} \right|,$$
where $k_{\mathrm{cloud}}$ represents the imaging degradation matrix constructed from the meteorological satellite imaging physical prior knowledge described in Section 2.1, $m, n$ are the pixel coordinates of the blur kernel, and $M \times N$ is the size of the blur kernel. The loss function $L_{\mathrm{cloud}}$, based on the satellite physical degradation model, constrains the network model's solution, enabling the generator to estimate the degradation process of satellite cloud images more accurately. As a result, the blur kernel estimation network PKKernelGAN can better incorporate information from the physical imaging prior blur kernel.
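The following PyTorch sketch illustrates how $L_{\mathrm{cloud}}$ and the total loss $L_{PK}$ could be computed for a kernel estimate $k$ and a prior degradation matrix $k_{\mathrm{cloud}}$. The weight values are illustrative assumptions; the paper does not report the coefficients it uses.

```python
import torch

def cloud_prior_loss(k, k_cloud):
    """L_cloud sketch: absolute difference between the (population) variance of the
    estimated kernel k and that of the prior degradation matrix k_cloud (both M x N)."""
    var_k = ((k - k.mean()) ** 2).mean()
    var_c = ((k_cloud - k_cloud.mean()) ** 2).mean()
    return torch.abs(var_k - var_c)

def pk_loss(k, k_cloud, alpha=0.5, beta=0.5, gamma=5.0, lam=1.0):
    """Total PKKernelGAN regularization L_PK; coefficient values are assumptions."""
    H, W = k.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=k.dtype),
                            torch.arange(W, dtype=k.dtype), indexing="ij")
    dist = torch.sqrt((ys - (H - 1) / 2) ** 2 + (xs - (W - 1) / 2) ** 2)

    l_sum = torch.abs(1.0 - k.sum())                       # summation loss
    l_boundaries = torch.abs(k * (torch.exp(dist) - 1.0)).sum()  # boundary loss (assumed mask)
    l_sparse = (k.abs() + 1e-8).pow(0.5).sum()             # sparsity loss
    return (alpha * l_sum + beta * l_boundaries + gamma * l_sparse
            + lam * cloud_prior_loss(k, k_cloud))          # imaging degradation loss
```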

3.2. ZSResNet

3.2.1. Models

In practical applications, deep-learning-based image restoration networks require a “LQ–HQ” paired dataset of satellite cloud images for training. However, directly obtaining such a dataset is challenging. In our proposed method, the first stage involves training the PKKernelGAN, which estimates the blur kernels. This network can infer the mapping between low-quality and high-quality image pairs, enabling us to obtain the dataset required for image restoration. Therefore, based on the first stage model, we acquire the original satellite cloud images and their degraded counterparts as the dataset.
Zero-shot learning, in general, refers to the ability of a model to perform a task without relying on any predefined samples or pretrained networks. In the context of the second stage of our proposed method, zero-shot learning means that the network is trained to perform image restoration without relying on any specific pairs of low-quality and high-quality images. Instead, it learns to extract image-specific internal information during training, which allows it to produce high-quality images for any given low-quality test image.
The network architecture for the second stage ZSResNet, as shown in Figure 4, consists of several residual networks. During the training process, the estimated blur kernel network from the first stage is used to obtain pairs of low-quality and high-quality images from the test images. The convolutional neural network is then trained to perform image restoration using only the test images, learning the internal information specific to the images. During the testing phase, the trained network takes a low-quality test image as input and produces a high-quality image as output.
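A simplified sketch of this zero-shot procedure is shown below: the estimated kernel degrades the test image itself to form an “LQ–HQ” pair, and a small residual network is fitted to that pair with the MSE loss described in Section 3.2.2. The network depth, width, learning rate, and iteration count are assumptions, the sketch restores at the same resolution, and it assumes a single-channel image tensor of shape (1, 1, H, W) with an odd-sized kernel; it does not reproduce the exact architecture of Figure 4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZSResNet(nn.Module):
    """Minimal residual restoration network sketch (depth and width are assumptions)."""
    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(channels, channels, 3, padding=1))
            for _ in range(num_blocks))
        self.tail = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        for block in self.blocks:
            feat = feat + block(feat)       # residual (cross-layer) connection
        return x + self.tail(feat)          # global residual: predict the correction

def zero_shot_train(hq_image, blur_kernel, steps=2000, lr=1e-3):
    """Zero-shot training sketch: the LQ counterpart is synthesized from the test image
    itself with the PKKernelGAN-estimated kernel, then the network is fitted to the pair."""
    net = ZSResNet()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    k = blur_kernel.view(1, 1, *blur_kernel.shape)
    lq_image = F.conv2d(hq_image, k, padding=k.shape[-1] // 2)  # degrade with estimated kernel
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(net(lq_image), hq_image)  # MSE loss of Section 3.2.2
        loss.backward()
        opt.step()
    return net  # at test time, net(lq_test_image) produces the restored image
```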

3.2.2. Loss Function

The Mean Square Error (MSE) is commonly used as a loss function in image reconstruction. MSE measures the average difference between the pixel values of the reconstructed image and the ground truth image, i.e., the square of the pixel differences between the reconstructed and ground truth images, averaged over all pixels. By minimizing the MSE, we encourage the pixel-wise matching between the reconstructed and ground truth images, leading to more accurate reconstruction. The expression for the MSE loss function is as follows:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\big(I_i - \hat{I}_i\big)^{2},$$
where $n$ is the total number of data samples, $I_i$ is the ground truth value of the $i$-th sample, and $\hat{I}_i$ is the value predicted by the model for the $i$-th sample. The MSE loss measures the average squared difference between the predicted values and the ground truth values. It is commonly used as a loss function for regression tasks, such as image restoration, where the goal is to minimize the difference between the restored image and the ground truth high-quality image.

4. Experiment and Analysis

In this section, we present the experimental setup and analysis of the proposed method for satellite cloud image restoration. We first describe the datasets used in the experiments and the implementation details of the PKKernelGAN and ZSResNet models. Next, we evaluate the performance of the proposed method against comparison methods, including mainstream supervised and unsupervised image restoration algorithms. Finally, we conduct ablation studies to analyze the impact of different components of the proposed method on the overall performance.

4.1. Dataset and Implementation Details

The dataset used in this experiment is derived from the Sunflower 8 (Himawari-8) satellite, sourced from the Japan Meteorological Agency. It consists of 2400 infrared-channel images of 800 × 800 pixels with a spatial resolution of 2 km, split into a training set of 2000 images and a testing set of 400 images. This preparation ensures a sufficient and diverse set of samples to train and test the PKKernelGAN and ZSResNet models effectively, allowing us to verify the ability of the models to handle various cloud conditions and produce high-quality restored images, and contributing to a comprehensive evaluation of the proposed method's performance. The code environment used in this study is Python 3.6, and the deep learning frameworks utilized are PyTorch and TensorFlow 1.0. The hardware configuration is as follows: an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz, 16 GB of RAM, and an NVIDIA GeForce GTX 1060 graphics card with 8 GB of dedicated memory.

4.2. Evaluation Metrics

4.2.1. PSNR

The Peak Signal-to-Noise Ratio (PSNR) is a commonly used metric for evaluating the quality of image restoration algorithms. It measures the peak signal-to-noise ratio between the original (ground truth) image and the restored (reconstructed) image. The PSNR is calculated using the mean square error (MSE) between the two images and is expressed in decibels (dB). The formula to calculate the PSNR is as follows:
$$\mathrm{PSNR} = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}}\right),$$
where MAX is the maximum possible pixel value (e.g., 255 for 8-bit images) and MSE is the mean square error between the original high-quality image and the restored image. A higher PSNR value indicates better image quality, as it represents a lower amount of distortion between the restored image and the original image.
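As a simple reference, a minimal NumPy implementation of this metric might look as follows (the function name and the 8-bit default are illustrative choices):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """PSNR in dB between a ground-truth image and a restored image of the same shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```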

4.2.2. SSIM

The Structural Similarity Index (SSIM) is another popular image quality assessment metric used to evaluate the similarity between two images. The SSIM takes into account both the luminance (brightness) and structural information of the images and provides a more perceptually meaningful measure of image quality. The SSIM index is calculated using three components: luminance (L), contrast (C), and structure (S). The formula to compute the SSIM is as follows:
$$\mathrm{SSIM}(I_1, I_2) = \frac{(2\mu_1\mu_2 + c_1)(2\sigma_{12} + c_2)}{(\mu_1^2 + \mu_2^2 + c_1)(\sigma_1^2 + \sigma_2^2 + c_2)},$$
where $I_1$ and $I_2$ are the two images being compared (for example, the original and restored images), $\mu_1$ and $\mu_2$ are the mean values of the two images, $\sigma_1^2$ and $\sigma_2^2$ are their variances, $\sigma_{12}$ is their covariance, and $c_1$ and $c_2$ are two constants added to stabilize the division.
The SSIM values range from −1 to 1, where 1 indicates perfect similarity and −1 indicates complete dissimilarity between the images. Higher SSIM values represent better image quality, as they indicate a higher level of similarity between the original and restored images.
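In practice, both metrics are available in scikit-image; a short usage sketch with placeholder images (the image contents below are synthetic and only for illustration) is:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder 8-bit images standing in for a reference and a restored cloud image.
reference = np.random.randint(0, 256, (800, 800), dtype=np.uint8)
restored = np.clip(reference + np.random.randint(-5, 6, (800, 800)), 0, 255).astype(np.uint8)

print("PSNR:", peak_signal_noise_ratio(reference, restored, data_range=255))
print("SSIM:", structural_similarity(reference, restored, data_range=255))
```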

4.2.3. NIQE

The Naturalness Image Quality Evaluator (NIQE) is a no-reference image quality assessment metric that aims to measure the naturalness of an image. Unlike the PSNR and SSIM, which require a reference image to compare the quality, the NIQE does not rely on any reference image, and is therefore considered a no-reference metric. The NIQE algorithm is designed to capture the natural scene statistics and image artifacts. It is based on the assumption that natural images follow specific statistical properties, and deviations from these properties are indicative of image quality degradation. The NIQE formula can be expressed as follows:
$$\mathrm{NIQE} = \frac{1}{N}\sum_{i=1}^{N}\left[\frac{(m_i - \mu_i)^{2}}{\sigma_i} + \log(\sigma_i)\right],$$
where $N$ is the number of image blocks or patches, $m_i$ is the mean of the $i$-th image block, $\mu_i$ is the mean of the means of all image blocks, and $\sigma_i$ is the standard deviation of the $i$-th image block.
The NIQE is a no-reference image quality metric used to assess the quality of an image based on its naturalness and visual fidelity. It quantifies the deviation of an image from natural image statistics. A lower NIQE value indicates better image quality, as it means the image is closer to natural images in terms of its statistical properties. The NIQE is particularly useful for evaluating the quality of images that are heavily distorted or degraded.

4.3. Experimental Results

Compared to KernelGAN + ZSSR. The satellite cloud images restored with the designed two-stage method are shown in Figure 5. The results demonstrate that our proposed method is effective for satellite cloud image restoration.
Figure 5 shows that our algorithm can estimate the blur kernel more accurately from real-world images and demonstrates a clear advantage in significantly improving the image quality. First, our algorithm performs image restoration using real-world images from the training dataset, with the low-quality images generated by PKKernelGAN, and its results contain more abundant and realistic details. Compared with directly using KernelGAN to generate the low-quality images, our algorithm leverages the prior-knowledge constraints to estimate the blur kernel from real images and therefore has an advantage in generating high-quality restored images. Secondly, during the testing process, our algorithm shows a higher level of accuracy: the unsupervised model ZSResNet, driven by the blur kernel estimated from actual satellite cloud images, produces restoration results of higher quality than the original cloud images. In contrast, directly using ZSSR cannot achieve such accurate blur kernel estimation, which limits its restoration performance. In conclusion, our algorithm, PKKernelGAN + ZSResNet, demonstrates a clear advantage in improving the image quality compared with the direct usage of KernelGAN + ZSSR.
We evaluated the comparison restoration results in Figure 5 using the NIQE metric. Table 1 shows the objective evaluation results. Comparing the proposed method with the KernelGAN + ZSSR approach, our proposed method achieved a lower NIQE score of 5.2080, and the KernelGAN + ZSSR method obtained a higher NIQE score of 6.4841. These results indicate that our method performs better in terms of image quality, with lower artifacts and improved perceptual fidelity for restored satellite cloud images.
Compared to other methods. The satellite cloud images restored with the designed two-stage method are shown in Figure 6. Compared with existing methods, our method produces little noise and few artifacts, indicating that the blur kernel estimated by PKKernelGAN is closer to the real blur kernel. The results demonstrate that our proposed method is effective for satellite cloud image restoration.
In this experiment, we artificially constructed pairs of LQ and HQ images to facilitate image restoration; the LQ images were generated by a blur kernel estimated from real-world satellite cloud images with PKKernelGAN. We compared the results with other supervised and unsupervised algorithms. Compared with the supervised algorithms VDSR and SRGAN, our approach demonstrates a significant advantage in improving the image quality. VDSR and SRGAN are traditional supervised algorithms that may be limited by their network structures and may struggle to capture the image's complex features. Therefore, VDSR and SRGAN cannot learn the blur kernel accurately, leading to limitations in the quality of the restored images. Compared with the unsupervised algorithms RealSR, BSRGAN, and RealESRGAN, our method also achieves more abundant and realistic details in the restored images. The experiments indicate that, while unsupervised deep learning algorithms for image restoration can estimate the image blur kernel to a certain extent, for satellite cloud images, incorporating the satellite imaging physical model into the deep learning network allows for a more accurate estimation of the image blur kernel, leading to superior image reconstruction results.
Table 2 shows the average objective evaluation results of the testing experiment in terms of the PSNR and SSIM metrics, comparing our method with the other algorithms. Our method achieved a higher PSNR of 32.8481 dB and a higher SSIM of 0.9144, indicating better image quality and structural similarity than the other algorithms. The performance of our method was better than that of the supervised algorithms, so its application will not be limited by the availability of “LQ–HQ” image pairs. Furthermore, our method outperforms the unsupervised algorithms, which shows that it is more targeted for image blur kernel estimation. These results demonstrate the superiority of our proposed method in terms of both the PSNR and SSIM for satellite cloud image restoration.
Overall, the quantitative results and visual comparisons demonstrate the effectiveness and superiority of the proposed method for satellite cloud image restoration. The prior knowledge in the PKKernelGAN loss function enables our method to achieve satisfactory performance in various satellite cloud image restoration scenarios. Furthermore, the quantitative results in Table 1 and Table 2 demonstrate that our algorithm achieves superior image restoration quality compared to other supervised and unsupervised methods.
Experiment extension on another dataset. To provide further evidence that our proposed method is useful, we extended our experiments to a new dataset, the “NWPU-RESISC45” dataset, sourced from Google Earth. It consists of 700 images of 256 × 256 × 3 pixels with a spatial resolution of 30 m. Figure 7 presents visual comparisons among the supervised methods, the unsupervised methods, and our method, and conclusions similar to those drawn from Figure 6 can be made. Our method generates much clearer details than the other methods; in particular, on regions with fine-scale aliasing structure in the cloud image, our method performs better. As can be seen in Figure 7, VDSR and SRGAN introduce unpleasant artifacts, while RealSR, BSRGAN, and RealESRGAN produce relatively smooth structures. In contrast, our proposed method suppresses the generation of artifacts and encourages sharp details. Table 3 summarizes the performance of all these methods on the new dataset; the proposed method is better than all the comparison algorithms. This experiment demonstrates the advantage of the proposed method over approaches based on common blur kernel estimation on the new dataset.
More qualitative results. We further analyzed the performance of our algorithm on the Sunflower 8, NWPU-RESISC45, and WHUS2-CRv [24] datasets to find its strengths and limitations. Figure 8 shows the image restoration results; all images in Figure 8 are shown at their original size for comprehensive observation. The proposed method generates much more detail in fine regions, improving the visual quality, so we can conclude that the proposed algorithm is better than the other algorithms on the testing datasets. Figure 9 and Figure 10 show some cases in which the restoration results of all of the comparison methods are not ideal. A common characteristic can be observed in these cases: the images contain not only cloud features but also complex land surface features, such as urban blocks and patchy farmland, and the image restoration algorithms perform poorly in areas where cloud and land surface features overlap. The reasons affecting the algorithm's performance are discussed in Section 4.4.

4.4. Ablation Studies

As mentioned earlier, the proposed method consists of two crucial modules: (1) the loss function incorporating satellite imaging prior knowledge, and (2) the zero-shot-learning-based image restoration network with cross-layer connections. To investigate the contributions of these modules, two additional configurations were designed: kernel estimation without the loss function $L_{\mathrm{cloud}}$ (i.e., the original KernelGAN), and the restoration network with and without the cross-layer connections in the ZSResNet module. The detailed configurations of these two models are as follows.

4.4.1. Effect of Loss Function

The proposed PKKernelGAN network structure is based on KernelGAN. However, the KernelGAN network structure can only simulate common size ranges and shapes of image blur kernels. Whether this blur kernel estimation method is suitable for satellite cloud image data needs to be explored through experiments. Through our research, we found that for complex and specific images like satellite cloud images, KernelGAN’s estimation of the blur kernel is not applicable, as shown in Figure 11. The generated blur kernel under such conditions does not accurately model the degradation process of satellite cloud images and often produces undesirable “rippling” artifacts, resulting in poor visual quality, as shown in Figure 12.
Through experimental studies, we found that for the specific nature of satellite cloud images, incorporating the loss function constructed in Section 3.1 of this paper into the total loss function of the GAN network enables the generator to learn cloud image blur kernels that closely approximate the prior knowledge of satellite imaging. As a result, the blur kernel estimation results are shown in Figure 13, and the satellite cloud image restoration results are shown in Figure 14 and Table 4.
In the experiments presented in Figure 9 and Figure 10, we observed that all the comparison algorithms exhibited unsatisfactory performance in the reconstruction of cloud images containing terrain features. Because the satellite imaging prior knowledge considered in this paper is only a single vibration model, the constraints on kernel estimation may not yet be optimal. We provide visual results based on our algorithm and found that the blur kernel estimated for cloud images with terrain interference differs significantly from the target blur kernel, while the results for cloud images without complex terrain interference are comparatively satisfactory, as shown in Figure 15. This experiment shows that, when the underlying surface in the original image contains non-homogeneous land cover features, our algorithm is not accurate enough for kernel estimation.

4.4.2. Cross Layer for Feature Extraction

In order to enhance the cloud image feature extraction, this paper introduced cross-layer connections between each layer of the ZSResNet. The network structure was modified and tested while keeping the other parameters unchanged to test whether the changes in the network structure led to improvements. The experimental results, as shown in Figure 16 and Table 5, demonstrate that the cross-layer connections network structure used in this paper indeed achieves better image restoration performance from both the visual results and objective evaluation metrics. The improved performance is attributed to two main factors. Firstly, the cross-layer connections allow the features from shallow layers to be continuously propagated to deep layers, enabling the reuse of a large number of features. This leads to generating a significant number of features with fewer convolution steps, which enhances the feature extraction capability of the network while maintaining a certain network depth. Secondly, during the backpropagation process, the shallow layer can receive gradient signals from all subsequent layers, which alleviates the problem of gradient vanishing to some extent during the training process. This aids in stabilizing and accelerating the training process. Overall, the introduction of cross-layer connections in ZSResNet structure improves image restoration performance by enhancing feature extraction and alleviating gradient vanishing issues during training.
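To illustrate this design choice, a minimal sketch of a cross-layer (densely connected) block is given below; the layer count and channel width are assumptions, and the exact connection pattern of Figure 4 may differ.

```python
import torch
import torch.nn as nn

class CrossLayerBlock(nn.Module):
    """Sketch of the cross-layer connection idea: every layer receives the concatenated
    feature maps of all preceding layers, so shallow features are reused in deep layers
    and gradients also reach shallow layers directly during backpropagation."""
    def __init__(self, channels=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels * (i + 1), channels, 3, padding=1),
                nn.ReLU(inplace=True))
            for i in range(num_layers))

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # reuse all earlier features
        return feats[-1]
```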

4.5. Limitations and Discussion

In this paper, we have compared the degenerate kernel estimation performance of different methods and evaluated their strengths and weaknesses. We compared the image restoration results of the KernelGAN and the proposed PKKernelGAN for estimating the blur kernels of satellite cloud images and showed the performance of blur kernel estimation constrained by the loss function from satellite imaging prior knowledge. We highlighted how the incorporation of prior knowledge in PKKernelGAN helps to generate more accurate blur kernel estimates which, in turn, leads to higher-quality image restoration results. Additionally, we compared the performance of the network with and without cross-layer connections to demonstrate the effectiveness of the proposed ZSResNet in improving image restoration. We explained how the zero-shot learning-based cross-layer connections in ZSResNet enhance the network’s ability to handle complex cloud image features, resulting in better image restoration performance. We also extended the experiment in other datasets to verify the effectiveness of the proposed method and find the advantages and limitations. However, the limitations of our algorithm can be overcome. For instance, we can apply our algorithm to meteorological satellite images (specifically designed for cloud observation) without land surface features, which can be easily achieved using satellite image acquisition software. Our algorithm remains competitive for super-resolving meteorological satellite cloud images without interference from land features. Additionally, our algorithm can also be extended to enhance the resolution of resource satellite images, which predominantly capture oceans, deserts, and other homogeneous land cover features. In our future work, we will explore blur kernel estimation methods customized for meteorological and resource satellites. Incorporating more prior knowledge from satellite imaging into kernel estimation constraints will be a central focus of our upcoming research.

5. Conclusions and Future Work

In this work, we proposed a novel approach for satellite cloud image restoration using a combination of PKKernelGAN and ZSResNet. The PKKernelGAN network effectively estimates the blur kernels of satellite cloud images by incorporating prior knowledge from satellite imaging. The ZSResNet network facilitates complex cloud feature extraction and high-quality image restoration. Through the experiments and analysis, we demonstrated the superiority of our proposed approach over existing methods in terms of image quality and visual fidelity. Our results show that by leveraging prior knowledge and incorporating cross-layer connections, our approach achieves more accurate blur kernel estimation and generates high-quality restored cloud images. The combination of these two crucial modules enhances the performance of our image restoration network and outperforms traditional methods.
While our proposed approach shows promising results, there are several avenues for future research and improvements: (1) Expanding the training dataset with diverse cloud cover conditions and land features, establishing new deep learning models considering multiple degradation scenarios, and improving the model's generalization and robustness. (2) Addressing challenges associated with extreme cloud conditions, such as typhoons and cloud movements, to further enhance the model's performance.

Author Contributions

Conceptualization, L.Z.; methodology, L.Z. and X.D.; software, L.Z. and X.D.; validation, X.D.; writing—original draft preparation, L.Z. and X.D.; writing—review and editing, L.Z.; project administration, Q.S.; funding acquisition, L.Z. and Q.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61802199 and 62372235.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Bashir, S.M.A.; Wang, Y.; Khan, M.; Niu, Y. A comprehensive review of deep learning-based single image super-resolution. PeerJ Comput. Sci. 2021, 7, e621.
  2. Chen, H.; He, X.; Qing, L.; Wu, Y.; Ren, C.; Sheriff, R.E.; Zhu, C. Real-world single image super-resolution: A brief review. Inf. Fusion 2022, 79, 124–145.
  3. Harris, J.L. Diffraction and resolving power. JOSA 1964, 54, 931–936.
  4. Tsai, R.Y.; Huang, T.S. Multiframe image restoration and registration. Multiframe Image Restor. Regist. 1984, 1, 317–339.
  5. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the IEEE European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; pp. 184–199.
  6. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654.
  7. Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
  8. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
  9. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
  10. Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; van Gool, L. WESPE: Weakly supervised photo enhancer for digital cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 691–700.
  11. Kim, J.; Jung, C.; Kim, C. Dual back-projection-based internal learning for blind super-resolution. IEEE Signal Process. Lett. 2020, 27, 1190–1194.
  12. Shocher, A.; Cohen, N.; Irani, M. “Zero-shot” super-resolution using deep internal learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3118–3126.
  13. Ji, X.; Cao, Y.; Tai, Y.; Wang, C.; Li, J.; Huang, F. Real-world super-resolution via kernel estimation and noise injection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 466–467.
  14. Zhang, K.; Liang, J.; Van Gool, L.; Timofte, R. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 4791–4800.
  15. Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 1905–1914.
  16. Nayak, G.K.; Jain, S.; Babu, R.V.; Chakraborty, A. Fusion of Deep and Non-Deep Methods for Fast Super-Resolution of Satellite Images. In Proceedings of the 2020 IEEE Sixth International Conference on Multimedia Big Data (BigMM), New Delhi, India, 24–26 September 2020; pp. 267–271.
  17. Keshk, H.M.; Abdel-Aziem, M.M.; Ali, A.S.; Assal, M. Performance evaluation of quality measurement for super-resolution satellite images. In Proceedings of the 2014 Science and Information Conference, London, UK, 27–29 August 2014; pp. 364–371.
  18. Fu, R.; Zhou, Y.; Yan, W. Infrared nephogram super-resolution algorithm based on TV-L1 decomposition. Opt. Precis. Eng. 2016, 24, 937–944.
  19. Jin, W.; Fu, R. Nephogram super-resolution algorithm using over-complete dictionary via sparse representation. J. Remote Sens. 2012, 16, 275–285.
  20. Zhou, Y.; Fu, R.; Yan, W. A Method of Infrared Nephogram Super-resolution Based on Structural Group Sparse Representation. Opto-Electron. Eng. 2016, 43, 126–132.
  21. Su, J. Research on Super-Resolution Reconstruction Algorithm of Infrared Cloud Image Based on Learning. Master's Thesis, University of Chinese Academy of Sciences, Shanghai, China, 2018.
  22. Holst, G.C. Electro-Optical Imaging System Performance; SPIE: Bellingham, WA, USA, 2008.
  23. Bell-Kligler, S.; Shocher, A.; Irani, M. Blind super-resolution kernel estimation using an internal-GAN. Adv. Neural Inf. Process. Syst. 2019, 32, 284–293.
  24. Li, J.; Wu, Z.; Hu, Z.; Zhang, J.; Li, M.; Mo, L.; Molinier, M. Thin cloud removal in optical remote sensing images based on generative adversarial networks and physical model of cloud distortion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 373–389.
Figure 1. Visual comparison (for a better view, zoom-in on a screen) of cloud images. (a) Low-quality image, (b) high-quality image, (c) restored image based on our method.
Figure 2. The proposed method framework with PKKernelGAN and ZSResNet.
Figure 3. The PKKernelGAN network structure.
Figure 4. The ZSResNet network structure.
Figure 5. Visual comparison of cloud images ×2. (a) Original satellite cloud image, (b) details of the original image, (c) details of the image restored using KernelGAN + ZSSR, (d) details of the image restored using our proposed method. The red area is cropped from the different results and enlarged for visual convenience.
Figure 6. Visual comparison (for a better view, zoom-in on a screen) on cloud images ×2. The PSNR/SSIM scores are denoted below the results, respectively.
Figure 7. Visual comparison (for a better view, zoom-in on a screen) of cloud images ×2. The PSNR/SSIM scores are denoted below the results, respectively.
Figure 8. Visual comparison (for a better view, zoom-in on a screen) on cloud images ×2.
Figure 9. Visual comparison (for a better view, zoom-in on a screen) on cloud images ×2.
Figure 10. Visual comparison (for a better view, zoom-in on a screen) on cloud images ×2.
Figure 11. Blind estimation results of the blur kernel without incorporating prior knowledge from satellite imaging. From left to right are the target blur kernel and the estimation results.
Figure 12. Image restoration results using KernelGAN directly for satellite cloud image blur kernel estimation. Four examples from (a) to (d). First row: original images. Second row: restored images with “rippling” artifacts.
Figure 13. Blur kernel estimation results with the inclusion of physical degradation constraints. From left to right are the target blur kernel and the estimation results.
Figure 14. Image restoration results without and with the physical degradation constraints. (a) Original image, (b) restored image without physical degradation constraints, (c) restored image with physical degradation constraints. The red area is cropped from the different results and enlarged for visual convenience.
Figure 15. Image restoration and the kernel estimation results based on our method. (a) Original image with complex terrain interference and the target kernel, (b) restored image and the estimated kernel, (c) original image without complex terrain interference and the target kernel, (d) restored image and the estimated kernel.
Figure 16. Comparison of the image restoration results without and with the cross-layer connections' network structure improvement. (a) Original image, (b) restored image without network improvement, (c) restored image with network improvement. The red area is cropped from the different results and enlarged for visual convenience.
Table 1. Objective evaluation results of the NIQE metric, comparing the proposed method with the KernelGAN + ZSSR approach.

Experimental Groups        1        2        3        4        Average NIQE
PKKernelGAN + ZSResNet     5.1370   5.1760   5.3814   5.1377   5.2080
KernelGAN + ZSSR           7.0103   6.1917   6.2347   6.4995   6.4841
Table 2. Comparisons between different methods (including supervised and unsupervised algorithms). The referenced supervised methods are tested with the models trained by our dataset. The referenced unsupervised methods are tested with their officially provided models. The scale factor is ×2. The best results are denoted in red, and the second-best are denoted in blue.

Experimental Groups                Methods            PSNR      SSIM
supervised/no kernel estimation    VDSR [6]           32.0636   0.9096
                                   SRGAN [8]          31.2723   0.9100
unsupervised/kernel estimation     RealSR [13]        32.5310   0.8963
                                   BSRGAN [14]        30.8979   0.9005
                                   RealESRGAN [15]    31.4926   0.9106
                                   Ours               32.8481   0.9144
Table 3. Comparisons between different methods (including supervised and unsupervised algorithms). The referenced supervised methods are tested with models trained by our dataset. The referenced unsupervised methods are tested with their officially provided models. The scale factor is ×2. The best results are denoted in red, and the second-best are denoted in blue.

Experimental Groups                Methods            PSNR      SSIM
supervised/no kernel estimation    VDSR [6]           28.1352   0.7903
                                   SRGAN [8]          28.1932   0.7920
unsupervised/kernel estimation     RealSR [13]        28.7016   0.7955
                                   BSRGAN [14]        27.4602   0.7540
                                   RealESRGAN [15]    27.7703   0.7927
                                   Ours               28.9268   0.8119
Table 4. Objective evaluation results of the PSNR and SSIM metrics, comparing the image restoration results without and with the physical degradation constraints.

Physical Constraints    Average PSNR/SSIM
without                 25.6633/0.5133
with                    30.0310/0.6232
Table 5. Objective evaluation results of the PSNR and SSIM metrics, comparing the image restoration results without and with the cross-layer connections network structure.

Cross-Layer Connections    Average PSNR/SSIM
without                    24.5646/0.5347
with                       29.5384/0.6121
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
