Article

Blind Remote Sensing Image Deblurring Based on Overlapped Patches’ Non-Linear Prior

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(20), 7858; https://doi.org/10.3390/s22207858
Submission received: 30 August 2022 / Revised: 10 October 2022 / Accepted: 11 October 2022 / Published: 16 October 2022
(This article belongs to the Section Remote Sensors)

Abstract

The remote sensing imaging environment is complex, and many factors cause image blur. Without prior knowledge, a restoration model built to recover clear images can rely only on the observed blurry images. As with the extreme channels, we build our prior from extreme pixels, but we no longer traverse all pixels: features are extracted in units of patches, which are segmented from the image and partially overlap one another. In this paper, we design a new prior, i.e., the overlapped patches' non-linear (OPNL) prior, derived from the ratio of extreme pixels affected by blurring in each patch. Analysis of more than 5000 remote sensing images confirms that the OPNL prior favors clear images over blurry ones in the restoration process. Introducing the OPNL prior increases the complexity of the optimization problem, which can no longer be solved directly. We therefore develop a solver based on the projected alternating minimization (PAM) algorithm combined with the half-quadratic splitting method, the fast iterative shrinkage-thresholding algorithm (FISTA), the fast Fourier transform (FFT), and related techniques. Extensive experiments show that this algorithm is stable and effective and obtains competitive results in restoring remote sensing images.

1. Introduction

In actual imaging processes, the acquired images always face blurring problems caused by the imaging equipment and environment. In order to remove the blurs, image restoration technology has gradually attracted the attention of researchers and developed into a significant branch of image processing. The ideal image degradation process can be simply described as:
$$Y = K \ast X + N, \tag{1}$$
where Y, K, X, and N represent the observed blurry image, blur kernel, original clear image, and noise, respectively, and ∗ represents the convolution operator. In practice, the cause of blurring is unknown, i.e., prior information is lacking, which makes simultaneously solving for the blur kernel and the clear image an ill-conditioned problem with infinitely many solutions. A traditional blind image deblurring algorithm therefore seeks the optimal global solution, i.e., the blur kernel, by optimizing the equation with image information, and then applies a non-blind deblurring algorithm to obtain a clear image.
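To make the degradation model concrete, the following minimal Python sketch simulates Equation (1). The motion kernel, noise level, and random test image are illustrative assumptions, not values used in the experiments below.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_image(X, K, noise_sigma=0.01, rng=None):
    """Simulate Y = K * X + N for a single-channel image X with values in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    Y = fftconvolve(X, K, mode="same")               # K * X
    Y += noise_sigma * rng.standard_normal(Y.shape)  # additive noise N
    return np.clip(Y, 0.0, 1.0)

# Example: horizontal motion blur with a displacement of 10 pixels.
K = np.zeros((11, 11)); K[5, :10] = 1.0; K /= K.sum()
X = np.random.default_rng(0).random((128, 128))      # stand-in for a clear image
Y = blur_image(X, K)
```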
Due to the complexity and diversity of actual imaging situations, parametric models [1,2] obviously cannot describe the blur kernel accurately, so they are not competent for image restoration. Since the work of Rudin et al. [3] in 1992, the theory of partial differential equations has gradually become an essential part of image restoration. Traditional image restoration models can be roughly divided into two frameworks, Maximum A Posteriori (MAP) [4,5,6,7] and Variational Bayesian (VB) [8,9,10,11,12], both grounded in probability theory and statistics. Although the VB framework is more stable, most methods still choose the MAP framework in view of computational cost and complexity. Levin et al. [13] proved that naive MAP approaches with a sparse derivative prior tend toward trivial solutions; introducing appropriate conditions allows the MAP framework to avoid this problem. Under the MAP framework, solving for the blur kernel and the clear image simultaneously can be regarded as solving the standard maximum a posteriori probability:
$$(X, K) = \arg\max P(X, K \mid Y) \propto \arg\max P(Y \mid X, K)\, P(X)\, P(K) \tag{2}$$
where $P(Y \mid X, K)$ is the noise distribution, and $P(X)$ and $P(K)$ are the prior distributions of the clear image and blur kernel, respectively. Taking the negative logarithm of each term in Equation (2) yields the following regularized model:
$$(X, K) = \arg\min \Psi(Y - K \ast X) + \alpha\,\varphi(X) + \beta\,\psi(K) \tag{3}$$
where $\Psi(\cdot)$ is the fidelity term, $\varphi(X)$ and $\psi(K)$ are the regularization functions on X and K, and $\alpha$ and $\beta$ are the corresponding weights.
As one of the most prominent features of an image, edge information has always played an important role in image restoration algorithms. However, when an image lacks sharp edges, prior knowledge that can distinguish clear images from blurry ones must become the core, with image edge information as an auxiliary, to achieve the desired deblurring. Channels are independent planes that store the color information of an image. The channel prior has gradually entered researchers' view since Pan et al. [14] verified that the distribution of dark channel pixels differs significantly between clear and blurry images. The dark channel prior performs well on natural, text, facial, and low-light images. Yan et al. [15] designed a restoration algorithm based on the extreme channel prior, comprising dark and bright channels, to handle images with insufficient dark channel pixels. Ge et al. [16] proposed a non-linear channel prior built on extreme channels, which experiments show effectively solves the performance degradation caused by a lack of extreme pixels. At the same time, local image features used as prior knowledge also shine in the field of image deblurring, e.g., the local maximum gradient prior [17], the local maximum difference prior [18], and the patch-wise minimal pixels prior [19].
In this paper, inspired by the non-linear channel prior and the patch-wise minimal pixels prior, we design a new non-linear prior and develop the corresponding algorithm for remote sensing image restoration. To improve efficiency and reduce running time, our method does not traverse all of an image's pixels when extracting features but instead operates on patches divided from the image. Each patch partially overlaps its neighbors to record more detailed feature information. Then, similarly to how the extreme channels extract image features, the extreme pixels of each patch are found and collected into two sets, i.e., the local minimum intensity set and the local maximum intensity set. We impose a convex L1-norm constraint on the ratio of corresponding elements in the two sets as a non-linear prior. Analysis of more than 5000 remote sensing images shows that the OPNL prior favors clear images over blurry ones. The image restoration model based on the OPNL prior is difficult to solve directly, so we use the half-quadratic splitting method to decompose it into several easier subproblems. The non-linear term is converted into a linear term by constructing a linear operator from the result of the previous iteration. The contributions of this research are as follows:
(1) We propose a new prior based on extreme pixels in patches, i.e., the OPNL prior. Analysis and tests on more than 5000 remote sensing images show that the OPNL prior is more conducive to clear images.
(2) A new image restoration algorithm for remote sensing images is designed based on the OPNL prior, which deblurs effectively while maintaining good convergence and stability.
(3) Experimental results show that the proposed method outperforms the comparison methods on blurry remote sensing images. Even for remote sensing images with complex textures and many details, it still obtains very competitive deblurring results.
The paper is organized as follows: Section 2 reviews achievements in blind image deblurring over the years. Section 3 elaborates the OPNL prior. Section 4 establishes the image restoration model and designs the corresponding optimization algorithm. Section 5 presents the experimental results of our algorithm. Section 6 quantitatively analyzes the performance of our method. Section 7 concludes.

2. Related Work

This section classifies existing blind image deblurring methods into three categories and briefly reviews recent achievements.

2.1. Edge Detection-Based Algorithms

Edges are favored by researchers and appear in many blind image restoration methods because they clearly describe image features and are easy to extract. Joshi et al. [20] proposed using sharp edges to estimate the blur kernel with sub-pixel accuracy and restore a clear image. Cho and Lee [21] argued that the prominent edges in an image play a dominant role in estimating the blur kernel and designed an algorithm including bilateral filtering, shock filtering, and gradient magnitude thresholding to extract strong edges. Xu and Jia [22] proposed a new edge selection method to handle strong edges that are not conducive to image restoration. By analyzing natural images and synthetic structures, Sun et al. [23] obtained appropriate patch priors for image edges and corners and built a new patch-based image deblurring model. It is worth noting that edge detection-based algorithms fail when blurry images lack enough suitable strong edges. Moreover, even image prior-based algorithms still need edge information to build their models, e.g., the enhanced low-rank prior [24], the L0-regularized intensity and gradient prior [25], and the local maximum gradient prior [17].

2.2. Image Priors-Based Algorithms

Such algorithms typically use prior knowledge composed of image features to bias the optimization toward clear images rather than blurry ones. Shan et al. [26] incorporated a noise distribution and a new local smoothness prior into a probabilistic deblurring model to reduce artifacts. Krishnan et al. [27] concluded, after analyzing images with Gaussian blur, that an $L_1/L_2$ regularization prior is more conducive to clear images. Levin et al. [10] improved the conventional MAP algorithm by deriving a simple approximate MAP model without increasing the algorithm's complexity. Kotera et al. [28] showed that a MAP model combined with heavy-tailed priors can obtain good results. Michaeli and Irani [29] used the deviation between actual patches and ideal patches across image scales as a prior for deblurring. Ren et al. [24] used a new enhanced low-rank prior, combining low-rank priors on similar patches of blurry images and their gradient images, to improve deblurring effectiveness. Zhong et al. [30] designed a high-order variational model based on the statistical characteristics of impulse noise, which deblurs while preserving image details under impulse noise interference. After Pan et al. [14] applied dark channels to blind image deblurring with great success, sparse channels became the source of much prior knowledge. Yan et al. [15] proposed a bright channel as the counterpart of the dark channel and combined the two into the extreme channel prior. Several improved priors based on extreme channels have also been successful, e.g., the priors in the restoration models of Yang et al. [31] and Ge et al. [16]. Zhou et al. [32] analyzed the effect of blur on different channels of the color space and established an image restoration model based on a single luminance channel prior, which draws on the idea of the dark channel prior. In recent years, blind restoration algorithms that take local image features as priors have also developed rapidly, e.g., the local maximum gradient prior [17], the local maximum difference prior [18], and the patch-wise minimal pixels prior [19]; our algorithm can also be viewed as such. For nighttime images, Chen et al. [33] developed a new deblurring model by introducing a latent mapping relationship based on the saturated and unsaturated pixels of the images.

2.3. Deep Learning-Based Algorithms

The rapid development of deep learning has made it shine in various fields, and image restoration is no exception. Early learning-based algorithms were still designed with reference to traditional algorithms, e.g., the methods proposed by Sun et al. [34] and Schuler et al. [35]. Li et al. [36] combined deep learning with traditional algorithms, using deep convolutional neural networks for image discrimination and feature extraction; however, this method is unsuitable for highly non-uniform blurs, such as motion blur and defocus blur. Unlike algorithms relying on blur kernels to restore images, end-to-end networks can obtain clear images directly from blurry images through training. Nah et al. [37] creatively proposed a multi-scale convolutional neural network with a multi-scale loss function to remove complex motion blur. Cai et al. [38] introduced channel priors into neural networks and developed the Dark and Bright Channel Prior embedded Network (DBCPeNet), which effectively handles blurring in dynamic scenes. To further improve computational efficiency and deblurring quality, Zhang et al. [39] and Suin et al. [40] both developed patch-based hierarchical networks. Pan et al. [41] aimed for broader functionality, restoring images in more respects: based on physical models, they trained a generative adversarial network framework end to end that solves image deblurring, dehazing, deraining, and related problems. Many algorithms focus on a particular kind of image or blur, e.g., ID-Net [42], DCTResNet [43], and LSFNet [44]. However, owing to their many parameters, deep learning-based algorithms require long-term, large-scale training to obtain excellent results.

3. Overlapped Patches’ Non-Linear Prior

This section introduces the OPNL prior and demonstrates that it favors clear images over blurry ones.

3.1. The Overlapped Patches’ Extreme Intensity Pixels

The OPNL prior is based on sets of extreme pixels from overlapping patches. The extreme pixels in an overlapping patch $i$ are defined as:
$$M_i(X)(i) = \min_{(x,y) \in \Omega_1(i)} \; \min_{c \in \{r,g,b\}} X(x,y,c) \tag{4}$$
$$M_a(X)(i) = \max_{(x,y) \in \Omega_2(i)} \; \max_{c \in \{r,g,b\}} X(x,y,c) \tag{5}$$
where $(x,y)$ are pixel coordinates, $\Omega_1$ and $\Omega_2$ are the domains of patch pixels, and $c$ indexes a color channel. Since conventional remote sensing images have only one channel, Equations (4) and (5) reduce to:
$$M_i(X)(i) = \min_{(x,y) \in \Omega_1(i)} X(x,y) \tag{6}$$
$$M_a(X)(i) = \max_{(x,y) \in \Omega_2(i)} X(x,y) \tag{7}$$
When an image is blurred, as demonstrated by Pan et al. and Yan et al., the extreme pixels in a patch are averaged with surrounding pixels, which causes $M_i$ to increase and $M_a$ to decrease for each patch. The inferences are as follows:
1.
Let $M_i(B)$ and $M_i(C)$ represent the local minimum intensity pixel sets of a blurry image and the corresponding clear image, respectively; then:
$$M_i(B)(i) \geq M_i(C)(i) \tag{8}$$
2.
Let $M_a(B)$ and $M_a(C)$ represent the local maximum intensity pixel sets of a blurry image and the corresponding clear image, respectively; then:
$$M_a(B)(i) \leq M_a(C)(i) \tag{9}$$

3.2. The Overlapped Patches’ Non-Linear Prior

Inspired by Ge et al. [16], in order to enhance the discrimination between clear and blurry images, we use the extreme intensity pixels of the overlapping patches to form the non-linear term OPNL:
$$N(X)(i) = \frac{\min_{(x,y) \in \Omega_1(i)} X(x,y)}{\max_{(x,y) \in \Omega_2(i)} X(x,y)} = \frac{M_i(X)(i)}{M_a(X)(i)} \tag{10}$$
The dimensions of $\Omega_1$ and $\Omega_2$ in the prior are the same. Furthermore, if there is a patch $l$ such that $M_a(X)(l) = 0$ (and hence $M_i(X)(l) = 0$), we set $N(X)(l) = 0$.
For the remaining patches, where $N(X)(i)$ is not 0, it can be deduced that:
$$1 \leq \frac{M_i(B)(i)}{M_i(C)(i)} \leq \frac{M_i(B)(i)\, M_a(C)(i)}{M_i(C)(i)\, M_a(B)(i)} = \frac{N(B)(i)}{N(C)(i)} \tag{11}$$
Accordingly, we design a new prior, $\psi(X) = \|N(X)\|_1$, accumulated with the convex L1-norm. From the above formula, $N(B)(i) \geq N(C)(i)$, i.e., $\|N(B)\|_1 > \|N(C)\|_1$, where the case $\|N(B)\|_1 = \|N(C)\|_1$ is excluded because it would require all elements of image X to be equal. To test the effectiveness of the OPNL prior, we selected 5200 images from the AID dataset [45] and convolved each with one of the eight blur kernels from the Levin dataset [13]. As shown in Figure 1a, large OPNL pixel values are significantly more frequent for blurry images than for clear images. Figure 1b shows that minimizing the OPNL prior in the deblurring model favors a clear image.
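As a concrete illustration, the following Python sketch computes $N(X)$ from Equations (6), (7), and (10) on overlapping patches. The patch size and overlap rate follow Section 4.3; the stride computation and the guard for all-zero patches are our own assumptions.

```python
import numpy as np

def opnl(X, patch_size=20, overlap=0.25):
    """Return the OPNL vector N(X) for a single-channel image; ||N(X)||_1 = N.sum()."""
    stride = max(1, int(patch_size * (1 - overlap)))  # 25% overlap between patches
    mins, maxs = [], []
    H, W = X.shape
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            patch = X[y:y + patch_size, x:x + patch_size]
            mins.append(patch.min())                  # M_i(X)(i), Equation (6)
            maxs.append(patch.max())                  # M_a(X)(i), Equation (7)
    Mi, Ma = np.array(mins), np.array(maxs)
    return np.where(Ma > 0, Mi / np.maximum(Ma, 1e-12), 0.0)  # Equation (10)

# Blur raises patch minima and lowers patch maxima, so ||N(blurry)||_1 should
# exceed ||N(clear)||_1, matching the statistics in Figure 1.
```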

4. Solving Process of Image Deblurring Algorithm

Under the MAP framework, we introduce the OPNL prior into the image deblurring model and develop a corresponding solution algorithm. The objective function of our algorithm is:
$$\min_{K,X} \|K \ast X - Y\|_2^2 + \alpha \|N(X)\|_1 + \beta \|\nabla X\|_0 + \gamma \|K\|_2^2 \tag{12}$$
where α, β, and γ are the weights of the corresponding regularization terms. The first term is the fidelity term, which mitigates the influence of noise on the restoration result when minimized. The remaining terms are the OPNL prior term, the L0 regularization of the image gradient, and the L2-norm constraint that smooths the blur kernel. Directly solving this objective function is impractical, so we use the projected alternating minimization (PAM) method to decompose the model and solve for the clear image and the blur kernel alternately. We replace $M_a(X)$ in the OPNL prior term with $M_a(X_p)$, where $X_p$ is the image result of the previous iteration:
$$\min_X \|K \ast X - Y\|_2^2 + \alpha \left\|\frac{M_i(X)}{M_a(X_p)}\right\|_1 + \beta \|\nabla X\|_0 \tag{13}$$
$$\min_K \|K \ast X - Y\|_2^2 + \gamma \|K\|_2^2 \tag{14}$$
Based on a multi-scale image pyramid, the estimated blur kernel is obtained after iterative looping. Then, with the blur kernel and the blurry image as initial conditions, a non-blind deblurring algorithm is used to restore the clear image.

4.1. Estimating the Latent Image

To further reduce the difficulty of solving, we apply the half-quadratic splitting method, introducing auxiliary variables to separate the prior term and the gradient regularization term. Equation (13) can be rewritten as:
$$\min_{X,t,r} \|K \ast X - Y\|_2^2 + \alpha \left\|\frac{t}{M_a(X_p)}\right\|_1 + \beta \|r\|_0 + \lambda_1 \|M_i(X) - t\|_2^2 + \lambda_2 \|\nabla X - r\|_2^2 \tag{15}$$
where $\lambda_1$ and $\lambda_2$ are penalty parameters, and $t$ and $r$ are auxiliary variables. As $\lambda_1$ and $\lambda_2$ grow to infinity, Equations (13) and (15) become equivalent. With the other variables fixed, each variable can then be solved alternately. Based on the blurry image, the auxiliary variable $t$ is solved from:
$$\min_t \alpha \left\|\frac{t}{M_a(X_p)}\right\|_1 + \lambda_1 \|M_i(X) - t\|_2^2 \tag{16}$$
The relationship between the non-linear operators ($M_i(X)$ and $M_a(X)$) and the pixels of the original image is expressed by constructing the mapping matrices $I$ and $A$:
$$I(i,j) = \begin{cases} 1, & j = \arg\min_{j \in \Omega_1(i)} X(j) \\ 0, & \text{otherwise} \end{cases} \tag{17}$$
$$A(i,j) = \begin{cases} 1, & j = \arg\max_{j \in \Omega_2(i)} X(j) \\ 0, & \text{otherwise} \end{cases} \tag{18}$$
Following Ge et al. [16], with the sparse matrix $I$ computed explicitly, Equation (16) is rewritten as follows:
$$\min_{\mathbf{t}} \alpha \left\|\frac{\mathbf{t}}{\mathbf{Ma}(X_p)}\right\|_1 + \lambda_1 \|\mathbf{I}\mathbf{X} - \mathbf{t}\|_2^2 \tag{19}$$
where $\mathbf{t}$, $\mathbf{X}$, and $\mathbf{Ma}(X)$ are the vector forms of $t$, $X$, and $M_a(X)$, respectively. Substituting $\mathbf{e} = \mathbf{t} / \mathbf{Ma}(X_p)$ elementwise, so that $\mathbf{t} = \mathrm{diag}(\mathbf{Ma}(X_p))\,\mathbf{e}$, we further rewrite Equation (19):
$$\min_{\mathbf{e}} \alpha \|\mathbf{e}\|_1 + \lambda_1 \|\mathbf{I}\mathbf{X} - \mathrm{diag}(\mathbf{Ma}(X_p))\,\mathbf{e}\|_2^2 \tag{20}$$
This is a classic convex L1-regularized problem, which we handle with the fast iterative shrinkage-thresholding algorithm (FISTA) [46], whose shrinkage operator is:
$$D_c(x)_i = \mathrm{sgn}(x_i)\,\max(|x_i| - c,\, 0) \tag{21}$$
The solution process of t is shown in Algorithm 1.
Algorithm 1 Solving the auxiliary variable t in (20).
  • Input: A = diag(Ma(X_p)), B = IX, α, λ1,
  •     g = max(eig(AᵀA)), m = 1, q1 = 1,
  •     maximum iteration M, initial value y1 = s0.
  • While m ≤ M
  •     s_m = D_{α/λ1}( y_m − (1/g) Aᵀ(A y_m − B) )
  •     q_{m+1} = (1 + √(1 + 4 q_m²)) / 2
  •     y_{m+1} = s_m + ((q_m − 1)/q_{m+1}) (s_m − s_{m−1})
  •     m = m + 1
  • End while
  • t = A s_M
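For reference, a minimal dense-matrix sketch of Algorithm 1 in Python follows. The step size and threshold are the standard FISTA [46] choices for this objective; in the real solver, A = diag(Ma(X_p)) would be stored as a sparse matrix.

```python
import numpy as np

def soft_threshold(x, c):
    # Shrinkage operator D_c(x)_i = sgn(x_i) max(|x_i| - c, 0) from Equation (21)
    return np.sign(x) * np.maximum(np.abs(x) - c, 0.0)

def solve_t(A, B, alpha, lam1, max_iter=500):
    """FISTA for min_e alpha*||e||_1 + lam1*||B - A e||_2^2, then t = A e."""
    g = np.linalg.eigvalsh(A.T @ A).max()        # max eigenvalue of A^T A
    L = 2.0 * lam1 * g                           # Lipschitz constant of the smooth term
    s_prev = np.zeros(A.shape[1])
    y, q = s_prev.copy(), 1.0
    for _ in range(max_iter):
        grad = 2.0 * lam1 * (A.T @ (A @ y - B))  # gradient of lam1*||A y - B||^2
        s = soft_threshold(y - grad / L, alpha / L)
        q_next = (1.0 + np.sqrt(1.0 + 4.0 * q * q)) / 2.0
        y = s + ((q - 1.0) / q_next) * (s - s_prev)
        s_prev, q = s, q_next
    return A @ s_prev                            # undo e = t / Ma(X_p): t = A e
```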
Then solve the auxiliary variable r:
$$\min_r \beta \|r\|_0 + \lambda_2 \|\nabla X - r\|_2^2 \tag{22}$$
The result of Equation (22) is:
$$r = \begin{cases} 0, & |\nabla X|^2 < \beta/\lambda_2 \\ \nabla X, & \text{otherwise} \end{cases} \tag{23}$$
After completing the solution of the auxiliary variables, we solve clear image X by:
$$\min_X \|K \ast X - Y\|_2^2 + \lambda_1 \|M_i(X) - t\|_2^2 + \lambda_2 \|\nabla X - r\|_2^2 \tag{24}$$
We replace all terms with their matrix-vector forms:
$$\min_{\mathbf{X}} \|\mathbf{K}\mathbf{X} - \mathbf{Y}\|_2^2 + \lambda_1 \|\mathbf{I}\mathbf{X} - \mathbf{t}\|_2^2 + \lambda_2 \|\nabla \mathbf{X} - \mathbf{r}\|_2^2 \tag{25}$$
where $\mathbf{K}$ is the Toeplitz (convolution matrix) form of the blur kernel. Equation (25) looks like a simple least-squares problem, but the mapping matrix $\mathbf{I}$ prevents it from being solved directly with the fast Fourier transform (FFT). Therefore, we introduce a further auxiliary variable $\mathbf{u}$:
$$\min_{\mathbf{X},\mathbf{u}} \|\mathbf{K}\mathbf{X} - \mathbf{Y}\|_2^2 + \lambda_1 \|\mathbf{I}\mathbf{u} - \mathbf{t}\|_2^2 + \lambda_2 \|\nabla \mathbf{X} - \mathbf{r}\|_2^2 + \lambda_3 \|\mathbf{X} - \mathbf{u}\|_2^2 \tag{26}$$
where $\lambda_3$ is a penalty parameter. We decompose Equation (26) into two sub-problems, solving for the auxiliary variable $\mathbf{u}$ and the clear image $\mathbf{X}$, respectively:
$$\min_{\mathbf{u}} \lambda_1 \|\mathbf{I}\mathbf{u} - \mathbf{t}\|_2^2 + \lambda_3 \|\mathbf{X} - \mathbf{u}\|_2^2 \tag{27}$$
$$\min_{\mathbf{X}} \|\mathbf{K}\mathbf{X} - \mathbf{Y}\|_2^2 + \lambda_2 \|\nabla \mathbf{X} - \mathbf{r}\|_2^2 + \lambda_3 \|\mathbf{X} - \mathbf{u}\|_2^2 \tag{28}$$
Both sub-problems have closed-form solutions. The solution of Equation (27) is:
$$\mathbf{u} = \frac{\lambda_1 \mathbf{I}^T \mathbf{t} + \lambda_3 \mathbf{X}}{\lambda_1 \mathbf{I}^T \mathbf{I} + \lambda_3} \tag{29}$$
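Since each row of $\mathbf{I}$ contains a single 1, $\mathbf{I}^T\mathbf{I}$ is diagonal and Equation (29) reduces to an elementwise division. A minimal dense sketch (a real implementation would keep $\mathbf{I}$ sparse):

```python
import numpy as np

def update_u(I, t, X_vec, lam1, lam3):
    # Equation (29): diag(I^T I)[j] counts how many patch minima map to pixel j
    diag_ITI = (I * I).sum(axis=0)
    return (lam1 * (I.T @ t) + lam3 * X_vec) / (lam1 * diag_ITI + lam3)
```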
Equation (28) is solved by FFT:
$$\mathbf{X} = \mathcal{F}^{-1}\left( \frac{\overline{\mathcal{F}(K)}\,\mathcal{F}(Y) + \lambda_2\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(r) + \lambda_3\,\mathcal{F}(u)}{\overline{\mathcal{F}(K)}\,\mathcal{F}(K) + \lambda_2\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(\nabla) + \lambda_3} \right) \tag{30}$$
where $\mathcal{F}(\cdot)$, $\mathcal{F}^{-1}(\cdot)$, and $\overline{\mathcal{F}(\cdot)}$ denote the FFT, the inverse FFT, and the complex conjugate of the FFT, respectively. The above solution process is summarized in Algorithm 2.
Algorithm 2 Solving the latent image X.
  • Input: blurry image Y and blur kernel K.
  • Initialize λ1, X ← Y.
  • For i ← 1 to 5 do
  •     Solve t using Algorithm 1.
  •     Initialize λ3.
  •     For j ← 1 to 4 do
  •         Solve u using Equation (27).
  •         Initialize λ2.
  •         While λ2 < λ2max
  •             Solve r using Equation (22).
  •             Solve X using Equation (28).
  •             λ2 ← 2 λ2.
  •         End while
  •         λ3 ← 4 λ3.
  •     End for
  •     λ1 ← 4 λ1.
  • End for
  • Output: latent image X.
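To make the inner updates of Algorithm 2 concrete, the following sketch implements the hard-threshold solution (23) for r and the FFT closed form (30) for X under periodic boundary conditions. The psf2otf helper and the forward-difference gradient filters are standard choices that the paper does not spell out.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a small filter to `shape` and center it at the origin for the FFT."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)
    return np.fft.fft2(pad)

def update_r_and_X(Y, K, X_cur, u, lam2, lam3, beta):
    dx = np.array([[1.0, -1.0]])                 # horizontal forward difference
    dy = np.array([[1.0], [-1.0]])               # vertical forward difference
    FK = psf2otf(K, Y.shape)
    FDx, FDy = psf2otf(dx, Y.shape), psf2otf(dy, Y.shape)
    # r-update, Equation (23): keep only gradients with |grad X|^2 >= beta/lam2
    gx = np.real(np.fft.ifft2(FDx * np.fft.fft2(X_cur)))
    gy = np.real(np.fft.ifft2(FDy * np.fft.fft2(X_cur)))
    mask = (gx**2 + gy**2) >= beta / lam2
    rx, ry = gx * mask, gy * mask
    # X-update, Equation (30)
    num = (np.conj(FK) * np.fft.fft2(Y)
           + lam2 * (np.conj(FDx) * np.fft.fft2(rx) + np.conj(FDy) * np.fft.fft2(ry))
           + lam3 * np.fft.fft2(u))
    den = np.abs(FK)**2 + lam2 * (np.abs(FDx)**2 + np.abs(FDy)**2) + lam3
    return np.real(np.fft.ifft2(num / den))
```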

4.2. Estimating the Blur Kernel

As proposed in the literature [14,15,16,17,18,19,25,31,32], a more accurate blur kernel can be estimated by replacing the image intensity term in Equation (14) with the image gradient term. Equation (14) is modified as follows:
$$\min_K \|\nabla X \ast K - \nabla Y\|_2^2 + \gamma \|K\|_2^2 \tag{31}$$
The blur kernel is then estimated by FFT:
$$K = \mathcal{F}^{-1}\left( \frac{\overline{\mathcal{F}(\nabla X)}\,\mathcal{F}(\nabla Y)}{\overline{\mathcal{F}(\nabla X)}\,\mathcal{F}(\nabla X) + \gamma} \right) \tag{32}$$
Note that the estimated blur kernel K must be projected to be non-negative and then normalized. The solution process is shown in Algorithm 3.
Algorithm 3 Estimating the blur kernel K.
  • Input: blurry image Y.
  • Initialize K with the result from the coarser level.
  • While i ≤ max_iter do
  •     Solve for X using Algorithm 2.
  •     Solve for K using Equation (31).
  • End while
  • Output: blur kernel K and intermediate latent image X.
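A minimal sketch of the kernel update (32) follows, including the non-negativity and normalization projection noted above. It reuses the psf2otf helper from the previous sketch; the centered crop that recovers a fixed-size kernel from the full-size inverse FFT is our assumption.

```python
import numpy as np

def estimate_kernel(X, Y, ksize, gamma):
    """Gradient-domain kernel estimate, Equation (32), for odd ksize."""
    dx = np.array([[1.0, -1.0]]); dy = np.array([[1.0], [-1.0]])
    num = np.zeros(X.shape, dtype=complex)
    den = np.full(X.shape, gamma, dtype=complex)
    for d in (dx, dy):                           # accumulate both gradient directions
        FD = psf2otf(d, X.shape)
        FgX, FgY = FD * np.fft.fft2(X), FD * np.fft.fft2(Y)
        num += np.conj(FgX) * FgY
        den += np.conj(FgX) * FgX
    K_full = np.fft.fftshift(np.real(np.fft.ifft2(num / den)))
    cy, cx = X.shape[0] // 2, X.shape[1] // 2    # kernel is centered after fftshift
    K = K_full[cy - ksize // 2 : cy + (ksize + 1) // 2,
               cx - ksize // 2 : cx + (ksize + 1) // 2]
    K = np.maximum(K, 0.0)                       # non-negativity projection
    return K / max(K.sum(), 1e-12)               # normalization to unit sum
```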

4.3. Details about the Algorithm

This section describes the parameter settings of our algorithm. The algorithm uses a coarse-to-fine image pyramid with a down-sampling factor of √2/2, and the number of loops at each layer is 5. In the loop of each layer, the algorithm completes image estimation, blur kernel estimation, and normalization in turn. The blur kernel estimated at the current layer is then expanded by up-sampling and passed to the next layer. The solution process is illustrated in Figure 2. Based on extensive experiments, we usually set α = 0.002–0.01, β = 0.002–0.004, and γ = 2; the patch_size is 20 × 20, and the patch overlap rate is f = 25%. The maximum number of loops in Algorithm 1 is set to 500. None of these parameters is unique; each can be adjusted as needed. Finally, using the blurry image and the estimated blur kernel as initial conditions, a clear image is obtained with a non-blind deblurring algorithm.
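The coarse-to-fine structure can be sketched as below. The inner image and kernel estimation pass (Algorithms 2 and 3) is abstracted into a `refine` callback so the skeleton stays self-contained; the interpolation choices are our assumptions, while the √2/2 factor and five loops per layer follow the text.

```python
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(Y, ksize, refine, n_levels=5, factor=np.sqrt(2) / 2, loops=5):
    """refine(Ys, K) -> (X, K): one image + kernel estimation pass at one scale."""
    K = None
    for level in range(n_levels - 1, -1, -1):        # coarsest level first
        s = factor ** level
        Ys = zoom(Y, s, order=1)                     # down-sampled blurry image
        ks = max(3, int(round(ksize * s)) | 1)       # odd kernel size at this scale
        if K is None:
            K = np.full((ks, ks), 1.0 / ks**2)       # flat init at the coarsest level
        else:
            K = zoom(K, (ks / K.shape[0], ks / K.shape[1]), order=1)  # up-sample kernel
        K = np.maximum(K, 0); K /= K.sum()           # keep K a valid kernel
        for _ in range(loops):                       # 5 loops per pyramid layer
            X, K = refine(Ys, K)
    return X, K
```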

5. Experimental Results

Remote sensing interpretation experts selected a total of 10,000 remote sensing images covering 30 scene types from Google Earth imagery. These images constitute a large-scale aerial image dataset, the AID dataset [45]. All experimental tests are conducted on the AID dataset. This section shows the processing ability of our algorithm for remote sensing images by comparing it with algorithms based on four different priors: the dark channel (Dark) prior [14], the L0-regularized intensity and gradient (L0) prior [25], the patch-wise minimal pixels (PMP) prior [19], and the non-linear channel (NLC) prior [16].

5.1. Simulated Remote Sensing Image Experiment

First, we test the ability of our algorithm to remove the kinds of blur often encountered in remote sensing imaging. We selected four high-quality images from the AID dataset, shown in Figure 3, and added motion blur, Gaussian blur, and defocus blur for testing. Three representative full-reference evaluation indicators are used: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) [47], and Root Mean Square Error (RMSE).
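For reproducibility, the three metrics can be computed as in the following sketch, using standard scikit-image and NumPy implementations rather than the authors' exact evaluation code:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clear, restored, data_range=1.0):
    """Return (PSNR, SSIM, RMSE) for a restored image against its clear reference."""
    psnr = peak_signal_noise_ratio(clear, restored, data_range=data_range)
    ssim = structural_similarity(clear, restored, data_range=data_range)
    rmse = float(np.sqrt(np.mean((clear - restored) ** 2)))
    return psnr, ssim, rmse
```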

5.1.1. Motion Blur

We set the motion blur parameters as follows: the angle is 0°, and the displacement is 10 pixels. Table 1 shows the evaluation results for images with motion blur processed by all algorithms. For motion blur, the L0 method performs poorly; using the L0-norm to constrain both the pixels and gradients of the image causes severe over-sharpening. The dark channel prior is very effective for images with few texture details, but slight residual over-sharpening remains on other images. Although PMP, NLC, and our method produce good visual results in most cases, our method achieves higher objective scores. For Figure 3b, with its very complex texture, the other methods fail to smooth the tiny details, which results in restored images with irregular edges and poor objective scores. Figure 4 shows the restoration results for some of the images.

5.1.2. Gaussian Blur

We set the Gaussian blur parameters as follows: the size is 20 × 20, and the standard deviation is 0.5. Table 2 shows the evaluation results for images with Gaussian blur processed by all algorithms. Under Gaussian blur, the images processed by Dark, L0, and PMP all exhibit over-sharpening that grows more serious as texture complexity increases; in particular, Figure 3b cannot be restored. The NLC algorithm generally performs well, and most of its results have good visual quality; however, it retains too many tiny details on Figure 3b, which degrades image quality. By comparison, the images restored by our algorithm have the advantage in both visual quality and objective scores. Figure 5 shows the restoration results for some of the images.

5.1.3. Defocus Blur

We set the defocus blur radius to 2. Table 3 shows the evaluation results for images with defocus blur processed by all algorithms. For defocus blur, Dark and L0 cannot recover as much image detail, and their results also show varying degrees of over-sharpening. The other three methods perform well overall. Still, for Figure 3b with its complex texture, only NLC and our method achieve visually satisfactory results, and our algorithm is slightly better on the objective scores. Comparing the results of these methods shows that our algorithm can competently handle the effect of defocus blur on image quality. Figure 6 shows the restoration results for some of the images.

5.2. Real Remote Sensing Image Experiment

Finally, we examine the ability of our algorithm to solve real problems. A total of five images were used for testing: four blurry images selected from the AID dataset and a target image obtained in our experiments, as shown in Figure 7. Since original reference images are unavailable, the full-reference indicators are replaced by no-reference indicators: Entropy (E) [48], Average Gradient (AG), and Point sharpness (P) [49]. Table 4 shows the evaluation results on the real remote sensing images. Although Dark and L0 obtain higher objective scores, their results suffer serious over-sharpening, which leaves image edges unsmooth and details looking messy. PMP performs excellently on remote sensing images with little detail; however, when an image contains more tiny details, its restorations leave artifacts and generate false information. Compared with NLC, our algorithm achieves equally competitive subjective visual quality and slightly better objective scores. The comprehensive evaluation shows that our algorithm solves practical problems well. Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12 show the restoration results.

6. Analysis and Discussion

This section presents the performance analysis of our proposed algorithm, including the effectiveness of OPNL prior, the influence of hyper-parameters, algorithm convergence, computational speed, and algorithm limitations. All the tests are based on the Levin dataset [13], consisting of four images and eight blur kernels. To maintain test accuracy, we uniformly specify the number of iterations and estimated blur kernel size and use the same non-blind restoration method for all algorithms. Quantitative evaluation parameters are chosen as Error-Ratio [13], Peak-Signal-to-Noise Ratio (PSNR), Structural-Similarity (SSIM) [47], and Kernel Similarity [50]. All experiments are run on a computer with an Intel Core i5-1035G1 CPU and 8 GB RAM.

6.1. Effectiveness of the OPNL Prior

Theoretically, the OPNL prior favors clear images in the minimization problem, i.e., a restoration model based on the OPNL prior can accomplish image deblurring. However, its performance in practice still needs quantitative evaluation. Figure 13 compares our algorithm with Dark, L0, PMP, and NLC. The cumulative Error-Ratios of all methods except L0 differ little, but our algorithm achieves higher PSNR and SSIM. In summary, the OPNL prior has proven effective for restoring degraded images in both theory and practice.

6.2. Effect of Hyper-Parameters

The proposed restoration model mainly involves five hyper-parameters: α, β, γ, patch_size, and the overlap rate (f). To explore the impact of each hyper-parameter on the results, we adopt a single-variable method, changing only one parameter at a time and computing the kernel similarity between the estimated blur kernel and the ground truth kernel. The experimental results are shown in Figure 14. A large number of experiments show that the algorithm is stable: its results are not affected by significant changes in the hyper-parameters.

6.3. Algorithm Convergence and Running Time

The projected alternating minimization (PAM) algorithm aims to find the optimal solution of the image restoration model. Reference [6] shows that delayed normalization of the blur kernel in the PAM iterations makes total-variation-based algorithms converge. Compared with reference [6], the OPNL prior and the L0-norm of the image gradient in our algorithm undoubtedly increase the model's complexity. The PAM algorithm and the half-quadratic splitting method simplify the restoration model into several sub-problems, each with a convergent solution; however, the convergence of the overall restoration model still needs verification. Based on the Levin dataset, convergence can be quantitatively tested by computing the mean value of the objective function in Equation (12) and the mean kernel similarity over a number of iterations at the optimal scale of the image pyramid. From the results in Figure 15, our algorithm converges after about 20 iterations, and the kernel similarity stabilizes after about 25 iterations, both of which demonstrate the effectiveness of our method.
In addition, we also test the running time of each algorithm, shown in Table 5. By comprehensive comparison, our algorithm obtains more competitive results in less time.

6.4. Algorithm Limitations

Although our algorithm performs well, it still has limitations. First, it cannot handle blur and noise simultaneously: its deblurring ability decreases when the image is severely polluted by noise, such as stripe noise caused by the non-uniform response of the detector, so image restoration then requires an additional denoising step. In addition, the proposed algorithm builds an image pyramid and applies the PAM algorithm, the half-quadratic splitting method, and other components in the loop of each layer. This structure inevitably increases computational complexity and running time, which is a common problem of traditional algorithms. Future research will therefore focus on designing algorithms with broader applicability and faster operation based on the OPNL prior, creating conditions for practical engineering applications.

7. Conclusions

We reduce the algorithm's computational complexity and running time by extracting image features not from all pixels but from partially overlapping patches segmented from the image. From the extreme pixels extracted from each patch, we design the overlapped patches' non-linear prior, which has been shown to favor clear images in the energy minimization problem, together with the corresponding image deblurring algorithm. A large number of comparative experiments confirm that the restoration results this algorithm obtains on remote sensing images are better than those of the other algorithms. Even for remote sensing images with complex texture details, our algorithm still restores satisfactory images. We believe the proposed algorithm can further promote research on remote sensing image restoration technology.

Author Contributions

Conceptualization, Z.Z. and L.Z.; methodology, Z.Z. and L.Z.; funding acquisition, L.Z. and W.X.; resources, L.Z. and W.X.; writing—original draft preparation, Z.Z. and L.Z.; writing—review and editing, Z.Z., L.Z., W.X., T.G., X.W. and B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62075219; and the Key Technological Research Projects of Jilin Province, China, under Grant 20190303094SF.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The AID data used in this paper are available at the following link: http://www.captain-whu.com/project/AID/ (accessed on 27 April 2022). The Levin data used in this paper are available at the following link: www.wisdom.weizmann.ac.il/~levina/papers/LevinEtalCVPR09Data.zip (accessed on 9 May 2022).

Acknowledgments

The authors are very grateful to the editors and anonymous reviewers for their constructive suggestions. The authors also thank Ge Xianyu for his help, which is of great significance.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
OPNL      Overlapped Patches' Non-Linear
PAM       Projected Alternating Minimization
FISTA     Fast Iterative Shrinkage-Thresholding Algorithm
MAP       Maximum A Posteriori
VB        Variational Bayes
DBCPeNet  Dark and Bright Channel Prior embedded Network
ID-Net    Identity Document Network
DCTResNet Discrete Cosine Transform Residual Network
LSFNet    Long-Short-exposure Fusion Network
AID       Aerial Image Dataset
PSNR      Peak Signal-to-Noise Ratio
SSIM      Structural Similarity
RMSE      Root Mean Square Error
Dark      Dark Channel
L0        L0-regularized intensity and gradient
PMP       Patch-wise Minimal Pixels
NLC       Non-Linear Channel
E         Entropy
AG        Average Gradient
P         Point sharpness

References

1. Rugna, J.D.; Konik, H. Automatic blur detection for meta-data extraction in content-based retrieval context. In Proceedings of the Internet Imaging V, San Jose, CA, USA, 18–22 January 2004; Volume 5304, pp. 285–294.
2. Liu, R.; Li, Z.; Jia, J. Image partial blur detection and classification. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
3. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
4. Chan, T.; Wong, C.K. Total variation blind deconvolution. IEEE Trans. Image Process. 1998, 7, 370–375.
5. Liao, H.; Ng, M.K. Blind Deconvolution Using Generalized Cross-Validation Approach to Regularization Parameter Estimation. IEEE Trans. Image Process. 2011, 20, 670–680.
6. Perrone, D.; Favaro, P. Total Variation Blind Deconvolution: The Devil Is in the Details. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2909–2916.
7. Rameshan, R.M.; Chaudhuri, S.; Velmurugan, R. Joint MAP Estimation for Blind Deconvolution: When Does It Work? In Proceedings of the 8th Indian Conference on Vision, Graphics and Image Processing, Mumbai, India, 16–19 December 2012; p. 50.
8. Likas, A.; Galatsanos, N. A variational approach for Bayesian blind image deconvolution. IEEE Trans. Signal Process. 2004, 52, 2222–2233.
9. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing Camera Shake from a Single Photograph. ACM Trans. Graph. 2006, 25, 787–794.
10. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the CVPR, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2657–2664.
11. Wipf, D.; Zhang, H. Analysis of Bayesian Blind Deconvolution. In Proceedings of the Energy Minimization Methods in Computer Vision and Pattern Recognition, Lund, Sweden, 19–21 August 2013; pp. 40–53.
12. Perrone, D.; Diethelm, R.; Favaro, P. Blind Deconvolution via Lower-Bounded Logarithmic Image Priors. In Proceedings of the Energy Minimization Methods in Computer Vision and Pattern Recognition, Hong Kong, China, 13–16 January 2015; pp. 112–125.
13. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971.
14. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind Image Deblurring Using Dark Channel Prior. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636.
15. Yan, Y.; Ren, W.; Guo, Y.; Wang, R.; Cao, X. Image Deblurring via Extreme Channels Prior. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6978–6986.
16. Ge, X.; Tan, J.; Zhang, L. Blind Image Deblurring Using a Non-Linear Channel Prior Based on Dark and Bright Channels. IEEE Trans. Image Process. 2021, 30, 6970–6984.
17. Chen, L.; Fang, F.; Wang, T.; Zhang, G. Blind Image Deblurring With Local Maximum Gradient Prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1742–1750.
18. Liu, J.; Tan, J.; He, L.; Ge, X.; Hu, D. Blind Image Deblurring via Local Maximum Difference Prior. IEEE Access 2020, 8, 219295–219307.
19. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.K. A Simple Local Minimal Intensity Prior and an Improved Algorithm for Blind Image Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 2923–2937.
20. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
21. Cho, S.; Lee, S. Fast Motion Deblurring. ACM Trans. Graph. 2009, 28, 1–8.
22. Xu, L.; Jia, J. Two-Phase Kernel Estimation for Robust Motion Deblurring. In Proceedings of the 11th European Conference on Computer Vision: Part I, Heraklion, Greece, 5–11 September 2010; pp. 157–170.
23. Sun, L.; Cho, S.; Wang, J.; Hays, J. Edge-based blur kernel estimation using patch priors. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013; pp. 1–8.
24. Ren, W.; Cao, X.; Pan, J.; Guo, X.; Zuo, W.; Yang, M.H. Image Deblurring via Enhanced Low-Rank Prior. IEEE Trans. Image Process. 2016, 25, 3426–3437.
25. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. L0-Regularized Intensity and Gradient Prior for Deblurring Text Images and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 342–355.
26. Shan, Q.; Jia, J.; Agarwala, A. High-Quality Motion Deblurring from a Single Image. ACM Trans. Graph. 2008, 27, 1–11.
27. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240.
28. Kotera, J.; Šroubek, F.; Milanfar, P. Blind Deconvolution Using Alternating Maximum a Posteriori Estimation with Heavy-Tailed Priors. In Proceedings of the 15th International Conference on Computer Analysis of Images and Patterns (CAIP), York, UK, 27–29 August 2013; pp. 59–66.
29. Michaeli, T.; Irani, M. Blind Deblurring Using Internal Patch Recurrence. In Proceedings of the 13th European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; pp. 783–798.
30. Zhong, Q.; Wu, C.; Shu, Q.; Liu, R.W. Spatially adaptive total generalized variation-regularized image deblurring with impulse noise. J. Electron. Imaging 2018, 27, 1–21.
31. Yang, D.; Wu, X. Dual-Channel Contrast Prior for Blind Image Deblurring. IEEE Access 2020, 8, 227879–227893.
32. Zhou, L.; Liu, Z. Blind Deblurring Based on a Single Luminance Channel and L1-Norm. IEEE Access 2021, 9, 126717–126727.
33. Chen, L.; Zhang, J.; Lin, S.; Fang, F.; Ren, J.S. Blind Deblurring for Saturated Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6304–6312.
34. Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 769–777.
35. Schuler, C.J.; Hirsch, M.; Harmeling, S.; Schölkopf, B. Learning to Deblur. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 1439–1451.
36. Li, L.; Pan, J.; Lai, W.S.; Gao, C.; Sang, N.; Yang, M.H. Blind image deblurring via deep discriminative priors. Int. J. Comput. Vis. 2019, 127, 1025–1043.
37. Nah, S.; Kim, T.H.; Lee, K.M. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 257–265.
38. Cai, J.; Zuo, W.; Zhang, L. Dark and Bright Channel Prior Embedded Network for Dynamic Scene Deblurring. IEEE Trans. Image Process. 2020, 29, 6885–6897.
39. Zhang, H.; Dai, Y.; Li, H.; Koniusz, P. Deep Stacked Hierarchical Multi-Patch Network for Image Deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5971–5979.
40. Suin, M.; Purohit, K.; Rajagopalan, A.N. Spatially-Attentive Patch-Hierarchical Network for Adaptive Motion Deblurring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
41. Pan, J.; Dong, J.; Liu, Y.; Zhang, J.; Ren, J.; Tang, J.; Tai, Y.W.; Yang, M.H. Physics-Based Generative Adversarial Models for Image Restoration and Beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 2449–2462.
42. Tian, H.; Sun, L.; Dong, X.; Lu, B.; Qin, H.; Zhang, L.; Li, W. A Modeling Method for Face Image Deblurring. In Proceedings of the 2021 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), Macau, China, 5–7 December 2021; pp. 37–41.
43. Maharjan, P.; Xu, N.; Xu, X.; Song, Y.; Li, Z. DCTResNet: Transform Domain Image Deblocking for Motion Blur Images. In Proceedings of the 2021 International Conference on Visual Communications and Image Processing (VCIP), Munich, Germany, 5–8 December 2021; pp. 1–5.
44. Chang, M.; Feng, H.; Xu, Z.; Li, Q. Low-Light Image Restoration With Short- and Long-Exposure Raw Pairs. IEEE Trans. Multimedia 2022, 24, 702–714.
45. Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
46. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
47. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
48. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423.
49. Wang, H.; Zhong, W.; Wang, J. Research of Measurement for Digital Image Definition. J. Image Graph. 2004, 9, 828–831.
50. Hu, Z.; Yang, M.H. Good Regions to Deblur. In Proceedings of the 12th European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; pp. 59–72.
Figure 1. (a) The average pixel value distribution of OPNL for 5200 images. (b) The average value of OPNL prior for 5200 images.
Figure 2. A brief flow chart of our algorithm.
Figure 3. Selected remote sensing images (simulated). (a) School, (b) Mountain, (c) River, (d) Park.
Figure 4. The restoration results of different methods for Figure 3a,d with motion blur. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 5. The restoration results of different methods for Figure 3b,c with Gaussian blur. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 6. The restoration results of different methods for Figure 3b,d with defocus blur. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 7. Selected remote sensing images (real). (a) Port, (b) School∗, (c) Airport, (d) Square, (e) Target Image.
Figure 8. The restoration results of different methods for Figure 7a. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 9. The restoration results of different methods for Figure 7b. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 10. The restoration results of different methods for Figure 7c. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 11. The restoration results of different methods for Figure 7d. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 12. The restoration results of different methods for Figure 7e. (a) Blurry Image, (b) Dark [14], (c) L0 [25], (d) PMP [19], (e) NLC [16], (f) Ours.
Figure 13. Quantitative evaluation on the benchmark dataset [13]. (a) Comparisons in terms of cumulative Error-Ratio. (b) Comparisons in terms of average PSNR. (c) Comparisons in terms of average SSIM.
Figure 14. Sensitivity analysis of the hyper-parameters in our method. (a) Effect of α on kernel similarity. (b) Effect of β on kernel similarity. (c) Effect of γ on kernel similarity. (d) Effect of patch_size on kernel similarity. (e) Effect of f on kernel similarity.
Figure 15. Convergence analysis of the proposed method. (a) The average value of the objective function (12) under the optimal scale of the image pyramid. (b) Kernel Similarity.
Table 1. Objective Evaluation Results of Remote Sensing Images with Motion Blur.

| Method | Figure 4a PSNR | SSIM | RMSE | Figure 4b PSNR | SSIM | RMSE |
|---|---|---|---|---|---|---|
| Dark [14] | 24.9247 | 0.8343 | 2.08×10⁻⁴ | 8.7337 | 0.1263 | 5.94×10⁻⁵ |
| L0 [25] | 17.389 | 0.6035 | 2.81×10⁻⁴ | 9.6171 | 0.2032 | 7.51×10⁻⁵ |
| PMP [19] | 25.5164 | 0.8466 | 1.07×10⁻⁴ | 10.4565 | 0.3279 | 1.78×10⁻⁴ |
| NLC [16] | 26.0395 | 0.8514 | 1.55×10⁻⁴ | 10.8196 | 0.3499 | 2.67×10⁻⁴ |
| Ours | 27.0279 | 0.8594 | 9.21×10⁻⁵ | 17.0709 | 0.5676 | 6.28×10⁻⁵ |

| Method | Figure 4c PSNR | SSIM | RMSE | Figure 4d PSNR | SSIM | RMSE |
|---|---|---|---|---|---|---|
| Dark [14] | 29.9602 | 0.8143 | 2.14×10⁻⁴ | 17.9493 | 0.6659 | 2.19×10⁻⁴ |
| L0 [25] | 25.1494 | 0.7386 | 2.43×10⁻⁴ | 14.139 | 0.534 | 2.52×10⁻⁴ |
| PMP [19] | 29.9402 | 0.8139 | 2.31×10⁻⁴ | 22.5968 | 0.7977 | 2.64×10⁻⁴ |
| NLC [16] | 28.9047 | 0.7755 | 2.45×10⁻⁴ | 23.4888 | 0.8028 | 1.91×10⁻⁴ |
| Ours | 31.0716 | 0.8327 | 1.90×10⁻⁴ | 24.4108 | 0.83 | 1.82×10⁻⁴ |
Table 2. Objective Evaluation Results of Remote Sensing Images with Gaussian Blur.

| Method | Figure 4a PSNR | SSIM | RMSE | Figure 4b PSNR | SSIM | RMSE |
|---|---|---|---|---|---|---|
| Dark [14] | 12.0795 | 0.4805 | 3.46×10⁻⁴ | 1.2252 | 0.0336 | 4.41×10⁻⁴ |
| L0 [25] | 10.3751 | 0.4253 | 4.18×10⁻⁴ | 1.1397 | 0.0072 | 4.24×10⁻⁴ |
| PMP [19] | 13.9406 | 0.5948 | 3.19×10⁻⁴ | 4.3068 | 0.065 | 3.41×10⁻⁵ |
| NLC [16] | 21.32 | 0.8159 | 3.88×10⁻⁴ | 11.8677 | 0.4549 | 4.76×10⁻⁴ |
| Ours | 22.741 | 0.8074 | 2.17×10⁻⁴ | 17.7199 | 0.5875 | 1.16×10⁻⁴ |

| Method | Figure 4c PSNR | SSIM | RMSE | Figure 4d PSNR | SSIM | RMSE |
|---|---|---|---|---|---|---|
| Dark [14] | 24.927 | 0.6602 | 1.55×10⁻⁴ | 11.4001 | 0.4046 | 1.18×10⁻⁴ |
| L0 [25] | 14.5766 | 0.2655 | 2.11×10⁻⁴ | 7.5502 | 0.3127 | 4.96×10⁻⁵ |
| PMP [19] | 26.9558 | 0.7792 | 1.61×10⁻⁴ | 14.1703 | 0.5774 | 1.08×10⁻⁴ |
| NLC [16] | 27.4396 | 0.7738 | 1.45×10⁻⁴ | 20.1692 | 0.7744 | 3.40×10⁻⁵ |
| Ours | 31.132 | 0.8943 | 1.43×10⁻⁴ | 23.8839 | 0.8481 | 4.37×10⁻⁶ |
Table 3. Objective Evaluation Results of Remote Sensing Images with Defocus Blur.

| Method | Figure 4a PSNR | SSIM | RMSE | Figure 4b PSNR | SSIM | RMSE |
|---|---|---|---|---|---|---|
| Dark [14] | 25.7106 | 0.868 | 9.54×10⁻⁵ | 6.528 | 0.0915 | 1.55×10⁻⁴ |
| L0 [25] | 20.6962 | 0.759 | 6.79×10⁻⁵ | 10.413 | 0.3589 | 2.42×10⁻⁴ |
| PMP [19] | 26.0404 | 0.8709 | 6.82×10⁻⁵ | 18.9793 | 0.7026 | 1.78×10⁻⁴ |
| NLC [16] | 27.2596 | 0.892 | 2.45×10⁻⁵ | 21.3035 | 0.7968 | 1.29×10⁻⁴ |
| Ours | 28.7608 | 0.909 | 1.87×10⁻⁵ | 24.8868 | 0.8684 | 3.96×10⁻⁵ |

| Method | Figure 4c PSNR | SSIM | RMSE | Figure 4d PSNR | SSIM | RMSE |
|---|---|---|---|---|---|---|
| Dark [14] | 31.7588 | 0.8616 | 1.60×10⁻⁴ | 21.7453 | 0.781 | 8.82×10⁻⁵ |
| L0 [25] | 30.6986 | 0.8476 | 1.82×10⁻⁴ | 16.2103 | 0.6318 | 6.96×10⁻⁵ |
| PMP [19] | 32.1846 | 0.8671 | 1.83×10⁻⁴ | 27.5899 | 0.8956 | 4.23×10⁻⁵ |
| NLC [16] | 30.2685 | 0.8337 | 1.39×10⁻⁴ | 27.5728 | 0.8918 | 9.69×10⁻⁵ |
| Ours | 32.478 | 0.869 | 1.35×10⁻⁴ | 28.2989 | 0.8983 | 3.00×10⁻⁵ |
Table 4. Objective Evaluation Results on Real Remote Sensing Images.

| Method | Fig. 7a E | AG | P | Fig. 7b E | AG | P | Fig. 7c E | AG | P | Fig. 7d E | AG | P | Fig. 7e E | AG | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Dark [14] | 6.4757 | 0.0181 | 0.1263 | 6.9836 | 0.0968 | 0.6781 | 7.2132 | 0.0925 | 0.6286 | 6.9755 | 0.0267 | 0.1833 | 7.1581 | 0.0976 | 0.6845 |
| L0 [25] | 6.5899 | 0.0316 | 0.2196 | 6.9261 | 0.1269 | 0.8879 | 7.2365 | 0.1072 | 0.7283 | 6.9977 | 0.0328 | 0.2266 | 7.1004 | 0.1023 | 0.7172 |
| PMP [19] | 6.4734 | 0.0182 | 0.1252 | 6.9269 | 0.0752 | 0.5251 | 7.2233 | 0.0852 | 0.5765 | 6.9745 | 0.0268 | 0.1843 | 7.1725 | 0.0845 | 0.5918 |
| NLC [16] | 6.4623 | 0.0168 | 0.1156 | 6.8072 | 0.0459 | 0.3182 | 7.2153 | 0.0587 | 0.3963 | 6.9682 | 0.0249 | 0.1715 | 7.2145 | 0.0408 | 0.2828 |
| Ours | 6.4826 | 0.0189 | 0.1301 | 6.8313 | 0.0509 | 0.3528 | 7.2225 | 0.0681 | 0.4592 | 6.9798 | 0.029 | 0.199 | 7.2279 | 0.0434 | 0.3008 |
Table 5. Running Time (in seconds) Comparison.

| Method | 125 × 125 | 255 × 255 | 600 × 600 |
|---|---|---|---|
| Dark [14] | 53.31 | 166.83 | 790.27 |
| L0 [25] | 12.92 | 31.61 | 146.96 |
| PMP [19] | 18.59 | 19.79 | 48.19 |
| NLC [16] | 25.33 | 74.47 | 475.18 |
| Ours | 20.08 | 51.07 | 371.61 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Zhang, Z.; Zheng, L.; Xu, W.; Gao, T.; Wu, X.; Yang, B. Blind Remote Sensing Image Deblurring Based on Overlapped Patches’ Non-Linear Prior. Sensors 2022, 22, 7858. https://doi.org/10.3390/s22207858

AMA Style

Zhang Z, Zheng L, Xu W, Gao T, Wu X, Yang B. Blind Remote Sensing Image Deblurring Based on Overlapped Patches’ Non-Linear Prior. Sensors. 2022; 22(20):7858. https://doi.org/10.3390/s22207858

Chicago/Turabian Style

Zhang, Ziyu, Liangliang Zheng, Wei Xu, Tan Gao, Xiaobin Wu, and Biao Yang. 2022. "Blind Remote Sensing Image Deblurring Based on Overlapped Patches’ Non-Linear Prior" Sensors 22, no. 20: 7858. https://doi.org/10.3390/s22207858

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
