Article

Blind Image Deblurring via a Novel Sparse Channel Prior

1 School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
2 Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(8), 1238; https://doi.org/10.3390/math10081238
Submission received: 25 February 2022 / Revised: 3 April 2022 / Accepted: 6 April 2022 / Published: 9 April 2022

Abstract

Blind image deblurring (BID) is a long-standing, challenging problem in low-level image processing. To achieve visually pleasing results, it is of utmost importance to select good image priors. In this work, we develop the ratio of the dark channel prior (DCP) to the bright channel prior (BCP) as an image prior for solving the BID problem. Specifically, the two channel priors obtained from RGB images are first used to construct an innovative sparse channel prior, and the learned prior is then incorporated into the BID task. The proposed sparse channel prior enhances the sparsity of the DCP and, at the same time, captures the inverse relationship between the DCP and BCP. We employ the auxiliary variable technique to integrate the proposed sparse prior into the iterative restoration procedure. Extensive experiments on real and synthetic blurred image sets show that the proposed algorithm is efficient and competitive with the state-of-the-art methods and that the proposed sparse channel prior for blind deblurring is effective.

1. Introduction

The goal of blind image deblurring is to restore a sharp image and a blur kernel from a degraded input image. Degradation types include motion blur, noise, out-of-focus blur and camera shake. Assuming the blur is uniform and spatially invariant, the blurring process can be modeled as
$$ b = l \ast k + n \tag{1} $$
where b is the blurry input, k is the blur kernel, n is the additive noise and ∗ denotes the convolution operator. This problem is highly ill-posed because both the latent sharp image l and the blur kernel k are unknown. To make it well-posed, most existing methods utilize the statistics of natural images to estimate the blur kernel. For example, a heavy-tailed distribution [1], a patch recurrence prior [2], the nuclear norm [3,4], a low-rank prior [5], a sparse prior [6], a multiscale latent prior [7] or additional information about specific images [8,9,10] has been used to estimate a better kernel.
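To make the degradation model concrete, the following NumPy/SciPy sketch simulates Equation (1); the function and parameter names are ours, and the noise level is an arbitrary illustrative choice rather than a value from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def blur_image(l, k, noise_sigma=0.01):
    """Simulate Equation (1): b = l * k + n (uniform, spatially invariant blur)."""
    b = convolve2d(l, k, mode='same', boundary='symm')  # convolution l * k
    n = noise_sigma * np.random.randn(*b.shape)         # additive Gaussian noise
    return b + n

# Example: blur a random "sharp" image with a length-9 horizontal motion kernel.
l = np.random.rand(64, 64)
k = np.ones((1, 9)) / 9.0  # kernel entries are non-negative and sum to 1
b = blur_image(l, k)
```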
The strong sparsity of image intensity and gradients has been widely used in low-level computer vision problems, and it has mature applications in image deblurring [6,11,12,13], such as the $L_1/L_2$ norm [14], the reweighted $L_1$ norm [15], the $L_0$ norm prior [16,17,18,19] and the sparse local maximum gradient (LMG) prior [20]. To favor clear images over blurry ones, edge selection methods [21,22,23] have been embedded in the blind deconvolution framework. However, strong edges are not always available. The channel prior was introduced by He et al. for image defogging in Ref. [24]. Then, Pan et al. [18] enforced the sparsity of the dark channel with the $L_0$ norm for kernel estimation. Unfortunately, this prior does not work well on images with heavy noise or large numbers of bright pixels. To address this problem, Yan et al. [19] proposed an extreme channel prior (ECP), which utilizes both the dark channel and the bright channel for estimating the blur kernel.
In this paper, a novel sparse channel prior is proposed for blind image deblurring. Inspired by [18,19,24], we take advantage of the DCP and BCP to construct a confrontation constraint, D/B. We prove its properties from a mathematical perspective and explore how they can be used to estimate the blur kernel. Optimizing the proposed prior is challenging; we use auxiliary variables and alternating minimization to decompose the problem into independent subproblems, which are solved by the alternating direction minimization (ADM) method. The main contributions of this work can be stated as follows:
  • A new D/B prior is presented for kernel estimation, which fully explores the relationship between the DCP and BCP. We also verify the effectiveness of D/B.
  • We develop an effective optimization strategy for kernel estimation based on the idea of auxiliary variables and the alternating direction minimization (ADM) method.
  • Experiments on four databases show that the proposed method is competitive compared with the state-of-the-art blind deblurring algorithms.
The rest of this paper is organized as follows. Section 2 introduces the related work. The proposed D/B is detailed in Section 3. Our blind deblurring model and optimization strategy are presented in Section 4. Section 5 shows the experimental results. Further discussion of our proposed deblurring algorithm is given in Section 6. Section 7 summarizes this paper.

2. Related Work

Blind image deblurring algorithms have made great progress thanks to proper kernel estimation models. In this part, we introduce the methods most relevant to our work.
The success of many blind image deblurring algorithms rests on the statistical characteristics of image intensity and gradients. Krishnan et al. [14] presented the $L_1/L_2$ norm based on the sparsity of image intensity; the $L_1/L_2$ norm is a normalized version of $L_1$ that enhances its sparsity. Levin et al. [1] observed the heavy-tailed distribution of image intensities and introduced a maximum a posteriori (MAP) framework. Shan et al. [25] introduced a probabilistic model to fit the sparse gradient distribution of natural images. Pan et al. [16] developed a method in which both intensity and gradient are regularized by the $L_0$ norm for text image deblurring. These methods are limited in modeling more complex image structures and contexts.
Another group of blind image deblurring methods [22,23] employs an explicit edge detection step for kernel estimation. Specifically, Cho et al. [21] predicted sharp edges with bilateral and shock filters. Joshi et al. [26] detected image contours by locating subpixel extrema. These methods cannot always capture sparse kernels and structures, which sometimes leaves the restored image blurry and noisy. To solve these problems, researchers have proposed better models for estimating the blur kernel. Xu et al. [27] presented a two-phase kernel estimation algorithm that separates kernel initialization from an iterative support detection (ISD)-based kernel refinement step, giving an efficient estimation process while maintaining many small structures. Zoran and Weiss [28] proposed the expected patch log likelihood (EPLL) method, which imposes a prior on the patches of the final image; however, the restoration must then be performed iteratively, which is costly. Papyan et al. [29] exploited a multiscale prior to further improve the EPLL and reduce the gap to global modeling. Bai et al. [7] developed a multiscale latent structures (MSLS) prior; based on it, their deblurring algorithm consists of two stages: sharp image estimation at the coarse scales and a refinement process at the finest scale. For patch-based methods, global modeling remains a difficult problem.
With the rapid development of deep learning, remarkable results have been achieved in the field of blind image deblurring [30,31,32,33,34]. For example, convolutional neural networks (CNN) [35], Wasserstein generative adversarial networks (GAN) [36], deep hierarchical multipatch networks (DMPHN) [37], ConvLSTM [38] and scale-recurrent networks (SRN) [39] have all been designed for image deblurring. Zheng et al. [40] presented an edge-heuristic multiscale GAN, which utilizes edge information to conduct the deblurring process in a coarse-to-fine manner for nonuniform blur. Liang et al. [41] learned novel neural network structures from RAW images and achieved superb performance. Chang et al. [42] proposed a long–short-exposure fusion network (LSFNet) for low-light image restoration using pairs of long- and short-exposure images. The success of deep-learning-based methods relies mainly on the consistency between training and test data, which limits their generalization ability.
Recently, the classical dark channel prior (DCP) has proved effective for image deblurring. The DCP was introduced by He et al. [24] for image defogging. It is based on the observation that, in outdoor haze-free nonsky image patches, at least one color channel has very low, close-to-zero pixel values. Pan et al. [18] further found that most elements of the dark channel are zero for natural images and thus enhanced the sparsity of the dark channel for image deblurring. Inspired by the DCP, the bright channel prior (BCP) was proposed: in most natural image patches, at least one color channel has very high pixel values. Yan et al. [19] combined the DCP and BCP by simple addition to form an extreme channel prior (ECP) for blind image deblurring. However, the relationship between the BCP and DCP is not fully explored in the ECP.

3. Proposed Sparse Channel Prior

To explain how the proposed sparse channel varies after blurring, we model the blurring process as described in [43]. For an image I, assume the noise is small enough to be neglected. We have:
$$ b(x) = \sum_{z \in \Psi_{x}} l\left(x + \left[\frac{m}{2}\right] - z\right) k(z) \tag{2} $$
where x and m denote the pixel coordinate and the size of the blur kernel k, respectively; $\Psi_x$ represents an image patch centered at x; $\sum_{z \in \Psi_x} k(z) = 1$ and $k(z) \ge 0$; and $[\cdot]$ is a rounding operator.
Inspired by the two channels (dark and bright) and image statistics, we observe that the more the dark channel of an image patch differs from its bright channel, the more salient the edges are, which helps estimate an accurate blur kernel. To formally describe this observation, the proposed sparse channel prior is defined by:
$$ R(x) = \frac{\min_{y \in \Psi(x)} \min_{c \in \{r,g,b\}} I^{c}(y)}{\max_{y \in \Psi(x)} \max_{c \in \{r,g,b\}} I^{c}(y) + \epsilon} = \frac{D(x)}{B(x) + \epsilon} \tag{3} $$
where x and y denote pixel coordinates, $\epsilon$ is a non-negative constant and $\Psi(x)$ represents an image patch centered at x. $I^c$ is the c-th color channel of image I. As described in Equation (3), $B(x) = \max_{y \in \Psi(x)} \max_{c \in \{r,g,b\}} I^c(y)$ represents the BCP and $D(x) = \min_{y \in \Psi(x)} \min_{c \in \{r,g,b\}} I^c(y)$ represents the DCP. The dark channel is obtained by two minimization operations, $\min_{c \in \{r,g,b\}}$ and $\min_{y \in \Psi(x)}$; the bright channel is obtained by the corresponding two maximization operations. If I is a gray image, only the spatial operation is performed. A small value of $R(x)$ implies salient edges in the image patch; conversely, a large $R(x)$ implies fine structures. The reason is that when an edge is salient, pixel values differ strongly between its two sides, so the minimum of the patch differs strongly from its maximum. Conversely, when the difference between the DCP and BCP is small, the image edges are unclear and $R(x)$ is large. It is therefore natural to expect that, where the DCP is equal to or only slightly smaller than the BCP, small edges can be removed by minimizing Equation (3).
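The channel operations above map directly onto morphological min/max filters. Below is a minimal sketch of computing $D(x)$, $B(x)$ and $R(x)$ with SciPy; the function name and the value of $\epsilon$ are our own choices (the paper only states that $\epsilon$ is a non-negative constant), while the patch size of 35 matches the setting reported in Section 5.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def sparse_channel(img, patch=35, eps=1e-4):
    """Compute D(x), B(x) and R(x) = D(x) / (B(x) + eps) from Equation (3).

    img: HxWx3 RGB image with values in [0, 1]. For a gray image, the
    channel-wise min/max is skipped and only the spatial filter is applied.
    """
    dark = minimum_filter(img.min(axis=2), size=patch)    # min over c, then over Psi(x)
    bright = maximum_filter(img.max(axis=2), size=patch)  # max over c, then over Psi(x)
    return dark, bright, dark / (bright + eps)

# Small R values flag patches with salient edges; large values flag fine structures.
D, B, R = sparse_channel(np.random.rand(128, 128, 3))
```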
Consider a natural image blurred by a blur kernel. Blur reduces the maximum pixel value and increases the minimum pixel value of a patch; in other words, the DCP of the patch increases and the BCP decreases. Let $R^{(b)}$ and $R^{(l)}$ denote the proposed sparse channel of the blurred and clear image, respectively. When $l(x) = \max_{y \in \Psi(x)} l(y)$ or $l(x) = \min_{y \in \Psi(x)} l(y)$, we have $R^{(b)}(x) \ge R^{(l)}(x)$. Applying the blur model to the definition of the proposed sparse channel, we have:
$$
\begin{aligned}
R^{(b)}(x) &= \frac{\min_{y \in \Psi(x)} \min_{c \in \{r,g,b\}} b^{c}(y)}{\max_{y \in \Psi(x)} \max_{c \in \{r,g,b\}} b^{c}(y) + \epsilon} = \frac{\min_{y \in \Psi(x)} b(y)}{\max_{y \in \Psi(x)} b(y) + \epsilon} \\
&= \frac{\min_{y \in \Psi(x)} \sum_{z \in \Phi(x)} l\left(y + \left[\frac{m}{2}\right] - z\right) k(z)}{\max_{y \in \Psi(x)} \sum_{z \in \Phi(x)} l\left(y + \left[\frac{m}{2}\right] - z\right) k(z) + \epsilon} \\
&\geq \frac{\sum_{z \in \Phi(x)} \min_{y \in \Psi(x)} l\left(y + \left[\frac{m}{2}\right] - z\right) k(z)}{\sum_{z \in \Phi(x)} \max_{y \in \Psi(x)} l\left(y + \left[\frac{m}{2}\right] - z\right) k(z) + \epsilon} \\
&\geq \frac{\sum_{z \in \Phi(x)} \min_{\hat{y} \in \hat{\Psi}(x)} l(\hat{y})\, k(z)}{\sum_{z \in \Phi(x)} \max_{\hat{y} \in \hat{\Psi}(x)} l(\hat{y})\, k(z) + \epsilon} = \frac{\min_{\hat{y} \in \hat{\Psi}(x)} l(\hat{y})}{\max_{\hat{y} \in \hat{\Psi}(x)} l(\hat{y}) + \epsilon} = R^{(l)}(x)
\end{aligned}
\tag{4}
$$
where $\hat{m}$ and $S_\Psi$ denote the sizes of $\hat{\Psi}(x)$ and $\Psi(x)$, respectively, with $\hat{m} = S_\Psi + m$. Equation (4) shows that the $R(x)$ of an image patch centered at x after blurring is no less than that of the original image patch centered at x.
Equation (4) proves $R^{(l)}(x) \le R^{(b)}(x)$. This means that, after blurring, the difference between the DCP and the BCP is smaller than that of the corresponding patch in a sharp image. In other words, $R(x)$ always favors the sharp image. We further validate this analysis on the dataset [44]. Figure 1a–c show the histograms of the average number of dark channel, bright channel and D/B channel pixels, respectively. A large portion of the pixels in the dark and bright channels possess very small or very large values, and our D/B channel pixels possess smaller values than those of the DCP and BCP. As shown in Figure 1, the proposed sparse channels of clear images have significantly more zero elements than those of blurred images. Thus, the sparsity of the proposed channel is a natural metric for distinguishing clear images from blurred ones. This observation motivates us to introduce a new regularization term that enforces the sparsity of the proposed channel in latent images.

Proposed Sparse Channel as an Image Prior

Equation (4) shows that, after blurring, the difference between the DCP and BCP is smaller than that of the corresponding patch in a sharp image. Therefore, in order to generate sharp and reliable salient edges, we propose a novel sparse channel prior that combines the D/B with the $L_0$ norm:
$$ P(x) = \frac{\|D(x)\|_{0}}{\|B(x)\|_{0} + \epsilon} \tag{5} $$
We define $P(x)$ as the D/B prior, where the $L_0$ norm enforces sparsity. Let $\Psi(x)$ denote one patch of the image I. If there exist some pixels $x' \in \Psi(x)$ such that $I(x') = 0$, we have
$$ P^{(b)}(x) \geq P^{(l)}(x) \tag{6} $$
where $P^{(b)}(x)$ and $P^{(l)}(x)$ denote the D/B prior of the blurred and clear image, respectively. This property follows directly from Equation (4). In the MAP framework, minimizing the sparse prior $P(x)$ yields a result that favors a sharp image. This property is also validated on the dataset [44]: as shown in Figure 1c, the D/B channel of clear images has significantly more zero elements than that of blurred ones.
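Reading the $L_0$ norm as a count of entries that are effectively nonzero, Equation (5) can be evaluated as below. This is an illustrative sketch: the tolerance used to decide what counts as "zero" is our own assumption, not a value given in the paper.

```python
import numpy as np

def db_prior(dark, bright, eps=1e-4, tol=1e-3):
    """Evaluate Equation (5): P = ||D||_0 / (||B||_0 + eps).

    dark, bright: the channel maps from the previous sketch. Entries whose
    value is at most `tol` are treated as zero when counting (our choice).
    """
    d0 = np.count_nonzero(dark > tol)    # ||D||_0
    b0 = np.count_nonzero(bright > tol)  # ||B||_0
    return d0 / (b0 + eps)
```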

4. Proposed Blind Deblurring Model

Based on the proposed D/B prior, we construct the blind deblurring model under the maximum a posteriori (MAP) framework.
$$ \mathop{\arg\min}_{l,k}\; \|l \ast k - b\|_{2}^{2} + \mu P(l) + \vartheta \|\nabla l\|_{0} + \gamma \|k\|_{2}^{2} \tag{7} $$
where $P(l)$ is our proposed prior, $\nabla$ denotes the gradient operator and $\mu$, $\vartheta$ and $\gamma$ are non-negative weights. The data-fitting term ensures that the latent sharp image is consistent with the observed image. $\|\nabla l\|_0$ is the $L_0$ norm of the image gradient, which is used to suppress ringing and artifacts. Finally, the $L_2$ norm regularizes the blur kernel and stabilizes its estimation.

4.1. Optimization

In this part, we adopt the ADM method to solve the objective function. Using alternating optimization, we obtain two independent subproblems with respect to l and k:
$$ \mathop{\arg\min}_{l}\; \|l \ast k - b\|_{2}^{2} + \mu \frac{\|D(l)\|_{0}}{\|B(l)\|_{0} + \epsilon} + \vartheta \|\nabla l\|_{0} \tag{8} $$
and
$$ \mathop{\arg\min}_{k}\; \|l \ast k - b\|_{2}^{2} + \gamma \|k\|_{2}^{2} \tag{9} $$
Equation (9) is a classical least squares problem with respect to k. By introducing an auxiliary variable g, related to $\nabla l$, Equation (8) can be written as follows:
$$ \mathop{\arg\min}_{l,g}\; \|l \ast k - b\|_{2}^{2} + \lambda \|\nabla l - g\|_{2}^{2} + \mu \frac{\|D(l)\|_{0}}{\|B(l)\|_{0} + \epsilon} + \vartheta \|g\|_{0} \tag{10} $$
Equation (10) can be decomposed into:
$$ \mathop{\arg\min}_{l}\; \|l \ast k - b\|_{2}^{2} + \lambda \|\nabla l - g\|_{2}^{2} + \mu \frac{\|D(l)\|_{0}}{\|B(l)\|_{0} + \epsilon} \tag{11} $$
and
$$ \mathop{\arg\min}_{g}\; \lambda \|\nabla l - g\|_{2}^{2} + \vartheta \|g\|_{0} \tag{12} $$
Equation (12) is an $L_0$ norm minimization problem for g.

4.2. Estimating Intermediate Image l

At the k-th iteration (here k indexes the iteration, not the blur kernel), we treat $B(l)$ estimated at the $(k-1)$-th iteration as a constant. Denoting
$$ w_{k} = \frac{\mu}{\|B(l)\|_{0} + \epsilon} \tag{13} $$
Equation (11) can be rewritten as follows:
$$ \mathop{\arg\min}_{l}\; \|l \ast k - b\|_{2}^{2} + \lambda \|\nabla l - g\|_{2}^{2} + w_{k} \|D(l)\|_{0} \tag{14} $$
By introducing an auxiliary variable p, related to $D(l)$, Equation (14) can be reformulated as follows:
$$ \mathop{\arg\min}_{l,p}\; \|l \ast k - b\|_{2}^{2} + \xi \|D(l) - p\|_{2}^{2} + \lambda \|\nabla l - g\|_{2}^{2} + w_{k} \|p\|_{0} \tag{15} $$
Using the idea of alternating optimization, we can obtain two independent subproblems to solve for l and p, respectively:
$$ \mathop{\arg\min}_{l}\; \|l \ast k - b\|_{2}^{2} + \xi \|D(l) - p\|_{2}^{2} + \lambda \|\nabla l - g\|_{2}^{2} \tag{16} $$
and
$$ \mathop{\arg\min}_{p}\; \xi \|D(l) - p\|_{2}^{2} + w_{k} \|p\|_{0} \tag{17} $$
Equation (16) contains only quadratic terms, so its solution follows from least squares. In each iteration, the Fast Fourier Transform (FFT) accelerates the computation. The closed-form solution is given as follows:
$$ l = \mathcal{F}^{-1}\left( \frac{\overline{\mathcal{F}(k)}\,\mathcal{F}(b) + \xi\,\mathcal{F}(p) + \lambda\,\mathcal{F}_{\nabla g}}{\overline{\mathcal{F}(k)}\,\mathcal{F}(k) + \lambda\,\overline{\mathcal{F}(\nabla)}\,\mathcal{F}(\nabla) + \xi} \right) \tag{18} $$
where $\mathcal{F}_{\nabla g} = \overline{\mathcal{F}(\nabla_v)}\,\mathcal{F}(g_v) + \overline{\mathcal{F}(\nabla_h)}\,\mathcal{F}(g_h)$, $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ are the Fast Fourier Transform (FFT) and its inverse, respectively, $\overline{\mathcal{F}(\cdot)}$ denotes the complex conjugate of the FFT, and $\nabla_v$ and $\nabla_h$ are the gradient operators in the vertical and horizontal directions, respectively.
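The following sketch renders Equation (18) in NumPy. Two caveats: `psf2otf` is a standard helper we define ourselves, and the dark channel term is folded in as if $D$ acted elementwise, which sidesteps the nonlinear min operation (Pan et al. [18] handle it with a linear selection operator); a faithful implementation would include that step.

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a PSF to `shape`, circularly shift its center to (0, 0), then FFT."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=ax)
    return np.fft.fft2(pad)

def update_latent(b, k, p, g_v, g_h, xi, lam):
    """One l-update following Equation (18) (simplified D(l) handling, see above)."""
    Fk = psf2otf(k, b.shape)
    Fdv = psf2otf(np.array([[1.0], [-1.0]]), b.shape)  # vertical finite difference
    Fdh = psf2otf(np.array([[1.0, -1.0]]), b.shape)    # horizontal finite difference
    F_grad_g = np.conj(Fdv) * np.fft.fft2(g_v) + np.conj(Fdh) * np.fft.fft2(g_h)
    num = np.conj(Fk) * np.fft.fft2(b) + xi * np.fft.fft2(p) + lam * F_grad_g
    den = np.abs(Fk) ** 2 + lam * (np.abs(Fdv) ** 2 + np.abs(Fdh) ** 2) + xi
    return np.real(np.fft.ifft2(num / den))
```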

4.3. Estimating p and g

Equations (12) and (17) are $L_0$ norm minimization problems. Owing to the difficulty of solving them exactly, we adopt the method described in Ref. [13]. As a result, the solution of Equation (17) can be expressed as:
$$ p = \begin{cases} D(l), & |D(l)|^{2} \geq w_{k}/\xi \\ 0, & \text{otherwise} \end{cases} \tag{19} $$
Given l, the solution of Equation (12) can be expressed as:
$$ g = \begin{cases} \nabla l, & |\nabla l|^{2} \geq \vartheta/\lambda \\ 0, & \text{otherwise} \end{cases} \tag{20} $$
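Both updates are elementwise hard-thresholding operations and take only a few lines. In `update_g`, thresholding the joint squared magnitude of the two gradient components follows the $L_0$ smoothing scheme of Ref. [13]; that reading of the compact notation in Equation (20), like the function names, is our own.

```python
import numpy as np

def update_p(D_l, w_k, xi):
    """Equation (19): keep dark channel entries with squared value >= w_k / xi."""
    return np.where(D_l ** 2 >= w_k / xi, D_l, 0.0)

def update_g(lv, lh, vartheta, lam):
    """Equation (20): keep gradients whose joint squared magnitude >= vartheta / lam.

    lv, lh: vertical and horizontal gradients of the intermediate image l.
    """
    mask = lv ** 2 + lh ** 2 >= vartheta / lam
    return np.where(mask, lv, 0.0), np.where(mask, lh, 0.0)
```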

4.4. Estimating Blur Kernel k

Since updating the blur kernel is an independent subproblem, we estimate k in the gradient space. Specifically, we obtain the blur kernel by minimizing the following problem using the known intermediate image l:
$$ \min_{k}\; \|\nabla l \ast k - \nabla b\|_{2}^{2} + \gamma \|k\|_{2}^{2} \tag{21} $$
where $\nabla$ denotes the gradient operator. Note that we use Equation (21) to estimate the blur kernel instead of Equation (9), which helps suppress ringing artifacts and eliminate noise. The closed-form solution of Equation (21) is obtained by FFT:
$$ k = \mathcal{F}^{-1}\left( \frac{\overline{\mathcal{F}(\nabla l)}\,\mathcal{F}(\nabla b)}{\overline{\mathcal{F}(\nabla l)}\,\mathcal{F}(\nabla l) + \gamma} \right) \tag{22} $$
A coarse-to-fine strategy is used in the blur kernel estimation, similar to that in [26,45]. During this process, it is important to suppress small values of the blur kernel by thresholding at the fine scales, which enhances the robustness of the algorithm to noise.
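A sketch of the kernel update of Equation (22), including the non-negativity projection, small-value thresholding and normalization described above. The choice of gradient filters, the 5% threshold and the cropping of the full-size FFT solution back to the kernel support are our own implementation choices.

```python
import numpy as np

def update_kernel(l, b, gamma, ksize):
    """Solve Equation (21) in closed form (Equation (22)) in the gradient domain.

    Assumes an odd `ksize`. l: intermediate image; b: blurry input.
    """
    lv, lh = np.gradient(l)
    bv, bh = np.gradient(b)
    Flv, Flh = np.fft.fft2(lv), np.fft.fft2(lh)
    num = np.conj(Flv) * np.fft.fft2(bv) + np.conj(Flh) * np.fft.fft2(bh)
    den = np.abs(Flv) ** 2 + np.abs(Flh) ** 2 + gamma
    k_full = np.fft.fftshift(np.real(np.fft.ifft2(num / den)))  # center the kernel
    cy, cx = k_full.shape[0] // 2, k_full.shape[1] // 2
    k = k_full[cy - ksize // 2:cy + ksize // 2 + 1,
               cx - ksize // 2:cx + ksize // 2 + 1]
    k = np.maximum(k, 0)               # kernel entries must be non-negative
    k[k < 0.05 * k.max()] = 0          # threshold small values for noise robustness
    return k / k.sum()                 # normalize the kernel to sum to 1
```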

4.5. Estimating Latent Sharp Image

Although a latent sharp image can be estimated from Equation (18), this formulation is less effective for fine-texture details. To suppress ringing and artifacts, we fine-tune the final restored image. With the estimated blur kernel and the blurry input image b, we use nonblind deconvolution to obtain the final latent sharp image $l_{latent}$. Algorithm 1 summarizes the main steps. First, we estimate the restored image $l_h$ by the method in Ref. [46] using the hyper-Laplacian prior. Then we restore the image $l_r$ by the method in Ref. [47] using the total variation prior. Finally, the latent sharp image is the average of the two restored images, i.e., $l_{latent} = (l_h + l_r)/2$. The main steps of the overall algorithm are summarized in Algorithm 2.
Algorithm 1 Final latent sharp image restoration.
Input: Blurry image b and estimated kernel k.
1: Estimate latent image $l_h$ using the method described in [46] with the hyper-Laplacian prior;
2: Estimate latent image $l_r$ using the method described in [47] with the total variation prior;
3: Restore the final sharp image: $l_{latent} = (l_h + l_r)/2$.
Output: Sharp latent image $l_{latent}$.
Algorithm 2 The proposed blind deblurring algorithm.
Input: Blurry image b;
1:     Initialize the intermediate image l and blur kernel k;
2:     Estimate blur kernel k from b;
3:     Alternately update l and k in a coarse-to-fine manner:
4:        Estimate intermediate image l by Equation (18);
5:        Estimate blur kernel k by Equation (22);
6:     Interpolate the solution to the finer level as initialization;
7:     Calculate the latent sharp image according to Algorithm 1.
Output: Sharp latent image $l_{latent}$.
We first initialize the intermediate image l and the blur kernel k from the blurry input. Then we alternately update l and k. To avoid falling into a local minimum, our algorithm is executed in a coarse-to-fine manner: the result at each coarse level is up-sampled by bilinear interpolation to initialize the next finer level. Finally, the latent sharp image is obtained by Algorithm 1 with the estimated blur kernel.
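The multiscale loop can be organized as in the sketch below, which reuses the `update_g`, `update_p`, `update_latent` and `update_kernel` helpers defined earlier. The number of levels, the scale factor and the auxiliary weights $\lambda$ and $\xi$ are our guesses (the paper does not report them), the per-level iteration count follows the setting of 5 given in Section 5, and, unlike Algorithm 2, the kernel here is simply re-estimated at each level rather than upsampled.

```python
import numpy as np
from scipy.ndimage import zoom, minimum_filter, maximum_filter

def coarse_to_fine_deblur(b, ksize=25, levels=5, scale=0.75, iters=5,
                          mu=0.003, vartheta=0.003, lam=2e-3, xi=1.0, gamma=2.0):
    """Skeleton of Algorithm 2: alternate l- and k-updates from coarse to fine."""
    k = np.ones((3, 3)) / 9.0                       # crude kernel initialization
    for i in range(levels - 1, -1, -1):             # coarsest level first
        b_s = zoom(b, scale ** i)                   # blurry input at this scale
        l = b_s.copy()                              # initialize intermediate image
        for _ in range(iters):
            lv, lh = np.gradient(l)
            g_v, g_h = update_g(lv, lh, vartheta, lam)
            D_l = minimum_filter(l, size=35)        # dark channel (gray image case)
            B0 = np.count_nonzero(maximum_filter(l, size=35) > 1e-3)
            p = update_p(D_l, mu / (B0 + 1e-4), xi) # w_k as in Equation (13)
            l = update_latent(b_s, k, p, g_v, g_h, xi, lam)
            k = update_kernel(l, b_s, gamma, ksize)
    return l, k
```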

5. Results

We evaluate our method against state-of-the-art BID methods on different image datasets, including synthetic image datasets and real-world blurred images. We measure deblurring quality with several metrics, including the peak signal-to-noise ratio (PSNR, in dB), a measure of image quality, and the cumulative error ratio (CER); a higher CER value indicates a better model.
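For reference, the PSNR used throughout this section is computed as follows for images with values in [0, 1]; this is the standard definition, not code from the paper.

```python
import numpy as np

def psnr(restored, ground_truth, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((restored - ground_truth) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```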
In all experiments, the parameters of our model are set as follows: $\mu = \vartheta = 0.003$, $\gamma = 2$, and the size of the image patch used to compute the D/B channel is set to 35. The maximum number of iterations is empirically set to 5 as a trade-off between accuracy and speed.

5.1. Synthetic Image Deblurring

We first test our method on the synthetic dataset [44] for quantitative evaluation. This dataset includes 4 ground truth images and 12 different kernels. We compare our results with the state-of-the-art methods [11,14,18,19,21,27,48]; our algorithm performs favorably against them on this benchmark. We also present a challenging example in Figure 2. For each restored result, we record the largest PSNR obtained by comparison with the 199 ground truth images captured along the camera shake trajectory (Figure 3). Since the proposed method exploits not only the BCP and DCP but also the relationship between them, the PSNR values of the images restored by our method are higher than those of the state-of-the-art algorithms [11,14,18,19,25,45,48,49,50].
We also test our algorithm against the competing methods [6,14,18,19,21,48,51,52] on another benchmark dataset [12], which includes four ground truth images and eight different kernels. One example is shown in Figure 4 with a visual comparison against the state-of-the-art methods [18,19]. Although the image restored by Pan et al. [18] performs well against other approaches, it still contains noticeable fake textures and blurred regions (Figure 4b). The algorithm of Yan et al. [19] considers both the DCP and BCP, but its result still has unclear edges (Figure 4c). In contrast, our method generates a sharp image with fine textures (Figure 4d), which is more visually pleasing; the main reason is that the enhanced edges in local patches help remove small textures and fine details. Figure 5a plots the cumulative error ratios of our method and the competing methods. Note that our D/B-based method achieves a 100% success rate at error ratio 2, outperforming the state-of-the-art algorithms. All results consistently show that our method is competitive on this dataset.
We further compare our method with the state-of-the-art approaches [16,19] on text images from the dataset of [16], which consists of 15 images and eight different kernels ranging in size from 13 × 13 to 27 × 27. Figure 6 shows that our method performs well on a challenging blurry image in comparison with [19] and the method designed specifically for text images [16]. As the figure shows, the DCP and ECP also help the blind deblurring of text images. Our result in Figure 6d, obtained with the proposed D/B, has sharper edges and clearer text than the other results [16,19]. Another text example is shown in Figure 7: the text becomes markedly sharper after deblurring, which demonstrates that our $L_0$-regularized D/B is helpful for kernel estimation and image deblurring. In particular, sharp text images contain many salient edges in local patches, which suits our D/B well. Table 1 reports the average PSNR of the deblurred results on the text image dataset [16] compared with the state-of-the-art methods; our method achieves the highest PSNR.

5.2. Real Image Deblurring

In this part, we test our method on real-world blurred images against recent state-of-the-art blind single-image deblurring methods [11,14,18,19,21,48]. Because the blur kernels and ground truth images are unknown, we analyze the results qualitatively. Figure 8 shows one challenging real-world blurred image. The images recovered by the proposed algorithm are sharper and clearer than those of [11,14,18,19,21,48]. As shown in Figure 8, the blurry image contains both large and small edges and textures, which is troublesome for methods designed for natural images. Pan et al. [18] exploited the dark channel and achieved encouraging results, but the deblurred image still contains visible blurry artifacts. In contrast, by further utilizing the edge information in local patches, our method generates sharper and clearer image details. As a second example, we present deblurring results on a challenging image in Figure 9; note that our deblurred image has a clearer background and sharper edges than the other results.

5.3. The Effectiveness of Proposed Sparse Channel Prior

In this subsection, experiments verify the performance of the proposed D/B for blind image deblurring. As mentioned above, the proposed D/B regularization term exploits the contrast and salient-edge information in local patches. To demonstrate its effectiveness, we compare the proposed method with the DCP-based method [18] and the ECP-based method [19]. Figure 10 shows how the DCP, the BCP and the proposed sparse channel prior change at each stage of the deblurring process. Initially, the contrast and clarity of all three channel maps of the blurred inputs are very low; the contrast of the intermediate results is significantly improved; and the final restored images have higher contrast and sharper contours, with greatly reduced ringing and artifacts. Note that at each stage, the proposed sparse channel prior has a clearer outline than the DCP and BCP. Compared with [18], the proposed method estimates the blur kernels better, with fewer artifacts. Figure 11 shows the quantitative evaluations on the benchmark dataset [12] for the ECP and for our method with and without the proposed D/B. The PSNR (Figure 11a) of the proposed D/B-based method is higher than that of the ECP and of our method without D/B. Moreover, our method with the D/B prior also performs more favorably in terms of error ratio (Figure 11b) than without it, which further demonstrates the effectiveness of the proposed D/B.
In addition, our method has a higher success rate on the dataset [22], as shown in Figure 5b. All the results consistently demonstrate that the proposed sparse channel prior improves the deblurring performance.

6. Discussion

6.1. Comparison with Other Related Methods

In this part, we discuss the methods most closely related to our algorithm. The dark channel prior was used by Pan et al. [18] for blind image deblurring; they enhanced the sparsity of the DCP and achieved good results on low-light images. Yan et al. [19] used the ECP to address the DCP's weakness on sky images. However, the ECP is a simple addition of the DCP and BCP, and the relationship between them was not deeply studied.
Figure 10 shows the intermediate images of three different methods (Refs. [18,19] and ours). Although the intermediate results of all methods become clearer and sharper as the iterations progress, the images generated by our method (Figure 10c) have sharper edges and clearer content than those of Refs. [18] (Figure 10a) and [19] (Figure 10b). Figure 12 shows the results of these three methods on some challenging images, including real blurred and low-light images; our results have fewer blurred areas and less ringing, and look more visually pleasing. Table 2 reports the error ratios of the two related approaches [18,19] and our method on the dataset [22]; the proposed method fails on only one image, for which the error ratio exceeds 4.
To analyze the three methods in more detail, we show the maps of the dark channel, the bright channel and our D/B in Figure 13. Although the dark channel, bright channel and D/B maps of the recovered image are all improved relative to those of the corresponding blurry image, our D/B map improves the most. Moreover, the D/B map is clearer (higher contrast and sharper edges) than the dark and bright channels for both the blurry and the recovered image.

6.2. Convergence Analysis

Blind deconvolution is a highly ill-posed problem, and in this paper we introduce a new sparse prior to make it produce feasible results. The optimization of our model is challenging, and since it relies on auxiliary variables and the alternating direction minimization (ADM) method, its convergence deserves examination. We therefore plot the traces of the objective function (computed from Equation (8)) and the kernel similarity [53] on dataset [12] against the number of iterations in Figure 14. Figure 14a shows that our method converges within 30 iterations, and Figure 14b shows that the kernel similarity [53] increases with more iterations.

6.3. Running Time

We assess computational complexity through the running time of the algorithms. Several competing algorithms closely related to this paper were run on the same dataset as ours, with all experiments carried out on the same computer. The running times for different image sizes are summarized in Table 3: our algorithm is faster than [19] and slower than [14].

7. Conclusions

In this paper, a novel, simple yet efficient image prior, D/B, is proposed for blind image deblurring, built on the DCP and BCP. An extensive investigation of natural images shows that the DCP behaves inversely to the BCP and that a large difference between the DCP and BCP indicates salient edges, which are helpful for estimating the blur kernel. To exploit the advantages of the DCP and BCP and the edge information in local patches, we propose the D/B prior for image deblurring. The D/B prior preserves the main edges and eliminates the fine textures of intermediate latent images while retaining the advantages of the DCP and BCP. The feasibility and effectiveness of using the D/B prior to estimate the blur kernel are discussed, and the experimental results show that our algorithm is competitive with the state-of-the-art algorithms. In addition, experiments show that the proposed prior can significantly improve the performance of the deblurring algorithm.

Author Contributions

Conceptualization, D.Y. and X.W.; methodology, D.Y.; software, D.Y.; validation, D.Y., H.Y. and X.W.; formal analysis, D.Y.; investigation, D.Y., H.Y. and X.W.; resources, D.Y.; data curation, D.Y.; writing—original draft preparation, D.Y.; writing—review and editing, D.Y., H.Y. and X.W.; visualization, D.Y. and H.Y.; supervision, D.Y. and X.W.; project administration, X.W.; funding acquisition, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

The research was funded by the National Natural Science Foundation of China (Grant No.62020106012, U1836218) and the 111 Project of Ministry of Education of China (Grant No. B12018).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Efficient marginal likelihood optimization in blind deconvolution. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2657–2664.
  2. Michaeli, T.; Irani, M. Blind deblurring using internal patch recurrence. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 783–798.
  3. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869.
  4. Yair, N.; Michaeli, T. Multi-scale weighted nuclear norm image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3165–3174.
  5. Ren, W.; Cao, X.; Pan, J.; Guo, X.; Zuo, W.; Yang, M.H. Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 2016, 25, 3426–3437.
  6. Xu, L.; Zheng, S.; Jia, J. Unnatural l0 sparse representation for natural image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1107–1114.
  7. Bai, Y.; Jia, H.; Jiang, M.; Liu, X.; Xie, X.; Gao, W. Single Image Blind Deblurring Using Multi-Scale Latent Structure Prior. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 2033–2045.
  8. Han, Y.; Kan, J. Blind color-image deblurring based on color image gradients. Signal Process. 2019, 155, 14–24.
  9. Cao, X.; Ren, W.; Zuo, W.; Guo, X.; Foroosh, H. Scene text deblurring using text-specific multiscale dictionaries. IEEE Trans. Image Process. 2015, 24, 1302–1314.
  10. Varghese, N.; Mohan Mahesh, M.R.; Rajagopalan, A.N. Fast Motion-Deblurring of IR Images. IEEE Signal Process. Lett. 2022, 29, 459–463.
  11. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing camera shake from a single photograph. In ACM Transactions on Graphics (TOG); ACM: New York, NY, USA, 2006; Volume 25, pp. 787–794.
  12. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971.
  13. Xu, L.; Lu, C.; Xu, Y.; Jia, J. Image smoothing via L0 gradient minimization. In ACM Transactions on Graphics (TOG); ACM: New York, NY, USA, 2011; Volume 30, p. 174.
  14. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240.
  15. Yang, D.Y.; Wu, X.J.; Yin, H.F. Blind image deblurring via enhanced sparse prior. J. Electron. Imaging 2021, 30, 023031.
  16. Pan, J.; Hu, Z.; Su, Z.; Yang, M.H. L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 342–355.
  17. Liu, R.W.; Yin, W.; Xiong, S.; Peng, S. L0-Regularized Hybrid Gradient Sparsity Priors for Robust Single-Image Blind Deblurring. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, AB, Canada, 15–20 April 2018; pp. 1348–1352.
  18. Pan, J.; Sun, D.; Pfister, H.; Yang, M. Deblurring Images via Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2315–2328.
  19. Yan, Y.; Ren, W.; Guo, Y.; Wang, R.; Cao, X. Image deblurring via extreme channels prior. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4003–4011.
  20. Chen, L.; Fang, F.; Wang, T.; Zhang, G. Blind Image Deblurring with Local Maximum Gradient Prior. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
  21. Cho, S.; Lee, S. Fast motion deblurring. ACM Trans. Graph. (TOG) 2009, 28, 145.
  22. Sun, L.; Cho, S.; Wang, J.; Hays, J. Edge-based blur kernel estimation using patch priors. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013; pp. 1–8.
  23. Zhou, Y.; Komodakis, N. A map-estimation framework for blind deblurring using high-level edge priors. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2014; pp. 142–157.
  24. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
  25. Shan, Q.; Jia, J.; Agarwala, A. High-quality motion deblurring from a single image. ACM Trans. Graph. (TOG) 2008, 27, 73.
  26. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  27. Xu, L.; Jia, J. Two-phase kernel estimation for robust motion deblurring. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 157–170.
  28. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486.
  29. Papyan, V.; Elad, M. Multi-Scale Patch-Based Image Restoration. IEEE Trans. Image Process. 2016, 25, 249–261.
  30. Sun, J.; Cao, W.; Xu, Z.; Ponce, J. Learning a convolutional neural network for non-uniform motion blur removal. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 769–777.
  31. Ren, D.; Zuo, W.; Zhang, D.; Xu, J.; Zhang, L. Partial Deconvolution With Inaccurate Blur Kernel. IEEE Trans. Image Process. 2018, 27, 511–524.
  32. Nah, S.; Kim, T.H.; Lee, K.M. Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 June 2017.
  33. Su, S.; Delbracio, M.; Wang, J.; Sapiro, G.; Heidrich, W.; Wang, O. Deep Video Deblurring for Hand-Held Cameras. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 June 2017; pp. 237–246.
  34. Gong, D.; Yang, J.; Liu, L.; Zhang, Y.; Reid, I.; Shen, C.; Van Den Hengel, A.; Shi, Q. From Motion Blur to Motion Flow: A Deep Learning Solution for Removing Heterogeneous Motion Blur. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 June 2017; pp. 3806–3815.
  35. Li, L.; Pan, J.; Lai, W.; Gao, C.; Sang, N.; Yang, M. Learning a Discriminative Prior for Blind Image Deblurring. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6616–6625.
  36. Kupyn, O.; Budzan, V.; Mykhailych, M.; Mishkin, D.; Matas, J. DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8183–8192.
  37. Zhang, H.; Dai, Y.; Li, H.; Koniusz, P. Deep Stacked Hierarchical Multi-Patch Network for Image Deblurring. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 5971–5979.
  38. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.Y.; Wong, W.K.; Woo, W.C. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2015; pp. 802–810.
  39. Tao, X.; Gao, H.; Shen, X.; Wang, J.; Jia, J. Scale-Recurrent Network for Deep Image Deblurring. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8174–8182.
  40. Zheng, S.; Zhu, Z.; Cheng, J.; Guo, Y.; Zhao, Y. Edge Heuristic GAN for Non-Uniform Blind Deblurring. IEEE Signal Process. Lett. 2019, 26, 1546–1550.
  41. Liang, C.H.; Chen, Y.A.; Liu, Y.C.; Hsu, W.H. Raw Image Deblurring. IEEE Trans. Multimed. 2022, 24, 61–72.
  42. Chang, M.; Feng, H.; Xu, Z.; Li, Q. Low-Light Image Restoration With Short- and Long-Exposure Raw Pairs. IEEE Trans. Multimed. 2022, 24, 702–714.
  43. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636.
  44. Köhler, R.; Hirsch, M.; Mohler, B.; Schölkopf, B.; Harmeling, S. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 27–40.
  45. Hirsch, M.; Schuler, C.J.; Harmeling, S.; Schölkopf, B. Fast removal of non-uniform camera shake. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 463–470.
  46. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. In Proceedings of the 22nd International Conference on Neural Information Processing Systems (NIPS’09), Vancouver, BC, Canada, 7–10 December 2009; Curran Associates Inc.: Red Hook, NY, USA, 2009; pp. 1033–1041.
  47. Chan, S.H.; Khoshabeh, R.; Gibson, K.B.; Gill, P.E.; Nguyen, T.Q. An augmented Lagrangian method for total variation video restoration. IEEE Trans. Image Process. 2011, 20, 3097–3111.
  48. Wen, F.; Ying, R.; Liu, Y.; Liu, P.; Truong, T.K. A Simple Local Minimal Intensity Prior and An Improved Algorithm for Blind Image Deblurring. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2923–2937.
  49. Cho, T.S.; Paris, S.; Horn, B.K.; Freeman, W.T. Blur kernel estimation using the radon transform. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 241–248.
  50. Whyte, O.; Sivic, J.; Zisserman, A.; Ponce, J. Non-uniform deblurring for shaken images. Int. J. Comput. Vis. 2012, 98, 168–186.
  51. Dong, J.; Pan, J.; Su, Z.; Yang, M.H. Blind image deblurring with outlier handling. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2478–2486.
  52. Pan, L.; Hartley, R.; Liu, M.; Dai, Y. Phase-Only Image Based Kernel Estimation for Single Image Blind Deblurring. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 6027–6036.
  53. Hu, Z.; Yang, M.H. Good Regions to Deblur. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2012; pp. 59–72.
Figure 1. The statistics of the DCP, the BCP and our proposed D/B prior: (a–c) average channel pixel distributions of the bright, dark and our D/B channels, respectively.
Figure 2. Visual comparison of the results using one challenging image from dataset [44]. The image (a) is the blurry input; (b–h) are the deblurring results of Ref. [21], Ref. [27], Ref. [14], Ref. [48], Ref. [18], Ref. [19] and our proposed method, respectively.
Figure 3. Quantitative evaluations on benchmark dataset [44]. Our method performs competitively against the state-of-the-art deblurring approaches.
Figure 4. A comparison of our method with state-of-the-art methods. The images (a–d) are the blurry input, the result of Pan et al. [18], the result of Yan et al. [19] and our result, respectively. The PSNR values of (b–d) are 30.19, 30.33 and 32.15, respectively.
Figure 5. Quantitative results of our method on two benchmark datasets [12,22]: (a) error ratio comparison between our approach and the other methods on the benchmark dataset [12]; (b) quantitative evaluations on the benchmark dataset [22].
Figure 6. A comparison of our method with state-of-the-art methods. The images (a–d) are the blurry input, the result of Pan et al. [16], the result of Yan et al. [19] and our result, respectively.
Figure 7. Visual comparison of the results using one challenging image: (a) blurry image; (b–h) deblurring results generated by Ref. [14], Ref. [6], Ref. [52], Ref. [48], Ref. [19], Ref. [18] and our method, respectively. The image recovered by the proposed algorithm is visually more pleasing.
Figure 8. Visual comparison of the results using one challenging image: (a) is the blurry input and (b–h) are generated by [11,14,18,19,21,48] and our proposed method, respectively.
Figure 9. An example of real-world image results. The images (a–e) are the blurry input, the result of Krishnan et al. [14], the result of Pan et al. [18], the result of Yan et al. [19] and our result, respectively.
Figure 10. Visual comparison of the intermediate results generated during iteration: (a–c) are intermediate results generated using the DCP, the ECP and our sparse channel prior, respectively.
Figure 11. Quantitative results of our method on benchmark dataset [12]: (a) quantitative evaluations on the benchmark dataset [12] by the ECP and our method with and without D/B; (b) error ratio comparison between our approach and the other methods.
Figure 12. Deblurring results on some challenging examples: (a) blurry inputs; (b–d) deblurring results generated by Ref. [18], Ref. [19] and our method, respectively.
Figure 13. Visual comparison of different maps: (a) is the blurry image; (b–d) are the dark channel, bright channel and our D/B map of (a), respectively; (e) is the recovered image; (f–h) are the dark channel, bright channel and our D/B map of (e), respectively.
Figure 14. Convergence analysis of the proposed algorithm: (a) energy value computed from Equation (8); (b) average kernel similarity [53], which becomes higher with more iterations.
Table 1. PSNR values (dB) of state-of-the-art text image deblurring methods.

| Method | Cho et al. [21] | Xu et al. [13] | Levin et al. [1] | Pan et al. [16] | Ours |
|---|---|---|---|---|---|
| PSNR | 23.80 | 26.21 | 24.90 | 27.94 | 28.23 |
Table 2. Quality evaluation of competing methods on dataset [22] in terms of error ratio (images with error ratio below the threshold / total images).

| Error Ratio | ≤2 | ≤3 | ≤4 |
|---|---|---|---|
| Pan et al. [18] | 594/640 | 627/640 | 633/640 |
| Yan et al. [19] | 596/640 | 636/640 | 638/640 |
| Ours | 623/640 | 639/640 | 639/640 |
Table 3. Running time (s) of competing approaches.

| Image Size | Krishnan et al. [14] | Pan et al. [18] | Yan et al. [19] | Ours |
|---|---|---|---|---|
| 255 × 255 | 4.80 | 111.51 | 306.56 | 115.04 |
| 600 × 600 | 21.82 | 563.33 | 1250.12 | 571.57 |
| 800 × 800 | 95.62 | 1150.17 | 2331.02 | 1202.51 |
