1. Introduction
Image restoration is the process of estimating a clean image from a corrupted/noisy observation. Up to now, a great number of models have been proposed to address the image restoration problem. Bilateral filtering with an explicit kernel is widely used for its effectiveness in removing noise-like structures [
1]. The output of this method at a specific pixel is a weighted average of the nearby pixels, where the weights depend on intensity similarities based on the guidance image. The median filter [
2] is a well-known edge-aware operator, which is a special case of local histogram filters [
3]. He et al. proposed a novel explicit image filter called guided filter, which computes the filtering output by considering the content of a guidance image [
4]. Edge-preserving smoothing can also be achieved by other local filters, for example, targeted image denoising filtering (TID) [5], transform-domain collaborative filtering (BM3D) [6], transform-domain collaborative filtering with shape-adaptive principal component analysis (BM3D-PCA) [7], and principal component analysis with local pixel grouping (LPG-PCA) [
8]. Chambolle and Pock [
9] proposed a first-order primal-dual algorithm for the non-smooth convex optimization problems with a convergence rate of
. He and Yuan [
10] focused on the convergence analysis and proposed a modified primal-dual algorithm for solving a saddle-point problem. Chen and Xu [
11] proposed an iterative algorithm for sparse-view X-ray tomography that avoids solving the large-scale system directly. They provided rigorous proofs of convergence for the aforementioned problems.
Another type of image regularization is total variation, which has achieved great success in image restoration and sharpening edges [
12]. Chan and Shen [
13] repaired images by minimizing the image gradient based on curvature-driven diffusions. To solve image inpainting and restoration problems, Esedoḡlu and Shen introduced the Mumford–Shah–Euler model [
14] based on the Mumford–Shah image segmentation model [
15], Li et al. concerned with fast domain decomposition methods for solving the total variation minimization problems in image processing [
16]. In [
17], the authors proposed a group sparsity-based algorithm for nuclear radiation-contaminated video restoration. Liu presented a novel nonconvex extension model that combines the advantages of total generalized variation and edge-enhancing nonconvex penalties [
18]. Bertozzi et al. [
19,
20] introduced a well-known inpainting method with the modified Cahn–Hilliard equation [
21]. By using the Allen–Cahn equation [
22], which describes the motion of mean curvature flow [
23,
24], Li et al. extended Chan and Shen’s model [
12] and proposed a fast image inpainting method with an efficient hybrid numerical solver [
25,
26].
The existing diffusion methods used the L1-norm or L2-norm in their energy term and obtained clean images by minimizing it to convergence. However, these methods tend to over-smooth the processed image data due to the property of soft-thresholding. In order to sharpen the edges during noise reduction, a novel fast and accurate method based on L0 gradient minimization [27] has been proposed to measure the sparsity of the solution and preserve sharper edges. Some methods treat image noise as a single type of noise [28], such as speckle noise, pepper noise, or Gaussian noise. True image noise, however, is quite different from a single noise type: it is a mixture. In this paper, we propose a new image restoration method that couples L0, L1, and L2 gradient minimization. The proposed method can be solved by the half-quadratic splitting method. We decouple the iterations over the smoothing step and perform a Poisson solve after the alternating reformulation. Two edge-preserving steps are used throughout the optimization process to sharpen the edges of the target. The proposed iterative method is efficient and easy to implement. The main advantages of the proposed method can be summarized as follows: (i) The method transforms the denoising problem into an optimization problem whose clean-image solution can be obtained by iterative methods. (ii) The noise reduction computation is based on the pixel information of a single image, which does not require a large training cost over images of the same type to obtain the desired accuracy. (iii) To the best of our knowledge, this is the first investigation of the combination of hard- and soft-thresholding, which can balance their competitive advantages with a proper parameter combination. Several numerical tests will be presented to demonstrate the robustness and efficiency of our method.
The outline of this paper is as follows. The governing equations for the image restoration method are illustrated in
Section 2. The proposed operator splitting algorithm is described in
Section 3. Computational examples are presented in Section 4 to demonstrate the efficiency and robustness of our proposed method. In Section 5, we draw the conclusions.
2. Proposed Image Restoration Method
For a 2D image representation, we combine the L0, L1, and L2 norms in the following gradient-regularized energy:

where ϕ is the processed image and I is a given image in a domain Ω. The fidelity term measures the similarity between the processed image ϕ and the given image I in Ω. In Equation (1), the three regularization terms are the L0, L1, and L2 norms of the gradient, respectively. Let us briefly review the definition of the L0-norm: each entry contributes 1 if it is nonzero and 0 otherwise, so the L0-norm counts the nonzero entries. Therefore, the L0 gradient term can reduce noise and sharpen edges. The minimization problem (
1) is difficult to optimize directly due to the combinatorial nature of the L0, L1, and L2 minimization. Recently, scalable algorithms were proposed for image processing [27,29] and surface smoothing [30,31] by considering L0 norm gradient optimization. Our work extends the L0, L1, and L2 minimization concept to image restoration. Let us provide the following theorem:
Theorem 1. By introducing a set of auxiliary variables ψ and φ, the minimization problem (1) is equivalent to minimizing the following equation:

Here, γ and ζ are two weight parameters directly controlling the similarity between ψ, φ, and the gradient of ϕ.
Proof. The minimization problem (
2) generally can be minimized by an alternating minimization method in the following manner:
In order to eventually force the gradient of ϕ to match ψ and φ, these optimizations alternate until convergence, with γ and ζ increasing at each iteration. The idea is summarized as:
Step 1. By considering the independence of
, we can rewrite Equation (
3) as
Considering every point
, we need to minimize
Let us consider Equation (
7): if
, the minimum of
can be obtained by setting
; otherwise the relationship between
and
should be analyzed. Let us summarize the relations as follows: When
, we start splitting in the following two situations.
- (1)
By considering
, the minimal value
can be obtained by setting
which can be shown as
- (2)
By considering that
, we can obtain
Therefore, we should only let
to obtain the minimum
. Thus, by combining Equations (
8) and (
9), the minimum energy (
6) is produced when
. Hence, we can obtain the following condition:
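The hard-thresholding condition derived in Step 1 can be sketched in code as follows. This is a minimal illustration assuming the closed-form solution of the standard L0 half-quadratic splitting; the threshold value lam/gamma and the variable names are our assumptions, since the paper's exact constants are in the stripped equations.

```python
import numpy as np

def hard_threshold(grad, lam, gamma):
    """L0 (psi) subproblem: keep the gradient where its squared
    magnitude exceeds the threshold lam/gamma, and set it to zero
    elsewhere. Threshold form assumed from the standard L0 gradient
    minimization splitting; exact constants may differ."""
    return np.where(grad**2 <= lam / gamma, 0.0, grad)
```

Small gradients are thus removed entirely rather than shrunk, which is what enables the method to flatten low-amplitude noise while leaving strong edges untouched.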
Step 2. Let us rewrite Equation (
4) by considering the independence of
as:
For every individual point
x, we need to minimize
Here,
and
and
. It is obvious that
and
are the minimizers of
and
, respectively. Here,
is the sign function, which is defined as 1 for a positive argument and −1 for a negative argument. If
, then
. Therefore,
is the minimizer of
. In a similar fashion, if
, we should set
to minimize the
. It is obvious that if
, then
is the minimizer of
. In summary, we can obtain the following condition:
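The sign-function analysis in Step 2 is the classical soft-thresholding (shrinkage) operator; a minimal sketch follows, in which the threshold mu/zeta and the variable names are our assumptions rather than the paper's exact constants.

```python
import numpy as np

def soft_threshold(grad, mu, zeta):
    """L1 (phi) subproblem: the classical shrinkage operator
    sign(g) * max(|g| - mu/zeta, 0). Unlike the hard threshold,
    every surviving value is shrunk toward zero by the threshold."""
    return np.sign(grad) * np.maximum(np.abs(grad) - mu / zeta, 0.0)
```

This shrinkage is what gives the L1 term its homogenizing, slightly smoothing character noted later in the remarks.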
Step 3. The expression (
5) is quadratic in
and trivial to minimize. Following the Euler–Lagrange formulation,
minimizes Equation (
5) as
Therefore, in order to get the solution of the minimization problem (
2), three optimizations (Equations (
10), (
13), and (
14)) alternate until convergence by increasing
and
at each iteration. □
Some notations should be summarized as follows: (i)
and
are the convex combination coefficients for controlling the similarity between
and
with
. Since the
-norm and
regularized optimization problem is known to be computationally intractable, we use an equivalent formulation of the original optimization problem to perform the computation. (ii) The choice of
and
determines the smoothing effect for the gradient of
. The L1 norm focuses on the equilibrium of noise pixels, while the L0 norm focuses on the presence of noise pixels. Thus, the iterated soft-thresholding algorithm for the L1 norm homogenizes the noise in the image, while the L0 norm can enhance the contrast of the image and suppress low-amplitude details. (iii) The proposed model is a convex combination of three norms with different emphases. The advantages of the proposed optimization model, Equation (
2) can be activated by selecting appropriate combination parameters according to the characteristics of different noisy images.
3. Numerical Solution
In this section, we consider the fast scheme with the Fourier-spectral method. Let us assume that there are
pixels on a 2D image, where
and
are even integers. Let
, for
,
. Furthermore, let
be an approximation of
, where
s is the iterative step. The discrete cosine transform
is defined as
where
The variables
and
are defined as
and
, respectively. The inverse discrete cosine transform
is
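As a concrete illustration, the forward and inverse transforms can be evaluated with SciPy's type-II DCT. Note that SciPy's normalization convention may differ from the definition above by constant factors; this is a sketch, not the paper's exact discretization.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct2(u):
    # 2D type-II DCT with orthonormal scaling (one common convention;
    # the normalization in the text may differ by constant factors)
    return dctn(u, type=2, norm="ortho")

def idct2(u_hat):
    # matching inverse transform
    return idctn(u_hat, type=2, norm="ortho")
```

The orthonormal pair satisfies idct2(dct2(u)) = u, which is the property the solver below relies on.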
At the beginning of each time step, given
,
,
,
, and
, we want to find
,
,
,
, and
, by solving the discretized equations of the three optimizations ((
10)–(
14)) in time. The outline of our proposed method can be summarized as:
Step 1. Update γ and ζ by multiplying each by a growth factor that is not smaller than 1, so that γ and ζ increase with each iteration; γ and ζ start from two given initial parameters.
Step 2. Solve
from
and
by using Equation (
10) as:
Here,
at the
node is defined as
Here, the remaining difference quotients are defined in a similar fashion.
Step 3. Solve
from
and
by using Equation (
13) as:
Step 4. Solve
from
and
by using Equation (
14):
Equation (
18) can be transformed into the discrete cosine space as follows:
Here,
is a complex number. We employ the discrete cosine transform for the Laplacian operator, which is defined as
Therefore, we obtain the following corresponding function
:
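This DCT-based solve can be sketched as follows. We assume a screened-Poisson form (I − c∆)ϕ = rhs with the 5-point Laplacian and Neumann boundary conditions; the coefficient name, right-hand side, and eigenvalue formula are our assumptions, since the exact discrete equation is in the stripped formulas.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_poisson_solve(rhs, coeff, h=1.0):
    """Solve (I - coeff * Laplacian) phi = rhs with Neumann boundary
    conditions. The 5-point discrete Laplacian is diagonal in the
    cosine basis with eigenvalues (2 cos(pi k / m) - 2) / h^2, which
    are nonpositive, so the division below never hits zero."""
    m, n = rhs.shape
    kx = (2.0 * np.cos(np.pi * np.arange(m) / m) - 2.0) / h**2
    ky = (2.0 * np.cos(np.pi * np.arange(n) / n) - 2.0) / h**2
    lap_eig = kx[:, None] + ky[None, :]
    rhs_hat = dctn(rhs, type=2, norm="ortho")
    return idctn(rhs_hat / (1.0 - coeff * lap_eig), type=2, norm="ortho")
```

Because the operator is diagonalized exactly, the solve is non-iterative: one forward transform, one pointwise division, and one inverse transform.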
The above iterations complete one time step. Our alternating minimization algorithm stops when
and
are larger than the given values
and
, respectively. We should note that the proposed method, i.e., Equations (
16) and (
17), consists of two explicit evaluations of closed-form solutions, which makes fast convergence possible. The computational complexities are
, where
N is the size of the mesh grid. The implicit Poisson-type equation can be solved by the fast discrete cosine transform method (Equation (
20)) with a computational complexity of
[
32].
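Putting Steps 1–4 together, the alternating scheme can be sketched in a simplified 1D analogue as follows. The parameter values, the averaging of the two auxiliary fields into one target gradient, and the boundary handling are our assumptions; the actual method works in 2D with both derivative directions and the paper's own right-hand side.

```python
import numpy as np
from scipy.fft import dct, idct

def restore_1d(signal, lam=2e-2, mu=2e-2, rho=2.0, g0=1.0, gmax=1e5):
    """1D sketch of the alternating scheme: hard threshold (L0 step),
    soft threshold (L1 step), then a DCT-solved quadratic step
    (1 - w * Laplacian) phi = I - w * div(t). Illustrative only."""
    I0 = np.asarray(signal, dtype=float)
    n = I0.size
    eig = 2.0 * np.cos(np.pi * np.arange(n) / n) - 2.0  # Neumann Laplacian
    phi = I0.copy()
    gamma = zeta = g0
    while gamma < gmax:
        g = np.diff(phi, append=phi[-1:])                   # forward difference
        psi = np.where(g**2 <= lam / gamma, 0.0, g)         # L0 hard threshold
        var = np.sign(g) * np.maximum(np.abs(g) - mu / zeta, 0.0)  # L1 shrink
        t = 0.5 * (psi + var)        # combined target gradient (assumed split)
        w = 0.5 * (gamma + zeta)
        rhs = I0 - w * np.diff(t, prepend=0.0)              # backward-diff div
        rhs_hat = dct(rhs, type=2, norm="ortho")
        phi = idct(rhs_hat / (1.0 - w * eig), type=2, norm="ortho")
        gamma *= rho
        zeta *= rho
    return phi
```

Each pass costs two thresholding sweeps plus one DCT solve, which is consistent with the O(N) and O(N log N) complexities stated above.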
4. Numerical Tests
Various numerical results on several representative images are presented in this section. We show that good results can be obtained with our algorithm. In the following numerical experiments, we compare and analyze the results at both the qualitative and quantitative levels, which comprehensively evaluates the proposed noise reduction method. Unless otherwise stated, we will use the following parameters: , , , , .
4.1. Noise Reduction for Gaussian Noise Processing
In this subsection, we use four kinds of images, i.e., an electronic composite image, a JPEG compressed image, a photo taken by camera, and a scanning image, to demonstrate the effect of our method on images with 10% Gaussian noise, as shown in
Figure 1. The top row shows the initial images with Gaussian noise, and the bottom row shows the smoothing results obtained by our method. From
Figure 1a–d, the computational domains are with the
,
,
, and
pixels, respectively. The non-default parameters are chosen as
,
, and
, respectively. It is obvious that the Gaussian noise can be eliminated well by the proposed method. Furthermore, several points should be noted: (i) The proposed method can keep sharp edges while reducing the noise, which can be seen in
Figure 1a. (ii) The textures can be smoothed out due to the influence of the soft thresholds, which can be seen in
Figure 1b. (iii) The defects in the photos, such as shadows and light spots, are considered as the part of the composition and will not be handled by our algorithm, which can be seen in
Figure 1c. (iv) The small non–noisy pixels in the images can be removed due to the influence of the hard thresholds, which can be seen in
Figure 1d.
4.2. Noise Reduction for Speckle Noise Processing
Speckle noise arises due to the effect of environmental conditions on the imaging sensor during image acquisition. It is common in active radar, synthetic aperture radar, medical ultrasound, and optical coherence tomography, and it reduces image quality [
33]. In this subsection, we use a text image, a photo taken by camera, a remote sensing image, and a scanning image to demonstrate the effect of our method on images with speckle noise, as shown in
Figure 2. The top row shows the initial images with speckle noise, and the bottom row shows the smoothing results obtained by our method. The computational meshgrid is
,
,
, and
, respectively. The non-default parameters are chosen as
,
, and
, respectively. It is obvious that the speckle noise can be removed by our method without destroying features of the noisy images such as edges and structure. We should emphasize that speckle noise degrades the quality of ultrasound images and can corrupt important image information such as edges, shapes, and intensity values, all of which are handled well by our method. Since our method is essentially a weighted combination of soft and hard thresholding, image edges and shapes can be captured well by choosing proper weight parameters.
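For reference, the multiplicative speckle model described above can be simulated as follows. The Gaussian distribution and variance value are illustrative assumptions following the common simulation convention (cf. MATLAB's imnoise 'speckle' option, J = I + n.*I with zero-mean noise).

```python
import numpy as np

def add_speckle(img, var=0.05, seed=0):
    """Multiplicative speckle: out = img * (1 + n), with n zero-mean
    noise of variance 'var'. Distribution and variance are
    illustrative placeholders, not the paper's test settings."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(img * (1.0 + n), 0.0, 1.0)
```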
4.3. Noise Reduction for Poisson Noise Processing
Poisson noise is a noise model conforming to the Poisson distribution, which describes the probability distribution of the number of random events per unit time [
34]. In this subsection, we use a cartoon image to demonstrate the effect of our method on the images with Poisson noise as shown in
Figure 3. From
Figure 3a–d, we show the original image with Poisson noise, the smoothed image obtained by the proposed method, the contour lines of (a), and the contour lines of (b), respectively. The computational meshgrid is
. We chose the non–default parameters as
,
,
,
, and
, respectively. By comparing
Figure 3a,b, it can be seen that our algorithm can eliminate the Poisson noise without affecting the edges and the shape of the noisy image. To better demonstrate this property, we plot the contour lines of the image, which verifies that the proposed method is efficient for obtaining a clean gradient map in accordance with human perception.
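For reference, Poisson noise of the kind described above can be simulated as follows. The photon-count parameterization is a common convention; the peak value is an illustrative assumption, not the setting used in the experiments.

```python
import numpy as np

def add_poisson_noise(img, peak=30.0, seed=0):
    """Poisson (shot) noise: intensities in [0, 1] are scaled to an
    assumed photon count 'peak', sampled from a Poisson distribution,
    and rescaled. Smaller 'peak' gives stronger noise."""
    rng = np.random.default_rng(seed)
    return rng.poisson(np.clip(img, 0.0, 1.0) * peak) / peak
```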
4.4. Comparison between the Existing Method and the Proposed Method
In this subsection, we compare the results obtained by the proposed method and the existing method, i.e., TID [
5], BM3D [
6], BM3D-PCA [
7], LPG-PCA [
8], and NLM [
35], to verify the efficiency of the proposed method. We have re-implemented and modified the internal method so that the above methods can be fairly compared with the proposed method. The computational meshgrid is
and we choose the non–default parameters as
,
, and
, respectively.
To prepare the numerical test, we added random excessive noise to the original image in Figure 4a and obtained the test noisy image in Figure 4b. We not only perform the corresponding qualitative numerical experiment shown in Figure 4, but also carry out the corresponding quantitative numerical experiment shown in Table 1. It is obvious that our method provides cleaner results for the same input image in Figure 4. Furthermore, the proposed method globally sharpens prominent edges and retains the text pixels while eliminating the noise. Two metrics, namely the peak signal-to-noise ratio (PSNR) and the structural similarity index map (SSIM), are quantitative measures used to estimate the quality of the denoised images, which can be defined as
where
I is the clean image,
K is the denoised image,
are the average of
I and
K, respectively; and
are the variance of
I and
K, respectively. Here,
is the covariance of
I and
K,
with
. High PSNR and SSIM values indicate a good restoration of the image [
36]. By comparing these two indicators as shown in
Table 1, our method achieves a better restoration of the target image.
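The two metrics can be computed as follows. This is a single-window SSIM matching the global statistics defined above; the widely used implementation averages the same quantity over local windows, and the constants c1 = (0.01L)^2 and c2 = (0.03L)^2 follow the common convention, which may differ from the paper's exact choice.

```python
import numpy as np

def psnr(I, K, data_range=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE)
    mse = np.mean((I - K) ** 2)
    return 10.0 * np.log10(data_range**2 / mse)

def ssim_global(I, K, data_range=1.0):
    # Single-window SSIM over global image statistics; standard SSIM
    # averages this quantity over sliding local windows.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_i, mu_k = I.mean(), K.mean()
    var_i, var_k = I.var(), K.var()
    cov = ((I - mu_i) * (K - mu_k)).mean()
    return ((2 * mu_i * mu_k + c1) * (2 * cov + c2)) / (
        (mu_i**2 + mu_k**2 + c1) * (var_i + var_k + c2))
```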
4.5. Comparison of the Smoothing Results of the Existing Methods with and without Our Method
In this subsection, we compare the results obtained by the existing methods with and without our method. To verify the efficiency of the proposed method, we perform a numerical test on a real scanning image of ancient ruins, as shown in
Figure 5, which is covered with a dense collection of weathered holes and cracks.
The computational meshgrid is
and the non–default parameters are
,
, and
, respectively. The smoothing results have been shown in
Figure 6.
The top and bottom rows show the results obtained by the existing methods without and with our method, respectively. It is obvious that the results obtained with the addition of our method are smoother. Moreover, we compute the PSNR and SSIM indicators with Equation (21), as shown in
Table 2. For the five existing methods, the combination with our method can significantly improve image quality, which corresponds to our expectation.
4.6. Comparison Test between the Smoothing Results with Deep Learning Image Denoising Schemes and Our Proposed Method
In this subsection, we compare the results obtained by a deep learning image denoising scheme and our proposed method. The existing method denoises the damaged images using the MATLAB function denoiseImage in the Deep Learning Toolbox, which is based on the denoising convolutional neural network (DnCNN) [
37]. We perform the numerical test on various image types, namely a scanning image, a photo taken by camera, and a JPEG compressed image. From Figure 7a–d, the images are the ground truth, the image with 10% Gaussian noise, the result obtained by the deep learning scheme, and the result obtained by our proposed method, respectively.
The computational meshgrid for the top row, middle row, and bottom row is
,
, and
, respectively. The non–default parameters are
,
, and
, respectively. As can be seen from the comparison results, our method maintains the similarity with the original image and actively reduces the amplitude gradient difference between the noise points and the surrounding pixels to obtain better results. We emphasize that our method adapts to specific images, which allows different noise information to be processed appropriately. Moreover, the PSNR and SSIM indicators computed with Equation (21) for Figure 7c,d are shown in Table 3. We can see that our method makes better use of the target images from a quantitative point of view.
We should note that this type of comparison is unfair in some respects: the proposed method works well without any training cost, while the DnCNN-based method requires expensive training to reach the corresponding accuracy.
4.7. Comparison Test between the Smoothing Results with Existing Structure-Texture Decomposition Method and Our Proposed Method
In this subsection, we compare the results obtained by the existing structure-texture decomposition method and our proposed method. We perform the numerical test on an electronic composite image with complex boundaries and complicated geometry, as shown in
Figure 8a. Since the boundary is not clear and the covering triangles have different sizes and shapes, it poses a hard challenge for boundary extraction. The computational meshgrid is
and the non–default parameters are
,
, and
, respectively. We add the
Gaussian noise to the ground truth (
Figure 8a) and obtain the damage image
Figure 8b.
Figure 8c,d show the denoising results obtained by the method in [
38] and our proposed method, respectively. The images in bottom row (
Figure 8e,f) show the edge detection results corresponding to
Figure 8c,d, respectively. As can be seen from the comparison results, the structure-texture method in [38] recognizes the edges as texture of the target image, making the boundary too smooth, while our method increases the contrast between the two sides of the edges and makes the boundary prominent.
Moreover, we compute the PSNR and SSIM indicators with Equation (21). The PSNR of
Figure 8c,d is
and
, respectively. The SSIM of
Figure 8c,d is
and
, respectively. Based on the quantitative perspective, our method can significantly improve image quality, which corresponds to our expectation. From the comparison of
Figure 8e,f, our method detects the edges more easily and with fewer errors.
4.8. Computational Cost
Finally, we present the performance of all above tests in
Table 4 and
Figure 9. The CPU times for each test above are presented in Table 4. We then compute the average CPU time per iteration and fit a curve against the computational domain size using the MATLAB routine polyfit. The numerical tests are performed in MATLAB on a computer with a 3 GHz CPU and 8 GB of RAM. It is obvious from
Table 4 that the computational cost of the proposed method is low.
As can be seen from
Figure 9, the computational cost is linear with respect to the domain size. Several remarks are in order: (i) The proposed method does not require extensive training time to reach an acceptable level of accuracy. (ii) The proposed scheme is based on a partial differential equation, which makes our approach independent of the specific dataset and noise type. (iii) The computational complexity of our approach depends on the fast Fourier transform, which is proved to be
and is easy to implement from the numerical point of view.