Article

A Non-Convex Hybrid Overlapping Group Sparsity Model with Hyper-Laplacian Prior for Multiplicative Noise

1 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
2 College of Science, China University of Petroleum, Qingdao 266580, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(4), 336; https://doi.org/10.3390/fractalfract7040336
Submission received: 13 March 2023 / Revised: 6 April 2023 / Accepted: 10 April 2023 / Published: 17 April 2023

Abstract:
Multiplicative noise removal is a quite challenging problem in image denoising. In recent years, hyper-Laplacian prior information has been successfully introduced into image denoising problems, achieving significant denoising effects. In this paper, we propose a new hybrid regularizer model for removing multiplicative noise. The proposed model combines a non-convex higher-order total variation with an overlapping group sparsity on hyper-Laplacian prior regularizer. It unites the advantages of non-convex regularization and hybrid regularization, simultaneously preserving fine edge information and reducing the staircase effect. We develop an effective alternating minimization method for the proposed non-convex model within an alternating direction method of multipliers framework, where the majorization–minimization algorithm and the iteratively reweighted algorithm are adopted to solve the corresponding subproblems. Numerical experiments show that the proposed model outperforms state-of-the-art models in terms of visual quality and certain image quality measurements.

1. Introduction

Image noise removal is a significant pre-processing step in many image-processing tasks. Multiplicative noise frequently appears in synthetic aperture radar, ultrasound imaging, and laser imaging, where it causes image quality degradation. Therefore, the problem of multiplicative noise removal is very important. Mathematically, the degraded observation model with multiplicative noise is represented as
$f = u \cdot \eta, \qquad (1)$
where f denotes the degraded image, u denotes the original image, and the multiplicative noise η follows a Gamma distribution with the probability density function (PDF):
$P(\eta; L) = \frac{L^L}{\Gamma(L)}\, \eta^{L-1} e^{-L\eta}, \quad L \ge 1, \qquad (2)$
where Γ(·) is the Gamma function and L is an integer representing the noise level; the smaller L is, the more severely the noise degrades the image. The mean of η is 1 and its variance is 1/L.
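As a quick illustration, the following NumPy sketch (our own code, not part of the original experiments; the function name is hypothetical) degrades a clean image according to model (1) and numerically checks the stated noise statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_multiplicative_noise(u, L):
    """Degrade a clean image u by f = u * eta, with eta ~ Gamma(shape=L, scale=1/L)."""
    eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)
    return u * eta

# Sanity check: for L = 10 the mean should be ~1 and the variance ~1/L = 0.1.
eta = rng.gamma(shape=10, scale=1.0 / 10, size=10**6)
print(eta.mean(), eta.var())
```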
Variational methods are effective for removing multiplicative noise. Among them, total variation (TV)-based methods are the best known and most widely used, since they can effectively preserve image edges while suppressing noise [1,2]. In [3], a well-known TV-based multiplicative noise removal model (the RLO model) with two equality constraints was first designed by Rudin et al., and it can be addressed by a gradient projection algorithm. Unfortunately, the RLO model can only deal with noise that follows a Gaussian distribution and cannot yield satisfactory recovery results for Gamma-distributed noise. For the removal of multiplicative Gamma noise, Aubert and Aujol [4] proposed the following variational model (the AA model) based on total variation regularization:
$\min_u \; \alpha \left\langle \log u + \frac{f}{u},\, 1 \right\rangle + \|\nabla u\|_1, \qquad (3)$
where α is a positive regularization parameter. Problem (3) is a non-convex optimization problem, so it is difficult to obtain a globally optimal solution. To overcome the difficulties caused by non-convex objective functions, researchers have proposed various convex models over the past decade. In [5,6,7], the authors used the logarithmic transformation to derive the corresponding strictly convex model:
$\min_z \; \alpha \left\langle z + f e^{-z},\, 1 \right\rangle + \|\nabla z\|_1, \qquad (4)$
where z = log u. In addition, Steidl and Teuber [8] chose the I-divergence as the data fidelity term of the objective function and proposed a convex optimization model:
$\min_z \; \alpha \left\langle z - f \log z,\, 1 \right\rangle + \|\nabla z\|_1. \qquad (5)$
The advantage of the above model is that no nonlinear logarithmic transformation is required. By introducing an overlapping group sparsity total variation regularization term into an I-divergence-type data fidelity model, Liu et al. [9] proposed the convex variational model ("OGSTVD" for short) for removing multiplicative noise:
$\min_z \; \alpha \left\langle \log z + \frac{f}{z},\, 1 \right\rangle + \phi(z), \qquad (6)$
where ϕ(z) is the OGSTV regularization function. They also illustrated the effectiveness of their model in image restoration through numerical results.
All of the above models are TV-based models with convex regularizers. Although many studies have shown that TV-based models perform well in keeping sharp edges, they tend to produce undesired staircase effects. To compensate for these shortcomings, various higher-order TV regularization terms have been proposed. By combining first-order and second-order TV, Liu [10] proposed a hybrid regularization model for multiplicative noise removal. Shama et al. [11] proposed a model based on second-order total generalized variation (TGV) regularization to remove multiplicative noise. Influenced by model (4), Lv [12] proposed a TGV-based model (M-TGV for short) for multiplicative noise removal with multilook M. To obtain high-quality recovered images, non-convex regularizers were also introduced into variational models. A non-convex regularizer can effectively smooth the homogeneous regions of an image while preserving its edge details. Chartrand [13] considered a non-convex optimization problem whose objective involves the ℓp quasi-norm and showed that sparse signals can be exactly reconstructed under suitable theoretical conditions. Models with non-convex regularizers can also be found in [14,15,16,17]. Extensive studies show that models with non-convex regularizers outperform those with convex regularizers in preserving image edges.
Natural image gradients obey a heavy-tailed distribution, and the hyper-Laplacian (HL) prior approximates this heavy-tailed distribution better than the Gaussian or Laplacian prior. Krishnan and Fergus [18] proposed a fast non-blind image deconvolution method based on a hyper-Laplacian prior. Many scholars have studied multispectral image denoising [19,20,21,22,23] based on the global spectral structure of the HL prior regularization term. Shi et al. [24] combined OGSTV and the HL prior regularizer (OGSHL) to remove Gaussian noise. They showed that their model can effectively exploit more texture information and strike a satisfying balance between suppressing staircase effects and recovering structural information.
Inspired by the above strategies, we develop a novel hybrid regularization model to eliminate multiplicative noise. The model combines the advantages of non-convex regularization and OGSHL regularization. More specifically, the model can be expressed as follows:
$\min_z \; \alpha \left\langle z - f \log z,\, 1 \right\rangle + \phi_{OH}(z) + \omega \|\nabla^2 z\|_p^p, \qquad (7)$
where ϕ_OH(z) is the OGSHL regularization function and 0 < p < 1. To the best of our knowledge, there has been no previous attempt to combine overlapping group sparsity on a hyper-Laplacian (OGSHL) prior with a higher-order non-convex total variation regularizer to remove multiplicative noise. The main contributions of this paper are summarized as follows: (1) We propose a new hybrid regularized denoising model that combines a non-convex higher-order total variation with overlapping group sparsity total variation on a hyper-Laplacian prior; it suppresses the staircase effect while retaining more image detail. (2) To make the optimization model tractable, we propose an efficient alternating minimization method. (3) Experimental results verify that the proposed method outperforms several state-of-the-art methods.
The remaining portions of this article are outlined below. Section 2 introduces the definitions of the higher-order TV and the OGSHL regularizer. In Section 3, based on the alternating direction method of multipliers, we develop an efficient algorithm to solve the corresponding multiplicative noise removal problem. In Section 4, numerical results show the effectiveness of the proposed method. In Section 5, a summary of the paper is presented.

2. Preliminaries

We present some relevant background knowledge in this section.

2.1. Second-Order TV

For any z R m × n , z i , j denotes the intensity value of z at pixel ( i , j ) for i = 1 , , m , j = 1 , , n . The definitions of the first-order forward and backward difference operators are given below:
$\nabla_x^{+} z_{i,j} = \begin{cases} z_{i+1,j} - z_{i,j}, & i < m, \\ z_{1,j} - z_{m,j}, & i = m, \end{cases} \qquad \nabla_y^{+} z_{i,j} = \begin{cases} z_{i,j+1} - z_{i,j}, & j < n, \\ z_{i,1} - z_{i,n}, & j = n, \end{cases}$
$\nabla_x^{-} z_{i,j} = \begin{cases} z_{1,j} - z_{m,j}, & i = 1, \\ z_{i,j} - z_{i-1,j}, & i > 1, \end{cases} \qquad \nabla_y^{-} z_{i,j} = \begin{cases} z_{i,1} - z_{i,n}, & j = 1, \\ z_{i,j} - z_{i,j-1}, & j > 1. \end{cases}$
By introducing the operators above, the second-order difference operators are expressed as
$\nabla_{xx} z_{i,j} = \nabla_x^{-} \nabla_x^{+} z_{i,j}, \quad \nabla_{yy} z_{i,j} = \nabla_y^{-} \nabla_y^{+} z_{i,j}, \quad \nabla_{xy}^{++} z_{i,j} = \nabla_x^{+} \nabla_y^{+} z_{i,j}, \quad \nabla_{yx}^{++} z_{i,j} = \nabla_y^{+} \nabla_x^{+} z_{i,j}.$
The first-order and second-order total variations are then given by
$\|\nabla z\|_1 = \sum_{i,j} |(\nabla z)_{i,j}| = \sum_{i,j} \sqrt{ (\nabla_x^{+} z)_{i,j}^2 + (\nabla_y^{+} z)_{i,j}^2 },$
$\|\nabla^2 z\|_1 = \sum_{i,j} |(\nabla^2 z)_{i,j}| = \sum_{i,j} \sqrt{ (\nabla_x^{-} \nabla_x^{+} z)_{i,j}^2 + (\nabla_{xy}^{++} z)_{i,j}^2 + (\nabla_{yx}^{++} z)_{i,j}^2 + (\nabla_y^{-} \nabla_y^{+} z)_{i,j}^2 }.$
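For concreteness, here is a minimal NumPy sketch of these difference operators and of the two semi-norms above, assuming the periodic boundary conditions used in the definitions (the helper names are our own):

```python
import numpy as np

def dx_plus(z):   # forward difference along rows, periodic boundary
    return np.roll(z, -1, axis=0) - z

def dy_plus(z):   # forward difference along columns, periodic boundary
    return np.roll(z, -1, axis=1) - z

def dx_minus(z):  # backward difference along rows, periodic boundary
    return z - np.roll(z, 1, axis=0)

def dy_minus(z):  # backward difference along columns, periodic boundary
    return z - np.roll(z, 1, axis=1)

def tv1(z):
    """First-order isotropic TV: sum over pixels of sqrt((dx+ z)^2 + (dy+ z)^2)."""
    return np.sqrt(dx_plus(z)**2 + dy_plus(z)**2).sum()

def tv2(z):
    """Second-order TV built from the four second differences above."""
    zxx = dx_minus(dx_plus(z))
    zyy = dy_minus(dy_plus(z))
    zxy = dx_plus(dy_plus(z))
    zyx = dy_plus(dx_plus(z))
    return np.sqrt(zxx**2 + zxy**2 + zyx**2 + zyy**2).sum()
```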

2.2. Overlapping Group Sparsity on Hyper-Laplacian Prior

With respect to the two-dimensional image matrix z R n × n , a K × K -point group is defined in [25] as
$\tilde{z}_{(i,j),K} = \begin{pmatrix} z(i-m_1,\, j-m_1) & \cdots & z(i-m_1,\, j+m_2) \\ z(i-m_1+1,\, j-m_1) & \cdots & z(i-m_1+1,\, j+m_2) \\ \vdots & \ddots & \vdots \\ z(i+m_2,\, j-m_1) & \cdots & z(i+m_2,\, j+m_2) \end{pmatrix} \in \mathbb{R}^{K \times K},$
where m_1 = ⌊(K − 1)/2⌋ and m_2 = ⌊K/2⌋, and ⌊t⌋ denotes the largest integer not greater than t. Clearly, z̃_{(i,j),K} can be seen as a square block of K × K contiguous samples of z centered at pixel (i, j). Following the sparse dictionary convention, Liu et al. [25] stacked z̃_{(i,j),K} into a column vector, i.e., z_{(i,j),K} = z̃_{(i,j),K}(:). Then, the two-dimensional overlapping group sparsity regularizer is defined as
$\varphi(z) = \sum_{i,j=1}^{n} \| z_{(i,j),K} \|_2 = \sum_{i,j=1}^{n} \sqrt{ \sum_{k_1,k_2=-m_1}^{m_2} | z(i+k_1,\, j+k_2) |^2 }.$
Based on the above, the overlapping group sparsity total variation ϕ(z) can be expressed as
$\phi(z) = \varphi(\nabla_x^{+} z) + \varphi(\nabla_y^{+} z).$
The hyper-Laplacian prior theory has recently attracted much attention since it offers a benign approximation to the heavy-tailed distribution of natural image gradients. Jon et al. [26] define the OGS-HL regularizer ϕ_OH(z) by
$\phi_{OH}(z) = \varphi_{OH}(\nabla_x^{+} z) + \varphi_{OH}(\nabla_y^{+} z),$
with
$\varphi_{OH}(z) = \sum_{i,j=1}^{n} \left\| \left| z_{(i,j),K} \right|^r \right\|_2 = \sum_{i,j=1}^{n} \sqrt{ \sum_{k_1,k_2=-m_1}^{m_2} | z(i+k_1,\, j+k_2) |^{2r} },$
where |z_{(i,j),K}|^r denotes the vector whose elements are the r-th powers of the absolute values of the corresponding elements of z_{(i,j),K}, and r, with 0 < r < 1, is the scale parameter of the hyper-Laplacian distribution; see [26]. If r = 1, then ϕ_OH(z) = ϕ(z).
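Since each group sum ranges over a K × K window, φ_OH can be evaluated by convolving |·|^{2r} with a K × K kernel of ones. A hedged sketch, reusing dx_plus/dy_plus from the snippet in Section 2.1 and assuming wrap-around boundary handling (the paper does not specify the boundary convention):

```python
import numpy as np
from scipy.ndimage import convolve

def phi_ogs_hl(z, K=3, r=0.8):
    """phi_OH(z): OGS-HL penalty applied to the two first-order gradients of z."""
    kernel = np.ones((K, K))
    def varphi(v):
        # inner sum of |v|^(2r) over each K x K group, then sqrt and sum over pixels
        group = convolve(np.abs(v)**(2 * r), kernel, mode='wrap')
        return np.sqrt(group).sum()
    return varphi(dx_plus(z)) + varphi(dy_plus(z))
```

Setting r = 1 in this sketch recovers the plain OGSTV penalty ϕ(z).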

3. Proposed Method with Adaptive Parameter Adjustment

Under the framework of the alternating direction method of multipliers (ADMM), the variable splitting technique is used to solve the proposed model. We first introduce the auxiliary variables w, v, and q. The original unconstrained optimization problem (7) can then be converted into the following equivalent constrained minimization problem:
$\min_{z,w,v,q} \; \alpha \left\langle w - f \log w,\, 1 \right\rangle + \phi_{OH}(v) + \omega \|q\|_p^p, \quad \text{s.t.}\;\; w = z,\; v = \nabla z,\; q = \nabla^2 z. \qquad (8)$
We define the corresponding augmented Lagrangian function of (8) as follows:
$\mathcal{L}_A(z, w, v, q; \mu_1, \mu_2, \mu_3) = \alpha \left\langle w - f \log w,\, 1 \right\rangle + \phi_{OH}(v) + \omega \|q\|_p^p - \mu_1^T (w - z) + \frac{\beta_1}{2}\|w - z\|_2^2 - \mu_2^T (v - \nabla z) + \frac{\beta_2}{2}\|v - \nabla z\|_2^2 - \mu_3^T (q - \nabla^2 z) + \frac{\beta_3}{2}\|q - \nabla^2 z\|_2^2$
$= \alpha \left\langle w - f \log w,\, 1 \right\rangle + \phi_{OH}(v) + \omega \|q\|_p^p + \frac{\beta_1}{2}\left\|w - z - \frac{\mu_1}{\beta_1}\right\|_2^2 + \frac{\beta_2}{2}\left\|v - \nabla z - \frac{\mu_2}{\beta_2}\right\|_2^2 + \frac{\beta_3}{2}\left\|q - \nabla^2 z - \frac{\mu_3}{\beta_3}\right\|_2^2$ (up to constant terms independent of (z, w, v, q)),
where β_i (i = 1, 2, 3) are positive penalty parameters and μ_i (i = 1, 2, 3) are the Lagrange multipliers. The optimal solution of (8) can be obtained by finding the saddle point of L_A(z, w, v, q; μ_1, μ_2, μ_3) under the ADMM framework. We alternately solve the following subproblems:
$w^{k+1} = \operatorname*{argmin}_{w} \; \mathcal{L}_A(z^k, w, v^k, q^k; \mu_1^k, \mu_2^k, \mu_3^k),$
$q^{k+1} = \operatorname*{argmin}_{q} \; \mathcal{L}_A(z^k, w^{k+1}, v^k, q; \mu_1^k, \mu_2^k, \mu_3^k),$
$v^{k+1} = \operatorname*{argmin}_{v} \; \mathcal{L}_A(z^k, w^{k+1}, v, q^{k+1}; \mu_1^k, \mu_2^k, \mu_3^k),$
$z^{k+1} = \operatorname*{argmin}_{z} \; \mathcal{L}_A(z, w^{k+1}, v^{k+1}, q^{k+1}; \mu_1^k, \mu_2^k, \mu_3^k),$
and the Lagrange multiplier parameters are updated as follows:
$\mu_1^{k+1} = \mu_1^k - \beta_1 (w^{k+1} - z^{k+1}), \quad \mu_2^{k+1} = \mu_2^k - \beta_2 (v^{k+1} - \nabla z^{k+1}), \quad \mu_3^{k+1} = \mu_3^k - \beta_3 (q^{k+1} - \nabla^2 z^{k+1}).$
Firstly, fixing z = z^k, v = v^k, and q = q^k, the w-subproblem is the following optimization problem:
$w^{k+1} = \operatorname*{argmin}_{w} \; \alpha \left\langle w - f \log w,\, 1 \right\rangle + \frac{\beta_1}{2} \left\| w - z^k - \frac{\mu_1^k}{\beta_1} \right\|_2^2. \qquad (9)$
Using the Euler–Lagrange equation of (9), the optimal solution of the w-subproblem satisfies
$\alpha \left( 1 - \frac{f}{w} \right) + \beta_1 \left( w - z^k - \frac{\mu_1^k}{\beta_1} \right) = 0.$
Then, the solution of w is
$w^{k+1} = \frac{1}{2} \left\{ \left( z^k + \frac{\mu_1^k}{\beta_1} - \frac{\alpha}{\beta_1} \right) + \sqrt{ \left( z^k + \frac{\mu_1^k}{\beta_1} - \frac{\alpha}{\beta_1} \right)^2 + \frac{4 \alpha f}{\beta_1} } \right\}. \qquad (10)$
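Pixel-wise, this is the positive root of the quadratic equation obtained by multiplying the optimality condition by w. A short NumPy sketch (since f ≥ 0, the discriminant is non-negative and the positive root is selected):

```python
import numpy as np

def update_w(z, f, mu1, alpha, beta1):
    """Closed-form w-update (10), applied element-wise."""
    a = z + mu1 / beta1 - alpha / beta1
    return 0.5 * (a + np.sqrt(a**2 + 4.0 * alpha * f / beta1))
```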
Secondly, the q-subproblem is equivalent to the following nonconvex optimization problem:
$q^{k+1} = \operatorname*{argmin}_{q} \; \omega \|q\|_p^p + \frac{\beta_3}{2} \left\| q - \nabla^2 z^k - \frac{\mu_3^k}{\beta_3} \right\|_2^2. \qquad (11)$
The q-subproblem (11) is rewritten as
$q^{k+1} = \operatorname*{argmin}_{q} \; \tau \|q\|_p^p + \frac{1}{2} \left\| q - y^k \right\|_2^2, \qquad (12)$
where y^k = ∇²z^k + μ_3^k/β_3 and τ = ω/β_3. In this paper, we adopt the iteratively reweighted ℓ1 (IRL1) method [27] to solve this non-convex ℓp-regularized problem (12). As in [27], we approximate the non-convex minimization (12) by the following weighted ℓ1 problem:
$q^{k+1} = \operatorname*{argmin}_{q} \; \sum_i \eta_i |q_i| + \frac{1}{2} \| q - y^k \|_2^2, \qquad (13)$
where η is a weight vector with components η_i = τp(|q_i^k| + ϵ)^{p−1}, and ϵ > 0 is a small number close to zero. The minimization problem (13) has a closed-form optimal solution given by the one-dimensional shrinkage operator:
$q_i^{k+1} = \max \left\{ |y_i^k| - \eta_i,\, 0 \right\} \operatorname{sgn}(y_i^k). \qquad (14)$
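One IRL1 pass therefore amounts to recomputing the weights from the previous iterate and applying element-wise soft-shrinkage; a sketch (the function name and the value of ϵ are our choices):

```python
import numpy as np

def update_q(y, q_prev, tau, p, eps=1e-8):
    """One IRL1 pass for (12): weights (13) followed by soft-shrinkage (14)."""
    eta = tau * p * (np.abs(q_prev) + eps)**(p - 1.0)   # eta_i = tau*p*(|q_i^k|+eps)^(p-1)
    return np.maximum(np.abs(y) - eta, 0.0) * np.sign(y)
```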
Thirdly, the v-subproblem can be expressed as
$v^{k+1} = \operatorname*{argmin}_{v} \; \phi_{OH}(v) + \frac{\beta_2}{2} \left\| v - \nabla z^k - \frac{\mu_2^k}{\beta_2} \right\|_2^2. \qquad (15)$
By using the majorization–minimization (MM) method [26], problem (15) can be solved iteratively as
$v^{k+1} = \operatorname*{argmin}_{v} \; \frac{\beta_2}{2} \| v - v_0 \|_2^2 + \frac{r}{2} \left\| \Lambda(v^k) \left( |v^k|^{r-1} \odot v \right) \right\|_2^2, \qquad (16)$
where Λ(v) is a diagonal matrix whose diagonal elements are
$[\Lambda(v)]_{l,l} = \left[ \sum_{i,j=-m_1}^{m_2} \left( \sum_{k_1,k_2=-m_1}^{m_2} \left| v(s-i+k_1,\, t-j+k_2) \right|^{2r} \right)^{-\frac{1}{2}} \right]^{\frac{1}{2}},$
with l = (s − 1)n + t for s, t = 1, …, n and r ∈ (0, 1), where ⊙ denotes element-wise multiplication and v_0 = ∇z^k + μ_2^k/β_2. Therefore, the explicit optimal solution of the v-subproblem is given as follows:
$v^{k+1} = \left[ I + \frac{r}{\beta_2} \Lambda(v^k)^T \Lambda(v^k) S(v^k) \right]^{-1} v_0, \qquad (17)$
where I denotes the n² × n² identity matrix and S(v) = diag(|v|^{2r−2}).
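Because Λ(v^k) and S(v^k) are both diagonal, the matrix inverse in (17) reduces to an element-wise division, and the diagonal of Λ can be assembled with two K × K box convolutions. A hedged sketch under our own assumptions (odd K so that the ones kernel is centered, wrap-around boundaries, and a small ϵ guarding the negative powers at zero entries); it is applied separately to each gradient component of v:

```python
import numpy as np
from scipy.ndimage import convolve

def update_v(v_prev, v0, beta2, K=3, r=0.8, eps=1e-8):
    """MM v-update (17) as an element-wise division (Lambda and S are diagonal)."""
    kernel = np.ones((K, K))
    # inner group sums of |v^k|^(2r), one value per group center
    group = convolve(np.abs(v_prev)**(2 * r), kernel, mode='wrap') + eps
    # [Lambda^T Lambda]_{ll}: sum, over the groups containing pixel l, of group^(-1/2)
    lam_sq = convolve(group**(-0.5), kernel, mode='wrap')
    s = (np.abs(v_prev) + eps)**(2 * r - 2)              # diagonal of S(v^k)
    return v0 / (1.0 + (r / beta2) * lam_sq * s)
```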
Finally, the z-subproblem can be simplified as
$z^{k+1} = \operatorname*{argmin}_{z} \; \frac{\beta_1}{2} \left\| w^{k+1} - z - \frac{\mu_1^k}{\beta_1} \right\|_2^2 + \frac{\beta_2}{2} \left\| v^{k+1} - \nabla z - \frac{\mu_2^k}{\beta_2} \right\|_2^2 + \frac{\beta_3}{2} \left\| q^{k+1} - \nabla^2 z - \frac{\mu_3^k}{\beta_3} \right\|_2^2. \qquad (18)$
By differentiating the minimization problem (18) directly, its optimal solution can be obtained from the following Euler–Lagrange equation:
$\left( \beta_1 I + \beta_2 \nabla^T \nabla + \beta_3 (\nabla^2)^T \nabla^2 \right) z^{k+1} = \beta_1 \left( w^{k+1} - \frac{\mu_1^k}{\beta_1} \right) + \beta_2 \nabla^T \left( v^{k+1} - \frac{\mu_2^k}{\beta_2} \right) + \beta_3 (\nabla^2)^T \left( q^{k+1} - \frac{\mu_3^k}{\beta_3} \right). \qquad (19)$
By using the fast Fourier transform, the optimal solution of (19) can be given as
$z^{k+1} = \mathcal{F}^{-1} \left( \frac{ \mathcal{F} \left( \beta_1 (w^{k+1} - \frac{\mu_1^k}{\beta_1}) + \beta_2 \nabla^T (v^{k+1} - \frac{\mu_2^k}{\beta_2}) + \beta_3 (\nabla^2)^T (q^{k+1} - \frac{\mu_3^k}{\beta_3}) \right) }{ \mathcal{F} \left( \beta_1 I + \beta_2 \nabla^T \nabla + \beta_3 (\nabla^2)^T \nabla^2 \right) } \right). \qquad (20)$
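With periodic boundaries every difference operator is a circular convolution, so applying it, or its adjoint, is a pointwise multiplication by its (conjugate) transfer function in the Fourier domain. A sketch of the z-update along these lines, reusing the difference operators from Section 2.1; packing v and q into per-component lists is our own convention:

```python
import numpy as np

def transfer(op, shape):
    """Transfer function of a shift-invariant operator from its impulse response."""
    delta = np.zeros(shape)
    delta[0, 0] = 1.0
    return np.fft.fft2(op(delta))

def update_z(w, v, q, mu1, mu2, mu3, beta1, beta2, beta3, shape):
    ops1 = [dx_plus, dy_plus]                      # components of the gradient
    ops2 = [lambda z: dx_minus(dx_plus(z)),        # the four second differences
            lambda z: dx_plus(dy_plus(z)),
            lambda z: dy_plus(dx_plus(z)),
            lambda z: dy_minus(dy_plus(z))]
    D1 = [transfer(op, shape) for op in ops1]
    D2 = [transfer(op, shape) for op in ops2]
    num = beta1 * np.fft.fft2(w - mu1 / beta1)
    num += beta2 * sum(np.conj(D) * np.fft.fft2(vi - m / beta2)
                       for D, vi, m in zip(D1, v, mu2))
    num += beta3 * sum(np.conj(D) * np.fft.fft2(qi - m / beta3)
                       for D, qi, m in zip(D2, q, mu3))
    den = beta1 + beta2 * sum(np.abs(D)**2 for D in D1) \
                + beta3 * sum(np.abs(D)**2 for D in D2)
    return np.real(np.fft.ifft2(num / den))
```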
We give a detailed description of the proposed method (named Algorithm 1: NHOGSHL) for removing multiplicative noise as follows.
Algorithm 1: NHOGSHL for image restoration under multiplicative noise
   Input: f, α, ω, β1, β2, β3, K, N_iter, ε = 10^{-5}.
   Initialize: k = 0, z^0 = f, μ1^0 = 0, μ2^0 = 0, μ3^0 = 0.
   While ‖z^{k+1} − z^k‖_2 / ‖z^k‖_2 ≥ ε and k < N_iter
      (1): Update w^{k+1} by solving (10);
      (2): Update q^{k+1} by solving (14);
      (3): Update v^{k+1} by solving (17);
      (4): Update z^{k+1} by solving (20);
      (5): Update the multipliers:
         μ1^{k+1} = μ1^k − β1(w^{k+1} − z^{k+1}),
         μ2^{k+1} = μ2^k − β2(v^{k+1} − ∇z^{k+1}),
         μ3^{k+1} = μ3^k − β3(q^{k+1} − ∇²z^{k+1});
      (6): k ← k + 1.
   End
   Output: z^{k+1}.
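Putting the pieces together, here is a compact driver corresponding to Algorithm 1 that wires up the subproblem sketches above (update_w, update_q, update_v, update_z, and the difference operators). The default parameter values follow Section 4.1, and the whole skeleton is illustrative rather than the authors' actual code:

```python
import numpy as np

SECOND_DIFFS = [lambda z: dx_minus(dx_plus(z)),
                lambda z: dx_plus(dy_plus(z)),
                lambda z: dy_plus(dx_plus(z)),
                lambda z: dy_minus(dy_plus(z))]

def nhogshl(f, alpha=60.0, omega=0.7, beta1=1.0, beta2=1.0, beta3=1.0,
            p=0.6, r=0.8, K=3, n_iter=10, tol=1e-5):
    shape = f.shape
    z = f.copy()
    vx, vy = dx_plus(z), dy_plus(z)
    q = [np.zeros(shape) for _ in range(4)]
    mu1 = np.zeros(shape)
    mu2 = [np.zeros(shape) for _ in range(2)]
    mu3 = [np.zeros(shape) for _ in range(4)]
    for _ in range(n_iter):
        z_old = z
        w = update_w(z, f, mu1, alpha, beta1)
        # q-step on the four second differences of z
        y = [op(z) + m / beta3 for op, m in zip(SECOND_DIFFS, mu3)]
        q = [update_q(yi, qi, omega / beta3, p) for yi, qi in zip(y, q)]
        # v-step on each first-order gradient component
        vx = update_v(vx, dx_plus(z) + mu2[0] / beta2, beta2, K, r)
        vy = update_v(vy, dy_plus(z) + mu2[1] / beta2, beta2, K, r)
        z = update_z(w, (vx, vy), q, mu1, mu2, mu3, beta1, beta2, beta3, shape)
        # multiplier updates
        mu1 = mu1 - beta1 * (w - z)
        mu2 = [m - beta2 * (v - d(z))
               for m, v, d in zip(mu2, (vx, vy), (dx_plus, dy_plus))]
        mu3 = [m - beta3 * (qi - op(z)) for m, qi, op in zip(mu3, q, SECOND_DIFFS)]
        if np.linalg.norm(z - z_old) / np.linalg.norm(z_old) < tol:
            break
    return z
```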

4. Numerical Experiments

In this section, we illustrate the effectiveness of the NHOGSHL model (7) by comparing it with the following three models: the CONVEX model in [28], the OGSTVD model in [9], and the M-TGV model in [12]. The six gray-scale images shown in Figure 1, all of size 256 × 256, were used in the experiments. All experimental results in this article were obtained with MATLAB R2014a on a PC equipped with 4.00 GB RAM and an Intel(R) Core(TM) i5-6500U CPU (3.20 GHz). In addition to the peak signal-to-noise ratio (PSNR, in dB), a quantitative measure of image quality, the structural similarity index measure (SSIM) [29] was calculated to assess visual quality. They are defined as follows:
$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{n^2} \| z_0 - z \|_2^2}, \qquad (21)$
where z 0 denotes the original image and z denotes the recovered clean image.
$\mathrm{SSIM} = \frac{(2 \mu_{z_0} \mu_z + c_1)(2 \sigma_{z_0 z} + c_2)}{(\mu_{z_0}^2 + \mu_z^2 + c_1)(\sigma_{z_0}^2 + \sigma_z^2 + c_2)}, \qquad (22)$
where μ_{z_0} and μ_z are the means of z_0 and z, σ_{z_0}² and σ_z² are their variances, and σ_{z_0 z} is the covariance of z_0 and z. c_1 and c_2 are small positive constants that keep the denominator away from zero. The higher the PSNR and the closer the SSIM is to 1, the better the image recovery. In all experiments, the stopping criterion is set to
$\frac{\| z^{k+1} - z^k \|_2}{\| z^k \|_2} < 10^{-5}.$
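A sketch of the two metrics as defined above. Note that this is the global (single-window) form of SSIM written in Equation (22), not the sliding-window implementation of [29]; the constants c1 and c2 follow the common (0.01·255)² and (0.03·255)² convention, which is our assumption:

```python
import numpy as np

def psnr(z0, z):
    """PSNR in dB for 8-bit images, as in Equation (21)."""
    mse = np.mean((z0.astype(float) - z.astype(float))**2)
    return 10.0 * np.log10(255.0**2 / mse)

def ssim_global(z0, z, c1=(0.01 * 255)**2, c2=(0.03 * 255)**2):
    """Single-window SSIM, as in Equation (22)."""
    mu0, mu1 = z0.mean(), z.mean()
    var0, var1 = z0.var(), z.var()
    cov = np.mean((z0 - mu0) * (z - mu1))
    return ((2 * mu0 * mu1 + c1) * (2 * cov + c2)) / \
           ((mu0**2 + mu1**2 + c1) * (var0 + var1 + c2))
```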

4.1. Parameters Setting

In this section, we describe in detail how the best values of the parameters α, ω, p, K, N_iter, and r were chosen. We take the three images "Boats", "Camera", and "Lin" as examples to illustrate the selection process. Multiplicative noise with level L = 20 was added to these three images.
First, we assume that the other parameters are fixed and let α vary between 10 and 90 to determine its optimal value. The variation in PSNR and SSIM with α is plotted in Figure 2; the proposed method performs best when α is around 60. Similarly, the optimal value of ω can be obtained: both PSNR and SSIM reach their maximum when ω is around 0.7. Therefore, ω was set to 0.7 in the following experiments.
Next, Figure 2 also plots PSNR and SSIM as functions of the parameters p and r. From Figure 2, the best results are obtained when p ∈ (0.5, 0.7) and r = 0.8. Therefore, we set p = 0.6 and r = 0.8.
Finally, Figure 2 shows that the optimal group size is K = 3, and that the results converge to the optimum once the number of iterations exceeds N_iter = 10. Therefore, we set N_iter = 10 in our experiments.

4.2. Experimental Results

In this section, we conduct experiments to verify the good performance of the proposed algorithm. Firstly, Figure 3 shows the denoising results of the different methods at a noise level of L = 30; our method is more competitive in both visual effect and quantitative analysis. To better observe the recovered images, we enlarge the regions marked by the red boxes in Figure 3. As seen in Figure 3 and Figure 4, our method provides superior recovery of the structure of the restored image. Next, in Figure 5 and Figure 6, the recovered "Boats" and "Tulips" images are shown for the four methods at a noise level of L = 20. The other compared methods produce staircase artifacts and lose much detail, while our method successfully overcomes these shortcomings. Finally, under the noise level L = 10, the "Lin" and "Peppers" images recovered by the different methods are shown in Figure 7 and Figure 8. As can be seen in the figures, our method has a clear advantage in restoring sharp edges, such as the earrings in "Lin" and the stems in "Peppers". Both M-TGV and our method perform better than the other two models in restoring detail and texture structure, and compared with M-TGV, our method restores even more details and textures.
To further highlight the competitiveness of our method, Figure 9 plots the fifth column of the zoomed "Boats" image at noise level L = 20. The fitting results in Figure 9 show that our method fits the original much better than the other three methods.
Table 1, Table 2 and Table 3 summarize the PSNR and SSIM values of the different methods at the different noise levels. The tables show that our method achieves higher PSNR and SSIM, which again demonstrates its effectiveness in terms of visual effect and quantitative analysis.

4.3. Convergence Analysis

In this section, we verify the convergence of our method. Figure 10 plots the SSIM and relative error (RelErr) values of the restored images versus the number of iterations at different noise levels. As seen in Figure 10, the relative error curves stabilize as the number of iterations increases.

5. Conclusions

This paper presents a new non-convex regularization model for multiplicative noise removal. The new model employs non-convex ℓp norm regularization and OGSHL regularization as a hybrid regularizer. An efficient alternating method based on an MM algorithm and the iteratively reweighted algorithm is proposed to solve the NHOGSHL model under the ADMM framework. Numerical experiments demonstrate that the NHOGSHL model is competitive against the compared methods. In future work, we hope to extend this method to problems involving mixed noise removal.

Author Contributions

J.Z. proposed the algorithm and designed the experiments; Y.W. and J.W. performed the experiments; J.Z. and B.H. wrote the paper. All authors read and approved the final manuscript.

Funding

This work was supported by a Project of the Shandong Province Higher Educational Science and Technology Program (J17KA166) and by the Joint Funds of the National Natural Science Foundation of China (U22B2049).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, F.; Ng, M.K.; Shen, C.M. Multiplicative noise removal with spatially varying regularization parameters. SIAM J. Imaging Sci. 2010, 3, 1–20. [Google Scholar] [CrossRef]
  2. Shi, B.L.; Huang, L.H.; Pang, Z.F. Fast algorithm for multiplicative noise removal. J. Vis. Commun. Image Represent. 2012, 23, 126–133. [Google Scholar] [CrossRef]
  3. Rudin, L.; Lions, P.L.; Osher, S. Multiplicative denoising and deblurring: Theory and algorithms. In Geometric Level Set Methods in Imaging, Vision, and Graphics; Springer: Berlin/Heidelberg, Germany, 2003; pp. 103–119. [Google Scholar]
  4. Aubert, G.; Aujol, J.F. A variational approach to removing multiplicative noise. SIAM J. Appl. Math. 2008, 68, 925–946. [Google Scholar] [CrossRef]
  5. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Multiplicative noise removal using variable splitting and constrained optimization. IEEE Trans. Image Process. 2010, 19, 1720–1730. [Google Scholar] [CrossRef]
  6. Huang, Y.M.; Ng, M.K.; Wen, Y.W. A new total variation method for multiplicative noise removal. SIAM J. Imaging Sci. 2009, 2, 20–40. [Google Scholar] [CrossRef]
  7. Shi, J.; Osher, S. A nonlinear inverse scale space method for a convex multiplicative noise model. SIAM J. Imaging Sci. 2008, 1, 294–321. [Google Scholar] [CrossRef]
  8. Steidl, G.; Teuber, T. Removing multiplicative noise by Douglas–Rachford splitting method. J. Math. Imaging Vis. 2010, 36, 168–184. [Google Scholar] [CrossRef]
  9. Liu, J.; Huang, T.Z.; Liu, G.; Wang, S.; Lv, X.G. Total variation with overlapping group sparsity for speckle noise reduction. Neurocomputing 2016, 216, 502–513. [Google Scholar] [CrossRef]
  10. Liu, P. Hybrid higher-order total variation model for multiplicative noise removal. IET Image Process. 2020, 14, 862–873. [Google Scholar] [CrossRef]
  11. Shama, M.G.; Huang, T.Z.; Liu, J.; Wang, S. A convex total generalized variation regularized model for multiplicative noise and blur removal. Appl. Math. Comput. 2016, 276, 109–121. [Google Scholar] [CrossRef]
  12. Lv, Y.H. Total generalized variation denoising of speckled images using a primal-dual algorithm. J. Appl. Math. Comput. 2020, 62, 489–509. [Google Scholar] [CrossRef]
  13. Chartrand, R. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 2007, 14, 707–710. [Google Scholar] [CrossRef]
  14. Han, Y.; Feng, X.C.; Baciu, G.; Wang, W.W. Nonconvex sparse regularizer based speckle noise removal. Pattern Recognit. 2013, 46, 989–1001. [Google Scholar] [CrossRef]
  15. Chen, X.J.; Zhou, W.J. Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 2010, 3, 765–790. [Google Scholar] [CrossRef]
  16. Nikolova, M. Analysis of the recovery of edges in images and signals by minimizing nonconvex regularized least-squares. Multiscale Model. Simul. 2005, 4, 960–991. [Google Scholar] [CrossRef]
  17. Nikolova, M.; Ng, M.K.; Zhang, S.; Ching, W.K. Efficient reconstruction of piecewise constant images using nonsmooth nonconvex minimization. SIAM J. Imaging Sci. 2008, 1, 2–25. [Google Scholar] [CrossRef]
  18. Krishnan, D.; Fergus, R. Fast image deconvolution using hyper-Laplacian priors. In Proceedings of the Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009, Vancouver, BC, Canada, 7–10 December 2009; pp. 1033–1041. [Google Scholar]
  19. Levin, A.; Weiss, Y.; Durand, F.; Freeman, W.T. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2009, Miami, FL, USA, 20–25 June 2009; pp. 1964–1971. [Google Scholar]
  20. Fergus, R.; Singh, B.; Hertzmann, A.; Roweis, S.T.; Freeman, W.T. Removing camera shake from a single photograph. ACM Trans. Graphics 2006, 25, 787–794. [Google Scholar] [CrossRef]
  21. Chang, Y.; Yan, L.; Zhong, S. Hyper-Laplacian regularized unidirectional low-rank tensor recovery for multispectral image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 5901–5909. [Google Scholar]
  22. Kong, J.; Lu, K.; Jiang, M. A new blind deblurring method via hyper-Laplacian prior. Procedia Comput. Sci. 2017, 107, 789–795. [Google Scholar] [CrossRef]
  23. Zuo, W.M.; Meng, D.Y.; Zhang, L.; Feng, X.C.; Zhang, D. A generalized iterated shrinkage algorithm for non-convex sparse coding. In Proceedings of the IEEE International Conference on Computer Vision 2013, Sydney, Australia, 3–6 December 2013; pp. 217–224. [Google Scholar]
  24. Shi, M.Z.; Han, T.T.; Liu, S.Q. Total variation image restoration using hyper-Laplacian prior with overlapping group sparsity. Signal Process. 2016, 126, 65–76. [Google Scholar] [CrossRef]
  25. Liu, J.; Huang, T.Z.; Selesnick, I.W.; Lv, X.G.; Chen, P.Y. Image restoration using total variation with overlapping group sparsity. Inf. Sci. 2015, 295, 232–246. [Google Scholar] [CrossRef]
  26. Jon, K.; Sun, Y.; Li, Q.X.; Liu, J.; Wang, X.F.; Zhu, W.S. Image restoration using overlapping group sparsity on hyper-Laplacian prior of image gradient. Neurocomputing 2021, 420, 57–69. [Google Scholar] [CrossRef]
  27. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted 1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  28. Zhao, X.L.; Wang, F.; Ng, M.K. A new convex optimization model for multiplicative noise and blur removal. SIAM J. Imaging Sci. 2014, 7, 456–475. [Google Scholar] [CrossRef]
  29. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Test images: (a) Camera, (b) Peppers, (c) Boats, (d) Tulips, (e) Lin, (f) Man.
Figure 2. (a–l) The PSNR and SSIM values with respect to the parameters α, ω, p, K, N_iter, and r.
Figure 3. Recovery results of four algorithms on different images with the noise at L = 30: (a) Degraded. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL. (f) Degraded. (g) CONVEX. (h) OGSTVD. (i) M-TGV. (j) NHOGSHL.
Figure 4. Zoomed-in region of Figure 3: (a) Degraded. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL. (f) Degraded. (g) CONVEX. (h) OGSTVD. (i) M-TGV. (j) NHOGSHL.
Figure 5. Recovery results of four algorithms on different images with the noise at L = 20: (a) Degraded. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL. (f) Degraded. (g) CONVEX. (h) OGSTVD. (i) M-TGV. (j) NHOGSHL.
Figure 6. Zoomed-in region of Figure 5: (a) Degraded. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL. (f) Degraded. (g) CONVEX. (h) OGSTVD. (i) M-TGV. (j) NHOGSHL.
Figure 7. Recovery results of four algorithms on different images with the noise at L = 10: (a) Degraded. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL. (f) Degraded. (g) CONVEX. (h) OGSTVD. (i) M-TGV. (j) NHOGSHL.
Figure 8. Zoomed-in region of Figure 7: (a) Degraded. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL. (f) Degraded. (g) CONVEX. (h) OGSTVD. (i) M-TGV. (j) NHOGSHL.
Figure 9. Slice of Boats (the 5th column) and the corresponding denoising results under the noise level L = 20: (a) Noise image. (b) CONVEX. (c) OGSTVD. (d) M-TGV. (e) NHOGSHL.
Figure 10. The SSIM and RelErr values versus iteration in our model for different images with different noise levels: (a) L = 20. (b) L = 30. (c) L = 20. (d) L = 30.
Table 1. PSNR and SSIM values of the images restored by the different algorithms under the noise level L = 30.

Level   Image     CONVEX [28]     OGSTVD [9]      M-TGV [12]      NHOGSHL
                  PSNR/SSIM       PSNR/SSIM       PSNR/SSIM       PSNR/SSIM
L = 30  Tulips    25.23/0.7615    25.39/0.7609    27.14/0.8312    27.29/0.8344
        Man       26.19/0.7527    26.08/0.7288    28.39/0.8326    29.04/0.8583
        Camera    25.91/0.7958    26.75/0.7930    29.06/0.8260    29.17/0.8453
        Boats     26.74/0.7603    27.56/0.7869    27.87/0.7926    29.46/0.8372
        Lin       30.88/0.8814    29.93/0.8468    32.47/0.8970    32.61/0.9194
        Peppers   27.01/0.7922    27.40/0.7949    28.99/0.8186    29.56/0.8377
Table 2. PSNR and SSIM values of the images restored by the different algorithms under the noise level L = 20.

Level   Image     CONVEX [28]     OGSTVD [9]      M-TGV [12]      NHOGSHL
                  PSNR/SSIM       PSNR/SSIM       PSNR/SSIM       PSNR/SSIM
L = 20  Tulips    24.46/0.7310    24.89/0.7447    25.96/0.7822    26.04/0.7971
        Man       25.63/0.7338    25.88/0.7211    27.07/0.7826    27.90/0.8212
        Camera    25.58/0.7870    26.16/0.7274    27.84/0.8113    28.13/0.8294
        Boats     26.06/0.7434    26.77/0.7350    26.78/0.7623    28.35/0.8122
        Lin       29.03/0.8696    29.39/0.8229    31.39/0.8983    31.82/0.9164
        Peppers   26.28/0.7773    26.69/0.7474    27.90/0.8012    28.78/0.8220
Table 3. PSNR and SSIM values of the images restored by the different algorithms under the noise level L = 10.

Level   Image     CONVEX [28]     OGSTVD [9]      M-TGV [12]      NHOGSHL
                  PSNR/SSIM       PSNR/SSIM       PSNR/SSIM       PSNR/SSIM
L = 10  Tulips    22.85/0.6842    23.52/0.6845    24.38/0.7165    24.51/0.7466
        Man       24.34/0.6968    24.48/0.6590    25.65/0.7280    26.46/0.7787
        Camera    23.78/0.7564    23.91/0.7214    26.30/0.7697    26.80/0.7986
        Boats     23.91/0.6858    25.28/0.6817    25.05/0.6966    26.57/0.7553
        Lin       26.67/0.8451    27.70/0.7851    29.50/0.8555    29.93/0.8835
        Peppers   24.25/0.7401    25.22/0.7090    26.41/0.7650    27.03/0.7804