Article

Improved TV Image Denoising over Inverse Gradient

Faculty of Science, Kunming University of Science and Technology, Kunming 650500, China
* Author to whom correspondence should be addressed.
Symmetry 2023, 15(3), 678; https://doi.org/10.3390/sym15030678
Submission received: 13 February 2023 / Revised: 27 February 2023 / Accepted: 4 March 2023 / Published: 8 March 2023

Abstract

Noise in an image can hinder the extraction of image information; therefore, image denoising is an important image pre-processing step. Many existing models involve a large number of estimated parameters, which increases the time complexity of the model solution, and the achieved denoising effect is less than ideal. In this paper, an improved image-denoising algorithm based on the TV model is therefore proposed, which effectively solves the above problems. The L1 regularization term makes the solution generated by the model sparser, thus facilitating the recovery of high-quality images. Reducing the number of estimated parameters, while using the inverse gradient to estimate the regularization parameter, makes the parameter globally adaptive and improves the denoising effect of the model in combination with the TV regularization term. The split Bregman iteration method is used to decouple the model into several related subproblems, and the solutions of the coordinated subproblems are derived as optimal solutions. It is also shown that the solution of the model converges to a Karush–Kuhn–Tucker point. Experimental results show that the proposed algorithm is more effective in both preserving image texture structure and suppressing image noise.

1. Introduction

Symmetry in a mathematical functional equation refers to a transformation or operation, including the operators in fractional calculus, fractal, and local fractional calculus, that leaves the equation unchanged [1]. In image processing, the transformation and inverse transformation of matrices, mapping and inverse mapping, and the Fourier transform and its inverse all fall under the category of symmetry. The concept of symmetry is therefore widely used in image denoising.
Image denoising is an active research problem in the field of image processing. Common noises generated during image acquisition or transmission include Gaussian noise, impulse noise, Poisson noise, etc. These noises often degrade image quality. The process of recovering a clean, high-quality image from an observed low-quality noisy image $f : \Omega \to \mathbb{R}$ is known as image denoising, and it is an inverse problem. Here, the image domain $\Omega$ denotes a bounded connected open subset of $\mathbb{R}^2$ with a Lipschitz boundary. Recovering a noisy image directly is undesirable because it ignores a priori information about the image. Over the last few decades, researchers have proposed many solutions to this problem and have applied these methods to problems such as noise removal from acquired images in seismic surveys [2], nonlinear model identification [3], and fault estimation [4]. Among these, regularization methods are widely used in image processing [5,6] and have produced a general form of the Gaussian noise removal model.
$$\min_u \frac{\lambda}{2}\|u-f\|_2^2 + R(u) \tag{1}$$
Here, $R(u)$ denotes the regularization term, which is typically used to describe a priori information about the image, such as smoothness, continuity, and bounded variation. $\lambda$ denotes the regularization parameter, which balances $\|u-f\|_2^2$ and $R(u)$. The data fidelity term $\|u-f\|_2^2$ penalizes the reconstructed image $u$. By solving model (1), the aim is to recover from the observed noisy image $f$ an estimate $u$ close to the original high-quality image.
In model (1), the key to improving the denoising ability of the model is the correct choice of the regularization term $R(u)$. The intrinsic features of an image can be obtained by using model-based methods [7,8], discriminative methods [9], and variational methods [10,11] to find suitable a priori information about the image. This paper focuses on variational approaches. The classical variational model is the ROF model, the total variation (TV) model proposed by Rudin et al. The TV model can be used for image restoration [12], image reconstruction [13,14], blind deconvolution [15], and vector-valued images [16]. Although the TV model provides good results, it still does not describe the local details of an image very well. The specific reason is that, in numerical experiments, the difference grid of the TV model depends only on the horizontal and vertical directions and does not guarantee that the corresponding Euler–Lagrange equation diffuses along the edge tangent direction. In addition, the TV model does not preserve the geometric features of the image and is prone to step artifact effects in flat regions. To overcome these drawbacks, many improved models have been proposed. For example, Liu et al.'s deep learning-based image segmentation method effectively solved the problem of adhesion and overlap between adjacent particles in mineral images [17]. Zhou et al. proposed an unsupervised remote sensing image classification method to further improve classification accuracy [18]. Zhao et al. designed a residual network structure that divides the input feature map into two parts, which reduces the network parameters and improves the network inference speed [19]. In another example, Wang et al. proposed a fractional-order TV model, which performs well in color image denoising and decomposition [20]. Kazemi Golbaghi et al.
proposed a new fractional-order total variational model, whose order is automatically assigned based on the image and which better captures the edges and details of the image [21]. Lian et al. proposed a non-convex fractional-order TV model that has an excellent ability to overcome step effects and preserve sharp contours [22]. Although these methods can eliminate step artifacts to some extent, they are computationally complex and often lead to phenomena such as blurred edges or residual noise. To further improve model performance, a well-known approach is to use second-order (or higher-order) variation instead of the TV term, which also avoids step artifact effects to some extent. Duan et al. decomposed the image into structure and texture parts and proposed an edge-weighted second-order variational model for image decomposition; this model improves recovery over the TV model [23]. Fang et al. combined convolutional neural networks with traditional variational models, using relevant edge features obtained from noisy images as a priori information to make the models strongly adaptive [24]. Phan et al. used a bounded Hessian regularizer to eliminate step effects and preserve the image edge structure [25]. However, the second-order variational model, while avoiding the step artifact effect to some extent, may not be as effective as the first-order variational model in terms of denoising. In addition, the number of parameters [26] to be estimated in such models is larger, and solving for all of them at the same time is difficult. This fact motivates us to find methods that reduce the number of parameters while retaining a good denoising effect. In other words, the proposed model should not only remove step artifact effects and retain more image information, but also simplify the computational complexity of the parameters.
The main contributions of this paper are as follows. Firstly, the inclusion of the L1 regularization term makes the solution of the TV model sparser and, to some extent, also alleviates the step artifact effect generated by the TV model. Secondly, the number of parameters to be estimated is reduced, and the multi-scale inverse gradient is used to capture the edges of the image and estimate the regularization parameter, making the parameter globally adaptive. Applying this regularization parameter to the model together with the TV regularization term can significantly improve the quality of image denoising. Finally, the split Bregman iteration method (SBIM) is used to decouple the multivariate image denoising model into several related solvable subproblems, and the solutions of the coordinated subproblems are obtained as optimal solutions to the original problem. It is also shown that the solution of the model converges to a Karush–Kuhn–Tucker (KKT) point.

2. Related Work

The proposed algorithm is closely related to the regularization term, so this section focuses on several image-denoising problems that contain different regularization terms.

2.1. The ROF Model

The classical ROF model, proposed by Rudin et al. in 1992, can be represented in the form of a minimized energy generalization function as follows.
$$\arg\min_u \left\{ \frac{\lambda}{2}\|u-f\|_2^2 + \|\nabla u\|_1 \right\} \tag{2}$$
Here, $\|\nabla u\|_1$ denotes the TV regularization term. The TV term suppresses oscillatory and discontinuous solutions and is often used to deal effectively with image edges. However, the degree of diffusion of model (2) in the local normal direction is always zero, which usually leads to piecewise constant solutions. In other words, the TV regularization term tends to give rise to step artifact effects in the model.

2.2. p-Order TV-Based Model

In order to overcome the step artifact effect produced by the ROF model, some researchers have introduced a p-order total variation model, which can be expressed as follows.
$$\arg\min_u \left\{ \frac{\lambda}{2}\|u-f\|_2^2 + \|\nabla^p u\|_1 \right\} \tag{3}$$
When $p = 2$ and $\|\nabla^2 u\|_1 = \sum_{i=1}^{m}\sum_{j=1}^{l}\sqrt{u_{xx}(i,j)^2 + u_{xy}(i,j)^2 + u_{yx}(i,j)^2 + u_{yy}(i,j)^2}$, model (3) is a higher-order total variation model, with applications such as the Laplacian penalty [27,28], anisotropic second-order regularization [29], and Hessian Schatten-norm regularization [30]. In fact, the piecewise linear solutions generated by vanishing second-order derivatives fit smooth intensity changes better. As a result, such models have better denoising performance than TV models in terms of maintaining smooth regions. However, this model tends to blur edges. When $0 < p < 1$, model (3) becomes a fractional-order variational model, which is also better at removing step artifact effects but has a higher computational complexity.

2.3. Lasso Regression Model

The Lasso regression model was first proposed by Robert Tibshirani. The model allows for variable selection and complexity adjustment (regularization) when fitting a generalized linear model. Thus, the Lasso regression model can be used to obtain approximate solutions to the original problem regardless of whether the target variable is continuous, binary, or multivariate discrete. In image processing, the Lasso regularization term allows the model to produce a sparse weight matrix for feature selection. To a certain extent, it can also prevent overfitting of the model. The model can be expressed as follows.
$$\arg\min_u \|f - Au\|_2^2 + \lambda\|u\|_1 \tag{4}$$
However, the model is not differentiable everywhere, so a solution cannot be obtained by direct differentiation. Some researchers have used the coordinate descent method and the least angle regression method to solve it.
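As an illustration, model (4) can also be solved by proximal gradient descent (ISTA), whose core is the same soft-thresholding operator that reappears later in this paper. The following NumPy sketch is not taken from the paper; the choice of ISTA and the function names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||x||_1: elementwise soft-thresholding
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(A, f, lam, n_iter=500):
    """Solve argmin_u ||f - A u||_2^2 + lam * ||u||_1 by proximal gradient
    descent (ISTA), a simple alternative to coordinate descent or LARS."""
    u = np.zeros(A.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ u - f)  # gradient of the quadratic data term
        u = soft_threshold(u - step * grad, step * lam)
    return u
```

For $A = I$ the iteration reproduces the closed-form minimizer $u_i = \mathrm{sign}(f_i)\max(|f_i| - \lambda/2, 0)$.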

3. The Proposed New Model

3.1. New Models

Typically, images have different structures in different regions. Image denoising aims to remove noise while retaining as much structural information as possible. Whereas the TV regularization term can effectively preserve the structural information of the image, the lasso regularization term can improve the sparsity of the model solution. Therefore, we propose an improved TV model.
$$\arg\min_u \left\{ \frac{1}{2}\|Au-f\|_2^2 + \alpha\|\nabla u\|_1 + \beta\|u\|_1 \right\} \tag{5}$$
Model (5) is effective in removing image noise, preserving image edge information, and eliminating the step artifacts produced by the conventional TV model and its variants. However, parameter estimation for this model is a major challenge, and the effectiveness of step artifact removal depends on choosing robust parameters. In model (5), we consider multiplying through by $\kappa$, $\kappa > 0$, and setting $\beta = \kappa\alpha$, which gives the following form.
$$\arg\min_u \left\{ \frac{\kappa}{2}\|Au-f\|_2^2 + \beta\|\nabla u\|_1 + \beta\kappa\|u\|_1 \right\} \tag{6}$$
$$\arg\min_u \left\{ \frac{\kappa}{2\beta}\|Au-f\|_2^2 + \|\nabla u\|_1 + \kappa\|u\|_1 \right\} \tag{7}$$
To simplify the parameters, let $\lambda = \kappa/\beta$; then the following model can be obtained.
$$\arg\min_u \left\{ \frac{\lambda}{2}\|Au-f\|_2^2 + \|\nabla u\|_1 + \kappa\|u\|_1 \right\} \tag{8}$$
In model (8), $\lambda$ is the parameter of the data fidelity term and $\kappa > 0$ is the equilibrium regularization parameter. The model has the following advantages: (i) in practice, the equilibrium parameter $\kappa$ of model (8) plays a preferential role in eliminating noise or facilitating the elimination of step artifacts; (ii) the parameter $\lambda$ of the model is easier to estimate than $\alpha$ and $\beta$ of model (5).
In solving the model, estimating the values of the parameters is an important task. In [31,32] it was demonstrated that the simultaneous use of the inverse gradient-driven parameters with the TV regularization term can significantly improve the denoising quality of damaged images. Therefore, this paper further improves the denoising performance of the model by introducing a multi-scale inverse gradient adaptive regularization parameter that depends on the noisy image, which is expressed as follows.
$$\lambda(f) = \frac{\mu}{1 + \tau \max_\rho \|\nabla G_\rho * f\|_2^2} \tag{9}$$
Here, $G_\rho = \frac{1}{2\pi\rho^2}\exp\left(-\frac{x_1^2 + x_2^2}{2\rho^2}\right)$ is a two-dimensional Gaussian kernel function, $*$ denotes a two-dimensional convolution operation, $\tau$ is a constant taking values in the range $10^{-4}$–$10^{-2}$, and $\mu = 2/9$. For the scale parameter $\rho$, only five scale levels are considered in this paper, $\rho = 1, 2, 3, 4, 5$, respectively. The regularization parameter $\lambda(f)$ is globally adaptive to noisy images. Thus, model (8) can be rewritten as
$$\arg\min_u \left\{ \frac{\lambda(f)}{2}\|Au-f\|_2^2 + \|\nabla u\|_1 + \kappa\|u\|_1 \right\}, \tag{10}$$
where each value of the scale parameter $\rho$ corresponds to a value of $\|\nabla G_\rho * f\|_2$. Of all the values $\|\nabla G_\rho * f\|_2$, we choose the maximum as the value used in the parameter $\lambda(f)$ estimated by the model.
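A NumPy sketch of this multi-scale estimate is given below, assuming periodic Gaussian convolution implemented in the Fourier domain, $\mu = 2/9$, and $\tau = 10^{-3}$ (one value inside the stated range); the function name `adaptive_lambda` is an illustrative assumption.

```python
import numpy as np

def adaptive_lambda(f, tau=1e-3, mu=2.0 / 9.0, scales=(1, 2, 3, 4, 5)):
    """Multi-scale inverse-gradient estimate of the fidelity weight:
    lambda(f) = mu / (1 + tau * max_rho ||grad(G_rho * f)||_2^2),
    with G_rho a 2-D Gaussian and * periodic convolution (Eq. (9))."""
    m, n = f.shape
    F = np.fft.fft2(f)
    wx = np.fft.fftfreq(m)[:, None]  # frequency grids for the Gaussian
    wy = np.fft.fftfreq(n)[None, :]  # transfer function
    best = 0.0
    for rho in scales:
        # Fourier transform of a Gaussian is a Gaussian
        H = np.exp(-2.0 * (np.pi ** 2) * (rho ** 2) * (wx ** 2 + wy ** 2))
        smoothed = np.real(np.fft.ifft2(F * H))
        gx, gy = np.gradient(smoothed)
        best = max(best, np.sum(gx ** 2 + gy ** 2))  # squared L2 gradient norm
    return mu / (1.0 + tau * best)
```

A flat (edge-free) image yields the maximal weight $\mu$, while strong multi-scale gradients shrink $\lambda(f)$, weakening the fidelity term relative to the regularizers.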

3.2. Solving the Model

It can be seen that model (10) is a large-scale non-convex optimization problem, and solving model (10) directly is difficult. An effective numerical solution method is SBIM, which is commonly used to solve high-dimensional signal processing problems in machine learning, computer vision, and image and signal processing. This method is closely related to dual decomposition, the alternating direction method of multipliers, and Dykstra's alternating projection method, among others. SBIM decomposes a large global problem into a series of solvable local subproblems over the variables and computes the global solution from the solutions of the subproblems.
In this paper, the model is solved by iteratively updating the primal variables and the corresponding dual variables of the augmented Lagrangian. Specifically, this paper introduces auxiliary variables $v$ and $w$ and then reformulates model (10) as the following constrained optimization problem.
$$\arg\min_{u,v,w} \left\{ \frac{\lambda(f)}{2}\|Au-f\|_2^2 + \|v\|_1 + \kappa\|w\|_1 \right\} \quad \text{s.t.}\ v = \nabla u,\ w = u \tag{11}$$
To simplify the process of solving model (11), this paper introduces two dual variables (Lagrange multipliers) y = ( y 1 , y 2 ) T and p = ( p 1 , p 2 ) T . The problem is then reformulated as.
$$\min\max_{u,v,w,y,p} L(u,v,w,y,p) = \frac{\lambda(f)}{2}\|Au-f\|_2^2 + \|v\|_1 + \langle y, v - \nabla u\rangle + \frac{\tau_1}{2}\|v - \nabla u\|_2^2 + \kappa\|w\|_1 + \langle p, w - u\rangle + \frac{\tau_2}{2}\|w - u\|_2^2, \tag{12}$$
where $L(u,v,w,y,p)$ denotes the augmented Lagrangian, and $\tau_1$ and $\tau_2$ denote the penalty parameters. Further rewriting of problem (12) yields
$$\min\max_{u,v,w,b_1,b_2} L(u,v,w,b_1,b_2) = \frac{\lambda(f)}{2}\|Au-f\|_2^2 + \|v\|_1 + \frac{\tau_1}{2}\|v - \nabla u - b_1^k\|_2^2 + \kappa\|w\|_1 + \frac{\tau_2}{2}\|w - u - b_2^k\|_2^2, \tag{13}$$
where $b_1 = y/\tau_1$ and $b_2 = p/\tau_2$. The variables in problem (13) are difficult to solve for simultaneously because they are all coupled together. If SBIM is used, the multiple variables of the problem can be decoupled into corresponding subproblems. One can then fix the other variables and solve the subproblem for each variable in turn to obtain the optimal solution to the original problem, as shown in Algorithm 1.
Algorithm 1: SBIM to solve problem (13).
Input:
(1)
Set parameters κ , τ 1 and τ 2 ;
(2)
initialization: original values of u 0 , v 0 , w 0 , b 1 0 , b 2 0 ;
(3)
Iterate (14a)–(14e) below until the stopping criterion is met;
$$\begin{aligned}
u^{k+1} &:= \arg\min_u L(u, v^k, w^k, b_1^k, b_2^k) &\text{(14a)}\\
v^{k+1} &:= \arg\min_v L(u^k, v, w^k, b_1^k, b_2^k) &\text{(14b)}\\
w^{k+1} &:= \arg\min_w L(u^k, v^k, w, b_1^k, b_2^k) &\text{(14c)}\\
b_1^{k+1} &:= \arg\min_{b_1} L(u^k, v^k, w^k, b_1, b_2^k) &\text{(14d)}\\
b_2^{k+1} &:= \arg\min_{b_2} L(u^k, v^k, w^k, b_1^k, b_2) &\text{(14e)}
\end{aligned}$$
Output: $u := u^{k+1}$ as the restored image.
The computational efficiency of Algorithm 1 depends on solving the individual subproblems with high accuracy. The solutions of subproblems (14a)–(14e) are derived as follows.

3.2.1. Solution of Related Sub-Problems

(1)
Subproblem (14a). This subproblem is a smooth convex optimization problem and can be expressed as.
$$u^{k+1} = \arg\min_u \frac{\lambda(f)}{2}\|Au - f\|_2^2 + \frac{\tau_1}{2}\|v^k - \nabla u - b_1^k\|_2^2 + \frac{\tau_2}{2}\|w^k - u - b_2^k\|_2^2 \tag{15}$$
The Euler–Lagrange equations for this problem can be obtained by using the variational method.
$$\left(\lambda(f)A^TA + \tau_1\nabla^T\nabla + \tau_2 I\right)u^{k+1} = \lambda(f)A^Tf + \tau_1\nabla^Tv^k - \tau_1\nabla^Tb_1^k + \tau_2 w^k - \tau_2 b_2^k \tag{16}$$
For different boundary conditions, the solution of the linear Equation (16) calls for different numerical methods. The Laplace operator $\Delta$ is negative semi-definite under the zero Neumann or zero Dirichlet boundary condition; in this case, the preconditioned conjugate gradient (PCG) method can be used as the solver. In this paper, the boundary conditions are assumed to be periodic, so problem (16) can be solved using the fast Fourier transform.
$$u^{k+1} = \mathcal{F}^{-1}\left(\frac{\mathcal{F}\left(\lambda(f)A^Tf + \tau_1\nabla^Tv^k - \tau_1\nabla^Tb_1^k + \tau_2 w^k - \tau_2 b_2^k\right)}{\mathcal{F}\left(\lambda(f)A^TA + \tau_1\nabla^T\nabla + \tau_2 I\right)}\right) \tag{17}$$
Here, $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ denote the fast Fourier transform and its inverse.
(2)
Subproblem (14b). This subproblem can be expressed as.
$$v^{k+1} = \arg\min_v \|v\|_1 + \frac{\tau_1}{2}\|v - \nabla u^k - b_1^k\|_2^2 \tag{18}$$
Problem (18) is a convex optimization problem and, according to Theorem 1, its solution can be obtained by a thresholding operator.
$$v^{k+1} = \mathrm{shrink}\left(\nabla u^k + b_1^k, \frac{1}{\tau_1}\right) = \frac{\nabla u^k + b_1^k}{\left|\nabla u^k + b_1^k\right|}\max\left(\left|\nabla u^k + b_1^k\right| - \frac{1}{\tau_1}, 0\right) \tag{19}$$
Theorem 1.
For a convex optimization problem
$$X^{k+1} = \arg\min_X \alpha\|X\|_1 + \frac{\beta}{2}\|X - Y\|_2^2 \tag{20}$$
the solution to this problem is defined on $[0,1]^2$ and can be formulated concretely as
$$X^{k+1} = \mathrm{shrink}\left(Y, \frac{\alpha}{\beta}\right) = \frac{Y}{|Y|}\max\left(|Y| - \frac{\alpha}{\beta}, 0\right) \tag{21}$$
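The shrink operator of Theorem 1, applied pointwise to a vector field as needed for the $v$-subproblem, can be sketched in a few lines of NumPy; the array shapes and the function name are illustrative assumptions.

```python
import numpy as np

def shrink(Y, t, eps=1e-12):
    """Pointwise shrinkage shrink(Y, t) = Y/|Y| * max(|Y| - t, 0) for a
    vector field Y of shape (2, m, n); eps guards the division at |Y| = 0."""
    norm = np.sqrt(np.sum(Y ** 2, axis=0))  # pointwise magnitude |Y|
    return Y * (np.maximum(norm - t, 0.0) / np.maximum(norm, eps))
```

A point with magnitude below the threshold $t$ is set to zero; larger magnitudes are shortened by $t$, which is exactly the closed form in Equation (21).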
(3)
Subproblem (14c). Subproblem (14c) can be expressed as.
$$w^{k+1} = \arg\min_w \kappa\|w\|_1 + \frac{\tau_2}{2}\|w - u^k - b_2^k\|_2^2 \tag{22}$$
Similarly to subproblem (14b), this subproblem is also a convex optimization problem and one of its local solutions can be expressed in the following form.
$$w^{k+1} = \mathrm{shrink}\left(u^k + b_2^k, \frac{\kappa}{\tau_2}\right) = \frac{u^k + b_2^k}{\left|u^k + b_2^k\right|}\max\left(\left|u^k + b_2^k\right| - \frac{\kappa}{\tau_2}, 0\right)$$

3.2.2. Update of the Multiplier

Note that the multipliers $b_1$ and $b_2$ are functions of the dual variables $y$ and $p$, respectively, so iteratively updating the multipliers also iteratively updates $y$ and $p$. Using SBIM to solve model (10) is accompanied by the generation of multipliers, which should be updated as the subproblems are updated. Two multipliers are involved in this paper, and the corresponding iterative update equations are given below.
$$b_1^{k+1} = b_1^k + \nabla u^k - v^k, \qquad b_2^{k+1} = b_2^k + u^k - w^k$$
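Putting the subproblem solutions (17) and (19), the $w$-update, and these multiplier updates together gives the complete iteration. The NumPy sketch below assumes the pure denoising case $A = I$, periodic boundary conditions (so the $u$-step is an exact FFT solve), and illustrative parameter values; it is not the authors' reference implementation.

```python
import numpy as np

def grad(u):
    # forward differences, periodic boundary conditions
    return np.stack([np.roll(u, -1, axis=0) - u, np.roll(u, -1, axis=1) - u])

def div(p):
    # divergence: negative adjoint of grad (backward differences, periodic)
    px, py = p
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def shrink_iso(Y, t, eps=1e-12):
    # isotropic shrinkage of a vector field Y of shape (2, m, n), Eq. (19)
    norm = np.sqrt(np.sum(Y ** 2, axis=0))
    return Y * (np.maximum(norm - t, 0.0) / np.maximum(norm, eps))

def shrink_soft(x, t):
    # scalar soft-thresholding for the w-subproblem
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_l1_denoise(f, lam, kappa, tau1=2.0, tau2=2.0, n_iter=100):
    """Split Bregman for argmin_u lam/2 ||u-f||^2 + ||grad u||_1 + kappa ||u||_1,
    i.e. model (10) with A = I."""
    m, n = f.shape
    u, w = f.copy(), np.zeros((m, n))
    v = np.zeros((2, m, n))
    b1, b2 = np.zeros((2, m, n)), np.zeros((m, n))
    # Fourier symbol of lam*I + tau1*grad^T grad + tau2*I (periodic BC):
    # grad^T grad is the 5-point negative Laplacian, diagonal in Fourier space
    lap = np.zeros((m, n))
    lap[0, 0] = 4.0
    lap[1, 0] = lap[-1, 0] = lap[0, 1] = lap[0, -1] = -1.0
    denom = lam + tau2 + tau1 * np.real(np.fft.fft2(lap))
    for _ in range(n_iter):
        # u-subproblem: exact FFT solve of the linear system (16)
        rhs = lam * f - tau1 * div(v - b1) + tau2 * (w - b2)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # v- and w-subproblems: closed-form shrinkage
        v = shrink_iso(grad(u) + b1, 1.0 / tau1)
        w = shrink_soft(u + b2, kappa / tau2)
        # multiplier updates
        b1 += grad(u) - v
        b2 += u - w
    return u
```

On a noisy piecewise-constant image, the iteration reduces the error with respect to the clean image while keeping the square's edges sharp.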

4. Convergence Analysis

In this section, we perform a convergence analysis of the proposed algorithm based on Theorem 2 [33].
Theorem 2.
A basic model for non-negative matrix decomposition is to use the least squares loss function to measure the approximation of the matrix, resulting in the following standard non-negative matrix decomposition problem.
$$\min f(X,Y) \equiv \frac{1}{2}\|XY - M\|_2^2 \quad \text{s.t.}\ X \ge 0,\ Y \ge 0 \tag{23}$$
Let $\{Z^k\}_{k=1}^{\infty}$ be the sequence generated by applying SBIM to (23). If $\{Z^k\}_{k=1}^{\infty}$ satisfies the condition $\lim_{k\to\infty}(Z^{k+1} - Z^k) = 0$, then every accumulation point of $\{Z^k\}_{k=1}^{\infty}$ is a KKT point of model (23).
Assume that $U$ and $V$ are auxiliary variables introduced in the process of solving model (23) and that $\Lambda$ and $\Pi$ are Lagrange multipliers. Define the six-tuple $Z \equiv (X, Y, U, V, \Lambda, \Pi)$; then the KKT conditions that model (23) should satisfy are shown below.
$$\begin{aligned}
&(XY - M)Y^T + \Lambda = 0, \qquad X^T(XY - M) + \Pi = 0,\\
&X - U = 0, \qquad Y - V = 0,\\
&\Lambda \le 0 \le U,\ \langle\Lambda, U\rangle = 0, \qquad \Pi \le 0 \le V,\ \langle\Pi, V\rangle = 0
\end{aligned} \tag{24}$$
Based on the splitting operator, the Lagrange function for the problem (12) can be expressed as.
$$\begin{aligned}
L(u^{k+1}, v^{k+1}, w^{k+1}, y^{k+1}, p^{k+1}) ={}& \frac{\lambda(f)}{2}\|Au-f\|_2^2 + \|v\|_1 + \langle y, v - \nabla u\rangle + \frac{\tau_1}{2}\|v - \nabla u - b_1\|_2^2\\
&+ \kappa\|w\|_1 + \langle p, w - u\rangle + \frac{\tau_2}{2}\|w - u - b_2\|_2^2.
\end{aligned} \tag{25}$$
The corresponding KKT conditions are as shown below.
$$\begin{aligned}
&\lambda(f)A^T(Au^* - f) - \nabla^T y^* - p^* = 0,\\
&v^* - \nabla u^* = 0, \qquad w^* - u^* = 0,\\
&0 \in \partial\|v^*\|_1 + y^*, \qquad 0 \in \partial\left(\kappa\|w^*\|_1\right) + p^*
\end{aligned} \tag{26}$$
Proof .
First, let $x^k = (u^k, v^k, w^k, b_1^k, b_2^k)$ be the iterates in Algorithm 1 and $\tilde{x}^k = (u^k, v^k, w^k, \tau_1 b_1^k, \tau_2 b_2^k)$. The subproblem (15) for $u$ can be obtained by applying SBIM to problem (12), which in turn leads to the optimality condition (16). At this point, Equation (27) holds.
$$\begin{aligned}
\lambda(f)A^TA(u^{k+1} - u^k) &+ \tau_1\nabla^T\nabla(u^{k+1} - u^k) + \tau_2(u^{k+1} - u^k)\\
&= \lambda(f)A^Tf + \tau_1\nabla^Tv^k - \tau_1\nabla^Tb_1^k + \tau_2 w^k - \tau_2 b_2^k - \lambda(f)A^TAu^k - \tau_1\nabla^T\nabla u^k - \tau_2 u^k\\
v^{k+1} - v^k &= \mathrm{shrink}\left(\nabla u^k + b_1^k, \frac{1}{\tau_1}\right) - v^k\\
w^{k+1} - w^k &= \mathrm{shrink}\left(u^k + b_2^k, \frac{\kappa}{\tau_2}\right) - w^k\\
b_1^{k+1} - b_1^k &= \nabla u^k - v^k\\
b_2^{k+1} - b_2^k &= u^k - w^k
\end{aligned} \tag{27}$$
By the assumption $\lim_{k\to\infty}(x^k - x^{k+1}) = 0$ in Theorem 2, the left-hand side of Equation (27) tends to 0 as $k \to \infty$ and, consequently, the right-hand side of Equation (27) also tends to 0. Therefore, as $k \to \infty$, all of the following terms tend to 0.
$$\begin{aligned}
&\left(\lambda(f)A^TAu^k + \tau_1\nabla^T\nabla u^k + \tau_2 u^k - \left(\lambda(f)A^Tf + \tau_1\nabla^Tv^k - \tau_1\nabla^Tb_1^k + \tau_2 w^k - \tau_2 b_2^k\right)\right) \to 0\\
&\left(\mathrm{shrink}\left(\nabla u^k + b_1^k, \frac{1}{\tau_1}\right) - v^k\right) \to 0\\
&\left(\mathrm{shrink}\left(u^k + b_2^k, \frac{\kappa}{\tau_2}\right) - w^k\right) \to 0\\
&\left(\nabla u^k - v^k\right) \to 0, \qquad \left(u^k - w^k\right) \to 0
\end{aligned} \tag{28}$$
The following system of equations is easily obtained by analyzing the expressions (27) and (28).
$$\begin{aligned}
\lambda(f)A^TAu^* + \tau_1\nabla^T\nabla u^* + \tau_2 u^* &= \lambda(f)A^Tf + \tau_1\nabla^Tv^* - \tau_1\nabla^Tb_1^* + \tau_2 w^* - \tau_2 b_2^*\\
v^* &= \mathrm{shrink}\left(\nabla u^* + b_1^*, \frac{1}{\tau_1}\right)\\
w^* &= \mathrm{shrink}\left(u^* + b_2^*, \frac{\kappa}{\tau_2}\right)
\end{aligned} \tag{29}$$
In summary, every accumulation point of $\{\tilde{x}^k\}$ satisfies the KKT condition. However, the KKT condition is only a necessary optimality condition for the non-convex optimization problem (11). Therefore, there is no guarantee that an accumulation point is a global optimum. □

5. Numerical Experiments and Analysis

5.1. Image Dataset and Experimental Environment Setup

The experimental environment for this paper is a Windows 10 system with 8 GB of memory and MATLAB R2018b. The denoising performance of the proposed model is evaluated using natural and artificial images with different resolutions. The images used are all grey-scale images, and the natural images have multi-scale edges with rich texture structure, as shown in Figure 1.

5.2. Image Quality Assessment Indicators

Common metrics used to evaluate the quality of image recovery are the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), feature similarity, multi-scale structural similarity, and perceptual similarity. In this paper, SSIM and PSNR are used as evaluation metrics. The higher the PSNR value, the better the image recovery. The evaluation of SSIM relies on the human visual system (HVS), and SSIM $\in [0,1]$; the closer its value is to 1, the better the structure retention of the image. The relevant definitions are as follows.
$$\mathrm{PSNR}(u^*, u) = 10\log_{10}\left(\frac{255^2 MN}{\|u^* - u\|_2^2}\right) \tag{30}$$
$$\mathrm{SSIM}(u^*, u) = \frac{2\mu_{u^*}\mu_u + C_1}{\mu_{u^*}^2 + \mu_u^2 + C_1} \cdot \frac{2\sigma_{u^*u} + C_2}{\sigma_{u^*}^2 + \sigma_u^2 + C_2} \tag{31}$$
Here, $u^*$ and $u$ denote the recovered image and the original image, respectively, $\mu$ denotes the mean, $\sigma^2$ the variance ($\sigma_{u^*u}$ the covariance), $C_1$ and $C_2$ denote constants, and $M$ and $N$ denote the width and height of the image, respectively.
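Since the mean squared error is $\|u^* - u\|_2^2 / (MN)$, the PSNR in Equation (30) reduces to $10\log_{10}(255^2/\mathrm{MSE})$; a minimal NumPy version (assuming an 8-bit intensity range) is:

```python
import numpy as np

def psnr(u_star, u):
    """Peak signal-to-noise ratio per Equation (30) for images in [0, 255]."""
    mse = np.mean((np.asarray(u_star, float) - np.asarray(u, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```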

5.3. Numerical Experiments

Random Gaussian noise with mean 0 and variance $\sigma = 10, 20$ was added to all test images. The noisy images were first processed with a mean filter, and the result was used as the initial value for the algorithm. Next, the denoising performance of the proposed algorithm was tested. The proposed algorithm was compared against related algorithms, including LATV [34], TVAL3 [35], NGS [36], TVAL3 [37], and TVBH [11], with PSNR and SSIM computed by the same formulas for all methods. Running the proposed algorithm on natural images, the PSNR and SSIM values shown below can be obtained.
Adding random Gaussian noise with variance $\sigma = 10, 20$ to the natural images, the PSNR and SSIM values of this algorithm and the comparison experiments are shown in Table 1, and the denoising effects are shown in Figure 2 and Figure 3. Adding Gaussian noise with variance $\sigma = 20$ to the artificial image and using the TVBH model at a parameter ratio of 1/2, the denoising performance of the proposed algorithm and the TVBH model is shown in Figure 4.
In Table 1, when $\sigma = 10$, the PSNR and SSIM values of each algorithm are high for Lena, Barbara, Boats, and Baboon, but the proposed algorithm attains higher evaluation-metric values than the other algorithms in most cases. When $\sigma = 20$, the PSNR and SSIM values of every algorithm decrease, but the proposed algorithm's PSNR and SSIM are still slightly higher than those of the other models. Therefore, the denoising performance of the proposed algorithm is improved.
Figure 2a shows the noisy image with $\sigma = 10$, Figure 2b–e show the denoising effects of the LATV, T-ASTV, T-ASTV, and TVAL3 algorithms, respectively, and Figure 2f shows the denoising effect of the proposed algorithm. The red box encloses an enlargement of the right-eye region. Overall, the denoising effect of each model is acceptable, but the clarity of Figure 2e is slightly higher than that of Figure 2b–d. Figure 2f also has higher clarity: in the middle of the hat, the texture is more obvious. Looking at the red boxed part, Figure 2b–e show a more obvious step effect, while Figure 2f also shows a step effect, but to a lesser extent. Thus, the proposed algorithm alleviates the step artifact effect of the model.
Figure 3 presents the denoising effect of each model on Baboon for variance $\sigma = 20$. Figure 3a shows the noisy image, Figure 3b–e show the denoising effects of the LATV, T-ASTV, T-ASTV, and TVAL3 algorithms, respectively, and Figure 3f shows the denoising effect of the proposed algorithm. It can be seen that Figure 3b,e remove some noise, but much noise remains in the image. In contrast, Figure 3c,d remove more noise, but blurring appears. A closer look shows that Figure 3f removes more noise while the image is not blurred and shows clearer texture detail.
In Figure 4, Figure 4a shows the noisy image with $\sigma = 20$, Figure 4b shows the denoised image of the TVBH model, and Figure 4c shows the denoised image of the proposed algorithm. Looking at the red boxed parts, the whiteness of the triangles, circles, and squares in Figure 4c is more obvious than in Figure 4a,b, indicating that the proposed algorithm removes more Gaussian noise. Therefore, the proposed model filters noise better.

6. Conclusions

In this paper, we propose an image denoising algorithm with multi-scale parameter estimation, taking advantage of the facts that the TV regularization term can remove noise while preserving edges and that the L1 norm regularization term promotes the sparsity of the model solution and strengthens the suppression of step artifact effects. In the algorithm, we only need to estimate the value of one parameter, $\kappa$, which effectively reduces the complexity of parameter estimation. Furthermore, based on the PSNR and SSIM values of the proposed algorithm and the comparison algorithms, the proposed algorithm has better denoising performance and enhanced robustness to noise. Moreover, the denoising effect plots show that the step artifact effect is suppressed and the noise filtering effect is enhanced. Overall, the proposed algorithm has better denoising performance and outperforms the comparison models.
In the experiments, it can be found that, although the model proposed in this paper achieves good results, some problems remain, such as a small amount of residual noise in the image or incomplete preservation of image details. In future research, on the one hand, new concepts related to fuzzy fractional calculus [38] and Hermite–Hadamard inequalities [39,40,41] could replace the concepts in existing models to improve the image denoising performance. On the other hand, an edge detection function could be applied to detect the edges of the image and thus better preserve its texture details [42].

Author Contributions

Conceptualization, M.L.; methodology, M.L.; software, M.L. and S.B.; validation, M.L.; formal analysis, M.L.; investigation, M.L.; resources, M.L.; data curation, M.L., S.B., G.C. and X.Z.; writing—original draft preparation, M.L.; writing—review and editing, M.L.; visualization, M.L.; supervision, S.B., G.C. and X.Z.; project administration, M.L.; funding acquisition, G.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (11461037) and the High Quality Postgraduate Courses of Yunnan Province (109920210027).

Data Availability Statement

Experimental data for this study can be obtained from GitHub or by contacting the authors.

Acknowledgments

We thank the editor and anonymous reviewers for their valuable comments and suggestions on our manuscript. National Natural Science Foundation of China (11461037) and High Quality Postgraduate Courses of Yunnan Province (109920210027) are gratefully acknowledged.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Al-Shamasneh, A.R.; Ibrahim, R.W. Image Denoising Based on Quantum Calculus of Local Fractional Entropy. Symmetry 2023, 15, 396. [Google Scholar] [CrossRef]
  2. Zhong, T.; Wang, W.; Lu, S.; Dong, X.; Yang, B. RMCHN: A Residual Modular Cascaded Heterogeneous Network for Noise Suppression in DAS-VSP Records. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5. [Google Scholar] [CrossRef]
  3. Sun, L.; Hou, J.; Xing, C.; Fang, Z. A Robust Hammerstein-Wiener Model Identification Method for Highly Nonlinear Systems. Processes 2022, 10, 2664. [Google Scholar] [CrossRef]
  4. Xu, S.; Dai, H.; Feng, L.; Chen, H.; Chai, Y.; Zheng, W.X. Fault Estimation for Switched Interconnected Nonlinear Systems with External Disturbances via Variable Weighted Iterative Learning. IEEE Trans. Circuits Syst. II Express Briefs 2023. [Google Scholar] [CrossRef]
  5. Aubert, G.; Kornprobst, P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations, 2nd ed.; Springer e-books; Springer: New York, NY, USA, 2006; ISBN 978-0-387-44588-5. [Google Scholar]
  6. Scherzer, O. Handbook of Mathematical Methods in Imaging; Springer Science & Business Media: New York, NY, USA, 2010; ISBN 0-387-92919-3. [Google Scholar]
  7. Cai, N.; Zhou, Y.; Wang, S.; Ling, B.W.-K.; Weng, S. Image Denoising via Patch-Based Adaptive Gaussian Mixture Prior Method. Signal Image Video Process. 2016, 10, 993–999. [Google Scholar] [CrossRef]
  8. Liu, H.; Li, L.; Lu, J.; Tan, S. Group Sparsity Mixture Model and Its Application on Image Denoising. IEEE Trans. Image Process. 2022, 31, 5677–5690. [Google Scholar] [CrossRef]
  9. Bhujle, H.V.; Vadavadagi, B.H. NLM Based Magnetic Resonance Image Denoising—A Review. Biomed. Signal Process. Control 2019, 47, 252–261. [Google Scholar] [CrossRef]
  10. Phan, T.D.K. A Weighted Total Variation Based Image Denoising Model Using Mean Curvature. Optik 2020, 217, 164940. [Google Scholar] [CrossRef]
  11. Pang, Z.-F.; Zhang, H.-L.; Luo, S.; Zeng, T. Image Denoising Based on the Adaptive Weighted TV Regularization. Signal Process. 2020, 167, 107325. [Google Scholar] [CrossRef]
  12. Chen, Y.; Zhang, H.; Liu, L.; Tao, J.; Zhang, Q.; Yang, K.; Xia, R.; Xie, J. Research on Image Inpainting Algorithm of Improved Total Variation Minimization Method. J. Ambient Intell. Humaniz. Comput. 2021, 1–10. [Google Scholar] [CrossRef]
  13. Pang, Z.-F.; Zhou, Y.-M.; Wu, T.; Li, D.-J. Image Denoising via a New Anisotropic Total-Variation-Based Model. Signal Process. Image Commun. 2019, 74, 140–152. [Google Scholar] [CrossRef]
  14. Dong, F.; Ma, Q. Single Image Blind Deblurring Based on the Fractional-Order Differential. Comput. Math. Appl. 2019, 78, 1960–1977. [Google Scholar] [CrossRef]
  15. Chowdhury, M.R.; Qin, J.; Lou, Y. Non-Blind and Blind Deconvolution Under Poisson Noise Using Fractional-Order Total Variation. J. Math. Imaging Vis. 2020, 62, 1238–1255. [Google Scholar] [CrossRef]
  16. Jaouen, V.; Gonzalez, P.; Stute, S.; Guilloteau, D.; Chalon, S.; Buvat, I.; Tauber, C. Variational Segmentation of Vector-Valued Images with Gradient Vector Flow. IEEE Trans. Image Process. 2014, 23, 4773–4785. [Google Scholar] [CrossRef] [Green Version]
  17. Liu, Y.; Zhang, Z.; Liu, X.; Wang, L.; Xia, X. Efficient Image Segmentation Based on Deep Learning for Mineral Image Classification. Adv. Powder Technol. 2021, 32, 3885–3903. [Google Scholar] [CrossRef]
  18. Zhou, G.; Yang, F.; Xiao, J. Study on Pixel Entanglement Theory for Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5409518. [Google Scholar] [CrossRef]
  19. Zhao, L.; Wang, L. A New Lightweight Network Based on MobileNetV3. KSII Trans. Internet Inf. Syst. 2022, 16, 1–15. [Google Scholar] [CrossRef]
  20. Wang, W.; Xia, X.-G.; Zhang, S.; He, C.; Chen, L. Vector Total Fractional-Order Variation and Its Applications for Color Image Denoising and Decomposition. Appl. Math. Model. 2019, 72, 155–175. [Google Scholar] [CrossRef]
  21. Kazemi Golbaghi, F.; Eslahchi, M.R.; Rezghi, M. Image Denoising by a Novel Variable-order Total Fractional Variation Model. Math. Methods Appl. Sci. 2021, 44, 7250–7261. [Google Scholar] [CrossRef]
  22. Lian, W.; Liu, X. Non-Convex Fractional-Order TV Model for Impulse Noise Removal. J. Comput. Appl. Math. 2023, 417, 114615. [Google Scholar] [CrossRef]
  23. Duan, J.; Qiu, Z.; Lu, W.; Wang, G.; Pan, Z.; Bai, L. An Edge-Weighted Second Order Variational Model for Image Decomposition. Digit. Signal Process. 2016, 49, 162–181. [Google Scholar] [CrossRef]
  24. Fang, Y.; Zeng, T. Learning Deep Edge Prior for Image Denoising. Comput. Vis. Image Underst. 2020, 200, 103044. [Google Scholar] [CrossRef]
  25. Phan, T.D.K. A High-Order Convex Variational Model for Denoising MRI Data Corrupted by Rician Noise. In Proceedings of the 2022 IEEE Ninth International Conference on Communications and Electronics (ICCE), Nha Trang, Vietnam, 27–29 July 2022; pp. 283–288. [Google Scholar]
  26. Thanh, D.N.H.; Prasath, V.B.S.; Hieu, L.M.; Dvoenko, S. An Adaptive Method for Image Restoration Based on High-Order Total Variation and Inverse Gradient. Signal Image Video Process. 2020, 14, 1189–1197. [Google Scholar] [CrossRef]
  27. Chan, T.; Marquina, A.; Mulet, P. High-Order Total Variation-Based Image Restoration. SIAM J. Sci. Comput. 2000, 22, 503–516. [Google Scholar] [CrossRef]
  28. Scherzer, O. Denoising with Higher Order Derivatives of Bounded Variation and an Application to Parameter Estimation. Computing 1998, 60, 1–27. [Google Scholar] [CrossRef]
  29. Lysaker, M.; Lundervold, A.; Tai, X.-C. Noise Removal Using Fourth-Order Partial Differential Equation with Applications to Medical Magnetic Resonance Images in Space and Time. IEEE Trans. Image Process. 2003, 12, 1579–1590. [Google Scholar] [CrossRef]
  30. Lefkimmiatis, S.; Ward, J.P.; Unser, M. Hessian Schatten-Norm Regularization for Linear Inverse Problems. IEEE Trans. Image Process. 2013, 22, 1873–1888. [Google Scholar] [CrossRef] [Green Version]
  31. Surya Prasath, V.B.; Vorotnikov, D.; Pelapur, R.; Jose, S.; Seetharaman, G.; Palaniappan, K. Multiscale Tikhonov-Total Variation Image Restoration Using Spatially Varying Edge Coherence Exponent. IEEE Trans. Image Process. 2015, 24, 5220–5235. [Google Scholar] [CrossRef] [Green Version]
  32. Prasath, V.B.S. Quantum Noise Removal in X-Ray Images with Adaptive Total Variation Regularization. Informatica 2017, 28, 505–515. [Google Scholar] [CrossRef] [Green Version]
  33. Zhang, Y. An Alternating Direction Algorithm for Nonnegative Matrix Factorization; Department of Computational and Applied Mathematics Rice University: Houston, TX, USA, 2010. [Google Scholar]
  34. Grasmair, M. Locally Adaptive Total Variation Regularization. In Scale Space and Variational Methods in Computer Vision; Tai, X.-C., Mørken, K., Lysaker, M., Lie, K.-A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5567, pp. 331–342. ISBN 978-3-642-02255-5. [Google Scholar]
  35. Li, C.; Yin, W.; Zhang, Y. User’s Guide for TVAL3: TV Minimization by Augmented Lagrangian and Alternating Direction Algorithms. CAAM Rep. 2009, 20, 4. [Google Scholar]
  36. Liu, H.; Xiong, R.; Zhang, X.; Zhang, Y.; Ma, S.; Gao, W. Nonlocal Gradient Sparsity Regularization for Image Restoration. IEEE Trans. Circuits Syst. Video Technol. 2017, 27, 1909–1921. [Google Scholar] [CrossRef]
  37. Kumar, A.; Omair Ahmad, M.; Swamy, M.N.S. Tchebichef and Adaptive Steerable-Based Total Variation Model for Image Denoising. IEEE Trans. Image Process. 2019, 28, 2921–2935. [Google Scholar] [CrossRef]
  38. Khan, M.B.; Santos-García, G.; Noor, M.A.; Soliman, M.S. Some New Concepts Related to Fuzzy Fractional Calculus for up and down Convex Fuzzy-Number Valued Functions and Inequalities. Chaos Solitons Fractals 2022, 164, 112692. [Google Scholar] [CrossRef]
  39. Khan, M.B.; Santos-García, G.; Noor, M.A.; Soliman, M.S. New Hermite–Hadamard Inequalities for Convex Fuzzy-Number-Valued Mappings via Fuzzy Riemann Integrals. Mathematics 2022, 10, 3251. [Google Scholar] [CrossRef]
  40. Macías-Díaz, J.; Khan, M.; Noor, M.; Allah, A.; Alghamdi, S. Hermite-Hadamard Inequalities for Generalized Convex Functions in Interval-Valued Calculus. AIMS Math. 2022, 7, 4266–4292. [Google Scholar] [CrossRef]
  41. Khan, M.B.; Treanțǎ, S.; Soliman, M.S. Generalized Preinvex Interval-Valued Functions and Related Hermite–Hadamard Type Inequalities. Symmetry 2022, 14, 1901. [Google Scholar] [CrossRef]
  42. Berinde, V.; Ţicală, C. Enhancing Ant-Based Algorithms for Medical Image Edge Detection by Admissible Perturbations of Demicontractive Mappings. Symmetry 2021, 13, 885. [Google Scholar] [CrossRef]
Figure 1. Experimental images.
Figure 2. The denoising effect of each algorithm on Lena for σ = 10, where (a) shows the noisy image with σ = 10, (b–e) show the denoising results of the LATV, T-ASTV, NGS, and TVAL3 algorithms, respectively, and (f) shows the result of the algorithm in this paper.
Figure 3. The denoising effect of each algorithm on Lena for σ = 20, where (a) shows the noisy image with σ = 20, (b–e) show the denoising results of the LATV, T-ASTV, NGS, and TVAL3 algorithms, respectively, and (f) shows the result of the algorithm in this paper.
Figure 4. The denoising effect of our algorithm and the TVBH algorithm on an artificial image, where (a) shows the noisy image with σ = 20, (b) shows the denoising result of the TVBH model, and (c) shows the result of the algorithm in this paper.
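The noisy inputs in the figures above are produced by adding zero-mean Gaussian noise with standard deviation σ = 10 or σ = 20 to the clean image. As a rough illustration only (not the authors' exact test harness), such a noisy test image can be generated as follows; the synthetic piecewise-constant image stands in for the paper's artificial test image:

```python
import numpy as np

# Synthetic piecewise-constant "artificial" test image with values in [0, 255];
# the paper's experiments also use standard test images such as Lena and Barbara.
clean = np.full((64, 64), 100.0)
clean[16:48, 16:48] = 200.0

sigma = 10.0                      # noise level, as in Figure 2 (sigma = 20 in Figure 3)
rng = np.random.default_rng(0)    # fixed seed so the experiment is reproducible
noisy = clean + rng.normal(0.0, sigma, size=clean.shape)
noisy = np.clip(noisy, 0.0, 255.0)  # keep pixel values in the valid intensity range
```

With the clean intensities well inside [0, 255], clipping almost never triggers at σ = 10, so the empirical noise level of `noisy - clean` stays close to σ.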
Table 1. PSNR and SSIM of different algorithms with different noise levels. Each cell gives PSNR/SSIM.

| σ  | Method | Lena           | Barbara        | Boats          | Baboon         |
|----|--------|----------------|----------------|----------------|----------------|
| 10 | LATV   | 32.2371/0.8834 | 29.8621/0.8824 | 31.1085/0.8931 | 27.5627/0.8906 |
| 10 | T-ASTV | 32.4306/0.8901 | 29.5901/0.8913 | 31.1069/0.8947 | 27.2326/0.8952 |
| 10 | NGS    | 32.4195/0.8983 | 30.2378/0.8976 | 31.0814/0.8943 | 28.0918/0.8917 |
| 10 | TVAL3  | 32.3914/0.8843 | 30.1947/0.8862 | 31.1716/0.9013 | 28.0125/0.8922 |
| 10 | Ours   | 32.4321/0.8987 | 30.4974/0.8932 | 31.8834/0.9029 | 28.3017/0.8923 |
| 20 | LATV   | 28.1323/0.8906 | 25.0115/0.9029 | 27.3971/0.9012 | 25.1216/0.9023 |
| 20 | T-ASTV | 28.4741/0.8878 | 25.5346/0.8989 | 27.4461/0.9063 | 24.9958/0.9046 |
| 20 | NGS    | 28.4552/0.8924 | 25.8304/0.8968 | 27.3014/0.9103 | 24.8649/0.9014 |
| 20 | TVAL3  | 28.4545/0.8929 | 25.8467/0.8994 | 27.4013/0.9046 | 25.1246/0.9127 |
| 20 | Ours   | 28.4938/0.9050 | 26.8427/0.9009 | 27.8708/0.9068 | 25.2037/0.9091 |

Note: Bold font indicates the optimal value.
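The evaluation criteria in Table 1 can be computed as sketched below. The PSNR definition is standard; the SSIM shown here is a simplified single-window (global) variant for illustration, whereas published SSIM scores are normally computed with local sliding windows (e.g., `skimage.metrics.structural_similarity`). The helper names and the global simplification are ours, not the paper's:

```python
import numpy as np

def psnr(clean, restored, data_range=255.0):
    """Peak signal-to-noise ratio (in dB) between a reference and a restored image."""
    mse = np.mean((np.asarray(clean, dtype=np.float64)
                   - np.asarray(restored, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM; library versions average over local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, a restored image that is uniformly off by 10 gray levels from the clean one has MSE = 100 and hence a PSNR of about 28.13 dB, close to the σ = 20 entries in the table; an identical pair of images yields SSIM = 1.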
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Li, M.; Cai, G.; Bi, S.; Zhang, X. Improved TV Image Denoising over Inverse Gradient. Symmetry 2023, 15, 678. https://doi.org/10.3390/sym15030678


