Article

Image Restoration Using a Fixed-Point Method for a TVL2 Regularization Problem

1 Department of Mathematics, Chungbuk National University, Cheongju 28644, Korea
2 Department of Mathematics, College of Natural Sciences, Chungbuk National University, Cheongju 28644, Korea
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(1), 1; https://doi.org/10.3390/a13010001
Submission received: 16 November 2019 / Revised: 2 December 2019 / Accepted: 16 December 2019 / Published: 18 December 2019

Abstract

In this paper, we first propose a new TVL2 regularization model for image restoration, and then we propose two iterative methods, a fixed-point method and a fixed-point-like method, using CGLS (the Conjugate Gradient Least Squares method) for solving the new TVL2 problem. We also provide a convergence analysis for the fixed-point method. Lastly, numerical experiments on several test problems are provided to evaluate the effectiveness of the two proposed iterative methods. Numerical results show that the new TVL2 model is preferred over an existing TVL2 model, and that the proposed fixed-point-like method is well suited to the new TVL2 model.

1. Introduction

Image restoration is a fundamental problem in image processing: recovering a true image from a blurry and noisy one. The problem usually reduces to finding the optimal solution $u \in \mathbb{R}^m$ of the following model:

$$f = Au + \varepsilon, \qquad (1)$$

where $A \in \mathbb{R}^{m \times m}$ is a blurring operator, $\varepsilon \in \mathbb{R}^m$ is unknown white Gaussian noise with variance $\sigma$, and $f$ and $u$ denote the observed degraded image and the original image, respectively. Our purpose is to restore the original image $u$ from the blurred and noisy image $f$ as accurately as possible.
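As an illustration, the degradation model (1) can be simulated directly. The sketch below builds a small one-dimensional instance of $f = Au + \varepsilon$; the signal size, the circulant moving-average blur, and the noise level are illustrative choices only, not the blurring operators used in the experiments of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small 1-D stand-in for the model f = Au + eps:
# a 64-pixel "image" and a circulant 3-point moving-average blur.
m = 64
u_true = rng.random(m)

A = np.zeros((m, m))
for i in range(m):
    for j in (-1, 0, 1):
        A[i, (i + j) % m] += 1.0 / 3.0   # periodic moving average

sigma = 0.01                              # illustrative noise level
eps = rng.normal(0.0, sigma, size=m)      # white Gaussian noise

f = A @ u_true + eps                      # observed degraded image
```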
Over the past few decades, optimization techniques and various variational models [1,2,3] have been widely studied and applied in many image processing fields. The well-known ROF (Rudin–Osher–Fatemi) total variation model [3] produces the deblurred image via the following minimization problem:

$$\min_u \ \frac{1}{2}\|Au - f\|_2^2 + \beta\, TV(u), \qquad (2)$$

where $TV(u)$ is the total variation (TV) of $u$ and $\beta > 0$ is a regularization parameter.
Many computational methods for solving problem (2) have been proposed in recent years. For example, the time-marching PDE method, the subgradient descent method, the Newton-like method, the second-order cone programming method, the lagged diffusivity fixed-point method, and the split Bregman method have been proposed by many researchers (see [1,3,4,5,6,7,8,9,10,11,12,13] for details).
Recently, Chen et al. [14] proposed a fixed-point method for solving the following constrained TVL2 deblurring problem:

$$\min_{u \in C} \ \frac{1}{2}\|Au - f\|_2^2 + \frac{\alpha}{2}\|u\|_2^2 + \beta\, TV(u), \qquad (3)$$

where $\alpha$ and $\beta$ are positive constants and $C$ is a closed convex subset of $\mathbb{R}^{n^2}$. In this paper, we consider only the unconstrained case $C = \mathbb{R}^{n^2}$ of the TVL2 problem (3). This approach motivates us to propose a new TVL2 deblurring model

$$\min_{u \in \mathbb{R}^{n^2}} \ \frac{1}{2}\|Au - f\|_2^2 + \alpha\|u\|_2 + \beta\, TV(u), \qquad (4)$$

where $A \in \mathbb{R}^{n^2 \times n^2}$ is a blurring matrix, $u \in \mathbb{R}^{n^2}$ is an original image, $f \in \mathbb{R}^{n^2}$ is a degraded image, $\alpha$ and $\beta$ are positive regularization parameters, and $TV(u)$ stands for the isotropic TV of $u$. Note that the new TVL2 model (4) uses the non-smooth term $\|u\|_2$ instead of the smooth term $\|u\|_2^2$ in order to better preserve the edges and corners of the restored images. The isotropic TV of $u$ is defined by
$$TV(u) = \sum_{i=1}^{n^2} \sqrt{|(\nabla u)_i^x|^2 + |(\nabla u)_i^y|^2} = \sum_{i=1}^{n^2} \left\|\begin{pmatrix} (\nabla u)_i^x \\ (\nabla u)_i^y \end{pmatrix}\right\|_2,$$

where the discrete gradient operator $\nabla : \mathbb{R}^{n^2} \to \mathbb{R}^{2n^2}$ is defined by

$$(\nabla u)_i = \big((\nabla u)_i^x,\, (\nabla u)_i^y\big), \quad i = 1, 2, \ldots, n^2,$$

with

$$(\nabla u)_i^x = \begin{cases} 0 & \text{if } i \bmod n = 1, \\ u_i - u_{i-1} & \text{if } i \bmod n \neq 1, \end{cases} \qquad (\nabla u)_i^y = \begin{cases} 0 & \text{if } i \le n, \\ u_i - u_{i-n} & \text{if } i > n. \end{cases}$$
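The componentwise definition above can be coded directly. The following sketch (a hypothetical helper, using a plain Python loop over the 1-based index $i$ for clarity rather than speed) computes the two difference fields and the isotropic TV of a column-major vectorized $n \times n$ image:

```python
import numpy as np

def discrete_gradient(u, n):
    """x- and y-differences of the column-major vectorized n-by-n image u,
    following the paper's 1-based componentwise definition."""
    gx = np.zeros(n * n)
    gy = np.zeros(n * n)
    for i in range(1, n * n + 1):      # 1-based index i, as in the text
        if i % n != 1:                 # not the first pixel of its column
            gx[i - 1] = u[i - 1] - u[i - 2]
        if i > n:                      # not in the first column
            gy[i - 1] = u[i - 1] - u[i - n - 1]
    return gx, gy

def tv_isotropic(u, n):
    """Isotropic TV(u) = sum_i sqrt(|(grad u)_i^x|^2 + |(grad u)_i^y|^2)."""
    gx, gy = discrete_gradient(u, n)
    return float(np.sum(np.sqrt(gx**2 + gy**2)))
```

For instance, a constant image has zero TV, and alternating rows produce unit jumps wherever a difference is taken.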
Notice that the TVL2 problem (3) has a unique solution since its objective function is strictly convex, while the new TVL2 problem (4) may not have a unique solution since its objective function is convex but not strictly convex.
The purpose of this paper is to propose two iterative methods, a fixed-point method and a fixed-point-like method, using CGLS (the Conjugate Gradient Least Squares method [15]) for solving the new TVL2 problem (4). This paper is organized as follows. In Section 2, we introduce some definitions and properties that are used in this paper. In Section 3, we first propose a fixed-point method using CGLS for solving TVL2 problem (4), and then we provide a convergence analysis for the fixed-point method. In Section 4, we propose a fixed-point-like method using CGLS for solving TVL2 problem (4). In Section 5, we present the split Bregman methods for solving TVL2 problems (3) and (4) as baselines for assessing how efficiently the fixed-point and fixed-point-like methods perform. In Section 6, we describe how the numerical experiments on several test problems are carried out in order to evaluate the effectiveness of the two proposed iterative methods; this is done by comparing their performance on TVL2 problem (4) with that of the fixed-point method proposed in [14] for TVL2 problem (3) and the split Bregman methods for TVL2 problems (3) and (4). In Section 7, we provide numerical results for all test problems. Lastly, some conclusions are drawn.

2. Preliminaries

In this section, we briefly review some definitions and important properties that form the foundation for the algorithms proposed in this paper.
Definition 1.
Let $\varphi : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semi-continuous function (see [16]). The proximity operator of $\varphi$ at $x \in \mathbb{R}^n$ is defined by

$$\mathrm{prox}_{\varphi}(x) = \operatorname*{argmin}\left\{ \frac{1}{2}\|u - x\|_2^2 + \varphi(u) : u \in \mathbb{R}^n \right\}.$$
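For the particular function $\varphi(u) = c\|u\|_2$ with $c > 0$, which appears later in this paper with $c = 1/\gamma$, the minimization in Definition 1 has a well-known closed form (block soft-thresholding). A minimal sketch, with an illustrative helper name:

```python
import numpy as np

def prox_scaled_l2(x, c):
    """Proximity operator of phi(u) = c * ||u||_2 (c > 0): shrink x toward
    the origin by c in norm, or map it to 0 if ||x||_2 <= c."""
    nrm = np.linalg.norm(x)
    if nrm <= c:
        return np.zeros_like(x)
    return (1.0 - c / nrm) * x
```

For example, the prox at $x = (3, 4)$ with $c = 1$ shrinks the norm from 5 to 4, giving $(2.4, 3.2)$; optimality can be checked from the relation $x - p = c\, p / \|p\|_2$ for $p \neq 0$.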
Definition 2.
Let $\varphi : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex, and lower semi-continuous function. The subdifferential of $\varphi$ at $x \in \mathbb{R}^n$ is defined by

$$\partial\varphi(x) = \{\, y \in \mathbb{R}^n : \varphi(z) \ge \varphi(x) + \langle y, z - x \rangle \ \ \forall z \in \mathbb{R}^n \,\}.$$
Definition 3.
A nonlinear operator $H : \mathbb{R}^n \to \mathbb{R}^n$ is called non-expansive if, for any $x, y \in \mathbb{R}^n$,

$$\|H(x) - H(y)\|_2 \le \|x - y\|_2.$$
Definition 4.
A nonlinear operator $H : \mathbb{R}^n \to \mathbb{R}^n$ is called firmly non-expansive if, for any $x, y \in \mathbb{R}^n$,

$$\|H(x) - H(y)\|_2^2 \le \langle x - y,\, H(x) - H(y) \rangle.$$
It is easy to show that a firmly non-expansive operator is non-expansive. The following proposition shows a relationship between the proximity operator and the subdifferential of a convex function.
Proposition 1.
If $\varphi$ is a convex function defined on $\mathbb{R}^n$ and $x, y \in \mathbb{R}^n$, then (see [16,17,18])

$$y \in \partial\varphi(x) \iff x = \mathrm{prox}_{\varphi}(x + y).$$
Let $\varphi : \mathbb{R}^{2n^2} \to \mathbb{R}$ be the convex function defined by

$$\varphi(d) = \sum_{i=1}^{n^2} \left\|\begin{pmatrix} d_i \\ d_{n^2+i} \end{pmatrix}\right\|_2 \quad \text{for each } d = (d_i) \in \mathbb{R}^{2n^2}, \qquad (10)$$

and let $B$ be the $2n^2 \times n^2$ matrix representing the discrete gradient operator $\nabla$, namely

$$B = \begin{pmatrix} I_n \otimes D \\ D \otimes I_n \end{pmatrix}, \qquad (11)$$

where $I_n$ is the $n \times n$ identity matrix, $\otimes$ denotes the Kronecker product, and $D$ is the first-order finite difference matrix of order $n$,

$$D = \begin{pmatrix} 0 & 0 & & & \\ -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{pmatrix}.$$

Then, the isotropic TV of $u \in \mathbb{R}^{n^2}$ can be expressed as

$$TV(u) = (\varphi \circ B)(u). \qquad (12)$$
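The Kronecker construction of $B$ can be reproduced in a few lines. The sketch below (dense matrices, fine for small $n$ but not for real image sizes; helper names are ours) builds $B$ and evaluates the TV through the relation $TV(u) = (\varphi \circ B)(u)$:

```python
import numpy as np

def gradient_matrix(n):
    """B = [I_n (x) D ; D (x) I_n], with D the n-by-n forward-difference
    matrix whose first row is zero, so that B u stacks the x- and
    y-differences of the column-major vectorized n-by-n image u."""
    D = np.eye(n) - np.eye(n, k=-1)   # 1 on the diagonal, -1 below it
    D[0, :] = 0.0                     # zero first row (boundary condition)
    I = np.eye(n)
    return np.vstack([np.kron(I, D), np.kron(D, I)])

def tv_via_B(u, n):
    """Isotropic TV via TV(u) = (phi o B)(u), where phi pairs
    component i of Bu with component n^2 + i."""
    d = gradient_matrix(n) @ u
    m2 = n * n
    return float(np.sum(np.sqrt(d[:m2]**2 + d[m2:]**2)))
```

This agrees with the componentwise gradient definition of Section 1 for column-major vectorization.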
We now provide the fixed-point method, called Algorithm 1, for solving TVL2 problem (3), which was proposed by Chen et al. [14].
Algorithm 1 Fixed-point method for TVL2 problem (3)

1: Given: observed image $f$, positive parameters $\alpha, \beta, \lambda$ and $\kappa \in (0,1)$
2: Initialization: $v^0 = 0$ and $u^0 = f$
3: for $k = 0$ to maxit do
4:   Solve $(A^T A + \alpha I)\, u^{k+1} = A^T f - \lambda B^T v^k$ for $u^{k+1}$
5:   $v^{k+1} = \kappa v^k + (1 - \kappa)\,(I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(v^k + B u^{k+1})$
6:   if $\|u^{k+1} - u^k\|_2 / \|u^{k+1}\|_2 < tol$ then
7:     Stop
8:   end if
9: end for
Notice that line 4 of Algorithm 1 is solved using CGLS instead of CG (the Conjugate Gradient method [19,20,21]), since the linear system in line 4 is equivalent to solving the following least squares problem:

$$\min_u \left\| \begin{pmatrix} f \\ -\frac{\lambda}{\sqrt{\alpha}} B^T v^k \end{pmatrix} - \begin{pmatrix} A \\ \sqrt{\alpha}\, I \end{pmatrix} u \right\|_2^2.$$
For all algorithms presented in this paper, maxit denotes the maximum number of iterations, and tol denotes the tolerance value of the stopping criterion.
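Several algorithms in this paper call CGLS on stacked least squares systems like the one above. A minimal textbook CGLS is sketched here: dense NumPy arrays stand in for the structured blurring matrices of the paper, and the iteration cap and tolerance are illustrative. It minimizes $\|b - Ax\|_2$ by running CG on the normal equations without ever forming $A^T A$:

```python
import numpy as np

def cgls(A, b, maxit=100, tol=1e-10):
    """Minimal CGLS: minimize ||b - A x||_2 by conjugate gradients on the
    normal equations A^T A x = A^T b, applied implicitly."""
    x = np.zeros(A.shape[1])
    r = b.copy()               # residual b - A x
    s = A.T @ r                # normal-equations residual A^T r
    p = s.copy()
    gamma = s @ s
    for _ in range(maxit):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

On a small overdetermined system this matches the least squares solution returned by a direct solver.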

3. Fixed-Point Method for TVL2 Problem (4)

In this section, we propose a fixed-point method using CGLS for solving the new TVL2 regularization problem (4). From relation (12), TVL2 problem (4) can be expressed as

$$\min_{u \in \mathbb{R}^{n^2}} \ \frac{1}{2}\|Au - f\|_2^2 + \alpha\|u\|_2 + \beta(\varphi \circ B)(u). \qquad (13)$$
Using Proposition 1, we can obtain the following property for a solution of TVL2 problem (13).
Theorem 1.
If $\varphi$ is a real-valued convex function on $\mathbb{R}^{2n^2}$, $B$ is a $2n^2 \times n^2$ matrix, $A$ is an $n^2 \times n^2$ matrix, and $u$ is a solution to model (13), then, for any $\gamma, \lambda > 0$, there exist vectors $a \in \mathbb{R}^{n^2}$ and $b \in \mathbb{R}^{2n^2}$ such that

$$a = (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(u + a), \qquad (14)$$

$$b = (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(Bu + b), \qquad (15)$$

$$(A^T A)\, u = A^T f - \alpha\gamma\, a - \lambda B^T b. \qquad (16)$$

Conversely, if there exist $\gamma, \lambda > 0$, $a \in \mathbb{R}^{n^2}$, $b \in \mathbb{R}^{2n^2}$, and $u \in \mathbb{R}^{n^2}$ satisfying Equations (14)–(16), then $u$ is a solution to model (13).
Proof. 
Assume that $u \in \mathbb{R}^{n^2}$ is a solution to model (13). By Fermat's rule in convex analysis applied to model (13), we obtain the following equivalent relation for the solution $u$:

$$0 \in A^T(Au - f) + \alpha\, \partial(\|\cdot\|_2)(u) + \beta B^T (\partial\varphi)(Bu) = A^T(Au - f) + \alpha\, \partial(\|\cdot\|_2)(u) + B^T \partial(\beta\varphi)(Bu). \qquad (17)$$

For any $\gamma, \lambda > 0$, we can choose two vectors $a \in \partial\big(\tfrac{1}{\gamma}\|\cdot\|_2\big)(u)$ and $b \in \partial\big(\tfrac{\beta}{\lambda}\varphi\big)(Bu)$ satisfying

$$A^T(Au - f) + \alpha\gamma\, a + \lambda B^T b = 0. \qquad (18)$$

Using Proposition 1, one obtains

$$u = \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2}(u + a) \quad \text{and} \quad Bu = \mathrm{prox}_{\frac{\beta}{\lambda}\varphi}(Bu + b). \qquad (19)$$

From Equation (19), we obtain Equations (14) and (15). In addition, from Equation (18), we obtain Equation (16).
Conversely, suppose that there exist $\gamma, \lambda > 0$, $a$, $b$, and $u \in \mathbb{R}^{n^2}$ satisfying Equations (14)–(16). From Equation (16), we obtain Equation (18). By Proposition 1, Equations (14) and (15) ensure $a \in \partial\big(\tfrac{1}{\gamma}\|\cdot\|_2\big)(u)$ and $b \in \partial\big(\tfrac{\beta}{\lambda}\varphi\big)(Bu)$, respectively. Using these relations and Equation (18), one obtains

$$0 = A^T(Au - f) + \alpha\gamma\, a + \lambda B^T b \in A^T(Au - f) + \alpha\, \partial(\|\cdot\|_2)(u) + \beta B^T (\partial\varphi)(Bu).$$
Consequently, Equation (17) holds. Thus, u is a solution to model (13). □
From Equations (14)–(16) in Theorem 1, we can develop a fixed-point algorithm that converges to a solution of TVL2 problem (4). We now describe how to develop the fixed-point algorithm. Let $u$ be an approximate solution to the ill-conditioned linear system (16) in Theorem 1. Then, $u$ can be expressed as

$$u = M(A^T f - \alpha\gamma\, a - \lambda B^T b), \qquad (20)$$

where $M$ is a symmetric positive semi-definite matrix approximating an inverse of the linear system matrix in (16). For example, we can choose $M = (A^T A)_r$, a truncated pseudoinverse of $A^T A$ using the $r$ largest positive singular values of $A^T A$. Substituting Equation (20) into Equations (14) and (15), one obtains

$$a = (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})\big( (I_{n^2} - \alpha\gamma M)\, a - \lambda M B^T b + M A^T f \big), \qquad (21)$$

$$b = (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})\big( -\alpha\gamma B M a + (I_{2n^2} - \lambda B M B^T)\, b + B M A^T f \big). \qquad (22)$$
Let us define some operators. For the given convex functions $\frac{1}{\gamma}\|\cdot\|_2$ on $\mathbb{R}^{n^2}$ and $\frac{\beta}{\lambda}\varphi$ on $\mathbb{R}^{2n^2}$, we define an operator $T : \mathbb{R}^{3n^2} \to \mathbb{R}^{3n^2}$ at a vector $\begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^{3n^2}$ with $u \in \mathbb{R}^{n^2}$ and $v \in \mathbb{R}^{2n^2}$ as follows:

$$T\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(u) \\ (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(v) \end{pmatrix}. \qquad (23)$$

We also introduce an affine transformation $L : \mathbb{R}^{3n^2} \to \mathbb{R}^{3n^2}$ defined, for all $\begin{pmatrix} a \\ b \end{pmatrix} \in \mathbb{R}^{3n^2}$ with $a \in \mathbb{R}^{n^2}$ and $b \in \mathbb{R}^{2n^2}$, by

$$L\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} I_{n^2} - \alpha\gamma M & -\lambda M B^T \\ -\alpha\gamma B M & I_{2n^2} - \lambda B M B^T \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} + \begin{pmatrix} M A^T f \\ B M A^T f \end{pmatrix}, \qquad (24)$$

and an operator $G : \mathbb{R}^{3n^2} \to \mathbb{R}^{3n^2}$ defined by

$$G = T \circ L. \qquad (25)$$

Then, Equations (21) and (22) can be expressed as

$$w = G(w), \qquad (26)$$

where $w = \begin{pmatrix} a \\ b \end{pmatrix}$.
Proposition 2.
The operator G defined by (25) has a fixed point.
Proof. 
Since a solution of TVL2 problem (13) exists, from Equation (26) and the first part of the proof of Theorem 1, G has a fixed point. □
Lemma 1.
If the operator T is defined by Equation (23), then the operator T is non-expansive.
Proof. 
Note that $I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2}$ and $I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi}$ are firmly non-expansive and thus non-expansive [7]. For any vectors $s = \begin{pmatrix} u \\ v \end{pmatrix} \in \mathbb{R}^{3n^2}$ and $t = \begin{pmatrix} a \\ b \end{pmatrix} \in \mathbb{R}^{3n^2}$ with $u, a \in \mathbb{R}^{n^2}$ and $v, b \in \mathbb{R}^{2n^2}$,

$$\|T(s) - T(t)\|_2^2 = \left\| \begin{pmatrix} (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(u) - (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(a) \\ (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(v) - (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(b) \end{pmatrix} \right\|_2^2 = \big\|(I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(u) - (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(a)\big\|_2^2 + \big\|(I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(v) - (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(b)\big\|_2^2 \le \|u - a\|_2^2 + \|v - b\|_2^2 = \left\| \begin{pmatrix} u - a \\ v - b \end{pmatrix} \right\|_2^2 = \|s - t\|_2^2.$$

Hence, one obtains $\|T(s) - T(t)\|_2 \le \|s - t\|_2$, which means that the operator $T$ is non-expansive. □
Let

$$c := \begin{pmatrix} M A^T f \\ B M A^T f \end{pmatrix} \in \mathbb{R}^{3n^2}, \qquad P := \begin{pmatrix} I_{n^2} \\ B \end{pmatrix}, \qquad R := \begin{pmatrix} \alpha\gamma I_{n^2} & 0 \\ 0 & \lambda I_{2n^2} \end{pmatrix}.$$

Then, Equation (24) can be expressed as

$$Lw = (I_{3n^2} - P M P^T R)\, w + c. \qquad (27)$$
Proposition 3.
If $\varphi$ is a convex function on $\mathbb{R}^{2n^2}$, $B$ is a $2n^2 \times n^2$ matrix, $A$ is an $n^2 \times n^2$ matrix, and $\alpha, \gamma, \lambda$ are positive constants such that $\|I_{3n^2} - P M P^T R\|_2 \le 1$, then $G$ is non-expansive.
Proof. 
Since the operator $T$ is non-expansive by Lemma 1, for all $w_1, w_2 \in \mathbb{R}^{3n^2}$, we have

$$\|G(w_1) - G(w_2)\|_2 = \|T(Lw_1) - T(Lw_2)\|_2 \le \|Lw_1 - Lw_2\|_2.$$

By the assumption $\|I_{3n^2} - P M P^T R\|_2 \le 1$ and Equation (27), we obtain

$$\|Lw_1 - Lw_2\|_2 = \|(I_{3n^2} - P M P^T R)(w_1 - w_2)\|_2 \le \|I_{3n^2} - P M P^T R\|_2\, \|w_1 - w_2\|_2 \le \|w_1 - w_2\|_2.$$
Hence, G is non-expansive. □
Let $S : \mathbb{R}^N \to \mathbb{R}^N$ be an operator. Then, the Picard iteration of the operator $S$ is defined by

$$x^{i+1} = S(x^i), \quad i = 0, 1, 2, \ldots,$$

for a given vector $x^0 \in \mathbb{R}^N$. For $\kappa \in (0,1)$, the $\kappa$-averaged operator $S_\kappa$ of $S$ is defined by

$$S_\kappa = \kappa I + (1 - \kappa) S.$$
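The role of $\kappa$-averaging can be seen on a toy operator: a plane rotation is non-expansive (an isometry) with unique fixed point $0$, yet its own Picard iteration never settles, while the Picard iteration of $S_\kappa$ contracts to the fixed point. The rotation angle and $\kappa = 0.5$ below are illustrative choices:

```python
import numpy as np

# A 90-degree rotation: non-expansive, unique fixed point 0, but its
# Picard iteration just moves points around the unit circle forever.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
S = lambda x: R @ x

def picard(op, x0, iters):
    """Run the Picard iteration x^{i+1} = op(x^i)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = op(x)
    return x

kappa = 0.5
S_kappa = lambda x: kappa * x + (1.0 - kappa) * S(x)  # kappa-averaged operator

x0 = np.array([1.0, 0.0])
orbit = picard(S, x0, 100)            # stays on the unit circle
averaged = picard(S_kappa, x0, 100)   # contracts to the fixed point 0
```

The averaged map $\kappa I + (1-\kappa)R$ has eigenvalues of modulus $\sqrt{\kappa^2 + (1-\kappa)^2} < 1$, which is why its Picard iteration converges.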
Proposition 4
(Opial's $\kappa$-averaged theorem). Let $C$ be a closed convex set in $\mathbb{R}^{3n^2}$ and let $S : C \to C$ be a non-expansive mapping with at least one fixed point. Then, for any $w^0 \in C$ and $\kappa \in (0,1)$, the Picard iteration of $S_\kappa$ converges to a fixed point of $S$ (see [22]).
Theorem 2.
If $\varphi$ is a convex function, $A$ is an $n^2 \times n^2$ matrix, $B$ is a $2n^2 \times n^2$ matrix, and $\alpha, \gamma, \lambda > 0$ are positive constants such that $\|I_{3n^2} - P M P^T R\|_2 \le 1$, then for any $\kappa \in (0,1)$ the Picard iteration of $G_\kappa$ converges to a fixed point of $G$.
Proof. 
From Propositions 2 and 3, we know that the operator $G$ has a fixed point and is non-expansive. Hence, for any $w^0 \in \mathbb{R}^{3n^2}$ and $\kappa \in (0,1)$, the Picard iteration of $G_\kappa$ converges to a fixed point of $G$ by Proposition 4. □
For a square matrix K, let ρ ( K ) denote the spectral radius of K. Then, the following lemma can be obtained.
Lemma 2.
Let $\varphi$ and $B$ be defined by Equations (10) and (11), respectively, and let $A$ be a given $n^2 \times n^2$ blurring matrix. If we choose $\alpha, \gamma$ and $\lambda$ such that

$$0 < \lambda = \alpha\gamma < \frac{2}{\rho(P M P^T)},$$

then $\|I_{3n^2} - P M P^T R\|_2 \le 1$ and thus the operator $G$ is non-expansive.
Proof. 
Since $\lambda = \alpha\gamma$ and thus $R = \lambda I_{3n^2}$, one obtains

$$I_{3n^2} - P M P^T R = I_{3n^2} - \lambda P M P^T.$$

Note that $P M P^T$ is a symmetric positive semi-definite matrix. Hence,

$$\|I_{3n^2} - P M P^T R\|_2 = \rho(I_{3n^2} - \lambda P M P^T) = \max\{\, |1 - \lambda\, \lambda_{\min}(P M P^T)|,\ |1 - \lambda\, \lambda_{\max}(P M P^T)| \,\},$$

where $\lambda_{\min}(P M P^T)$ and $\lambda_{\max}(P M P^T)$ denote the minimum and maximum eigenvalues of $P M P^T$, respectively. Since $0 < \lambda = \alpha\gamma < \frac{2}{\rho(P M P^T)}$, we have $0 \le \lambda\, \lambda_{\min}(P M P^T) < 2$ and $0 < \lambda\, \lambda_{\max}(P M P^T) < 2$. Hence, one obtains

$$\|I_{3n^2} - P M P^T R\|_2 \le 1.$$
Therefore, the operator G is non-expansive by Proposition 3. □
Theorem 3.
If the assumptions of Lemma 2 hold and $\kappa \in (0,1)$, then the Picard iteration of $G_\kappa$ converges to a fixed point of $G$.
Proof. 
The proof follows from Lemma 2 and Theorem 2. □
From Theorem 2 and the Picard iteration of the κ -averaged operator G κ = κ I + ( 1 κ ) G , we can obtain a fixed-point method, called Algorithm 2, which converges to a solution to TVL2 problem (4).
Algorithm 2 Fixed-point method for TVL2 problem (4)

1: Given: observed image $f$, positive parameters $\alpha, \beta, \gamma, \lambda$ and $\kappa \in (0,1)$
2: Initialization: $a^0 = 0$, $b^0 = 0$ and $u^0 = f$
3: for $k = 0$ to maxit do
4:   $\hat{a}^{k+1} = (I - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2})(u^k + a^k)$
5:   $a^{k+1} = \kappa a^k + (1 - \kappa)\hat{a}^{k+1}$
6:   $\hat{b}^{k+1} = (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(B u^k + b^k)$
7:   $b^{k+1} = \kappa b^k + (1 - \kappa)\hat{b}^{k+1}$
8:   Solve $A^T A\, u^{k+1} = A^T f - \alpha\gamma\, a^{k+1} - \lambda B^T b^{k+1}$ for $u^{k+1}$
9:   if $\|u^{k+1} - u^k\|_2 / \|u^{k+1}\|_2 < tol$ then
10:    Stop
11:  end if
12: end for
The linear system in line 8 of Algorithm 2 is ill-conditioned, so we need to consider how to find an approximate solution to it. A typical method for finding an approximate solution to the linear system is

$$u^{k+1} = M(A^T f - \alpha\gamma\, a^{k+1} - \lambda B^T b^{k+1}),$$

where $M = (A^T A)_r$. However, the computation of $(A^T A)_r$ is very time-consuming when $A$ is large. Thus, we propose a different approach for finding an approximate solution to the linear system in line 8 of Algorithm 2. We first split the coefficient matrix $A^T A$ into

$$A^T A = (A^T A + \delta D) - (\delta D),$$
where $\delta$ is a positive constant and $D = \mathrm{diag}(A^T A)$ is the diagonal part of $A^T A$. Then, the ill-conditioned linear system can be solved using the following iterative method:

Inner Solver
  Choose $y_1 = u^k$
  for $\ell = 1$ to maxl
    Solve $(A^T A + \delta D)\, y_{\ell+1} = \delta D\, y_\ell + A^T f - \alpha\gamma\, a^{k+1} - \lambda B^T b^{k+1}$ for $y_{\ell+1}$
  end for
  $u^{k+1} = y_{\ell+1}$,

where $u^k$ refers to the restored image computed at the previous $k$th step, and the optimal parameter $\delta > 0$ is chosen by numerical trials. A semi-convergence analysis for the Inner Solver has been studied by Han and Yun [23]. Algorithm 2 is an iterative method that converges to a solution of TVL2 problem (4) as $k \to \infty$, so we do not have to obtain an accurate solution to the linear system in line 8 of Algorithm 2. For this reason, we have used maxl = 1 for the Inner Solver.
Notice that the linear system in the Inner Solver is equivalent to solving the following least squares problem:

$$\min_u \left\| \begin{pmatrix} f \\ \frac{1}{\sqrt{\delta}}\, D^{-\frac{1}{2}}\big(\delta D\, y_\ell - \alpha\gamma\, a^{k+1} - \lambda B^T b^{k+1}\big) \end{pmatrix} - \begin{pmatrix} A \\ \sqrt{\delta}\, D^{\frac{1}{2}} \end{pmatrix} u \right\|_2^2. \qquad (29)$$

Hence, the linear system in the Inner Solver is solved by applying CGLS to (29).
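The Inner Solver above can be sketched as follows; a dense direct solve stands in for the CGLS call, and the helper name, the test matrix, and the value of $\delta$ in the usage are illustrative choices:

```python
import numpy as np

def inner_solver(A, rhs, y0, delta, maxl=1):
    """Sweeps of the splitting A^T A = (A^T A + delta*D) - delta*D with
    D = diag(A^T A): solve (A^T A + delta*D) y_{l+1} = delta*D y_l + rhs.
    A dense direct solve stands in for the CGLS call used in the paper."""
    AtA = A.T @ A
    D = np.diag(np.diag(AtA))
    M = AtA + delta * D
    y = np.asarray(y0, dtype=float)
    for _ in range(maxl):
        y = np.linalg.solve(M, delta * (D @ y) + rhs)
    return y
```

With maxl = 1 this matches the paper's choice of a single inner sweep; if many sweeps are run and $A^T A$ is positive definite, the iterates approach the exact solution of $A^T A\, y = \mathrm{rhs}$.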

4. Fixed-Point-like Method for TVL2 Problem (4)

In this section, we propose a fixed-point-like method using CGLS for solving the new TVL2 problem (4), obtained by modifying Algorithm 2. Notice that Algorithm 2 computes $\hat{a}^{k+1}$ and $a^{k+1}$ before the solution step of finding $u^{k+1}$ (see lines 4 and 5 of Algorithm 2). However, the fixed-point-like method proposed in this section computes $\hat{a}^{k+1}$ and $a^{k+1}$ after the solution step of finding $u^{k+1}$. Below, we describe how to develop the fixed-point-like method in detail. We first split line 4 of Algorithm 2 into

$$\hat{a}^{k+1} = u^k + a^{k+\frac{1}{2}}, \qquad (30)$$

where $a^{k+\frac{1}{2}} = a^k - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2}(u^k + a^k)$. Replacing the old value $u^k$ in Equation (30) with the new value $u^{k+1}$, one obtains the following equation:

$$\hat{a}^{k+1} = u^{k+1} + a^{k+\frac{1}{2}}. \qquad (31)$$

Then, the solution step (i.e., line 8 of Algorithm 2) is changed to

$$A^T A\, u^{k+1} = A^T f - \alpha\gamma\, \hat{a}^{k+1} - \lambda B^T b^{k+1}, \qquad (32)$$

where $\hat{a}^{k+1}$ is computed using Equation (31) instead of Equation (30). Substituting Equation (31) into Equation (32), one obtains

$$(A^T A + \alpha\gamma I)\, u^{k+1} = A^T f - \alpha\gamma\, a^{k+\frac{1}{2}} - \lambda B^T b^{k+1}. \qquad (33)$$

After finding $u^{k+1}$ from Equation (33), we compute $\hat{a}^{k+1}$ using Equation (31) and $a^{k+1} = \kappa a^k + (1 - \kappa)\hat{a}^{k+1}$. By incorporating the above ideas into Algorithm 2, we obtain a fixed-point-like method, called Algorithm 3, for solving TVL2 problem (4).
In addition, notice that the linear system in line 7 of Algorithm 3 is equivalent to solving the following least squares problem:

$$\min_u \left\| \begin{pmatrix} f \\ -\frac{1}{\sqrt{\alpha\gamma}}\big(\alpha\gamma\, a^{k+\frac{1}{2}} + \lambda B^T b^{k+1}\big) \end{pmatrix} - \begin{pmatrix} A \\ \sqrt{\alpha\gamma}\, I \end{pmatrix} u \right\|_2^2.$$

Hence, the linear system in line 7 of Algorithm 3 is solved using CGLS instead of CG.
Algorithm 3 Fixed-point-like method for TVL2 problem (4)

1: Given: observed image $f$, positive parameters $\alpha, \beta, \gamma, \lambda$ and $\kappa \in (0,1)$
2: Initialization: $a^0 = 0$, $b^0 = 0$ and $u^0 = f$
3: for $k = 0$ to maxit do
4:   $a^{k+\frac{1}{2}} = a^k - \mathrm{prox}_{\frac{1}{\gamma}\|\cdot\|_2}(u^k + a^k)$
5:   $\hat{b}^{k+1} = (I - \mathrm{prox}_{\frac{\beta}{\lambda}\varphi})(B u^k + b^k)$
6:   $b^{k+1} = \kappa b^k + (1 - \kappa)\hat{b}^{k+1}$
7:   Solve $(A^T A + \alpha\gamma I)\, u^{k+1} = A^T f - \alpha\gamma\, a^{k+\frac{1}{2}} - \lambda B^T b^{k+1}$ for $u^{k+1}$
8:   $\hat{a}^{k+1} = u^{k+1} + a^{k+\frac{1}{2}}$
9:   $a^{k+1} = \kappa a^k + (1 - \kappa)\hat{a}^{k+1}$
10:  if $\|u^{k+1} - u^k\|_2 / \|u^{k+1}\|_2 < tol$ then
11:    Stop
12:  end if
13: end for

5. Split Bregman Methods for TVL2 Problems (3) and (4)

In this section, we present the alternating split Bregman methods for solving the TVL2 problems (3) and (4), which serve as baselines for evaluating the performance of Algorithms 2 and 3. For more details about the alternating split Bregman method, as well as its convergence analysis, we refer to [5,8]. Algorithms 4 and 5 below are the alternating split Bregman methods corresponding to the TVL2 problems (3) and (4), respectively.
As was done in line 7 of Algorithm 3, the linear systems in line 4 of Algorithms 4 and 5 are solved using CGLS instead of CG.
Algorithm 4 Split Bregman method for TVL2 problem (3)

1: Given: observed image $f$, positive parameters $\alpha, \beta, \lambda$
2: Initialization: $a^0 = 0$, $b^0 = 0$ and $u^0 = f$
3: for $k = 0$ to maxit do
4:   Solve $(A^T A + \lambda B^T B + \alpha I)\, u^{k+1} = A^T f + \lambda B^T(a^k - b^k)$ for $u^{k+1}$
5:   $a^{k+1} = \mathrm{prox}_{\frac{\beta}{\lambda}\varphi}(B u^{k+1} + b^k)$
6:   $b^{k+1} = b^k + B u^{k+1} - a^{k+1}$
7:   if $\|u^{k+1} - u^k\|_2 / \|u^{k+1}\|_2 < tol$ then
8:     Stop
9:   end if
10: end for
Algorithm 5 Split Bregman method for TVL2 problem (4)

1: Given: observed image $f$, positive parameters $\alpha, \beta, \lambda, \gamma$
2: Initialization: $a^0 = b^0 = 0$, $c^0 = d^0 = 0$ and $u^0 = f$
3: for $k = 0$ to maxit do
4:   Solve $(A^T A + \lambda B^T B + \gamma I)\, u^{k+1} = A^T f + \lambda B^T(d^k - c^k) + \gamma(a^k - b^k)$ for $u^{k+1}$
5:   $a^{k+1} = \mathrm{prox}_{\frac{\alpha}{\gamma}\|\cdot\|_2}(u^{k+1} + b^k)$
6:   $d^{k+1} = \mathrm{prox}_{\frac{\beta}{\lambda}\varphi}(B u^{k+1} + c^k)$
7:   $b^{k+1} = b^k + u^{k+1} - a^{k+1}$
8:   $c^{k+1} = c^k + B u^{k+1} - d^{k+1}$
9:   if $\|u^{k+1} - u^k\|_2 / \|u^{k+1}\|_2 < tol$ then
10:    Stop
11:  end if
12: end for

6. Numerical Experiments

In this section, we describe how to carry out numerical experiments for several test problems to evaluate the efficiency of two iterative methods, called Algorithms 2 and 3, using CGLS for solving the new proposed TVL2 problem (4). Performance of Algorithms 2 and 3 is evaluated by comparing their numerical results with those of the existing fixed-point method called Algorithm 1 and the split Bregman methods called Algorithms 4 and 5.
All numerical tests have been performed using Matlab R2016a on a personal computer equipped with an Intel Core i5-3337 1.8 GHz CPU and 8 GB RAM. For the numerical experiments, we have used three types of PSFs (point spread functions): Gaussian blur with standard deviation 9, and Average blur and Motion blur of size 9 × 9. The PSF arrays $P$ for Gaussian blur with standard deviation 9 and Average blur and Motion blur of size 9 × 9 are generated by the built-in Matlab functions fspecial('gaussian',[9,9],9) and fspecial('average',9), and by

P = zeros(9); P(4:6,:) = fspecial('motion',9,1),

respectively. The blurred and noisy image $f$ is generated by

f = A*vec(X) + vec(E),

where $A$ stands for the blurring matrix generated from the PSF array $P$ according to the reflexive boundary condition, and $E$ is Gaussian white noise with mean 0 and standard deviation 3, which can be generated by the Matlab command E = 3*randn(m,n), where (m,n) denotes the size of the true image.
In order to illustrate the efficiency of the proposed algorithms, we have used four test images with intensity range [0, 255], namely Cameraman, Lena, House, and Boat, each of pixel size 256 × 256. To evaluate the quality of the restored images, we have used the peak signal-to-noise ratio (PSNR) between the original image and the restored image, defined by

$$\mathrm{PSNR} = 10 \log_{10} \frac{\max_{i,j} |u_{ij}|^2 \cdot m \cdot n}{\|u - \tilde{u}\|_F^2},$$

where $\|\cdot\|_F$ represents the Frobenius norm, $\tilde{u}$ denotes the restored image of the original image $u$ of size $m \times n$, and $u_{ij}$ stands for the value of the original image $u$ at the pixel $(i,j)$. In general, a larger PSNR indicates a better quality of the restored image.
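The PSNR formula above translates directly into code; a small sketch (the helper name is ours, and NumPy stands in for the Matlab used in the experiments):

```python
import numpy as np

def psnr(u, u_restored):
    """PSNR between an original m-by-n image u and a restored image,
    using the squared peak value of u as in the text."""
    m, n = u.shape
    peak_sq = np.max(np.abs(u)) ** 2
    err_sq = np.linalg.norm(u - u_restored, 'fro') ** 2
    return 10.0 * np.log10(peak_sq * m * n / err_sq)
```

For a constant image of peak 255 perturbed by 1 at every pixel, the error norm squared equals $m \cdot n$, so the formula reduces to $20 \log_{10} 255$.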
For all numerical experiments, the initial image was set to the blurred and noisy image $f$, $\kappa = 1 \times 10^{-6}$, maxit = 150, and tol is set to $5 \times 10^{-4}$ (for Algorithms 1 and 5), $2 \times 10^{-4}$ (for Algorithm 4), or $1 \times 10^{-3}$ (for Algorithms 2 and 3). For the CGLS method that is used to solve a linear system at every iteration of Algorithms 1–5, the tolerance for the stopping criterion is set to $5 \times 10^{-2}$ (for Algorithms 1, 4 and 5) or $1 \times 10^{-3}$ (for Algorithms 2 and 3), and the maximum number of iterations is set to 60.

7. Numerical Results

In this section, we provide numerical results for the four test images; they are listed in Table 1, Table 2, Table 3 and Table 4 and Figure 1. In Table 1, Table 2, Table 3 and Table 4, “Alg” represents the algorithm number used, “$P_0$” represents the PSNR value of the blurred and noisy image $f$, “PSNR” represents the PSNR value of the restored image, “Iter” denotes the number of iterations required for Algorithms 1–5, the values in parentheses under the “Iter” column refer to the average number of iterations for CGLS, and “$\alpha, \beta, \gamma, \lambda$” and “$\delta$” denote parameters that are chosen by numerical trials. Notice that, according to Theorem 1, the parameters $\alpha, \beta, \gamma, \lambda$ should be chosen appropriately for good performance of the fixed-point methods.
As can be seen in Table 1, Table 2, Table 3 and Table 4, Algorithm 3 restores the true image better than Algorithms 1 and 2. This means that the fixed-point-like method for TVL2 problem (4) restores the true image better than the fixed-point methods for TVL2 problems (3) and (4). The linear system in Algorithm 3 that is obtained by computing a k + 1 after the solution step of finding u k + 1 is well-conditioned, while the linear system in Algorithm 2 is ill-conditioned. This is the reason why Algorithm 3 restores the true image significantly better than Algorithm 2. Since PSNR values of Algorithm 2 are about 0.3 to 1.0 smaller than those of Algorithm 3 for all test images, numerical results of Algorithm 2 are not provided for House and Boat images in Table 3 and Table 4. Figure 1 shows the restored images by the fixed-point method, called Algorithm 1, for TVL2 problem (3) and the fixed-point-like method, called Algorithm 3, for TVL2 problem (4).
The fixed-point method (Algorithm 1) for the TVL2 model (3) performs worse than the corresponding split Bregman method (Algorithm 4), while the fixed-point-like method (Algorithm 3) for the TVL2 model (4) performs almost as well as the corresponding split Bregman method (Algorithm 5). Note that the split Bregman methods for the TVL2 models (3) and (4) perform the same in almost all cases. When the number of iterations required for CGLS is taken into account, the total number of iterations for Algorithm 3 is less than that of Algorithm 1. Each iteration of Algorithms 1–3 requires one linear system solve with CGLS and two matrix-vector products, which are the main time-consuming kernels, and each iteration of CGLS requires two matrix-vector products. In addition, there are some vector-update operations whose cost is negligible compared with a matrix-vector product. This means that the execution time of Algorithm 3 for the new TVL2 problem (4) is less than that of Algorithm 1 for TVL2 problem (3). For example, for the Cameraman image with Gaussian blur, the CPU times for Algorithms 1 and 3 are about 68 and 42 s, respectively.

8. Conclusions

In this paper, we first proposed a new TVL2 regularization model (4) for image restoration, and then we proposed the fixed-point method (Algorithm 2) and the fixed-point-like method (Algorithm 3) for solving TVL2 problem (4). According to the numerical experiments, the fixed-point-like method for the new TVL2 problem (4) restores the true image better than the fixed-point method for TVL2 problem (4). The reason is that the linear system in line 7 of Algorithm 3 is well-conditioned and $a^{k+1}$ is computed after the solution step of finding $u^{k+1}$, i.e., $a^{k+1}$ is computed using the new value $u^{k+1}$ instead of the old value $u^k$.
Both of the split Bregman methods for TVL2 problems (3) and (4) perform the same in almost all cases, while the fixed-point-like method for TVL2 problem (4) performs better than the fixed-point method for TVL2 problem (3). It can also be seen that the execution time of the fixed-point-like method for TVL2 problem (4) is less than that of the fixed-point method for TVL2 problem (3). Hence, it can be concluded that the newly proposed TVL2 model (4) for image restoration is preferred over the TVL2 model (3), and the proposed fixed-point-like method (Algorithm 3) is well suited to the new TVL2 model (4).
The fixed-point-like method and TVL2 model (4) proposed in this paper can be applied to the image inpainting problem or image restoration problem with Poisson noise. Future work will study these kinds of problems.

Author Contributions

Conceptualization, J.H.Y.; Formal analysis, J.H.Y.; Methodology, K.S.K. and J.H.Y.; Software, K.S.K. and J.H.Y.; Writing—original draft, K.S.K. and J.H.Y.; Writing—review and editing, K.S.K. and J.H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1A6A3A01012976), and the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2019R1F1A1060718).

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions which greatly improved the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chambolle, A.; Lions, P.L. Image recovery via total variation minimization and related problems. Numer. Math. 1997, 76, 167–188. [Google Scholar] [CrossRef]
  2. Chan, T.F.; Shen, J. Image Processing and Analysis: Variational, PDE, Wavelet, and Stochastic Methods; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2005. [Google Scholar]
  3. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268. [Google Scholar] [CrossRef]
  4. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Cai, J.F.; Osher, S.; Shen, Z. Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 2009, 8, 337–369. [Google Scholar] [CrossRef]
  6. Chan, T.F.; Mulet, P. On the convergence of the lagged diffusivity fixed point method in total variation image restoration. SIAM J. Numer. Anal. 1999, 36, 354–367. [Google Scholar] [CrossRef] [Green Version]
  7. Combettes, P.L.; Wajs, V.R. Signal Recovery by Proximal Forward-Backward Splitting. Multiscale Model. Simul. 2005, 4, 1168–1200. [Google Scholar] [CrossRef] [Green Version]
  8. Goldstein, T.; Osher, S. The Split Bregman Method for L1 Regularized Problems. SIAM J. Imaging Sci. 2008, 2, 323–343. [Google Scholar] [CrossRef]
  9. Goldfarb, D.; Yin, W. Second-order cone programming methods for total variation-based image restoration. SIAM J. Sci. Comput. 2005, 27, 622–645. [Google Scholar] [CrossRef] [Green Version]
  10. Ng, M.K.; Qi, L.; Yang, Y.F.; Huang, Y.M. On Semismooth Newton’s Methods for Total Variation Minimization. J. Math. Imaging Vis. 2007, 27, 265–276. [Google Scholar] [CrossRef]
  11. Vogel, C.R.; Oman, M.E. Iterative methods for total variation denoising. SIAM J. Sci. Comput. 1996, 17, 227–238. [Google Scholar] [CrossRef]
  12. Vogel, C.R.; Oman, M.E. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 1998, 7, 813–824. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Yun, J.H. Performance of relaxed iterative methods for image deblurring problems. J. Algorithms Comput. Technol. 2019, 13, 16. [Google Scholar] [CrossRef] [Green Version]
  14. Chen, D.Q.; Zhang, H.; Cheng, L.Z. A fast fixed point algorithm for total variation deblurring and segmentation. J. Math. Imaging Vis. 2012, 43, 167–179. [Google Scholar] [CrossRef]
  15. Bjorck, A. Numerical Methods for Least Squares Problems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1996. [Google Scholar]
  16. Moreau, J.J. Proximite et dualite dans un espace hilbertien. Bull. Soc. Math. France 1965, 93, 273–299. [Google Scholar] [CrossRef]
  17. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: denoising. Inverse Probl. 2011, 27, 30. [Google Scholar] [CrossRef]
  18. Lu, J.; Qiao, K.; Shen, L.; Zou, Y. Fixed-point algorithms for a TVL1 image restoration model. Int. J. Comput. Math. 2018, 95, 1829–1844. [Google Scholar] [CrossRef]
  19. Hestenes, M.R.; Stiefel, E. Methods of conjugate gradients for solving linear systems. J. Res. Nat. Bur. Stand. 1952, 49, 409–436. [Google Scholar] [CrossRef]
  20. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2003. [Google Scholar]
  21. Axelsson, O. Iterative Solution Methods; Cambridge University Press: Cambridge, UK, 1994. [Google Scholar]
  22. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef] [Green Version]
  23. Han, Y.D.; Yun, J.H. Performance of the Restarted Homotopy Perturbation Method and Split Bregman Method for Multiplicative Noise Removal. Math. Probl. Eng. 2018, 2018, 21. [Google Scholar] [CrossRef]
Figure 1. Restored images produced by Algorithms 1 and 3 (first row: Cameraman images for Gaussian blur; second row: Lena images for motion blur; third row: House images for motion blur; fourth row: Boat images for average blur).
Table 1. Numerical results for Cameraman image.

| Blur | P0 | Alg | α | β | γ | λ | δ | tol | PSNR | Iter |
|---|---|---|---|---|---|---|---|---|---|---|
| Gaussian | 20.85 | 1 | 0.0016 | 0.12 | – | 0.00041 | – | 5 × 10⁻⁴ | 25.17 | 67(47) |
| | | 2 | 1.15 | 0.135 | 0.05 | 0.01 | 4.2 | 1 × 10⁻³ | 24.60 | 34(9) |
| | | 3 | 1.15 | 0.135 | 0.05 | 0.01 | – | 1 × 10⁻³ | 25.51 | 132(8) |
| | | 4 | 0.00001 | 0.135 | – | 0.02 | – | 2 × 10⁻⁴ | 25.53 | 56(10) |
| | | 5 | 0.15 | 0.14 | 0.005 | 0.008 | – | 5 × 10⁻⁴ | 25.52 | 28(13) |
| Average | 20.76 | 1 | 0.0016 | 0.13 | – | 0.00041 | – | 5 × 10⁻⁴ | 25.24 | 58(48) |
| | | 2 | 0.75 | 0.14 | 0.095 | 0.0095 | 4.2 | 1 × 10⁻³ | 24.64 | 33(9) |
| | | 3 | 0.75 | 0.14 | 0.095 | 0.0095 | – | 1 × 10⁻³ | 25.59 | 120(8) |
| | | 4 | 0.00001 | 0.145 | – | 0.02 | – | 2 × 10⁻⁴ | 25.60 | 42(14) |
| | | 5 | 0.01 | 0.14 | 0.002 | 0.009 | – | 5 × 10⁻⁴ | 25.61 | 27(15) |
| Motion | 21.85 | 1 | 0.0017 | 0.22 | – | 0.00049 | – | 5 × 10⁻⁴ | 28.05 | 51(36) |
| | | 2 | 0.85 | 0.23 | 0.085 | 0.002 | 1.3 | 1 × 10⁻³ | 27.52 | 41(7) |
| | | 3 | 0.85 | 0.23 | 0.085 | 0.002 | – | 1 × 10⁻³ | 28.57 | 67(12) |
| | | 4 | 0.00001 | 0.24 | – | 0.03 | – | 2 × 10⁻⁴ | 28.51 | 36(7) |
| | | 5 | 0.01 | 0.24 | 0.03 | 0.003 | – | 5 × 10⁻⁴ | 28.59 | 37(10) |
Table 2. Numerical results for Lena image.

| Blur | P0 | Alg | α | β | γ | λ | δ | tol | PSNR | Iter |
|---|---|---|---|---|---|---|---|---|---|---|
| Gaussian | 22.55 | 1 | 0.0016 | 0.17 | – | 0.0004 | – | 5 × 10⁻⁴ | 26.17 | 69(48) |
| | | 2 | 1.9 | 0.17 | 0.085 | 0.01 | 6.2 | 1 × 10⁻³ | 25.89 | 35(8) |
| | | 3 | 1.9 | 0.17 | 0.085 | 0.01 | – | 1 × 10⁻³ | 26.22 | 92(5) |
| | | 4 | 0.00001 | 0.17 | – | 0.09 | – | 2 × 10⁻⁴ | 26.29 | 66(5) |
| | | 5 | 0.01 | 0.175 | 0.00009 | 0.04 | – | 5 × 10⁻⁴ | 26.29 | 28(7) |
| Average | 22.44 | 1 | 0.0016 | 0.17 | – | 0.0004 | – | 5 × 10⁻⁴ | 26.20 | 69(48) |
| | | 2 | 1.9 | 0.18 | 0.075 | 0.0095 | 6.2 | 1 × 10⁻³ | 25.89 | 36(8) |
| | | 3 | 1.9 | 0.18 | 0.075 | 0.0095 | – | 1 × 10⁻³ | 26.27 | 107(5) |
| | | 4 | 0.00001 | 0.20 | – | 0.09 | – | 2 × 10⁻⁴ | 26.33 | 63(5) |
| | | 5 | 0.01 | 0.180 | 0.00010 | 0.04 | – | 5 × 10⁻⁴ | 26.33 | 28(8) |
| Motion | 23.06 | 1 | 0.0017 | 0.24 | – | 0.0005 | – | 5 × 10⁻⁴ | 28.28 | 57(37) |
| | | 2 | 1.1 | 0.26 | 0.075 | 0.0075 | 1.3 | 1 × 10⁻³ | 28.02 | 32(7) |
| | | 3 | 1.1 | 0.26 | 0.075 | 0.0075 | – | 1 × 10⁻³ | 28.45 | 46(11) |
| | | 4 | 0.00001 | 0.26 | – | 0.07 | – | 2 × 10⁻⁴ | 28.46 | 43(5) |
| | | 5 | 0.01 | 0.265 | 0.00010 | 0.025 | – | 5 × 10⁻⁴ | 28.46 | 18(8) |
Table 3. Numerical results for House image.

| Blur | P0 | Alg | α | β | γ | λ | tol | PSNR | Iter |
|---|---|---|---|---|---|---|---|---|---|
| Gaussian | 24.19 | 1 | 0.0020 | 0.20 | – | 0.00051 | 5 × 10⁻⁴ | 30.09 | 71(41) |
| | | 3 | 3.2 | 0.20 | 0.075 | 0.02 | 1 × 10⁻³ | 30.30 | 82(4) |
| | | 4 | 0.00001 | 0.20 | – | 0.13 | 2 × 10⁻⁴ | 30.52 | 56(5) |
| | | 5 | 0.01 | 0.210 | 0.0001 | 0.055 | 5 × 10⁻⁴ | 30.52 | 24(7) |
| Average | 24.05 | 1 | 0.0020 | 0.20 | – | 0.00051 | 5 × 10⁻⁴ | 30.02 | 58(41) |
| | | 3 | 3.1 | 0.19 | 0.07 | 0.0095 | 1 × 10⁻³ | 30.24 | 117(4) |
| | | 4 | 0.00001 | 0.20 | – | 0.13 | 2 × 10⁻⁴ | 30.50 | 56(5) |
| | | 5 | 0.01 | 0.215 | 0.0001 | 0.0055 | 5 × 10⁻⁴ | 30.50 | 24(7) |
| Motion | 27.01 | 1 | 0.0017 | 0.41 | – | 0.00051 | 5 × 10⁻⁴ | 32.87 | 68(31) |
| | | 3 | 1.5 | 0.55 | 0.045 | 0.01 | 1 × 10⁻³ | 33.32 | 82(12) |
| | | 4 | 0.00001 | 0.50 | – | 0.60 | 2 × 10⁻⁴ | 33.42 | 53(6) |
| | | 5 | 0.01 | 0.520 | 0.0020 | 0.100 | 5 × 10⁻⁴ | 33.52 | 18(6) |
Table 4. Numerical results for Boat image.

| Blur | P0 | Alg | α | β | γ | λ | tol | PSNR | Iter |
|---|---|---|---|---|---|---|---|---|---|
| Gaussian | 21.28 | 1 | 0.0016 | 0.12 | – | 0.00041 | 5 × 10⁻⁴ | 25.13 | 60(49) |
| | | 3 | 2.5 | 0.13 | 0.07 | 0.009 | 1 × 10⁻³ | 25.17 | 58(6) |
| | | 4 | 0.00001 | 0.14 | – | 0.11 | 2 × 10⁻⁴ | 25.29 | 74(5) |
| | | 5 | 0.10 | 0.12 | 0.0003 | 0.075 | 5 × 10⁻⁴ | 25.31 | 36(6) |
| Average | 21.19 | 1 | 0.0016 | 0.12 | – | 0.00041 | 5 × 10⁻⁴ | 25.19 | 54(50) |
| | | 3 | 2.4 | 0.135 | 0.075 | 0.008 | 1 × 10⁻³ | 25.24 | 62(5) |
| | | 4 | 0.00001 | 0.13 | – | 0.18 | 2 × 10⁻⁴ | 25.38 | 85(5) |
| | | 5 | 0.01 | 0.130 | 0.0001 | 0.070 | 5 × 10⁻⁴ | 25.38 | 35(7) |
| Motion | 23.32 | 1 | 0.0018 | 0.24 | – | 0.00052 | 5 × 10⁻⁴ | 27.70 | 54(36) |
| | | 3 | 0.6 | 0.265 | 0.09 | 0.02 | 1 × 10⁻³ | 27.87 | 34(14) |
| | | 4 | 0.00001 | 0.26 | – | 0.12 | 2 × 10⁻⁴ | 27.91 | 52(5) |
| | | 5 | 0.01 | 0.255 | 0.0001 | 0.050 | 5 × 10⁻⁴ | 27.91 | 22(4) |
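The tables above report restoration quality as PSNR in dB. As a point of reference, a minimal sketch of the standard PSNR computation is shown below, assuming 8-bit images (peak intensity 255) flattened to pixel lists; the paper does not specify its exact implementation, so the function name `psnr` and the `peak` parameter are illustrative choices, not the authors' code.

```python
import math

def psnr(u_true, u_restored, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences."""
    n = len(u_true)
    # Mean squared error between the reference image and the restored image
    mse = sum((a - b) ** 2 for a, b in zip(u_true, u_restored)) / n
    if mse == 0.0:
        return math.inf  # identical images: PSNR is unbounded
    return 10.0 * math.log10(peak ** 2 / mse)
```

A larger PSNR indicates a restored image closer to the original, which is how the algorithm columns in Tables 1–4 are compared.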

Share and Cite

Kim, K.S.; Yun, J.H. Image Restoration Using a Fixed-Point Method for a TVL2 Regularization Problem. Algorithms 2020, 13, 1. https://doi.org/10.3390/a13010001