
Article

# Preconditioning Technique for an Image Deblurring Problem with the Total Fractional-Order Variation Model

1 PYP-Math, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
2 The Interdisciplinary Research Center in Construction and Building Materials, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
Math. Comput. Appl. 2023, 28(5), 97; https://doi.org/10.3390/mca28050097
Submission received: 2 August 2023 / Revised: 12 September 2023 / Accepted: 18 September 2023 / Published: 22 September 2023

## Abstract

Total fractional-order variation (TFOV) in image deblurring problems can reduce/remove the staircase problems observed with image deblurring based on the standard total variation (TV) model. However, the discretization of the Euler–Lagrange equations associated with the TFOV model generates a saddle point system of equations whose coefficient matrix is dense and ill conditioned (it has a huge condition number). The ill-conditioning slows the convergence of any iterative method, such as Krylov subspace methods. One treatment for this slowness is to apply a preconditioning technique. In this paper, we propose a block triangular preconditioner because it is known that using the exact triangular preconditioner leads to a preconditioned matrix with exactly two distinct eigenvalues, which means that at most two iterations are needed to converge to the exact solution. However, we cannot use the exact preconditioner because the Schur complement of our system has the form $S = K^*K + \lambda L^\alpha$, which is a huge and dense matrix. The first matrix, $K^*K$, comes from the blurring operator, while the second comes from the TFOV regularization model. To overcome this difficulty, we propose two preconditioners based on circulant and standard TV matrices. In our algorithm, we use the flexible preconditioned GMRES method for the outer iterations, the preconditioned conjugate gradient (PCG) method for the inner iterations, and the fixed point iteration (FPI) method to handle the nonlinearity. The numerical results show fast convergence with the proposed preconditioners.

## 1. Introduction

Although TV regularization is a commonly employed technique in image deblurring problems [1,2,3,4], one significant drawback is the appearance of the “staircase effect”, wherein edges are depicted as a sequence of steps rather than smooth transitions. This phenomenon arises because TV regularization encourages the creation of piecewise constant regions, which leads to the appearance of blocks around edges rather than accurately capturing their continuous nature. As a result, researchers are actively investigating alternative regularization techniques and algorithms to remove or reduce these staircase effects and enhance the overall quality of image deblurring methods. One alternative regularization approach is the TFOV model [5,6,7,8]. TFOV regularization presents a robust method for enhancing image deblurring, offering a combination of benefits such as edge preservation, flexibility, noise resilience, and reduction in staircase effects. Its effectiveness has been substantiated in numerous studies, significantly contributing to the progression of image deblurring techniques. However, discretizing the Euler–Lagrange equations of the TFOV-based model produces a substantial nonlinear and ill-conditioned system. Efficiently solving such systems poses a considerable challenge, even for potent numerical algorithms like Krylov subspace methods, such as the generalized minimal residual (GMRES) and conjugate gradient (CG) methods, which tend to exhibit slow convergence in this context. One potential remedy for this slow convergence is the use of preconditioning techniques. Preconditioning transforms a linear system of the form $Ax = b$ into another system with better spectral properties. A preconditioner is a matrix $P$ that is easy to invert and such that the preconditioned matrix $P^{-1}A$ shows good clustering behavior of its eigenvalues.
This is because rapid convergence is often associated with a clustered spectrum of $P^{-1}A$. In the preconditioning technique, we solve $P^{-1}Ax = P^{-1}b$ instead of the original $Ax = b$ because the new system converges rapidly when a suitable preconditioner is used. To apply the preconditioner $P$ within a Krylov subspace method, we must solve a system of the form $Pz = r$ at each iteration; hence, this solve must be cheap. In the literature, several preconditioners are developed in [9] for special linear systems, such as block preconditioners and constraint preconditioners. For diagonal preconditioners, we can refer to Silvester and Wathen [10] and Wathen and Silvester [11]. For block triangular preconditioners, we can refer to Bramble and Pasciak [12] and [13,14,15,16], as well as the references therein. For constraint preconditioners, see, for example, Axelsson and Neytcheva [17]. Other preconditioners based on Hermitian/skew-Hermitian splitting are studied in [18,19,20]. Recently, several new preconditioners for Krylov subspace methods have been introduced. For example, Cao et al. [21] derived two block triangular Schur complement preconditioners from a splitting of the $(1,1)$-block of the two-by-two block matrix. Chen and Ma [22] proposed a generalized shift-splitting preconditioner for saddle point problems with a symmetric positive definite $(1,1)$-block. Salkuyeh et al. [23] proposed a modified generalized shift-splitting preconditioner where the $(1,1)$-block is symmetric positive definite and the $(2,2)$-block is symmetric positive semidefinite (not zero). Very recently, block diagonal and block triangular splitting preconditioners were studied by Beik et al. [24]; the authors introduced new variants of the splitting preconditioners and obtained new results for the convergence of the associated stationary iterations and new bounds for the eigenvalues of the corresponding preconditioned matrices. Moreover, they considered inexact versions as preconditioners for flexible Krylov subspace methods. A good survey of preconditioning techniques for general linear systems can be found in [9,25,26].
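The effect described above can be seen even with the simplest choice of $P$. The following sketch is a toy illustration only, not one of the preconditioners proposed in this paper: the matrix $A$ and the diagonal (Jacobi) choice of $P$ are our own assumptions, chosen to show how $P^{-1}A$ becomes far better conditioned than $A$.

```python
import numpy as np

# Toy illustration of preconditioning: P is easy to invert (diagonal), and
# P^{-1}A is far better conditioned than A, so Krylov methods converge faster.
n = 50
A = np.diag(np.logspace(0, 6, n))   # widely spread diagonal: cond(A) ~ 1e6
A += 0.1 * np.ones((n, n))          # weak coupling between the unknowns
P = np.diag(np.diag(A))             # Jacobi (diagonal) preconditioner
PA = np.linalg.solve(P, A)          # P^{-1} A, formed explicitly only for inspection

print(f"cond(A)      = {np.linalg.cond(A):.2e}")
print(f"cond(P^-1 A) = {np.linalg.cond(PA):.2e}")
```

In practice $P^{-1}A$ is never formed explicitly; one only applies $P^{-1}$ to vectors inside the Krylov iteration.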
In our paper, we consider the following two-by-two block nonlinear system of equations:
$\underbrace{\begin{bmatrix} I_n & K_h \\ -K_h^* & \lambda L_h^\alpha(U_h) \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} V_h \\ U_h \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} Z_h \\ 0 \end{bmatrix}}_{b}.$ (1)
This system is obtained by discretizing the Euler–Lagrange equations associated with TFOV in image deblurring problems. The coefficient matrix of this system has size $2n \times 2n$, where $n := N^2$ and $N$ is the number of pixels in each direction of the (square) image. The coefficient matrix is non-symmetric, ill conditioned, dense, and huge. These properties complicate the development of an efficient numerical algorithm. Using direct methods to solve Equation (1) requires $O(n^3)$ operations and, hence, they are not applicable here. Iterative methods, like Krylov subspace methods, are applicable to this system, but their convergence is slow because they are sensitive to the condition number. Hence, preconditioning is needed to accelerate the convergence of the Krylov subspace methods. In this paper, we propose two block triangular preconditioners for Equation (1). In the literature, it has been shown that such preconditioners are among the most effective for solving problems of the saddle point type. Moreover, it is known that using the exact triangular preconditioner leads to a preconditioned matrix with exactly two distinct eigenvalues [25], which means that at most two iterations are needed to converge to the exact solution. Since the coefficient matrix $A$ is not symmetric, the suitable outer iterative method is the GMRES method [27], and since the $(2,2)$-block of $A$ is symmetric positive definite, the suitable inner iterative method is the CG method. However, using one Krylov subspace method (here, CG) inside the preconditioner of another may lead to a preconditioner that changes from step to step. For this reason, we use flexible GMRES (FGMRES) instead of GMRES [27]. The flexibility here means that FGMRES is designed to accommodate a varying preconditioner and different choices of the inner Krylov subspace method.
This flexibility allows FGMRES to adapt to different inner Krylov subspace methods. FGMRES can also be restarted after a certain number of iterations to control the memory usage and computational cost, especially when solving multiple linear systems with different right-hand sides. The main contributions of this work are as follows:
• We propose two block triangular preconditioners and study the bounds of the eigenvalues of the preconditioned matrices. In addition, we demonstrate the effectiveness of our algorithm in the numerical results by starting with the fixed point iteration (FPI) method, as in [28], to linearize the nonlinear primal system $\left(K^T K + \lambda L_h^\alpha(U^m)\right) U^{m+1} = K^T Z$, $m = 0, 1, \ldots$; then, we use the preconditioned conjugate gradient (PCG) method [29] for the inner iterations and the FGMRES method for the outer iterations. We illustrate the performance of our approach by reporting the peak signal-to-noise ratio (PSNR), CPU time, residuals, and the number of iterations. Finally, we calculate the PSNR for different values of the order of the fractional derivative, $\alpha$, to show the impact of using the TFOV model.
The remainder of this paper is organized as follows: Section 2 presents the mathematical model of the image deblurring problem, different regularization models, three definitions of the fractional derivative, and the Euler–Lagrange equations associated with the TFOV model. System (1) is obtained at the end of this section. Section 3 presents all theoretical contributions of this paper. Section 4 reports some numerical results that show the efficiency of our preconditioners. Section 5 briefly states our conclusions.

## 2. Problem Setup

We know that blurring and noise affect the quality of the received image. To deblur an image, we need a mathematical model of how it was blurred. The recorded image z and the true (exact) image $u$ are related by the equation
$z = Ku + \varepsilon,$ (2)
where $K$ denotes the following blurring operator:
$(Ku)(x) = \int_{\Omega} k(x, x')\, u(x')\, dx', \quad x \in \Omega,$ (3)
with a translation-invariant kernel, $k(x, x') = k(x - x')$, known as the point spread function (PSF); $\varepsilon$ is the additive noise function, and $\Omega$ denotes a square in $\mathbb{R}^2$ on which the image intensity is defined. When $K$ is the identity operator, problem (2) becomes image denoising. In this paper, we focus on the deblurring problem. The PSF must be known; if it is unknown, another technique, named blind deconvolution, can be used [30]. The operator $K$ is compact, so problem (2) is ill-posed [31], and the matrix systems resulting from the discretization of this problem are highly ill conditioned. In this case, directly solving the problem is difficult. The most popular approach to obtain a well-posed problem is to add a regularization term. Different regularization terms are used in the literature, for example:
• Tikhonov regularization [32] is used to stabilize problem (2) and is also called penalized least squares. In this case, the problem is to find a $u$ that minimizes the functional
$F(u) = \frac{1}{2}\|Ku - z\|^2 + \lambda J(u),$ (4)
with a small positive parameter $\lambda$, called the regularization parameter, that controls the trade-off between the data fitting term (the first term) and the regularization term (the second term). $\|\cdot\|$ denotes the norm in $L^2(\Omega)$. The functional $J$ has to be known. Common choices for the functional $J$ are
$J(u) = \int_{\Omega} u^2\, dx,$ (5)
The above functional gives what is known as Tikhonov regularization with the identity. Another common choice is
$J(u) = \int_{\Omega} |\nabla u|^2\, dx,$ (6)
where $|\cdot|$ denotes the Euclidean norm and $\nabla = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}\right)$. When $u$ is discontinuous, the functional in (5) often induces either oscillations or ringing, while the functional in (6) requires $u$ to be a smooth function. Although this model is easy to use and simple to compute, it cannot preserve image edges. Hence, both of the above choices are unsuitable for image processing applications where we need to recover sharp contrasts modeled by discontinuities in $u$ [28].
• Total variation (TV): One of the most commonly used regularization models is TV. It was first introduced in edge-preserving image denoising by Rudin, Osher, and Fatemi (ROF) [33] and has been improved in recent years for image denoising, deblurring, inpainting, blind deconvolution, and processing [1,2,3,4,34,35,36,37,38,39]. When using the TV model, the problem is to find a $u$ that minimizes the functional
$F(u) = \frac{1}{2}\|Ku - z\|^2 + \lambda J_{TV}(u),$ (7)
where
$J_{TV}(u) = \int_{\Omega} |\nabla u|\, dx.$ (8)
Note that we do not require the continuity of $u$; hence, (8) is a good regularization for image processing. However, the Euclidean norm $|\nabla u|$ is not differentiable at zero. A common modification is to add a small positive parameter $\beta$, which results in the modified functional:
$J_{TV}^{\beta}(u) = \int_{\Omega} \sqrt{|\nabla u|^2 + \beta^2}\, dx.$ (9)
The well-posedness of the minimization problem (7) with the functional given in (9) is studied and analyzed in the literature, such as in [1]. The success of TV regularization lies in the balance it strikes between the ability to describe piecewise smooth images and the complexity of the resulting algorithms. Moreover, TV regularization performs very well at removing noise/blur while preserving edges. Despite these good properties, TV regularization favors a piecewise constant solution in the bounded variation (BV) space, which often leads to the staircase effect. Thus, staircasing remains one of the drawbacks of TV regularization. To remove the staircase effects, two modifications of the TV regularization model have been used in the literature. The first approach is to increase the order of the derivatives in the TV regularization term, using, for example, the mean curvature or a nonlinear combination of the first and second derivatives [40,41,42,43,44,45]. These modifications remove/reduce the staircase effects and are effective, but they are computationally expensive due to the increased order of the derivatives or the nonlinear terms. The second approach is to use fractional-order derivatives in the TV regularization term, as shown in [46,47].
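The smoothed functional (9) is straightforward to evaluate on a discrete image. The sketch below is our own minimal discretization (one-sided differences with a replicated last row/column, grid spacing `h`, and the test images are assumptions); it also illustrates that the functional is bounded below by $\beta\,|\Omega|$, with equality for constant images:

```python
import numpy as np

def jtv_beta(u, beta, h=1.0):
    """Discrete smoothed TV functional: sum over pixels of
    sqrt(|grad u|^2 + beta^2) * h^2, with forward differences."""
    ux = np.diff(u, axis=1, append=u[:, -1:]) / h   # replicate last column
    uy = np.diff(u, axis=0, append=u[-1:, :]) / h   # replicate last row
    return float(np.sum(np.sqrt(ux**2 + uy**2 + beta**2)) * h**2)

flat = np.ones((8, 8))                         # constant image: gradient is zero
noisy = np.random.default_rng(0).random((8, 8))
print(jtv_beta(flat, 0.1))    # beta * (number of pixels) = 6.4
print(jtv_beta(noisy, 0.1))   # strictly larger: any gradient increases J
```

Since $\sqrt{|\nabla u|^2 + \beta^2} \ge \beta$ pointwise, the constant image attains the minimum possible value of the functional.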

#### 2.1. Fractional-Order Derivative in Image Deblurring

The most important advantage of using fractional differential equations is their nonlocal property. The integer order differential operator is a local operator but the fractional order differential operator is nonlocal. This means that the next state of a system depends not only on its current state but also upon all of its historical states. This is more realistic and it is one reason why fractional calculus has become more and more popular.
In image deblurring problems, the blurring is considered nonlocal in some cases and local in others, depending on the cause of the blur. For example, if a body is moving while the background is stationary, the blur is local; if the camera is moving, the blur is nonlocal. The blurring operator is a convolution operator that depends on the definition of the kernel function. In the case of a moving camera, the blurring operator involves every pixel in the image, which means that the blurring process is nonlocal. Since the blurring is nonlocal in this case, it is appropriate to choose a regularization operator with the same nonlocal property. This property is available in operators that contain fractional derivatives. Comparative studies have shown that fractional-order differentials are more reliable than integer-order differentials for enhancing edges in image processing. A similar trend has been observed for the texture- and area-preserving properties. Therefore, images processed by fractional differentials are clearer and have higher contrast [48]. This approach is widely used in image processing [5,6,7,8,49,50]. These works have shown that the fractional-order derivative performs well in achieving a satisfactory compromise between avoiding staircasing and preserving important fine-scale features such as edges and textures. In this paper, we compare the results of the usual TV model with the TFOV model for two image deblurring problems. From Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, we can see that the TFOV model shows better edge enhancement than TV in some regions, where we observe that the PSNR at $\alpha = 1$ is lower than the PSNR at $\alpha > 1$.
Example 1.
We used the exact Golden House image plotted in Figure 9, the deblurred Golden House image using the TV model plotted in Figure 27, and the deblurred Golden House image using the TFOV model plotted in Figure 29. We took a vertical line almost in the middle of these images and plotted the resulting cross sections in Figure 1, in which we highlighted three corners (boxes). From Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6, we can clearly see that the TFOV-based deblurring results are smoother than the TV-based ones. Additionally, at each corner, TV-based deblurring produces a larger error than TFOV. This shows that TFOV-based image deblurring is better and edge-preserving.
The numerical results in the above examples reflect good performance and motivate us to use the TFOV model in our preconditioners.

#### 2.2. The TFOV-Model

Let $BV^\alpha(\Omega)$ denote the space of functions of $\alpha$-bounded variation on $\Omega$, defined by
$BV^\alpha(\Omega) := \left\{ u \in L^1(\Omega) \,\middle|\, TV^\alpha(u) < +\infty \right\},$
with the $BV^\alpha$ norm $\|u\|_{BV^\alpha} = \|u\|_{L^1} + \int_{\Omega} |\nabla^\alpha u|\, dx$. The parameter $\alpha$ represents the order of the fractional derivatives, and the fractional-order total variation of $u$, $TV^\alpha$, is defined by
$TV^\alpha(u) = \int_{\Omega} |\nabla^\alpha u|\, dx := \sup_{\phi \in \mathcal{T}} \int_{\Omega} -u\, \mathrm{div}^\alpha \phi\, dx,$
and $\mathrm{div}^\alpha \phi = \frac{\partial^\alpha \phi_1}{\partial x^\alpha} + \frac{\partial^\alpha \phi_2}{\partial y^\alpha}$, where $\frac{\partial^\alpha \phi_1}{\partial x^\alpha}$ and $\frac{\partial^\alpha \phi_2}{\partial y^\alpha}$ denote the fractional-order derivatives along the $x$ and $y$ directions, respectively. The space $\mathcal{T}$ denotes the space of special test functions
$\mathcal{T} := \left\{ \phi \in C_0^{\ell}(\Omega, \mathbb{R}^2) \,\middle|\, |\phi(x)| \le 1, \ \forall x \in \Omega \right\},$
where $|\phi(x)| = \sqrt{\sum_{i=1}^{2} \phi_i^2}$ and $C^{\ell}(\Omega, \mathbb{R}^2)$ denotes the space of $\ell$-times continuously differentiable functions. Hence, when the TFOV model is used, the problem is to find a $u \in BV^\alpha(\Omega) \cap L^2(\Omega)$ that minimizes the functional
$F^\alpha(u) = \frac{1}{2}\|Ku - z\|^2 + \lambda J_{TV^\alpha}^{\beta}(u),$ (10)
where $J_{TV^\alpha}^{\beta}$ is called the modified total fractional-order variation and is defined by
$J_{TV^\alpha}^{\beta}(u) = \int_{\Omega} \sqrt{|\nabla^\alpha u|^2 + \beta^2}\, dx,$ (11)
where $|\nabla^\alpha u|^2 = (D_x^\alpha u)^2 + (D_y^\alpha u)^2$ and $D_x^\alpha, D_y^\alpha$ are the fractional derivative operators along the $x$ and $y$ directions, respectively. The existence and uniqueness of a minimizer of the above problem (10) with the functional (11) are studied and analyzed in the literature [8,51].

#### 2.3. Fractional-Order Derivatives

Several definitions have been proposed for fractional-order derivatives [52,53,54]. We shall present some of them below. For a systematic presentation, a fractional-order derivative is denoted by the operator $D^\alpha_{[a,x]}$, where $a$ and $x$ are the bounds of the integral and $\alpha$ is the order of the fractional derivative, with $0 < n-1 < \alpha < n$, where $n = [\alpha] + 1$ and $[\cdot]$ is the greatest integer function.
• Riemann–Liouville (RL) definitions: The left- and right-sided RL derivatives of order $α$ of a function $f ( x )$ are given as follows:
$D^\alpha_{[a,x]} f(x) = \frac{1}{\Gamma(n-\alpha)} \left( \frac{d}{dx} \right)^{n} \int_a^x (x-t)^{n-\alpha-1} f(t)\, dt$
and
$D^\alpha_{[x,b]} f(x) = \frac{(-1)^n}{\Gamma(n-\alpha)} \left( \frac{d}{dx} \right)^{n} \int_x^b (t-x)^{n-\alpha-1} f(t)\, dt,$
where $Γ ( · )$ is the gamma function, defined by
$\Gamma(z) = \int_0^{\infty} e^{-t} t^{z-1}\, dt.$
• Grünwald–Letnikov (GL) definitions: The left- and right-sided GL derivatives are defined by
${}^{G}D^\alpha_{[a,x]} f(x) = \lim_{h \to 0} \frac{1}{h^\alpha} \sum_{j=0}^{\left[\frac{x-a}{h}\right]} (-1)^j \binom{\alpha}{j} f(x - jh)$
and
${}^{G}D^\alpha_{[x,b]} f(x) = \lim_{h \to 0} \frac{1}{h^\alpha} \sum_{j=0}^{\left[\frac{b-x}{h}\right]} (-1)^j \binom{\alpha}{j} f(x + jh),$
where
$\binom{\alpha}{j} = \frac{\alpha(\alpha-1)\cdots(\alpha-j+1)}{j!}.$
• Caputo (C) definitions: The left- and right-sided Caputo derivatives are defined by
${}^{C}D^\alpha_{[a,x]} f(x) = \frac{1}{\Gamma(n-\alpha)} \int_a^x (x-t)^{n-\alpha-1} f^{(n)}(t)\, dt$
and
${}^{C}D^\alpha_{[x,b]} f(x) = \frac{(-1)^n}{\Gamma(n-\alpha)} \int_x^b (t-x)^{n-\alpha-1} f^{(n)}(t)\, dt,$
where $f^{(n)}$ denotes the $n$th-order derivative of the function $f(x)$.
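The GL coefficients $(-1)^j\binom{\alpha}{j}$ can be generated without factorials via the recurrence $\omega_j^\alpha = \left(1 - \frac{1+\alpha}{j}\right)\omega_{j-1}^\alpha$, which reappears in the discretization of Section 2.5. A small sketch (the function names are ours):

```python
import math

def gl_weights(alpha, m):
    """First m+1 GL coefficients w_j = (-1)^j * C(alpha, j), computed with the
    stable recurrence w_0 = 1, w_j = (1 - (1 + alpha) / j) * w_{j-1}."""
    w = [1.0]
    for j in range(1, m + 1):
        w.append((1.0 - (1.0 + alpha) / j) * w[-1])
    return w

def gl_weight_direct(alpha, j):
    """Direct form (-1)^j * alpha (alpha-1) ... (alpha-j+1) / j!, for cross-checking."""
    num = 1.0
    for i in range(j):
        num *= alpha - i
    return (-1) ** j * num / math.factorial(j)

print(gl_weights(2, 4))   # integer order recovers the usual stencil [1, -2, 1, 0, 0]
```

For $1 < \alpha < 2$, only $\omega_1^\alpha = -\alpha$ is negative, which is the sign pattern used later to prove definiteness of the discretized operator.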

#### 2.4. Euler-Lagrange Equations

In this subsection, we present the Euler–Lagrange equations associated with the TFOV in the image deblurring problem.
Theorem 1.
If $α ∈ ( 1 , 2 )$, the Euler-Lagrange equations for the functional given in (10) are:
$K^*(Ku - z) + \lambda L^\alpha(u)u = 0 \quad \text{in } \Omega, \qquad D^{\alpha-2}\!\left( \frac{\nabla^\alpha u}{\sqrt{|\nabla^\alpha u|^2 + \beta^2}} \right) \cdot \vec{n} = 0, \quad D^{\alpha-1}\!\left( \frac{\nabla^\alpha u}{\sqrt{|\nabla^\alpha u|^2 + \beta^2}} \right) \cdot \vec{n} = 0 \quad \text{on } \partial\Omega,$ (19)
where $K^*$ is the adjoint operator of the integral operator $K$ and the nonlinear differential operator $L^\alpha(u)$ is given by:
$L^\alpha(u)w = (-1)^n \nabla^\alpha \cdot \left( \frac{\nabla^\alpha w}{\sqrt{|\nabla^\alpha u|^2 + \beta^2}} \right).$
Proof.
Let $\nu \in W_1^\alpha(\Omega)$ be a function. Then, for $u \in W_1^\alpha(\Omega) \subset BV^\alpha(\Omega)$, the first-order Gâteaux derivative of the functional $F^\alpha(u)$ of (10) in the direction of $\nu$ is
$\frac{\partial F^\alpha(u)}{\partial \nu} = \lim_{t \to 0} \frac{F^\alpha(u + t\nu) - F^\alpha(u)}{t} = \lim_{t \to 0} \frac{G_1(u + t\nu) - G_1(u)}{t} + \lim_{t \to 0} \frac{G_2(u + t\nu) - G_2(u)}{t},$
where $G_1(u) = \frac{1}{2}\int_{\Omega} (Ku - z)^2\, dx$ and $G_2(u) = \lambda J_{TV^\alpha}^{\beta}(u)$. By using the Taylor series in the direction of $t$, we have
$\frac{\partial F^\alpha(u)}{\partial \nu} = \int_{\Omega} K^*(Ku - z)\,\nu\, dx + \int_{\Omega} (W \cdot \nabla^\alpha \nu)\, dx,$
where $W = \frac{\lambda \nabla^\alpha u}{\sqrt{|\nabla^\alpha u|^2 + \beta^2}}$. Now consider
$\int_{\Omega} (W \cdot \nabla^\alpha \nu)\, dx = (-1)^n \int_{\Omega} \left(\nu\, {}^{C}\mathrm{div}^\alpha W\right) dx - \sum_{j=0}^{n-1} (-1)^j \int_0^1 \left[ D^{\alpha+j-n}_{[a,b]} W_1\, \frac{\partial^{n-j-1}\nu(x)}{\partial x_1^{n-j-1}} \right]_{x_1=0}^{x_1=1} dx_2 - \sum_{j=0}^{n-1} (-1)^j \int_0^1 \left[ D^{\alpha+j-n}_{[a,b]} W_2\, \frac{\partial^{n-j-1}\nu(x)}{\partial x_2^{n-j-1}} \right]_{x_2=0}^{x_2=1} dx_1,$
where we know that $n = 2$ for $1 < α < 2$.
Case-I: If $u(x)|_{\partial\Omega} = b_1(x)$ and $\frac{\partial u(x)}{\partial n}\big|_{\partial\Omega} = b_2(x)$, then $(u + t\nu)(x)|_{\partial\Omega} = b_1(x)$ and $\frac{\partial (u + t\nu)(x)}{\partial n}\big|_{\partial\Omega} = b_2(x)$. It then suffices to take $\nu \in C_0^1(\Omega, \mathbb{R})$, which implies
$\frac{\partial^i \nu(x)}{\partial n^i}\Big|_{\partial\Omega} = 0, \quad i = 0, 1, \quad \Rightarrow \quad \frac{\partial^{n-j-1}\nu(x)}{\partial x_1^{n-j-1}}\Big|_{x_1=0,1} = \frac{\partial^{n-j-1}\nu(x)}{\partial x_2^{n-j-1}}\Big|_{x_2=0,1} = 0, \quad n-j-1 = 0, 1.$
Hence (22) reduces to (19).
Case-II: If $\nu \in W_1^\alpha(\Omega)$, then
$\frac{\partial^{n-j-1}\nu(x)}{\partial x_1^{n-j-1}}\Big|_{x_1=0,1} \ne 0, \quad \frac{\partial^{n-j-1}\nu(x)}{\partial x_2^{n-j-1}}\Big|_{x_2=0,1} \ne 0, \quad n-j-1 = 0, 1.$
So the boundary terms in (23) can only vanish if
$D^{\alpha+j-n}_{[a,b]} W_1\Big|_{x_1=0,1} = D^{\alpha+j-n}_{[a,b]} W_2\Big|_{x_2=0,1} = 0 \quad \Rightarrow \quad D^{\alpha+j-n} W \cdot \vec{n} = 0, \quad j = 0, 1.$
This completes the proof. □
Note that (19) is a nonlinear integro-differential equation of elliptic type. Equation (19) can be expressed as a nonlinear first order system [55]:
$K^* K u + \lambda \nabla^\alpha \cdot \vec{v} = K^* z,$ (24)
$-\nabla^\alpha u + \sqrt{|\nabla^\alpha u|^2 + \beta}\; \vec{v} = \vec{0},$ (25)
with the dual, or flux, variable
$\vec{v} = \frac{\nabla^\alpha u}{\sqrt{|\nabla^\alpha u|^2 + \beta}}.$ (26)
We apply the Galerkin method to (24) and (25) together with the midpoint quadrature for the integral term and cell-centered finite difference method for the derivative part.

#### 2.5. Discretization of the Fractional Derivative

First, we divide the square domain $\Omega = (0,1) \times (0,1)$ into $N^2$ equal squares (cells), where $N$ denotes the number of equispaced partitions in the $x$ or $y$ direction. Then, we follow the same discretization as in [8,51]. We define a spatial partition $(x_k, y_l)$ (for all $k, l = 0, 1, \ldots, N+1$) of the image domain $\Omega$. Assuming $u$ has a zero Dirichlet boundary condition, we consider the discretization of the $\alpha$-order fractional derivative at the inner points $(x_k, y_l)$ (for all $k, l = 0, 1, \ldots, N$) on $\Omega$ along the $x$-direction by using the shifted Grünwald approximation approach [56,57]:
$D^\alpha f(x_k, y_l) = \frac{\delta_0^\alpha f(x_k, y_l)}{h^\alpha} + O(h) = \frac{1}{2}\left( \frac{\delta_-^\alpha f(x_k, y_l)}{h^\alpha} + \frac{\delta_+^\alpha f(x_k, y_l)}{h^\alpha} \right) + O(h) = \frac{1}{2h^\alpha}\left( \sum_{j=0}^{k+1} \omega_j^\alpha f_{k-j+1}^{l} + \sum_{j=0}^{N-k+2} \omega_j^\alpha f_{k+j-1}^{l} \right) + O(h),$ (27)
where $f_s^l = f_{s,l}$ and $\omega_j^\alpha = (-1)^j \binom{\alpha}{j}$, $j = 0, 1, \ldots, N$, with $\omega_0^\alpha = 1$ and $\omega_j^\alpha = \left(1 - \frac{1+\alpha}{j}\right)\omega_{j-1}^\alpha$ for $j > 0$. Observe from (27) that the first-order estimate of the $\alpha$-order fractional derivative $D^\alpha_{[a,b]} f(x_k, y_l)$ along the $x$-direction at the point $(x_k, y_l)$ with a fixed $y_l$ is a linear combination of the $N+2$ values $f_0^l, f_1^l, \ldots, f_N^l, f_{N+1}^l$. After incorporating the zero boundary condition into the matrix approximation of the fractional derivative, all $N$ equations of fractional derivatives along the $x$-direction in (27) can be written simultaneously in the matrix form
$\begin{bmatrix} \delta_0^\alpha f(x_1, y_l) \\ \delta_0^\alpha f(x_2, y_l) \\ \vdots \\ \delta_0^\alpha f(x_N, y_l) \end{bmatrix} = \frac{1}{2h^\alpha} \underbrace{\begin{bmatrix} 2\omega_1^\alpha & \omega_0^\alpha + \omega_2^\alpha & \omega_3^\alpha & \cdots & \omega_N^\alpha \\ \omega_0^\alpha + \omega_2^\alpha & 2\omega_1^\alpha & \ddots & \ddots & \vdots \\ \omega_3^\alpha & \ddots & \ddots & \ddots & \omega_3^\alpha \\ \vdots & \ddots & \ddots & 2\omega_1^\alpha & \omega_0^\alpha + \omega_2^\alpha \\ \omega_N^\alpha & \cdots & \omega_3^\alpha & \omega_0^\alpha + \omega_2^\alpha & 2\omega_1^\alpha \end{bmatrix}}_{B_\alpha^N} \begin{bmatrix} f_1^l \\ f_2^l \\ \vdots \\ f_N^l \end{bmatrix}.$
From the definition of the fractional-order derivative (27), for any $1 < \alpha < 2$, the coefficients $\omega_k^\alpha$ have the following properties:
(1)
$\omega_0^\alpha = 1, \quad \omega_1^\alpha = -\alpha < 0, \quad 1 \ge \omega_2^\alpha \ge \omega_3^\alpha \ge \cdots \ge 0;$
(2)
$\sum_{k=0}^{\infty} \omega_k^\alpha = 0, \quad \sum_{k=0}^{m} \omega_k^\alpha \le 0 \ (m \ge 1).$
Hence, by the Gershgorin circle theorem, we can derive that the matrix $B_\alpha^N$ is a symmetric and negative definite Toeplitz matrix (i.e., $-B_\alpha^N$ is a positive definite Toeplitz matrix). Let $U \in \mathbb{R}^{N \times N}$ denote the solution matrix at all nodes $(kh_x, lh_y)$, $k, l = 1, \ldots, N$, corresponding to the $x$-direction and $y$-direction spatial discretization nodes. Denote by $\vec{u} \in \mathbb{R}^{N^2 \times 1}$ the ordered solution vector of $U$. The direct and discrete analogue of differentiation of arbitrary $\alpha$-order is
$u_x^\alpha = (I_N \otimes B_\alpha^N)\vec{u} = B_x^\alpha \vec{u}.$
Similarly, all values of the $\alpha$-th order $y$-direction derivative of $u(x, y)$ at these nodes are approximated by
$u_y^\alpha = (B_\alpha^N \otimes I_N)\vec{u} = B_y^\alpha \vec{u},$
where
$u_x^\alpha = (u_{11}^\alpha, \ldots, u_{N1}^\alpha, u_{12}^\alpha, \ldots, u_{NN}^\alpha)^T, \quad u_y^\alpha = (u_{11}^\alpha, \ldots, u_{1N}^\alpha, u_{21}^\alpha, \ldots, u_{NN}^\alpha)^T,$
$\vec{u} = (u_{11}, u_{12}, \ldots, u_{NN})$, and $\otimes$ denotes the Kronecker product. For more details on the discretization, we refer to [54,58]. Now, using the cell-centered finite difference method (CCFDM), the fractional discretization shown above, and the fact that $[(-1)^n \nabla^\alpha \cdot\,]$ is the adjoint of the operator $\nabla^\alpha$, (24) and (25) lead to the following system:
$V + K_h U = Z, \qquad K_h^* V - \lambda L_h^\alpha(U^m)\, U^{m+1} = 0, \qquad m = 0, 1, 2, \ldots, N_F,$
where $N_F$ is the number of fixed-point iterations used to linearize the nonlinear term in the square root in (26). The matrix $K_h$ is obtained from using the midpoint quadrature for the integral operator as follows:
$(Ku)(x_i, y_j) \approx [K_h U]_{ij}, \quad i, j = 1, 2, \ldots, N,$
with entries $[K_h]_{ij,lm} = h^2\, k(x_i - x_j, y_l - y_m)$. Using the lexicographical order, $K_h$ is a block Toeplitz with Toeplitz blocks (BTTB) matrix. The need for the BTTB property will be discussed later in the paper. The discrete scheme of the matrix $L_h^\alpha U$ is given by:
$L^\alpha(U^m)\, U^{m+1} = \left[ B_N(D_1(U^m)) \circ (B_N U^{m+1}) \right] + \left[ D_2(U^m) \circ (U^{m+1} B_M) \right] B_N,$
where $\circ$ is the pointwise (Hadamard) multiplication and $m$ is the $m$-th fixed-point iteration. $U$ is an $N \times N$ reshaped matrix of the vector $u$, and the matrices $D_1(U^m)$ and $D_2(U^m)$ are the diagonals of the Hadamard inverses of the non-zero matrices $B_x^\alpha(U^m)$ and $B_y^\alpha(U^m)$, respectively.
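The matrix $B_\alpha^N$ and the two-dimensional operators $B_x^\alpha = I_N \otimes B_\alpha^N$ and $B_y^\alpha = B_\alpha^N \otimes I_N$ can be assembled directly from the shifted-Grünwald weights $\omega_j^\alpha$. A minimal sketch (the scaling $1/(2h^\alpha)$ is omitted, and the helper names are ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def gl_weights(alpha, m):
    # w_j = (-1)^j C(alpha, j) via the recurrence w_j = (1 - (1+alpha)/j) w_{j-1}.
    w = np.empty(m + 1)
    w[0] = 1.0
    for j in range(1, m + 1):
        w[j] = (1.0 - (1.0 + alpha) / j) * w[j - 1]
    return w

def frac_matrix(alpha, N):
    """Unscaled B_alpha^N: symmetric Toeplitz with diagonal 2 w_1,
    first off-diagonal w_0 + w_2, and k-th off-diagonal w_{k+1} for k >= 2."""
    w = gl_weights(alpha, N)
    col = np.empty(N)
    col[0] = 2.0 * w[1]
    col[1] = w[0] + w[2]
    col[2:] = w[3:N + 1]
    return toeplitz(col)

alpha, N = 1.5, 16
B = frac_matrix(alpha, N)
Bx = np.kron(np.eye(N), B)   # alpha-order derivative along the x-direction
By = np.kron(B, np.eye(N))   # alpha-order derivative along the y-direction
```

A quick numerical check confirms the properties derived from the Gershgorin circle theorem: `B` is symmetric and all of its eigenvalues are negative, and the same holds for the Kronecker-product operators.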

#### 2.6. Difficulties in TFOV-Model Compared to TV-Model

In this subsection, we compare the TFOV-system (1):
$\underbrace{\begin{bmatrix} I_n & K_h \\ -K_h^* & \lambda L_h^\alpha(U_h) \end{bmatrix}}_{A_\alpha} \underbrace{\begin{bmatrix} V_h \\ U_h \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} Z_h \\ 0 \end{bmatrix}}_{b},$
and the following TV-system:
$\underbrace{\begin{bmatrix} I_n & K_h \\ -K_h^* & \lambda L_h(U_h) \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} V_h \\ U_h \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} Z_h \\ 0 \end{bmatrix}}_{b}.$ (34)
In the TFOV system (1), the fractional matrix $L_h^\alpha$ is obtained from discretizing a fractional differential operator, and it is dense. This density leads to an expensive matrix–vector multiplication, and the coefficient matrix $A_\alpha$ in system (1) contains three dense submatrices. In the TV system (34), the non-fractional matrix $L_h$ is obtained from discretizing a non-fractional differential operator, and it is sparse; the coefficient matrix $A$ in system (34) therefore contains only two dense submatrices. Further, the Schur complement matrix associated with (1) is a sum of two dense matrices, while the Schur complement matrix associated with (34) is a sum of one dense matrix and one sparse matrix.
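The difference in cost can be made concrete by counting stored entries of the one-dimensional factors. The matrices below are simple schematic stand-ins of our own choosing, not the actual $L_h$ and $L_h^\alpha$ of this paper:

```python
import numpy as np
import scipy.sparse as sp

N = 128  # grid size in one direction (illustrative)

# TV case: a non-fractional difference operator couples each node only to its
# neighbors, so its 1D factor is tridiagonal with O(N) nonzeros.
L_tv = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")

# Fractional case: the nonlocal operator couples every pair of nodes, so its
# 1D factor is a full Toeplitz matrix with N^2 stored entries.
dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
L_frac = 1.0 / (1.0 + dist.astype(float))   # dense, slowly decaying off-diagonals

print(L_tv.nnz, "nonzeros -> O(N) matvec")
print(L_frac.size, "entries  -> O(N^2) matvec")
```

This $O(N)$ versus $O(N^2)$ gap per one-dimensional factor is exactly why the circulant/FFT machinery of the next section is needed for the fractional case.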

## 3. Preconditioning Technique

In the literature, it has been shown that block triangular preconditioners are among the most effective preconditioners for solving saddle point problems. In this paper, we develop two block triangular preconditioners for solving (1). First, we present our main preconditioner matrix [12] and its inverse:
$P = \begin{bmatrix} I_n & K \\ 0 & -S \end{bmatrix}, \qquad P^{-1} = \begin{bmatrix} I_n & K S^{-1} \\ 0 & -S^{-1} \end{bmatrix},$
where $S = K^*K + \lambda L^\alpha$ is the Schur complement matrix. We notice that the Schur complement contains the product $K^*K$, which is not a BTTB matrix. A BTTB matrix–vector product can be computed in $O(N \log N)$ operations by using a BCCB (block circulant with circulant blocks) extension. Since this extension is not an easy task in some cases, the idea of using a circulant matrix as a preconditioner for a Toeplitz matrix is needed. This idea was first proposed by Strang [59] and Olkin [60] and extended by others to block Toeplitz systems, for example, Chan et al. [61]. Many researchers use Toeplitz preconditioners and block Toeplitz preconditioners for Toeplitz systems, for instance, Chan et al. [62] and Lin and Fu-Rong [63]. A band Toeplitz preconditioner and a band BTTB preconditioner are proposed by Chan and Raymond [64] and Serra and Stefano [65]. In Lin et al. [66], BTTB preconditioners for BTTB systems are discussed. Several kinds of circulant preconditioners have been proposed and proven to be good preconditioners; see, for instance, [59,62,67,68,69]. Therefore, the PCG method with circulant preconditioners converges very fast when used to solve Toeplitz systems. Motivated by these papers, we propose the following two block triangular preconditioners:
$P_1 = \begin{bmatrix} I_n & K \\ 0 & -S_1 \end{bmatrix}, \qquad P_2 = \begin{bmatrix} I_n & K \\ 0 & -S_2 \end{bmatrix},$
where $S_1 = (I + \lambda L_{TV})$ and $S_2 = (C^*C + \lambda L_{TV})$. Here, $I$ is the identity matrix (the denoising operator), which is itself a circulant matrix; $L_{TV}$ comes from discretizing the TV model ($\alpha = 1$) and is a sparse matrix; and $C$ is the Strang circulant approximation of the matrix $K$ [59]. These circulant approximations are very important because they allow us to use the FFT and the convolution theorem. We know that every circulant matrix can be diagonalized by the Fourier matrix; see [70]. Using the FFT and the convolution theorem reduces the cost of the computation from $O(N^2)$ to $O(N \log N)$. Moreover, all that is needed for the computation is the first column (or the first row) of the circulant matrix, which decreases the required storage. This reduction in computation and storage leads to efficient solvers for our problem (1).
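For a symmetric Toeplitz matrix, the Strang approximation simply copies the central diagonals and wraps them around, after which the approximation can be inverted with two FFTs. A minimal sketch of this building block (the helper names and the exponentially decaying test kernel are our own assumptions):

```python
import numpy as np
from scipy.linalg import circulant

def strang_first_col(t):
    """First column of the Strang circulant approximation of the symmetric
    Toeplitz matrix toeplitz(t): keep diagonals 0..n//2, wrap the rest."""
    n = len(t)
    c = np.empty(n)
    for k in range(n):
        c[k] = t[k] if k <= n // 2 else t[n - k]
    return c

def circulant_solve(c, b):
    # C = F^* diag(fft(c)) F, so C x = b is solved with two FFTs in O(n log n).
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 8
t = np.exp(-np.arange(n, dtype=float))   # decaying symmetric Toeplitz kernel
c = strang_first_col(t)
x = circulant_solve(c, np.ones(n))       # apply C^{-1} without ever forming C
print(np.allclose(circulant(c) @ x, np.ones(n)))   # True: FFT solve matches C
```

Only the first column `c` is stored, which is exactly the storage saving mentioned above; the dense `circulant(c)` is built here solely to verify the FFT-based solve.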

## 4. Preconditioned GMRES Algorithm

In this section, we give detailed algorithms for using our preconditioner $P$ ($P_1$ or $P_2$). In Algorithm 1, the preconditioned GMRES method is used to solve the linear system (1).
 Algorithm 1 Preconditioned GMRES Algorithm
 1: Choose $x_0$ as the initial guess
 2: Compute $\tilde{r}_0 = b - A x_0$
 3: Solve $P r_0 = \tilde{r}_0$
 4: Let $\beta_0 = \| r_0 \|$ and compute $v^{(1)} = r_0 / \beta_0$
 5: for $k = 1, 2, \ldots$ until $\beta_k < \tau \beta_0$ do
 6:  $\tilde{w}^{(k+1)} = A v^{(k)}$
 7:  Solve $P w^{(k+1)} = \tilde{w}^{(k+1)}$
 8:  for $l = 1$ to $k$ do
 9:   $h_{lk} = \langle w^{(k+1)}, v^{(l)} \rangle$
 10:   $w^{(k+1)} = w^{(k+1)} - h_{lk} v^{(l)}$
 11:  end for
 12:  $h_{k+1,k} = \| w^{(k+1)} \|$, $v^{(k+1)} = w^{(k+1)} / h_{k+1,k}$
 13:  Compute $y^{(k)}$ such that $\beta_k = \| \beta_0 e_1 - \hat{H}_k y^{(k)} \|$ is minimized, where $\hat{H}_k = [h_{ij}]_{1 \le i \le k+1,\, 1 \le j \le k}$ and $e_1 = (1, 0, \ldots, 0)^T$
 14: end for
 15: $x^{(k)} = x_0 + V_k y^{(k)}$
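Steps 6–12 of Algorithm 1 are the preconditioned Arnoldi process: modified Gram–Schmidt orthogonalization of $A v^{(k)}$ against the current Krylov basis. A minimal pure-Python sketch of this orthogonalization is given below; it omits the preconditioner solve and the least-squares step, and an arbitrary random matrix stands in for the (preconditioned) operator.

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arnoldi_step(matvec, V, H):
    """One pass of Steps 6-12: extend the orthonormal Krylov basis V by one
    vector using modified Gram-Schmidt, recording Hessenberg entries in H."""
    k = len(V)
    w = matvec(V[-1])                       # Step 6: apply the operator
    for l in range(k):                      # Steps 8-11: orthogonalize
        h = dot(w, V[l])
        H[(l, k - 1)] = h
        w = [wi - h * vi for wi, vi in zip(w, V[l])]
    hnext = dot(w, w) ** 0.5                # Step 12: normalize
    H[(k, k - 1)] = hnext
    V.append([wi / hnext for wi in w])

random.seed(2)
n = 5
Amat = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
mv = lambda v: [dot(row, v) for row in Amat]

b = [1.0] * n
beta0 = dot(b, b) ** 0.5
V, H = [[bi / beta0 for bi in b]], {}
for _ in range(3):
    arnoldi_step(mv, V, H)

# The generated basis vectors are orthonormal: V_k^T V_k = I
for i in range(len(V)):
    for j in range(len(V)):
        assert abs(dot(V[i], V[j]) - float(i == j)) < 1e-10
```

The Hessenberg entries collected in `H` are exactly the $\hat{H}_k$ used in the least-squares problem of Step 13.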
In Steps 3 and 7 of Algorithm 1, we need to solve a linear system of the form
$\underbrace{\begin{pmatrix} I_n & K \\ 0 & -S \end{pmatrix}}_{P} \underbrace{\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}}_{x} = \underbrace{\begin{pmatrix} b_1 \\ b_2 \end{pmatrix}}_{b},$
where $S = S_1$ or $S = S_2$. To perform these solves, we exploit the block upper triangular structure of $P$ and use the conjugate gradient method, as in Algorithms 2 and 3:
 Algorithm 2 $P_1$-Conjugate Gradient Method Algorithm
 1: $S_1 = P_1(n+1:2n,\, n+1:2n)$
 2: $b_2 = b(n+1:2n)$
 3: Solve for $x_2 = x(n+1:2n)$ in the system $-S_1 x_2 = b_2$ using the conjugate gradient method
 4: $x_1 = x(1:n) = b(1:n) - K x_2$
 Algorithm 3 $P_2$-Conjugate Gradient Method Algorithm
 1: $S_2 = P_2(n+1:2n,\, n+1:2n)$
 2: $b_2 = b(n+1:2n)$
 3: Solve for $x_2 = x(n+1:2n)$ in the system $-S_2 x_2 = b_2$ using the conjugate gradient method
 4: $x_1 = x(1:n) = b(1:n) - K x_2$
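Because $P$ is block upper triangular, applying $P^{-1}$ reduces to one CG solve with the (2,2) block followed by a matrix–vector product, as Algorithms 2 and 3 describe. The following pure-Python sketch uses small illustrative matrices in place of $K$ and $S_1$ (the values are arbitrary; $S_1 = I + \lambda L_{TV}$ is symmetric positive definite, so CG applies):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def cg(A, b, tol=1e-10, maxit=200):
    """Plain conjugate gradient for a symmetric positive definite A."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                  # r = b - A*0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def apply_precond(K, S, b1, b2):
    """Solve P [x1; x2] = [b1; b2] with P = [[I, K], [0, -S]]:
    back substitution gives -S x2 = b2, then x1 = b1 - K x2."""
    x2 = cg(S, [-bi for bi in b2])            # S x2 = -b2 via CG
    x1 = [bi - kx for bi, kx in zip(b1, matvec(K, x2))]
    return x1, x2

# Illustrative 2x2 blocks: S1 = I + lam*L_TV (SPD), arbitrary K
lam = 0.1
S1 = [[1 + 2 * lam, -lam], [-lam, 1 + 2 * lam]]
K = [[1.0, 0.5], [0.0, 1.0]]
x1, x2 = apply_precond(K, S1, [1.0, 2.0], [3.0, 4.0])
```

The residual of $P x = b$ can be checked directly: $x_1 + K x_2$ should reproduce $b_1$ and $-S_1 x_2$ should reproduce $b_2$.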

#### Eigenvalue Estimates

In this subsection, we study the eigenvalues of the exact preconditioned matrix $P^{-1}A$. Since $P^{-1}A$ and $AP^{-1}$ are similar matrices, they have the same eigenvalues; hence, we study the eigenvalues of the matrix $AP^{-1}$.
Theorem 2.
If the linear system (1) is left preconditioned by the matrix P, then the preconditioned matrix is
$A P^{-1} = \begin{pmatrix} I_n & 0 \\ -K^* & -I_n \end{pmatrix},$
and its minimal polynomial is $(\nu - 1)(\nu + 1)$, where $\nu$ denotes an eigenvalue of the matrix $A P^{-1}$.
Proof.
Since $A P^{-1}$ and $P^{-1} A$ are similar, it suffices to study the eigenvalues of $A P^{-1}$. From the form of $A P^{-1}$, we notice that the preconditioned matrix has only two distinct eigenvalues, $\pm 1$, and hence a minimal polynomial of degree at most 2. Therefore, when a Krylov subspace method such as FGMRES is used, it converges in at most two iterations in exact arithmetic. This property is of practical use when inexpensive approximations of the Schur complement exist. However, when we approximate the Schur complement matrix $S$ by the matrix $S_1$ or $S_2$, we have the following eigenvalue estimate. □
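Theorem 2 can be sanity-checked in a few lines of code. Assuming the saddle point matrix has the block form $A = \begin{pmatrix} I & K \\ -K^* & \lambda L \end{pmatrix}$, which is our reading of system (1) and is consistent with $S = K^*K + \lambda L_\alpha$, the claimed product $M = A P^{-1} = \begin{pmatrix} I & 0 \\ -K^* & -I \end{pmatrix}$ satisfies $M P = A$ and $M^2 = I$ (so $(\nu-1)(\nu+1)$ annihilates $M$). Both identities can be verified numerically for random small blocks:

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def neg(A):
    return [[-x for x in row] for row in A]

def blocks(A11, A12, A21, A22):
    """Assemble the 2x2 block matrix [[A11, A12], [A21, A22]]."""
    return [r1 + r2 for r1, r2 in zip(A11, A12)] + \
           [r1 + r2 for r1, r2 in zip(A21, A22)]

random.seed(1)
n, lam = 3, 0.01
I = [[float(i == j) for j in range(n)] for i in range(n)]
Z = [[0.0] * n for _ in range(n)]
K = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
B = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
L = matmul(transpose(B), B)          # stand-in for the symmetric block L_alpha
Kt = transpose(K)                    # K^* in the real case
KtK = matmul(Kt, K)
S = [[KtK[i][j] + lam * L[i][j] for j in range(n)] for i in range(n)]
lamL = [[lam * x for x in row] for row in L]

A = blocks(I, K, neg(Kt), lamL)      # assumed block form of the saddle system
P = blocks(I, K, Z, neg(S))          # exact block triangular preconditioner
M = blocks(I, Z, neg(Kt), neg(I))    # the claimed A P^{-1}

MP = matmul(M, P)                    # M P should equal A, i.e., M = A P^{-1}
MM = matmul(M, M)                    # M^2 should be I: minimal poly nu^2 - 1
I2n = [[float(i == j) for j in range(2 * n)] for i in range(2 * n)]
assert all(abs(MP[i][j] - A[i][j]) < 1e-12
           for i in range(2 * n) for j in range(2 * n))
assert all(abs(MM[i][j] - I2n[i][j]) < 1e-12
           for i in range(2 * n) for j in range(2 * n))
```

The check works for any $K$ and symmetric $L$, which mirrors the fact that the two-eigenvalue property is structural and does not depend on the blur kernel.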
Theorem 3.
If the linear system (1) is left preconditioned by the matrix $P 1$ or $P 2$, then the eigenvalues of the preconditioned matrices
$A P_1^{-1} = \begin{pmatrix} I_n & 0 \\ -K^* & -S S_1^{-1} \end{pmatrix}, \quad \text{and} \quad A P_2^{-1} = \begin{pmatrix} I_n & 0 \\ -K^* & -S S_2^{-1} \end{pmatrix}$
are described as follows:
$\nu_+ = \{1\}, \quad \text{and} \quad \nu_- \in [\sigma_1, \sigma_n],$
where $σ 1$ and $σ n$ are the minimum and the maximum eigenvalues of the matrix $( − S S 1 − 1 )$ or $( − S S 2 − 1 )$.
Example 2.
In this example, our aim is to verify the bounds given in the above theorem. We take $N = 16$, i.e., $n = 16^2 = 256$, and fix $\alpha = 1.4$, $\beta = 0.1$, $\lambda = 0.001$. For this task, we use the preconditioner $P_1$ and the test image “Golden House”. We notice that the positive eigenvalues are equal to one, whereas the negative ones are contained in the interval $[\sigma_1, \sigma_n]$, where $\sigma_1$ and $\sigma_n$ are defined in the above theorem. In this example, $\sigma_1 \cong -1.01$ and $\sigma_n \cong -1$. The results of this example are plotted in Figure 7 and Figure 8. Moreover, in this experiment, we find that $cond(A) = 3.2915 \times 10^4$ and $cond((P_1)^{-1} A) = 1.6219$, which indicates that our preconditioner is effective.
From Figure 7 and Figure 8, we notice that the preconditioned matrix has good clustering behavior of the eigenvalues: they cluster around 1 and $-1$. This clustering verifies the above theorem and guarantees fast convergence of the FGMRES method.

## 5. Numerical Results

In this section, we experimentally study the performance of the FGMRES method with the proposed preconditioners $P_1$ and $P_2$. In the following numerical experiments, we implement Algorithms 1–3 and take the zero vector as the initial guess. We stop the outer iterations (FGMRES) when the residual satisfies $\| b - A x_k \| < 10^{-7} \| b \|$, where $x_k = (v_k, u_k)$ is the solution vector at the $k$-th iteration. We use only one iteration of the fixed-point iteration method to linearize the nonlinear term, and we use the PCG method for the inner iterations, stopped at a tolerance of $10^{-9}$. No restarting is used for the FGMRES algorithm. Two well-known $128 \times 128$ test images, called “Retinal Image” and “Golden House”, are used in the experiments, as shown in Figure 9 and Figure 10; they are blurred by the motion kernel shown in Figure 11.
In order to show the performance of the proposed preconditioners, we calculate the PSNR, which is commonly used in the signal processing field. It is given by the following formula:
$PSNR(u_e, u_d) = 10 \log_{10} \left( \dfrac{n_x^2 \, \max_{1 \le i, j \le n_x} |u_e^{ij}|^2}{\sum_{i=1}^{n_x} \sum_{j=1}^{n_x} \left( u_e^{ij} - u_d^{ij} \right)^2} \right),$
where $u e$ and $u d$ are the exact and deblurred images, respectively. Bigger PSNR means better deblurring performance.
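The PSNR formula above translates into a few lines of code. The sketch below follows the reconstruction given here (pixel count times squared peak value over the sum of squared errors, in decibels) and is written for general $n_x \times n_y$ images, although the test images in this paper are square; the tiny arrays `exact`, `noisy_small`, and `noisy_large` are illustrative placeholders for real image data.

```python
import math

def psnr(u_e, u_d):
    """PSNR in dB: 10 log10( n_x * n_y * max|u_e|^2 / sum of squared errors )."""
    nx, ny = len(u_e), len(u_e[0])
    peak = max(abs(v) for row in u_e for v in row)
    sse = sum((u_e[i][j] - u_d[i][j]) ** 2
              for i in range(nx) for j in range(ny))
    return 10 * math.log10(nx * ny * peak ** 2 / sse)

exact = [[0.0, 0.5], [1.0, 0.25]]
noisy_small = [[v + 0.01 for v in row] for row in exact]   # small error
noisy_large = [[v + 0.10 for v in row] for row in exact]   # larger error
assert psnr(exact, noisy_small) > psnr(exact, noisy_large)  # bigger PSNR = better
```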

#### 5.1. Selecting the Parameters $\beta$ and $\lambda$

The values of the parameters $\beta$ and $\lambda$ also play a vital role in the performance of the numerical technique used for the image deblurring model. Small values of $\beta$ affect the convergence rate of the iterations but do not change the quality of the deblurred images. We chose $\beta = 1$, $\beta = 0.1$, and $\beta = 0.01$, which are commonly used in the literature [28], and noticed no significant difference in the results between these values. Regarding the regularization parameter $\lambda$, we chose $\lambda$ small enough, namely $10^{-3}$, $10^{-5}$, $10^{-6}$, and $10^{-8}$, to ensure the best deblurring performance of the corresponding deblurring model. These values are commonly used in the literature [28].
Example 3.
In this example, we show the impact of our preconditioners on the convergence speed of the FGMRES algorithm for the fractional-order image deblurring problem. We fix $N = 128$, $\beta = 0.1$, and the regularization parameter $\lambda = 0.001$. No restarting is used for the FGMRES algorithm, and it is stopped at a tolerance of $10^{-7}$. We use the test image “Golden House”. In each FGMRES iteration, the logarithm of $\| r^{(k)} \|_2 / \| r^{(0)} \|_2$ is calculated and then plotted for different values of the regularization parameter $\lambda$ in Figure 12 and Figure 13.
In Figure 12 and Figure 13, we show the logarithm of the ratio of the current residual norm to the initial residual norm, plotted against the number of FGMRES iterations, for different values of the regularization parameter $\lambda$. NP stands for FGMRES without preconditioning, $P_1$ stands for FGMRES with the preconditioner $P_1$, and $P_2$ stands for FGMRES with the preconditioner $P_2$. The results in Figure 12 and Figure 13 show that our block triangular preconditioners $P_1$ and $P_2$ significantly accelerate the convergence of FGMRES, compared to FGMRES without preconditioning. Additionally, $P_1$ outperforms $P_2$.
Example 4.
In this example, we show the effectiveness of our proposed preconditioners in deblurring images. We used two blurred images (of size $128 × 128$) shown in Figure 14 and Figure 15. We select the following parameters: $α = 1.8$, $β = 1$, and $λ = 0.00001$. We used our preconditioners $P 1$ and $P 2$ to deblur the images and the results are shown in Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23.
From Figure 16, Figure 17, Figure 18, Figure 19, Figure 20, Figure 21, Figure 22 and Figure 23, the results show that our preconditioners are effective in deblurring images, with significant improvement in the PSNR. For example, the PSNR of the deblurred image in Figure 16 is 49.41, compared with a PSNR of 22.978 for the blurred image in Figure 14.
Example 5.
In this example, we compare the total CPU-time (in seconds) required for the convergence of the FGMRES with and without our proposed preconditioners $P 1$ and $P 2$. The results are shown in Table 1 for different $N , α , β , λ$.
From Table 1, the results show that both $P_1$ and $P_2$ significantly reduce the CPU time required for convergence, compared to FGMRES without preconditioning. For example, for $N = 128$, $\alpha = 1.6$, $\beta = 0.01$, and $\lambda = 10^{-6}$, the CPU time for FGMRES without preconditioning is 76.64 s, while it is 35.86 s with $P_1$ and 38.22 s with $P_2$. Overall, the results show that our proposed preconditioners $P_1$ and $P_2$ are effective in accelerating the convergence of FGMRES for the fractional-order image deblurring problem. This leads to significant reductions in CPU time, which is important for practical applications.

#### 5.2. GMRES versus FGMRES

In this experiment, we compared the performance of GMRES and FGMRES with our preconditioner $P_1$ using the following parameters: $N = 64$, $\alpha = 1.4$, $\beta = 0.1$, and $\lambda = 10^{-5}$, on the test image “Golden House”. Both GMRES and FGMRES were stopped at a tolerance of $10^{-7}$, and no restarting was used. The comparison results are shown in Figure 24, where $P_1GM$ stands for GMRES with $P_1$ and $P_1FG$ stands for FGMRES with $P_1$. As shown in the figure, FGMRES performs slightly better than GMRES.
In the following numerical result, we compare our TFOV-based algorithm with the TV-based algorithm [28].
Example 6.
In this example, we compare our TFOV-based algorithm with the TV-based algorithm on the nontextured peppers image. We use a Gaussian kernel with standard deviation $\sigma = 1.5$. The results are shown in Figure 25, Figure 26, Figure 27, Figure 28, Figure 29 and Figure 30. The size of each subfigure is $256 \times 256$. The subfigures are as follows: (a) exact image, (b) blurry image, (c) deblurred image by TV, (d) deblurred image by NP, (e) deblurred image by $P_1$, and (f) deblurred image by $P_2$. For the numerical calculations, we used the motion kernel. For the TV-based method, we used $\beta = 1$ and $\lambda$ varying from $10^{-2}$ to $10^{-4}$, according to [28]. The parameters for the TFOV-based method are listed in Table 2. For comparison, we used three different values of $N$: 64, 128, and 256; their corresponding blurred PSNRs are 20.1827, 20.1124, and 20.5531, respectively. For the stopping criterion of the numerical methods, we used a tolerance of $tol = 10^{-7}$.
Remark 1.
• Figure 27, Figure 28, Figure 29 and Figure 30 are almost identical, indicating that all methods generate results of the same quality.
• From Figure 31, Figure 32 and Figure 33, we can clearly see the effectiveness of preconditioning. For all values of $N$, the number of $P_1$ and $P_2$ iterations is much lower than the number of TFOV-based NP and TV-based $P_1$ iterations needed to reach the required accuracy $tol = 10^{-7}$. The later fixed-point iterations show similar results.
• From Table 2, we observe that the PSNR of the TFOV-based PGMRES method is almost the same as that of the ordinary TFOV-based GMRES method, but much higher than that of the TV-based $P_1$ method for all values of $N$. However, the $P_1$ and $P_2$ methods achieve this better PSNR in far fewer iterations. For example, the $P_1$ method needs only 18 iterations and the $P_2$ method only 20 iterations for $N = 64$, whereas the NP method needs 120+ iterations to reach the same PSNR. The TV-based $P_1$ method also takes 120+ iterations to reach its lower PSNR. The same holds for the other values of $N$. This means that the TFOV-based FGMRES method is faster than the TFOV-based GMRES and TV-based $P_1$ methods.
Example 7.
In this example, we utilized satellite images used by Chowdhury et al. [71]. The images were deliberately blurred and corrupted by Poisson noise, resulting in the presence of blurring artifacts. To achieve the blurring, we applied the built-in Gaussian kernel “fspecial(‘gaussian’, 9, sqrt(3))”. The introduction of Poisson noise presents a substantial challenge for most deblurring techniques, as this type of noise frequently occurs in scenarios involving photon counting across various imaging methods. At the same time, blurring is an inevitable consequence of the underlying physics of the imaging system, which can be modeled as the convolution of the image with a point spread function. For comparison, we chose to employ the non-blind fractional-order TV-based algorithm (NFOV) proposed by Chowdhury et al. [71], with the parameters for the NFOV method configured as specified in that reference. The restored satellite images are shown in Figure 34, Figure 35, Figure 36, Figure 37 and Figure 38, with each image sized at $128 \times 128$. For comparison, we used two different values of $N$, 64 and 128; their corresponding blurred PSNRs are 20.2985 and 20.4559, respectively. The stopping criterion is a tolerance of $tol = 10^{-7}$. Additional details regarding this experiment can be found in Table 3.
Remark 2.
Upon examining Figure 35, Figure 36, Figure 37 and Figure 38 and Table 3, it becomes apparent that the results generated by all methods are virtually indistinguishable. Nevertheless, our proposed methods (GMRES and FGMRES) exhibit slightly higher PSNR values while demanding significantly less CPU time. This observation underscores the improved efficiency and speed of our suggested methods (GMRES and FGMRES) in comparison to the NFOV technique.

## 6. Conclusions

In this paper, we have proposed two block triangular preconditioners for solving the generalized saddle point system derived from discretizing the Euler–Lagrange equations associated with the TFOV model in image deblurring problems. We have investigated the performance of the proposed preconditioners with the FGMRES method and tested this method on three types of digital images. We have also compared our algorithm with a TV-based algorithm. Our experiments show that the block triangular preconditioners are very effective, and that our technique improves the quality of the reconstructed images, as measured by the PSNR. We showed the performance of both GMRES and FGMRES with our proposed preconditioner and concluded that FGMRES is slightly better than GMRES. Only a few iterations and little CPU time are needed to obtain a fast rate of convergence and good deblurring performance. Circulant approximations are used in the first term of the Schur complement to reduce the computational cost from $O(N^2)$ to $O(N \log N)$ and to reduce the storage. The spectra of the preconditioned matrices are clustered around 1 and $-1$.

## Funding

This research was funded by King Fahd University of Petroleum and Minerals (KFUPM-IRC-CBM) grant number INCB2315.

## Data Availability Statement

No data were used to support this study.

## Acknowledgments

The author would like to acknowledge the support provided by King Fahd University of Petroleum & Minerals (KFUPM), Saudi Arabia. The author also would like to thank the referees for their very careful reading and valuable comments. The support provided by the Interdisciplinary Research Center for Construction & Building Materials (IRC-CBM) at King Fahd University of Petroleum & Minerals (KFUPM), Saudi Arabia, for funding this work through Project No. INCB2315, is also greatly acknowledged.

## Conflicts of Interest

The author declares that there is no conflict of interest.

## References

1. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217. [Google Scholar] [CrossRef]
2. Agarwal, V.; Gribok, A.V.; Abidi, M.A. Image restoration using L1 norm penalty function. Inverse Probl. Sci. Eng. 2007, 15, 785–809. [Google Scholar] [CrossRef]
3. Aujol, J.-F. Some first-order algorithms for total variation based image restoration. J. Math. Imaging Vis. 2009, 34, 307–327. [Google Scholar] [CrossRef]
4. Tai, X.-C.; Lie, K.-A.; Chan, T.F.; Osher, S. Image processing based on partial differential equations. In Proceedings of the International Conference on PDE-Based Image Processing and Related Inverse Problems, CMA, Oslo, Norway, 8–12 August 2005; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
5. Chen, D.; Chen, Y.; Xue, D. Fractional-order total variation image restoration based on primal-dual algorithm. In Abstract and Applied Analysis; Hindawi Publishing Corporation: London, UK, 2013; Volume 2013. [Google Scholar]
6. Williams, B.M.; Zhang, J.; Chen, K. A new image deconvolution method with fractional regularisation. J. Algorithms Comput. 2016, 10, 265–276. [Google Scholar] [CrossRef]
7. Chan, R.; Lanza, A.; Morigi, S.; Sgallari, F. An adaptive strategy for the restoration of textured images using fractional order regularization. Numer. Math. Theory Methods Appl. 2013, 6, 276–296. [Google Scholar] [CrossRef]
8. Zhang, J.; Chen, K. Variational image registration by a total fractional-order variation model. J. Comput. Phys. 2015, 293, 442–461. [Google Scholar] [CrossRef]
9. Benzi, M.; Golub, G.H.; Liesen, J. Numerical solution of saddle point problems. Acta Numer. 2005, 14, 1–137. [Google Scholar] [CrossRef]
10. Silvester, D.; Wathen, A. Fast iterative solution of stabilised Stokes systems. Part II: Using general block preconditioners. SIAM J. Numer. Anal. 1994, 31, 1352–1367. [Google Scholar] [CrossRef]
11. Wathen, A.; Silvester, D. Fast iterative solution of stabilised Stokes systems. Part I: Using simple diagonal preconditioners. SIAM J. Numer. Anal. 1993, 30, 630–649. [Google Scholar] [CrossRef]
12. Bramble, J.H.; Pasciak, J.E. A preconditioning technique for indefinite systems resulting from mixed approximations of elliptic problems. Math. Comput. 1988, 50, 1–17. [Google Scholar] [CrossRef]
13. Cao, Z.-H. Positive stable block triangular preconditioners for symmetric saddle point problems. Appl. Numer. Math. 2007, 57, 899–910. [Google Scholar] [CrossRef]
14. Klawonn, A. Block-triangular preconditioners for saddle point problems with a penalty term. SIAM J. Sci. Comput. 1998, 19, 172–184. [Google Scholar] [CrossRef]
15. Pestana, J. On the eigenvalues and eigenvectors of block triangular preconditioned block matrices. SIAM J. Matrix Anal. Appl. 2014, 35, 517–525. [Google Scholar] [CrossRef]
16. Simoncini, V. Block triangular preconditioners for symmetric saddle-point problems. Appl. Numer. Math. 2004, 49, 63–80. [Google Scholar] [CrossRef]
17. Axelsson, O.; Neytcheva, M. Preconditioning methods for linear systems arising in constrained optimization problems. Numer. Linear Algebr. Appl. 2003, 10, 3–31. [Google Scholar] [CrossRef]
18. Bai, Z.-Z.; Golub, G.H. Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J. Numer. 2007, 27, 1–23. [Google Scholar] [CrossRef]
19. Benzi, M.; Ng, M.K. Preconditioned iterative methods for weighted Toeplitz least squares problems. SIAM J. Matrix Anal. Appl. 2006, 27, 1106–1124. [Google Scholar] [CrossRef]
20. Ng, M.K.; Pan, J. Weighted Toeplitz regularized least squares computation for image restoration. SIAM J. Sci. Comput. 2014, 36, B94–B121. [Google Scholar] [CrossRef]
21. Cao, Z.-H. Block triangular Schur complement preconditioners for saddle point problems and application to the Oseen equations. Appl. Numer. 2010, 60, 193–207. [Google Scholar] [CrossRef]
22. Chen, C.; Ma, C. A generalized shift-splitting preconditioner for saddle point problems. Appl. Math. Lett. 2015, 43, 49–55. [Google Scholar] [CrossRef]
23. Salkuyeh, D.K.; Masoudi, M.; Hezari, D. On the generalized shift-splitting preconditioner for saddle point problems. Appl. Math. 2015, 48, 55–61. [Google Scholar] [CrossRef]
24. Beik, F.P.A.; Benzi, M.; Chaparpordi, S.-H.A. On block diagonal and block triangular iterative schemes and preconditioners for stabilized saddle point problems. J. Comput. Appl. Math. 2017, 326, 15–30. [Google Scholar] [CrossRef]
25. Murphy, M.F.; Golub, G.H.; Wathen, A.J. A note on preconditioning for indefinite linear systems. Siam J. Sci. Comput. 2000, 21, 1969–1972. [Google Scholar] [CrossRef]
26. Benzi, M.; Golub, G.H. A preconditioner for generalized saddle point problems. SIAM J. Matrix Anal. Appl. 2004, 26, 20–41. [Google Scholar] [CrossRef]
28. Vogel, C.R.; Oman, M.E. Fast, robust total variation-based reconstruction of noisy, blurred images. IEEE Trans. Image Process. 1998, 7, 813–824. [Google Scholar] [CrossRef] [PubMed]
29. Axelsson, O. Iterative Solution Methods; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
30. Campisi, P.; Egiazarian, K. Blind Image Deconvolution: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
31. Groetsch, C.W.; Groetsch, C. Inverse Problems in the Mathematical Sciences; Springer: Berlin/Heidelberg, Germany, 1993; Volume 52. [Google Scholar]
32. Tikhonov, A.N. Regularization of incorrectly posed problems. Sov. Math. Dokl. 1963, 4, 1624–1627. [Google Scholar]
33. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
34. Osher, S.; Solé, A.; Vese, L. Image decomposition and restoration using total variation minimization and the h1. Multiscale Model. Simul. 2003, 1, 349–370. [Google Scholar] [CrossRef]
35. Getreuer, P. Total variation inpainting using split Bregman. Image Process. Line 2012, 2, 147–157. [Google Scholar] [CrossRef]
36. Guo, W.; Qiao, L.-H. Inpainting based on total variation. In Proceedings of the 2007 International Conference on Wavelet Analysis and Pattern Recognition, Beijing, China, 2–4 November 2007; Volume 2, pp. 939–943. [Google Scholar]
37. Bresson, X.; Esedoglu, S.; Vandergheynst, P.; Thiran, J.-P.; Osher, S. Fast global minimization of the active contour/snake model. J. Math. Imaging Vis. 2007, 28, 151–167. [Google Scholar] [CrossRef]
38. Unger, M.; Pock, T.; Trobin, W.; Cremers, D.; Bischof, H. Tvseg-interactive total variation based image segmentation. BMVC 2008, 31, 44–46. [Google Scholar]
39. Yan, H.; Zhang, J.-X.; Zhang, X. Injected infrared and visible image fusion via l_{1} decomposition model and guided filtering. IEEE Trans. Comput. Imaging 2022, 8, 162–173. [Google Scholar] [CrossRef]
40. Chan, T.; Marquina, A.; Mulet, P. High-order total variation-based image restoration. SIAM J. Sci. Comput. 2000, 22, 503–516. [Google Scholar] [CrossRef]
41. Steidl, G.; Didas, S.; Neumann, J. Relations between higher order TV regularization and support vector regression. In International Conference on Scale-Space Theories in Computer Vision; Springer: Berlin/Heidelberg, Germany, 2005; pp. 515–527. [Google Scholar]
42. Bredies, K.; Kunisch, K.; Pock, T. Total generalized variation. SIAM J. Imaging Sci. 2010, 3, 492–526. [Google Scholar] [CrossRef]
43. Zhu, W.; Chan, T. Image denoising using mean curvature of image surface. SIAM J. Imaging Sci. 2012, 5, 1–32. [Google Scholar] [CrossRef]
44. Lysaker, M.; Osher, S.; Tai, X.-C. Noise removal using smoothed normals and surface fitting. IEEE Trans. Image Process. 2004, 13, 1345–1357. [Google Scholar] [CrossRef]
45. Ahmad, S.; Al-Mahdi, A.M.; Ahmed, R. Two new preconditioners for mean curvature-based image deblurring problem. AIMS Math. 2021, 6, 13824–13844. [Google Scholar] [CrossRef]
46. Al-Mahdi, A.; Fairag, F. Block diagonal preconditioners for an image de-blurring problem with fractional total variation. J. Phys. Conf. Ser. 2018, 1132, 012063. [Google Scholar] [CrossRef]
47. Fairag, F.; Al-Mahdi, A.; Ahmad, S. Two-level method for the total fractional-order variation model in image deblurring problem. Numer. Algorithms 2020, 85, 931–950. [Google Scholar] [CrossRef]
48. Sohail, A.; Bég, O.; Li, Z.; Celik, S. Physics of fractional imaging in biomedicine. Prog. Biophys. Mol. Biol. 2018, 140, 13–20. [Google Scholar] [CrossRef]
49. Xu, K.-D.; Zhang, J.-X. Prescribed performance tracking control of lower-triangular systems with unknown fractional powers. Fractal Fract. 2023, 7, 594. [Google Scholar] [CrossRef]
50. Wang, Y.; Zhang, X.; Boutat, D.; Shi, P. Quadratic admissibility for a class of lti uncertain singular fractional-order systems with 0 < α < 2. Fractal Fract. 2022, 7, 1. [Google Scholar]
51. Zhang, J.; Chen, K. A total fractional-order variation model for image restoration with nonhomogeneous boundary conditions and its numerical solution. SIAM J. Imaging Sci. 2015, 8, 2487–2518. [Google Scholar] [CrossRef]
52. Miller, K.S.; Ross, B. An Introduction to the Fractional Calculus and Fractional Differential Equations; Wiley: Hoboken, NJ, USA, 1993. [Google Scholar]
53. Oldham, K.; Spanier, J. The Fractional Calculus Theory and Applications of Differentiation and Integration to Arbitrary Order; Elsevier: Amsterdam, The Netherlands, 1974; Volume 111. [Google Scholar]
54. Podlubny, I. Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications; Academic Press: Cambridge, MA, USA, 1998; Volume 198. [Google Scholar]
55. Chan, T.F.; Golub, G.H.; Mulet, P. A nonlinear primal-dual method for total variation-based image restoration. SIAM J. Sci. 1999, 20, 1964–1977. [Google Scholar] [CrossRef]
56. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for fractional advection–dispersion flow equations. J. Comput. Appl. Math. 2004, 172, 65–77. [Google Scholar] [CrossRef]
57. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 2006, 56, 80–90. [Google Scholar] [CrossRef]
58. Wang, H.; Du, N. Fast solution methods for space-fractional diffusion equations. J. Comput. Appl. Math. 2014, 255, 376–383. [Google Scholar] [CrossRef]
59. Strang, G. A proposal for Toeplitz matrix calculations. Stud. Appl. Math. 1986, 74, 171–176. [Google Scholar] [CrossRef]
60. Olkin, J.A. Linear and Nonlinear Deconvolution Problems (Optimization). Ph.D. Thesis, Rice University, Houston, TX, USA, 1986. [Google Scholar]
61. Chan, T.F.; Olkin, J.A. Circulant preconditioners for Toeplitz-block matrices. Numer. Algorithms 1994, 6, 89–101. [Google Scholar] [CrossRef]
62. Chan, R.H.; Ng, K.-P. Toeplitz preconditioners for Hermitian Toeplitz systems. Linear Algebra Appl. 1993, 190, 181–208. [Google Scholar] [CrossRef]
63. Lin, F.-R. Preconditioners for block Toeplitz systems based on circulant preconditioners. Numer. Algorithms 2001, 26, 365–379. [Google Scholar] [CrossRef]
64. Chan, R.H. Toeplitz preconditioners for Toeplitz systems with nonnegative generating functions. IMA J. Numer. Anal. 1991, 11, 333–345. [Google Scholar] [CrossRef]
65. Serra, S. Preconditioning strategies for asymptotically ill-conditioned block Toeplitz systems. BIT Numer. Math. 1994, 34, 579–594. [Google Scholar] [CrossRef]
66. Lin, F.-R.; Wang, C.-X. BTTB preconditioners for BTTB systems. Numer. Algorithms 2012, 60, 153–167. [Google Scholar] [CrossRef]
67. Chan, R.H.; Strang, G. Toeplitz equations by conjugate gradients with circulant preconditioner. SIAM J. Sci. Stat. 1989, 10, 104–119. [Google Scholar] [CrossRef]
68. Chan, R.H.; Yeung, M.-C. Circulant preconditioners constructed from kernels. SIAM J. Numer. Anal. 1992, 29, 1093–1103. [Google Scholar] [CrossRef]
69. Chan, T.F. An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput. 1988, 9, 766–771. [Google Scholar] [CrossRef]
70. Davis, P.J. Circulant Matrices; American Mathematical Soc.: New York, NY, USA, 2012. [Google Scholar]
71. Chowdhury, M.R.; Qin, J.; Lou, Y. Non-blind and blind deconvolution under Poisson noise using fractional-order total variation. J. Math. Imaging Vis. 2020, 62, 1238–1255. [Google Scholar] [CrossRef]
Figure 1. Cross sections.
Figure 2. Right box.
Figure 3. Middle box.
Figure 4. Left box.
Figure 5. TV-error.
Figure 6. TFOV-error.
Figure 7. Eigenvalues of A.
Figure 8. Eigenvalues of $( P 1 ) − 1 A$.
Figure 9. Golden house image.
Figure 10. Retinal image.
Figure 11. Shape of the kernel.
Figure 12. Residual versus iterations number when $λ = 1 × 10 − 3$.
Figure 13. Residual versus iterations number when $λ = 1 × 10 − 5$.
Figure 14. Golden house image (blurred).
Figure 15. Retinal image (blurred).
Figure 16. Using $P 1$ with $α = 1$.
Figure 17. Using $P 2$ with $α = 1$.
Figure 18. Using $P 1$ with $α = 1.8$.
Figure 19. Using $P 2$ with $α = 1.8$.
Figure 20. Using $P 1$ with $α = 1$.
Figure 21. Using $P 2$ with $α = 1$.
Figure 22. Using $P 1$ with $α = 1.8$.
Figure 23. Using $P 2$ with $α = 1.8$.
Figure 24. FGMRES vs. GMRES.
Figure 25. Peppers image (exact).
Figure 26. Peppers image (blurred).
Figure 27. Using TV ($α = 1$).
Figure 28. Using NP.
Figure 29. Using $P 1$ with $α = 1.9$.
Figure 30. Using $P 2$ with $α = 1.9$.
Figure 31. $N = 64$.
Figure 32. $N = 128$.
Figure 33. $N = 256$.
Figure 34. Satellite image (blurred).
Figure 35. Using NFOV.
Figure 36. Using NP.
Figure 37. Using $P 1$ with $α = 1.9$.
Figure 38. Using $P 2$ with $α = 1.9$.
Table 1. The CPU time comparison of GMRES and FGMRES.

| N | $α$ | $λ$ | $β$ | Iterations (NP) | Iterations ($P_1$) | Iterations ($P_2$) | CPU-Time (NP) | CPU-Time ($P_1$) | CPU-Time ($P_2$) |
|---|---|---|---|---|---|---|---|---|---|
| 32 | 1.3 | $10^{-3}$ | 1 | 53 | 30 | 32 | 3.44 | 1.88 | 1.98 |
| 64 | 1.8 | $10^{-8}$ | 0.1 | 301 | 166 | 194 | 39.71 | 20.97 | 20.55 |
| 128 | 1.6 | $10^{-6}$ | 0.01 | 178 | 68 | 91 | 76.64 | 35.86 | 38.22 |
Table 2. The PSNR Comparison of TV, NP, $P_1$ and $P_2$.

| N | $α$ | $λ$ | $β$ | Iter. TV ($α=1$) | Iter. NP | Iter. $P_1$ | Iter. $P_2$ | PSNR TV ($α=1$) | PSNR NP | PSNR $P_1$ | PSNR $P_2$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 64 | 1.6 | $10^{-4}$ | 1 | 120+ | 120+ | 20 | 18 | 47.2230 | 48.6422 | 49.0131 | 48.9233 |
| 128 | 1.8 | $10^{-4}$ | 1 | 120+ | 120+ | 40 | 22 | 45.2243 | 46.0352 | 46.8526 | 46.8957 |
| 256 | 1.9 | $10^{-7}$ | 1 | 120+ | 120+ | 60 | 38 | 40.3331 | 44.1220 | 44.6277 | 44.6241 |
Table 3. The PSNR Comparison of NFOV, NP, $P_1$ and $P_2$.

| N | $α$ | $λ$ | $β$ | Iter. NFOV | Iter. NP | Iter. $P_1$ | Iter. $P_2$ | PSNR NFOV | PSNR NP | PSNR $P_1$ | PSNR $P_2$ |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 64 | 1.7 | $10^{-4}$ | 1 | 120+ | 120+ | 41 | 26 | 25.9869 | 26.5625 | 26.7861 | 26.8283 |
| 128 | 1.9 | $10^{-7}$ | 1 | 120+ | 120+ | 65 | 45 | 24.1417 | 25.1908 | 25.4312 | 25.6952 |

## Share and Cite

MDPI and ACS Style

Al-Mahdi, A.M. Preconditioning Technique for an Image Deblurring Problem with the Total Fractional-Order Variation Model. Math. Comput. Appl. 2023, 28, 97. https://doi.org/10.3390/mca28050097
