Article

A Novel Two-Step Inertial Viscosity Algorithm for Bilevel Optimization Problems Applied to Image Recovery

by
Rattanakorn Wattanataweekul
1,
Kobkoon Janngam
2 and
Suthep Suantai
3,*
1
Department of Mathematics, Statistics and Computer, Faculty of Science, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
2
Graduate Ph.D. Degree Program in Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3
Research Center in Optimization and Computational Intelligence for Big Data Prediction, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3518; https://doi.org/10.3390/math11163518
Submission received: 16 July 2023 / Revised: 31 July 2023 / Accepted: 10 August 2023 / Published: 15 August 2023
(This article belongs to the Special Issue Advances in Fixed Point Theory and Its Applications)

Abstract:
This paper introduces a novel two-step inertial algorithm for locating a common fixed point of a countable family of nonexpansive mappings. We establish strong convergence properties of the proposed method under mild conditions and employ it to solve convex bilevel optimization problems. The method is further applied to the image recovery problem. Our numerical experiments show that the proposed method achieves faster convergence than other related methods in the literature.

1. Introduction

Bilevel optimization has received significant attention in recent years, having arisen as a powerful tool for many machine learning applications such as hyperparameter optimization [1,2], signal processing [3,4], and reinforcement learning [5]. It is defined as a mathematical program in which an optimization problem contains another optimization problem as a constraint. In this paper, we consider the bilevel optimization problem in which the following minima are sought:
$$\min_{x \in S^*} \omega(x), \tag{1}$$
where $\omega : \mathbb{R}^n \to \mathbb{R}$ is assumed to be strongly convex and differentiable, while $S^*$ is the nonempty set of inner level optimizers of
$$\min_{x \in \mathbb{R}^n} \{\psi_1(x) + \psi_2(x)\}, \tag{2}$$
where $\psi_1 : \mathbb{R}^n \to \mathbb{R}$ is a differentiable and convex function such that $\nabla\psi_1$ is $L$-Lipschitz continuous and $\psi_2 : \mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$ is a convex, proper, and lower semi-continuous function. We let $\Lambda$ denote the solution set of (1).
Observe that this bilevel optimization model contains the inner level minimization problem (2) as a constraint to the outer level optimization problem (1). It is well known that, for problem (1),
$$x^* \in \Lambda \ \text{ if and only if } \ \langle \nabla\omega(x^*), x - x^* \rangle \geq 0 \ \text{ for all } x \in S^*.$$
Many researchers have proposed algorithms for solving problem (2); see [6,7,8,9,10]. The basic algorithm is the proximal forward–backward technique, or proximal gradient method, defined by the iterative equation
$$x_{n+1} = \mathrm{prox}_{\alpha_n \psi_2}\big(I - \alpha_n \nabla\psi_1\big)(x_n), \quad n \in \mathbb{N}, \tag{3}$$
where $\alpha_n > 0$ is the step size, $\mathrm{prox}_{\psi_2}$ is the proximity operator of $\psi_2$, and $\nabla\psi_1$ is the gradient of $\psi_1$ [6,11]. Equation (3) is referred to in the literature as the forward–backward splitting algorithm (FBSA). The FBSA can be used to solve the inner level optimization problem if $\nabla\psi_1$ is $L$-Lipschitz continuous [7].
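To make the FBSA step (3) concrete, the following is a minimal sketch for the special case $\psi_1(x) = \frac{1}{2}\|v - Ax\|_2^2$ and $\psi_2(x) = \zeta\|x\|_1$, for which the proximity operator reduces to component-wise soft thresholding; the random problem data, the regularization weight, and the fixed step size below are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximity operator of t * ||.||_1 (component-wise soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fbsa(A, v, zeta, alpha, n_iter=200):
    # Forward-backward splitting for min 0.5*||v - A x||_2^2 + zeta*||x||_1:
    # a gradient (forward) step on psi_1 followed by a prox (backward) step on psi_2.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - v)
        x = soft_threshold(x - alpha * grad, alpha * zeta)
    return x

# Illustrative data: a small random sparse recovery problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
v = A @ np.concatenate([rng.standard_normal(5), np.zeros(95)])
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient of psi_1
x_hat = fbsa(A, v, zeta=0.1, alpha=1.0 / L)
```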
The proximal gradient method can also be viewed as a fixed-point algorithm, where the iterated mapping is given by
$$T := \mathrm{prox}_{\alpha \psi_2}\big(I - \alpha \nabla\psi_1\big) \tag{4}$$
and is called the forward–backward mapping [12]. The forward–backward mapping $T$ is nonexpansive if $0 < \alpha < 2/L$, where $L$ is a Lipschitz constant of $\nabla\psi_1$; in that case, $\mathrm{Fix}(T) = \operatorname{argmin}\{\psi_1(x) + \psi_2(x)\}$. It is noted that implementation of the forward–backward operator can be simplified by first changing the inner level optimization problem into a zero-point problem of the sum of two monotone operators and then, after analysis, translating back into the fixed-point problem. Exemplifying the fixed-point approach, Sabach and Shtern [13] proposed the bilevel gradient sequential averaging method (BiG-SAM) for solving problems (1) and (2). The iterative process can be defined as
$$\begin{cases} u_n = \mathrm{prox}_{c g}\big(x_{n-1} - c\nabla f(x_{n-1})\big), \\ v_n = x_{n-1} - \lambda\nabla\omega(x_{n-1}), \\ x_{n+1} = \gamma_n v_n + (1-\gamma_n)u_n, \quad n \geq 1, \end{cases} \tag{5}$$
where $c \in \big(0, \tfrac{2}{L_f}\big)$, $\lambda \in \big(0, \tfrac{2}{L_\omega + \sigma}\big)$, $\omega$ is strongly convex with parameter $\sigma$, and $L_f$ and $L_\omega$ are Lipschitz constants for the gradients of $f$ and $\omega$, respectively. The authors analyzed the convergence behavior of BiG-SAM using an existing fixed-point algorithm and discussed its rate of convergence.
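For orientation, the sketch below runs the BiG-SAM recursion (5) with generic callables; prox_g, grad_f, grad_omega, and the step sizes are placeholders that the caller must supply so that the stated conditions on $c$, $\lambda$, and $\gamma_n$ hold, and this is not the authors' reference implementation.

```python
def big_sam(x0, prox_g, grad_f, grad_omega, c, lam, gammas):
    # One BiG-SAM pass per element of gammas:
    #   u = prox_{c g}(x - c * grad f(x))      (inner-level forward-backward step)
    #   v = x - lam * grad omega(x)            (outer-level gradient step)
    #   x = gamma * v + (1 - gamma) * u        (sequential averaging)
    x = x0
    for gamma in gammas:
        u = prox_g(x - c * grad_f(x), c)
        v = x - lam * grad_omega(x)
        x = gamma * v + (1.0 - gamma) * u
    return x
```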
In optimization problems like those presented above, mathematicians frequently employ a technique known as inertial-type extrapolation [14,15] to accelerate the convergence of the iterative equations. This approach utilizes a term $\theta_n(x_n - x_{n-1})$, where $\theta_n$ denotes an inertial parameter, to govern the momentum $x_n - x_{n-1}$. One such algorithm that has enjoyed immense popularity was developed by Nesterov [14]. He used an inertial, or extrapolation, technique to solve convex optimization problems of the form (2), where $F := \psi_1 + \psi_2$ is a convex, smooth function. Nesterov's algorithm takes the following form:
$$\begin{cases} z_n = x_n + \theta_n(x_n - x_{n-1}), \\ x_{n+1} = z_n - c\nabla F(z_n), \quad n \in \mathbb{N}, \end{cases} \tag{6}$$
where the inertial parameter $\theta_n \in (0,1)$ for all $n$ and $c > 0$ is a step size depending on the Lipschitz continuity modulus of $\nabla F$. Nesterov proved that Equation (6) has a faster convergence rate than the general gradient algorithm by selecting $\{\theta_n\}$ such that $\sup_n \theta_n = 1$. Similarly, in 2009, Beck and Teboulle [16] introduced the fast iterative shrinkage-thresholding algorithm (FISTA) for solving linear inverse problems. Their result combined the proximity algorithm with the inertial technique, again resulting in the algorithm's convergence rate being considerably accelerated.
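The inertial (extrapolation) idea behind (6) can be sketched as follows for a smooth convex $F$; the choice $\theta_n = (n-1)/(n+2)$ and the toy objective are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def inertial_gradient(grad_F, x1, step, n_iter=100):
    # One-step inertial gradient method:
    #   z_n = x_n + theta_n * (x_n - x_{n-1}),   x_{n+1} = z_n - step * grad F(z_n).
    x_prev, x = x1.copy(), x1.copy()
    for n in range(1, n_iter + 1):
        theta = (n - 1) / (n + 2)      # a standard Nesterov-type inertial parameter
        z = x + theta * (x - x_prev)
        x_prev, x = x, z - step * grad_F(z)
    return x

# Example: minimize F(x) = 0.5 * ||x - b||_2^2, whose gradient is x - b.
b = np.ones(5)
x_min = inertial_gradient(lambda x: x - b, np.zeros(5), step=1.0)
```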
In 2019, Shehu et al. [17] presented an inertial forward–backward algorithm, called the inertial bilevel gradient sequential averaging method (iBiG-SAM), for solving problems (1) and (2). Their method improves on the BiG-SAM of Sabach and Shtern [13] by means of the following iterative algorithm:
$$\begin{cases} s_n = x_n + \theta_n(x_n - x_{n-1}), \\ u_n = \mathrm{prox}_{c g}\big(I - c\nabla f\big)(s_n), \\ v_n = s_n - \lambda\nabla\omega(s_n), \\ x_{n+1} = \gamma_n v_n + (1-\gamma_n)u_n, \quad n \geq 1. \end{cases} \tag{7}$$
The authors transformed the bilevel optimization problem into a fixed-point problem for a nonexpansive mapping in an infinite dimensional Hilbert space and then proved strong convergence.
As the above suggests, research on fixed-point problems for nonexpansive mappings has become crucial for developing optimization methods. The Mann iterative process is a well-known method for approximating fixed points of nonexpansive mappings on Hilbert spaces. However, Mann's process provides only weak convergence. Many authors have therefore obtained strong convergence to fixed points of nonexpansive mappings on Hilbert spaces using the viscosity approximation method, expressed by the equation
$$x_{n+1} = \beta_n S(x_n) + (1 - \beta_n)T x_n, \quad n \geq 1, \tag{8}$$
where $\{\beta_n\} \subset (0,1)$, $S$ is a contraction on a Hilbert space $H$, and $x_1 \in H$; see [18,19].
In 2009, Takahashi [20] modified the viscosity approximation method of Moudafi [18] for selecting a particular fixed point of a nonexpansive self-mapping. The iterative process is given by
$$x_{n+1} = \beta_n S(x_n) + (1 - \beta_n)T_n x_n, \quad n \geq 1, \tag{9}$$
where $\{\beta_n\} \subset (0,1)$, $S$ is a contraction of $C$ into itself, $\{T_n\}$ is a countable family of nonexpansive mappings of $C$ into itself, $C$ is a subset of a Banach space, and $x_1 \in C$. Takahashi proved the strong convergence of (9) to a common fixed point of the $T_n$.
Jailoka et al. [21] introduced a fast viscosity forward–backward algorithm (FVFBA) with the inertial technique for finding a common fixed point of a countable family of nonexpansive mappings. They proved a strong convergence result and applied it to solving a convex minimization problem of the sum of two convex functions. The  iterative process can be formulated by
$$\begin{cases} u_n = x_n + \theta_n(x_n - x_{n-1}), \\ v_n = (1 - \alpha_n)T_n u_n + \alpha_n S(u_n), \\ x_{n+1} = (1 - \beta_n)T_n u_n + \beta_n T_n v_n, \quad n \geq 1, \end{cases} \tag{10}$$
where $\{\alpha_n\}, \{\beta_n\} \subset (0,1)$, $S$ is a contraction on a Hilbert space $H$, and $x_1 \in H$.
Recently, Janngam et al. [22] presented an inertial viscosity modified SP algorithm (IVMSPA). The authors proved strong convergence of their algorithm and applied it to solving the convex bilevel optimization problems (1) and (2). Their algorithm is given by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ z_n = (1 - \alpha_n)y_n + \alpha_n S(y_n), \\ w_n = (1 - \beta_n)z_n + \beta_n T_n z_n, \\ x_{n+1} = (1 - \gamma_n)w_n + \gamma_n T_n w_n, \quad n \geq 1, \end{cases} \tag{11}$$
where $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\} \subset (0,1)$, $S$ is a contraction mapping on a Hilbert space $H$, and $x_1 \in H$.
The above authors all employ a single inertial parameter to accelerate the convergence of their algorithms. However, it has been noted that the incorporation of two inertial parameters enhances motion modeling, improves stability and robustness, increases redundancy and fault tolerance, expands the range of applications, and offers flexibility and adaptability in algorithm design. In [23], it was illustrated through an example that the one-step inertial extrapolation, expressed as $w_n = x_n + \theta_n(x_n - x_{n-1})$ with $\theta_n \in [0,1)$, may not produce acceleration. Additionally, Ref. [24] mentioned that incorporating more than two points, such as $x_n$ and $x_{n-1}$, in the inertial process could lead to acceleration. For instance, the following two-step inertial extrapolation,
$$y_n = x_n + \theta(x_n - x_{n-1}) + \delta(x_{n-1} - x_{n-2}), \tag{12}$$
where $\theta > 0$ and $\delta < 0$, can provide acceleration. The limitations of employing one-step inertial acceleration in the alternating direction method of multipliers (ADMM) were discussed in [25], which led to the proposal of adaptive acceleration as an alternative solution. In addition, Polyak [26] discussed the potential for multi-step inertial methods to enhance the speed of optimization techniques, despite the absence of established convergence or rate results there. Recent research in [27] has further explored and examined various aspects of multi-step inertial methods.
Based on the information provided above, our aim in this paper is to solve the convex bilevel optimization problem by introducing a new accelerated viscosity algorithm with the two-point inertial technique, which we then apply to image recovery. The remainder of the paper is organized as follows. In Section 2, we recall some basic definitions and results that are crucial in the paper. The proposed algorithm and the analysis of its convergence are presented in Section 3. The performance of deblurring images using our algorithm is analyzed and illustrated in Section 4. Finally, we give conclusions and discuss directions for future work in Section 5.

2. Preliminaries

In this section, we present some preliminary material that will be needed for the main theorems.
Let $C$ be a nonempty subset of a real Hilbert space $H$ with norm $\|\cdot\|$, let $\mathbb{R}$ denote the set of real numbers, $\mathbb{R}_+$ the non-negative real numbers, $\mathbb{R}_{>0}$ the positive real numbers, and $\mathbb{N}$ the set of positive integers, and let $I$ denote the identity mapping on $H$.
Definition 1.
The mapping $T : C \to C$ is said to be $L$-Lipschitz with $L \geq 0$ if
$$\|Tu - Tv\| \leq L\|u - v\|$$
for all $u, v \in C$. Furthermore, if $L \in [0,1)$, then $T$ is called a contraction mapping, and $T$ is nonexpansive if $L = 1$.
When $\{x_n\}$ is a sequence in $C$, we denote the strong convergence of $\{x_n\}$ to $x \in C$ by $x_n \to x$, and $\mathrm{Fix}(T)$ will symbolize the set of all fixed points of $T$.
Let $T : C \to C$ be a nonexpansive mapping and $\{T_n\}$ a family of nonexpansive mappings of $C$ into itself such that $\mathrm{Fix}(T) = \Gamma := \bigcap_{n=1}^{\infty}\mathrm{Fix}(T_n) \neq \varnothing$. The sequence $\{T_n\}$ is said to satisfy the NST-condition (I) with $T$ [28] if, for each bounded sequence $\{x_n\} \subset C$,
$$\lim_{n\to\infty}\|x_n - T_n x_n\| = 0 \ \text{ implies } \ \lim_{n\to\infty}\|x_n - T x_n\| = 0.$$
The following is an essential condition for proving our convergence theorem.
Definition 2
([29,30]). A sequence $\{T_n\}$ with $\bigcap_{n=1}^{\infty}\mathrm{Fix}(T_n) \neq \varnothing$ is said to satisfy the condition (Z) if, whenever $\{u_n\}$ is a bounded sequence in $C$ such that
$$\lim_{n\to\infty}\|u_n - T_n u_n\| = 0,$$
every weak cluster point of $\{u_n\}$ belongs to $\bigcap_{n=1}^{\infty}\mathrm{Fix}(T_n)$.
Recall that, for a nonempty closed convex subset $C$ of $H$, the metric projection onto $C$ is the mapping $P_C : H \to C$ defined by
$$P_C x = \operatorname{argmin}\{\|x - y\| : y \in C\}$$
for all $x \in H$. Note that $v = P_C x$ if and only if $\langle x - v, y - v\rangle \leq 0$ for all $y \in C$.
The definition and properties of a proximity operator are presented below.
Definition 3
([31,32]). Let $g : H \to \mathbb{R}\cup\{\infty\}$ be a function that is convex, proper, and lower semi-continuous. The function $\mathrm{prox}_g$, known as the proximity operator of $g$, is defined as follows:
$$\mathrm{prox}_g(x) := \operatorname{argmin}_{y \in H}\Big\{ g(y) + \frac{1}{2}\|x - y\|^2 \Big\}.$$
Alternatively, it can be expressed as
$$\mathrm{prox}_g = (I + \partial g)^{-1},$$
where $\partial g$ represents the subdifferential of $g$, defined by
$$\partial g(x) := \{ v \in H : g(x) + \langle v, u - x\rangle \leq g(u) \ \text{for all } u \in H \}$$
for any $x \in H$. Additionally, for $\rho > 0$, we know that $\mathrm{prox}_{\rho g}$ is firmly nonexpansive and
$$\mathrm{Fix}(\mathrm{prox}_{\rho g}) = \operatorname{Argmin}(g) := \{ v \in H : g(v) \leq g(u) \ \text{for all } u \in H \},$$
where $\mathrm{Fix}(\mathrm{prox}_{\rho g})$ is the set of fixed points of $\mathrm{prox}_{\rho g}$.
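As a small illustration of Definition 3, consider $g(y) = \frac{1}{2}\|y\|^2$, a choice made here only for demonstration: the optimality condition $(I + \rho\nabla g)(y) = x$ gives the proximity operator in closed form, and the snippet below checks the resolvent identity numerically.

```python
import numpy as np

def prox_scaled_sq_norm(x, rho):
    # prox_{rho * g}(x) for g(y) = 0.5*||y||^2: the unique minimizer of
    # rho*0.5*||y||^2 + 0.5*||x - y||^2, i.e. y = x / (1 + rho).
    return x / (1.0 + rho)

x = np.array([2.0, -4.0, 1.0])
y = prox_scaled_sq_norm(x, rho=1.0)
# Resolvent identity prox_{rho g} = (I + rho * grad g)^{-1}:  y + rho * y == x.
assert np.allclose(y + 1.0 * y, x)
```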
The following lemmas will be used for proving the convergence of our proposed algorithm.
Lemma 1
([33]). Let $g : H \to \mathbb{R}\cup\{\infty\}$ be a convex, proper, and lower semi-continuous function and let $f : H \to \mathbb{R}$ be a differentiable and convex function such that $\nabla f$ is $L$-Lipschitz continuous. Let
$$T_n := \mathrm{prox}_{\rho_n g}(I - \rho_n \nabla f) \quad \text{and} \quad T := \mathrm{prox}_{\rho g}(I - \rho \nabla f),$$
where $\rho_n, \rho \in (0, 2/L)$ with $\rho_n \to \rho$ as $n \to \infty$. Then $\{T_n\}$ satisfies the NST-condition (I) with $T$.
Lemma 2
([34]). Let $x_1, x_2 \in H$ and $t \in [0,1]$. Then, the following properties are true:
(i) $\|x_1 \pm x_2\|^2 = \|x_1\|^2 \pm 2\langle x_1, x_2\rangle + \|x_2\|^2$;
(ii) $\|x_1 + x_2\|^2 \leq \|x_1\|^2 + 2\langle x_2, x_1 + x_2\rangle$;
(iii) $\|t x_1 + (1-t)x_2\|^2 = t\|x_1\|^2 + (1-t)\|x_2\|^2 - t(1-t)\|x_1 - x_2\|^2$.
Lemma 3
([35]). Let $\{a_n\}, \{b_n\} \subset \mathbb{R}_+$ and $\{t_n\} \subset (0,1)$ be such that $\sum_{n=1}^{\infty} t_n = \infty$. Assume that
$$a_{n+1} \leq (1 - t_n)a_n + t_n b_n$$
for all $n \in \mathbb{N}$. If $\limsup_{i\to\infty} b_{n_i} \leq 0$ for every subsequence $\{a_{n_i}\}$ of $\{a_n\}$ satisfying
$$\liminf_{i\to\infty}(a_{n_i+1} - a_{n_i}) \geq 0,$$
then $\lim_{n\to\infty} a_n = 0$.

3. Main Results

Throughout this section, we let $C$ be a closed convex subset of $H$ and let $F : C \to C$ be a $k$-contraction with $0 < k < 1$. Let $\{T_n\}$ be a family of nonexpansive mappings of $C$ into itself satisfying the condition (Z) such that $\Gamma := \bigcap_{n=1}^{\infty}\mathrm{Fix}(T_n) \neq \varnothing$.
For the first of our main results, we draw upon the ideas of Jailoka et al. [21] and Liang [24] and introduce a modified two-step inertial viscosity algorithm (MTIVA) for finding a common fixed point of a family of nonexpansive mappings { T n } , as follows:
In Theorem 1, we show that Algorithm 1 converges strongly.
Algorithm 1 Modified Two-Step Inertial Viscosity Algorithm (MTIVA)
Initialization: Let $\{\beta_n\}, \{\gamma_n\} \subset [0,1]$, $\{\tau_n\} \subset \mathbb{R}_+$, and let $\{\mu_n\}, \{\rho_n\} \subset \mathbb{R}_{>0}$ be bounded sequences. Take $x_{-1}, x_0, x_1 \in H$ arbitrarily. For $n \in \mathbb{N}$:
Step 1. Compute the inertial step:
$$\vartheta_n = \begin{cases} \min\Big\{\mu_n, \dfrac{\tau_n}{\|x_n - x_{n-1}\|}\Big\} & \text{if } x_n \neq x_{n-1}, \\ \mu_n & \text{otherwise,} \end{cases} \tag{13}$$
$$\delta_n = \begin{cases} \max\Big\{-\rho_n, \dfrac{-\tau_n}{\|x_{n-1} - x_{n-2}\|}\Big\} & \text{if } x_{n-1} \neq x_{n-2}, \\ -\rho_n & \text{otherwise,} \end{cases} \tag{14}$$
$$w_n = x_n + \vartheta_n(x_n - x_{n-1}) + \delta_n(x_{n-1} - x_{n-2}). \tag{15}$$
Step 2. Compute the viscosity step:
$$z_n = (1 - \gamma_n)T_n w_n + \gamma_n F(w_n). \tag{16}$$
Step 3. Compute $x_{n+1}$:
$$x_{n+1} = (1 - \beta_n)T_n w_n + \beta_n T_n z_n. \tag{17}$$
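The following is a minimal sketch of one way Algorithm 1 could be implemented for generic callables $T(n, x)$ and $F(x)$; the parameter callables passed in are the user's responsibility and must be chosen so that conditions (C1)–(C3) of Theorem 1 hold (for instance $\beta_n \equiv 0.5$, $\gamma_n = \frac{1}{50n}$, $\tau_n = \frac{1}{n^2}$, $\mu_n \equiv 0.5$, $\rho_n = \frac{1}{n^2}$ have the required shapes). This is an illustrative sketch, not the authors' code.

```python
import numpy as np

def mtiva(T, F, x1, n_iter, beta, gamma, tau, mu, rho):
    # Algorithm 1 (MTIVA) sketch: T(n, x) evaluates T_n(x), F is the contraction,
    # and beta, gamma, tau, mu, rho are callables n -> parameter value.
    x_mm, x_m, x = x1.copy(), x1.copy(), x1.copy()   # stand-ins for x_{-1}, x_0, x_1
    for n in range(1, n_iter + 1):
        d1, d2 = np.linalg.norm(x - x_m), np.linalg.norm(x_m - x_mm)
        theta = min(mu(n), tau(n) / d1) if d1 > 0 else mu(n)       # first inertial weight
        delta = max(-rho(n), -tau(n) / d2) if d2 > 0 else -rho(n)  # second (negative) weight
        w = x + theta * (x - x_m) + delta * (x_m - x_mm)           # two-step inertial point
        Tw = T(n, w)
        z = (1 - gamma(n)) * Tw + gamma(n) * F(w)                  # viscosity step
        x_mm, x_m, x = x_m, x, (1 - beta(n)) * Tw + beta(n) * T(n, z)
    return x
```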
Theorem 1.
Let a sequence $\{x_n\}$ be generated by Algorithm 1. Suppose that conditions (C1)–(C3) below hold for the sequences $\{\tau_n\}$, $\{\gamma_n\}$, and $\{\beta_n\}$. Then $x_n \to \breve{p} \in \Gamma$, where $\breve{p} = P_{\Gamma}F(\breve{p})$.
(C1) $\lim_{n\to\infty}\dfrac{\tau_n}{\gamma_n} = 0$;
(C2) $0 < \epsilon_1 \leq \beta_n \leq \epsilon_2 < 1$ for some $\epsilon_1, \epsilon_2 \in \mathbb{R}$;
(C3) $0 < \gamma_n < 1$, $\lim_{n\to\infty}\gamma_n = 0$, and $\sum_{n=1}^{\infty}\gamma_n = \infty$.
Proof. 
Let $\breve{p} = P_{\Gamma}F(\breve{p})$. By the definition of $z_n$, we obtain
$$\begin{aligned} \|z_n - \breve{p}\| &= \|(1-\gamma_n)T_n w_n + \gamma_n F(w_n) - \breve{p}\| \\ &\leq (1-\gamma_n)\|T_n w_n - \breve{p}\| + \gamma_n\|F(w_n) - F(\breve{p})\| + \gamma_n\|F(\breve{p}) - \breve{p}\| \\ &\leq \big(1 - \gamma_n(1-k)\big)\|w_n - \breve{p}\| + \gamma_n\|F(\breve{p}) - \breve{p}\|. \end{aligned} \tag{18}$$
By the definition of $w_n$, we obtain
$$\begin{aligned} \|w_n - \breve{p}\| &= \|x_n + \vartheta_n(x_n - x_{n-1}) + \delta_n(x_{n-1} - x_{n-2}) - \breve{p}\| \\ &\leq \|x_n - \breve{p}\| + \vartheta_n\|x_n - x_{n-1}\| + |\delta_n|\,\|x_{n-1} - x_{n-2}\|. \end{aligned} \tag{19}$$
Using (18) and (19), we obtain
$$\begin{aligned} \|x_{n+1} - \breve{p}\| &\leq (1-\beta_n)\|T_n w_n - \breve{p}\| + \beta_n\|T_n z_n - \breve{p}\| \\ &\leq (1-\beta_n)\|w_n - \breve{p}\| + \beta_n\|z_n - \breve{p}\| \\ &\leq \big(1 - \gamma_n\beta_n(1-k)\big)\|w_n - \breve{p}\| + \beta_n\gamma_n\|F(\breve{p}) - \breve{p}\| \\ &\leq \big(1 - \gamma_n\beta_n(1-k)\big)\Big(\|x_n - \breve{p}\| + \vartheta_n\|x_n - x_{n-1}\| + |\delta_n|\,\|x_{n-1} - x_{n-2}\|\Big) + \beta_n\gamma_n\|F(\breve{p}) - \breve{p}\| \\ &\leq \big(1 - \gamma_n\beta_n(1-k)\big)\|x_n - \breve{p}\| + \beta_n\gamma_n\Big(\frac{\vartheta_n}{\beta_n\gamma_n}\|x_n - x_{n-1}\| + \frac{|\delta_n|}{\beta_n\gamma_n}\|x_{n-1} - x_{n-2}\| + \|F(\breve{p}) - \breve{p}\|\Big). \end{aligned}$$
By (13), (14), and (C1), we have $\frac{\vartheta_n}{\beta_n\gamma_n}\|x_n - x_{n-1}\| \to 0$ and $\frac{|\delta_n|}{\beta_n\gamma_n}\|x_{n-1} - x_{n-2}\| \to 0$ as $n \to \infty$, and hence there exist $M_1, M_2 > 0$ such that
$$\frac{\vartheta_n}{\beta_n\gamma_n}\|x_n - x_{n-1}\| \leq M_1 \quad \text{and} \quad \frac{|\delta_n|}{\beta_n\gamma_n}\|x_{n-1} - x_{n-2}\| \leq M_2$$
for all $n \geq 1$. Then,
$$\begin{aligned} \|x_{n+1} - \breve{p}\| &\leq \big(1 - \gamma_n\beta_n(1-k)\big)\|x_n - \breve{p}\| + \beta_n\gamma_n(1-k)\,\frac{M_1 + M_2 + \|F(\breve{p}) - \breve{p}\|}{1-k} \\ &\leq \max\Big\{\|x_n - \breve{p}\|, \frac{M + \|F(\breve{p}) - \breve{p}\|}{1-k}\Big\}, \end{aligned}$$
where $M = M_1 + M_2 > 0$. Thus, by mathematical induction, we deduce that
$$\|x_n - \breve{p}\| \leq \max\Big\{\|x_1 - \breve{p}\|, \frac{M + \|F(\breve{p}) - \breve{p}\|}{1-k}\Big\}$$
for all $n \geq 1$. Hence, the sequence $\{x_n\}$ is bounded, and so are the sequences $\{F(w_n)\}$, $\{T_n w_n\}$, and $\{z_n\}$. Now, by Lemma 2, we obtain
$$\begin{aligned} \|z_n - \breve{p}\|^2 &= \|(1-\gamma_n)(T_n w_n - \breve{p}) + \gamma_n(F(w_n) - F(\breve{p})) + \gamma_n(F(\breve{p}) - \breve{p})\|^2 \\ &\leq \|\gamma_n(F(w_n) - F(\breve{p})) + (1-\gamma_n)(T_n w_n - \breve{p})\|^2 + 2\gamma_n\langle F(\breve{p}) - \breve{p}, z_n - \breve{p}\rangle \\ &\leq \gamma_n\|F(w_n) - F(\breve{p})\|^2 + (1-\gamma_n)\|T_n w_n - \breve{p}\|^2 + 2\gamma_n\langle F(\breve{p}) - \breve{p}, z_n - \breve{p}\rangle \\ &\leq \big(1 - \gamma_n(1-k)\big)\|w_n - \breve{p}\|^2 + 2\gamma_n\langle F(\breve{p}) - \breve{p}, z_n - \breve{p}\rangle \end{aligned} \tag{20}$$
and
$$\begin{aligned} \|w_n - \breve{p}\|^2 &= \|x_n - \breve{p}\|^2 + 2\langle x_n - \breve{p}, \vartheta_n(x_n - x_{n-1}) + \delta_n(x_{n-1} - x_{n-2})\rangle + \|\vartheta_n(x_n - x_{n-1}) + \delta_n(x_{n-1} - x_{n-2})\|^2 \\ &\leq \|x_n - \breve{p}\|^2 + 2\vartheta_n\|x_n - \breve{p}\|\,\|x_{n-1} - x_n\| + 2|\delta_n|\,\|x_n - \breve{p}\|\,\|x_{n-1} - x_{n-2}\| \\ &\quad + \vartheta_n^2\|x_{n-1} - x_n\|^2 + 2\vartheta_n|\delta_n|\,\|x_{n-1} - x_n\|\,\|x_{n-1} - x_{n-2}\| + \delta_n^2\|x_{n-1} - x_{n-2}\|^2. \end{aligned} \tag{21}$$
Also, from Lemma 2 (iii), (20), and (21), we obtain
$$\begin{aligned} \|x_{n+1} - \breve{p}\|^2 &= (1-\beta_n)\|T_n w_n - \breve{p}\|^2 + \beta_n\|T_n z_n - \breve{p}\|^2 - \beta_n(1-\beta_n)\|T_n w_n - T_n z_n\|^2 \\ &\leq (1-\beta_n)\|w_n - \breve{p}\|^2 + \beta_n\|z_n - \breve{p}\|^2 - \beta_n(1-\beta_n)\|T_n w_n - T_n z_n\|^2 \\ &\leq \big(1 - \beta_n\gamma_n(1-k)\big)\|w_n - \breve{p}\|^2 + 2\gamma_n\beta_n\langle F(\breve{p}) - \breve{p}, z_n - \breve{p}\rangle - \beta_n(1-\beta_n)\|T_n w_n - T_n z_n\|^2 \\ &\leq \big(1 - \beta_n\gamma_n(1-k)\big)\Big(\|x_n - \breve{p}\|^2 + 2\vartheta_n\|x_n - \breve{p}\|\,\|x_{n-1} - x_n\| + 2|\delta_n|\,\|x_n - \breve{p}\|\,\|x_{n-1} - x_{n-2}\| \\ &\quad + \vartheta_n^2\|x_{n-1} - x_n\|^2 + 2\vartheta_n|\delta_n|\,\|x_{n-1} - x_n\|\,\|x_{n-1} - x_{n-2}\| + \delta_n^2\|x_{n-1} - x_{n-2}\|^2\Big) \\ &\quad + 2\gamma_n\beta_n\langle F(\breve{p}) - \breve{p}, z_n - \breve{p}\rangle - \beta_n(1-\beta_n)\|T_n w_n - T_n z_n\|^2 \\ &\leq \big(1 - \beta_n\gamma_n(1-k)\big)\|x_n - \breve{p}\|^2 - \beta_n(1-\beta_n)\|T_n w_n - T_n z_n\|^2 + \beta_n\gamma_n(1-k)\,b_n, \end{aligned} \tag{22}$$
where
$$\begin{aligned} b_n = \frac{1}{1-k}\Big(&\frac{2\vartheta_n}{\beta_n\gamma_n}\|x_n - \breve{p}\|\,\|x_{n-1} - x_n\| + \frac{2|\delta_n|}{\beta_n\gamma_n}\|x_n - \breve{p}\|\,\|x_{n-1} - x_{n-2}\| + \frac{\vartheta_n^2}{\beta_n\gamma_n}\|x_{n-1} - x_n\|^2 \\ &+ \frac{2\vartheta_n|\delta_n|}{\beta_n\gamma_n}\|x_{n-1} - x_n\|\,\|x_{n-1} - x_{n-2}\| + \frac{\delta_n^2}{\beta_n\gamma_n}\|x_{n-1} - x_{n-2}\|^2 + 2\langle F(\breve{p}) - \breve{p}, z_n - \breve{p}\rangle\Big). \end{aligned}$$
It follows that
$$\beta_n(1-\beta_n)\|T_n w_n - T_n z_n\|^2 \leq \|x_n - \breve{p}\|^2 - \|x_{n+1} - \breve{p}\|^2 + \beta_n\gamma_n(1-k)M, \tag{23}$$
where $M = \sup\{b_n : n \in \mathbb{N}\}$.
Next, we shall show that the sequence $\{x_n\}$ converges strongly to $\breve{p}$. Take $a_n := \|x_n - \breve{p}\|^2$ and $t_n := \beta_n\gamma_n(1-k)$. From (22), we have
$$a_{n+1} \leq (1 - t_n)a_n + t_n b_n$$
for all $n \in \mathbb{N}$. To apply Lemma 3, we have to show that $\limsup_{i\to\infty} b_{n_i} \leq 0$ whenever a subsequence $\{a_{n_i}\}$ of $\{a_n\}$ satisfies
$$\liminf_{i\to\infty}(a_{n_i+1} - a_{n_i}) \geq 0. \tag{24}$$
Suppose that $\{a_{n_i}\}$ is a subsequence of $\{a_n\}$ satisfying (24). It follows from (23) and (C3) that
$$\begin{aligned} \limsup_{i\to\infty}\beta_{n_i}(1-\beta_{n_i})\|T_{n_i}w_{n_i} - T_{n_i}z_{n_i}\|^2 &\leq \limsup_{i\to\infty}\big(a_{n_i} - a_{n_i+1} + \beta_{n_i}\gamma_{n_i}(1-k)M\big) \\ &\leq \limsup_{i\to\infty}(a_{n_i} - a_{n_i+1}) + (1-k)M\lim_{i\to\infty}\beta_{n_i}\gamma_{n_i} \\ &= -\liminf_{i\to\infty}(a_{n_i+1} - a_{n_i}) \\ &\leq 0. \end{aligned}$$
The condition (C2) and the above inequality lead to
$$\lim_{i\to\infty}\|T_{n_i}w_{n_i} - T_{n_i}z_{n_i}\| = 0. \tag{25}$$
Using (C2) and (C3), and since
$$\beta_{n_i}\|z_{n_i} - T_{n_i}w_{n_i}\| = \beta_{n_i}\gamma_{n_i}\|F(w_{n_i}) - T_{n_i}w_{n_i}\|,$$
we obtain
$$\lim_{i\to\infty}\|z_{n_i} - T_{n_i}w_{n_i}\| = 0. \tag{26}$$
From (25) and (26), we obtain
$$\|z_{n_i} - T_{n_i}z_{n_i}\| \leq \|z_{n_i} - T_{n_i}w_{n_i}\| + \|T_{n_i}w_{n_i} - T_{n_i}z_{n_i}\| \to 0 \tag{27}$$
as $i \to \infty$. In order to prove that $\limsup_{i\to\infty} b_{n_i} \leq 0$, it suffices to show that
$$\limsup_{i\to\infty}\langle F(\breve{p}) - \breve{p}, z_{n_i} - \breve{p}\rangle \leq 0. \tag{28}$$
Since $\{z_{n_i}\}$ is bounded, there exist a subsequence $\{z_{n_{i_j}}\}$ of $\{z_{n_i}\}$ and $y \in H$ such that $z_{n_{i_j}} \rightharpoonup y$ as $j \to \infty$ and
$$\limsup_{i\to\infty}\langle F(\breve{p}) - \breve{p}, z_{n_i} - \breve{p}\rangle = \lim_{j\to\infty}\langle F(\breve{p}) - \breve{p}, z_{n_{i_j}} - \breve{p}\rangle = \langle F(\breve{p}) - \breve{p}, y - \breve{p}\rangle.$$
Since $\{T_n\}$ satisfies the condition (Z) and (27) holds, we obtain $y \in \Gamma$. From $\breve{p} = P_{\Gamma}F(\breve{p})$, we obtain
$$\langle F(\breve{p}) - \breve{p}, z - \breve{p}\rangle \leq 0$$
for all $z \in \Gamma$. In particular, we have
$$\langle F(\breve{p}) - \breve{p}, y - \breve{p}\rangle \leq 0.$$
Hence, we obtain (28). Thus, in view of Lemma 3, $\{x_n\}$ converges to $\breve{p}$, as required. □
In what follows, we impose the following assumptions on the mappings $\psi_1$, $\psi_2$, and $\omega$ associated with the convex bilevel optimization problems (1) and (2).
(A1) $\psi_1 : H \to \mathbb{R}$ is a convex and differentiable function such that $\nabla\psi_1$ is Lipschitz continuous with constant $L_{\psi_1} > 0$, and $\psi_2 : H \to (-\infty, \infty]$ is a proper, lower semi-continuous, and convex function;
(A2) $\omega : \mathbb{R}^n \to \mathbb{R}$ is strongly convex with parameter $\sigma$ such that $\nabla\omega$ is $L_\omega$-Lipschitz continuous, and $s \in \big(0, \tfrac{2}{L_\omega + \sigma}\big)$.
With the above assumptions in place, we propose the following algorithm, called the two-step inertial forward–backward bilevel gradient method (TIFB-BiGM), for solving problems (1) and (2).
The proposition below is attributable to Sabach and Shtern [13] and is critical to our next result.
Proposition 1.
Suppose that $\omega : \mathbb{R}^n \to \mathbb{R}$ is strongly convex with parameter $\sigma > 0$ and that $\nabla\omega$ is Lipschitz continuous with constant $L_\omega$. Then, for all $s \in \big(0, \tfrac{2}{\sigma + L_\omega}\big)$, the mapping $S_s = I - s\nabla\omega$ is a contraction such that
$$\big\|u - s\nabla\omega(u) - \big(v - s\nabla\omega(v)\big)\big\| \leq \sqrt{1 - \frac{2s\sigma L_\omega}{\sigma + L_\omega}}\,\|u - v\|$$
for all $u, v \in \mathbb{R}^n$.
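As a quick sanity check of Proposition 1 (using the outer objective that appears later in Section 4), take $\omega(x) = \frac{1}{2}\|x\|_2^2$, so that $\nabla\omega = I$ and $\sigma = L_\omega = 1$. Then $S_s = (1-s)I$ for $s \in (0,1)$ and
$$\|S_s u - S_s v\| = (1-s)\|u - v\| \leq \sqrt{1-s}\,\|u - v\| = \sqrt{1 - \frac{2s\sigma L_\omega}{\sigma + L_\omega}}\,\|u - v\|,$$
which is consistent with the stated contraction bound.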
Theorem 2.
The sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to $\breve{p} \in \Lambda$, where $\Lambda$ is the set of all solutions of (1) and $\breve{p} = P_{S^*}(I - s\nabla\omega)(\breve{p})$, provided that all conditions of Theorem 1 hold.
Algorithm 2 Two-Step Inertial Forward–Backward Bilevel Gradient Method (TIFB-BiGM)
Initialization: Let $\{\beta_n\}, \{\gamma_n\} \subset [0,1]$, $\{\tau_n\} \subset \mathbb{R}_+$, and let $\{\mu_n\}, \{\rho_n\} \subset \mathbb{R}_{>0}$ be bounded sequences. Take $x_{-1}, x_0, x_1 \in H$ arbitrarily.
Let $\{c_n\} \subset \big(0, \tfrac{2}{L_{\psi_1}}\big)$ with $c_n \to c$ as $n \to \infty$, where $c \in \big(0, \tfrac{2}{L_{\psi_1}}\big)$. For $n \in \mathbb{N}$:
Step 1. Compute the inertial step:
$$\vartheta_n = \begin{cases} \min\Big\{\mu_n, \dfrac{\tau_n}{\|x_n - x_{n-1}\|}\Big\} & \text{if } x_n \neq x_{n-1}, \\ \mu_n & \text{otherwise,} \end{cases}$$
$$\delta_n = \begin{cases} \max\Big\{-\rho_n, \dfrac{-\tau_n}{\|x_{n-1} - x_{n-2}\|}\Big\} & \text{if } x_{n-1} \neq x_{n-2}, \\ -\rho_n & \text{otherwise,} \end{cases}$$
$$w_n = x_n + \vartheta_n(x_n - x_{n-1}) + \delta_n(x_{n-1} - x_{n-2}).$$
Step 2. Compute:
$$z_n = (1 - \gamma_n)\,\mathrm{prox}_{c_n\psi_2}\big(I - c_n\nabla\psi_1\big)(w_n) + \gamma_n\big(I - s\nabla\omega\big)(w_n),$$
$$x_{n+1} = (1 - \beta_n)\,\mathrm{prox}_{c_n\psi_2}\big(I - c_n\nabla\psi_1\big)(w_n) + \beta_n\,\mathrm{prox}_{c_n\psi_2}\big(I - c_n\nabla\psi_1\big)(z_n).$$
Proof. 
Put $F = I - s\nabla\omega$ and $T_n = \mathrm{prox}_{c_n\psi_2}(I - c_n\nabla\psi_1)$, where $c_n \in \big(0, \tfrac{2}{L_{\psi_1}}\big)$. Then, by Proposition 1, $F$ is a contraction mapping. We also know that $T_n$ is nonexpansive. Using Theorem 1, we conclude that $x_n \to \breve{p} \in \Gamma$, where $\breve{p} = P_{\Gamma}F(\breve{p})$. It is noted that $\Gamma = \bigcap_{n=1}^{\infty}\mathrm{Fix}(T_n) = S^*$. Then, for all $x \in S^*$, we have
$$0 \geq \langle F(\breve{p}) - \breve{p}, x - \breve{p}\rangle = \langle \breve{p} - s\nabla\omega(\breve{p}) - \breve{p}, x - \breve{p}\rangle = -s\langle \nabla\omega(\breve{p}), x - \breve{p}\rangle.$$
Dividing the above inequality by $-s$, we obtain
$$\langle \nabla\omega(\breve{p}), x - \breve{p}\rangle \geq 0$$
for all $x \in S^*$. Hence, $\breve{p} \in \Lambda$, so $x_n \to \breve{p} \in \Lambda$. This completes the proof. □
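To illustrate how Algorithm 2 is instantiated, the sketch below solves a toy bilevel problem with $\psi_1(x) = \frac{1}{2}\|v - Ax\|_2^2$, $\psi_2(x) = \zeta\|x\|_1$, and $\omega(x) = \frac{1}{2}\|x\|_2^2$; the random data and all parameter sequences are illustrative assumptions and do not reproduce the experimental settings of Section 4.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tifb_bigm(A, v, zeta, s, n_iter=300):
    # Sketch of Algorithm 2 (TIFB-BiGM) for the inner problem
    #   min 0.5*||v - A x||_2^2 + zeta*||x||_1   and the outer problem   min 0.5*||x||_2^2.
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of grad psi_1
    c = 1.0 / L                                        # constant choice c_n = c in (0, 2/L)
    T = lambda x: soft_threshold(x - c * (A.T @ (A @ x - v)), c * zeta)  # forward-backward map
    F = lambda x: (1.0 - s) * x                        # F = I - s*grad omega for omega = 0.5*||x||^2
    x_mm = x_m = x = np.zeros(A.shape[1])
    for n in range(1, n_iter + 1):
        beta, gamma = 0.99 * n / (n + 1), 1.0 / (50 * n)   # illustrative beta_n, gamma_n
        tau, mu, rho = 1.0 / n ** 2, 0.9, 1.0 / n ** 2     # illustrative tau_n, mu_n, rho_n
        d1, d2 = np.linalg.norm(x - x_m), np.linalg.norm(x_m - x_mm)
        theta = min(mu, tau / d1) if d1 > 0 else mu
        delta = max(-rho, -tau / d2) if d2 > 0 else -rho
        w = x + theta * (x - x_m) + delta * (x_m - x_mm)
        Tw = T(w)
        z = (1 - gamma) * Tw + gamma * F(w)
        x_mm, x_m, x = x_m, x, (1 - beta) * Tw + beta * T(z)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
v = A @ np.concatenate([rng.standard_normal(8), np.zeros(112)])
x_rec = tifb_bigm(A, v, zeta=0.1, s=0.01)
```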

4. Application to Image Recovery

Algorithm 2 will now be applied to the problem of image restoration. The algorithm’s performance will be compared to that of several existing methods, such as IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM. Image restoration, also known as image deblurring or image deconvolution, is the process of removing or minimizing degradations (blur) in an image. Efforts along these lines began in the 1950s, and applications have been found in a number of areas, including consumer photography, scientific exploration, and image/video decoding; see [36,37]. Mathematically, image restoration can be modeled with the equation
$$v = Ax + \breve{b}, \tag{34}$$
where $v \in \mathbb{R}^m$ is the observed image, $A \in \mathbb{R}^{m \times n}$ is the blurring matrix, $x \in \mathbb{R}^n$ is the original image, and $\breve{b}$ is additive noise. The objective is to recover the original image $\bar{x} \in \mathbb{R}^n$ satisfying (34) by minimizing the value of $\breve{b}$ using the least squares method, as shown in Equation (35). This method minimizes the squared difference between $v$ and $Ax$, defined as follows:
$$\min_{x}\|v - Ax\|_2^2, \tag{35}$$
where $\|\cdot\|_2$ is the Euclidean norm. Many iterative schemes, such as the Richardson iteration (see [38]), can be used to estimate the solution of (35). The problem stated in Equation (35) is considered ill-posed because there are more unknown variables than observations, resulting in a norm value that is too large to be meaningful. This issue is discussed in references [39,40]. To address this problem, various regularization methods have been introduced to improve the least squares problem. One commonly used method is Tikhonov regularization, which involves minimizing
$$\min_{x}\|v - Ax\|_2^2 + \zeta\|Lx\|_2^2, \tag{36}$$
where $\zeta$ is a positive parameter known as the regularization parameter, $\|\cdot\|_1$ is the $l_1$-norm, $\|\cdot\|_2$ is the Euclidean norm, and $L \in \mathbb{R}^{m \times n}$ is called the Tikhonov matrix. In the standard form, $L$ is set to be the identity. A well-known model for solving problem (34) is the least absolute shrinkage and selection operator (LASSO) [41], which is defined by the expression
$$\min_{x}\|v - Ax\|_2^2 + \zeta\|x\|_1. \tag{37}$$
The restoration of RGB images presents a challenge for the model (37) due to the significant size of the matrix $A$ and its associated elements, which can make computing the multiplication $Ax$ and $\|x\|_1$ quite expensive. To address this, researchers in this field commonly apply a 2-D fast Fourier transform to the images, resulting in the following modified version of the model (37) that overcomes this issue:
$$\min_{x}\|v - Ax\|_2^2 + \zeta\|Wx\|_1. \tag{38}$$
The blurring operation $A$, commonly selected as $A = RW$, plays a crucial role in the problem (34). Here, $R$ represents the blurring matrix, while $W$ denotes the two-dimensional fast Fourier transform. The observed image $v \in \mathbb{R}^{m \times n}$ is affected by both blurring and noise, with its dimensions being $m \times n$.
Now, let $S^*$ be the set of all solutions of (38). Among the solutions in $S^*$, we would like to select the solution $x^* \in S^*$ that minimizes
$$\min_{x^* \in S^*}\frac{1}{2}\|x^*\|_2^2. \tag{39}$$
We consider two RGB images (Wat Chedi Luang [42] and Matsue Castle), each of size $256 \times 256$, as the original images (see Figure 1). The pictures used in this experiment were created by the third author. In order to simulate blurring, we convolved the images with a Gaussian blur filter of size $9 \times 9$ and standard deviation $\sigma = 4$, and added noise of magnitude $10^{-4}$.
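A minimal sketch of how such a blurred, noisy observation of the form (34) can be simulated with NumPy is given below; the circular (FFT-based) convolution, the stand-in random image, and the noise model are illustrative assumptions rather than the exact pipeline used for the experiments.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=4.0):
    # size x size Gaussian blur kernel, normalized to sum to one.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_and_noise(img, kernel, noise_std=1e-4, seed=0):
    # Simulate v = A x + b: per-channel circular convolution in the Fourier domain
    # plus additive Gaussian noise.
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    kpad = np.zeros((h, w))
    kpad[:kernel.shape[0], :kernel.shape[1]] = kernel
    kpad = np.roll(kpad, (-(kernel.shape[0] // 2), -(kernel.shape[1] // 2)), axis=(0, 1))
    K = np.fft.fft2(kpad)
    blurred = np.stack([np.real(np.fft.ifft2(np.fft.fft2(img[..., ch]) * K))
                        for ch in range(img.shape[2])], axis=-1)
    return blurred + noise_std * rng.standard_normal(img.shape)

x_true = np.random.default_rng(1).random((256, 256, 3))   # stand-in for an RGB test image
v_obs = blur_and_noise(x_true, gaussian_kernel())
```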
Peak signal-to-noise ratio (PSNR) [43] and signal-to-noise ratio (SNR) [44] were used as the metrics for evaluating the performance of each algorithm. The PSNR and SNR at $x_n$ are given by
$$\mathrm{PSNR}(x_n) = 10\log_{10}\frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \tag{40}$$
$$\mathrm{SNR}(x_n) = 10\log_{10}\frac{\|x - \bar{x}\|_2^2}{\|x_n - \bar{x}\|_2^2}, \tag{41}$$
where $\mathrm{MAX}$ is the maximum pixel value (usually 255 in 8-bit grayscale images) and $\mathrm{MSE} = \frac{1}{256^2}\|x_n - x\|_2^2$ is the mean squared error between the original image $x$ and the distorted image $x_n$. Both PSNR and SNR are expressed in decibels (dB) as a logarithmic measure of the signal-to-noise or signal-to-error ratio.
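The two metrics can be computed as in the sketch below; note that the SNR here follows the common convention of comparing the energy of the ground-truth image with the energy of the reconstruction error, which is an assumption on our part since normalizations of SNR vary across the literature.

```python
import numpy as np

def psnr(x_n, x_true, max_val=255.0):
    # PSNR = 10*log10(MAX^2 / MSE), with the MSE taken over all pixels and channels.
    mse = np.mean((x_n - x_true) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def snr(x_n, x_true):
    # SNR in dB: ratio of signal energy to reconstruction-error energy
    # (one common convention; the paper's exact normalization may differ).
    err = np.sum((x_n - x_true) ** 2)
    return 10.0 * np.log10(np.sum(x_true ** 2) / err)
```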
In image restoration, both PSNR and SNR are commonly used as metrics to assess the performance of deblurring results. However, it is important to note that these metrics provide different types of information.
PSNR measures the quality of a deblurred image by comparing it to the original image and evaluating the amount of noise introduced during the restoration process. It calculates the ratio between the peak signal power (the maximum possible value for the pixel) and the mean squared error (MSE) between the original and deblurred images. Higher PSNR values indicate better restoration quality as they indicate a lower level of distortion or noise.
On the other hand, SNR measures the ratio between the signal power and the noise power in the deblurred image. It quantifies the preservation of the original signal after the restoration process. Higher SNR values indicate less noise in the deblurred image.
While both PSNR and SNR are useful metrics, they focus on different aspects of image restoration. PSNR primarily considers the visual quality and fidelity of the deblurred image compared to the original, while SNR focuses more on the amount of noise present in the deblurred image.
To comprehensively evaluate the performance of a deblurring algorithm, it is recommended to consider both PSNR and SNR, as they provide complementary information about the restoration quality.
We now employ our proposed algorithm (TIFB-BiGM) from Theorem 2 to solve the convex bilevel optimization problems (38) and (39). In our experiments, the algorithm developed in this paper (TIFB-BiGM), as well as the others discussed, is applied to solve the convex bilevel optimization problems (38) and (39), where $\omega(x) = \frac{1}{2}\|x\|_2^2$, $\psi_1(x) = \|v - Ax\|_2^2$, $\psi_2(x) = \zeta\|Wx\|_1$, and $\zeta = 5 \times 10^{-5}$. The observed images are the blurred images. We compute the Lipschitz constant $L_{\psi_1}$ by using the maximum eigenvalue of the matrix $A^{\top}A$.
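For a dense matrix, the largest eigenvalue of $A^{\top}A$ can be estimated by power iteration, as in the sketch below; whether a factor of two is then included in $L_{\psi_1}$ depends on how $\psi_1$ is scaled (with $\psi_1(x) = \|v - Ax\|_2^2$ the gradient $2A^{\top}(Ax - v)$ has constant $2\lambda_{\max}(A^{\top}A)$), and this routine is only an illustrative helper, not the authors' implementation for the structured blurring operator.

```python
import numpy as np

def max_eig_AtA(A, n_iter=100, seed=0):
    # Power iteration estimate of the largest eigenvalue of A^T A.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[1])
    for _ in range(n_iter):
        u = A.T @ (A @ u)
        u /= np.linalg.norm(u)
    return float(u @ (A.T @ (A @ u)))

# With psi_1(x) = ||v - A x||_2^2, take L_psi1 = 2 * max_eig_AtA(A);
# with the 0.5-scaled variant, L_psi1 = max_eig_AtA(A).
```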
For the first experiment, the parameters of TIFB-BiGM are chosen as follows: $\beta_n = \frac{0.99n}{n+1}$, $\gamma_n = \frac{1}{50n}$, $c_n = \frac{1}{L_{\psi_1}}$, $\tau_n = \frac{10^{14}}{n^2}$, and $s = 0.01$. The results of recovering the “Wat Chedi Luang” image of size $256 \times 256$ using TIFB-BiGM with different inertial parameters are shown in Table 1 and Table 2. We observe from Table 1 and Table 2 that choosing $\mu_n$ tending to 1 and $\rho_n$ tending to 0, namely
$$\mu_n = \frac{0.99n}{n + 0.001} \quad \text{and} \quad \rho_n = \frac{1}{n^2},$$
gives the highest values of PSNR and SNR for our method.
The parameter values for each algorithm were chosen for optimum performance, based on the published literature. The value for $\gamma_n$ in Table 3 is the best choice for BiG-SAM considered in [13]. For iBiG-SAM, $\alpha = 3$ is the best choice over other values considered in [17], and the same authors found, based on their numerical experiments, $\mu_n = \frac{n}{n+1}$ to be the best choice for FVFBA.
The following experiments demonstrate Algorithm 2’s efficiency for image restoration in comparison to IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM using PSNR and SNR as measurements.
The efficiency of restoring images using various algorithms under different iterations are illustrated in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. The results indicate that TIFB-BiGM achieves higher PSNR and SNR values than IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM. Therefore, our algorithm demonstrates superior convergence behavior compared to the aforementioned methods.

5. Conclusions

In this paper, algorithmic solutions to a family of convex bilevel optimization problems are developed and applied to image processing. An interesting connection between minimization problems and fixed-point methods is observed. We first present a modified two-step inertial viscosity algorithm (MTIVA) for finding a common fixed point of a family of nonexpansive operators in a Hilbert space and prove strong convergence under relatively mild conditions. This is then applied to the solution of a convex bilevel optimization problem by introducing a novel two-step inertial forward–backward bilevel gradient method (TIFB-BiGM). The main results are then employed in the solution of an image restoration problem. Through careful comparative analysis, we demonstrate that our algorithm outperforms several existing algorithms, such as IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM, in terms of image recovery efficiency, as verified through numerical experiments conducted under specific parameter settings.
There are several potential avenues for future research. Firstly, investigating the adaptability and performance of the proposed algorithm in different image processing tasks could provide valuable insights. Additionally, one might explore the algorithm’s scalability to large-scale image datasets or investigate the incorporation of parallel computing techniques that could enhance the algorithm’s computational efficiency. Moreover, conducting comparative studies with other state-of-the-art image restoration algorithms would provide a comprehensive evaluation of the algorithm’s strengths and limitations. Finally, exploring the applicability of the proposed algorithm to other domains beyond image processing, such as computer vision or signal processing, would broaden its potential impact.

Author Contributions

Conceptualization, S.S.; formal analysis, R.W. and S.S.; investigation, R.W. and K.J.; methodology, R.W. and S.S.; software, K.J.; supervision, S.S.; validation, R.W. and S.S.; writing—original draft, R.W. and K.J.; and writing—review and editing, R.W. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

NSRF via the program Management Unit for Human Resources & Institutional Development, Research, and Innovation (grant number B05F640183).

Data Availability Statement

Not applicable.

Acknowledgments

This research has received funding support from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research, and Innovation (grant number B05F640183), and it was also partially supported by Chiang Mai University and Ubon Ratchathani University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Franceschi, L.; Frasconi, P.; Salzo, S.; Grazzi, R.; Pontil, M. Bilevel programming for hyperparameter optimization and meta-learning. In Proceedings of the International Conference on Machine Learning (ICML), Stockholm, Sweden, 10–15 July 2018; pp. 1568–1577.
2. Shaban, A.; Cheng, C.-A.; Hatch, N.; Boots, B. Truncated back-propagation for bilevel optimization. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), Okinawa, Japan, 16–18 April 2019; pp. 1723–1732.
3. Kunapuli, G.; Bennett, K.P.; Hu, J.; Pang, J.-S. Classification model selection via bilevel programming. Optim. Methods Softw. 2008, 23, 475–489.
4. Flamary, R.; Rakotomamonjy, A.; Gasso, G. Learning constrained task similarities in graph regularized multitask learning. In Regularization, Optimization, Kernels, and Support Vector Machines; Chapman and Hall/CRC: Boca Raton, FL, USA, 2014; Volume 103, ISBN 978-0367658984.
5. Konda, V.R.; Tsitsiklis, J.N. Actor-critic algorithms. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Denver, CO, USA, 30 November–2 December 1999; pp. 1008–1014.
6. Bruck, R.E., Jr. On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. J. Math. Anal. Appl. 1977, 61, 159–164.
7. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
8. Janngam, K.; Suantai, S. An inertial modified S-Algorithm for convex minimization problems with directed graphs and their applications in classification problems. Mathematics 2022, 10, 4442.
9. Cabot, A. Proximal point algorithm controlled by a slowly vanishing term: Applications to hierarchical minimization. SIAM J. Optim. 2005, 15, 555–572.
10. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
11. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
12. Beck, A.; Sabach, S. A first order method for finding minimal norm-like solutions of convex optimization problems. Math. Program. 2014, 147, 25–46.
13. Sabach, S.; Shtern, S. A first order method for solving convex bilevel optimization problems. SIAM J. Optim. 2017, 27, 640–660.
14. Nesterov, Y.E. A method for solving the convex programming problem with convergence rate O(1/k2). Sov. Math. Dokl. 1983, 27, 372–376.
15. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
16. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
17. Shehu, Y.; Vuong, P.T.; Zemkoho, A. An inertial extrapolation method for convex simple bilevel optimization. Optim. Methods Softw. 2019, 36, 1–19.
18. Moudafi, A. Viscosity approximation method for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
19. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
20. Takahashi, W. Viscosity approximation methods for countable families of nonexpansive mappings in Banach spaces. Nonlinear Anal. 2009, 70, 719–734.
21. Jailoka, P.; Suantai, S.; Hanjing, A. A fast viscosity forward–backward algorithm for convex minimization problems with an application in image recovery. Carpathian J. Math. 2021, 37, 449–461.
22. Janngam, K.; Suantai, S.; Cho, Y.J.; Kaewkhao, A.; Wattanataweekul, R. A Novel Inertial Viscosity Algorithm for Bilevel Optimization Problems Applied to Classification Problems. Mathematics 2023, 11, 3241.
23. Poon, C.; Liang, J. Geometry of First-order Methods and Adaptive Acceleration. arXiv 2020, arXiv:2003.03910.
24. Liang, J. Convergence Rates of First-Order Operator Splitting Methods. Ph.D. Thesis, Normandie Université, Normandie, France, 2016.
25. Poon, C.; Liang, J. Trajectory of Alternating Direction Method of Multiplier and Adaptive Acceleration. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019.
26. Polyak, B.T. Introduction to Optimization; Optimization Software, Publication Division: New York, NY, USA, 1987.
27. Combettes, P.L.; Glaudin, L. Quasi-Nonexpansive Iterations on the Affine Hull of Orbits: From Mann's Mean Value Algorithm to Inertial Methods. SIAM J. Optim. 2017, 27, 2356–2380.
28. Nakajo, K.; Shimoji, K.; Takahashi, W. On strong convergence by the hybrid method for families of mappings in Hilbert spaces. Nonlinear Anal. 2009, 71, 112–119.
29. Aoyama, K.; Kimura, Y. Strong convergence theorems for strongly nonexpansive sequences. Appl. Math. Comput. 2011, 217, 7537–7545.
30. Aoyama, K.; Kohsaka, F.; Takahashi, W. Strong convergence theorems by shrinking and hybrid projection methods for relatively nonexpansive mappings in Banach spaces. Nonlinear Anal. Convex Anal. 2009, 10, 7–26.
31. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. Comptes Rendus Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899.
32. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
33. Bussaban, L.; Suantai, S.; Kaewkhao, A. A parallel inertial S-iteration forward–backward algorithm for regression and classification problems. Carpathian J. Math. 2020, 36, 35–44.
34. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
35. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 724–750.
36. Maurya, A.; Tiwari, R. A Novel Method of Image Restoration by using Different Types of Filtering Techniques. Int. J. Eng. Sci. Innov. Technol. 2014, 3, 124–129.
37. Suseela, G.; Basha, S.A.; Babu, K.P. Image Restoration Using Lucy Richardson Algorithm For X-Ray Images. IJISET Int. J. Innov. Sci. Eng. Technol. 2016, 3, 280–285.
38. Vogel, C.R. Computational Methods for Inverse Problems; SIAM: Philadelphia, PA, USA, 2002.
39. Eldén, L. Algorithms for the Regularization of Ill-Conditioned Least Squares Problems. BIT Numer. Math. 1977, 17, 134–145.
40. Hansen, P.C.; Nagy, J.G.; O'Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering (Fundamentals of Algorithms 3); SIAM: Philadelphia, PA, USA, 2006.
41. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B Methodol. 1996, 58, 267–288.
42. Yatakoat, P.; Suantai, S.; Hanjing, A. On Some Accelerated Optimization Algorithms Based on Fixed Point and Linesearch Techniques for Convex Minimization Problems with Applications. Adv. Cont. Discr. Mod. 2022, 2022, 43:1–43:13.
43. Thung, K.; Raveendran, P. A survey of image quality measures. In Proceedings of the 2009 International Conference for Technical Postgraduates (TECHPOS), Kuala Lumpur, Malaysia, 14–15 December 2009; pp. 1–4.
44. Chen, D.Q.; Zhang, H.; Cheng, L.Z. A fast fixed-point algorithm for total variation deblurring and segmentation. J. Math. Imaging Vis. 2012, 43, 167–179.
Figure 1. Original images: (a) Wat Chedi Luang, (b) Matsue Castle.
Figure 2. The graphs of PSNR of each algorithm for Wat Chedi Luang.
Figure 3. The graphs of SNR of each algorithm for Wat Chedi Luang.
Figure 4. The graphs of PSNR of each algorithm for Matsue Castle.
Figure 5. The graphs of SNR of each algorithm for Matsue Castle.
Figure 6. Results for deblurring “Wat Chedi Luang” image using various algorithms at the 500th iteration. (a) Gaussian blurred image, (b) TIFB-BiGM (PSNR = 29.7216, SNR = 25.6962), (c) IVMSPA (PSNR = 29.5375, SNR = 25.5121), (d) FVFBA (PSNR = 28.9243, SNR = 24.8989), (e) BiG-SAM (PSNR = 24.7118, SNR = 20.6864), and (f) iBiG-SAM (PSNR = 27.0172, SNR = 22.9918).
Figure 7. Results for deblurring “Matsue Castle” image using various algorithms at the 500th iteration. (a) Gaussian blurred image, (b) TIFB-BiGM (PSNR = 30.9830, SNR = 27.5075), (c) IVMSPA (PSNR = 30.8212, SNR = 27.3457), (d) FVFBA (PSNR = 30.43625, SNR = 26.9636), (e) BiG-SAM (PSNR = 25.4625, SNR = 21.9870), and (f) iBiG-SAM (PSNR = 27.9712, SNR = 24.4957).
Table 1. PSNR values for restoration of the “Wat Chedi Luang” image by TIFB-BiGM after 300 iterations for different choices of the parameters $\mu_n$ and $\rho_n$.

$\rho_n$ \ $\mu_n$ |   0.1   |   0.3   |   0.5   |   0.9   | $\frac{0.99n}{n+0.001}$ |    1
0.1                | 22.9755 | 23.2143 | 23.5185 | 24.6769 |        25.3398          | 25.4489
0.3                | 22.7791 | 22.9764 | 23.2154 | 23.9454 |        24.2129          | 24.2479
0.5                | 22.6116 | 22.7799 | 22.9773 | 23.5215 |        23.6923          | 23.7133
0.9                | 22.3362 | 22.4662 | 22.6129 | 22.9789 |        23.0805          | 23.0924
$\frac{1}{n^2}$    | 23.0847 | 23.3513 | 23.7038 | 25.4271 |        26.2116          | 24.9267
Table 2. SNR values for restoration of the “Wat Chedi Luang” image by TIFB-BiGM after 300 iterations for different choices of the parameters $\mu_n$ and $\rho_n$.

$\rho_n$ \ $\mu_n$ |   0.1   |   0.3   |   0.5   |   0.9   | $\frac{0.99n}{n+0.001}$ |    1
0.1                | 18.9503 | 19.1890 | 19.4932 | 20.6516 |        21.3144          | 21.4236
0.3                | 18.7539 | 18.9510 | 19.1901 | 19.9200 |        20.1876          | 20.2225
0.5                | 18.5864 | 18.7545 | 18.9519 | 19.4961 |        19.6670          | 19.6879
0.9                | 18.3110 | 18.4408 | 18.5875 | 18.9536 |        19.0551          | 19.0670
$\frac{1}{n^2}$    | 19.0595 | 18.3260 | 19.6784 | 21.4018 |        22.1913          | 20.9014
Table 3. Parameters selection of TIFB-BiGM, IVMSPA, FVFBA, BiG-SAM, and iBiG-SAM.
Method | Setting
TIFB-BiGM | $s = 0.01$, $c_n = \frac{1}{L_{\psi_1}}$, $\beta_n = \frac{0.99n}{n+1}$, $\gamma_n = \frac{1}{50n}$, $\tau_n = \frac{10^{18}}{n^2}$, $\mu_n = \frac{0.99n}{n+0.001}$, $\rho_n = \frac{1}{n^2}$
IVMSPA | $s = 0.01$, $c_n = \frac{1}{L_f}$, $\alpha_n = \frac{1}{50n}$, $\beta_n = \gamma_n = 0.5$, $\tau_n = \frac{10^{20}}{n}$; $\theta_n = \min\big\{\frac{p_n - 1}{p_{n+1}}, \frac{\alpha_n\tau_n}{\|x_n - x_{n-1}\|}\big\}$ if $x_n \neq x_{n-1}$ and $\theta_n = \frac{p_n - 1}{p_{n+1}}$ otherwise, where $p_1 = 1$ and $p_{n+1} = \frac{1 + \sqrt{1 + 4p_n^2}}{2}$
FVFBA | $c_n = \frac{n}{n+1}$, $\beta_n = \frac{0.99n}{n+1}$, $\gamma_n = \frac{1}{50n}$, $\tau_n = \frac{10^{15}}{n^2}$; $\theta_n = \min\big\{\frac{n}{n+1}, \frac{\tau_n}{\|x_n - x_{n-1}\|}\big\}$ if $x_n \neq x_{n-1}$ and $\theta_n = \frac{n}{n+1}$ otherwise
BiG-SAM | $\lambda = 0.01$, $c = \frac{1}{L_{\psi_1}}$, $\gamma_n = \frac{2(0.1)\frac{1}{n}}{2 + \frac{cL_{\psi_1}}{4}}$
iBiG-SAM | $\lambda = 0.01$, $c = \frac{1}{L_{\psi_1}}$, $\gamma_n = \frac{2(0.1)}{2 + \frac{cL_{\psi_1}}{4}}$, $\beta_n = \frac{\gamma_n}{n^{0.01}}$; $\theta_n = \min\big\{\frac{n}{n+\alpha-1}, \frac{\beta_n}{\|x_n - x_{n-1}\|}\big\}$ if $x_n \neq x_{n-1}$ and $\theta_n = \frac{n}{n+\alpha-1}$ otherwise