Article

On Bilevel Monotone Inclusion and Variational Inequality Problems

by Austine Efut Ofem 1,*, Jacob Ashiwere Abuchu 1,2, Hossam A. Nabwey 3,4,*, Godwin Chidi Ugwunnadi 5,6 and Ojen Kumar Narain 1
1 School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
2 Department of Mathematics, University of Calabar, Calabar 540271, Nigeria
3 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
4 Department of Basic Engineering, Faculty of Engineering, Menoufia University, Shibin el Kom 32511, Egypt
5 Department of Mathematics, University of Eswatini, Kwaluseni M201, Eswatini
6 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Pretoria 0204, South Africa
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(22), 4643; https://doi.org/10.3390/math11224643
Submission received: 12 October 2023 / Revised: 8 November 2023 / Accepted: 10 November 2023 / Published: 14 November 2023
(This article belongs to the Special Issue Variational Inequality and Mathematical Analysis)

Abstract:
In this article, the problem of solving a strongly monotone variational inequality problem over the solution set of a monotone inclusion problem in the setting of real Hilbert spaces is considered. To solve this problem, two methods, which are improvements and modifications of the Tseng splitting method, and projection and contraction methods, are presented. These methods are equipped with inertial terms to improve their speed of convergence. The strong convergence results of the suggested methods are proved under some standard assumptions on the control parameters. Also, strong convergence results are achieved without prior knowledge of the operator norm. Finally, the main results of this research are applied to solve bilevel variational inequality problems, convex minimization problems, and image recovery problems. Some numerical experiments to show the efficiency of our methods are conducted.

1. Introduction

Let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$ with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $F : K \to H$ be an operator. The classical variational inequality problem (VIP) is formulated as follows: find $p \in K$ such that
$$\langle Fp, q - p \rangle \geq 0, \quad \forall q \in K. \tag{1}$$
We denote by $V(K, F)$ the solution set of the VIP (1). Problem (1) has a wide range of applications, and several methods for solving it have been developed by many researchers (see [1,2,3] and the references therein).
On the other hand, the monotone inclusion problem (MIP) is formulated as follows: find $p \in H$ such that
$$0 \in (D + E)p, \tag{2}$$
where $H$ is a real Hilbert space, $D : H \to H$ is a single-valued monotone operator, and $E : H \to 2^H$ is a maximal monotone operator. We denote by $(D + E)^{-1}(0)$ the solution set of the MIP (2); it is referred to as the set of zeros of $D + E$. Several optimization problems can be reformulated as the MIP (2), including convex minimization problems, equilibrium problems, image/signal processing problems, DC programming problems, split feasibility problems, and variational inequality problems; see [4,5,6]. The numerous applications of this problem have attracted the attention of a large number of researchers in the last few years, and many methods for solving it have been developed; see [7,8,9]. One of the first methods for solving this problem is the forward–backward algorithm (FBA), which generates a sequence $\{p_k\}$ as follows:
$$p_{k+1} = (I + \lambda_k E)^{-1}(I - \lambda_k D)p_k, \tag{3}$$
where $\lambda_k > 0$ is the step size, and $(I - \lambda_k D)$ and $(I + \lambda_k E)^{-1}$ are called the forward and backward (resolvent) operators, respectively. The FBA for solving the MIP was independently studied by Lions and Mercier [6] and Passty [10]. In recent years, the convergence analysis and modifications of this method have been studied extensively by many authors; see [4,11,12] and the references therein. We should note that the weak convergence of method (3) requires the operator $D$ to be strongly monotone, which is a strong assumption. In order to weaken this restriction, several methods have been developed; see [6,11,13] and the references therein. One of the first methods proposed to weaken this assumption was introduced by Tseng [13]. This method is called the Tseng splitting algorithm; it is also known as the forward–backward–forward method. Precisely, this method is defined as follows:
$$p_1 \in H, \quad q_k = (I + \lambda_k E)^{-1}(I - \lambda_k D)p_k, \quad p_{k+1} = q_k + \lambda_k(Dp_k - Dq_k), \tag{4}$$
where $\{\lambda_k\}$ is the step size, which can be updated automatically by an Armijo-type line-search technique. Tseng proved the weak convergence of method (4) when the operator $D$ is Lipschitz continuous and monotone and the operator $E$ is maximal monotone. In [14], Zhang and Wang merged the FBA (3) with the projection and contraction method to obtain an iterative method that also overcomes the limitations of the FBA. Precisely, this method is defined as follows:
$$p_1 \in H, \quad q_k = (I + \lambda_k E)^{-1}(I - \lambda_k D)p_k, \quad p_{k+1} = p_k - \gamma\delta_k m_k, \tag{5}$$
where $m_k = p_k - q_k - \lambda_k(Dp_k - Dq_k)$, $\delta_k = \frac{\langle p_k - q_k, m_k \rangle}{\|m_k\|^2}$, $\gamma \in (0, 2)$, $\{\lambda_k\}$ is a control sequence, the operator $D$ is monotone and Lipschitz continuous, and $E$ is a maximal monotone operator.
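To make these schemes concrete, the following minimal NumPy sketch (ours, not taken from the paper) instantiates the forward–backward iteration (3) and Tseng's method (4) for the inclusion $0 \in (D + E)p$ with $D = \nabla\left(\frac{1}{2}\|Ap - b\|^2\right)$ and $E = \partial(\mu\|\cdot\|_1)$, whose resolvent is componentwise soft-thresholding; the matrix, vector, and step sizes are all illustrative assumptions.

```python
import numpy as np

# Toy monotone inclusion 0 in (D + E)p: D = grad of 0.5*||Ap - b||^2 (monotone,
# Lipschitz), E = subdifferential of mu*||.||_1 (maximal monotone).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
mu = 0.1

D = lambda p: A.T @ (A @ p - b)          # single-valued forward operator
L = np.linalg.norm(A.T @ A, 2)           # Lipschitz constant of D

def resolvent_E(p, step):                # (I + step*E)^(-1): soft-thresholding
    return np.sign(p) * np.maximum(np.abs(p) - step * mu, 0.0)

def forward_backward(p, step, iters=500):    # scheme (3)
    for _ in range(iters):
        p = resolvent_E(p - step * D(p), step)
    return p

def tseng(p, step, iters=500):               # scheme (4): one extra forward step
    for _ in range(iters):
        q = resolvent_E(p - step * D(p), step)
        p = q + step * (D(p) - D(q))
    return p

p_fb = forward_backward(np.zeros(10), 1.0 / L)
p_ts = tseng(np.zeros(10), 0.9 / L)
print(np.linalg.norm(p_fb - p_ts))       # both approximate the same zero
```

Note that (4) requires only monotonicity and Lipschitz continuity of $D$, at the price of one extra forward evaluation per iteration.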
It is important to note that algorithms (4) and (5) converge only weakly in infinite-dimensional spaces. However, in applications such as machine learning and CT reconstruction, strong convergence is more desirable in infinite-dimensional settings [12]. Therefore, it is necessary to modify (3) so that it achieves strong convergence in real Hilbert spaces. In the last two decades, many modifications of the forward–backward method have been constructed to obtain strong convergence results in real Hilbert spaces; see [11,12,15,16] and the references therein.
In recent years, the construction of inertial-based algorithms has attracted massive interest from researchers. The idea of including inertial terms in iterative methods for solving optimization problems was initiated by Polyak [17], and numerous authors have confirmed that the inclusion of an inertial term acts as a boost to the convergence speed of a method. A common feature of inertial-type algorithms is that the next iterate depends on a combination of the two previous iterates; for more details, see [3,18,19]. Many inertial-type algorithms have been studied, and numerical tests have demonstrated that the inertial effects greatly improve their performance; see [1,3,20]. Recently, Lorenz and Pock [27] introduced and studied the following inertial FBA to solve the MIP (2):
$$w_k = p_k + \theta_k(p_k - p_{k-1}), \quad p_{k+1} = (I + \lambda_k E)^{-1}(I - \lambda_k D)w_k. \tag{6}$$
Note that method (6) converges only weakly in real Hilbert spaces; nevertheless, numerical tests by the authors showed that their method outperforms several existing methods without inertial terms.
Several mathematical problems, such as variational inequality problems, equilibrium problems, split feasibility problems, and split minimization problems, are all special cases of the MIP. These problems have been applied to solve diverse real-world problems, such as modeling inverse problems arising from phase retrieval, modeling intensity-modulated radiation therapy planning, sensor networks in computerized tomography and data compression, optimal control problems, and image/signal processing problems [21,22,23].
The bilevel programming problem is a constrained optimization problem in which the constraint set is the solution set of another optimization problem. This problem is enriched with many applications in modeling Stackelberg games, the convex feasibility problem, the determination of Wardrop equilibria for network flows, domain decomposition methods for PDEs, optimal control problems, and image/signal processing problems [23]. When the upper-level problem is a VIP and the constraint set is the fixed-point set of a mapping, the bilevel problem is known as the hierarchical variational inequality problem. In [26], Yamada introduced the following method, called the hybrid steepest-descent iterative method, to solve the hierarchical VIP:
$$p_{k+1} = (I - \alpha_k \varrho F)Sp_k, \tag{7}$$
where $F$ is a strongly monotone and Lipschitz continuous operator and $S$ is a nonexpansive mapping.
In this paper, we consider the problem of solving a VIP over the solution set of the MIP in a real Hilbert space. This problem is formulated as follows:
$$\text{Find } p \in (D + E)^{-1}(0) \text{ such that } \langle Fp, q - p \rangle \geq 0, \quad \forall q \in (D + E)^{-1}(0), \tag{8}$$
where $F$ is a strongly monotone and Lipschitz continuous operator, $D$ is a monotone and Lipschitz continuous operator, and $E$ is a maximal monotone operator.
Inspired by the inertial technique, the Tseng splitting algorithm, the projection and contraction method, and the hybrid steepest-descent method, we introduce two efficient iterative algorithms to solve problem (8). We prove strong convergence of the suggested methods under some standard assumptions on the control parameters. Moreover, the strong convergence results are achieved without prior knowledge of the operator norm; instead, the step sizes are updated self-adaptively. Furthermore, we apply our main results to solve the bilevel variational inequality problem, the convex minimization problem, and the image recovery problem. We conduct numerical experiments to show the practicability, applicability, and efficiency of our methods. Our results improve, generalize, and unify the results presented in [4,12,13,27], as well as several others in the literature.
This article is organized as follows: In Section 2, we present some established definitions and lemmas that will be useful in deriving our main results. In Section 3, we present the proposed method and establish its convergence analysis. In Section 4, we show the applications of our main results to real-world problems. In Section 5, several numerical tests are carried out in finite and infinite dimensional spaces to demonstrate the computational efficiency of the proposed methods. Lastly, in Section 6, a summary of the obtained results is given.

2. Preliminaries

Let $K$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. We denote the weak and strong convergence of $\{p_k\}$ to $p$ by $p_k \rightharpoonup p$ and $p_k \to p$, respectively. For every point $p \in H$, there exists a unique nearest point in $K$, denoted by $P_K p$, such that $\|p - P_K p\| \leq \|p - q\|$ for all $q \in K$. The mapping $P_K$ is called the metric projection of $H$ onto $K$, and it is known to be nonexpansive.
Lemma 1
([28]). Let $H$ be a real Hilbert space and $K$ a nonempty closed convex subset of $H$. Suppose $p \in H$ and $q \in K$. Then $q = P_K p$ if and only if $\langle p - q, q - w \rangle \geq 0$ for all $w \in K$.
Lemma 2
([28]). Let $H$ be a real Hilbert space. Then, for every $p, q \in H$ and $\sigma \in \mathbb{R}$, we have
(i) $\|p + q\|^2 \leq \|p\|^2 + 2\langle q, p + q \rangle$;
(ii) $\|p + q\|^2 = \|p\|^2 + 2\langle p, q \rangle + \|q\|^2$;
(iii) $\|\sigma p + (1 - \sigma)q\|^2 = \sigma\|p\|^2 + (1 - \sigma)\|q\|^2 - \sigma(1 - \sigma)\|p - q\|^2$.
Lemma 3
([29]). Let $\{a_k\}$ be a sequence of non-negative real numbers such that
$$a_{k+1} \leq (1 - \nu_k)a_k + \nu_k b_k, \quad \forall k \geq 1,$$
where $\{\nu_k\} \subset (0, 1)$ with $\sum_{k=0}^{\infty}\nu_k = \infty$. If $\limsup_{j \to \infty} b_{k_j} \leq 0$ for every subsequence $\{a_{k_j}\}$ of $\{a_k\}$ satisfying
$$\liminf_{j \to \infty}(a_{k_j+1} - a_{k_j}) \geq 0,$$
then $\lim_{k \to \infty} a_k = 0$.
Definition 1.
Let $H$ be a real Hilbert space and $F : H \to H$ be a mapping. Then, $F$ is called:
(1) $L$-Lipschitz continuous if there exists $L > 0$ such that
$$\|Fp - Fq\| \leq L\|p - q\|, \quad \forall p, q \in H.$$
If $L \in [0, 1)$, then $F$ is a contraction.
(2) $\eta$-strongly monotone if there exists a constant $\eta > 0$ such that
$$\langle p - q, Fp - Fq \rangle \geq \eta\|p - q\|^2, \quad \forall p, q \in H.$$
(3) $\eta$-inverse strongly monotone ($\eta$-co-coercive) if there exists a constant $\eta > 0$ such that
$$\langle p - q, Fp - Fq \rangle \geq \eta\|Fp - Fq\|^2, \quad \forall p, q \in H.$$
(4) Monotone if
$$\langle Fp - Fq, p - q \rangle \geq 0, \quad \forall p, q \in H.$$
Definition 2.
Let $E : H \to 2^H$ be a multi-valued operator. Then:
(a) The graph of $E$ is defined by
$$\operatorname{Graph}(E) = \{(p, q) \in H \times H : q \in E(p)\}.$$
(b) The operator $E$ is said to be monotone if
$$\langle p - q, y - z \rangle \geq 0, \quad \forall y, z \in H, \ p \in E(y), \ q \in E(z).$$
(c) The operator $E$ is said to be maximal monotone if $E$ is monotone and its graph is not a proper subset of the graph of any other monotone operator.
(d) For all $p \in H$, the resolvent of $E$ is the single-valued mapping $J_{\lambda}^{E} : H \to H$ defined by
$$J_{\lambda}^{E}(p) = (I + \lambda E)^{-1}(p),$$
where $\lambda > 0$ and $I$ is the identity operator on $H$.
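Two classical examples make the resolvent in Definition 2(d) concrete (standard facts, not specific to this paper): for the subdifferential of the indicator function $\delta_K$ of a nonempty closed convex set $K$, the resolvent is the metric projection, and for $E = \partial(\mu\|\cdot\|_1)$ on $\mathbb{R}^n$ it is componentwise soft-thresholding:
$$J_{\lambda}^{\partial \delta_K} = P_K, \qquad \left(J_{\lambda}^{\partial(\mu\|\cdot\|_1)}(p)\right)_i = \operatorname{sign}(p_i)\max\{|p_i| - \lambda\mu, 0\}, \quad i = 1, \ldots, n.$$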
Lemma 4
([30]). Let $E : H \to 2^H$ be a maximal monotone mapping and $D : H \to H$ be a monotone and $L$-Lipschitz continuous operator. Then, the mapping $D + E$ is maximal monotone.
Lemma 5
([31]). Suppose that $\varrho > 0$, $\alpha \in (0, 1)$, and $F : H \to H$ is $\eta$-strongly monotone and $L_1$-Lipschitz continuous with $0 < \eta \leq L_1$. For any nonexpansive mapping $S : H \to H$, define the mapping $S^{\varrho} : H \to H$ by $S^{\varrho}p = (I - \alpha\varrho F)Sp$ for all $p \in H$. Then, $S^{\varrho}$ is a contraction provided $0 < \varrho < \frac{2\eta}{L_1^2}$; that is,
$$\|S^{\varrho}p - S^{\varrho}q\| \leq (1 - \alpha\chi)\|p - q\|,$$
where $\chi = 1 - \sqrt{1 - \varrho(2\eta - \varrho L_1^2)} \in (0, 1)$.

3. Main Results

In this section, we construct two methods for approximating the solution of the variational inequality problem over the solution set of the monotone inclusion problem, and we establish their strong convergence in the setting of real Hilbert spaces. The following assumptions will be used in deriving our main results:
Assumption 1.
(A1) The operator $D : H \to H$ is monotone and $L_2$-Lipschitz continuous, and $E : H \to 2^H$ is a maximal monotone operator.
(A2) The solution set $\Omega = \{p \in (D + E)^{-1}(0) : \langle Fp, q - p \rangle \geq 0, \ \forall q \in (D + E)^{-1}(0)\}$ is nonempty.
(A3) $F : H \to H$ is $\eta$-strongly monotone and $L_1$-Lipschitz continuous.
(A4) $\{\alpha_k\} \subset (0, 1)$ is such that $\lim_{k \to \infty}\alpha_k = 0$ and $\sum_{k=1}^{\infty}\alpha_k = \infty$. The positive sequence $\{\epsilon_k\}$ satisfies $\lim_{k \to \infty}\frac{\epsilon_k}{\alpha_k} = 0$.
(A5) Let $0 < s < \bar{s} < 1$, $\{t_k\} \subset [0, \infty)$ with $\lim_{k \to \infty}t_k = 0$, $\{s_k\} \subset [0, \infty)$ with $\lim_{k \to \infty}s_k = 0$, and $\{q_k\} \subset [0, \infty)$ with $\sum_{k=0}^{\infty}q_k < \infty$.
Remark 1.
From (9) and assumption (A4), it is not hard to see that
$$\lim_{k \to \infty}\phi_k\|p_k - p_{k-1}\| = 0 \quad \text{and} \quad \lim_{k \to \infty}\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\| = 0.$$
Remark 2.
Obviously, the step size (16) properly contains some well-known step sizes considered in  [12,18,32] and many others.
Lemma 6.
Assume that Assumption 1 holds and $\{p_k\}$ is the sequence generated by Algorithm 1. Then $\{\lambda_k\}$ defined by (16) is well defined, and $\lim_{k \to \infty}\lambda_k = \lambda > 0$.
Proof.
Since $D$ is $L_2$-Lipschitz continuous with $L_2 > 0$ and $s_k \geq 0$, it follows from (16) that, whenever $Dw_k \neq Dv_k$ for $k \geq 1$,
$$\frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|} \geq \frac{(s_k + s)\|w_k - v_k\|}{L_2\|w_k - v_k\|} \geq \frac{s}{L_2}.$$
The remaining part of the proof of this lemma is similar to that in [33], so we omit it here. □
Algorithm 1 A modified accelerated projection and contraction method.
  • Initialization: Choose $\phi > 0$, $\lambda_1 > 0$, $0 < c_1 < \bar{c}_1 < 2$, and $\varrho \in \left(0, \frac{2\eta}{L_1^2}\right)$. Let $p_0, p_1 \in H$ and set $k = 1$.
  • Iterative steps: Calculate the next iteration point $p_{k+1}$ as follows:
  • Step 1: Choose $\phi_k$ such that $\phi_k \in [0, \bar{\phi}_k]$, where
$$\bar{\phi}_k = \begin{cases} \min\left\{\phi, \frac{\epsilon_k}{\|p_k - p_{k-1}\|}\right\}, & \text{if } p_k \neq p_{k-1}, \\ \phi, & \text{otherwise}. \end{cases} \tag{9}$$
    Step 2: Compute
$$w_k = p_k + \phi_k(p_k - p_{k-1}), \tag{10}$$
$$v_k = (I + \lambda_k E)^{-1}(I - \lambda_k D)w_k. \tag{11}$$
  • Step 3: Compute
$$z_k = w_k - m_k r_k, \tag{12}$$
    where
$$r_k = w_k - v_k - \lambda_k(Dw_k - Dv_k) \tag{13}$$
    and
$$m_k = \begin{cases} (c_1 + t_k)\frac{\langle w_k - v_k, r_k \rangle}{\|r_k\|^2}, & \text{if } r_k \neq 0, \\ 0, & \text{otherwise}. \end{cases} \tag{14}$$
    Step 4: Compute
$$p_{k+1} = (I - \alpha_k \varrho F)z_k, \quad k \geq 1. \tag{15}$$
    Update
$$\lambda_{k+1} = \begin{cases} \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|}, \lambda_k + q_k\right\}, & \text{if } Dw_k \neq Dv_k, \\ \lambda_k + q_k, & \text{otherwise}. \end{cases} \tag{16}$$
  • Put $k := k + 1$ and return to Step 1.
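For readers who wish to experiment, the following compact NumPy sketch runs Algorithm 1 on an illustrative problem (our choices, not the paper's experiments): $D$ is the gradient of a quadratic, $E = \partial(\mu\|\cdot\|_1)$ so that the backward step is soft-thresholding, and $F$ is the identity (which is $1$-strongly monotone and $1$-Lipschitz), so the method selects the minimal-norm zero of $D + E$. All control sequences are sample choices consistent with Assumption 1.

```python
import numpy as np

# Illustrative instance: D = grad of 0.5*||Ap - b||^2, E = subdiff of mu*||.||_1,
# F = identity (eta = L1 = 1, so rho may be chosen in (0, 2)).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15)); b = rng.standard_normal(30); mu = 0.05

D = lambda p: A.T @ (A @ p - b)
F = lambda p: p
resolvent_E = lambda p, lam: np.sign(p) * np.maximum(np.abs(p) - lam * mu, 0.0)

# sample control sequences satisfying (A4)-(A5)
phi, s, c1, rho = 0.5, 0.5, 1.0, 1.0
alpha = lambda k: 1.0 / (k + 1)
eps = lambda k: 1.0 / (k + 1) ** 3
t = lambda k: 1.0 / (k + 1)
s_k = lambda k: 1.0 / (k + 1)
q = lambda k: 1.0 / (k + 1) ** 1.1

p_prev, p_cur, lam = np.zeros(15), np.ones(15), 1.0
for k in range(1, 2001):
    diff = np.linalg.norm(p_cur - p_prev)
    phi_k = min(phi, eps(k) / diff) if diff > 0 else phi            # Step 1, (9)
    w = p_cur + phi_k * (p_cur - p_prev)                            # (10)
    v = resolvent_E(w - lam * D(w), lam)                            # (11)
    r = w - v - lam * (D(w) - D(v))                                 # (13)
    rr = float(np.dot(r, r))
    m = (c1 + t(k)) * np.dot(w - v, r) / rr if rr > 0 else 0.0      # (14)
    z = w - m * r                                                   # (12)
    p_prev, p_cur = p_cur, z - alpha(k) * rho * F(z)                # (15)
    dD = np.linalg.norm(D(w) - D(v))                                # step size (16)
    lam = min((s_k(k) + s) * np.linalg.norm(w - v) / dD, lam + q(k)) if dD > 0 else lam + q(k)
print(p_cur[:5])
```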
Lemma 7.
If assumption (A5) holds, then there exists a positive integer $K$ such that
$$c_1 + t_k \in (0, 2) \quad \text{and} \quad \frac{\lambda_k(s_k + s)}{\lambda_{k+1}} \in (0, 1), \quad \forall k \geq K. \tag{17}$$
Proof.
Since $0 < c_1 < \bar{c}_1 < 2$ and $\lim_{k \to \infty}t_k = 0$, there exists a positive integer $K_1$ such that
$$0 < c_1 + t_k \leq \bar{c}_1 < 2, \quad \forall k \geq K_1.$$
Since $0 < s < \bar{s} < 1$, $\lim_{k \to \infty}s_k = 0$, and $\lim_{k \to \infty}\lambda_k = \lambda$, we have
$$\lim_{k \to \infty}\left(1 - \frac{\lambda_k(s_k + s)}{\lambda_{k+1}}\right) = 1 - s > 1 - \bar{s} > 0,$$
and this means that there exists a positive integer $K_2$ such that
$$1 - \frac{\lambda_k(s_k + s)}{\lambda_{k+1}} > 0, \quad \forall k \geq K_2.$$
Setting $K = \max\{K_1, K_2\}$ yields
$$c_1 + t_k \in (0, 2) \quad \text{and} \quad \frac{\lambda_k(s_k + s)}{\lambda_{k+1}} \in (0, 1), \quad \forall k \geq K. \quad \square$$
Lemma 8.
Suppose that Assumption 1 holds and $\{z_k\}$ is the sequence generated by Algorithm 1. Then, for $p \in \Omega$, the following inequality holds:
$$\|z_k - p\|^2 \leq \|w_k - p\|^2 + \left(1 - \frac{2}{c_1 + t_k}\right)\|z_k - w_k\|^2, \quad \forall k \geq 1. \tag{18}$$
Proof.
Since $v_k = (I + \lambda_k E)^{-1}(I - \lambda_k D)w_k$, we have that $w_k - \lambda_k Dw_k - v_k \in \lambda_k Ev_k$. Since $p \in (D + E)^{-1}(0)$, it follows that
$$-\lambda_k Dp \in \lambda_k Ep.$$
Now, due to the maximal monotonicity of $E$, we have that
$$\langle w_k - \lambda_k Dw_k - v_k + \lambda_k Dp, v_k - p \rangle \geq 0.$$
Thus,
$$\langle w_k - v_k - \lambda_k(Dw_k - Dv_k + Dv_k - Dp), v_k - p \rangle \geq 0.$$
From (13), this implies that
$$\langle r_k - \lambda_k(Dv_k - Dp), v_k - p \rangle \geq 0.$$
By the monotonicity of $D$, it follows that
$$\langle r_k, v_k - p \rangle \geq \lambda_k\langle Dv_k - Dp, v_k - p \rangle \geq 0. \tag{19}$$
By (19), it follows that
$$\langle w_k - p, r_k \rangle = \langle w_k - v_k, r_k \rangle + \langle v_k - p, r_k \rangle \geq \langle w_k - v_k, r_k \rangle. \tag{20}$$
Since $z_k = w_k - m_k r_k$, we have that $m_k^2\|r_k\|^2 = \|z_k - w_k\|^2$. From (14), if $r_k \neq 0$, we have $m_k\|r_k\|^2 = (c_1 + t_k)\langle w_k - v_k, r_k \rangle$. From Lemma 2 and (20), we obtain
$$\begin{aligned} \|z_k - p\|^2 &= \|w_k - m_k r_k - p\|^2 = \|w_k - p\|^2 + m_k^2\|r_k\|^2 - 2m_k\langle w_k - p, r_k \rangle \\ &\leq \|w_k - p\|^2 + m_k^2\|r_k\|^2 - 2m_k\langle w_k - v_k, r_k \rangle \\ &= \|w_k - p\|^2 + m_k^2\|r_k\|^2 - \frac{2}{c_1 + t_k}m_k \cdot m_k\|r_k\|^2 \\ &= \|w_k - p\|^2 + \left(1 - \frac{2}{c_1 + t_k}\right)\|z_k - w_k\|^2. \quad \square \end{aligned} \tag{21}$$
Lemma 9.
Let $\{w_k\}$ and $\{v_k\}$ be the sequences generated by Algorithm 1, and let $\{w_{k_j}\}$ and $\{v_{k_j}\}$ be subsequences of $\{w_k\}$ and $\{v_k\}$, respectively. If $w_{k_j} \rightharpoonup x^* \in H$ and $\lim_{j \to \infty}\|w_{k_j} - v_{k_j}\| = 0$, then $x^* \in (D + E)^{-1}(0)$.
Proof.
The proof is similar to that of Lemma 7 in [5]. Thus, we omit it here. □
Lemma 10.
Let $\{p_k\}$ be the sequence generated by Algorithm 1. Then, $\{p_k\}$ is bounded.
Proof.
Let $p \in \Omega$. From (9) and Assumption 1 (A4), we have $\phi_k\|p_k - p_{k-1}\| \leq \epsilon_k$ for all $k \in \mathbb{N}$, and this implies that
$$\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\| \leq \frac{\epsilon_k}{\alpha_k} \to 0, \quad \text{as } k \to \infty. \tag{22}$$
It follows from (22) that there exists $K_3 > 0$ such that
$$\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\| \leq K_3, \quad \forall k \in \mathbb{N}. \tag{23}$$
Using (10) and (23), we have
$$\|w_k - p\| = \|p_k + \phi_k(p_k - p_{k-1}) - p\| \leq \|p_k - p\| + \phi_k\|p_k - p_{k-1}\| = \|p_k - p\| + \alpha_k\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\| \leq \|p_k - p\| + \alpha_k K_3. \tag{24}$$
By Lemma 7, we know that there exists a positive integer $K$ such that $0 < c_1 + t_k < 2$. Therefore, from (21), we have
$$\|z_k - p\| \leq \|w_k - p\|. \tag{25}$$
Combining (24) and (25), we have
$$\|z_k - p\| \leq \|w_k - p\| \leq \|p_k - p\| + \alpha_k K_3. \tag{26}$$
By Lemma 5, (15), and (26), we obtain
$$\begin{aligned} \|p_{k+1} - p\| &= \|(I - \alpha_k\varrho F)z_k - (I - \alpha_k\varrho F)p - \alpha_k\varrho Fp\| \\ &\leq \|(I - \alpha_k\varrho F)z_k - (I - \alpha_k\varrho F)p\| + \alpha_k\varrho\|Fp\| \\ &\leq (1 - \alpha_k\chi)\|z_k - p\| + \alpha_k\varrho\|Fp\| \\ &\leq (1 - \alpha_k\chi)\|p_k - p\| + \alpha_k\chi\cdot\frac{K_3}{\chi} + \alpha_k\chi\cdot\frac{\varrho\|Fp\|}{\chi} \\ &= (1 - \alpha_k\chi)\|p_k - p\| + \alpha_k\chi\cdot\frac{K_3 + \varrho\|Fp\|}{\chi} \\ &\leq \max\left\{\|p_k - p\|, \frac{K_3 + \varrho\|Fp\|}{\chi}\right\} \leq \cdots \leq \max\left\{\|p_1 - p\|, \frac{K_3 + \varrho\|Fp\|}{\chi}\right\}, \end{aligned} \tag{27}$$
where $\chi = 1 - \sqrt{1 - \varrho(2\eta - \varrho L_1^2)} \in (0, 1)$. This implies that $\{p_k\}$ is bounded. Consequently, $\{w_k\}$, $\{v_k\}$, $\{z_k\}$, and $\{Fz_k\}$ are also bounded sequences. □
Theorem 1.
Suppose that Assumption 1 holds and $\{p_k\}$ is the sequence defined by Algorithm 1. Then, $\{p_k\}$ converges strongly to the unique solution of problem (8).
Proof.
The proof of the theorem is divided into three steps.
  • Claim 1:
$$\left(\frac{2}{c_1 + t_k} - 1\right)\|z_k - w_k\|^2 \leq \|p_k - p\|^2 - \|p_{k+1} - p\|^2 + \alpha_k K_6, \quad \forall k \geq 1, \text{ for some } K_6 > 0. \tag{28}$$
Indeed, by (15), Lemma 2, and Lemma 5, we have
$$\begin{aligned} \|p_{k+1} - p\|^2 &= \|(I - \alpha_k\varrho F)z_k - (I - \alpha_k\varrho F)p - \alpha_k\varrho Fp\|^2 \\ &\leq \|(I - \alpha_k\varrho F)z_k - (I - \alpha_k\varrho F)p\|^2 - 2\alpha_k\varrho\langle Fp, p_{k+1} - p \rangle \\ &\leq (1 - \alpha_k\chi)^2\|z_k - p\|^2 + 2\alpha_k\varrho\langle Fp, p - p_{k+1} \rangle \\ &\leq \|z_k - p\|^2 + \alpha_k K_4, \end{aligned} \tag{29}$$
for some $K_4 > 0$. By (24), we have
$$\|w_k - p\|^2 \leq (\|p_k - p\| + \alpha_k K_3)^2 = \|p_k - p\|^2 + \alpha_k(2K_3\|p_k - p\| + \alpha_k K_3^2) \leq \|p_k - p\|^2 + \alpha_k K_5, \tag{30}$$
for some $K_5 > 0$. Now, using (21), (29), and (30), we have
$$\|p_{k+1} - p\|^2 \leq \|w_k - p\|^2 + \left(1 - \frac{2}{c_1 + t_k}\right)\|z_k - w_k\|^2 + \alpha_k K_4 \leq \|p_k - p\|^2 + \alpha_k K_5 + \left(1 - \frac{2}{c_1 + t_k}\right)\|z_k - w_k\|^2 + \alpha_k K_4. \tag{31}$$
From (31), it follows that
$$\left(\frac{2}{c_1 + t_k} - 1\right)\|z_k - w_k\|^2 \leq \|p_k - p\|^2 - \|p_{k+1} - p\|^2 + \alpha_k K_6, \quad \forall k \geq 1, \tag{32}$$
with $K_6 = K_4 + K_5 > 0$.
  • Claim 2:
$$\|p_{k+1} - p\|^2 \leq (1 - \alpha_k\chi)\|p_k - p\|^2 + \alpha_k\chi\left[\frac{2\varrho}{\chi}\langle Fp, p - p_{k+1} \rangle + \frac{3K^*}{\chi}\cdot\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\|\right], \quad \forall k \geq 1,$$
for some $K^* > 0$. Indeed, from (10), we have
$$\|w_k - p\|^2 = \|p_k + \phi_k(p_k - p_{k-1}) - p\|^2 \leq \|p_k - p\|^2 + 2\phi_k\|p_k - p\|\|p_k - p_{k-1}\| + \phi_k^2\|p_k - p_{k-1}\|^2 \leq \|p_k - p\|^2 + 3K^*\phi_k\|p_k - p_{k-1}\|, \tag{33}$$
where $K^* = \sup_{k \in \mathbb{N}}\{\|p_k - p\|, \phi\|p_k - p_{k-1}\|\} > 0$. Now, using (15), Lemma 2, Lemma 5, (25), and (33), we have
$$\begin{aligned} \|p_{k+1} - p\|^2 &= \|(I - \alpha_k\varrho F)z_k - (I - \alpha_k\varrho F)p - \alpha_k\varrho Fp\|^2 \\ &\leq \|(I - \alpha_k\varrho F)z_k - (I - \alpha_k\varrho F)p\|^2 - 2\alpha_k\varrho\langle Fp, p_{k+1} - p \rangle \\ &\leq (1 - \alpha_k\chi)^2\|z_k - p\|^2 + 2\alpha_k\varrho\langle Fp, p - p_{k+1} \rangle \\ &\leq (1 - \alpha_k\chi)\|z_k - p\|^2 + 2\alpha_k\varrho\langle Fp, p - p_{k+1} \rangle \\ &\leq (1 - \alpha_k\chi)\|w_k - p\|^2 + 2\alpha_k\varrho\langle Fp, p - p_{k+1} \rangle \\ &\leq (1 - \alpha_k\chi)\left[\|p_k - p\|^2 + 3K^*\phi_k\|p_k - p_{k-1}\|\right] + 2\alpha_k\varrho\langle Fp, p - p_{k+1} \rangle \\ &\leq (1 - \alpha_k\chi)\|p_k - p\|^2 + \alpha_k\chi\left[\frac{2\varrho}{\chi}\langle Fp, p - p_{k+1} \rangle + \frac{3K^*}{\chi}\cdot\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\|\right], \quad \forall k \geq 1. \end{aligned} \tag{34}$$
  • Claim 3: The sequence $\{\|p_k - p\|^2\}$ converges to zero. By Lemma 3 and Remark 1, it suffices to show that $\limsup_{k \to \infty}\langle Fp, p - p_{k+1} \rangle \leq 0$ for every subsequence $\{\|p_{k_j} - p\|\}$ of $\{\|p_k - p\|\}$ satisfying
$$\liminf_{j \to \infty}(\|p_{k_j+1} - p\| - \|p_{k_j} - p\|) \geq 0. \tag{35}$$
Now, we assume that $\{\|p_{k_j} - p\|^2\}$ is a subsequence of $\{\|p_k - p\|^2\}$ such that (35) holds. Then
$$\liminf_{j \to \infty}(\|p_{k_j+1} - p\|^2 - \|p_{k_j} - p\|^2) = \liminf_{j \to \infty}\left[(\|p_{k_j+1} - p\| - \|p_{k_j} - p\|)(\|p_{k_j+1} - p\| + \|p_{k_j} - p\|)\right] \geq 0.$$
Owing to Claim 1, $\lim_{j \to \infty}\alpha_{k_j} = 0$, and $\lim_{j \to \infty}t_{k_j} = 0$, we have
$$\begin{aligned} \limsup_{j \to \infty}\left(\frac{2}{c_1 + t_{k_j}} - 1\right)\|z_{k_j} - w_{k_j}\|^2 &\leq \limsup_{j \to \infty}\left[\|p_{k_j} - p\|^2 - \|p_{k_j+1} - p\|^2 + \alpha_{k_j}K_6\right] \\ &= \limsup_{j \to \infty}\left[\|p_{k_j} - p\|^2 - \|p_{k_j+1} - p\|^2\right] + \limsup_{j \to \infty}\alpha_{k_j}K_6 \\ &= -\liminf_{j \to \infty}\left[\|p_{k_j+1} - p\|^2 - \|p_{k_j} - p\|^2\right] \leq 0. \end{aligned}$$
Consequently, we have
$$\lim_{j \to \infty}\|z_{k_j} - w_{k_j}\| = 0. \tag{36}$$
By (15), we have
$$\|p_{k_j+1} - z_{k_j}\| = \|(I - \alpha_{k_j}\varrho F)z_{k_j} - z_{k_j}\| = \alpha_{k_j}\varrho\|Fz_{k_j}\| \to 0 \quad \text{as } j \to \infty. \tag{37}$$
Also, by (10) and Remark 1,
$$\|w_{k_j} - p_{k_j}\| = \alpha_{k_j}\frac{\phi_{k_j}}{\alpha_{k_j}}\|p_{k_j} - p_{k_j-1}\| \to 0 \quad \text{as } j \to \infty, \tag{38}$$
and hence
$$\|p_{k_j+1} - p_{k_j}\| \leq \|p_{k_j+1} - z_{k_j}\| + \|z_{k_j} - w_{k_j}\| + \|w_{k_j} - p_{k_j}\| \to 0 \quad \text{as } j \to \infty. \tag{39}$$
Using (13) and (16), we have
$$\begin{aligned} \langle w_{k_j} - v_{k_j}, r_{k_j} \rangle &= \langle w_{k_j} - v_{k_j}, w_{k_j} - v_{k_j} - \lambda_{k_j}(Dw_{k_j} - Dv_{k_j}) \rangle \\ &= \|w_{k_j} - v_{k_j}\|^2 - \langle w_{k_j} - v_{k_j}, \lambda_{k_j}(Dw_{k_j} - Dv_{k_j}) \rangle \\ &\geq \|w_{k_j} - v_{k_j}\|^2 - \lambda_{k_j}\|w_{k_j} - v_{k_j}\|\|Dw_{k_j} - Dv_{k_j}\| \\ &\geq \left(1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}\right)\|w_{k_j} - v_{k_j}\|^2. \end{aligned} \tag{40}$$
By Lemma 7, there exists a positive integer $K$ such that $1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}} > 0$ for all $k_j > K$. Now, if $r_{k_j} \neq 0$, then by the Lipschitz continuity of $D$, (12)–(14), and (40), we have that
$$\begin{aligned} \|w_{k_j} - v_{k_j}\|^2 &\leq \frac{1}{1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}}\langle w_{k_j} - v_{k_j}, r_{k_j} \rangle = \frac{m_{k_j}\|r_{k_j}\|^2}{(c_1 + t_{k_j})\left(1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}\right)} \\ &= \frac{m_{k_j}\|r_{k_j}\|\,\|w_{k_j} - v_{k_j} - \lambda_{k_j}(Dw_{k_j} - Dv_{k_j})\|}{(c_1 + t_{k_j})\left(1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}\right)} \\ &\leq \frac{m_{k_j}\|r_{k_j}\|\left(\|w_{k_j} - v_{k_j}\| + \lambda_{k_j}L_2\|w_{k_j} - v_{k_j}\|\right)}{(c_1 + t_{k_j})\left(1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}\right)} = \frac{(1 + \lambda_{k_j}L_2)\,\|w_{k_j} - z_{k_j}\|\,\|w_{k_j} - v_{k_j}\|}{(c_1 + t_{k_j})\left(1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}\right)}. \end{aligned} \tag{41}$$
This implies that
$$\|w_{k_j} - v_{k_j}\| \leq \frac{1 + \lambda_{k_j}L_2}{(c_1 + t_{k_j})\left(1 - \frac{\lambda_{k_j}(s_{k_j} + s)}{\lambda_{k_j+1}}\right)}\|w_{k_j} - z_{k_j}\|. \tag{42}$$
By Lemma 6, we have that $\lim_{j \to \infty}\lambda_{k_j} = \lambda$, which implies that $\lim_{j \to \infty}\frac{\lambda_{k_j}}{\lambda_{k_j+1}} = 1$. Furthermore, by assumption (A5), we have that $\lim_{j \to \infty}t_{k_j} = 0 = \lim_{j \to \infty}s_{k_j}$ and $0 < s < \bar{s} < 1$. Due to (36) and (42), we have that
$$\lim_{j \to \infty}\|w_{k_j} - v_{k_j}\| = 0. \tag{43}$$
Next, if $r_{k_j} = 0$, then by (14) we again obtain $\lim_{j \to \infty}\|w_{k_j} - v_{k_j}\| = 0$. Now, since $\{p_{k_j}\}$ is bounded, there exists a subsequence $\{p_{k_{j_i}}\}$ of $\{p_{k_j}\}$ that converges weakly to some $q \in H$ and satisfies
$$\limsup_{j \to \infty}\langle Fp, p - p_{k_j} \rangle = \lim_{i \to \infty}\langle Fp, p - p_{k_{j_i}} \rangle = \langle Fp, p - q \rangle. \tag{44}$$
From (38), we have that $w_{k_j} \rightharpoonup q$ as $j \to \infty$. By Lemma 9 and (43), this implies that $q \in (D + E)^{-1}(0)$. By (44) and the fact that $p$ is the unique solution of problem (8), we have
$$\limsup_{j \to \infty}\langle Fp, p - p_{k_j} \rangle = \langle Fp, p - q \rangle \leq 0. \tag{45}$$
The combination of (39) and (45) yields
$$\limsup_{j \to \infty}\langle Fp, p - p_{k_j+1} \rangle = \limsup_{j \to \infty}\langle Fp, p - p_{k_j} \rangle = \langle Fp, p - q \rangle \leq 0. \tag{46}$$
Using Remark 1 and (46), we obtain
$$\limsup_{j \to \infty}\left[\frac{2\varrho}{\chi}\langle Fp, p - p_{k_j+1} \rangle + \frac{3K^*}{\chi}\cdot\frac{\phi_{k_j}}{\alpha_{k_j}}\|p_{k_j} - p_{k_j-1}\|\right] \leq 0. \tag{47}$$
Thus, from Claim 2, assumption (A4), (47), and Lemma 3, it follows that $\lim_{k \to \infty}\|p_k - p\| = 0$, as required. □
Next, we present the second method in Algorithm 2.
Algorithm 2 A modified accelerated Tseng splitting method.
  • Initialization: Choose $\phi > 0$, $\lambda_1 > 0$, $\varrho \in \left(0, \frac{2\eta}{L_1^2}\right)$, and $\{\vartheta_k\} \subset [a, b] \subset (0, 1]$. Let $p_0, p_1 \in H$ and set $k = 1$.
  • Iterative steps: Calculate the next iteration point $p_{k+1}$ as follows:
  • Step 1: Choose $\phi_k$ such that $\phi_k \in [0, \bar{\phi}_k]$, where
$$\bar{\phi}_k = \begin{cases} \min\left\{\frac{k - 1}{k + \phi - 1}, \frac{\epsilon_k}{\|p_k - p_{k-1}\|}\right\}, & \text{if } p_k \neq p_{k-1}, \\ \frac{k - 1}{k + \phi - 1}, & \text{otherwise}. \end{cases} \tag{48}$$
    Step 2: Set
$$w_k = p_k + \phi_k(p_k - p_{k-1}), \tag{49}$$
    and compute
$$v_k = (I + \lambda_k E)^{-1}(I - \lambda_k D)w_k. \tag{50}$$
    Step 3: Compute
$$z_k = (1 - \vartheta_k)w_k + \vartheta_k\big(v_k + \lambda_k(Dw_k - Dv_k)\big). \tag{51}$$
    Step 4: Compute
$$p_{k+1} = (I - \alpha_k\varrho F)z_k, \quad k \geq 1. \tag{52}$$
    Update
$$\lambda_{k+1} = \begin{cases} \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|}, \lambda_k + q_k\right\}, & \text{if } Dw_k \neq Dv_k, \\ \lambda_k + q_k, & \text{otherwise}. \end{cases} \tag{53}$$
    Put $k := k + 1$ and return to Step 1.
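Relative to the NumPy sketch following Algorithm 1, Algorithm 2 changes only the inertial bound in Step 1 and replaces the projection–contraction correction of Step 3 with a relaxed Tseng-type forward step. A hypothetical drop-in fragment (reusing the variable names of that sketch; the value of theta is illustrative) reads:

```python
# Step 1: the inertial bound (48) replaces phi by (k - 1) / (k + phi - 1)
phi_bar = (k - 1) / (k + phi - 1)
phi_k = min(phi_bar, eps(k) / diff) if diff > 0 else phi_bar
# Step 3: relaxed Tseng-type correction (51), theta in [a, b], a subset of (0, 1]
theta = 0.9
h = v + lam * (D(w) - D(v))          # forward-backward-forward point
z = (1 - theta) * w + theta * h      # relaxation toward w
```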
Remark 3.
From (48) and Assumption 1 (A4), we observe that
$$\lim_{k \to \infty}\phi_k\|p_k - p_{k-1}\| = 0 \quad \text{and} \quad \lim_{k \to \infty}\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\| = 0.$$
Lemma 11.
If assumption (A5) holds, then there exists a positive integer $K$ such that
$$\frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2} \in (0, 1), \quad \forall k \geq K. \tag{54}$$
Proof.
The proof is similar to the proof of Lemma 7. □
Lemma 12.
Suppose Assumption 1 holds and $\{p_k\}$ is the sequence defined by Algorithm 2. Then, for all $p \in \Omega$, we have the following inequality:
$$\|z_k - p\|^2 \leq \|w_k - p\|^2 - \vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 - \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2,$$
where $h_k = v_k + \lambda_k(Dw_k - Dv_k)$.
Proof.
From the definition of $\{\lambda_k\}$, it is obvious that
$$\|Dw_k - Dv_k\| \leq \frac{s_k + s}{\lambda_{k+1}}\|w_k - v_k\|, \quad \forall k \in \mathbb{N}. \tag{55}$$
Observe that (55) holds trivially if $Dw_k = Dv_k$. If $Dw_k \neq Dv_k$, we have
$$\lambda_{k+1} = \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|}, \lambda_k + q_k\right\} \leq \frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|},$$
which implies that $\|Dw_k - Dv_k\| \leq \frac{s_k + s}{\lambda_{k+1}}\|w_k - v_k\|$. Thus, (55) holds both for $Dw_k \neq Dv_k$ and $Dw_k = Dv_k$. Now, let $h_k = v_k + \lambda_k(Dw_k - Dv_k)$. Then, from Lemma 2 and (55), we have
$$\begin{aligned} \|h_k - p\|^2 &= \|v_k + \lambda_k(Dw_k - Dv_k) - p\|^2 \\ &= \|v_k - p\|^2 + \lambda_k^2\|Dw_k - Dv_k\|^2 + 2\lambda_k\langle v_k - p, Dw_k - Dv_k \rangle \\ &= \|v_k - w_k\|^2 + \|w_k - p\|^2 + 2\langle v_k - w_k, w_k - p \rangle + \lambda_k^2\|Dw_k - Dv_k\|^2 + 2\lambda_k\langle v_k - p, Dw_k - Dv_k \rangle \\ &= \|v_k - w_k\|^2 + \|w_k - p\|^2 + 2\langle v_k - w_k, v_k - p \rangle - 2\|v_k - w_k\|^2 + \lambda_k^2\|Dw_k - Dv_k\|^2 - 2\lambda_k\langle Dw_k - Dv_k, p - v_k \rangle \\ &= \|w_k - p\|^2 - \|v_k - w_k\|^2 + \lambda_k^2\|Dw_k - Dv_k\|^2 - 2\langle w_k - v_k - \lambda_k(Dw_k - Dv_k), v_k - p \rangle \\ &\leq \|w_k - p\|^2 - \left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 - 2\langle w_k - v_k - \lambda_k(Dw_k - Dv_k), v_k - p \rangle. \end{aligned} \tag{56}$$
Next, we show that
$$\langle w_k - v_k - \lambda_k(Dw_k - Dv_k), v_k - p \rangle \geq 0. \tag{57}$$
From (50), we have that $(I - \lambda_k D)w_k \in (I + \lambda_k E)v_k$. By the maximal monotonicity of $E$, there exists $g_k \in Ev_k$ such that $(I - \lambda_k D)w_k = v_k + \lambda_k g_k$, which means that
$$g_k = \lambda_k^{-1}(w_k - v_k - \lambda_k Dw_k). \tag{58}$$
From the definition of $p$, we have $0 \in (D + E)p$. Using the fact that $Dv_k + g_k \in (D + E)v_k$ and that $D + E$ is maximal monotone (Lemma 4), we obtain
$$\langle Dv_k + g_k, v_k - p \rangle \geq 0. \tag{59}$$
Combining (58) and (59), we have
$$\lambda_k^{-1}\langle w_k - v_k - \lambda_k Dw_k + \lambda_k Dv_k, v_k - p \rangle \geq 0,$$
which means that (57) holds. By (56) and (57), we have
$$\|h_k - p\|^2 \leq \|w_k - p\|^2 - \left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2. \tag{60}$$
Moreover, from (51), (60), and Lemma 2, we have
$$\begin{aligned} \|z_k - p\|^2 &= \|(1 - \vartheta_k)w_k + \vartheta_k h_k - p\|^2 = \|(1 - \vartheta_k)(w_k - p) + \vartheta_k(h_k - p)\|^2 \\ &= (1 - \vartheta_k)\|w_k - p\|^2 + \vartheta_k\|h_k - p\|^2 - \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2 \\ &\leq (1 - \vartheta_k)\|w_k - p\|^2 + \vartheta_k\left[\|w_k - p\|^2 - \left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2\right] - \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2 \\ &= \|w_k - p\|^2 - \vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 - \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2. \quad \square \end{aligned} \tag{61}$$
Lemma 13.
Let $\{w_k\}$ and $\{v_k\}$ be the sequences generated by Algorithm 2, and let $\{w_{k_j}\}$ and $\{v_{k_j}\}$ be subsequences of $\{w_k\}$ and $\{v_k\}$, respectively. If $w_{k_j} \rightharpoonup x^* \in H$ and $\lim_{j \to \infty}\|w_{k_j} - v_{k_j}\| = 0$, then $x^* \in (D + E)^{-1}(0)$.
Proof.
The proof is similar to the proof of Lemma 9. □
Lemma 14.
Let $\{p_k\}$ be the sequence generated by Algorithm 2. Then, $\{p_k\}$ is bounded.
Proof.
By the boundedness of $\{\vartheta_k\}$ and Lemma 11, there exists $K \in \mathbb{N}$ such that $\vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right) > 0$ for all $k \geq K$. This, together with (61), yields
$$\|z_k - p\| \leq \|w_k - p\|. \tag{62}$$
The remaining part of the proof is similar to that of Lemma 10. Therefore, we omit it here, and this completes the proof of the lemma. □
Theorem 2.
Suppose that Assumption 1 holds and $\{p_k\}$ is the sequence defined by Algorithm 2. Then, $\{p_k\}$ converges strongly to the unique solution of problem (8).
Proof.
The proof of the theorem is divided into three steps.
  • Claim (i):
$$\vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 + \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2 \leq \|p_k - p\|^2 - \|p_{k+1} - p\|^2 + \alpha_k K_6, \quad \forall k \geq 1,$$
for some $K_6 > 0$. Indeed, using (29), (30), and (61), we have
$$\begin{aligned} \|p_{k+1} - p\|^2 &\leq \|w_k - p\|^2 - \vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 - \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2 + \alpha_k K_4 \\ &\leq \|p_k - p\|^2 + \alpha_k K_5 - \vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 - \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2 + \alpha_k K_4. \end{aligned} \tag{63}$$
From (63), it follows that
$$\vartheta_k\left(1 - \frac{(s_k + s)^2\lambda_k^2}{\lambda_{k+1}^2}\right)\|w_k - v_k\|^2 + \vartheta_k(1 - \vartheta_k)\|h_k - w_k\|^2 \leq \|p_k - p\|^2 - \|p_{k+1} - p\|^2 + \alpha_k K_6, \quad \forall k \geq 1, \tag{64}$$
with $K_6 = K_4 + K_5 > 0$.
  • Claim (ii):
$$\|p_{k+1} - p\|^2 \leq (1 - \alpha_k\chi)\|p_k - p\|^2 + \alpha_k\chi\left[\frac{2\varrho}{\chi}\langle Fp, p - p_{k+1} \rangle + \frac{3K^*}{\chi}\cdot\frac{\phi_k}{\alpha_k}\|p_k - p_{k-1}\|\right], \quad \forall k \geq 1,$$
for some $K^* > 0$. The proof of Claim (ii) is similar to that of Claim 2 of Theorem 1. Therefore, we omit it here.
  • Claim (iii): The sequence $\{\|p_k - p\|^2\}$ converges to zero. By Lemma 3 and Remark 3, it suffices to show that $\limsup_{k \to \infty}\langle Fp, p - p_{k+1} \rangle \leq 0$ for every subsequence $\{\|p_{k_j} - p\|\}$ of $\{\|p_k - p\|\}$ satisfying
$$\liminf_{j \to \infty}(\|p_{k_j+1} - p\| - \|p_{k_j} - p\|) \geq 0. \tag{65}$$
Now, we assume that $\{\|p_{k_j} - p\|^2\}$ is a subsequence of $\{\|p_k - p\|^2\}$ such that (65) holds. Then
$$\liminf_{j \to \infty}(\|p_{k_j+1} - p\|^2 - \|p_{k_j} - p\|^2) = \liminf_{j \to \infty}\left[(\|p_{k_j+1} - p\| - \|p_{k_j} - p\|)(\|p_{k_j+1} - p\| + \|p_{k_j} - p\|)\right] \geq 0.$$
Owing to Claim (i), $\lim_{j \to \infty}\alpha_{k_j} = 0$, $\lim_{j \to \infty}s_{k_j} = 0$, and the boundedness of $\{\vartheta_{k_j}\}$, we have
$$\begin{aligned} &\limsup_{j \to \infty}\left[\vartheta_{k_j}\left(1 - \frac{(s_{k_j} + s)^2\lambda_{k_j}^2}{\lambda_{k_j+1}^2}\right)\|w_{k_j} - v_{k_j}\|^2 + \vartheta_{k_j}(1 - \vartheta_{k_j})\|h_{k_j} - w_{k_j}\|^2\right] \\ &\quad \leq \limsup_{j \to \infty}\left[\|p_{k_j} - p\|^2 - \|p_{k_j+1} - p\|^2 + \alpha_{k_j}K_6\right] = -\liminf_{j \to \infty}\left[\|p_{k_j+1} - p\|^2 - \|p_{k_j} - p\|^2\right] \leq 0. \end{aligned}$$
Consequently, we have
$$\lim_{j \to \infty}\|w_{k_j} - v_{k_j}\| = 0 \quad \text{and} \quad \lim_{j \to \infty}\|h_{k_j} - w_{k_j}\| = 0. \tag{66}$$
Using (51), (66), and the boundedness of $\{\vartheta_{k_j}\}$, we have
$$\|z_{k_j} - w_{k_j}\| = \vartheta_{k_j}\|h_{k_j} - w_{k_j}\| \to 0 \quad \text{as } j \to \infty.$$
Again, by (52), we have
$$\|p_{k_j+1} - z_{k_j}\| = \|(I - \alpha_{k_j}\varrho F)z_{k_j} - z_{k_j}\| = \alpha_{k_j}\varrho\|Fz_{k_j}\| \to 0 \quad \text{as } j \to \infty.$$
Also, by (49) and Remark 3,
$$\|w_{k_j} - p_{k_j}\| = \alpha_{k_j}\frac{\phi_{k_j}}{\alpha_{k_j}}\|p_{k_j} - p_{k_j-1}\| \to 0 \quad \text{as } j \to \infty,$$
and hence
$$\|p_{k_j+1} - p_{k_j}\| \leq \|p_{k_j+1} - z_{k_j}\| + \|z_{k_j} - w_{k_j}\| + \|w_{k_j} - p_{k_j}\| \to 0 \quad \text{as } j \to \infty.$$
The remaining part of the proof is similar to Claim 3 of Theorem 1. Hence, we omit it here, and this completes the proof of the theorem. □

4. Applications

In this section, we consider the applications of our methods to the bilevel variational inequality problem, convex minimization problem, and image recovery problem.

4.1. Application to the Bilevel Variational Inequality Problem

Let $H$ be a real Hilbert space and $K$ be a nonempty closed convex subset of $H$. Let $D, F : H \to H$ be two single-valued operators. Then, the bilevel variational inequality problem (BVIP) is formulated as follows:
$$\text{find } p \in VI(K, D) \text{ such that } \langle Fp, q - p \rangle \geq 0, \quad \forall q \in VI(K, D), \tag{71}$$
where $VI(K, D)$ denotes the solution set of the variational inequality problem (VIP):
$$\text{find } p \in K \text{ such that } \langle Dp, w - p \rangle \geq 0, \quad \forall w \in K. \tag{72}$$
Assume that $VI(K, D) \neq \emptyset$. The VIP (72) is equivalent to the following inclusion problem:
$$\text{find } p \in K \text{ such that } 0 \in (D + E)p, \tag{73}$$
where $E : H \to 2^H$ is the subdifferential of the indicator function of $K$, which is a maximal monotone operator [34]. In this case, according to [35], the resolvent of $E$ is the metric projection $P_K$; that is, $(I + \lambda_k E)^{-1}(p) = P_K(p)$. Thus, the following corollaries follow immediately from Theorem 1 and Theorem 2, respectively:
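When $K$ is geometrically simple, $P_K$ is available in closed form, which is what makes the specialized methods below directly implementable. Two standard examples (illustrative, not taken from the paper) are the closed ball and the box:

```python
import numpy as np

def proj_ball(p, center, radius):
    # projection onto the closed ball B(center, radius)
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= radius else center + (radius / n) * d

def proj_box(p, lo, hi):
    # projection onto the box {x : lo <= x <= hi}: componentwise clipping
    return np.clip(p, lo, hi)
```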
Corollary 1.
Let $H$ be a real Hilbert space and $K$ a nonempty closed convex subset of $H$. Let $D : H \to H$ be a monotone and $L$-Lipschitz continuous operator, $F : H \to H$ be an $\eta$-strongly monotone and $L_1$-Lipschitz continuous operator, and let $E : H \to 2^H$ be the subdifferential of the indicator function of $K$, whose resolvent is the metric projection $P_K$. Assume that $\{p \in VI(K, D) : \langle Fp, q - p \rangle \geq 0, \ \forall q \in VI(K, D)\} \neq \emptyset$ and that assumptions (A4)–(A5) hold. Let $\{p_k\}$ be the sequence generated by Algorithm 3:
Algorithm 3 A modified accelerated projection and contraction method.
  • Initialization: Choose $\phi > 0$, $\lambda_1 > 0$, $0 < c_1 < \bar{c}_1 < 2$, and $\varrho \in \left(0, \frac{2\eta}{L_1^2}\right)$. Let $p_0, p_1 \in H$ and set $k = 1$.
  • Iterative steps: Calculate the next iteration point $p_{k+1}$ as follows:
  • Step 1: Choose $\phi_k$ such that $\phi_k \in [0, \bar{\phi}_k]$, where
$$\bar{\phi}_k = \begin{cases} \min\left\{\phi, \frac{\epsilon_k}{\|p_k - p_{k-1}\|}\right\}, & \text{if } p_k \neq p_{k-1}, \\ \phi, & \text{otherwise}. \end{cases}$$
    Step 2: Compute
$$w_k = p_k + \phi_k(p_k - p_{k-1}), \quad v_k = P_K(w_k - \lambda_k Dw_k).$$
    Step 3: Compute
$$z_k = w_k - m_k r_k,$$
    where
$$r_k = w_k - v_k - \lambda_k(Dw_k - Dv_k)$$
    and
$$m_k = \begin{cases} (c_1 + t_k)\frac{\langle w_k - v_k, r_k \rangle}{\|r_k\|^2}, & \text{if } r_k \neq 0, \\ 0, & \text{otherwise}. \end{cases}$$
    Step 4: Compute
$$p_{k+1} = (I - \alpha_k\varrho F)z_k, \quad k \geq 1.$$
    Update
$$\lambda_{k+1} = \begin{cases} \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|}, \lambda_k + q_k\right\}, & \text{if } Dw_k \neq Dv_k, \\ \lambda_k + q_k, & \text{otherwise}. \end{cases}$$
    Put $k := k + 1$ and return to Step 1.
Then, the sequence $\{p_k\}$ converges strongly to the unique solution of the BVIP (71).
Corollary 2.
Let $H$ be a real Hilbert space and $K$ be a nonempty closed convex subset of $H$. Let $D : H \to H$ be a monotone and $L$-Lipschitz continuous operator, $F : H \to H$ be an $\eta$-strongly monotone and $L_1$-Lipschitz continuous operator, and let $E : H \to 2^H$ be the subdifferential of the indicator function of $K$, whose resolvent is the metric projection $P_K$. Assume that $\{p \in VI(K, D) : \langle Fp, q - p \rangle \geq 0, \ \forall q \in VI(K, D)\} \neq \emptyset$ and that assumptions (A4)–(A5) hold. Let $\{p_k\}$ be the sequence generated by Algorithm 4:
Algorithm 4 A modified accelerated Tseng splitting method.
  • Initialization: Choose $\phi > 0$, $\lambda_1 > 0$, $\varrho \in \left(0, \frac{2\eta}{L_1^2}\right)$, and $\{\vartheta_k\} \subset [a, b] \subset (0, 1]$. Let $p_0, p_1 \in H$ and set $k = 1$.
  • Iterative steps: Calculate the next iteration point $p_{k+1}$ as follows:
  • Step 1: Choose $\phi_k$ such that $\phi_k \in [0, \bar{\phi}_k]$, where
$$\bar{\phi}_k = \begin{cases} \min\left\{\frac{k - 1}{k + \phi - 1}, \frac{\epsilon_k}{\|p_k - p_{k-1}\|}\right\}, & \text{if } p_k \neq p_{k-1}, \\ \frac{k - 1}{k + \phi - 1}, & \text{otherwise}. \end{cases}$$
  • Step 2: Set
$$w_k = p_k + \phi_k(p_k - p_{k-1}),$$
    and compute
$$v_k = P_K(w_k - \lambda_k Dw_k).$$
    Step 3: Compute
$$z_k = (1 - \vartheta_k)w_k + \vartheta_k\big(v_k + \lambda_k(Dw_k - Dv_k)\big).$$
    Step 4: Compute
$$p_{k+1} = (I - \alpha_k\varrho F)z_k, \quad k \geq 1.$$
    Update
$$\lambda_{k+1} = \begin{cases} \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|Dw_k - Dv_k\|}, \lambda_k + q_k\right\}, & \text{if } Dw_k \neq Dv_k, \\ \lambda_k + q_k, & \text{otherwise}. \end{cases}$$
  • Put $k := k + 1$ and return to Step 1.
Then, the sequence $\{p_k\}$ converges strongly to the unique solution of the BVIP (71).

4.2. Application to the Convex Minimization Problem

Let $h : H \to \mathbb{R}$ be a convex differentiable function and $g : H \to \mathbb{R} \cup \{+\infty\}$ be a proper lower semicontinuous convex function. Then, the convex minimization problem (CMP) is formulated as follows:
$$\text{find } p \in H \text{ such that } h(p) + g(p) = \min_{u \in H}\left[h(u) + g(u)\right]. \tag{74}$$
It is well known that problem (74) is a special case of the MIP; that is, it is equivalent to the problem of finding $p \in H$ such that $0 \in \nabla h(p) + \partial g(p)$. It is a known fact that if $\nabla h$ is $\frac{1}{L}$-Lipschitz continuous, then it is $L$-inverse strongly monotone, and that $\partial g$ is a maximal monotone operator. The solution set of the CMP (74) is denoted by $(\nabla h + \partial g)^{-1}(0)$. In Theorems 1 and 2, we set $D = \nabla h$, $E = \partial g$, and $F(p) = p - f(p)$, where $f : H \to H$ is a $\gamma$-contraction mapping. It is not hard to see that $F : H \to H$ is $(1 + \gamma)$-Lipschitz continuous and $(1 - \gamma)$-strongly monotone. Consequently, taking $\varrho = 1$, we obtain the following corollaries from Theorems 1 and 2, respectively.
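The stated properties of $F = I - f$ follow directly from Definition 1; we record the one-line verifications for completeness: for all $p, q \in H$,
$$\langle Fp - Fq, p - q \rangle = \|p - q\|^2 - \langle f(p) - f(q), p - q \rangle \geq (1 - \gamma)\|p - q\|^2,$$
$$\|Fp - Fq\| \leq \|p - q\| + \|f(p) - f(q)\| \leq (1 + \gamma)\|p - q\|.$$
Moreover, with $\varrho = 1$, the update (15) becomes
$$(I - \alpha_k F)z_k = z_k - \alpha_k\big(z_k - f(z_k)\big) = (1 - \alpha_k)z_k + \alpha_k f(z_k),$$
which is exactly the viscosity-type step appearing in Algorithms 5 and 6 below.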
Corollary 3.
Let $H$ be a real Hilbert space, let $\nabla h : H \to H$ be an $L$-Lipschitz continuous operator, and let $\partial g : H \to 2^H$ be a maximal monotone operator. Assume that $(\nabla h + \partial g)^{-1}(0) \neq \emptyset$ and that assumptions (A4)–(A5) hold. Let $\{p_k\}$ be the sequence generated by Algorithm 5:
Algorithm 5 A modified accelerated projection and contraction method.
  • Initialization: Choose $\phi > 0$, $\lambda_1 > 0$, $0 < c_1 < \bar{c}_1 < 2$, and $\gamma \in [0, \sqrt{5} - 2)$. Let $p_0, p_1 \in H$ and set $k = 1$.
  • Iterative steps: Calculate the next iteration point $p_{k+1}$ as follows:
  • Step 1: Choose $\phi_k$ such that $\phi_k \in [0, \bar{\phi}_k]$, where
$$\bar{\phi}_k = \begin{cases} \min\left\{\phi, \frac{\epsilon_k}{\|p_k - p_{k-1}\|}\right\}, & \text{if } p_k \neq p_{k-1}, \\ \phi, & \text{otherwise}. \end{cases}$$
    Step 2: Compute
$$w_k = p_k + \phi_k(p_k - p_{k-1}), \quad v_k = (I + \lambda_k\partial g)^{-1}(w_k - \lambda_k\nabla h(w_k)).$$
    Step 3: Compute
$$z_k = w_k - m_k r_k,$$
    where
$$r_k = w_k - v_k - \lambda_k(\nabla h(w_k) - \nabla h(v_k))$$
    and
$$m_k = \begin{cases} (c_1 + t_k)\frac{\langle w_k - v_k, r_k \rangle}{\|r_k\|^2}, & \text{if } r_k \neq 0, \\ 0, & \text{otherwise}. \end{cases}$$
    Step 4: Compute
$$p_{k+1} = (1 - \alpha_k)z_k + \alpha_k f(z_k), \quad k \geq 1.$$
    Update
$$\lambda_{k+1} = \begin{cases} \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|\nabla h(w_k) - \nabla h(v_k)\|}, \lambda_k + q_k\right\}, & \text{if } \nabla h(w_k) \neq \nabla h(v_k), \\ \lambda_k + q_k, & \text{otherwise}. \end{cases}$$
    Put $k := k + 1$ and return to Step 1.
Then, the sequence $\{p_k\}$ converges strongly to an element of $(\nabla h + \partial g)^{-1}(0)$.
Corollary 4.
Let $H$ be a real Hilbert space, let $\nabla h : H \to H$ be an $L$-Lipschitz continuous operator, and let $\partial g : H \to 2^H$ be a maximal monotone operator. Assume that $(\nabla h + \partial g)^{-1}(0) \neq \emptyset$ and that assumptions (A4)–(A5) hold. Let $\{p_k\}$ be the sequence generated by Algorithm 6:
Algorithm 6 A modified accelerated Tseng splitting method.
  • Initialization: Choose $\phi > 0$, $\lambda_1 > 0$, $\gamma \in [0, \sqrt{5} - 2)$, and $\{\vartheta_k\} \subset [a, b] \subset (0, 1]$. Let $p_0, p_1 \in H$ and set $k = 1$.
  • Iterative steps: Calculate the next iteration point $p_{k+1}$ as follows:
  • Step 1: Choose $\phi_k$ such that $\phi_k \in [0, \bar{\phi}_k]$, where
$$\bar{\phi}_k = \begin{cases} \min\left\{\frac{k - 1}{k + \phi - 1}, \frac{\epsilon_k}{\|p_k - p_{k-1}\|}\right\}, & \text{if } p_k \neq p_{k-1}, \\ \frac{k - 1}{k + \phi - 1}, & \text{otherwise}. \end{cases}$$
    Step 2: Set
$$w_k = p_k + \phi_k(p_k - p_{k-1}),$$
    and compute
$$v_k = (I + \lambda_k\partial g)^{-1}(w_k - \lambda_k\nabla h(w_k)).$$
    Step 3: Compute
$$z_k = (1 - \vartheta_k)w_k + \vartheta_k\big(v_k + \lambda_k(\nabla h(w_k) - \nabla h(v_k))\big).$$
    Step 4: Compute
$$p_{k+1} = (1 - \alpha_k)z_k + \alpha_k f(z_k), \quad k \geq 1.$$
    Update
$$\lambda_{k+1} = \begin{cases} \min\left\{\frac{(s_k + s)\|w_k - v_k\|}{\|\nabla h(w_k) - \nabla h(v_k)\|}, \lambda_k + q_k\right\}, & \text{if } \nabla h(w_k) \neq \nabla h(v_k), \\ \lambda_k + q_k, & \text{otherwise}. \end{cases}$$
    Put $k := k + 1$ and return to Step 1.
Then, the sequence $\{p_k\}$ converges strongly to an element of $(\nabla h + \partial g)^{-1}(0)$.

4.3. Application to the Image Restoration Problem

The general image recovery problem can be formulated as the inversion of the observation model
$$z = Dp + h, \tag{75}$$
where $h$ is unknown additive random noise, $p \in \mathbb{R}^k$ is the unknown original image, $D$ is the linear operator involved in the considered image recovery problem, and $z$ is the known degraded observation. Model (75) is closely related to several optimization formulations. In recent years, the $l_1$ norm has been widely used by many authors for these kinds of problems. The $l_1$ regularization model, which can be employed to remove noise in the recovery process, is defined by
$$\min_{p \in \mathbb{R}^k}\left\{\lambda\|p\|_1 + \frac{1}{2}\|z - Dp\|_2^2\right\}, \tag{76}$$
where $p \in \mathbb{R}^k$, $z \in \mathbb{R}^m$, $D$ is an $m \times k$ matrix, and $\lambda > 0$. Next, we use the various algorithms listed above to find a solution of the following CMP:
$$\text{find } p \in \operatorname*{Argmin}_{p \in \mathbb{R}^k}\left\{\lambda\|p\|_1 + \frac{1}{2}\|z - Dp\|_2^2\right\}, \tag{77}$$
where $D$ is the operator representing the blur mask and $z$ is the degraded image.
In this numerical experiment, $g(p) = \lambda\|p\|_1$, $h(p) = \frac{1}{2}\|z - Dp\|_2^2$, and in all the algorithms we set $D = \nabla h$ and $E = \partial g$. The gradient is given by
$$\nabla h(p) = D^*(Dp - z). \tag{78}$$
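In code, the two building blocks of this experiment are the gradient evaluation (78) and the soft-thresholding resolvent of $\partial g$; a small sketch follows (the dense random matrix and parameter values are illustrative stand-ins, since in the experiments $D$ is the motion-blur operator):

```python
import numpy as np

rng = np.random.default_rng(0)
D_mat = rng.standard_normal((64, 64))     # stand-in for the blur operator D
z = rng.standard_normal(64)               # degraded observation
lam = 0.01                                # regularization weight

def grad_h(p):
    # (78): gradient of h(p) = 0.5*||z - D p||_2^2 is D^*(D p - z)
    return D_mat.T @ (D_mat @ p - z)

def prox_g(p, step):
    # resolvent of step*lam*||.||_1: componentwise soft-thresholding
    return np.sign(p) * np.maximum(np.abs(p) - step * lam, 0.0)
```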
Moreover, we compare the image recovery efficiency of Algorithm 5 (in short, OAURN3) and Algorithm 6 (in short, OAURN4) with Algorithm 3.3 by Adamu et al. [36] (in short, ADIA Alg. 3.3) and Algorithm 3.1 by Alakoya et al. [15] (in short, AOM Alg. 3.1). For all algorithms, we use the stopping criterion $E_k = \|p_{k+1} - p_k\| < 10^{-6}$ and choose the following parameters:
  • In OAURN3 and OAURN4, we set $\phi = 0.73$, $\lambda_1 = 3.5$, $s = 0.66$, $c_1 = 0.99$, $\vartheta_k = 0.68$, $\alpha_k = \frac{1}{k+1}$, $\epsilon_k = \frac{100}{(k+1)^2}$, $s_k = \frac{1}{k}$, $\varrho = \frac{1.8\eta}{L_1^2}$, and $q_k = \frac{1}{(k+1)^{1.1}}$.
  • In ADIA Alg. 3.3, we set $\tau_k = \frac{1}{k+1}$, $\sigma_k = \mu_k = 0.5(1 - \tau_k)$, $\epsilon_k = \gamma_k = s_k = \frac{100}{(k+1)^2}$, $u = \frac{1}{2}$, $\lambda_k = \frac{1}{4}$, and $a = 0.8$.
  • In AOM Alg. 3.1, we set $\alpha_k = \frac{1}{k+1}$, $\delta_k = \xi_k = 0.5(1 - \alpha_k)$, $\epsilon_k = \frac{100}{(k+1)^2}$, $\theta = 0.89$, $\lambda_1 = 3.5$, $\phi = 0.89$, $\alpha = 0.145$, $\beta = 0.895$, $Sp = \frac{2}{3}p$, $Tp = \frac{3}{4}p$, $f(p) = \frac{1}{3}p$, and $\phi_k = \frac{1}{(k+1)^{1.1}}$.
The test images are a hand X-ray and an apple. The performances of the algorithms are measured via the signal-to-noise ratio (SNR), defined by
$$\mathrm{SNR} = 20\log_{10}\left(\frac{\|p\|_2}{\|p - p^*\|_2}\right), \tag{79}$$
where $p^*$ is the restored image and $p$ is the original image. We consider the blur function in MATLAB, "fspecial('motion', 40, 70)", and add random noise. All numerical simulations were performed using MATLAB R2020b on a desktop PC with an Intel® Core™ i7-3540M CPU @ 3.00 GHz × 4 and 4.00 GB of memory. The numerical results are presented in Figure 1, Figure 2, Figure 3 and Figure 4 and Table 1.
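For reference, the SNR criterion (79), read as the norm ratio above, is a one-liner:

```python
import numpy as np

def snr(p, p_star):
    # (79): 20*log10(||p||_2 / ||p - p*||_2); p = original, p_star = restored
    return 20.0 * np.log10(np.linalg.norm(p) / np.linalg.norm(p - p_star))
```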
Remark 4.
From Figure 1, Figure 2, Figure 3 and Figure 4 and Table 1, one can see that higher SNR values correspond to better-quality recovered images. Thus, it is evident that Algorithms 1 and 2 are more efficient than the other compared algorithms.

5. Numerical Experiments

In this section, we present some numerical experiments to illustrate the numerical behavior of Algorithm 1 (in short, OAURN1) and Algorithm 2 (in short, OAURN2). Moreover, we compare them with Algorithm 3.3 by Adamu et al. [36] (in short, ADIA Alg. 3.3) and Algorithm 3.1 by Alakoya et al. [15] (in short, AOM Alg. 3.1). We choose the parameters of all the methods as follows:
  • In the proposed Algorithms 1 and 2, we set $\phi = 0.73$, $\lambda_1 = 2.5$, $s = 0.59$, $c_1 = 0.67$, $\vartheta_k = 0.89$, $\alpha_k = \frac{1}{2k+1}$, $\epsilon_k = \frac{1}{(2k+1)^3}$, $s_k = \frac{1}{k+1}$, $\varrho = \frac{1.7\eta}{L_1^2}$, and $q_k = \frac{1}{(k+1)^{1.1}}$.
  • In ADIA Alg. 3.3, we set $\tau_k = \frac{1}{2k+1}$, $\sigma_k = \mu_k = 0.5(1 - \tau_k)$, $\epsilon_k = \gamma_k = s_k = \frac{1}{(2k+1)^2}$, $u = \frac{1}{2}$, $\lambda_k = \frac{1}{6}$, and $a = 0.9$.
  • In AOM Alg. 3.1, we set $\alpha_k = \frac{1}{2k+1}$, $\delta_k = \xi_k = 0.5(1 - \alpha_k)$, $\epsilon_k = \frac{1}{(2k+1)^3}$, $\theta = 0.73$, $\lambda_1 = 2.5$, $\phi = 0.59$, $\alpha = 0.145$, $\beta = 0.465$, $Sp = \frac{2}{3}p$, $Tp = \frac{2}{3}p$, $f(p) = \frac{1}{2}p$, and $\phi_k = \frac{1}{(k+1)^{1.1}}$.
Example 1.
Let $H = L^2([0, 1])$ and let the operators $D, E, F : H \to H$ be defined by
$$D(p) = 3p(t), \quad E(p) = 6p(t), \quad \text{and} \quad F(p) = 0.5p(t), \quad t \in [0, 1].$$
It is not hard to verify that $D$ is $\frac{1}{3}$-inverse strongly monotone, $E$ is a maximal monotone operator, and $F$ is $0.5$-strongly monotone and $0.5$-Lipschitz continuous. For this experiment, we use the stopping criterion $E_k = \|p_{k+1} - p_k\| < 10^{-5}$ and consider the following cases (a worked form of the resolvent step is given after the list):
  • Case I: $p_0(t) = t$ and $p_1(t) = 1 + t^2$;
  • Case II: $p_0(t) = 2t$ and $p_1(t) = \sin(t)$;
  • Case III: $p_0(t) = t^3 + t$ and $p_1(t) = t^3 + 3t$;
  • Case IV: $p_0(t) = t + 2$ and $p_1(t) = \cos(t)$.
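Since $D$ and $E$ are scalar multiples of the identity here, every step of the compared methods is explicit; for instance, a direct computation (our worked example) gives
$$v_k = (I + \lambda_k E)^{-1}(I - \lambda_k D)w_k = \frac{1 - 3\lambda_k}{1 + 6\lambda_k}\,w_k,$$
because $(I + \lambda_k E)^{-1}u = \frac{u}{1 + 6\lambda_k}$ when $E(p) = 6p$.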
The obtained numerical results are presented in Table 2 and Figure 5. It can be seen that our methods outperform the compared methods.
Example 2.
Let $H = (l_2(\mathbb{R}), \|\cdot\|_{l_2})$, where $l_2(\mathbb{R}) = \left\{p = (p_1, p_2, p_3, \ldots), \ p_j \in \mathbb{R} : \sum_{j=1}^{\infty}|p_j|^2 < \infty\right\}$ and $\|p\|_{l_2} = \left(\sum_{j=1}^{\infty}|p_j|^2\right)^{\frac{1}{2}}$ for all $p \in l_2(\mathbb{R})$. We define the operators $D, E, F : l_2(\mathbb{R}) \to l_2(\mathbb{R})$ by
$$Dp = 0.5p, \quad Ep = 8p, \quad \text{and} \quad F(p) = 0.8p, \quad \forall p \in H.$$
It is easy to check that $D$ is $2$-inverse strongly monotone, $E$ is a maximal monotone operator, and $F$ is $0.8$-strongly monotone and $0.8$-Lipschitz continuous. For this experiment, we use the stopping criterion $E_k = \|p_{k+1} - p_k\| < 10^{-8}$ and consider the following cases:
  • Case A: $p_0 = \left(\frac{1}{4}, \frac{1}{8}, \frac{1}{9}, \ldots\right)$ and $p_1 = \left(1, \frac{1}{2}, \frac{1}{3}, \ldots\right)$;
  • Case B: $p_0 = \left(\frac{1}{5}, \frac{1}{7}, \frac{1}{10}, \ldots\right)$ and $p_1 = \left(1, \frac{1}{3}, \frac{1}{5}, \ldots\right)$;
  • Case C: $p_0 = \left(1, \frac{1}{8}, \frac{1}{10}, \ldots\right)$ and $p_1 = \left(\frac{1}{6}, \frac{1}{5}, \frac{1}{7}, \ldots\right)$;
  • Case D: $p_0 = \left(\frac{1}{2}, \frac{1}{6}, \frac{1}{8}, \ldots\right)$ and $p_1 = \left(1, \frac{1}{5}, \frac{1}{10}, \ldots\right)$.
The obtained numerical results are shown in Table 3 and Figure 6; it can be seen that our methods outperformed the compared methods.

6. Conclusions

In this work, two efficient iterative methods for solving the strongly monotone variational inequality problem over the solution set of the monotone inclusion problem have been introduced. These methods are accelerated by the inertial technique. The new methods use self-adaptive step sizes rather than depending on prior knowledge of the operator norm or the Lipschitz constants of the operators involved. We obtained strong convergence results for these methods under some mild conditions on the control parameters. We applied our results to solve the bilevel variational inequality problem, the convex minimization problem, and the image recovery problem. Numerical experiments were carried out to demonstrate the applicability of our methods and to further show the superiority of the proposed methods over some existing methods. The new results improve, extend, complement, and unify some existing results in [4,12,13,27] and several others.

Author Contributions

Conceptualization, A.E.O.; Methodology, G.C.U.; Software, J.A.A.; Validation, Funding acquisition, Formal analysis, H.A.N.; Writing—original draft, A.E.O.; Supervision, G.C.U. and O.K.N. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to Prince Sattam Bin Abdulaziz University for funding this research through project number (PSAU/2023/01/90101).

Data Availability Statement

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Abuchu, J.A.; Ofem, A.E.; Ugwunnadi, G.C.; Narain, O.K.; Hussain, A. Hybrid Alternated Inertial Projection and Contraction Algorithm for Solving Bilevel Variational Inequality Problems. J. Math. 2023, 2023, 3185746. [Google Scholar] [CrossRef]
  2. Ofem, A.E. Strong convergence of a multi-step implicit iterative scheme with errors for common fixed points of uniformly L–Lipschitzian total asymptotically strict pseudocontractive mappings. Results Nonlinear Anal. 2020, 3, 100–116. [Google Scholar]
  3. Ofem, A.E.; Mebawondu, A.A.; Ugwunnadi, G.C.; Işık, H.; Narain, O.K. A modified subgradient extragradient algorithm-type for solving quasimonotone variational inequality problems with applications. J. Inequal. Appl. 2023, 2023, 73. [Google Scholar] [CrossRef]
  4. Cholamjiak, P.; Hieu, D.V.; Cho, Y.J. Relaxed forward–backward splitting methods for solving variational inclusions and applications. J. Sci. Comput. 2021, 88, 85. [Google Scholar] [CrossRef]
  5. Gibali, A.; Thong, D.V. Tseng type methods for solving inclusion problems and its applications. Calcolo 2018, 55, 49. [Google Scholar] [CrossRef]
  6. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  7. Ofem, A.E.; Mebawondu, A.A.; Ugwunnadi, G.C.; Cholamjiak, P.; Narain, O.K. Relaxed Tseng splitting method with double inertial steps for solving monotone inclusions and fixed point problems. Numer. Algor. 2023. [Google Scholar] [CrossRef]
  8. Ofem, A.E.; Işik, H.; Ali, F.; Ahmad, J. A new iterative approximation scheme for Reich–Suzuki type nonexpansive operators with an application. J. Inequal. Appl. 2022, 2022, 28. [Google Scholar] [CrossRef]
  9. Shehu, Y.; Dong, Q.L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409. [Google Scholar] [CrossRef]
  10. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef]
  11. Izuchukwu, C.; Reich, S.; Shehu, Y.; Taiwo, A. Strong Convergence of Forward-Reflected-Backward Splitting Methods for Solving Monotone Inclusions with Applications to Image Restoration and Optimal Control. J. Sci. Comput. 2023, 94, 73. [Google Scholar] [CrossRef]
  12. Tan, B.; Cho, S.Y. Strong Convergence of inertial forward-backward methods for solving monotone inclusions. Appl. Anal. 2022, 101, 5386–5414. [Google Scholar] [CrossRef]
  13. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  14. Zhang, C.; Wang, Y. Proximal algorithm for solving monotone variational inclusion. Optimization 2018, 67, 1197–1209. [Google Scholar] [CrossRef]
  15. Alakoya, T.O.; Ogunsola, O.J.; Mewomo, O.T. An inertial viscosity algorithm for solving monotone variational inclusion and common fixed point problems of strict pseudocontractions. Bol. Soc. Mat. Mex. 2023, 29, 31. [Google Scholar] [CrossRef]
  16. Thong, D.V.; Anh, P.K.; Dung, V.T.; Linh, D.T.M. A Novel Method for Finding Minimum-norm Solutions to Pseudomonotone Variational Inequalities. Netw. Spat. Econ. 2023, 23, 39–64. [Google Scholar] [CrossRef]
  17. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  18. Thong, D.V.; Hieu, D.V.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2020, 14, 115–144. [Google Scholar] [CrossRef]
  19. Thong, D.V.; Liu, L.L.; Dong, Q.L.; Long, L.V.; Tuan, P.A. Fast relaxed inertial Tseng’s method-based algorithm for solving variational inequality and fixed point problems in Hilbert spaces. J. Comput. Appl. Math. 2023, 418, 114739. [Google Scholar] [CrossRef]
  20. Shehu, Y.; Yao, J.C. Rate of convergence for inertial iterative method for countable family of certain quasi–nonexpansive mappings. J. Nonlinear Convex Anal. 2020, 21, 533–541. [Google Scholar]
  21. Ofem, A.E.; Igbokwe, D.I. A new faster four step iterative algorithm for Suzuki generalized nonexpansive mappings with an application. Adv. Theory Nonlinear Anal. Its Appl. 2021, 5, 482–506. [Google Scholar]
  22. Ofem, A.E.; Abuchu, J.A.; George, R.; Ugwunnadi, G.C.; Narain, O.K. Some new results on convergence, weak w2–stability and data dependence of two multivalued almost contractive mappings in hyperbolic spaces. Mathematics 2022, 10, 3720. [Google Scholar] [CrossRef]
  23. Eslamian, M. Variational inequality over the solution set of split monotone variational inclusion problem with application to bilevel programming problem. Filomat 2023, 37, 8361–8376. [Google Scholar]
  24. Ofem, A.E.; Udofia, U.E.; Igbokwe, D.I. A robust iterative approach for solving nonlinear Volterra Delay integro-differential equations. Ural Math. J. 2021, 7, 59–85. [Google Scholar] [CrossRef]
  25. Okeke, G.A.; Ofem, A.E.; Abdeljawad, T.; Alqudah, M.A.; Khan, A. A solution of a nonlinear Volterra integral equation with delay via a faster iteration method. AIMS Math. 2023, 8, 102–124. [Google Scholar] [CrossRef]
  26. Yamada, I. The hybrid steepest descent method for the variational inequality problem over the intersection of fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; Elsevier: Amsterdam, The Netherlands, 2001; pp. 473–504. [Google Scholar]
  27. Lorenz, D.A.; Pock, T. An inertial forward–backward algorithm for monotone inclusions. Math. Imaging Vis. 2015, 51, 311–325. [Google Scholar] [CrossRef]
  28. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
  29. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]
  30. Brezis, H. Operateurs maximaux monotones, Chapitre II. North-Holland Math. Stud. 1973, 5, 19–51. [Google Scholar]
  31. Anh, P.K.; Anh, T.V.; Muu, L.D. On Bilevel split pseudomonotone variational inequality problems with applications. Acta Math. Vietnam. 2017, 42, 413–429. [Google Scholar] [CrossRef]
  32. Liu, H.; Yang, J. Weak convergence of iterative methods for solving quasimonotone variational inequalities. Comput. Optim. Appl. 2020, 77, 491–508. [Google Scholar] [CrossRef]
  33. Wang, Z.; Lei, Z.; Long, X.; Chen, Z. A modified Tseng splitting method with double inertial steps for solving monotone inclusion problems. J. Sci. Comput. 2023, 96, 92. [Google Scholar] [CrossRef]
  34. Kitkuan, D.; Kumam, P.; Martinez-Moreno, J. Generalized Halpern-type forward-backward splitting methods for convex minimization problems with application to image restoration problems. Optimization 2020, 69, 1–25. [Google Scholar] [CrossRef]
  35. Rockafellar, R. On the maximality of sums of nonlinear monotone operators. Trans. Amer. Math. Soc. 1970, 149, 75–88. [Google Scholar] [CrossRef]
  36. Adamu, A.; Deepho, J.; Ibrahim, A.H.; Abubakar, A.B. Approximation of zeros of sum of monotone mappings with applications to variational inequality and image restoration problems. Nonlinear Funct. Anal. Appl. 2021, 26, 411–432. [Google Scholar]
Figure 1. Comparison of restored images via various methods when the number of iterations is 2000 for the apple image.
Figure 2. Comparison of restored images via various methods when the number of iterations is 2500 for the hand X-ray image.
Figure 3. Graphs of SNR for the methods OAURN1, OAURN2, ADIA Alg. 3.3, and AOM Alg. 3.1 for the apple image.
Figure 4. Graphs of SNR for the methods OAURN1, OAURN2, ADIA Alg. 3.3, and AOM Alg. 3.1 for the hand X-ray image.
Figure 5. Example 1, Case I (top left); Case II (top right); Case III (bottom left); Case IV (bottom right).
Figure 6. Example 2, Case A (top left); Case B (top right); Case C (bottom left); Case D (bottom right).
Table 1. Numerical comparison for the methods OAURN1, OAURN2, ADIA Alg. 3.3, and AOM Alg. 3.1 (SNR values).

| Images | k | OAURN1 | OAURN2 | ADIA Alg. 3.3 | AOM Alg. 3.1 |
|---|---|---|---|---|---|
| Apple (Size = 350 × 600) | 300 | 23.8342 | 23.5978 | 20.8637 | 11.7830 |
| | 600 | 23.3478 | 23.04771 | 20.9999 | 11.9876 |
| | 1600 | 24.9893 | 24.5673 | 22.6738 | 16.3562 |
| | 2000 | 24.7839 | 24.3435 | 22.8246 | 16.8673 |
| Hand X-ray (Size = 520 × 750) | 300 | 23.9984 | 23.6374 | 21.2243 | 11.7803 |
| | 600 | 23.9999 | 23.8563 | 21.6754 | 11.8587 |
| | 1600 | 24.9738 | 24.8973 | 22.7437 | 16.3478 |
| | 2500 | 24.99989 | 24.94555 | 22.5467 | 17.8495 |
Table 2. Numerical results of Example 1.

| Cases | | OAURN1 | OAURN2 | ADIA Alg. 3.3 | AOM Alg. 3.1 |
|---|---|---|---|---|---|
| Case I | CPU time (s) | 0.0354 | 0.0385 | 0.058834 | 0.7367 |
| | No. of Iter. | 15 | 15 | 17 | 55 |
| Case II | CPU time (s) | 0.00456 | 0.005687 | 0.0864 | 0.2673 |
| | No. of Iter. | 15 | 15 | 18 | 98 |
| Case III | CPU time (s) | 0.0853 | 0.0987 | 0.1343 | 0.4536 |
| | No. of Iter. | 14 | 14 | 17 | 57 |
| Case IV | CPU time (s) | 0.1637 | 0.1856 | 0.35468 | 0.9637 |
| | No. of Iter. | 16 | 16 | 17 | 1888 |
Table 3. Numerical results of Example 2.

| Cases | | OAURN1 | OAURN2 | ADIA Alg. 3.3 | AOM Alg. 3.1 |
|---|---|---|---|---|---|
| Case A | CPU time (s) | 0.0060 | 0.0061 | 0.0099 | 0.0187 |
| | No. of Iter. | 48 | 49 | 70 | 89 |
| Case B | CPU time (s) | 0.0073 | 0.0075 | 0.0167 | 0.0376 |
| | No. of Iter. | 46 | 49 | 73 | 90 |
| Case C | CPU time (s) | 0.0038 | 0.0041 | 0.0562 | 0.0876 |
| | No. of Iter. | 47 | 50 | 60 | 72 |
| Case D | CPU time (s) | 0.0052 | 0.0054 | 0.2536 | 0.4667 |
| | No. of Iter. | 58 | 61 | 77 | 97 |