Inertial Krasnoselski–Mann Iterative Method for Solving Hierarchical Fixed Point and Split Monotone Variational Inclusion Problems with Its Applications

by Preeyanuch Chuasuk 1 and Anchalee Kaewcharoen 2,*
1 Department of Mathematics, Faculty of Science, Burapha University, Chonburi 20131, Thailand
2 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(19), 2460; https://doi.org/10.3390/math9192460
Submission received: 25 August 2021 / Revised: 26 September 2021 / Accepted: 28 September 2021 / Published: 2 October 2021
(This article belongs to the Section Mathematics and Computer Science)

Abstract: In this article, we discuss the hierarchical fixed point and split monotone variational inclusion problems and propose a new iterative method with inertial terms involving a step size chosen to avoid the difficulty of calculating the operator norm in real Hilbert spaces. A strong convergence theorem for the proposed method is established under some suitable control conditions. Furthermore, the proposed method is modified and used to derive a scheme for solving the split problems. Finally, we demonstrate the efficiency and applicability of our schemes through numerical experiments as well as an example in the field of image restoration.

1. Introduction

The variational inequality problem (VIP for short) is a significant branch of mathematics. Over the decades, this problem has been extensively studied for solving many real-world problems in various applied research areas such as physics, economics, finance, optimization, network analysis, medical imaging, water resources and structural analysis. Moreover, it contains fixed point problems and optimization problems arising in machine learning, signal processing and linear inverse problems (see [1,2,3]). The set of solutions of the variational inequality problem is denoted by
$$VI(C, A) = \{ u \in C : \langle v - u, Au \rangle \geq 0, \ \forall v \in C \},$$
where $C$ is a nonempty closed convex subset of the Hilbert space $H$ and $A : C \to H$ is a mapping.
Moudafi and Mainge [4] established the hierarchical fixed point problem (HFPP) for a nonexpansive mapping $T$ with respect to another nonexpansive mapping $S$ on $H$ by utilizing the concept of the variational inequality problem: Find $x^* \in Fix(T)$ such that
$$\langle x^* - Sx^*, x - x^* \rangle \geq 0, \quad \forall x \in Fix(T), \qquad (1)$$
where $S : H \to H$ is a nonexpansive mapping and $Fix(T) = \{x \in C : x = Tx\}$. It is easy to see that (1) is equivalent to the following fixed point problem: Find $x^* \in H$ such that
$$x^* = P_{Fix(T)} S x^*, \qquad (2)$$
where $P_{Fix(T)}$ stands for the metric projection onto the closed convex set $Fix(T)$. The solution set of HFPP (1) is denoted by $\Phi = \{x^* \in H : \langle x^* - Sx^*, x - x^* \rangle \geq 0, \ \forall x \in Fix(T)\}$. It is obvious that $\Phi = VI(Fix(T), I - S)$. It is worth noting that (1) covers monotone variational inequalities over fixed point sets as well as minimization problems, etc. Many iterative methods have been studied and developed for fixed point theory and for solving the hierarchical fixed point problem (1) (see [4,5,6,7,8]).
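To make the reformulation (2) concrete, here is a minimal Python/NumPy sketch for a toy one-dimensional instance (our illustrative choice, not from the paper): $T$ is the metric projection onto $[0, 1]$, so $Fix(T) = [0, 1]$, and $S = \cos$, which is nonexpansive; the fixed-point iteration $x \leftarrow P_{Fix(T)}(Sx)$ converges in this toy case because the composition behaves as a contraction near the solution.

```python
import numpy as np

# Toy HFPP instance (illustrative assumption, not from the paper):
# T = projection onto [0, 1], so Fix(T) = [0, 1]; S(x) = cos(x) is nonexpansive on R.
proj_fix_T = lambda x: np.clip(x, 0.0, 1.0)   # metric projection P_{Fix(T)}
S = np.cos

x = 5.0                                       # arbitrary starting point
for _ in range(100):                          # fixed-point iteration x <- P_{Fix(T)}(S x)
    x = proj_fix_T(S(x))
print(x)  # approx 0.739085, the unique fixed point of cos in [0, 1]
```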
On the other hand, the split monotone variational inclusion problem (SpMVIP for short) was proposed and studied by Moudafi [9]. It has been applied as a model in intensity-modulated radiation therapy treatment planning (see [10]). This concept also appears in many inverse problems arising in phase retrieval and other real-world problems such as sensor networks, data compression and computerized tomography (see [11,12]): Find $x^* \in H_1$ such that
$$0 \in f(x^*) + F(x^*) \qquad (3)$$
and such that
$$y^* = Ax^* \ \text{solves} \ 0 \in g(y^*) + G(y^*), \qquad (4)$$
where $F : H_1 \to 2^{H_1}$ and $G : H_2 \to 2^{H_2}$ are multi-valued monotone operators, $f : H_1 \to H_1$, $g : H_2 \to H_2$ are two single-valued operators and $A : H_1 \to H_2$ is a bounded linear operator. The solution set of SpMVIP is denoted by $\Omega = \{x^* \in H_1 : x^* \in \mathrm{Sol(MVIP}(3)) \ \text{and} \ Ax^* \in \mathrm{Sol(MVIP}(4))\}$.
Moudafi [9] introduced the following iterative method and studied a weak convergence theorem for the SpMVIP: For a given $x_0 \in H_1$, compute the iterative sequence $\{x_n\}$ generated by the scheme
$$x_{n+1} = U(x_n + r A^*(V - I)A x_n), \quad r > 0,$$
where $U = J_\lambda^F(I - \lambda f)$ and $V = J_\lambda^G(I - \lambda g)$ such that $J_\lambda^F$ and $J_\lambda^G$ are the resolvent mappings of $F$ and $G$, respectively (see the definition in Section 2), and $I$ is the identity mapping. Here $A$ is a bounded linear operator with adjoint $A^*$ and $r \in (0, \frac{1}{L})$, with $L$ being the spectral radius of the operator $A^*A$. He proved that the sequence $\{x_n\}$ converges weakly to a solution of the SpMVIP.
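For concreteness, a minimal Python/NumPy sketch of this scheme on a toy instance (our assumption: the diagonal operators $F = 3I$, $G = 5I$, $f = 2I$, $g = 4I$ and $A = \frac{9}{4}I$ of Example 1 below, for which the resolvents have closed forms):

```python
import numpy as np

lam, r = 0.1, 0.1                 # r in (0, 1/L) with L = ||A*A|| = 5.0625, so 1/L ~ 0.1975
U = lambda x: (1 - 2*lam) / (1 + 3*lam) * x   # J_lam^F (I - lam f) for F = 3I, f = 2I
V = lambda y: (1 - 4*lam) / (1 + 5*lam) * y   # J_lam^G (I - lam g) for G = 5I, g = 4I
A = lambda x: 2.25 * x            # A = A* = (9/4) I, self-adjoint here
x = np.array([1.0, -2.0, 3.0])    # x_0
for n in range(200):
    x = U(x + r * (A(V(A(x)) - A(x))))        # x_{n+1} = U(x_n + r A*(V - I) A x_n)
print(np.linalg.norm(x))          # tends to 0, the unique solution of this toy SpMVIP
```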
In 2017, Kazmi et al. [13] developed a Krasnoselski–Mann type iterative method to approximate a common solution of a hierarchical fixed point problem for nonexpansive mappings $S, T$ and a split monotone variational inclusion problem, defined as follows:
$$x_0 \in C; \quad u_n = (1 - \alpha_n)x_n + \alpha_n(\beta_n S x_n + (1 - \beta_n)T x_n); \quad x_{n+1} = U(u_n + \lambda A^*(V - I)A u_n), \quad n \geq 0, \qquad (5)$$
where $U = J_\lambda^F(I - \lambda_n f)$, $V = J_\lambda^G(I - \lambda_n g)$ and the step size $\lambda \in (0, \frac{1}{L})$, where $L$ is the spectral radius of the operator $A^*A$, under the following conditions on $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\lambda_n\}$:
$$(C1) \ \sum_{n=0}^{\infty} \beta_n < \infty; \quad (C2) \ \lim_{n \to \infty} \frac{\|x_n - u_n\|}{\alpha_n \beta_n} = 0; \quad (C3) \ \liminf_{n \to \infty} \lambda_n > 0.$$
They proved that the sequence $\{x_n\}$ converges weakly to a solution of the hierarchical fixed point and split monotone variational inclusion problems. A natural question is how to construct strong convergence results that approximate the solution of the split monotone variational inclusion and hierarchical fixed point problems.
In 2021, Dao-Jun Wen [14] modified the Krasnoselski–Mann type iteration (5) by replacing the operator $T$ with $(I - \mu_n D)$, where $D$ is a strongly monotone and $L$-Lipschitzian operator, and defined a sequence $\{x_n\}$ in the following manner:
$$u_n = (1 - \alpha_n)x_n + \alpha_n T(\beta_n S x_n + (1 - \beta_n)(I - \mu_n D)x_n), \quad x_{n+1} = U(u_n + \lambda A^*(V - I)A u_n),$$
where $U = J_\lambda^F(I - \lambda f)$, $V = J_\lambda^G(I - \lambda g)$, $T, S$ are two nonexpansive mappings and the step size $\lambda \in (0, \frac{1}{L})$, where $L$ is the spectral radius of the operator $A^*A$. He established a strong convergence result and constructed a new condition on the coefficients which replaces condition (C2).
If $f = 0$ and $g = 0$, then the split monotone variational inclusion problem reduces to the following split variational inclusion problem (SVIP): Find $x^* \in H_1$ such that
$$0 \in F(x^*) \quad \text{and such that} \quad y^* = Ax^* \ \text{solves} \ 0 \in G(y^*). \qquad (6)$$
Byrne et al. [15] attempted to solve a special case of (6) and defined $\{x_n\}$ by $x_{n+1} = J_\lambda^F(x_n + r A^*(J_\lambda^G - I)A x_n)$. Moreover, they obtained weak and strong convergence results for problem (6) with the resolvent operator technique and the step size $r \in (0, \frac{2}{\|A^*A\|})$. In order to speed up the convergence, Alvarez and Attouch [16] considered the following iterative scheme: $x_{n+1} = J_\lambda^F(x_n + \theta_n(x_n - x_{n-1}))$, where $F$ is a maximal monotone operator, $\lambda > 0$ and $\theta_n \in [0, 1]$. Such an iterative scheme is called the inertial proximal method, and $\theta_n(x_n - x_{n-1})$ is referred to as the inertial extrapolation term, a device for speeding up the convergence properties under the condition that $\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty$ (see [17]). At the same time, the idea of the inertial technique plays an important role in the optimization community as a technology for building accelerated iterative methods (see [18,19]).
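A minimal sketch of the inertial proximal step for a toy maximal monotone operator (our illustrative choice $F = 3I$, whose resolvent is $J_\lambda^F x = x/(1 + 3\lambda)$):

```python
import numpy as np

lam = 0.5
J = lambda x: x / (1 + 3*lam)          # resolvent of F = 3I: (I + lam F)^{-1}
x_prev = x = np.array([10.0, -4.0])    # x_0 = x_1
for n in range(1, 50):
    theta = 0.3                        # inertial parameter theta_n in [0, 1]
    y = x + theta * (x - x_prev)       # inertial extrapolation theta_n (x_n - x_{n-1})
    x_prev, x = x, J(y)                # x_{n+1} = J_lam^F (x_n + theta_n (x_n - x_{n-1}))
print(np.linalg.norm(x))               # tends to 0, the unique zero of F
```

Here the differences $\|x_n - x_{n-1}\|$ decay geometrically, so the summability condition above holds for this toy choice of a constant $\theta_n$.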
On the other hand, the iterative methods mentioned above share a common feature: their step size requires knowledge of the prior information of the operator (matrix) norm $\|A\|$. It may be difficult to calculate $\|A\|$, and the fixed step size of these iterative methods has an impact on the implementation. To overcome this, the construction of self-adaptive step sizes has aroused interest among researchers. Recently, many iterative methods that do not require prior information of the operator (matrix) norm have been proposed (see [20,21,22]).
Motivated and inspired by the work mentioned above, we further investigate a self-adaptive inertial Krasnoselski–Mann iterative method for solving hierarchical fixed point and split monotone variational inclusion problems. This manuscript aims to suggest modifications of the results that appeared in [14] by applying the inertial scheme, which is effective for speeding up the iteration process, and adding a step size for which prior information of the operator (matrix) norm is not required. A strong convergence theorem for the proposed iterative method is established under some suitable conditions. Furthermore, we also present some numerical experiments to demonstrate the advantages of the stated iterative method over other existing methods [9,13,14] and apply our main results to solving the image restoration problem.

2. Materials and Methods

In this section, we recall some basic definitions and properties which will be frequently used in our later investigation. Some useful results proved already in the literature are also summarized.
Definition 1.
A mapping $f : H \to H$ is said to be:
(i) monotone, if
$$\langle fx - fy, x - y \rangle \geq 0, \quad \forall x, y \in H;$$
(ii) α-inverse strongly monotone, if there exists a constant $\alpha > 0$ such that
$$\langle fx - fy, x - y \rangle \geq \alpha \|fx - fy\|^2, \quad \forall x, y \in H;$$
(iii) β-Lipschitz continuous, if there exists a constant $\beta > 0$ such that
$$\|fx - fy\| \leq \beta \|x - y\|, \quad \forall x, y \in H.$$
Let $F : H \to 2^H$ be a multivalued operator on $H$. Then the graph $G(F)$ of $F$ is defined by
$$G(F) = \{(x, y) \in H \times H : y \in F(x)\},$$
and
(i)
the operator F is called a monotone operator, if
$$\langle u - v, x - y \rangle \geq 0, \quad \text{whenever } u \in F(x), \ v \in F(y);$$
(ii)
the operator F is called a maximal monotone operator, if F is monotone and the graph of F is not properly contained in the graph of any other monotone mapping.
Let $F : H \to 2^H$ be a set-valued maximal monotone mapping. Then the resolvent operator $J_\lambda^F : H \to H$ is defined by
$$J_\lambda^F(x) = (I + \lambda F)^{-1}(x), \quad x \in H,$$
where $I$ stands for the identity operator on $H$. It is well known that the resolvent operator $J_\lambda^F$ is single-valued, nonexpansive and firmly nonexpansive for $\lambda > 0$.
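As a quick sanity check of these properties (our toy instance): for $F = 3I$ on $\mathbb{R}^n$ the resolvent is the linear map $J_\lambda^F x = x/(1 + 3\lambda)$, and both nonexpansiveness and firm nonexpansiveness can be verified numerically.

```python
import numpy as np

lam = 0.7
J = lambda x: x / (1 + 3*lam)   # resolvent of the maximal monotone F = 3I
rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
# Nonexpansiveness: ||Jx - Jy|| <= ||x - y||
assert np.linalg.norm(J(x) - J(y)) <= np.linalg.norm(x - y)
# Firm nonexpansiveness: ||Jx - Jy||^2 <= <Jx - Jy, x - y>
assert np.linalg.norm(J(x) - J(y))**2 <= np.dot(J(x) - J(y), x - y)
```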
Lemma 1.
Let $f$ be a κ-inverse strongly monotone mapping and $F$ be a maximal monotone mapping. Then $J_\lambda^F(I - \lambda f)$ is averaged for all $\lambda \in (0, 2\kappa)$.
Lemma 2
([9]). Let $f$ be a mapping and $F$ be a maximal monotone mapping. Then $0 \in f(x) + F(x)$ if and only if $x = J_\lambda^F(I - \lambda f)x$, i.e., $x \in Fix(J_\lambda^F(I - \lambda f))$, for $\lambda > 0$.
Lemma 3.
For any $x, y \in H$, the following results hold:
(i) $\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y \rangle$;
(ii) $\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2, \quad t \in [0, 1]$.
Lemma 4
([23]). Let $D : H \to H$ be η-strongly monotone and $L$-Lipschitz continuous. Then $I - \mu_n D$ is a $(1 - \mu_n \rho)$-contraction, i.e.,
$$\|(I - \mu_n D)x - (I - \mu_n D)y\| \leq (1 - \mu_n \rho)\|x - y\|, \quad \forall x, y \in H,$$
where $\{\mu_n\}$ is a sequence such that $\mu_n \in (0, \mu]$ and $\rho = \frac{2\eta - \mu L^2}{2}$ with $\mu < \frac{2\eta}{L^2}$.
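A small numerical check of this contraction estimate (our toy choice $D = 10I$, for which $\eta = L = 10$, together with the $\rho$ stated above):

```python
import numpy as np

eta = L = 10.0
D = lambda x: 10.0 * x            # eta-strongly monotone and L-Lipschitz with eta = L = 10
mu = 0.05                         # satisfies mu < 2*eta/L**2 = 0.2
rho = (2*eta - mu*L**2) / 2       # = 7.5, so the contraction factor is 1 - mu*rho = 0.625
x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
lhs = np.linalg.norm((x - mu*D(x)) - (y - mu*D(y)))   # equals 0.5*||x - y|| here
assert lhs <= (1 - mu*rho) * np.linalg.norm(x - y)    # 0.5 <= 0.625, as claimed
```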
Lemma 5
([24]). Let $\{a_n\}$ and $\{c_n\}$ be two sequences of non-negative real numbers such that
$$a_{n+1} \leq (1 - \tau_n)a_n + b_n + c_n, \quad n \geq 0,$$
where $\{\tau_n\}$ is a sequence in $(0, 1)$ and $\{b_n\}$ is a sequence in $\mathbb{R}$. Assume that $\sum_{n=0}^{\infty} c_n < \infty$. If $b_n \leq \tau_n M$ for some $M \geq 0$, then $\{a_n\}$ is a bounded sequence.
Lemma 6
([25]). Let $\{a_n\}$ be a sequence of non-negative real numbers, $\{\tau_n\}$ be a sequence of real numbers in $(0, 1)$ with $\sum_{n=1}^{\infty} \tau_n = \infty$ and $\{b_n\}$ be a sequence of real numbers such that
$$a_{n+1} \leq (1 - \tau_n)a_n + \tau_n b_n, \quad n \geq 1.$$
If $\limsup_{k \to \infty} b_{n_k} \leq 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k \to \infty}(a_{n_k+1} - a_{n_k}) \geq 0$, then $\lim_{n \to \infty} a_n = 0$.

3. Results

In this section, we propose a self-adaptive method with an inertial extrapolation term for solving hierarchical fixed point and split monotone variational inclusion problems. To begin with, the following control conditions (Condition 1) need to be satisfied:
(A1) $\{\theta_n\} \subset (0, \theta)$ for some $\theta > 0$ such that $\theta_n = o(\tau_n)$, i.e., $\lim_{n \to \infty} \frac{\theta_n}{\tau_n} = 0$;
(A2) $\{\tau_n\} \subset (0, 1)$ such that $\lim_{n \to \infty} \tau_n = 0$ and $\sum_{n=0}^{\infty} \tau_n = \infty$;
(A3) $\{\alpha_n\}, \{\beta_n\} \subset (a, b) \subset (0, 1)$ such that $\liminf_{n \to \infty} \alpha_n > 0$ and $\sum_{n=0}^{\infty} \beta_n < \infty$;
(A4) $\{\mu_n\}$ is a positive sequence such that $\lim_{n \to \infty} \mu_n = 0$ and $\sum_{n=0}^{\infty} \mu_n = \infty$;
(A5) $\{\delta_n\} \subset (a, b) \subset (0, 1 - \tau_n)$, $\{\alpha_n \beta_n\} \subset (a, b) \subset (0, \tau_n)$ and $\sum_{n=0}^{\infty} \alpha_n \beta_n \mu_n < \infty$.
Remark 1.
From the definition of
$$\bar{\theta}_n = \begin{cases} \min\left\{\theta, \frac{\epsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1}, \\ \theta, & \text{otherwise}, \end{cases}$$
and Algorithm 1, we have
$$\theta_n \|x_n - x_{n-1}\| \leq \bar{\theta}_n \|x_n - x_{n-1}\| \leq \epsilon_n.$$
Therefore $\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\| < \infty$.
Algorithm 1: Inertial Krasnoselski–Mann Iterative Method
Initialization: Let $x_0, x_1 \in H_1$, $\theta > 0$, $\{\sigma_n\} \subset (0, 1)$, $\{\epsilon_n\} \subset [0, \infty)$ and $\sum_{n=0}^{\infty} \epsilon_n < \infty$.
Iterative steps: Calculate $x_{n+1}$ as follows:
  Step 1. Given the iterates $x_{n-1}$ and $x_n$ for each $n \geq 1$, choose $0 \leq \theta_n \leq \bar{\theta}_n$, where
  $$\bar{\theta}_n = \begin{cases} \min\left\{\theta, \frac{\epsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1}; \\ \theta, & \text{otherwise}. \end{cases}$$
  Step 2. Compute
  $$w_n = x_n + \theta_n(x_n - x_{n-1}); \quad u_n = (1 - \alpha_n)w_n + \alpha_n T\big((1 - \beta_n)S w_n + \beta_n(I - \mu_n D)w_n\big).$$
  Step 3. Compute $U = J_\lambda^F(I - \lambda f)$, $V = J_\lambda^G(I - \lambda g)$ and
  $$y_n = U(u_n + r_n A^*(V - I)A u_n), \quad \text{where} \quad r_n = \begin{cases} \dfrac{\sigma_n \|(V - I)A u_n\|^2}{\|A^*(V - I)A u_n\|^2}, & \text{if } \|A^*(V - I)A u_n\| \neq 0; \\ 0, & \text{otherwise}. \end{cases}$$
  Step 4. Compute
  $$x_{n+1} = (1 - \delta_n - \tau_n)x_n + \delta_n y_n.$$
  Set $n := n + 1$ and return to Step 1.
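To fix ideas, here is a minimal Python/NumPy sketch of Algorithm 1 on the toy diagonal operators of Example 1 below (our illustrative translation; the paper's experiments are in MATLAB, and the parameter choices are the ones quoted in Section 5):

```python
import numpy as np

lam, theta, sigma = 0.1, 0.6, 0.5
U = lambda x: (1 - 2*lam) / (1 + 3*lam) * x    # J_lam^F (I - lam f) for F = 3I, f = 2I
V = lambda y: (1 - 4*lam) / (1 + 5*lam) * y    # J_lam^G (I - lam g) for G = 5I, g = 4I
A = At = lambda x: 2.25 * x                    # A = A* = (9/4) I, self-adjoint
T = lambda x: x                                # nonexpansive mapping
S = lambda x: x * np.cos(x)                    # continuous quasi-nonexpansive, Fix(S) = {0}
D = lambda x: 10.0 * x                         # strongly monotone and Lipschitz

x_prev = x = np.array([1.0, -2.0, 0.5])        # x_0 = x_1
for n in range(2, 200):
    alpha, beta, mu = 0.5, 1/n**2, 1/(2*n)
    tau = 1/(n + 2); delta = 0.98 - tau
    eps = 1/(n + 1)**3
    d = np.linalg.norm(x - x_prev)
    theta_n = min(theta, eps/d) if d > 0 else theta          # Step 1
    w = x + theta_n * (x - x_prev)                           # Step 2
    u = (1 - alpha)*w + alpha*T((1 - beta)*S(w) + beta*(w - mu*D(w)))
    t = V(A(u)) - A(u)                                       # (V - I) A u_n
    g = At(t)                                                # A*(V - I) A u_n
    gg = np.dot(g, g)
    r = sigma * np.dot(t, t) / gg if gg > 0 else 0.0         # Step 3: self-adaptive step size
    y = U(u + r * g)
    x_prev, x = x, (1 - delta - tau)*x + delta*y             # Step 4
print(np.linalg.norm(x))   # tends to 0, the unique point of Psi in this toy example
```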
Lemma 7.
Assume that $H_1$ and $H_2$ are real Hilbert spaces and $A : H_1 \to H_2$ is a bounded linear operator with adjoint operator $A^*$. Let $F : H_1 \to 2^{H_1}$ and $G : H_2 \to 2^{H_2}$ be set-valued maximal monotone operators. Let $f : H_1 \to H_1$ and $g : H_2 \to H_2$ be $\kappa_1$-, $\kappa_2$-inverse strongly monotone mappings with $\kappa = \min\{\kappa_1, \kappa_2\}$. Then, for any $x, y \in H$, $\lambda \in (0, 2\kappa)$ and $L = \|A\|^2$, the following statements hold:
(i) $U = J_\lambda^F(I - \lambda f)$ and $V = J_\lambda^G(I - \lambda g)$ are nonexpansive;
(ii) $\|Wx - Wy\|^2 \leq \|x - y\|^2 - r_n(1 - r_n L)\|(V - I)Ax - (V - I)Ay\|^2$, where $W = I + r_n A^*(V - I)A$.
Proof.
(i) Since $f$ and $g$ are $\kappa_1$-, $\kappa_2$-inverse strongly monotone mappings, respectively, Lemma 1 gives that $U = J_\lambda^F(I - \lambda f)$ is averaged. So $U$ is nonexpansive for $\lambda \in (0, 2\kappa)$. Similarly, we can prove that $V = J_\lambda^G(I - \lambda g)$ is nonexpansive.
(ii) Since $V = J_\lambda^G(I - \lambda g)$ is nonexpansive, we have
$$\begin{aligned} \|(I - V)Ax - (I - V)Ay\|^2 &= \|(Ax - Ay) - (V(Ax) - V(Ay))\|^2 \\ &= \|Ax - Ay\|^2 - 2\langle Ax - Ay, V(Ax) - V(Ay) \rangle + \|V(Ax) - V(Ay)\|^2 \\ &\leq 2\|Ax - Ay\|^2 - 2\langle Ax - Ay, V(Ax) - V(Ay) \rangle \\ &= 2\langle Ax - Ay, (I - V)Ax - (I - V)Ay \rangle. \end{aligned}$$
It follows that
$$\begin{aligned} \|Wx - Wy\|^2 &= \|(I + r_n A^*(V - I)A)x - (I + r_n A^*(V - I)A)y\|^2 \\ &= \|x - y\|^2 - 2r_n\langle x - y, A^*(I - V)Ax - A^*(I - V)Ay \rangle + r_n^2\|A^*(I - V)Ax - A^*(I - V)Ay\|^2 \\ &\leq \|x - y\|^2 - r_n\|(I - V)Ax - (I - V)Ay\|^2 + r_n^2\|A\|^2\|(I - V)Ax - (I - V)Ay\|^2 \\ &= \|x - y\|^2 - r_n(1 - r_n L)\|(V - I)Ax - (V - I)Ay\|^2. \end{aligned}$$
□
Lemma 8.
Let $\{r_n\}$ be the sequence formed by Algorithm 1. Then $\{r_n\}$ is bounded.
Proof.
If $\|A^*(J_\lambda^G(I - \lambda g) - I)A u_n\| \neq 0$, then by definition
$$r_n = \frac{\sigma_n \|(J_\lambda^G(I - \lambda g) - I)A u_n\|^2}{\|A^*(J_\lambda^G(I - \lambda g) - I)A u_n\|^2} > 0.$$
Since $A$ is bounded and linear, we get
$$\frac{\sigma_n \|(J_\lambda^G(I - \lambda g) - I)A u_n\|^2}{\|A^*(J_\lambda^G(I - \lambda g) - I)A u_n\|^2} \geq \frac{\sigma_n \|(J_\lambda^G(I - \lambda g) - I)A u_n\|^2}{\|A\|^2 \|(J_\lambda^G(I - \lambda g) - I)A u_n\|^2} = \frac{\sigma_n}{\|A\|^2}.$$
Hence $\inf_n r_n > 0$ and $\{r_n\}$ is bounded. □
Lemma 9.
Assume that $\{y_n\}$ and $\{u_n\}$ are formed by Algorithm 1. If $\{u_{n_k}\}$ converges weakly to $z$ and $\lim_{k \to \infty} \|u_{n_k} - y_{n_k}\| = \lim_{k \to \infty} \|(V - I)A u_{n_k}\| = 0$, then $z \in \Omega = \{x^* \in H_1 : x^* \in \mathrm{Sol(MVIP}(3)) \ \text{and} \ Ax^* \in \mathrm{Sol(MVIP}(4))\}$.
Proof.
Since $U$ is nonexpansive, we get
$$\|y_{n_k} - U u_{n_k}\| = \|U(u_{n_k} + r_{n_k} A^*(V - I)A u_{n_k}) - U u_{n_k}\| \leq \|r_{n_k} A^*(V - I)A u_{n_k}\| = r_{n_k}\|A^*(V - I)A u_{n_k}\|,$$
which together with $\lim_{k \to \infty} \|(V - I)A u_{n_k}\| = 0$ gives that
$$\lim_{k \to \infty} \|y_{n_k} - U u_{n_k}\| = 0.$$
Since
$$\|u_{n_k} - U u_{n_k}\| \leq \|u_{n_k} - y_{n_k}\| + \|y_{n_k} - U u_{n_k}\|,$$
we get
$$\lim_{k \to \infty} \|u_{n_k} - U u_{n_k}\| = 0.$$
This, combined with Lemma 2 and $u_{n_k} \rightharpoonup z$, yields $z \in Fix(J_\lambda^F(I - \lambda f))$. In view of the fact that $A$ is a linear operator, we get $A u_{n_k} \rightharpoonup Az$. From $\lim_{k \to \infty} \|(V - I)A u_{n_k}\| = 0$, we obtain $Az \in Fix(J_\lambda^G(I - \lambda g))$. Therefore $z \in \Omega$. □
Theorem 1.
Let $H_1$ and $H_2$ be two real Hilbert spaces, $C$ be a nonempty closed convex subset of $H_1$ and $A : H_1 \to H_2$ be a bounded linear operator with adjoint operator $A^*$. Assume that $F : H_1 \to 2^{H_1}$ and $G : H_2 \to 2^{H_2}$ are set-valued maximal monotone operators. Let $f : H_1 \to H_1$ and $g : H_2 \to H_2$ be $\kappa_1$-, $\kappa_2$-inverse strongly monotone mappings with $\kappa = \min\{\kappa_1, \kappa_2\}$. Let $D : C \to C$ be η-strongly monotone and $L$-Lipschitzian, $T : C \to C$ be a nonexpansive mapping, and $S : C \to C$ be a continuous quasi-nonexpansive mapping such that $I - S$ is monotone and $\Psi = \Phi \cap \Omega \cap Fix(S) \neq \emptyset$. Let $\{x_n\}$ be the sequence defined by Algorithm 1 and suppose that Condition 1 holds. Then the sequence $\{x_n\}$ converges strongly to $x^* \in \Psi$, where $\|x^*\| = \min\{\|z\| : z \in \Psi\}$.
Proof.
The proof is divided into several steps.
Step 1. We will show that $\{x_n\}$ is bounded. Let $x^* \in \Psi$. From the definition of $w_n$, we get
$$\|w_n - x^*\| = \|x_n + \theta_n(x_n - x_{n-1}) - x^*\| \leq \|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\| = \|x_n - x^*\| + \tau_n \cdot \frac{\theta_n}{\tau_n}\|x_n - x_{n-1}\|. \qquad (8)$$
According to Condition 1, one has $\frac{\theta_n}{\tau_n}\|x_n - x_{n-1}\| \to 0$. Therefore, there exists a constant $M_1 > 0$ such that
$$\frac{\theta_n}{\tau_n}\|x_n - x_{n-1}\| \leq M_1, \quad \forall n \geq 1. \qquad (9)$$
Combining (8) and (9), we obtain
$$\|w_n - x^*\| \leq \|x_n - x^*\| + \tau_n M_1, \quad \forall n \geq 1. \qquad (10)$$
Set $v_n = (1 - \beta_n)Sw_n + \beta_n B_n w_n$ with $B_n = I - \mu_n D$. We get
$$\|u_n - x^*\| = \|(1 - \alpha_n)(w_n - x^*) + \alpha_n(Tv_n - x^*)\| \leq (1 - \alpha_n)\|w_n - x^*\| + \alpha_n\|Tv_n - x^*\| \leq (1 - \alpha_n)\|w_n - x^*\| + \alpha_n\|v_n - x^*\|. \qquad (11)$$
From Lemma 4, we have
$$\begin{aligned} \|v_n - x^*\| &\leq (1 - \beta_n)\|Sw_n - x^*\| + \beta_n\|B_n w_n - x^*\| \leq (1 - \beta_n)\|w_n - x^*\| + \beta_n\big(\|B_n w_n - B_n x^*\| + \|B_n x^* - x^*\|\big) \\ &\leq (1 - \beta_n)\|w_n - x^*\| + \beta_n\big((1 - \mu_n\rho)\|w_n - x^*\| + \mu_n\|Dx^*\|\big) \leq \|w_n - x^*\| + \beta_n\mu_n\|Dx^*\|. \end{aligned} \qquad (12)$$
From (11) and (12), we get
$$\|u_n - x^*\| \leq (1 - \alpha_n)\|w_n - x^*\| + \alpha_n\big(\|w_n - x^*\| + \beta_n\mu_n\|Dx^*\|\big) \leq \|w_n - x^*\| + \alpha_n\beta_n\mu_n\|Dx^*\|. \qquad (13)$$
Indeed, it follows from Lemma 7, $Ax^* = V(Ax^*)$ and the definition of $r_n$ in Algorithm 1 that
$$\|y_n - x^*\|^2 \leq \|u_n - x^*\|^2 - r_n(1 - r_n\|A\|^2)\|(V - I)Au_n\|^2 \leq \|u_n - x^*\|^2 - r_n(1 - \sigma_n)\|(V - I)Au_n\|^2. \qquad (14)$$
Since $\{\sigma_n\} \subset (0, 1)$, we obtain
$$\|y_n - x^*\| \leq \|u_n - x^*\|. \qquad (15)$$
Combining (10), (13) and (15), we get
$$\|y_n - x^*\| \leq \|w_n - x^*\| + \alpha_n\beta_n\mu_n\|Dx^*\| \leq \|x_n - x^*\| + \tau_n M_1 + \alpha_n\beta_n\mu_n\|Dx^*\|.$$
Setting $M_2 = \tau_n M_1 + \alpha_n\beta_n\mu_n\|Dx^*\| \geq 0$, it follows that
$$\|y_n - x^*\| \leq \|x_n - x^*\| + M_2. \qquad (16)$$
By the definition of $x_{n+1}$, we also have
$$\|x_{n+1} - x^*\| = \|(1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*) - \tau_n x^*\| \leq \|(1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*)\| + \tau_n\|x^*\|. \qquad (17)$$
Further, we have
$$\begin{aligned} \|(1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*)\|^2 &= (1 - \delta_n - \tau_n)^2\|x_n - x^*\|^2 + \delta_n^2\|y_n - x^*\|^2 + 2(1 - \delta_n - \tau_n)\delta_n\langle x_n - x^*, y_n - x^*\rangle \\ &\leq (1 - \delta_n - \tau_n)^2\|x_n - x^*\|^2 + \delta_n^2\|y_n - x^*\|^2 + (1 - \delta_n - \tau_n)\delta_n\|x_n - x^*\|^2 + (1 - \delta_n - \tau_n)\delta_n\|y_n - x^*\|^2 \\ &= (1 - \delta_n - \tau_n)(1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\|y_n - x^*\|^2. \end{aligned} \qquad (18)$$
By (16) and Condition 1, we get
$$\begin{aligned} (1 - \delta_n - \tau_n)(1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\|y_n - x^*\|^2 &\leq (1 - \delta_n - \tau_n)(1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\big(\|x_n - x^*\| + M_2\big)^2 \\ &= (1 - \tau_n)^2\|x_n - x^*\|^2 + 2(1 - \tau_n)\delta_n M_2\|x_n - x^*\| + (1 - \tau_n)\delta_n M_2^2 \\ &\leq \big((1 - \tau_n)\|x_n - x^*\| + M_2\big)^2. \end{aligned} \qquad (19)$$
Substituting (19) into (17), we obtain
$$\|x_{n+1} - x^*\| \leq (1 - \tau_n)\|x_n - x^*\| + M_2 + \tau_n\|x^*\| = (1 - \tau_n)\|x_n - x^*\| + \tau_n(M_1 + \|x^*\|) + \alpha_n\beta_n\mu_n\|Dx^*\|.$$
From Lemma 5, this implies that $\{x_n\}$ is bounded. Together with (10), (13) and (16), we get that $\{w_n\}$, $\{u_n\}$ and $\{y_n\}$ are also bounded.
Step 2. We will show that
$$\delta_n(1 - \tau_n)\big(\alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 + r_n(1 - \sigma_n)\|(V - I)Au_n\|^2\big) \leq \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \tau_n M_5.$$
By the definition of $w_n$, we get
$$\begin{aligned} \|w_n - x^*\|^2 &= \|(x_n - x^*) + \theta_n(x_n - x_{n-1})\|^2 = \|x_n - x^*\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\langle x_n - x^*, x_n - x_{n-1}\rangle \\ &\leq \|x_n - x^*\|^2 + \theta_n\|x_n - x_{n-1}\|\big(\theta_n\|x_n - x_{n-1}\| + 2\|x_n - x^*\|\big) \leq \|x_n - x^*\|^2 + \tau_n M_3, \end{aligned} \qquad (20)$$
for some $M_3 > 0$. From Lemma 3, we have
$$\|u_n - x^*\|^2 = \|(1 - \alpha_n)(w_n - x^*) + \alpha_n(Tv_n - x^*)\|^2 \leq (1 - \alpha_n)\|w_n - x^*\|^2 + \alpha_n\|v_n - x^*\|^2 - \alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2. \qquad (21)$$
From Lemma 3, Lemma 4 and (10), we obtain
$$\begin{aligned} \|v_n - x^*\|^2 &= \|(1 - \beta_n)(Sw_n - x^*) + \beta_n\big((I - \mu_n D)(w_n - x^*) - \mu_n Dx^*\big)\|^2 \\ &\leq (1 - \beta_n)\|Sw_n - x^*\|^2 + \beta_n\|(I - \mu_n D)(w_n - x^*) - \mu_n Dx^*\|^2 - (1 - \beta_n)\beta_n\|Sw_n - B_n w_n\|^2 \\ &\leq (1 - \beta_n)\|Sw_n - x^*\|^2 + \beta_n\big(\|(I - \mu_n D)(w_n - x^*)\|^2 - 2\mu_n\langle Dx^*, (I - \mu_n D)(w_n - x^*) - \mu_n Dx^*\rangle\big) \\ &\leq (1 - \beta_n)\|w_n - x^*\|^2 + \beta_n\big((1 - \mu_n\rho)\|w_n - x^*\|^2 - 2\mu_n\langle Dx^*, (I - \mu_n D)(w_n - x^*) - \mu_n Dx^*\rangle\big) \\ &\leq \|w_n - x^*\|^2 + 2\beta_n\mu_n\|Dx^*\|\,\|x^* - w_n + \mu_n Dw_n\| \\ &\leq \|w_n - x^*\|^2 + 2\beta_n\mu_n\|Dx^*\|\big(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|\big). \end{aligned} \qquad (22)$$
Substituting (20) and (22) into (21), we get
$$\|u_n - x^*\|^2 \leq \|w_n - x^*\|^2 + 2\alpha_n\beta_n\mu_n\|Dx^*\|\big(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|\big) - \alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2. \qquad (23)$$
From (14) and (23), it follows that
$$\|y_n - x^*\|^2 \leq \|w_n - x^*\|^2 + 2\alpha_n\beta_n\mu_n\|Dx^*\|\big(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|\big) - \alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 - r_n(1 - \sigma_n)\|(V - I)Au_n\|^2. \qquad (24)$$
By the definition of $x_{n+1}$, we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|(1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*) - \tau_n x^*\|^2 \\ &= \|(1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*)\|^2 + \tau_n^2\|x^*\|^2 - 2\tau_n\langle (1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*), x^*\rangle \\ &\leq \|(1 - \delta_n - \tau_n)(x_n - x^*) + \delta_n(y_n - x^*)\|^2 + \tau_n M_4, \end{aligned} \qquad (25)$$
for some $M_4 > 0$. Substituting (18) and (24) into (25) and using Condition 1, we obtain
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\leq (1 - \delta_n - \tau_n)(1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\|y_n - x^*\|^2 + \tau_n M_4 \\ &\leq (1 - \delta_n - \tau_n)(1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\big(\|x_n - x^*\|^2 + \tau_n M_3 + 2\alpha_n\beta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) \\ &\quad - \alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 - r_n(1 - \sigma_n)\|(V - I)Au_n\|^2\big) + \tau_n M_4 \\ &\leq (1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\big(\tau_n M_3 + 2\tau_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) \\ &\quad - \alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 - r_n(1 - \sigma_n)\|(V - I)Au_n\|^2\big) + \tau_n M_4 \\ &\leq \|x_n - x^*\|^2 + \tau_n M_5 - \delta_n(1 - \tau_n)\big(\alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 + r_n(1 - \sigma_n)\|(V - I)Au_n\|^2\big), \end{aligned}$$
for some $M_5 > 0$. Thus
$$\delta_n(1 - \tau_n)\big(\alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 + r_n(1 - \sigma_n)\|(V - I)Au_n\|^2\big) \leq \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + \tau_n M_5.$$
Step 3. We will show that
$$\|x_{n+1} - x^*\|^2 \leq (1 - \tau_n)\|x_n - x^*\|^2 + \tau_n\big(\delta_n M_3 + 2\delta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) + 2\delta_n\|y_n - x_n\|\|x^* - x_{n+1}\| + 2\langle x^*, x^* - x_{n+1}\rangle\big). \qquad (30)$$
From the definition of $x_{n+1}$, we get
$$x_{n+1} = (1 - \delta_n - \tau_n)x_n + \delta_n y_n = (1 - \delta_n)x_n + \delta_n y_n - \tau_n x_n. \qquad (26)$$
Let $t_n = (1 - \delta_n)x_n + \delta_n y_n$. Then we have $x_n - t_n = \delta_n(x_n - y_n)$ and
$$\|t_n - x^*\|^2 = \|(1 - \delta_n)(x_n - x^*) + \delta_n(y_n - x^*)\|^2 = (1 - \delta_n)\|x_n - x^*\|^2 + \delta_n\|y_n - x^*\|^2 - \delta_n(1 - \delta_n)\|x_n - y_n\|^2 \leq (1 - \delta_n)\|x_n - x^*\|^2 + \delta_n\|y_n - x^*\|^2. \qquad (27)$$
From the definition of $t_n$, we obtain
$$x_{n+1} = t_n - \tau_n x_n = (1 - \tau_n)t_n - \tau_n(x_n - t_n) = (1 - \tau_n)t_n + \tau_n\delta_n(y_n - x_n). \qquad (28)$$
This implies that
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &= \|(1 - \tau_n)(t_n - x^*) + \tau_n(\delta_n(y_n - x_n) - x^*)\|^2 \\ &\leq (1 - \tau_n)\|t_n - x^*\|^2 + 2\tau_n\delta_n\langle y_n - x_n, x_{n+1} - x^*\rangle + 2\tau_n\langle x^*, x^* - x_{n+1}\rangle \\ &\leq (1 - \tau_n)\|t_n - x^*\|^2 + 2\tau_n\delta_n\|y_n - x_n\|\|x^* - x_{n+1}\| + 2\tau_n\langle x^*, x^* - x_{n+1}\rangle. \end{aligned} \qquad (29)$$
From (20), (24), (27) and (29), we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\leq (1 - \tau_n)\big((1 - \delta_n)\|x_n - x^*\|^2 + \delta_n\|y_n - x^*\|^2\big) + 2\tau_n\delta_n\|y_n - x_n\|\|x^* - x_{n+1}\| + 2\tau_n\langle x^*, x^* - x_{n+1}\rangle \\ &\leq (1 - \tau_n)\Big((1 - \delta_n)\|x_n - x^*\|^2 + \delta_n\big(\|x_n - x^*\|^2 + \tau_n M_3 + 2\alpha_n\beta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) \\ &\quad - \alpha_n(1 - \alpha_n)\|Tv_n - w_n\|^2 - r_n(1 - \sigma_n)\|(V - I)Au_n\|^2\big)\Big) + 2\tau_n\delta_n\|y_n - x_n\|\|x^* - x_{n+1}\| + 2\tau_n\langle x^*, x^* - x_{n+1}\rangle \\ &\leq (1 - \tau_n)\|x_n - x^*\|^2 + \tau_n\big(\delta_n M_3 + 2\delta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) + 2\delta_n\|y_n - x_n\|\|x^* - x_{n+1}\| + 2\langle x^*, x^* - x_{n+1}\rangle\big). \end{aligned}$$
Step 4. We will show that the weak cluster points of $\{x_n\}$ belong to $\Psi$. Assume that $\{\|x_{n_k} - x^*\|\}$ is a subsequence of $\{\|x_n - x^*\|\}$ such that
$$\liminf_{k \to \infty}\big(\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|\big) \geq 0.$$
Then,
$$\liminf_{k \to \infty}\big(\|x_{n_k+1} - x^*\|^2 - \|x_{n_k} - x^*\|^2\big) = \liminf_{k \to \infty}\big((\|x_{n_k+1} - x^*\| - \|x_{n_k} - x^*\|)(\|x_{n_k+1} - x^*\| + \|x_{n_k} - x^*\|)\big) \geq 0.$$
From Step 2, we see that
$$\begin{aligned} \limsup_{k \to \infty}\Big(\delta_{n_k}(1 - \tau_{n_k})\big(\alpha_{n_k}(1 - \alpha_{n_k})\|Tv_{n_k} - w_{n_k}\|^2 + r_{n_k}(1 - \sigma_{n_k})\|(V - I)Au_{n_k}\|^2\big)\Big) &\leq \limsup_{k \to \infty}\big(\|x_{n_k} - x^*\|^2 - \|x_{n_k+1} - x^*\|^2\big) + \limsup_{k \to \infty}\tau_{n_k}M_5 \\ &\leq 0, \end{aligned}$$
which indicates that
$$\lim_{k \to \infty}\|Tv_{n_k} - w_{n_k}\| = 0 \quad \text{and} \quad \lim_{k \to \infty}\|(V - I)Au_{n_k}\| = 0. \qquad (31)$$
From the definition of $u_n$, we get
$$\|u_{n_k} - w_{n_k}\| = \alpha_{n_k}\|Tv_{n_k} - w_{n_k}\|. \qquad (32)$$
Thus
$$\lim_{k \to \infty}\|u_{n_k} - w_{n_k}\| = 0.$$
From $w_{n_k} - x_{n_k} = \theta_{n_k}(x_{n_k} - x_{n_k-1})$, we have
$$\|u_{n_k} - x_{n_k}\| \leq \|u_{n_k} - w_{n_k}\| + \|w_{n_k} - x_{n_k}\| = \|u_{n_k} - w_{n_k}\| + \tau_{n_k} \cdot \frac{\theta_{n_k}}{\tau_{n_k}}\|x_{n_k} - x_{n_k-1}\|.$$
So
$$\lim_{k \to \infty}\|u_{n_k} - x_{n_k}\| = 0. \qquad (33)$$
Since $T$ is nonexpansive and $S$ is a continuous quasi-nonexpansive mapping, we have
$$\begin{aligned} \|Tv_{n_k} - x^*\|^2 &\leq \|v_{n_k} - x^*\|^2 = \|(1 - \beta_{n_k})(Sw_{n_k} - x^*) + \beta_{n_k}(w_{n_k} - x^*) - \beta_{n_k}\mu_{n_k}Dw_{n_k}\|^2 \\ &\leq \|(1 - \beta_{n_k})(Sw_{n_k} - x^*) + \beta_{n_k}(w_{n_k} - x^*)\|^2 - 2\beta_{n_k}\mu_{n_k}\langle Dw_{n_k}, v_{n_k} - x^*\rangle \\ &\leq (1 - \beta_{n_k})\|Sw_{n_k} - x^*\|^2 + \beta_{n_k}\|w_{n_k} - x^*\|^2 - \beta_{n_k}(1 - \beta_{n_k})\|Sw_{n_k} - w_{n_k}\|^2 - 2\beta_{n_k}\mu_{n_k}\langle Dw_{n_k}, v_{n_k} - x^*\rangle \\ &\leq \|w_{n_k} - x^*\|^2 - \beta_{n_k}(1 - \beta_{n_k})\|Sw_{n_k} - w_{n_k}\|^2 - 2\beta_{n_k}\mu_{n_k}\langle Dw_{n_k}, v_{n_k} - x^*\rangle. \end{aligned} \qquad (34)$$
So
$$\beta_{n_k}(1 - \beta_{n_k})\|Sw_{n_k} - w_{n_k}\|^2 \leq \|w_{n_k} - x^*\|^2 - \|Tv_{n_k} - x^*\|^2 - 2\beta_{n_k}\mu_{n_k}\langle Dw_{n_k}, v_{n_k} - x^*\rangle \leq \|w_{n_k} - Tv_{n_k}\|\big(\|w_{n_k} - x^*\| + \|Tv_{n_k} - x^*\|\big) - 2\beta_{n_k}\mu_{n_k}\langle Dw_{n_k}, v_{n_k} - x^*\rangle.$$
Thus
$$\lim_{k \to \infty}\|Sw_{n_k} - w_{n_k}\| = 0. \qquad (35)$$
Note that
$$\|w_{n_k} - v_{n_k}\| = \|(1 - \beta_{n_k})(w_{n_k} - Sw_{n_k}) + \beta_{n_k}\mu_{n_k}Dw_{n_k}\| \leq (1 - \beta_{n_k})\|Sw_{n_k} - w_{n_k}\| + \beta_{n_k}\mu_{n_k}\|Dw_{n_k}\|. \qquad (36)$$
From (35) and Condition 1, we get
$$\lim_{k \to \infty}\|w_{n_k} - v_{n_k}\| = 0. \qquad (37)$$
By the definition of $u_n$, we have
$$\|v_{n_k} - Tv_{n_k}\| \leq \|v_{n_k} - w_{n_k}\| + \|w_{n_k} - u_{n_k}\| + \|u_{n_k} - Tv_{n_k}\| = \|v_{n_k} - w_{n_k}\| + \alpha_{n_k}\|w_{n_k} - Tv_{n_k}\| + (1 - \alpha_{n_k})\|w_{n_k} - Tv_{n_k}\| = \|v_{n_k} - w_{n_k}\| + \|w_{n_k} - Tv_{n_k}\|.$$
From (31) and (37), we obtain
$$\lim_{k \to \infty}\|v_{n_k} - Tv_{n_k}\| = 0. \qquad (38)$$
Since $w_n = x_n + \theta_n(x_n - x_{n-1})$, we have $w_n - x_n = \theta_n(x_n - x_{n-1})$. It follows that
$$\|w_n - x_n\| = \tau_n \cdot \frac{\theta_n}{\tau_n}\|x_n - x_{n-1}\|. \qquad (39)$$
So
$$\lim_{k \to \infty}\|w_{n_k} - x_{n_k}\| = 0.$$
Set $q_n = u_n + r_n A^*(V - I)Au_n$. Now we estimate
$$\|q_{n_k} - x_{n_k}\| = \|u_{n_k} + r_{n_k}A^*(V - I)Au_{n_k} - x_{n_k}\| \leq \|u_{n_k} - x_{n_k}\| + r_{n_k}\|A^*(V - I)Au_{n_k}\|.$$
From (31) and (33), we get
$$\lim_{k \to \infty}\|q_{n_k} - x_{n_k}\| = 0. \qquad (40)$$
Since
$$\|u_{n_k} - q_{n_k}\| \leq \|u_{n_k} - x_{n_k}\| + \|x_{n_k} - q_{n_k}\|,$$
by (33) and (40), we have
$$\lim_{k \to \infty}\|u_{n_k} - q_{n_k}\| = 0. \qquad (41)$$
By the definition of $y_n$ and Lemma 7, we get
$$\begin{aligned} \|y_{n_k} - x^*\|^2 &= \|J_\lambda^F(I - \lambda f)q_{n_k} - J_\lambda^F(I - \lambda f)x^*\|^2 \leq \|(q_{n_k} - x^*) - \lambda(fq_{n_k} - fx^*)\|^2 \\ &= \|q_{n_k} - x^*\|^2 + \lambda^2\|fq_{n_k} - fx^*\|^2 - 2\lambda\langle q_{n_k} - x^*, fq_{n_k} - fx^*\rangle \\ &\leq \|q_{n_k} - x^*\|^2 + \lambda^2\|fq_{n_k} - fx^*\|^2 - 2\kappa_1\lambda\|fq_{n_k} - fx^*\|^2 = \|q_{n_k} - x^*\|^2 - \lambda(2\kappa_1 - \lambda)\|fq_{n_k} - fx^*\|^2. \end{aligned} \qquad (42)$$
By the definition of $x_{n+1}$ and (21), (22), (25), (42), we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\leq (1 - \delta_n - \tau_n)(1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\|y_n - x^*\|^2 + \tau_n M_4 \\ &\leq (1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\big(\tau_n M_3 + 2\alpha_n\beta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) \\ &\quad - r_n(1 - \sigma_n)\|(V - I)Au_n\|^2 - \lambda(2\kappa_1 - \lambda)\|fq_n - fx^*\|^2\big) + \tau_n M_4. \end{aligned} \qquad (43)$$
Thus
$$(1 - \tau_n)\delta_n\lambda(2\kappa_1 - \lambda)\|fq_n - fx^*\|^2 \leq \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + (1 - \tau_n)\delta_n\big(\tau_n M_3 + 2\alpha_n\beta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|)\big) + \tau_n M_4. \qquad (44)$$
Replacing $n$ by $n_k$, using the boundedness of $\{x_n\}$, Condition 1 and (31), and letting $k \to \infty$, we get
$$\lim_{k \to \infty}\|fq_{n_k} - fx^*\| = 0. \qquad (45)$$
Since $J_\lambda^F$ is firmly nonexpansive, we obtain
$$\begin{aligned} \|y_n - x^*\|^2 &= \|J_\lambda^F(I - \lambda f)q_n - J_\lambda^F(I - \lambda f)x^*\|^2 \leq \langle (I - \lambda f)q_n - (I - \lambda f)x^*, y_n - x^*\rangle \\ &= \frac{1}{2}\big(\|(I - \lambda f)q_n - (I - \lambda f)x^*\|^2 + \|y_n - x^*\|^2 - \|q_n - y_n - \lambda(fq_n - fx^*)\|^2\big) \\ &= \frac{1}{2}\big(\|(I - \lambda f)q_n - (I - \lambda f)x^*\|^2 + \|y_n - x^*\|^2 - \|q_n - y_n\|^2 + 2\lambda\langle q_n - y_n, fq_n - fx^*\rangle - \lambda^2\|fq_n - fx^*\|^2\big) \\ &\leq \frac{1}{2}\big(\|q_n - x^*\|^2 + \|y_n - x^*\|^2 - \|q_n - y_n\|^2 + 2\lambda\|q_n - y_n\|\|fq_n - fx^*\|\big). \end{aligned}$$
So
$$\|y_n - x^*\|^2 \leq \|q_n - x^*\|^2 + 2\lambda\|q_n - y_n\|\|fq_n - fx^*\| - \|q_n - y_n\|^2. \qquad (46)$$
From (23), (25) and (46), we have
$$\|x_{n+1} - x^*\|^2 \leq (1 - \tau_n)\|x_n - x^*\|^2 + (1 - \tau_n)\delta_n\big(\tau_n M_3 + 2\alpha_n\beta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) + 2\lambda\|q_n - y_n\|\|fq_n - fx^*\| - \|q_n - y_n\|^2\big) + \tau_n M_4. \qquad (47)$$
Therefore
$$(1 - \tau_n)\delta_n\|q_n - y_n\|^2 \leq \|x_n - x^*\|^2 - \|x_{n+1} - x^*\|^2 + (1 - \tau_n)\delta_n\big(\tau_n M_3 + 2\alpha_n\beta_n\mu_n\|Dx^*\|(\|x_n - x^*\| + \tau_n M_1 + \mu_n\|Dw_n\|) + 2\lambda\|q_n - y_n\|\|fq_n - fx^*\|\big) + \tau_n M_4.$$
Replacing $n$ by $n_k$, using the boundedness of $\{x_n\}$, Condition 1 and (45), and letting $k \to \infty$, we get
$$\lim_{k \to \infty}\|q_{n_k} - y_{n_k}\| = 0. \qquad (48)$$
Since
$$\|y_{n_k} - u_{n_k}\| \leq \|y_{n_k} - q_{n_k}\| + \|q_{n_k} - u_{n_k}\|,$$
it follows from (41) and (48) that
$$\lim_{k \to \infty}\|y_{n_k} - u_{n_k}\| = 0. \qquad (49)$$
Since
$$\|x_{n_k} - y_{n_k}\| \leq \|x_{n_k} - q_{n_k}\| + \|q_{n_k} - y_{n_k}\|,$$
we have
$$\lim_{k \to \infty}\|x_{n_k} - y_{n_k}\| = 0. \qquad (50)$$
By the definition of $x_{n+1}$, we get
$$\|x_{n_k+1} - x_{n_k}\| = \|\delta_{n_k}(y_{n_k} - x_{n_k}) - \tau_{n_k}x_{n_k}\| \leq \delta_{n_k}\|y_{n_k} - x_{n_k}\| + \tau_{n_k}\|x_{n_k}\|. \qquad (51)$$
Using (50) and Condition 1, we have
$$\lim_{k \to \infty}\|x_{n_k+1} - x_{n_k}\| = 0. \qquad (52)$$
Note that
$$\|x_{n_k} - Tx_{n_k}\| \leq \|x_{n_k} - v_{n_k}\| + \|v_{n_k} - Tv_{n_k}\| + \|Tv_{n_k} - Tx_{n_k}\| \leq 2\|x_{n_k} - v_{n_k}\| + \|v_{n_k} - Tv_{n_k}\|.$$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ which converges weakly to $\hat{x}$. Assume that $\hat{x} \neq T\hat{x}$; then by the Opial property,
$$\begin{aligned} \liminf_{k \to \infty}\|x_{n_k} - \hat{x}\| &< \liminf_{k \to \infty}\|x_{n_k} - T\hat{x}\| \leq \liminf_{k \to \infty}\big(\|x_{n_k} - Tx_{n_k}\| + \|Tx_{n_k} - T\hat{x}\|\big) \\ &\leq \liminf_{k \to \infty}\big(2\|x_{n_k} - v_{n_k}\| + \|v_{n_k} - Tv_{n_k}\| + \|x_{n_k} - \hat{x}\|\big) = \liminf_{k \to \infty}\|x_{n_k} - \hat{x}\|. \end{aligned}$$
This is a contradiction. Hence $\hat{x} = T\hat{x}$ and $\hat{x} \in Fix(T)$. On the other hand, we note that $u_n = (1 - \alpha_n)w_n + \alpha_n Tv_n$ and
$$w_n - Tv_n = (w_n - v_n) + (v_n - Tv_n) = \big((1 - \beta_n)(w_n - Sw_n) + \beta_n\mu_n Dw_n\big) + (v_n - Tv_n).$$
Setting
$$\varphi_n = \frac{w_n - u_n}{\alpha_n(1 - \beta_n)} = (w_n - Sw_n) + \frac{\beta_n}{1 - \beta_n}\mu_n Dw_n + \frac{1}{1 - \beta_n}(v_n - Tv_n),$$
in particular, for each $z \in Fix(T)$, we have
$$\begin{aligned} \langle \varphi_n, w_n - z\rangle &= \Big\langle (I - S)w_n + \frac{\beta_n}{1 - \beta_n}\mu_n Dw_n + \frac{1}{1 - \beta_n}(I - T)v_n, w_n - z\Big\rangle \\ &= \langle (I - S)w_n - (I - S)z, w_n - z\rangle + \langle (I - S)z, w_n - z\rangle + \frac{\beta_n}{1 - \beta_n}\mu_n\langle Dw_n, w_n - z\rangle \\ &\quad + \frac{1}{1 - \beta_n}\langle (I - T)v_n, w_n - v_n\rangle + \frac{1}{1 - \beta_n}\langle (I - T)v_n, v_n - z\rangle, \end{aligned}$$
from which, by the monotonicity of $I - S$ and $I - T$,
$$\langle \varphi_n, w_n - z\rangle \geq \langle (I - S)z, w_n - z\rangle + \frac{1}{1 - \beta_n}\langle (I - T)v_n, w_n - v_n\rangle + \frac{\beta_n}{1 - \beta_n}\mu_n\langle Dw_n, w_n - z\rangle. \qquad (53)$$
Replacing $n$ with $n_k$, we have
$$\langle \varphi_{n_k}, w_{n_k} - z\rangle \geq \langle (I - S)z, w_{n_k} - z\rangle + \frac{1}{1 - \beta_{n_k}}\langle (I - T)v_{n_k}, w_{n_k} - v_{n_k}\rangle + \frac{\beta_{n_k}}{1 - \beta_{n_k}}\mu_{n_k}\langle Dw_{n_k}, w_{n_k} - z\rangle.$$
Moreover, by (32) and Condition 1, we get
$$\lim_{k \to \infty}\varphi_{n_k} = \lim_{k \to \infty}\frac{w_{n_k} - u_{n_k}}{\alpha_{n_k}(1 - \beta_{n_k})} = 0.$$
Since $\|x_{n_k} - w_{n_k}\| \to 0$, one has $w_{n_k} \rightharpoonup \hat{x}$. Taking the limit on both sides of (53) as $k \to \infty$, it follows from (37) and (38) that
$$\langle (I - S)z, \hat{x} - z\rangle \leq 0, \quad \forall z \in Fix(T).$$
If $z$ is replaced by $tz + (1 - t)\hat{x}$ for $t \in (0, 1)$, we get
$$\langle (I - S)(tz + (1 - t)\hat{x}), \hat{x} - z\rangle \leq 0, \quad \forall z \in Fix(T).$$
Since $I - S$ is continuous, taking the limit as $t \to 0^+$, we have
$$\langle (I - S)\hat{x}, \hat{x} - z\rangle \leq 0, \quad \forall z \in Fix(T).$$
That is, $\hat{x} \in \Phi$. Next we will show that $\hat{x} \in \Omega$.
Note that $y_{n_k} = U(u_{n_k} + r_{n_k}A^*(V - I)Au_{n_k})$ can be rewritten as
$$\frac{u_{n_k} + r_{n_k}A^*(V - I)Au_{n_k} - y_{n_k}}{\lambda} - f(Wu_{n_k}) \in F(y_{n_k}), \qquad (54)$$
where $W = I + r_n A^*(V - I)A$ is nonexpansive by Lemma 7. Letting $k \to \infty$ in (54), by (31), (49) and the fact that the graph of a maximal monotone mapping is weakly-strongly closed, we obtain $0 \in f(\hat{x}) + F(\hat{x})$. Moreover, since $\{x_n\}$ and $\{u_n\}$ have the same asymptotic behavior, we have $Au_{n_k} \rightharpoonup A\hat{x}$. Since the resolvent operator $V = J_\lambda^G(I - \lambda g)$ is averaged, and hence nonexpansive, we can obtain
$$0 \in g(A\hat{x}) + G(A\hat{x}).$$
This shows that $\hat{x} \in \Omega$, and so $\hat{x} \in \Psi$.
Step 5. We will show that $x_n \to x^*$, where $\|x^*\| = \min\{\|z\| : z \in \Psi\}$. Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_{n_k}\}$ such that $x_{n_j} \rightharpoonup z^*$. Moreover,
$$\limsup_{k \to \infty}\langle x^*, x^* - x_{n_k}\rangle = \lim_{j \to \infty}\langle x^*, x^* - x_{n_j}\rangle = \langle x^*, x^* - z^*\rangle. \qquad (55)$$
Since $\|x_{n_k} - u_{n_k}\| \to 0$, one has $u_{n_j} \rightharpoonup z^*$, which together with $\|y_{n_k} - u_{n_k}\| \to 0$ and Lemma 9 gives $z^* \in \Omega$; moreover, by Step 4, $z^* \in \Psi$. From the definition of $x^*$ as the minimum-norm element of $\Psi$ (i.e., $x^* = P_\Psi(0)$) and (55), we obtain
$$\limsup_{k \to \infty}\langle x^*, x^* - x_{n_k}\rangle = \langle x^*, x^* - z^*\rangle \leq 0. \qquad (56)$$
Combining (56) and (52), we obtain
$$\limsup_{k \to \infty}\langle x^*, x^* - x_{n_k+1}\rangle \leq \limsup_{k \to \infty}\langle x^*, x^* - x_{n_k}\rangle = \langle x^*, x^* - z^*\rangle \leq 0. \qquad (57)$$
This, together with (50), Step 3 and Lemma 6, allows us to conclude that $\{x_n\}$ converges strongly to $x^* \in \Psi$. Hence we obtain the desired conclusion. □
Remark 2.
We note that our results directly improve and extend some known results in the literature as follows:
(i) Our proposed iterative method has strong convergence in real Hilbert spaces, which is preferable to the weak convergence results of Kazmi et al. [7].
(ii) We improve and extend Theorem 3.1 of Dao-Jun Wen [14]. In particular, we use quasi-nonexpansive mappings instead of nonexpansive mappings.
(iii) The selection of the step size in the iterative methods provided by Dao-Jun Wen [14] and Kazmi et al. [7] requires prior information of the operator (matrix) norm, while our iterative method updates the step size at each iteration.
(iv) Our iterative method improves and extends the iterative methods of Dao-Jun Wen [14] and Kazmi et al. [7].

4. Theoretical Applications

In this section, we derive a scheme for solving hierarchical fixed point and the split problems from Algorithm 1 and also extend and generalize the known results.

4.1. Split Variational Inclusion Problem

The split variational inclusion problem is one of the important special cases of the split monotone variational inclusion problem. It is a fundamental problem in optimization theory, applied in a wide range of disciplines. In other words, if $f = 0$ and $g = 0$, then the split monotone variational inclusion problem reduces to the split variational inclusion problem. Let us denote by $\mathrm{Sol(SVIP)} = \{x^* \in H_1 : 0 \in F(x^*) \ \text{and} \ 0 \in G(Ax^*)\}$ the solution set of the split variational inclusion problem.
Theorem 2.
Let $H_1$ and $H_2$ be two real Hilbert spaces, $C$ be a nonempty closed convex subset of $H_1$ and $A : H_1 \to H_2$ be a bounded linear operator with adjoint operator $A^*$. Assume that $F : H_1 \to 2^{H_1}$ and $G : H_2 \to 2^{H_2}$ are set-valued maximal monotone operators. Let $D : C \to C$ be η-strongly monotone and $L$-Lipschitzian, $T : C \to C$ be a nonexpansive mapping, and $S : C \to C$ be a continuous quasi-nonexpansive mapping such that $I - S$ is monotone and $\Gamma = \mathrm{Sol(SVIP)} \cap \Phi \cap Fix(S) \neq \emptyset$. Let $\{x_n\}$ be a sequence which satisfies Condition 1, generated by the following scheme:
$$w_n = x_n + \theta_n(x_n - x_{n-1}); \quad u_n = (1 - \alpha_n)w_n + \alpha_n T\big((1 - \beta_n)Sw_n + \beta_n(I - \mu_n D)w_n\big); \quad y_n = J_\lambda^F(u_n + r_n A^*(J_\lambda^G - I)Au_n); \quad x_{n+1} = (1 - \delta_n - \tau_n)x_n + \delta_n y_n,$$
where $\{r_n\}$ and $\{\theta_n\}$ are as in Algorithm 1. Then the sequence $\{x_n\}$ converges strongly to $x^* \in \Gamma$, where $\|x^*\| = \min\{\|z\| : z \in \Gamma\}$.

4.2. Split Variational Inequality Problem

Let $C$ be a nonempty closed convex subset of a Hilbert space $H_1$. Define the normal cone $N_C(x)$ of $C$ at a point $x \in C$ by
$$N_C(x) = \{z \in H_1 : \langle z, y - x\rangle \leq 0, \ \forall y \in C\}.$$
It is known that for each $\lambda > 0$, we get
$$y = (I + \lambda N_C)^{-1}x \iff x \in y + \lambda N_C(y) \iff x - y \in \lambda N_C(y) \iff \langle x - y, z - y\rangle \leq 0, \ \forall z \in C \iff y = P_C x.$$
This implies that $(I + \lambda N_C)^{-1}x = P_C x$. Let $C$ and $Q$ be nonempty closed convex subsets of the Hilbert spaces $H_1$ and $H_2$, respectively. If $F = N_C$ and $G = N_Q$ in the split monotone variational inclusion problem, the following split variational inequality problem is obtained: Find $x^* \in C$ such that
$$\langle f(x^*), x - x^*\rangle \geq 0, \ \forall x \in C \quad \text{and} \quad \langle g(Ax^*), y - Ax^*\rangle \geq 0, \ \forall y \in Q,$$
where $f : H_1 \to H_1$ and $g : H_2 \to H_2$ are $\kappa_1$-, $\kappa_2$-inverse strongly monotone mappings with $\kappa = \min\{\kappa_1, \kappa_2\}$. In particular, if $f$ is a $\kappa_1$-inverse strongly monotone mapping with $\lambda \in (0, 2\kappa_1)$, then $P_C(I - \lambda f)$ is averaged. Then, the following result can be obtained from our Theorem 1.
Theorem 3.
Let $H_1, H_2, C, Q, f, g$ be as above. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint operator $A^*$. Select arbitrary initial points $x_0, x_1 \in H_1$ and let $\{x_n\}$ be generated by the following scheme:
$$w_n = x_n + \theta_n(x_n - x_{n-1}); \quad u_n = (1 - \alpha_n)w_n + \alpha_n T\big((1 - \beta_n)Sw_n + \beta_n(I - \mu_n D)w_n\big); \quad y_n = P_C(I - \lambda f)\big(u_n + r_n A^*(P_Q(I - \lambda g) - I)Au_n\big); \quad x_{n+1} = (1 - \delta_n - \tau_n)x_n + \delta_n y_n,$$
where $\{r_n\}$ and $\{\theta_n\}$ are as in Algorithm 1. Suppose that Condition 1 is satisfied and $\Upsilon = \mathrm{Sol(SVIP)} \cap \Phi \cap Fix(S) \neq \emptyset$. Then the sequence $\{x_n\}$ converges strongly to $x^* \in \Upsilon$, where $\|x^*\| = \min\{\|z\| : z \in \Upsilon\}$.

5. Numerical Experiments

Some numerical results will be presented in this section to demonstrate the effectiveness of our proposed method. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a MacBook Pro (13-inch, 2019) with a 2.4 GHz Quad-Core Intel Core i5 processor and 8.00 GB of RAM.
Example 1.
We consider an example in infinite dimensional Hilbert spaces. Assume $H_1 = H_2 = L^2([0, 1])$ with inner product $\langle x, y\rangle = \int_0^1 x(t)y(t)\,dt$ and induced norm $\|x\| = \big(\int_0^1 |x(t)|^2\,dt\big)^{1/2}$ for all $x, y \in L^2([0, 1])$. Let $F, G : L^2([0, 1]) \to L^2([0, 1])$ be defined by $Fx(t) = 3x(t)$ and $Gx(t) = 5x(t)$, where $x(t) \in L^2([0, 1])$, $t \in [0, 1]$. Let $f, g : L^2([0, 1]) \to L^2([0, 1])$ be defined by $fx(t) = 2x(t)$ and $gx(t) = 4x(t)$, where $x(t) \in L^2([0, 1])$, $t \in [0, 1]$. Then $f$ is $\frac{1}{2}$-inverse strongly monotone, $g$ is $\frac{1}{4}$-inverse strongly monotone and $F, G$ are maximal monotone. Further, for $\lambda > 0$ we have by a direct calculation that
$$J_\lambda^F(x - \lambda fx) = (I + \lambda F)^{-1}(x - \lambda fx) = \frac{1 - 2\lambda}{1 + 3\lambda}x(t),$$
$$J_\lambda^G(x - \lambda gx) = (I + \lambda G)^{-1}(x - \lambda gx) = \frac{1 - 4\lambda}{1 + 5\lambda}x(t),$$
where $x(t) \in L^2([0, 1])$, $t \in [0, 1]$. Choosing $\theta = 0.6$ and $\epsilon_n = \frac{1}{(n + 1)^3}$, we have
$$\bar{\theta}_n = \begin{cases} \min\Big\{0.6, \frac{1}{(n + 1)^3\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}; \\ 0.6, & \text{otherwise}. \end{cases}$$
The mapping $A : H_1 \to H_2$ is defined by $Ax(t) = \frac{9}{4}x(t)$, so $\|A\| = \|A^*\| = \frac{9}{4}$. Let $Tx(t) = x(t)$, $Sx(t) = x(t)\cos(x(t))$ and $Dx(t) = 10x(t)$, where $x(t) \in L^2([0, 1])$, $t \in [0, 1]$. Then $T$ is a nonexpansive mapping with $Fix(T) = L^2([0, 1])$, and $S$ is a continuous quasi-nonexpansive mapping with $Fix(S) = \{0\}$ such that $I - S$ is monotone, but $S$ is not a nonexpansive mapping. Hence $\Phi = \mathrm{Sol(HFPP)} = \{0\}$. Furthermore, it is easy to prove that $\Omega = \{0\}$. Therefore $\Psi = \Phi \cap \Omega \cap Fix(S) = \{0\}$. We choose the iterative coefficients
$$\alpha_n = \frac{1}{2}, \quad \beta_n = \frac{1}{n^2}, \quad \mu_n = \frac{1}{2n}, \quad \delta_n = 0.98 - \tau_n, \quad \tau_n = \frac{1}{n + 2}.$$
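To reproduce such an experiment, one typically discretizes $[0, 1]$; here is a minimal sketch (our assumption: a uniform grid with trapezoidal quadrature) of the discretized $L^2$ norm and the stopping rule:

```python
import numpy as np

t = np.linspace(0, 1, 1001)                        # uniform grid on [0, 1]
l2_norm = lambda x: np.sqrt(np.trapz(x**2, t))     # ||x|| = (int_0^1 |x(t)|^2 dt)^(1/2)

x1 = 100 * t**2                                    # one of the initial points in Table 1
print(l2_norm(x1))                                 # approx 100/sqrt(5) ~ 44.72
# The iterations are stopped once l2_norm(x_n - x_star) <= 1e-5, with x_star = 0 here.
```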
A further comparison of the convergence behavior between our proposed method and those of Dao-Jun Wen [14] and Kazmi et al. [7] is displayed in Figure 1 and Table 1, with the stopping criterion $\|x_n - x^*\| \leq 10^{-5}$.
Remark 3.
Table 1 shows that different inertial points $x_0 = x_1$ have almost no effect on the number of iterations, and that our proposed method requires fewer iterations and less CPU time than the methods of Dao-Jun Wen [14] and Kazmi et al. [7].
Example 2.
In this example, we apply our main result to solve the image restoration problem by using Algorithm 1. We consider the convex minimization problem
$$\min_x \{h(x) + g(x)\}, \qquad (58)$$
where $h : \mathbb{R}^n \to \mathbb{R}$ is a smooth convex loss function, differentiable with $L$-Lipschitz continuous gradient $\nabla h$ for some $L > 0$, and $g : \mathbb{R}^n \to \mathbb{R}$ is a continuous convex function. It is well known that the convex minimization problem (58) is equivalent to the following problem:
$$\text{find a point } x^* \in C \text{ such that } 0 \in \nabla h(x^*) + \partial g(x^*). \qquad (59)$$
We consider the degradation model that represents image restoration problems as the following mathematical model:
$$b = Ax + \varepsilon, \qquad (60)$$
where $A \in \mathbb{R}^{m \times n}$ is a blurring matrix and $\varepsilon \in \mathbb{R}^{m \times 1}$ is a noise term. The goal is to recover the original image $x \in \mathbb{R}^{n \times 1}$ by minimizing the noise term. We consider a model which produces the restored image given by the following minimization problem:
$$\min_x \Big\{\frac{1}{2}\|Ax - b\|_2^2 + \omega\|x\|_1\Big\}, \qquad (61)$$
for some regularization parameter $\omega > 0$. In this situation, we choose $h(x) = \frac{1}{2}\|Ax - b\|_2^2$ and $g(x) = \omega\|x\|_1$ and set the operators $f = \nabla h$, $F = \partial g$, $Ax = \frac{9x}{4}$, $Dx = 10x$, $G = 0$, $Tx = \frac{3nx}{3n + 1}$ and $S = \mathrm{prox}_{rg}(I - r\nabla h)$, with $\theta_n$ defined as in Example 1. We set the parameters as follows:
$$\alpha_n = \frac{60n - 9}{100n}, \quad \beta_n = \frac{1}{150n^2}, \quad \tau_n = \frac{1}{2000(2n + 1)}, \quad \delta_n = 0.4 - \tau_n, \quad \mu_n = 0.5.$$
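The mapping $S = \mathrm{prox}_{rg}(I - r\nabla h)$ above is a forward-backward (ISTA-type) step for model (61). A minimal sketch of its two ingredients, with a random stand-in for the blurring matrix (our illustration; the actual $A$, $b$ come from the degradation model (60)):

```python
import numpy as np

def grad_h(x, A, b):
    """Gradient of h(x) = 0.5 * ||Ax - b||_2^2."""
    return A.T @ (A @ x - b)

def prox_l1(x, w):
    """Proximal operator of w*||x||_1: componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - w, 0.0)

def forward_backward_step(x, A, b, r, w):
    """One application of S = prox_{r g}(I - r grad h) with g = w*||.||_1."""
    return prox_l1(x - r * grad_h(x, A, b), r * w)

# Toy usage with a random matrix standing in for the blur (illustrative only);
# the step size r should satisfy r < 2/||A^T A||, which 0.1 does for this A.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50)) / np.sqrt(30)
x_true = np.zeros(50); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(30)
x = np.zeros(50)
for _ in range(500):
    x = forward_backward_step(x, A, b, r=0.1, w=1e-3)
```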
For this example, we choose the regularization parameter $\omega = 10^{-3}$ and the original gray and RGB images (see Figure 2a,f). We use average and motion blur to create the blurred and noisy images (see Figure 2b,g). The performance of the image restoration process is quantitatively measured by the signal-to-noise ratio (SNR), defined as
$$\mathrm{SNR} = 20\log_{10}\Big(\frac{\|x\|_2^2}{\|x - x_n\|_2^2}\Big),$$
where $x$ and $x_n$ denote the original image and the restored image at iteration $n$, respectively. A higher SNR implies that the recovered image is of higher quality. Our numerical results are reported in Table 2 and Figure 3.
Remark 4.
It can be observed from Figure 2 that the restoration quality of the gray and RGB images restored by our algorithm is better than the quality of the images restored by the algorithms of Dao-Jun Wen [14] and Kazmi et al. [7]; this is confirmed by the higher SNR values of our algorithm in Table 2 and Figure 3.

6. Conclusions

In this article, the main contribution is to introduce a novel self-adaptive inertial Krasnoselski–Mann iterative method for solving hierarchical fixed point and split monotone variational inclusion problems in Hilbert spaces. The main advantage of this scheme is that it combines an inertial technique with a self-adaptive step size criterion which does not require prior knowledge of the norm of the cost operator. Under standard assumptions, the strong convergence of the proposed method is established. A modified scheme derived from the proposed method is given for solving hierarchical fixed point and split problems. The application of the proposed method to image recovery and comparisons with the algorithms of Dao-Jun Wen [14] and Kazmi et al. [7] are presented.

Author Contributions

Conceptualization, P.C. and A.K.; methodology, P.C. and A.K.; formal analysis, P.C. and A.K.; investigation, P.C. and A.K.; writing—original draft preparation, P.C. and A.K.; writing—review and editing, P.C. and A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financially supported by the Faculty of Science, Burapha University (SC05/2564) and the Faculty of Science, Naresuan University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editor and the anonymous referees for their valuable comments and suggestions which helped to improve the original version of this paper. This work was financially supported by the Faculty of Science, Burapha University (SC05/2564), and the second author would like to express her deep thanks to the Faculty of Science, Naresuan University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Combettes, P.L.; Pesquet, J.-C. Deep neural structures solving variational inequalities. Set-Valued Var. Anal. 2020, 28, 491–518.
2. Luo, M.J.; Zhang, Y. Robust solutions to box-constrained stochastic linear variational inequality problem. J. Inequal. Appl. 2017, 2017, 253.
3. Juditsky, A.; Nemirovski, A. Solving variational inequalities with monotone operators in domains given by linear minimization oracles. arXiv 2013, arXiv:1312.1073v2.
4. Moudafi, A.; Mainge, P.-E. Towards viscosity approximations of hierarchical fixed-point problems. Fixed Point Theory Appl. 2006, 1–10.
5. Ćirić, L.J. Some Recent Results in Metrical Fixed Point Theory; University of Belgrade: Belgrade, Serbia, 2003.
6. Faraji, H.; Radenović, S. Some fixed point results of enriched contractions by Krasnoseljski iterative method in partially ordered Banach spaces. Trans. A. Razmadze Math. Inst., to appear.
7. Kazmi, K.R.; Ali, R.; Furkan, M. Hybrid iterative method for split monotone variational inclusion problem and hierarchical fixed point problem for finite family of nonexpansive mappings. Numer. Algorithms 2018, 79, 499–527.
8. Moudafi, A. Krasnoselski–Mann iteration for hierarchical fixed-point problems. Inverse Probl. 2007, 23, 1635–1640.
9. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
10. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
11. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
12. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270.
13. Kazmi, K.R.; Ali, R.; Furkan, M. Krasnoselski–Mann type iterative method for hierarchical fixed point problem and split mixed equilibrium problem. Numer. Algorithms 2017, 77, 289–308.
14. Wen, D.-J. Modified Krasnoselski–Mann type iterative algorithm with strong convergence for hierarchical fixed point problem and split monotone variational inclusions. J. Comput. Appl. Math. 2021, 393, 1–13.
15. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
16. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
17. Chuang, C.S. Hybrid inertial proximal algorithm for the split variational inclusion problem in Hilbert spaces with applications. Optimization 2017, 66, 777–792.
18. Olona, M.A.; Alakoya, T.O.; Owolabi, A.O.-E.; Mewomo, O.T. Inertial shrinking projection algorithm with self-adaptive step size for split generalized equilibrium and fixed point problems for a countable family of nonexpansive multivalued mappings. Demonstr. Math. 2021, 54, 47–67.
19. Olona, M.A.; Alakoya, T.O.; Owolabi, A.O.-E.; Mewomo, O.T. Inertial algorithm for solving equilibrium, variational inclusion and fixed point problems for an infinite family of strictly pseudocontractive mappings. J. Nonlinear Funct. Anal. 2021, 2, 10.
20. Kesornprom, S.; Cholamjiak, P. Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in Hilbert spaces with applications. Optimization 2019, 68, 2369–2395.
21. Tang, Y. Convergence analysis of a new iterative algorithm for solving split variational inclusion problems. J. Ind. Manag. Optim. 2020, 16, 945–964.
22. Tang, Y.; Gibali, A. New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algorithms 2020, 83, 305–331.
23. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201.
24. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
25. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750.
Figure 1. Numerical behavior of all algorithms with different initial values in Example 1.
Figure 2. The original images are shown as (a) "Cameraman image" and (f) "Anchalee image". Degraded images by average and motion blur are shown as (b,g), respectively. (c,h) show the reconstruction by our algorithm, (d,i) show the reconstruction by the Dao-Jun Wen algorithm and (e,j) show the reconstruction by the Kazmi et al. algorithm.
Figure 3. Numerical behavior of all algorithms with SNR in Example 2.
Table 1. The result of all algorithms with different inertial points in Example 1.

                      x_1 = 100t^2        x_1 = 500(t^3 + 2t)   x_1 = 500 sin(t)
Algorithms            Iter.   Time (s)    Iter.   Time (s)      Iter.   Time (s)
Our Algorithm         38      0.13        47      0.13          45      0.12
Dao-Jun Wen Alg.      86      0.15        97      0.12          81      0.16
Kazmi et al. Alg.     93      0.18        95      0.24          103     0.21
Table 2. The performance of the signal-to-noise ratio (SNR) for the gray and RGB images.

            Cameraman Image                          Anchalee Image
n           Our Alg.   Dao-Jun Wen   Kazmi et al.    Our Alg.   Dao-Jun Wen   Kazmi et al.
1           26.2564    8.1116        4.0332          37.2865    9.5366        4.7484
50          30.1687    30.0091       29.2302         45.6664    45.1171       42.9992
100         30.7660    30.5809       29.8699         48.5931    47.7755       45.0396
200         31.1938    30.9517       30.3947         51.7846    50.5634       47.7038
300         31.2939    31.0011       30.5988         53.5650    51.9977       49.4282
400         31.2432    30.9072       30.6739         54.7291    52.8411       50.6422