Article

A Tseng-Type Algorithm with Self-Adaptive Techniques for Solving the Split Problem of Fixed Points and Pseudomonotone Variational Inequalities in Hilbert Spaces

1
The Key Laboratory of Intelligent Information and Big Data Processing of Ningxia Province, and Health Big Data Research Institute, North Minzu University, Yinchuan 750021, China
2
Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung 807, Taiwan
3
Research Center of Nonlinear Analysis and Optimization, Kaohsiung Medical University, Kaohsiung 807, Taiwan
4
Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung 807, Taiwan
*
Author to whom correspondence should be addressed.
Axioms 2021, 10(3), 152; https://doi.org/10.3390/axioms10030152
Submission received: 15 June 2021 / Revised: 2 July 2021 / Accepted: 5 July 2021 / Published: 10 July 2021
(This article belongs to the Special Issue Advances in Nonlinear and Convex Analysis)

Abstract:
In this paper, we study the split problem of fixed points of two pseudocontractive operators and variational inequalities of two pseudomonotone operators in Hilbert spaces. We present a Tseng-type iterative algorithm for solving the split problem by using self-adaptive techniques. Under certain assumptions, we show that the proposed algorithm converges weakly to a solution of the split problem. An application is included.

1. Introduction

In this paper, we study the variational inequality (in short, VI$(C,\phi)$) of seeking an element $p \in C$ such that
$$\langle \phi(p), x - p \rangle \ge 0, \quad \text{for all } x \in C, \tag{1}$$
where $C$ is a nonempty closed convex set in a real Hilbert space $H$, $\langle \cdot , \cdot \rangle$ denotes the inner product of $H$, and $\phi : H \to H$ is a nonlinear operator. Denote by $\operatorname{Sol}(C,\phi)$ the solution set of the variational inequality (1).
A host of problems, such as optimization problems, saddle point problems, equilibrium problems, and fixed point problems, can be converted into the form of the variational inequality (1); see [1,2,3,4,5,6,7,8,9,10,11,12]. Many numerical algorithms have been proposed and developed for solving (1) and related problems; see [13,14,15,16,17,18,19,20,21,22,23,24,25] and the references therein. Generally speaking, $\phi$ is assumed to satisfy the following conditions:
  • $\phi$ is strongly monotone, i.e., there exists a positive constant $\gamma$ such that
    $$\langle \phi(u) - \phi(\hat{u}), u - \hat{u} \rangle \ge \gamma \|u - \hat{u}\|^2, \quad \text{for all } u, \hat{u} \in H. \tag{2}$$
  • $\phi$ is Lipschitz continuous, i.e., there exists a positive constant $\kappa$ such that
    $$\|\phi(u) - \phi(\hat{u})\| \le \kappa \|u - \hat{u}\|, \quad \text{for all } u, \hat{u} \in H. \tag{3}$$
In order to relax the restriction (2), Korpelevich's extragradient algorithm ([26]) was proposed in 1976:
$$y_k = \operatorname{proj}_C[x_k - \tau \phi(x_k)], \qquad x_{k+1} = \operatorname{proj}_C[x_k - \tau \phi(y_k)], \tag{4}$$
where $\operatorname{proj}_C$ denotes the orthogonal projection from $H$ onto $C$ and the step-size $\tau$ is in $(0, \frac{1}{\kappa})$.
The extragradient algorithm (4) guarantees the convergence of the sequence $\{x_k\}$ provided $\phi$ is monotone. The extragradient algorithm and its variants have been investigated extensively; see [27,28,29,30,31]. However, each iteration requires computing (i) the projection $\operatorname{proj}_C$ at two different points and (ii) the two values $\phi(x_k)$ and $\phi(y_k)$. Two important modifications of the extragradient algorithm have been made. One was proposed in [32] by Censor, Gibali and Reich, and another is the following remarkable algorithm proposed in [33] by Tseng:
$$y_k = \operatorname{proj}_C[x_k - \gamma \phi(x_k)], \qquad x_{k+1} = y_k + \gamma[\phi(x_k) - \phi(y_k)], \tag{5}$$
where $\gamma \in (0, \frac{1}{\kappa})$.
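The difference between (4) and (5) is easy to see in code. The following is a minimal NumPy sketch of Tseng's method (5) on a toy monotone affine variational inequality; the operator, the set $C$, and the starting point are illustrative choices, not from the paper. Note that each iteration uses only one projection but still two evaluations of $\phi$.

```python
import numpy as np

def tseng(phi, proj_C, x0, gamma, iters=1000):
    # Tseng's method (5): one projection and two operator
    # evaluations per iteration, with gamma in (0, 1/kappa).
    x = x0
    for _ in range(iters):
        y = proj_C(x - gamma * phi(x))      # forward-backward step
        x = y + gamma * (phi(x) - phi(y))   # correction step
    return x

# Toy monotone VI: phi(x) = M x + q with positive definite symmetric
# part (the skew part makes M non-symmetric), C = nonnegative orthant.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
phi = lambda x: M @ x + q
proj = lambda x: np.maximum(x, 0.0)         # projection onto C

kappa = np.linalg.norm(M, 2)                # Lipschitz constant of phi
x_star = tseng(phi, proj, np.zeros(2), gamma=0.9 / kappa)
```

For this toy problem the solution $(0.2, 0.6)$ lies in the interior of $C$, and the natural residual $x - \operatorname{proj}_C[x - \phi(x)]$, which vanishes exactly at solutions of (1), gives a convenient stopping test.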
On the other hand, if $\phi$ is not Lipschitz continuous, or its Lipschitz constant $\kappa$ is difficult to estimate, then algorithms (4) and (5) cannot be applied. To overcome this obstacle, Iusem [34] used a self-adaptive technique, which requires no prior knowledge of the Lipschitz constant $\kappa$ of $\phi$, for solving (1). For related work on self-adaptive methods for solving (1), please refer to [35,36,37,38].
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C$ and $Q$ be two nonempty closed and convex subsets of $H_1$ and $H_2$, respectively. Let $S : C \to C$, $T : Q \to Q$, $f : H_1 \to H_1$ and $g : H_2 \to H_2$ be four nonlinear operators, and let $A : H_1 \to H_2$ be a bounded linear operator. We consider the classical split problem, which is to find a point $x^* \in C$ such that
$$x^* \in \operatorname{Fix}(S) \cap \operatorname{Sol}(C,f) \quad \text{and} \quad Ax^* \in \operatorname{Fix}(T) \cap \operatorname{Sol}(Q,g), \tag{6}$$
where $\operatorname{Fix}(S) := \{u \mid u = S(u)\}$ and $\operatorname{Fix}(T) := \{v \mid v = T(v)\}$ are the fixed point sets of $S$ and $T$, respectively.
The solution set of (6) is denoted by $\Gamma$, i.e.,
$$\Gamma = \{x^* \mid x^* \in \operatorname{Fix}(S) \cap \operatorname{Sol}(C,f), \; Ax^* \in \operatorname{Fix}(T) \cap \operatorname{Sol}(Q,g)\}.$$
Let $f$ and $g$ be the null operators on $C$ and $Q$, respectively. Then, the split problem (6) reduces to the split fixed point problem studied in [39,40], which is to find an element $x^* \in C$ such that
$$x^* \in \operatorname{Fix}(S) \quad \text{and} \quad Ax^* \in \operatorname{Fix}(T). \tag{7}$$
Let $S$ and $T$ be the identity operators on $C$ and $Q$, respectively. Then, the split problem (6) reduces to the split variational inequality problem studied in [41], which is to find an element $x^* \in C$ such that
$$x^* \in \operatorname{Sol}(C,f) \quad \text{and} \quad Ax^* \in \operatorname{Sol}(Q,g). \tag{8}$$
The solution set of (8) is denoted by $\Gamma_1$, i.e.,
$$\Gamma_1 = \{x^* \mid x^* \in \operatorname{Sol}(C,f), \; Ax^* \in \operatorname{Sol}(Q,g)\}.$$
The split problems (6)–(8) have a common prototype, namely the split feasibility problem ([42]) of finding a point $x^*$ such that
$$x^* \in C \quad \text{and} \quad Ax^* \in Q. \tag{9}$$
The split problems have found powerful applications in image recovery, signal processing, control theory, biomedical engineering, and geophysics. Iterative algorithms for solving split problems have been studied and extended by many scholars; see [43,44,45,46,47].
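As a concrete illustration of the prototype (9), a classical CQ-type projected-gradient iteration $x_{k+1} = \operatorname{proj}_C[x_k - \varepsilon A^*(Ax_k - \operatorname{proj}_Q(Ax_k))]$ can be sketched in a few lines. The matrix and the box constraints below are toy choices for illustration only, not data from the paper.

```python
import numpy as np

# CQ-type iteration for the split feasibility problem (9):
# find x in C with Ax in Q.  Here C and Q are boxes, so the
# projections are simple clips.
A = np.array([[1.0, 2.0], [0.0, 1.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)      # C = [0, 1]^2
proj_Q = lambda y: np.clip(y, 0.0, 2.0)      # Q = [0, 2]^2

eps = 0.9 / np.linalg.norm(A, 2) ** 2        # step-size below 1/||A||^2
x = np.array([5.0, -3.0])
for _ in range(2000):
    Ax = A @ x
    # gradient step on (1/2)||Ax - proj_Q(Ax)||^2, then project onto C
    x = proj_C(x - eps * A.T @ (Ax - proj_Q(Ax)))
```

At termination $x$ should satisfy both feasibility conditions of (9), i.e., $x = \operatorname{proj}_C[x]$ and $Ax = \operatorname{proj}_Q[Ax]$.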
Motivated by the work in this direction, in this paper we further study the split problem (6) in which $S$ and $T$ are two pseudocontractive operators and $f$ and $g$ are two pseudomonotone operators. We present a Tseng-type iterative algorithm for solving the split problem (6) by using self-adaptive techniques. Under certain conditions, we show that the proposed algorithm converges weakly to a solution of the split problem (6).

2. Preliminaries

Let $H$ be a real Hilbert space equipped with inner product $\langle \cdot , \cdot \rangle$ and the induced norm $\|x\| := \sqrt{\langle x, x \rangle}$. For any $x, y \in H$ and constant $\eta \in \mathbb{R}$, we have
$$\|\eta x + (1-\eta)y\|^2 = \eta\|x\|^2 + (1-\eta)\|y\|^2 - \eta(1-\eta)\|x - y\|^2. \tag{10}$$
The symbol $\rightharpoonup$ denotes weak convergence and the symbol $\to$ denotes strong convergence. We use $\omega_w(u_k)$ to denote the set of all weak cluster points of the sequence $\{u_k\}$, namely, $\omega_w(u_k) = \{u : \text{there exists } \{u_{k_i}\} \subset \{u_k\} \text{ such that } u_{k_i} \rightharpoonup u \text{ as } i \to \infty\}$.
Recall that an operator $\phi : H \to H$ is said to be
  • pseudomonotone, if
    $$\langle \phi(\tilde{x}), x - \tilde{x} \rangle \ge 0 \implies \langle \phi(x), x - \tilde{x} \rangle \ge 0, \quad \forall x, \tilde{x} \in H;$$
  • weakly sequentially continuous, if $u_k \rightharpoonup \tilde{u}$ in $H$ implies that $\phi(u_k) \rightharpoonup \phi(\tilde{u})$.
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Recall that an operator $S : C \to C$ is said to be pseudocontractive if
$$\|S(x) - S(y)\|^2 \le \|x - y\|^2 + \|(I - S)x - (I - S)y\|^2, \quad \text{for all } x, y \in C.$$
For given $u \in H$, there exists a unique point in $C$, denoted by $\operatorname{proj}_C[u]$, such that
$$\|u - \operatorname{proj}_C[u]\| \le \|x - u\|, \quad \text{for all } x \in C.$$
It is known that $\operatorname{proj}_C$ is firmly nonexpansive, that is, $\operatorname{proj}_C$ satisfies
$$\|\operatorname{proj}_C[x] - \operatorname{proj}_C[y]\|^2 \le \langle \operatorname{proj}_C[x] - \operatorname{proj}_C[y], x - y \rangle, \quad \text{for all } x, y \in H.$$
It is obvious that $\operatorname{proj}_C$ is nonexpansive, i.e., $\|\operatorname{proj}_C[x] - \operatorname{proj}_C[y]\| \le \|x - y\|$ for all $x, y \in H$. Moreover, $\operatorname{proj}_C$ satisfies the following inequality ([48]):
$$\langle x - \operatorname{proj}_C[x], y - \operatorname{proj}_C[x] \rangle \le 0, \quad \forall x \in H \text{ and for all } y \in C. \tag{11}$$
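These projection properties are easy to verify numerically whenever $\operatorname{proj}_C$ has a closed form. The sketch below, an illustrative check rather than part of the paper, uses the closed unit ball, whose projection is $x / \max(1, \|x\|)$, and tests firm nonexpansiveness, nonexpansiveness, and the characterization (11) on random points.

```python
import numpy as np

# Projection onto the closed unit ball B = {x : ||x|| <= 1}:
# proj_B(x) = x / max(1, ||x||).
proj_B = lambda x: x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=3), rng.normal(size=3)
    px, py = proj_B(x), proj_B(y)
    # firm nonexpansiveness: ||Px - Py||^2 <= <Px - Py, x - y>
    assert np.dot(px - py, x - y) >= np.linalg.norm(px - py) ** 2 - 1e-12
    # nonexpansiveness follows from the Cauchy-Schwarz inequality
    assert np.linalg.norm(px - py) <= np.linalg.norm(x - y) + 1e-12
    # characterization (11): <x - Px, z - Px> <= 0 for every z in B
    z = proj_B(rng.normal(size=3))          # an arbitrary point of B
    assert np.dot(x - px, z - px) <= 1e-12
```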
Lemma 1 ([49]).
Let $C$ be a nonempty, convex and closed subset of a Hilbert space $H$. Assume that the operator $S : C \to C$ is pseudocontractive and $\kappa$-Lipschitz continuous. Then, for all $\tilde{u} \in C$ and $u \in \operatorname{Fix}(S)$, we have
$$\|u - S((1-\alpha)\tilde{u} + \alpha S(\tilde{u}))\|^2 \le \|\tilde{u} - u\|^2 + (1-\alpha)\|\tilde{u} - S((1-\alpha)\tilde{u} + \alpha S(\tilde{u}))\|^2,$$
where $\alpha$ is a constant in $\left(0, \frac{1}{1 + \sqrt{1 + \kappa^2}}\right)$.
Lemma 2 ([50]).
Let $C$ be a nonempty closed convex subset of a real Hilbert space $H$. Let $f : C \to H$ be a continuous and pseudomonotone operator. Then $p \in \operatorname{Sol}(C,f)$ iff $p$ solves the following variational inequality:
$$\langle f(u), u - p \rangle \ge 0, \quad \text{for all } u \in C.$$
Lemma 3 ([51]).
Let $C$ be a nonempty, convex and closed subset of a Hilbert space $H$. Let the operator $S : C \to C$ be continuous and pseudocontractive. Then, $S$ is demiclosed, i.e., $u_k \rightharpoonup \tilde{u}$ and $S(u_k) \to u$ as $k \to +\infty$ imply that $S(\tilde{u}) = u$.
Lemma 4 ([52]).
Let $\Gamma$ be a nonempty closed convex subset of a real Hilbert space $H$ and let $\{x_k\} \subset H$ be a sequence. If the following assumptions are satisfied:
(i) for all $x^* \in \Gamma$, $\lim_{k \to +\infty} \|x_k - x^*\|$ exists;
(ii) $\omega_w(x_k) \subset \Gamma$;
then the sequence $\{x_k\}$ converges weakly to some point in $\Gamma$.

3. Main Results

In this section, we present our main results.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $C \subset H_1$ and $Q \subset H_2$ be two nonempty closed convex sets. Let $S : C \to C$, $T : Q \to Q$, $f : H_1 \to H_1$ and $g : H_2 \to H_2$ be four nonlinear operators. Let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$.
Let { α k } , { β k } , { ζ k } and { λ k } be four real number sequences. Let ϑ , δ , ω , μ and ε be five constants. Let γ 0 and τ 0 be two positive constants.
Next, we introduce an iterative algorithm for solving the split problem (6).
In order to demonstrate the convergence analysis of Algorithm 1, we add some conditions on the operators and the parameters.
Algorithm 1: Select an initial point $x_0 \in C$. Set $k = 0$.
Step 1. Assume that the present iterate $x_k$ and the step-sizes $\gamma_k$ and $\tau_k$ are given. Compute
$$v_k = (1-\beta_k)x_k + \beta_k S[(1-\alpha_k)x_k + \alpha_k S(x_k)], \tag{12}$$
$$y_k = \operatorname{proj}_C[v_k - \gamma_k f(v_k)], \tag{13}$$
$$u_k = (1-\vartheta)v_k + \vartheta y_k + \vartheta\gamma_k[f(v_k) - f(y_k)], \tag{14}$$
$$w_k = \operatorname{proj}_Q[Au_k - \tau_k g(Au_k)], \tag{15}$$
$$t_k = (1-\delta)Au_k + \delta w_k + \delta\tau_k[g(Au_k) - g(w_k)], \tag{16}$$
$$q_k = (1-\zeta_k)t_k + \zeta_k T[(1-\lambda_k)t_k + \lambda_k T(t_k)]. \tag{17}$$
Step 2. Compute the next iterate $x_{k+1}$ by
$$x_{k+1} = \operatorname{proj}_C[u_k + \varepsilon A^*(q_k - Au_k)]. \tag{18}$$
Step 3. Increase $k$ by 1 and go back to Step 1. Meanwhile, update
$$\gamma_{k+1} = \begin{cases} \min\left\{\gamma_k, \dfrac{\omega\|y_k - v_k\|}{\|f(y_k) - f(v_k)\|}\right\}, & f(y_k) \ne f(v_k), \\ \gamma_k, & \text{else}, \end{cases} \tag{19}$$
and
$$\tau_{k+1} = \begin{cases} \min\left\{\tau_k, \dfrac{\mu\|w_k - Au_k\|}{\|g(w_k) - g(Au_k)\|}\right\}, & g(w_k) \ne g(Au_k), \\ \tau_k, & \text{else}. \end{cases} \tag{20}$$
Suppose that:
(c1): $S$ and $T$ are two pseudocontractive operators with Lipschitz constants $L_1$ and $L_2$, respectively;
(c2): the operator $f$ is pseudomonotone on $H_1$, weakly sequentially continuous and $\kappa_1$-Lipschitz continuous on $C$;
(c3): the operator $g$ is pseudomonotone on $H_2$, weakly sequentially continuous and $\kappa_2$-Lipschitz continuous on $Q$;
(r1): $L_1 > 1$ and $0 < \underline{\beta} < \beta_k < \bar{\beta} < \alpha_k < \bar{\alpha} < \frac{1}{1 + \sqrt{1 + L_1^2}}$ $(k \ge 0)$;
(r2): $L_2 > 1$ and $0 < \underline{\zeta} < \zeta_k < \bar{\zeta} < \lambda_k < \bar{\lambda} < \frac{1}{1 + \sqrt{1 + L_2^2}}$ $(k \ge 0)$;
(r3): $\vartheta \in (0,1]$, $\delta \in (0,1]$, $\omega \in (0,1)$, $\mu \in (0,1)$ and $\varepsilon \in (0, 1/\|A\|^2)$.
Remark 1.
According to (19), the sequence $\{\gamma_k\}$ is monotonically decreasing. Moreover, by the $\kappa_1$-Lipschitz continuity of $f$, we obtain $\frac{\omega\|y_k - v_k\|}{\|f(y_k) - f(v_k)\|} \ge \frac{\omega}{\kappa_1}$. Thus, $\{\gamma_k\}$ has the lower bound $\min\{\gamma_0, \frac{\omega}{\kappa_1}\}$. Therefore, the limit $\lim_{k \to +\infty} \gamma_k$ exists. Similarly, the sequence $\{\tau_k\}$ is monotonically decreasing and has the lower bound $\min\{\tau_0, \frac{\mu}{\kappa_2}\}$, so the limit $\lim_{k \to +\infty} \tau_k$ exists.
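Before the convergence proof, a compact numerical sketch of Algorithm 1 may help fix ideas. All problem data below are toy choices, not from the paper: $S$ and $T$ are taken as identity maps, so steps (12) and (17) collapse to $v_k = x_k$ and $q_k = t_k$ and conditions (r1)–(r2) play no role, while $f$ and $g$ are strongly (hence pseudo-) monotone affine operators for which the origin solves the split problem (6).

```python
import numpy as np

# Illustrative sketch of Algorithm 1 on toy data (not from the paper).
# With S = T = I, (12) gives v_k = x_k and (17) gives q_k = t_k; the
# unique solution of the resulting split problem (6) is x* = 0.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 2))                     # bounded linear operator
M1 = np.array([[2.0, 1.0], [-1.0, 2.0]])        # monotone, kappa_1 = sqrt(5)
f = lambda x: M1 @ x
g = lambda y: y                                 # identity, kappa_2 = 1
proj_C = lambda x: np.clip(x, -1.0, 1.0)        # C = [-1, 1]^2
proj_Q = lambda y: np.clip(y, -2.0, 2.0)        # Q = [-2, 2]^3

theta, delta, omega, mu = 1.0, 1.0, 0.5, 0.5    # admissible under (r3)
eps = 0.9 / np.linalg.norm(A, 2) ** 2           # eps in (0, 1/||A||^2)
gam, tau = 1.0, 1.0                             # gamma_0, tau_0

x = np.array([0.9, -0.7])
for _ in range(3000):
    v = x                                                            # (12)
    y = proj_C(v - gam * f(v))                                       # (13)
    u = (1 - theta) * v + theta * y + theta * gam * (f(v) - f(y))    # (14)
    Au = A @ u
    w = proj_Q(Au - tau * g(Au))                                     # (15)
    t = (1 - delta) * Au + delta * w + delta * tau * (g(Au) - g(w))  # (16)
    q = t                                                            # (17)
    x = proj_C(u + eps * A.T @ (q - Au))                             # (18)
    # self-adaptive step-size updates (19) and (20)
    df = np.linalg.norm(f(y) - f(v))
    dg = np.linalg.norm(g(w) - g(Au))
    if df > 0:
        gam = min(gam, omega * np.linalg.norm(y - v) / df)
    if dg > 0:
        tau = min(tau, mu * np.linalg.norm(w - Au) / dg)
```

With these data the iterates contract to the solution $0$, and the self-adaptive rules settle near $\omega/\kappa_1$ and $\mu/\kappa_2$ without any prior knowledge of the Lipschitz constants.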
Now, we prove our main theorem.
Theorem 1.
Suppose that $\Gamma \ne \emptyset$ and that conditions (c1)–(c3) and (r1)–(r3) hold. Then the sequence $\{x_k\}$ generated by Algorithm 1 converges weakly to some point $p \in \Gamma$.
Proof. 
Let $x^* \in \Gamma$. Then, $x^* \in \operatorname{Fix}(S) \cap \operatorname{Sol}(C,f)$ and $Ax^* \in \operatorname{Fix}(T) \cap \operatorname{Sol}(Q,g)$. By (10) and (12), we have
$$\begin{aligned} \|v_k - x^*\|^2 &= \|(1-\beta_k)(x_k - x^*) + \beta_k(S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x^*)\|^2 \\ &= (1-\beta_k)\|x_k - x^*\|^2 + \beta_k\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x^*\|^2 \\ &\quad - (1-\beta_k)\beta_k\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2. \end{aligned} \tag{21}$$
Using Lemma 1, we obtain
$$\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x^*\|^2 \le \|x_k - x^*\|^2 + (1-\alpha_k)\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2. \tag{22}$$
Combining (21) and (22), we derive
$$\begin{aligned} \|v_k - x^*\|^2 &\le (1-\beta_k)\|x_k - x^*\|^2 + \beta_k(1-\alpha_k)\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2 \\ &\quad + \beta_k\|x_k - x^*\|^2 - (1-\beta_k)\beta_k\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2 \\ &= \|x_k - x^*\|^2 - \beta_k(\alpha_k - \beta_k)\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2 \\ &\le \|x_k - x^*\|^2 \quad (\text{by condition (r1)}). \end{aligned} \tag{23}$$
Similarly, according to (10), Lemma 1 and (17), we have the following estimate:
$$\|q_k - Ax^*\|^2 \le \|t_k - Ax^*\|^2 - (\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2 \le \|t_k - Ax^*\|^2 \quad (\text{by condition (r2)}). \tag{24}$$
Applying the inequality (11) to (13), we obtain
$$\langle y_k - v_k + \gamma_k f(v_k), y_k - x^* \rangle \le 0. \tag{25}$$
Since $x^* \in \operatorname{Sol}(C,f)$ and $y_k \in C$, we have $\langle f(x^*), y_k - x^* \rangle \ge 0$. This together with the pseudomonotonicity of $f$ implies that
$$\langle f(y_k), y_k - x^* \rangle \ge 0. \tag{26}$$
Based on (25) and (26), we get
$$\langle y_k - v_k, y_k - x^* \rangle + \gamma_k\langle f(v_k) - f(y_k), y_k - x^* \rangle \le 0.$$
It follows that
$$\frac{1}{2}\left(\|y_k - v_k\|^2 + \|y_k - x^*\|^2 - \|v_k - x^*\|^2\right) + \gamma_k\langle f(v_k) - f(y_k), y_k - x^* \rangle \le 0,$$
which yields
$$\|y_k - x^*\|^2 \le \|v_k - x^*\|^2 - 2\gamma_k\langle f(v_k) - f(y_k), y_k - x^* \rangle - \|y_k - v_k\|^2. \tag{27}$$
By (14), we have
$$\begin{aligned} \|u_k - x^*\|^2 &= \|(1-\vartheta)(v_k - x^*) + \vartheta(y_k - x^*) + \vartheta\gamma_k[f(v_k) - f(y_k)]\|^2 \\ &= \|(1-\vartheta)(v_k - x^*) + \vartheta(y_k - x^*)\|^2 + \vartheta^2\gamma_k^2\|f(v_k) - f(y_k)\|^2 \\ &\quad + 2\vartheta(1-\vartheta)\gamma_k\langle v_k - x^*, f(v_k) - f(y_k) \rangle + 2\vartheta^2\gamma_k\langle y_k - x^*, f(v_k) - f(y_k) \rangle. \end{aligned} \tag{28}$$
From (10), we obtain
$$\|(1-\vartheta)(v_k - x^*) + \vartheta(y_k - x^*)\|^2 = (1-\vartheta)\|v_k - x^*\|^2 + \vartheta\|y_k - x^*\|^2 - (1-\vartheta)\vartheta\|v_k - y_k\|^2. \tag{29}$$
Substituting (27) and (29) into (28), we deduce
$$\begin{aligned} \|u_k - x^*\|^2 &\le (1-\vartheta)\|v_k - x^*\|^2 + \vartheta\|v_k - x^*\|^2 - 2\vartheta\gamma_k\langle f(v_k) - f(y_k), y_k - x^* \rangle - \vartheta\|y_k - v_k\|^2 \\ &\quad - (1-\vartheta)\vartheta\|v_k - y_k\|^2 + \vartheta^2\gamma_k^2\|f(v_k) - f(y_k)\|^2 + 2\vartheta(1-\vartheta)\gamma_k\langle v_k - x^*, f(v_k) - f(y_k) \rangle \\ &\quad + 2\vartheta^2\gamma_k\langle y_k - x^*, f(v_k) - f(y_k) \rangle \\ &= \|v_k - x^*\|^2 - (2-\vartheta)\vartheta\|v_k - y_k\|^2 + \vartheta^2\gamma_k^2\|f(v_k) - f(y_k)\|^2 - 2\vartheta(1-\vartheta)\gamma_k\langle f(v_k) - f(y_k), y_k - v_k \rangle \\ &\le \|v_k - x^*\|^2 - (2-\vartheta)\vartheta\|v_k - y_k\|^2 + \vartheta^2\gamma_k^2\|f(v_k) - f(y_k)\|^2 + 2\vartheta(1-\vartheta)\gamma_k\|f(v_k) - f(y_k)\|\|y_k - v_k\|. \end{aligned} \tag{30}$$
Thanks to (19), $\|f(v_k) - f(y_k)\| \le \frac{\omega}{\gamma_{k+1}}\|y_k - v_k\|$. It follows from (30) that
$$\begin{aligned} \|u_k - x^*\|^2 &\le \|v_k - x^*\|^2 - (2-\vartheta)\vartheta\|v_k - y_k\|^2 + \vartheta^2\omega^2\frac{\gamma_k^2}{\gamma_{k+1}^2}\|y_k - v_k\|^2 + 2\vartheta(1-\vartheta)\omega\frac{\gamma_k}{\gamma_{k+1}}\|y_k - v_k\|^2 \\ &= \|v_k - x^*\|^2 - \vartheta\left(2 - \vartheta - \vartheta\omega^2\frac{\gamma_k^2}{\gamma_{k+1}^2} - 2(1-\vartheta)\omega\frac{\gamma_k}{\gamma_{k+1}}\right)\|y_k - v_k\|^2. \end{aligned} \tag{31}$$
By Remark 1, $\lim_{k\to+\infty}\frac{\gamma_k}{\gamma_{k+1}} = 1$. So,
$$\lim_{k\to+\infty}\left(2 - \vartheta - \vartheta\omega^2\frac{\gamma_k^2}{\gamma_{k+1}^2} - 2(1-\vartheta)\omega\frac{\gamma_k}{\gamma_{k+1}}\right) = 2 - \vartheta - \vartheta\omega^2 - 2(1-\vartheta)\omega = \vartheta(1-\omega)\left(\omega + \frac{2-\vartheta}{\vartheta}\right) > 0.$$
Then, there exist $\sigma > 0$ and $m_1$ such that $2 - \vartheta - \vartheta\omega^2\frac{\gamma_k^2}{\gamma_{k+1}^2} - 2(1-\vartheta)\omega\frac{\gamma_k}{\gamma_{k+1}} \ge \sigma$ when $k \ge m_1$. In combination with (31), we get
$$\|u_k - x^*\|^2 \le \|v_k - x^*\|^2 - \sigma\vartheta\|y_k - v_k\|^2. \tag{32}$$
This together with (23) implies that
$$\|u_k - x^*\|^2 \le \|x_k - x^*\|^2 - \beta_k(\alpha_k - \beta_k)\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2 - \sigma\vartheta\|y_k - v_k\|^2.$$
By the property (11) of $\operatorname{proj}_Q$ and (15), we have
$$\langle w_k - Au_k + \tau_k g(Au_k), w_k - Ax^* \rangle \le 0. \tag{33}$$
Since $Ax^* \in \operatorname{Sol}(Q,g)$ and $w_k \in Q$, we have $\langle g(Ax^*), w_k - Ax^* \rangle \ge 0$. By the pseudomonotonicity of $g$, we obtain
$$\langle g(w_k), w_k - Ax^* \rangle \ge 0. \tag{34}$$
Taking into account (33) and (34), we obtain
$$\langle w_k - Au_k, w_k - Ax^* \rangle + \tau_k\langle g(Au_k) - g(w_k), w_k - Ax^* \rangle \le 0,$$
which is equivalent to
$$\frac{1}{2}\left(\|w_k - Au_k\|^2 + \|w_k - Ax^*\|^2 - \|Au_k - Ax^*\|^2\right) + \tau_k\langle g(Au_k) - g(w_k), w_k - Ax^* \rangle \le 0.$$
It follows that
$$\|w_k - Ax^*\|^2 \le \|Au_k - Ax^*\|^2 - 2\tau_k\langle g(Au_k) - g(w_k), w_k - Ax^* \rangle - \|w_k - Au_k\|^2. \tag{35}$$
From (16), we obtain
$$\begin{aligned} \|t_k - Ax^*\|^2 &= \|(1-\delta)(Au_k - Ax^*) + \delta(w_k - Ax^*) + \delta\tau_k[g(Au_k) - g(w_k)]\|^2 \\ &= \|(1-\delta)(Au_k - Ax^*) + \delta(w_k - Ax^*)\|^2 + \delta^2\tau_k^2\|g(Au_k) - g(w_k)\|^2 \\ &\quad + 2\delta(1-\delta)\tau_k\langle Au_k - Ax^*, g(Au_k) - g(w_k) \rangle + 2\delta^2\tau_k\langle w_k - Ax^*, g(Au_k) - g(w_k) \rangle. \end{aligned} \tag{36}$$
By virtue of (10), we achieve
$$\|(1-\delta)(Au_k - Ax^*) + \delta(w_k - Ax^*)\|^2 = (1-\delta)\|Au_k - Ax^*\|^2 + \delta\|w_k - Ax^*\|^2 - (1-\delta)\delta\|Au_k - w_k\|^2. \tag{37}$$
Substituting (35) and (37) into (36), we obtain
$$\begin{aligned} \|t_k - Ax^*\|^2 &\le \|Au_k - Ax^*\|^2 - (2-\delta)\delta\|Au_k - w_k\|^2 + \delta^2\tau_k^2\|g(Au_k) - g(w_k)\|^2 \\ &\quad - 2\delta(1-\delta)\tau_k\langle w_k - Au_k, g(Au_k) - g(w_k) \rangle \\ &\le \|Au_k - Ax^*\|^2 - (2-\delta)\delta\|Au_k - w_k\|^2 + \delta^2\tau_k^2\|g(Au_k) - g(w_k)\|^2 \\ &\quad + 2\delta(1-\delta)\tau_k\|w_k - Au_k\|\|g(Au_k) - g(w_k)\|. \end{aligned} \tag{38}$$
Due to (20), we have
$$\|g(Au_k) - g(w_k)\| \le \frac{\mu}{\tau_{k+1}}\|Au_k - w_k\|.$$
This together with (38) implies that
$$\begin{aligned} \|t_k - Ax^*\|^2 &\le \|Au_k - Ax^*\|^2 - (2-\delta)\delta\|Au_k - w_k\|^2 + \delta^2\mu^2\frac{\tau_k^2}{\tau_{k+1}^2}\|Au_k - w_k\|^2 + 2\delta(1-\delta)\mu\frac{\tau_k}{\tau_{k+1}}\|Au_k - w_k\|^2 \\ &= \|Au_k - Ax^*\|^2 - \delta\left(2 - \delta - \delta\mu^2\frac{\tau_k^2}{\tau_{k+1}^2} - 2(1-\delta)\mu\frac{\tau_k}{\tau_{k+1}}\right)\|Au_k - w_k\|^2. \end{aligned} \tag{39}$$
By Remark 1, $\lim_{k\to+\infty}\frac{\tau_k}{\tau_{k+1}} = 1$ and hence
$$\lim_{k\to+\infty}\left(2 - \delta - \delta\mu^2\frac{\tau_k^2}{\tau_{k+1}^2} - 2(1-\delta)\mu\frac{\tau_k}{\tau_{k+1}}\right) = 2 - \delta - \delta\mu^2 - 2(1-\delta)\mu > 0.$$
So, there exist $\varrho > 0$ and $m_2$ such that
$$2 - \delta - \delta\mu^2\frac{\tau_k^2}{\tau_{k+1}^2} - 2(1-\delta)\mu\frac{\tau_k}{\tau_{k+1}} \ge \varrho$$
when $k \ge m_2$. In the light of (39), we have
$$\|t_k - Ax^*\|^2 \le \|Au_k - Ax^*\|^2 - \varrho\delta\|w_k - Au_k\|^2. \tag{40}$$
Owing to (24) and (40), we get
$$\|q_k - Ax^*\|^2 \le \|Au_k - Ax^*\|^2 - (\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2 - \varrho\delta\|w_k - Au_k\|^2. \tag{41}$$
Observe that
$$\langle u_k - x^*, A^*(q_k - Au_k) \rangle = \langle Au_k - Ax^*, q_k - Au_k \rangle = \frac{1}{2}\left[\|q_k - Ax^*\|^2 - \|Au_k - Ax^*\|^2\right] - \frac{1}{2}\|q_k - Au_k\|^2. \tag{42}$$
Combining (41) and (42), we acquire
$$\langle u_k - x^*, A^*(q_k - Au_k) \rangle \le -\frac{1}{2}\varrho\delta\|w_k - Au_k\|^2 - \frac{1}{2}\|q_k - Au_k\|^2 - \frac{1}{2}(\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2. \tag{43}$$
In view of (18), we have
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &= \|\operatorname{proj}_C[u_k + \varepsilon A^*(q_k - Au_k)] - \operatorname{proj}_C[x^*]\|^2 \\ &\le \|u_k - x^* + \varepsilon A^*(q_k - Au_k)\|^2 \\ &= \|u_k - x^*\|^2 + \varepsilon^2\|A^*(q_k - Au_k)\|^2 + 2\varepsilon\langle A^*(q_k - Au_k), u_k - x^* \rangle. \end{aligned}$$
It follows from (32) and (43) that
$$\begin{aligned} \|x_{k+1} - x^*\|^2 &\le \|u_k - x^*\|^2 + \varepsilon^2\|A\|^2\|q_k - Au_k\|^2 - \varepsilon\varrho\delta\|w_k - Au_k\|^2 - \varepsilon\|q_k - Au_k\|^2 \\ &\quad - \varepsilon(\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2 \\ &= \|u_k - x^*\|^2 - \varepsilon(1 - \varepsilon\|A\|^2)\|q_k - Au_k\|^2 - \varepsilon\varrho\delta\|w_k - Au_k\|^2 \\ &\quad - \varepsilon(\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2 \\ &\le \|x_k - x^*\|^2 - \varepsilon\varrho\delta\|w_k - Au_k\|^2 - \varepsilon(1 - \varepsilon\|A\|^2)\|q_k - Au_k\|^2 \\ &\quad - \beta_k(\alpha_k - \beta_k)\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2 - \sigma\vartheta\|y_k - v_k\|^2 \\ &\quad - \varepsilon(\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2 \\ &\le \|x_k - x^*\|^2, \end{aligned} \tag{44}$$
which implies that $\lim_{k\to+\infty}\|x_k - x^*\|$ exists. Since $\|x_{k+1} - x^*\| \le \|u_k - x^*\| \le \|v_k - x^*\| \le \|x_k - x^*\|$, we deduce
$$\lim_{k\to+\infty}\|u_k - x^*\| = \lim_{k\to+\infty}\|v_k - x^*\| = \lim_{k\to+\infty}\|x_k - x^*\|. \tag{45}$$
So, the sequences $\{x_k\}$, $\{u_k\}$ and $\{v_k\}$ are all bounded.
By virtue of (44), we derive
$$\begin{aligned} &\beta_k(\alpha_k - \beta_k)\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|^2 + \sigma\vartheta\|y_k - v_k\|^2 + \varepsilon\varrho\delta\|w_k - Au_k\|^2 \\ &\quad + \varepsilon(1 - \varepsilon\|A\|^2)\|q_k - Au_k\|^2 + \varepsilon(\lambda_k - \zeta_k)\zeta_k\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|^2 \\ &\le \|x_k - x^*\|^2 - \|x_{k+1} - x^*\|^2 \to 0, \end{aligned}$$
which implies that
$$\lim_{k\to+\infty}\|q_k - Au_k\| = 0, \tag{46}$$
$$\lim_{k\to+\infty}\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\| = 0, \tag{47}$$
$$\lim_{k\to+\infty}\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\| = 0, \tag{48}$$
$$\lim_{k\to+\infty}\|y_k - v_k\| = 0, \tag{49}$$
$$\lim_{k\to+\infty}\|w_k - Au_k\| = 0. \tag{50}$$
By the $L_1$-Lipschitz continuity of $S$, we have
$$\begin{aligned} \|S(x_k) - x_k\| &\le \|S(x_k) - S[(1-\alpha_k)x_k + \alpha_k S(x_k)]\| + \|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\| \\ &\le L_1\alpha_k\|S(x_k) - x_k\| + \|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|. \end{aligned}$$
It follows that
$$\|S(x_k) - x_k\| \le \frac{1}{1 - L_1\alpha_k}\|S[(1-\alpha_k)x_k + \alpha_k S(x_k)] - x_k\|.$$
This together with (47) implies that
$$\lim_{k\to+\infty}\|S(x_k) - x_k\| = 0. \tag{51}$$
From (12) and (47), we conclude that $\|x_k - v_k\| \to 0$.
Next, we show that $\omega_w(x_k) \subset \Gamma$. Pick any $p \in \omega_w(x_k)$. Then, there exists a subsequence $\{x_{k_i}\}$ of $\{x_k\}$ such that $x_{k_i} \rightharpoonup p$ as $i \to +\infty$. In addition, $y_{k_i} \rightharpoonup p$ and $v_{k_i} \rightharpoonup p$ as $i \to +\infty$.
First, we prove that $p \in \operatorname{Sol}(C,f)$. In view of (11) and $y_{k_i} = \operatorname{proj}_C[v_{k_i} - \gamma_{k_i}f(v_{k_i})]$, we achieve
$$\langle y_{k_i} - v_{k_i} + \gamma_{k_i}f(v_{k_i}), y_{k_i} - u \rangle \le 0, \quad \text{for all } u \in C.$$
It follows that
$$\frac{1}{\gamma_{k_i}}\langle v_{k_i} - y_{k_i}, u - y_{k_i} \rangle + \langle f(v_{k_i}), y_{k_i} - v_{k_i} \rangle \le \langle f(v_{k_i}), u - v_{k_i} \rangle, \quad \text{for all } u \in C. \tag{52}$$
Noting that, from (49), we have $\lim_{i\to+\infty}\|v_{k_i} - y_{k_i}\| = 0$, while $\{y_{k_i}\}$ and $\{f(v_{k_i})\}$ are bounded, by (52) we deduce
$$\liminf_{i\to+\infty}\langle f(v_{k_i}), u - v_{k_i} \rangle \ge 0, \quad \text{for all } u \in C. \tag{53}$$
Let $\{\epsilon_j\}$ be a sequence of positive real numbers satisfying $\lim_{j\to+\infty}\epsilon_j = 0$. On account of (53), for each $\epsilon_j$, there exists the smallest positive integer $n_j$ such that
$$\langle f(v_{k_{i_j}}), u - v_{k_{i_j}} \rangle + \epsilon_j \ge 0, \quad \text{for all } j \ge n_j. \tag{54}$$
Moreover, for each $j > 0$, $f(v_{k_{i_j}}) \ne 0$. Setting $\varphi(v_{k_{i_j}}) = \frac{f(v_{k_{i_j}})}{\|f(v_{k_{i_j}})\|^2}$, we have $\langle f(v_{k_{i_j}}), \varphi(v_{k_{i_j}}) \rangle = 1$. From (54), we have
$$\langle f(v_{k_{i_j}}), u + \epsilon_j\varphi(v_{k_{i_j}}) - v_{k_{i_j}} \rangle \ge 0.$$
By the pseudomonotonicity of $f$, we get
$$\langle f(u + \epsilon_j\varphi(v_{k_{i_j}})), u + \epsilon_j\varphi(v_{k_{i_j}}) - v_{k_{i_j}} \rangle \ge 0,$$
which implies that
$$\langle f(u), u - v_{k_{i_j}} \rangle \ge \langle f(u) - f(u + \epsilon_j\varphi(v_{k_{i_j}})), u + \epsilon_j\varphi(v_{k_{i_j}}) - v_{k_{i_j}} \rangle - \langle f(u), \epsilon_j\varphi(v_{k_{i_j}}) \rangle. \tag{55}$$
Since $v_{k_{i_j}} \rightharpoonup p$ and $f$ is weakly sequentially continuous, we have $f(v_{k_{i_j}}) \rightharpoonup f(p)$, and hence, by the weak lower semicontinuity of the norm,
$$\liminf_{j\to+\infty}\|f(v_{k_{i_j}})\| \ge \|f(p)\| > 0$$
(if $f(p) = 0$, then $p \in \operatorname{Sol}(C,f)$ holds trivially). Then,
$$\lim_{j\to+\infty}\|\epsilon_j\varphi(v_{k_{i_j}})\| = \lim_{j\to+\infty}\frac{\epsilon_j}{\|f(v_{k_{i_j}})\|} = 0.$$
This together with (55) implies that
$$\langle f(u), u - p \rangle \ge 0, \quad \text{for all } u \in C. \tag{56}$$
By Lemma 2 and (56), we conclude that $p \in \operatorname{Sol}(C,f)$.
On the other hand, by (51), $\|S(x_{k_i}) - x_{k_i}\| \to 0$ as $i \to +\infty$. This together with $x_{k_i} \rightharpoonup p$ and Lemma 3 implies that $p \in \operatorname{Fix}(S)$. Therefore, $p \in \operatorname{Fix}(S) \cap \operatorname{Sol}(C,f)$.
Next, we show that $Ap \in \operatorname{Fix}(T) \cap \operatorname{Sol}(Q,g)$. Observe that
$$\begin{aligned} \|T(t_k) - t_k\| &\le \|T(t_k) - T[(1-\lambda_k)t_k + \lambda_k T(t_k)]\| + \|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\| \\ &\le L_2\lambda_k\|T(t_k) - t_k\| + \|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|. \end{aligned}$$
It follows that
$$\|T(t_k) - t_k\| \le \frac{1}{1 - L_2\lambda_k}\|T[(1-\lambda_k)t_k + \lambda_k T(t_k)] - t_k\|.$$
This together with (48) implies that
$$\lim_{k\to+\infty}\|T(t_k) - t_k\| = 0. \tag{57}$$
From (14), $u_{k_i} \rightharpoonup p$ as $i \to +\infty$. Thanks to (17) and (48), we have $\|q_{k_i} - t_{k_i}\| \to 0$ as $i \to +\infty$. Combining this with (46), we deduce that $t_{k_i} \rightharpoonup Ap$. Applying Lemma 3 to (57), we obtain that $Ap \in \operatorname{Fix}(T)$.
Next, we show that $Ap \in \operatorname{Sol}(Q,g)$. In view of (11) and $w_{k_i} = \operatorname{proj}_Q[Au_{k_i} - \tau_{k_i}g(Au_{k_i})]$, we achieve
$$\langle w_{k_i} - Au_{k_i} + \tau_{k_i}g(Au_{k_i}), w_{k_i} - v \rangle \le 0, \quad \text{for all } v \in Q.$$
It follows that
$$\frac{1}{\tau_{k_i}}\langle Au_{k_i} - w_{k_i}, v - w_{k_i} \rangle + \langle g(Au_{k_i}), w_{k_i} - Au_{k_i} \rangle \le \langle g(Au_{k_i}), v - Au_{k_i} \rangle, \quad \text{for all } v \in Q. \tag{58}$$
Noting that, from (50), we have $\lim_{i\to+\infty}\|w_{k_i} - Au_{k_i}\| = 0$, by (58) we deduce
$$\liminf_{i\to+\infty}\langle g(Au_{k_i}), v - Au_{k_i} \rangle \ge 0, \quad \text{for all } v \in Q. \tag{59}$$
Choose a sequence of positive real numbers $\{\upsilon_j\}$ such that $\lim_{j\to+\infty}\upsilon_j = 0$. In terms of (59), for each $\upsilon_j$, there exists the smallest positive integer $m_j$ such that
$$\langle g(Au_{k_{i_j}}), v - Au_{k_{i_j}} \rangle + \upsilon_j \ge 0, \quad \text{for all } j \ge m_j. \tag{60}$$
Moreover, for each $j > 0$, $g(Au_{k_{i_j}}) \ne 0$. Setting $\psi(u_{k_{i_j}}) = \frac{g(Au_{k_{i_j}})}{\|g(Au_{k_{i_j}})\|^2}$, we have $\langle g(Au_{k_{i_j}}), \psi(u_{k_{i_j}}) \rangle = 1$. From (60), we have
$$\langle g(Au_{k_{i_j}}), v + \upsilon_j\psi(u_{k_{i_j}}) - Au_{k_{i_j}} \rangle \ge 0.$$
By the pseudomonotonicity of $g$, we get
$$\langle g(v + \upsilon_j\psi(u_{k_{i_j}})), v + \upsilon_j\psi(u_{k_{i_j}}) - Au_{k_{i_j}} \rangle \ge 0,$$
which implies that
$$\langle g(v), v - Au_{k_{i_j}} \rangle \ge \langle g(v) - g(v + \upsilon_j\psi(u_{k_{i_j}})), v + \upsilon_j\psi(u_{k_{i_j}}) - Au_{k_{i_j}} \rangle - \langle g(v), \upsilon_j\psi(u_{k_{i_j}}) \rangle. \tag{61}$$
Since $Au_{k_{i_j}} \rightharpoonup Ap$ and $g$ is weakly sequentially continuous, we have $g(Au_{k_{i_j}}) \rightharpoonup g(Ap)$, and hence, by the weak lower semicontinuity of the norm,
$$\liminf_{j\to+\infty}\|g(Au_{k_{i_j}})\| \ge \|g(Ap)\| > 0$$
(if $g(Ap) = 0$, then $Ap \in \operatorname{Sol}(Q,g)$ holds trivially). Then,
$$\lim_{j\to+\infty}\|\upsilon_j\psi(u_{k_{i_j}})\| = \lim_{j\to+\infty}\frac{\upsilon_j}{\|g(Au_{k_{i_j}})\|} = 0.$$
This together with (61) implies that
$$\langle g(v), v - Ap \rangle \ge 0, \quad \text{for all } v \in Q. \tag{62}$$
By Lemma 2 and (62), we conclude that $Ap \in \operatorname{Sol}(Q,g)$. So, $p \in \Gamma$ and $\omega_w(x_k) \subset \Gamma$.
Finally, we show that the entire sequence $\{x_k\}$ converges weakly to $p$. As a matter of fact, we have the following facts:
(i) for all $x^* \in \Gamma$, $\lim_{k\to+\infty}\|x_k - x^*\|$ exists;
(ii) $\omega_w(x_k) \subset \Gamma$;
(iii) $p \in \omega_w(x_k)$.
Thus, by Lemma 4, we deduce that the sequence $\{x_k\}$ converges weakly to $p \in \Gamma$. This completes the proof. □
Corollary 1.
Suppose that $\Gamma_1 \ne \emptyset$. Then the sequence $\{x_k\}$ generated by Algorithm 2 converges weakly to some point $p_1 \in \Gamma_1$.
Algorithm 2: Select an initial point $x_0 \in C$. Set $k = 0$.
Step 1. Assume that the present iterate $x_k$ and the step-sizes $\gamma_k$ and $\tau_k$ are given. Compute
$$y_k = \operatorname{proj}_C[x_k - \gamma_k f(x_k)], \tag{63}$$
$$u_k = (1-\vartheta)x_k + \vartheta y_k + \vartheta\gamma_k[f(x_k) - f(y_k)], \tag{64}$$
$$w_k = \operatorname{proj}_Q[Au_k - \tau_k g(Au_k)], \tag{65}$$
$$t_k = (1-\delta)Au_k + \delta w_k + \delta\tau_k[g(Au_k) - g(w_k)]. \tag{66}$$
Step 2. Compute the next iterate $x_{k+1}$ by
$$x_{k+1} = \operatorname{proj}_C[u_k + \varepsilon A^*(t_k - Au_k)]. \tag{67}$$
Step 3. Increase $k$ by 1 and go back to Step 1. Meanwhile, update
$$\gamma_{k+1} = \begin{cases} \min\left\{\gamma_k, \dfrac{\omega\|y_k - x_k\|}{\|f(y_k) - f(x_k)\|}\right\}, & f(y_k) \ne f(x_k), \\ \gamma_k, & \text{else}, \end{cases} \tag{68}$$
and
$$\tau_{k+1} = \begin{cases} \min\left\{\tau_k, \dfrac{\mu\|w_k - Au_k\|}{\|g(w_k) - g(Au_k)\|}\right\}, & g(w_k) \ne g(Au_k), \\ \tau_k, & \text{else}. \end{cases} \tag{69}$$

4. Application to Split Pseudoconvex Optimization Problems and Fixed Point Problems

In this section, we apply Algorithm 1 to solve split pseudoconvex optimization problems and fixed point problems.
Let $\mathbb{R}^n$ be the Euclidean space and let $C$ be a closed convex set in $\mathbb{R}^n$. Recall that a differentiable function $F : \mathbb{R}^n \to \mathbb{R}$ is said to be pseudoconvex on $C$ if, for every pair of distinct points $x, y \in C$,
$$\nabla F(x)^T(y - x) \ge 0 \implies F(y) \ge F(x).$$
Now, we consider the following optimization problem:
$$\min F(x) \quad \text{subject to } x \in C, \tag{70}$$
where $F(x)$ is pseudoconvex and twice continuously differentiable.
Denote by $\operatorname{SOP}(C,F)$ the solution set of the optimization problem (70).
The following lemma reveals the relationship between the variational inequality and the pseudoconvex optimization problem.
Lemma 5
([53]). Suppose that $F : \mathbb{R}^n \to \mathbb{R}$ is differentiable and pseudoconvex on $C$. Then $x^* \in C$ satisfies
$$\nabla F(x^*)^T(x - x^*) \ge 0 \quad \text{for all } x \in C$$
if and only if $x^*$ is a minimum of $F(x)$ in $C$.
Let $\mathbb{R}^n$ and $\mathbb{R}^m$ be two Euclidean spaces. Let $C \subset \mathbb{R}^n$ and $Q \subset \mathbb{R}^m$ be two nonempty closed convex sets. Let $A$ be a given $m \times n$ real matrix. Let $S : C \to C$ and $T : Q \to Q$ be two pseudocontractive operators with Lipschitz constants $L_1$ and $L_2$, respectively. Let $F : \mathbb{R}^n \to \mathbb{R}$ be a differentiable function with $\kappa_1$-Lipschitz continuous gradient which is also pseudoconvex on $C$. Let $G : \mathbb{R}^m \to \mathbb{R}$ be a differentiable function with $\kappa_2$-Lipschitz continuous gradient which is also pseudoconvex on $Q$.
Consider the following split problem of finding a point $x^* \in C$ such that
$$x^* \in \operatorname{Fix}(S) \cap \operatorname{SOP}(C,F) \quad \text{and} \quad Ax^* \in \operatorname{Fix}(T) \cap \operatorname{SOP}(Q,G). \tag{71}$$
The solution set of (71) is denoted by $\Gamma_2$, i.e.,
$$\Gamma_2 = \{x^* \mid x^* \in \operatorname{Fix}(S) \cap \operatorname{SOP}(C,F), \; Ax^* \in \operatorname{Fix}(T) \cap \operatorname{SOP}(Q,G)\}.$$
Next, we introduce an iterative algorithm for solving the split problem (71).
Let { α k } , { β k } , { ζ k } and { λ k } be four real number sequences. Let ϑ , δ , ω , μ and ε be five constants. Let γ 0 and τ 0 be two positive constants.
Theorem 2.
Suppose that $\Gamma_2 \ne \emptyset$ and the conditions (r1)–(r3) hold. Then the sequence $\{x_k\}$ generated by Algorithm 3 converges to some point $p \in \Gamma_2$.
Algorithm 3: Select an initial point $x_0 \in C$. Set $k = 0$.
Step 1. Assume that the present iterate $x_k$ and the step-sizes $\gamma_k$ and $\tau_k$ are given. Compute
$$\begin{aligned} v_k &= (1-\beta_k)x_k + \beta_k S[(1-\alpha_k)x_k + \alpha_k S(x_k)], \\ y_k &= \operatorname{proj}_C[v_k - \gamma_k\nabla F(v_k)], \\ u_k &= (1-\vartheta)v_k + \vartheta y_k + \vartheta\gamma_k[\nabla F(v_k) - \nabla F(y_k)], \\ w_k &= \operatorname{proj}_Q[Au_k - \tau_k\nabla G(Au_k)], \\ t_k &= (1-\delta)Au_k + \delta w_k + \delta\tau_k[\nabla G(Au_k) - \nabla G(w_k)], \\ q_k &= (1-\zeta_k)t_k + \zeta_k T[(1-\lambda_k)t_k + \lambda_k T(t_k)]. \end{aligned}$$
Step 2. Compute the next iterate $x_{k+1}$ by
$$x_{k+1} = \operatorname{proj}_C[u_k + \varepsilon A^T(q_k - Au_k)].$$
Step 3. Increase $k$ by 1 and go back to Step 1. Meanwhile, update
$$\gamma_{k+1} = \begin{cases} \min\left\{\gamma_k, \dfrac{\omega\|y_k - v_k\|}{\|\nabla F(y_k) - \nabla F(v_k)\|}\right\}, & \nabla F(y_k) \ne \nabla F(v_k), \\ \gamma_k, & \text{else}, \end{cases}$$
and
$$\tau_{k+1} = \begin{cases} \min\left\{\tau_k, \dfrac{\mu\|w_k - Au_k\|}{\|\nabla G(w_k) - \nabla G(Au_k)\|}\right\}, & \nabla G(w_k) \ne \nabla G(Au_k), \\ \tau_k, & \text{else}. \end{cases}$$

5. Concluding Remarks

In this paper, we studied the split problem of fixed points of two pseudocontractive operators and variational inequalities of two pseudomonotone operators in Hilbert spaces. By using self-adaptive techniques, we constructed a Tseng-type iterative algorithm for solving this split problem. We proved that the proposed algorithm converges weakly to a solution of the split problem under some additional conditions imposed on the operators and the parameters. Finally, we applied our algorithm to solve split pseudoconvex optimization problems and fixed point problems.

Author Contributions

Both the authors have contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

Li-Jun Zhu was supported by the National Natural Science Foundation of China [grant number 11861003], the Natural Science Foundation of Ningxia province [grant numbers NZ17015, NXYLXK2017B09]. Yeong-Cheng Liou was partially supported by MOST 109-2410-H-037-010 and Kaohsiung Medical University Research Foundation.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Glowinski, R. Numerical Methods for Nonlinear Variational Problems; Springer: New York, NY, USA, 1984.
  2. Berinde, V.; Păcurar, M. Kannan's fixed point approximation for solving split feasibility and variational inequality problems. J. Comput. Appl. Math. 2021, 386, 113217.
  3. Goldstein, A.A. Convex programming in Hilbert space. Bull. Am. Math. Soc. 1964, 70, 709–711.
  4. Ceng, L.-C.; Petrușel, A.; Yao, J.-C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502.
  5. Zhao, X.; Köbis, M.A.; Yao, Y.; Yao, J.-C. A projected subgradient method for nondifferentiable quasiconvex multiobjective optimization problems. J. Optim. Theory Appl. 2021, in press.
  6. Cho, S.Y.; Qin, X.; Yao, J.C.; Yao, Y. Viscosity approximation splitting methods for monotone and nonexpansive operators in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 251–264.
  7. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
  8. Ceng, L.-C.; Petruşel, A.; Yao, J.-C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
  9. Dong, Q.L.; Peng, Y.; Yao, Y. Alternated inertial projection methods for the split equality problem. J. Nonlinear Convex Anal. 2021, 22, 53–67.
  10. Yao, Y.; Li, H.; Postolache, M. Iterative algorithms for split equilibrium problems of monotone operators and fixed point problems of pseudo-contractions. Optimization 2020, 1–19.
  11. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416.
  12. Zegeye, H.; Shahzad, N.; Yao, Y. Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 2013, 64, 453–471.
  13. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
  14. Chen, C.; Ma, S.; Yang, J. A general inertial proximal point algorithm for mixed variational inequality problem. SIAM J. Optim. 2015, 25, 2120–2142.
  15. Yao, Y.; Postolache, M.; Yao, J.C. Iterative algorithms for the generalized variational inequalities. UPB Sci. Bull. Ser. A 2019, 81, 3–16.
  16. Bao, T.Q.; Khanh, P.Q. A projection-type algorithm for pseudomonotone nonlipschitzian multivalued variational inequalities. Nonconvex Optim. Appl. 2006, 77, 113–129.
  17. Wang, X.; Li, S.; Kou, X. An extension of subgradient method for variational inequality problems in Hilbert space. Abstr. Appl. Anal. 2013, 2013, 1–7.
  18. Zhang, C.; Zhu, Z.; Yao, Y.; Liu, Q. Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities. Optimization 2019, 68, 2297–2316.
  19. Maingé, P.-E. Strong convergence of projected reflected gradient methods for variational inequalities. Fixed Point Theory 2018, 19, 659–680.
  20. Malitsky, Y. Proximal extrapolated gradient methods for variational inequalities. Optim. Methods Softw. 2018, 33, 140–164.
  21. Yao, Y.; Postolache, M.; Yao, J.C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
  22. Abbas, M.; Ibrahim, Y.; Khan, A.R.; De La Sen, M. Strong convergence of a system of generalized mixed equilibrium problem, split variational inclusion problem and fixed point problem in Banach spaces. Symmetry 2019, 11, 722.
  23. Hammad, H.A.; Rehman, H.U.; De La Sen, M. Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications. Math. Probl. Eng. 2020, 2020, 7487383.
  24. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
  25. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  26. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 1976, 12, 747–756.
  27. Zhao, X.; Yao, Y. Modified extragradient algorithms for solving monotone variational inequalities and fixed point problems. Optimization 2020, 69, 1987–2002.
  28. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified extragradient-like algorithms with new stepsizes for variational inequalities. Comput. Optim. Appl. 2019, 73, 913–932.
  29. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
  30. Thong, D.V.; Gibali, A. Extragradient methods for solving non-Lipschitzian pseudo-monotone variational inequalities. J. Fixed Point Theory Appl. 2019, 21, 20.
  31. Yao, Y.; Postolache, M.; Yao, J.C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 82, 3–12.
  32. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
  33. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  34. Iusem, A.N. An iterative algorithm for the variational inequality problem. Comput. Appl. Math. 1994, 13, 103–114.
  35. He, B.; He, X.-Z.; Liu, H.X.; Wu, T. Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 2009, 196, 43–48.
  36. He, B.S.; Yang, H.; Wang, S.L. Alternating Direction Method with Self-Adaptive Penalty Parameters for Monotone Variational Inequalities. J. Optim. Theory Appl. 2000, 106, 337–356. [Google Scholar] [CrossRef]
  37. Yusuf, S.; Ur Rehman, H.; Gibali, A. A self-adaptive extragradient-CQ method for a class of bilevel split equilibrium problem with application to Nash Cournot oligopolistic electricity market models. Comput. Appl. Math. 2020, 39, 293. [Google Scholar]
  38. Yang, J.; Liu, H. A Modified Projected Gradient Method for Monotone Variational Inequalities. J. Optim. Theory Appl. 2018, 179, 197–211. [Google Scholar] [CrossRef]
  39. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600. [Google Scholar]
  40. Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007. [Google Scholar] [CrossRef]
  41. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the Split Variational Inequality Problem. Numer. Algorithms 2012, 59, 301–323. [Google Scholar] [CrossRef]
  42. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef] [Green Version]
  43. He, Z.; Du, W.S. Nonlinear algorithms approach to split common solution problems. Fixed Point Theory Appl. 2012, 2012, 130. [Google Scholar] [CrossRef] [Green Version]
  44. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2020, 69, 269–281. [Google Scholar] [CrossRef]
  45. Xu, H.-K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
  46. Yao, Y.; Shehu, Y.; Li, X.-H.; Dong, Q.-L. A method with inertial extrapolation step for split monotone inclusion problems. Optimization 2020, 70, 741–761. [Google Scholar] [CrossRef]
  47. Zhao, X.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, 82, 43–52. [Google Scholar]
  48. Yao, Y.; Qin, X.; Yao, J.C. Projection methods for firmly type nonexpansive operators. J. Nonlinear Convex Anal. 2018, 19, 407–415. [Google Scholar]
  49. Yao, Y.; Shahzad, N.; Ya, J.C. Convergence of Tseng-type self-adaptive algorithms for variational inequalities and fixed point problems. Carpathian J. Math. 2021, in press. [Google Scholar]
  50. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295. [Google Scholar] [CrossRef]
  51. Zhou, H. Strong convergence of an explicit iterative algorithm for continuous pseudo-contractions in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2009, 70, 4039–4046. [Google Scholar] [CrossRef]
  52. Abbas, B.; Attouch, H.; Svaiter, B.F. Newton–Like Dynamics and Forward-Backward Methods for Structured Monotone Inclusions in Hilbert Spaces. J. Optim. Theory Appl. 2013, 161, 331–360. [Google Scholar] [CrossRef]
  53. Harker, P.T.; Pang, J.-S. Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications. Math. Program. 1990, 48, 161–220. [Google Scholar] [CrossRef]


Zhu, L.-J.; Liou, Y.-C. A Tseng-Type Algorithm with Self-Adaptive Techniques for Solving the Split Problem of Fixed Points and Pseudomonotone Variational Inequalities in Hilbert Spaces. Axioms 2021, 10, 152. https://doi.org/10.3390/axioms10030152


