Article

The Modified Inertial Iterative Algorithm for Solving Split Variational Inclusion Problem for Multi-Valued Quasi Nonexpansive Mappings with Some Applications

by
Pawicha Phairatchatniyom
1,
Poom Kumam
1,2,*,
Yeol Je Cho
3,4,
Wachirapong Jirakitpuwapat
1 and
Kanokwan Sitthithakerngkiet
5
1
KMUTT Fixed Point Research Laboratory, Room SCL 802 Fixed Point Laboratory, Science Laboratory Building, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi, 126 Pracha Uthit Rd., Bang Mod, Thung Khru, Bangkok 10140, Thailand
2
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
3
Department of Mathematics Education, Gyeongsang National University, Jinju 52828, Korea
4
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
5
Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(6), 560; https://doi.org/10.3390/math7060560
Submission received: 28 April 2019 / Revised: 23 May 2019 / Accepted: 28 May 2019 / Published: 19 June 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

Based on the very recent work by Shehu and Agbebaku in Comput. Appl. Math. 2017, we introduce an extension of their iterative algorithm by combining it with inertial extrapolation for solving split inclusion problems and fixed point problems. Under suitable conditions, we prove that the proposed algorithm converges strongly to a common element of the solution sets of the split inclusion problem and the fixed point problem.

1. Introduction

The split monotone variational inclusion problem (SMVIP) was introduced by Moudafi [1]. This problem is as follows:
Find a point $x^* \in H_1$ such that $0 \in \hat{f}(x^*) + B_1(x^*)$
and such that
$y^* = Ax^* \in H_2$ solves $0 \in \hat{g}(y^*) + B_2(y^*)$,
where $0$ is the zero vector, $H_1$ and $H_2$ are real Hilbert spaces, $\hat{f}$ and $\hat{g}$ are given single-valued operators defined on $H_1$ and $H_2$, respectively, $B_1$ and $B_2$ are multi-valued maximal monotone mappings defined on $H_1$ and $H_2$, respectively, and $A$ is a bounded linear operator from $H_1$ to $H_2$.
It is well known (see [1]) that
$$0 \in \hat{f}(x^*) + B_1(x^*) \iff x^* = J_\lambda^{B_1}\big(x^* - \lambda \hat{f}(x^*)\big),$$
and that
$$0 \in \hat{g}(y^*) + B_2(y^*) \iff y^* = J_\lambda^{B_2}\big(y^* - \lambda \hat{g}(y^*)\big), \qquad y^* = Ax^*,$$
where $J_\lambda^{B_1} := (I + \lambda B_1)^{-1}$ and $J_\lambda^{B_2} := (I + \lambda B_2)^{-1}$ are the resolvent operators of $B_1$ and $B_2$, respectively, with $\lambda > 0$. Note that $J_\lambda^{B_1}$ and $J_\lambda^{B_2}$ are nonexpansive and, in fact, firmly nonexpansive.
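For a concrete feel for the resolvent, the following minimal Python sketch (our own illustration, not part of the paper) evaluates $J_\lambda^{B}$ for $B = \partial|\cdot|$ acting componentwise, which is the familiar soft-thresholding map, and numerically checks the firm nonexpansiveness inequality $\langle Jx - Jy,\, x - y\rangle \ge \|Jx - Jy\|^2$ at random points.

```python
import numpy as np

def resolvent_abs(x, lam):
    """Resolvent J_lam^B of B = subdifferential of |.| (maximal monotone on R^n,
    applied componentwise); this is the soft-thresholding operator.  Illustrative
    example, not code from the paper."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Firm nonexpansiveness check: <Jx - Jy, x - y> >= ||Jx - Jy||^2.
rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
Jx, Jy = resolvent_abs(x, 0.5), resolvent_abs(y, 0.5)
assert np.dot(Jx - Jy, x - y) >= np.dot(Jx - Jy, Jx - Jy) - 1e-12
```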
Recently, Shehu and Agbebaku [2] proposed an algorithm with a suitably selected step size and proved a strong convergence theorem for the split inclusion problem and the fixed point problem for multi-valued quasi-nonexpansive mappings. In [1], Moudafi pointed out that the problem (SMVIP) [3,4,5] includes, as special cases, the split variational inequality problem [6], the split zero problem, the split common fixed point problem [7,8,9] and the split feasibility problem [10,11], which have already been studied and used in image processing and recovery [12], sensor networks in computerized tomography and data compression for models of inverse problems [13].
If $\hat{f} \equiv 0$ and $\hat{g} \equiv 0$ in the problem (SMVIP), then the problem reduces to the split variational inclusion problem (SVIP) as follows:
Find a point $x^* \in H_1$ such that $0 \in B_1(x^*)$
and such that
$y^* = Ax^* \in H_2$ solves $0 \in B_2(y^*)$.
Note that the problem (SVIP) is equivalent to the following problem:
Find a point $x^* \in H_1$ such that $x^* = J_\lambda^{B_1}(x^*)$ and $y^* = J_\lambda^{B_2}(y^*)$, $y^* = Ax^*$,
for some $\lambda > 0$.
We denote the solution set of the problem (SVIP) by $\Omega$, i.e.,
$$\Omega = \big\{ x^* \in H_1 : 0 \in B_1(x^*) \text{ and } 0 \in B_2(y^*),\ y^* = Ax^* \big\}.$$
Many works have been devoted to solving the split variational inclusion problem (SVIP). Byrne et al. [7] introduced the following iterative method $\{x_n\}$: for any $x_0 \in H_1$,
$$x_{n+1} = J_\lambda^{B_1}\big(x_n + \gamma A^*(J_\lambda^{B_2} - I)Ax_n\big)$$
for each $n \ge 0$, where $A^*$ is the adjoint of the bounded linear operator $A$, $\gamma \in (0, 2/L)$ with $L = \|A^*A\|$ and $\lambda > 0$. They established both weak and strong convergence of this iterative method for solving the problem (SVIP).
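As an illustration only (not an experiment from the paper), the sketch below instantiates iteration (5) for the special case in which $B_1$ and $B_2$ are the normal cone operators of closed convex sets, so the resolvents reduce to metric projections and the scheme becomes a CQ-type iteration; the sets, the matrix $A$ and the iteration count are hypothetical choices.

```python
import numpy as np

# x_{n+1} = P_C(x_n + gamma * A^T (P_Q - I) A x_n) with C a box and Q the unit ball.
proj_C = lambda x: np.clip(x, -1.0, 1.0)              # P_C, C = [-1, 1]^6
proj_Q = lambda y: y / max(1.0, np.linalg.norm(y))    # P_Q, Q = closed unit ball

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 6))
L = np.linalg.norm(A.T @ A, 2)                        # L = ||A^* A||
gamma = 1.0 / L                                       # gamma in (0, 2/L)

x = rng.normal(size=6)
for _ in range(200):
    Ax = A @ x
    x = proj_C(x + gamma * A.T @ (proj_Q(Ax) - Ax))
```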
Later, inspired by the above iterative algorithm, many authors have extended the algorithm $\{x_n\}$ generated by (5). In particular, Kazmi and Rizvi [4] proposed an algorithm $\{x_n\}$ for approximating a solution of the problem (SVIP) as follows:
$$u_n = J_\lambda^{B_1}\big(x_n + \gamma A^*(J_\lambda^{B_2} - I)Ax_n\big), \qquad x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) S u_n$$
for each $n \ge 0$, where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\lambda > 0$, $\gamma \in (0, 1/L)$, $L$ is the spectral radius of the operator $A^*A$, $f : H_1 \to H_1$ is a contraction and $S : H_1 \to H_1$ is a nonexpansive mapping. In 2015, Sitthithakerngkiet et al. [5] proposed an algorithm $\{x_n\}$ for solving the problem (SVIP) and the fixed point problem (FPP) for a countable family of nonexpansive mappings as follows:
$$y_n = J_\lambda^{B_1}\big(x_n + \gamma A^*(J_\lambda^{B_2} - I)Ax_n\big), \qquad x_{n+1} = \alpha_n f(x_n) + (I - \alpha_n D) S_n y_n$$
for each $n \ge 0$, where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\lambda > 0$, $\gamma \in (0, 1/L)$, $L$ is the spectral radius of the operator $A^*A$, $f : H_1 \to H_1$ is a contraction, $D : H_1 \to H_1$ is a strongly positive bounded linear operator and, for each $n \ge 1$, $S_n : H_1 \to H_1$ is a nonexpansive mapping.
In both their works, they obtained some strong convergence results by using their proposed iterative methods (for some more results on algorithms, see [14,15]).
Recall that a point $x^* \in H_1$ is called a fixed point of a given multi-valued mapping $S : H_1 \to 2^{H_1}$ if
$$x^* \in Sx^*,$$
and the fixed point problem (FPP) for a multi-valued mapping $S : H_1 \to 2^{H_1}$ is as follows:
Find a point $x^* \in H_1$ such that $x^* \in Sx^*$.
The set of fixed points of the multi-valued mapping $S$ is denoted by $F(S)$.
The fixed point theory for multi-valued mappings has been applied to various fields, especially mathematical economics and game theory (see [16,17,18]).
Recently, motivated by the results of Byrne et al. [7], Kazmi and Rizvi [4] and Sitthithakerngkiet et al. [5], Shehu and Agbebaku [2] introduced the split fixed point inclusion problem (SFPIP), which combines the problems (SVIP) and (FPP) for a multi-valued quasi-nonexpansive mapping $S : H_1 \to 2^{H_1}$, as follows:
Find a point $x^* \in H_1$ such that $0 \in B_1(x^*)$, $x^* \in Sx^*$
and such that
$y^* = Ax^* \in H_2$ solves $0 \in B_2(y^*)$,
where $H_1$ and $H_2$ are real Hilbert spaces, $B_1$ and $B_2$ are multi-valued maximal monotone mappings defined on $H_1$ and $H_2$, respectively, and $A$ is a bounded linear operator from $H_1$ to $H_2$.
Note that the problem (SFPIP) is equivalent to the following problem: for some $\lambda > 0$,
Find a point $x^* \in H_1$ such that $x^* = J_\lambda^{B_1}(x^*)$, $x^* \in Sx^*$ and $Ax^* = J_\lambda^{B_2}(Ax^*)$.
The solution set of the problem (SFPIP) is denoted by $F(S) \cap \Omega$, i.e.,
$$F(S) \cap \Omega = \big\{ x^* \in H_1 : 0 \in B_1(x^*),\ x^* \in Sx^* \text{ and } 0 \in B_2(Ax^*) \big\}.$$
Notice that, if $S$ is the identity operator, then the problem (SFPIP) reduces to the problem (SVIP). Moreover, if $J_\lambda^{B_1} = J_\lambda^{B_2} = A = I$, then the problem (SFPIP) reduces to the problem (FPP) for a multi-valued quasi-nonexpansive mapping.
Furthermore, Shehu and Agbebaku [2] introduced an algorithm $\{x_n\}$ for solving the problem (SFPIP) for a multi-valued quasi-nonexpansive mapping $S$ as follows: for any $x_1 \in H_1$,
$$u_n = J_\lambda^{B_1}\big(x_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ax_n\big), \qquad x_{n+1} = \alpha_n f_n(x_n) + \beta_n x_n + \delta_n\big(\sigma w_n + (1 - \sigma)u_n\big), \quad w_n \in Sx_n,$$
for each $n \ge 1$, where $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\delta_n\}$ are real sequences in $(0,1)$ such that
$$\alpha_n + \beta_n + \delta_n = 1, \qquad \sigma \in (0,1), \qquad \gamma_n := \frac{\tau_n \|(J_\lambda^{B_2} - I)Ax_n\|^2}{\|A^*(J_\lambda^{B_2} - I)Ax_n\|^2},$$
where $0 < a \le \tau_n \le b < 1$ and $\{f_n(x)\}$ converges uniformly for any $x$ in a bounded subset $D$ of $H_1$. They proved that the sequences $\{u_n\}$ and $\{x_n\}$ generated by (11) both converge strongly to $p \in F(S) \cap \Omega$, where $p = P_{F(S)\cap\Omega}f(p)$.
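The step size $\gamma_n$ in (11) is computed directly from the current residual and requires no knowledge of $\|A\|$; the small helper below (our own illustrative code with hypothetical argument names, not code from the paper) shows this computation for a matrix $A$.

```python
import numpy as np

def adaptive_step(residual, A, tau):
    """gamma_n = tau_n * ||(J - I)A x_n||^2 / ||A^*(J - I)A x_n||^2 as in (11),
    computed from the residual r = (J_lambda^{B2} - I) A x_n."""
    denom = np.linalg.norm(A.T @ residual) ** 2
    if denom == 0.0:
        return 0.0   # degenerate case: skip the correction term in this iteration
    return tau * np.linalg.norm(residual) ** 2 / denom
```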
In optimization theory, the second-order dynamical system known as the heavy ball method is used to accelerate the convergence rate of algorithms. This method is a two-step iterative method for minimizing a smooth convex function and was first introduced by Polyak [19].
The following modification of the heavy ball method, which improves the convergence rate, was introduced by Nesterov [20]:
$$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = y_n - \lambda_n \nabla f(y_n)$$
for each $n \ge 1$, where $\lambda_n > 0$ and $\theta_n \in [0,1)$ is an extrapolation factor. Here, the term $\theta_n(x_n - x_{n-1})$ is called the inertial term (for more recent results on inertial algorithms, see [21,22]).
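A minimal sketch of the inertial scheme above, assuming a smooth convex quadratic objective and a standard choice of $\theta_n$ (both our own illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(5, 5))
Q = M.T @ M + np.eye(5)                  # positive definite quadratic f(x) = 0.5 x^T Q x - b^T x
b = rng.normal(size=5)
grad = lambda x: Q @ x - b

lam = 1.0 / np.linalg.norm(Q, 2)         # step size 1/L for the L-smooth objective
x_prev = x = np.zeros(5)
for n in range(1, 200):
    theta = (n - 1) / (n + 2)            # a common extrapolation factor in [0, 1)
    y = x + theta * (x - x_prev)         # inertial extrapolation
    x_prev, x = x, y - lam * grad(y)     # gradient step from the extrapolated point
```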
The following method, called the inertial proximal point algorithm, was introduced by Alvarez and Attouch [23]. It combines the proximal point algorithm [24] with inertial extrapolation [25,26]:
$$y_n = x_n + \theta_n(x_n - x_{n-1}), \qquad x_{n+1} = (I + \lambda_n \hat{T})^{-1}(y_n)$$
for each $n \ge 1$, where $I$ is the identity operator and $\hat{T}$ is a maximal monotone operator. It was proved that, if the positive sequence $\{\lambda_n\}$ is non-decreasing, $\theta_n \in [0,1)$ and the following summability condition holds:
$$\sum_{n=1}^{\infty} \theta_n \|x_n - x_{n-1}\|^2 < \infty,$$
then $\{x_n\}$ generated by (12) converges to a zero point of $\hat{T}$.
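The following sketch, under the illustrative assumption that $\hat{T}$ is the affine maximal monotone operator $\hat{T}(x) = Qx - b$ with $Q$ symmetric positive semidefinite (so its resolvent is a linear solve), shows one possible loop for scheme (12); the data and the constant $\theta_n$ are made up, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4))
Q = M.T @ M                                        # T(x) = Q x - b, maximal monotone
b = rng.normal(size=4)

lam, theta = 1.0, 0.3                              # lambda_n non-decreasing, theta_n in [0, 1)
x_prev = x = np.zeros(4)
for _ in range(300):
    y = x + theta * (x - x_prev)                   # inertial extrapolation
    x_prev = x
    x = np.linalg.solve(np.eye(4) + lam * Q, y + lam * b)   # x_{n+1} = (I + lam T)^{-1}(y_n)

# x approximates a zero of T, i.e. a solution of Q x = b.
```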
Recently, some authors have pointed out a drawback of the summability condition (13) used in [27]: in order to satisfy (13), the parameter $\theta_n$ has to be computed at each step from $\|x_n - x_{n-1}\|$. Boţ et al. [28] improved on this by removing the summability condition (13) and replacing it with other conditions.
In this paper, inspired by the results of Shehu and Agbebaku [2], Nesterov [20] and Alvarez and Attouch [23], we propose a new algorithm by combining the iterative algorithm (11) with inertial extrapolation for solving the problem (SFPIP) and prove some strong convergence theorems for the proposed algorithm, which show the existence of a solution of the problem (SFPIP). Furthermore, as applications, we apply the proposed algorithm to the variational inequality problem and give some applications in game theory.

2. Preliminaries

In this section, we recall some definitions and results which will be used in the proof of the main results.
Let $H_1$ and $H_2$ be two real Hilbert spaces with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let $C$ be a nonempty closed and convex subset of $H_1$ and let $D$ be a nonempty bounded subset of $H_1$. Let $A : H_1 \to H_2$ be a bounded linear operator and $A^* : H_2 \to H_1$ be the adjoint of $A$.
For a sequence $\{x_n\}$ in $H_1$, we denote the strong and weak convergence of $\{x_n\}$ to $x$ by $x_n \to x$ and $x_n \rightharpoonup x$, respectively.
Recall that a mapping $T : C \to C$ is said to be:
(1)
Lipschitz if there exists a positive constant $\alpha$ such that, for all $x, y \in C$,
$$\|Tx - Ty\| \le \alpha \|x - y\|.$$
If $\alpha \in (0,1)$ or $\alpha = 1$, then the mapping $T$ is said to be contractive or nonexpansive, respectively.
(2)
firmly nonexpansive if
$$\|Tx - Ty\|^2 \le \langle Tx - Ty,\, x - y \rangle$$
for all $x, y \in C$.
A mapping $P_C$ is said to be the metric projection of $H_1$ onto $C$ if, for every point $x \in H_1$, there exists a unique nearest point in $C$, denoted by $P_C x$, such that
$$\|x - P_C x\| \le \|x - y\|$$
for all $y \in C$.
It is well known that $P_C$ is a nonexpansive mapping and satisfies
$$\langle x - y,\, P_C x - P_C y \rangle \ge \|P_C x - P_C y\|^2$$
for all $x, y \in H_1$. Moreover, $P_C x$ is characterized by the fact that $P_C x \in C$ and
$$\langle x - P_C x,\, y - P_C x \rangle \le 0$$
for all $y \in C$ and $x \in H_1$ (see [6,22]).
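As a simple illustration of the metric projection and the characterization above (our own example, not from the paper), the sketch below projects onto a closed ball and checks the inequality $\langle x - P_Cx,\, y - P_Cx\rangle \le 0$ at sampled points of $C$.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection P_C onto the closed ball C = B(center, radius)."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + radius * d / nd

rng = np.random.default_rng(4)
c, r = np.zeros(3), 1.0
x = np.array([2.0, -1.0, 0.5])
px = proj_ball(x, c, r)
for _ in range(100):
    y = proj_ball(rng.normal(size=3), c, r)        # a point of C
    assert np.dot(x - px, y - px) <= 1e-10         # characterization of P_C x
```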
A multi-valued mapping $B_1 : H_1 \to 2^{H_1}$ is said to be monotone if, for all $x, y \in H_1$, $u \in B_1(x)$ and $v \in B_1(y)$,
$$\langle x - y,\, u - v \rangle \ge 0.$$
A monotone mapping $B_1 : H_1 \to 2^{H_1}$ is said to be maximal if the graph $G(B_1)$ of $B_1$ is not properly contained in the graph of any other monotone mapping. It is known that a monotone mapping $B_1$ is maximal if and only if, for all $(x, u) \in H_1 \times H_1$,
$$\langle x - y,\, u - v \rangle \ge 0$$
for all $(y, v) \in G(B_1)$ implies that $u \in B_1(x)$.
Let $B_1 : H_1 \to 2^{H_1}$ be a multi-valued maximal monotone mapping. Then the resolvent mapping $J_\lambda^{B_1} : H_1 \to H_1$ associated with $B_1$ is defined by
$$J_\lambda^{B_1}(x) := (I + \lambda B_1)^{-1}(x)$$
for all $x \in H_1$ and $\lambda > 0$, where $I$ is the identity operator on $H_1$. It is well known that, for any $\lambda > 0$, the resolvent operator $J_\lambda^{B_1}$ is single-valued and firmly nonexpansive (see [2,5,6,14]).
Definition 1.
Suppose that $\{f_n(x)\}$ is a sequence of functions defined on a bounded set $D$. Then $f_n(x)$ converges uniformly to the function $f(x)$ on $D$ if, for all $x \in D$,
$$f_n(x) \to f(x) \quad \text{as } n \to \infty.$$
Let $f_n : D \to H_1$ be a uniformly convergent sequence of contraction mappings on $D$, i.e., there exists $\mu_n \in (0,1)$ such that
$$\|f_n(x) - f_n(y)\| \le \mu_n \|x - y\|$$
for all $x, y \in D$.
Let $CB(H_1)$ denote the family of nonempty closed and bounded subsets of $H_1$. The Hausdorff metric on $CB(H_1)$ is defined by
$$\hat{H}(A, B) = \max\Big\{ \sup_{x \in A}\inf_{y \in B}\|x - y\|,\ \sup_{y \in B}\inf_{x \in A}\|x - y\| \Big\}$$
for all $A, B \in CB(H_1)$ (see [18]).
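For finite point sets, the Hausdorff metric can be computed directly from the pairwise distances; the helper below is a small illustrative example, not code from the paper.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff metric between two finite point sets, given as (n, d) and (m, d) arrays."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
    return max(D.min(axis=1).max(),   # sup_{x in A} inf_{y in B} ||x - y||
               D.min(axis=0).max())   # sup_{y in B} inf_{x in A} ||x - y||

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5]])
print(hausdorff(A, B))   # 1.118..., attained by the pair (1, 0) and (0, 0.5)
```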
Definition 2.
[2] Let $S : H_1 \to CB(H_1)$ be a multi-valued mapping. Assume that $p \in H_1$ is a fixed point of $S$, that is, $p \in Sp$. The mapping $S$ is said to be:
(1) 
nonexpansive if, for all $x, y \in H_1$,
$$\hat{H}(Sx, Sy) \le \|x - y\|;$$
(2) 
quasi-nonexpansive if $F(S) \neq \emptyset$ and, for all $x \in H_1$ and $p \in F(S)$,
$$\hat{H}(Sx, Sp) \le \|x - p\|.$$
Definition 3.
[2] A single-valued mapping $S : H \to H$ is said to be demiclosed at the origin if, for any sequence $\{x_n\} \subset H$ with $x_n \rightharpoonup x$ and $Sx_n \to 0$, we have $Sx = 0$.
Definition 4.
[2] A multi-valued mapping $S : H_1 \to CB(H_1)$ is said to be demiclosed at the origin if, for any sequence $\{x_n\} \subset H_1$ with $x_n \rightharpoonup x$ and $d(x_n, Sx_n) \to 0$, we have $x \in Sx$.
Lemma 1.
[29,30] Let $H$ be a Hilbert space. Then, for any $x, y, z \in H$ and $\alpha, \beta, \gamma \in [0,1]$ with $\alpha + \beta + \gamma = 1$, we have
$$\|\alpha x + \beta y + \gamma z\|^2 = \alpha\|x\|^2 + \beta\|y\|^2 + \gamma\|z\|^2 - \alpha\beta\|x - y\|^2 - \alpha\gamma\|x - z\|^2 - \beta\gamma\|y - z\|^2.$$
Lemma 2.
[2,31] Let $H$ be a real Hilbert space. Then the following results hold for all $x, y \in H$:
(1)
$\|x - y\|^2 = \|x\|^2 - 2\langle x, y \rangle + \|y\|^2$;
(2)
$\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$;
(3)
$\|x + y\|^2 \le \|x\|^2 + 2\langle y,\, x + y \rangle$.
Lemma 3.
[2,32,33] Let $\{a_n\}, \{c_n\} \subset \mathbb{R}_+$, $\{\sigma_n\} \subset (0,1)$ and $\{b_n\} \subset \mathbb{R}$ be sequences such that
$$a_{n+1} \le (1 - \sigma_n)a_n + b_n + c_n \quad \text{for all } n \ge 0.$$
Assume that $\sum_{n=0}^{\infty} |c_n| < \infty$. Then the following results hold:
(1)
If $b_n \le \beta\sigma_n$ for some $\beta \ge 0$, then $\{a_n\}$ is a bounded sequence.
(2)
If we have
$$\sum_{n=0}^{\infty}\sigma_n = \infty \quad \text{and} \quad \limsup_{n\to\infty}\frac{b_n}{\sigma_n} \le 0,$$
then $\lim_{n\to\infty} a_n = 0$.
Lemma 4.
[32,33] Let $\{s_n\}$ be a sequence of non-negative real numbers such that
$$s_{n+1} \le (1 - \lambda_n)s_n + \lambda_n t_n + r_n$$
for each $n \ge 1$, where
(a)
$\{\lambda_n\} \subset [0,1]$ and $\sum_{n=1}^{\infty}\lambda_n = \infty$;
(b)
$\limsup_{n\to\infty} t_n \le 0$;
(c)
$r_n \ge 0$ and $\sum_{n=1}^{\infty} r_n < \infty$.
Then $s_n \to 0$ as $n \to \infty$.

3. The Main Results

In this section, we prove some strong convergence theorems of the proposed algorithm for solving the problem (SFPIP).
Theorem 1.
Let $H_1, H_2$ be two real Hilbert spaces, $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$ and $B_1 : H_1 \to 2^{H_1}$, $B_2 : H_2 \to 2^{H_2}$ be maximal monotone mappings. Let $S : H_1 \to CB(H_1)$ be a multi-valued quasi-nonexpansive mapping and let $S$ be demiclosed at the origin. Let $\{f_n\}$ be a sequence of $\mu_n$-contractions $f_n : H_1 \to H_1$ with $0 < \mu_* \le \mu_n \le \mu^* < 1$ and let $\{f_n(x)\}$ be uniformly convergent for any $x$ in a bounded subset $D$ of $H_1$. Suppose that $F(S) \cap \Omega \neq \emptyset$. For any $x_0, x_1 \in H_1$, let the sequences $\{y_n\}, \{u_n\}, \{z_n\}$ and $\{x_n\}$ be generated by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ u_n = J_\lambda^{B_1}\big(y_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ay_n\big), \\ z_n = \xi v_n + (1 - \xi)u_n, \quad v_n \in Sx_n, \\ x_{n+1} = \alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n \end{cases}$$
for each $n \ge 1$, where $\xi \in (0,1)$, $\gamma_n := \dfrac{\tau_n\|(J_\lambda^{B_2} - I)Ay_n\|^2}{\|A^*(J_\lambda^{B_2} - I)Ay_n\|^2}$ with $0 < \tau_* \le \tau_n \le \tau^* < 1$, $\{\theta_n\} \subset [0, \bar{\omega})$ for some $\bar{\omega} > 0$ and $\{\alpha_n\}, \{\beta_n\}, \{\delta_n\} \subset (0,1)$ with $\alpha_n + \beta_n + \delta_n = 1$ satisfying the following conditions:
(C1)
$\lim_{n\to\infty}\alpha_n = 0$;
(C2)
$\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C3)
$0 < \epsilon_1 \le \beta_n$ and $0 < \epsilon_2 \le \delta_n$;
(C4)
$\lim_{n\to\infty}\dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.
Then $\{x_n\}$ generated by (14) converges strongly to $p \in F(S) \cap \Omega$, where $p = P_{F(S)\cap\Omega}f(p)$.
Proof. 
First, we show that $\{x_n\}$ is bounded. Let $p = P_{F(S)\cap\Omega}f(p)$. Then $p \in F(S)\cap\Omega$, and so $J_\lambda^{B_1}p = p$ and $J_\lambda^{B_2}Ap = Ap$. By the triangle inequality, we get
$$\|y_n - p\| = \|x_n + \theta_n(x_n - x_{n-1}) - p\| \le \|x_n - p\| + \theta_n\|x_n - x_{n-1}\|.$$
By the Cauchy-Schwarz inequality and Lemma 2 (1) and (2), we get
$$\begin{aligned} \|y_n - p\|^2 &= \|x_n + \theta_n(x_n - x_{n-1}) - p\|^2 \\ &= \|x_n - p\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\langle x_n - p,\, x_n - x_{n-1}\rangle \\ &\le \|x_n - p\|^2 + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|x_n - p\|. \end{aligned}$$
By using (15) and the fact that $S$ is quasi-nonexpansive, we get
$$\begin{aligned} \|z_n - p\| &= \|\xi v_n + (1-\xi)u_n - p\| = \|\xi(v_n - p) + (1-\xi)(u_n - p)\| \\ &\le \xi\|v_n - p\| + (1-\xi)\|u_n - p\| \\ &\le \xi\, d(v_n, Sp) + (1-\xi)\|y_n - p\| \\ &\le \xi\hat{H}(Sx_n, Sp) + (1-\xi)\big[\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big] \\ &\le \xi\|x_n - p\| + (1-\xi)\|x_n - p\| + (1-\xi)\theta_n\|x_n - x_{n-1}\| \\ &\le \|x_n - p\| + \theta_n\|x_n - x_{n-1}\|, \end{aligned}$$
which implies that
$$\|z_n - p\|^2 \le \big(\|x_n - p\| + \theta_n\|x_n - x_{n-1}\|\big)^2 = \|x_n - p\|^2 + 2\theta_n\|x_n - x_{n-1}\|\,\|x_n - p\| + \theta_n^2\|x_n - x_{n-1}\|^2.$$
Since $J_\lambda^{B_1}$ is nonexpansive, by Lemma 2 (2), we get
$$\begin{aligned} \|u_n - p\|^2 &= \big\|J_\lambda^{B_1}\big(y_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ay_n\big) - J_\lambda^{B_1}p\big\|^2 \\ &\le \big\|y_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ay_n - p\big\|^2 \\ &= \|y_n - p\|^2 + \gamma_n^2\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|^2 + 2\gamma_n\big\langle y_n - p,\, A^*(J_\lambda^{B_2} - I)Ay_n\big\rangle. \end{aligned}$$
Again, by Lemma 2 (2), we get
$$\begin{aligned} \big\langle y_n - p,\, A^*(J_\lambda^{B_2} - I)Ay_n\big\rangle &= \big\langle A(y_n - p),\, (J_\lambda^{B_2} - I)Ay_n\big\rangle \\ &= \big\langle J_\lambda^{B_2}Ay_n - Ap - (J_\lambda^{B_2} - I)Ay_n,\, (J_\lambda^{B_2} - I)Ay_n\big\rangle \\ &= \big\langle J_\lambda^{B_2}Ay_n - Ap,\, (J_\lambda^{B_2} - I)Ay_n\big\rangle - \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 \\ &= \tfrac{1}{2}\Big( \big\|J_\lambda^{B_2}Ay_n - Ap\big\|^2 + \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 - \big\|J_\lambda^{B_2}Ay_n - Ap - (J_\lambda^{B_2} - I)Ay_n\big\|^2 \Big) - \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 \\ &= \tfrac{1}{2}\Big( \big\|J_\lambda^{B_2}Ay_n - Ap\big\|^2 + \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 - \big\|Ay_n - Ap\big\|^2 \Big) - \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 \\ &= \tfrac{1}{2}\Big( \big\|J_\lambda^{B_2}Ay_n - Ap\big\|^2 - \big\|Ay_n - Ap\big\|^2 - \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 \Big) \\ &\le \tfrac{1}{2}\Big( \big\|Ay_n - Ap\big\|^2 - \big\|Ay_n - Ap\big\|^2 - \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 \Big) = -\tfrac{1}{2}\big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2. \end{aligned}$$
Substituting (20) into (19), we get
$$\|u_n - p\|^2 \le \|y_n - p\|^2 + \gamma_n^2\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|^2 - \gamma_n\big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 = \|y_n - p\|^2 - \gamma_n\Big( \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 - \gamma_n\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|^2 \Big).$$
By the definition of $\gamma_n$, (21) can then be written as follows:
$$\|u_n - p\|^2 \le \|y_n - p\|^2 - \gamma_n(1 - \tau_n)\big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 \le \|y_n - p\|^2.$$
Thus we have
$$\|u_n - p\| \le \|y_n - p\|.$$
Using the condition (C3) and (17), we get
$$\begin{aligned} \|x_{n+1} - p\| &= \|\alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n - p\| \\ &= \big\|\alpha_n\big(f_n(x_n) - f_n(p)\big) + \alpha_n\big(f_n(p) - p\big) + \beta_n(x_n - p) + \delta_n(z_n - p)\big\| \\ &\le \alpha_n\|f_n(x_n) - f_n(p)\| + \alpha_n\|f_n(p) - p\| + \beta_n\|x_n - p\| + \delta_n\|z_n - p\| \\ &\le \alpha_n\mu_n\|x_n - p\| + \alpha_n\|f_n(p) - p\| + \beta_n\|x_n - p\| + \delta_n\big( \|x_n - p\| + (1-\xi)\theta_n\|x_n - x_{n-1}\| \big) \\ &\le \big( \alpha_n\mu^* + \beta_n + \delta_n \big)\|x_n - p\| + (1-\xi)\delta_n\theta_n\|x_n - x_{n-1}\| + \alpha_n\|f_n(p) - p\| \\ &= \big( 1 - \alpha_n(1 - \mu^*) \big)\|x_n - p\| + (1-\xi)\delta_n\alpha_n\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \alpha_n\|f_n(p) - p\|. \end{aligned}$$
Since $\{f_n\}$ converges uniformly on $D$, there exists a constant $M > 0$ such that
$$\|f_n(p) - p\| \le M$$
for each $n \ge 1$. So we can choose $\beta := \dfrac{M}{1 - \mu^*}$ and set
$$a_n := \|x_n - p\|, \qquad b_n := \alpha_n\|f_n(p) - p\|,$$
$$c_n := (1-\xi)\delta_n\alpha_n\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|, \qquad \sigma_n := \alpha_n(1 - \mu^*).$$
By Lemma 3 (1) and our assumptions, it follows that $\{x_n\}$ is bounded. Moreover, $\{u_n\}$ and $\{y_n\}$ are also bounded.
Now, by Lemma 2, we get
x n + 1 p 2 = α n ( f n ( x n ) f n ( p ) ) + α n ( f n ( p ) p ) + β n ( x n p ) + δ n ( z n p ) 2 α n ( f n ( x n ) f n ( p ) ) + β n ( x n p ) + δ n ( z n p ) 2 + 2 α n f n ( p ) p , x n + 1 p = β n ( x n p ) + δ n ( z n p ) 2 + α n 2 f n ( x n ) f n ( p ) 2 + 2 α n f n ( x n ) f n ( p ) , β n ( x n p ) + δ n ( z n p ) + 2 α n f n ( p ) p , x n + 1 p β n 2 x n p 2 + δ n 2 z n p 2 + 2 β n δ n x n p , z n p + α n 2 μ n 2 x n p 2 + 2 α n f n ( p ) p , x n + 1 p + 2 α n f n ( x n ) f n ( p ) β n ( x n p ) + δ n ( z n p ) β n 2 x n p 2 + δ n 2 z n p 2 + β n δ n x n p 2 + z n p 2 x n z n 2 + α n 2 μ * 2 x n p 2 + 2 α n μ n x n p β n x n p + δ n z n p + 2 α n f n ( p ) p , x n + 1 p β n ( β n + δ n ) x n p 2 + δ n ( β n + δ n ) z n p 2 β n δ n x n z n 2 + α n 2 μ * 2 x n p 2 + 2 μ * α n ( β n + δ n ) x n p 2 + 2 μ * α n ( 1 ξ ) δ n θ n x n x n 1 x n p + 2 α n f n ( p ) p , x n + 1 p β n ( β n + δ n ) x n p 2 + δ n ( β n + δ n ) ( x n p 2 + θ n 2 x n x n 1 2 + 2 θ n x n x n 1 x n p ) β n δ n x n z n 2 + α n 2 μ * 2 x n p 2 + 2 μ * α n ( β n + δ n ) x n p 2 + 2 μ * α n ( 1 ξ ) δ n θ n x n x n 1 x n p + 2 α n f n ( p ) p , x n + 1 p = ( 1 α n ) 2 + α n 2 μ * 2 + 2 μ * α n ( 1 α n ) x n p 2 β n δ n x n z n 2 + 2 1 α n ( 1 μ * ( 1 ξ ) ) δ n θ n x n x n 1 x n p + ( 1 α n ) δ n θ n 2 x n x n 1 2 + 2 α n f n ( p ) p , x n + 1 p .
Now, we consider two cases for the rest of the proof.
Case 1. Suppose that there exists $n_0 \in \mathbb{N}$ such that $\{\|x_n - p\|\}_{n=n_0}^{\infty}$ is non-increasing; then $\{\|x_n - p\|\}$ converges. By Lemma 1, we get
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n - p\|^2 \\ &= \alpha_n\|f_n(x_n) - p\|^2 + \beta_n\|x_n - p\|^2 + \delta_n\|z_n - p\|^2 \\ &\quad - \alpha_n\beta_n\|f_n(x_n) - x_n\|^2 - \alpha_n\delta_n\|f_n(x_n) - z_n\|^2 - \beta_n\delta_n\|x_n - z_n\|^2 \\ &\le \alpha_n\|f_n(x_n) - p\|^2 + \beta_n\|x_n - p\|^2 + \delta_n\|z_n - p\|^2 \\ &\le \alpha_n\|f_n(x_n) - p\|^2 + \beta_n\|x_n - p\|^2 + \delta_n\big( \xi\|x_n - p\|^2 + (1-\xi)\|u_n - p\|^2 \big) \\ &\le \alpha_n\|f_n(x_n) - p\|^2 + (\beta_n + \xi\delta_n)\|x_n - p\|^2 + (1-\xi)\delta_n\|u_n - p\|^2, \end{aligned}$$
which implies that
$$\|u_n - p\|^2 \ge \frac{1}{(1-\xi)\delta_n}\Big( \|x_{n+1} - p\|^2 - \alpha_n\|f_n(x_n) - p\|^2 - (\beta_n + \xi\delta_n)\|x_n - p\|^2 \Big).$$
Applying (16) and (24) to (21), we get
γ n ( ( J λ B 2 I ) A y n 2 γ n A * ( J λ B 2 I ) A y n 2 ) y n p 2 u n p 2 x n p 2 + 2 θ n x n 1 p x n p + θ n 2 x n x n 1 2 + 1 ( 1 ξ ) δ n ( α n f n ( x n ) p 2 + ( β n + ξ δ n ) x n p 2 x n + 1 p 2 ) = β n + δ n ( 1 ξ ) δ n x n p 2 + α n ( 1 ξ ) δ n f n ( x n ) p 2 1 ( 1 ξ ) δ n x n + 1 p 2 + θ n x n x n 1 2 x n p + θ n x n x n 1 1 ( 1 ξ ) ϵ 2 ( x n p 2 x n + 1 p 2 ) + α n ( 1 ξ ) ϵ 2 ( f n ( x n ) p 2 x n p 2 + θ n α n x n x n 1 2 x n p + α n θ n α n x n x n 1 ) .
Since $\{\|x_n - p\|\}$ is convergent, we have $\|x_n - p\| - \|x_{n+1} - p\| \to 0$ as $n \to \infty$. By the conditions (C2) and (C4), we get
$$\gamma_n\Big( \big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2 - \gamma_n\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|^2 \Big) \to 0 \quad \text{as } n \to \infty.$$
From the definition of $\gamma_n$, we get
$$\frac{\tau_n(1 - \tau_n)\big\|(J_\lambda^{B_2} - I)Ay_n\big\|^4}{\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|^2} \to 0 \quad \text{as } n \to \infty,$$
or
$$\frac{\big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2}{\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|} \to 0 \quad \text{as } n \to \infty.$$
Since
$$\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\| \le \|A^*\|\,\big\|(J_\lambda^{B_2} - I)Ay_n\big\| = \|A\|\,\big\|(J_\lambda^{B_2} - I)Ay_n\big\|,$$
it is easy to see that
$$\big\|(J_\lambda^{B_2} - I)Ay_n\big\| \le \|A\|\,\frac{\big\|(J_\lambda^{B_2} - I)Ay_n\big\|^2}{\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\|}.$$
Consequently, we get
$$\big\|(J_\lambda^{B_2} - I)Ay_n\big\| \to 0 \quad \text{as } n \to \infty$$
and also
$$\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\| \to 0 \quad \text{as } n \to \infty.$$
Similarly, from (23) and our assumptions, we get
x n z n 2 = 1 β n δ n { x n p 2 x n + 1 p 2 + ( 1 α n ) δ n θ n 2 x n x n 1 2 + 2 1 α n ( 1 μ * ( 1 ξ ) ) δ n θ n x n x n 1 x n p + α n α n ( 1 + μ * 2 ) 2 ( 1 μ * ( 1 α n ) ) x n p 2 + 2 f n ( p ) p , x n + 1 p } 1 ϵ 1 ϵ 2 { x n p 2 x n + 1 p 2 + θ n α n x n x n 1 [ δ n ( 1 α n ) α n 2 θ n α n x n x n 1 + 2 δ n 1 α n ( 1 μ * ( 1 ξ ) ) θ n x n p ] + α n [ 2 f n ( p ) p , x n + 1 p + α n ( 1 + μ * 2 ) 2 ( 1 μ * ( 1 α n ) ) x n p 2 ] } 0 as n .
Therefore, we have
$$\|x_n - z_n\| \to 0 \quad \text{as } n \to \infty.$$
By the condition (C2) and (27), we get
$$\|x_{n+1} - x_n\| = \|\alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n - x_n\| \le \alpha_n\|f_n(x_n) - x_n\| + \delta_n\|x_n - z_n\| \to 0 \quad \text{as } n \to \infty.$$
Thus we have
$$\|x_{n+1} - z_n\| \le \|x_{n+1} - x_n\| + \|x_n - z_n\| \to 0 \quad \text{as } n \to \infty.$$
Since $J_\lambda^{B_1}$ is firmly nonexpansive, we have
u n p 2 = J λ B 1 ( y n + γ n A * ( J λ B 2 I ) A y n ) J λ B 1 p 2 u n p , y n + γ n A * ( J λ B 2 I ) A y n p = 1 2 u n p 2 + y n + γ n A * ( J λ B 2 I ) A y n p 2 u n y n γ n A * ( J λ B 2 I ) A y n 2 = 1 2 ( u n p 2 + y n p 2 + γ n 2 A * ( J λ B 2 I ) A y n 2 + 2 y n p , γ n A * ( J λ B 2 I ) A y n u n y n 2 γ n 2 A * ( J λ B 2 I ) A y n 2 + 2 u n y n , γ n A * ( J λ B 2 I ) A y n ) 1 2 y n p 2 + y n p 2 u n y n 2 + 2 u n p , γ n A * ( J λ B 2 I ) A y n 1 2 2 y n p 2 u n y n 2 + 2 γ n u n p A * ( J λ B 2 I ) A y n y n p 2 1 2 u n y n 2 + γ n u n p A * ( J λ B 2 I ) A y n
or
$$\|u_n - y_n\|^2 \le 2\Big( \|y_n - p\|^2 - \|u_n - p\|^2 + \gamma_n\|u_n - p\|\,\big\|A^*(J_\lambda^{B_2} - I)Ay_n\big\| \Big).$$
From (28), (16), (24) and (26) and our assumptions, it follows that
u n y n 2 2 [ x n p 2 + 2 θ n x n x n 1 x n p + θ n 2 x n x n 1 2 + 1 ( 1 ξ ) δ n α n f n ( x n ) p 2 + ( β n + ξ δ n ) x n p 2 x n + 1 p 2 + γ n u n p A * ( J λ B 2 1 ) A y n ] = 2 [ 1 ( 1 ξ ) ϵ 2 x n p 2 x n + 1 p 2 + γ n u n p A * ( J λ B 2 1 ) A y n + α n ( 1 ξ ) ϵ 2 ( f n ( x n ) p 2 x n p 2 + θ n α n x n x n 1 2 x n p + α n θ n α n x n x n 1 ) ] 0 as n ,
that is, we have
$$\|u_n - y_n\| \to 0 \quad \text{as } n \to \infty.$$
From $y_n := x_n + \theta_n(x_n - x_{n-1})$, we get
$$\|y_n - x_n\| = \|x_n + \theta_n(x_n - x_{n-1}) - x_n\| = \alpha_n\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|,$$
which, together with the condition (C4), implies that
$$\|y_n - x_n\| \to 0 \quad \text{as } n \to \infty.$$
In addition, using (27), (29) and (30), we obtain
$$\|z_n - u_n\| \le \|u_n - y_n\| + \|y_n - z_n\| \le \|u_n - y_n\| + \|y_n - x_n\| + \|x_n - z_n\| \to 0 \quad \text{as } n \to \infty.$$
From $z_n := \xi v_n + (1-\xi)u_n$, we get
$$\|v_n - u_n\| = \frac{1}{\xi}\|z_n - u_n\| \to 0 \quad \text{as } n \to \infty.$$
Thus, by (29)-(31), we also get
$$\|x_n - v_n\| \le \|x_n - u_n\| + \|u_n - v_n\| \le \|x_n - y_n\| + \|y_n - u_n\| + \|u_n - v_n\| \to 0 \quad \text{as } n \to \infty.$$
Therefore, we have
$$d(x_n, Sx_n) \le \|x_n - v_n\| \to 0 \quad \text{as } n \to \infty.$$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup x^* \in H_1$ and, consequently, $\{u_{n_k}\}$ and $\{y_{n_k}\}$ converge weakly to the point $x^*$.
From (32), Lemma 4 and the demiclosedness principle for the multi-valued mapping $S$ at the origin, we get $x^* \in Sx^*$, which implies that
$$x^* \in F(S).$$
Next, we show that $x^* \in \Omega$. Let $(v, z) \in G(B_1)$, that is, $z \in B_1(v)$. On the other hand, $u_{n_k} = J_\lambda^{B_1}\big(y_{n_k} + \gamma_{n_k}A^*(J_\lambda^{B_2} - I)Ay_{n_k}\big)$ can be written as
$$y_{n_k} + \gamma_{n_k}A^*(J_\lambda^{B_2} - I)Ay_{n_k} \in u_{n_k} + \lambda B_1(u_{n_k}),$$
or, equivalently,
$$\frac{(y_{n_k} - u_{n_k}) + \gamma_{n_k}A^*(J_\lambda^{B_2} - I)Ay_{n_k}}{\lambda} \in B_1(u_{n_k}).$$
Since $B_1$ is maximal monotone, we get
$$\Big\langle v - u_{n_k},\ z - \frac{(y_{n_k} - u_{n_k}) + \gamma_{n_k}A^*(J_\lambda^{B_2} - I)Ay_{n_k}}{\lambda} \Big\rangle \ge 0.$$
Therefore, we have
$$\langle v - u_{n_k},\, z\rangle \ge \Big\langle v - u_{n_k},\ \frac{(y_{n_k} - u_{n_k}) + \gamma_{n_k}A^*(J_\lambda^{B_2} - I)Ay_{n_k}}{\lambda} \Big\rangle = \Big\langle v - u_{n_k},\ \frac{y_{n_k} - u_{n_k}}{\lambda} \Big\rangle + \Big\langle v - u_{n_k},\ \frac{\gamma_{n_k}A^*(J_\lambda^{B_2} - I)Ay_{n_k}}{\lambda} \Big\rangle.$$
Since $u_{n_k} \rightharpoonup x^*$, we have
$$\lim_{k\to\infty}\langle v - u_{n_k},\, z\rangle = \langle v - x^*,\, z\rangle.$$
By (26) and (29), it follows that (33) yields $\langle v - x^*,\, z\rangle \ge 0$, which implies that
$$0 \in B_1(x^*).$$
Moreover, from (29), we know that $\{Ay_{n_k}\}$ converges weakly to $Ax^*$ and, by (25), the fact that $J_\lambda^{B_2}$ is nonexpansive and the demiclosedness principle for a multi-valued mapping, we have
$$0 \in B_2(Ax^*),$$
which implies that $x^* \in \Omega$. Thus $x^* \in F(S)\cap\Omega$. Since $\{f_n(x)\}$ is uniformly convergent on $D$, we get
$$\limsup_{n\to\infty}\big\langle f_n(p) - p,\ x_{n+1} - p\big\rangle = \limsup_{j\to\infty}\big\langle f_{n_j}(p) - p,\ x_{n_j+1} - p\big\rangle = \big\langle f(p) - p,\ x^* - p\big\rangle \le 0.$$
From (23), we get
x n + 1 p 2 1 2 α n ( 1 μ * ( 1 α n ) ) + α n 2 ( 1 + μ * 2 ) x n p 2 β n δ n x n z n 2 + 2 1 α n ( 1 μ * ( 1 ξ ) ) δ n θ n x n x n 1 x n p + ( 1 α n ) δ n θ n 2 x n x n 1 2 + 2 α n f n ( p ) p , x n + 1 p 1 2 α n ( 1 μ * ) x n p 2 + 2 α n ( 1 μ * ) f n ( p ) p , x n + 1 p 1 μ * + α n [ δ n θ n α n x n x n 1 ( 2 1 α n ( 1 μ * ( 1 ξ ) ) x n p + ( 1 α n ) α n θ n α n x n x n 1 + α n ( 1 + μ * 2 ) x n p 2 ] .
By Lemma 4, we obtain
$$\lim_{n\to\infty} x_n = p.$$
Case 2. Suppose that $\{\|x_n - p\|\}_{n=n_0}^{\infty}$ is not a monotonically decreasing sequence for some $n_0$ large enough. Set $\Gamma_n = \|x_n - p\|^2$ and define the mapping $\tau$ on $\{n \in \mathbb{N} : n \ge n_0\}$ by
$$\tau(n) := \max\{k \in \mathbb{N} : k \le n,\ \Gamma_k \le \Gamma_{k+1}\}$$
for all $n \ge n_0$. Obviously, $\{\tau(n)\}$ is a non-decreasing sequence. Thus we have
$$0 \le \Gamma_{\tau(n)} \le \Gamma_{\tau(n)+1}$$
for all $n \ge n_0$, that is, $\|x_{\tau(n)} - p\| \le \|x_{\tau(n)+1} - p\|$ for all $n \ge n_0$. Thus $\lim_{n\to\infty}\|x_{\tau(n)} - p\|$ exists. As in Case 1, we can show that
$$\lim_{n\to\infty}\big\|(J_\lambda^{B_2} - I)Ay_{\tau(n)}\big\| = 0, \qquad \lim_{n\to\infty}\big\|A^*(J_\lambda^{B_2} - I)Ay_{\tau(n)}\big\| = 0,$$
$$\lim_{n\to\infty}\|x_{\tau(n)+1} - x_{\tau(n)}\| = 0, \qquad \lim_{n\to\infty}\|u_{\tau(n)} - x_{\tau(n)}\| = 0,$$
$$\lim_{n\to\infty}\|v_{\tau(n)} - u_{\tau(n)}\| = 0, \qquad \lim_{n\to\infty}\|x_{\tau(n)} - v_{\tau(n)}\| = 0.$$
Therefore, we have
$$d(x_{\tau(n)}, Sx_{\tau(n)}) \le \|x_{\tau(n)} - v_{\tau(n)}\| \to 0 \quad \text{as } n \to \infty.$$
Since $\{x_{\tau(n)}\}$ is bounded, there exists a subsequence of $\{x_{\tau(n)}\}$ (not relabeled) that converges weakly to a point $x^* \in H_1$. From $\|u_{\tau(n)} - x_{\tau(n)}\| \to 0$, it follows that $u_{\tau(n)} \rightharpoonup x^* \in H_1$ as well.
Moreover, as in Case 1, we can show that $x^* \in F(S)\cap\Omega$. Furthermore, since $\{f_n(x)\}$ is uniformly convergent on $D \subset H_1$, we obtain
$$\limsup_{n\to\infty}\big\langle f_{\tau(n)}(p) - p,\ x_{\tau(n)+1} - p\big\rangle \le 0.$$
From (23), we get
x τ ( n ) + 1 p 2 1 2 α τ ( n ) ( 1 μ * ( 1 α τ ( n ) ) ) + α τ ( n ) 2 ( 1 + μ * 2 ) x τ ( n ) p 2 β τ ( n ) δ τ ( n ) x τ ( n ) z τ ( n ) 2 + 2 α τ ( n ) f τ ( n ) ( p ) p , x τ ( n ) + 1 p + 2 1 α τ ( n ) ( 1 μ * ( 1 ξ ) ) δ τ ( n ) θ τ ( n ) x τ ( n ) x τ ( n ) 1 x τ ( n ) p + ( 1 α τ ( n ) ) δ τ ( n ) θ τ ( n ) 2 x τ ( n ) x τ ( n ) 1 2 1 2 α τ ( n ) ( 1 μ * ) x τ ( n ) p 2 + α τ ( n ) 2 ( 1 + μ * 2 ) x τ ( n ) p 2 + δ τ ( n ) θ n x τ ( n ) x τ ( n ) 1 ( 2 ( 1 α τ ( n ) ( 1 μ * ) ) x τ ( n ) p + ( 1 α τ ( n ) ) θ τ ( n ) x τ ( n ) x τ ( n ) 1 ) + 2 α τ ( n ) f τ ( n ) ( p ) p , x τ ( n ) + 1 p ,
which implies that
2 α τ ( n ) ( 1 μ * ) x τ ( n ) p 2 x τ ( n ) p 2 x τ ( n ) + 1 p 2 + α τ ( n ) 2 ( 1 + μ * 2 ) x τ ( n ) p 2 + δ τ ( n ) θ n x τ ( n ) x τ ( n ) 1 ( 2 ( 1 α τ ( n ) ( 1 μ * ) ) x τ ( n ) p + ( 1 α τ ( n ) ) θ τ ( n ) x τ ( n ) x τ ( n ) 1 ) + 2 α τ ( n ) f τ ( n ) ( p ) p , x τ ( n ) + 1 p ,
or
2 ( 1 μ * ) x τ ( n ) p 2 α τ ( n ) ( 1 + μ * 2 ) x τ ( n ) p 2 + 2 f τ ( n ) ( p ) p , x τ ( n ) + 1 p + δ τ ( n ) θ τ ( n ) α τ ( n ) x τ ( n ) x τ ( n ) 1 ( 2 ( 1 α τ ( n ) ( 1 μ * ) ) x τ ( n ) p + ( 1 α τ ( n ) ) α τ ( n ) θ τ ( n ) α τ ( n ) x τ ( n ) x τ ( n ) 1 ) .
Thus we have
$$\limsup_{n\to\infty}\|x_{\tau(n)} - p\| \le 0$$
and so
$$\lim_{n\to\infty}\|x_{\tau(n)} - p\| = 0.$$
By (35) and (38), we get
$$\|x_{\tau(n)+1} - p\| \le \|x_{\tau(n)+1} - x_{\tau(n)}\| + \|x_{\tau(n)} - p\| \to 0 \quad \text{as } n \to \infty.$$
Furthermore, for all $n \ge n_0$, it is easy to see that $\Gamma_n \le \Gamma_{\tau(n)+1}$ if $n \neq \tau(n)$ (that is, $\tau(n) < n$), because $\Gamma_j > \Gamma_{j+1}$ for $\tau(n) + 1 \le j \le n$. Consequently, it follows that, for all $n \ge n_0$,
$$0 \le \Gamma_n \le \max\{\Gamma_{\tau(n)}, \Gamma_{\tau(n)+1}\} = \Gamma_{\tau(n)+1}.$$
Therefore, $\lim_{n\to\infty}\Gamma_n = 0$, that is, $\{x_n\}$ converges strongly to the point $p$. This completes the proof. ☐
Remark 1.
[22] The condition (C4) is easily implemented in numerical computations because the value of $\|x_n - x_{n-1}\|$ is known before choosing $\theta_n$. Indeed, we can choose the parameter $\theta_n$ as
$$\theta_n = \begin{cases} \min\Big\{\bar{\omega},\ \dfrac{\omega_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\ \bar{\omega}, & \text{otherwise}, \end{cases}$$
where $\{\omega_n\}$ is a positive sequence such that $\omega_n = o(\alpha_n)$. Moreover, under the condition (C4), we can take $\alpha_n = \dfrac{1}{n+1}$, $\bar{\omega} = \dfrac{4}{5}$ and
$$\theta_n = \begin{cases} \min\Big\{\bar{\omega},\ \dfrac{\alpha_n^2}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\ \bar{\omega}, & \text{otherwise}, \end{cases}$$
or
$$\theta_n = \begin{cases} \min\Big\{\dfrac{4}{5},\ \dfrac{1}{(n+1)^2\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1}, \\ \dfrac{4}{5}, & \text{otherwise}. \end{cases}$$
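A direct transcription of the last choice of $\theta_n$ above (with $\alpha_n = 1/(n+1)$, so $\omega_n = \alpha_n^2$); the helper is our own illustrative code, not from the paper.

```python
import numpy as np

def choose_theta(n, x, x_prev, omega_bar=0.8):
    """Inertial parameter theta_n as in Remark 1 with alpha_n = 1/(n+1) and omega_n = alpha_n^2."""
    diff = np.linalg.norm(x - x_prev)
    if diff == 0.0:
        return omega_bar                                  # theta_n = omega_bar when x_n = x_{n-1}
    return min(omega_bar, 1.0 / ((n + 1) ** 2 * diff))
```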
If the multi-valued quasi-nonexpansive mapping S in Theorem 1 is a single-valued quasi-nonexpansive mapping, then we obtain the following:
Corollary 1.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Suppose that $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$. Let $\{f_n\}$ be a sequence of $\mu_n$-contractions $f_n : H_1 \to H_1$ with $0 < \mu_* \le \mu_n \le \mu^* < 1$ and let $\{f_n(x)\}$ be uniformly convergent for any $x$ in a bounded subset $D$ of $H_1$. Suppose that $S : H_1 \to H_1$ is a single-valued quasi-nonexpansive mapping, $I - S$ is demiclosed at the origin and $F(S)\cap\Omega \neq \emptyset$. For any $x_0, x_1 \in H_1$, let the sequences $\{y_n\}, \{u_n\}, \{z_n\}$ and $\{x_n\}$ be generated by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ u_n = J_\lambda^{B_1}\big(y_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ay_n\big), \\ z_n = \xi Sx_n + (1 - \xi)u_n, \\ x_{n+1} = \alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n \end{cases}$$
for each $n \ge 1$, where $\xi \in (0,1)$, $\gamma_n := \dfrac{\tau_n\|(J_\lambda^{B_2} - I)Ay_n\|^2}{\|A^*(J_\lambda^{B_2} - I)Ay_n\|^2}$ with $0 < \tau_* \le \tau_n \le \tau^* < 1$, $\{\theta_n\} \subset [0, \bar{\omega})$ for some $\bar{\omega} > 0$ and $\{\alpha_n\}, \{\beta_n\}, \{\delta_n\} \subset (0,1)$ with $\alpha_n + \beta_n + \delta_n = 1$ satisfying the following conditions:
(C1) 
$\lim_{n\to\infty}\alpha_n = 0$;
(C2) 
$\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C3) 
$0 < \epsilon_1 \le \beta_n$ and $0 < \epsilon_2 \le \delta_n$;
(C4) 
$\lim_{n\to\infty}\dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.
Then the sequence $\{x_n\}$ generated by (39) converges strongly to a point $p \in F(S)\cap\Omega$, where $p = P_{F(S)\cap\Omega}f(p)$.
Remark 2.
If $\theta_n = 0$ for all $n$, then the iterative scheme (14) in Theorem 1 reduces to the iterative scheme (11).
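To make the structure of scheme (14) concrete, here is a minimal Python sketch for the case in which $S$ is single-valued and the resolvents, the mapping $S$, the contraction $f$ and the parameter sequences are supplied by the user as callables; every concrete choice here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def inertial_split_fixed_point(x0, x1, A, JB1, JB2, S, f, alpha, beta, delta,
                               theta, tau=0.5, xi=0.5, iters=200):
    """Sketch of scheme (14): JB1, JB2 play the roles of the resolvents, S is a
    (single-valued) quasi-nonexpansive map, f a contraction; alpha, beta, delta
    are callables n -> value with alpha(n)+beta(n)+delta(n) = 1."""
    x_prev, x = x0, x1
    for n in range(1, iters + 1):
        y = x + theta(n, x, x_prev) * (x - x_prev)         # inertial step
        Ay = A @ y
        r = JB2(Ay) - Ay                                   # residual (J^{B2} - I) A y_n
        Ar = A.T @ r
        gamma = 0.0 if np.allclose(Ar, 0.0) else tau * (r @ r) / (Ar @ Ar)
        u = JB1(y + gamma * Ar)
        z = xi * S(x) + (1 - xi) * u
        x_prev, x = x, alpha(n) * f(x) + beta(n) * x + delta(n) * z
    return x
```

One choice consistent with (C1)-(C4) would be, for instance, alpha(n) = 1/(n+1), beta(n) = delta(n) = n/(2(n+1)) and theta = choose_theta from the sketch after Remark 1; again, these are illustrative values rather than the authors' recommendation.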

4. Applications

In this section, we give some applications of the problem (SFPIP) to the variational inequality problem and to game theory. First, we recall the variational inequality problem from [34] and game theory (see [35]).

4.1. The Variational Inequality Problem

Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H_1$ and suppose that an operator $F : H_1 \to H_1$ is monotone.
Now, we consider the following variational inequality problem (VIP):
Find a point $x^* \in C$ such that $\langle Fx^*,\, y - x^*\rangle \ge 0$ for all $y \in C$.
The solution set of the problem (VIP) is denoted by $\Gamma$.
Moreover, it is well known that $x^*$ is a solution of the problem (VIP) if and only if $x^*$ is a solution of the fixed point problem [34], that is, for any $\gamma > 0$,
$$x^* = P_C(x^* - \gamma Fx^*).$$
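As an illustration of this characterization (our own toy example, not from the paper), the sketch below builds a strongly monotone affine operator $F$ on a box $C$ and locates the point satisfying $x^* = P_C(x^* - \gamma Fx^*)$ by a plain fixed point iteration.

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.normal(size=(3, 3))
M = B - B.T + np.eye(3)                        # F(x) = M x + q is (strongly) monotone
q = rng.normal(size=3)
F = lambda x: M @ x + q
proj_C = lambda x: np.clip(x, 0.0, 1.0)        # C = [0, 1]^3

gamma = 0.9 / np.linalg.norm(M, 2) ** 2        # small step: x -> P_C(x - gamma F x) is a contraction here
x = np.zeros(3)
for _ in range(2000):
    x = proj_C(x - gamma * F(x))

print(np.linalg.norm(x - proj_C(x - gamma * F(x))))   # ~0, so x satisfies the fixed point equation
```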
The following lemma is extracted from [2,36]. It is used for finding a solution of the split inclusion problem and the variational inequality problem:
Lemma 5.
Let $H_1$ be a real Hilbert space and $F : H_1 \to H_1$ be a monotone and $L$-Lipschitz operator on a nonempty closed and convex subset $C$ of $H_1$. For any $\gamma > 0$, let $T = P_C\big(I - \gamma F\big(P_C(I - \gamma F)\big)\big)$. Then, for any $y \in \Gamma$ and $L\gamma < 1$, we have
$$\|Tx - Ty\| \le \|x - y\|,$$
$I - T$ is demiclosed at the origin and $F(T) = \Gamma$.
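The operator $T$ of Lemma 5 applies two projected steps; the following small constructor (illustrative code with user-supplied $P_C$, $F$ and $\gamma$, not the authors' implementation) makes this explicit.

```python
def make_T(proj_C, F, gamma):
    """T = P_C(I - gamma*F(P_C(I - gamma*F))) from Lemma 5: an inner projected
    step followed by an outer projected step evaluated at the inner point."""
    def T(x):
        y = proj_C(x - gamma * F(x))       # inner step  P_C(I - gamma F) x
        return proj_C(x - gamma * F(y))    # outer step  P_C(x - gamma F(y))
    return T
```

With such a $T$, scheme (43) in Theorem 2 below uses $z_n = \xi Tx_n + (1-\xi)u_n$.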
Now, we apply Theorem 1, combined with Lemma 5, to find a solution of the problem (VIP), that is, a point in the set $\Gamma$.
Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be maximal monotone mappings defined on $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$.
Now, we consider the split fixed point variational inclusion problem (SFPVIP) as follows:
Find a point $x^* \in H_1$ such that $0 \in B_1(x^*)$, $x^* \in \Gamma$
and
$y^* = Ax^* \in H_2$ such that $0 \in B_2(y^*)$.
Theorem 2.
Let $H_1$ and $H_2$ be two real Hilbert spaces and $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Let $\{f_n\}$ be a sequence of $\mu_n$-contractions $f_n : H_1 \to H_1$ with $0 < \mu_* \le \mu_n \le \mu^* < 1$ and let $\{f_n(x)\}$ be uniformly convergent for any $x$ in a bounded subset $D$ of $H_1$. For any $\gamma > 0$ with $L\gamma < 1$, let $T = P_C\big(I - \gamma F\big(P_C(I - \gamma F)\big)\big)$, where $F : H_1 \to H_1$ is an $L$-Lipschitz and monotone operator on $C \subset H_1$, and suppose that $F(T)\cap\Omega \neq \emptyset$. For any $x_0, x_1 \in H_1$, let the sequences $\{y_n\}, \{u_n\}, \{z_n\}$ and $\{x_n\}$ be generated by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ u_n = J_\lambda^{B_1}\big(y_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ay_n\big), \\ z_n = \xi Tx_n + (1 - \xi)u_n, \\ x_{n+1} = \alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n \end{cases}$$
for each $n \ge 1$, where $\xi \in (0,1)$, $\gamma_n := \dfrac{\tau_n\|(J_\lambda^{B_2} - I)Ay_n\|^2}{\|A^*(J_\lambda^{B_2} - I)Ay_n\|^2}$ with $0 < \tau_* \le \tau_n \le \tau^* < 1$, $\{\theta_n\} \subset [0, \bar{\omega})$ for some $\bar{\omega} > 0$ and $\{\alpha_n\}, \{\beta_n\}, \{\delta_n\} \subset (0,1)$ with $\alpha_n + \beta_n + \delta_n = 1$ satisfying the following conditions:
(C1) 
$\lim_{n\to\infty}\alpha_n = 0$;
(C2) 
$\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C3) 
$0 < \epsilon_1 \le \beta_n$, $0 < \epsilon_2 \le \delta_n$;
(C4) 
$\lim_{n\to\infty}\dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.
Then the sequence $\{x_n\}$ generated by (43) converges strongly to a point $p \in F(T)\cap\Omega = \Gamma\cap\Omega$, where $p = P_{\Gamma\cap\Omega}f(p)$.
Proof. 
Since $I - T$ is demiclosed at the origin and $F(T) = \Gamma$, by Lemma 5 and Corollary 1, the sequence $\{x_n\}$ converges strongly to a point $p \in F(T)\cap\Omega$, that is, to a point $p \in \Gamma\cap\Omega$. ☐

4.2. Game Theory

Now, we consider a game of $N$ players in strategic form
$$G = (p_i, S_i),$$
where, for $i = 1, \ldots, N$, $p_i : S = S_1 \times S_2 \times \cdots \times S_N \to \mathbb{R}$ is the (continuous) pay-off function of the $i$th player and $S_i \subset \mathbb{R}^{M_i}$ is the strategy set of the $i$th player such that $M_i = |S_i|$.
Let $S_i$ be a nonempty compact and convex set, $s_i \in S_i$ be the strategy of the $i$th player and $s = (s_1, s_2, \ldots, s_N)$ be the collective strategy of all players. For any $s \in S$ and $z_i \in S_i$ of the $i$th player, the symbols $S_{-i}$, $s_{-i}$ and $(z_i, s_{-i})$ are defined as follows:
  • $S_{-i} := S_1 \times \cdots \times S_{i-1} \times S_{i+1} \times \cdots \times S_N$ is the set of strategies of the remaining players when $s_i$ has been chosen by the $i$th player;
  • $s_{-i} := (s_1, \ldots, s_{i-1}, s_{i+1}, \ldots, s_N)$ is the strategy profile of the remaining players when the $i$th player holds $s_i$;
  • $(z_i, s_{-i}) := (s_1, \ldots, s_{i-1}, z_i, s_{i+1}, \ldots, s_N)$ is the strategy profile in which $z_i$ is chosen by the $i$th player while the remaining players keep $s_{-i}$.
Moreover, $\bar{s}_i$ is a special strategy of the $i$th player, supporting the player to maximize his pay-off, which is equivalent to the following:
$$p_i(\bar{s}_i, s_{-i}) = \max_{z_i \in S_i} p_i(z_i, s_{-i}).$$
Definition 5.
[37,38] Given a game of $N$ players in strategic form, a collective strategy $s^* \in S$ is said to be a Nash equilibrium point if
$$p_i(s^*) = \max_{z_i \in S_i} p_i(z_i, s^*_{-i})$$
for all $i = 1, \ldots, N$, where $s^*_{-i} \in S_{-i}$.
If no player can change his strategy to gain an advantage, then the collective strategy $s^* = (s_i^*, s_{-i}^*)$ is a Nash equilibrium point. In other words, a Nash equilibrium point is a collective strategy of all players in which $s_i^*$ (for each $i$) is a best response of the $i$th player. There is a multi-valued best-response mapping $T_i : S_{-i} \to 2^{S_i}$ such that
$$T_i(s_{-i}) = \arg\max_{z_i \in S_i} p_i(z_i, s_{-i}) = \Big\{ \bar{s}_i \in S_i : p_i(\bar{s}_i, s_{-i}) = \max_{z_i \in S_i} p_i(z_i, s_{-i}) \Big\}$$
for all $s_{-i} \in S_{-i}$. Therefore, we can define the mapping $T : S \to 2^S$ by
$$T := T_1 \times T_2 \times \cdots \times T_N,$$
so that a Nash equilibrium point is a collective strategy $s^*$ with $s^* \in F(T)$. Note that $s^* \in F(T)$ is equivalent to $s_i^* \in T_i(s_{-i}^*)$ for each $i$.
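For a finite two-player game, the best-response mappings $T_1, T_2$ and the fixed points of $T = T_1 \times T_2$ (the pure Nash equilibria) can be enumerated directly; the example below, with made-up pay-off matrices, is purely illustrative and is not the method studied in the paper.

```python
import numpy as np

P1 = np.array([[3, 0], [5, 1]])          # row player's pay-offs
P2 = np.array([[3, 5], [0, 1]])          # column player's pay-offs

def best_response_row(col):              # T_1(s_{-1}): best rows against a given column
    return set(np.flatnonzero(P1[:, col] == P1[:, col].max()))

def best_response_col(row):              # T_2(s_{-2}): best columns against a given row
    return set(np.flatnonzero(P2[row, :] == P2[row, :].max()))

# A pure Nash equilibrium is a fixed point of T = T_1 x T_2.
equilibria = [(r, c) for r in range(2) for c in range(2)
              if r in best_response_row(c) and c in best_response_col(r)]
print(equilibria)                        # [(1, 1)] for this prisoner's-dilemma-like game
```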
Let $H_1$ and $H_2$ be two real Hilbert spaces and let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be multi-valued mappings. Suppose that $S$ is a nonempty compact and convex subset of $H_1 = \mathbb{R}^{MN}$, that $H_2 = \mathbb{R}$ and that the rest of the players have made their best responses $s_{-i}^*$. For each $s \in S$, define a mapping $A : S \to H_2$ by
$$As = p_i(s) - p_i(z_i, s_{-i}^*),$$
where $p_i$ is linear, bounded and convex. Indeed, $A$ is then also linear, bounded and convex.
The Nash equilibrium problem (NEP) is the following:
Find a point $s^* \in S$ such that $As^* \ge 0$, where $0 \in H_2$.
However, the solution of the problem (NEP) may not be unique. The problem (NEP) then reduces to the fixed point problem (FPP) for a multi-valued mapping, i.e.,
Find a point $s^* \in S$ such that $s^* \in Ts^*$,
where $T$ is the multi-valued best-response mapping defined above.
Now, we apply Theorem 1 to find a solution of the problem (FPP).
Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be maximal monotone mappings defined on $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$.
Now, we consider the following problem:
Find a point $s^* \in H_1$ such that $0 \in B_1(s^*)$, $s^* \in Ts^*$
and
$y^* = As^* \in H_2$ such that $0 \in B_2(y^*)$.
Theorem 3.
Assume that $B_1$ and $B_2$ are maximal monotone mappings defined on the Hilbert spaces $H_1$ and $H_2$, respectively. Let $T : S \to CB(S)$ be a multi-valued quasi-nonexpansive mapping such that $T$ is demiclosed at the origin. Let $\{f_n\}$ be a sequence of $\mu_n$-contractions $f_n : H_1 \to H_1$ with $0 < \mu_* \le \mu_n \le \mu^* < 1$ and let $\{f_n(x)\}$ be uniformly convergent for any $x$ in a bounded subset $D$ of $H_1$. Suppose that the problem (NEP) has a nonempty solution set and $F(T)\cap\Omega \neq \emptyset$. For arbitrarily chosen $x_0, x_1 \in H_1$, let the sequences $\{y_n\}, \{u_n\}, \{z_n\}$ and $\{x_n\}$ be generated by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ u_n = J_\lambda^{B_1}\big(y_n + \gamma_n A^*(J_\lambda^{B_2} - I)Ay_n\big), \\ z_n = \xi v_n + (1 - \xi)u_n, \quad v_n \in Tx_n, \\ x_{n+1} = \alpha_n f_n(x_n) + \beta_n x_n + \delta_n z_n \end{cases}$$
for each $n \ge 1$, where $\xi \in (0,1)$, $\gamma_n := \dfrac{\tau_n\|(J_\lambda^{B_2} - I)Ay_n\|^2}{\|A^*(J_\lambda^{B_2} - I)Ay_n\|^2}$ with $0 < \tau_* \le \tau_n \le \tau^* < 1$, $\{\theta_n\} \subset [0, \bar{\omega})$ for some $\bar{\omega} > 0$ and $\{\alpha_n\}, \{\beta_n\}, \{\delta_n\} \subset (0,1)$ with $\alpha_n + \beta_n + \delta_n = 1$ satisfying the following conditions:
(C1) 
$\lim_{n\to\infty}\alpha_n = 0$;
(C2) 
$\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C3) 
$0 < \epsilon_1 \le \beta_n$ and $0 < \epsilon_2 \le \delta_n$;
(C4) 
$\lim_{n\to\infty}\dfrac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.
Then the sequence $\{x_n\}$ generated by Equation (48) converges strongly to a Nash equilibrium point.
Proof. 
By Theorem 1, the sequence $\{x_n\}$ converges strongly to a point $p \in F(T)\cap\Omega$; hence $\{x_n\}$ converges strongly to a Nash equilibrium point. ☐

Author Contributions

All five authors contributed equally to this work. All authors read and approved the final manuscript. P.K. and K.S. conceived and designed the experiments. P.P., W.J. and Y.J.C. analyzed the data. P.P. and W.J. wrote the paper.

Funding

This research was funded by the Royal Golden Jubilee Ph.D. Program (Grant No. PHD/0167/2560), the Petchra Pra Jom Klao Ph.D. Research Scholarship (Grant No. 10/2560) and King Mongkut’s University of Technology North Bangkok (Contract No. KMUTNB-KNOW-61-035).

Acknowledgments

The authors acknowledge the financial support provided by King Mongkut’s University of Technology Thonburi through the “KMUTT 55th Anniversary Commemorative Fund”. Pawicha Phairatchatniyom would like to thank the “Science Graduate Scholarship”, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT) (Grant No. 11/2560). Wachirapong Jirakitpuwapat would like to thank the Petchra Pra Jom Klao Ph.D. Research Scholarship and the King Mongkut’s University of Technology Thonburi (KMUTT) for financial support. Moreover, Kanokwan Sitthithakerngkiet was funded by King Mongkut’s University of Technology North Bangkok, Contract No. KMUTNB-KNOW-61-035.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moudafi, A. Split monotone variational inclusions. J. Opt. Theory Appl. 2011, 150, 275–283.
  2. Shehu, Y.; Agbebaku, D. On split inclusion problem and fixed point problem for multi-valued mappings. Comput. Appl. Math. 2017, 37.
  3. Shehu, Y.; Ogbuisi, F.U. An iterative method for solving split monotone variational inclusion and fixed point problems. Rev. Real Acad. Cienc. Exact. Fís. Nat. Serie A. Mat. 2015, 110, 503–518.
  4. Kazmi, K.; Rizvi, S. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Opt. Lett. 2013, 8.
  5. Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput. 2015, 250, 986–1001.
  6. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Num. Algorithms 2012, 59, 301–323.
  7. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. Technical Report. arXiv 2011, arXiv:1108.5953.
  8. Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Prob. 2010, 26, 055007.
  9. Yao, Y.; Liou, Y.C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2017, 67, 1309–1319.
  10. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Prob. 2011, 27, 015007.
  11. Sahu, D.R.; Pitea, A.; Verma, M. A new iteration technique for nonlinear operators as concerns convex programming and feasibility problems. Numer. Algorithms 2019.
  12. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  13. Combettes, P. The convex feasibility problem in image recovery. Adv. Imag. Electron. Phys. 1996, 95, 155–270.
  14. Kazmi, K.R.; Rizvi, S.H. Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 2013, 21, 44–51.
  15. Peng, J.W.; Wang, Y.; Shyu, D.S.; Yao, J.C. Common solutions of an iterative scheme for variational inclusions, equilibrium problems, and fixed point problems. J. Inequal. Appl. 2008, 15, 720371.
  16. Jung, J.S. Strong convergence theorems for multivalued nonexpansive nonself-mappings in Banach spaces. Nonlinear Anal. Theory Meth. Appl. 2007, 66, 2345–2354.
  17. Panyanak, B. Mann and Ishikawa iterative processes for multivalued mappings in Banach spaces. Comput. Math. Appl. 2007, 54, 872–877.
  18. Shahzad, N.; Zegeye, H. On Mann and Ishikawa iteration schemes for multi-valued maps in Banach spaces. Nonlinear Anal. 2009, 71, 838–844.
  19. Polyak, B. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  20. Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k²). Sov. Math. Dokl. 1983, 27, 372–376.
  21. Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394.
  22. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Opt. 2017, 13, 1–21.
  23. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Wellposedness in optimization and related topics (Gargnano, 1999). Set Valued Anal. 2001, 9, 3–11.
  24. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Opt. 1976, 14, 877–898.
  25. Attouch, H.; Peypouquet, J.; Redont, P. A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J. Opt. 2014, 24, 232–256.
  26. Boţ, R.I.; Csetnek, E.R. An inertial alternating direction method of multipliers. Min. Theory Appl. 2016, 1, 29–49.
  27. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
  28. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas-Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
  29. Chuang, C.S. Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fix. Point Theory Appl. 2013, 2013.
  30. Che, H.; Li, M. Solving split variational inclusion problem and fixed point problem for nonexpansive semigroup without prior knowledge of operator norms. Math. Prob. Eng. 2015, 2015, 1–9.
  31. Ansari, Q.H.; Rehan, A.; Wen, C.F. Split hierarchical variational inequality problems and fixed point problems for nonexpansive mappings. J. Inequal. Appl. 2015, 16, 274.
  32. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  33. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
  34. Glowinski, R.; Le Tallec, P. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics; SIAM Studies in Applied Mathematics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1989.
  35. Von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, NJ, USA, 1947.
  36. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Opt. Theory Appl. 2014, 163, 399–412.
  37. Nash, J.F., Jr. Equilibrium points in n-person games. Proc. Nat. Acad. Sci. USA 1950, 36, 48–49.
  38. Nash, J. Non-cooperative games. Ann. Math. 1951, 54, 286–295.
