Article

An Inertial Subgradient Extragradient Method for Approximating Solutions to Equilibrium Problems in Hadamard Manifolds

by Olawale Kazeem Oyewole and Simeon Reich *
Department of Mathematics, The Technion—Israel Institute of Technology, Haifa 32000, Israel
* Author to whom correspondence should be addressed.
Axioms 2023, 12(3), 256; https://doi.org/10.3390/axioms12030256
Submission received: 12 December 2022 / Revised: 22 February 2023 / Accepted: 23 February 2023 / Published: 1 March 2023

Abstract: In this work, we are concerned with the iterative approximation of solutions to equilibrium problems in the framework of Hadamard manifolds. We introduce a subgradient extragradient type method with a self-adaptive step size. The step size is allowed to increase from iteration to iteration in order to avoid dependence of our method on the Lipschitz constant of the underlying operator, as has been the case in recent articles in this direction. In general, operators satisfying weak monotonicity conditions seem to be more applicable in practice. By using inertial and viscosity techniques, we establish a convergence result for solving a pseudomonotone equilibrium problem under some appropriate conditions. As applications, we use our method to solve some theoretical optimization problems. Finally, we present some numerical illustrations in order to demonstrate the quantitative efficacy and superiority of our proposed method over a previous method from the literature.

1. Introduction

The minimax inequality introduced in 1972 by Ky Fan [1], later renamed the Equilibrium Problem (EP), plays a major role in many fields and provides a unified framework for the study of variational inequalities, game theory, mathematical economics, fixed point theory and optimization theory. The use of the term equilibrium problem is credited to the 1994 paper by Blum and Oettli [2], which followed an earlier article by Muu and Oettli [3]. In the latter paper, three standard examples of the EP were given, viz. fixed point, convex minimization and variational inequality problems. The EP also includes as examples convex differentiable optimization, complementarity, saddle point and Nash equilibrium problems [2,3]. Let $g : K \times K \to \mathbb{R}$ be a bifunction such that $g(x, x) = 0$ for all $x \in K$, where $K$ is a nonempty subset of a topological space $X$. Then, the EP calls for finding a point $x \in K$ such that
$$g(x, y) \ge 0 \quad \forall y \in K.$$
The study of variational inequality, equilibrium and other related optimization problems has recently received considerable attention from researchers in the framework of Riemannian manifolds. Thus, methods and ideas have been extended from linear settings to this more general setting. These generalizations have become necessary because of the advantages they bring forth. For example, nonconvex optimization problems can easily be transformed into problems of convex type by choosing a suitable Riemannian metric [4,5,6]. Another advantage of this extension is that constrained optimization problems can be viewed as unconstrained ones [4,6,7,8]. As a result, classical methods for solving optimization problems have been extended from linear frameworks to Riemannian manifolds. In 2012, Colao et al. [9] studied equilibrium problems on Hadamard manifolds in the following setting: Let $K$ be a nonempty, closed and convex subset of an Hadamard manifold $M$ and let $g : K \times K \to \mathbb{R}$ be a bifunction satisfying $g(x, x) = 0$ for all $x \in M$. An equilibrium problem (EP, for short) on a manifold consists of finding
$$x \in K \ \text{such that} \ g(x, y) \ge 0 \quad \forall y \in K. \tag{1}$$
We denote by $Sol(g, K)$ the solution set of the EP (1). By developing and proving a version of Fan's KKM Lemma in this setting, Colao et al. [9] studied the existence of solutions to the EP in the framework of Hadamard manifolds. For many other studies and results in this direction, see, for example, [10,11,12].
The development of effective iterative algorithms for approximating solutions to optimization problems is another interesting area of research in nonlinear analysis and optimization theory. Methods for the iterative approximation of solutions to equilibrium problems, whether in linear or nonlinear settings, include, for example, the Extragradient Method (EGM) proposed by Korpelevich [13]. The EGM was initially used for solving saddle point problems. It was later adapted for solving variational inequality and then equilibrium problems. Inspired by the EGM and the perceived drawbacks of this method, Censor et al. [14] introduced the Subgradient Extragradient Method (SEGM), which has since been used for solving both variational inequality and equilibrium problems. For solving EPs, Tran et al. [15] introduced an extragradient-like method for approximating solutions to pseudomonotone equilibrium problems. Using an alternative approach, Nguyen et al. [16] introduced a method for finding common solutions to fixed point and equilibrium problems, which is based on the extragradient method proposed in [15]. More recently, Rehman et al. [17] introduced an inertial subgradient extragradient algorithm for solving equilibrium problems. Using a viscosity approach, they proved a strong convergence theorem for an algorithm approximating solutions to EPs with pseudomonotone bifunctions. For more contributions regarding methods for solving EPs in linear settings, see, for example, [12,18,19,20,21].
We note that several of the methods discussed above have been extended to EPs on Hadamard manifolds. The first work of Colao et al. [9] was followed by those of Salahuddin [10] and Li et al. [21]. Neto et al. [22] extended the result of Nguyen et al. [16] to this setting by considering the following algorithm: Given $\lambda_n > 0$, compute
$$y_n = \arg\min_{z \in M} \left\{ g(x_n, z) + \frac{1}{2\lambda_n} d^2(x_n, z) \right\}, \qquad x_{n+1} = \arg\min_{z \in M} \left\{ g(y_n, z) + \frac{1}{2\lambda_n} d^2(x_n, z) \right\},$$
where $0 < \lambda_n < \mu < \min\left\{ \frac{1}{c_1}, \frac{1}{c_2} \right\}$, and $c_1 > 0$ and $c_2 > 0$ are the Lipschitz constants of the bifunction $g$. Fan et al. [23] proposed an explicit extragradient method for solving pseudomonotone equilibrium problems on Hadamard manifolds. Their method uses a variable step size which is monotonically decreasing. These authors proved a convergence theorem for their method and also established an R-linear convergence result for it. Very recently, Ali-Akbari [24] introduced a subgradient extragradient algorithm for approximating solutions to EPs on Hadamard manifolds and proved a convergence theorem for approximating solutions to pseudomonotone equilibrium problems. This theorem depends on the Lipschitz constants of the corresponding bifunctions.
The inertial technique finds crucial application in the construction of effective and accelerated algorithms in fixed point and optimization theory (see, for instance, [25,26]). In this method, the next iterate is determined by the two preceding iterates ($x_{n-1}$ and $x_n$) and an inertial parameter $\theta_n$ which controls the momentum term $x_n - x_{n-1}$. For more recent developments regarding inertial algorithms, we refer the readers to [25,27,28] and the references therein. At this point, we recall that the viscosity method due to Moudafi [29] for a nonexpansive mapping $S$ and a given strict contraction $f$ over $K$ is given by $x_0 \in K$ and $x_{n+1} = \beta_n f(x_n) + (1 - \beta_n) S x_n$, $n \ge 0$, where the sequence $\{\beta_n\} \subset (0, 1)$ converges to zero. The viscosity method has also been adapted to the framework of Hadamard manifolds (see [30,31]). In this setting, the sequence $\{x_n\}$ starting with an arbitrary point $x_0 \in K$ is given by
$$x_{n+1} = \exp_{f(x_n)} \left( (1 - \beta_n) \exp_{f(x_n)}^{-1} S x_n \right), \quad n \ge 0.$$
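For intuition only, here is a minimal sketch (our own illustration, not the authors' code) of the viscosity scheme in the Euclidean special case $M = \mathbb{R}^m$, where $\exp_p v = p + v$ and $\exp_p^{-1} q = q - p$, so the manifold formula above reduces to Moudafi's iteration.

```python
import numpy as np

def viscosity_step(S, f, x, beta_n):
    # With exp_p(v) = p + v, the update
    # x_{n+1} = exp_{f(x_n)}((1 - beta_n) exp_{f(x_n)}^{-1}(S x_n))
    # reduces to x_{n+1} = beta_n * f(x_n) + (1 - beta_n) * S(x_n).
    fx = f(x)
    return fx + (1.0 - beta_n) * (S(x) - fx)

# Toy run: S = projection onto [1, 2] (nonexpansive), f(x) = x / 2 (a strict
# contraction). The iterates slowly approach x* = 1, the fixed point of S
# closest to the "viscosity" path.
S = lambda x: np.clip(x, 1.0, 2.0)
f = lambda x: 0.5 * x
x = np.array([5.0])
for n in range(1, 500):
    x = viscosity_step(S, f, x, beta_n=1.0 / (n + 1))
print(x)  # approaches 1 (slowly, since beta_n ~ 1/n)
```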
Motivated by the subgradient method of [14], the viscosity approach [30,31,32], and by Rehman et al. [17,27] and Ali-Akbari [24], we introduce an inertial subgradient extragradient algorithm for approximating solutions to equilibrium problems on Hadamard manifolds. Employing the viscosity technique, we propose an algorithm for approximating solutions to pseudomonotone equilibrium problems and establish a convergence theorem for it. The proposed algorithm uses a self-adaptive step length which is allowed to increase during the execution of the method. In this way, dependence of the method on the Lipschitz constants is dispensed with. In order to be more precise, we now highlight the following advantages of our result over previous results announced in this direction in the literature:
(i)
The method in the present paper uses an adaptive step size which is allowed to increase from iteration to iteration, unlike the methods in [23,33], where the step sizes decrease monotonically, and the methods of [17,24,27], which rely on a Lipschitz condition imposed on the bifunction. Since the relevant Lipschitz constants can be difficult to estimate, this affects the efficiency of those methods;
(ii)
We note that the sequence of control parameters of the viscosity step of our method is only required to be non-summable. This differs from [30,32], where an extra condition (that the difference between successive parameters be summable) is imposed;
(iii)
The use of the inertial technique makes the convergence of our algorithm faster than that of the method used in [23,24];
(iv)
Our result is obtained in the framework of an Hadamard manifold unlike the results of [15,16,17], which were obtained in real Hilbert spaces.
The rest of our paper is organized as follows: First, we recall some useful definitions and preliminary results in Section 2. In Section 3, we introduce our proposed method, state our main result and present its convergence analysis. Two applications are presented in Section 4. In Section 5, we present the results of a numerical experiment which shows the efficiency of our method. We provide some concluding remarks in Section 6.

2. Preliminaries

Let $M$ be an $m$-dimensional manifold, let $x \in M$ and let $T_xM$ be the tangent space of $M$ at $x$. We denote by $TM = \bigcup_{x \in M} T_xM$ the tangent bundle of $M$. An inner product $\mathcal{R}\langle \cdot, \cdot \rangle$ is called a Riemannian metric on $M$ if $\langle \cdot, \cdot \rangle_x : T_xM \times T_xM \to \mathbb{R}$ is an inner product for all $x \in M$. The corresponding norm induced by the inner product $\mathcal{R}_x\langle \cdot, \cdot \rangle$ on $T_xM$ is denoted by $\|\cdot\|_x$. We will drop the subscript $x$ and adopt $\|\cdot\|$ for the corresponding norm induced by the inner product. A differentiable manifold $M$ endowed with a Riemannian metric $\mathcal{R}\langle \cdot, \cdot \rangle$ is called a Riemannian manifold. In what follows, we denote the Riemannian metric $\mathcal{R}\langle \cdot, \cdot \rangle$ by $\langle \cdot, \cdot \rangle$ when no confusion arises. Given a piecewise smooth curve $\gamma : [a_1, a_2] \to M$ joining $x$ to $y$ (that is, $\gamma(a_1) = x$ and $\gamma(a_2) = y$), we define the length $l(\gamma)$ of $\gamma$ by $l(\gamma) := \int_{a_1}^{a_2} \|\gamma'(t)\| \, dt$. The Riemannian distance $d(x, y)$ is the minimal length over the set of all such curves joining $x$ to $y$. The metric topology induced by $d$ coincides with the original topology on $M$. We denote by $\nabla$ the Levi–Civita connection associated with the Riemannian metric [34].
Let $\gamma$ be a smooth curve in $M$. A vector field $X$ along $\gamma$ is said to be parallel if $\nabla_{\gamma'} X = \mathbf{0}$, where $\mathbf{0}$ is the zero tangent vector. If $\gamma'$ itself is parallel along $\gamma$, then we say that $\gamma$ is a geodesic and $\|\gamma'\|$ is a constant. If $\|\gamma'\| = 1$, then the geodesic $\gamma$ is said to be normalized. A geodesic joining $x$ to $y$ in $M$ is called a minimizing geodesic if its length equals $d(x, y)$. A Riemannian manifold $M$ equipped with the Riemannian distance $d$ is a metric space $(M, d)$. A Riemannian manifold $M$ is said to be complete if for all $x \in M$, all geodesics emanating from $x$ are defined for all $t \in \mathbb{R}$. The Hopf–Rinow theorem [34] states that if $M$ is complete, then any pair of points in $M$ can be joined by a minimizing geodesic. Moreover, if $(M, d)$ is a complete metric space, then every bounded and closed subset of $M$ is compact. If $M$ is a complete Riemannian manifold, then the exponential map $\exp_x : T_xM \to M$ at $x \in M$ is defined by
$$\exp_x v := \gamma_v(1, x), \quad v \in T_xM,$$
where $\gamma_v(\cdot, x)$ is the geodesic starting from $x$ with velocity $v$ (that is, $\gamma_v(0, x) = x$ and $\gamma_v'(0, x) = v$). Then, for any $t$, we have $\exp_x tv = \gamma_v(t, x)$ and $\exp_x \mathbf{0} = \gamma_v(0, x) = x$. Note that the mapping $\exp_x$ is differentiable on $T_xM$ for every $x \in M$. The exponential map $\exp_x$ has an inverse $\exp_x^{-1} : M \to T_xM$. For any $x, y \in M$, we have $d(x, y) = \|\exp_y^{-1} x\| = \|\exp_x^{-1} y\|$ (see [34] for more details). The parallel transport $P_{\gamma, \gamma(a_2), \gamma(a_1)} : T_{\gamma(a_1)}M \to T_{\gamma(a_2)}M$ on the tangent bundle $TM$ along $\gamma : [a_1, a_2] \to M$ with respect to $\nabla$ is defined by
$$P_{\gamma, \gamma(a_2), \gamma(a_1)} v = F(\gamma(a_2)), \quad a_1, a_2 \in \mathbb{R} \ \text{and} \ v \in T_{\gamma(a_1)}M,$$
where $F$ is the unique vector field such that $\nabla_{\gamma'(t)} F = \mathbf{0}$ for all $t \in [a_1, a_2]$ and $F(\gamma(a_1)) = v$. If $\gamma$ is a minimizing geodesic joining $x$ to $y$, then we write $P_{y,x}$ instead of $P_{\gamma, y, x}$. Note that for every $a_1, a_2, r, s \in \mathbb{R}$, we have
$$P_{\gamma(s), \gamma(r)} \circ P_{\gamma(r), \gamma(a_1)} = P_{\gamma(s), \gamma(a_1)} \quad \text{and} \quad P_{\gamma(a_2), \gamma(a_1)}^{-1} = P_{\gamma(a_1), \gamma(a_2)}.$$
Additionally, $P_{\gamma(a_2), \gamma(a_1)}$ is an isometry from $T_{\gamma(a_1)}M$ to $T_{\gamma(a_2)}M$; that is, the parallel transport preserves the inner product:
$$\langle P_{\gamma(a_2), \gamma(a_1)}(u), P_{\gamma(a_2), \gamma(a_1)}(v) \rangle_{\gamma(a_2)} = \langle u, v \rangle_{\gamma(a_1)}, \quad \forall u, v \in T_{\gamma(a_1)}M.$$
We now give some examples of Hadamard manifolds.
  • Space 1: Let $\mathbb{R}_{++} = \{x \in \mathbb{R} : x > 0\}$ and let $M = (\mathbb{R}_{++}, \langle \cdot, \cdot \rangle)$ be the Riemannian manifold equipped with the inner product $\langle x, y \rangle = xy$, $x, y \in \mathbb{R}$. Since the sectional curvature of $M$ is zero [35], $M$ is an Hadamard manifold. Let $x, y \in M$ and $v \in T_xM$ with $\|v\|^2 = 1$. Then, $d(x, y) = |\ln x - \ln y|$, $\exp_x tv = x e^{(v/x)t}$, $t \in (0, +\infty)$, and $\exp_x^{-1} y = x \ln \frac{y}{x}$ (a code sketch of these maps appears after Space 3 below).
  • Space 2: Let $\mathbb{R}_{++}^m$ be the product space $\mathbb{R}_{++}^m := \{(x_1, x_2, \ldots, x_m) : x_i \in \mathbb{R}_{++}, \; i = 1, 2, \ldots, m\}$. Let $M = (\mathbb{R}_{++}^m, \langle \cdot, \cdot \rangle)$ be the $m$-dimensional Hadamard manifold with the Riemannian metric $\langle p, q \rangle = p^T q$ and the distance $d(x, y) = \left| \ln \frac{x}{y} \right| = \left| \ln \prod_{i=1}^{m} \frac{x_i}{y_i} \right|$, where $x, y \in M$ with $x = \{x_i\}_{i=1}^m$ and $y = \{y_i\}_{i=1}^m$.
  • Space 3: See [36]. Let $M = \mathbb{H}^n$ be the $n$-dimensional hyperbolic space of constant sectional curvature $k = -1$. The metric of $\mathbb{H}^n$ is induced by the Lorentz metric $\{\cdot, \cdot\}$ and will be denoted by the same symbol. Consider the following model for $\mathbb{H}^n$:
$$\mathbb{H}^n = \left\{ \xi = (\xi_1, \xi_2, \ldots, \xi_{n+1}) \in \mathbb{R}^{n+1} : \xi_{n+1} > 0 \ \text{and} \ \{\xi, \xi\} = -1 \right\}.$$
Let $x, y \in \mathbb{H}^n$ and $v \in T_x\mathbb{H}^n$. Then, a normalized geodesic $\gamma_x$ starting from $\gamma_x(0) = x$ is given by
$$\gamma_x(t) = (\cosh t)\, x + (\sinh t)\, v.$$
We have $\{u, x\} = 0$ for all $u \in T_x\mathbb{H}^n$. Also,
$$\exp_x^{-1} y = \cosh^{-1}(-\{x, y\}) \, \frac{y + \{x, y\}\, x}{\sqrt{\{x, y\}^2 - 1}}.$$
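As an informal illustration of how these closed-form maps look in code, the following Python sketch (our own, using the formulas stated above for Spaces 1 and 3, with the sign convention $\{x, y\} \le -1$ on $\mathbb{H}^n$) implements the basic geometric primitives and checks their consistency.

```python
import numpy as np

# --- Space 1: M = R_{++} ---
def dist1(x, y):              # d(x, y) = |ln x - ln y|
    return abs(np.log(x) - np.log(y))

def exp1(x, v, t=1.0):        # exp_x(t v) = x * e^{(v/x) t}
    return x * np.exp((v / x) * t)

def log1(x, y):               # exp_x^{-1} y = x * ln(y / x)
    return x * np.log(y / x)

# --- Space 3: H^n in the hyperboloid model, Lorentz form {., .} ---
def lorentz(x, y):
    return np.dot(x[:-1], y[:-1]) - x[-1] * y[-1]

def log3(x, y):               # exp_x^{-1} y, using {x, y} <= -1 on H^n
    a = lorentz(x, y)
    return np.arccosh(-a) * (y + a * x) / np.sqrt(a * a - 1.0)

# Consistency checks.
x, y = 2.0, 5.0
assert np.isclose(exp1(x, log1(x, y)), y)        # exp_x(exp_x^{-1} y) = y
u = np.array([np.sinh(0.0), np.cosh(0.0)])       # two points on H^1
w = np.array([np.sinh(1.5), np.cosh(1.5)])
v = log3(u, w)
assert np.isclose(np.sqrt(lorentz(v, v)), 1.5)   # |v| equals d(u, w) = 1.5
```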
A subset $K \subseteq M$ is said to be convex if for any two points $x, y \in K$, the geodesic $\gamma$ joining $x$ to $y$ is contained in $K$. That is, if $\gamma : [a_1, a_2] \to M$ is a geodesic such that $x = \gamma(a_1)$ and $y = \gamma(a_2)$, then $\gamma((1 - t)a_1 + t a_2) \in K$ for all $t \in [0, 1]$. A complete, simply connected Riemannian manifold of non-positive sectional curvature is called an Hadamard manifold. We denote by $M$ a finite-dimensional Hadamard manifold. Henceforth, unless otherwise stated, we denote by $K$ a nonempty, closed and convex subset of $M$.
We now collect some results and definitions which we shall use in the next section.
Proposition 1
([34]). Let $x \in M$. The exponential mapping $\exp_x : T_xM \to M$ is a diffeomorphism. For any two points $x, y \in M$, there exists a unique normalized geodesic joining $x$ to $y$, which is given by
$$\gamma(t) = \exp_x t \exp_x^{-1} y, \quad t \in [0, 1].$$
A geodesic triangle $\Delta(p, q, r)$ of a Riemannian manifold $M$ is a set consisting of three points $p$, $q$ and $r$, together with three minimizing geodesics joining these points.
Proposition 2
([34]). Let $\Delta(p, q, r)$ be a geodesic triangle in $M$. Then,
$$d^2(p, q) + d^2(q, r) - 2\langle \exp_q^{-1} p, \exp_q^{-1} r \rangle \le d^2(p, r)$$
and
$$d^2(p, q) \le \langle \exp_p^{-1} r, \exp_p^{-1} q \rangle + \langle \exp_q^{-1} r, \exp_q^{-1} p \rangle.$$
Moreover, if $\theta$ is the angle at $p$, then we have
$$\langle \exp_p^{-1} q, \exp_p^{-1} r \rangle = d(q, p)\, d(p, r) \cos\theta.$$
Also,
$$\|\exp_p^{-1} q\|^2 = \langle \exp_p^{-1} q, \exp_p^{-1} q \rangle = d^2(p, q).$$
For any $x \in M$ and any nonempty, closed and convex set $K \subseteq M$, there exists a unique point $y \in K$ such that $d(x, y) \le d(x, z)$ for all $z \in K$. This unique point $y$ is called the nearest point projection of $x$ onto the closed and convex set $K$ and is denoted by $P_K(x)$.
Lemma 1
([37]). For any $x \in M$, there exists a unique nearest point projection $y = P_K(x)$. Furthermore, the following inequality holds:
$$\langle \exp_y^{-1} x, \exp_y^{-1} z \rangle \le 0 \quad \forall z \in K.$$
We call a mapping $f : M \to M$ a $\psi$-contraction if
$$d(f(x), f(y)) \le \psi(d(x, y)) \quad \forall x, y \in M,$$
where $\psi : [0, +\infty) \to [0, +\infty)$ is a function satisfying the following two conditions:
(i)
$\psi(s) < s$ for all $s > 0$;
(ii)
ψ is continuous.
Remark 1.
(a)
$\psi(s) = \frac{s}{s+1}$ for all $s \ge 0$ satisfies conditions (i) and (ii) above.
(b)
If $\psi(s) = ks$ for all $s \ge 0$ and $k \in (0, 1)$, then $f$ is a $\psi$-contraction mapping with Lipschitz constant $k$.
(c)
Any $\psi$-contraction mapping is nonexpansive.
Any $\psi$-contraction belongs to the class of mappings introduced by Boyd and Wong [38], who established the existence and uniqueness of a fixed point for mappings in this class in the framework of complete metric spaces.
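As a quick toy check (our own illustration, not part of the original text), the map $f(x) = \frac{x}{1+x}$ on $[0, +\infty)$ is a $\psi$-contraction with $\psi(s) = \frac{s}{s+1}$ from Remark 1(a), since $|f(x) - f(y)| = \frac{|x-y|}{(1+x)(1+y)}$ and $(1+x)(1+y) \ge 1 + |x - y|$ for $x, y \ge 0$. The following sketch verifies the contraction inequality numerically.

```python
import numpy as np

# Numerical check: |f(x) - f(y)| <= psi(|x - y|) with psi(s) = s / (1 + s).
f = lambda x: x / (1.0 + x)
psi = lambda s: s / (1.0 + s)

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 100, 10_000), rng.uniform(0, 100, 10_000)
assert np.all(np.abs(f(x) - f(y)) <= psi(np.abs(x - y)) + 1e-12)
```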
The next lemma presents the relationship between triangles in $\mathbb{R}^2$ and geodesic triangles in Riemannian manifolds (see [39]).
Lemma 2
([39]). Let $\Delta(u_1, u_2, u_3)$ be a geodesic triangle in $M$. Then, there exists a triangle $\Delta(\bar{u}_1, \bar{u}_2, \bar{u}_3)$ corresponding to $\Delta(u_1, u_2, u_3)$ such that $d(u_i, u_{i+1}) = \|\bar{u}_i - \bar{u}_{i+1}\|$, with the indices taken modulo 3. This triangle is unique up to isometries of $\mathbb{R}^2$.
The triangle $\Delta(\bar{u}_1, \bar{u}_2, \bar{u}_3)$ in Lemma 2 is called the comparison triangle for $\Delta(u_1, u_2, u_3) \subset M$. The points $\bar{u}_1$, $\bar{u}_2$ and $\bar{u}_3$ are called the comparison points of the points $u_1$, $u_2$ and $u_3$ in $M$.
A function $h : M \to \mathbb{R}$ is said to be geodesically convex if for any geodesic $\gamma$ in $M$, the composition $h \circ \gamma : [u, v] \to \mathbb{R}$ is convex, that is,
$$h \circ \gamma(\lambda u + (1 - \lambda) v) \le \lambda\, h \circ \gamma(u) + (1 - \lambda)\, h \circ \gamma(v), \quad u, v \in \mathbb{R}, \; \lambda \in [0, 1].$$
The subdifferential of a function $h : M \to \mathbb{R}$ at a point $x \in M$ is given by
$$\partial h(x) := \left\{ z \in T_xM : h(y) \ge h(x) + \langle z, \exp_x^{-1} y \rangle \;\; \forall y \in M \right\}.$$
The convex function $h$ is called subdifferentiable at a point $x \in M$ if the set $\partial h(x)$ is nonempty. The elements of $\partial h(x)$ are called the subgradients of $h$ at $x$. The set $\partial h(x)$ is closed and convex, and it is known to be nonempty if $h$ is convex on $M$. We denote by $\partial_2 h$ the subdifferential of $h$ with respect to its second argument, that is, the subdifferential of $h(x, \cdot)$ for fixed $x \in M$. The normal cone to $K$ at a point $x \in K$, denoted by $N_K(x)$, is defined by
$$N_K(x) := \left\{ z \in T_xM : \langle z, \exp_x^{-1} y \rangle \le 0 \;\; \forall y \in K \right\}.$$
Lemma 3
([20]). Let $x_0 \in M$ and let $\{x_n\} \subset M$ be such that $x_n \to x_0$. Then, for any $y \in M$, we have $\exp_{x_n}^{-1} y \to \exp_{x_0}^{-1} y$ and $\exp_y^{-1} x_n \to \exp_y^{-1} x_0$.
The following definitions can be found in [40]. Let $K$ be a nonempty, closed and convex subset of $M$. A bifunction $g : M \times M \to \mathbb{R}$ is said to be
(i)
Monotone on $K$ if
$$g(x, y) + g(y, x) \le 0 \quad \forall x, y \in K;$$
(ii)
Pseudomonotone on $K$ if
$$g(x, y) \ge 0 \implies g(y, x) \le 0 \quad \forall x, y \in K;$$
(iii)
Lipschitz-type continuous if there exist constants $c_1 > 0$ and $c_2 > 0$ such that
$$g(x, y) + g(y, z) \ge g(x, z) - c_1 d^2(x, y) - c_2 d^2(y, z) \quad \forall x, y, z \in K.$$
For solving EP (1), we make the following assumptions concerning $g$ on $K$:
(A1)
$g$ is pseudomonotone on $K$ and $g(x, x) = 0$ for all $x \in M$;
(A2)
$g(\cdot, y)$ is upper semicontinuous for all $y \in M$;
(A3)
$g(x, \cdot)$ is convex and subdifferentiable for each fixed $x \in M$;
(A4)
$g$ satisfies a Lipschitz-type condition on $M$.
The following propositions (see [41]) are very useful in our convergence analysis:
Proposition 3.
Let $M$ be an Hadamard manifold and let $d : M \times M \to \mathbb{R}$ be the distance function. Then, the function $d$ is convex with respect to the product Riemannian metric. In other words, given any pair of geodesics $\gamma_1 : [0, 1] \to M$ and $\gamma_2 : [0, 1] \to M$, for all $t \in [0, 1]$ we have
$$d(\gamma_1(t), \gamma_2(t)) \le (1 - t)\, d(\gamma_1(0), \gamma_2(0)) + t\, d(\gamma_1(1), \gamma_2(1)).$$
In particular, for each $y \in M$, the function $d(\cdot, y) : M \to \mathbb{R}$ is convex.
Proposition 4.
Let $M$ be an Hadamard manifold and let $x \in M$. Let $\rho_x(y) = \frac{1}{2} d^2(x, y)$. Then, $\rho_x$ is strictly convex and its gradient at $y$ is given by
$$\nabla \rho_x(y) = -\exp_y^{-1} x.$$
Proposition 5.
Let $K$ be a nonempty convex subset of an Hadamard manifold $M$ and let $h : K \to \mathbb{R}$ be a proper, convex and lower semicontinuous function on $K$. Then, a point $x$ solves the convex minimization problem
$$\min_{x \in K} h(x)$$
if and only if $0 \in \partial h(x) + N_K(x)$.
Lemma 4
([42]). Let $u, v \in \mathbb{R}^n$ and $\lambda \in [0, 1]$. Then, the following relations hold:
(i)
$\|\lambda u + (1 - \lambda) v\|^2 = \lambda \|u\|^2 + (1 - \lambda) \|v\|^2 - \lambda(1 - \lambda) \|u - v\|^2$;
(ii)
$\|u \pm v\|^2 = \|u\|^2 \pm 2\langle u, v \rangle + \|v\|^2$;
(iii)
$\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v \rangle$.
Lemma 5
([43]). Let $\{u_n\}$ be a sequence of non-negative real numbers, let $\{\alpha_n\}$ be a sequence of real numbers in $(0, 1)$ such that $\sum_{n=1}^{\infty} \alpha_n = \infty$, and let $\{v_n\}$ be a sequence of real numbers. Assume that
$$u_{n+1} \le (1 - \alpha_n) u_n + \alpha_n v_n, \quad n \ge 1.$$
If $\limsup_{k \to \infty} v_{n_k} \le 0$ for every subsequence $\{u_{n_k}\}$ of $\{u_n\}$ satisfying the condition
$$\liminf_{k \to \infty} (u_{n_k + 1} - u_{n_k}) \ge 0,$$
then $\lim_{n \to \infty} u_n = 0$.

3. Main Result

In this section, we first propose a convergent algorithm for approximating a solution to the EP (1) and then present its convergence analysis. Let $f : M \to M$ be a $\psi$-contraction, where $\psi : [0, +\infty) \to [0, +\infty)$ is a continuous and increasing function satisfying $\psi(0) = 0$ and $\psi(s) < s$ for all $s > 0$. The solution set $Sol(g, K)$ is closed and convex [9,10]. We assume that $Sol(g, K)$ is nonempty.
Assume that $\{\epsilon_n\}$ is a positive sequence such that $\epsilon_n = o(\beta_n)$, that is, $\lim_{n \to \infty} \frac{\epsilon_n}{\beta_n} = 0$, where $\{\beta_n\}$ is a sequence in $(0, 1)$ satisfying
(C1)
$\lim_{n \to \infty} \beta_n = 0$ and $\sum_{n=1}^{\infty} \beta_n = \infty$.
Remark 2.
We observe that Algorithm 1 provides us with a self-adaptive method in which the step length may increase from iteration to iteration, unlike the monotone decreasing sequence of step lengths in [17]. By this construction, the dependence of the method on the Lipschitz constants of the bifunction $g$ is dispensed with.
Algorithm 1: Inertial subgradient extragradient method for solving EP (ISEMEP)
Initialization: Choose $x_0, x_1 \in K$, $\lambda_1 > 0$, $\mu \in (0, 1)$, a non-negative sequence of real numbers $\{\delta_n\}$ such that $\sum_{n=1}^{\infty} \delta_n < +\infty$, and $\theta > 0$.
Step 1: Given $x_n$, $x_{n-1}$ and $\lambda_n$, choose $\theta_n$ such that $\theta_n \in [0, \bar{\theta}_n]$, where
$$\bar{\theta}_n = \begin{cases} \min\left\{ \theta, \dfrac{\epsilon_n}{d(x_n, x_{n-1})} \right\}, & \text{if } x_n \ne x_{n-1}, \\ \theta, & \text{otherwise}. \end{cases}$$

Compute
$$w_n = \exp_{x_n}(\theta_n \exp_{x_n}^{-1} x_{n-1}), \qquad y_n = \arg\min_{y \in K} \left\{ g(w_n, y) + \frac{1}{2\lambda_n} d^2(w_n, y) \right\}.$$
If $y_n = w_n$, then stop. Otherwise, go to the next step.
Step 2: Define the half-space $T_n$ by
$$T_n := \left\{ y \in M : \langle \exp_{y_n}^{-1} w_n - \lambda_n v_n, \exp_{y_n}^{-1} y \rangle \le 0 \right\}$$
with $v_n \in \partial_2 g(w_n, y_n)$, and compute
$$z_n = \arg\min_{y \in T_n} \left\{ g(y_n, y) + \frac{1}{2\lambda_n} d^2(w_n, y) \right\}.$$

Step 3: Compute
$$x_{n+1} = \gamma_n(1 - \beta_n), \quad n \ge 0,$$
where $\gamma_n : [0, 1] \to M$ is the geodesic joining $f(x_n)$ to $z_n$, that is, $\gamma_n(0) = f(x_n)$ and $\gamma_n(1) = z_n$ for all $n \ge 0$. Then update the step size:
$$\lambda_{n+1} = \begin{cases} \min\left\{ \lambda_n + \delta_n, \; \dfrac{\mu \left[ d^2(y_n, w_n) + d^2(z_n, y_n) \right]}{2\left[ g(w_n, z_n) - g(w_n, y_n) - g(y_n, z_n) \right]} \right\}, & \text{if } g(w_n, z_n) - g(w_n, y_n) - g(y_n, z_n) > 0, \\ \lambda_n + \delta_n, & \text{otherwise}. \end{cases} \tag{8}$$

Set n : = n + 1 and return to Step 1.
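For illustration only (our own sketch, not the authors' code), the following Python fragment isolates the two parameter rules of Algorithm 1: the choice of the inertial parameter $\theta_n$ in Step 1 and the self-adaptive step size update (8) in Step 3. The argmin subproblems and the manifold maps are problem-specific and are therefore left out here.

```python
def choose_theta(theta, eps_n, dist_xn_xprev):
    """Step 1: pick theta_n in [0, theta_bar_n]; this sketch takes the upper bound."""
    if dist_xn_xprev == 0:
        return theta
    return min(theta, eps_n / dist_xn_xprev)

def update_lambda(lam_n, delta_n, mu, d_yw, d_zy, g_wz, g_wy, g_yz):
    """Step 3: self-adaptive step size (8). Since sum(delta_n) < infinity, the
    step size may increase from iteration to iteration, and no knowledge of
    the Lipschitz-type constants c_1, c_2 is needed."""
    denom = g_wz - g_wy - g_yz   # g(w_n,z_n) - g(w_n,y_n) - g(y_n,z_n)
    if denom > 0:
        return min(lam_n + delta_n,
                   mu * (d_yw ** 2 + d_zy ** 2) / (2.0 * denom))
    return lam_n + delta_n
```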
We first show that $K \subseteq T_n$. From the definition of $y_n$ and Proposition 5, we see that
$$0 \in \partial_2 \left\{ g(w_n, y) + \frac{1}{2\lambda_n} d^2(w_n, y) \right\}(y_n) + N_K(y_n).$$
Hence, there exist $p_n \in \partial_2 g(w_n, y_n)$ and $q_n \in N_K(y_n)$ such that
$$\lambda_n p_n - \exp_{y_n}^{-1} w_n + q_n = 0.$$
Therefore, for all $y \in K$, we obtain
$$\langle \lambda_n p_n, \exp_{y_n}^{-1} y \rangle = \langle \exp_{y_n}^{-1} w_n, \exp_{y_n}^{-1} y \rangle - \langle q_n, \exp_{y_n}^{-1} y \rangle. \tag{9}$$
Note that $q_n \in N_K(y_n)$ implies that $\langle q_n, \exp_{y_n}^{-1} y \rangle \le 0$ for all $y \in K$. Thus, it follows from (9) that
$$\langle \exp_{y_n}^{-1} w_n, \exp_{y_n}^{-1} y \rangle \le \langle \lambda_n p_n, \exp_{y_n}^{-1} y \rangle.$$
That is,
$$\langle \exp_{y_n}^{-1} w_n - \lambda_n p_n, \exp_{y_n}^{-1} y \rangle \le 0.$$
This implies that $K \subseteq T_n$.
Lemma 6.
Let $\{\lambda_n\}$ be the sequence given by (8). Then, $\lim_{n \to \infty} \lambda_n = \lambda$ with
$$\min\left\{ \frac{\mu}{2\max\{c_1, c_2\}}, \lambda_1 \right\} \le \lambda \le \lambda_1 + \delta,$$
where $\delta = \sum_{n=0}^{\infty} \delta_n$.
Proof. 
Assume that (A4) holds. Then there exist $c_1$ and $c_2$ such that
$$g(w_n, z_n) - g(w_n, y_n) - g(y_n, z_n) \le c_1 d^2(y_n, w_n) + c_2 d^2(z_n, y_n) \le \max\{c_1, c_2\}\left( d^2(y_n, w_n) + d^2(z_n, y_n) \right).$$
Thus,
$$\frac{\mu \left( d^2(y_n, w_n) + d^2(z_n, y_n) \right)}{2\left( g(w_n, z_n) - g(w_n, y_n) - g(y_n, z_n) \right)} \ge \frac{\mu}{2\max\{c_1, c_2\}}.$$
Using induction, we obtain
$$\min\left\{ \frac{\mu}{2\max\{c_1, c_2\}}, \lambda_1 \right\} \le \lambda_n \le \lambda_1 + \delta.$$
It is not difficult to show that $\lim_{n \to \infty} \lambda_n = \lambda$ exists. Therefore, the convergence of $\{\lambda_n\}$ implies that
$$\min\left\{ \frac{\mu}{2\max\{c_1, c_2\}}, \lambda_1 \right\} \le \lambda \le \lambda_1 + \delta.$$
 □
Lemma 7.
The sequence $\{x_n\}$ defined recursively by Algorithm 1 satisfies, for every $p \in Sol(g, K)$, the inequality
$$d^2(z_n, p) \le d^2(w_n, p) - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\left[ d^2(w_n, y_n) + d^2(y_n, z_n) \right].$$
Proof. 
Let $p \in Sol(g, K)$. Using the definition of $z_n$ and Proposition 5, we find that
$$0 \in \partial_2 \left\{ g(y_n, y) + \frac{1}{2\lambda_n} d^2(w_n, y) \right\}(z_n) + N_{T_n}(z_n).$$
Hence, there exist $a_n \in \partial_2 g(y_n, z_n)$ and $b_n \in N_{T_n}(z_n)$ such that
$$\lambda_n a_n - \exp_{z_n}^{-1} w_n + b_n = 0.$$
Hence, for all $y \in T_n$, we obtain
$$\langle \lambda_n a_n, \exp_{z_n}^{-1} y \rangle = \langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} y \rangle - \langle b_n, \exp_{z_n}^{-1} y \rangle.$$
Since $b_n \in N_{T_n}(z_n)$, we have $\langle b_n, \exp_{z_n}^{-1} y \rangle \le 0$ for all $y \in T_n$. Therefore,
$$\langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} y \rangle \le \langle \lambda_n a_n, \exp_{z_n}^{-1} y \rangle \quad \forall y \in T_n. \tag{10}$$
From the definition of the subdifferential and the fact that $a_n \in \partial_2 g(y_n, z_n)$, it follows that
$$\langle a_n, \exp_{z_n}^{-1} y \rangle \le g(y_n, y) - g(y_n, z_n) \quad \forall y \in M. \tag{11}$$
We obtain from (10) and (11) that
$$\langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} y \rangle \le \lambda_n \left[ g(y_n, y) - g(y_n, z_n) \right] \quad \forall y \in T_n. \tag{12}$$
Letting $y = p$ in (12), we have
$$\langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} p \rangle \le \lambda_n \left[ g(y_n, p) - g(y_n, z_n) \right].$$
Since $p \in Sol(g, K)$, we have $g(p, y_n) \ge 0$. It follows from the pseudomonotonicity of $g$ that $g(y_n, p) \le 0$. Thus, we obtain
$$\langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} p \rangle \le -\lambda_n g(y_n, z_n). \tag{13}$$
It is easy to see from (8) that
$$-g(y_n, z_n) \le \frac{\mu}{2\lambda_{n+1}} d^2(y_n, w_n) + \frac{\mu}{2\lambda_{n+1}} d^2(y_n, z_n) - g(w_n, z_n) + g(w_n, y_n), \tag{14}$$
which implies, since $\lambda_n > 0$, that
$$-\lambda_n g(y_n, z_n) \le \frac{\mu \lambda_n d^2(y_n, w_n)}{2\lambda_{n+1}} + \frac{\mu \lambda_n d^2(y_n, z_n)}{2\lambda_{n+1}} - \lambda_n \left[ g(w_n, z_n) - g(w_n, y_n) \right].$$
It follows from $z_n \in T_n$ that $\langle \exp_{y_n}^{-1} w_n - \lambda_n v_n, \exp_{y_n}^{-1} z_n \rangle \le 0$, which implies that
$$\langle \exp_{y_n}^{-1} w_n, \exp_{y_n}^{-1} z_n \rangle \le \lambda_n \langle v_n, \exp_{y_n}^{-1} z_n \rangle. \tag{15}$$
Since $v_n \in \partial_2 g(w_n, y_n)$, it follows from the definition of the subdifferential that
$$\langle v_n, \exp_{y_n}^{-1} y \rangle \le g(w_n, y) - g(w_n, y_n).$$
Setting $y = z_n$ in the above inequality, we have
$$\langle v_n, \exp_{y_n}^{-1} z_n \rangle \le g(w_n, z_n) - g(w_n, y_n).$$
Thus, it follows from the above inequality and (15) that
$$\langle \exp_{y_n}^{-1} w_n, \exp_{y_n}^{-1} z_n \rangle \le \lambda_n \left[ g(w_n, z_n) - g(w_n, y_n) \right]. \tag{16}$$
Combining (13), (14) and (16), we obtain
$$\langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} p \rangle \le \frac{\mu \lambda_n d^2(y_n, w_n)}{2\lambda_{n+1}} + \frac{\mu \lambda_n d^2(z_n, y_n)}{2\lambda_{n+1}} - \langle \exp_{y_n}^{-1} w_n, \exp_{y_n}^{-1} z_n \rangle. \tag{17}$$
Using Equation (3) and Proposition 2, we obtain
$$d^2(w_n, z_n) + d^2(z_n, p) - d^2(w_n, p) \le 2 \langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} p \rangle$$
and
$$-2 \langle \exp_{y_n}^{-1} w_n, \exp_{y_n}^{-1} z_n \rangle \le d^2(w_n, z_n) - d^2(w_n, y_n) - d^2(z_n, y_n).$$
Using this in (17), we obtain
$$d^2(w_n, z_n) + d^2(z_n, p) - d^2(w_n, p) \le \frac{\mu \lambda_n d^2(w_n, y_n)}{\lambda_{n+1}} + \frac{\mu \lambda_n d^2(z_n, y_n)}{\lambda_{n+1}} + d^2(w_n, z_n) - d^2(w_n, y_n) - d^2(z_n, y_n).$$
Therefore, we have
$$d^2(z_n, p) \le d^2(w_n, p) - \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\left[ d^2(w_n, y_n) + d^2(z_n, y_n) \right].$$
   □
Lemma 8.
Let $f : K \to K$ be a $\psi$-contraction and assume that
$$0 < \kappa := \sup\left\{ \frac{\psi(d(x_n, q))}{d(x_n, q)} : x_n \ne q, \; n \ge 0, \; q \in Sol(g, K) \right\} < 1.$$
Then, the sequence $\{x_n\}$ generated by Algorithm 1 is bounded.
Proof. 
Fix $n \ge 1$ and $p \in Sol(g, K)$, and consider the geodesic triangles $\Delta(w_n, x_n, p)$ and $\Delta(x_n, x_{n-1}, p)$ with the comparison triangles $\Delta(\bar{w}_n, \bar{x}_n, \bar{p})$ and $\Delta(\bar{x}_n, \bar{x}_{n-1}, \bar{p})$. Then, by Lemma 2, we have $d(w_n, p) = \|\bar{w}_n - \bar{p}\|$, $d(x_n, p) = \|\bar{x}_n - \bar{p}\|$ and $d(x_n, x_{n-1}) = \|\bar{x}_n - \bar{x}_{n-1}\|$. Recall from Algorithm 1 that $w_n = \exp_{x_n}(\theta_n \exp_{x_n}^{-1} x_{n-1})$. The comparison point of $w_n$ is $\bar{w}_n = \bar{x}_n + \theta_n(\bar{x}_{n-1} - \bar{x}_n)$. Thus, we obtain
$$d(w_n, p) = \|\bar{w}_n - \bar{p}\| = \|\bar{x}_n + \theta_n(\bar{x}_{n-1} - \bar{x}_n) - \bar{p}\| \le \|\bar{x}_n - \bar{p}\| + \theta_n \|\bar{x}_n - \bar{x}_{n-1}\| = d(x_n, p) + \beta_n \cdot \frac{\theta_n}{\beta_n}\, d(x_n, x_{n-1}).$$
Since $\frac{\theta_n}{\beta_n} \|\bar{x}_n - \bar{x}_{n-1}\| = \frac{\theta_n}{\beta_n}\, d(x_n, x_{n-1}) \to 0$ as $n \to \infty$, there exists a constant $M_1 > 0$ such that $\frac{\theta_n}{\beta_n}\, d(x_n, x_{n-1}) \le M_1$ for all $n \ge 1$. Hence, we obtain
$$d(w_n, p) \le d(x_n, p) + \beta_n M_1. \tag{18}$$
It is not difficult to see that
$$d^2(w_n, p) \le d^2(x_n, p) + 2\theta_n d(x_n, p)\, d(x_n, x_{n-1}) + \theta_n^2 d^2(x_n, x_{n-1}). \tag{19}$$
Next, using the definition of $x_{n+1}$, the convexity of the Riemannian distance, Lemma 7 and (18), we see that
$$\begin{aligned} d(x_{n+1}, p) &= d(\gamma_n(1 - \beta_n), p) \le \beta_n d(\gamma_n(0), p) + (1 - \beta_n) d(\gamma_n(1), p) \\ &= \beta_n d(f(x_n), p) + (1 - \beta_n) d(z_n, p) \\ &\le \beta_n \left[ d(f(x_n), f(p)) + d(f(p), p) \right] + (1 - \beta_n) d(w_n, p) \\ &\le \beta_n \psi(d(x_n, p)) + \beta_n d(f(p), p) + (1 - \beta_n) d(w_n, p). \end{aligned}$$
Since $0 < \kappa = \sup\left\{ \frac{\psi(d(x_n, q))}{d(x_n, q)} : x_n \ne q, \; n \ge 0, \; q \in Sol(g, K) \right\} < 1$, we find that
$$\begin{aligned} d(x_{n+1}, p) &\le \beta_n \kappa\, d(x_n, p) + (1 - \beta_n) d(w_n, p) + \beta_n d(f(p), p) \\ &\le \beta_n \kappa\, d(x_n, p) + (1 - \beta_n)\left[ d(x_n, p) + \beta_n M_1 \right] + \beta_n d(f(p), p) \\ &= (1 - \beta_n(1 - \kappa))\, d(x_n, p) + \beta_n (1 - \beta_n) M_1 + \beta_n d(f(p), p) \\ &\le \max\left\{ d(x_n, p), \frac{M_1 + d(f(p), p)}{1 - \kappa} \right\} \le \cdots \le \max\left\{ d(x_0, p), \frac{M_1 + d(f(p), p)}{1 - \kappa} \right\}. \end{aligned}$$
Hence, the sequence $\{x_n\}$ is bounded. Consequently, the sequences $\{w_n\}$, $\{y_n\}$ and $\{z_n\}$ are bounded too.    □
Theorem 1.
Let $f : K \to K$ be a $\psi$-contraction and assume that conditions (A1)–(A4) hold. If $0 < \kappa = \sup\left\{ \frac{\psi(d(x_n, q))}{d(x_n, q)} : x_n \ne q, \; n \ge 0, \; q \in Sol(g, K) \right\} < 1$, then the sequence $\{x_n\}$ generated by Algorithm 1 converges to a point $p \in Sol(g, K)$, where $p = P_{Sol(g, K)} f(p)$ and $P_{Sol(g, K)}$ is the nearest point projection of $K$ onto $Sol(g, K)$.
Proof. 
Let $p \in Sol(g, K)$ satisfy $p = P_{Sol(g, K)} f(p)$. Note that this fixed point equation has a unique solution by the Boyd–Wong fixed point theorem [38]. Fix $n \ge 1$ and let $w = f(x_n)$, $z = z_n$ and $y = f(p)$. Consider the following geodesic triangles with their respective comparison triangles in $\mathbb{R}^2$: $\Delta(w, z, p)$ and $\Delta(\bar{w}, \bar{z}, \bar{p})$, $\Delta(y, z, w)$ and $\Delta(\bar{y}, \bar{z}, \bar{w})$, $\Delta(y, z, p)$ and $\Delta(\bar{y}, \bar{z}, \bar{p})$. By Lemma 2, we have $d(w, z) = \|\bar{w} - \bar{z}\|$, $d(w, y) = \|\bar{w} - \bar{y}\|$, $d(w, p) = \|\bar{w} - \bar{p}\|$, $d(z, y) = \|\bar{z} - \bar{y}\|$ and $d(y, p) = \|\bar{y} - \bar{p}\|$. From the definition of $x_{n+1}$, we have
$$x_{n+1} = \exp_w \left( (1 - \beta_n) \exp_w^{-1} z \right).$$
The comparison point of $x_{n+1}$ in $\mathbb{R}^2$ is $\bar{x}_{n+1} = \beta_n \bar{w} + (1 - \beta_n) \bar{z}$. Let $\alpha$ and $\bar{\alpha}$ denote the angles at $p$ and $\bar{p}$ in the triangles $\Delta(y, x_{n+1}, p)$ and $\Delta(\bar{y}, \bar{x}_{n+1}, \bar{p})$, respectively. Then, we have $\alpha \le \bar{\alpha}$ and $\cos\bar{\alpha} \le \cos\alpha$. Using Lemma 4 and the contraction property of $f$, we obtain
$$\begin{aligned} d^2(x_{n+1}, p) &\le \|\bar{x}_{n+1} - \bar{p}\|^2 = \|\beta_n(\bar{w} - \bar{p}) + (1 - \beta_n)(\bar{z} - \bar{p})\|^2 \\ &= \|\beta_n(\bar{w} - \bar{y}) + (1 - \beta_n)(\bar{z} - \bar{p}) + \beta_n(\bar{y} - \bar{p})\|^2 \\ &\le \|\beta_n(\bar{w} - \bar{y}) + (1 - \beta_n)(\bar{z} - \bar{p})\|^2 + 2\beta_n \langle \bar{x}_{n+1} - \bar{p}, \bar{y} - \bar{p} \rangle \\ &\le (1 - \beta_n)\|\bar{z} - \bar{p}\|^2 + \beta_n \|\bar{w} - \bar{y}\|^2 + 2\beta_n \|\bar{x}_{n+1} - \bar{p}\| \|\bar{y} - \bar{p}\| \cos\bar{\alpha} \\ &\le (1 - \beta_n) d^2(z, p) + \beta_n d^2(w, y) + 2\beta_n d(x_{n+1}, p)\, d(y, p) \cos\alpha \\ &= (1 - \beta_n) d^2(z_n, p) + \beta_n d^2(f(x_n), f(p)) + 2\beta_n d(x_{n+1}, p)\, d(f(p), p) \cos\alpha. \end{aligned} \tag{21}$$
Since $d(x_{n+1}, p)\, d(f(p), p) \cos\alpha = \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n+1} \rangle$ and $0 < \kappa = \sup\left\{ \frac{\psi(d(x_n, q))}{d(x_n, q)} : x_n \ne q, \; n \ge 0, \; q \in Sol(g, K) \right\} < 1$, using (21), Lemma 7 and (19), we obtain
$$\begin{aligned} d^2(x_{n+1}, p) &\le (1 - \beta_n) d^2(z_n, p) + \beta_n \psi^2(d(x_n, p)) + 2\beta_n \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n+1} \rangle \\ &\le (1 - \beta_n) d^2(w_n, p) - (1 - \beta_n)\left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\left[ d^2(y_n, w_n) + d^2(z_n, y_n) \right] + \beta_n \psi^2(d(x_n, p)) + 2\beta_n \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n+1} \rangle \\ &\le \left[ 1 - \beta_n(1 - \kappa) \right] d^2(x_n, p) + \beta_n(1 - \kappa) b_n - (1 - \beta_n)\left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\left[ d^2(y_n, w_n) + d^2(z_n, y_n) \right], \end{aligned} \tag{24}$$
where
$$b_n = \frac{1}{1 - \kappa}\left[ 2\langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n+1} \rangle + 2\frac{\theta_n}{\beta_n} d(x_n, p)\, d(x_n, x_{n-1}) + \frac{\theta_n^2}{\beta_n} d^2(x_n, x_{n-1}) \right].$$
It follows from (24) that
$$(1 - \beta_n)\left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right)\left[ d^2(y_n, w_n) + d^2(z_n, y_n) \right] \le d^2(x_n, p) - d^2(x_{n+1}, p) + \beta_n (1 - \kappa) M, \tag{25}$$
where $M = \sup_{n \in \mathbb{N}} b_n$. We claim that $d(x_n, p) \to 0$ as $n \to \infty$. To prove this, set $a_n = d^2(x_n, p)$ and $d_n = \beta_n(1 - \kappa)$. It is easy to see from (24) that the sequence $\{a_n\}$ satisfies
$$a_{n+1} \le (1 - d_n) a_n + d_n b_n. \tag{26}$$
Next, we claim that $\limsup_{k \to \infty} b_{n_k} \le 0$ whenever there exists a subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying
$$\liminf_{k \to \infty} (a_{n_k + 1} - a_{n_k}) \ge 0.$$
To prove this, assume the existence of such a subsequence $\{a_{n_k}\}$. Then, by using (25), we have
$$\begin{aligned} \limsup_{k \to \infty} (1 - \beta_{n_k})\left(1 - \frac{\mu \lambda_{n_k}}{\lambda_{n_k + 1}}\right)\left[ d^2(y_{n_k}, w_{n_k}) + d^2(z_{n_k}, y_{n_k}) \right] &\le \limsup_{k \to \infty} (a_{n_k} - a_{n_k + 1}) + (1 - \kappa) M \lim_{k \to \infty} \beta_{n_k} \\ &= -\liminf_{k \to \infty} (a_{n_k + 1} - a_{n_k}) \le 0. \end{aligned}$$
Note that $\lambda_n \to \lambda$ as $n \to \infty$ and that $\mu \in (0, 1)$. Hence, there exists $N \ge 0$ such that for all $n \ge N$, $0 < \frac{\mu \lambda_n}{\lambda_{n+1}} < 1$; that is, $\lim_{n \to \infty} \left(1 - \frac{\mu \lambda_n}{\lambda_{n+1}}\right) = 1 - \mu > 0$.
This, in its turn, implies that
$$\lim_{k \to \infty} d(y_{n_k}, w_{n_k}) = 0 = \lim_{k \to \infty} d(z_{n_k}, y_{n_k}). \tag{27}$$
By replacing $p$ with $x_{n_k}$ in (19), it is not difficult to see that
$$\lim_{k \to \infty} d(w_{n_k}, x_{n_k}) \le \lim_{k \to \infty} \beta_{n_k} \cdot \frac{\theta_{n_k}}{\beta_{n_k}}\, d(x_{n_k}, x_{n_k - 1}) = 0. \tag{28}$$
Using the triangle inequality, we obtain
$$d(y_{n_k}, x_{n_k}) \le d(y_{n_k}, w_{n_k}) + d(w_{n_k}, x_{n_k}), \qquad d(z_{n_k}, x_{n_k}) \le d(z_{n_k}, y_{n_k}) + d(y_{n_k}, x_{n_k}).$$
Using (27) and (28), we obtain
$$d(y_{n_k}, x_{n_k}) \to 0 \quad \text{and} \quad d(z_{n_k}, x_{n_k}) \to 0 \quad \text{as } k \to \infty. \tag{29}$$
By employing the convexity of the Riemannian distance, we have
$$d(x_{n+1}, z_n) = d(\gamma_n(1 - \beta_n), z_n) \le \beta_n d(\gamma_n(0), z_n) + (1 - \beta_n) d(\gamma_n(1), z_n) = \beta_n d(f(x_n), z_n) + (1 - \beta_n) d(z_n, z_n) = \beta_n d(f(x_n), z_n).$$
Thus, it follows from (C1) that
$$\lim_{k \to \infty} d(x_{n_k + 1}, z_{n_k}) = 0. \tag{30}$$
When combined with (29), this yields
$$\lim_{k \to \infty} d(x_{n_k + 1}, x_{n_k}) = 0. \tag{31}$$
Now, we claim that $\limsup_{k \to \infty} b_{n_k} \le 0$. To see this, we only need to show that
$$\limsup_{k \to \infty} \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n_k + 1} \rangle \le 0. \tag{32}$$
Since $\{x_{n_k}\}$ is bounded, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ which converges to a point $q \in M$ such that
$$\lim_{j \to \infty} \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n_{k_j}} \rangle = \limsup_{k \to \infty} \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n_k} \rangle = \langle \exp_p^{-1} f(p), \exp_p^{-1} q \rangle.$$
Since $x_{n_{k_j}} \to q$, it follows from (29) that $y_{n_{k_j}}, z_{n_{k_j}} \to q$. Using (12), we see that
$$\lambda_n g(y_n, y) \ge \lambda_n g(y_n, z_n) + \langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} y \rangle \quad \forall y \in T_n,$$
which implies, in view of (14), that
$$\lambda_n g(y_n, y) \ge -\frac{\mu \lambda_n}{2\lambda_{n+1}}\left[ d^2(y_n, w_n) + d^2(z_n, y_n) \right] + \lambda_n \left( g(w_n, z_n) - g(w_n, y_n) \right) + \langle \exp_{z_n}^{-1} w_n, \exp_{z_n}^{-1} y \rangle.$$
Using (16), we obtain
$$\lambda_{n_k} g(y_{n_k}, y) \ge -\frac{\mu \lambda_{n_k}}{2\lambda_{n_k + 1}}\left[ d^2(y_{n_k}, w_{n_k}) + d^2(z_{n_k}, y_{n_k}) \right] + \langle \exp_{z_{n_k}}^{-1} w_{n_k}, \exp_{z_{n_k}}^{-1} y \rangle + \langle \exp_{y_{n_k}}^{-1} w_{n_k}, \exp_{y_{n_k}}^{-1} z_{n_k} \rangle. \tag{33}$$
Passing to the limit in (33) with $n_k$ replaced by $n_{k_j}$, and using $\lambda_n \to \lambda > 0$, condition (A2), Lemma 3 and $y_{n_{k_j}} \to q$, we find that
$$g(q, y) \ge \limsup_{j \to \infty} g(y_{n_{k_j}}, y) \ge 0 \quad \forall y \in T_n.$$
Since $K \subseteq T_n$, we see that $g(q, y) \ge 0$ for all $y \in K$, which implies that $q \in Sol(g, K)$.
Finally, from $p = P_{Sol(g, K)} f(p)$, (31), (32) and Lemma 1, it follows that
$$\lim_{j \to \infty} \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n_{k_j} + 1} \rangle = \limsup_{k \to \infty} \langle \exp_p^{-1} f(p), \exp_p^{-1} x_{n_k + 1} \rangle = \langle \exp_p^{-1} f(p), \exp_p^{-1} q \rangle \le 0.$$
Hence, we conclude by applying Lemma 5 to (26) that the sequence $\{x_n\}$ converges to $p \in Sol(g, K)$, as asserted.    □

4. Applications

In this section, we apply our main result to some theoretical optimization problems.

4.1. An Application to Solving Variational Inequality Problems

Suppose that
$$g(x, y) = \begin{cases} \langle Gx, \exp_x^{-1} y \rangle, & \text{if } x, y \in K, \\ +\infty, & \text{otherwise}, \end{cases}$$
where $G : K \to TM$ is a single-valued vector field with $Gx \in T_xM$. Then, the equilibrium problem (1) coincides with the following variational inequality problem (VIP) (see [44]):
$$\text{Find } x \in K \text{ such that } \langle Gx, \exp_x^{-1} y \rangle \ge 0 \quad \forall y \in K. \tag{34}$$
We denote the set of solutions of VIP (34) by $VIP(G, K)$. The mapping $G$ is said to be pseudomonotone if
$$\langle Gx, \exp_x^{-1} y \rangle \ge 0 \implies \langle Gy, \exp_y^{-1} x \rangle \le 0 \quad \forall x, y \in K.$$
Assume that the function G satisfies the following conditions:
(V1)
The mapping $G$ is pseudomonotone on $K$ with $VIP(G, K) \ne \emptyset$;
(V2)
$G$ is $L$-Lipschitz continuous, that is,
$$\|P_{y,x} Gx - Gy\| \le L\, d(x, y) \quad \forall x, y \in K,$$
where $P_{y,x}$ is the parallel transport (see [7,45]);
(V3)
$\limsup_{n \to \infty} \langle Gx_n, \exp_{x_n}^{-1} y \rangle \le \langle Gp, \exp_p^{-1} y \rangle$ for every $y \in K$ and every sequence $\{x_n\} \subset K$ such that $x_n \to p$.
By replacing the proximal term $\arg\min_{y \in M} \left\{ g(x, y) + \frac{1}{2\lambda_n} d^2(x, y) \right\}$ in Algorithm 1 with $P_K(\exp_x(-\lambda_n G(x)))$, where $P_K$ is the metric projection of $M$ onto $K$, we obtain the following method (Algorithm 2 below) for approximating a point in $VIP(G, K)$:
In this setting, we have the following convergence theorem for approximating a solution to the VIP (34).
Theorem 2.
Let $f : K \to K$ be a $\psi$-contraction and let $G : K \to TM$ be a pseudomonotone vector field satisfying conditions (V1)–(V3). If $0 < \kappa = \sup\left\{ \frac{\psi(d(x_n, q))}{d(x_n, q)} : x_n \ne q, \; n \ge 0, \; q \in VIP(G, K) \right\} < 1$, then the sequence $\{x_n\}$ generated by Algorithm 2 converges to an element $p \in VIP(G, K)$ which satisfies $p = P_{VIP(G, K)} f(p)$.
Algorithm 2: Inertial subgradient extragradient method for solving the VIP (ISEMVIP)
Initialization: Choose $x_0, x_1 \in K$, $\lambda_1 > 0$, $\mu \in (0, 1)$, a non-negative sequence of real numbers $\{\delta_n\}$ such that $\sum_{n=0}^{\infty} \delta_n < +\infty$, and $\theta > 0$.
Step 1: Given $x_n$, $x_{n-1}$ and $\lambda_n$, choose $\theta_n$ such that $\theta_n \in [0, \bar{\theta}_n]$, where
$$\bar{\theta}_n = \begin{cases} \min\left\{ \theta, \dfrac{\epsilon_n}{d(x_n, x_{n-1})} \right\}, & \text{if } x_n \ne x_{n-1}, \\ \theta, & \text{otherwise}. \end{cases}$$

Compute
$$w_n = \exp_{x_n}(\theta_n \exp_{x_n}^{-1} x_{n-1}), \qquad y_n = P_K(\exp_{w_n}(-\lambda_n G(w_n))).$$
If $y_n = w_n$, then stop. Otherwise, go to the next step.
Step 2: Compute $v_n = G w_n$, define the half-space $T_n$ by
$$T_n := \left\{ y \in M : \langle \exp_{y_n}^{-1} w_n - \lambda_n v_n, \exp_{y_n}^{-1} y \rangle \le 0 \right\}$$
and compute
$$z_n = P_{T_n}(\exp_{w_n}(-\lambda_n G(y_n))).$$

Step 3: Compute
$$x_{n+1} = \gamma_n(1 - \beta_n), \quad n \ge 0,$$
where $\gamma_n : [0, 1] \to M$ is the geodesic joining $f(x_n)$ to $z_n$, that is, $\gamma_n(0) = f(x_n)$ and $\gamma_n(1) = z_n$ for all $n \ge 0$. Then update the step size:
$$\lambda_{n+1} = \begin{cases} \min\left\{ \lambda_n + \delta_n, \; \dfrac{\mu \left[ d^2(y_n, w_n) + d^2(z_n, y_n) \right]}{2\langle P_{y_n, w_n} G(w_n) - G(y_n), \exp_{y_n}^{-1} z_n \rangle} \right\}, & \text{if } \langle P_{y_n, w_n} G(w_n) - G(y_n), \exp_{y_n}^{-1} z_n \rangle > 0, \\ \lambda_n + \delta_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and return to Step 1.
Remark 3.
Note that Algorithm 2 is a direct application of Algorithm 1 to a variational inequality problem, and that the projection onto the half-space $T_n$ in Algorithm 2 can be calculated in closed form, without the need for the minimization subroutine used to compute $z_n$ in Algorithm 1 for equilibrium problems. For a closed-form formula for the metric projection onto $T_n$, see, for example, [46].
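To make the closed form concrete, here is a small sketch (our own, for the Euclidean model case $\exp_p^{-1} q = q - p$ only; the general manifold formula is the one referenced in [46]) of the projection onto the half-space $T_n$.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Closed-form projection of x onto {y : <a, y> <= b}."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

def project_T_n(x, w_n, y_n, v_n, lam_n):
    # T_n = {y : <exp_{y_n}^{-1} w_n - lam_n v_n, exp_{y_n}^{-1} y> <= 0};
    # with exp_p^{-1} q = q - p this is the half-space {y : <a, y> <= <a, y_n>}.
    a = (w_n - y_n) - lam_n * v_n
    if np.allclose(a, 0.0):
        return x           # T_n is the whole space
    return project_halfspace(x, a, np.dot(a, y_n))
```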

4.2. An Application to Solving Convex Optimization Problems

Consider the convex optimization problem (COP)
$$\min_{x \in K} h(x), \tag{39}$$
where $h$ is a proper, lower semicontinuous and convex function of $M$ into $(-\infty, +\infty]$ such that $K$ is contained in the effective domain of $h$, that is, $K \subseteq \mathrm{dom}\, h := \{x \in M : h(x) < +\infty\}$. The set of solutions to COP (39) is denoted by $COP(h, K)$. Let the bifunction $g : K \times K \to \mathbb{R}$ be defined by $g(x, y) := h(y) - h(x)$. Then $g(x, y)$ satisfies conditions (A1)–(A4) and $COP(h, K) = Sol(g, K)$. Let $\mathrm{Prox}_{\lambda h}$ be the proximal operator of the function $h$ with parameter $\lambda > 0$ and let $\nabla h$ denote the gradient of $h$. Using the term $\mathrm{Prox}_{\lambda h}(\exp_x(-\lambda \nabla h(x)))$ in place of $\arg\min_{y \in M} \left\{ g(x, y) + \frac{1}{2\lambda_n} d^2(x, y) \right\}$ in Algorithm 1, we obtain a method for minimizing the function $h$.
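For orientation, the following toy sketch (our own illustration, in the Euclidean case and for the specific objective $h(x) = \frac{1}{2}\|x - c\|^2$, whose proximal operator has the closed form $(u + \lambda c)/(1 + \lambda)$) shows one iteration of the step $x \mapsto \mathrm{Prox}_{\lambda h}(x - \lambda \nabla h(x))$.

```python
import numpy as np

c = np.array([1.0, 2.0, 3.0])                     # minimizer of h
grad_h = lambda x: x - c                          # gradient of h(x) = 0.5||x - c||^2
prox_h = lambda u, lam: (u + lam * c) / (1.0 + lam)  # closed-form prox of h

def cop_step(x, lam):
    # One step: Prox_{lam*h}(x - lam * grad_h(x)) (Euclidean exp map).
    return prox_h(x - lam * grad_h(x), lam)

x = np.zeros(3)
for _ in range(50):
    x = cop_step(x, lam=0.5)
print(x)   # converges to the minimizer c
```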

5. Numerical Example

In this section, we present some numerical illustrations of our main result. All codes were written in Matlab 2017b and run on a personal computer with an Intel Core i5 CPU at 2.0 GHz and 8.00 GB of RAM.
Example 1.
We consider an extension of the Nash equilibrium model introduced in [7,47]. In this problem, the bifunction $g : K \times K \to \mathbb{R}$ is given by
$$g(x, y) = \langle Px + Qy + p, y - x \rangle.$$
Let $M$ be Space 2 above and let $K \subset M$ be given by
$$K = \{ x = (x_1, x_2, \ldots, x_m) : 1 \le x_i \le 100, \; i = 1, 2, \ldots, m \}.$$
Let $x, y \in K$, and let $p = (p_1, p_2, \ldots, p_m)^T \in \mathbb{R}^m$ be chosen randomly with elements in $[1, m]$. The matrices $P$ and $Q$ are two square matrices of order $m$ such that $Q$ is symmetric positive semidefinite and $Q - P$ is negative semidefinite. It is known (see [7]) that $g$ is pseudomonotone and satisfies (A2) with Lipschitz constants $c_1 = c_2 = \frac{1}{2}\|Q - P\|$ (see [15], Lemma 6.2). Assumptions (A3) and (A4) are also satisfied (see [48]). Thus, our main theorem is fully compatible with this example. Setting $\delta_n = \frac{1}{2^{n+7}}$, $\beta_n = \frac{1}{n+1}$, $\epsilon_n = \frac{1}{n^{1.1}}$, $\mu = 0.5$ and $\lambda_1 = 10^{-3}$, we compare our method with Algorithm 1 of Fan et al. [23]. The comparisons are made for several values of $m$ using $\|x_{n+1} - x_n\|^2 \le 10^{-4}$ as the stopping criterion. The results for this example are presented in Table 1 and Figure 1.
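For reproducibility, the following Python sketch (our own; the paper's exact data generator is not given, so this is just one convenient construction satisfying the stated properties) builds test data with $Q$ symmetric positive semidefinite and $Q - P$ negative semidefinite, and evaluates the bifunction $g$.

```python
import numpy as np

def make_nash_data(m, seed=0):
    """One way to generate data with the stated properties (an assumption,
    not necessarily the authors' generator)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((m, m))
    Q = A.T @ A                  # symmetric positive semidefinite
    P = Q + B.T @ B              # then Q - P = -B^T B is negative semidefinite
    p = rng.uniform(1, m, m)     # random vector with entries in [1, m]
    return P, Q, p

def g(x, y, P, Q, p):
    # g(x, y) = <P x + Q y + p, y - x>
    return np.dot(P @ x + Q @ y + p, y - x)
```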
Example 2.
Let $M$ be Space 2 above. We consider an example of a variational inequality and present a numerical comparison of Algorithm 1, through its adaptation to the VIP, with Algorithm 1 of Fan et al. [23]. The following example has been considered in many recent articles (see, for example, [49]). Let the mapping $F : K \to \mathbb{R}^2$ be defined by
$$F(x) = \begin{pmatrix} 0.5\, x_1 x_2^2 - 2 x_2 - 10^7 \\ -4 x_1 + 0.1\, x_2^2 - 10^7 \end{pmatrix},$$
where $x = (x_1, x_2)$ and $K := \{ x \in \mathbb{R}^2 : (x_1 - 2)^2 + (x_2 - 2)^2 \le 1 \}$. It is known that the mapping $F$ is pseudomonotone on $K$ and $L$-Lipschitz continuous with $L = 5$. For this example, we let $\beta_n = \frac{1}{n+1}$, $\delta_n = \frac{1}{2^{n+1}}$, $\theta_n = \frac{1}{3}$, $\epsilon_n = \frac{1}{n^{1.2}}$ and $\lambda_1 = 10^{-8}$. Using $\|x_{n+1} - x_n\|^2 \le 10^{-4}$ as the stopping criterion, we compare Algorithm 1 and the algorithm of Fan et al. for the following initial values of $x_0$ and $x_1$ (a small code sketch of this problem setup is given after the list of cases below). The results for this example are presented in Figure 2 and Table 2.
(Case 1)
$x_0 = [0.5, 1]$ and $x_1 = [1, 2]$;
(Case 2)
$x_0 = [2, 1]$ and $x_1 = [1, 2]$;
(Case 3)
$x_0 = [1.2, 1.5]$ and $x_1 = [0, 0.5]$;
(Case 4)
$x_0 = [0.3, 0.5]$ and $x_1 = [0.9, 0.7]$.
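As promised above, here is a small Python sketch (our own, not the paper's Matlab code) of the Example 2 problem data: the mapping $F$, the closed-form projection onto the disk $K$, and the stopping test used in the experiments.

```python
import numpy as np

center, radius = np.array([2.0, 2.0]), 1.0

def F(x):
    # The pseudomonotone mapping of Example 2 (Lipschitz constant L = 5).
    x1, x2 = x
    return np.array([0.5 * x1 * x2**2 - 2.0 * x2 - 1e7,
                     -4.0 * x1 + 0.1 * x2**2 - 1e7])

def project_K(x):
    """Closed-form projection onto K = {x : ||x - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + radius * d / nd

def stop(x_next, x_curr, tol=1e-4):
    # Stopping criterion ||x_{n+1} - x_n||^2 <= 10^{-4}.
    return np.dot(x_next - x_curr, x_next - x_curr) <= tol
```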

6. Conclusions

In this paper, we introduced an inertial subgradient extragradient method for approximating solutions to equilibrium problems in the framework of Hadamard manifolds. Since we use self-adaptive step sizes which are allowed to increase from iteration to iteration, our method does not require knowledge of the Lipschitz constants of the cost operator. A convergence result was proved by using a viscosity technique under mild conditions on the control parameters which generate the sequence of approximants. We also provided two theoretical applications of our result. Furthermore, we presented some numerical experiments which illustrate the performance of the proposed method. By way of comparison with the method presented for the same problem by Fan et al. [23], we demonstrated the competitiveness of our algorithm. We intend to consider more examples in Hadamard manifolds in future work.

Author Contributions

Conceptualization, O.K.O. and S.R.; methodology, O.K.O. and S.R.; software, O.K.O.; validation, S.R.; funding acquisition, S.R. All authors have read and agreed to the published version of the manuscript.

Funding

Simeon Reich was partially supported by the Israel Science Foundation (Grant 820/17), by the Fund for the Promotion of Research at the Technion and by the Technion General Research Fund.

Acknowledgments

Both authors are very grateful to the editors and to five anonymous referees for their useful comments and helpful suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fan, K. A Minimax Inequality and Its Application. In Inequalities; Shisha, O., Ed.; Academic: New York, NY, USA, 1972; Volume 3, pp. 103–113.
2. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145.
3. Muu, L.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166.
4. Rapcsák, T. Geodesic convexity in nonlinear optimization. J. Optim. Theory Appl. 1991, 69, 169–183.
5. Rapcsák, T. Smooth Nonlinear Optimization in $\mathbb{R}^n$; Nonconvex Optimization and Its Applications; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1997.
6. Udriste, C. Convex Functions and Optimization Methods on Riemannian Manifolds; Mathematics and Its Applications; Kluwer Academic: Norwell, MA, USA, 1994; Volume 297.
7. Khammahawong, K.; Kumam, P.; Chaipunya, P.; Yao, J.; Wen, C.; Jirakitpuwapat, W. An extragradient algorithm for strongly pseudomonotone equilibrium problems on Hadamard manifolds. Thai J. Math. 2020, 18, 350–371.
8. Upadhyay, B.; Ghosh, A.; Mishra, P.; Treanţă, S. Optimality conditions and duality for multiobjective semi-infinite programming problems on Hadamard manifolds using generalized geodesic convexity. RAIRO-Oper. Res. 2022, 56, 2037–2065.
9. Colao, V.; López, G.; Marino, G.; Martín-Márquez, V. Equilibrium problems in Hadamard manifolds. J. Math. Anal. Appl. 2012, 388, 61–77.
10. Salahuddin, S. The existence of solution for equilibrium problems in Hadamard manifolds. Trans. A. Razmadze Math. Inst. 2017, 171, 381–388.
11. Tang, G.; Zhou, L.; Huang, N. Existence results for a class of hemivariational inequality problems on Hadamard manifolds. Optimization 2016, 65, 1451–1461.
12. Zhou, L.-W.; Huang, N.-J. Existence of solutions for vector optimization on Hadamard manifolds. J. Optim. Theory Appl. 2013, 157, 44–53.
13. Korpelevich, G. An extragradient method for finding saddle points and for other problems. Ekon. Mat. Metody 1976, 12, 747–756.
14. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 148, 318–335.
15. Tran, D.; Dung, M.; Nguyen, V. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
16. Nguyen, T.; Strodiot, J.; Nguyen, V. Hybrid methods for solving simultaneously an equilibrium problem and countably many fixed point problems in a Hilbert space. J. Optim. Theory Appl. 2014, 160, 809–831.
17. Rehman, H.; Kumam, P.; Sitthithakerngkiet, K. Viscosity-type method for solving pseudomonotone equilibrium problems in a real Hilbert space with applications. AIMS Math. 2021, 6, 1538–1560.
18. Ceng, L.; Li, X.; Qin, X. Parallel proximal point methods for systems of vector optimization problems on Hadamard manifolds without convexity. Optimization 2020, 69, 357–383.
19. Hieu, D.V.; Quy, P.K.; Vy, L.V. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56, 11.
20. Li, C.; López, G.; Martín-Márquez, V. Monotone vector fields and the proximal point algorithm on Hadamard manifolds. J. Lond. Math. Soc. 2009, 79, 663–683.
21. Li, C.; Yao, J. Variational inequalities for set-valued vector fields on Riemannian manifolds: Convexity of the solution set and the proximal point algorithm. SIAM J. Control Optim. 2012, 50, 2486–2514.
22. Neto, J.C.; Santos, P.; Soares, P. An extragradient method for equilibrium problems on Hadamard manifolds. Optim. Lett. 2016, 10, 1327–1336.
23. Fan, J.; Tan, B.; Li, S. An explicit extragradient algorithm for equilibrium problems on Hadamard manifolds. Comp. Appl. Math. 2021, 40, 68.
24. Ali-Akbari, M. A subgradient extragradient method for equilibrium problems on Hadamard manifolds. Int. J. Nonlinear Anal. Appl. 2022, 13, 75–84.
25. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
26. Polyak, B. Some methods of speeding up the convergence of iteration methods. Zh. Vychisl. Mat. Mat. Fiz. 1964, 4, 1–17.
27. Rehman, H.; Kumam, P.; Gibali, A.; Kumam, W. Convergence analysis of a general inertial projection-type method for solving pseudomonotone equilibrium problems with applications. J. Ineq. Appl. 2021, 2021, 63.
28. Oyewole, O.; Izuchukwu, C.; Okeke, C.; Mewomo, O. Inertial approximation method for split variational inclusion problem in Banach spaces. Int. J. Nonlinear Anal. Appl. 2020, 11, 285–304.
29. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
30. Al-Homidan, S.; Ansari, Q.; Babu, F.; Yao, J.-C. Viscosity method with a f-contraction mapping for hierarchical variational inequalities on Hadamard manifolds. Fixed Point Theory 2020, 21, 561–584.
31. Huang, S. Approximations with weak contractions in Hadamard manifolds. Linear Nonlinear Anal. 2015, 1, 317–328.
32. Dilshad, M.; Khan, A.; Akram, M. Splitting type viscosity methods for inclusion and fixed point problems on Hadamard manifolds. AIMS Math. 2021, 6, 5205–5221.
33. Thong, D.; Hieu, D. Some extragradient-viscosity algorithms for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 82, 761–789.
34. Sakai, T. Riemannian Geometry; Translations of Mathematical Monographs, Volume 149; American Mathematical Society: Providence, RI, USA, 1996.
35. Ansari, Q.; Babu, F.; Yao, J. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25.
36. Ferreira, O.; Perez, L.L.; Nemeth, S. Singularities of monotone vector fields and an extragradient algorithm. J. Glob. Optim. 2005, 31, 133–151.
37. Wang, J.; López, G.; Martín-Márquez, V.; Li, C. Monotone and accretive vector fields on Riemannian manifolds. J. Optim. Theory Appl. 2010, 146, 691–708.
38. Boyd, D.; Wong, J. On nonlinear contractions. Proc. Am. Math. Soc. 1969, 20, 335–341.
39. Bridson, M.; Haefliger, A. Metric Spaces of Non-Positive Curvature; Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences); Springer: Berlin, Germany, 1999; Volume 319.
40. Mastroeni, G. On auxiliary principle for equilibrium problems. In Equilibrium Problems and Variational Models; Nonconvex Optimization and Its Applications; Kluwer Academic: Norwell, MA, USA, 2003; Volume 68, pp. 289–298.
41. Ferreira, O.; Oliveira, P. Proximal point algorithm on Riemannian manifolds. Optimization 2002, 51, 257–270.
42. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
43. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operator in Banach spaces. Nonlinear Anal. 2012, 75, 742–750.
44. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413–4416.
45. Upadhyay, B.; Treanţă, S.; Mishra, P. On Minty Variational Principle for Nonsmooth Multiobjective Optimization Problems on Hadamard Manifolds. Optimization 2022, 71, 1–19.
46. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics 2057; Springer: Berlin, Germany, 2012.
47. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Series in Operations Research; Springer: New York, NY, USA, 2003; Volume II.
48. Chen, J.; Liu, S. Extragradient-like method for pseudomonotone equilibrium problems on Hadamard manifolds. J. Ineq. Appl. 2020, 2020, 205.
49. Shehu, Y.; Dong, Q.-L.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409.
Figure 1. Example 1. Top left: $m = 20$; top right: $m = 30$; bottom left: $m = 50$; bottom right: $m = 60$.
Figure 2. Example 2. Top left: Case 1; top right: Case 2; bottom left: Case 3; bottom right: Case 4.
Table 1. Computation results for Example 1.

                          Algorithm 1    Fan et al. Alg.
m = 20   No. of Iter.     23             39
         CPU time (s)     0.0013         2.9229
m = 30   No. of Iter.     23             43
         CPU time (s)     0.0130         3.6771
m = 50   No. of Iter.     41             53
         CPU time (s)     0.0050         5.8712
m = 60   No. of Iter.     35             40
         CPU time (s)     0.0050         5.8712
Table 2. Computation results for Example 2.

                          Algorithm 1    Fan et al. Alg.
Case 1   No. of Iter.     15             29
         CPU time (s)     0.0013         2.9229
Case 2   No. of Iter.     17             29
         CPU time (s)     0.0130         3.6771
Case 3   No. of Iter.     15             22
         CPU time (s)     0.0050         5.8712
Case 4   No. of Iter.     15             17
         CPU time (s)     0.0050         5.8712