Article

Inertial Optimization Based Two-Step Methods for Solving Equilibrium Problems with Applications in Variational Inequality Problems and Growth Control Equilibrium Models

1
KMUTTFixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, SCL 802 Fixed Point Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2
Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4
Department of Mathematics, College of Science & Arts, King Abdulaziz University, P. O. Box 344, Rabigh 21911, Saudi Arabia
5
Department of Mathematics, College of Science, Northern Border University, Arar 73222, Saudi Arabia
6
Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi, Thanyaburi, Pathumthani 12110, Thailand
*
Authors to whom correspondence should be addressed.
Energies 2020, 13(12), 3292; https://doi.org/10.3390/en13123292
Submission received: 8 April 2020 / Revised: 1 June 2020 / Accepted: 15 June 2020 / Published: 26 June 2020
(This article belongs to the Section F: Electrical Engineering)

Abstract

This manuscript aims to incorporate an inertial scheme into Popov's subgradient extragradient method to solve equilibrium problems involving two different classes of bifunctions. Methods of this type can also be used to solve problems in many fields, such as economics, mathematical finance, image reconstruction, transport, elasticity, networking, and optimization. We establish a weak convergence result under the assumption of pseudomonotonicity and a certain Lipschitz-type condition on the cost bifunction; the stepsize in this case depends on the Lipschitz-type constants and the extrapolation factor. In the second method the bifunction is strongly pseudomonotone, but the stepsize does not depend on the strongly pseudomonotone and Lipschitz-type constants; in contrast to the first convergence result, we establish strong convergence with the use of a variable stepsize sequence that is decreasing and non-summable. As applications, variational inequality problems involving pseudomonotone and strongly pseudomonotone operators are considered. Finally, two well-known Nash–Cournot equilibrium models are used in the numerical experiments to examine our convergence results and to show the competitive advantage of the suggested methods.

1. Introduction

The equilibrium problem (EP) was originally introduced in a unifying form by Blum and Oettli [1] in 1994, together with a detailed investigation of its theoretical properties. This study contributes significantly to the advancement of applied and pure science. The problem is closely related to the Ky Fan inequality, due to his early contributions to this field [2]. It has been established that equilibrium problem theory provides a unified approach for investigating an immense range of topics that appear in the social and physical sciences. For instance, it may involve physical or mechanical structures, chemical processes [3], and the distribution of traffic over computer and telecommunication networks or public roads [4,5,6,7]. In economics, it often refers to production competition [8] or the dynamics of supply and demand [9], exploiting the mathematical model of non-cooperative games and the analogous equilibrium concept of Nash [10,11]. The equilibrium problem includes, as particular cases, many mathematical problems, such as variational inequality problems (VIP), minimization problems, fixed point problems, Nash equilibria of non-cooperative games, complementarity problems, and saddle point problems (see, e.g., [1,12]).
On the other hand, iterative methods are efficient techniques for determining the approximate solution of an equilibrium problem. Two major approaches are well known, namely the proximal point method [13] and the auxiliary problem principle [14]. The proximal point strategy was initially developed by Martinet [15] for monotone variational inequality problems, and Rockafellar [16] later extended this approach to monotone operators. Moudafi [13] proposed the proximal point method for monotone equilibrium problems. Konnov [17] also suggested a different interpretation of the proximal point method, with weaker assumptions, for equilibrium problems.
In addition, inertial-type methods are also significant; they are based on the heavy-ball method for a second-order-in-time dynamical system. Polyak first considered inertial extrapolation as an acceleration procedure for the problem of smooth convex minimization. Inertial-type algorithms are two-step iterative schemes in which the next iterate is determined by using the previous two iterates, and this can be viewed as a step that accelerates the iterative sequence. A large number of methods have been established for solving the problem (EP) in finite- and infinite-dimensional spaces, such as proximal point-like methods [13,18], extragradient methods [19,20,21,22,23], subgradient extragradient methods [24,25,26], inertial methods [27,28,29,30,31,32] and others in [33,34].
In this work, our focus is on proximal point-type methods, in particular projection methods, which are well established and technically easy to implement due to their convenient numerical computation. This manuscript aims to suggest two modifications of the results that appeared in [21,35,36] by applying an inertial scheme, which is useful for speeding up the iteration process. The first result is a two-step inertial Popov's extragradient method for determining a numerical solution of pseudomonotone equilibrium problems; the weak convergence of the suggested method is established under standard assumptions. We also propose an alternative inertial-type method, a second variant of the first method. The second method does not need any information regarding the Lipschitz-type and strongly pseudomonotone constants of the bifunction. A practical explanation for the second method is that it uses a diminishing and non-summable sequence of non-negative real numbers, which is useful in achieving strong convergence.
This manuscript is arranged as follows: in Section 2, we provide some essential definitions and useful results. Section 3 and Section 4 include our main methods and the corresponding convergence results. Section 5 provides the methods for variational inequality problems. Section 6 sets out the numerical tests, showing the numerical efficiency of the proposed methods on test problems based on the Nash–Cournot equilibrium model compared to other existing methods.

2. Background

Let K be a non-empty, convex, and closed subset of the Hilbert space E. Let H : K → E be an operator, and let SOL_VI(H, K) denote the solution set of the variational inequality problem for the operator H over the set K. Likewise, SOL_EP(f, K) denotes the solution set of an equilibrium problem over the set K, and ξ* is an arbitrary element of the solution set SOL_EP(f, K) or SOL_VI(H, K).
Definition 1.
[1] Let f : E × E → ℝ be a bifunction with f(ũ, ũ) = 0 for each ũ ∈ K. The equilibrium problem for f over K is defined as follows:
Find ξ* ∈ K such that f(ξ*, ṽ) ≥ 0, ∀ṽ ∈ K.
Definition 2.
[37] The metric projection P_K(ũ) of ũ onto a closed and convex subset K of E is determined as follows:
P_K(ũ) = arg min{‖ṽ − ũ‖ : ṽ ∈ K}.
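For instance, when K is a box, K = {ũ ∈ ℝ^m : a_j ≤ ũ_j ≤ b_j}, as in all constraint sets used in the experiments of Section 6, the metric projection acts componentwise: (P_K(ũ))_j = min{max{ũ_j, a_j}, b_j}.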
Next, we take the concept of monotonicity of a bifunction into account (see [1,38] for details).
Definition 3.
A bifunction f : E × E → ℝ on K is said to be, for γ > 0,
(1)
strongly monotone if
f(ũ, ṽ) + f(ṽ, ũ) ≤ −γ‖ũ − ṽ‖², ∀ũ, ṽ ∈ K;
(2)
monotone if
f(ũ, ṽ) + f(ṽ, ũ) ≤ 0, ∀ũ, ṽ ∈ K;
(3)
strongly pseudomonotone if
f(ũ, ṽ) ≥ 0 ⟹ f(ṽ, ũ) ≤ −γ‖ũ − ṽ‖², ∀ũ, ṽ ∈ K;
(4)
pseudomonotone if
f(ũ, ṽ) ≥ 0 ⟹ f(ṽ, ũ) ≤ 0, ∀ũ, ṽ ∈ K;
(5)
satisfying the Lipschitz-type condition on K if there exist constants L₁, L₂ > 0 such that
f(ũ, w̃) ≤ f(ũ, ṽ) + f(ṽ, w̃) + L₁‖ũ − ṽ‖² + L₂‖ṽ − w̃‖², ∀ũ, ṽ, w̃ ∈ K,
holds.
This section ends with a few essential lemmas that are useful for examining convergence.
Lemma 1.
[39] Assume that K is a non-empty, convex, and closed subset of a Hilbert space E and g : K → ℝ is a convex, subdifferentiable, and lower semi-continuous function on K. Then ũ ∈ K is a minimizer of g if and only if 0 ∈ ∂g(ũ) + N_K(ũ), where ∂g(ũ) and N_K(ũ) denote the subdifferential of g at ũ and the normal cone of K at ũ, respectively.
Lemma 2.
[40] Let {p_n}, {q_n} ⊂ [0, +∞) be two sequences with Σ_{n=1}^{∞} p_n = ∞ and Σ_{n=1}^{∞} p_n q_n < ∞. Then lim inf_{n→∞} q_n = 0.
Lemma 3.
[41] For ũ, ṽ ∈ E and μ ∈ ℝ, the following relation holds:
‖μũ + (1 − μ)ṽ‖² = μ‖ũ‖² + (1 − μ)‖ṽ‖² − μ(1 − μ)‖ũ − ṽ‖².
Lemma 4.
[42] Assume that {ã_n}, {b̃_n} and {c̃_n} are sequences in [0, +∞) such that
ã_{n+1} ≤ ã_n + b̃_n(ã_n − ã_{n−1}) + c̃_n, ∀n ≥ 1, with Σ_{n=1}^{+∞} c̃_n < +∞,
and there exists b̃ > 0 such that 0 ≤ b̃_n ≤ b̃ < 1 for all n ∈ ℕ. Then the following relations hold:
(i)
Σ_{n=1}^{+∞} [ã_n − ã_{n−1}]_+ < ∞, with [s]_+ := max{s, 0};
(ii)
lim_{n→+∞} ã_n = a* ∈ [0, +∞).
Lemma 5.
[43] Let {ũ_n} be a sequence in E and K ⊆ E such that the following relations hold:
(i)
For each ũ ∈ K, lim_{n→∞} ‖ũ_n − ũ‖ exists;
(ii)
Every sequentially weak cluster point of {ũ_n} belongs to K.
Then {ũ_n} converges weakly to a point in K.
The normal cone of K at ũ ∈ K is defined as
N_K(ũ) = {z̃ ∈ E : ⟨z̃, ṽ − ũ⟩ ≤ 0, ∀ṽ ∈ K}.
Let g : K → ℝ be a convex function; the subdifferential of g at ũ ∈ K is defined as
∂g(ũ) = {z̃ ∈ E : g(ṽ) − g(ũ) ≥ ⟨z̃, ṽ − ũ⟩, ∀ṽ ∈ K}.

3. Inertial Popov’s Two-Step Subgradient Extragradient Algorithm for Pseudomonotone EP

We present our first method for solving pseudomonotone equilibrium problems involving a Lipschitz-type condition on the bifunction. It uses an inertial term to speed up the iterative sequence, so we refer to it as an "Inertial Popov's Two-step Subgradient Extragradient Algorithm" for a class of pseudomonotone equilibrium problems. The detailed algorithm is given below.
Algorithm 1 (Two-step Subgradient Extragradient Algorithm for Pseudomonotone EP)
  • Initialization: Choose u_{−1}, u_0, v_0 ∈ E, 0 ≤ ϑ_n ≤ ϑ < √5 − 2 and λ = λ(ϑ, L₁, L₂) > 0. Set
    u_1 = arg min_{y ∈ K} {λ f(v_0, y) + (1/2)‖ρ_0 − y‖²},
    v_1 = arg min_{y ∈ K} {λ f(v_0, y) + (1/2)‖ρ_1 − y‖²},
    where ρ_0 = u_0 + ϑ_0(u_0 − u_{−1}) and ρ_1 = u_1 + ϑ_1(u_1 − u_0).
  • Iterative steps: Given u_{n−1}, u_n, v_{n−1}, v_n for n ≥ 1, construct the half-space
    H_n = {z ∈ E : ⟨ρ_n − λω_{n−1} − v_n, z − v_n⟩ ≤ 0},
    where ω_{n−1} ∈ ∂f(v_{n−1}, v_n).
  • Step 1: Compute
    u_{n+1} = arg min_{y ∈ H_n} {λ f(v_n, y) + (1/2)‖ρ_n − y‖²},
    where ρ_n = u_n + ϑ_n(u_n − u_{n−1}).
  • Step 2: Compute
    v_{n+1} = arg min_{y ∈ K} {λ f(v_n, y) + (1/2)‖ρ_{n+1} − y‖²},
    where ρ_{n+1} = u_{n+1} + ϑ_{n+1}(u_{n+1} − u_n).
  • Step 3: If u_{n+1} = ρ_n and v_n = v_{n−1}, then STOP. Otherwise, set n := n + 1 and go back to Step 1.
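To make the steps concrete, the following is a minimal Python sketch of Algorithm 1 (the experiments in Section 6 were run in MATLAB; this sketch, its helper names, and the constant choice ϑ_n ≡ ϑ are illustrative assumptions on our part). It is specialized to a bifunction of the form f(u, v) = ⟨H(u), v − u⟩ over a box K, in which case every arg min step reduces to a projection (cf. Corollary 1 in Section 5):

```python
import numpy as np

def proj_box(x, lo, hi):
    # componentwise projection onto the box K = {y : lo <= y <= hi}
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, c):
    # projection onto the half-space H = {z : <a, z - c> <= 0}
    s = float(np.dot(a, x - c))
    if s <= 0.0:
        return x
    return x - (s / float(np.dot(a, a))) * a

def algo1_vip(H, lo, hi, u_m1, u0, v0, lam, theta, max_iter=10000, tol=1e-8):
    """Sketch of Algorithm 1 for f(u, v) = <H(u), v - u> on a box K."""
    # Initialization: rho_0, u_1, rho_1, v_1
    rho0 = u0 + theta * (u0 - u_m1)
    u1 = proj_box(rho0 - lam * H(v0), lo, hi)
    rho1 = u1 + theta * (u1 - u0)
    v1 = proj_box(rho1 - lam * H(v0), lo, hi)
    u_prev, u, v_prev, v = u0, u1, v0, v1
    for _ in range(max_iter):
        rho_n = u + theta * (u - u_prev)            # rho_n = u_n + theta*(u_n - u_{n-1})
        a_n = rho_n - lam * H(v_prev) - v           # normal vector of the half-space H_n
        u_next = proj_halfspace(rho_n - lam * H(v), a_n, v)   # Step 1
        rho_next = u_next + theta * (u_next - u)
        v_next = proj_box(rho_next - lam * H(v), lo, hi)      # Step 2
        if np.linalg.norm(u_next - rho_n) <= tol and np.linalg.norm(v - v_prev) <= tol:
            return u_next                            # Step 3 (stopping rule)
        u_prev, u, v_prev, v = u, u_next, v, v_next
    return u

# Example 2 of Section 6.2: H(u) = arctan(u), K = [0, 1],
# with u_{-1} = 0.5, u_0 = v_0 = 1, lambda = 0.05 and theta = 0.2 as in Table 6:
# x_star = algo1_vip(np.arctan, 0.0, 1.0, 0.5, 1.0, 1.0, lam=0.05, theta=0.2)
```

The only problem-dependent operations are the evaluation of H and the projection onto K, which is why box-constrained test problems are convenient for this scheme.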
Assumption 1.
Assume that f : E × E → ℝ satisfies the following conditions:
(A1)
f(ṽ, ṽ) = 0 for all ṽ ∈ K, and f is pseudomonotone on K;
(A2)
f satisfies the Lipschitz-type condition on E with two positive constants L₁ and L₂;
(A3)
lim sup_{n→∞} f(ũ_n, ṽ) ≤ f(ũ*, ṽ) for all ṽ ∈ K and every sequence {ũ_n} ⊂ K satisfying ũ_n ⇀ ũ*;
(A4)
f(ũ, ·) is convex and subdifferentiable on E for each ũ ∈ E.
Lemma 6.
We have the following crucial inequality resulting from Algorithm 1:
λ f(v_n, y) − λ f(v_n, u_{n+1}) ≥ ⟨ρ_n − u_{n+1}, y − u_{n+1}⟩, ∀y ∈ H_n.
Proof. 
By the definition of u_{n+1} and Lemma 1, we have
0 ∈ ∂₂{λ f(v_n, y) + (1/2)‖ρ_n − y‖²}(u_{n+1}) + N_{H_n}(u_{n+1}).
Thus, for ω ∈ ∂f(v_n, u_{n+1}) there exists ω̄ ∈ N_{H_n}(u_{n+1}) such that
λω + u_{n+1} − ρ_n + ω̄ = 0.
This implies that
⟨ρ_n − u_{n+1}, y − u_{n+1}⟩ = λ⟨ω, y − u_{n+1}⟩ + ⟨ω̄, y − u_{n+1}⟩, ∀y ∈ H_n.
Since ω̄ ∈ N_{H_n}(u_{n+1}), we have ⟨ω̄, y − u_{n+1}⟩ ≤ 0 for all y ∈ H_n. It follows that
λ⟨ω, y − u_{n+1}⟩ ≥ ⟨ρ_n − u_{n+1}, y − u_{n+1}⟩, ∀y ∈ H_n.       (1)
Since ω ∈ ∂f(v_n, u_{n+1}), by the definition of the subdifferential we obtain
f(v_n, y) − f(v_n, u_{n+1}) ≥ ⟨ω, y − u_{n+1}⟩, ∀y ∈ E.       (2)
From expressions (1) and (2), we have the required result. □
Lemma 7.
We also have the following inequality from Algorithm 1.
λ f(v_n, y) − λ f(v_n, v_{n+1}) ≥ ⟨ρ_{n+1} − v_{n+1}, y − v_{n+1}⟩, ∀y ∈ K.
Proof. 
The proof is the same as that of Lemma 6. □
Lemma 8.
We have the following inequality from Algorithm 1.
λ(f(v_{n−1}, u_{n+1}) − f(v_{n−1}, v_n)) ≥ ⟨ρ_n − v_n, u_{n+1} − v_n⟩.
Proof. 
Since u_{n+1} ∈ H_n, the definition of H_n implies that
⟨ρ_n − λω_{n−1} − v_n, u_{n+1} − v_n⟩ ≤ 0.
This implies that
λ⟨ω_{n−1}, u_{n+1} − v_n⟩ ≥ ⟨ρ_n − v_n, u_{n+1} − v_n⟩.       (3)
From ω_{n−1} ∈ ∂f(v_{n−1}, v_n) and the definition of the subdifferential, we have
f(v_{n−1}, y) − f(v_{n−1}, v_n) ≥ ⟨ω_{n−1}, y − v_n⟩, ∀y ∈ E.
Setting y = u_{n+1} in the above expression gives
f(v_{n−1}, u_{n+1}) − f(v_{n−1}, v_n) ≥ ⟨ω_{n−1}, u_{n+1} − v_n⟩.       (4)
From expressions (3) and (4), we obtain the desired result. □
Now, we prove the validity of the stopping criterion of Algorithm 1.
Lemma 9.
If u_{n+1} = ρ_n and v_n = v_{n−1} in Algorithm 1, then v_n ∈ SOL_EP(f, K).
Proof. 
By substituting u_{n+1} = ρ_n in Lemma 6, we have
λ f(v_n, y) − λ f(v_n, u_{n+1}) ≥ 0, ∀y ∈ H_n.       (5)
Since u_{n+1} ∈ H_n and v_n = v_{n−1}, u_{n+1} = ρ_n, Lemma 8 gives
λ f(v_n, u_{n+1}) ≥ ‖ρ_n − v_n‖² ≥ 0.       (6)
Expressions (5) and (6) imply that f(v_n, y) ≥ 0 for all y ∈ H_n ⊇ K, that is, v_n ∈ SOL_EP(f, K). □
Remark 1.
Two more stopping criteria for Algorithm 1 are u_{n+1} = v_n = ρ_n and ρ_{n+1} = v_{n+1} = v_n. The validity of these stopping criteria can be shown easily by Lemma 6 and Lemma 7, respectively.
Lemma 10.
Let f : E × E → ℝ satisfy Assumption 1, and assume that SOL_EP(f, K) is nonempty. Then, for each ξ* ∈ SOL_EP(f, K), we have
‖u_{n+1} − ξ*‖² ≤ ‖ρ_n − ξ*‖² − (1 − 4λL₁)‖ρ_n − v_n‖² − (1 − 2λL₂)‖u_{n+1} − v_n‖² + 4λL₁‖ρ_n − v_{n−1}‖².       (7)
Proof. 
Substituting y = ξ* into Lemma 6, we obtain
λ f(v_n, ξ*) − λ f(v_n, u_{n+1}) ≥ ⟨ρ_n − u_{n+1}, ξ* − u_{n+1}⟩.       (8)
Since ξ* ∈ SOL_EP(f, K), we have f(ξ*, v_n) ≥ 0 and thus, from (A1), f(v_n, ξ*) ≤ 0. The above expression therefore becomes
⟨ρ_n − u_{n+1}, u_{n+1} − ξ*⟩ ≥ λ f(v_n, u_{n+1}).       (9)
By the Lipschitz-type condition, we have
f(v_{n−1}, u_{n+1}) ≤ f(v_{n−1}, v_n) + f(v_n, u_{n+1}) + L₁‖v_{n−1} − v_n‖² + L₂‖v_n − u_{n+1}‖².       (10)
Expressions (9) and (10) imply that
⟨ρ_n − u_{n+1}, u_{n+1} − ξ*⟩ ≥ λ(f(v_{n−1}, u_{n+1}) − f(v_{n−1}, v_n)) − λL₁‖v_{n−1} − v_n‖² − λL₂‖v_n − u_{n+1}‖².       (11)
From expression (11) and Lemma 8, we obtain
⟨ρ_n − u_{n+1}, u_{n+1} − ξ*⟩ ≥ ⟨ρ_n − v_n, u_{n+1} − v_n⟩ − λL₁‖v_{n−1} − v_n‖² − λL₂‖v_n − u_{n+1}‖².       (12)
We have the following facts:
2⟨ρ_n − u_{n+1}, u_{n+1} − ξ*⟩ = ‖ρ_n − ξ*‖² − ‖u_{n+1} − ρ_n‖² − ‖u_{n+1} − ξ*‖²,
2⟨ρ_n − v_n, u_{n+1} − v_n⟩ = ‖ρ_n − v_n‖² + ‖u_{n+1} − v_n‖² − ‖ρ_n − u_{n+1}‖².
We also have the following inequality:
‖v_{n−1} − v_n‖² ≤ (‖v_{n−1} − ρ_n‖ + ‖ρ_n − v_n‖)² ≤ 2‖v_{n−1} − ρ_n‖² + 2‖ρ_n − v_n‖².
Combining the above two facts and the last inequality with (12) provides the required result. □
Now, we are in a position to provide our first convergence result of this work.
Theorem 1.
Assume that {u_n}, {v_n} and {ρ_n} are the sequences in E generated by Algorithm 1, where the sequence {ϑ_n} is non-decreasing and λ is a positive real number such that
0 < λ < (1/2 − 2ϑ − (1/2)ϑ²) / (L₂(1 − ϑ)² + 2L₁(1 + ϑ + ϑ² + ϑ³)) and 0 ≤ ϑ_n ≤ ϑ < √5 − 2.
Then the sequences {u_n}, {v_n} and {ρ_n} converge weakly to an element ξ* of SOL_EP(f, K).
Proof. 
From Lemma 10, we have
‖u_{n+1} − ξ*‖² + 4λL₁‖ρ_{n+1} − v_n‖² ≤ ‖ρ_n − ξ*‖² − (1 − 4λL₁)‖ρ_n − v_n‖² − (1 − 2λL₂)‖u_{n+1} − v_n‖² + 4λL₁‖ρ_n − v_{n−1}‖² + 4λL₁‖ρ_{n+1} − v_n‖².       (13)
By the definition of ρ_n in Algorithm 1, we have
‖ρ_n − ξ*‖² = ‖(1 + ϑ_n)(u_n − ξ*) − ϑ_n(u_{n−1} − ξ*)‖² = (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖².       (14)
By the definition of ρ_{n+1} in Algorithm 1, we also have
‖ρ_{n+1} − v_n‖² = ‖u_{n+1} + ϑ_{n+1}(u_{n+1} − u_n) − v_n‖² = ‖(1 + ϑ_{n+1})(u_{n+1} − v_n) − ϑ_{n+1}(u_n − v_n)‖² = (1 + ϑ_{n+1})‖u_{n+1} − v_n‖² − ϑ_{n+1}‖u_n − v_n‖² + ϑ_{n+1}(1 + ϑ_{n+1})‖u_{n+1} − u_n‖² ≤ (1 + ϑ_n)‖u_{n+1} − v_n‖² + ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖².       (15)
Combining expressions (13)–(15), we obtain
‖u_{n+1} − ξ*‖² + 4λL₁‖ρ_{n+1} − v_n‖² ≤ (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4λL₁‖ρ_n − v_{n−1}‖² − (1 − 4λL₁)‖ρ_n − v_n‖² − (1 − 2λL₂)‖u_{n+1} − v_n‖² + 4λL₁(1 + ϑ_n)‖u_{n+1} − v_n‖² + 4λL₁ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖²       (16)
≤ (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4λL₁‖ρ_n − v_{n−1}‖² + 4λL₁ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − (1 − 4λL₁)‖ρ_n − v_n‖² − (1 − 2λL₂ − 4λL₁(1 + ϑ_n))‖u_{n+1} − v_n‖²       (17)
≤ (1 + ϑ_{n+1})‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4λL₁‖ρ_n − v_{n−1}‖² + 4λL₁ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − [(1 − 2λL₂ − 4λL₁(1 + ϑ_n))/2] · 2(‖u_{n+1} − v_n‖² + ‖ρ_n − v_n‖²).       (18)
By substituting
σ_n = (1 − 2λL₂ − 4λL₁(1 + ϑ_n)) / 2,
and using the inequality 2‖u_{n+1} − v_n‖² + 2‖ρ_n − v_n‖² ≥ ‖u_{n+1} − ρ_n‖², expression (18) turns into the following:
Λ_{n+1} ≤ Λ_n + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4λL₁ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − σ_n‖u_{n+1} − ρ_n‖²,       (19)
where Λ_n = ‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + 4λL₁‖ρ_n − v_{n−1}‖². By the definition of ρ_n, we have
‖u_{n+1} − ρ_n‖² = ‖u_{n+1} − u_n − ϑ_n(u_n − u_{n−1})‖² = ‖u_{n+1} − u_n‖² + ϑ_n²‖u_n − u_{n−1}‖² − 2ϑ_n⟨u_{n+1} − u_n, u_n − u_{n−1}⟩       (20)
≥ ‖u_{n+1} − u_n‖² + ϑ_n²‖u_n − u_{n−1}‖² − 2ϑ_n‖u_{n+1} − u_n‖‖u_n − u_{n−1}‖ ≥ (1 − ϑ_n)‖u_{n+1} − u_n‖² + (ϑ_n² − ϑ_n)‖u_n − u_{n−1}‖².       (21)
Combining expressions (19) and (21) implies that
Λ_{n+1} ≤ Λ_n + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4λL₁ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − σ_n(1 − ϑ_n)‖u_{n+1} − u_n‖² − σ_n(ϑ_n² − ϑ_n)‖u_n − u_{n−1}‖² ≤ Λ_n + r_n‖u_n − u_{n−1}‖² − q_n‖u_{n+1} − u_n‖²,       (22)
where r_n := ϑ_n(1 + ϑ_n) + σ_nϑ_n(1 − ϑ_n) and q_n := σ_n(1 − ϑ_n) − 4λL₁ϑ_n(1 + ϑ_n).
Further, we take Γ_n = Λ_n + r_n‖u_n − u_{n−1}‖². It follows from (22) that
Γ_{n+1} − Γ_n = ‖u_{n+1} − ξ*‖² − ϑ_{n+1}‖u_n − ξ*‖² + r_{n+1}‖u_{n+1} − u_n‖² + 4λL₁‖ρ_{n+1} − v_n‖² − ‖u_n − ξ*‖² + ϑ_n‖u_{n−1} − ξ*‖² − r_n‖u_n − u_{n−1}‖² − 4λL₁‖ρ_n − v_{n−1}‖² = ‖u_{n+1} − ξ*‖² − (1 + ϑ_{n+1})‖u_n − ξ*‖² + ϑ_n‖u_{n−1} − ξ*‖² + 4λL₁‖ρ_{n+1} − v_n‖² − 4λL₁‖ρ_n − v_{n−1}‖² − r_n‖u_n − u_{n−1}‖² + r_{n+1}‖u_{n+1} − u_n‖² ≤ −q_n‖u_{n+1} − u_n‖² + r_{n+1}‖u_{n+1} − u_n‖² = −(q_n − r_{n+1})‖u_{n+1} − u_n‖².       (23)
Next, we need to compute
q_n − r_{n+1} = σ_n(1 − ϑ_n) − 4λL₁ϑ_n(1 + ϑ_n) − ϑ_{n+1}(1 + ϑ_{n+1}) − σ_{n+1}ϑ_{n+1}(1 − ϑ_{n+1}) ≥ σ_n(1 − ϑ_n) − 4λL₁ϑ_n(1 + ϑ_n) − ϑ_n(1 + ϑ_n) − σ_nϑ_n(1 − ϑ_n) ≥ σ_n(1 − ϑ)² − 4λL₁ϑ(1 + ϑ) − ϑ(1 + ϑ) ≥ [(1 − 2λL₂ − 4λL₁(1 + ϑ))/2](1 − ϑ)² − 4λL₁ϑ(1 + ϑ) − ϑ(1 + ϑ) = 1/2 − 2ϑ − (1/2)ϑ² − λ[L₂(1 − ϑ)² + 2L₁(1 + ϑ + ϑ² + ϑ³)] ≥ 0.       (24)
Expressions (23) and (24), with some δ ≥ 0, imply that
Γ_{n+1} − Γ_n ≤ −(q_n − r_{n+1})‖u_{n+1} − u_n‖² ≤ −δ‖u_{n+1} − u_n‖² ≤ 0.       (25)
Relation (25) implies that the sequence {Γ_n} is non-increasing. From Γ_{n+1}, we have
Γ_{n+1} = ‖u_{n+1} − ξ*‖² − ϑ_{n+1}‖u_n − ξ*‖² + r_{n+1}‖u_{n+1} − u_n‖² + 4λL₁‖ρ_{n+1} − v_n‖² ≥ −ϑ_{n+1}‖u_n − ξ*‖².       (26)
Additionally, from the definition of Γ_n, we have
‖u_n − ξ*‖² ≤ Γ_n + ϑ_n‖u_{n−1} − ξ*‖² ≤ Γ_1 + ϑ‖u_{n−1} − ξ*‖² ≤ ⋯ ≤ Γ_1(ϑ^{n−1} + ⋯ + 1) + ϑ^n‖u_0 − ξ*‖² ≤ Γ_1/(1 − ϑ) + ϑ^n‖u_0 − ξ*‖².       (27)
Combining expressions (26) and (27), we obtain
−Γ_{n+1} ≤ ϑ_{n+1}‖u_n − ξ*‖² ≤ ϑ‖u_n − ξ*‖² ≤ ϑΓ_1/(1 − ϑ) + ϑ^{n+1}‖u_0 − ξ*‖².       (28)
It follows from (25) and (28) that
δ Σ_{n=1}^{k} ‖u_{n+1} − u_n‖² ≤ Γ_1 − Γ_{k+1} ≤ Γ_1 + ϑΓ_1/(1 − ϑ) + ϑ^{k+1}‖u_0 − ξ*‖² ≤ Γ_1/(1 − ϑ) + ‖u_0 − ξ*‖²;       (29)
letting k → ∞ in (29) implies that
Σ_{n=1}^{∞} ‖u_{n+1} − u_n‖² < +∞, which implies that ‖u_{n+1} − u_n‖ → 0 as n → ∞.       (30)
From relations (20) and (30), we obtain
‖u_{n+1} − ρ_n‖ → 0 as n → ∞.       (31)
Next, expression (28) implies that
−Λ_{n+1} ≤ ϑΓ_1/(1 − ϑ) + ϑ^{n+1}‖u_0 − ξ*‖² + r_{n+1}‖u_{n+1} − u_n‖².       (32)
From relation (18) we have
[1 − 2λL₂ − 4λL₁(1 + ϑ)](‖u_{n+1} − v_n‖² + ‖ρ_n − v_n‖²) ≤ Λ_n − Λ_{n+1} + ϑ(1 + ϑ)‖u_n − u_{n−1}‖² + 4λL₁ϑ(1 + ϑ)‖u_{n+1} − u_n‖².       (33)
Taking k ∈ ℕ and using (33) for n = 1, 2, …, k gives
[1 − 2L₂λ − 4L₁λ(1 + ϑ)] Σ_{n=1}^{k} (‖u_{n+1} − v_n‖² + ‖ρ_n − v_n‖²) ≤ Λ_0 − Λ_{k+1} + ϑ(1 + ϑ) Σ_{n=1}^{k} ‖u_n − u_{n−1}‖² + 4λL₁ϑ(1 + ϑ) Σ_{n=1}^{k} ‖u_{n+1} − u_n‖² ≤ Λ_0 + ϑΓ_1/(1 − ϑ) + ϑ^{k+1}‖u_0 − ξ*‖² + r_{k+1}‖u_{k+1} − u_k‖² + ϑ(1 + ϑ) Σ_{n=1}^{k} ‖u_n − u_{n−1}‖² + 4λL₁ϑ(1 + ϑ) Σ_{n=1}^{k} ‖u_{n+1} − u_n‖²;       (34)
letting k → ∞ in (34) implies that
Σ_{n=1}^{∞} ‖u_{n+1} − v_n‖² < +∞ and Σ_{n=1}^{∞} ‖ρ_n − v_n‖² < +∞,       (35)
and
lim_{n→∞} ‖u_{n+1} − v_n‖ = lim_{n→∞} ‖ρ_n − v_n‖ = 0.       (36)
The following relations can easily be derived:
lim_{n→∞} ‖u_n − v_n‖ = lim_{n→∞} ‖u_n − ρ_n‖ = lim_{n→∞} ‖v_{n−1} − v_n‖ = 0.       (37)
By the definition of ρ_n and using the Cauchy inequality, we have
‖ρ_n − v_{n−1}‖² = ‖u_n + ϑ_n(u_n − u_{n−1}) − v_{n−1}‖² = ‖(1 + ϑ_n)(u_n − v_{n−1}) − ϑ_n(u_{n−1} − v_{n−1})‖² = (1 + ϑ_n)‖u_n − v_{n−1}‖² − ϑ_n‖u_{n−1} − v_{n−1}‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² ≤ (1 + ϑ)‖u_n − v_{n−1}‖² + ϑ(1 + ϑ)‖u_n − u_{n−1}‖².       (38)
Now, summing up expression (38) for n = 1, 2, …, k, we obtain
Σ_{n=1}^{k} ‖ρ_n − v_{n−1}‖² ≤ (1 + ϑ) Σ_{n=1}^{k} ‖u_n − v_{n−1}‖² + ϑ(1 + ϑ) Σ_{n=1}^{k} ‖u_n − u_{n−1}‖².       (39)
The above expression with (30) and (35) implies that
Σ_{n=1}^{∞} ‖ρ_n − v_{n−1}‖² < +∞.       (40)
It follows from relation (16) that
‖u_{n+1} − ξ*‖² ≤ (1 + ϑ)‖u_n − ξ*‖² − ϑ‖u_{n−1} − ξ*‖² + ϑ(1 + ϑ)‖u_n − u_{n−1}‖² + 4L₁λ‖ρ_n − v_{n−1}‖²;       (41)
the above expression with (30), (40), (37) and Lemma 4 implies that the limit of ‖u_n − ξ*‖, ‖ρ_n − ξ*‖ and ‖v_n − ξ*‖ exists for every ξ* ∈ SOL_EP(f, K), which means that the sequences {u_n}, {ρ_n} and {v_n} are bounded. Next, we show that each weak sequential limit point of the sequence {u_n} belongs to SOL_EP(f, K). Let z be an arbitrary weak cluster point of {u_n}; then there exists a subsequence {u_{n_k}} of {u_n} converging weakly to z, which implies that {v_{n_k}} also converges weakly to z. Our aim is now to prove that z ∈ SOL_EP(f, K). By Lemma 6, the Lipschitz-type condition of the bifunction and Lemma 8, we have
λ f(v_{n_k}, y) ≥ λ f(v_{n_k}, u_{n_k+1}) + ⟨ρ_{n_k} − u_{n_k+1}, y − u_{n_k+1}⟩ ≥ λ f(v_{n_k−1}, u_{n_k+1}) − λ f(v_{n_k−1}, v_{n_k}) − λL₁‖v_{n_k−1} − v_{n_k}‖² − λL₂‖v_{n_k} − u_{n_k+1}‖² + ⟨ρ_{n_k} − u_{n_k+1}, y − u_{n_k+1}⟩ ≥ ⟨ρ_{n_k} − v_{n_k}, u_{n_k+1} − v_{n_k}⟩ − λL₁‖v_{n_k−1} − v_{n_k}‖² − λL₂‖v_{n_k} − u_{n_k+1}‖² + ⟨ρ_{n_k} − u_{n_k+1}, y − u_{n_k+1}⟩,
where y is an arbitrary element of H_n. As a result, with (31), (36), (37) and the boundedness of the sequence {u_n}, the right-hand side of the above inequality tends to zero. Since λ > 0, by assumption (A3) and v_{n_k} ⇀ z, we obtain
0 ≤ lim sup_{k→∞} f(v_{n_k}, y) ≤ f(z, y), ∀y ∈ H_n.
Since z ∈ K ⊆ H_n, we obtain f(z, y) ≥ 0 for all y ∈ K. This implies that z ∈ SOL_EP(f, K). Thus, Lemma 5 ensures that {ρ_n}, {u_n} and {v_n} converge weakly to ξ* ∈ SOL_EP(f, K) as n → ∞. □
Remark 2.
Setting ϑ_n = ϑ = 0 in Algorithm 1 gives the results in [35,36].
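For concreteness, the admissible range of the fixed step size λ in Theorem 1 can be evaluated numerically. The following small Python helper is an illustrative sketch (not part of the paper's MATLAB implementation); the function name is ours and it simply transcribes the bound above:

```python
import math

def lambda_upper_bound(theta, L1, L2):
    """Upper bound on the fixed step size lambda from Theorem 1,
    valid for an extrapolation factor 0 <= theta < sqrt(5) - 2."""
    assert 0 <= theta < math.sqrt(5) - 2
    num = 0.5 - 2.0 * theta - 0.5 * theta ** 2
    den = L2 * (1.0 - theta) ** 2 + 2.0 * L1 * (1.0 + theta + theta ** 2 + theta ** 3)
    return num / den
```

For example, with the Lipschitz-type constants of Example 2 in Section 6.2 (L₁ = L₂ = 1/2) and ϑ = 0.2, the bound evaluates to roughly 0.051, so the value λ = 0.05 used there lies just inside the admissible range.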

4. Inertial Popov’s Two-Step Subgradient Extragradient Algorithm for Strongly Pseudomonotone EP

The second algorithm is also an inertial algorithm, designed to solve strongly pseudomonotone equilibrium problems. The advantage of this algorithm is that no prior information regarding the strongly pseudomonotone constant γ and the Lipschitz-type constants L₁, L₂ is needed. Let {λ_n} ⊂ (0, +∞) be a non-increasing sequence such that the following conditions are satisfied:
(T1): lim_{n→∞} λ_n = 0 and (T2): Σ_{n=1}^{∞} λ_n = +∞.       (43)
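For instance, the sequences λ_n = 1/(n + 1) and λ_n = 1/(n + 2)^q with q ∈ (0, 1] satisfy both (T1) and (T2); step-size choices of exactly this type are used in the numerical experiments of Section 6.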
Assumption 2.
Let a bifunction f : E × E → ℝ satisfy the following conditions:
(B1)
f(ũ, ũ) = 0 for all ũ ∈ K, and f is strongly pseudomonotone on K;
(B2)
f satisfies the Lipschitz-type condition on E with two positive constants L₁ and L₂;
(B3)
f(ũ, ·) is subdifferentiable and convex on E for all ũ ∈ E.
Lemma 11.
Assume that f : E × E → ℝ satisfies conditions (B1)–(B3) and that the solution set SOL_EP(f, K) is nonempty. Then, for each ξ* ∈ SOL_EP(f, K), we have
‖u_{n+1} − ξ*‖² ≤ ‖ρ_n − ξ*‖² − (1 − 4λ_nL₁)‖ρ_n − v_n‖² − (1 − 2λ_nL₂)‖u_{n+1} − v_n‖² + 4λ_nL₁‖ρ_n − v_{n−1}‖² − 2γλ_n‖v_n − ξ*‖².
Now, we are in a position to provide our second convergence result of this work.
Theorem 2.
Assume that f : E × E → ℝ satisfies conditions (B1)–(B3). Let {u_n}, {v_n} and {ρ_n} be the sequences in E generated by Algorithm 2, where {ϑ_n} is a non-decreasing sequence with 0 ≤ ϑ_n ≤ ϑ < √5 − 2. Then {u_n}, {v_n} and {ρ_n} converge strongly to an element ξ* of SOL_EP(f, K).
Algorithm 2 (Two-step Subgradient Extragradient Algorithm for Strongly Pseudomonotone EP)
  • Initialization: Choose u_{−1}, u_0, v_0 ∈ E, 0 ≤ ϑ_n ≤ ϑ < √5 − 2 and a sequence {λ_n} satisfying (43). Set
    u_1 = arg min{λ_0 f(v_0, y) + (1/2)‖ρ_0 − y‖² : y ∈ K},
    v_1 = arg min{λ_1 f(v_0, y) + (1/2)‖ρ_1 − y‖² : y ∈ K},
    where ρ_0 = u_0 + ϑ_0(u_0 − u_{−1}) and ρ_1 = u_1 + ϑ_1(u_1 − u_0).
  • Iterative steps: Assume that u_{n−1}, u_n, v_{n−1} and v_n are known for n ≥ 1, and construct the half-space
    H_n = {z ∈ E : ⟨ρ_n − λ_nω_{n−1} − v_n, z − v_n⟩ ≤ 0},
    where ω_{n−1} ∈ ∂f(v_{n−1}, v_n).
  • Step 1: Compute
    u_{n+1} = arg min{λ_n f(v_n, y) + (1/2)‖ρ_n − y‖² : y ∈ H_n},
    where ρ_n = u_n + ϑ_n(u_n − u_{n−1}).
  • Step 2: Compute
    v_{n+1} = arg min{λ_{n+1} f(v_n, y) + (1/2)‖ρ_{n+1} − y‖² : y ∈ K},
    where ρ_{n+1} = u_{n+1} + ϑ_{n+1}(u_{n+1} − u_n).
  • Step 3: If u_{n+1} = ρ_n and v_n = v_{n−1}, then STOP. Otherwise set n := n + 1 and go to Step 1.
Proof. 
The proof is identical to that of Theorem 1 apart from a few changes; we provide it for the sake of readability. By Lemma 11, adding 4L₁λ_n‖ρ_{n+1} − v_n‖² to both sides, we have
‖u_{n+1} − ξ*‖² + 4L₁λ_n‖ρ_{n+1} − v_n‖² ≤ ‖ρ_n − ξ*‖² − (1 − 4L₁λ_n)‖ρ_n − v_n‖² − (1 − 2L₂λ_n)‖u_{n+1} − v_n‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖² − 2γλ_n‖v_n − ξ*‖² + 4L₁λ_n‖ρ_{n+1} − v_n‖².       (44)
By using the definition of ρ_n in Algorithm 2, we have
‖ρ_n − ξ*‖² = (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖².       (45)
By using the definition of ρ_{n+1} in Algorithm 2, we also have
‖ρ_{n+1} − v_n‖² ≤ (1 + ϑ_n)‖u_{n+1} − v_n‖² + ϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖².       (46)
Combining expressions (44)–(46), we obtain
‖u_{n+1} − ξ*‖² + 4L₁λ_{n+1}‖ρ_{n+1} − v_n‖² ≤ (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖² − (1 − 4L₁λ_n)‖ρ_n − v_n‖² − (1 − 2L₂λ_n)‖u_{n+1} − v_n‖² + 4L₁λ_n(1 + ϑ_n)‖u_{n+1} − v_n‖² + 4L₁λ_nϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − 2γλ_n‖v_n − ξ*‖²       (47)
≤ (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4L₁λ_nϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − 2γλ_n‖v_n − ξ*‖² − [(1 − 2L₂λ_n − 4L₁λ_n(1 + ϑ_n))/2] · 2(‖u_{n+1} − v_n‖² + ‖ρ_n − v_n‖²).       (48)
Next, we let ϱ_n = (1 − 2L₂λ_n − 4L₁λ_n(1 + ϑ_n))/2 and
Φ_n = ‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖².
With this substitution, expression (48) turns into the following:
Φ_{n+1} ≤ Φ_n + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4L₁λ_nϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − ϱ_n‖u_{n+1} − ρ_n‖² − 2γλ_n‖v_n − ξ*‖².       (49)
By the definition of ρ_n, we have
‖u_{n+1} − ρ_n‖² ≥ (1 − ϑ_n)‖u_{n+1} − u_n‖² + (ϑ_n² − ϑ_n)‖u_n − u_{n−1}‖².       (50)
Combining expressions (49) and (50), we obtain
Φ_{n+1} ≤ Φ_n + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4L₁λ_nϑ_n(1 + ϑ_n)‖u_{n+1} − u_n‖² − 2γλ_n‖v_n − ξ*‖² − ϱ_n(1 − ϑ_n)‖u_{n+1} − u_n‖² − ϱ_n(ϑ_n² − ϑ_n)‖u_n − u_{n−1}‖² = Φ_n + R_n‖u_n − u_{n−1}‖² − Q_n‖u_{n+1} − u_n‖² − 2γλ_n‖v_n − ξ*‖²,       (51)
where R_n := ϑ_n(1 + ϑ_n) + ϱ_nϑ_n(1 − ϑ_n) and Q_n := ϱ_n(1 − ϑ_n) − 4L₁λ_nϑ_n(1 + ϑ_n). In addition, we take Ψ_n = Φ_n + R_n‖u_n − u_{n−1}‖². It follows from (51) that
Ψ_{n+1} − Ψ_n ≤ −(Q_n − R_{n+1})‖u_{n+1} − u_n‖² − 2γλ_n‖v_n − ξ*‖².       (52)
Since λ_n → 0, there exists a finite number n_0 ∈ ℕ such that
0 < λ_n < (1/2 − 2ϑ − (1/2)ϑ²) / (L₂(1 − ϑ)² + 2L₁(1 + ϑ + ϑ² + ϑ³)), ∀n ≥ n_0.
Similarly to (24), it follows from expression (52) that
Ψ_{n+1} − Ψ_n ≤ −δ‖u_{n+1} − u_n‖² ≤ 0, ∀n ≥ n_0.       (53)
The above implies that the sequence {Ψ_n} is non-increasing for n ≥ n_0. From the value of Ψ_n, we have
‖u_n − ξ*‖² ≤ Ψ_n + ϑ_n‖u_{n−1} − ξ*‖² ≤ Ψ_{n_0} + ϑ‖u_{n−1} − ξ*‖² ≤ ⋯ ≤ Ψ_{n_0}(ϑ^{n−n_0} + ⋯ + 1) + ϑ^{n−n_0}‖u_{n_0} − ξ*‖² ≤ Ψ_{n_0}/(1 − ϑ) + ϑ^{n−n_0}‖u_{n_0} − ξ*‖².       (54)
From the definition of Ψ_{n+1} with expression (54), we obtain
−Ψ_{n+1} ≤ ϑ_{n+1}‖u_n − ξ*‖² ≤ ϑ‖u_n − ξ*‖² ≤ ϑΨ_{n_0}/(1 − ϑ) + ϑ^{n−n_0+1}‖u_{n_0} − ξ*‖² ≤ ϑΨ_{n_0}/(1 − ϑ) + ‖u_{n_0} − ξ*‖².       (55)
It follows from (53) and (55) that
δ Σ_{n=n_0}^{k} ‖u_{n+1} − u_n‖² ≤ Ψ_{n_0} − Ψ_{k+1} ≤ Ψ_{n_0} + ϑΨ_{n_0}/(1 − ϑ) + ‖u_{n_0} − ξ*‖² ≤ Ψ_{n_0}/(1 − ϑ) + ‖u_{n_0} − ξ*‖²;       (56)
letting k → ∞ in expression (56), we obtain
Σ_{n=1}^{∞} ‖u_{n+1} − u_n‖² < +∞, which implies that ‖u_{n+1} − u_n‖ → 0 as n → ∞.       (57)
From expressions (20) and (57), we obtain
‖u_{n+1} − ρ_n‖ → 0 as n → ∞.       (58)
Expression (55) implies that
−Φ_{n+1} ≤ ϑΨ_{n_0}/(1 − ϑ) + ‖u_{n_0} − ξ*‖² + R_{n+1}‖u_{n+1} − u_n‖².       (59)
It follows from (48), for all n ≥ n_0, that
[1 − 2L₂λ_n − 4L₁λ_n(1 + ϑ)](‖u_{n+1} − v_n‖² + ‖ρ_n − v_n‖²) ≤ Φ_n − Φ_{n+1} + ϑ(1 + ϑ)‖u_n − u_{n−1}‖² + 4L₁λ_nϑ(1 + ϑ)‖u_{n+1} − u_n‖².       (60)
Considering expression (60) for n = n_0, n_0 + 1, …, k and summing up, we obtain
[1 − 2L₂λ_n − 4L₁λ_n(1 + ϑ)] Σ_{n=n_0}^{k} (‖u_{n+1} − v_n‖² + ‖ρ_n − v_n‖²) ≤ Φ_{n_0} − Φ_{k+1} + ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_n − u_{n−1}‖² + [4L₁/(2L₂ + 4L₁)]ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_{n+1} − u_n‖² ≤ Φ_{n_0} + ϑΦ_{n_0}/(1 − ϑ) + ‖u_{n_0} − ξ*‖² + R_{k+1}‖u_{k+1} − u_k‖² + ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_n − u_{n−1}‖² + [4L₁/(2L₂ + 4L₁)]ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_{n+1} − u_n‖² = Φ_{n_0}/(1 − ϑ) + ‖u_{n_0} − ξ*‖² + R_{k+1}‖u_{k+1} − u_k‖² + ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_n − u_{n−1}‖² + [4L₁/(2L₂ + 4L₁)]ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_{n+1} − u_n‖².       (61)
Letting k → ∞ in expression (61) implies that
Σ_n ‖u_{n+1} − v_n‖² < +∞ and Σ_n ‖ρ_n − v_n‖² < +∞,       (62)
and
lim_{n→∞} ‖u_{n+1} − v_n‖ = lim_{n→∞} ‖ρ_n − v_n‖ = 0.       (63)
We can easily derive the following relations:
lim_{n→∞} ‖u_n − v_n‖ = lim_{n→∞} ‖u_n − ρ_n‖ = lim_{n→∞} ‖v_{n−1} − v_n‖ = 0.       (64)
By using the value of ρ_n, we obtain
‖ρ_n − v_{n−1}‖² ≤ (1 + ϑ)‖u_n − v_{n−1}‖² + ϑ(1 + ϑ)‖u_n − u_{n−1}‖².       (65)
Now, summing up expression (65) for n = n_0, n_0 + 1, …, k, we obtain
Σ_{n=n_0}^{k} ‖ρ_n − v_{n−1}‖² ≤ (1 + ϑ) Σ_{n=n_0}^{k} ‖u_n − v_{n−1}‖² + ϑ(1 + ϑ) Σ_{n=n_0}^{k} ‖u_n − u_{n−1}‖².       (66)
The above expression with (57) and (62) implies that
Σ_{n=1}^{∞} ‖ρ_n − v_{n−1}‖² < +∞.       (67)
Furthermore, expression (47) gives
‖u_{n+1} − ξ*‖² ≤ (1 + ϑ)‖u_n − ξ*‖² − ϑ‖u_{n−1} − ξ*‖² + ϑ(1 + ϑ)‖u_n − u_{n−1}‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖².       (68)
The above expression, through (57), (67) and Lemma 4, implies that
lim_{n→∞} ‖u_n − ξ*‖ = l.       (69)
Expression (64) with (69) gives
lim_{n→∞} ‖ρ_n − ξ*‖ = lim_{n→∞} ‖v_n − ξ*‖ = l.       (70)
Now, we show that the sequence {u_n} converges strongly to ξ*. Due to the condition on λ_n, for all n ≥ n_0 we can easily observe the following inequality:
0 < λ_n < 1/(2L₂ + 4L₁), ∀n ≥ n_0.
It follows from Lemma 11 that
2γλ_n‖v_n − ξ*‖² ≤ ‖ρ_n − ξ*‖² − ‖u_{n+1} − ξ*‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖², ∀n ≥ n_0.       (71)
From expressions (45) and (71), we obtain
2γλ_n‖v_n − ξ*‖² ≤ −‖u_{n+1} − ξ*‖² + (1 + ϑ_n)‖u_n − ξ*‖² − ϑ_n‖u_{n−1} − ξ*‖² + ϑ_n(1 + ϑ_n)‖u_n − u_{n−1}‖² + 4L₁λ_n‖ρ_n − v_{n−1}‖² ≤ (‖u_n − ξ*‖² − ‖u_{n+1} − ξ*‖²) + 2ϑ‖u_n − u_{n−1}‖² + (ϑ_n‖u_n − ξ*‖² − ϑ_{n−1}‖u_{n−1} − ξ*‖²) + 4L₁λ_n‖ρ_n − v_{n−1}‖².       (72)
It follows from expression (72) that
Σ_{n=n_0}^{k} 2γλ_n‖v_n − ξ*‖² ≤ (‖u_{n_0} − ξ*‖² − ‖u_{k+1} − ξ*‖²) + 2ϑ Σ_{n=n_0}^{k} ‖u_n − u_{n−1}‖² + (ϑ_k‖u_k − ξ*‖² − ϑ_{n_0−1}‖u_{n_0−1} − ξ*‖²) + [4L₁/(2L₂ + 4L₁)] Σ_{n=n_0}^{k} ‖ρ_n − v_{n−1}‖² ≤ ‖u_{n_0} − ξ*‖² + ϑ‖u_k − ξ*‖² + 2ϑ Σ_{n=n_0}^{k} ‖u_n − u_{n−1}‖² + [4L₁/(2L₂ + 4L₁)] Σ_{n=n_0}^{k} ‖ρ_n − v_{n−1}‖² ≤ M,       (73)
for some M ≥ 0. This implies that
Σ_{n=1}^{∞} 2γλ_n‖v_n − ξ*‖² < +∞.       (74)
Lemma 2 and (74) imply that
lim inf_{n→∞} ‖v_n − ξ*‖ = 0.       (75)
Finally, expressions (69) and (75) provide that lim_{n→∞} ‖u_n − ξ*‖ = 0. This completes the proof. □

5. Application to Variational Inequality Problems

Applying Algorithm 1 and Theorem 1 to the bifunction f(u, v) := ⟨H(u), v − u⟩, we can write the following result for solving variational inequality problems that involve a pseudomonotone and Lipschitz continuous operator.
Corollary 1.
Assume that H : K → E is a pseudomonotone operator that is Lipschitz continuous with constant L. Let {u_n}, {v_n} and {ρ_n} be the sequences generated as follows:
(i)
Choose u_{−1}, u_0, v_0 ∈ E, 0 ≤ ϑ_n ≤ ϑ < √5 − 2 and λ = λ(ϑ, L₁, L₂) > 0. Compute
u_1 = P_K(ρ_0 − λH(v_0)), where ρ_0 = u_0 + ϑ_0(u_0 − u_{−1}); v_1 = P_K(ρ_1 − λH(v_0)), where ρ_1 = u_1 + ϑ_1(u_1 − u_0).
(ii)
Given u_{n−1}, u_n, v_{n−1} and v_n for each n ≥ 1, first construct the half-space
H_n = {z ∈ E : ⟨ρ_n − λH(v_{n−1}) − v_n, z − v_n⟩ ≤ 0}.
(iii)
Evaluate
u_{n+1} = P_{H_n}(ρ_n − λH(v_n)), where ρ_n = u_n + ϑ_n(u_n − u_{n−1}); v_{n+1} = P_K(ρ_{n+1} − λH(v_n)), where ρ_{n+1} = u_{n+1} + ϑ_{n+1}(u_{n+1} − u_n),
where λ > 0 is such that
0 < λ < (1/2 − 2ϑ − (1/2)ϑ²) / (L₂(1 − ϑ)² + 2L₁(1 + ϑ + ϑ² + ϑ³)) and 0 ≤ ϑ_n ≤ ϑ < √5 − 2,
with L₁ = L₂ = L/2. Then the sequences {u_n}, {ρ_n} and {v_n} converge weakly to ξ* ∈ SOL_VI(H, K).
From Algorithm 2 and Theorem 2, we state the following result for the class of variational inequality problems involving a strongly pseudomonotone and Lipschitz continuous operator.
Corollary 2.
Assume that H : K → E is a strongly pseudomonotone operator that is Lipschitz continuous with constant L. Let {u_n}, {v_n} and {ρ_n} be the sequences generated as follows:
(i)
Choose u_{−1}, u_0, v_0 ∈ E, 0 ≤ ϑ_n ≤ ϑ < √5 − 2 and a sequence {λ_n} satisfying (43). Compute
u_1 = P_K(ρ_0 − λ_0H(v_0)), where ρ_0 = u_0 + ϑ_0(u_0 − u_{−1}); v_1 = P_K(ρ_1 − λ_1H(v_0)), where ρ_1 = u_1 + ϑ_1(u_1 − u_0).
(ii)
Given u_{n−1}, u_n, v_{n−1} and v_n, construct for each n ≥ 1 the half-space
H_n = {z ∈ E : ⟨ρ_n − λ_nH(v_{n−1}) − v_n, z − v_n⟩ ≤ 0}.
(iii)
Compute
u_{n+1} = P_{H_n}(ρ_n − λ_nH(v_n)), where ρ_n = u_n + ϑ_n(u_n − u_{n−1}); v_{n+1} = P_K(ρ_{n+1} − λ_{n+1}H(v_n)), where ρ_{n+1} = u_{n+1} + ϑ_{n+1}(u_{n+1} − u_n),
where 0 ≤ ϑ_n ≤ ϑ < √5 − 2, with L₁ = L₂ = L/2. The sequences {u_n}, {ρ_n} and {v_n} converge strongly to ξ* ∈ SOL_VI(H, K).
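Note that, writing a_n := ρ_n − λH(v_{n−1}) − v_n (respectively with λ_n in Corollary 2), the projection onto the half-space H_n admits the closed form
P_{H_n}(x) = x − (max{0, ⟨a_n, x − v_n⟩} / ‖a_n‖²) a_n whenever a_n ≠ 0, and P_{H_n}(x) = x otherwise,
so only the projection onto K itself requires problem-specific computation.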

6. Computational Experiment

Some numerical results are presented in this section to show the performance of the proposed methods. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz and 4.00 GB of RAM.

6.1. Nash-Cournot Equilibrium Model of Electricity Markets

The Nash–Cournot equilibrium model of electricity markets in [20] is considered in this example. Assume that there are three companies (i = 1, 2, 3) generating electricity. These three companies have generating units denoted by I_1 = {1}, I_2 = {2, 3} and I_3 = {4, 5, 6}, respectively. Let u_j denote the power generated by unit j, for j = 1, …, 6. Next, we take the electricity price P as P = 378.4 − 2 Σ_{j=1}^{6} u_j. The cost of generating unit j is written as:
c_j(u_j) := max{c_j^0(u_j), c_j^1(u_j)},
where c_j^0(u_j) := (α_j^0/2)u_j² + β_j^0 u_j + γ_j^0 and c_j^1(u_j) := α_j^1 u_j + (β_j^1/(β_j^1 + 1)) (γ_j^1)^{−1/β_j^1} (u_j)^{(β_j^1 + 1)/β_j^1}. Table 1 provides the values of these parameters. The profit of firm i is
F_i(u) := P Σ_{j ∈ I_i} u_j − Σ_{j ∈ I_i} c_j(u_j) = (378.4 − 2 Σ_{l=1}^{6} u_l) Σ_{j ∈ I_i} u_j − Σ_{j ∈ I_i} c_j(u_j),
with u = (u_1, …, u_6)^T subject to the constraint set u ∈ C := {u ∈ ℝ⁶ : u_j^min ≤ u_j ≤ u_j^max}, where the values u_j^min and u_j^max are given in Table 2. Consider the equilibrium bifunction f given by
f(u, v) := Σ_{i=1}^{3} (ϕ_i(u, u) − ϕ_i(u, v)),
where
ϕ_i(u, v) := (378.4 − 2(Σ_{j ∉ I_i} u_j + Σ_{j ∈ I_i} v_j)) Σ_{j ∈ I_i} v_j − Σ_{j ∈ I_i} c_j(v_j).
The Nash–Cournot equilibrium model of electricity markets can then be seen as an equilibrium problem of the following form (see [44] for more details):
Find ξ* ∈ K := C such that f(ξ*, y) ≥ 0, ∀y ∈ K.
For the numerical example in Section 6.1, we take the values u_{−1} = (10, 10, 20, 17, 8, 14)^T, u_0 = (10, 20, 30, 10, 0, 1)^T, v_0 = (48, 48, 30, 27, 18, 24)^T.
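To make the assembly of this test problem concrete, the following Python sketch builds the cost function and the bifunction f from the data of Tables 1 and 2. This is an illustration only (the paper's experiments were carried out in MATLAB), and the names cost, phi, f and units, as well as the labels c^0/c^1 for the two cost pieces, are our own:

```python
import numpy as np

# Parameters of the two cost pieces (Table 1) and the generating units I_1, I_2, I_3
alpha0 = np.array([0.0400, 0.0350, 0.1250, 0.0116, 0.0500, 0.0500])
beta0  = np.array([2.00, 1.75, 1.00, 3.25, 3.00, 3.00])
gamma0 = np.zeros(6)
alpha1 = np.array([2.00, 1.75, 1.00, 3.25, 3.00, 3.00])
beta1  = np.ones(6)
gamma1 = np.array([25.0, 28.5714, 8.0, 86.2069, 20.0, 20.0])
units  = [np.array([0]), np.array([1, 2]), np.array([3, 4, 5])]  # zero-based I_1, I_2, I_3

def cost(u):
    # c_j(u_j) = max{c_j^0(u_j), c_j^1(u_j)}, evaluated componentwise
    c0 = 0.5 * alpha0 * u**2 + beta0 * u + gamma0
    c1 = alpha1 * u + beta1 / (beta1 + 1) * gamma1**(-1.0 / beta1) * u**((beta1 + 1) / beta1)
    return np.maximum(c0, c1)

def phi(i, u, v):
    # profit of firm i when its units produce v and the other units produce u
    Ii = units[i]
    others = np.delete(np.arange(6), Ii)
    price = 378.4 - 2.0 * (u[others].sum() + v[Ii].sum())
    return price * v[Ii].sum() - cost(v)[Ii].sum()

def f(u, v):
    # equilibrium bifunction of Section 6.1
    return sum(phi(i, u, u) - phi(i, u, v) for i in range(3))
```

With this bifunction, each arg min step of Algorithms 1 and 2 is a convex subproblem over the box C of Table 2, which can be handled by any bound-constrained solver.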

6.1.1. Algorithm 1 Behaviour for Different Values of ϑ n :

Figure 1 and Table 3 characterize the behaviour of the error term D_n = ‖u_{n+1} − u_n‖ (with stopping tolerance TOL) for Algorithm 1 (Algo1) with respect to different values of ϑ_n, in terms of the number of iterations and the elapsed time, respectively.

6.1.2. Algorithm 1 Comparison with Existing Algorithms

Figure 2 and Table 4 present the numerical comparison between Algorithm 1 (EgA) in [19], Algorithm 1 (PEgA) in [21], Algorithm 3.1 (PSgEgA) in [35,36] and our Algorithm 1 (Algo1).
Algorithm 1 (EgA) in [19]: Choose u_0 ∈ E and 0 < λ < min{1/(2L₁), 1/(2L₂)}.
v_n = arg min{λ f(u_n, y) + (1/2)‖u_n − y‖² : y ∈ K}, u_{n+1} = arg min{λ f(v_n, y) + (1/2)‖u_n − y‖² : y ∈ K}.
Algorithm 1 (PEgA) in [21]: Choose u_0, v_0 ∈ E and 0 < λ < 1/(2L₂ + 4L₁).
u_{n+1} = arg min{λ f(v_n, y) + (1/2)‖u_n − y‖² : y ∈ K}, v_{n+1} = arg min{λ f(v_n, y) + (1/2)‖u_{n+1} − y‖² : y ∈ K}.
Algorithm 3.1 (PSgEgA) in [35,36]: Choose u_0, v_0 ∈ E and 0 < λ < 1/(2L₂ + 4L₁).
(i)
u_1 = arg min{λ f(v_0, y) + (1/2)‖u_0 − y‖² : y ∈ K}, v_1 = arg min{λ f(v_0, y) + (1/2)‖u_1 − y‖² : y ∈ K}.
(ii)
Given u_{n−1}, u_n, v_{n−1}, v_n for n ≥ 1, construct the half-space
H_n = {z ∈ E : ⟨u_n − λω_{n−1} − v_n, z − v_n⟩ ≤ 0}, where ω_{n−1} ∈ ∂f(v_{n−1}, v_n).
(iii)
u_{n+1} = arg min{λ f(v_n, y) + (1/2)‖u_n − y‖² : y ∈ H_n}, v_{n+1} = arg min{λ f(v_n, y) + (1/2)‖u_{n+1} − y‖² : y ∈ K}.

6.1.3. Algorithm 2 Behaviour by Using Different Step-Size Sequences λ n

Figure 3 and Table 5 describe the numerical results for the error term D_n = ‖u_{n+1} − u_n‖ (with stopping tolerance TOL) for Algorithm 2 (Algo2).

6.2. Example 2

Assume that f : ℝ × ℝ → ℝ is defined by
f(u, v) = tan⁻¹(u)(v − u), ∀u, v ∈ ℝ,
where K = [0, 1]. It is easy to see that f(u, v) satisfies all of the conditions (A1)–(A4) with Lipschitz-type constants L₁ = L₂ = 1/2 (for more details, see [36]).
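Indeed, one way to verify the Lipschitz-type constants is the following short computation: since |tan⁻¹(u) − tan⁻¹(v)| ≤ |u − v|, we have f(u, w) − f(u, v) − f(v, w) = (tan⁻¹(u) − tan⁻¹(v))(w − v) ≤ |u − v||w − v| ≤ (1/2)|u − v|² + (1/2)|v − w|², which is exactly the Lipschitz-type condition with L₁ = L₂ = 1/2.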

6.2.1. Algorithm 1 Performance for Different Values of Extrapolation Factor ϑ n :

Figure 4 and Table 6 show the numerical results for the error term D_n = |u_n| of Algorithm 1 using different values of ϑ_n in terms of the number of iterations. For these results, we use the values u_{−1} = 1/2, u_0 = 1, v_0 = 1; the y-axes depict the value of D_n, whereas the x-axes represent the number of iterations. The input and output values of the parameters are shown in Table 6, which are useful for choosing the best value of the extrapolation factor.

6.2.2. Algorithm 1 Comparison with Existing Algorithm

Figure 5 and Table 7 illustrate the comparison of our proposed Algorithm 1 (Algo1) with the existing Algorithm 3.1 (PSgEgA) that appears in the paper of Liu and Kong [36]. For these results, the stopping criterion is based on D_n = |u_n|; the y-axes depict the value of D_n, whereas the x-axes represent the number of iterations. The input and output values of the parameters are given in Table 7.

6.3. Nash–Cournot Oligopolistic Equilibrium Model

Consider a Nash–Cournot oligopolistic equilibrium model [19] based on n companies that manufacture the same commodity. Each company i produces an amount u_i of the commodity, and u denotes the vector whose i-th entry is u_i. The price function for each company i is defined by P_i(S) = ϕ_i − ψ_i S, where S = Σ_{i=1}^{n} u_i and ϕ_i > 0, ψ_i > 0. The profit function of each company i is F_i(u) = P_i(S)u_i − t_i(u_i), where t_i(u_i) is the tax and fee for producing u_i. Let K_i = [u_i^min, u_i^max] be the action set of each company i; the accumulated action set of the whole model then takes the form K := K_1 × K_2 × ⋯ × K_n. In addition, each company wants to attain its peak revenue under the assertion that the output of the other companies is an input parameter. The strategy used to deal with this sort of model mainly relies on the well-known Nash equilibrium concept. A point u* ∈ K = K_1 × K_2 × ⋯ × K_n is an equilibrium point of the model if
F_i(u*) ≥ F_i(u*[u_i]), ∀u_i ∈ K_i, ∀i = 1, 2, …, n,
where the vector u*[u_i] denotes the vector obtained from u* by replacing u_i* with u_i. Let f(u, v) := φ(u, v) − φ(u, u) with φ(u, v) := −Σ_{i=1}^{n} F_i(u[v_i]); the problem of determining a Nash equilibrium point then becomes
Find u* ∈ K : f(u*, v) ≥ 0, ∀v ∈ K.
Next, the bifunction f can be written as
f(u, v) = ⟨Pu + Qv + q, v − u⟩,
where q ∈ ℝ⁵ and the matrices P, Q are
Q =
[ 1.6  1    0    0    0 ]
[ 1    1.6  0    0    0 ]
[ 0    0    1.5  0    0 ]
[ 0    0    1    1.5  0 ]
[ 0    0    0    0    2 ]
P =
[ 3.1  2    0    0    0 ]
[ 3    3.6  0    0    0 ]
[ 0    0    3.5  2    0 ]
[ 0    0    2    3.3  0 ]
[ 0    0    0    0    3 ]
with q = (1, 2, 1, 2, 1)^T and K = {u ∈ ℝ⁵ : 2 ≤ u_i ≤ 5}. For this example, we use the parameter values u_{−1} = (1, 2, 1, 2, 0)^T, u_0 = (1, 3, 1, 1, 2)^T and v_0 = (1, 2, 1, 1, 2)^T.
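As an illustration of how a single arg min step of Algorithm 1 or 2 can be carried out for this quadratic bifunction, the following is a small Python/SciPy sketch (not the paper's MATLAB code; the helper prox_step is our own name, and the data are taken exactly as listed above):

```python
import numpy as np
from scipy.optimize import minimize

# Data of the oligopolistic model of Section 6.3
Q = np.array([[1.6, 1, 0, 0, 0], [1, 1.6, 0, 0, 0], [0, 0, 1.5, 0, 0],
              [0, 0, 1, 1.5, 0], [0, 0, 0, 0, 2]])
P = np.array([[3.1, 2, 0, 0, 0], [3, 3.6, 0, 0, 0], [0, 0, 3.5, 2, 0],
              [0, 0, 2, 3.3, 0], [0, 0, 0, 0, 3]])
q = np.array([1.0, 2.0, 1.0, 2.0, 1.0])
bounds = [(2.0, 5.0)] * 5          # the box K = {2 <= u_i <= 5}

def f(u, v):
    # f(u, v) = <P u + Q v + q, v - u>
    return (P @ u + Q @ v + q) @ (v - u)

def prox_step(u, rho, lam):
    """One step  argmin_y { lam * f(u, y) + 0.5 * ||rho - y||^2 : y in K },
    solved numerically with L-BFGS-B since f is quadratic in its second argument."""
    obj = lambda y: lam * f(u, y) + 0.5 * float(np.dot(rho - y, rho - y))
    res = minimize(obj, x0=np.clip(rho, 2.0, 5.0), bounds=bounds, method="L-BFGS-B")
    return res.x
```

In the actual algorithms, this subproblem is solved once over the half-space H_n (Step 1) and once over K (Step 2) in every iteration, so the cost of the bound-constrained solver dominates the running time.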

6.3.1. Algorithm 2. Behaviour for Different Step-Size Sequences λ n :

The classes of step-size sequences {λ_n} used in the experiments are:
(I)
λ_n = 1/(n + 2)^q, q ∈ {1.0, 0.8, 0.5, 0.3, 0.1};
(II)
λ_n = 1/(log(n + 3))^q, q ∈ {7, 5, 3, 2, 0.5}.
Figure 6 and Figure 7 describe the numerical results for Algorithm 2 (Algo2) using the above-defined classes of step-size sequences.

6.3.2. Algorithm 2. Comparison with Existing Algorithms

Figure 8 describes the numerical results of Algorithm 2 (Algo2) using the step-size sequence λ_n = 1/(n + 1).
Discussion About Numerical Experiments: We have the following observations regarding the above-mentioned experiments:
(1)
Figure 1 and Figure 4, together with Table 3 and Table 6, report results for Algorithm 1 using different values of ϑ_n. From these results, we can see that values of ϑ_n nearer the upper bound √5 − 2 are more appropriate and enhance the effectiveness of the suggested algorithms.
(2)
It can also be acknowledged that the efficiency of the algorithm depends on the complexity of the problem and tolerance of the error term. More time and a significant number of iterations are required in the case of large-scale problems. In this situation, we can see that the certain value of the step-size enhances the performance of the algorithm and boosts the convergence rate.
(3)
From Figure 5 and Table 7, it can also be noted that the choice of the initial points and the complexity of the bifunction affect the performance of algorithms in terms of the number of iterations and time of execution in seconds.
(4)
We have the following observation from Figure 3 and Figure 6, Figure 7 and Figure 8 with Table 5.
(i)
No previous information about the Lipschitz-type constants L₁, L₂ is required for running the algorithms in MATLAB.
(ii)
In fact, the convergence rate of algorithms depends entirely on the convergence rate of step-size sequences λ n .
(iii)
The convergence rate of the iterative sequence often depends on the complexity of the problem as well as on the size of the problem.
(iv)
Due to the variable step-size sequence, a specific step-size value that is not appropriate for the current iteration of the method often causes inconsistency and a hump in the behavior of the iterative sequence.

7. Conclusions

Two different approaches are proposed in this paper to deal with two families of equilibrium problems. The first algorithm is an inertial two-step proximal-like method that generates a weakly convergent iterative sequence and can solve pseudomonotone equilibrium problems. In addition, we use a diminishing and non-summable step-size sequence for the second algorithm to achieve strong convergence. The key advantage of the second algorithm is that the iterative sequences are developed with no prior knowledge of the strong pseudomonotonicity and Lipschitz-type constants of the bifunction. Numerical results were reported to show the numerical efficiency of the algorithms compared with other algorithms. These numerical studies imply that the inertial effects normally enhance the effectiveness of the iterative sequence in this context.

Author Contributions

Conceptualization, H.u.R. and P.K.; methodology, M.S., N.A.A. and W.K.; writing—original draft preparation, H.u.R., P.K. and W.K.; writing—review and editing, H.u.R., P.K., M.S. and N.A.A.; software, H.u.R., M.S. and N.A.A.; supervision, P.K., M.S. and W.K.; project administration and funding acquisition, P.K. and W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was financially supported by King Mongkut’s University of Technology Thonburi through the ‘KMUTT 55th Anniversary Commemorative Fund’. Moreover, this project was supported by Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart research Innovation research Cluster (CLASSIC), Faculty of Science, KMUTT. In particular, Habib ur Rehman was financed by the Petchra Pra Jom Doctoral Scholarship Academic for Ph.D. Program at KMUTT [grant number 39/2560]. Furthermore, Wiyada Kumam was financially supported by the Rajamangala University of Technology Thanyaburi (RMUTTT) (Grant No. NSF62D0604).

Acknowledgments

The first author would like to thank the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”. We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped improve the quality of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Fan, K. A Minimax Inequality and Applications, INEQUALITIES III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972. [Google Scholar]
  3. Biegler, L.T. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes; SIAM-Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2010; Volume 10. [Google Scholar]
  4. Dafermos, S. Traffic Equilibrium and Variational Inequalities. Transp. Sci. 1980, 14, 42–54. [Google Scholar] [CrossRef] [Green Version]
  5. Ferris, M.C.; Pang, J.S. Engineering and Economic Applications of Complementarity Problems. SIAM Rev. 1997, 39, 669–713. [Google Scholar] [CrossRef] [Green Version]
  6. Nagurney, A. Network Economics: A Variational Inequality Approach; Springer: Dordrecht, The Netherlands, 1993. [Google Scholar] [CrossRef]
  7. Patriksson, M. The Traffic Assignment Problem: Models and Methods; Courier Dover Publications: Mineola, NY, USA, 2015. [Google Scholar]
  8. Cournot, A.A. Recherches sur les Principes Mathématiques de la Théorie des Richesses; Wentworth Press Hachette: Paris, France, 1838. [Google Scholar]
  9. Arrow, K.J.; Debreu, G. Existence of an Equilibrium for a Competitive Economy. Econometrica 1954, 22, 265. [Google Scholar] [CrossRef]
  10. Nash, J.F. 5. Equilibrium Points in n-Person Games. In The Essential John Nash; Nasar, S., Ed.; Princeton University Press: Princeton, NJ, USA, 2002; pp. 49–50. [Google Scholar] [CrossRef] [Green Version]
  11. Nash, J. Non-Cooperative Games. Ann. Math. 1951, 54, 286. [Google Scholar] [CrossRef]
  12. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory, Methods Appl. 1992, 18, 1159–1166. [Google Scholar] [CrossRef]
  13. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100. [Google Scholar]
  14. Mastroeni, G. On auxiliary principle for equilibrium problems. In Equilibrium Problems and Variational Models; Springer: Berlin, Germany, 2003; pp. 289–298. [Google Scholar]
  15. Martinet, B. Brève communication. Régularisation d’inéquations variationnelles par approximations successives. Rev. Française D’informatique Et De Rech. Opérationnelle. Série Rouge 1970, 4, 154–158. [Google Scholar] [CrossRef] [Green Version]
  16. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef] [Green Version]
  17. Konnov, I. Application of the Proximal Point Method to Nonmonotone Equilibrium Problems. J. Optim. Theory Appl. 2003, 119, 317–333. [Google Scholar] [CrossRef]
  18. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41. [Google Scholar] [CrossRef]
  19. Quoc, T.D.; Muu, L.D.; Hien, N.V. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  20. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2011, 52, 139–159. [Google Scholar] [CrossRef]
  21. Lyashko, S.I.; Semenov, V.V. A New Two-Step Proximal Algorithm of Solving the Problem of Equilibrium Programming. In Optimization and Its Applications in Control and Data Sciences; Springer International Publishing: New York, NY, USA, 2016; pp. 315–325. [Google Scholar] [CrossRef]
  22. Anh, P.N.; Hai, T.N.; Tuan, P.M. On ergodic algorithms for equilibrium problems. J. Glob. Optim. 2015, 64, 179–195. [Google Scholar] [CrossRef]
  23. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 97, 811–824. [Google Scholar] [CrossRef]
  24. ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequalities Appl. 2019, 2019. [Google Scholar] [CrossRef]
  25. Anh, P.N.; An, L.T.H. The subgradient extragradient method extended to equilibrium problems. Optimization 2012, 64, 225–248. [Google Scholar] [CrossRef]
  26. Ur Rehman, H.; Kumam, P.; Je Cho, Y.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optimization Methods and Software 2020, 1–32. [Google Scholar] [CrossRef]
  27. Vinh, N.T.; Muu, L.D. Inertial Extragradient Algorithms for Solving Equilibrium Problems. Acta Math. Vietnam. 2019, 44, 639–663. [Google Scholar] [CrossRef]
  28. ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463. [Google Scholar] [CrossRef] [Green Version]
  29. Hieu, D.V. An inertial-like proximal algorithm for equilibrium problems. Math. Methods Oper. Res. 2018, 88, 399–415. [Google Scholar] [CrossRef]
  30. ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39. [Google Scholar] [CrossRef]
  31. Hieu, D.V.; Cho, Y.J.; bin Xiao, Y. Modified extragradient algorithms for solving equilibrium problems. Optimization 2018, 67, 2003–2029. [Google Scholar] [CrossRef]
  32. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A Self-Adaptive Extra-Gradient Methods for a Family of Pseudomonotone Equilibrium Programming with Application in Different Classes of Variational Inequality Problems. Symmetry 2020, 12, 523. [Google Scholar] [CrossRef] [Green Version]
  33. ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem. Symmetry 2020, 12, 503. [Google Scholar] [CrossRef] [Green Version]
  34. Muu, L.D.; Quoc, T.D. Regularization Algorithms for Solving Monotone Ky Fan Inequalities with Application to a Nash-Cournot Equilibrium Model. J. Optim. Theory Appl. 2009, 142, 185–204. [Google Scholar] [CrossRef]
  35. Kassay, G.; Hai, T.N.; Vinh, N.T. Coupling popov’s algorithm with subgradient extragradient method for solving equilibrium problems. J. Nonlinear Convex Anal. 2018, 19, 959–986. [Google Scholar]
  36. Liu, Y.; Kong, H. The new extragradient method extended to equilibrium problems. Rev. De La Real Acad. De Cienc. Exactas, Físicas Y Naturales. Ser. A. Matemáticas 2019, 113, 2113–2126. [Google Scholar] [CrossRef]
  37. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker, Inc.: New York, NY, USA, 1984. [Google Scholar]
  38. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43. [Google Scholar] [CrossRef]
  39. Tiel, J.V. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984. [Google Scholar]
  40. Ofoedu, E. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728. [Google Scholar] [CrossRef] [Green Version]
  41. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd ed.; CMS Books in Mathematics; Springer International Publishing: New York, NY, USA, 2017. [Google Scholar]
  42. Alvarez, F.; Attouch, H. An Inertial Proximal Method for Maximal Monotone Operators via Discretization of a Nonlinear Oscillator with Damping. Set Valued Var. Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  43. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–598. [Google Scholar] [CrossRef] [Green Version]
  44. Maiorano, A.; Song, Y.; Trovato, M. Dynamics of non-collusive oligopolistic electricity markets. In Proceedings of the 2000 IEEE Power Engineering Society Winter Meeting, Conference Proceedings (Cat. No.00CH37077), Singapore, 23–27 January 2000. [Google Scholar] [CrossRef]
  45. Hieu, D.V. Convergence analysis of a new algorithm for strongly pseudomontone equilibrium problems. Numer. Algorithms 2017, 77, 983–1001. [Google Scholar] [CrossRef]
Figure 1. Experiment in Section 6.1.1: Algorithm 1 behaviour for different values of ϑ_n.
Figure 2. Comparison of Algorithm 1 with Algorithm 1 in [19], Algorithm 1 in [21], and Algorithm 3.1 in [35,36].
Figure 3. Algorithm 2 behaviour with respect to different step-size sequences λ_n.
Figure 4. Experiment in Section 6.2.1: Algorithm 1 behaviour regarding different values of ϑ_n.
Figure 5. Experiment in Section 6.2.2: Comparison of Algorithm 1 with Algorithm 3.1 in [35,36].
Figure 6. Experiment in Section 6.3.1: Algorithm 2 behaviour with respect to the step-size sequences λ_n = 1/(n + 2)^q.
Figure 7. Experiment in Section 6.3.1: Algorithm 2 behaviour with respect to the step-size sequences λ_n = 1/(log(n + 3))^q.
Figure 8. Experiment in Section 6.3.2: Comparison of Algorithm 2 with Algorithm 1 (EgM) in [23] and Algorithm 3.1 (PEgM) in [45].
Table 1. The values of the parameters used in the cost functions.

j   α_j^0    β_j^0   γ_j^0   α_j^1    β_j^1    γ_j^1
1   0.0400   2.00    0.00    2.0000   1.0000   25.0000
2   0.0350   1.75    0.00    1.7500   1.0000   28.5714
3   0.1250   1.00    0.00    1.0000   1.0000   8.0000
4   0.0116   3.25    0.00    3.2500   1.0000   86.2069
5   0.0500   3.00    0.00    3.0000   1.0000   20.0000
6   0.0500   3.00    0.00    3.0000   1.0000   20.0000
Table 2. The parameter values used for the constraint set.

j         1    2    3    4    5    6
u_j^min   0    0    0    0    0    0
u_j^max   80   80   50   55   30   40
Table 3. Experiment in Section 6.1.1: Algorithm 1 performance for varying values of the extrapolation factor ϑ_n.

Algo. name   ϑ_n     λ      ξ*                                                         Iter.   Time         TOL
Algo1        0.22    0.02   (46.6525, 32.1462, 15.0018, 25.0170, 10.8987, 10.8982)^T   4824    138.915365   10^{-4}
Algo1        0.18    0.02   (46.6525, 32.1460, 15.0020, 25.0104, 10.9019, 10.9016)^T   4949    166.620335   10^{-4}
Algo1        0.14    0.02   (46.6525, 32.1460, 15.0020, 25.0035, 10.9050, 10.9053)^T   5193    127.834772   10^{-4}
Algo1        0.10    0.02   (46.6726, 32.1460, 15.0020, 24.9969, 10.9080, 10.9089)^T   5432    136.310422   10^{-4}
Algo1        0.05    0.02   (46.6526, 32.1460, 15.0020, 24.9885, 10.9118, 10.9134)^T   5721    142.108161   10^{-4}
Algo1        0.01    0.02   (46.6526, 32.1460, 15.0020, 24.9818, 10.9149, 10.9170)^T   5945    144.356535   10^{-4}
Algo1        0.001   0.02   (46.6726, 32.1460, 15.0021, 24.9787, 10.9163, 10.9187)^T   6043    157.711757   10^{-4}
Table 4. Experiment in Section 6.1.2: Algorithm 1 comparison with existing algorithms using two different values of ϑ_n.

Algo. name   ϑ_n    λ      ξ*                                                         Iter.   Time         TOL
EgA          --     0.02   (46.6526, 32.1469, 15.0012, 24.9783, 10.9154, 10.9200)^T   7180    264.156236   10^{-4}
PEgA         --     0.02   (46.6526, 32.1460, 15.0021, 24.9784, 10.8164, 10.9188)^T   6055    210.681669   10^{-4}
PSgEgA       --     0.02   (46.6525, 32.1463, 15.0017, 25.0004, 10.9058, 10.9076)^T   5515    175.840493   10^{-4}
Algo1        0.12   0.02   (46.6725, 32.1463, 15.0017, 25.0181, 10.8976, 10.8982)^T   4894    134.245610   10^{-4}
Algo1        0.20   0.02   (46.6725, 32.1463, 15.0017, 25.0326, 10.8910, 10.8904)^T   4333    115.599023   10^{-4}
Table 5. Experiment in Section 6.1.3: Algorithm 2 numerical values using different step-size sequences λ_n.

Algo. name   ϑ_n    λ_n                   ξ*                                                         Iter.   Time        TOL
Algo2        0.12   1/(n+1)               (46.6526, 32.1467, 15.0011, 25.1260, 10.8442, 10.8442)^T   1254    61.898186   10^{-4}
Algo2        0.12   1/log(n+1)            (46.6523, 32.1467, 15.0011, 25.1409, 10.8368, 10.8368)^T   442     29.006584   10^{-4}
Algo2        0.12   1/((n+1) log(n+3))    (46.6524, 32.1467, 15.0011, 25.1011, 10.8566, 10.8566)^T   2311    70.849546   10^{-4}
Algo2        0.12   log(n+3)/(n+1)        (46.6523, 32.1467, 15.0011, 25.1371, 10.8387, 10.8387)^T   662     44.766232   10^{-4}
Algo2        0.12   1/log(log(n+20))      (46.6525, 32.1467, 15.0011, 25.1464, 10.8341, 10.8341)^T   434     31.504484   10^{-4}
Table 6. Experiment in Section 6.2.1: Algorithm 1 performance for varying values of the extrapolation factor ϑ_n.

ϑ_n    λ       ξ*                 Iter.   Time       TOL
0.20   0.050   6.5877 × 10^{-9}   70      0.008866   10^{-8}
0.15   0.050   6.6948 × 10^{-9}   75      0.010382   10^{-8}
0.10   0.050   6.4466 × 10^{-9}   80      0.008518   10^{-8}
0.05   0.050   7.5191 × 10^{-9}   84      0.008378   10^{-8}
0.01   0.050   6.9392 × 10^{-9}   88      0.008989   10^{-8}
Table 7. Experiment in Section 6.2.2: Algorithm 1 comparison with Algorithm 3.1 in [35,36].

Algorithm   u_{-1}   u_0   v_0   ϑ_n    λ     ξ*                  Iter.   Time       TOL
PSgEgA      --       1     1     --     0.1   7.9278 × 10^{-11}   110     0.001014   10^{-10}
Algo1       0.5      1     1     0.16   0.1   6.5112 × 10^{-11}   92      0.006082   10^{-10}
PSgEgA      --       0.5   0.5   --     0.1   6.9204 × 10^{-11}   107     0.006580   10^{-10}
Algo1       1        0.5   0.5   0.16   0.1   7.1870 × 10^{-11}   87      0.006919   10^{-10}
PSgEgA      --       0.2   0.2   --     0.1   7.7873 × 10^{-11}   102     0.007282   10^{-10}
Algo1       1        0.2   0.2   0.16   0.1   7.1827 × 10^{-11}   66      0.000688   10^{-10}
