
A Class of Smoothing Modulus-Based Iterative Methods for Solving the Stochastic Mixed Complementarity Problems

1 School of Mathematics and Computing Science, Guilin University of Electronic Technology, Guilin 541004, China
2 Department of Mathematics Teaching and Research, Guilin Institute of Information Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2023, 15(1), 229; https://doi.org/10.3390/sym15010229
Submission received: 3 December 2022 / Revised: 29 December 2022 / Accepted: 10 January 2023 / Published: 13 January 2023
(This article belongs to the Section Mathematics)

Abstract

In this paper, we present a smoothing modulus-based iterative method for solving stochastic mixed complementarity problems (SMCPs). The main idea is to first transform the expected value model of the SMCP into an equivalent nonsmooth system of equations, then obtain an approximate smooth system of equations by using a smoothing function, and finally solve it by the Newton method. We give a convergence analysis, and the numerical results show the effectiveness of the new method for solving SMCPs with symmetric coefficient matrices.

1. Introduction

Mixed complementarity problems often arise in economics, transportation, control, and optimization, for example in the price equilibrium problem, the Nash equilibrium problem, and the stochastic traffic equilibrium problem. On the other hand, since some elements may involve uncertain data, many practical problems can be characterized by SMCPs: for example, the stochastic traffic equilibrium problem [1].
The SMCP belongs to the class of stochastic nonlinear complementarity problems (SNCPs). Zhang and Chen [2] applied the expected residual minimization model of SNCPs to solve the stochastic traffic equilibrium problem. Li et al. [3] presented a sampling average approximation method for a class of stochastic Nash equilibrium problems; for other sampling average methods, see [4,5,6,7]. Recently, Egging [8] proposed a Benders decomposition method for multi-stage SMCPs, and Devine et al. [9] proposed a rolling horizon approach for solving SMCPs. The expected value model of the SMCP [10] and the expected residual minimization model of the SLCP [11] have also been studied, with the sample average approximation (SAA) method applied to solve them. Expected value model methods are also used to solve SMCPs in [12,13,14,15].
Recently, Dong and Jiang proposed a modified modulus iteration method in [16], and Bai [17] proposed the modulus-based matrix splitting iteration methods. These methods are very effective for solving linear complementarity problems with symmetric positive definite or unsymmetric coefficient matrices. Many further results on modulus-based iteration methods have since been presented, such as nonstationary extrapolated modulus algorithms and modulus-based matrix splitting iteration methods; see [18,19,20] for more details. In the modulus-based iteration methods, the equivalent fixed-point equation system is a non-differentiable absolute value equation system, so Foutayeni et al. [21] constructed a smoothing function to approximate the original equations, obtaining an effective smoothing numerical algorithm. Such approximation methods are efficient; see [22,23,24].
In this paper, based on the idea of the smoothing numerical algorithm in [21], we set up a smoothing modulus-based iteration method to solve SMCPs. The numerical results in Section 4 show that the new method is very effective for solving SMCPs with symmetric coefficient matrices.
The organization of the paper is as follows. In Section 2, we establish the smoothing modulus-based iteration method for solving the SMCPs. The convergence of the new method is presented in Section 3, and the numerical results are shown in Section 4. In addition, some conclusions are given in Section 5.

2. The Smoothing Modulus-Based Iterative Method

The following notation will be used in the paper. For a given smooth vector function $g:\mathbb{R}^{s}\to\mathbb{R}^{t}$, $\nabla g\in\mathbb{R}^{t\times s}$ denotes its Jacobian matrix. $(\Omega,\mathcal{F},\mu)$ denotes the probability space, where $\Omega$ is a sample space, $\mathcal{F}$ is a non-empty subset of the power set of $\Omega$ and $\mu$ is the probability measure. For a given matrix $A$, we let $\|A\|$ denote its spectral norm and $\|A\|_{F}$ denote its Frobenius norm, that is,
$$\|A\|_{F}=\left(\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}^{2}\right)^{1/2},$$
where $a_{ij}$ is an element of the matrix $A$.
In this paper, we consider the following SMCP: given mappings $G:\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\times\Omega\to\mathbb{R}^{n_{1}}$ and $H:\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\times\Omega\to\mathbb{R}^{n_{2}}$, find $u\in\mathbb{R}^{n_{1}}$ and $v\in\mathbb{R}^{n_{2}}$ such that, for almost all $\omega\in\Omega$,
$$G(u,v,\omega)=0,\quad v\ge 0,\quad H(u,v,\omega)\ge 0,\quad v^{T}H(u,v,\omega)=0,\tag{1}$$
where ω is a random variable.
The SMCP is a natural extension of the mixed complementarity problem. In the deterministic situation, the above problem degenerates into the mixed complementarity problem (MCP): given mappings $G:\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\to\mathbb{R}^{n_{1}}$ and $H:\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\to\mathbb{R}^{n_{2}}$, find $u\in\mathbb{R}^{n_{1}}$ and $v\in\mathbb{R}^{n_{2}}$ such that
$$G(u,v)=0,\quad v\ge 0,\quad H(u,v)\ge 0,\quad v^{T}H(u,v)=0.$$
For SMCP (1), due to the existence of the random variable $\omega$, it is generally impossible to find $u$ and $v$ that satisfy the problem for almost all $\omega$, so methods for solving the deterministic mixed complementarity problem cannot be directly applied to Problem (1). Hence, we use the expected value (EV) model, proposed by Gürkan et al. [25] for stochastic variational inequalities. The EV model of SMCP (1) is: find $u\in\mathbb{R}^{n_{1}}$ and $v\in\mathbb{R}^{n_{2}}$ such that
$$E[G(u,v,\omega)]=0,\quad v\ge 0,\quad E[H(u,v,\omega)]\ge 0,\quad v^{T}E[H(u,v,\omega)]=0,\tag{2}$$
where $G:\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\times\Omega\to\mathbb{R}^{n_{1}}$ and $H:\mathbb{R}^{n_{1}}\times\mathbb{R}^{n_{2}}\times\Omega\to\mathbb{R}^{n_{2}}$ are two mappings, $\omega\in\Omega$ is a random variable, and $E[\cdot]$ denotes the expected value. The EV model transforms the stochastic mixed complementarity problem into a deterministic mixed complementarity problem; we then construct the smoothing modulus-based iteration method to solve it.
For Problem (2), let $z\in\mathbb{R}^{n_{2}}$ with $v=|z|+z$ and $E[H(u,v,\omega)]=|z|-z$; then $|z|-z-E[H(u,v,\omega)]=0$, and we set
$$E[Q(u,z,\omega)]=|z|-z-E[H(u,|z|+z,\omega)].$$
We can further rewrite (2) as the following equivalent equation system:
$$\phi(u,z)=\begin{pmatrix}E[G(u,z,\omega)]\\[2pt] E[Q(u,z,\omega)]\end{pmatrix}=0.\tag{3}$$
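To see the effect of the modulus substitution, note that for any $z$ the pair $v=|z|+z$, $w=|z|-z$ automatically satisfies $v\ge 0$, $w\ge 0$ and $v^{T}w=0$, so the three complementarity conditions collapse into a single equation in $z$. A quick numerical check (illustrative only, not the paper's code):

```python
import numpy as np

# Illustrative check: the modulus substitution v = |z| + z, w = |z| - z
# gives v >= 0, w >= 0 and v^T w = 0 for every z, since
# v_i * w_i = |z_i|^2 - z_i^2 = 0 componentwise.
rng = np.random.default_rng(0)
z = rng.standard_normal(6)

v = np.abs(z) + z
w = np.abs(z) - z

assert (v >= 0).all() and (w >= 0).all()
assert abs(float(v @ w)) < 1e-12
```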
Since $|z|$ is not differentiable, we introduce the smooth vector function [21]
$$(z^{2}+e^{-c})^{1/2}=\left((z_{1}^{2}+e^{-c})^{1/2},(z_{2}^{2}+e^{-c})^{1/2},\ldots,(z_{n_{2}}^{2}+e^{-c})^{1/2}\right)^{T},$$
where $c$ is a large positive integer. We substitute it into Problem (3) to replace $|z|$ in $\phi(u,z)$ and set
$$E[Q_{c}(u,z,\omega)]=(z^{2}+e^{-c})^{1/2}-z-E[H(u,(z^{2}+e^{-c})^{1/2}+z,\omega)].$$
Hence, we obtain an approximate smooth nonlinear equation system for Equation (3):
$$\Phi(u,z)=\begin{pmatrix}E[G(u,z,\omega)]\\[2pt] E[Q_{c}(u,z,\omega)]\end{pmatrix}=0.\tag{4}$$
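The quality of the smoothing can be checked directly: since $\sqrt{z_{i}^{2}+e^{-c}}-|z_{i}|\le e^{-c/2}$ (with the worst case at $z_{i}=0$), the approximation error is uniformly small for large $c$. A short sketch (illustrative; the grid and the values of $c$ are our own choices):

```python
import numpy as np

# The smoothing sqrt(z^2 + exp(-c)) approximates |z| with uniform error
# at most exp(-c/2); the worst case occurs at z = 0.
def smooth_abs(z, c):
    return np.sqrt(z**2 + np.exp(-c))

z = np.linspace(-2.0, 2.0, 401)
for c in (5.0, 15.0, 30.0):
    err = np.max(smooth_abs(z, c) - np.abs(z))
    assert err <= np.exp(-c / 2.0) + 1e-15
```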
Subsequently, we use the sample average method based on an independently and identically distributed (iid) sequence of samples of the random variable $\omega$ to approximate the expected value, transforming the original problem into an approximate problem. By solving this approximate problem, an approximate solution of Problem (4) is obtained.
For an integrable function $\varphi:\Omega\to\mathbb{R}$, the sample average approximation of $E[\varphi(\omega)]$ is obtained by taking an iid sequence $\{\omega_{1},\ldots,\omega_{N}\}\subset\Omega$ of the random variable $\omega$, giving $E[\varphi(\omega)]\approx\frac{1}{N}\sum_{i=1}^{N}\varphi(\omega_{i})$. The strong law of large numbers guarantees that this procedure converges with probability one (abbreviated by $w.p.1$), that is,
$$\lim_{N\to+\infty}\frac{1}{N}\sum_{i=1}^{N}\varphi(\omega_{i})=E[\varphi(\omega)]=\int_{\Omega}\varphi(\omega)\,d\xi(\omega)\quad w.p.1,\tag{5}$$
where $\xi(\omega)$ is the probability distribution function of the random variable $\omega$; see [25,26] for more details.
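As a concrete illustration of (5) (our own toy example, not from the paper): for $\varphi(\omega)=10\omega$ with $\omega$ uniform on $[0,1]$, as in the examples of Section 4, $E[\varphi(\omega)]=5$, and the sample averages approach this value as $N$ grows:

```python
import numpy as np

# Sample average approximation of E[10*omega] = 5 for omega ~ U[0,1].
rng = np.random.default_rng(1)
estimates = {N: float(np.mean(10 * rng.random(N))) for N in (50, 10**3, 10**5)}

# By the strong law of large numbers the estimates converge w.p.1;
# at N = 10^5 the error is already small.
assert abs(estimates[10**5] - 5.0) < 0.05
```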
Taking an iid sample $\{\omega_{1},\ldots,\omega_{N}\}\subset\Omega$ of the random variable and using the average over the sample points to approximate the expected value, we obtain the following approximation of Problem (4):
$$\Phi^{N}(u,z)=\begin{pmatrix}G^{N}(u,z)\\[2pt] Q_{c}^{N}(u,z)\end{pmatrix}=0,\tag{6}$$
where $G^{N}(u,z)=\frac{1}{N}\sum_{i=1}^{N}G(u,z,\omega_{i})$ and $Q_{c}^{N}(u,z)=\frac{1}{N}\sum_{i=1}^{N}Q_{c}(u,z,\omega_{i})$.
The basic assumptions of this article are given below [10]:
(A1)
For any $(u,v)\in\mathbb{R}^{n_{1}+n_{2}}$, $G(u,v,\cdot)$ and $H(u,v,\cdot)$ are $\mathcal{F}$-measurable, where $\mathcal{F}$ is the $\sigma$-algebra on $\Omega$.
(A2)
For every $\omega\in\Omega$, $G(\cdot,\cdot,\omega)$ and $H(\cdot,\cdot,\omega)$ are continuously differentiable on $\mathbb{R}^{n_{1}+n_{2}}$.
(A3)
There is a non-negative integrable function $\kappa(\omega)$ such that, for any $\omega\in\Omega$,
$$\sup_{(u,v)\in\mathbb{R}^{n_{1}+n_{2}}}\left\{\|G(u,v,\omega)\|^{2},\ \|H(u,v,\omega)\|^{2},\ \|\nabla_{(u,v)}G(u,v,\omega)\|_{F}^{2},\ \|\nabla_{(u,v)}H(u,v,\omega)\|_{F}^{2}\right\}\le\kappa(\omega).$$
Lemma 1.
(Theorem 2.1, [21]). Let $z\in\mathbb{R}^{n_{2}}$; as $c\to+\infty$, the vector function $(z^{2}+e^{-c})^{1/2}$ converges uniformly to $|z|$.
Lemma 2.
(Theorem 16.8, [27]). Suppose that $f(\omega,t)$ is a measurable and integrable function of $\omega$ for each $t$ in $(a,b)$. Let $\phi(t)=\int f(\omega,t)\,\mu(d\omega)$.
(i) Suppose that for $\omega\in A$, where $A\in\mathcal{F}$ and $\mu(\Omega\setminus A)=0$, $f(\omega,t)$ is continuous in $t$ at $t_{0}$. Suppose further that $|f(\omega,t)|\le g(\omega)$ for $\omega\in A$ and $|t-t_{0}|<\delta$, where $\delta$ is independent of $\omega$ and $g$ is integrable. Then $\phi(t)$ is continuous at $t_{0}$.
(ii) Suppose that for $\omega\in A$, where $A\in\mathcal{F}$ and $\mu(\Omega\setminus A)=0$, $f(\omega,t)$ has in $(a,b)$ a derivative $f'(\omega,t)$. Suppose further that $|f'(\omega,t)|\le g(\omega)$ for $\omega\in A$ and $t\in(a,b)$, where $g$ is integrable. Then $\phi(t)$ has derivative $\int f'(\omega,t)\,\mu(d\omega)$ on $(a,b)$.
We now discuss some properties of $\Phi(u,z)$ and $\Phi^{N}(u,z)$.
Lemma 3.
$\Phi$ is a smooth mapping, and the Jacobian matrix $V$ of $\Phi(u,z)$ is
$$V=\begin{pmatrix}\nabla_{u}E[G(u,z,\omega)] & \nabla_{z}E[G(u,z,\omega)]\\[2pt] \nabla_{u}E[Q_{c}(u,z,\omega)] & \nabla_{z}E[Q_{c}(u,z,\omega)]\end{pmatrix}.$$
Proof. 
From the basic assumptions and Lemma 1, we know that $E[G(u,z,\omega)]$ and $E[Q_{c}(u,z,\omega)]$ are continuously differentiable on $\mathbb{R}^{n_{1}+n_{2}}$, so $\Phi(u,z)$ is smooth. Moreover, by Lemma 2(ii), the gradient and the expectation can be exchanged:
$$\nabla E[G(u,z,\omega)]=E[\nabla G(u,z,\omega)],$$
$$\nabla E[Q_{c}(u,z,\omega)]=E[\nabla Q_{c}(u,z,\omega)].$$
The Jacobian matrix $V$ then follows.    □
Lemma 4.
$\Phi^{N}$ is a smooth mapping, and the Jacobian matrix $V^{N}$ of $\Phi^{N}(u,z)$ is
$$V^{N}=\begin{pmatrix}\frac{1}{N}\sum_{i=1}^{N}\nabla_{u}G(u,z,\omega_{i}) & \frac{1}{N}\sum_{i=1}^{N}\nabla_{z}G(u,z,\omega_{i})\\[2pt] \frac{1}{N}\sum_{i=1}^{N}\nabla_{u}Q_{c}(u,z,\omega_{i}) & \frac{1}{N}\sum_{i=1}^{N}\nabla_{z}Q_{c}(u,z,\omega_{i})\end{pmatrix}.$$
Proof. 
It is similar to the proof of Lemma 3; hence, we omit the proof here.   □
From Formula (5), when $N$ is sufficiently large, $\Phi^{N}(u,z)$ converges to $\Phi(u,z)$ with probability one; therefore, $\Phi^{N}(u,z)$ is a good approximation of $\Phi(u,z)$. Based on the above analysis, we give a class of smoothing modulus-based iteration methods for solving stochastic mixed complementarity problems.

3. Convergence Theorem

In this section, we give the convergence analysis of Algorithm 1.
Algorithm 1: Smoothing Modulus-based Iterative Method
Input: parameters $x_{0}=(u_{0}^{T},z_{0}^{T})^{T}$, $c$, $\varepsilon>0$; set $k:=0$.
(1)    Compute $\Phi^{N}(u_{k},z_{k})$ and $V_{k}^{N}$.
(2)    Compute $\Delta x_{k}$ from the Newton equation
$$V_{k}^{N}\,\Delta x_{k}=-\Phi^{N}(u_{k},z_{k}).$$
(3)    Set $x_{k+1}=x_{k}+\Delta x_{k}$.
(4)    If $\|\Delta x_{k}\|\le\varepsilon$, stop. Otherwise, set $k:=k+1$ and return to (1).
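A minimal sketch of Algorithm 1 on a small linear instance, modeled on the traffic example of Section 4 (the data $\Gamma$, $M$, $k$, $D$ below are made up for illustration; in the linear case $Q_{c}^{N}$ reduces to the modulus form used in Example 2):

```python
import numpy as np

# Illustrative instance (not the paper's data): one OD pair, two routes,
# symmetric positive definite route-cost matrix M, random free cost and demand.
rng = np.random.default_rng(2)
N, c, eps = 1000, 30.0, 1e-9
omega = rng.random(N)                          # iid samples of omega ~ U[0,1]

Gamma = np.ones(2)                             # OD-pair/route incidence
M = np.array([[4.0, 1.0], [1.0, 3.0]])         # symmetric positive definite
kbar = np.array([np.mean(10 + 5 * omega),      # sample-averaged free cost
                 np.mean(8 + 2 * omega)])
Dbar = np.mean(20 - 10 * omega)                # sample-averaged demand

def Phi(x):
    u, z = x[0], x[1:]
    s = np.sqrt(z**2 + np.exp(-c))             # smoothed |z|
    e1 = Gamma @ (s + z) - Dbar                # flow conservation
    e2 = (M - np.eye(2)) @ s + (M + np.eye(2)) @ z + kbar - Gamma * u
    return np.concatenate(([e1], e2))

def Jac(x):
    z = x[1:]
    B = np.diag(z / np.sqrt(z**2 + np.exp(-c)))
    V = np.zeros((3, 3))
    V[0, 1:] = Gamma @ (B + np.eye(2))
    V[1:, 0] = -Gamma
    V[1:, 1:] = (M - np.eye(2)) @ B + (M + np.eye(2))
    return V

x = np.zeros(3)
for k in range(50):
    dx = np.linalg.solve(Jac(x), -Phi(x))      # step (2): V dx = -Phi
    x = x + dx                                 # step (3)
    if np.linalg.norm(dx) <= eps:              # step (4)
        break

u, z = x[0], x[1:]
F = np.sqrt(z**2 + np.exp(-c)) + z             # recovered flows v = |z| + z
assert np.linalg.norm(Phi(x)) < 1e-8 and (F >= -1e-10).all()
```

Here the Newton equation in step (2) is solved directly with `numpy.linalg.solve`; for large sparse instances an iterative linear solver would be used instead.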
Lemma 5.
[28] Let $\mathcal{S}$ be a nonempty compact subset of $\mathbb{R}^{n}$ and suppose that:
(i) for almost every $\xi$, the function $f(\cdot,\xi)$ is continuous on $\mathcal{S}$;
(ii) $f(x,\xi)$, $x\in\mathcal{S}$, is dominated by an integrable function;
(iii) the sample is iid (independent and identically distributed).
Then the expected value function $f(x)=E[f(x,\xi)]$ is finite-valued and continuous on $\mathcal{S}$, and $f^{N}(x)=\frac{1}{N}\sum_{i=1}^{N}f(x,\xi_{i})$ converges to $f(x)$ with probability one uniformly on $\mathcal{S}$.
Lemma 6.
[27] Let the random variables $\omega_{1},\omega_{2}$ take values in $(a,b)$, $-\infty<a<b<+\infty$, with $E[\omega_{1}^{2}]<+\infty$ and $E[\omega_{2}^{2}]<+\infty$; then we have
$$E[\omega_{1}\omega_{2}]^{2}\le E[\omega_{1}^{2}]\,E[\omega_{2}^{2}].$$
Theorem 1.
Assume that $x^{N}=((u^{N})^{T},(z^{N})^{T})^{T}\in\mathbb{R}^{n_{1}+n_{2}}$ is the solution of Problem (6) for each $N$, and that $x^{*}=((u^{*})^{T},(z^{*})^{T})^{T}\in\mathbb{R}^{n_{1}+n_{2}}$ is an accumulation point of the sequence $\{x^{N}\}$; then $x^{*}$ is a solution of Problem (4) with probability one.
Proof. 
Without loss of generality, we assume that the sequence $\{x^{N}\}$ converges to $x^{*}$ as $N\to+\infty$. Let $I\subset\mathbb{R}^{n_{1}+n_{2}}$ be a compact set that contains the whole sequence $\{x^{N}\}$. Let
$$\tilde{\Phi}(u,z,\omega)=\begin{pmatrix}G(u,z,\omega)\\[2pt] Q_{c}(u,z,\omega)\end{pmatrix},\qquad \Phi(u,z)=E[\tilde{\Phi}(u,z,\omega)],\qquad \Phi^{N}(u,z)=\begin{pmatrix}\frac{1}{N}\sum_{i=1}^{N}G(u,z,\omega_{i})\\[2pt] \frac{1}{N}\sum_{i=1}^{N}Q_{c}(u,z,\omega_{i})\end{pmatrix};$$
it follows from Assumption (A3) that
$$\|\tilde{\Phi}(u,z,\omega)\|^{2}\le\|G(u,z,\omega)\|^{2}+\|Q_{c}(u,z,\omega)\|^{2}\le 2\kappa(\omega).$$
This indicates that the function $\tilde{\Phi}(u,z,\omega)$ is dominated uniformly by the non-negative integrable function $2\kappa(\omega)$ on $I$. By Assumption (A2) and Lemma 6, for almost every $\omega$ the function $\tilde{\Phi}(\cdot,\cdot,\omega)$ is continuously differentiable on $I$; from Lemma 2, $\Phi(u,z)$ is continuous on $I$; and by Lemma 5, the function $\Phi^{N}(u,z)$ converges to $\Phi(u,z)$ uniformly on $I$ with probability one.
Note that each $x^{N}$ solves (6), that is,
$$\Phi^{N}(u^{N},z^{N})=0.$$
Taking the limit, we obtain $\Phi(u^{*},z^{*})=0$ with probability one. That is, $x^{*}$ is a solution of Problem (4) with probability one. This completes the proof. □
Lemma 7.
(Theorem 3.2, [29]). Suppose that $F:D\subset\mathbb{R}^{n}\to\mathbb{R}^{n}$ is continuously differentiable on an open neighborhood $S_{0}\subset D$ of $x^{*}$, $F'(x^{*})$ is nonsingular, and $x^{*}$ is the solution of the equation $F(x)=0$. Then the mapping $G(x)=x-[F'(x)]^{-1}F(x)$ is well-defined on a closed ball $S=\bar{S}(x^{*},\delta)\subset S_{0}$, and the sequence $\{x_{k}\}$ generated by the Newton iteration $x_{k+1}=x_{k}-[F'(x_{k})]^{-1}F(x_{k})$ converges superlinearly to $x^{*}$. If, in addition, for $x\in S$,
$$\|F'(x)-F'(x^{*})\|\le\alpha\|x-x^{*}\|$$
holds, the iteration sequence $\{x_{k}\}$ converges at least quadratically.
Theorem 2.
Let $x^{*}=((u^{*})^{T},(z^{*})^{T})^{T}\in\mathbb{R}^{n_{1}+n_{2}}$ be a solution of $\Phi^{N}(u,z)=0$, and let $\nabla_{(u,z)}\Phi^{N}(u^{*},z^{*})$ be nonsingular; then the sequence $\{x_{k}\}$ generated by Algorithm 1 converges to $x^{*}$.
Proof. 
From the basic assumptions, $\Phi^{N}(u,z)$ is continuously differentiable and $\nabla_{(u,z)}\Phi^{N}(u^{*},z^{*})$ is nonsingular. According to Lemma 7, the sequence $\{x_{k}\}$ generated by Algorithm 1 converges to $x^{*}$. This completes the proof. □

4. Numerical Results

In this section, we use two examples to examine the numerical effectiveness of the smoothing modulus-based iterative method in terms of the number of iteration steps (denoted by 'IT'), elapsed CPU time in seconds (denoted by 'CPU'), and the norm of the absolute residual vector (denoted by 'RES'). Here, 'RES' is defined as $\|\Delta x\|_{2}$. All experiments are carried out in MATLAB (version R2018b) on a personal computer with a 1.80 GHz central processing unit (Intel(R) Core(TM) i5-8250U CPU) and 8.00 GB of memory.
In our computations, we use the random number generator rand in MATLAB to generate the independent and identically distributed sequence $\{\omega_{1},\ldots,\omega_{N}\}\subset\Omega$ of the random variable $\omega$ on $[0,1]$, and in the semi-smooth Newton method [10] we set the parameters $\varepsilon=10^{-9}$, $c=30$, $\rho=10^{-9}$, $\kappa=2.1$, $\sigma=10^{-4}$, $\beta=0.5$. In the tables, Algorithm 2 denotes the semi-smooth Newton method presented in [10].
Example 1.
[3] Consider the stochastic Nash equilibrium problem (SNEP) in the natural gas market; using the Karush–Kuhn–Tucker (KKT) conditions, we transform it into an SMCP. Suppose that there are three suppliers with quantities $(q_{1},q_{2},q_{3})$; the inverse demand function is $p(q,\omega)=10\omega-q+50$, where $q$ is the total supply and $\omega$ is a random variable with uniform distribution on $[0,1]$. The cost functions are given by
$$C_{1}(q_{1})=25q_{1},\quad C_{2}(q_{2})=21q_{2},\quad C_{3}(q_{3})=28q_{3}.$$
The strategy sets are given by
$$Q_{1}=\{q_{1}\mid G(q_{1})=E[3\omega+q_{1}-12]\le 0\},\quad Q_{2}=\{q_{2}\mid G(q_{2})=E[\omega+q_{2}-15]\le 0\},\quad Q_{3}=\{q_{3}\mid G(q_{3})=E[4\omega+q_{3}-9]\le 0\}.$$
We choose the initial vector $x_{0}=(0,0,0,0,0,0)$ and the numbers of samples $N=50,10^{3},10^{4},10^{5}$. The numerical results of Algorithms 1 and 2 are listed in Table 1.
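As a sanity check on Table 1 (our own calculation, not the paper's code): with $E[10\omega]=5$, the expected inverse demand is $p=55-(q_{1}+q_{2}+q_{3})$, each supplier's first-order condition gives the best response $q_{i}=\max\{0,(55-c_{i}-\sum_{j\ne i}q_{j})/2\}$, and the constraints $Q_{i}$ are inactive at the solution. Iterating best responses reproduces the limiting values in Table 1:

```python
import numpy as np

# Gauss-Seidel best-response iteration for the three-supplier Cournot game
# with expected inverse demand p = 55 - (q1 + q2 + q3) and marginal costs c_i.
c = np.array([25.0, 21.0, 28.0])
q = np.zeros(3)
for _ in range(200):
    for i in range(3):
        others = q.sum() - q[i]
        q[i] = max(0.0, (55.0 - c[i] - others) / 2.0)

# Fixed point (7.25, 11.25, 4.25) matches the large-N column of Table 1.
assert np.allclose(q, [7.25, 11.25, 4.25], atol=1e-6)
```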
Next, suppose that there are eight suppliers $(q_{1},q_{2},\ldots,q_{8})$, and the inverse demand function is $p(q,\omega)=10\omega-q+120$, where $\omega$ is a random variable with uniform distribution on $[0,1]$. The cost functions are given by
$$C_{1}(q_{1})=32q_{1},\quad C_{2}(q_{2})=27q_{2},\quad C_{3}(q_{3})=24q_{3},\quad C_{4}(q_{4})=26q_{4},\quad C_{5}(q_{5})=33q_{5},\quad C_{6}(q_{6})=36q_{6},\quad C_{7}(q_{7})=35q_{7},\quad C_{8}(q_{8})=30q_{8}.$$
The strategy sets are given by
$$Q_{1}=\{q_{1}\mid G(q_{1})=E[3\omega+q_{1}-18]\le 0\},\quad Q_{2}=\{q_{2}\mid G(q_{2})=E[\omega+q_{2}-15]\le 0\},\quad Q_{3}=\{q_{3}\mid G(q_{3})=E[4\omega+q_{3}-20]\le 0\},\quad Q_{4}=\{q_{4}\mid G(q_{4})=E[2\omega+q_{4}-16]\le 0\},\quad Q_{5}=\{q_{5}\mid G(q_{5})=E[3\omega+q_{5}-12]\le 0\},\quad Q_{6}=\{q_{6}\mid G(q_{6})=E[\omega+q_{6}-10]\le 0\},\quad Q_{7}=\{q_{7}\mid G(q_{7})=E[5\omega+q_{7}-9]\le 0\},\quad Q_{8}=\{q_{8}\mid G(q_{8})=E[3\omega+q_{8}-14]\le 0\}.$$
We choose the zero vector as the initial vector and the numbers of samples $N=50,10^{3},10^{4},10^{5}$; the numerical results of Algorithm 1 are listed in Table 2.
Example 2.
[10] Consider the stochastic traffic equilibrium problem (STEP); utilizing the EV model, STEP is converted into
$$E[\Gamma^{T}F-D(\omega)]=0,\quad F\ge 0,\quad E[C(F,\omega)-\Gamma u]\ge 0,\quad F^{T}E[C(F,\omega)-\Gamma u]=0,\tag{9}$$
where ω is a random variable.
In Problem (9), $u$ and $F$ denote the shortest travel cost vector and the route flow vector, respectively, $\Gamma=[1\ 1\ 1\ 1\ 1\ 1]^{T}$ is the origin–destination (OD) pair–route incidence matrix, and $K$ is the link–route incidence matrix. $C(F,\omega)$ is the travel cost function for the routes:
$$C(F,\omega)=K^{T}\big(H(\omega)\cdot K\cdot F+k(\omega)\big),$$
where $k(\omega)$ is the free travel cost vector,
$$k(\omega)=[50,\,30,\,40,\,40+60\omega,\,30,\,50,\,20,\,60,\,40+40\omega,\,70]^{T}.$$
$H(\omega)$ is expressed as
$$H(\omega)=\begin{bmatrix}
22&0&2&2&4&1&2&0&4&5\\
0&15&0&0&1&2&0&3&5&3\\
2&0&14&0&2&0&1&3&2&3\\
2&0&0&16+50\omega&0&2&3&1&2&4\\
4&1&2&0&12&0&2&2&0&0\\
1&2&0&2&0&10&0&0&1&2\\
2&0&1&3&2&0&11&0&0&0\\
0&3&3&1&2&0&0&14&0&1\\
4&5&2&2&0&1&0&0&16+50\omega&0\\
5&3&3&4&0&2&0&1&0&20
\end{bmatrix}.$$
Hence, we have
$$\Phi^{N}(u,z)=\begin{cases}\Gamma^{T}\big((z^{2}+e^{-c})^{1/2}+z\big)-D^{N}=0,\\[2pt](M^{N}-I)(z^{2}+e^{-c})^{1/2}+(M^{N}+I)z+K^{T}\cdot k(\omega)-\Gamma u=0.\end{cases}$$
Here, $M^{N}=K^{T}\cdot H(\omega)\cdot K$ and $D^{N}=\frac{1}{N}\sum_{i=1}^{N}D(\omega_{i})$. In addition, the Jacobian matrix is
$$V^{N}=\begin{pmatrix}0 & \Gamma^{T}\cdot B+\Gamma^{T}\\[2pt] -\Gamma & (M^{N}-I)\cdot B+(M^{N}+I)\end{pmatrix},$$
where $B=\mathrm{diag}\!\left(\dfrac{z_{1}}{\sqrt{z_{1}^{2}+e^{-c}}},\dfrac{z_{2}}{\sqrt{z_{2}^{2}+e^{-c}}},\ldots,\dfrac{z_{6}}{\sqrt{z_{6}^{2}+e^{-c}}}\right)$. We can easily verify that $V^{N}=\nabla_{(u,z)}\Phi^{N}(u,z)$ is nonsingular.
In Example 2, we choose the zero vector as the initial vector and the numbers of samples $N=50,100,200,500$; we then compute the numerical results in the two cases $D(\omega)=200-200\omega$ and $D(\omega)=200$.
According to the numerical results in Table 3, Table 4, Table 5 and Table 6, our algorithm outperforms the semi-smooth Newton method based on the FB function in terms of IT, CPU and RES.

5. Conclusions

In this paper, we propose a class of smoothing modulus-based iterative methods for solving stochastic mixed complementarity problems and analyze the convergence of the algorithm. We document the performance of the method on two benchmark examples, and the numerical results confirm our theoretical convergence claims.

Author Contributions

Conceptualization, methodology, validation, formal analysis, investigation, resources, and writing—original draft preparation, C.G. and Y.L.; writing—review and editing, supervision, project administration, funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Guangxi Natural Science Foundation (2020GXNSFAA159143) and the Natural Science Foundation of China (12161027).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Agdeppa, R.P.; Yamashita, N.; Fukushima, M. Convex expected residual models for stochastic affine variational inequality problems. Pac. J. Optim. 2010, 6, 3–19.
  2. Zhang, C.; Chen, X. Stochastic nonlinear complementarity problem and applications to traffic equilibrium under uncertainty. J. Optim. Theory Appl. 2008, 137, 277–295.
  3. Li, P.Y.; He, Z.F.; Lin, G.H. Sampling average approximation method for a class of stochastic Nash equilibrium problems. Optim. Methods Softw. 2013, 28, 785–795.
  4. Li, P.Y. Sample average approximation method for a class of stochastic generalized Nash equilibrium problems. J. Comput. Appl. Math. 2014, 261, 387–393.
  5. Xu, F.H.; Zhang, D.L. Stochastic Nash equilibrium problems: Sample average approximation and applications. Comput. Optim. Appl. 2013, 55, 597–654.
  6. Yang, Z.P.; Zhang, J.; Zhu, X.D.; Lin, G.H. Infeasible interior-point algorithms based on sampling average approximations for a class of stochastic complementarity problems and their applications. J. Comput. Appl. Math. 2019, 352, 382–400.
  7. Yousefian, F.; Nedić, A.; Shanbhag, U.V. On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems. Math. Program. 2017, 165, 391–431.
  8. Egging, R. Benders decomposition for multi-stage stochastic mixed complementarity problems applied to a global natural gas market model. Eur. J. Oper. Res. 2013, 226, 341–353.
  9. Devine, M.T.; Gabriel, S.A.; Moryadee, S. A rolling horizon approach for stochastic mixed complementarity problems with endogenous learning: Application to natural gas markets. Comput. Oper. Res. 2016, 68, 1–15.
  10. He, Z.F. Sampling Average Approximation Method for Solving Stochastic Mixed Complementarity Problem. Master's Thesis, Dalian University of Technology, Dalian, China, 2010.
  11. Chen, X.; Fukushima, M. Expected residual minimization method for stochastic linear complementarity problems. Math. Oper. Res. 2005, 30, 1022–1038.
  12. Lin, G.H.; Luo, M.J.; Zhang, D.L.; Zhang, J. Stochastic second-order-cone complementarity problems: Expected residual minimization formulation and its applications. Math. Program. 2017, 165, 197–233.
  13. Lin, G.H.; Fukushima, M. New reformulations for stochastic nonlinear complementarity problems. Optim. Methods Softw. 2006, 21, 551–564.
  14. Lin, G.H. Combined Monte Carlo sampling and penalty method for stochastic nonlinear complementarity problems. Math. Comput. 2009, 78, 1671–1686.
  15. Lin, G.H.; Fukushima, M. Stochastic equilibrium problems and stochastic mathematical programs with equilibrium constraints: A survey. Pac. J. Optim. 2010, 6, 455–482.
  16. Dong, J.L.; Jiang, M.Q. A modified modulus method for symmetric positive-definite linear complementarity problems. Numer. Linear Algebra Appl. 2009, 16, 129–143.
  17. Bai, Z.Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933.
  18. Schäfer, U. On the modulus algorithm for the linear complementarity problem. Oper. Res. Lett. 2004, 32, 350–354.
  19. Hadjidimos, A.; Tzoumas, M. Nonstationary extrapolated modulus algorithms for the solution of the linear complementarity problem. Linear Algebra Appl. 2009, 431, 197–210.
  20. Zhang, L.L. On modulus-based matrix splitting iteration methods for linear complementarity problems. Math. Numer. Sin. 2012, 34, 373–386.
  21. Foutayeni, Y.E.; Bouanani, H.E.; Khaladi, M. An (m+1)-step iterative method of convergence order (m+2) for linear complementarity problems. J. Appl. Math. Comput. 2017, 54, 229–242.
  22. Liu, J.; Nadeem, M.; Habib, M.; Akgül, A. Approximate Solution of Nonlinear Time-Fractional Klein–Gordon Equations Using Yang Transform. Symmetry 2022, 14, 907.
  23. Fang, J.; Nadeem, M.; Habib, M.; Akgül, A. Numerical Investigation of Nonlinear Shock Wave Equations with Fractional Order in Propagating Disturbance. Symmetry 2022, 14, 1179.
  24. Guran, L.; Akgül, E.K.; Akgül, A.; Bota, M.-F. Remarks on Fractal-Fractional Malkus Waterwheel Model with Computational Analysis. Symmetry 2022, 14, 2220.
  25. Gürkan, G.; Özge, A.Y.; Robinson, S.M. Sample-path solution of stochastic variational inequalities. Math. Program. 1999, 84, 313–333.
  26. Cuzzocrea, A.; Fadda, E.; Baldo, A. Lyapunov Central Limit Theorem: Theoretical Properties and Applications in Big-Data-Populated Smart City Settings. In Proceedings of the 2021 5th International Conference on Cloud and Big Data Computing (ICCBDC '21), Liverpool, UK, 13–15 August 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 34–38.
  27. Billingsley, P. Probability and Measure; Wiley-Interscience: New York, NY, USA, 1995.
  28. Ruszczyński, A.; Shapiro, A. Stochastic Programming. In Handbooks in Operations Research and Management Science; Elsevier: Amsterdam, The Netherlands, 2003.
  29. Huang, X.D.; Zeng, Z.G.; Ma, Y.N. The Theory and Methods for Nonlinear Numerical Analysis; Wuhan University Press: Wuhan, China, 2004.
Table 1. Numerical results by Algorithms 1 and 2.
N | Algorithm | IT | CPU | RES | (q1, q2, q3)
50 | 1 | 3 | 0.0012 | 5.3709 × 10^−15 | (7.3953, 11.3953, 4.3953)
50 | 2 | 40 | 0.0098 | 7.8978 × 10^−11 | (7.3957, 11.3957, 4.3957)
1000 | 1 | 3 | 0.0021 | 4.7568 × 10^−15 | (7.2806, 11.2806, 4.2806)
1000 | 2 | 40 | 0.0072 | 7.6246 × 10^−11 | (7.2808, 11.2808, 4.2808)
10,000 | 1 | 3 | 0.0018 | 4.6871 × 10^−15 | (7.2544, 11.2544, 4.2544)
10,000 | 2 | 40 | 0.0080 | 7.5603 × 10^−11 | (7.2547, 11.2547, 4.2547)
100,000 | 1 | 3 | 0.0024 | 5.1394 × 10^−15 | (7.2497, 11.2497, 4.2497)
100,000 | 2 | 40 | 0.0084 | 7.5491 × 10^−11 | (7.2498, 11.2498, 4.2498)
Table 2. Numerical results by Algorithm 1.
N | IT | RES | (q1, q2, q3, q4, q5, q6, q7, q8)
50 | 3 | 7.7989 × 10^−13 | (8.8890, 13.8890, 16.8890, 14.8890, 7.8890, 4.8890, 5.8890, 10.8890)
1000 | 3 | 6.8438 × 10^−13 | (8.8886, 13.8886, 16.8886, 14.8886, 7.8886, 4.8886, 5.8886, 10.8886)
10,000 | 3 | 6.8038 × 10^−13 | (8.8881, 13.8881, 16.8881, 14.8881, 7.8881, 4.8881, 5.8881, 10.8881)
100,000 | 3 | 6.7784 × 10^−13 | (8.8875, 13.8875, 16.8875, 14.8875, 7.8875, 4.8875, 5.8875, 10.8875)
Table 3. Numerical results by Algorithm 1 and Algorithm 2 (D(ω) = 200 − 200ω).
N | Algorithm | IT | CPU | RES | (F^N, u^N)
50 | 1 | 4 | 0.0030 | 1.3877 × 10^−11 | (13.5099, 11.9378, 2.1923, 5.5175, 0.0000, 70.0828, 4.5086 × 10^3)
50 | 2 | 51 | 0.0181 | 4.7262 × 10^−11 | (13.6261, 11.9834, 2.1906, 5.5743, 0.0000, 70.3681, 4.5281 × 10^3)
100 | 1 | 4 | 0.0041 | 1.4361 × 10^−11 | (13.5270, 11.9359, 2.1864, 5.5251, 0.0000, 70.0764, 4.5085 × 10^3)
100 | 2 | 50 | 0.0086 | 3.7474 × 10^−11 | (13.4430, 11.9049, 2.1892, 5.4842, 0.0000, 69.8821, 4.4951 × 10^3)
200 | 1 | 4 | 0.0024 | 1.3955 × 10^−11 | (12.4513, 11.4676, 2.1734, 4.9954, 0.0000, 67.1802, 4.3118 × 10^3)
200 | 2 | 50 | 0.0106 | 4.7450 × 10^−11 | (12.4352, 11.4581, 2.1715, 4.9872, 0.0000, 67.1225, 4.3079 × 10^3)
500 | 1 | 4 | 0.00289 | 1.4174 × 10^−11 | (12.3707, 11.4307, 2.1713, 4.9555, 0.0000, 66.9530, 4.2964 × 10^3)
500 | 2 | 50 | 0.0144 | 5.3082 × 10^−11 | (12.3306, 11.4116, 2.1697, 4.9356, 0.0000, 66.8356, 4.2885 × 10^3)
Table 4. Numerical results by Algorithm 1 and Algorithm 2 ( D ( ω ) = 200 ).
N | Algorithm | IT | CPU | RES | (F^N, u^N)
50 | 1 | 5 | 0.0032 | 3.032 × 10^−11 | (25.8763, 22.0867, 6.3194, 11.7501, 0.0000, 1.339 × 10^2, 8.612 × 10^3)
50 | 2 | 50 | 0.0375 | 6.812 × 10^−11 | (25.6015, 22.1066, 6.4554, 11.6114, 0.0000, 1.342 × 10^2, 8.622 × 10^3)
100 | 1 | 5 | 0.0015 | 2.536 × 10^−11 | (25.5377, 22.1211, 6.4793, 11.5848, 0.0000, 1.342 × 10^2, 8.624 × 10^3)
100 | 2 | 50 | 0.0108 | 6.955 × 10^−11 | (25.5332, 22.1223, 6.4808, 11.5830, 0.0000, 1.342 × 10^2, 8.629 × 10^3)
200 | 1 | 5 | 0.0023 | 1.395 × 10^−11 | (26.0126, 22.0732, 6.2548, 11.8168, 0.0000, 1.338 × 10^2, 8.615 × 10^3)
200 | 2 | 50 | 0.0237 | 7.986 × 10^−11 | (26.0081, 22.0744, 6.2563, 11.8150, 0.0000, 1.338 × 10^2, 8.623 × 10^3)
500 | 1 | 5 | 0.0023 | 3.053 × 10^−11 | (26.1042, 22.0639, 6.2116, 11.8614, 0.0000, 1.3378 × 10^2, 8.616 × 10^3)
500 | 2 | 50 | 0.0245 | 8.300 × 10^−11 | (26.1209, 22.0624, 6.2036, 11.8697, 0.0000, 1.337 × 10^2, 8.624 × 10^3)
Table 5. Numerical results by Algorithm 1 (D(ω) = 200 − 200ω).
N | Flow of each link (a, b, c, d, e, f, g, h, i, j)
50 | (27.6145, 75.5348, 14.1171, 13.4974, 5.5109, 17.4387, 70.0238, 2.1894, 30.9361, 72.2131)
100 | (27.6493, 75.6015, 14.1223, 13.5270, 5.5251, 17.4610, 70.0764, 2.1864, 30.9880, 72.2628)
200 | (26.0923, 72.1756, 13.6410, 12.4513, 4.9954, 16.4630, 67.1802, 2.1734, 28.9143, 69.3536)
500 | (26.4158, 72.9016, 13.7488, 12.6670, 5.1021, 16.6704, 67.7995, 2.1805, 29.3374, 69.9812)
Table 6. Numerical results by Algorithm 1 ( D ( ω ) = 200 ).
N | Flow of each link (a, b, c, d, e, f, g, h, i, j)
50 | (54.1288, 145.8741, 28.6129, 25.5159, 11.5741, 33.6971, 134.3231, 6.4898, 59.2137, 140.7898)
100 | (54.1381, 145.8648, 28.6004, 25.5377, 11.5848, 33.7059, 134.2854, 6.4793, 59.2436, 140.7593)
200 | (54.1349, 145.8612, 28.6053, 25.5296, 11.5812, 33.7038, 134.2894, 6.4827, 59.2334, 140.7627)
500 | (54.1442, 145.8519, 28.5936, 25.5514, 11.5919, 33.7128, 134.2647, 6.4721, 59.2638, 140.7321)

Share and Cite

MDPI and ACS Style

Guo, C.; Liu, Y.; Li, C. A Class of Smoothing Modulus-Based Iterative Methods for Solving the Stochastic Mixed Complementarity Problems. Symmetry 2023, 15, 229. https://doi.org/10.3390/sym15010229


