A Smoothing Method for Sparse Programs by Symmetric Cone Constrained Generalized Equations

1 School of Economics and Management, Hebei University of Technology, Tianjin 300401, China
2 School of Management, Henan University of Technology, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(17), 3719; https://doi.org/10.3390/math11173719
Submission received: 8 August 2023 / Revised: 26 August 2023 / Accepted: 28 August 2023 / Published: 29 August 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract: In this paper, we consider a sparse program with symmetric cone constrained parameterized generalized equations (SPSCC). Such a problem is a symmetric cone analogue of vector optimization, and we aim to provide a smoothing framework for dealing with SPSCC that includes classical complementarity problems with the nonnegative cone, the semidefinite cone, and the second-order cone. An effective approximation is given, and we focus on solving the perturbation problem. The necessary optimality conditions, which are reformulated as a system of nonsmooth equations, and the second-order sufficient conditions are proposed. Under mild conditions, a smoothing Newton approach is used to solve these nonsmooth equations. Under the second-order sufficient conditions, strong BD-regularity holds at a solution point. An inverse linear program is provided and discussed as an illustrative example, which verifies the efficiency of the proposed algorithm.

1. Introduction

There has been recent active research on sparse programs, driven by the need to find practical solutions for optimization problems. In this paper, we focus on sparse programs governed by symmetric cone constrained parameterized generalized equations, as depicted in the following equation:
$$(\mathrm{P})\qquad \min\ f(x,y)+\gamma\|x\|_1\qquad \mathrm{s.t.}\quad 0\in\phi(x,y)+N_{\Gamma(x,y)}(y),$$
where $x\in\mathbb{R}^n$ is the variable, $y\in\mathbb{R}^m$ is the parameter, and $f:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}$, $\phi:\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^m$ are proper, level-bounded, and twice continuously differentiable mappings. We know from [1] that the norm $\|x\|_1=\sum_{i=1}^n|x_i|$ guarantees sparsity. $N_C(z)$ denotes the normal cone of the set $C\subseteq\mathbb{R}^m$ at $z\in C$, and $\Gamma:\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^m$ is a set-valued mapping defined by
$$\Gamma(x,y):=\{z\in\mathbb{R}^m\mid g(x,y,z)\in K\},$$
where $g:\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^m\to\mathbb{R}^n$ is twice continuously differentiable and $K$ is a symmetric cone, defined by
$$K:=\{u^2\mid u\in\mathbb{J}\},$$
where $\mathbb{J}$ is an $n$-dimensional real Euclidean space and $\mathbb{A}=(\mathbb{J},\circ,\langle\cdot,\cdot\rangle)$ is a Euclidean Jordan algebra, described in detail in [2,3]. Let $K^*$ be the dual cone of $K$, defined by $K^*=\{x\mid\langle x,y\rangle\ge 0,\ \forall y\in K\}$; $K$ is a self-dual closed convex cone, that is, $K=K^*$ [4].
For convenience, we define an auxiliary function $F:\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^n\to\mathbb{R}^m$ by
$$F(x,y,\xi):=\phi(x,y)+J_3g(x,y,y)^T\xi,$$
where $J_3g(x,y,y)$ is the partial Jacobian of $g$ at $(x,y,y)$ with respect to the third variable.
If $(\hat x,\hat y)$ is a feasible solution to problem (1) and $J_3g(\hat x,\hat y,\hat y)$ has full row rank, then $g(x,y,y)\in K$ and $J_3g(x,y,y)$ has full row rank for any feasible solution $(x,y)$ sufficiently close to $(\hat x,\hat y)$. It then follows from Theorem 1.17 in [5] that
$$N_{\Gamma(x,y)}(y)=J_3g(x,y,y)^TN_K(g(x,y,y)).$$
Let $G(x,y)=g(x,y,y)$; thus,
$$0\in\phi(x,y)+N_{\Gamma(x,y)}(y)\ \Longleftrightarrow\ \left\{\begin{array}{l}0=\phi(x,y)+J_3g(x,y,y)^T\xi,\\ \xi\in N_K(G(x,y)),\end{array}\right.\ \Longleftrightarrow\ \left\{\begin{array}{l}F(x,y,\xi)=0,\\ G(x,y)\in K,\ \xi\in K,\\ \langle\xi,G(x,y)\rangle=0.\end{array}\right.$$
Problem (1) is equivalent to the following symmetric cone complementarity problem:
$$\min_{x,y,\xi}\ f(x,y)+\gamma\|x\|_1\qquad\mathrm{s.t.}\quad F(x,y,\xi)=0,\quad G(x,y)\in K,\quad \xi\in K,\quad \langle\xi,G(x,y)\rangle=0.$$
Remark 1. 
The mathematical program with symmetric cone complementarity constraints
$$\min_{x,y,\xi}\ f(x,y)+\gamma\|x\|_1\qquad\mathrm{s.t.}\quad F(x,y,\xi)=0,\quad G(x,y)\in K,\quad H(x,y)\in K,\quad \langle H(x,y),G(x,y)\rangle=0$$
can be transformed into the above problem; simply let $H(x,y)=\xi$ and $F(x,y,\xi)=(\phi(x,y),\,H(x,y)-\xi)^T$.
From the above reformulation, it becomes evident that mathematical programs governed by generalized equations can be considered a significant subset of mathematical programs with equilibrium constraints (MPECs) [6]. These possess applications that extend to engineering design and economic modeling [7,8]. The main sources of MPECs arise from bilevel programming problems and inverse problems, both of which find numerous applications [9,10]. Given the unique nature of their constraints, these problems are notoriously challenging to handle. The research dedicated to MPECs over the past few decades has been substantial, spanning both optimality theories and numerical methods.
Outrata [2] derives optimality conditions and provides comprehensive results on MPECs through the use of variational analysis. Recent papers have explored specific cases of MPECs, including the NLCP, SOCCP, and SDCP, as seen in works such as [11,12,13,14,15,16,17,18,19]. However, these papers mainly focus on individual cases of MPECs, with limited discussion of the general framework of symmetric cone complementarity programming (SCCP) [20,21,22]. In this paper, our focus is directed towards mathematical programs featuring symmetric cone complementarity constraints in a general form.
The Newton method proves to be an effective approach for solving optimization problems and boasts a wide range of applications [23,24,25,26,27,28]. In [20], a regularized smoothing method was tested for the SCCP without any objective. By employing the Chen-Mangasarian functions, a smoothing method is presented, yielding a C-stationary point [22]. Cruz et al. explored a semi-smooth Newton method for linear complementarity problems involving second-order cones [23]. This semi-smooth approach globally and Q-linearly converges to the solution. Based on the concept of 2-regularity, Yu developed the smooth Newton method for solving Nonlinear Complementarity Problems [24]. In [25], nonlinear complementarity optimization was utilized to address phase transitions in porous media, proposing two smooth approaches to solve compositional two-phase flow problems. Utilizing the augmented Lagrangian method, Sun introduced a semismooth Newton approach for solving the total generalization variation problem [27]. Guo et al. devised the Newton-Cotes open method alongside the generalized Newton technique to tackle absolute value equations, with numerical experiments demonstrating the simplicity and effectiveness of the method [28].
In this paper, we focus on the mathematical program with symmetric cone complementarity constraints in a general form. In addition, we seek sparse solutions at the same time, which have not been discussed in previous papers. We then derive optimality conditions for the SPSCC and develop a smoothing Newton method for the resulting nonsmooth system of equations to obtain a numerical solution. Numerical experiments further show that the method is effective.
The organization of this paper is as follows. Section 2 is devoted to the perturbation problem, which is a good approximation of the primal problem. In Section 3, the optimality conditions, including the first-order necessary optimality conditions and the second-order sufficient conditions, are given. The smoothing method is constructed in Section 4 to solve the perturbation problem. Finally, an illustrative example of the inverse linear program is provided and discussed in Section 5.

2. Perturbation Approach

For Problem $(\mathrm{P})$, notice that $\|x\|_1=\sum_{i=1}^n\sqrt{x_i^2}$. However, $\|x\|_1$ is non-differentiable at $x_i=0$. Therefore, we approximate $\|x\|_1$ by $\sum_{i=1}^n\sqrt{x_i^2+\epsilon^2}$ for a small $\epsilon\in(0,\delta)$, where $\delta>0$; obviously,
$$\lim_{\epsilon\downarrow 0}\sum_{i=1}^n\sqrt{x_i^2+\epsilon^2}=\|x\|_1.$$
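As a quick numerical sanity check (a minimal sketch; the function name and test vector are our own, not the paper's), the gap between the smoothed sum and $\|x\|_1$ shrinks with $\epsilon$:

```python
import numpy as np

def smooth_l1(x, eps):
    """Smooth approximation of the l1 norm: sum_i sqrt(x_i^2 + eps^2)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.sqrt(x ** 2 + eps ** 2)))

x = np.array([1.5, 0.0, -2.0])
exact = float(np.sum(np.abs(x)))       # ||x||_1 = 3.5
for eps in (1e-1, 1e-3, 1e-6):
    gap = smooth_l1(x, eps) - exact    # always >= 0 and at most n*eps
    print(f"eps={eps:g}  gap={gap:.2e}")
```

Each term satisfies $|x_i|\le\sqrt{x_i^2+\epsilon^2}\le|x_i|+\epsilon$, so the total gap is at most $n\epsilon$.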
For a given ϵ ( 0 , δ ) , we approximate Problem ( P ) by
$$(\mathrm{P}_\epsilon)\qquad \min\ \hat f(x,y,\epsilon)=f(x,y)+\sum_{i=1}^n\gamma\sqrt{x_i^2+\epsilon^2}\qquad\mathrm{s.t.}\quad F(x,y,\xi)=0,\quad G(x,y)\in K,\quad \xi\in K,\quad \langle\xi,G(x,y)\rangle=0.$$
Next, we will show that this perturbation approach is effective, meaning the solution of problem (P) can be obtained by solving a sequence of perturbation problems $(\mathrm{P}_\epsilon)$. We denote by $\Omega_0$ the feasible set of problem $(\mathrm{P}_\epsilon)$, and we define $\bar f(x,y,\epsilon)$ by
$$\bar f(x,y,\epsilon)=\begin{cases}\hat f(x,y,\epsilon),&(x,y)\in\Omega_0,\\ +\infty,&\text{otherwise}.\end{cases}$$
Let us introduce some notation:
$$\begin{aligned}
\kappa(0)&=\inf\{f(x,y)+\gamma\|x\|_1\mid(x,y)\in\Omega_0\},\\
\kappa(\epsilon)&=\inf\{f(x,y)+\textstyle\sum_{i=1}^n\gamma\sqrt{x_i^2+\epsilon^2}\mid(x,y)\in\Omega_0\},\\
S(0)&=\operatorname{Argmin}\{f(x,y)+\gamma\|x\|_1\mid(x,y)\in\Omega_0\},\\
S(\epsilon)&=\operatorname{Argmin}\{f(x,y)+\textstyle\sum_{i=1}^n\gamma\sqrt{x_i^2+\epsilon^2}\mid(x,y)\in\Omega_0\}.
\end{aligned}$$
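The behaviour of these quantities can be illustrated on a toy one-dimensional instance (our own choice of $f(x)=(x-1)^2$ and $\gamma=2$, with a grid search standing in for the exact minimization): $\kappa(\epsilon)$ approaches $\kappa(0)$ from above as $\epsilon\downarrow 0$.

```python
import numpy as np

# Toy check that the perturbed optimal value kappa(eps) approaches kappa(0):
# minimize f(x) + gamma*|x| with f(x) = (x-1)^2 over a fine grid, vs its smoothed variant.
gamma = 2.0
x = np.linspace(-2.0, 2.0, 40001)          # grid includes x = 0, the true minimizer
f = (x - 1.0) ** 2
kappa0 = np.min(f + gamma * np.abs(x))     # kappa(0) = 1, attained at x = 0
for eps in (1e-1, 1e-2, 1e-4):
    kappa = np.min(f + gamma * np.sqrt(x ** 2 + eps ** 2))
    print(f"eps={eps:g}  kappa(eps)-kappa(0)={kappa - kappa0:.2e}")  # shrinks with eps
```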
Lemma 1. 
f ¯ ( x , y , ϵ ) is locally uniformly level-bounded in ( x , y ) with respect to ϵ.
Proof. 
Let $\epsilon$ belong to a bounded set $\Sigma\subseteq\mathbb{R}_+$. For any given $0\le\alpha_1\le\alpha$, since $f(x,y)$ is level-bounded and the set $\{x\mid\sum_{i=1}^n\gamma\sqrt{x_i^2+\epsilon^2}\le\alpha_1\}$ is bounded, it follows that
$$\{(x,y,\epsilon)\mid\bar f(x,y,\epsilon)\le\alpha\}\subseteq\{(x,y)\mid\bar f(x,y,\epsilon)\le\alpha\}\times\Sigma\subseteq\Big(\Omega_0\cap\big(\{(x,y)\mid\textstyle\sum_{i=1}^n\gamma\sqrt{x_i^2+\epsilon^2}\le\alpha_1\}\cup\{(x,y)\mid f(x,y)\le\alpha-\alpha_1\}\big)\Big)\times\Sigma\subseteq B,$$
where $B$ is a bounded set in $\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}$. We complete the proof. □
To measure the distance between solution sets, we use the following measurement [29]:
$$\mathbb{D}(A,B)=\sup_{x\in A}\operatorname{dist}(x,B),$$
where $A,B\subseteq\mathbb{R}^d$ are sets and $\operatorname{dist}(x,B)=\inf_{x'\in B}\|x-x'\|_2$.
Theorem 1. 
Suppose that Ω is a compact set. Then, lim ϵ 0 D ( S ( ϵ ) , S ( 0 ) ) = 0 and lim ϵ 0 κ ( ϵ ) = κ ( 0 ) .
Proof. 
Obviously, $\bar f(\cdot,\cdot,\epsilon)$ converges continuously to $\bar f(\cdot,\cdot,0)$ as $\epsilon\to 0$; moreover, the functions $\bar f(\cdot,\cdot,\epsilon)$ are lower semi-continuous and proper. From the above Lemma, $\bar f(x,y,\epsilon)$ is level-bounded in $(x,y)$ locally uniformly in $\epsilon$. By Theorem 7.33 in [30], it is easy to get $\kappa(\epsilon)\to\kappa(0)$ and
$$\limsup_{\epsilon\to 0}S(\epsilon)\subseteq S(0).$$
The solution sets $S(\epsilon)$ and $S(0)$ are both uniformly compact, since they are included in the compact set $\Omega$. Moreover, using the results of Example 4.13 in [30], Equation (8) implies $\lim_{\epsilon\to 0}\mathbb{D}(S(\epsilon),S(0))=0$. Therefore, we conclude the proof. □
The above Theorem 1 shows that this perturbation approach is effective when ϵ is close enough to 0. Solving problem ( P ϵ ) will yield a good approximate solution to the original problem.

3. Optimality Conditions for the Perturbation Problem

In this section, we consider the perturbation problem for a given $\epsilon>0$. Let us define the natural residual function (NR-function) $\Phi_{NR}:\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^n$ by
$$\Phi_{NR}(G(x,y),\xi):=\xi-\Pi_K(\xi-G(x,y)).$$
From Proposition 6 in [3], we have
$$G(x,y)-\Pi_K(G(x,y)-\xi)=0\ \Longleftrightarrow\ G(x,y)\in K,\ \xi\in K,\ \langle\xi,G(x,y)\rangle=0.$$
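For the nonnegative cone $K=\mathbb{R}^n_+$, where $\Pi_K$ is the componentwise positive part, the link between the NR-function and complementarity is easy to check numerically (a minimal sketch with toy vectors of our own choosing):

```python
import numpy as np

# For K = R^n_+, Pi_K(z) = max(z, 0) componentwise, and the natural-residual map
# xi - Pi_K(xi - G) vanishes exactly on complementary pairs (illustrative check).
def nr_residual(G, xi):
    return xi - np.maximum(xi - G, 0.0)

G  = np.array([0.0, 2.0, 1.0])   # G in K
xi = np.array([3.0, 0.0, 0.0])   # xi in K with <xi, G> = 0
print(nr_residual(G, xi))        # -> [0. 0. 0.]

# A non-complementary pair gives a nonzero residual:
print(nr_residual(np.array([1.0]), np.array([1.0])))  # -> [1.]
```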
Remark 2. 
In fact, since $K$ is a closed convex cone, we know $x=\Pi_K(x)-\Pi_{K^*}(-x)$. So
$$G(x,y)-\xi=\Pi_K(G(x,y)-\xi)-\Pi_{K^*}\big(-(G(x,y)-\xi)\big).$$
If $G(x,y)-\Pi_K(G(x,y)-\xi)=0$, then
$$\xi=\Pi_{K^*}\big(-(G(x,y)-\xi)\big)=\Pi_K(\xi-G(x,y))\in K.$$
In addition, $\langle\xi,G(x,y)\rangle=\langle\Pi_{K^*}(\xi-G(x,y)),\,\Pi_K(G(x,y)-\xi)\rangle=0$ by the Moreau decomposition. The above relationship holds.
Note that
$$\xi=\Pi_K(\xi-G(x,y))\ \Longleftrightarrow\ (\xi-G(x,y),\,\xi)\in\operatorname{gph}\Pi_K.$$
Let $\Omega=\{0_m\}\times\operatorname{gph}\Pi_K$ and define the function $\Psi:\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^n\to\mathbb{R}^m\times\mathbb{R}^n\times\mathbb{R}^n$,
$$\Psi(x,y,\xi):=\begin{pmatrix}F(x,y,\xi)\\ \xi-G(x,y)\\ \xi\end{pmatrix}.$$
Then problem $(\mathrm{P}_\epsilon)$ is equivalent to the following problem:
$$\min\ \hat f(x,y,\epsilon)\qquad\mathrm{s.t.}\quad\Psi(x,y,\xi)\in\Omega.$$
Before giving the optimality conditions, we state the following assumptions.
Assumption 1. 
The component-wise strict complementarity condition holds, i.e.,
$$\xi^*+G(x^*,y^*)\in\operatorname{int}K.$$
Assumption 2. 
The basic constraint qualification holds.
Assumption 3. 
The following matrix has full row rank:
$$\begin{pmatrix}J_{x,y}F(x^*,y^*,\xi^*)\\ JG(x^*,y^*)\end{pmatrix}.$$
It is easy to see that if Assumptions 1 and 2 hold, then Assumption 3 holds.
In the following, we give the optimality conditions for the perturbation problem P ϵ by the formulation (11).

3.1. The First-Order Necessary Optimality Conditions

Theorem 2. 
For problem $(\mathrm{P}_\epsilon)$, let $(x^*,y^*)$ be a local solution. Assume that $J_3g(x^*,y^*,y^*)$ has full row rank at $(x^*,y^*)$ and that the equation $F(x^*,y^*,\xi)=0$ has a unique solution $\xi^*\in\mathbb{R}^n$. If the basic constraint qualification holds at $(x^*,y^*,\xi^*)$, then there exist $u^*\in\mathbb{R}^m$, $v^*\in\mathbb{R}^n$, $w^*\in\mathbb{R}^n$ such that
L ( x * , y * , ξ * , u * , v * , w * ) = 0 ,
where L is defined by
$$L(x,y,\xi,u,v,w):=\begin{pmatrix}F(x,y,\xi)\\ -\xi+\Pi_K(\xi-G(x,y))\\ \nabla\hat f(x,y)+J_{x,y}F(x,y,\xi)^Tu-JG(x,y)^Tv\\ J_3g(x,y,y)u-v-w\end{pmatrix}$$
with $v\in\partial_B\Pi_K(\xi-G(x,y))(w)$.
Proof. 
From the previous analysis, problem $(\mathrm{P}_\epsilon)$ can be reformulated as problem (11), so $(x^*,y^*,\xi^*)$ is a local optimal solution of problem (11). Applying the results of Theorems 6.12 and 6.14 in [30], the point $(x^*,y^*,\xi^*)$ must satisfy
$$0\in\begin{pmatrix}\nabla\hat f(x^*,y^*)\\ 0\end{pmatrix}+N_{\Psi^{-1}(\Omega)}(x^*,y^*,\xi^*),$$
where $\Omega$ and $\Psi$ are given in (10) and $N_C$ denotes the normal cone of a set $C$. Since the basic constraint qualification holds at $(x^*,y^*,\xi^*)$, applying Theorem 6.14 in [30], we have
$$0\in\begin{pmatrix}\nabla\hat f(x^*,y^*)\\ 0\end{pmatrix}+J\Psi(x^*,y^*,\xi^*)^TN_\Omega(\Psi(x^*,y^*,\xi^*)).$$
So there exist $u^*,v^*,w^*$ such that
$$(v^*,w^*)\in N_{\operatorname{gph}\Pi_K}\big(\xi^*-G(x^*,y^*),\,\xi^*\big)$$
and
$$\begin{array}{l}\nabla\hat f(x^*,y^*)+J_{x,y}F(x^*,y^*,\xi^*)^Tu^*-JG(x^*,y^*)^Tv^*=0,\\[2pt] J_3g(x^*,y^*,y^*)u^*-v^*-w^*=0.\end{array}$$
The proof is complete. □
Based on the above theorem, the definition of the M-stationary point is provided.
Definition 1. 
Let $(x^*,y^*)$ be a feasible point of problem $(\mathrm{P}_\epsilon)$. Assume that $J_3g(x^*,y^*,y^*)$ has full row rank and that the equation $F(x^*,y^*,\xi)=0$ has a unique solution $\xi^*\in\mathbb{R}^n$. If there exist multipliers $(u^*,v^*,w^*)$ fulfilling (16) and (17), then $(x^*,y^*)$ is called an M-stationary point.
Next, several specific symmetric cones are considered.

3.1.1. Case with Nonnegative Cone

Next, we show the form of this optimality condition when $K=\mathbb{R}^n_+$. With each pair $(a,b)\in\operatorname{gph}\Pi_{\mathbb{R}^n_+}$ we associate the index sets
$$L(a):=\{i\in\{1,\dots,m\}\mid a_i>0\},\quad I_0(a):=\{i\in\{1,\dots,m\}\mid a_i=0\},\quad I_+(a):=\{i\in\{1,\dots,m\}\mid a_i<0\},$$
which represent the indices of the inactive inequalities, the strongly active inequalities, and the weakly active inequalities, respectively. Thus,
$$N_{\operatorname{gph}\Pi_{\mathbb{R}^n_+}}(a,b)=\prod_{i=1}^mN_{\operatorname{gph}\Pi_{\mathbb{R}_+}}(a_i,b_i),$$
where
$$N_{\operatorname{gph}\Pi_{\mathbb{R}_+}}(a_i,b_i)=\begin{cases}\{(v_i,w_i)\mid v_i=-w_i\},&i\in L,\\ \{0\}\times\mathbb{R},&i\in I_+,\\ (\{0\}\times\mathbb{R})\cup\{(v_i,w_i)\mid v_i\ge 0,\ v_i+w_i\le 0\}\cup\{(v_i,w_i)\mid v_i=-w_i\},&i\in I_0,\end{cases}$$
and then (17) can be rewritten as
$$\begin{array}{l}\nabla\hat f(x^*,y^*)+J_{x,y}F(x^*,y^*,\xi^*)^Tu^*-JG(x^*,y^*)^Tv^*=0,\\[2pt] J_3g(x^*,y^*,y^*)u^*-v^*-w^*=0.\end{array}$$

3.1.2. Case with Semidefinite Cone

In the case $K=S^p_+$, consider $\bar Z\in S^p$ and let $\bar Z_+=\Pi_{S^p_+}(\bar Z)$. Applying the orthogonal decomposition to the symmetric matrix $\bar Z$ yields $\bar Z=\bar P\Lambda\bar P^T$, where $\Lambda$ is a diagonal matrix of eigenvalues $\lambda_i=\Lambda_{ii}$ and $\bar P$ is formed by the corresponding eigenvectors.
Hence, for the projection $\bar Z_+$, we have $\bar Z_+=\bar P\Lambda_+\bar P^T$, where the diagonal matrix $\Lambda_+$ has entries $(\Lambda_+)_{ii}=\max\{0,\lambda_i\}$ for $i=1,2,\dots,p$. The projection operator $\Pi_{S^p_+}(\cdot)$ is directionally differentiable along any $H\in S^p$. We now give the specific form of this directional derivative.
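This eigenvalue construction translates directly into a few lines of numpy (a minimal sketch; the function name is ours):

```python
import numpy as np

# Projection onto the PSD cone via the eigenvalue decomposition Z = P Lambda P^T,
# keeping only the nonnegative eigenvalues.
def proj_psd(Z):
    lam, P = np.linalg.eigh(Z)               # Z symmetric: columns of P are eigenvectors
    return (P * np.maximum(lam, 0.0)) @ P.T  # P diag(max(lam, 0)) P^T

Z = np.array([[1.0, 0.0], [0.0, -2.0]])
Zp = proj_psd(Z)
print(Zp)  # -> [[1. 0.] [0. 0.]]
```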
According to the eigenvalues of $\bar Z$, we define the following index sets:
$$\alpha=\{i\mid\lambda_i>0\},\qquad\beta=\{i\mid\lambda_i=0\},\qquad\gamma=\{i\mid\lambda_i<0\}.$$
Rearranging the eigenvalues and the corresponding eigenvectors,
$$\bar\Lambda=\begin{pmatrix}\Lambda_\alpha&0&0\\0&0&0\\0&0&\Lambda_\gamma\end{pmatrix},\qquad\bar P=[\bar P_\alpha\ \ \bar P_\beta\ \ \bar P_\gamma]\in\mathbb{R}^{p\times|\alpha|}\times\mathbb{R}^{p\times|\beta|}\times\mathbb{R}^{p\times|\gamma|}.$$
Let $\Theta$ be the symmetric matrix with entries
$$\Theta_{ij}\in[0,1]\ \ \text{if }(i,j)\in\beta\times\beta,\qquad\Theta_{ij}=\frac{\max\{\lambda_i,0\}+\max\{\lambda_j,0\}}{|\lambda_i|+|\lambda_j|}\ \ \text{otherwise}.$$
Lemma 2. 
Let $\Theta\in S^p$ satisfy (19). Then $W\in\partial_B\Pi_{S^p_+}(\bar Z)$ if and only if there exists $W_0\in\partial_B\Pi_{S^{|\beta|}_+}(0)$ such that
$$W(H)=\bar P\begin{pmatrix}\bar P_\alpha^TH\bar P_\alpha&\bar P_\alpha^TH\bar P_\beta&\Theta_{\alpha\gamma}\circ\bar P_\alpha^TH\bar P_\gamma\\ \bar P_\beta^TH\bar P_\alpha&W_0(\bar P_\beta^TH\bar P_\beta)&0\\ \Theta_{\gamma\alpha}\circ\bar P_\gamma^TH\bar P_\alpha&0&0\end{pmatrix}\bar P^T,\qquad\forall H\in S^p,$$
where "$\circ$" is the Hadamard product, $\Theta_{\alpha\gamma}$ consists of the entries of $\Theta$ in the rows indexed by $\alpha$ and the columns indexed by $\gamma$, and $\Theta_{\gamma\alpha}$ consists of the entries in the rows indexed by $\gamma$ and the columns indexed by $\alpha$.

3.1.3. Case with Second-Order Cone

Let $K=\mathcal{Q}^{m+1}$ be the second-order cone. Any $z=(\bar z;z_0)\in\mathbb{R}^m\times\mathbb{R}$ admits the spectral decomposition
$$z=\lambda_1(z)c_1(z)+\lambda_2(z)c_2(z),$$
where
$$\lambda_i(z)=z_0+(-1)^i\|\bar z\|,\qquad c_i(z)=\begin{cases}\dfrac12\Big((-1)^i\dfrac{\bar z}{\|\bar z\|};\,1\Big),&\text{if }\bar z\ne 0,\\[6pt]\dfrac12\big((-1)^i\omega;\,1\big),&\text{if }\bar z=0,\end{cases}\qquad i=1,2,$$
with $\omega\in\mathbb{R}^m$ such that $\|\omega\|=1$.
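The projection onto $\mathcal{Q}^{m+1}$ can be computed from this spectral decomposition by clamping the spectral values at zero. A minimal numpy sketch (the function name and the $(z_0,\bar z)$ interface ordering are our own conventions):

```python
import numpy as np

# Projection onto the second-order cone Q^{m+1} = {(zbar; z0): ||zbar|| <= z0}
# via the spectral decomposition z = lambda_1 c_1 + lambda_2 c_2.
def proj_soc(z0, zbar):
    nz = np.linalg.norm(zbar)
    lam1, lam2 = z0 - nz, z0 + nz
    w = zbar / nz if nz > 0 else np.zeros_like(zbar)  # any unit vector works if zbar = 0
    p1, p2 = max(lam1, 0.0), max(lam2, 0.0)           # clamp spectral values at zero
    return 0.5 * (p1 + p2), 0.5 * (p2 - p1) * w

z0, zbar = proj_soc(0.0, np.array([2.0, 0.0]))
print(z0, zbar)  # -> 1.0 [1. 0.]  (the nearest point on the cone boundary)
```

Points already in the cone (e.g., $z_0\ge\|\bar z\|$) are left unchanged, since both spectral values are nonnegative.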
Lemma 3. 
Let $z\in\mathbb{R}^{m+1}$ and write $\det(z):=\lambda_1(z)\lambda_2(z)$.
(1) If $\det(z)\ne 0$, then
$$\partial_B\Pi_{\mathcal{Q}^{m+1}}(z)=\{J\Pi_{\mathcal{Q}^{m+1}}(z)\}.$$
(2) If $\det(z)=0$ and $\lambda_2(z)\ne 0$, then
$$\partial_B\Pi_{\mathcal{Q}^{m+1}}(z)=\left\{I,\ I-\frac12\begin{pmatrix}\dfrac{\bar z\bar z^T}{\|\bar z\|^2}&-\dfrac{\bar z}{\|\bar z\|}\\[6pt]-\dfrac{\bar z^T}{\|\bar z\|}&1\end{pmatrix}\right\}.$$
(3) If $\det(z)=0$ and $\lambda_1(z)\ne 0$, then
$$\partial_B\Pi_{\mathcal{Q}^{m+1}}(z)=\left\{0,\ \frac12\begin{pmatrix}\dfrac{\bar z\bar z^T}{\|\bar z\|^2}&\dfrac{\bar z}{\|\bar z\|}\\[6pt]\dfrac{\bar z^T}{\|\bar z\|}&1\end{pmatrix}\right\}.$$
(4) If $\det(z)=0$ and $\lambda_1(z)=\lambda_2(z)=0$, then
$$\partial_B\Pi_{\mathcal{Q}^{m+1}}(z)=\{I,0\}\cup\left\{\frac12\begin{pmatrix}2\alpha I+(1-2\alpha)\omega\omega^T&\omega\\ \omega^T&1\end{pmatrix}\ \middle|\ \omega\in\mathbb{R}^m,\ \|\omega\|=1,\ \alpha\in[0,1]\right\}.$$

3.2. The Second-Order Sufficient Conditions

Let $H:\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^n\to\mathbb{R}^m\times\mathbb{R}^n$ be defined by
$$H(x,y,\xi):=\begin{pmatrix}F(x,y,\xi)\\ -\Pi_K(\xi-G(x,y))+\xi\end{pmatrix}.$$
Then, using an analysis similar to that in [31], we define the critical cone $C(x^*,y^*,\xi^*)$ at the point $(x^*,y^*,\xi^*)$:
$$C(x^*,y^*,\xi^*):=\big\{(\Delta x,\Delta y,\Delta\xi)\ \big|\ \nabla\hat f(x^*,y^*)^T(\Delta x,\Delta y)\le 0;\ H'(x^*,y^*,\xi^*;\Delta x,\Delta y,\Delta\xi)=0\big\},$$
where the directional derivative $H'(x^*,y^*,\xi^*;\Delta x,\Delta y,\Delta\xi)$ can be computed by
$$H'(x^*,y^*,\xi^*;\Delta x,\Delta y,\Delta\xi)=\begin{pmatrix}J_{x,y}F(x^*,y^*,\xi^*)(\Delta x,\Delta y)+J_3g(x^*,y^*,y^*)^T\Delta\xi\\ \Delta\xi-\Pi_K'\big(\xi^*-G(x^*,y^*);\,\Delta\xi-JG(x^*,y^*)(\Delta x,\Delta y)\big)\end{pmatrix}.$$
Before giving the second-order growth condition, we define the following second-order conditions, which are useful to prove optimality conditions.
Definition 2. 
Let $(x^*,y^*)$ be an M-stationary point of problem $(\mathrm{P}_\epsilon)$, and let $\xi^*$ be the unique solution of $F(x^*,y^*,\xi)=0$. Assume that $J_3g(x^*,y^*,y^*)$ has full row rank and that Assumption 1 holds. Suppose that
$$h^T\nabla^2_{x,y,\xi}L(x^*,y^*,\xi^*,u^*,w^*)\,h>0,\qquad\forall h\in C(x^*,y^*,\xi^*)\setminus\{0\},$$
where $(u^*,v^*,w^*)$ is the corresponding multiplier vector and the Lagrangian function $L$ is
$$L(x,y,\xi,u,w):=\hat f(x,y)+\langle u,F(x,y,\xi)\rangle+\langle w,\,-\Pi_K(\xi-G(x,y))+\xi\rangle.$$
Then we say that the second-order sufficient conditions hold at $(x^*,y^*)$.
Theorem 3. 
If Assumption 1 and the second-order sufficient conditions hold at  ( x * , y * ) , then the second-order growth condition holds at  ( x * , y * ) .
Proof. 
If the second-order growth condition fails to hold, there exists a feasible sequence $\{(x^n,y^n)\}\to(x^*,y^*)$ such that
$$\hat f(x^n,y^n)\le\hat f(x^*,y^*)+o(\tau_n^2),$$
where $\tau_n=\|(x^n-x^*,\,y^n-y^*)\|$.
When $n$ is large enough, $J_3g(x^n,y^n,y^n)$ has full row rank. The unique solutions of $F(x^n,y^n,\xi)=0$ and $F(x^*,y^*,\xi)=0$ can be expressed as
$$\xi^n=-\big[J_3g(x^n,y^n,y^n)J_3g(x^n,y^n,y^n)^T\big]^{-1}J_3g(x^n,y^n,y^n)\phi(x^n,y^n)$$
and
$$\xi^*=-\big[J_3g(x^*,y^*,y^*)J_3g(x^*,y^*,y^*)^T\big]^{-1}J_3g(x^*,y^*,y^*)\phi(x^*,y^*).$$
Therefore, we have
$$(x^n,y^n,\xi^n)\to(x^*,y^*,\xi^*).$$
Let $t_n=\|(x^n-x^*,\,y^n-y^*,\,\xi^n-\xi^*)\|$, and let
$$(d_n^x,d_n^y,d_n^\xi):=t_n^{-1}(x^n-x^*,\,y^n-y^*,\,\xi^n-\xi^*).$$
Then $t_n\ge\tau_n$, and
$$\hat f(x^n,y^n)\le\hat f(x^*,y^*)+o(t_n^2).$$
Without loss of generality, we assume that $(d_n^x,d_n^y,d_n^\xi)\to(d^x,d^y,d^\xi)\ne 0$.
Note that
$$\nabla L(x,y,\xi,u^*,w^*)=\begin{pmatrix}\nabla\hat f(x,y)+J_{x,y}F(x,y,\xi)^Tu^*+JG(x,y)^TJ\Pi_K(\xi-G(x,y))w^*\\ J_3g(x,y,y)u^*-J\Pi_K(\xi-G(x,y))w^*+w^*\end{pmatrix}.$$
Therefore, $\nabla_{x,y,\xi}L(x^*,y^*,\xi^*,u^*,w^*)=0$.
Now, using Taylor expansions, we obtain the following:
$$L(x^n,y^n,\xi^n,u^*,w^*)=L(x^*,y^*,\xi^*,u^*,w^*)+\frac12t_n^2(d_n^x,d_n^y,d_n^\xi)^T\nabla^2_{x,y,\xi}L(x^*,y^*,\xi^*,u^*,w^*)(d_n^x,d_n^y,d_n^\xi)+o(t_n^2),$$
$$\hat f(x^n,y^n)=\hat f(x^*,y^*)+t_n\nabla\hat f(x^*,y^*)^T(d_n^x,d_n^y)+o(t_n),$$
$$H(x^n,y^n,\xi^n)=H(x^*,y^*,\xi^*)+t_nH'(x^*,y^*,\xi^*;d_n^x,d_n^y,d_n^\xi)+o(t_n).$$
It follows from (27) that, letting $n\to\infty$,
$$\nabla\hat f(x^*,y^*)^T(d^x,d^y)\le 0.$$
Considering the feasibility of $(x^n,y^n)$ and using (28), we have $H'(x^*,y^*,\xi^*;d_n^x,d_n^y,d_n^\xi)=0$. Consequently, $(d_n^x,d_n^y,d_n^\xi)\in C(x^*,y^*,\xi^*)$. Given that the second-order sufficient conditions hold at $(x^*,y^*)$, there exists $\beta>0$ such that, for $n$ sufficiently large,
$$(d_n^x,d_n^y,d_n^\xi)^T\nabla^2_{x,y,\xi}L(x^*,y^*,\xi^*,u^*,w^*)(d_n^x,d_n^y,d_n^\xi)\ge\beta>0.$$
On the other hand,
$$L(x^n,y^n,\xi^n,u^*,w^*)-L(x^*,y^*,\xi^*,u^*,w^*)=\hat f(x^n,y^n)-\hat f(x^*,y^*)\le o(t_n^2).$$
That is a contradiction. □

4. Smoothing Newton Method

In this section, we focus on solving the problem $(\mathrm{P})$. Considering the metric projection operator onto the symmetric cone $K$, we have
$$\Pi_K(z)=\frac{z+|z|}{2}=\sum_{i=1}^r\frac{\lambda_i(z)+|\lambda_i(z)|}{2}\,c_i(z).$$
We define a real-valued function $h_\mu(z)=\dfrac{z+\sqrt{z^2+\mu^2}}{2}$, which is continuously differentiable with
$$h_\mu'(z)=\frac12+\frac{z}{2\sqrt{z^2+\mu^2}},\qquad h_\mu''(z)=\frac{\mu^2}{2(z^2+\mu^2)^{3/2}}.$$
The Löwner operator is also continuously differentiable by Theorem 2, and
$$\lim_{\mu\downarrow 0}h_\mu(z)=\frac{z+|z|}{2}=(z)_+.$$
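A quick numerical look at $h_\mu$ (a small script of our own) confirms that it approximates $(z)_+$ with a gap of at most $\mu/2$, attained at $z=0$:

```python
import numpy as np

# h_mu(z) = (z + sqrt(z^2 + mu^2))/2 approximates the positive part (z)_+ = max(z, 0);
# the pointwise gap is (sqrt(z^2 + mu^2) - |z|)/2 <= mu/2.
def h(z, mu):
    return 0.5 * (z + np.sqrt(z * z + mu * mu))

z = np.linspace(-2.0, 2.0, 401)  # grid includes z = 0, where the gap is largest
for mu in (0.5, 0.05, 0.005):
    gap = np.max(np.abs(h(z, mu) - np.maximum(z, 0.0)))
    print(f"mu={mu:<6} max gap={gap:.4f}")  # -> mu/2, shrinking with mu
```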
Since
$$\Pi_K(\xi-G(x,y))=\sum_{i=1}^r\big(\lambda_i(\xi-G(x,y))\big)_+\,c_i(\xi-G(x,y)),$$
we now introduce the smoothing approximation
$$\Phi_\mu(x,y,\xi)=\sum_{i=1}^rh_\mu\big(\lambda_i(\xi-G(x,y))\big)\,c_i(\xi-G(x,y))=\sum_{i=1}^r\frac{\lambda_i(\xi-G(x,y))+\sqrt{\lambda_i(\xi-G(x,y))^2+\mu^2}}{2}\,c_i(\xi-G(x,y)).$$
Proposition 1. 
For each $\mu>0$, the function $\Phi_\mu$ is continuously differentiable on $\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^n$. Further, $\lim_{\mu\downarrow 0}\Phi_\mu(x,y,\xi)=\Pi_K(\xi-G(x,y))$.
Proof. 
For each $\mu>0$, it is easy to see that $\Phi_\mu$ is differentiable, and
$$J\Phi_\mu(x,y,\xi)=\big(-[\Phi_\mu'(\tau)]J_xG(x,y),\ -[\Phi_\mu'(\tau)]J_yG(x,y),\ [\Phi_\mu'(\tau)]\big),$$
where $\tau=\xi-G(x,y)$ and
$$\Phi_\mu'(\tau)=2\sum_{i\ne j}[\lambda_i(\tau),\lambda_j(\tau)]_{\phi}\,L(c_i)L(c_j)+\sum_i\phi_\mu'(\lambda_i(\tau))\,Q(c_i).$$
Considering (30) and the spectral decomposition, it is easy to see that
$$\lim_{\mu\downarrow 0}\Phi_\mu(x,y,\xi)=\Pi_K(\xi-G(x,y)).$$
Hence, we complete the proof. □
Therefore, the smoothing Newton method can be applied to solve the nonsmooth equation $L(x,y,\xi,u,v,w)=0$ via the smoothing approximation mapping
$$L_\mu(x,y,\xi,u,v,w):=\begin{pmatrix}F(x,y,\xi)\\ -\xi+\Phi_\mu(\xi-G(x,y))\\ \nabla\hat f(x,y)+J_{x,y}F(x,y,\xi)^Tu-JG(x,y)^Tv\\ J_3g(x,y,y)u-v-w\\ J\Phi_\mu(\xi-G(x,y))(w)-v\end{pmatrix},$$
where
$$\Phi_\mu'(\tau)=2\sum_{i\ne j}[\lambda_i(\tau),\lambda_j(\tau)]_{\phi}\,L(c_i)L(c_j)+\sum_i\phi_\mu'(\lambda_i(\tau))\,Q(c_i)$$
and
$$[\phi^{[1]}(\tau)]_{ij}=[\tau_i,\tau_j]_\phi=\begin{cases}\dfrac{\phi(\tau_i)-\phi(\tau_j)}{\tau_i-\tau_j},&\tau_i\ne\tau_j,\\[6pt]\phi'(\tau_i),&\tau_i=\tau_j,\end{cases}\qquad i,j=1,2,\dots,r.$$
Let
$$E(\epsilon,\mu,x,y,\xi,u,v,w)=\begin{pmatrix}\epsilon\\ \mu\\ L_\mu(x,y,\xi,u,v,w)\end{pmatrix}.$$
When $\mu\to 0$, $L(x,y,\xi,u,v,w)=0$ if and only if $E(Z)=0$, where $Z=(\epsilon,\mu,x,y,\xi,u,v,w)$.
Define the merit function $e(Z):=\|E(Z)\|_2^2$ and denote $\bar Z=(\bar\epsilon,\bar\mu,0)$. Choose parameters $\bar\epsilon\in\mathbb{R}_{++}$, $\bar\mu\in\mathbb{R}_{++}$ and $\gamma\in(0,1)$ such that $\gamma\bar\epsilon+\gamma\bar\mu<1$, and let $\eta(Z)=\gamma\min\{1,e(Z)\}$. Let
$$\Omega=\{Z\mid\epsilon\ge\eta(Z)\bar\epsilon,\ \mu\ge\eta(Z)\bar\mu\}.$$
Then for any $Z$ we have $\eta(Z)\le\gamma<1$, which implies that every point $(\bar\epsilon,\bar\mu,x,y,\xi,u,v,w)$ is feasible.
Proposition 2. 
$E(Z)=0\iff\eta(Z)=0\iff E(Z)=\eta(Z)\bar Z$.
The algorithm of the smoothing Newton method [32] for our problem is presented in the following.
Next, we analyze the convergence of Algorithm 1. Before that, we first provide the following theorem, which shows that the strong BD-regularity of E can be verified under certain conditions.
Algorithm 1: The algorithm of smoothing Newton method for SPSCC.
Step 0: Choose $\delta\in(0,1)$ and $\sigma\in(0,\frac12)$. Let $Z^0=(\epsilon^0,\mu^0,x^0,y^0,\xi^0,u^0,v^0,w^0)$ be an arbitrary point with $\epsilon^0=\bar\epsilon$, $\mu^0=\bar\mu$, and set $k=0$.
Step 1: Compute $E(Z^k)$. If $\|E(Z^k)\|_2=0$, stop and record the point $Z^k$; otherwise, let $\eta_k=\eta(Z^k)$.
Step 2: Obtain $\Delta Z^k$ by solving the equation
$$E(Z^k)+JE(Z^k)\,\Delta Z^k=\eta(Z^k)\bar Z.$$
Step 3: Let $l_k$ be the smallest non-negative integer $l$ such that
$$e(Z^k+\delta^l\Delta Z^k)\le\big[1-2\sigma(1-\gamma\bar\epsilon-\gamma\bar\mu)\delta^l\big]\,e(Z^k).$$
Step 4: Set $Z^{k+1}=Z^k+\delta^{l_k}\Delta Z^k$ and $k=k+1$. Go to Step 1.
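To make the structure of Algorithm 1 concrete, the following sketch applies the same scheme (a Newton step with perturbed right-hand side, an Armijo-type line search on the merit function, and $\mu$ driven to zero through $\eta$) to a one-dimensional toy complementarity problem. It is only an illustration under our own simplifications (a single smoothing parameter $\mu$, no $\epsilon$, and $f(x)=x-1$ as toy data), not the paper's SPSCC implementation.

```python
import numpy as np

# Toy NCP: x >= 0, f(x) >= 0, x*f(x) = 0 with f(x) = x - 1, written as the
# nonsmooth equation x - max(x - f(x), 0) = 0 and smoothed with h_mu.
def h(z, mu):
    return 0.5 * (z + np.sqrt(z * z + mu * mu))

def dh(z, mu):
    return 0.5 + z / (2.0 * np.sqrt(z * z + mu * mu))

def f(x):
    return x - 1.0

def E(Z):                       # E(Z) = (mu, x - h_mu(x - f(x)))
    mu, x = Z
    return np.array([mu, x - h(x - f(x), mu)])

def JE(Z):                      # Jacobian of E with respect to (mu, x)
    mu, x = Z
    z = x - f(x)
    dz_dx = 1.0 - 1.0           # d(x - f(x))/dx with f'(x) = 1
    dh_dmu = mu / (2.0 * np.sqrt(z * z + mu * mu))
    return np.array([[1.0, 0.0],
                     [-dh_dmu, 1.0 - dh(z, mu) * dz_dx]])

mu_bar, gamma, sigma, delta = 1.0, 0.2, 0.25, 0.5
Z = np.array([mu_bar, 3.0])     # start with mu = mu_bar
Z_bar = np.array([mu_bar, 0.0])

for k in range(50):
    e = float(np.dot(E(Z), E(Z)))            # merit function e(Z) = ||E(Z)||^2
    if np.sqrt(e) < 1e-10:
        break
    eta = gamma * min(1.0, e)
    dZ = np.linalg.solve(JE(Z), eta * Z_bar - E(Z))  # perturbed Newton step
    l = 0
    while l < 40:                            # Armijo-type line search on e(Z)
        Zt = Z + delta ** l * dZ
        if np.dot(E(Zt), E(Zt)) <= (1 - 2 * sigma * (1 - gamma * mu_bar) * delta ** l) * e:
            break
        l += 1
    Z = Z + delta ** l * dZ

print(Z)  # mu -> 0 and x -> 1, the solution of the toy NCP
```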
Theorem 4. 
Assume that $J_3g(x^*,y^*,y^*)$ has full row rank. If the second-order sufficient conditions hold at $(x^*,y^*)$, then $E$ is strongly BD-regular at $(0,0,x^*,y^*,\xi^*,u^*,v^*,w^*)$.
Proof. 
To prove that $E$ is strongly BD-regular, we need to prove the nonsingularity of every $\hat H\in\partial_BE(0,0,x^*,y^*,\xi^*,u^*,v^*,w^*)$; that is, for any vector $\Delta d$, $\hat H\Delta d=0$ implies $\Delta d=0$. Since $\Delta\epsilon=0$ and $\Delta\mu=0$, we need only focus on $H(\Delta x,\Delta y,\Delta\xi,\Delta u,\Delta v,\Delta w)=0$, where
$$H=\begin{pmatrix}
J_{x,y}F&J_3g(x,y,y)^T&0&0&0\\
\Phi_0'\,J_{x,y}G&I-\Phi_0'&0&0&0\\
\nabla^2\hat f+J_{x,y}\{J_{x,y}F^Tu\}-J_{x,y}\{J_{x,y}G^Tv\}&J_\xi\{J_{x,y}F^Tu\}&J_{x,y}F^T&-J_{x,y}G^T&0\\
J_{x,y}\{J_3g(x,y,y)u\}&0&J_3g(x,y,y)&-I&-I\\
J_{x,y}\{J\Pi_K(\xi-G)(w)\}&J_\xi\{J\Pi_K(\xi-G)(w)\}&0&-I&J\Pi_K(\xi-G)
\end{pmatrix},$$
with the block columns corresponding to $(\Delta x,\Delta y)$, $\Delta\xi$, $\Delta u$, $\Delta v$, and $\Delta w$.
Then we can obtain the following equalities:
$$J_{x,y}F(x^*,y^*,\xi^*)(\Delta x,\Delta y)+J_3g(x^*,y^*,y^*)^T\Delta\xi=0,$$
$$\Phi_0'\,J_{x,y}G(x^*,y^*)(\Delta x,\Delta y)+(I-\Phi_0')\Delta\xi=0,$$
$$\big[\nabla^2\hat f(x^*,y^*)+J_{x,y}\{J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}-J_{x,y}\{J_{x,y}G(x^*,y^*)^Tv^*\}\big](\Delta x,\Delta y)+J_\xi\{J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}\Delta\xi+J_{x,y}F(x^*,y^*,\xi^*)^T\Delta u-J_{x,y}G(x^*,y^*)^T\Delta v=0,$$
$$J_{x,y}\{J_3g(x^*,y^*,y^*)u^*\}(\Delta x,\Delta y)+J_3g(x^*,y^*,y^*)\Delta u-\Delta v-\Delta w=0,$$
$$J_{x,y}\{J\Pi_K(\xi^*-G(x^*,y^*))(w^*)\}(\Delta x,\Delta y)+J_\xi\{J\Pi_K(\xi^*-G(x^*,y^*))(w^*)\}\Delta\xi-\Delta v+J\Pi_K(\xi^*-G(x^*,y^*))\Delta w=0.$$
Since $L(x^*,y^*,\xi^*,u^*,v^*,w^*)=0$, together with the equalities (36), (37), and (39), we can obtain
$$\begin{aligned}
\nabla\hat f(x^*,y^*)^T(\Delta x,\Delta y)&=\{JG(x^*,y^*)^Tv^*-J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}^T(\Delta x,\Delta y)\\
&=v^{*T}JG(x^*,y^*)(\Delta x,\Delta y)+[J_3g(x^*,y^*,y^*)u^*]^T\Delta\xi\\
&=v^{*T}JG(x^*,y^*)(\Delta x,\Delta y)+(v^*+w^*)^T\Delta\xi\\
&=\big\langle J\Pi_K(\cdot)w^*,\ JG(x^*,y^*)(\Delta x,\Delta y)\big\rangle+\big\langle(I-J\Pi_K(\cdot))w^*,\ \Delta\xi\big\rangle\\
&=0.
\end{aligned}$$
Then, from the equalities (36)–(40) and the definition of the critical cone, we know that $(\Delta x,\Delta y,\Delta\xi)\in C(x^*,y^*,\xi^*)$.
Next, multiplying the equality (38) on the left by $(\Delta x,\Delta y)^T$ yields
$$\begin{aligned}&(\Delta x,\Delta y)^T\nabla^2\hat f(x^*,y^*)(\Delta x,\Delta y)+(\Delta x,\Delta y)^TJ_{x,y}\{J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}(\Delta x,\Delta y)\\&\qquad-(\Delta x,\Delta y)^TJ_{x,y}\{J_{x,y}G(x^*,y^*)^Tv^*\}(\Delta x,\Delta y)+(\Delta x,\Delta y)^TJ_\xi\{J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}\Delta\xi\\&\qquad+(\Delta x,\Delta y)^TJ_{x,y}F(x^*,y^*,\xi^*)^T\Delta u-(\Delta x,\Delta y)^TJ_{x,y}G(x^*,y^*)^T\Delta v=0,\end{aligned}$$
where
$$\begin{aligned}&(\Delta x,\Delta y)^TJ_{x,y}F(x^*,y^*,\xi^*)^T\Delta u-(\Delta x,\Delta y)^TJ_{x,y}G(x^*,y^*)^T\Delta v\\&\quad=-\Delta\xi^TJ_3g(x^*,y^*,y^*)\Delta u-(\Delta x,\Delta y)^TJ_{x,y}G(x^*,y^*)^T\Delta v\\&\quad=\Delta\xi^T\big\{J_{x,y}[J_3g(x^*,y^*,y^*)u^*](\Delta x,\Delta y)-\Delta v-\Delta w\big\}-(\Delta x,\Delta y)^TJ_{x,y}G(x^*,y^*)^T\Delta v\\&\quad=\Delta\xi^TJ_{x,y}[J_3g(x^*,y^*,y^*)u^*](\Delta x,\Delta y)-\big\{\Delta\xi^T(\Delta v+\Delta w)+(\Delta x,\Delta y)^TJ_{x,y}G(x^*,y^*)^T\Delta v\big\}\\&\quad=\Delta\xi^TJ_{x,y}[J_3g(x^*,y^*,y^*)u^*](\Delta x,\Delta y)-\big\langle J\Pi_K(\cdot)\Delta w+\Delta v,\ \Delta\xi+J_{x,y}G(x^*,y^*)(\Delta x,\Delta y)\big\rangle\\&\quad=\Delta\xi^TJ_{x,y}[J_3g(x^*,y^*,y^*)u^*](\Delta x,\Delta y)\\&\qquad-\big\langle J_{x,y}[J\Pi_K(\cdot)(w^*)](\Delta x,\Delta y)+J_\xi[J\Pi_K(\cdot)(w^*)]\Delta\xi,\ \Delta\xi+J_{x,y}G(x^*,y^*)(\Delta x,\Delta y)\big\rangle.\end{aligned}$$
Then the equality (41) can be reduced to
$$\begin{aligned}&(\Delta x,\Delta y)^T\nabla^2\hat f(x^*,y^*)(\Delta x,\Delta y)+(\Delta x,\Delta y)^TJ_{x,y}\{J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}(\Delta x,\Delta y)\\&\qquad-(\Delta x,\Delta y)^TJ_{x,y}\{J_{x,y}G(x^*,y^*)^Tv^*\}(\Delta x,\Delta y)+(\Delta x,\Delta y)^TJ_\xi\{J_{x,y}F(x^*,y^*,\xi^*)^Tu^*\}\Delta\xi\\&\qquad+\Delta\xi^TJ_{x,y}[J_3g(x^*,y^*,y^*)u^*](\Delta x,\Delta y)\\&\qquad-\big\langle J_{x,y}[J\Pi_K(\cdot)(w^*)](\Delta x,\Delta y)+J_\xi[J\Pi_K(\cdot)(w^*)]\Delta\xi,\ \Delta\xi+J_{x,y}G(x^*,y^*)(\Delta x,\Delta y)\big\rangle=0.\end{aligned}$$
This means that
$$(\Delta x,\Delta y,\Delta\xi)^T\nabla^2_{x,y,\xi}L(x^*,y^*,\xi^*,u^*,w^*)(\Delta x,\Delta y,\Delta\xi)=0,\qquad(\Delta x,\Delta y,\Delta\xi)\in C(x^*,y^*,\xi^*).$$
Since the second-order sufficient conditions hold, it is easy to get ( Δ x , Δ y , Δ ξ ) = 0 .
Therefore, Equation (38) reduces to
$$J_{x,y}F(x^*,y^*,\xi^*)^T\Delta u-J_{x,y}G(x^*,y^*)^T\Delta v=0.$$
Under Assumption 3 and considering Equation (39), we can conclude that $(\Delta u,\Delta v,\Delta w)=0$. This completes the proof. □
According to the convergent results in [32], the global convergence of Algorithm 1 can be obtained.
Theorem 5. 
For each $k\ge 0$, assume $\epsilon_k>0$, $\mu_k>0$, and that $JE$ is nonsingular. Then
(i) 
If $Z^*$ is a cluster point of the sequence $\{Z^k\}$ generated by Algorithm 1, then $E(Z^*)=0$.
(ii) 
Further, if E satisfies the strong BD-regularity at  Z * , then
$$\|Z^{k+1}-Z^*\|=o(\|Z^k-Z^*\|)$$
and
$$\epsilon_{k+1}=o(\epsilon_k),\qquad\mu_{k+1}=o(\mu_k).$$

5. An Example

In this section, we present an illustrative example of an inverse problem. This example demonstrates the application of sparse mathematical programs governed by symmetric cone constrained generalized equations, where $K=\mathbb{R}^m_+$.
Consider the following inverse linear program:
$$(\mathrm{ILP})\qquad\min_c\ \|c-c^0\|_1\qquad\mathrm{s.t.}\quad x^0\in SOL(LP(c)),\ c\in\mathbb{R}^n,$$
where $\|\cdot\|_1$ is defined by $\|c\|_1=\sum_{i=1}^n|c_i|$ for any $c\in\mathbb{R}^n$, and $SOL(LP(c))$ denotes the solution set of the classical linear program.
Considering classical duality theory, it is easy to see that problem $(\mathrm{ILP})$ can be reformulated as
$$\min_c\ \|c-c^0\|_1\qquad\mathrm{s.t.}\quad c-A^T\lambda=0_n,\quad 0_m\le\lambda\perp(Ax^0-b^0)\ge 0_m,\quad(c,\lambda)\in\mathbb{R}^n\times\mathbb{R}^m.$$
Letting $I=\{i\mid a_i^Tx^0-b_i^0=0\}$, we may set $I=\{1,2,\dots,r\}$ and $\hat A=(a_1,a_2,\dots,a_r)^T$ without loss of generality. Because of the complementarity constraints, $\lambda=(\hat\lambda,0,\dots,0)$. The problem above is equivalent to
$$\min_c\ \|c-c^0\|_1\qquad\mathrm{s.t.}\quad c-\hat A^T\hat\lambda=0_n,\quad(c,\hat\lambda)\in\mathbb{R}^n\times\mathbb{R}^r_+.$$
If $\bar\lambda$ is the optimal solution of (44), then $\bar c=\hat A^T\bar\lambda$ is optimal for the original problem. In fact, (44) is equivalent to the problem above, so we only need to solve (44).
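The reduced constraint structure $c=\hat A^T\hat\lambda$, $\hat\lambda\ge 0$ can be exercised on a tiny instance. As a hedged illustration (our own toy data; we minimize the $\ell_2$ distance to $c^0$ instead of the paper's $\ell_1$ norm so that scipy's nonnegative least squares applies):

```python
import numpy as np
from scipy.optimize import nnls

# Toy instance of the reduced problem: recover c = A_hat^T lam with lam >= 0 closest
# to a target c0 (l2 variant for illustration; the constraint structure is the same).
c0 = np.array([1.0, -2.0])
A_hat = np.array([[1.0, 0.0],   # rows a_i^T of the active constraints (toy data)
                  [0.0, 1.0]])

lam, _ = nnls(A_hat.T, c0)      # min ||A_hat^T lam - c0||_2  s.t.  lam >= 0
c_bar = A_hat.T @ lam
print(lam, c_bar)  # -> [1. 0.] [1. 0.]
```

The negative component of $c^0$ cannot be matched under the sign constraint, so the corresponding multiplier is driven to zero.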

5.1. Perturbation Problem

It should be noted that the objective function of (44), $\|c-c^0\|_1=\sum_{i=1}^n|c_i-c_i^0|$, is non-differentiable at points where $c_i=c_i^0$ for some $i\in\{1,\dots,n\}$. This leads to situations where the KKT conditions might not hold at certain local optimal points. To address this, we construct a smoothing approximation of the nonsmooth problem:
$$\min\ \sum_{i=1}^n\sqrt{(c_i-c_i^0)^2+\epsilon^2}\qquad\mathrm{s.t.}\quad c-\hat A^T\hat\lambda=0_n,\quad(c,\hat\lambda)\in\mathbb{R}^n\times\mathbb{R}^r_+.$$
Let $\Omega=\{(c,\hat\lambda)\in\mathbb{R}^n\times\mathbb{R}^r_+\mid c-\hat A^T\hat\lambda=0_n\}$, and
$$f(c,\hat\lambda,\epsilon)=\begin{cases}\sum_{i=1}^n\sqrt{(c_i-c_i^0)^2+\epsilon^2},&(c,\hat\lambda)\in\Omega,\\ +\infty,&\text{otherwise}.\end{cases}$$
Then the perturbation problem is $\min f(c,\hat\lambda,\epsilon)$, and the original problem is $\min f(c,\hat\lambda,0)$. The following theorem shows that the optimal value of the perturbation problem converges to that of the original problem as $\epsilon\to 0$; further, the solution mapping is outer semi-continuous.
Theorem 6. 
Let $\kappa(\epsilon)=\inf_{(c,\hat\lambda)} f(c,\hat\lambda,\epsilon)$ and $S(\epsilon)=\operatorname{Arg\,min}_{(c,\hat\lambda)} f(c,\hat\lambda,\epsilon)$. Then $\kappa(\epsilon)$ is continuous at $0$, and the solution mapping $S(\epsilon)$ is outer semicontinuous at $0$.
Proof. 
Clearly, $f(c,\hat\lambda,\epsilon)$ is continuous at $\epsilon=0$. By Definition 7.39 in [30] (the definition of epi-continuity), $f(c,\hat\lambda,\epsilon)$ is epi-continuous at $\epsilon=0$, so $\operatorname{epi} f(c,\hat\lambda,\epsilon)$ is a closed subset of $\mathbb{R}^{n+r+2}$. By Theorem 7.1 in [30], $f(c,\hat\lambda,\epsilon)$ is lower semicontinuous (l.s.c.) on $\mathbb{R}^{n+r+1}$. Next, we prove that $f$ is level-bounded in $(c,\hat\lambda)$ locally uniformly in $\epsilon$.
Suppose $\epsilon\in(0,1)$, and fix $\alpha>0$. Choose a neighborhood $V\subset(0,1)$ of $\epsilon$. By the optimality conditions, the set of multipliers $\Pi$ is nonempty and bounded. Let $U=\Omega\cap\big(\{c \mid (c_i-c_i^0)^2\le(\alpha/n)^2-1,\ i=1,\ldots,n\}\times\Pi\big)\times V$; then $U$ is bounded. For any $(c,\hat\lambda,\epsilon)\in U$,
$$f(c,\hat\lambda,\epsilon)=\sum_{i=1}^n\sqrt{(c_i-c_i^0)^2+\epsilon^2}\le\sum_{i=1}^n\sqrt{(\alpha/n)^2-1+\epsilon^2}\le\sum_{i=1}^n\sqrt{(\alpha/n)^2}=\alpha.$$
Then $\{(c,\hat\lambda)\mid f(c,\hat\lambda,\epsilon)\le\alpha\}\subset U$, so $f(c,\hat\lambda,\epsilon)$ is l.s.c. and level-bounded in $(c,\hat\lambda)$ uniformly in $\epsilon$. The theorem then follows from Theorem 7.41 in [30]. □

5.2. Smoothing Newton Method

From the above analysis, we can solve the inverse linear program via a sequence of perturbation problems. In the following, we consider how to solve the perturbation problem.
The Lagrange function of problem (45) is
$$L_\epsilon(c,\hat\lambda,y,z)=\sum_{i=1}^n\sqrt{(c_i-c_i^0)^2+\epsilon^2}+\langle y,\ c-\hat A^T\hat\lambda\rangle-\langle z,\hat\lambda\rangle,$$
and the KKT system is
$$\nabla_c L_\epsilon(c,\hat\lambda,y,z)=0,\qquad \nabla_{\hat\lambda}L_\epsilon(c,\hat\lambda,y,z)=0,\qquad c-\hat A^T\hat\lambda=0,\qquad 0\le z\ \perp\ \hat\lambda\ge 0,$$
where
$$\nabla_c L_\epsilon(c,\hat\lambda,y,z)=\Big(\frac{c_1-c_1^0}{\sqrt{(c_1-c_1^0)^2+\epsilon^2}}+y_1,\ \ldots,\ \frac{c_n-c_n^0}{\sqrt{(c_n-c_n^0)^2+\epsilon^2}}+y_n\Big)^T,\qquad \nabla_{\hat\lambda}L_\epsilon(c,\hat\lambda,y,z)=-\hat A y-z.$$
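As a sanity check, these gradient formulas can be compared against central finite differences of $L_\epsilon$ on random data. The instance below is our own illustration; the tolerances are loose enough to absorb finite-difference truncation error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, eps = 3, 2, 1e-3
A_hat = rng.standard_normal((r, n))
c0, c = rng.standard_normal(n), rng.standard_normal(n)
lam = rng.standard_normal(r)
y, z = rng.standard_normal(n), rng.standard_normal(r)

def L_eps(c, lam):
    """Lagrange function of the perturbed problem (45)."""
    return (np.sum(np.sqrt((c - c0) ** 2 + eps ** 2))
            + y @ (c - A_hat.T @ lam) - z @ lam)

grad_c = (c - c0) / np.sqrt((c - c0) ** 2 + eps ** 2) + y
grad_lam = -A_hat @ y - z

h = 1e-6
for i in range(n):   # compare grad_c with central differences
    e = np.zeros(n); e[i] = h
    fd = (L_eps(c + e, lam) - L_eps(c - e, lam)) / (2 * h)
    assert abs(fd - grad_c[i]) < 1e-5
for j in range(r):   # and grad_lam likewise
    e = np.zeros(r); e[j] = h
    fd = (L_eps(c, lam + e) - L_eps(c, lam - e)) / (2 * h)
    assert abs(fd - grad_lam[j]) < 1e-5
```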
The Fischer–Burmeister (F–B) function $\varphi$ is defined by
$$\varphi(a,b):=\sqrt{a^2+b^2}-a-b.$$
Clearly, $\varphi(a,b)=0\iff a\ge 0,\ b\ge 0,\ ab=0$. The KKT system can therefore be transformed into
$$\nabla_c L_\epsilon(c,\hat\lambda,y,z)=0,\qquad \nabla_{\hat\lambda}L_\epsilon(c,\hat\lambda,y,z)=0,\qquad c-\hat A^T\hat\lambda=0,\qquad \varphi(\hat\lambda,z)=0,$$
where $\varphi(\hat\lambda,z)=(\varphi(\hat\lambda_1,z_1),\ldots,\varphi(\hat\lambda_r,z_r))^T$.
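The equivalence $\varphi(a,b)=0\iff a\ge0,\ b\ge0,\ ab=0$ can be spot-checked directly (the sample points below are arbitrary):

```python
import numpy as np

def phi(a, b):
    """Fischer-Burmeister function: phi(a,b) = sqrt(a^2 + b^2) - a - b."""
    return np.sqrt(a * a + b * b) - a - b

# zero exactly on the complementarity set {a >= 0, b >= 0, ab = 0}
assert phi(0.0, 3.0) == 0.0 and phi(2.0, 0.0) == 0.0
assert phi(1.0, 1.0) != 0.0     # complementarity violated: ab != 0
assert phi(-1.0, 0.0) != 0.0    # nonnegativity violated: a < 0
```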
Since the F–B function is non-differentiable at the origin, we select the smoothing approximation
$$\varphi_\mu(a,b):=\sqrt{a^2+b^2+\mu^2}-a-b,$$
for which
$$\varphi_\mu(a,b)=0\iff a>0,\ b>0,\ ab=\tfrac{1}{2}\mu^2.$$
Clearly, $\varphi_\mu\to\varphi$ as $\mu\to 0$.
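Both properties of the smoothed function are easy to verify numerically: roots of $\varphi_\mu$ satisfy the perturbed complementarity $ab=\mu^2/2$, and $\varphi_\mu$ approaches $\varphi$ pointwise as $\mu\to 0$. A brief sketch with arbitrary sample values:

```python
import numpy as np

def phi_mu(a, b, mu):
    """Smoothed Fischer-Burmeister: sqrt(a^2 + b^2 + mu^2) - a - b."""
    return np.sqrt(a * a + b * b + mu * mu) - a - b

# a root of phi_mu: fix a > 0 and solve a*b = mu^2/2 for b
a, mu = 2.0, 1e-2
b = mu ** 2 / (2.0 * a)
assert abs(phi_mu(a, b, mu)) < 1e-12
# phi_mu -> phi pointwise as mu -> 0; phi(3,4) = 5 - 3 - 4 = -2
assert abs(phi_mu(3.0, 4.0, 1e-8) - (-2.0)) < 1e-9
```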
Define $\Phi_{\epsilon,\mu}:\mathbb{R}^{2n+2r}\to\mathbb{R}^{2n+2r}$; the smoothed KKT system is
$$\Phi_{\epsilon,\mu}(c,\hat\lambda,y,z)=\begin{pmatrix}\nabla_c L_\epsilon(c,\hat\lambda,y,z)\\ \nabla_{\hat\lambda}L_\epsilon(c,\hat\lambda,y,z)\\ c-\hat A^T\hat\lambda\\ \varphi_\mu(\hat\lambda,z)\end{pmatrix}=0,$$
where $\varphi_\mu(\hat\lambda,z)=(\varphi_\mu(\hat\lambda_1,z_1),\ldots,\varphi_\mu(\hat\lambda_r,z_r))^T$.
Remark 3. 
$B\in\partial\Phi$ if and only if there exist $V_1\in\partial_{\hat\lambda}\varphi$ and $V_2\in\partial_z\varphi$ such that
$$B=\begin{pmatrix}\epsilon^2 D_1^3 & 0 & I & 0\\ 0 & 0 & -\hat A & -I\\ I & -\hat A^T & 0 & 0\\ 0 & V_1 & 0 & V_2\end{pmatrix}.$$
Then we have
$$\operatorname{dist}(J\Phi_{\epsilon,\mu},\partial\Phi)=\min_{B\in\partial\Phi}\|B-J\Phi_{\epsilon,\mu}\| =\min_{V_1\in\partial_{\hat\lambda}\varphi,\ V_2\in\partial_z\varphi}\big(\|V_1-D_\lambda\|^2+\|V_2-D_z\|^2\big)^{1/2} =\operatorname{dist}(\varphi_\mu,\partial\varphi)\to 0.
$$
Therefore, this smoothing is reasonable.
If $(\bar c,\bar\lambda)\in\mathbb{R}^n\times\mathbb{R}^r$ is a local minimizer, then by the first-order necessary optimality conditions there exists a Lagrange multiplier $(\bar y,\bar z)$ satisfying the KKT system; that is, $\Phi_{\epsilon,\mu}(\bar c,\bar\lambda,\bar y,\bar z)=0$. In other words, finding a KKT point of (44) amounts to solving the equations $\Phi_{\epsilon,\mu}(c,\hat\lambda,y,z)=0$, a task that can be tackled with the Newton method. However, this requires that the Jacobian of $\Phi_{\epsilon,\mu}$,
$$J\Phi_{\epsilon,\mu}(c,\hat\lambda,y,z)=\begin{pmatrix}\epsilon^2 D_1^3 & 0 & I & 0\\ 0 & 0 & -\hat A & -I\\ I & -\hat A^T & 0 & 0\\ 0 & D_\lambda & 0 & D_z\end{pmatrix},$$
be nonsingular, where
$$D_1=\operatorname{diag}\Big(\frac{1}{\sqrt{(c_1-c_1^0)^2+\epsilon^2}},\ldots,\frac{1}{\sqrt{(c_n-c_n^0)^2+\epsilon^2}}\Big),$$
$$D_\lambda=\operatorname{diag}\Big(\frac{\hat\lambda_1}{\sqrt{\hat\lambda_1^2+z_1^2+\mu^2}}-1,\ldots,\frac{\hat\lambda_r}{\sqrt{\hat\lambda_r^2+z_r^2+\mu^2}}-1\Big),\qquad D_z=\operatorname{diag}\Big(\frac{z_1}{\sqrt{\hat\lambda_1^2+z_1^2+\mu^2}}-1,\ldots,\frac{z_r}{\sqrt{\hat\lambda_r^2+z_r^2+\mu^2}}-1\Big).$$
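This block Jacobian can be assembled and checked numerically. The sketch below builds $J\Phi_{\epsilon,\mu}$ on a random instance of our own (the diagonal blocks are the partial derivatives of the smoothed objective and of $\varphi_\mu$) and verifies that it has full rank, in line with Theorem 7 below.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, eps, mu = 4, 2, 1e-2, 1e-2
A_hat = rng.standard_normal((r, n))      # full row rank with probability one
c = rng.standard_normal(n)
c0 = rng.standard_normal(n)
lam = rng.standard_normal(r)
z = rng.standard_normal(r)

s1 = np.sqrt((c - c0) ** 2 + eps ** 2)   # so D1 = diag(1 / s1)
s2 = np.sqrt(lam ** 2 + z ** 2 + mu ** 2)
D1_3 = np.diag(eps ** 2 / s1 ** 3)       # eps^2 * D1^3
Dl = np.diag(lam / s2 - 1.0)             # d(phi_mu)/d(lam), diagonal
Dz = np.diag(z / s2 - 1.0)               # d(phi_mu)/d(z), diagonal

I_n, I_r = np.eye(n), np.eye(r)
Znn, Znr, Zrr = np.zeros((n, n)), np.zeros((n, r)), np.zeros((r, r))
J = np.block([[D1_3,  Znr,       I_n,              Znr],
              [Znr.T, Zrr,       -A_hat,           -I_r],
              [I_n,   -A_hat.T,  Znn,              Znr],
              [Znr.T, Dl,        np.zeros((r, n)), Dz]])
# nonsingular for mu != 0 and A_hat of full row rank
assert np.linalg.matrix_rank(J) == 2 * n + 2 * r
```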
Theorem 7. 
Suppose that $\hat A$ has full row rank. Then for any $\mu\neq 0$, $J\Phi_{\epsilon,\mu}(c,\hat\lambda,y,z)$ is nonsingular for every $(c,\hat\lambda,y,z)\in\mathbb{R}^n\times\mathbb{R}^r\times\mathbb{R}^n\times\mathbb{R}^r$.
Proof. 
Suppose there exists $h=(h_1,h_2,h_3,h_4)\in\mathbb{R}^n\times\mathbb{R}^r\times\mathbb{R}^n\times\mathbb{R}^r$ satisfying $J\Phi_{\epsilon,\mu}(c,\hat\lambda,y,z)h=0$, namely
$$\epsilon^2 D_1^3 h_1+h_3=0,\tag{46}$$
$$-\hat A h_3-h_4=0,\tag{47}$$
$$h_1-\hat A^T h_2=0,\tag{48}$$
$$D_\lambda h_2+D_z h_4=0.\tag{49}$$
Left-multiplying both sides of Equation (46) by $h_1^T$, we obtain
$$h_1^T\epsilon^2 D_1^3 h_1+h_1^T h_3=0.\tag{50}$$
Since $\epsilon^2 D_1^3$ is positive definite, $h_1^T h_3\le 0$. By Equation (49),
$$h_2=-D_\lambda^{-1}D_z h_4.$$
It is evident that $D_\lambda^{-1}D_z$ is positive definite, and combining Equations (47) and (48), we have
$$h_1^T h_3=h_3^T h_1=h_3^T\hat A^T h_2=(\hat A h_3)^T h_2=-h_4^T h_2=h_4^T D_\lambda^{-1}D_z h_4\ge 0,$$
so $h_1^T h_3=0$. Substituting this into (50) gives $h_1=0$, and substituting $h_1=0$ into (46) gives $h_3=0$. Setting $h_1=0$ in Equation (48) and using the full row rank of $\hat A$, we obtain $h_2=0$; setting $h_2=0$ in (49) and using the nonsingularity of $D_z$, we get $h_4=0$. Therefore $h=(h_1,h_2,h_3,h_4)=0$, and $J\Phi_{\epsilon,\mu}(c,\hat\lambda,y,z)$ is nonsingular. □
Theorem 8. 
Suppose that $\hat A$ has full row rank. Then the sequence $\{Z^k\}$ generated by the smoothing Newton method converges quadratically to a solution $Z^*$ of $(\mathrm{ILP})$.
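To make the procedure concrete, here is a minimal sketch of the method: plain Newton steps on $\Phi_{\epsilon,\mu}=0$ with $\epsilon$ and $\mu$ held fixed at small values, so without the line search and smoothing-parameter updates of Algorithm 1. The tiny instance at the bottom (one active row, so $c=(\hat\lambda_1,0)^T$) is our own illustration; its solution is $c^*=c^0$, $\hat\lambda^*=1$.

```python
import numpy as np

def newton_kkt(c0, A_hat, eps=1e-4, mu=1e-4, tol=1e-10, max_iter=50):
    """Plain Newton on Phi_{eps,mu}(c, lam, y, z) = 0 with eps, mu fixed.
    A simplified sketch of the smoothing Newton method, tiny dense case only."""
    n, r = len(c0), A_hat.shape[0]
    c, lam = c0.copy(), np.ones(r)
    y, z = np.zeros(n), np.ones(r)
    I_n, I_r = np.eye(n), np.eye(r)
    Znn, Znr, Zrr = np.zeros((n, n)), np.zeros((n, r)), np.zeros((r, r))
    for _ in range(max_iter):
        s1 = np.sqrt((c - c0) ** 2 + eps ** 2)
        s2 = np.sqrt(lam ** 2 + z ** 2 + mu ** 2)
        Phi = np.concatenate([(c - c0) / s1 + y,    # grad_c L
                              -A_hat @ y - z,       # grad_lam L
                              c - A_hat.T @ lam,    # primal feasibility
                              s2 - lam - z])        # smoothed F-B residual
        if np.linalg.norm(Phi) < tol:
            break
        J = np.block([[np.diag(eps ** 2 / s1 ** 3), Znr, I_n, Znr],
                      [Znr.T, Zrr, -A_hat, -I_r],
                      [I_n, -A_hat.T, Znn, Znr],
                      [Znr.T, np.diag(lam / s2 - 1),
                       np.zeros((r, n)), np.diag(z / s2 - 1)]])
        d = np.linalg.solve(J, -Phi)                # full Newton step
        c += d[:n]
        lam += d[n:n + r]
        y += d[n + r:2 * n + r]
        z += d[2 * n + r:]
    return c, lam

# one active row: c is constrained to (lam_1, 0)^T, so c* = c0 and lam* = 1
A_hat = np.array([[1.0, 0.0]])
c0 = np.array([1.0, 0.0])
c_star, lam_star = newton_kkt(c0, A_hat)
```

Without globalization this plain iteration is only a local sketch; Algorithm 1 additionally drives $\mu$ to zero and safeguards the steps.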

5.3. Numerical Tests

We solve the inverse linear problem using Algorithm 1. All numerical experiments are carried out in Matlab (R2013b) on a Lenovo computer with an Intel(R) Core(TM)2 Quad Q9550 (2.83 GHz) and 4.00 GB of RAM. The problem coefficients are generated randomly. The remaining algorithm parameters are set to $\delta=0.6$, $\sigma=0.01$, $\gamma=0.5$, and Algorithm 1 stops when the residual $\mathrm{Tol}=e(Z^k)<10^{-6}$. The results are reported in the following table, in which Iter denotes the number of iterations, Func denotes the number of function evaluations, and Res0 and Res* denote the initial and final values of the residual.
The numerical results in Table 1 demonstrate the efficiency of Algorithm 1; the convergence of the smoothing Newton method is stable and efficient.

6. Conclusions

This paper investigates a numerical framework for a class of optimization problems characterized by symmetric cone constraints and the $l_1$ norm. Using perturbation analysis theory, we reformulate the problem as a semismooth optimization problem with a complementarity constraint, and we employ the smoothing Newton method to solve the resulting equations, which exhibits global convergence under reasonable conditions. Our numerical experiments confirm the effectiveness of this approach for solving SPSCC over the nonnegative cone. The framework proposed in this paper can potentially be extended to other symmetric cone optimization problems as well.

Author Contributions

Conceptualization, C.C. and L.T.; methodology, C.C.; software, C.C. and L.T.; validation, C.C. and L.T.; formal analysis, C.C. and L.T.; investigation, C.C.; resources, C.C.; data curation, C.C.; writing—original draft preparation, C.C.; writing—review and editing, C.C., L.T.; visualization, C.C. and L.T.; supervision, L.T.; project administration, C.C.; funding acquisition, C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China under Grant 72102059, and by Hebei Natural Science Foundation under Grant G2020202001.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Candes, E.J.; Tao, T. Decoding by Linear Programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef]
  2. Outrata, J.V. Mathematical programs with equilibrium constraints: Theory and numerical methods. In Nonsmooth Mechanics of Solids; Haslinger, J., Stavroulakis, G.E., Eds.; CISM Lecture, Notes; Springer: New York, NY, USA, 2006; Volume 485, pp. 221–274. [Google Scholar]
  3. Gowda, M.S.; Sznajder, R.; Tao, J. Some P-properties for linear transformations on Euclidean Jordan algebras. Linear Algebr. Appl. 2004, 393, 203–232. [Google Scholar] [CrossRef]
  4. Faraut, J.; Koranyi, A. Analysis on Symmetric Cones; Clarendon Press: Oxford, UK, 1994. [Google Scholar]
  5. Mordukhovich, B.S. Variational Analysis and Generalized Differentiation, I: Basic Theory; Springer: Berlin, Germany, 2006. [Google Scholar]
  6. Luo, Z.Q.; Pang, J.S.; Ralph, D. Mathematical Programs with Equilibrium Constraints; Cambridge University Press: Cambridge, UK, 1996. [Google Scholar]
  7. Chinchuluun, A.; Pardalos, P.M.; Migdalas, A.; Pitsoulis, L. Pareto Optimality, Game Theory and Equilibria; Springer: Berlin, Germany, 2008. [Google Scholar]
  8. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2002. [Google Scholar]
  9. Dempe, S. Foundations of Bilevel Programming; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2002. [Google Scholar]
  10. Imanbetova, A.; Sarsenbi, A.; Seilbekov, B. Inverse Problem for a Fourth-Order Hyperbolic Equation with a Complex-Valued Coefficient. Mathematics 2023, 11, 3432. [Google Scholar] [CrossRef]
  11. Chen, C.; Mangasarian, O.L. Smoothing methods for convex inequalities and linear complementarity problems. Math. Program. 1995, 71, 51–69. [Google Scholar] [CrossRef]
  12. Chen, J.S.; Tseng, P. An unconstrained smooth minimization reformulation of the second-order cone complementarity problem. Math. Program. Ser. B 2005, 104, 293–327. [Google Scholar] [CrossRef]
  13. Chen, X.; Qi, H.D. Cartesian P-property and its applications to the semidefinite linear complementarity problem. Math. Program. 2006, 106, 177–201. [Google Scholar] [CrossRef]
  14. Chen, X.; Qi, H.D.; Tseng, P. Analysis of nonsmooth symmetric matrix functions with applications to semidefinite complementarity problems. SIAM J. Optim. 2003, 13, 960–985. [Google Scholar] [CrossRef]
  15. Chen, X.D.; Sun, D.; Sun, J. Complementarity functions and numerical experiments on some smoothing Newton methods for second-order-cone complementarity problems. Comput. Optim. Appl. 2003, 25, 39–56. [Google Scholar] [CrossRef]
  16. Fukushima, M.; Luo, Z.Q.; Tseng, P. Smoothing functions for second-order cone complementarity problems. SIAM J. Optim. 2001, 12, 436–460. [Google Scholar] [CrossRef]
  17. Hayashi, S.; Yamashita, N.; Fukushima, M. A combined smoothing and regularization method for monotone second-order cone complementarity problems. SIAM J. Optim. 2005, 15, 593–615. [Google Scholar] [CrossRef]
  18. Huang, Z.H.; Han, J. Non-interior continuation method for solving the monotone semidefinite complementarity problem. Appl. Math. Optim. 2003, 47, 195–211. [Google Scholar] [CrossRef]
  19. Xia, Y.; Peng, J.M. A continuation method for the linear second-order cone complementarity problem. In Computational Science and Its Applications - ICCSA 2005, Singapore, 9–12 May 2005; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2005; Volume 3483, pp. 290–300. [Google Scholar]
  20. Kong, L.C.; Sun, J.; Xiu, N.H. A regularized smoothing Newton method for symmetric cone complementarity problems. SIAM J. Optim. 2008, 9, 1028–1047. [Google Scholar] [CrossRef]
  21. Liu, Y.; Zhang, L.; Wang, Y. Some properties of a class of merit functions for symmetric cone complementarity problems. Asia-Pac. J. Oper. Res. 2006, 23, 473–496. [Google Scholar] [CrossRef]
  22. Yan, T.; Fukushima, M. Smoothing method for mathematical programs with symmetric cone complementarity constraints. Optimization 2011, 60, 113–128. [Google Scholar] [CrossRef]
  23. Cruz, J.B.; Ferreira, O.P.; Németh, S.Z.; Prudente, L.D.F. A semi-smooth Newton method for projection equations and linear complementarity problems with respect to the second-order cone. arXiv 2016, arXiv:1605.09463. [Google Scholar]
  24. Hao-Dong, Y. Smooth Newton Method for Nonlinear Complementarity Problems. Math. Pract. Theory 2016, 46, 9. [Google Scholar]
  25. Bui, Q.M.; Elman, H.C. Semi-smooth Newton methods for nonlinear complementarity formulation of compositional two-phase flow in porous media. J. Comput. Phys. 2020, 407, 109163. [Google Scholar] [CrossRef]
  26. Engel, S.; Kunisch, K. Optimal control of the linear wave equation by time-depending BV-controls: A semi-smooth Newton approach. arXiv 2018, arXiv:1808.10158. [Google Scholar] [CrossRef]
  27. Sun, H. An efficient augmented Lagrangian method with semismooth Newton solver for total generalized variation. Inverse Probl. Imaging 2023, 17, 381–405. [Google Scholar] [CrossRef]
  28. Guo, P.; Iqbal, J.; Ghufran, S.M.; Arif, M.; Alhefthi, R.K.; Shi, L. A New Efficient Method for Absolute Value Equations. Mathematics 2023, 11, 3356. [Google Scholar] [CrossRef]
  29. Dentcheva, D.; Ruszczynski, A.; Shapiro, A. Lectures on Stochastic Programming; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2008. [Google Scholar]
  30. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis; Springer-Verlag: Berlin, Germany, 1998. [Google Scholar]
  31. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000. [Google Scholar]
  32. Qi, L.; Sun, D.; Zhou, G. A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Math. Program. 2000, 87, 1–35. [Google Scholar] [CrossRef]
Table 1. Numerical tests.

| r | n | CPU time | Iter | Func | Res0 | Res* |
|------|------|------------|------|------|--------|---------------|
| 5 | 10 | 0.01 s | 3 | 14 | 58.63 | 5.23 × 10^-7 |
| 5 | 20 | 0.07 s | 28 | 40 | 12 | 9.48 × 10^-7 |
| 10 | 20 | 0.1 s | 25 | 20 | 20.2 | 8.56 × 10^-7 |
| 20 | 50 | 0.3 s | 36 | 58 | 68.7 | 4.38 × 10^-7 |
| 50 | 100 | 1.1 s | 35 | 40 | 224 | 9.40 × 10^-8 |
| 100 | 500 | 90.9 s | 62 | 144 | 1380 | 4.96 × 10^-7 |
| 200 | 500 | 106.9 s | 50 | 79 | 2110 | 9.34 × 10^-8 |
| 200 | 1000 | 647.3 s | 57 | 89 | 3820 | 3.35 × 10^-7 |
| 500 | 1000 | 1373.5 s | 75 | 168 | 6840 | 2.40 × 10^-7 |
| 1000 | 2000 | 14,056.4 s | 102 | 233 | 19,300 | 1.59 × 10^-8 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
