Article

Inexact Restoration Methods for Semivectorial Bilevel Programming Problem on Riemannian Manifolds

School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2022, 11(12), 696; https://doi.org/10.3390/axioms11120696
Submission received: 30 September 2022 / Revised: 1 December 2022 / Accepted: 2 December 2022 / Published: 5 December 2022
(This article belongs to the Special Issue 10th Anniversary of Axioms: Mathematical Analysis)

Abstract

For a better understanding of bilevel programming on Riemannian manifolds, a semivectorial bilevel programming scheme is proposed in this paper. The semivectorial bilevel program is first transformed into a single-level problem by using the Karush–Kuhn–Tucker (KKT) conditions of the lower-level problem, which is convex and satisfies the Slater constraint qualification. The single-level problem is then divided into two phases, restoration and minimization, on the basis of which an Inexact Restoration (IR) algorithm is developed. Under certain conditions, the well-definiteness and convergence of the algorithm are analyzed.

1. Introduction

The bilevel optimization problem on Euclidean spaces has been shown to be NP-hard, and even verifying local optimality of a feasible solution is NP-hard in general. Bilevel optimization problems are often nonconvex, which makes the computation of an optimal solution a challenging task. It is therefore natural to consider bilevel optimization problems on Riemannian manifolds. Indeed, studying optimization problems on Riemannian manifolds has many advantages: some constrained optimization problems on Euclidean spaces can be seen as unconstrained ones from the Riemannian geometry viewpoint, and some nonconvex optimization problems in the Euclidean setting become convex once an appropriate Riemannian metric is introduced; see, for instance, [1,2]. The aim of this paper is to study the bilevel optimization problem on Riemannian manifolds.
In order to study the bilevel optimization problem on Riemannian manifolds, it is helpful to first recall how such problems are solved in Euclidean spaces. A common approach is to replace the lower-level problem by its (under certain assumptions, necessary and sufficient) KKT optimality conditions. In a recent article [3], the authors presented the KKT reformulation of bilevel optimization problems on Riemannian manifolds and showed that global optimal solutions of the KKT reformulation correspond to global optimal solutions of the bilevel problem on Riemannian manifolds, provided the convex lower-level problem satisfies Slater's constraint qualification. On this basis, we consider a semivectorial bilevel optimization problem on Riemannian manifolds, i.e., one with a multiobjective lower-level problem. The Inexact Restoration (IR) algorithm [4,5] was introduced to solve constrained optimization problems; hence, once the semivectorial bilevel problem is transformed into a single-level problem, it too can be solved by the IR algorithm as a constrained optimization problem.
For the convenience of the readers, let us review the IR algorithm on Euclidean spaces firstly. Each iteration of the IR algorithm consists of two phases: restoration and minimization. Consider the following nonlinear programming:
$$\min f(x) \quad \text{s.t.} \quad C(x) = 0, \quad x \in \Omega,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $C : \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable functions and the set $\Omega \subset \mathbb{R}^n$ is closed and convex. The algorithm generates iterates that are feasible with respect to $\Omega$: $x^k \in \Omega$ for all $k = 0, 1, 2, \dots$
In the restoration step, which is executed once per iteration, an intermediate point $y^k \in \Omega$ is found such that the infeasibility at $y^k$ is a fraction of the infeasibility at $x^k$. Immediately after restoration, we construct an approximation $\pi_k$ of the feasible region using the information available at $y^k$. In the minimization step, we compute a trial point $z^{k,i} \in \pi_k$ such that $f(z^{k,i}) \ll f(y^k)$, where the symbol $\ll$ means "sufficiently smaller than", and $\|z^{k,i} - y^k\| \le \delta_{k,i}$, where $\delta_{k,i}$ is a trust-region radius. The trial point $z^{k,i}$ is accepted as the new iterate if the value of a nonsmooth (exact penalty) merit function at $z^{k,i}$ is sufficiently smaller than its value at $x^k$. If $z^{k,i}$ is not acceptable, the trust-region radius is reduced.
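To fix ideas, the following is a minimal Euclidean sketch of one such iteration for $\min f(x)$ s.t. $C(x) = 0$. The restoration and minimization sub-solvers (one Gauss–Newton step, one projected-gradient step), the merit-function variant, and all parameter values are illustrative placeholders of ours, not the choices analyzed in the IR literature.

```python
import numpy as np

def ir_step(f, grad_f, C, jac_C, x, theta=0.5, delta=1.0):
    # Restoration: reduce infeasibility with one Gauss-Newton step on |C(x)|^2.
    y = x - np.linalg.pinv(jac_C(x)) @ C(x)
    # Linearized feasible region pi_k at y: {z : C'(y)(z - y) = 0}.
    Jy = jac_C(y)
    P = np.eye(len(x)) - np.linalg.pinv(Jy) @ Jy   # projector onto null(C'(y))
    merit = lambda z: theta * f(z) + (1 - theta) * np.linalg.norm(C(z))
    for _ in range(30):                            # trust-region loop
        z = y + P @ (-delta * grad_f(y))           # tangent descent trial point
        ared = merit(x) - merit(z)                 # actual reduction
        pred = theta * (f(x) - f(z)) + (1 - theta) * (
            np.linalg.norm(C(x)) - np.linalg.norm(C(y)))
        if pred > 0 and ared >= 0.1 * pred:        # sufficient-decrease test
            return z
        delta *= 0.5                               # shrink the trust region
    return y

# Toy usage: minimize x_1 on the unit circle; iterates drift toward (-1, 0).
f = lambda x: x[0]
grad_f = lambda x: np.array([1.0, 0.0])
C = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0])
jac_C = lambda x: np.array([[2 * x[0], 2 * x[1]]])
x = np.array([0.5, 0.5])
for _ in range(50):
    x = ir_step(f, grad_f, C, jac_C, x)
print(x)
```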
The IR algorithm is related to classical feasible methods for nonlinear programming, such as the generalized reduced gradient (GRG) method and the family of sequential gradient restoration algorithms. There are several studies on the numerical behavior of the IR algorithm. For example, the method was applied to general constrained problems in [6] with good results. In addition, an IR algorithm using a regularization strategy was proposed in [7], which effectively solved derivative-free optimization problems. IR algorithms are especially useful when there is some natural way to restore feasibility. One of the most successful applications of the IR algorithm is electronic structure calculation, as shown in [8]. Moreover, the IR algorithm has also been successfully applied to optimization problems with box constraints in [9] and to problems with multiobjective constraints under weighted-sum scalarization in [10]. For more applications, see [11,12].
Since the IR algorithm is so important in applications, many researchers have tried to improve it from different angles. The restoration phase improves feasibility, while the minimization step improves optimality on a linear tangent approximation of the constraints. When a sufficient descent criterion does not hold, the trial point is modified in such a way that, eventually, acceptance occurs at a point that may be close to the solution of the restoration (first) phase. The acceptance criterion may use merit functions [4,5] or filters [13]. The minimization step consists of an inexact (approximate) minimization of f with linear constraints; in this view, the restoration step is likewise an inexact minimization of infeasibility with linear constraints. Therefore, the available algorithms for (large-scale) linearly constrained minimization can be fully exploited; see the published articles [14,15,16]. Furthermore, IR techniques for constrained optimization were improved, extended, and analyzed in [7,17,18,19], among others.
Inspired and motivated by the research works [4,10,20,21,22,23,24,25], we introduce a kind of bilevel programming on Riemannian manifolds with a multiobjective problem in the lower level, the so-called semivectorial bilevel programming. Then, we transform the semivectorial bilevel program into a single-level program by using the KKT optimality conditions of the lower-level problem, which is convex and satisfies the Slater constraint qualification. Finally, we divide the single-level program into two stages, restoration and minimization, and give an IR algorithm for semivectorial bilevel programming. Under certain conditions, we analyze the well-definiteness and convergence of the presented algorithm.
The remainder of this paper is organized as follows: In Section 2, some basic concepts, notations, and important results of Riemannian geometry are presented. In Section 3, we propose the semivectorial bilevel programming on the Riemannian manifold and give the KKT reformulation, and then, we present an algorithm by using the IR technique for solving the semivectorial bilevel programming on Riemannian manifolds. In Section 4, its convergence properties are studied. The conclusions are given in Section 5.

2. Preliminaries

An $m$-dimensional Riemannian manifold is a pair $(M, g)$, where $M$ stands for an $m$-dimensional smooth manifold and $g$ stands for a smooth, symmetric, positive definite $(0,2)$-tensor field on $M$, called a Riemannian metric on $M$. If $(M, g)$ is a Riemannian manifold, then for any point $x \in M$, the restriction $g_x : T_xM \times T_xM \to \mathbb{R}$ is an inner product on the tangent space $T_xM$. The tangent bundle $TM$ over $M$ is $TM := \bigcup_{x \in M} T_xM$, and a vector field on $M$ is a section of the tangent bundle, that is, a mapping $X : M \to TM$ such that, for any $x \in M$, $X(x) \equiv X_x \in T_xM$.
We denote by $\langle \cdot, \cdot \rangle_x$ the scalar product on $T_xM$, with associated norm $\|\cdot\|_x$. The length of a tangent vector $v \in T_xM$ is given by $\|v\|_x = \langle v, v \rangle_x^{1/2}$. Given a piecewise smooth curve $\gamma : [a,b] \subset \mathbb{R} \to M$ joining $x$ to $y$, i.e., $\gamma(a) = x$ and $\gamma(b) = y$, its length is defined by $L(\gamma) = \int_a^b \|\dot{\gamma}(t)\|_{\gamma(t)}\, dt$, where $\dot{\gamma}$ denotes the first derivative of $\gamma$ with respect to $t$. Let $x$ and $y$ be two points in the Riemannian manifold $(M, g)$ and $\Gamma_{x,y}$ the set of all piecewise smooth curves joining $x$ and $y$. The function:
$$d : M \times M \to \mathbb{R}, \qquad d(x,y) := \inf\{L(\gamma) : \gamma \in \Gamma_{x,y}\}$$
is a distance on $M$, and the induced metric topology on $M$ coincides with the original topology of $M$ as a manifold.
Let $\nabla$ be the Levi-Civita connection associated with the Riemannian metric and $\gamma$ a smooth curve in $M$. A vector field $X$ is said to be parallel along $\gamma : [0,1] \to M$ if $\nabla_{\dot{\gamma}} X = 0$. If $\dot{\gamma}$ itself is parallel along $\gamma$ joining $x$ to $y$, that is,
$$\gamma(0) = x, \quad \gamma(1) = y \quad \text{and} \quad \nabla_{\dot{\gamma}}\dot{\gamma} = 0 \ \text{on} \ [0,1],$$
then we say that $\gamma$ is a geodesic, and in this case, $\|\dot{\gamma}\|$ is constant. When $\|\dot{\gamma}\| = 1$, $\gamma$ is said to be normalized. A geodesic joining $x$ to $y$ in $M$ is said to be minimal if its length equals $d(x,y)$.
By the Hopf–Rinow theorem, we know that, if $M$ is complete, then any pair of points in $M$ can be joined by a minimal geodesic; moreover, $(M, d)$ is a complete metric space, and bounded closed subsets are compact. For $p \in M$, set $V_p := \{v \in T_pM : \gamma_v \ \text{is defined on} \ [0,1]\}$, where $\gamma_v$ denotes the geodesic with $\gamma_v(0) = p$ and $\dot{\gamma}_v(0) = v$. The exponential mapping $\exp_p : V_p \to M$ is defined by $\exp_p(v) = \gamma_v(1)$, $v \in V_p$; when $M$ is complete, $\exp_p$ is well defined on the whole tangent space $T_pM$. Clearly, a curve $\gamma : [0,1] \to M$ joining $p$ and $q$ is a minimal geodesic if and only if there exists a vector $v \in T_pM$ such that $\|v\| = d(p,q)$ and $\gamma(t) = \exp_p(tv)$ for every $t \in [0,1]$.
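To make these notions concrete, the following minimal sketch implements the exponential mapping and the Riemannian distance in closed form on the manifold used later in Example 1 (Section 4), namely $M = \{x \in \mathbb{R}^2 : x_1, x_2 > 0\}$ with metric $g = \operatorname{diag}(x_1^{-2}, x_2^{-2})$; the function and variable names are ours.

```python
import numpy as np

def exp_map(a, v):
    # exp_a(v) = gamma_v(1), with gamma_v(t) = (a_1 e^{(v_1/a_1)t}, a_2 e^{(v_2/a_2)t}).
    return a * np.exp(v / a)

def dist(a, b):
    # Closed-form distance: d(a, b) = sqrt(ln^2(a_1/b_1) + ln^2(a_2/b_2)).
    return np.linalg.norm(np.log(a / b))

a, v = np.array([1.0, 2.0]), np.array([0.5, -1.0])
norm_v_at_a = np.linalg.norm(v / a)           # ||v||_a under g = diag(x^-2)
print(dist(a, exp_map(a, v)), norm_v_at_a)    # equal: d(p, exp_p(v)) = ||v||_p
```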
The gradient of a differentiable function $f : M \to \mathbb{R}$ with respect to the Riemannian metric $g$ is the vector field $\operatorname{grad} f$ defined by $g(\operatorname{grad} f, X) = df(X)$ for all $X \in TM$, where $df$ denotes the differential of the function $f$.
Around any point $p \in M$, one can choose a normal coordinate system $(x_1, \dots, x_m)$, generated by an orthonormal basis of $T_pM$ through the exponential mapping. In this normal coordinate system, the geodesics through $p$ are represented by lines passing through the origin. Moreover, the matrix $(g_{ij})$ associated with the bilinear form $g$ at the point $p$ in this orthonormal basis reduces to the identity matrix, and the Christoffel symbols vanish. Thus, for any smooth function $f : M \to \mathbb{R}$, in normal coordinates around $p$, we obtain
$$\operatorname{grad} f(p) = \sum_i \frac{\partial f}{\partial x_i}(p)\, \frac{\partial}{\partial x_i}.$$
Now, consider a smooth function $f : M \to \mathbb{R}$ and the real-valued function $T_pM \ni v \mapsto f_p(v) := f(\exp_p v)$ defined around $0$ in $T_pM$.
It is easy to see that
$$\frac{\partial f_p}{\partial x_i}(0) = \frac{\partial f}{\partial x_i}(p).$$
The Taylor–Young formula (for Euclidean spaces) applied to $f_p$ around the origin can be written using matrices as
$$f_p(v) = f_p(0) + J f_p(0)\, v + \frac{1}{2} v^T \operatorname{Hess} f_p(0)\, v + o(\|v\|^2),$$
where
$$v = [v_1 \ \cdots \ v_m]^T, \quad J f_p(0) = \Big[\frac{\partial f}{\partial x_1}(p) \ \cdots \ \frac{\partial f}{\partial x_m}(p)\Big], \quad \operatorname{Hess} f_p(0) = \Big[\frac{\partial^2 f}{\partial x_i \partial x_j}(p)\Big]_{i,j}, \quad v^T \operatorname{Hess} f_p(0)\, v = \operatorname{Hess}_p f(v,v).$$
In other words, we have the following Taylor–Young expansion for f around p:
$$f(\exp_p v) = f(p) + g_p(\operatorname{grad} f, v) + \frac{1}{2} \operatorname{Hess}_p f(v,v) + o(\|v\|_p^2),$$
which holds in any coordinate system.
The set $A \subset M$ is said to be convex if it contains every geodesic segment $\gamma$ whose end points belong to $A$; that is, $\gamma((1-t)a + tb) \in A$ whenever $x = \gamma(a)$ and $y = \gamma(b)$ are in $A$ and $t \in [0,1]$. A function $f : M \to \mathbb{R}$ is said to be convex if its restriction to any geodesic curve $\gamma : [a,b] \to M$ is convex in the classical sense, i.e., the one-real-variable function $f \circ \gamma : [a,b] \to \mathbb{R}$ is convex. Let $P_A$ denote the projection onto $A \subset M$; that is, for each $x \in M$,
$$P_A x = \big\{\bar{x} \in A : d(x, \bar{x}) = \inf_{z \in A} d(x,z)\big\}.$$
For more details and complete information on the fundamentals in Riemannian geometry, see [1,26,27,28].

3. Inexact Restoration Algorithm

We study an optimistic bilevel programming problem on an $m$-dimensional Riemannian manifold $(M, g)$ in which the lower-level problem is a multiobjective problem, the so-called semivectorial bilevel programming. The problem is formulated below:
$$\min F(x) \quad \text{s.t.} \quad x \in \operatorname{Sol(MOP)}, \tag{3}$$
where $F : M \to \mathbb{R}$ and $\operatorname{Sol(MOP)}$ is the efficient solution set of the following multiobjective problem (MOP):
$$\min \{f_1(x), \dots, f_p(x)\} \quad \text{s.t.} \quad h(x) = 0, \quad x \in M, \tag{4}$$
where $f = (f_1, \dots, f_p) : M \to \mathbb{R}^p$, $I := \{1, \dots, p\}$, $h : M \to \mathbb{R}^n$, and $D = \{x \in M : h(x) = 0\}$ denotes the feasible set of the MOP.
Definition 1.
Let $f : M \to \mathbb{R}^p$ be a vectorial function on a Riemannian manifold $M$. Then, $f$ is said to be convex on $M$ if, for every $x, y \in M$ and every geodesic segment $\gamma : [0,1] \to M$ joining $x$ to $y$, i.e., $\gamma(0) = x$ and $\gamma(1) = y$, it holds (componentwise) that
$$f(\gamma(t)) \le (1-t) f(x) + t f(y), \quad \forall t \in [0,1].$$
The above definition is a natural extension of the definition of convexity in Euclidean space to the Riemannian context; see [29].
Definition 2. 
A point $x \in M$ is said to be a Pareto critical point of $f$ on the Riemannian manifold $M$ if, for any $v \in T_xM$, there is an index $i \in I$ such that
$$\langle \operatorname{grad} f_i(x), v \rangle \ge 0.$$
Definition 3.
(a) A point $x^* \in M$ is a Pareto-optimal point of $f$ on the Riemannian manifold $M$ if there is no $x \in M$ with $f(x) \le f(x^*)$ and $f(x) \ne f(x^*)$. (b) A point $x^* \in M$ is a weak Pareto-optimal point of $f$ on the Riemannian manifold $M$ if there is no $x \in M$ with $f(x) < f(x^*)$ (both inequalities understood componentwise).
We know that criticality is a necessary, but not a sufficient condition for optimality. Under the convexity of the vectorial function f, the following proposition shows that criticality is equivalent to weak optimality.
Proposition 1
([29]). Let $f : M \to \mathbb{R}^p$ be a convex function given by $f = (f_1, \dots, f_p)$. A point $x \in M$ is a Pareto critical point of the function $f$ if and only if it is a weak Pareto-optimal point of the function $f$.
We assume that the functions $f = (f_1, \dots, f_p) : M \to \mathbb{R}^p$ and $h : M \to \mathbb{R}^n$ are twice continuously differentiable and consider the weighted-sum scalarization of the MOP, as follows.
Let $\omega_i \ge 0$, $i = 1, \dots, p$, be such that $\sum_{i=1}^p \omega_i = 1$:
$$\min_x \sum_{i=1}^p \omega_i f_i(x) \quad \text{s.t.} \quad h(x) = 0, \quad x \in M. \tag{5}$$
Note that, for $\omega_i \ge 0$, $i = 1, \dots, p$, with $\sum_{i=1}^p \omega_i = 1$, the weak Pareto-optimal solution set of Problem (4) coincides with the union, over all such weights, of the optimal solution sets of Problem (5). Meanwhile, if each $f_i : M \to \mathbb{R}$, $i = 1, \dots, p$, is convex on the Riemannian manifold, then the function $\sum_{i=1}^p \omega_i f_i(x)$ is also convex. Thus, the bilevel programming (3)–(4) can be transformed into the following problem:
$$\min_{x,\omega} F(x) \quad \text{s.t.} \quad \sum_{i=1}^p \omega_i = 1, \quad \omega_i \ge 0, \ i \in I, \quad x \in \arg\min\Big\{\sum_{i=1}^p \omega_i f_i(x) : h(x) = 0, \ x \in M\Big\}. \tag{6}$$
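The effect of the weights can be seen on a small Euclidean toy problem: sweeping $\omega$ in the scalarized problem (5) traces out the weak Pareto set of the lower-level objectives. The quadratics below are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
for w1 in np.linspace(0.0, 1.0, 5):
    w2 = 1.0 - w1
    # For these quadratics the scalarized minimizer has a closed form:
    # d/dx [w1*f1 + w2*f2] = 2*w1*(x-1) + 2*w2*(x+1) = 0  =>  x = w1 - w2.
    x = w1 - w2
    print(f"w = ({w1:.2f}, {w2:.2f})  x* = {x:+.2f}  f = ({f1(x):.2f}, {f2(x):.2f})")
```

As the weights vary, the minimizer sweeps the segment $[-1, 1]$, which is exactly the Pareto set of $\{f_1, f_2\}$ in this toy case.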
A strategy to solve the bilevel problem (6) on the Riemannian manifolds is to replace the lower-level problem with the KKT conditions. When the lower-level problem is convex and satisfies the Slater constraint qualification, the global optimal solutions of the KKT reformulation correspond to the global optimal solutions of the bilevel problem on the Riemannian manifolds. See Theorems 4.1 and 4.2 in [3].
In the following, we give the KKT reformulation of the semivectorial bilevel programming on Riemannian manifolds:
$$\min_{x,\omega} F(x) \quad \text{s.t.} \quad \omega \in W, \quad \sum_{i=1}^p \omega_i \operatorname{grad}_x f_i(x) + \operatorname{grad}_x h(x)\, \mu = 0, \quad h(x) = 0, \quad x \in M, \tag{7}$$
where
$$W = \Big\{\omega \in \mathbb{R}^p : \sum_{i=1}^p \omega_i = 1, \ \omega_i \ge 0, \ i = 1, \dots, p\Big\}$$
is a convex and compact set, $\mu \in \mathbb{R}^n$, and $M$ is a complete $m$-dimensional Riemannian manifold.
We adopt an IR method that treats the optimization problem in two stages, first pursuing feasibility and then optimality, while keeping a certain control over the feasibility already achieved. Consequently, the approach exploits the inherent minimization structure of the problem, especially in the feasibility phase, and can thereby obtain better solutions. Moreover, in the feasibility phase of the IR strategy, the user is free to choose any method, as long as the restored iterate satisfies some mild assumptions [4,5].
For simplicity, we introduce the following notations:
$$C(x, \omega, \mu) = \begin{pmatrix} \sum_{i=1}^p \omega_i \operatorname{grad}_x f_i(x) + \operatorname{grad}_x h(x)\, \mu \\ h(x) \end{pmatrix} \in \mathbb{R}^{m+n}$$
and
$$L(x, \omega, \mu, \lambda) = F(x) + C(x, \omega, \mu)^T \lambda, \quad \lambda \in \mathbb{R}^{m+n}.$$
We write briefly $s = (x, \omega, \mu) \in M \times W \times \mathbb{R}^n$ and give the Jacobian of $C$ as follows:
$$C'(s) = \begin{pmatrix} \sum_{i=1}^p \omega_i \operatorname{Hess}_x f_i + \sum_{j=1}^n \mu_j \operatorname{Hess}_x h_j & \operatorname{grad}_x f_1 \ \cdots \ \operatorname{grad}_x f_p & \operatorname{grad}_x h \\ \operatorname{grad}_x h^T & 0 & 0 \end{pmatrix}.$$
Thus, the semivectorial bilevel programming can be reduced to:
$$\min F(s) \quad \text{s.t.} \quad C(s) = 0, \quad s \in M \times W \times \mathbb{R}^n.$$
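In coordinates, the residual $C(s)$ stacks the stationarity and feasibility blocks of the lower-level KKT system. A minimal sketch, written with Euclidean gradients for illustration (on a manifold, the callables would return Riemannian gradients); all names and the toy data are ours:

```python
import numpy as np

def kkt_residual(x, omega, mu, f_grads, grad_h, h):
    # Stationarity block: sum_i omega_i * grad f_i(x) + grad h(x) @ mu.
    stat = sum(w * g(x) for w, g in zip(omega, f_grads)) + grad_h(x) @ mu
    # Feasibility block: h(x).  The result lives in R^(m+n).
    return np.concatenate([stat, np.atleast_1d(h(x))])

def lagrangian(F, lam, x, omega, mu, f_grads, grad_h, h):
    # L(s, lambda) = F(x) + C(s)^T lambda.
    return F(x) + kkt_residual(x, omega, mu, f_grads, grad_h, h) @ lam

# Toy usage (m = 2 variables, p = 2 objectives, n = 1 constraint):
f_grads = [lambda x: 2 * (x - 1.0), lambda x: 2 * (x + 1.0)]
h = lambda x: x @ x - 1.0
grad_h = lambda x: (2 * x).reshape(-1, 1)
x, omega, mu = np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0])
print(kkt_residual(x, omega, mu, f_grads, grad_h, h))   # -> [2. 0. 0.]
```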
Before giving a rigorous description of the algorithm, let us start with an overview of each step.
Restoration step: We apply any globally convergent optimization algorithm to solve the lower-level minimization problem determined by the weights $\omega^k$. Once an approximate minimizer $\bar{x}$ and a corresponding vector of estimated Lagrange multipliers $\bar{\mu}$ are obtained, we set $z^k = (\bar{x}, \omega^k, \bar{\mu})$ and compute the current set $\pi_k$ and the direction $d_{\tan}^k$.
Approximate linearized feasible region: The set $\pi_k$ is a linear approximation of the region described by the KKT system at $\bar{x}$, containing $z^k = (\bar{x}, \omega^k, \bar{\mu})$. This auxiliary region is given by
$$\pi_k = \big\{s \in M \times W \times \mathbb{R}^n : \langle C'(z^k), \dot{\gamma}_{s, z^k}(0) \rangle = 0\big\},$$
where $\gamma_{s, z^k}$ denotes the geodesic joining $z^k$ to $s$.
Descent direction: Using the projection on Riemannian manifolds, the projection defined on π k is represented as follows:
$$P_{\pi_k}(z^k) = P_k\big(\exp_{z^k}(-\eta \operatorname{grad}_s L(z^k, \lambda^k))\big),$$
where $\eta > 0$ is an arbitrary scaling parameter independent of $k$. It turns out that
$$d_{\tan}^k = P_k\big(\exp_{z^k}(-\eta \operatorname{grad}_s L(z^k, \lambda^k))\big) - z^k,$$
which is a feasible descent direction on $\pi_k$.
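In the Euclidean specialization, $\exp_{z^k}$ reduces to a translation and $P_k$ to the orthogonal projection onto the null space of $C'(z^k)$, so the direction can be sketched as follows (the function names are ours):

```python
import numpy as np

def tangent_direction(z, grad_L, jac_C, eta=1.0):
    # Euclidean stand-in for exp_z(-eta * grad L): a plain gradient step.
    w = z - eta * grad_L(z)
    # Orthogonal projection of the step onto null(C'(z)) via the pseudoinverse,
    # i.e., onto the linearized feasible region pi_k through z.
    J = jac_C(z)
    step = w - z
    return step - np.linalg.pinv(J) @ (J @ step)   # feasible descent direction
```

Since $-\eta \operatorname{grad} L$ is a descent direction for $L$ and the projection removes only its component normal to the linearized constraints, the result remains a descent direction tangent to $\pi_k$.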
Minimization step: The objective of the minimization step is to obtain $v^{k,i} \in \pi_k$ such that $L(v^{k,i}, \lambda^k) < L(z^k, \lambda^k)$ and $v^{k,i} \in B_{k,i} = \{v : d(v, z^k) \le \delta_{k,i}\}$, where $\delta_{k,i}$ is a trust-region radius. The first trial point at each iteration is obtained using the trust-region radius $\delta_{k,0}$. Successive trust-region radii are tried until a point $v^{k,i}$ is found such that the merit function at this point is sufficiently smaller than the merit function at $s^k$.
Merit function and penalty parameter: We decided to use a variant of the sharp Lagrangian merit function, given by
$$\Psi(s, \lambda, \theta) = \theta L(s, \lambda) + (1 - \theta) |C(s)|,$$
where $\theta \in (0,1]$ is a penalty parameter used to give different weights to the objective function and to feasibility. The choice of the parameter $\theta$ at each iteration depends on practical and theoretical considerations. Roughly speaking, we wish the merit function at the new point to be smaller than the merit function at the current point $s^k$.
That is, we want $\operatorname{Ared}_{k,i} > 0$, where $\operatorname{Ared}_{k,i}$ is the actual reduction of the merit function, defined by
$$\operatorname{Ared}_{k,i} = \Psi(s^k, \lambda^k, \theta_{k,i}) - \Psi(v^{k,i}, \lambda^{k,i}, \theta_{k,i}).$$
So,
$$\operatorname{Ared}_{k,i} = \theta_{k,i}\big(L(s^k, \lambda^k) - L(v^{k,i}, \lambda^{k,i})\big) + (1 - \theta_{k,i})\big(|C(s^k)| - |C(v^{k,i})|\big).$$
However, a mere reduction of the merit function is not sufficient to guarantee convergence. In fact, we need a sufficient reduction of the merit function, defined by the satisfaction of the following test:
$$\operatorname{Ared}_{k,i} \ge 0.1 \operatorname{Pred}_{k,i},$$
where $\operatorname{Pred}_{k,i}$ is a positive predicted reduction of the merit function $\Psi(s, \lambda, \theta)$ between $s^k$ and $v^{k,i}$. It is defined by
$$\operatorname{Pred}_{k,i} = \theta_{k,i}\big(L(s^k, \lambda^k) - L(v^{k,i}, \lambda^k) - C(z^k)^T(\lambda^{k,i} - \lambda^k)\big) + (1 - \theta_{k,i})\big(|C(s^k)| - |C(z^k)|\big).$$
The quantity $\operatorname{Pred}_{k,i}$ defined above can be nonpositive, depending on the value of the penalty parameter. Fortunately, if $\theta_{k,i}$ is small enough, $\operatorname{Pred}_{k,i}$ is arbitrarily close to $|C(s^k)| - |C(z^k)|$, which is necessarily nonnegative. Therefore, we will always be able to choose $\theta_{k,i} \in (0,1]$ such that
$$\operatorname{Pred}_{k,i} \ge \frac{1}{2}\big(|C(s^k)| - |C(z^k)|\big). \tag{12}$$
When the criterion $\operatorname{Ared}_{k,i} \ge 0.1 \operatorname{Pred}_{k,i}$ is satisfied, we accept $v^{k,i}$ as the new iterate. Otherwise, we reduce the trust-region radius.
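Since $\operatorname{Pred}_{k,i}(\theta)$ is affine in $\theta$, the largest admissible penalty parameter can be computed in closed form. A minimal sketch, where $A$ denotes the bracket multiplying $\theta$ and $B = |C(s^k)| - |C(z^k)| \ge 0$ (these symbol names are ours):

```python
def choose_theta(theta_prev, A, B):
    # Largest theta in [0, theta_prev] with theta*A + (1 - theta)*B >= B/2.
    # The map theta -> B + theta*(A - B) is affine in theta, so:
    if A >= B:
        return theta_prev        # the condition holds for every admissible theta
    return min(theta_prev, B / (2.0 * (B - A)))   # clip at the boundary value
```

Note that $\theta = 0$ always satisfies the condition because $B \ge 0$ (by assumption $H_3$ below), so the rule is well defined.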
To establish IR methods for semivectorial bilevel programming on Riemannian manifolds, we adapt the IR method presented in [4]. In the presented algorithm, the parameters $\eta > 0$, $N > 0$, $\theta_1 \in (0,1)$, $\delta_{\min} > 0$, $\tau_1 > 0$, and $\tau_2 > 0$ are given. The initial approximations $s^0 \in M \times W \times \mathbb{R}^n$ and $\lambda^0 \in \mathbb{R}^{m+n}$, as well as a sequence $\{\omega_k\}$ such that $\sum_{k=0}^{+\infty} \omega_k < +\infty$, are also given.

4. Convergence Results

Using the methods for studying the convergence of IR algorithms in Euclidean spaces [20,22], the convergence results of the IR algorithm for semivectorial bilevel programming on Riemannian manifolds are given under the following assumptions. From now on, we assume that the semivectorial bilevel optimization problem on Riemannian manifolds satisfies assumptions $H_1$–$H_3$ stated below:
$H_1$
There exists $L_1 > 0$ such that, for all $(x, \omega), (\bar{x}, \bar{\omega}) \in M \times W$ and $\mu, \bar{\mu} \in \mathbb{R}^n$,
$$|C(x, \omega, \mu) - C(\bar{x}, \bar{\omega}, \bar{\mu})| \le L_1\, d\big((x, \omega, \mu), (\bar{x}, \bar{\omega}, \bar{\mu})\big).$$
$H_2$
There exists $L_2 > 0$ such that, for all $x, \bar{x} \in M$,
$$|\operatorname{grad}_x F(x) - \operatorname{grad}_x F(\bar{x})| \le L_2\, d(x, \bar{x}).$$
$H_3$
There exists $r \in [0,1)$, independent of $k$, such that the point $z^k = (\bar{x}, \omega^k, \bar{\mu})$ obtained at the restoration phase satisfies
$$|C(z^k)| \le r\, |C(s^k)|,$$
where $s^k = (x^k, \omega^k, \mu^k)$. Moreover, if $C(s^k) = 0$, then $z^k = s^k$.
Theorem 1 (Well-definiteness).
Under assumptions $H_1$–$H_3$, the IR Algorithm 1 for bilevel programming is well defined.
Algorithm 1: Inexact Restoration algorithm
  Step 1. Define $\theta_k^{\min} = \min\{1, \theta_{k-1}, \dots, \theta_1\}$, $\theta_k^{\text{large}} = \min\{1, \theta_k^{\min} + \omega_k\}$, and $\theta_{k,-1} = \theta_k^{\text{large}}$.
  Step 2 (Restoration phase). Find an approximate minimizer $\bar{x}$ and multipliers $\bar{\mu} \in \mathbb{R}^n$ for the problem:
    $$\min_x \sum_{i=1}^p \omega_i^k f_i(x) \quad \text{s.t.} \quad h(x) = 0, \quad x \in M,$$
    and define $z^k = (\bar{x}, \omega^k, \bar{\mu})$.
  Step 3 (Direction). Compute
    $$d_{\tan}^k = P_k\big(\exp_{z^k}(-\eta \operatorname{grad}_s L(z^k, \lambda^k))\big) - z^k,$$
    where $P_k$ is the projection onto
    $$\pi_k = \big\{s \in M \times W \times \mathbb{R}^n : \langle C'(z^k), \dot{\gamma}_{s,z^k}(0) \rangle = 0\big\},$$
    and $P_k(\exp_{z^k}(-\eta \operatorname{grad}_s L(z^k, \lambda^k)))$ is a solution of the following problem:
    $$\min_{y \in M \times W \times \mathbb{R}^n} \frac{1}{2}\big\|y - \exp_{z^k}(-\eta \operatorname{grad}_s L(z^k, \lambda^k))\big\|^2 \quad \text{s.t.} \quad \langle C'(z^k), \dot{\gamma}_{y,z^k}(0) \rangle = 0.$$
    If $z^k = s^k$ and $d_{\tan}^k = 0$, then stop and return $x^k$ as a solution of Problem (7). Otherwise, set $i \leftarrow 0$ and choose $\delta_{k,0} \ge \delta_{\min}$.
  Step 4 (Minimization phase). If $d_{\tan}^k = 0$, take $v^{k,i} = z^k$. Otherwise, take $t_{\text{break}}^{k,i} = \min\big\{1, \delta_{k,i}/\|d_{\tan}^k\|\big\}$ and find $v^{k,i} \in \pi_k$ such that, for some $0 < t < t_{\text{break}}^{k,i}$, we have
    $$L(v^{k,i}, \lambda^k) \le \max\big\{L(z^k + t\, d_{\tan}^k, \lambda^k),\ L(z^k, \lambda^k) - \tau_1 \delta_{k,i},\ L(z^k, \lambda^k) - \tau_2\big\}$$
    and $d(v^{k,i}, z^k) \le \delta_{k,i}$.
  Step 5. If $d_{\tan}^k = 0$, define $\lambda^{k,i} = \lambda^k$. Otherwise, take $\lambda^{k,i} \in \mathbb{R}^{m+n}$ such that $|\lambda^{k,i}| \le N$.
  Step 6. For all $\theta \in [0,1]$, define
    $$\operatorname{Pred}_{k,i}(\theta) = \theta\big(L(s^k, \lambda^k) - L(v^{k,i}, \lambda^k) - C(z^k)^T(\lambda^{k,i} - \lambda^k)\big) + (1 - \theta)\big(|C(s^k)| - |C(z^k)|\big).$$
    Take $\theta_{k,i}$ as the maximum $\theta \in [0, \theta_{k,i-1}]$ that satisfies
    $$\operatorname{Pred}_{k,i}(\theta) \ge \frac{1}{2}\big(|C(s^k)| - |C(z^k)|\big),$$
    and define $\operatorname{Pred}_{k,i} = \operatorname{Pred}_{k,i}(\theta_{k,i})$.
  Step 7. Compute
    $$\operatorname{Ared}_{k,i} = \theta_{k,i}\big(L(s^k, \lambda^k) - L(v^{k,i}, \lambda^{k,i})\big) + (1 - \theta_{k,i})\big(|C(s^k)| - |C(v^{k,i})|\big).$$
    If
    $$\operatorname{Ared}_{k,i} \ge 0.1 \operatorname{Pred}_{k,i},$$
    then set
    $$s^{k+1} = v^{k,i}, \quad \lambda^{k+1} = \lambda^{k,i}, \quad \theta_k = \theta_{k,i}, \quad \delta_k = \delta_{k,i}, \quad \operatorname{Ared}_k = \operatorname{Ared}_{k,i}, \quad \operatorname{Pred}_k = \operatorname{Pred}_{k,i},$$
    and finish the current $k$th iteration. Otherwise, choose $\delta_{k,i+1} \in [0.1\, \delta_{k,i}, 0.9\, \delta_{k,i}]$, set $i \leftarrow i + 1$, and go to Step 4.
Proof. 
According to Step 6 and Step 7 of Algorithm 1, it can be calculated that
$$\begin{aligned} \operatorname{Ared}_{k,i} - 0.1 \operatorname{Pred}_{k,i} &= 0.9 \operatorname{Pred}_{k,i} + (1 - \theta_{k,i})\big[|C(z^k)| - |C(v^{k,i})|\big] + \theta_{k,i}\big[L(v^{k,i}, \lambda^k) - L(v^{k,i}, \lambda^{k,i}) + C(z^k)^T(\lambda^{k,i} - \lambda^k)\big] \\ &= 0.9 \operatorname{Pred}_{k,i} + (1 - \theta_{k,i})\big[|C(z^k)| - |C(v^{k,i})|\big] + \theta_{k,i}\big[C(v^{k,i})^T \lambda^k - C(v^{k,i})^T \lambda^{k,i} + C(z^k)^T(\lambda^{k,i} - \lambda^k)\big] \\ &= 0.9 \operatorname{Pred}_{k,i} + (1 - \theta_{k,i})\big[|C(z^k)| - |C(v^{k,i})|\big] + \theta_{k,i}\big[C(z^k) - C(v^{k,i})\big]^T(\lambda^{k,i} - \lambda^k). \end{aligned}$$
Through the condition (12), we have
$$\operatorname{Ared}_{k,i} - 0.1 \operatorname{Pred}_{k,i} \ge 0.45\big(|C(s^k)| - |C(z^k)|\big) + (1 - \theta_{k,i})\big[|C(z^k)| - |C(v^{k,i})|\big] + \theta_{k,i}\big[C(z^k) - C(v^{k,i})\big]^T(\lambda^{k,i} - \lambda^k).$$
Then, from assumption $H_3$,
$$\operatorname{Ared}_{k,i} - 0.1 \operatorname{Pred}_{k,i} \ge 0.45\,(1 - r)\,|C(s^k)| + (1 - \theta_{k,i})\big[|C(z^k)| - |C(v^{k,i})|\big] + \theta_{k,i}\big[C(z^k) - C(v^{k,i})\big]^T(\lambda^{k,i} - \lambda^k).$$
If $C(s^k) \ne 0$, then, due to the continuity of $C$, we have $|C(z^k)| - |C(v^{k,i})| \to 0$ as $\delta_{k,i} \to 0$. Thus, there exists a positive constant $\delta_{k,i}$ such that
$$\operatorname{Ared}_{k,i} - 0.1 \operatorname{Pred}_{k,i} \ge 0.$$
This means that the algorithm is well defined when $C(s^k) \ne 0$.
If $C(s^k) = 0$, then $s^k$ is feasible. Since the algorithm does not terminate at the $k$th iteration, we know that $d_{\tan}^k \ne 0$. Therefore, we have
$$z^k = s^k \quad \text{and} \quad C(z^k) = C(s^k) = 0.$$
Combining this with the condition (12), it follows that
$$\operatorname{Pred}_{k,i}(\theta) = \theta\big(L(s^k, \lambda^k) - L(v^{k,i}, \lambda^k)\big) \ge 0,$$
independently of $\theta$; hence, $\theta_{k,i} = \theta_{k,-1}$ for all $i$. In view of the expression for $\operatorname{Ared}_{k,i} - 0.1 \operatorname{Pred}_{k,i}$ obtained above, when $\delta_{k,i}$ is sufficiently small, we obtain
$$\operatorname{Ared}_{k,i} - 0.1 \operatorname{Pred}_{k,i} \ge 0.$$
Therefore, Algorithm 1 is well defined. □
The next theorem is an important tool for proving the convergence of Algorithm 1. We prove that the actual reduction $\operatorname{Ared}_{k,i^*}$, with $i^*$ the accepted value of $i$, achieved at each iteration necessarily tends to $0$.
Theorem 2.
Under the assumptions $H_1$–$H_3$, if Algorithm 1 generates an infinite sequence, then
$$\lim_{k \to +\infty} \operatorname{Ared}_k = 0, \qquad \lim_{k \to +\infty} |C(s^k)| = 0.$$
The same results occur when $\lambda^k = 0$ for all $k$.
Proof. 
Let us prove that $\lim_{k \to +\infty} \operatorname{Ared}_k = 0$; i.e., we need to prove
$$\lim_{k \to +\infty} \Big[\theta_k\big(L(s^k, \lambda^k) - L(s^{k+1}, \lambda^{k+1})\big) + (1 - \theta_k)\big(|C(s^k)| - |C(s^{k+1})|\big)\Big] = 0,$$
that is,
$$\lim_{k \to +\infty} \Big[\big(\theta_k L(s^k, \lambda^k) + (1 - \theta_k)|C(s^k)|\big) - \big(\theta_k L(s^{k+1}, \lambda^{k+1}) + (1 - \theta_k)|C(s^{k+1})|\big)\Big] = 0,$$
namely,
$$\lim_{k \to +\infty} \big[\Psi(s^k, \theta_k) - \Psi(s^{k+1}, \theta_k)\big] = 0,$$
where $\Psi(s^k, \theta_k) = \theta_k L(s^k, \lambda^k) + (1 - \theta_k)|C(s^k)|$.
By contradiction, suppose that there exist an infinite index set $T_1 \subset \{0, 1, 2, \dots\}$ and a constant $\zeta > 0$ such that, for any $k \in T_1$, we have
$$\Psi(s^{k+1}, \theta_k) \le \Psi(s^k, \theta_k) - \zeta.$$
Let $\Psi_k = \Psi(s^k, \theta_k)$; then
$$\begin{aligned} \Psi_{k+1} &= \theta_{k+1} L(s^{k+1}, \lambda^{k+1}) + (1 - \theta_{k+1})|C(s^{k+1})| \\ &= \big[\theta_{k+1} L(s^{k+1}, \lambda^{k+1}) + (1 - \theta_{k+1})|C(s^{k+1})|\big] - \big[\theta_k L(s^{k+1}, \lambda^{k+1}) + (1 - \theta_k)|C(s^{k+1})|\big] + \big[\theta_k L(s^{k+1}, \lambda^{k+1}) + (1 - \theta_k)|C(s^{k+1})|\big] \\ &= (\theta_{k+1} - \theta_k) L(s^{k+1}, \lambda^{k+1}) + (\theta_k - \theta_{k+1})|C(s^{k+1})| + \big[\theta_k L(s^{k+1}, \lambda^{k+1}) + (1 - \theta_k)|C(s^{k+1})|\big] \\ &= (\theta_k - \theta_{k+1})\big[|C(s^{k+1})| - L(s^{k+1}, \lambda^{k+1})\big] + \theta_k L(s^k, \lambda^k) + (1 - \theta_k)|C(s^k)| - \zeta_k. \end{aligned}$$
Equivalently,
$$\Psi_{k+1} = (\theta_k - \theta_{k+1})\big[|C(s^{k+1})| - L(s^{k+1}, \lambda^{k+1})\big] + \Psi_k - \zeta_k,$$
where $\zeta_k := \Psi(s^k, \theta_k) - \Psi(s^{k+1}, \theta_k) > 0$, and $\zeta_k \ge \zeta > 0$ for $k \in T_1$.
According to the definition of $\theta_{k,-1}$,
$$\theta_k - \theta_{k+1} + \omega_k \ge 0, \quad \forall k \ge 1.$$
There is an upper bound $c > 0$ such that
$$\big|\, |C(s^{k+1})| - L(s^{k+1}, \lambda^{k+1})\, \big| \le c.$$
Combining the preceding inequalities, it follows that, for every $j$,
$$\begin{aligned} \Psi_{j+1} &= (\theta_j - \theta_{j+1} + \omega_j)\big[|C(s^{j+1})| - L(s^{j+1}, \lambda^{j+1})\big] + \Psi_j - \zeta_j - \omega_j\big[|C(s^{j+1})| - L(s^{j+1}, \lambda^{j+1})\big] \\ &\le (\theta_j - \theta_{j+1} + \omega_j)\, c + \Psi_j - \zeta_j + \omega_j c \\ &= (\theta_j - \theta_{j+1})\, c + \Psi_j - \zeta_j + 2 \omega_j c. \end{aligned}$$
Then, for all $k \ge 1$, we have
$$\Psi_k \le \Psi_0 + (\theta_0 - \theta_k)\, c - \sum_{j=0}^{k-1} \zeta_j + \sum_{j=0}^{k-1} 2 \omega_j c \le \Psi_0 + 2c - \sum_{j=0}^{k-1} \zeta_j + \sum_{j=0}^{k-1} 2 \omega_j c.$$
Since $\sum_{j=0}^{+\infty} 2 \omega_j c$ is convergent and $\zeta_j$ is bounded away from zero on the infinite set $T_1$, this implies that $\Psi_k$ is unbounded below, which is a contradiction. Thus, we have $\lim_{k \to +\infty} \operatorname{Ared}_k = 0$. In a similar way, we can prove $\lim_{k \to +\infty} |C(s^k)| = 0$. □
Theorem 2 means that the points generated by the IR algorithm for the KKT reformulation (7) eventually converge to a feasible point. Next, we prove that $d_{\tan}^k$ cannot be bounded away from zero under the following assumption $H_4$; this means that the points generated by the IR algorithm converge to a weak Pareto solution of Problem (7):
$H_4$
There exists $\beta > 0$, independent of $k$, such that
$$d(s^k, z^k) \le \beta\, |C(s^k)|.$$
Theorem 3. 
Suppose that the assumptions $H_1$, $H_2$, $H_3$, and $H_4$ hold. If $\{s^k\}$ is an infinite sequence generated by Algorithm 1 and $\{z^k\}$ is the sequence defined at the restoration phase of Algorithm 1, then:
1. $C(s^k) \to 0$.
2. There exists a limit point $s^*$ of $\{s^k\}$.
3. Every limit point of $\{s^k\}$ is a feasible point of the KKT reformulation (7).
4. If, for all $\omega$, a global solution of the lower-level problem is found, then any limit point $(x^*, \omega^*)$ is feasible for the weighted semivectorial bilevel programming (6).
5. If $s^*$ is a limit point of $\{s^k\}$, there exists an infinite set $K \subset \mathbb{N}$ such that
$$\lim_{k \in K} s^k = \lim_{k \in K} z^k = s^*, \qquad C(s^*) = 0, \qquad \lim_{k \in K} d_{\tan}^k = 0.$$
Proof. 
The first two items follow from Theorem 2 and the assumptions $H_1$–$H_3$. Based on the conclusions of the first two items, the third and fourth items are valid. The fifth item follows from the assumption $H_4$ and the first item. □
The above conclusions establish the well-definiteness and convergence of the algorithm proposed for semivectorial bilevel programming on Riemannian manifolds. Among the assumptions put forward in this paper, $H_3$ and $H_4$ concern the sequences generated by the IR algorithm itself; it is therefore worth establishing sufficient conditions that ensure they hold. Two assumptions on the lower-level problem are given below to verify the hypotheses $H_3$ and $H_4$:
$H_5$
For every solution $s = (x, \omega, \mu)$ of $C(x, \omega, \mu) = 0$, the gradients $\operatorname{grad} h_i(x)$, $i = 1, \dots, n$, of the lower-level constraints are linearly independent.
$H_6$
For every solution $s = (x, \omega, \mu)$ of $C(x, \omega, \mu) = 0$, the matrix
$$H(x, \omega, \mu) = \sum_{i=1}^p \omega_i \operatorname{Hess}_x f_i(x) + \sum_{i=1}^n \mu_i \operatorname{Hess}_x h_i(x)$$
is positive definite on the following set:
$$Z(x) = \big\{d \in \mathbb{R}^m : \operatorname{grad} h(x)^T d = 0\big\}.$$
For convenience, to verify $H_3$ and $H_4$, we define the following matrix:
$$D(s) = \begin{pmatrix} \sum_{i=1}^p \omega_i \operatorname{Hess}_x f_i + \sum_{i=1}^n \mu_i \operatorname{Hess}_x h_i & \operatorname{grad}_x h \\ \operatorname{grad}_x h^T & 0 \end{pmatrix}.$$
Lemma 1.
The matrix $D(s)$ is nonsingular for any solution $s = (x, \omega, \mu)$ of $C(x, \omega, \mu) = 0$.
Proof. 
Assume that there exist $u \in \mathbb{R}^m$ and $v \in \mathbb{R}^n$ such that
$$D(s) \begin{pmatrix} u \\ v \end{pmatrix} = 0;$$
then we have
$$\Big(\sum_{i=1}^p \omega_i \operatorname{Hess}_x f_i + \sum_{i=1}^n \mu_i \operatorname{Hess}_x h_i\Big) u + \operatorname{grad}_x h\, v = 0, \tag{16}$$
$$\operatorname{grad}_x h^T u = 0. \tag{17}$$
By (17), $u \in Z(x)$; multiplying (16) on the left by $u^T$ and using (17), assumption $H_6$ gives $u = 0$, and then assumption $H_5$ applied to (16) gives $v = 0$. This means that the matrix $D(s)$ is nonsingular for any solution $s = (x, \omega, \mu)$ of $C(x, \omega, \mu) = 0$. □
Let $D(s)$ be defined on $M \times W \times \mathbb{R}^n$. For each $\omega \in W$, let $u(\omega) = (x(\omega), \mu(\omega))$ be a solution of $C(x, \omega, \mu) = 0$ such that the function $v(\omega) = u(\omega)$ is continuous on $W$. Now, we fix the function $v(\omega)$; by Lemma 1, we can define a function $\Upsilon(\omega) = D(\omega, v(\omega))^{-1}$ over the set $W$. Let $V(v(\omega), \alpha) = \{v \in M \times \mathbb{R}^n : d(v, v(\omega)) \le \alpha\}$. Furthermore, the following lemma can be obtained.
Lemma 2.
There exist $\alpha > 0$ and $\beta > 0$ such that, for all $\omega \in W$, it holds that $|\Upsilon(\omega)| < \beta$, and, for all $v \in V(v(\omega), \alpha)$, $\Upsilon(\omega)$ coincides with the Jacobian of the local inverse operator of $C(\omega, \cdot)$.
Proof. 
Since $D(\omega, v)$ is continuous in $(\omega, v)$ and $v(\omega)$ is continuous on $W$, $\Upsilon(\omega)$ is continuous with respect to $\omega \in W$; as $W$ is compact, there exists $\beta > 0$ such that $|\Upsilon(\omega)| < \beta$ for all $\omega \in W$.
For each fixed value of $\omega \in W$, the continuously differentiable operator $C(\omega, \cdot)$ satisfies the assumptions of the inverse function theorem at $v(\omega)$. Hence, there exists $\alpha > 0$ such that $C(\omega, \cdot)$ has a continuously differentiable local inverse operator $G(\omega) : C(\omega, V(v(\omega), \alpha)) \to V(v(\omega), \alpha)$, whose Jacobian matrix coincides with $\Upsilon(\omega)$. This ends the proof. □
Finally, we state that $H_3$ and $H_4$ hold under the assumptions $H_5$ and $H_6$. The next theorem summarizes this fact, and it can be proven as follows.
Theorem 4.
Let $r \in [0,1)$ and $(\omega, u) \in W \times M \times \mathbb{R}^n$ be such that $C(\omega, u) \ne 0$. If the assumptions $H_5$ and $H_6$ hold, then there exist $\beta > 0$ and $\bar{u} = (\bar{x}, \bar{\mu}) \in M \times \mathbb{R}^n$ such that
$$|C(\omega, \bar{u})| \le r\, |C(\omega, u)|$$
and
$$d\big((\omega, u), (\omega, \bar{u})\big) \le \beta\, |C(\omega, u)|.$$
Proof. 
According to Lemmas 1 and 2, combined with the assumptions $H_5$ and $H_6$ and using Taylor expansions of functions on Riemannian manifolds, the statement follows from the results of [20]. This ends the proof. □
Example 1.
We consider the particular case $M = \mathbb{R}^2_{+} := \{(x_1, x_2) \in \mathbb{R}^2 : x_1 > 0, x_2 > 0\}$ with the metric $g$ given in Cartesian coordinates $(x_1, x_2)$ around the point $x \in M$ by the matrix:
$$(g_{ij})_x := \operatorname{diag}\big(x_1^{-2}, x_2^{-2}\big).$$
In other words, for any vectors $u = (u_1, u_2)$ and $v = (v_1, v_2)$ in the tangent plane at $x \in M$, denoted by $T_xM$, which coincides with $\mathbb{R}^2$, we have
$$g(u, v) = \frac{u_1 v_1}{x_1^2} + \frac{u_2 v_2}{x_2^2}.$$
Let $a = (a_1, a_2) \in M$ and $v = (v_1, v_2) \in T_aM$. It is easy to see that the (minimizing) geodesic curve $t \mapsto \gamma(t)$ verifying $\gamma(0) = a$, $\dot{\gamma}(0) = v$ is given by
$$\mathbb{R} \ni t \mapsto \Big(a_1 e^{\frac{v_1}{a_1} t},\ a_2 e^{\frac{v_2}{a_2} t}\Big).$$
Hence, $M$ is a complete Riemannian manifold. Furthermore, the (minimizing) geodesic segment $\gamma : [0,1] \to M$ joining the points $a = (a_1, a_2)$ and $b = (b_1, b_2)$, i.e., $\gamma(0) = a$, $\gamma(1) = b$, is given by $\gamma_i(t) = a_i^{1-t} b_i^t$, $i = 1, 2$. Thus, the distance on the metric space $(M, g)$ is given by
$$d(a, b) = \int_0^1 \|\dot{\gamma}(t)\|_{\gamma(t)}\, dt = \int_0^1 \sqrt{\Big(\frac{\dot{\gamma}_1(t)}{\gamma_1(t)}\Big)^2 + \Big(\frac{\dot{\gamma}_2(t)}{\gamma_2(t)}\Big)^2}\, dt = \sqrt{\Big(\ln \frac{a_1}{b_1}\Big)^2 + \Big(\ln \frac{a_2}{b_2}\Big)^2}.$$
It follows easily that the closed ball $B(a; R)$ centered at $a \in M$ with radius $R \ge 0$ verifies
$$\Big[a_1 e^{-\frac{R}{\sqrt{2}}},\ a_1 e^{\frac{R}{\sqrt{2}}}\Big] \times \Big[a_2 e^{-\frac{R}{\sqrt{2}}},\ a_2 e^{\frac{R}{\sqrt{2}}}\Big] \subset B(a; R);$$
thus, every closed rectangle $[\rho_1, \eta_1] \times [\rho_2, \eta_2]$ ($\rho_1 > 0$, $\rho_2 > 0$) is bounded in the metric space $(M, g)$ with the distance $d$.
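As a quick numerical check of the closed-form distance, one can integrate the length of the geodesic $\gamma_i(t) = a_i^{1-t} b_i^t$ directly; the test points below are arbitrary choices of ours.

```python
import numpy as np

a, b = np.array([1.0, 2.0]), np.array([3.0, 0.5])
ts = np.linspace(0.0, 1.0, 1001)
gamma = a[:, None] ** (1 - ts) * b[:, None] ** ts
gamma_dot = gamma * np.log(b / a)[:, None]               # d/dt of a^(1-t) b^t
speed = np.sqrt(((gamma_dot / gamma) ** 2).sum(axis=0))  # ||gamma_dot||_gamma
length = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(ts))  # trapezoid rule
print(length, np.sqrt(np.sum(np.log(a / b) ** 2)))       # both approx 1.769
```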
Next, we consider the functions $F : M \to \mathbb{R}$, $f : M \to \mathbb{R}^2$, and $h : M \to \mathbb{R}$ given, for any $x \in M$, by
$$F(x) = x_1, \qquad f_1(x) = \frac{1}{2}(x_1 - 1)^2 - \frac{3}{4} \ln x_1 + \frac{3}{8}(x_2 - 1)^2, \qquad f_2(x) = \frac{1}{4}(x_1 - 1)^2 - \frac{3}{8} \ln x_1 + \frac{3}{16}(x_2 - 1)^2, \qquad h(x) = \frac{1}{3}(x_1 - 1)^2 + \frac{1}{3}(x_2 - 1)^2 - \frac{1}{3}.$$
It is easy to see that, along any geodesic segment $\gamma : [0,1] \to M$ with $\gamma(0) = a$ and $\gamma(1) = b$, the functions $f_i$, $i = 1, 2$, and $h$ are all convex on $M$ with respect to the Riemannian metric $g$. Moreover, the function $h$ satisfies the Slater constraint qualification.
We then consider the corresponding KKT reformulation of the semivectorial bilevel programming on Riemannian manifolds:
$$\min_{x, \omega} F(x) = x_1 \quad \text{s.t.} \quad \omega \in W, \quad \sum_{i=1}^2 \omega_i \operatorname{grad}_x f_i(x) + \operatorname{grad}_x h(x)\, \mu = 0, \quad h(x) = 0, \quad x \in M.$$
By the definition of the gradient of a differentiable function with respect to the Riemannian metric $g$, letting $\omega_1 = \frac{1}{3}$, $\omega_2 = \frac{2}{3}$ (so that $\omega_1 + \omega_2 = 1$) and $\mu = (\frac{1}{2}, \frac{3}{4})^T \in \mathbb{R}^2$, we have
$$\min_{x, \omega} F(x) = x_1 \quad \text{s.t.} \quad \Big(x_1 - \frac{1}{2}\Big)^2 + \Big(x_2 - \frac{1}{2}\Big)^2 - 1 = 0, \quad \frac{1}{3}(x_1 - 1)^2 + \frac{1}{3}(x_2 - 1)^2 - \frac{1}{3} = 0, \quad x \in M.$$
It is easy to see that the unique optimal solution of the KKT reformulation is $x = \big(\frac{3 - \sqrt{7}}{4}, \frac{3 + \sqrt{7}}{4}\big)$.
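Indeed, subtracting the two constraints gives $x_1 + x_2 = \frac{3}{2}$, and substituting $x_2 = \frac{3}{2} - x_1$ into the second constraint yields $2x_1^2 - 3x_1 + \frac{1}{4} = 0$, whose roots are $x_1 = \frac{3 \pm \sqrt{7}}{4}$; minimizing $F(x) = x_1$ selects the smaller root. A short numerical verification (variable names are ours):

```python
import numpy as np

c1 = lambda x: (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2 - 1.0
c2 = lambda x: ((x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2 - 1.0) / 3.0
x_star = np.array([(3 - np.sqrt(7)) / 4, (3 + np.sqrt(7)) / 4])
x_alt = np.array([(3 + np.sqrt(7)) / 4, (3 - np.sqrt(7)) / 4])   # other root
print(c1(x_star), c2(x_star))   # both residuals vanish up to round-off
print(c1(x_alt), c2(x_alt))     # the other intersection point is also feasible
print(x_star[0] < x_alt[0])     # True: F(x) = x_1 is smaller at x_star
```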
According to Algorithm 1, we first give the initial approximations $s^0 \in M \times W \times \mathbb{R}^2$, $\lambda^0 \in \mathbb{R}^2$, and a sequence $\{\omega_k\}$. In the restoration phase, we find an approximate minimizer $\bar{x} = (\bar{x}_1, \bar{x}_2) \in M$ and multiplier $\bar{\mu} = (\bar{\mu}_1, \bar{\mu}_2) \in \mathbb{R}^2$ for the problem:
$$\min_x \omega_1^k f_1(x) + \omega_2^k f_2(x) \quad \text{s.t.} \quad h(x) = 0, \quad x \in M,$$
and define $z^k = (\bar{x}, \omega^k, \bar{\mu})$.
We then compute the direction by using the exponential mapping and the projection defined on the Riemannian manifold $M$:
$$d_{\tan}^k = P_k\big(\exp_{z^k}(-\eta \operatorname{grad}_s L(z^k, \lambda^k))\big) - z^k = P_k\Big(z_1^k e^{-\eta \frac{[\operatorname{grad}_s L(z^k, \lambda^k)]_1}{z_1^k}},\ z_2^k e^{-\eta \frac{[\operatorname{grad}_s L(z^k, \lambda^k)]_2}{z_2^k}}\Big) - z^k,$$
where $L(z^k, \lambda^k) = \bar{x}_1 + (\lambda_1^k)^T \big(\sum_{i=1}^2 \omega_i^k \operatorname{grad}_x f_i(\bar{x}) + \operatorname{grad}_x h(\bar{x})\, \bar{\mu}\big) + \lambda_2^k h(\bar{x})$.
In the minimization phase, we first find $v^{k,i}$ such that $L(v^{k,i}, \lambda^k) < L(z^k, \lambda^k)$ and $v^{k,i} \in B_{k,i} = \{v : d(v, z^k) \le \delta_{k,i}\}$. Then, by calculating the actual reduction $\operatorname{Ared}_{k,i}$ and the positive predicted reduction $\operatorname{Pred}_{k,i}$ of the merit function $\Psi(s, \lambda, \theta)$ and enforcing $\operatorname{Ared}_{k,i} \ge 0.1 \operatorname{Pred}_{k,i}$, we obtain a sequence $\{s^k\}$.
According to Theorems 3 and 4, the sequence { s k } generated by the IR method established in the present paper converges to a solution of the semivectorial bilevel programming on Riemannian manifolds.

5. Conclusions

In this paper, a new algorithm for solving the semivectorial bilevel programming problem on Riemannian manifolds was proposed based on the IR technique; it preserves the two-stage structure of the problem. In the feasibility phase, the lower-level problems can be solved inexactly by exploiting their properties, and users are free to employ special-purpose solvers. In the optimality phase, a minimization algorithm with linear constraints is used. Moreover, it was proven that the algorithm is well defined and converges to a feasible point under mild conditions, and, under more stringent assumptions, that the sequences generated by the algorithm converge to solutions. Furthermore, sufficient conditions validating the assumptions imposed on the sequences generated by the algorithm were also given.

Author Contributions

Conceptualization, J.L. and Z.W.; methodology, J.L.; writing—original draft preparation, J.L.; writing—review and editing, J.L.; supervision, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China Grant Number 11871383.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Udrişte, C. Convex Functions and Optimization Methods on Riemannian Manifolds; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 297.
  2. Boumal, N. An Introduction to Optimization on Smooth Manifolds. 2020. Available online: https://www.nicolasboumal.net/book/ (accessed on 11 September 2020).
  3. Liao, L.; Wan, Z. On the Karush–Kuhn–Tucker reformulation of the bilevel optimization problems on Riemannian manifolds. Filomat 2022, 36, accepted and in press.
  4. Martínez, J.M.; Pilotta, E.A. Inexact-restoration algorithms for constrained optimization. J. Optim. Theory Appl. 2000, 104, 135–163.
  5. Martínez, J.M. Inexact-restoration method with Lagrangian tangent decrease and new merit function for nonlinear programming. J. Optim. Theory Appl. 2001, 111, 39–58.
  6. Birgin, E.G.; Bueno, L.F.; Martínez, J.M. Assessing the reliability of general-purpose inexact restoration methods. J. Comput. Appl. Math. 2015, 282, 1–16.
  7. Bueno, L.F.; Friedlander, A.; Martínez, J.M.; Sobral, F.N. Inexact restoration method for derivative-free optimization with smooth constraints. SIAM J. Optim. 2013, 23, 1189–1213.
  8. Francisco, J.B.; Martínez, J.M.; Martínez, L.; Pisnitchenko, F. Inexact restoration method for minimization problems arising in electronic structure calculations. Comput. Optim. Appl. 2011, 50, 555–590.
  9. Banihashemi, N.; Kaya, C.Y. Inexact restoration for Euler discretization of box-constrained optimal control problems. J. Optim. Theory Appl. 2013, 156, 726–760.
  10. Bueno, L.F.; Haeser, G.; Martínez, J.M. An inexact restoration approach to optimization problems with multiobjective constraints under weighted-sum scalarization. Optim. Lett. 2016, 10, 1315–1325.
  11. Krejić, N.; Jerinkić, N.K.; Ostojić, T. An inexact restoration-nonsmooth algorithm with variable accuracy for stochastic nonsmooth convex optimization problems in machine learning and stochastic linear complementarity problems. J. Comput. Appl. Math. 2022, 423, 114943.
  12. Ma, Y.; Pan, B.; Yan, R. Feasible sequential convex programming with inexact restoration for multistage ascent trajectory optimization. IEEE Trans. Aerosp. Electron. Syst. 2022, 1–14.
  13. Gabay, D. Minimizing a differentiable function over a differential manifold. J. Optim. Theory Appl. 1982, 37, 177–219.
  14. Murtagh, B.A.; Saunders, M.A. Large-scale linearly constrained optimization. Math. Program. 1978, 14, 41–72.
  15. Gay, D.M. A trust-region approach to linearly constrained optimization. In Numerical Analysis; Springer: Berlin/Heidelberg, Germany, 1984; Volume 1066, pp. 72–105.
  16. Gill, P.E.; Murray, W.; Wright, M.H. Practical Optimization; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2019.
  17. Fischer, A.; Friedlander, A. A new line search inexact restoration approach for nonlinear programming. Comput. Optim. Appl. 2010, 46, 333–346.
  18. Bueno, L.F.; Martínez, J.M. On the complexity of an inexact restoration method for constrained optimization. SIAM J. Optim. 2020, 30, 80–101.
  19. Andreani, R.; Ramos, A.; Secchin, L.D. Improving the Global Convergence of Inexact Restoration Methods for Constrained Optimization Problems. Optimization Online. 2022. Available online: https://optimization-online.org/?p=18821 (accessed on 28 March 2022).
  20. Andreani, R.; Castro, S.L.C.; Chela, J.L.; Friedlander, A.; Santos, S.A. An inexact-restoration method for nonlinear bilevel programming problems. Comput. Optim. Appl. 2009, 43, 307–328.
  21. Friedlander, A.; Gomes, F.A.M. Solution of a truss topology bilevel programming problem by means of an inexact restoration method. Comput. Appl. Math. 2011, 30, 109–125.
  22. Andreani, R.; Ramirez, V.A.; Santos, S.A.; Secchin, L.D. Bilevel optimization with a multiobjective problem in the lower level. Numer. Algorithms 2019, 81, 915–946.
  23. Martínez, J.M.; Pilotta, E.A. Inexact restoration methods for nonlinear programming: Advances and perspectives. In Optimization and Control with Applications; Springer: Boston, MA, USA, 2005; pp. 271–291.
  24. Fernández, D.; Pilotta, E.A.; Torres, G.A. An inexact restoration strategy for the globalization of the sSQP method. Comput. Optim. Appl. 2013, 54, 595–617.
  25. Dempe, S.; Kalashnikov, V.; Pérez-Valdés, G.A.; Kalashnykova, N. Bilevel programming problems. In Energy Systems; Springer: Berlin/Heidelberg, Germany, 2015.
  26. Eisenhart, L.P. Riemannian Geometry; Princeton University Press: Princeton, NJ, USA, 1997; Volume 51.
  27. Jost, J. Riemannian Geometry and Geometric Analysis; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2008; Volume 42005.
  28. Absil, P.A.; Mahony, R.; Sepulchre, R. Optimization Algorithms on Matrix Manifolds; Princeton University Press: Princeton, NJ, USA, 2009.
  29. Bento, G.C.; Cruz Neto, J.X. A subgradient method for multiobjective optimization on Riemannian manifolds. J. Optim. Theory Appl. 2013, 159, 125–137.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
