Article

Optimality Conditions and Dualities for Robust Efficient Solutions of Uncertain Set-Valued Optimization with Set-Order Relations

1 College of Mathematics and Statistics, Chongqing Jiaotong University, Chongqing 400074, China
2 School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Axioms 2022, 11(11), 648; https://doi.org/10.3390/axioms11110648
Submission received: 3 October 2022 / Revised: 14 November 2022 / Accepted: 15 November 2022 / Published: 16 November 2022
(This article belongs to the Special Issue in Honor of the 60th Birthday of Professor Hong-Kun Xu)

Abstract: In this paper, we introduce a second-order strong subdifferential of set-valued maps and discuss some of its properties, such as convexity and a sum rule. By means of the new subdifferential and its properties, we establish a necessary and sufficient optimality condition for set-based robust efficient solutions of the uncertain set-valued optimization problem. We also introduce a Wolfe type dual problem of the uncertain set-valued optimization problem. Finally, we establish the robust weak duality theorem and the robust strong duality theorem between the uncertain set-valued optimization problem and its robust dual problem. Several main results extend the corresponding ones in the literature.

1. Introduction

Robust optimization is an important deterministic technique for studying optimization problems with data uncertainty: it protects a solution against every realization of the uncertain data, and the field has grown significantly; see [1,2,3,4,5,6]. Optimization theory mainly concerns multi-objective optimization and focuses on finding globally optimal solutions or globally efficient solutions. However, in real-world situations where solutions are very susceptible to perturbations of the variables, the globally optimal solutions may not always be usable. To reduce the sensitivity to variable perturbations under these conditions, we look for robust solutions.
The set-valued optimization problem
$$(\mathrm{SOP}) \quad \min \ H(z) = \{ H_1(z), H_2(z), \ldots, H_k(z), \ldots, H_q(z) \} \quad \text{s.t.} \ z \in M, \ B_j(z) \subseteq -\mathbb{R}_+, \ j = 1, \ldots, l$$
has been widely studied by scholars, where M is a closed and convex subset of a real topological linear space X, and $H_k : M \to 2^{\mathbb{R}}$, $k = 1, \ldots, q$, and $B_j : M \to 2^{\mathbb{R}}$, $j = 1, \ldots, l$, are given set-valued maps. Set-valued optimization is a thriving research field with numerous applications, for example in risk management [7,8], statistics [9], and elsewhere. Hamel and Heyde [7] defined set-valued (convex) measures of risk and their acceptance sets, and gave dual representation theorems. Hamel et al. [8] defined set-valued risk measures on $L^p_d$ with $0 \leq p \leq \infty$ for conical market models, and gave primal and dual representation results. Hamel and Kostner [9] introduced a corresponding value at risk for multivariate random variables as well as stochastic orders by the set-valued approach, and discussed relationships to families of univariate quantile functions and to depth functions. The vectorial criterion and the set criterion are the two main forms of solution criteria for set-valued optimization problems, and each criterion has been studied independently. Set-valued optimization deals with the challenge of minimizing a map whose value at a point is a set. Since sets cannot be minimized by means of a total order relation, one must first specify what it means to minimize the set-valued objective function. The literature [10,11,12] introduced preorders to compare sets; these preorders enable the formulation of set-valued optimization problems pertaining to the robustness of multi-objective optimization problems. Eichfelder and Jahn [10] presented different optimality notions, such as minimal, weakly minimal, strongly minimal and properly minimal elements in a pre-ordered linear space, and discussed the relations among these notions. Young [11] introduced the upper set less relation and the lower set less relation and then used these set relations to analyze the upper and lower limits of sequences of real numbers. Kuroiwa [12] referred to the upper-type set relation and considered some duality theorems for a set optimization problem; six other forms of set relations [13] were also used there to solve set optimization problems. Under generalized differentiability assumptions, Wei et al. [14] used a separation scheme to construct robust necessary conditions for uncertain optimization problems. By using a constraint qualification and a regularity condition, Wang et al. [15] developed weak and strong KKT robust necessary conditions for nonconvex nonsmooth uncertain multiobjective optimization problems under upper semi-continuity assumptions.
Rockafellar [16] first introduced subdifferential concepts for convex functions. Recently, many authors have generalized the subdifferential of a vector-valued map to set-valued maps [17,18]. There are two main approaches to defining the subdifferential of a set-valued map: one defines the subdifferential via derivatives of the set-valued map [17], while the other defines it in algebraic form [18,19,20,21,22]. Tanino [18] pioneered conjugate duality for vector optimization problems and introduced weak efficient points of a set to provide a weak subdifferential for set-valued mappings. Some characteristics of this weak subdifferential were studied by Sach [19]. Using an algebraic form, Yang [20] defined a weak subdifferential for set-valued mappings, proved an extension of the Hahn-Banach theorem, and discussed the existence of weak subgradients. Chen and Jahn [21] introduced a kind of weak subdifferential that is more powerful than the weak subdifferential of [20]; by this weak subdifferential, they established a sufficient optimality condition for set-valued optimization problems. Borwein [22] introduced a strong subgradient and proved a Lagrange multiplier theorem and a sandwich theorem for convex maps. Peng et al. [23] proved the existence of Borwein-strong subgradients and Yang-weak subgradients for set-valued maps and presented a new Lagrange multiplier theorem and a new sandwich theorem for set-valued maps. Li and Guo [24] investigated the properties of the weak subdifferential first proposed in [21], as well as necessary and sufficient optimality conditions for set-valued optimization problems. Hernández and Rodríguez-Marín [25] presented a new definition of the strong subgradient for set-valued mappings that is stronger than the weak subgradient introduced by Chen and Jahn [21]. Long et al. [26] obtained two existence theorems for the weak subgradients of set-valued mappings described in [21] and deduced several properties of the weak subdifferential for set-valued mappings. İnceoğlu [27] defined a second-order weak subdifferential and examined some of its properties.
Recently, duality theorems in the face of data uncertainty have received a great deal of attention, because uncertainty is a reality in many real-world optimization problems. Suneja et al. [28] constructed sufficient optimality criteria for vector optimization problems and strong/weak duality results between the primal problem and its Mond-Weir type dual using Clarke's generalized gradients. Chuong and Kim [29] established sufficient conditions for (weakly) efficient solutions of nonsmooth semi-infinite multiobjective optimization problems and proposed Wolfe and Mond-Weir type dual problems via the limiting subdifferential of locally Lipschitz functions; moreover, they explored weak and strong duality. By means of multipliers and limiting subdifferentials of the related functions, Chuong [30] established necessary/sufficient optimality conditions for robust (weakly) Pareto solutions of a robust multiobjective optimization problem involving nonsmooth/nonconvex real-valued functions; in addition, he addressed a dual (robust) multiobjective problem to the primal one and explored weak/strong duality. By virtue of the subdifferential [31], Sun et al. [32] obtained an optimality condition and established Wolfe type robust duality between the uncertain optimization problem and its uncertain dual problem under conditions of continuity and cone-convex-concavity.
To the best of our knowledge, there are only a few solution concepts for uncertain set-valued optimization problems based on set-order relations. Moreover, there is very little literature on optimality conditions and duality theorems for set-based robust efficient solutions of uncertain set-valued optimization problems in terms of a second-order strong subdifferential of a set-valued mapping. Lately, Som and Vetrivel [33] introduced robustness notions for set-valued optimization that generalize some existing concepts of robustness for scalar and vector-valued optimization, following the set approach to solutions of set-valued optimization problems.
To weaken the conditions of continuity and cone-convex-concavity in [15,32], and inspired by the subdifferentials of [20,22] and the set-order relations of [34], we introduce a new second-order strong subdifferential of set-valued maps and define set-based robust efficient solutions for uncertain set-valued optimization problems. Meanwhile, by using the second-order strong subdifferential of set-valued maps, we put forward a Wolfe type dual problem and investigate the robust weak duality and robust strong duality of set-based robust efficient solutions for uncertain set-valued optimization problems.
This paper is organized as follows. In Section 2, we briefly review basic concepts and introduce a new second-order strong subdifferential of a set-valued map. In Section 3, we derive some important properties of the new subdifferential. In Section 4, using the second-order strong subdifferential of set-valued mappings, we obtain a necessary and sufficient condition for set-based robust efficient solutions of the uncertain set-valued optimization problem. The robust weak duality and robust strong duality of the uncertain set-valued optimization problem are established in Section 5. Section 6 gives a short conclusion.

2. Preliminaries and Definitions

Throughout the paper, let X and Y be two real topological linear spaces with topological dual spaces $X^*$ and $Y^*$, respectively. $0_X$ and $0_Y$ denote the origins of X and Y, respectively. Let $K \subseteq Y$ be a solid closed convex pointed cone. The dual cone of K is defined by
$$K^* = \{ y^* \in Y^* : \langle y^*, y \rangle \geq 0, \ \forall y \in K \}.$$
Let $\mathbb{N}$ denote the set of natural numbers and $n, m, l \in \mathbb{N}$. Let $D \subseteq Y$ be a nonempty subset. $\operatorname{cl} D$ and $\operatorname{int} D$ denote the closure and interior of D, respectively. $\mathcal{T}(Y) := \{ E \subseteq Y \mid E \text{ is nonempty} \}$.
Let M be a subset of X and $H : M \to 2^Y$ a set-valued map. The domain, graph and epigraph of H are defined, respectively, by
$$\operatorname{dom} H := \{ z \in M : H(z) \neq \emptyset \}, \qquad \operatorname{gr} H := \{ (z, y) \in M \times Y : y \in H(z), \ z \in M \}$$
and
$$\operatorname{epi} H := \{ (z, y) \in M \times Y : y \in H(z) + K \}.$$
The partial order relation $\leq_K$ on Y induced by the cone K is defined as follows:
$$e \leq_K s \iff s - e \in K, \qquad e <_K s \iff s - e \in \operatorname{int} K, \qquad \forall e, s \in Y.$$
Definition 1 ([34]). Let $E, S \in \mathcal{T}(Y)$ be arbitrarily chosen sets.
(i) The lower set less order relation $\preceq^l_K$ is defined by
$$E \preceq^l_K S \iff S \subseteq E + K \iff \forall s \in S, \ \exists e \in E : e \leq_K s,$$
$$E \prec^l_K S \iff S \subseteq E + \operatorname{int} K \iff \forall s \in S, \ \exists e \in E : e <_K s.$$
(ii) The upper set less order relation $\preceq^u_K$ is defined by
$$E \preceq^u_K S \iff E \subseteq S - K \iff \forall e \in E, \ \exists s \in S : e \leq_K s,$$
$$E \prec^u_K S \iff E \subseteq S - \operatorname{int} K \iff \forall e \in E, \ \exists s \in S : e <_K s.$$
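For instance (an illustration we add, not part of [34]), in $Y = \mathbb{R}$ with $K = \mathbb{R}_+$, take $E = [0, 2]$ and $S = [1, 3]$: then $S \subseteq E + \mathbb{R}_+ = [0, +\infty)$, so $E \preceq^l_{\mathbb{R}_+} S$, and $E \subseteq S - \mathbb{R}_+ = (-\infty, 3]$, so $E \preceq^u_{\mathbb{R}_+} S$.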
Definition 2 ([35]). Let $E, S \in \mathcal{T}(Y)$ be arbitrarily chosen sets. Then the certainly less order relation $\preceq^c_K$ is defined by
$$E \preceq^c_K S \iff (E = S) \ \text{or} \ (E \neq S, \ \forall e \in E, \ \forall s \in S : e \leq_K s),$$
or equivalently, $E = S$ or $S - E \subseteq K$ whenever $E \neq S$.
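By contrast (again our illustration), the certainly less relation is much more demanding: with $K = \mathbb{R}_+$, the sets $E = [0, 1]$ and $S = [2, 3]$ satisfy $E \preceq^c_{\mathbb{R}_+} S$, since every $e \in E$ and every $s \in S$ satisfy $e \leq_{\mathbb{R}_+} s$, whereas $E = [0, 2]$ and $S = [1, 3]$ satisfy $E \preceq^l_{\mathbb{R}_+} S$ and $E \preceq^u_{\mathbb{R}_+} S$ but not $E \preceq^c_{\mathbb{R}_+} S$, because $2 \in E$, $1 \in S$ and $2 \not\leq_{\mathbb{R}_+} 1$.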
Definition 3 ([31]). Let M be a nonempty subset of X. M is said to be convex if for any $x, z \in M$ and all $\beta \in [0, 1]$,
$$\beta x + (1 - \beta) z \in M.$$
Definition 4 ([31]). Let M be a nonempty convex subset of X. A set-valued map $H : M \to 2^Y$ is called K-convex if for any $x, z \in M$ and all $\beta \in [0, 1]$,
$$\beta H(x) + (1 - \beta) H(z) \subseteq H(\beta x + (1 - \beta) z) + K.$$
Definition 5. A set-valued map $H : M \to 2^{\mathbb{R}}$ has a global minimum at $(x_1, y_1)$, where $x_1 \in M$ and $y_1 \in H(x_1)$, if
$$y_1 \leq_{\mathbb{R}_+} y_2, \qquad \forall x_2 \in M, \ y_2 \in H(x_2).$$
Definition 6 ([22]). Let $H : M \to 2^Y$ be a K-convex set-valued map, $x_1 \in M$, $y_1 \in H(x_1)$ and $H(x_1) - y_1 \subseteq K$. The set
$$\partial H(x_1, y_1) = \{ \xi \in X^* \mid y_2 - y_1 - \langle \xi, x_2 - x_1 \rangle \in K, \ \forall x_2 \in M, \ y_2 \in H(x_2) \}$$
is called the Borwein-strong subdifferential of H at $(x_1, y_1)$.
Inspired by the Borwein-strong subdifferential in [22,23], we put forward a new notion of second-order strong subdifferential for a set-valued map.
Definition 7. Let $H : M \to 2^{\mathbb{R}}$ be a set-valued map, $x_1 \in M$, $y_1 \in H(x_1)$ and $H(x_1) - y_1 \subseteq \mathbb{R}_+$. Then $\xi \in X^*$ is said to be a second-order strong subgradient of H at $(x_1, y_1)$ if
$$y_2 - y_1 - \langle \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2).$$
The set
$$\partial_s^2 H(x_1, y_1) = \{ \xi \in X^* \mid y_2 - y_1 - \langle \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \ \forall x_2 \in M, \ y_2 \in H(x_2) \}$$
is said to be the second-order strong subdifferential of H at $(x_1, y_1)$. If $\partial_s^2 H(x_1, y_1) \neq \emptyset$, then H is said to be second-order strong subdifferentiable at $(x_1, y_1)$.
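In particular, when $X = \mathbb{R}$ and $M \subseteq \mathbb{R}$, the defining condition reads $y_2 - y_1 \geq \xi^2 (x_2 - x_1)^2$ for all $x_2 \in M$ and $y_2 \in H(x_2)$, since $\langle \xi, x_2 - x_1 \rangle^2$ denotes the square of the real number $\xi (x_2 - x_1)$; all of the one-dimensional examples below are computed from this inequality.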
The following example illustrates Definition 7.
Example 1. Let $H : \mathbb{R} \to 2^{\mathbb{R}}$ be a set-valued map with $H(x) = \{ y \in \mathbb{R} \mid y \geq x^2 \}$ for any $x \in \mathbb{R}$. Take $(x_1, y_1) = (0, 0)$. A simple calculation shows that $H(x_1) - y_1 \subseteq \mathbb{R}_+$. Then we obtain
$$\partial_s^2 H(0, 0) = \{ \xi \in \mathbb{R} : \xi \in [-1, 1] \}.$$
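To verify this (a worked computation we add for clarity), note that $\xi \in \partial_s^2 H(0, 0)$ if and only if $y_2 - \xi^2 x_2^2 \geq 0$ for all $y_2 \geq x_2^2$; the binding case is $y_2 = x_2^2$, so the condition becomes $(1 - \xi^2) x_2^2 \geq 0$ for all $x_2 \in \mathbb{R}$, i.e., $\xi^2 \leq 1$.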
Remark 1. Let $H : M \to 2^{\mathbb{R}}$ be a set-valued map. If the condition $H(x_1) - y_1 \subseteq \mathbb{R}_+$ is not satisfied, the second-order strong subdifferential in Definition 7 may be empty. The following example shows this case.
Example 2. Let $H : \mathbb{R}_+ \to 2^{\mathbb{R}}$ be a set-valued map with $H(x) = \{ y \in \mathbb{R} \mid y \leq x^2 \}$ for any $x \in \mathbb{R}_+$. Take $(x_1, y_1) = (1, 1)$. A simple calculation shows that $H(x_1) - y_1 \not\subseteq \mathbb{R}_+$. Then it follows from Definition 7 that no subgradient ξ exists, i.e.,
$$\partial_s^2 H(1, 1) = \emptyset.$$
Therefore, the condition $H(x_1) - y_1 \subseteq \mathbb{R}_+$ is necessary in Definition 7.
Remark 2. Let $H : M \to 2^{\mathbb{R}}$ be a set-valued map. Obviously, if the second-order strong subdifferential is nonempty, then $0 \in \partial_s^2 H(x_1, y_1)$. However, $\partial_s^2 H(x_1, y_1) = \partial H(x_1, y_1)$ may not necessarily be true. We now give an example to illustrate this case.
Example 3. Let $H : \mathbb{R}_+ \to 2^{\mathbb{R}}$ be a set-valued map, and let $H(x) = \{ y \in \mathbb{R} \mid y \geq \frac{1}{2} x \}$ for any $x \in \mathbb{R}_+$. Take $(x_1, y_1) = (0, 0)$. A simple calculation shows that $H(x_1) - y_1 \subseteq \mathbb{R}_+$. Then we have
$$\partial_s^2 H(0, 0) = \{ \xi \in \mathbb{R} : \xi = 0 \}$$
and
$$\partial H(0, 0) = \{ \xi \in \mathbb{R} : \xi \in (-\infty, \tfrac{1}{2}] \}.$$
Thus, $0 \in \partial_s^2 H(0, 0)$, but $\partial_s^2 H(0, 0) \neq \partial H(0, 0)$.

3. Properties of a Second-Order Strong Subdifferential of Set-Valued Maps

In this section, we present some properties of a second-order strong subdifferential of set-valued maps. Firstly, we introduce the following lemma.
Lemma 1. Let $x \in X$, $\xi, \eta \in X^*$ and $\beta \in [0, 1]$. Set $h_x(\xi) := \langle \xi, x \rangle$. Then
$$\beta h_x^2(\xi) + (1 - \beta) h_x^2(\eta) \geq h_x^2(\beta \xi + (1 - \beta) \eta).$$
Proof. Let $x \in X$, $\xi, \eta \in X^*$ and $\beta \in [0, 1]$. Since $\beta^2 - \beta \leq 0$ and $h_x$ is a linear function,
$$h_x^2(\beta \xi + (1 - \beta) \eta) = [h_x(\beta \xi) + h_x((1 - \beta) \eta)]^2 = h_x^2(\beta \xi) + h_x^2((1 - \beta) \eta) + 2 h_x(\beta \xi) h_x((1 - \beta) \eta) = \beta h_x^2(\xi) + (1 - \beta) h_x^2(\eta) + (\beta^2 - \beta) \big( h_x^2(\xi) + h_x^2(\eta) - 2 h_x(\xi) h_x(\eta) \big) = \beta h_x^2(\xi) + (1 - \beta) h_x^2(\eta) + (\beta^2 - \beta) \big( h_x(\xi) - h_x(\eta) \big)^2 \leq \beta h_x^2(\xi) + (1 - \beta) h_x^2(\eta).$$
The proof is complete. □
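Since Lemma 1 is used repeatedly below, a quick numerical sanity check may be helpful. The following sketch (ours, not from the paper) tests the inequality on random data in $X = \mathbb{R}^n$ with $h_x(\xi) = \langle \xi, x \rangle$:

```python
# Numerical sanity check of Lemma 1 on random data (illustration, not a proof):
# beta*h(xi)**2 + (1-beta)*h(eta)**2 >= h(beta*xi + (1-beta)*eta)**2,
# where h(v) = <v, x> is a linear functional on R^n.
import random

def check_lemma1(trials=10000, n=3):
    for _ in range(trials):
        x = [random.uniform(-5, 5) for _ in range(n)]
        xi = [random.uniform(-5, 5) for _ in range(n)]
        eta = [random.uniform(-5, 5) for _ in range(n)]
        beta = random.random()
        h = lambda v: sum(vi * xj for vi, xj in zip(v, x))  # h_x(v) = <v, x>
        lhs = beta * h(xi) ** 2 + (1 - beta) * h(eta) ** 2
        mix = [beta * a + (1 - beta) * b for a, b in zip(xi, eta)]
        assert lhs >= h(mix) ** 2 - 1e-9  # the inequality of Lemma 1
    return True

print(check_lemma1())  # True: no counterexample found
```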
Theorem 1. Let $H : M \to 2^{\mathbb{R}}$ be a set-valued map, $x_1 \in M$, $y_1 \in H(x_1)$ and $H(x_1) - y_1 \subseteq \mathbb{R}_+$. Then the set $\partial_s^2 H(x_1, y_1)$ is convex.
Proof. If $\partial_s^2 H(x_1, y_1) = \emptyset$, then there is nothing to prove.
Suppose $\partial_s^2 H(x_1, y_1) \neq \emptyset$. Let $\xi, \eta \in \partial_s^2 H(x_1, y_1)$ and $\lambda \in [0, 1]$. Then
$$y_2 - y_1 - \langle \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2)$$
and
$$y_2 - y_1 - \langle \eta, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2),$$
i.e.,
$$\lambda (y_2 - y_1) - \lambda \langle \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2) \quad (1)$$
and
$$(1 - \lambda)(y_2 - y_1) - (1 - \lambda) \langle \eta, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2). \quad (2)$$
By Lemma 1, it follows from (1) and (2) that
$$y_2 - y_1 - \big( \lambda \langle \xi, x_2 - x_1 \rangle^2 + (1 - \lambda) \langle \eta, x_2 - x_1 \rangle^2 \big) \in \mathbb{R}_+$$
and hence
$$y_2 - y_1 - \langle \lambda \xi + (1 - \lambda) \eta, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2).$$
Thus,
$$\lambda \xi + (1 - \lambda) \eta \in \partial_s^2 H(x_1, y_1).$$
The proof is complete. □
Theorem 2. Let $H : M \to 2^{\mathbb{R}}$ be a set-valued map, $x_1 \in M$, $y_1 \in H(x_1)$ and $H(x_1) - y_1 \subseteq \mathbb{R}_+$. Let H be second-order strong subdifferentiable at $(x_1, y_1)$. Then H has a global minimum at $(x_1, y_1)$ if and only if $0_{X^*} \in \partial_s^2 H(x_1, y_1)$.
Proof. (⟹) Since H has a global minimum at $(x_1, y_1)$,
$$y_2 - y_1 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2).$$
Then
$$y_2 - y_1 - \langle 0_{X^*}, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2),$$
which implies that $0_{X^*} \in \partial_s^2 H(x_1, y_1)$.
(⟸) Let $0_{X^*} \in \partial_s^2 H(x_1, y_1)$. Then, by Definition 7, we obtain
$$y_2 - y_1 - \langle 0_{X^*}, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_2 \in H(x_2),$$
which implies that $y_2 - y_1 \in \mathbb{R}_+$ for all $x_2 \in M$, $y_2 \in H(x_2)$. Therefore, according to Definition 5, H has a global minimum at $(x_1, y_1)$. The proof is complete. □
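Theorem 2 is consistent with Example 1: there, $(0, 0)$ is a global minimum of $H(x) = \{ y \in \mathbb{R} \mid y \geq x^2 \}$, and indeed $0 \in [-1, 1] = \partial_s^2 H(0, 0)$.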
Theorem 3. Let $H : M \to 2^{\mathbb{R}}$ be a set-valued map and $\alpha > 0$. Let $x_1 \in M$, $y_1 \in H(x_1)$ and $H(x_1) - y_1 \subseteq \mathbb{R}_+$. If H and αH are second-order strong subdifferentiable at $(x_1, y_1)$ and $(x_1, \alpha y_1)$, respectively, then
$$\partial_s^2 (\alpha H)(x_1, \alpha y_1) = \sqrt{\alpha} \, \partial_s^2 H(x_1, y_1).$$
Proof. Let $\xi \in \partial_s^2 (\alpha H)(x_1, \alpha y_1)$. Then
$$\alpha y_2 - \alpha y_1 - \langle \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \ \forall x_2 \in M, \ y_2 \in H(x_2) \iff y_2 - y_1 - \tfrac{1}{\alpha} \langle \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \ \forall x_2 \in M, \ y_2 \in H(x_2) \iff y_2 - y_1 - \langle \tfrac{1}{\sqrt{\alpha}} \xi, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \ \forall x_2 \in M, \ y_2 \in H(x_2) \iff \tfrac{1}{\sqrt{\alpha}} \xi \in \partial_s^2 H(x_1, y_1) \iff \xi \in \sqrt{\alpha} \, \partial_s^2 H(x_1, y_1),$$
where the middle equivalence uses the identity $\tfrac{1}{\alpha} \langle \xi, x_2 - x_1 \rangle^2 = \langle \tfrac{1}{\sqrt{\alpha}} \xi, x_2 - x_1 \rangle^2$ for $\alpha > 0$. Here we finish the proof. □
Now, we provide an illustration of Theorem 3.
Example 4. Let $H : \mathbb{R} \to 2^{\mathbb{R}}$ be a set-valued map, and let $H(x) = \{ y \in \mathbb{R} \mid y \geq 3 x^2 \}$. Take $(x_1, y_1) = (0, 0)$. A simple calculation shows that $H(x_1) - y_1 \subseteq \mathbb{R}_+$. Then for any $\alpha > 0$, we obtain
$$\partial_s^2 (\alpha H)(0, 0) = \{ \xi \in \mathbb{R} : \xi \in [-\sqrt{3 \alpha}, \sqrt{3 \alpha}] \}$$
and
$$\sqrt{\alpha} \, \partial_s^2 H(0, 0) = \{ \xi \in \mathbb{R} : \xi \in [-\sqrt{3 \alpha}, \sqrt{3 \alpha}] \}.$$
Therefore, $\partial_s^2 (\alpha H)(0, 0) = \sqrt{\alpha} \, \partial_s^2 H(0, 0)$.
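A small numerical illustration of Theorem 3 on Example 4 may also be useful. The sketch below (ours; the grid-based membership test is an approximation we introduce) checks that $\sqrt{3\alpha}$ is exactly the breakpoint of the subgradient inequality:

```python
# Hedged grid check of Example 4: xi belongs to the second-order strong
# subdifferential of alpha*H at (0, 0), H(x) = {y : y >= 3*x**2}, iff
# 3*alpha*x**2 - (xi*x)**2 >= 0 for every x (the binding case is y = 3*x**2).
import math

def in_subdiff(xi, alpha, xs):
    return all(3 * alpha * x * x - (xi * x) ** 2 >= -1e-12 for x in xs)

alpha = 2.0
xs = [k / 10 for k in range(-50, 51)]  # finite grid approximating the domain
bound = math.sqrt(3 * alpha)
print(in_subdiff(bound, alpha, xs))         # True: xi = sqrt(3*alpha) is inside
print(in_subdiff(bound + 1e-3, alpha, xs))  # False: a slightly larger xi fails
```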
Theorem 4. Let $H, Q : M \to 2^{\mathbb{R}}$ be set-valued maps, $x_1 \in M$, $y_1 \in H(x_1)$, $y_2 \in Q(x_1)$, $H(x_1) - y_1 \subseteq \mathbb{R}_+$ and $Q(x_1) - y_2 \subseteq \mathbb{R}_+$. If H and Q are second-order strong subdifferentiable at $(x_1, y_1)$ and $(x_1, y_2)$, respectively, then
$$\partial_s^2 H(x_1, y_1) + \partial_s^2 Q(x_1, y_2) \subseteq \sqrt{2} \, \partial_s^2 (H + Q)(x_1, y_1 + y_2).$$
Proof. Let $\xi_1 \in \partial_s^2 H(x_1, y_1)$ and $\xi_2 \in \partial_s^2 Q(x_1, y_2)$. Then
$$y_3 - y_1 - \langle \xi_1, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_3 \in H(x_2)$$
and
$$y_4 - y_2 - \langle \xi_2, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_4 \in Q(x_2),$$
i.e.,
$$\tfrac{1}{2}(y_3 - y_1) - \tfrac{1}{2} \langle \xi_1, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_3 \in H(x_2) \quad (3)$$
and
$$\tfrac{1}{2}(y_4 - y_2) - \tfrac{1}{2} \langle \xi_2, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_4 \in Q(x_2). \quad (4)$$
According to Lemma 1, it follows from (3) and (4) that
$$\tfrac{1}{2} [(y_3 - y_1) + (y_4 - y_2)] - \big[ \tfrac{1}{2} \langle \xi_1, x_2 - x_1 \rangle^2 + \tfrac{1}{2} \langle \xi_2, x_2 - x_1 \rangle^2 \big] \in \mathbb{R}_+$$
and hence
$$\tfrac{1}{2} [(y_3 + y_4) - (y_1 + y_2)] - \langle \tfrac{1}{2} \xi_1 + \tfrac{1}{2} \xi_2, x_2 - x_1 \rangle^2 \in \mathbb{R}_+, \qquad \forall x_2 \in M, \ y_3 + y_4 \in (H + Q)(x_2).$$
Multiplying by 2 and using $2 \langle w, x_2 - x_1 \rangle^2 = \langle \sqrt{2} \, w, x_2 - x_1 \rangle^2$, we get
$$\tfrac{\sqrt{2}}{2} \xi_1 + \tfrac{\sqrt{2}}{2} \xi_2 \in \partial_s^2 (H + Q)(x_1, y_1 + y_2),$$
i.e.,
$$\xi_1 + \xi_2 \in \sqrt{2} \, \partial_s^2 (H + Q)(x_1, y_1 + y_2).$$
Therefore, $\partial_s^2 H(x_1, y_1) + \partial_s^2 Q(x_1, y_2) \subseteq \sqrt{2} \, \partial_s^2 (H + Q)(x_1, y_1 + y_2)$. The proof is complete. □
Corollary 1. Let $H_i : M \to 2^{\mathbb{R}}$ be set-valued maps, $i = 1, \ldots, m$, $x_1 \in M$, $y_i \in H_i(x_1)$ and $H_i(x_1) - y_i \subseteq \mathbb{R}_+$. If each $H_i$ is second-order strong subdifferentiable at $(x_1, y_i)$, $i = 1, \ldots, m$, then
$$\sum_{i=1}^m \partial_s^2 H_i(x_1, y_i) \subseteq \sqrt{m} \, \partial_s^2 \Big( \sum_{i=1}^m H_i \Big) \Big( x_1, \sum_{i=1}^m y_i \Big).$$
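A brief justification (our sketch; the original states the corollary without proof): for $\xi_i \in \partial_s^2 H_i(x_1, y_i)$, $x_2 \in M$ and $y_i' \in H_i(x_2)$, summing the defining inequalities with weights $\tfrac{1}{m}$ and applying the convexity of $t \mapsto t^2$ as in Lemma 1 gives
$$\sum_{i=1}^m (y_i' - y_i) \geq m \Big\langle \tfrac{1}{m} \sum_{i=1}^m \xi_i, \, x_2 - x_1 \Big\rangle^2 = \Big\langle \tfrac{1}{\sqrt{m}} \sum_{i=1}^m \xi_i, \, x_2 - x_1 \Big\rangle^2,$$
so $\sum_{i=1}^m \xi_i \in \sqrt{m} \, \partial_s^2 (\sum_{i=1}^m H_i)(x_1, \sum_{i=1}^m y_i)$, which yields the factor $\sqrt{m}$.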
Remark 3. Let $H, Q : M \to 2^{\mathbb{R}}$ be set-valued maps. If H and Q are strong subdifferentiable at $(x_1, y_1)$ and $(x_1, y_2)$, respectively, then
$$\partial H(x_1, y_1) + \partial Q(x_1, y_2) \subseteq \partial (H + Q)(x_1, y_1 + y_2).$$
However, the factor $\sqrt{2}$ cannot be omitted in Theorem 4.
We consider the following examples to illustrate Theorem 4 and Remark 3.
Example 5. Let $H, Q : [1, +\infty) \to 2^{\mathbb{R}}$ be set-valued maps with $H(x) = \{ y \in \mathbb{R} \mid y \geq x^2 \}$ and $Q(x) = \{ y \in \mathbb{R} \mid y \geq 4 x^2 \}$. Take $x_1 = 1$, $y_1 = 1 \in H(x_1)$ and $y_2 = 4 \in Q(x_1)$. A simple calculation shows that $H(x_1) - y_1 \subseteq \mathbb{R}_+$ and $Q(x_1) - y_2 \subseteq \mathbb{R}_+$. Then we obtain
$$\partial_s^2 H(1, 1) = \{ \xi_1 \in \mathbb{R} : \xi_1 \in [-1, 1] \}$$
and
$$\partial_s^2 Q(1, 4) = \{ \xi_2 \in \mathbb{R} : \xi_2 \in [-2, 2] \},$$
so
$$\partial_s^2 H(1, 1) + \partial_s^2 Q(1, 4) = \{ \xi_1 + \xi_2 \in \mathbb{R} : \xi_1 + \xi_2 \in [-3, 3] \}.$$
Moreover,
$$\partial_s^2 (H + Q)(1, 5) = \{ \xi_3 \in \mathbb{R} : \xi_3 \in [-\sqrt{5}, \sqrt{5}] \}$$
and
$$\sqrt{2} \, \partial_s^2 (H + Q)(1, 5) = \{ \sqrt{2} \, \xi_3 \in \mathbb{R} : \sqrt{2} \, \xi_3 \in [-\sqrt{10}, \sqrt{10}] \}.$$
In fact, $3 > \sqrt{5}$ and $3 < \sqrt{10}$. Therefore, $\partial_s^2 H(x_1, y_1) + \partial_s^2 Q(x_1, y_2) \not\subseteq \partial_s^2 (H + Q)(x_1, y_1 + y_2)$, while $\partial_s^2 H(x_1, y_1) + \partial_s^2 Q(x_1, y_2) \subseteq \sqrt{2} \, \partial_s^2 (H + Q)(x_1, y_1 + y_2)$.
Example 6. Let $H, Q : \mathbb{R}_+ \to 2^{\mathbb{R}}$ be set-valued maps, and let $H(x) = \{ y \in \mathbb{R} \mid y \geq x \}$ and $Q(x) = \{ y \in \mathbb{R} \mid y \geq 4 x \}$. Take $(x_1, y_1) = (0, 0) = (x_1, y_2)$. A simple calculation shows that $H(x_1) - y_1 \subseteq \mathbb{R}_+$ and $Q(x_1) - y_2 \subseteq \mathbb{R}_+$. Then we obtain
$$\partial H(0, 0) = \{ \xi_1 \in \mathbb{R} : \xi_1 \leq 1 \}$$
and
$$\partial Q(0, 0) = \{ \xi_2 \in \mathbb{R} : \xi_2 \leq 4 \},$$
so
$$\partial H(0, 0) + \partial Q(0, 0) = \{ \xi_1 + \xi_2 \in \mathbb{R} : \xi_1 + \xi_2 \leq 5 \}.$$
Moreover,
$$\partial (H + Q)(0, 0) = \{ \xi_3 \in \mathbb{R} : \xi_3 \leq 5 \}.$$
Therefore, $\partial H(x_1, y_1) + \partial Q(x_1, y_2) \subseteq \partial (H + Q)(x_1, y_1 + y_2)$.

4. The Optimality Condition for the Uncertain Set-Valued Optimization Problem

Problem (SOP) has been studied extensively without taking data uncertainty into account. However, in most real-world applications, optimization problems involve uncertainty. To define an uncertain set-valued optimization problem (USOP), we assume that the uncertainties in the objective function are given as scenarios from a known finite uncertainty set $U = \{ u_1, u_2, \ldots, u_m \} \subseteq \mathbb{R}^m$, where each $u_i$ is an uncertain parameter, $i = 1, \ldots, m$. When there is data uncertainty in both the objectives and the constraints, problem (SOP) can be described by the following uncertain set-valued optimization problem (USOP):
$$(\mathrm{USOP}) \quad \min \ H(z, u_i) = \{ H_1(z, u_i), H_2(z, u_i), \ldots, H_k(z, u_i), \ldots, H_q(z, u_i) \} \quad \text{s.t.} \ z \in M, \ u_i \in U, \ B_j(z, v_j) \subseteq -\mathbb{R}_+, \ v_j \in V_j, \ j = 1, \ldots, l,$$
where $H_k : M \times \mathbb{R}^m \to 2^{\mathbb{R}}$, $k = 1, \ldots, q$, and $B_j : M \times \mathbb{R}^l \to 2^{\mathbb{R}}$, $j = 1, \ldots, l$, are given set-valued maps, and the uncertain parameter $v_j$ belongs to a compact and convex uncertainty set $V_j \subseteq \mathbb{R}^l$.
Let $G : M \times U \to 2^{\mathbb{R}}$ be a set-valued map; $\max_{u_i \in U} G(z, u_i)$ is defined by
$$G(z, u_i) \preceq^l_{\mathbb{R}_+} \max_{u_i \in U} G(z, u_i), \qquad i = 1, \ldots, m.$$
In this paper, we investigate problem (USOP) using a robust approach. As is well known, there is no proper method for solving problem (USOP) directly, so it is necessary to replace it by a deterministic version, namely the robust counterpart of problem (USOP). By this means, various concepts of robustness have been proposed on the basis of different robust counterparts to describe the preferences of decision makers.
The most celebrated and well-researched robustness concept is worst-case robustness (also known as min-max robustness or strict robustness in the literature). The idea is to minimize the worst possible objective function value and to search for a solution that is good enough in the worst case. Meanwhile, the constraints should be satisfied for every parameter $v_j \in V_j$, $j = 1, \ldots, l$. Worst-case robustness is a conservative concept and reflects the pessimistic attitude of a decision maker. The robust (worst-case) counterpart of problem (USOP) is as follows:
$$(\mathrm{URSOP}) \quad \min \ \max_{u_i \in U} H(z, u_i) = \Big\{ \max_{u_i \in U} H_1(z, u_i), \max_{u_i \in U} H_2(z, u_i), \ldots, \max_{u_i \in U} H_k(z, u_i), \ldots, \max_{u_i \in U} H_q(z, u_i) \Big\} \quad \text{s.t.} \ z \in M, \ B_j(z, v_j) \subseteq -\mathbb{R}_+, \ \forall v_j \in V_j, \ j = 1, \ldots, l.$$
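For intuition, consider sets of the special half-line form $H_k(z, u) = \{ y \in \mathbb{R} \mid y \geq h_k(z, u) \}$; with respect to $\preceq^l_{\mathbb{R}_+}$, such sets are compared by their lower bounds, so the worst-case objective reduces to a scalar maximum over the finite scenario set. The following sketch (our own toy model; the names h and robust_lower_bound are illustrative, not from the paper) shows this reduction:

```python
# Hedged toy sketch: for H(z, u) = [h(z, u), +inf), the worst-case objective
# max_{u in U} H(z, u) is again a half-line whose lower bound is the scalar
# maximum of h(z, u) over the finite scenario set U.
def robust_lower_bound(h, z, scenarios):
    """Lower bound of max_{u in U} H(z, u) for H(z, u) = [h(z, u), +inf)."""
    return max(h(z, u) for u in scenarios)

# Example: h(z, u) = u * z**2 with U = {1, 2}; the robust counterpart then
# amounts to minimizing z -> 2 * z**2 over the feasible set.
h = lambda z, u: u * z * z
print(robust_lower_bound(h, 0.5, [1, 2]))  # prints 0.5 (= 2 * 0.25)
```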
Definition 8. The robust feasible set of problem (USOP) is defined by
$$A := \{ z \in M \mid B_j(z, v_j) \subseteq -\mathbb{R}_+, \ \forall v_j \in V_j, \ j = 1, \ldots, l \}.$$
We assume that $A \neq \emptyset$. Obviously, the set of all robust feasible solutions to problem (USOP) is the same as the set of all feasible solutions to problem (URSOP).
Definition 9. $\breve{z} \in A$ is said to be a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution to problem (USOP) if $\breve{z}$ is a $\preceq^l_{\mathbb{R}_+}$-efficient solution to problem (URSOP), i.e., for all $z \in A$ and $k = 1, \ldots, q$,
$$\max_{u_i \in U} H_k(\breve{z}, u_i) \preceq^l_{\mathbb{R}_+} \max_{u_i \in U} H_k(z, u_i).$$
In this part, we establish a necessary and sufficient optimality condition for $\preceq^l_{\mathbb{R}_+}$-robust efficient solutions to problem (USOP).
Theorem 5. Let $H_k : M \times \mathbb{R}^m \to 2^{\mathbb{R}}$, $k = 1, \ldots, q$, and $B_j : M \times \mathbb{R}^l \to 2^{\mathbb{R}}$, $j = 1, \ldots, l$, be set-valued maps, $\breve{z} \in M$, $\breve{y} \in \bigcap_{u_i \in U} H_k(\breve{z}, u_i)$ and $\breve{y}_j \in \bigcap_{v_j \in V_j} B_j(\breve{z}, v_j)$. Assume that the following conditions hold:
(i) $H_k$ is bounded on $M \times U$;
(ii) $\max_{u_i \in U} H_k(z, u_i)$ exists for all $z \in M$;
(iii) for any i, j and k, $H_k(\breve{z}, u_i) - \breve{y} \subseteq \mathbb{R}_+$ and $B_j(\breve{z}, v_j) - \breve{y}_j \subseteq \mathbb{R}_+$;
(iv) for any j and k, $H_k$ and $B_j$ are second-order strong subdifferentiable at $(\breve{z}, \breve{y})$ and $(\breve{z}, \breve{y}_j)$, respectively.
Then $\breve{z}$ is a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution to problem (USOP) if and only if for any i, j and k, there exist $\breve{u}_i \in U$, $\breve{v}_j \in V_j$ and $\breve{\mu}_j \in \mathbb{R}_+$ such that
$$0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) + \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j),$$
$$(\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}$$
and
$$H_k(\breve{z}, \breve{u}_i) = \max_{u_i \in U} H_k(\breve{z}, u_i).$$
Proof. (⟹) Let $\breve{z}$ be a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution to problem (USOP). Then $\breve{z} \in A$. Hence, for all $v_j \in V_j$, we have $B_j(\breve{z}, v_j) \subseteq -\mathbb{R}_+$. Thus, take $\breve{v}_j \in V_j$ such that
$$B_j(\breve{z}, \breve{v}_j) \subseteq -\mathbb{R}_+.$$
Moreover, for any j, there exists $\breve{\mu}_j \in \mathbb{R}_+$ such that
$$(\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}. \quad (5)$$
In fact, there are two cases that verify (5):
(i) If $B_j(\breve{z}, \breve{v}_j) = \{ 0 \}$, then taking an arbitrary $\breve{\mu}_j > 0$, we get $(\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}$.
(ii) If $B_j(\breve{z}, \breve{v}_j) \neq \{ 0 \}$, then taking $\breve{\mu}_j = 0$, we easily get $(\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}$.
Since U is a finite set and $H_k$ is bounded, there exists $\breve{u}_i \in U$ such that
$$H_k(\breve{z}, \breve{u}_i) = \max_{u_i \in U} H_k(\breve{z}, u_i).$$
According to the definition of the second-order strong subdifferential, one obtains
$$0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) \quad \text{and} \quad 0 \in \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j).$$
Therefore, we get
$$0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) + \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j).$$
(⟸) Assume that for any i, j and k, there exist $\breve{z} \in A$, $\breve{u}_i \in U$, $\breve{v}_j \in V_j$ and $\breve{\mu}_j \in \mathbb{R}_+$ such that
$$0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) + \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j),$$
$$(\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}$$
and
$$H_k(\breve{z}, \breve{u}_i) = \max_{u_i \in U} H_k(\breve{z}, u_i). \quad (6)$$
By Theorem 3 and Corollary 1, we get
$$\partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) + \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j) = \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) + \sum_{j=1}^l \partial_s^2 (\breve{\mu}_j^2 B_j)(\cdot, \breve{v}_j)(\breve{z}, \breve{\mu}_j^2 \breve{y}_j) \subseteq \sqrt{l + 1} \, \partial_s^2 \Big( H_k(\cdot, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j^2 B_j)(\cdot, \breve{v}_j) \Big) \Big( \breve{z}, \ \breve{y} + \sum_{j=1}^l \breve{\mu}_j^2 \breve{y}_j \Big).$$
Since $0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y}) + \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j)$, one has
$$0 \in \sqrt{l + 1} \, \partial_s^2 \Big( H_k(\cdot, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j^2 B_j)(\cdot, \breve{v}_j) \Big) \Big( \breve{z}, \ \breve{y} + \sum_{j=1}^l \breve{\mu}_j^2 \breve{y}_j \Big).$$
Therefore,
$$0 \in \partial_s^2 \Big( H_k(\cdot, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j^2 B_j)(\cdot, \breve{v}_j) \Big) \Big( \breve{z}, \ \breve{y} + \sum_{j=1}^l \breve{\mu}_j^2 \breve{y}_j \Big).$$
Obviously, $\breve{y} \in H_k(\breve{z}, \breve{u}_i)$ and $\breve{y}_j \in B_j(\breve{z}, \breve{v}_j)$. Then, by Definition 7, we get
$$y - \breve{y} + \sum_{j=1}^l \breve{\mu}_j^2 y_j - \sum_{j=1}^l \breve{\mu}_j^2 \breve{y}_j \in \mathbb{R}_+, \qquad \forall z \in A, \ y \in H_k(z, \breve{u}_i), \ y_j \in B_j(z, \breve{v}_j). \quad (7)$$
Since $(\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}$ for any j, we calculate that $\sum_{j=1}^l (\breve{\mu}_j^2 B_j)(\breve{z}, \breve{v}_j) = \{ 0 \}$; i.e., for the preceding element $\breve{y}_j \in B_j(\breve{z}, \breve{v}_j)$, we have $\sum_{j=1}^l \breve{\mu}_j^2 \breve{y}_j = 0$. Together with $\sum_{j=1}^l (\breve{\mu}_j^2 B_j)(z, \breve{v}_j) \subseteq -\mathbb{R}_+$ for all $z \in A$, i.e., $\sum_{j=1}^l \breve{\mu}_j^2 y_j \leq 0$ for all $z \in A$ and $y_j \in B_j(z, \breve{v}_j)$, it follows from (7) that
$$y - \breve{y} \in \mathbb{R}_+, \qquad \forall z \in A, \ y \in H_k(z, \breve{u}_i),$$
i.e.,
$$H_k(\breve{z}, \breve{u}_i) \preceq^l_{\mathbb{R}_+} H_k(z, \breve{u}_i), \qquad \forall z \in A.$$
Moreover, by the transitivity of the $\preceq^l_{\mathbb{R}_+}$ set-order relation, it follows from (6) and $H_k(z, \breve{u}_i) \preceq^l_{\mathbb{R}_+} \max_{u_i \in U} H_k(z, u_i)$ that
$$\max_{u_i \in U} H_k(\breve{z}, u_i) \preceq^l_{\mathbb{R}_+} \max_{u_i \in U} H_k(z, u_i), \qquad \forall z \in A.$$
Thus, $\breve{z}$ is a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution to problem (USOP). The proof is complete. □
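As a sanity check of Theorem 5, consider the following toy instance (ours, not from the paper): let $M = [-1, 1]$, $q = l = 1$, $U = \{ 1, 2 \}$, $V_1 = [1, 2]$, $H(z, u) = \{ u z^2 \}$ and $B_1(z, v) = \{ v z^2 - 1 \}$. Then $A = [-\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}]$, $\max_{u \in U} H(z, u) = \{ 2 z^2 \}$, and $\breve{z} = 0$ is a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution, since $\{ 0 \} \preceq^l_{\mathbb{R}_+} \{ 2 z^2 \}$ for every $z \in A$. Taking $\breve{y} = 0$, $\breve{y}_1 = -1$, $\breve{u} = 2$, any $\breve{v} \in V_1$ and $\breve{\mu}_1 = 0$, one checks that $0 \in [-\sqrt{2}, \sqrt{2}] = \partial_s^2 H(\cdot, 2)(0, 0)$, $(\breve{\mu}_1 B_1)(0, \breve{v}) = \{ 0 \}$ and $H(0, 2) = \{ 0 \} = \max_{u \in U} H(0, u)$, exactly as the theorem predicts.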
Remark 4.
(i) We extend the uncertain scalar optimization problem of [32] (Theorem 3.1) to the uncertain set-valued optimization problem (USOP) in Theorem 5.
(ii) [32] (Theorem 3.1) is established under conditions of continuity and cone-convex-concavity, and [15] (Corollaries 3.1 and 3.2) are established under conditions of upper semi-continuity, whereas Theorem 5 only requires the existence of the maximum and boundedness. Since bounded functions need not be continuous, our result in Theorem 5 extends [32] (Theorem 3.1) and [15] (Corollaries 3.1 and 3.2).

5. Wolfe Type Robust Duality of Problem (USOP)

This section covers the robust weak duality and the robust strong duality. We begin by introducing a Wolfe type dual problem $(\mathrm{DSOP}_W)$ for the uncertain set-valued optimization problem (USOP).
We now consider the Wolfe type dual problem $(\mathrm{DSOP}_W)$ of problem (USOP):
$$(\mathrm{DSOP}_W) \quad \max \ \Big( H_1(z, u_i) + \sum_{j=1}^l (\mu_j B_j)(z, v_j), \ \ldots, \ H_q(z, u_i) + \sum_{j=1}^l (\mu_j B_j)(z, v_j) \Big)$$
$$\text{s.t.} \quad 0 \in \partial_s^2 H_k(\cdot, u_i)(z, y) + \sum_{j=1}^l \mu_j \, \partial_s^2 B_j(\cdot, v_j)(z, y_j), \quad (\mu_j B_j)(z, v_j) \subseteq -\mathbb{R}_+, \ j = 1, \ldots, l,$$
$$u_i \in U, \ i = 1, \ldots, m, \ v_j \in V_j, \ \mu_j \in \mathbb{R}_+, \ z \in A, \ y \in H_k(z, u_i), \ y_j \in B_j(z, v_j), \ k = 1, \ldots, q.$$
Definition 10. The robust feasible solution set P of problem $(\mathrm{DSOP}_W)$ is defined by
$$P := \Big\{ (z, \mu_j, u_i, v_j) \ \Big| \ 0 \in \partial_s^2 H_k(\cdot, u_i)(z, y) + \sum_{j=1}^l \mu_j \, \partial_s^2 B_j(\cdot, v_j)(z, y_j), \ (\mu_j B_j)(z, v_j) \subseteq -\mathbb{R}_+, \ v_j \in V_j, \ \mu_j \in \mathbb{R}_+, \ j = 1, \ldots, l, \ u_i \in U, \ i = 1, \ldots, m, \ z \in A, \ y \in H_k(z, u_i), \ y_j \in B_j(z, v_j), \ k = 1, \ldots, q \Big\}.$$
In this section, we suppose that $P \neq \emptyset$.
Definition 11. $(\breve{x}, \breve{\mu}_j, \breve{u}_i, \breve{v}_j) \in P$ is said to be a $\preceq^u_{\mathbb{R}_+}$-robust efficient solution to problem $(\mathrm{DSOP}_W)$ if there is no feasible solution $(z, \mu_j, u_i, v_j) \in P$ other than $(\breve{x}, \breve{\mu}_j, \breve{u}_i, \breve{v}_j)$ such that
$$H_k(\breve{x}, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j) \ \prec^u_{\mathbb{R}_+} \ H_k(z, u_i) + \sum_{j=1}^l (\mu_j B_j)(z, v_j), \qquad i = 1, \ldots, m, \ k = 1, \ldots, q.$$
Theorem 6 (Robust weak duality). If for any k, $H_k$ is bounded and closed, and $\max_{u_i \in U} H_k(x, u_i)$ exists for all $x \in M$, then for any feasible solution x to problem (URSOP) and any feasible solution $(z, \mu_j, u_i, v_j)$ to problem $(\mathrm{DSOP}_W)$, we have
$$\max_{u_p \in U} H_k(x, u_p) \ \not\prec^u_{\mathbb{R}_+} \ H_k(z, u_i) + \sum_{j=1}^l (\mu_j B_j)(z, v_j), \qquad i = 1, \ldots, m, \ k = 1, \ldots, q. \quad (8)$$
Proof. Let x be a feasible solution to problem (URSOP) and $(z, \mu_j, u_i, v_j)$ a feasible solution to problem $(\mathrm{DSOP}_W)$.
To the contrary, suppose that (8) does not hold. Then there exist $\breve{x}, \breve{z} \in A$, $\breve{u}_i \in U$, $\breve{v}_j \in V_j$ and $\breve{\mu}_j \in \mathbb{R}_+$ such that
$$\max_{u_p \in U} H_k(\breve{x}, u_p) \ \prec^u_{\mathbb{R}_+} \ H_k(\breve{z}, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j). \quad (9)$$
From $\sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j) \subseteq -\mathbb{R}_+$, we have
$$\max_{u_p \in U} H_k(\breve{x}, u_p) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j) \ \prec^u_{\mathbb{R}_+} \ H_k(\breve{z}, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{z}, \breve{v}_j). \quad (10)$$
Then, for all $\mathring{y} \in \max_{u_p \in U} H_k(\breve{x}, u_p)$ and $y_j \in B_j(\breve{x}, \breve{v}_j)$, there exist $\breve{y} \in H_k(\breve{z}, \breve{u}_i)$ and $\breve{y}_j \in B_j(\breve{z}, \breve{v}_j)$ such that
$$\mathring{y} + \sum_{j=1}^l \breve{\mu}_j y_j \ <_{\mathbb{R}_+} \ \breve{y} + \sum_{j=1}^l \breve{\mu}_j \breve{y}_j,$$
i.e.,
$$\Big( \mathring{y} + \sum_{j=1}^l \breve{\mu}_j y_j \Big) - \Big( \breve{y} + \sum_{j=1}^l \breve{\mu}_j \breve{y}_j \Big) \in -\operatorname{int} \mathbb{R}_+.$$
Due to $H_k(\breve{x}, \breve{u}_i) \preceq^c_{\mathbb{R}_+} \max_{u_p \in U} H_k(\breve{x}, u_p)$, we can conclude that $H_k(\breve{x}, \breve{u}_i) \neq \max_{u_p \in U} H_k(\breve{x}, u_p)$. In fact, suppose that $H_k(\breve{x}, \breve{u}_i) = \max_{u_p \in U} H_k(\breve{x}, u_p)$. Then it follows from (9) that
$$H_k(\breve{x}, \breve{u}_i) \ \prec^u_{\mathbb{R}_+} \ H_k(\breve{x}, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j).$$
Since $H_k$ is bounded and closed, and $\sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j) \subseteq -\mathbb{R}_+$, we obtain
$$\max_{u_p \in U} H_k(\breve{x}, u_p) \ \not\prec^u_{\mathbb{R}_+} \ H_k(\breve{x}, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j),$$
which is impossible. Thus, $H_k(\breve{x}, \breve{u}_i) \neq \max_{u_p \in U} H_k(\breve{x}, u_p)$. Then, by the definition of the $\preceq^c_{\mathbb{R}_+}$ set-order relation, one has
$$y \leq_{\mathbb{R}_+} \mathring{y}, \qquad \forall y \in H_k(\breve{x}, \breve{u}_i), \ \mathring{y} \in \max_{u_p \in U} H_k(\breve{x}, u_p). \quad (11)$$
It follows from $0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{z}, \breve{y})$ and (11) that
$$y - \breve{y} - \langle 0, x - \breve{z} \rangle^2 \in \mathbb{R}_+, \qquad \forall x \in A, \ y \in H_k(x, \breve{u}_i),$$
i.e.,
$$\mathring{y} - \breve{y} - \langle 0, x - \breve{z} \rangle^2 \in \mathbb{R}_+, \qquad \forall x \in A, \ \mathring{y} \in \max_{u_p \in U} H_k(x, u_p). \quad (12)$$
Moreover, it follows from $0 \in \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{z}, \breve{y}_j)$ that
$$\sum_{j=1}^l \breve{\mu}_j y_j - \sum_{j=1}^l \breve{\mu}_j \breve{y}_j - \langle 0, x - \breve{z} \rangle^2 \in \mathbb{R}_+, \qquad \forall x \in A, \ y_j \in B_j(x, \breve{v}_j). \quad (13)$$
Thus, it follows from (12) and (13) that
$$\Big( \mathring{y} + \sum_{j=1}^l \breve{\mu}_j y_j \Big) - \Big( \breve{y} + \sum_{j=1}^l \breve{\mu}_j \breve{y}_j \Big) \in \mathbb{R}_+, \qquad \forall \mathring{y} \in \max_{u_p \in U} H_k(\breve{x}, u_p), \ y_j \in B_j(\breve{x}, \breve{v}_j),$$
which contradicts (10). Therefore, for any feasible solution x to problem (URSOP) and any feasible solution $(z, \mu_j, u_i, v_j)$ to problem $(\mathrm{DSOP}_W)$, we have
$$\max_{u_p \in U} H_k(x, u_p) \ \not\prec^u_{\mathbb{R}_+} \ H_k(z, u_i) + \sum_{j=1}^l (\mu_j B_j)(z, v_j), \qquad i = 1, \ldots, m, \ k = 1, \ldots, q.$$
This completes the proof. □
Theorem 7 (Robust strong duality). Let $H_k : M \times \mathbb{R}^m \to 2^{\mathbb{R}}$, $k = 1, \ldots, q$, and $B_j : M \times \mathbb{R}^l \to 2^{\mathbb{R}}$, $j = 1, \ldots, l$, be set-valued maps, $\breve{x} \in M$, $\breve{y} \in \bigcap_{u_i \in U} H_k(\breve{x}, u_i)$ and $\breve{y}_j \in \bigcap_{v_j \in V_j} B_j(\breve{x}, v_j)$. Assume that the following conditions hold:
(i) $H_k$ is bounded on $M \times U$ for any k;
(ii) $\max_{u_i \in U} H_k(x, u_i)$ exists for all $x \in M$ and any k;
(iii) for any i, j and k, $H_k(\breve{x}, u_i) - \breve{y} \subseteq \mathbb{R}_+$ and $B_j(\breve{x}, v_j) - \breve{y}_j \subseteq \mathbb{R}_+$;
(iv) for any j and k, $H_k$ and $B_j$ are second-order strong subdifferentiable at $(\breve{x}, \breve{y})$ and $(\breve{x}, \breve{y}_j)$, respectively;
(v) $\breve{x} \in A$ is a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution to problem (USOP).
Then for any i, j and k, there exist $\breve{u}_i \in U$, $\breve{v}_j \in V_j$ and $\breve{\mu}_j \in \mathbb{R}_+$ such that $(\breve{x}, \breve{\mu}_j, \breve{u}_i, \breve{v}_j)$ is a $\preceq^u_{\mathbb{R}_+}$-robust efficient solution to problem $(\mathrm{DSOP}_W)$.
Proof. Let $\breve{x}$ be a $\preceq^l_{\mathbb{R}_+}$-robust efficient solution to problem (USOP). By Theorem 5, for any i, j and k, there exist $\breve{u}_i \in U$, $\breve{v}_j \in V_j$ and $\breve{\mu}_j \in \mathbb{R}_+$ such that
$$0 \in \partial_s^2 H_k(\cdot, \breve{u}_i)(\breve{x}, \breve{y}) + \sum_{j=1}^l \breve{\mu}_j \, \partial_s^2 B_j(\cdot, \breve{v}_j)(\breve{x}, \breve{y}_j),$$
$$(\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j) = \{ 0 \} \quad (14)$$
and
$$H_k(\breve{x}, \breve{u}_i) = \max_{u_i \in U} H_k(\breve{x}, u_i), \qquad k = 1, 2, \ldots, q. \quad (15)$$
Therefore, $(\breve{x}, \breve{\mu}_j, \breve{u}_i, \breve{v}_j)$ is a feasible solution to problem $(\mathrm{DSOP}_W)$. Then, for any feasible solution $(z, \mu_j, u_i, v_j)$ to problem $(\mathrm{DSOP}_W)$, it follows from (14), (15) and Theorem 6 that
$$H_k(\breve{x}, \breve{u}_i) + \sum_{j=1}^l (\breve{\mu}_j B_j)(\breve{x}, \breve{v}_j) = \max_{u_i \in U} H_k(\breve{x}, u_i) \ \not\prec^u_{\mathbb{R}_+} \ H_k(z, u_i) + \sum_{j=1}^l (\mu_j B_j)(z, v_j).$$
Hence, $(\breve{x}, \breve{\mu}_j, \breve{u}_i, \breve{v}_j)$ is a $\preceq^u_{\mathbb{R}_+}$-robust efficient solution to problem $(\mathrm{DSOP}_W)$. The proof is complete. □
Remark 5.
Theorems 6 and 7 generalize Theorems 4.1 and 4.2 in [32], respectively, from the scalar case to the set-valued case.

6. Conclusions

In this paper, we introduced a new second-order strong subdifferential of set-valued maps and the set-based robust efficient solutions of uncertain set-valued optimization problems, and then derived a necessary and sufficient optimality condition for set-based robust efficient solutions of the uncertain set-valued optimization problem. Finally, we established the robust weak duality and the robust strong duality between the uncertain set-valued optimization problem and its Wolfe type dual problem. Our results motivate further investigation of optimality conditions and duality theorems for set-valued optimization problems, and the main results can be applied to risk management.

Author Contributions

Conceptualization, Y.Z. and Q.W.; methodology, Y.Z., Q.W. and T.T.; writing—original draft, Y.Z.; writing—review and editing, Y.Z., Q.W. and T.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Natural Science Foundation of China (No. 11971078), the Group Building Project for Scientific Innovation for Universities in Chongqing (CXQT21021), and the Graduate Student Science and Technology Innovation Project (2021ST004).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Beck, A.; Ben-Tal, A. Duality in robust optimization: Primal worst equals dual best. Oper. Res. Lett. 2009, 37, 1–6.
2. Ben-Tal, A.; Nemirovski, A. Robust optimization–methodology and applications. Math. Program. 2002, 92, 453–480.
3. Jeyakumar, V.; Li, G.Y. Strong duality in robust convex programming: Complete characterizations. SIAM J. Optim. 2010, 20, 3384–3407.
4. Jeyakumar, V.; Li, G.; Lee, G.M. Robust duality for generalized convex programming problems under data uncertainty. Nonlinear Anal. 2012, 75, 1362–1373.
5. Jeyakumar, V.; Lee, G.M.; Li, G. Characterizing robust solution sets of convex programs under data uncertainty. J. Optim. Theory Appl. 2015, 164, 407–435.
6. Clarke, F.H. Optimization and Nonsmooth Analysis; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1990.
7. Hamel, A.H.; Heyde, F. Duality for set-valued measures of risk. SIAM J. Financ. Math. 2010, 1, 66–95.
8. Hamel, A.H.; Heyde, F.; Rudloff, B. Set-valued risk measures for conical market models. Math. Financ. Econ. 2011, 5, 1–28.
9. Hamel, A.H.; Kostner, D. Cone distribution functions and quantiles for multivariate random variables. J. Multivar. Anal. 2018, 167, 97–113.
10. Eichfelder, G.; Jahn, J. Vector optimization problems and their solution concepts. In Recent Developments in Vector Optimization; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1–27.
11. Young, R.C. The algebra of many-valued quantities. Math. Ann. 1931, 104, 260–290.
12. Kuroiwa, D. Some duality theorems of set-valued optimization. RIMS Kokyuroku 1999, 1079, 15–19.
13. Kuroiwa, D.; Tanaka, T.; Ha, T. On cone convexity of set-valued maps. Nonlinear Anal. 1997, 30, 1487–1496.
14. Wei, H.Z.; Chen, C.R.; Li, S.J. Necessary optimality conditions for nonsmooth robust optimization problems. Optimization 2022, 71, 1817–1837.
15. Wang, J.; Li, S.J.; Feng, M. Unified robust necessary optimality conditions for nonconvex nonsmooth uncertain multiobjective optimization. J. Optim. Theory Appl. 2022, 195, 226–248.
16. Rockafellar, R.T. Convex Functions and Dual Extremum Problems. Ph.D. Thesis, Harvard University, Cambridge, MA, USA, 1963.
17. Song, W. Weak subdifferential of set-valued mappings. Optimization 2003, 52, 263–276.
18. Tanino, T. Conjugate duality in vector optimization. J. Math. Anal. Appl. 1992, 167, 84–97.
19. Sach, P.H. Moreau–Rockafellar theorems for nonconvex set-valued maps. J. Optim. Theory Appl. 2007, 133, 213–227.
20. Yang, X.Q. A Hahn–Banach theorem in ordered linear spaces and its applications. Optimization 1992, 25, 1–9.
21. Chen, G.Y.; Jahn, J. Optimality conditions for set-valued optimization problems. Math. Methods Oper. Res. 1998, 48, 187–200.
22. Borwein, J.M. A Lagrange multiplier theorem and a sandwich theorem for convex relations. Math. Scand. 1981, 48, 189–204.
23. Peng, J.W.; Lee, H.W.J.; Rong, W.D.; Yang, X.M. Hahn–Banach theorems and subgradients of set-valued maps. Math. Methods Oper. Res. 2005, 61, 281–297.
24. Li, S.J.; Guo, X.L. Weak subdifferential for set-valued mappings and its applications. Nonlinear Anal. 2009, 71, 5781–5789.
25. Hernández, E.; Rodríguez-Marín, L. Weak and strong subgradients of set-valued maps. J. Optim. Theory Appl. 2011, 149, 352–365.
26. Long, X.J.; Peng, J.W.; Li, X.B. Weak subdifferentials for set-valued mappings. J. Optim. Theory Appl. 2014, 162, 1–12.
27. İnceoğlu, G. Some properties of second-order weak subdifferentials. Turk. J. Math. 2021, 45, 955–960.
28. Suneja, S.K.; Khurana, S.; Bhatia, M. Optimality and duality in vector optimization involving generalized type I functions over cones. J. Glob. Optim. 2011, 49, 23–35.
29. Chuong, T.D.; Kim, D.S. Nonsmooth semi-infinite multiobjective optimization problems. J. Optim. Theory Appl. 2014, 160, 748–762.
30. Chuong, T.D. Optimality and duality for robust multiobjective optimization problems. Nonlinear Anal. 2016, 134, 127–143.
31. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970.
32. Sun, X.K.; Peng, Z.Y.; Guo, X.L. Some characterizations of robust optimal solutions for uncertain convex optimization problems. Optim. Lett. 2016, 10, 1463–1478.
33. Som, K.; Vetrivel, V. On robustness for set-valued optimization problems. J. Glob. Optim. 2021, 79, 905–925.
34. Kuroiwa, D. The natural criteria in set-valued optimization. Research on nonlinear analysis and convex analysis. Surikaisekikenkyusho Kokyuroku 1998, 1031, 85–90.
35. Chiriaev, A.; Walster, G.W. Interval Arithmetic Specification; Technical Report, 1998. Available online: http://www.mscs.mu.edu/globsol/walster-papers.html (accessed on 2 October 2022).