Article

An Efficient Approach to Solve the Constrained OWA Aggregation Problem

1 Department of Neurosurgery, Gachon University Gil Hospital, 21 Namdong-daero 774 beon-gil, Namdong-gu, Incheon 21565, Korea
2 College of Business and Economics, Chung-Ang University, 221 Heukseok Dongjak, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(4), 724; https://doi.org/10.3390/sym14040724
Submission received: 8 March 2022 / Revised: 21 March 2022 / Accepted: 24 March 2022 / Published: 2 April 2022

Abstract

Constrained ordered weighted averaging (OWA) aggregation seeks to solve the OWA optimization problem subject to multiple constraints. The problem is nonlinear because the objective function reorders the argument variables, and the solution approach via mixed integer linear programming is quite complex even for a problem with a single constraint whose coefficients are all one. Recently, this has been relaxed to allow a constraint with variable coefficients, but the solution approach remains abstruse. In this paper, we present a new, intuitive method that augments the problem with auxiliary symmetric constraints so that it can be converted into linear programming problems. The side effect is that many small sub-problems must be solved. Interestingly, however, we discover that the extreme points of the feasible regions of these sub-problems share common symmetric features. Consequently, we show that the structure of the extreme points, together with the reordering of input arguments peculiar to the OWA operator, leads to a closed-form optimal solution to the constrained OWA optimization problem. Further, we extend our findings to the OWA optimization problem constrained by a range of order-preserving constraints and present the closed-form optimal solutions.

1. Introduction

Yager [1] introduced the ordered weighted averaging (OWA) operator to provide a parameterized class of mean-like operators that can be used to aggregate a collection of input arguments. A unique feature of the OWA operator is that the operator weights are associated with the arguments that are ordered by their magnitudes. Therefore, the OWA operator is the inner product of an ordered input vector and a weighting vector. In the short time since its first appearance, the OWA operator has been applied in diverse fields such as neural networks, database systems, fuzzy logic controllers, multi-criteria decision making, data mining, location-based services (LBS), and geographical information systems (GIS).
Yager [2] presented a new class of OWA aggregation problem, the so-called constrained OWA aggregation problem, and exemplified a simplified version with a single constraint to indicate that it can be formulated as a mixed integer linear programming problem. Later, Carlsson et al. [3] presented a simple algorithm for the exact computation of optimal solutions to a single constrained OWA aggregation problem. Ahn [4] presented a solution to the same problem through a linear transformation that is accomplished by incorporating weak inequality constraints representing order relations of variables. Recently, Coroianu and Fullér [5] solved the constrained OWA aggregation problem with a single constraint having variable coefficients, a result later extended to co-monotone constraints that share the same ordering permutation of the coefficients [6]. Chen and Tang [7] considered a constrained OWA aggregation problem with a single constraint and lower bounded variables; in particular, they presented four types of solution, depending on the number of zero elements, for the three-dimensional constrained OWA aggregation problem with lower bounded variables. Tang and Yang [8] dealt with the maximization constrained OWA problem with lower bounded variables and the minimization constrained OWA problem with upper bounded variables, and, for the three-dimensional case of these models, presented explicit optimal solutions both theoretically and empirically.
In this paper, we deal with a single constrained OWA aggregation problem having variable coefficients in a completely different manner, building on the development of Ahn [4]. The proposed method is intuitive, and thus easy to understand, and can be readily extended to include order-preserving constraints. Specifically, we incorporate auxiliary weak inequality constraints on the variables to transform the problem into linear programming (LP) problems, which inevitably produces as many LPs as the factorial of the number of variables. The reordering property of the OWA aggregation instantly yields the LP equivalent once we add the weak inequalities to the set of original constraints. Instead of solving each LP and selecting the largest objective function value as the optimal solution (indeed, the number of problems to be solved grows factorially with the number of variables), we determine the extreme points of the feasible region of each LP. Interestingly, we find that the set of extreme points consists of a vector with one positive element, a vector with two equal positive elements, and so on; moreover, the places where the positive elements appear depend solely on the order of variables in the incorporated weak inequalities. Consequently, we show that the features of the extreme points and the reordering process of input arguments peculiar to the OWA operator lead to a closed-form optimal solution to the constrained OWA optimization problem. Above all, we note that this is achieved by the novel idea of incorporating symmetric weak inequalities into the set of original constraints.
The paper is organized as follows: in Section 2, we present a solution to a single constrained OWA optimization problem with variable coefficients; in Section 3, we deal with a multiple constrained OWA optimization problem, followed by concluding remarks in Section 4.

2. Single Constrained OWA Aggregation with Variable Coefficients

2.1. The OWA Operator

Definition 1 (Yager [1]).
An OWA operator of dimension $n$ is a mapping $F: \mathbb{R}^n \to \mathbb{R}$ that has an associated weighting vector $w = (w_1, w_2, \dots, w_n)$ such that $\sum_{j=1}^{n} w_j = 1$ and $w_j \ge 0$, $j = 1, \dots, n$. The function value $F$ of input arguments $x = (x_1, x_2, \dots, x_n)$ determines the aggregated value of the arguments in such a manner that

$F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$ (1)

with $y_j$ being the $j$th largest element of $x = (x_1, x_2, \dots, x_n)$.
The OWA operator is characterized by two distinctive procedures, namely, (a) reordering the input arguments and (b) associating weights with the reordered arguments. The reordering process, which is substantially different from that of other aggregation operators, has a crucial impact on the final aggregation outcomes. A proper determination of operator weights based on the mathematical programming approach is governed principally by two criteria: the attitudinal character (or orness) and the degree of information (or input argument) usage. Various objective functions are used to maximize information usage, whereas the attitudinal character is consistently imposed as a constraint in the optimization problem. The nonlinear objective functions in the mathematical programming approach include the maximum entropy, the minimal variability, the maximal Rényi entropy, the least squares method, and the chi-square method. The linear objective functions, relatively few in number, include the minimax (or improved) disparity [9,10] and the parametric approach [11]. Readers may refer to the surveys of recent developments in the determination of OWA operator weights [12,13]. The OWA operator weights determined by any one of the aforementioned methods serve as the coefficients of the objective function in the OWA aggregation problem.
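As a concrete illustration of the two procedures (reordering, then weighting), the following minimal Python sketch implements Definition 1; the weights and arguments used in the calls are illustrative, not taken from the paper:

```python
def owa(weights, args):
    """OWA aggregation (Definition 1): sort the arguments in descending
    order, then take the inner product with the weighting vector."""
    assert abs(sum(weights) - 1.0) < 1e-9 and min(weights) >= 0
    ordered = sorted(args, reverse=True)      # y_1 >= y_2 >= ... >= y_n
    return sum(w * y for w, y in zip(weights, ordered))

# The reordering step makes the aggregate independent of the order in
# which the arguments arrive:
v1 = owa([0.1, 0.4, 0.3, 0.2], [3, 1, 4, 2])
v2 = owa([0.1, 0.4, 0.3, 0.2], [1, 2, 3, 4])
assert v1 == v2 == 0.1 * 4 + 0.4 * 3 + 0.3 * 2 + 0.2 * 1
print(v1)
```

The assertion makes the point of procedure (a) explicit: any permutation of the inputs yields the same aggregate, because only the sorted arguments enter the inner product.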

2.2. The Constrained OWA Aggregation

Yager [2] considered the problem of maximizing an OWA aggregation constrained by a collection of linear inequalities:
maximize $F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $\{Ax \le b, \; x \ge 0\}$ (2)
where $y_j$ is the $j$th largest element of $x = (x_1, x_2, \dots, x_n)$, $A$ denotes an $m \times n$ matrix with elements $a_{ij}$, $i = 1, \dots, m$, $j = 1, \dots, n$, and $b$ denotes an $m$-vector with elements $b_i$, $i = 1, \dots, m$. Note that in the above constrained OWA aggregation problem, the input arguments are not specific values but variables that can take on a range of values within the feasible region of the constraints. Further, Yager [2] exemplified a simple constrained OWA aggregation problem with a single constraint, as shown in (3), to show that it can be formulated as a mixed integer linear programming problem:
maximize $F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $x_1 + x_2 + \dots + x_n \le 1$, $x_j \ge 0$, $j = 1, \dots, n$ (3)
where y j denotes the jth largest element of the bag { x 1 , , x n } .
Recently, Coroianu and Fullér [5] generalized Problem (3) by allowing, instead of an unvarying coefficient, variable coefficients α j , j = 1 , , n to the constraint:
$F^* = \text{maximize} \; F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le 1$ $(\alpha_j > 0)$, $x_j \ge 0$, $j = 1, \dots, n$ (4)
where y j denotes the jth largest element of the bag { x 1 , , x n } .
Here, we first deal with the same constrained OWA aggregation problem but from a totally different perspective, and later we extend it to the one with more constraints. Before proceeding further to deal with a general n variable case, we briefly introduce a constrained OWA optimization problem with three variables in Figure 1, which helps to grasp the concept behind our proposed method.
Figure 1. Schematic flow of our proposed approach (a case of three variables).
Layer 1 (Formulation). The optimal solution to the OWA optimization problem with variable coefficients in each of three variables is to be sought.
Layer 2 (Decomposition). The original problem in Layer 1 is decomposed into six sub-problems each of which has weak inequalities of three variables.
Layer 3 (Linearization). Each (nonlinear) sub-problem in Layer 2 becomes a linear programming problem due to the incorporation of the weak inequalities and the definition of the variables $y_i$, the $i$th largest of $\{x_1, x_2, x_3\}$.
Layer 4 (Extreme Points). The set of extreme points for each sub-problem is determined. Each linearized sub-problem shares common properties in the extreme points of its feasible region. First, the set of extreme points consists of a vector with one positive element, a vector with two equal positive elements, and a vector with three equal positive elements. Moreover, the places where the positive elements appear only depend on the order of permutations of the incorporated weak inequalities.
Layer 5 (Local Optimal Solution). The local optimal objective value $F_i^*$, $i = 1, \dots, 6$, for each sub-problem can be determined by multiplying the coefficients of the objective function by the elements of the extreme points, following the order of variables in the weak inequalities, since the ordered arguments are associated with the operator weights.
Layer 6 (Global Optimal Solution). The global optimal objective value is the largest among the six local optimal objective values, which can be further grouped into one positive element, two positive elements, and three positive elements across the six sub-problems, and the largest one among them is then selected.
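As a quick numerical sanity check of Layers 4 through 6, the Python sketch below (with illustrative weights and coefficients, not taken from the paper) computes the candidate prefix-ratio values implied by the extreme-point structure for a three-variable instance and verifies that no randomly sampled feasible point exceeds the best of them:

```python
import random

w = [0.2, 0.5, 0.3]        # illustrative OWA weights (sum to one)
alpha = [2.0, 1.0, 4.0]    # illustrative constraint coefficients

def owa(weights, args):
    # associate weights with the arguments sorted in descending order
    return sum(wi * y for wi, y in zip(weights, sorted(args, reverse=True)))

# Candidate values from the extreme-point structure (Layers 4-6): one,
# two, or three equal positive entries at the smallest coefficients.
a = sorted(alpha)
candidates = [sum(w[:k]) / sum(a[:k]) for k in (1, 2, 3)]
best = max(candidates)

# No randomly sampled point of {alpha . x <= 1, x >= 0} beats the best
# candidate, consistent with the claimed global optimum.
random.seed(0)
for _ in range(20000):
    x = [random.uniform(0.0, 1.5) for _ in range(3)]
    if sum(ai * xi for ai, xi in zip(alpha, x)) <= 1.0:
        assert owa(w, x) <= best + 1e-9

print(best)   # here 0.7/3: two equal positive entries at the two smallest alphas
```

The check is only a simulation, not a proof; the formal argument follows in Theorem 1.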
In general, the nonnegative orthant $\mathbb{R}_+^n$ is the union of the sets defined by the $n!$ chains of weak inequalities, each of which has an equal volume by symmetry, and thus $\mathbb{R}_+^n$ can be equivalently expressed as:

$\mathbb{R}_+^n = \{x_1 \ge x_2 \ge \dots \ge x_n \ge 0\} \cup \dots \cup \{x_n \ge x_{n-1} \ge \dots \ge x_1 \ge 0\}$.
Incorporating this decomposition of $\mathbb{R}_+^n$ into Problem (4) does not change its feasible region, and thus Problem (4) is equivalent to (5):
maximize $F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le 1$, $\{x_1 \ge x_2 \ge \dots \ge x_n\} \cup \dots \cup \{x_n \ge x_{n-1} \ge \dots \ge x_1\}$, $x_j \ge 0$, $j = 1, \dots, n$ (5)
Loosely speaking, the optimal solution to (4) corresponds to the largest objective function value over the $n!$ sub-problems, $F^* = \max_{i = 1, \dots, n!} [F_{\sigma(i)}^*]$, where $F_{\sigma(i)}^*$ is obtained by solving the following problem (Ahn [4] used a similar reasoning when solving the OWA optimization problem with a single constraint whose coefficients are all equal to one):
maximize $F_\sigma(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le 1$, $x_{\sigma(1)} \ge x_{\sigma(2)} \ge \dots \ge x_{\sigma(n)}$, $x_j \ge 0$, $j = 1, \dots, n$ (6)
where σ ( · ) denotes a permutation of { 1 ,   2 , , n } .
For convenience, consider the constraint $\{x_1 \ge x_2 \ge \dots \ge x_n\}$, where $\sigma(1) = 1, \sigma(2) = 2, \dots, \sigma(n) = n$, among the $n!$ different permutations. Incorporating this into Problem (4) immediately leads to the LP counterpart (7), since $y_j$ corresponds to $x_j$ for all $j$:

maximize $F_\sigma(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j x_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 1$, $x_1 \ge x_2 \ge \dots \ge x_n$, $x_j \ge 0$, $j = 1, \dots, n$ (7)
Note that the inequality constraint in (4) is transformed into an equality in (7): since all $w_j \ge 0$ and all $\alpha_j > 0$, scaling a feasible solution up never decreases the objective, so the constraint is binding at an optimum.
If we instead add the constraint $\{x_n \ge x_{n-1} \ge \dots \ge x_1\}$ to (4), the resulting linearized problem is:

maximize $F_\sigma(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j x_{n-j+1}$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 1$, $x_n \ge x_{n-1} \ge \dots \ge x_1$, $x_j \ge 0$, $j = 1, \dots, n$.
Instead of solving the many LP problems one by one (in fact, we have to solve $n!$ LP problems), we attempt to find properties of the extreme points of the feasible regions that are common to all of them. To do so, we start by finding the extreme points of the equality constraint in (7):

$v_1^{(0)} = (\tfrac{1}{\alpha_1}, 0, \dots, 0)^T, \; v_2^{(0)} = (0, \tfrac{1}{\alpha_2}, 0, \dots, 0)^T, \; \dots, \; v_n^{(0)} = (0, \dots, 0, \tfrac{1}{\alpha_n})^T$ (8)
Then, we determine new extreme points of the feasible region restricted by adding each weak inequality constraint in (7) to the set (8), one at a time. Therefore, the procedure requires determining the extreme points whenever $\{x_i - x_{i+1} \ge 0\}$, $i = 1, \dots, n-1$, is incorporated.
Adding the constraint $\{x_1 - x_2 \ge 0\}$ to (4) divides the current extreme points of (8) into two sets:
(i) the set of extreme points satisfying it: $S = \{v_1^{(0)}, v_3^{(0)}, \dots, v_n^{(0)}\}$;
(ii) the extreme point not satisfying it: $NS = \{v_2^{(0)}\}$.
New extreme points are found by forming line segments, via convex combinations, between each pair of vectors in $S$ and $NS$. To illustrate, we construct a line segment for the pair $v_1^{(0)} = (\tfrac{1}{\alpha_1}, 0, \dots, 0)^T$ in $S$ and $v_2^{(0)} = (0, \tfrac{1}{\alpha_2}, 0, \dots, 0)^T$ in $NS$ such that

$(0, \tfrac{1}{\alpha_2}, 0, \dots, 0) + \lambda(\tfrac{1}{\alpha_1} - 0, \; 0 - \tfrac{1}{\alpha_2}, \; 0, \dots, 0) = (\tfrac{\lambda}{\alpha_1}, \; \tfrac{1}{\alpha_2} - \tfrac{\lambda}{\alpha_2}, \; 0, \dots, 0), \quad 0 \le \lambda \le 1$.

We solve the equation $\tfrac{\lambda}{\alpha_1} - (\tfrac{1}{\alpha_2} - \tfrac{\lambda}{\alpha_2}) = 0$, corresponding to $\{x_1 - x_2 = 0\}$, to determine whether an intersection point exists; we find $\lambda = \tfrac{\alpha_1}{\alpha_1 + \alpha_2} \in [0, 1]$, which leads to a new extreme point by substituting it into the parameterized line segment:

$(\tfrac{\lambda}{\alpha_1}, \; \tfrac{1}{\alpha_2} - \tfrac{\lambda}{\alpha_2}, \; 0, \dots, 0) = (\tfrac{1}{\alpha_1 + \alpha_2}, \; \tfrac{1}{\alpha_1 + \alpha_2}, \; 0, \dots, 0)$.
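The convex-combination step can be sketched numerically. In the snippet below, the helper function and the coefficient values are hypothetical illustrations (exact rational arithmetic avoids rounding noise):

```python
from fractions import Fraction as F

def cut_point(vs, vn, i, j):
    """Intersection of the segment from vn (violates x_i >= x_j) to vs
    (satisfies it) with the hyperplane x_i = x_j: solve for lambda in
    vn + lambda * (vs - vn), then substitute it back."""
    lam = (vn[j] - vn[i]) / ((vs[i] - vs[j]) - (vn[i] - vn[j]))
    return [b + lam * (a - b) for a, b in zip(vs, vn)]

alpha1, alpha2 = F(1, 2), F(3)       # hypothetical coefficients
v1 = [1 / alpha1, F(0), F(0)]        # in S:  satisfies x1 >= x2
v2 = [F(0), 1 / alpha2, F(0)]        # in NS: violates  x1 >= x2
print(cut_point(v1, v2, 0, 1))       # both positive entries equal 1/(alpha1+alpha2) = 2/7
```

For this pair, lambda evaluates to alpha1/(alpha1 + alpha2) = 1/7, matching the closed-form value derived above.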
Repeating this process for the other pairs of vectors in $S$ and $NS$ results in the following extreme points:

$v_1^{(1)} = (\tfrac{1}{\alpha_1}, 0, \dots, 0)^T, \; v_2^{(1)} = (\tfrac{1}{\alpha_1 + \alpha_2}, \tfrac{1}{\alpha_1 + \alpha_2}, 0, \dots, 0)^T, \; v_3^{(1)} = (0, 0, \tfrac{1}{\alpha_3}, 0, \dots, 0)^T, \; \dots, \; v_n^{(1)} = (0, \dots, 0, \tfrac{1}{\alpha_n})^T$ (9)
Next, we repeat the procedure by incorporating the constraint $\{x_2 - x_3 \ge 0\}$ into the feasible region characterized by the extreme points of (9), which eventually results in the following extreme points:

$v_1^{(2)} = (\tfrac{1}{\alpha_1}, 0, \dots, 0)^T, \; v_2^{(2)} = (\tfrac{1}{\alpha_1 + \alpha_2}, \tfrac{1}{\alpha_1 + \alpha_2}, 0, \dots, 0)^T, \; v_3^{(2)} = (\tfrac{1}{\alpha_1 + \alpha_2 + \alpha_3}, \tfrac{1}{\alpha_1 + \alpha_2 + \alpha_3}, \tfrac{1}{\alpha_1 + \alpha_2 + \alpha_3}, 0, \dots, 0)^T, \; \dots, \; v_n^{(2)} = (0, \dots, 0, \tfrac{1}{\alpha_n})^T$

Note that incorporating the constraint $\{x_2 - x_3 \ge 0\}$ divides the extreme points of (9) into two sets, $S = \{v_1^{(1)}, v_2^{(1)}, v_4^{(1)}, \dots, v_n^{(1)}\}$ and $NS = \{v_3^{(1)}\}$.
This process ends when we finally incorporate the constraint $\{x_{n-1} - x_n \ge 0\}$, and we obtain the following extreme points:

$v_1 = (\tfrac{1}{\alpha_1}, 0, \dots, 0), \; v_2 = (\tfrac{1}{\alpha_1 + \alpha_2}, \tfrac{1}{\alpha_1 + \alpha_2}, 0, \dots, 0), \; v_3 = (\tfrac{1}{\alpha_1 + \alpha_2 + \alpha_3}, \tfrac{1}{\alpha_1 + \alpha_2 + \alpha_3}, \tfrac{1}{\alpha_1 + \alpha_2 + \alpha_3}, 0, \dots, 0), \; \dots, \; v_n = (\tfrac{1}{\sum_{i=1}^{n} \alpha_i}, \dots, \tfrac{1}{\sum_{i=1}^{n} \alpha_i})$ (10)
If we instead incorporated the set of constraints $\{x_n \ge x_{n-1} \ge \dots \ge x_1\}$ into (4), we would have obtained the following extreme points, by symmetry with the constraint $\{x_1 \ge x_2 \ge \dots \ge x_n\}$:

$v_1 = (0, \dots, 0, \tfrac{1}{\alpha_n}), \; v_2 = (0, \dots, 0, \tfrac{1}{\alpha_{n-1} + \alpha_n}, \tfrac{1}{\alpha_{n-1} + \alpha_n}), \; v_3 = (0, \dots, 0, \tfrac{1}{\alpha_{n-2} + \alpha_{n-1} + \alpha_n}, \tfrac{1}{\alpha_{n-2} + \alpha_{n-1} + \alpha_n}, \tfrac{1}{\alpha_{n-2} + \alpha_{n-1} + \alpha_n}), \; \dots, \; v_n = (\tfrac{1}{\sum_{i=1}^{n} \alpha_i}, \dots, \tfrac{1}{\sum_{i=1}^{n} \alpha_i})$.
At this time, we notice a particular pattern of extreme points. Each linearized sub-problem shares similar properties in the extreme points of its feasible region. First, the set of extreme points consists of a vector with one positive element, a vector with two equal positive elements, a vector with three equal positive elements, etc. Moreover, the places where the positive elements appear only depend on the order of permutations of the incorporated constraints, which is, in fact, of no importance in the OWA aggregation because the ordered arguments are associated with the operator weights. These findings lead to Theorem 1 below, which states that the optimal solution to (4) can be obtained by the so-called vector-wise comparisons instead of solving all n ! linear sub-problems.
Theorem 1.
The optimal objective function value to the constrained OWA optimization problem (4) is determined by
$F^* = \max\left\{\frac{w_1}{\underline{\alpha}_{(1)}}, \frac{w_1 + w_2}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}}, \dots, \frac{\sum_{i=1}^{n} w_i}{\sum_{i=1}^{n} \underline{\alpha}_{(i)}}\right\}$ (11)

where $\underline{\alpha}_{(i)}$ is the $i$th smallest element of the constraint coefficient set $C = \{\alpha_1, \alpha_2, \dots, \alpha_n\}$.
Proof. 
Our goal is to find the maximum of the optimal solutions of the $n!$ linearized OWA aggregation problems. For each linearized problem, the optimal solution occurs at one of the $n$ extreme points, which may be a vector with one positive element (zeros elsewhere), a vector with two equal positive elements (zeros elsewhere), and so on. Considering the $n$ different vectors with one positive element, such as $\{(\tfrac{1}{\alpha_1}, 0, \dots, 0)^T, (0, \tfrac{1}{\alpha_2}, 0, \dots, 0)^T, \dots, (0, \dots, 0, \tfrac{1}{\alpha_n})^T\}$, the objective function of (4) is maximized at $\tfrac{w_1}{\underline{\alpha}_{(1)}}$, where $\underline{\alpha}_{(1)}$ is the smallest element of $C$. Similarly, considering the $\binom{n}{2}$ different vectors with two equal positive elements, such as $\{(\tfrac{1}{\alpha_1 + \alpha_2}, \tfrac{1}{\alpha_1 + \alpha_2}, 0, \dots, 0)^T, (\tfrac{1}{\alpha_1 + \alpha_3}, 0, \tfrac{1}{\alpha_1 + \alpha_3}, 0, \dots, 0)^T, \dots, (0, \dots, 0, \tfrac{1}{\alpha_{n-1} + \alpha_n}, \tfrac{1}{\alpha_{n-1} + \alpha_n})^T\}$, the objective function is maximized at $\tfrac{w_1 + w_2}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}}$ because $\tfrac{w_1 + w_2}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}} \ge \tfrac{w_1 + w_2}{\alpha_i + \alpha_j}$ for all $i \ne j$. In general, for $k < n$, $\tfrac{\sum_{i=1}^{k} w_i}{\sum_{i=1}^{k} \underline{\alpha}_{(i)}}$ is the largest objective function value among the $\binom{n}{k}$ different vectors with $k$ equal positive elements. Continuing in this manner, we conclude that the optimal solution to (4) corresponds to the maximum of $\{\tfrac{w_1}{\underline{\alpha}_{(1)}}, \tfrac{w_1 + w_2}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}}, \dots, \tfrac{\sum_{i=1}^{k} w_i}{\sum_{i=1}^{k} \underline{\alpha}_{(i)}}, \dots, \tfrac{\sum_{i=1}^{n} w_i}{\sum_{i=1}^{n} \underline{\alpha}_{(i)}}\}$. We break ties arbitrarily when they occur. □
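Theorem 1 reduces the search over $n!$ sub-problems to $n$ prefix ratios. A minimal Python sketch of the closed form follows; the weights and coefficients in the example call are illustrative, not taken from the paper:

```python
from fractions import Fraction as F

def constrained_owa_max(w, alpha):
    """F* per Theorem 1: sort the constraint coefficients ascending and
    take the largest prefix ratio (w_1+...+w_k)/(a_(1)+...+a_(k))."""
    a = sorted(alpha)
    return max(sum(w[:k]) / sum(a[:k]) for k in range(1, len(w) + 1))

# illustrative data: w sums to one, all alpha positive
w = [F(3, 10), F(4, 10), F(3, 10)]
alpha = [F(5), F(1), F(2)]
print(constrained_owa_max(w, alpha))   # best prefix is k = 1: w_1/a_(1) = 3/10
```

The cost is one sort plus a linear scan, versus solving $n!$ linear programs.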
Corollary 1.
If the optimal solution occurs at the $k$th place ($1 \le k \le n$), the optimal extreme point is determined by

$x_j^* = \begin{cases} \frac{1}{\sum_{i=1}^{k} \underline{\alpha}_{(i)}} & \text{at the } \mu(i)\text{th place}, \; i = 1, \dots, k, \\ 0 & \text{elsewhere}, \end{cases}$

where $\mu(i) = \{j \mid \underline{\alpha}_{(i)} = \alpha_j\}$. Further, a set of constraints incorporated for the linearized problem is $\{x_{\mu(1)} \ge x_{\mu(2)} \ge \dots \ge x_{\mu(k)} \ge x_{\pi(1)} \ge \dots \ge x_{\pi(n-k)}\}$, where $\pi(i)$ is any index in $\{1, \dots, n\}$ except $\mu(i)$, $i = 1, \dots, k$.
Proof. 
The proof follows directly from Theorem 1. □
Consider the following constrained OWA optimization problem:
$F^* = \text{minimize} \; F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n = 1$, $x_j \ge 0$, $j = 1, \dots, n$ (12)
Corollary 2.
Considering problem (12), we obtain the optimal solution
$F^* = \min\left\{\frac{w_1}{\bar{\alpha}_{(1)}}, \frac{w_1 + w_2}{\bar{\alpha}_{(1)} + \bar{\alpha}_{(2)}}, \dots, \frac{\sum_{i=1}^{n} w_i}{\sum_{i=1}^{n} \bar{\alpha}_{(i)}}\right\}$ (13)

where $\bar{\alpha}_{(i)}$ is the $i$th largest element of $C = \{\alpha_1, \alpha_2, \dots, \alpha_n\}$.
Proof. 
Corollary 2 follows directly from the proof of Theorem 1 since the extreme points of (4) are equal to those of (12). □
Corollary 3.
Consider a special case of α i = 1 for all i in (4). Then, the optimal objective function value is determined by
$F^* = \max\left\{\frac{w_1}{1}, \frac{w_1 + w_2}{2}, \dots, \frac{\sum_{i=1}^{n} w_i}{n}\right\}$ (14)
Proof. 
We simply obtain the result (14) since $\underline{\alpha}_{(i)} = 1$ for all $i$, and thus $\sum_{i=1}^{k} \underline{\alpha}_{(i)} = k$ for $1 \le k \le n$. □
For convenience, we denote the extreme points of the feasible region formed by the set of constraints $\{x_1 + x_2 + \dots + x_n = 1, \; x_1 \ge x_2 \ge \dots \ge x_n \ge 0\}$ by:

$v_1 = (1, 0, \dots, 0), \; v_2 = (\tfrac{1}{2}, \tfrac{1}{2}, 0, \dots, 0), \; v_3 = (\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}, 0, \dots, 0), \; \dots, \; v_n = (\tfrac{1}{n}, \dots, \tfrac{1}{n})$ (15)
Finally, consider the following constrained OWA optimization problem (16) which is different from (4) only in the right-hand-side (RHS):
maximize $F(x_1, \dots, x_n) = \sum_{j=1}^{n} w_j y_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le b$ $(b > 0)$, $\alpha_j > 0$, $x_j \ge 0$, $j = 1, \dots, n$ (16)
where y j denotes the jth largest element of the bag { x 1 , , x n } .
We conclude on the basis of the proof of Theorem 1 that the optimal objective function value to Problem (16) simply corresponds to:
$F^* = \max\left\{\frac{b w_1}{\underline{\alpha}_{(1)}}, \frac{b(w_1 + w_2)}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}}, \dots, \frac{b \sum_{i=1}^{n} w_i}{\sum_{i=1}^{n} \underline{\alpha}_{(i)}}\right\}$ (17)
Example 1.
Consider the following constrained OWA optimization problem presented by Coroianu and Fullér [5]:
maximize $F = \tfrac{1}{10}y_1 + \tfrac{4}{10}y_2 + \tfrac{3}{10}y_3 + \tfrac{2}{10}y_4$
subject to $x_1 + 3x_2 + 2x_3 + x_4 = 1$, $x_i \ge 0$, $i = 1, \dots, 4$
where y j denotes the jth largest element of the bag { x 1 , , x n } .
It follows that $\underline{\alpha}_{(1)} = 1$, $\underline{\alpha}_{(2)} = 1$, $\underline{\alpha}_{(3)} = 2$, and $\underline{\alpha}_{(4)} = 3$ from the set of coefficients $C = \{1, 3, 2, 1\}$. Note that we break the tie arbitrarily and choose $\underline{\alpha}_{(1)} = 1$. The extreme point $(1, 0, 0, 0)$, among the extreme points with one positive element (zeros elsewhere), yields the highest objective function value of $\tfrac{1}{10} \cdot \tfrac{1}{\underline{\alpha}_{(1)}} = \tfrac{1}{10}$. Considering the extreme points with two or three positive elements, we obtain the highest objective values:

$\tfrac{1}{10} \cdot \tfrac{1}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}} + \tfrac{4}{10} \cdot \tfrac{1}{\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)}} = \tfrac{5}{20}$ with $\underline{\alpha}_{(1)} + \underline{\alpha}_{(2)} = 2$, and $\tfrac{1}{10} \cdot \tfrac{1}{\sum_{i=1}^{3} \underline{\alpha}_{(i)}} + \tfrac{4}{10} \cdot \tfrac{1}{\sum_{i=1}^{3} \underline{\alpha}_{(i)}} + \tfrac{3}{10} \cdot \tfrac{1}{\sum_{i=1}^{3} \underline{\alpha}_{(i)}} = \tfrac{8}{40}$ with $\sum_{i=1}^{3} \underline{\alpha}_{(i)} = 4$, respectively.

Finally, we obtain the following objective function value for the extreme point with all four positive elements:

$\tfrac{1}{10} \cdot \tfrac{1}{\sum_{i=1}^{4} \underline{\alpha}_{(i)}} + \tfrac{4}{10} \cdot \tfrac{1}{\sum_{i=1}^{4} \underline{\alpha}_{(i)}} + \tfrac{3}{10} \cdot \tfrac{1}{\sum_{i=1}^{4} \underline{\alpha}_{(i)}} + \tfrac{2}{10} \cdot \tfrac{1}{\sum_{i=1}^{4} \underline{\alpha}_{(i)}} = \tfrac{10}{70}$ with $\sum_{i=1}^{4} \underline{\alpha}_{(i)} = 7$.
Therefore, the optimal objective function value to the constrained OWA optimization problem is $F^* = \max\{\tfrac{1}{10}, \tfrac{5}{20}, \tfrac{8}{40}, \tfrac{10}{70}\} = \tfrac{5}{20}$, and the optimal extreme point is $(\tfrac{1}{2}, 0, 0, \tfrac{1}{2})$ based on Corollary 1. Further, the weak inequality constraint incorporated to obtain this optimal solution is any one of $\{x_1 \ge x_4 \ge x_2 \ge x_3\}$, $\{x_1 \ge x_4 \ge x_3 \ge x_2\}$, $\{x_4 \ge x_1 \ge x_2 \ge x_3\}$, and $\{x_4 \ge x_1 \ge x_3 \ge x_2\}$. To illustrate, if we incorporate $\{x_1 \ge x_4 \ge x_2 \ge x_3\}$ into the problem, it is linearized as follows:

maximize $F = \tfrac{1}{10}x_1 + \tfrac{4}{10}x_4 + \tfrac{3}{10}x_2 + \tfrac{2}{10}x_3$
subject to $x_1 + 3x_2 + 2x_3 + x_4 = 1$, $x_1 \ge x_4 \ge x_2 \ge x_3$, $x_i \ge 0$, $i = 1, \dots, 4$
On the other hand, Corollary 2 indicates that the optimal solution to the minimized version of the example is:
$F^* = \min\left\{\frac{1/10}{3}, \frac{1/10 + 4/10}{5}, \frac{1/10 + 4/10 + 3/10}{6}, \frac{1/10 + 4/10 + 3/10 + 2/10}{7}\right\} = \frac{1}{30}$, with the optimal extreme point $(0, \tfrac{1}{3}, 0, 0)$.
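Both closed-form expressions can be checked against Example 1 with a few lines of Python; exact rational arithmetic reproduces the values derived above:

```python
from fractions import Fraction as F

w = [F(1, 10), F(4, 10), F(3, 10), F(2, 10)]   # OWA weights of Example 1
alpha = [F(1), F(3), F(2), F(1)]               # constraint coefficients

asc = sorted(alpha)                  # Theorem 1 uses ascending order
desc = sorted(alpha, reverse=True)   # Corollary 2 uses descending order
f_max = max(sum(w[:k]) / sum(asc[:k]) for k in range(1, 5))
f_min = min(sum(w[:k]) / sum(desc[:k]) for k in range(1, 5))
print(f_max, f_min)   # 1/4 (= 5/20) and 1/30, as derived above
```

The maximization and minimization cases differ only in the sort direction of the coefficients and in taking the max versus the min of the prefix ratios.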

3. An Extension of a Single Constrained OWA Aggregation Problem

In this section, we extend the development of Section 2 to solve the OWA aggregation problem constrained by a range of incompletely specified arguments. Ahn [4] has dealt with a similar topic in the context of the OWA optimization subject to a sum to unity constraint, and here we extend it to the OWA optimization problem with variable coefficients. To illustrate, consider the following constrained OWA optimization problem, which is different from (16) in that the constraints representing order relations among the variables x j s are incorporated:
maximize $F = \sum_{j=1}^{n} w_j y_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le b$, $x_1 \ge 2x_2 \ge \dots \ge n x_n$, $x_j \ge 0$, $j = 1, \dots, n$ (18)
These order relation constraints ensure x 1 to be the largest, x 2 the second largest, …, and x n the least (ties can occur among the x j s), which leads to an equivalent LP problem such that:
maximize $F = \sum_{j=1}^{n} w_j x_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le b$, $x_1 \ge 2x_2 \ge \dots \ge n x_n$, $x_j \ge 0$, $j = 1, \dots, n$.
(The order relation constraints of (18) are a special case, with $\beta_j = j$, of the more general constraints $\{\beta_1 x_1 \ge \beta_2 x_2 \ge \dots \ge \beta_n x_n\}$.)
Instead of obtaining a solution via an LP package, we attempt first to determine the extreme points of the feasible region formed by the constraints and then to find a closed solution. The feasible region formed by the order relation constraints of (18) is characterized by the non-normalized extreme directions emanating from the origin, D = ( d 1 , d 2 , d 3 , , d n ) , where:
$d_1 = (c, 0, \dots, 0), \; d_2 = (c, \tfrac{c}{2}, 0, \dots, 0), \; d_3 = (c, \tfrac{c}{2}, \tfrac{c}{3}, 0, \dots, 0), \; \dots, \; d_n = (c, \tfrac{c}{2}, \tfrac{c}{3}, \dots, \tfrac{c}{n})$.
Therefore, the extreme points of the feasible region formed by the constraints in (18) are simply the intersecting points of the halfspace and the non-normalized directions; to find them, we solve:
$\alpha_1 c = b \;\Rightarrow\; c = \tfrac{b}{\alpha_1}$ for $d_1$
$\alpha_1 c + \alpha_2 \tfrac{c}{2} = b \;\Rightarrow\; c = \tfrac{b}{\alpha_1 + \frac{1}{2}\alpha_2}$ for $d_2$
$\vdots$
$\alpha_1 c + \alpha_2 \tfrac{c}{2} + \dots + \alpha_n \tfrac{c}{n} = b \;\Rightarrow\; c = \tfrac{b}{\sum_{j=1}^{n} \frac{1}{j}\alpha_j}$ for $d_n$.
The extreme points are determined by substituting each derived c into the corresponding direction vector d i , i = 1 , , n , which finally leads to E 1 = ( v 1 , v 2 , , v n ) :
$v_1 = (\tfrac{b}{\alpha_1}, 0, \dots, 0), \; v_2 = (\tfrac{b}{\alpha_1 + \frac{1}{2}\alpha_2}, \tfrac{b}{2(\alpha_1 + \frac{1}{2}\alpha_2)}, 0, \dots, 0), \; v_3 = (\tfrac{b}{\alpha_1 + \frac{1}{2}\alpha_2 + \frac{1}{3}\alpha_3}, \tfrac{b}{2(\alpha_1 + \frac{1}{2}\alpha_2 + \frac{1}{3}\alpha_3)}, \tfrac{b}{3(\alpha_1 + \frac{1}{2}\alpha_2 + \frac{1}{3}\alpha_3)}, 0, \dots, 0), \; \dots, \; v_n = (\tfrac{b}{\sum_{j=1}^{n} \frac{1}{j}\alpha_j}, \tfrac{b}{2\sum_{j=1}^{n} \frac{1}{j}\alpha_j}, \dots, \tfrac{b}{n\sum_{j=1}^{n} \frac{1}{j}\alpha_j})$ (19)
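The computation of $E_1$ can be sketched in a few lines of Python; the function name and the sample coefficients are illustrative, and exact rationals keep the entries readable:

```python
from fractions import Fraction as F

def extreme_points_E1(alpha, b):
    """Extreme points of {sum_j alpha_j x_j <= b, x_1 >= 2x_2 >= ... >= n x_n}:
    intersect the halfspace boundary with each extreme direction
    d_k = (c, c/2, ..., c/k, 0, ..., 0) and solve for c."""
    n = len(alpha)
    points = []
    for k in range(1, n + 1):
        c = b / sum(F(alpha[j], j + 1) for j in range(k))   # solve for c on d_k
        points.append([c / (j + 1) if j < k else F(0) for j in range(n)])
    return points

# illustrative data: alpha = (1, 3, 2, 1), b = 1
E1 = extreme_points_E1([1, 3, 2, 1], 1)
print(E1[1])   # second extreme point: (2/5, 1/5, 0, 0)
```

Each returned point lies on the boundary of the budget constraint and satisfies the full chain of order relations with equality along the first $k$ coordinates.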
Remark 1.
Let us reconsider the OWA optimization problem (7), which was formulated to transform the single constrained OWA optimization problem into its LP equivalent. Obviously, problem (7) is a special case of (18), in its general form with constraints $\{\beta_1 x_1 \ge \beta_2 x_2 \ge \dots \ge \beta_n x_n\}$, with $b = 1$ and $\beta_j = 1$ for all $j$. Therefore, the extreme points of the constraints in (7) can be easily determined by setting $b = 1$ and $\beta_j = 1$ for all $j$ in $E_1$ in (19), which leads to the equivalent extreme points of (10).
Consider the following constrained OWA optimization problem in which other types of incomplete input arguments representing order relations among the x j s are incorporated:
maximize $F = \sum_{j=1}^{n} w_j x_j$
subject to $\alpha_1 x_1 + \alpha_2 x_2 + \dots + \alpha_n x_n \le b$, $x_1 - x_2 \ge x_2 - x_3 \ge \dots \ge x_{n-1} - x_n \ge x_n$, $x_j \ge 0$, $j = 1, \dots, n$ (20)
A feasible region formed by the order relation constraints in (20) is characterized by the non-normalized extreme directions emanating from the origin, $D = (d_1, d_2, d_3, \dots, d_n)$, where $d_1 = (c, 0, \dots, 0)$, $d_2 = (2c, c, 0, \dots, 0)$, $d_3 = (3c, 2c, c, 0, \dots, 0)$, ..., $d_n = (nc, (n-1)c, (n-2)c, \dots, c)$. Therefore, the extreme points of the feasible region formed by the constraints in (20) are simply the intersecting points of the halfspace and the non-normalized directions; to find them, we solve:

$\alpha_1 c = b \;\Rightarrow\; c = \tfrac{b}{\alpha_1}$ for $d_1$
$2\alpha_1 c + \alpha_2 c = b \;\Rightarrow\; c = \tfrac{b}{2\alpha_1 + \alpha_2}$ for $d_2$
$\vdots$
$n\alpha_1 c + (n-1)\alpha_2 c + \dots + \alpha_n c = b \;\Rightarrow\; c = \tfrac{b}{\sum_{j=1}^{n} (n-j+1)\alpha_j}$ for $d_n$.
The extreme points are determined by substituting each derived c into the corresponding direction vector d i , i = 1 , , n , which finally leads to E 2 = ( v 1 , v 2 , , v n ) :
$v_1 = (\tfrac{b}{\alpha_1}, 0, \dots, 0), \; v_2 = (\tfrac{2b}{2\alpha_1 + \alpha_2}, \tfrac{b}{2\alpha_1 + \alpha_2}, 0, \dots, 0), \; v_3 = (\tfrac{3b}{3\alpha_1 + 2\alpha_2 + \alpha_3}, \tfrac{2b}{3\alpha_1 + 2\alpha_2 + \alpha_3}, \tfrac{b}{3\alpha_1 + 2\alpha_2 + \alpha_3}, 0, \dots, 0), \; \dots, \; v_n = (\tfrac{nb}{\sum_{j=1}^{n}(n-j+1)\alpha_j}, \tfrac{(n-1)b}{\sum_{j=1}^{n}(n-j+1)\alpha_j}, \dots, \tfrac{b}{\sum_{j=1}^{n}(n-j+1)\alpha_j})$
Example 2.
Consider the following constrained OWA optimization problem:
maximize $F = \tfrac{1}{10}y_1 + \tfrac{4}{10}y_2 + \tfrac{3}{10}y_3 + \tfrac{2}{10}y_4$
subject to $x_1 + 3x_2 + 2x_3 + x_4 = 1$, $x_1 \ge 2x_2 \ge 3x_3 \ge 4x_4$, $x_i \ge 0$, $i = 1, \dots, 4$
where y j denotes the jth largest element of the bag { x 1 , , x n } .
The extreme points of the constraints in the example are determined by E 1 :
$E_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ \frac{2}{5} & \frac{1}{5} & 0 & 0 \\ \frac{6}{19} & \frac{3}{19} & \frac{2}{19} & 0 \\ \frac{12}{41} & \frac{6}{41} & \frac{4}{41} & \frac{3}{41} \end{pmatrix}$
Each extreme point clearly represents the order relations of the variables, such that $x_1$ is the largest, $x_2$ the second largest, $x_3$ the third largest, and $x_4$ the least, which thus leads to the equivalent linearized objective function $F = \tfrac{1}{10}x_1 + \tfrac{4}{10}x_2 + \tfrac{3}{10}x_3 + \tfrac{2}{10}x_4$ since $y_1 = x_1$, $y_2 = x_2$, $y_3 = x_3$, and $y_4 = x_4$. The optimal objective function value is simply determined by:
$F^* = \max\{c \cdot v_1, c \cdot v_2, c \cdot v_3, c \cdot v_4\} = \max\{0.1, 0.12, 0.126, 0.132\} = 0.132$, where $c = (\tfrac{1}{10}, \tfrac{4}{10}, \tfrac{3}{10}, \tfrac{2}{10})$ and $v_i$ denotes the $i$th row of $E_1$.
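The numbers in Example 2 can be reproduced with a short Python check using exact rationals; the rounded values 0.126 and 0.132 correspond to 12/95 and 27/205:

```python
from fractions import Fraction as F

alpha, b = [1, 3, 2, 1], 1                      # constraint of Example 2
w = [F(1, 10), F(4, 10), F(3, 10), F(2, 10)]    # linearized objective c

values = []
for k in range(1, 5):
    # extreme point v_k of E_1: x_j proportional to 1/j for j <= k, zero elsewhere
    c = b / sum(F(alpha[j], j + 1) for j in range(k))
    v = [c / (j + 1) if j < k else F(0) for j in range(4)]
    values.append(sum(wi * vi for wi, vi in zip(w, v)))

print(values, max(values))   # the maximum, 27/205, is about 0.132
```

Because the order relations fix which variable is the $j$th largest, a single dot product per extreme point suffices; no reordering and no LP solver are needed.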

4. Concluding Remarks

Constrained OWA aggregation attempts to solve the OWA optimization problem subject to multiple constraints. In this paper, we present a new, intuitive approach to the OWA optimization problem with a single constraint having variable coefficients. Although this problem was previously dealt with by Coroianu and Fullér [5], our approach is distinct from theirs in that we linearize the nonlinear problem by incorporating additional symmetric weak inequalities. Exploiting the extreme points of the feasible region of each linearized problem reveals interesting properties that lead to a theorem and corollaries. These results are easy to understand compared to theirs and readily extendable to other order-preserving constraints. To this end, we deal with the OWA optimization problem constrained by a range of incompletely specified variables and find closed-form optimal solutions that can be readily applied to solve the constrained OWA optimization problem.
Future research topics include the constrained OWA optimization problem with various types of constraints, for example, bounded variables, a convex sequence of variables, etc.

Author Contributions

Conceptualization, E.-Y.K.; methodology, E.-Y.K. and B.-S.A.; validation, E.-Y.K. and B.-S.A.; formal analysis, E.-Y.K. and B.-S.A.; writing—original draft preparation, E.-Y.K. and B.-S.A.; writing—review and editing, E.-Y.K. and B.-S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yager, R.R. On ordered weighted averaging aggregation operators in multicriteria decision-making. IEEE Trans. Syst. Man Cybern. 1988, 18, 183–190. [Google Scholar] [CrossRef]
  2. Yager, R.R. Constrained OWA aggregation. Fuzzy Sets Syst. 1996, 81, 89–101. [Google Scholar] [CrossRef]
  3. Carlsson, C.; Fullér, R.; Majlender, P. A note on constrained OWA aggregation. Fuzzy Sets Syst. 2003, 139, 543–546. [Google Scholar] [CrossRef] [Green Version]
  4. Ahn, B.S. A new approach to solve the constrained OWA aggregation problem. IEEE Trans. Fuzzy Syst. 2017, 25, 1231–1238. [Google Scholar] [CrossRef]
  5. Coroianu, L.; Fullér, R. On the constrained OWA aggregation problem with single constraint. Fuzzy Sets Syst. 2018, 332, 37–43. [Google Scholar] [CrossRef]
  6. Coroianu, L.; Fullér, R.; Gagolewski, M.; James, S. Constrained ordered weighted averaging aggregation with multiple comonotone constraints. Fuzzy Sets Syst. 2020, 395, 21–39. [Google Scholar] [CrossRef]
  7. Chen, Y.F.; Tang, H.C. A three-dimensional constrained ordered weighted averaging aggregation problem with lower bounded variables. Symmetry 2018, 10, 339. [Google Scholar] [CrossRef] [Green Version]
  8. Tang, H.C.; Yang, S.T. Optimizing three-dimensional constrained ordered weighted averaging aggregation problem with bounded variables. Mathematics 2018, 6, 172. [Google Scholar] [CrossRef] [Green Version]
  9. Wang, Y.M.; Parkan, C. A minimax disparity approach for obtaining OWA operator weights. Inf. Sci. 2005, 175, 20–29. [Google Scholar] [CrossRef]
  10. Emrouznejad, A.; Amin, G.R. Improving minimax disparity model to determine the OWA operator weights. Inf. Sci. 2010, 180, 1477–1485. [Google Scholar] [CrossRef]
  11. Amin, G.R.; Emrouznejad, A. Parametric aggregation in ordered weighted averaging. Int. J. Approx. Reason. 2011, 52, 819–827. [Google Scholar] [CrossRef] [Green Version]
  12. Liu, X.W. A review of the OWA determination methods: Classification and some extensions. In Recent Developments in the Ordered Weighted Averaging Operators: Theory and Practice; Yager, R.R., Kacprzyk, J., Beliakov, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 49–90. [Google Scholar]
  13. Carlsson, C.; Fullér, R. Maximal entropy and minimal variability OWA operator weights: A short survey of recent developments. In Soft Computing Applications for Group Decision-Making and Consensus Modeling, Studies in Fuzziness and Soft Computing; Collan, M., Kacprzyk, J.J., Eds.; Springer: Cham, Germany, 2018. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
