
# Weight Vector Generation in Multi-Criteria Decision-Making with Basic Uncertain Information

1. Department of Engineering Management, School of Civil Engineering, Wuhan University, Wuhan 430072, China
2. Business School, Nanjing Normal University, Nanjing 210023, China
3. Machine Intelligence Institute, Iona College, New Rochelle, NY 10801, USA
4. Faculty of Economics, Matej Bel University, Tajovského 10, SK-975 90 Banská Bystrica, Slovakia
5. Faculty of Civil Engineering, Slovak University of Technology, Radlinského 11, SK-810 05 Bratislava, Slovakia
6. Department of Mathematics, Dibrugarh University, Dibrugarh 786004, India
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(4), 572; https://doi.org/10.3390/math10040572
Submission received: 9 January 2022 / Revised: 5 February 2022 / Accepted: 10 February 2022 / Published: 12 February 2022
(This article belongs to the Special Issue New Trends in Fuzzy Sets Theory and Their Extensions)

## Abstract

This paper elaborates different methods to generate a normalized weight vector in multi-criteria decision-making when the given information about both the criteria and the inputs is uncertain and can be expressed as basic uncertain information. Some general weight-allocation paradigms are proposed in view of their convenience of expression. In multi-criteria decision-making, the stated importance of each criterion may carry a different extent of uncertainty; accordingly, we propose some special induced weight-allocation methods. The inputs can also be associated with varying uncertainty extents, and so we develop several induced weight-generation methods for them. In addition, we present some suggested and prescriptive weight-allocation rules and analyze their reasonability.

## 1. Introduction

The consideration of the relative importance of the concerned criteria is of great significance in most decision environments and decision theories, including bounded rationality [1], fuzzy group decision-making [2], order-based decision models [3], multi-criteria decision-making (MCDM) [4], non-additive-measure-based decision-making [5,6], preference-involved decision-making [7], random and stochastic decision-making [8], and interactive decision-making [9]; the normalized weight function/vector thus serves as a natural embodiment of this relative importance. There are numerous methods to generate a normalized weight function, such as the eigenvector-based method of the analytic hierarchy process (AHP) [10] with its myriad applications [11,12], and the method used in the ordered weighted averaging (OWA) operator [13] and some of its extensions [14,15,16,17].
The methods adopted in both AHP and OWA combine subjectivity and objectivity. Note that the involvement of subjectivity does not, in general, mean arbitrariness or a lack of seriousness; on the contrary, subjectivity is ordinarily linked directly to working experience or derived indirectly from the expertise of decision makers [14].
Many different types of preferences are often embedded or embodied in decision-making and aggregation problems [18,19,20,21,22]. Yager proposed generating a weight function by using inducing information and bipolar preference [15,16]. Firstly, the inducing information should be embodied by a function/vector that corresponds exactly to the input function/vector. For example, the inducing information can be the different time points at which the input values were obtained, the certainty extents to which the input values are thought to be convincing, or the magnitudes of the input values in their own right. Moreover, the applied bipolar preference should, in its practical meaning, pertain to the concerned inducing information. For example, if the inducing information is about time points, then the involved bipolar preference could become a newer–older preference; if it is about certainty extents, then the bipolar preference should become a more–less certainty preference or an indifference–certainty preference; and if it is about magnitudes, then the bipolar preference should become an optimism–pessimism preference. Lastly, the weight allocation can be conducted by a number of techniques, and later we will use a quantifier-based method that offers convenience in the related discussions.
Uncertainties are pervasive in practical MCDM problems, and researchers recently proposed an uncertainty paradigm called basic uncertain information (BUI) [23,24] to effectively and conveniently tackle a wide variety of uncertainties involved in decision-making and evaluation problems. Since there is a paucity of literature dealing with information-fusion-based MCDM in a BUI environment, this work will mainly focus on the certainty degree as the inducing information for MCDM in this new type of generalized and formalized uncertain decision environment. When only a single certainty-inducing variable is concerned, the problem is not complex; once the extent of the indifference–certainty preference is determined, we can easily perform quantifier-based weight allocation. However, MCDM problems frequently contain further factors, such as the different experts consulted, the different extents of certainty involved in both the inputs and the importance of criteria, the combination of different magnitudes and certainty extents for both the inputs and the importance of criteria, and the necessity of considering these decision elements in a comprehensive or merging sense.
That is to say, in MCDM problems where far more complexity may arise, decision makers should in general generate, consider, and handle multiple and complex pieces of inducing information rather than a simple, single form. Hence, this article will discuss in detail some merging selections and special merging techniques for inducing information, together with some paradigmatic or prescriptive decision-making suggestions for decision makers to refer to.
Note that in MCDM problems in a BUI environment, there are many restrictions on the selection of methods for merging different pieces of inducing information, which makes the problem more complex. For example, if a certainty degree is associated only with a normalized weight vector as a whole, then it cannot be merged inward with any entry of that normalized weight vector; but if different certainty degrees are specifically linked, pointwise, with the different entries of a non-normalized weight vector, then each certainty can be merged with the corresponding entry. Clearly, the few existing traditional methods cannot work here because none of them adequately consider the involved numerical uncertainties.
The theoretical advantages and contributions of this study are that it clarifies how to reasonably consider several different types of inducing information in MCDM problems and how to selectively merge some of them in order to generate desirable weight functions with bipolar preferences. The study will help decision makers to build and select suitable, automatic, and relatively objective weighted evaluation models from the given information and under their own preferences.
The remainder of this article is organized as follows. In Section 2, we review some basic concepts and propose a general weight-allocation paradigm with some extended instances. Section 3 discusses the differences in generating a normalized weight vector from given importance information, which has two different uncertain forms, and then proposes some detailed generating methods. In Section 4, we analyze some different methods and orderings to generate a normalized weight vector from inputs of BUI. Section 5 concludes and comments on this study.

## 2. Weight-Allocation Methods and Aggregation Based on Inducing Variable

Without loss of generality, a real-number input with $n$ individual values is represented as a function/vector $x:\{1,\dots,n\}\to[0,1]$, and the set of all such input functions is conventionally denoted by $[0,1]^n$. Given an $n$-ary input function $x$, in order to carry out further aggregation we need a normalized weight function/vector of dimension $n$, $w:\{1,\dots,n\}\to[0,1]$ with $\sum_{i=1}^{n}w(i)=1$, and each value $w(i)$ will be associated with the input value $x(i)$. The space of all normalized weight functions of dimension $n$ is denoted by $W(n)$.
In this study, we will often consider a set of $n$ criteria, which can be used to comprehensively evaluate the alternatives or options under consideration in MCDM. Hence, the relative importance of those $n$ criteria will be expressed as a normalized weight function of dimension $n$, $w\in W(n)$. However, we will also be faced with the concept of the “importance” of (each) criterion, which will be expressed by a weight function/vector $W:\{1,\dots,n\}\to[0,1]$ that is not necessarily normalized and should be distinguished from the concept of “relative importance”.
With some given normalized weight vector w, we can use it to perform a preference-involved aggregation such as weighted average (also known as weighted mean) and geometrical weighted average (also known as geometrical weighted mean). Whether or not the weight vector is derived from OWA aggregation, we can always express the corresponding (geometrical) OWA operators by (geometrical) weighted mean [25].
Definition 1.
The weighted average operator with weight function $w:\{1,\dots,n\}\to[0,1]$ is defined as the mapping $WA_w:[0,1]^n\to[0,1]$ such that
$$WA_w(x)=\sum_{i=1}^{n}w(i)\,x(i).$$
Definition 2.
The geometrical weighted average operator with weight function $w:\{1,\dots,n\}\to[0,1]$ is defined as the mapping $GWA_w:[0,1]^n\to[0,1]$ such that
$$GWA_w(x)=\prod_{i=1}^{n}x(i)^{w(i)}.$$
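As a quick illustration, Definitions 1 and 2 can be computed directly; the following is a minimal sketch (our own illustrative code, not from the paper):

```python
# Minimal sketch of Definitions 1 and 2: the weighted average (WA) and the
# geometrical weighted average (GWA) of an input vector x under weights w.

def weighted_average(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def geometric_weighted_average(w, x):
    result = 1.0
    for wi, xi in zip(w, x):
        result *= xi ** wi
    return result

w = [0.2, 0.3, 0.5]   # a normalized weight vector
x = [0.4, 0.6, 0.8]   # inputs in [0, 1]
print(weighted_average(w, x))            # 0.2*0.4 + 0.3*0.6 + 0.5*0.8 = 0.66
print(geometric_weighted_average(w, x))  # 0.4^0.2 * 0.6^0.3 * 0.8^0.5
```

By the AM–GM inequality, the geometrical weighted average never exceeds the weighted average for the same weights and inputs.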
The input can also be formed by $m$ individuals, each of which is a normalized weight function of dimension $n$. The space of all such inputs is denoted by $(W(n))^m$. With a weight function $w\in W(m)$, we can define the average of a collection of input functions $(u_j)_{j=1}^{m}\in (W(n))^m$ by the mapping $WAW_w:(W(n))^m\to W(n)$, called the Weighted Average for Weights (WAW), such that
$$WAW_w\big((u_j)_{j=1}^{m}\big)=\sum_{j=1}^{m}w(j)\cdot u_j.$$
Note that since each $u_j$ is a normalized weight function of dimension $n$, $\sum_{j=1}^{m}w(j)\cdot u_j$ is still a normalized weight function of the same dimension $n$.
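The normalization-preserving property of WAW is easy to check numerically; this sketch (illustrative names and data, not from the paper) averages two normalized weight vectors entrywise:

```python
# Sketch of the WAW mapping: an entrywise convex combination of normalized
# weight vectors u_j, which again yields a normalized weight vector.

def waw(w, us):
    n = len(us[0])
    return [sum(wj * uj[i] for wj, uj in zip(w, us)) for i in range(n)]

u1 = [0.5, 0.3, 0.2]
u2 = [0.1, 0.6, 0.3]
v = waw([0.4, 0.6], [u1, u2])
print(v)         # entrywise convex combination of u1 and u2
print(sum(v))    # sums to 1: WAW preserves normalization
```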
To determine a normalized weight function from certain inducing information, we will use the method originally proposed by Yager in the induced ordered weighted average (IOWA) operator [15,16]. The original method did not fully account for the effect of tied inducing values in the actual weight-allocation process; Jin et al. [26] further developed a three-set expression to deal accurately and strictly with this weight-allocation problem.
In this study, a piece of inducing information (also called an inducing function) of dimension $n$ is expressed by a function $c:\{1,\dots,n\}\to[0,1]$, which generally corresponds to the $n$ considered criteria in MCDM or to the input function of dimension $n$. From a piece of inducing information and a well-defined function called a Regular Increasing Monotone (RIM) quantifier [16], we can generate a normalized weight function. A RIM quantifier $Q:[0,1]\to[0,1]$ is non-decreasing and satisfies the boundary conditions $Q(0)=0$ and $Q(1)=1$. We denote by $\mathcal{Q}$ the space of all RIM quantifiers. In addition, the orness of any RIM quantifier $Q$ is defined by $orness(Q)=\int_{0}^{1}Q(t)\,dt$, whose value indicates the preference extent in a general way [16].
The weight-generating method that obtains a normalized weight function $w\in W(n)$ from a given inducing function $c$ and RIM quantifier $Q$ can be rephrased and revamped by the following formula:
$$w(i)=\frac{Q\big((n-|L_i|)/n\big)-Q\big(|U_i|/n\big)}{|E_i|},$$
where $L_i=\{q\in\{1,\dots,n\}:c(q)<c(i)\}$, $U_i=\{q\in\{1,\dots,n\}:c(q)>c(i)\}$, $E_i=\{q\in\{1,\dots,n\}:c(q)=c(i)\}$, and $|S|$ denotes the cardinality of a finite set $S$.
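The three-set formula above translates directly into code. The sketch below is our own illustrative implementation, not the authors' code:

```python
# Quantifier-based weight allocation following the three-set expression:
# w(i) = [Q((n - |L_i|)/n) - Q(|U_i|/n)] / |E_i|,
# where L_i, U_i, E_i collect the indices whose inducing value is lower
# than, higher than, or equal to c(i). Ties are shared evenly within E_i.

def allocate_weights(c, Q):
    n = len(c)
    w = []
    for i in range(n):
        lower = sum(1 for q in range(n) if c[q] < c[i])   # |L_i|
        upper = sum(1 for q in range(n) if c[q] > c[i])   # |U_i|
        equal = sum(1 for q in range(n) if c[q] == c[i])  # |E_i| >= 1
        w.append((Q((n - lower) / n) - Q(upper / n)) / equal)
    return w

Q = lambda t: 1 - (1 - t) ** 2   # a RIM quantifier with orness 2/3
print(allocate_weights([0.7, 0.5, 1.0], Q))   # ≈ [3/9, 1/9, 5/9]
```

Because $Q$ is non-decreasing with $Q(0)=0$ and $Q(1)=1$, the allocated weights are non-negative and sum to 1 regardless of ties.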
In this study, for the sake of convenience, we use a straightforward and strict way to express the weight allocation given a function/vector of dimension $n$, $c:\{1,\dots,n\}\to[0,1]$, and a RIM quantifier $Q$:
$$G^{(n)}:[0,1]^n\times\mathcal{Q}\to W(n),$$
and
$$w=G^{(n)}(c,Q).$$
The mapping $G^{(n)}$ in Equation (5) is called the general weight-allocation formulation, and its value $w=G^{(n)}(c,Q)$ on given $c$ and $Q$ is called a weight-allocation paradigm.
Recall that basic uncertain information (BUI) [23,24] is a recently proposed uncertainty concept that can generalize many different types of uncertainty, such as fuzzy information [27], intuitionistic fuzzy information [28,29,30,31], probability information, interval information, hesitant information [32,33,34,35], and some other types of uncertain information [36]. Through suitable methods or formulations, those different types of uncertain information may be transformed indirectly into BUI (which will not be discussed further in this work). Another feature of BUI lies in the fact that the certainty/uncertainty extents may also be communicated or expressed directly by experts. BUI can conveniently express the extent of uncertainty in decision-making, which helps one to make reasonable evaluations and wise decisions in uncertain environments. Recently, apart from the authors of this work, other researchers have also paid attention to BUI and developed related theories, new concepts, and applications [37,38,39,40,41,42,43]. For example, Chen et al. [37] proposed Improved Basic Uncertain Linguistic Information (IBULI) as a new extension of BUI, and Tao et al. [38] proposed the basic uncertain information soft set with its application.
A BUI is a pair $\langle x,c\rangle$ in which $x\in[0,1]$ is the mainly concerned datum (also called the value element, to distinguish it from the certainty element in this work) and $c\in[0,1]$ is its associated certainty degree (also called the certainty element), generally representing the extent to which $x$ takes exactly its value or the degree to which the involved decision makers believe it takes that value. The originally defined BUI has a very simple pair form, but it can also assume different extended forms. For example, when $x:\{1,\dots,n\}\to[0,1]$ (or $W:\{1,\dots,n\}\to[0,1]$) and $c:\{1,\dots,n\}\to[0,1]$ are two functions, each form $\langle x(i),c(i)\rangle$ (or $\langle W(i),c(i)\rangle$), $i\in\{1,\dots,n\}$, can also easily be recognized as a BUI pair. Sometimes, the value element $w_j$ in a BUI $\langle w_j,c_j\rangle$ is a normalized weight vector $w_j\in W(n)$, in which case $c_j$ should be recognized as the certainty degree of the whole vector $w_j$ rather than of any one of its entries $w_j(i)$.
We can perform weight allocations according to Equations (5) and (6) with the different types of BUI inputs discussed in the preceding paragraph. For example, when we are given a collection of BUI pairs $\{\langle x_i,c_i\rangle\}_{i=1}^{n}$ ($x_i,c_i\in[0,1]$), we can use Equation (6) to obtain a normalized weight function for further information aggregation, $w=G^{(n)}(c,Q)$, in which $c(i)=c_i$. When we are given $\langle W(i),c'(i)\rangle$ ($i\in\{1,\dots,n\}$), then we have $w=G^{(n)}(c,Q)$, in which $c(i)=W(i)$. When we are given $\{\langle w_j,c_j\rangle\}_{j=1}^{m}$ with $w_j\in W(n)$, we consequently have $w=G^{(m)}(c,Q)$, in which $c\in[0,1]^m$ and $c(j)=c_j$.

## 3. Generating Relative Importance from Given Importance Information

In MCDM problems, probably the most essential task is to determine the relative importance of the multiple criteria considered. A widely accepted method is to allocate and form a normalized weight function $w\in W(n)$ to embody the relative importance of the $n$ criteria. In general, it is much easier to assign a single importance rating to each criterion than to assign a whole normalized weight vector to all the criteria at once. Hence, the weight-allocation methods of this section begin from a simple form involving a function/vector $W:\{1,\dots,n\}\to[0,1]$ that is, in general, not normalized. We assume $W$ corresponds to the $n$ criteria $\{C_i\}_{i=1}^{n}$, so that $W(i)\in[0,1]$ is the importance extent of criterion $C_i$; that is, the larger $W(i)$, the more important the criterion $C_i$ in the comprehensive evaluation.
Next, in order to generate a normalized weight function, we consider two methods that are simple but effective in some situations. In these methods we suppose the decision maker carries out the whole weight-allocation process alone, so the importance of the criteria, $W$, is judged by the decision maker alone.
The first method is direct, even somewhat simplistic. If $\sum_{i=1}^{n}W(i)\neq 0$, then we easily obtain a normalized weight function $w$ by $w(i)=W(i)\big/\sum_{k=1}^{n}W(k)$; if $\sum_{i=1}^{n}W(i)=0$, then we set $w(i)=1/n$ by the Laplace decision criterion. Alternatively, we may preset a number $r>0$ and generate $w$ by $w(i)=\dfrac{W(i)+r}{nr+\sum_{k=1}^{n}W(k)}$, in which case whether $\sum_{i=1}^{n}W(i)=0$ no longer matters.
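Both variants of this first method can be sketched in a few lines (the parameter name `r` follows the text; the fallback to $1/n$ is the Laplace criterion):

```python
# Direct normalization of an importance vector W, with an optional
# smoothing constant r > 0 that also handles the all-zero case.

def normalize(W, r=0.0):
    n, total = len(W), sum(W)
    if r > 0:
        return [(Wi + r) / (n * r + total) for Wi in W]
    if total == 0:
        return [1.0 / n] * n          # Laplace decision criterion
    return [Wi / total for Wi in W]

print(normalize([0.5, 0.7, 0.3, 0.9]))          # plain normalization
print(normalize([0.5, 0.7, 0.3, 0.9], r=0.1))   # smoothed variant
```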
The second method is to perform IOWA weight allocation with the paradigm $w=G^{(n)}(W,Q)$, where $Q$ is a RIM quantifier. In this case, we require $orness(Q)\ge 0.5$ because this adequately represents a preference for the criteria with higher importance values $W(i)$ over those with lower importance values.
Next, we consider two further situations in which multiple experts are invited to join the determination of the relative importance of the criteria. Assume that $m$ experts $\{E_j\}_{j=1}^{m}$ have been invited to offer their own opinions about the importance of the $n$ criteria, represented by $m$ weight functions $W_j:\{1,\dots,n\}\to[0,1]$ ($j\in\{1,\dots,m\}$). With such initial importance functions, there exist two types of uncertainty involvement in the experts' opinions, which can be represented by the two different extensions of BUI below.
The first assigns, one-to-one, an individual certainty degree $c_j(i)$ to each weight value $W_j(i)$, where $c_j(i)$ may vary with respect to both $i\in\{1,\dots,n\}$ and $j\in\{1,\dots,m\}$. Thus, different experts may offer different importance information $W_j:\{1,\dots,n\}\to[0,1]$ and different certainty functions $c_j:\{1,\dots,n\}\to[0,1]$. The initial information provided by the $m$ experts can thus be formulated as $(W_j,c_j)_{j=1}^{m}$, seen as a collection of $m$ extended BUI called Varying Certainties for Weight Values (VCWV).
The second assigns a single certainty degree $d_j$ to the importance information $W_j$ as a whole. This type of uncertainty is formulated as $(W_j,d_j)_{j=1}^{m}$, with the $d_j\in[0,1]$ being certainty degrees (real values), and is called Constant Certainty for Weight Function (CCWF). Note that $d_j$ does not vary with $i\in\{1,\dots,n\}$ but may vary with $j\in\{1,\dots,m\}$.
These two types of uncertainty-involved importance information differ significantly in weight allocation. For the first type, we can use an aggregation operator [26,44] $f:[0,1]^2\to[0,1]$ to melt $W_j(i)$ with $c_j(i)$ and obtain the intermediate index $f(W_j(i),c_j(i))$ ($j\in\{1,\dots,m\}$, $i\in\{1,\dots,n\}$) as intermediate information for further weight allocation; in addition, for a fixed $i$, $c_j(i)$ can serve as inducing information because, in general, the values $W_j(i)$ assigned higher certainties $c_j(i)$ will be more convincing to the decision maker. For the second type, however, we cannot consider any form of melting $W_j(i)$ with $d_j$; we allocate weights merely according to the certainty information $(d_j)_{j=1}^{m}\in[0,1]^m$ as the inducing information. Note also that $d_j$ cannot be used as inducing information for any single $W_j(i)$ with fixed $i$, because we regard $W_j$ as an independent whole whose certainty is $d_j$.
This major difference then leads to differences in the detailed weight-allocation processes. We next present two weight-allocation methods for VCWV and one method for CCWF.

#### 3.1. Weight Allocation for VCWV—Method 1

First, select a binary aggregation operator $f:[0,1]^2\to[0,1]$ to obtain the $mn$ values $f(W_j(i),c_j(i))$ ($j\in\{1,\dots,m\}$, $i\in\{1,\dots,n\}$). Since any aggregation operator is non-decreasing, this monotonicity ensures that both a larger importance $W_j(i)$ and a larger certainty $c_j(i)$ contribute to a larger relative importance of criterion $C_i$, and vice versa. Then, a final normalized weight function $w\in W(n)$ can be obtained by
$$w=G^{(n)}(c,Q),$$
where $c(i)=\frac{1}{m}\sum_{j=1}^{m}f(W_j(i),c_j(i))$ and $Q$ is a RIM quantifier with $orness(Q)\ge 0.5$.
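A sketch of this method, with the product as an illustrative choice of the melting operator $f$ (any non-decreasing binary aggregation operator would do) and the quantifier-based allocator of Section 2 written out again for self-containedness; the toy data are our own:

```python
# VCWV Method 1: melt each W_j(i) with its certainty c_j(i) via an
# aggregation operator f, average over the m experts, and use the result
# as inducing information for quantifier-based weight allocation.

def allocate_weights(c, Q):
    n = len(c)
    return [(Q((n - sum(cq < c[i] for cq in c)) / n)
             - Q(sum(cq > c[i] for cq in c) / n))
            / sum(cq == c[i] for cq in c) for i in range(n)]

def vcwv_method1(Ws, cs, f, Q):
    m, n = len(Ws), len(Ws[0])
    inducing = [sum(f(Ws[j][i], cs[j][i]) for j in range(m)) / m
                for i in range(n)]
    return allocate_weights(inducing, Q)

f = lambda W, c: W * c           # product as the melting operator (our choice)
Q = lambda t: 1 - (1 - t) ** 2   # orness 2/3 >= 0.5
Ws = [[0.5, 0.7], [0.8, 0.2]]    # two experts, two criteria (toy data)
cs = [[0.7, 0.9], [0.5, 1.0]]
print(vcwv_method1(Ws, cs, f, Q))
```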

#### 3.2. Weight Allocation for VCWV—Method 2

First, for each $i\in\{1,\dots,n\}$, obtain a normalized weight function $v_i\in W(m)$ by $v_i=G^{(m)}(b_i,Q)$, where $b_i(j)=c_j(i)$ and $Q$ is a RIM quantifier with $orness(Q)\ge 0.5$. Then, for each $i\in\{1,\dots,n\}$, generate the BUI pair $\big\langle \sum_{j=1}^{m}v_i(j)W_j(i),\ \sum_{j=1}^{m}v_i(j)c_j(i)\big\rangle$. Finally, obtain a normalized weight function $w\in W(n)$ by
$$w=G^{(n)}(c,Q),$$
where $c(i)=\sum_{j=1}^{m}v_i(j)c_j(i)$ and $Q$ is a RIM quantifier with $orness(Q)\ge 0.5$. Note that the obtained $w$ will be used (if necessary) to weight and aggregate the values $\sum_{j=1}^{m}v_i(j)W_j(i)$ ($i\in\{1,\dots,n\}$).
Example 1.
We show a simple numerical example of generating a weight function with the above weight allocation for VCWV—Method 2. The example is representative; after following it, the other methods introduced in this work should not be difficult to understand.
Assume $n=4$ and $m=3$, and
$W_1=(0.5,0.7,0.3,0.9)$, $W_2=(0.8,0.2,0.6,1)$, $W_3=(0.9,0.7,0.4,0.8)$,
$c_1=(0.7,0.9,0.6,0.4)$, $c_2=(0.5,1,0.5,0.8)$, $c_3=(1,0.9,0.8,0.3)$.
Moreover, suppose $Q$ satisfies $Q(t)=1-(1-t)^2$, with $orness(Q)=\int_{0}^{1}Q(t)\,dt=2/3\ge 0.5$.
Firstly, we calculate
$v_1=G^{(3)}(b_1,Q)=G^{(3)}((0.7,0.5,1),Q)=(3/9,1/9,5/9)$,
$v_2=G^{(3)}(b_2,Q)=G^{(3)}((0.9,1,0.9),Q)=(2/9,5/9,2/9)$,
$v_3=G^{(3)}(b_3,Q)=G^{(3)}((0.6,0.5,0.8),Q)=(3/9,1/9,5/9)$,
$v_4=G^{(3)}(b_4,Q)=G^{(3)}((0.4,0.8,0.3),Q)=(3/9,5/9,1/9)$.
Then, by taking weighted averages, we obtain, respectively,
$c(1)=\sum_{j=1}^{3}v_1(j)c_j(1)=(0.7)(3/9)+(0.5)(1/9)+(1)(5/9)=0.844$,
$c(2)=\sum_{j=1}^{3}v_2(j)c_j(2)=0.956$,
$c(3)=\sum_{j=1}^{3}v_3(j)c_j(3)=0.7$,
$c(4)=\sum_{j=1}^{3}v_4(j)c_j(4)=0.611$.
Consequently,
$w=G^{(4)}(c,Q)=G^{(4)}((0.844,0.956,0.7,0.611),Q)=(5/16,7/16,3/16,1/16)$.
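Example 1 can be verified mechanically. The following sketch (our own code) re-implements the three-set allocator and reproduces the induced certainties $c(i)$ and the final weight vector $w=(5/16,7/16,3/16,1/16)$:

```python
# Reproducing Example 1 (VCWV Method 2) end to end.

def allocate_weights(c, Q):
    """Three-set quantifier-based allocation, i.e. G^(n)(c, Q)."""
    n = len(c)
    return [(Q((n - sum(cq < c[i] for cq in c)) / n)
             - Q(sum(cq > c[i] for cq in c) / n))
            / sum(cq == c[i] for cq in c) for i in range(n)]

Q = lambda t: 1 - (1 - t) ** 2   # RIM quantifier with orness 2/3
# Importance vectors W_j (aggregated later if needed) and certainties c_j.
Ws = [[0.5, 0.7, 0.3, 0.9], [0.8, 0.2, 0.6, 1.0], [0.9, 0.7, 0.4, 0.8]]
cs = [[0.7, 0.9, 0.6, 0.4], [0.5, 1.0, 0.5, 0.8], [1.0, 0.9, 0.8, 0.3]]
m, n = 3, 4

# Step 1: expert weights v_i from the inducing vectors b_i(j) = c_j(i).
v = [allocate_weights([cs[j][i] for j in range(m)], Q) for i in range(n)]

# Step 2: induced certainties c(i) = sum_j v_i(j) * c_j(i).
cert = [sum(v[i][j] * cs[j][i] for j in range(m)) for i in range(n)]

# Step 3: final weights over the four criteria.
w = allocate_weights(cert, Q)
print([round(x, 3) for x in cert])   # ≈ [0.844, 0.956, 0.7, 0.611]
print(w)                             # ≈ [5/16, 7/16, 3/16, 1/16]
```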
We next present a relatively simpler weight-allocation method for CCWF.
First, obtain a normalized weight function $v\in W(m)$ by $v=G^{(m)}(c,Q)$ with $c(j)=d_j$. Then, take the weighted average of the $W_j$, $j\in\{1,\dots,m\}$, using $v$, and obtain $W:\{1,\dots,n\}\to[0,1]$ such that $W(i)=\sum_{j=1}^{m}v(j)W_j(i)$. Finally, normalize $W$ to obtain a final normalized weight function $w\in W(n)$ by $w(i)=\dfrac{W(i)+r}{nr+\sum_{k=1}^{n}W(k)}$, where $r>0$ is a preset real value.
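The CCWF method chains these three steps: allocate expert weights from the whole-vector certainties $d_j$, average the $W_j$, then smooth-normalize. A self-contained sketch with illustrative toy data:

```python
# CCWF weight allocation: expert-level certainties d_j induce expert
# weights v, which average the importance vectors W_j; the result is
# normalized with a smoothing constant r > 0.

def allocate_weights(c, Q):
    n = len(c)
    return [(Q((n - sum(cq < c[i] for cq in c)) / n)
             - Q(sum(cq > c[i] for cq in c) / n))
            / sum(cq == c[i] for cq in c) for i in range(n)]

def ccwf(Ws, ds, Q, r=0.1):
    m, n = len(Ws), len(Ws[0])
    v = allocate_weights(ds, Q)                      # expert weights from d_j
    W = [sum(v[j] * Ws[j][i] for j in range(m)) for i in range(n)]
    total = n * r + sum(W)
    return [(Wi + r) / total for Wi in W]            # smoothed normalization

Q = lambda t: 1 - (1 - t) ** 2
Ws = [[0.5, 0.7, 0.3], [0.8, 0.2, 0.6]]   # toy importance vectors
ds = [0.9, 0.6]                           # whole-vector certainties
print(ccwf(Ws, ds, Q))
```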

## 4. Generating Relative Importance from Inputs of BUI

In this section, we discuss some methods to generate normalized weight function from BUI inputs in MCDM.
We still consider the same decision environment involving $n$ criteria and $m$ experts. For any alternative under evaluation and comparison with others, we assume for the sake of argument that it has been evaluated on each criterion $C_i$ ($i\in\{1,\dots,n\}$) by the BUI $(\langle x_{ij},c_{ij}\rangle)_{j=1}^{m}$, where $\langle x_{ij},c_{ij}\rangle$ is provided by Expert $j$, $x_{ij}$ is the evaluation value regarding criterion $C_i$, and $c_{ij}$ is the certainty extent for $x_{ij}$. Suppose the relative importance of the $m$ invited experts is represented as a normalized weight vector $u\in W(m)$; then we can first obtain an intermediate aggregation result from the BUI $(\langle x_{ij},c_{ij}\rangle)_{j=1}^{m}$. In detail, for each criterion $C_i$ ($i\in\{1,\dots,n\}$), we take $\langle x_i,c_i\rangle=\big\langle \sum_{j=1}^{m}u(j)x_{ij},\ \sum_{j=1}^{m}u(j)c_{ij}\big\rangle$. (Actually, the weight function $u$ can be further generalized to a fuzzy measure $\mu$, and we can take the corresponding fuzzy integrals [45,46] as the intermediate BUI results; this is more general but will not be detailed in this work.)
Then, with the BUI vector $(\langle x_i,c_i\rangle)_{i=1}^{n}$ so obtained, we next propose some different methods to generate a normalized weight function $w\in W(n)$ for the $n$ criteria.

#### 4.1. Method 1: Value-Induced Approach

With $(\langle x_i,c_i\rangle)_{i=1}^{n}$, we directly obtain the desired normalized weight function $w\in W(n)$ by
$$w=G^{(n)}(x,Q),$$
where $x(i)=x_i$ and $Q$ is a RIM quantifier with $orness(Q)\in[0,1]$. Note that here the RIM quantifier $Q$ is allowed any orness value between 0 and 1, the two extreme preferences. This is because, for the input data, the decision maker's optimism–pessimism bipolar preference should be able to cover every degree of preference from extreme optimism to extreme pessimism.

#### 4.2. Method 2: Certainty-Induced Approach

With $(\langle x_i,c_i\rangle)_{i=1}^{n}$, we directly obtain the desired normalized weight function $w\in W(n)$ by
$$w=G^{(n)}(c,Q),$$
where $c(i)=c_i$ and $Q$ is a RIM quantifier with $orness(Q)\ge 0.5$. We emphasize that here the RIM quantifier is restricted to an orness of not less than 0.5 because, in general, we always prefer data with higher certainties to data with lower certainties. Note that this differs from the situation in Method 1; the preference here is an indifference–certainty-inclined bipolar preference.

#### 4.3. Method 3: Value- and Certainty-Induced Approach

Actually, the bipolar preferences of optimism–pessimism and indifference–certainty can be melted together to form new, single inducing information. Below, we consider three methods. The first two concern the convex combination with the general form $A(x,c)=\lambda x+(1-\lambda)c$, which takes part of the effect of the value elements and part of the effect of the certainty elements according to a given extent $\lambda\in[0,1]$ associated with the value elements. The last method uses an aggregation function to merge value and certainty into new inducing information.

#### 4.4. Method 3.1: Combination-Inducing

With $(\langle x_i,c_i\rangle)_{i=1}^{n}$, we generate a normalized weight function $w\in W(n)$ by first performing the combination and then carrying out the weight allocation:
$$w=G^{(n)}\big((\lambda x_i+(1-\lambda)c_i)_{i=1}^{n},\,Q\big),$$
where $Q$ is a given RIM quantifier with $orness(Q)\in[0,1]$, $\lambda$ is the extent to which we consider the effect of the value elements in BUI, and $1-\lambda$ is the complementary extent to which we consider the effect of the certainty elements.

#### 4.5. Method 3.2: Inducing-Combination

With $(\langle x_i,c_i\rangle)_{i=1}^{n}$, we generate a normalized weight function $w\in W(n)$ by first carrying out the weight allocations and then performing the combination:
$$w=\lambda\,G^{(n)}(x,Q_1)+(1-\lambda)\,G^{(n)}(c,Q_2),$$
where $Q_1$ is a given RIM quantifier with $orness(Q_1)\in[0,1]$, $Q_2$ is another given RIM quantifier with $orness(Q_2)\in[0.5,1]$, and $\lambda$ has the same meaning as in Method 3.1.

#### 4.6. Method 3.3: Aggregation-Function-Induced Method

With $(\langle x_i,c_i\rangle)_{i=1}^{n}$, we generate a normalized weight function $w\in W(n)$ by first applying a binary aggregation function, which is not necessarily symmetric, and then taking the aggregated values as the inducing information. By a binary aggregation function we here mean a mapping $A:[0,1]^2\to[0,1]$ such that (i) $A(0,0)=0$ and $A(1,1)=1$, and (ii) $A(x_1,c_1)\le A(x_2,c_2)$ whenever $x_1\le x_2$ and $c_1\le c_2$. It is symmetric if $A(x,c)=A(c,x)$ for all $x,c\in[0,1]$. Since $x$ is the value element and $c$ the certainty element of BUI, we do not require the aggregation function to be symmetric. Hence, we can adopt many different types of non-symmetric binary aggregation functions. For example, we could select
$$A(x,c)=x^{1/c},$$
or take the convex-combination form (as in Method 3.1)
$$A(x,c)=\lambda x+(1-\lambda)c,$$
or, alternatively, the geometrical form of combination
$$A(x,c)=x^{\lambda}c^{1-\lambda}.$$
Then, we can obtain a final normalized weight function $w\in W(n)$ such that
$$w=G^{(n)}(c,Q),$$
where $Q$ is a given RIM quantifier with $orness(Q)\in[0,1]$ and $c$ satisfies $c(i)=A(x_i,c_i)$ ($i\in\{1,\dots,n\}$).
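The three candidate aggregation functions can be compared side by side. In the sketch below, the behavior of the power form at $c=0$ is our own convention (the text leaves it unspecified), and the BUI data are toy values:

```python
# Three non-symmetric aggregation functions A(x, c) for melting a BUI
# value x with its certainty c, followed by quantifier-based allocation.

def allocate_weights(c, Q):
    n = len(c)
    return [(Q((n - sum(cq < c[i] for cq in c)) / n)
             - Q(sum(cq > c[i] for cq in c) / n))
            / sum(cq == c[i] for cq in c) for i in range(n)]

def a_power(x, c):
    # A(x, c) = x^(1/c); at c = 0 we take the limit value (our convention)
    if c == 0:
        return 1.0 if x == 1.0 else 0.0
    return x ** (1.0 / c)

def a_convex(x, c, lam=0.5):
    return lam * x + (1 - lam) * c            # A(x, c) = λx + (1−λ)c

def a_geometric(x, c, lam=0.5):
    return (x ** lam) * (c ** (1 - lam))      # A(x, c) = x^λ c^(1−λ)

bui = [(0.6, 0.9), (0.8, 0.5), (0.4, 1.0)]    # toy BUI pairs <x_i, c_i>
Q = lambda t: 1 - (1 - t) ** 2
for A in (a_power, a_convex, a_geometric):
    inducing = [A(x, c) for x, c in bui]
    print(A.__name__, allocate_weights(inducing, Q))
```

Each choice of $A$ yields different inducing values and hence a different final weight vector, but all three produce normalized weights.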

## 5. Conclusions

In multi-criteria decision-making, the relative importance of the involved criteria can be represented by a normalized weight vector. This work first proposed a general weight-allocation paradigm for its convenience of expression, and then clearly expressed and formulated some suggested extensions and weight-determination methods. The basic tool was the three-set expression of Yager's induced weight-allocation method.
In uncertain environments, when each criterion has been offered a single evaluation of its absolute importance, we provided two corresponding types of weight-allocation methods, because the uncertainties can be attached either to each individual criterion or to all the criteria as a whole, as provided by the experts.
When the inputs carry uncertainties, the involvement of multiple experts adds further complexity to the whole weight-allocation problem. We therefore proposed several weight-determination methods based on different orderings and on different approaches to melting value and certainty. The proposed methods provide reasonable and conducive suggestions for practitioners and decision makers in multi-criteria decision-making.
The main limitation of the proposed methods is that the determination of the numerical uncertainties themselves should be further studied and devised if one does not want to rely on descriptive statistics, which is in general costly in terms of resources. Together with the proposed analytic evaluation models, in future studies we will try to devise automatic methods and mechanisms to heuristically derive numerical certainty information from the few human decision makers involved.

## Author Contributions

Conceptualization, Y.-Q.X. and L.-S.J.; Methodology, L.-S.J.; Supervision, Z.-S.C.; Validation, Y.-Q.X., L.-S.J. and Z.-S.C.; Writing—original draft, Y.-Q.X. and L.-S.J.; Writing—review and editing, Z.-S.C., R.R.Y., J.Š., M.K. and S.B. All authors have read and agreed to the published version of the manuscript.

## Funding

This work was supported by the Science and Technology Assistance Agency under contract No. APVV-18-0066 and by the National Natural Science Foundation of China (No. 72171182).


## Acknowledgments

The authors would like to thank the editors and reviewers for handling and reviewing our paper.

## Conflicts of Interest

The authors declare no conflict of interest.
