Article

Three-Way Decision Models Based on Ideal Relations in Multi-Attribute Decision-Making

1 Hunan Provincial Laboratory of Intelligent Computing and Language Information Processing, Hunan Normal University, Changsha 410081, China
2 College of Information Science and Engineering, Hunan Normal University, Changsha 410081, China
* Author to whom correspondence should be addressed.
Entropy 2022, 24(7), 986; https://doi.org/10.3390/e24070986
Submission received: 6 June 2022 / Revised: 10 July 2022 / Accepted: 13 July 2022 / Published: 17 July 2022

Abstract
In recent years, applications of three-way decision (TWD) have attracted the attention of many scholars. In this paper, we combine TWD with multi-attribute decision-making (MADM). First, we utilize the essential idea of TOPSIS in MADM theory to propose a pair of new ideal relation models based on TWD, namely, the three-way ideal superiority model and the three-way ideal inferiority model. Second, in order to reduce errors caused by the subjectivity of decision-makers, we develop two new methods to calculate the state sets for the two proposed ideal relation models. Third, we employ aggregated relative loss functions to calculate the thresholds of each object, divide all objects into three different territories and sort all objects. We then use a concrete example of building-appearance selection to verify the rationality and feasibility of the proposed models. Furthermore, we apply comparative analysis, Spearman's rank correlation analysis and experimental analysis to illustrate the consistency and superiority of our methods.

1. Introduction

MADM, also known as finite-scheme multi-criteria decision-making, refers to the decision problem of choosing the optimal alternative, or ranking the alternatives, under multiple attributes. MADM is an important feature of human cognition and problem solving and plays a vital role in modern decision science. It has been widely applied in many fields, such as engineering, technology, economics, management and military affairs.
In the past, decision-makers used to approach a decision-making problem with two kinds of decisions: acceptance or rejection. However, this approach usually does not yield the optimal decisions or desired decision results. In view of this, Yao [1] put forward the concept of TWD in 2009. TWD is a decision model that was summarized and refined through long-term research on rough sets, especially probabilistic rough sets and decision-theoretic rough sets, and it is in line with the actual cognitive ability of human beings. TWD usually utilizes the probabilistic rough set model with two parameters, α and β, to divide the entire universe into three disjoint territories, namely the positive territory, boundary territory and negative territory, and then adopts different strategies and methods for each of the three territories. In this paper, we construct a pair of new ideal relations, the ideal superiority relation and the ideal inferiority relation, by using the most essential ideas of TOPSIS [2] in MADM theory [3]. Using the ideal superiority relation, we construct a TWD model based on the ideal superiority class. Similarly, the ideal inferiority class is constructed upon the ideal inferiority relation, upon which the other TWD model is proposed. Subsequently, we use the proposed models to analyze and evaluate an example of an architectural-appearance selection problem.
There are three main motivations of this paper:
(1)
First of all, traditional MADM methods are generally combined with the two-way decision model, whereas we combine MADM with TWD. In this paper, we use the idea behind TOPSIS together with TWD in MADM. The main idea of TOPSIS is that the optimal object should have the minimal distance to the best target solution (BTS) and, at the same time, the maximal distance from the worst target solution (WTS). However, its limitation is that we cannot determine the order of two objects when they meet only one of the two conditions, or neither. To solve this problem, some scholars proposed equivalence relations [4], similarity relations [5,6,7], dominance relations [8,9] and neighborhood operators [10,11], while we propose a new pair of ideal relations.
(2)
Secondly, in most probabilistic rough sets, the values of α and β are given artificially, without justifying why they should be set that way. Moreover, regarding the calculation of conditional probability [12] in decision-theoretic rough set models [1,5,13], scholars have proposed different understandings and calculation methods from different angles and analysis directions. State sets generally come in two types: classic sets [14,15] and fuzzy sets [16]. In classic sets, object membership values are given subjectively, and few studies have calculated state-set values. However, different decision-makers have different opinions and preferences, so the given membership values can differ greatly. Therefore, in order to reduce the error caused by subjectivity, we propose a new method of objectively calculating the states.
(3)
Thirdly, since there are two states and three actions for each object, an object has six loss functions, each of which is either a subjective loss function [17,18,19] or an objective loss function [7,16,20]. If the multiple attributes of an object are considered separately, there are six loss functions for each attribute, which requires a huge amount of calculation and a large amount of stored data. In this paper, we use the relative loss function proposed by Jia and Liu [3], aggregating the relative loss function values for each object to reduce the amount of calculation. In addition, in order to improve the accuracy and reliability of the TWD division, we calculate the thresholds of each object and then divide all objects into three territories according to each object's thresholds.
The research contributions of this paper are as follows:
(1)
We combine MADM with TWD and use TOPSIS to propose a pair of new ideal relations, namely ideal superiority relations and ideal inferiority relations, which have opposite definition conditions. Based on these two relations, we construct the ideal superiority class on the basis of the ideal superiority relations and the ideal inferiority class on the basis of the ideal inferiority relations. Furthermore, we construct a pair of new models: one is the TWD model based on the ideal superiority class, that is, the TWD ideal superiority model; the other is the TWD model based on the ideal inferiority class, that is, the TWD ideal inferiority model. These two models combined with TWD can be applied to the classification and sorting of objects. Moreover, the models we propose provide a new theoretical basis for research on uncertain decision-making, decision-making model selection, dynamic monitoring and intelligent decision-making technology. Meanwhile, these two models also provide new insights and ideas for decision-makers who are studying TWD.
(2)
In the current paper, we provide a new calculation method for the state set of conditional probability. In Wang’s method [21], fixed values of parameters are given subjectively; however, for different decision-makers, the research limits are different, so this method has certain limitations and inflexibility. On the basis of Wang’s method, we set up an adjustable preference parameter k to control the cardinality of the object class, which could help calculate the values of state sets objectively to provide decision-makers with various choices. Our proposed method of calculating state sets provides new insight into the field of decision analysis.
(3)
In terms of the loss function, the relative loss function of Jia and Liu is calculated objectively using the evaluation values in the information matrix. In this paper, different from the calculation method of Jia and Liu, we set the risk-aversion coefficient of each attribute in the relative loss function to the same value instead of having the decision-maker subjectively measure the risk-aversion coefficient of each attribute, and we further extend the method by computing the thresholds of each object. The thresholds are used to determine the three territories of TWD. Because the nature of the attributes and the standards of the criteria differ across objects, we calculate the thresholds of each object instead of using a single threshold standard. Hence, the measurement scale is more in line with human cognition and more persuasive, and the research results obtained are more accurate and reasonable.
The specific structure of this paper is as follows: Section 2 introduces some fundamental knowledge. In Section 3, we construct a pair of new TWD models based on the ideal relations. In Section 4, we explore an application of the proposed TWD-MADM approach. In Section 5, we conduct experimental analysis and Spearman's correlation coefficient analysis. In Section 6, we give a brief overview of this paper and an outlook for future research.

2. Preliminaries

In this section, we introduce some fundamental knowledge of MADM, decision-theoretic rough sets and relative loss functions.

2.1. MADM

An MADM problem is about finding an optimal object from a set of related alternatives according to the specified preferences, given the attributes of each alternative. In this paper, a nonempty finite set of objects is denoted by $O = \{O_i \mid i \in \eta\}$ $(\eta = \{1, 2, \ldots, n\})$, where $O_i$ is the $i$-th object. A nonempty finite set of attributes is expressed by $S = \{S_j \mid j \in \mu\}$ $(\mu = \{1, 2, \ldots, m\})$, where $S_j$ is the $j$-th attribute. Then, the pair $(O, S)$ is called an information system. The value of object $O_i$ with respect to attribute $S_j$ is denoted by $S_j(O_i)$ (i.e., $u_{ij}$). If there exists $u_{ij} \in S$ such that $u_{ij}$ is a fuzzy attribute value, i.e., $u_{ij} \in (0, 1)$, then $(O, S)$ is referred to as a fuzzy information system. If each attribute of $S$ is fuzzy, then $(O, S)$ is called a full fuzzy information system. Here, a fuzzy information system is represented by $I = (O, S)$, and $W = \{w_j \mid j \in \mu\}$ is the attribute weight vector, where each $w_j$ satisfies two conditions: $0 \le w_j \le 1$ and $\sum_{j=1}^{m} w_j = 1$. For the sake of simplicity, all information systems in this paper refer to fuzzy information systems unless specifically stated. A fuzzy information system can be illustrated as an $n \times m$ MADM information matrix. Usually, we choose an optimal alternative from $O$ by evaluating and ranking all objects under the $m$ attributes. Table 1 demonstrates the multi-attribute information matrix.
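The set-up above can be sketched in code. The following is a minimal illustration of a fuzzy information system $I = (O, S)$; the matrix values and weights are invented for illustration and are not taken from the paper.

```python
# A fuzzy information system I = (O, S): n objects, m fuzzy attribute
# values u_ij in (0, 1), and a weight vector W satisfying the two
# conditions 0 <= w_j <= 1 and sum(W) == 1.  Illustrative data only.
U = [
    [0.6, 0.3, 0.8],  # evaluations u_1j of object O_1
    [0.4, 0.7, 0.5],  # evaluations u_2j of object O_2
    [0.9, 0.2, 0.6],  # evaluations u_3j of object O_3
]
W = [0.5, 0.2, 0.3]   # attribute weights w_j

# Check the fuzziness of the values and the two weight conditions.
assert all(0 < u < 1 for row in U for u in row)
assert all(0 <= w <= 1 for w in W) and abs(sum(W) - 1.0) < 1e-9
```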
In Table 1, there can be multiple types of attributes, such as profit attributes, expense attributes, public attributes and private attributes. Accordingly, in order to classify and sort all objects accurately and obtain the expected ranking result, we need to unify these diverse attributes into the same dimension before making a decision, which requires normalizing the decision matrix. The normalized decision-making matrix is shown in Table 2. For any decision-making problem, using the same dimension and standard to measure different objects makes the decision-making process easier and simpler, and the decision results more convincing.
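One common way to carry out this unification is column-wise min-max normalization, flipping expense attributes so that larger always means better. This is a sketch of that standard technique; the paper's Table 2 may use a different normalization formula.

```python
def normalize(U, kinds):
    """Min-max normalize a decision matrix column by column.

    kinds[j] is 'profit' (larger is better) or 'expense' (smaller is
    better); expense columns are flipped so that, after normalization,
    a larger value is always a better value.
    """
    n, m = len(U), len(U[0])
    V = [[0.0] * m for _ in range(n)]
    for j in range(m):
        col = [U[i][j] for i in range(n)]
        lo, hi = min(col), max(col)
        for i in range(n):
            if kinds[j] == 'profit':
                V[i][j] = (U[i][j] - lo) / (hi - lo)
            else:  # expense attribute: flip the direction
                V[i][j] = (hi - U[i][j]) / (hi - lo)
    return V
```

For example, `normalize([[1.0, 4.0], [3.0, 2.0]], ['profit', 'expense'])` maps the best entry of each column to 1 and the worst to 0.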

2.2. The TWD Model Based on Decision-Theoretic Rough Set (DTRS)

The decision-theoretic rough set model is based on the Bayesian decision process and the main idea of TWD. It uses two states and three actions to describe the decision-making process. In this paper, we let $\Omega = \{R, \neg R\}$ be the set of states, which means an object is either in $R$ or not in $R$. Meanwhile, let $T = \{T_P, T_B, T_N\}$ be the set of actions, where $T_P$, $T_B$ and $T_N$ are used to classify an object into three categories. When classifying an object $O_i$, exactly one of the following holds: $O_i \in POS(R)$, $O_i \in BND(R)$ or $O_i \in NEG(R)$.
Considering that different actions result in correspondingly different losses, we let $\theta_i^{PP}$, $\theta_i^{BP}$ and $\theta_i^{NP}$ represent the loss functions of selecting actions $T_P$, $T_B$ and $T_N$ when object $O_i$ belongs to $R$. Likewise, when $O_i \in \neg R$, $\theta_i^{PN}$, $\theta_i^{BN}$ and $\theta_i^{NN}$ represent the loss functions of choosing actions $T_P$, $T_B$ and $T_N$, respectively. At the same time, let $[O_i]$ stand for the ideal class of $O_i$ that we construct in this paper.
For an object $O_i$, the expected losses $Q(T_\bullet \mid [O_i])$ $(\bullet = P, B, N)$ of choosing the three actions can be calculated by the following formulas:
$$Q(T_P \mid [O_i]) = \theta_i^{PP} P(R \mid [O_i]) + \theta_i^{PN} P(\neg R \mid [O_i]),$$
$$Q(T_B \mid [O_i]) = \theta_i^{BP} P(R \mid [O_i]) + \theta_i^{BN} P(\neg R \mid [O_i]),$$
$$Q(T_N \mid [O_i]) = \theta_i^{NP} P(R \mid [O_i]) + \theta_i^{NN} P(\neg R \mid [O_i]),$$
where $P(R \mid [O_i])$ is the conditional probability that an alternative $O_i$ is in $R$ given $[O_i]$. In the same way, $P(\neg R \mid [O_i])$ represents the conditional probability that $O_i$ is not in $R$.
According to the Bayesian minimal risk decision theory, the set of actions with the least expected loss is chosen as the remarkable course of action. Hence, we can draw the corresponding three decision rules as follows:
$(P)$ If $Q(T_P \mid [O_i]) \le Q(T_B \mid [O_i])$ and $Q(T_P \mid [O_i]) \le Q(T_N \mid [O_i])$, then $O_i \in POS(R)$;
$(B)$ If $Q(T_B \mid [O_i]) \le Q(T_P \mid [O_i])$ and $Q(T_B \mid [O_i]) \le Q(T_N \mid [O_i])$, then $O_i \in BND(R)$;
$(N)$ If $Q(T_N \mid [O_i]) \le Q(T_P \mid [O_i])$ and $Q(T_N \mid [O_i]) \le Q(T_B \mid [O_i])$, then $O_i \in NEG(R)$.
In light of the properties of the probability function, we conclude that $P(R \mid [O_i]) + P(\neg R \mid [O_i]) = 1$. From the semantic interpretation of risk in real life, we suppose that $\theta_i^{PP} \le \theta_i^{BP} < \theta_i^{NP}$ and $\theta_i^{NN} \le \theta_i^{BN} < \theta_i^{PN}$. Thus, the rules $(P)$–$(N)$ can be simplified as follows:
$(P1)$ If $P(R \mid [O_i]) \ge \alpha_i$ and $P(R \mid [O_i]) \ge \gamma_i$, then $O_i \in POS(R)$;
$(B1)$ If $P(R \mid [O_i]) \le \alpha_i$ and $P(R \mid [O_i]) \ge \beta_i$, then $O_i \in BND(R)$;
$(N1)$ If $P(R \mid [O_i]) \le \beta_i$ and $P(R \mid [O_i]) \le \gamma_i$, then $O_i \in NEG(R)$,
where
$$\alpha_i = \frac{\theta_i^{PN} - \theta_i^{BN}}{(\theta_i^{PN} - \theta_i^{BN}) + (\theta_i^{BP} - \theta_i^{PP})},$$
$$\beta_i = \frac{\theta_i^{BN} - \theta_i^{NN}}{(\theta_i^{BN} - \theta_i^{NN}) + (\theta_i^{NP} - \theta_i^{BP})},$$
$$\gamma_i = \frac{\theta_i^{PN} - \theta_i^{NN}}{(\theta_i^{PN} - \theta_i^{NN}) + (\theta_i^{NP} - \theta_i^{PP})}.$$
To determine the magnitude relationship among the three thresholds $\alpha_i$, $\beta_i$ and $\gamma_i$, we consider two reasonable assumptions based on the magnitudes of the loss functions:
Given $(\theta_i^{PN} - \theta_i^{BN})(\theta_i^{NP} - \theta_i^{BP}) > (\theta_i^{BP} - \theta_i^{PP})(\theta_i^{BN} - \theta_i^{NN})$, we have $0 \le \beta_i < \gamma_i < \alpha_i \le 1$. As a result, the decision rules (P1)–(N1) can be further simplified as follows:
$(P2)$ If $P(R \mid [O_i]) \ge \alpha_i$, then $O_i \in POS(R)$;
$(B2)$ If $\beta_i < P(R \mid [O_i]) < \alpha_i$, then $O_i \in BND(R)$;
$(N2)$ If $P(R \mid [O_i]) \le \beta_i$, then $O_i \in NEG(R)$.
Given $(\theta_i^{PN} - \theta_i^{BN})(\theta_i^{NP} - \theta_i^{BP}) \le (\theta_i^{BP} - \theta_i^{PP})(\theta_i^{BN} - \theta_i^{NN})$, we have $0 < \alpha_i \le \gamma_i \le \beta_i < 1$. As a result, the decision rules (P1)–(N1) can be further simplified as follows:
$(P3)$ If $P(R \mid [O_i]) \ge \gamma_i$, then $O_i \in POS(R)$;
$(N3)$ If $P(R \mid [O_i]) < \gamma_i$, then $O_i \in NEG(R)$.
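The threshold formulas and the case split above can be sketched as follows. This is an illustrative implementation under the stated loss-order assumptions; the function and key names are ours, not the paper's.

```python
def dtrs_thresholds(l):
    """Thresholds alpha, beta, gamma from a loss dict with keys
    'PP', 'BP', 'NP' (state R) and 'PN', 'BN', 'NN' (state not-R).
    Assumes l['PP'] <= l['BP'] < l['NP'] and l['NN'] <= l['BN'] < l['PN']."""
    a = (l['PN'] - l['BN']) / ((l['PN'] - l['BN']) + (l['BP'] - l['PP']))
    b = (l['BN'] - l['NN']) / ((l['BN'] - l['NN']) + (l['NP'] - l['BP']))
    g = (l['PN'] - l['NN']) / ((l['PN'] - l['NN']) + (l['NP'] - l['PP']))
    return a, b, g

def decide(p, a, b, g):
    """Three-way decision for conditional probability p = P(R | [O_i])."""
    if b < a:  # 0 <= beta < gamma < alpha <= 1: rules (P2)-(N2)
        if p >= a:
            return 'POS'
        if p <= b:
            return 'NEG'
        return 'BND'
    # 0 < alpha <= gamma <= beta < 1: two-way rules (P3)-(N3)
    return 'POS' if p >= g else 'NEG'
```

For instance, with losses PP=0, BP=2, NP=6, PN=8, BN=3, NN=0, the first assumption holds and the thresholds come out as α = 5/7, β = 3/7, γ = 4/7, so probabilities above 5/7 are accepted and below 3/7 rejected.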

2.3. The Relative Loss Functions

To reduce the computational cost of thresholds, Jia and Liu [3] recently rewrote the calculation formulas of the three thresholds:
$$\alpha = \frac{\theta^{(P-B)N}}{\theta^{(P-B)N} + \theta^{(B-P)P}},$$
$$\beta = \frac{\theta^{(B-N)N}}{\theta^{(B-N)N} + \theta^{(N-B)P}},$$
$$\gamma = \frac{\theta^{(P-N)N}}{\theta^{(P-N)N} + \theta^{(N-P)P}},$$
where $\theta^{(N-P)P}$ represents the loss difference between taking action $T_N$ and taking action $T_P$ when an object belongs to $R$. The other loss-difference terms are interpreted analogously.
The relative loss functions simplify the calculation of the original loss functions: the loss is zero when correctly accepting or correctly rejecting an object. Jia and Liu also carried out a regular transformation on the loss functions, i.e., for the losses in $R$: $\theta^{PP*} = 0$, $\theta^{BP*} = \theta^{BP} - \theta^{PP}$, $\theta^{NP*} = \theta^{NP} - \theta^{PP}$; for the losses in $\neg R$: $\theta^{PN*} = \theta^{PN} - \theta^{NN}$, $\theta^{BN*} = \theta^{BN} - \theta^{NN}$, $\theta^{NN*} = 0$. The results are shown in Table 3.
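The regular transformation above can be sketched as a small function: each state's losses are shifted so that the correct action costs zero, which leaves all loss differences, and hence the thresholds, unchanged. The dictionary keys are our own illustrative naming.

```python
def relative_losses(l):
    """Regular transformation of the six loss functions (Table 3):
    shift the state-R losses by theta_PP and the state-notR losses by
    theta_NN, so the correct action in each state incurs zero loss."""
    return {
        'PP': 0.0,                  # correct acceptance: zero loss
        'BP': l['BP'] - l['PP'],
        'NP': l['NP'] - l['PP'],
        'PN': l['PN'] - l['NN'],
        'BN': l['BN'] - l['NN'],
        'NN': 0.0,                  # correct rejection: zero loss
    }
```

Because only differences of losses appear in the threshold formulas, feeding the transformed values into them yields the same α, β and γ as the originals.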
For example, the relative loss functions of object $O_i$ under attribute $S_j$ are shown in Table 4.
In Table 4, $\sigma$ stands for the risk-aversion coefficient, whose value range has certain requirements; since this paper focuses on TWD, $\sigma \in [0, 0.5]$. Furthermore, in an MADM problem, more than one attribute needs to be considered for each object. If the loss functions had to be calculated for every attribute separately, the whole process would be cumbersome and time-consuming. Hence, we aggregate the relative loss function values of each object to reduce the computational cost. The results are shown in Table 5.

3. TWD Models Based on the Ideal Relations

In this section, we construct a pair of ideal relations with opposite definitions by applying the TOPSIS method. Then we explore the ideal relations to construct two TWD ideal models. Throughout this section, $O = \{O_i \mid i \in \eta\}$ $(\eta = \{1, 2, \ldots, n\})$ is the object set.

3.1. A TWD Model Based on the Ideal Superiority Relation

In the following, we introduce in detail how to construct the ideal superiority relation and the ideal superiority class, as well as relevant definitions and theorems. Then, we describe the process of establishing the TWD model with DTRS by using our proposed superiority class.

3.1.1. Construction of the Ideal Superiority Relation and Class

According to the TOPSIS method, for any two objects O i and O j , if the distance between O i and the BTS is less than the distance between O j and the BTS, and the distance between O i and the WTS is greater than the distance between O j and the WTS, then we can conclude that O i precedes O j .
In what follows, based on Table 2, we name the distance between $O_i$ and the BTS the best target ideal distance (BTID), represented by $O_i^+$. In the same way, the distance between $O_i$ and the WTS is referred to as the worst target ideal distance (WTID), denoted by $O_i^-$. According to the above definitions, $O_i^+$ and $O_i^-$ can be computed as follows:
$$O_i^+ = \sqrt{\sum_{j=1}^{m} w_j \Big(v_{ij} - \max_{i \in \eta} v_{ij}\Big)^2}, \qquad O_i^- = \sqrt{\sum_{j=1}^{m} w_j \Big(v_{ij} - \min_{i \in \eta} v_{ij}\Big)^2}.$$
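The two distances can be computed as in the sketch below, assuming the usual TOPSIS weighted Euclidean distance to the column-wise best and worst values of the normalized matrix; the function name is ours.

```python
from math import sqrt

def ideal_distances(V, W):
    """BTID O_i^+ and WTID O_i^- for each row of a normalized matrix V:
    weighted Euclidean distances to the column-wise max (best target
    solution) and column-wise min (worst target solution)."""
    n, m = len(V), len(V[0])
    best  = [max(V[i][j] for i in range(n)) for j in range(m)]
    worst = [min(V[i][j] for i in range(n)) for j in range(m)]
    plus  = [sqrt(sum(W[j] * (V[i][j] - best[j])  ** 2 for j in range(m)))
             for i in range(n)]
    minus = [sqrt(sum(W[j] * (V[i][j] - worst[j]) ** 2 for j in range(m)))
             for i in range(n)]
    return plus, minus
```

A smaller `plus[i]` and a larger `minus[i]` mean object $O_i$ is closer to the best target solution and farther from the worst one.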
Utilizing O i + and O i , the ideal superiority relation based on the TOPSIS method is described below.
Definition 1.
Based on the fuzzy information system I = ( O , S ) , e.g., Table 2, we define the ideal superiority relation as follows:
$$E = \{(O_i, O_j) \in O \times O \mid O_j^+ \le O_i^+ \text{ and } O_j^- \ge O_i^-,\; j \in \eta\}.$$
Remark 1.
The explanation for Formula (11): if the BTID of $O_j$ is less than or equal to that of $O_i$ and, simultaneously, the WTID of $O_j$ is greater than or equal to that of $O_i$, then $(O_i, O_j) \in E$, i.e., $O_j$ is superior to $O_i$. From the perspective of profit and expense attributes, the lower the expense and the higher the profit, the better the decision effect.
Definition 2.
Based on fuzzy information system I = ( O , S ) and Definition 1, the ideal superiority class of O i is constructed as follows:
$$[O_i]_E = \{O_j \in O \mid O_j^+ \le O_i^+ \text{ and } O_j^- \ge O_i^-\}.$$
Obviously, the ideal superiority class of object O i is a collection of objects that are superior to O i .
Example 1.
Let us take a simple fund selection problem as an example to explain the above definition. There are five available funds S = { s 1 , s 2 , s 3 , s 4 , s 5 } and four related attributes R = { r 1 , r 2 , r 3 , r 4 } . Among them, r 1 and r 3 are profit attributes, while r 2 and r 4 are expense attributes; w = { 0.3 , 0.1 , 0.2 , 0.4 } is the weight vector of these four attributes; τ = { τ 1 , τ 2 , τ 3 , τ 4 } is the risk avoidance coefficient of these four attributes. The specific values are presented in Table 6:
According to the steps of the TOPSIS model and Definition 2, we can calculate the ideal superiority classes of these five funds as follows:
$[s_1]_E = \{s_1, s_3, s_4\}$.
$[s_2]_E = \{s_2, s_3, s_4, s_5\}$.
$[s_3]_E = \{s_3\}$.
$[s_4]_E = \{s_3, s_4\}$.
$[s_5]_E = \{s_3, s_4, s_5\}$.
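Given the two distance vectors, Definition 2 reduces to a pairwise comparison, as in the sketch below (the function name is ours and the objects are indexed 0 to n−1 for convenience).

```python
def superiority_classes(plus, minus):
    """[O_i]_E of Definition 2: the set of objects O_j with
    O_j^+ <= O_i^+ and O_j^- >= O_i^-, i.e., the objects superior
    to O_i (O_i itself always qualifies, so the class is never empty)."""
    n = len(plus)
    return [
        {j for j in range(n) if plus[j] <= plus[i] and minus[j] >= minus[i]}
        for i in range(n)
    ]
```

Reflexivity (Proposition 1a) is immediate: both comparisons hold with equality for j = i, so every object belongs to its own ideal superiority class.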
Proposition 1.
For the ideal superiority relation E, we summarize that it has the following properties:
(a) Reflexivity: for every $O_i \in O$ $(i \in \eta)$, we have $O_i \in [O_i]_E$;
(b) Transitivity: for $O_x, O_y, O_z \in O$ $(x, y, z \in \eta)$, if $O_x \in [O_y]_E$ and $O_y \in [O_z]_E$, then $O_x \in [O_z]_E$.
Proof. 
As can be seen from Definition 2, the above two properties are easily proven.    □
Definition 3.
In light of Definitions 1 and 2, we construct a new state set $\Pi = \{T_1, T_2\}$, where $T_1$ represents the collection of outstanding objects and $T_2$ represents the collection of non-outstanding objects. The state set of the ideal superiority class is defined as follows:
$$T_1 = \left\{O_i \;\middle|\; \frac{|[O_i]_E|}{n} \le k_1\right\}, \qquad T_2 = \left\{O_i \;\middle|\; \frac{|[O_i]_E|}{n} > k_1\right\}, \qquad k_1 \in (0, 0.5],$$
where $|[O_i]_E|$ is the cardinality of the ideal superiority class of $O_i$, and $k_1$ is called the preference parameter of the ideal superiority class. Once the two states are constructed, the conditional probability under the ideal superiority class can be computed as $P(\Delta \mid [O_i]_E) = \frac{|\Delta \cap [O_i]_E|}{|[O_i]_E|}$ $(\Delta \in \{T_1, T_2\})$.
The selection of k 1 is based on the decision-maker’s preference, the perspective of the problem or the research direction. Every decision-maker has different standards for the measurement of problems. For example, some decision-makers require the definition of excellence to be extremely strict, i.e., all conditions must be met. On the other hand, for some decision-makers the standard is loosened and only a certain number of conditions need to be met. In view of the above-mentioned reasons, different researchers will select different preference parameters for the same decision problem, and the resulting decisions will be diverse as well.
For the preference parameter $k_1$ of the ideal superiority class, we set it in the interval $(0, 0.5]$. The lower limit of 0 must be excluded: since each ideal superiority class contains the object itself, $|[O_i]_E|/n = 0$ can never hold, so allowing $k_1 = 0$ would make the decision condition unsatisfiably harsh and place all objects in a single state set. The upper limit of 0.5 determines excellence based on the principle of majority: for example, if the cardinality of the ideal superiority class of object $O_i$ is greater than half of the object set, then $O_i$ is a non-outstanding object.
For any object O i , if the cardinality of the ideal superiority class of O i divided by the total number of objects is less than or equal to k 1 , we say that O i belongs to the outstanding objects; the reason for taking the equal sign here is that when we define the ideal superiority class, we include the object itself to avoid taking an empty set. On the contrary, if the cardinality of the ideal superiority class of O i divided by the total number of objects is greater than k 1 , then O i belongs to non-outstanding objects. Obviously, for the states T 1 and T 2 , when the preference parameter k 1 grows larger, the number of objects in T 1 becomes bigger, while the number of objects in T 2 becomes smaller. On the contrary, when k 1 decreases, the cardinality of T 1 decreases, whereas the cardinality of T 2 increases.
Remark 2.
The state set of the ideal superiority class is $\Pi = \{T_1, T_2\}$. It possesses the following two properties: (1) $T_1 \cup T_2 = O$; (2) $T_1 \cap T_2 = \emptyset$.
From the above analysis and discussion, the union of the two states is the set of all objects, namely the object set $O$, which indicates that every object is assigned to one of the two states. At the same time, the intersection of the two state sets is empty, which means that no object can belong to both state sets simultaneously.
Example 2.
(Continued from Example 1) Suppose that k 1 = 0.3 . In line with Definition 3, we can obtain the state set of the ideal superiority class as follows:
$T_1 = \{s_3\}$, $T_2 = \{s_1, s_2, s_4, s_5\}$.
Knowing the ideal superiority class set and state set of each object, we next calculate the conditional probability of each object in different states under the ideal superiority class.
$$P(T_1 \mid [s_1]_E) = \frac{|T_1 \cap [s_1]_E|}{|[s_1]_E|} = \frac{|\{s_3\} \cap \{s_1, s_3, s_4\}|}{|\{s_1, s_3, s_4\}|} = \frac{1}{3},$$
$$P(T_1 \mid [s_2]_E) = \frac{|T_1 \cap [s_2]_E|}{|[s_2]_E|} = \frac{|\{s_3\} \cap \{s_2, s_3, s_4, s_5\}|}{|\{s_2, s_3, s_4, s_5\}|} = \frac{1}{4},$$
$$P(T_1 \mid [s_3]_E) = \frac{|T_1 \cap [s_3]_E|}{|[s_3]_E|} = \frac{|\{s_3\} \cap \{s_3\}|}{|\{s_3\}|} = 1,$$
$$P(T_1 \mid [s_4]_E) = \frac{|T_1 \cap [s_4]_E|}{|[s_4]_E|} = \frac{|\{s_3\} \cap \{s_3, s_4\}|}{|\{s_3, s_4\}|} = \frac{1}{2},$$
$$P(T_1 \mid [s_5]_E) = \frac{|T_1 \cap [s_5]_E|}{|[s_5]_E|} = \frac{|\{s_3\} \cap \{s_3, s_4, s_5\}|}{|\{s_3, s_4, s_5\}|} = \frac{1}{3}.$$
In the same way, we can replace state $T_1$ with $T_2$ and perform the same calculation to obtain the conditional probability of each object under the ideal superiority class in state $T_2$.
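Definition 3 and the conditional-probability calculation can be sketched together; the following reproduces the state sets of Example 2 and the probabilities above, with the funds $s_1, \ldots, s_5$ indexed 0 to 4 and the function names ours.

```python
def state_sets(classes, k1):
    """Definition 3: O_i is outstanding (in T_1) iff |[O_i]_E| / n <= k1."""
    n = len(classes)
    T1 = {i for i in range(n) if len(classes[i]) / n <= k1}
    T2 = set(range(n)) - T1
    return T1, T2

def cond_prob(state, cls):
    """P(state | [O_i]_E) = |state ∩ [O_i]_E| / |[O_i]_E|."""
    return len(state & cls) / len(cls)

# Ideal superiority classes of Example 1 (s_1..s_5 as indices 0..4).
classes = [{0, 2, 3}, {1, 2, 3, 4}, {2}, {2, 3}, {2, 3, 4}]
T1, T2 = state_sets(classes, k1=0.3)   # Example 2: T1 = {s_3}
```

Running this reproduces Example 2 ($T_1 = \{s_3\}$) and the probabilities of the display above, e.g. $P(T_1 \mid [s_1]_E) = 1/3$.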

3.1.2. The Three-Way Process Based on the Ideal Superiority Class

In the decision-theoretic rough set, each decision behavior carries a corresponding risk loss, which means different actions produce different loss functions. Since each object has two possible states and three possible actions in DTRS, there are six loss functions in total for each object. In the proposed ideal superiority class and state set, the symbols $L_P$, $L_B$ and $L_N$ express acceptance, deferred consideration and rejection, respectively; these correspond to the positive, boundary and negative territory of TWD.
In this paper, we use the relative loss functions proposed by Jia and Liu [3]. When an object belongs to $T_1$, taking the loss of action $L_P$ as the baseline, we subtract $\theta_{[O_i]_E}^{PT_1}$ from $\theta_{[O_i]_E}^{PT_1}$, $\theta_{[O_i]_E}^{BT_1}$ and $\theta_{[O_i]_E}^{NT_1}$, respectively; the relative losses of $L_P$, $L_B$ and $L_N$ are then $0$, $\theta_{[O_i]_E}^{(BT_1)*}$ and $\theta_{[O_i]_E}^{(NT_1)*}$, respectively. When an object belongs to $T_2$, taking the loss of action $L_N$ as the baseline, we subtract $\theta_{[O_i]_E}^{NT_2}$ from $\theta_{[O_i]_E}^{PT_2}$, $\theta_{[O_i]_E}^{BT_2}$ and $\theta_{[O_i]_E}^{NT_2}$, respectively; the relative losses of $L_P$, $L_B$ and $L_N$ are then $\theta_{[O_i]_E}^{(PT_2)*}$, $\theta_{[O_i]_E}^{(BT_2)*}$ and $0$, respectively.
For an object O i in T 1 , the risk of dividing it into the positive territory is less than or equal to the risk of dividing it into the boundary territory; both risks are smaller than the risk of dividing it into the negative territory. Similarly, for an object O i in T 2 , the risk of dividing it into the negative territory is less than or equal to the risk of dividing it into the boundary territory; both risks are smaller than the risk of dividing it into the positive territory. Therefore, we put forward a reasonable hypothesis with practical significance:
$$0 \le \theta_{[O_i]_E}^{(BT_1)*} < \theta_{[O_i]_E}^{(NT_1)*},$$
$$0 \le \theta_{[O_i]_E}^{(BT_2)*} < \theta_{[O_i]_E}^{(PT_2)*}.$$
Based on Definitions 2 and 3, as well as the expected-risk formulas, the expected losses when $O_i$ takes each action are:
$$Q(L_P \mid [O_i]_E) = \theta_{[O_i]_E}^{(PT_2)*} P(T_2 \mid [O_i]_E),$$
$$Q(L_B \mid [O_i]_E) = \theta_{[O_i]_E}^{(BT_1)*} P(T_1 \mid [O_i]_E) + \theta_{[O_i]_E}^{(BT_2)*} P(T_2 \mid [O_i]_E),$$
$$Q(L_N \mid [O_i]_E) = \theta_{[O_i]_E}^{(NT_1)*} P(T_1 \mid [O_i]_E).$$
The property $P(T_1 \mid [O_i]_E) + P(T_2 \mid [O_i]_E) = 1$ implies $P(T_2 \mid [O_i]_E) = 1 - P(T_1 \mid [O_i]_E)$. As a result, we can simplify the above formulas as follows:
$$Q(L_P \mid [O_i]_E) = \theta_{[O_i]_E}^{(PT_2)*} \big(1 - P(T_1 \mid [O_i]_E)\big),$$
$$Q(L_B \mid [O_i]_E) = \theta_{[O_i]_E}^{(BT_1)*} P(T_1 \mid [O_i]_E) + \theta_{[O_i]_E}^{(BT_2)*} \big(1 - P(T_1 \mid [O_i]_E)\big),$$
$$Q(L_N \mid [O_i]_E) = \theta_{[O_i]_E}^{(NT_1)*} P(T_1 \mid [O_i]_E).$$
According to the minimum-risk principle of the Bayesian decision process, an action is performed when its risk does not exceed the risk of either of the other two actions, and the object is then divided into the corresponding territory. Hence, we can represent the divisions of TWD as follows:
$(P4)$ If $Q(L_P \mid [O_i]_E) \le Q(L_B \mid [O_i]_E)$ and $Q(L_P \mid [O_i]_E) \le Q(L_N \mid [O_i]_E)$, then $O_i \in POS(T_1)$;
$(B4)$ If $Q(L_B \mid [O_i]_E) \le Q(L_P \mid [O_i]_E)$ and $Q(L_B \mid [O_i]_E) \le Q(L_N \mid [O_i]_E)$, then $O_i \in BND(T_1)$;
$(N4)$ If $Q(L_N \mid [O_i]_E) \le Q(L_P \mid [O_i]_E)$ and $Q(L_N \mid [O_i]_E) \le Q(L_B \mid [O_i]_E)$, then $O_i \in NEG(T_1)$.
Since different attributes of an object correspond to different loss functions, we need to integrate the loss functions to reduce the amount of calculation. Table 7 displays the aggregate relative loss functions.
In Table 7, $v_{max} = \max_{j \in \mu} v_{ij}$, $v_{min} = \min_{j \in \mu} v_{ij}$, and $\tau \in (0, 0.5)$ is the risk-aversion coefficient, determined by the decision-makers according to the characteristics of the attributes. Consequently, the value of $\tau$ varies across realistic problems.
Based on the aggregate relative loss functions exhibited in Table 7, the thresholds $\alpha_i^E$, $\beta_i^E$ and $\gamma_i^E$ of $O_i$ can be computed as below:
$$\alpha_i^E = \frac{\sum_j w_j (1-\tau)(v_{max} - v_{ij})}{\sum_j w_j (1-\tau)(v_{max} - v_{ij}) + \sum_j w_j \tau (v_{ij} - v_{min})},$$
$$\beta_i^E = \frac{\sum_j w_j \tau (v_{max} - v_{ij})}{\sum_j w_j \tau (v_{max} - v_{ij}) + \sum_j w_j (1-\tau)(v_{ij} - v_{min})},$$
$$\gamma_i^E = \frac{\sum_j w_j (v_{max} - v_{ij})}{v_{max} - v_{min}}.$$
Given the three thresholds above, the rules of TWD can be rewritten as follows:
$(P5)$ If $P(T_1 \mid [O_i]_E) \ge \alpha_i^E$ and $P(T_1 \mid [O_i]_E) \ge \gamma_i^E$, then $O_i \in POS(T_1)$;
$(B5)$ If $P(T_1 \mid [O_i]_E) \le \alpha_i^E$ and $P(T_1 \mid [O_i]_E) \ge \beta_i^E$, then $O_i \in BND(T_1)$;
$(N5)$ If $P(T_1 \mid [O_i]_E) \le \beta_i^E$ and $P(T_1 \mid [O_i]_E) \le \gamma_i^E$, then $O_i \in NEG(T_1)$.
From Section 2.3, we can see that when $0 \le \beta_i^E < \gamma_i^E < \alpha_i^E \le 1$, the conditions of TWD are met. Therefore, we can further simplify the decision rules (P5)–(N5) as follows:
$(P6)$ If $P(T_1 \mid [O_i]_E) \ge \alpha_i^E$, then $O_i \in POS(T_1)$;
$(B6)$ If $\beta_i^E < P(T_1 \mid [O_i]_E) < \alpha_i^E$, then $O_i \in BND(T_1)$;
$(N6)$ If $P(T_1 \mid [O_i]_E) \le \beta_i^E$, then $O_i \in NEG(T_1)$.
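The per-object thresholds and the division rules above can be sketched as follows, assuming the sums in Formulas (19)–(21) are taken over the normalized row of one object; the function names are ours.

```python
def object_thresholds(v, W, tau):
    """alpha_i^E, beta_i^E, gamma_i^E for one object with normalized
    evaluation row v, weights W and risk-aversion coefficient tau,
    following Formulas (19)-(21)."""
    vmax, vmin = max(v), min(v)
    up   = sum(W[j] * (vmax - v[j]) for j in range(len(v)))
    down = sum(W[j] * (v[j] - vmin) for j in range(len(v)))
    alpha = (1 - tau) * up / ((1 - tau) * up + tau * down)
    beta  = tau * up / (tau * up + (1 - tau) * down)
    gamma = up / (vmax - vmin)
    return alpha, beta, gamma

def divide(p, alpha, beta):
    """Rules (P6)-(N6): divide by conditional probability p = P(T_1 | [O_i]_E)."""
    if p >= alpha:
        return 'POS'
    if p <= beta:
        return 'NEG'
    return 'BND'
```

With a symmetric row such as `v = [1.0, 0.0]`, equal weights and τ = 0.4, this yields β = 0.4 < γ = 0.5 < α = 0.6, so the TWD condition of Section 2.3 holds and the rules (P6)–(N6) apply.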
Example 3.
(Continued from Examples 1 and 2) To make the three-way process based on the ideal superiority class easier to understand, we explain it in detail with an example. Assume $\tau = \{0.4, 0.4, 0.4, 0.4\}$; next, we give the specific calculation of the relative loss functions and thresholds of object $s_1$ and the classification process of TWD.
Having obtained the aggregate relative loss function of object $s_1$ in Table 8, we can calculate the three thresholds of object $s_1$ according to Formulas (19)–(21):
$$\alpha_1^E = \frac{0.3581}{0.3581 + 0.1168} = 0.7540, \quad \beta_1^E = \frac{0.2387}{0.2387 + 0.1752} = 0.5767, \quad \gamma_1^E = \frac{0.5968}{1.0000 - 0.1111} = 0.6714.$$
Now that we know $\alpha_1^E$, $\beta_1^E$, $\gamma_1^E$ and the conditional probability $P(T_1 \mid [s_1]_E)$ of object $s_1$, according to the rules of TWD we get $P(T_1 \mid [s_1]_E) = 0.3333 < \beta_1^E = 0.5767$; therefore, object $s_1$ is divided into the negative territory. The thresholds of the other objects can be obtained in the same way, and their conditional probabilities were obtained in Example 2. Finally, according to rules (P6)–(N6), the division results of the remaining objects are presented in Table 9:

3.2. A TWD Model Based on the Ideal Inferiority Relation

This subsection explains in detail how to construct the ideal inferiority relation and class, as well as the essential definitions and theorems. Then, we discuss the process of developing the TWD model with DTRS by utilizing our proposed inferiority class.

3.2.1. The Construction of the Ideal Inferiority Relation and Class

Similar to Section 3.1, we make use of the TOPSIS method to establish the ideal inferiority relation and the ideal inferiority class, which are defined as follows:
Definition 4.
Based on the fuzzy information system I = ( O , S ) , we define the ideal inferiority relation as follows:
$F = \{(O_i, O_j) \in O \times O \mid O_j^+ \ge O_i^+ \text{ and } O_j^- \le O_i^-,\ i, j \in \eta\}.$
Remark 3.
The semantic explanation of the ideal inferiority relation: given two objects $O_i$ and $O_j$, if the BTID of $O_j$ is greater than or equal to that of $O_i$, and simultaneously the WTID of $O_j$ is less than or equal to that of $O_i$, then $(O_i, O_j) \in F$. In terms of profit and expense attributes, this corresponds to higher expenses and lower profits; high expenses and low earnings are the worst type of decision-making outcome for decision-makers.
Definition 5.
Based on the fuzzy information system I = ( O , S ) and Definition 4, for  O i O , the ideal inferiority class of O i is constructed as follows:
$[O_i]_F = \{O_j \mid O_j^+ \ge O_i^+ \text{ and } O_j^- \le O_i^-,\ O_j \in O\}.$
The ideal inferiority class of $O_i$ is the set of all objects whose BTID is greater than or equal to that of $O_i$ and whose WTID is less than or equal to that of $O_i$, i.e., the set of all objects inferior to (or as good as) object $O_i$.
Example 4.
To illustrate the above definitions better, we use a tableware color selection problem as an example. There are six tableware colors to pick from B = { B 1 , B 2 , B 3 , B 4 , B 5 , B 6 } , with five corresponding attributes L = { L 1 , L 2 , L 3 , L 4 , L 5 } . Among them, L 2 , L 3 and L 4 are profit attributes, L 1 , L 5 are expense attributes, and  w = { 0.1 , 0.1 , 0.2 , 0.3 , 0.3 } are the weights of these five attributes. The multi-attribute information matrix in Table 10 represents the specific values of this project:
According to the steps of the TOPSIS model and Definition 5, we can calculate the ideal inferiority classes of these six tableware colors as follows:
[ B 1 ] F = { B 1 , B 2 , B 3 , B 5 , B 6 } .
[ B 2 ] F = { B 2 } .
[ B 3 ] F = { B 2 , B 3 } .
[ B 4 ] F = { B 1 , B 2 , B 3 , B 4 , B 5 , B 6 } .
[ B 5 ] F = { B 2 , B 5 } .
[ B 6 ] F = { B 2 , B 3 , B 5 , B 6 } .
Definition 6.
In light of Definition 5, we define a new state set based on the ideal inferiority class as follows:
$G_1 = \left\{O_i \mid \frac{|[O_i]_F|}{n} \ge k_2\right\}$, $G_2 = \left\{O_i \mid \frac{|[O_i]_F|}{n} < k_2\right\}$, $k_2 \in [0.5, 1]$,
where $|[O_i]_F|$ is the cardinality of the ideal inferiority class of $O_i$, and $k_2$ is the preference parameter of the ideal inferiority class decided by decision-makers; we set the value range of $k_2$ to be between 0.5 and 1. $G_1$ represents the “excellent” objects, and $G_2$ represents the “non-excellent” objects. If the cardinality of the inferiority class of $O_i$ divided by the total number of objects is not less than the preference parameter $k_2$, then we divide object $O_i$ into the set of “excellent” objects. The formula for calculating the conditional probability under the ideal inferiority class is $P(\ast \mid [O_i]_F) = \frac{|\ast \cap [O_i]_F|}{|[O_i]_F|}$, $\ast \in \{G_1, G_2\}$.
Example 5.
(Continued from Example 4) Let k 2 = 0.5 . In line with Definition 6, we can obtain the state set of the ideal inferiority class as follows:
G 1 = { B 1 , B 4 , B 6 } , G 2 = { B 2 , B 3 , B 5 } .
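The state-set construction of Definition 6 can be reproduced with a short Python sketch (the dictionary layout and names are ours; the classes are those of Example 4):

```python
def state_sets(classes, k2):
    """Split objects into G1 ('excellent') and G2 per Definition 6:
    O_i is in G1 iff |[O_i]_F| / n >= k2."""
    n = len(classes)
    g1 = [o for o, cls in classes.items() if len(cls) / n >= k2]
    g2 = [o for o in classes if o not in g1]
    return g1, g2

# Ideal inferiority classes from Example 4
classes = {
    "B1": {"B1", "B2", "B3", "B5", "B6"},
    "B2": {"B2"},
    "B3": {"B2", "B3"},
    "B4": {"B1", "B2", "B3", "B4", "B5", "B6"},
    "B5": {"B2", "B5"},
    "B6": {"B2", "B3", "B5", "B6"},
}
g1, g2 = state_sets(classes, k2=0.5)
print(g1)  # ['B1', 'B4', 'B6']
print(g2)  # ['B2', 'B3', 'B5']
```

With $k_2 = 0.5$ and $n = 6$, only the three colors whose inferiority class contains at least three objects enter $G_1$, matching the result above.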
When the ideal inferiority class and state set of each object are known, we calculate the conditional probability of each object in the different states under the ideal inferiority class. In the $G_1$ state, the conditional probabilities of the objects are as follows:
$P(G_1 \mid [B_1]_F) = \frac{|G_1 \cap [B_1]_F|}{|[B_1]_F|} = \frac{|\{B_1, B_4, B_6\} \cap \{B_1, B_2, B_3, B_5, B_6\}|}{|\{B_1, B_2, B_3, B_5, B_6\}|} = \frac{2}{5}.$
$P(G_1 \mid [B_2]_F) = \frac{|G_1 \cap [B_2]_F|}{|[B_2]_F|} = \frac{|\{B_1, B_4, B_6\} \cap \{B_2\}|}{|\{B_2\}|} = 0.$
$P(G_1 \mid [B_3]_F) = \frac{|G_1 \cap [B_3]_F|}{|[B_3]_F|} = \frac{|\{B_1, B_4, B_6\} \cap \{B_2, B_3\}|}{|\{B_2, B_3\}|} = 0.$
$P(G_1 \mid [B_4]_F) = \frac{|G_1 \cap [B_4]_F|}{|[B_4]_F|} = \frac{|\{B_1, B_4, B_6\} \cap \{B_1, B_2, B_3, B_4, B_5, B_6\}|}{|\{B_1, B_2, B_3, B_4, B_5, B_6\}|} = \frac{1}{2}.$
$P(G_1 \mid [B_5]_F) = \frac{|G_1 \cap [B_5]_F|}{|[B_5]_F|} = \frac{|\{B_1, B_4, B_6\} \cap \{B_2, B_5\}|}{|\{B_2, B_5\}|} = 0.$
$P(G_1 \mid [B_6]_F) = \frac{|G_1 \cap [B_6]_F|}{|[B_6]_F|} = \frac{|\{B_1, B_4, B_6\} \cap \{B_2, B_3, B_5, B_6\}|}{|\{B_2, B_3, B_5, B_6\}|} = \frac{1}{4}.$
In the $G_2$ state, the conditional probability of each object under the ideal inferiority class can be obtained in the same way.
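These conditional probabilities can be checked with a small Python helper (names are ours; the sets are those of Examples 4 and 5):

```python
def cond_prob(state, cls):
    """P(state | [O_i]_F) = |state ∩ [O_i]_F| / |[O_i]_F|."""
    return len(state & cls) / len(cls)

G1 = {"B1", "B4", "B6"}
print(cond_prob(G1, {"B1", "B2", "B3", "B5", "B6"}))       # 0.4  (2/5, object B1)
print(cond_prob(G1, {"B1", "B2", "B3", "B4", "B5", "B6"})) # 0.5  (1/2, object B4)
print(cond_prob(G1, {"B2", "B3", "B5", "B6"}))             # 0.25 (1/4, object B6)
```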

3.2.2. The Three-Way Process Based on the Ideal Inferiority Class

For the ideal inferiority class, there is a similar DTRS process. When an object $O_i$ belongs to the state $G_1$, the relative losses of selecting the actions $L_P$, $L_B$ and $L_N$ are 0, $\theta_{[O_i]_F}(B \mid G_1)^*$ and $\theta_{[O_i]_F}(N \mid G_1)^*$, respectively. Likewise, when $O_i$ belongs to $G_2$, the relative losses of selecting the actions $L_P$, $L_B$ and $L_N$ are $\theta_{[O_i]_F}(P \mid G_2)^*$, $\theta_{[O_i]_F}(B \mid G_2)^*$ and 0, respectively. Similar to the ideal superiority class, there is also a reasonable hypothesis with practical significance for the ideal inferiority class:
$0 \le \theta_{[O_i]_F}(B \mid G_1)^* < \theta_{[O_i]_F}(N \mid G_1)^*,$
$0 \le \theta_{[O_i]_F}(B \mid G_2)^* < \theta_{[O_i]_F}(P \mid G_2)^*.$
On the basis of Definitions 5 and 6, the expected losses of dividing $O_i$ into the positive territory, the boundary territory and the negative territory are computed as follows:
$Q(L_P \mid [O_i]_F) = \theta_{[O_i]_F}(P \mid G_2)^* \, P(G_2 \mid [O_i]_F),$
$Q(L_B \mid [O_i]_F) = \theta_{[O_i]_F}(B \mid G_1)^* \, P(G_1 \mid [O_i]_F) + \theta_{[O_i]_F}(B \mid G_2)^* \, P(G_2 \mid [O_i]_F),$
$Q(L_N \mid [O_i]_F) = \theta_{[O_i]_F}(N \mid G_1)^* \, P(G_1 \mid [O_i]_F).$
In Table 11, we give the aggregate relative losses of O i based on G1 and G2.
In Table 11, $v_{\max} = \max_{j \in \mu} v_{ij}$, $v_{\min} = \min_{j \in \mu} v_{ij}$, and the risk avoidance coefficient $\tau$ complies with the requirement $\tau \in (0, 0.5)$.
In light of the aggregate relative loss functions exhibited in Table 11, the thresholds $\alpha_i^F$, $\beta_i^F$ and $\gamma_i^F$ of $O_i$ under the ideal inferiority relation $F$ can be calculated as below:
$\alpha_i^F = \frac{\sum_j w_j (1-\tau)(v_{\max} - v_{ij})}{\sum_j w_j (1-\tau)(v_{\max} - v_{ij}) + \sum_j w_j \tau (v_{ij} - v_{\min})},$
$\beta_i^F = \frac{\sum_j w_j \tau (v_{\max} - v_{ij})}{\sum_j w_j \tau (v_{\max} - v_{ij}) + \sum_j w_j (1-\tau)(v_{ij} - v_{\min})},$
$\gamma_i^F = \frac{\sum_j w_j (v_{\max} - v_{ij})}{v_{\max} - v_{\min}}.$
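A minimal Python sketch of the three threshold formulas, assuming a single scalar risk avoidance coefficient $\tau$ and weights summing to 1 (the numeric inputs below are illustrative, not taken from the paper):

```python
def thresholds(w, v, tau, v_max, v_min):
    """Thresholds alpha_i^F, beta_i^F, gamma_i^F of the ideal
    inferiority model. w: attribute weights (assumed to sum to 1),
    v: normalized evaluation values of O_i, tau: risk avoidance
    coefficient (one scalar for all attributes here)."""
    up = sum(wj * (v_max - vj) for wj, vj in zip(w, v))    # Σ w_j (v_max - v_ij)
    down = sum(wj * (vj - v_min) for wj, vj in zip(w, v))  # Σ w_j (v_ij - v_min)
    alpha = (1 - tau) * up / ((1 - tau) * up + tau * down)
    beta = tau * up / (tau * up + (1 - tau) * down)
    gamma = up / (v_max - v_min)   # valid when Σ w_j = 1
    return alpha, beta, gamma

# Illustrative values (not from the paper): two attributes, tau = 0.2
a, b, g = thresholds([0.5, 0.5], [0.4, 0.6], 0.2, 1.0, 0.0)
print(a, b, g)  # alpha ≈ 0.8, beta ≈ 0.2, gamma ≈ 0.5
```

Note that with $\tau < 0.5$ the sketch yields $\beta < \gamma < \alpha$, which is the ordering required for the three-way rules below.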
Finally, we give the TWD rules of the ideal inferiority class directly as follows:
$(P7)$ If $P(G_1 \mid [O_i]_F) \ge \alpha_i^F$, then $O_i \in POS(G_1)$; $(B7)$ If $\beta_i^F < P(G_1 \mid [O_i]_F) < \alpha_i^F$, then $O_i \in BND(G_1)$; $(N7)$ If $P(G_1 \mid [O_i]_F) \le \beta_i^F$, then $O_i \in NEG(G_1)$.
Example 6.
(Continued from Examples 4 and 5) In the tableware color selection problem, we set the risk-aversion coefficients of the five attributes as $\tau = \{0.2, 0.2, 0.2, 0.2, 0.2\}$; next, we take object $B_1$ as an example and give the detailed steps of the three-way process based on the ideal inferiority class.
In the ideal inferiority class model, object B 1 is in two different states, and the relative loss functions of the three different behaviors have been obtained in Table 12, so now we solve the respective thresholds of object B 1 .
$\alpha_1^F = \frac{0.0762}{0.0762 + 0.0158} = 0.8283$, $\beta_1^F = \frac{0.0191}{0.0191 + 0.0631} = 0.2323$, $\gamma_1^F = \frac{0.0953}{0.1742} = 0.5471.$
In Example 5, we calculated $P(G_1 \mid [B_1]_F) = 0.4000$; since the conditional probability of object $B_1$ is less than $\alpha_1^F$ and greater than $\beta_1^F$, by the rules of (P7)–(N7), object $B_1$ is divided into the boundary territory. Similarly, the thresholds and classification results of the other objects are given in Table 13:

3.3. Description of the above Two Models

There are similarities and differences between the two TWD models constructed using the TOPSIS method (the TWD ideal superiority model and the TWD ideal inferiority model). Differences: the definitions of the two models are opposite, and because the ideal classes are defined differently, the state sets of the two models are also defined differently. Similarities: both models are defined using the core ideas of TOPSIS, and both use relative loss functions.
All objects are classified into three regions according to the relationship between the thresholds $\alpha$, $\beta$, $\gamma$ and the conditional probabilities $P(\ast \mid [O_i])$. Regarding the ordering of objects, objects in each region are sorted according to their expected losses, with objects having lower expected losses at the top of the list. The expected loss represents the cost of an action, so the lower the loss of performing an action, the better: the action is a possible desirable decision. Finally, based on $POS(\ast) \succ BND(\ast) \succ NEG(\ast)$, where $\ast \in \{T_1, T_2, G_1, G_2\}$ and $\succ$ is the superiority relationship, we can obtain the ranking of all objects.

4. An Application of the Proposed TWD-MADM Approach

We apply the new TWD models established in Section 3 to a concrete application example, namely the selection of building appearances. The dataset is selected from the UCI database. Meanwhile, we provide decision algorithms based on the models, as well as the time complexity of those algorithms.

4.1. Introduction of the Problem

In our daily lives, we see a wide range of building appearances. Buildings come in a variety of shapes and sizes, some for functional purposes, some for aesthetic reasons and some to represent regional culture. Real estate decision-makers must select the best of numerous building appearance options to develop in a certain location by taking into account a wide range of features, which is an MADM problem.
Buildings differ with respect to glazing area, glazing area distribution and orientation, among other parameters. We simulate various settings as functions of the aforementioned characteristics to obtain 768 building shapes. The dataset comprises 768 samples and 8 features. Let $I = (O, S)$ be an information system for building shape selection, where $O_i \in O$ ($i \in \eta$) stands for the set of building shapes and $S_j \in S$ ($j \in \mu$) represents the set of building attributes. In this example, we have a total of eight different building attributes: $S_1$ stands for “Relative Compactness”, $S_2$ for “Surface Area”, $S_3$ for “Wall Area”, $S_4$ for “Roof Area”, $S_5$ for “Overall Height”, $S_6$ for “Orientation”, $S_7$ for “Glazing Area” and $S_8$ for “Glazing Area Distribution”. The original data for building shape selection are shown in Table 14.

4.2. The Decision-Making Algorithms

In order to illustrate the correctness of the application of our proposed models in MADM, Algorithms 1 and 2 show the detailed decision-making processes of the building appearance selection problem.
The decision process of Algorithm 1 based on the ideal superiority class is as follows:
Algorithm 1: Decision-making algorithm based on the ideal superiority class
Input: information system $I$, weight $W$, risk avoidance coefficient $\tau$.
Output: The optimal building shape and the order of all building shapes.
Step 1: Choose different normalization formulas to normalize all of the evaluation values based on the nature of the attributes.
Step 2: Calculate the BTID and the WTID of each object by using Formula (10).
Step 3: Find the ideal superiority class of each object by Formula (12).
Step 4: For each object, determine whether it is in the T 1 or T 2 state by Definition 3.
Step 5: Compute the conditional probability of each object by Remark 2.
Step 6: Calculate the loss function θ by using the evaluation value v i j , weight W and risk avoidance coefficient τ in the information matrix, and then use Formulas (19)–(21) to obtain the three thresholds α E , β E and  γ E .
Step 7: Divide all objects into three regions according to (P6)–(N6), the relationship between thresholds and conditional probabilities.
Step 8: Use Formulas (16)–(18) to obtain the expected loss of each object.
Step 9: Sort objects in each region according to the expected losses, and then derive the optimal building shape and the order of all shapes based on $POS(\ast) \succ BND(\ast) \succ NEG(\ast)$.
Remark 4.
Analysis of the time complexity of each step of the above algorithm: Step 1 normalizes each evaluation value, with a time complexity of $O(n)$. Step 2 computes the BTID and WTID of each object, with a time complexity of $O(n)$. Step 3 determines the ideal superiority class of each object, with a time complexity of $O(n^2)$. Step 4 determines whether each object is in the $T_1$ or $T_2$ state, with a time complexity of $O(n)$. Step 5 calculates the conditional probability of each object under the ideal superiority class, with a time complexity of $O(n)$. Step 6 computes the thresholds of each object, with a time complexity of $O(n)$. Step 7 divides all objects into three regions according to the TWD rules, with a time complexity of $O(n)$. Step 8 computes the expected loss of each object, with a time complexity of $O(n)$. Step 9 sorts all objects according to $POS(\ast) \succ BND(\ast) \succ NEG(\ast)$, with a time complexity of $O(n \log n)$. Therefore, the time complexity of the entire algorithm is $O(n^2)$.
The decision process of Algorithm 2 based on the ideal inferiority class is as follows:
Algorithm 2: Decision-making algorithm based on the ideal inferiority class
Input: information system $I$, weight $W$, risk avoidance coefficient $\tau$.
Output: The optimal building shape and the order of all building shapes.
Step 1: Choose different normalization formulas to normalize all of the evaluation values based on the nature of the attributes.
Step 2: Compute the BTID and the WTID of each object by using Formula (10).
Step 3: Find the ideal inferiority class of each object by using Formula (23).
Step 4: For each object, determine whether it is in the G 1 or G 2 state by Definition 6.
Step 5: Compute the conditional probability of each object by the conditional probability formula of the ideal inferiority class, $P(\ast \mid [O_i]_F)$, $i \in \eta$.
Step 6: Calculate the loss function θ , and then use Formulas (27)–(29) to compute the three thresholds α F , β F and  γ F .
Step 7: Divide all objects into three regions according to (P7)–(N7).
Step 8: Obtain the expected loss of each object via Formulas (24)–(26).
Step 9: Sort objects in each region according to the expected losses, and then derive the optimal building shape and the order of all shapes based on $POS(\ast) \succ BND(\ast) \succ NEG(\ast)$.
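Steps 3–7 of Algorithm 2 can be condensed into a short Python sketch. This is a simplified illustration under stated assumptions: the BTID (`d_plus`) and WTID (`d_minus`) values are taken as given rather than computed from Formula (10), a single scalar `tau` is used for all attributes, the per-object $v_{\max}$/$v_{\min}$ convention of the threshold formulas is followed, and the expected-loss sorting of Steps 8–9 is omitted; all names and inputs are ours.

```python
def algorithm2(d_plus, d_minus, V, w, tau, k2):
    """Sketch of Algorithm 2: classify objects with the
    ideal inferiority class model (Steps 3-7)."""
    n = len(d_plus)
    # Step 3: ideal inferiority classes (Definition 5):
    # O_j is inferior to O_i if d+_j >= d+_i and d-_j <= d-_i.
    classes = [{j for j in range(n)
                if d_plus[j] >= d_plus[i] and d_minus[j] <= d_minus[i]}
               for i in range(n)]
    # Step 4: state sets (Definition 6): O_i is "excellent" (G1)
    # if |[O_i]_F| / n >= k2.
    G1 = {i for i in range(n) if len(classes[i]) / n >= k2}
    # Step 5: conditional probabilities P(G1 | [O_i]_F).
    probs = [len(G1 & classes[i]) / len(classes[i]) for i in range(n)]
    # Steps 6-7: thresholds (Formulas (27)-(29)) and rules (P7)-(N7).
    regions = []
    for i in range(n):
        vmax, vmin = max(V[i]), min(V[i])
        up = sum(wj * (vmax - vij) for wj, vij in zip(w, V[i]))
        down = sum(wj * (vij - vmin) for wj, vij in zip(w, V[i]))
        alpha = (1 - tau) * up / ((1 - tau) * up + tau * down)
        beta = tau * up / (tau * up + (1 - tau) * down)
        if probs[i] >= alpha:
            regions.append("POS")
        elif probs[i] <= beta:
            regions.append("NEG")
        else:
            regions.append("BND")
    return G1, probs, regions

# Illustrative inputs (ours, not from the paper): 4 objects, 2 attributes.
d_plus = [0.1, 0.2, 0.3, 0.4]   # BTID: smaller = closer to the best ideal
d_minus = [0.4, 0.3, 0.2, 0.1]  # WTID: larger = farther from the worst ideal
V = [[0.9, 0.8], [0.7, 0.6], [0.5, 0.4], [0.3, 0.2]]
G1, probs, regions = algorithm2(d_plus, d_minus, V, [0.5, 0.5], 0.2, 0.5)
print(G1, regions)  # G1 = {0, 1, 2}; the last object falls in NEG, the rest in BND
```

On this toy chain of objects, the best object dominates all others, so its inferiority class contains everything, while the worst object's class contains only itself and it is rejected by rule (N7).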
Remark 5.
Similar to the analysis of Algorithm 1, we obtain that the time complexity of Algorithm 2 is also O ( n 2 ) .

4.3. An Application Example

We use our proposed algorithms to address the problem of architectural shape selection in order to demonstrate the practicality of our proposed models. Consider a scenario where a developer purchases a block of land with the intention of developing a retro-style residential district comprised entirely of single-family apartments. Since the overall height of the buildings must be consistent for coordination and aesthetics, we need not consider $S_5$ (Overall Height). Because of the retro style, $S_7$ (Glazing Area) and $S_8$ (Glazing Area Distribution) are not taken into account either. Meanwhile, we randomly select eight objects from Table 14 for the sake of simplicity. Consequently, we use $O = \{O_1, O_2, O_3, O_4, O_5, O_6, O_7, O_8\}$ to represent the set of eight building shapes, and $S = \{S_1, S_2, S_3, S_4, S_5\}$ to represent the set of five building attributes (i.e., Relative Compactness, Surface Area, Wall Area, Roof Area and Orientation), while the weights of the five attributes above are 0.2, 0.1, 0.3, 0.2 and 0.2, respectively. Among all attributes, $S_3$ is an expense attribute, and the rest are profit attributes. The risk-aversion coefficient for each attribute is 0.35, and the overall dataset is shown in Table 15.
First, we must normalize the values in Table 15 using the following two formulas:
$v_{ij} = \frac{u_{ij}}{\max_{i \in \eta} u_{ij}}, \quad \text{if } S_j \text{ is a profit attribute};$
$v_{ij} = \frac{\min_{i \in \eta} u_{ij}}{u_{ij}}, \quad \text{if } S_j \text{ is an expense attribute}.$
The normalized results are shown in Table 16.
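The two normalization formulas can be sketched in Python as follows (a minimal illustration; the toy matrix is ours, not from Table 15):

```python
def normalize(U, profit):
    """Normalize a raw matrix U column-wise: profit attributes by
    u_ij / max_i u_ij, expense attributes by min_i u_ij / u_ij."""
    n, m = len(U), len(U[0])
    V = [[0.0] * m for _ in range(n)]
    for j in range(m):
        col = [U[i][j] for i in range(n)]
        hi, lo = max(col), min(col)
        for i in range(n):
            V[i][j] = U[i][j] / hi if profit[j] else lo / U[i][j]
    return V

# Toy example: first column is a profit attribute, second is an expense attribute.
V = normalize([[2.0, 4.0], [4.0, 2.0]], profit=[True, False])
print(V)  # [[0.5, 0.5], [1.0, 1.0]]
```

Both formulas map every column into $(0, 1]$, with the best value of each attribute mapped to 1.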
(1) The decision-making process using the TWD model based on the ideal superiority class.
By means of the TOPSIS method and Definition 1, we can obtain the following list of ideal superiority classes for all objects:
[ O 1 ] E = { O 1 , O 2 , O 4 , O 5 , O 6 , O 7 }
[ O 2 ] E = { O 2 , O 7 }
[ O 3 ] E = { O 1 , O 2 , O 3 , O 4 , O 5 , O 6 , O 7 }
[ O 4 ] E = { O 1 , O 2 , O 4 , O 5 , O 6 , O 7 }
[ O 5 ] E = { O 5 , O 7 }
[ O 6 ] E = { O 6 , O 7 }
[ O 7 ] E = { O 7 }
[ O 8 ] E = { O 1 , O 2 , O 4 , O 5 , O 6 , O 7 , O 8 } .
After obtaining the ideal superiority class of each object, we know how many objects are superior to each object. The next step is to compute the state set by utilizing our newly established state-set method, dividing all objects into the two states $T_1$ and $T_2$. The specific steps are given in Step 4 of Algorithm 1, and the results are as follows (the preference parameter is $k_1 = 0.2$ in this project):
T 1 = { O 7 }
T 2 = { O 1 , O 2 , O 3 , O 4 , O 5 , O 6 , O 8 } .
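Definition 3 appears earlier in the paper; consistent with the state sets obtained here, an object belongs to $T_1$ when the relative cardinality of its ideal superiority class does not exceed $k_1$. Under that assumption, the split above can be checked in a few lines of Python (the class cardinalities are read off the list of superiority classes just given):

```python
# |[O_i]_E| for the eight building shapes listed above
sizes = {"O1": 6, "O2": 2, "O3": 7, "O4": 6,
         "O5": 2, "O6": 2, "O7": 1, "O8": 7}
n, k1 = len(sizes), 0.2
# Assumed T1 condition: |[O_i]_E| / n <= k1 (few objects superior to O_i)
T1 = sorted(o for o, s in sizes.items() if s / n <= k1)
T2 = sorted(o for o in sizes if o not in T1)
print(T1)  # ['O7']
```

With $n = 8$ and $k_1 = 0.2$, only $O_7$ (class cardinality 1, ratio 0.125) satisfies the condition, matching $T_1 = \{O_7\}$ above.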
Knowing the set of ideal superiority classes and state set, we can calculate the conditional probabilities for all building shapes; the specific calculation results are shown in Table 17.
According to Table 7, we work out the relative loss functions of each building shape taking the three actions in the two different states, and then calculate the thresholds $\alpha^E$, $\beta^E$ and $\gamma^E$ of each building shape. By using the rules of (P6)–(N6) and comparing the conditional probability of each building shape with its three thresholds, each building shape can be divided into a specific domain. The specific thresholds and partition results of the eight building shapes are shown in Table 18.
In the end, the objects in each domain are ordered based on their expected losses: the lower the expected loss, the higher the ranking. The ranking between domains can be obtained from the positive territory being superior to the boundary territory, and the boundary territory being superior to the negative territory. The specific ranking results are shown in Table 19.
From Table 19, we know that the building shape classified into the positive domain of TWD is $O_7$, which is the optimal building shape in this project. For $O_2$, $O_5$ and $O_6$, divided into the boundary domain of TWD, further consideration and evaluation are needed before making a decision; at the same time, $O_2$ is given priority over $O_5$, and $O_5$ over $O_6$. In contrast, $O_3$, $O_8$, $O_1$ and $O_4$ are divided into the negative domain of TWD and can be discarded. Hence, under the ideal superiority class model, developers can give priority to building shape $O_7$ in this project.
(2) The decision-making process using the TWD model based on the ideal inferiority class
Through the TOPSIS method and Definition 4, we can obtain the following list of ideal inferiority classes for all objects:
[ O 1 ] F = { O 1 , O 3 , O 4 , O 8 }
[ O 2 ] F = { O 1 , O 2 , O 3 , O 4 , O 8 }
[ O 3 ] F = { O 3 }
[ O 4 ] F = { O 1 , O 3 , O 4 , O 8 }
[ O 5 ] F = { O 1 , O 3 , O 4 , O 5 , O 8 }
[ O 6 ] F = { O 1 , O 3 , O 4 , O 6 , O 8 }
[ O 7 ] F = { O 1 , O 2 , O 3 , O 4 , O 5 , O 6 , O 7 , O 8 }
[ O 8 ] F = { O 8 } .
Having obtained the ideal inferiority class of each object, we need to compute the state under the ideal inferiority class by using Step 4 in Algorithm 2, and then divide these eight objects into two states, i.e., G1 or G2 (the preference parameter k 2 = 0.5 in this project).
G 1 = { O 1 , O 2 , O 4 , O 5 , O 6 , O 7 }
G 2 = { O 3 , O 8 } .
Once the set of ideal inferiority classes and state set are known, we can separately compute the conditional probabilities for these eight building shapes; the results are shown in Table 20.
Likewise, according to Table 11, we can work out the relative loss functions of taking the three actions when an object is in the $G_1$ or $G_2$ state; the thresholds of each object are then obtained using Formulas (27)–(29). Then, we can divide each object into a definite domain using the division rules of TWD, namely (P7)–(N7). Table 21 shows the thresholds and decision rules for the eight building shapes under the TWD ideal inferiority model.
Finally, we categorize the eight objects into three domains. In each domain, we rank the objects by their expected losses: the smaller the expected loss, the higher the object's position. The ordering between domains is given by $POS(\ast) \succ BND(\ast) \succ NEG(\ast)$, and the final ranking results are shown in Table 22.
From Table 22, we know that O 7 , O 2 and O 6 are in the positive domain, and O 7 is superior to O 2 and O 2 is superior to O 6 , thus the optimal building shape under the TWD ideal inferiority class model is also O 7 . In the boundary domain, there are three objects, O 1 , O 4 and O 5 . In the negative domain, there are two objects, O 3 and O 8 .
Compared with the TWD ideal superiority model, we can analyze three aspects: the optimal object, the partial ordering and the classification of objects. First of all, the optimal building shape for both models is $O_7$, which shows the two models are consistent in selecting the optimal object. Secondly, the partial orderings of the two models are consistent to a certain extent; for example, the ordering $O_7 \succ O_2 \succ O_6$ holds in both models, as do $O_1 \succ O_4$ and $O_3 \succ O_8$. In the end, for the classification of objects, some objects are divided into the same domain in the two models: for instance, $O_7$ is divided into the positive domain, $O_5$ into the boundary domain, and $O_3$ and $O_8$ into the negative domain. The analysis and comparison of the above three aspects show that the two new models proposed in this paper are consistent to a certain extent.

4.4. Comparison Analysis and Spearman’s Rank Correlation Analysis

In the following, we compare and analyze the ranking results of the two proposed models with five other MADM methods.

4.4.1. Comparison Analysis of Different MADM Approaches

In order to verify the effectiveness and reasonableness of the models we proposed, we take the example in Section 4.3 to compare and analyze the ranking results of our models with five other MADM methods: TOPSIS [2], PROMETHEE [22], Ye’s method [11], Zhang’s method [23] and Jia’s method [3]. The specific ranking results obtained from the above methods are shown in Table 23 below:
In Table 23, IS represents the TWD ideal superiority model, and IF represents the TWD ideal inferiority model. For these seven MADM methods, we implement a comprehensive analysis and discussion from three perspectives, namely overall ranking analysis, partial ranking comparison and selection of the optimal object.
(a) From the perspective of the overall ranking, we can conclude that all seven MADM methods give ranking results for the eight objects, but compared with the traditional TOPSIS and PROMETHEE methods, the two proposed methods give not only the ranking of the objects but also their classification. Further, we can see that Jia's method and the PROMETHEE method yield identical ranking results, $O_7 \succ O_2 \succ O_5 \succ O_6 \succ O_1 \succ O_4 \succ O_8 \succ O_3$, while the other MADM methods all have sorting differences. Our proposed IS method and Zhang's method have the same ordering of the first four objects.
(b) From the perspective of the partial ranking comparison, the ordering of $O_1$ and $O_4$ is the same in all MADM methods: $O_1 \succ O_4$. In six of the seven MADM methods (all but Ye's method), $O_2$ is ranked second, after the best option $O_7$; in Ye's method, $O_5$ is ranked second. Hence, $O_2$ and $O_5$ are divided into either the positive territory or the boundary territory. Furthermore, $O_3$, $O_4$ and $O_8$ are the worst selections, sorted at the bottom in all methods. Moreover, $O_3$ is ranked last in TOPSIS, PROMETHEE, Zhang's and Jia's methods, $O_8$ is ranked last in IF and Ye's, and $O_4$ is ranked last in IS. This means the approximate range of sorting positions for all objects is the same.
(c) From the perspective of the selection of the optimal object, it is not difficult to find that the best choice of our two proposed methods and of the other MADM methods is $O_7$; in our proposed methods, $O_7$ is classified into the positive domain. This shows that our proposed methods and the other methods are consistent in the selection of the best object, and also shows the feasibility of our proposed methods.
In general, Table 23 shows that our proposed models are consistent with the five other MADM methods, and our methods provide not only the sorting results but also the classification of each object. From the overall analysis and discussion, our proposed methods have certain feasibility and rationality.

4.4.2. Spearman’s Rank Correlation Analysis

In order to analyze and compare the correlations and differences between the selected MADM methods, we use Spearman's rank correlation coefficient (SRCC) as an indicator. The SRCC is calculated as follows:
$SRCC = 1 - \frac{6 \sum_{i=1}^{n} q_i^2}{n^3 - n},$
where $n$ is the total number of objects and $q_i = x_i - y_i$, in which $x_i$ is the ranking position of $O_i$ in one MADM method and $y_i$ is its ranking position in another MADM method. The larger the SRCC of two ranking results obtained from the same data, the stronger the correlation and consistency between the two decision-making methods. On the basis of Table 23, we can calculate the SRCC between any two different methods, as shown in Table 24.
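The SRCC computation can be sketched in a few lines of Python (function and variable names are ours):

```python
def srcc(x, y):
    """Spearman's rank correlation coefficient between two lists of
    ranking positions: SRCC = 1 - 6 * sum(q_i^2) / (n^3 - n), q_i = x_i - y_i."""
    n = len(x)
    q2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return 1 - 6 * q2 / (n ** 3 - n)

print(srcc([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0  (identical rankings)
print(srcc([1, 2, 3, 4], [4, 3, 2, 1]))  # -1.0 (fully reversed rankings)
```

Identical rankings give the maximal value 1, and fully reversed rankings give the minimal value −1, which is the sense in which a larger SRCC indicates more consistent methods.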
To visualize the SRCC values and enhance the readability of the data, this paper uses a heatmap: a matrix in which color changes represent the magnitude of the correlation coefficient, thus showing the correlation between different indicators and different samples. The heatmap of Table 24 is shown in Figure 1.
From Table 24, the SRCC values of our proposed IS method with the IF and TOPSIS methods are low, only 0.5714 and 0.3810, respectively; however, its SRCC values with PROMETHEE, Ye's method, Zhang's method and Jia's method are all greater than 0.648, indicating that the IS method has certain rationality and feasibility. For our proposed IF method, the SRCC values with the other five MADM methods are all greater than 0.648; the lowest value is 0.7619, with Ye's method, and the values with the TOPSIS and Zhang's methods are as high as 0.9048, which shows that the IF method has strong consistency with all the other MADM methods. Further, the two proposed methods differ in their SRCC values: compared with the IS method, the IF method has higher consistency with the other MADM methods. On the whole, the proposed methods have high consistency and similarity with other MADM methods.

4.4.3. Other Example Analysis

In order to verify the high reliability and practicability of our proposed model, we additionally cite two sets of data from [3,16]; then, the results of the proposed method are compared and analyzed with those of other MADM methods.
Example 7.
A classic corporate investment problem. There are eight investment objects O = { O 1 , O 2 , O 3 , O 4 , O 5 , O 6 , O 7 , O 8 } and five attributes S = { S 1 , S 2 , S 3 , S 4 , S 5 } , i.e., expected benefits, environmental influence, market saturation, social benefits and energy conservation, respectively. Among them S 2 and S 3 are cost attributes, and S 1 , S 4 and S 5 are benefit attributes. The weight vectors of these five attributes are w = { 0.3 , 0.1 , 0.3 , 0.2 , 0.1 } , and the risk avoidance coefficient vectors are τ = { 0.1 , 0.1 , 0.1 , 0.1 , 0.1 } . Consequently, the specific relevant data of eight investment objects over five attributes are presented in Table 25.
Using the data in Table 25, we conduct a comprehensive analysis and discussion of the results obtained by our two proposed methods and those obtained by other MADM methods. The ranking results for these eight objects in different MADM methods are presented in Table 26, and the SRCC values for each method in relation to the ranking results are shown in Table 27. In order to visualize SRCC, the heatmap of SRCC values in Table 27 is shown in Figure 2.
Remark 6.
From the ranking results in Table 26, the optimal object selected by our proposed methods and the other MADM methods is identical: $O_7$. The worst objects are $O_3$ and $O_8$: in IS, TOPSIS, PROMETHEE and Zhang's, the worst object is $O_3$, while in IF, Ye's and Jia's, the worst object is $O_8$. Moreover, objects $O_2$ and $O_4$ cannot be prioritized or classified in Ye's method, but in our proposed IS method we conclude that $O_2$ is better than $O_4$, with both classified into the boundary domain; likewise, in our proposed IF method we conclude that $O_4$ is better than $O_2$, with both again in the boundary domain. This shows that our methods have sorting and classification advantages over Ye's. From Table 27, we can find that the SRCC values of the IS and IF methods with the other existing MADM methods are all greater than 0.648, which shows that our methods are consistent and feasible with respect to existing MADM methods.
Example 8.
An energy selection program: there are six energy projects O = { O 1 , O 2 , O 3 , O 4 , O 5 , O 6 } and four attributes S = { S 1 , S 2 , S 3 , S 4 } . Among them S 2 is a cost attribute, and S 1 , S 3 and S 4 are benefit attributes. The weight vectors of these four attributes are w = { 0.2 , 0.2 , 0.3 , 0.3 } . The specific data of six energy projects over four attributes are presented in Table 28.
In this energy project example, we compare our two proposed methods with three other MADM methods: two classic methods, TOPSIS and PROMETHEE, and the state-of-the-art Zhang's method. The ranking results of the five methods are shown in Table 29.
From the results in Table 29, the optimal project determined by our proposed methods and the three existing methods is identical: O 6 . Moreover, from the perspective of overall ranking results, the ranking positions of objects in our method are generally similar to other methods, which implies that the proposed methods are credible and reasonable. To more clearly illustrate the connection and consistency between our proposed method and the TOPSIS, PROMETHEE and Zhang’s methods, we calculate the SRCC between any two MADM methods in Table 30 and Figure 3, and provide statistical significance ranking results for different methods.

5. Experiment Analysis

In this section, we conduct experiment analyses on the adjustable parameters of our proposed models, including the preference parameter $k_1$ in the ideal superiority class model, the preference parameter $k_2$ in the ideal inferiority class model and the risk-aversion factor $\tau$. Since changing the value of any parameter can change the ranking and classification results, it is necessary to analyze and discuss the influence of the parameter values on the decision results. In the following, we continue to use the example in Section 4.3 to conduct experiments; the classification and ranking results of the two models are shown by varying the parameters $k_1$ and $k_2$ in steps of 0.05 and the risk-aversion factor $\tau$ in steps of 0.1.

5.1. Analysis of the Preference Parameter k 1 and the Risk Aversion Factor τ in the Ideal Superiority Class Model

In each model, we have two variable parameters that are determined by the decision-maker according to the experimental situation. If the two parameters change at the same time, there will be countless possibilities. Therefore, we use the method of controlling variables to implement the experiment.
(1) In the first case, we fix the risk-aversion coefficient of each attribute and adjust the value of k 1 to obtain the following classification and sorting results for the eight objects in the TWD ideal superiority model. According to Definition 3, k 1 ranges from 0 to 0.5 and is varied with a step size of 0.05. To present the results more clearly and intuitively, we show the sorting results for the different values of k 1 in graphical form below.
Remark 7.
From Table 31 and Figure 4 and Figure 5, we find that when k 1 is less than or equal to 0.1, no object meets the membership condition, so the T 1 state set is empty. The conditional probabilities of the eight objects are therefore all 0, every object is divided into the negative domain, and the ranking result is O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8 . In this project, because the number of objects is small, a value of k 1 less than or equal to 0.1 is clearly unreasonable, since the objects cannot be meaningfully distinguished and classified. When k 1 = 0.15 or k 1 = 0.2 , the results are the same and the broken-line trajectories of the two values coincide. In this range, an object satisfies the condition of the T 1 state set when its number of ideal superiority classes is less than 2; the set is non-empty (it contains O 7 ), and the conditional probability of every object is nonzero. The final ranking and classification results are highly consistent with the other MADM methods: the ranking result is O 7 O 2 O 5 O 6 O 3 O 8 O 1 O 4 , with O 7 in the positive domain, objects O 2 , O 5 and O 6 in the boundary domain, and objects O 1 , O 3 , O 4 and O 8 in the negative domain. Furthermore, when k 1 is in [0.25, 0.5], the classification and ranking results are identical, which means that the results stabilize once k 1 reaches a certain threshold. The stabilized ranking result is O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3 , with O 1 , O 2 , O 4 , O 5 , O 6 , O 7 and O 8 in the positive domain and O 3 in the boundary domain.
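The division reported above for k 1 = 0.15 can be reproduced from the conditional probabilities in Table 17 and the per-object thresholds in Table 18, using the standard TWD rules (accept when P ≥ α, reject when P ≤ β, defer otherwise):

```python
def three_way_classify(p, alpha, beta):
    # Standard TWD division rules: accept, reject, or defer
    if p >= alpha:
        return "POS"
    if p <= beta:
        return "NEG"
    return "BND"

# (conditional probability, alpha, beta) per object, from Tables 17 and 18
data = {
    "O1": (0.1667, 0.5862, 0.2912), "O2": (0.5000, 0.5075, 0.2300),
    "O3": (0.1429, 0.6741, 0.3749), "O4": (0.1667, 0.5862, 0.2912),
    "O5": (0.5000, 0.6243, 0.3251), "O6": (0.5000, 0.5814, 0.2871),
    "O7": (1.0000, 0.3577, 0.1390), "O8": (0.1429, 0.4673, 0.2028),
}
regions = {o: three_way_classify(*v) for o, v in data.items()}
print(regions)  # O7 -> POS; O2, O5, O6 -> BND; O1, O3, O4, O8 -> NEG
```

The output agrees with the decision rules row of Table 18.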
(2) In the second case, we fix the preference parameter k 1 and adjust the value of the risk-aversion coefficient τ to obtain the following classification and sorting results for the eight objects in the TWD ideal superiority model.
Remark 8.
From Table 32 and Figure 6 and Figure 7, we find that, regardless of the value of τ, O 7 is the optimal object and lies in the positive domain. When τ = 0 and τ = 0.1, the classification in Figure 7 is the same: O 7 is in the positive domain, and objects O 1 , O 2 , O 3 , O 4 , O 5 , O 6 , O 8 are in the boundary domain. However, the rankings in Figure 6 differ: for τ = 0, the ordering is O 7 O 1 O 2 O 3 O 4 O 5 O 6 O 8 , while for τ = 0.1, it is O 7 O 1 O 4 O 8 O 3 O 2 O 5 O 6 . In contrast to the above, when τ = 0.4 and τ = 0.5, the objects are ranked identically but classified differently: for τ = 0.4, O 5 and O 6 are classified into the boundary domain, but for τ = 0.5, O 5 and O 6 are divided into the positive domain.

5.2. Analysis of the Preference Parameter k 2 and the Risk Aversion Factor τ in the Ideal Inferiority Class Model

(1) In the first case, we fix the risk-aversion coefficient of each attribute and adjust the value of k 2 to obtain the following classification and sorting results for the eight objects in the TWD ideal inferiority model. According to Definition 6, k 2 ranges from 0.5 to 1 and is varied with a step size of 0.05. To present the results more clearly and intuitively, we show the sorting results for the different values of k 2 in graphical form below.
Remark 9.
According to Table 33 and Figure 8 and Figure 9, we derive the following. When k 2 = 0.50 , the sorting result is O 7 O 2 O 6 O 1 O 4 O 5 O 3 O 8 , in which the positive domain contains objects O 2 , O 6 and O 7 , the boundary domain contains objects O 1 , O 4 and O 5 , and the negative domain contains objects O 3 and O 8 . A value of k 2 = 0.5 means that the number of ideal inferiority classes of an object must be greater than or equal to half of the total number of objects before the object is classified into G 1 . For this project, objects O 1 , O 2 , O 4 , O 5 , O 6 and O 7 satisfy the condition and are divided into the G 1 state set. In the ideal inferiority class model, the value k 2 = 0.50 is therefore reasonable and feasible, being neither too harsh nor too loose. When k 2 is between 0.55 and 0.60, the ranking results are the same: O 7 O 1 O 3 O 4 O 8 O 6 O 5 O 2 , with only O 7 in the positive domain and the remaining objects in the negative domain. When k 2 is greater than or equal to 0.65 in the TWD ideal inferiority model, all objects are divided into the negative domain, and the sorting result is O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7 . In this example, when k 2 = 0.65 , an object can be divided into G 1 only if its number of ideal inferiority classes is at least six, and among the eight objects only O 7 satisfies this condition. Therefore, the ranking and classification results remain consistent for all larger values of k 2 .
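The membership rule described in this remark — an object enters G 1 when its number of ideal inferiority classes is at least k 2 times the total number of objects — can be sketched as follows. The class sizes below are hypothetical illustrative values (not the ones computed in the paper), chosen only so that the G 1 membership for k 2 = 0.50 and k 2 = 0.65 matches the pattern described above:

```python
# Hypothetical ideal-inferiority-class sizes for the eight objects (illustrative only)
counts = {"O1": 4, "O2": 4, "O3": 0, "O4": 4, "O5": 4, "O6": 5, "O7": 7, "O8": 0}
n = len(counts)

for step in range(10, 21):  # k2 = 0.50, 0.55, ..., 1.00
    k2 = round(0.05 * step, 2)
    g1 = sorted(o for o, c in counts.items() if c >= n * k2)  # membership rule
    print(f"k2 = {k2:.2f}: G1 = {g1}")
# k2 = 0.50 puts O1, O2, O4, O5, O6, O7 in G1; by k2 = 0.65 only O7 remains
```

As k 2 grows, the membership threshold n·k 2 rises, so G 1 shrinks monotonically; this mirrors the emptying of the positive domain observed in Table 33.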
(2) In the second case, we fix the preference parameter k 2 and adjust the value of the risk-aversion coefficient τ to obtain the following classification and sorting results for the eight objects in the TWD ideal inferiority model.
Remark 10.
According to Table 34 and Figure 10 and Figure 11, analyzing from the perspective of the optimal object, we find that, except for τ = 0 , the optimal object is always O 7 . Furthermore, regardless of the value of τ, O 3 and O 8 are classified into the negative domain, which indicates that O 3 and O 8 are rejected outright. When τ = 0 , that is, when no attribute carries a risk-aversion value, no object falls in the positive domain: O 1 , O 2 , O 4 , O 5 , O 6 and O 7 are in the boundary domain, and O 3 and O 8 are in the negative domain; the sort order is O 1 O 2 O 4 O 5 O 6 O 7 O 3 O 8 . When τ = 0.1 and τ = 0.2 , the classification and ranking results are identical. In Figure 10, the red line and the green line coincide, and the sorting result is O 7 O 1 O 4 O 6 O 5 O 2 O 3 O 8 . In Figure 11, the division of the three areas of the histogram is the same: O 7 is in the positive domain, O 1 , O 2 , O 4 , O 5 and O 6 are in the boundary domain, and O 3 and O 8 are in the negative domain. When τ = 0.4 and τ = 0.5 , the objects are sorted identically but classified differently: for τ = 0.4 , O 1 and O 4 are in the boundary domain, whereas for τ = 0.5 they are classified into the positive domain.

6. Conclusions

In this study, we present two novel TOPSIS-based TWD models with opposing definitions: the TWD ideal superiority model and the TWD ideal inferiority model. When applied to practical fuzzy information systems, both models demonstrate clear feasibility and rationality; the datasets applicable to them are fuzzy attribute environments. In addition, we propose a new method for computing the state set objectively, which reduces the subjectivity of the decision process and makes the decision-making results more objective. Furthermore, we employ the relative loss functions of Jia and Liu to calculate the thresholds of each object, although our use of them differs from Jia's and Liu's methods: since assigning arbitrary, subjective risk-aversion values to each attribute is undesirable, we set the risk-aversion coefficient of every attribute in the relative loss function to the same value. Because this paper studies TWD, this common risk-aversion coefficient ranges from 0 to 0.5. Finally, according to the thresholds of each object and the TWD rules, all objects are divided into three different territories: the objects in the positive territory are acceptable, the objects in the boundary territory need further consideration, and the objects in the negative territory are rejected directly. The objects within each territory are then sorted by the value of the loss function, with smaller loss values ranked higher; in this way the full ordering of all objects is obtained, and the first object in the positive territory is the optimal object.
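The final ranking step described above can be sketched as follows. The domain assignments are those of Table 18, while the loss values are hypothetical, chosen only so that the within-domain ordering reproduces the overall ranking of Table 19:

```python
# Domains from Table 18; hypothetical loss values (smaller loss -> higher priority)
domains = {"O1": "NEG", "O2": "BND", "O3": "NEG", "O4": "NEG",
           "O5": "BND", "O6": "BND", "O7": "POS", "O8": "NEG"}
loss = {"O7": 0.10, "O2": 0.31, "O5": 0.35, "O6": 0.38,
        "O3": 0.52, "O8": 0.55, "O1": 0.60, "O4": 0.61}
order = {"POS": 0, "BND": 1, "NEG": 2}  # positive before boundary before negative

# Sort first by territory, then by loss value within each territory
ranking = sorted(domains, key=lambda o: (order[domains[o]], loss[o]))
print(" > ".join(ranking))  # O7 > O2 > O5 > O6 > O3 > O8 > O1 > O4
```

Because the sort key is the pair (territory, loss), the positive territory always heads the list, and the optimal object is simply the first element of the ranking.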
In the future, we will consider extending the applicability of the two proposed models to other environments. Further, we can discuss and analyze the following three directions in depth. The first is to expand TWD theory, which includes the expansion of relations, the expansion of related classification models and fusion with other classification methods. The second is the study of methodological aspects, such as decision-risk minimization, reduction methods, cost-sensitive rule acquisition and minimum-decision-risk rule acquisition. The third is the application of TWD in the fields of engineering, management and medicine.

Author Contributions

Conceptualization, Methodology, Investigation, Writing—original draft, Writing—review and editing: X.C. and L.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hunan Province grant number 2021JJ40361.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository. The data presented in this study are openly available in the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml) (accessed on 21 March 2021).

Conflicts of Interest

The authors declare that they have no conflict of interest related to this work.

Abbreviations

The following abbreviations are used in this manuscript:
MADM	Multi-attribute decision-making
TWD	Three-way decision

References

1. Yao, Y.Y. Decision-theoretic rough set models. In Rough Sets and Knowledge Technology; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–12.
2. Hwang, C.L.; Yoon, K. Multiple Attribute Decision Making: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 1981.
3. Jia, F.; Liu, P.D. A novel three-way decision model under multiple-criteria environment. Inform. Sci. 2019, 471, 29–51.
4. Wang, P.; Zhang, P.F.; Li, Z.W. A three-way decision method based on Gaussian kernel in a hybrid information system with images: An application in medical diagnosis. Appl. Soft Comput. 2019, 77, 734–749.
5. Lang, G.M.; Miao, D.Q.; Cai, M.J. Three-way decision approaches to conflict analysis using decision-theoretic rough set theory. Inform. Sci. 2017, 406–407, 185–207.
6. Li, Z.W.; Zhang, P.F.; Xie, N.X.; Zhang, G.Q.; Wen, C.F. A novel three-way decision method in a hybrid information system with images and its application in medical diagnosis. Eng. Appl. Artif. Intell. 2020, 92, 103651.
7. Xu, Y.; Wang, X.S. Three-way decision based on improved aggregation method of interval loss function. Inform. Sci. 2020, 508, 214–233.
8. Chakhar, S.; Saad, I. Dominance-based rough set approach for groups in multicriteria classification problems. Decis. Support Syst. 2012, 54, 372–380.
9. Greco, S.; Matarazzo, B.; Slowinski, R. Rough approximation by dominance relations. Int. J. Intell. Syst. 2002, 17, 153–171.
10. Li, W.W.; Huang, Z.Q.; Jia, X.Y.; Cai, X.Y. Neighborhood based decision-theoretic rough set models. Int. J. Approx. Reason. 2016, 69, 1–17.
11. Ye, J.; Zhan, J.; Xu, Z. A novel decision-making approach based on three-way decisions in fuzzy information systems. Inform. Sci. 2020, 541, 362–390.
12. Sun, B.Z.; Ma, W.M.; Zhao, H.Y. Decision-theoretic rough fuzzy set model and application. Inform. Sci. 2014, 283, 180–196.
13. Tang, G.L.; Chiclana, F.; Liu, P.D. A decision-theoretic rough set model with q-rung orthopair fuzzy information and its application in stock investment evaluation. Appl. Soft Comput. 2020, 91, 106212.
14. Yao, Y.Y. Three-way decisions with probabilistic rough sets. Inform. Sci. 2010, 180, 341–353.
15. Yao, Y.Y. Three-way decision and granular computing. Int. J. Approx. Reason. 2018, 103, 107–123.
16. Zhan, J.M.; Jiang, H.B.; Yao, Y.Y. Three-way multi-attribute decision-making based on outranking relations. IEEE Trans. Fuzzy Syst. 2021, 29, 2844–2858.
17. Liu, D.; Liang, D.C.; Wang, C.C. A novel three-way decision model based on incomplete information system. Knowl.-Based Syst. 2016, 91, 32–45.
18. Liang, D.C.; Xu, Z.S.; Liu, D.; Wu, Y. Method for three-way decisions using ideal TOPSIS solutions at Pythagorean fuzzy information. Inform. Sci. 2018, 435, 282–295.
19. Zhang, C.; Li, D.; Liang, J. Multi-granularity three-way decisions with adjustable hesitant fuzzy linguistic multigranulation decision-theoretic rough sets over two universes. Inform. Sci. 2020, 507, 665–683.
20. Liu, P.D.; Wang, Y.M.; Jia, F.; Fujita, H. A multiple attribute decision making three-way model for intuitionistic fuzzy numbers. Int. J. Approx. Reason. 2020, 119, 177–203.
21. Wang, W.J. Three-way decisions based multi-attribute decision making with probabilistic dominance relations. Inform. Sci. 2021, 559, 75–96.
22. Brans, J.P.; Vincke, J.P.; Mareschal, B. How to select and how to rank projects: The PROMETHEE method. Eur. J. Oper. Res. 1986, 24, 228–238.
23. Zhang, K.; Dai, J.H.; Zhan, J.M. A new classification and ranking decision method based on three-way decision theory and TOPSIS models. Inform. Sci. 2021, 568, 54–85.
Figure 1. The heatmap of SRCC in seven multi-criteria decision-making methods.
Figure 2. The heatmap of SRCC in seven multi-criteria decision-making methods.
Figure 3. The heatmap of SRCC in five multi-criteria decision-making methods.
Figure 4. The sorting results of eight objects with different preference parameters k 1 .
Figure 5. The classification results of eight objects with different preference parameters k 1 .
Figure 6. The sorting results of eight objects with different risk-aversion coefficients τ .
Figure 7. The classification results of eight objects with different risk-aversion coefficients τ .
Figure 8. The sorting results of eight objects with different preference parameters k 2 .
Figure 9. The classification results of eight objects with different preference parameters k 2 .
Figure 10. The sorting results of eight objects with different risk-aversion coefficients τ .
Figure 11. The classification results of eight objects with different risk-aversion coefficients τ .
Table 1. The initial decision-making matrix.
S 1 S 2 S m
O 1 u 11 u 12 u 1 m
O 2 u 21 u 22 u 2 m
O n u n 1 u n 2 u n m
Table 2. The normalized decision-making matrix.
S 1 S 2 S m
O 1 v 11 v 12 v 1 m
O 2 v 21 v 22 v 2 m
O n v n 1 v n 2 v n m
Table 3. The relative loss functions of rule conversion.
R ¬ R
L P 0 θ P N
L B θ B P θ B N
L N θ N P 0
Table 4. The relative loss functions of rule conversion.
R j ¬ R j
L P 0 v max j v i j
L B σ ( v i j v min j ) σ ( v max j v i j )
L N v i j v min j 0
Table 5. The aggregate relative loss functions of O i .
R ¬ R
L P 0 v max Σ j w j v i j
L B Σ j σ j w j ( v i j v min ) Σ j σ j w j ( v max v i j )
L N Σ j w j v i j v min 0
Table 6. The multi-attribute information matrix of five funds.
r 1 r 2 r 3 r 4
s 1 0.8 0.7 0.1 0.6
s 2 0.4 0.4 0.5 0.7
s 3 0.5 0.1 0.2 0.1
s 4 0.4 0.7 0.9 0.3
s 5 0.6 0.7 0.5 0.6
Table 7. The aggregate relative loss functions of O i ( i η ) .
T 1 T 2
L P 0 v max Σ j w j v i j
L B Σ j τ w j ( v i j v min ) Σ j τ w j ( v max v i j )
L N Σ j w j v i j v min 0
Table 8. The aggregate relative loss functions of s 1 under the ideal superiority class.
T 1 T 2
L P 0 v max Σ j w j v 1 j = 0.5968
L B Σ j τ w j ( v 1 j v min ) = 0.1168 Σ j τ w j ( v max v 1 j ) = 0.2387
L N Σ j w j v 1 j v min = 0.2921 0
Table 9. The thresholds and division results of the other four objects.
s 2 s 3 s 4 s 5
α i E 0.6138 0.4410 0.6799 0.6456
β i E 0.4139 0.2596 0.4856 0.4473
γ i E 0.5144 0.3446 0.5861 0.5484
P ( T 1 | [ s i ] E ) 0.2500 1.0000 0.5000 0.3333
Decision rules N E G P O S B N D N E G
Table 10. The multi-attribute information matrix of six tableware colors.
L 1 L 2 L 3 L 4 L 5
B 1 0.86 588 294 147 2
B 2 0.82 602 118 121 3
B 3 0.76 701 356 145 5
B 4 0.90 637 343 165 2
B 5 0.86 588 294 118 3
B 6 0.82 634 348 163 3
Table 11. The aggregate relative loss functions of O i ( i η ) .
G 1 G 2
L P 0 v max Σ j w j v i j
L B Σ j τ w j ( v i j v min ) Σ j τ w j ( v max v i j )
L N Σ j w j v i j v min 0
Table 12. The aggregate relative loss functions of B 1 .
G 1 G 2
L P 0 v max Σ j w j v 1 j = 0.0953
L B Σ j τ w j ( v 1 j v min ) = 0.0158 Σ j τ w j ( v max v 1 j ) = 0.0191
L N Σ j w j v 1 j v min = 0.0789 0
Table 13. The thresholds and classification results of the five other objects.
B 2 B 3 B 4 B 5 B 6
α i F 0.7586 0.6929 0.5087 0.8607 0.6843
β i F 0.1642 0.1236 0.0608 0.2786 0.1193
γ i F 0.4400 0.3606 0.2056 0.6070 0.3514
P ( G 1 | [ B i ] F ) 0.0000 0.0000 0.5000 0.0000 0.2500
Decision rules N E G N E G B N D N E G B N D
Table 14. The original data for 385 building shapes.
S 1 S 2 S 3 S 4 S 5 S 6 S 7 S 8
O 1 0.98 514.50 294.00 110.25 7.00 2 0.00 0
O 56 0.90 563.50 318.50 122.50 7.00 2 0.10 1
O 109 0.86 588.00 294.00 147.00 7.00 4 0.10 2
O 160 0.82 612.50 318.50 147.00 7.00 3 0.10 3
O 216 0.76 661.50 416.50 122.50 7.00 3 0.10 4
O 242 0.62 808.50 367.50 220.50 3.50 5 0.10 4
O 310 0.79 637.00 343.00 147.00 7.00 5 0.25 1
O 385 0.74 686.00 245.00 220.50 3.50 2 0.25 3
Table 15. The eight building shapes.
S 1 S 2 S 3 S 4 S 5
O 1 0.98 514.50 294.00 110.25 3
O 2 0.90 563.50 318.50 122.50 5
O 3 0.76 661.50 416.50 122.50 3
O 4 0.98 514.50 294.00 110.25 3
O 5 0.62 808.50 367.50 220.50 4
O 6 0.82 612.50 318.50 147.00 4
O 7 0.69 735.00 294.00 220.50 5
O 8 0.79 637.00 343.00 147.00 2
Table 16. The eight building shapes after normalization.
S 1 S 2 S 3 S 4 S 5
O 1 1.0000 0.6364 1.0000 0.5000 0.6000
O 2 0.9184 0.6970 0.9231 0.5556 1.0000
O 3 0.7755 0.8182 0.7059 0.5556 0.6000
O 4 1.0000 0.6364 1.0000 0.5000 0.6000
O 5 0.6327 1.0000 0.8000 1.0000 0.8000
O 6 0.8367 0.7576 0.9231 0.6667 0.8000
O 7 0.7041 0.9091 1.0000 1.0000 1.0000
O 8 0.8061 0.7879 0.8571 0.6667 0.4000
Table 17. The conditional probability of the eight building shapes.
O O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8
P ( T 1 | [ O i ] E ) 0.1667 0.5000 0.1429 0.1667 0.5000 0.5000 1.0000 0.1429
Table 18. Decision information for the eight building shapes under the TWD ideal superiority model.
O O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8
α E 0.5862 0.5075 0.6741 0.5862 0.6243 0.5814 0.3577 0.4673
β E 0.2912 0.2300 0.3749 0.2912 0.3251 0.2871 0.1390 0.2028
γ E 0.4327 0.3568 0.5269 0.4327 0.4722 0.4279 0.2307 0.3208
Decision rules N E G B N D N E G N E G B N D B N D P O S N E G
Table 19. The ranking of the eight building shapes under the TWD ideal superiority model.
DomainsRankings
P O S O 7
B N D O 2 O 5 O 6
N E G O 3 O 8 O 1 O 4
Overall ranking O 7 O 2 O 5 O 6 O 3 O 8 O 1 O 4
Table 20. The conditional probabilities for the eight building shapes under the TWD ideal inferiority model.
O O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8
P ( G 1 | [ O i ] F ) 0.5000 0.6000 0 0.5000 0.6000 0.6000 0.7500 0
Table 21. Decision information for the eight building shapes under the TWD ideal inferiority model.
O O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8
α F 0.5862 0.5075 0.6741 0.5862 0.6243 0.5814 0.3577 0.4673
β F 0.2912 0.2300 0.3749 0.2912 0.3251 0.2871 0.1390 0.2028
γ F 0.4327 0.3568 0.5269 0.4327 0.4722 0.4279 0.2307 0.3208
Decision rules B N D P O S N E G B N D B N D P O S P O S N E G
Table 22. The ranking of the eight objects under the TWD ideal inferiority model.
DomainsRankings
P O S O 7 O 2 O 6
B N D O 1 O 4 O 5
N E G O 3 O 8
Overall ranking O 7 O 2 O 6 O 1 O 4 O 5 O 3 O 8
Table 23. The ranking results for the eight building shapes in different MADM methods.
MethodRanking ResultsOptimal Object
IS O 7 O 2 O 6 O 5 O 3 O 8 O 1 O 4 O 7
IF O 7 O 2 O 6 O 1 O 4 O 5 O 3 O 8 O 7
TOPSIS O 7 O 2 O 1 O 4 O 6 O 5 O 8 O 3 O 7
PROMETHEE O 7 O 2 O 5 O 6 O 1 O 4 O 8 O 3 O 7
Ye’s O 7 O 5 O 2 O 6 O 1 O 4 O 3 O 8 O 7
Zhang’s O 7 O 2 O 6 O 5 O 1 O 4 O 8 O 3 O 7
Jia’s O 7 O 2 O 5 O 6 O 1 O 4 O 8 O 3 O 7
Table 24. The SRCC between any two of the seven decision-making methods.
MethodISIFTOPSISPROMETHEEYe’sZhang’sJia’s
IS 1.0000 0.5714 0.3810 0.7857 0.7857 0.7619 0.7857
IF 0.5714 1.0000 0.9048 0.8333 0.7619 0.9048 0.8333
TOPSIS 0.3810 0.9048 1.0000 0.7857 0.6667 0.8095 0.7857
PROMETHEE 0.7857 0.8333 0.7857 1.0000 0.9524 0.9762 1.0000
Ye’s 0.7857 0.7619 0.6667 0.9524 1.0000 0.9048 0.9524
Zhang’s 0.7857 0.9048 0.8095 0.9762 0.9048 1.0000 0.9762
Jia’s 0.7619 0.8333 0.7857 1.0000 0.9524 0.9762 1.0000
Table 25. The data of the example in [3].
S 1 S 2 S 3 S 4 S 5
O 1 0.8 0.4 0.3 0.8 0.9
O 2 0.9 0.5 0.5 0.7 0.6
O 3 0.3 0.4 0.6 0.4 0.3
O 4 0.5 0.2 0.2 0.7 0.6
O 5 0.7 0.6 0.6 0.5 0.8
O 6 0.4 0.8 0.7 0.7 0.3
O 7 0.9 0.5 0.1 0.8 0.7
O 8 0.6 0.8 0.8 0.3 0.4
Table 26. The ranking results for the eight investment objects of different MADM methods.
MethodRanking ResultsOptimal Object
IS O 7 O 2 O 5 O 1 O 4 O 6 O 8 O 3 O 7
IF O 7 O 4 O 2 O 5 O 1 O 3 O 6 O 8 O 7
TOPSIS O 7 O 1 O 2 O 4 O 5 O 6 O 8 O 3 O 7
PROMETHEE O 7 O 1 O 2 O 4 O 5 O 6 O 8 O 3 O 7
Ye’s O 7 O 1 O 4 O 2 O 5 O 6 O 3 O 8 O 7
Zhang’s O 7 O 1 O 4 O 2 O 5 O 6 O 8 O 3 O 7
Jia’s O 7 O 1 O 4 O 2 O 5 O 6 O 3 O 8 O 7
Table 27. The SRCC values between any two of the seven decision-making methods.
MethodISIFTOPSISPROMETHEEYe’sZhang’sJia’s
IS 1.0000 0.7857 0.8810 0.8810 0.7857 0.8095 0.7857
IF 0.7857 1.0000 0.7619 0.7619 0.8333 0.7857 0.8333
TOPSIS 0.8810 0.7619 1.0000 1.0000 0.9524 0.9762 0.9524
PROMETHEE 0.8810 0.7619 1.0000 1.0000 0.9524 0.9762 0.9524
Ye’s 0.7857 0.8333 0.9524 0.9524 1.0000 0.9762 1.0000
Zhang’s 0.8095 0.7857 0.9762 0.9762 0.9762 1.0000 0.9762
Jia’s 0.7857 0.8333 0.9524 0.9524 1.0000 0.9762 1.0000
Table 28. The data of the example in [16].
S 1 S 2 S 3 S 4
O 1 0.80 0.69 0.64 0.74
O 2 0.65 0.85 0.72 0.67
O 3 0.73 0.77 0.78 0.61
O 4 0.82 0.68 0.64 0.75
O 5 0.54 0.96 0.57 0.82
O 6 0.88 0.62 0.70 0.69
Table 29. The ranking results for six investment objects for different MADM methods.
MethodRanking ResultsOptimal Object
IS O 6 O 4 O 1 O 2 O 5 O 3 O 6
IF O 6 O 4 O 1 O 5 O 2 O 3 O 6
TOPSIS O 6 O 4 O 1 O 3 O 2 O 5 O 6
PROMETHEE O 6 O 4 O 1 O 3 O 2 O 5 O 6
Zhang’s O 6 O 4 O 1 O 2 O 3 O 5 O 6
Table 30. The SRCC between any two of five decision-making methods.
MethodISIFTOPSISPROMETHEEZhang’s
IS 1.0000 0.9429 0.8286 0.8286 0.9429
IF 0.9429 1.0000 0.7714 0.7714 0.8286
TOPSIS 0.8286 0.7714 1.0000 1.0000 0.9429
PROMETHEE 0.8286 0.7714 1.0000 1.0000 0.9429
Zhang’s 0.9429 0.8286 0.9429 0.9429 1.0000
Table 31. The ranking results of different preference parameters k 1 .
Preference Parameter k 1 Ranking Results
k 1 = 0.05 O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8
k 1 = 0.10 O 1 O 2 O 3 O 4 O 5 O 6 O 7 O 8
k 1 = 0.15 O 7 O 2 O 5 O 6 O 3 O 8 O 1 O 4
k 1 = 0.20 O 7 O 2 O 5 O 6 O 3 O 8 O 1 O 4
k 1 = 0.25 O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3
k 1 = 0.30 O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3
k 1 = 0.35 O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3
k 1 = 0.40 O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3
k 1 = 0.45 O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3
k 1 = 0.50 O 2 O 5 O 6 O 7 O 1 O 4 O 8 O 3
Table 32. The ranking results for different risk-aversion coefficients τ .
The Risk Aversion Coefficient τ Ranking Results
τ = 0.0 O 7 O 1 O 2 O 3 O 4 O 5 O 6 O 8
τ = 0.1 O 7 O 1 O 4 O 8 O 3 O 2 O 5 O 6
τ = 0.2 O 7 O 1 O 4 O 8 O 2 O 5 O 6 O 3
τ = 0.3 O 7 O 5 O 6 O 2 O 3 O 8 O 1 O 4
τ = 0.4 O 7 O 2 O 5 O 6 O 3 O 8 O 1 O 4
τ = 0.5 O 7 O 2 O 5 O 6 O 3 O 8 O 1 O 4
Table 33. The ranking results for different preference parameters k 2 .
Preference Parameter k 2 Ranking Results
k 2 = 0.50 O 7 O 2 O 6 O 1 O 4 O 5 O 3 O 8
k 2 = 0.55 O 7 O 1 O 3 O 4 O 8 O 6 O 5 O 2
k 2 = 0.60 O 7 O 1 O 3 O 4 O 8 O 6 O 5 O 2
k 2 = 0.65 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 0.70 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 0.75 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 0.80 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 0.85 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 0.90 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 0.95 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
k 2 = 1.00 O 1 O 2 O 3 O 4 O 5 O 6 O 8 O 7
Table 34. The ranking results for different risk-aversion coefficients τ .
Risk Aversion Coefficient τ Ranking Results
τ = 0.0 O 1 O 2 O 4 O 5 O 6 O 7 O 3 O 8
τ = 0.1 O 7 O 1 O 4 O 6 O 5 O 2 O 3 O 8
τ = 0.2 O 7 O 1 O 4 O 6 O 5 O 2 O 3 O 8
τ = 0.3 O 7 O 2 O 1 O 4 O 6 O 5 O 3 O 8
τ = 0.4 O 7 O 2 O 5 O 6 O 1 O 4 O 3 O 8
τ = 0.5 O 7 O 2 O 5 O 6 O 1 O 4 O 3 O 8
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Chen, X.; Zou, L. Three-Way Decision Models Based on Ideal Relations in Multi-Attribute Decision-Making. Entropy 2022, 24, 986. https://doi.org/10.3390/e24070986


