
# A Multi-Criteria Decision-Making Method Based on Single-Valued Neutrosophic Partitioned Heronian Mean Operator

by Chao Tian 1, Juan Juan Peng 1,*, Zhi Qiang Zhang 1, Mark Goh 2 and Jian Qiang Wang 3

1 School of Information, Zhejiang University of Finance & Economics, Hangzhou 310018, China
2 NUS School of Business, National University of Singapore, Singapore 117592, Singapore
3 School of Business, Central South University, Changsha 410083, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(7), 1189; https://doi.org/10.3390/math8071189
Submission received: 20 May 2020 / Revised: 3 July 2020 / Accepted: 13 July 2020 / Published: 20 July 2020

## Abstract

A multi-criteria decision-making (MCDM) method with single-valued neutrosophic information is developed based on the Partitioned Heronian Mean (PHM) operator and the Shapley fuzzy measure, which recognizes correlation among the selection criteria. Motivated by the PHM operator and Shapley fuzzy measure, two new aggregation operators, namely the single-valued neutrosophic PHM operator and the weighted single-valued neutrosophic Shapley PHM operator, are defined, and their corresponding properties and some special cases are investigated. An MCDM model is applied to solve the single-valued neutrosophic problem where weight information is not completely known. An example is provided to validate the proposed method.

## 1. Introduction

Zadeh first put forward the notion of fuzzy sets (FSs) [1]. Since then, multi-criteria decision-making (MCDM) methods based on FSs have been well developed and applied to hotel selection [2], investment project selection [3], supplier selection [4], solar power station site selection [5], recycling waste resource evaluation [6], and others [7,8,9,10,11,12,13]. However, due to the inherent subjectivity in the preferences of the decision makers (DMs), a single membership degree of FSs cannot adequately capture the subjectivity and uncertainty in the decision-making process. In view of this, Atanassov [14] introduced intuitionistic fuzzy sets (IFSs), including membership and non-membership degrees and a hesitation index, as an extension of FSs. However, both FSs and IFSs are not adept at tackling problems involving information uncertainty. For example, when we ask an expert about a certain statement, the expert may say the probability that the statement is true, false, and unsure is 0.6, 0.5, and 0.1, respectively [15]. Clearly, such a problem is beyond the scope of FSs and IFSs. Smarandache et al. [16] constructed neutrosophic sets (NSs) that involve three membership functions: truth, indeterminacy, and falsity. It is noted that NSs lie in the non-standard unit interval ]0−, 1+[ [17], which is an extension of the standard interval [0, 1] of IFSs. The uncertainty presented here, i.e., the indeterminacy factor, is independent of the truth and falsity values, whereas the uncertainty incorporated in IFSs depends on the membership and non-membership degrees [18]. Thus, the earlier example can be expressed as the NS x(0.6, 0.1, 0.5). While some MCDM methods with neutrosophic information have been investigated [19,20,21], their applicability is restricted because of the non-standard unit interval. As such, single-valued neutrosophic sets (SVNSs) were proposed as a special case of NSs [22].
SVNSs have recently become a popular method to describe the preference information of DMs, and have attracted much research attention in areas such as aggregation operators [23,24,25,26,27,28], outranking relations [29], and information measures [30,31,32,33].
Indeed, aggregation operators are significant in solving MCDM problems. Different functions usually involve different aggregation operators such as the Heronian mean (HM) operator [34,35], Hamacher operator [36,37], Muirhead mean operator [38,39], Maclaurin symmetric mean operator [40,41], and Bonferroni mean operator [42,43,44]. These operators can reduce the effects of abnormal data provided by DMs. For instance, the HM operator, defined by Sykora [34], takes the interrelationship of the input arguments into account. Recently, many studies have examined the HM operator and extended it to various decision-making contexts. For instance, based on the HM operator, Liu and Shi [35] defined some neutrosophic linguistic operators and Peng et al. [45] discussed the single-valued neutrosophic hesitant fuzzy geometric Choquet integral HM operator. In addition, some other MCDM methods, including the analytic network process (ANP) [46], and the analytic hierarchy process and interpretive structural modelling (AHP-ISM) [47,48], also consider the interrelationship of criteria. However, the HM operator, ANP, and AHP-ISM presuppose that all the selection criteria are interrelated. In reality, the criteria need not always be correlated with each other. Hence, the criteria should be partitioned into distinct categories to improve decision-making accuracy. Liu et al. [49] defined the partitioned HM (PHM) operator where all the criteria are partitioned into categories, in which the criteria in the same category are correlated with each other. For example, if a firm wishes to select a food supplier from several vendors using the criteria of cost (c1), quality (c2), service performance (c3), risk (c4), and supplier profile (c5), then the criteria can be partitioned into the categories P1 = {c1, c2, c4} and P2 = {c3, c5}. Criteria c1, c2 and c4 are correlated, placing them in the same category, P1; likewise, for criteria c3 and c5 in set P2. 
It is noted that the Shapley fuzzy measure [50,51] is adept at handling MCDM problems with correlated selection criteria, and has been extensively used for the same reason [52,53].
From the analysis presented above, the motivations of this research can be summarized as follows: (1) the existing single-valued neutrosophic aggregation operators only consider the importance of assessment values or that of the ordered position, but ignore the complex interrelationship of the criteria; (2) the existing methods are mostly constructed under complete weight information, and cannot deal with MCDM problems where the weight information is incomplete. Thus, our study makes two contributions. First, we propose two new partitioned aggregation operators, namely, the single-valued neutrosophic PHM (SVNPHM) operator and the weighted single-valued neutrosophic Shapley PHM (WSVNSPHM) operator, to address the first shortcoming. Next, we develop a method to deal with the single-valued neutrosophic MCDM problem under incomplete weight information, to address the second shortcoming.
The rest of this paper is organized as follows. In Section 2, some definitions are introduced. The SVNPHM and WSVNSPHM operators are explained in Section 3. The single-valued neutrosophic MCDM method with incomplete weight information is developed in Section 4. In Section 5, an example is provided to validate the proposed method. Finally, conclusions are drawn in Section 6.

## 2. Preliminaries

Here, we introduce some definitions, namely, the Shapley fuzzy measure, PHM operator, NSs, and SVNSs.

#### 2.1. Shapley Fuzzy Measure

Definition 1
([50]). Let $X = { x 1 , x 2 , … , x n }$ be a space of objects and $P ( X )$ be the power set of $X$. Then a function $μ : P ( X ) → [ 0 , 1 ]$ is called a fuzzy measure if it satisfies
(1)
$μ ( ∅ ) = 0$ and $μ ( X ) = 1$;
(2)
If $α , β ∈ P ( X )$ and $α ⊆ β$, then $μ ( α ) ≤ μ ( β )$.
Definition 2
([54]). Suppose $μ$ is a fuzzy measure on $X$. The corresponding Möbius transformation can be expressed as
$β ⊂ X , m ( β ) = ∑ α ⊂ β ( − 1 ) | β \ α | μ ( α )$
If $m ( β ) = 0$ for every $β$ with $| β | > k$, and there exists at least one subset $γ$ with $| γ | = k$ satisfying $m ( γ ) ≠ 0$, then $μ$ is called a k-order additive fuzzy measure.
Definition 3
([50]). Suppose μ is a fuzzy measure on X; the Shapley value to measure the average importance degree of S is:
$τ_S ( μ , X ) = \sum_{M ⊆ X \setminus S} \frac{( n - s - m ) ! \, m !}{( n - s + 1 ) !} \left( μ ( S ∪ M ) - μ ( M ) \right) , \quad ∀ S ⊆ X$
where $n$, $m$, and $s$ denote the cardinalities of $X$, $M$, and $S$, respectively. As noted in [54], $τ S ( μ , X ) ≥ 0$ and $∑ S ⊆ X τ S ( μ , X ) = 1$; $τ S ( μ , X )$ is called the Shapley fuzzy measure [53].
In this paper, the Shapley fuzzy measures are additive fuzzy measures unless otherwise stated.
Example 1.
Suppose$X = { d , e , f }$, and$μ$is a fuzzy measure, with$μ ( ∅ ) = 0$,$μ ( { d } ) = 0.1$,$μ ( { e } ) = 0.2$,$μ ( { f } ) = 0.5$,$μ ( { d , e } ) = 0.5$,$μ ( { e , f } ) = 0.9$,$μ ( { d , f } ) = 0.8$, and$μ ( { X } ) = 1$. If$S = { d , e }$, then$X \ S = { f }$. The following results can be obtained:
$τ_S ( μ , X ) = \frac{( 3 - 2 - 1 ) ! \, 1 !}{( 3 - 2 + 1 ) !} \left( μ ( \{ d , e \} ∪ \{ f \} ) - μ ( \{ f \} ) \right) + \frac{( 3 - 2 - 0 ) ! \, 0 !}{( 3 - 2 + 1 ) !} \left( μ ( \{ d , e \} ∪ ∅ ) - μ ( ∅ ) \right) = \frac{1}{2} ( μ ( \{ d , e , f \} ) - μ ( \{ f \} ) ) + \frac{1}{2} ( μ ( \{ d , e \} ) - μ ( ∅ ) ) = \frac{1}{2} ( 1 - 0.5 ) + \frac{1}{2} ( 0.5 - 0 ) = 0.5 .$
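For concreteness, Definition 3 and Example 1 can be checked with a short script; the `shapley` helper and the dictionary encoding of the fuzzy measure are illustrative, not part of the paper.

```python
from itertools import combinations
from math import factorial

def shapley(mu, X, S):
    """Shapley value of coalition S for fuzzy measure mu on ground set X.

    mu maps frozensets to [0, 1]; per Definition 3 the sum runs over all
    subsets M of X \\ S with the factorial coefficient shown there.
    """
    S, X = frozenset(S), frozenset(X)
    rest = X - S
    n, s = len(X), len(S)
    total = 0.0
    for m in range(len(rest) + 1):
        for M in combinations(rest, m):
            M = frozenset(M)
            coeff = (factorial(n - s - m) * factorial(m)
                     / factorial(n - s + 1))
            total += coeff * (mu[S | M] - mu[M])
    return total

# The fuzzy measure of Example 1
mu = {
    frozenset(): 0.0,
    frozenset("d"): 0.1, frozenset("e"): 0.2, frozenset("f"): 0.5,
    frozenset("de"): 0.5, frozenset("ef"): 0.9, frozenset("df"): 0.8,
    frozenset("def"): 1.0,
}
print(shapley(mu, "def", "de"))  # 0.5, matching Example 1
```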

#### 2.2. PHM

Definition 4
([34]). Let $χ i ( i = 1 , 2 , … , n )$ be a set of real numbers. The HM operator is defined as:
$H M p , q ( χ 1 , χ 2 , … , χ n ) = ( 2 n ( n + 1 ) ∑ i = 1 , j = i n χ i p χ j q ) 1 p + q$
where $p , q ≥ 0$, and the HM operator satisfies the following properties:
(1)
Idempotency: If$χ i = χ ( i = 1 , 2 , … , n )$, then$H M p , q ( χ , χ , … , χ ) = χ$.
(2)
Permutability: If$χ i ′ ( i = 1 , 2 , … , n )$is a permutation of$χ i ( i = 1 , 2 , … , n )$, then$H M p , q ( χ 1 ′ , χ 2 ′ , … , χ n ′ ) = H M p , q ( χ 1 , χ 2 , … , χ n )$.
(3)
Boundedness: If$χ + = max { χ 1 , χ 2 , … , χ n }$and$χ − = min { χ 1 , χ 2 , … , χ n }$, then$χ − ≤ H M p , q ( χ 1 , χ 2 , … , χ n ) ≤ χ +$.
Definition 5
([49]). Let $χ i ( i = 1 , 2 , … , n )$ be a set of inputs that can be partitioned into $t$ categories $P l ( l = 1 , 2 , … , t )$. The PHM operator is defined as:
$P H M p , q ( χ 1 , χ 2 , … , χ n ) = 1 t ( ∑ l = 1 t ( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | χ i p χ j q ) 1 p + q )$
where $p , q ≥ 0$, $p + q > 0$, $∑ l = 1 t | P l | = n$, and $P i ∩ P j = ∅$ for $i ≠ j$; $| P l |$ denotes the cardinality of $P l$.
Example 2.
If$C = { c 1 , c 2 , c 3 , c 4 , c 5 }$is a set of criteria that can be partitioned into two categories P1 = {c1, c2, c3} and P2 = {c4, c5}, and the assessment values provided by the DMs are$χ = { 0.7 , 0.5 , 0.4 , 0.6 , 0.8 }$(for convenience, let p = q = 1), then, the aggregated results using the PHM operator are written as:
$P H M^{1 , 1} ( χ_1 , χ_2 , … , χ_5 ) = \frac{1}{2} \left[ \left( \frac{2}{3 × 4} ( 0.7 × 0.7 + 0.7 × 0.5 + 0.7 × 0.4 + 0.5 × 0.5 + 0.5 × 0.4 + 0.4 × 0.4 ) \right)^{\frac{1}{2}} + \left( \frac{2}{2 × 3} ( 0.6 × 0.6 + 0.6 × 0.8 + 0.8 × 0.8 ) \right)^{\frac{1}{2}} \right] = \frac{1}{2} \left[ \left( \frac{1.73}{6} \right)^{\frac{1}{2}} + \left( \frac{1.48}{3} \right)^{\frac{1}{2}} \right] = 0.6197 .$
Moreover,
$H M^{1 , 1} ( χ_1 , χ_2 , … , χ_5 ) = \left( \frac{2}{5 × 6} \sum_{i = 1 , j = i}^{5} χ_i^1 χ_j^1 \right)^{\frac{1}{2}} = \left( \frac{2}{30} ( 0.7 × 0.7 + 0.7 × 0.5 + 0.7 × 0.4 + 0.7 × 0.6 + 0.7 × 0.8 + 0.5 × 0.5 + 0.5 × 0.4 + 0.5 × 0.6 + 0.5 × 0.8 + 0.4 × 0.4 + 0.4 × 0.6 + 0.4 × 0.8 + 0.6 × 0.6 + 0.6 × 0.8 + 0.8 × 0.8 ) \right)^{\frac{1}{2}} = \left( \frac{5.45}{15} \right)^{\frac{1}{2}} = 0.6028 .$
The reason for the difference in the results obtained by the PHM operator and those obtained by the HM operator is that the PHM operator partitions the input values into categories based on the relationship of the values, whereas the HM operator presupposes the condition that each input value is correlated with the other values. Therefore, the PHM operator is more reasonable than the HM operator.
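The arithmetic of Example 2 can be reproduced directly from Definitions 4 and 5; the `hm` and `phm` helper names below are our own.

```python
def hm(xs, p=1, q=1):
    """Heronian mean (Definition 4) of real inputs xs."""
    n = len(xs)
    s = sum(xs[i] ** p * xs[j] ** q for i in range(n) for j in range(i, n))
    return (2.0 / (n * (n + 1)) * s) ** (1.0 / (p + q))

def phm(partitions, p=1, q=1):
    """Partitioned Heronian mean (Definition 5): the average of the
    per-category Heronian means."""
    return sum(hm(part, p, q) for part in partitions) / len(partitions)

chi = [0.7, 0.5, 0.4, 0.6, 0.8]
print(round(hm(chi), 4))                             # HM over all five values
print(round(phm([[0.7, 0.5, 0.4], [0.6, 0.8]]), 4))  # PHM with P1 and P2
```

The two aggregated values differ (about 0.6028 for the HM versus 0.6197 for the PHM), which is the point Example 2 illustrates.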

#### 2.3. NSs and SVNSs

Definition 6
([16]). An NS $S ˜$ in $X = { x 1 , x 2 , … , x n }$ can be characterized as $S ˜ = { 〈 x , T ˜ S ˜ ( x ) , I ˜ S ˜ ( x ) , F ˜ S ˜ ( x ) 〉 | x ∈ X }$, where $T ˜ S ˜ ( x )$, $I ˜ S ˜ ( x )$, and $F ˜ S ˜ ( x )$ denote the truth, indeterminacy, and falsity memberships, respectively. Furthermore, $T ˜ S ˜ ( x )$, $I ˜ S ˜ ( x )$, and $F ˜ S ˜ ( x )$ are subsets of ]0−, 1+[, that is, $T ˜ S ˜ ( x ) : X → ] 0 − , 1 + [$, $I ˜ S ˜ ( x ) : X → ] 0 − , 1 + [$, and $F ˜ S ˜ ( x ) : X → ] 0 − , 1 + [$, and satisfy the condition $0 − ≤ sup T ˜ S ˜ ( x ) + sup I ˜ S ˜ ( x ) + sup F ˜ S ˜ ( x ) ≤ 3 +$.
Since the non-standard intervals of NSs make them impractical for tackling real-life problems, Majumdar and Samanta [18] defined SVNSs based on standard intervals, and Ye [19] developed the corresponding properties for SVNSs.
Definition 7
([22]). An SVNS $S$ in $X = { x 1 , x 2 , … , x n }$ is defined as $S = { 〈 x , T S ( x ) , I S ( x ) , F S ( x ) 〉 | x ∈ X }$, where $T S ( x )$, $I S ( x )$, and $F S ( x )$ take values in the standard interval [0, 1], i.e., $T S ( x ) : X → [ 0 , 1 ]$, $I S ( x ) : X → [ 0 , 1 ]$, and $F S ( x ) : X → [ 0 , 1 ]$. If $X$ has only one element, then $S$ is a single-valued neutrosophic number (SVNN). For convenience, we denote the SVNN by $S = 〈 T S , I S , F S 〉$.
Definition 8
([22]). Let $S = 〈 T S , I S , F S 〉$, $S 1 = 〈 T S 1 , I S 1 , F S 1 〉$, and $S 2 = 〈 T S 2 , I S 2 , F S 2 〉$ be three SVNNs. With $λ > 0$, the following properties hold:
(1)
$λ S = 〈 1 − ( 1 − T S ) λ , 1 − ( 1 − I S ) λ , 1 − ( 1 − F S ) λ 〉 , λ > 0$;
(2)
$S λ = 〈 T S λ , I S λ , F S λ 〉 , λ > 0$;
(3)
$S 1 ⊕ S 2 = 〈 T S 1 + T S 2 − T S 1 ⋅ T S 2 , I S 1 + I S 2 − I S 1 ⋅ I S 2 , F S 1 + F S 2 − F S 1 ⋅ F S 2 〉$;
(4)
$S 1 ⊗ S 2 = 〈 T S 1 ⋅ T S 2 , I S 1 ⋅ I S 2 , F S 1 ⋅ F S 2 〉$.
However, as stated in [19], the above operations can produce unreasonable results. In view of this, Peng et al. [20] improved the operations of SVNNs as well as the corresponding comparison method.
Definition 9
([23]). Let $S = 〈 T S , I S , F S 〉$, $S 1 = 〈 T S 1 , I S 1 , F S 1 〉$, and $S 2 = 〈 T S 2 , I S 2 , F S 2 〉$ be three SVNNs. With $λ > 0$, the properties of the SVNNs are defined as follows:
(1)
$λ S = 〈 1 − ( 1 − T S ) λ , I S λ , F S λ 〉$;
(2)
$S λ = 〈 T S λ , 1 − ( 1 − I S ) λ , 1 − ( 1 − F S ) λ 〉$;
(3)
$S 1 ⊕ S 2 = 〈 T S 1 + T S 2 − T S 1 ⋅ T S 2 , I S 1 ⋅ I S 2 , F S 1 ⋅ F S 2 〉$;
(4)
$S 1 ⊗ S 2 = 〈 T S 1 ⋅ T S 2 , I S 1 + I S 2 − I S 1 ⋅ I S 2 , F S 1 + F S 2 − F S 1 ⋅ F S 2 〉$.
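The improved operations of Definition 9 can be sketched as follows; the `SVNN` class and its method names are illustrative, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class SVNN:
    """Single-valued neutrosophic number <T, I, F> with the improved
    operations of Definition 9."""
    T: float
    I: float
    F: float

    def scale(self, lam):   # λS, operation (1)
        return SVNN(1 - (1 - self.T) ** lam, self.I ** lam, self.F ** lam)

    def power(self, lam):   # S^λ, operation (2)
        return SVNN(self.T ** lam,
                    1 - (1 - self.I) ** lam,
                    1 - (1 - self.F) ** lam)

    def __add__(self, o):   # S1 ⊕ S2, operation (3)
        return SVNN(self.T + o.T - self.T * o.T, self.I * o.I, self.F * o.F)

    def __mul__(self, o):   # S1 ⊗ S2, operation (4)
        return SVNN(self.T * o.T,
                    self.I + o.I - self.I * o.I,
                    self.F + o.F - self.F * o.F)

s1, s2 = SVNN(0.5, 0.6, 0.4), SVNN(0.5, 0.5, 0.4)
print(s1 + s2)  # ⊕ gives T = 0.75
print(s1 * s2)  # ⊗ gives T = 0.25
```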
Definition 10
([23]). Let $S 1 = 〈 T S 1 , I S 1 , F S 1 〉$ and $S 2 = 〈 T S 2 , I S 2 , F S 2 〉$ be two SVNNs. The comparison method is defined as:
(1)
If$s ¯ ( S 1 ) > s ¯ ( S 2 )$, then$S 1$is preferable to$S 2$, which is represented as$S 1 ≻ S 2$;
(2)
If$s ¯ ( S 1 ) = s ¯ ( S 2 )$and$a ¯ ( S 1 ) > a ¯ ( S 2 )$, then$S 1$is preferable to$S 2$, which is denoted by$S 1 ≻ S 2$;
(3)
If$s ¯ ( S 1 ) = s ¯ ( S 2 )$,$a ¯ ( S 1 ) = a ¯ ( S 2 )$and$c ¯ ( S 1 ) > c ¯ ( S 2 )$, then$S 1$is preferable to$S 2$, which is denoted by$S 1 ≻ S 2$;
(4)
If$s ¯ ( S 1 ) = s ¯ ( S 2 )$,$a ¯ ( S 1 ) = a ¯ ( S 2 )$and$c ¯ ( S 1 ) = c ¯ ( S 2 )$, then$S 1$is indifferent to$S 2$, which is represented by$S 1 ~ S 2$.
In this definition,$s ¯ ( S i ) = ( T S i + 1 − I S i + 1 − F S i ) / 3$,$a ¯ ( S i ) = T S i − F S i$, and$c ¯ ( S i ) = T S i ( i = 1 , 2 )$denote the score, accuracy, and certainty functions of the SVNNs, respectively.
Example 3.
Let$S 1 = 〈 0.5 , 0.6 , 0.4 〉$and$S 2 = 〈 0.5 , 0.5 , 0.4 〉$be two SVNNs. From the comparison method presented in Definition 10, we obtain$s ¯ ( S 1 ) = 1.5 3 < 1.6 3 = s ¯ ( S 2 )$. Thus,$S 2$is preferable to$S 1$, i.e.,$S 2 ≻ S 1$, which is consistent with intuition.
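The comparison method of Definition 10 amounts to a lexicographic test on the score, accuracy, and certainty functions, which can be sketched as below; all helper names are our own.

```python
def score(T, I, F):      # s(S) = (T + 1 - I + 1 - F) / 3
    return (T + 2 - I - F) / 3

def accuracy(T, I, F):   # a(S) = T - F
    return T - F

def certainty(T, I, F):  # c(S) = T
    return T

def prefer(s1, s2):
    """Return the preferred of two (T, I, F) tuples under Definition 10;
    if all three functions tie, the SVNNs are indifferent and s1 is returned."""
    for f in (score, accuracy, certainty):
        if f(*s1) != f(*s2):
            return s1 if f(*s1) > f(*s2) else s2
    return s1

# Example 3: S2 = <0.5, 0.5, 0.4> is preferred over S1 = <0.5, 0.6, 0.4>
print(prefer((0.5, 0.6, 0.4), (0.5, 0.5, 0.4)))
```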
Definition 11
([18]). Let $S 1 = 〈 T S 1 , I S 1 , F S 1 〉$ and $S 2 = 〈 T S 2 , I S 2 , F S 2 〉$ be two SVNNs. The normalized Euclidean distance between $S 1$ and $S 2$ can be defined as:
$d ( S_1 , S_2 ) = \sqrt{ \frac{1}{3} \left( ( T_{S_1} - T_{S_2} )^2 + ( I_{S_1} - I_{S_2} )^2 + ( F_{S_1} - F_{S_2} )^2 \right) }$
Example 4.
Let$S 1 = 〈 0.5 , 0.6 , 0.4 〉$and$S 2 = 〈 0.4 , 0.3 , 0.2 〉$be two SVNNs. From Definition 11, we have $d ( S_1 , S_2 ) = \sqrt{ \frac{1}{3} ( 0.1^2 + 0.3^2 + 0.2^2 ) } ≈ 0.2160$.
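Assuming the standard normalized Euclidean distance for SVNNs, the value in Example 4 can be checked numerically; the `distance` helper is an illustrative name.

```python
from math import sqrt

def distance(a, b):
    """Normalized Euclidean distance between two SVNNs given as
    (T, I, F) tuples, per Definition 11."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

# Example 4
print(round(distance((0.5, 0.6, 0.4), (0.4, 0.3, 0.2)), 4))  # 0.216
```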

## 3. Single-Valued Neutrosophic PHM Operators

In this section, the SVNPHM and WSVNSPHM operators are defined using the PHM operator and the Shapley fuzzy measure, and their corresponding properties are discussed.

#### 3.1. SVNPHM Operator

Definition 12.
Let$S i = ( T i , I i , F i ) ( i = 1 , 2 , … , n )$be a set of SVNNs that can be partitioned into categories$P l ( l = 1 , 2 , … , t )$. The SVNPHM operator is defined as
$S V N P H M p , q ( S 1 , S 2 , … , S n ) = 1 t ( ∑ l = 1 t ( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | S i p ⊗ S j q ) 1 p + q )$
where$p , q ≥ 0$,$p + q > 0$,$∑ l = 1 t | P l | = n$, and$P i ∩ P j = ∅$.$| P l |$represents the cardinality of$P l$.
Theorem 1.
Let$S i = ( T i , I i , F i ) ( i = 1 , 2 , … , n )$be a set of SVNNs. Then, the aggregated result of the SVNPHM operator is also an SVNN, i.e.,
$S V N P H M^{p , q} ( S_1 , S_2 , … , S_n ) = \left\langle 1 - \left( \prod_{l=1}^{t} \left( 1 - \left( 1 - \prod_{i=1 , j=i}^{| P_l |} \left( 1 - T_i^p T_j^q \right)^{\frac{2}{| P_l | ( | P_l | + 1 )}} \right)^{\frac{1}{p+q}} \right) \right)^{\frac{1}{t}} , \prod_{l=1}^{t} \left( 1 - \left( 1 - \prod_{i=1 , j=i}^{| P_l |} \left( 1 - ( 1 - I_i )^p ( 1 - I_j )^q \right)^{\frac{2}{| P_l | ( | P_l | + 1 )}} \right)^{\frac{1}{p+q}} \right)^{\frac{1}{t}} , \prod_{l=1}^{t} \left( 1 - \left( 1 - \prod_{i=1 , j=i}^{| P_l |} \left( 1 - ( 1 - F_i )^p ( 1 - F_j )^q \right)^{\frac{2}{| P_l | ( | P_l | + 1 )}} \right)^{\frac{1}{p+q}} \right)^{\frac{1}{t}} \right\rangle$
Proof.
Based on Definition 9, we have $S i p = 〈 T i p , 1 − ( 1 − I i ) p , 1 − ( 1 − F i ) p 〉$ and $S j q = 〈 T j q , 1 − ( 1 − I j ) q , 1 − ( 1 − F j ) q 〉$.
Then $S i p ⊗ S j q = 〈 T i p ⋅ T j q , 1 − ( 1 − I i ) p ( 1 − I j ) q , 1 − ( 1 − F i ) p ( 1 − F j ) q 〉$.
So $∑ i = 1 , j = i | P l | S i p ⊗ S j q = 〈 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) , ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) , ∏ i = 1 , j = i | P l | ( 1 − ( 1 − F i ) p ( 1 − F j ) q ) 〉$.
$2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | S i p ⊗ S j q = 〈 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) , ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ,$$∏ i = 1 , j = i | P l | ( 1 − ( 1 − F i ) p ( 1 − F j ) q ) 2 | P l | ( | P l | + 1 ) 〉$.
$( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | S i p ⊗ S j q ) 1 p + q = 〈 ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q , 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q$$1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − F i ) p ( 1 − F j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q 〉$.
Moreover, $∑ l = 1 t ( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | S i p ⊗ S j q ) 1 p + q = 〈 1 − ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) ,$ $∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) , ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − F i ) p ( 1 − F j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 〉$.
Thus,$1 t ( ∑ l = 1 t ( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | S i p ⊗ S j q ) 1 p + q ) = 〈 1 − ( ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) ) 1 t ,$ $∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t , ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − F i ) p ( 1 − F j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t 〉$.
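The closed form derived in the proof of Theorem 1 lends itself to a direct implementation; the sketch below (illustrative `svnphm` helper, inputs as `(T, I, F)` tuples) uses the idempotency property of Theorem 2 as a quick sanity check.

```python
from math import prod

def svnphm(partitions, p=1.0, q=1.0):
    """SVNPHM operator via the closed form of Theorem 1.

    partitions: one list of (T, I, F) tuples per category P_l.
    """
    t = len(partitions)
    cat_T, cat_I, cat_F = [], [], []
    for part in partitions:
        k = len(part)
        e = 2.0 / (k * (k + 1))
        pairs = [(part[i], part[j]) for i in range(k) for j in range(i, k)]
        pT = prod((1 - a[0] ** p * b[0] ** q) ** e for a, b in pairs)
        pI = prod((1 - (1 - a[1]) ** p * (1 - b[1]) ** q) ** e for a, b in pairs)
        pF = prod((1 - (1 - a[2]) ** p * (1 - b[2]) ** q) ** e for a, b in pairs)
        cat_T.append((1 - pT) ** (1 / (p + q)))      # per-category T value
        cat_I.append(1 - (1 - pI) ** (1 / (p + q)))  # per-category I value
        cat_F.append(1 - (1 - pF) ** (1 / (p + q)))  # per-category F value
    # 1/t scalar multiple (Definition 9) of the ⊕-sum over the t categories
    return (1 - prod(1 - x for x in cat_T) ** (1 / t),
            prod(cat_I) ** (1 / t),
            prod(cat_F) ** (1 / t))

# Idempotency check (Theorem 2): equal inputs aggregate to themselves
s = (0.6, 0.3, 0.2)
print([round(x, 4) for x in svnphm([[s] * 3, [s] * 2])])  # [0.6, 0.3, 0.2]
```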
Next, we present some special cases with regard to the parameters.
(1)
As $q → 0$, then Equation (7) reduces to:
(2)
When $p = 1$ and $q → 0$, Equation (7) reduces to:
(3)
When $p = q = 1$, Equation (7) becomes:
According to the operations presented in Definition 9 and Theorem 1, some properties of the SVNPHM operator are investigated in the following.
Theorem 2.
Idempotency: Let$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$be a set of SVNNs. If$S 1 = S 2 = … = S n = S = 〈 T , I , F 〉$, then$S V N P H M p , q ( S 1 , S 2 , … , S n ) = S$.
Proof.
Since $S j = S ( j = 1 , 2 , … , n )$, every product in the aggregation reduces to $S^p ⊗ S^q = S^{p+q}$. Within each category, the exponents $\frac{2}{| P_l | ( | P_l | + 1 )}$ over the $\frac{| P_l | ( | P_l | + 1 )}{2}$ pairs sum to one, so each category aggregates to $( S^{p+q} )^{\frac{1}{p+q}} = S$; averaging over the $t$ categories then gives $S V N P H M^{p , q} ( S , S , … , S ) = S$. □
Theorem 3.
Permutability: Let$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$be a set of SVNNs. If$S ˜ j = ( T ˜ j , I ˜ j , F ˜ j ) ( j = 1 , 2 , … , n )$accompanies any permutation of$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$, then,
$S V N P H M p , q ( S ˜ 1 , S ˜ 2 , … , S ˜ n ) = S V N P H M p , q ( S 1 , S 2 , … , S n )$
Proof.
Since $S ˜ j = ( T ˜ j , I ˜ j , F ˜ j ) ( j = 1 , 2 , … , n )$ is a permutation of $S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$, the same pairwise products $S i p ⊗ S j q$ appear in the summation of each category, merely in a different order; hence the two aggregated values coincide. □
Theorem 4.
Boundedness: Let$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$be a set of SVNNs. If$S − = 〈 min j { T j } , max j { I j } , max j { F j } 〉$and$S + = 〈 max j { T j } , min j { I j } , min j { F j } 〉$, then$S − ≤ S V N P H M p , q ( S 1 , S 2 , … , S n ) ≤ S +$.
Proof.
Since $min j { T j } ≤ T j ≤ max j { T j }$, we have
$( min j { T j } ) p + q ≤ T i p T j q ≤ ( max j { T j } ) p + q ⇔ 1 − ( max j { T j } ) p + q ≤ 1 − T i p T j q ≤ 1 − ( min j { T j } ) p + q$
$⇔ ∏ i = 1 , j = i | P l | ( 1 − ( max j { T j } ) p + q ) 2 | P l | ( | P l | + 1 ) ≤ ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ≤ ∏ i = 1 , j = i | P l | ( 1 − ( min j { T j } ) p + q ) 2 | P l | ( | P l | + 1 ) ⇔ 1 − ( max j { T j } ) p + q ≤ ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ≤ 1 − ( min j { T j } ) p + q ⇔ ( min j { T j } ) p + q = 1 − 1 + ( min j { T j } ) p + q ≤ 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ≤ 1 − 1 + ( max j { T j } ) p + q = ( max j { T j } ) p + q ⇔ min j { T j } = ( ( min j { T j } ) p + q ) 1 p + q ≤ ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ ( ( max j { T j } ) p + q ) 1 p + q = max j { T j } ⇔ 1 − max j { T j } ≤ 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ 1 − min j { T j } ⇔ ∏ l = 1 t ( 1 − max j { T j } ) 1 t ≤ ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t ≤ ∏ l = 1 t ( 1 − min j { T j } ) 1 t ⇔ 1 − max j { T j } ≤ ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t ≤ 1 − min j { T j } ⇔ min j { T j } ≤ 1 − ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − T i p T j q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t ≤ max j { T j }$
Moreover, since $min j { I j } ≤ I j ≤ max j { I j }$, we have $1 − max j { I j } ≤ 1 − I j ≤ 1 − min j { I j }$
$( 1 − max j { I j } ) p ≤ ( 1 − I j ) p ≤ ( 1 − min j { I j } ) p ⇔ ( 1 − max j { I j } ) p + q ≤ ( 1 − I i ) p ( 1 − I j ) q ≤ ( 1 − min j { I j } ) p + q ⇔ 1 − ( 1 − min j { I j } ) p + q ≤ 1 − ( 1 − I i ) p ( 1 − I j ) q ≤ 1 − ( 1 − max j { I j } ) p + q ⇔ ∏ i = 1 , j = i | P l | ( 1 − ( 1 − min j { I j } ) p + q ) 2 | P l | ( | P l | + 1 ) ≤ ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ≤ ∏ i = 1 , j = i | P l | ( 1 − ( 1 − max j { I j } ) p + q ) 2 | P l | ( | P l | + 1 ) ⇔ 1 − ( 1 − min j { I j } ) p + q ≤ ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ≤ 1 − ( 1 − max j { I j } ) p + q ⇔ ( 1 − max j { I j } ) p + q ≤ 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ≤ ( 1 − min j { I j } ) p + q ⇔ ( ( 1 − max j { I j } ) p + q ) 1 p + q ≤ ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ ( ( 1 − min j { I j } ) p + q ) 1 p + q ⇔ 1 − max j { I j } ≤ ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ 1 − min j { I j } ⇔ min j { I j } ≤ 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ max j { I j }$
$⇔ 1 − max j { I j } ≤ ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ 1 − min j { I j } ⇔ min j { I j } ≤ 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ≤ max j { I j } ⇔ ∏ l = 1 t ( min j { I j } ) 1 t ≤ ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t ≤ ∏ l = 1 t ( max j { I j } ) 1 t ⇔ min j { I j } ≤ ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − I i ) p ( 1 − I j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t ≤ max j { I j }$
Similarly, we can get $min j { F j } ≤ ∏ l = 1 t ( 1 − ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − F i ) p ( 1 − F j ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ) 1 t ≤ max j { F j }$.
Based on the comparison method in Definition 10, the following results can be obtained as: $min j { T j } + 1 − max j { I j } + 1 − max j { F j } 3 ≤ s ¯ ( S V N P H M p , q ( S 1 , S 2 , … , S n ) ) ≤ max j { T j } + 1 − min j { I j } + 1 − min j { F j } 3$, i.e., $s ¯ ( S − ) ≤ s ¯ ( S V N P H M p , q ( S 1 , S 2 , … , S n ) ) ≤ s ¯ ( S + )$.
Thus, $S − ≤ S V N P H M p , q ( S 1 , S 2 , … , S n ) ≤ S +$ holds. □

#### 3.2. WSVNSPHM Operator

Since the importance of each input value varies according to the decision-making situation, we propose a WSVNSPHM operator in this subsection.
Definition 13.
Suppose$S i = ( T i , I i , F i ) ( i = 1 , 2 , … , n )$is a set of SVNNs that can be divided into categories$P l ( l = 1 , 2 , … , t )$, and$τ i ( μ , P l )$is the Shapley fuzzy measure on$P l$for$S i = ( T i , I i , F i ) ( i = 1 , 2 , … , n )$in the$l$-th partition. The WSVNSPHM operator is defined as:
$W S V N S P H M p , q ( S 1 , S 2 , … , S n ) = 1 t ( ∑ l = 1 t ( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | ( τ i ( μ , P l ) S i ) p ⊗ ( τ j ( μ , P l ) 1 − τ i ( μ , P l ) S j ) q ) 1 p + q )$
where$p , q ≥ 0$,$p + q > 0$,$∑ l = 1 t | P l | = n$, and$P i ∩ P j = ∅$. $| P l |$represents the cardinality of$P l$.
Theorem 5.
Let$S i = ( T i , I i , F i ) ( i = 1 , 2 , … , n )$be a set of SVNNs. The results derived from the WSVNSPHM operator also produce an SVNN, i.e.,
Proof.
Since $τ i ( μ , P l ) S i = 〈 1 − ( 1 − T i ) τ i ( μ , P l ) , I i τ i ( μ , P l ) , F i τ i ( μ , P l ) 〉$ and $τ j ( μ , P l ) 1 − τ i ( μ , P l ) S j = 〈 1 − ( 1 − T j ) τ j ( μ , P l ) 1 − τ i ( μ , P l ) , I j τ j ( μ , P l ) 1 − τ i ( μ , P l ) , F j τ j ( μ , P l ) 1 − τ i ( μ , P l ) 〉$, then $( τ i ( μ , P l ) S i ) p ⊗ ( τ j ( μ , P l ) 1 − τ i ( μ , P l ) S j ) q = 〈 ( 1 − ( 1 − T i ) τ i ( μ , P l ) ) p ⋅ ( 1 − ( 1 − T j ) τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ,$ $1 − ( 1 − I i τ i ( μ , P l ) ) p ⋅ ( 1 − I j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q , 1 − ( 1 − F i τ i ( μ , P l ) ) p ⋅ ( 1 − F j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q 〉$, and $∑ i = 1 , j = i | P l | ( τ i ( μ , P l ) S i ) p ⊗ ( τ j ( μ , P l ) 1 − τ i ( μ , P l ) S j ) q = 〈 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − ( 1 − T i ) τ i ( μ , P l ) ) p ⋅ ( 1 − ( 1 − T j ) τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) ,$ $∏ i = 1 , j = i | P | ( 1 − ( 1 − I i τ i ( μ , P l ) ) p ⋅ ( 1 − I j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) , ∏ i = 1 , j = i | P | ( 1 − ( 1 − F i τ i ( μ , P l ) ) p ⋅ ( 1 − F j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 〉$.
So $2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | ( τ i ( μ , P l ) S i ) p ⊗ ( τ j ( μ , P l ) 1 − τ i ( μ , P l ) S j ) q = 〈 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − ( 1 − T i ) τ i ( μ , P l ) ) p ⋅ ( 1 − ( 1 − T j ) τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 2 | P l | ( | P l | + 1 ) ,$$∏ i = 1 , j = i | P | ( 1 − ( 1 − I i τ i ( μ , P l ) ) p ⋅ ( 1 − I j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 2 | P l | ( | P l | + 1 ) , ∏ i = 1 , j = i | P | ( 1 − ( 1 − F i τ i ( μ , P l ) ) p ⋅ ( 1 − F j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 2 | P l | ( | P l | + 1 ) 〉$. $( 2 | P l | ( | P l | + 1 ) ∑ i = 1 , j = i | P l | ( τ i ( μ , P l ) S i ) p ⊗ ( τ j ( μ , P l ) 1 − τ i ( μ , P l ) S j ) q ) 1 p + q = 〈 ( 1 − ∏ i = 1 , j = i | P l | ( 1 − ( 1 − ( 1 − T i ) τ i ( μ , P l ) ) p ⋅ ( 1 − ( 1 − T j ) τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q ,$ $1 − ( 1 − ∏ i = 1 , j = i | P | ( 1 − ( 1 − I i τ i ( μ , P l ) ) p ⋅ ( 1 − I j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q , 1 − ( 1 − ∏ i = 1 , j = i | P | ( 1 − ( 1 − F i τ i ( μ , P l ) ) p ⋅ ( 1 − F j τ j ( μ , P l ) 1 − τ i ( μ , P l ) ) q ) 2 | P l | ( | P l | + 1 ) ) 1 p + q 〉$.
Then, summing the category values over $l = 1 , 2 , … , t$ and multiplying by $\frac{1}{t}$, exactly as in the proof of Theorem 1, yields an SVNN. □
Some special cases of the WSVNSPHM operator are presented below:
(1)
As $q → 0$, Equation (12) reduces to:
(2)
When $p = 1$ and $q → 0$, Equation (12) reduces to
(3)
When $p = q = 1$, Equation (12) becomes
The properties of the WSVNSPHM operator can be obtained using the following theorems.
Theorem 6.
Idempotency: Let$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$be a set of SVNNs. If$S 1 = S 2 = … = S n = S = 〈 T , I , F 〉$, then$W S V N S P H M p , q ( S 1 , S 2 , … , S n ) = S$.
Theorem 7.
Permutability: Let$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$be a set of SVNNs. If$S ˜ j = ( T ˜ j , I ˜ j , F ˜ j ) ( j = 1 , 2 , … , n )$accompanies any permutation of$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$, then,$W S V N S P H M p , q ( S ˜ 1 , S ˜ 2 , … , S ˜ n ) = W S V N S P H M p , q ( S 1 , S 2 , … , S n )$.
Theorem 8.
Boundedness: Let$S j = 〈 T j , I j , F j 〉 ( j = 1 , 2 , … , n )$be a set of SVNNs. If$S − = 〈 min j { T j } , max j { I j } , max j { F j } 〉$and$S + = 〈 max j { T j } , min j { I j } , min j { F j } 〉$, then$S − ≤ W S V N S P H M p , q ( S 1 , S 2 , … , S n ) ≤ S +$.
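Definition 13 can likewise be transcribed using the operations of Definition 9. The sketch below takes the per-category Shapley values as given; all helper names and the sample inputs are our own.

```python
def scale(lam, s):   # λS, Definition 9 (1)
    T, I, F = s
    return (1 - (1 - T) ** lam, I ** lam, F ** lam)

def power(s, lam):   # S^λ, Definition 9 (2)
    T, I, F = s
    return (T ** lam, 1 - (1 - I) ** lam, 1 - (1 - F) ** lam)

def plus(a, b):      # S1 ⊕ S2, Definition 9 (3)
    return (a[0] + b[0] - a[0] * b[0], a[1] * b[1], a[2] * b[2])

def times(a, b):     # S1 ⊗ S2, Definition 9 (4)
    return (a[0] * b[0],
            a[1] + b[1] - a[1] * b[1],
            a[2] + b[2] - a[2] * b[2])

def wsvnsphm(partitions, weights, p=1.0, q=1.0):
    """WSVNSPHM operator (Definition 13): partitions[l] lists the SVNNs of
    category P_l, weights[l] their Shapley values tau_i(mu, P_l)."""
    t = len(partitions)
    acc = None
    for part, tau in zip(partitions, weights):
        k = len(part)
        inner = None
        for i in range(k):
            for j in range(i, k):
                term = times(power(scale(tau[i], part[i]), p),
                             power(scale(tau[j] / (1 - tau[i]), part[j]), q))
                inner = term if inner is None else plus(inner, term)
        cat = power(scale(2.0 / (k * (k + 1)), inner), 1.0 / (p + q))
        acc = cat if acc is None else plus(acc, cat)
    return scale(1.0 / t, acc)

w = wsvnsphm([[(0.7, 0.2, 0.1), (0.5, 0.3, 0.3)], [(0.6, 0.2, 0.2)]],
             [[0.3, 0.2], [0.5]])
print(w)  # an SVNN: all three components stay in [0, 1]
```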

## 4. Single-Valued Neutrosophic MCDM Method with Incomplete Weight Information

Suppose $S = { S 1 , S 2 , … , S n }$ is a group of candidates and $C = { c 1 , c 2 , … , c m }$ is the set of the corresponding selection criteria. Then $R = ( S i j ) n × m$ is the single-valued neutrosophic decision matrix, where $S i j = 〈 T i j , I i j , F i j 〉 ( i = 1 , 2 , … , n ; j = 1 , 2 , … , m )$ is the assessment provided by the DMs for candidate $S i$ on criterion $c j$ in the form of an SVNN. Based on the relationships among the criteria, the criteria can be partitioned into $t$ categories $P l ( l = 1 , 2 , … , t )$ where $P i ∩ P j = ∅$. If all the criteria are correlated with each other, then $t = 1$ and the Shapley fuzzy measure serves as the weight of the criteria. Further, if the Shapley fuzzy measure of the criteria is known, the corresponding aggregation operators can be used directly to obtain the aggregated values. If it is partly or fully unknown, then the Shapley fuzzy measure of the criteria must be determined first.
The flowchart of the proposed method is shown in Figure 1 and the steps to finding the optimal candidate(s) are as follows.
Step 1. Construct and normalize decision matrix
The DMs evaluate each candidate against the criteria and construct the decision matrix. MCDM problems generally involve both benefit-type and cost-type criteria. Benefit-type criteria require no normalization, whereas cost-type criteria should be transformed into the associated benefit type as:
where $( S i j ) c = 〈 F i j , 1 − I i j , T i j 〉$ is the complement of $S i j$.
Then, the normalized decision matrix $R ˜ = ( S ˜ i j ) n × m$ can be obtained.
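Step 1 can be sketched as below; the `normalize` helper and the sample matrix are illustrative, not part of the paper.

```python
def normalize(matrix, cost_criteria):
    """Step 1: replace each cost-type rating by its SVNN complement
    <F, 1 - I, T>; matrix[i][j] is the (T, I, F) rating of candidate i
    on criterion j, and cost_criteria is the set of cost-type columns."""
    return [[(F, 1 - I, T) if j in cost_criteria else (T, I, F)
             for j, (T, I, F) in enumerate(row)]
            for row in matrix]

R = [[(0.6, 0.2, 0.3), (0.5, 0.4, 0.1)]]   # one candidate, two criteria
print(normalize(R, cost_criteria={0}))     # column 0 is complemented
```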
Step 2. Determine closeness coefficients
Let $S ˜ + = ( S ˜ 1 + , S ˜ 2 + , … , S ˜ m + )$ and $S ˜ − = ( S ˜ 1 − , S ˜ 2 − , … , S ˜ m − )$ be the positive and negative ideal solutions, respectively, where $S ˜ j + = ( max i T ˜ i j , min i I ˜ i j , min i F ˜ i j )$ and $S ˜ j − = ( min i T ˜ i j , max i I ˜ i j , max i F ˜ i j )$$( i = 1 , 2 , … , n ;$ $j = 1 , 2 , … , m )$. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) [55] is one of the key techniques for dealing with MCDM problems, and it is intuitive and simple: it ranks candidates by the shortest distance from the positive ideal solution (PIS) and the farthest distance from the negative ideal solution (NIS). The closeness coefficient of each assessment from the PIS can then be found as follows:
$D i j + ( S ˜ i j , S ˜ + ) = d i j ( S ˜ i j , S ˜ + ) d i j ( S ˜ i j , S ˜ + ) + d i j ( S ˜ i j , S ˜ − ) ( i = 1 , 2 , … , n ; j = 1 , 2 , … , m ) ,$
where the distances $d i j ( S ˜ i j , S ˜ + )$ and $d i j ( S ˜ i j , S ˜ − )$ can be obtained by using Equation (5).
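Step 2 can be sketched as below, assuming the normalized Euclidean distance of Definition 11 plays the role of Equation (5); the `closeness` helper is an illustrative name.

```python
from math import sqrt

def closeness(matrix):
    """Step 2: closeness coefficient D+ of every rating from the positive
    ideal solution, computed per criterion from the column-wise PIS/NIS."""
    m = len(matrix[0])
    pis = [(max(r[j][0] for r in matrix), min(r[j][1] for r in matrix),
            min(r[j][2] for r in matrix)) for j in range(m)]
    nis = [(min(r[j][0] for r in matrix), max(r[j][1] for r in matrix),
            max(r[j][2] for r in matrix)) for j in range(m)]

    def d(a, b):  # normalized Euclidean distance between SVNN tuples
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / 3.0)

    out = []
    for row in matrix:
        vals = []
        for j in range(m):
            dp, dn = d(row[j], pis[j]), d(row[j], nis[j])
            vals.append(dp / (dp + dn) if dp + dn > 0 else 0.0)
        out.append(vals)
    return out

# Two candidates, one criterion: the first sits at the PIS, the second at the NIS
print(closeness([[(0.9, 0.1, 0.1)], [(0.5, 0.5, 0.5)]]))  # [[0.0], [1.0]]
```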
Step 3. Determine Shapley fuzzy measures
According to TOPSIS [55], the smaller the value of $D i j + ( S ˜ i j , S ˜ + )$, the better $S ˜ i j$ is. If the weight of the criteria is partly known, then a model based on the fuzzy measure can be constructed as:
where $τ c j ( μ , C )$ denotes the weight of criterion $c j$, and $G j$ represents the weight information.
Next, the fuzzy measure and the corresponding Shapley fuzzy measure are obtained by solving linear programming model (18).
Step 4. Compute global aggregation values
Using the WSVNSPHM operator, i.e., Equation (12), the global aggregation value $ς i ( i = 1 , 2 , … , n )$ of candidate $S i ( i = 1 , 2 , … , n )$ can be obtained.
Step 5. Find values of score, accuracy, and certainty
Based on Definition 10, the values of score $s ¯ ( ς i )$, accuracy $a ¯ ( ς i )$, and certainty $c ¯ ( ς i )$ of $S i$ ($i = 1 , 2 , … , n$) can be achieved.
Step 6. Rank candidates
According to Step 5, all candidates $S i$ $( i = 1 , 2 , … , n )$ are ranked, and the best selected.

## 5. Example

Hww, a large telecommunication technology player based in China, produces and sells telecommunication equipment. To enhance the competitiveness of its products, the company intends to replace an existing electronic components supplier to improve the product quality. Thus, the decision-making department has to choose a suitable supplier from several candidates. Following preliminary surveys, five suppliers are considered, denoted by $S i ( i = 1 , 2 , … , 5 )$. The assessment values are provided in the form of SVNNs with respect to five factors, namely: $c 1$: cost, $c 2$: quality, $c 3$: service performance, $c 4$: supplier’s profile, and $c 5$: risk. From the relationship amongst the five criteria, these criteria can be partitioned into two categories: $P 1 = { c 1 , c 2 , c 5 }$ and $P 2 = { c 3 , c 4 }$. Only the ranges of the weights of these criteria are known, with . The single-valued neutrosophic decision matrix $R = ( S i j ) 5 × 5$ is constructed as presented in Table 1.

#### 5.1. Decision-Making Process

The decision-making process, using the proposed method, is as follows.
Step 1. Construct and normalize decision matrix
The DMs assess the values as SVNNs, and criteria $c 1 , c 2$, and $c 5$ belong to the cost type. The normalized decision matrix $R ˜ = ( S ˜ <$