Article

Measures of Probabilistic Neutrosophic Hesitant Fuzzy Sets and the Application in Reducing Unnecessary Evaluation Processes

by
Songtao Shao
1,2,†,‡ and
Xiaohong Zhang
1,3,*,‡
1
Department of Mathematics, School of Arts and Sciences, Shaanxi University of Science & Technology, Xi’an 710021, China
2
College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
3
Department of Mathematics, College of Arts and Sciences, Shanghai Maritime University, Shanghai 201306, China
*
Author to whom correspondence should be addressed.
Current address: Xi’an Weiyang University Park, Xi’an 710016, China.
These authors contributed equally to this work.
Mathematics 2019, 7(7), 649; https://doi.org/10.3390/math7070649
Submission received: 20 June 2019 / Revised: 15 July 2019 / Accepted: 16 July 2019 / Published: 19 July 2019
(This article belongs to the Special Issue New Challenges in Neutrosophic Theory and Applications)

Abstract: Distance and similarity measures have been applied in various multi-criteria decision-making (MCDM) environments, such as talent selection and fault diagnosis, and several improved measures have been proposed. However, hesitancy is reflected in all aspects of life, so hesitant information needs to be considered in the measures; doing so effectively avoids the loss of fuzzy information. Fuzzy information alone, however, reflects only subjective factors, a shortcoming that can lead to inaccurate decision conclusions. Thus, based on the definition of the probabilistic neutrosophic hesitant fuzzy set (PNHFS), an extended theory of fuzzy sets, we establish basic definitions of distance, similarity and entropy measures for PNHFSs, and then study the interconnections among these measures. Simultaneously, a novel measure model is established on PNHFSs and compared with some existing measures. Finally, we display its applicability to investment problems, where it can be utilized to avoid redundant evaluation processes.

1. Introduction

The neutrosophic set (NS) [1,2], as a more general form of the fuzzy set (FS) [3], provides a simple method for describing uncertain information in the MCDM environment. An NS attaches three independent membership functions to each element: a truth-membership function T(x), an indeterminacy-membership function I(x) and a falsity-membership function F(x). To better connect the theory with practical problems, Wang et al. proposed the single-valued neutrosophic set (SVNS) [4,5,6] and the interval neutrosophic set (INS) [7,8,9] by restricting the ranges of the membership functions, which encouraged the application of FS theory. As the information in MCDM problems grew more complex, SVNSs and INSs were applied to several different types of problems [10,11,12,13,14,15,16]. When making a decision, some decision makers (DMs) may hesitate among truth membership, indeterminacy membership and falsity membership. Thus, different forms of NS have been proposed, such as the single-valued neutrosophic hesitant FS (SVNHFS) [17,18,19], the multi-valued NS (MVNS) [20,21,22,23], several types of linguistic NS [24,25,26] and other variants [27,28,29,30,31,32]. Some experts applied these sets to algebraic systems [33,34,35,36,37,38,39,40], which clarified that the extended NSs are effective tools for describing uncertain and imprecise information, including imperfect, fuzzy and indeterminate information. Then, based on the different requirements of practical applications, the axioms of NSs have been investigated. The most important issue is how to minimize the loss of information when uncertain problems are resolved.
Using truth-membership, indeterminacy-membership and falsity-membership degrees to depict fuzziness expresses only subjective uncertainty. Statistical data, by contrast, can describe the occurrence frequency of a membership degree from an objective point of view. The elements that determine an accurate MCDM evaluation conclusion therefore include both fuzzy and statistical information. The DMs can express the subjective part by utilizing NSs, SVNSs, SVNHFSs and so on; as the amount of information increases, the impact of statistical information on decision outcomes will increase.
Xu et al. proposed the hesitant probabilistic fuzzy set [41] and researched its basic operations. Next, Hao et al. [42] constructed the probabilistic dual hesitant fuzzy set and applied it to risk evaluation. Zhai et al. [43] introduced the probabilistic interval-valued intuitionistic hesitant fuzzy set and investigated its distance, similarity and entropy measures. These theories have since been widely studied and applied to solve MCDM problems [44,45,46,47]. However, in some decision problems the decision makers give indeterminacy-membership hesitant degrees together with the corresponding probability information. To handle this situation, Shao et al. [48] and Peng et al. [49] established the probabilistic single-valued neutrosophic hesitant fuzzy set (PSVNHFS, or PNHFS) and the probabilistic multi-valued neutrosophic set (PMVNS), respectively. Shao et al. investigated the basic operational laws of PNHFSs and their characteristics, and then established probabilistic neutrosophic hesitant fuzzy weighted averaging (geometric) operators to fuse uncertain information. Peng et al. presented a new QUALIFLEX method to fuse and analyze uncertain information. This new form of expression is conducive to reducing the loss of uncertain information and improving applications in MCDM environments.
Distance, similarity and entropy measures are three effective tools for solving MCDM problems. As a key step in translating fuzzy information into MCDM, different types of distance and similarity measures have been investigated for NSs [50,51], SVNSs [52,53] and SVNHFSs [54,55]. On the other hand, some ranking methods and MCDM approaches based on measures of linguistic NSs have been established and utilized in various practical problems [56,57]. A similarity measure expresses the degree of resemblance between items, whereas a distance measure focuses on their divergence; both are effective tools for expressing the relationship between items.
The present notions of measures include the three independent membership degrees (truth, indeterminacy and falsity) of fuzzy information, which can effectively reduce the loss of information. Researchers study such measures to improve the exactness and effectiveness of MCDM. Building on the inner construction of present measure formulae, we establish a novel distance measure and a novel similarity measure. Sahin [58] proposed the Hamming distance measure of SVNHFSs as follows:
$$D_{SVNHFS}(N_1, N_2) = \frac{1}{3}\sum_{x \in X}\left(\frac{1}{l}\sum_{i=1}^{l}\left|\alpha_{N_1}^{\mu(i)}(x) - \alpha_{N_2}^{\mu(i)}(x)\right| + \frac{1}{p}\sum_{i=1}^{p}\left|\beta_{N_1}^{\mu(i)}(x) - \beta_{N_2}^{\mu(i)}(x)\right| + \frac{1}{q}\sum_{i=1}^{q}\left|\gamma_{N_1}^{\mu(i)}(x) - \gamma_{N_2}^{\mu(i)}(x)\right|\right),$$
in which $\alpha$, $\beta$ and $\gamma$ are the truth-membership, indeterminacy-membership and falsity-membership degrees of $x_i \in X$ to a situation, and $N_1$ and $N_2$ are SVNHFSs. However, this measure has drawbacks that deserve attention. For instance, the truth-membership and falsity-membership degrees are utilized to describe the DMs' determination about x belonging to the situation A: according to the DMs' associated information about x, $\alpha$ and $\gamma$ are given at the same time when judgements are made. However, $\beta$ expresses the vagueness of what the DMs do not know about x, which is distinct from $\alpha$ and $\gamma$. It is therefore not logical, in any MCDM problem, to characterize these degrees with the same formula and equal weight in a measure function.
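As a concrete illustration, Sahin's Hamming distance above can be sketched in Python; the list-of-dicts data layout and the helper name `hamming_distance_svnhfs` are our own, and the hesitant sets are assumed pre-sorted and padded to equal lengths per component:

```python
# Sketch of Sahin's Hamming distance for SVNHFSs.
# Each SVNHFS is a list (one entry per x in X) of dicts holding sorted,
# equal-length tuples under 't' (truth), 'i' (indeterminacy), 'f' (falsity).

def hamming_distance_svnhfs(n1, n2):
    total = 0.0
    for e1, e2 in zip(n1, n2):
        term = 0.0
        for key in ('t', 'i', 'f'):
            a, b = e1[key], e2[key]
            # average absolute difference within one membership component
            term += sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        total += term / 3
    return total

# One-element universe X = {x}:
n1 = [{'t': (0.3, 0.5), 'i': (0.2,), 'f': (0.4,)}]
n2 = [{'t': (0.4, 0.6), 'i': (0.3,), 'f': (0.2,)}]
d = hamming_distance_svnhfs(n1, n2)   # (1/3)(0.1 + 0.1 + 0.2) = 0.4/3
```

Note that all three components enter the formula symmetrically, which is exactly the weakness discussed above.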
Due to the complexity of uncertain information, the evaluation information given by the decision makers may be fused. For example, in a voting model, $T_A$, $I_A$ and $F_A$ describe the proportions of pros, abstentions and cons, respectively. Because of subjective factors, a decision maker may not be sure of being fully in favor or fully opposed, so some abstentions lean toward voting in favor; this share is expressed by $T_I$. Similarly, $I_F$ describes the fused information between abstention and opposition, and $T_F$ the fused information between approval and opposition. $T_1$, $F_1$ and $I_1$ represent the shares that are fully in favor, totally opposed and completely abstaining. This type of information can then be handled with neutrosophic hesitant fuzzy theory: $T_A = T_1 + T_F + T_I$, $F_A = F_1 + T_F + I_F$ and $I_A = I_1 + T_I + I_F$.
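A quick numerical check of this voting decomposition, with made-up vote shares (all values below are hypothetical):

```python
# Hypothetical vote shares for the decomposition T_A = T1 + TF + TI, etc.
T1, F1, I1 = 0.30, 0.20, 0.10   # fully in favor / fully opposed / fully abstaining
TI = 0.15                        # fused favor-abstention share
IF = 0.10                        # fused abstention-opposition share
TF = 0.05                        # fused favor-opposition share

TA = T1 + TF + TI                # overall truth (pros) proportion
FA = F1 + TF + IF                # overall falsity (cons) proportion
IA = I1 + TI + IF                # overall indeterminacy (abstention) proportion
```

With these shares, TA = 0.50, FA = 0.35 and IA = 0.35; the fused terms $T_F$, $T_I$ and $I_F$ are each counted in two of the aggregate proportions, which is why $T_A + I_A + F_A$ may exceed 1.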
The whole uncertainty domain is separated into vagueness, non-vagueness and hesitancy. The non-vagueness sub-domain includes the truth-membership and falsity-membership regions, whereas the vagueness sub-domain is formed by the indeterminacy-membership region. The uncertainty in the non-vagueness sub-domain can be expressed as an undetermined attribute. The indeterminacy indicates that there are various thoughts about x belonging to the situation A, none of which can be asserted with certainty. The hesitancy sub-domain describes the hesitancy degrees of the DMs. Thus, it is appropriate to explore and resolve uncertain information in terms of vagueness, non-vagueness and hesitant degrees; this is what distinguishes the novel measures from previous ones.
In view of the above, our main aim is to complete the fuzzy description system based on the PNHFS. By holding more uncertainty parameters, uncertain information is expressed and, at the same time, divided more clearly; the detailed introduction is given in Section 2. The second aim is to propose novel distance, similarity and entropy measures; this work is carried out in Section 3. We expect this new approach to improve the accuracy of practical MCDM results. Section 4 describes the details and presents an application case on reducing excess re-evaluation. Finally, the discussion and future research are presented, followed by the Conclusions section.

2. Preliminaries

Firstly, the basic theoretical knowledge used in this paper is reviewed. For convenience, SVNHFS is simply called the neutrosophic hesitant fuzzy set (NHFS) in this work.

2.1. Several Types of NS

Definition 1.
Suppose X is a non-empty reference set. An SVNS is described by the following mathematical formula [4]:

$$N = \{\langle x, \tilde{t}(x), \tilde{i}(x), \tilde{f}(x)\rangle \mid x \in X\},$$

where $\tilde{t}(x), \tilde{i}(x), \tilde{f}(x) \in [0,1]$. The functions $\tilde{t}, \tilde{i}, \tilde{f}: X \to [0,1]$ denote three different types of degrees: $\tilde{t}$ describes the truth-membership degree, $\tilde{i}$ the indeterminacy-membership degree and $\tilde{f}$ the falsity-membership degree, and they satisfy the condition $0 \le \tilde{t}(x) + \tilde{i}(x) + \tilde{f}(x) \le 3$.
Definition 2.
Suppose that X is a non-empty reference set; an NHFS on X is based on three functions from X that return three subsets of [0, 1]. Ye proposed the NHFS with the following mathematical form [18]:

$$N = \{\langle x, T(x), I(x), F(x)\rangle \mid x \in X\},$$

where $T(x)$, $I(x)$ and $F(x)$ are three subsets of $[0,1]$. Moreover, the single-valued neutrosophic hesitant fuzzy element (SVNHFE) is defined: if $T(x)$, $I(x)$ and $F(x)$ are three finite subsets, then the SVNHFE can be expressed by

$$\langle(\alpha_1(x), \alpha_2(x), \ldots, \alpha_{L(T)}(x)), (\beta_1(x), \beta_2(x), \ldots, \beta_{L(I)}(x)), (\gamma_1(x), \gamma_2(x), \ldots, \gamma_{L(F)}(x))\rangle = \langle T(x), I(x), F(x)\rangle,$$

in which $L(T)$, $L(I)$ and $L(F)$ are positive integers giving the number of values in $T(x)$, $I(x)$ and $F(x)$, respectively. Simultaneously, $\alpha_a$ ($a \in \{1, 2, \ldots, L(T)\}$) describes the $a$th possible truth-membership degree, $\beta_b$ ($b \in \{1, 2, \ldots, L(I)\}$) the $b$th possible indeterminacy-membership degree, and $\gamma_c$ ($c \in \{1, 2, \ldots, L(F)\}$) the $c$th possible falsity-membership degree of $x \in X$ to a situation. The restrictions of an SVNHFS are listed below:

$$0 \le \alpha_a, \beta_b, \gamma_c \le 1 \quad \text{and} \quad 0 \le \alpha^+ + \beta^+ + \gamma^+ \le 3, \qquad \alpha^+ = \max\{\alpha_a\}, \; \beta^+ = \max\{\beta_b\}, \; \gamma^+ = \max\{\gamma_c\}, \quad \text{for } x \in X.$$
After that, single-valued neutrosophic hesitant fuzzy measures, correlation coefficients and aggregation operators on SVNHFSs were investigated to solve MCDM problems, medical diagnoses and so on.

2.2. The Distance and Similarity Measures for SVNHFSs

Definition 3.
Let $D: NHFS(X) \times NHFS(X) \to [0,1]$ be a mapping, where "×" denotes the Cartesian product. Then D is a distance measure of NHFSs if it satisfies the following four conditions [58]: for all $A, B, C \in NHFS(X)$,
(1)
$0 \le D(A, B) \le 1$;
(2)
$D(A, B) = 0$ iff $A = B$;
(3)
$D(A, B) = D(B, A)$;
(4)
If $A \subseteq B \subseteq C$, then $D(A, C) \ge D(A, B)$ and $D(A, C) \ge D(B, C)$.
Definition 4.
Let $S: NHFS(X) \times NHFS(X) \to [0,1]$ be a mapping, where "×" denotes the Cartesian product. Then S is a similarity measure if it satisfies the following four axioms [58]: for all $A, B, C \in NHFS(X)$,
(1)
$0 \le S(A, B) \le 1$;
(2)
$S(A, B) = 1$ iff $A = B$;
(3)
$S(A, B) = S(B, A)$;
(4)
If $A \subseteq B \subseteq C$, then $S(A, C) \le S(A, B)$ and $S(A, C) \le S(B, C)$.
Definition 5.
A mapping $E: NS(X) \to [0,1]$ is called an entropy on $NS(X)$ if it satisfies the following properties [51]: for all $A, B \in NS(X)$,
(1)
$E(A) = 0$ if A is a crisp set;
(2)
$E(A) = 1$ iff $A = \{\langle 0.5, 0.5, 0.5\rangle\}$;
(3)
$E(A) \le E(B)$ if A is crisper than B;
(4)
$E(A) = E(A^c)$, where $A^c$ is the complement of A.

3. The Distance and Similarity Measures of PSVNHFS

This part builds on the probabilistic single-valued neutrosophic hesitant fuzzy set (PSVNHFS), first proposed by Shao et al. [48] as an extended theory of FS. The PSVNHFS describes uncertainty better by involving both objectively and subjectively uncertain information. The vote set, meanwhile, was first introduced by Zhai et al. [43]. Thus, according to the division into certain opinion, indeterminate opinion and contradictory (vague) opinion, the inference set, a new kind of vote set, is constructed and applied to the NHFS. Finally, the distance and similarity measures are introduced and investigated.
Definition 6.
Suppose that X is a finite reference set. A PNHFS on X is denoted by the following mathematical symbol [48]:

$$N = \{\langle x, T(x)|P^T(x), I(x)|P^I(x), F(x)|P^F(x)\rangle \mid x \in X\}.$$

Here $T(x)|P^T(x)$, $I(x)|P^I(x)$ and $F(x)|P^F(x)$ are the three components of N, in which $T(x)$, $I(x)$ and $F(x)$ are the possible truth-membership, indeterminacy-membership and falsity-membership hesitant functions of x, respectively, and $P^T(x)$, $P^I(x)$ and $P^F(x)$ carry the probabilistic information of the factors in $T(x)$, $I(x)$ and $F(x)$, respectively. This subjective and objective information satisfies the following requirements:

$$\alpha_a, \beta_b, \gamma_c \in [0,1], \quad 0 \le \alpha^+ + \beta^+ + \gamma^+ \le 3; \qquad P_a^T, P_b^I, P_c^F \in [0,1]; \qquad \sum_{a=1}^{L(T)} P_a^T \le 1, \; \sum_{b=1}^{L(I)} P_b^I \le 1, \; \sum_{c=1}^{L(F)} P_c^F \le 1,$$

where $\alpha_a \in T(x)$, $\beta_b \in I(x)$, $\gamma_c \in F(x)$, $\alpha^+ = \max\{\alpha_a\}$, $\beta^+ = \max\{\beta_b\}$, $\gamma^+ = \max\{\gamma_c\}$, $P_a^T \in P^T$, $P_b^I \in P^I$ and $P_c^F \in P^F$. The symbols $L(T)$, $L(I)$ and $L(F)$ denote the cardinalities of the components $T(x)|P^T(x)$, $I(x)|P^I(x)$ and $F(x)|P^F(x)$, respectively.
Generally, a probabilistic neutrosophic hesitant fuzzy number (PNHFN) of x is expressed by the mathematical symbol

$$N = \langle(\alpha_1|P_1^T, \alpha_2|P_2^T, \ldots, \alpha_{L(T)}|P_{L(T)}^T), (\beta_1|P_1^I, \beta_2|P_2^I, \ldots, \beta_{L(I)}|P_{L(I)}^I), (\gamma_1|P_1^F, \gamma_2|P_2^F, \ldots, \gamma_{L(F)}|P_{L(F)}^F)\rangle = \{T|P^T, I|P^I, F|P^F\}.$$
Definition 7.
If X is a finite reference set and N is a PNHFN, then $\tilde{N}$ is the normalized PNHFN [49]:

$$\tilde{N} = \{\langle T(x)|\tilde{P}^T(x), I(x)|\tilde{P}^I(x), F(x)|\tilde{P}^F(x)\rangle\},$$

where $\tilde{P}_a^T = P_a^T / \sum_a P_a^T$, $\tilde{P}_b^I = P_b^I / \sum_b P_b^I$ and $\tilde{P}_c^F = P_c^F / \sum_c P_c^F$.
Example 1.
If $X = \{x\}$ is a reference set, a PNHFS can be denoted by

$$N = \{\langle x, \{0.5|0.3, 0.6|0.5\}, \{0.4|0.4, 0.6|0.6\}, \{0.3|0.6\}\rangle\}.$$

For each membership function, the PNHFN $N = \langle\{0.5|0.3, 0.6|0.5\}, \{0.4|0.4, 0.6|0.6\}, \{0.3|0.6\}\rangle$ independently denotes the whole uncertain area with three probabilistic membership functions, where $\sum_{a=1}^{L(T)} P_a^T = 0.3 + 0.5 = 0.8$, $\sum_{b=1}^{L(I)} P_b^I = 0.4 + 0.6 = 1$ and $\sum_{c=1}^{L(F)} P_c^F = 0.6$.
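Definition 7's normalization can be sketched as follows, reusing Example 1's values; the `(value, probability)` pair representation and the `normalize` helper are our own:

```python
# Normalization of a PNHFN per Definition 7: each probability is divided by
# its component's probability sum, so every component sums to 1.

def normalize(component):
    """component: list of (membership value, probability) pairs."""
    total = sum(p for _, p in component)
    return [(v, p / total) for v, p in component]

# Example 1's PNHFN:
T = [(0.5, 0.3), (0.6, 0.5)]   # probabilities sum to 0.8
I = [(0.4, 0.4), (0.6, 0.6)]   # probabilities sum to 1.0
F = [(0.3, 0.6)]               # probabilities sum to 0.6
T_norm, I_norm, F_norm = normalize(T), normalize(I), normalize(F)
# T_norm probabilities become 0.3/0.8 = 0.375 and 0.5/0.8 = 0.625
```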
The PNHFS is considered a generalization of the aforementioned varieties of FS, including FS, IFS, HFS, etc. Next, some special cases of the normal PNHFS are introduced.
(1)
If the probability values are equal within each hesitant membership function, i.e., $P_1^T = P_2^T = \cdots = P_{L(T)}^T$, $P_1^I = P_2^I = \cdots = P_{L(I)}^I$ and $P_1^F = P_2^F = \cdots = P_{L(F)}^F$, then the normal PNHFS reduces to the SVNHFS.
(2)
If $L(T) = L(I) = L(F) = 1$ and $P_1^T = P_1^I = P_1^F = 1$, then the normal PNHFS reduces to the SVNS.
(3)
If $I(x) = \emptyset$ (and hence $P^I(x) = \emptyset$) and $\alpha^+ + \gamma^+ \le 1$, then the normal PNHFS reduces to the PDHFS, which can be expressed by $N = \{\langle x, T(x)|P^T(x), F(x)|P^F(x)\rangle \mid x \in X\}$.
(4)
If the normal PNHFS satisfies the conditions in (3) and $P_1^T = P_2^T = \cdots = P_{L(T)}^T$, $P_1^F = P_2^F = \cdots = P_{L(F)}^F$, then it reduces to the DHFS, denoted by $N = \{\langle x, T(x), F(x)\rangle \mid x \in X\}$.
(5)
If $I(x) = F(x) = \emptyset$ (and hence $P^I(x) = P^F(x) = \emptyset$), then the normal PNHFS reduces to the PHFS, with mathematical symbol $N = \{\langle x, T(x)|P^T(x)\rangle \mid x \in X\}$.
(6)
If the normal PNHFS satisfies the conditions in (5) and $P_1^T = P_2^T = \cdots = P_{L(T)}^T$, it reduces to the HFS, denoted by $N = \{\langle x, T(x)\rangle \mid x \in X\}$.
(7)
If $I(x) = \emptyset$ (and hence $P^I(x) = \emptyset$), $L(T) = L(F) = 1$, $P_1^T = P_1^F = 1$ and $\alpha_1 + \gamma_1 \le 1$, then the normal PNHFS reduces to the IFS, denoted by $N = \{\langle x, \alpha_1, \gamma_1\rangle \mid x \in X\}$.
(8)
If $I(x) = \emptyset$ (and hence $P^I(x) = \emptyset$), $L(T) = L(F) = 1$, $P_1^T = P_1^F = 1$ and $1 - \alpha_1 - \gamma_1 = 0$, then the normal PNHFS reduces to the FS.
Definition 8.
Suppose that $X = \{x_1, x_2, \ldots, x_n\}$ is a finite reference set and N is a PNHFN; then the hesitant degrees of $x_i$ and of N are defined by the following mathematical symbols:

$$\chi(x_i) = 1 - \frac{1}{3}\left(\frac{1}{L(T)} + \frac{1}{L(I)} + \frac{1}{L(F)}\right); \qquad \chi(N) = \frac{1}{n}\sum_{i=1}^{n}\chi(x_i),$$

where $L(T)$, $L(I)$ and $L(F)$ represent the total numbers of factors in the components $T(x)|\tilde{P}^T(x)$, $I(x)|\tilde{P}^I(x)$ and $F(x)|\tilde{P}^F(x)$.
The hesitant degree of $x_i$ reflects the decision maker's degree of hesitation: the bigger $\chi(N)$, the greater the hesitation of the decision maker in making decisions. If $\chi(N) = 0$, then the decision information is completely unhesitating.
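A minimal sketch of the hesitant degree of Definition 8, evaluated on Example 1 (where $L(T) = 2$, $L(I) = 2$, $L(F) = 1$); the helper name is our own:

```python
# Hesitant degree per element: chi(x) = 1 - (1/L(T) + 1/L(I) + 1/L(F)) / 3.

def hesitant_degree(lt, li, lf):
    """lt, li, lf: numbers of values in the T, I and F components."""
    return 1 - (1/lt + 1/li + 1/lf) / 3

chi = hesitant_degree(2, 2, 1)   # 1 - (0.5 + 0.5 + 1)/3 = 1/3
# With a single value per component there is no hesitation at all:
no_hesitation = hesitant_degree(1, 1, 1)   # 0.0
```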
By the definition of PNHFS, the information $\{\alpha_1|P_1^T, \alpha_2|P_2^T, \ldots, \alpha_{L(T)}|P_{L(T)}^T\}$ denotes the positive attitude toward x belonging to a situation A; these data express a certain, non-vague component, although no single value can be extracted as the specific truth-membership degree. Similarly, the information in $\{\gamma_1|P_1^F, \gamma_2|P_2^F, \ldots, \gamma_{L(F)}|P_{L(F)}^F\}$ parallels the truth-membership hesitant degrees with probability: it denotes a determinate attitude with uncertain settled data. However, $\{\beta_1|P_1^I, \beta_2|P_2^I, \ldots, \beta_{L(I)}|P_{L(I)}^I\}$ expresses an uncertain attitude and an inconclusive membership degree with probability. Through this analysis, the truth-membership and falsity-membership hesitant degrees are considered components of the non-vagueness subspace, while the indeterminacy-membership degrees express the uncertain attitude, denoting the imprecision of people's knowledge about x. The remaining region denotes a contradictory (vague) attitude about whether x belongs to the event; it represents the unexplored domain of people's knowledge about x. As people acquire more and more knowledge, the fuzzy information represented by the contradictory (vague) subspace is converted into the uncertain knowledge represented by $T(x)|P^T(x)$, $I(x)|P^I(x)$ and $F(x)|P^F(x)$.
Thus, we propose a method to collect all the uncertainty parameters and accurately describe the certain-attitude subspace, the indeterminate-attitude subspace and the contradictory (vague) subspace. Within the certain subspace, the standpoint about the truth-membership and falsity-membership hesitant degrees is definite: the truth-membership hesitant degrees are assigned positive values in [0, 1], and the falsity-membership hesitant degrees are assigned negative values in [−1, 0], so the value of the certain attitude belongs to [−1, 1]. Obviously, by Definition 6, the value of the indeterminate attitude belongs to [0, 1]. The above analysis shows that the PNHFS is a convenient way to express fuzzy information; decision makers, however, prefer to obtain the optimal result conveniently, and the hesitant degree can describe the hesitation in uncertain information. We therefore fuse the truth-membership hesitant degrees, falsity-membership hesitant degrees and hesitant degree into a single attitude representation. The uncertain neutrosophic space is then expressed, at a relatively macroscopic level, by a certain attitude, an indeterminate attitude and a hesitation; this simplifies the calculation process and makes problem solving more feasible. Based on the above analysis, the definition of the inference set (IS) is established as follows:
Definition 9.
Suppose that X is a finite reference set; then an inference set (IS) is expressed by the following mathematical symbol:

$$IS = \{\langle x, d(x), e(x), g(x)\rangle \mid x \in X\},$$

where $IE = \langle x, d(x), e(x), g(x)\rangle$ is defined as an inference element (IE), and $(d(x), e(x), g(x))$ is called an inference number (IN). The function $d: X \to [-1, 1]$ describes the attitude toward x belonging to the situation A; it is a composite of the truth-membership and falsity-membership hesitant degrees. The mapping $e: X \to [0, 1]$ expresses the non-vague opinion about x belonging to the situation A. In addition, the mapping $g: X \to [0, 1]$ gives the contradictory (vague) degree of people's attitudes about x belonging to the situation A. Note that when $0 < d(x) \le 1$, the decision makers remain optimistic about x belonging to the situation A; when $-1 \le d(x) < 0$, they are pessimistic; and if $d(x) = 0$, their attitude is neutral.
Example 2.
The mathematical symbol $\langle x, 0.4, 0.7, 0.2\rangle$ is an IE. It describes a decision maker with a 40% degree of agreement about x belonging to the situation A, a 70% degree of determinacy about the information on x with respect to the situation A, and a 20% degree of non-hesitation about x belonging to the situation A.

3.1. The Method of Comparing PNHFSs

In this subsection, a way to convert a PNHFE into an IE is established, so that PNHFSs can be compared via their IEs. In the entire space, the certain-attitude subspace, the indeterminate-attitude subspace, the contradictory (vague)-attitude subspace and the corresponding probabilistic values express different meanings. The certain-attitude subspace represents the degree of agreement or disagreement about x belonging to the situation A; the indeterminate-attitude subspace reflects the lack of decision makers' information, whereas the contradictory (vague) subspace represents the contradictions in decision makers' knowledge. Additionally, probability expresses an uncertainty that is shared by all three subspaces, so the probability values are integrated to reduce the number of uncertain variables. Next, in order to establish the distance and similarity measures, a function from a PNHFS to an IS is given.
Definition 10.
Suppose that X is a finite reference set, N is a finite PNHFE, and a mapping H is defined as follows:

$$H(N) = \left\{\sum_{a=1}^{L(T)} t_a P_a^T - \sum_{c=1}^{L(F)} f_c P_c^F,\; \sum_{b=1}^{L(I)} (1 - i_b) P_b^I,\; 1 - \chi(x_i)\right\}.$$

For instance, when $P_1^T = P_2^T = \cdots = P_{L(T)}^T$, $P_1^I = P_2^I = \cdots = P_{L(I)}^I$ and $P_1^F = P_2^F = \cdots = P_{L(F)}^F$, the PNHFS reduces to an NHFS, and the function H can be transformed to an IS as

$$H(N) = \left\{\sum_{a=1}^{L(T)} \frac{t_a}{L(T)} - \sum_{c=1}^{L(F)} \frac{f_c}{L(F)},\; \sum_{b=1}^{L(I)} \frac{1 - i_b}{L(I)},\; 1 - \chi(x_i)\right\}.$$

According to Equation (6), the IS includes both probabilistic and fuzzy information, which can be illustrated by investigating Definition 10. The term $\sum_{a=1}^{L(T)} t_a P_a^T - \sum_{c=1}^{L(F)} f_c P_c^F$ gives the average value of the certain attitude obtained from the truth-membership and falsity-membership subspaces. The expression $\sum_{b=1}^{L(I)} (1 - i_b) P_b^I$ gives the average degree of un-hesitant opinion derived from the indeterminacy-membership subspace. The term $1 - \chi(x_i)$ gives the average value of the non-hesitant attitude about the known information on x related to the situation A. By Definition 6, all objective and subjective uncertain elements are considered and the different types of fuzzy spaces are distinguished. However, if the PNHFE is infinite, Equation (6) changes to

$$H(N) = \left\{\sum_{a=1}^{L(T)} t_a P_a^T - \sum_{c=1}^{L(F)} f_c P_c^F,\; \sum_{b=1}^{L(I)} (1 - i_b) P_b^I,\; 0\right\}.$$
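For a finite PNHFE, the mapping H of Definition 10 can be sketched as follows, reusing the `(value, probability)` pair representation (the helper name is our own) and the values of Example 1:

```python
# Mapping H from a finite PNHFE to an inference element (d, e, 1 - chi).

def to_inference_element(T, I, F):
    """T, I, F: lists of (membership value, probability) pairs."""
    d = sum(t * p for t, p in T) - sum(f * p for f, p in F)   # certain attitude, in [-1, 1]
    e = sum((1 - i) * p for i, p in I)                        # average un-hesitant opinion
    chi = 1 - (1/len(T) + 1/len(I) + 1/len(F)) / 3            # hesitant degree (Definition 8)
    return (d, e, 1 - chi)

ie = to_inference_element(
    [(0.5, 0.3), (0.6, 0.5)],   # T | P^T  (Example 1)
    [(0.4, 0.4), (0.6, 0.6)],   # I | P^I
    [(0.3, 0.6)],               # F | P^F
)
# ie = (0.45 - 0.18, 0.24 + 0.24, 1 - 1/3) = (0.27, 0.48, 2/3)
```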
Based on the importance of objective and subjective information, the method of comparison for IEs is defined as follows:
Definition 11.
Let X be a finite reference set, and let $IE_1 = \langle d_1(x), e_1(x), g_1(x)\rangle$ and $IE_2 = \langle d_2(x), e_2(x), g_2(x)\rangle$ be two IEs; then:
(1)
If $g_1 > g_2$, then $IE_1 \succ IE_2$;
(2)
If $g_1 < g_2$, then $IE_1 \prec IE_2$;
(3)
If $g_1 = g_2$, then (i) if $e_1 > e_2$, then $IE_1 \succ IE_2$; (ii) if $e_1 < e_2$, then $IE_1 \prec IE_2$;
(4)
If $g_1 = g_2$ and $e_1 = e_2$, then (i) if $d_1 > d_2$, then $IE_1 \succ IE_2$; (ii) if $d_1 < d_2$, then $IE_1 \prec IE_2$.
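Definition 11's comparison amounts to a lexicographic order on $(g, e, d)$; a minimal sketch, assuming IEs are stored as `(d, e, g)` tuples:

```python
# Lexicographic comparison of inference elements per Definition 11:
# non-hesitation g decides first, then determinacy e, then opinion d.

def ie_key(ie):
    d, e, g = ie
    return (g, e, d)   # sort/compare by g, then e, then d

ie1 = (0.4, 0.7, 0.2)
ie2 = (0.9, 0.5, 0.2)
# Equal g, so e decides: ie1 (e = 0.7) ranks above ie2 (e = 0.5).
better = max([ie1, ie2], key=ie_key)
```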
The entire uncertain field is divided so as to describe the certain, indeterminate and hesitant attitudes. By Definition 9, based on internal and external perspectives, the IE expresses the certain subdomain without probabilistic information. Thus, according to the degree of information obtained and the importance of experience in decision-making activities, IEs are compared by the rule "degree of non-hesitation first, then determinacy, and lastly opinion".
Supposing that A and B are two PNHFEs on the finite reference set X, the corresponding IEs can be expressed by $IE_A = \langle d_A(x), e_A(x), g_A(x)\rangle$ and $IE_B = \langle d_B(x), e_B(x), g_B(x)\rangle$, respectively. The notion of binary relations for PNHFEs can then be described as follows:
Definition 12.
Suppose that A and B are two PNHFEs on the finite reference set X. Then the binary relations for PNHFEs are given as follows:
(1)
If $\langle d_A(x), e_A(x), g_A(x)\rangle \succ \langle d_B(x), e_B(x), g_B(x)\rangle$, then $A \succ B$;
(2)
If $\langle d_A(x), e_A(x), g_A(x)\rangle \prec \langle d_B(x), e_B(x), g_B(x)\rangle$, then $A \prec B$;
(3)
If $\langle d_A(x), e_A(x), g_A(x)\rangle = \langle d_B(x), e_B(x), g_B(x)\rangle$, then $A = B$.

3.2. Distance and Similarity Measures of PNHFSs

According to the work mentioned above, the distance, similarity and entropy measures of PNHFEs are established in this subsection. The inclusion relation between $IS_A$ and $IS_B$ is given first; similarly, the inclusion between the PNHFSs A and B is proposed.
Suppose that X is a finite reference set, A and B are PNHFSs on X, and $IS_A$ and $IS_B$ are the corresponding ISs of A and B, respectively. Then

$$A \subseteq B \iff \forall x \in X: \; \overline{T_A|P^{T_A}} \le \overline{T_B|P^{T_B}}, \; \overline{I_A|P^{I_A}} \ge \overline{I_B|P^{I_B}}, \; \overline{F_A|P^{F_A}} \ge \overline{F_B|P^{F_B}} \; \text{and} \; \chi(A) \ge \chi(B),$$

where $\overline{T_A|P^{T_A}}$ and $\overline{T_B|P^{T_B}}$ describe the average truth-membership hesitant degrees of A and B, respectively, $\overline{I_A|P^{I_A}}$ and $\overline{I_B|P^{I_B}}$ express the average indeterminacy-membership hesitant degrees of A and B, respectively, and, similarly, $\overline{F_A|P^{F_A}}$ and $\overline{F_B|P^{F_B}}$ represent the corresponding average falsity-membership hesitant degrees of A and B.
Additionally, if $IS_A \subseteq IS_B$, the following conditions need to hold:

$$d_A(x) \le d_B(x), \quad e_A(x) \le e_B(x), \quad g_A(x) \le g_B(x).$$
Definition 13.
Suppose that X is a finite reference set and $IS_A$, $IS_B$ and $IS_C$ are three ISs on X. Let $D_{IS}: IS(X) \times IS(X) \to [0,1]$ be a function, where "×" denotes the Cartesian product. Then $D_{IS}$ is called a distance measure if it satisfies the following three requirements:
(1)
$D_{IS}(IS_A, IS_B) = 0$ iff $IS_A = IS_B$;
(2)
$D_{IS}(IS_A, IS_B) = D_{IS}(IS_B, IS_A)$;
(3)
$D_{IS}(IS_A, IS_C) \ge D_{IS}(IS_A, IS_B)$ and $D_{IS}(IS_A, IS_C) \ge D_{IS}(IS_B, IS_C)$ when $IS_A \subseteq IS_B \subseteq IS_C$.
Theorem 1.
Suppose that $IS_A = \{\langle d_A(x), e_A(x), g_A(x)\rangle \mid x \in X\}$ and $IS_B = \{\langle d_B(x), e_B(x), g_B(x)\rangle \mid x \in X\}$ are two ISs on X; then the function

$$D_{IS} = AIO\Big(MIT\big(MIU_1(|d_A(x) - d_B(x)|),\; MIU_2(|e_A(x) - e_B(x)|),\; MIU_3(|g_A(x) - g_B(x)|)\big)\Big)$$

is a distance measure for ISs, where the mappings $MIU_1, MIU_2, MIU_3: [0,1] \to [0,1]$ are monotonically increasing unary functions with $MIU_1(0) = MIU_2(0) = MIU_3(0) = 0$; these functions may coincide but are not required to. The mapping $MIT: [0,1]^3 \to [0,1]$ is a monotonically increasing ternary function satisfying $MIT(0,0,0) = 0$, whose partial derivatives $MIT_1$, $MIT_2$ and $MIT_3$ with respect to its first, second and third arguments satisfy $MIT_1 \ge 0$, $MIT_2 \ge 0$ and $MIT_3 \ge 0$. Additionally, $AIO: [0,1]^n \to [0,1]$ is an aggregation operator whose partial derivatives satisfy $AIO_i \ge 0$ ($i \in \{1, 2, \ldots, n\}$), where n represents the total number of factors in X.
Proof. 
According to the conditions of M I U 1 , M I U 2 , M I U 3 , M I T and A I O , Definition 13 (1) and (2) obviously hold. Thus, the proof process of condition (3) is listed, here. Since the restrictive conditions I S A I S B I S C hold, thus the inequalities are listed below:
| d A ( x ) d C ( x ) | | d A ( x ) d B ( x ) | , | e A ( x ) e C ( x ) | | e A ( x ) e B ( x ) | , | g A ( x ) g C ( x ) | | g A ( x ) g B ( x ) | ; | d A ( x ) d C ( x ) | | d B ( x ) d C ( x ) | , | e A ( x ) e C ( x ) | | e B ( x ) e C ( x ) | , | g A ( x ) g C ( x ) | | g B ( x ) g C ( x ) | .
Because functions M I U 1 , M I U 2 and M I U 3 are three monotonically increasing functions, so we can get, x X
M I U 1 ( | d A ( x ) d C ( x ) | ) M I U 1 ( | d A ( x ) d B ( x ) | ) , M I U 2 ( | e A ( x ) e C ( x ) | ) M I U 2 ( | e A ( x ) e B ( x ) | ) , M I U 3 ( | g A ( x ) g C ( x ) | ) M I U 3 ( | g A ( x ) g B ( x ) | ) ; M I U 1 ( | d A ( x ) d C ( x ) | ) M I U 1 ( | d B ( x ) d C ( x ) | ) , M I U 2 ( | e A ( x ) e C ( x ) | ) M I U 2 ( | e B ( x ) e C ( x ) | ) , M I U 3 ( | g A ( x ) g C ( x ) | ) M I U 3 ( | g B ( x ) g C ( x ) | ) .
Since the partial derivatives satisfy $MIT_1\ge0$, $MIT_2\ge0$ and $MIT_3\ge0$, we have
$$MIT\big(MIU_1(|d_A(x)-d_C(x)|),MIU_2(|e_A(x)-e_C(x)|),MIU_3(|g_A(x)-g_C(x)|)\big)\ge MIT\big(MIU_1(|d_A(x)-d_B(x)|),MIU_2(|e_A(x)-e_B(x)|),MIU_3(|g_A(x)-g_B(x)|)\big);$$
$$MIT\big(MIU_1(|d_A(x)-d_C(x)|),MIU_2(|e_A(x)-e_C(x)|),MIU_3(|g_A(x)-g_C(x)|)\big)\ge MIT\big(MIU_1(|d_B(x)-d_C(x)|),MIU_2(|e_B(x)-e_C(x)|),MIU_3(|g_B(x)-g_C(x)|)\big).$$
By the monotonicity of the aggregation operator $AIO$, the following results are obtained:
$$AIO\big(MIT\big(MIU_1(|d_A(x)-d_C(x)|),MIU_2(|e_A(x)-e_C(x)|),MIU_3(|g_A(x)-g_C(x)|)\big)\big)\ge AIO\big(MIT\big(MIU_1(|d_A(x)-d_B(x)|),MIU_2(|e_A(x)-e_B(x)|),MIU_3(|g_A(x)-g_B(x)|)\big)\big);$$
$$AIO\big(MIT\big(MIU_1(|d_A(x)-d_C(x)|),MIU_2(|e_A(x)-e_C(x)|),MIU_3(|g_A(x)-g_C(x)|)\big)\big)\ge AIO\big(MIT\big(MIU_1(|d_B(x)-d_C(x)|),MIU_2(|e_B(x)-e_C(x)|),MIU_3(|g_B(x)-g_C(x)|)\big)\big).$$
Namely, $D_{IS}(IS_A,IS_C)\ge D_{IS}(IS_A,IS_B)$ and $D_{IS}(IS_A,IS_C)\ge D_{IS}(IS_B,IS_C)$. □
Theorem 2.
Suppose that $IS_A=\{d_A(x),e_A(x),g_A(x)\mid x\in X\}$ and $IS_B=\{d_B(x),e_B(x),g_B(x)\mid x\in X\}$ are two ISs in X; then the function
$$D_{IS}=AIO\big(MDT\big(MDU_1(|d_A(x)-d_B(x)|),\,MDU_2(|e_A(x)-e_B(x)|),\,MDU_3(|g_A(x)-g_B(x)|)\big)\big)$$
is a distance measure on IS, where the mappings $MDU_1, MDU_2, MDU_3: [0,1]\to[0,1]$ are three monotonically decreasing unary functions with $MDU_1(1)=MDU_2(1)=MDU_3(1)=0$. These functions may be identical; no further restriction is imposed here. The mapping $MDT: [0,1]^3\to[0,1]$ is a monotonically decreasing ternary function satisfying $MDT(1,1,1)=0$ and $MDT_1\le0$, $MDT_2\le0$, $MDT_3\le0$, where $MDT_1$, $MDT_2$ and $MDT_3$ are the partial derivatives of $MDT$ with respect to $MDU_1$, $MDU_2$ and $MDU_3$, respectively. $AIO: [0,1]^n\to[0,1]$ is an aggregation operator whose partial derivatives satisfy $AIO_i\ge0$ ($i\in\{1,2,\dots,n\}$); $n$ denotes the total number of factors in X.
Proof. 
Since the proof is similar to that of Theorem 1, all the conditions of Definition 13 are satisfied by Theorem 2. □
Definition 14.
Suppose that X is a finite reference set and A, B and C are three PNHFSs on X. A mapping $D_{PNHFS}: PNHFS(X)\times PNHFS(X)\to[0,1]$, where "×" is the Cartesian product, is called a distance measure on $PNHFS(X)$ if it satisfies the following three requirements:
(1)
D P N H F S ( A , B ) = 0 iff A = B ;
(2)
D P N H F S ( A , B ) = D P N H F S ( B , A ) ;
(3)
If $A\subseteq B\subseteq C$, then $D_{PNHFS}(A,B)\le D_{PNHFS}(A,C)$ and $D_{PNHFS}(B,C)\le D_{PNHFS}(A,C)$.
Theorem 3.
Suppose that X is a finite reference set, A, B and C are three PNHFSs in X, I S A , I S B and I S C are corresponding ISs of A, B and C, respectively. Then, a real-valued mapping:
D P N H F S ( A , B ) = M I U ( D I S ( I S A , I S B ) )
is a distance measure on PNHFS(X), where $MIU: [0,1]\to[0,1]$ is a monotonically increasing unary mapping with $MIU(0)=0$.
Proof. 
According to the conditions of Theorem 3, the mapping $D_{PNHFS}$ satisfies requirements (1) and (2) of Definition 14. Thus, only requirement (3) needs to be proved.
Based on the meaning of $A\subseteq B\subseteq C$ for $A,B,C\in PNHFS(X)$, by Definition 10 the corresponding ISs of A, B and C satisfy the inclusion relation
$$IS_A\subseteq IS_B\subseteq IS_C.$$
Obviously, the following inequalities are obtained:
$$D_{IS}(IS_A,IS_C)\ge D_{IS}(IS_A,IS_B),\quad D_{IS}(IS_A,IS_C)\ge D_{IS}(IS_B,IS_C).$$
Since the function M I U is a monotonically increasing unary mapping, so the following inequalities are shown:
$$MIU(D_{IS}(IS_A,IS_B))\le MIU(D_{IS}(IS_A,IS_C)),\quad MIU(D_{IS}(IS_B,IS_C))\le MIU(D_{IS}(IS_A,IS_C)).$$
This completes the proof process. □
Example 3.
Suppose that X is a finite reference set, A and B are PNHFSs on X, and $IS_A=\{d_A(x),e_A(x),g_A(x)\mid x\in X\}$ and $IS_B=\{d_B(x),e_B(x),g_B(x)\mid x\in X\}$ are the corresponding ISs of these two PNHFSs. Based on Theorem 1 and Theorem 3, let $MIU_1=y^{\varphi}$, $MIU_2=y^{\mu}$, $MIU_3=y^{\nu}$, $y\in[0,1]$, $0\le\varphi,\mu,\nu\le1$, and $MIT=\log_4(1+y_1+y_2+y_3)$, $y_1,y_2,y_3\in[0,1]$. Additionally, suppose $MIU=y^{\lambda}$, where $y\in[0,1]$ and $\lambda\ge0$. Then, we have
$$D_1(A,B)=\frac{1}{2n}\sum_{x\in X}\left(\log_4\left(1+\left(\frac{|d_A(x)-d_B(x)|}{2}\right)^{\varphi}+\big(|e_A(x)-e_B(x)|\big)^{\mu}+\left(\frac{|g_A(x)-g_B(x)|}{2}\right)^{\nu}\right)\right)^{\lambda}.$$
If ϕ = μ = ν = λ = 1 , we have
$$D_1^{\varphi=\mu=\nu=\lambda=1}(A,B)=\frac{1}{2n}\sum_{x\in X}\log_4\left(1+\frac{|d_A(x)-d_B(x)|}{2}+|e_A(x)-e_B(x)|+\frac{|g_A(x)-g_B(x)|}{2}\right).$$
If ϕ = μ = ν = 2 , λ = 1 2 , then
$$D_1^{\varphi=\mu=\nu=2,\lambda=\frac{1}{2}}(A,B)=\frac{1}{2n}\sum_{x\in X}\left(\log_4\left(1+\left(\frac{|d_A(x)-d_B(x)|}{2}\right)^{2}+|e_A(x)-e_B(x)|^{2}+\left(\frac{|g_A(x)-g_B(x)|}{2}\right)^{2}\right)\right)^{\frac{1}{2}}.$$
From the formulas of $D_1(A,B)$, $D_1^{\varphi=\mu=\nu=\lambda=1}(A,B)$ and $D_1^{\varphi=\mu=\nu=2,\lambda=\frac{1}{2}}(A,B)$, we can see that the parameters $\varphi,\mu,\nu$ control how $|d_A(x)-d_B(x)|$, $|e_A(x)-e_B(x)|$ and $|g_A(x)-g_B(x)|$ enter the internal structure of $D_1(A,B)$, while the parameter $\lambda$ regulates the interplay among these three terms within the regulated range. The parameters $\varphi,\mu,\nu$ are chosen according to the application environment. For an MCDM problem, the measure is thus a tool for quantifying differences in the decision makers' knowledge backgrounds, so it is rational to choose the parameters governing the internal structure of the measure according to the respective importance degrees. By assigning different functions to $|d_A(x)-d_B(x)|$, $|e_A(x)-e_B(x)|$ and $|g_A(x)-g_B(x)|$, their relative influence can likewise be adjusted.
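To make the role of the parameters concrete, here is a minimal Python sketch of $D_1$ from Example 3. Representing each IS as a list of $(d,e,g)$ triples, one per element of X, is our assumption about the data layout, not the paper's notation.

```python
import math

def d1_distance(IS_A, IS_B, phi=1.0, mu=1.0, nu=1.0, lam=1.0):
    """Sketch of D_1 from Example 3: phi, mu, nu shape the three
    difference terms; lam regulates their interplay.
    IS_A, IS_B are lists of (d, e, g) triples, one per element of X."""
    n = len(IS_A)
    total = 0.0
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        inner = (1
                 + (abs(dA - dB) / 2) ** phi
                 + abs(eA - eB) ** mu
                 + (abs(gA - gB) / 2) ** nu)
        total += math.log(inner, 4) ** lam
    return total / (2 * n)
```

With identical ISs every inner term is 1, the logarithm vanishes, and the distance collapses to 0, matching axiom (1) of Definition 13.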
Example 4.
Suppose that X, A, B, $IS_A$ and $IS_B$ are as mentioned above in Example 3, $MIU_1=(\ln(1+y))^{\varphi}$ ($y\in[0,1]$, $\varphi\ge0$); $MIU_2=y^{\mu}$ ($y\in[0,1]$, $\mu\ge0$); $MIU_3=y^{\nu}$ ($y\in[0,1]$, $\nu\ge0$); $MIT=(y_1\cdot y_2\cdot y_3)^{\lambda}$ ($y_1,y_2,y_3\in[0,1]$, $\lambda\ge0$). Additionally, $MIU=t\,y$ ($y\in[0,1]$, $t\ge0$). Then,
$$D_2(A,B)=\sum_{x\in X}t\left(\left(\ln\left(1+\frac{|d_A(x)-d_B(x)|}{2}\right)\right)^{\varphi}\cdot|e_A(x)-e_B(x)|^{\mu}\cdot\left(\frac{|g_A(x)-g_B(x)|}{2}\right)^{\nu}\right)^{\lambda}.$$
In addition, if ϕ = μ = ν = λ = t = 1 , then
$$D_2^{\varphi=\mu=\nu=\lambda=t=1}(A,B)=\sum_{x\in X}\ln\left(1+\frac{|d_A(x)-d_B(x)|}{2}\right)\cdot|e_A(x)-e_B(x)|\cdot\frac{|g_A(x)-g_B(x)|}{2}.$$
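A similar sketch for $D_2$ (same assumed $(d,e,g)$ list representation as before). Because $D_2$ multiplies the three terms rather than adding them, the whole contribution of an element vanishes whenever any single component difference is zero:

```python
import math

def d2_distance(IS_A, IS_B, phi=1.0, mu=1.0, nu=1.0, lam=1.0, t=1.0):
    """Sketch of D_2 from Example 4: the three difference terms are
    multiplied, raised to lam, and scaled by t."""
    total = 0.0
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        prod = (math.log(1 + abs(dA - dB) / 2) ** phi
                * abs(eA - eB) ** mu
                * (abs(gA - gB) / 2) ** nu)
        total += t * prod ** lam
    return total
```

This product form is what distinguishes $D_2$ from the additive $D_1$: agreement on any one of the three components already drives the per-element term to zero.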
Definition 15.
Suppose that X is a finite reference set, $IS_A$, $IS_B$ and $IS_C$ are three ISs on X, and $S_{IS}: IS(X)\times IS(X)\to[0,1]$ is a real-valued function, where "×" is the Cartesian product. Then, $S_{IS}$ is called a similarity measure on $IS(X)$ if it satisfies the following three axiomatic conditions:
(1)
S I S ( I S A , I S B ) = 1 iff I S A = I S B ;
(2)
S I S ( I S A , I S B ) = S I S ( I S B , I S A ) ;
(3)
If $IS_A\subseteq IS_B\subseteq IS_C$, then $S_{IS}(IS_A,IS_B)\ge S_{IS}(IS_A,IS_C)$ and $S_{IS}(IS_B,IS_C)\ge S_{IS}(IS_A,IS_C)$.
Theorem 4.
Suppose that X is a finite reference set and $IS_A=\{d_A(x),e_A(x),g_A(x)\mid x\in X\}$, $IS_B=\{d_B(x),e_B(x),g_B(x)\mid x\in X\}$ are two ISs; then the following function $S_{IS}(IS_A,IS_B)$ is a similarity measure:
$$S_{IS}(IS_A,IS_B)=AIO\left(MDT\left(MIU_1\left(\frac{|d_A(x)-d_B(x)|}{2}\right),MIU_2(|e_A(x)-e_B(x)|),MIU_3(|g_A(x)-g_B(x)|)\right)\right),$$
where $MIU_1,MIU_2,MIU_3:[0,1]\to[0,1]$ are three monotonically increasing unary mappings with $MIU_1(0)=MIU_2(0)=MIU_3(0)=0$ (they may be the same function; no further requirement is imposed here). $MDT:[0,1]^3\to[0,1]$ is a monotonically decreasing ternary mapping whose partial derivatives $MDT_1,MDT_2,MDT_3$ with respect to $MIU_1,MIU_2,MIU_3$ satisfy $MDT_1\le0$, $MDT_2\le0$, $MDT_3\le0$, and $MDT(0,0,0)=1$. The mapping $AIO:[0,1]^n\to[0,1]$ is an aggregation operator with partial derivatives $AIO_i\ge0$ ($i\in\{1,2,\dots,n\}$); $n$ denotes the total number of factors in X.
Proof. 
The proof is similar to that of Theorem 1, thus it is omitted here. □
Theorem 5.
Suppose that X is a finite reference set and $IS_A=\{d_A(x),e_A(x),g_A(x)\mid x\in X\}$, $IS_B=\{d_B(x),e_B(x),g_B(x)\mid x\in X\}$ are two ISs; then the following function $S_{IS}(IS_A,IS_B)$ is a similarity measure:
$$S_{IS}(IS_A,IS_B)=AIO\left(MIT\left(MDU_1\left(\frac{|d_A(x)-d_B(x)|}{2}\right),MDU_2(|e_A(x)-e_B(x)|),MDU_3(|g_A(x)-g_B(x)|)\right)\right),$$
where $MDU_1,MDU_2,MDU_3:[0,1]\to[0,1]$ are three monotonically decreasing unary mappings with $MDU_1(1)=MDU_2(1)=MDU_3(1)=0$ (they may be the same function; no further requirement is imposed here). $MIT:[0,1]^3\to[0,1]$ is a monotonically increasing ternary mapping whose partial derivatives $MIT_1,MIT_2,MIT_3$ with respect to $MDU_1,MDU_2,MDU_3$ satisfy $MIT_1\ge0$, $MIT_2\ge0$, $MIT_3\ge0$, and $MIT(0,0,0)=0$. The mapping $AIO:[0,1]^n\to[0,1]$ is an aggregation operator with partial derivatives $AIO_i\ge0$ ($i\in\{1,2,\dots,n\}$); $n$ denotes the total number of factors in X.
Proof. 
The proof process is omitted. □
Definition 16.
Suppose that X is a finite reference set. For any three PNHFSs A, B and C on X, a function $S_{PNHFS}: PNHFS(X)\times PNHFS(X)\to[0,1]$, where "×" is the Cartesian product, is called a similarity measure if it satisfies the following three axiomatic conditions:
(1)
S P N H F S ( A , B ) = 1 iff A = B ;
(2)
S P N H F S ( A , B ) = S P N H F S ( B , A ) ;
(3)
If $A\subseteq B\subseteq C$, then $S_{PNHFS}(A,B)\ge S_{PNHFS}(A,C)$ and $S_{PNHFS}(B,C)\ge S_{PNHFS}(A,C)$.
Theorem 6.
Let X be a finite reference set, A and B be two PNHFSs on X, and $IS_A$ and $IS_B$ be the corresponding ISs of A and B, respectively. Then the following mapping $S_{PNHFS}$ is a similarity measure on $PNHFS(X)$:
S P N H F S ( A , B ) = M I U ( S I S ( I S A , I S B ) ) ,
where $MIU:[0,1]\to[0,1]$ is an increasing function with $MIU(1)=1$.
Proof. 
By Definition 16 and Theorem 4, the proof is obvious; thus it is omitted. □
Example 5.
Suppose that X, A, B, $IS_A$, $IS_B$ are as mentioned above, $MDU_1=MDU_2=MDU_3=t^{y}-t$ ($0\le t,y\le1$), and $MIT=(y_1+y_2+y_3)^{\varphi}$ with $\varphi\ge0$, $y_1,y_2,y_3\in[0,1]$. Additionally, suppose $MIU=y^{\lambda}$, $0\le y\le1$, $\lambda\ge0$. The similarity measure is described as follows:
$$S_1(A,B)=\sum_{x\in X}\left(\left(t^{\frac{|d_A(x)-d_B(x)|}{2}}+t^{|e_A(x)-e_B(x)|}+t^{|g_A(x)-g_B(x)|}-3t\right)^{\varphi}\right)^{\lambda}.$$
In addition, suppose $t=\frac{1}{3}$, $\varphi=\lambda=1$; thus
$$S_1^{t=\frac{1}{3},\varphi=\lambda=1}(A,B)=\sum_{x\in X}\left(\left(\frac{1}{3}\right)^{\frac{|d_A(x)-d_B(x)|}{2}}+\left(\frac{1}{3}\right)^{|e_A(x)-e_B(x)|}+\left(\frac{1}{3}\right)^{|g_A(x)-g_B(x)|}-1\right).$$
Example 5 shows how the chosen parameters and mappings determine the influence of $|d_A(x)-d_B(x)|$, $|e_A(x)-e_B(x)|$ and $|g_A(x)-g_B(x)|$ on the internal structure of the similarity measure. The selection of these parameters and mappings follows the same considerations as in Example 3.
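A runnable sketch of $S_1$ follows. The displayed formula is an unnormalised sum, so we additionally divide each bracket by its maximum value $3(1-t)$ and average over X; that normalisation is our assumption, made only so that identical ISs score exactly 1. The exponent $\varphi$ is folded into `lam`.

```python
def s1_similarity(IS_A, IS_B, t=1/3, lam=1.0):
    """Sketch of S_1 from Example 5 on lists of (d, e, g) triples.
    Each bracket t^(|dA-dB|/2) + t^|eA-eB| + t^|gA-gB| - 3t is divided
    by its maximum 3*(1-t) and the results are averaged over X
    (normalisation is our assumption, not the paper's formula)."""
    n = len(IS_A)
    total = 0.0
    for (dA, eA, gA), (dB, eB, gB) in zip(IS_A, IS_B):
        bracket = (t ** (abs(dA - dB) / 2)
                   + t ** abs(eA - eB)
                   + t ** abs(gA - gB)
                   - 3 * t)
        total += (bracket / (3 * (1 - t))) ** lam
    return total / n
```

Smaller $t$ makes $t^{y}$ decay faster, i.e., the measure punishes component differences more aggressively.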

3.3. The Interrelations among Distance, Similarity and Entropy Measures

According to the concept of "duality", distance and similarity measures for SVNSs and IVNSs have been investigated. However, different knowledge backgrounds of decision makers lead to different results. Based on the interrelation between distance and similarity measures, Wang [23] first proposed the definitions of entropy and cross entropy for MVNSs and applied them to solving MCDM problems.
In this section, the interrelations among the distance, similarity and entropy measures of PNHFSs are investigated. According to Section 3.2, the distance measure quantifies the difference between factors, while the similarity measure quantifies their closeness. Because the two measures describe opposite aspects, the relationship between them is stated in the following theorem:
Theorem 7.
Suppose that A and B are two PNHFSs on X and the distance measure $D_{PNHFS}(A,B)$ satisfies the conditions in Definition 14. Then $S_{PNHFS}(A,B)=FN(D_{PNHFS}(A,B))$ is a similarity measure satisfying the axiomatic conditions in Definition 16, where $FN:[0,1]\to[0,1]$ is a fuzzy negation.
Proof. 
By Definitions 14 and 16, the proof is obvious, so it is omitted. □
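Theorem 7 can be sketched directly: composing any PNHFS distance with a fuzzy negation yields a similarity measure. Below we use the standard negation $N(x)=1-x$ and an illustrative Hamming-style toy distance; both concrete choices are ours, not the paper's.

```python
def standard_negation(x):
    """The standard fuzzy negation N(x) = 1 - x."""
    return 1.0 - x

def similarity_from_distance(dist, negation=standard_negation):
    """Theorem 7 sketch: S(A, B) = FN(D(A, B))."""
    return lambda A, B: negation(dist(A, B))

def toy_distance(IS_A, IS_B):
    """Illustrative normalised Hamming-style distance on (d, e, g) triples."""
    return sum(abs(a - b)
               for tA, tB in zip(IS_A, IS_B)
               for a, b in zip(tA, tB)) / (3 * len(IS_A))

sim = similarity_from_distance(toy_distance)
```

Any distance satisfying Definition 14 can be plugged in for `toy_distance`; the negation flips axioms (1) and (3) of the distance into the corresponding similarity axioms.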
According to the interpretation of the divisions of the neutrosophic space, and in order to better describe the stability of a PNHFS, the entropy measure of a PNHFS is defined as follows:
Definition 17.
Suppose that X is a reference set and $A=\{x,\{T|P^T\},\{I|P^I\},\{F|P^F\}\mid x\in X\}$ is a PNHFS in X. Then, the complement of A is expressed as follows:
$$A^c=\{x,\{F|P^F\},\{I|P^I\},\{T|P^T\}\mid x\in X\}.$$
Obviously, A c is also a PNHFS.
Definition 18.
Suppose that X is a finite reference set, A and B are two PNHFSs in X, and $IS_A$, $IS_B$ are the corresponding ISs of A and B, respectively. Then, a function $E: PNHFS(X)\to[0,1]$ is called an entropy measure if it satisfies the following four requirements:
(1)
$E(A)=0$ if $A=\{x,\{1|1\},\{0|1\},\{0|1\}\mid x\in X\}$ or $A=\{x,\{0|1\},\{0|1\},\{1|1\}\mid x\in X\}$ or $A=\{x,\{0|P_1\},\{0|P_2\},\{0|P_3\}\mid x\in X\}$;
(2)
$E(A)=1$ if $A=\{x,\{0.5|1\},\{0.5|1\},\{0.5|1\}\mid x\in X\}$;
(3)
$E(A)=E(A^c)$ iff $A=\{x,\{T|P^T\},\{I|P^I\},\{F|P^F\}\mid x\in X\}$ satisfies $\sum_{b=1}^{L(I)}i_bP_b^I=\sum_{c=1}^{L(F)}f_cP_c^F$, in which $A^c$ is the complement of A;
(4)
$E(B)\le E(C)$ when $S_{PNHFS}(A,B)\le S_{PNHFS}(A,C)$ or $D_{PNHFS}(A,B)\ge D_{PNHFS}(A,C)$, in which
$A=\{x,\{0.5|P_1\},\{0.5|P_2\},\{0.5|P_3\}\mid x\in X\}$.
Since we are only concerned with the influence of $d(x)$, $e(x)$ and $g(x)$ on the stability of an IS, the following theorems are introduced:
Theorem 8.
Suppose that X is a finite reference set, A is a PNHFS in X, and the corresponding IS of A is described by I S A . Then, the following formula:
$$E(A)=MDT\big(MIU_1(|d_A(x)|),\,MIU_2(|2e_A(x)-1|),\,MIU_3(|g_A(x)|)\big)$$
is an entropy measure, in which $MIU_1,MIU_2,MIU_3:[0,1]\to[0,1]$ are three monotonically increasing unary mappings with $MIU_1(0)=MIU_2(0)=MIU_3(0)=0$ and $MIU_1(1)=MIU_2(1)=MIU_3(1)=1$. The function $MDT:[0,1]^3\to[0,1]$ is a monotonically decreasing ternary mapping whose partial derivatives are at most zero, with $MDT(0,0,0)=1$ and $MDT(1,1,1)=0$.
Proof. 
We show that $E(A)$ satisfies all the conditions of Definition 18.
  • Let $A=\{x,\{1|1\},\{0|1\},\{0|1\}\mid x\in X\}$, $A=\{x,\{0|1\},\{0|1\},\{0|1\}\mid x\in X\}$ or $A=\{x,\{0|1\},\{0|1\},\{1|1\}\mid x\in X\}$; then the corresponding ISs of A satisfy $|d_A(x)|=|2e_A(x)-1|=|g_A(x)|=1$. Next, the entropy measure of A is calculated as follows:
    $$E(A)=MDT(MIU_1(1),MIU_2(1),MIU_3(1))=MDT(1,1,1)=0.$$
  • $E(A)=1\Leftrightarrow MDT\big(MIU_1(|d_A(x)|),MIU_2(|2e_A(x)-1|),MIU_3(|g_A(x)|)\big)=1\Leftrightarrow MIU_1(|d_A(x)|)=MIU_2(|2e_A(x)-1|)=MIU_3(|g_A(x)|)=0\Leftrightarrow|d(x)|=0,\,|2e(x)-1|=0,\,|g(x)|=0\Leftrightarrow t_a=f_c=0.5,\,i_b=0.5,\,\forall a,b,c$.
  • Let $A=\{x,\{T_A|P^T\},\{I_A|P^I\},\{F_A|P^F\}\mid x\in X\}$; then the complement of A is $A^c=\{x,\{F_A|P^F\},\{I_A|P^I\},\{T_A|P^T\}\mid x\in X\}$. By Definition 9, the equality $IS_A=IS_{A^c}$ is obtained, so obviously $E(A)=E(A^c)$.
  • Suppose that B and C are two PNHFSs on X and $A=\{x,\{0.5|P_a^T\},\{0.5|P_b^I\},\{0.5|P_c^F\}\mid x\in X\}$; then the corresponding IS of A is $IS_A=(0,0,0)$. By Theorem 5, the following similarity measures are obtained:
    $$S_{PNHFS}(A,B)=MIU\left(AIO\left(MIT\left(MDU_1\left(\tfrac{|d_B(x)|}{2}\right),MDU_2(|e_B(x)|),MDU_3(|g_B(x)|)\right)\right)\right);$$
    $$S_{PNHFS}(A,C)=MIU\left(AIO\left(MIT\left(MDU_1\left(\tfrac{|d_C(x)|}{2}\right),MDU_2(|e_C(x)|),MDU_3(|g_C(x)|)\right)\right)\right).$$
Since $S_{PNHFS}(A,B)\le S_{PNHFS}(A,C)$ and all the functions involved are monotonic, we have $|d_B(x)|\ge|d_C(x)|$, $|e_B(x)|\ge|e_C(x)|$ and $|g_B(x)|\ge|g_C(x)|$. Finally, by the requirements of Theorem 8, $E(B)\le E(C)$.
Additionally, $D_{PNHFS}(A,B)=1-S_{PNHFS}(A,B)$ and $D_{PNHFS}(A,C)=1-S_{PNHFS}(A,C)$, so the proof based on the distance measure is omitted. □
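A concrete instance of Theorem 8 can be written in a few lines. We choose $MIU_i(y)=y$ and $MDT(y_1,y_2,y_3)=1-(y_1+y_2+y_3)/3$; these particular functions are our choice, and any functions meeting the theorem's monotonicity and boundary conditions would serve equally well.

```python
def entropy_is(d, e, g):
    """Entropy of one IS component (d, e, g) per Theorem 8, assuming
    MIU_i(y) = y and MDT(y1, y2, y3) = 1 - (y1 + y2 + y3) / 3.
    Here d, g range over [-1, 1] and e over [0, 1]."""
    return 1 - (abs(d) + abs(2 * e - 1) + abs(g)) / 3
```

With these choices $MDT(0,0,0)=1$ and $MDT(1,1,1)=0$, so the boundary axioms (1) and (2) of Definition 18 hold: the maximally fuzzy component $(0,0.5,0)$ scores 1 and fully crisp components score 0.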
Theorem 9.
Suppose that X is a finite reference set, A is a PNHFS on X, and I S A is the corresponding IS about A. Then, Equation (17) is an entropy measure:
$$E(A)=MIT\big(MDU_1(|d_A(x)|),\,MDU_2(|2e_A(x)-1|),\,MDU_3(|g_A(x)|)\big).$$
$E(A)$ satisfies the following restrictions: $MDU_1,MDU_2,MDU_3:[0,1]\to[0,1]$ are three monotonically decreasing unary mappings with $MDU_1(0)=MDU_2(0)=MDU_3(0)=1$ and $MDU_1(1)=MDU_2(1)=MDU_3(1)=0$. The mapping $MIT:[0,1]^3\to[0,1]$ is a monotonically increasing ternary function whose partial derivatives are at least 0, with $MIT(0,0,0)=0$ and $MIT(1,1,1)=1$.
Based on Equations (16) and (17), different entropy measures can be established.
The above analysis shows that the entropy measure depicts the instability of a PNHFS, with the distance and similarity measures playing a vital role; conversely, the entropy measure helps us to better understand the distance and similarity measures. Next, entropy measures are constructed from the distance measure and the similarity measure, respectively.
Theorem 10.
Suppose D is a distance measure obtained according to Definition 14 and $B=\{x,\{0.5|P_a^T\},\{0.5|P_b^I\},\{0.5|P_c^F\}\mid x\in X\}$; then $E(A)=MDU(D_{PNHFS}(A,B))$ is an entropy measure, where $MDU:[0,1]\to[0,1]$ is a decreasing unary function whose derivative is at most 0, with $MDU(0)=1$ and $MDU(1)=0$.
Theorem 11.
Suppose S is a similarity measure obtained according to Definition 16 and $B=\{x,\{0.5|P_a^T\},\{0.5|P_b^I\},\{0.5|P_c^F\}\mid x\in X\}$; then $E(A)=MIU(S_{PNHFS}(A,B))$ is an entropy measure, where $MIU:[0,1]\to[0,1]$ is an increasing unary function whose derivative is at least 0, with $MIU(0)=0$ and $MIU(1)=1$.
The proofs of Theorem 10 and Theorem 11 are not unfolded here. Similarly, we can also obtain the following theorems, whose proofs are straightforward.
Theorem 12.
Supposing that $D_{PNHFS}$ is the distance measure of PNHFS A, $S_{PNHFS}$ is the similarity measure of PNHFS A, and $B=\{x,\{0.5|P_a^T\},\{0.5|P_b^I\},\{0.5|P_c^F\}\mid x\in X\}$, then $E(A)=MIB(MDU(D_{PNHFS}(A,B)),MIU(S_{PNHFS}(A,B)))$ is an entropy measure. $MIB:[0,1]^2\to[0,1]$ is an increasing binary function whose partial derivatives are at least 0, with $MIB(0,0)=0$ and $MIB(1,1)=1$. The mappings $MDU:[0,1]\to[0,1]$ and $MIU:[0,1]\to[0,1]$ are a decreasing unary function and an increasing unary function, respectively, with $MDU(0)=1$, $MDU(1)=0$, $MIU(0)=0$ and $MIU(1)=1$.
Theorem 13.
Supposing that $D_{PNHFS}$ is the distance measure of PNHFS A, $S_{PNHFS}$ is the similarity measure of PNHFS A, and $B=\{x,\{0.5|P_a^T\},\{0.5|P_b^I\},\{0.5|P_c^F\}\mid x\in X\}$, then $E(A)=MDB(MIU(D_{PNHFS}(A,B)),MDU(S_{PNHFS}(A,B)))$ is an entropy measure. $MDB:[0,1]^2\to[0,1]$ is a decreasing binary function whose partial derivatives are at most 0, with $MDB(1,1)=0$ and $MDB(0,0)=1$. The mappings $MIU:[0,1]\to[0,1]$ and $MDU:[0,1]\to[0,1]$ are an increasing unary function and a decreasing unary function, respectively, with $MIU(0)=0$, $MIU(1)=1$, $MDU(0)=1$ and $MDU(1)=0$.

4. Method Analysis Based on Illustrations and Applications

4.1. Comparative Evaluations

In real life, the investment problem is a common MCDM problem, and many researchers have proposed different types of distance and similarity measures of SVNHFSs to settle it. In this part, a well-known investment selection situation is introduced. The specific evaluations and precise data of the alternatives for an investment company choosing where to invest are listed in Table 1. Table 1 displays the decision matrix of four alternatives $A_1,A_2,A_3,A_4$ and three evaluation criteria $C_1,C_2,C_3$. The four alternatives are Real Estate, Oil Exploitation, Bank Financial and Western Restaurant, respectively. The three criteria are Market Prospect, Risk Assessment and Earning Cycle, respectively. The ideal element is $A^*=\langle\{1|1\},\{0|1\},\{0|1\}\rangle$.
Note 1.
The data for this investment selection problem in Table 1 are in the form of PNHFNs. The PNHFS is one of the generalizations of the NHFS, as described in item (1) after Definition 6; thus the definition of a PNHFS can also be applied to an NHFS. For instance, $\langle\{0.5,0.6\},\{0.1\},\{0.3\}\rangle$ is an NHFE. We can describe it as $\langle\{0.5|0.5,0.6|0.5\},\{0.1|1\},\{0.3|1\}\rangle$, which is a PNHFE.
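Note 1's conversion can be sketched as attaching uniform probabilities to each hesitant part; the uniform split follows the note's example (0.5 and 0.6 each receiving probability 0.5), and the function name is ours.

```python
def nhfe_to_pnhfe(nhfe):
    """Turn an NHFE (three lists of membership values) into a PNHFE by
    giving every value in a part the uniform probability 1/len(part)."""
    return tuple({v: 1 / len(part) for v in part} for part in nhfe)

pnhfe = nhfe_to_pnhfe(([0.5, 0.6], [0.1], [0.3]))
# pnhfe == ({0.5: 0.5, 0.6: 0.5}, {0.1: 1.0}, {0.3: 1.0})
```

Each resulting part is a value-to-probability dictionary whose probabilities sum to 1, matching the PNHFE constraint.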
Note 2.
The results are listed in Table 2, and the optimal result corresponds to the minimum value among the distance measures.
The optimal selections are shown in Table 3. Comparing the conclusions given by the existing distance measures (Xu and Xia's method, Singh's method, and Sahin's method), we found that the selections are the same as those of our method with $D^{\varphi=\mu=\nu=\lambda=1}$, $D^{\varphi=\mu=\nu=2,\lambda=\frac{1}{2}}$ and $D^{\varphi=\mu=1,\nu=2,\lambda=1}$. However, the conclusions given by $D^{\varphi=\nu=1,\mu=2,\lambda=1}$ and $D^{\varphi=2,\nu=\mu=1,\lambda=1}$ differ from those of the existing methods.
Thus, we deduce that the outcome may change if we change the inner structure of the distance measure formula. Since the components $|d_A(x)-d_B(x)|$, $|e_A(x)-e_B(x)|$ and $|g_A(x)-g_B(x)|$ describe certain attitudes, knowledge backgrounds and hesitancy degrees, respectively, we believe the new type of distance measure is effective and significant. If the difference in the decision makers' hesitancy degrees and background knowledge is relatively large, whether they reach the same conclusion carries little reference value. However, when this difference is not too large, analyzing the reasons for the divergence in their opinions is significant and thus important for making rational decisions.

4.2. Streamlining the Talent Selection Process

In many areas of life, existing evaluation systems are incomplete, resulting in redundant evaluation processes and wasted resources, and hence in low evaluation efficiency across the entire decision-making sector. An analysis of the relevant decision literature shows that unnecessary waste of human resources is widespread. For example, under rapid information growth and the trend of economic globalization, companies with well-established evaluation systems are concentrated in large cities or large countries. In addition, untimely exchange of information is an important cause of wasted decision resources. In multi-criteria decision-making, the final results become inaccurate when decision information is lost. Thus, in this situation, we explain the application by taking an investment company's choice of the best investment project as an example.
ABC Investment Co., Ltd. is a large investment consulting company whose decision-making capability is in a leading position. Policymakers therefore prefer ABC Investment Co., Ltd. over relatively less advanced companies. As a result, large investment companies are overloaded while small investment departments sit idle, which wastes corporate resources. Ultimately, helping companies share information in decision-making systems, and thereby improve their decision processes, is critical for guiding them toward more rational choices of decision-making companies. Thus, when enterprises face risky decision-making problems, they should let large decision-making departments handle them, but they should not blindly choose large investment departments for all decision-making problems.
With regard to those decision-making issues that need to be transferred to an upper-level department for processing, the judgment given by the decision maker is a critical step. Accurate judgments, together with consensus between decision makers at the corresponding level and the higher-level decision-making departments, provide a reference for the development of the enterprise and synthesize knowledge from different levels to improve decision-making efficiency.
Combined with the above considerations, companies establish decision-making systems to improve decision-making efficiency, and it is necessary for them to maintain a database of their decision information. Some enterprises have established computer-network-based storage and retrieval systems for enterprise-centric collection and investigation of decision data. Effectively sharing decision data among decision-making departments benefits the companies' development. Therefore, to reduce excessive and unnecessary evaluations, PNHFNs are used to express the decision makers' conclusions for the MCDM problems faced by companies.
For instance, $\langle\{T|P^T\},\{I|P^I\},\{F|P^F\}\rangle$ is a decision maker's judgment for an MCDM problem, where T describes the decision maker's degree of support for the problem being solvable, I indicates the decision maker's degree of indeterminacy, and F expresses the decision maker's degree of dissent. The probabilities $P^T$, $P^I$ and $P^F$ are the corresponding statistical values of T, I and F, respectively.
Next, we introduce an illustration by utilizing the new distance and similarity measures to perfect the accurate evaluation for reducing the excessive re-evaluations. The special illustration of a talents selection problem is introduced as follows:
$C: \{A_1,A_2,A_3,A_4\}$ is a set of four investors.
E : { E 1 , E 2 } is a set of two stock consultants from the higher and lower companies, respectively.
$A:$ {RE Network Technology Company (RE); DR Biotechnology Company (DR); EV Chemical Company (EV); and FL Technology Company (FL)} is a set of stocks that the investors need to consider.
Then, regarding the investment questions, the evaluation information of the two experts is described and listed in Table 4 and Table 5.
First, the evaluation information is normalized; since space is limited, the results are omitted. According to the above explanations, the distance and similarity measures between the two reports' evaluations are calculated with the following functions:
$$D(E_1,E_2)=\begin{cases}5\log_3\left(1+\dfrac{|d_A(x)-d_B(x)|^2}{4}+|e_A(x)-e_B(x)|+\dfrac{|g_A(x)-g_B(x)|}{2}\right), & \text{when } |e_A(x)-e_B(x)|\ge0.15;\\[2mm] 5\log_3\left(1+\dfrac{|d_A(x)-d_B(x)|}{2}+\dfrac{|e_A(x)-e_B(x)|}{2}+\dfrac{|g_A(x)-g_B(x)|}{2}\right), & \text{when } |e_A(x)-e_B(x)|<0.15.\end{cases}$$
$$S(E_1,E_2)=\begin{cases}\dfrac{1}{2}\left(\left(\left(\dfrac{1}{2}\right)^{\frac{|d_A(x)-d_B(x)|}{2}}\right)^{3}+\left(\dfrac{1}{2}\right)^{|e_A(x)-e_B(x)|+\frac{|g_A(x)-g_B(x)|}{2}}-0.5\right), & \text{when } |e_A(x)-e_B(x)|\ge0.15;\\[2mm] \dfrac{1}{2}\left(\left(\dfrac{1}{2}\right)^{3|e_A(x)-e_B(x)|+\frac{|d_A(x)-d_B(x)|}{2}+\frac{|g_A(x)-g_B(x)|}{2}}+0.5\right), & \text{when } |e_A(x)-e_B(x)|<0.15.\end{cases}$$
According to the investors' knowledge backgrounds, the threshold value is set to 0.15. If the difference between the stock consultants' evaluations is below 0.15, their evaluations are worth deep discussion and study, and may be a key factor in the investment choice. Conversely, the impact of the difference in conclusions is not the most important.
Next, the results of the distance and similarity measures of every criterion for each investment problem are described by the corresponding matrices $D(E_1,E_2)$ and $S(E_1,E_2)$:
$$D(E_1,E_2)=\begin{pmatrix}0.2242 & 0.0396 & \widehat{0.7380} & 0.0715\\ \widehat{0.7777} & 0.4676 & 0.5701 & 0.1101\\ 0.2693 & 0.3948 & 0.7351 & \widehat{0.7932}\\ 0.2208 & 0.3892 & \widehat{0.5937} & 0.2866\end{pmatrix},$$
$$S(E_1,E_2)=\begin{pmatrix}0.6575 & 0.7257 & \widehat{0.6833} & 0.7455\\ \widehat{0.5023} & 0.6088 & 0.5272 & 0.6933\\ 0.6367 & 0.5848 & 0.6522 & \widehat{0.6589}\\ 0.6806 & 0.6400 & \widehat{0.5485} & 0.6628\end{pmatrix}.$$
Based on the above conclusions, to confirm which criterion needs further examination, the stock consultant should discuss the threshold value for the distance measure with the investors, while the similarity results serve as a reference for the investor and stock consultant when considering further examinations. For this question background, 0.15 is the threshold value of the distance measure for every investor (the threshold value is determined by a third-party data source and is not discussed here). Based on the interpretation of the distance measure, the threshold value of the similarity measure is then determined. The matrices $D(E_1,E_2)$ and $S(E_1,E_2)$ help us to interpret the results.
Observing the matrix $D(E_1,E_2)$: investor $A_1$ needs to focus on EV; investor $A_2$ needs to focus on RE; investor $A_3$ needs to focus on EV and FL; and investor $A_4$ does not need to focus on RE, DR or FL.
Likewise, from the matrix $S(E_1,E_2)$, for investors $A_2$ and $A_4$ we obtain the same conclusions as those given by $D(E_1,E_2)$. However, the similarity measure of $A_1$ is not the smallest, and neither are the similarity measures of $A_3$ for EV and FL; both $A_1$ and $A_3$ show large distance and similarity measures at the same time. The reason is that the problem contexts differ: the distance and similarity measures of $A_1$ are computed with the first formulas in (18) and (19), while the conclusions for $A_3$ are computed with the second formulas in (18) and (19). Clearly, the different knowledge backgrounds of the stock consultants caused the results for $A_1$, which are relatively less strict for criterion EV. Furthermore, the stock consultants need more in-depth communication to make judgments and suggestions about criteria EV and FL for $A_3$.
However, in order to reach a decision faster for $A_3$, the entropy measure can be utilized. For $A_3$, the stock consultants provide the normalized probabilistic neutrosophic hesitant fuzzy information with respect to EV listed below:
E 1 = { { 0.6 | 1 } , { 0.4 | 0.625 , 0.5 | 0.375 } , { 0.4 | 0.56 , 0.6 | 0.44 } } , E 2 = { { 0.5 | 0.44 , 0.6 | 0.56 } , { 0.5 | 0.44 , 0.7 | 0.56 } , { 0.5 | 1 } } .
By utilizing Equations (18) and (19), and the following entropy measure
E ( A ) = ( 1 − D ( A , B ) + S ( A , B ) ) / 2
to compute the stock consultants' entropy for criterion E V , in which B = { ⟨ x , { 0.5 | P 1 } , { 0.5 | P 2 } , { 0.5 | P 3 } ⟩ | x ∈ X } , we obtain
E ( E 1 ) = 0.5393 ; E ( E 2 ) = 0.5977 .
The larger the entropy value, the easier it is for the stock consultant to change his/her mind. The investor should therefore make a contract with stock consultant E 2 first, and only then with E 1 . Suppose that consultant E 2 changes his mind first and his new opinion is closer to that of E 1 ; then it is unnecessary for investor A 3 to make an appointment with E 1 . Obviously, this method is more convenient, flexible and efficient, and it helps to reduce unnecessary selective re-examinations. In addition, applying the entropy measure in MCDM situations is conducive to improving resource utilization.
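The entropy shortcut can be sketched as follows. This is a minimal illustration assuming the combination of distance and similarity to the maximally fuzzy element B given above; the distance and similarity inputs are placeholder values, not the actual outputs of Equations (18) and (19).

```python
# Entropy of an evaluation relative to the maximally fuzzy element
# B = {{0.5|P1}, {0.5|P2}, {0.5|P3}}: a small distance to B and a large
# similarity to B both indicate high fuzziness, i.e. high entropy.
def entropy(distance_to_B, similarity_to_B):
    return (1 - distance_to_B + similarity_to_B) / 2

# Placeholder distance/similarity values for the two consultants on EV.
consultants = {
    "E1": entropy(0.40, 0.48),   # ≈ 0.54
    "E2": entropy(0.35, 0.55),   # ≈ 0.60
}

# Contact the consultant with the larger entropy first: he/she is the one
# most likely to change his/her mind, possibly making the second
# appointment unnecessary.
order = sorted(consultants, key=consultants.get, reverse=True)
print(order)  # ['E2', 'E1']
```

The ordering step is the whole point of the shortcut: if the first consultant contacted revises his opinion toward the other's, the remaining appointment can be skipped.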
It is worth noting that the evaluation information is described by PNHFSs, which include both objective information and subjective degrees. Decision makers can select the most suitable form of expression of a PNHFS for the practical situation at hand.

5. Conclusions and Future Research

Based on the concept of PNHFS, the theory of NSs is enriched and its range of application is widened. The different types of fuzziness related to the uncertain neutrosophic space are investigated. Through analysis and comparison, we see that the neutrosophic space is composed of an indeterminate subspace and a relatively certain subspace, and these two types of subspace should be distinguished; the connections among these subspaces are also investigated. To address the drawbacks of existing distance and similarity measures, a new method is established to define the measures of PNHFSs, which satisfies the basic axioms of a measure. The connections among the novel distance, similarity and entropy measures are then studied and compared with previously proposed methods, showing that our methods are more effective. Finally, against the background of investment selection, the novel distance, similarity and entropy measures are shown to reduce invalid evaluation processes, which is important for improving the evaluation efficiency of the entire selection system. The results show that the proposed methods are meaningful and, when applied, can solve more complicated problems, such as talent selection.
Furthermore, in Examples 3 and 5, the parameters ϕ , μ , ν and λ depict the experts' individual preferences and knowledge backgrounds; the more information that is expressed, the more accurately these parameters can be set. How to determine the parameters in the measures is therefore a significant problem. The practicality of the new measures has been demonstrated by applying the distance, similarity and entropy measures to investment selection, and the new distance (similarity) and entropy measures will be studied in combination with related backgrounds so as to extend them to other practical situations. Considering the privacy of information, the related applications of the new measures will help guide decision makers in their evaluations. In future work, the novel measures will be integrated with related methods in order to expand their scope of application, and measures based on the correlation and complexity of investors' information will be established. Finally, the properties of the entropy measure have not been studied in full, so the axioms of the entropy measure will receive more attention. The basic operation laws of PNHFSs and ISs have been omitted here, and research on this topic will be pursued further.

Author Contributions

All authors have contributed equally to this paper. S.S. and X.Z. initiated the investigation and organized the draft. S.S. put forward this idea and completed the preparation of the paper. S.S. collected existing research results on PNHFS. X.Z. revised and submitted the document.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61573240).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smarandache, F. Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis & Synthetic Analysis; American Research Press: Santa Fe, NM, USA, 1998. [Google Scholar]
  2. Smarandache, F. A Unifying Field in Logics. Neutrosophy: Neutrosophic Probability, Set and Logic; American Research Press: Santa Fe, NM, USA, 1999. [Google Scholar]
  3. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar]
  4. Haibin, W.; Smarandache, F.; Zhang, Y.; Sunderraman, R. Single Valued Neutrosophic Sets; Infinite Study: Brasov, Romania, 2010. [Google Scholar]
  5. Kazimieras Zavadskas, E.; Baušys, R.; Lazauskas, M. Sustainable assessment of alternative sites for the construction of a waste incineration plant by applying WASPAS method with single-valued neutrosophic set. Sustainability 2015, 7, 15923–15936. [Google Scholar]
  6. Şahin, R.; Küçük, A. Subsethood measure for single valued neutrosophic sets. J. Intell. Fuzzy Syst. 2015, 29, 525–530. [Google Scholar]
  7. Wang, H.; Smarandache, F.; Sunderraman, R.; Zhang, Y.Q. Interval Neutrosophic Sets and Logic: Theory and Applications in Computing: Theory and Applications in Computing; Hexis: Phoenix, AZ, USA, 2005; Volume 5. [Google Scholar]
  8. Broumi, S.; Talea, M.; Smarandache, F.; Bakali, A. Decision-making method based on the interval valued neutrosophic graph. In Proceedings of the 2016 Future Technologies Conference (FTC), San Francisco, CA, USA, 6–7 December 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 44–50. [Google Scholar]
  9. Liu, P.; Tang, G. Some power generalized aggregation operators based on the interval neutrosophic sets and their application to decision-making. J. Intell. Fuzzy Syst. 2016, 30, 2517–2528. [Google Scholar]
  10. Bao, Y.L.; Yang, H.L. On single valued neutrosophic refined rough set model and its application. In Fuzzy Multi-Criteria Decision-Making Using Neutrosophic Sets; Springer: Cham, Switzerland, 2019; pp. 107–143. [Google Scholar]
  11. Bakali, A.; Smarandache, F.; Rao, V.V. Single-Valued Neutrosophic Techniques for Analysis of WIFI Connection. In Advanced Intelligent Systems for Sustainable Development (AI2SD’2018): Volume 5: Advanced Intelligent Systems for Computing Sciences; Springer: Cham, Switzerland, 2019; p. 405. [Google Scholar]
  12. Ashraf, S.; Abdullah, S.; Smarandache, F.; ul Amin, N. Logarithmic hybrid aggregation operators based on single valued neutrosophic sets and their applications in decision support systems. Symmetry 2019, 11, 364. [Google Scholar]
  13. Sun, R.; Hu, J.; Chen, X. Novel single-valued neutrosophic decision-making approaches based on prospect theory and their applications in physician selection. Soft Comput. 2019, 23, 211–225. [Google Scholar]
  14. Ye, J. Multiple attribute group decision-making method with single-valued neutrosophic interval number information. Int. J. Syst. Sci. 2019, 50, 152–162. [Google Scholar]
  15. Thong, N.T.; Dat, L.Q.; Hoa, N.D.; Ali, M.; Smarandache, F.; Son, L.H. Dynamic interval valued neutrosophic set: Modeling decision-making in dynamic environments. Comput. Ind. 2019, 108, 45–52. [Google Scholar]
  16. Nagarajan, D.; Lathamaheswari, M.; Broumi, S.; Kavikumar, J. A new perspective on traffic control management using triangular interval type-2 fuzzy sets and interval neutrosophic sets. Oper. Res. Perspect. 2019, 6, 100099. [Google Scholar]
  17. Ye, J. Multiple-attribute decision-making method under a single-valued neutrosophic hesitant fuzzy environment. J. Intell. Syst. 2015, 24, 23–36. [Google Scholar]
  18. Ye, J. Multiple-attribute decision-making method using similarity measures of single-valued neutrosophic hesitant fuzzy sets based on least common multiple cardinality. J. Intell. Fuzzy Syst. 2018, 34, 4203–4211. [Google Scholar]
  19. Li, X.; Zhang, X. Single-valued neutrosophic hesitant fuzzy Choquet aggregation operators for multi-attribute decision-making. Symmetry 2018, 10, 50. [Google Scholar]
  20. Peng, J.J.; Wang, J.Q.; Wu, X.H.; Wang, J.; Chen, X.H. Multi-valued neutrosophic sets and power aggregation operators with their applications in multi-criteria group decision-making problems. Int. J. Comput. Intell. Syst. 2015, 8, 345–363. [Google Scholar]
  21. Ji, P.; Zhang, H.-Y.; Wang, J.-Q. A projection-based TODIM method under multi-valued neutrosophic environments and its application in personnel selection. Neural Comput. Appl. 2018, 29, 221–234. [Google Scholar]
  22. Li, B.-L.; Wang, J.-R.; Yang, L.-H.; Li, X.-T. Multiple criteria decision-making approach with multivalued neutrosophic linguistic normalized weighted Bonferroni mean Hamacher operator. Math. Probl. Eng. 2018, 2018. [Google Scholar] [CrossRef]
  23. Wang, X.; Wang, X.K.; Wang, J.Q. Cloud service reliability assessment approach based on multi-valued neutrosophic entropy and cross-entropy measures. Filomat 2018, 32, 2793–2812. [Google Scholar]
  24. Fang, Z.; Ye, J. Multiple attribute group decision-making method based on linguistic neutrosophic numbers. Symmetry 2017, 9, 111. [Google Scholar]
  25. Tian, Z.-P.; Wang, J.; Wang, J.-Q.; Zhang, H.-Y. Simplified neutrosophic linguistic multi-criteria group decision-making approach to green product development. Group Decis. Negot. 2017, 26, 597–627. [Google Scholar]
  26. Liu, P.; You, X. Bidirectional projection measure of linguistic neutrosophic numbers and their application to multi-criteria group decision-making. Comput. Ind. Eng. 2019, 128, 447–457. [Google Scholar]
  27. Broumi, S.; Smarandache, F.; Maji, P.K. Intuitionistic neutrosphic soft set over rings. Math. Stat. 2014, 2, 120–126. [Google Scholar]
  28. Broumi, S.; Nagarajan, D.; Bakali, A.; Talea, M.; Smarandache, F.; Lathamaheswari, M. The shortest path problem in interval valued trapezoidal and triangular neutrosophic environment. Complex Intell. Syst. 2019, 1–12. [Google Scholar] [CrossRef]
  29. Broumi, S.; Bakali, A.; Talea, M.; Smarandache, F.; Singh, P.K.; Uluçay, V.; Khan, M. Bipolar complex neutrosophic sets and its application in decision making problem. In Fuzzy Multi-Criteria Decision-Making Using Neutrosophic Sets; Springer: Cham, Switzerland, 2019; pp. 677–710. [Google Scholar]
  30. Deli, I. npn-Soft sets theory and their applications. Ann. Fuzzy Math. Inf. 2015, 10, 3–16. [Google Scholar]
  31. Deli, I.; Broumi, S. Neutrosophic soft matrices and NSM-decision-making. J. Intell. Fuzzy Syst. 2015, 28, 2233–2241. [Google Scholar]
  32. Smarandache, F. n-Valued Refined Neutrosophic Logic and Its Applications to Physics; Infinite Study 2013. Available online: https://arxiv.org/abs/1407.1041 (accessed on 10 June 2019).
  33. Li, L. p-topologicalness—A relative topologicalness in ⊤—Convergence spaces. Mathematics 2019, 7, 228. [Google Scholar]
  34. Li, L.; Jin, Q.; Hu, K. Lattice-valued convergence associated with CNS spaces. Fuzzy Sets Syst. 2019, 317, 91–98. [Google Scholar]
  35. Zhang, X.; Wu, X.; Mao, X.; Smarandache, F.; Park, C. On neutrosophic extended triplet groups (loops) and Abel-Grassmann’s groupoids (AG-groupoids). J. Intell. Fuzzy Syst. 2019, 1–11. [Google Scholar] [CrossRef]
  36. Wu, X.; Zhang, X. The decomposition theorems of AG-neutrosophic extended triplet loops and strong AG-(l, l)-loops. Mathematics 2019, 7, 268. [Google Scholar]
  37. Ma, Y.; Zhang, X.; Yang, X.; Zhou, X. Generalized neutrosophic extended triplet group. Symmetry 2019, 11, 327. [Google Scholar]
  38. Zhang, X.; Wang, X.; Smarandache, F.; Jaíyéolá, T.G.; Lian, T. Singular neutrosophic extended triplet groups and generalized groups. Cogn. Syst. Res. 2019, 57, 32–40. [Google Scholar]
  39. Zhang, X.; Borzooei, R.; Jun, Y. Q-filters of quantum B-algebras and basic implication algebras. Symmetry 2018, 10, 573. [Google Scholar]
  40. Zhang, X.; Bo, C.; Smarandache, F.; Park, C. New operations of totally dependent-neutrosophic sets and totally dependent-neutrosophic soft sets. Symmetry 2018, 10, 187. [Google Scholar]
  41. Xu, Z.; Zhou, W. Consensus building with a group of decision makers under the hesitant probabilistic fuzzy environment. Fuzzy Optim. Decis. Mak. 2017, 16, 481–503. [Google Scholar]
  42. Hao, Z.; Xu, Z.; Zhao, H.; Su, Z. Probabilistic dual hesitant fuzzy set and its application in risk evaluation. Knowl.-Based Syst. 2017, 127, 16–28. [Google Scholar]
  43. Zhai, Y.; Xu, Z.; Liao, H. Measures of probabilistic interval-valued intuitionistic hesitant fuzzy sets and the application in reducing excessive medical examinations. IEEE Trans. Fuzzy Syst. 2018, 26, 1651–1670. [Google Scholar]
  44. He, Y.; Xu, Z. Multi-attribute decision-making methods based on reference ideal theory with probabilistic hesitant information. Expert Syst. Appl. 2019, 118, 459–469. [Google Scholar]
  45. Wu, W.; Li, Y.; Ni, Z.; Jin, F.; Zhu, X. Probabilistic interval-valued hesitant fuzzy information aggregation operators and their application to multi-attribute decision-making. Algorithms 2018, 11, 120. [Google Scholar]
  46. Zhou, W.; Xu, Z. Group consistency and group decision-making under uncertain probabilistic hesitant fuzzy preference environment. Inf. Sci. 2017, 414, 276–288. [Google Scholar]
  47. Xie, W.; Ren, Z.; Xu, Z.; Wang, H. The consensus of probabilistic uncertain linguistic preference relations and the application on the virtual reality industry. Knowl.-Based Syst. 2018, 162, 14–28. [Google Scholar]
  48. Shao, S.; Zhang, X.; Li, Y.; Bo, C. Probabilistic single-valued (interval) neutrosophic hesitant fuzzy set and its application in multi-attribute decision-making. Symmetry 2018, 10, 419. [Google Scholar]
  49. Peng, H.G.; Zhang, H.Y.; Wang, J.Q. Probability multi-valued neutrosophic sets and its application in multi-criteria group decision-making problems. Neural Comput. Appl. 2018, 30, 563–583. [Google Scholar]
  50. Ye, J. Similarity measures between interval neutrosophic sets and their applications in multicriteria decision-making. J. Intell. Fuzzy Syst. 2014, 26, 165–172. [Google Scholar]
  51. Majumdar, P.; Samanta, S.K. On similarity and entropy of neutrosophic sets. J. Intell. Fuzzy Syst. 2014, 26, 1245–1252. [Google Scholar]
  52. Ye, J. Clustering methods using distance-based similarity measures of single-valued neutrosophic sets. J. Intell. Syst. 2014, 23, 379–389. [Google Scholar]
  53. Garg, H. Algorithms for possibility linguistic single-valued neutrosophic decision-making based on COPRAS and aggregation operators with new information measures. Measurement 2019, 138, 278–290. [Google Scholar]
  54. Biswas, P.; Pramanik, S.; Giri, B.C. Some distance measures of single valued neutrosophic hesitant fuzzy sets and their applications to multiple attribute decision-making. In New Trends in Neutrosophic Theory and Applications; Pons Editions: Brussels, Belgium, 2016; pp. 55–63. [Google Scholar]
  55. Ren, H.; Xiao, S.; Zhou, H. A Chi-square distance-based similarity measure of single-valued neutrosophic set and applications. Int. J. Comput. Commun. Control 2019, 14, 78–89. [Google Scholar]
  56. Cao, C.; Zeng, S.; Luo, D. A single-valued neutrosophic linguistic combined weighted distance measure and its application in multiple-attribute group decision-making. Symmetry 2019, 11, 275. [Google Scholar]
  57. Zhao, S.; Wang, D.; Changyong, L.; Lu, W. Induced choquet integral aggregation operators with single-valued neutrosophic uncertain linguistic numbers and their application in multiple attribute group decision-making. Math. Probl. Eng. 2019, 2019. [Google Scholar] [CrossRef]
  58. Sahin, R.; Liu, P. Distance and similarity measures for multiple attribute decision making with single-valued neutrosophic hesitant fuzzy information. In New Trends in Neutrosophic Theory and Applications; Pons Editions: Brussels, Belgium, 2016; pp. 35–54. [Google Scholar]
Table 1. Probabilistic neutrosophic hesitant fuzzy decision matrix of the investment problem.
  C 1   C 2  
  A 1 { { 0.3 | 0.3 , 0.4 | 0.3 , 0.5 | 0.3 } , { 0.1 | 1 } , { 0.3 | 0.5 , 0.4 | 0.5 } }   { { 0.5 | 0.5 , 0.6 | 0.5 } , { 0.2 | 0.5 , 0.3 | 0.5 } , { 0.3 | 0.5 , 0.4 | 0.5 } }  
  A 2 { { 0.6 | 0.5 , 0.7 | 0.5 } , { 0.1 | 0.5 , 0.2 | 0.5 } , { 0.2 | 0.5 , 0.3 | 0.5 } }   { { 0.6 | 0.5 , 0.7 | 0.5 } , { 0.1 | 1 } , { 0.3 | 1 } }  
  A 3 { { 0.5 | 0.5 , 0.6 | 0.5 } , { 0.4 | 1 } , { 0.2 | 0.5 , 0.3 | 0.5 } }   { { 0.6 | 1 } , { 0.3 | 1 } , { 0.4 | 1 } }  
  A 4 { { 0.7 | 0.5 , 0.8 | 0.5 } , { 0.1 | 1 } , { 0.1 | 0.5 , 0.2 | 0.5 } }   { { 0.6 | 0.5 , 0.7 | 0.5 } , { 0.1 | 1 } , { 0.2 | 1 } }  
  C 3   
  A 1 { { 0.2 | 0.5 , 0.3 | 0.5 } , { 0.1 | 0.5 , 0.2 | 0.5 } , { 0.5 | 0.5 , 0.6 | 0.5 } }  
  A 2 { { 0.6 | 0.5 , 0.7 | 0.5 } , { 0.1 | 0.5 , 0.2 | 0.5 } , { 0.1 | 0.5 , 0.2 | 0.5 } }  
  A 3 { { 0.5 | 0.5 , 0.6 | 0.5 } , { 0.1 | 1 } , { 0.3 | 1 } }  
  A 4 { { 0.3 | 0.5 , 0.5 | 0.5 } , { 0.2 | 1 } , { 0.1 | 0.3 , 0.2 | 0.3 , 0.3 | 0.3 } }  
Table 2. Results shown by Equation (10) corresponding to different parameters.
Parameter  A 1   A 2   A 3   A 4  Ranking
  D ϕ = μ = ν = λ = 1   0.1269 0.0635 0.0989   0.1053   A 1 > A 4 > A 3 > A 2
  D ϕ = μ = ν = 2 , λ = 1 2   0.1498   0.1063   0.1150   0.1239   A 1 > A 4 > A 3 > A 2
  D ϕ = μ = 1 , ν = 2 , λ = 1   0.0956   0.0622   0.0561   0.0792   A 1 > A 4 > A 3 > A 2
  D ϕ = ν = 1 , μ = 2 , λ = 1   0.1024   0.0691   0.0523   0.0733   A 1 > A 4 > A 2 > A 3
  D ϕ = 2 , μ = ν = 1 , λ = 1   0.1871   0.1203   0.1071   0.1449   A 1 > A 4 > A 2 > A 3
Table 3. Relationships between existing methods and our method.
  Method Ranking The Best Result The Worst Result
 Xu and Xia’s  Method  A 1 > A 4 > A 3 > A 2   A 1   A 2
 Singh’s  Method  A 1 > A 4 > A 3 > A 2   A 1   A 2
 Sahin’s  Method  A 1 > A 4 > A 3 > A 2   A 1   A 2
Table 4. Probabilistic neutrosophic hesitant fuzzy decision matrix of E 1 .
  RE   DR  
  A 1 { { 0.4 | 0.6 , 0.6 | 0.2 } , { 0.4 | 0.6 } , { 0.3 | 0.4 , 0.4 | 0.5 } }   { { 0.3 | 0.4 , 0.6 | 0.4 } , { 0.5 | 0.5 , 0.6 | 0.4 } , { 0.3 | 0.4 } }  
  A 2 { { 0.5 | 0.4 , 0.6 | 0.3 } , { 0.4 | 0.2 , 0.6 | 0.5 } , { 0.3 | 0.4 } }   { { 0.6 | 0.5 } , { 0.4 | 0.3 , 0.6 | 0.5 } , { 0.4 | 0.6 , 0.6 | 0.3 } }  
  A 3 { { 0.5 | 0.7 } , { 0.4 | 0.3 , 0.5 | 0.4 } , { 0.4 | 0.3 , 0.6 | 0.5 } }   { { 0.4 | 0.5 , 0.6 | 0.5 } , { 0.5 | 0.6 } , { 0.4 | 0.4 , 0.5 | 0.4 } }  
  A 4 { { 0.5 | 0.3 } , { 0.2 | 0.1 , 0.4 | 0.5 , 0.6 | 0.2 } , { 0.5 | 0.7 } }   { { 0.6 | 0.5 } , { 0.4 | 0.5 , 0.6 | 0.5 } , { 0.5 | 0.3 , 0.6 | 0.5 } }  
  EV   FL  
  A 1 { { 0.7 | 0.5 , 0.8 | 0.5 } , { 0.3 | 0.5 , 0.4 | 0.4 } , { 0.5 | 0.6 } }   { { 0.5 | 0.4 , 0.7 | 0.6 } , { 0.3 | 0.5 , 0.5 | 0.4 } , { 0.5 | 0.4 } }  
  A 2 { { 0.7 | 0.3 , 0.8 | 0.5 } , { 0.6 | 0.6 } , { 0.4 | 0.5 , 0.6 | 0.4 } }   { { 0.6 | 0.4 , 0.8 | 0.4 } , { 0.4 | 0.2 , 0.6 | 0.5 } , { 0.5 | 0.3 } }  
  A 3 { { 0.6 | 0.5 } , { 0.4 | 0.5 , 0.5 | 0.3 } , { 0.4 | 0.5 , 0.6 | 0.4 } }   { { 0.6 | 0.5 } , { 0.5 | 0.4 , 0.6 | 0.4 } , { 0.5 | 0.6 , 0.6 | 0.4 } }  
  A 4 { { 0.6 | 0.3 , 0.8 | 0.5 } , { 0.4 | 0.6 } , { 0.5 | 0.3 , 0.6 | 0.5 } }   { { 0.6 | 0.5 , 0.8 | 0.4 } , { 0.4 | 0.6 } , { 0.4 | 0.5 , 0.5 | 0.4 } }  
Table 5. Probabilistic neutrosophic hesitant fuzzy decision matrix of E 2 .
  RE   DR  
  A 1 { { 0.6 | 0.5 } , { 0.4 | 0.2 , 0.6 | 0.6 } , { 0.4 | 0.6 , 0.6 | 0.2 } }   { { 0.5 | 0.4 , 0.7 | 0.4 } , { 0.6 | 0.4 } , { 0.4 | 0.6 , 0.5 | 0.4 } }  
  A 2 { { 0.3 | 0.4 } , { 0.5 | 0.4 } , { 0.2 | 0.2 , 0.4 | 0.5 , 0.6 | 0.3 } }   { { 0.5 | 0.6 } , { 0.6 | 0.4 } , { 0.5 | 0.3 , 0.6 | 0.4 } }  
  A 3 { { 0.4 | 0.6 , 0.6 | 0.2 } , { 0.6 | 0.3 } , { 0.5 | 0.4 , 0.6 | 0.5 } }   { { 0.6 | 0.4 , 0.8 | 0.4 } , { 0.5 | 0.3 , 0.7 | 0.5 } , { 0.5 | 0.4 } }  
  A 4 { { 0.5 | 0.4 , 0.6 | 0.4 } , { 0.5 | 0.3 } , { 0.3 | 0.4 , 0.6 | 0.5 } }   { { 0.7 | 0.5 } , { 0.5 | 0.6 , 0.6 | 0.3 } , { 0.5 | 0.6 } }  
  EV   FL  
  A 1 { { 0.5 | 0.3 , 0.6 | 0.5 } , { 0.4 | 0.4 , 0.6 | 0.6 } , { 0.3 | 0.6 } }   { { 0.6 | 0.6 } , { 0.3 | 0.5 } , { 0.4 | 0.4 , 0.5 | 0.3 , 0.6 | 0.3 } }  
  A 2 { { 0.5 | 0.4 , 0.6 | 0.3 } , { 0.5 | 0.6 , 0.6 | 0.3 } , { 0.5 | 0.5 } }   { { 0.5 | 0.6 , 0.6 | 0.4 } , { 0.4 | 0.5 , 0.6 | 0.3 } , { 0.3 | 0.4 } }  
  A 3 { { 0.5 | 0.4 , 0.6 | 0.5 } , { 0.5 | 0.4 , 0.7 | 0.5 } , { 0.5 | 0.8 } }   { { 0.4 | 0.6 , 0.7 | 0.4 } , { 0.3 | 0.4 , 0.4 | 0.6 } , { 0.5 | 0.5 } }  
  A 4 { { 0.5 | 0.6 } , { 0.5 | 0.5 } , { 0.4 | 0.2 , 0.6 | 0.5 , 0.7 | 0.3 } }   { { 0.5 | 0.5 , 0.7 | 0.5 } , { 0.5 | 0.4 } , { 0.4 | 0.6 , 0.6 | 0.3 } }  
