Article

A Novel Similarity Measure of Single-Valued Neutrosophic Sets Based on Modified Manhattan Distance and Its Applications

1 Chengyi University College, Jimei University, Xiamen 361021, China
2 Department of Basic Subjects, Jiangxi University of Science and Technology, Nanchang 330013, China
3 School of Vocation Teachers, Jiangxi Agricultural University, Nanchang 330045, China
4 Department of Computer Science and Mathematics, Sul Ross State University, Alpine, TX 79830, USA
* Author to whom correspondence should be addressed.
Electronics 2022, 11(6), 941; https://doi.org/10.3390/electronics11060941
Submission received: 8 February 2022 / Revised: 4 March 2022 / Accepted: 14 March 2022 / Published: 17 March 2022
(This article belongs to the Section Artificial Intelligence)

Abstract: A single-valued neutrosophic (SVN) set contains three parameters, which can well describe three aspects of an objective thing. However, most previous similarity measures of SVN sets encounter counter-intuitive examples. The Manhattan distance is a well-known distance that has been applied in pattern recognition, image analysis, ad-hoc wireless sensor networks, etc. To develop suitable distance measures, a new distance measure of SVN sets based on the modified Manhattan distance is constructed, and a new distance-based similarity measure is also put forward. Several applications of the proposed similarity measure are then introduced. First, we present a pattern recognition algorithm. Then, a multi-attribute decision-making method is proposed, in which a weighting method is developed by building an optimization model based on the proposed similarity measure. Furthermore, a clustering algorithm is also put forward. Several examples illustrate these methods.

1. Introduction

In the objective world, there are many fuzzy phenomena. In 1965, Zadeh [1] created a method to describe such fuzzy phenomena: fuzzy set theory. Fuzzy set theory has been successfully applied in many fields since it was put forward. In 1974, Mamdani [2] successfully applied fuzzy linguistic logic to industrial processes, marking the birth of fuzzy control. Over the past half-century, fuzzy set theory has made great progress and has been extensively used in many areas [3,4,5,6].
However, Zadeh's fuzzy set contains only one membership degree, so it can only reflect "yes" and "no" information. In some decision-making problems, owing to the limitations of professional knowledge and the lack of time and energy, decision makers hesitate to different degrees when making decisions, resulting in three aspects of evaluation results: affirmation, negation, and hesitation between affirmation and negation. Zadeh's fuzzy set cannot describe this kind of situation, so several extended fuzzy sets, such as intuitionistic fuzzy sets, hesitant fuzzy sets, and neutrosophic fuzzy sets, have been introduced [7,8,9,10]. For example, to address this problem, Atanassov extended fuzzy sets in 1986 and put forward the concept of intuitionistic fuzzy sets. By adding a nonmembership parameter, the intuitionistic fuzzy set can well describe the hesitation and uncertainty of decision makers' judgments. Compared with fuzzy sets, intuitionistic fuzzy sets are more flexible and practical in dealing with fuzziness and uncertainty and have been widely used in pattern recognition, medical diagnosis, image processing, and management decision-making.
Intuitionistic fuzzy sets still have shortcomings: they are difficult to apply to uncertain and inconsistent information. For example, in a multi-voting process, 30% may be in favor, 20% against, 10% abstaining, and 40% neutral or absent. This situation is beyond the scope of intuitionistic fuzzy sets. To deal with practical decision-making problems similar to the above example, Smarandache [11] proposed the concept of a neutrosophic set in 1999. The neutrosophic set adds an independent uncertainty measure to the intuitionistic fuzzy set; in other words, decision makers use membership, uncertainty, and nonmembership degrees to describe the evaluation of things. The neutrosophic set is thus an expansion of the intuitionistic fuzzy set that can more finely describe the fuzzy essence of the real world. In the neutrosophic set, the uncertainty is clearly quantified, and the degrees of truth, indeterminacy, and falsity are completely independent. The above voting example can be expressed as the neutrosophic value x(0.3, 0.4, 0.2). The neutrosophic set can thus well depict three aspects of a thing, which is an advantage over the classical fuzzy set. However, neutrosophic sets are difficult to apply directly to many real engineering problems. To make them easy to apply in science and engineering, Wang et al. [12] introduced a subclass of Smarandache's neutrosophic set named the single-valued neutrosophic (SVN) set, which has become a useful tool for handling various engineering problems. There are rich achievements in the theory and application of fuzzy sets [13,14,15,16,17,18,19,20].
Fuzzy distance can represent the degree of differentiation between two fuzzy subsets. It can be used for pattern recognition and information retrieval and can also be used to optimize the weights in a comprehensive decision [21,22]. A fuzzy similarity measure is an effective tool for comparing the degree of similarity between two fuzzy subsets. In a sense, distance and similarity measures can be represented by each other. In recent years, some distance measures of SVN sets have also been put forward, but such references are still rare. Manhattan distance is a term coined by Hermann Minkowski in the 19th century; it denotes, in a metric space, the sum of the absolute differences of the coordinates of two points in the standard coordinate system. For two non-zero real vectors x = (x_1, x_2, …, x_m) and y = (y_1, y_2, …, y_m), the Manhattan distance is defined as
d(x, y) = Σ_{j=1}^{m} |x_j − y_j|
Perlibakas [21] introduced a modified Manhattan distance as follows:
d_M(x, y) = Σ_{j=1}^{m} |x_j − y_j| / (Σ_{j=1}^{m} |x_j| × Σ_{j=1}^{m} |y_j|)
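The two distances above can be sketched in a few lines of Python (the vectors here are illustrative, not taken from the paper):

```python
def manhattan(x, y):
    # Classical Manhattan (city-block) distance: sum of absolute differences.
    return sum(abs(a - b) for a, b in zip(x, y))

def modified_manhattan(x, y):
    # Perlibakas's modification: the Manhattan distance normalized by the
    # product of the two absolute-coordinate sums (vectors must be non-zero).
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(abs(a) for a in x) * sum(abs(b) for b in y)
    return num / den

x, y = (1.0, 2.0, 3.0), (2.0, 2.0, 2.0)
print(manhattan(x, y))           # 2.0
print(modified_manhattan(x, y))  # 2 / 36 ≈ 0.0556
```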
Modified Manhattan distance has been successfully applied in the field of face recognition and image analysis [23,24]. Elgamel and Dandoush [25] applied modified Manhattan distance into a localization algorithm in ad-hoc wireless sensor networks, which has been widely used in various engineering fields [26,27,28,29].
Many classical distances have been extended to SVN sets, such as Euclidean distance, Hamming distance, etc. This paper will extend modified Manhattan distance to SVN sets and put forward a new distance-based similarity measure. Based on the proposed similarity measure, we will introduce a pattern recognition algorithm and a clustering algorithm. A multi-attribute decision-making method is also proposed, in which two weighting methods are proposed under an SVN environment when the attributes’ weight information is completely unknown and partially known. Furthermore, we will put forward a new similarity measure-based decision-making (DM) method. Finally, some analyzed cases show that the proposed DM method, pattern recognition, and clustering algorithms are effective and feasible.

2. Materials and Methods

2.1. Preliminary Knowledge

Some concepts and properties of SVN sets are introduced in this section. Let U be a universal set.
Definition 1.
A set P̃ defined in U and characterized by three mappings T_P, I_P, F_P : U → [0, 1] is named an SVN set if it has the following expression (Wang et al. [10]):
P̃ = {⟨u, T_P̃(u), I_P̃(u), F_P̃(u)⟩ | u ∈ U},
Here, T_P̃(u) is the truth-membership, I_P̃(u) the indeterminacy-membership, and F_P̃(u) the falsity-membership function, and for every element u ∈ U they satisfy 0 ≤ T_P̃(u) + I_P̃(u) + F_P̃(u) ≤ 3. For convenience, the element ⟨u, T_P̃(u), I_P̃(u), F_P̃(u)⟩ of P̃ is called an SVN value. If U contains only one element, then ⟨u, T_P̃(u), I_P̃(u), F_P̃(u)⟩ can be denoted by ⟨T_P̃, I_P̃, F_P̃⟩.
Definition 2.
Let P̃_i = {⟨u, T_P̃ᵢ(u), I_P̃ᵢ(u), F_P̃ᵢ(u)⟩ | u ∈ U} (i = 1, 2) be two SVN sets in U; then (Wang et al. [10])
(1) 
The complement of P ˜ 1 is
P̃_1^c = {⟨u, F_P̃₁(u), 1 − I_P̃₁(u), T_P̃₁(u)⟩ | u ∈ U}
(2) 
P̃_1 ⊆ P̃_2 if and only if
T_P̃₁(u) ≤ T_P̃₂(u), I_P̃₁(u) ≥ I_P̃₂(u), F_P̃₁(u) ≥ F_P̃₂(u) for all u in U.
(3) 
P̃_1 = P̃_2 if and only if P̃_1 ⊆ P̃_2 and P̃_2 ⊆ P̃_1.
Definition 3.
Let SVN(U) denote the set of all SVN sets in U, and let S be a mapping S : SVN(U) × SVN(U) → [0, 1]. Then S(P̃, Q̃) is called the similarity measure between two SVN sets P̃ and Q̃ if it satisfies the following four properties (Ye [17]):
(i)
S(P̃, Q̃) ∈ [0, 1];
(ii)
S ( P ˜ , Q ˜ ) = 1 iff P ˜ = Q ˜ ;
(iii)
S ( P ˜ , Q ˜ ) = S ( Q ˜ , P ˜ )
(iv)
If P̃ ⊆ Q̃ ⊆ R̃, then S(P̃, R̃) ≤ S(P̃, Q̃) and S(P̃, R̃) ≤ S(Q̃, R̃).

2.2. A New Modified Manhattan Distance and Similarity Measure of SVN Sets

This section will develop a new distance measure between two SVN sets based on modified Manhattan distance.
For two SVN values a = ⟨T_a, I_a, F_a⟩ and b = ⟨T_b, I_b, F_b⟩, the difference between a and b can be reflected by the two vectors x = (T_a, 1 − I_a, 1 − F_a) and y = (T_b, 1 − I_b, 1 − F_b). Then, based on the modified Manhattan distance d_M(x, y) with m = 3, we can define a distance between a and b as follows:
d_M(a, b) = [|T_a − T_b| + |(1 − I_a) − (1 − I_b)| + |(1 − F_a) − (1 − F_b)|] / ([T_a + (1 − I_a) + (1 − F_a)] × [T_b + (1 − I_b) + (1 − F_b)])
According to Equation (2), we can establish a new distance measure for two SVN sets P̃ = {⟨u, T_P̃(u), I_P̃(u), F_P̃(u)⟩ | u ∈ U} and Q̃ = {⟨u, T_Q̃(u), I_Q̃(u), F_Q̃(u)⟩ | u ∈ U} in U = {u_1, u_2, …, u_n} as follows:
d_M(P̃, Q̃) = (1/n) Σ_{i=1}^n [|T_P̃(u_i) − T_Q̃(u_i)| + |(1 − I_P̃(u_i)) − (1 − I_Q̃(u_i))| + |(1 − F_P̃(u_i)) − (1 − F_Q̃(u_i))|] / ([T_P̃(u_i) + (1 − I_P̃(u_i)) + (1 − F_P̃(u_i))] × [T_Q̃(u_i) + (1 − I_Q̃(u_i)) + (1 − F_Q̃(u_i))])
It can be reduced to the following form:
d_M(P̃, Q̃) = (1/n) Σ_{i=1}^n (|T_P̃(u_i) − T_Q̃(u_i)| + |I_P̃(u_i) − I_Q̃(u_i)| + |F_P̃(u_i) − F_Q̃(u_i)|) / ((2 + T_P̃(u_i) − I_P̃(u_i) − F_P̃(u_i)) × (2 + T_Q̃(u_i) − I_Q̃(u_i) − F_Q̃(u_i)))
Example 1.
Let P̃ = {⟨u, 0, 0.8, 0.2⟩}, Q̃ = {⟨u, 0.3, 0.5, 0.2⟩}, and R̃ = {⟨u, 0.3, 0.6, 0.0⟩} be three SVN sets, and regard them as the IF sets P̃ = {⟨u, 0, 0.2⟩}, Q̃ = {⟨u, 0.3, 0.2⟩}, and R̃ = {⟨u, 0.3, 0.1⟩}. According to the IF number ranking rule proposed by Xu and Yang [30], we have P̃ ≺ Q̃ ≺ R̃, and their distances should satisfy d(P̃, Q̃) ≤ d(P̃, R̃). Recall that the Euclidean distance measure of two SVN sets P̃ and Q̃ in U = {u_1, u_2, …, u_n} is
d_E(P̃, Q̃) = √{(1/n) Σ_{i=1}^n [(T_P̃(u_i) − T_Q̃(u_i))² + (I_P̃(u_i) − I_Q̃(u_i))² + (F_P̃(u_i) − F_Q̃(u_i))²]}
According to Equation (6), we get d_E(P̃, Q̃) = 0.4243 > 0.4123 = d_E(P̃, R̃), which contradicts d(P̃, Q̃) ≤ d(P̃, R̃). Using the proposed distance measure (5), we have d_M(P̃, Q̃) = 0.3750 and d_M(P̃, R̃) = 0.4118, which satisfies d(P̃, Q̃) ≤ d(P̃, R̃). This example shows that the modified Manhattan distance d_M(P̃, Q̃) performs better than the Euclidean distance measure d_E(P̃, Q̃).
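The computations of Example 1 can be checked with a short script implementing the proposed distance (5) and the Euclidean distance (6):

```python
import math

def d_M(P, Q):
    # Modified Manhattan distance between SVN sets given as lists of
    # (T, I, F) triples (Equation (5) of the text, simplified form).
    total = 0.0
    for (Tp, Ip, Fp), (Tq, Iq, Fq) in zip(P, Q):
        num = abs(Tp - Tq) + abs(Ip - Iq) + abs(Fp - Fq)
        den = (2 + Tp - Ip - Fp) * (2 + Tq - Iq - Fq)
        total += num / den
    return total / len(P)

def d_E(P, Q):
    # Euclidean distance between SVN sets (Equation (6) of the text).
    return math.sqrt(sum((Tp - Tq)**2 + (Ip - Iq)**2 + (Fp - Fq)**2
                         for (Tp, Ip, Fp), (Tq, Iq, Fq) in zip(P, Q)) / len(P))

P = [(0.0, 0.8, 0.2)]
Q = [(0.3, 0.5, 0.2)]
R = [(0.3, 0.6, 0.0)]
print(round(d_M(P, Q), 4), round(d_M(P, R), 4))  # 0.375 0.4118
print(round(d_E(P, Q), 4), round(d_E(P, R), 4))  # 0.4243 0.4123
```

As the example notes, d_M orders the pair correctly (0.375 < 0.4118) while d_E does not.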
According to d M ( P ˜ , Q ˜ ) , we can construct the following information measure:
S_M(P̃, Q̃) = exp(−d_M(P̃, Q̃))
In the following part, we will prove S M ( P ˜ , Q ˜ ) is a valid similarity measure.
By straightforward calculation, we can easily prove the following lemma.
Lemma 1.
Let a_i, b_i, c_i (i = 1, 2, 3) be non-negative real numbers with a_i + b_i + c_i > 0, satisfying δ_1 ≤ δ_2 ≤ δ_3 for each δ ∈ {a, b, c}. Then
(i) 
(|a_1 − a_2| + |b_1 − b_2| + |c_1 − c_2|) / (a_2 + b_2 + c_2) ≤ (|a_1 − a_3| + |b_1 − b_3| + |c_1 − c_3|) / (a_3 + b_3 + c_3)
(ii) 
(|a_2 − a_3| + |b_2 − b_3| + |c_2 − c_3|) / (a_2 + b_2 + c_2) ≤ (|a_1 − a_3| + |b_1 − b_3| + |c_1 − c_3|) / (a_1 + b_1 + c_1)
Theorem 1.
Let U = {u_1, u_2, …, u_n} be a universal set. Suppose that P̃ = {⟨u, T_P̃(u), I_P̃(u), F_P̃(u)⟩ | u ∈ U} and Q̃ = {⟨u, T_Q̃(u), I_Q̃(u), F_Q̃(u)⟩ | u ∈ U} are two SVN sets. Then, S_M(P̃, Q̃) is a valid similarity measure between the SVN sets P̃ and Q̃; that is, S_M(P̃, Q̃) satisfies properties (i)–(iv) of Definition 3.
Proof. 
(i) Obviously, d_M(P̃, Q̃) ≥ 0, and if P̃ = Q̃, i.e., T_P̃(u_i) = T_Q̃(u_i), I_P̃(u_i) = I_Q̃(u_i), F_P̃(u_i) = F_Q̃(u_i) for all u_i ∈ U, then d_M(P̃, Q̃) = 0.
For the special case P̃ = {⟨u, 1, 0, 0⟩ | u ∈ U} and Q̃ = {⟨u, 0, 1, 1⟩ | u ∈ U}, the denominator in d_M(P̃, Q̃) vanishes while the numerator equals 3, so d_M(P̃, Q̃) is unbounded above.
Then 0 ≤ d_M(P̃, Q̃), and thus 0 < S_M(P̃, Q̃) = exp(−d_M(P̃, Q̃)) ≤ 1.
(ii) When P ˜ = Q ˜ , d M ( P ˜ , Q ˜ ) = 0 . Then S M ( P ˜ , Q ˜ ) = 1 .
(iii) Obviously, S M ( P ˜ , Q ˜ ) satisfies the symmetric properties.
(iv) If P̃ ⊆ Q̃ ⊆ R̃, i.e.,
T_P̃(u) ≤ T_Q̃(u) ≤ T_R̃(u), I_P̃(u) ≥ I_Q̃(u) ≥ I_R̃(u), F_P̃(u) ≥ F_Q̃(u) ≥ F_R̃(u) for all u in U,
then
T_P̃(u) ≤ T_Q̃(u) ≤ T_R̃(u), 1 − I_P̃(u) ≤ 1 − I_Q̃(u) ≤ 1 − I_R̃(u), 1 − F_P̃(u) ≤ 1 − F_Q̃(u) ≤ 1 − F_R̃(u),
for all u in U.
According to Lemma 1, we have
[|T_P̃(u_i) − T_Q̃(u_i)| + |(1 − I_P̃(u_i)) − (1 − I_Q̃(u_i))| + |(1 − F_P̃(u_i)) − (1 − F_Q̃(u_i))|] / ([T_P̃(u_i) + (1 − I_P̃(u_i)) + (1 − F_P̃(u_i))] × [T_Q̃(u_i) + (1 − I_Q̃(u_i)) + (1 − F_Q̃(u_i))]) ≤ [|T_P̃(u_i) − T_R̃(u_i)| + |(1 − I_P̃(u_i)) − (1 − I_R̃(u_i))| + |(1 − F_P̃(u_i)) − (1 − F_R̃(u_i))|] / ([T_P̃(u_i) + (1 − I_P̃(u_i)) + (1 − F_P̃(u_i))] × [T_R̃(u_i) + (1 − I_R̃(u_i)) + (1 − F_R̃(u_i))])
and
[|T_Q̃(u_i) − T_R̃(u_i)| + |(1 − I_Q̃(u_i)) − (1 − I_R̃(u_i))| + |(1 − F_Q̃(u_i)) − (1 − F_R̃(u_i))|] / ([T_Q̃(u_i) + (1 − I_Q̃(u_i)) + (1 − F_Q̃(u_i))] × [T_R̃(u_i) + (1 − I_R̃(u_i)) + (1 − F_R̃(u_i))]) ≤ [|T_P̃(u_i) − T_R̃(u_i)| + |(1 − I_P̃(u_i)) − (1 − I_R̃(u_i))| + |(1 − F_P̃(u_i)) − (1 − F_R̃(u_i))|] / ([T_P̃(u_i) + (1 − I_P̃(u_i)) + (1 − F_P̃(u_i))] × [T_R̃(u_i) + (1 − I_R̃(u_i)) + (1 − F_R̃(u_i))])
Thus, we can conclude that
d_M(P̃, R̃) ≥ d_M(P̃, Q̃) and d_M(P̃, R̃) ≥ d_M(Q̃, R̃).
Consequently, we have
S_M(P̃, R̃) ≤ S_M(P̃, Q̃) and S_M(P̃, R̃) ≤ S_M(Q̃, R̃). □
Now, we will observe the performance of S_M(P̃, Q̃). Many similarity measures of SVN sets have been proposed. For example, Ye [31] introduced three similarity measures between SVN sets based on vector similarities: the Jaccard similarity measure S_J(P̃, Q̃), the Dice similarity measure S_D(P̃, Q̃), and the cosine similarity measure S_C(P̃, Q̃). Ye [32,33] put forward similarity measures (C_1(P̃, Q̃), C_2(P̃, Q̃), CoT_1(P̃, Q̃), CoT_2(P̃, Q̃)) of SVN sets based on cosine and cotangent functions, respectively, and applied them to medical diagnosis. For more references on SVN sets, one can refer to [34,35,36,37,38].
Example 2 will provide a comparative analysis between these measures and the proposed similarity measure. For convenience, we always suppose that U = {u_1, u_2, …, u_n} is the universal set, P̃ = {⟨u, T_P̃(u), I_P̃(u), F_P̃(u)⟩ | u ∈ U} and Q̃ = {⟨u, T_Q̃(u), I_Q̃(u), F_Q̃(u)⟩ | u ∈ U} are two SVN sets in U, and, for each u_i ∈ U, let
V(P̃) = T_P̃²(u_i) + I_P̃²(u_i) + F_P̃²(u_i),
V(Q̃) = T_Q̃²(u_i) + I_Q̃²(u_i) + F_Q̃²(u_i),
COV(P̃, Q̃) = T_P̃(u_i)T_Q̃(u_i) + I_P̃(u_i)I_Q̃(u_i) + F_P̃(u_i)F_Q̃(u_i).
Some existing similarity measures are briefly introduced in (1)–(6).
(1)
Jaccard vector similarity-based similarity measure:
S_J(P̃, Q̃) = (1/n) Σ_{i=1}^n COV(P̃, Q̃) / (V(P̃) + V(Q̃) − COV(P̃, Q̃))
(2)
Dice vector similarity-based similarity measure:
S_D(P̃, Q̃) = (1/n) Σ_{i=1}^n 2·COV(P̃, Q̃) / (V(P̃) + V(Q̃))
(3)
Cosine function-based similarity measure:
S_C(P̃, Q̃) = (1/n) Σ_{i=1}^n COV(P̃, Q̃) / √(V(P̃)·V(Q̃))
(4)
Improved cosine function-based similarity measure:
C_1(P̃, Q̃) = (1/n) Σ_{i=1}^n cos[(π/2) max(|T_P̃(u_i) − T_Q̃(u_i)|, |I_P̃(u_i) − I_Q̃(u_i)|, |F_P̃(u_i) − F_Q̃(u_i)|)]
C_2(P̃, Q̃) = (1/n) Σ_{i=1}^n cos[(π/6)(|T_P̃(u_i) − T_Q̃(u_i)| + |I_P̃(u_i) − I_Q̃(u_i)| + |F_P̃(u_i) − F_Q̃(u_i)|)]
(5)
Tangent function-based similarity measure:
T_1(P̃, Q̃) = 1 − (1/n) Σ_{i=1}^n tan[(π/4) max(|T_P̃(u_i) − T_Q̃(u_i)|, |I_P̃(u_i) − I_Q̃(u_i)|, |F_P̃(u_i) − F_Q̃(u_i)|)]
T_2(P̃, Q̃) = 1 − (1/n) Σ_{i=1}^n tan[(π/12)(|T_P̃(u_i) − T_Q̃(u_i)| + |I_P̃(u_i) − I_Q̃(u_i)| + |F_P̃(u_i) − F_Q̃(u_i)|)]
(6)
Cotangent function-based similarity measure:
CoT_1(P̃, Q̃) = (1/n) Σ_{i=1}^n cot[π/4 + (π/4) max(|T_P̃(u_i) − T_Q̃(u_i)|, |I_P̃(u_i) − I_Q̃(u_i)|, |F_P̃(u_i) − F_Q̃(u_i)|)]
CoT_2(P̃, Q̃) = (1/n) Σ_{i=1}^n cot[π/4 + (π/12)(|T_P̃(u_i) − T_Q̃(u_i)| + |I_P̃(u_i) − I_Q̃(u_i)| + |F_P̃(u_i) − F_Q̃(u_i)|)]
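As a reference for the comparison in Example 2, the three vector-similarity measures of Ye [31] can be sketched as follows (the SVN values reuse P̃ and Q̃ from Example 1):

```python
import math

def components(p, q):
    # V(P), V(Q), and COV(P, Q) for one pair of SVN values (T, I, F).
    Tp, Ip, Fp = p
    Tq, Iq, Fq = q
    vp = Tp*Tp + Ip*Ip + Fp*Fp
    vq = Tq*Tq + Iq*Iq + Fq*Fq
    cov = Tp*Tq + Ip*Iq + Fp*Fq
    return vp, vq, cov

def S_J(P, Q):
    # Jaccard vector similarity, averaged over the universe.
    return sum(c / (vp + vq - c) for vp, vq, c in map(components, P, Q)) / len(P)

def S_D(P, Q):
    # Dice vector similarity.
    return sum(2*c / (vp + vq) for vp, vq, c in map(components, P, Q)) / len(P)

def S_C(P, Q):
    # Cosine vector similarity.
    return sum(c / math.sqrt(vp*vq) for vp, vq, c in map(components, P, Q)) / len(P)

P = [(0.0, 0.8, 0.2)]
Q = [(0.3, 0.5, 0.2)]
print(round(S_J(P, Q), 4), round(S_D(P, Q), 4), round(S_C(P, Q), 4))
# 0.7097 0.8302 0.8656
```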
Example 2.
To compare the performance of our proposed similarity measure with other existing similarity measures, we consider a pattern recognition problem with six pairs of SVN sets, and here, we let U = { u } be the universal set. Table 1 shows the similarity values under the above-mentioned similarity measures and the proposed similarity measure.
From Table 1, we see that C_1(P̃, Q̃) and T_1(P̃, Q̃) cannot distinguish between Case 1 and Case 2. For Case 3, the counter-intuitive results are marked in bold italic, and only C_2(P̃, Q̃), T_2(P̃, Q̃), CoT_2(P̃, Q̃), and S_M(P̃, Q̃) are reasonable similarity measures. For Case 4 and Case 5, all similarity measures yield good recognition results. Thus, in this example, C_2(P̃, Q̃), T_2(P̃, Q̃), CoT_2(P̃, Q̃), and S_M(P̃, Q̃) are more valid similarity measures than the others.

3. Results

3.1. Application in Pattern Recognition

In pattern recognition, each object observed becomes a sample. For each sample, it is necessary to determine some factors related to the identification. As the basis of the study, each factor becomes a feature. A pattern is a description of the characteristics of a sample.
Suppose a sample X̃ has n features whose values are expressed by SVN values. Then X̃ can be regarded as an n-element SVN set. The problem of pattern recognition is to judge whether the pattern X̃ belongs to one of the known patterns P̃_k (k = 1, 2, …, m). A new similarity-based pattern recognition algorithm is given as follows:
Step 1: 
Choose the set of known patterns P = { P ˜ 1 , P ˜ 2 , , P ˜ m }  and the set of characters (indicators, attributes) U = { u 1 , u 2 , , u n } .
Step 2: 
Depict the target to be identified, assume the target is X ˜ .
Step 3: 
Calculate similarity measure between X ˜ and P ˜ i  according to Equation (5).
Step 4: 
Recognition criteria: Assign the object X̃ to P̃_k₀ if S(P̃_k₀, X̃) is maximal among all S(P̃_i, X̃) (i = 1, 2, …, m).
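The four steps above can be sketched as follows; the patterns and target here are hypothetical, not the data of Table 2:

```python
import math

def d_M(P, Q):
    # Modified Manhattan distance between SVN sets (lists of (T, I, F) triples).
    return sum(
        (abs(Tp - Tq) + abs(Ip - Iq) + abs(Fp - Fq))
        / ((2 + Tp - Ip - Fp) * (2 + Tq - Iq - Fq))
        for (Tp, Ip, Fp), (Tq, Iq, Fq) in zip(P, Q)
    ) / len(P)

def S_M(P, Q):
    # Distance-based similarity measure.
    return math.exp(-d_M(P, Q))

def recognize(patterns, X):
    # Assign X to the known pattern with the largest similarity.
    return max(patterns, key=lambda k: S_M(patterns[k], X))

# Hypothetical known patterns over two features.
patterns = {
    "P1": [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)],
    "P2": [(0.1, 0.8, 0.9), (0.2, 0.7, 0.8)],
}
X = [(0.8, 0.2, 0.1), (0.7, 0.3, 0.2)]
print(recognize(patterns, X))  # P1
```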
Example 3.
There are three known patterns P̃_k (k = 1, 2, 3) in U = {u_1, u_2, u_3}, and we assume Q̃ is an unknown object. They are modeled by the SVN sets shown in Table 2. We need to classify the object Q̃ into one of the classes P̃_k (k = 1, 2, 3).
Because S M ( P ˜ 3 , Q ˜ ) > S M ( P ˜ 1 , Q ˜ ) > S M ( P ˜ 2 , Q ˜ ) , then, according to the recognition criteria, the object Q ˜ belongs to P ˜ 3 .
Example 4.
Medical diagnosis problems are often complex, but with the help of artificial intelligence technologies, doctors can use information based on experience or historical data to make decisions. Wang et al. [39] pointed out that SVN sets are more suitable tools for tackling medical diagnosis problems than Zadeh's fuzzy set. Here, we use the pattern recognition method to deal with a medical diagnosis problem.
Let D̃ be a set of diseases (diagnoses), here D̃ = {Viral fever (D̃_1), Malaria (D̃_2), Typhoid (D̃_3), Stomach problem (D̃_4), Chest problem (D̃_5)}. Suppose that U is a set of symptoms, here U = {Temperature (u_1), Headache (u_2), Stomach pain (u_3), Cough (u_4), Chest pain (u_5)}. Each diagnosis D̃_j (j = 1, 2, 3, 4, 5) is modeled by an SVN set. A patient P̃ comes to see the doctor, and the corresponding symptoms are also represented by SVN sets. The data and the similarity values between the patient and each diagnosis can be found in Table 3.
Our task is to determine which diagnosis D̃_i (i = 1, 2, …, 5) the patient P̃ belongs to.
Among all the values, S_M(D̃_1, P̃) is the largest. Hence the patient P̃ is diagnosed with D̃_1 (Viral fever).

3.2. Application in MADM Problems

Consider a MADM problem that includes an alternative set {P̃_1, P̃_2, …, P̃_m} and an attribute set {u_1, u_2, …, u_n}, where p̃_ij = ⟨T_ij, I_ij, F_ij⟩ is the evaluation value of P̃_i on u_j. This section proposes a new DM method, described in Figure 1, based on the modified Manhattan distance of SVN sets and its induced similarity measure.
Let w_j be the importance degree (weight) of the attribute u_j, and let w = (w_1, w_2, …, w_n)ᵀ be the weight vector. The determination of attribute weights is a hot topic [40,41,42,43]. This subsection introduces two weighting methods using the proposed distance and similarity measures, covering the cases of completely unknown and partially known weight information.
(1)
When the attribute weight information is completely unknown, Wang [44] introduced the maximizing deviation method to determine the weights with the following formula:
w_j = Σ_{i=1}^m Σ_{k=1}^m d_M(p̃_ij, p̃_kj) / Σ_{j=1}^n Σ_{i=1}^m Σ_{k=1}^m d_M(p̃_ij, p̃_kj), j = 1, 2, …, n
where d_M(p̃_ij, p̃_kj) is the modified Manhattan distance between p̃_ij and p̃_kj defined in Equation (5), that is,
d_M(p̃_ij, p̃_kj) = (|T_ij − T_kj| + |I_ij − I_kj| + |F_ij − F_kj|) / ((2 + T_ij − I_ij − F_ij) × (2 + T_kj − I_kj − F_kj)).
(2)
When the attribute weight information is partially unknown, Ren et al. [45] introduced an optimization programming model:
max S = Σ_{i=1}^m Σ_{j=1}^n w_j S_M(p̃_ij, p̃_j⁺)
s.t. Σ_{j=1}^n w_j = 1, w ∈ H,
Here, H is the set of known weight information, and S_M(p̃_ij, p̃_j⁺) is the similarity measure between p̃_ij and p̃_j⁺ = ⟨1, 0, 0⟩, that is,
S_M(p̃_ij, p̃_j⁺) = exp(−(1 − T_ij + I_ij + F_ij) / (3(2 + T_ij − I_ij − F_ij)))
The optimal solution w of model (9) can be chosen as the attribute weights. Then, we propose a new MADM algorithm as follows:
Step 1. 
Calculate the attribute weights according to (7) and (8) or (9) and (10);
Step 2. 
Define the positive ideal solution (PIS) P ˜ + = ( p ˜ 1 + , p ˜ 2 + , , p ˜ n + ) , where p ˜ j + = < 1 , 0 , 0 > ( j = 1 , 2 , , n ).
Step 3. 
Calculate the similarity measures between each alternative P̃_i and the PIS as follows:
S(P̃_i, P̃⁺) = Σ_{j=1}^n w_j S_M(p̃_ij, p̃_j⁺) = Σ_{j=1}^n w_j exp(−(1 − T_ij + I_ij + F_ij) / (3(2 + T_ij − I_ij − F_ij)))
Step 4. 
Rank the alternatives according to S ( P ˜ i , P ˜ + ) with the rule:
The larger the value of S(P̃_i, P̃⁺), the better the alternative.
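The whole MADM procedure with the maximizing-deviation weighting (7)-(8) can be sketched compactly; the 3 × 2 decision matrix below is hypothetical:

```python
import math

def d_pair(a, b):
    # Modified Manhattan distance between two SVN values (Equation (8)).
    (Ta, Ia, Fa), (Tb, Ib, Fb) = a, b
    num = abs(Ta - Tb) + abs(Ia - Ib) + abs(Fa - Fb)
    return num / ((2 + Ta - Ia - Fa) * (2 + Tb - Ib - Fb))

def s_to_ideal(p):
    # Similarity of an SVN value to the positive ideal <1, 0, 0> (Equation (10)).
    T, I, F = p
    return math.exp(-(1 - T + I + F) / (3 * (2 + T - I - F)))

def madm_rank(D):
    # D[i][j] = SVN evaluation of alternative i on attribute j.
    m, n = len(D), len(D[0])
    # Maximizing-deviation weights (Equations (7)-(8)).
    dev = [sum(d_pair(D[i][j], D[k][j]) for i in range(m) for k in range(m))
           for j in range(n)]
    w = [d / sum(dev) for d in dev]
    # Weighted similarity to the positive ideal solution (Equation (11)).
    scores = [sum(w[j] * s_to_ideal(D[i][j]) for j in range(n)) for i in range(m)]
    return w, sorted(range(m), key=lambda i: -scores[i])

# Hypothetical 3-alternative, 2-attribute decision matrix.
D = [
    [(0.7, 0.2, 0.1), (0.6, 0.3, 0.2)],
    [(0.4, 0.5, 0.3), (0.5, 0.4, 0.4)],
    [(0.8, 0.1, 0.1), (0.7, 0.2, 0.1)],
]
w, order = madm_rank(D)
print(w, order)  # order[0] indexes the best alternative
```

Here the third alternative dominates the others attribute by attribute, so it is ranked first regardless of the weights.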
Now, two examples will be used to illustrate the effectiveness and feasibility of the proposed DM method.
Example 5.
Under economic globalization and fierce business competition, as enterprises grow in scale, they need to introduce ERP management systems to improve their competitiveness. ERP evolved gradually from MRP II; it integrates the internal and external resources of an enterprise, carries out a series of business process transformations, improves the enterprise system, and enhances competitiveness. Its content includes procurement, financial management, production and manufacturing, sales, research and development, human resources, and other functional modules, and it can be extended further outward by combining supply chain management, customer relationship management, and business intelligence. An ERP package is a software suite with rich functions, high complexity, high risk, and an expensive implementation process. Given these characteristics, an enterprise that develops its own software system must invest substantial human resources, material resources, time, and money, which is not economically beneficial. Therefore, considering economic benefits, enterprises tend to purchase existing ERP package software from experienced suppliers so as to save system development costs, shorten the implementation time, and grasp more business opportunities.
At present, with the expansion of enterprise scale, an enterprise urgently needs to implement ERP management. After preliminary investigation and screening, a set of five alternative ERP software packages {P̃_1, P̃_2, …, P̃_5} is finally determined for selection. The evaluation indexes are set up according to the six principles of completeness, relevance, hierarchy, conciseness, measurability, and independence. The ERP software is ranked and selected according to the following five evaluation indexes (attributes): system cost (u_1), function satisfaction degree (u_2), system stability (u_3), software reputation (u_4), and service level (u_5). After expert investigation and statistical analysis, the evaluation results of the five ERP software packages (schemes) on these five attributes are obtained and expressed by SVN values. The evaluation values are shown in Table 4.
In order to determine the best ERP management software, the method proposed in this paper is used to rank and select the best ERP management software.
The calculation steps are given as follows:
Step 1. 
According to (7) and (8), the attribute weights are solved as
w 1 = 0 . 1914 , w 2 = 0 . 1858 , w 3 = 0 . 2412 , w 4 = 0 . 2184 , w 5 = 0 . 1632 .
Step 2. 
Define PIS as P ˜ + = ( p ˜ 1 + , p ˜ 2 + , , p ˜ 5 + ) , where p ˜ j + = < 1 , 0 , 0 > ( j = 1 , 2 , , 5 ).
Step 3. 
According to Equation (10), we can get the similarity measures between each P̃_i and the PIS as follows:
S ( P ˜ 1 , P ˜ + ) = 0.8559 , S ( P ˜ 2 , P ˜ + ) = 0.8536 , S ( P ˜ 3 , P ˜ + ) = 0.8402 , S ( P ˜ 4 , P ˜ + ) = 0.8868 , S ( P ˜ 5 , P ˜ + ) = 0.9116
Step 4. 
Since S(P̃_5, P̃⁺) > S(P̃_4, P̃⁺) > S(P̃_1, P̃⁺) > S(P̃_3, P̃⁺) > S(P̃_2, P̃⁺), the rank order is P̃_5 ≻ P̃_4 ≻ P̃_1 ≻ P̃_3 ≻ P̃_2, and P̃_5 is the best ERP management software.
Example 6.
A new energy vehicle manufacturer wants to find the most suitable lithium battery supplier within a limited time. After market research and preliminary investigation, five alternative lithium battery suppliers are finally determined. After discussion by the decision-making leaders, it is decided to score each supplier according to the following five indicators: product price ( u 1 ), risk factors ( u 2 ), product quality ( u 3 ), supplier situation ( u 4 ), and supplier service performance ( u 5 ). The evaluation values are SVN numbers shown in Table 5.
Assume that the attribute weight information is partially known, with
H = {(w_1, …, w_5) | 0.2 ≤ w_1 ≤ 0.35, 0.05 ≤ w_2 ≤ 0.25, w_2 ≤ w_3 ≤ 0.35, w_1 − w_4 ≤ 0.1, 0.2 ≤ w_5 ≤ 0.4}
Now, we use the method proposed in this paper to rank and select the best suppliers.
Step 1. 
According to (9) and (10), we have the optimization model:
max S = 4.5147 w_1 + 4.5118 w_2 + 4.5485 w_3 + 4.5143 w_4 + 4.5826 w_5
s.t. 0.2 ≤ w_1 ≤ 0.35, 0.05 ≤ w_2 ≤ 0.25, w_2 ≤ w_3, w_3 ≤ 0.35, w_1 − w_4 ≤ 0.1, 0.2 ≤ w_5 ≤ 0.4, w_1 + w_2 + w_3 + w_4 + w_5 = 1, w_j ≥ 0 (j = 1, 2, …, 5)
Then the attribute weights can be solved as
w 1 = 0.2 , w 2 = 0.05 , w 3 = 0.25 , w 4 = 0.1 , w 5 = 0.4 .
Step 2. 
Define PIS as P ˜ + = ( p ˜ 1 + , , p ˜ 5 + ) = ( < 1 , 0 , 0 > , , < 1 , 0 , 0 > ) .
Step 3. 
According to Equation (11), we can get the similarity measures between each P̃_i and the PIS as follows:
S ( P ˜ 1 , P ˜ + ) = 0.9124 , S ( P ˜ 2 , P ˜ + ) = 0.9034 , S ( P ˜ 3 , P ˜ + ) = 0.9159 S ( P ˜ 4 , P ˜ + ) = 0.9083 , S ( P ˜ 5 , P ˜ + ) = 0.9101
Step 4. 
Since S(P̃_3, P̃⁺) > S(P̃_1, P̃⁺) > S(P̃_5, P̃⁺) > S(P̃_4, P̃⁺) > S(P̃_2, P̃⁺), the rank order is
P̃_3 ≻ P̃_1 ≻ P̃_5 ≻ P̃_4 ≻ P̃_2,
and P̃_3 is the most desirable supplier.
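Step 1's linear program can be checked with an off-the-shelf LP solver. The sketch below assumes the constraint directions w_2 ≤ w_3 ≤ 0.35 and w_1 − w_4 ≤ 0.1, which are consistent with the solved weights; scipy's `linprog` minimizes, so the objective is negated:

```python
from scipy.optimize import linprog

c = [-4.5147, -4.5118, -4.5485, -4.5143, -4.5826]  # maximize -> negate
A_ub = [
    [0, 1, -1, 0, 0],   # w2 - w3 <= 0
    [1, 0, 0, -1, 0],   # w1 - w4 <= 0.1
]
b_ub = [0.0, 0.1]
A_eq = [[1, 1, 1, 1, 1]]  # weights sum to 1
b_eq = [1.0]
bounds = [(0.2, 0.35), (0.05, 0.25), (0.0, 0.35), (0.0, 1.0), (0.2, 0.4)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print([round(w, 4) for w in res.x])  # [0.2, 0.05, 0.25, 0.1, 0.4]
```

The solver reproduces the weights reported in Step 1.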

3.3. Application in Clustering Analysis

Cluster analysis is not only an important research method in the field of data mining but also a very important decision-making method in the field of management decision-making. This section will develop a neutrosophic fuzzy netting clustering algorithm based on the new similarity measure.
The network clustering algorithm was first proposed by Zhao [46]. Its feature is to cluster directly on the cut matrix of the fuzzy similarity matrix, and the classification results are consistent with those when using the maximum and minimum transitive closure of the fuzzy similarity matrix. Wang et al. [47] and Ye [48] developed a network clustering algorithm under intuitionistic fuzzy and neutrosophic fuzzy environments, respectively.
Let a clustering problem contain m objects A_1, A_2, …, A_m and n characteristic indexes o_1, o_2, …, o_n. The measurement (evaluation) value of object A_i under characteristic index o_j is an SVN value p̃_ij = ⟨T_ij, I_ij, F_ij⟩, and the objects are to be classified. This paper proposes a new SVN netting clustering method based on the proposed similarity measure. The steps are as follows:
Step 1: 
Calculate the fuzzy similarity matrix S = ( s i k ) m × m , where
s_ik = exp[−(1/n) Σ_{j=1}^n (|T_ij − T_kj| + |I_ij − I_kj| + |F_ij − F_kj|) / ((2 + T_ij − I_ij − F_ij) × (2 + T_kj − I_kj − F_kj))].
Step 2: 
Select an appropriate parameter λ ∈ [0, 1], calculate the corresponding λ-cut matrix R_λ = (r_ik^λ)_{m×m}, where r_ik^λ = 1 if s_ik ≥ λ and r_ik^λ = 0 if s_ik < λ, and keep only the main diagonal and the lower-left elements of R_λ.
Step 3: 
Replace each main diagonal element "1" with the index i of A_i. Remove the "0"s at the lower left of the main diagonal, replace each remaining "1" with "*", and call the position of each "*" a node; the resulting matrix is denoted R_λ′.
Step 4: 
Connect each node with the labels on the diagonal using horizontal and vertical lines, that is, netting. The objects connected in this way are classified into one category.
Now, an example will be used to illustrate the effectiveness and feasibility of the proposed fuzzy clustering algorithm.
Example 7.
Suppose an automobile market wants to classify five different vehicles A 1 , A 2 , , A 5 . After expert discussion, the following six evaluation indexes (attributes) are determined: o 1 (fuel consumption), o 2 (friction degree), o 3 (price), o 4 (comfort); o 5 (Design) and o 6 (safety). The SVN evaluation matrix of these vehicles is shown in Table 6.
Now, we use the proposed netting clustering algorithm for the classification of the five vehicles. The clustering steps are described as follows:
Step 1. 
Calculate the fuzzy similarity matrix S as follows:
S =
( 1.0000 0.9331 0.8577 0.8816 0.8835
  0.9331 1.0000 0.8577 0.8105 0.8835
  0.8577 0.8577 1.0000 0.7562 0.9125
  0.8816 0.8105 0.7562 1.0000 0.8286
  0.8835 0.8835 0.9125 0.8286 1.0000 )
Step 2. 
(i) For the case of λ ∈ (0.9331, 1], we get the λ-cut matrix (main diagonal and lower-triangular part only) with the form:
R_λ =
( 1
  0  1
  0  0  1
  0  0  0  1
  0  0  0  0  1 )
Then we have
R_λ* =
( 1
     2
        3
           4
              5 )
and then these vehicles are divided into five categories: { A 1 }, { A 2 },{ A 3 }, { A 4 }, { A 5 }.
(ii) For the case of λ ∈ (0.9125, 0.9331], we get the λ-cut matrix with the form:
R_λ =
( 1
  1  1
  0  0  1
  0  0  0  1
  0  0  0  0  1 )
and
R_λ* =
( 1
  *  2
        3
           4
              5 )
and then these vehicles are divided into four categories: { A 1 , A 2 },{ A 3 }, { A 4 }, { A 5 }.
(iii) For the case of λ ∈ (0.8835, 0.9125], we get the λ-cut matrix with the form:
R_λ =
( 1
  1  1
  0  0  1
  0  0  0  1
  0  0  1  0  1 )
and
R_λ* =
( 1
  *  2
        3
           4
        *     5 )
and then these vehicles are divided into three categories: { A 1 , A 2 }, { A 3 , A 5 }, { A 4 }.
By similar calculation, we also have the following results:
(iv) For the case of λ ( 0.8816 , 0.8835 ] , these vehicles are divided into two categories: { A 1 , A 2 , A 3 , A 5 }, { A 4 };
(v) For the case of λ ( 0 , 0.8816 ] , these vehicles are divided into one category: { A 1 , A 2 , A 3 , A 4 , A 5 }.
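The case analysis above can be generated in one pass: sort the distinct off-diagonal similarity values in descending order and, as λ passes each level, merge the newly connected objects with a union-find structure. A sketch of this sweep (names are ours):

```python
def threshold_sweep(S):
    """Enumerate the clusterings obtained as lambda decreases through the
    distinct off-diagonal values of the similarity matrix S (union-find)."""
    m = len(S)
    parent = list(range(m))

    def find(i):                     # find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    levels = sorted({S[i][k] for i in range(m) for k in range(i)}, reverse=True)
    history = []
    for lam in levels:
        for i in range(m):           # union every pair with s_ik >= lam
            for k in range(i):
                if S[i][k] >= lam:
                    parent[find(i)] = find(k)
        groups = {}
        for i in range(m):
            groups.setdefault(find(i), []).append(i)
        history.append((lam, sorted(sorted(g) for g in groups.values())))
    return history
```

Applied to the matrix S of Step 1, the first four sweep levels reproduce cases (ii)–(v); case (i) corresponds to λ above the largest off-diagonal value 0.9331, where every vehicle forms its own category.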

4. Conclusions

A novel distance measure for SVN sets is established based on a modified Manhattan distance, and a new similarity measure between two SVN sets is put forward using the proposed distance formula. Through a comparative analysis, we find that the new similarity measure overcomes some counter-intuitive cases and is thus more effective than most existing similarity measures.
For further applications, we develop algorithms for pattern recognition and clustering analysis. A new decision-making (DM) method for MADM problems under the SVN environment is also proposed, together with two weighting methods. When the attribute weight information is completely unknown, we use the maximizing-deviation method to determine the attribute weights; when it is partially known, we establish an optimization model based on the proposed similarity measure to solve for the weights. Two examples illustrate the feasibility and effectiveness of the proposed DM method.
The pattern recognition and clustering algorithms can be used in engineering practice and medical diagnosis problems. The proposed DM method could also be applied to other evaluation problems, such as construction safety assessment and water quality evaluation.
In the future, we will study fuzzy image segmentation using the proposed distance and similarity measures.

Author Contributions

Conceptualization, Y.Z., S.X. and H.R.; methodology, T.Y., Y.Z. and H.R.; software, T.Y.; validation, T.Y. and N.X.; formal analysis, H.R.; writing—original draft preparation, Y.Z. and S.X.; writing—review and editing, Y.Z., H.R. and T.Y.; funding acquisition, S.X. and H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was mainly supported by the National Natural Science Foundation of China (no. 71661012), Science Foundation for Youth Teacher of Fujian Educational Committee (no. JAT201026) and the Industry University Research Innovation Fund of Chinese Universities (no. 2019ITA01053). The authors also appreciate the support of the Foundation of Fujian Provincial Education Department (no. JT180872).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data generated or analysed during this study are included in this published article; further details are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flow chart of the fuzzy MADM model under SVN environment.
Table 1. Values of the similarity measures under different pairs of (P̃, Q̃).

|                 | Case 1        | Case 2        | Case 3        | Case 4        | Case 5        |
|-----------------|---------------|---------------|---------------|---------------|---------------|
| P̃              | <0.3,0.3,0.4> | <0.3,0.3,0.4> | <0.4,0.2,0.6> | <0.4,0.4,0.2> | <0.4,0.4,0.2> |
| Q̃              | <0.4,0.3,0.4> | <0.4,0.3,0.3> | <0.2,0.2,0.3> | <0.5,0.2,0.3> | <0.5,0.3,0.2> |
| S_J(P̃, Q̃)     | 0.9737        | 0.9429        | 0             | 0.8500        | 0.9474        |
| S_D(P̃, Q̃)     | 0.9867        | 0.9706        | 0             | 0.9189        | 0.9730        |
| S_C(P̃, Q̃)     | 0.9910        | 0.9706        | Null          | 0.9193        | 0.9733        |
| C_1(P̃, Q̃)     | 0.9877        | 0.9877        | 0             | 0.9511        | 0.9877        |
| C_2(P̃, Q̃)     | 0.9986        | 0.9945        | 0.8660        | 0.9781        | 0.9945        |
| T_1(P̃, Q̃)     | 0.9213        | 0.9213        | 0             | 0.8416        | 0.9213        |
| T_2(P̃, Q̃)     | 0.9738        | 0.9476        | 0.7321        | 0.8949        | 0.9476        |
| CoT_1(P̃, Q̃)   | 0.8541        | 0.8541        | 0             | 0.7265        | 0.8541        |
| CoT_2(P̃, Q̃)   | 0.9490        | 0.9004        | 0.5774        | 0.8098        | 0.9004        |
| S_M(P̃, Q̃)     | 0.9639        | 0.9329        | 0.8321        | 0.8948        | 0.9460        |
Table 2. SVN set expressions of the patterns and similarity values.

|      | u_1           | u_2           | u_3           | S_M(P̃_i, Q̃) |
|------|---------------|---------------|---------------|---------------|
| P̃_1 | <0.9,0.2,0.1> | <0.8,0.2,0.1> | <0.7,0.2,0.1> | 0.9329        |
| P̃_2 | <0.8,0.2,0.2> | <0.9,0.1,0.2> | <0.9,0.1,0.1> | 0.9297        |
| P̃_3 | <0.7,0.3,0.1> | <0.8,0.1,0.2> | <0.8,0.1,0.2> | 0.9438        |
| Q̃   | <0.5,0.3,0.2> | <0.6,0.3,0.2> | <0.8,0.2,0.1> |               |
Table 3. SVN set expressions of the diagnoses and the patient.

|      | u_1           | u_2           | u_3           | u_4           | u_5           | S_M(D̃_i, P̃) |
|------|---------------|---------------|---------------|---------------|---------------|---------------|
| D̃_1 | <0.4,0.6,0.0> | <0.3,0.2,0.5> | <0.1,0.2,0.7> | <0.4,0.3,0.3> | <0.1,0.2,0.7> | 0.7254        |
| D̃_2 | <0.7,0.0,0.3> | <0.2,0.2,0.6> | <0.0,0.1,0.9> | <0.7,0.3,0.0> | <0.1,0.1,0.8> | 0.6601        |
| D̃_3 | <0.3,0.4,0.3> | <0.6,0.3,0.1> | <0.2,0.1,0.7> | <0.2,0.2,0.6> | <0.1,0.0,0.9> | 0.7154        |
| D̃_4 | <0.1,0.2,0.7> | <0.2,0.4,0.4> | <0.8,0.2,0.0> | <0.2,0.1,0.7> | <0.2,0.1,0.7> | 0.6750        |
| D̃_5 | <0.1,0.1,0.8> | <0.0,0.2,0.8> | <0.2,0.0,0.8> | <0.2,0.0,0.8> | <0.8,0.1,0.1> | 0.6246        |
| P̃   | <0.8,0.1,0.1> | <0.8,0.1,0.1> | <0.0,0.6,0.4> | <0.2,0.1,0.7> | <0.0,0.5,0.5> |               |
Table 4. Evaluation attribute information expressed by SVN values.

| ERP software | u_1             | u_2             | u_3             | u_4            | u_5             |
|--------------|-----------------|-----------------|-----------------|----------------|-----------------|
| P̃_1         | <0.55,0.1,0.4>  | <0.45,0.2,0.3>  | <0.5,0.3,0.3>   | <0.7,0.1,0.2>  | <0.8,0.2,0.1>   |
| P̃_2         | <0.6,0.3,0.3>   | <0.55,0.2,0.4>  | <0.6,0.35,0.4>  | <0.7,0.2,0.1>  | <0.7,0.1,0.2>   |
| P̃_3         | <0.6,0.25,0.3>  | <0.5,0.15,0.2>  | <0.5,0.15,0.3>  | <0.6,0.2,0.2>  | <0.7,0.2,0.2>   |
| P̃_4         | <0.7,0.3,0.4>   | <0.6,0.3,0.25>  | <0.5,0.2,0.4>   | <0.6,0.3,0.4>  | <0.7,0.1,0.4>   |
| P̃_5         | <0.65,0.2,0.3>  | <0.55,0.2,0.3>  | <0.5,0.1,0.2>   | <0.75,0.2,0.3> | <0.65,0.05,0.2> |
Table 5. Evaluation attribute value of each alternative supplier.

| Supplier | u_1             | u_2           | u_3           | u_4           | u_5           |
|----------|-----------------|---------------|---------------|---------------|---------------|
| P̃_1     | <0.7,0.2,0.25>  | <0.8,0.1,0.2> | <0.7,0.2,0.1> | <0.7,0.3,0.2> | <0.8,0.1,0.3> |
| P̃_2     | <0.65,0.2,0.1>  | <0.7,0.1,0.3> | <0.8,0.3,0.2> | <0.6,0.2,0.2> | <0.7,0.3,0.1> |
| P̃_3     | <0.8,0.15,0.2>  | <0.8,0.2,0.4> | <0.7,0.2,0.3> | <0.7,0.1,0.3> | <0.8,0.1,0.2> |
| P̃_4     | <0.65,0.15,0.2> | <0.7,0.2,0.2> | <0.6,0.1,0.4> | <0.7,0.2,0.2> | <0.7,0.1,0.2> |
| P̃_5     | <0.6,0.25,0.2>  | <0.7,0.1,0.2> | <0.8,0.2,0.1> | <0.7,0.1,0.1> | <0.7,0.2,0.2> |
Table 6. Evaluation attribute value of each vehicle.

|      | A_1           | A_2           | A_3           | A_4           | A_5           |
|------|---------------|---------------|---------------|---------------|---------------|
| o_1  | <0.2,0.2,0.5> | <0.4,0.1,0.3> | <0.4,0.4,0.3> | <0.3,0.4,0.5> | <0.6,0.2,0.4> |
| o_2  | <0.5,0.4,0.1> | <0.6,0.3,0.2> | <0.6,0.4,0.1> | <0.3,0.5,0.1> | <0.3,0.2,0.6> |
| o_3  | <0.6,0.2,0.3> | <0.5,0.2,0.1> | <0.5,0.4,0.2> | <0.7,0.2,0.2> | <0.5,0.2,0.3> |
| o_4  | <0.7,0.2,0.1> | <0.4,0.3,0.3> | <0.5,0.3,0.2> | <0.7,0.2,0.1> | <0.6,0.3,0.1> |
| o_5  | <0.2,0.3,0.6> | <0.5,0.2,0.3> | <0.8,0.1,0.5> | <0.2,0.3,0.6> | <0.4,0.2,0.2> |
| o_6  | <0.4,0.3,0.4> | <0.4,0.4,0.3> | <0.7,0.2,0.2> | <0.1,0.3,0.4> | <0.3,0.2,0.2> |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zeng, Y.; Ren, H.; Yang, T.; Xiao, S.; Xiong, N. A Novel Similarity Measure of Single-Valued Neutrosophic Sets Based on Modified Manhattan Distance and Its Applications. Electronics 2022, 11, 941. https://doi.org/10.3390/electronics11060941

