Article

Intuitionistic Fuzzy Deep Neural Network

1 Institute of Biophysics and Biomedical Engineering, Bulgarian Academy of Sciences, Acad. Georgi Bonchev Str., Bl. 105, 1113 Sofia, Bulgaria
2 Intelligent Systems Laboratory, Prof. Dr. Assen Zlatarov University, 1 “Prof. Yakimov” Blvd., 8010 Burgas, Bulgaria
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 716; https://doi.org/10.3390/math11030716
Submission received: 30 December 2022 / Revised: 26 January 2023 / Accepted: 28 January 2023 / Published: 31 January 2023
(This article belongs to the Special Issue Intuitionistic Fuzziness and Parallelism: Theory and Applications)

Abstract: The concept of an intuitionistic fuzzy deep neural network (IFDNN) is introduced here as a demonstration of the combined use of artificial neural networks and intuitionistic fuzzy sets, aiming to benefit from the advantages of both methods. The investigation presents in a methodological way the whole process of IFDNN development, starting with the simplest form, an intuitionistic fuzzy neural network (IFNN) with one layer with a single-input neuron; passing through an IFNN with one layer with one multi-input neuron; continuing with an IFNN with one layer with many multi-input neurons; and finally reaching the true IFDNN with many layers with many multi-input neurons. Strongly optimistic, optimistic, average, pessimistic and strongly pessimistic formulas for the estimation of the NN parameters, represented in the form of intuitionistic fuzzy pairs, are given here for the first time for each of the presented IFNNs. To demonstrate its workability, an application of the IFDNN to biomedical data is presented.

1. Introduction

The idea behind the combined use of Artificial Neural Networks (ANNs; see, e.g., [1,2,3,4,5,6,7,8]) and Intuitionistic Fuzzy Sets (IFSs; see, e.g., [9,10]) is to benefit from the advantages of both concepts. An ANN [4,5], often called simply an NN, is a mathematical or computational model based on biological neural networks. It consists of an interconnected group of artificial neurons that process information using the connectionist approach to computation. IFSs were defined as extensions of Lotfi Zadeh’s fuzzy sets [11].
In [12], a fuzzy NN (FNN) is represented as a combination of an ANN and fuzzy logic. FNNs have been successfully applied to control (see, e.g., [13,14]), pattern recognition (see, e.g., [15]), identification (see, e.g., [16]), mineral exploration (see, e.g., [17]), etc. To date, only a few papers [18,19,20] have combined ANNs and IFSs. In recent years, IFSs have found various applications, in particular for the introduction of intuitionistic fuzzy NNs (IFNNs; see [9,18]).
In [21], a nonlinear network structure known as a fuzzy cellular NN of type II is reported, integrating fuzzy operations with the classical cellular NN structure and thereby extending the cellular non-linear network from classical to fuzzy form. In [22], a multi-layer perceptron NN consisting of fuzzy flip-flop neurons based on various fuzzy operations is proposed and applied to approximate a real-life application. In [23,24], a new adaptive fuzzy inference NN is described. In [25], a way of aggregating multi-layer NNs is considered.
An extension of Kohonen’s self-organizing map for structure identification in linguistic (fuzzy) system modeling applications is presented in [26]. The authors have shown, in particular, that the granular self-organizing map neural model can induce a distribution of non-parametric fuzzy interval numbers from the data.
In [27,28], the supervised learning process of multi-layer feed-forward NNs is considered as a class of multi-objective, multi-stage optimal control problems. An iterative parametric mini-max method is proposed, in which the original optimization problem is embedded into a weighted mini-max formulation. In [29], dynamic programming is successfully applied to the learning of multi-layer NNs.
In [30], a max-min intuitionistic fuzzy Hopfield NN is proposed by combining IFSs with Hopfield NNs. The authors have shown that, for any given weight matrix and any given initial intuitionistic fuzzy pattern, the iteration process of the proposed NN converges to a limit cycle.
In [31], intuitionistic fuzzy NNs (IFNNs) are designed to adapt the antecedent and consequent parameters of intuitionistic fuzzy inference systems. A mean-of-maximum defuzzification method is proposed for a class of Takagi–Sugeno intuitionistic fuzzy inference systems, and the method is compared to the center-of-area and basic defuzzification distribution operators. On credit scoring data, it is shown that the IFNN trained with gradient descent and Kalman filter algorithms outperforms the traditional adaptive network-based fuzzy inference system (ANFIS) method.
An IFNN with a Gaussian membership function is proposed in [32]. Since intuitionistic fuzzy pairs (IFPs), the elements of intuitionistic fuzzy logic (IFL; see [33]), consider membership and non-membership values simultaneously, incorporating the concept of IFL into an FNN can enhance its performance. A back-propagation learning algorithm is developed to optimize the IFNN parameters and weights. The proposed IFNN is applied to ten tasks, including nonlinear control and prediction problems. The computational results indicate that the proposed IFNN is more efficient than conventional algorithms such as ANNs, FNNs, and support vector regression.
The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnoses. A new multi-dimensional classifier applying an intelligent algorithm, the general fuzzy cerebellar model NN, is proposed in [34]. To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The classification result is obtained by a similarity measurement. The advantages of the proposed classifier are drawn out by comparing the same medical example under the methods of IFSs and intuitionistic fuzzy cross-entropy with different score functions.
Convolutional NNs have been applied to raw ECG data for the detection of life-threatening cardiac arrhythmias [35,36]. Dense NNs have been applied to engineered (hand-crafted) features for the same detection task [37,38].
The present paper was provoked by [39], where an intuitionistic fuzzy feed-forward NN (IFFFNN) was constructed by combining a feed-forward NN and IFL. Some operations, as well as two types of transfer functions involved in the IFFFNN working process, were introduced there. Here, we define a NN that uses intuitionistic fuzzy information, called an intuitionistic fuzzy deep NN (IFDNN).
The paper applies the tools of IFSs sequentially to a NN with one layer with a single-input neuron (Section 3), a NN with one layer with one multi-input neuron (Section 4), a NN with one layer with many multi-input neurons (Section 5), and a NN with many layers with many multi-input neurons in each layer (Section 6). The strongly optimistic, optimistic, average, pessimistic and strongly pessimistic formulas are given for each type of NN. The application of the IFDNN developed here is illustrated on biomedical data (Section 7).

2. Definitions of an IFDNN and of an IFP

The formal definition of an IFDNN is as follows:
$$\big\langle \langle p_1, p_2, \ldots, p_{m_0} \rangle,\ \{\langle a_1^i, a_2^i, \ldots, a_{m_i}^i \rangle \mid 1 \le i \le k\},$$
$$\{\langle w_{j,1}^i, w_{j,2}^i, \ldots, w_{j,m_{i+1}}^i \rangle \mid 1 \le j \le m_i,\ 1 \le i \le k\},\ \{\langle b_1^i, b_2^i, \ldots, b_{m_i}^i \rangle \mid 1 \le i \le k\} \big\rangle,$$
where
  • $k$ is the number of layers;
  • $m_i$ is the number of neurons on the $i$-th layer, $1 \le i \le k$ ($m_0$ is the number of neurons in the zeroth (input) layer);
  • $p_1, p_2, \ldots, p_{m_0}$ are the values for the input neurons ($p$ for $k = 1$, $m_k = 1$);
  • $a_1^i, a_2^i, \ldots, a_{m_i}^i$ are the output values of the neurons on the $i$-th layer;
  • $w_{j,l}^i$ is the weight coefficient from the $j$-th neuron of layer $i$ to the $l$-th neuron of layer $i+1$ ($1 \le j \le m_i$, $1 \le l \le m_{i+1}$, $0 \le i \le k-1$);
  • $b_1^i, b_2^i, \ldots, b_{m_i}^i$ are the bias coefficients for the neurons on the $i$-th layer ($b$ for $k = 1$, $m_k = 1$);
  • $F_1^i, F_2^i, \ldots, F_{m_i}^i$ are the transfer functions for the neurons on the $i$-th layer ($F$ for $k = 1$, $m_k = 1$).
The ordered pair $\langle a, b \rangle$, where $a, b, a + b \in [0, 1]$, is called an IFP. In it, $a$ and $b$ represent the degrees of validity (membership, etc.) and of non-validity (non-membership, etc.), respectively. Over IFPs, different operations, relations and operators are defined (see, e.g., [33]). The basic operations, on which the formulas below are built, are the following:
$$\langle a, b \rangle + \langle c, d \rangle = \langle a + c - ac,\ bd \rangle,$$
$$\langle a, b \rangle \vee \langle c, d \rangle = \langle \max(a, c),\ \min(b, d) \rangle,$$
$$\langle a, b \rangle \,@\, \langle c, d \rangle = \left\langle \frac{a + c}{2},\ \frac{b + d}{2} \right\rangle,$$
$$\langle a, b \rangle \wedge \langle c, d \rangle = \langle \min(a, c),\ \max(b, d) \rangle,$$
$$\langle a, b \rangle \cdot \langle c, d \rangle = \langle ac,\ b + d - bd \rangle.$$
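In code, these operations are one-liners. The following minimal sketch (our illustration, not part of the paper) models an IFP as a plain Python tuple (mu, nu) with mu, nu, mu + nu in [0, 1]:

```python
def ifp_sum(x, y):
    # <a,b> + <c,d> = <a + c - ac, bd>
    (a, b), (c, d) = x, y
    return (a + c - a * c, b * d)

def ifp_or(x, y):
    # <a,b> v <c,d> = <max(a,c), min(b,d)>
    (a, b), (c, d) = x, y
    return (max(a, c), min(b, d))

def ifp_avg(x, y):
    # <a,b> @ <c,d> = <(a + c)/2, (b + d)/2>
    (a, b), (c, d) = x, y
    return ((a + c) / 2, (b + d) / 2)

def ifp_and(x, y):
    # <a,b> ^ <c,d> = <min(a,c), max(b,d)>
    (a, b), (c, d) = x, y
    return (min(a, c), max(b, d))

def ifp_prod(x, y):
    # <a,b> . <c,d> = <ac, b + d - bd>
    (a, b), (c, d) = x, y
    return (a * c, b + d - b * d)
```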

3. An IFNN with One Layer with Single-Input Neuron

We start with the simplest case, an IFNN with one layer with a single-input neuron, as shown in Figure 1. The input value p of the IFNN is represented by an IFP.
The intuitionistic fuzzy weight $w = \langle \mu_w, \nu_w \rangle$ of the neuron is multiplied by the input value $p = \langle \mu_p, \nu_p \rangle$ using one of the following formulas:
  • Strongly optimistic formula:
$$wp = \langle \mu_{wp}, \nu_{wp} \rangle = \langle 1 - (1 - \mu_w)(1 - \mu_p),\ \nu_w \nu_p \rangle;$$
  • Optimistic formula:
$$wp = \langle \mu_{wp}, \nu_{wp} \rangle = \langle \max(\mu_w, \mu_p),\ \min(\nu_w, \nu_p) \rangle;$$
  • Average formula:
$$wp = \langle \mu_{wp}, \nu_{wp} \rangle = \left\langle \frac{\mu_w + \mu_p}{2},\ \frac{\nu_w + \nu_p}{2} \right\rangle;$$
  • Pessimistic formula:
$$wp = \langle \mu_{wp}, \nu_{wp} \rangle = \langle \min(\mu_w, \mu_p),\ \max(\nu_w, \nu_p) \rangle;$$
  • Strongly pessimistic formula:
$$wp = \langle \mu_{wp}, \nu_{wp} \rangle = \langle \mu_w \mu_p,\ 1 - (1 - \nu_w)(1 - \nu_p) \rangle.$$
The reason for these names is found in the relations
$$1 - (1 - \mu_w)(1 - \mu_p) \ge \max(\mu_w, \mu_p) \ge \frac{\mu_w + \mu_p}{2} \ge \min(\mu_w, \mu_p) \ge \mu_w \mu_p$$
and
$$\nu_w \nu_p \le \min(\nu_w, \nu_p) \le \frac{\nu_w + \nu_p}{2} \le \max(\nu_w, \nu_p) \le 1 - (1 - \nu_w)(1 - \nu_p).$$
For example, let
$$\langle \mu_w, \nu_w \rangle = \langle 0.3, 0.5 \rangle,\quad \langle \mu_p, \nu_p \rangle = \langle 0.6, 0.2 \rangle,\quad \langle \mu_b, \nu_b \rangle = \langle 0.4, 0.3 \rangle,$$
where $b$ is the bias introduced below. Then, combining all three pairs by the strongly optimistic, optimistic, average (taking the arithmetic mean of the three pairs, as in the variant formula at the end of this section), pessimistic and strongly pessimistic formulas yields
$$n = \langle 0.832, 0.03 \rangle,\quad n = \langle 0.6, 0.2 \rangle,\quad n = \langle 0.4(3), 0.3(3) \rangle,\quad n = \langle 0.3, 0.5 \rangle,\quad n = \langle 0.072, 0.72 \rangle,$$
respectively.
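This example can be checked with the helper functions from the sketch in Section 2 (again our illustration, not the authors' code); the four non-average formulas are associative folds over the three pairs, while the average is computed directly:

```python
from functools import reduce

w, p, b = (0.3, 0.5), (0.6, 0.2), (0.4, 0.3)
pairs = [w, p, b]

print(reduce(ifp_sum, pairs))   # strongly optimistic  -> (0.832, 0.03)
print(reduce(ifp_or, pairs))    # optimistic           -> (0.6, 0.2)
mu = sum(mu_ for mu_, _ in pairs) / len(pairs)
nu = sum(nu_ for _, nu_ in pairs) / len(pairs)
print((mu, nu))                 # average              -> (0.433..., 0.333...)
print(reduce(ifp_and, pairs))   # pessimistic          -> (0.3, 0.5)
print(reduce(ifp_prod, pairs))  # strongly pessimistic -> (0.072, 0.72)
```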
The groups of formulas below can be illustrated in the same manner.
In general, if $\varphi, \psi : [0, 1] \times [0, 1] \to [0, 1]$ are two fixed functions such that, for every $x, y \in [0, 1]$,
$$0 \le \varphi(x, y) + \psi(x, y) \le 1,$$
then $n$ may have the form
$$n = \langle \varphi(\mu_w, \mu_p),\ \psi(\nu_w, \nu_p) \rangle.$$
The same is valid for the next formulas.
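In code, this general construction is a higher-order combiner parameterized by $\varphi$ and $\psi$; the sketch below (an illustration, not from the paper) recovers the optimistic formula as a special case:

```python
def combine(phi, psi, x, y):
    # n = <phi(mu_x, mu_y), psi(nu_x, nu_y)> for admissible phi, psi
    (mu_x, nu_x), (mu_y, nu_y) = x, y
    return (phi(mu_x, mu_y), psi(nu_x, nu_y))

# The optimistic formula is recovered with phi = max, psi = min:
print(combine(max, min, (0.3, 0.5), (0.6, 0.2)))  # -> (0.6, 0.2)
```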
The result obtained by one of the above formulas enters the summator (summation function, weighted sum) $\Sigma$. The other element passed to the summator is the IF-bias $b$, represented by the IFP $\langle \mu_b, \nu_b \rangle$, where $\mu_b$ and $\nu_b$ are real numbers from the interval $[0, 1]$. The obtained result is again presented in one of the aforementioned five forms, namely:
  • Strongly optimistic formula:
$$n = \langle \mu_n, \nu_n \rangle = \langle 1 - (1 - \mu_{wp})(1 - \mu_b),\ \nu_{wp} \nu_b \rangle = \langle 1 - (1 - \mu_w)(1 - \mu_p)(1 - \mu_b),\ \nu_w \nu_p \nu_b \rangle;$$
  • Optimistic formula:
$$n = \langle \mu_n, \nu_n \rangle = \langle \max(\mu_{wp}, \mu_b),\ \min(\nu_{wp}, \nu_b) \rangle = \langle \max(\mu_w, \mu_p, \mu_b),\ \min(\nu_w, \nu_p, \nu_b) \rangle;$$
  • Average formula:
$$n = \langle \mu_n, \nu_n \rangle = \left\langle \frac{\mu_{wp} + \mu_b}{2},\ \frac{\nu_{wp} + \nu_b}{2} \right\rangle = \left\langle \frac{\mu_w + \mu_p + 2\mu_b}{4},\ \frac{\nu_w + \nu_p + 2\nu_b}{4} \right\rangle;$$
  • Pessimistic formula:
$$n = \langle \mu_n, \nu_n \rangle = \langle \min(\mu_{wp}, \mu_b),\ \max(\nu_{wp}, \nu_b) \rangle = \langle \min(\mu_w, \mu_p, \mu_b),\ \max(\nu_w, \nu_p, \nu_b) \rangle;$$
  • Strongly pessimistic formula:
$$n = \langle \mu_n, \nu_n \rangle = \langle \mu_{wp} \mu_b,\ 1 - (1 - \nu_{wp})(1 - \nu_b) \rangle = \langle \mu_w \mu_p \mu_b,\ 1 - (1 - \nu_w)(1 - \nu_p)(1 - \nu_b) \rangle.$$
It is worth noting that this last IFP may be calculated by one formula, while the IFP $\langle \mu_{wp}, \nu_{wp} \rangle$ is obtained by another. If the IFP $\langle \mu_{wp}, \nu_{wp} \rangle$ is obtained by the first average formula, then the last average formula may instead take the balanced form
$$n = \langle \mu_n, \nu_n \rangle = \left\langle \frac{\mu_w + \mu_p + \mu_b}{3},\ \frac{\nu_w + \nu_p + \nu_b}{3} \right\rangle.$$
The summator output $n$ acts as an input for the transfer function $F$, which produces the neuron's output $a$, calculated by:
$$a = F(wp + b).$$
In the classical feed-forward NN, two types of transfer functions of the form $a = F(n)$ can be used: linear and logistic sigmoid ones.
The output value of the linear transfer function is equal to
$$a = F(n) = \langle \mu_n, \nu_n \rangle.$$
The output value of the logistic sigmoid transfer function lies in the interval $[0, 1]$ according to the expression:
$$a = \frac{1}{1 + e^{-n}}.$$
Now, we construct the pair
$$F_{sigm} = \left\langle \frac{2}{3\left(1 + \frac{1}{1 + e^{-\mu_n}}\right)},\ \frac{2}{3\left(1 + \frac{1}{1 + e^{-\nu_n}}\right)} \right\rangle.$$
Proposition 1.
F s i g m is an IFP.
Proof. 
We check directly that
$$\frac{2}{3\left(1 + \frac{1}{1 + e^{-\mu_n}}\right)},\ \frac{2}{3\left(1 + \frac{1}{1 + e^{-\nu_n}}\right)} \in [0, 1]$$
and that
$$\frac{2}{3\left(1 + \frac{1}{1 + e^{-\mu_n}}\right)} + \frac{2}{3\left(1 + \frac{1}{1 + e^{-\nu_n}}\right)} = \frac{2}{3}\left(\frac{1 + e^{-\mu_n}}{2 + e^{-\mu_n}} + \frac{1 + e^{-\nu_n}}{2 + e^{-\nu_n}}\right) = \frac{2}{3} \cdot \frac{4 + 3e^{-\mu_n} + 3e^{-\nu_n} + 2e^{-\mu_n - \nu_n}}{4 + 2e^{-\mu_n} + 2e^{-\nu_n} + e^{-\mu_n - \nu_n}}.$$
The latter expression does not exceed 1, since
$$3\left(4 + 2e^{-\mu_n} + 2e^{-\nu_n} + e^{-\mu_n - \nu_n}\right) - 2\left(4 + 3e^{-\mu_n} + 3e^{-\nu_n} + 2e^{-\mu_n - \nu_n}\right) = 4 - e^{-(\mu_n + \nu_n)} > 0,$$
from which the validity of the assertion follows. □
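As a quick numerical sanity check of Proposition 1, the transfer function as reconstructed above can be implemented and tested as follows (an illustrative sketch, not the authors' code):

```python
import math

def ifp_sigmoid(n):
    # F_sigm: each component is 2 / (3 * (1 + sigmoid(x)))
    mu_n, nu_n = n
    sigm = lambda x: 1.0 / (1.0 + math.exp(-x))
    return (2.0 / (3.0 * (1.0 + sigm(mu_n))),
            2.0 / (3.0 * (1.0 + sigm(nu_n))))

# The output is a valid IFP for any input IFP:
a = ifp_sigmoid((0.832, 0.03))
assert 0.0 <= a[0] <= 1.0 and 0.0 <= a[1] <= 1.0 and a[0] + a[1] <= 1.0
```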

4. An IFNN with One Layer with One Multi-Input Neuron

Let us have a neuron (see Figure 2) that has as input values a vector of IFPs $\langle \mu_{p_1}, \nu_{p_1} \rangle, \langle \mu_{p_2}, \nu_{p_2} \rangle, \ldots, \langle \mu_{p_{m_0}}, \nu_{p_{m_0}} \rangle$, a vector of weight coefficients with IFP values $\langle \mu_{w_{1,1}}, \nu_{w_{1,1}} \rangle, \langle \mu_{w_{1,2}}, \nu_{w_{1,2}} \rangle, \ldots, \langle \mu_{w_{1,m_0}}, \nu_{w_{1,m_0}} \rangle$, and an IFP $\langle \mu_{b_1}, \nu_{b_1} \rangle$ for the bias coefficient $b_1$.
Then, by analogy with Section 3, we can use, e.g., one of the following five formulas (a code sketch of these rules follows the list):
  • Strongly optimistic formula:
$$n_1 = \left\langle 1 - \left(\prod_{j=1}^{m_0} (1 - \mu_{w_{1,j}})(1 - \mu_{p_j})\right)(1 - \mu_{b_1}),\ \left(\prod_{j=1}^{m_0} \nu_{w_{1,j}} \nu_{p_j}\right) \nu_{b_1} \right\rangle;$$
  • Optimistic formula:
$$n_1 = \left\langle \max\left(\max_{1 \le j \le m_0}(\mu_{w_{1,j}}, \mu_{p_j}),\ \mu_{b_1}\right),\ \min\left(\min_{1 \le j \le m_0}(\nu_{w_{1,j}}, \nu_{p_j}),\ \nu_{b_1}\right) \right\rangle;$$
  • Average formula:
$$n_1 = \left\langle \frac{1}{2m_0}\left(m_0 \mu_{b_1} + \sum_{j=1}^{m_0}(\mu_{w_{1,j}} + \mu_{p_j})\right),\ \frac{1}{2m_0}\left(m_0 \nu_{b_1} + \sum_{j=1}^{m_0}(\nu_{w_{1,j}} + \nu_{p_j})\right) \right\rangle$$
or
$$n_1 = \left\langle \frac{1}{m_0 + 1}\left(\mu_{b_1} + \sum_{j=1}^{m_0}(\mu_{w_{1,j}} + \mu_{p_j})\right),\ \frac{1}{m_0 + 1}\left(\nu_{b_1} + \sum_{j=1}^{m_0}(\nu_{w_{1,j}} + \nu_{p_j})\right) \right\rangle;$$
  • Pessimistic formula:
$$n_1 = \left\langle \min\left(\min_{1 \le j \le m_0}(\mu_{w_{1,j}}, \mu_{p_j}),\ \mu_{b_1}\right),\ \max\left(\max_{1 \le j \le m_0}(\nu_{w_{1,j}}, \nu_{p_j}),\ \nu_{b_1}\right) \right\rangle;$$
  • Strongly pessimistic formula:
$$n_1 = \left\langle \left(\prod_{j=1}^{m_0} \mu_{w_{1,j}} \mu_{p_j}\right) \mu_{b_1},\ 1 - \left(\prod_{j=1}^{m_0} (1 - \nu_{w_{1,j}})(1 - \nu_{p_j})\right)(1 - \nu_{b_1}) \right\rangle.$$
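The five aggregation rules share one shape: fold all weights, inputs and the bias with one of the basic IFP operations, while the average variants are computed directly. A sketch building on the helpers from Section 2 (our illustration, with hypothetical function names):

```python
from functools import reduce

def neuron_n(inputs, weights, bias, op):
    # n for one multi-input neuron: fold every weight, input and the bias
    # with a pairwise IFP operation. op = ifp_sum gives the strongly
    # optimistic formula, ifp_or the optimistic, ifp_and the pessimistic,
    # and ifp_prod the strongly pessimistic one.
    pairs = [q for wp in zip(weights, inputs) for q in wp] + [bias]
    return reduce(op, pairs)

def neuron_n_average(inputs, weights, bias):
    # First average variant: bias weighted m0 times, denominator 2 * m0.
    m0 = len(inputs)
    mu = (m0 * bias[0] + sum(w[0] + p[0] for w, p in zip(weights, inputs))) / (2 * m0)
    nu = (m0 * bias[1] + sum(w[1] + p[1] for w, p in zip(weights, inputs))) / (2 * m0)
    return (mu, nu)
```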

5. An IFNN with One Layer with Many Multi-Input Neurons

Let us have $m_0$ neurons in the input layer (see Figure 3) that have as input values the IFPs $\langle \mu_{p_1}, \nu_{p_1} \rangle, \langle \mu_{p_2}, \nu_{p_2} \rangle, \ldots, \langle \mu_{p_{m_0}}, \nu_{p_{m_0}} \rangle$, weight coefficients with IFP values $\langle \mu_{w_{j,l}}, \nu_{w_{j,l}} \rangle$ for $1 \le j \le m_0$, $1 \le l \le m_1$, and bias coefficients $\langle \mu_{b_1}, \nu_{b_1} \rangle, \langle \mu_{b_2}, \nu_{b_2} \rangle, \ldots, \langle \mu_{b_{m_1}}, \nu_{b_{m_1}} \rangle$.
When individual neurons operate in isolation, one cannot expect a high success rate in recognizing images, patterns, and behaviors. Therefore, when individual neurons are used in NNs, they are grouped together, with uniform connectivity between them, into what is called a layer. Typically, the same transfer function is used for all neurons in a layer, though this is not a dogma. Using the same function is a prerequisite for uniform layer behavior and for computations to be performed more easily.
By analogy with Section 3 and Section 4, we can use, e.g., one of the following five formulas for the calculation of the value of $n_l$ for $1 \le l \le m_1$ (a brief code sketch follows the list):
  • Strongly optimistic formula:
$$n_l = \left\langle 1 - \left(\prod_{j=1}^{m_0} (1 - \mu_{w_{j,l}})(1 - \mu_{p_j})\right)(1 - \mu_{b_l}),\ \left(\prod_{j=1}^{m_0} \nu_{w_{j,l}} \nu_{p_j}\right) \nu_{b_l} \right\rangle;$$
  • Optimistic formula:
$$n_l = \left\langle \max\left(\max_{1 \le j \le m_0}(\mu_{w_{j,l}}, \mu_{p_j}),\ \mu_{b_l}\right),\ \min\left(\min_{1 \le j \le m_0}(\nu_{w_{j,l}}, \nu_{p_j}),\ \nu_{b_l}\right) \right\rangle;$$
  • Average formula:
$$n_l = \left\langle \frac{1}{2m_0}\left(m_0 \mu_{b_l} + \sum_{j=1}^{m_0}(\mu_{w_{j,l}} + \mu_{p_j})\right),\ \frac{1}{2m_0}\left(m_0 \nu_{b_l} + \sum_{j=1}^{m_0}(\nu_{w_{j,l}} + \nu_{p_j})\right) \right\rangle$$
or
$$n_l = \left\langle \frac{1}{m_0 + 1}\left(\mu_{b_l} + \sum_{j=1}^{m_0}(\mu_{w_{j,l}} + \mu_{p_j})\right),\ \frac{1}{m_0 + 1}\left(\nu_{b_l} + \sum_{j=1}^{m_0}(\nu_{w_{j,l}} + \nu_{p_j})\right) \right\rangle;$$
  • Pessimistic formula:
$$n_l = \left\langle \min\left(\min_{1 \le j \le m_0}(\mu_{w_{j,l}}, \mu_{p_j}),\ \mu_{b_l}\right),\ \max\left(\max_{1 \le j \le m_0}(\nu_{w_{j,l}}, \nu_{p_j}),\ \nu_{b_l}\right) \right\rangle;$$
  • Strongly pessimistic formula:
$$n_l = \left\langle \left(\prod_{j=1}^{m_0} \mu_{w_{j,l}} \mu_{p_j}\right) \mu_{b_l},\ 1 - \left(\prod_{j=1}^{m_0} (1 - \nu_{w_{j,l}})(1 - \nu_{p_j})\right)(1 - \nu_{b_l}) \right\rangle.$$
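A whole layer is then just the neuron-level computation applied once per output neuron; continuing the sketch above (hypothetical names, not from the paper):

```python
def layer_n(inputs, weight_matrix, biases, op):
    # weight_matrix[l][j] holds the IFP weight w_{j,l} from input j to
    # neuron l; returns the vector <n_1, ..., n_{m1}>.
    return [neuron_n(inputs, weight_matrix[l], biases[l], op)
            for l in range(len(biases))]
```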

6. An IFDNN with Many Layers with Many Multi-Input Neurons

Finally, we discuss the most general case, in which we have k layers, so that the net becomes an IFDNN.
Let the first layer coincide with the layer described in Section 5, and let the next layers have forms similar to the first one, so that each component of these layers is numbered by an upper index $i$ ($1 \le i \le k$), as shown in Figure 4.
To achieve better results, several neural layers are connected into a common NN, where the outputs of each layer are connected as inputs of the next one. In this structure, there are three clear parts: an input neural layer that serves to perceive external signals; an output layer that is adapted so as to obtain the exact output values; and a hidden layer (or layers), the “real” part of the NN, which defines the behavior of the whole system. If the number of hidden neural layers is greater than 3, the network is said to be a deep NN and to use deep learning.
By analogy with Section 3, Section 4 and Section 5, we can use, e.g., one of the following five formulas for the calculation of the value of $n_l^i$ for $1 \le l \le m_i$, $1 \le i \le k$ (a sketch of the resulting forward pass follows the list):
  • Strongly optimistic formula:
$$n_l^i = \left\langle 1 - \left(\prod_{j=1}^{m_i} (1 - \mu_{w_{j,l}^i})(1 - \mu_{p_j^i})\right)(1 - \mu_{b_l^i}),\ \left(\prod_{j=1}^{m_i} \nu_{w_{j,l}^i} \nu_{p_j^i}\right) \nu_{b_l^i} \right\rangle;$$
  • Optimistic formula:
$$n_l^i = \left\langle \max\left(\max_{1 \le j \le m_i}(\mu_{w_{j,l}^i}, \mu_{p_j^i}),\ \mu_{b_l^i}\right),\ \min\left(\min_{1 \le j \le m_i}(\nu_{w_{j,l}^i}, \nu_{p_j^i}),\ \nu_{b_l^i}\right) \right\rangle;$$
  • Average formula:
$$n_l^i = \left\langle \frac{1}{2m_i}\left(m_i \mu_{b_l^i} + \sum_{j=1}^{m_i}(\mu_{w_{j,l}^i} + \mu_{p_j^i})\right),\ \frac{1}{2m_i}\left(m_i \nu_{b_l^i} + \sum_{j=1}^{m_i}(\nu_{w_{j,l}^i} + \nu_{p_j^i})\right) \right\rangle$$
or
$$n_l^i = \left\langle \frac{1}{m_i + 1}\left(\mu_{b_l^i} + \sum_{j=1}^{m_i}(\mu_{w_{j,l}^i} + \mu_{p_j^i})\right),\ \frac{1}{m_i + 1}\left(\nu_{b_l^i} + \sum_{j=1}^{m_i}(\nu_{w_{j,l}^i} + \nu_{p_j^i})\right) \right\rangle;$$
  • Pessimistic formula:
$$n_l^i = \left\langle \min\left(\min_{1 \le j \le m_i}(\mu_{w_{j,l}^i}, \mu_{p_j^i}),\ \mu_{b_l^i}\right),\ \max\left(\max_{1 \le j \le m_i}(\nu_{w_{j,l}^i}, \nu_{p_j^i}),\ \nu_{b_l^i}\right) \right\rangle;$$
  • Strongly pessimistic formula:
$$n_l^i = \left\langle \left(\prod_{j=1}^{m_i} \mu_{w_{j,l}^i} \mu_{p_j^i}\right) \mu_{b_l^i},\ 1 - \left(\prod_{j=1}^{m_i} (1 - \nu_{w_{j,l}^i})(1 - \nu_{p_j^i})\right)(1 - \nu_{b_l^i}) \right\rangle.$$
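Putting the pieces together, the forward pass of the whole IFDNN chains the layer computation $k$ times and applies a transfer function to each summator output. The sketch below reuses layer_n and the F_sigm implementation from Section 3; it is an illustration only, and training by back-propagation over IFPs is not shown:

```python
def ifdnn_forward(inputs, layers, op, transfer=ifp_sigmoid):
    # layers is a list of (weight_matrix, biases) pairs, one per layer i.
    a = inputs
    for weight_matrix, biases in layers:
        a = [transfer(n) for n in layer_n(a, weight_matrix, biases, op)]
    return a
```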
The training of all the different types of IFNNs presented here is performed by a standard NN algorithm, based on the root mean square error obtained at the NN output. The error back-propagation algorithm distributes the individual sensitivity coefficients among the layers of the NN.
As shown above, the investigation describes different types of assessments that correspond to the strength of signals in an abstract NN. Insofar as such a network is a model of a real set of neurons, these formulas can also be used to estimate the boundaries (upper and lower, i.e., optimistic and pessimistic) within which the signal strength falls when flowing through the real neurons. When a neuron sends a signal of lower strength to the next one, the signal might not continue its propagation, i.e., it might die out. If the (strongly) optimistic assessment of the signal strength is below a preset constant, this corresponds to the fading case. On the other hand, if the (strongly) pessimistic assessment of the signal is above a preset constant, this could be interpreted in one of the following two ways:
  • The signal is strong enough to reach its final goal;
  • The signal is so strong that it results in the blocking of neurons system (e.g., shock from severe pain).

7. An Example for an IFDNN Application to Biomedical Data

To illustrate how the IFDNN works, we use data from one of our experiments on controlling a robotic arm with IFPs. Applying them, the movement of a stepper motor is controlled to determine the position of the hand, based on signals about the density of the muscle that should ensure the movement.
In the considered case, a signal from a MyoWare muscle sensor (AT-04-001) is fed to the NN input. The MyoWare sensor signal is of the Envelope Output (ENV) type. At the NN output, a voltage is obtained that controls a selected motor of a robotic manipulator. Based on these signals, the IFPs $\langle \mu, \nu \rangle$ are calculated [40].
The structure of the NN, with one intuitionistic fuzzy input, eight neurons in each of the first, second and third hidden layers, and one intuitionistic fuzzy output, is presented in Figure 5.
The data set contains 1428 values and is divided into three parts. The first, training part is the largest and represents 70% of all data. The second part is the testing data set, containing 20% of all data. The last part is the validation data set; these are so-called external data, which do not participate in the NN training and amount to 10% of all data. It is important to note that the distribution of the data into training/testing/validation sets is randomized by the system.
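Such a randomized split can be reproduced, for instance, as follows (a hypothetical sketch; the actual tooling used for the experiment is not specified in the paper):

```python
import random

random.seed(0)                   # for reproducibility of this illustration
data = list(range(1428))         # placeholder indices of the 1428 values
random.shuffle(data)

n_train = int(0.7 * len(data))   # 999 training samples
n_test = int(0.2 * len(data))    # 285 testing samples
train = data[:n_train]
test = data[n_train:n_train + n_test]
validation = data[n_train + n_test:]  # remaining ~10% as external data
```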
The training process of the neural network and the error distribution are shown in Figure 6.
Figure 7 presents the correlation coefficient (R) for the training, testing and validation data sets, as well as for the whole data set (the right bottom subfigure).
In order to validate the training, the target values from the training sets of the neural network are compared with all values of $\mu$ and $\nu$ obtained at the outputs. Testing all 1263 intuitionistic fuzzy pairs, we obtained an arithmetic mean deviation of 0.002067 for $\mu$ and 0.00214 for $\nu$.
In future work, the IFDNN developed here is going to be applied to the studies performed in [41], where data from three groups of proximal humerus fracture (PHF) patients were analyzed: group 1, with 63 patients without augmentation; group 2, with 28 patients with bone graft augmentation; and group 3, with 29 patients with cement augmentation. The IFDNN is expected to predict the recovery process from PHF for the respective patient. The outputs reflect the degree $\mu$ of a successful range of motion, or the degree $\nu$ of a non-successful range of motion, for each patient. The degree of uncertainty $\pi = 1 - \mu - \nu$ will represent the cases in which the treatment procedure has not been completed or the information for the current patient is not complete.
The IFDNN developed here can also be used to better predict the choice of the most suitable surgical approach and implants, in order to reduce the operative time, blood loss and X-ray radiation. In [42], patients with complex acetabular fractures were divided into two groups: (1) a conventional group (n = 12); and (2) a 3D-printed group (n = 10). Both groups included participants with a posterior column fracture plus a posterior wall fracture, a transverse fracture plus a posterior wall fracture, or a fracture of both acetabular columns. The scan data sets were segmented and converted to stereolithography (STL) format, with bones and fragments separated for 3D printing in different colors. The comparison between the two groups was performed in terms of quality of fracture reduction (displacement equal to or less than 2 mm versus greater than 2 mm), functional assessment, operative time, blood loss and number of intraoperative X-rays.

8. Conclusions

The investigation presented here methodologically demonstrates the whole process of IFDNN elaboration, starting with an IFNN with one layer with a single-input neuron, passing through an IFNN with one layer with one multi-input neuron and an IFNN with one layer with many multi-input neurons, and reaching the idea of a set of layers with multi-input neurons, the so-called deep learning. For the first time, and for each type of IFNN presented here, strongly optimistic, optimistic, average, pessimistic and strongly pessimistic formulas for the output $n$ of the summator are presented. The IFDNN developed here has been successfully applied to biomedical data. The IFDNN might also be useful in other scientific applications for which intuitionistic fuzzy evaluations of the considered data exist.

Author Contributions

Conceptualization, K.A. and S.S.; methodology, K.A., S.S. and T.P.; software, S.S.; validation, K.A., S.S. and T.P.; formal analysis, K.A., S.S. and T.P.; investigation, K.A., S.S. and T.P.; writing—original draft preparation, K.A., S.S. and T.P.; writing—review and editing, K.A., S.S. and T.P.; visualization, K.A., S.S. and T.P.; supervision, K.A., S.S. and T.P.; project administration, K.A. and S.S.; funding acquisition, K.A. and S.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was realized within the framework of the projects KP-06-N22-1/2018 “Theoretical research and applications of InterCriteria Analysis” of the Bulgarian National Science Fund and BG05M20P001-1.002-0011 “Centre of Competence MIRACle–Mechatronics, Innovation, Robotics, Automation, Clean technologies”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

This research was realized within the framework of the project KP-06-N22-1/2018 “Theoretical research and applications of InterCriteria Analysis” of the Bulgarian National Science Fund.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abiodun, O.I.; Jantan, A.; Omolara, A.E.; Dada, K.V.; Mohamed, N.A.; Arshad, H. State-of-the-art in artificial neural network applications: A survey. Heliyon 2018, 4, e00938. [Google Scholar] [CrossRef] [Green Version]
  2. Sleeman, W.C.; Syed, K.; Hagan, M.; Palta, J.; Kapoor, R.; Ghosh, P. Deep neural network models to automate incident triage in the radiation oncology incident learning system. In Proceedings of the 12th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics, Gainesville, FL, USA, 1–4 August 2021. [Google Scholar]
  3. Gawlikowski, J.; Tassi, C.R.N.; Ali, M.; Lee, J.; Humt, M.; Feng, J.; Kruspe, A.; Triebel, R.; Jung, P.; Roscher, R.; et al. A survey of uncertainty in deep neural networks. arXiv 2021, arXiv:2107.03342. [Google Scholar]
  4. Graupe, D. Principles of Artificial Neural Networks; World Scientific: Singapore, 2013; Volume 7. [Google Scholar]
  5. Chen, M.; Challita, U.; Saad, W.; Yin, C.; Debbah, M. Artificial neural networks-based machine learning for wireless networks: A tutorial. IEEE Commun. Surv. Tutor. 2019, 21, 3039–3071. [Google Scholar] [CrossRef] [Green Version]
  6. Gurney, K. An Introduction to Neural Networks; CRC Press: London, UK, 2018. [Google Scholar]
  7. Montavon, G.; Samek, W.; Müller, K. Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 2018, 73, 1–15. [Google Scholar] [CrossRef]
  8. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Atanassov, K. Intuitionistic Fuzzy Sets; Springer: Berlin/Heidelberg, Germany, 1999. [Google Scholar]
  10. Atanassov, K. On Intuitionistic Fuzzy Sets Theory; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  11. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef] [Green Version]
  12. Fang, R.; Zhao, Y.; Li, W.-S. A novel fuzzy neural network: The vague neural network. In Proceedings of the Fourth IEEE Conference on Cognitive Informatics (ICCI’05), Irvine, CA, USA, 8–10 August 2005; pp. 94–99. [Google Scholar]
  13. Baruch, I.; Lopez, Q.; Flores, J. A fuzzy-neural multi-model for nonlinear systems identification and control. Fuzzy Sets Syst. 2008, 159, 2650–2667. [Google Scholar] [CrossRef]
  14. Hsua, C.; Linb, P.; Lee, T.; Wang, C. Adaptive asymmetric fuzzy neural network controller design via network structuring adaptation. Fuzzy Sets Syst. 2008, 159, 2627–2649. [Google Scholar] [CrossRef]
  15. Ravi, V.; Zimmermann, H. A neural network and fuzzy rule base hybrid for pattern classification. Soft Comput. 2001, 5, 152–159. [Google Scholar] [CrossRef]
  16. Liang, Y.; Feng, D.; Liu, G.; Yang, X.; Han, X. Neural identification of rock parameters using fuzzy adaptive learning parameters. Comput. Struct. 2003, 81, 2373–2382. [Google Scholar] [CrossRef]
  17. Shirazi, A.; Hezarkhani, A.; Pour, A.B.; Shiraz, A.; Hashim, M. Neuro-Fuzzy-AHP (NFAHP) Technique for Copper Exploration Using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Geological Datasets in the Sahlabad Mining Area, East Iran. Remote Sens. 2022, 14, 5562. [Google Scholar] [CrossRef]
  18. Hadjyisky, L.; Atanassov, K. Intuitionistic fuzzy model of a neural network. Busefal 1993, 54, 36–39. [Google Scholar]
  19. Kuncheva, L.; Atanassov, K. An Intuitionistic fuzzy RBF network. In Proceedings of the EUFIT’96, Aachen, Germany, 2–5 September 1996; pp. 777–781. [Google Scholar]
  20. Lei, Y.-J.; Lu, Y.-L.; Li, Z.-Y. Function approximation capabilities of intuitionistic fuzzy reasoning neural networks. Control. Decis. 2007, 5, 596–600. [Google Scholar]
  21. Senthilkumar, S. Raster simulation using advanced fuzzy cellular non-linear network. Int. J. Auton. Adapt. Commun. Syst. 2010, 3, 464–478. [Google Scholar] [CrossRef]
  22. Lovassy, R.; Kóczy, L.; Gál, L. Function Approximation Performance of Fuzzy Neural Networks. Acta Polytech. Hung. 2010, 7, 25–38. [Google Scholar]
  23. Iyatomi, H.; Hagiwara, M. Adaptive fuzzy inference neural network. Pattern Recognit. 2004, 37, 2049–2057. [Google Scholar] [CrossRef]
  24. Qin, Y.; Pei, Z. A new adaptive fuzzy inference neural network. Intell. Decis. Mak. Syst. 2009, 661–666. [Google Scholar] [CrossRef]
  25. Krawczak, M. A Way to Aggregate Multilayer Neural Networks. Lect. Notes Comput. Sci. 2005, 3697, 750–756. [Google Scholar]
  26. Kaburlasos, V.G.; Papadakis, S.E. Granular self-organizing map (grSOM) for structure identification. Neural Netw. 2006, 19, 623–643. [Google Scholar] [CrossRef]
  27. Krawczak, M. Backpropagation versus dynamic programming approach for neural networks learning. In Proceedings of the 6th International Conference on Neural Information Processing (ICONIP’99), Perth, WA, Australia, 16–20 November 1999; pp. 1057–1062. [Google Scholar]
  28. Krawczak, M. Neural networks learning as a multiobjective optimal control problem. Mathw. Soft Comput. 1997, 4, 195–202. [Google Scholar]
  29. Krawczak, M. Neural Networks Learning and Homotopy Method. In Proceedings of the 5th International Conference on Neural Networks and Soft Computing, Zakopane, Poland, 6–10 June 2000. [Google Scholar]
  30. Li, L.; Yang, J.; Wu, W. Intuitionistic fuzzy Hopfield neural network and its stability. Neural Netw. World 2011, 21, 461–472. [Google Scholar] [CrossRef] [Green Version]
  31. Hajek, P.; Olej, V. Intuitionistic fuzzy neural network: The case of credit scoring using text information. In Engineering Applications of Neural Networks; Communications in Computer and Information Science; Springer: Cham, Switzerland, 2015; Volume 517, pp. 337–346. [Google Scholar]
  32. Kuo, R.J.; Cheng, W.C. An intuitionistic fuzzy neural network with Gaussian membership function. J. Intell. Fuzzy Syst. 2019, 36, 6731–6741. [Google Scholar] [CrossRef]
  33. Atanassov, K. Intuitionistic Fuzzy Logics; Springer: Cham, Switzerland, 2017. [Google Scholar]
  34. Zhao, L.; Lin, L.-Y.; Lin, C.-M. A general fuzzy cerebellar model neural network multidimensional classifier using intuitionistic fuzzy sets for medical identification. Comput. Intell. Neurosci. 2016, 2016, 8073279. [Google Scholar] [CrossRef] [Green Version]
  35. Krasteva, V.; Ménétré, S.; Didon, J.P.; Jekova, I. Fully convolutional deep neural networks with optimized hyperparameters for detection of shockable and non-shockable rhythms. Sensors 2020, 20, 2875. [Google Scholar] [CrossRef]
  36. Jekova, I.; Krasteva, V. Optimization of end-to-end convolutional neural networks for analysis of out-of-hospital cardiac arrest rhythms during cardiopulmonary resuscitation. Sensors 2021, 21, 4105. [Google Scholar] [CrossRef]
  37. Krasteva, V.; Christov, I.; Naydenov, S.; Stoyanov, T.; Jekova, I. Application of dense neural networks for detection of atrial fibrillation and ranking of augmented ECG feature set. Sensors 2021, 21, 6848. [Google Scholar] [CrossRef]
  38. Jekova, I.; Christov, I.; Krasteva, V. Atrioventricular synchronization for detection of atrial fibrillation and flutter in one to twelve ECG leads using a dense neural network classifier. Sensors 2022, 22, 6071. [Google Scholar] [CrossRef]
  39. Sotirov, S.; Atanassov, K. Intuitionistic fuzzy feed forward neural network. Cybern. Inf. Technol. 2009, 9, 62–68. [Google Scholar]
  40. Sotirov, S.; Ribagin, S. Hybrid sensor system for robot control with nonlinear autoregressive network with exogenous inputs. In Proceedings of the 20th International Workshop on Intuitionistic Fuzzy Sets and Generalized Nets, Warsaw, Poland, 15 October 2022. in press. [Google Scholar]
  41. Hristov, S.; Visscher, L.; Winkler, J.; Zhelev, D.; Ivanov, S.; Veselinov, D.; Baltov, A.; Varga, P.; Berk, T.; Stoffel, K.; et al. A novel technique for treatment of metaphyseal voids in proximal humerus fractures in elderly patients. Medicina 2022, 58, 1424. [Google Scholar] [CrossRef]
  42. Ivanov, S.; Valchanov, P.; Hristov, S.; Veselinov, D.; Gueorguiev, B. Management of complex acetabular fractures by using 3D printed models. Medicina 2022, 58, 1854. [Google Scholar] [CrossRef]
Figure 1. IFNN with a single-input neuron.
Figure 2. IFNN with one layer and a multi-input neuron.
Figure 3. IFNN with one layer and many multi-input neurons.
Figure 4. IFNN with many layers with many multi-input neurons.
Figure 5. Structure of the constructed IFDNN.
Figure 6. Training process of the intuitionistic fuzzy deep neural network.
Figure 7. Correlation coefficients for the training, testing and validation data sets, as well as for the whole data set.