Article

Modeling of Fuzzy Systems Based on the Competitive Neural Network

Division of Graduate Studies and Research, TECNM/Tijuana Institute of Technology, Tijuana 22414, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13091; https://doi.org/10.3390/app132413091
Submission received: 19 October 2023 / Revised: 1 December 2023 / Accepted: 4 December 2023 / Published: 8 December 2023
(This article belongs to the Special Issue Applied Neural Networks and Fuzzy Logic)

Abstract

This paper presents a method to dynamically model Type-1 fuzzy inference systems using a Competitive Neural Network. The aim is to exploit the potential of Competitive Neural Networks and fuzzy logic systems to generate an intelligent hybrid model with the ability to group and classify any dataset. The approach uses the Competitive Neural Network to cluster the dataset and the fuzzy model to perform the classification. It is important to note that the fuzzy inference system is generated automatically from the classes and centroids obtained with the Competitive Neural Network; that is, all the parameters of the membership functions are adapted according to the values of the input data. In the approach, two fuzzy inference systems, Sugeno and Mamdani, are proposed. Additionally, variations of these models are presented using three types of membership functions: Trapezoidal, Triangular, and Gaussian. The proposed models are applied to three classification datasets: Wine, Iris, and Wisconsin Breast Cancer (WDBC). The simulations and results show higher classification accuracy when implementing the Sugeno fuzzy inference system compared to the Mamdani system, and in both models (Mamdani and Sugeno), better results are obtained when the Gaussian membership function is used.

1. Introduction

Computer science has developed rapidly, and researchers around the world in the field of artificial intelligence [1] have applied multiple algorithms to optimization, clustering, and classification problems. One of the challenges is to extract meaningful patterns from large datasets. This is where clustering and classification, two basic concepts in machine learning, play a transformative role. As datasets grow in size and complexity, the demands on algorithms to adapt and scale also increase.
The most famous optimization algorithms are based on swarm intelligence, such as Particle Swarm Optimization (PSO) [2], the Fireworks Algorithm (FWA) [3], the Gravitational Search Algorithm (GSA) [4], and the Grey Wolf Optimizer (GWO) [5]. In the field of clustering, we can find the Fuzzy C-Means [6] and K-Means [7] algorithms. For classification tasks, we can explore algorithms such as K-Nearest Neighbors [8] or robust artificial neural networks [9].
Artificial neural networks can be divided into two classes depending on the type of learning they use: neural networks with supervised learning and neural networks with unsupervised learning [10]. This distinction is important because the problem addressed by the proposed method involves an artificial neural network with unsupervised learning, specifically a Competitive Neural Network (CNN), which is a technique from the area of clustering [11,12] rather than classification.
On the other hand, according to the state of the art, a plethora of researchers have used Fuzzy Inference Systems (FISs) to solve different types of problems, for example, by dynamically adjusting parameters [13] in optimization algorithms, in control problems [14], and in prediction [15] and classification tasks. The latter problems have been solved with better accuracy, especially when working with noisy datasets, thanks to the ability of fuzzy inference systems to model the uncertainty present in the information [16,17].
All methods have strengths and weaknesses; therefore, part of the main contribution of this paper is to generate an intelligent hybrid method that uses the potential of the CNN to cluster the data and adds the classification task by implementing Fuzzy Inference Systems. In the approach, the CNN is first applied to obtain the clusters and centroids of the input data; after that, the Type-1 FIS is automatically designed according to the output produced by the CNN. Thus, another contribution of this paper is the methodology to dynamically generate the parameters of the input and output membership functions (MFs) of the FIS, which can be adapted according to the dataset classification problem used. In the approach, two FIS models were considered: Sugeno and Mamdani. Furthermore, these models are generated using three types of MFs (Trapezoidal, Gaussian, and Triangular) to have more variants of the models and analyze the behavior and precision obtained. The general approach is applied to three datasets (Wine, Iris, and WDBC) to measure the performance and accuracy of these algorithms in classification tasks.
This paper is organized as follows. Section 2 explains the theoretical concepts, techniques, and components that were used to develop this work. Section 3 presents the general proposal, describing the methodology to dynamically generate the Mamdani Type-1 FIS and Sugeno Type-1 FIS with different types of MFs. Section 4 explains the experimental results obtained with the proposed methods after applying them to the three databases. Finally, Section 5 offers the conclusions drawn from the results and outlines future work.

2. Theoretical Framework

This section presents the theory of the models used to develop this work. First, important concepts of competitive neural networks are explained. Second, a summary of the theory of fuzzy sets is presented, ending with the discussion subsection.

2.1. Competitive Neural Network

Competitive Neural Networks deviate from traditional feedforward neural networks by introducing a competitive layer, often referred to as the competitive or Kohonen layer, which facilitates competition among neurons. Neurons compete to become the most responsive to specific input patterns, fostering a dynamic environment where only the most relevant neurons survive and adapt [18].
Competitive Neural Networks work in unsupervised learning scenarios, where the network must identify patterns and structure within the data without explicit labels. The competitive layer fosters neuron specialization, with each neuron becoming an expert in recognizing a specific pattern or feature. Through continuous competition and adaptation, the network refines its internal representation, making it highly adept at capturing intricate relationships within the input data.
In [19], the authors describe three fundamental components of competitive learning rules:
  • Activation of Neuron Set: Neurons (process units) exhibit activation or inactivity in response to specific input patterns. Distinct synaptic weight values are assigned to each neuron, contributing to individualized responsiveness.
  • Limitation on Neuron Strength: A constraint is applied to the “strength” of each neuron, regulating its responsiveness within the network.
  • Competitive Mechanism: Neurons compete to respond to subsets of input patterns. The outcome is designed such that only one neuron from the group is activated, promoting specialization.
Moreover, within a CNN, a binary process unit serves as a simple computational entity capable of assuming only two states: active (on) or inactive (off). For each binary process unit $i$, there exists an associated synaptic weight vector $(w_{i1}, w_{i2}, \ldots, w_{iN})$ to weigh the incoming values from the input vectors.
The synaptic potential is defined as follows: if $N$ signals, represented by the vector $(x_1, x_2, \ldots, x_N)$, reach processing unit $i$, whose synaptic weight vector is $(w_{i1}, w_{i2}, \ldots, w_{iN})$, then the synaptic potential is computed using Equations (1) and (2).
$$h_i = w_{i1} x_1 + w_{i2} x_2 + \cdots + w_{iN} x_N - \theta_i \tag{1}$$
where:
$$\theta_i = \frac{1}{2}\left(w_{i1}^2 + w_{i2}^2 + \cdots + w_{iN}^2\right) \tag{2}$$
Defined by $N$ input sensors, $M$ process units, and interconnecting links between each sensor and process unit, a CNN is characterized by the association of a specific value, denoted $w_{ij}$, with each connection linking sensor $j$ to process unit $i$. Figure 1 provides a visual representation of a Competitive Neural Network (CNN).
With each input collected by the sensors, the activation of only one process unit occurs, specifically the unit with the highest synaptic potential, which is acknowledged as the winning unit.
Consequently, when denoting the state of process unit $i$ with the variable $y_i$, it adopts a value of 1 during activation and 0 otherwise. The computational dynamics of the network are given by Equation (3).
$$y_i = \begin{cases} 1 & \text{if } h_i = \max\{h_1, h_2, \ldots, h_M\} \\ 0 & \text{otherwise} \end{cases}, \qquad i = 1, 2, \ldots, M \tag{3}$$
The synaptic weights are established through an unsupervised learning process. The objective is to activate the process unit whose synaptic weight vector is most similar to the input vector. The degree of similarity between the input vector $X = (x_1, x_2, \ldots, x_N)$ and the synaptic weight vector of process unit $i$, $W_i = (w_{i1}, w_{i2}, \ldots, w_{iN})$, is determined by the Euclidean distance between these vectors, given by Equation (4).
$$d(x, w_i) = \lVert x - w_i \rVert = \sqrt{(x_1 - w_{i1})^2 + \cdots + (x_N - w_{iN})^2} \tag{4}$$
Hence, if $r$ represents the winning process unit after introduction of the input pattern, the condition that the winning unit possesses the synaptic weight vector most similar to the input vector is expressed through Equation (5).
$$d(x, w_r) \le d(x, w_k), \quad k = 1, 2, \ldots, M \tag{5}$$
The aim of the unsupervised learning in the CNN is to autonomously identify groups or classes. To achieve this, the minimum-squares criterion is computed using Equation (6).
$$E(k) = \sum_{i=1}^{M} a_i(k)\,\lVert x(k) - w_i(k) \rVert^2 \tag{6}$$
In (6), $E(k)$ denotes the error at iteration $k$, derived from the distance between the input pattern $x(k)$, assigned to one of the $M$ classes, and the synaptic weight vector $w_i(k)$. Minimizing this error yields the learning rule denoted in Equation (7).
$$w_r(k+1) = w_r(k) + \eta(k)\left(x(k) - w_r(k)\right) \tag{7}$$
In this scenario, the new synaptic weight vector $w_r(k+1)$ is a linear combination of the vectors $w_r(k)$ and $x(k)$, with $\eta(k)$ representing the learning rate, as expressed in Equation (8), where $k$ is the current iteration and $T$ is the total number of iterations; the initial value $\eta_0$ is chosen in the interval $(0, 1)$. The learning rate decreases as training progresses; once $\eta$ reaches 0, the CNN stops learning.
$$\eta(k) = \eta_0\left(1 - \frac{k}{T}\right) \tag{8}$$
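To make the learning procedure concrete, the following minimal NumPy sketch implements winner selection and the weight update of Equations (3)-(8). The function name, the random initialization from input patterns, and the per-epoch learning-rate decay are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_competitive(X, M, T, eta0=0.5, seed=0):
    """Sketch of competitive learning: X is an (N, D) array of patterns,
    M the number of process units, T the number of epochs, eta0 in (0, 1)."""
    rng = np.random.default_rng(seed)
    # Initialize the synaptic weight vectors from M distinct input patterns.
    W = X[rng.choice(len(X), size=M, replace=False)].astype(float)
    for k in range(T):
        eta = eta0 * (1.0 - k / T)   # Equation (8): decaying learning rate
        if eta <= 0.0:               # the network stops learning once eta hits 0
            break
        for x in X:
            # Winner: the unit whose weights are closest to x, which is
            # equivalent to the maximum synaptic potential of Equations (1)-(3).
            r = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # Equations (4)-(5)
            W[r] += eta * (x - W[r])                           # Equation (7)
    return W  # the final weight vectors act as the cluster centroids
```

After training, for example with `centroids = train_competitive(X, M=3, T=100)`, each weight vector plays the role of one cluster centroid, as used in Section 3.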
In the state of the art, we can find a large number of research works exploiting the advantages of Competitive Neural Networks, or competitive learning, and applying them in different areas. In [9], the author provides a comprehensive overview of competitive learning-based clustering methods. In [11], a framework for different clustering methods such as Fuzzy C-Means and K-Means clustering, entropy-constrained vector quantization, or topological feature maps and the CNN is proposed. In [20], the authors explore the exponential synchronization problem for a class of CNNs.
In other applications, the CompNet model based on Competitive Neural Networks and learnable Gabor kernels is presented in [21]; the model is implemented for palmprint recognition and, according to the results, achieved the lowest error rate compared to the most common state-of-the-art methods. In [22], a Competitive Neural Network is used to estimate the rice-planted area; the authors demonstrate that these kinds of models are useful for the classification of satellite data. Among hybrid methods, in [23], an optimized CNN is applied to solve a complex problem in the field of chemical engineering; the authors proposed a novel neural network optimizer that leverages the advantages of both an improved evolutionary competitive algorithm and gradient-based backpropagation. In [24], the Fireworks Algorithm (FWA) was implemented to optimize the neurons of the CNN and improve the results of the traditional model. To further improve the CNN, a novel classification algorithm based on the integration of competitive learning and the computational power of quantum computing is presented in [25]. These and other studies [26] demonstrate the potential of this area to solve a variety of problems.

2.2. Fuzzy Sets

A fuzzy set is characterized by a membership function that assigns a degree of membership to each element of the universal set. The membership function represents the uncertainty or vagueness associated with the degree to which an element belongs to a particular set [16,17].
The most popular Type-1 FISs are Mamdani [27] and Takagi–Sugeno–Kang [28]. The Mamdani FIS [27] is a model based on fuzzy logic principles that facilitates the representation and processing of uncertain or imprecise information. It has been successfully applied in several domains, due to its ability to model complex systems and handle linguistic variables in a way that humans can interpret. In this system, antecedents and consequents are represented by fuzzy membership functions.
The Takagi–Sugeno–Kang FIS, better known as Sugeno FIS, is a type of fuzzy logic system that uses linguistic rules and fuzzy logic principles to model and infer relationships between input and output variables. Introduced by Takagi and Sugeno [28], the Sugeno FIS represents rules in an IF-THEN format, where the antecedents are formed using fuzzy sets and the consequents are linear functions. This model is widely applied in areas such as control systems, decision making, and pattern recognition due to its transparency and ease of interpretation.
Researchers around the world have incorporated fuzzy set theory to solve classification challenges, more specifically, to classify features with a high degree of similarity or uncertainty in different datasets. In [6], the authors proposed a fuzzy clustering method using an Interval Type-2 FIS combined with the Possibilistic C-Means (PCM) and Fuzzy C-Means (FCM) clustering algorithms. In [29], optimization with swarm algorithms is applied to enhance an Interval Type-2 FIS for data classification; the proposal was tested on datasets from the UCI machine learning library and satellite image data, and the results demonstrate that utilizing optimization algorithms can significantly improve accuracy in data classification problems. In [30], the authors introduce an unsupervised outlier detection algorithm that implements K-NN and fuzzy logic; recognizing the limitations of unsupervised approaches in handling complex datasets, they propose the combination of the K-NN rule and fuzzy logic for effective outlier detection. In [31], a hybrid method based on fuzzy logic and a Genetic Algorithm is applied to Twitter text classification. In [32], the approach combines Fuzzy C-Means clustering and a fuzzy inference system for audiovisual quality of experience evaluation; experimental analyses indicate that the proposed framework outperforms methods without optimization and existing models, specifically MLP-based techniques.

2.3. Discussion

For this research, the methodologies of Competitive Neural Networks and Fuzzy Inference Systems were selected due to the strong results they have produced, specifically in research focused on data clustering and classification problems. On the one hand, unsupervised approaches have limitations on complex datasets; to overcome this, we propose the use of fuzzy logic to generate a robust model. On the other hand, one of the limitations of fuzzy logic is obtaining the parameters of the system: the type of inference system, the type of input and output MFs, the parameters of the MFs, how the MFs are granulated, and how the fuzzy rules are generated. In this sense, the CNN helps to obtain the MF parameters from the data, determine how the MFs are granulated, and generate the fuzzy rules based on the input data.
To model the FIS, we selected the Mamdani and Sugeno systems, which have yielded good results in classification problems, as previously mentioned. The choice of Triangular, Gaussian, and Trapezoidal MFs follows the experience reported in previous works [24,33], where FISs modeled with these MFs achieved the best accuracy results after several tests. Finally, it is important to mention that these three types of membership functions are designed with mathematical and statistical measures according to their form. The detailed explanation is presented in Section 3.

3. Proposed Method

This section presents the proposed methodology for dynamically designing fuzzy systems using Competitive Neural Networks; the model will be identified in the rest of the paper as CNNT1FL. For more details, the general structure is illustrated in Figure 2.
This approach is a combination of a CNN with a Mamdani Type-1 FIS and a Sugeno Type-1 FIS. The aim is to propose models that, based on input data from any dataset, are capable of dynamically generating a Type-1 FIS that can be applied to any problem, adapting to research needs; namely, it could be useful for classification or prediction.
It is important to mention that this investigation focuses on the classification task. In the state of the art, we found three investigations with some similarities, but only one with quantitative results [6] and two with qualitative results [6,33].
As we can see in Figure 2, the model, in addition to the input dataset, comprises two parts: (1) the implementation of the CNN and (2) the design of the FIS. The next two subsections explain in detail the implementation of each part to generate the general model.

3.1. Competitive Neural Network

As highlighted in the state of the art, a CNN employs unsupervised learning; namely, the neural network learns from the input data without prior knowledge of specific targets [9,19,20].
In the proposed method (CNNT1FL), the main objective of the CNN is to generate clusters derived from the input data. In this context, the targets of the CNN represent the centroids of these clusters.
In the CNN functionality, the targets are treated as centroids. Equation (9) illustrates the process of cluster formation: for each data point $x_i$, where $i = 1, 2, 3, \ldots, N$, the distance to each centroid (target) is calculated.
$$dist = \lVert x_i - C_n \rVert \tag{9}$$
where $n = 1, 2, \ldots, M$; $N$ denotes the total number of input data, and $M$ represents the total number of neurons in the CNN. Each data point $x_i$ is assigned to the cluster of the centroid $C_n$ at minimum distance.
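A short sketch of this assignment step, vectorized with NumPy (illustrative, assuming X is an (N, D) array and centroids an (M, D) array as produced by the sketch in Section 2.1):

```python
import numpy as np

def assign_clusters(X, centroids):
    """Assign each data point x_i to its nearest centroid C_n (Equation (9))."""
    # dist[i, n] = ||x_i - C_n|| for every point/centroid pair
    dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(dist, axis=1)  # index of the winning centroid per point
```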
After the clusters are established with their corresponding data, the subsequent step involves designing the Type-1 FIS, as detailed in the following subsection.

3.2. Design Process of the Fuzzy Inference Systems

This section provides a detailed explanation of the Type-1 FIS design process, using the clusters generated by a CNN.
The construction of the Type-1 FIS, including the antecedents, consequents, and fuzzy rules, depends on all the input data and the number of features. Furthermore, the number of MFs for each input variable is determined by the number of clusters generated by the CNN.
In this approach, the design of the FIS is proposed using three different MFs: Trapezoidal, Gaussian, and Triangular.

3.2.1. Triangular Membership Functions

Creating a Triangular MF requires three points, denoted $a$, $b$, and $c$, with the restriction that $a < b < c$, as expressed in Equation (10) and illustrated in Figure 3.
$$\mathrm{Triangle}(x; a, b, c) = \begin{cases} 0, & x \le a \\ \dfrac{x-a}{b-a}, & a \le x \le b \\ \dfrac{c-x}{c-b}, & b \le x \le c \\ 0, & c \le x \end{cases} \tag{10}$$
In this approach, the number of MFs is based on the number of clusters. Table 1 details how to calculate the parameters to design the Triangular MFs, from Triangular $MF_1$ to Triangular $MF_n$. Specifically, the following steps are implemented (a code sketch follows the list):
  • To obtain the parameter $a_n$, two standard deviations (STDs) of $Cluster_n$ are subtracted from the position of $Centroid_n$.
  • The parameter $b_n$ is the position of the centroid of $Cluster_n$.
  • To calculate the parameter $c_n$, two STDs of $Cluster_n$ are added to the position of $Centroid_n$.
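The following sketch derives the Triangular parameters of Table 1 for one feature of one cluster and evaluates Equation (10); the function names are illustrative, and cluster is assumed to be a 1-D array with the values of a single feature:

```python
import numpy as np

def triangular_params(cluster, centroid):
    """(a, b, c) for one cluster and one feature, as in Table 1."""
    std = np.std(cluster)
    return centroid - 2 * std, centroid, centroid + 2 * std

def triangular_mf(x, a, b, c):
    """Evaluate the Triangular MF of Equation (10) at x."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
```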

3.2.2. Gaussian Membership Functions

The parameterization of the Gaussian MF requires two parameters, $c$ and $\sigma$, where $c$ is the mean and $\sigma$ denotes the STD. The mathematical model for generating a Gaussian MF is expressed in Equation (11) and visually represented in Figure 4.
$$\mathrm{Gaussian}(x; c, \sigma) = e^{-\frac{1}{2}\left(\frac{x-c}{\sigma}\right)^2} \tag{11}$$
Table 2 explains the procedure to design the Gaussian MFs, from Gaussian $MF_1$ to Gaussian $MF_n$. For each feature (dimension) of the data, $c_n$ is the position of $Centroid_n$ of $Cluster_n$, and $\sigma_n$ is given by the standard deviation of $Cluster_n$.
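A corresponding sketch for the Gaussian case (Table 2 and Equation (11)); as above, cluster is assumed to be a 1-D array of one feature's values, and centroid comes from the CNN:

```python
import numpy as np

def gaussian_params(cluster, centroid):
    """(c, sigma) for one cluster and one feature, as in Table 2."""
    return centroid, np.std(cluster)

def gaussian_mf(x, c, sigma):
    """Evaluate the Gaussian MF of Equation (11) at x."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)
```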

3.2.3. Trapezoidal Membership Functions

A Trapezoidal MF comprises four points, $a$, $b$, $c$, and $d$, with the restriction that $a < b \le c < d$, as expressed in Equation (12).
$$\mathrm{Trapezoidal}(x; a, b, c, d) = \begin{cases} 0, & x \le a \\ \dfrac{x-a}{b-a}, & a \le x \le b \\ 1, & b \le x \le c \\ \dfrac{d-x}{d-c}, & c \le x \le d \\ 0, & d \le x \end{cases} \tag{12}$$
In this approach, we incorporate a statistical method, specifically the five-number summary used to generate the box-and-whisker plot [34]. This process enables the automatic design of a Trapezoidal MF, as illustrated in Figure 5. Table 3 details how the parameters $a_n$, $b_n$, $c_n$, and $d_n$ are calculated.
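A sketch of the Trapezoidal construction via the five-number summary (Table 3) together with the evaluation of Equation (12); the names are illustrative:

```python
import numpy as np

def trapezoidal_params(cluster):
    """(a, b, c, d) from the five-number summary of one cluster, as in Table 3."""
    a, d = np.min(cluster), np.max(cluster)
    b = np.percentile(cluster, 25)  # Quartile 1
    c = np.percentile(cluster, 75)  # Quartile 3
    return a, b, c, d

def trapezoidal_mf(x, a, b, c, d):
    """Evaluate the Trapezoidal MF of Equation (12) at x."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)
```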

3.2.4. Fuzzy Rules

The fuzzy rules for the proposed method are generated automatically. Specifically, the rules depend on both the number of input data and the number of features in each dataset.
In an FIS, the knowledge base (the fuzzy rules) depends on the antecedents (input variables) and the partition of these variables (the number of MFs). It is essential for researchers to have the option of using either the total number of fuzzy rules or, in specific cases, fewer rules than the total possible number.
The fuzzy rule generation process for the proposed CNNT1FL method is detailed below:
  • IF (Input_1 is Cluster_1) and (Input_2 is Cluster_1) and … and (Input_n is Cluster_1), THEN (Output is Class_1).
  • IF (Input_1 is Cluster_2) and (Input_2 is Cluster_2) and … and (Input_n is Cluster_2), THEN (Output is Class_2).
  • IF (Input_1 is Cluster_3) and (Input_2 is Cluster_3) and … and (Input_n is Cluster_3), THEN (Output is Class_3).
  • …
  • Rule m: IF (Input_1 is Cluster_k) and (Input_2 is Cluster_k) and … and (Input_n is Cluster_k), THEN (Output is Class_k).
where m denotes the maximum number of fuzzy rules, n represents the number of input variables, and k indexes the clusters (classes) to evaluate.
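A minimal sketch of this rule-generation step, producing the rule base as plain strings (for illustration only; the actual FIS stores structured rules):

```python
def generate_rules(n_inputs, n_clusters):
    """Build rule k: every input matched to Cluster_k implies Class_k."""
    rules = []
    for k in range(1, n_clusters + 1):
        antecedent = " and ".join(
            f"(Input_{i} is Cluster_{k})" for i in range(1, n_inputs + 1))
        rules.append(f"IF {antecedent} THEN (Output is Class_{k})")
    return rules

# Example: the three Iris rules of Section 4.1 (4 inputs, 3 clusters).
for rule in generate_rules(n_inputs=4, n_clusters=3):
    print(rule)
```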

3.2.5. Fuzzy Outputs

The consequents or output variables may vary depending on the specific problem and rules employed. For example, when applying the CNNT1FL method to a classification problem, the output will depend on the number of classes to be classified.

4. Experimental Results

This section presents the simulations and results obtained after applying the proposed method, the design of Fuzzy Inference Systems based on a Competitive Neural Network, denoted CNNT1FL. The variants of the CNNT1FL model are applied to three classification datasets: Iris, Wine, and the Wisconsin Breast Cancer Dataset (WDBC). Table 4 shows the characteristics of each dataset: the number of data points, the number of features, and the number of classes.
In Table 4, the "Data Number" parameter is the number of input data for the Competitive Neural Network, the "Features" parameter represents the number of input variables of the FIS, and the "Classes" parameter represents the FIS decision, i.e., the output variable.
On the one hand, in the first part of CNNT1FL, the parameters used in the Neural Network are the following:
  • Network type: Competitive Neural Network.
  • Learning of Network: Unsupervised.
  • Performance: MSE (Mean Square Error).
  • Inputs: N (data number of the dataset).
  • Outputs: M (Centroids).
  • Epochs: 100.
On the other hand, in the second part of CNNT1FL, we have designed two types of FISs, Mamdani and Sugeno, with the aim of analyzing and comparing the performance and improving the data classification task.
It is very important to mention that all the features of each dataset are used to generate the model; 80% of the data are used to design each FIS and 20% are used to test the performance of the created FIS.
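As an illustration of this protocol (the shuffling scheme is an assumption; the paper does not specify how the split is drawn), an 80/20 split can be produced as follows:

```python
import numpy as np

def split_80_20(X, y, seed=0):
    """Shuffle and split: 80% to design the FIS, 20% to test it."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```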
The results obtained with the Mamdani FIS and the Sugeno FIS are presented in the tables below, containing the mean, standard deviation, and best and worst results over 31 independent runs of the proposed method for the Iris, Wine, and WDBC datasets.

4.1. Mamdani FIS

Table 5 shows the results obtained with the Mamdani FIS for the Iris dataset with the Triangular, Gaussian, and Trapezoidal MFs. The best results are shown in bold.
The best results for the Iris dataset applying the Mamdani FIS were obtained using the Gaussian MF, with a mean value of 81.29% and a best classification accuracy of 90%. Figure 6 shows the results of the best classifier, which correctly classifies 27 out of 30 samples in the test dataset.
The Iris dataset has four features: sepal length, sepal width, petal length, and petal width; therefore, the Mamdani FIS has four input variables, as illustrated in Figure 7.
Figure 8 presents the output variable of the Mamdani FIS, which has three membership functions because the Iris dataset has three different classes for classification.
The three fuzzy rules used in the Mamdani FIS are the following:
  • If (Length Sepals is Cluster1) and (Width Sepals is Cluster1) and (Length Petals is Cluster1) and (Width Petals is Cluster1), then (Classification is Class 1).
  • If (Length Sepals is Cluster2) and (Width Sepals is Cluster2) and (Length Petals is Cluster2) and (Width Petals is Cluster2), then (Classification is Class 2).
  • If (Length Sepals is Cluster3) and (Width Sepals is Cluster3) and (Length Petals is Cluster3) and (Width Petals is Cluster3), then (Classification is Class 3).
The results obtained for the Wine dataset with Mamdani FIS are described in Table 6.
The best result obtained for the Wine dataset uses the Gaussian membership function, with 74.29% accuracy based on 26 out of 35 correct classifications in the test dataset, as shown in Figure 9. Figure 10, Figure 11, Figure 12 and Figure 13 illustrate the input variables, which represent the 13 features of the Wine dataset divided into Parts 1, 2, 3, and 4, respectively, and Figure 14 presents the output variable with its three classes.
Similarly to the Iris dataset, the Wine dataset has three classes; therefore, three fuzzy rules are used in the Mamdani FIS, as presented below:
  • If (Alcohol is Cluster1) and (Malic acid is Cluster1) and (Ash is Cluster1) and (Alcalinity of ash is Cluster1) and (Magnesium is Cluster1) and (Total phenols is Cluster1) and (Flavanoids is Cluster1) and (Nonflavanoid phenols is Cluster1) and (Proanthocyanins is Cluster1) and (Color intensity is Cluster1) and (Hue is Cluster1) and (OD280/OD315 of diluted wines is Cluster1) and (Proline is Cluster1), then (Classification is Class 1).
  • If (Alcohol is Cluster2) and (Malic acid is Cluster2) and (Ash is Cluster2) and (Alcalinity of ash is Cluster2) and (Magnesium is Cluster2) and (Total phenols is Cluster2) and (Flavanoids is Cluster2) and (Nonflavanoid phenols is Cluster2) and (Proanthocyanins is Cluster2) and (Color intensity is Cluster2) and (Hue is Cluster2) and (OD280/OD315 of diluted wines is Cluster2) and (Proline is Cluster2), then (Classification is Class 2).
  • If (Alcohol is Cluster3) and (Malic acid is Cluster3) and (Ash is Cluster3) and (Alcalinity of ash is Cluster3) and (Magnesium is Cluster3) and (Total phenols is Cluster3) and (Flavanoids is Cluster3) and (Nonflavanoid phenols is Cluster3) and (Proanthocyanins is Cluster3) and (Color intensity is Cluster3) and (Hue is Cluster3) and (OD280/OD315 of diluted wines is Cluster3) and (Proline is Cluster3), then (Classification is Class 3).
Continuing with the Mamdani FIS, Table 7 shows the results obtained for the WDBC dataset with the three different membership functions. The best average and best classifier accuracy were achieved using the Gaussian membership function, with 92.33% and 94.74%, respectively.
Figure 15 presents the best classification result for the WDBC dataset, where the best model correctly classifies 98 samples out of 118 in the test dataset.
The WDBC dataset has 30 features and two classes for classification. Figure 16 illustrates the first of eight parts of the input variables and shows the first four features: radius_mean, texture_mean, perimeter_mean, and area_mean.
Figure 17 and Figure 18 present Parts 2 and 3 of input variables. Part 2 comprises the features: smoothness_mean, compactness_mean, concavity_mean, and concave_points_mean, and Part 3 comprises symmetry_mean, fractal_dimension_mean, radius_se, and texture_se.
Figure 19 presents the fourth part of input variables with the features: perimeter_se, area_se, smoothness_se, and compactness_se. In Figure 20, the fifth part of input variables is illustrated.
The radius_largest_worst, texture_largest_worst, perimeter_largest_worst, and area_largest_worst features are presented in Figure 21. Figure 22 shows the seventh part.
The last two input variables of the Mamdani FIS are shown in Figure 23, which are symmetry_largest_worst and fractal_dimension_largest_worst. In Figure 24, we present the output variable with its respective classes.
For the WDBC dataset, we use only two fuzzy rules, as described below:
  • If (radius_mean is Cluster1) and (texture_mean is Cluster1) and (perimeter_mean is Cluster1) and (area_mean is Cluster1) and (smoothness_mean is Cluster1) and (compactness_mean is Cluster1) and (concavity_mean is Cluster1) and (concave_points_mean is Cluster1) and (symmetry_mean is Cluster1) and (fractal_dimension_mean is Cluster1) and (radius_se is Cluster1) and (texture_se is Cluster1) and (perimeter_se is Cluster1) and (area_se is Cluster1) and (smoothness_se is Cluster1) and (compactness_se is Cluster1) and (concavity_se is Cluster1) and (concave_points_se is Cluster1) and (symmetry_se is Cluster1) and (fractal_dimension_se is Cluster1) and (radius_largest_worst is Cluster1) and (texture_largest_worst is Cluster1) and (perimeter_largest_worst is Cluster1) and (area_largest_worst is Cluster1) and (smoothness_largest_worst is Cluster1) and (compactness_largest_worst is Cluster1) and (concavity_largest_worst is Cluster1) and (concave_points_largest_worst is Cluster1) and (symmetry_largest_worst is Cluster1) and (fractal_dimension_largest_worst is Cluster1), then (Classification is Class 1).
  • If (radius_mean is Cluster2) and (texture_mean is Cluster2) and (perimeter_mean is Cluster2) and (area_mean is Cluster2) and (smoothness_mean is Cluster2) and (compactness_mean is Cluster2) and (concavity_mean is Cluster2) and (concave_points_mean is Cluster2) and (symmetry_mean is Cluster2) and (fractal_dimension_mean is Cluster2) and (radius_se is Cluster2) and (texture_se is Cluster2) and (perimeter_se is Cluster2) and (area_se is Cluster2) and (smoothness_se is Cluster2) and (compactness_se is Cluster2) and (concavity_se is Cluster2) and (concave_points_se is Cluster2) and (symmetry_se is Cluster2) and (fractal_dimension_se is Cluster2) and (radius_largest_worst is Cluster2) and (texture_largest_worst is Cluster2) and (perimeter_largest_worst is Cluster2) and (area_largest_worst is Cluster2) and (smoothness_largest_worst is Cluster2) and (compactness_largest_worst is Cluster2) and (concavity_largest_worst is Cluster2) and (concave_points_largest_worst is Cluster2) and (symmetry_largest_worst is Cluster2) and (fractal_dimension_largest_worst is Cluster2), then (Classification is Class 2).

4.2. Sugeno FIS

This section presents the results obtained with the Sugeno FIS. The difference between Mamdani and Sugeno lies in the output variables: Mamdani uses output variables with membership functions, whereas Sugeno uses constant values in the output variables.
Table 8 shows the results for the Iris dataset with Sugeno FIS. As we can see, the best results are obtained using the Gaussian MF with an accuracy of 89.66%.
Figure 25 illustrates the results of the best classification model. Figure 26 and Figure 27 present the input and output variables, respectively, which are used in the implementation of the Sugeno FIS.
In the output variable of the Sugeno FIS, we implement a rounding rule based on ranges: from 0 to 1.00 is Class 1, from 1.01 to 2.00 is Class 2, and from 2.01 to 3.00 is Class 3.
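A hedged sketch of this step: a weighted-average Sugeno output with constant consequents 1, 2, and 3 (one per class), followed by the range-based rounding above. The firing strengths would come from the input MFs of the rules; here they are passed in directly for illustration:

```python
import numpy as np

def sugeno_classify(firing_strengths, consequents=(1.0, 2.0, 3.0)):
    """Crisp Sugeno output (weighted average of constants), then round by range:
    (0, 1.00] -> Class 1, (1.00, 2.00] -> Class 2, (2.00, 3.00] -> Class 3."""
    w = np.asarray(firing_strengths, dtype=float)
    y = float(np.dot(w, consequents) / np.sum(w))  # crisp Sugeno output
    return max(1, int(np.ceil(y)))                 # class index by range

# Example: rule 2 fires most strongly, y = 1.9, so Class 2.
print(sugeno_classify([0.2, 0.7, 0.1]))
```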
For the Wine dataset, the best result was obtained using the Triangular MF, with an accuracy of 80%, and the best average was obtained using the Gaussian MF, with a value of 66.46%, as shown in Table 9.
The best classification result is presented in Figure 28, achieving 80% with 28 correct classifications out of 35 in the test dataset.
Figure 29 illustrates, as an example, part of the input variables of the best Sugeno FIS using Triangular MF; in this case, only 4 out of the 13 features are presented.
It is important to mention that the fuzzy rules in the Mamdani FIS and the Sugeno FIS are the same, and the output variables are the same as in the Iris dataset because both datasets have three classes.
Table 10 shows the results for the WDBC dataset; the best result was obtained using the Trapezoidal MF. The best classification result, with a value of 96%, is presented in Figure 30, with 109 correct classifications out of 114 in the test dataset.
A part of the input variables of the best Sugeno FIS for WDBC using the Trapezoidal MF is illustrated in Figure 31, which presents a sample of 4 out of 30 features. The output variable of the Sugeno FIS is represented in Figure 32; in this case study, it comprises two classes.

4.3. Comparative Analysis

A summary of the results is presented in Table 11 and Table 12. Table 11 describes the comparative analysis of the different MFs when implementing the Mamdani FIS, and Table 12 presents the results of the Sugeno FIS. According to these values, we can notice that in the Mamdani FIS, the configuration using the Gaussian MF was better for all datasets. In the Sugeno FIS, the behavior showed a slight variation: for the Iris dataset, the Gaussian MF also presents the best results; for the Wine dataset, the best precision was obtained with the Triangular MF and the best average with the Gaussian MF; and for the WDBC dataset, the performance was improved with the Trapezoidal MF.
Analyzing the general performance of the Mamdani FIS against the Sugeno FIS, we can conclude that Sugeno was more stable than Mamdani, achieving better precision in two of the three datasets (Wine and WDBC). On the other hand, comparing the results with respect to the MFs, the Gaussian MF presents greater stability in most results: in the Mamdani FIS, it is better in all cases, and in the Sugeno FIS, it is better in the Iris and Wine datasets.
In order to evaluate the performance of the proposed CNNT1FL method versus other state of the art methods, we performed a comparison with Interval Type-2 Fuzzy C-Means (IT2FCM) [6] using only the Iris data. This analysis comparison is described in Table 13.
The comparison between both methods, CNNT1FL and IT2FCM, consists of analyzing the best classification percentage of the Mamdani FIS and the Sugeno FIS for the Iris dataset, since it is the only dataset tested by the authors of [6].
As we can notice in Table 13, with the three variations of membership functions in the Mamdani FIS, the proposed CNNT1FL method obtained better results than IT2FCM; however, for the Sugeno FIS, the results of CNNT1FL did not surpass those of IT2FCM.
It is important to mention that one of the disadvantages of CNNT1FL is that it uses all the features of each dataset; in contrast, IT2FCM did not specify how many and which features were used.

5. Conclusions

In this paper, we have presented a new hybrid method based on a Competitive Neural Network and Type-1 Fuzzy Inference Systems. The main contribution is the automatic design of Type-1 FISs, to which we added three variations, specifically in the membership functions of the input variables, designing FISs with Triangular, Gaussian, and Trapezoidal membership functions. We also designed two types of FISs, Mamdani and Sugeno, with the three variations mentioned above.
CNNT1FL was applied in the area of data classification, and the results can be summarized in three main points. (1) For the Iris dataset, the highest result was achieved by employing the Gaussian MF for both input and output variables. (2) For the Wine dataset, the best mean across all variations was obtained with the Sugeno FIS using the Gaussian MF; however, the best single FIS was reached using the Triangular MF, also with the Sugeno FIS. (3) The best designed FIS for the WDBC dataset is a Sugeno FIS with Trapezoidal MFs.
On the other hand, in the comparison we made, we achieved an improvement in the best design of the Mamdani Type-1 FIS; that is, all three variations obtained a better percentage of correct classification than IT2FCM [6]. For the Sugeno FIS, the results were not satisfactory compared to IT2FCM. It is important to mention two things: first, we only carried out a comparison with the Iris dataset and with the best correct classification percentage, because the mean and standard deviation are not documented in the IT2FCM paper [6], so a statistical test could not be performed; second, we found neither quantitative nor qualitative information about the features used and, as mentioned in the previous paragraphs, this puts the proposed CNNT1FL method at a disadvantage because we use all the features.
In general, we can conclude that the main objective of automatically designing a Type-1 FIS was achieved; according to the results, the Gaussian membership function is the best option for the variables of a Type-1 FIS, and the Sugeno FIS is the best option for data classification problems. We can also conclude that similarity in data features is very important for correct data classification; for example, in the Wine dataset, the percentage of correct classification is not high because the thirteen features have little similarity, namely, the ranges of the features are highly variable.
Finally, as described in the previous paragraph, we have achieved the main objective of this work, highlighting the contribution of combining two powerful computing areas, artificial neural networks and fuzzy logic, to develop robust methods that can improve results in complex classification problems.
In future work, we will design an Interval Type-2 FIS or a General Type-2 FIS and apply it to similar problems, with the objective of carrying out a comparison with CNNT1FL and other state-of-the-art methods as well as improving accuracy. We could also use different datasets (with more data and features) involving highly complex problems to evaluate the performance of the proposal.

Author Contributions

Conceptualization, J.B. and P.M.; methodology, J.B.; software, J.B.; validation, C.I.G., F.V. and P.M.; formal analysis, J.B., P.M., F.V. and C.I.G.; investigation, J.B.; data curation, J.B.; writing—original draft preparation, J.B.; writing—review and editing, C.I.G. and F.V.; supervision, P.M.; project administration, C.I.G.; funding acquisition, C.I.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by CONAHCyT, grant number CF-2023-I-555.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Acknowledgments

We thank TECNM/Tijuana Institute of Technology and CONAHCyT for financial support under grant number CF-2023-I-555.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bonabeau, E.; Dorigo, M.; Theraulaz, G. Swarm Intelligence: From Natural to Artificial Systems; Oxford Academic: New York, NY, USA, 1999. [Google Scholar]
  2. Esmin, A.A.A.; Coelho, R.; Matwin, S. A review on particle swarm optimization algorithm and its variants to clustering high-dimensional data. Artif. Intell. Rev. 2015, 44, 23–45. [Google Scholar] [CrossRef]
  3. Tan, Y. Fireworks Algorithm (FWA); Springer: Berlin/Heidelberg, Germany, 2015; pp. 17–35. [Google Scholar]
  4. Rashedi, E.; Nezamabadi, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  5. Barraza, J.; Rodríguez, L.; Castillo, O.; Melin, P.; Valdez, F. A New Hybridization Approach between the Fireworks Algorithm and Grey Wolf Optimizer Algorithm. J. Optim. Res. 2018, 2018, 6495362. [Google Scholar] [CrossRef]
  6. Rubio, E.; Castillo, O.; Melin, P. Interval type-2 fuzzy system design based on the interval type-2 fuzzy c-means algorithm. In Fuzzy Technology: Present Applications and Future Challenges; Collan, M., Fedrizzi, M., Kacprzyk, J., Eds.; Springer: Cham, Switzerland, 2016; Volume 335, pp. 133–146. [Google Scholar]
  7. Likas, A.; Vlassis, N.; Verbeek, J.J. The global k-means clustering algorithm. Pattern Recognit. 2003, 36, 451–461. [Google Scholar] [CrossRef]
  8. Kramer, O. K-Nearest Neighbors. In Dimensionality Reduction with Unsupervised Nearest Neighbors. Intelligent Systems Reference Library; Springer: Berlin/Heidelberg, Germany, 2013; Volume 51, pp. 13–23. [Google Scholar]
  9. Du, K.L. Clustering: A neural network approach. Neural Netw. 2010, 23, 89–107. [Google Scholar] [CrossRef]
  10. Hagan, M.T.; Demuth, H.B.; Beale, M.H. Neural Network Design; PWS Publishing: Boston, MA, USA, 1996. [Google Scholar]
  11. Buhmann, J.; Kühnel, H. Complexity Optimized Data Clustering by Competitive Neural Networks. Neural Comput. 1993, 5, 75–88. [Google Scholar] [CrossRef]
  12. Chen, C.; Mi, L.; Liu, Z.; Qiu, B.; Zhao, H.; Xu, L. Predefined-time synchronization of competitive neural networks. Neural Netw. 2021, 142, 492–499. [Google Scholar] [CrossRef]
  13. Rodríguez, L.; Castillo, O.; Soria, J. A Study of Parameters of the Grey Wolf Optimizer Algorithm for Dynamic Adaptation with Fuzzy Logic. In Nature-Inspired Design of Hybrid Intelligent Systems. Studies in Computational Intelligence; Melin, P., Castillo, O., Kacprzyk, J., Eds.; Springer: Cham, Switzerland, 2017; Volume 667, pp. 371–390. [Google Scholar]
  14. Simoes, M.; Bose, K.; Spiegel, J. Fuzzy Logic Based Intelligent Control of a Variable Speed Cage Machine Wind Generation System. IEEE Trans. Power Electron. 1997, 12, 87–95. [Google Scholar] [CrossRef]
  15. Soto, J.; Melin, P. Optimization of the Fuzzy Integrators in Ensembles of ANFIS Model for Time Series Prediction: The case of Mackey-Glass. In Proceedings of the IFSA-EUSFLAT 2015, Gijón, Spain, 30 June–3 July 2015; pp. 994–999. [Google Scholar]
  16. Zadeh, L.A. Knowledge Representation in Fuzzy Logic. IEEE Trans. Knowl. Data Eng. 1989, 1, 89–100. [Google Scholar] [CrossRef]
  17. Guillaume, S. Designing fuzzy inference systems from data: An interpretability-oriented review. IEEE Trans. Fuzzy Syst. 2001, 9, 426–443. [Google Scholar] [CrossRef]
  18. Men, H.; Liu, H.; Wang, L.; Pan, Y. An Optimizing Method of Competitive Neural Network Optimizing Method of Com. Key Eng. Mater. 2011, 467–469, 894–899. [Google Scholar] [CrossRef]
  19. Rumelhart, D.E.; Zipser, D. Feature discovery by competitive learning. Cogn. Sci. 1985, 9, 75–112. [Google Scholar]
  20. Lou, X.; Cui, B. Synchronization of competitive neural networks with different time scales. Phys. A Stat. Mech. Its Appl. 2007, 380, 563–576. [Google Scholar] [CrossRef]
  21. Liang, X.; Yang, J.; Lu, G.; Zhang, D. CompNet: Competitive Neural Network for Palmprint Recognition Using Learnable Gabor Kernels. IEEE Signal Process. Lett. 2021, 28, 1739–1743. [Google Scholar] [CrossRef]
  22. Omatu, S. Estimation of rice-planted area using competitive neural network. In Proceedings of the 2015 10th Asian Control Conference (ASCC), Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–6. [Google Scholar]
  23. Gavrilescu, M.; Floria, S.-A.; Leon, F.; Curteanu, S. A Hybrid Competitive Evolutionary Neural Network Optimization Algorithm for a Regression Problem in Chemical Engineering. Mathematics 2022, 10, 3581. [Google Scholar] [CrossRef]
  24. Barraza, J.; Valdez, F.; Melin, P.; Gonzalez, C.I. Optimal number of clusters finding using the fireworks algorithm. In Hybrid Intelligent Systems in Control, Pattern Recognition and Medicine; Castillo, O., Melin, P., Eds.; Springer: Cham, Switzerland, 2020; Volume 827, pp. 83–93. [Google Scholar]
  25. Zidan, M.; Abdel-Aty, A.-H.; El-shafei, M.; Feraig, M.; Al-Sbou, Y.; Eleuch, H.; Abdel-Aty, M. Quantum Classification Algorithm Based on Competitive Learning Neural Network and Entanglement Measure. Appl. Sci. 2019, 9, 1277. [Google Scholar] [CrossRef]
  26. Men, H.; Liu, H.; Pan, Y.; Wang, L.; Zhang, H. Electronic Nose Based on an Optimized Competition Neural Network. Sensors 2011, 11, 5005–5019. [Google Scholar] [CrossRef]
  27. Kickert, W.J.M.; Mamdani, E.H. Analysis of fuzzy logic controller. Fuzzy Sets Syst. 1978, 1, 29–44. [Google Scholar] [CrossRef]
  28. Takagi, T.; Sugeno, M. Fuzzy Identification of Systems and Its Applications to Modeling and Control. IEEE Trans. Syst. Man Cybern. 1985, 15, 116–132. [Google Scholar] [CrossRef]
  29. Mai, D.S. Interval type-2 fuzzy logic systems optimization with swarm algorithms for data classification. In Proceedings of the 2021 13th International Conference on Knowledge and Systems Engineering (KSE), Bangkok, Thailand, 10–12 November 2021; pp. 1–5. [Google Scholar]
  30. Velázquez-González, J.R.; Peregrina-Barreto, H.; Martinez-Trinidad, J.F. Unsupervised Outlier detection algorithm based on k-NN and fuzzy logic. In Proceedings of the 2019 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 13–15 November 2019; pp. 1–6. [Google Scholar]
  31. Abdul-Jaleel, M.; Ali, Y.H.; Ibrahim, N.J. Fuzzy logic and Genetic Algorithm based Text Classification Twitter. In Proceedings of the 2019 2nd Scientific Conference of Computer Sciences (SCCS), Baghdad, Iraq, 27–28 March 2019; pp. 93–98. [Google Scholar]
  32. Boudjerida, F.; Akhtar, Z.; Lahoulou, A.; Chettibi, S. Integrating fuzzy C-means clustering and fuzzy inference system for audiovisual quality of experience. Int. J. Inf. Tecnol. 2023. [Google Scholar] [CrossRef]
  33. Barraza, J.; Valdez, F.; Melin, P.; Gonzalez, C.I. Interval Type 2 Fuzzy Fireworks Algorithm for Clustering. In Handbook of Research on Fireworks Algorithms and Swarm Intelligence; Yan, Y., Ed.; IGI Global: Hershey, PA, USA, 2020; pp. 195–211. [Google Scholar]
  34. Moreno, J.E.; Sanchez, M.A.; Mendoza, O.; Rodríguez-Díaz, A.; Castillo, O.; Melin, P.; Castro, J.R. Design of an interval Type-2 fuzzy model with justifiable uncertainty. Inf. Sci. 2020, 513, 206–221. [Google Scholar] [CrossRef]
Figure 1. Architecture of a CNN. Red represents the winning unit and orange the deactivated unit.
Figure 2. Model of Competitive Neural Network Fuzzy Logic (CNNT1FL).
Figure 3. Design of a Triangular MF.
Figure 4. Design of a Gaussian MF.
Figure 5. Trapezoidal MF.
Figure 6. Test data for the Iris dataset with Mamdani FIS.
Figure 7. Input variables of the Mamdani FIS.
Figure 8. Output variable of the Mamdani FIS.
Figure 9. Test data for the Wine dataset with Mamdani FIS.
Figure 10. Input variables of the Mamdani FIS (Part 1) Wine dataset.
Figure 11. Input variables of the Mamdani FIS (Part 2) Wine dataset.
Figure 12. Input variables of the Mamdani FIS (Part 3) Wine dataset.
Figure 13. Input variables of the Mamdani FIS (Part 4) Wine dataset.
Figure 14. Output variable of the Mamdani FIS Wine dataset.
Figure 15. Test data for the WDBC dataset with Mamdani FIS.
Figure 16. Input variables of the Mamdani FIS (Part 1) WDBC dataset.
Figure 17. Input variables of the Mamdani FIS (Part 2) WDBC dataset.
Figure 18. Input variables of the Mamdani FIS (Part 3) WDBC dataset.
Figure 19. Input variables of the Mamdani FIS (Part 4) WDBC dataset.
Figure 20. Input variables of the Mamdani FIS (Part 5) WDBC dataset.
Figure 21. Input variables of the Mamdani FIS (Part 6) WDBC dataset.
Figure 22. Input variables of the Mamdani FIS (Part 7) WDBC dataset.
Figure 23. Input variables of the Mamdani FIS (Part 8) WDBC dataset.
Figure 24. Output variable of the Mamdani FIS WDBC dataset.
Figure 25. Test data for the Iris dataset with Sugeno FIS.
Figure 26. Input variables of the Sugeno FIS for the Iris dataset.
Figure 27. Output variable of the Sugeno FIS for the Iris dataset.
Figure 28. Test data for the Wine dataset with Sugeno FIS.
Figure 29. Input variables of the Sugeno FIS for the Wine dataset.
Figure 30. Test data for the WDBC dataset with Sugeno FIS.
Figure 31. Input variables of the Sugeno FIS for WDBC.
Figure 32. Output variable of the Sugeno FIS for WDBC.
Table 1. Points for a Triangular membership function.
Membership Function 1 | Membership Function 2 | … | Membership Function n
a_1 = Centroid_1 − 2·STD(Cluster_1) | a_2 = Centroid_2 − 2·STD(Cluster_2) | … | a_n = Centroid_n − 2·STD(Cluster_n)
b_1 = Centroid_1 | b_2 = Centroid_2 | … | b_n = Centroid_n
c_1 = Centroid_1 + 2·STD(Cluster_1) | c_2 = Centroid_2 + 2·STD(Cluster_2) | … | c_n = Centroid_n + 2·STD(Cluster_n)
Table 2. Points for a Gaussian membership function.
Membership Function 1 | Membership Function 2 | … | Membership Function n
c_1 = Centroid(Cluster_1) | c_2 = Centroid(Cluster_2) | … | c_n = Centroid(Cluster_n)
σ_1 = STD(Cluster_1) | σ_2 = STD(Cluster_2) | … | σ_n = STD(Cluster_n)
Table 3. Points for a Trapezoidal MF.
Membership Function 1 | Membership Function 2 | … | Membership Function n
a_1 = min(Cluster_1) | a_2 = min(Cluster_2) | … | a_n = min(Cluster_n)
b_1 = Quartile 1(Cluster_1) | b_2 = Quartile 1(Cluster_2) | … | b_n = Quartile 1(Cluster_n)
c_1 = Quartile 3(Cluster_1) | c_2 = Quartile 3(Cluster_2) | … | c_n = Quartile 3(Cluster_n)
d_1 = max(Cluster_1) | d_2 = max(Cluster_2) | … | d_n = max(Cluster_n)
Table 4. Parameters of the datasets.
Dataset | Classes | Features | Data Number
Iris | 3 | 4 | 150
Wine | 3 | 13 | 178
WDBC | 2 | 30 | 569
Table 5. Mamdani FIS results for the Iris dataset.
Membership Functions | Mean | STD | Best | Worst
Triangular | 75.59 | 4.16 | 86.67 | 70
Gaussian | 81.29 | 5.21 | 90 | 70
Trapezoidal | 70.97 | 4.57 | 76.67 | 60
Table 6. Mamdani FIS results for the Wine dataset.
Membership Functions | Mean | STD | Best | Worst
Triangular | 48.53 | 14.46 | 72.22 | 33.33
Gaussian | 63.21 | 9.16 | 74.29 | 36.11
Trapezoidal | 51.76 | 10.35 | 68.57 | 44.44
Table 7. Mamdani FIS results for the WDBC dataset.
Membership Functions | Mean | STD | Best | Worst
Triangular | 71.84 | 0.35 | 72.81 | 71.05
Gaussian | 92.33 | 1.22 | 94.74 | 89.47
Trapezoidal | 73.79 | 5.83 | 91.23 | 71.05
Table 8. Sugeno FIS results for the Iris dataset.
Membership Functions | Mean | STD | Best | Worst
Triangular | 73.33 | 4.87 | 80 | 63.33
Gaussian | 84.34 | 5.23 | 89.66 | 75.86
Trapezoidal | 68.17 | 4.46 | 80 | 60
Table 9. Sugeno FIS results for the Wine dataset.
Membership Functions | Mean | STD | Best | Worst
Triangular | 49.62 | 15.69 | 80 | 33.33
Gaussian | 66.46 | 7.93 | 75 | 50
Trapezoidal | 52.45 | 10.61 | 67 | 44
Table 10. Sugeno FIS results for the WDBC dataset.
Membership Functions | Mean | STD | Best | Worst
Triangular | 73.17 | 4.85 | 91 | 71.05
Gaussian | 92.44 | 0.74 | 94.74 | 92.04
Trapezoidal | 93.91 | 0.32 | 96 | 94
Table 11. Comparative results using Mamdani FIS.
Dataset | Triangular MF (Mean / Best) | Gaussian MF (Mean / Best) | Trapezoidal MF (Mean / Best)
Iris | 75.59 / 86.67 | 81.29 / 90 | 70.97 / 76.67
Wine | 48.53 / 72.22 | 63.21 / 74.29 | 51.76 / 68.57
WDBC | 71.84 / 72.81 | 92.33 / 94.74 | 73.79 / 91.23
Table 12. Comparative results using Sugeno FIS.
Dataset | Triangular MF (Mean / Best) | Gaussian MF (Mean / Best) | Trapezoidal MF (Mean / Best)
Iris | 73.33 / 80 | 84.34 / 89.66 | 68.17 / 80
Wine | 49.62 / 80 | 66.46 / 75 | 52.45 / 67
WDBC | 73.17 / 91 | 92.44 / 94.74 | 93.91 / 96
Table 13. FIS comparison between CNNT1FL and IT2FCM for the Iris dataset.
Method | Membership Functions | Mamdani FIS | Sugeno FIS
IT2FCM | Gaussian | 65.33 | 94.67
CNNT1FL | Triangular | 86.67 | 80
CNNT1FL | Gaussian | 90 | 89.66
CNNT1FL | Trapezoidal | 76.67 | 80