
Article

Using the Choquet Integral in the Fuzzy Reasoning Method of Fuzzy Rule-Based Classification Systems

by Edurne Barrenechea, Humberto Bustince, Javier Fernandez, Daniel Paternain and José Antonio Sanz *
* Author to whom correspondence should be addressed.
Axioms 2013, 2(2), 208-223; https://doi.org/10.3390/axioms2020208
Submission received: 25 February 2013 / Revised: 21 March 2013 / Accepted: 3 April 2013 / Published: 23 April 2013
(This article belongs to the Special Issue Axiomatic Approach to Monotone Measures and Integrals)

Abstract

In this paper we present a new fuzzy reasoning method in which the Choquet integral is used as the aggregation function, so that the interaction among the rules of the system can be taken into account. Since the choice of the fuzzy measure is a key factor in the subsequent success of the Choquet integral, we consider several fuzzy measures and first apply the new method with the same fuzzy measure for all the classes. However, the relationship among the set of rules of each class can differ, and therefore the best fuzzy measure can change depending on the class. Consequently, we propose a learning method, based on a genetic algorithm, that computes the most suitable fuzzy measure for each class. The results show that our new proposal enhances the performance of the classical fuzzy reasoning methods of the winning rule and additive combination whenever the fuzzy measure is appropriate for the tackled problem.

1. Introduction

A classification problem [1,2] consists of assigning objects to predefined groups or classes based on observed variables related to the objects. To do so, a learning algorithm uses the available information to produce a decision function that determines the class to which each object belongs.
Fuzzy Rule-Based Classification Systems (FRBCSs) [3], besides performing well, provide a model close to human reasoning, since they are composed of a set of rules built from linguistic labels. For these reasons, they are widely used to deal with real-world problems [4]. The two main components of FRBCSs are the knowledge base, where the information about the problem is stored, and the Fuzzy Reasoning Method (FRM).
The FRM is an inference procedure that uses the information in the knowledge base to determine the class to which new examples are assigned. To do so, the local information is computed first, that is, the compatibility between the example and each fuzzy rule in the system. Then, this local information is aggregated to generate the global information associated with each class of the problem and, finally, the example is classified into the class having the maximum global information.
The FRM of the winning rule is traditionally used in the specialized literature [5,6,7,8,9]. It uses the maximum as the aggregation function [10,11] to obtain the global information. For each class, this FRM only considers the information given by the single fuzzy rule having the greatest compatibility with the example, and consequently it ignores the information given by the remaining fuzzy rules of the system.
In this paper, we propose a new FRM that takes into account the information given by several or even all the fuzzy rules in the system. To do so, we consider the use of the Choquet integral [12,13] as the aggregation operator in the FRM. The Choquet integral is related to a fuzzy measure [12,14], which models the interaction among the elements to be aggregated (the information given by the rules of the system in this case). Therefore, a key point is the choice of an appropriate fuzzy measure for each problem we want to deal with. To perform this choice, we propose to use a genetic algorithm [15] in order to learn the most suitable fuzzy measure for each class to carry out the aggregation stage.
In order to study the usefulness of the new proposal, we apply the well-known Chi et al.'s algorithm [8] to accomplish the fuzzy rule learning process. We compare the performance of the classic FRMs of the winning rule and additive combination with respect to both the results obtained when using the Choquet integral with several fuzzy measures and those obtained when the fuzzy measure is genetically learned. The behaviour of the approaches is tested on seventeen numerical data-sets selected from the KEEL data-set repository [16,17] and, in order to support our conclusions, we use some non-parametric statistical tests [18,19].
This paper is arranged as follows. In Section 2 we recall some preliminary concepts that are necessary to understand the paper. The new proposal is described in detail in Section 3, including the new FRM, the fuzzy measures considered in the paper and the method to genetically learn the fuzzy measure. Next, the experimental set-up and the corresponding analysis of the results are presented in Section 4 and Section 5 respectively. Finally, the main conclusions are drawn in Section 6.

2. Preliminaries

This section introduces the background necessary to understand the new proposal. First, we recall some theoretical concepts; next, we introduce basic concepts about FRBCSs; and finally, we describe the evolutionary model considered in this paper.

2.1. Theoretical Concepts

In this paper we use fuzzy sets to model the linguistic labels composing the antecedents of the rules.
Definition 1 [20] A fuzzy set F defined on a finite and non-empty universe $U = \{u_1, \ldots, u_n\}$ is given by
$F = \{(u_i, \mu_F(u_i)) \mid u_i \in U\}$
where $\mu_F : U \to [0, 1]$ is the membership function.
The conjunction among the antecedents of the rules is modelled by means of t-norms.
Definition 2 [10,11] A triangular norm (t-norm) $T : [0,1]^2 \to [0,1]$ is an associative, commutative, increasing function such that $T(1, x) = x$ for all $x \in [0,1]$.
When several numerical values need to be combined into a single value, we use aggregation functions.
Definition 3 [10,11] An aggregation function of dimension n (n-ary aggregation function) is a non-decreasing mapping $M : [0,1]^n \to [0,1]$ such that $M(0, \ldots, 0) = 0$ and $M(1, \ldots, 1) = 1$.
Finally, we recall the concepts needed to define the aggregation function known as the Choquet integral [12].
Definition 4 [12,14] Let $N = \{1, \ldots, n\}$. A fuzzy measure is a function $m : 2^N \to [0,1]$ which is monotonic (i.e., $m(A) \le m(B)$ whenever $A \subset B$) and satisfies $m(\emptyset) = 0$ and $m(N) = 1$.
In the context of aggregation functions, fuzzy measures are used to model the importance of a coalition, that is, the relationship among the elements to be aggregated.
Definition 5 [12] Let m be a fuzzy measure on N. We say that m is:
• Additive if $m(A \cup B) = m(A) + m(B)$ for any disjoint subsets $A, B \subseteq N$;
• Symmetric if $|A| = |B|$ implies $m(A) = m(B)$ for any subsets $A, B \subseteq N$.
Definition 6 [12] Let m be a fuzzy measure on N and $x \in [0, \infty]^n$. The discrete Choquet integral of x with respect to m is defined by:
$C_m(x) = \sum_{i=1}^{n} \left( x_{(i)} - x_{(i-1)} \right) \cdot m(A_{(i)})$
where $(\cdot)$ is a permutation of $\{1, \ldots, n\}$ such that $0 \le x_{(1)} \le x_{(2)} \le \ldots \le x_{(n)}$, with the conventions $x_{(0)} = 0$ and $A_{(i)} = \{(i), \ldots, (n)\}$.
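As an illustration, Definition 6 translates into code almost literally. The following sketch (the function names are ours, not from the paper) evaluates the discrete Choquet integral for a fuzzy measure supplied as a function on sets of 0-based indices:

```python
def choquet(x, m):
    """Discrete Choquet integral of x with respect to the fuzzy measure m.

    x : sequence of non-negative values to aggregate
    m : function mapping a frozenset of 0-based indices to [0, 1],
        with m(empty set) = 0 and m(all indices) = 1
    """
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])  # permutation (.) giving increasing x
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        tail = frozenset(order[pos:])  # A_(pos): indices of the n - pos largest values
        total += (x[i] - prev) * m(tail)
        prev = x[i]
    return total
```

With the uniform measure m(A) = |A|/n this reduces to the arithmetic mean, which is a convenient sanity check.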

2.2. Fuzzy Rule-Based Classification Systems

FRBCSs are widely used in data mining, since they allow the inclusion of all the available information in system modelling, i.e., expert knowledge, empirical measures or mathematical models. They have the advantage of generating an interpretable model and therefore allowing the knowledge representation to be understandable for the users of the system. The two main components of FRBCSs are:
• Knowledge Base: It is composed of both the Rule Base (RB) and the Data Base, where the rules and the membership functions are stored respectively.
• Fuzzy Reasoning Method: It is the mechanism used to classify examples using the information stored in the knowledge base.
Any classification problem consists of m training examples $x p = ( x p 1 , … , x p n , y p )$, $p = 1 , 2 , … , m$ from M classes where $x p i$ is the value of the ith variable ($i = 1 , 2 , … , n$) and $y p$ is the class label of the p-th training example.
We use fuzzy rules of the following form:
Rule $R_j$: If $x_1$ is $A_{j1}$ and $\ldots$ and $x_n$ is $A_{jn}$ then Class = $C_j$ with $RW_j$
where $R_j$ is the label of the jth rule, $x = (x_1, \ldots, x_n)$ is an n-dimensional example vector, $A_{ji}$ is an antecedent fuzzy set representing a linguistic term, $C_j$ is a class label, and $RW_j \in [0,1]$ is the rule weight [21].
In the remainder of this subsection, the FRM applied to determine the classes of new examples and the fuzzy rule learning algorithm used to generate the RB are described in detail.

2.2.1. Fuzzy Reasoning Method

Let $x p = ( x p 1 , … , x p n )$ be a new example to be classified, L the number of rules in the RB and M the number of classes of the problem. The steps of the FRM [22] are the following:
• Matching degree, that is, the strength of activation of the if-part for all rules in the RB with the example $x p$. To compute it we use a t-norm.
$μ A j ( x p ) = T ( μ A j 1 ( x p 1 ) , … , μ A j n ( x p n ) ) , j = 1 , … , L$
• Association degree. The association degree of the example $x p$ with the class of each rule in the RB.
$b_j^k = \mu_{A_j}(x_p) \cdot RW_j^k$, with $k = Class(R_j)$, $j = 1, \ldots, L$
• Example classification soundness degree for all classes. We use an aggregation function that combines the positive association degrees calculated in the previous step.
• Classification. We apply a decision function F over the example classification soundness degree for all classes. This function determines the class corresponding to the maximum value.
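The four steps above can be sketched as follows. The rule representation (a dict with hypothetical keys `antecedents`, `cls` and `rw`) and the product t-norm are our assumptions for illustration; the aggregation function of the third step is passed in as a parameter:

```python
def classify(xp, rules, memberships, aggregate):
    """Generic FRM: matching, association, soundness and classification.

    xp          : example, a list of n attribute values
    rules       : list of dicts {"antecedents": [label, ...], "cls": k, "rw": w}
    memberships : memberships[i][label] is the membership function of the
                  linguistic label used for attribute i
    aggregate   : function combining the positive association degrees of a class
    """
    classes = sorted({r["cls"] for r in rules})
    soundness = {}
    for k in classes:
        degrees = []
        for r in rules:
            if r["cls"] != k:
                continue
            # Step 1: matching degree (product t-norm over the antecedents)
            mu = 1.0
            for i, label in enumerate(r["antecedents"]):
                mu *= memberships[i][label](xp[i])
            # Step 2: association degree = matching degree times rule weight
            b = mu * r["rw"]
            if b > 0:
                degrees.append(b)
        # Step 3: soundness degree of class k (aggregation of local information)
        soundness[k] = aggregate(degrees) if degrees else 0.0
    # Step 4: decision function: class with maximum soundness degree
    return max(soundness, key=soundness.get)
```

Passing `max` as `aggregate` yields the FRM of the winning rule; replacing it with a Choquet integral yields the proposal of Section 3.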

2.2.2. Chi et al. Rule Generation Algorithm

Chi et al.'s fuzzy rule learning method [8] extends the Wang and Mendel algorithm [23] to classification problems. It is one of the most widely used learning algorithms in the specialized literature due to the simplicity of its fuzzy rule generation process.
To generate the fuzzy RB, this FRBCS design method determines the relationship between the variables of the problem and establishes an association between the space of the features and the space of the classes by means of the following steps:
• Establishment of the linguistic partitions. Once the domain of variation of each feature $A_i$ is determined, Ruspini's fuzzy partitions are computed; triangular membership functions are used in this paper.
• Generation of a fuzzy rule for each example $x_p = (x_{p1}, \ldots, x_{pn}, C_p)$ by applying the following process:
2.1. Compute the matching degree $\mu(x_p)$ of the example with all the fuzzy regions using a conjunction operator (usually modelled with the minimum or product t-norm).
2.2. Assign the example $x_p$ to the fuzzy region with the greatest matching degree.
2.3. Generate a rule for the example, whose antecedent is determined by the selected fuzzy region and whose consequent is the class label of the example.
2.4. Compute the rule weight. In this paper, we use the Penalized Certainty Factor (PCF) defined in [24] as:
$RW_j = PCF_j = \frac{\sum_{x_p \in Class\, C_j} \mu_{A_j}(x_p) - \sum_{x_p \notin Class\, C_j} \mu_{A_j}(x_p)}{\sum_{p=1}^{m} \mu_{A_j}(x_p)}$
where $μ A j ( x p )$ is the matching degree of the example $x p$ with the antecedent of the rule that is being generated.
We must remark that rules with the same antecedent can be generated during the learning process. If they have the same class in the consequent we just remove one of the duplicated rules, but if they have a different class only the rule with the highest weight is kept in the RB.
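The rule-weight computation of step 2.4 is easy to isolate. A minimal sketch of the Penalized Certainty Factor, assuming the matching degrees of every training example with the rule antecedent have already been computed:

```python
def pcf_weight(match, labels, cls):
    """Penalized Certainty Factor of a rule whose consequent class is `cls`.

    match  : matching degrees mu_Aj(x_p) of all m training examples with
             the rule antecedent
    labels : class label y_p of each training example
    """
    total = sum(match)
    if total == 0.0:
        return 0.0  # the rule matches no example
    same = sum(mu for mu, y in zip(match, labels) if y == cls)
    # matching degrees of examples from other classes are subtracted (the "penalty")
    return (same - (total - same)) / total
```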

2.3. Evolutionary Model

In this paper, we consider the CHC evolutionary model [25] to accomplish the learning of the fuzzy measure. CHC is a genetic algorithm (GA) that presents a good trade-off between exploration and exploitation, making it a good choice for problems with complex search spaces.
CHC uses a population-based selection approach to perform a suitable global search: the N parents and their corresponding offspring are combined, and the best N individuals form the next population. Instead of the well-known mutation operator, CHC relies on an incest prevention mechanism and a restarting process to maintain diversity in the population.
The incest prevention mechanism is only considered when applying the crossover operator: in our case, two parents are crossed only if half their Hamming distance is above a predetermined threshold, $Th$. Since we consider a real coding scheme, each gene is transformed using a Gray code (binary code) with a fixed number of bits per gene ($BITS_{GENE}$), determined by the system expert. The threshold value is initialized as:
$Th = (\#Genes \cdot BITS_{GENE}) / 4$
where $\#Genes$ stands for the total length of the chromosome. Following the original CHC scheme, $Th$ is decremented by one ($BITS_{GENE}$ in our case) when there are no new individuals in the next generation. The algorithm restarts when $Th$ falls below zero. The scheme of this model is depicted in Figure 1.
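The incest prevention test can be sketched as follows. This is our own illustration: each real gene is quantised to $BITS_{GENE}$ bits, Gray-encoded, and two parents may be crossed only when half their Hamming distance exceeds the current threshold:

```python
def to_gray(value, lo, hi, bits):
    """Quantise a real gene in [lo, hi] to `bits` bits and Gray-encode it."""
    levels = (1 << bits) - 1
    q = round((value - lo) / (hi - lo) * levels)
    return q ^ (q >> 1)  # binary-reflected Gray code

def may_cross(parent_a, parent_b, lo, hi, bits, threshold):
    """CHC incest prevention: cross only if half the Hamming distance
    between the Gray-coded chromosomes is above the threshold."""
    dist = sum(bin(to_gray(ga, lo, hi, bits) ^ to_gray(gb, lo, hi, bits)).count("1")
               for ga, gb in zip(parent_a, parent_b))
    return dist / 2 > threshold
```

The threshold starts at $(\#Genes \cdot BITS_{GENE}) / 4$ and is decremented by $BITS_{GENE}$ whenever a generation produces no new individuals.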
Figure 1. CHC scheme.

3. A Novel Fuzzy Reasoning Method Using the Choquet Integral

This section describes our new FRM, which uses the Choquet integral to aggregate the local information given by the rules in the RB, that is, the values of $b_j^k$ computed with Equation (3). Specifically, we propose to modify the third step of the general FRM introduced in Section 2.2.1 by applying Equation (7) instead of Equation (4):
$Y_k = C_{m_k}(b_1^k, \ldots, b_L^k), \quad k = 1, \ldots, M \qquad (7)$
where $Y_k$ denotes the example classification soundness degree for class k, $m_k$ is the fuzzy measure considered for the k-th class of the problem, M is the number of classes of the classification problem and L is the number of rules composing the RB.
In the remainder of this section, we first introduce the fuzzy measures considered in this paper, and then we describe a learning proposal that optimizes the fuzzy measure for each class of the problem.

3.1. Fuzzy Measures

According to Equation (7), we apply the Choquet integral to obtain the global information from the local information given by each rule of the system. A key point in the success of the Choquet integral is the definition of the fuzzy measure related to it. Let $N = \{1, \ldots, n\}$ and $A \subseteq N$; we consider the use of the following five fuzzy measures:
(1) Cardinality or uniform measure:
$m(A) = \frac{|A|}{n}$
(2) Dirac's measure:
$m(A) = 1$ if $i \in A$, and $m(A) = 0$ otherwise,
where $i \in N$ is selected beforehand. We must point out that the result of the Choquet integral with this fuzzy measure is the i-th smallest value of x, that is, the i-th order statistic.
For the next two measures we take an arbitrary vector of weights $(w_1, \ldots, w_n) \in [0,1]^n$ with $\sum_{i=1}^{n} w_i = 1$.
(3) Weighted mean. We assign the following values for the fuzzy measure: $m(\{1\}) = w_1, \ldots, m(\{n\}) = w_n$. For $|A| > 1$ the fuzzy measure is
$m(A) = \sum_{i \in A} m(\{i\})$
(4) Ordered Weighted Averaging (OWA). We assign the following values for the fuzzy measure: $m(\{i\}) = w_j$, with i being the j-th largest component to be aggregated, that is, we construct an OWA operator. For $|A| > 1$ the fuzzy measure is
$m(A) = \sum_{i \in A} m(\{i\})$
(5) Exponential cardinality:
$m(A) = \left( \frac{|A|}{n} \right)^q, \quad q > 0$
We must point out that all these fuzzy measures are additive (the exponential cardinality is additive only when $q = 1$) and the cardinality and exponential cardinality are also symmetric.
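For reference, the five measures can be written down directly. In this sketch the sets A are Python sets of 0-based indices, and the closed form of the exponential cardinality, $(|A|/n)^q$, is inferred from the expression $m_k(A) = (|A|/n)^{q_k}$ used in Section 3.2:

```python
def cardinality(A, n):
    """Uniform measure: m(A) = |A| / n."""
    return len(A) / n

def dirac(A, n, i):
    """Dirac's measure at a beforehand-selected index i."""
    return 1.0 if i in A else 0.0

def weighted_mean(A, n, w):
    """Additive measure built from singleton weights w (summing to 1)."""
    return sum(w[i] for i in A)

def owa(A, n, w, x):
    """m({i}) = w_j with i the j-th largest component of the input x."""
    ranking = sorted(range(n), key=lambda i: x[i], reverse=True)
    position = {i: j for j, i in enumerate(ranking)}
    return sum(w[position[i]] for i in A)

def exponential_cardinality(A, n, q):
    """Assumed closed form (|A| / n) ** q; additive only for q = 1."""
    return (len(A) / n) ** q
```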

3.2. Fuzzy Measure Learning Method

The definition of an appropriate fuzzy measure plays an essential role in the success of the Choquet integral. In the proposed FRM, the first option is to apply the Choquet integral with the same fuzzy measure for every class of the problem. However, the set of rules of each class can interact in a different way. This fact can be taken into account by using the exponential cardinality with a different value of the parameter q for each class. In this manner, a specific fuzzy measure is constructed for each class of the problem, that is, $m_k(A) = \left( \frac{|A|}{n} \right)^{q_k}$, with $k = 1, \ldots, M$. Consequently, we propose a learning method to compute the most appropriate fuzzy measure for each class of the problem, since it can increase the system's accuracy.
In order to carry out this optimization problem, we consider the use of the CHC evolutionary model [25] (see Section 2.3). In the remainder of this section, we describe the specific features of our evolutionary model.
• Coding scheme. We have a set of real parameters to optimize ($q_k$, with $k = 1, \ldots, M$), each of which we suggest varying in the range $[0.01, 100]$. However, we do not encode them directly in a chromosome; instead, we use chromosomes of the form
$C_{CHOQUET} = \{G_1, \ldots, G_M\}$
where $G_k \in [0.01, 1.99]$ with $k = 1, \ldots, M$. In order to compute their real values (in the range $[0.01, 100]$) we apply Equation (8).
We change the range because we need to give the same chance of producing offspring in the ranges $[0.01, 1]$ and $[1, 100]$ after applying the crossover operator. Given how the crossover operator works, if we encoded the parameters directly in the range $[0.01, 100]$ we would favour the generation of offspring in the range $[1, 100]$ and consequently reduce the probability of generating offspring in the range $[0.01, 1]$. For this reason, we adapt the range to avoid this undesirable situation.
• Initial Gene Pool. We include an individual having all genes with value 1. In this manner, we obtain at least the results provided by the cardinality measure.
• Chromosome Evaluation. We use the most common metric for classification, the accuracy rate, i.e., the percentage of correctly classified examples.
• Crossover Operator. The crossover operator is based on the concept of environments (the offspring are generated around their parents). These kinds of operators cooperate well with evolutionary models that force convergence by exerting pressure on the offspring (as CHC does). Figure 2 depicts the behaviour of these operators, which generate the offspring genes either around the genes of one parent, Parent Centric BLX (PCBLX), or within a wide zone determined by the genes of both parents (BLX-α). Specifically, we consider the PCBLX operator, which is based on BLX-α [26].
The PCBLX operator is described as follows. Let $X = (x_1, \ldots, x_n)$ and $Y = (y_1, \ldots, y_n)$, with $x_i, y_i \in [a_i, b_i] \subset \mathbb{R}$, $i = 1, \ldots, n$, be two real-coded chromosomes to be crossed. The PCBLX operator generates the following two offspring:
$O_1 = (o_{11}, \ldots, o_{1n})$, where $o_{1i}$ is a number chosen randomly (uniformly) from the interval $[l_i^1, u_i^1]$, with $l_i^1 = \max\{a_i, x_i - I_i\}$, $u_i^1 = \min\{b_i, x_i + I_i\}$ and $I_i = |x_i - y_i|$.
$O_2 = (o_{21}, \ldots, o_{2n})$, where $o_{2i}$ is a number chosen randomly (uniformly) from the interval $[l_i^2, u_i^2]$, with $l_i^2 = \max\{a_i, y_i - I_i\}$ and $u_i^2 = \min\{b_i, y_i + I_i\}$.
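A direct sketch of the operator as described above:

```python
import random

def pcblx(x, y, bounds):
    """Parent Centric BLX: each offspring gene is drawn uniformly from an
    interval centred on one parent's gene, with half-width |x_i - y_i|,
    clipped to the gene bounds [a_i, b_i]."""
    o1, o2 = [], []
    for xi, yi, (a, b) in zip(x, y, bounds):
        I = abs(xi - yi)
        o1.append(random.uniform(max(a, xi - I), min(b, xi + I)))
        o2.append(random.uniform(max(a, yi - I), min(b, yi + I)))
    return o1, o2
```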
• Restarting Approach. Since the algorithm does not apply mutation during the recombination phase, it uses a restarting approach to escape local optima. When the threshold value falls below zero, all the chromosomes are regenerated randomly to introduce new diversity into the search. Furthermore, the best solution found so far is included in the population to increase the convergence of the algorithm, as in an elitist scheme.
Figure 2. Scheme of the behaviour of the BLX and PCBLX operators.

4. Experimental Framework

In this section, we first present the real world classification data-sets selected for the experimental study. Next, we introduce the parameter set-up considered along this study. Finally, we introduce the statistical tests that are necessary to compare the results achieved throughout the experimental study.

4.1. Data-Sets

We have selected seventeen numerical data-sets from the KEEL data-set repository [16,17]. Table 1 summarizes the properties of the selected data-sets, showing for each one the number of examples (#Ex.), the number of attributes (#Atts.) and the number of classes (#Class.). We must point out that the magic, ring and twonorm data-sets have been stratified-sampled at 10% in order to reduce their size for training, and that examples with missing values have been removed, as in the wisconsin data-set.
A 5-fold cross-validation model was considered in order to carry out the different experiments. That is, we split the data-set into 5 random partitions of data, each one with 20% of the examples, and we use a combination of 4 of them (80%) to train the system and the remaining one to test it. This process is repeated five times by using a different partition to test the system each time. We consider the average result of the five partitions as the final classification rate of the algorithm. This procedure is a standard for testing the performance of classifiers [27,28].
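The procedure can be sketched in a few lines. Here `train_and_test` stands for any routine (hypothetical, not from the paper) that trains a classifier on the training partition and returns its accuracy on the test partition:

```python
import random

def five_fold_cv(examples, train_and_test, seed=0):
    """5-fold cross-validation: 5 random partitions of 20% each; train on
    4 of them (80%), test on the remaining one, and average the 5 accuracies."""
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    folds = [idx[i::5] for i in range(5)]  # 5 disjoint partitions
    accs = []
    for k in range(5):
        test = [examples[i] for i in folds[k]]
        train = [examples[i] for j in range(5) if j != k for i in folds[j]]
        accs.append(train_and_test(train, test))
    return sum(accs) / 5
```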
Table 1. Summary Description for the employed data-sets.
Id.  | Data-set   | #Ex. | #Atts. | #Class.
bal  | Balance    | 625  | 4      | 3
ban  | Banana     | 5300 | 2      | 2
eco  | Ecoli      | 336  | 7      | 8
gla  | Glass      | 214  | 9      | 6
iri  | Iris       | 150  | 4      | 3
led  | Led7digit  | 500  | 7      | 10
mag  | Magic      | 1902 | 10     | 2
new  | Newthyroid | 215  | 5      | 3
pho  | Phoneme    | 5404 | 5      | 2
pim  | Pima       | 768  | 8      | 2
rin  | Ring       | 740  | 20     | 2
seg  | Segment    | 2310 | 19     | 7
tit  | Titanic    | 2201 | 3      | 2
two  | Twonorm    | 740  | 20     | 2
veh  | Vehicle    | 846  | 18     | 4
win  | Wine       | 178  | 13     | 3
wis  | Wisconsin  | 683  | 11     | 2

4.2. Configuration of the Proposals and Notation

We will apply the following configuration for the Chi et al. rule generation algorithm:
• Conjunction operator: Product t-norm.
• Rule weight: Penalized Certainty Factor.
• Number of linguistic labels: 3.
For the new proposal using Dirac's fuzzy measure, the value selected for i is the one associated with the median, that is, if the number of elements is odd we take $i = \frac{n+1}{2}$, whereas if it is even we take $i = \frac{n}{2} + 1$. We must stress that using Dirac's measure with $i = n$ yields the same results as the maximum (Max.). In addition, $i = 1$ would yield the results of the minimum as aggregation function [29], but we do not include them since the achieved performance is poor.
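The median position described above (1-based, following the text) is simply:

```python
def dirac_index(n):
    """Position i used for Dirac's measure: the median position,
    (n + 1) / 2 for odd n and n / 2 + 1 for even n (1-based)."""
    return (n + 1) // 2 if n % 2 == 1 else n // 2 + 1
```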
Regarding the genetic process, we have used the values suggested in [30], which are:
• Population Size: 50 individuals.
• Number of evaluations: 20,000.
• Bits per gene for the Gray codification (for incest prevention): 30 bits.
Finally, for the sake of clarity, Table 2 shows the names given to the different approaches considered along the experimental study.
Table 2. Names given to the seven approaches used in the paper.
Name    | Aggregation function | Fuzzy Measure
Max.    | Maximum              | –
AC      | Sum (not an aggregation function as introduced in Definition 3, since it does not provide a result in $[0,1]$) | –
Card.   | Choquet integral     | Cardinality
Dirac   | Choquet integral     | Dirac's measure
WMean   | Choquet integral     | Weighted mean
OWA     | Choquet integral     | OWA
Card_GA | Choquet integral     | Exponential cardinality

4.3. Statistical Tests for Performance Comparison

In this paper, we use some hypothesis validation techniques in order to give statistical support to the analysis of the results [31,32]. We will use non-parametric tests because the initial conditions that guarantee the reliability of the parametric tests cannot be fulfilled, which implies that the statistical analysis loses credibility with these parametric tests [18].
Specifically, we use the Friedman aligned ranks test [33] to detect statistical differences among a group of results and the Holm post-hoc test [34] to find the algorithms that reject the equality hypothesis with respect to a selected control method.
The post-hoc procedure allows us to know whether a hypothesis of comparison could be rejected at a specified level of significance α. Furthermore, we compute the adjusted p-value (APV) in order to take into account the fact that multiple tests are conducted. In this manner, we can directly compare the APV with respect to the level of significance α in order to be able to reject the null hypothesis.
Furthermore, we consider the method of aligned ranks of the algorithms in order to show graphically how good a method is with respect to its partners. The first step to compute this ranking is to obtain the average performance of the algorithms on each data-set. Next, we subtract this average from the accuracy of each algorithm on that data-set. Then, we rank all these differences jointly in descending order and, finally, we average the rankings obtained by each algorithm. In this manner, the algorithm that achieves the lowest average ranking is the best one.
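As a sketch, the aligned-ranking procedure reads as follows (ties are ranked by order of appearance here, whereas the actual test assigns average ranks to tied values):

```python
def aligned_ranks(acc):
    """Average aligned rank of each algorithm (lower is better).

    acc[d][a] : accuracy of algorithm a on data-set d.
    """
    n_alg, n_data = len(acc[0]), len(acc)
    diffs = []  # (accuracy - data-set mean, algorithm) over all cells
    for row in acc:
        mean = sum(row) / n_alg
        diffs.extend((v - mean, a) for a, v in enumerate(row))
    diffs.sort(key=lambda t: t[0], reverse=True)  # rank 1 = largest difference
    totals = [0.0] * n_alg
    for rank, (_, a) in enumerate(diffs, start=1):
        totals[a] += rank
    return [t / n_data for t in totals]
```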
These tests are suggested in the studies presented in [18,31,35], where it is shown that their use in the field of machine learning is highly recommended.

5. Experimental Results

Table 3 shows the classification accuracy, along with the standard deviation, obtained both in training and in testing by the different approaches used in the experimental study. From these results it can be observed that the behaviour of the new proposal with the standard fuzzy measures (Card., Dirac, WMean and OWA) is similar across measures, except for the proposal associated with Dirac's measure, which provides worse results.
Regarding the behaviour of these proposals with respect to the classical FRM of the winning rule, which uses the maximum as aggregation function (Max.), although they provide a worse mean performance, the lack of accuracy is mainly due to three data-sets, namely balance, iris and twonorm, the latter being especially bad for these proposals. However, when the fuzzy measure is appropriate for the specific problem, like the one that has been genetically learned (Card_GA), both the increase in the system's performance and the robustness of the method can be noted, since it provides the best result in eleven out of the seventeen data-sets of the study.
Finally, we also compare our new FRM with the classical additive combination FRM (AC), which aggregates the positive association degrees by summing them and therefore, unlike the Choquet integral, does not provide a result between the minimum and the maximum of the aggregated values. In this comparison, the Card_GA proposal obtains an average mean enhancement of 2.04%, improving on the performance of the AC FRM in more than half of the data-sets.
Table 3. Results in training (Tr.) and testing (Tst.) along with their standard deviations achieved by the seven approaches considered in this paper.
Data-set | Max. Tr. | Max. Tst. | AC Tr. | AC Tst. | Card. Tr. | Card. Tst. | Dirac Tr. | Dirac Tst. | WMean Tr. | WMean Tst. | OWA Tr. | OWA Tst. | Card_GA Tr. | Card_GA Tst.
Bal | 91.52 ± 0.23 | 90.56 ± 1.04 | 91.52 ± 0.23 | 90.56 ± 1.19 | 89.96 ± 1.11 | 86.72 ± 2.57 | 89.48 ± 0.41 | 87.04 ± 1.04 | 90.20 ± 0.93 | 86.72 ± 1.93 | 90.24 ± 1.07 | 86.88 ± 2.37 | 91.52 ± 0.23 | 89.92 ± 0.91
Ban | 60.36 ± 0.39 | 60.32 ± 1.33 | 59.83 ± 0.28 | 60.02 ± 1.30 | 60.68 ± 0.60 | 60.47 ± 1.62 | 62.00 ± 1.18 | 61.77 ± 0.79 | 60.34 ± 0.63 | 60.08 ± 1.44 | 60.55 ± 0.71 | 60.36 ± 1.34 | 63.11 ± 0.90 | 62.36 ± 1.28
Eco | 76.27 ± 1.34 | 71.76 ± 6.69 | 78.42 ± 0.86 | 73.53 ± 5.40 | 75.97 ± 2.03 | 69.71 ± 5.94 | 73.36 ± 2.57 | 67.94 ± 6.01 | 75.60 ± 1.45 | 70.59 ± 6.33 | 75.45 ± 1.73 | 70.88 ± 6.36 | 83.26 ± 2.12 | 75.00 ± 9.24
Gla | 66.01 ± 2.58 | 57.67 ± 1.04 | 66.47 ± 1.84 | 59.07 ± 2.65 | 62.61 ± 3.39 | 58.60 ± 3.82 | 62.15 ± 4.82 | 57.21 ± 2.08 | 61.09 ± 3.97 | 57.21 ± 4.53 | 63.31 ± 2.86 | 59.53 ± 2.65 | 68.23 ± 1.36 | 59.53 ± 3.12
Iri | 93.00 ± 0.75 | 92.67 ± 1.49 | 95.33 ± 1.51 | 94.67 ± 4.47 | 88.50 ± 1.09 | 87.33 ± 1.49 | 84.67 ± 1.73 | 84.00 ± 4.35 | 88.67 ± 1.92 | 87.33 ± 2.79 | 87.00 ± 1.26 | 86.67 ± 2.36 | 96.67 ± 0.83 | 94.67 ± 2.98
Led | 75.90 ± 2.96 | 64.20 ± 5.63 | 75.90 ± 2.96 | 64.20 ± 5.63 | 75.90 ± 2.96 | 64.20 ± 5.63 | 75.90 ± 2.96 | 64.20 ± 5.63 | 75.90 ± 2.96 | 64.20 ± 5.63 | 75.90 ± 2.96 | 64.20 ± 5.63 | 75.90 ± 2.96 | 64.20 ± 5.63
Mag | 76.00 ± 0.61 | 74.75 ± 1.85 | 76.49 ± 0.56 | 75.38 ± 1.51 | 74.11 ± 0.22 | 72.65 ± 1.50 | 71.65 ± 0.43 | 70.81 ± 1.04 | 74.01 ± 0.31 | 72.97 ± 1.65 | 74.29 ± 0.32 | 73.07 ± 1.40 | 79.31 ± 0.93 | 77.80 ± 3.62
New | 86.40 ± 1.06 | 85.12 ± 3.53 | 87.44 ± 1.40 | 86.51 ± 4.16 | 87.79 ± 1.23 | 86.05 ± 3.68 | 89.19 ± 0.88 | 86.98 ± 4.22 | 87.91 ± 1.61 | 87.44 ± 3.53 | 88.02 ± 1.52 | 86.51 ± 3.45 | 94.42 ± 1.27 | 92.56 ± 4.47
Pho | 71.91 ± 0.11 | 71.91 ± 0.37 | 72.82 ± 0.09 | 72.62 ± 0.64 | 71.54 ± 0.18 | 71.23 ± 0.80 | 72.20 ± 0.36 | 72.16 ± 0.71 | 71.79 ± 0.27 | 71.56 ± 0.60 | 72.03 ± 0.26 | 71.93 ± 1.01 | 76.16 ± 0.25 | 75.39 ± 1.28
Pim | 75.46 ± 0.70 | 72.99 ± 0.98 | 74.64 ± 0.50 | 73.25 ± 1.55 | 75.75 ± 0.40 | 73.77 ± 1.75 | 75.36 ± 0.49 | 73.38 ± 1.66 | 75.98 ± 0.48 | 74.55 ± 2.53 | 75.46 ± 0.28 | 74.03 ± 2.43 | 78.39 ± 0.71 | 75.06 ± 1.18
Rin | 59.39 ± 0.44 | 52.70 ± 0.83 | 57.70 ± 0.44 | 52.03 ± 0.48 | 53.75 ± 0.44 | 51.08 ± 0.37 | 51.11 ± 0.28 | 50.68 ± 0.48 | 53.99 ± 0.64 | 51.08 ± 0.37 | 53.72 ± 0.41 | 51.22 ± 0.30 | 81.35 ± 1.69 | 77.70 ± 1.85
Seg | 86.01 ± 1.31 | 85.02 ± 2.26 | 86.03 ± 0.95 | 84.81 ± 1.84 | 84.40 ± 0.92 | 83.51 ± 1.93 | 78.40 ± 1.93 | 77.92 ± 3.93 | 84.43 ± 1.22 | 83.38 ± 2.16 | 83.99 ± 0.91 | 82.94 ± 2.62 | 87.18 ± 0.74 | 85.06 ± 2.23
Tit | 78.33 ± 0.41 | 78.32 ± 1.71 | 78.33 ± 0.41 | 78.32 ± 1.71 | 78.33 ± 0.41 | 78.32 ± 1.71 | 78.33 ± 0.41 | 78.32 ± 1.71 | 78.33 ± 0.41 | 78.32 ± 1.71 | 78.33 ± 0.41 | 78.32 ± 1.71 | 78.33 ± 0.41 | 78.32 ± 1.71
Two | 87.13 ± 0.77 | 83.78 ± 1.72 | 94.97 ± 0.51 | 93.24 ± 1.85 | 73.34 ± 1.23 | 70.41 ± 4.39 | 63.68 ± 1.44 | 62.57 ± 4.91 | 73.01 ± 1.33 | 69.19 ± 4.29 | 74.02 ± 2.23 | 70.68 ± 3.59 | 91.45 ± 0.44 | 87.57 ± 1.83
Veh | 66.11 ± 0.80 | 61.41 ± 3.66 | 64.16 ± 0.70 | 61.29 ± 3.26 | 64.21 ± 0.56 | 60.94 ± 4.29 | 57.54 ± 1.25 | 53.41 ± 3.29 | 64.18 ± 0.85 | 60.12 ± 3.56 | 63.95 ± 0.58 | 59.65 ± 3.26 | 67.85 ± 0.48 | 60.59 ± 3.53
Win | 98.74 ± 0.58 | 92.78 ± 5.41 | 98.74 ± 0.59 | 93.33 ± 5.76 | 96.21 ± 0.62 | 91.11 ± 5.69 | 89.61 ± 1.32 | 85.56 ± 4.12 | 95.93 ± 0.76 | 91.11 ± 5.69 | 96.21 ± 1.07 | 90.00 ± 4.65 | 99.86 ± 0.31 | 92.22 ± 4.56
Wis | 98.17 ± 0.29 | 95.62 ± 1.37 | 97.99 ± 0.45 | 95.77 ± 1.31 | 98.21 ± 0.27 | 96.06 ± 1.42 | 98.13 ± 0.24 | 95.91 ± 1.60 | 98.24 ± 0.28 | 95.77 ± 1.66 | 98.21 ± 0.27 | 96.06 ± 1.42 | 98.54 ± 0.13 | 95.33 ± 1.42
Mean | 79.22 ± 0.90 | 75.98 ± 2.41 | 79.81 ± 0.84 | 76.98 ± 2.63 | 77.13 ± 1.04 | 74.24 ± 2.86 | 74.87 ± 1.33 | 72.34 ± 2.80 | 77.03 ± 1.18 | 74.21 ± 2.96 | 77.10 ± 1.11 | 74.29 ± 2.74 | 83.03 ± 0.93 | 79.02 ± 2.99
These facts are confirmed in Figure 3, where it is clearly shown that Card_GA is the best-ranking method. The p-value obtained with the Friedman aligned ranks test is 0.02, which confirms the existence of statistical differences among the seven approaches. For this reason, we perform the Holm post-hoc test to check whether the best-ranking method (Card_GA) statistically outperforms the remaining methods. From the results in Table 4, the goodness of the proposal using the Choquet integral with a suitable fuzzy measure is clear, since it outperforms both the proposals using a standard fuzzy measure and the classical FRM of the winning rule. Furthermore, the obtained APV shows that Card_GA allows the performance of the additive combination FRM to be clearly enhanced. Therefore, it can be concluded that the best approach is the one that uses the Choquet integral with a genetically learned fuzzy measure.
Figure 3. Rankings of the seven approaches considered in the study.
Table 4. Holm test to compare Card_GA with respect to the different approaches.
| i | Algorithm | APV |
|---|-----------|-----|
| 1 | Dirac | 5.52E−7 |
| 2 | OWA | 1.14E−4 |
| 3 | Card. | 1.56E−4 |
| 4 | WMean | 1.56E−4 |
| 5 | Max. | 0.06 |
| 6 | AC | 0.13 |
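
The adjusted p-values (APVs) in Table 4 come from Holm's step-down procedure [34]: the i-th smallest raw p-value is multiplied by the number of hypotheses not yet rejected, and monotonicity is enforced. As a hedged sketch of that computation (the raw p-values below are illustrative placeholders, not the ones from this study):

```python
def holm_apv(p_values):
    """Holm step-down adjusted p-values (APVs).

    The i-th smallest raw p-value (0-indexed rank) is multiplied by
    (k - i), where k is the number of hypotheses; the results are then
    capped at 1 and made non-decreasing along the sorted order.
    """
    k = len(p_values)
    order = sorted(range(k), key=lambda i: p_values[i])
    apv = [0.0] * k
    running_max = 0.0
    for rank, i in enumerate(order):
        adjusted = min(1.0, (k - rank) * p_values[i])
        running_max = max(running_max, adjusted)  # enforce monotonicity
        apv[i] = running_max
    return apv

raw = [0.001, 0.01, 0.03, 0.04]  # illustrative raw p-values
print(holm_apv(raw))
```

Note that the last APV is pulled up to the value of its predecessor by the monotonicity step, which is why two methods in a Holm table can share the same APV.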

6. Conclusions

In this paper we have proposed a novel FRM in which the Choquet integral is used to aggregate the local information provided by the rules. The Choquet integral is defined with respect to a fuzzy measure, which allows us to model the relationship among the rules. For this reason, we have applied several standard fuzzy measures in order to take this interaction into account. However, the definition of an appropriate fuzzy measure is a complex problem and, consequently, we have proposed a genetic learning method that computes the most suitable fuzzy measure for each class of the problem.
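To make the aggregation step concrete, the discrete Choquet integral of the rule activations with respect to a fuzzy measure can be sketched as follows. This is an illustrative implementation, not the authors' exact code, and the cardinality-based measure m(A) = (|A|/n)^q is assumed here only as an example of a standard fuzzy measure:

```python
def choquet_integral(values, measure):
    """Discrete Choquet integral of `values` w.r.t. a fuzzy measure.

    `measure` maps a frozenset of indices to [0, 1], with
    measure(all indices) = 1 and monotone w.r.t. set inclusion.
    """
    n = len(values)
    # Process indices in non-decreasing order of their values.
    order = sorted(range(n), key=lambda i: values[i])
    result = 0.0
    prev = 0.0
    for pos, i in enumerate(order):
        # Coalition of indices whose value is at least values[i].
        coalition = frozenset(order[pos:])
        result += (values[i] - prev) * measure(coalition)
        prev = values[i]
    return result

def cardinality_measure(n, q):
    """Cardinality-based fuzzy measure m(A) = (|A| / n) ** q."""
    return lambda subset: (len(subset) / n) ** q

activations = [0.2, 0.7, 0.4]           # matching degrees of three rules
m = cardinality_measure(len(activations), q=1.0)
print(choquet_integral(activations, m))  # q = 1 reduces to the arithmetic mean
```

With q = 1 the measure is additive and symmetric, so the integral collapses to the arithmetic mean; as q approaches 0 it approaches the maximum, which recovers winning-rule-like behaviour. Tuning the measure between these extremes is what the genetic learning exploits.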
In the experimental study, we have used Chi et al.'s algorithm to generate the fuzzy rules. In order to test the goodness of our method, we have used a wide benchmark of numerical data-sets to compare the behaviour of the classical FRMs of the winning rule and the additive combination with respect to our new approach, using both several well-known fuzzy measures and the genetically learned fuzzy measure. From this comparison, it can be concluded that our new approach is advisable for facing classification problems when the fuzzy measure is learnt to suit the features of each specific problem, since it statistically outperforms the FRM of the winning rule and clearly enhances the performance of the additive combination FRM.

Acknowledgements

This work was partially supported by the Spanish Ministry of Science and Technology under projects TIN2010-15055 and TIN2011-29520.

References

1. Duda, R.O.; Hart, P.E.; Stork, D.G. Pattern Classification, 2nd ed.; John Wiley: Hoboken, NJ, USA, 2001. [Google Scholar]
2. Alpaydin, E. Introduction to Machine Learning, 2nd ed.; MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
3. Ishibuchi, H.; Nakashima, T.; Nii, M. Classification and Modeling with Linguistic Information Granules: Advanced Approaches to Linguistic Data Mining; Springer-Verlag: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
4. Samantaray, S.R.; El-Arroudi, K.; Joós, G.; Kamwa, I. A fuzzy rule-based approach for islanding detection in distributed generation. IEEE Trans. Power Deliv. 2010, 25, 1427–1433. [Google Scholar] [CrossRef]
5. Kuncheva, L.I. On the equivalence between fuzzy and statistical classifiers. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 1996, 4, 245–253. [Google Scholar] [CrossRef]
6. Mandal, D.; Murthy, C.; Pal, S. Formulation of a multivalued recognition system. IEEE Trans. Syst. Man Cybern. 1992, 22, 607–620. [Google Scholar] [CrossRef]
7. Ishibuchi, H.; Nozaki, K.; Tanaka, H. Distributed representation of fuzzy rules and its application to pattern classification. Fuzzy Sets Syst. 1992, 52, 21–32. [Google Scholar] [CrossRef]
8. Chi, Z.; Yan, H.; Pham, T. Fuzzy Algorithms with Applications to Image Processing and Pattern Recognition; World Scientific: Singapore, 1996. [Google Scholar]
9. Ray, K.S.; Ghoshal, J. Approximate reasoning approach to pattern recognition. Fuzzy Sets Syst. 1996, 77, 125–150. [Google Scholar] [CrossRef]
10. Beliakov, G.; Pradera, A.; Calvo, T. Aggregation Functions: A Guide for Practitioners; Studies in Fuzziness and Soft Computing; Springer: Berlin/Heidelberg, Germany, 2007; pp. 1–37. [Google Scholar]
11. Calvo, T.; Kolesarova, A.; Komornikova, M.; Mesiar, R. Aggregation operators: Properties, classes and construction methods. In Aggregation Operators: New Trends and Applications; Physica-Verlag: Heidelberg, Germany, 2002; pp. 3–104. [Google Scholar]
12. Choquet, G. Theory of capacities. Ann. Inst. Fourier 1953, 5, 131–295. [Google Scholar] [CrossRef]
13. Klement, E.P.; Mesiar, R. Discrete integrals and axiomatically defined functionals. Axioms 2012, 1, 9–20. [Google Scholar] [CrossRef]
14. Sugeno, M. Theory of Fuzzy Integrals and Its Applications. Ph.D. Thesis, Tokyo Institute of Technology, Tokyo, Japan, 1974. [Google Scholar]
15. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–72. [Google Scholar] [CrossRef]
16. Alcalá-Fdez, J.; Sánchez, L.; García, S.; del Jesus, M.J.; Ventura, S.; Garrell, J.; Otero, J.; Romero, C.; Bacardit, J.; Rivas, V.; et al. KEEL: A software tool to assess evolutionary algorithms for data mining problems. Soft Comput. 2009, 13, 307–318. [Google Scholar] [CrossRef]
17. Alcalá-Fdez, J.; Fernández, A.; Luengo, J.; Derrac, J.; García, S.; Sánchez, L.; Herrera, F. KEEL data-mining software tool: Data set repository, integration of algorithms and experimental analysis framework. J. Mult.-Valued Logic Soft Comput. 2011, 17, 255–287. [Google Scholar]
18. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30. [Google Scholar]
19. García, S.; Fernández, A.; Luengo, J.; Herrera, F. Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 2010, 180, 2044–2064. [Google Scholar] [CrossRef]
20. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
21. Ishibuchi, H.; Nakashima, T. Effect of rule weights in fuzzy rule-based classification systems. IEEE Trans. Fuzzy Syst. 2001, 9, 506–515. [Google Scholar] [CrossRef]
22. Cordón, O.; del Jesus, M.J.; Herrera, F. A proposal on reasoning methods in fuzzy rule-based classification systems. Int. J. Approx. Reason. 1999, 20, 21–45. [Google Scholar] [CrossRef]
23. Wang, L.X.; Mendel, J.M. Generating fuzzy rules by learning from examples. IEEE Trans. Syst. Man Cybern. 1992, 22, 1414–1427. [Google Scholar] [CrossRef]
24. Ishibuchi, H.; Yamamoto, T. Rule weight specification in fuzzy rule-based classification systems. IEEE Trans. Fuzzy Syst. 2005, 13, 428–435. [Google Scholar] [CrossRef]
25. Eshelman, L. The CHC Adaptive Search Algorithm: How to Have Safe Search When Engaging in Nontraditional Genetic Recombination. In Foundations of Genetic Algorithms; Morgan Kaufman: San Francisco, CA, USA, 1991; pp. 265–283. [Google Scholar]
26. Herrera, F.; Lozano, M.; Sánchez, A.M. A taxonomy for the crossover operator for real-coded genetic algorithms: An experimental study. Int. J. Intell. Syst. 2003, 18, 309–338. [Google Scholar] [CrossRef]
27. Galar, M.; Fernandez, A.; Barrenechea, E.; Bustince, H.; Herrera, F. A review on ensembles for the class imbalance problem: Bagging-, boosting-, and hybrid-based approaches. IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 2012, 42, 463–484. [Google Scholar] [CrossRef]
28. Galar, M.; Fernández, A.; Barrenechea, E.; Bustince, H.; Herrera, F. An overview of ensemble methods for binary classifiers in multi-class problems: Experimental study on one-vs-one and one-vs-all schemes. Pattern Recognit. 2011, 44, 1761–1776. [Google Scholar] [CrossRef]
29. Bardossy, A.; Duckstein, L. Fuzzy Rule-Based Modeling with Applications to Geophysical, Biological, and Engineering Systems; CRC Press: Boca Raton, FL, USA, 1995. [Google Scholar]
30. Sanz, J.; Fernandez, A.; Bustince, H.; Herrera, F. Improving the performance of fuzzy rule-based classification systems with interval-valued fuzzy sets and genetic amplitude tuning. Inf. Sci. 2010, 180, 3674–3685. [Google Scholar] [CrossRef]
31. García, S.; Fernández, A.; Luengo, J.; Herrera, F. A study of statistical techniques and performance measures for genetics-based machine learning: Accuracy and interpretability. Soft Comput. 2009, 13, 959–977. [Google Scholar] [CrossRef]
32. Sheskin, D. Handbook of Parametric and Nonparametric Statistical Procedures, 2nd ed.; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
33. Hodges, J.L.; Lehmann, E.L. Rank methods for combination of independent experiments in analysis of variance. Ann. Math. Stat. 1962, 33, 482–497. [Google Scholar] [CrossRef]
34. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979, 6, 65–70. [Google Scholar]
35. García, S.; Herrera, F. An extension on “statistical comparisons of classifiers over multiple data sets” for all pairwise comparisons. J. Mach. Learn. Res. 2008, 9, 2677–2694. [Google Scholar]
