Article

Efficient Heuristics for Structure Learning of k-Dependence Bayesian Classifier

Yang Liu, Limin Wang and Minghui Sun
1 Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
2 College of Computer Science and Technology, Jilin University, Changchun 130012, China
* Author to whom correspondence should be addressed.
Entropy 2018, 20(12), 897; https://doi.org/10.3390/e20120897
Submission received: 18 October 2018 / Revised: 13 November 2018 / Accepted: 20 November 2018 / Published: 22 November 2018
(This article belongs to the Special Issue Bayesian Inference and Information Theory)

Abstract

The rapid growth in data makes the quest for highly scalable learners a popular one. To achieve a trade-off between structure complexity and classification accuracy, the k-dependence Bayesian classifier (KDB) can represent different numbers of interdependencies for different data sizes. In this paper, we propose two methods to improve the classification performance of KDB. First, we use minimal-redundancy-maximal-relevance analysis, which sorts the predictive features to identify redundant ones. Then, we propose an improved discriminative model selection that selects an optimal sub-model by removing redundant features and arcs from the Bayesian network. Experimental results on 40 UCI datasets demonstrate that these two techniques are complementary and that the proposed algorithm achieves competitive classification performance and lower classification time than other state-of-the-art Bayesian network classifiers, such as tree-augmented naive Bayes and averaged one-dependence estimators.

1. Introduction

In machine learning, classification is one of the most important tasks: it predicts unknown class labels from known evidence or labeled training samples. Bayesian network classifiers (BNCs) [1,2] provide a classical way to make classification decisions within a probabilistic framework. In general, a BNC consists of two parts, $B = \langle G, \Theta \rangle$. The network structure $G$ is a directed acyclic graph (DAG). Nodes in $G$ represent stochastic variables or features; an arc $X_i \rightarrow X_j$ denotes a probabilistic dependency between the two features, and $X_i$ is one of the immediate parent nodes of $X_j$, i.e., $X_i \in Pa(X_j)$. The parameter set $\Theta$ quantifies these dependencies. Let each instance $\mathbf{x}$ be characterized by $n$ values $\{x_1, \ldots, x_n\}$ for features $\{X_1, \ldots, X_n\}$, and let the class label $c \in \{c_1, \ldots, c_m\}$ be the value of the class variable $C$. $\Theta$ contains a conditional probability table $\theta_{x_i \mid Pa(x_i)} = p_B(x_i \mid Pa(x_i))$ for each feature.
According to Bayes' theorem [3], a BNC makes its classification decision as follows:
$$\arg\max_{C} p(c \mid \mathbf{x}) = \arg\max_{C} \frac{p(\mathbf{x}, c)}{p(\mathbf{x})} \propto \arg\max_{C} p(\mathbf{x}, c) \qquad (1)$$
According to the chain rule of joint probability distribution [1], the joint probability $p(\mathbf{x}, c)$ can be calculated as follows:
$$p(\mathbf{x}, c) = p(c)\,p(x_1 \mid c)\,p(x_2 \mid x_1, c)\cdots p(x_n \mid x_1, x_2, \ldots, x_{n-1}, c) = p(c)\prod_{i=1}^{n} p(x_i \mid Pa(x_i), c). \qquad (2)$$
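To make the decision rule concrete, the following minimal Python sketch (illustrative only; the function name, data structures and CPT layout are our own assumptions, not the authors' implementation) scores each class by the factored joint probability of Equation (2) in log space and returns the maximizer.

```python
import math

def predict(instance, classes, parents, cpt, prior):
    """Return the class maximizing the factored joint probability p(x, c).

    instance : dict mapping feature name -> observed value
    classes  : list of class labels
    parents  : dict mapping feature name -> list of parent feature names
    cpt      : dict mapping (feature, value, parent_values, c) -> p(x_i | Pa(x_i), c)
    prior    : dict mapping c -> p(c)
    """
    best_c, best_score = None, -math.inf
    for c in classes:
        # work in log space to avoid numerical underflow with many features
        score = math.log(prior[c])
        for feature, value in instance.items():
            pa_vals = tuple(instance[p] for p in parents[feature])
            score += math.log(cpt[(feature, value, pa_vals, c)])
        if score > best_score:
            best_c, best_score = c, score
    return best_c
```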
In this paper, we focus on restricted BNCs, which require that the class variable C be a parent of every feature and that no feature be a parent of C.
The k-dependence Bayesian classifier (KDB) is one of the best-known restricted BNCs [4]. To achieve a trade-off between structure complexity and classification accuracy, KDB can represent different numbers of interdependencies for different data sizes. During learning, KDB first uses the mutual information between each feature and the class variable to rank and sort all features, giving priority to features that are highly relevant to the class. Feature $X_i$ may be a possible parent of $X_j$ only if $X_i$ ranks before $X_j$, not the other way around. Conditional mutual information between features is then used to measure and select significant conditional dependencies. The dependencies between features and class, and those between different features, are thus considered in different learning phases. Clearly, an independent feature with a high mutual information value may achieve a high rank while demonstrating only weak conditional dependencies. To address this issue, Peng et al. [5] proposed a first-order incremental feature selection method based on the minimal-redundancy-maximal-relevance (mRMR) criterion, which maximizes the relevance between features and class while minimizing the redundancy between features. Its effectiveness has not yet been demonstrated in the context of KDB.
The structure complexity increases exponentially as the number of features increases. Features that rank at the end of the order are the least relevant to classification and may be disregarded. Regular KDB does not consider the negative effect of redundant features, which may bias the classification results. Many researchers have recognized that a heuristic wrapper approach for deleting redundant features helps minimize zero-one loss on the training samples [6,7,8]. Martínez et al. [9] proposed discriminative model selection to select an optimal KDB sub-model containing only the necessary features. The resulting algorithm not only has the competitive classification performance of generative learning, but also the expressive power of discriminative learning. At each iteration of model selection, any feature $X_i$ in the order must have $k$ parent features if $i > k$, as KDB requires. However, the dependency between a feature $X_i$ at the end of the order and another feature $X_j$ ($1 \le j \le n$, $i \ne j$) may be very weak, in which case the two features can be assumed to be independent. That is, the dependency between $X_i$ and $X_j$ may be redundant.
In this paper, we investigate the feasibility of applying discriminative model selection to remove redundant features and dependencies, and the interoperability of mRMR analysis and discriminative model selection. Section 2 reviews state-of-the-art restricted BNCs, including naive Bayes (NB), tree-augmented naive Bayes (TAN) and especially KDB. In Section 3 we present the theoretical justification of our proposed algorithm, mRMR-based KDB with discriminative model selection (MMKDB). Section 4 presents a detailed analysis of the experimental results. Finally, we present conclusions in Section 5.

2. Restricted Bayesian Network Classifiers

Learning a BNC can be separated into two subtasks: structure learning, which identifies the structure of the network, and parameter learning, which estimates the probability distributions for a given network structure. In the following discussion, we review some state-of-the-art BNCs from the perspectives of structure learning and parameter learning.
NB is the simplest BNC [10,11], since the features are assumed to be conditionally independent given the class variable. The joint probability $p(\mathbf{x}, c)$ is then:
$$p(\mathbf{x}, c) = p(c)\prod_{i=1}^{n} p(x_i \mid c). \qquad (3)$$
Note that, for NB, parameter learning only involves estimating the prior probability $p(c)$ and the conditional probabilities $p(x_i \mid c)$; structure learning is unnecessary because NB has the fixed structure shown in Figure 1. However, features may be interrelated in practice, and many researchers have therefore explored ways to relax the conditional independence assumption of NB [12,13,14]. It is worth mentioning that Webb et al. [15] presented averaged one-dependence estimators (AODE), which weaken the feature independence assumption by averaging over a constrained class of classifiers, each of which makes all other features depend on a single common feature and the class variable.
TAN [1] is an extension of NB. It uses a variant of the Chow-Liu algorithm [16] to construct the Bayesian network, and it utilizes conditional mutual information
$$I(X_i; X_j \mid C) = \sum_{x_i \in X_i}\sum_{x_j \in X_j}\sum_{c \in C} p(x_i, x_j, c)\log_2\frac{p(x_i, x_j \mid c)}{p(x_i \mid c)\,p(x_j \mid c)} \qquad (4)$$
to find a maximum weighted spanning tree. Additional arcs between features are allowed, i.e., dependencies between features can be captured. Each feature in the network has at most one other feature as a parent, except a single feature (the root of the tree), which has only the class variable as its parent. TAN relaxes part of the conditional independence assumption of NB and thus improves its prediction accuracy, at the cost of increased structural complexity. The joint probability of TAN is calculated as:
$$p(\mathbf{x}, c) = p(c)\prod_{i=1}^{n} p(x_i \mid x_j, c), \qquad (5)$$
where $X_j$ is the parent of $X_i$ in the tree structure.
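Both TAN and KDB rely on estimates of $I(X_i; X_j \mid C)$ from data. The following Python sketch shows one straightforward way to compute Equation (4) from empirical counts; it is illustrative only and not the authors' implementation.

```python
from collections import Counter
from math import log2

def conditional_mutual_information(data, i, j, labels):
    """Estimate I(X_i; X_j | C) from a list of discrete instances and their labels."""
    n = len(data)
    c_xy = Counter((x[i], x[j], c) for x, c in zip(data, labels))
    c_x = Counter((x[i], c) for x, c in zip(data, labels))
    c_y = Counter((x[j], c) for x, c in zip(data, labels))
    c_c = Counter(labels)
    cmi = 0.0
    for (xi, xj, c), n_xyc in c_xy.items():
        p_xyc = n_xyc / n
        # p(x_i, x_j | c) / (p(x_i | c) p(x_j | c)) expressed with raw counts
        ratio = (n_xyc * c_c[c]) / (c_x[(xi, c)] * c_y[(xj, c)])
        cmi += p_xyc * log2(ratio)
    return cmi
```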
KDB is another classical improvement on NB [4]. It allows each feature to have at most k features as parents; in this sense, NB is a 0-dependence BNC and TAN is a one-dependence BNC. In real-world domains, modeling feature dependencies very often improves classification performance. This is especially true for KDB: compared with lower values of k, larger values of k may help improve classification accuracy [4]. KDB requires two passes over the training examples. The first pass is structure learning; Algorithm 1 depicts the structure learning process of KDB. The second pass is parameter learning. Given the Bayesian network obtained in the first pass, the joint probability of KDB for each instance is calculated as:
$$p(\mathbf{x}, c) = p(c)\prod_{i=1}^{n} p(x_i \mid Pa(x_i), c), \qquad (6)$$
where $Pa(x_i)$ denotes the feature parents of $X_i$ in the structure. Given an ordered feature set $\{X_1, X_2, X_3, X_4\}$, Figure 2 shows examples of the corresponding KDB structures for different values of k, and Table 1 lists the corresponding joint probability distributions.
Algorithm 1: Structure learning process of KDB.
Entropy 20 00897 i001
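Since the listing above is reproduced only as an image, the following Python sketch outlines the standard KDB structure-learning pass as described by Sahami [4]; the callables mutual_information and conditional_mutual_information are assumed to be supplied by the caller (for example, built from the count-based estimators sketched earlier).

```python
def kdb_structure(features, k, mutual_information, conditional_mutual_information):
    """Return a dict mapping each feature to its feature parents (the class parent is implicit).

    features : list of feature names or indices
    k        : maximum number of feature parents per feature
    mutual_information(X)                 -> estimate of I(X; C)
    conditional_mutual_information(X, Y)  -> estimate of I(X; Y | C)
    """
    # Rank features by decreasing mutual information with the class.
    order = sorted(features, key=mutual_information, reverse=True)
    parents = {}
    for pos, x in enumerate(order):
        candidates = list(order[:pos])          # only higher-ranked features may be parents
        candidates.sort(key=lambda y: conditional_mutual_information(x, y), reverse=True)
        parents[x] = candidates[:k]             # keep at most k strongest conditional dependencies
    return parents
```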

3. The mRMR-Based KDB with the Discriminative Model Selection

To explain our motivation for performing selection based on the mRMR criterion and discriminative model selection in the context of KDB, we consider two extreme examples of constructing BNCs over two feature sets. The first feature set contains two perfectly correlated features $X_i$ and $X_j$, where $X_i$ is an exact copy of $X_j$. Both $X_i$ and $X_j$ will be included in the network structure of KDB; that is, $X_i$ (or, equivalently, $X_j$) will have twice the influence of the other features, which may strongly bias the performance of the classifier. A possible way to improve classification performance is to eliminate one of $\{X_i, X_j\}$ from the feature set and construct the classifier over the reduced set. The second extreme example is an ordered feature set $\{X_a, X_b, X_c\}$ of non-redundant features used to construct a KDB with $k = 2$. Suppose the values of $I(X_c; X_a \mid C)$ and $I(X_c; X_b \mid C)$ are 0.99 and 0.0001, respectively. As KDB is defined, feature $X_c$ must select both $X_a$ and $X_b$ as its parents in any case. This naturally results in a redundant dependency between $X_c$ and $X_b$, which may negatively affect the classification performance of KDB and increase the risk of over-fitting to a certain extent.
Therefore, we use a sorting method based on the mRMR criterion to identify possibly redundant features, and discriminative model selection to remove the redundant features or conditional dependencies. The usual feature selection based on mutual information in KDB selects features as if they were independent of each other. Instead, the mRMR method tries to select features that minimize redundancy while maximizing relevance. As argued by Peng et al. [5], for real data the features selected in this way will have more or less correlation with each other, and their joint effect can lead to very good classification accuracy.
Let S denote a feature set and |S| its cardinality. Given a feature set $X = \{X_1, X_2, \ldots, X_n\}$ and a class variable C, two conditions should be met to ensure that the selected feature subset is the most appropriate one. The first is the minimum redundancy condition [5]:
$$\min R(S), \quad \text{where } R(S) = \frac{1}{|S|^2}\sum_{X_i, X_j \in S} I(X_i; X_j) \qquad (7)$$
where R represents the level of redundancy between features.
The other is the maximum relevance condition [5]:
$$\max D(S), \quad \text{where } D(S) = \frac{1}{|S|}\sum_{X_i \in S} I(X_i; C) \qquad (8)$$
where D represents the level of relevance between the features and the class variable.
There are two combinations of these two conditions, named MID (Mutual Information Difference) and MIQ (Mutual Information Quotient) [17], which balance the two objectives, maximum relevance and minimum redundancy, in different ways as follows:
$$MID(S) = \max\big(D(S) - R(S)\big) \qquad (9)$$
$$MIQ(S) = \max\big(D(S)/R(S)\big) \qquad (10)$$
As argued by Gulgezen et al. [18], MID produces more stable feature subsets, so in this paper we choose MID as the criterion. Suppose there exists a selected feature subset $S_{m-1}$ consisting of $m-1$ features; then the m-th feature is determined by the following equation:
$$MID(X_j) = \max\left\{ I(X_j; C) - \frac{1}{m-1}\sum_{X_i \in S_{m-1}} I(X_i; X_j) \right\} \qquad (11)$$
where $X_j \in S - S_{m-1}$.
Feature selection based on the mRMR criterion uses a forward selection strategy: it starts with an empty feature set L and iteratively adds one feature at a time to L according to Equation (11). After sorting all features in this way, we consider the feature subsets $\{X_1, X_2, \ldots, X_i\}$, $1 \le i \le n$, each containing the first i ordered features. That is, for n features there are n alternative feature subsets to explore in our proposed algorithm, as sketched below.
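A minimal Python sketch of this MID-based forward ordering follows (illustrative only; mi_with_class and mi_between are assumed to be precomputed lookup tables of $I(X_i; C)$ and $I(X_i; X_j)$, the latter with symmetric keys).

```python
def mrmr_order(features, mi_with_class, mi_between):
    """Order features by the MID criterion: max I(Xj; C) - mean over selected Xi of I(Xi; Xj)."""
    selected = []
    remaining = list(features)
    # seed the order with the single most class-relevant feature
    first = max(remaining, key=lambda f: mi_with_class[f])
    selected.append(first)
    remaining.remove(first)
    while remaining:
        def mid_score(xj):
            redundancy = sum(mi_between[(xi, xj)] for xi in selected) / len(selected)
            return mi_with_class[xj] - redundancy
        best = max(remaining, key=mid_score)    # Equation (11)
        selected.append(best)
        remaining.remove(best)
    return selected
```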
From Equation (6) we can observe that the joint probability $p(\mathbf{x}, c)$ is the product of a set of conditional probabilities $p(x_i \mid Pa(x_i), c)$. This means we can build a model space in a nested fashion, where each model is built upon the previous one. For an instance $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, as Table 2 shows, the joint probability $p(\mathbf{x}, c)_2$ is obtained by multiplying the conditional probability of feature $X_2$ (i.e., $p(x_2 \mid Pa(x_2), c)$) with $p(\mathbf{x}, c)_1$, and $p(\mathbf{x}, c)_3$ is obtained by multiplying the conditional probability of feature $X_3$ (i.e., $p(x_3 \mid Pa(x_3), c)$) with $p(\mathbf{x}, c)_2$. That is, once the model for $p(\mathbf{x}, c)_i$ has been built, there is no need to repeat structure learning over the feature set $\{X_1, X_2, \ldots, X_i\}$ for the model $p(\mathbf{x}, c)_{i+1}$. We only need to find the parents of feature $X_{i+1}$ in the Bayesian network and then multiply the conditional probability $p(x_{i+1} \mid Pa(x_{i+1}), c)$ with the previously learned joint probability $p(\mathbf{x}, c)_i$ to obtain $p(\mathbf{x}, c)_{i+1}$. The discriminative model selection framework is derived from the chain rule of the BNC joint probability: it first constructs a space of sub-models and then selects the best sub-model according to an evaluation function, thereby achieving feature selection.
Based on the above observations, we further improve the framework of discriminative model selection from the viewpoint of feature dependencies. To make the idea clear in KDB, we restrict each feature to at most two parent features in the following discussion. As Figure 3 shows, for the feature subset $\{X_1, X_2, X_3\}$ the corresponding model space of our proposed algorithm MMKDB is composed of BNC$_3^0$, BNC$_3^1$ and BNC$_3^2$. The only difference between these three BNCs is the number of parents of feature $X_3$; conditional mutual information is used to assign 0, 1 or 2 features to $X_3$ as parents, respectively. Note that all BNCs over $\{X_1, X_2, X_3\}$ are built upon BNC$_2^1$, which is the best BNC for the feature subset $\{X_1, X_2\}$ as selected by an evaluation function. Similarly, the classification performance of BNC$_3^0$, BNC$_3^1$ and BNC$_3^2$ must also be evaluated to select the best one. In this way, we can remove not only redundant features but also redundant dependencies between them. That is, at each iteration of model selection, any feature $X_i$ with $i > k$ has $k'$ parent features, where $0 \le k' \le k$. We employ the root mean squared error (RMSE) [19], an effective measure of probability estimates, as the evaluation function in the discriminative model selection procedure:
$$RMSE = \sqrt{\frac{1}{t}\sum_{\mathbf{x} \in D}\big(1 - p(\hat{c} \mid \mathbf{x})\big)^2} \qquad (12)$$
where D is the training set, t is the number of training examples, $\hat{c}$ is the true class label of instance $\mathbf{x}$, and $p(\hat{c} \mid \mathbf{x})$ is the estimated posterior probability of the true class given $\mathbf{x}$.
It is worth noting that, to avoid over-fitting the sub-models to the training examples, we employ leave-one-out cross-validation (LOOCV) [20] to evaluate the classification performance of each model. Kohavi et al. [21] proposed an incremental method to make cross-validation more efficient. Traditional LOOCV for BNCs recomputes the joint probability of a new model over the training examples for each instance. By contrast, incremental cross-validation first computes the total joint frequency counts over all training examples; when testing an instance, its counts are temporarily removed from the totals before the joint probability of the corresponding model is calculated.
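The idea can be sketched as follows (an illustrative outline, not the authors' code): the joint frequency counts are accumulated once, each held-out instance's contribution is subtracted before its posterior is estimated, and the counts are restored afterwards. The count_table and predict_posterior helpers are assumptions.

```python
def incremental_loocv_rmse(instances, labels, count_table, predict_posterior):
    """Leave-one-out estimate of RMSE (Equation (12)) using incrementally adjusted counts.

    count_table       : object with add(x, c) / remove(x, c) updating joint frequency counts
    predict_posterior : function (count_table, x, c) -> estimated p(c | x)
    """
    squared_error = 0.0
    for x, c in zip(instances, labels):
        count_table.remove(x, c)                 # temporarily exclude the test instance
        p_true = predict_posterior(count_table, x, c)
        squared_error += (1.0 - p_true) ** 2
        count_table.add(x, c)                    # restore the counts
    return (squared_error / len(instances)) ** 0.5
```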
Figure 4 presents the schematic diagram of our proposed algorithm MMKDB. Step 1 sets the order of the features with the mRMR sorting method, computes the conditional mutual information between features and the class variable from the training examples, and divides the ordered feature set into n feature subsets. Steps 2 and 3 correspond to the framework of discriminative model selection. All ordered feature subsets are introduced as input to construct the corresponding BNCs; sub-models over larger feature subsets are built upon the selected sub-models over smaller ones. These $\frac{n(k+2)}{2}$ sub-models form the model space. Each sub-model is denoted BNC$_s^r$, where s is the index of the feature subset and r is the number of parents of feature $X_s$. All sub-models in the model space are evaluated with LOOCV by computing their RMSE on the training examples. According to the chain rule of the BNC joint probability, BNC$_{s+1}$ must be built upon BNC$_s$, so only the sub-model with the lowest RMSE is retained for each feature subset. Finally, there are n locally optimal BNCs, one for each feature subset, and the overall optimal BNC is selected from these n sub-models.
Based on the discussion above, we present the pseudo-code of MMKDB in Algorithm 2. Computing $I(X_i; C)$ and $I(X_i; X_j)$ requires $O(tcnv)$ and $O(tn^2v^2)$ time, respectively, where t is the number of training examples, c is the number of classes, n is the number of features and v is the maximum number of values per feature. From Equation (11) we can infer that the time complexity of Step 2 in Algorithm 2 is $O(tn^2v^2)$ if $n > c$, and $O(tcnv)$ otherwise. Computing the conditional mutual information requires $O(tcn^2v^2)$ time. The space complexity of the table of joint frequencies of all combinations of n feature values and the class label is $O(cn^2v^2)$. Feature ordering takes $O(n\log n)$ time and parent assignment for each feature takes $O(n^2\log n)$ time. Moreover, classifying an instance with the selected sub-model requires only $O(cnk)$ time, so the discriminative model selection procedure requires $O(tcnk^2)$ time. The overall time complexity is therefore $O(tcn^2v^2 + tcnk^2)$ for MMKDB and $O(tn^2v^2)$ for KDB. This is an acceptable result, since k is a user-set parameter: the time complexity of MMKDB scales linearly with the number of training examples and classes, and quadratically with the number of features.
Algorithm 2: Algorithm MMKDB.
Entropy 20 00897 i002
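The selection loop of Algorithm 2 can be summarized by the following simplified Python sketch. It abstracts the incremental construction of sub-models into a hypothetical helper rmse_of_model(s, r), which is assumed to build BNC_s^r on top of the sub-model already chosen for the first s - 1 features and return its LOOCV RMSE; the sketch is ours, not a verbatim transcription of the authors' pseudo-code.

```python
def mmkdb_select(order, k, rmse_of_model):
    """Simplified sketch of the discriminative model selection step of MMKDB.

    order               : features sorted by the mRMR criterion
    k                   : maximum number of feature parents allowed by KDB
    rmse_of_model(s, r) : hypothetical helper returning the LOOCV RMSE of sub-model BNC_s^r
    """
    chosen, best_rmse = (1, 0), rmse_of_model(1, 0)   # BNC_1^0: first feature, class parent only
    for s in range(2, len(order) + 1):                # grow the feature subset one feature at a time
        max_parents = min(k, s - 1)
        # among BNC_s^0 ... BNC_s^max_parents, keep the sub-model with the lowest RMSE
        rmse_s, model_s = min((rmse_of_model(s, r), (s, r)) for r in range(max_parents + 1))
        if rmse_s < best_rmse:                        # track the global optimum over all n subsets
            best_rmse, chosen = rmse_s, model_s
    return chosen                                     # (number of features, parents of the last feature)
```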

4. Experiments

We run the experiments on a C++ system (GCC 5.4.0) specially designed for BNCs. For KDB, compared with lower values of k, larger values of k may help improve classification accuracy. However, the restrictions of currently available hardware place some constraints on the software: structure complexity and time complexity increase exponentially as k increases. For k = 4, the experimental results of MMKDB could not be obtained on some datasets because of the amount of memory and CPU available. Thus, in the following experimental study, the maximum value of k is 3.
In our experimental study, we gathered a group of datasets from the UCI machine learning repository [22], described in Table 3. Missing values are treated as a distinct value. For each dataset, we discretize quantitative features using 5-bin equal-frequency discretization, and we employ m-estimation (m = 1) [23,24] to smooth the probability estimates, as sketched below.
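For reference, the two preprocessing steps can be sketched as follows (illustrative code under our own naming, not the authors' implementation): cut points for 5-bin equal-frequency discretization, and an m-estimate with a uniform prior.

```python
def equal_frequency_bins(values, n_bins=5):
    """Return cut points that split a numeric feature into n_bins equal-frequency bins."""
    ordered = sorted(values)
    return [ordered[(i * len(ordered)) // n_bins] for i in range(1, n_bins)]

def m_estimate(count, total, n_values, m=1.0):
    """m-estimation of a (conditional) probability, smoothing toward a uniform prior 1/n_values."""
    return (count + m / n_values) / (total + m)
```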
For comparison, we also evaluate two extended versions of KDB:
  • KDB with the sorting method based on the mRMR criterion (MKDB).
  • KDB with the discriminative model selection (MSKDB).
All experiments use 10 rounds of 10-fold cross-validation, and we employ the zero-one loss to evaluate the classification accuracy of the different algorithms [25]. Let c be the class label predicted by an algorithm and $\hat{c}$ the true class label; the zero-one loss is calculated as follows:
$$\xi(c, \hat{c}) = 1 - \delta(c, \hat{c}) \qquad (13)$$
where $\delta(c, \hat{c}) = 1$ if $c = \hat{c}$ and 0 otherwise.
The detailed zero-one loss results of all alternative algorithms are presented in Table A1 in Appendix A. To give the experimental results an intuitive interpretation, we use Win/Draw/Loss (W/D/L) records to summarize, for a given evaluation function, the number of datasets falling into each of three situations: a win means an algorithm achieves a significant advantage over the other on a dataset, a loss indicates the opposite, and a draw means the two algorithms perform comparably. Each entry compares the algorithm in the row against the one in the column. We regard a difference between two algorithms as significant if the outcome of a one-tailed binomial sign test is less than 0.05.
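The significance criterion can be reproduced with a one-tailed binomial sign test on the win and loss counts (draws ignored); the following small sketch is illustrative.

```python
from math import comb

def sign_test_p(wins, losses):
    """One-tailed binomial sign test: probability of at least `wins` successes
    out of wins + losses trials under a fair coin."""
    n = wins + losses
    return sum(comb(n, i) for i in range(wins, n + 1)) / 2 ** n

# A difference is treated as significant when sign_test_p(wins, losses) < 0.05;
# e.g. for the W/D/L record 12/25/3 in Table 4, sign_test_p(12, 3) is about 0.018.
```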

4.1. Impact of Sorting Method Based on the mRMR Criterion and Discriminative Model Selection

To investigate the impact of the sorting method based on the mRMR criterion, Table 4 presents the W/D/L records comparing the zero-one loss of KDB and MKDB. The only difference between KDB and MKDB is the feature sorting method: the former sorts by mutual information and the latter by the mRMR criterion. From Table 4 we can see that MKDB achieves a significant advantage over KDB, with a W/D/L record of 12/25/3. This indicates that sorting based on the mRMR criterion is superior to sorting based on mutual information in KDB. Compared with KDB, MKDB has higher zero-one loss on only three datasets (Lung-Cancer, House-Votes-84 and Anneal), which shows that MKDB seldom performs worse than KDB; on many datasets, such as Adult, Dermatology, Labor and Hypo, it substantially improves the classification performance of KDB.
To explore the effect of discriminative model selection, Table 5 presents the W/D/L records in terms of zero-one loss between KDB and MSKDB. The only difference between these two algorithms is that MSKDB needs an extra pass over the training examples to perform discriminative model selection. As expected, MSKDB achieves lower zero-one loss more often than KDB, for example a decrease from 0.1926 to 0.0598 on the dataset Splice-C4.5. Note that MSKDB performs worse than KDB on only one dataset, Contact-Lenses; we argue that the lack of sufficient instances is the main reason MSKDB does not perform well there.

4.2. Comparison of MMKDB vs. KDB

Table 6 gives the corresponding zero-one loss comparison between MMKDB and KDB, together with the W/D/L records of MMKDB over MKDB and MSKDB. MMKDB achieves significant advantages over KDB, MKDB and MSKDB, which indicates that combining mRMR analysis with discriminative model selection is feasible. To further demonstrate the performance of MMKDB relative to the other algorithms, we employ the goal difference (GD) [26]. For two classifiers A and B, the value of GD is computed as follows:
$$GD(A; B \mid T) = |win| - |loss|, \qquad (14)$$
where T is the collection of datasets, and |win| and |loss| are the numbers of datasets on which A performs better or worse than B, respectively.
Figure 5 shows the fitting curve of GD(MMKDB; KDB | $S_t$) in terms of zero-one loss. The X-axis shows the indexes of the datasets, referred to as t, which correspond to those in Table 3, and the Y-axis shows the value of GD(MMKDB; KDB | $S_t$), where $S_t = \{D_m \mid m \le t\}$ and $D_m$ is the dataset with index m. We categorize datasets by size: datasets with ≤1000 instances, >1000 and ≤10,000 instances, and >10,000 instances are labeled small, medium and large, respectively. Two dotted lines divide the figure into three parts, each associated with one of these size categories. Figure 5 shows a clear positive correlation between GD(MMKDB; KDB | $S_t$) and dataset size: as the dataset size increases, MMKDB achieves significant advantages over KDB on small and medium datasets. When the number of instances exceeds 10,000, MMKDB has zero-one loss similar to KDB but classifies faster, since it removes not only redundant features but also redundant dependencies between them. We can therefore conclude that MMKDB retains the strengths of KDB, namely its capacity for high-dependence representation and its model-fitting ability on large datasets, while improving the model-fitting ability on small and medium datasets and the classification efficiency on large datasets. This demonstrates the feasibility of applying discriminative model selection to remove redundant features and dependencies.
To further evaluate whether mRMR analysis and discriminative model selection are compatible, and the extent to which applying both together improves classification performance relative to applying each alone, we employ the relative zero-one loss ratio [27]. For two classifiers A and B, the relative zero-one loss ratio, denoted RZ(·), is calculated as follows:
$$RZ(A \mid B) = 1 - \frac{Z_A}{Z_B} \qquad (15)$$
where Z denotes the zero-one loss and $Z_A$ (or $Z_B$) is the zero-one loss of classifier A (or B) on a dataset. The smaller the ratio of $Z_A$ to $Z_B$, the higher the value of RZ(A | B) and the better the performance of A.
Figure 6 presents the comparison of RZ(·) between MKDB, MSKDB, MMKDB and KDB. The X-axis shows the index of the dataset, and the Y-axis the value of RZ(·). On the dataset Audio (No. 9), the values of RZ(MKDB|KDB) and RZ(MSKDB|KDB) are 0.1486 and 0.1328, respectively, whereas RZ(MMKDB|KDB) is 0.2184, clearly higher than both. There are also two extreme situations. For the dataset Dermatology (No. 11), RZ(MKDB|KDB) is 0.2006 but RZ(MSKDB|KDB) is only 0.0198; that is, MKDB improves significantly on KDB but MSKDB does not. Nevertheless, RZ(MMKDB|KDB) on Dermatology is 0.3482. The other extreme situation is the opposite: on the dataset Splice-C4.5 (No. 21), RZ(MKDB|KDB) is 0.0729 and RZ(MSKDB|KDB) is 0.6895, which are very unbalanced; however, RZ(MMKDB|KDB) is 0.6936. In other words, RZ(MMKDB|KDB) is always at least as high as RZ(MKDB|KDB) and RZ(MSKDB|KDB). We therefore conclude that mRMR analysis and discriminative model selection are compatible within the framework of KDB.
A more intuitive view is presented in Figure 7, where each bar represents the mean relative zero-one loss ratio of an algorithm to KDB over the 40 datasets. As shown, the values of $\bar{R}_Z$(MKDB|KDB), $\bar{R}_Z$(MSKDB|KDB) and $\bar{R}_Z$(MMKDB|KDB) are 0.0449, 0.0620 and 0.1006, respectively. The average improvement of MMKDB over KDB is thus clearly higher than those of MKDB and MSKDB, which shows that the interoperability of mRMR analysis and discriminative model selection is the main reason why applying both together improves classification performance more than applying each alone.

4.3. Comparison of MMKDB vs. NB, TAN and AODE

Table 7 presents the corresponding W/D/L results. The zero-one loss results of MMKDB are significantly better than those of NB and TAN, and MMKDB also achieves competitive classification performance compared with AODE. Figures 8 and 9 present the fitting curves of GD(MMKDB; NB | $S_t$) and GD(MMKDB; TAN | $S_t$), respectively, in terms of zero-one loss. MMKDB performs similarly to NB and TAN on small datasets; however, once the dataset size reaches 1000 instances (Led, No. 18), the prediction performance of MMKDB is clearly better than that of NB and TAN. That is, MMKDB achieves significant advantages over NB and TAN on medium and large datasets.
For AODE, one of the best-known ensemble BNCs, Figure 10 presents the corresponding fitting curve of GD(MMKDB; AODE | $S_t$) in terms of zero-one loss. The values of GD decrease when the dataset size is ≤1000, which means it is very difficult for MMKDB to beat AODE on small datasets. When the dataset size is >1000 and ≤10,000, MMKDB has classification performance similar to AODE, making it a good substitute for AODE on medium datasets. Note that the fitting curve turns clearly upward when the dataset size exceeds 10,000 (the size of dataset Pendigits, No. 31). That is to say, the single BNC, MMKDB, achieves significant advantages over AODE on large datasets.
The training and classification time comparisons of NB, TAN, AODE, KDB, MKDB, MSKDB and MMKDB are shown in Figures 11 and 12, where each bar depicts the total time over the 40 datasets. Although on most datasets our proposed algorithm MMKDB requires substantially more training time than the other BNCs, such as NB, TAN, AODE and KDB, its classification time is the lowest. Note that the training and classification times of MSKDB are similar to those of MMKDB, and the training and classification times of MKDB are similar to those of KDB. AODE is an ensemble algorithm whose classification time increases quadratically with the number of features, and is hence much higher than that of the other BNCs in Figure 12. Overall, MMKDB saves about 42% of KDB's classification time and greatly improves the classification performance of KDB at the cost of a modest increase in training time, and it enjoys an even greater advantage in classification time over NB, TAN and AODE.

4.4. Global Comparison

In this section, we use the Friedman test [28] to compare all alternative algorithms on the 40 datasets. The Friedman test is a non-parametric measure computed as follows:
$$F_F = \frac{(D-1)\chi_F^2}{D(g-1) - \chi_F^2} \qquad (16)$$
and
$$\chi_F^2 = \frac{12D}{g(g+1)}\left[\sum_i R_i^2 - \frac{g(g+1)^2}{4}\right] \qquad (17)$$
where g is the number of alternative algorithms, D is the number of datasets and $R_i$ is the average rank of the i-th algorithm. The best-performing algorithm receives rank 1, the second best rank 2, and so on; in case of ties, average ranks are assigned. The null hypothesis of the Friedman test is that there is no difference in average ranks. The detailed ranks on the 40 datasets are presented in Table A2 in Appendix A. With 7 algorithms and 40 datasets, the Friedman statistic follows an F distribution with $g - 1 = 7 - 1 = 6$ and $(g-1)(D-1) = (7-1)\times(40-1) = 234$ degrees of freedom. The critical value of F(6, 234) for $\alpha = 0.05$ is 2.1375. The Friedman test for zero-one loss gives $F_F = 8.6308 > 2.1375$ with $p < 0.001$, so we reject the null hypothesis: the seven algorithms are not equivalent in terms of zero-one loss.
Figure 13 presents the zero-one loss rankings of all alternative algorithms. The average ranks over all datasets are NB (5.34), TAN (4.53), AODE (3.73), KDB (4.20), MKDB (4.00), MSKDB (3.83) and MMKDB (2.39). That is, MMKDB ranks best, followed by AODE, MSKDB, MKDB, KDB, TAN and NB.
To further explore which algorithms differ significantly from the others, we also perform the Nemenyi test [29], shown in Figure 14. The algorithms are plotted on the dotted line according to their average ranks, which correspond to the nodes on the top solid line. The critical difference (CD) is also shown in the figure and is calculated as follows:
$$CD = q_\alpha\sqrt{\frac{g(g+1)}{6D}} \qquad (18)$$
where the critical value $q_\alpha$ for $\alpha = 0.05$ and g = 7 is 2.949. For $\alpha = 0.05$ with 7 algorithms and 40 datasets, $CD = 2.949 \times \sqrt{7 \times (7+1)/(6 \times 40)} = 1.4245$. Note that the further to the left an algorithm appears on the line, the lower its rank and the better its performance; algorithms are connected by a line if their differences are not significant. As the figure shows, NB, TAN, KDB and MKDB have equivalent mean ranks. The mean rank of MMKDB is significantly lower than those of NB, TAN, KDB, MKDB and MSKDB. MMKDB also achieves a lower mean rank than AODE, but not significantly so.
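Both statistics can be recomputed directly from the mean ranks in Table A2, as in the following sketch (small discrepancies with the reported $F_F$ value arise from rounding the mean ranks to two decimals).

```python
from math import sqrt

def friedman_nemenyi(mean_ranks, n_datasets, q_alpha=2.949):
    """Friedman statistic F_F (Equations (16)-(17)) and Nemenyi critical difference (Equation (18))."""
    g, d = len(mean_ranks), n_datasets
    chi2 = 12.0 * d / (g * (g + 1)) * (sum(r * r for r in mean_ranks) - g * (g + 1) ** 2 / 4.0)
    f_f = (d - 1) * chi2 / (d * (g - 1) - chi2)
    cd = q_alpha * sqrt(g * (g + 1) / (6.0 * d))
    return f_f, cd

# friedman_nemenyi([5.34, 4.53, 3.73, 4.20, 4.00, 3.83, 2.39], 40)
# gives F_F of roughly 8.4 and CD of roughly 1.42.
```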

5. Conclusions

KDB is a well-known BNC with the capacity for high-dependence representation. To achieve a trade-off between structure complexity and classification accuracy, KDB can represent different numbers of interdependencies for different data sizes. The mRMR analysis and discriminative model selection have both previously been demonstrated to be computationally efficient approaches: the former improves feature selection, and the latter reduces the classification error of KDB. However, mRMR analysis had not been studied in the context of KDB, and discriminative model selection can be improved further, for instance by removing redundant dependencies in a BNC. Therefore, in this paper we investigated the feasibility of applying discriminative model selection to remove redundant features and dependencies, and the interoperability of mRMR analysis and discriminative model selection.
Regular KDB first uses the mutual information between each feature and the class variable to rank and sort all features. Clearly, an independent feature with a high mutual information value may achieve a high rank while demonstrating only weak conditional dependencies; mRMR analysis makes up for this shortcoming. Moreover, KDB does not consider the negative effect of redundant features, which may bias the classification results. We use discriminative model selection to remove the redundant features and arcs in the Bayesian network.
We conducted experiments on 40 UCI datasets to explore the impact of the sorting method based on the mRMR criterion and of discriminative model selection. The advantages of MKDB and MSKDB over KDB in terms of zero-one loss demonstrate that each technique can help reduce KDB's classification error. The advantages of MMKDB over KDB, MKDB and MSKDB further demonstrate that the two techniques interoperate well: there is strong synergy between mRMR analysis and discriminative model selection in KDB, and together they reduce the classification error of KDB more effectively than either does in isolation. The fitting curve of the goal difference between MMKDB and KDB clarifies the superior performance of MMKDB on datasets of different scales. MMKDB not only retains the strengths of KDB, namely its capacity for high-dependence representation and its model-fitting ability on large datasets, but also improves the model-fitting ability on small and medium datasets and the classification efficiency on large datasets. These two techniques save about 42% of KDB's classification time and greatly improve its classification performance. We also compared MMKDB against other state-of-the-art BNCs, namely NB, TAN and AODE. The results show that MMKDB achieves significant advantages in classification performance over NB and TAN on medium and large datasets, and over AODE on large datasets. We additionally conducted significance analyses, including the Friedman test and the Nemenyi test, which show that the mean rank of MMKDB is significantly lower than those of NB, TAN, KDB, MKDB and MSKDB, and lower than that of AODE, though not significantly so.

Author Contributions

All authors contributed to the study and the preparation of the article. The first author conceived the idea, derived the equations and wrote the paper. The second and third authors performed the analysis and the programming work. All authors have read and approved the final manuscript.

Funding

This work was supported by the National Science Foundation of China (Grant No. 61272209 and No. 61872164).

Acknowledgments

This work was supported by the National Science Foundation of China (Grant No. 61272209 and No. 61872164).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This appendix presents the detailed zero-one loss results of NB, TAN, AODE, KDB, MKDB, MSKDB and MMKDB in Table A1, and the ranks of different algorithms on 40 datasets are shown in Table A2.
Table A1. Detailed zero-one loss results of NB, TAN, AODE, KDB, MKDB, MSKDB and MMKDB.
Dataset  NB  TAN  AODE  KDB  MKDB  MSKDB  MMKDB
Contact-Lenses  0.3788  0.3788  0.3788  0.2946  0.2888  0.3713  0.3257
Lung-Cancer  0.4419  0.5997  0.5050  0.5050  0.6806  0.5259  0.5496
Labor  0.0355  0.0531  0.0531  0.0709  0.0347  0.0521  0.0343
Post-Operative  0.3478  0.3704  0.3366  0.3928  0.3758  0.3740  0.3692
Zoo  0.0300  0.0100  0.0300  0.0600  0.0490  0.0490  0.0387
Promoters  0.0763  0.1334  0.1334  0.3240  0.2896  0.3362  0.2582
Echocardiogram  0.3393  0.3315  0.3238  0.3315  0.3325  0.3325  0.3282
Autos  0.3153  0.2167  0.2069  0.2020  0.1931  0.1931  0.1859
Audio  0.2413  0.2949  0.2055  0.3486  0.2968  0.3022  0.2724
Hungarian  0.1615  0.1718  0.1684  0.2027  0.1953  0.2088  0.1928
Dermatology  0.0193  0.0331  0.0166  0.0690  0.0541  0.0676  0.0453
Horse-Colic  0.2196  0.2113  0.2031  0.2882  0.2475  0.1614  0.1620
House-Votes-84  0.0952  0.0558  0.0534  0.0488  0.0524  0.0478  0.0494
Chess  0.1136  0.0935  0.1008  0.0733  0.0737  0.0701  0.0727
Crx  0.1391  0.1493  0.1361  0.1639  0.1549  0.1478  0.1431
Vehicle  0.3963  0.2972  0.2925  0.2901  0.2809  0.2937  0.2784
Anneal  0.0383  0.0112  0.0090  0.0090  0.0097  0.0088  0.0087
Led  0.2697  0.2687  0.2707  0.2687  0.2633  0.2633  0.2599
Volcanoes  0.3349  0.3349  0.3349  0.3349  0.3283  0.3283  0.3240
Car  0.1414  0.0573  0.0824  0.0205  0.0201  0.0201  0.0198
Splice-C4.5  0.0448  0.0471  0.0369  0.1926  0.1786  0.0598  0.0590
Hypo  0.0139  0.0142  0.0096  0.0139  0.0118  0.0097  0.0091
Sick  0.0311  0.0260  0.0276  0.0219  0.0218  0.0213  0.0202
Abalone  0.4810  0.4633  0.4517  0.4647  0.4380  0.4555  0.4423
Spambase  0.1025  0.0676  0.0679  0.0663  0.0671  0.0652  0.0657
Waveform-5000  0.2026  0.1862  0.1477  0.2699  0.2584  0.2600  0.2503
Phoneme  0.2641  0.2760  0.2416  0.2075  0.2033  0.1890  0.1865
Page-Blocks  0.0625  0.0419  0.0341  0.0332  0.0341  0.0317  0.0313
Mushrooms  0.0198  0.0001  0.0001  0.0000  0.0000  0.0000  0.0000
Thyroid  0.1122  0.0727  0.0708  0.0726  0.0714  0.0707  0.0697
Pendigits  0.1193  0.0324  0.0202  0.0444  0.0444  0.0436  0.0438
Sign  0.3622  0.2783  0.2849  0.2311  0.2265  0.2265  0.2236
Nursery  0.0983  0.0661  0.0737  0.0182  0.0175  0.0178  0.0176
Magic  0.2261  0.1692  0.1770  0.1661  0.1618  0.1629  0.1597
Letter-Recog  0.2550  0.1313  0.0892  0.1028  0.0960  0.1008  0.0981
Adult  0.1608  0.1394  0.1508  0.1404  0.1183  0.1355  0.1343
Shuttle  0.0039  0.0015  0.0008  0.0008  0.0008  0.0008  0.0007
Connect-4  0.2811  0.2378  0.2444  0.2166  0.2197  0.2124  0.2168
Localization  0.5005  0.3611  0.3632  0.3097  0.2974  0.3035  0.2996
Census-Income  0.2387  0.0634  0.1014  0.0509  0.0506  0.0488  0.0483
Table A2. Ranks of different algorithms on 40 datasets.
Dataset  NB  TAN  AODE  KDB  MKDB  MSKDB  MMKDB
Contact-Lenses  5.5  5.5  5.5  1.5  1.5  5.5  3.0
Lung-Cancer  1.0  6.0  2.5  2.5  7.0  4.0  5.0
Labor  2.5  5.0  5.0  7.0  2.5  5.0  1.0
Post-Operative  2.0  3.0  1.0  7.0  6.0  5.0  4.0
Zoo  2.5  1.0  2.5  7.0  5.5  5.5  4.0
Promoters  1.0  2.5  2.5  6.0  5.0  7.0  4.0
Echocardiogram  6.0  2.5  1.0  2.5  6.0  6.0  4.0
Autos  7.0  6.0  5.0  4.0  2.5  2.5  1.0
Audio  2.0  4.0  1.0  7.0  5.0  6.0  3.0
Hungarian  1.0  3.0  2.0  6.0  5.0  7.0  4.0
Dermatology  2.0  3.0  1.0  6.5  5.0  6.5  4.0
Horse-Colic  5.0  4.0  3.0  7.0  6.0  1.0  2.0
House-Votes-84  7.0  6.0  4.5  1.5  4.5  1.5  3.0
Chess  7.0  5.0  6.0  2.0  4.0  1.0  3.0
Crx  2.0  4.0  1.0  7.0  6.0  5.0  3.0
Vehicle  7.0  5.0  4.0  3.0  2.0  6.0  1.0
Anneal  7.0  6.0  3.0  3.0  5.0  3.0  1.0
Led  6.0  3.5  7.0  3.5  3.5  3.5  1.0
Volcanoes  4.5  4.5  4.5  4.5  4.5  4.5  1.0
Car  7.0  5.0  6.0  3.0  3.0  3.0  1.0
Splice-C4.5  2.0  3.0  1.0  7.0  6.0  5.0  4.0
Hypo  5.5  7.0  2.0  5.5  4.0  3.0  1.0
Sick  7.0  5.0  6.0  3.0  4.0  2.0  1.0
Abalone  7.0  4.0  3.0  5.5  1.0  5.5  2.0
Spambase  7.0  4.0  5.0  1.0  6.0  2.0  3.0
Waveform-5000  3.0  2.0  1.0  7.0  5.0  6.0  4.0
Phoneme  6.0  7.0  5.0  3.5  3.5  2.0  1.0
Page-Blocks  7.0  6.0  4.0  3.0  5.0  2.0  1.0
Mushrooms  7.0  5.5  5.5  2.5  2.5  2.5  2.5
Thyroid  7.0  5.0  1.0  4.0  6.0  3.0  2.0
Pendigits  7.0  2.0  1.0  3.5  6.0  3.5  5.0
Sign  7.0  5.0  6.0  3.0  3.0  3.0  1.0
Nursery  7.0  5.0  6.0  3.5  1.0  3.5  2.0
Magic  7.0  5.0  6.0  3.5  2.0  3.5  1.0
Letter-Recog  7.0  6.0  1.0  4.5  2.0  4.5  3.0
Adult  7.0  4.0  6.0  5.0  1.0  3.0  2.0
Shuttle  7.0  6.0  3.5  3.5  3.5  3.5  1.0
Connect-4  7.0  5.0  6.0  1.5  4.0  1.5  3.0
Localization  7.0  5.0  6.0  3.5  1.0  3.5  2.0
Census-Income  7.0  5.0  6.0  3.0  4.0  2.0  1.0
Mean rank  5.34  4.53  3.73  4.20  4.00  3.83  2.39

References

  1. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian network classifiers. Mach. Learn. 1997, 29, 131–163.
  2. Bielza, C.; Larrañaga, P. Discrete Bayesian network classifiers: A survey. ACM Comput. Surv. 2014, 47.
  3. Pearl, J. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference; Morgan Kaufmann: Burlington, MA, USA, 1988.
  4. Sahami, M. Learning Limited Dependence Bayesian Classifiers. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; pp. 335–338.
  5. Peng, H.; Long, F.; Ding, C. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. 2005, 27, 1226–1238.
  6. Bermejo, P.; Gámez, J.A.; Puerta, J.M. Speeding up incremental wrapper feature subset selection with Naive Bayes classifier. Knowl.-Based Syst. 2014, 55, 140–147.
  7. Zare, H.; Niazi, M. Relevant based structure learning for feature selection. Eng. Appl. Artif. Intell. 2016, 55, 93–102.
  8. Nhaila, H.; Elmaizi, A.; Sarhrouni, E.; Hammouch, A. New wrapper method based on normalized mutual information for dimension reduction and classification of hyperspectral images. In Proceedings of the IEEE Fourth International Conference on Optimization and Applications (ICOA), Mohammedia, Morocco, 26–27 April 2018; pp. 1–7.
  9. Martínez, A.M.; Webb, G.I.; Chen, S.; Zaidi, N.A. Scalable learning of Bayesian network classifiers. J. Mach. Learn. Res. 2016, 17, 1–35.
  10. Duda, R.O.; Hart, P.E. Pattern Classification and Scene Analysis; A Wiley-Interscience Publication; Wiley: New York, NY, USA, 1973.
  11. Lewis, D.D. Naive (Bayes) at forty: The independence assumption in information retrieval. In Proceedings of the European Conference on Machine Learning, Chemnitz, Germany, 21–24 April 1998; Springer: Berlin/Heidelberg, Germany, 1998; pp. 4–15.
  12. Frank, E.; Hall, M.; Pfahringer, B. Locally weighted naive bayes. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, Edmonton, AB, Canada, 1–4 August 2002; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 2002; pp. 249–256.
  13. Jiang, L.; Zhang, H.; Cai, Z. A novel Bayes model: Hidden naive Bayes. IEEE Trans. Knowl. Data Eng. 2009, 21, 1361–1371.
  14. Langley, P.; Sage, S. Induction of selective Bayesian classifiers. In Proceedings of the 10th International Conference Uncertainty Artificial Intelligence, Washington, DC, USA, 29–31 July 1994; pp. 399–406.
  15. Webb, G.I.; Boughton, J.R.; Wang, Z. Not so naive Bayes: Aggregating one-dependence estimators. Mach. Learn. 2005, 58, 5–24.
  16. Chow, C.; Liu, C. Approximating discrete probability distributions with dependence trees. IEEE Trans. Inform. Theory 1968, 14, 462–467.
  17. Ding, C.; Peng, H. Minimum redundancy feature selection from microarray gene expression data. J. Bioinform. Comput. Biol. 2005, 3, 185–205.
  18. Gulgezen, G.; Cataltepe, Z.; Yu, L. Stable and accurate feature selection. In Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Bled, Slovenia, 7–11 September 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 455–468.
  19. Hyndman, R.J.; Koehler, A.B. Another look at measures of forecast accuracy. Int. J. Forecast. 2006, 22, 679–688.
  20. Hastie, T.; Tibshirani, R.; Friedman, J. Unsupervised learning. In The Elements of Statistical Learning; Springer: New York, NY, USA, 2009; pp. 485–585.
  21. Kohavi, R. The power of decision tables. In Proceedings of the European Conference on Machine Learning, Crete, Greece, 25–27 April 1995; Springer: Berlin/Heidelberg, Germany, 1995; pp. 174–189.
  22. Bache, K.; Lichman, M. UCI Machine Learning Repository. Available online: https://archive.ics.uci.edu/ml/datasets.html (accessed on 1 June 2018).
  23. Cestnik, B. Estimating probabilities: A crucial task in machine learning. In Proceedings of the 9th European Conference on Artificial Intelligence, Stockholm, Sweden, 6–10 August 1990; pp. 147–149.
  24. Zaidi, N.A.; Cerquides, J.; Carman, M.J.; Webb, G.I. Alleviating naive bayes attribute independence assumption by attribute weighting. J. Mach. Learn. Res. 2013, 14, 1947–1988.
  25. Kohavi, R.; Wolpert, D. Bias Plus Variance Decomposition for Zero-One Loss Functions. In Proceedings of the Thirteenth International Conference on Machine Learning, Bari, Italy, 3–6 July 1996; pp. 275–283.
  26. Duan, Z.; Wang, L. K-Dependence Bayesian Classifier Ensemble. Entropy 2017, 19, 651.
  27. Wang, L.M.; Zhao, H.Y.; Sun, M.H.; Ning, Y. General and Local: Averaged k-Dependence Bayesian Classifiers. Entropy 2015, 17, 4134–4154.
  28. Demšar, J. Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 2006, 7, 1–30.
  29. Nemenyi, P. Distribution-Free Multiple Comparisons. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 1963.
Figure 1. The topology structure of NB.
Entropy 20 00897 g001
Figure 2. Some examples of corresponding structures of KDB classifiers when given different k values.
Entropy 20 00897 g002
Figure 3. The examples of the corresponding model space of MMKDB for three feature subsets when k = 2.
Entropy 20 00897 g003
Figure 4. The schematic diagram of MMKDB.
Entropy 20 00897 g004
Figure 5. The fitting curve of GD(MMKDB; KDB | $S_t$) in terms of zero-one loss.
Entropy 20 00897 g005
Figure 6. The comparison results of relative zero-one loss ratio between MKDB, MSKDB, MMKDB and KDB.
Entropy 20 00897 g006
Figure 7. The comparison results of mean relative zero-one loss ratio between MKDB, MSKDB, MMKDB and KDB.
Entropy 20 00897 g007
Figure 8. The fitting curve of GD(MMKDB; NB | $S_t$) in terms of zero-one loss.
Entropy 20 00897 g008
Figure 9. The fitting curve of GD(MMKDB; TAN | $S_t$) in terms of zero-one loss.
Entropy 20 00897 g009
Figure 10. The fitting curve of GD(MMKDB; AODE | $S_t$) in terms of zero-one loss.
Entropy 20 00897 g010
Figure 11. Training time comparisons of NB, TAN, AODE, KDB, MKDB, MSKDB and MMKDB.
Entropy 20 00897 g011
Figure 12. Classification time comparisons of NB, TAN, AODE, KDB, MKDB, MSKDB and MMKDB.
Entropy 20 00897 g012
Figure 13. The results of ranking in terms of zero-one loss for all alternative algorithms.
Entropy 20 00897 g013
Figure 14. The results of the Nemenyi test in terms of zero-one loss for all alternative algorithms.
Entropy 20 00897 g014
Table 1. Corresponding joint probability distributions of KDB when given different k values.
k Value  Joint Probability Distribution
k = 1  $p(x_1, \ldots, x_4, c) = p(c)\,p(x_1 \mid c)\,p(x_2 \mid x_1, c)\,p(x_3 \mid x_1, c)\,p(x_4 \mid x_1, c)$
k = 2  $p(x_1, \ldots, x_4, c) = p(c)\,p(x_1 \mid c)\,p(x_2 \mid x_1, c)\,p(x_3 \mid x_1, x_2, c)\,p(x_4 \mid x_1, x_2, c)$
k = 3  $p(x_1, \ldots, x_4, c) = p(c)\,p(x_1 \mid c)\,p(x_2 \mid x_1, c)\,p(x_3 \mid x_1, x_2, c)\,p(x_4 \mid x_1, x_2, x_3, c)$
Table 2. Space of approximate models of KDB with n feature subsets.
Feature Subsets  Joint Probability
$\{X_1\}$  $p(\mathbf{x}, c)_1 = p(c)\,p(x_1 \mid c)$
$\{X_1, X_2\}$  $p(\mathbf{x}, c)_2 = p(c)\,p(x_1 \mid c)\,p(x_2 \mid Pa(x_2), c)$
$\{X_1, X_2, X_3\}$  $p(\mathbf{x}, c)_3 = p(c)\,p(x_1 \mid c)\,p(x_2 \mid Pa(x_2), c)\,p(x_3 \mid Pa(x_3), c)$
$\{X_1, X_2, X_3, \ldots, X_n\}$  $p(\mathbf{x}, c)_n = p(c)\,p(x_1 \mid c)\,p(x_2 \mid Pa(x_2), c)\,p(x_3 \mid Pa(x_3), c)\cdots p(x_n \mid Pa(x_n), c)$
Table 3. Datasets.
No.  Dataset  Inst  Feature  Class    No.  Dataset  Inst  Feature  Class
1  Contact-Lenses  24  4  3    21  Splice-C4.5  3177  60  3
2  Lung-Cancer  32  56  3    22  Hypo  3772  29  4
3  Labor  57  16  2    23  Sick  3772  29  2
4  Post-Operative  90  8  3    24  Abalone  4177  8  3
5  Zoo  101  16  7    25  Spambase  4601  57  2
6  Promoters  106  57  2    26  Waveform-5000  5000  40  3
7  Echocardiogram  131  6  2    27  Phoneme  5438  7  50
8  Autos  205  25  7    28  Page-Blocks  5473  10  5
9  Audio  226  69  24    29  Mushrooms  8124  22  2
10  Hungarian  294  13  2    30  Thyroid  9169  29  20
11  Dermatology  366  34  6    31  Pendigits  10,992  16  10
12  Horse-Colic  368  21  2    32  Sign  12,546  8  3
13  House-Votes-84  435  16  2    33  Nursery  12,960  8  5
14  Chess  551  39  2    34  Magic  19,020  10  2
15  Crx  690  15  2    35  Letter-Recog  20,000  16  26
16  Vehicle  846  18  4    36  Adult  48,842  14  2
17  Anneal  898  38  6    37  Shuttle  58,000  9  7
18  Led  1000  7  10    38  Connect-4  67,557  42  3
19  Volcanoes  1520  3  4    39  Localization  164,860  5  11
20  Car  1728  6  4    40  Census-Income  299,285  41  2
Table 4. W/D/L records when comparing the zero-one loss of MKDB and KDB.
W/D/L  KDB
MKDB  12/25/3
Table 5. W/D/L records when comparing the zero-one loss of MSKDB and KDB.
W/D/L  KDB
MSKDB  8/31/1
Table 6. W/D/L records when comparing the zero-one loss of KDB, MKDB, MSKDB and MMKDB.
W/D/L  KDB  MKDB  MSKDB
MMKDB  17/21/2  15/23/2  10/30/0
Table 7. W/D/L records in terms of zero-one loss: MMKDB vs. NB, TAN and AODE.
W/D/L  NB  TAN  AODE
MMKDB  26/5/9  24/9/7  20/9/11
