Article

Improved PSO_AdaBoost Ensemble Algorithm for Imbalanced Data

1 College of Computer and Communication Engineering, China University of Petroleum, Qingdao 266580, Shandong, China
2 Institute for Sensing and Embedded Network Systems Engineering, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431, USA
3 School of Geosciences, China University of Petroleum, Qingdao 266580, Shandong, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(6), 1476; https://doi.org/10.3390/s19061476
Submission received: 3 January 2019 / Revised: 18 March 2019 / Accepted: 19 March 2019 / Published: 26 March 2019
(This article belongs to the Special Issue Artificial Intelligence and Sensors)

Abstract

The Adaptive Boosting (AdaBoost) algorithm is a widely used ensemble learning framework that achieves good classification results on general datasets. However, it is difficult to apply AdaBoost directly to imbalanced data, since it is designed mainly to reweight misclassified samples rather than samples of the minority class. To better handle imbalanced data, this paper introduces the Area Under Curve (AUC) indicator, which reflects the comprehensive performance of a model, and proposes an improved AdaBoost algorithm based on AUC (AdaBoost-A), which improves the error calculation of AdaBoost by jointly considering the misclassification probability and the AUC. To prevent the redundant or useless weak classifiers generated by the traditional AdaBoost algorithm from consuming too many system resources, this paper further proposes an ensemble algorithm, PSOPD-AdaBoost-A, which re-initializes particles to avoid falling into local optima and optimizes the coefficients of the AdaBoost weak classifiers. Experimental results show that the proposed algorithm is effective for processing imbalanced data, especially data with a relatively high imbalance ratio.

1. Introduction

Since imbalanced data can be found in almost every area, effective classification of imbalanced data has become critical for many applications. The results produced by existing classification algorithms on imbalanced data are usually dominated by the majority class, resulting in low classification accuracy for the minority class. For example, a sensor network can accurately recognize targets under the assumption that the data distribution is balanced. In practical applications, however, the field environment is complex and variable, and samples differ in how easily they can be obtained, which results in imbalanced data. In this case, samples of the minority class are easily ignored, leading to incorrect classification. In an intrusion-alarm application, misclassifying samples of the minority class means false alarms by the system, which can cause very serious consequences.
Existing approaches for processing imbalanced data can generally be divided into two categories [1,2]. The first category works at the data level through resampling, which either (i) increases the number of samples by upsampling, i.e., synthesizing new data or copying the original data, or (ii) reduces the number of samples by subsampling, i.e., extracting a small amount of data. Although resampling can improve minority-class accuracy, it has drawbacks: the synthetic data generated by upsampling cannot always be properly interpreted, and important information may be lost during subsampling. The second category works at the algorithm level through ensemble and cost-sensitive approaches [3,4], which increase the weights of misclassified samples and thus improve classification performance. The ensemble approaches currently in wide use are typically based on Boosting [5,6,7,8] or Bagging [9,10,11]. AdaBoost is a boosting algorithm widely used to process imbalanced data. It uses a single-layer decision tree as the weak classifier. In each training iteration, the weights of the samples misclassified in the previous iteration are increased and the weights of the correctly classified samples are decreased, making the misclassified samples more influential in the next iteration. Although AdaBoost can be applied directly to imbalanced data, it focuses on the misclassified samples rather than on the minority class. In addition, it may generate many redundant or useless weak classifiers, increasing processing overhead and degrading performance.
Many approaches have been proposed to improve the performance of AdaBoost. Li et al. [12] proposed the BPSO-AdaBoost-KNN algorithm for multiclass imbalanced data classification; this algorithm improves the stability of AdaBoost by effectively extracting key features. Cao et al. [13] used gradient descent to optimize a new loss function within the Boosting framework and proposed the AsB and AsBL algorithms, further verifying that this approach can generate cost-sensitive classifiers with lower error cost. Yang et al. [14] used mathematical analysis and graphical methods to clarify the working principle of multiclass AdaBoost and proposed a novel approach for processing multiclass data; this algorithm not only reduces the requirements on the weak classifiers but also ensures the effectiveness of the classification. Li et al. [15] proposed an AdaBoost composite kernel extreme learning machine by combining the composite kernel method and the AdaBoost framework with the weighted ELM, which improves performance in hyperspectral image classification. Dou et al. [16] proposed an improved AdaBoost algorithm that assigns a weight to each individual class and uses weight vectors to represent the recognition power of the base classifiers; this algorithm largely avoids overfitting and improves classification accuracy. Xie et al. [17] proposed an ensemble evolve algorithm for imbalanced data classification by introducing a genetic algorithm into AdaBoost; better classifiers are generated through gene evolution and improved fitness functions, and imbalanced data classification is optimized during evolution. Guo et al. [18] treated samples of the majority class that exceeded a threshold during the iterations as noise and proposed four algorithms (A-AdaBoost, B-AdaBoost, C-AdaBoost, and D-AdaBoost) based on limiting threshold growth and modifying class labels; their results show that these algorithms can effectively process imbalanced data.
In this paper, we propose AdaBoost-A, an improved AdaBoost algorithm based on AUC. AdaBoost-A redefines the error calculation formula by introducing the AUC index into the error of each weak classifier. The AUC evaluates the performance of a classifier and reflects the effect of imbalanced data on it, so the improved algorithm focuses more on samples of the minority class. In addition, AdaBoost-A generates a set of weak classifiers to build a strong classifier, and an improved particle swarm optimization algorithm based on population diversity is used to further optimize the coefficients of the weak classifiers, decreasing the weights of redundant and useless classifiers and avoiding wasted system resources and time overhead.
The remainder of this paper is organized as follows. Section 2 introduces the basic principles and implementation steps of the AdaBoost and Particle Swarm Optimization (PSO) algorithms. Section 3 describes the improved AdaBoost-A algorithm and the PSOPD-AdaBoost-A ensemble algorithm. Section 4 demonstrates the effectiveness of PSOPD-AdaBoost-A through comparison experiments with the traditional AdaBoost algorithm and several improved algorithms. Conclusions are drawn in Section 5.

2. Background

2.1. Adaptive Boosting (AdaBoost)

AdaBoost (Adaptive Boosting) is an adaptive boosting technique. It is a typical ensemble algorithm that improves classification performance by combining multiple weak classifiers into one strong classifier. At the beginning, all samples are assigned the same weight. During the iterations, the sample weights vary with the coefficients of the weak classifiers, and the coefficients are computed from the classification error. As a result, the AdaBoost algorithm increases the weights of the misclassified samples and decreases the weights of the correctly classified samples, so the classifier in the next iteration focuses more on the misclassified samples. Finally, all generated weak classifiers are merged by linear combination to form a strong classifier. The steps of the AdaBoost algorithm [19] are as follows:
Input:
Training data set $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in \mathbb{R}^n$, $y_i \in Y = \{-1, +1\}$, and a weak learning algorithm.
Output:
Final classifier $G(x)$.
  • Initialize the weight distribution of the training samples following Equation (1).
    $D_1 = (w_{11}, \ldots, w_{1i}, \ldots, w_{1N}), \quad w_{1i} = \frac{1}{N}, \quad i = 1, 2, \ldots, N$  (1)
    where N represents the number of samples.
  • For $m = 1, 2, \ldots, M$, where M represents the number of weak classifiers:
    • Following Equation (2), obtain the weak classifier based on weight distribution $D_m$.
      $G_m(x): \mathcal{X} \rightarrow \{-1, +1\}$  (2)
    • Calculate the classification error rate of $G_m(x)$ on the training data set following Equation (3).
      $e_m = P(G_m(x_i) \neq y_i) = \sum_{i=1}^{N} w_{mi} I(G_m(x_i) \neq y_i)$  (3)
    • Calculate the coefficient of $G_m(x)$ following Equation (4).
      $\alpha_m = \frac{1}{2} \log \frac{1 - e_m}{e_m}$  (4)
    • Update the weight distribution of the training samples following Equations (5)–(7).
      $D_{m+1} = (w_{m+1,1}, \ldots, w_{m+1,i}, \ldots, w_{m+1,N})$  (5)
      $w_{m+1,i} = \frac{w_{mi}}{Z_m} \exp(-\alpha_m y_i G_m(x_i))$  (6)
      where $Z_m$ is the normalization factor.
      $Z_m = \sum_{i=1}^{N} w_{mi} \exp(-\alpha_m y_i G_m(x_i))$  (7)
  • Build a linear combination of the basic classifiers and obtain the final classifier $G(x)$ following Equations (8) and (9).
    $f(x) = \sum_{m=1}^{M} \alpha_m G_m(x)$  (8)
    $G(x) = \mathrm{sign}(f(x)) = \mathrm{sign}\left(\sum_{m=1}^{M} \alpha_m G_m(x)\right)$  (9)
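For concreteness, the following minimal Python sketch implements the steps above, using a depth-1 decision tree from scikit-learn as the weak classifier (consistent with the Decision-Stump weak learner used in the experiments); the function and variable names are ours, chosen for illustration rather than taken from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, M=10):
    """Train M weak classifiers following Equations (1)-(9); y must be in {-1, +1}."""
    y = np.asarray(y)
    N = len(y)
    w = np.full(N, 1.0 / N)                      # Equation (1): uniform initial weights
    stumps, alphas = [], []
    for m in range(M):
        stump = DecisionTreeClassifier(max_depth=1)   # decision stump G_m, Eq. (2)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        e_m = np.sum(w * (pred != y))            # weighted error rate, Eq. (3)
        e_m = np.clip(e_m, 1e-10, 1 - 1e-10)     # guard against division by zero
        alpha_m = 0.5 * np.log((1 - e_m) / e_m)  # classifier coefficient, Eq. (4)
        w = w * np.exp(-alpha_m * y * pred)      # re-weight samples, Eq. (6)
        w /= w.sum()                             # normalization factor Z_m, Eq. (7)
        stumps.append(stump)
        alphas.append(alpha_m)
    return stumps, np.array(alphas)

def adaboost_predict(stumps, alphas, X):
    """Final classifier G(x) = sign(sum_m alpha_m * G_m(x)), Equations (8)-(9)."""
    f = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(f)
```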
The advantages of the AdaBoost algorithm are summarized as follows. (1) It can use various weak classifiers without filtering features, delivers high execution efficiency, and can avoid overfitting issues. (2) It trains the weak classifiers without requiring prior knowledge; the resulting strong classifier can significantly improve classification accuracy, and it is suitable for most types of data. (3) Training rough weak classifiers is much easier than training an accurate strong classifier; AdaBoost trains multiple weak classifiers to form a strong classifier with better classification performance.

2.2. PSO

PSO was proposed by James Kennedy and Russ Eberhart in 1995 [20]. The algorithm is derived from the study of the predation behavior of birds and is an iterative method. Imagine a scene in which there is a piece of food somewhere in an area and a group of randomly distributed birds are searching for it. Each bird knows its distance from the food but not its exact location. The best way to find the food is for each bird to adjust its flight path based on the position of the bird currently closest to the food and on its own flight experience.
The PSO algorithm considers each solution as a bird, called a particle. Each particle has a fitness value that represents the quality of its current solution. In each iteration, each particle adjusts its direction and velocity based on the global optimal solution and the best solution it has found itself, gradually approaching the optimal particle.
The basic principle of the standard particle swarm algorithm is as follows [21].
Suppose that there are m particles searching for the optimal solution in an N-dimensional target space. The position and velocity of each particle are randomly initialized following Equations (10)–(12), where the vector $U_i$ represents the position of particle i and the vector $V_i$ represents the flight velocity of particle i.
$U_i = (u_{i1}, u_{i2}, \ldots, u_{iN})$  (10)
$V_i = (v_{i1}, v_{i2}, \ldots, v_{iN})$  (11)
$i = 1, 2, \ldots, m$  (12)
As Equation (13) shows, the current best position $P_i$ found by particle i is:
$P_i = (p_{i1}, p_{i2}, \ldots, p_{iN})$  (13)
As Equation (14) shows, the current best position $P_{gbest}$ found by all particles is:
$P_{gbest} = (p_{g1}, p_{g2}, \ldots, p_{gN})$  (14)
The position and velocity of particle i are then updated following Equations (15) and (16).
$v_{in}^{k+1} = \omega v_{in}^{k} + c_1 \cdot \mathrm{rand}() \cdot (p_{in} - u_{in}^{k}) + c_2 \cdot \mathrm{rand}() \cdot (p_{gn} - u_{in}^{k})$  (15)
$u_{in}^{k+1} = u_{in}^{k} + v_{in}^{k+1}$  (16)
where $\omega$ is the inertia weight, $c_1$ and $c_2$, two positive constants, are the acceleration factors, $v_{in}^{k+1}$ is the n-th dimensional velocity component of the i-th particle at the (k+1)-th iteration, and $u_{in}^{k+1}$ is the corresponding position component. The update formula consists of three parts. The first is the inertia part, which indicates the particle's degree of trust in its own velocity. The second is the self-cognitive part, which indicates the particle's degree of trust in its own experience. The third is the social-cognitive part, which indicates the particle's degree of trust in the best-adapted particle [22].
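As an illustration of Equations (15) and (16), the short sketch below performs one standard PSO update for all particles; the default values ω = 0.7 and c1 = c2 = 2.0 are common choices we assume for illustration, not values prescribed in this paper.

```python
import numpy as np

def pso_step(U, V, P, P_gbest, omega=0.7, c1=2.0, c2=2.0):
    """One velocity/position update for all particles, Equations (15)-(16).
    U, V: (m, N) positions and velocities; P: (m, N) personal best positions;
    P_gbest: (N,) global best position."""
    r1 = np.random.rand(*U.shape)
    r2 = np.random.rand(*U.shape)
    V = omega * V + c1 * r1 * (P - U) + c2 * r2 * (P_gbest - U)   # Eq. (15)
    U = U + V                                                     # Eq. (16)
    return U, V
```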
The characteristics of the PSO algorithm can be summarized as follows [23]:
  • It is possible to quickly approximate the optimal solution and achieve effective optimization of parameters.
  • It is suitable for searching within the scope of continuity and solving the maximum and minimum problems of continuous functions.
  • It is easy to implement with low complexity and requires a small number of parameters.
  • It is easy to fall into local optimum.

3. The Proposed Approach

3.1. Area Under Curve (AUC)

The confusion matrix is a common way to present the performance of a classification model. Taking a two-class model as an example, its confusion matrix is shown in Table 1.
Based on the confusion matrix, the Accuracy, Precision, Recall and F1-Measure are defined as follows:
$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$  (17)
$\mathrm{Precision} = \frac{TP}{TP + FP}$  (18)
$\mathrm{Recall} = \frac{TP}{TP + FN}$  (19)
$F1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} = \frac{2TP}{2TP + FP + FN}$  (20)
where TP is the number of true positives (positive-class samples correctly classified), FN is the number of false negatives (positive-class samples classified as negative), TN is the number of true negatives (negative-class samples correctly classified), and FP is the number of false positives (negative-class samples classified as positive).
The TP, FP, TN, and FN measures can be collected to construct the Receiver Operating Characteristic (ROC) curve, which plots the true positive rate (TPR) on the ordinate against the false positive rate (FPR) on the abscissa. TPR and FPR are calculated as shown in Equation (21).
$\mathrm{TPR} = \frac{TP}{TP + FN}, \quad \mathrm{FPR} = \frac{FP}{TN + FP}$  (21)
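For reference, the small helper below restates Equations (17)–(21) as code, computing the six measures directly from the four confusion-matrix counts; the function name and return format are our own.

```python
def confusion_metrics(TP, FP, TN, FN):
    """Accuracy, Precision, Recall, F1, TPR and FPR from the confusion matrix, Eqs. (17)-(21)."""
    accuracy  = (TP + TN) / (TP + FP + TN + FN)
    precision = TP / (TP + FP) if (TP + FP) else 0.0
    recall    = TP / (TP + FN) if (TP + FN) else 0.0   # recall is identical to TPR
    f1        = 2 * TP / (2 * TP + FP + FN) if TP else 0.0
    fpr       = FP / (TN + FP) if (TN + FP) else 0.0
    return {"Accuracy": accuracy, "Precision": precision, "Recall": recall,
            "F1": f1, "TPR": recall, "FPR": fpr}
```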
The value of AUC is the area under the ROC curve. Suppose $1-s$ and $r$ are the probabilities of FP and TP, respectively. The AUC is estimated by Equation (22), where $\Delta(1-s) = (1-s)_{\gamma} - (1-s)_{\gamma-1}$, $\Delta r = r_{\gamma} - r_{\gamma-1}$, and $\gamma$ is an index over the points of the curve.
$\mathrm{AUC} = \sum_{\gamma} \left\{ \left[ r_{\gamma} \cdot \Delta(1-s) \right] + \frac{1}{2} \left[ \Delta r \cdot \Delta(1-s) \right] \right\}$  (22)
The AUC is a comprehensive evaluation measure for classification models and provides more useful information than accuracy alone.
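Equation (22) is a trapezoidal estimate of the area under the ROC curve. A minimal sketch is given below, assuming the classifier produces a continuous score for each sample; in practice an off-the-shelf routine such as scikit-learn's roc_auc_score could be used instead.

```python
import numpy as np

def auc_trapezoid(y_true, scores):
    """Trapezoidal estimate of the area under the ROC curve, Equation (22).
    y_true: labels in {-1, +1}; scores: continuous classifier outputs."""
    order = np.argsort(-np.asarray(scores))          # sort samples by decreasing score
    pos = (np.asarray(y_true)[order] == 1)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    tpr = np.concatenate(([0.0], np.cumsum(pos) / n_pos))     # r values along the curve
    fpr = np.concatenate(([0.0], np.cumsum(~pos) / n_neg))    # (1 - s) values along the curve
    d_fpr, d_tpr = np.diff(fpr), np.diff(tpr)
    # trapezoid area for each step: r * delta(1-s) + 0.5 * delta r * delta(1-s)
    return float(np.sum(tpr[:-1] * d_fpr + 0.5 * d_tpr * d_fpr))
```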

3.2. The AdaBoost-A Algorithm

Although the AdaBoost algorithm can be applied directly to imbalanced data, the ensemble pays more attention to the misclassified samples than to the samples of the minority class. According to the error calculation formula of the AdaBoost weak classifier, the error depends only on the weights and the number of misclassified samples; there is no additional treatment of misclassified minority-class samples, so the AdaBoost ensemble algorithm is not well suited to imbalanced data [24]. To address this, we propose an improved AdaBoost algorithm (AdaBoost-A) that introduces the AUC [25] into the error calculation. At the algorithm level, the error rate alone cannot properly reflect the performance of a classifier. For example, suppose there are 90 samples in class A and 10 samples in class B; if a classifier assigns all test samples to class A, its error rate is only 10%, yet the classifier is clearly useless. As the area under the ROC curve, the AUC can effectively reflect the comprehensive performance of the classifier: if the classifier is biased towards the majority class, its AUC will be very small and 1-AUC will be very large. Combining the classification error rate with 1-AUC as a product therefore effectively improves the classification accuracy of AdaBoost. The improved error calculation is shown in Equation (23).
$e_m = 2(1 - \mathrm{AUC}) \cdot P(G_m(x_i) \neq y_i) = 2(1 - \mathrm{AUC}) \cdot \sum_{i=1}^{N} w_{mi} I(G_m(x_i) \neq y_i)$  (23)
where $e_m$ represents the error rate of the m-th weak classifier, $G_m(x)$ is the m-th weak classifier, $y_i$ represents the actual label of the i-th sample, and $w_{mi}$ represents the weight of the i-th sample in the m-th iteration.
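A minimal sketch of AdaBoost-A under these definitions is given below: it follows the standard loop of Section 2.1 but scales the weighted error by 2(1 − AUC) of the current weak classifier, as in Equation (23), before computing the coefficient. It reuses the auc_trapezoid helper sketched in Section 3.1; using the stump's predicted probability of the positive class as the score for the AUC is our assumption for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_a_train(X, y, M=10):
    """AdaBoost-A: the standard boosting loop with the error of Equation (23)."""
    y = np.asarray(y)
    N = len(y)
    w = np.full(N, 1.0 / N)
    stumps, alphas = [], []
    for m in range(M):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        scores = stump.predict_proba(X)[:, 1]          # score of the positive class (assumption)
        auc = auc_trapezoid(y, scores)                 # AUC of the m-th weak classifier
        e_m = 2 * (1 - auc) * np.sum(w * (pred != y))  # Equation (23)
        e_m = np.clip(e_m, 1e-10, 1 - 1e-10)
        alpha_m = 0.5 * np.log((1 - e_m) / e_m)        # coefficient, as in Eq. (4)
        w = w * np.exp(-alpha_m * y * pred)            # re-weighting, Eqs. (5)-(7)
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha_m)
    return stumps, np.array(alphas)
```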

3.3. The PSOPD-AdaBoost-A Ensemble Algorithm

Although the AdaBoost algorithm combines multiple weak classifiers into one strong classifier, the coefficients of the weak classifiers are fixed during the iteration process and cannot be changed afterwards, so redundant or useless weak classifiers with large weights are inevitably generated. This significantly affects the readability of the classifier and increases the system overhead. To overcome these shortcomings, our approach uses the PSO algorithm to optimize the weights of the weak classifiers of AdaBoost-A: it assigns large weights to the weak classifiers with high accuracy and small weights to the redundant or useless ones, further improving the accuracy and readability of the AdaBoost classifier.
PSO is an optimization algorithm with few parameters and fast convergence, but it easily falls into local optima [26]. Therefore, this paper proposes an ensemble algorithm in which an improved PSO based on population diversity optimizes AdaBoost-A (PSOPD-AdaBoost-A). It further optimizes the coefficient weights of the AdaBoost-A weak classifiers by re-initializing the swarm when it falls into a local optimum. The proposed improvements use the error function of AdaBoost-A as the fitness function and adopt the standard PSO algorithm to optimize the weights of the weak classifiers. If the optimal particle does not change for ten consecutive iterations, the optimal particle is retained and the positions and velocities of the other particles are re-initialized; the iteration continues until the configured number of iterations is reached. When the optimal particle does not change over multiple iterations, the search has likely fallen into a local optimum. Re-initialization enlarges the search range of the particles and enhances population diversity, while retaining the optimal particle avoids losing the best solution found by the population.
The PSOPD-AdaBoost-A ensemble algorithm is described as follows:
  • Use the AdaBoost-A algorithm to generate several (M) weak classifiers, whose coefficients are expressed following Equation (24).
    $A = (a_1, a_2, \ldots, a_M), \quad k = 1, 2, \ldots, M$  (24)
    where $a_k$ represents the weight coefficient of the k-th weak classifier.
  • Set the population size to m and randomly initialize the position and velocity of each particle following Equations (25)–(27).
    $U_i = (u_{i1}, u_{i2}, \ldots, u_{iM})$  (25)
    $V_i = (v_{i1}, v_{i2}, \ldots, v_{iM})$  (26)
    $i = 1, 2, \ldots, m$  (27)
  • Use the position components of each particle as the weight coefficients of the AdaBoost-A weak classifiers. As Equation (28) shows, the error rate $e_i$ of AdaBoost-A is calculated as the fitness value of each particle.
    $e_i = 2(1 - \mathrm{AUC}) \sum_{s=1}^{Q} I\left(\mathrm{sign}\left(\sum_{k=1}^{M} u_{ik} G_k(x_s)\right) \neq y_s\right)$  (28)
    where Q represents the number of samples, $e_i$ represents the error rate of the i-th particle, and $y_s$ represents the true class label of the s-th sample.
  • For each particle, the fitness value generated in each iteration is compared with the fitness value of the best position the particle has visited. If it is better (i.e., the error is lower), the current position is taken as the best position visited by the particle, recorded as $P_i$.
  • For each particle, the fitness value generated in each iteration is compared with the fitness value of the best position visited by all particles. If it is better, the current position is taken as the global best position, recorded as $P_{gbest}$.
  • Update the position and velocity of each particle in the following iteration based on Equations (15) and (16).
  • When the maximum number of iterations is reached or the error is small enough, the iteration stops. Otherwise, check the number of consecutive iterations for which the optimal particle has remained unchanged. If it reaches the threshold (10 in our configuration), the optimal particle is retained and the positions and velocities of the other particles are re-initialized; if it is less than the threshold, no action is taken. Then continue executing steps 4–6.
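The following sketch, under our own naming and illustrative default settings, expresses steps 2–7: particles encode candidate coefficient vectors, fitness follows Equation (28) (with the AUC taken from the continuous ensemble output, which is our reading of the formula), and when the global best stalls for ten consecutive iterations it is retained while the remaining particles are re-initialized. It reuses the pso_step and auc_trapezoid helpers sketched earlier.

```python
import numpy as np

def psopd_optimize(G_pred, y, iters=200, m=30, stall_limit=10):
    """Optimize the weak-classifier coefficients of AdaBoost-A with PSO and
    re-initialization on stagnation (PSOPD). G_pred is a (Q, M) matrix whose
    columns hold the +-1 predictions of the M weak classifiers on Q samples."""
    y = np.asarray(y)
    Q, M = G_pred.shape
    U = np.random.rand(m, M)                    # particle positions = coefficient vectors
    V = np.zeros((m, M))

    def fitness(u):                             # Equation (28) for one particle
        score = G_pred @ u                      # continuous ensemble output
        auc = auc_trapezoid(y, score)           # AUC of the combined classifier (assumption)
        return 2.0 * (1.0 - auc) * np.sum(np.sign(score) != y)

    fit = np.array([fitness(u) for u in U])
    P, P_fit = U.copy(), fit.copy()             # personal best positions and errors
    g = int(np.argmin(P_fit))
    gbest, gbest_fit, stall = P[g].copy(), P_fit[g], 0

    for _ in range(iters):
        U, V = pso_step(U, V, P, gbest)         # Equations (15)-(16)
        fit = np.array([fitness(u) for u in U])
        better = fit < P_fit                    # lower error is better
        P[better], P_fit[better] = U[better], fit[better]
        g = int(np.argmin(P_fit))
        if P_fit[g] < gbest_fit:
            gbest, gbest_fit, stall = P[g].copy(), P_fit[g], 0
        else:
            stall += 1
        if stall >= stall_limit:                # stagnation: keep gbest, re-initialize the rest
            U = np.random.rand(m, M)
            V = np.zeros((m, M))
            U[0] = gbest.copy()
            stall = 0
    return gbest                                # optimized coefficient vector
```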

4. Evaluation

4.1. Test Data

We evaluate the proposed algorithm using the Vehicle, Horse Colic, Ionosphere, and Statlog imbalanced datasets from the UCI repository and the KC1, JM1, PC3, PC5, and CM1 imbalanced datasets from NASA. The weak classifiers are generated by Decision-Stump. Table 2 lists the details of the nine imbalanced datasets used in the evaluation. The label bad in Ionosphere is treated as the minority class and the label good as the majority class. The label 1 in Statlog is treated as the minority class and the other labels as the majority class. The label van in Vehicle is treated as the minority class and the labels saab, bus, and opel as the majority class.

4.2. Analysis of the AdaBoost-A Algorithm

The Vehicle dataset is split into training and test sets at a ratio of 7:3. The standard AdaBoost algorithm is used to classify the samples in the training set. As the number of weak classifiers increases, the growth trend of AUC is shown in Figure 1. When the number of weak classifiers reaches 10, the increase of the evaluation index AUC significantly slows down, indicating that increasing the number of weak classifiers hardly improves the AUC. Therefore, the number of weak classifiers in the experiments is set to 10. Figure 2 shows the comparison of accuracy, precision, recall, and F1 value of the standard AdaBoost algorithm and the AdaBoost-A algorithm on the Vehicle test set. Results show that the AdaBoost-A algorithm achieves 92.9% accuracy, 84.8% precision, 83% recall, and 83.8% F1 value, and the standard AdaBoost algorithm achieves 91.0% accuracy, 83.4% precision, 79.5% recall, and 81.4% F1 value. The proposed algorithm not only improves the overall accuracy, but also reduces the error of minority class classification.
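For reproducibility, the evaluation described here can be expressed compactly as below, reusing the training and metric sketches from earlier sections; the 7:3 split and ten weak classifiers follow the text, while stratified splitting and the variable names are our assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X, y hold the Vehicle features and {-1, +1} labels (van taken as the minority class +1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

for name, trainer in [("AdaBoost", adaboost_train), ("AdaBoost-A", adaboost_a_train)]:
    stumps, alphas = trainer(X_train, y_train, M=10)        # 10 weak classifiers
    pred = adaboost_predict(stumps, alphas, X_test)
    TP = int(np.sum((pred == 1) & (y_test == 1)))
    FP = int(np.sum((pred == 1) & (y_test == -1)))
    TN = int(np.sum((pred == -1) & (y_test == -1)))
    FN = int(np.sum((pred == -1) & (y_test == 1)))
    print(name, confusion_metrics(TP, FP, TN, FN))
```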
To eliminate the impact of data division and guarantee valid results, 10-fold cross-validation (CV) is employed to evaluate the classification performance. The detailed comparison results for the AdaBoost-A and AdaBoost algorithms on the Vehicle dataset in terms of error and AUC are shown as box plots in Figure 3 and Figure 4, respectively. Figure 3 shows that the maximum, minimum, and average error of the AdaBoost-A algorithm are lower than those of the AdaBoost algorithm. Figure 4 shows that the maximum, minimum, and average AUC of the AdaBoost-A algorithm are higher than those of the AdaBoost algorithm.
The KC1 dataset is split into training and test sets at a ratio of 7:3. The standard AdaBoost algorithm is used to classify the samples in the training set. As the number of weak classifiers increases, the growth trend of AUC is shown in Figure 5. When the number of weak classifiers reaches 10, the increase of the AUC slows down significantly, so the number of weak classifiers in this experiment is set to 10. Figure 6 compares the accuracy, precision, recall, and F1 value of the standard AdaBoost algorithm and the AdaBoost-A algorithm on the KC1 test set. The AdaBoost-A algorithm achieves 76.2% accuracy, 45.8% precision, 30.2% recall, and 35.3% F1 value, while the standard AdaBoost algorithm achieves 74.9% accuracy, 58.2% precision, 17.2% recall, and 26% F1 value.
The detailed 10-fold CV comparison results for the AdaBoost-A and AdaBoost algorithms on the KC1 dataset in terms of error and AUC are shown as box plots in Figure 7 and Figure 8, respectively. Figure 7 shows that the maximum, minimum, and average error of the AdaBoost-A algorithm are lower than those of the AdaBoost algorithm. Figure 8 shows that the maximum, minimum, and average AUC of the AdaBoost-A algorithm are higher than those of the AdaBoost algorithm.
These experiments demonstrate that the proposed AdaBoost-A algorithm is more effective than the AdaBoost algorithm.

4.3. Analysis of the PSOPD-AdaBoost-A Ensemble Algorithm

The coefficients of the AdaBoost-A weak classifiers are optimized by the improved PSO based on population diversity and by the standard PSO on the five imbalanced datasets, respectively, and their classification performance is compared using 5-fold CV. The detailed classification results of the AdaBoost, PSO-AdaBoost-A, and PSOPD-AdaBoost-A algorithms, averaged over 100 runs, are shown in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13.
As shown in Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, the classification performance of the PSO-AdaBoost-A and PSOPD-AdaBoost-A ensemble algorithms is much higher than that of the AdaBoost algorithm, which illustrates that optimizing the weight coefficients of the AdaBoost weak classifiers can significantly improve classifier performance. The PSOPD-AdaBoost-A algorithm achieves 80.4% accuracy, 63.2% precision, 84.1% recall, and 72.1% F1 value on the Horse Colic dataset, higher than the PSO-AdaBoost-A classifier. It achieves 92.0% accuracy, 80.2% precision, 65.8% recall, and 72.2% F1 value on the Ionosphere dataset, higher than the PSO-AdaBoost-A classifier. It achieves 82.3% accuracy, 84.2% precision, 99.0% recall, and 91.0% F1 value on the JM1 dataset, higher than the PSO-AdaBoost-A classifier. It achieves 77.5% accuracy, 50.6% precision, 35.3% recall, and 41.6% F1 value on the KC1 dataset, higher than the PSO-AdaBoost-A classifier in terms of accuracy, recall, and F1 value. It achieves 98.9% accuracy, 99.5% precision, 99.7% recall, and 99.3% F1 value on the Statlog dataset, higher than the PSO-AdaBoost-A classifier in terms of precision, recall, and F1 value. These experimental results show that the improved PSO algorithm based on population diversity can effectively avoid falling into local optima and achieve higher classification accuracy, and they demonstrate that the PSOPD-AdaBoost-A algorithm is effective for processing imbalanced data.

4.4. Comparison of PSOPD-AdaBoost-A with Other Improved Algorithms

To address the imbalance problem, researchers have proposed many improvements to ensemble algorithms, but most of the improved methods remain sensitive to relatively high imbalance ratios. We therefore compare the classification performance of our PSOPD-AdaBoost-A approach with boosting algorithms including G-AdaBoost based on a genetic algorithm [17], B-AdaBoost based on label modification, and D-AdaBoost based on weight limitation [18], bagging algorithms including Random Forest and Extra Trees, and a sampling method, Smote-based AdaBoost, by performing 5-fold CV on the Vehicle, PC3, PC5, and CM1 datasets. For a fair comparison, the number of weak classifiers is set to 10 for all algorithms, and the weak classifiers are generated by Decision-Stump. Results show that the PSOPD-AdaBoost-A ensemble algorithm is effective on datasets with relatively high imbalance ratios.
The mean Accuracy, Precision, Recall, F1, AUC, and Error on the four datasets are summarized in Table 3, Table 4, Table 5 and Table 6, respectively, with the largest value of each performance measure indicating the best algorithm. To further verify the effectiveness of the PSOPD-AdaBoost-A ensemble algorithm for processing imbalanced data, the AUC values of each run are shown as box plots in Figure 14, Figure 15, Figure 16 and Figure 17.
Table 3 shows that the PSOPD-AdaBoost-A method achieves the highest performance among the seven compared algorithms in terms of accuracy, F1 value, and AUC on the Vehicle dataset; its precision is slightly lower than that of the G-AdaBoost algorithm, and its recall is slightly lower than that of the Smote method. Figure 14 shows that the maximum, minimum, and average AUC of the PSOPD-AdaBoost-A algorithm are the highest among the seven algorithms, demonstrating its effectiveness in classifying the Vehicle dataset.
Table 4 shows that the PSOPD-AdaBoost-A method achieves the highest performance among the seven compared algorithms in terms of accuracy, precision, F1 value, and AUC on the PC3 dataset, and its recall is lower than that of the Smote method. Figure 15 shows that the maximum, minimum, and average AUC of the PSOPD-AdaBoost-A algorithm are the highest among the seven algorithms, demonstrating its effectiveness in classifying the PC3 dataset.
Table 5 shows that the PSOPD-AdaBoost-A method achieves the highest performance among the seven compared algorithms in terms of precision, F1 value, and AUC on the PC5 dataset; its accuracy is slightly lower than that of the Extra Trees algorithm, and its recall is slightly lower than that of the Smote method. Figure 16 shows that the maximum, minimum, and average AUC of the PSOPD-AdaBoost-A algorithm are the highest among the seven algorithms, demonstrating its effectiveness in classifying the PC5 dataset.
Table 6 shows that the PSOPD-AdaBoost-A method achieves the highest performance among the seven compared algorithms in terms of accuracy, precision, F1 value, and AUC on the CM1 dataset, and its recall is lower than that of the Smote method. Figure 17 shows that the maximum, minimum, and average AUC of the PSOPD-AdaBoost-A algorithm are the highest among the seven algorithms, demonstrating its effectiveness in classifying the CM1 dataset.
These comparative experiments demonstrate that the PSOPD-AdaBoost-A ensemble algorithm is more effective for processing imbalanced data than the other improved algorithms.

5. Conclusions

The traditional AdaBoost algorithm focuses on the misclassified samples rather than on the samples of the minority class. In this paper, we propose an improved AdaBoost algorithm (AdaBoost-A). Since the AUC can effectively reflect the performance of a classifier, we introduce the AUC into the error calculation, making AdaBoost focus more on the classification accuracy of the minority class. Furthermore, the AdaBoost algorithm may generate redundant or useless weak classifiers, which significantly affect the readability of the classifier; we therefore propose an ensemble algorithm, PSOPD-AdaBoost-A, that further optimizes the weights of the weak classifiers. Experimental results show that the AdaBoost-A and PSOPD-AdaBoost-A ensemble algorithms can effectively classify the imbalanced datasets Vehicle, KC1, Horse Colic, Ionosphere, JM1, and Statlog. We also compare the imbalanced-data classification performance of PSOPD-AdaBoost-A with ensemble algorithms including G-AdaBoost, B-AdaBoost, D-AdaBoost, Random Forest, and Extra Trees and with the sampling method Smote on four datasets with relatively high imbalance ratios: Vehicle, PC3, PC5, and CM1. The results show that the PSOPD-AdaBoost-A ensemble algorithm is effective for processing data with a relatively high imbalance ratio compared to the other improved algorithms. Our future work is dedicated to applying the proposed algorithm to the field of sensors, accurately classifying targets by processing imbalanced data acquired by sensors.

Author Contributions

K.L. and G.Z. proposed the ensemble algorithm and conceived and designed the experiments; G.Z. performed the experiments; J.Z., F.L. and M.S. analyzed the data; G.Z. wrote the paper; K.L. and J.Z. contributed to the definition of important intellectual content and to manuscript revision; K.L. approved the final version of the manuscript.

Funding

This work was supported by grants from the National Natural Science Foundation of China (No. 61673396) and the Natural Science Foundation of Shandong Province, China (No. ZR2017MF032).

Acknowledgments

The authors are very indebted to the anonymous referees for their critical comments and suggestions for the improvement of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weiss, G. Mining with rarity: A unifying framework. SIGKDD Explor. 2004, 6, 7–19. [Google Scholar] [CrossRef]
  2. Prachuabsupakij, W. CLUS: A new hybrid sampling classification for imbalanced data. In Proceedings of the 12th International Joint Conference on Computer Science and Software Engineering (JCSSE), Hat Yai, Thailand, 22–24 July 2015; pp. 281–286. [Google Scholar]
  3. Maloof, M.A.; Langley, P.; Binford, T.O. Improved rooftop detection in aerial images with machine learning. Mach. Learn. 2003, 53, 157–191. [Google Scholar] [CrossRef]
  4. Huang, K.Z.; Yang, H.Q.; King, I. Learning classifiers from imbalanced data based on biased minimax probability machine. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; pp. 558–563. [Google Scholar]
  5. Viola, P.; Jones, M. Fast and robust classification using asymmetric AdaBoost and a detector cascade. Adv. Neural Inf. Process. Syst. 2002, 14, 1311–1318. [Google Scholar]
  6. Li, Y.; Wang, S.; Tian, Q. A Boosting Approach to Exploit Instance Correlations for Multi-Instance Classification. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 1–8. [Google Scholar] [CrossRef] [PubMed]
  7. Chawla, N.V.; Lazarevic, A.; Hall, L.O. SMOTEBoost: Improving prediction of the minority class in boosting. In Proceedings of the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases, Cavtat-Dubrovnik, Croatia, 22–26 September 2003; pp. 107–109. [Google Scholar]
  8. Joshi, M.; Kumar, V.; Agarwal, R. Evaluating boosting algorithms to classify rare classes: Comparison and improvements. In Proceedings of the IEEE International Conference on Data Mining, San Jose, CA, USA, 29 November–2 December 2001; pp. 257–264. [Google Scholar]
  9. Sun, B.; Chen, H.; Wang, J. Evolutionary under-sampling based bagging ensemble method for imbalanced data classification. Front. Comput. Sci. 2017, 12, 331–350. [Google Scholar] [CrossRef]
  10. Chung, D.; Kim, H. Accurate ensemble pruning with PL-bagging. Comput. Stat. Data Anal. 2015, 83, 1–13. [Google Scholar] [CrossRef]
  11. Hsu, K.W.; Srivastava, J. Improving bagging performance through multi-algorithm ensembles. Front. Comput. Sci. 2012, 6, 498–512. [Google Scholar]
  12. Li, Y.; Guo, H.; Li, Y. A boosting based ensemble learning algorithm in imbalanced data classification. Syst. Eng. Theory Pract. 2016, 36, 189–199. [Google Scholar]
  13. Cao, Y.; Miao, Q.; Liu, J. Advance and Prospects of AdaBoost Algorithm. Acta Autom. Sin. 2013, 39, 745–758. [Google Scholar] [CrossRef]
  14. Yang, X.; Ma, Z.; Yuan, S. Multi-class Adaboost Algorithm Based on the Adjusted Weak Classifier. J. Electron. Inf. Technol. 2016, 38, 373–380. [Google Scholar]
  15. Li, L.; Wang, C.; Li, W. Hyperspectral Image Classification by AdaBoost Weighted Composite Kernel Extreme Learning Machines. Neurocomputing 2018, 275, 1725–1733. [Google Scholar] [CrossRef]
  16. Dou, P.; Chen, Y. Remote sensing imagery classification using AdaBoost with a weight vector (WV AdaBoost). Remote Sens. Lett. 2017, 8, 733–742. [Google Scholar] [CrossRef]
  17. Li, K.; Xie, P.; Liu, W. An Ensemble Evolve Algorithm for Imbalanced Data. J. Comput. Theor. Nanosci. 2017, 14, 4624–4629. [Google Scholar] [CrossRef]
  18. Guo, Q.-J.; Li, L.; Li, N. Novel modified AdaBoost algorithm for imbalanced data classification. Comput. Eng. Appl. 2008, 44, 217–221. [Google Scholar]
  19. Zhang, C.; Chen, Y. Improved Piecewise Nonlinear Combinatorial Adaboost Algorithm Based on Noise Self-detection. Comput. Eng. 2017, 43, 163–168. [Google Scholar]
  20. Bratton, D.; Kennedy, J. Defining a Standard for Particle Swarm Optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007. [Google Scholar]
  21. Yang, X.; Yuan, J.; Yuan, J.; Mao, H. A modified particle swarm optimizer with dynamic adaptation. Appl. Math. Comput. 2007, 189, 1205–1213. [Google Scholar] [CrossRef]
  22. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60. [Google Scholar] [CrossRef]
  23. Yu, J.; Zhou, X.; Chen, M. Research on representative algorithms of swarm intelligence. Comput. Eng. Appl. 2010, 46, 1–4. [Google Scholar]
  24. Gu, Y.; Cheng, L. Classification of unbalanced data based on MTS-AdaBoost. Appl. Res. Comput. 2018, 35, 346–348. (In Chinese) [Google Scholar]
  25. Calders, T.; Jaroszewicz, S. Efficient AUC Optimization for Classification. In Proceedings of the 18th European Conference on Machine Learning, Warsaw, Poland, 17–21 September 2007. [Google Scholar]
  26. Ren, K.-Q.; Gao, X.-L.; Xie, B. AdaBoost Face Detection Algorithm Based on Fusion Optimization of AFSA and PSO. J. Chin. Comput. Syst. 2016, 37, 861–865. (In Chinese) [Google Scholar]
Figure 1. The AUC of AdaBoost Algorithm on Vehicle Training Set.
Figure 2. Performance Comparison on Vehicle Test Set.
Figure 3. The Error Comparison of AdaBoost and AdaBoost-A on Vehicle Dataset.
Figure 4. The AUC Comparison of AdaBoost and AdaBoost-A on Vehicle Dataset.
Figure 5. The AUC of AdaBoost Algorithm on KC1 Training Set.
Figure 6. Performance Comparison on KC1 Test Set.
Figure 7. Error Comparison of AdaBoost and AdaBoost-A on KC1 Dataset.
Figure 8. The AUC Comparison of AdaBoost and AdaBoost-A on KC1 Dataset.
Figure 9. Performance Comparison of the AdaBoost, PSO-AdaBoost-A, and PSOPD-AdaBoost-A on Horse Colic Dataset.
Figure 10. Performance Comparison of the AdaBoost, PSO-AdaBoost-A, and PSOPD-AdaBoost-A on Ionosphere Dataset.
Figure 11. Performance Comparison of the AdaBoost, PSO-AdaBoost-A, and PSOPD-AdaBoost-A on JM1 Dataset.
Figure 12. Performance Comparison of the AdaBoost, PSO-AdaBoost-A, and PSOPD-AdaBoost-A on KC1 Dataset.
Figure 13. Performance Comparison of the AdaBoost, PSO-AdaBoost-A, and PSOPD-AdaBoost-A on Statlog Dataset.
Figure 14. Comparison of AUC on the Vehicle Dataset.
Figure 15. Comparison of AUC on the PC3 Dataset.
Figure 16. Comparison of AUC on the PC5 Dataset.
Figure 17. Comparison of AUC on the CM1 Dataset.
Table 1. Confusion matrix.
                     Predicted Positive    Predicted Negative
Actual Positive      TP                    FN
Actual Negative      FP                    TN
Table 2. Details of the Nine Imbalanced Datasets.
Dataset        Samples    Majority Class    Minority Class    Imbalance Ratio (IR)
Vehicle        846        647               199               3.25:1
KC1            1497       1183              314               3.76:1
Horse Colic    368        227               141               1.61:1
Ionosphere     351        225               126               1.79:1
JM1            10,878     8776              2102              4.17:1
Statlog        2310       1980              330               6:1
PC3            1077       943               134               7.04:1
PC5            1711       1240              471               2.63:1
CM1            505        457               45                10.2:1
Table 3. Comparison of Classification Results on the Vehicle Dataset.
Algorithm            Accuracy    Precision   Recall      F1          AUC         Error
PSOPD-AdaBoost-A     0.925000    0.809345    0.902400    0.851406    0.917187    0.074999
G-AdaBoost           0.923584    0.861940    0.811999    0.833173    0.885012    0.076415
D-AdaBoost           0.924529    0.857178    0.824000    0.836553    0.889777    0.075471
B-AdaBoost           0.914150    0.781936    0.892000    0.831131    0.906493    0.085849
Random Forest        0.911886    0.841605    0.806001    0.823128    0.872567    0.088114
Extra Trees          0.920377    0.831528    0.84800     0.838903    0.896098    0.079633
Smote                0.897169    0.708473    0.960000    0.814594    0.898271    0.102831
Table 4. Comparison of Classification Results on the PC3 Dataset.
Algorithm            Accuracy    Precision   Recall      F1          AUC         Error
PSOPD-AdaBoost-A     0.859704    0.414426    0.248235    0.310944    0.593736    0.140296
G-AdaBoost           0.856293    0.142857    0.047058    0.091314    0.509970    0.143707
D-AdaBoost           0.859293    0.357936    0.111764    0.165239    0.539780    0.140707
B-AdaBoost           0.854075    0.267125    0.094115    0.135947    0.528838    0.145925
Random Forest        0.854074    0.207045    0.113529    0.136262    0.524223    0.145936
Extra Trees          0.854322    0.242409    0.125294    0.164234    0.532223    0.145677
Smote                0.737777    0.208130    0.506405    0.294673    0.572923    0.262223
Table 5. Comparison of Classification Results on the PC5 Dataset.
Algorithm            Accuracy    Precision   Recall      F1          AUC         Error
PSOPD-AdaBoost-A     0.737662    0.581946    0.455764    0.511665    0.647268    0.262336
G-AdaBoost           0.744060    0.575478    0.238983    0.339432    0.591601    0.255940
D-AdaBoost           0.744392    0.577383    0.249152    0.3460050   0.591027    0.255607
B-AdaBoost           0.739719    0.560215    0.257627    0.3486072   0.590426    0.260280
Random Forest        0.747196    0.545823    0.403875    0.463364    0.612219    0.252904
Extra Trees          0.749532    0.552212    0.403389    0.466045    0.613078    0.250468
Smote                0.650000    0.414715    0.624235    0.498326    0.631312    0.350000
Table 6. Comparison of Classification Results on the CM1 Dataset.
Algorithm            Accuracy    Precision   Recall      F1          AUC         Error
PSOPD-AdaBoost-A     0.896553    0.344151    0.355000    0.349376    0.634760    0.103464
G-AdaBoost           0.865620    0.281204    0.204555    0.236418    0.526376    0.138880
D-AdaBoost           0.867637    0.340035    0.10666     0.161439    0.525507    0.123463
B-AdaBoost           0.850210    0.250256    0.126086    0.167212    0.526376    0.140788
Random Forest        0.894060    0.262445    0.190000    0.220776    0.517173    0.103938
Extra Trees          0.885784    0.343360    0.173333    0.229055    0.506231    0.110236
Smote                0.752755    0.226427    0.666666    0.335852    0.614239    0.247245
