
ECG Heartbeat Classification Using Machine Learning and Metaheuristic Optimization for Smart Healthcare Systems

Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj 16278, Saudi Arabia
Faculty of Computers and Information, Minia University, Minia 61519, Egypt
Faculty of Computers and Information, Luxor University, Luxor 85951, Egypt
Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY 40292, USA
Author to whom correspondence should be addressed.
Bioengineering 2023, 10(4), 429;
Submission received: 26 February 2023 / Revised: 14 March 2023 / Accepted: 15 March 2023 / Published: 28 March 2023


Early diagnosis and classification of arrhythmia from an electrocardiogram (ECG) play a significant role in smart healthcare systems for the health monitoring of individuals with cardiovascular diseases. Unfortunately, the nonlinearity and low amplitude of ECG recordings make the classification process difficult. Thus, the performance of most traditional machine learning (ML) classifiers is questionable, as the interrelationship between the learning parameters is not well modeled, especially for data features with high dimensions. To address the limitations of ML classifiers, this paper introduces an automatic arrhythmia classification approach based on the integration of a recent metaheuristic optimization (MHO) algorithm and ML classifiers. The role of the MHO is to optimize the search parameters of the classifiers. The approach consists of three steps: preprocessing of the ECG signal, feature extraction, and classification. The learning parameters of the four supervised ML classifiers used for the classification task, namely the support vector machine (SVM), k-nearest neighbors (kNN), gradient boosting decision tree (GBDT), and random forest (RF), were optimized using the MHO algorithm. To validate the advantage of the proposed approach, several experiments were conducted on three common databases: the Massachusetts Institute of Technology (MIT-BIH), the European Society of Cardiology ST-T (EDB), and the St. Petersburg Institute of Cardiological Techniques 12-lead Arrhythmia (INCART) databases. The obtained results showed that the performance of all the tested classifiers was significantly improved after integrating the MHO algorithm, with the average ECG arrhythmia classification accuracy reaching 99.92% and a sensitivity of 99.81%, outperforming state-of-the-art methods.

1. Introduction

The recent developments in biomedical sensors, the Internet of Medical Things (IoMT), and artificial intelligence (AI)-based techniques have increased interest in smart healthcare technologies [1,2]. Microelectronics, smart sensors, AI, 5G, and IoMT constitute the cornerstone of smart healthcare [3,4]. A smart healthcare system does not suffer fatigue; hence, it can process big data at a much higher speed than humans with greater accuracy [5]. With smart healthcare systems, the diagnosis and treatment of diseases have become more intelligent. For instance, smart patient monitoring empowers the observation of a patient outside the traditional clinical settings, which offers a lower cost through reducing visits to physician offices and hospitalizations [6].
The human body is a complex electromechanical system generating several types of biomedical signals, such as the electrocardiogram (ECG), which records dynamic changes of the body that need to be monitored by smart healthcare systems. For instance, the EKG sensor measures cardiac electrical potential waveforms. It is used to create standard 3-lead electrocardiogram (EKG) tracings to record the electrical activity of the heart, or to collect surface electromyography (sEMG) to study the contractions of the muscles of the arm, leg, or jaw. Simply put, an ECG graphs heartbeats and rhythms. The classification of ECG heartbeats plays a substantial role in smart healthcare systems [7,8], where the presence of multiple cardiovascular problems is generally indicated by an ECG, in whose waveform diseases cause defects. Early diagnosis via an ECG allows for the selection of suitable cardiac medication and is thus very important and helpful for reducing heart attacks [9]. Detecting and classifying arrhythmia is not an easy task and may be very difficult even for professionals, because it is sometimes necessary to examine many pulses of ECG data, recorded, for example, over hours or even days by a Holter monitor. Furthermore, there is a possibility of human error during the study of the ECG recording due to fatigue. Building a fully automatic arrhythmia detection or classification system is difficult. The difficulty comes from the large amount of data and the diversity of ECG signals due to the nonlinearity, complexity, and low amplitude of ECG recordings, as well as nonclinical conditions, such as noise [10].
Despite all these difficulties, methods for ECG arrhythmia classification have been widely explored [11,12], but choosing the best technique for smart patient monitoring depends on the robustness and performance of these methods. Several convolutional neural network (CNN)-based approaches have been introduced for the task [13,14]. Bollepalli et al. [10] proposed a CNN-based heartbeat detector to learn fused features from multiple ECG signals. It achieved an accuracy of 99.92% on the MIT-BIH database using two ECG channels. In [15], a subject-adaptable ECG arrhythmia classification model was proposed and trained with unlabeled personal data. It achieved an average classification accuracy of 99.4% on the MIT-BIH database. In [16], an end-to-end deep multiscale fusion CNN model of multiple convolution kernels with different receptive fields was proposed, achieving F1 scores of 82.8% and 84.1% on two datasets. Chen et al. [17] combined a CNN with long short-term memory to classify six types of arrhythmia and achieved an average accuracy of 97.15% on the MIT-BIH database. A recent approach by Atal and Singh [18] used bat-rider optimization to optimally tune a deep CNN, achieving an accuracy of 93.19% with a sensitivity of 93.9% on the MIT-BIH database. Unfortunately, most CNN-based methods are effective only for small numbers of arrhythmia classes, are computationally intensive, and need a very large amount of training data [13]. This is a great challenge for using CNN-based methods in real-time applications or on wearable devices with limited hardware [19].
On the other hand, many research efforts have been devoted to ECG arrhythmia classification using ML classifiers, such as the SVM, RF, kNN, linear discriminants, multilayer perceptrons, and regression trees [20,21]. It is well known that the SVM classifier does not become trapped in local minima, requires less training data, and is faster than CNN-based methods [22]. In [23], the wavelet transform and independent component analysis (ICA) were used to describe the morphological features of the segmented heartbeats. The features were fed into an SVM to classify an ECG into five classes. In [24], least-squares twin SVM and kNN classifiers based on the sparse representation of features were used for cardiac arrhythmia recognition. The experiments were carried out on the MIT-BIH database in category and personalized schemes. A method based on improved fuzzy C-means clustering and the Mahalanobis distance was introduced in [25], while in [26], abstract features from the abductive interpretation of ECG signals were utilized for heartbeat classification. Borui et al. [27] proposed a deep learning model integrating long short-term memory with an SVM for ECG arrhythmia classification. Martis et al. [28] evaluated the performance of several ML classifiers and concluded that the kNN with higher-order statistics features achieved an average accuracy of 97.65% and a sensitivity of 98.16% on the MIT-BIH database. In [29], the RF classifier was utilized with CNN and PQRST features for arrhythmia classification from imbalanced ECG data. The major drawback of ML classifiers (e.g., the SVM) is their deficiency in interpreting the impact of ECG data features on different arrhythmia patterns when extracting the optimal features. Further, the performance of most ML classifiers is questionable because the interrelationship between the learning parameters is not well modeled, especially for data features with high dimensions.
Despite the large number of previous studies in the field, ECG arrhythmia classification has not been completely solved and remains a challenging problem. Consequently, there is room for improvement in several aspects, including classification, feature extraction, preprocessing, and ECG data segmentation. Most ML classifiers have some limitations; for example, the SVM does not perform well with noisy data, while the random forest (RF) suffers from interpretability issues and fails to determine the significance of variables. In addition, these ML classifiers have many parameters, and tuning them has a crucial influence on the efficiency of the classification. Motivated by the advantages of ML classifiers over CNN-based methods, and given that their major remaining challenge is low classification accuracy, this work focuses on enhancing the classification accuracy of ML classifiers. To this end, and to develop an efficient classifier model, we propose to optimize the learning parameters of these classifiers using a nature-inspired metaheuristic algorithm called the marine predators algorithm (MPA). The parameters of each classifier are gradually optimized using the MPA, yielding an optimal classifier model that can classify the ECG features efficiently. Four different machine learning classifiers are considered, namely the SVM, GBDT, RF, and kNN. The performance of these classifiers without learning parameter optimization and with optimization (i.e., MPA-SVM, MPA-GBDT, MPA-RF, and MPA-kNN) is compared. The experiments are validated on three common benchmarking databases: the MIT-BIH, EDB, and INCART.
The remainder of this paper is organized as follows. Section 2 presents the methodology proposed to classify the ECG arrhythmia based on the optimization of the parameters of the ML classifiers. The experimental results and analysis as well as a comparison with the state of the art are presented in Section 3. Finally, the paper is concluded in Section 4.

2. Methodology

A complete smart healthcare system consists of several parts, such as sensors for heartbeat recording, dry electrodes for sensing heartbeats, interpretation of the heartbeat signals, a personalized system for heartbeat monitoring, and the incorporation of the heartbeat monitoring system into healthcare. An overview of the proposed healthcare system for the early diagnosis and classification of ECG arrhythmia is illustrated in Figure 1. It consists of three main steps: data preprocessing, feature extraction, and classification. Detection or classification is the vital step in the system; thus, the contribution of this work is mainly in the classification step, as explained in the following.

2.1. Data Preprocessing and Feature Extraction

Denoising and reliable segmentation increase the efficiency of the classifiers [30]. The frequency content of an ECG lies between 0.5 Hz and 50 Hz [31]; therefore, to eliminate disturbances from the digital ECG signal, a finite impulse response (FIR) band-pass filter [32] designed with these cutoff frequencies was utilized. For the segmentation of the ECG, the R-peak annotations provided by the MIT-BIH, EDB, and INCART datasets were used to locate the beats, and for every beat, we extracted a patch of about 200 ms around its R-peak, spanning 75 ms before and 110 ms after the R-peak.
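As a concrete illustration, the filtering and beat-segmentation steps described above can be sketched in Python with SciPy. The filter order (`numtaps`) and the use of zero-phase filtering are illustrative assumptions; the text only specifies the 0.5–50 Hz passband and the 75/110 ms window.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def preprocess_and_segment(ecg, r_peaks, fs=360, numtaps=101):
    """Band-pass filter (0.5-50 Hz) an ECG and segment beats around R-peaks."""
    # FIR band-pass design; numtaps is an assumed design choice
    taps = firwin(numtaps, [0.5, 50.0], pass_zero=False, fs=fs)
    clean = filtfilt(taps, [1.0], ecg)  # zero-phase filtering (assumption)
    # 75 ms before and 110 ms after each annotated R-peak
    pre, post = int(0.075 * fs), int(0.110 * fs)
    beats = [clean[r - pre: r + post] for r in r_peaks
             if r - pre >= 0 and r + post <= len(clean)]
    return np.array(beats)
```

For a 360 Hz record (the MIT-BIH sampling rate), each extracted beat window then spans 27 + 39 samples.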
After the segmentation phase, the features were extracted around the regions of the segmented ECG signal. In this work, different techniques were used for the feature extraction phase, including the 1D-local binary pattern (LBP) [33], higher-order statistics (HOS) [34], discrete wavelet transform (DWT), the Hermite basis function (HBF) [35], the central moment (CM), and the R-R intervals. Table 1 outlines the number of extracted features by each descriptor from the MIT-BIH, EDB, and INCART databases.
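As an example of one descriptor, a common form of the 1D-LBP can be sketched as follows. The number of neighbors `p` and the use of a histogram of codes as the beat's feature vector are illustrative choices; the exact variant of [33] used in the paper may differ.

```python
import numpy as np

def lbp_1d(signal, p=4):
    """1D local binary pattern: compare p neighbors on each side to the center."""
    codes = []
    for i in range(p, len(signal) - p):
        # neighborhood around sample i, excluding the center itself
        neighbors = np.concatenate([signal[i - p:i], signal[i + 1:i + p + 1]])
        bits = (neighbors >= signal[i]).astype(int)
        codes.append(int(bits.dot(2 ** np.arange(2 * p))))  # binary code
    # the histogram of codes serves as the feature vector for the beat
    hist, _ = np.histogram(codes, bins=2 ** (2 * p), range=(0, 2 ** (2 * p)))
    return hist
```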

2.2. Classification

Four supervised machine learning classifiers were considered for detecting rhythm diseases: the SVM, random forest (RF), gradient boosting decision tree (GBDT), and k-nearest neighbor (kNN). A number of their parameters were optimized using the proposed metaheuristic optimization (MHO) algorithm.

2.2.1. Support Vector Machine

The SVM offers strong insight for practical applications and contributes to high efficiency [36]. It works by transforming the input data from the basic domain $P$ into a higher-dimensional feature space, in which it searches for the optimal hyperplane that splits the training data into groups with the maximum margin. Mathematically, each instance $x_i$ is associated with a label $y_i \in \{+1, -1\}$. The hyperplane, induced by the kernel function, divides the feature space into negative and positive instances with the maximum margin and the minimum classification error [37]. Let $z = \varphi(x)$ be the feature-space vector, where $\varphi$ maps $P$ into the feature space $Z$; then, the hyperplane
$$ w \cdot z + b = 0 $$
is defined by the pair $(w, b)$ and separates the points $x_i$ such that
$$ f(x_i) = \operatorname{sign}(w \cdot z_i + b) = \begin{cases} +1, & \text{if } y_i = +1 \\ -1, & \text{if } y_i = -1, \end{cases} $$
where $w \in Z$ and $b \in \mathbb{R}$. The set $S$ is linearly separable if there exists a pair $(w, b)$ such that
$$ \begin{cases} w \cdot z_i + b \geq +1, & \text{if } y_i = +1 \\ w \cdot z_i + b \leq -1, & \text{if } y_i = -1, \end{cases} \qquad i = 1, \ldots, l, $$
holds for every element of $S$. If $S$ is not linearly separable, the formulation must allow for classification violations through slack variables $\xi_i$. The optimal hyperplane is then the solution of
$$ \min_{w,\, b,\, \xi} \; \frac{1}{2}\, w \cdot w + C \sum_{i=1}^{l} \xi_i \quad \text{such that} \quad y_i \left( w \cdot z_i + b \right) \geq 1 - \xi_i, \;\; \xi_i \geq 0, \;\; i = 1, \ldots, l.
$$
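In practice, this soft-margin formulation corresponds to an off-the-shelf kernel SVM such as scikit-learn's `SVC`, where the parameter `C` is the penalty constant in the objective above and `gamma` shapes the RBF kernel (the implicit map $\varphi$); these are the two parameters later tuned by the MPA. A minimal sketch on synthetic stand-in data (not the ECG databases):

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the extracted beat features
X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# C: slack penalty in the soft-margin objective; gamma: RBF kernel width
clf = SVC(C=1.0, kernel="rbf", gamma="scale").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```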

2.2.2. Gradient Boosting Decision Tree

In decision trees, every internal node is labeled with a distinctive input feature. The arcs derived from a node marked with a certain feature are labeled with each of the possible feature values, and every leaf is labeled with a class or a probability distribution over the classes. The basic concept of gradient boosting with decision trees is to combine a series of weak base classifiers into one strong classifier. Unlike traditional boosting methods that reweight positive and negative samples, the GBDT achieves global convergence by following the negative gradient direction. The weak learner estimates the error at every splitting node based on a test function $\kappa: \mathbb{R}^n \rightarrow \mathbb{R}$, a threshold $\tau$, and the responses $\eta_l$ and $\eta_r$. The optimal split is achieved by identifying the triplet $(\tau, \eta_l, \eta_r)$ that minimizes the error after the split, where
$$ \epsilon(\tau) = \sum_{i\,:\,\kappa(x_i) < \tau} w_i^j \left( r_i^j - \eta_l \right)^2 + \sum_{i\,:\,\kappa(x_i) \geq \tau} w_i^j \left( r_i^j - \eta_r \right)^2. $$
The weight $w_i^j$ and response $r_i^j$ of $x_i$ at iteration $j$ are
$$ w_i^j = \exp\left( -y_i f_{j-1}(x_i) \right), \qquad r_i^j = \frac{g(x_i)}{w_i^j} = \frac{y_i \exp\left( -y_i f_{j-1}(x_i) \right)}{w_i^j} = y_i. $$
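The split search of the weak learner can be implemented directly from the error above: for each candidate threshold, the error-minimizing responses are the weighted means of the responses on each side (a standard least-squares result). The exhaustive enumeration of thresholds below is an illustrative choice:

```python
import numpy as np

def best_stump_split(kappa_x, w, r, thresholds):
    """Find (tau, eta_l, eta_r) minimizing the weighted squared error eps(tau)."""
    best = None
    for tau in thresholds:
        left, right = kappa_x < tau, kappa_x >= tau
        if not left.any() or not right.any():
            continue  # degenerate split, skip
        # optimal constant responses: weighted means of r on each side
        eta_l = np.average(r[left], weights=w[left])
        eta_r = np.average(r[right], weights=w[right])
        eps = (w[left] * (r[left] - eta_l) ** 2).sum() + \
              (w[right] * (r[right] - eta_r) ** 2).sum()
        if best is None or eps < best[0]:
            best = (eps, tau, eta_l, eta_r)
    return best  # (eps, tau, eta_l, eta_r)
```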

2.2.3. Random Forests

The RF is close to the Bayesian method and builds an ensemble from a combination of hierarchical tree-structured predictors [38]. The basic concept behind the RF is that a set of tree models may perform well compared to single decision trees if they make uncorrelated mistakes. In this context, several trees are developed instead of a single tree, where each tree is constructed from the values of random vectors sampled independently following the whole forest's distribution. Consequently, the RF is an ensemble classifier consisting of many random decision trees; a single classification output is taken from each of these decision trees, and the values are aggregated to produce the final result of the classifier [39]. Once constructed, the RF is very fast, as it requires little computation. It has clear interpretability, which provides a natural way to incorporate prior knowledge. Employing appropriate randomness produces precise regressors and classifiers. Moreover, some studies have shown that random input features result in a high classification performance [40].

2.2.4. K-Nearest Neighbor

The kNN is one of the most straightforward machine learning schemes based on supervised learning. It is a non-parametric technique, which means that no assumptions about the underlying data are needed during classification. It measures the similarity between a new instance and the available instances and assigns the new instance to the category it is most closely related to. Generally, the estimation obtained with the kNN scheme is prone to local noise and not very satisfactory. The larger the value of k, the smoother the classification boundary, while a smaller k yields a more convoluted boundary. An advantage of the kNN is that no training is required.
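A minimal sketch showing how the choice of k changes the kNN decision, using scikit-learn on synthetic stand-in data (the feature set and the values of k are illustrative, not the paper's tuned settings):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# "Training" only stores the data; k controls the smoothness of the boundary
scores = {k: KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr).score(X_te, y_te)
          for k in (1, 5, 15)}
```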

2.3. Marine Predator Algorithm (MPA)

The MPA is a nature-inspired metaheuristic algorithm that imitates the foraging behavior of marine predators catching their prey, employing two movement strategies (Brownian and Lévy) when targeting the prey [41].

2.3.1. Initialization

Similar to all metaheuristic schemes, the algorithm begins with an initial population of solutions uniformly distributed over the search space such that
$$ Y_0 = Y_L + \mathrm{rand} \otimes \left( Y_U - Y_L \right), $$
where $Y_U$ and $Y_L$ are the upper and lower boundary limits of the search space, respectively, and $\mathrm{rand}$ is a vector of uniform random numbers in $[0, 1]$.
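The initialization equation translates directly into a few lines of NumPy; the population size, dimensionality, and bounds below are illustrative:

```python
import numpy as np

def init_population(n, d, y_l, y_u, rng=np.random.default_rng(0)):
    """Y0 = Y_L + rand * (Y_U - Y_L), uniform over the search space."""
    y_l, y_u = np.asarray(y_l, float), np.asarray(y_u, float)
    return y_l + rng.random((n, d)) * (y_u - y_l)
```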

2.3.2. Elite and Prey Matrix Construction

The fittest solution found so far, the top predator, is used to construct an $n \times d$ Elite matrix $E$ with
$$ E = \begin{bmatrix} Y^I_{1,1} & Y^I_{1,2} & \cdots & Y^I_{1,d} \\ Y^I_{2,1} & Y^I_{2,2} & \cdots & Y^I_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ Y^I_{n,1} & Y^I_{n,2} & \cdots & Y^I_{n,d} \end{bmatrix}, $$
where $n$ refers to the number of search agents, $d$ is the number of dimensions, and $Y^I$ symbolizes the top predator vector, replicated $n$ times to create the matrix. The prey matrix, with the same dimensions as the Elite, is
$$ Py = \begin{bmatrix} Y_{1,1} & Y_{1,2} & \cdots & Y_{1,d} \\ Y_{2,1} & Y_{2,2} & \cdots & Y_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ Y_{n,1} & Y_{n,2} & \cdots & Y_{n,d} \end{bmatrix}. $$
The optimization process in the MPA is mainly based on these two matrices: the initialization generates the starting prey, from which the fittest solution builds the Elite matrix.

2.3.3. Optimization Process

The most critical step, in which the predators seek the optimal solution, is the optimization cycle. In the first (exploration) phase, the predators move faster than the prey before detecting it, such that for $t < \frac{1}{3} t_{\max}$:
$$ stepsize_j = R_B \otimes \left( E_j - R_B \otimes Py_j \right) $$
$$ Py_j = Py_j + P \cdot R \otimes stepsize_j, $$
where the vector $R_B$ consists of random values drawn from the normal distribution, representing Brownian motion, $R$ is a vector of uniform random numbers in $[0, 1]$, and $P$ is a constant. Multiplying $R_B$ by the Elite simulates the movement of the predators, while adding the scaled step size to the prey position updates the prey.
In the middle stage, with $\frac{1}{3} t_{\max} < t < \frac{2}{3} t_{\max}$, the algorithm divides the population into two halves to balance exploration and exploitation. The first half is updated using Lévy movements:
$$ stepsize_j = R_L \otimes \left( E_j - R_L \otimes Py_j \right) $$
$$ Py_j = Py_j + P \cdot R \otimes stepsize_j, $$
where the vector $R_L$ consists of random values computed from the Lévy distribution, while the second half is updated using Brownian movements:
$$ stepsize_j = R_B \otimes \left( R_B \otimes E_j - Py_j \right) $$
$$ Py_j = E_j + P \cdot CF \otimes stepsize_j, $$
$$ CF = \left( 1 - \frac{t}{t_{\max}} \right)^{2 \frac{t}{t_{\max}}}. $$
In the final (exploitation) phase, with $t > \frac{2}{3} t_{\max}$, the whole population is modified using Lévy flights:
$$ stepsize_j = R_L \otimes \left( R_L \otimes E_j - Py_j \right) $$
$$ Py_j = E_j + P \cdot CF \otimes stepsize_j. $$
The predators accurately remember the locations of previous successful foraging because of their good memory. The MPA simulates this ability through memory saving, which can increase the quality of the solutions as the iterations progress: the fitness of each solution at the present iteration is compared with its counterpart from the previous iteration, and the solution is replaced if the new one is fitter. The steps of the MPA are summarized in Algorithm 1.
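The three phase updates can be sketched as a single position-update routine. This is a simplified illustration following the standard MPA formulation: Lévy steps are drawn via Mantegna's algorithm, $P = 0.5$ is the conventional constant, and the FADs effect and memory saving are omitted.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy(shape, beta=1.5):
    """Levy-distributed step sizes via Mantegna's algorithm (used for R_L)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, shape)
    v = rng.normal(0.0, 1.0, shape)
    return u / np.abs(v) ** (1 / beta)

def mpa_step(prey, elite, t, t_max, p=0.5):
    """One MPA position update covering the three phases."""
    n, d = prey.shape
    cf = (1 - t / t_max) ** (2 * t / t_max)
    new = prey.copy()
    if t < t_max / 3:
        # Phase 1 (exploration): Brownian steps, prey moves
        rb = rng.normal(size=(n, d))
        new += p * rng.random((n, d)) * rb * (elite - rb * prey)
    elif t < 2 * t_max / 3:
        # Phase 2: first half Levy (exploration), second half Brownian (exploitation)
        h = n // 2
        rl = levy((h, d))
        new[:h] += p * rng.random((h, d)) * rl * (elite[:h] - rl * prey[:h])
        rb = rng.normal(size=(n - h, d))
        new[h:] = elite[h:] + p * cf * rb * (rb * elite[h:] - prey[h:])
    else:
        # Phase 3 (exploitation): Levy steps around the Elite
        rl = levy((n, d))
        new = elite + p * cf * rl * (rl * elite - prey)
    return new
```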

2.3.4. Parameter Optimization

Using the MPA algorithm, four optimized versions of the ML classifiers were introduced for ECG signal classification, namely the MPA-SVM, MPA-GBDT, MPA-RF, and MPA-kNN. The fitness function is defined according to each classifier and its parameters. Since tuning the parameters has a crucial influence on the efficiency of the classification, a diverse set of parameters for each classifier was considered to optimize the classification stage. To fine-tune the best values for the parameters, the holdout strategy was adopted, with 80% of the data used for the training set and the remaining 20% used to test the performance. The list of parameters considered in the experiments for each classifier is provided in Table 2.
Algorithm 1 Pseudo-code of the MPA algorithm.
  1: Initialization step: P, TP, TF, py_i.
  2: while t < t_max do
  3:     Compute the fitness value f(py_i) of each py_i
  4:     Construct E
  5:     Implement the memory saving
  6:     Update CF using Equation (16)
  7:     for each py_i do
  8:         if t < (1/3) t_max then
  9:             Reposition the current py_i based on Equation (11)
 10:         else
 11:             if (1/3) t_max < t < (2/3) t_max then
 12:                 if i < n/2 then
 13:                     Reposition the current py_i using Equation (13)
 14:                 else
 15:                     Reposition the current py_i using Equation (15)
 16:                 end if
 17:             else
 18:                 Reposition the current py_i using Equation (18)
 19:             end if
 20:         end if
 21:     end for
 22:     Compute the fitness value f(py_i) of each py_i
 23:     Update TopPredatorPos and TopPredatorFit
 24:     Apply the memory saving
 25:     Apply the FADs effect for all py_i
 26:     t = t + 1
 27: end while
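The coupling between the MPA and a classifier reduces to the fitness function: each prey vector is decoded into candidate parameter values, the classifier is trained on the 80% split, and the holdout accuracy on the remaining 20% is returned as the fitness. A minimal sketch for the SVM's (c, gamma) pair on synthetic stand-in data (the dataset and the candidate values are illustrative):

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Stand-in data split with the paper's 80/20 holdout strategy
X, y = make_classification(n_samples=300, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def fitness(params):
    """Holdout accuracy used as the MPA fitness for an SVM candidate (c, gamma)."""
    c, gamma = params
    clf = SVC(C=c, gamma=gamma).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Each MPA prey vector is decoded into classifier parameters before evaluation
acc = fitness([1.0, 0.1])
```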

3. Experiments and Results

3.1. Database Descriptions

The main characteristics of the three ECG databases used in the evaluation process are summarized in Table 3.

3.1.1. The MIT-BIH Arrhythmia Dataset (MIT-BIH)

This public dataset is a standard benchmark for cardiac rhythm detection, collected from 47 patients. It consists of 48 records, each 30 min in duration, sampled at 360 Hz. Each record has two signals: the first is a bipolar limb lead named the modified lead II (MLII), and the second is one of the unipolar chest leads, called V leads (V1, V2, V3, V4, V5, and V6). The MLII lead is shared across all records because it provides an ideal view of the significant waves (e.g., the P-, Q-, R-, S-, and T-waves) [42].

3.1.2. The European ST-T Dataset (EDB)

The EDB was designed for evaluating the performance of ST- and T-wave analysis algorithms. It is a collection of 90 annotated ECG records taken from 79 subjects. Each record is two hours in duration, with two signals sampled at 250 samples per second [43].

3.1.3. St. Petersburg INCART Dataset (INCART)

This is a 12-lead arrhythmia dataset containing 75 annotated recordings taken from 32 Holter records, each 30 min long. The INCART consists of 12 regular leads, and each lead is sampled at 257 Hz. The main records were acquired from patients undergoing coronary heart disease examinations [43].

3.2. Evaluation Criteria

The holdout strategy was used to evaluate the performance of the optimized ML classifiers against five standard criteria: accuracy (Acc), precision (Pr), specificity (Sp), sensitivity (Sn), and the F1-score (F1). These criteria rely on the four outcomes of a binary classification test (true/false positives and negatives) as follows:
$$ Acc = \frac{T_p + T_n}{T_p + T_n + F_p + F_n} $$
$$ Sn = \frac{T_p}{T_p + F_n} $$
$$ Sp = \frac{T_n}{T_n + F_p} $$
$$ Pr = \frac{T_p}{T_p + F_p} $$
$$ F1 = \frac{2 \cdot Pr \cdot Sn}{Pr + Sn} $$
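The five criteria follow directly from the confusion-matrix counts; a small helper makes the definitions concrete:

```python
def metrics(tp, tn, fp, fn):
    """Acc, Sn, Sp, Pr, F1 from the binary confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)            # sensitivity (recall)
    sp = tn / (tn + fp)            # specificity
    pr = tp / (tp + fp)            # precision
    f1 = 2 * pr * sn / (pr + sn)   # harmonic mean of precision and recall
    return acc, sn, sp, pr, f1
```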

3.3. Performance Evaluation

3.3.1. Evaluation of the MPA-SVM

For the MPA-SVM classifier, two parameters, c and gamma, which had important effects on the classification process, were optimized. According to Table 4, all measures were higher than 98%. At the class level, it is clear that the Acc and Sn were high on all three datasets, with Acc > 99.14% and Sn > 98.11%. The accuracy of the classification process by all models was remarkably enhanced and nearly balanced across all classes. The best reported accuracy was obtained for class F, with 99.93%, and the lowest performance was obtained for class N, with an accuracy of 99.86%. Regarding misclassification, only ≤0.49% of the S class and ≤0.82% of the VEBs class were not classified accurately. The results for the S and VEBs classes were very promising, signifying improvements over counterpart studies. These two classes are important cases for the AAMI, which recommends that evaluation measures focus on the classification of the S and VEBs classes. The performance of the MPA-SVM classifier was sufficient for these two classes on the three databases.

3.3.2. Evaluation of the MPA-GBDT

The MPA-GBDT was introduced to optimize three parameters: the max depth, gamma, and learning rate. Table 5 lists the results of the optimized MPA-GBDT model for the AAMI classes (N, S, VEBs, and F). In addition to the high classification accuracy (≥99.45%), the classification performance (sensitivity of ≥98.49% and positive predictivity of ≥98.81%) for the S and VEBs classes was very high, where the positive predictivity reached 100% for these classes on the MIT-BIH and EDB databases.

3.3.3. Evaluation of the MPA-RF

The proposed MPA-RF optimized the same parameters as the MPA-GBDT. Table 6 reports the results obtained on the same databases with the same validation scheme; the MPA-RF provided the highest accuracy (Acc = 99.93%) and sensitivity (Sn = 100%) in the recognition of cardiac disorders. At the class level, the MPA-RF achieved a sensitivity of 100% for class F on the EDB database and ≥99.75% on the other two databases, even though class F had the minimum number of samples, accounting for ≤0.08% of the total number of class samples.

3.3.4. Evaluation of the MPA-kNN

For the MPA-kNN classifier, the K parameter (the number of nearest neighbors) was optimized. The MPA-kNN provided the lowest classification performance compared to the MPA-SVM, MPA-GBDT, and MPA-RF due to the characterization of the kNN as a lazy classifier that depends only on distances for classification. However, the optimized version, the MPA-kNN, performed well compared to the kNN itself. The detection accuracy of the AAMI classes with the MPA-kNN classifier was 94.96%, 95.40%, and 92.14% on the MIT-BIH, EDB, and INCART databases, respectively. As depicted in Table 7, the MPA-kNN achieved an average Acc of 94.96%. Approximately ≤6.97% of the S class and ≤9.07% of the VEBs class were not classified correctly. Thus, according to the experimental results, we can conclude that the MPA-kNN still performed reasonably well in terms of classification accuracy.
For further investigation, the convergence curves of each optimized classifier on the MIT-BIH, EDB, and INCART datasets are presented in Figure 2. The MPA-SVM exhibited high-speed convergence on the MIT-BIH database compared to the other models, while the MPA-kNN was in last place. On the other two databases, the MPA-GBDT and MPA-RF had the highest convergence speed, and the MPA-kNN still converged the slowest. Moreover, the MPA-GBDT and MPA-RF had close convergence behavior on all three databases.
To highlight the improvement in the performance of the classifiers after optimization using the MPA algorithm, Table 8, Table 9, Table 10 and Table 11 show the improvements in the performance of each classifier with optimization (i.e., each classifier and its optimized version). The reported performance criteria are the average values of the Acc, Sn, Sp, Pr, and F1 on the three databases. The highest improvement was in the performance metrics of the SVM. Moreover, the improvement on the INCART database was higher than on the other two databases. Thus, it is clear that utilizing the proposed optimization algorithm significantly improved the performance of all four ML classifiers.

3.4. Comparison with Other Methods

The classification performance of the four optimized ML classifiers was compared to 16 state-of-the-art methods, and the obtained results are reported in Table 12. Note that the results achieved by the previous works were obtained for only five classes, of which four were known classes and only one was unknown. The proposed approaches accomplished average accuracies of 99.67%, 99.91%, 99.92%, and 97.07% on the EDB dataset for the MPA-SVM, MPA-GBDT, MPA-RF, and MPA-kNN, respectively. The MPA-SVM, MPA-GBDT, and MPA-RF achieved the highest percentages in terms of the Acc and Sn. Even the MPA-kNN, which is based on the lazy kNN classifier, performed well against the SVM, CNN, and kNN models in [44,45,46]. It can be concluded from Table 12 that the proposed method yielded a significantly improved classification performance in terms of the overall measurement factors compared to the other methods, which confirms the effectiveness of the proposed optimized classifiers.

4. Conclusions

This paper proposed an automatic arrhythmia classification method based on a new AI metaheuristic optimization algorithm and four ML classifiers for IoT-assisted smart healthcare systems. Multiclassifier models, including the MPA-SVM, MPA-GBDT, MPA-RF, and MPA-kNN, were introduced for classification with parameter optimization. The average classification accuracies achieved by the MPA with the SVM classifier were 99.48% (MIT-BIH), 99.90% (EDB), and 99.47% (INCART). The accuracies achieved by the MPA with the GBDT were 99.61% (MIT-BIH), 99.91% (EDB), and 99.72% (INCART); the MPA with the RF achieved 99.67% (MIT-BIH), 99.92% (EDB), and 99.73% (INCART); and the MPA with the kNN achieved 96.44% (MIT-BIH), 97.07% (EDB), and 93.51% (INCART). It is clear that the RF showed the most accurate results of these methods. Hence, it can be concluded that incorporating the MPA scheme can effectively optimize ML classifiers, even a lazy one such as the kNN. The performance achieved with the optimization step ranks among the highest reported to date.
In future works, other optimization algorithms can be investigated to enhance the ability to predict heart problems. Efficient feature extraction and classification methods should also be incorporated into the real-time surveillance of cardiac patients. Using powerful classification models (e.g., deep learning) is a possible next step of this research. To obtain meaningful classification outcomes with greater accuracy, these powerful classification models can be combined with the MPA algorithm, as it performed very well and enhanced the accuracy of the classification process.

Author Contributions

Conceptualization, M.H. and Y.M.W.; methodology, M.H. and A.F.; software, I.E.I.; validation, M.H. and Y.M.W.; formal analysis, M.H.; writing—original draft preparation, M.H. and I.E.I.; writing—review and editing, A.F. and Y.M.W.; project administration, M.H. and A.F.; funding acquisition, M.H. and A.F. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, project number (IF2/PSAU/2022/01/22578).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used are freely available to the public at


Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number (IF2/PSAU/2022/01/22578).

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Ismail, A.R.; Jovanovic, S.; Ramzan, N.; Rabah, H. ECG Classification Using an Optimal Temporal Convolutional Network for Remote Health Monitoring. Sensors 2023, 23, 1697.
2. Huang, Y.; Li, H.; Yu, X. A novel time representation input based on deep learning for ECG classification. Biomed. Signal Process. Control 2023, 83, 104628.
3. Wang, W.; Kumar, N.; Chen, J.; Gong, Z.; Kong, X.; Wei, W.; Gao, H. Realizing the Potential of Internet of Things for Smart Tourism with 5G and AI. IEEE Netw. 2020, 34, 295–301.
4. Balasundaram, A.; Routray, S.; Prabu, A.; Krishnan, P.; Malla, P.P.; Maiti, M. Internet of things (IoT) based Smart Healthcare System for Efficient Diagnostics of Health Parameters of Patients in Emergency Care. IEEE Internet Things J. 2023.
5. Sangaiah, A.K.; Arumugam, M.; Bian, G.B. An intelligent learning approach for improving ECG signal classification and arrhythmia analysis. Artif. Intell. Med. 2020, 103, 101788.
6. Yang, S.; Lian, C.; Zeng, Z.; Xu, B.; Zang, J.; Zhang, Z. A Multi-View Multi-Scale Neural Network for Multi-Label ECG Classification. IEEE Trans. Emerg. Top. Comput. Intell. 2023.
7. Irfan, S.; Anjum, N.; Althobaiti, T.; Alotaibi, A.A.; Siddiqui, A.B.; Ramzan, N. Heartbeat Classification and Arrhythmia Detection Using a Multi-Model Deep-Learning Technique. Sensors 2022, 22, 5606.
8. Subasi, A.; Dogan, S.; Tuncer, T. A novel automated tower graph based ECG signal classification method with hexadecimal local adaptive binary pattern and deep learning. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 711–725.
9. Tripoliti, E.E.; Papadopoulos, T.G.; Karanasiou, G.S.; Naka, K.K.; Fotiadis, D.I. Heart failure: Diagnosis, severity estimation and prediction of adverse events through machine learning techniques. Comput. Struct. Biotechnol. J. 2017, 15, 26–47.
10. Chandra, B.S.; Sastry, C.S.; Jana, S. Robust heartbeat detection from multimodal data via CNN-based generalizable information fusion. IEEE Trans. Biomed. Eng. 2019, 66, 710–717.
11. Mewada, H. 2D-wavelet encoded deep CNN for image-based ECG classification. Multimed. Tools Appl. 2023, 1–17.
12. Ebrahimi, Z.; Loni, M.; Daneshtalab, M.; Gharehbaghi, A. A review on deep learning methods for ECG arrhythmia classification. Expert Syst. Appl. X 2020, 7, 100033.
13. Murat, F.; Yildirim, O.; Talo, M.; Baloglu, U.B.; Demir, Y.; Acharya, U.R. Application of deep learning techniques for heartbeats detection using ECG signals-analysis and review. Comput. Biol. Med. 2020, 120, 103726.
14. Yao, Q.; Wang, R.; Fan, X.; Liu, J.; Li, Y. Multi-class Arrhythmia detection from 12-lead varied-length ECG using Attention-based Time-Incremental Convolutional Neural Network. Inf. Fusion 2020, 53, 174–182.
15. Ye, C.; Kumar, B.V.; Coimbra, M.T. An automatic subject-adaptable heartbeat classifier based on multiview learning. IEEE J. Biomed. Health Inform. 2016, 20, 1485–1492.
16. Wang, R.; Fan, J.; Li, Y. Deep Multi-Scale Fusion Neural Network for Multi-Class Arrhythmia Detection. IEEE J. Biomed. Health Inform. 2020, 24, 2461–2472.
17. Chen, C.; Hua, Z.; Zhang, R.; Liu, G.; Wen, W. Automated arrhythmia classification based on a combination network of CNN and LSTM. Biomed. Signal Process. Control 2020, 57, 101819.
18. Atal, D.K.; Singh, M. Arrhythmia classification with ECG signals based on the optimization-enabled deep convolutional neural network. Comput. Methods Programs Biomed. 2020, 196, 105607.
19. Wu, Q.; Sun, Y.; Yan, H.; Wu, X. ECG signal classification with binarized convolutional neural network. Comput. Biol. Med. 2020, 121, 103800.
20. Li, Q.; Rajagopalan, C.; Clifford, G.D. A machine learning approach to multi-level ECG signal quality classification. Comput. Methods Programs Biomed. 2014, 117, 435–447.
21. Khorrami, H.; Moavenian, M. A comparative study of DWT, CWT and DCT transformations in ECG arrhythmias classification. Expert Syst. Appl. 2010, 37, 5751–5757.
22. Santos, M.A.; Munoz, R.; Olivares, R.; Rebouças Filho, P.P.; Del Ser, J.; de Albuquerque, V.H.C. Online heart monitoring systems on the internet of health things environments: A survey, a reference model and an outlook. Inf. Fusion 2020, 53, 222–239.
23. Ye, C.; Kumar, B.V.; Coimbra, M.T. Heartbeat classification using morphological and dynamic features of ECG signals. IEEE Trans. Biomed. Eng. 2012, 59, 2930–2941.
24. Raj, S.; Ray, K.C. Sparse representation of ECG signals for automated recognition of cardiac arrhythmias. Expert Syst. Appl. 2018, 105, 49–64.
25. Haldar, N.A.H.; Khan, F.A.; Ali, A.; Abbas, H. Arrhythmia classification using Mahalanobis distance based improved Fuzzy C-Means clustering for mobile health monitoring systems. Neurocomputing 2017, 220, 221–235.
26. Zhang, S.; Li, Y.; Wang, X.; Gao, H.; Li, J.; Liu, C. Label decoupling strategy for 12-lead ECG classification. Knowl.-Based Syst. 2023, 263, 110298.
27. Hou, B.; Yang, J.; Wang, P.; Yan, R. LSTM-Based Auto-Encoder Model for ECG Arrhythmias Classification. IEEE Trans. Instrum. Meas. 2020, 69, 1232–1240.
28. Roshan, M.J.; Acharya, U.R.; Prasad, H.; Chua, C.K.; Lim, C.M.; Suri, J.S. Application of higher order statistics for atrial arrhythmia classification. Biomed. Signal Process. Control 2013, 8, 888–900.
29. Lu, W.; Hou, H.; Chu, J. Feature fusion for imbalanced ECG data analysis. Biomed. Signal Process. Control 2018, 41, 152–160.
30. Hesar, H.D.; Mohebbi, M. A multi rate marginalized particle extended Kalman filter for P and T wave segmentation in ECG signals. IEEE J. Biomed. Health Inform. 2019, 23, 112–122.
31. Rishi, S.R.; Pachori, R.B. Baseline wander and power line interference removal from ECG signals using eigenvalue decomposition. Biomed. Signal Process. Control 2018, 45, 33–49.
32. Kærgaard, K.; Jensen, S.H.; Puthusserypady, S. A comprehensive performance analysis of EEMD-BLMS and DWT-NN hybrid algorithms for ECG denoising. Biomed. Signal Process. Control 2016, 25, 178–187.
33. Kaya, Y.; Uyar, M.; Tekin, R.; Yıldırım, S. 1D-local binary pattern based feature extraction for classification of epileptic EEG signals. Appl. Math. Comput. 2014, 243, 209–219.
34. Nascimento, N.M.M.; Marinho, L.B.; Peixoto, S.A.; do Vale Madeiro, J.P.; de Albuquerque, V.H.C.; Rebouças Filho, P.P. Heart arrhythmia classification based on statistical moments and structural co-occurrence. Circuits Syst. Signal Process. 2020, 39, 631–650.
35. Tuncer, T.; Dogan, S.; Pławiak, P.; Acharya, U.R. Automated arrhythmia detection using novel hexadecimal local pattern and multilevel wavelet transform with ECG signals. Knowl.-Based Syst. 2019, 186, 104923.
36. Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28.
37. Lin, C.F.; Wang, S.D. Fuzzy support vector machines. IEEE Trans. Neural Netw. 2002, 13, 464–471.
38. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
39. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
40. Jain, N.; Jana, P.K. LRF: A logically randomized forest algorithm for classification and regression problems. Expert Syst. Appl. 2023, 213, 119225.
41. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
42. Moody, G.B.; Mark, R.G. The impact of the MIT-BIH arrhythmia database. IEEE Eng. Med. Biol. Mag. 2001, 20, 45–50.
43. Goldberger, A.L.; Amaral, L.A.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000, 101, e215–e220.
44. Martis, R.J.; Acharya, U.R.; Mandana, K.; Ray, A.K.; Chakraborty, C. Cardiac decision making using higher order spectra. Biomed. Signal Process. Control 2013, 8, 193–203.
45. Acharya, U.R.; Oh, S.L.; Hagiwara, Y.; Tan, J.H.; Adam, M.; Gertych, A.; San Tan, R. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 2017, 89, 389–396.
46. Sharma, M.; Tan, R.S.; Acharya, U.R. Automated heartbeat classification and detection of arrhythmia using optimal orthogonal wavelet filters. Inform. Med. Unlocked 2019, 16, 100221.
47. Li, H.; Yuan, D.; Wang, Y.; Cui, D.; Cao, L. Arrhythmia classification based on multi-domain feature extraction for an ECG recognition system. Sensors 2016, 16, 1744.
48. Taiyong, L.; Min, Z. ECG classification using wavelet packet entropy and random forests. Entropy 2016, 18, 285.
49. Serkan, K.; Ince, T.; Gabbouj, M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Trans. Biomed. Eng. 2015, 63, 664–675.
50. Yang, H.; Wei, Z. Arrhythmia recognition and classification using combined parametric and visual pattern features of ECG morphology. IEEE Access 2020, 8, 47103–47117.
51. Oh, S.L.; Ng, E.Y.; San Tan, R.; Acharya, U.R. Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats. Comput. Biol. Med. 2018, 102, 278–287.
52. Li, Z.; Zhou, D.; Wan, L.; Li, J.; Mou, W. Heartbeat classification using deep residual convolutional neural network from 2-lead electrocardiogram. J. Electrocardiol. 2020, 58, 105–112.
53. Yıldırım, Ö.; Pławiak, P.; Tan, R.S.; Acharya, U.R. Arrhythmia detection using deep convolutional neural network with long duration ECG signals. Comput. Biol. Med. 2018, 102, 411–420.
54. Shu-Lih, O.; Ng, E.Y.; San Tan, R.; Acharya, U.R. Automated beat-wise arrhythmia diagnosis using modified U-net on extended electrocardiographic recordings with heterogeneous arrhythmia types. Comput. Biol. Med. 2019, 105, 92–101.
55. Marinho, L.B.; de MM Nascimento, N.; Souza, J.W.M.; Gurgel, M.V.; Rebouças Filho, P.P.; de Albuquerque, V.H.C. A novel electrocardiogram feature extraction approach for cardiac arrhythmia classification. Future Gener. Comput. Syst. 2019, 97, 564–577.
56. Yang, W.; Si, Y.; Wang, D.; Zhang, G. A novel approach for multi-lead ECG classification using DL-CCANet and TL-CCANet. Sensors 2019, 19, 3214.
57. Pławiak, P.; Acharya, U.R. Novel deep genetic ensemble of classifiers for arrhythmia detection using ECG signals. Neural Comput. Appl. 2020, 32, 11137–11161.
58. Patro, K.K.; Jaya Prakash, A.; Jayamanmadha Rao, M.; Rajesh Kumar, P. An Efficient Optimized Feature Selection with Machine Learning Approach for ECG Biometric Recognition. IETE J. Res. 2020, 68, 2743–2754.
Figure 1. Overview of an early diagnosis and classification of ECG arrhythmia healthcare system.
Figure 2. The convergence curves of the optimized classifiers on the three databases.
Table 1. Total number of features extracted from the MIT-BIH, EDB, and INCART datasets.

| Datasets | LBP | HOS + CM | HBF | DWT | RR | Total Number of Features |
| MIT-BIH | 60 | 12 + 1 | 16 | 32 | 10 | 131 |
| EDB | 60 | 12 + 1 | 16 | 26 | 10 | 125 |
| INCART | 60 | 12 + 1 | 16 | 26 | 10 | 125 |
Table 2. The parameters of each classifier optimized using the MPA algorithm.

| Parameter | Search Range |
| Gamma | [0.0001, 1] |
| GBDT: Max_depth | [1, 13] |
| Gamma | [0.0001, 1] |
| Learning Rate | [0, 1] |
| Gamma | [0.0001, 1] |
| Learning Rate | [0, 1] |
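A continuous optimizer such as the MPA searches in a unit hypercube, so each coordinate of a candidate solution must be decoded into one of the ranges listed in Table 2. The sketch below illustrates one such decoding; the log-scale mapping for Gamma is our assumption (not stated in the paper), chosen because a linear mapping would give the small Gamma values almost no coverage.

```python
import math  # kept for clarity; the power could also be written with math.pow

def decode(position):
    """Map a position in [0, 1]^3 to (gamma, max_depth, learning_rate).

    Ranges follow Table 2: Gamma in [0.0001, 1], Max_depth in [1, 13],
    Learning Rate in [0, 1]. Gamma is decoded log-uniformly (an assumption).
    """
    g, d, lr = position
    gamma = 10 ** (-4 + 4 * g)      # 1e-4 .. 1, log-uniform
    max_depth = 1 + round(d * 12)   # integer 1 .. 13
    learning_rate = lr              # already in [0, 1]
    return gamma, max_depth, learning_rate

print(decode((0.0, 0.0, 0.5)))  # → (0.0001, 1, 0.5)
print(decode((1.0, 1.0, 1.0)))  # → (1.0, 13, 1.0)
```

The optimizer then only ever manipulates vectors in [0, 1]^n, and the decoding keeps classifier-specific details out of the search loop.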
Table 3. Details of the ECG databases used in the evaluation process.

| Database | Subjects | Records | Leads | Location of Electrodes | Sample Rate | Resolution (bits) | Duration |
| MIT-BIH | 48 | 48 | 12 | Chest and limbs | 360 Hz | 11 | 30 min |
| EDB | 79 | 90 | 2 | Chest and limbs | 250 Hz | 12 | 120 min |
| INCART | 32 | 75 | 12 | Chest and limbs | 257 Hz | 12 | 30 min |
Table 4. The performance of the MPA-SVM classifier on all the databases' classes.

| Classes | MIT-BIH | | | | | EDB | | | | | INCART | | | | |
| | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 |
| N | 99.14 | 98.11 | 99.48 | 98.41 | 98.26 | 99.86 | 99.7 | 99.91 | 99.72 | 99.71 | 99.23 | 98.45 | 99.48 | 98.4 | 98.43 |
| S | 99.51 | 99.19 | 99.61 | 98.85 | 99.02 | 99.9 | 99.85 | 99.92 | 99.75 | 99.8 | 99.57 | 99.45 | 99.61 | 98.84 | 99.15 |
| VEBs | 99.51 | 98.4 | 99.89 | 99.67 | 99.03 | 99.18 | 97.93 | 99.61 | 98.83 | 98.38 | 99.9 | 99.58 | 100 | 100 | 99.8 |
| F | 99.78 | 99.9 | 99.74 | 99.21 | 99.55 | 99.93 | 99.97 | 99.92 | 99.75 | 99.86 | 99.92 | 99.9 | 99.93 | 99.77 | 99.84 |
| Average | 99.48 | 98.90 | 99.68 | 99.03 | 98.97 | 99.90 | 99.77 | 99.94 | 99.81 | 99.79 | 99.47 | 98.93 | 99.65 | 98.96 | 98.95 |
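The per-class figures reported in Tables 4–7 follow the usual one-vs-rest definitions of accuracy, sensitivity, specificity, precision, and F1. The sketch below shows how they can be computed from a confusion matrix; the matrix values are illustrative, not the paper's data.

```python
import numpy as np

def per_class_metrics(cm):
    """One-vs-rest Acc, Se, Sp, Pr, F1 for each class of a confusion matrix.

    cm[i, j] = number of beats of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as class i but wrong
    fn = cm.sum(axis=1) - tp          # class i beats predicted elsewhere
    tn = total - tp - fp - fn
    se = tp / (tp + fn)               # sensitivity (recall)
    sp = tn / (tn + fp)               # specificity
    pr = tp / (tp + fp)               # precision
    f1 = 2 * pr * se / (pr + se)
    acc = (tp + tn) / total           # one-vs-rest accuracy
    return acc, se, sp, pr, f1

# Toy 3-class confusion matrix (e.g., N / S / VEB beat counts).
cm = [[90, 5, 5],
      [4, 45, 1],
      [2, 3, 45]]
acc, se, sp, pr, f1 = per_class_metrics(cm)
print(np.round(se, 3))  # → [0.9 0.9 0.9]
```

The "Average" rows of Tables 4–11 are then simply the means of these per-class values over the four beat classes.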
Table 5. The performance of the MPA-GBDT classifier on all the databases' classes.

| Classes | MIT-BIH | | | | | EDB | | | | | INCART | | | | |
| | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 |
| N | 99.45 | 98.97 | 99.61 | 98.81 | 98.89 | 99.89 | 99.67 | 99.96 | 99.87 | 99.77 | 99.6 | 99.33 | 99.69 | 99.04 | 99.18 |
| S | 99.7 | 99.4 | 99.79 | 99.39 | 99.4 | 99.91 | 99.94 | 99.91 | 99.73 | 99.83 | 99.77 | 99.74 | 99.78 | 99.35 | 99.55 |
| VEBs | 99.62 | 98.49 | 100 | 100 | 99.24 | 99.92 | 99.49 | 100 | 100 | 99.84 | 99.58 | 98.59 | 99.92 | 99.76 | 99.17 |
| F | 99.69 | 99.73 | 99.68 | 99.04 | 99.39 | 99.9 | 99.99 | 99.88 | 99.63 | 99.81 | 99.92 | 99.97 | 99.91 | 99.71 | 99.84 |
| Average | 99.61 | 99.15 | 99.77 | 99.31 | 99.23 | 99.91 | 99.77 | 99.95 | 99.85 | 99.81 | 99.72 | 99.41 | 99.82 | 99.47 | 99.44 |
Table 6. The performance of the MPA-RF classifier on all the databases' classes.

| Classes | MIT-BIH | | | | | EDB | | | | | INCART | | | | |
| | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 |
| N | 99.52 | 99.11 | 99.66 | 98.96 | 99.04 | 99.91 | 99.72 | 99.98 | 99.92 | 99.82 | 99.62 | 99.37 | 99.7 | 99.07 | 99.22 |
| S | 99.74 | 99.49 | 99.83 | 99.49 | 99.49 | 99.93 | 99.94 | 99.93 | 99.78 | 99.86 | 99.78 | 99.71 | 99.8 | 99.4 | 99.55 |
| VEBs | 99.66 | 98.73 | 99.98 | 99.94 | 99.3 | 99.93 | 99.6 | 100 | 100 | 99.87 | 99.61 | 98.75 | 99.91 | 99.73 | 99.24 |
| F | 99.75 | 99.75 | 99.75 | 99.24 | 99.5 | 99.92 | 100 | 99.9 | 99.69 | 99.84 | 99.93 | 99.95 | 99.92 | 99.77 | 99.86 |
| Average | 99.67 | 99.27 | 99.8 | 99.41 | 99.34 | 99.92 | 99.81 | 99.96 | 99.88 | 99.85 | 99.73 | 99.45 | 99.83 | 99.49 | 99.47 |
Table 7. The performance of the MPA-kNN classifier on all the databases' classes.

| Classes | MIT-BIH | | | | | EDB | | | | | INCART | | | | |
| | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 | Acc | Se | Sp | Pr | F1 |
| N | 94.03 | 80.78 | 98.37 | 94.18 | 86.97 | 94.73 | 81.29 | 99.13 | 96.83 | 88.38 | 89.82 | 73.63 | 95.11 | 83.11 | 78.09 |
| S | 96.36 | 97.57 | 95.96 | 89.04 | 93.11 | 96.66 | 99.19 | 95.8 | 88.95 | 93.79 | 93.03 | 98.58 | 91.17 | 78.95 | 87.68 |
| VEBs | 96.99 | 92.14 | 98.65 | 95.9 | 93.98 | 97.85 | 95.46 | 98.65 | 95.96 | 95.71 | 91.93 | 75.4 | 97.56 | 91.31 | 82.59 |
| F | 98.39 | 99.72 | 97.95 | 94.13 | 96.85 | 99.02 | 99.9 | 98.73 | 96.3 | 98.07 | 99.24 | 99.85 | 99.04 | 97.19 | 98.5 |
| Average | 96.44 | 92.55 | 97.73 | 93.31 | 92.73 | 97.07 | 93.96 | 98.08 | 94.51 | 93.99 | 93.51 | 86.87 | 95.72 | 87.64 | 86.72 |
Table 8. The performance comparison of the SVM and its optimized version, the MPA-SVM.

| Average | MIT-BIH | | | EDB | | | INCART | | |
| | SVM | MPA-SVM | Improvement % | SVM | MPA-SVM | Improvement % | SVM | MPA-SVM | Improvement % |
| Acc | 91.79 | 99.48 | 7.69 | 91.31 | 99.90 | 8.59 | 87.36 | 99.47 | 12.11 |
| Sn | 83.73 | 98.90 | 15.17 | 82.63 | 99.77 | 17.14 | 75.0 | 98.93 | 23.93 |
| Sp | 94.55 | 99.68 | 5.13 | 94.23 | 99.94 | 5.71 | 91.61 | 99.65 | 8.04 |
| Pr | 89.41 | 99.03 | 9.62 | 89.65 | 99.81 | 10.16 | 87.21 | 98.96 | 11.75 |
| F1 | 84.20 | 98.97 | 14.77 | 82.18 | 99.79 | 17.61 | 72.62 | 98.95 | 26.33 |
Table 9. The performance comparison of the GBDT and its optimized version, the MPA-GBDT.

| Average | MIT-BIH | | | EDB | | | INCART | | |
| | GBDT | MPA-GBDT | Improvement % | GBDT | MPA-GBDT | Improvement % | GBDT | MPA-GBDT | Improvement % |
| Acc | 96.93 | 99.61 | 2.68 | 97.65 | 99.91 | 2.26 | 97.44 | 99.72 | 2.28 |
| Sn | 93.38 | 99.15 | 4.79 | 94.80 | 99.77 | 4.97 | 95.13 | 99.41 | 4.28 |
| Sp | 98.12 | 99.77 | 1.65 | 98.60 | 99.95 | 1.35 | 98.38 | 99.82 | 1.44 |
| Pr | 94.36 | 99.31 | 4.95 | 95.82 | 99.85 | 4.03 | 94.6 | 99.47 | 4.87 |
| F1 | 93.84 | 99.23 | 5.39 | 95.29 | 99.81 | 4.52 | 94.88 | 99.44 | 4.56 |
Table 10. The performance comparison of the RF and its optimized version, the MPA-RF.

| Average | MIT-BIH | | | EDB | | | INCART | | |
| | RF | MPA-RF | Improvement % | RF | MPA-RF | Improvement % | RF | MPA-RF | Improvement % |
| Acc | 96.90 | 99.67 | 2.77 | 97.65 | 99.92 | 2.27 | 97.5 | 99.73 | 2.23 |
| Sn | 93.37 | 99.27 | 5.9 | 94.83 | 99.81 | 4.98 | 94.75 | 99.45 | 4.7 |
| Sp | 98.09 | 99.8 | 1.71 | 98.86 | 99.96 | 1.1 | 98.42 | 99.83 | 1.41 |
| Pr | 94.26 | 99.41 | 5.15 | 98.61 | 99.88 | 1.27 | 95.27 | 99.49 | 4.22 |
| F1 | 93.79 | 99.34 | 5.55 | 95.3 | 99.85 | 4.55 | 94.99 | 99.47 | 4.48 |
Table 11. The performance comparison of the kNN and its optimized version, the MPA-kNN.

| Average | MIT-BIH | | | EDB | | | INCART | | |
| | kNN | MPA-kNN | Improvement % | kNN | MPA-kNN | Improvement % | kNN | MPA-kNN | Improvement % |
| Acc | 94.96 | 96.44 | 1.48 | 95.40 | 97.07 | 1.67 | 92.14 | 93.51 | 1.37 |
| Sn | 89.17 | 92.55 | 3.38 | 90.35 | 93.96 | 3.61 | 83.83 | 86.87 | 3.04 |
| Sp | 96.9 | 97.73 | 0.8 | 97.06 | 98.08 | 1.02 | 94.94 | 95.72 | 0.78 |
| Pr | 91.3 | 93.31 | 2.01 | 92.16 | 94.51 | 2.35 | 86.88 | 87.64 | 0.76 |
| F1 | 89.71 | 92.73 | 7.14 | 90.56 | 93.99 | 3.43 | 83.83 | 86.72 | 2.89 |
Table 12. Classification performance comparisons between the proposed method and other methods.

| Methods | #Classes | #Beats | Classifier | Acc % | Sn % |
| Roshan et al. [44] | 5 | 34,989 | SVM | 93.50 | 99.30 |
| Li et al. [47] | 5 | 1800 | SVM | 97.30 | 97.40 |
| Taiyong and Min [48] | 4 | 100,688 | RF | 94.60 | 98.51 |
| Serkan et al. [49] | 5 | 83,648 | CNN | 99.10 | 93.91 |
| Acharya et al. [45] | 5 | 109,449 | CNN | 94.03 | 96.71 |
| Yang et al. [50] | 15 | 104,986 | KNN | 97.70 | - |
| Rishi et al. [46] | 4 | 109,449 | KNN | 98.00 | 85.33 |
| Shu-Lih et al. [51] | 5 | 16,499 | LSTM-CNN | 98.10 | 97.50 |
| Li et al. [52] | 4 | 94,013 | ResNet | 99.06 | 93.21 |
| Yıldırım et al. [53] | 13 | 833 | 1D-CNN | 95.20 | 93.52 |
| Oh et al. [54] | 5 | 94,667 | Modified U-Net | 97.32 | 94.44 |
| Marinho et al. [55] | 5 | 100,467 | Bayes | 94.30 | - |
| Yang et al. [56] | 15 | 3350 | DL-CCANet | 98.31 | 90.89 |
| Pławiak et al. [57] | 17 | 774 | FGE | 95.00 | 94.62 |
| Patro et al. [58] | 5 | 3551 | PSO-GA-SVM | 95.30 | 94.00 |
| Yao et al. [14] | 8 | 6877 | ATI-CNN | 81.20 | 80.10 |
| The proposed method | 4 | 80,000 | MPA-SVM | 99.48 | 98.90 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Hassaballah, M.; Wazery, Y.M.; Ibrahim, I.E.; Farag, A. ECG Heartbeat Classification Using Machine Learning and Metaheuristic Optimization for Smart Healthcare Systems. Bioengineering 2023, 10, 429.

