Article

Comparative Analysis of Supervised Machine and Deep Learning Algorithms for Kyphosis Disease Detection

1 Department of Computer Application, School of Computing Science & Engineering, Galgotias University, Greater Noida 203201, India
2 Department of Computer Science and Engineering, Chandigarh University, Punjab Gharuan, Mohali 140413, India
3 Department of Computer Applications, KIET Group of Institutions, Ghaziabad 201206, India
4 College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha 999043, Qatar
5 Applied Sciences Department, Meerut Institute of Engineering and Technology, Meerut 250005, India
6 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
7 Computer Science and Information Systems Department, College of Applied Sciences, AlMaarefa University, Riyadh 11597, Saudi Arabia
8 College of Information Sciences and Technology, Data Science and Artificial Intelligence Program, Penn State University, State College, PA 16801, USA
9 School of Optometry and Vision Science, Faculty of Science, University of Waterloo, 200 University Ave W, Waterloo, ON N2L3G1, Canada
10 Faculty of Engineering, University of Waterloo, 200 University Ave W, Waterloo, ON N2L3G1, Canada
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(8), 5012; https://doi.org/10.3390/app13085012
Submission received: 7 March 2023 / Revised: 13 April 2023 / Accepted: 14 April 2023 / Published: 17 April 2023

Abstract

Although Kyphosis, an excessive forward rounding of the upper back, can occur at any age, adolescence is the most common time for Kyphosis. Surgery is frequently performed on Kyphosis patients; however, the condition may persist after the operation. The tricky part is figuring out, based on the patient’s traits, if the Kyphosis condition will continue after the treatment. There have been numerous models employed in the past to predict the Kyphosis disease, including Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Deep Neural Network (DNN), and others. Unfortunately, the precision was overestimated. Based on the dataset received from Kaggle, we investigated how to predict Kyphosis disorders more accurately by using these models with Hyperparameter tuning. While the calculations were being performed, certain variables were modified. The accuracy was increased by optimizing the fit parameters based on Hyperparameter tuning. Accuracy, recall or sensitivity, specificity, precision, balanced accuracy score, F1 score, and AUC-ROC score of all models, including the Hyperparameter tuning, were compared. Overall, the Hyperparameter-tuned DNN models excelled over the other models. The DNN models’ accuracy was 87.72% with 5-fold cross-validation and 87.64% with 10-fold cross-validation. It is advised that when a patient has a clinical procedure, the DNN model be trained to detect and foresee Kyphosis disease. Medical experts can use this study’s findings to correctly predict if a patient will still have Kyphosis after surgery. We propose that deep learning should be adopted and utilized as a crucial and necessary tool throughout the broad range of resolving biological queries.

1. Introduction

Machine learning (ML) involves making predictions based on the data provided to the computer [1], and using this concept of ML, machines can be trained to respond to new information without human interaction. Deep learning (DL) is a type of artificial intelligence and ML that mimics human behavior when used with certain data. It is a type of ML that enables computers to evaluate enormous amounts of data and generate preliminary findings that may subsequently be used to support any response [2].
DL, a subfield of artificial intelligence, works with algorithms that are influenced by the structure and function of the brain. Artificial neural networks with more than one hidden layer are used in DL. Cascading neural networks are used to process non-linear data sets. DL searches through numerous hidden layers of data representations to find the key patterns, and it can be used to diagnose illnesses. Due to DL, the healthcare sector can now review data quickly and accurately [3].
Kyphosis is an abnormality of the backbone (spine). A child may have a humpback from birth or be predisposed to developing one due to abnormal medical conditions, and an extreme humpback causes pain and deformity. Kyphosis, a spinal curve that denotes an excessively rounded back, can emerge from stress, infectious disease, aberrant development, genetic disease, and, on occasion, iatrogenic disease [4]. How Kyphosis is handled depends on several factors, including age, the cause of the curve, and its effects. Kyphosis disease in children should be more thoroughly diagnosed at a young age to prevent unusual spinal and vertebral problems. It is even more crucial to recognize that diagnosing Kyphosis disease in children at an early stage may prevent issues with deformed spinal vertebrae [5]. The Mayo Clinic has more information on Kyphosis at (https://www.mayoclinic.org/diseases-conditions/kyphosis/symptoms-causes/syc-20374205, accessed on 1 December 2022). Medical diagnosis is a vital yet difficult task that must be conducted correctly and consistently; therefore, automating it would be advantageous [6].
Hyperparameter tuning is finding the perfect model architecture by adjusting the parameters that determine the model architecture, called Hyperparameters. Hyperparameter adjustment is essential to govern ML’s or DL’s general behavior [7]. A model parameter known as a “Hyperparameter” is one whose value affects learning and cannot be inferred from training data. Before beginning the process of training and learning the model, Hyperparameters are externally specified. Finding a set of ideal Hyperparameter values for a learning algorithm and using this tuned algorithm on any data set is called Hyperparameter tuning. A Hyperparameter governs the learning process, and as a result, the values of these parameters directly impact other model parameters such as weights and biases, which in turn affect how well the model performs. By tuning these Hyperparameters, any ML or DL model’s accuracy is frequently increased [8].
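As a concrete illustration, the short sketch below tunes an SVM with an exhaustive grid search over a stratified K-fold split using scikit-learn; the parameter grid and the synthetic stand-in data are illustrative assumptions, not the exact setup used in this study.

```python
# Hedged sketch of grid-search Hyperparameter tuning (illustrative grid and data).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

# Synthetic stand-in for the 81-sample, 3-feature Kyphosis data (assumption).
X, y = make_classification(n_samples=81, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

param_grid = {"kernel": ["rbf"], "C": [1, 3, 8], "gamma": [0.01, 0.1, 1.0]}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

search = GridSearchCV(SVC(), param_grid, scoring="accuracy", cv=cv)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))  # tuned values and CV accuracy
```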
By comparing and using various ML algorithms, researchers and medical practitioners have already done a respectable job of detecting and forecasting diseases. According to Chatter et al., Kyphosis must be treated, as some cases are congenital and some are iatrogenic, depending on the patient's needs and current condition [4]. In [5], Artificial Neural Networks (ANN), Support Vector Machine (SVM), and Random Forest (RF) were utilized to recognize and predict Kyphosis disease. After applying grid search strategies, they found that the ANN performed the best, with accuracies of 86.42% and 85.19% using 10-fold and 5-fold cross-validation, respectively. Singla's group examined the relationship between forward head posture, rounded shoulders, and increased thoracic Kyphosis [7].
According to [9], artificial intelligence and ML have significantly advanced the field of spine research. In this review, the author covered several decision-support systems, computer-aided diagnoses, and open issues. ML performed brilliantly and showed great promise in spine research [10]. ML could assist clinical staff in improving patient care, boosting output, and reducing unfavorable outcomes. ANNs are frequently utilized in diagnosis and care systems [11]. In the study [12], they covered how to forecast the clinical expenses of spinal fusion using the Naive Bayes (NB), SVM, Logistic Regression (LR), Decision Tree (DT), and RF models. They concluded that the RF model performed the best in terms of prediction, with an accuracy of 84.30% compared to the other methods.
Abdullah et al. [13] employed the RF and K-Nearest Neighbors (KNN) models to find spinal abnormalities. They found that, with an accuracy of 85.32%, the KNN model outperformed the RF model. In [14], the SVM, LR, and bagged SVM and LR models were applied to data openly available in the Kaggle repository. The outcomes demonstrated that on the test dataset, SVM, LR, and bagged SVM and LR all performed similarly well, with an accuracy of 86.96%. However, SVM's superior recall value and miss rate set it apart from the competitors.
Many people undergo Kyphosis surgeries, although the illness may persist after the operation. Therefore, the issue is to predict whether or not the patient will still have Kyphosis after the treatment based on the patient's many attributes. Supervised learning systems are the main focus for classifying Kyphosis disease. Thus, in the present study, we apply Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Deep Neural Network (DNN) algorithms to a biomedical dataset to build models that predict the absence or presence of Kyphosis disease from the historical healthcare and personal records of Kyphosis disease patients after they have undergone surgery.
Even though various models, including LR, NB, RF, KNN, SVM, ANN, etc., have been employed to predict the Kyphosis disease, regrettably, up until recently, it was virtually impossible to increase the accuracy. Therefore, by using these supervised models with Hyperparameter tuning, this work will examine Kyphosis disorder prediction using the dataset received from Kaggle. To choose the most efficient parameters, Hyperparameter tuning was done. While the calculations were being performed, certain variables were modified. The model’s performance is maximized using that set of Hyperparameters. Then, evaluation and comparison of the model outputs were performed.
The study examines the performance of supervised machine learning and deep learning (ML/DL) models in Kyphosis diagnosis across datasets to understand which model performs the best. Accuracy, recall or sensitivity, specificity, precision, balanced accuracy score, F1 score, and AUC-ROC score of all models, including the Hyperparameter tuning, were compared. In this study, we investigated the performance of tuned Hyperparameters and stratified K-fold cross-validation on the most popular ML and DL algorithms: LR, NB, RF, SVM, KNN, and DNN. This research aims to show the biomedical community how ML and DL algorithms have been used to categorize and forecast Kyphosis disease using biomedical data.
The complete article is organized as follows: Section 2 covers the materials and methods, Section 3 covers the implementation results, Section 4 covers the discussion, and Section 5 covers the conclusion and future works.

2. Materials and Methods

This section provides the foundation for implementing the proposed approach. The preprocessing procedures are discussed first, followed by the final classification methods specific to each implementation. In the current study, models are developed using the LR, NB, RF, KNN, SVM, and DNN algorithms to identify, from historical healthcare and personal records, whether Kyphosis persists in patients who have previously undergone corrective surgery. The results of the models are then assessed and investigated in detail.

2.1. Data Preparation

The dataset for Kyphosis was downloaded from Kaggle (https://www.kaggle.com/datasets/abbasit/kyphosis-dataset, accessed on 1 December 2022). This dataset contains the records of patients who underwent corrective surgery for Kyphosis problems. Age, Number, and Start are the three inputs, and Kyphosis is the single outcome. The dataset's attributes are explained in Table 1. The data was prepared using the Scikit-Learn library. To improve ML/DL model performance, the data were scaled using the StandardScaler function of the Scikit-Learn library.
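A minimal preparation sketch is given below, assuming the Kaggle file has been saved locally as "kyphosis.csv" (the filename is an assumption); it applies the label encoding and standard scaling described above and in Section 2.4.

```python
# Hedged data-preparation sketch for the Kaggle Kyphosis dataset.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler

df = pd.read_csv("kyphosis.csv")                     # assumed local copy of the Kaggle file

X = df[["Age", "Number", "Start"]].values            # three inputs (see Table 1)
y = LabelEncoder().fit_transform(df["Kyphosis"])     # outcome encoded as 0/1

X_scaled = StandardScaler().fit_transform(X)         # zero mean, unit variance per feature
print(X_scaled.shape, y.shape)                       # expected: (81, 3) (81,)
```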

2.2. Classification Learning Algorithms

Through machine learning, deep learning, and statistical classification, a computer program is taught to make new observations or classifications based on the given data. The proposed training and result prediction models utilized the learning methods LR, NB, RF, SVM, KNN, and DNN.
Machine learning techniques such as logistic regression are used to solve classification issues. It is a predictive analysis technique that is based on the idea of probability. The output of a categorical dependent variable is predicted by logistic regression; the outcome must therefore have a discrete or categorical value. Rather than delivering the exact answer as 0 or 1, it provides probability values that fall between 0 and 1. The outcome might be yes or no, 0 or 1, true or false, etc. Probability is determined in logistic regression using the logistic (sigmoid) function, a simple S-shaped curve that converts its input into a value between 0 and 1. A logistic regression model mathematically predicts P(Y = 1) as a function of X [15,16], as presented in Equation (1), where $h_\Phi(x)$ is the output of the logistic function, $\beta_1$ is the slope, $\beta_0$ is the y-intercept, and $X$ is the independent variable.

$$h_\Phi(x) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 X)}}, \qquad 0 \le h_\Phi(x) \le 1 \tag{1}$$
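For illustration, the sketch below implements the sigmoid of Equation (1) and fits a scikit-learn LogisticRegression using the solver and iteration limit reported later in Section 3; the beta values and the synthetic data are illustrative assumptions.

```python
# Hedged logistic-regression sketch: Equation (1) plus a scikit-learn fit.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def sigmoid(x, beta0=0.0, beta1=1.0):
    """h_Phi(x) = 1 / (1 + exp(-(beta0 + beta1 * x))), bounded between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

print(sigmoid(np.array([-2.0, 0.0, 2.0])))           # approximately [0.12, 0.50, 0.88]

X, y = make_classification(n_samples=81, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)  # stand-in data (assumption)
clf = LogisticRegression(max_iter=500, solver="newton-cg").fit(X, y)
print(clf.predict_proba(X[:3]))                      # class probabilities in [0, 1]
```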
The NB algorithm is a supervised learning method that addresses classification problems based on the Bayes theorem. As a probabilistic classifier, it makes predictions based on the likelihood of an object [17,18]. The Bayes theorem, often called Bayes’ Rule, is a formula for determining how likely a hypothesis is given certain information. What happens will depend on the conditional probability. The Bayes theorem is expressed as defined in Equation (2).
$$P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)} \tag{2}$$
Here, P(A|B) is the posterior probability, the probability that hypothesis A holds given the observed evidence B; P(B|A) is the likelihood, the probability of observing the evidence given that the hypothesis is true; P(A) is the prior probability of the hypothesis before viewing the evidence; and P(B) is the marginal probability of the evidence.
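A minimal Gaussian Naive Bayes sketch follows; the var_smoothing value mirrors the tuned setting reported later in Section 3, while the synthetic data is only a stand-in for the Kyphosis features.

```python
# Hedged Gaussian Naive Bayes sketch (synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=81, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

nb = GaussianNB(priors=None, var_smoothing=1e-6).fit(X, y)
print(nb.predict(X[:5]))                 # predicted classes
print(nb.predict_proba(X[:5]).round(3))  # posterior P(class | features) from Bayes' rule
```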
The well-known ML algorithm RF is a component of the supervised learning approach. It is an ensemble classifier that builds a collection of decision trees on different subsets of the provided dataset to increase its predictive accuracy. Instead of relying exclusively on one decision tree, it predicts the final output from the predictions of all the trees, taking the class that receives the most votes [19,20]. To improve the predictive accuracy on the dataset, the random forest classifier, as its name suggests, "contains several decision trees on diverse subsets of the input dataset and takes the average".
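The sketch below fits a scikit-learn RandomForestClassifier with the tuned values reported later in Section 3 (n_estimators = 400, criterion = gini); again, the synthetic data is an illustrative stand-in for the Kyphosis features.

```python
# Hedged Random Forest sketch: an ensemble of decision trees voting on the class.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=81, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

rf = RandomForestClassifier(n_estimators=400, criterion="gini", random_state=0).fit(X, y)
print(rf.predict(X[:5]))                 # majority vote across the 400 trees
```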
SVM is a popular supervised, linear ML method that is best suited to classification problems. The kernel method makes it easier to handle the nonlinearity of the problem [21]. SVM categorizes the data into two or more groups by using a boundary to separate similar groups: each data point is represented as a point in space, a line (or hyperplane) is found that divides the categories, and the distance between a data point and that boundary is measured. SVM, one of the finest classifiers, displays some linearity; it can handle non-linear situations using a non-linear basis function and is backed by solid mathematical intuition [22]. In an n-dimensional space, the SVM rule aims to establish a single decision boundary or line so that novel data can be swiftly assigned to the appropriate class over the long run [23].
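A short SVC sketch with an RBF kernel is shown below, using the tuned values reported later in Section 3 (gamma = 0.1, C = 3) on synthetic stand-in data.

```python
# Hedged SVM sketch: an RBF-kernel classifier separating the two classes.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=81, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

svm = SVC(kernel="rbf", gamma=0.1, C=3).fit(X, y)
print(svm.n_support_)                    # support vectors per class defining the boundary
print(svm.predict(X[:5]))
```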
The K-Nearest Neighbors algorithm is a widely used supervised learning method that categorizes data according to its nearest neighbors. It keeps track of all available instances and classifies new cases based on a similarity measure. The KNN parameter K indicates the number of nearest neighbors considered when choosing the winning class by majority vote. Because it is a "lazy learner", it is best applied when the dataset is labeled, noise-free, and small. "Finding the nearest neighbors" means finding the points in the given dataset that are closest to the input point. The algorithm stores all existing cases and classifies a new case (test data) using the majority vote of its K neighbors. Data points are first converted into numerical vectors, and the algorithm operates by measuring the distance between these vectors. The distance between each data point and the test data is calculated, and the most likely set of points is used for classification. Common distance functions include the Hamming, Minkowski, and Euclidean distances. The KNN approach only stores the dataset during the training phase and, when it receives new data, assigns it to the category most similar to that data [24].
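A minimal KNN sketch follows, with the tuned neighborhood size reported later in Section 3 (n_neighbors = 9) and p = 2, i.e., the Euclidean special case of the Minkowski distance; the data is again a synthetic stand-in.

```python
# Hedged K-Nearest Neighbors sketch: majority vote of the 9 closest points.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=81, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)

knn = KNeighborsClassifier(n_neighbors=9, p=2).fit(X, y)   # p=2 selects Euclidean distance
print(knn.predict(X[:5]))
```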
A deep neural network has many layers and many neurons in each layer. In the forward direction, the output of one layer serves as the input of the next layer, and the layers are activated one after another. Data flow through the network from the input layer, through the hidden layers, to the output layer [25,26]. The difference between an ANN and a DNN is that the former comprises only one input layer, one output layer, and perhaps one hidden layer, whereas the latter has more hidden layers [2,27]. Only feed-forward neural networks are used here. Three hidden layers have been used in the proposed work to extract high-quality features from the dataset.
The Keras model was used to implement the DNN model. A grid of potential discrete Hyperparameter values has been constructed to fit the model. We keep track of the model output for each set and then choose the combination that has delivered the best outcome. The DNN models have recently become quite popular because of their exceptional capacity to learn the underlying structure of the input data vectors and the non-linear input-output mapping [25,28,29].
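A Keras sketch consistent with the tuned five-fold settings reported in Section 3 is shown below (three hidden layers of 60, 35, and 25 ReLU units, dropout of 0.2, a "uniform" kernel initializer, one sigmoid output neuron, and Adam with a learning rate of 0.0005); the exact layer ordering and dropout placement are assumptions, so this is a reconstruction rather than the authors' code.

```python
# Hedged Keras DNN sketch approximating the architecture reported in Section 3.
from tensorflow import keras
from tensorflow.keras import layers

def build_dnn(n_features=3, learning_rate=0.0005):
    model = keras.Sequential([
        keras.Input(shape=(n_features,)),
        layers.Dense(60, activation="relu", kernel_initializer="uniform"),
        layers.Dropout(0.2),                     # dropout placement is an assumption
        layers.Dense(35, activation="relu", kernel_initializer="uniform"),
        layers.Dropout(0.2),
        layers.Dense(25, activation="relu", kernel_initializer="uniform"),
        layers.Dropout(0.2),
        layers.Dense(1, activation="sigmoid"),   # Kyphosis present/absent
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_dnn()
model.summary()
# model.fit(X_scaled, y, epochs=100, batch_size=10)  # using the data prepared in Section 2.1
```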

2.3. Various Performance Measures

Table 2, with the dimensions "actual" and "predicted" and the entries "True Positives (TP)", "True Negatives (TN)", "False Positives (FP)", and "False Negatives (FN)", makes up the confusion matrix. The following performance measures assess the predictions made by the ML and DL models for the given classification problem [5,30,31]. Classification accuracy is the proportion of correct predictions out of all predictions made; it equals (TP + TN)/(TP + FP + FN + TN). Precision, also used in document retrieval, is the fraction of the instances returned by the ML and DL models that are actually positive; it equals TP/(TP + FP). Recall is also examined because it indicates how successfully a model can recognize positive samples; it equals TP/(TP + FN).
Specificity is also reviewed to assess a model's capacity to identify true negatives for each available category; it is defined as TN/(TN + FP). The F1 score is the harmonic mean of recall and precision, so it is influenced equally by both; it equals 2 × (recall × precision)/(recall + precision). The AUC-ROC statistic measures a model's ability to separate the categories. Balanced accuracy deals with unbalanced datasets in binary and multiclass classification problems and is defined as the average recall obtained in each category.
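The sketch below computes these measures from a confusion matrix with scikit-learn; the y_true, y_pred, and y_score arrays are made-up values for illustration only.

```python
# Hedged sketch of the performance measures in Section 2.3 (illustrative labels/scores).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])
y_pred  = np.array([0, 0, 1, 0, 1, 0, 1, 0, 1, 0])
y_score = np.array([0.1, 0.2, 0.7, 0.3, 0.9, 0.4, 0.8, 0.2, 0.6, 0.1])  # predicted P(class 1)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)                     # sensitivity
specificity = tn / (tn + fp)
balanced    = (recall + specificity) / 2
f1          = 2 * precision * recall / (precision + recall)
auc         = roc_auc_score(y_true, y_score)
print(accuracy, precision, recall, specificity, balanced, f1, round(auc, 3))
```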

2.4. Proposed Approach

This study illustrates the interaction between the functional design of the proposed predictive model and other significant DL and ML applications, the tuning and control of Hyperparameters, and model testing using K-fold cross-validation. The proposed system has classified people with Kyphosis sickness and healthy individuals. Different ML and DL models for Kyphosis sickness were examined for their results. The design used DL models, DNN, and popular ML models, LR, NB, RF, SVM, and KNN. Figure 1 shows the system’s framework in action.
As shown in Figure 1, a dataset associated with the Kyphosis condition of the patients was chosen. During the preparation stage, all inconsistencies that might have existed in the dataset when the data was acquired must be eliminated. The next step was to select the testing mode and the classification methods used during implementation. The previously described classification algorithms can now be put to use with the aid of Hyperparameter tuning methods. A model can be trained using existing data, making it capable of fitting the model parameters. Other important parameters, referred to as Hyperparameters, cannot be learned directly using the standard training procedure; they are usually set before the training process itself starts. These factors represent categorical model requirements, such as how complex the model is or how quickly it should learn [32]. The model's performance can be maximized using that set of Hyperparameters, which minimizes a predetermined loss function and results in better results with fewer errors. The K-fold method is often used when data are scarce or not evenly distributed, and K-fold cross-validation is also affected by how the data are sampled. When tackling classification challenges with imbalanced class distributions, stratified K-fold is preferred over plain K-fold. As a result, stratified K-fold cross-validation was used in the current study to evaluate the models.
To determine which algorithm is the best and delivers the highest level of accuracy when forecasting the outcome, the study will finally assess and contrast each of the produced algorithms. The main steps in the proposed model are summarized below:
(a)
The dataset for the Kyphosis disease was obtained from Kaggle. This dataset contains information on patients who underwent corrective surgery for the disease. The Kyphosis data set has 81 rows and 4 columns regarding adolescents who underwent spinal fusion surgery. The dataset consists of one outcome, Kyphosis, and three inputs: Age, Number, and Start. These 81 samples are then divided into a training set, a validation set, and a test set using a ratio of 6:1:2.
(b)
The data were preprocessed using the Scikit-Learn program. The label encoder was used to divide the Kyphosis region into 0s and 1s. To help the ML and DL models work better, the data were scaled using the Standard Scaler function of the Sklearn library.
(c)
A model is trained to produce new observations or classifications based on the data it is provided through the classification process in machine learning and deep learning. The proposed models used the LR, NB, RF, SVM, KNN, and DNN algorithms for training and result prediction.
(d)
Certain important parameters, known as Hyperparameters, are difficult to learn using conventional training. These parameters categorize the model’s fundamental features, such as complexity or learning rate. Hyperparameter tuning was used in this study to select the most efficient parameters. With that particular collection of Hyperparameters, the model’s performance is maximized, a preset loss function is minimized, and better outcomes with fewer errors are obtained.
(e)
The K-fold method is often used when one lacks sufficient or evenly distributed data, and K-fold cross-validation is also affected by how the data are sampled. When tackling classification challenges with imbalanced class distributions, stratified K-fold is preferred over plain K-fold. As a result, stratified K-fold cross-validation was used in the current study to evaluate the models (a minimal end-to-end sketch of steps (b)-(f) follows this list).
(f)
The study will ultimately evaluate and compare each of the developed algorithms to identify which one is the best and gives the highest level of accuracy when projecting the outcome. Predictions for the given classification problem are evaluated using machine learning and deep learning models utilizing the performance metrics accuracy, recall or sensitivity, specificity, precision, a balanced accuracy score, F1-score, and an AUC-ROC score.
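A minimal end-to-end sketch of steps (b)-(f) is given below; the Random Forest grid is illustrative, and "kyphosis.csv" is an assumed local copy of the Kaggle file.

```python
# Hedged end-to-end sketch: scaling, grid-search tuning, stratified K-fold evaluation.
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("kyphosis.csv")                          # assumed local copy of the dataset
X = StandardScaler().fit_transform(df[["Age", "Number", "Start"]])
y = LabelEncoder().fit_transform(df["Kyphosis"])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
grid = {"n_estimators": [150, 400], "criterion": ["gini", "entropy"]}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=cv).fit(X, y)

scores = cross_val_score(search.best_estimator_, X, y, cv=cv, scoring="accuracy")
print(search.best_params_, scores.mean(), scores.std())   # tuned settings, mean accuracy, SD
```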

3. Implementation and Results

Version 3.6.9 of the Python programming language was used to implement the models. The DNN model was applied with the aid of the Keras model. DL is the foundation for the deep multilayer perceptron architecture with regularization and dropout used in the developed DNN learning model. By making an analytical inquiry on the selected dataset and using ML/DL models with LR, NB, RF, SVM, KNN, and DNN algorithms supporting stratified K-fold (K = 5 and 10) cross-validation, the outcomes of the examination effort have been reported in this section. The results of the ML and DL models were presented for model evaluation and comparison after Hyperparameter tuning with stratified K-fold cross-validation was performed.
The classification accuracy of the LR model using stratified five-fold cross-validation is shown in Figure 2. It demonstrates fold-3's superior accuracy. This model validated the stratified 5-fold cross-validation with a mean accuracy of 81.47% and a standard deviation (SD) of 0.0398. After Hyperparameter tuning, the best parameter was chosen at max_iter = 500 and solver = newton-cg using 5-fold stratified cross-validation.
The stratified ten-fold cross-validation results of the LR model's classification accuracy are shown in Figure 3. It demonstrates that fold-1 has the best accuracy. This model validated the stratified 10-fold cross-validation with a mean accuracy of 82.64% and an SD of 0.0625. After Hyperparameter tuning, the best parameter was chosen with max_iter = 1000 and solver = newton-cg by stratified 10-fold cross-validation.
The classification accuracy of the NB model using stratified five-fold cross-validation is shown in Figure 4. It demonstrates fold-3's superior accuracy. This model validated the stratified 5-fold cross-validation with a mean accuracy of 81.47% and a standard deviation (SD) of 0.0398. After Hyperparameter tuning, the best parameter of the Gaussian NB classifier was chosen at priors = None and var_smoothing = 1 × 10−6 using 5-fold stratified cross-validation.
The stratified ten-fold cross-validation results of the NB model's classification accuracy are shown in Figure 5. It demonstrates that fold-1 has the best accuracy. This model validated the stratified 10-fold cross-validation with a mean accuracy of 82.64% and an SD of 0.0625. After Hyperparameter tuning, the best parameter of the Gaussian NB classifier was chosen at priors = None and var_smoothing = 1 × 10−6 using 10-fold stratified cross-validation.
The classification accuracy of the RF model using stratified five-fold cross-validation is shown in Figure 6. It demonstrates fold-3's superior accuracy. This model validated the stratified 5-fold cross-validation with a mean accuracy of 85.22% and a standard deviation (SD) of 0.0485. After Hyperparameter tuning, the best parameter was chosen at n_estimators = 400 and criterion = gini using 5-fold stratified cross-validation.
The stratified ten-fold cross-validation results of the RF model's classification accuracy are shown in Figure 7. It demonstrates that fold-5 has the best accuracy. This model validated the stratified 10-fold cross-validation with a mean accuracy of 83.89% and an SD of 0.0982. After Hyperparameter tuning, the best parameter was chosen with n_estimators = 150 and criterion = gini by stratified 10-fold cross-validation.
The classification accuracy of the SVM model using stratified five-fold cross-validation is shown in Figure 8. It demonstrates fold-3's superior accuracy. The stratified 5-fold cross-validation was supported by this model, which had a mean accuracy of 85.22% and an SD of 0.0839. Kernel = rbf, gamma = 0.1, and C = 3 were chosen as the optimal parameters following Hyperparameter tuning using stratified 5-fold cross-validation.
The classification accuracy of the SVM model using stratified ten-fold cross-validation is shown in Figure 9. It demonstrates that the accuracy of fold-6 is the highest. This model was then discovered to support stratified 10-fold cross-validation with a mean accuracy of 85.14% and an SD of 0.0940. Using stratified 10-fold cross-validation, the optimal Hyperparameter values were kernel = rbf, gamma = 0.1, and C = 8.
The classification accuracy of the KNN model using stratified five-fold cross-validation is shown in Figure 10. It demonstrates that folds 3 and 5 have the most remarkable accuracy. This model supported the stratified 5-fold cross-validation with a mean accuracy of 85.22% and an SD of 0.0740. After Hyperparameter tuning using stratified 5-fold cross-validation, the optimal parameters were determined at n_neighbors = 9 and p = 2.
The classification accuracy of the KNN model using stratified ten-fold cross-validation is shown in Figure 11. It demonstrates that the highest accuracy folds are 2, 4, 5, 6, 7, 9, and 10. This model validated the stratified 10-fold cross-validation with a mean accuracy of 84.03% and an SD of 0.0535. The best parameters were determined to be n_neighbors = 11 and p = 2, given Hyperparameter tuning by 10-fold stratified cross-validation.
The classification accuracy of the DNN model using stratified five-fold cross-validation is shown in Figure 12. It demonstrates that fold-3 has the best accuracy. This model validated the stratified 5-fold cross-validation with a mean accuracy of 87.72% and an SD of 0.0666. After Hyperparameter tuning, the best settings were: learning rate = 0.0005, epochs = 100, batch size = 10, dropout = 0.2, kernel initializer = "uniform", 60, 35, and 25 neurons in the three hidden layers with rectified linear unit (ReLU) activation functions, a sigmoid activation in the final layer with one neuron, and the Adam optimizer.
The classification accuracy of the DNN model using stratified ten-fold cross-validation is shown in Figure 13. It demonstrates that the accuracy of fold-10 is the highest. This model validated the stratified 10-fold cross-validation with a mean accuracy of 87.64% and an SD of 0.0561. After Hyperparameter tuning, the best settings were: learning rate = 0.0005, epochs = 50, batch size = 20, dropout = 0.2, kernel initializer = "uniform", 30, 30, and 25 neurons in the three hidden layers with rectified linear unit (ReLU) activation functions, a sigmoid activation in the final layer with one neuron, and the Adam optimizer.

Simulation Results

We conducted a comparative study of every algorithm utilized in this study. The study's objective was met by creating ML and DL algorithms to recognize Kyphosis disease. Using ML/DL models with these learning algorithms, stratified K-fold cross-validation with Hyperparameter tuning has been investigated. Each test underwent 5- and 10-fold cross-validation. The most widely used choices for K-fold cross-validation are 5-fold and 10-fold.
As indicated in Table 3 and represented in Figure 14, the LR model reached 81.47% accuracy with an SD of 0.0398 by undertaking 5-fold stratified cross-validation, followed by the NB model at 81.47% accuracy with an SD of 0.0398, the RF model at 85.22% accuracy with an SD of 0.0485, the SVM model at 85.22% accuracy with an SD of 0.0839, the KNN model at 85.22% accuracy with an SD of 0.0740, and the DNN model at 87.72% accuracy with an SD of 0.0666.
As indicated in Table 3 and Figure 15, the LR model achieved 82.64% accuracy with an SD of 0.0625 by doing 10-fold stratified cross-validation, the NB model gained 82.64% accuracy with an SD of 0.0625, the RF model achieved 83.89% accuracy with an SD of 0.0982, the SVM model gained 85.14% accuracy with an SD of 0.0940, the KNN model achieved 84.03% accuracy with an SD of 0.0535, and the DNN model achieved 87.64% accuracy with an SD of 0.0561. In addition to classification accuracy, Table 3 and Table 4 and Figure 16 and Figure 17 give a comparative analysis of key performance characteristics for the ML and DL models based on stratified 5-fold and 10-fold cross-validation, respectively.

4. Results and Discussion

Based on LR, NB, RF, KNN, SVM, and DNN models, the effects of Hyperparameter tuning on the dataset for Kyphosis disease have been thoroughly examined. The ML/DL model’s performance was maximized using that set of Hyperparameters. We compared these models using several metrics, including accuracy, recall or sensitivity, specificity, and precision. We also considered F1 and AUC-ROC scores. According to our findings, the LR model used 5-fold cross-validation to obtain 81.47% accuracy with an SD of 0.0398, a recall value of 0.30, a specificity of 0.95, a balanced accuracy score of 0.63, a precision of 0.68, an F1-score of 0.36, and an AUC-ROC score of 0.63. With a 0.48 recall value, 0.91 specificities, a 0.70 balanced accuracy score, 0.53 precision, a 0.44 F1-score, and 0.70 AUC-ROC scores, the NB model had an accuracy rate of 81.47% with an SD of 0.0398.
With a recall value of 0.48, a specificity of 0.95, a balanced accuracy score of 0.72, a precision of 0.70, an F1-score of 0.51, and an AUC-ROC score of 0.72, the RF model had an accuracy rate of 85.22% with an SD of 0.0485. With a 0.48 recall value, 0.95 specificities, a 0.72 balanced accuracy score, 0.68 precision, a 0.51 F1-score, and a 0.72 AUC-ROC score, the SVM model had an accuracy rate of 85.22% with an SD of 0.0839. With a 0.53 recall value, 0.94 specificities, a 0.74 balanced accuracy score, 0.63 precision, a 0.52 F1-score, and a 0.74 AUC-ROC score, the KNN model had an accuracy rate of 85.22% with an SD of 0.0740. On the other hand, the DNN model achieved 87.72% accuracy with an SD of 0.0666, a 0.62 recall value, 0.95 specificities, a 0.79 balanced accuracy score, 0.90 precision, a 0.65 F1-score, and 0.79 AUC-ROC scores.
Similarly, with 10-fold cross-validation, the LR model achieved 82.64% accuracy with an SD of 0.0625, a 0.40 recall value, 0.96 specificity, a 0.68 balanced accuracy score, 0.45 precision, a 0.39 F1-score, and a 0.68 AUC-ROC score. With a 0.50 recall value, 0.92 specificity, a 0.71 balanced accuracy score, 0.58 precision, a 0.48 F1-score, and a 0.71 AUC-ROC score, the NB model had an accuracy rate of 82.64% with an SD of 0.0625. With a 0.35 recall value, 0.96 specificity, a 0.65 balanced accuracy score, 0.60 precision, a 0.43 F1-score, and a 0.65 AUC-ROC score, the RF model had an accuracy rate of 83.89% with an SD of 0.0982. With a 0.55 recall value, 0.94 specificity, a 0.75 balanced accuracy score, 0.68 precision, a 0.54 F1-score, and a 0.75 AUC-ROC score, the SVM model had an accuracy rate of 85.14% with an SD of 0.0940. With a 0.50 recall value, 0.94 specificity, a 0.72 balanced accuracy score, 0.55 precision, a 0.46 F1-score, and a 0.72 AUC-ROC score, the KNN model had an accuracy rate of 84.03% with an SD of 0.0535. At the same time, the DNN model had an 87.64% accuracy rate with an SD of 0.0561, a 0.55 recall, 0.97 specificity, a 0.76 balanced accuracy score, 0.70 precision, a 0.57 F1-score, and a 0.76 AUC-ROC score.
Overall, the Hyperparameter-tuned DNN models excelled over the other models. The DNN model was shown to perform better than all other models in classification accuracy, with accuracy validated by stratified 10-fold and 5-fold cross-validation of 87.64% with SD of 0.0561 and 87.72% with SD of 0.0666, respectively. Regarding classification accuracy, recall or sensitivity, specificity, a balanced accuracy score, precision, an F1 score, and an AUC-ROC score that supported stratified 5- and 10-fold cross-validation, the DNN model outperformed a wide variety of other models, as shown in Table 3 and Table 4, respectively. This model likewise received the highest score based on these additional performance metrics.
If we compare our results with the previous work reported by others, we can find some significant accuracy improvements in our proposed model. In Dankwa and Zheng’s work [5], the ANN (3-6-6-1) model surpassed all the other models by attaining 85.19% and 86.42% based on the 5-fold and 10-fold cross-validation, respectively. Our proposed Hyperparameter-tuned DNN model outperformed all other ML models by achieving 87.72% and 87.64% with stratified 5-fold and 10-fold cross-validation, respectively. Our proposed work improves the accuracy by 2.53% and 1.22%, respectively, with the same dataset used in both studies. Compared to ANN, the numerous layers in DNN enable models to learn complicated features and carry out more demanding computational tasks more efficiently.
Furthermore, because DNNs can learn faster, they are more effective in terms of training time. In comparison to the other methods presented in this work and the literature, the DNN model is the most suitable for the diagnosis of Kyphosis. This is because DL algorithms can eventually learn from their own mistakes: they can check the accuracy of their estimations or results and make the required corrections. Traditional machine learning algorithms, on the other hand, require varying levels of human input to assess output accuracy.

5. Conclusions

DL methods are significantly more efficient in Kyphosis illness identification and diagnosis than traditional computing technologies. The DNN model outperformed all other ML models regarding classification accuracy, achieving 87.64% and 87.72% with stratified 10-fold and 5-fold cross-validation, respectively. These results exceed the most recent analyses in various academic publications. If we compare the experimental results with the literature on the same dataset, our proposed work improves the accuracy by 2.53% and 1.22% based on the 5-fold and 10-fold cross-validation, respectively. The DNN model is the most suitable for the diagnosis of Kyphosis in comparison to the other methods presented in this work and the literature. Compared to an ANN, the numerous layers in a DNN enable models to learn complicated features and carry out more demanding computational tasks more efficiently. Furthermore, DNNs are more efficient in terms of training time, as they can learn faster.
It is advised that when a patient has a clinical procedure, the DNN model be trained to detect and foresee Kyphosis disease. Medical experts can use this study’s findings to correctly predict if a patient will still have Kyphosis after surgery. We propose that DL should be adopted and utilized as a crucial and necessary tool throughout the broad range of resolving biological queries. Future research can explore more DL methods to increase the accuracy of Kyphosis disease prediction.

Author Contributions

Conceptualizations, A.S.C. and U.K.L.; methodology, A.S.C., U.K.L. and A.K.G.; software, A.S.C., P.M. and K.R.; validation, A.S.C., F.H. and I.K.; formal analysis, A.S.C., F.H., I.K. and K.R.; data curation, A.S.C., P.M. and R.R.G.; writing—original draft preparation, A.S.C. and U.K.L.; writing—review and editing, A.K.G., P.M. and R.R.G.; visualization, A.S.C., F.H., I.K. and K.R.; supervision, A.S.C., U.K.L. and P.M.; project administration, F.H., I.K. and K.R.; funding acquisition, F.H. and I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Acknowledgments

The authors thank Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R236), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rastogi, V. Machine learning algorithms: Overview. Int. J. Adv. Res. Eng. Technol. 2020, 11, 122–132.
2. Schmidhuber, J. Deep Learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117.
3. Khan, R.; Srivastava, A.K.; Gupta, M.; Kumari, P.; Kumar, S. Medicolite-Machine Learning-Based Patient Care Model. Comput. Intell. Neurosci. 2022, 2022, 8109147.
4. Chatter, P.; Swetha Ramana, D.V.; Suzain, S.; Suma Latha, P.V. Prediction of Kyphosis Disease Using Machine Learning. In Lecture Notes in Networks and Systems; Springer: Singapore, 2021; Volume 167.
5. Dankwa, S.; Zheng, W. Special issue on using machine learning algorithms in the prediction of kyphosis disease: A comparative study. Appl. Sci. 2019, 9, 3322.
6. Singh, S.K.; Khamparia, A.; Sinha, A. Explainable Machine Learning Model for Diagnosis of Parkinson Disorder. In Intelligent Systems Reference Library; Springer: Singapore, 2022; Volume 222.
7. Singla, D.; Veqar, Z. Association Between Forward Head, Rounded Shoulders, and Increased Thoracic Kyphosis: A Review of the Literature. J. Chiropr. Med. 2017, 16, 220–229.
8. Zhang, P.; Peng, W.; Wang, X.; Luo, C.; Xu, Z.; Zeng, H.; Ge, L. Minimum 5-year follow-up outcomes for single-stage transpedicular debridement, posterior instrumentation and fusion in the management of thoracic and thoracolumbar spinal tuberculosis in adults. Br. J. Neurosurg. 2016, 30, 666–671.
9. Galbusera, F.; Casaroli, G.; Bassani, T. Artificial intelligence and machine learning in spine research. JOR Spine 2019, 2, e1044.
10. Ren, G.; Yu, K.; Xie, Z.; Wang, P.; Zhang, W.; Huang, Y.; Wu, X. Current Applications of Machine Learning in Spine: From Clinical View. Glob. Spine J. 2022, 12, 1827–1840.
11. Hazra, A.; Mandal, S.K.; Gupta, A.; Mukherjee, A.; Mukherjee, A. Heart Disease Diagnosis and Prediction Using Machine Learning and Data Mining Techniques: A Review. Adv. Comput. Sci. Technol. 2017, 10, 2137–2159.
12. Kuo, C.Y.; Yu, L.C.; Chen, H.C.; Chan, C.L. Comparison of models for the prediction of medical costs of spinal fusion in Taiwan diagnosis-related groups by machine learning algorithms. Healthc. Inform. Res. 2018, 24, 29–37.
13. Abdullah, A.A.; Yaakob, A.; Ibrahim, Z. Prediction of Spinal Abnormalities Using Machine Learning Techniques. In Proceedings of the 2018 International Conference on Computational Approach in Smart Systems Design and Applications, ICASSDA, Kuching, Malaysia, 15–17 August 2018.
14. Raihan-Al-Masud, M.; Rubaiyat Hossain Mondal, M. Data-driven diagnosis of spinal abnormalities using feature selection and machine learning algorithms. PLoS ONE 2020, 15, e0228422.
15. Tyagi, H.; Agarwal, A.; Gupta, A.; Goel, K.; Srivastava, A.K.; Srivastava, A.K. Prediction and diagnosis of diabetes using machine learning classifiers. Int. J. Forensic Softw. Eng. 2022, 1, 335–347.
16. Singh, Y.V.; Singh, P.; Khan, S.; Singh, R.S. A Machine Learning Model for Early Prediction and Detection of Sepsis in Intensive Care Unit Patients. J. Healthc. Eng. 2022, 2022, 9263391.
17. Goyal, N.; Chandra Trivedi, M. WITHDRAWN: Breast cancer classification and identification using machine learning approaches. In Materials Today: Proceedings; Elsevier Ltd.: Amsterdam, The Netherlands, 2020.
18. Ayeldeen, H.; Elfattah, M.A.; Shaker, O.; Hassanien, A.E.; Kim, T.H. Case-based retrieval approach of clinical breast cancer patients. In Proceedings of the 2015 3rd International Conference on Computer, Information and Application, CIA 2015, Yeosu, Republic of Korea, 21–23 May 2015.
19. Sharma, S.K.; Lilhore, U.K.; Simaiya, S.; Trivedi, N.K. An improved random forest algorithm for predicting the COVID-19 pandemic patient health. Ann. Rom. Soc. Cell Biol. 2021, 25, 67–75.
20. Balyan, A.K.; Ahuja, S.; Lilhore, U.K.; Sharma, S.K.; Manoharan, P.; Algarni, A.D.; Raahemifar, K. A Hybrid Intrusion Detection Model Using EGA-PSO and Improved Random Forest Method. Sensors 2022, 22, 5986.
21. Singh, S.K.; Sinha, A.; Yadav, S. Performance Analysis of Machine Learning Algorithms for Erythemato-Squamous Diseases Classification. In Proceedings of the IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics, ICDCECE 2022, Ballari, India, 23–24 April 2022.
22. Palaniappan, S.; Awang, R. Intelligent heart disease prediction system using data mining techniques. In Proceedings of the AICCSA 08—6th IEEE/ACS International Conference on Computer Systems and Applications, Doha, Qatar, 31 March–4 April 2008.
23. Lilhore, U.K.; Simaiya, S.; Pandey, H.; Gautam, V.; Garg, A.; Ghosh, P. Breast Cancer Detection in the IoT Cloud-based Healthcare Environment Using Fuzzy Cluster Segmentation and SVM Classifier. In Lecture Notes in Networks and Systems; Springer Nature Singapore: Singapore, 2022; Volume 356.
24. Guleria, K.; Sharma, A.; Lilhore, U.K.; Prasad, D. Breast Cancer Prediction and Classification Using Supervised Learning Techniques. J. Comput. Theor. Nanosci. 2020, 17, 2519–2522.
25. Hamdi, M.; Bourouis, S.; Rastislav, K.; Mohmed, F. Evaluation of Neuro Images for the Diagnosis of Alzheimer's Disease Using Deep Learning Neural Network. Front. Public Health 2022, 10, 35.
26. Miao, K.H.; Miao, J.H. Coronary heart disease diagnosis using deep neural networks. Int. J. Adv. Comput. Sci. Appl. 2018, 9.
27. Lilhore, U.K.; Imoize, A.L.; Lee, C.C.; Simaiya, S.; Pani, S.K.; Goyal, N.; Li, C.T. Enhanced Convolutional Neural Network Model for Cassava Leaf Disease Identification and Classification. Mathematics 2022, 10, 580.
28. Poongodi, M.; Hamdi, M.; Malviya, M.; Sharma, A.; Dhiman, G.; Vimal, S. Diagnosis and combating COVID-19 using wearable Oura smart ring with deep learning methods. Pers. Ubiquitous Comput. 2022, 26, 25–35.
29. Poongodi, M.; Hamdi, M.; Wang, H. Image and audio caps: Automated captioning of background sounds and images using deep learning. Multimed. Syst. 2022, 1–9.
30. Trivedi, N.K.; Simaiya, S.; Lilhore, U.K.; Sharma, S.K. COVID-19 pandemic: Role of machine learning & deep learning methods in diagnosis. Int. J. Curr. Res. Rev. 2021, 13, 150–156.
31. Lilhore, U.K.; Poongodi, M.; Kaur, A.; Simaiya, S.; Algarni, A.D.; Elmannai, H.; Hamdi, M. Hybrid Model for Detection of Cervical Cancer Using Causal Analysis and Machine Learning Techniques. Comput. Math. Methods Med. 2022, 2022, 4688327.
32. Elgeldawi, E.; Sayed, A.; Galal, A.R.; Zaki, A.M. Hyperparameter tuning for machine learning algorithms used for arabic sentiment analysis. Informatics 2021, 8, 79.
Figure 1. The framework of the proposed system for detecting Kyphosis disease.
Figure 2. The classification accuracies of the LR model using the stratified five-fold cross-validation.
Figure 3. The classification accuracies of the LR model using the stratified ten-fold cross-validation.
Figure 4. The classification accuracies of the NB model using the stratified five-fold cross-validation.
Figure 5. The classification accuracies of the NB model using the stratified ten-fold cross-validation.
Figure 6. The classification accuracies of the RF model using the stratified five-fold cross-validation.
Figure 7. The classification accuracies of the RF model using the stratified ten-fold cross-validation.
Figure 8. The classification accuracies of the SVM model using the stratified five-fold cross-validation.
Figure 9. The classification accuracies of the SVM model using the stratified ten-fold cross-validation.
Figure 10. The classification accuracies of the KNN model using the stratified five-fold cross-validation.
Figure 11. The classification accuracies of the KNN model using the stratified ten-fold cross-validation.
Figure 12. The classification accuracies of the DNN model using the stratified five-fold cross-validation.
Figure 13. The classification accuracies of the DNN model using the stratified ten-fold cross-validation.
Figure 14. The pictorial representation of comparative accuracy estimation using different ML/DL models based on the stratified five-fold cross-validation.
Figure 15. The pictorial representation of comparative accuracy estimation using different ML/DL models based on the stratified ten-fold cross-validation.
Figure 16. The pictorial representation of comparative performance estimation using different ML/DL models based on the stratified five-fold cross-validation.
Figure 17. The pictorial representation of comparative performance estimation using different ML/DL models based on the stratified ten-fold cross-validation.
Table 1. Description of Dataset.
Characteristic | Explanation
Kyphosis | Whether or not the Kyphosis issue persisted after the procedure
Age | Monthly age of the patient
Number | Involved vertebrae within the procedure
Start | The count of the surgery's first or highest vertebra that was affected

Table 2. Confusion Matrix.
Actual \ Predicted | 0 | 1
0 | TN | FP
1 | FN | TP

Table 3. The comparative performance estimation using different ML/DL models based on the stratified five-fold cross-validation.
ML/DL Models | Classification Accuracy in % (with SD) | Recall or Sensitivity | Specificity | Balanced Accuracy Score | Precision | F1-Score | AUC-ROC Score
LR | 81.47 (SD = 0.0398) | 0.30 | 0.95 | 0.63 | 0.68 | 0.36 | 0.63
NB | 81.47 (SD = 0.0398) | 0.48 | 0.91 | 0.70 | 0.53 | 0.44 | 0.70
RF | 85.22 (SD = 0.0485) | 0.48 | 0.95 | 0.72 | 0.70 | 0.51 | 0.72
SVM | 85.22 (SD = 0.0839) | 0.48 | 0.95 | 0.72 | 0.68 | 0.51 | 0.72
KNN | 85.22 (SD = 0.0740) | 0.53 | 0.94 | 0.74 | 0.63 | 0.52 | 0.74
DNN | 87.72 (SD = 0.0666) | 0.62 | 0.95 | 0.79 | 0.90 | 0.65 | 0.79

Table 4. The comparative performance estimation using different ML/DL models based on the stratified ten-fold cross-validation.
ML/DL Models | Classification Accuracy in % (with SD) | Recall or Sensitivity | Specificity | Balanced Accuracy Score | Precision | F1-Score | AUC-ROC Score
LR | 82.64 (SD = 0.0625) | 0.40 | 0.96 | 0.68 | 0.45 | 0.39 | 0.68
NB | 82.64 (SD = 0.0625) | 0.50 | 0.92 | 0.71 | 0.58 | 0.48 | 0.71
RF | 83.89 (SD = 0.0982) | 0.35 | 0.96 | 0.65 | 0.60 | 0.43 | 0.65
SVM | 85.14 (SD = 0.0940) | 0.55 | 0.94 | 0.75 | 0.68 | 0.54 | 0.75
KNN | 84.03 (SD = 0.0535) | 0.50 | 0.94 | 0.72 | 0.55 | 0.46 | 0.72
DNN | 87.64 (SD = 0.0561) | 0.55 | 0.97 | 0.76 | 0.70 | 0.57 | 0.76
