Article

Sepsis Prediction by Using a Hybrid Metaheuristic Algorithm: A Novel Approach for Optimizing Deep Neural Networks

1 Faculty of Engineering and Architecture, Department of Software Engineering, İstanbul Beykent University, Istanbul 34398, Turkey
2 Faculty of Engineering and Architecture, Department of Computer Engineering, İstanbul Beykent University, Istanbul 34398, Turkey
3 Intensive Care Unit, Bakirkoy Dr. Sadi Konuk Training and Research Hospital, Istanbul 34147, Turkey
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(12), 2023; https://doi.org/10.3390/diagnostics13122023
Submission received: 7 February 2023 / Revised: 11 May 2023 / Accepted: 21 May 2023 / Published: 10 June 2023

Abstract

The early diagnosis of sepsis reduces the risk of the patient’s death. In the literature, gradient-based algorithms are applied to train the neural network models used to predict sepsis; however, these algorithms can become stuck at local minima of the solution space. In recent years, swarm intelligence and evolutionary approaches have shown promising results. In this study, a novel hybrid metaheuristic algorithm is proposed to optimize the weights of a deep neural network and applied to the early diagnosis of sepsis. The proposed algorithm aims to reach the global minimum by combining the exploration and exploitation of particles in Particle Swarm Optimization (PSO) with the mental search operator of the Human Mental Search (HMS) algorithm as a local search strategy. The benchmark functions used to compare the performance of HMS, PSO, and HMS-PSO reveal that the proposed approach is more reliable, durable, and adjustable than the other applied algorithms. HMS-PSO is then integrated with a deep neural network (HMS-PSO-DNN), and the study focuses on predicting sepsis with HMS-PSO-DNN on a dataset of 640 patients aged 18 to 60. The HMS-PSO-DNN model achieved a better mean squared error (MSE) than the other algorithms in terms of accuracy, robustness, and performance, reaching an MSE of 0.22 over 30 independent runs.

1. Introduction

Sepsis is a serious disease that requires intensive treatment and a long intensive care process, and it can be fatal if not diagnosed in time. Given the current epidemic, the limited capacity of critical care units can be a major problem for both patients and healthcare professionals, so antibiotic treatment should begin as soon as possible. Several studies in the literature use neural networks to predict sepsis [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. In recent studies, sepsis and sepsis mortality have been predicted using multilayer perceptrons, deep neural networks, Long Short-Term Memory (LSTM) networks, and Recurrent Neural Networks (RNN). These models employ feedforward and back-propagation learning algorithms. Feedforward neural networks trained with a back-propagation algorithm that runs with a fixed learning rate can face many problems arising from the hidden layer structure and the number of neurons, including overlearning, undertraining, and failure to reach the global minimum. Gradient-based algorithms are used in these models to adjust the weights and back-propagate the error. Gradient-based methods start from a single point of the solution space and gradually move toward the global minimum, but they sometimes get stuck at a local minimum. Metaheuristic algorithms, on the other hand, provide mechanisms for escaping local minima.
This work proposes a model to address the global minimum problem and related issues. Heuristic and metaheuristic algorithms can be used instead of gradient-based methods to solve real-world problems. Metaheuristic algorithms are population-based: they start from multiple points and then let the candidate solutions cooperate. Such algorithms have the potential to reach the global minimum, or a result close to it.
Some studies apply swarm intelligence to different artificial intelligence fields, the training of artificial neural networks, and machine learning [16,17,18,19]. Many hybrid algorithms have been proposed and analyzed; however, they are not designed to tune the parameters of neural networks across different research problems.
A standard version of each algorithm was chosen and extended with the ability to modify the weights of a new type of neural network; dynamic problems are covered by other variants [20,21]. The multi-swarm method, for example, was created for dynamic problems, in which the optimum changes over time, rather than for static ones. In this study, the problem is static: the parameters are treated as vectors of weights that remain fixed within each iteration, defining a solution space in which the global minimum is sought.
To predict sepsis in patients, this study focuses on integrating a metaheuristic algorithm with a deep neural network (DNN). Particle Swarm Optimization (PSO) and Human Mental Search (HMS) are combined and used to tune the parameters of the DNN, and the proposed model (HMS-PSO-DNN) is then applied to the sepsis dataset. The remainder of this section reviews metaheuristic and gradient-based algorithms used for neural network optimization in the literature.
Barkat et al. suggested the chicken swarm optimization method. The suggested approach is compared to back-propagation neural networks, artificial bee colony back-propagation, and artificial neural networks using a genetic algorithm. They claim that their method surpasses the existing algorithms in accuracy and mean squared error [22].
To improve the efficiency of neural network training in supervised learning, Ahmad and Mansour employed the cuckoo search algorithm, one of the metaheuristic methods for addressing optimization problems, in a multilayer feed-forward artificial neural network. They tested the network’s accuracy by solving classification problems with this trained network. They compared their findings to the Particle Swarm Optimization (PSO) and Guaranteed Convergence Particle Swarm Optimization (GCPSO) algorithms and claimed that the suggested strategy outperformed the others on four separate classification problems [23].
Irmak and Gülcü optimized the weight and bias settings when training an artificial neural network using the butterfly optimization method, which was established by studying the behavior of butterflies. They compared their suggested model to the bat algorithm artificial neural network (ANN) model, the State of Matter Search algorithm model, and the back-propagation ANN model. According to the findings of the learning models they used for the XOR, balloon, breast cancer, and heart datasets, the model they presented outperformed the others [24]. Shi and Li assessed the performance of residential buildings using neural networks combined with ant colony optimization [25]. Sivagaminathan and Ramakrishnan presented a hybrid approach to feature subset selection based on neural networks and ant colony optimization [26]. For training feed-forward neural networks, Socha and Blum employed the ant colony optimization algorithm [27].
Dorigo et al. created ant colony optimization and employed it to solve various optimization problems [28]. By integrating PSO with Newton’s laws of motion, Beheshti and Shamsuddin created the centripetal accelerated PSO (CAPSO) approach. They claim that this extension of PSO improves the ANN’s convergence and learning speed, and they employed the proposed approach in conjunction with an ANN to diagnose diseases [29]. Mirjalili trained the multilayer perceptron using the gray wolf algorithm [30].
The firefly method was proposed by Brajevic and Tuba for training feed-forward neural networks [31]. Nandy et al. introduced the nature-inspired firefly algorithm for training back-propagation neural networks [32]. Kowalski and Łukasik trained ANNs using the krill herd algorithm [33]. Devikanniga and Raj suggested an ANN model for osteoporosis classification based on monarch butterfly optimization [34]. Jaddi et al. optimized a neural network using a modified bat-inspired method [35]. Yaghini et al. established a hybrid method combining momentum back-propagation with PSO, integrating the capabilities of metaheuristic and greedy gradient-based algorithms. They claimed that using time-varying parameters increases the traditional PSO’s search capability and that the constriction factor ensures particle convergence [36]. To solve time series problems, Alweshah employed the firefly algorithm with an ANN [37].
Karaboğa and Öztürk employed the Artificial Bee Colony algorithm to train an ANN for pattern classification [38]. Leung et al. suggested an improved genetic algorithm for tuning the structure and parameters of a neural network [39]. Yang and Kao created a robust evolutionary algorithm for training neural networks [40]. Mirjalili suggested a hybrid approach for ANN training that combines the gravitational search algorithm with PSO [41]. Donate et al. suggested an ANN model that uses the genetic algorithm, differential evolution, and an estimation of distribution algorithm for time series forecasting [42].
Da and Xiurun proposed an improved PSO-based ANN that uses a simulated annealing technique [43]. Khan and Sahai compared the bat algorithm (BA), genetic algorithm (GA), PSO, back-propagation (BP), and Levenberg–Marquardt (LM) algorithms for training feed-forward neural networks in an e-learning context [44]. Parsian et al. created a hybrid neural network model for melanoma detection by optimizing the ANN with the gray wolf optimization technique; they claim that their strategy improved the multilayer perceptron’s performance [45].
Yelghi et al. suggested a modified version of the firefly algorithm. In related work, metaheuristics and the Adaptive Neuro-Fuzzy Inference System (ANFIS) have been used to adjust the parameters of clustering algorithms and to address difficult problems such as training neural networks [46,47,48,49,50]. Optimization techniques have also been applied to complicated problems in the fields of finance and software engineering [51,52,53]. Resolving these issues will open new doors for solving other real-world problems. This work’s contributions and novelty are as follows:
  • A hybrid metaheuristic algorithm is proposed.
  • By focusing on the local minimum problem encountered by gradient-based algorithms, the approach aims to avoid overlearning, insufficient learning, and memorization.
  • A novel deep neural network design whose weights are optimized by the hybrid metaheuristic algorithm is proposed.
  • The suggested model and topology aim to reduce the risk of sepsis in intensive care units.
The rest of this study is organized as follows: Section 2 presents the topology of the proposed algorithm integrated with the DNN. Section 3 presents the proposed optimization algorithm. Section 4 describes the experimental studies: the demonstration of the convergence of the proposed algorithm on benchmark functions, the experiments with benchmark datasets, and the experiments with the sepsis dataset. Section 5 discusses the experimental results: the convergence results of the proposed algorithm, the test results of the proposed algorithm integrated with the DNN on the benchmark datasets, and the experimental results for the sepsis dataset. Section 6 concludes with the general findings of the whole paper.

2. The Topology of the Proposed Algorithm Integrated with the DNN

Several deep learning architectures have been described in the literature. Deep Convolutional Neural Networks are used in image and sound recognition, image processing, and natural language processing, whereas LSTMs and their variants are utilized in speech recognition, text and signal processing, and drug discovery. Compared with the deep neural network architecture used here for sepsis prediction on a dataset that solely comprises numerical data, those models are more expensive to run and less practical, requiring extra steps such as data preparation and feature extraction. Deep neural networks can solve complex, nonlinear real-world problems by forming relationships between the data.
The HMS-PSO algorithm is utilized to optimize the weights and biases (parameters) of the neural network structure; the integrated model is called HMS-PSO-DNN. Figure 1 illustrates a flowchart of HMS-PSO-DNN. The model was trained using the sigmoid activation function in each neuron and applied to the benchmark datasets and the real dataset, which are discussed in the next section. All evaluations in this study are based on the mean squared error (MSE), calculated by Equation (1) [18].
$MSE = \frac{1}{U}\sum_{u=1}^{U}\left(y_u - y_u'\right)^2$
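For illustration, Equation (1) corresponds to the following minimal NumPy sketch (our own illustrative code, not the authors’ implementation), where y holds the observed outputs and y_pred the network predictions:

import numpy as np

def mse(y, y_pred):
    # Equation (1): mean squared error over U samples
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    return np.mean((y - y_pred) ** 2)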
The framework of HMS-PSO-DNN is shown in Figure 2. As indicated earlier, the parameters of each layer must be tuned. All parameters are converted into vectors (particles), and each vector constitutes one candidate solution in the metaheuristics. Population-based metaheuristic algorithms maintain several solutions per iteration and optimize them according to the paradigm of the algorithm. The number of adjustable parameters is calculated by Equation (2) [19].
$N_N = (N_i + 1) \times h_1 + (h_1 + 1) \times h_2 + (h_2 + 1) \times h_3 + (h_3 + 1) \times N_o$
To tune the hyperparameters, the model was trained using the sigmoid activation function in the neurons and applied to a dataset with 1200 instances, running for 500 iterations. Based on the computed MSE and the experimental stochastic strategy, the “24-18-9-3-2” model was selected as the best topology structure; Table 1 below shows the candidate structures. Here, AUC is the area under the curve and MSE is the mean squared error.
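As a quick check of Equation (2), the following sketch (illustrative only; the helper name count_parameters is ours) computes the number of adjustable parameters for the selected “24-18-9-3-2” topology:

def count_parameters(layers):
    # layers = [inputs, h1, h2, ..., outputs]; each connection block contributes
    # (n_prev + 1) * n_next parameters (weights plus one bias per target neuron)
    return sum((layers[i] + 1) * layers[i + 1] for i in range(len(layers) - 1))

print(count_parameters([24, 18, 9, 3, 2]))  # prints 659 adjustable parameters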

3. The Proposed Optimization Algorithm

In this part, the Particle Swarm Optimization (PSO) and Human Mental Search (HMS) population-based metaheuristic algorithms are combined into the HMS-PSO algorithm. The proposed HMS-PSO algorithm employs the HMS mental search operator, which imitates human behavior, together with the exploration and exploitation techniques of PSO. Depending on the scale of the problem and the optimization challenge, the classic PSO algorithm finds the minimum through a combination of exploration, driven by the social coefficient, and exploitation, driven by the cognitive coefficient. The advantage of HMS is the mental search, which can be seen as an extended form of exploitation. By combining the advantages of these algorithms, we provide a new hybrid algorithm. The pseudo code of the suggested HMS-PSO algorithm is displayed below [54].
In PSO, Equation (3) is used to update the particles, where v is the particle’s velocity, w is the inertia weight, c1 and c2 are the cognitive and social coefficients, and r1 and r2 are random vectors that regulate the stochastic impact of the cognitive and social components on the particle. Gbest is the global best solution, and Pbest is the local best solution of each particle.
$v = w \cdot v + c_1 \cdot r_1 \cdot (Pbest - x) + c_2 \cdot r_2 \cdot (Gbest - x)$
In Equation (3), w represents the inertia weight, c1 and c2 are the acceleration coefficients, and r1 and r2 are randomly selected values. A high inertia coefficient can make a particle travel faster, but it can also make the particle drift away from the swarm. The choice of cognitive coefficient may trap the particle in its local optimal solution, while the chosen social coefficient may cause it to imitate the global best solution without discovering any alternative or superior solutions.
The HMS-PSO method aids the particle in discovering the optimal solution by using the mental search operator. In Equation (4), d is the standard normal density evaluated at the random number mc. The distribution is symmetrical because the majority of the data hover close to the mean.
$d = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{mc^2}{2}}$
According to the HMS method, in Equation (5) Beta is drawn from a uniform distribution over the range 0.3 to 1.99 [55].
$sigma = \sin\left(\frac{\pi \cdot Beta}{2}\right)$
S denotes the number of successive values generated for each idea in the mental search. In this stage, a proposal based on the Levy flight, a random number between the lower and upper bounds, determines a new solution. The Levy flight is a special kind of random walk in which the step size is determined by the Levy distribution; the next location depends only on the present position in the solution space. A Levy flight consists of numerous small steps interspersed with occasional long jumps. In this way, the Levy flight simultaneously enhances the quality of exploration and exploitation.
An important distinction is that the Levy flight is more effective than Brownian motion for navigating ambiguous regions [55]. The Levy formula for S is shown in Equation (6), where u and v are numbers drawn from the standard normal distribution, i is the index of the particle, xi is the particle’s position (initialized uniformly at random), and 0.01 is the Levy distribution scaling coefficient [55].
$S = d \cdot 0.01 \cdot \frac{u \cdot sigma}{v^{1/Beta}} \cdot x_i$
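A minimal sketch of the mental search step of Equations (4)–(6) is given below (our own illustrative code, not the authors’ implementation; we take the absolute value of v to keep the step real, and the function name is ours):

import numpy as np

rng = np.random.default_rng()

def mental_search_step(x_i):
    # Equation (4): standard normal density at a random number mc
    mc = rng.standard_normal()
    d = np.exp(-mc ** 2 / 2) / np.sqrt(2 * np.pi)
    # Equation (5): sigma computed from Beta ~ Uniform(0.3, 1.99)
    beta = rng.uniform(0.3, 1.99)
    sigma = np.sin(np.pi * beta / 2)
    # Equation (6): Levy-flight step scaled by 0.01 around the current position
    u = rng.standard_normal(x_i.shape)
    v = rng.standard_normal(x_i.shape)
    return d * 0.01 * (u * sigma / np.abs(v) ** (1 / beta)) * x_i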
To obtain the best solution, the mental search operator is applied in a diverse manner using the Levy flight distribution. In Equation (7), Lbest is the particle’s current position, Pbest is the local best position, and S is the new position produced by the mental search. While the algorithm runs, the fitness values of the positions are compared and the local best position is saved.
if Lbest < S then Pbest = Lbest,  if S < Lbest then Pbest = S
The distribution scale factor is represented by α, whereas Beta represents the distribution index, which ranges from 0.3 to 1.99. The pseudo code of the suggested HMS-PSO algorithm is presented below in Algorithm 1 [54].
Algorithm 1. The proposed algorithm pseudo code.
Initialize particle population
for t = 1 : maximum generation
    Initialize global and local best particles pb and pg
    for i = 1 : population size
        v_i(t+1) = w(t) * v_i(t) + c1 * r1 * (pb − x_i(t)) + c2 * r2 * (pg − x_i(t))
        if v_i(t+1) > v_max then v_i(t+1) = v_max
        else if v_i(t+1) < v_min then v_i(t+1) = v_min
        end
        x_i(t+1) = x_i(t) + v_i(t+1)
        if x_i(t+1) > x_max then x_i(t+1) = x_max
        else if x_i(t+1) < x_min then x_i(t+1) = x_min
        end
    end
    Initialize the mental search for each particle
    for i = 1 : population size
        S = d * 0.01 * (u * sigma / v^(1/Beta)) * x_i
        x_s = S
        if f(x_i(t)) < f(pb(t)) then pb(t) = x_i(t)
        else if f(x_s(t)) < f(pb(t)) then pb(t) = x_s(t)
        end
    end
    f(pg(t)) ← min_i f(pb_i(t))
    w(t) = ((t_max − t) / t_max) * (w_max − w_min) + w_min
end
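To make Algorithm 1 concrete, a minimal, self-contained Python sketch of the HMS-PSO loop applied to the hyper-sphere benchmark is shown below. This is our own illustrative reading of the pseudo code, not the authors’ implementation; in particular, the velocity clamp of 0.1 * (ub − lb) and the lower inertia bound w_min = 0.4 are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Hyper-sphere benchmark (F1 in Table 2); global minimum 0 at the origin
    return np.sum(x ** 2)

def hms_pso(f, dim=30, pop=50, iters=100, lb=-100.0, ub=100.0,
            c1=1.49, c2=1.49, w_max=0.729, w_min=0.4):
    x = rng.uniform(lb, ub, (pop, dim))              # particle positions
    v = np.zeros((pop, dim))                         # particle velocities
    pb = x.copy()                                    # local best positions
    pb_fit = np.array([f(p) for p in x])
    pg = pb[np.argmin(pb_fit)].copy()                # global best position
    v_max = 0.1 * (ub - lb)                          # assumed velocity clamp
    w = w_max
    for t in range(iters):
        for i in range(pop):                         # PSO update with clamping
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pb[i] - x[i]) + c2 * r2 * (pg - x[i])
            v[i] = np.clip(v[i], -v_max, v_max)
            x[i] = np.clip(x[i] + v[i], lb, ub)
        for i in range(pop):                         # mental search, Equations (4)-(6)
            mc = rng.standard_normal()
            d = np.exp(-mc ** 2 / 2) / np.sqrt(2 * np.pi)
            beta = rng.uniform(0.3, 1.99)
            sigma = np.sin(np.pi * beta / 2)
            u, vv = rng.standard_normal(dim), rng.standard_normal(dim)
            xs = d * 0.01 * (u * sigma / np.abs(vv) ** (1 / beta)) * x[i]
            for cand in (x[i], xs):                  # keep the better candidate
                fit = f(cand)
                if fit < pb_fit[i]:
                    pb[i], pb_fit[i] = cand.copy(), fit
        pg = pb[np.argmin(pb_fit)].copy()            # refresh the global best
        w = (iters - t) / iters * (w_max - w_min) + w_min  # decaying inertia
    return pg, f(pg)

best_x, best_fit = hms_pso(sphere)
print(best_fit)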

4. Experimental Studies

This section demonstrates the proposed method’s convergence, in terms of minimum and maximum values, using benchmark functions. Because the convergence of metaheuristic algorithms cannot be proven mathematically, empirical convergence criteria are used to assess them. The network design for the sepsis classification problem is established by integrating the developed hybrid metaheuristic algorithm with a deep neural network, and the network architecture obtained through this experimental research is used to predict sepsis.
The proposed HMS-PSO algorithm and the compared PSO and HMS algorithms were implemented in the Python programming language in the Jupyter Notebook environment installed on Windows 10. The processor was an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz. The Python libraries used included NumPy, Math, Random, Matplotlib, Pandas, and SciPy. The iris, wine, and breast cancer benchmark datasets were obtained using the Sklearn package. Sklearn Preprocessing MinMaxScaler was used to standardize the data, Sklearn SimpleImputer was used to replace missing data, and Sklearn train-test-split was used to separate the dataset into training and test sets. The data were divided into two pieces: 80% for training and 20% for testing. One-hot encoding was applied to the outputs to improve the classification accuracy.
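A minimal sketch of the described preprocessing pipeline, using the iris benchmark for illustration, is given below (our own code; the sparse_output argument assumes scikit-learn ≥ 1.2, while older versions use sparse=False):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

X, y = load_iris(return_X_y=True)
X = SimpleImputer(strategy="mean").fit_transform(X)   # replace missing values
X = MinMaxScaler().fit_transform(X)                   # scale features to [0, 1]
y = OneHotEncoder(sparse_output=False).fit_transform(y.reshape(-1, 1))  # one-hot outputs
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)  # 80/20 split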

4.1. Demonstration of the Convergence of the Proposed Algorithm Using Benchmark Functions

The performance of the HMS-PSO algorithm was compared to the PSO and HMS algorithms using benchmark test functions. Each algorithm was run 30 times on both the 30-dimensional and 2-dimensional benchmark functions. According to the performance measurements, the suggested HMS-PSO method outperforms the PSO and HMS algorithms.
Metaheuristic algorithms cannot be validated with a mathematical proof because they are implemented with stochastic strategies. Instead, to assess whether the algorithms converge to the global best position or close to it, the proposed and compared algorithms were applied to benchmark functions. The benchmark functions used are defined, with their limits and features, in Table 2; for the remaining benchmark functions, refer to [56,57,58].
The experiment was performed with 30 runs on the 30- and 2-dimensional functions. For all algorithms, we used a population size of 50 and 100 iterations, i.e., approximately 5000 function evaluations per run. The parameter values most widely used in the literature were selected for the convergence analysis. The determined parameters of the algorithms for the benchmark functions are presented in Table 3.
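For reference, two of the Table 2 benchmark functions can be implemented as follows (a minimal NumPy sketch of our own):

import numpy as np

def rosenbrock(x):
    # F5 in Table 2; global minimum 0 at x = (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):
    # F9 in Table 2; global minimum 0 at the origin
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)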

4.2. Experiments for the Proposed Algorithm with Benchmark Datasets

To investigate and acquire a proper topology, the model was trained using the sigmoid activation function in the neurons and applied to the benchmark datasets. Different DNN topologies were evaluated on the benchmark datasets based on the MSE [26]. The proposed model was then applied to the different datasets and compared, in terms of MSE, to the other models. The parameter values most frequently used in the literature were selected for the analysis of DNN performance on the benchmark datasets; the determined parameters are presented in Table 4.

4.3. Experiments for the Proposed Algorithm with Sepsis Dataset

After the benchmark experiments, the proposed model was applied to the real dataset. The dataset includes the vital records of 640 patients aged 18 to 60. All data were acquired under the direction of doctors in the intensive care unit. The study considered the patient’s gender and height, along with RR (respiratory rate), LYM (lymphocyte count), LYM100 (lymphocyte ratio), WBC (white blood cell count), neu (neutrophil count), plt (thrombocyte count), CRP (C-reactive protein), procalcitonin, HR (heart rate), spo2 (peripheral capillary oxygen saturation), Temp (body temperature), BPSYS (systolic blood pressure), bpDias (diastolic blood pressure), bpMean (mean blood pressure), Apache2_first (disease severity classification), Sofa_first (sequential organ failure assessment), and Apache2_Mort (disease mortality severity classification).
The deep neural network applied to the sepsis dataset was trained using the HMS-PSO, HMS, PSO, Gradient, Adadelta, Stochastic Gradient Descent (SGD), RMSprop, and Adam optimization methods. The mean squared error (MSE) and root mean squared error (RMSE) values on the test dataset were obtained for comparison. The deep neural network was run 30 times independently with each algorithm, with 2000 iterations per run. For the back-propagation techniques, the default learning rate was used, and the sigmoid activation function was chosen for every method. The determined DNN topology was applied to the sepsis dataset.
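To clarify how a metaheuristic can train such a network, the sketch below shows one way a flat parameter vector could be decoded into the “24-18-9-3-2” sigmoid network and scored with the MSE fitness that HMS-PSO minimizes. This is our own illustrative decoding, not the authors’ code; all function and variable names are assumptions.

import numpy as np

LAYERS = [24, 18, 9, 3, 2]   # the selected "24-18-9-3-2" topology

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_and_predict(theta, X):
    # Slice the flat vector theta into per-layer weights and biases,
    # sized exactly as counted by Equation (2)
    a, pos = X, 0
    for n_in, n_out in zip(LAYERS[:-1], LAYERS[1:]):
        W = theta[pos:pos + n_in * n_out].reshape(n_in, n_out)
        pos += n_in * n_out
        b = theta[pos:pos + n_out]
        pos += n_out
        a = sigmoid(a @ W + b)
    return a

def fitness(theta, X, y):
    # The objective each optimizer minimizes: MSE of the decoded network
    return np.mean((decode_and_predict(theta, X) - y) ** 2)

Under this reading, each metaheuristic searches over vectors theta of length 659, the parameter count given by Equation (2).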

5. Discussion

5.1. Convergence Results of the Proposed Algorithm

The convergence analysis of the algorithms was performed using benchmark functions. The summarized details are compared in Table 5. The proposed algorithm, HMS, and PSO were applied to the benchmark functions. As can be seen in Table 5, HMS-PSO showed the best performance for convergence analysis.
The second column of the table shows the best algorithm(s) for the related function; based on the “No Free Lunch” theorem, no single algorithm is best for all optimization functions [59]. For several functions, two or three algorithms show approximately the same results.
The minimum and maximum values were observed over the 30 runs of each algorithm.
The proposed HMS-PSO algorithm combines PSO with the advantage of HMS, namely its extended exploitation. In the PSO algorithm, the exploration and exploitation operations sometimes become stuck at a local minimum because they cannot cooperate and balance each other. To solve this problem of PSO, we used the exploitation operation of HMS, which takes small steps in the deep regions of the solution space and large steps over its hills. The exploitation operation of HMS performs a random walk that changes dynamically in the solution space, inspired by the Levy flight distribution.
The HMS-PSO algorithm converges on benchmark functions such as Rosenbrock, Rastrigin, Ackley, Griewank, Whitley, Schwefel_12, Schwefel_21, Hyper_Sphere, and McCormick, and on the remaining benchmark functions it attains values close to the global best. In line with the “No Free Lunch” theorem, the proposed algorithm is not expected to give the best results across all functions among the whole set of metaheuristic algorithms. Considering the entire experiment, we observe that the proposed algorithm performs suitably across different functions.

5.2. The Test Results of the Proposed Algorithm Integrated with DNN for Benchmark Datasets

The proposed algorithm was integrated with the DNN, and the integrated model was used to determine the best topology on the sample dataset; “24-18-9-3-2” was chosen as the best topology structure. Table 6 gives the statistical analysis of all algorithms for the benchmark datasets: iris, breast cancer, and wine. According to the MSE values, the proposed model shows the best performance and robustness compared to the other models.
Based on the 30 runs of all algorithms on the related dataset, the average (avg.), maximum (max.), minimum (min.), standard deviation (std.), and variance (var.) were computed.

5.3. Experimental Results for Sepsis Dataset

Statistical analyses based on MSE and RMSE for all algorithms run on the sepsis dataset are indicated in Table 7 and Table 8, respectively. The values are included to show the robustness and performance of the algorithms on the sepsis dataset. The proposed algorithm has the best MSE and RMSE values.

6. Conclusions

This research presents a new hybrid metaheuristic algorithm for sepsis prediction. We first proposed a hybrid metaheuristic algorithm for global optimization and demonstrated its effectiveness on benchmark functions. We then integrated the proposed metaheuristic algorithm into a deep neural network.
In the experiments, all models were applied to the benchmark datasets to show reliability, robustness, and correctness. According to the statistical and other analysis metrics, the proposed model shows the best performance in comparison with the other applied models. After establishing a correct and robust model, we applied it to the sepsis dataset.
The data were obtained from Sadi Konuk Training and Research Hospital. Of the 640 individuals between the ages of 18 and 60, 216 had sepsis, whereas the remaining 424 did not.
The dataset parameters were chosen by medical specialists in the intensive care unit. The output values were converted into two-dimensional vectors of 0s and 1s by applying one-hot encoding with the Sklearn preprocessing library.
The deep neural network applied to the sepsis dataset was trained using the HMS-PSO, HMS, PSO, Gradient, Adadelta, Stochastic Gradient Descent (SGD), RMSprop, and Adam optimization methods, and the MSE and RMSE values on the test dataset were obtained for comparison. The network was run 30 times independently for each algorithm. The sigmoid activation function was chosen for every method because it is more successful in binary classification, and the default learning rate was used for the back-propagation techniques. The determined DNN topology was applied to the sepsis dataset.
The statistical MSE and RMSE analysis shows that the proposed algorithm is robust and achieves the best performance on the sepsis dataset; it succeeded in solving the classification problem of sepsis prediction. Since the sepsis dataset was imbalanced, the model learned the 0 class better than the 1 class. A lower MSE value is associated with the population size and a more appropriate selection of parameters. The model could provide better results with a larger and more balanced dataset.

Author Contributions

Conceptualization, U.K. and A.Y.; methodology, U.K. and A.Y.; software, U.K.; validation, U.K., A.Y. and S.A.; formal analysis, U.K. and A.Y.; investigation, U.K.; resources, U.K.; data curation, S.A.; writing—original draft preparation, U.K.; supervision, A.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of Bakirkoy Dr. Sadi Konuk Training and Research Hospital (protocol code 2020/528 and date of approval 07.12.2020) for studies involving humans.

Informed Consent Statement

Patient consent was not required due to the retrospective nature of the study.

Data Availability Statement

Not applicable.

Acknowledgments

This work was generated from U.K.’s doctorate thesis under the supervision of A.Y., a doctoral thesis adviser, and the sepsis dataset was examined by S.A., a specialist.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saqib, M.; Sha, Y.; Wang, M.D. Early Prediction of Sepsis in EMR Records Using Traditional ML Techniques and Deep Learning LSTM Networks. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2018, 2018, 4038–4041. [Google Scholar] [CrossRef]
  2. Zhang, Z.; Pan, Q.; Ge, H.; Xing, L.; Hong, Y.; Chen, P. Deep learning-based clustering robustly identified two classes of sepsis with both prognostic and predictive values. EBioMedicine 2020, 62, 103081. [Google Scholar] [CrossRef] [PubMed]
  3. Perng, J.W.; Kao, I.H.; Kung, C.T.; Hung, S.C.; Lai, Y.H.; Su, C.M. Mortality Prediction of Septic Patients in the Emergency Department Based on Machine Learning. J. Clin. Med. 2019, 8, 1906. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Rodríguez, A.; Mendoza, D.; Ascuntar, J.; Jaimes, F. Supervised classification techniques for prediction of mortality in adult patients with Sepsis. Am. J. Emerg. Med. 2020, 45, 392–397. [Google Scholar] [CrossRef]
  5. Rafiei, A.; Rezaee, A.; Hajati, F.; Gheisari, S.; Golzan, M. SSP: Early prediction of sepsis using fully connected LSTM-CNN model. Comput. Biol. Med. 2021, 128, 104110. [Google Scholar] [CrossRef] [PubMed]
  6. Wernly, B.; Mamandipoor, B.; Baldia, P.; Jung, C.; Osmani, V. Machine learning predicts mortality in septic patients using only routinely available ABG variables: A multicenter entrée evaluation. Int. J. Med. Inform. 2021, 145, 104312. [Google Scholar] [CrossRef]
  7. Meiring, C.; Dixit, A.; Harris, S.; MacCallum, N.S.; Brealey, D.A.; Watkinson, P.J.; Jones, A.; Ashworth, S.; Beale, R.; Brett, S.J.; et al. Optimal intensive care outcome prediction over time using machine learning. PLoS ONE 2018, 13, e0206862. [Google Scholar] [CrossRef] [Green Version]
  8. Kamal, S.A.; Yin, C.; Qian, B.; Zhang, P. An interpretable risk prediction model for healthcare with pattern attention. BMC Med. Inform. Decis. Mak. 2020, 20 (Suppl. S11), 307. [Google Scholar] [CrossRef]
  9. Bedoya, A.D.; Futoma, J.; Clement, M.E.; Corey, K.; Brajer, N.; Lin, A.; Simons, M.G.; Gao, M.; Nichols, M.; Balu, S.; et al. Machine learning for early detection of sepsis: An internal and temporal validation study. JAMIA Open 2020, 3, 252–260. [Google Scholar] [CrossRef]
  10. Parreco, J.P.; Hidalgo, A.E.; Badilla, A.D.; Ilyas, O.; Rattan, R. Predicting central line-associated bloodstream infections and mortality using supervised machine learning. J. Crit. Care 2018, 45, 156–162. [Google Scholar] [CrossRef]
  11. Ahmed, F.S.; Ali, L.; Joseph, B.A.; Ikram, A.; Ul Mustafa, R.; Bukhari, S.A.C. A statistically rigorous deep neural network approach to predict mortality in trauma patients admitted to the intensive care unit. J. Trauma Acute Care Surg. 2020, 89, 736–742. [Google Scholar] [CrossRef]
  12. Zhang, D.; Yin, C.; Hunold, K.M.; Jiang, X.; Caterino, J.M.; Zhang, P. An interpretable deep-learning model for early prediction of sepsis in the emergency department. Patterns 2021, 2, 100196. [Google Scholar] [CrossRef]
  13. Wickramaratne, S.D.; Shaad Mahmud, M.D. Bi-Directional Gated Recurrent Unit Based Ensemble Model for the Early Detection of Sepsis. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. 2020, 2020, 70–73. [Google Scholar] [CrossRef]
  14. Tang, C.Q.; Li, J.Q.; Xu, D.Y.; Liu, X.B.; Hou, W.J.; Lyu, K.Y.; Xiao, S.C.; Xia, Z.F. Comparison of the machine learning method and logistic regression model in prediction of acute kidney injury in severely burned patients. Zhonghua Shao Shang Za Zhi 2018, 34, 343–348. (In Chinese) [Google Scholar] [CrossRef]
  15. Shashikumar, S.P.; Josef, C.S.; Sharma, A.; Nemati, S. DeepAISE—An interpretable and recurrent neural survival model for early prediction of Sepsis. Artif. Intell. Med. 2021, 113, 102036. [Google Scholar] [CrossRef]
  16. Jahed Armaghani, D.; Hajihassani, M.; Tonnizam, E.; Marto, A.; Noorani, S.A. Blasting-Induced Fly rock and Ground Vibration Prediction through Expert Artificial Neural Network Based on Particle Swarm Optimization. Arab. J. Geosci. 2013, 7, 5383–5396. [Google Scholar] [CrossRef]
  17. Bousmaha, R.; Hamou, R.M.; Abdelmalek, A. Training Feedforward Neural Networks Using Hybrid Particle Swarm Optimization, Multi-Verse Optimization. In Proceedings of the 1st International Conference on Innovative Trends in Computer Science, CITSC 2019, Guelma, Algeria, 20–21 November 2019. [Google Scholar]
  18. Rahkar Farshi, T. Battle royale optimization algorithm. Neural Comput. Appl. 2021, 33, 1139–1157. [Google Scholar] [CrossRef]
  19. Agahian, S.; Akan, T. Battle royale optimizer for training multi-layer perceptron. Evol. Syst. 2021, 13, 563–575. [Google Scholar] [CrossRef]
  20. Xia, X.; Gui, L.; Zhan, Z.-H. A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting. Appl. Soft Comput. 2018, 67, 126–140. [Google Scholar] [CrossRef]
  21. Song, H.; Qin, A.K.; Tsai, P.-W.; Liang, J.J. Multitasking Multi-Swarm Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 1937–1944. [Google Scholar]
  22. Ali, B.; Lashari, S.A.; Sharif, W.; Khan, A.; Ullah, K.; Ramli, D.A. An Efficient Learning Weight of Elman Neural Network with Chicken Swarm Optimization Algorithm. Procedia Comput. Sci. 2021, 192, 3060–3069. [Google Scholar] [CrossRef]
  23. Kawam, A.A.L.; Mansour, N. Metaheuristic Optimization Algorithms for Training Artificial Neural Networks. Int. J. Comput. Inf. Technol. 2012, 1, 156–161. [Google Scholar]
  24. Irmak, B.; Gülcü, Ş. Training of the feed-forward artificial neural networks using butterfly optimization algorithm. MANAS J. Eng. 2021, 9, 160–168. [Google Scholar] [CrossRef]
  25. Shi, H.; Li, W. Artificial neural networks with ant colony optimization for assessing the performance of residential buildings. In Proceedings of the 2009 International Conference on Future BioMedical Information Engineering (FBIE), Sanya, China, 13–14 December 2009; pp. 379–382. [Google Scholar]
  26. Sivagaminathan, R.K.; Ramakrishnan, S. A hybrid approach for feature subset selection using neural networks and ant colony optimization. Expert Syst. Appl. 2007, 33, 49–60. [Google Scholar] [CrossRef]
  27. Socha, K.; Blum, C. An ant colony optimization algorithm for continuous optimization: Application to feed-forward neural network training. Neural Comput. Appl. 2007, 16, 235–247. [Google Scholar] [CrossRef]
  28. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 1996, 26, 29–41. [Google Scholar] [CrossRef] [Green Version]
  29. Beheshti, Z.; Shamsuddin, S.M.H. CAPSO: Centripetal accelerated particle swarm optimization. Inf. Sci. 2014, 258, 54–79. [Google Scholar] [CrossRef]
  30. Mirjalili, S. How effective is the Gray Wolf optimizer in training multilayer perceptrons. Appl. Intell. 2015, 43, 150–161. [Google Scholar] [CrossRef]
  31. Brajevic, I.; Tuba, M. Training feed-forward neural networks using firefly algorithm. Recent Adv. Knowl. Eng. Syst. Sci. 2013, 10, 156–161. [Google Scholar]
  32. Nandy, S.; Sarkar, P.P.; Das, A. Analysis of a nature-inspired firefly algorithm based back-propagation neural network training. arXiv 2012, arXiv:1206.5360. [Google Scholar]
  33. Kowalski, P.A.; Łukasik, S. Training neural networks with krill herd algorithm. Neural Process. Lett. 2016, 44, 5–17. [Google Scholar] [CrossRef] [Green Version]
  34. Devikanniga, D.; Raj, R.J.S. Classification of osteoporosis by artificial neural network based on monarch butterfly optimization algorithm. Healthc. Technol. Lett. 2018, 5, 70–75. [Google Scholar] [CrossRef]
  35. Jaddi, N.S.; Abdullah, S.; Hamdan, A.R. Optimization of neural network model using the modified bat-inspired algorithm. Appl. Soft Comput. 2015, 37, 71–86. [Google Scholar] [CrossRef]
  36. Yaghini, M.; Khoshraftar, M.M.; Fallahi, M. A hybrid algorithm for artificial neural network training. Eng. Appl. Artif. Intell. 2013, 26, 293–301. [Google Scholar] [CrossRef]
  37. Alweshah, M. Firefly algorithm with artificial neural network for time series problems. Res. J. Appl. Sci. Eng. Technol. 2014, 7, 3978–3982. [Google Scholar] [CrossRef]
  38. Karaboga, D.; Ozturk, C. Neural networks training by artificial bee colony algorithm on pattern classification. Neural Netw. World 2009, 19, 279. [Google Scholar]
  39. Leung, F.H.F.; Lam, H.; Ling, S.; Tam, P.-S. Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans. Neural Netw. 2003, 14, 79–88. [Google Scholar] [CrossRef] [Green Version]
  40. Yang, J.-M.; Kao, C.-Y. A robust evolutionary algorithm for training neural networks. Neural Comput. Appl. 2001, 10, 214–230. [Google Scholar] [CrossRef]
  41. Mirjalili, S.; Hashim, S.Z.M.; Sardroudi, H.M. Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl. Math. Comput. 2012, 218, 11125–11137. [Google Scholar] [CrossRef]
  42. Donate, J.P.; Li, X.; Sánchez, G.G.; de Miguel, A.S. Time series forecasting by evolving artificial neural networks with genetic algorithms, differential evolution and estimation of distribution algorithm. Neural Comput. Appl. 2011, 22, 11–20. [Google Scholar] [CrossRef]
  43. Da, Y.; Xiurun, G. An improved PSO-based ANN with simulated annealing technique. Neurocomputing 2005, 63, 527–533. [Google Scholar] [CrossRef]
  44. Khan, K.; Sahai, A. A comparison of BA, GA, PSO, BP, and LM for training feed-forward neural networks in an e-learning context. Int. J. Intell. Syst. Appl. 2012, 4, 23. [Google Scholar] [CrossRef]
  45. Parsian, A.; Ramezani, M.; Ghadimi, N. A hybrid neural network-gray wolf optimization algorithm for melanoma detection. Biomed. Res.-Tokyo 2017, 28, 3408–3411. [Google Scholar]
  46. Yelghi, A.; Köse, C. A modified firefly algorithm for global minimum optimization. Appl. Soft Comput. 2018, 62, 29–44. [Google Scholar] [CrossRef]
  47. Yelghi, A.; Köse, C.; Yelghi, A.S.; Shahkar, A. Automatic fuzzy-DBSCAN algorithm for morphological and overlapping datasets. J. Syst. Eng. Electron. 2020, 31, 1245–1253. [Google Scholar] [CrossRef]
  48. Kumar, A.; Pant, S.; Ram, M.; Yadav, O. (Eds.) Meta-Heuristic Optimization Techniques: Applications in Engineering; Walter de Gruyter GmbH & Co KG.: Berlin, Germany, 2022; Volume 10. [Google Scholar]
  49. Tavangari, S.H.; Yelghi, A. Features of metaheuristic algorithm for integration with ANFIS model. In Proceedings of the 2022 International Conference on Theoretical and Applied Computer Science and Engineering (ICTASCE), Istanbul, Turkey, 29 September–1 October 2022. [Google Scholar]
  50. Yelghi, A.; Yelghi, A.S.; Shahkar, A. Estimation of Triangle Relationship with Artificial Neural Networks of Exchange Rate, Inflation, and Interest. In Proceedings of the 5th International Research Congress on Social Sciences, Berlin, Germany, 17–19 December 2021. [Google Scholar]
  51. Yelghi, A. Investigate clustering and association rules and provide customers’ favorite products. In Proceedings of the 1st National Conference Mathematics and Its Applications in Engineering Sciences, Baghdad, Iraq, 7–8 November 2012; p. 11805. [Google Scholar]
  52. Yelghi, A. A new strategy reverse engineering of business process from BPEL to formal specification. In Proceedings of the 2nd National Conference Soft Computing and IT, NCSCIT2012, Islamic Azad University, Mahshahr Branch, (In Persian). IBandar-e Mahshahr, Iran, 23 February 2012. [Google Scholar]
  53. Yelghi, A.S.; Gürsoy, M.; Yelghi, A. The relationship between inflation and exchange rate of interest rates determined by loan type in the banking market. J. Empir. Econ. Soc. Sci. 2021, 3, 21–42. [Google Scholar]
  54. Kaya, U.; Yılmaz, A. A Hybrid Metaheuristic Algorithm Based on Mental Search: PSO-HMS. In Proceedings of the International Conference on Science, Engineering Management and IT (SEMIT 2022), Ankara, Turkey, 2–3 February 2022; pp. 257–272. [Google Scholar]
  55. Mousavirad, S.J.; Ebrahimpour-Komleh, H. Human mental search: A new population-based metaheuristic optimization algorithm. Appl. Intell. 2017, 47, 850–887. [Google Scholar] [CrossRef]
  56. Eröz, E.; Tanyıldız, E. Güncel Metasezgisel Optimizasyon Algoritmaların Performans Karşılaştırması [Performance Comparison of Current Metaheuristic Optimization Algorithms]. In Proceedings of the 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 28–30 September 2018; pp. 1–16. [Google Scholar] [CrossRef]
  57. Available online: https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/f4d6be8c-ddaa-4aa4-93b6-07ff656cc94b/f630b041-ae4e-44fb-90a9-21c636968b69/previews/Test_Functions.m/index.html (accessed on 30 December 2021).
  58. Available online: https://www.sfu.ca/~ssurjano/shubert.html (accessed on 30 December 2021).
  59. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flowchart of HMS-PSO-DNN.
Figure 2. A sample topology of the HMS-PSO-DNN algorithm.
Table 1. Network models.
Number of Input Layer Neurons | Hidden Layer Neurons | Number of Output Layer Neurons | MSE | AUC
24 | 40-40-20-10-3 | 2 | 0.1791 | 0.5
24 | 20-40-9-6-3 | 2 | 0.1791 | 0.5
24 | 10-40-9-3 | 2 | 0.1791 | 0.5
24 | 10-20-30-5-2 | 2 | 0.1791 | 0.5
24 | 40-40-90-30 | 2 | 0.0791 | 0.83
24 | 50-15-18-3 | 2 | 0.1041 | 0.78
24 | 10-30-10-30 | 2 | 0.1041 | 0.76
24 | 70-35-30-7 | 2 | 0.1583 | 0.78
24 | 10-40-10-5 | 2 | 0.1375 | 0.71
24 | 30-25-10-3 | 2 | 0.0875 | 0.78
24 | 25-15-10-2 | 2 | 0.1416 | 0.65
24 | 21-10-5-3 | 2 | 0.175 | 0.51
24 | 20-20-30-50-50 | 2 | 0.0833 | 0.84
24 | 30-30-70-10 | 2 | 0.1791 | 0.5
24 | 3-15-9-2 | 2 | 0.125 | 0.796
24 | 21-12-9-3 | 2 | 0.1291 | 0.81
24 | 18-9-3 | 2 | 0.0791 | 0.897
24 | 18-12-9-3 | 2 | 0.075 | 0.8
24 | 18-12-6 | 2 | 0.1333 | 0.66
24 | 30-50-6-3 | 2 | 0.1791 | 0.5
24 | 15-18-12-3 | 2 | 0.1791 | 0.5
24 | 7-15-9-3 | 2 | 0.1416 | 0.74
24 | 18-12-6-3 | 2 | 0.1125 | 0.786
24 | 9-15-8-3 | 2 | 0.1041 | 0.73
24 | 12-18-9-3 | 2 | 0.1541 | 0.815
24 | 16-8-5-2 | 2 | 0.1791 | 0.5
Table 2. The benchmark functions with features.
Function | Function Definition | V no | Boundary | F min
Hyper Sphere | $F_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
Shifted Schwefel’s Problem 1.2 | $F_2(x) = \sum_{i=1}^{n}|x_i| + \prod_{i=1}^{n}|x_i|$ | 30 | [−10, 10] | 0
Schwefel 1.2 | $F_3(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | [−100, 100] | 0
Schwefel 2.21 | $F_4(x) = \max_i\{|x_i|,\ 1 \le i \le n\}$ | 30 | [−100, 100] | 0
Rosenbrock | $F_5(x) = \sum_{i=1}^{n-1}\left[100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2\right]$ | 30 | [−30, 30] | 0
Step2 | $F_6(x) = \sum_{i=1}^{n}\left(\lfloor x_i + 0.5 \rfloor\right)^2$ | 30 | [−100, 100] | 0
Quartic | $F_7(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1)$ | 30 | [−1.28, 1.28] | 0
Schwefel | $F_8(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500, 500] | −418.9829 × 5
Rastrigin | $F_9(x) = \sum_{i=1}^{n}\left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ | 30 | [−5.12, 5.12] | 0
Ackley | $F_{10}(x) = -20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\right) + 20 + e$ | 30 | [−32, 32] | 0
Griewank | $F_{11}(x) = \tfrac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0
Six Hump | $F_{16}(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | [−5, 5] | −1.0316
Branin | $F_{17}(x) = \left(x_2 - \tfrac{5.1}{4\pi^2}x_1^2 + \tfrac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \tfrac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | [−5, 5] | 0.398
Gold Stein and Price | $F_{18}(x) = \left[1 + \left(x_1 + x_2 + 1\right)^2\left(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2\right)\right] \times \left[30 + \left(2x_1 - 3x_2\right)^2\left(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2\right)\right]$ | 2 | [−2, 2] | 3
Table 3. Parameters of algorithms for benchmark functions.
Parameter | Definition | Value
W | Inertia weight | 0.729
C1 | Local (cognitive) coefficient | 1.49
C2 | Global (social) coefficient | 1.49
ML | Mental lower bound | 0.3
MH | Mental higher bound | 1.99
UB | Upper bound of particles | 10
LB | Lower bound of particles | 2
N_var | Number of dimensions | 30, 2
Cs | Cluster size | 5
Table 4. Determined parameters of algorithms for benchmark datasets.
Parameter | Definition | Value
W | Inertia weight | 0.6
C1 | Local (cognitive) coefficient | 1.45
C2 | Global (social) coefficient | 1.45
ML | Mental lower bound | 0.3
MH | Mental higher bound | 1.99
UB | Upper bound of particles | 10
LB | Lower bound of particles | 2
N_var | Number of dimensions for sepsis | 640
Cs | Cluster size | 5
Table 5. The summary of convergence performance for the algorithms implemented.
Function | Best-Performing Algorithm | Min (Value)
Rosenbrock | HMS-PSO | 0.000000005
Hyper_Sphere | HMS-PSO | 0.04846808
Schwefel_12 | HMS-PSO | 0.35624187
Schwefel_21 | HMS-PSO | 1.49143852
Step2 | HMS-PSO, PSO, HMS | 0–0
Quartic | HMS-PSO, PSO, HMS | 0–0
Schwefel | HMS | −4.00 × 10^197
Rastrigin | HMS-PSO | 11.06655678
Ackley | HMS-PSO | 0.10581148
Griewank | HMS-PSO | 0.58381321
Branin | HMS-PSO, PSO | 0.39788736
Six_Hump_Camel | HMS-PSO, PSO | −1.03162845
Goldstein Price | HMS-PSO, PSO, HMS | 3–3
Dejong | HMS-PSO, PSO | 0–0
Schubert | HMS-PSO, PSO | −186.7309
Whitley | HMS-PSO | 0.00000013
Matyas | HMS | 4.99 × 10^−20
Zakharov | HMS-PSO, PSO | 0.05094591
McCormick | HMS-PSO | −149.2967921
Bohachevsky | HMS | 6.64 × 10^−4
Michalewicz | HMS-PSO, PSO | −1.91988
Table 6. The statistical analysis of all algorithms for benchmark datasets.
Dataset | Model | MSE (Avg.) | MSE (Min.) | MSE (Max.) | MSE (Std.) | MSE (Var.)
Iris | PSO-DNN | 0.21066 | 0.00000 | 1.50000 | 0.28854 | 0.08325
Iris | PROPOSED ALGORITHM | 0.06440 | 0.00000 | 0.43330 | 0.09901 | 0.00980
Iris | HMS-DNN | 1.29167 | 1.29167 | 1.29167 | 0.00000 | 0.00000
Wine | PSO-DNN | 0.41570 | 0.00000 | 1.00000 | 0.31860 | 0.10151
Wine | PROPOSED ALGORITHM | 0.16850 | 0.00000 | 0.94440 | 0.22495 | 0.05060
Wine | HMS-DNN | 1.37931 | 1.37931 | 1.37931 | 0.00000 | 0.00000
Breast Cancer | PSO-DNN | 0.05850 | 0.01750 | 0.37720 | 0.06931 | 0.00480
Breast Cancer | PROPOSED ALGORITHM | 0.04150 | 0.01750 | 0.08770 | 0.01842 | 0.00033
Breast Cancer | HMS-DNN | 0.60440 | 0.60440 | 0.60440 | 0.00000 | 0.00000
Table 7. The statistical analysis based on MSE for all algorithms on sepsis.
Optimizer | MSE (Avg.) | MSE (Max.) | MSE (Min.) | MSE (Std.) | MSE (Var.)
HMS-PSO | 0.25111 | 0.27995 | 0.22005 | 0.01567 | 0.00025
HMS | 0.35807 | 0.35807 | 0.35807 | 0.00000 | 0.00000
PSO | 0.34360 | 0.35807 | 0.27409 | 0.02535 | 0.00064
Grad | 0.46931 | 0.64193 | 0.35742 | 0.13833 | 0.01914
Adadelta | 0.39312 | 0.68294 | 0.31966 | 0.12399 | 0.01537
Sgd | 0.33887 | 0.36003 | 0.31836 | 0.00966 | 0.00009
Rmsprop | 0.27858 | 0.29753 | 0.25977 | 0.00913 | 0.00008
Adam | 0.28676 | 0.35417 | 0.23177 | 0.03166 | 0.00100
Table 8. The statistical analysis based on RMSE for all algorithms on sepsis.
Optimizer | RMSE (Avg.) | RMSE (Max.) | RMSE (Min.) | RMSE (Std.) | RMSE (Var.)
HMS-PSO | 0.50087 | 0.52910 | 0.46910 | 0.01567 | 0.00025
HMS | 0.59839 | 0.59839 | 0.59839 | 0.00000 | 0.00000
PSO | 0.58577 | 0.59839 | 0.52354 | 0.02224 | 0.00049
Grad | 0.67813 | 0.80120 | 0.59785 | 0.09892 | 0.00979
Adadelta | 0.62088 | 0.82640 | 0.56539 | 0.08883 | 0.00789
Sgd | 0.58207 | 0.60002 | 0.56423 | 0.00829 | 0.00007
Rmsprop | 0.52774 | 0.54546 | 0.50967 | 0.00863 | 0.00007
Adam | 0.53475 | 0.59512 | 0.48143 | 0.02888 | 0.00083
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

