Article

Fine-Tuned Cardiovascular Risk Assessment: Locally Weighted Salp Swarm Algorithm in Global Optimization

by Shahad Ibrahim Mohammed 1, Nazar K. Hussein 1, Outman Haddani 2, Mansourah Aljohani 3, Mohammed Abdulrazaq Alkahya 4 and Mohammed Qaraad 2,5,*
1 Department of Mathematics, College of Computer Sciences and Mathematics, Tikrit University, Tikrit 34001, Iraq
2 TIMS, Faculty of Science, Abdelmalek Essaadi University, Tetouan 93000, Morocco
3 College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
4 College of Education for Pure Sciences, University of Mosul, Mosul 41003, Iraq
5 The Hormel Institute, University of Minnesota, 801 16th Ave NE, Austin, MN 55912, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(2), 243; https://doi.org/10.3390/math12020243
Submission received: 26 November 2023 / Revised: 6 January 2024 / Accepted: 8 January 2024 / Published: 11 January 2024

Abstract: The Salp Swarm Algorithm (SSA) is a bio-inspired metaheuristic optimization technique that mimics the collective behavior of Salp chains hunting for food in the ocean. While it demonstrates competitive performance on benchmark problems, the SSA faces challenges with slow convergence and, like many population-based algorithms, a tendency to become trapped in local optima. To address these limitations, this study proposes the locally weighted Salp Swarm Algorithm (LWSSA), which integrates two mechanisms into the standard SSA framework. First, a locally weighted approach is introduced and integrated into the SSA to guide the search toward locally promising regions. This heuristic iteratively probes high-quality solutions in the neighborhood and refines the current position. Second, a mutation operator generates new positions for Salp followers to increase randomness throughout the search. To assess its effectiveness, the proposed approach was evaluated against state-of-the-art metaheuristics on standard test functions from the IEEE CEC 2021 and IEEE CEC 2017 competitions. The methodology is also applied to a risk assessment of cardiovascular disease (CVD). Seven optimization strategies for the extreme gradient boosting (XGBoost) classifier are evaluated and compared to the proposed LWSSA-XGBoost model. The proposed LWSSA-XGBoost achieves superior prediction performance, with a 94% F1 score, 94% recall, 93% accuracy, and a 93% area under the ROC curve, in comparison with state-of-the-art competitors. Overall, the experimental results demonstrate that the LWSSA enhances the SSA's optimization ability and XGBoost's predictive power in automated CVD risk assessment.

1. Introduction

Cardiovascular disease (CVD) remains the leading cause of mortality worldwide, responsible for over 18 million deaths per year according to the World Health Organization [1]. Early identification of at-risk individuals allows for timely lifestyle and pharmaceutical interventions to control modifiable risk factors such as hypertension, diabetes, and smoking. Conventionally, risk is assessed manually using factors like age, sex, family history, blood pressure, cholesterol, and smoking status [2]. However, this approach suffers from limited scalability and predictive ability [3].
Machine learning has emerged as a promising solution to automate CVD risk prediction from electronic health records at a population scale. A variety of supervised algorithms have been explored, including logistic regression, decision trees, support vector machines, naïve Bayes classifiers, and ensemble methods [4]. Among these, ensemble techniques such as random forests and gradient boosting machines have achieved state-of-the-art performance due to their ability to handle complex interactions between risk variables [5]. In particular, extreme gradient boosting (XGBoost) has proven highly effective for medical applications through its efficient implementation of regularized model fitting [6]. Nonetheless, machine learning models remain limited by the improper selection of hyperparameters that determine model complexity, learning, and regularization strategies [7]. Conventional grid searches exhaustively evaluate a fixed set of predefined parameter combinations but scale poorly with dimensions and fail to explore interactions between settings effectively [8]. Random searches offer superior coverage of the search space but evaluate many suboptimal configurations without exploiting promising regions [8]. To address these limitations, metaheuristic techniques have emerged as automated approaches to nonlinear, multimodal hyperparameter optimization problems [9,10,11,12,13]. The class of optimization methods known as metaheuristic algorithms has seen widespread adoption. These methods can be broken down into nine classes, with contributions from fields as varied as biology, physics, sociology, music, chemistry, sports, mathematics, collective behaviors (swarm-based), and hybrid approaches that combine elements from several of these classes [14]. In particular, metaheuristic algorithms are useful for solving high-dimensional, nonlinear, constrained optimization problems that arise in the real world. 
Metaheuristic algorithms have gained substantial recognition and utilization in diverse fields of research and practical applications. One area where these algorithms have shown promise is in the domain of cardiovascular disease (CVD) risk assessment, a critical healthcare concern worldwide. The use of metaheuristic algorithms in CVD risk assessment presents an opportunity to enhance the accuracy and efficiency of predictive models, enabling healthcare professionals to better identify individuals at high risk of developing cardiovascular diseases. Recent studies have explored the application of metaheuristic algorithms, such as Genetic Algorithms, Particle Swarm Optimization, and Artificial Neural Networks, in developing CVD risk prediction models [15]. These algorithms can effectively process large datasets containing diverse risk factors, biomarkers, and clinical parameters, thereby providing a more comprehensive assessment of an individual’s risk profile [16]. The use of these algorithms helps not only with predicting the risk of CVD but also in identifying the key contributing factors and their complex interactions, which can be instrumental in tailoring personalized prevention and treatment strategies [17]. Recent research has also emphasized the importance of integrating data from various sources, including clinical records, medical imaging, and genomic information. Metaheuristic algorithms can aid in fusing these heterogeneous data sources to build more holistic and accurate CVD risk prediction models [18]. Furthermore, considering the rapid advancements in medical technology and the increasing availability of health-related data, the application of metaheuristic algorithms in CVD risk assessment is expected to evolve. This evolution includes the incorporation of deep learning and hybrid models that combine the strengths of different metaheuristic techniques to enhance prediction accuracy and robustness [19].
Bio-inspired swarm intelligence algorithms take inspiration from social behaviors observed in nature to robustly solve complex optimization tasks. The domain of bio-inspired swarm intelligence research is experiencing burgeoning growth, with an increasing number of scholars amalgamating machine learning techniques with optimization algorithms rooted in swarm intelligence in pursuit of enhancing the efficacy and efficiency of machine learning methods [20]. Annually, a plethora of novel swarm intelligence algorithms are introduced with the primary aim of addressing a myriad of optimization problems, including but not limited to Particle Swarm Optimization (PSO), which Kennedy and Eberhart initially introduced [21], or Genetic Algorithms (GAs) [22], Salp Swarm Algorithm (SSA) [23], Artificial Bee Colonies (ABCs) [24], Ant Colony optimization (ACO) [25], differential evolution (DE) [26], and others. Other types that can be listed here include Black Holes (BHs) [27], Thermal Exchange Optimization (TEO) [28], Chemical Reaction Optimization (CRO) [29], etc., all of which are examples of techniques that are based on physics or chemistry. More sophisticated examples include those based on the social behaviors of a human like Teaching–Learning-Based Optimization (TLBO) [30] and Imperialist Competitive Algorithm (ICA) [31], and music-based examples such as Melody Search [32] and Salp Swarm Algorithm (SSA) [23].
The Salp Swarm Algorithm (SSA) constitutes a notable advancement in the realm of bio-inspired swarm intelligence. Developed from the inspiration drawn from the collective behavior of Salps, a type of marine organism, the SSA has garnered considerable attention within the academic community. This algorithm has demonstrated its prowess in addressing complex optimization problems by employing a population of virtual Salps that emulate the biological characteristics and interactions observed in their natural counterparts. The SSA leverages these principles to guide the search process, facilitating the identification of optimal solutions across a spectrum of domains. The algorithm’s unique features, including its adaptive mechanisms and its capacity to adapt to dynamic environments, render it a compelling subject of investigation and application within various scientific disciplines [10,33,34,35,36]. Nonetheless, the standard SSA faces some limitations that hinder its search abilities. Like many population-based metaheuristics, the SSA can become stuck in local optima and exhibit slow convergence characteristics [37]. This is due in part to the lack of fine-grained neighborhood information incorporated into Salps’ movement updates, which focus exploration more globally without sufficient local refinement [38]. Recent studies have also shown that the SSA’s performance degrades for highly multimodal problems with many local optima due to its tendency to converge prematurely to suboptimal solutions [39]. Additionally, the use of fixed control parameters facilitates exploitation but constrains exploration over time, restricting flexibility when tackling diverse problem landscapes [40]. To address these drawbacks, localized adaptations guiding Salps towards high-quality neighboring solutions have been explored as an effective means of balancing exploration and exploitation abilities [38]. 
To address these drawbacks, in the past few years, multiple modified forms of the SSA have been developed by researchers with the aim of enhancing its performance, rectifying its shortcomings, and augmenting its capabilities.
This paper presents a novel variant of the Salp Swarm Algorithm (SSA) and applies it to the important real-world application domain of cardiovascular disease risk prediction through automated machine learning. The core innovation of the LWSSA resides in its integration of two distinct mechanisms into the Salp Swarm Algorithm (SSA), thereby mitigating well-known challenges such as slow convergence and susceptibility to local optima. The first mechanism introduces a locally weighted approach that enhances the search process by effectively guiding it towards locally promising regions, thereby facilitating a more efficient exploration of the solution space. The iterative probing and refinement of high-quality solutions in the neighborhood contribute to the algorithm’s enhanced performance. The second mechanism incorporates a mutation operator, injecting randomness into the search process and thus further improving the algorithm’s ability to escape local optima.
Below is a list of the main contributions of this study:
  • This research introduces the Locally Weighted Salp Swarm Algorithm (LWSSA), an enhancement of the Salp Swarm Algorithm (SSA), which combines two mechanisms to mitigate issues like slow convergence and local optima.
  • The LWSSA introduces a new Local Search (LS) Algorithm known as the “Locally Weighted approach” that guides the search process toward promising local regions, improving search efficiency by iteratively probing and refining high-quality solutions. This local search strategy is employed to refine individual solutions after each iteration of the optimization process.
  • The LWSSA incorporates a mutation operator to inject randomness into the search process, enhancing its ability to escape local optima and explore the solution space more effectively.
  • This research extends its contributions to practical applications by evaluating the LWSSA-XGBoost model for cardiovascular disease (CVD) risk assessment, showcasing its superior predictive performance.
The rest of the paper is arranged as follows: Section 2 discusses the related work regarding the SSA and its variants. Section 3 provides the essential theory of the original Salp Swarm Algorithm. The new Local Search (LS) Algorithm is explained in Section 4. Section 4 also provides a description of the proposed algorithm (LWSSA) in detail. Section 5 provides the experimental results and analysis of CEC2021 and CEC2017. Section 6 provides the experimental results and analysis results of the CVD dataset and the classification of 16 datasets. The conclusion and future work are outlined in Section 7.

2. Related Work

Algorithms inspired by nature have unique characteristics that have attracted the interest of researchers in many fields, since they can be applied to a wide variety of problems. Furthermore, the No Free Lunch theorem states that no single optimizer performs best across all optimization problems [41]. Therefore, creating novel optimization algorithms suited to the requirements of practical application settings remains a challenging task. As a result, there has been growing interest in developing new optimization techniques by combining existing, simpler meta-algorithms. Hybridization is gaining popularity because it combines the best features of different algorithms into one, creating new systems with improved efficiency and accuracy. Since its introduction in 2017, the Salp Swarm Algorithm (SSA) has been improved iteratively through many adjustments proposed by researchers. These additions have been carefully selected to strengthen the algorithm's flexibility and efficiency across a wide range of problem-solving domains by addressing specific optimization issues. In [42], the conventional SSA's performance was improved by using the DE algorithm's operators to avoid becoming stuck in local optima and to hasten convergence to the global optimum. The SSAGWO approach was proposed in [34] to adjust the positions of Salp followers using the GWO search strategy. In [43], differential evolution and the SSA are integrated to improve accuracy and convergence rate. El-Shorbagy et al. improved the SSA by utilizing a chaotic collection of functions to expand the algorithm's capacity to search wide regions for the best solutions [44].
QSSALEO [33] presents a novel hybrid that improves the Salp swarm by combining a Local Escape Operator (LEO) with Quadratic Interpolation (QI): the QI technique enhances the algorithm's exploitation capacity and the precision of the optimal solution, while the LEO is applied at the QI-refined position of the best search element to mitigate the local optima problem. The TBLSBCL [35] is a novel search technique that aims to address issues of population diversity, imbalanced exploitation and exploration, and premature convergence in the SSA. Its hybridization process includes two stages. In the first stage, a temporary dynamic hierarchy of leaders and followers is used: the number of leaders increases and the number of followers decreases linearly, and the leaders' positions are updated using the SSA's effective exploitation capabilities. In the second stage, a competitive learning strategy revises the followers' positions by allowing them to learn from the leading member. SSALEO [36] likewise addresses population diversity, unbalanced exploration and exploitation, and premature convergence: its Local Escape Operator (LEO) streamlines the search procedure in the Salp Swarm Algorithm and boosts the local search effectiveness of the swarm members. The SSA has been widely applied across many domains owing to its notable optimization potential. Table 1 presents a comprehensive compilation of various modifications and hybridizations of the SSA. In the majority of instances, the original Salp Swarm Algorithm (SSA) was suitably enhanced prior to its implementation; it was uncommon for the unmodified SSA to be applied directly.
This situation exemplifies the No Free Lunch (NFL) theorem and serves as an incentive for researchers to continue enhancing the SSA.
Following an analysis of the published literature, one can conclude that a significant body of work is devoted to the Salp Swarm Algorithm. Although the SSA has proven its efficacy in a variety of contexts by outperforming conventional optimizers, it is not immune to local optima. In response, numerous distinct variants of the SSA have been devised. These variants modify the mechanism to improve the convergence rate, strike a balance between exploration and exploitation, and steer clear of locally optimal solutions. Moreover, diversity techniques are used in complex optimization algorithms to increase search quality and reduce the effects of genetic drift, which can cause a loss of diversity in bio-inspired algorithms and limit their potential.
In this paper, a technique known as the Locally Weighted Salp Swarm Algorithm (LWSSA) is suggested in order to address these deficiencies. The LWSSA method is different from other works in a number of ways:
  • For more accurate fine-tuning of solutions, the Salp Swarm Algorithm (SSA) incorporates the locally weighted approach as a local search tool. The method is able to better solve difficult optimization problems and converge more quickly because of the concentrated exploration.
  • The implementation of the mutation scheme within the SSA adds an extra layer of randomness to the Salps, which in turn improves the latter’s capacity for conducting global searches. This helps to diversify the exploration process, which enables the algorithm to break free from local optimal solutions and investigate a wider range of possible solutions.
  • The performance of the SSA’s global search is improved thanks to the synergistic effect of the combination of the locally weighted technique and the mutation scheme. The method is made more robust and successful in tackling difficult optimization problems with multiple local optima as a result of the integration of both strategies.
The proposed LWSSA was evaluated on CEC2021 and CEC2017, in addition to a risk assessment of cardiovascular disease and the classification of 19 datasets. According to the findings, the LWSSA efficiently addresses the issues of the SSA, and its performance is superior to that of previous SSA variants and optimization strategies in the vast majority of cases.

3. Preliminaries

This section reviews the Salp Swarm optimization algorithm and formulates the problem statement.

Principles of Salp Swarm Optimization (SSA)

The Salp Swarm Algorithm (SSA) [23] is a nature-inspired optimization algorithm that models the collective behavior of Salps, marine invertebrates known for their swarming patterns. SSA is designed to tackle optimization problems by mimicking the social interactions and cooperation observed in Salp swarms. The algorithm employs mathematical formulations to govern the movement and decision-making processes of virtual Salps within a search space. The basic mathematical representation of the SSA involves the update of each Salp’s position in the solution space based on its current position, the influence of the best solution encountered so far, and the collective influence of the entire swarm. The position update equation is as follows:
$$y_{1,j}^{t+1} = \begin{cases} F_j + w_1\left[(ub_j - lb_j)w_2 + lb_j\right], & w_3 \ge 0.5 \\ F_j - w_1\left[(ub_j - lb_j)w_2 + lb_j\right], & w_3 < 0.5 \end{cases} \quad (1)$$
Here, $y_{1,j}^{t+1}$ and $F_j$ denote the position of the first (leader) Salp and the location of the food source in the $j$th dimension, respectively. The upper and lower bounds are represented by $ub_j$ and $lb_j$, respectively. The two scalars $w_2$ and $w_3$ are drawn uniformly at random from the interval [0, 1]. The parameter $w_1$ balances exploitation and exploration and is a critical control parameter:
$$w_1 = 2e^{-\left(\frac{4t}{T}\right)^2} \quad (2)$$
The maximum number of iterations is denoted by $T$, and the present iteration is $t$. Equation (1) pertains exclusively to updating the position of the leading Salp. The positions of the follower Salps are updated by Equation (3):
$$y_{i,j}^{t+1} = \frac{1}{2}\left(y_{i,j}^{t} + y_{i-1,j}^{t}\right) \quad (3)$$
where $y_{i,j}^{t}$ is the location of the $i$th follower in the $j$th dimension and $y_{i-1,j}^{t}$ is the location of the $(i-1)$th follower. Equation (3) is derived from Equation (4) (Newton's law of motion):
$$y_{i,j}^{t} = \frac{1}{2}a \cdot time^{2} + u_0 \cdot time \quad (4)$$
where $time$ denotes the time per iteration, $u_0$ is the initial speed, and $a$ is calculated as follows:
$$a = \frac{u_{final}}{u_0} \quad (5)$$
where $u_{final}$ is the final speed. It is vital to note that the leader and follower Salp positions are updated using Equations (1) and (3). Follower Salps mimic the leader's actions until the maximum number of iterations is reached, while the leader Salp itself is updated depending on the position of the food source. During the iterations, the exploratory phase of the algorithm gives way to the exploitative phase, marked by the downward trend of the parameter $w_1$. To avoid becoming stuck in local optima and instead find the best approximation of the global optimum, it is essential to strike a balance between global and local search strategies.
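To make the leader and follower updates concrete, the rules of Equations (1)–(3) can be sketched in Python with NumPy. This is an illustrative sketch under our own assumptions (the function name `ssa_step` and the array layout are ours, not the authors' implementation):

```python
import numpy as np

def ssa_step(pop, food, lb, ub, t, T):
    """One iteration of the standard SSA update (Eqs. (1)-(3)).

    pop:  (N, dim) array of Salp positions
    food: (dim,) best solution found so far (the food source)
    """
    N, dim = pop.shape
    w1 = 2.0 * np.exp(-(4.0 * t / T) ** 2)      # Eq. (2): decays over iterations
    new_pop = pop.copy()
    for j in range(dim):                         # leader update, Eq. (1)
        w2, w3 = np.random.rand(), np.random.rand()
        step = w1 * ((ub[j] - lb[j]) * w2 + lb[j])
        new_pop[0, j] = food[j] + step if w3 >= 0.5 else food[j] - step
    for i in range(1, N):                        # follower update, Eq. (3)
        new_pop[i] = 0.5 * (pop[i] + pop[i - 1])
    return np.clip(new_pop, lb, ub)              # keep positions inside bounds
```

Whether followers use the already-updated positions of their predecessors or the previous-iteration positions varies between implementations; the sketch above uses the previous-iteration positions.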

4. Proposed Algorithm

This section introduces the locally weighted approach (LW) and the proposed LWSSA.

4.1. Locally Weighted Approach (LW)

The locally weighted approach, also known as local search, is a heuristic used to solve complex optimization problems. It involves repeatedly replacing the current solution with a neighboring solution in the search space. Since the number of potential neighbors of a solution is often infinite, the key to a successful local search is an effective way of selecting appropriate neighbors. The proposed locally weighted (LW) local search strategy refines the current solution, or Salp, after each iteration of the optimization process. Algorithm 1 outlines the steps of the LW local search method.
Initially, in each iteration $t$, a population $pop^t$ of $Npop$ Salps is maintained. A solution $y_i^t = (y_{i,1}^t, y_{i,2}^t, \ldots, y_{i,dim}^t)$ is first updated by the SSA according to the proposed algorithm to become $xnew_i^t$; this Salp is then improved by the LW to become $ynew_i^t$ using the following formulas:
$$weight_j = \frac{1}{1 + \exp\left(xnew_{i}^{t} - y_{i,j}^{t}\right)} \quad (6)$$
$$ynew_{i}^{t} = xnew_{i}^{t} + Z \times weight_j \times \left(y_{r1,j}^{t} - y_{r2,j}^{t}\right) \quad (7)$$
where $i = 1, 2, \ldots, Npop$, and $y_{r1,j}^{t}$, $y_{r2,j}^{t}$ are two particles chosen randomly from the population $pop^t$ (excluding the current particle $y_i^t$). In addition, $Z$ is a random number generated by Mantegna's method based on the Levy distribution [56] via the following equation:
$$z = 0.01 \times \frac{b}{|q|^{1/\alpha}} \quad (8)$$
where $b \sim N(0, \beta^2)$ and $q \sim N(0, 1)$, and $\beta$ is generated by the following equation:
$$\beta = \left(\frac{\Gamma(1+\alpha)\sin\left(\frac{\pi\alpha}{2}\right)}{\Gamma\left(\frac{\alpha+1}{2}\right)\alpha\, 2^{\frac{\alpha-1}{2}}}\right)^{1/\alpha} \quad (9)$$
where $\alpha \in [0, 2]$ is the index of stability, also referred to as the Levy index.
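Equations (6)–(9) can be sketched as follows. This is a minimal illustration assuming NumPy; the names `levy` and `lw_refine` are hypothetical, and the denominator draw $q \sim N(0, 1)$ follows the standard Mantegna scheme:

```python
import numpy as np
from math import gamma, sin, pi

def levy(alpha=1.5, size=1):
    """Levy-distributed steps Z via Mantegna's method (Eqs. (8)-(9))."""
    beta = (gamma(1 + alpha) * sin(pi * alpha / 2) /
            (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    b = np.random.normal(0.0, beta, size)   # numerator draw, b ~ N(0, beta^2)
    q = np.random.normal(0.0, 1.0, size)    # denominator draw, q ~ N(0, 1)
    return 0.01 * b / np.abs(q) ** (1 / alpha)

def lw_refine(x_new, pop, i):
    """Locally weighted refinement of Salp i (Eqs. (6)-(7))."""
    idx = [k for k in range(len(pop)) if k != i]
    r1, r2 = np.random.choice(idx, 2, replace=False)
    # Eq. (6); the argument is clipped for numerical safety (our addition)
    weight = 1.0 / (1.0 + np.exp(np.clip(x_new - pop[i], -50, 50)))
    # Eq. (7): Levy-scaled, weighted differential step
    return x_new + levy(size=x_new.shape[0]) * weight * (pop[r1] - pop[r2])
```

The sigmoid weight damps the step where the SSA update already moved far from the old position, so the Levy perturbation concentrates refinement near promising coordinates.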

4.2. Update Salp Followers’ Position

The SSA enjoys benefits such as adaptability [57] and ease of implementation [32], which result from its straightforward mathematical formulation and its use of fewer parameters than other algorithms [58]. Nevertheless, the efficiency of the search for the food source decreases over continuous iterations of updating the leader's location, which results in stagnation. Additionally, in the SSA mathematical model, the line between exploration and exploitation is not clearly defined, and the SSA's suitability for high-dimensional problems is unclear [40]. To address this local stagnation, this paper proposes a modification of the follower position update of Equation (3) in a Salp chain through a mutation factor as follows:
$$ynew_{i} = y_{i}^{t} + rand \times mu \times \left(y_{r1}^{t} - y_{r2}^{t}\right) \quad (10)$$
where $mu$ is a constant mutation factor set equal to 0.5, $rand$ is a random value within the interval (0, 1), and $y_{r1}^{t}$ and $y_{r2}^{t}$ are two random positions within the population that differ from the current position $y_i^t$.
As can be seen, $y_i^t$ exchanges position information with $y_{r1}^t$ and $y_{r2}^t$. When there is a large gap between $y_{r1}^t$ and $y_{r2}^t$, the updated individual is likely to move somewhere in the middle of that range; this behavior aids exploration. When the distance between $y_{r1}^t$ and $y_{r2}^t$ is small, the individual instead searches nearby for possible solutions, which fosters exploitation. When the mutation factor $mu$ is applied, individuals as a whole are guided toward the best results, and the spread among them gradually decreases. Rather than depending solely on the efforts of individual agents, the population can more effectively investigate and settle promising regions. Concurrently, a $rand$ value from the interval [0, 1] is integrated to shift the food source position and exploit the inherent randomness of the original SSA Equation (1), proactively avoiding the local optimum problem and permitting the convergence process to resume. The proposed algorithm's full optimization scenario is described in the next section.
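The follower mutation of Equation (10) can be sketched as follows, assuming NumPy (the helper name `mutate_follower` is ours, not from the paper):

```python
import numpy as np

def mutate_follower(pop, i, mu=0.5):
    """Mutation-based follower update (Eq. (10)).

    pop: (N, dim) array of Salp positions; i: index of the follower to update.
    """
    idx = [k for k in range(len(pop)) if k != i]          # exclude the current Salp
    r1, r2 = np.random.choice(idx, 2, replace=False)      # two distinct random Salps
    return pop[i] + np.random.rand() * mu * (pop[r1] - pop[r2])
```

The step length scales with the distance between the two randomly chosen Salps, so the update is exploratory early on (large spread) and exploitative once the population has contracted.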
Algorithm 1: The developed Locally Weighted Approach (LW)
  For i = 1 : N
     If random < 0.5 then
       Randomly choose two positions y_{r1}^t and y_{r2}^t from pop^t
       Calculate the weight for position i by Equation (6)
       Calculate the random value Z based on Levy flight by Equations (8) and (9)
       Calculate the new position ynew_i of particle i by Equation (7)
     End If
  End For

4.3. LWSSA Optimization Scenario

The proposed algorithm, shown in Figure 1, combines local search, the SSA, and mutation to efficiently find optimal solutions and avoid getting stuck in local minima. In the first stage, the population is divided into two groups: the first half of the population (leaders) is updated using Equation (1), and the second half (followers) is updated using the proposed mutation Equation (10). In the second stage, the locally weighted (LW) technique is used to enhance solutions and determine better positions for individuals; it is applied with a probability of 0.5 across the population. The combination of these techniques aims to accelerate the search while avoiding local minima. The steps of the proposed technique are outlined in Algorithm 2.
Algorithm 2: The LWSSA
Input: dimension D, upper bound ub, lower bound lb, fitness function Fitness, population size N, maximum number of iterations T
Output: optimal individual (Food Location) and optimal cost (Food Fitness)
Initialize the Salp population according to D, N, ub, and lb; evaluate the fitness of each Salp and select the least costly individual as the Food Location with its Food Fitness
While (stopping condition is not met)
   Compute w1 by Equation (2)
   For each Salp i
     If i ≤ N/2 then
       Update the position of the Salp leader by Equation (1)
     Else
       Update the position of the Salp follower by Equation (10)
     End If
     If random < 0.5 then
       Apply the LW technique (Algorithm 1)
     End If
     Compute the fitness of ynew_i and record it as NewFitness
     If NewFitness < Food Fitness then
       Update the Food Location and Food Fitness
     End If
   End For
End While
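Putting the pieces together, the scenario above can be sketched end to end in Python. This is an illustrative sketch under stated assumptions (greedy per-Salp replacement, scalar bounds, and the name `lwssa` are ours), not the authors' reference code:

```python
import numpy as np
from math import gamma, sin, pi

def lwssa(fitness, dim, lb, ub, n_pop=30, T=500, mu=0.5, alpha=1.5, seed=0):
    """Minimal LWSSA sketch: leaders via Eq. (1), followers via the
    mutation rule (Eq. (10)), then LW refinement with probability 0.5."""
    rng = np.random.default_rng(seed)
    # Mantegna scale parameter, Eq. (9)
    beta = (gamma(1 + alpha) * sin(pi * alpha / 2) /
            (gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))) ** (1 / alpha)
    pop = rng.uniform(lb, ub, (n_pop, dim))
    fit = np.apply_along_axis(fitness, 1, pop)
    best, best_f = pop[fit.argmin()].copy(), fit.min()
    for t in range(1, T + 1):
        w1 = 2.0 * np.exp(-(4.0 * t / T) ** 2)                 # Eq. (2)
        for i in range(n_pop):
            others = [k for k in range(n_pop) if k != i]
            if i < n_pop // 2:                                  # leaders, Eq. (1)
                w2, w3 = rng.random(dim), rng.random(dim)
                step = w1 * ((ub - lb) * w2 + lb)
                x = np.where(w3 >= 0.5, best + step, best - step)
            else:                                               # followers, Eq. (10)
                r1, r2 = rng.choice(others, 2, replace=False)
                x = pop[i] + rng.random() * mu * (pop[r1] - pop[r2])
            if rng.random() < 0.5:                              # LW step, Eqs. (6)-(8)
                r1, r2 = rng.choice(others, 2, replace=False)
                z = 0.01 * rng.normal(0, beta, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / alpha)
                weight = 1.0 / (1.0 + np.exp(np.clip(x - pop[i], -50, 50)))
                x = x + z * weight * (pop[r1] - pop[r2])
            x = np.clip(x, lb, ub)
            fx = fitness(x)
            if fx < fit[i]:                                     # greedy replacement (assumption)
                pop[i], fit[i] = x, fx
            if fx < best_f:                                     # track the food source
                best, best_f = x.copy(), fx
    return best, best_f
```

For example, `lwssa(lambda x: float(np.sum(x**2)), dim=5, lb=-5.0, ub=5.0)` minimizes the sphere function on $[-5, 5]^5$.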

5. Experimental Results and Analysis

To assess the efficacy of the LWSSA, a series of tests was conducted using the IEEE CEC 2017 and IEEE CEC 2021 function groups. These tests were designed to evaluate the optimization performance of the LWSSA and were run under the same operating conditions, with a number of sophisticated algorithms used for comparison. The LWSSA was also applied as a search algorithm to find the optimal hyperparameter values of the XGBoost classifier for cardiovascular disease risk classification. Further information about the cardiovascular disease risk assessment problem and its characteristics is given in a subsequent subsection.
The experiments were conducted on a computer running the Windows operating system with an Intel(R) Core i5-7300U (2.50 GHz) processor and 8 GB of memory. Both the LWSSA and the comparison algorithms were implemented in Python. The IEEE CEC 2021 functions, which are evaluated in 20 dimensions and assess performance on shifted, rotated, and biased functions, and the IEEE CEC 2017 test suite, evaluated in 50 dimensions, were used to thoroughly examine the performance of the LWSSA. Further details about the IEEE CEC 2017 and IEEE CEC 2021 functions can be found in [59] and [60], respectively. The experimental results were obtained over 30 independent runs, each comprising 2500 iterations, for statistical analysis. The algorithm terminated when it reached the maximum permitted number of iterations. To compare the effectiveness of different algorithms, it is common to use statistical measures such as the average solution (Avg), median solution (Med), and standard deviation (Std).
Two non-parametric statistical hypothesis tests, the Friedman test and the Wilcoxon signed-rank test, were performed to analyze the outcomes of the LWSSA and its competitors. The Wilcoxon signed-rank test was used to determine whether there was a significant difference between the LWSSA and each rival on each function, i.e., whether the LWSSA is better than its competitors. The Friedman test's final rankings of the algorithms over all functions can be used to check for significant differences in overall performance among the techniques. In this paper, the effectiveness of the LWSSA is compared to seven Salp variant optimization techniques, including ESSA [61], HSSASCA [62], ISSA [63], ISSA-OBL [64], TVSSA [65], SSALEO [36], and SSA-FGWO [34], and eight other recent algorithms: GBO [66], EGBO [67], SMA [68], PSO [69], EO [70], SADE [71], CL-PSO [72], and HPSO_TVAC [72]. The parameters of the competing algorithms, as suggested by their respective authors, were adopted and are listed in Table 2 for reference.
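For illustration, such tests can be run with SciPy's `wilcoxon` and `friedmanchisquare`. The run data below are synthetic placeholders, not the paper's results:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical final-error samples over 30 independent runs on one function;
# in practice these would come from the benchmark experiments described above.
rng = np.random.default_rng(1)
lwssa_err = rng.normal(1.0, 0.1, 30)
ssa_err   = rng.normal(1.5, 0.2, 30)
pso_err   = rng.normal(1.4, 0.2, 30)

# Wilcoxon signed-rank: paired comparison of two algorithms on the same runs
stat, p_w = wilcoxon(lwssa_err, ssa_err)
print(f"Wilcoxon LWSSA vs SSA: p = {p_w:.4f}")   # p < 0.05 -> significant difference

# Friedman: joint ranking test across three or more algorithms
stat, p = friedmanchisquare(lwssa_err, ssa_err, pso_err)
print(f"Friedman: p = {p:.4f}")
```

A Wilcoxon p-value below the 0.05 significance level rejects the hypothesis that the two algorithms perform equally on that function.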

5.1. Experiment 1: Benchmark Examination for the IEEE CEC 2021

The IEEE CEC 2021 benchmark suite contains a total of 10 different functions. The associated technical paper [60] contains in-depth explanations of each of these functions. The supplementary material includes an illustration that summarizes all 10 functions. Some of the Salp variants, together with GBO, EGBO, SMA, PSO, EO, SADE, CL-PSO, and HPSO_TVAC, are the algorithms competing in this examination. The parameters of all participants are given in Table 2. The population size and maximum number of iterations for these optimization techniques were set to 30 and 2500, respectively.

5.1.1. Comparison of LWSSA with Some SSA Variants

In this subsection, the effectiveness of the LWSSA in solving the mathematical CEC 2021 test functions is compared to seven other optimization techniques: ESSA [61], HSSASCA [62], ISSA [63], ISSA-OBL [64], TVSSA [65], SSALEO [36], and SSA-FGWO [34]. The population size and maximum number of iterations for these optimization techniques were set to 30 and 2500, respectively.
The outcomes presented in Appendix A.1 show that the LWSSA performed well in terms of average values for most of the test functions, except for f8. In terms of standard deviation, the LWSSA significantly outperformed the other algorithms in almost all cases. These results suggest that the LWSSA is a reliable, robust, and scalable optimization algorithm that performs well compared to the other algorithms included in the comparison.
The outcomes of the Wilcoxon signed-rank test, with a significance level of 0.05, are outlined in Appendix A.2. These results indicate that the LWSSA outperformed several other algorithms, including ISSA, ISSA_OBL, ESSA, SSALEO, SSA-FGWO, and HSSASCA, on all of the CEC 2021 test functions. In addition, the LWSSA outperformed TVSSA on nine functions. These results suggest that the LWSSA is a strong performer among the algorithms included in the comparison. The Wilcoxon signed-rank test showed that the LWSSA is a highly reliable optimization technique compared to the seven other algorithms. Figure 2 illustrates the convergence behaviors of the LWSSA and its competitors on the CEC 2021 test functions. It can be seen that the LWSSA significantly outperforms the other algorithms in both initial and final iterations, indicating strong exploration and exploitation capabilities. Additionally, as presented in Figure 2, the LWSSA is able to find the global optimal solution and avoid becoming stuck in local optima thanks to its new local search feature.

5.1.2. Comparison of LWSSA with State-of-the-Art Competitors

The LWSSA was extensively evaluated using the IEEE CEC 2021 functions and compared to eight other algorithms. The outcomes, displayed in Appendix A.3, indicate that the LWSSA performed well in terms of average values for most of the test functions, except for f8 and f10. In terms of standard deviation, the LWSSA significantly outperformed the other algorithms in almost all cases. Overall, these results suggest that the LWSSA is a reliable, robust, and scalable optimization algorithm that performs well compared to the other algorithms included in the comparison.
The outcomes of the Wilcoxon signed-rank test, with a significance level of 0.05, are outlined in Appendix A.4. These results indicate that the LWSSA outperformed several other algorithms, including EGBO, GBO, HPSO_TVAC, CL-PSO, and PSO, on all of the CEC 2021 functions. In addition, the LWSSA outperformed SMA and EO on nine functions and SADE on seven functions. These results suggest that the LWSSA is a strong performer among the algorithms included in the comparison. The Wilcoxon signed-rank test showed that the LWSSA is a highly reliable optimization technique compared to the eight other algorithms. Figure 3 illustrates the convergence behaviors of the LWSSA and its competitors on the CEC 2021 test functions. It can be seen that the LWSSA significantly outperformed the other algorithms in both initial and final iterations, indicating strong exploration and exploitation capabilities. Additionally, as presented in Figure 3, the LWSSA was able to find the global optimal solution and avoid becoming stuck in local optima thanks to its new local search feature.

5.2. Experiment 2: Benchmark Examination for the IEEE CEC 2017

The IEEE CEC 2017 benchmark suite contains a total of 29 different functions. The search range for each variable and function was [−100, 100] in every dimension. The associated technical paper [59] contains in-depth explanations of each of these functions. The supplementary material includes an illustration that summarizes all 29 functions. ESSA, HSSASCA, ISSA-OBL, ISSA, SSALEO, TVSSA, and QSSALEO are among the algorithms competing in this examination. The parameters of all participants are given in Table 2. The population size and maximum number of iterations for these optimization techniques were set to 30 and 2500, respectively.
Friedman's rank test was utilized to rank the overall performance of the LWSSA and its competitors. In addition, the Wilcoxon signed-rank test with a significance level of 5% was used to investigate the statistical differences between the results obtained by the LWSSA and those obtained by its rivals. Appendix A.5 reports the results of the LWSSA's comparison with its competitors. Appendix A.6 contains the p-values determined using the Wilcoxon signed-rank test. In Appendix A.7, the symbol "<0.05" indicates whether the LWSSA performed significantly better than, significantly worse than, or about the same as its peers.
In comparison to SSALEO and QSSALEO, the newly proposed LWSSA demonstrated superior performance on all 26 of the evaluations. In addition, the average ranking shown in Appendix A.6 indicates that the LWSSA received the highest ranking, while HSSASCA and ISSA-OBL received the lowest. This reveals that all SSA variants enhance the performance of the regular SSA, with the LWSSA boosting it the most. According to Appendix A.7, hardly any p-value is larger than 0.05, which demonstrates that the differences between the LWSSA and the comparison methods are statistically significant. This can be seen as evidence that the LWSSA is superior to the comparison methods.

5.3. LWSSA Computation Complexity

The computation complexity of the LWSSA was analyzed stage by stage, where the population size is N, the dimension of the given problem is D, and the maximum iteration number is L. The computational complexity under the worst situation of computation time is computed as follows.
In the initialization stage, the LWSSA performs one random initialization with the same computational complexity, O(N × D), as the original SSA, and the computational complexity of obtaining the optimal solution is O(N), the same as the original SSA. So, in the initialization stage, the computational complexity of the LWSSA is O(N × D + N).
In each iteration stage, first, the position of the agent is renewed according to Equation (1) or Equation (9); therefore, the computational complexity is O(N × D), and then, the location of each agent is further renewed by a LW strategy with O(N) computational complexity. So, in each iteration, the computational complexity of LWSSA is O(N × D + N).
Because of this, the overall computational complexity of the LWSSA is O(L × N × D + L × N), which simplifies to O(L × N × D).

5.4. Qualitative Analysis

This paper presents a novel optimization strategy called the LWSSA that combines a locally weighted approach with a mutation scheme to overcome the limitations of traditional SSA methods. The LWSSA aims to address issues such as early convergence, uneven exploration and exploitation, and population heterogeneity. The paper investigates the performance of the LWSSA on five functions (Function 1, Function 2, Function 3, Function 8, and Function 9) from the CEC 2017 benchmark (see Figure 4a), focusing on both uni-modal and multimodal functions. These examples were chosen for their representative nature and significant influence. The qualitative results of the investigation are displayed in Figure 4, which shows the LWSSA's expected exploration and exploitation phases and its average global best fitness level along the trajectory of its first dimension.
Figure 4b also demonstrates the adaptability and resilience of the LWSSA in various contexts, as its trajectory shows rapid oscillation during the exploration phase and moderate oscillation during the exploitation phase, enabling it to efficiently seek out the best solution. The LWSSA’s position amplitude in the first few iterations has the potential to encompass half of the exploration space, and its subsequent adjustments are slower but still effective.
The graphical representation in Figure 4c also highlights the LWSSA's planned exploration and exploitation phases, with the algorithm initially having a high exploration ratio but a low exploitation ratio. However, as iterations progress, the focus shifts towards exploitation for most of the chosen functions, and the LWSSA strikes a balance between exploration and exploitation.
Furthermore, Figure 4d shows that the LWSSA’s global fitness has a tendency to fluctuate during iterative approaches, but its average fitness value decreases and oscillation frequency decreases with increasing iterations, allowing it to thoroughly search the entire space and rapidly arrive at a trustworthy conclusion.
In conclusion, this paper presents the LWSSA as a promising optimization strategy that can overcome the limitations of traditional SSA methods, and its performance is demonstrated through the investigation of five functions. The qualitative results presented in Figure 4 provide a comprehensive understanding of the LWSSA’s growth pattern in exploration and exploitation.

5.5. Demonstrated Effectiveness

The locally weighted Salp Swarm Algorithm (LWSSA) distinguishes itself from existing algorithms through several key innovations in its design and optimization strategies. The notable differences include:
  • Population Division and Dynamic Mutation:
    - LWSSA approach: The LWSSA employs a unique population division strategy, categorizing individuals into leaders and followers. Leaders are updated using a specific equation, while followers undergo a distinct mutation strategy. This dynamic population management introduces a nuanced exploration–exploitation balance.
    - Contrast with existing algorithms: Unlike conventional algorithms with uniform population treatment, the LWSSA's tailored approach enhances diversity within the population, fostering a more effective search process.
  • Local Weight (LW) Technique:
    - LWSSA approach: The LWSSA introduces the LW technique, functioning as a form of local search with a 50% update probability for all individuals. This technique strategically enhances solutions and determines optimal locations, providing adaptability and responsiveness to the optimization process.
    - Contrast with existing algorithms: Many existing algorithms lack a dedicated local search strategy. The LWSSA's integration of the LW technique contributes to its ability to navigate the search space dynamically.
  • Robustness and Local Minima Avoidance:
    - LWSSA approach: The LWSSA demonstrates enhanced robustness, effectively avoiding local minima through its dynamic population strategies and local search capabilities.
    - Contrast with existing algorithms: Some existing algorithms may struggle in complex landscapes, leading to premature convergence to local minima. The LWSSA's adaptability and strategic updates contribute to its robust performance.
  • Convergence Speed and Solution Quality:
    - LWSSA approach: The LWSSA consistently exhibits superior convergence speed, rapidly approaching optimal solutions. This can be attributed to the innovative combination of population division, mutation, and the LW technique.
    - Contrast with existing algorithms: While many algorithms may converge more slowly or struggle to reach high-quality solutions, the LWSSA's distinctive techniques contribute to its efficiency in reaching optimal outcomes.
  • Versatility and Competitive Performance:
    - LWSSA approach: The LWSSA competes favorably against state-of-the-art Salp algorithm variants and advanced optimization algorithms, showcasing its versatility and competitiveness across diverse benchmark problems.
    - Contrast with existing algorithms: The LWSSA's performance consistently outpaces existing solutions, substantiating its effectiveness in addressing a wide range of optimization challenges.
In summary, the LWSSA’s differentiation from existing algorithms lies in its dynamic population division, unique mutation strategies, incorporation of the LW technique for local search, enhanced robustness, accelerated convergence speed, and competitive performance across various benchmarks. These distinctive features collectively position the LWSSA as a novel and effective optimization algorithm in the landscape of swarm intelligence and metaheuristic algorithms.

6. Risk Assessment Method for CVD Using LWSSA-XGBoost Model

In this part, we present a novel CVD risk assessment method called LWSSA-XGBoost, which is based on machine learning and swarm intelligence algorithms. The XGBoost model is used in this approach to differentiate between healthy individuals and those with cardiovascular disease. However, imprecise parameterization of the XGBoost model can negatively affect the prediction accuracy. To address this issue, we propose an improved version of the SSA (the LWSSA) to ensure a more reliable prediction of cardiovascular disease. The LWSSA integrates a mutation operator into the follower position update formula and uses the LW strategy to enhance the searchability of the worst individuals. Ultimately, we apply the proposed LWSSA to optimize the parameters of the XGBoost model, resulting in more precise disease prediction outcomes. This can help medical professionals make more informed decisions and decrease the misdiagnosis rate. This section details the XGBoost classifier and the CVD dataset employed. Then, to assess the efficiency of the proposed LWSSA, it is compared with several swarm intelligence optimization algorithms, including SSA-XGBoost, GWO-XGBoost, ESSA-XGBoost, TVSSA-XGBoost, SSALEO-XGBoost, ISSA-XGBoost, and SSA-FGWO-XGBoost. Moreover, the proposed LWSSA is compared with some basic ML classifiers, such as KNN, LR, RF, SVM, and LGB.

6.1. Dataset Description

The CVD dataset, which includes data on combined instances of heart failure, is used here to validate the model that has been developed. The dataset contains 11 features, including age, gender, the type of chest pain experienced, resting blood pressure, and a few others. There are a total of 918 instances, 508 of which are sick while the other 410 are not. Table 3 contains in-depth information regarding the features that are currently available.

6.2. The XGBoost Method

XGBoost is a type of gradient boosting machine (GBM) that is widely recognized as one of the top-performing algorithms used in supervised learning. It can be employed for both regression and classification tasks. Data scientists often favor XGBoost due to its ability to execute computations at high speeds, even when dealing with large datasets that require out-of-core computation. XGBoost operates in the following manner: given a dataset $ds$ that contains m features and n examples,
$$ds = \{(x_i, y_i) : i = 1, 2, \dots, n\}, \quad x_i \in \mathbb{R}^m, \; y_i \in \mathbb{R}$$
Suppose that y ^ i represents the predicted output of an ensemble tree model produced using the following equations:
$$\hat{y}_i = \varphi(x_i) = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}$$
In Equation (10), $K$ denotes the number of trees in the model, while $f_k$ represents the $k$-th tree. To solve this equation, the optimal set of functions must be determined by minimizing the loss and regularization objective.
$$L(\varphi) = \sum_i l\big(y_i, \hat{y}_i\big) + \sum_k \Omega(f_k)$$
The symbol l refers to the loss function, which is determined by the variance between the predicted output ( y ^ i ) and the actual output ( y i ). The symbol Ω is used to gauge the complexity of the model, which is important in preventing overfitting. The complexity is calculated using a specific formula.
$$\Omega(f_k) = \gamma T + \frac{1}{2}\lambda \lVert w \rVert^2$$
The symbol T in the equation mentioned earlier indicates the leaf count of the tree, while w represents the weight of each leaf. Boosting is utilized during the training of decision tree models to reduce the objective function. This approach involves incorporating a new function (in the form of a tree) into the model as it continues to learn. During the t-th iteration, a new function (tree) is added in the following manner:
$$L^{(t)} = \sum_{i=1}^{n} l\big(y_i,\; \hat{y}_i^{(t-1)} + f_t(x_i)\big) + \Omega(f_t)$$
$$L_{\mathrm{split}} = \frac{1}{2}\left[\frac{\big(\sum_{i \in I_L} g_i\big)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\big(\sum_{i \in I_R} g_i\big)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\big(\sum_{i \in I} g_i\big)^2}{\sum_{i \in I} h_i + \lambda}\right] - \gamma$$
where
$$g_i = \partial_{\hat{y}_i^{(t-1)}}\, l\big(y_i, \hat{y}_i^{(t-1)}\big), \qquad h_i = \partial^2_{\hat{y}_i^{(t-1)}}\, l\big(y_i, \hat{y}_i^{(t-1)}\big)$$
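As a minimal numeric sketch of the split-gain formula above, the example below uses the squared-error loss $l(y, \hat{y}) = (y - \hat{y})^2$, for which $g_i = 2(\hat{y}_i - y_i)$ and $h_i = 2$; the data and split masks are illustrative, not from the paper.

```python
# Split gain L_split computed from the gradients g_i and Hessians h_i of the
# loss; here the squared-error loss gives g_i = 2*(yhat_i - y_i) and h_i = 2.
import numpy as np

def split_gain(g, h, left_mask, lam=1.0, gamma=0.0):
    """Gain of splitting node I into I_L (left_mask True) and I_R."""
    def score(gs, hs):
        return gs.sum() ** 2 / (hs.sum() + lam)
    return 0.5 * (score(g[left_mask], h[left_mask])
                  + score(g[~left_mask], h[~left_mask])
                  - score(g, h)) - gamma

y = np.array([0.0, 0.0, 1.0, 1.0])   # true labels
yhat = np.full(4, 0.5)               # previous-round predictions
g = 2.0 * (yhat - y)                 # gradients: [1, 1, -1, -1]
h = np.full(4, 2.0)                  # Hessians: constant 2

# Separating the two classes gives a positive gain; mixing them gives none.
print(split_gain(g, h, np.array([True, True, False, False])))
print(split_gain(g, h, np.array([True, False, True, False])))
```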

6.3. Objective Function and LWSSA-XGBoost Parameter Settings

Formulating an optimization problem requires careful design of the objective function. In this investigation, the proposed LWSSA technique was used to retrieve the most suitable settings for XGBoost. The F1 score was used as the fitness measure, with a higher score preferred. The parameters determined by the method were carried into subsequent iterations of the LWSSA to produce an improved version of the optimal solution. The XGBoost model parameters that resulted in the greatest average F1 score were considered the optimal solution. After that, a predetermined test set was used to assess the performance of the final model, which had received the highest F1 score. Parameters and settings for the search space are detailed in Table 4.
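A hedged sketch of this fitness evaluation is given below. It uses a synthetic dataset in place of the CVD data and scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; the two tuned parameters are illustrative, not the paper's full search space.

```python
# Fitness = 1 - F1 score: the optimizer minimizes this value, so the
# parameter set with the lowest fitness yields the highest F1 score.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=11, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

def fitness(position):
    """Decode a candidate position into hyperparameters and return 1 - F1."""
    learning_rate, max_depth = position
    clf = GradientBoostingClassifier(learning_rate=learning_rate,
                                     max_depth=int(max_depth),
                                     random_state=42)
    clf.fit(X_tr, y_tr)
    return 1.0 - f1_score(y_te, clf.predict(X_te))

# A swarm optimizer such as the LWSSA would call this for each candidate.
print("fitness:", fitness([0.1, 3]))
```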

6.4. Methodology

The initial step involves cleansing the CVD dataset of outliers and processing the data. Since the CVD dataset comprises primarily numerical features, only five features—Sex, ChestPainType, RestingECG, ExerciseAngina, and ST_Slope—are categorical and require preprocessing for classification purposes. To convert these categorical features into numerical values, one-hot encoding was utilized. Subsequently, to optimize the performance of XGBoost techniques, a suggested approach was employed to identify the optimal XGBoost hyperparameters. A search process was used with a train and test data split, and the classifier’s parameters were explored by minimizing the F1 score error value “1-F1-score”. The search process was repeated for every combination of parameters, and the parameter set with the lowest fitness value was considered the best combination for XGBoost. The methodology’s approach is demonstrated in Figure 5, and further details about the techniques are discussed in subsequent sections.
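The one-hot encoding step above can be sketched with pandas on a toy frame containing the five categorical features named in the text; the feature values are illustrative, not rows from the actual dataset.

```python
# One-hot encoding of the five categorical CVD features; numeric columns
# such as Age pass through unchanged.
import pandas as pd

df = pd.DataFrame({
    "Age": [54, 61],
    "Sex": ["M", "F"],
    "ChestPainType": ["ATA", "ASY"],
    "RestingECG": ["Normal", "ST"],
    "ExerciseAngina": ["N", "Y"],
    "ST_Slope": ["Up", "Flat"],
})

categorical = ["Sex", "ChestPainType", "RestingECG", "ExerciseAngina", "ST_Slope"]
encoded = pd.get_dummies(df, columns=categorical)
print(encoded.columns.tolist())  # one indicator column per category level
```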

6.5. Measure of Performance

The following metrics were utilized to compare the proposed algorithm with SSA, GWO, WOA, ISSA, ESSA, TVSSA, SSALEO, and SSA-FGWO.

6.5.1. Classification Evaluation Indicators Based on the Confusion Matrix

To measure the performance of a predictive model, experts typically use a confusion matrix and associated evaluation indices. The proposed method is evaluated against preexisting algorithms using the following criteria:

Accuracy

The proposed algorithm was put through rigorous testing and analysis, with accuracy and solution quality as key factors. Accuracy was determined by calculating the proportion of correctly predicted samples (for both those with and without cardiovascular disease) out of the total number of samples analyzed. It is computed using the following formula.
Accuracy = (TP + TN)/(TP + TN + FP + FN)
TP represents the number of correctly identified sick people (true positives), FN represents the number of incorrectly identified normal people, TN represents the number of correctly identified normal people (true negatives), and FP represents the number of incorrectly identified sick people (false positives). The model’s ability to predict the presence or absence of CVD is measured by TP and TN.

Precision

The precision is the proportion of individuals correctly identified as having an illness compared to the total number of individuals expected to have the illness. This is calculated using the following formula:
Precision = TP/(TP + FP)

Recall

The recall rate is the percentage of patients with cardiovascular disease who are correctly identified among all patients with the disease. Classifiers that have a higher recall rate will prioritize correctly identifying patients with cardiovascular disease to minimize the chances of misclassifying those with the disease as healthy individuals.
Recall = TP/(TP + FN)

F1 Score

The F1 score is the harmonic mean of the precision and recall rates and ranges over [0, 1]. The equation used to calculate it is displayed below.
F1 score = 2 × (Precision × Recall)/(Precision + Recall)
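The four confusion-matrix metrics above can be computed directly from the TP, TN, FP, and FN counts; the counts used below are illustrative, not the paper's results.

```python
# Accuracy, precision, recall, and F1 score from confusion-matrix counts.
def cm_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = cm_metrics(tp=90, tn=78, fp=8, fn=4)
print(f"Accuracy={acc:.3f} Precision={prec:.3f} Recall={rec:.3f} F1={f1:.3f}")
```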

6.5.2. ROC Area and AUC Area

The ROC curve is a plot of the true positive rate (TPR) versus the false positive rate (FPR) calculated from a confusion matrix. The TPR, also known as sensitivity, is the proportion of individuals with cardiovascular disease who are correctly identified as having the disease (TP/(TP + FN)). The FPR is the proportion of individuals without cardiovascular disease who are incorrectly identified as having the disease (FP/(FP + TN)), where TN is the number of true negatives. The AUC value, which represents the area under the ROC curve, is often used as a measure of model performance. AUC ranges from 0.5 to 1, with values closer to 1 indicating better detection efficiency. An AUC of 0.5 suggests that the method is not effective in distinguishing between individuals with and without cardiovascular disease.
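A short sketch of this ROC/AUC computation using scikit-learn on hypothetical labels and scores (not the paper's predictions):

```python
# ROC curve and AUC from hypothetical classifier scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = healthy, 1 = CVD
y_score = np.array([0.1, 0.3, 0.4, 0.8, 0.35, 0.6, 0.7, 0.9])  # predicted probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points on the ROC curve
print("AUC:", roc_auc_score(y_true, y_score))      # 0.5 = chance, 1.0 = perfect
```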

6.6. Analysis and Outcomes

In this subsection, the CVD dataset and four evaluation criteria are employed to test the proposed algorithm’s effectiveness in this study. The proposed method’s performance is compared with other methods. Firstly, it is compared against the outcomes of other swarm intelligence optimization algorithms that are designed to improve XGBoost. Secondly, it is compared against various machine learning classifiers.

6.6.1. Comparison with Most Advanced Optimization Techniques

In this section, the effectiveness and efficiency of the proposed LWSSA are examined and compared to other optimization techniques, namely SSA, WOA, GWO, ESSA, ISSA, TVSSA, SSALEO, and SSA-FGWO, in terms of optimizing the performance of XGBoost. Table 5 contains the findings of the comparisons, with the best result for each comparison highlighted in bold text. It is worth mentioning that the "Fitness" column in Table 5 represents the cost function (1 − f1_score).
Table 5 compares the best fitness results of the various optimization algorithms. The proposed approach, the LWSSA, performed better than the others with the lowest average fitness value of 0.069; this clearly shows that the LWSSA's optimization effect on XGBoost is particularly noteworthy and that LWSSA-XGBoost can search more parameter values and yield better results than the other competitors. The second-best algorithm was GWO with an average fitness value of 0.072. SSALEO and TVSSA were the third-best algorithms with the same average fitness of 0.073. ISSA and SSA-FGWO were the fourth-best algorithms with the same average fitness value of 0.074. SSA, WOA, and ESSA were in the worst positions with average fitness values of 0.075, 0.076, and 0.079, respectively. The convergence of the proposed method is compared to SSA, GWO, WOA, ISSA, ESSA, TVSSA, SSALEO, and SSA-FGWO in Figure 6, where it is evident that the proposed LWSSA approach achieves swift convergence on the CVD dataset.
According to Figure 6, the LWSSA performs significantly better than the other approaches, escaping local optima and achieving a lower fitness value. This is due to the LWSSA's enhanced global search and ability to jump out of local optima, making it a promising optimization algorithm.
The optimal prediction model was determined by final parameter selection, and the model with the highest average F1 score was evaluated on a specialized test set. The results are presented in Table 5. It is evident that LWSSA-XGBoost had the highest F1 score in the test set, surpassing the other models by 1.0%. These findings demonstrate the effectiveness and strong generalization ability of LWSSA-XGBoost. Therefore, using this model to predict CVD can significantly reduce misdiagnosis rates, which can ultimately help prevent, reverse, and reduce the spread of CVD and associated health risks. Table 5 also shows the accuracy and AUC (area under the receiver operating characteristic curve) of the LWSSA and the other optimization algorithms based on the XGBoost classifier, which were compared on a test set, as seen in Figure 7. The LWSSA-XGBoost approach achieved the highest accuracy (92.30%), AUC (92.10%), recall (94.0%), and precision (92.2%), outperforming the other methods. Figure 8 shows that the LWSSA performs well, and the improved SSA used in the LWSSA-XGBoost model enhances its global search capabilities and achieves better results. The improved model performance can help in the early detection of CVD, reducing the risk of CVD in advance.

Average Run Time

When determining whether an algorithm is effective, the time it takes to compute results is an essential parameter. Figure 9 demonstrates the effectiveness of the LWSSA by presenting the average CPU time required by each of the alternative algorithms considered for optimizing the performance of the XGBoost classifier. The experiments were carried out on a computer with a 2.50 GHz Intel(R) Core i5-7300U processor, 8 GB of memory, and a Python implementation running on Windows 10. The time required by each algorithm to solve the problem differed by model. The LWSSA has the lowest computational time, whereas SSA-FGWO has the longest average CPU calculation time among the compared algorithms; the standard SSA also requires more CPU time than the LWSSA. The LWSSA therefore demonstrates highly competitive performance while requiring less computational time than the other algorithms. As a result, we can confidently conclude that the LWSSA is an algorithm with considerable potential and adequate computational efficiency.
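A minimal sketch of how such an average run-time comparison could be measured in Python; the timed workload below is a placeholder standing in for one optimizer run.

```python
# Average wall-clock time over repeated runs of a function.
import time

def average_run_time(fn, n_runs=5):
    """Return the mean elapsed time of n_runs calls to fn, in seconds."""
    total = 0.0
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start
    return total / n_runs

# Placeholder workload standing in for one optimizer run.
avg = average_run_time(lambda: sum(i * i for i in range(100_000)))
print(f"average run time: {avg:.6f} s")
```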

6.6.2. Comparing to ML Models Currently Used

Prior to evaluating the proposed method, seven commonly used classifiers, namely RF, LR, KNN, SVM, DT, LGB, and XGBoost with the default parameters for each classifier, were tested on the test set of the CVD dataset. LWSSA-XGBoost was then tested under the same experimental conditions (see Section 6.4), and the prediction results and corresponding indicators are presented in Table 6. As shown in Table 6, LWSSA-XGBoost achieved the best accuracy results among all models. The accuracy rates for RF, LR, KNN, SVM, DT, LGB, and XGBoost were 0.885, 0.695, 0.755, 0.880, 0.717, 0.869, and 0.858, respectively. LR had the poorest accuracy performance at 0.695, while RF and SVM had the highest accuracy rates of 0.885 and 0.880 among these basic classification models. LWSSA-XGBoost with the optimal parameters ({‘learning_rate’: 0.46001000000000003, ‘max_depth’: 6, ‘gamma’: 0.2, ‘colsample_bytree’: 0.30000000000000004, ‘reg_alpha’: 6.400000100000001, ‘reg_lambda’: 3.7400001}) achieved an accuracy of 0.929 on the CVD dataset, meaning that 168 out of 180 people were correctly classified. The experiments showed that the LWSSA could optimize the XGBoost model's parameters, thereby enhancing its accuracy. The F1 score is the harmonic mean of precision and recall. As demonstrated in Table 6, the F1 score obtained by LWSSA-XGBoost was the highest. The F1 scores of RF, LR, KNN, SVM, DT, LGB, and XGBoost were 0.901, 0.725, 0.774, 0.911, 0.735, 0.872, and 0.862, respectively. Among them, LR and DT had the poorest F1 score performance, while SVM had the highest F1 score among the basic classifiers. This indicates that XGBoost has certain advantages in predicting CVD compared with other classifiers. Furthermore, after being optimized by the LWSSA proposed in this paper, the F1 score of XGBoost reached 0.936, which was 2.20% to 19.70% higher than those of the other classifiers, demonstrating the superiority of LWSSA-XGBoost in CVD classification.

6.6.3. Evaluation of LWSSA Algorithm with Various Kinds of Datasets

To demonstrate the efficacy of metaheuristic algorithms in optimizing the performance of the XGBoost classifier in large feature spaces, the LWSSA’s efficiency was compared to that of other such algorithms. Sixteen benchmark datasets of various kinds, obtained from UCI’s machine learning repository [73], were used for this purpose, and each dataset is summarized in Table 7. To ensure a fair comparison, all algorithms began with the same population, and their remaining parameters are presented in Table 2. Each algorithm was run 20 times on an Intel Core i3 with 4GB of RAM, using a fixed number of iterations (100) and initial population (30). The parameters and settings for the search space are provided in Table 4, while the methodology and performance measures employed are consistent with those of the preceding section.
This section compares the proposed LWSSA algorithm with other optimization techniques such as the SSA, WOA, GWO, ESSA, ISSA, TVBSSA, and SSALEO for optimizing the performance of XGBoost in terms of effectiveness and efficiency. The main focus of this investigation is to evaluate the efficacy and efficiency of the LWSSA algorithm. The results of the comparisons are presented in Table 8, Table 9 and Table 10, with the most advantageous conclusion for each comparison highlighted in bold text. It should be noted that the “Fitness” column in Table 10 represents the cost function (1 − f1_score).
Table 7. Dataset characteristics.
| ID | Dataset Name | No. Classes | No. Attributes | No. of Samples |
|------|----------------|-------------|----------------|----------------|
| DS1 | Zoo | 7 | 16 | 101 |
| DS2 | Wine | 3 | 13 | 178 |
| DS3 | Heart | 2 | 13 | 270 |
| DS4 | Vehicle | 4 | 18 | 846 |
| DS5 | Breastcancer | 2 | 9 | 699 |
| DS6 | Soybean small | 4 | 35 | 47 |
| DS7 | Spambase | 2 | 57 | 4601 |
| DS8 | Dermatology | 6 | 34 | 366 |
| DS9 | fri_c0_500_10 | 2 | 10 | 500 |
| DS10 | fri_c0_1000_10 | 2 | 10 | 1000 |
| DS11 | fri_c1_1000_10 | 2 | 10 | 1000 |
| DS12 | Pc1 | 2 | 21 | 1109 |
| DS13 | stock | 2 | 9 | 950 |
| DS14 | CLEAN | 2 | 166 | 476 |
| DS15 | Semeion | 10 | 256 | 1593 |
| DS16 | Waveform | 3 | 40 | 5000 |

Test Accuracy and Precision Analysis

In Table 8, we compare the classification accuracy achieved by LWSSA-XGBoost and the other methods across all available datasets. According to the findings, LWSSA-XGBoost performed better than any other method in 14 out of 16 datasets, equivalent to 87.5% of all datasets. In addition, LWSSA-XGBoost, ESSA-XGBoost, and SSA-XGBoost all achieved accuracy rates of 100% on three datasets (Zoo, Wine, and Soybean small). The SSALEO-XGBoost algorithm came in second place because it achieved the highest accuracy in 2 of the 16 datasets and obtained 100% accuracy on the Zoo, Wine, and Soybean small datasets. This was the same level of accuracy achieved by the GWO-XGBoost algorithm, which came in third place. ISSA-XGBoost came in fourth, achieving results comparable to those of GWO-XGBoost while, on the whole, outperforming WOA-XGBoost, TVSSA-XGBoost, and SSA-XGBoost. Even though the remaining methods produced good results, their performance was not as good across all of the matching datasets. Based on these findings, the proposed approach has the potential to achieve superior results compared to other methods, for both small and large datasets.
Table 8. Statistical outcome comparison, average test accuracy, and precision values for 20 runs for all methods.
ID | GWO-XGBoost | ISSA-XGBoost | ESSA-XGBoost | TVSSA-XGBoost | SSA-XGBoost | WOA-XGBoost | SSALEO-XGBoost | LWSSA-XGBoost
 | Acc./Preci. | Acc./Preci. | Acc./Preci. | Acc./Preci. | Acc./Preci. | Acc./Preci. | Acc./Preci. | Acc./Preci.
DS1 | 1.000/1.000 | 1.000/1.000 | 0.964/0.905 | 1.000/1.000 | 0.876/0.811 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000
DS2 | 1.000/1.000 | 1.000/1.000 | 0.994/0.995 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000
DS3 | 0.877/0.876 | 0.888/0.887 | 0.852/0.854 | 0.885/0.884 | 0.840/0.813 | 0.870/0.871 | 0.884/0.883 | 0.889/0.888
DS4 | 0.820/0.820 | 0.816/0.816 | 0.791/0.785 | 0.813/0.814 | 0.805/0.804 | 0.808/0.808 | 0.819/0.820 | 0.826/0.826
DS5 | 0.970/0.963 | 0.971/0.964 | 0.965/0.959 | 0.971/0.964 | 0.968/0.962 | 0.969/0.962 | 0.972/0.965 | 0.971/0.964
DS6 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 0.940/0.910 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000
DS7 | 0.958/0.956 | 0.956/0.954 | 0.950/0.949 | 0.956/0.954 | 0.955/0.953 | 0.956/0.954 | 0.957/0.956 | 0.958/0.955
DS8 | 0.987/0.987 | 0.991/0.990 | 0.989/0.989 | 0.984/0.983 | 0.973/0.971 | 0.986/0.985 | 0.994/0.993 | 0.989/0.989
DS9 | 0.964/0.957 | 0.959/0.959 | 0.901/0.901 | 0.954/0.954 | 0.946/0.946 | 0.949/0.949 | 0.960/0.960 | 0.974/0.964
DS10 | 0.903/0.901 | 0.897/0.897 | 0.883/0.883 | 0.897/0.897 | 0.892/0.893 | 0.895/0.895 | 0.902/0.902 | 0.904/0.904
DS11 | 0.957/0.955 | 0.954/0.953 | 0.945/0.944 | 0.954/0.953 | 0.950/0.949 | 0.954/0.952 | 0.959/0.958 | 0.961/0.959
DS12 | 0.959/0.873 | 0.959/0.883 | 0.948/0.760 | 0.959/0.892 | 0.955/0.864 | 0.957/0.876 | 0.959/0.878 | 0.961/0.886
DS13 | 0.988/0.988 | 0.988/0.988 | 0.968/0.968 | 0.989/0.989 | 0.976/0.976 | 0.987/0.987 | 0.990/0.990 | 0.991/0.991
DS14 | 0.969/0.970 | 0.969/0.970 | 0.965/0.967 | 0.969/0.970 | 0.967/0.968 | 0.967/0.967 | 0.969/0.970 | 0.970/0.970
DS15 | 0.972/0.974 | 0.973/0.975 | 0.967/0.968 | 0.958/0.961 | 0.957/0.960 | 0.959/0.962 | 0.962/0.964 | 0.974/0.976
DS16 | 0.872/0.872 | 0.868/0.868 | 0.869/0.870 | 0.867/0.867 | 0.869/0.870 | 0.867/0.868 | 0.872/0.872 | 0.876/0.876
Acc. refers to test accuracy, and Preci. indicates precision. Bold denotes the best results.

Recall, Precision, F1 Score, and AUC Analysis

As shown in Table 9, LWSSA-XGBoost performed significantly better than the standard SSA-XGBoost on the recall and F1 score measures in every dataset. The proposed method also achieved better recall and F1 scores than contemporary SSA variants such as ESSA-XGBoost, ISSA-XGBoost, and TVSSA-XGBoost. The hybridization applied in the proposed LWSSA-XGBoost is responsible for these gains: the LW method, which is centered on the best search agent, improves the LWSSA's exploitation capability and the accuracy of the solution in each iteration, whereas the mutation operator uses random perturbations to escape local optima. As a direct consequence, the best solution can be refined, yielding better performance measurements.
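The two mechanisms can be illustrated schematically. The sketch below is our own simplification, not the authors' code: `locally_weighted_refine` stands in for the LW neighborhood probing around the best agent, and `mutate_followers` for the random repositioning of follower salps. All function names, parameters, and defaults are illustrative assumptions.

```python
import random

random.seed(0)


def locally_weighted_refine(best, obj, radius=0.1, probes=5):
    """Probe random neighbors of the best salp and keep any improvement.

    `radius` and `probes` are illustrative; the paper's LW step may use a
    different neighborhood scheme.
    """
    for _ in range(probes):
        candidate = [x + radius * random.gauss(0, 1) for x in best]
        if obj(candidate) < obj(best):
            best = candidate
    return best


def mutate_followers(positions, lb, ub, rate=0.1):
    """Randomly reposition a fraction of follower coordinates within
    [lb, ub] to preserve diversity and help escape local optima."""
    return [[random.uniform(lb, ub) if random.random() < rate else x
             for x in pos] for pos in positions]
```

On a simple sphere objective, `locally_weighted_refine` can only keep candidates that lower the objective, so the refined best is never worse than the input, which is the exploitation behavior described above.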
Table 9. Statistical outcomes of comparison, average recall, and F1 score values for 20 runs for all methods.
ID | GWO-XGBoost | ISSA-XGBoost | ESSA-XGBoost | TVSSA-XGBoost | SSA-XGBoost | WOA-XGBoost | SSALEO-XGBoost | LWSSA-XGBoost
 | Recall/F1 | Recall/F1 | Recall/F1 | Recall/F1 | Recall/F1 | Recall/F1 | Recall/F1 | Recall/F1
DS1 | 1.000/1.000 | 1.000/1.000 | 0.911/0.907 | 1.000/1.000 | 0.829/0.816 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000
DS2 | 1.000/1.000 | 1.000/1.000 | 0.995/0.995 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000
DS3 | 0.879/0.876 | 0.887/0.887 | 0.858/0.852 | 0.885/0.884 | 0.837/0.819 | 0.873/0.870 | 0.885/0.883 | 0.888/0.888
DS4 | 0.821/0.819 | 0.817/0.816 | 0.793/0.784 | 0.814/0.812 | 0.806/0.804 | 0.809/0.807 | 0.820/0.819 | 0.827/0.826
DS5 | 0.972/0.967 | 0.973/0.968 | 0.964/0.961 | 0.972/0.968 | 0.968/0.965 | 0.970/0.966 | 0.974/0.969 | 0.973/0.968
DS6 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000 | 0.925/0.914 | 1.000/1.000 | 1.000/1.000 | 1.000/1.000
DS7 | 0.954/0.955 | 0.953/0.954 | 0.946/0.948 | 0.954/0.954 | 0.952/0.952 | 0.953/0.953 | 0.955/0.955 | 0.955/0.956
DS8 | 0.985/0.985 | 0.989/0.989 | 0.986/0.987 | 0.981/0.982 | 0.971/0.970 | 0.986/0.985 | 0.993/0.993 | 0.986/0.988
DS9 | 0.958/0.957 | 0.959/0.958 | 0.901/0.900 | 0.954/0.953 | 0.946/0.945 | 0.950/0.949 | 0.961/0.960 | 0.961/0.963
DS10 | 0.901/0.900 | 0.897/0.897 | 0.883/0.883 | 0.897/0.897 | 0.892/0.892 | 0.895/0.895 | 0.902/0.902 | 0.907/0.903
DS11 | 0.957/0.956 | 0.955/0.954 | 0.944/0.944 | 0.954/0.953 | 0.951/0.950 | 0.953/0.953 | 0.960/0.959 | 0.961/0.960
DS12 | 0.772/0.813 | 0.765/0.810 | 0.644/0.676 | 0.757/0.806 | 0.736/0.782 | 0.754/0.799 | 0.763/0.807 | 0.776/0.820
DS13 | 0.988/0.988 | 0.988/0.988 | 0.968/0.968 | 0.989/0.989 | 0.976/0.976 | 0.987/0.987 | 0.990/0.990 | 0.991/0.991
DS14 | 0.967/0.968 | 0.968/0.969 | 0.963/0.964 | 0.967/0.968 | 0.965/0.967 | 0.965/0.966 | 0.968/0.969 | 0.968/0.969
DS15 | 0.972/0.972 | 0.973/0.973 | 0.967/0.967 | 0.959/0.959 | 0.957/0.957 | 0.960/0.960 | 0.962/0.962 | 0.974/0.975
DS16 | 0.872/0.871 | 0.868/0.868 | 0.869/0.868 | 0.867/0.867 | 0.870/0.869 | 0.867/0.867 | 0.872/0.872 | 0.877/0.876
Bold denotes the best results.
Similarly, the LWSSA-XGBoost fitness values improved as a result of these modifications and the resulting search balance. Table 10 compares the average fitness and AUC values of all competing methods, indicating that LWSSA-XGBoost exceeded all other techniques. Furthermore, these findings indicate the robustness and repeatability of the LWSSA-XGBoost algorithm across multiple datasets.
Table 10. Statistical outcomes of comparison, average fitness, and AUC values for 20 runs for all methods.
ID | GWO-XGBoost | ISSA-XGBoost | ESSA-XGBoost | TVSSA-XGBoost | SSA-XGBoost | WOA-XGBoost | SSALEO-XGBoost | LWSSA-XGBoost
 | Fitness/AUC | Fitness/AUC | Fitness/AUC | Fitness/AUC | Fitness/AUC | Fitness/AUC | Fitness/AUC | Fitness/AUC
DS1 | 0.0000/1.000 | 0.0000/1.000 | 0.0933/0.999 | 0.0000/0.998 | 0.1842/0.898 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000
DS2 | 0.0000/1.000 | 0.0000/1.000 | 0.0051/0.999 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000
DS3 | 0.1240/0.879 | 0.1134/0.887 | 0.1484/0.858 | 0.1160/0.885 | 0.1807/0.837 | 0.1304/0.873 | 0.1167/0.885 | 0.1125/0.888
DS4 | 0.1809/0.936 | 0.1841/0.938 | 0.2165/0.927 | 0.1876/0.936 | 0.1958/0.935 | 0.1932/0.934 | 0.1814/0.938 | 0.1745/0.938
DS5 | 0.0329/0.972 | 0.0318/0.973 | 0.0386/0.964 | 0.0322/0.972 | 0.0350/0.968 | 0.0341/0.970 | 0.0310/0.974 | 0.0318/0.973
DS6 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000 | 0.0857/0.950 | 0.0000/1.000 | 0.0000/1.000 | 0.0000/1.000
DS7 | 0.0444/0.955 | 0.0463/0.953 | 0.0523/0.946 | 0.0460/0.954 | 0.0476/0.952 | 0.0465/0.953 | 0.0446/0.955 | 0.0452/0.954
DS8 | 0.0145/0.998 | 0.0107/0.998 | 0.0131/0.997 | 0.0184/0.998 | 0.0297/0.997 | 0.0152/0.998 | 0.0069/0.998 | 0.0123/0.998
DS9 | 0.0430/0.958 | 0.0415/0.959 | 0.0995/0.901 | 0.0465/0.954 | 0.0545/0.946 | 0.0510/0.950 | 0.0400/0.961 | 0.0365/0.964
DS10 | 0.0995/0.901 | 0.1033/0.897 | 0.1173/0.883 | 0.1030/0.897 | 0.1078/0.892 | 0.1048/0.895 | 0.0978/0.902 | 0.0968/0.903
DS11 | 0.0439/0.957 | 0.0464/0.955 | 0.0562/0.944 | 0.0467/0.954 | 0.0505/0.951 | 0.0472/0.953 | 0.0414/0.960 | 0.0401/0.961
DS12 | 0.1873/0.772 | 0.1899/0.765 | 0.3240/0.644 | 0.1944/0.757 | 0.2179/0.736 | 0.2006/0.754 | 0.1928/0.763 | 0.1803/0.776
DS13 | 0.0121/0.988 | 0.0119/0.988 | 0.0321/0.968 | 0.0113/0.989 | 0.0242/0.976 | 0.0134/0.987 | 0.0097/0.990 | 0.0087/0.991
DS14 | 0.0318/0.967 | 0.0313/0.968 | 0.0356/0.963 | 0.0318/0.967 | 0.0334/0.965 | 0.0340/0.965 | 0.0313/0.968 | 0.0308/0.968
DS15 | 0.0276/0.998 | 0.0269/0.997 | 0.0332/0.997 | 0.0412/0.996 | 0.0428/0.997 | 0.0403/0.996 | 0.0377/0.997 | 0.0255/0.998
DS16 | 0.1286/0.973 | 0.1319/0.970 | 0.1318/0.973 | 0.1333/0.974 | 0.1308/0.973 | 0.1329/0.972 | 0.1284/0.974 | 0.1237/0.974
AUC: area under the curve. Bold denotes the best results.
Taking all of these results into account, the LWSSA-XGBoost is a strong candidate for improving the performance of the XGBoost classifier on a variety of datasets. The locally weighted (LW) local search operator improved the LWSSA's exploitation ability and solution accuracy, while the mutation operator strengthened its exploration. These enhancements enabled the LWSSA to deliver higher-quality solutions than the competing optimization algorithms, as reflected in the higher average classification accuracy, fitness function values, area under the curve (AUC), precision, and F1 score. Based on these findings, the LWSSA appears to be a workable strategy for tackling real-world problems.
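As a rough illustration of how a metaheuristic drives XGBoost tuning, each salp's position vector can be decoded into a hyperparameter dictionary before fitness evaluation. The parameter set and ranges below are our own assumptions for the sketch, not the settings used in the paper.

```python
def decode(position):
    """Map a position in [0, 1]^4 to XGBoost-style hyperparameters.

    Ranges are illustrative assumptions, not the paper's search space.
    """
    lo_hi = {
        "n_estimators":  (50, 500),
        "max_depth":     (2, 12),
        "learning_rate": (0.01, 0.3),
        "subsample":     (0.5, 1.0),
    }
    params = {}
    for x, (name, (lo, hi)) in zip(position, lo_hi.items()):
        val = lo + x * (hi - lo)
        # integer-valued parameters are rounded; the rest stay continuous
        params[name] = int(round(val)) if name in ("n_estimators", "max_depth") else val
    return params
```

The optimizer then trains a classifier with `decode(position)` and scores the position with the 1 − F1 cost, so better-performing hyperparameter settings receive lower fitness.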

7. Problematic Constraints and Difficulties

While the LWSSA outperformed several competing algorithms on the benchmarks considered here, it is not without limitations. The most important constraints are as follows:
  • Computational Overhead: the locally weighted technique and the mutation strategy add complexity to the optimization process and may introduce additional computational overhead. Depending on the size of the problem and the number of iterations, this can increase processing time relative to simpler variants of the Salp Swarm Algorithm.
  • Parameter Tuning: the success of the proposed strategy depends strongly on the accurate tuning of the parameters of the locally weighted strategy and the mutation scheme. Finding suitable parameter values may require additional effort and trial and error, particularly across a variety of distinct optimization situations.
  • Sensitivity to Problem Characteristics: as with any algorithm, performance may depend on the specifics of the optimization problems to which the approach is applied. It is very useful in some circumstances, but its effectiveness can vary with the nature and structure of the problem.
  • Comparative Performance: comprehensive comparative studies are required to determine whether the proposed method is preferable to other Salp Swarm variants. This entails testing the algorithm on a wide variety of benchmark problems and contrasting its results with those of previously developed variants of the Salp Swarm Algorithm as well as other state-of-the-art optimization strategies.
Addressing these constraints relative to comparable Salp Swarm variants will support a fuller evaluation of the proposed method and its potential contributions to the field of optimization.

8. Conclusions and Future Work

This study introduces an improved version of the Salp Swarm Algorithm, called the LWSSA, which is designed to optimize XGBoost parameters. The LWSSA is built on two modifications. Firstly, a mutation operator is applied in the SSA to improve the searching ability of half of the individuals, which enhances convergence potential and accuracy. Secondly, the LWSSA utilizes a novel locally weighted (LW) technique to overcome stagnation at local optima and improve overall performance. To evaluate its performance and compare it to other well-established metaheuristics, the proposed technique was tested on recent IEEE Congress on Evolutionary Computation (CEC) benchmark functions, including CEC 2021 and CEC 2017. Additionally, the LWSSA algorithm was applied to optimize the performance of the XGBoost classifier in the risk assessment of cardiovascular disease and in the classification of 16 datasets.
The final results demonstrate that the LWSSA-XGBoost model was highly effective in predicting cardiovascular disease risk. This research is valuable for predicting the risk of developing cardiovascular disease (CVD) and has the potential to guide individuals and encourage high-risk groups to modify their lifestyle behaviors, thus reducing the risk of CVD. It can also help clinicians identify potential risks and patterns, enabling them to identify high-risk groups more accurately and reduce misdiagnosis rates. However, there are some limitations and opportunities for future research. For instance, healthcare big data can take on various structural forms (structured, semi-structured, and unstructured), all of which may be used for assessing the risk of CVD, whereas the approach presented in this paper has only been validated on datasets with a single type of data structure. Additionally, the study focuses on XGBoost as the primary model and demonstrates that the LWSSA effectively optimizes its parameters; it remains unclear whether the LWSSA can similarly optimize the parameters of other classifiers. Future work on the Locally Weighted Salp Swarm Algorithm (LWSSA) involves exploring its applicability in diverse domains beyond cardiovascular risk assessment. The algorithm's features, including dynamic population division, a locally weighted technique, and a tailored mutation strategy, position it as a versatile optimization tool. Future research could evaluate the LWSSA in areas such as financial modeling, supply chain optimization, and energy management, where adaptive and efficient solutions are crucial. Additionally, a thorough comparative analysis against state-of-the-art algorithms in these diverse application areas would provide valuable insight into the LWSSA's relative efficacy.
Collaborative efforts across disciplines could enhance the LWSSA’s adaptability, opening avenues for innovative problem-solving methodologies and fostering interdisciplinary research initiatives. This future work aims to advance the field of optimization algorithms and contribute to the algorithm’s broader utility across various domains.

Author Contributions

Conceptualization, M.Q. and N.K.H.; methodology, M.Q., M.A.A. and S.I.M.; software, M.A.A., M.Q. and M.A.; validation, M.A., M.Q. and N.K.H.; formal analysis, M.A.; investigation, S.I.M., O.H. and M.A.; resources, M.A. and O.H.; data curation, O.H., M.A.A., S.I.M., M.A. and M.Q.; writing—original draft preparation, N.K.H. and O.H.; writing—review and editing, M.Q., S.I.M., O.H., M.A.A., M.A. and N.K.H.; visualization, M.Q., S.I.M., M.A., M.A.A. and N.K.H.; supervision, M.Q., S.I.M., M.A. and N.K.H.; project administration, M.Q. and N.K.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, project number 445-9-777.

Data Availability Statement

We have made the code for this research project available on GitHub at https://github.com/MohammedQaraad/LWSSA-algorithm (accessed on 5 January 2024). Readers interested in accessing the code can download it from this repository.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number 445-9-777.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Comparison of the LWSSA with Some Improved SSA Algorithms during 2500 Iterations (CEC2021)

F | Criteria | ESSA | HSSASCA | ISSA | ISSA_OBL | SSALEO | SSA-FGWO | TVSSA | LWSSA
F1 | Avg | 1.498e+03 | 2.274e+10 | 3.879e+03 | 9.531e+02 | 2.966e+03 | 4.288e+03 | 4.774e+03 | 1.000e+02
F1 | Std | 1.885e+07 | 6.973e+09 | 3.712e+03 | 1.276e+03 | 2.158e+03 | 3.153e+03 | 3.518e+03 | 0.000
F1 | Med | 9.198e+06 | 2.155e+10 | 2.463e+03 | 1.363e+02 | 2.712e+03 | 3.429e+03 | 3.776e+03 | 1.000e+02
F2 | Avg | 7.151e+08 | 1.781e+12 | 4.128e+05 | 3.056e+05 | 4.184e+05 | 5.496e+05 | 2.925e+05 | 1.100e+03
F2 | Std | 4.993e+08 | 5.894e+11 | 4.681e+05 | 3.103e+05 | 4.099e+05 | 6.695e+05 | 3.603e+05 | 0.000
F2 | Med | 6.395e+08 | 1.816e+12 | 1.922e+05 | 2.053e+05 | 2.847e+05 | 1.909e+05 | 1.345e+05 | 1.100e+03
F3 | Avg | 3.502e+08 | 4.753e+11 | 2.295e+05 | 1.188e+05 | 1.949e+05 | 2.151e+05 | 1.692e+05 | 7.002e+02
F3 | Std | 3.633e+08 | 1.499e+11 | 2.206e+05 | 9.574e+04 | 1.844e+05 | 2.304e+05 | 2.379e+05 | 6.716e+02
F3 | Med | 2.647e+08 | 4.791e+11 | 1.418e+05 | 8.691e+04 | 1.444e+05 | 1.104e+05 | 6.491e+04 | 7.002e+02
F4 | Avg | 1.909e+03 | 1.233e+05 | 1.904e+03 | 1.912e+03 | 1.904e+03 | 1.904e+03 | 1.904e+03 | 1.904e+03
F4 | Std | 4.635 | 1.719e+05 | 1.296 | 3.376 | 1.308 | 1.516 | 1.254 | 1.329
F4 | Med | 1.907e+03 | 3.900e+04 | 1.904e+03 | 1.911e+03 | 1.904e+03 | 1.904e+03 | 1.904e+03 | 1.904e+03
F5 | Avg | 4.169e+05 | 5.002e+06 | 1.640e+05 | 9.883e+04 | 1.062e+05 | 1.350e+05 | 1.548e+05 | 2.555e+03
F5 | Std | 4.740e+05 | 5.063e+06 | 9.887e+04 | 7.492e+04 | 6.638e+04 | 9.461e+04 | 8.765e+04 | 5.437e+02
F5 | Med | 3.010e+05 | 3.666e+06 | 1.507e+05 | 7.715e+04 | 9.471e+04 | 1.042e+05 | 1.379e+05 | 2.456e+03
F6 | Avg | 1.346e+04 | 1.287e+07 | 2.017e+04 | 1.208e+04 | 1.060e+04 | 1.741e+04 | 2.514e+04 | 1.686e+03
F6 | Std | 9.198e+03 | 2.874e+07 | 1.716e+04 | 5.195e+03 | 7.099e+03 | 1.343e+04 | 1.530e+04 | 1.576e+02
F6 | Med | 1.032e+04 | 8.938e+04 | 1.546e+04 | 1.013e+04 | 8.698e+03 | 1.365e+04 | 2.420e+04 | 1.606e+03
F7 | Avg | 9.040e+05 | 3.593e+07 | 2.653e+05 | 1.688e+06 | 1.561e+05 | 4.767e+05 | 3.723e+05 | 2.796e+03
F7 | Std | 9.564e+05 | 6.567e+07 | 1.418e+05 | 6.874e+05 | 6.675e+04 | 4.554e+05 | 3.031e+05 | 3.971e+02
F7 | Med | 5.997e+05 | 1.035e+07 | 2.114e+05 | 1.717e+06 | 1.435e+05 | 3.859e+05 | 2.905e+05 | 2.712e+03
F8 | Avg | 2.326e+03 | 2.717e+03 | 2.315e+03 | 2.333e+03 | 2.337e+03 | 2.313e+03 | 2.315e+03 | 2.321e+03
F8 | Std | 4.287 | 6.736e+02 | 1.636e+01 | 1.752e+01 | 1.540e+01 | 1.653e+01 | 1.705e+01 | 1.423e+01
F8 | Med | 2.326e+03 | 2.576e+03 | 2.303e+03 | 2.338e+03 | 2.338e+03 | 2.300e+03 | 2.303e+03 | 2.328e+03
F9 | Avg | 2.944e+03 | 1.631e+04 | 2.606e+03 | 2.662e+03 | 2.621e+03 | 2.600e+03 | 2.622e+03 | 2.567e+03
F9 | Std | 1.498e+02 | 3.795e+03 | 3.427e+01 | 1.276e+02 | 5.598e+01 | 4.968e-04 | 6.926e+01 | 4.795e+01
F9 | Med | 2.948e+03 | 1.678e+04 | 2.600e+03 | 2.600e+03 | 2.600e+03 | 2.600e+03 | 2.600e+03 | 2.600e+03
F10 | Avg | 3.265e+03 | 6.061e+03 | 3.162e+03 | 3.307e+03 | 3.215e+03 | 3.170e+03 | 3.156e+03 | 3.132e+03
F10 | Std | 8.030e+01 | 1.312e+03 | 3.603e+01 | 1.440e+02 | 7.049e+01 | 4.920e+01 | 3.858e+01 | 1.452e+01
F10 | Med | 3.258e+03 | 5.942e+03 | 3.157e+03 | 3.273e+03 | 3.221e+03 | 3.158e+03 | 3.142e+03 | 3.123e+03
Friedman Avg. | | 5.879 | 7.938 | 3.886 | 4.886 | 3.779 | 3.983 | 3.852 | 1.797
Friedman Rank | | 7 | 8 | 4 | 6 | 2 | 5 | 3 | 1
Bold denotes the best results.
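The Friedman average ranks reported in the last two rows of the table can be reproduced with a short routine. The sketch below is our simplification: it assigns ranks per function from the per-algorithm averages, with lower values ranked better, and ignores ties.

```python
def friedman_avg_ranks(scores):
    """Average Friedman rank per algorithm (columns) over problems (rows).

    Lower score = better = rank 1. Ties are not handled (a simplification
    relative to the standard tie-averaged Friedman procedure).
    """
    n_algs = len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        # order algorithm indices from best (lowest) to worst (highest)
        order = sorted(range(n_algs), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    return [t / len(scores) for t in totals]
```

Running this over the per-function average errors of all algorithms yields one average rank per algorithm, which is then sorted to produce the final Friedman rank row.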

Appendix A.2. Wilcoxon Rank-Sum of the LWSSA vs. Some Improved SSA Algorithms on CEC2021

Fun | ESSA | HSSASCA | ISSA | ISSA_OBL | SSALEO | SSA-FGWO | TVSSA
1 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
2 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
3 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
4 | <0.05 | <0.05 | <0.05 | <0.05 | 0.15475248 | 0.61326587 | 0.3629539
5 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
6 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
7 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
8 | 0.91948423 | <0.05 | 0.22811625 | <0.05 | <0.05 | 0.05478402 | 0.23417446
9 | <0.05 | <0.05 | 0.74987697 | <0.05 | 0.52880743 | 0.83371254 | 0.63527561
10 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
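The p-values above come from the Wilcoxon rank-sum test at the 0.05 significance level. A minimal sketch of the test using the large-sample normal approximation is shown below; this is our own simplification (no tie correction, all values assumed distinct), not the statistical package used by the authors.

```python
import math


def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.

    Assumes all values are distinct (no tie correction), which is usually
    adequate for continuous fitness values.
    """
    n1, n2 = len(x), len(y)
    combined = sorted(list(x) + list(y))
    # rank sum of the first sample (ranks are 1-based)
    r1 = sum(combined.index(v) + 1 for v in x)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mu) / sigma
    # two-sided p-value from the standard normal tail: 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2.0))
```

Two clearly separated samples give a p-value below 0.05 (a significant difference), while interleaved samples of similar magnitude do not.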

Appendix A.3. Comparison between LWSSA and Some State-of-the-Art Algorithms during 2500 Iterations (CEC2021)

F | Criteria | GBO | EGBO | SMA | PSO | EO | SADE | CLPSO | HPSO_TVAC | LWSSA
F1 | Avg | 9.217e+02 | 1.499e+02 | 7.679e+03 | 2.137e+03 | 4.280e+07 | 1.000e+02 | 6.292e+06 | 1.754e+04 | 1.000e+02
F1 | Std | 1.415e+03 | 2.704e+02 | 3.384e+03 | 2.187e+03 | 1.346e+08 | 0.000 | 2.358e+07 | 6.309e+04 | 4.336e-14
F1 | Med | 4.134e+02 | 1.000e+02 | 8.933e+03 | 1.453e+03 | 4.233e+03 | 1.000e+02 | 7.389e+04 | 2.327e+03 | 1.000e+02
F2 | Avg | 1.122e+03 | 1.398e+09 | 9.045e+05 | 2.590e+05 | 6.700e+09 | 1.100e+03 | 2.040e+08 | 1.575e+08 | 1.100e+03
F2 | Std | 6.650e+01 | 7.657e+09 | 7.804e+05 | 3.880e+05 | 1.369e+10 | 0.000 | 5.861e+08 | 8.617e+08 | 6.938e-13
F2 | Med | 1.101e+03 | 1.100e+03 | 6.705e+05 | 1.351e+05 | 2.509e+06 | 1.100e+03 | 9.251e+05 | 1.646e+04 | 1.100e+03
F3 | Avg | 1.183e+05 | 1.183e+05 | 9.370e+03 | 2.991e+04 | 1.897e+09 | 7.000e+02 | 1.753e+08 | 1.554e+03 | 7.000e+02
F3 | Std | 6.442e+05 | 6.442e+05 | 1.662e+04 | 1.418e+05 | 2.781e+09 | 0.000 | 3.641e+08 | 3.196e+03 | 9.569e-02
F3 | Med | 7.000e+02 | 7.000e+02 | 1.305e+03 | 7.000e+02 | 5.883e+07 | 7.000e+02 | 7.717e+06 | 7.000e+02 | 7.002e+02
F4 | Avg | 1.907e+03 | 1.908e+03 | 1.905e+03 | 1.905e+03 | 1.907e+03 | 1.905e+03 | 1.913e+03 | 1.926e+03 | 1.904e+03
F4 | Std | 3.669 | 5.142 | 1.525 | 1.161 | 1.179e+01 | 7.251e-01 | 3.100e+01 | 1.612e+01 | 1.042
F4 | Med | 1.906e+03 | 1.907e+03 | 1.903e+03 | 1.903e+03 | 1.903e+03 | 1.905e+03 | 1.906e+03 | 1.922e+03 | 1.904e+03
F5 | Avg | 1.946e+04 | 1.280e+04 | 1.073e+05 | 5.173e+04 | 8.561e+04 | 5.156e+04 | 5.718e+04 | 5.280e+04 | 2.638e+03
F5 | Std | 1.119e+04 | 1.259e+04 | 4.806e+04 | 2.961e+04 | 4.972e+04 | 2.661e+04 | 2.007e+04 | 2.411e+04 | 6.982e+02
F5 | Med | 1.944e+04 | 7.881e+03 | 1.053e+05 | 4.306e+04 | 6.680e+04 | 4.570e+04 | 5.562e+04 | 4.808e+04 | 2.548e+03
F6 | Avg | 2.115e+03 | 3.107e+03 | 2.309e+03 | 1.788e+03 | 6.070e+03 | 1.672e+03 | 2.920e+03 | 2.163e+03 | 1.634e+03
F6 | Std | 5.474e+02 | 3.569e+03 | 8.531e+02 | 2.248e+02 | 2.584e+03 | 1.763e+02 | 1.306e+03 | 7.765e+02 | 9.759e+01
F6 | Med | 2.013e+03 | 2.341e+03 | 2.006e+03 | 1.677e+03 | 5.359e+03 | 1.604e+03 | 2.649e+03 | 2.000e+03 | 1.604e+03
F7 | Avg | 7.997e+03 | 4.270e+04 | 5.924e+04 | 1.203e+04 | 2.166e+05 | 2.068e+04 | 1.580e+05 | 1.678e+04 | 2.838e+03
F7 | Std | 5.292e+03 | 9.986e+04 | 2.737e+04 | 6.035e+03 | 6.129e+05 | 1.239e+04 | 7.703e+05 | 8.566e+03 | 7.969e+02
F7 | Med | 7.122e+03 | 8.697e+03 | 5.601e+04 | 1.063e+04 | 6.654e+04 | 1.719e+04 | 1.648e+04 | 1.326e+04 | 2.611e+03
F8 | Avg | 2.332e+03 | 2.337e+03 | 2.303e+03 | 2.362e+03 | 2.320e+03 | 2.318e+03 | 2.328e+03 | 2.339e+03 | 2.319e+03
F8 | Std | 8.092 | 4.241 | 6.499e-03 | 5.505e+01 | 1.135e+01 | 1.442e+01 | 2.165 | 8.417 | 1.403e+01
F8 | Med | 2.334e+03 | 2.336e+03 | 2.300e+03 | 2.342e+03 | 2.324e+03 | 2.328e+03 | 2.328e+03 | 2.338e+03 | 2.327e+03
F9 | Avg | 2.642e+03 | 2.622e+03 | 2.614e+03 | 2.715e+03 | 3.014e+03 | 2.605e+03 | 2.739e+03 | 2.647e+03 | 2.530e+03
F9 | Std | 1.117e+02 | 8.419e+01 | 4.880e+01 | 3.634e+02 | 5.436e+02 | 3.009e+01 | 3.289e+02 | 1.382e+02 | 4.661e+01
F9 | Med | 2.600e+03 | 2.600e+03 | 2.601e+03 | 2.600e+03 | 2.738e+03 | 2.600e+03 | 2.602e+03 | 2.600e+03 | 2.500e+03
F10 | Avg | 3.216e+03 | 3.208e+03 | 3.160e+03 | 3.278e+03 | 3.035e+03 | 3.164e+03 | 3.423e+03 | 3.298e+03 | 3.132e+03
F10 | Std | 6.647e+01 | 8.238e+01 | 4.418e+01 | 1.597e+02 | 7.140e+01 | 4.642e+01 | 9.664e+01 | 1.214e+02 | 1.581e+01
F10 | Med | 3.213e+03 | 3.183e+03 | 3.149e+03 | 3.232e+03 | 3.017e+03 | 3.158e+03 | 3.443e+03 | 3.281e+03 | 3.123e+03
Friedman Avg. | | 4.690 | 4.366 | 5.845 | 4.988 | 6.541 | 3.359 | 6.807 | 6.162 | 2.243
Friedman Rank | | 4 | 3 | 6 | 5 | 8 | 2 | 9 | 7 | 1
Bold denotes the best results.

Appendix A.4. Wilcoxon Rank-Sum of the LWSSA vs. Some State-of-the-Art Algorithms during 2500 Iterations

Fun | GBO | EGBO | SMA | PSO | EO | SADE | CLPSO | HPSO_TVAC
1 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | 1 | <0.05 | <0.05
2 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | 1 | <0.05 | <0.05
3 | <0.05 | 0.821595 | <0.05 | 0.651998 | <0.05 | 1 | <0.05 | <0.05
4 | <0.05 | <0.05 | 0.266174 | <0.05 | 0.5809 | <0.05 | <0.05 | <0.05
5 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
6 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | 0.240346 | <0.05 | <0.05
7 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
8 | <0.05 | <0.05 | <0.05 | <0.05 | 0.931838 | 0.432254 | 0.301061 | <0.05
9 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05
10 | <0.05 | <0.05 | 0.228116 | <0.05 | <0.05 | <0.05 | <0.05 | <0.05

Appendix A.5. Results from 2500 Iterations of the LWSSA vs. Some Salp Variants on IEEE CEC2017

F | Cr. | ESSA | HSSASCA | ISSA_OBL | ISSA | SSALEO | TVSSA | QSSALEO | LWSSA
F1 | Avg | 9.03656e+09 | 9.16426e+11 | 2.01231e+09 | 9.98052e+03 | 3.30750e+03 | 9.55185e+03 | 4.01170e+03 | 3.01336e+03
F1 | Std | 4.11154e+09 | 1.12483e+11 | 1.65616e+09 | 9.68170e+03 | 3.75111e+03 | 7.32136e+03 | 4.38525e+03 | 3.63301e+03
F1 | Med | 7.67018e+09 | 9.16077e+11 | 1.53825e+09 | 7.00016e+03 | 2.58166e+03 | 8.98747e+03 | 2.79712e+03 | 1.76476e+03
F2 | Avg | 1.41786e+05 | 3.44395e+05 | 7.17507e+04 | 8.99510e+04 | 2.50918e+03 | 1.25908e+05 | 1.58827e+04 | 9.50716e+03
F2 | Std | 1.47078e+04 | 8.52222e+04 | 1.25022e+04 | 1.82673e+04 | 9.17320e+02 | 4.60251e+04 | 4.99559e+03 | 3.35783e+03
F2 | Med | 1.40149e+05 | 3.29620e+05 | 7.08563e+04 | 8.69489e+04 | 2.41802e+03 | 1.15600e+05 | 1.47988e+04 | 8.47643e+03
F3 | Avg | 8.10472e+02 | 1.93663e+04 | 7.54794e+02 | 5.65904e+02 | 5.64657e+02 | 5.82398e+02 | 5.68846e+02 | 5.60686e+02
F3 | Std | 8.32235e+01 | 4.56035e+03 | 7.79068e+01 | 4.55730e+01 | 5.50436e+01 | 4.18799e+01 | 4.55508e+01 | 4.69864e+01
F3 | Med | 8.24397e+02 | 1.87923e+04 | 7.44610e+02 | 5.64272e+02 | 5.79414e+02 | 5.87143e+02 | 5.66641e+02 | 5.57586e+02
F4 | Avg | 8.09324e+02 | 1.13284e+03 | 8.43132e+02 | 7.99437e+02 | 8.24122e+02 | 8.28960e+02 | 8.51549e+02 | 9.01785e+02
F4 | Std | 2.79974e+01 | 4.00484e+01 | 4.25395e+01 | 5.50168e+01 | 4.39905e+01 | 7.42270e+01 | 2.36665e+01 | 7.52235e+01
F4 | Med | 8.09432e+02 | 1.14156e+03 | 8.48338e+02 | 8.07937e+02 | 8.23360e+02 | 8.03083e+02 | 8.47238e+02 | 9.06014e+02
F5 | Avg | 6.81917e+02 | 7.24503e+02 | 6.91499e+02 | 6.63502e+02 | 6.66724e+02 | 6.65778e+02 | 6.74832e+02 | 6.82869e+02
F5 | Std | 1.02701e+01 | 1.50459e+01 | 7.28669 | 1.73109e+01 | 1.14701e+01 | 1.68224e+01 | 1.26545e+01 | 1.69733e+01
F5 | Med | 6.83101e+02 | 7.22801e+02 | 6.92164e+02 | 6.64658e+02 | 6.66304e+02 | 6.64545e+02 | 6.77293e+02 | 6.85125e+02
F6 | Avg | 1.18583e+03 | 1.95674e+03 | 1.64099e+03 | 1.09615e+03 | 1.09556e+03 | 1.11161e+03 | 1.34477e+03 | 1.10426e+03
F6 | Std | 6.58112e+01 | 9.37910e+01 | 1.55676e+02 | 9.76494e+01 | 7.48194e+01 | 8.18462e+01 | 1.26781e+02 | 8.60101e+01
F6 | Med | 1.18845e+03 | 1.96788e+03 | 1.62038e+03 | 1.08841e+03 | 1.08852e+03 | 1.11536e+03 | 1.36312e+03 | 1.09649e+03
F7 | Avg | 1.14113e+03 | 1.43990e+03 | 1.16850e+03 | 1.17294e+03 | 1.14861e+03 | 1.10273e+03 | 1.16127e+03 | 1.17958e+03
F7 | Std | 4.33617e+01 | 5.39006e+01 | 3.59235e+01 | 9.96923e+01 | 4.24728e+01 | 7.05031e+01 | 4.27204e+01 | 6.77353e+01
F7 | Med | 1.14793e+03 | 1.43532e+03 | 1.16384e+03 | 1.15129e+03 | 1.14807e+03 | 1.09852e+03 | 1.17641e+03 | 1.17238e+03
F8 | Avg | 1.46443e+04 | 3.74321e+04 | 1.38950e+04 | 1.14001e+04 | 7.47256e+03 | 1.42571e+04 | 1.09280e+04 | 1.39525e+04
F8 | Std | 2.36454e+03 | 5.76418e+03 | 1.84673e+03 | 2.51992e+03 | 2.72882e+03 | 4.19994e+03 | 1.80218e+03 | 3.92527e+03
F8 | Med | 1.49006e+04 | 3.67857e+04 | 1.33979e+04 | 1.22345e+04 | 6.67854e+03 | 1.44039e+04 | 1.10557e+04 | 1.35127e+04
F9 | Avg | 7.87305e+03 | 1.52464e+04 | 8.85248e+03 | 7.84373e+03 | 7.93135e+03 | 8.02314e+03 | 8.47217e+03 | 7.29078e+03
F9 | Std | 7.23499e+02 | 7.10918e+02 | 1.06756e+03 | 9.64341e+02 | 9.57611e+02 | 8.76859e+02 | 9.78798e+02 | 7.31876e+02
F9 | Med | 7.85331e+03 | 1.53959e+04 | 8.78094e+03 | 7.66902e+03 | 7.66004e+03 | 8.23477e+03 | 8.27706e+03 | 7.33032e+03
F10 | Avg | 3.54871e+03 | 1.62130e+04 | 1.92676e+03 | 1.50495e+03 | 1.37232e+03 | 1.46749e+03 | 1.37368e+03 | 1.32727e+03
F10 | Std | 1.65052e+03 | 4.04847e+03 | 2.29106e+02 | 9.84397e+01 | 7.03959e+01 | 8.61959e+01 | 7.15218e+01 | 6.92137e+01
F10 | Med | 3.23887e+03 | 1.57903e+04 | 1.89588e+03 | 1.50591e+03 | 1.36892e+03 | 1.45323e+03 | 1.38586e+03 | 1.32272e+03
F11 | Avg | 5.09616e+08 | 4.18049e+11 | 9.16145e+08 | 5.41120e+08 | 2.17013e+08 | 6.09851e+08 | 2.38139e+08 | 1.73582e+06
F11 | Std | 2.70692e+08 | 1.38807e+11 | 4.53783e+08 | 3.49198e+08 | 1.31139e+08 | 4.96511e+08 | 1.53479e+08 | 1.04780e+06
F11 | Med | 4.32410e+08 | 4.17268e+11 | 7.97665e+08 | 4.77970e+08 | 2.12643e+08 | 4.97356e+08 | 2.48229e+08 | 1.68378e+06
F12 | Avg | 1.68125e+08 | 2.19159e+11 | 3.01884e+04 | 2.03851e+05 | 1.16566e+05 | 2.16897e+05 | 1.28671e+05 | 1.07429e+04
F12 | Std | 2.24354e+08 | 9.03048e+10 | 1.12142e+04 | 1.20097e+05 | 6.37039e+04 | 1.33391e+05 | 9.49496e+04 | 2.51347e+03
F12 | Med | 9.75892e+07 | 2.10153e+11 | 2.76593e+04 | 1.64624e+05 | 1.03117e+05 | 1.95169e+05 | 1.00876e+05 | 1.01773e+04
F13 | Avg | 7.93422e+06 | 8.91424e+06 | 4.05799e+05 | 2.24767e+05 | 1.17258e+05 | 1.94325e+05 | 1.23123e+05 | 1.73686e+03
F13 | Std | 7.12085e+06 | 9.74291e+06 | 2.80680e+05 | 1.58367e+05 | 8.70056e+04 | 1.03172e+05 | 9.41972e+04 | 6.09089e+01
F13 | Med | 5.16878e+06 | 5.38535e+06 | 3.62468e+05 | 2.01597e+05 | 9.18832e+04 | 1.61554e+05 | 8.62047e+04 | 1.74028e+03
F14 | Avg | 1.36685e+07 | 3.63546e+10 | 1.49138e+04 | 1.11354e+05 | 4.54213e+04 | 9.23849e+04 | 6.20934e+04 | 2.76966e+03
F14 | Std | 1.56014e+07 | 2.81518e+10 | 5.58794e+03 | 5.47972e+04 | 2.31837e+04 | 4.63380e+04 | 4.58291e+04 | 3.31537e+02
F14 | Med | 6.58964e+06 | 2.79066e+10 | 1.45733e+04 | 1.03929e+05 | 4.16434e+04 | 7.66969e+04 | 4.92331e+04 | 2.71988e+03
F15 | Avg | 3.87651e+03 | 6.36455e+03 | 4.25149e+03 | 3.50712e+03 | 3.96245e+03 | 3.56187e+03 | 3.99024e+03 | 3.43327e+03
F15 | Std | 4.38522e+02 | 9.77898e+02 | 5.06100e+02 | 4.34234e+02 | 5.45647e+02 | 4.60965e+02 | 4.73674e+02 | 3.60364e+02
F15 | Med | 3.86743e+03 | 6.04054e+03 | 4.25893e+03 | 3.47240e+03 | 3.86245e+03 | 3.53008e+03 | 4.05225e+03 | 3.37880e+03
F16 | Avg | 3.35283e+03 | 8.99633e+03 | 3.41636e+03 | 3.47051e+03 | 3.40178e+03 | 3.53721e+03 | 3.40178e+03 | 3.10155e+03
F16 | Std | 3.06752e+02 | 8.04598e+03 | 2.98422e+02 | 3.19347e+02 | 3.04596e+02 | 4.04267e+02 | 3.04596e+02 | 2.68766e+02
F16 | Med | 3.38986e+03 | 6.12882e+03 | 3.37638e+03 | 3.51138e+03 | 3.38707e+03 | 3.60296e+03 | 3.38707e+03 | 3.12971e+03
F17 | Avg | 9.11523e+06 | 7.05276e+07 | 4.40012e+06 | 2.28832e+06 | 9.57515e+05 | 2.15412e+06 | 9.57515e+05 | 6.90995e+03
F17 | Std | 4.75372e+06 | 4.90121e+07 | 2.19153e+06 | 1.94567e+06 | 6.53424e+05 | 1.49028e+06 | 6.53424e+05 | 4.71762e+03
F17 | Med | 8.50410e+06 | 6.40628e+07 | 4.00712e+06 | 1.76689e+06 | 9.21205e+05 | 1.83995e+06 | 9.21205e+05 | 5.81318e+03
F18 | Avg | 1.83085e+06 | 1.78472e+10 | 1.41295e+05 | 1.59581e+07 | 5.17783e+06 | 1.60407e+07 | 4.14719e+06 | 2.14610e+03
F18 | Std | 2.66950e+06 | 1.82443e+10 | 7.81764e+04 | 1.21002e+07 | 5.60110e+06 | 1.35824e+07 | 5.83485e+06 | 9.85921e+01
F18 | Med | 1.25293e+06 | 1.24151e+10 | 1.41243e+05 | 1.20016e+07 | 2.71450e+06 | 1.22599e+07 | 1.66892e+06 | 2.11550e+03
F19 | Avg | 3.10711e+03 | 4.23320e+03 | 3.14930e+03 | 3.29848e+03 | 3.03626e+03 | 3.33348e+03 | 3.03626e+03 | 3.13075e+03
F19 | Std | 3.20684e+02 | 4.01035e+02 | 3.22682e+02 | 4.07341e+02 | 3.01334e+02 | 3.38872e+02 | 3.01334e+02 | 2.06599e+02
F19 | Med | 3.04765e+03 | 4.27941e+03 | 3.16749e+03 | 3.31251e+03 | 3.00510e+03 | 3.37189e+03 | 3.00510e+03 | 3.08932e+03
F20 | Avg | 2.60058e+03 | 2.98419e+03 | 2.72520e+03 | 2.54531e+03 | 2.63792e+03 | 2.56892e+03 | 2.65532e+03 | 2.68525e+03
Std 4.67976   ×   10 1 7.00936   ×   10 1 8.38298   ×   10 1 5.16031   ×   10 1 5.83411   ×   10 1 6.52628   ×   10 1 9.57621   ×   10 1 7.42052   ×   10 1
Med 2.60226   ×   10 3 2.96847   ×   10 3 2.74358   ×   10 3 2.53897   ×   10 3 2.62657   ×   10 3 2.55757   ×   10 3 2.65628   ×   10 3 2.69321   ×   10 3
F21Avg 9.07504   ×   10 3 1.70175   ×   10 4 1.09582   ×   10 4 9.38908   ×   10 3 9.15415   ×   10 3 9.21257   ×   10 3 9.60887   ×   10 3 8.94920 × 10 3
Std 2.71554   ×   10 3 8.13669   ×   10 2 9.28556   ×   10 2 8.14791   ×   10 2 2.45061   ×   10 3 1.79573   ×   10 3 2.21273   ×   10 3 1.44719   ×   10 3
Med 1.00121   ×   10 4 1.71563   ×   10 4 1.11692   ×   10 4 9.39694   ×   10 3 9.74711   ×   10 3 9.38979   ×   10 3 1.03739   ×   10 4 8.93462   ×   10 3
F22Avg 3.12543   ×   10 3 3.81231   ×   10 3 3.41873   ×   10 3 2.97795 × 10 3 3.25726   ×   10 3 3.00084   ×   10 3 3.17730   ×   10 3 3.12742   ×   10 3
Std 8.78357   ×   10 1 1.52531   ×   10 2 1.91680   ×   10 2 5.57378   ×   10 1 1.18682   ×   10 2 6.13483   ×   10 1 1.36059   ×   10 2 7.16808   ×   10 1
Med 3.11586   ×   10 3 3.79582   ×   10 3 3.40470   ×   10 3 2.97593   ×   10 3 3.25904   ×   10 3 3.00346   ×   10 3 3.16339   ×   10 3 3.15600   ×   10 3
F23Avg 3.45961   ×   10 3 4.05526   ×   10 3 3.60986   ×   10 3 3.15583   ×   10 3 3.44325   ×   10 3 3.13525 × 10 3 3.23939   ×   10 3 3.30224   ×   10 3
Std 1.35080   ×   10 2 1.78941   ×   10 2 1.48160   ×   10 2 4.34247   ×   10 1 9.09276   ×   10 1 5.46767   ×   10 1 1.08095   ×   10 2 1.12273   ×   10 2
Med 3.45849   ×   10 3 4.00694   ×   10 3 3.60060   ×   10 3 3.16126   ×   10 3 3.43925   ×   10 3 3.13390   ×   10 3 3.23554   ×   10 3 3.32527   ×   10 3
F24Avg 3.27506   ×   10 3 1.23863   ×   10 4 3.28556   ×   10 3 3.03779   ×   10 3 3.06590   ×   10 3 3.04667   ×   10 3 3.06144   ×   10 3 3.03338 × 10 3
Std 7.20968   ×   10 1 2.31843   ×   10 3 9.67271   ×   10 1 2.58595   ×   10 1 2.70330   ×   10 1 2.74866   ×   10 1 3.40843   ×   10 1 3.05895   ×   10 1
Med 3.27224   ×   10 3 1.23542   ×   10 4 3.26943   ×   10 3 3.03091   ×   10 3 3.07165   ×   10 3 3.04372   ×   10 3 3.06031   ×   10 3 3.03091   ×   10 3
F25Avg 4.74285   ×   10 3 1.50522   ×   10 4 1.02495   ×   10 4 4.37560 × 10 3 6.24899   ×   10 3 5.52341   ×   10 3 6.53827   ×   10 3 5.46227   ×   10 3
Std 1.28907   ×   10 3 1.05431   ×   10 3 2.50030   ×   10 3 2.04274   ×   10 3 3.92201   ×   10 3 1.86055   ×   10 3 4.03151   ×   10 3 2.66427   ×   10 3
Med 4.31756   ×   10 3 1.52262   ×   10 4 1.13049   ×   10 4 2.90000   ×   10 3 2.90000   ×   10 3 6.11998   ×   10 3 2.90000   ×   10 3 4.69566   ×   10 3
F26Avg 3.67124   ×   10 3 5.25222   ×   10 3 4.27903   ×   10 3 3.45547   ×   10 3 3.91585   ×   10 3 3.48878   ×   10 3 3.77812   ×   10 3 3.49353 × 10 3
Std 1.21967   ×   10 2 3.34631   ×   10 2 3.58177   ×   10 2 6.70338   ×   10 1 2.29000   ×   10 2 9.99470   ×   10 1 2.10532   ×   10 2 1.32606   ×   10 2
Med 3.66725   ×   10 3 5.25638   ×   10 3 4.16634   ×   10 3 3.43902   ×   10 3 3.85379   ×   10 3 3.49813   ×   10 3 3.76422   ×   10 3 3.49973   ×   10 3
F27Avg 3.66235   ×   10 3 9.40119   ×   10 3 3.76326   ×   10 3 3.30962   ×   10 3 3.29857   ×   10 3 3.31078   ×   10 3 3.30371   ×   10 3 3.29315 × 10 3
Std 1.31898   ×   10 2 1.08787   ×   10 3 1.22552   ×   10 2 2.99909   ×   10 1 2.12804   ×   10 1 2.40586   ×   10 1 2.49944   ×   10 1 2.50653   ×   10 1
Med 3.64822   ×   10 3 9.29044   ×   10 3 3.75511   ×   10 3 3.30991   ×   10 3 3.30796   ×   10 3 3.31184   ×   10 3 3.30847   ×   10 3 3.30352   ×   10 3
F28Avg 4.77081 × 10 3 1.49295   ×   10 4 6.34313   ×   10 3 5.05409   ×   10 3 6.25662   ×   10 3 5.43005   ×   10 3 6.25662   ×   10 3 4.91540   ×   10 3
Std 4.75329   ×   10 2 8.42602   ×   10 3 6.34912   ×   10 2 4.16277   ×   10 2 8.84241   ×   10 2 4.61770   ×   10 2 8.84241   ×   10 2 3.11293   ×   10 2
Med 4.92488   ×   10 3 1.18486   ×   10 4 6.39646   ×   10 3 5.12323   ×   10 3 6.04092   ×   10 3 5.22656   ×   10 3 6.04092   ×   10 3 4.96022   ×   10 3
F29Avg 1.47848   ×   10 7 3.45085   ×   10 10 2.54666   ×   10 8 2.40567   ×   10 8 1.79885   ×   10 8 2.54486   ×   10 8 1.79885   ×   10 8 6.41334 × 10 6
Std 3.71141   ×   10 7 2.39622   ×   10 10 4.27130   ×   10 7 8.75795   ×   10 7 3.96863   ×   10 7 9.19181   ×   10 7 3.96863   ×   10 7 2.97260   ×   10 6
Med 4.79713   ×   10 6 3.06144   ×   10 10 2.59349   ×   10 8 2.18037   ×   10 8 1.79079   ×   10 8 2.50107   ×   10 8 1.79079   ×   10 8 5.30481   ×   10 6
Bold denotes the best results.

Appendix A.6. Wilcoxon Rank-Sum of the LWSSA vs. Salp Variant Algorithms on CEC2017

Fun  ESSA  HSSASCA  ISSA_OBL  ISSA  SSALEO  TVSSA  QSSALEO
1  <0.05  <0.05  <0.05  <0.05  0.907149516  <0.05  0.379592005
2  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
3  <0.05  <0.05  <0.05  0.93183835  0.450706377  <0.05  0.539032803
4  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
5  0.624228823  <0.05  0.065354024  <0.05  <0.05  <0.05  0.058827192
6  <0.05  <0.05  <0.05  0.703198218  0.894837154  0.323393973  <0.05
7  <0.05  <0.05  0.785508454  0.858063401  0.141670776  <0.05  0.591599389
8  0.323393973  <0.05  0.956592944  <0.05  <0.05  0.821595142  <0.05
9  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
10  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
11  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
12  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
13  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
14  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
15  <0.05  <0.05  <0.05  0.749876972  <0.05  0.388084142  <0.05
16  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
17  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
18  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
19  0.845869569  <0.05  0.646404177  0.104139005  0.129455778  <0.05  0.129455778
20  <0.05  <0.05  0.070028526  <0.05  <0.05  <0.05  0.178564788
21  <0.05  <0.05  <0.05  0.222170531  <0.05  0.346779952  <0.05
22  0.870291194  <0.05  <0.05  <0.05  <0.05  <0.05  0.371215114
23  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
24  <0.05  <0.05  <0.05  0.498735035  <0.05  0.054784023  <0.05
25  0.591599389  <0.05  <0.05  <0.05  0.286756545  0.121777891  0.423194286
26  <0.05  <0.05  <0.05  0.22811625  <0.05  0.968987463  <0.05
27  <0.05  <0.05  <0.05  <0.05  0.293850289  <0.05  0.060939894
28  0.272918441  <0.05  <0.05  0.15475248  <0.05  <0.05  <0.05
29  0.749876972  <0.05  <0.05  <0.05  <0.05  <0.05  <0.05
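The pairwise comparisons above can be reproduced with a two-sided Wilcoxon rank-sum test over the 30 independent runs of LWSSA and each competitor on a given function. A minimal sketch (the run data below is synthetic placeholder data, not the paper's results):

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder best-fitness values from 30 runs of two algorithms on one
# CEC2017 function (lower is better); stands in for the real run logs.
rng = np.random.default_rng(0)
lwssa_runs = rng.normal(loc=1.0e4, scale=2.5e3, size=30)
rival_runs = rng.normal(loc=2.0e5, scale=5.0e4, size=30)

stat, p_value = ranksums(lwssa_runs, rival_runs)
print(f"p = {p_value:.3g}, significant at 0.05: {p_value < 0.05}")
```

A p-value below 0.05 rejects the null hypothesis that both samples come from the same distribution, which is how the `<0.05` entries in the table should be read.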

Appendix A.7. Friedman Test Result of the LWSSA vs. Salp Variant Algorithms on CEC2017

Fun  ESSA  HSSASCA  ISSA_OBL  ISSA  SSALEO  TVSSA  QSSALEO  LWSSA
1  6.9655  8.0000  6.0345  3.4483  2.3448  3.8966  2.8276  2.4828
2  6.6207  8.0000  4.2759  5.1034  1.0000  6.0000  2.8966  2.1034
3  6.7586  8.0000  6.0690  2.7931  3.0690  3.5172  3.0345  2.7586
4  2.8621  8.0000  4.6207  2.7931  3.5862  3.4483  5.0345  5.6552
5  4.8621  8.0000  6.2069  2.6552  2.7931  2.8966  3.8276  4.7586
6  4.3103  7.9655  7.0345  2.6552  2.7241  2.8966  5.6897  2.7241
7  3.4138  8.0000  4.6552  4.4483  3.6897  2.7241  4.4483  4.6207
8  5.5862  8.0000  4.8621  3.7241  1.5172  4.8276  2.7586  4.7241
9  3.7586  8.0000  5.3448  3.7931  3.5517  4.1034  5.1724  2.2759
10  6.8966  8.0000  6.0690  4.3103  2.5862  3.9310  2.5172  1.6897
11  4.8966  8.0000  6.1034  4.9310  2.8621  4.9655  3.2414  1.0000
12  7.0000  8.0000  2.1034  4.8621  4.0000  4.9310  4.1034  1.0000
13  7.4483  7.5517  5.1379  4.3103  3.0690  4.3103  3.1724  1.0000
14  7.0000  8.0000  2.0345  5.1724  3.6207  5.1724  4.0000  1.0000
15  4.5172  8.0000  5.8276  3.0345  4.5172  3.0345  4.5862  2.4828
16  3.7241  8.0000  4.1379  4.5172  4.2414  4.7241  4.2759  2.3793
17  6.8621  7.9310  5.7241  4.2759  2.9655  4.0345  3.2069  1.0000
18  3.6897  8.0000  2.2069  6.0345  4.7586  6.0345  4.2759  1.0000
19  3.6897  7.8621  4.0345  4.4138  3.4483  4.9655  3.6207  3.9655
20  3.4828  8.0000  5.8621  1.8966  4.2414  2.4828  4.5862  5.4483
21  4.2414  8.0000  6.1379  3.2759  3.8621  3.5517  4.2759  2.6552
22  3.9655  7.9310  6.6897  1.6897  5.4138  1.9655  4.2414  4.1034
23  5.5862  8.0000  6.6207  2.1724  5.4483  1.5517  3.0690  3.5517
24  6.4828  8.0000  6.5172  2.3103  3.5862  3.0000  3.8276  2.2759
25  3.8276  8.0000  6.4483  2.9655  3.4828  3.5517  3.6552  4.0690
26  4.1724  8.0000  6.6552  1.7586  5.7586  2.3793  4.7241  2.5517
27  6.2069  8.0000  6.7931  3.3793  2.6552  3.3448  3.0690  2.5517
28  2.2069  8.0000  6.0000  2.8276  5.5345  3.7586  5.5345  2.1379
29  1.4828  8.0000  5.9310  5.2759  4.1207  5.5517  4.1207  1.5172
Avg.  4.9144  7.9738  5.3841  3.6147  3.6017  3.8466  3.9239  2.7408
Rank  6.0000  8.0000  7.0000  3.0000  2.0000  4.0000  5.0000  1.0000
Bold denotes the best results.
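The average ranks above follow the usual Friedman procedure: within each run, the competing algorithms are ranked by fitness (1 = best), and the ranks are averaged across runs. A minimal sketch with synthetic placeholder data for three hypothetical algorithms:

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

# Placeholder fitness values: 30 runs x 3 algorithms (lower is better).
rng = np.random.default_rng(1)
fitness = np.column_stack([
    rng.normal(5.0, 1.0, 30),   # algorithm A
    rng.normal(4.0, 1.0, 30),   # algorithm B
    rng.normal(3.0, 1.0, 30),   # algorithm C
])

# Rank within each run (row), then average the ranks per algorithm (column).
avg_ranks = rankdata(fitness, axis=1).mean(axis=0)
stat, p = friedmanchisquare(*fitness.T)
print("average ranks:", avg_ranks, "Friedman p =", p)
```

A low Friedman p-value indicates that at least one algorithm's rank distribution differs systematically, after which the per-column averages order the algorithms as in the Rank row above.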

References

  1. Cardiovascular Diseases (CVDs). Available online: https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds) (accessed on 10 November 2023).
  2. D’Agostino, R.B.; Vasan, R.S.; Pencina, M.J.; Wolf, P.A.; Cobain, M.; Massaro, J.M.; Kannel, W.B. General Cardiovascular Risk Profile for Use in Primary Care: The Framingham Heart Study. Circulation 2008, 117, 743–753. [Google Scholar] [CrossRef] [PubMed]
  3. Lloyd-Jones, D.M. Cardiovascular Risk Prediction. Circulation 2010, 121, 1768–1777. [Google Scholar] [CrossRef] [PubMed]
  4. Ward, A.; Sarraju, A.; Chung, S.; Li, J.; Harrington, R.; Heidenreich, P.; Palaniappan, L.; Scheinker, D.; Rodriguez, F. Machine Learning and Atherosclerotic Cardiovascular Disease Risk Prediction in a Multi-Ethnic Population. npj Digit. Med. 2020, 3, 125. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, T.; Guestrin, C. XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016. [Google Scholar]
  6. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.-Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  7. Probst, P.; Boulesteix, A.; Bischl, B. Tunability: Importance of Hyperparameters of Machine Learning Algorithms. J. Mach. Learn. Res. 2019, 20, 1934–1965. [Google Scholar]
  8. Bergstra, J.; Bengio, Y. Random Search for Hyper-Parameter Optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
  9. Qaraad, M.; Amjad, S.; Hussein, N.K.; Farag, M.A.; Mirjalili, S.; Elhosseini, M.A. Quadratic Interpolation and a New Local Search Approach to Improve Particle Swarm Optimization: Solar Photovoltaic Parameter Estimation. Expert Syst. Appl. 2024, 236, 121417. [Google Scholar] [CrossRef]
  10. Qaraad, M.; Aljadania, A.; Elhosseini, M. Large-Scale Competitive Learning-Based Salp Swarm for Global Optimization and Solving Constrained Mechanical and Engineering Design Problems. Mathematics 2023, 11, 1362. [Google Scholar] [CrossRef]
  11. Qaraad, M.; Amjad, S.; Hussein, N.K.; Badawy, M.; Mirjalili, S.; Elhosseini, M.A. Photovoltaic Parameter Estimation Using Improved Moth Flame Algorithms with Local Escape Operators. Comput. Electr. Eng. 2023, 106, 108603. [Google Scholar] [CrossRef]
  12. Qaraad, M.; Amjad, S.; Hussein, N.K.; Mirjalili, S.; Elhosseini, M.A. An Innovative Time-Varying Particle Swarm-Based Salp Algorithm for Intrusion Detection System and Large-Scale Global Optimization Problems. Artif. Intell. Rev. 2022, 56, 8325–8392. [Google Scholar] [CrossRef]
  13. Ojha, V.K.; Abraham, A.; Snášel, V. Metaheuristic Design of Feedforward Neural Networks: A Review of Two Decades of Research. Eng. Appl. Artif. Intell. 2017, 60, 97–116. [Google Scholar] [CrossRef]
  14. Akyol, S.; Alatas, B. Plant Intelligence Based Metaheuristic Optimization Algorithms. Artif. Intell. Rev. 2017, 47, 417–462. [Google Scholar] [CrossRef]
  15. Sharma, S.; Kumar, V. Application of Genetic Algorithms in Healthcare: A Review. Stud. Comput. Intell. 2022, 1039, 75–86. [Google Scholar] [CrossRef]
  16. Kumar, S.; Sahoo, G. A Random Forest Classifier Based on Genetic Algorithm for Cardiovascular Diseases Diagnosis (RESEARCH NOTE). Int. J. Eng. 2017, 30, 1723–1729. [Google Scholar]
  17. Amma, N.G.B. Cardiovascular Disease Prediction System Using Genetic Algorithm and Neural Network. In Proceedings of the 2012 International Conference on Computing, Communication and Applications, Dindigul, India, 22–24 February 2012. [Google Scholar] [CrossRef]
  18. Ay, Ş.; Ekinci, E.; Garip, Z. A Comparative Analysis of Meta-Heuristic Optimization Algorithms for Feature Selection on ML-Based Classification of Heart-Related Diseases. J. Supercomput. 2023, 79, 11797–11826. [Google Scholar] [CrossRef] [PubMed]
  19. Sheeba, P.T.; Roy, D.; Syed, M.H. A Metaheuristic-Enabled Training System for Ensemble Classification Technique for Heart Disease Prediction. Adv. Eng. Softw. 2022, 174, 103297. [Google Scholar] [CrossRef]
  20. Tharwat, A.; Schenck, W. A Conceptual and Practical Comparison of PSO-Style Optimization Algorithms. Expert Syst. Appl. 2021, 167, 114430. [Google Scholar] [CrossRef]
  21. Okwu, M.O.; Tartibu, L.K. Particle Swarm Optimisation. Stud. Comput. Intell. 2021, 927, 5–13. [Google Scholar] [CrossRef]
  22. Tang, K.S.; Man, K.F.; Kwong, S.; He, Q. Genetic Algorithms and Their Applications. IEEE Signal Process. Mag. 1996, 13, 22–37. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A Bio-Inspired Optimizer for Engineering Design Problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  24. Karaboga, D.; Basturk, B. A Powerful and Efficient Algorithm for Numerical Function Optimization: Artificial Bee Colony (ABC) Algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  25. Dorigo, M.; Birattari, M. Ant Colony Optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  26. Grandgirard, J.; Poinsot, D.; Krespi, L.; Nénon, J.P.; Cortesero, A.M. Costs of Secondary Parasitism in the Facultative Hyperparasitoid Pachycrepoideus Dubius: Does Host Size Matter? Entomol. Exp. Appl. 2002, 103, 239–248. [Google Scholar] [CrossRef]
  27. Hatamlou, A. Black Hole: A New Heuristic Optimization Approach for Data Clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  28. Kaveh, A.; Dadras, A. A Novel Meta-Heuristic Optimization Algorithm: Thermal Exchange Optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  29. Lam, A.Y.S.; Member, S.; Li, V.O.K. Chemical-Reaction-Inspired Metaheuristic for Optimization. IEEE Trans. Evol. Comput. 2009, 14, 381–399. [Google Scholar] [CrossRef]
  30. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. Comput. Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  31. Abdollahi, M.; Isazadeh, A.; Abdollahi, D. Imperialist Competitive Algorithm for Solving Systems of Nonlinear Equations. Comput. Math. Appl. 2013, 65, 1894–1908. [Google Scholar] [CrossRef]
  32. Ashrafi, S.M.; Dariane, A.B. A Novel and Effective Algorithm for Numerical Optimization: Melody Search (MS). In Proceedings of the 2011 11th International Conference on Hybrid Intelligent Systems (HIS), Melacca, Malaysia, 5–8 December 2011; pp. 109–114. [Google Scholar] [CrossRef]
  33. Qaraad, M.; Amjad, S.; Hussein, N.K.; Elhosseini, M.A. An Innovative Quadratic Interpolation Salp Swarm-Based Local Escape Operator for Large-Scale Global Optimization Problems and Feature Selection. Neural Comput. Appl. 2022, 34, 17663–17721. [Google Scholar] [CrossRef]
  34. Qaraad, M.; Amjad, S.; Hussein, N.K.; Elhosseini, M.A. Large Scale Salp-Based Grey Wolf Optimization for Feature Selection and Global Optimization. Neural Comput. Appl. 2022, 34, 8989–9014. [Google Scholar] [CrossRef]
  35. Qaraad, M.; Amjad, S.; Hussein, N.K.; Elhosseini, M.A. Addressing Constrained Engineering Problems and Feature Selection with a Time-Based Leadership Salp-Based Algorithm with Competitive Learning. J. Comput. Des. Eng. 2022, 9, 2235–2270. [Google Scholar] [CrossRef]
  36. Qaraad, M.; Amjad, S.; Hussein, N.K.; Mirjalili, S.; Halima, N.B.; Elhosseini, M.A. Comparing SSALEO as a Scalable Large Scale Global Optimization Algorithm to High-Performance Algorithms for Real-World Constrained Optimization Benchmark. IEEE Access 2022, 10, 95658–95700. [Google Scholar] [CrossRef]
  37. Chen, J.; Luo, Q.; Zhou, Y.; Huang, H. Firefighting Multi Strategy Marine Predators Algorithm for the Early-Stage Forest Fire Rescue Problem. Appl. Intell. 2023, 53, 15496–15515. [Google Scholar] [CrossRef]
  38. Abualigah, L.; Shehab, M.; Alshinwan, M.; Alabool, H. Salp Swarm Algorithm: A Comprehensive Survey. Neural Comput. Appl. 2020, 32, 11195–11215. [Google Scholar] [CrossRef]
  39. Cuevas, E. An Optimization Algorithm Inspired by the States of Matter That Improves the Balance between Exploration and Exploitation. Appl. Intell. 2014, 40, 256–272. [Google Scholar] [CrossRef]
  40. Castelli, M.; Manzoni, L.; Mariot, L.; Nobile, M.S.; Tangherloni, A. Salp Swarm Optimization: A Critical Review. Expert Syst. Appl. 2022, 189, 116029. [Google Scholar] [CrossRef]
  41. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  42. Fathi, R.; Tousi, B.; Galvani, S. Allocation of Renewable Resources with Radial Distribution Network Reconfiguration Using Improved Salp Swarm Algorithm. Appl. Soft Comput. 2023, 132, 109828. [Google Scholar] [CrossRef]
  43. Zhang, H.; Liu, T.; Ye, X.; Asghar, A.; Guoxi, H.; Huiling, L.; Zhifang, C. Differential Evolution-Assisted Salp Swarm Algorithm with Chaotic Structure for Real-World Problems; Springer: London, UK, 2022. [Google Scholar]
  44. El-Shorbagy, M.A.; Eldesoky, I.M.; Basyouni, M.M.; Nassar, I.; El-Refaey, A.M. Chaotic Search-Based Salp Swarm Algorithm for Dealing with System of Nonlinear Equations and Power System Applications. Mathematics 2022, 10, 1368. [Google Scholar] [CrossRef]
  45. Nautiyal, B.; Prakash, R.; Vimal, V.; Liang, G.; Chen, H. Improved Salp Swarm Algorithm with Mutation Schemes for Solving Global Optimization and Engineering Problems. Eng. Comput. 2021, 38, 3927–3949. [Google Scholar] [CrossRef]
  46. Kansal, V.; Dhillon, J.S. Emended Salp Swarm Algorithm for Multiobjective Electric Power Dispatch Problem. Appl. Soft Comput. 2020, 90, 106172. [Google Scholar] [CrossRef]
  47. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble Mutation-Driven Salp Swarm Algorithm with Restart Mechanism: Framework and Fundamental Analysis. Expert Syst. Appl. 2021, 165, 113897. [Google Scholar] [CrossRef]
  48. Wang, C.; Xu, R.-Q.; Ma, L.; Zhao, J.; Wang, L.; Xie, N.-G.; Cheong, K.H. An Efficient Salp Swarm Algorithm Based on Scale-Free Informed Followers with Self-Adaption Weight. Appl. Intell. 2023, 53, 1759–1791. [Google Scholar] [CrossRef]
  49. Ren, H.; Li, J.; Chen, H.; Li, C.Y. Adaptive Levy-Assisted Salp Swarm Algorithm: Analysis and Optimization Case Studies. Math. Comput. Simul. 2021, 181, 380–409. [Google Scholar] [CrossRef]
  50. Tawhid, M.A.; Ibrahim, A.M. Improved Salp Swarm Algorithm Combined with Chaos. Math. Comput. Simul. 2022, 202, 113–148. [Google Scholar] [CrossRef]
  51. Zhang, X.; Wang, S.; Zhao, K.; Wang, Y. A Salp Swarm Algorithm Based on Harris Eagle Foraging Strategy. Math. Comput. Simul. 2023, 203, 858–877. [Google Scholar] [CrossRef]
  52. Neggaz, N.; Ewees, A.A.; Elaziz, M.A.; Mafarja, M. Boosting Salp Swarm Algorithm by Sine Cosine Algorithm and Disrupt Operator for Feature Selection. Expert Syst. Appl. 2020, 145, 113103. [Google Scholar] [CrossRef]
  53. Si, T.; Miranda, P.B.C.; Bhattacharya, D. Novel Enhanced Salp Swarm Algorithms Using Opposition-Based Learning Schemes for Global Optimization Problems. Expert Syst. Appl. 2022, 207, 117961. [Google Scholar] [CrossRef]
  54. Abbassi, A.; Abbassi, R.; Heidari, A.A.; Oliva, D.; Chen, H.; Habib, A.; Jemli, M.; Wang, M. Parameters Identification of Photovoltaic Cell Models Using Enhanced Exploratory Salp Chains-Based Approach. Energy 2020, 198, 117333. [Google Scholar] [CrossRef]
  55. Gupta, S.; Deep, K.; Heidari, A.A.; Moayedi, H.; Chen, H. Harmonized Salp Chain-Built Optimization. Eng. Comput. 2021, 37, 1049–1079. [Google Scholar] [CrossRef]
  56. Viswanathan, G.M.; Afanasyev, V.; Buldyrev, S.V.; Havlin, S.; Da Luz, M.G.E.; Raposo, E.P.; Stanley, H.E. Lévy Flights in Random Searches. Phys. A Stat. Mech. Appl. 2000, 282, 1–12. [Google Scholar] [CrossRef]
  57. Abbassi, R.; Abbassi, A.; Heidari, A.A.; Mirjalili, S. An Efficient Salp Swarm-Inspired Algorithm for Parameters Identification of Photovoltaic Cell Models. Energy Convers. Manag. 2019, 179, 362–372. [Google Scholar] [CrossRef]
  58. Zhang, J.; Wang, Z.; Luo, X. Parameter Estimation for Soil Water Retention Curve Using the Salp Swarm Algorithm. Water 2018, 10, 815. [Google Scholar] [CrossRef]
  59. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  60. Mohamed, A.W.; Sallam, K.M.; Agrawal, P.; Hadi, A.A.; Mohamed, A.K. Evaluating the Performance of Meta-Heuristic Algorithms on CEC 2021 Benchmark Problems. Neural Comput. Appl. 2022, 35, 1493–1517. [Google Scholar] [CrossRef]
  61. Qais, M.H.; Hasanien, H.M.; Alghuwainem, S. Enhanced Salp Swarm Algorithm: Application to Variable Speed Wind Generators. Eng. Appl. Artif. Intell. 2019, 80, 82–96. [Google Scholar] [CrossRef]
  62. Singh, N.; Son, L.H.; Chiclana, F.; Magnot, J.P. A New Fusion of Salp Swarm with Sine Cosine for Optimization of Non-Linear Functions. Eng. Comput. 2020, 36, 185–212. [Google Scholar] [CrossRef]
  63. Tubishat, M.; Ja’afar, S.; Alswaitti, M.; Mirjalili, S.; Idris, N.; Ismail, M.A.; Omar, M.S. Dynamic Salp Swarm Algorithm for Feature Selection. Expert Syst. Appl. 2021, 164, 113873. [Google Scholar] [CrossRef]
  64. Tubishat, M.; Idris, N.; Shuib, L.; Abushariah, M.A.M.; Mirjalili, S. Improved Salp Swarm Algorithm Based on Opposition Based Learning and Novel Local Search Algorithm for Feature Selection. Expert Syst. Appl. 2020, 145, 113122. [Google Scholar] [CrossRef]
  65. Faris, H.; Heidari, A.A.; Al-Zoubi, A.M.; Mafarja, M.; Aljarah, I.; Eshtay, M.; Mirjalili, S. Time-Varying Hierarchical Chains of Salps with Random Weight Networks for Feature Selection. Expert Syst. Appl. 2020, 140, 112898. [Google Scholar] [CrossRef]
  66. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-Based Optimizer: A New Metaheuristic Optimization Algorithm. Inf. Sci. 2020, 540, 131–159. [Google Scholar] [CrossRef]
  67. Ahmadianfar, I.; Gong, W.; Heidari, A.A.; Golilarz, N.A.; Samadi-Koucheksaraee, A.; Chen, H. Gradient-Based Optimization with Ranking Mechanisms for Parameter Identification of Photovoltaic Systems. Energy Rep. 2021, 7, 3979–3997. [Google Scholar] [CrossRef]
  68. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime Mould Algorithm: A New Method for Stochastic Optimization. Futur. Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  69. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  70. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium Optimizer: A Novel Optimization Algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  71. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [Google Scholar] [CrossRef]
  72. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  73. Frank, A.; Asuncion, A. UCI Machine Learning Repository. 2010. Available online: https://archive.ics.uci.edu/ (accessed on 5 January 2024).
Figure 1. LWSSA flowchart.
Figure 2. LWSSA convergence curves vs. some Salp variants during 2500 iterations.
Figure 3. LWSSA convergence curves vs. some basic and advanced algorithms during 2500 iterations.
Figure 4. (a) LWSSA qualitative analysis (CEC2017 functions). (b) LWSSA qualitative analysis (first-dimensional trajectory). (c) LWSSA qualitative analysis (exploration and exploitation phases). (d) LWSSA qualitative analysis (global best fitness, average). # denotes the iteration number.
Figure 5. Suggested methodology.
Figure 6. Convergence curves for the LWSSA and other competitors during 100 iterations on the CVD dataset.
Figure 7. ROC curves of the LWSSA-XGBoost and other competitors.
Figure 8. Performance measures (test accuracy, recall, F1 score, precision, and AUC) for the LWSSA and other methods.
Figure 9. Average CPU time of different algorithms on the CVD dataset.
Table 1. SSA modifications and hybridizations.
Approach and Ref. | Methodology | Problems | Limitations of the Approach
ISSA [45] | Several mutation strategies were implemented. | Optimization problems and engineering applications | Slow convergence and a tendency to fall into poor solutions.
ESSA [46] | Mutation and crossover procedures were incorporated into the SSA. | Multi-objective electric power dispatch problem | Premature convergence on multi-objective optimization problems.
CMSRSSSA [47] | A composite mutation strategy and a restart mechanism were introduced. | Optimization problems and engineering applications | The search method is of limited effectiveness.
E-SSA [48] | Multiple evolutionary strategies were introduced. | Optimization problems and engineering applications | Slow convergence; exploration and exploitation are poorly balanced.
WLSSA [49] | Adaptive weights and the Levy flight mechanism were incorporated. | Engineering applications | Prone to local optima and premature convergence.
CSSA [50] | A chaos mutation strategy was introduced. | Optimization problems | Slow convergence; exploration and exploitation are poorly balanced.
ISSAHF [51] | SSA combined with the HHO optimizer. | Optimization problems and engineering applications | Low search efficiency; susceptible to being trapped in local optima.
CL-SSA [10] | A pairwise competition mechanism was introduced. | Optimization problems and engineering applications | Limited capacity to balance exploration and exploitation.
ISSAFD [52] | The SCA and a disrupt operator were introduced. | Feature selection | Local stagnation during the exploratory phase owing to limited population diversity.
SSA-OBL [53] | An opposition-based learning (OBL) strategy was introduced. | Optimization problems and engineering applications | Limited capacity to balance exploration and exploitation.
OLMSSA [54] | Opposition-based learning approaches were applied. | PV cell parameter extraction | Low accuracy.
m-SSA [55] | Opposition-based learning and the Levy flight method were introduced. | Engineering applications | Trapped in suboptimal solutions with diminished precision.
Table 2. Parameter settings for the competitors.
Algorithm | Parameter Setting
GBO | eps = 0.005 − 3 × random()
EGBO | LC = 0.7, eps = 5 × 10^−20 × random()
SMA | Z = 0.03
PSO | Vmax = 6, wMin = 0.2, wMax = 0.9, c1 = c2 = 2
EO | V = 1, a1 = 2, a2 = 1, GP = 0.5
SADE | probability = 50, cr = 5 m, crm = 0.5, p1 = 0.5
CL-PSO | c_local = 1.2, w_min = 0.4, w_max = 0.9, max_flag = 7
HPSO_TVAC | ci = 0.5, cf = 0.0
ESSA | eps = 0.005 − 3 × random()
HSSASCA | r1 = random × 50, r2 = r3 = c1 − c2 = random
ISSA | cMax = 1, cMin = 0.00003, r = random
ISSA-OBL | max_local_iter = 10, threshold = 0.5
TVSSA | c2 = c3 = random
SSALEO | c2 = c3 = random
SSA-FGWO | c2 = c3 = random
LWSSA | c2 = c3 = random, Mu (mutation factor) = 0.5
Table 3. Description of the characteristics/attributes of the dataset.
| Feature | Name | Description |
|---|---|---|
| X1 | Age | Age of the patient. |
| X2 | Sex | Sex; "M" (male) is encoded as 1 and "F" (female) as 0. |
| X3 | ChestPainType | Type of chest pain: typical angina = 1, atypical angina = 2, non-anginal pain = 3; asymptomatic cases are also recorded. |
| X4 | RestingBP | Resting blood pressure, in millimeters of mercury (mmHg). |
| X5 | Cholesterol | Serum cholesterol concentration, in milligrams per deciliter (mg/dL). |
| X6 | FastingBS | Fasting blood sugar; encoded as 1 if above 120 mg/dL, otherwise 0. |
| X7 | RestingECG | Resting electrocardiogram results: normal = 0, ST-T wave abnormality = 1, probable or definite left ventricular hypertrophy by Estes' criteria = 2. |
| X8 | MaxHR | Maximum heart rate achieved. |
| X9 | ExerciseAngina | Exercise-induced angina; "Y" is encoded as 1 and "N" as 0. |
| X10 | OldPeak | ST-segment depression induced by exercise relative to rest. |
| X11 | ST_Slope | Slope of the peak exercise ST segment: upsloping = 1, flat = 2, downsloping = 3. |
| y | HeartDisease | 1 indicates disease, 0 indicates normal. |
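The categorical attributes in Table 3 must be mapped to their numeric codes before training the classifier. A minimal sketch of that encoding, using hypothetical field names that mirror the table:

```python
# Encodings from Table 3: M/F -> 1/0, Y/N -> 1/0, Up/Flat/Down -> 1/2/3.
SEX = {"M": 1, "F": 0}
ANGINA = {"Y": 1, "N": 0}
SLOPE = {"Up": 1, "Flat": 2, "Down": 3}

def encode_row(row):
    """Map the categorical fields of one patient record to the
    numeric codes of Table 3, leaving numeric fields unchanged."""
    out = dict(row)
    out["Sex"] = SEX[row["Sex"]]
    out["ExerciseAngina"] = ANGINA[row["ExerciseAngina"]]
    out["ST_Slope"] = SLOPE[row["ST_Slope"]]
    return out

sample = {"Age": 54, "Sex": "M", "ExerciseAngina": "N", "ST_Slope": "Flat"}
encoded = encode_row(sample)
# -> {"Age": 54, "Sex": 1, "ExerciseAngina": 0, "ST_Slope": 2}
```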
Table 4. Search space and hyperparameter settings.
| Approach | Parameter | Value |
|---|---|---|
| LWSSA, GWO, WOA, ISSA, ESSA, SSALEO, TVSSA, SSA-FGWO | Population size | 30 |
| | Iteration number | 100 |
| XGBoost | learning_rate | [0.00001, 1] |
| | max_depth | [5, 17] |
| | gamma | [0.0, 200] |
| | colsample_bytree | [0.1, 1] |
| | reg_alpha | [0.0000001, 100] |
| | reg_lambda | [0.0000001, 100] |
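A swarm optimizer searches over continuous vectors, so each candidate solution has to be decoded into the XGBoost hyperparameters of Table 4. A hedged sketch, assuming positions normalized to [0, 1] (the authors' actual encoding may differ; the bounds below are taken directly from the table):

```python
def decode(position):
    """Map a 6-dimensional position in [0, 1] onto the XGBoost
    search space of Table 4 via linear scaling per dimension."""
    keys = ["learning_rate", "max_depth", "gamma",
            "colsample_bytree", "reg_alpha", "reg_lambda"]
    lo = [1e-5, 5, 0.0, 0.1, 1e-7, 1e-7]
    hi = [1.0, 17, 200.0, 1.0, 100.0, 100.0]
    params = {k: l + p * (h - l) for k, p, l, h in zip(keys, position, lo, hi)}
    params["max_depth"] = int(round(params["max_depth"]))  # tree depth must be integral
    return params

params = decode([0.5] * 6)  # the midpoint of every dimension
```

The decoded dictionary can then be passed straight to `xgboost.XGBClassifier(**params)`, and the resulting validation error used as the fitness value the swarm minimizes.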
Table 5. A comparison of the statistical outcomes of the classification test’s accuracy, F1 score, recall, precision, and area under the curve for each optimizer throughout 30 runs on the CVD dataset (on average).
| Approach | Accuracy | Recall | F1 Score | Precision | AUC | Fitness |
|---|---|---|---|---|---|---|
| ESSA-XGBoost | 0.911 | 0.939 | 0.921 | 0.901 | 0.907 | 0.079 |
| GWO-XGBoost | 0.917 | 0.939 | 0.928 | 0.918 | 0.917 | 0.072 |
| ISSA-XGBoost | 0.917 | 0.937 | 0.926 | 0.915 | 0.914 | 0.074 |
| SSA-FGWO-XGBoost | 0.917 | 0.939 | 0.926 | 0.913 | 0.914 | 0.074 |
| SSALEO-XGBoost | 0.918 | 0.931 | 0.927 | 0.914 | 0.915 | 0.073 |
| TVSSA-XGBoost | 0.919 | 0.936 | 0.927 | 0.919 | 0.917 | 0.073 |
| WOA-XGBoost | 0.915 | 0.938 | 0.924 | 0.912 | 0.912 | 0.076 |
| SSA-XGBoost | 0.915 | 0.936 | 0.925 | 0.913 | 0.913 | 0.075 |
| LWSSA-XGBoost (our model) | 0.923 | 0.940 | 0.931 | 0.922 | 0.921 | 0.069 |
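The accuracy, recall, precision, and F1 score reported in Table 5 follow from the standard confusion-matrix counts. A self-contained sketch (the AUC column would additionally require predicted probabilities rather than hard labels):

```python
def binary_metrics(y_true, y_pred):
    """Compute accuracy, recall, precision, and F1 from the
    confusion-matrix counts of a binary classification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

# Tiny worked example: one false positive out of five predictions.
acc, rec, prec, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 1, 0, 1, 1])
# acc = 0.8, rec = 1.0, prec = 0.75, f1 = 6/7
```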
Table 6. Comparison of LWSSA-XGBoost with other classification models.
| ML Model | F1 Score | Precision | Recall | AUC | Accuracy |
|---|---|---|---|---|---|
| LogisticRegression (LR) | 0.901 | 0.871 | 0.900 | 0.911 | 0.885 |
| KNeighborsClassifier (KNN) | 0.725 | 0.725 | 0.725 | 0.748 | 0.695 |
| DecisionTreeClassifier (DT) | 0.774 | 0.782 | 0.778 | 0.753 | 0.755 |
| RandomForestClassifier (RF) | 0.911 | 0.877 | 0.894 | 0.912 | 0.880 |
| SVC | 0.735 | 0.750 | 0.742 | 0.798 | 0.717 |
| LGBMClassifier (LGB) | 0.872 | 0.890 | 0.881 | 0.915 | 0.869 |
| XGBClassifier (XGBoost) | 0.862 | 0.880 | 0.871 | 0.917 | 0.858 |
| LWSSA-XGBoost (our model) | 0.936 | 0.932 | 0.941 | 0.927 | 0.929 |
Bold denotes the best results.
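The baselines in Table 6 are off-the-shelf scikit-learn classifiers. A minimal, illustrative comparison loop on synthetic data (not the actual CVD dataset, and with library defaults rather than the authors' settings):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Synthetic stand-in: 11 features, mirroring the attribute count of Table 3.
X, y = make_classification(n_samples=400, n_features=11, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=42),
}
# Fit each baseline and score it on the held-out split.
scores = {name: f1_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```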