Article

Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems

by
Pavel Trojovský
* and
Mohammad Dehghani
Department of Mathematics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
*
Author to whom correspondence should be addressed.
Biomimetics 2023, 8(2), 149; https://doi.org/10.3390/biomimetics8020149
Submission received: 10 March 2023 / Revised: 30 March 2023 / Accepted: 4 April 2023 / Published: 6 April 2023
(This article belongs to the Special Issue Bio-Inspired Computing: Theories and Applications)

Abstract

This paper presents a new evolutionary-based approach called the Subtraction-Average-Based Optimizer (SABO) for solving optimization problems. The fundamental inspiration of the proposed SABO is to use the subtraction average of searcher agents to update the position of population members in the search space. The different steps of the SABO's implementation are described and then mathematically modeled for optimization tasks. The performance of the proposed SABO approach is tested on fifty-two standard benchmark functions, consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, and the CEC 2017 test suite. The optimization results show that the proposed SABO approach effectively solves optimization problems by balancing exploration and exploitation in its search of the problem-solving space. The results of the SABO are compared with the performance of twelve well-known metaheuristic algorithms. The analysis of the simulation results shows that the proposed SABO approach provides superior results for most of the benchmark functions and a much more competitive performance than its competitor algorithms. Additionally, the proposed approach is implemented for four engineering design problems to evaluate the SABO in handling optimization tasks for real-world applications. The optimization results show that the proposed SABO approach can solve real-world applications and provides more optimal designs than its competitor algorithms.

1. Introduction

Optimization is a comprehensive concept in various fields of science. An optimization problem is a type of problem that has more than one feasible solution. Therefore, the goal of optimization is to find the best solution among all these feasible solutions. From a mathematical point of view, an optimization problem is explained using three parts: decision variables, constraints, and objective function [1]. The problem solving techniques in optimization studies are placed into two groups: deterministic and stochastic approaches [2].
Deterministic approaches, which are placed into two classes, gradient-based and non-gradient-based, are effective in solving linear, convex, simple, low-dimensional, continuous, and differentiable optimization problems [3]. However, increasing the complexity of these optimization problems leads to disruption in the performance of the deterministic approaches, and these methods get stuck in inappropriate local optima. On the other hand, many optimization problems within science and real-world applications have characteristics such as a high dimensionality, a high complexity, a non-convex, non-continuous, non-linear, and non-differentiable objective function, and a non-linear and unknown search space [4]. These optimization task characteristics and the difficulties of deterministic approaches have led researchers to introduce new techniques called stochastic approaches.
Metaheuristic algorithms are one of the most widely used stochastic approaches that effectively solve complex optimization problems. They have efficiency in solving non-linear, non-convex, non-differentiable, high-dimensional, and NP-hard optimization problems. An efficiency in addressing discrete, non-linear, and unknown search spaces, the simplicity of their concepts, their easy implementation, and their non-dependence on the type of problem are among the advantages that have led to the popularity of metaheuristic algorithms [5]. Metaheuristic algorithms are employed in various optimization applications within science, such as index tracking [6], energy [7,8,9,10], protection [11], energy carriers [12,13], and electrical engineering [14,15,16,17,18,19].
The optimization process of these metaheuristic algorithms is based on random search in the problem-solving space and the use of random operators. Initially, candidate solutions are randomly generated. Then, during a repetition-based process and based on the steps of the algorithm, the position of the candidate solutions in the problem-solving space is updated to improve the quality of these initial solutions. In the end, the best candidate solution is available to solve the problem. Using random search in the optimization process does not guarantee the achievement of the global optimum by a metaheuristic algorithm. For this reason, the solutions that are obtained from metaheuristic algorithms are called quasi-optimal [20]. To organize an effective search in the problem-solving space, metaheuristic algorithms should be able to provide and manage search operations well, at both global and local levels. Global search, with the concept of exploration, leads to a comprehensive search in the problem-solving space and an escape from local optimal areas. Local search, with the concept of exploitation, leads to a detailed search around the promising solutions for a convergence towards possible better solutions. Considering that exploration and exploitation pursue opposite goals, the key to the success of metaheuristic algorithms is to create a balance between exploration and exploitation during the search process [21].
On the one hand, the concepts of the random search process and quasi-optimal solutions, and, on the other hand, the desire to achieve better quasi-optimal solutions for these optimization problems, have led to the development of numerous metaheuristic algorithms by researchers.
The main research question is as follows: now that many metaheuristic algorithms have been designed, is there still a need to introduce a newer algorithm to deal with optimization problems? In response to this question, the No Free Lunch (NFL) theorem [22] explains that the high success of a particular algorithm in solving one set of optimization problems does not guarantee the same performance of that algorithm for other optimization problems. There is no guarantee that implementing an algorithm on an optimization problem will be successful. According to the NFL theorem, no particular metaheuristic algorithm is the best optimizer for solving all optimization problems. The NFL theorem motivates researchers to search for better solutions for optimization problems by designing newer metaheuristic algorithms. It has also inspired the authors of this paper to provide more effective solutions for dealing with optimization problems by creating a new metaheuristic algorithm.
The innovation and novelty of this paper lie in the introduction of a new metaheuristic algorithm called the Subtraction-Average-Based Optimizer (SABO) for solving optimization problems in different sciences. The main contributions of this study are as follows:
  • The basic idea behind the design of the SABO is the mathematical concept of the subtraction average of the algorithm's search agents.
  • The steps of the SABO’s implementation are described and its mathematical model is presented.
  • The efficiency of the proposed SABO approach has been evaluated for fifty-two standard benchmark functions.
  • The quality of the SABO’s results has been compared with the performance of twelve well-known algorithms.
  • To evaluate the capability of the SABO in handling real-world applications, the proposed approach is implemented for four engineering design problems.
The continuation of this paper is organized as follows: the literature review is presented in Section 2. The proposed SABO approach is introduced and designed in Section 3. Its simulation studies are presented in Section 4. The performance of the SABO in solving real-world applications is evaluated in Section 5. The conclusions and several research suggestions are provided in Section 6.

2. Literature Review

Metaheuristic algorithms have been developed with inspiration from various natural phenomena, the behaviors of living organisms in nature, concepts of biology, physical sciences, rules of games, and human interactions, etc. In a general classification based on the idea that is employed in their design, metaheuristic algorithms are placed into five groups: swarm-based, evolutionary-based, physics-based, human-based, and game-based approaches.
Swarm-based metaheuristic algorithms are approaches that are inspired by various natural swarming phenomena, such as the natural behaviors of animals, birds, aquatic animals, insects, and other living organisms. Among the most famous swarm-based approaches are particle swarm optimization (PSO) [23], ant colony optimization (ACO) [24], and artificial bee colony (ABC) [25]. PSO is a swarming method that is inspired by the movement strategy of flocks of fish or birds searching for food in nature. ACO is inspired by ant colonies’ ability to choose the shortest path between the food source and the colony site. ABC is derived from the hierarchical strategy of honey bee colonies and their activities in finding food sources. The strategies of providing food through hunting and foraging, migration, and the process of chasing between living organisms are some of the most natural, characteristic swarming ways of behavior, which have been a source of inspiration in the design of numerous swarm-based algorithms, such as the Reptile Search Algorithm (RSA) [26], Orca Predation Algorithm (OPA) [27], Marine Predator Algorithm (MPA) [28], African Vultures Optimization Algorithm (AVOA) [29], Honey Badger Algorithm (HBA) [30], White Shark Optimizer (WSO) [31], Whale Optimization Algorithm (WOA) [32], Tunicate Swarm Algorithm (TSA) [33], Grey Wolf Optimizer (GWO) [34], and Golden Jackal Optimization (GJO) [35].
Evolutionary-based metaheuristic algorithms are approaches that are developed based on simulating the concepts of the biological and genetic sciences. The bases of these methods are evolution strategies (ES) [36], genetic algorithms (GA) [37], and differential evolution (DE) [36]. These methods and all their generalizations are inspired by the concepts of biology, natural selection, Darwin’s theory of evolution, reproduction, and stochastic operators such as selection, crossover, and mutation.
Physics-based metaheuristic algorithms are designed based on modeling phenomena, processes, concepts, and the different forces in physics. Simulated annealing (SA) [38] is one of the most widely used physics-based methods, whose design is inspired by the annealing process of metals. In the annealing process, the metal is first melted under heat, then gradually cooled to achieve the ideal crystal. The modeling of physical forces and the laws of motion is the design origin of physics-based algorithms such as the gravitational search algorithm (GSA) [39] and momentum search algorithm (MSA) [40]. The spring search algorithm is developed based on the modeling of the tensile force, governed by Hooke's law, between bodies that are connected by springs. The GSA is inspired by the gravitational force that masses at different distances exert on each other. The MSA is designed based on the modeling of the force that results from the momentum of balls that hit each other. The phenomenon of the transformations of different physical states in the natural water cycle is employed in the water cycle algorithm's (WCA) [41] design. The concepts of cosmology and black holes have been the primary sources for the design of algorithms such as the Black Hole Algorithm (BHA) [42] and Multi-Verse Optimizer (MVO) [43]. Some of the other physics-based algorithms are: the Equilibrium Optimizer (EO) [44], Thermal Exchange Optimization (TEO) [45], the Archimedes optimization algorithm (AOA) [46], the Lichtenberg Algorithm (LA) [47], Henry Gas Optimization (HGO) [48], Electro-Magnetism Optimization (EMO) [49], and nuclear reaction optimization (NRO) [50].
Human-based metaheuristic algorithms are approaches with designs that are inspired by the interactions, relationships, and thoughts of humans in social and individual life. Teaching–learning-based optimization (TLBO) [51] is one of the most familiar and widely used human-based approaches, whose design is inspired by the scientific interactions between teachers and students in the educational system. The effort of two social classes, the poor and the rich, to improve their economic situations was the main idea behind introducing poor and rich optimization (PRO) [45]. The cooperation and interactions between teammates within a team to achieve their set goal has been the main idea behind the introduction of the Teamwork Optimization Algorithm (TOA) [52]. Collective decision optimization (CDO) [45] is inspired by the decision making behavior of humans, the queuing search algorithm (QSA) [45] mimics human actions when performing a queuing process, the political optimizer (PO) [50] imitates a human political framework, and the Election-Based Optimization Algorithm (EBOA) [45] is based on mimicking the voting process for leader selections. Some of the other human-based algorithms are, e.g., the gaining–sharing knowledge-based algorithm (GSK) [53], Ali Baba and the Forty Thieves (AFT) [54], Driving Training-Based Optimization (DTBO) [4], the Coronavirus herd immunity optimizer (CHIO) [55], and War Strategy Optimization (WSO) [56].
Game-based metaheuristic algorithms are approaches that are introduced based on modeling the rules of different individual and group games and the strategies of their players, coaches, referees, and the other people influencing the games. Football and volleyball are popular team games whose simulations have been employed in the design of the League Championship Algorithm (LCA) [57], the Volleyball Premier League (VPL) [57], and Football-Game-Based Optimization (FGBO) [58].
Mathematics-based metaheuristic algorithms are designed based on mathematical concepts, foundations, and operations. The Sine Cosine Algorithm (SCA) [51] is one of the most familiar mathematics-based approaches, whose design is inspired by the transcendental functions sine and cosine. The Arithmetic Optimization Algorithm (AOA) [51] uses the distribution behavior of the four basic arithmetic operators of mathematics (multiplication, division, subtraction, and addition). Runge Kutta (RUN) [51] uses the logic of slope variations that are computed by the RK method as a promising and logical searching mechanism for global optimization. The Average and Subtraction-Based Optimizer (ASBO) [51] has the main construction idea of computing the averages and subtractions of the best and worst population members for guiding the algorithm population in the problem search space.
To the best of our knowledge from the literature review, no metaheuristic algorithm has been developed based on the mathematical concept of "an average of subtractions of search agents". Therefore, the primary idea of the proposed algorithm is to use an extraordinary average of all the search agents to update the algorithm's population, which can prevent the algorithm's dependence on specific population members. Moreover, by improving the exploration of the algorithm, this can prevent it from getting stuck in local optima. Therefore, to address this research gap in optimization studies, a new metaheuristic algorithm is designed in this paper based on the mathematical concept of a special subtraction arithmetic average, which is discussed in the next section.

3. Subtraction-Average-Based Optimizer

In this section, the theory of the proposed Subtraction-Average-Based Optimizer (SABO) approach is explained, then its mathematical modeling is presented for its employment in optimization tasks.

3.1. Algorithm Initialization

Each optimization problem has a solution space, which is called the search space. The search space is a subset of the space of the dimension, which is equal to the number of the decision variables of the given problem. According to their position in the search space, algorithm searcher agents (i.e., population members) determine the values for the decision variables. Therefore, each search agent contains the information of the decision variables and is mathematically modeled using a vector. The set of search agents together forms the population of the algorithm. From a mathematical point of view, the population of the algorithm can be represented using a matrix, according to Equation (1). The primary positions of the search agents in the search space are randomly initialized using Equation (2).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}, \tag{1}$$
$$x_{i,d} = lb_d + r_{i,d} \cdot (ub_d - lb_d), \quad i = 1, \ldots, N, \; d = 1, \ldots, m, \tag{2}$$
where $X$ is the SABO population matrix, $X_i$ is the $i$th search agent (population member), $x_{i,d}$ is its $d$th dimension in the search space (decision variable), $N$ is the number of search agents, $m$ is the number of decision variables, $r_{i,d}$ is a random number in the interval $[0, 1]$, and $lb_d$ and $ub_d$ are the lower and upper bounds of the $d$th decision variable, respectively.
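The initialization in Equations (1) and (2) can be sketched in a few lines of NumPy; the function name and the example bounds below are illustrative choices for this sketch, not part of the original formulation:

```python
import numpy as np

def init_population(N, m, lb, ub, seed=None):
    """Equation (2): x_{i,d} = lb_d + r_{i,d} * (ub_d - lb_d).

    Returns the N x m population matrix X of Equation (1)."""
    rng = np.random.default_rng(seed)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (m,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (m,))
    return lb + rng.random((N, m)) * (ub - lb)

# Five agents in a three-dimensional search space with box bounds [-10, 10].
X = init_population(N=5, m=3, lb=-10.0, ub=10.0, seed=0)
print(X.shape)  # (5, 3)
```

Each row of the returned matrix is one search agent, i.e., one candidate assignment of the decision variables.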
Each search agent is a candidate solution to the problem that suggests values for the decision variables. Therefore, the objective function of the problem can be evaluated based on each search agent. The evaluated values of the objective function can be represented by a vector $F$, according to Equation (3). Based on the values specified by each population member for the decision variables of the problem, the objective function is evaluated and stored in the vector $F$. Therefore, the number of elements of the vector $F$ is equal to the number of population members $N$.
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}, \tag{3}$$
where $F$ is the vector of the values of the objective function, and $F_i$ is the value of the objective function evaluated based on the $i$th search agent.
The evaluated values for the objective function are a suitable criterion for analyzing the quality of the solutions that are proposed by the search agents. Therefore, the best value that is calculated for the objective function corresponds to the best search agent. Similarly, the worst value that is calculated for the objective function corresponds to the worst search agent. Considering that the position of the search agents in the search space is updated in each iteration, the process of identifying and saving the best search agent continues until the last iteration of the algorithm.

3.2. Mathematical Modelling of SABO

The basic inspiration for the design of the SABO is mathematical concepts such as averages, the differences in the positions of the search agents, and the sign of the difference of two values of the objective function. The idea of using the arithmetic mean location of all the search agents (i.e., the population members of the $t$th iteration), instead of just using, e.g., the location of the best or worst search agent, to update the position of all the search agents (i.e., the construction of all the population members of the $(t+1)$th iteration) is not new, but the SABO's concept of the computation of the arithmetic mean is wholly unique, as it is based on a special operation "$-_v$", called the $v$-subtraction of the search agent $B$ from the search agent $A$, which is defined as follows:
$$A -_v B = \operatorname{sign}(F(A) - F(B)) \cdot (A - v * B), \tag{4}$$
where $v$ is a vector of dimension $m$ whose components are random numbers generated from the set $\{1, 2\}$, the operation "$*$" represents the Hadamard product of two vectors (i.e., each component of the resulting vector is formed by multiplying the corresponding components of the two given vectors), $F(A)$ and $F(B)$ are the values of the objective function of the search agents $A$ and $B$, respectively, and $\operatorname{sign}$ is the signum function. It is worth noting that, due to the use of a random vector $v$ with components from the set $\{1, 2\}$ in the definition of the $v$-subtraction, the result of this operation is any point of a subset of the search space that has a cardinality of $2^{m+1}$.
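As a concrete illustration, the $v$-subtraction of Equation (4) can be implemented as follows; this is a sketch under the definition above, with the function name and the sphere objective chosen only for the example:

```python
import numpy as np

def v_subtraction(A, B, F, seed=None):
    """Equation (4): A -_v B = sign(F(A) - F(B)) * (A - v * B),
    where v has components drawn at random from {1, 2} and "*" is the
    Hadamard (component-wise) product."""
    rng = np.random.default_rng(seed)
    v = rng.integers(1, 3, size=len(A))  # each component is 1 or 2
    return np.sign(F(A) - F(B)) * (A - v * B)

# Example with the sphere objective F(x) = sum(x^2).
F = lambda x: float(np.sum(x**2))
print(v_subtraction(np.array([1.0, 2.0]), np.array([3.0, 4.0]), F, seed=0))
```

Because $v$ is random, repeated calls with the same arguments land on different points of the small finite set described in the text, which is exactly what gives the operator its exploratory character.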
In the proposed SABO, the displacement of any search agent $X_i$ in the search space is calculated by the arithmetic mean of the $v$-subtraction of each search agent $X_j$, $j = 1, 2, \ldots, N$, from the search agent $X_i$. Thus, the new position for each search agent is calculated using Equation (5).
$$X_i^{new} = X_i + r_i * \frac{1}{N} \sum_{j=1}^{N} (X_i -_v X_j), \quad i = 1, 2, \ldots, N, \tag{5}$$
where $X_i^{new}$ is the new proposed position for the $i$th search agent $X_i$, $N$ is the total number of search agents, and $r_i$ is a vector of dimension $m$ whose components have a normal distribution with values from the interval $[0, 1]$.
Then, if this proposed new position leads to an improvement in the value of the objective function, it is acceptable as the new position of the corresponding agent, according to (6).
$$X_i = \begin{cases} X_i^{new}, & F_i^{new} < F_i; \\ X_i, & \text{else}, \end{cases} \tag{6}$$
where $F_i$ and $F_i^{new}$ are the objective function values of the search agents $X_i$ and $X_i^{new}$, respectively.
Clearly, the $v$-subtraction $X_i -_v X_j$ represents a vector $\chi_{ij}$, and we can look at Equation (5) as the motion equation of the search agent $X_i$, since we can rewrite it in the form $X_i^{new} = X_i + r_i * M_i$, where the mean vector $M_i = \frac{1}{N} \sum_{j=1}^{N} (X_i -_v X_j) = \frac{1}{N} \sum_{j=1}^{N} \chi_{ij}$ determines the direction of the movement of the search agent $X_i$ to its new position $X_i^{new}$. The search mechanism based on "the arithmetic mean of the $v$-subtractions", which is presented in Equation (5), has the essential property of realizing both the exploration and exploitation phases to explore the promising areas in the search space. The exploration phase is realized by the operation of "$v$-subtraction" (i.e., the vector $\chi_{ij}$), see Figure 1A, and the exploitation phase by the operation of the "arithmetic mean of the $v$-subtractions" (i.e., the vector $M_i$), see Figure 1B.
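The motion equation for a single agent can be sketched as below; the function name is illustrative, and, as one reading of the definition above, the components of $r_i$ are drawn uniformly from $[0, 1]$ here:

```python
import numpy as np

def propose_position(i, X, fvals, rng):
    """Equation (5) for agent i: X_i^new = X_i + r_i * M_i, where M_i is the
    arithmetic mean of the v-subtractions X_i -_v X_j over all agents j."""
    N, m = X.shape
    v = rng.integers(1, 3, size=(N, m))                         # fresh v per pair (i, j)
    chi = np.sign(fvals[i] - fvals)[:, None] * (X[i] - v * X)   # vectors chi_ij
    M_i = chi.mean(axis=0)                                      # direction of movement
    r_i = rng.random(m)                                         # random vector in [0, 1]^m
    return X[i] + r_i * M_i
```

Note that the $j = i$ term contributes the zero vector, since $\operatorname{sign}(F(X_i) - F(X_i)) = 0$, so an agent is never pulled by its own position.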

3.3. Repetition Process, Pseudocode, and Flowchart of SABO

After updating all the search agents, the first iteration of the algorithm is completed. Then, based on the new values that have been evaluated for the positions of the search agents and the objective function, the algorithm enters its next iteration. In each iteration, the best search agent is stored as the best candidate solution so far. This process of updating the search agents continues until the last iteration of the algorithm, based on Equations (4) to (6). In the end, the best candidate solution that was stored during the iterations of the algorithm is presented as the solution to the problem. The implementation steps of the SABO are shown as a flowchart in Figure 2 and presented as a pseudocode in Algorithm 1.
Algorithm 1. Pseudocode of SABO.
Start SABO.
1. Input problem information: variables, objective function, and constraints.
2. Set SABO population size (N) and iterations (T).
3. Generate the initial search agents' matrix at random using Equation (2): $x_{i,d} \leftarrow lb_d + r_{i,d} \cdot (ub_d - lb_d)$.
4. Evaluate the objective function.
5.   For t = 1 to T
6.     For i = 1 to N
7.    Calculate the new proposed position of the $i$th SABO search agent using Equation (5): $X_i^{new} \leftarrow X_i + r_i * \frac{1}{N} \sum_{j=1}^{N} (X_i -_v X_j)$.
8.    Update the $i$th SABO member using Equation (6): $X_i \leftarrow X_i^{new}$ if $F_i^{new} < F_i$; else keep $X_i$.
9.     end
10.   Save the best candidate solution so far.
11.   end
12. Output best quasi-optimal solution obtained with the SABO.
End SABO.
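Putting Algorithm 1 together, a minimal end-to-end sketch might look as follows; the function name, the clipping of proposals to the box bounds, and the sphere test function are choices made for this illustration rather than details fixed by the pseudocode:

```python
import numpy as np

def sabo(objective, lb, ub, m, N=30, T=200, seed=0):
    """Minimal sketch of Algorithm 1 (SABO) for box-constrained minimization."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((N, m)) * (ub - lb)            # step 3: Equation (2)
    f = np.array([objective(x) for x in X])            # step 4
    best_x, best_f = X[f.argmin()].copy(), f.min()
    for _ in range(T):                                 # step 5
        for i in range(N):                             # step 6
            v = rng.integers(1, 3, size=(N, m))
            M = np.mean(np.sign(f[i] - f)[:, None] * (X[i] - v * X), axis=0)
            x_new = np.clip(X[i] + rng.random(m) * M, lb, ub)  # Equation (5)
            f_new = objective(x_new)
            if f_new < f[i]:                           # step 8: Equation (6)
                X[i], f[i] = x_new, f_new
        if f.min() < best_f:                           # step 10
            best_f, best_x = f.min(), X[f.argmin()].copy()
    return best_x, best_f

best_x, best_f = sabo(lambda x: float(np.sum(x**2)), lb=-10.0, ub=10.0, m=5)
print(best_f)  # the sphere minimum is 0 at the origin
```

Because Equation (6) only ever accepts improvements, the best objective value found is non-increasing over the iterations.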

3.4. Computational Complexity of SABO

In this subsection, the computational complexity of the proposed SABO approach is evaluated. The initialization steps of the SABO for dealing with an optimization problem with $m$ decision variables have a complexity equal to $O(N \cdot m)$, where $N$ is the number of search agents. Furthermore, the process of updating the search agents has a complexity equal to $O(N \cdot m \cdot T)$, where $T$ is the total number of iterations of the algorithm. Therefore, the computational complexity of the SABO is equal to $O(N \cdot m \cdot (1 + T))$.

4. Simulation Studies and Results

In this section, the effectiveness of the proposed SABO approach in solving optimization problems is challenged. For this purpose, a set of fifty-two standard benchmark functions is employed, consisting of seven unimodal functions (F1 to F7), six high-dimensional multimodal functions (F8 to F13), ten fixed-dimensional multimodal functions (F14 to F23), and twenty-nine functions from the CEC 2017 test suite [59]. To analyze the performance quality of the SABO in optimization tasks, the results that were obtained from the proposed approach have been compared with twelve well-known metaheuristic algorithms: GA, PSO, GSA, TLBO, GWO, MVO, WOA, MPA, TSA, RSA, WSO, and AVOA. The values of the control parameters of the competitor algorithms are specified in Table 1.
The proposed SABO and each of the competitor algorithms are implemented for twenty independent runs on the benchmark functions, where each independent run includes 1000 iterations. The optimization results are reported using six indicators: the mean, best, worst, standard deviation (std), median, and rank. The ranking criterion of these metaheuristic algorithms is based on providing a better value for the mean index.
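The six reported indicators for one algorithm on one benchmark can be computed directly from the final objective values of the independent runs; the helper name and the sample numbers below are made up for illustration:

```python
import numpy as np

def summarize(run_results):
    """Mean, best, worst, sample std, and median over independent runs;
    the rank index is assigned afterwards by sorting the algorithms by mean."""
    r = np.asarray(run_results, dtype=float)
    return {"mean": r.mean(), "best": r.min(), "worst": r.max(),
            "std": r.std(ddof=1), "median": float(np.median(r))}

# e.g. final objective values from several independent runs (made-up numbers)
stats = summarize([0.12, 0.09, 0.15, 0.11, 0.10])
print(stats["best"], stats["worst"])  # 0.09 0.15
```

The sample standard deviation (`ddof=1`) is used here since each run is an independent trial; the ranking criterion in the text then compares the `"mean"` entries across algorithms.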

4.1. Evaluation of the Unimodal Functions

The unimodal objective functions, F1 to F7, due to the lack of local optima, are suitable options for analyzing the exploitation ability of the metaheuristic algorithms. The optimization results of the F1 to F7 functions, using the SABO and competitor algorithms, are reported in Table 2.
Based on the obtained results, the proposed SABO, with its high exploitation ability, obtained the global optimum when solving the F1, F2, F3, F4, and F6 functions. Additionally, the SABO is the best optimizer for the F5 and F7 functions. A comparison of the simulation results shows that the SABO, by obtaining the first rank overall, provided a superior performance in solving the unimodal problems F1 to F7 compared to the competitor algorithms.

4.2. Evaluation of the High-Dimensional Multimodal Functions

The high-dimensional multimodal objective functions, F8 to F13, due to having a large number of local optima, are suitable options for evaluating the exploration ability of the metaheuristic algorithms. The results of implementing the SABO and its competitor algorithms on the functions F8 to F13 are reported in Table 3.
Based on the optimization results, the SABO, with its high exploration ability, obtained the global optimum for the F9 and F11 functions. The proposed SABO approach is the best optimizer for solving the functions F8, F10, F12, and F13. The analysis of the simulation results shows that the SABO provided a superior performance in handling the high-dimensional multimodal problems compared to its competitor algorithms.

4.3. Evaluation of the Fixed-Dimensional Multimodal Functions

The fixed-dimensional multimodal objective functions, F14 to F23, have fewer local optima than the functions F8 to F13. These functions are suitable options for evaluating the ability of the metaheuristic algorithms to create a balance between exploration and exploitation. The optimization results for the functions F14 to F23, using the SABO and its competitor algorithms, are presented in Table 4.
Based on the obtained results, the SABO is the best optimizer for the functions F15 and F21. In solving the other benchmark functions of this group, the SABO had a similar situation to some of its competitor algorithms from the point of view of the mean criterion. However, the proposed SABO algorithm performed better in solving these functions by providing better values for the std index. Furthermore, the analysis of the simulation results shows that, compared to the competitor algorithms, the SABO provided a superior performance by balancing the exploration and exploitation in the optimization of the fixed-dimensional multimodal problems.
The performances of the proposed SABO approach and the competitor algorithms in solving the functions F1 to F23 are presented in the form of boxplot diagrams in Figure 3.

4.4. Evaluation of the CEC 2017 Test Suite

In this subsection, the efficiency of the SABO in solving the complex optimization problems from the CEC 2017 test suite is evaluated. The CEC 2017 test suite has thirty benchmark functions, consisting of three unimodal functions, C17-F1 to C17-F3, seven multimodal functions, C17-F4 to C17-F10, ten hybrid functions, C17-F11 to C17-F20, and ten composition functions, C17-F21 to C17-F30. The C17-F2 function was removed from this test suite due to its unstable behavior. The complete information on the CEC 2017 test suite is provided by [59]. The results of implementing the proposed SABO approach and its competitor algorithms on the CEC 2017 test suite are reported in Table 5.
Based on the obtained results, the SABO is the best optimizer for the functions C17-F1, C17-F3 to C17-F23, and C17-F25 to C17-F30. The analysis of the simulation results shows that the proposed SABO approach provided better results for most of the benchmark functions. Overall, by winning the first rank, it provided a superior performance in handling the CEC 2017 test suite compared to the competitor algorithms. The performances of the SABO and its competitor algorithms in solving the CEC 2017 test suite are plotted as boxplot diagrams in Figure 4.

4.5. Statistical Analysis

In this subsection, statistical analyses are presented for the results of the proposed SABO approach and its competing algorithms to determine whether the superiority of the SABO over the competing algorithms is significant from a statistical point of view. For this purpose, the Wilcoxon rank sum test [60] was used, which is a non-parametric statistical analysis that is used to determine the significant difference between the averages of two data samples. In this test, an index called the $p$-value is used to determine the significant difference. The results of implementing the Wilcoxon rank sum test on the performances of the SABO and the competitor algorithms are presented in Table 6.
Based on the simulation results, in cases where the $p$-value was less than 0.05, the proposed SABO approach had a statistically significant superiority over the corresponding metaheuristic algorithm.
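For readers who want to reproduce this analysis without a statistics package, the two-sided rank-sum $p$-value can be approximated as below; this sketch uses the normal approximation and ignores ties, so it is not a drop-in replacement for a full Wilcoxon implementation:

```python
import numpy as np
from math import erfc, sqrt

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    n1, n2 = len(x), len(y)
    both = np.concatenate([x, y])
    ranks = np.empty(len(both))
    ranks[both.argsort()] = np.arange(1, len(both) + 1)  # no tie correction
    W = ranks[:n1].sum()                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0             # mean of W under the null hypothesis
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (W - mu) / sigma
    return erfc(abs(z) / sqrt(2))             # 2 * (1 - Phi(|z|))

# Made-up samples: clearly separated values give a small p-value.
p = rank_sum_p(np.arange(1.0, 11.0), np.arange(11.0, 21.0))
print(p < 0.05)  # True
```

In the paper's setting, `x` and `y` would be the twenty final objective values of the SABO and of one competitor algorithm on the same benchmark function.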

4.6. Advantages and Disadvantages of SABO

The proposed SABO approach is a metaheuristic algorithm that performs the optimization process based on the search power of its population through an iteration-based process. Among the advantages of the SABO is that, except for the number of population members $N$ and the maximum number of iterations $T$, which are common to all such algorithms, it does not have any control parameters. For this reason, it does not need a parameter-setting process. The simplicity of its equations, its easy implementation, and its simple concepts are other advantages of the SABO. The SABO's ability to balance exploration and exploitation during the search process in the problem-solving space is another advantage of this proposed approach. Despite these advantages, the proposed approach also has several disadvantages. The proposed SABO approach belongs to the group of stochastic techniques for solving optimization problems, and for this reason, its first disadvantage is that there is no guarantee of it achieving the global optimum. Another disadvantage of the SABO is that, based on the NFL theorem, it cannot be claimed that the proposed approach performs best in all optimization applications. Another disadvantage of the SABO is that there is always the possibility that newer metaheuristic algorithms will be designed that have a better performance than the proposed approach in handling some optimization tasks.

5. SABO for Real-World Applications

In this section, the capability of the proposed SABO approach in handling optimization tasks in real-world applications is evaluated on four engineering design optimization problems.
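All four design problems below are constrained minimization tasks. This excerpt does not specify the constraint-handling mechanism used during the search, so the following sketch assumes a simple static penalty, one common option for such benchmarks; the paper may well use a different scheme.

```python
def penalized_objective(objective, constraints, x, penalty=1e6):
    """Static-penalty wrapper: every violated inequality g_i(x) <= 0
    adds penalty * max(0, g_i(x)) to the raw objective value."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + penalty * violation

# Toy illustration: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f_toy = lambda x: x * x
g_toy = lambda x: 1.0 - x
feasible_value = penalized_objective(f_toy, [g_toy], 2.0)    # 4.0, no penalty
infeasible_value = penalized_objective(f_toy, [g_toy], 0.0)  # 0.0 + 1e6 * 1.0
```

Under such a wrapper, infeasible candidates receive heavily inflated objective values and are naturally discarded by the population-based search.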

5.1. Pressure Vessel Design Problem

The pressure vessel design is an optimization challenge with the aim of minimizing construction costs. The pressure vessel design schematic is shown in Figure 5.
The mathematical model of the pressure vessel design problem is as follows [61]:
Consider: X = [x1, x2, x3, x4] = [Ts, Th, R, L].
Minimize: f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3.
Subject to:
g1(x) = −x1 + 0.0193 x3 ≤ 0, g2(x) = −x2 + 0.00954 x3 ≤ 0,
g3(x) = −π x3² x4 − (4/3) π x3³ + 1296000 ≤ 0, g4(x) = x4 − 240 ≤ 0,
with
0 ≤ x1, x2 ≤ 100 and 10 ≤ x3, x4 ≤ 200.
The optimization results for the pressure vessel design, using the SABO and its competing algorithms, are reported in Table 7 and Table 8.
Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (0.778027075, 0.384579186, 40.3122837, and 200) and the value of the objective function being equal to 5882.901334. The analysis of the simulation results shows that the SABO more effectively dealt with the pressure vessel design compared to its competing algorithms. The convergence curve of the SABO during the pressure vessel design optimization is drawn in Figure 6.
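As a quick sanity check, the reported design can be substituted into the cost function. The sketch below assumes the commonly cited coefficient 1.7781 for the x2·x3² term (printed as 1.778 in some renderings of the model) and reproduces the reported cost to within rounding of the printed design variables.

```python
import math

def pressure_vessel_cost(x1, x2, x3, x4):
    """Construction-cost objective of the pressure vessel problem [61].
    Assumes the commonly cited coefficient 1.7781 for the x2*x3^2 term."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x1, x2, x3, x4):
    """Inequality constraints, all in the form g_i(x) <= 0."""
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
        x4 - 240.0,
    ]

# Design variables reported for the SABO in the text.
best = (0.778027075, 0.384579186, 40.3122837, 200.0)
cost = pressure_vessel_cost(*best)      # close to the reported 5882.901334
g = pressure_vessel_constraints(*best)  # g1 and g2 are active (near zero)
```

The first two constraints evaluate to essentially zero at the reported design, i.e., the shell and head thicknesses sit exactly on their stress limits.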

5.2. Speed Reducer Design Problem

The speed reducer design is a real-world application within engineering science with the aim of minimizing the weight of the speed reducer. The speed reducer design schematic is shown in Figure 7.
The mathematical model of the speed reducer design problem is as follows [62,63]:
Consider: X = [x1, x2, x3, x4, x5, x6, x7] = [b, m, p, l1, l2, d1, d2].
Minimize: f(x) = 0.7854 x1 x2² (3.3333 x3² + 14.9334 x3 − 43.0934) − 1.508 x1 (x6² + x7²) + 7.4777 (x6³ + x7³) + 0.7854 (x4 x6² + x5 x7²).
Subject to:
g1(x) = 27/(x1 x2² x3) − 1 ≤ 0, g2(x) = 397.5/(x1 x2² x3²) − 1 ≤ 0,
g3(x) = 1.93 x4³/(x2 x3 x6⁴) − 1 ≤ 0, g4(x) = 1.93 x5³/(x2 x3 x7⁴) − 1 ≤ 0,
g5(x) = √((745 x4/(x2 x3))² + 16.9 × 10⁶)/(110 x6³) − 1 ≤ 0,
g6(x) = √((745 x5/(x2 x3))² + 157.5 × 10⁶)/(85 x7³) − 1 ≤ 0,
g7(x) = x2 x3/40 − 1 ≤ 0, g8(x) = 5 x2/x1 − 1 ≤ 0,
g9(x) = x1/(12 x2) − 1 ≤ 0, g10(x) = (1.5 x6 + 1.9)/x4 − 1 ≤ 0,
g11(x) = (1.1 x7 + 1.9)/x5 − 1 ≤ 0,
with
2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.8 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, and 5 ≤ x7 ≤ 5.5.
The results of implementing the proposed SABO approach and its competing algorithms on the speed reducer design problem are presented in Table 9 and Table 10.
Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (3.5, 0.7, 17, 7.3, 7.8, 3.350214666, and 5.28668323) and the value of the objective function being equal to 2996.348165. What can be concluded from the comparison of the simulation results is that the proposed SABO approach provided better results and a superior performance in dealing with the speed reducer design problem compared to the competing algorithms. The convergence curve of the SABO while achieving the optimal solution for the speed reducer design problem is drawn in Figure 8.
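The reported design can likewise be checked against the weight objective; the sketch below reproduces the reported value to within rounding of the printed design variables and confirms that the design respects the stated variable bounds.

```python
def speed_reducer_weight(x):
    """Weight objective of the speed reducer problem [62,63]."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# Design variables reported for the SABO, and the stated variable bounds.
best = (3.5, 0.7, 17.0, 7.3, 7.8, 3.350214666, 5.28668323)
bounds = [(2.6, 3.6), (0.7, 0.8), (17, 28), (7.3, 8.3),
          (7.8, 8.3), (2.9, 3.9), (5, 5.5)]
weight = speed_reducer_weight(best)   # close to the reported 2996.348165
```

Note that x2, x3, x4, and x5 sit on (or at) their bounds in the reported design, which is typical for this benchmark.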

5.3. Welded Beam Design

The welded beam design is a practical engineering optimization problem with the aim of minimizing fabrication cost. The welded beam design schematic is shown in Figure 9.
The mathematical model of the welded beam design problem is as follows [32]:
Consider: X = [x1, x2, x3, x4] = [h, l, t, b].
Minimize: f(x) = 1.10471 x1² x2 + 0.04811 x3 x4 (14.0 + x2).
Subject to:
g1(x) = τ(x) − 13600 ≤ 0, g2(x) = σ(x) − 30000 ≤ 0,
g3(x) = x1 − x4 ≤ 0, g4(x) = 0.10471 x1² + 0.04811 x3 x4 (14 + x2) − 5.0 ≤ 0,
g5(x) = 0.125 − x1 ≤ 0, g6(x) = δ(x) − 0.25 ≤ 0,
g7(x) = 6000 − Pc(x) ≤ 0,
where
τ(x) = √((τ′)² + 2 τ′ τ″ x2/(2R) + (τ″)²), τ′ = 6000/(√2 x1 x2), τ″ = M R/J,
M = 6000 (14 + x2/2), R = √(x2²/4 + ((x1 + x3)/2)²),
J = 2 √2 x1 x2 (x2²/12 + ((x1 + x3)/2)²), σ(x) = 504000/(x4 x3²),
δ(x) = 65856000/((30 × 10⁶) x4 x3³),
Pc(x) = (4.013 (30 × 10⁶) √(x3² x4⁶/36)/196) (1 − (x3/28) √((30 × 10⁶)/(4 (12 × 10⁶)))),
with
0.1 ≤ x1, x4 ≤ 2 and 0.1 ≤ x2, x3 ≤ 10.
The results of using the SABO and its competitor algorithms on the welded beam design problem are reported in Table 11 and Table 12.
Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (0.20572964, 3.470488666, 9.03662391, and 0.20572964) and the value of the objective function being equal to 1.724852309. Comparing these optimization results indicates the superior performance of the SABO over the competing algorithms in optimizing the welded beam design. The SABO convergence curve while providing the solution for the welded beam design problem is drawn in Figure 10.
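The design variables reported above can be substituted into the model; the shear-stress expressions below follow the standard formulation of this benchmark (the primes on τ′ and τ″ do not survive this text’s rendering), under which the stress constraint g1 is active at the reported optimum.

```python
import math

def welded_beam_cost(x1, x2, x3, x4):
    """Fabrication-cost objective of the welded beam problem [32]."""
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def welded_beam_shear_stress(x1, x2, x3, x4):
    """Shear stress tau(x); tau_p and tau_pp are the primed terms."""
    tau_p = 6000.0 / (math.sqrt(2.0) * x1 * x2)
    m = 6000.0 * (14.0 + x2 / 2.0)
    r = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    j = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0) ** 2)
    tau_pp = m * r / j
    # 2*tau_p*tau_pp*x2/(2R) simplifies to tau_p*tau_pp*x2/R.
    return math.sqrt(tau_p**2 + tau_p * tau_pp * x2 / r + tau_pp**2)

# Design variables reported for the SABO in the text.
best = (0.20572964, 3.470488666, 9.03662391, 0.20572964)
cost = welded_beam_cost(*best)          # close to the reported 1.724852309
tau = welded_beam_shear_stress(*best)   # near the 13600 psi limit (g1 active)
```

The shear stress at the reported design sits essentially on the 13,600 psi limit, which is consistent with g1 being a binding constraint at this benchmark’s known optimum.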

5.4. Tension/Compression Spring Design

The tension/compression spring design is an engineering challenge with the aim of minimizing the weight of the tension/compression spring. The tension/compression spring design schematic is shown in Figure 11.
The mathematical model of the tension/compression spring design problem is as follows [32]:
Consider: X = [x1, x2, x3] = [d, D, P].
Minimize: f(x) = (x3 + 2) x2 x1².
Subject to:
g1(x) = 1 − x2³ x3/(71785 x1⁴) ≤ 0, g2(x) = (4 x2² − x1 x2)/(12566 (x2 x1³ − x1⁴)) + 1/(5108 x1²) − 1 ≤ 0,
g3(x) = 1 − 140.45 x1/(x2² x3) ≤ 0, g4(x) = (x1 + x2)/1.5 − 1 ≤ 0,
with
0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, and 2 ≤ x3 ≤ 15.
The results of employing the SABO and the competing algorithms to handle the tension/compression spring design problem are presented in Table 13 and Table 14.
Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (0.051689061, 0.356717736, and 11.28896595) and the value of the objective function being equal to 0.012665233. What is evident from the analysis of the simulation results is that the SABO was more effective in optimizing the tension/compression spring design than the competing algorithms. The SABO convergence curve in reaching the optimal design for the tension/compression spring problem is drawn in Figure 12.
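The reported spring design can be verified in the same way; the constraint expressions below follow the standard formulation of this benchmark, and the reported design is feasible with g1 and g2 essentially active.

```python
def spring_weight(x1, x2, x3):
    """Weight objective of the tension/compression spring problem [32]."""
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x1, x2, x3):
    """Inequality constraints g_i(x) <= 0 (standard formulation)."""
    return [
        1.0 - x2**3 * x3 / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
        + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]

# Design variables reported for the SABO in the text.
best = (0.051689061, 0.356717736, 11.28896595)
weight = spring_weight(*best)       # close to the reported 0.012665233
g = spring_constraints(*best)       # g1 and g2 are near zero (active)
```

Up to rounding of the printed variables, all four constraints are satisfied, with the deflection and shear-stress constraints binding.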

6. Conclusions and Future Works

In this paper, a new metaheuristic algorithm called the Subtraction-Average-Based Optimizer (SABO) was designed. The main idea behind the SABO is to use mathematical concepts and the subtraction average of searcher agents to update the population of the algorithm. The mathematical modeling of the proposed SABO approach was presented for optimization applications. The SABO’s ability to solve optimization problems was evaluated on fifty-two standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, as well as the CEC 2017 test suite. The optimization results indicated the SABO’s ability to balance exploration and exploitation while scanning the search space, providing suitable solutions for the optimization problems. A total of twelve well-known metaheuristic algorithms were employed for comparison with the proposed SABO approach. The comparison of the simulation results showed that the SABO performed better than its competitor algorithms, providing better results for most of the benchmark functions. The implementation of the proposed optimization method on four engineering design problems demonstrated the SABO’s ability to handle optimization tasks in real-world applications.
With the introduction of the proposed SABO approach, several research avenues open up for further study. Designing binary and multi-objective versions of the SABO is among the most promising directions. Employing the SABO to solve optimization problems in various sciences and real-world applications is another suggestion for future studies.

Author Contributions

Conceptualization, P.T.; methodology, M.D.; software, M.D. and P.T.; validation, P.T. and M.D.; formal analysis, M.D.; investigation, P.T.; resources, M.D.; data curation, P.T.; writing—original draft preparation, M.D. and P.T.; writing—review and editing P.T.; visualization, P.T.; supervision, M.D.; project administration, P.T.; funding acquisition, P.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Project of Excellence of the Faculty of Science, University of Hradec Králové, Czech Republic, grant number 2209/2023-2024.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the University of Hradec Králové for its support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sergeyev, Y.D.; Kvasov, D.; Mukhametzhanov, M. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 1–9. [Google Scholar] [CrossRef] [Green Version]
  2. Liberti, L.; Kucherenko, S. Comparison of deterministic and stochastic approaches to global optimization. Int. Trans. Oper. Res. 2005, 12, 263–285. [Google Scholar] [CrossRef]
  3. Koc, I.; Atay, Y.; Babaoglu, I. Discrete tree seed algorithm for urban land readjustment. Eng. Appl. Artif. Intell. 2022, 112, 104783. [Google Scholar] [CrossRef]
  4. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924. [Google Scholar] [CrossRef]
  5. Zeidabadi, F.-A.; Dehghani, M.; Trojovský, P.; Hubálovský, Š.; Leiva, V.; Dhiman, G. Archery Algorithm: A Novel Stochastic Optimization Algorithm for Solving Optimization Problems. Comput. Mater. Contin. 2022, 72, 399–416. [Google Scholar] [CrossRef]
  6. Yuen, M.-C.; Ng, S.-C.; Leung, M.-F.; Che, H. A metaheuristic-based framework for index tracking with practical constraints. Complex Intell. Syst. 2022, 8, 4571–4586. [Google Scholar] [CrossRef]
  7. Dehghani, M.; Montazeri, Z.; Malik, O.P. Energy commitment: A planning of energy carrier based on energy consumption. Electr. Eng. Electromechanics 2019, 2019, 69–72. [Google Scholar] [CrossRef]
  8. Dehghani, M.; Mardaneh, M.; Malik, O.P.; Guerrero, J.M.; Sotelo, C.; Sotelo, D.; Nazari-Heris, M.; Al-Haddad, K.; Ramirez-Mendoza, R.A. Genetic Algorithm for Energy Commitment in a Power System Supplied by Multiple Energy Carriers. Sustainability 2020, 12, 10053. [Google Scholar] [CrossRef]
  9. Dehghani, M.; Mardaneh, M.; Malik, O.P.; Guerrero, J.M.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Matas, J.; Abusorrah, A. Energy Commitment for a Power System Supplied by Multiple Energy Carriers System using Following Optimization Algorithm. Appl. Sci. 2020, 10, 5862. [Google Scholar] [CrossRef]
  10. Rezk, H.; Fathy, A.; Aly, M.; Ibrahim, M.N.F. Energy management control strategy for renewable energy system based on spotted hyena optimizer. Comput. Mater. Contin. 2021, 67, 2271–2281. [Google Scholar] [CrossRef]
  11. Ehsanifar, A.; Dehghani, M.; Allahbakhshi, M. Calculating the leakage inductance for transformer inter-turn fault detection using finite element method. In Proceedings of the 2017 Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2–4 May 2017; pp. 1372–1377. [Google Scholar]
  12. Dehghani, M.; Montazeri, Z.; Ehsanifar, A.; Seifi, A.R.; Ebadi, M.J.; Grechko, O.M. Planning of energy carriers based on final energy consumption using dynamic programming and particle swarm optimization. Electr. Eng. Electromechanics 2018, 2018, 62–71. [Google Scholar] [CrossRef] [Green Version]
  13. Montazeri, Z.; Niknam, T. Energy carriers management based on energy consumption. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 539–543. [Google Scholar]
  14. Dehghani, M.; Montazeri, Z.; Malik, O. Optimal sizing and placement of capacitor banks and distributed generation in distribution systems using spring search algorithm. Int. J. Emerg. Electr. Power Syst. 2020, 21, 20190217. [Google Scholar] [CrossRef] [Green Version]
  15. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M.; Dhiman, G. A New Methodology Called Dice Game Optimizer for Capacitor Placement in Distribution Systems. Electr. Eng. Electromechanics 2020, 2020, 61–64. [Google Scholar] [CrossRef] [Green Version]
  16. Dehbozorgi, S.; Ehsanifar, A.; Montazeri, Z.; Dehghani, M.; Seifi, A. Line loss reduction and voltage profile improvement in radial distribution networks using battery energy storage system. In Proceedings of the 2017 IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI), Tehran, Iran, 22 December 2017; pp. 215–219. [Google Scholar]
  17. Montazeri, Z.; Niknam, T. Optimal utilization of electrical energy from power plants based on final energy consumption using gravitational search algorithm. Electr. Eng. Electromechanics 2018, 2018, 70–73. [Google Scholar] [CrossRef] [Green Version]
  18. Dehghani, M.; Mardaneh, M.; Montazeri, Z.; Ehsanifar, A.; Ebadi, M.J.; Grechko, O.M. Spring search algorithm for simultaneous placement of distributed generation and capacitors. Electr. Eng. Electromechanics 2018, 2018, 68–73. [Google Scholar] [CrossRef]
  19. Premkumar, M.; Sowmya, R.; Jangir, P.; Nisar, K.S.; Aldhaifallah, M. A New Metaheuristic Optimization Algorithms for Brushless Direct Current Wheel Motor Design Problem. CMC-Comput. Mater. Contin. 2021, 67, 2227–2242. [Google Scholar] [CrossRef]
  20. de Armas, J.; Lalla-Ruiz, E.; Tilahun, S.L.; Voß, S. Similarity in metaheuristics: A gentle step towards a comparison methodology. Nat. Comput. 2022, 21, 265–287. [Google Scholar] [CrossRef]
  21. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  22. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  23. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 1944, pp. 1942–1948. [Google Scholar]
  24. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 1996, 26, 29–41. [Google Scholar] [CrossRef] [Green Version]
  25. Karaboga, D.; Basturk, B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. In Proceedings of the International Fuzzy Systems Association World Congress, Cancun, Mexico, 18–21 June 2007; pp. 789–798. [Google Scholar]
  26. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  27. Jiang, Y.; Wu, Q.; Zhu, S.; Zhang, L. Orca predation algorithm: A novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 2022, 188, 116026. [Google Scholar] [CrossRef]
  28. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  29. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  30. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  31. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  32. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  33. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  34. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  35. Chopra, N.; Ansari, M.M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  36. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  37. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  38. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  39. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  40. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2, 1–15. [Google Scholar] [CrossRef]
  41. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  42. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  43. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  44. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  45. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  46. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  47. Pereira, J.L.J.; Francisco, M.B.; Diniz, C.A.; Oliver, G.A.; Cunha Jr, S.S.; Gomes, G.F. Lichtenberg algorithm: A novel hybrid physics-based meta-heuristic for global optimization. Expert Syst. Appl. 2021, 170, 114522. [Google Scholar] [CrossRef]
  48. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  49. Cuevas, E.; Oliva, D.; Zaldivar, D.; Pérez-Cisneros, M.; Sossa, H. Circle detection using electro-magnetism optimization. Inf. Sci. 2012, 182, 40–55. [Google Scholar] [CrossRef] [Green Version]
  50. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access 2019, 7, 66084–66109. [Google Scholar] [CrossRef]
  51. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  52. Dehghani, M.; Trojovský, P. Teamwork Optimization Algorithm: A New Optimization Approach for Function Minimization/Maximization. Sensors 2021, 21, 4567. [Google Scholar] [CrossRef]
  53. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar] [CrossRef]
  54. Braik, M.; Ryalat, M.H.; Al-Zoubi, H. A novel meta-heuristic algorithm for solving numerical optimization problems: Ali Baba and the forty thieves. Neural Comput. Appl. 2022, 34, 409–455. [Google Scholar] [CrossRef]
  55. Al-Betar, M.A.; Alyasseri, Z.A.A.; Awadallah, M.A.; Abu Doush, I. Coronavirus herd immunity optimizer (CHIO). Neural Comput. Appl. 2021, 33, 5011–5042. [Google Scholar] [CrossRef]
  56. Ayyarao, T.L.; RamaKrishna, N.; Elavarasam, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War Strategy Optimization Algorithm: A New Effective Metaheuristic Algorithm for Global Optimization. IEEE Access 2022, 10, 25073–25105. [Google Scholar] [CrossRef]
  57. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185. [Google Scholar] [CrossRef]
  58. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523. [Google Scholar] [CrossRef]
  59. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  60. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: Berlin/Heidelberg, Germany, 1992; pp. 196–202. [Google Scholar]
  61. Kannan, B.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
  62. Gandomi, A.H.; Yang, X.-S. Benchmark problems in structural optimization. In Computational Optimization, Methods and Algorithms; Springer: Berlin/Heidelberg, Germany, 2011; pp. 259–281. [Google Scholar]
  63. Mezura-Montes, E.; Coello, C.A.C. Useful infeasible solutions in engineering optimization with evolutionary algorithms. In Proceedings of the Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, 14–18 November 2005; pp. 652–662. [Google Scholar]
Figure 1. Schematic illustration (for the case m = 2) of the exploration phase by “v-subtraction” (A) and the exploitation phase by “arithmetic mean of the v-subtractions” (B).
Figure 2. Flowchart of SABO.
Figure 3. Boxplot diagrams of the proposed SABO and competitor algorithms for F1 to F23 test functions.
Figure 4. Boxplot diagram of SABO and competitor algorithms on the CEC 2017 test suite.
Figure 5. Schematic of the pressure vessel design.
Figure 6. SABO’s performance convergence curve for the pressure vessel design.
Figure 7. Schematic of the speed reducer design.
Figure 8. SABO’s performance convergence curve for the speed reducer design.
Figure 9. Schematic of the welded beam design.
Figure 10. SABO’s performance convergence curve for the welded beam design.
Figure 11. Schematic of the tension/compression spring design.
Figure 12. SABO’s performance convergence curve for the tension/compression spring.
Table 1. Control parameter values.
GA: Type: real coded; Selection: roulette wheel (proportionate); Crossover: whole arithmetic (probability = 0.8, α ∈ [−0.5, 1.5]); Mutation: Gaussian (probability = 0.05).
PSO: Topology: fully connected; Cognitive and social constants: (C1, C2) = (2, 2); Inertia weight: linear reduction from 0.9 to 0.1; Velocity limit: 10% of dimension range.
GSA: Alpha, G0, Rnorm, Rpower: 20, 100, 2, 1.
TLBO: Teaching factor: TF = round[(1 + rand)], where rand is a random number in [0, 1].
GWO: Convergence parameter a: linear reduction from 2 to 0.
MVO: Wormhole existence probability (WEP): min(WEP) = 0.2 and max(WEP) = 1; Exploitation accuracy over the iterations: p = 6.
WOA: Convergence parameter a: linear reduction from 2 to 0; r: random vector in [0, 1]; l: random number in [−1, 1].
TSA: Pmin, Pmax: 1, 4; c1, c2, c3: random numbers in [0, 1].
MPA: Constant number: p = 0.5; Random vector: R, uniform random numbers in [0, 1]; Fish Aggregating Devices: FADs = 0.2; Binary vector: U = 0 or 1.
RSA: Sensitive parameters: β = 0.01 and α = 0.1; Evolutionary Sense (ES): randomly decreasing values between 2 and −2.
AVOA: L1, L2: 0.8, 0.2; w: 2.5; P1, P2, P3: 0.6, 0.4, 0.6.
WSO: Fmin, Fmax: 0.07, 0.75; τ, a0, a1, a2: 4.125, 6.25, 100, 0.0005.
Table 2. Optimization results of the unimodal functions (F1–F7).
SABO | WSO | AVOA | RSA | MPA | TSA | WOA | MVO | GWO | TLBO | GSA | PSO | GA
F1Mean094.68556006.88 × 10−506.07 × 10−472.4 × 10−1550.1445953.87 × 10−588.14 × 10−759.33 × 10−170.02270230.50201
Best017.42507002.13 × 10−511.01 × 10−501.4 × 10−1670.0712267.24 × 10−615.32 × 10−774.88 × 10−174.43 × 10−617.92696
Worst0439.1736005.24 × 10−496.22 × 10−462.1 × 10−1540.2695776.86 × 10−571.07 × 10−731.92 × 10−160.15814956.92799
Std0101.7604001.26 × 10−491.64 × 10−466.1 × 10−1550.056871.52 × 10−572.34 × 10−743.76 × 10−170.04694210.46286
Median051.91934001.97 × 10−504.04 × 10−483.7 × 10−1580.1223171.34 × 10−591.27 × 10−758.64 × 10−170.00131428.19897
Rank111115629437810
F2Mean01.5751361.2 × 10−26603 × 10−281.11 × 10−285.7 × 10−1030.267177.97 × 10−356.09 × 10−395.22 × 10−80.7310552.788395
Best00.6091542.3 × 10−30103.21 × 10−311.02 × 10−303.8 × 10−1140.1890841.45 × 10−353.25 × 10−403.41 × 10−80.0897191.745356
Worst04.8735822.5 × 10−26501.5 × 10−275.72 × 10−285.3 × 10−1020.4576412.54 × 10−343.59 × 10−387.3 × 10−81.9085973.806556
Std01.088388004.14 × 10−281.6 × 10−281.5 × 10−1020.0758916.78 × 10−359.92 × 10−391.14 × 10−80.5345080.544788
Median01.2025147.1 × 10−28701.17 × 10−283.98 × 10−299 × 10−1080.2530656.46 × 10−352.45 × 10−395.16 × 10−80.7435552.741555
Rank1112176395481012
F3Mean01806.78005.38 × 10−121.31 × 10−1221,771.7814.052164.07 × 10−151.87 × 10−25434.0065643.43022168.983
Best0687.9998002.9 × 10−251.4 × 10−17804.85555.8106563.22 × 10−191.87 × 10−29235.95236.450821424.187
Worst04051.23007.54 × 10−111.89 × 10−1139,997.3429.193544.98 × 10−141.97 × 10−24905.25185210.7713458.935
Std0827.0454001.71 × 10−114.22 × 10−1210,690.486.158911.13 × 10−144.64 × 10−25157.3881118.449639.6914
Median01619.412002.19 × 10−132.52 × 10−1423,134.6512.093812.84 × 10−161.35 × 10−26418.4298284.9122100.7
Rank191154116327810
F4Mean017.761812 × 10−26304.29 × 10−190.00467344.498780.523471.32 × 10−143.14 × 10−300.7637856.4317792.829395
Best011.90369009.98 × 10−203.08 × 10−53.5495290.2906843.65 × 10−161.45 × 10−311.23 × 10−83.4350682.216469
Worst024.619713.6 × 10−26201.39 × 10−180.02600292.119750.8980585.88 × 10−141.33 × 10−294.29988914.350433.992738
Std03.607365003.25 × 10−190.00649530.236590.1635631.68 × 10−143.51 × 10−301.0493332.4392770.466936
Median016.860552.9 × 10−28203.93 × 10−190.00308340.159020.53377.33 × 10−151.86 × 10−300.4027486.0469262.783478
Rank1112146127538109
F5Mean0.19710111,081.321.8720511.5339123.5561428.4299127.20948392.740526.9915526.8866626.3931484.19977595.3854
Best0.003263455.97721.586578.2 × 10−2922.9506626.017126.5335124.7564326.0162825.6154125.8827311.41045228.808
Worst0.8194744,6032.1314528.9901524.8359129.2111528.518382433.59227.9471428.7441327.72119178.52542257.058
Std0.21528814,373.091.54207514.493840.4363220.7792480.474719735.22030.58810.9640540.42636544.9786424.9867
Median0.1116392840.2741.472451.08 × 10−2823.4246628.8263627.019530.3972327.1141326.4539926.2899187.48235475.573
Rank11323498117651012
F6Mean0119.01726.52 × 10−86.3190441.77 × 10−93.6837620.0862020.1532940.6363321.1169671.07 × 10−160.08263734.14746
Best015.051444.73 × 10−93.887976.98 × 10−102.8215920.0026790.0927880.2494030.4874074.96 × 10−175.23 × 10−515.61244
Worst0618.65012.46 × 10−77.4523634.45 × 10−94.790660.4297120.25251.2589561.9073771.92 × 10−161.54909562.76702
Std0131.42715.56 × 10−81.2010269.43 × 10−100.5439260.1103130.0396170.3093660.4092513.8 × 10−170.34526313.54999
Median078.075825.69 × 10−86.9992631.41 × 10−93.5653720.035920.1486840.5018121.0434029.84 × 10−170.00262331.68218
Rank11341131067892512
F7Mean2.38 × 10−64.93 × 10−55.44 × 10−54.88 × 10−50.000560.0053260.0022440.0117540.000910.0015420.0597620.1686450.010589
Best1.74 × 10−74.44 × 10−72.41 × 10−73.72 × 10−60.0002250.0025061.76 × 10−50.0058240.0001170.0002610.0248130.0746260.003032
Worst7.52 × 10−60.0001280.0001520.0002260.0010890.0163720.0108150.0206230.002020.0030070.1026810.2930860.021939
Std1.98 × 10−63.94 × 10−55.06 × 10−55.09 × 10−50.0002580.0033330.0027430.0041620.0005690.0007720.0212430.0606680.004819
Median1.63 × 10−65.39 × 10−53.64 × 10−53.21 × 10−50.000480.0042990.0012540.0105350.0007580.0014790.0565250.1525060.010178
Rank13425981167121310
Sum rank7711620335050603834496475
Mean rank110.142862.2857142.8571434.7142867.1428577.1428578.5714295.4285714.85714379.14285710.71429
Total rank1112348896571012
Table 3. Optimization results of high-dimensional multimodal functions (F8–F13).
SABO | WSO | AVOA | RSA | MPA | TSA | WOA | MVO | GWO | TLBO | GSA | PSO | GA
F8Mean−12,563.1−7037.55−12,433.2−5458.28−9865.61−5913.41−11,247.2−7742.72−6220.73−5521.56−2689.18−6500.96−8421.5
Best−12,569.5−8624.3−12,569.5−5656.04−10,653.2−6776.47−12,569.2−9182.08−8101.01−6451.23−3269.84−7862.41−9681.18
Worst−12,447.1−5826.04−11,896.8−4124.78−9067.64−4968.21−6824.1−6283.06−3450.14−4631.24−2140.43−4751.67−7028.99
Std27.32588840.3862197.7043345.5191478.9808494.67591769.136677.1584896.6881562.8207341.7721885.9265641.2242
Median−12,569.5−7012.04−12,569.5−5531.08−9792.7−5881.29−12,081.1−7915.87−6226.44−5625.4−2654.18−6783.5−8399.11
Rank17212410369111385
F9Mean030.53863000190.20960104.05430.297985025.0729560.3132354.68123
Best015.2214900092.78168043.827320013.9294329.8488323.23239
Worst068.24684000273.04710152.27735.959691041.78816113.426576.90086
Std011.9343800040.82636028.716351.33262706.25611421.6222313.80758
Median030.62966000189.0894099.605790023.87956.2433452.61443
Rank1411181721365
F10Mean8.88 × 10−164.9010828.88 × 10−168.88 × 10−164.44 × 10−151.4528654.26 × 10−150.4514491.6 × 10−144.26 × 10−158.12 × 10−92.7393293.5751
Best8.88 × 10−163.5300498.88 × 10−168.88 × 10−164.44 × 10−157.99 × 10−158.88 × 10−160.0782411.15 × 10−148.88 × 10−165.45 × 10−91.7780352.881962
Worst8.88 × 10−166.8748318.88 × 10−168.88 × 10−164.44 × 10−153.4473157.99 × 10−151.7992022.22 × 10−144.44 × 10−151.23 × 10−84.382634.641967
Std00.9395790001.6542672.44 × 10−150.5747172.79 × 10−157.94 × 10−161.67 × 10−90.7152380.396644
Median8.88 × 10−164.6689238.88 × 10−168.88 × 10−164.44 × 10−152.22 × 10−144.44 × 10−150.1313731.51 × 10−144.44 × 10−157.81 × 10−92.6044213.62958
Rank11011372642589
F11Mean01.708970000.00833400.4122130.00045108.6871650.1567171.473471
Best01.076151000000.191432003.1353550.0014671.288095
Worst05.8729520000.06703100.5355730.009011015.715891.6628391.725859
Std01.06710000.0153600.0999260.00201503.7518210.3603880.123868
Median01.425853000000.430656007.8889060.0605331.447709
Rank1711131521846
F12Mean2.63 × 10−3334.69732.86 × 10−91.3146432.08 × 10−106.2935990.007381.2517540.0355250.083980.1876021.5831410.274894
Best2.13 × 10−340.9388677.56 × 10−100.7201326.03 × 10−110.2166280.0009640.0010160.0129780.0350545.91 × 10−190.0741490.060841
Worst5.73 × 10−33597.71735.14 × 10−91.6297014.75 × 10−1017.714390.0335876.1692180.073550.1708050.6343295.0951040.650842
Std1.57 × 10−33132.70471.34 × 10−90.3301259.3 × 10−114.268370.0075961.626320.0184050.0322150.209511.2511550.138648
Median2.62 × 10−333.6960762.7 × 10−91.5258771.93 × 10−106.015440.0052850.8071430.0290240.0824740.1554931.3810060.264424
Rank11331021249567118
F13Mean6.7 × 10−324239.9341.43 × 10−80.3550.0005672.815670.2756670.029760.4954531.0303910.0076914.6902512.707835
Best1.14 × 10−3412.398911.5 × 10−96.53 × 10−328.3 × 10−102.0296920.0329880.0090022.27 × 10−50.5296445.94 × 10−180.047091.291959
Worst4.34 × 10−3117,963.653.86 × 10−82.90.0113473.8328260.7818050.0797070.8522641.6266380.09888314.576193.940231
Std1.2 × 10−317400.4961.17 × 10−80.8494430.0025370.4798430.2090690.0184770.2190410.291380.0220034.5490490.754476
Median3.38 × 10−3255.134531.1 × 10−89.24 × 10−322.33 × 10−92.8236520.2338890.0232380.5691441.0315071.1 × 10−173.2163892.867222
Rank11327311658941210
Sum rank6541032145117383030404943
Mean rank191.6666675.3333332.3333338.52.8333336.333333556.6666678.1666677.166667
Total rank1122631147558109
Table 4. Optimization results of fixed-dimensional multimodal functions (F14–F23).
| | SABO | WSO | AVOA | RSA | MPA | TSA | WOA | MVO | GWO | TLBO | GSA | PSO | GA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F14 Mean | 0.998004 | 1.146516 | 1.295817 | 3.070575 | 0.998004 | 9.656354 | 1.783898 | 0.998004 | 4.423582 | 1.09721 | 3.999176 | 3.9306 | 1.048667 |
| F14 Best | 0.998004 | 0.998004 | 0.998004 | 0.998031 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 | 0.998004 |
| F14 Worst | 0.998004 | 3.96825 | 2.982105 | 10.76318 | 0.998004 | 17.37441 | 10.76318 | 0.998004 | 10.76318 | 2.982105 | 8.849513 | 15.50382 | 1.992037 |
| F14 Std | 7.2 × 10^−17 | 0.664167 | 0.651946 | 2.170562 | 7.2 × 10^−17 | 5.167271 | 2.233902 | 3.74 × 10^−12 | 4.335554 | 0.443658 | 2.698996 | 4.397024 | 0.222066 |
| F14 Median | 0.998004 | 0.998004 | 0.998004 | 2.982105 | 0.998004 | 12.67051 | 0.998004 | 0.998004 | 2.982105 | 0.998004 | 3.146201 | 2.487068 | 0.998004 |
| F14 Rank | 1 | 5 | 6 | 8 | 1 | 12 | 7 | 2 | 11 | 4 | 10 | 9 | 3 |
| F15 Mean | 0.000307 | 0.000308 | 0.000341 | 0.001134 | 0.000712 | 0.008416 | 0.00061 | 0.004474 | 0.004475 | 0.003436 | 0.002272 | 0.005546 | 0.015388 |
| F15 Best | 0.000307 | 0.000307 | 0.000308 | 0.000538 | 0.000307 | 0.000308 | 0.00031 | 0.000348 | 0.000307 | 0.000308 | 0.001538 | 0.000307 | 0.000782 |
| F15 Worst | 0.000307 | 0.000316 | 0.000527 | 0.00212 | 0.002252 | 0.056621 | 0.001502 | 0.056543 | 0.020363 | 0.020364 | 0.004034 | 0.056543 | 0.066917 |
| F15 Std | 2.29 × 10^−19 | 1.87 × 10^−6 | 6.5 × 10^−5 | 0.000451 | 0.000665 | 0.014443 | 0.000307 | 0.013022 | 0.00816 | 0.007301 | 0.000649 | 0.013444 | 0.016221 |
| F15 Median | 0.000307 | 0.000307 | 0.000309 | 0.00099 | 0.000314 | 0.000627 | 0.000573 | 0.00062 | 0.000308 | 0.000317 | 0.00208 | 0.000444 | 0.014273 |
| F15 Rank | 1 | 2 | 3 | 6 | 5 | 12 | 4 | 9 | 10 | 8 | 7 | 11 | 13 |
| F16 Mean | −1.03163 | −1.03163 | −1.03163 | −1.02933 | −1.03163 | −1.02688 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F16 Best | −1.03163 | −1.03163 | −1.03163 | −1.03162 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F16 Worst | −1.03163 | −1.03163 | −1.03163 | −1 | −1.03163 | −1 | −1.03163 | −1.03163 | −1.03163 | −1.03162 | −1.03163 | −1.03163 | −1.03161 |
| F16 Std | 2.1 × 10^−16 | 8.4 × 10^−8 | 8.82 × 10^−17 | 0.006971 | 1.91 × 10^−16 | 0.011587 | 1.17 × 10^−10 | 4.03 × 10^−8 | 5.64 × 10^−9 | 1.33 × 10^−6 | 1.53 × 10^−16 | 8.82 × 10^−17 | 4.78 × 10^−6 |
| F16 Median | −1.03163 | −1.03163 | −1.03163 | −1.0312 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 | −1.03163 |
| F16 Rank | 1 | 4 | 1 | 8 | 1 | 9 | 2 | 5 | 3 | 6 | 1 | 1 | 7 |
| F17 Mean | 0.397887 | 0.397895 | 0.397887 | 0.409183 | 0.397887 | 0.39792 | 0.397888 | 0.397887 | 0.397888 | 0.400047 | 0.397887 | 0.52702 | 0.466023 |
| F17 Best | 0.397887 | 0.397887 | 0.397887 | 0.397962 | 0.397887 | 0.397888 | 0.397887 | 0.397887 | 0.397887 | 0.3979 | 0.397887 | 0.397887 | 0.397887 |
| F17 Worst | 0.397887 | 0.398048 | 0.397887 | 0.498535 | 0.397887 | 0.398075 | 0.397892 | 0.397888 | 0.397891 | 0.437578 | 0.397887 | 1.130918 | 1.75218 |
| F17 Std | 0 | 3.59 × 10^−5 | 3.98 × 10^−16 | 0.023274 | 0 | 4.3 × 10^−5 | 1.18 × 10^−6 | 1.05 × 10^−7 | 8.45 × 10^−7 | 0.008836 | 0 | 0.254527 | 0.302731 |
| F17 Median | 0.397887 | 0.397887 | 0.397887 | 0.401719 | 0.397887 | 0.397907 | 0.397887 | 0.397887 | 0.397888 | 0.397997 | 0.397887 | 0.397887 | 0.397905 |
| F17 Rank | 1 | 6 | 2 | 9 | 1 | 7 | 5 | 3 | 4 | 8 | 1 | 11 | 10 |
| F18 Mean | 3 | 3 | 3.000003 | 5.742379 | 3 | 12.45003 | 3.000021 | 3 | 3.000012 | 3.000001 | 3 | 3 | 7.302903 |
| F18 Best | 3 | 3 | 3 | 3 | 3 | 3.000001 | 3 | 3 | 3.000001 | 3 | 3 | 3 | 3 |
| F18 Worst | 3 | 3 | 3.000028 | 30.75151 | 3 | 84.00011 | 3.000169 | 3.000001 | 3.000038 | 3.000006 | 3 | 3 | 34.94955 |
| F18 Std | 9.11 × 10^−16 | 4.2 × 10^−16 | 6.42 × 10^−6 | 8.441297 | 1.55 × 10^−15 | 25.19918 | 3.78 × 10^−5 | 3.94 × 10^−7 | 9.8 × 10^−6 | 1.56 × 10^−6 | 3.02 × 10^−15 | 2.58 × 10^−15 | 10.54375 |
| F18 Median | 3 | 3 | 3.000001 | 3.000014 | 3 | 3.00001 | 3.000013 | 3.00001 | 3 | 3 | 3 | 3 | 3.00117 |
| F18 Rank | 1 | 1 | 7 | 10 | 2 | 12 | 9 | 5 | 8 | 6 | 4 | 3 | 11 |
| F19 Mean | −3.86278 | −3.86278 | −3.86278 | −3.82685 | −3.86278 | −3.86274 | −3.86058 | −3.86278 | −3.86072 | −3.86047 | −3.86278 | −3.86278 | −3.86262 |
| F19 Best | −3.86278 | −3.86278 | −3.86278 | −3.86048 | −3.86278 | −3.86278 | −3.86278 | −3.86278 | −3.86278 | −3.86273 | −3.86278 | −3.86278 | −3.86278 |
| F19 Worst | −3.86278 | −3.86278 | −3.86278 | −3.73516 | −3.86278 | −3.86264 | −3.85204 | −3.86278 | −3.8549 | −3.85483 | −3.86278 | −3.86278 | −3.86183 |
| F19 Std | 2.28 × 10^−15 | 2.28 × 10^−15 | 3.89 × 10^−13 | 0.038307 | 2.13 × 10^−15 | 3.9 × 10^−5 | 0.002904 | 1.87 × 10^−7 | 0.003273 | 0.003341 | 1.9 × 10^−15 | 2.06 × 10^−15 | 0.000295 |
| F19 Median | −3.86278 | −3.86278 | −3.86278 | −3.8444 | −3.86278 | −3.86276 | −3.86171 | −3.86278 | −3.86277 | −3.86245 | −3.86278 | −3.86278 | −3.86278 |
| F19 Rank | 1 | 1 | 2 | 9 | 1 | 4 | 7 | 3 | 6 | 8 | 1 | 1 | 5 |
| F20 Mean | −3.322 | −3.29813 | −3.26819 | −2.87036 | −3.28036 | −3.26468 | −3.23382 | −3.25652 | −3.25038 | −3.21772 | −3.322 | −3.29494 | −3.2283 |
| F20 Best | −3.322 | −3.322 | −3.322 | −3.07689 | −3.322 | −3.32164 | −3.32198 | −3.32199 | −3.32199 | −3.31795 | −3.322 | −3.322 | −3.32163 |
| F20 Worst | −3.322 | −3.20308 | −3.1971 | −2.48983 | −3.2031 | −3.1574 | −3.0863 | −3.2028 | −3.085 | −3.02017 | −3.322 | −3.13764 | −2.99723 |
| F20 Std | 4.32 × 10^−16 | 0.048752 | 0.061042 | 0.155393 | 0.058168 | 0.064086 | 0.09432 | 0.060761 | 0.095649 | 0.079637 | 4.08 × 10^−16 | 0.057012 | 0.078203 |
| F20 Median | −3.322 | −3.322 | −3.322 | −2.89614 | −3.32199 | −3.3194 | −3.25911 | −3.20305 | −3.32199 | −3.19712 | −3.322 | −3.322 | −3.23661 |
| F20 Rank | 1 | 2 | 5 | 12 | 4 | 6 | 9 | 7 | 8 | 11 | 1 | 3 | 10 |
| F21 Mean | −10.1532 | −9.77968 | −10.1532 | −5.0552 | −10.1532 | −6.90572 | −8.24533 | −7.99956 | −9.64755 | −5.86981 | −5.69611 | −7.15161 | −6.26023 |
| F21 Best | −10.1532 | −10.1532 | −10.1532 | −5.0552 | −10.1532 | −10.0952 | −10.1532 | −10.1532 | −10.1532 | −8.27119 | −10.1532 | −10.1532 | −9.73855 |
| F21 Worst | −10.1532 | −2.68286 | −10.1532 | −5.0552 | −10.1532 | −2.61113 | −2.6301 | −2.63047 | −5.10027 | −4.17485 | −2.63047 | −2.63047 | −2.38578 |
| F21 Std | 2.61 × 10^−15 | 1.670419 | 2.03 × 10^−14 | 3.1 × 10^−7 | 1.03 × 10^−7 | 3.54775 | 2.711952 | 2.756513 | 1.555113 | 1.573889 | 3.51533 | 3.494447 | 2.711083 |
| F21 Median | −10.1532 | −10.1532 | −10.1532 | −5.0552 | −10.1532 | −9.86587 | −10.1506 | −10.1531 | −10.1528 | −4.88288 | −3.46205 | −10.1532 | −7.06069 |
| F21 Rank | 1 | 4 | 2 | 13 | 3 | 9 | 6 | 7 | 5 | 11 | 12 | 8 | 10 |
| F22 Mean | −10.4029 | −9.73508 | −10.4029 | −5.08767 | −10.4029 | −8.2408 | −7.7929 | −8.9621 | −10.1367 | −7.82384 | −10.4029 | −5.31476 | −7.37187 |
| F22 Best | −10.4029 | −10.4029 | −10.4029 | −5.08767 | −10.4029 | −10.3533 | −10.4029 | −10.4029 | −10.4028 | −10.0684 | −10.4029 | −10.4029 | −9.9828 |
| F22 Worst | −10.4029 | −3.7243 | −10.4029 | −5.08767 | −10.4029 | −1.83234 | −2.76573 | −2.76589 | −5.08766 | −3.63254 | −10.4029 | −2.75193 | −2.67682 |
| F22 Std | 3.65 × 10^−15 | 2.055642 | 3.13 × 10^−14 | 6.98 × 10^−7 | 3.62 × 10^−15 | 3.482703 | 2.946435 | 2.605072 | 1.18842 | 1.928487 | 2.79 × 10^−15 | 3.465308 | 1.916626 |
| F22 Median | −10.4029 | −10.4029 | −10.4029 | −5.08767 | −10.4029 | −10.1656 | −10.0872 | −10.4029 | −10.4025 | −8.44314 | −10.4029 | −3.2451 | −7.86313 |
| F22 Rank | 1 | 5 | 3 | 12 | 1 | 7 | 9 | 6 | 4 | 8 | 2 | 11 | 10 |
| F23 Mean | −10.5364 | −10.1307 | −10.5364 | −5.12847 | −10.5364 | −8.08868 | −8.25778 | −10.266 | −10.5361 | −7.60728 | −10.5364 | −5.56226 | −6.36016 |
| F23 Best | −10.5364 | −10.5364 | −10.5364 | −5.12848 | −10.5364 | −10.4974 | −10.5363 | −10.5364 | −10.5364 | −10.3064 | −10.5364 | −10.5364 | −10.1845 |
| F23 Worst | −10.5364 | −2.42173 | −10.5364 | −5.12847 | −10.5364 | −2.41711 | −1.67653 | −5.12846 | −10.5357 | −3.91631 | −10.5364 | −2.42734 | −2.38229 |
| F23 Std | 2.51 × 10^−15 | 1.814497 | 3.97 × 10^−15 | 1.53 × 10^−6 | 2.85 × 10^−15 | 3.633979 | 3.217511 | 1.209244 | 0.000146 | 1.800721 | 1.63 × 10^−15 | 3.772667 | 2.608634 |
| F23 Median | −10.5364 | −10.5364 | −10.5364 | −5.12847 | −10.5364 | −10.3713 | −10.5338 | −10.5364 | −10.5361 | −8.05319 | −10.5364 | −3.35328 | −6.88826 |
| F23 Rank | 1 | 5 | 2 | 11 | 1 | 7 | 6 | 4 | 3 | 8 | 1 | 10 | 9 |
| Sum rank | 10 | 35 | 33 | 98 | 20 | 85 | 64 | 51 | 62 | 78 | 40 | 68 | 88 |
| Mean rank | 1 | 3.5 | 3.3 | 9.8 | 2 | 8.5 | 6.4 | 5.1 | 6.2 | 7.8 | 4 | 6.8 | 8.8 |
| Total rank | 1 | 4 | 3 | 13 | 2 | 11 | 8 | 6 | 7 | 10 | 5 | 9 | 12 |
Table 5. Optimization results of the CEC 2017 test suite.
| | SABO | WSO | AVOA | RSA | MPA | TSA | WOA | MVO | GWO | TLBO | GSA | PSO | GA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| C17-F1 Mean | 100 | 7046.882 | 1840.86 | 9.82 × 10^9 | 10,040.45 | 1.34 × 10^9 | 7,369,265 | 11,894.28 | 19,142.49 | 1.59 × 10^8 | 332.2243 | 3380.706 | 20,223,617 |
| C17-F1 Best | 100 | 349.4302 | 761.674 | 7.37 × 10^9 | 2897.088 | 11,782,641 | 3,382,781 | 6418.842 | 11,813.99 | 70,630,075 | 110.2765 | 365.6412 | 6,742,684 |
| C17-F1 Worst | 100 | 12,596.3 | 3920.943 | 1.28 × 10^10 | 13,813.56 | 3.84 × 10^9 | 10,346,872 | 16,013.61 | 28,455.49 | 3.83 × 10^8 | 754.7187 | 10,023.59 | 37,058,367 |
| C17-F1 Std | 1.76 × 10^−5 | 6377.11 | 1455.264 | 2.72 × 10^9 | 4862.963 | 1.71 × 10^9 | 3,477,153 | 4126.177 | 7771.845 | 1.5 × 10^8 | 293.2302 | 4472.088 | 12,528,147 |
| C17-F1 Median | 100 | 7620.896 | 1340.412 | 9.56 × 10^9 | 11,725.58 | 7.54 × 10^8 | 7,873,703 | 12,572.34 | 18,150.24 | 90,563,366 | 231.9504 | 1566.796 | 18,546,709 |
| C17-F1 Rank | 1 | 5 | 3 | 13 | 6 | 12 | 9 | 7 | 8 | 11 | 2 | 4 | 10 |
| C17-F3 Mean | 300 | 353.4634 | 336.7463 | 11,593.82 | 303.0258 | 11,561.85 | 1014.038 | 303.025 | 3214.45 | 762.0832 | 11,528.91 | 303 | 15,890.56 |
| C17-F3 Best | 300 | 305.6803 | 303.001 | 7495.814 | 303.0155 | 7333.238 | 506.675 | 303.0096 | 618.2583 | 487.416 | 9410.586 | 303 | 4664.339 |
| C17-F3 Worst | 300 | 398.7643 | 379.2247 | 15,992.88 | 303.034 | 15,496.53 | 1772.56 | 303.0438 | 7596.973 | 941.5065 | 13,035.42 | 303 | 25,128.63 |
| C17-F3 Std | 5.43 × 10^−11 | 51.21056 | 31.53062 | 4721.028 | 0.008085 | 3337.066 | 557.707 | 0.014236 | 3317.442 | 199.0619 | 1570.355 | 4.64 × 10^−14 | 10,690.16 |
| C17-F3 Median | 300 | 354.7045 | 332.3797 | 11,443.29 | 303.0267 | 11,708.81 | 888.4577 | 303.0233 | 2321.285 | 809.7051 | 11,834.82 | 303 | 16,884.64 |
| C17-F3 Rank | 1 | 6 | 5 | 12 | 4 | 11 | 8 | 3 | 9 | 7 | 10 | 2 | 13 |
| C17-F4 Mean | 400.002 | 411.7815 | 422.7311 | 890.6229 | 407.5184 | 588.2896 | 435.164 | 408.4423 | 427.4762 | 413.8849 | 410.3218 | 425.8947 | 419.8654 |
| C17-F4 Best | 400 | 404.0126 | 405.1815 | 592.3616 | 406.3748 | 410.9766 | 410.9211 | 407.3072 | 411.3067 | 413.039 | 409.1493 | 404.1139 | 416.5881 |
| C17-F4 Worst | 400.008 | 428.8649 | 473.5286 | 1532.186 | 408.0158 | 956.8654 | 461.0965 | 409.526 | 475.1348 | 414.4191 | 410.867 | 479.8561 | 423.8752 |
| C17-F4 Std | 0.004024 | 11.75615 | 33.87191 | 431.9196 | 0.767374 | 248.7957 | 27.51261 | 0.907625 | 31.77317 | 0.591742 | 0.789237 | 36.34511 | 3.18921 |
| C17-F4 Median | 400 | 407.1243 | 406.1071 | 718.9722 | 407.8415 | 492.6581 | 434.3191 | 408.4681 | 411.7316 | 414.0409 | 410.6354 | 409.8044 | 419.4992 |
| C17-F4 Rank | 1 | 5 | 8 | 13 | 2 | 12 | 11 | 3 | 10 | 6 | 4 | 9 | 7 |
| C17-F5 Mean | 510.1638 | 524.345 | 547.7084 | 585.8799 | 516.3361 | 577.0081 | 543.3966 | 529.3387 | 523.0358 | 541.9821 | 558.7547 | 535.2866 | 535.4083 |
| C17-F5 Best | 507.9597 | 515.0492 | 526.103 | 566.9905 | 512.0446 | 539.5294 | 519.6511 | 520.078 | 515.4188 | 536.0268 | 545.1657 | 517.0589 | 530.3026 |
| C17-F5 Worst | 513.7912 | 534.1427 | 569.3135 | 597.8354 | 523.1164 | 599.001 | 567.8895 | 548.081 | 533.9026 | 545.8445 | 572.3282 | 561.2747 | 541.6999 |
| C17-F5 Std | 2.551081 | 8.813038 | 17.75945 | 13.40493 | 4.81413 | 26.23584 | 20.0436 | 12.73497 | 8.200079 | 4.315597 | 12.68207 | 20.42117 | 5.160091 |
| C17-F5 Median | 509.4521 | 524.094 | 547.7084 | 589.3468 | 515.0917 | 584.7509 | 543.0228 | 524.5978 | 521.411 | 543.0286 | 558.7624 | 531.4064 | 534.8153 |
| C17-F5 Rank | 1 | 4 | 10 | 13 | 2 | 12 | 9 | 5 | 3 | 8 | 11 | 6 | 7 |
| C17-F6 Mean | 600.0003 | 606.5662 | 621.0782 | 650.7423 | 609.1735 | 633.9321 | 646.477 | 606.5379 | 607.6657 | 613.5003 | 624.6064 | 614.1202 | 617.2135 |
| C17-F6 Best | 600.0001 | 606.0005 | 611.45 | 647.7003 | 607.6625 | 618.2979 | 641.2523 | 606.2218 | 607.1218 | 611.2009 | 615.1481 | 607.4805 | 613.5467 |
| C17-F6 Worst | 600.0004 | 608.0621 | 641.9305 | 654.2638 | 610.8793 | 650.6244 | 657.2451 | 607.0517 | 609.1673 | 617.0857 | 636.0319 | 627.0489 | 621.8526 |
| C17-F6 Std | 0.000133 | 1.001765 | 14.07778 | 2.920617 | 1.432896 | 13.49693 | 7.511607 | 0.38974 | 1.001744 | 2.682933 | 8.726274 | 8.876072 | 3.681783 |
| C17-F6 Median | 600.0004 | 606.1011 | 615.4663 | 650.5025 | 609.0762 | 633.4031 | 643.7053 | 606.4391 | 607.1869 | 612.8573 | 623.6228 | 610.9757 | 616.7274 |
| C17-F6 Rank | 1 | 3 | 9 | 13 | 5 | 11 | 12 | 2 | 4 | 6 | 10 | 7 | 8 |
| C17-F7 Mean | 720.073 | 734.6061 | 761.3859 | 807.9573 | 722.6878 | 832.3335 | 780.3394 | 729.1437 | 737.4493 | 762.9736 | 723.9356 | 741.8665 | 746.3812 |
| C17-F7 Best | 715.4835 | 724.1491 | 735.2345 | 798.2316 | 721.6198 | 802.1454 | 758.5209 | 718.6366 | 729.0264 | 758.0501 | 718.5417 | 733.9806 | 735.012 |
| C17-F7 Worst | 724.3538 | 744.5125 | 809.0966 | 818.5406 | 724.5213 | 871.8311 | 810.1066 | 735.7483 | 757.2095 | 771.8916 | 733.1719 | 754.5434 | 751.4285 |
| C17-F7 Std | 3.767197 | 8.718046 | 32.71951 | 8.520126 | 1.302941 | 28.97113 | 25.43976 | 7.630846 | 13.25034 | 6.207461 | 6.432453 | 9.363563 | 7.701444 |
| C17-F7 Median | 720.2273 | 734.8814 | 750.6063 | 807.5285 | 722.305 | 827.6787 | 776.3651 | 731.0949 | 731.7806 | 760.9763 | 722.0144 | 739.471 | 749.5423 |
| C17-F7 Rank | 1 | 5 | 9 | 12 | 2 | 13 | 11 | 4 | 6 | 10 | 3 | 7 | 8 |
| C17-F8 Mean | 810.4471 | 816.5418 | 839.1887 | 862.826 | 817.8294 | 842.4619 | 847.5424 | 835.8918 | 823.7197 | 849.1191 | 831.6153 | 832.7817 | 826.2435 |
| C17-F8 Best | 807.9597 | 813.0245 | 830.2544 | 848.7521 | 813.055 | 820.5764 | 834.07 | 817.0478 | 818.4748 | 841.6045 | 824.0785 | 825.0834 | 821.9262 |
| C17-F8 Worst | 812.9345 | 820.059 | 846.1864 | 869.0084 | 821.0988 | 864.6593 | 862.7255 | 864.2865 | 830.5745 | 857.9026 | 840.157 | 839.7978 | 834.8115 |
| C17-F8 Std | 2.071168 | 2.900933 | 8.233535 | 9.572994 | 3.420317 | 18.78867 | 11.74116 | 20.07474 | 5.330818 | 8.339208 | 7.762303 | 7.284997 | 5.810643 |
| C17-F8 Median | 810.4471 | 816.5418 | 840.157 | 866.7719 | 818.5819 | 842.306 | 846.6871 | 831.1164 | 822.9147 | 848.4846 | 831.1129 | 833.1227 | 824.1182 |
| C17-F8 Rank | 1 | 2 | 9 | 13 | 3 | 10 | 11 | 8 | 4 | 12 | 6 | 7 | 5 |
| C17-F9 Mean | 900 | 967.6049 | 1195.105 | 1536.685 | 909.5698 | 1285.153 | 1156.894 | 909.1165 | 909.5976 | 921.9334 | 909 | 913.6391 | 914.5897 |
| C17-F9 Best | 900 | 915.8816 | 1042.267 | 1391.599 | 909.0062 | 940.4326 | 1015.748 | 909.0008 | 909.0575 | 916.909 | 909 | 909.9834 | 912.06 |
| C17-F9 Worst | 900 | 1067.154 | 1401.239 | 1802.164 | 910.3272 | 1718.309 | 1474.688 | 909.4606 | 909.9263 | 930.8768 | 909 | 922.4721 | 918.9278 |
| C17-F9 Std | 6.63 × 10^−8 | 71.14703 | 150.3411 | 86.13 | 0.668621 | 339.4266 | 213.3433 | 0.22941 | 0.37996 | 6.138603 | 0 | 5.963657 | 3.106199 |
| C17-F9 Median | 900 | 943.6919 | 1168.458 | 1476.489 | 909.4729 | 1240.936 | 1068.57 | 909.0023 | 909.7033 | 919.974 | 909 | 911.0504 | 913.6856 |
| C17-F9 Rank | 1 | 9 | 11 | 13 | 4 | 12 | 10 | 3 | 5 | 8 | 2 | 6 | 7 |
| C17-F10 Mean | 1332.824 | 1480.565 | 1983.322 | 2488.599 | 1425.781 | 2408.738 | 2489.529 | 1812.192 | 1551.249 | 2278.76 | 2590.355 | 2033.726 | 1784.313 |
| C17-F10 Best | 1148.146 | 1252.93 | 1547.347 | 2301.942 | 1233.409 | 2216.257 | 2144.702 | 1622.704 | 1424.405 | 1855.109 | 2170.808 | 1615.933 | 1457.91 |
| C17-F10 Worst | 1472.816 | 1779.22 | 2200.435 | 2845.624 | 1648.377 | 2797.678 | 2935.651 | 2062.389 | 1738.377 | 2591.357 | 2916.871 | 2473.996 | 2212.947 |
| C17-F10 Std | 135.4067 | 219.8764 | 295.2874 | 242.7758 | 186.9812 | 263.5195 | 332.5847 | 215.659 | 133.4848 | 313.057 | 335.5618 | 352.4818 | 323.6579 |
| C17-F10 Median | 1355.166 | 1445.056 | 2092.754 | 2403.414 | 1410.669 | 2310.509 | 2438.88 | 1781.838 | 1521.107 | 2334.287 | 2636.871 | 2022.487 | 1733.197 |
| C17-F10 Rank | 1 | 3 | 7 | 11 | 2 | 10 | 12 | 6 | 4 | 9 | 13 | 8 | 5 |
| C17-F11 Mean | 1101.951 | 1140.586 | 1228.49 | 2875.633 | 1161.199 | 3483.1 | 1283.95 | 1128.6 | 1140.581 | 1166.078 | 1143.474 | 1158.09 | 2498.755 |
| C17-F11 Best | 1100.106 | 1123.786 | 1148.587 | 2212.166 | 1146.001 | 1239.695 | 1142.653 | 1114.45 | 1125.089 | 1151.928 | 1134.302 | 1145.891 | 1127.276 |
| C17-F11 Worst | 1103.709 | 1166.606 | 1396.414 | 3651.557 | 1172.502 | 5750.695 | 1486.99 | 1150.863 | 1155.683 | 1189.225 | 1150.141 | 1181.344 | 6389.56 |
| C17-F11 Std | 1.471866 | 20.62707 | 113.2366 | 604.7776 | 13.25433 | 2511.116 | 152.6594 | 15.59771 | 13.53721 | 6.09987 | 6.99637 | 15.96627 | 2594.499 |
| C17-F11 Median | 1101.994 | 1135.977 | 1184.48 | 2819.403 | 1163.147 | 3471.005 | 1253.078 | 1124.544 | 1140.775 | 1161.58 | 1144.726 | 1152.563 | 1239.093 |
| C17-F11 Rank | 1 | 4 | 9 | 12 | 7 | 13 | 10 | 2 | 3 | 8 | 5 | 6 | 11 |
| C17-F12 Mean | 1236.271 | 5559.916 | 2,511,388 | 2.22 × 10^8 | 359,149.5 | 273,948.3 | 8,393,395 | 183,943.7 | 1,538,474 | 5,480,627 | 532,151.5 | 8674.094 | 656,172.8 |
| C17-F12 Best | 1200.472 | 2570.303 | 1,341,217 | 72,092,500 | 73,415.04 | 90,941.23 | 1,034,605 | 53,497.94 | 352,069 | 1,466,612 | 87,116.48 | 2632.812 | 189,991.5 |
| C17-F12 Worst | 1320.393 | 8481.589 | 4,328,205 | 3.96 × 10^8 | 782,590.9 | 369,792.4 | 18,669,650 | 405,280 | 2,144,531 | 9,702,434 | 1,174,998 | 14,989.43 | 1,158,399 |
| C17-F12 Std | 56.3488 | 2534.165 | 1,377,237 | 1.54 × 10^8 | 302,391.3 | 128,159.5 | 7,419,726 | 153,869.8 | 829,555.8 | 4,362,453 | 490,016 | 5630.259 | 397,641.8 |
| C17-F12 Median | 1212.11 | 5593.885 | 2,188,065 | 2.1 × 10^8 | 290,296.1 | 317,529.7 | 6,934,662 | 138,498.5 | 1,828,648 | 5,376,730 | 433,245.8 | 8537.066 | 638,150.3 |
| C17-F12 Rank | 1 | 2 | 10 | 13 | 6 | 5 | 12 | 4 | 9 | 11 | 7 | 3 | 8 |
| C17-F13 Mean | 1304.993 | 1344.367 | 8117.328 | 14,715,489 | 8085.628 | 7104.87 | 21,009.24 | 23,961.97 | 13,686.04 | 18,044.16 | 11,826.33 | 7084.083 | 58,962.19 |
| C17-F13 Best | 1300.267 | 1326.243 | 4012.739 | 562,966.1 | 6942.551 | 3435.632 | 8238.108 | 1430.428 | 1754.981 | 17,033.29 | 9912.376 | 2482.891 | 9169.346 |
| C17-F13 Worst | 1307.311 | 1388.356 | 12,303.06 | 38,809,343 | 8625.08 | 9713.084 | 33,878.59 | 32,971.52 | 29,492.54 | 20,513.96 | 13,294.02 | 18,031.96 | 195,190.8 |
| C17-F13 Std | 3.216697 | 29.52882 | 3426.914 | 18,055,269 | 789.815 | 3079.479 | 11,270.67 | 15,062.17 | 12,680.32 | 1662.195 | 1404.497 | 7379.557 | 90,872.29 |
| C17-F13 Median | 1306.198 | 1331.435 | 8076.757 | 9,744,822 | 8387.44 | 7635.382 | 20,960.14 | 30,722.97 | 11,748.31 | 17,314.69 | 12,049.47 | 3910.74 | 15,744.32 |
| C17-F13 Rank | 1 | 2 | 6 | 13 | 5 | 4 | 10 | 11 | 8 | 9 | 7 | 3 | 12 |
| C17-F14 Mean | 1402.488 | 1444.236 | 2289.895 | 4231.887 | 1521.929 | 2518.19 | 2024.33 | 1458.924 | 2213.02 | 1621.184 | 7031.375 | 3146.623 | 13,967.98 |
| C17-F14 Best | 1400.997 | 1436.972 | 1480.121 | 1776.823 | 1465.868 | 1498.802 | 1555.677 | 1451.275 | 1520.805 | 1540.045 | 4068.107 | 1449.32 | 3940.504 |
| C17-F14 Worst | 1404.975 | 1461.52 | 4154.407 | 5069.283 | 1606.812 | 5475.471 | 2659.533 | 1467.14 | 4230.062 | 1654.052 | 9502.841 | 7325.205 | 27,939.94 |
| C17-F14 Std | 1.722924 | 11.57122 | 1251.151 | 1636.771 | 63.50277 | 1971.796 | 461.422 | 8.566456 | 1344.803 | 54.35347 | 2875.255 | 2808.222 | 10,166.57 |
| C17-F14 Median | 1401.99 | 1439.226 | 1762.525 | 5040.72 | 1507.517 | 1549.244 | 1941.054 | 1458.639 | 1550.606 | 1645.319 | 7277.277 | 1905.983 | 11,995.75 |
| C17-F14 Rank | 1 | 2 | 8 | 11 | 4 | 9 | 6 | 3 | 7 | 5 | 12 | 10 | 13 |
| C17-F15 Mean | 1500.735 | 1543.281 | 6262.746 | 11,927.86 | 5086.004 | 8164.765 | 10,192.3 | 3355.232 | 8109.99 | 1742.133 | 22,752.48 | 9655.325 | 4826.672 |
| C17-F15 Best | 1500.42 | 1516.603 | 2595.687 | 7176.916 | 3413.436 | 1622.124 | 2332.897 | 1548.837 | 1626.364 | 1606.337 | 9801.267 | 3004.24 | 1939.079 |
| C17-F15 Worst | 1501.47 | 1576.696 | 10,895 | 18,873.62 | 5725.449 | 23,941.47 | 19,122.93 | 6441.311 | 13,816.11 | 1839.486 | 32,087.12 | 15,949.83 | 8587.927 |
| C17-F15 Std | 0.492915 | 28.38924 | 3620.641 | 5484.158 | 1118.04 | 10,590.53 | 6882.544 | 2332.862 | 5279.614 | 114.4205 | 10,813.18 | 5410.479 | 3305.625 |
| C17-F15 Median | 1500.525 | 1539.914 | 5780.148 | 10,830.45 | 5602.566 | 3547.731 | 9656.686 | 2715.391 | 8498.744 | 1761.354 | 24,560.77 | 9833.616 | 4389.841 |
| C17-F15 Rank | 1 | 2 | 7 | 12 | 6 | 9 | 11 | 4 | 8 | 3 | 13 | 10 | 5 |
| C17-F16 Mean | 1601.491 | 1649.923 | 1821.649 | 2117.474 | 1750.931 | 1902.667 | 1820.004 | 1954.402 | 1757.887 | 1699.423 | 2169.685 | 1967.285 | 1835.615 |
| C17-F16 Best | 1600.891 | 1618.631 | 1746.382 | 1931.724 | 1626.595 | 1703.917 | 1662.416 | 1861.547 | 1676.507 | 1671.229 | 2002.795 | 1857.47 | 1744.823 |
| C17-F16 Worst | 1602.221 | 1740.889 | 1927.75 | 2277.287 | 1870.188 | 2204.793 | 1928.273 | 2089.957 | 1886.671 | 1758.351 | 2292.979 | 2140.708 | 1869.332 |
| C17-F16 Std | 0.559227 | 60.65097 | 88.96065 | 175.2947 | 99.64392 | 225.6027 | 127.3475 | 111.8741 | 96.67836 | 40.60161 | 121.7196 | 131.2036 | 60.58069 |
| C17-F16 Median | 1601.426 | 1620.087 | 1806.232 | 2130.443 | 1753.471 | 1850.979 | 1844.663 | 1933.052 | 1734.186 | 1684.057 | 2191.484 | 1935.481 | 1864.152 |
| C17-F16 Rank | 1 | 2 | 7 | 12 | 4 | 9 | 6 | 10 | 5 | 3 | 13 | 11 | 8 |
| C17-F17 Mean | 1723.586 | 1765.069 | 1823.242 | 1893.54 | 1766.873 | 1888.486 | 1865.574 | 1798.636 | 1761.392 | 1780.403 | 1845.77 | 1773.865 | 1777.796 |
| C17-F17 Best | 1720.806 | 1743.094 | 1789.168 | 1820.852 | 1763.401 | 1786.601 | 1786.288 | 1748.661 | 1747.335 | 1769.398 | 1767.287 | 1766.627 | 1774.404 |
| C17-F17 Worst | 1726.376 | 1777.321 | 1887.088 | 1950.365 | 1769.946 | 2028.162 | 1929.496 | 1821.921 | 1771.594 | 1791.214 | 2058.763 | 1781.135 | 1780.448 |
| C17-F17 Std | 2.675517 | 15.11684 | 46.17573 | 62.33027 | 3.283575 | 109.6898 | 70.76013 | 33.79072 | 10.16705 | 10.80578 | 142.2006 | 6.203813 | 2.734527 |
| C17-F17 Median | 1723.581 | 1769.93 | 1808.356 | 1901.471 | 1767.072 | 1869.592 | 1873.255 | 1811.981 | 1763.319 | 1780.5 | 1778.515 | 1773.849 | 1778.167 |
| C17-F17 Rank | 1 | 3 | 9 | 13 | 4 | 12 | 11 | 8 | 2 | 7 | 10 | 5 | 6 |
| C17-F18 Mean | 1800.837 | 1859.672 | 13,938.97 | 59,571,750 | 3535.842 | 22,783.74 | 11,831.85 | 17,169.54 | 28,339.11 | 31,819.69 | 6760.801 | 23,558.67 | 13,746.07 |
| C17-F18 Best | 1800.382 | 1831.441 | 7463.635 | 1,172,817 | 2282.239 | 7557.654 | 4843.622 | 3049.12 | 6350.854 | 25,849.48 | 2784.765 | 2988.52 | 3590.053 |
| C17-F18 Worst | 1801.23 | 1885.454 | 30,792.79 | 2.31 × 10^8 | 6112.985 | 38,664.33 | 18,444.13 | 28,482.17 | 43,862.39 | 39,830.07 | 11,735.05 | 43,985.86 | 19,884.62 |
| C17-F18 Std | 0.425238 | 23.75548 | 11,291.02 | 1.14 × 10^8 | 1769.324 | 17,002.7 | 5997.927 | 11,063.84 | 16,266.21 | 6430.793 | 3719.078 | 21,164.6 | 7117.107 |
| C17-F18 Median | 1800.869 | 1860.897 | 8749.724 | 2,951,840 | 2874.071 | 22,456.49 | 12,019.82 | 18,573.44 | 31,571.61 | 30,799.6 | 6261.693 | 23,630.15 | 15,754.81 |
| C17-F18 Rank | 1 | 2 | 7 | 13 | 3 | 9 | 5 | 8 | 11 | 12 | 4 | 10 | 6 |
| C17-F19 Mean | 1900.699 | 1926.294 | 17,532.55 | 887,210 | 2810.952 | 69,696.01 | 91,329.02 | 2061.048 | 9918.311 | 4947.729 | 37,009.45 | 26,876.64 | 6556.993 |
| C17-F19 Best | 1900.02 | 1920.536 | 11,947.26 | 261,858.1 | 1991.015 | 1989.788 | 2432.302 | 1934.73 | 1948.214 | 2074.039 | 19,605.58 | 2703.715 | 2258.266 |
| C17-F19 Worst | 1901.018 | 1936.758 | 22,949.02 | 1,457,677 | 4443.733 | 269,653.5 | 302,603.7 | 2186.265 | 14,970.35 | 13,386.75 | 56,169.17 | 83,121.28 | 10,563.68 |
| C17-F19 Std | 0.469791 | 7.217362 | 4527.227 | 569,116 | 1113.981 | 133,312.5 | 141,491.71 | 38.5845 | 5844.748 | 5626.219 | 16,830.68 | 37,918.21 | 3426.682 |
| C17-F19 Median | 1900.878 | 1923.94 | 17,616.97 | 914,652.3 | 2404.53 | 3570.371 | 30,140.06 | 2061.599 | 11,377.34 | 2165.063 | 36,131.53 | 10,840.78 | 6703.014 |
| C17-F19 Rank | 1 | 2 | 8 | 13 | 4 | 11 | 12 | 3 | 7 | 5 | 10 | 9 | 6 |
| C17-F20 Mean | 2012.062 | 2054.989 | 2168.699 | 2315.007 | 2062.648 | 2354.534 | 2225.473 | 2091.528 | 2084.559 | 2097.948 | 2297.8 | 2202.964 | 2074.353 |
| C17-F20 Best | 2000.995 | 2041.52 | 2097.348 | 2255.245 | 2054.132 | 2248.846 | 2216.119 | 2060.579 | 2060.809 | 2086.007 | 2211.927 | 2176.839 | 2058.758 |
| C17-F20 Worst | 2022.277 | 2065.326 | 2289.042 | 2384.066 | 2075.653 | 2525.444 | 2244.977 | 2174.014 | 2122.146 | 2109.232 | 2418.189 | 2237.451 | 2082.762 |
| C17-F20 Std | 10.64651 | 11.14546 | 85.72215 | 54.87906 | 9.167641 | 130.7472 | 13.17126 | 55.19227 | 26.7293 | 9.737469 | 99.95301 | 30.12514 | 11.06745 |
| C17-F20 Median | 2012.488 | 2056.554 | 2144.203 | 2310.359 | 2060.403 | 2321.923 | 2220.398 | 2065.759 | 2077.64 | 2098.276 | 2280.541 | 2198.783 | 2077.947 |
| C17-F20 Rank | 1 | 2 | 8 | 12 | 3 | 13 | 10 | 6 | 5 | 7 | 11 | 9 | 4 |
| C17-F21 Mean | 2227.269 | 2277.628 | 2303.89 | 2376.87 | 2277.037 | 2324.678 | 2318.231 | 2314.057 | 2340.214 | 2330.032 | 2385.654 | 2350.741 | 2328.384 |
| C17-F21 Best | 2200 | 2223.137 | 2225.887 | 2290.952 | 2222.014 | 2228.428 | 2261.055 | 2222.011 | 2328.288 | 2226.03 | 2365.778 | 2342.005 | 2250.78 |
| C17-F21 Worst | 2309.074 | 2332.025 | 2390.796 | 2417.574 | 2333.006 | 2422.937 | 2380.97 | 2356.142 | 2347.139 | 2371.958 | 2397.662 | 2358.925 | 2365.929 |
| C17-F21 Std | 54.53713 | 61.08269 | 90.13716 | 57.92822 | 63.53908 | 103.91 | 63.56486 | 61.89353 | 8.23251 | 69.83162 | 13.80538 | 8.322173 | 52.39665 |
| C17-F21 Median | 2200 | 2277.675 | 2299.439 | 2399.477 | 2276.563 | 2323.674 | 2315.448 | 2339.036 | 2342.715 | 2361.069 | 2389.588 | 2351.016 | 2348.414 |
| C17-F21 Rank | 1 | 3 | 4 | 12 | 2 | 7 | 6 | 5 | 10 | 9 | 13 | 11 | 8 |
| C17-F22 Mean | 2281.681 | 2332.152 | 2330.768 | 3087.845 | 2329.939 | 2718.571 | 2342.642 | 2327.38 | 2334.197 | 2344.222 | 2323 | 2337.385 | 2342.437 |
| C17-F22 Best | 2225.162 | 2327.867 | 2325.755 | 2817.831 | 2325.678 | 2264.186 | 2335.826 | 2326.342 | 2324.894 | 2337.414 | 2323 | 2323.692 | 2339.298 |
| C17-F22 Worst | 2300.816 | 2337.065 | 2342.905 | 3336.034 | 2333.863 | 3182.255 | 2350.877 | 2328.435 | 2348.562 | 2356.955 | 2323 | 2372.308 | 2347.267 |
| C17-F22 Std | 37.67967 | 3.848159 | 8.161645 | 229.8799 | 3.910382 | 445.6178 | 6.479651 | 0.855293 | 11.30366 | 8.937244 | 1.82 × 10^−10 | 23.3312 | 3.403963 |
| C17-F22 Median | 2300.372 | 2331.838 | 2327.207 | 3098.757 | 2330.107 | 2713.922 | 2341.933 | 2327.371 | 2331.666 | 2341.256 | 2323 | 2326.771 | 2341.592 |
| C17-F22 Rank | 1 | 6 | 5 | 13 | 4 | 12 | 10 | 3 | 7 | 11 | 2 | 8 | 9 |
| C17-F23 Mean | 2611.357 | 2579.308 | 2667.971 | 2718.43 | 2637.237 | 2697.98 | 2694.535 | 2643.4 | 2646.047 | 2672.2 | 2786.48 | 2674.092 | 2686.975 |
| C17-F23 Best | 2608.305 | 2323.003 | 2650.779 | 2706.877 | 2629.339 | 2655.067 | 2677.464 | 2633.778 | 2636.214 | 2660.434 | 2715.86 | 2666.127 | 2665.349 |
| C17-F23 Worst | 2616.532 | 2672.945 | 2699.707 | 2741.376 | 2640.62 | 2748.004 | 2714.439 | 2650.725 | 2653.039 | 2682.164 | 2952.25 | 2687.118 | 2696.114 |
| C17-F23 Std | 3.647211 | 70.9946 | 21.67124 | 15.74796 | 5.317777 | 41.05291 | 5.28266 | 7.839137 | 8.395193 | 9.651696 | 111.046 | 9.466652 | 14.63718 |
| C17-F23 Median | 2610.296 | 2660.642 | 2660.699 | 2712.733 | 2639.494 | 2694.424 | 2693.118 | 2644.548 | 2647.467 | 2673.1 | 2738.904 | 2671.562 | 2693.219 |
| C17-F23 Rank | 2 | 1 | 6 | 12 | 3 | 11 | 10 | 4 | 5 | 7 | 13 | 8 | 9 |
| C17-F24 Mean | 2500 | 2654.765 | 2810.582 | 2882.259 | 2762.108 | 2806.98 | 2824.644 | 2770.844 | 2779.167 | 2792.961 | 2608.498 | 2803.53 | 2757.421 |
| C17-F24 Best | 2500 | 2525.074 | 2790.184 | 2864.037 | 2748.444 | 2669.506 | 2794.738 | 2767.398 | 2761.584 | 2788.468 | 2525 | 2784.989 | 2548.505 |
| C17-F24 Worst | 2500 | 2779.422 | 2842.774 | 2898.174 | 2767.624 | 2899.038 | 2857.29 | 2779.739 | 2806.256 | 2797.165 | 2858.992 | 2818.582 | 2845.439 |
| C17-F24 Std | 0.000208 | 142.3299 | 23.04341 | 15.24674 | 9.138715 | 98.25782 | 25.82208 | 5.959332 | 19.47309 | 3.562141 | 66.996 | 14.18211 | 139.8829 |
| C17-F24 Median | 2500 | 2657.281 | 2804.685 | 2883.413 | 2766.182 | 2829.688 | 2823.274 | 2768.118 | 2774.413 | 2793.106 | 2525 | 2805.275 | 2817.869 |
| C17-F24 Rank | 1 | 3 | 11 | 13 | 5 | 10 | 12 | 6 | 7 | 8 | 2 | 9 | 4 |
| C17-F25 Mean | 2897.743 | 2939.205 | 3002.932 | 3379.608 | 2951.006 | 3084.264 | 2923.738 | 2950.689 | 2967.678 | 2962.932 | 2961.387 | 2951.867 | 2983.245 |
| C17-F25 Best | 2897.743 | 2926.72 | 2978.251 | 3289.304 | 2926.744 | 2978.771 | 2784.028 | 2926.876 | 2942.501 | 2943.1 | 2926.92 | 2927.701 | 2970.614 |
| C17-F25 Worst | 2897.743 | 2974.732 | 3054.607 | 3454.308 | 2974.394 | 3341.814 | 2987.511 | 2975.572 | 2976.797 | 2982.504 | 2972.89 | 2976.076 | 2993.868 |
| C17-F25 Std | 3.36 × 10^−8 | 23.70208 | 35.87615 | 70.12321 | 26.94928 | 173.2479 | 96.18849 | 27.35035 | 16.79278 | 20.94847 | 22.97801 | 27.38743 | 9.901458 |
| C17-F25 Median | 2897.743 | 2927.683 | 2989.435 | 3387.41 | 2951.444 | 3008.235 | 2961.706 | 2950.154 | 2975.707 | 2963.062 | 2972.869 | 2951.845 | 2984.249 |
| C17-F25 Rank | 1 | 3 | 11 | 13 | 5 | 12 | 2 | 4 | 9 | 8 | 7 | 6 | 10 |
| C17-F26 Mean | 2825.003 | 2972.504 | 3385.253 | 4092.298 | 2831.005 | 3908.684 | 4085.422 | 3257.318 | 3200.927 | 3262.013 | 3220.902 | 2933.409 | 2925.977 |
| C17-F26 Best | 2800.002 | 2828.894 | 3084.199 | 3766.19 | 2830.358 | 3507.432 | 3146.636 | 2929.126 | 2929.209 | 2942.09 | 2828 | 2828 | 2719.842 |
| C17-F26 Worst | 2900 | 3164.947 | 4136.214 | 4330.297 | 2831.923 | 4302.23 | 4744.193 | 4241.866 | 3841.664 | 3988.718 | 4399.609 | 3047.636 | 3156.644 |
| C17-F26 Std | 49.99818 | 169.2791 | 505.5009 | 237.3648 | 0.672201 | 432.8631 | 683.3607 | 656.3655 | 429.2242 | 487.656 | 785.8045 | 89.81023 | 221.2334 |
| C17-F26 Median | 2800.005 | 2948.087 | 3160.299 | 4136.353 | 2830.87 | 3912.537 | 4225.429 | 2929.14 | 3016.417 | 3058.621 | 2828 | 2929 | 2913.711 |
| C17-F26 Rank | 1 | 5 | 10 | 13 | 2 | 11 | 12 | 8 | 6 | 9 | 7 | 4 | 3 |
| C17-F27 Mean | 3089.302 | 3187.335 | 3131.406 | 3230.803 | 3126.61 | 3213.757 | 3170.819 | 3123.865 | 3127.21 | 3148.188 | 3285.324 | 3170.977 | 3196.967 |
| C17-F27 Best | 3088.978 | 3138.891 | 3125.805 | 3163.998 | 3124.072 | 3185.351 | 3123.898 | 3120.635 | 3123.737 | 3126.785 | 3274.05 | 3128.643 | 3152.806 |
| C17-F27 Worst | 3089.706 | 3220.632 | 3135.892 | 3360.837 | 3127.949 | 3243.35 | 3278.907 | 3126.25 | 3134.393 | 3209.196 | 3292.888 | 3222.369 | 3260.97 |
| C17-F27 Std | 0.366278 | 34.63162 | 5.15301 | 88.33327 | 1.727664 | 28.24835 | 72.58966 | 2.388077 | 4.967803 | 40.68205 | 8.30382 | 39.41506 | 45.7305 |
| C17-F27 Median | 3089.262 | 3194.909 | 3131.964 | 3199.189 | 3127.209 | 3213.163 | 3140.235 | 3124.288 | 3125.354 | 3128.386 | 3287.178 | 3166.448 | 3187.046 |
| C17-F27 Rank | 1 | 9 | 5 | 12 | 3 | 11 | 7 | 2 | 4 | 6 | 13 | 8 | 10 |
| C17-F28 Mean | 3100 | 3209.471 | 3291.862 | 3805.864 | 3297.406 | 3485.689 | 3402.419 | 3231.942 | 3444.978 | 3375.163 | 3508.536 | 3354.094 | 3289.755 |
| C17-F28 Best | 3100 | 3131.001 | 3131 | 3641.298 | 3131.318 | 3249.761 | 3206.373 | 3131.128 | 3417.601 | 3254.631 | 3448 | 3214.648 | 3179.683 |
| C17-F28 Worst | 3100 | 3249.505 | 3445.94 | 4085.063 | 3480.946 | 3689.471 | 3510.723 | 3417.587 | 3468.617 | 3446.202 | 3545.106 | 3446.173 | 3579.424 |
| C17-F28 Std | 7.84 × 10^−5 | 55.48586 | 132.2739 | 199.9835 | 192.2555 | 180.4896 | 134.1804 | 135.448 | 20.93876 | 91.4386 | 42.94438 | 104.9692 | 193.8496 |
| C17-F28 Median | 3100 | 3228.689 | 3295.254 | 3748.547 | 3288.68 | 3501.763 | 3446.29 | 3189.526 | 3446.846 | 3399.909 | 3520.519 | 3377.778 | 3199.956 |
| C17-F28 Rank | 1 | 2 | 5 | 13 | 6 | 11 | 9 | 3 | 10 | 8 | 12 | 7 | 4 |
| C17-F29 Mean | 3145.635 | 3198.911 | 3328.492 | 3371.819 | 3223.14 | 3336.276 | 3401.823 | 3295.223 | 3208.039 | 3250.921 | 3362.277 | 3308.944 | 3277.638 |
| C17-F29 Best | 3136.956 | 3178.603 | 3221.613 | 3307.382 | 3211.292 | 3249.631 | 3291.368 | 3232.601 | 3191.878 | 3200.04 | 3269.198 | 3202.464 | 3224.531 |
| C17-F29 Worst | 3153.72 | 3210.792 | 3490.816 | 3407.078 | 3243.25 | 3490.938 | 3542.404 | 3343.345 | 3228.417 | 3275.391 | 3560.442 | 3398.811 | 3331.133 |
| C17-F29 Std | 8.915623 | 14.73388 | 126.0596 | 44.12179 | 14.06066 | 109.1835 | 104.2167 | 52.40041 | 15.19086 | 35.35944 | 133.6847 | 89.23126 | 44.81565 |
| C17-F29 Median | 3145.932 | 3203.125 | 3300.77 | 3386.407 | 3219.01 | 3302.268 | 3386.76 | 3302.473 | 3205.931 | 3264.127 | 3309.733 | 3317.25 | 3277.445 |
| C17-F29 Rank | 1 | 2 | 9 | 12 | 4 | 10 | 13 | 7 | 3 | 5 | 11 | 8 | 6 |
| C17-F30 Mean | 3399.757 | 10,068.96 | 287,700.2 | 11,930,005 | 54,083.49 | 811,693.2 | 1,282,284 | 394,299.5 | 786,216.7 | 65,324.3 | 934,558 | 418,537.2 | 1,651,370 |
| C17-F30 Best | 3395.483 | 4233.284 | 30,695.17 | 1,938,209 | 32,506.21 | 20,261.82 | 30,760.6 | 16,058.25 | 6350.293 | 31,434.05 | 576,418.6 | 6669.239 | 568,362.1 |
| C17-F30 Worst | 3406.359 | 23,879.15 | 684,228 | 31,239,242 | 77,137.33 | 1,695,201 | 3,049,926 | 1,493,937 | 2,907,196 | 109,796.9 | 1,285,469 | 830,093.5 | 3,762,394 |
| C17-F30 Std | 4.745776 | 9332.561 | 317,094.7 | 13,174,546 | 24,872.51 | 910,839.9 | 1,321,431 | 733,263.4 | 1,416,501 | 38,272.74 | 289,499.1 | 474,595.8 | 1,505,564 |
| C17-F30 Median | 3398.593 | 6081.714 | 217,938.8 | 7,271,284 | 53,345.21 | 765,654.9 | 924,224.8 | 33,601.32 | 115,660.1 | 60,033.11 | 938,172.3 | 418,693 | 1,137,363 |
| C17-F30 Rank | 1 | 2 | 5 | 13 | 3 | 9 | 11 | 6 | 8 | 4 | 10 | 7 | 12 |
| Sum rank | 30 | 101 | 221 | 363 | 113 | 301 | 278 | 148 | 187 | 222 | 243 | 208 | 224 |
| Mean rank | 1.034483 | 3.482759 | 7.62069 | 12.51724 | 3.896552 | 10.37931 | 9.586207 | 5.103448 | 6.448276 | 7.655172 | 8.37931 | 7.172414 | 7.724138 |
| Total rank | 1 | 2 | 7 | 13 | 3 | 12 | 11 | 4 | 5 | 8 | 10 | 6 | 9 |
Table 6. Wilcoxon rank sum test results.
| Compared Algorithm | Unimodal | High-Dimensional | Fixed-Dimensional | CEC 2017 |
| --- | --- | --- | --- | --- |
| SABO vs. WSO | 1.08 × 10^−24 | 1.97 × 10^−21 | 0.000113 | 1.72 × 10^−19 |
| SABO vs. AVOA | 0.057751 | 1.71 × 10^−5 | 1.27 × 10^−18 | 1.97 × 10^−21 |
| SABO vs. RSA | 1.11 × 10^−5 | 5.15 × 10^−11 | 1.44 × 10^−34 | 1.97 × 10^−21 |
| SABO vs. MPA | 1.01 × 10^−24 | 6.98 × 10^−15 | 1.02 × 10^−8 | 1.22 × 10^−18 |
| SABO vs. TSA | 1.01 × 10^−24 | 1.28 × 10^−19 | 1.44 × 10^−34 | 2.41 × 10^−21 |
| SABO vs. WOA | 1.01 × 10^−24 | 5.16 × 10^−14 | 1.44 × 10^−34 | 5.93 × 10^−21 |
| SABO vs. MVO | 1.01 × 10^−24 | 1.97 × 10^−21 | 1.44 × 10^−34 | 1.16 × 10^−20 |
| SABO vs. GWO | 1.01 × 10^−24 | 7.58 × 10^−16 | 1.44 × 10^−34 | 1.97 × 10^−21 |
| SABO vs. TLBO | 1.01 × 10^−24 | 1.04 × 10^−14 | 1.44 × 10^−34 | 7.05 × 10^−21 |
| SABO vs. GSA | 1.01 × 10^−24 | 1.97 × 10^−21 | 1.46 × 10^−13 | 2.13 × 10^−21 |
| SABO vs. PSO | 1.01 × 10^−24 | 1.97 × 10^−21 | 1.2 × 10^−16 | 1.97 × 10^−21 |
| SABO vs. GA | 1.01 × 10^−24 | 1.97 × 10^−21 | 1.44 × 10^−34 | 2.09 × 10^−20 |
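The p-values in Table 6 are obtained from the Wilcoxon rank sum test on the per-run results of SABO versus each competitor. For readers who want to reproduce this style of comparison without external packages, a minimal self-contained sketch of the test (two-sided, normal approximation with tie-averaged ranks; this is not the authors' code) is:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank sum test via the normal approximation,
    using average ranks for tied values."""
    pooled = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # average rank of the tied block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2              # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Two clearly separated samples give a small p-value:
print(rank_sum_p([0.10, 0.12, 0.15, 0.18, 0.20],
                 [1.10, 1.20, 1.25, 1.30, 1.40]) < 0.05)  # True
```

A p-value below 0.05 indicates a statistically significant difference between the two compared algorithms, which is how the table entries above are interpreted.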
Table 7. Performance of optimization algorithms for the pressure vessel design problem.
| Algorithm | Ts | Th | R | L | Optimum Cost |
| --- | --- | --- | --- | --- | --- |
| SABO | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901 |
| WSO | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901 |
| AVOA | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901 |
| RSA | 0.802584 | 0.844696 | 40.70183 | 200 | 7482.575 |
| MPA | 0.778027 | 0.384579 | 40.31228 | 200 | 5882.901 |
| TSA | 0.779501 | 0.39248 | 40.31357 | 200 | 5916.63 |
| WOA | 0.829716 | 0.514131 | 41.05723 | 189.8821 | 6541.663 |
| MVO | 0.857853 | 0.42615 | 44.43659 | 149.6475 | 6044.237 |
| GWO | 0.780028 | 0.386828 | 40.41563 | 198.6128 | 5891.034 |
| TLBO | 1.203325 | 1.486222 | 58.51898 | 69.96653 | 14,118.06 |
| GSA | 1.227629 | 0.698407 | 51.74761 | 114.4765 | 9945.211 |
| PSO | 1.333518 | 1.047126 | 67.93357 | 17.51237 | 12,075.36 |
| GA | 1.475349 | 0.666431 | 53.45493 | 162.9081 | 14,813.53 |
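The reported costs can be spot-checked against the standard pressure vessel cost function from the engineering-design benchmark literature (shell and head material, forming, and welding terms). The sketch below assumes that standard formulation; it is not the authors' implementation:

```python
def pressure_vessel_cost(Ts, Th, R, L):
    # Standard cost model: Ts/Th are shell/head thicknesses, R the inner
    # radius, and L the cylindrical section length.
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L
            + 19.84 * Ts ** 2 * R)

# The SABO design variables from Table 7 reproduce the reported cost:
cost = pressure_vessel_cost(Ts=0.778027, Th=0.384579, R=40.31228, L=200)
print(round(cost, 2))  # ≈ 5882.9, matching the reported 5882.901
```

Evaluating the other rows the same way reproduces the remaining cost column, which confirms that the tabulated designs and costs are mutually consistent.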
Table 8. Statistical results of optimization algorithms for the pressure vessel design problem.
| Algorithm | Mean | Best | Worst | Std | Median | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| SABO | 5882.901 | 5882.901 | 5882.901 | 1.87 × 10^−12 | 5882.901 | 1 |
| WSO | 5915.105 | 5882.901 | 6498.223 | 137.3202 | 5882.901 | 3 |
| AVOA | 6481.485 | 5883.345 | 7313.806 | 539.7058 | 6320.198 | 6 |
| RSA | 13,096.94 | 7482.575 | 31,153.65 | 5269.559 | 11,801.01 | 9 |
| MPA | 5882.901 | 5882.901 | 5882.901 | 7.88 × 10^−6 | 5882.901 | 2 |
| TSA | 6338.723 | 5916.63 | 7390.013 | 508.4069 | 6070.166 | 5 |
| WOA | 8510.284 | 6541.663 | 11,984.76 | 1564.63 | 7876.801 | 8 |
| MVO | 6728.999 | 6044.237 | 7328.396 | 420.5985 | 6712.525 | 7 |
| GWO | 6025.596 | 5891.034 | 7223.113 | 382.0974 | 5903.094 | 4 |
| TLBO | 29,784.06 | 14,118.06 | 54,039.67 | 10,010.76 | 29,098.17 | 11 |
| GSA | 22,548.81 | 9945.211 | 40,013.2 | 8166.03 | 21,797.25 | 10 |
| PSO | 32,641.1 | 12,075.36 | 74,979.4 | 18,333.66 | 31,455.76 | 12 |
| GA | 32,672.62 | 14,813.53 | 57,925.8 | 12,038.98 | 30,822.44 | 13 |
Table 9. Performance of optimization algorithms for the speed reducer design problem.
| Algorithm | b | M | p | l1 | l2 | d1 | d2 | Optimum Cost |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SABO | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348 |
| WSO | 3.5 | 0.7 | 17 | 7.300011 | 7.800021 | 3.350215 | 5.286686 | 2996.349 |
| AVOA | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348 |
| RSA | 3.6 | 0.7 | 17 | 8.3 | 8.3 | 3.367585 | 5.5 | 3201.663 |
| MPA | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348 |
| TSA | 3.502148 | 0.7 | 17 | 7.3 | 8.3 | 3.35245 | 5.289842 | 3010.76 |
| WOA | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.367938 | 5.291747 | 3004.112 |
| MVO | 3.512969 | 0.7 | 17 | 7.531103 | 7.8 | 3.358073 | 5.28743 | 3005.97 |
| GWO | 3.500135 | 0.7 | 17 | 7.465414 | 7.842208 | 3.351387 | 5.288783 | 3000.422 |
| TLBO | 3.580555 | 0.702711 | 24.74533 | 8.098778 | 8.176551 | 3.674643 | 5.412883 | 4887.56 |
| GSA | 3.542686 | 0.702648 | 17.21175 | 7.499948 | 7.843232 | 3.588004 | 5.320297 | 3152.102 |
| PSO | 3.540769 | 0.70174 | 27.65403 | 7.555885 | 8.17207 | 3.390954 | 5.389825 | 5497.948 |
| GA | 3.554445 | 0.706553 | 20.58122 | 7.559935 | 8.141695 | 3.627213 | 5.383383 | 3897.082 |
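The cost column can likewise be verified with the standard weight function of Golinski's speed reducer problem, as commonly stated in the benchmark literature (the sketch below assumes that formulation and is not taken from the paper itself):

```python
def speed_reducer_cost(b, M, p, l1, l2, d1, d2):
    # Standard weight model: b = face width, M = module, p = number of
    # pinion teeth, l1/l2 = shaft lengths, d1/d2 = shaft diameters.
    return (0.7854 * b * M ** 2 * (3.3333 * p ** 2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

# The SABO design variables from Table 9 reproduce the reported cost:
cost = speed_reducer_cost(3.5, 0.7, 17, 7.3, 7.8, 3.350215, 5.286683)
print(round(cost, 2))  # ≈ 2996.35, matching the reported 2996.348
```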
Table 10. Statistical results of optimization algorithms for the speed reducer design problem.
| Algorithm | Mean | Best | Worst | Std | Median | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| SABO | 2996.348 | 2996.348 | 2996.348 | 9.33 × 10^−13 | 2996.348 | 1 |
| WSO | 2996.428 | 2996.349 | 2997.378 | 0.229195 | 2996.36 | 3 |
| AVOA | 3001.508 | 2996.348 | 3012.836 | 4.579725 | 3001.278 | 4 |
| RSA | 3275.755 | 3201.663 | 3363.128 | 58.7856 | 3268.023 | 9 |
| MPA | 2996.348 | 2996.348 | 2996.348 | 1.03 × 10^−5 | 2996.348 | 2 |
| TSA | 3031.051 | 3010.76 | 3055.05 | 12.61348 | 3031.917 | 7 |
| WOA | 3119.79 | 3004.112 | 3241.885 | 71.0998 | 3139.051 | 8 |
| MVO | 3027.597 | 3005.97 | 3055.36 | 14.62276 | 3028.79 | 6 |
| GWO | 3005.626 | 3000.422 | 3015.259 | 4.261653 | 3005.579 | 5 |
| TLBO | 9.03 × 10^13 | 4887.56 | 3.39 × 10^14 | 9.66 × 10^13 | 5.47 × 10^13 | 12 |
| GSA | 3622.122 | 3152.102 | 4409.364 | 332.2137 | 3665.23 | 10 |
| PSO | 1.67 × 10^14 | 5497.948 | 5.21 × 10^14 | 1.61 × 10^14 | 1.37 × 10^14 | 13 |
| GA | 4.37 × 10^13 | 3897.082 | 1.77 × 10^14 | 4.76 × 10^13 | 2.62 × 10^13 | 11 |
Table 11. Performance of optimization algorithms for the welded beam design problem.
| Algorithm | h | l | t | b | Optimum Cost |
| --- | --- | --- | --- | --- | --- |
| SABO | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852 |
| WSO | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852 |
| AVOA | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852 |
| RSA | 0.168536 | 4.097767 | 10 | 0.204452 | 1.908712 |
| MPA | 0.20573 | 3.470489 | 9.036624 | 0.20573 | 1.724852 |
| TSA | 0.20487 | 3.485327 | 9.06275 | 0.206144 | 1.733193 |
| WOA | 0.205398 | 3.46205 | 9.077283 | 0.21425 | 1.795184 |
| MVO | 0.204071 | 3.502486 | 9.058767 | 0.205639 | 1.729722 |
| GWO | 0.20563 | 3.472437 | 9.041285 | 0.205727 | 1.725749 |
| TLBO | 0.366925 | 3.230843 | 8.472588 | 0.399745 | 3.288168 |
| GSA | 0.269422 | 2.818837 | 7.907051 | 0.269422 | 1.94981 |
| PSO | 0.407489 | 5.001097 | 5.120335 | 0.644527 | 3.934221 |
| GA | 0.152949 | 6.850027 | 7.076448 | 0.44196 | 3.314212 |
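For the welded beam problem, the objective is the fabrication cost (weld material plus bar stock). The sketch below assumes the standard cost function from the benchmark literature, not the authors' own code:

```python
def welded_beam_cost(h, l, t, b):
    # Standard fabrication-cost model: h = weld thickness, l = weld length,
    # t = bar height, b = bar thickness; 14 in. is the fixed overhang.
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# The SABO design variables from Table 11 reproduce the reported cost:
cost = welded_beam_cost(0.20573, 3.470489, 9.036624, 0.20573)
print(round(cost, 3))  # ≈ 1.725, matching the reported 1.724852
```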
Table 12. Statistical results of optimization algorithms for the welded beam design problem.
| Algorithm | Mean | Best | Worst | Std | Median | Rank |
| --- | --- | --- | --- | --- | --- | --- |
| SABO | 1.724852 | 1.724852 | 1.724852 | 6.83 × 10^−16 | 1.724852 | 1 |
| WSO | 1.724859 | 1.724852 | 1.724984 | 2.94 × 10^−5 | 1.724852 | 3 |
| AVOA | 1.766698 | 1.724892 | 1.894369 | 0.047302 | 1.749276 | 7 |
| RSA | 2.206838 | 1.908712 | 2.432765 | 0.167509 | 2.206361 | 8 |
| MPA | 1.724852 | 1.724852 | 1.724852 | 9.33 × 10^−9 | 1.724852 | 2 |
| TSA | 1.743398 | 1.733193 | 1.753578 | 0.005756 | 1.742708 | 5 |
| WOA | 2.507115 | 1.795184 | 4.863039 | 0.864664 | 2.221353 | 10 |
| MVO | 1.744239 | 1.729722 | 1.766106 | 0.010415 | 1.740333 | 6 |
| GWO | 1.727645 | 1.725749 | 1.731538 | 0.001804 | 1.726748 | 4 |
| TLBO | 8.86 × 10^12 | 3.288168 | 1.06 × 10^14 | 2.67 × 10^13 | 5.130472 | 12 |
| GSA | 2.444295 | 1.94981 | 3.206643 | 0.318079 | 2.420274 | 9 |
| PSO | 3.23 × 10^13 | 3.934221 | 1.47 × 10^14 | 5.01 × 10^13 | 2.52 × 10^12 | 13 |
| GA | 5.15 × 10^12 | 3.314212 | 9.81 × 10^13 | 2.19 × 10^13 | 5.32709 | 11 |
Table 13. Performance of optimization algorithms for the tension/compression spring design problem.
| Algorithm | d | D | p | Optimum Cost |
| --- | --- | --- | --- | --- |
| SABO | 0.051689 | 0.356718 | 11.28897 | 0.012665 |
| WSO | 0.051689 | 0.356718 | 11.28894 | 0.012665 |
| AVOA | 0.051689 | 0.356718 | 11.28897 | 0.012665 |
| RSA | 0.05 | 0.310731 | 15 | 0.013206 |
| MPA | 0.051688 | 0.35669 | 11.29061 | 0.012665 |
| TSA | 0.052552 | 0.377537 | 10.18444 | 0.012704 |
| WOA | 0.050879 | 0.337552 | 12.50787 | 0.012677 |
| MVO | 0.060316 | 0.601884 | 4.373275 | 0.013955 |
| GWO | 0.050839 | 0.33652 | 12.59396 | 0.012694 |
| TLBO | 0.069085 | 0.936567 | 2 | 0.01788 |
| GSA | 0.054593 | 0.420665 | 8.807168 | 0.013549 |
| PSO | 0.055083 | 0.362868 | 14.11723 | 0.017745 |
| GA | 0.069092 | 0.936121 | 2 | 0.017875 |
Table 14. Statistical results of optimization algorithms for the tension/compression spring design problem.

| Algorithm | Mean | Best | Worst | Std | Median | Rank |
|---|---|---|---|---|---|---|
| SABO | 0.012665 | 0.012665 | 0.012665 | 1.32 × 10⁻¹⁸ | 0.012665 | 1 |
| WSO | 0.01268 | 0.012665 | 0.01278 | 2.79 × 10⁻⁵ | 0.012671 | 3 |
| AVOA | 0.013025 | 0.012691 | 0.014417 | 0.000435 | 0.012881 | 6 |
| RSA | 0.021483 | 0.013206 | 0.105648 | 0.023175 | 0.013311 | 11 |
| MPA | 0.012665 | 0.012665 | 0.012665 | 3.25 × 10⁻⁸ | 0.012665 | 2 |
| TSA | 0.012951 | 0.012704 | 0.013815 | 0.000279 | 0.012868 | 5 |
| WOA | 0.013608 | 0.012677 | 0.015812 | 0.000964 | 0.013087 | 7 |
| MVO | 0.017091 | 0.013955 | 0.01811 | 0.001373 | 0.017875 | 8 |
| GWO | 0.012746 | 0.012694 | 0.013145 | 9.55 × 10⁻⁵ | 0.012725 | 4 |
| TLBO | 0.018531 | 0.01788 | 0.01934 | 0.000379 | 0.01848 | 9 |
| GSA | 0.019326 | 0.013549 | 0.02727 | 0.003827 | 0.019966 | 10 |
| PSO | 2.98 × 10¹³ | 0.017745 | 3.97 × 10¹⁴ | 9.71 × 10¹³ | 0.017773 | 13 |
| GA | 1.08 × 10¹² | 0.017875 | 2.09 × 10¹³ | 4.66 × 10¹² | 0.023851 | 12 |
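The spring-design optima can be cross-checked in the same way. The objective in the standard formulation of this benchmark is the spring weight, f(d, D, p) = (p + 2)·D·d², where d is the wire diameter, D the mean coil diameter, and p the number of active coils; again, this is the literature's usual formulation rather than a formula quoted from this article's text:

```python
# Tension/compression spring design benchmark: minimize spring weight,
# (p + 2) * D * d**2, in the standard formulation from the literature.
def spring_weight(d, D, p):
    return (p + 2) * D * d**2

# SABO's reported optimum design variables (d, D, p).
weight = spring_weight(0.051689, 0.356718, 11.28897)
print(f"{weight:.6f}")  # agrees with the reported optimum cost 0.012665
```

The same check on RSA's boundary design (d = 0.05, D = 0.31073, p = 15) reproduces its reported cost of about 0.013206, confirming how the flattened row was parsed.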


Trojovský, P.; Dehghani, M. Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems. Biomimetics 2023, 8, 149. https://doi.org/10.3390/biomimetics8020149
