
Reptile Search Algorithm Considering Different Flight Heights to Solve Engineering Optimization Design Problems

1 School of Mechanical and Electrical Engineering, Guizhou Normal University, Guiyang 550025, China
2 Technical Engineering Center of Manufacturing Service and Knowledge Engineering, Guizhou Normal University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Biomimetics 2023, 8(3), 305; https://doi.org/10.3390/biomimetics8030305
Submission received: 20 June 2023 / Revised: 6 July 2023 / Accepted: 8 July 2023 / Published: 11 July 2023

Abstract:
The reptile search algorithm is an effective optimization method based on the natural laws of the biological world. By modeling and simulating the hunting process of reptiles, it can achieve good optimization results. However, due to the limitations of natural laws, it easily falls into local optima during the exploration phase. Inspired by the different search fields of biological organisms with varying flight heights, this paper proposes a reptile search algorithm considering different flight heights. In the exploration phase, introducing the different flight-altitude abilities of two animals, the northern goshawk and the African vulture, gives the reptiles better search horizons, improves their global search ability, and reduces the probability of falling into local optima during this phase. A novel dynamic factor (DF) is proposed in the exploitation phase to improve the algorithm's convergence speed and optimization accuracy. To verify the effectiveness of the proposed algorithm, its results were compared with ten state-of-the-art (SOTA) algorithms on thirty-three well-known test functions. The experimental results show that the proposed algorithm performs well. In addition, the proposed algorithm and the ten SOTA algorithms were applied to three practical micromachine engineering problems, and the experimental results show that the proposed algorithm has good problem-solving ability.

1. Introduction

As humans explore natural laws more deeply, more and more practical problems have emerged in fields such as control [1,2], manufacturing [3,4], economics [5,6], and physics [7]. Most of these problems are characterized by a large scale, multiple constraints, and discontinuity [8]. Traditional algorithms typically optimize the objective function through its gradient, a deterministic search method, which makes it difficult to solve such problems with existing traditional methods.
Most heuristic algorithms are characterized by random search, which yields a higher probability of reaching the global optimum [9]. Because they do not rely on function gradients, heuristic algorithms do not require the objective function to be continuously differentiable, making it possible to optimize objective functions that gradient descent cannot handle. Heuristic algorithms can be roughly divided into three categories according to what they imitate: biological habits [10,11], cognitive thinking [12,13], and physical phenomena [14,15]. Among these, owing to the abundance of natural organisms, heuristic algorithms that simulate biological habits are the most widely used, such as the genetic algorithm (GA) [16], particle swarm optimization (PSO) [17], ant colony optimization (ACO) [18], the grey wolf optimizer (GWO) [19], etc. However, according to the no-free-lunch theorem, no single algorithm is suitable for solving all optimization problems [20]. In recent years, in pursuit of more effective heuristic algorithms, many improved algorithms have emerged, based mainly on strategy improvements and algorithm combinations. Our research team has been committed to obtaining better-performing heuristic algorithms through algorithmic combination, such as the beetle antenna strategy based on grey wolf optimization [21], grey wolf optimization based on the Aquila exploration method (AGWO) [22], hybrid golden jackal optimization and the golden sine algorithm [23], enhanced snake optimization [24], etc.
The reptile search algorithm (RSA) is a novel intelligent optimization algorithm based on crocodile hunting behavior that was proposed by Laith et al. in 2022 [25]. The RSA has the characteristics of fewer parameter adjustments, strong optimization stability, and easy implementation, achieving excellent results in optimization problems. Ervural and Hakli proposed a binary RSA to extend the RSA to binary optimization issues [26]. Emam et al. proposed an enhanced reptile search algorithm for global optimization. They selected the optimal thresholding values for multilevel image segmentation [27]. Xiong et al. proposed a dual-scale deep learning model based on ELM-BiLSTM and improved the reptile search algorithm for wind power prediction [28]. Elkholy et al. proposed an AI-embedded FPGA-based real-time intelligent energy management system using a multi-objective reptile search algorithm and a gorilla troops optimizer [29].
However, due to the physiological limitations of any animal, algorithms that simulate biological habits have corresponding drawbacks. Like other algorithms of this kind, the RSA suffers from a slow convergence speed and low optimization accuracy and is prone to falling into local optima. This article aims to solve this problem, taking inspiration from the natural patterns of organisms. Crocodiles have good hunting ability as land animals but, owing to their low height, have a limited field of observation; their search performance is therefore limited (consistent with the RSA's slow convergence speed, low optimization accuracy, and tendency to fall into local optima). Inspired by the different flight heights and search horizons of natural organisms, this article introduces the African vulture optimization algorithm (AVOA) [30] and northern goshawk optimization (NGO) [31], using the high-altitude advantages of birds for exploration. Considering the sizeable spatial range, the northern goshawk mechanism is used for the high-altitude field, and the African vulture mechanism is used at lower altitudes. In the exploitation phase, the hunting advantages of crocodiles are utilized. On this basis, a reptile search algorithm considering different flight heights (FRSA) is proposed.
To verify the effectiveness of the FRSA, a comparison was made with ten SOTA algorithms on two function sets (thirty-three functions) and three engineering design optimization problems, demonstrating significant improvements in both the algorithm’s performance and its practical problem-solving capabilities. The highlights and contributions of this paper are summarized as follows: (1) The reptile search algorithm considering different flight heights is proposed. (2) Wilcoxon rank sum and Friedman tests are used to analyze the statistical data. (3) The FRSA is applied to solve three constrained optimization problems in mechanical fields and compared with ten SOTA algorithms.
The rest of this article is arranged as follows: Section 2 reviews the RSA, and Section 3 provides a detailed introduction to the FRSA, including all of the exploration and exploitation processes. Section 4 describes and analyzes the results of the FRSA and the comparative algorithms on the two sets of functions. Section 5 presents the FRSA's performance on three practical engineering design problems. Finally, Section 6 provides a summary and the outlook of the entire article.

2. RSA

The RSA is a novel, naturally inspired meta-heuristic optimizer that simulates the hunting behavior of crocodiles. Crocodile hunting is divided into two phases: encircling prey (exploration) and hunting (exploitation). Encirclement is achieved through high walking or belly walking, and hunting is achieved through hunting coordination or hunting cooperation.
In each optimization process, the first step is to generate an initial population. In the RSA, the initial population of crocodiles is randomly generated, as described in Equation (1), and the rules for randomly generating populations are shown in Equation (2).
$$P = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_m \end{bmatrix}_{m \times n} = \begin{bmatrix} p_{1,1} & p_{1,2} & \cdots & p_{1,n} \\ p_{2,1} & p_{2,2} & \cdots & p_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m,1} & p_{m,2} & \cdots & p_{m,n} \end{bmatrix}_{m \times n} \tag{1}$$
where $P$ denotes the randomly generated initial solutions, and $p_{m,n}$ represents the position of the $m$-th solution in the $n$-th dimension. $m$ denotes the number of candidate solutions, and $n$ denotes the dimension of the given problem.
$$p_{i,j} = rand_1 \times (ub - lb) + lb, \quad j = 1, 2, \ldots, n \tag{2}$$
where $rand_1$ denotes a random value between 0 and 1, and $lb$ and $ub$ denote the lower and upper bounds of the given problem, respectively.
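As a minimal illustration, the initialization of Equations (1) and (2) can be sketched in Python with NumPy (the function and variable names below are ours, chosen for this sketch):

```python
import numpy as np

def init_population(m, n, lb, ub, rng=None):
    """Randomly generate m candidate solutions in n dimensions within
    [lb, ub], as described by Equations (1) and (2)."""
    rng = np.random.default_rng() if rng is None else rng
    # rand_1 is drawn independently for every entry p_{i,j}
    return rng.random((m, n)) * (ub - lb) + lb

P = init_population(m=30, n=5, lb=-10.0, ub=10.0)
```

Each row of `P` is one candidate solution; every entry lies inside the given bounds.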
The RSA transitions between encirclement (exploration) and hunting (exploitation), and each phase can be divided into two states according to the situation; the RSA thus comprises four parts in total.
During the exploration phase, there are two states: high walking and belly walking. When $t \le T/4$, the crocodile population is in the high walking state, and when $T/4 < t \le T/2$, it is in the belly walking state. The different conditions during the exploration phase allow the population to conduct better searches and find better solutions. The position update rules of the population during the exploration phase are shown in Equation (3).
$$p_{i,j}^{t+1} = \begin{cases} Best_j^t \times \left(-\delta_{i,j}^t\right) \times 0.1 - R_{i,j}^t \times rand_2, & t \le \dfrac{T}{4} \\[4pt] Best_j^t \times p_{r_1,j} \times ES^t \times rand_3, & \dfrac{T}{4} < t \le \dfrac{T}{2} \end{cases} \tag{3}$$
where $Best_j^t$ denotes the position of the optimal solution at iteration $t$ in the $j$-th dimension, $T$ is the maximum number of iterations per experiment, and $rand_2$ and $rand_3$ denote random values between 0 and 1. $\delta_{i,j}^t$ denotes the hunting operator for the $j$-th dimension of the $i$-th candidate solution, calculated by Equation (4). $R_{i,j}^t$ is a scaling function used to reduce the search area, calculated by Equation (5). $r_1$ is a random integer between 1 and $m$, and $ES^t$ is an evolutionary factor whose value decreases randomly between 2 and $-2$, calculated by Equation (6).
$$\delta_{i,j}^t = Best_j^t \times d_{i,j} \tag{4}$$
$$R_{i,j}^t = \frac{Best_j^t - p_{r_2,j}}{Best_j^t + \theta} \tag{5}$$
$$ES^t = 2 \times rand_4 \times \left(1 - \frac{t}{T}\right) \tag{6}$$
where $\theta$ is a small value close to zero that prevents a zero denominator, $r_2$ is a random integer between 1 and $m$, $rand_4$ is a random integer between $-1$ and 1, and $d_{i,j}$ represents the percentage difference between the best solution and the current solution in the $j$-th dimension, calculated by Equation (7).
$$d_{i,j} = 0.1 + \frac{p_{i,j} - \frac{1}{n}\sum_{j=1}^{n} p_{i,j}}{Best_j^t \times (ub_j - lb_j) + \theta} \tag{7}$$
In the exploitation phase, there are two states based on the hunting behavior of crocodiles: hunting coordination and hunting cooperation. Coordination and cooperation allow crocodiles to approach the target prey easily, as this intensification mechanism differs from the encirclement mechanism; the exploitation search can therefore discover near-optimal solutions after several attempts. When $T/2 < t \le 3T/4$, the crocodile population is in the hunting coordination state, and when $3T/4 < t \le T$, it is in the hunting cooperation state. The different states during the exploitation phase help prevent the optimization from falling into local optima and help to determine the optimal solution. The position update rules of the population during the exploitation phase are shown in Equation (8).
$$p_{i,j}^{t+1} = \begin{cases} Best_j^t \times d_{i,j}^t \times rand_5, & \dfrac{T}{2} < t \le \dfrac{3T}{4} \\[4pt] Best_j^t - \delta_{i,j}^t \times \theta - R_{i,j}^t \times rand_6, & \dfrac{3T}{4} < t \le T \end{cases} \tag{8}$$
where $Best_j^t$ denotes the position of the optimal solution at iteration $t$ in the $j$-th dimension, and $rand_5$ and $rand_6$ denote random values between 0 and 1. $R_{i,j}^t$ is the scaling function of Equation (5), and $\theta$ is a minimal value.
The pseudo-code of the RSA is shown in Algorithm 1.
Algorithm 1. Pseudo-code of the RSA
1. Define Dim, UB, LB, Max_Iter (T), Curr_Iter (t), α, β, etc.
2. Initialize the population $p_i\ (i = 1, 2, \ldots, m)$ randomly
3. while (t < T) do
4.   Evaluate the fitness of each $p_i\ (i = 1, 2, \ldots, m)$
5.   Find the Best solution
6.   Update ES using Equation (6)
7.   for (i = 1 to m) do
8.     for (j = 1 to n) do
9.       Update the δ, R, and d values using Equations (4), (5), and (7), respectively
10.      if (t ≤ T/4) then
11.        Calculate $p_{i,j}^{t+1}$ using Equation (3)
12.      else if (t ≤ 2T/4 and t > T/4) then
13.        Calculate $p_{i,j}^{t+1}$ using Equation (3)
14.      else if (t ≤ 3T/4 and t > 2T/4) then
15.        Calculate $p_{i,j}^{t+1}$ using Equation (8)
16.      else
17.        Calculate $p_{i,j}^{t+1}$ using Equation (8)
18.      end if
19.     end for
20.   end for
21.   t = t + 1
22. end while
23. Return the best solution
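To make the four phases concrete, the following is a minimal NumPy sketch of Algorithm 1 applied to a sphere function. It follows Equations (3)–(8) with the commonly reported defaults α = β = 0.1; the sign conventions follow the original RSA formulation, and all identifiers are ours rather than from a reference implementation:

```python
import numpy as np

def rsa(f, m, n, lb, ub, T, alpha=0.1, beta=0.1, seed=0):
    """Minimal RSA sketch following Equations (3)-(8)."""
    rng = np.random.default_rng(seed)
    theta = 1e-10                                    # small value, avoids /0
    P = rng.random((m, n)) * (ub - lb) + lb          # Equations (1)-(2)
    best = min(P, key=f).copy()
    for t in range(T):
        ES = 2 * rng.integers(-1, 2) * (1 - t / T)   # Equation (6)
        for i in range(m):
            M = P[i].mean()
            for j in range(n):
                d = alpha + (P[i, j] - M) / (best[j] * (ub - lb) + theta)  # Eq. (7)
                delta = best[j] * d                                        # Eq. (4)
                R = (best[j] - P[rng.integers(m), j]) / (best[j] + theta)  # Eq. (5)
                if t <= T / 4:                       # high walking
                    new = best[j] * -delta * beta - R * rng.random()
                elif t <= T / 2:                     # belly walking
                    new = best[j] * P[rng.integers(m), j] * ES * rng.random()
                elif t <= 3 * T / 4:                 # hunting coordination
                    new = best[j] * d * rng.random()
                else:                                # hunting cooperation
                    new = best[j] - delta * theta - R * rng.random()
                P[i, j] = np.clip(new, lb, ub)
            if f(P[i]) < f(best):                    # greedily keep the best
                best = P[i].copy()
    return best, f(best)

best, val = rsa(lambda x: float(np.sum(x**2)), m=20, n=5, lb=-10, ub=10, T=100)
```

Because the best solution is only ever replaced greedily, the returned value never exceeds the best fitness of the initial random population.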

3. Proposed FRSA

As a heuristic algorithm, the RSA has achieved good results in solving optimization problems thanks to its novel imitation approach. However, due to the limitations of natural biological behavior, the algorithm still has some drawbacks. An individual may encounter multiple complex situations during optimization, and the steady decrease in the evolutionary factor does not conform to the nonlinear law of optimization when dealing with complex problems. The team collaboration, search scope, and hunting mechanism of the crocodile population are all updated around the current optimal value, and the iterative updating process of individuals lacks a mutation mechanism. If the current optimal individual falls into a local optimum, the population can easily aggregate around it quickly, leaving the algorithm unable to break free of the local extremum.
In this section, to address the shortcomings of the RSA, the FRSA is proposed by introducing different search mechanisms (based on exploration altitude) in the exploration phase of the algorithm and introducing a fluctuation factor in the exploitation phase.

3.1. High-Altitude Search Mechanism (Northern Goshawk Exploration)

The northern goshawk randomly selects prey during the prey identification stage of hunting and quickly attacks it. Due to the random selection of targets in the search space, this stage increases the exploration capability of the NGO algorithm. This stage conducts a global search of the search space to determine the optimal region. At this stage, the behavior of northern goshawks in prey selection and attack is described using Equations (9) and (10).
$$p_{i,j}^{t+1} = \begin{cases} p_{i,j}^t + rand_7 \times \left(y_{i,j}^t - I \times p_{i,j}^t\right), & F_{y_i} < F_i \\[4pt] p_{i,j}^t + rand_8 \times \left(p_{i,j}^t - y_{i,j}^t\right), & F_{y_i} \ge F_i \end{cases} \tag{9}$$
$$P_i^{t+1} = \begin{cases} P_i^{new}, & F_i^{new} < F_i \\ P_i^t, & F_i^{new} \ge F_i \end{cases} \tag{10}$$
where $y_i$ is the prey position selected by the $i$-th northern goshawk, $F_{y_i}$ is the objective function value at that prey position, $P_i^{new}$ is the candidate position of the $i$-th northern goshawk produced by Equation (9), $F_i^{new}$ is its objective function value, $P_i^{t+1}$ is the position of the $i$-th northern goshawk at the next iteration, $p_{i,j}^{t+1}$ is its position in the $j$-th dimension, $rand_7$ and $rand_8$ are random values between 0 and 1, and $I$ is a random integer equal to 1 or 2.
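The prey-identification step of Equations (9) and (10) can be sketched as follows (a simplified per-individual illustration; the function and variable names are ours):

```python
import numpy as np

def ngo_explore(P, fit, f, lb, ub, rng):
    """One NGO prey-identification step, following Equations (9) and (10).
    P is the (m, n) population and fit holds the objective value of each row."""
    m, n = P.shape
    for i in range(m):
        k = rng.integers(m)                    # randomly selected prey y_i
        y, Fy = P[k], fit[k]
        I = rng.integers(1, 3)                 # random integer, 1 or 2
        r = rng.random(n)
        if Fy < fit[i]:                        # attack: move toward better prey
            cand = P[i] + r * (y - I * P[i])
        else:                                  # retreat: move away from worse prey
            cand = P[i] + r * (P[i] - y)
        cand = np.clip(cand, lb, ub)
        Fc = f(cand)
        if Fc < fit[i]:                        # greedy selection, Equation (10)
            P[i], fit[i] = cand, Fc
    return P, fit

rng = np.random.default_rng(1)
pop = rng.random((10, 4)) * 20 - 10
sphere = lambda x: float(np.sum(x**2))
fit = np.array([sphere(x) for x in pop])
old = fit.copy()
pop, fit = ngo_explore(pop, fit, sphere, -10, 10, rng)
```

The greedy selection of Equation (10) guarantees that no individual's fitness worsens during this step.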

3.2. Low-Altitude Search Mechanism (African Vulture Exploration)

Inspired by the rate at which vultures become satiated or starve, this behavior is modeled mathematically using Equation (11), which drives the transition between the exploration and exploitation phases. The satiety rate shows a decreasing trend, and this behavior is simulated using Equation (12).
$$\tau = h \times \left( \sin^{w}\left(\frac{\pi}{2} \times \frac{t}{T}\right) + \cos\left(\frac{\pi}{2} \times \frac{t}{T}\right) - 1 \right) \tag{11}$$
$$\eta = (2 \times rand_9 + 1) \times z \times \left(1 - \frac{t}{T}\right) + \tau \tag{12}$$
where $\eta$ represents the hunger level of the vultures, $t$ is the current number of iterations, $T$ is the maximum number of iterations, $z$ denotes a random value between $-1$ and 1, $h$ denotes a random value between $-2$ and 2, and $w$ is a preset parameter of the AVOA. When $|\eta| > 1$, the vultures are in the exploration phase. Based on the living habits of vultures, there are two different search methods in the exploration phase of the African vulture optimization algorithm, as shown in Equation (13).
$$p_{i,j}^{t+1} = \begin{cases} Best_j^t - \left| 2 \times rand_{10} \times Best_j^t - p_{i,j}^t \right| \times \eta, & \delta \le 0.6 \\[4pt] Best_j^t - \eta + rand_{11} \times \left((ub - lb) \times rand + lb\right), & \delta > 0.6 \end{cases} \tag{13}$$
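A sketch of one AVOA exploration update (Equations (11)–(13)) for a single vulture might look as follows. The exponent w and the default w = 2.5 follow the original AVOA description, and all other identifiers are ours:

```python
import numpy as np

def avoa_explore_step(p, best, t, T, lb, ub, rng, w=2.5):
    """One AVOA exploration update for a single vulture, Equations (11)-(13)."""
    h = rng.uniform(-2, 2)
    z = rng.uniform(-1, 1)
    phase = np.pi / 2 * t / T
    tau = h * (np.sin(phase) ** w + np.cos(phase) - 1)        # Equation (11)
    eta = (2 * rng.random() + 1) * z * (1 - t / T) + tau      # Equation (12)
    if rng.random() <= 0.6:          # first search mode of Equation (13)
        cand = best - np.abs(2 * rng.random() * best - p) * eta
    else:                            # second search mode of Equation (13)
        cand = best - eta + rng.random() * ((ub - lb) * rng.random() + lb)
    return np.clip(cand, lb, ub)

rng = np.random.default_rng(0)
p = np.array([1.0, -2.0, 3.0])
best = np.zeros(3)
cand = avoa_explore_step(p, best, t=10, T=100, lb=-10, ub=10, rng=rng)
```

The first mode intensifies around the current best, while the second scatters the vulture across the bounded search space.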

3.3. Novel Dynamic Factor

In the exploitation phase of the RSA, the algorithm lacks random walk ability, so its convergence speed is slow and its optimization accuracy is low at this stage. Therefore, this paper proposes a new DF on the original basis to add a disturbance factor and improve the random walk ability of the algorithm in the exploitation stage. It enables the population to explore local regions in small steps, reduces the probability of individuals falling into local extrema under the influence of the fluctuation, and improves the optimization accuracy of the algorithm. The new DF is calculated by Equation (14). The DF graph for 500 iterations is shown in Figure 1.
$$DF = 0.4 \times (2 \times r - 1) \times e^{-\left(t/T\right)^2} \tag{14}$$
where $t$ is the current number of iterations, $T$ is the maximum number of iterations, and $r$ denotes a random value between 0 and 1.
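The DF can be computed as below (a sketch; the negative sign of the exponent is our reading of the decaying random disturbance described above):

```python
import numpy as np

def dynamic_factor(t, T, rng):
    """Dynamic factor of Equation (14): a random disturbance in [-0.4, 0.4]
    whose envelope decays as the iteration count t approaches T."""
    r = rng.random()                             # random value in [0, 1)
    return 0.4 * (2 * r - 1) * np.exp(-(t / T) ** 2)

rng = np.random.default_rng(0)
samples = [dynamic_factor(t, 500, rng) for t in range(500)]
```

Early in the run the disturbance spans nearly the full [-0.4, 0.4] band; near the final iterations its envelope shrinks by the factor $e^{-1}$.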
After adding the disturbance factor, the position update rules of the FRSA during the exploitation phase are shown in Equation (15).
$$p_{i,j}^{t+1} = \begin{cases} DF \times Best_j^t \times d_{i,j}^t \times rand_5, & \dfrac{6T}{10} < t \le \dfrac{8T}{10} \\[4pt] Best_j^t - DF \times \delta_{i,j}^t \times \theta - R_{i,j}^t \times rand_6, & \dfrac{8T}{10} < t \le T \end{cases} \tag{15}$$
By utilizing the proposed strategies, the optimization ability and efficiency of the RSA can be effectively improved. The cooperative hunting mode of the FRSA is shown in Figure 2, the pseudocode of the FRSA is shown in Algorithm 2, and the flowchart of the FRSA is shown in Figure 3.
Algorithm 2. Pseudo-code of the FRSA
1. Define Dim, UB, LB, Max_Iter (T), Curr_Iter (t), α, β, etc.
2. Initialize the population $p_i\ (i = 1, 2, \ldots, m)$ randomly
3. while (t < T) do
4.   Evaluate the fitness of each $p_i\ (i = 1, 2, \ldots, m)$
5.   Find the Best solution
6.   Update ES using Equation (6)
7.   for (i = 1 to m) do
8.     for (j = 1 to n) do
9.       Update the δ, R, and d values using Equations (4), (5), and (7), respectively
10.      if (t ≤ 3T/10) then
11.        Calculate $p_{i,j}^{t+1}$ using Equations (9) and (10)
12.      else if (t ≤ 6T/10 and t > 3T/10) then
13.        Calculate $p_{i,j}^{t+1}$ using Equation (13)
14.      else if (t ≤ 8T/10 and t > 6T/10) then
15.        Calculate $p_{i,j}^{t+1}$ using Equation (15)
16.      else
17.        Calculate $p_{i,j}^{t+1}$ using Equation (15)
18.      end if
19.     end for
20.   end for
21.   t = t + 1
22. end while
23. Return the best solution

3.4. Computational Time Complexity of the FRSA

In optimizing practical problems, time is an essential consideration in addition to accuracy [32]. The time complexity of an algorithm is an important measure of the algorithm, so it is necessary to compare the time complexity of the improved algorithm with that of the original. The time complexity is mainly determined by the algorithm's initialization, fitness evaluation, and solution updates.
With $N$ solutions, the time complexity of the initialization phase is $O(N)$, and the time complexity of the update phase is $O(T \times N) + O(T \times N \times D)$. The overall complexity of the RSA is therefore $O(N \times (T \times D + 1))$. Compared to the RSA, the FRSA adds only the computation of the dynamic factor; assuming this computation takes time $t'$, the time complexity of the FRSA is $O(N \times (T \times D + 1) + t') = O(N \times (T \times D + 1))$. Thus, the FRSA proposed in this article does not increase the time complexity.

4. Analysis of Experiments and Results

4.1. Benchmark Function Sets and Compared Algorithms

This section uses the classic function set and the CEC 2019 set as the benchmark test functions. There are 33 functions in total, including 7 unimodal, 6 multimodal, and 20 fixed-dimensional multimodal functions. The unimodal functions, having only one extreme value, were used to test the exploitation ability of the optimization algorithms. The multimodal functions, which have multiple extreme values, were used to test their exploration ability. Finally, the fixed-dimensional functions were used to evaluate an algorithm's ability to balance exploration and exploitation. The details of the classic function set are shown in Table 1, and the details of the CEC 2019 set are shown in Table 2.
To better compare the results with other algorithms, this study used ten well-known algorithms as benchmark algorithms, including the GA [16], PSO [17], ACO [18], GWO [19], GJO [33], SO [34], TACPSO [35], AGWO [36], EGWO [36], and the RSA [25]. These benchmark algorithms have achieved excellent results in function optimization and are often used as benchmark comparison algorithms. The details of the parameter settings for the algorithms are shown in Table 3. To be fair, the setting information for these parameters was taken from the original literature that proposed these algorithms.
To fairly compare the results of the benchmark algorithms, all algorithms adopted the following unified parameter settings: the number of independent continuous runs of the algorithm was 30, the number of populations was 50, the number of algorithm iterations was 500, and the comparison indicators included the mean, the standard deviation, the p-value, the Wilcoxon rank sum test, and the Friedman test [37,38]. The best results of the test are displayed in bold. This simulation testing environment was carried out on a computer with the following features: Intel(R) Core (TM) i5-9400F CPU @ 2.90 GHz and 16 GB RAM, Windows 10, 64-bit operating system.
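As an illustration of the statistical protocol, the Wilcoxon rank sum and Friedman tests can be run with SciPy on per-run results (the data below are synthetic placeholders, not results from the paper):

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(0)
# 30 independent runs of three hypothetical algorithms on one function
res_frsa = rng.normal(1e-3, 1e-4, 30)
res_rsa = rng.normal(5e-3, 1e-3, 30)
res_gwo = rng.normal(3e-3, 5e-4, 30)

# Wilcoxon rank sum test at the 0.05 significance level
stat, p = ranksums(res_frsa, res_rsa)
mark = "+" if (p < 0.05 and res_frsa.mean() < res_rsa.mean()) else "=/-"

# the Friedman test compares three or more algorithms over the same runs
chi2, p_f = friedmanchisquare(res_frsa, res_rsa, res_gwo)
```

A p-value below 0.05 together with a lower mean error yields the "+" mark used in the comparison tables.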

4.2. Results Comparison and Analysis

To fully validate the robustness and effectiveness of the algorithm for different dimensional problems, this study adopted three dimensions (30, 100, 500) for the non-fixed dimensional functions (unimodal and multimodal functions).
Table 4 shows the results of the non-fixed dimensional functions in 30 dimensions, including the mean (Mean), standard deviation (Std), and Friedman test of 11 algorithms. Figure 4 shows the iterative curves of these 11 algorithms for solving 13 non-fixed dimensional functions. Figure 5 is a boxplot of the results obtained by these 11 algorithms after solving 13 functions with non-fixed dimensions. The boxplot results were analyzed from five perspectives: the minimum, lower quartile, median, upper quartile, and maximum. By convergence curves and boxplots, the algorithm can be more intuitively and comprehensively characterized for solving functional problems. Out of 13 non-fixed dimensional functions, the FRSA achieved ten optimal values, with the highest number among all 11 algorithms. The Friedman value shows the overall results obtained by each algorithm in 13 functions. In the Friedman value, the FRSA achieved the mark of 2.2115, ranking first in the Friedman rank, indicating that the FRSA achieved better results than the other algorithms in 30 dimensions.
Table 5 shows the results of the non-fixed dimensional functions in 100 dimensions, including the Mean, Std, and Friedman test of 11 algorithms. Figure 6 shows the iterative curves of these 11 algorithms for solving 13 non-fixed dimensional functions. Figure 7 is a boxplot of the results obtained by these 11 algorithms after solving 13 functions with non-fixed dimensions. The boxplot results were analyzed from five perspectives: the minimum, lower quartile, median, upper quartile, and maximum. By convergence curves and boxplots, the algorithm can be more intuitively and comprehensively characterized for solving functional problems. Out of the 13 non-fixed dimensional functions, the FRSA achieved 11 optimal values, with the highest number among all 11 algorithms. The Friedman value shows the overall results obtained by each algorithm in the 13 functions. For the Friedman value, the FRSA achieved a mark of 2.0192, ranking first in the Friedman test, and indicating that the FRSA achieved better results than the other algorithms in 100 dimensions.
Table 6 shows the results of non-fixed dimensional functions at 500 dimensions, including the Mean, Std, and Friedman test of 11 algorithms. Figure 8 shows the iterative curves of these 11 algorithms for solving 13 non-fixed dimensional functions. Figure 9 is a boxplot of the results obtained by these 11 algorithms after solving 13 functions with non-fixed dimensions. The boxplot results were analyzed from five perspectives: the minimum, lower quartile, median, upper quartile, and maximum. By convergence curves and boxplots, the algorithm can be more intuitively and comprehensively characterized for solving functional problems. Out of the 13 non-fixed dimensional functions, the FRSA achieved 11 optimal values, with the highest number among all 11 algorithms. The Friedman value shows the overall results obtained by each algorithm in the 13 functions. For the Friedman value, the FRSA achieved a mark of 1.9615, ranking first in the Friedman test, and indicating that the FRSA achieved better results than the other algorithms in 500 dimensions.
Table 7 shows the results of the fixed dimensional functions, including the Mean, Std, and Friedman test of the 11 algorithms. Figure 10 shows the iterative curves of these 11 algorithms for solving the 10 fixed dimensional functions. Figure 11 is a boxplot of the results obtained by these 11 algorithms after solving the 10 fixed dimensional functions. The boxplot results were analyzed from five perspectives: the minimum, lower quartile, median, upper quartile, and maximum. Through convergence curves and boxplots, the algorithms' behavior in solving the functional problems can be characterized more intuitively and comprehensively. The FRSA achieved 8 optimal values out of the 10 fixed dimensional functions, the highest number among all 11 algorithms. The Friedman value shows the overall results obtained by each algorithm in the 10 functions. For the Friedman value, the FRSA achieved a mark of 1.9615, ranking first in the Friedman test and indicating that the FRSA achieved better results than the other algorithms on the fixed dimensional functions.
To compare the results of the FRSA with 10 benchmark algorithms more comprehensively, this article introduces another statistical analysis method, the Wilcoxon rank sum test.
As a non-parametric rank sum hypothesis test, the Wilcoxon rank sum test is frequently used in statistical practice for the comparison of measures of location when the underlying distributions are far from normal or not known in advance [39]. The purpose of the Wilcoxon rank sum test is to test whether there is a significant difference between two populations that are identical except for the population mean. In view of this, this article uses the Wilcoxon rank sum test to compare the differences among the results of various algorithms.
For the Wilcoxon rank sum test, the significance level was set to 0.05, and the symbols "+", "=", and "-" indicate that the performance of the FRSA was superior, similar, and inferior to the corresponding algorithm, respectively. In Table 8, results marked "+" are not underlined, while "=" and "-" are distinguished by different underline styles. Thus, it is possible to evaluate the adopted algorithms from multiple perspectives. Table 8 shows the rank sum test results between the FRSA and the ten benchmark algorithms.
In order to better demonstrate the comparison between the RSA and the FRSA, this study added a comparative analysis of the convergence of the two algorithms, as shown in Figure 12. There are five columns in Figure 12, which present the three-dimensional surface of each benchmark function, the convergence curves of the RSA and the FRSA, and the search histories, average fitness values, and trajectories. According to Figure 12, compared to the RSA, the FRSA proposed in this article has better exploration and exploitation capabilities and achieves higher optimization accuracy.
All functions in the CEC 2019 set have fixed dimensions. This function set is more complex in design and can be used to demonstrate the robustness and universality of the proposed FRSA. Table 9 shows the results of solving the CEC 2019 set using the FRSA and the benchmark algorithms, including the Mean, Std, and Friedman test of the 11 algorithms. Table 10 shows the Wilcoxon rank sum test results for the FRSA against the ten benchmark algorithms. According to Table 9, the FRSA achieved optimal values for 4 CEC 2019 functions, the highest number among all 11 algorithms. The Wilcoxon rank sum test compared the FRSA with the other algorithms, and the FRSA achieved a result of 58/18/24. The Friedman value shows the overall results of each algorithm in the 10 functions; the FRSA achieved a result of 3.5500, ranking first in the Friedman rank. Both statistical methods prove that the FRSA achieved better results than the other algorithms on the CEC 2019 set. Figure 13 shows the iterative curves of the 11 algorithms in solving the CEC 2019 set. Figure 14 presents a more comprehensive representation of the results of the 11 algorithms on the CEC 2019 set in the form of a boxplot.
This section compared the FRSA with ten advanced algorithms on the non-fixed dimensional and fixed dimensional functions from two different function sets to verify its performance. The results prove that the improvement strategies proposed in this article effectively improve the performance of the original RSA and obtain better solutions. The proposed FRSA has strong exploration ability and efficient exploitation ability and can effectively solve optimization problems in different dimensions.

5. Real-World Engineering Design Problems

In this section, the FRSA is applied to three engineering design problems: pressure vessel design [40,41], corrugated bulkhead design [42,43], and welded beam design [44]. These problems involve multiple variables and multiple constraints, are of significant practical importance, and are often used to verify the performance of heuristic algorithms; they have become a vital aspect of the practical application of meta-heuristic algorithms. To verify the performance of the FRSA fairly, this section used the same ten advanced algorithms as the function testing section (GA, PSO, ACO, GWO, GJO, SO, TACPSO, AGWO, EGWO, and RSA).

5.1. Pressure Vessel Design

A pressure vessel is a closed container that can withstand pressure. Pressure vessels are used pervasively and play an important role in industry, civil applications, the military, and many fields of scientific research. In pressure vessel design, the total cost must be minimized while meeting production needs under four constraints. The problem has four variables: the thickness of the shell $T_s\ (= x_1)$, the thickness of the head $T_h\ (= x_2)$, the inner radius $R\ (= x_3)$, and the length of the cylindrical section of the vessel, not including the head, $L\ (= x_4)$. The mathematical model of the pressure vessel design is as follows:
$$\begin{aligned} \text{Min } f(x) = {} & 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3 \\ \text{subject to } & g_1(x) = -x_1 + 0.0193 x_3 \le 0 \\ & g_2(x) = -x_2 + 0.00954 x_3 \le 0 \\ & g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0 \\ & g_4(x) = x_4 - 240 \le 0 \\ \text{where } & 0 \le x_1 \le 99, \quad 0 \le x_2 \le 99, \quad 10 \le x_3 \le 200, \quad 10 \le x_4 \le 200 \end{aligned}$$
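For reference, a penalty-based evaluation of this model can be sketched as follows (the static-penalty weight `w` is a common choice for constrained metaheuristics, not a value from the paper; identifiers are ours):

```python
import numpy as np

def pressure_vessel_cost(x):
    """Objective of the pressure vessel design problem."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_violation(x):
    """Total constraint violation (0 when all g_i(x) <= 0)."""
    x1, x2, x3, x4 = x
    g = [-x1 + 0.0193 * x3,
         -x2 + 0.00954 * x3,
         -np.pi * x3**2 * x4 - 4 / 3 * np.pi * x3**3 + 1296000,
         x4 - 240]
    return sum(max(0.0, gi) for gi in g)

def penalized(x, w=1e6):
    """Static-penalty fitness commonly minimized by metaheuristics."""
    return pressure_vessel_cost(x) + w * pressure_vessel_violation(x)

x_best = [0.77817, 0.38465, 40.32, 200]   # FRSA result reported in Table 11
cost = pressure_vessel_cost(x_best)       # about 5885.4
```

Evaluating the reported FRSA design reproduces the minimum cost of about 5885.4 with an essentially zero constraint violation.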
The FRSA and the ten other advanced algorithms were applied to the pressure vessel design problem. The minimum production costs obtained by the 11 algorithms are shown in Table 11. According to Table 11, the result obtained by the FRSA is x = {0.77817, 0.38465, 40.32, 200}, with a minimum cost of 5885.4, the best result among all 11 algorithms. To better demonstrate the optimization process of the 11 algorithms on the pressure vessel design problem, Figure 15 shows the convergence curves of the 11 algorithms, including the FRSA, together with the change trends of each variable, reflecting the differences among the parameters during multi-parameter design. To verify the robustness of the algorithm on this problem, statistical analysis was also conducted, and the relevant data are shown in Table 12. The unit of time is seconds per experiment, that is, the average running time of each algorithm in a single run. In the Wilcoxon rank-sum test, the FRSA achieved a result of 9/1/0 against the other algorithms. The convergence curves and statistical analysis show that the FRSA converged faster, achieved higher accuracy, and had clear advantages over the other algorithms.
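The Wilcoxon rank-sum tally (e.g., 9/1/0) is obtained by testing the FRSA's independent runs against each competitor's runs in turn. A minimal, self-contained sketch of the test under the normal approximation (the illustrative data, the 1.96 critical value, and the win/tie/loss labels are our assumptions; in practice a statistics library would be used):

```python
import math

def rank_sum_z(a, b):
    """Wilcoxon rank-sum z statistic for samples a and b (normal
    approximation, average ranks for ties, no tie correction)."""
    pooled = sorted(a + b)
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    n1, n2 = len(a), len(b)
    r1 = sum(rank[v] for v in a)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (r1 - mu) / sigma

def compare(frsa_runs, other_runs, z_crit=1.96):
    """'+' if FRSA is significantly better (lower, for minimization),
    '-' if significantly worse, '=' if no significant difference."""
    z = rank_sum_z(frsa_runs, other_runs)
    if abs(z) < z_crit:
        return "="
    return "+" if z < 0 else "-"

# Illustrative: FRSA runs cluster lower than the competitor's runs.
print(compare([1.0] * 30, [2.0] * 30))  # +
```

Repeating `compare` once per competitor and counting the "+", "=", and "-" outcomes yields a tally of the 9/1/0 form reported in the text.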

5.2. Corrugated Bulkhead Design

A corrugated bulkhead is made from a pressed steel plate that is then bent so that it replaces the function of a stiffener. In the corrugated bulkhead design problem, the minimum weight is sought under six constraints. The problem has four variables: the width ($x_1$), depth ($x_2$), length ($x_3$), and plate thickness ($x_4$). The mathematical model of the corrugated bulkhead design is as follows:
$$
\begin{aligned}
\min\ f(x) ={} & \frac{5.885\,x_4 (x_1 + x_3)}{x_1 + \sqrt{\left| x_3^2 - x_2^2 \right|}} \\
\text{subject to}\quad & g_1(x) = -x_4 x_2 \left(0.4\,x_1 + \tfrac{x_3}{6}\right) + 8.94\left(x_1 + \sqrt{\left| x_3^2 - x_2^2 \right|}\right) \le 0 \\
& g_2(x) = -x_4 x_2^2 \left(0.2\,x_1 + \tfrac{x_3}{12}\right) + 2.2\left(8.94\left(x_1 + \sqrt{\left| x_3^2 - x_2^2 \right|}\right)\right)^{4/3} \le 0 \\
& g_3(x) = -x_4 + 0.0156\,x_1 + 0.15 \le 0 \\
& g_4(x) = -x_4 + 0.0156\,x_3 + 0.15 \le 0 \\
& g_5(x) = -x_4 + 1.05 \le 0 \\
& g_6(x) = -x_3 + x_2 \le 0 \\
\text{where}\quad & 0 \le x_1, x_2, x_3 \le 100,\quad 0 \le x_4 \le 5
\end{aligned}
$$
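The reported optimum can be checked by evaluating the weight objective directly (a minimal Python sketch of the standard weight function for this problem; the function name is ours):

```python
import math

def bulkhead_weight(x):
    """Corrugated bulkhead weight f(x) = 5.885*x4*(x1+x3) / (x1 + sqrt(|x3^2 - x2^2|))
    for x = (width, depth, length, thickness)."""
    x1, x2, x3, x4 = x
    return 5.885 * x4 * (x1 + x3) / (x1 + math.sqrt(abs(x3**2 - x2**2)))

# The FRSA design reported in Table 13 reproduces the stated minimum weight:
print(round(bulkhead_weight((57.692, 34.148, 57.692, 1.05)), 3))  # 6.843
```

Evaluating the objective at the reported design recovers the 6.8430 weight quoted below, which supports the reconstruction of the objective as a ratio with a square-root term.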
The FRSA and the ten other advanced algorithms were applied to the corrugated bulkhead design problem. The design values obtained by the 11 algorithms are shown in Table 13. According to Table 13, the result obtained by the FRSA is x = {57.692, 34.148, 57.692, 1.05}, with a minimum weight of 6.8430, the best result among all 11 algorithms. To better demonstrate the optimization process of the 11 algorithms on the corrugated bulkhead design problem, Figure 16 shows the convergence curves of the 11 algorithms, including the FRSA, together with the change trends of each variable, reflecting the differences among the parameters during multi-parameter design. To verify the robustness of the algorithm on this problem, statistical analysis was also conducted, and the relevant results are shown in Table 14. In the Wilcoxon rank-sum test, the FRSA achieved a result of 9/0/1 against the other algorithms. The convergence curves and statistical analysis show that the FRSA converged faster, achieved higher accuracy, and had clear advantages over the other algorithms.

5.3. Welded Beam Design

In material mechanics, a welded beam is a simplified model adopted for convenience of calculation and analysis: one end of the cantilever beam is a fixed support, and the other end is free. This is a structural engineering design problem concerning the weight optimization of a square-section cantilever beam composed of five hollow blocks of constant thickness. The mathematical description of the design problem is as follows:
$$
\begin{aligned}
\min\ f(x) ={} & 0.0624\,(x_1 + x_2 + x_3 + x_4 + x_5) \\
\text{subject to}\quad & g_1(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0 \\
\text{where}\quad & 0.01 \le x_1, x_2, x_3, x_4, x_5 \le 100
\end{aligned}
$$
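Reading the single constraint in its standard literature form, $g_1 = 61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 - 1 \le 0$, the model can be sketched directly (a minimal illustration; the test point below is our own choice of a feasible design, not a result reported in the paper):

```python
def beam_weight(x):
    """Objective f(x) = 0.0624 * sum(x) of the stated model."""
    return 0.0624 * sum(x)

def beam_feasible(x):
    """True when the single constraint g1(x) <= 0 holds."""
    x1, x2, x3, x4, x5 = x
    g1 = 61 / x1**3 + 37 / x2**3 + 19 / x3**3 + 7 / x4**3 + 1 / x5**3 - 1
    return g1 <= 0

# An illustrative feasible point (our choice, not a reported optimum):
x = (6.1, 5.4, 4.6, 3.6, 2.2)
print(beam_feasible(x), round(beam_weight(x), 4))  # True 1.3666
```

Any candidate produced by a meta-heuristic can be screened this way: compute `beam_weight` only for points where `beam_feasible` holds, or add a penalty term proportional to the constraint violation.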
The FRSA and the ten other advanced algorithms were applied to the welded beam design problem. The design values obtained by the 11 algorithms are shown in Table 15. According to Table 15, the result obtained by the FRSA is x = {0.20573, 3.4705, 9.0366, 0.20573, 1.7249}, the best result among all 11 algorithms. To better demonstrate the optimization process of the 11 algorithms on the welded beam design problem, Figure 17 shows the convergence curves of the 11 algorithms, including the FRSA, together with the change trends of each variable, reflecting the differences among the parameters during multi-parameter design. To verify the robustness of the algorithm on this problem, statistical analysis was also conducted, and the relevant results are shown in Table 16. In the Wilcoxon rank-sum test, the FRSA achieved a result of 9/1/0 against the other algorithms. The convergence curves and statistical analysis show that the FRSA converged faster, achieved higher accuracy, and had clear advantages over the other algorithms.

6. Conclusions and Future Work

To improve the global optimization ability of the RSA, and inspired by the different search horizons that natural creatures gain at different flight heights, this paper proposes a reptile search algorithm considering different flight heights (FRSA) based on the original RSA. In the exploration phase, introducing the different flight-altitude abilities of two animals, the northern goshawk and the African vulture, gives the reptiles better search horizons, improves their global search ability, and reduces the probability of falling into local optima. In the exploitation phase, a new dynamic factor (DF) is proposed to improve the algorithm's convergence speed and optimization accuracy. To evaluate the effectiveness of the proposed FRSA, 33 benchmark functions were used for testing, including 13 non-fixed-dimensional functions and 20 fixed-dimensional functions; the non-fixed-dimensional functions were tested in three different dimensions (30, 100, and 500). The experimental and statistical results indicate that the FRSA performs excellently and has clear advantages in accuracy, convergence speed, and stability over the ten most advanced algorithms. Furthermore, the FRSA was applied to three engineering optimization problems, and the results and comparisons confirm the algorithm's effectiveness in solving practical problems.
In summary, the FRSA proposed in this article offers good convergence accuracy, fast convergence speed, and strong optimization performance. The tests on fixed- and non-fixed-dimensional functions and the validation on practical optimization problems show that the proposed method adapts to a wide range of optimization problems, and its robustness has been verified. Future research will focus on extending the proposed algorithm toward multi-objective optimization and applying it to fields such as path planning and workshop scheduling, so that it can deliver greater value in practice.

Author Contributions

Conceptualization, L.Y., T.Z. and D.T.; methodology, P.Y. and L.Y.; software, P.Y. and L.Y.; writing—original draft, L.Y.; writing—review & editing, L.Y., T.Z. and D.T.; data curation, G.L. and J.Y.; visualization, G.L. and J.Y.; supervision, T.Z. and D.T.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

Guizhou Provincial Science and Technology Projects (Grant No. Qiankehejichu-ZK [2022] General 320), the Growth Project for Young Scientific and Technological Talents in General Colleges and Universities of Guizhou Province (Grant No. Qianjiaohe KY [2022]167), the National Natural Science Foundation (Grant No. 52242703, 72061006), and the Academic New Seedling Foundation Project of Guizhou Normal University (Grant No. Qianshixinmiao-[2021]A30).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barkhoda, W.; Sheikhi, H. Immigrant imperialist competitive algorithm to solve the multi-constraint node placement problem in target-based wireless sensor networks. Ad Hoc Netw. 2020, 106, 102183. [Google Scholar] [CrossRef]
  2. Fu, Z.; Wu, Y.; Liu, X. A tensor-based deep LSTM forecasting model capturing the intrinsic connection in multivariate time series. Appl. Intell. 2022, 53, 15873–15888. [Google Scholar] [CrossRef]
  3. Liao, C.; Shi, K.; Zhao, X. Predicting the extreme loads in power production of large wind turbines using an improved PSO algorithm. Appl. Sci. 2019, 9, 521. [Google Scholar] [CrossRef] [Green Version]
  4. Wei, J.; Huang, H.; Yao, L.; Hu, Y.; Fan, Q.; Huang, D. New imbalanced bearing fault diagnosis method based on Sample-characteristic Oversampling TechniquE (SCOTE) and multi-class LS-SVM. Appl. Soft Comput. 2021, 101, 107043. [Google Scholar] [CrossRef]
  5. Shi, J.; Zhang, G.; Sha, J. Jointly pricing and ordering for a multi-product multi-constraint newsvendor problem with supplier quantity discounts. Appl. Math. Model. 2011, 35, 3001–3011. [Google Scholar] [CrossRef]
  6. Wu, Y.; Fu, Z.; Liu, X.; Bing, Y. A hybrid stock market prediction model based on GNG and reinforcement learning. Expert Syst. Appl. 2023, 228, 120474. [Google Scholar] [CrossRef]
  7. Sadollah, A.; Choi, Y.; Kim, J.H. Metaheuristic optimization algorithms for approximate solutions to ordinary differential equations. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 792–798. [Google Scholar]
  8. Mahdavi, S.; Shiri, M.E.; Rahnamayan, S. Metaheuristics in large-scale global continues optimization: A survey. Inf. Sci. 2015, 295, 407–428. [Google Scholar] [CrossRef]
  9. Yang, X.-S. Nature-inspired optimization algorithms: Challenges and open problems. J. Comput. Sci. 2020, 46, 101104. [Google Scholar] [CrossRef] [Green Version]
  10. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  12. Chou, J.-S.; Nguyen, N.-M. FBI inspired meta-optimization. Appl. Soft Comput. 2020, 93, 106339. [Google Scholar] [CrossRef]
  13. Askari, Q.; Younas, I.; Saeed, M. Political Optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 2020, 195, 105709. [Google Scholar] [CrossRef]
  14. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184. [Google Scholar] [CrossRef]
  15. Abedinpourshotorban, H.; Mariyam Shamsuddin, S.; Beheshti, Z.; Jawawi, D.N.A. Electromagnetic field optimization: A physics-inspired metaheuristic optimization algorithm. Swarm Evol. Comput. 2016, 26, 8–22. [Google Scholar] [CrossRef]
  16. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  17. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  18. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  19. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  20. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  21. Fan, Q.; Huang, H.; Li, Y.; Han, Z.; Hu, Y.; Huang, D. Beetle antenna strategy based grey wolf optimization. Expert Syst. Appl. 2021, 165, 113882. [Google Scholar] [CrossRef]
  22. Ma, C.; Huang, H.; Fan, Q.; Wei, J.; Du, Y.; Gao, W. Grey wolf optimizer based on Aquila exploration method. Expert Syst. Appl. 2022, 205, 117629. [Google Scholar] [CrossRef]
  23. Yuan, P.; Zhang, T.; Yao, L.; Lu, Y.; Zhuang, W. A Hybrid Golden Jackal Optimization and Golden Sine Algorithm with Dynamic Lens-Imaging Learning for Global Optimization Problems. Appl. Sci. 2022, 12, 9709. [Google Scholar] [CrossRef]
  24. Yao, L.; Yuan, P.; Tsai, C.-Y.; Zhang, T.; Lu, Y.; Ding, S. ESO: An enhanced snake optimizer for real-world engineering problems. Expert Syst. Appl. 2023, 230, 120594. [Google Scholar] [CrossRef]
  25. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  26. Ervural, B.; Hakli, H. A binary reptile search algorithm based on transfer functions with a new stochastic repair method for 0–1 knapsack problems. Comput. Ind. Eng. 2023, 178, 109080. [Google Scholar] [CrossRef]
  27. Emam, M.M.; Houssein, E.H.; Ghoniem, R.M. A modified reptile search algorithm for global optimization and image segmentation: Case study brain MRI images. Comput. Biol. Med. 2023, 152, 106404. [Google Scholar] [CrossRef] [PubMed]
  28. Xiong, J.; Peng, T.; Tao, Z.; Zhang, C.; Song, S.; Nazir, M.S. A dual-scale deep learning model based on ELM-BiLSTM and improved reptile search algorithm for wind power prediction. Energy 2023, 266, 126419. [Google Scholar] [CrossRef]
  29. Elkholy, M.; Elymany, M.; Yona, A.; Senjyu, T.; Takahashi, H.; Lotfy, M.E. Experimental validation of an AI-embedded FPGA-based Real-Time smart energy management system using Multi-Objective Reptile search algorithm and gorilla troops optimizer. Energy Convers. 2023, 282, 116860. [Google Scholar] [CrossRef]
  30. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  31. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Northern goshawk optimization: A new swarm-based algorithm for solving optimization problems. IEEE Access 2021, 9, 162059–162080. [Google Scholar] [CrossRef]
  32. Bansal, S. Performance comparison of five metaheuristic nature-inspired algorithms to find near-OGRs for WDM systems. Artif. Intell. Rev. 2020, 53, 5589–5635. [Google Scholar] [CrossRef]
  33. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  34. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  35. Ziyu, T.; Dingxue, Z. A modified particle swarm optimization with an adaptive acceleration coefficients. In Proceedings of the 2009 Asia-Pacific Conference on Information Processing, Shenzhen, China, 18–19 July 2009; pp. 330–332. [Google Scholar]
  36. Komathi, C.; Umamaheswari, M. Design of gray wolf optimizer algorithm-based fractional order PI controller for power factor correction in SMPS applications. IEEE Trans. Power Electron. 2019, 35, 2100–2118. [Google Scholar] [CrossRef]
  37. Fan, Q.; Huang, H.; Chen, Q.; Yao, L.; Yang, K.; Huang, D. A modified self-adaptive marine predators algorithm: Framework and engineering applications. Eng. Comput. 2021, 38, 3269–3294. [Google Scholar] [CrossRef]
  38. Abualigah, L.; Almotairi, K.H.; Al-qaness, M.A.A.; Ewees, A.A.; Yousri, D.; Elaziz, M.A.; Nadimi-Shahraki, M.H. Efficient text document clustering approach using multi-search Arithmetic Optimization Algorithm. Knowl.-Based Syst. 2022, 248, 108833. [Google Scholar] [CrossRef]
  39. Rosner, B.; Glynn, R.J.; Ting Lee, M.L. Incorporation of clustering effects for the Wilcoxon rank sum test: A large-sample approach. Biometrics 2003, 59, 1089–1098. [Google Scholar] [CrossRef]
  40. Yang, X.-S.; Huyck, C.R.; Karamanoğlu, M.; Khan, N. True global optimality of the pressure vessel design problem: A benchmark for bio-inspired optimisation algorithms. Int. J. Bio-Inspired Comput. 2014, 5, 329–335. [Google Scholar] [CrossRef]
  41. Braik, M.S. Chameleon Swarm Algorithm: A bio-inspired optimizer for solving engineering design problems. Expert Syst. Appl. 2021, 174, 114685. [Google Scholar] [CrossRef]
  42. Ravindran, A.R.; Ragsdell, K.M.; Reklaitis, G.V. Engineering Optimization: Methods and Applications; Wiley: Hoboken, NJ, USA, 1983. [Google Scholar]
  43. Bayzidi, H.; Talatahari, S.; Saraee, M.; Lamarche, C.-P. Social Network Search for Solving Engineering Optimization Problems. Comput. Intell. Neurosci. 2021, 2021, 8548639. [Google Scholar] [CrossRef]
  44. Coello Coello, C.A. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
Figure 1. The dynamic factor graph for 500 iterations.
Figure 2. Cooperative hunting mode of FRSA.
Figure 3. Flowchart of FRSA.
Figure 4. The convergence curves of the 11 algorithms with Dim = 30.
Figure 5. Boxplot analysis of classic functions (F1−F13) with Dim = 30.
Figure 6. The convergence curves of the 11 algorithms with Dim = 100.
Figure 7. Boxplot analysis of classic functions (F1−F13) with Dim = 100.
Figure 8. The convergence curves of the 11 algorithms with Dim = 500.
Figure 9. Boxplot analysis of classic functions (F1−F13) with Dim = 500.
Figure 10. The convergence curves of the 11 algorithms with fixed dimensions.
Figure 11. Boxplot analysis of classic functions (F14−F23) with fixed dimensions.
Figure 12. Convergence analysis between RSA and FRSA.
Figure 13. The convergence curves of the 11 algorithms on CEC 2019 functions.
Figure 14. Boxplot analysis of CEC2019 benchmark functions.
Figure 15. The convergence curves of 11 algorithms for the pressure vessel design problem.
Figure 16. The convergence curves of 11 algorithms for the corrugated bulkhead design problem.
Figure 17. Convergence curves for the welded beam design problem.
Table 1. The classic function set.

Function | Dim | Range | $F_{min}$ | Type
$f_1(x)=\sum_{i=1}^{n}x_i^2$ | 30, 100, 500 | [−100, 100] | 0 | Unimodal
$f_2(x)=\sum_{i=1}^{n}\left|x_i\right|+\prod_{i=1}^{n}\left|x_i\right|$ | 30, 100, 500 | [−1.28, 1.28] | 0 | Unimodal
$f_3(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 30, 100, 500 | [−100, 100] | 0 | Unimodal
$f_4(x)=\max_i\left\{\left|x_i\right|,\ 1\le i\le n\right\}$ | 30, 100, 500 | [−100, 100] | 0 | Unimodal
$f_5(x)=\sum_{i=1}^{n-1}\left[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\right]$ | 30, 100, 500 | [−30, 30] | 0 | Unimodal
$f_6(x)=\sum_{i=1}^{n}\left(\left[x_i+0.5\right]\right)^2$ | 30, 100, 500 | [−100, 100] | 0 | Unimodal
$f_7(x)=\sum_{i=1}^{n}ix_i^4+\mathrm{random}[0,1)$ | 30, 100, 500 | [−1.28, 1.28] | 0 | Unimodal
$f_8(x)=\sum_{i=1}^{n}-x_i\sin\left(\sqrt{\left|x_i\right|}\right)$ | 30, 100, 500 | [−500, 500] | −418.9829 × n | Multimodal
$f_9(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30, 100, 500 | [−5.12, 5.12] | 0 | Multimodal
$f_{10}(x)=-20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 30, 100, 500 | [−32, 32] | 0 | Multimodal
$f_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 30, 100, 500 | [−600, 600] | 0 | Multimodal
$f_{12}(x)=\tfrac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m,&x_i>a\\0,&-a\le x_i\le a\\k(-x_i-a)^m,&x_i<-a\end{cases}$ | 30, 100, 500 | [−50, 50] | 0 | Multimodal
$f_{13}(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30, 100, 500 | [−50, 50] | 0 | Multimodal
$f_{14}(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65.536, 65.536] | 1 | Multimodal
$f_{15}(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.0003 | Multimodal
$f_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 | Multimodal
$f_{17}(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398 | Multimodal
$f_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3 | Multimodal
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [0, 1] | −3.86 | Multimodal
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32 | Multimodal
$f_{21}(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532 | Multimodal
$f_{22}(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4029 | Multimodal
$f_{23}(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5364 | Multimodal
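Two of the functions in Table 1 can be sketched directly as a sanity check; for example, $f_9$ (Rastrigin) and $f_{10}$ (Ackley) both attain their global minimum of 0 at the origin (a minimal Python sketch; function names follow the table):

```python
import math

def f9(x):
    """Rastrigin: sum(x_i^2 - 10*cos(2*pi*x_i) + 10)."""
    return sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def f10(x):
    """Ackley: -20*exp(-0.2*sqrt(mean(x^2))) - exp(mean(cos(2*pi*x))) + 20 + e."""
    n = len(x)
    s1 = sum(xi**2 for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e

origin = [0.0] * 30
print(f9(origin), abs(f10(origin)) < 1e-12)  # 0.0 True
```

Note that in floating-point arithmetic the Ackley value at (or very near) the origin is only zero up to rounding error, which is why values on the order of 10^−16 appear as the best attainable results in the tables above.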
Table 2. The CEC 2019 set.

No. | Function | Dim | Range | $F_i^* = F_i(X^*)$
F1 | Storn's Chebyshev Polynomial Fitting Problem | 9 | [−8192, 8192] | 1
F2 | Inverse Hilbert Matrix Problem | 16 | [−16,384, 16,384] | 1
F3 | Lennard–Jones Minimum Energy Cluster | 18 | [−4, 4] | 1
F4 | Rastrigin's Function | 10 | [−100, 100] | 1
F5 | Griewangk's Function | 10 | [−100, 100] | 1
F6 | Weierstrass Function | 10 | [−100, 100] | 1
F7 | Modified Schwefel's Function | 10 | [−100, 100] | 1
F8 | Expanded Schaffer's F6 Function | 10 | [−100, 100] | 1
F9 | Happy Cat Function | 10 | [−100, 100] | 1
F10 | Ackley Function | 10 | [−100, 100] | 1
Table 3. Parameter settings for algorithms.

Algorithm | Parameters and Assignments
GA | $\alpha \in [0.5, 1.5]$
PSO | $c_1 = 2$, $c_2 = 2$, $W_{\min} = 0.2$, $W_{\max} = 0.9$
ACO | $\alpha = 1$, $\beta = 2$, $\rho = 0.05$
GWO | $a = 2$ (linearly decreases over iterations), $r_1 \in [0, 1]$, $r_2 \in [0, 1]$
GJO | $a = 1.5$ (linearly decreases over iterations)
SO | $a = 2$ (linearly decreases over iterations)
TACPSO | $c_1 = 2$, $c_2 = 2$, $W_{\min} = 0.2$, $W_{\max} = 0.9$
AGWO | $B = 0.8$, $a = 2$ (linearly decreases over iterations)
EGWO | $a = 2$ (linearly decreases over iterations), $r_1 \in [0, 1]$, $r_2 \in [0, 1]$
RSA | $\varepsilon = 0.1$, $\omega = 0.1$
FRSA | $\varepsilon = 0.1$, $\theta = 2.5$, $L_1 = 0.8$, $L_2 = 0.2$
Table 4. Results and comparison of 11 algorithms on 13 classic functions with Dim = 30.

F(x) | Metric | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | FRSA
F1 | Mean | 2.0706 × 10^4 | 3.3853 × 10^2 | 4.5737 × 10^−3 | 1.0329 × 10^−27 | 1.7311 × 10^−54 | 3.9891 × 10^−94 | 1.5111 × 10^−1 | 3.2767 × 10^−317 | 1.2009 × 10^−30 | 0.0000 × 10^0 | 0.0000 × 10^0
F1 | Std | 7.1489 × 10^3 | 1.6168 × 10^2 | 6.7589 × 10^−3 | 1.0808 × 10^−27 | 4.1785 × 10^−54 | 1.0339 × 10^−93 | 2.3348 × 10^−1 | 0.0000 × 10^0 | 3.8756 × 10^−30 | 0.0000 × 10^0 | 0.0000 × 10^0
F2 | Mean | 5.6471 × 10^1 | 1.7592 × 10^1 | 2.5207 × 10^−3 | 1.0724 × 10^−16 | 2.0077 × 10^−32 | 1.8981 × 10^−42 | 1.5195 × 10^0 | 6.1333 × 10^−175 | 8.6619 × 10^−20 | 0.0000 × 10^0 | 0.0000 × 10^0
F2 | Std | 9.9694 × 10^0 | 9.9392 × 10^0 | 1.9247 × 10^−3 | 8.1353 × 10^−17 | 2.6567 × 10^−32 | 7.8124 × 10^−42 | 3.0452 × 10^0 | 0.0000 × 10^0 | 2.0027 × 10^−19 | 0.0000 × 10^0 | 0.0000 × 10^0
F3 | Mean | 5.2325 × 10^4 | 8.7587 × 10^3 | 3.2509 × 10^4 | 1.0617 × 10^−5 | 8.0928 × 10^−18 | 8.5384 × 10^−56 | 1.1348 × 10^3 | 5.2178 × 10^−264 | 1.2199 × 10^−3 | 0.0000 × 10^0 | 0.0000 × 10^0
F3 | Std | 1.5868 × 10^4 | 5.3330 × 10^3 | 7.1037 × 10^3 | 2.7063 × 10^−5 | 2.6301 × 10^−17 | 3.6611 × 10^−55 | 1.1917 × 10^3 | 0.0000 × 10^0 | 4.0506 × 10^−3 | 0.0000 × 10^0 | 0.0000 × 10^0
F4 | Mean | 7.0290 × 10^1 | 1.0057 × 10^1 | 8.3925 × 10^1 | 7.6327 × 10^−7 | 5.5119 × 10^−16 | 5.6706 × 10^−40 | 9.7094 × 10^0 | 8.5240 × 10^−155 | 3.5666 × 10^−1 | 0.0000 × 10^0 | 0.0000 × 10^0
F4 | Std | 7.2884 × 10^0 | 2.6342 × 10^0 | 1.1652 × 10^1 | 8.4243 × 10^−7 | 1.3025 × 10^−15 | 1.9765 × 10^−39 | 3.4154 × 10^0 | 4.3706 × 10^−154 | 1.3297 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
F5 | Mean | 2.1143 × 10^7 | 1.3458 × 10^4 | 6.3852 × 10^2 | 2.6950 × 10^1 | 2.7744 × 10^1 | 2.0242 × 10^1 | 4.2784 × 10^2 | 2.8334 × 10^1 | 2.7928 × 10^1 | 1.7547 × 10^1 | 9.0588 × 10^−29
F5 | Std | 1.5073 × 10^7 | 9.7957 × 10^3 | 9.3899 × 10^2 | 7.1489 × 10^−1 | 7.5092 × 10^−1 | 1.1160 × 10^1 | 9.0541 × 10^2 | 3.9161 × 10^−1 | 8.8237 × 10^−1 | 1.4272 × 10^1 | 1.3586 × 10^−29
F6 | Mean | 2.2120 × 10^4 | 3.3844 × 10^2 | 2.8991 × 10^−3 | 6.9336 × 10^−1 | 2.5998 × 10^0 | 7.4686 × 10^−1 | 2.8608 × 10^−1 | 5.1108 × 10^0 | 3.1744 × 10^0 | 6.9887 × 10^0 | 9.3967 × 10^−3
F6 | Std | 8.1756 × 10^3 | 1.3189 × 10^2 | 4.3952 × 10^−3 | 3.2769 × 10^−1 | 4.5246 × 10^−1 | 7.2966 × 10^−1 | 7.1367 × 10^−1 | 3.2531 × 10^−1 | 6.9967 × 10^−1 | 4.0996 × 10^−1 | 7.3219 × 10^−3
F7 | Mean | 1.4246 × 10^1 | 1.4084 × 10^0 | 9.2893 × 10^−2 | 2.1075 × 10^−3 | 5.1434 × 10^−4 | 2.9363 × 10^−4 | 8.4275 × 10^−2 | 1.2253 × 10^−4 | 7.9773 × 10^−3 | 1.2720 × 10^−4 | 3.2019 × 10^−4
F7 | Std | 6.4862 × 10^0 | 5.8085 × 10^0 | 3.6308 × 10^−2 | 1.4913 × 10^−3 | 3.3543 × 10^−4 | 2.2856 × 10^−4 | 3.3336 × 10^−2 | 9.7020 × 10^−5 | 4.0919 × 10^−3 | 1.4087 × 10^−4 | 3.1313 × 10^−4
F8 | Mean | −2.1820 × 10^3 | −8.0517 × 10^3 | −7.2210 × 10^3 | −5.8586 × 10^3 | −4.3233 × 10^3 | −1.248 × 10^4 | −8.6030 × 10^3 | −2.7317 × 10^3 | −6.5965 × 10^3 | −5.4035 × 10^3 | −1.1553 × 10^4
F8 | Std | 4.0040 × 10^2 | 9.6639 × 10^2 | 1.0003 × 10^3 | 7.5792 × 10^2 | 1.2048 × 10^3 | 2.3899 × 10^2 | 4.6512 × 10^2 | 4.6201 × 10^2 | 7.6715 × 10^2 | 3.1866 × 10^2 | 1.6853 × 10^3
F9 | Mean | 2.5863 × 10^2 | 2.0098 × 10^2 | 2.4292 × 10^2 | 1.8876 × 10^0 | 0.0000 × 10^0 | 5.2470 × 10^0 | 7.3533 × 10^1 | 0.0000 × 10^0 | 1.5967 × 10^2 | 0.0000 × 10^0 | 0.0000 × 10^0
F9 | Std | 4.5208 × 10^1 | 2.1856 × 10^1 | 2.2226 × 10^1 | 2.5924 × 10^0 | 0.0000 × 10^0 | 1.2881 × 10^1 | 1.8901 × 10^1 | 0.0000 × 10^0 | 3.8336 × 10^1 | 0.0000 × 10^0 | 0.0000 × 10^0
F10 | Mean | 1.9867 × 10^1 | 5.3154 × 10^0 | 1.2859 × 10^1 | 1.0297 × 10^−13 | 7.2831 × 10^−15 | 2.8853 × 10^−1 | 2.2423 × 10^0 | 1.7171 × 10^−15 | 1.9107 × 10^−1 | 8.8818 × 10^−16 | 8.8818 × 10^−16
F10 | Std | 4.6960 × 10^−1 | 1.0010 × 10^0 | 9.8810 × 10^0 | 1.8565 × 10^−14 | 1.4454 × 10^−15 | 7.5143 × 10^−1 | 7.3942 × 10^−1 | 1.5283 × 10^−15 | 7.3243 × 10^−1 | 0.0000 × 10^0 | 0.0000 × 10^0
F11 | Mean | 1.8735 × 10^2 | 4.0848 × 10^0 | 1.7211 × 10^−1 | 4.9998 × 10^−3 | 0.0000 × 10^0 | 9.1944 × 10^−2 | 1.3227 × 10^−1 | 0.0000 × 10^0 | 1.1550 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0
F11 | Std | 6.6774 × 10^1 | 1.8224 × 10^0 | 2.7165 × 10^−1 | 8.7540 × 10^−3 | 0.0000 × 10^0 | 1.7896 × 10^−1 | 1.5411 × 10^−1 | 0.0000 × 10^0 | 2.1161 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0
F12 | Mean | 1.7475 × 10^7 | 5.9962 × 10^0 | 3.2016 × 10^0 | 4.7372 × 10^−2 | 2.1168 × 10^−1 | 1.2141 × 10^−1 | 1.7178 × 10^0 | 6.7014 × 10^−1 | 3.1555 × 10^0 | 1.2588 × 10^0 | 6.1299 × 10^−4
F12 | Std | 2.4583 × 10^7 | 3.0819 × 10^0 | 5.8093 × 10^0 | 3.6729 × 10^−2 | 6.8287 × 10^−2 | 2.4035 × 10^−1 | 1.6616 × 10^0 | 1.4300 × 10^−1 | 3.1014 × 10^0 | 3.4982 × 10^−1 | 4.8674 × 10^−4
F13 | Mean | 5.7420 × 10^7 | 2.8474 × 10^1 | 2.2313 × 10^0 | 6.8191 × 10^−1 | 1.7212 × 10^0 | 4.8266 × 10^−1 | 4.1897 × 10^0 | 2.5629 × 10^0 | 2.6787 × 10^0 | 4.1579 × 10^−1 | 3.8688 × 10^−31
F13 | Std | 4.5158 × 10^7 | 2.9526 × 10^1 | 5.0535 × 10^0 | 2.5619 × 10^−1 | 2.4044 × 10^−1 | 6.9409 × 10^−1 | 4.8206 × 10^0 | 8.7892 × 10^−2 | 5.8772 × 10^−1 | 8.3308 × 10^−1 | 2.0585 × 10^−31
Friedman value | | 1.0423 × 10^1 | 9.2692 × 10^0 | 8.3846 × 10^0 | 5.1538 × 10^0 | 4.6923 × 10^0 | 4.7308 × 10^0 | 7.5385 × 10^0 | 3.5385 × 10^0 | 6.8077 × 10^0 | 3.2500 × 10^0 | 2.2115 × 10^0
Friedman rank | | 11 | 10 | 9 | 6 | 4 | 5 | 8 | 3 | 7 | 2 | 1
Table 5. Results and comparison of 11 algorithms on 13 classic functions with Dim = 100.

F(x) | Metric | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | FRSA
F1 | Mean | 2.2803 × 10^5 | 4.8382 × 10^3 | 1.1718 × 10^5 | 1.2883 × 10^−12 | 7.5690 × 10^−28 | 7.3577 × 10^−82 | 6.3258 × 10^3 | 2.3337 × 10^−244 | 2.9059 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
F1 | Std | 2.6407 × 10^4 | 2.6614 × 10^3 | 1.2634 × 10^4 | 7.2714 × 10^−13 | 2.0721 × 10^−27 | 1.6214 × 10^−81 | 1.9044 × 10^3 | 0.0000 × 10^0 | 4.3702 × 10^−16 | 0.0000 × 10^0 | 0.0000 × 10^0
F2 | Mean | 1.3878 × 10^3 | 7.9551 × 10^1 | 1.0183 × 10^24 | 4.0761 × 10^−8 | 1.7716 × 10^−17 | 1.1687 × 10^−35 | 1.0765 × 10^2 | 3.5171 × 10^−127 | 2.1585 × 10^−10 | 0.0000 × 10^0 | 0.0000 × 10^0
F2 | Std | 6.1674 × 10^3 | 2.0688 × 10^1 | 4.2616 × 10^24 | 1.2357 × 10^−8 | 1.9050 × 10^−17 | 1.1341 × 10^−35 | 2.4822 × 10^1 | 1.9264 × 10^−126 | 2.7251 × 10^−10 | 0.0000 × 10^0 | 0.0000 × 10^0
F3 | Mean | 6.4151 × 10^5 | 1.2058 × 10^5 | 5.4194 × 10^5 | 5.4654 × 10^2 | 1.1960 × 10^0 | 1.9856 × 10^−27 | 7.9658 × 10^4 | 9.4366 × 10^−220 | 2.3095 × 10^4 | 0.0000 × 10^0 | 0.0000 × 10^0
F3 | Std | 1.8635 × 10^5 | 7.0068 × 10^4 | 5.8785 × 10^4 | 5.7433 × 10^2 | 5.0520 × 10^0 | 1.0876 × 10^−26 | 1.7856 × 10^4 | 0.0000 × 10^0 | 1.5626 × 10^4 | 0.0000 × 10^0 | 0.0000 × 10^0
F4 | Mean | 9.4654 × 10^1 | 2.1438 × 10^1 | 9.7253 × 10^1 | 1.3792 × 10^0 | 5.4031 × 10^0 | 1.1115 × 10^−36 | 4.4837 × 10^1 | 1.4570 × 10^−130 | 7.1629 × 10^1 | 0.0000 × 10^0 | 0.0000 × 10^0
F4 | Std | 1.8813 × 10^0 | 4.9340 × 10^0 | 1.1638 × 10^0 | 1.5201 × 10^0 | 8.6012 × 10^0 | 1.6885 × 10^−36 | 3.1827 × 10^0 | 5.4253 × 10^−130 | 8.6435 × 10^0 | 0.0000 × 10^0 | 0.0000 × 10^0
F5 | Mean | 8.5046 × 10^8 | 5.2446 × 10^5 | 1.0445 × 10^9 | 9.7690 × 10^1 | 9.8283 × 10^1 | 6.4281 × 10^1 | 3.2768 × 10^6 | 9.8749 × 10^1 | 9.8175 × 10^1 | 9.8988 × 10^1 | 3.8975 × 10^−28
F5 | Std | 1.4410 × 10^8 | 4.6652 × 10^5 | 2.9092 × 10^8 | 7.8639 × 10^−1 | 4.8343 × 10^−1 | 4.1497 × 10^1 | 2.0490 × 10^6 | 2.4079 × 10^−1 | 6.4582 × 10^−1 | 3.7169 × 10^−3 | 2.2620 × 10^−29
F6 | Mean | 2.1753 × 10^5 | 3.9481 × 10^3 | 1.1119 × 10^5 | 1.0013 × 10^1 | 1.6765 × 10^1 | 1.4058 × 10^1 | 6.3639 × 10^3 | 2.2476 × 10^1 | 1.4930 × 10^1 | 2.4607 × 10^1 | 4.1033 × 10^−2
F6 | Std | 2.1653 × 10^4 | 1.5048 × 10^3 | 1.1425 × 10^4 | 1.2537 × 10^0 | 7.1104 × 10^−1 | 1.0657 × 10^1 | 2.8579 × 10^3 | 3.1181 × 10^−1 | 1.0606 × 10^0 | 2.0760 × 10^−1 | 2.9663 × 10^−2
F7 | Mean | 1.2316 × 10^3 | 5.3044 × 10^1 | 8.4073 × 10^2 | 7.4525 × 10^−3 | 1.2061 × 10^−3 | 2.2315 × 10^−4 | 8.4554 × 10^0 | 2.5098 × 10^−4 | 2.9401 × 10^−2 | 1.1850 × 10^−4 | 2.4755 × 10^−4
F7 | Std | 2.2446 × 10^2 | 8.5484 × 10^1 | 3.3353 × 10^2 | 2.8388 × 10^−3 | 5.1273 × 10^−4 | 2.4369 × 10^−4 | 4.9534 × 10^0 | 2.3744 × 10^−4 | 1.2773 × 10^−2 | 9.1632 × 10^−5 | 2.2133 × 10^−4
F8 | Mean | −4.1683 × 10^3 | −1.5010 × 10^4 | −1.5812 × 10^4 | −1.6026 × 10^4 | −9.1616 × 10^3 | −4.1583 × 10^4 | −2.2513 × 10^4 | −5.0509 × 10^3 | −1.7702 × 10^4 | −1.7056 × 10^4 | −3.6466 × 10^4
F8 | Std | 9.7178 × 10^2 | 2.6230 × 10^3 | 2.7296 × 10^3 | 2.4537 × 10^3 | 4.2288 × 10^3 | 5.2761 × 10^2 | 1.9590 × 10^3 | 9.0114 × 10^2 | 1.4842 × 10^3 | 7.6478 × 10^2 | 7.1490 × 10^3
F9 | Mean | 1.5280 × 10^3 | 8.7473 × 10^2 | 1.3949 × 10^3 | 1.0982 × 10^1 | 1.5158 × 10^−14 | 1.4159 × 10^1 | 4.6367 × 10^2 | 0.0000 × 10^0 | 8.3312 × 10^2 | 0.0000 × 10^0 | 0.0000 × 10^0
F9 | Std | 6.4386 × 10^1 | 8.6547 × 10^1 | 4.5366 × 10^1 | 8.3224 × 10^0 | 5.7687 × 10^−14 | 3.0090 × 10^1 | 5.1780 × 10^1 | 0.0000 × 10^0 | 1.4958 × 10^2 | 0.0000 × 10^0 | 0.0000 × 10^0
F10 | Mean | 2.0786 × 10^1 | 9.0803 × 10^0 | 2.0778 × 10^1 | 1.1377 × 10^−7 | 5.0271 × 10^−14 | 4.4409 × 10^−15 | 1.2679 × 10^1 | 4.2040 × 10^−15 | 8.4006 × 10^−2 | 8.8818 × 10^−16 | 8.8818 × 10^−16
F10 | Std | 1.0176 × 10^−1 | 2.4931 × 10^0 | 4.0391 × 10^−2 | 3.5782 × 10^−8 | 9.8451 × 10^−15 | 0.0000 × 10^0 | 1.0867 × 10^0 | 9.0135 × 10^−16 | 4.6012 × 10^−1 | 0.0000 × 10^0 | 0.0000 × 10^0
F11 | Mean | 1.9914 × 10^3 | 3.5916 × 10^1 | 1.0510 × 10^3 | 5.6641 × 10^−3 | 0.0000 × 10^0 | 0.0000 × 10^0 | 5.5387 × 10^1 | 0.0000 × 10^0 | 5.0051 × 10^−3 | 0.0000 × 10^0 | 0.0000 × 10^0
F11 | Std | 2.1758 × 10^2 | 1.3366 × 10^1 | 1.1905 × 10^2 | 1.2302 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0 | 1.6313 × 10^1 | 0.0000 × 10^0 | 8.8528 × 10^−3 | 0.0000 × 10^0 | 0.0000 × 10^0
F12 | Mean | 1.7624 × 10^9 | 2.4562 × 10^3 | 3.1606 × 10^9 | 2.5960 × 10^−1 | 6.0942 × 10^−1 | 1.7964 × 10^−1 | 1.4874 × 10^5 | 1.0179 × 10^0 | 1.0922 × 10^1 | 1.2477 × 10^0 | 2.3383 × 10^−4
F12 | Std | 4.3233 × 10^8 | 1.3099 × 10^4 | 3.0994 × 10^8 | 5.0711 × 10^−2 | 7.7430 × 10^−2 | 3.7770 × 10^−1 | 4.4671 × 10^5 | 6.5885 × 10^−2 | 8.0541 × 10^0 | 8.0783 × 10^−2 | 2.0208 × 10^−4
F13 | Mean | 3.4134 × 10^9 | 4.4552 × 10^4 | 5.6359 × 10^9 | 6.8948 × 10^0 | 8.3742 × 10^0 | 2.1756 × 10^0 | 2.4624 × 10^6 | 9.6505 × 10^0 | 2.6571 × 10^1 | 9.6741 × 10^0 | 6.2822 × 10^−31
F13 | Std | 6.5988 × 10^8 | 8.5336 × 10^4 | 5.0013 × 10^8 | 4.6552 × 10^−1 | 2.3595 × 10^−1 | 3.7113 × 10^0 | 2.2076 × 10^6 | 6.1528 × 10^−2 | 3.9839 × 10^1 | 5.8643 × 10^−1 | 1.8088 × 10^−31
Friedman value | | 1.0077 × 10^1 | 8.4615 × 10^0 | 9.6923 × 10^0 | 5.4231 × 10^0 | 5.0769 × 10^0 | 3.8077 × 10^0 | 8.3077 × 10^0 | 3.5385 × 10^0 | 6.6923 × 10^0 | 2.9038 × 10^0 | 2.0192 × 10^0
Friedman rank | | 11 | 9 | 10 | 6 | 5 | 4 | 8 | 3 | 7 | 2 | 1
Table 6. Results and comparison of 11 algorithms on 13 classic functions with Dim = 500.

| F(x) | | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | FRSA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Mean | 1.5128 × 10^6 | 3.9219 × 10^4 | 1.5590 × 10^6 | 1.8644 × 10^−3 | 9.6545 × 10^−13 | 7.1375 × 10^−71 | 2.9775 × 10^5 | 1.9542 × 10^−16 | 6.1307 × 10^−6 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| | Std | 3.6434 × 10^4 | 1.3201 × 10^4 | 3.6597 × 10^4 | 7.6449 × 10^−4 | 9.7800 × 10^−13 | 2.4182 × 10^−70 | 1.9178 × 10^4 | 1.0703 × 10^−15 | 6.2289 × 10^−6 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F2 | Mean | 6.0554 × 10^226 | 4.5845 × 10^2 | 4.1585 × 10^268 | 1.0881 × 10^−2 | 6.4312 × 10^−9 | 1.2654 × 10^−31 | 6.3084 × 10^17 | 9.6613 × 10^−12 | 1.8407 × 10^−4 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| | Std | Inf | 1.2379 × 10^2 | Inf | 1.7840 × 10^−3 | 4.2103 × 10^−9 | 1.7875 × 10^−31 | 3.4548 × 10^18 | 5.2623 × 10^−11 | 1.4881 × 10^−4 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F3 | Mean | 2.1316 × 10^7 | 2.8293 × 10^6 | 1.3418 × 10^7 | 3.1425 × 10^5 | 5.1301 × 10^4 | 8.2145 × 10^2 | 2.0650 × 10^6 | 1.2415 × 10^−5 | 1.5987 × 10^6 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| | Std | 6.9901 × 10^6 | 1.4370 × 10^6 | 1.3908 × 10^6 | 7.7943 × 10^4 | 5.3267 × 10^4 | 4.4993 × 10^3 | 4.4012 × 10^5 | 4.9557 × 10^−5 | 2.9634 × 10^5 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F4 | Mean | 9.9161 × 10^1 | 3.5159 × 10^1 | 9.9451 × 10^1 | 6.5006 × 10^1 | 8.2792 × 10^1 | 1.4350 × 10^−33 | 7.0229 × 10^1 | 9.2608 × 10^1 | 9.6997 × 10^1 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| | Std | 2.3489 × 10^−1 | 5.2254 × 10^0 | 1.9881 × 10^−1 | 4.1463 × 10^0 | 4.3209 × 10^0 | 1.9441 × 10^−33 | 2.9160 × 10^0 | 2.5175 × 10^1 | 9.6498 × 10^−1 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F5 | Mean | 6.7974 × 10^9 | 8.5251 × 10^6 | 7.3446 × 10^9 | 4.9803 × 10^2 | 4.9826 × 10^2 | 3.3551 × 10^2 | 4.1692 × 10^8 | 4.9892 × 10^2 | 2.0683 × 10^5 | 4.9899 × 10^2 | 2.3167 × 10^−27 |
| | Std | 2.6217 × 10^8 | 5.6668 × 10^6 | 3.3600 × 10^8 | 3.5404 × 10^−1 | 1.4870 × 10^−1 | 2.1069 × 10^2 | 3.9772 × 10^7 | 6.8551 × 10^−2 | 3.5928 × 10^5 | 6.2629 × 10^−3 | 6.3915 × 10^−29 |
| F6 | Mean | 1.5114 × 10^6 | 3.5914 × 10^4 | 1.5648 × 10^6 | 9.1100 × 10^1 | 1.1002 × 10^2 | 6.6077 × 10^1 | 2.9553 × 10^5 | 1.2326 × 10^2 | 1.0553 × 10^2 | 1.2463 × 10^2 | 2.6280 × 10^−1 |
| | Std | 3.8311 × 10^4 | 1.7013 × 10^4 | 3.9734 × 10^4 | 1.8331 × 10^0 | 1.2181 × 10^0 | 5.6041 × 10^1 | 1.6752 × 10^4 | 4.2145 × 10^−1 | 1.6851 × 10^0 | 2.1010 × 10^−1 | 2.2051 × 10^−1 |
| F7 | Mean | 5.6877 × 10^4 | 2.2092 × 10^3 | 5.9688 × 10^4 | 5.1280 × 10^−2 | 6.5673 × 10^−3 | 2.4772 × 10^−4 | 5.5137 × 10^3 | 4.1447 × 10^−3 | 2.2992 × 10^0 | 1.6209 × 10^−4 | 2.8637 × 10^−4 |
| | Std | 1.7801 × 10^3 | 1.1383 × 10^3 | 2.5418 × 10^3 | 1.2074 × 10^−2 | 3.9213 × 10^−3 | 1.6724 × 10^−4 | 1.0697 × 10^3 | 2.4388 × 10^−3 | 2.0536 × 10^0 | 1.9236 × 10^−4 | 2.5568 × 10^−4 |
| F8 | Mean | −8.6802 × 10^3 | −3.6383 × 10^4 | −3.1971 × 10^4 | −5.3591 × 10^4 | −2.6579 × 10^4 | −2.0786 × 10^5 | −6.3227 × 10^4 | −1.0502 × 10^4 | −4.6206 × 10^4 | −6.1323 × 10^4 | −1.8676 × 10^5 |
| | Std | 1.6314 × 10^3 | 5.2307 × 10^3 | 6.0899 × 10^3 | 1.3793 × 10^4 | 1.4002 × 10^4 | 3.1591 × 10^3 | 2.8055 × 10^3 | 1.2858 × 10^3 | 2.9204 × 10^3 | 5.3327 × 10^3 | 3.2489 × 10^4 |
| F9 | Mean | 8.6929 × 10^3 | 4.4339 × 10^3 | 8.8775 × 10^3 | 7.5607 × 10^1 | 7.3063 × 10^−12 | 7.3183 × 10^0 | 4.4337 × 10^3 | 3.0028 × 10^−10 | 5.0232 × 10^3 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| | Std | 1.2985 × 10^2 | 5.4177 × 10^2 | 1.0714 × 10^2 | 2.1468 × 10^1 | 2.8809 × 10^−12 | 3.9910 × 10^1 | 1.3537 × 10^2 | 1.6447 × 10^−9 | 1.1423 × 10^3 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F10 | Mean | 2.1105 × 10^1 | 1.1398 × 10^1 | 2.1018 × 10^1 | 1.8561 × 10^−3 | 3.1785 × 10^−8 | 4.9146 × 10^−15 | 1.8242 × 10^1 | 2.6645 × 10^−15 | 1.5959 × 10^−4 | 8.8818 × 10^−16 | 8.8818 × 10^−16 |
| | Std | 2.8935 × 10^−2 | 3.5690 × 10^0 | 1.0087 × 10^−2 | 3.7330 × 10^−4 | 1.5956 × 10^−8 | 1.2283 × 10^−15 | 1.9259 × 10^−1 | 1.8067 × 10^−15 | 9.5884 × 10^−5 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F11 | Mean | 1.3426 × 10^4 | 3.0914 × 10^2 | 1.4057 × 10^4 | 2.0278 × 10^−2 | 1.0707 × 10^−13 | 0.0000 × 10^0 | 2.6951 × 10^3 | 0.0000 × 10^0 | 1.6181 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| | Std | 3.5235 × 10^2 | 9.0526 × 10^1 | 3.4112 × 10^2 | 4.1331 × 10^−2 | 9.3215 × 10^−14 | 0.0000 × 10^0 | 1.5936 × 10^2 | 0.0000 × 10^0 | 3.4848 × 10^−2 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F12 | Mean | 1.7068 × 10^10 | 2.0914 × 10^5 | 1.8490 × 10^10 | 7.6115 × 10^−1 | 9.3858 × 10^−1 | 5.1918 × 10^−2 | 3.4826 × 10^8 | 1.1681 × 10^0 | 5.9375 × 10^7 | 1.2016 × 10^0 | 2.1631 × 10^−4 |
| | Std | 7.0596 × 10^8 | 3.1495 × 10^5 | 6.2478 × 10^8 | 7.5436 × 10^−2 | 2.6207 × 10^−2 | 2.0980 × 10^−1 | 7.1676 × 10^7 | 1.0173 × 10^−2 | 6.2911 × 10^7 | 2.9273 × 10^−3 | 2.1681 × 10^−4 |
| F13 | Mean | 3.1399 × 10^10 | 6.4059 × 10^6 | 3.3112 × 10^10 | 5.0441 × 10^1 | 4.7911 × 10^1 | 7.2088 × 10^0 | 1.1450 × 10^9 | 4.9797 × 10^1 | 1.1536 × 10^7 | 4.9921 × 10^1 | 2.0996 × 10^−30 |
| | Std | 1.3772 × 10^9 | 8.1295 × 10^6 | 1.3777 × 10^9 | 1.4970 × 10^0 | 3.2400 × 10^−1 | 1.4763 × 10^1 | 1.5104 × 10^8 | 4.1931 × 10^−2 | 1.7631 × 10^7 | 3.9674 × 10^−2 | 9.4926 × 10^−32 |
| Friedman value | | 9.6346 × 10^0 | 8.0385 × 10^0 | 9.9808 × 10^0 | 5.9231 × 10^0 | 5.1154 × 10^0 | 3.5000 × 10^0 | 8.1538 × 10^0 | 4.3077 × 10^0 | 6.7692 × 10^0 | 2.6154 × 10^0 | 1.9615 × 10^0 |
| Friedman rank | | 10 | 8 | 11 | 6 | 5 | 3 | 9 | 4 | 7 | 2 | 1 |
Table 7. Results and comparison of 11 algorithms on 10 classic functions with fixed dimensions.

| F(x) | | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | FRSA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F14 | Mean | 1.1036 × 10^0 | 9.9800 × 10^−1 | 2.8537 × 10^0 | 5.0796 × 10^0 | 5.3036 × 10^0 | 1.0022 × 10^0 | 1.0311 × 10^0 | 6.4801 × 10^0 | 7.7381 × 10^0 | 4.1376 × 10^0 | 9.9823 × 10^−1 |
| | Std | 3.3201 × 10^−1 | 2.1481 × 10^−10 | 3.8575 × 10^0 | 4.1695 × 10^0 | 4.4384 × 10^0 | 2.0981 × 10^−2 | 1.8148 × 10^−1 | 4.3221 × 10^0 | 4.4611 × 10^0 | 3.1646 × 10^0 | 1.2224 × 10^−3 |
| F15 | Mean | 1.3902 × 10^−2 | 1.0272 × 10^−2 | 5.3931 × 10^−3 | 3.0739 × 10^−3 | 8.5798 × 10^−4 | 6.0445 × 10^−4 | 5.2544 × 10^−4 | 1.4132 × 10^−3 | 1.0979 × 10^−2 | 1.7245 × 10^−3 | 4.1525 × 10^−4 |
| | Std | 1.0043 × 10^−2 | 1.0209 × 10^−2 | 8.4021 × 10^−3 | 6.8994 × 10^−3 | 2.0507 × 10^−3 | 3.3346 × 10^−4 | 4.1271 × 10^−4 | 2.7639 × 10^−3 | 2.3937 × 10^−2 | 1.4282 × 10^−3 | 8.1372 × 10^−5 |
| F16 | Mean | −9.4538 × 10^−1 | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0316 × 10^0 | −1.0306 × 10^0 | −1.0316 × 10^0 | −1.0305 × 10^0 | −1.0316 × 10^0 |
| | Std | 1.1796 × 10^−1 | 1.5212 × 10^−5 | 6.7752 × 10^−16 | 1.8976 × 10^−8 | 2.5177 × 10^−7 | 5.2964 × 10^−16 | 5.9036 × 10^−16 | 5.7742 × 10^−3 | 5.6187 × 10^−9 | 1.4232 × 10^−3 | 1.8373 × 10^−13 |
| F17 | Mean | 4.0005 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9789 × 10^−1 | 3.9794 × 10^−1 | 3.9789 × 10^−1 | 4.1970 × 10^−1 | 3.9789 × 10^−1 |
| | Std | 4.0846 × 10^−3 | 1.4541 × 10^−5 | 0.0000 × 10^0 | 7.2876 × 10^−7 | 9.0667 × 10^−6 | 0.0000 × 10^0 | 0.0000 × 10^0 | 5.3183 × 10^−5 | 5.9598 × 10^−7 | 2.4368 × 10^−2 | 0.0000 × 10^0 |
| F18 | Mean | 1.0596 × 10^1 | 3.0002 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0 | 3.0000 × 10^0 | 3.9001 × 10^0 | 4.0014 × 10^0 | 3.0000 × 10^0 |
| | Std | 1.1443 × 10^1 | 2.8621 × 10^−4 | 6.6995 × 10^−16 | 4.8544 × 10^−5 | 8.5395 × 10^−6 | 2.7088 × 10^−15 | 2.1599 × 10^−15 | 1.8450 × 10^−6 | 4.9295 × 10^0 | 5.4822 × 10^0 | 3.7510 × 10^−15 |
| F19 | Mean | −3.2754 × 10^0 | −3.8614 × 10^0 | −3.8628 × 10^0 | −3.8612 × 10^0 | −3.8581 × 10^0 | −3.8370 × 10^0 | −3.8628 × 10^0 | −3.8569 × 10^0 | −3.8618 × 10^0 | −3.7992 × 10^0 | −3.8628 × 10^0 |
| | Std | 3.2324 × 10^−1 | 2.9771 × 10^−3 | 2.7101 × 10^−15 | 2.6343 × 10^−3 | 3.7740 × 10^−3 | 1.4113 × 10^−1 | 2.6117 × 10^−15 | 2.6408 × 10^−3 | 2.6029 × 10^−3 | 6.3061 × 10^−2 | 2.0748 × 10^−15 |
| F20 | Mean | −1.4764 × 10^0 | −3.0759 × 10^0 | −3.2467 × 10^0 | −3.2796 × 10^0 | −3.0914 × 10^0 | −3.2982 × 10^0 | −3.2665 × 10^0 | −3.1263 × 10^0 | −3.2177 × 10^0 | −2.7566 × 10^0 | −3.3213 × 10^0 |
| | Std | 4.8085 × 10^−1 | 1.9536 × 10^−1 | 5.8273 × 10^−2 | 6.9288 × 10^−2 | 1.3582 × 10^−1 | 4.8370 × 10^−2 | 6.0328 × 10^−2 | 1.0519 × 10^−1 | 9.9155 × 10^−2 | 3.4506 × 10^−1 | 2.7018 × 10^−3 |
| F21 | Mean | −8.5022 × 10^−1 | −9.0585 × 10^0 | −5.9936 × 10^0 | −9.0574 × 10^0 | −7.7219 × 10^0 | −1.0138 × 10^1 | −6.8143 × 10^0 | −7.3462 × 10^0 | −6.2985 × 10^0 | −5.0552 × 10^0 | −1.0105 × 10^1 |
| | Std | 5.1246 × 10^−1 | 2.0337 × 10^0 | 3.7255 × 10^0 | 2.2621 × 10^0 | 2.9320 × 10^0 | 3.4059 × 10^−2 | 3.4941 × 10^0 | 2.9488 × 10^0 | 3.1346 × 10^0 | 3.1204 × 10^−7 | 7.9343 × 10^−2 |
| F22 | Mean | −1.0336 × 10^0 | −9.0891 × 10^0 | −7.4926 × 10^0 | −1.0401 × 10^1 | −9.8499 × 10^0 | −1.0290 × 10^1 | −7.1316 × 10^0 | −8.5041 × 10^0 | −7.1293 × 10^0 | −5.0877 × 10^0 | −1.0402 × 10^1 |
| | Std | 4.4156 × 10^−1 | 2.6893 × 10^0 | 3.6556 × 10^0 | 1.2043 × 10^−3 | 1.6359 × 10^0 | 2.5749 × 10^−1 | 3.4330 × 10^0 | 2.5694 × 10^0 | 3.6624 × 10^0 | 8.0616 × 10^−7 | 4.1384 × 10^−3 |
| F23 | Mean | −1.2002 × 10^0 | −9.0372 × 10^0 | −7.2815 × 10^0 | −9.9938 × 10^0 | −9.6040 × 10^0 | −1.0469 × 10^1 | −9.4877 × 10^0 | −8.6658 × 10^0 | −6.4546 × 10^0 | −5.1314 × 10^0 | −1.0525 × 10^1 |
| | Std | 3.9772 × 10^−1 | 2.6192 × 10^0 | 3.8049 × 10^0 | 2.0583 × 10^0 | 2.4090 × 10^0 | 1.4991 × 10^−1 | 2.4300 × 10^0 | 2.5916 × 10^0 | 3.8996 × 10^0 | 1.6091 × 10^−2 | 3.3992 × 10^−2 |
| Friedman value | | 9.2000 × 10^0 | 6.4000 × 10^0 | 5.8250 × 10^0 | 5.1500 × 10^0 | 6.2000 × 10^0 | 3.4250 × 10^0 | 4.5250 × 10^0 | 7.3000 × 10^0 | 7.8500 × 10^0 | 7.8000 × 10^0 | 2.3250 × 10^0 |
| Friedman rank | | 11 | 7 | 5 | 4 | 6 | 2 | 3 | 8 | 10 | 9 | 1 |
Table 8. Statistical analysis results of Wilcoxon rank sum test of classic functions.

| F(x) | Dim | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.9346 × 10^−10 | NaN | 9/1/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| F2 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| F3 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 4.5736 × 10^−12 | NaN | 9/1/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| F4 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 9/1/0 |
| F5 | 30 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 10/0/0 |
| | 100 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 10/0/0 |
| | 500 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 10/0/0 |
| F6 | 30 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.3168 × 10^−6 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.0937 × 10^−10 | 3.1573 × 10^−5 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 9/0/1 |
| | 100 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 8.9934 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 10/0/0 |
| | 500 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 4.9752 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 10/0/0 |
| F7 | 30 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 1.2057 × 10^−10 | 1.0315 × 10^−2 | 9.8231 × 10^−1 | 3.0199 × 10^−11 | 3.3384 × 10^−11 | 3.5010 × 10^−3 | 1.7666 × 10^−3 | 7/1/2 |
| | 100 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 7.3803 × 10^−10 | 6.7350 × 10^−1 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 7.0617 × 10^−1 | 2.4157 × 10^−2 | 7/2/1 |
| | 500 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 9.8231 × 10^−1 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.4742 × 10^−10 | 4.3584 × 10^−2 | 8/1/1 |
| F8 | 30 | 3.0199 × 10^−11 | 1.2541 × 10^−7 | 2.1947 × 10^−8 | 4.1997 × 10^−10 | 7.3891 × 10^−11 | 1.3017 × 10^−3 | 3.3681 × 10^−5 | 2.2273 × 10^−9 | 3.0199 × 10^−11 | 2.6099 × 10^−10 | 9/0/1 |
| | 100 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 3.0161 × 10^−11 | 4.5146 × 10^−2 | 1.4110 × 10^−9 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.9878 × 10^−11 | 9/0/1 |
| | 500 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 4.0595 × 10^−2 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 9/0/1 |
| F9 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.1378 × 10^−12 | NaN | 1.9457 × 10^−9 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | NaN | 7/3/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.6074 × 10^−1 | 5.3750 × 10^−6 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | NaN | 7/3/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.0956 × 10^−12 | 4.1926 × 10^−2 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 3.3371 × 10^−1 | NaN | 8/2/0 |
| F10 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.1001 × 10^−12 | 1.5479 × 10^−13 | 1.2003 × 10^−13 | 1.2118 × 10^−12 | 5.3025 × 10^−13 | 5.4660 × 10^−3 | NaN | 9/1/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.0171 × 10^−12 | 1.6853 × 10^−14 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 7.1518 × 10^−13 | NaN | 9/1/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 8.6442 × 10^−14 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 9.6506 × 10^−6 | NaN | 9/1/0 |
| F11 | 30 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 2.7880 × 10^−3 | NaN | 1.3702 × 10^−3 | 1.2118 × 10^−12 | 2.9343 × 10^−5 | NaN | NaN | 8/2/0 |
| | 100 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | NaN | 1.2118 × 10^−12 | 5.8153 × 10^−9 | NaN | NaN | 6/4/0 |
| | 500 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | NaN | 7/3/0 |
| F12 | 30 | 1.5099 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.3897 × 10^−8 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 10/0/0 |
| | 100 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 6.5183 × 10^−9 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 10/0/0 |
| | 500 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 2.0338 × 10^−9 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 10/0/0 |
| F13 | 30 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 3.0029 × 10^−11 | 10/0/0 |
| | 100 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 3.0142 × 10^−11 | 10/0/0 |
| | 500 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 3.0123 × 10^−11 | 10/0/0 |
| F14 | 2 | 1.4532 × 10^−1 | 1.3853 × 10^−6 | 1.8070 × 10^−1 | 6.2828 × 10^−6 | 2.8790 × 10^−6 | 1.7486 × 10^−4 | 1.4435 × 10^−10 | 5.4485 × 10^−9 | 3.8202 × 10^−10 | 3.0199 × 10^−11 | 5/2/3 |
| F15 | 4 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0180 × 10^−11 | 8.4180 × 10^−1 | 5.5546 × 10^−2 | 6.3533 × 10^−2 | 3.9874 × 10^−4 | 6.1452 × 10^−2 | 1.6813 × 10^−4 | 3.0199 × 10^−11 | 5/4/1 |
| F16 | 2 | 1.2624 × 10^−11 | 1.2624 × 10^−11 | 7.2549 × 10^−11 | 1.2624 × 10^−11 | 1.2624 × 10^−11 | 1.3070 × 10^−2 | 1.0374 × 10^−4 | 1.2624 × 10^−11 | 1.2624 × 10^−11 | 1.2624 × 10^−11 | 7/3/0 |
| F17 | 2 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | NaN | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 7/3/0 |
| F18 | 2 | 2.9561 × 10^−11 | 2.9561 × 10^−11 | 9.1184 × 10^−12 | 2.9561 × 10^−11 | 2.9561 × 10^−11 | 1.6701 × 10^−2 | 5.1977 × 10^−7 | 2.9561 × 10^−11 | 2.9561 × 10^−11 | 2.9561 × 10^−11 | 7/0/3 |
| F19 | 3 | 1.2007 × 10^−11 | 1.2007 × 10^−11 | 3.6197 × 10^−13 | 1.2007 × 10^−11 | 1.2007 × 10^−11 | 3.7428 × 10^−5 | 1.1707 × 10^−9 | 1.2007 × 10^−11 | 1.2007 × 10^−11 | 1.2007 × 10^−11 | 7/0/3 |
| F20 | 6 | 3.0199 × 10^−11 | 1.7769 × 10^−10 | 7.2389 × 10^−2 | 4.0840 × 10^−5 | 5.4941 × 10^−11 | 8.0429 × 10^−5 | 6.5763 × 10^−1 | 9.8329 × 10^−8 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 7/2/1 |
| F21 | 4 | 3.0199 × 10^−11 | 1.6225 × 10^−1 | 3.7558 × 10^−1 | 7.9782 × 10^−2 | 4.0840 × 10^−5 | 3.4362 × 10^−5 | 1.0000 × 10^0 | 2.2780 × 10^−5 | 9.7555 × 10^−10 | 3.0199 × 10^−11 | 5/4/1 |
| F22 | 4 | 3.0199 × 10^−11 | 4.6558 × 10^−7 | 1.8361 × 10^−1 | 1.1937 × 10^−6 | 4.1997 × 10^−10 | 2.6947 × 10^−1 | 1.0000 × 10^0 | 3.3520 × 10^−8 | 3.3384 × 10^−11 | 3.0199 × 10^−11 | 7/3/0 |
| F23 | 4 | 3.0199 × 10^−11 | 1.0154 × 10^−6 | 3.7432 × 10^−1 | 7.2208 × 10^−6 | 3.0103 × 10^−7 | 3.2458 × 10^−1 | 7.7028 × 10^−6 | 2.8314 × 10^−8 | 3.3384 × 10^−11 | 3.0199 × 10^−11 | 7/2/1 |
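The pairwise comparisons behind Tables 8 and 10 can be sketched with a Wilcoxon rank-sum test on the per-run best values. The sample data below is synthetic (the paper's 30-run raw results are not reproduced here), but it illustrates the procedure: FRSA's 30 results are tested against one rival's 30 results, and p < 0.05 counts toward the win/tie/loss total. Note that for two completely separated 30-run samples the normal-approximation p-value bottoms out near 3 × 10^−11, which is why that magnitude recurs throughout the tables.

```python
# Sketch of one cell of the Wilcoxon rank-sum comparison (synthetic data).
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
frsa_runs = rng.normal(loc=0.0, scale=0.01, size=30)   # hypothetical FRSA best values
rival_runs = rng.normal(loc=1.0, scale=0.10, size=30)  # hypothetical rival best values

stat, p = ranksums(frsa_runs, rival_runs)
significant = p < 0.05
# "+" = FRSA significantly better, "-" = significantly worse, "=" = no difference
verdict = "+" if significant and frsa_runs.mean() < rival_runs.mean() else "="
print(significant, verdict)
```

Repeating this for all ten rivals on one function yields the "Total" column, e.g. 9/1/0 for nine significant wins and one tie.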
Table 9. Results and comparison of CEC 2019 benchmark functions.

| F(x) | | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | FRSA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | Mean | 8.8777 × 10^7 | 1.4936 × 10^7 | 1.0333 × 10^6 | 2.7611 × 10^4 | 7.4643 × 10^3 | 7.7800 × 10^4 | 2.1715 × 10^5 | 1.0000 × 10^0 | 4.8953 × 10^4 | 1.0000 × 10^0 | 1.0000 × 10^0 |
| | Std | 9.8039 × 10^7 | 3.0138 × 10^7 | 8.6424 × 10^5 | 8.1437 × 10^4 | 3.0692 × 10^4 | 1.4879 × 10^5 | 2.3490 × 10^5 | 0.0000 × 10^0 | 1.3274 × 10^5 | 0.0000 × 10^0 | 0.0000 × 10^0 |
| F2 | Mean | 7.7940 × 10^3 | 4.1175 × 10^3 | 2.6302 × 10^3 | 4.7378 × 10^2 | 1.6396 × 10^2 | 2.8924 × 10^2 | 3.3182 × 10^2 | 8.2891 × 10^2 | 1.6188 × 10^3 | 4.9991 × 10^0 | 4.9473 × 10^0 |
| | Std | 2.5938 × 10^3 | 2.4895 × 10^3 | 1.7641 × 10^3 | 2.2747 × 10^2 | 2.7699 × 10^2 | 1.7559 × 10^2 | 1.3204 × 10^2 | 1.7985 × 10^3 | 6.1388 × 10^2 | 5.0323 × 10^−3 | 1.0717 × 10^−1 |
| F3 | Mean | 1.1095 × 10^1 | 8.7904 × 10^0 | 5.9218 × 10^0 | 2.9330 × 10^0 | 4.4288 × 10^0 | 4.4866 × 10^0 | 2.9652 × 10^0 | 5.9654 × 10^0 | 9.0727 × 10^0 | 8.0766 × 10^0 | 4.9149 × 10^0 |
| | Std | 9.1758 × 10^−1 | 1.2335 × 10^0 | 2.1958 × 10^0 | 2.0613 × 10^0 | 2.5341 × 10^0 | 1.9873 × 10^0 | 1.8614 × 10^0 | 1.1606 × 10^0 | 1.9015 × 10^0 | 7.9195 × 10^−1 | 7.9953 × 10^−1 |
| F4 | Mean | 3.6379 × 10^1 | 4.0010 × 10^1 | 2.7100 × 10^1 | 1.9449 × 10^1 | 3.2685 × 10^1 | 2.0590 × 10^1 | 1.8148 × 10^1 | 5.8095 × 10^1 | 5.6902 × 10^1 | 8.9836 × 10^1 | 3.4525 × 10^1 |
| | Std | 1.3237 × 10^1 | 7.7174 × 10^0 | 1.1349 × 10^1 | 1.1020 × 10^1 | 1.1314 × 10^1 | 6.0431 × 10^0 | 7.9248 × 10^0 | 1.0062 × 10^1 | 2.6763 × 10^1 | 1.3727 × 10^1 | 8.6094 × 10^0 |
| F5 | Mean | 6.4731 × 10^0 | 3.9550 × 10^0 | 1.4494 × 10^0 | 2.1800 × 10^0 | 3.8695 × 10^0 | 1.1470 × 10^0 | 1.1306 × 10^0 | 1.3549 × 10^1 | 1.3395 × 10^1 | 8.1605 × 10^1 | 1.6984 × 10^0 |
| | Std | 5.5146 × 10^0 | 3.9669 × 10^0 | 2.2462 × 10^−1 | 1.1006 × 10^0 | 2.6155 × 10^0 | 1.5675 × 10^−1 | 7.2699 × 10^−2 | 7.7538 × 10^0 | 1.6464 × 10^1 | 1.8506 × 10^1 | 1.8171 × 10^−1 |
| F6 | Mean | 7.8132 × 10^0 | 6.6746 × 10^0 | 2.9025 × 10^0 | 2.7449 × 10^0 | 4.5758 × 10^0 | 3.8464 × 10^0 | 2.5324 × 10^0 | 7.6616 × 10^0 | 7.7690 × 10^0 | 1.0850 × 10^1 | 2.4455 × 10^0 |
| | Std | 1.6857 × 10^0 | 2.3143 × 10^0 | 1.3796 × 10^0 | 1.2443 × 10^0 | 1.1042 × 10^0 | 1.2605 × 10^0 | 1.2228 × 10^0 | 1.1028 × 10^0 | 2.1973 × 10^0 | 9.5171 × 10^−1 | 7.5682 × 10^−1 |
| F7 | Mean | 1.1598 × 10^3 | 1.2966 × 10^3 | 7.5342 × 10^2 | 8.1406 × 10^2 | 1.2074 × 10^3 | 6.9743 × 10^2 | 7.4926 × 10^2 | 1.5199 × 10^3 | 1.2985 × 10^3 | 1.7713 × 10^3 | 1.3839 × 10^3 |
| | Std | 3.7858 × 10^2 | 3.3549 × 10^2 | 4.9219 × 10^2 | 3.2861 × 10^2 | 4.4397 × 10^2 | 2.1649 × 10^2 | 3.1405 × 10^2 | 2.4732 × 10^2 | 3.4208 × 10^2 | 1.8725 × 10^2 | 2.8138 × 10^2 |
| F8 | Mean | 5.1737 × 10^0 | 4.5708 × 10^0 | 3.9766 × 10^0 | 3.8578 × 10^0 | 4.2642 × 10^0 | 3.9505 × 10^0 | 3.8528 × 10^0 | 4.7386 × 10^0 | 4.4702 × 10^0 | 4.8492 × 10^0 | 4.2121 × 10^0 |
| | Std | 2.7203 × 10^−1 | 3.3748 × 10^−1 | 4.1346 × 10^−1 | 4.8374 × 10^−1 | 3.3761 × 10^−1 | 3.3910 × 10^−1 | 3.0223 × 10^−1 | 2.7315 × 10^−1 | 4.2299 × 10^−1 | 2.4832 × 10^−1 | 2.6721 × 10^−1 |
| F9 | Mean | 1.4357 × 10^0 | 1.5591 × 10^0 | 1.2542 × 10^0 | 1.2314 × 10^0 | 1.2813 × 10^0 | 1.3432 × 10^0 | 1.1940 × 10^0 | 1.5505 × 10^0 | 1.3960 × 10^0 | 3.2085 × 10^0 | 1.2964 × 10^0 |
| | Std | 1.9960 × 10^−1 | 3.5904 × 10^−1 | 5.2841 × 10^−2 | 7.6053 × 10^−2 | 7.8476 × 10^−2 | 8.8253 × 10^−2 | 8.5805 × 10^−2 | 3.4225 × 10^−1 | 1.3667 × 10^−1 | 6.4471 × 10^−1 | 6.0916 × 10^−2 |
| F10 | Mean | 2.1548 × 10^1 | 2.1479 × 10^1 | 2.1494 × 10^1 | 2.1445 × 10^1 | 2.1172 × 10^1 | 2.1477 × 10^1 | 2.0500 × 10^1 | 2.1011 × 10^1 | 2.1279 × 10^1 | 2.1425 × 10^1 | 2.0393 × 10^1 |
| | Std | 1.1314 × 10^−1 | 1.5156 × 10^−1 | 1.0565 × 10^−1 | 9.7992 × 10^−2 | 1.7941 × 10^0 | 8.0458 × 10^−2 | 3.4673 × 10^0 | 1.3787 × 10^0 | 1.1003 × 10^−1 | 1.2053 × 10^−1 | 2.2190 × 10^0 |
| Friedman value | | 8.4500 × 10^0 | 8.0000 × 10^0 | 6.3000 × 10^0 | 4.7500 × 10^0 | 5.7500 × 10^0 | 4.4500 × 10^0 | 3.8500 × 10^0 | 6.5500 × 10^0 | 7.9000 × 10^0 | 6.4500 × 10^0 | 3.5500 × 10^0 |
| Friedman rank | | 11 | 10 | 6 | 4 | 5 | 3 | 2 | 8 | 9 | 7 | 1 |
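The Friedman values reported in the tables above are mean ranks: each algorithm is ranked on every test function (1 = best mean result), and the ranks are averaged over functions. The sketch below uses a small hypothetical results matrix, not the paper's data, but the mechanics are the same.

```python
# Sketch of the Friedman mean-rank computation (illustrative data only):
# rank the algorithms within each test function, then average per algorithm.
import numpy as np
from scipy.stats import rankdata

# rows = test functions, columns = algorithms (hypothetical mean results,
# smaller is better)
means = np.array([
    [2.1e5, 6.3e3, 4.1e-2],
    [1.2e3, 8.4e0, 2.4e-4],
    [1.5e3, 4.6e2, 0.0],
])

ranks = np.vstack([rankdata(row) for row in means])  # per-function ranks
friedman_values = ranks.mean(axis=0)                 # mean rank per algorithm
print(friedman_values)  # → [3. 2. 1.]: the third algorithm ranks first
```

The final "Friedman rank" row simply orders the algorithms by these mean ranks, which is how, for example, FRSA's mean rank of 3.55 in Table 9 yields overall rank 1.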
Table 10. Statistical analysis results of Wilcoxon rank sum test of CEC 2019 functions.

| F(x) | Dim | GA | PSO | ACO | GWO | GJO | SO | TACPSO | AGWO | EGWO | RSA | Total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| F1 | 9 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | 1.2118 × 10^−12 | NaN | NaN | 8/2/0 |
| F2 | 16 | 2.5206 × 10^−11 | 2.5206 × 10^−11 | 2.5206 × 10^−11 | 2.5206 × 10^−11 | 6.2862 × 10^−8 | 2.5206 × 10^−11 | 2.5206 × 10^−11 | 2.5206 × 10^−11 | 9.0983 × 10^−2 | 3.0922 × 10^−4 | 9/1/0 |
| F3 | 18 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.3386 × 10^−3 | 2.1327 × 10^−5 | 3.2651 × 10^−2 | 2.2823 × 10^−1 | 4.7445 × 10^−6 | 2.4386 × 10^−9 | 5.2640 × 10^−4 | 3.6897 × 10^−11 | 6/1/3 |
| F4 | 10 | 9.7052 × 10^−1 | 3.4029 × 10^−1 | 2.3985 × 10^−1 | 4.1127 × 10^−7 | 4.3584 × 10^−2 | 3.5201 × 10^−7 | 1.5964 × 10^−7 | 2.3885 × 10^−4 | 3.4971 × 10^−9 | 3.0199 × 10^−11 | 3/3/4 |
| F5 | 10 | 3.0199 × 10^−11 | 3.3384 × 10^−11 | 7.1988 × 10^−5 | 5.2978 × 10^−1 | 2.3768 × 10^−7 | 3.4742 × 10^−10 | 3.0199 × 10^−11 | 4.4440 × 10^−7 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 6/1/3 |
| F6 | 10 | 3.0199 × 10^−11 | 8.9934 × 10^−11 | 3.5545 × 10^−1 | 6.9522 × 10^−1 | 3.4971 × 10^−9 | 2.4327 × 10^−5 | 6.6273 × 10^−1 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 3.0199 × 10^−11 | 7/3/0 |
| F7 | 10 | 1.0315 × 10^−2 | 3.2553 × 10^−1 | 5.8587 × 10^−6 | 1.0666 × 10^−7 | 1.3732 × 10^−1 | 8.8910 × 10^−10 | 7.7725 × 10^−9 | 1.9073 × 10^−1 | 7.4827 × 10^−2 | 1.2541 × 10^−7 | 1/4/5 |
| F8 | 10 | 3.3384 × 10^−11 | 7.2951 × 10^−4 | 1.8916 × 10^−4 | 2.8389 × 10^−4 | 2.8378 × 10^−1 | 6.3772 × 10^−3 | 4.4272 × 10^−3 | 9.0688 × 10^−3 | 8.1200 × 10^−4 | 1.3289 × 10^−10 | 5/1/4 |
| F9 | 10 | 4.2259 × 10^−3 | 2.0283 × 10^−7 | 6.0971 × 10^−3 | 1.8575 × 10^−3 | 5.9969 × 10^−1 | 4.8413 × 10^−2 | 6.2828 × 10^−6 | 1.2362 × 10^−3 | 1.4110 × 10^−9 | 3.0199 × 10^−11 | 6/1/3 |
| F10 | 10 | 6.2027 × 10^−4 | 9.5207 × 10^−4 | 1.6813 × 10^−4 | 4.4272 × 10^−3 | 2.4157 × 10^−2 | 2.2360 × 10^−2 | 1.0188 × 10^−5 | 2.7548 × 10^−3 | 1.0547 × 10^−1 | 3.3874 × 10^−2 | 7/1/2 |
Table 11. Comparison results of pressure vessel design problem.

| Algorithms | x1 | x2 | x3 | x4 | Best Value |
| --- | --- | --- | --- | --- | --- |
| GA | 1.1943 × 10^0 | 5.6359 × 10^−1 | 5.6935 × 10^1 | 5.4332 × 10^1 | 7.4044 × 10^3 |
| PSO | 7.7876 × 10^−1 | 3.8637 × 10^−1 | 4.0333 × 10^1 | 2.0000 × 10^2 | 5.8969 × 10^3 |
| ACO | 7.8298 × 10^−1 | 3.8703 × 10^−1 | 4.0569 × 10^1 | 1.9656 × 10^2 | 5.8936 × 10^3 |
| GWO | 7.7826 × 10^−1 | 3.8541 × 10^−1 | 4.0323 × 10^1 | 1.9996 × 10^2 | 5.8878 × 10^3 |
| GJO | 7.8054 × 10^−1 | 3.8666 × 10^−1 | 4.0404 × 10^1 | 1.9884 × 10^2 | 5.8972 × 10^3 |
| SO | 7.7817 × 10^−1 | 3.8482 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5.8858 × 10^3 |
| TACPSO | 7.8287 × 10^−1 | 3.8697 × 10^−1 | 4.0563 × 10^1 | 1.9664 × 10^2 | 5.8934 × 10^3 |
| AGWO | 8.0092 × 10^−1 | 4.5311 × 10^−1 | 4.1339 × 10^1 | 1.8843 × 10^2 | 6.1686 × 10^3 |
| EGWO | 7.7834 × 10^−1 | 3.8642 × 10^−1 | 4.0325 × 10^1 | 1.9995 × 10^2 | 5.8915 × 10^3 |
| RSA | 1.0018 × 10^0 | 5.1922 × 10^−1 | 4.2327 × 10^1 | 1.7775 × 10^2 | 7.7528 × 10^3 |
| FRSA | 7.7817 × 10^−1 | 3.8465 × 10^−1 | 4.0320 × 10^1 | 2.0000 × 10^2 | 5.8854 × 10^3 |
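Table 11's best values can be sanity-checked against the widely used pressure vessel cost function (shell thickness x1, head thickness x2, inner radius x3, cylinder length x4); this is a minimal sketch that assumes the standard formulation of the problem and omits the four design constraints:

```python
# Standard pressure vessel cost function; evaluating it at FRSA's
# reported solution from Table 11 should reproduce its best value
# of about 5.8854e3 (constraint checks omitted in this sketch).
def pressure_vessel_cost(x1, x2, x3, x4):
    return (0.6224 * x1 * x3 * x4      # cylindrical shell material
            + 1.7781 * x2 * x3**2      # hemispherical heads
            + 3.1661 * x1**2 * x4      # longitudinal welds
            + 19.84 * x1**2 * x3)      # circumferential welds

cost = pressure_vessel_cost(0.77817, 0.38465, 40.320, 200.00)
print(cost)  # close to the 5.8854e3 reported for FRSA
```

The same check applied to RSA's solution (x1 ≈ 1.0018) reproduces its much larger cost of about 7.75 × 10^3, consistent with the table.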
Table 12. Statistical analysis of pressure vessel design problem.

| Algorithms | Best | Mean | Std | Worst | Time | p-Value |
| --- | --- | --- | --- | --- | --- | --- |
| GA | 7.4044 × 10^3 | 8.8011 × 10^3 | 8.6900 × 10^2 | 1.1360 × 10^4 | 1.7213 × 10^−1 | 3.0199 × 10^−11 + |
| PSO | 5.8969 × 10^3 | 6.4337 × 10^3 | 6.7244 × 10^2 | 7.5156 × 10^3 | 1.2070 × 10^−1 | 3.7704 × 10^−4 + |
| ACO | 5.8936 × 10^3 | 6.3715 × 10^3 | 4.8457 × 10^2 | 7.3190 × 10^3 | 5.0267 × 10^−1 | 1.4733 × 10^−7 + |
| GWO | 5.8878 × 10^3 | 6.0336 × 10^3 | 3.2292 × 10^2 | 7.2513 × 10^3 | 1.3380 × 10^−1 | 3.6322 × 10^−1 = |
| GJO | 5.8972 × 10^3 | 6.3251 × 10^3 | 5.9094 × 10^2 | 7.3194 × 10^3 | 2.1300 × 10^−1 | 2.2658 × 10^−3 + |
| SO | 5.8858 × 10^3 | 6.2189 × 10^3 | 3.3475 × 10^2 | 7.1860 × 10^3 | 1.4087 × 10^−1 | 9.2113 × 10^−5 + |
| TACPSO | 5.8934 × 10^3 | 6.3585 × 10^3 | 3.8150 × 10^2 | 7.2734 × 10^3 | 1.2773 × 10^−1 | 1.8500 × 10^−8 + |
| AGWO | 6.1686 × 10^3 | 7.2195 × 10^3 | 4.6584 × 10^2 | 7.7575 × 10^3 | 6.5110 × 10^−1 | 3.0199 × 10^−11 + |
| EGWO | 5.8915 × 10^3 | 6.3177 × 10^3 | 3.7542 × 10^2 | 7.3258 × 10^3 | 1.6837 × 10^−1 | 3.0939 × 10^−6 + |
| RSA | 7.7528 × 10^3 | 1.2201 × 10^4 | 3.2025 × 10^3 | 2.0883 × 10^4 | 3.1713 × 10^−1 | 3.0199 × 10^−11 + |
| FRSA | 5.8854 × 10^3 | 5.9418 × 10^3 | 7.0609 × 10^1 | 6.1543 × 10^3 | 4.0080 × 10^−1 | — |
Table 13. Comparison of the results for the corrugated bulkhead design problem.

| Algorithms | x1 | x2 | x3 | x4 | Best Value |
| --- | --- | --- | --- | --- | --- |
| GA | 4.9344 × 10^1 | 3.4325 × 10^1 | 5.3525 × 10^1 | 1.0744 × 10^0 | 7.1939 × 10^0 |
| PSO | 5.6734 × 10^1 | 3.4160 × 10^1 | 5.7676 × 10^1 | 1.0502 × 10^0 | 6.8516 × 10^0 |
| ACO | 5.7692 × 10^1 | 3.4148 × 10^1 | 5.7692 × 10^1 | 1.0500 × 10^0 | 6.8430 × 10^0 |
| GWO | 5.7597 × 10^1 | 3.4138 × 10^1 | 5.7631 × 10^1 | 1.0500 × 10^0 | 6.8446 × 10^0 |
| GJO | 5.7444 × 10^1 | 3.4160 × 10^1 | 5.7589 × 10^1 | 1.0502 × 10^0 | 6.8486 × 10^0 |
| SO | 5.7692 × 10^1 | 3.4148 × 10^1 | 5.7692 × 10^1 | 1.0500 × 10^0 | 6.8430 × 10^0 |
| TACPSO | 5.7692 × 10^1 | 3.4148 × 10^1 | 5.7692 × 10^1 | 1.0500 × 10^0 | 6.8430 × 10^0 |
| AGWO | 5.6150 × 10^1 | 3.4178 × 10^1 | 5.7086 × 10^1 | 1.0514 × 10^0 | 6.8776 × 10^0 |
| EGWO | 5.7645 × 10^1 | 3.4159 × 10^1 | 5.7672 × 10^1 | 1.0500 × 10^0 | 6.8444 × 10^0 |
| RSA | 1.0786 × 10^1 | 3.4025 × 10^1 | 5.0382 × 10^1 | 1.0613 × 10^0 | 7.9687 × 10^0 |
| FRSA | 5.7692 × 10^1 | 3.4148 × 10^1 | 5.7692 × 10^1 | 1.0500 × 10^0 | 6.8430 × 10^0 |
Table 14. Statistical analysis of corrugated bulkhead design problem.

| Algorithms | Best | Mean | Std | Worst | Time | p-Value |
| --- | --- | --- | --- | --- | --- | --- |
| GA | 7.1939 × 10^0 | 8.0055 × 10^0 | 6.3630 × 10^−1 | 1.0132 × 10^1 | 1.0340 × 10^−1 | 1.4157 × 10^−9 + |
| PSO | 6.8516 × 10^0 | 6.8989 × 10^0 | 3.1823 × 10^−2 | 6.9810 × 10^0 | 4.4200 × 10^−2 | 1.4157 × 10^−9 + |
| ACO | 6.8430 × 10^0 | 7.4451 × 10^0 | 8.3118 × 10^−1 | 1.0239 × 10^1 | 4.1200 × 10^−1 | 2.5585 × 10^−2 + |
| GWO | 6.8446 × 10^0 | 6.8501 × 10^0 | 5.4757 × 10^−3 | 6.8650 × 10^0 | 5.8440 × 10^−2 | 1.4157 × 10^−9 + |
| GJO | 6.8486 × 10^0 | 7.2569 × 10^0 | 6.4078 × 10^−1 | 8.2682 × 10^0 | 1.3556 × 10^−1 | 1.4157 × 10^−9 + |
| SO | 6.8430 × 10^0 | 6.8432 × 10^0 | 7.1300 × 10^−4 | 6.8460 × 10^0 | 6.1040 × 10^−2 | 1.2780 × 10^−3 + |
| TACPSO | 6.8430 × 10^0 | 6.9001 × 10^0 | 2.8554 × 10^−1 | 8.2707 × 10^0 | 4.8960 × 10^−2 | 2.1634 × 10^−8 − |
| AGWO | 6.8776 × 10^0 | 7.0434 × 10^0 | 2.5644 × 10^−1 | 8.1805 × 10^0 | 4.8984 × 10^−1 | 1.4157 × 10^−9 + |
| EGWO | 6.8444 × 10^0 | 6.9353 × 10^0 | 2.8175 × 10^−1 | 8.1632 × 10^0 | 8.8400 × 10^−2 | 1.4157 × 10^−9 + |
| RSA | 7.9687 × 10^0 | 9.1028 × 10^0 | 8.3088 × 10^−1 | 1.0716 × 10^1 | 2.1428 × 10^−1 | 1.4157 × 10^−9 + |
| FRSA | 6.8430 × 10^0 | 6.8430 × 10^0 | 1.0000 × 10^−7 | 6.8430 × 10^0 | 1.8084 × 10^−1 | — |
Table 15. Comparison of the results for the welded beam design problem.

| Algorithms | x1 | x2 | x3 | x4 | Best Value |
| --- | --- | --- | --- | --- | --- |
| GA | 1.7200 × 10^−1 | 4.7314 × 10^0 | 8.7256 × 10^0 | 2.2693 × 10^−1 | 1.9390 × 10^0 |
| PSO | 2.0560 × 10^−1 | 3.4728 × 10^0 | 9.0405 × 10^0 | 2.0588 × 10^−1 | 1.7268 × 10^0 |
| ACO | 2.0632 × 10^−1 | 3.4629 × 10^0 | 9.0235 × 10^0 | 2.0633 × 10^−1 | 1.7270 × 10^0 |
| GWO | 2.0547 × 10^−1 | 3.4781 × 10^0 | 9.0365 × 10^0 | 2.0574 × 10^−1 | 1.7256 × 10^0 |
| GJO | 2.0557 × 10^−1 | 3.4733 × 10^0 | 9.0418 × 10^0 | 2.0573 × 10^−1 | 1.7259 × 10^0 |
| SO | 2.0573 × 10^−1 | 3.4705 × 10^0 | 9.0368 × 10^0 | 2.0573 × 10^−1 | 1.7249 × 10^0 |
| TACPSO | 2.0573 × 10^−1 | 3.4705 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7249 × 10^0 |
| AGWO | 2.0261 × 10^−1 | 3.5867 × 10^0 | 9.0420 × 10^0 | 2.0573 × 10^−1 | 1.7366 × 10^0 |
| EGWO | 2.0538 × 10^−1 | 3.4793 × 10^0 | 9.0370 × 10^0 | 2.0573 × 10^−1 | 1.7256 × 10^0 |
| RSA | 2.0413 × 10^−1 | 3.3786 × 10^0 | 1.0000 × 10^1 | 2.0723 × 10^−1 | 1.8881 × 10^0 |
| FRSA | 2.0573 × 10^−1 | 3.4705 × 10^0 | 9.0366 × 10^0 | 2.0573 × 10^−1 | 1.7249 × 10^0 |
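As with the pressure vessel, Table 15's best values can be cross-checked against the widely used welded beam cost function (weld thickness x1, weld length x2, bar height x3, bar thickness x4); this sketch assumes the standard formulation and omits the stress, deflection, and buckling constraints:

```python
# Standard welded beam fabrication cost; evaluating it at the FRSA
# solution from Table 15 should reproduce its best value ~1.7249
# (constraint checks omitted in this sketch).
def welded_beam_cost(x1, x2, x3, x4):
    return (1.10471 * x1**2 * x2            # weld material cost
            + 0.04811 * x3 * x4 * (14.0 + x2))  # bar material cost

cost = welded_beam_cost(0.20573, 3.4705, 9.0366, 0.20573)
print(cost)  # close to the 1.7249 reported for FRSA, SO, and TACPSO
```

Since SO, TACPSO, and FRSA report essentially the same design vector, the identical best value of 1.7249 across those three rows is expected.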
Table 16. Statistical analysis of welded beam design problem.

| Algorithms | Best | Mean | Std | Worst | Time | p-Value |
| --- | --- | --- | --- | --- | --- | --- |
| GA | 1.9390 × 10^0 | 3.2100 × 10^0 | 9.8809 × 10^−1 | 5.6391 × 10^0 | 2.0927 × 10^−1 | 3.0199 × 10^−11 + |
| PSO | 1.7268 × 10^0 | 1.8233 × 10^0 | 2.1321 × 10^−1 | 2.4983 × 10^0 | 1.5083 × 10^−1 | 3.0199 × 10^−11 + |
| ACO | 1.7270 × 10^0 | 2.1779 × 10^0 | 4.3684 × 10^−1 | 3.7688 × 10^0 | 5.3960 × 10^−1 | 3.0199 × 10^−11 + |
| GWO | 1.7256 × 10^0 | 1.7281 × 10^0 | 2.9589 × 10^−3 | 1.7368 × 10^0 | 1.6927 × 10^−1 | 3.0199 × 10^−11 + |
| GJO | 1.7259 × 10^0 | 1.7303 × 10^0 | 4.2440 × 10^−3 | 1.7429 × 10^0 | 2.4570 × 10^−1 | 3.0199 × 10^−11 + |
| SO | 1.7249 × 10^0 | 1.7278 × 10^0 | 6.9340 × 10^−3 | 1.7533 × 10^0 | 1.7103 × 10^−1 | 1.8608 × 10^−6 + |
| TACPSO | 1.7249 × 10^0 | 1.7504 × 10^0 | 5.3535 × 10^−2 | 1.9215 × 10^0 | 1.5860 × 10^−1 | 4.2039 × 10^−1 = |
| AGWO | 1.7366 × 10^0 | 1.7725 × 10^0 | 1.8314 × 10^−2 | 1.8307 × 10^0 | 6.9630 × 10^−1 | 3.0199 × 10^−11 + |
| EGWO | 1.7256 × 10^0 | 1.7305 × 10^0 | 4.7509 × 10^−3 | 1.7462 × 10^0 | 2.0027 × 10^−1 | 3.0199 × 10^−11 + |
| RSA | 1.8881 × 10^0 | 2.1518 × 10^0 | 1.6396 × 10^−1 | 2.6872 × 10^0 | 3.5377 × 10^−1 | 3.0199 × 10^−11 + |
| FRSA | 1.7249 × 10^0 | 1.7249 × 10^0 | 5.6900 × 10^−5 | 1.7252 × 10^0 | 4.8907 × 10^−1 | — |
Yao, L.; Li, G.; Yuan, P.; Yang, J.; Tian, D.; Zhang, T. Reptile Search Algorithm Considering Different Flight Heights to Solve Engineering Optimization Design Problems. Biomimetics 2023, 8, 305. https://doi.org/10.3390/biomimetics8030305