
Accelerated Arithmetic Optimization Algorithm by Cuckoo Search for Solving Engineering Design Problems

1 Faculty of Information Technology, Applied Science Private University, Amman 11931, Jordan
2 MEU Research Unit, Middle East University, Amman 11831, Jordan
3 Research and Innovation Centers, Rabdan Academy, Abu Dhabi P.O. Box 114646, United Arab Emirates
4 Faculty of Engineering, Department of Civil Engineering, The Hashemite University, Zarqa 13133, Jordan
5 School of Information Technology, Skyline University College, Sharjah P.O. Box 1797, United Arab Emirates
6 School of Mathematics, Thapar Institute of Engineering & Technology, Deemed University, Patiala 147004, India
7 Applied Science Research Center, Applied Science Private University, Amman 11931, Jordan
8 Department of Mathematics, Graphic Era Deemed to be University, Dehradun 248002, India
9 College of Technical Engineering, The Islamic University, Najaf 54001, Iraq
10 Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq 25113, Jordan
11 Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
12 Department of Computing and Information Systems, School of Engineering and Technology, Sunway University Malaysia, Petaling Jaya 27500, Malaysia
* Authors to whom correspondence should be addressed.
Processes 2023, 11(5), 1380; https://doi.org/10.3390/pr11051380
Submission received: 26 February 2023 / Revised: 23 April 2023 / Accepted: 24 April 2023 / Published: 3 May 2023
(This article belongs to the Section Process Control and Monitoring)

Abstract: Several metaheuristic algorithms have been implemented to solve global optimization issues. Nevertheless, these approaches require further enhancement to strike a suitable balance between exploration and exploitation. Consequently, this paper proposes an improvement to the arithmetic optimization algorithm (AOA), based on the cuckoo search algorithm and called AOACS, for solving engineering optimization problems. The developed approach uses cuckoo search operators to improve the exploitation operations of AOA and enhances the convergence rate of the presented technique toward the optimum solution. The performance of the AOACS is examined using 23 benchmark functions and the CEC-2019 functions to show the ability of the proposed work to solve different numerical optimization problems. The proposed AOACS is also evaluated on four engineering design problems: the welded beam, the three-bar truss, the stepped cantilever beam, and the speed reducer design. Finally, the results of the proposed approach are compared with state-of-the-art approaches. The comparisons illustrate that AOACS outperforms the other methods on the considered performance measures.

1. Introduction

Increasingly complicated optimization issues have arisen due to the rapid expansion of numerous application domains. Traditional optimization techniques take too much time and money to solve these new optimization challenges. It is common knowledge that exact and rigorous answers are not required in most situations [1]. That is, due to the significantly reduced time and costs, estimated ideal solutions can be acceptable in practice. In order to address these non-convex, non-linear limitations and difficult optimization problems, numerous optimization algorithms have been introduced in recent years. These algorithms have proven to be quite successful in solving these real-world problems.
The use of optimization techniques to address issues in the actual world is common. These challenging, nonlinear, and multimodal real-world issues often need the use of metaheuristic algorithms, which have proven to be reliable optimization techniques in such circumstances. Metaheuristic algorithms are popular because of their simplicity in design and implementation, gradient-freeness, and ability to work around obstacles.
Metaheuristic algorithms efficiently solve a wide range of real-world problems; this comes from the nature of these algorithms and adopts a gradient-free method. Various metaheuristic techniques have been released recently based on natural procedures, collaborative behavior, or scientific laws.
Four general categories can be used to categorize metaheuristic algorithms, as shown in Figure 1:
  • Human-based algorithms: Fireworks Algorithm (FW) [2], Child Drawing Development Optimization Algorithm (CDD) [3], Teaching-based Learning Algorithm (TBLA) [4], Socio Evolution & Learning Optimizer (SELO) [5], Genetic Algorithm (GA) [6], and Harmony Search (HS) [7].
  • Swarm-based algorithms: Particle Swarm Optimization (PSO) [8], Prairie Dog Optimization Algorithm (PDOA) [9], Grasshopper Optimization Algorithm (GOA) [10], Moth Flame Optimization (MFO) [11], Firefly Algorithm (FA) [12], Aquila Optimizer (AO) [13], and Ant Lion Optimizer [14].
  • Evolutionary algorithms (EA): Backtracking Search Optimization Algorithm (BTSO) [15], Evolutionary Strategies algorithm (ES) [15], Differential Evolution (DE) [16], Genetic Algorithm (GA) [6], and Tree Growth Algorithm (TGA) [17].
  • Physics-based algorithms: Multi-verse Optimizer (MVO) [18], Black Hole Algorithm (BHA) [19], Space Gravitational Algorithm (SGA) [20], the Arithmetic Optimization Algorithm (AOA) [21], and Henry Gas Solubility Optimization (HGSO) [22].
An overview of the given methods is presented in Table 1.
Still, not all of the issues can be resolved by these techniques [23,24]. Heuristic algorithms are currently used to address optimization problems in many different disciplines, including optimal power flow problems and parameter optimization of photovoltaic models [25,26]. Therefore, in the face of complex difficulties, we must provide algorithms with more efficiency [27].
A population-based metaheuristic method called the arithmetic optimization algorithm (AOA) was just recently proposed. The approach is based on how the addition, subtraction, multiplication, and division arithmetic operators behave with respect to distributivity [21].
The AOA algorithm has proven its stable and robust performance in different fields, such as data clustering, power systems, power controllers, feature selection, and image processing. In [28], the authors propose an improved AOA algorithm based on flow direction for data clustering; the proposed algorithm is validated on different data clustering problems and outperforms the compared algorithms. Elkasem, Ahmed H.A., et al. present an approach that uses fuzzy logic and the AOA algorithm to enhance the performance of power controllers, such as the proportional-integral-derivative (PID) controller; the results show the superiority of the PID based on fuzzy logic and AOA [29]. A binary version of AOA extracts and selects features from images to detect osteosarcoma; AOA combined with different algorithms accurately classifies the images [30]. Ewees, Ahmed A., et al. accelerated the AOA algorithm by hybridizing AOA with genetic algorithms to enhance the algorithm's search method; the proposed algorithm is applied to the Cox proportional hazards method [31].
The AOA algorithm is widely used for solving several optimization problems because of the simplicity, robustness, and effectiveness of the results in terms of solving optimization problems [32]. However, AOA would also easily fall in the local optima for optimizing some complex issues, and the exploration and exploitation capabilities are less significant [33].
The control parameter and the position-update coefficient C_Iter play the primary role in avoiding local optima and balancing exploitation and exploration in the fundamental AOA algorithm. In order to integrate the advantages of AOA and CS, we present a hybrid approach in this study. The AOACS is created to improve AOA's search procedure and find solutions that are close to optimal. Specifically, a new formulation of C_Iter uses the CS algorithm.
This paper proposes a novel optimization algorithm based on the cuckoo search algorithm for solving engineering design problems. The algorithm is specifically designed to optimize arithmetic expressions that arise in engineering design problems, such as mathematical models for physical systems, circuits, or mechanical systems. The cuckoo search algorithm is a nature-inspired optimization algorithm that is based on the behavior of cuckoo birds. The algorithm is known for its ability to efficiently search large solution spaces and find optimal or near-optimal solutions.
The proposed algorithm in this work enhances the cuckoo search algorithm by introducing an accelerated arithmetic optimization technique that exploits the mathematical structure of the optimization problem. The main objective of the proposed algorithm is to minimize the objective function, which represents the cost or performance of the system being optimized. The algorithm iteratively searches the solution space using a set of cuckoo nests, each of which contains a potential solution. The nests are updated using a set of optimization operators, such as mutation and crossover, which are used to generate new potential solutions.
The performance of the proposed algorithm is evaluated using several benchmark functions and compared with other state-of-the-art optimization algorithms. The results show that the proposed algorithm outperforms other algorithms in terms of solution quality and convergence speed. The proposed algorithm has potential applications in various fields, such as aerospace engineering, mechanical engineering, and electrical engineering, where optimization of complex mathematical models is necessary.
The following is a summary of this paper’s significant contributions:
  • We suggest a brand-new hybrid algorithm called AOACS based on the arithmetic optimization algorithm (AOA) and cuckoo search (CS) approach inspired by the AOA and CS algorithm design.
  • CS aids the suggested algorithm in increasing the diversity of the original population and its capacity to depart from the local optimum.
  • Enhanced AOA exploration and exploitation to increase convergence accuracy.
  • Twenty-three benchmark functions and the CEC-2019 functions are used to evaluate the ability of AOACS to solve several numerical optimization problems.
  • The performance of AOACS is validated using four engineering optimization problems: the welded beam, the three-bar truss, the stepped cantilever beam, and the speed reducer design.
  • The results indicate the out-performance of AOACS over the basic AOA, CS, and other metaheuristic approaches.
The rest of the paper is organized as follows: the AOA algorithm is presented in Section 2; the cuckoo search algorithm is introduced in Section 3; Section 4 describes the proposed AOACS algorithm; Section 5 discusses the outcomes on the benchmark functions; Section 6 applies the AOACS to engineering design challenges; the final section concludes the article.

2. Arithmetic Optimization Algorithm (AOA)

Using several equations and mathematical operators, Abualigah presented this approach in 2020 [21]. AOA mimics four basic arithmetic operators (i.e., Subtraction (S), Addition (A), Multiplication (M), Division (D)) and is used to update the positions and search for the optimal global solutions. For the exploration search, the Multiplication and Division operators are employed; on the other hand, the Addition and Subtraction operators are used to execute the exploitation search. Figure 2 shows the AOA optimization technique.
The AOA method begins with a population of randomly generated solutions, just like other metaheuristics. The objective value of each solution is computed after each iteration. Before updating the positions of the search agents, two controlling parameters named MOA and MOP must be updated as follows:
$$MOA(t) = Min + t \times \left(\frac{Max - Min}{T}\right)$$
where MOA(t) denotes the value of the math optimizer accelerated function at the tth iteration, T is the maximum number of iterations, and t is the current iteration. Min and Max are the minimum and maximum values of the accelerated function.
$$MOP(t) = 1 - \frac{t^{1/\alpha}}{T^{1/\alpha}}$$
where the math optimizer probability (MOP) is a coefficient, MOP(t) is the procedure's weight at the tth iteration, T is the maximum number of iterations, t is the current iteration, and α is a control parameter that defines the exploitation accuracy over the iterations.
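As a minimal sketch, the two schedules in Equations (1) and (2) can be written directly; the Min, Max, and α defaults below are common choices from the AOA literature and are assumptions here, not necessarily the values listed in Table 3:

```python
def moa(t, T, mo_min=0.2, mo_max=1.0):
    """Math Optimizer Accelerated, Eq. (1): grows linearly from Min to Max."""
    return mo_min + t * (mo_max - mo_min) / T

def mop(t, T, alpha=5.0):
    """Math Optimizer Probability, Eq. (2): decays from 1 to 0 over the run."""
    return 1.0 - (t / T) ** (1.0 / alpha)
```

MOA rises over the run, so later iterations favor exploitation, while MOP shrinks the step size around the best solution.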
A random number named r1 is created to switch between exploitation and exploration after following MOA and MOP. The following formula is employed for exploration:
$$x_{i,j}(t+1) = \begin{cases} best(x_j) \div (MOP + \epsilon) \times ((UB_j - LB_j) \times \mu + LB_j), & \text{if } r_2 < 0.5 \\ best(x_j) \times MOP \times ((UB_j - LB_j) \times \mu + LB_j), & \text{if } r_2 \ge 0.5 \end{cases}$$
where t is the current iteration, µ is a control parameter, ϵ is a small number to avoid division by zero, and r2 is a random value in the range [0, 1]. The following formula is used for the exploitation:
$$x_{i,j}(t+1) = \begin{cases} best(x_j) - MOP \times ((UB_j - LB_j) \times \mu + LB_j), & \text{if } r_3 < 0.5 \\ best(x_j) + MOP \times ((UB_j - LB_j) \times \mu + LB_j), & \text{if } r_3 \ge 0.5 \end{cases}$$
where $x_{i,j}(t)$ denotes the jth position of the ith solution at the current iteration, $best(x_j)$ is the jth position of the best solution obtained so far, $x_i(t+1)$ is the ith solution at the next iteration, $UB_j$ and $LB_j$ are the upper and lower bounds of the jth position, respectively, and r3 is a random value in the range [0, 1]. Algorithm 1 shows the pseudocode for the AOA algorithm.
Algorithm 1 Pseudo-code of AOA algorithm.
Initialize the population size N and the maximum iteration T
Initialize the position of each search agent Xi (i = 1, 2, …, N)
While  t T
   Check whether the position goes beyond the search space boundary and adjust it.
   Evaluate the fitness values of all search agents
   Set Xbest as the position of current best solution
   Calculate the MOA value using Equation (1)
   Calculate the MOP value using Equation (2)
   For i = 1 to N
      If r1 >  MOA then
      Update the search agent’s position using Equation (3)
      Else
      Update the search agent’s position using Equation (4)
      End If
   End For
   t = t + 1
End While
Return  Xbest
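Algorithm 1 can be sketched in Python as follows. This is a minimal illustration, assuming common AOA defaults (Min = 0.2, Max = 1, α = 5, µ = 0.499) and scalar bounds; the parameter values actually used in the experiments are those of Table 3:

```python
import numpy as np

def aoa(fitness, lb, ub, dim, N=30, T=500, alpha=5.0, mu=0.499, eps=1e-12):
    """Minimal AOA sketch following Algorithm 1 and Equations (1)-(4)."""
    X = lb + np.random.rand(N, dim) * (ub - lb)          # random initial population
    fit = np.apply_along_axis(fitness, 1, X)
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(1, T + 1):
        moa = 0.2 + t * (1.0 - 0.2) / T                  # Eq. (1)
        mop = 1.0 - (t / T) ** (1.0 / alpha)             # Eq. (2)
        scale = (ub - lb) * mu + lb
        for i in range(N):
            for j in range(dim):
                r1, r2, r3 = np.random.rand(3)
                if r1 > moa:                             # exploration, Eq. (3): D and M
                    X[i, j] = best[j] / (mop + eps) * scale if r2 < 0.5 \
                              else best[j] * mop * scale
                else:                                    # exploitation, Eq. (4): S and A
                    X[i, j] = best[j] - mop * scale if r3 < 0.5 \
                              else best[j] + mop * scale
            X[i] = np.clip(X[i], lb, ub)                 # keep agents inside the bounds
            f = fitness(X[i])
            if f < best_fit:
                best, best_fit = X[i].copy(), f
    return best, best_fit
```

For example, `aoa(lambda x: float((x**2).sum()), -10.0, 10.0, 2)` minimizes a 2-dimensional sphere function.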

3. Cuckoo Search Algorithm

To discuss the cuckoo search algorithm as simply as possible, we initially idealize the main elements of the cuckoo-host strategy as a population of n cuckoos with n nests. In real cuckoo-host systems, host bird nests usually contain three to four eggs or more, and by laying its eggs in such nests, a cuckoo can attack many nests. Because each cuckoo may only affect one host nest at a time while laying one egg, the numbers of eggs, nests, and cuckoos coincide. Thus, an optimization problem's solution vector, x, can be thought of as the location of an egg. As a result, there is no longer a need to distinguish between eggs, cuckoos, and nests; we essentially have the equivalence "egg = cuckoo = nest" [34]. Algorithm 2 shows the pseudocode for the CS algorithm.
The initial population of the CS algorithm is generated by Lévy flights. In the 1930s, the French mathematician Paul Pierre Lévy proposed the Lévy flight, a random-walk mechanism whose step lengths follow a stable heavy-tailed distribution, so that large jumps occur with non-negligible probability. Sharp peaks, asymmetry, and heavy tails are features of the probability density of the Lévy flight. It moves in a rhythm that alternates between frequent short-distance jumps and sporadic long-distance hops, which can widen the search region of the population and help it jump out of local optima. In nature, numerous insects and animals, including flies and reindeer, move in a manner resembling Lévy flights.
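Lévy-distributed steps of this kind are commonly sampled with Mantegna's algorithm; the sketch below assumes a stability index β = 1.5, a typical choice in CS implementations (not a value stated in this paper):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw one Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    # Heavy-tailed ratio: mostly small moves, occasionally very large jumps
    return u / np.abs(v) ** (1 / beta)
```

The occasional very large components of these steps are what let the walker escape local optima.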
Algorithm 2 Pseudo-code of Cuckoo search algorithm.
Objective function $f(x)$, $x = (x_1, x_2, \ldots, x_d)^T$
Generation t = 1
Initialize a population of n host nests $x_i\ (i = 1, 2, \ldots, n)$
While (t < Stop criterion)
   Get a cuckoo (i) randomly by Lévy flight
   Evaluate the fitness values of all search agents F
   Choose a nest among n (such as c) randomly
   If  ( F i > F c )
     Replace c by the new solution
   End If
   Abandon a faction ( P a ) of the worst nests and build new ones
   Keep the optimal solutions.
   Rank the solutions and find the current best.
   Update the generation number t = t + 1
End While
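Algorithm 2 can be sketched in Python as follows, phrased for minimization. The population size n, abandonment fraction pa, step scale, and β are assumed common CS defaults rather than the settings used in this paper:

```python
import numpy as np
from math import gamma, sin, pi

def cuckoo_search(fitness, lb, ub, dim, n=15, pa=0.25, T=200, step=0.01, beta=1.5):
    """Minimal Cuckoo Search sketch following Algorithm 2 (minimization)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    nests = lb + np.random.rand(n, dim) * (ub - lb)
    fit = np.array([fitness(x) for x in nests])
    for _ in range(T):
        # Get a cuckoo randomly by Levy flight (Mantegna's algorithm)
        i = np.random.randint(n)
        u = np.random.normal(0.0, sigma, dim)
        v = np.random.normal(0.0, 1.0, dim)
        new = np.clip(nests[i] + step * u / np.abs(v) ** (1 / beta), lb, ub)
        fn = fitness(new)
        # Choose a nest c at random; replace it if the cuckoo's egg is better
        c = np.random.randint(n)
        if fn < fit[c]:
            nests[c], fit[c] = new, fn
        # Abandon a fraction pa of the worst nests and build new ones
        k = max(1, int(pa * n))
        worst = fit.argsort()[-k:]
        nests[worst] = lb + np.random.rand(k, dim) * (ub - lb)
        fit[worst] = [fitness(x) for x in nests[worst]]
    best = int(fit.argmin())
    return nests[best], fit[best]
```

Note that Algorithm 2 states the replacement test for maximization (Fi > Fc); the comparison is flipped here because the benchmark functions are minimized.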

4. Hybridization of AOA with CS Algorithm

Elements in AOA swarms search more randomly than those in CS swarms, but they perform worse than CS elements during the exploitation operation. Although both algorithms show significant performance in optimizing several problems, the exploitation ability of elements in AOA swarms is not sufficient, while elements in CS swarms are less qualified explorers than those in AOA swarms. Therefore, it is preferable to couple the exploitation process of CS swarms with the exploration process of AOA swarms. Figure 3 illustrates the proposed AOACS algorithm.
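The coupling can be illustrated with one hybrid iteration: an AOA position update (Equations (1)-(4)) followed by a CS-style Lévy perturbation that sharpens exploitation around each candidate. This is only an illustrative sketch; the exact coupling used in the paper (the reformulated C_Iter) and the parameter values (here common defaults) may differ:

```python
import numpy as np
from math import gamma, sin, pi

def aoacs_step(X, best, t, T, lb, ub, mu=0.499, alpha=5.0, step=0.01,
               beta=1.5, eps=1e-12):
    """One illustrative AOACS iteration: AOA update, then a Levy jump."""
    moa = 0.2 + t * 0.8 / T                              # Eq. (1)
    mop = 1.0 - (t / T) ** (1.0 / alpha)                 # Eq. (2)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    scale = (ub - lb) * mu + lb
    N, dim = X.shape
    for i in range(N):
        for j in range(dim):
            r1, r2, r3 = np.random.rand(3)
            if r1 > moa:        # AOA exploration (division / multiplication)
                X[i, j] = best[j] / (mop + eps) * scale if r2 < 0.5 \
                          else best[j] * mop * scale
            else:               # AOA exploitation (subtraction / addition)
                X[i, j] = best[j] - mop * scale if r3 < 0.5 \
                          else best[j] + mop * scale
        # CS-style Levy perturbation refines the candidate locally while
        # still allowing occasional long escape jumps
        u = np.random.normal(0.0, sigma, dim)
        v = np.random.normal(0.0, 1.0, dim)
        X[i] = np.clip(X[i] + step * u / np.abs(v) ** (1 / beta), lb, ub)
    return X
```

Repeating this step inside the usual evaluate-and-keep-best loop yields the overall hybrid search pictured in Figure 3.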

5. Results and Discussion

5.1. Benchmark Functions Description

Table 2 lists the 23 mathematical functions used, their categories, and their mathematical formulations. The unimodal benchmark functions in Table 2 have only one optimal solution, making them a good choice for testing the exploitation effectiveness of the proposed optimizer. Multimodal functions contain several peaks, a number of local optima, and only one global optimum, which makes these benchmark functions the best option for assessing the exploration of the optimization process. Further, balancing the exploration and exploitation of any algorithm is a challenging task; thus, the fixed-dimension multimodal functions are used to prove the outperformance of the proposed algorithm. The minimum value of each function (fmin), the defined search-space limits, and the considered dimensions are given in Table 2.
The proposed AOACS algorithm's performance in this part is assessed in two steps: first, a complex set of mathematical benchmark functions is processed; second, real-world engineering design problems are solved using the proposed method to show the performance of the proposed work. The results are compared with well-known algorithms, namely the original AOA, the Whale Optimization Algorithm (WOA), Harris Hawks Optimization (HHO), the Salp Swarm Algorithm (SSA), Particle Swarm Optimization (PSO), and the Slime Mould Algorithm (SMA). The selected methods are among the most related and recent in this domain. The worst, best, average, and standard deviation of the fitness values are the four metrics employed in the comparisons. Furthermore, the Wilcoxon rank-sum test is utilized to show the statistical differences between AOACS and the rest of the algorithms. Table 3 provides the values of the essential parameters of the used algorithms.

5.2. The Global Optimization Results

The suggested AOACS has been evaluated using 23 widely used mathematical functions in this area. With the help of several statistical analyses, the proposed AOACS's results have been compared with many contemporary state-of-the-art methods to evaluate and show how well it handles global optimization problems. The original AOA [21], the Whale Optimization Algorithm (WOA) [35], Harris Hawks Optimization (HHO) [36], the Salp Swarm Algorithm (SSA) [37,38], Particle Swarm Optimization (PSO), and the Slime Mould Algorithm (SMA) [39] are the algorithms taken into consideration.
The algorithms were applied with the same settings to ensure the fairness of the experimental results: population size was set to 30, and the maximum number of iterations was 500 for 30 separate instances. The study and simulations were conducted using Windows 10 and an Intel Core i7 processor running at 2.3 GHz with 16 GB of RAM. For a fair comparison, all competitors were run on the MATLAB 2018 platform.

5.2.1. Achieved Qualitative Results

The AOACS behaviors regarding the trajectories and convergence are illustrated in Figure 4 to confirm the effectiveness of the proposed algorithm. The figure shows several results: the functions plotted in 2D appear in the first column; the second column presents the trajectory of the solution, followed by two columns showing the average fitness and the convergence, respectively.
The magnitude and frequency of the solution's movements in the initial iterations can be seen in the second column (trajectory behavior); they have almost completely disappeared in the later iterations. This shows that AOACS has strong exploration capabilities in the early iterations and strong exploitation capabilities in the later ones. This tendency suggests that AOACS can locate the optimal solution well. The ability of the AOACS to converge to high-quality solutions in fewer iterations is demonstrated by the average fitness value across all solutions over the number of iterations, shown in the third column of Figure 4. The average fitness value for the AOACS starts high in the early generations and decreases rapidly.

5.2.2. Results of Simulation of 23 Benchmark Functions and Discussions

Figure 5 compares the convergence curves of the proposed AOACS with those of the basic AOA and cutting-edge methods to evaluate the effectiveness of the AOACS for the main balance targets of exploitation and exploration. In contrast to the SMA, PSO, SSA, WOA, and HHO, which experienced severe stagnation at locally optimal solutions, the results demonstrate the smooth convergence of the AOACS toward superior-quality solutions.
Table 4 compares the statistics of the AOACS with those of the standard AOA, PSO, WOA, SSA, SMA, and HHO. AOACS achieved the lowest values of the computed metrics (worst, best, average, and standard deviation (STD)) on around 50% of the considered benchmarks (F: 2, 3, 5, 8, 9, 11, 12, 14, 19, 22), demonstrating that it can outperform all other algorithms on these functions. Furthermore, it performs comparably well on the remaining functions. The P-values obtained using the Wilcoxon rank-sum test with a significance level of 0.05 demonstrate that the AOACS is superior to the SSA on 18 functions; the null hypothesis is therefore rejected (h = 1 indicates a significant difference between the examined optimizers, AOACS and SSA). The P-values for PSO, SMA, HHO, WOA, SSA, and the basic AOA demonstrate that the AOACS outperforms the other algorithms on about 13 of the 23 functions; as a result, the null hypothesis is rejected (h = 1). Additionally, for further investigation, the proposed AOACS is compared to its counterparts on the 23 functions using the Friedman ranking test to determine its rank among them. Table 5 lists the obtained ranks.

5.2.3. Scalability Study

In this section, the effectiveness of AOACS is assessed using 13 functions from Table 2 and the 10 CEC2019 benchmark functions.
1. Experiments on 13 benchmark functions:
The performance of AOACS is evaluated using 13 functions from Table 2 with an increased dimension of 100 to determine the optimizer's resilience as the size of the optimization problems it handles grows.
Table 6 presents the effects of the offered variant and the other techniques (PSO, WOA, SSA, SMA, HHO, and AOA) for the worst, best, average, and STD values. Additionally, Table 6 provides the P-value and null hypothesis test result for AOACS compared to the other methods using the Wilcoxon rank summation examination with a substantial difference of 0.05.
The statistics from Table 6 show how stable and effective the suggested AOACS is because it offers the best answers for all six functions (F1, F2, F3, F4, F9, and F11). In addition, compared to the other algorithms, it produces the most comparable outcomes for the best solutions of the other methods (F5, F6, F7, F8, F10). For 85% of the examined functions, the stated P-values are smaller than 0.05. AOACS is highly stable and superior for handling situations with high dimensions.
Table 7 computes the Friedman ranking test to highlight the noteworthiness of the suggested AOACS. The AOACS has the top rankings in nine of the thirteen benchmark functions that were analyzed as problems; as a result, it finally occupies the top spot in the queue of other high-dimensional problem-solving strategies. With an average rank almost as high as AOACS’s, unmodified AOA holds down the second spot. As a result, AOA offers higher-quality solutions for high-dimensional issues than modern state-of-the-art approaches.
Figure 6 compares the proposed AOACS's convergence curves to those of the original AOA and cutting-edge methods to evaluate the effectiveness of the AOACS's exploration and exploitation. In contrast to the WOA, PSO, SSA, SMA, and HHO, which experienced severe stagnation at locally optimal solutions, the curves demonstrate the smooth convergence of the AOACS toward superior-quality solutions.
2. Experiments on 10 CEC2019 benchmark functions:
The translation trajectory function, the translation trajectory Schwefel function, the translation trajectory Lunacek double grating function, and the comprehensive Rosenbrock plus Griewangk function are the CEC01 through CEC04 test functions of CEC2019. The remaining functions are divided into mixed functions (CEC05–CEC07) and compound functions (CEC08–CEC10). The best value, average value, and standard deviation of each algorithm were determined after 30 independent runs.
The output of each algorithm for each CEC2019 test function is shown in Table 8. The table shows that, whether judged by the best or the average value, the AOACS algorithm achieved the best outcome on seven of the ten benchmark functions. This demonstrates the AOACS algorithm's powerful optimization impact. The AOACS algorithm was less stable than most other algorithms on CEC03, CEC08, and CEC09, but its STD was still better in terms of numerical value.
The Friedman ranking test is computed in Table 9 to show how noteworthy the suggested AOACS is. The AOACS holds the top position among the other high-dimensional problem-solving strategies since it has the highest rankings in six of the ten benchmark functions analyzed. The unmodified AOA takes the runner-up position with an average rank nearly as high as that of AOACS.

6. AOACS for Solving Real-World Engineering Optimization Problems

This section uses the suggested algorithm to resolve four engineering design issues: the welded beam, the three-bar truss, the stepped cantilever beam, and the speed reducer design. A set of 30 solutions and 500 iterations are employed in each run to solve these issues.
The acquired results are contrasted with related methods described in the literature; the results of the presented AOACS are compared with those of the most recent techniques in the subsections that follow. In order to assess the effectiveness of the suggested AOACS, bound-constrained and generally constrained optimization problems are used in this study. For the bound-constrained optimization problems, each design variable is required to satisfy a boundary constraint:
$$lb_j \le x_{ij} \le ub_j, \qquad j = 1, 2, 3, \ldots, n$$
where n is the total number of positions, and $lb_j$ and $ub_j$ stand for the position's lower and upper bounds, respectively. Additionally, a general constrained problem is typically formulated as:
$$\min f(x), \quad X = \{x_1, x_2, \ldots, x_n\}$$
$$\text{subject to} \quad g_j(X) \le 0, \quad j = 1, 2, 3, \ldots, k$$
$$s_t(X) = 0, \quad t = 1, 2, \ldots, p$$
where there are k inequality constraints and p equality constraints. In the performance evaluation of the proposed AOACS, all of the constrained optimization problems in Equation (6) are mapped into the bound-constrained design by using a static penalty function. A penalty is incorporated into the underlying objective function for any infeasible solution. The static penalty function is streamlined for ease of use, is appropriate for all kinds of problems, and requires only an auxiliary cost function. The aforementioned constrained optimization problem can be expressed as follows:
$$f(X) = f(x) + \sum_{j=1}^{k} \mu_{ep} \max\{g_j(X), 0\} + \sum_{t=1}^{p} \mu_{et} \max\{|s_t(X)| - \delta, 0\}$$
where $\mu_{ep}$ and $\mu_{et}$ are typically set to large values, and δ is the tolerance of the equality constraints, set to $10^{-6}$ in this paper. In optimization problems, constraints restrict the values that the variables can take. By properly handling constraints, we can find the optimal values of the variables that satisfy the constraints and obtain meaningful solutions to real-world problems [40].
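The static penalty described above can be sketched directly; the weight values below are assumptions standing in for the "typically large" values the paper mentions:

```python
def penalized(f, x, ineq=(), eq=(), mu_p=1e6, mu_e=1e6, delta=1e-6):
    """Static penalty: add a large cost for every violated inequality
    g(x) <= 0 and every equality |s(x)| exceeding the tolerance delta."""
    cost = f(x)
    for g in ineq:
        cost += mu_p * max(g(x), 0.0)          # only violations are charged
    for s in eq:
        cost += mu_e * max(abs(s(x)) - delta, 0.0)
    return cost
```

Because infeasible solutions receive a huge cost, any bound-constrained optimizer such as AOACS naturally steers the population back into the feasible region.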
Real-world engineering design problems comprise nonlinear optimization problems accompanied by numerous complicated constraints and geometries. The proposed AOACS method is applied to four problems (the welded beam design problem, the three-bar truss design problem, the stepped cantilever beam design, and the speed reducer design) to demonstrate how well it performs in optimization situations related to engineering problems. The proposed AOACS simulation is set as follows: the maximum number of iterations is 200, the population size is set to 30, and the simulation is run 20 times. These settings are commonly used to solve this kind of problem.

6.1. Welded Beam

One of the considerably well-known problem studies to assess algorithms’ effectiveness is the welded beam design problem. It was first put forth in [41] and sought to decrease the overall fabricating value of a welded beam by using four decision variables, as shown in Figure 7. These variables are the weld thickness (h), the joint beam’s length (l), its height (t), and its thickness (b).
The problem’s mathematical formulation and constraint functions are as follows:
Consider
$$\lambda = [\lambda_1, \lambda_2, \lambda_3, \lambda_4] = [h, l, t, b]$$
Minimize
$$f(\lambda) = 1.10471\,\lambda_1^2 \lambda_2 + 0.04811\,\lambda_3 \lambda_4 (14.0 + \lambda_2)$$
Subject to
$$g_1(\lambda) = \tau(\lambda) - \tau_{\max} \le 0$$
$$g_2(\lambda) = \sigma(\lambda) - \sigma_{\max} \le 0$$
$$g_3(\lambda) = \delta(\lambda) - \delta_{\max} \le 0$$
$$g_4(\lambda) = \lambda_1 - \lambda_4 \le 0$$
$$g_5(\lambda) = P - P_C(\lambda) \le 0$$
$$g_6(\lambda) = 0.125 - \lambda_1 \le 0$$
$$g_7(\lambda) = 1.10471\,\lambda_1^2 + 0.04811\,\lambda_3 \lambda_4 (14 + \lambda_2) - 5 \le 0$$
Variable range
$$0.1 \le \lambda_1, \lambda_4 \le 2, \qquad 0.1 \le \lambda_2, \lambda_3 \le 10$$
where
$$\tau(\lambda) = \sqrt{(\tau')^2 + 2\tau'\tau''\,\frac{\lambda_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\,\lambda_1 \lambda_2}, \quad \tau'' = \frac{MR}{J}, \quad M = P\left(L + \frac{\lambda_2}{2}\right)$$
$$R = \sqrt{\frac{\lambda_2^2}{4} + \left(\frac{\lambda_1 + \lambda_3}{2}\right)^2}, \quad J = 2\left\{\sqrt{2}\,\lambda_1 \lambda_2 \left[\frac{\lambda_2^2}{4} + \left(\frac{\lambda_1 + \lambda_3}{2}\right)^2\right]\right\}$$
$$\sigma(\lambda) = \frac{6PL}{\lambda_4 \lambda_3^2}, \quad \delta(\lambda) = \frac{6PL^3}{E \lambda_3^2 \lambda_4}$$
$$P_C(\lambda) = \frac{4.013\,E\sqrt{\lambda_3^2 \lambda_4^6 / 36}}{L^2}\left(1 - \frac{\lambda_3}{2L}\sqrt{\frac{E}{4G}}\right)$$
$$\sigma_{\max} = 3000\ \text{psi}, \quad \delta_{\max} = 0.25\ \text{in}, \quad \tau_{\max} = 30{,}000\ \text{psi}$$
$$E = 30 \times 10^6\ \text{psi}, \quad G = 12 \times 10^6\ \text{psi}, \quad L = 14\ \text{in}, \quad P = 6000\ \text{lb}$$
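For reference, the cost and constraint values of this formulation can be evaluated directly. The sketch below simply transcribes the expressions above (a design is feasible when every constraint value is at most zero); the numerical constants are those listed in the formulation:

```python
import math

def welded_beam_cost(x):
    """Fabrication cost f(lambda); x = (h, l, t, b)."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(x):
    """Values of g1..g7; the design is feasible when every entry is <= 0."""
    h, l, t, b = x
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 30000.0, 3000.0, 0.25
    tp = P / (math.sqrt(2) * h * l)                       # tau'
    M = P * (L + l / 2)
    R = math.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * math.sqrt(2) * h * l * (l**2 / 4 + ((h + t) / 2) ** 2)
    tpp = M * R / J                                       # tau''
    tau = math.sqrt(tp**2 + 2 * tp * tpp * l / (2 * R) + tpp**2)
    sigma = 6 * P * L / (b * t**2)                        # bending stress
    delta = 6 * P * L**3 / (E * t**2 * b)                 # end deflection
    Pc = (4.013 * E * math.sqrt(t**2 * b**6 / 36) / L**2
          * (1 - t / (2 * L) * math.sqrt(E / (4 * G))))   # buckling load
    return [tau - tau_max, sigma - sigma_max, delta - delta_max,
            h - b, P - Pc, 0.125 - h,
            1.10471 * h**2 + 0.04811 * t * b * (14 + l) - 5]
```

Plugging such a pair of functions into the static penalty described earlier in this section turns the welded beam problem into a bound-constrained one that AOACS can search directly.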
The proposed AOACS is implemented on the welded beam problem, and the result is compared with other metaheuristic approaches such as AOA, HHO, SSA, WOA, PSO, and SMA. Table 10 shows the achieved results; the variables h, l, t, and b are set as 1.96 × 10−1, 0.335 × 10−1, 0.904 × 10−1, and 2.06 × 10−1, respectively, and the optimal manufacturing cost of AOACS is 1.96 × 10−1. In this comparison, it is clear that AOACS produces better outcomes than all the other techniques. This demonstrates that AOACS is competitive in solving the welded beam design challenge.

6.2. Three-Bar Truss

The three-bar truss design’s optimization goal is to reduce overall weight. Figure 8 depicts the three-bar truss construction together with its primary parameters, element 1’s cross-sectional site (X1) and element 2’s cross-sectional site (X2). Buckling, deflection, and stress are further limits on the issue.
The mathematical formulation is expressed as follows:
Consider
$\lambda = [\lambda_1, \lambda_2] = [X_1, X_2]$
Minimize
$f(\lambda) = (2\sqrt{2}\lambda_1 + \lambda_2) \times l$
Subject to
$g_1(\lambda) = \frac{\sqrt{2}\lambda_1 + \lambda_2}{\sqrt{2}\lambda_1^2 + 2\lambda_1\lambda_2}P - \sigma \le 0$
$g_2(\lambda) = \frac{\lambda_2}{\sqrt{2}\lambda_1^2 + 2\lambda_1\lambda_2}P - \sigma \le 0$
$g_3(\lambda) = \frac{1}{\sqrt{2}\lambda_2 + \lambda_1}P - \sigma \le 0$
Variable range
$0 \le \lambda_1, \lambda_2 \le 1$
where
$\sigma = 2~\mathrm{kN/cm^2}, \quad l = 100~\mathrm{cm}, \quad P = 2~\mathrm{kN/cm^2}$
Table 11 compares the performance of several algorithms, namely HHO, AOA, WOA, SSA, PSO, SMA, and the proposed AOACS, on the three-bar truss design problem. The results demonstrate that the AOACS algorithm outperforms the other algorithms. The minimum weight is 2.6387 × 102, and the optimal values for X1 ($\lambda_1$) and X2 ($\lambda_2$) are 7.8859 × 10−1 and 4.0825 × 10−1, respectively. Thus, the AOACS shows promise for solving such problems within a minimal search area.
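Under the same conventions, the three-bar truss model is compact enough to state in a few lines. This is an illustrative sketch (the identifiers are ours), not the authors' implementation.

```python
import math

# Constants of the three-bar truss problem (from the formulation above).
l, P, sigma = 100.0, 2.0, 2.0   # cm, kN/cm^2

def three_bar_truss(x):
    """Return (structure weight, list of constraint values g_i <= 0)."""
    x1, x2 = x
    weight = (2.0 * math.sqrt(2.0) * x1 + x2) * l
    denom = math.sqrt(2.0) * x1**2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - sigma   # stress in member 1
    g2 = x2 / denom * P - sigma                            # stress in member 2
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - sigma      # stress in member 3
    return weight, [g1, g2, g3]
```

Evaluating at cross-sectional areas close to those reported in Table 11 (X1 ≈ 0.78859, X2 ≈ 0.40825) gives a weight near 263.9 with all three stress constraints satisfied.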

6.3. Stepped Cantilever Beam Design

As depicted in Figure 9, a stepped cantilever beam is fixed at one end and loaded at the other. The beam must be capable of supporting the specified load at a predetermined distance from the support. The designer can vary the width ($\lambda_i$) of each section, and we assume that all sections of the cantilever have the same length. Solving the cantilever beam design problem requires minimizing the cantilever's weight.
The problem’s mathematical formulation and constraint functions are as follows:
Consider:
$\lambda = [\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5]$
Minimize:
$f(\lambda) = 0.0624(\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 + \lambda_5)$
Subject to:
$g(\lambda) = \frac{61}{\lambda_1^3} + \frac{37}{\lambda_2^3} + \frac{19}{\lambda_3^3} + \frac{7}{\lambda_4^3} + \frac{1}{\lambda_5^3} - 1 \le 0$
Variable range:
$0.01 \le \lambda_i \le 100, \quad i = 1, 2, \ldots, 5$
Table 12 displays the test results for the cantilever beam design problem. The table shows that the weight obtained by the AOACS algorithm (0.134 × 10−1) was the best when compared to the HHO, WOA, SSA, PSO, and SMA algorithms, demonstrating the AOACS algorithm's viability and efficiency in addressing the cantilever beam design problem.
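The single-constraint cantilever model translates directly into code. The sketch below is hypothetical (names and structure are ours), not the authors' implementation.

```python
def cantilever(x):
    """Return (beam weight, constraint value g <= 0) for five section widths."""
    weight = 0.0624 * sum(x)
    g = (61.0 / x[0]**3 + 37.0 / x[1]**3 + 19.0 / x[2]**3
         + 7.0 / x[3]**3 + 1.0 / x[4]**3 - 1.0)
    return weight, g
```

With section widths near the widely reported solution of this benchmark (about 6.016, 5.309, 4.494, 3.502, 2.153), the weight evaluates to roughly 1.34 with the constraint just active.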

6.4. Speed Reducer Design

The seven variables in the speed reducer design problem are the tooth surface width $\lambda_1$, the gear module $\lambda_2$, the number of teeth on the pinion $\lambda_3$, the length of the first shaft between bearings $\lambda_4$, the length of the second shaft between bearings $\lambda_5$, the diameter of the first shaft $\lambda_6$, and the diameter of the second shaft $\lambda_7$. The variable diagram is displayed in Figure 10. The reducer design problem aims to determine the reducer's minimum weight while adhering to four design restrictions: bending stress of the gear teeth, surface stress, transverse deflection of the shafts, and stress in the shafts.
The problem’s mathematical formulation and constraint functions are as follows:
Consider:
$\lambda = [\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5, \lambda_6, \lambda_7]$
Minimize:
$f(\lambda) = 0.7854\lambda_1\lambda_2^2(3.3333\lambda_3^2 + 14.9334\lambda_3 - 43.0934) - 1.508\lambda_1(\lambda_6^2 + \lambda_7^2) + 7.4777(\lambda_6^3 + \lambda_7^3) + 0.7854(\lambda_4\lambda_6^2 + \lambda_5\lambda_7^2)$
Subject to:
$g_1(\lambda) = \frac{27}{\lambda_1\lambda_2^2\lambda_3} - 1 \le 0$
$g_2(\lambda) = \frac{397.5}{\lambda_1\lambda_2^2\lambda_3^2} - 1 \le 0$
$g_3(\lambda) = \frac{1.93\lambda_4^3}{\lambda_2\lambda_3\lambda_6^4} - 1 \le 0$
$g_4(\lambda) = \frac{1.93\lambda_5^3}{\lambda_2\lambda_3\lambda_7^4} - 1 \le 0$
$g_5(\lambda) = \frac{1}{110\lambda_6^3}\sqrt{\left(\frac{745\lambda_4}{\lambda_2\lambda_3}\right)^2 + 16.9 \times 10^6} - 1 \le 0$
$g_6(\lambda) = \frac{1}{85\lambda_7^3}\sqrt{\left(\frac{745\lambda_5}{\lambda_2\lambda_3}\right)^2 + 157.5 \times 10^6} - 1 \le 0$
$g_7(\lambda) = \frac{\lambda_2\lambda_3}{40} - 1 \le 0$
$g_8(\lambda) = \frac{5\lambda_2}{\lambda_1} - 1 \le 0$
$g_9(\lambda) = \frac{\lambda_1}{12\lambda_2} - 1 \le 0$
$g_{10}(\lambda) = \frac{1.5\lambda_6 + 1.9}{\lambda_4} - 1 \le 0$
$g_{11}(\lambda) = \frac{1.1\lambda_7 + 1.9}{\lambda_5} - 1 \le 0$
Variable range:
$2.6 \le \lambda_1 \le 3.6, \quad 0.7 \le \lambda_2 \le 0.8, \quad 17 \le \lambda_3 \le 28, \quad 7.3 \le \lambda_4 \le 8.3, \quad 7.3 \le \lambda_5 \le 8.3, \quad 2.9 \le \lambda_6 \le 3.9, \quad 5.0 \le \lambda_7 \le 5.5$
The results of the AOACS and the compared methods are listed in Table 13. From this table, the AOACS is ranked first and outperforms all the other methods on this problem, whereas the SMA is ranked second, followed by PSO, AOA, and HHO, respectively.
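For completeness, the speed reducer model (the objective plus its eleven constraints) can be sketched the same way. The function and variable names are illustrative, and the constraint constants follow the standard formulation of this benchmark rather than the authors' exact code.

```python
import math

def speed_reducer(x):
    """Return (reducer weight, list of 11 constraint values g_i <= 0)."""
    b, m, z, l1, l2, d1, d2 = x  # face width, module, teeth, lengths, diameters
    f = (0.7854 * b * m**2 * (3.3333 * z**2 + 14.9334 * z - 43.0934)
         - 1.508 * b * (d1**2 + d2**2)
         + 7.4777 * (d1**3 + d2**3)
         + 0.7854 * (l1 * d1**2 + l2 * d2**2))
    g = [27.0 / (b * m**2 * z) - 1.0,                                  # g1
         397.5 / (b * m**2 * z**2) - 1.0,                              # g2
         1.93 * l1**3 / (m * z * d1**4) - 1.0,                         # g3
         1.93 * l2**3 / (m * z * d2**4) - 1.0,                         # g4
         math.sqrt((745.0 * l1 / (m * z))**2 + 16.9e6)
             / (110.0 * d1**3) - 1.0,                                  # g5
         math.sqrt((745.0 * l2 / (m * z))**2 + 157.5e6)
             / (85.0 * d2**3) - 1.0,                                   # g6
         m * z / 40.0 - 1.0,                                           # g7
         5.0 * m / b - 1.0,                                            # g8
         b / (12.0 * m) - 1.0,                                         # g9
         (1.5 * d1 + 1.9) / l1 - 1.0,                                  # g10
         (1.1 * d2 + 1.9) / l2 - 1.0]                                  # g11
    return f, g
```

Evaluating at the widely reported best design, roughly (3.5, 0.7, 17, 7.3, 7.7153, 3.3502, 5.2867), yields a weight near 2994.5 with all constraints essentially satisfied.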
This work presents an innovative optimization algorithm for solving engineering design problems. The performance of the proposed algorithm is evaluated on several benchmark functions and compared with other state-of-the-art optimization algorithms. The results demonstrate that the proposed algorithm outperforms the other algorithms in terms of both solution quality and convergence speed: it is more efficient at locating the global optimum and converges faster. A comparison with the standard cuckoo search algorithm likewise shows that the proposed algorithm performs better in terms of solution quality and convergence speed, achieving a better balance between exploration and exploitation of the solution space. Moreover, the study shows that the proposed algorithm is robust and effective across different types of engineering design problems, successfully optimizing mathematical models of physical systems, circuits, and mechanical systems.
In summary, the results of the study demonstrate that the proposed algorithm is a powerful optimization tool that can efficiently solve complex engineering design problems. The algorithm offers superior performance in terms of solution quality and convergence speed compared to other state-of-the-art optimization algorithms. Therefore, the proposed algorithm has the potential to be widely applied in different fields, such as aerospace engineering, mechanical engineering, and electrical engineering, where optimization of complex mathematical models is necessary.

7. Conclusions

In this work, we propose an accelerated AOA approach called AOACS, which hybridizes the AOA with the cuckoo search algorithm. The CS is used to enhance the performance of AOA by balancing exploitation and exploration to achieve better convergence accuracy. The proposed algorithm provides significant results in solving numerical optimization problems compared to other algorithms. The performance of the AOACS is examined using 23 benchmark functions to show the ability of the proposed work to solve different numerical optimization problems. Further, to verify the outperformance of the AOACS algorithm, AOACS is applied to engineering optimization design problems, including the welded beam and three-bar truss designs. AOACS is compared with the original AOA and well-known algorithms such as HHO, SSA, WOA, PSO, and SMA, and demonstrates superior performance in terms of minimum manufacturing cost for the welded beam and minimum weight for the three-bar truss design.
This work proposes a novel optimization algorithm that shows promising results in solving complex engineering design problems. Here are some future research directions that could build upon this work:
  • Hybridization with other optimization algorithms: The proposed algorithm could be combined with other optimization algorithms, such as the Genetic Algorithm (GA) or Particle Swarm Optimization (PSO), to create hybrid algorithms that leverage the strengths of both approaches. Hybrid algorithms may lead to even better optimization performance for certain types of engineering design problems.
  • Parameter tuning: The performance of the proposed algorithm is highly dependent on the values of its parameters, such as the population size and the number of iterations. Future work could focus on finding optimal parameter settings that can improve the performance of the algorithm.
  • Application to real-world engineering problems: The proposed algorithm has the potential to be applied to real-world engineering design problems, such as designing efficient aircraft or optimizing the performance of renewable energy systems. Future research could focus on applying the proposed algorithm to such problems and evaluating its performance in comparison to other optimization algorithms.
  • Further theoretical analysis: Theoretical analysis of the proposed algorithm, such as convergence analysis or complexity analysis, could provide deeper insights into its behavior and limitations. Such analysis could also guide the development of more efficient optimization algorithms in the future.
  • Extension to multi-objective optimization: The proposed algorithm is currently designed to optimize single-objective problems. Future research could focus on extending the algorithm to solve multi-objective optimization problems, where multiple conflicting objectives need to be optimized simultaneously.
In conclusion, this work presents a promising optimization algorithm that can be further developed and applied to real-world engineering problems. Future research can build upon this work to improve the algorithm’s performance and expand its application domain.

Author Contributions

Conceptualization, M.H., M.A. (Mohammad Alshinwan), H.G. and L.A.; methodology, M.H., M.A. (Mohammad Alshinwan), H.G.; validation, M.A. (Marah Alshdaifat) and W.A. (Waref Almanaseer); formal analysis, W.A. (Waleed Alomoush) and O.A.K.; writing—original draft preparation, M.H., M.A. (Mohammad Alshinwan), H.G. and L.A.; writing—review and editing, M.A. (Marah Alshdaifat) and W.A. (Waref Almanaseer), O.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alomari, A.; Idris, N.; Sabri, A.Q.M.; Alsmadi, I. Deep reinforcement and transfer learning for abstractive text summarization: A review. Comput. Speech Lang. 2022, 71, 101276. [Google Scholar] [CrossRef]
  2. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. In Proceedings of the International Conference in Swarm Intelligence, Beijing, China, 12–15 June 2010; pp. 355–364. [Google Scholar]
  3. Abdulhameed, S.; Rashid, T.A. Child drawing development optimization algorithm based on child’s cognitive development. Arab. J. Sci. Eng. 2022, 47, 1337–1351. [Google Scholar] [CrossRef]
  4. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1. [Google Scholar] [CrossRef]
  5. Kumar, M.; Kulkarni, A.J.; Satapathy, S.C. Socio evolution & learning optimization algorithm: A socio-inspired optimization methodology. Fut. Gener. Comput. Syst. 2018, 81, 252–272. [Google Scholar]
  6. Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 43–55. [Google Scholar]
  7. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  8. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  9. Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie dog optimization algorithm. Neural Comput. Appl. 2022, 34, 20017–20065. [Google Scholar] [CrossRef]
  10. Mirjalili, S.Z.; Mirjalili, S.; Saremi, S.; Faris, H.; Aljarah, I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl. Intell. 2018, 48, 805–820. [Google Scholar] [CrossRef]
  11. Shehab, M.; Abualigah, L.; Al Hamad, H.; Alabool, H.; Alshinwan, M.; Khasawneh, A.M. Moth–flame optimization algorithm: Variants and applications. Neural Comput. Appl. 2020, 32, 9859–9884. [Google Scholar] [CrossRef]
  12. Yang, X.-S.; Slowik, A. Firefly algorithm. In Swarm Intelligence Algorithms; CRC Press: Boca Raton, FL, USA, 2020; pp. 163–174. [Google Scholar]
  13. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  14. Abualigah, L.; Shehab, M.; Alshinwan, M.; Mirjalili, S.; Abd Elaziz, M. Ant lion optimizer: A comprehensive survey of its variants and applications. Arch. Comput. Methods Eng. 2021, 28, 1397–1416. [Google Scholar] [CrossRef]
  15. Civicioglu, P. Evolutionary Strategies algorithm (ES) search optimization algorithm for numerical optimization problems. Appl. Math. Comput. 2013, 219, 8121–8144. [Google Scholar]
  16. Price, K.V. Differential evolution. In Handbook of Optimization; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–214. [Google Scholar]
  17. Cheraghalipour, A.; Hajiaghaei-Keshteli, M.; Paydar, M.M. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems. Eng. Appl. Artif. Intell. 2018, 72, 393–414. [Google Scholar] [CrossRef]
  18. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  19. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. (Ny) 2013, 222, 175–184. [Google Scholar] [CrossRef]
  20. Hsiao, Y.-T.; Chuang, C.-L.; Jiang, J.-A.; Chien, C.-C. A novel optimization algorithm: Space gravitational optimization. In Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, Waikoloa, HI, USA, 12 October 2005; Volume 3, pp. 2323–2328. [Google Scholar]
  21. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  22. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Fut. Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  23. Gao, Z.-M.; Zhao, J.; Hu, Y.-R.; Chen, H.-F. The challenge for the nature-inspired global optimization algorithms: Non-symmetric benchmark functions. IEEE Access 2021, 9, 106317–106339. [Google Scholar] [CrossRef]
  24. Alsarhan, T.; Ali, U.; Lu, H. Enhanced discriminative graph convolutional network with adaptive temporal modelling for skeleton-based action recognition. Comput. Vis. Image Underst. 2022, 216, 103348. [Google Scholar] [CrossRef]
  25. Zhang, Y.; Wang, Y.; Li, S.; Yao, F.; Tao, L.; Yan, Y.; Zhao, J.; Gao, Z. An enhanced adaptive comprehensive learning hybrid algorithm of Rao-1 and JAYA algorithm for parameter extraction of photovoltaic models. Math. Biosci. Eng. 2022, 19, 5610–5637. [Google Scholar] [CrossRef]
  26. AlMahmoud, R.H.; Hammo, B.; Faris, H. A modified bond energy algorithm with fuzzy merging and its application to Arabic text document clustering. Expert Syst. Appl. 2020, 159, 113598. [Google Scholar] [CrossRef]
  27. Li, S.; Gong, W.; Wang, L.; Gu, Q. Multi-objective optimal power flow with stochastic wind and solar power. Appl. Soft Comput. 2022, 114, 108045. [Google Scholar] [CrossRef]
  28. Abualigah, L.; Almotairi, K.H.; Abd Elaziz, M.; Shehab, M.; Altalhi, M. Enhanced Flow Direction Arithmetic Optimization Algorithm for mathematical optimization problems with applications of data clustering. Eng. Anal. Bound. Elem. 2022, 138, 13–29. [Google Scholar] [CrossRef]
  29. Elkasem, A.H.A.; Khamies, M.; Magdy, G.; Taha, I.B.M.; Kamel, S. Frequency stability of AC/DC interconnected power systems with wind energy using arithmetic optimization algorithm-based fuzzy-PID controller. Sustainability 2021, 13, 12095. [Google Scholar] [CrossRef]
  30. Bansal, P.; Gehlot, K.; Singhal, A.; Gupta, A. Automatic detection of osteosarcoma based on integrated features and feature selection using binary arithmetic optimization algorithm. Multimed. Tools Appl. 2022, 81, 8807–8834. [Google Scholar] [CrossRef] [PubMed]
  31. Ewees, A.A.; Al-qaness, M.A.A.; Abualigah, L.; Oliva, D.; Algamal, Z.Y.; Anter, A.M.; Ali Ibrahim, R.; Ghoniem, R.M.; Abd Elaziz, M. Boosting arithmetic optimization algorithm with genetic algorithm operators for feature selection: Case study on cox proportional hazards model. Mathematics 2021, 9, 2321. [Google Scholar] [CrossRef]
  32. Zhang, Y.-J.; Yan, Y.-X.; Zhao, J.; Gao, Z.-M. AOAAO: The hybrid algorithm of arithmetic optimization algorithm with aquila optimizer. IEEE Access 2022, 10, 10907–10933. [Google Scholar] [CrossRef]
  33. Zhang, Y.-J.; Wang, Y.-F.; Yan, Y.-X.; Zhao, J.; Gao, Z.-M. LMRAOA: An improved arithmetic optimization algorithm with multi-leader and high-speed jumping based on opposition-based learning solving engineering and numerical problems. Alex. Eng. J. 2022, 61, 12367–12403. [Google Scholar] [CrossRef]
  34. Shehab, M.; Daoud, M.S.; AlMimi, H.M.; Abualigah, L.M.; Khader, A.T. Hybridising cuckoo search algorithm for extracting the ODF maxima in spherical harmonic representation. Int. J. Bio-Inspired Comput. 2019, 14, 190–199. [Google Scholar] [CrossRef]
  35. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  36. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Fut. Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  37. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  38. Abualigah, L.; Shehab, M.; Alshinwan, M.; Alabool, H. Salp swarm algorithm: A comprehensive survey. Neural Comput. Appl. 2020, 32, 11195–11215. [Google Scholar] [CrossRef]
  39. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Fut. Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  40. Bertsekas, D.P. Constrained Optimization and Lagrange Multiplier Methods; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  41. Coello, C.A.C. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Comput. Methods Appl. Mech. Eng. 2002, 191, 1245–1287. [Google Scholar] [CrossRef]
Figure 1. Metaheuristic algorithms.
Figure 2. The AOA optimization technique.
Figure 3. The proposed AOACS algorithm.
Figure 4. The achieved results of the tested qualitative analysis for 23 benchmark functions.
Figure 5. Convergence behavior of the compared approaches on the test functions (F1–F6, F10, and F11), where the dimension is 10.
Figure 6. Convergence behavior of the compared methods on the test functions (F1–F4, F7, F10, F11, F15, F21–F23), where the dimension is 10.
Figure 7. Welded beam.
Figure 8. Three-Bar Truss Design.
Figure 9. Stepped Cantilever Beam Design.
Figure 10. Speed Reducer Design.
Table 1. An overview of the presented methods.
Ref.Method NameAbbreviationIdeaYear
[2]Fireworks AlgorithmFWTwo different sorts of explosion (search) processes are employed, together with well-designed mechanisms for generating a variety of sparks.2010
[3]Child Drawing Development Optimization AlgorithmCDDBy applying the golden ratio to enhance the beauty of their work, the learning behavior and cognitive development of the child are optimized.2022
[4]Teaching–learning-based optimization algorithmTBLAThe approach is based on how a teacher’s influence affects students’ performance in a class.2012
[5]Socio Evolution & Learning OptimizerSELOThis approach draws its inspiration from how people develop social skills when they are arranged into families in a societal setting.2018
[9]Prairie Dog Optimization AlgorithmPDOAThis approach mimics the behavior of prairie dogs in their native environment.2022
[13]Aquila optimizerAOThis technique draws inspiration from the way aquilas grab their prey in the wild.2021
[21]The arithmetic optimization algorithmAOAThis technique makes use of the distributional properties of the primary mathematical arithmetic operators.2021
Table 2. Benchmark functions.
Fun. Fun. DescriptionDim.Rangefmin
Unimodal Benchmark Functions
F1 $f(x) = \sum_{i=1}^{n} x_i^2$ 10,100[−100, 100]0
F2 $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ 10,100[−10, 10]0
F3 $f(x) = \sum_{i=1}^{n} \left(\sum_{j=1}^{i} x_j\right)^2$ 10,100[−100, 100]0
F4 $f(x) = \max_i \{|x_i|, 1 \le i \le n\}$ 10,100[−100, 100]0
F5 $f(x) = \sum_{i=1}^{n-1} \left[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\right]$ 10,100[−30, 30]0
F6 $f(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ 10,100[−100, 100]0
F7 $f(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ 10,100[−1.28, 1.28]0
Multimodal Benchmark Functions
F8 $f(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ 10,100[−500, 500]−418.9829
F9 $f(x) = \sum_{i=1}^{n} \left[x_i^2 - 10\cos(2\pi x_i) + 10\right]$ 10,100[−5.12, 5.12]0
F10 $f(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$ 10,100[−32, 32]0
F11 $f(x) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)$ 10,100[−600, 600]0
F12 $f(x) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k(x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k(-x_i - a)^m, & x_i < -a \end{cases}$ 10,100[−50, 50]0
F13 $f(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n}(x_i - 1)^2\left[1 + \sin^2(3\pi x_i + 1)\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ 10,100[−50, 50]0
Fixed-Dimension Multimodal Benchmark Functions
F14 $f(x) = \left(\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j + \sum_{i=1}^{2}(x_i - a_{ij})^6}\right)^{-1}$ 2[−65, 65]1
F15 $f(x) = \sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ 4[−5, 5]0.0003
F16 $f(x) = 4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ 2[−5, 5]−1.0316
F17 $f(x) = \left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1 - \frac{1}{8\pi}\right)\cos x_1 + 10$ 2[−5, 5]0.398
F18 $f(x) = \left[1 + (x_1 + x_2 + 1)^2(19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1x_2 + 3x_2^2)\right] \times \left[30 + (2x_1 - 3x_2)^2(18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1x_2 + 27x_2^2)\right]$ 2[−2, 2]3
F19 $f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2\right)$ 3[−1, 2]−3.86
F20 $f(x) = -\sum_{i=1}^{4} c_i \exp\left(-\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2\right)$ 6[0, 1]−3.32
F21 $f(x) = -\sum_{i=1}^{5}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ 4[0, 10]−10.1532
F22 $f(x) = -\sum_{i=1}^{7}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ 4[0, 10]−10.4028
F23 $f(x) = -\sum_{i=1}^{10}\left[(X - a_i)(X - a_i)^T + c_i\right]^{-1}$ 4[0, 10]−10.5363
Table 3. Parameter values of the proposed AOACS algorithm and other algorithms.
AlgorithmParameters
AOACSμ = 0.5;
α = 5;
PSOwMax  =  0.9;
wMin  =  0.2;
c1  =  2;
c2  =  2
WOAa1 ∈ [2, 0];
a2 ∈ [−1, −2];
b = 1
SSARandom values c2 and c3 [1, 0]
SMAz = 0.01
HHOα = 1.5
AOA μ = 0.5;
α = 5;
Table 4. Results from methods of comparison on 23 benchmark functions (F1–F23), where the dimension is 10.
FunctionMeasure Algorithm
PSOWOASSASMAHHOAOAAOACS
F1Best6.12 × 1033.64 × 1026.51 × 1032.76 × 1011.15 × 1023.66 × 10−93.64 × 10−68
Average3.00 × 1031.60 × 1021.49 × 1031.11 × 1015.95 × 1017.88 × 10−107.28 × 10−69
Worst 1.43 × 1031.81 × 1013.92 × 1015.57 × 10−12.70 × 1016.73 × 10−171.06 × 10−76
STD2.07 × 1031.28 × 1022.81 × 1031.25 × 1013.69 × 1011.61 × 10−91.63 × 10−68
p-value1.19 × 10−22.36 × 10−22.69 × 10−18.04 × 10−26.89 × 10−313.05 × 10−1
h1100100
F2Best3.76 × 1011.14 × 1010.452 × 1010.238 × 1010.494 × 1019.87 × 10−53.77 × 10−27
Average1.98 × 1010.450 × 1010.226 × 1015.38 × 10−10.260 × 1012.06 × 10−57.54 × 10−28
Worst 0.907 × 10−10.203 × 1015.15 × 10−12.38 × 10−20.103 × 1012.19 × 10−109.73 × 10−39
STD1.34 × 1010.394 × 1010.160 × 1010.103 × 1010.162 × 1014.37 × 10−51.68 × 10−27
p-value1.07 × 10−23.39 × 10−21.34 × 10−22.76 × 10−17.03 × 10−313.23 × 10−1
h1110100
F3Best1.74 × 1047.62 × 1035.40 × 1033.15 × 1042.01 × 1039.11 × 10−28.27 × 10−55
Average7.44 × 1033.19 × 1033.94 × 1031.75 × 1047.80 × 1021.82 × 10−21.69 × 10−55
Worst 4.24 × 1031.19 × 1031.26 × 1036.84 × 1038.88 × 1015.68 × 10−141.54 × 10−63
STD5.60 × 1032.60 × 1031.80 × 1039.39 × 1038.89 × 1024.08 × 10−23.68 × 10−55
p-value1.79 × 10−22.54 × 10−21.19 × 10−33.17 × 10−38.56 × 10−213.46 × 10−1
h1111000
F4Best6.21 × 1011.94 × 1016.43 × 1017.67 × 1019.86 × 1006.42 × 10−54.95 × 10−33
Average3.67 × 1011.58 × 1013.64 × 1015.19 × 1010.697 × 1011.93 × 10−52.13 × 10−33
Worst 2.10 × 1011.34 × 1011.61 × 1012.52 × 1010.400 × 1014.89 × 10−77.72 × 10−38
STD1.65 × 1010.263 × 1012.12 × 1011.98 × 1010.275 × 1012.64 × 10−52.45 × 10−33
p-value1.09 × 10−39.00 × 10−74.89 × 10−33.78 × 10−44.74 × 10−411.40 × 10−1
h1111100
F5Best1.08 × 1077.00 × 1034.19 × 1053.79 × 1043.63 × 1030.894 × 1018.99 × 100
Average2.98 × 1062.02 × 1032.42 × 1051.94 × 1042.71 × 1030.734 × 1018.94 × 100
Worst 5.59 × 1043.23 × 1028.08 × 1046.84 × 1025.27 × 1020.107 × 1018.79 × 100
STD4.51 × 1062.86 × 1031.34 × 1051.74 × 1041.30 × 1030.351 × 1018.25 × 10−2
p-value1.77 × 10−11.55 × 10−13.72 × 10−33.68 × 10−21.65 × 10−313.39 × 10−1
h0011100
F6Best1.12 × 1042.32 × 1031.87 × 1036.20 × 1017.76 × 1014.03 × 10−11.54 × 100
Average4.23 × 1039.19 × 1026.91 × 1022.29 × 1014.00 × 1011.22 × 10−11.24 × 100
Worst 1.11 × 1037.32 × 1011.27 × 1010.149 × 1011.30 × 1011.90 × 10−59.72 × 10−1
STD4.19 × 1038.99 × 1027.22 × 1022.55 × 1012.80 × 1011.76 × 10−12.16 × 10−1
p-value5.39 × 10−25.16 × 10−26.49 × 10−28.03 × 10−21.29 × 10−211.90 × 10−5
h0000101
F7Best5.41 × 10−12.84 × 1005.88 × 10−19.58 × 10−11.99 × 10−18.87 × 10−31.61 × 10−2
Average2.93 × 10−19.32 × 10−13.12 × 10−13.23 × 10−11.01 × 10−16.33 × 10−36.06 × 10−3
Worst 1.89 × 10−13.70 × 10−15.34 × 10−25.25 × 10−36.27 × 10−23.89 × 10−34.13 × 10−4
STD1.53 × 10−10.107 × 1012.04 × 10−13.80 × 10−15.78 × 10−21.83 × 10−36.38 × 10−3
p-value3.03 × 10−38.90 × 10−21.00 × 10−29.95 × 10−26.26 × 10−319.31 × 10−1
h1010100
F8Best−1.46 × 103−1.17 × 103−1.38 × 103−1.65 × 103−1.72 × 103−1.73 × 103−1.93 × 103
Average−1.78 × 103−1.40 × 103−1.59 × 103−2.10 × 103−1.93 × 103−3.17 × 103−3.64 × 103
Worst −2.17 × 103−2.01 × 103−1.91 × 103−2.95 × 103−2.15 × 103−4.19 × 103−4.17 × 103
STD3.15 × 1023.47 × 1022.02 × 1025.25 × 1022.00 × 1021.04 × 1039.60 × 102
p-value2.10 × 10−26.86 × 10−31.04 × 10−27.48 × 10−23.04 × 10−214.82 × 10−1
h1110100
F9Best9.21 × 1017.14 × 1018.80 × 1019.44 × 1016.90 × 1014.21 × 10−80
Average6.63 × 1015.48 × 1016.21 × 1016.80 × 1013.92 × 1011.03 × 10−80
Worst 3.98 × 1014.02 × 1012.28 × 1010.292 × 1011.09 × 10100
STD1.94 × 1011.15 × 1012.62 × 1013.78 × 1012.76 × 1011.80 × 10−80
p-value6.08 × 10−55.32 × 10−67.28 × 10−43.80 × 10−31.30 × 10−212.35 × 10−1
h1111100
F10Best9.21 × 1017.14 × 1018.80 × 1019.44 × 1016.90 × 1014.21 × 10−80
Average6.63 × 1015.48 × 1016.21 × 1016.80 × 1013.92 × 1011.03 × 10−80
Worst 3.98 × 1014.02 × 1012.28 × 1010.292 × 1011.09 × 10100
STD1.94 × 1011.15 × 1012.62 × 1013.78 × 1012.76 × 1011.80 × 10−80
p-value6.08 × 10−55.32 × 10−67.28 × 10−43.80 × 10−31.30 × 10−212.35 × 10−1
h1111100
F11Best6.43 × 1011.63 × 1022.53 × 1011.19 × 1000.316 × 1012.76 × 10−40
Average2.38 × 1011.01 × 1021.09 × 1018.03 × 10−10.204 × 1018.53 × 10−50
Worst 0.933 × 1016.30 × 1010.211 × 1013.49 × 10−10.144 × 1015.20 × 10−120
STD2.29 × 1013.76 × 1010.997 × 1013.38 × 10−16.59 × 10−11.25 × 10−40
p-value4.90 × 10−23.31 × 10−44.00 × 10−27.18 × 10−41.23 × 10−411.65 × 10−1
h1111100
F12Best6.31 × 1060.832 × 1012.61 × 1071.34 × 1041.57 × 1012.78 × 10−10.151 × 101
Average1.36 × 1060.337 × 1015.81 × 1062.70 × 1030.712 × 1016.93 × 10−24.46 × 10−1
Worst 3.92 × 1011.51 × 10−15.10 × 1042.50 × 10−19.75 × 10−13.31 × 10−44.55 × 10−2
STD2.78 × 1060.369 × 1011.14 × 1075.98 × 1030.542 × 1011.19 × 10−16.00 × 10−1
p-value3.07 × 10−18.08 × 10−22.85 × 10−13.42 × 10−11.97 × 10−212.05 × 10−1
h0000100
F13Best1.95 × 1072.03 × 1024.85 × 1064.44 × 1061.42 × 1012.35 × 10−19.56 × 10−1
Average5.77 × 1065.72 × 1011.58 × 1061.30 × 1060.648 × 1018.06 × 10−28.84 × 10−1
Worst 7.02 × 1054.66 × 10−11.36 × 1036.52 × 1010.179 × 1019.17 × 10−58.04 × 10−1
STD7.97 × 1068.72 × 1011.97 × 1061.89 × 1060.469 × 1019.47 × 10−26.08 × 10−2
p-value1.44 × 10−11.81 × 10−11.11 × 10−11.65 × 10−11.57 × 10−212.39 × 10−7
h0000101
F14Worst 2.20 × 1011.27 × 1011.27 × 1011.56 × 1011.65 × 1011.17 × 1011.64 × 101
Average1.72 × 1010.471 × 1010.646 × 1010.993 × 1011.15 × 1010.848 × 1010.957 × 101
Best1.46 × 1019.98 × 10−10.199 × 1010.595 × 1010.397 × 1010.320 × 1010.298 × 101
STD0.311 × 1010.535 × 1010.568 × 1010.424 × 1010.515 × 1010.352 × 1010.524 × 101
p-value2.00 × 10−316.29 × 10−11.26 × 10−17.54 × 10−22.25 × 10−11.85 × 10−1
h1000000
F15Worst 6.17 × 10−25.59 × 10−37.91 × 10−32.64 × 10−23.60 × 10−23.62 × 10−31.64 × 10−2
Average2.54 × 10−22.81 × 10−34.61 × 10−31.42 × 10−21.11 × 10−21.43 × 10−36.05 × 10−3
Best5.03 × 10−37.93 × 10−41.52 × 10−35.74 × 10−41.78 × 10−36.44 × 10−41.90 × 10−3
STD2.35 × 10−21.95 × 10−32.63 × 10−31.04 × 10−21.43 × 10−21.24 × 10−35.93 × 10−3
p-value6.49 × 10−212.54 × 10−14.28 × 10−22.36 × 10−12.20 × 10−12.79 × 10−1
h0001000
F16Worst −0.103 × 101−0.102 × 101−0.100 × 101−0.101 × 101−9.49 × 10−1−0.103 × 101−0.103 × 101
Average−0.103 × 101−0.103 × 101−0.102 × 101−0.102 × 101−9.99 × 10−1−0.103 × 101−0.103 × 101
Best−0.103 × 101−0.103 × 101−0.103 × 101−0.103 × 101−0.103 × 101−0.103 × 101−0.103 × 101
STD7.72 × 10−53.88 × 10−31.63 × 10−21.03 × 10−23.42 × 10−24.18 × 10−54.17 × 10−4
p-value1.07 × 10−112.25 × 10−12.58 × 10−19.53 × 10−21.07 × 10−11.35 × 10−1
h0000000
F17Worst 3.98 × 10−14.09 × 10−15.19 × 10−10.277 × 10−10.150 × 10−10.411 × 10−13.98 × 10−1
Average3.98 × 10−14.03 × 10−14.23 × 10−10.104 × 10−18.57 × 10−10.114 × 10−13.98 × 10−1
Best3.98 × 10−13.98 × 10−13.98 × 10−14.08 × 10−14.00 × 10−13.98 × 10−13.98 × 10−1
STD1.18 × 10−44.52 × 10−35.39 × 10−29.82 × 10−14.61 × 10−10.166 × 10−17.87 × 10−6
p-value3.59 × 10−214.45 × 10−11.87 × 10−15.89 × 10−23.46 × 10−13.34 × 10−2
h1000001
F18Worst 9.18 × 1010.453 × 10−13.00 × 1012.51 × 1011.79 × 1019.37 × 1010.323 × 10−1
Average3.77 × 1010.331 × 10−11.53 × 1011.03 × 1010.629 × 10−12.13 × 1010.305 × 10−1
Best0.30 × 10−10.30 × 10−10.30 × 10−10.30 × 10−10.30 × 10−10.304 × 10−10.300 × 10−1
STD4.75 × 1016.78 × 10−11.38 × 1011.03 × 1010.649 × 10−14.05 × 1011.03 × 10−1
p-value1.44 × 10−118.72 × 10−21.68 × 10−10.338 × 10−13.49 × 10−14.09 × 10−1
h0000000
F19Worst −0.270 × 10−1−0.329 × 10−1−0.385 × 10−1−0.297 × 10−1−0.261 × 10−1−0.377 × 10−1−0.386 × 10−1
Average−0.330 × 10−1−0.368 × 10−1−0.386 × 10−1−0.348 × 10−1−0.346 × 10−1−0.382 × 10−1−0.386 × 10−1
Best−0.386 × 10−1−0.385 × 10−1−0.386 × 10−1−0.378 × 10−1−0.384 × 10−1−0.386 × 10−1−0.386 × 10−1
STD4.92 × 10−12.26 × 10−14.27 × 10−33.26 × 10−14.96 × 10−14.24 × 10−21.01 × 10−3
p-value1.49 × 10−11.00 × 1001.24 × 10−12.83 × 10−13.83 × 10−12.12 × 10−11.15 × 10−1
h0000000
F20Worst −0.114 × 10−1−0.151 × 10−1−0.259 × 10−1−0.213 × 10−1−0.171 × 10−1−0.311 × 10−1−0.313 × 10−1
Average−0.197 × 10−1−0.211 × 10−1−0.287 × 10−1−0.260 × 10−1−0.217 × 10−1−0.319 × 10−1−0.322 × 10−1
Best−0.303 × 10−1−0.262 × 10−1−0.327 × 10−1−0.314 × 10−1−0.255 × 10−1−0.331 × 10−1−0.331 × 10−1
STD9.04 × 10−14.55 × 10−12.90 × 10−14.25 × 10−13.07 × 10−18.36 × 10−27.78 × 10−2
p-value7.56 × 10−111.36 × 10−21.19 × 10−18.27 × 10−18.12 × 10−46.63 × 10−4
h0010011
F21Worst −0.267 × 10−1−0.221 × 10−1−0.489 × 10−1−3.51 × 10−1−4.84 × 10−1−0.256 × 10−1−0.253 × 10−1
Average−0.612 × 10−1−0.348 × 10−1−0.698 × 10−1−9.24 × 10−1−0.201 × 10−1−0.580 × 10−1−0.497 × 10−1
Best−0.986 × 10−1−0.467 × 10−1−1.01 × 101−0.293 × 10−1−0.382 × 10−1−1.01 × 101−1.01 × 101
STD0.339 × 10−10.115 × 10−10.274 × 10−10.112 × 10−10.142 × 10−10.348 × 10−10.343 × 10−1
p-value1.37 × 10−113.02 × 10−27.44 × 10−31.10 × 10−11.94 × 10−13.83 × 10−1
h0011000
F22Worst −0.150 × 10−1−0.303 × 10−1−0.271 × 10−1−5.20 × 10−1−3.75 × 10−1−0.364 × 10−1−0.274 × 10−1
Average−0.209 × 10−1−0.368 × 10−1−0.666 × 10−1−0.122 × 10−1−9.71 × 10−1−0.702 × 10−1−0.494 × 10−1
Best−0.275 × 10−1−0.442 × 10−1−1.03 × 101−0.207 × 10−1−0.181 × 10−1−0.980 × 10−1−0.989 × 10−1
STD5.19 × 10−15.86 × 10−10.340 × 10−15.91 × 10−15.21 × 10−10.268 × 10−10.288 × 10−1
p-value1.89 × 10−318.86 × 10−21.72 × 10−45.68 × 10−52.60 × 10−23.63 × 10−1
h1001110
F23Worst −0.144 × 10−1−0.329 × 10−1−0.514 × 10−1−4.06 × 10−1−0.107 × 10−1−0.222 × 10−1−0.241 × 10−1
Average−0.377 × 10−1−0.398 × 10−1−0.932 × 10−1−0.152 × 10−1−0.262 × 10−1−0.409 × 10−1−0.287 × 10−1
Best−0.509 × 10−1−0.470 × 10−1−1.05 × 101−0.453 × 10−1−0.454 × 10−1−0.973 × 10−1−0.381 × 10−1
STD0.163 × 10−15.79 × 10−10.235 × 10−10.170 × 10−10.162 × 10−10.322 × 10−15.69 × 10−1
p-value7.90 × 10−111.13 × 10−31.56 × 10−21.16 × 10−19.45 × 10−11.54 × 10−2
h0011001
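The h rows pair with the p-value rows through the usual 5% significance rule: h = 1 when the Wilcoxon rank-sum p-value falls below 0.05, i.e., the difference from the comparison baseline is statistically significant. A minimal sketch of that decision rule (the example p-values are the first four F15 entries from the table above):

```python
ALPHA = 0.05  # significance level used for the h flags

def h_flags(p_values):
    """Return 1 for each p-value below ALPHA (significant), else 0."""
    return [1 if p < ALPHA else 0 for p in p_values]

# First four F15 p-values: only 4.28e-2 is below 0.05
print(h_flags([6.49e-2, 1.0, 2.54e-1, 4.28e-2]))  # → [0, 0, 0, 1]
```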
Table 5. Twenty-three benchmark functions are used for the Friedman ranking test for comparative approaches, with a dimension of 10.
| Fun. | PSO | WOA | SSA | SMA | HHO | AOA | AOACS |
|---|---|---|---|---|---|---|---|
| F1 | 7 | 5 | 6 | 3 | 4 | 2 | 1 |
| F2 | 7 | 6 | 4 | 3 | 5 | 2 | 1 |
| F3 | 6 | 4 | 5 | 7 | 3 | 2 | 1 |
| F4 | 6 | 4 | 5 | 7 | 3 | 2 | 1 |
| F5 | 7 | 3 | 6 | 5 | 4 | 1 | 2 |
| F6 | 7 | 6 | 5 | 3 | 4 | 1 | 2 |
| F7 | 4 | 7 | 5 | 6 | 3 | 2 | 1 |
| F8 | 5 | 7 | 6 | 3 | 4 | 2 | 1 |
| F9 | 6 | 4 | 5 | 7 | 3 | 2 | 1 |
| F10 | 7 | 4 | 6 | 3 | 5 | 2 | 1 |
| F11 | 6 | 7 | 5 | 3 | 4 | 2 | 1 |
| F12 | 6 | 3 | 7 | 5 | 4 | 1 | 2 |
| F13 | 7 | 4 | 6 | 5 | 3 | 1 | 2 |
| F14 | 7 | 1 | 2 | 5 | 6 | 3 | 4 |
| F15 | 7 | 2 | 3 | 6 | 5 | 1 | 4 |
| F16 | 2 | 4 | 6 | 5 | 7 | 1 | 3 |
| F17 | 2 | 3 | 4 | 6 | 5 | 7 | 1 |
| F18 | 7 | 2 | 5 | 4 | 3 | 6 | 1 |
| F19 | 7 | 4 | 2 | 5 | 6 | 3 | 1 |
| F20 | 7 | 6 | 3 | 4 | 5 | 2 | 1 |
| F21 | 2 | 5 | 1 | 7 | 6 | 3 | 4 |
| F22 | 5 | 4 | 2 | 6 | 7 | 1 | 3 |
| F23 | 4 | 3 | 1 | 7 | 6 | 2 | 5 |
| Sum | 131 | 98 | 100 | 115 | 105 | 51 | 44 |
| Mean | 5.70 | 4.26 | 4.35 | 5.00 | 4.57 | 2.22 | 1.91 |
| Rank | 7 | 3 | 4 | 6 | 5 | 2 | 1 |
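The Friedman mean ranks above can be reproduced mechanically: for each benchmark function the algorithms are ranked by their average result (rank 1 = best), the per-function ranks are summed, and the sums are divided by the number of functions. A minimal sketch with hypothetical scores (tie handling by averaged ranks is omitted):

```python
def friedman_mean_ranks(scores):
    """scores: one list per benchmark function, one value per algorithm
    (lower is better). Returns each algorithm's mean rank."""
    n_algos = len(scores[0])
    rank_sums = [0] * n_algos
    for row in scores:
        # rank 1 goes to the smallest (best) value in the row
        order = sorted(range(n_algos), key=lambda i: row[i])
        for rank, i in enumerate(order, start=1):
            rank_sums[i] += rank
    return [s / len(scores) for s in rank_sums]

# Three hypothetical functions, three algorithms: the second algorithm
# wins every function, the first loses every function.
scores = [
    [3.2, 1.1, 2.5],
    [0.9, 0.4, 0.6],
    [7.0, 2.0, 5.0],
]
print(friedman_mean_ranks(scores))  # → [3.0, 1.0, 2.0]
```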
Table 6. Results from methods of comparison on 23 benchmark functions (F1–F13), where the dimension is 50.
| Function | Measure | PSO | WOA | SSA | SMA | HHO | AOA | AOACS |
|---|---|---|---|---|---|---|---|---|
| F1 | Worst | 3.10 × 10^5 | 1.15 × 10^5 | 3.94 × 10^5 | 3.44 × 10^3 | 1.56 × 10^5 | 4.33 × 10^−6 | 2.33 × 10^−41 |
| | Average | 2.27 × 10^5 | 1.05 × 10^5 | 2.63 × 10^5 | 1.28 × 10^3 | 1.27 × 10^5 | 1.32 × 10^−6 | 4.66 × 10^−42 |
| | Best | 1.91 × 10^5 | 9.80 × 10^4 | 1.69 × 10^5 | 1.36 × 10^2 | 9.96 × 10^4 | 7.18 × 10^−17 | 5.83 × 10^−65 |
| | STD | 4.92 × 10^4 | 6.61 × 10^3 | 9.40 × 10^4 | 1.29 × 10^3 | 2.13 × 10^4 | 1.94 × 10^−6 | 1.04 × 10^−41 |
| | p-value | 6.61 × 10^−6 | 4.20 × 10^−10 | 2.42 × 10^−4 | 5.69 × 10^−2 | 9.44 × 10^−7 | 1 | 1.69 × 10^−1 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| F2 | Worst | 6.06 × 10^2 | 4.47 × 10^53 | 1.40 × 10^2 | 0.563 × 10^−1 | 1.50 × 10^3 | 4.07 × 10^−4 | 1.40 × 10^−16 |
| | Average | 5.30 × 10^2 | 8.94 × 10^52 | 1.03 × 10^2 | 0.308 × 10^−1 | 6.19 × 10^2 | 1.29 × 10^−4 | 2.80 × 10^−17 |
| | Best | 3.82 × 10^2 | 8.35 × 10^43 | 5.81 × 10^1 | 3.37 × 10^−1 | 3.89 × 10^2 | 5.89 × 10^−7 | 3.36 × 10^−35 |
| | STD | 9.34 × 10^1 | 2.00 × 10^53 | 3.12 × 10^1 | 0.214 × 10^−1 | 4.93 × 10^2 | 1.73 × 10^−4 | 6.27 × 10^−17 |
| | p-value | 1.39 × 10^−6 | 3.47 × 10^−1 | 8.03 × 10^−5 | 1.24 × 10^−2 | 2.28 × 10^−2 | 1 | 0 |
| | h | 1 | 0 | 1 | 1 | 1 | 0 | 0 |
| F3 | Worst | 6.01 × 10^6 | 2.75 × 10^6 | 5.01 × 10^6 | 1.75 × 10^7 | 1.47 × 10^6 | 2.40 × 10^6 | 6.37 × 10^−17 |
| | Average | 4.15 × 10^6 | 1.14 × 10^6 | 3.67 × 10^6 | 1.03 × 10^7 | 1.00 × 10^6 | 1.36 × 10^6 | 1.27 × 10^−17 |
| | Best | 2.22 × 10^6 | 6.02 × 10^5 | 3.00 × 10^6 | 3.78 × 10^6 | 7.71 × 10^5 | 8.11 × 10^1 | 8.96 × 10^−57 |
| | STD | 1.39 × 10^6 | 9.05 × 10^5 | 8.07 × 10^5 | 5.07 × 10^6 | 3.02 × 10^5 | 9.86 × 10^5 | 2.85 × 10^−17 |
| | p-value | 6.45 × 10^−3 | 7.20 × 10^−1 | 3.70 × 10^−3 | 4.86 × 10^−3 | 4.61 × 10^−1 | 1 | 1.49 × 10^−2 |
| | h | 1 | 0 | 1 | 1 | 0 | 0 | 1 |
| F4 | Worst | 9.93 × 10^1 | 6.16 × 10^1 | 9.94 × 10^1 | 9.09 × 10^1 | 9.28 × 10^1 | 2.59 × 10^−4 | 1.05 × 10^−18 |
| | Average | 9.30 × 10^1 | 5.82 × 10^1 | 9.91 × 10^1 | 7.76 × 10^1 | 8.26 × 10^1 | 5.26 × 10^−5 | 2.10 × 10^−19 |
| | Best | 8.07 × 10^1 | 5.59 × 10^1 | 9.87 × 10^1 | 3.37 × 10^1 | 7.34 × 10^1 | 5.34 × 10^−9 | 1.77 × 10^−33 |
| | STD | 0.841 × 10^−1 | 0.225 × 10^−1 | 3.13 × 10^−1 | 2.47 × 10^1 | 0.832 × 10^−1 | 1.15 × 10^−4 | 4.71 × 10^−19 |
| | p-value | 7.67 × 10^−9 | 8.67 × 10^−12 | 1.80 × 10^−20 | 1.12 × 10^−4 | 1.80 × 10^−8 | 1 | 3.38 × 10^−1 |
| | h | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
| F5 | Worst | 1.16 × 10^9 | 2.75 × 10^8 | 2.59 × 10^9 | 2.88 × 10^5 | 3.91 × 10^8 | 1.98 × 10^2 | 1.99 × 10^2 |
| | Average | 5.56 × 10^8 | 1.93 × 10^8 | 2.39 × 10^9 | 1.57 × 10^5 | 1.77 × 10^8 | 1.58 × 10^2 | 1.99 × 10^2 |
| | Best | 2.38 × 10^8 | 1.17 × 10^8 | 2.05 × 10^9 | 7.67 × 10^2 | 6.54 × 10^7 | 0.105 × 10^−1 | 1.99 × 10^2 |
| | STD | 3.67 × 10^8 | 5.86 × 10^7 | 2.05 × 10^8 | 1.30 × 10^5 | 1.32 × 10^8 | 8.78 × 10^1 | 1.40 × 10^−2 |
| | p-value | 9.59 × 10^−3 | 7.77 × 10^−5 | 5.04 × 10^−9 | 2.67 × 10^−2 | 1.71 × 10^−2 | 1 | 3.27 × 10^−1 |
| | h | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
| F6 | Worst | 2.72 × 10^5 | 1.15 × 10^5 | 3.76 × 10^5 | 1.79 × 10^3 | 1.72 × 10^5 | 1.51 × 10^1 | 4.94 × 10^1 |
| | Average | 2.26 × 10^5 | 1.02 × 10^5 | 2.66 × 10^5 | 7.60 × 10^2 | 1.20 × 10^5 | 3.53 × 10^0 | 4.81 × 10^1 |
| | Best | 1.80 × 10^5 | 9.46 × 10^4 | 1.12 × 10^5 | 1.05 × 10^2 | 8.26 × 10^4 | 6.33 × 10^−2 | 4.73 × 10^1 |
| | STD | 3.98 × 10^4 | 7.80 × 10^3 | 1.15 × 10^5 | 7.07 × 10^2 | 3.39 × 10^4 | 0.652 × 10^−1 | 9.28 × 10^−1 |
| | p-value | 1.38 × 10^−6 | 1.97 × 10^−9 | 8.65 × 10^−4 | 4.38 × 10^−2 | 4.80 × 10^−5 | 1 | 3.57 × 10^−7 |
| | h | 1 | 1 | 1 | 1 | 1 | 0 | 1 |
| F7 | Worst | 4.08 × 10^3 | 1.00 × 10^4 | 1.01 × 10^4 | 1.41 × 10^2 | 1.12 × 10^3 | 5.47 × 10^−2 | 2.21 × 10^−2 |
| | Average | 1.94 × 10^3 | 9.34 × 10^3 | 6.56 × 10^3 | 3.46 × 10^1 | 9.18 × 10^2 | 1.48 × 10^−2 | 1.24 × 10^−2 |
| | Best | 4.29 × 10^2 | 8.65 × 10^3 | 4.09 × 10^3 | 2.19 × 10^0 | 6.34 × 10^2 | 5.30 × 10^−4 | 2.16 × 10^−3 |
| | STD | 1.46 × 10^3 | 6.00 × 10^2 | 2.43 × 10^3 | 5.96 × 10^1 | 2.02 × 10^2 | 2.27 × 10^−2 | 8.80 × 10^−3 |
| | p-value | 1.77 × 10^−2 | 5.07 × 10^−10 | 3.15 × 10^−4 | 2.31 × 10^−1 | 7.61 × 10^−6 | 1.00 × 10^0 | 8.30 × 10^−1 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| F8 | Worst | −7.26 × 10^3 | −3.51 × 10^3 | −6.05 × 10^3 | −4.99 × 10^4 | −6.34 × 10^3 | −2.84 × 10^4 | −2.42 × 10^4 |
| | Average | −9.61 × 10^3 | −5.16 × 10^3 | −6.77 × 10^3 | −5.51 × 10^4 | −9.25 × 10^3 | −4.74 × 10^4 | −5.90 × 10^4 |
| | Best | −1.32 × 10^4 | −8.14 × 10^3 | −7.99 × 10^3 | −5.98 × 10^4 | −1.12 × 10^4 | −7.61 × 10^4 | −8.33 × 10^4 |
| | STD | 2.26 × 10^3 | 1.89 × 10^3 | 7.39 × 10^2 | 3.61 × 10^3 | 2.57 × 10^3 | 2.00 × 10^4 | 2.44 × 10^4 |
| | p-value | 3.00 × 10^−3 | 1.54 × 10^−3 | 1.90 × 10^−3 | 4.20 × 10^−1 | 2.88 × 10^−3 | 1 | 4.34 × 10^−1 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| F9 | Worst | 2.42 × 10^3 | 3.24 × 10^3 | 2.57 × 10^3 | 1.98 × 10^3 | 2.20 × 10^3 | 3.14 × 10^−6 | 0 |
| | Average | 2.26 × 10^3 | 3.17 × 10^3 | 1.05 × 10^3 | 6.35 × 10^2 | 2.06 × 10^3 | 7.15 × 10^−7 | 0 |
| | Best | 2.19 × 10^3 | 3.09 × 10^3 | 3.84 × 10^2 | 1.50 × 10^−1 | 1.90 × 10^3 | 0 | 0 |
| | STD | 9.29 × 10^1 | 6.11 × 10^1 | 8.69 × 10^2 | 8.10 × 10^2 | 1.40 × 10^2 | 1.37 × 10^−6 | 0 |
| | p-value | 1.44 × 10^−11 | 3.38 × 10^−14 | 2.69 × 10^−2 | 1.18 × 10^−1 | 8.01 × 10^−10 | 1 | 2.76 × 10^−1 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| F10 | Worst | 2.00 × 10^1 | 1.86 × 10^1 | 2.09 × 10^1 | 0.472 × 10^−1 | 1.83 × 10^1 | 1.18 × 10^−4 | 8.88 × 10^−16 |
| | Average | 1.89 × 10^1 | 1.80 × 10^1 | 1.87 × 10^1 | 0.234 × 10^−1 | 1.77 × 10^1 | 4.16 × 10^−5 | 8.88 × 10^−16 |
| | Best | 1.83 × 10^1 | 1.75 × 10^1 | 1.35 × 10^1 | 1.90 × 10^−2 | 1.69 × 10^1 | 3.37 × 10^−7 | 8.88 × 10^−16 |
| | STD | 7.16 × 10^−1 | 4.39 × 10^−1 | 0.306 × 10^−1 | 0.222 × 10^−1 | 6.69 × 10^−1 | 5.44 × 10^−5 | 0.00 × 10^0 |
| | p-value | 7.67 × 10^−12 | 2.27 × 10^−13 | 7.94 × 10^−7 | 4.59 × 10^−2 | 7.59 × 10^−12 | 1 | 1.26 × 10^−1 |
| | h | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
| F11 | Worst | 2.11 × 10^3 | 2.46 × 10^3 | 3.12 × 10^3 | 1.82 × 10^2 | 1.27 × 10^3 | 5.67 × 10^−6 | 0 |
| | Average | 1.61 × 10^3 | 2.29 × 10^3 | 2.22 × 10^3 | 4.26 × 10^1 | 1.15 × 10^3 | 1.14 × 10^−6 | 0 |
| | Best | 6.87 × 10^2 | 2.17 × 10^3 | 7.28 × 10^2 | 0.125 × 10^−1 | 1.06 × 10^3 | 6.66 × 10^−16 | 0 |
| | STD | 5.94 × 10^2 | 1.28 × 10^2 | 9.58 × 10^2 | 7.82 × 10^1 | 8.29 × 10^1 | 2.53 × 10^−6 | 0 |
| | p-value | 3.06 × 10^−4 | 1.65 × 10^−10 | 8.50 × 10^−4 | 2.58 × 10^−1 | 1.30 × 10^−9 | 1 | 3.46 × 10^−1 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| F12 | Worst | 9.71 × 10^8 | 8.83 × 10^7 | 6.94 × 10^9 | 3.29 × 10^7 | 5.27 × 10^8 | 2.75 × 10^−2 | 0.347 × 10^−1 |
| | Average | 5.21 × 10^8 | 6.96 × 10^7 | 5.60 × 10^9 | 6.90 × 10^6 | 2.57 × 10^8 | 9.13 × 10^−3 | 0.144 × 10^−1 |
| | Best | 2.13 × 10^8 | 4.81 × 10^7 | 4.79 × 10^9 | 4.74 × 10^−1 | 1.16 × 10^8 | 1.33 × 10^−3 | 8.87 × 10^−2 |
| | STD | 3.17 × 10^8 | 1.71 × 10^7 | 8.83 × 10^8 | 1.46 × 10^7 | 1.62 × 10^8 | 1.07 × 10^−2 | 0.124 × 10^−1 |
| | p-value | 6.24 × 10^−3 | 1.71 × 10^−5 | 5.97 × 10^−7 | 3.20 × 10^−1 | 7.54 × 10^−3 | 1 | 3.26 × 10^−2 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 1 |
| F13 | Worst | 1.62 × 10^9 | 5.92 × 10^8 | 1.29 × 10^10 | 1.50 × 10^8 | 1.14 × 10^9 | 0.367 × 10^−1 | 2.00 × 10^1 |
| | Average | 1.19 × 10^9 | 3.66 × 10^8 | 1.04 × 10^10 | 3.70 × 10^7 | 6.67 × 10^8 | 0.103 × 10^−1 | 1.73 × 10^1 |
| | Best | 1.72 × 10^8 | 1.97 × 10^8 | 8.31 × 10^9 | 6.51 × 10^2 | 1.08 × 10^8 | 1.77 × 10^−2 | 0.660 × 10^−1 |
| | STD | 6.06 × 10^8 | 1.64 × 10^8 | 1.70 × 10^9 | 6.36 × 10^7 | 3.92 × 10^8 | 0.151 × 10^−1 | 0.598 × 10^−1 |
| | p-value | 2.27 × 10^−3 | 1.07 × 10^−3 | 8.07 × 10^−7 | 2.29 × 10^−1 | 5.23 × 10^−3 | 1 | 3.63 × 10^−4 |
| | h | 1 | 1 | 1 | 0 | 1 | 0 | 1 |
Table 7. Thirteen benchmark functions are used for the Friedman ranking test for comparative approaches, with a dimension of 50.
| Fun. | PSO | WOA | SSA | SMA | HHO | AOA | AOACS |
|---|---|---|---|---|---|---|---|
| F1 | 6 | 4 | 7 | 3 | 5 | 2 | 1 |
| F2 | 5 | 7 | 4 | 3 | 6 | 2 | 1 |
| F3 | 6 | 3 | 5 | 7 | 2 | 4 | 1 |
| F4 | 6 | 3 | 7 | 4 | 5 | 2 | 1 |
| F5 | 6 | 5 | 7 | 3 | 4 | 1 | 2 |
| F6 | 6 | 4 | 7 | 3 | 5 | 1 | 2 |
| F7 | 5 | 7 | 6 | 3 | 4 | 2 | 1 |
| F8 | 4 | 7 | 6 | 2 | 5 | 3 | 1 |
| F9 | 6 | 7 | 4 | 3 | 5 | 2 | 1 |
| F10 | 7 | 5 | 6 | 3 | 4 | 2 | 1 |
| F11 | 5 | 7 | 6 | 3 | 4 | 2 | 1 |
| F12 | 6 | 4 | 7 | 3 | 5 | 1 | 2 |
| F13 | 6 | 4 | 7 | 3 | 5 | 1 | 2 |
| Sum | 74 | 67 | 79 | 43 | 59 | 25 | 17 |
| Mean | 5.69 | 5.15 | 6.08 | 3.31 | 4.54 | 1.92 | 1.31 |
| Rank | 6 | 5 | 7 | 3 | 4 | 2 | 1 |
Table 8. Results from methods of comparison on the ten CEC 2019 benchmark functions (CEC-1–CEC-10).
| Function | Measure | PSO | WOA | SSA | SMA | HHO | AOA | AOACS |
|---|---|---|---|---|---|---|---|---|
| CEC-1 | Best | 1.81 × 10^12 | 6.00 × 10^13 | 3.11 × 10^12 | 4.03 × 10^8 | 1.78 × 10^12 | 2.15 × 10^11 | 1.61 × 10^6 |
| | Average | 7.75 × 10^11 | 2.19 × 10^13 | 9.46 × 10^11 | 8.10 × 10^7 | 7.64 × 10^11 | 7.88 × 10^10 | 6.70 × 10^5 |
| | Worst | 1.88 × 10^11 | 2.95 × 10^12 | 1.23 × 10^11 | 7.83 × 10^4 | 6.40 × 10^10 | 1.94 × 10^10 | 1.36 × 10^5 |
| | STD | 6.72 × 10^11 | 2.22 × 10^13 | 1.23 × 10^12 | 1.80 × 10^8 | 7.85 × 10^11 | 8.00 × 10^10 | 5.89 × 10^5 |
| | p-value | 3.27 × 10^−2 | 5.87 × 10^−2 | 1.24 × 10^−1 | 1.00 × 10^0 | 6.13 × 10^−2 | 5.91 × 10^−2 | 3.48 × 10^−1 |
| | h | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| CEC-2 | Best | 1.54 × 10^3 | 3.00 × 10^4 | 1.43 × 10^2 | 1.82 × 10^1 | 8.88 × 10^3 | 1.96 × 10^2 | 1.93 × 10^1 |
| | Average | 4.41 × 10^2 | 2.38 × 10^4 | 6.98 × 10^1 | 1.77 × 10^1 | 4.51 × 10^3 | 8.25 × 10^1 | 1.82 × 10^1 |
| | Worst | 3.97 × 10^1 | 1.56 × 10^4 | 1.97 × 10^1 | 1.74 × 10^1 | 6.72 × 10^2 | 3.02 × 10^1 | 1.75 × 10^1 |
| | STD | 6.27 × 10^2 | 7.27 × 10^3 | 5.72 × 10^1 | 3.29 × 10^−1 | 3.31 × 10^3 | 6.79 × 10^1 | 7.53 × 10^−1 |
| | p-value | 1.69 × 10^−1 | 8.22 × 10^−5 | 7.56 × 10^−2 | 1 | 1.62 × 10^−2 | 6.55 × 10^−2 | 1.79 × 10^−1 |
| | h | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| CEC-3 | Best | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 |
| | Average | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 |
| | Worst | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 | 1.27 × 10^1 |
| | STD | 1.33 × 10^−3 | 5.29 × 10^−4 | 9.19 × 10^−4 | 3.52 × 10^−4 | 2.78 × 10^−3 | 1.74 × 10^−3 | 1.12 × 10^−3 |
| | p-value | 7.92 × 10^−3 | 4.87 × 10^−1 | 7.25 × 10^−3 | 1.00 × 10^0 | 2.25 × 10^−1 | 1.82 × 10^−1 | 1.97 × 10^−1 |
| | h | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
| CEC-4 | Best | 1.14 × 10^4 | 1.53 × 10^4 | 1.48 × 10^4 | 3.00 × 10^4 | 1.70 × 10^4 | 5.07 × 10^3 | 5.16 × 10^3 |
| | Average | 8.61 × 10^3 | 1.10 × 10^4 | 9.93 × 10^3 | 1.73 × 10^4 | 1.24 × 10^4 | 4.18 × 10^3 | 3.84 × 10^3 |
| | Worst | 3.42 × 10^3 | 7.25 × 10^3 | 3.68 × 10^3 | 6.73 × 10^3 | 4.36 × 10^3 | 3.54 × 10^3 | 2.32 × 10^3 |
| | STD | 3.29 × 10^3 | 3.24 × 10^3 | 4.79 × 10^3 | 9.86 × 10^3 | 4.74 × 10^3 | 6.80 × 10^2 | 1.04 × 10^3 |
| | p-value | 9.96 × 10^−2 | 2.15 × 10^−1 | 1.73 × 10^−1 | 1 | 3.46 × 10^−1 | 1.81 × 10^−2 | 1.64 × 10^−2 |
| | h | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| CEC-5 | Best | 0.521 × 10^−1 | 0.466 × 10^−1 | 0.787 × 10^−1 | 0.666 × 10^−1 | 0.973 × 10^−1 | 0.324 × 10^−1 | 0.359 × 10^−1 |
| | Average | 0.454 × 10^−1 | 0.367 × 10^−1 | 0.528 × 10^−1 | 0.594 × 10^−1 | 0.539 × 10^−1 | 0.260 × 10^−1 | 0.318 × 10^−1 |
| | Worst | 0.400 × 10^−1 | 0.265 × 10^−1 | 0.336 × 10^−1 | 0.528 × 10^−1 | 0.336 × 10^−1 | 0.200 × 10^−1 | 0.200 × 10^−1 |
| | STD | 4.89 × 10^−1 | 8.49 × 10^−1 | 0.189 × 10^−1 | 5.94 × 10^−1 | 0.262 × 10^−1 | 5.09 × 10^−1 | 6.68 × 10^−1 |
| | p-value | 3.59 × 10^−3 | 1.20 × 10^−3 | 4.81 × 10^−1 | 1 | 6.62 × 10^−1 | 1.20 × 10^−5 | 1.27 × 10^−4 |
| | h | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
| CEC-6 | Best | 1.53 × 10^1 | 1.43 × 10^1 | 1.34 × 10^1 | 1.52 × 10^1 | 1.38 × 10^1 | 1.56 × 10^1 | 1.49 × 10^1 |
| | Average | 1.42 × 10^1 | 1.31 × 10^1 | 1.23 × 10^1 | 1.33 × 10^1 | 1.21 × 10^1 | 1.38 × 10^1 | 1.27 × 10^1 |
| | Worst | 1.37 × 10^1 | 1.19 × 10^1 | 1.07 × 10^1 | 1.10 × 10^1 | 1.02 × 10^1 | 1.29 × 10^1 | 1.06 × 10^1 |
| | STD | 6.34 × 10^−1 | 8.80 × 10^−1 | 0.138 × 10^−1 | 0.172 × 10^−1 | 0.144 × 10^−1 | 0.110 × 10^−1 | 0.168 × 10^−1 |
| | p-value | 3.24 × 10^−1 | 7.87 × 10^−1 | 3.38 × 10^−1 | 1 | 2.52 × 10^−1 | 6.52 × 10^−1 | 5.91 × 10^−1 |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| CEC-7 | Best | 1.65 × 10^3 | 1.12 × 10^3 | 1.87 × 10^3 | 1.87 × 10^3 | 1.96 × 10^3 | 1.97 × 10^3 | 9.70 × 10^2 |
| | Average | 1.53 × 10^3 | 8.71 × 10^2 | 1.56 × 10^3 | 1.33 × 10^3 | 1.47 × 10^3 | 1.37 × 10^3 | 6.03 × 10^2 |
| | Worst | 1.38 × 10^3 | 6.45 × 10^2 | 1.25 × 10^3 | 6.71 × 10^2 | 1.07 × 10^3 | 6.12 × 10^2 | 1.84 × 10^2 |
| | STD | 1.07 × 10^2 | 2.14 × 10^2 | 2.62 × 10^2 | 4.47 × 10^2 | 3.33 × 10^2 | 5.81 × 10^2 | 3.46 × 10^2 |
| | p-value | 3.72 × 10^−1 | 7.04 × 10^−2 | 3.58 × 10^−1 | 1 | 6.04 × 10^−1 | 9.18 × 10^−1 | 2.03 × 10^−2 |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| CEC-8 | Best | 0.816 × 10^−1 | 0.757 × 10^−1 | 0.840 × 10^−1 | 0.764 × 10^−1 | 0.769 × 10^−1 | 0.795 × 10^−1 | 0.781 × 10^−1 |
| | Average | 0.763 × 10^−1 | 0.710 × 10^−1 | 0.761 × 10^−1 | 0.709 × 10^−1 | 0.689 × 10^−1 | 0.723 × 10^−1 | 0.684 × 10^−1 |
| | Worst | 0.703 × 10^−1 | 0.622 × 10^−1 | 0.658 × 10^−1 | 0.581 × 10^−1 | 0.614 × 10^−1 | 0.676 × 10^−1 | 0.615 × 10^−1 |
| | STD | 4.54 × 10^−1 | 5.13 × 10^−1 | 7.60 × 10^−1 | 7.38 × 10^−1 | 5.77 × 10^−1 | 4.67 × 10^−1 | 6.32 × 10^−1 |
| | p-value | 2.02 × 10^−1 | 9.76 × 10^−1 | 3.03 × 10^−1 | 1 | 6.59 × 10^−1 | 7.20 × 10^−1 | 5.93 × 10^−1 |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| CEC-9 | Best | 2.05 × 10^3 | 2.19 × 10^3 | 3.55 × 10^3 | 3.92 × 10^3 | 3.73 × 10^3 | 3.15 × 10^3 | 7.79 × 10^2 |
| | Average | 1.29 × 10^3 | 1.03 × 10^3 | 2.86 × 10^3 | 3.24 × 10^3 | 2.20 × 10^3 | 1.08 × 10^3 | 3.50 × 10^2 |
| | Worst | 4.92 × 10^2 | 2.60 × 10^2 | 2.35 × 10^3 | 1.55 × 10^3 | 4.66 × 10^2 | 5.46 × 10^1 | 6.60 × 10^1 |
| | STD | 5.84 × 10^2 | 7.93 × 10^2 | 5.24 × 10^2 | 9.72 × 10^2 | 1.19 × 10^3 | 1.26 × 10^3 | 3.01 × 10^2 |
| | p-value | 4.93 × 10^−3 | 4.39 × 10^−3 | 4.73 × 10^−1 | 1 | 1.70 × 10^−1 | 1.66 × 10^−2 | 2.23 × 10^−4 |
| | h | 1 | 1 | 0 | 0 | 0 | 1 | 1 |
| CEC-10 | Best | 2.11 × 10^1 | 2.10 × 10^1 | 2.08 × 10^1 | 2.09 × 10^1 | 2.10 × 10^1 | 2.10 × 10^1 | 2.09 × 10^1 |
| | Average | 2.08 × 10^1 | 2.08 × 10^1 | 2.07 × 10^1 | 2.07 × 10^1 | 2.07 × 10^1 | 2.09 × 10^1 | 2.06 × 10^1 |
| | Worst | 2.07 × 10^1 | 2.06 × 10^1 | 2.04 × 10^1 | 2.05 × 10^1 | 2.04 × 10^1 | 2.06 × 10^1 | 2.04 × 10^1 |
| | STD | 1.83 × 10^−1 | 1.65 × 10^−1 | 2.00 × 10^−1 | 1.34 × 10^−1 | 2.19 × 10^−1 | 1.55 × 10^−1 | 1.75 × 10^−1 |
| | p-value | 2.45 × 10^−1 | 2.53 × 10^−1 | 6.77 × 10^−1 | 1.00 × 10^0 | 7.75 × 10^−1 | 1.37 × 10^−1 | 3.06 × 10^−1 |
| | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
Table 9. Ten benchmark functions are used for the Friedman ranking test for comparative approaches.
| Function | PSO | WOA | SSA | SMA | HHO | AOA | AOACS |
|---|---|---|---|---|---|---|---|
| cec01 | 6 | 4 | 5 | 2 | 7 | 3 | 1 |
| cec02 | 3 | 6 | 5 | 1 | 7 | 4 | 2 |
| cec03 | 5 | 6 | 7 | 1 | 2 | 4 | 3 |
| cec04 | 4 | 6 | 3 | 7 | 5 | 2 | 1 |
| cec05 | 5 | 6 | 4 | 7 | 3 | 1 | 2 |
| cec06 | 2 | 1 | 7 | 5 | 4 | 6 | 3 |
| cec07 | 7 | 5 | 6 | 3 | 2 | 4 | 1 |
| cec08 | 6 | 2 | 7 | 3 | 4 | 5 | 1 |
| cec09 | 6 | 5 | 4 | 7 | 2 | 3 | 1 |
| cec10 | 2 | 3 | 6 | 4 | 5 | 7 | 1 |
| Sum | 46 | 44 | 54 | 40 | 41 | 39 | 16 |
| Mean | 4.6 | 4.4 | 5.4 | 4.0 | 4.1 | 3.9 | 1.6 |
| Rank | 6 | 5 | 7 | 3 | 4 | 2 | 1 |
Table 10. Comparison of the achieved outcomes of different algorithms when solving the welded beam problem.
Table 10. Comparison of the achieved outcomes of different algorithms when solving the welded beam problem.
| Algorithm | h (λ1) | l (λ2) | t (λ3) | b (λ4) | Optimal cost |
|---|---|---|---|---|---|
| HHO | 2.05 × 10^−1 | 0.348 × 10^−1 | 0.904 × 10^−1 | 2.06 × 10^−1 | 1.730 |
| AOA | 1.94 × 10^−1 | 0.257 × 10^−1 | 0.100 × 10^−1 | 2.02 × 10^−1 | 1.720 |
| WOA | 2.05 × 10^−1 | 0.347 × 10^−1 | 0.904 × 10^−1 | 2.06 × 10^−1 | 1.730 |
| SSA | 2.06 × 10^−1 | 0.348 × 10^−1 | 0.904 × 10^−1 | 2.06 × 10^−1 | 1.730 |
| PSO | 2.00 × 10^−1 | 0.337 × 10^−1 | 0.901 × 10^−1 | 2.07 × 10^−1 | 1.710 |
| SMA | 2.08 × 10^−1 | 0.323 × 10^−1 | 0.899 × 10^−1 | 2.08 × 10^−1 | 1.700 |
| AOACS | 1.96 × 10^−1 | 0.335 × 10^−1 | 0.904 × 10^−1 | 2.06 × 10^−1 | 1.690 |
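The welded-beam objective minimized in Table 10 is, in its standard form from the literature, the fabrication cost f(h, l, t, b) = 1.10471·h²·l + 0.04811·t·b·(14 + l); the shear-stress, bending-stress, buckling and deflection constraints are omitted in this sketch, and the evaluation point is the well-known literature optimum rather than a row of the table:

```python
# Standard welded-beam fabrication-cost objective (constraints omitted).
def welded_beam_cost(h, l, t, b):
    # weld cost + bar cost
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# Near the commonly cited optimum (h≈0.2057, l≈3.4705, t≈9.0366,
# b≈0.2057) the cost is close to the ~1.7 values reported in Table 10:
print(round(welded_beam_cost(0.2057, 3.4705, 9.0366, 0.2057), 3))  # → 1.725
```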
Table 11. The achieved outcomes of several algorithms when solving the three-bar truss optimization issue.
Table 11. The achieved outcomes of several algorithms when solving the three-bar truss optimization issue.
| Algorithm | X1 (λ1) | X2 (λ2) | Lowest weight |
|---|---|---|---|
| HHO | 7.8867 × 10^−1 | 4.0828 × 10^−1 | 2.6390 × 10^2 |
| AOA | 7.9369 × 10^−1 | 3.9426 × 10^−1 | 2.6392 × 10^2 |
| WOA | 7.8866 × 10^−1 | 4.0828 × 10^−1 | 2.6390 × 10^2 |
| SSA | 7.8860 × 10^−1 | 4.0845 × 10^−1 | 2.6390 × 10^2 |
| PSO | 7.8867 × 10^−1 | 4.0826 × 10^−1 | 2.6390 × 10^2 |
| SMA | 7.8890 × 10^−1 | 4.0762 × 10^−1 | 2.6390 × 10^2 |
| AOACS | 7.8859 × 10^−1 | 4.0825 × 10^−1 | 2.6387 × 10^2 |
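The three-bar-truss weights in Table 11 follow from the standard objective f(x1, x2) = (2√2·x1 + x2)·L with bar length L = 100 cm; the two stress constraints are omitted in this sketch:

```python
import math

# Standard three-bar-truss weight objective (stress constraints omitted).
def truss_weight(x1, x2, L=100.0):
    return (2 * math.sqrt(2) * x1 + x2) * L

# The AOACS design from Table 11 (x1≈0.78859, x2≈0.40825) reproduces the
# reported lowest weight of 2.6387e2:
print(round(truss_weight(0.78859, 0.40825), 2))  # → 263.87
```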
Table 12. The achieved outcomes of several algorithms when solving the stepped cantilever beam design issue.
Table 12. The achieved outcomes of several algorithms when solving the stepped cantilever beam design issue.
| Algorithm | λ1 | λ2 | λ3 | λ4 | λ5 | Lowest weight |
|---|---|---|---|---|---|---|
| HHO | 0.513 × 10^−1 | 0.562 × 10^−1 | 0.510 × 10^−1 | 0.393 × 10^−1 | 0.232 × 10^−1 | 0.138 × 10^−1 |
| AOA | 0.621 × 10^−1 | 0.621 × 10^−1 | 0.621 × 10^−1 | 0.621 × 10^−1 | 0.621 × 10^−1 | 0.194 × 10^−1 |
| WOA | 0.600 × 10^−1 | 0.530 × 10^−1 | 0.449 × 10^−1 | 0.351 × 10^−1 | 0.217 × 10^−1 | 0.134 × 10^−1 |
| SSA | 0.561 × 10^−1 | 0.496 × 10^−1 | 0.566 × 10^−1 | 0.320 × 10^−1 | 0.320 × 10^−1 | 0.141 × 10^−1 |
| PSO | 0.605 × 10^−1 | 0.526 × 10^−1 | 0.451 × 10^−1 | 0.346 × 10^−1 | 0.219 × 10^−1 | 0.134 × 10^−1 |
| SMA | 0.511 × 10^−1 | 0.599 × 10^−1 | 0.502 × 10^−1 | 0.371 × 10^−1 | 0.327 × 10^−1 | 0.144 × 10^−1 |
| AOACS | 0.601 × 10^−1 | 0.531 × 10^−1 | 0.449 × 10^−1 | 0.350 × 10^−1 | 0.215 × 10^−1 | 0.134 × 10^−1 |
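The stepped-cantilever-beam objective behind Table 12 is, in its standard form, the weight f(x) = 0.0624·(x1 + x2 + x3 + x4 + x5); the deflection constraint is omitted here. The exponents printed in the table appear garbled, so reading the AOACS design as x = (6.01, 5.31, 4.49, 3.50, 2.15) (the usual magnitudes for this benchmark) is an assumption of this sketch:

```python
# Standard stepped-cantilever-beam weight objective (constraint omitted).
def cantilever_weight(x):
    return 0.0624 * sum(x)

# Assumed reading of the AOACS design; the corresponding weight of ~1.34
# matches the commonly reported optimum for this benchmark.
print(round(cantilever_weight((6.01, 5.31, 4.49, 3.50, 2.15)), 4))  # → 1.3391
```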
Table 13. The achieved outcomes of several algorithms when solving the speed reducer issue.
Table 13. The achieved outcomes of several algorithms when solving the speed reducer issue.
| Algorithm | λ1 | λ2 | λ3 | λ4 | λ5 | λ6 | λ7 | Lowest weight |
|---|---|---|---|---|---|---|---|---|
| HHO | 3.606129 | 0.7 | 17 | 7.2 | 7.98141 | 3.462569 | 5.296749 | 3028.873076 |
| AOA | 3.5 | 0.7 | 17 | 7.2 | 7.680396 | 3.552421 | 5.255814 | 3020.583365 |
| WOA | 3.518765 | 0.7 | 17 | 7.2 | 7.9 | 3.45102 | 5.299213 | 3031.563 |
| SSA | 3.530134 | 0.7 | 17 | 8.36 | 7.9 | 3.37697 | 5.298719 | 3030.002 |
| PSO | 3.537485 | 0.7002 | 17 | 7.729684 | 8.090954 | 3.361512 | 5.297051 | 3011.137492 |
| SMA | 3.526152 | 0.700005 | 17 | 7.559136 | 7.95833 | 3.375576 | 5.299773 | 3009.08 |
| AOACS | 3.5032 | 0.7 | 17 | 7.2198 | 7.7375 | 3.3741 | 5.2994 | 3007.7328 |
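The speed-reducer weights in Table 13 come from the standard objective of this benchmark (the eleven design constraints are omitted in this sketch, and small deviations from the tabulated weights are expected from the rounding of the printed variables):

```python
# Standard speed-reducer weight objective (constraints omitted);
# x3 is the integer number of pinion teeth.
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# The AOACS design from Table 13 evaluates close to the reported 3007.73:
print(round(reducer_weight(3.5032, 0.7, 17, 7.2198, 7.7375, 3.3741, 5.2994), 2))
```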
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Hijjawi, M.; Alshinwan, M.; Khashan, O.A.; Alshdaifat, M.; Almanaseer, W.; Alomoush, W.; Garg, H.; Abualigah, L. Accelerated Arithmetic Optimization Algorithm by Cuckoo Search for Solving Engineering Design Problems. Processes 2023, 11, 1380. https://doi.org/10.3390/pr11051380