Article

Elite Chaotic Manta Ray Algorithm Integrated with Chaotic Initialization and Opposition-Based Learning

1 Design Art College, Xijing University, Xi’an 710123, China
2 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
3 School of Computer Science and Engineering, Xi’an University of Technology, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(16), 2960; https://doi.org/10.3390/math10162960
Submission received: 25 July 2022 / Revised: 11 August 2022 / Accepted: 13 August 2022 / Published: 16 August 2022
(This article belongs to the Special Issue Optimisation Algorithms and Their Applications)

Abstract: The manta ray foraging optimizer (MRFO) is a nature-inspired optimization algorithm that simulates the foraging strategies and behavior of manta ray groups, i.e., chain, spiral, and somersault foraging. Although the native MRFO is competitive with popular meta-heuristic algorithms, it still tends to fall into local optima and converges slowly on some complex problems. To remedy these deficiencies, a new elite chaotic MRFO, termed the CMRFO algorithm, integrated with chaotic initialization of the population and an opposition-based learning strategy, is developed in this paper. Fourteen chaotic maps with different properties are used to initialize the population, and the map with the best effect is selected; meanwhile, the sensitivity of the CMRFO to the elite selection ratio in the elite chaotic searching strategy is analyzed. These strategies work together to improve the overall performance of the MRFO. In addition, the superiority of the presented CMRFO is comprehensively demonstrated by comparing it with the native MRFO, a modified MRFO, and several state-of-the-art algorithms on (1) 23 benchmark test functions, (2) the well-known IEEE CEC 2020 test suite, and (3) three engineering optimization problems. Furthermore, the practicability of the CMRFO is illustrated by a real-world application: the shape optimization of cubic generalized Ball (CG-Ball) curves. By minimizing the curvature variation of these curves, a shape optimization model for CG-Ball curves is established. The CMRFO is then applied to this model and compared with several advanced meta-heuristic algorithms. The experimental results demonstrate that the CMRFO is a powerful and attractive alternative for solving engineering optimization problems.

1. Introduction

Many complex real-world problems can be described as optimization problems, and research on high-precision algorithms for optimization problems has attracted many scholars. Traditional mathematical optimization (TMO) methods usually require the objective function of the optimization problem to be convex and differentiable; this requirement theoretically ensures that TMO methods can approach the optimal solution. However, since the objective functions of most optimization problems tend to be multimodal, discrete, non-differentiable, and non-convex, TMO methods often cannot handle complex optimization problems well. Nowadays, swarm intelligence algorithms that simulate organisms in nature are often adopted to solve such optimization problems.
The concept of swarm intelligence was first introduced by Hackwood et al. [1] and originates from research on the social behavior of gregarious creatures (such as birds, fish, and wolves). Swarm intelligence algorithms take advantage of the swarm evolution behavior of creatures. With the help of information sharing and competition mechanisms among populations, a swarm intelligence algorithm performs random exploration and exploitation to search for the optimal solution of the objective function. In addition, compared with traditional random search algorithms, a swarm intelligence algorithm imposes no strict mathematical conditions on the objective function. When dealing with complex optimization problems, it can quickly search for the global optimum, is simple to operate, and converges fast. These advantages have made swarm intelligence algorithms popular in a short amount of time. The genetic algorithm (GA), introduced by Holland in 1975, is an optimization technique inspired by natural evolution [2]. The birth of the genetic algorithm promoted the development of swarm intelligence algorithms, making their theory a hot research field. With the development of biology and continuous research on intelligent optimization algorithms, a series of swarm intelligence optimization algorithms with their own characteristics have been put forward, and these algorithms have been widely used in data clustering [3], feature selection [4], economic emission dispatch [5], engineering problems [6], shape optimization [7], and many other application fields.
Because swarm intelligence optimization algorithms are usually inspired by natural flora and fauna, different algorithms emphasize different strategies and suit different problems, and each exhibits its own limitations. Introducing improvement strategies is therefore a reliable way to help different algorithms on different optimization problems. Many scholars have modified population intelligence algorithms with the help of heuristic methods and solved practical engineering problems [8,9,10,11,12]. For example, Elsisi et al. proposed a modified multitracker optimization algorithm by adding contrastive-based learning and quasi-OBL methods and applied it to nonlinear model predictive control [13]. Zheng et al. proposed an improved gray wolf optimization algorithm and applied it to quintic generalized Ball curves [14]. Elsisi and Abdelfattah proposed a new design of variable structure control based on the lightning search algorithm [15]. Hu et al. proposed an improved chimp optimization algorithm based on a combination of selective opposition and cuckoo search strategies and used it for the optimal degree reduction of Said-Ball curves [16]. Zhao et al. proposed an improved artificial hummingbird algorithm for solving complex multiobjective optimization problems [17]. In addition to the above, there have been many studies on improving optimization algorithms and applying them to practical problems [18,19,20,21,22].
As a representative of excellent meta-heuristic algorithms, the MRFO simulates the foraging strategy and behavior of manta ray groups [23]. In addition, the MRFO possesses good global searching ability, high solving efficiency, and strong stability. Therefore, many scholars have used the MRFO to resolve complex optimization problems in practical application fields. For example, Houssein et al. used manta ray foraging optimization to solve the parameter extraction of the three-diode photovoltaic model [24]. Fathy et al. used the MRFO to achieve robust global MPPT to mitigate partial shading of a triple-junction solar-cell-based system [25]. Ben et al. used the MRFO to interpret gravity anomalies over geologic structures with idealized geometries and magnetic anomalies due to two-dimensional dipping dikes [26,27]. El-Hameed et al. used the MRFO to analyze and validate the three-diode model to characterize industrial solar generating units [28]. In addition, the MRFO has been applied to the optimization of support vector machine models [29], the optimization of distributed generators [30], and the parameter extraction of artificial neural network models [31]. Some scholars have extended the MRFO to solve multiobjective problems. For example, Got et al. proposed the MOMRFO for solving multiobjective problems [32]. Zouache and Abdelaziz proposed a guided manta ray foraging optimization using epsilon dominance to solve multiobjective engineering problems [33]. Others have improved it to solve the practical problem of optimal power flow [34]. Nevertheless, the MRFO still has room for further improvement, and many scholars have made relevant modifications to solve complex problems in their fields. The improvement methods fall into two groups: (1) Combining the MRFO with various improvement strategies: For example, Elaziz et al. [35,36] used fractional-order calculus to enhance the MRFO for global optimization and image segmentation. Yousri et al. proposed a novel memory-based fractional-order Caputo manta ray foraging optimizer [37]. Xu et al. proposed an improved MRFO algorithm for the analysis and optimization of HT-PEMFC [38]. Jena et al. proposed an attacking manta-ray foraging optimization algorithm for multilevel thresholding of brain MR images based on maximum 3D Tsallis entropy [39]. Other studies have also worked on improving the MRFO [40,41,42]. (2) Combining the MRFO with other algorithms: For example, the SA algorithm was fused with the MRFO to obtain the SA–MRFO algorithm [43], the MRFO was combined with the GBO algorithm to obtain the MRFO–GBO [44], and the ROA was combined with the MRFO to obtain the ROA–MRFO [45]. In this work, we develop a novel elite chaotic MRFO, termed the CMRFO, by integrating three different strategies: chaotic initialization of the population, opposition-based learning, and elite chaotic searching. Through 23 test functions, the CEC 2020 test suite, three engineering examples, and one real-world application, the effectiveness of the CMRFO is examined by comparing it with the native MRFO, a modified MRFO, and well-known meta-heuristic algorithms.
Furthermore, the practicability of the CMRFO is verified by the shape optimization of parametric curves (i.e., CG-Ball curves). Classical Ball curves are a powerful tool for shape design in many geometric modeling fields, such as industrial design, manufacturing, and path planning [36]. However, the shape of classical Ball curves is determined solely by their control points. This paper constructs locally controlled CG-Ball curves to overcome this weakness by generalizing classical cubic Ball curves. Furthermore, a new mathematical model of shape optimization for CG-Ball curves based on minimum curvature variation is established, and the CMRFO is then applied to this optimization model to obtain the optimal shape of CG-Ball curves. The innovative points and main contributions of this paper are as follows:
(a)
A new manta ray foraging optimizer (CMRFO) based on chaotic initialization, opposition-based learning, and elite chaotic searching is proposed.
(b)
The effectiveness of the CMRFO is demonstrated by comparing it with the native MRFO, a modified MRFO, and several advanced algorithms on 23 classical benchmarks and IEEE CEC 2020, as well as three engineering design examples.
(c)
A new optimization model of CG-Ball curves based on minimum curvature variation is established, and the CMRFO is adopted to solve this model to certify the superiority of the algorithm.
The rest of this paper is arranged as follows: the new elite chaotic CMRFO is proposed in Section 2. The effectiveness of the CMRFO is demonstrated by comparison with other optimization algorithms on 23 classical benchmarks and the IEEE CEC 2020 test suite in Section 3. Three real-world engineering application problems are presented to verify the superiority of the CMRFO in Section 4. In Section 5, the practicability of the CMRFO is verified on the shape optimization problem of CG-Ball curves. Section 6 summarizes this paper.

2. Proposed Chaotic MRFO

2.1. Overview of the MRFO

The MRFO is a new swarm intelligence optimization algorithm that simulates the behavior of manta rays foraging for plankton, which comprises the following three foraging behaviors [23].

2.1.1. Chain Foraging (CF)

In the CF strategy, manta rays form a chain and swim straight for the plankton. Except for the first individual in the foraging chain, each individual updates its position based on both the previous individual and the best individual. The updating formula of z_i^d(x) is given by:
\begin{cases}
z_1^d(x+1) = z_1^d(x) + r \cdot [z_{best}^d(x) - z_1^d(x)] + \alpha \cdot [z_{best}^d(x) - z_1^d(x)], \\
z_i^d(x+1) = z_i^d(x) + r \cdot [z_{i-1}^d(x) - z_i^d(x)] + \alpha \cdot [z_{best}^d(x) - z_i^d(x)], \quad i = 2, 3, \ldots, M, \\
\alpha = 2r \cdot \sqrt{|\log(r)|},
\end{cases}
where the random vector r \in [0, 1], x represents the current number of iterations, and M represents the total number of individuals.
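As a concrete illustration, the chain-foraging update above can be sketched in Python. The vectorized form, the function name, and the use of NumPy are our choices for exposition, not part of the original MRFO description.

```python
import numpy as np

def chain_foraging(Z, z_best):
    """One chain-foraging step for the whole population (sketch of Eq. (1)).

    Z      : (M, D) array of current manta ray positions.
    z_best : (D,) best position found so far.
    Returns the updated (M, D) population.
    """
    M, D = Z.shape
    Z_new = np.empty_like(Z)
    for i in range(M):
        r = np.random.rand(D)                          # random vector r in [0, 1]
        alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))   # alpha = 2r * sqrt(|log r|)
        prev = z_best if i == 0 else Z[i - 1]          # first ray follows the food
        Z_new[i] = Z[i] + r * (prev - Z[i]) + alpha * (z_best - Z[i])
    return Z_new
```

Note that when the whole chain already sits on the best position, every difference term vanishes and the population stays put, matching the update rule.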

2.1.2. Spiral Foraging

In this behavior, each manta ray moves both toward the previous individual and toward the food in a spiral way. The updated position of z_i^d(x) is defined by:
\begin{cases}
z_1^d(x+1) = z_{best}^d(x) + r \cdot [z_{best}^d(x) - z_1^d(x)] + \beta \cdot [z_{best}^d(x) - z_1^d(x)], \\
z_i^d(x+1) = z_{best}^d(x) + r \cdot [z_{i-1}^d(x) - z_i^d(x)] + \beta \cdot [z_{best}^d(x) - z_i^d(x)], \quad i = 2, 3, \ldots, M, \\
\beta = 2 e^{r_1 \frac{X - x + 1}{X}} \cdot \sin(2 \pi r_1),
\end{cases}
in which the random number r_1 \in [0, 1] and X represents the maximum number of iterations.
The above spiral foraging behavior can also be improved for exploration. Then, the mathematical formula is given by:
\begin{cases}
z_1^d(x+1) = z_{rand}^d(x) + r \cdot [z_{rand}^d(x) - z_1^d(x)] + \beta \cdot [z_{rand}^d(x) - z_1^d(x)], \\
z_i^d(x+1) = z_{rand}^d(x) + r \cdot [z_{i-1}^d(x) - z_i^d(x)] + \beta \cdot [z_{rand}^d(x) - z_i^d(x)], \quad i = 2, 3, \ldots, M, \\
z_{rand}^d = Lb^d + r \cdot (Ub^d - Lb^d),
\end{cases}
where Lb^d and Ub^d represent the lower and upper bounds, respectively.
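The two spiral variants, Eqs. (2) and (3), differ only in the reference point that the chain spirals around. A minimal sketch combining both, with the per-individual switching rule and all names being our illustrative choices:

```python
import numpy as np

def cyclone_foraging(Z, z_best, lb, ub, x, X):
    """One cyclone (spiral) foraging step, sketching Eqs. (2)-(3).

    Early in the run (x/X < rand) each ray spirals around a random point
    z_rand for exploration; otherwise it spirals around the best
    solution z_best for exploitation.
    """
    M, D = Z.shape
    Z_new = np.empty_like(Z)
    for i in range(M):
        r = np.random.rand(D)
        r1 = np.random.rand()
        # beta = 2 * exp(r1 * (X - x + 1) / X) * sin(2*pi*r1)
        beta = 2.0 * np.exp(r1 * (X - x + 1) / X) * np.sin(2 * np.pi * r1)
        if x / X < np.random.rand():                   # exploration branch
            ref = lb + np.random.rand(D) * (ub - lb)   # z_rand in [Lb, Ub]
        else:                                          # exploitation branch
            ref = z_best
        prev = ref if i == 0 else Z[i - 1]
        Z_new[i] = ref + r * (prev - Z[i]) + beta * (ref - Z[i])
    return Z_new
```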

2.1.3. Somersault Foraging (SF)

In the SF behavior, each manta ray is updated only in relation to the best individual. The update formula of z_i^d(x) in SF is given by
z_i^d(x+1) = z_i^d(x) + S \cdot [r_2 \cdot z_{best}^d(x) - r_3 \cdot z_i^d(x)], \quad i = 1, 2, \ldots, M,
where r_2, r_3 \in [0, 1] are random numbers and S = 2 is the somersault factor.
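The somersault update of Eq. (4) is a one-liner in array form; the following sketch (names ours) flips each ray around a random pivot between itself and the best solution:

```python
import numpy as np

def somersault_foraging(Z, z_best, S=2.0):
    """Somersault step of Eq. (4): z <- z + S * (r2 * z_best - r3 * z).

    Z: (M, D) population; z_best: (D,) best position; S: somersault factor.
    """
    M, D = Z.shape
    r2 = np.random.rand(M, D)   # random weight on the best position
    r3 = np.random.rand(M, D)   # random weight on the current position
    return Z + S * (r2 * z_best - r3 * Z)
```

When the best position is the origin and the population sits on it, the update leaves everything at the origin, as the formula predicts.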

2.2. Chaotic MRFO

To heighten the overall performance of the MRFO algorithm, a novel elite chaotic manta ray algorithm, called the CMRFO, integrated with chaotic initialization and opposition-based learning, is developed in this section.

2.2.1. Chaotic Initialization of Population

The global convergence speed and accuracy of an optimization algorithm are affected by the quality of its initial population, and high diversity of the initial population helps to enhance the solution quality. The MRFO initializes its population randomly, so the population cannot be uniformly distributed over the whole search space, which reduces the efficiency of the search process. In contrast, a chaotic map possesses ergodicity and randomness, which allow it to probe the search space thoroughly within a given range. In this paper, multiple chaotic maps are used to improve the MRFO algorithm. Table 1 describes 14 different one-dimensional maps, where k is the index and θk is the k-th number in the chaotic sequence. Figure 1 shows the corresponding 14 one-dimensional map images.
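As a sketch of chaotic initialization, the snippet below uses the logistic map θ_{k+1} = μ·θ_k·(1 − θ_k), one of the 14 maps the paper considers (Table 1), purely as an illustration; the parameter μ = 4 and the seed value are our assumed choices.

```python
import numpy as np

def chaotic_init(M, D, lb, ub, mu=4.0, seed0=0.7):
    """Generate an M x D initial population from a logistic chaotic sequence.

    The chaotic values live in (0, 1) and are then mapped onto [lb, ub].
    """
    theta = np.empty(M * D)
    t = seed0
    for k in range(M * D):
        t = mu * t * (1.0 - t)        # logistic chaotic iteration
        theta[k] = t
    # map the chaotic sequence from (0, 1) onto the search box [lb, ub]
    return lb + theta.reshape(M, D) * (ub - lb)
```

Any of the other 13 maps in Table 1 could be substituted by replacing the single iteration line; the paper ultimately selects the cubic map after comparison.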

2.2.2. Opposition-Based Learning (OL)

High diversity of the population can strengthen the exploratory stage of the algorithm, and OL is one method to improve population diversity. Let the population size be M, let the individuals be denoted as z_i(x) = (z_i^1(x), …, z_i^d(x), …, z_i^D(x)), i = 1, 2, …, M, and let each dimension component satisfy z_i^d(x) \in [Lb^d, Ub^d], d = 1, 2, …, D. Then, the opposition-based solution of each dimension is calculated by Equation (5), where \tilde{z}_i(x) = (\tilde{z}_i^1(x), …, \tilde{z}_i^d(x), …, \tilde{z}_i^D(x)), i = 1, 2, …, M.
\tilde{z}_i^d(x) = Ub^d + Lb^d - z_i^d(x)
The fitness values of the 2M individuals composed of current and opposition-based individuals are calculated and sorted in ascending order, and the first M individuals are selected as the new population. Opposition-based learning not only enhances the diversity of the population but also increases the ability of the MRFO to approach the global optimal solution.
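The opposition step plus the greedy selection can be sketched as follows; the function name and the `fitness` callback convention are our illustrative assumptions.

```python
import numpy as np

def opposition_selection(Z, lb, ub, fitness):
    """Opposition-based learning (Eq. (5)) followed by greedy selection.

    Z: (M, D) population; lb, ub: bounds; fitness: maps an (n, D) array
    to n objective values (minimization). Returns the best M of the 2M
    combined candidates.
    """
    Z_opp = ub + lb - Z                  # opposite point in each dimension
    pool = np.vstack([Z, Z_opp])         # 2M candidates
    f = fitness(pool)
    order = np.argsort(f)                # ascending: smaller is better
    return pool[order[:Z.shape[0]]]
```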

2.2.3. Elite Chaotic Searching (ECS)

OL improves the exploration ability of the proposed CMRFO. Elite chaotic searching is then implemented to heighten the exploitation capacity of the algorithm, in which chaotic mutation is carried out on elite individuals to realize their further renewal. The fitness values of the current population are calculated and sorted in ascending order, and the first n (n = p·M) individuals are selected as elite individuals, where the selection proportion p belongs to [0, 1]. The elite individuals are recorded as {ez_1(x), ez_2(x), …, ez_n(x)} ⊆ {z_1(x), z_2(x), …, z_M(x)}, where the i-th elite individual is ez_i(x) = (ez_i^1(x), ez_i^2(x), …, ez_i^D(x)) and its upper and lower bounds are as follows:
\begin{cases}
eb^d(x) = \max\{ez_1^d(x), ez_2^d(x), \ldots, ez_n^d(x)\}, \\
ea^d(x) = \min\{ez_1^d(x), ez_2^d(x), \ldots, ez_n^d(x)\}.
\end{cases}
The elite individual e z i ( x ) is mapped to the interval [0, 1], and the chaotic individual c i ( x ) = { c i 1 ( x ) ,   c i 2 ( x ) ,   ,   c i D ( x ) } is acquired, whose calculation is as follows:
c_i^d(x) = \frac{ez_i^d(x) - Lb^d}{Ub^d - Lb^d}, \quad i = 1, 2, \ldots, n
Then, the logistic chaotic map is applied to the chaotic individual c_i(x):
c_i^d(\kappa + 1) = \mu \cdot c_i^d(\kappa) \cdot [1 - c_i^d(\kappa)],
where the constant μ = 4 and κ is the iteration number of chaotic maps.
When the maximum number of chaotic iterations κ_max is reached, the chaotic individuals are remapped into the interval [ea^d(x), eb^d(x)]. In this paper, κ_max is set to X. The i-th new elite individual ec_i^d(x) is obtained as follows:
ec_i^d(x) = c_i^{d, \kappa_{\max}}(x) \cdot [eb^d(x) - ea^d(x)] + ea^d(x)
Finally, a greedy choice is made between e c i ( x ) and e z i ( x ) , that is
z_i(x+1) = \begin{cases} ez_i(x), & f(ez_i(x)) \le f(ec_i(x)), \\ ec_i(x), & f(ez_i(x)) > f(ec_i(x)). \end{cases}
With the increase in iterations, the upper and lower bounds of elite individuals are gradually reduced to the vicinity of the target solution. Hence the local search ability of the MRFO is enhanced.
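Putting Eqs. (6)–(10) together, the elite chaotic searching step can be sketched as below. The function name, the `fitness` callback, and the default values of p and kappa_max are our assumptions for illustration (the paper sets p = 0.1 and κ_max = X).

```python
import numpy as np

def elite_chaotic_search(Z, fitness, lb, ub, p=0.1, kappa_max=50, mu=4.0):
    """Elite chaotic searching (sketch of Eqs. (6)-(10)).

    The best p*M individuals are mapped into (0, 1), perturbed with a
    logistic chaotic sequence, remapped into the elite bounding box
    [ea, eb], and kept only if they improve the objective.
    """
    M = Z.shape[0]
    order = np.argsort(fitness(Z))           # ascending fitness
    n = max(1, int(p * M))
    elite = Z[order[:n]].copy()
    eb = elite.max(axis=0)                   # Eq. (6): elite upper bound
    ea = elite.min(axis=0)                   # Eq. (6): elite lower bound
    c = (elite - lb) / (ub - lb)             # Eq. (7): map into [0, 1]
    for _ in range(kappa_max):               # Eq. (8): logistic iterations
        c = mu * c * (1.0 - c)
    ec = c * (eb - ea) + ea                  # Eq. (9): remap into elite box
    keep = fitness(ec) < fitness(elite)      # Eq. (10): greedy choice
    elite[keep] = ec[keep]
    Z[order[:n]] = elite
    return Z
```

Because a chaotic candidate replaces an elite individual only when it improves the fitness, the best solution found so far can never get worse after this step.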
The above three strategies are combined with the MRFO to obtain the chaotic manta ray foraging optimizer, denoted the CMRFO. The pseudo-code and flow chart of the CMRFO are given in Algorithm 1 and Figure 2, respectively.
Algorithm 1: CMRFO
Set the parameters: M, X, Ub, Lb, D, p
A chaotic map is used to generate the initial positions of the M manta rays.  //Chaotic initialization of population
Calculate the fitness value of each individual, and save the best position.
While x < X
  for i = 1: M
   for d = 1 to D
    if rand < 0.5  //Cyclone foraging
     if x/X < rand
       z_i^d(x+1) = \begin{cases} z_{rand}^d(x) + r \cdot (z_{rand}^d(x) - z_i^d(x)) + \beta \cdot (z_{rand}^d(x) - z_i^d(x)), & i = 1 \\ z_{rand}^d(x) + r \cdot (z_{i-1}^d(x) - z_i^d(x)) + \beta \cdot (z_{rand}^d(x) - z_i^d(x)), & i = 2, 3, \ldots, M \end{cases}
      else
       z_i^d(x+1) = \begin{cases} z_{best}^d(x) + r \cdot (z_{best}^d(x) - z_i^d(x)) + \beta \cdot (z_{best}^d(x) - z_i^d(x)), & i = 1 \\ z_{best}^d(x) + r \cdot (z_{i-1}^d(x) - z_i^d(x)) + \beta \cdot (z_{best}^d(x) - z_i^d(x)), & i = 2, 3, \ldots, M \end{cases}
      end if
    else //Chain foraging
       z_i^d(x+1) = \begin{cases} z_i^d(x) + r \cdot (z_{best}^d(x) - z_i^d(x)) + \alpha \cdot (z_{best}^d(x) - z_i^d(x)), & i = 1 \\ z_i^d(x) + r \cdot (z_{i-1}^d(x) - z_i^d(x)) + \alpha \cdot (z_{best}^d(x) - z_i^d(x)), & i = 2, 3, \ldots, M \end{cases}
     end if
   Update the best position.
    z_i^d(x+1) = z_i^d(x) + S \cdot (r_2 \cdot z_{best}^d(x) - r_3 \cdot z_i^d(x)), \quad i = 1, 2, \ldots, M //Somersault foraging
   Update the best position.
    \tilde{z}_i^d(x) = Ub^d + Lb^d - z_i^d(x) //Opposition-based learning
   The first M individuals among current and opposition-based individuals are selected as the new population.
   The fitness values of the current population are sorted in ascending order, and the first n individuals are selected as elite individuals. //Elite chaotic searching
    for i = 1 to n
      c_i^d(x) = \frac{ez_i^d(x) - Lb^d}{Ub^d - Lb^d}
     for κ = 1 to κ_max
       c_i^d(\kappa + 1) = \mu \cdot c_i^d(\kappa) \cdot (1 - c_i^d(\kappa))
     end for
      ec_i^d(x) = c_i^{d, \kappa_{\max}}(x) \cdot (eb^d(x) - ea^d(x)) + ea^d(x)
     if f(ec_i(x)) < f(ez_i(x)) then
       z_i(x+1) = ec_i(x)
      end if
     end for
    end for
   end for
End while
Output the global best position.
Figure 2. Flow chart of the CMRFO.

3. Experimental Results and Analysis

In this section, the capability and superiority of the CMRFO are comprehensively demonstrated on 23 classical test functions and the IEEE CEC2020 benchmark. The numerical experiments are implemented on Intel(R) Core(TM) i7-7700HQ, 2.80 GHz or 2.81 GHz, 8.00 GB RAM, 512 GB storage, Windows 10, and Matlab 2018a. Here, the values of M and X are 50 and 1000, respectively. The reported data are the results of each algorithm running independently 30 times. The 23 classical test functions of three different types are listed in the literature [7].

3.1. Performance of the CMRFO for the Initializing Population Based on Different Chaotic Maps

The CMRFO uses chaotic maps to generate the initial population, which can achieve a uniform distribution of the population and explore the search space more comprehensively within a certain range. This is conducive to heightening the performance and efficiency of intelligence algorithms in the search process. Unimodal functions examine the local search ability of the CMRFO, while multimodal and fixed-dimensional multimodal functions are considered intractable problems because they have numerous local extrema. This comparison experiment is therefore performed on 23 benchmark functions of three different types. Table 2 shows the statistical results of the CMRFO using 14 different chaotic maps to initialize the population, in which the boldface data represent the optimal values among the 14 chaotic maps.
Table 3 shows that, in the CMRFO algorithm, the results of using different chaotic maps to initialize the population differ slightly. According to the final ranking in the last line of Table 3, the performance of the CMRFO using M14 (cubic map) is significantly better than that using the other chaotic maps. Therefore, we choose M14 (cubic map) to initialize the population, and the M14-integrated CMRFO is studied in detail in Section 3.2.

3.2. Elite Individual Proportion Analysis

In the CMRFO, the selection proportion p of elite individuals is the key to an elite chaotic searching strategy. A large value of p will cause premature algorithm convergence, while too small a value of p will have little impact on the algorithm. Therefore, this section discusses the influence of the parameter p on the performance of the CMRFO by simulation experiments. Table 4 illustrates the effects of the CMRFO based on different p-values on benchmark functions, where the boldface data represent the optimal values of different p-values.
It is observed that the CMRFO yields different results on some functions for different values of the selection proportion p, so the ECS is sensitive to the value of p. Overall, the CMRFO performs best when p = 0.1. Therefore, in this paper, the value of p in the elite chaotic searching is set to 0.1.

3.3. Exploration–Exploitation Analysis

Exploration and exploitation are the two basic building blocks of a meta-heuristic optimization algorithm. In the exploration phase, distant regions of the search space are probed. In the exploitation phase, candidate solutions steadily exploit promising areas already identified in the previous step using local strategies. Thus, maintaining a good balance between these two phases is one way to ensure that an algorithm converges well.
In this paper, exploration and exploitation are quantified by a dimensional diversity measure. During the search process, an increase in the average within-dimension distance of the population indicates exploration. Conversely, a decreasing mean indicates the exploitation stage, in which the search agents are located in a concentrated area. The following equations define the dimensional diversity measured during the search process.
Div_j = \frac{1}{N} \sum_{i=1}^{N} |Median(x^j) - x_{i,j}|
Div(t) = \frac{1}{D} \sum_{j=1}^{D} Div_j, \quad t = 1, 2, \ldots, T
where x_{i,j} is the position of the i-th candidate solution in the j-th dimension, Div_j is the average diversity in the j-th dimension, and Median(x^j) is the median of the j-th dimension over all candidate solutions. N is the population size, D is the dimension, and T is the maximum number of iterations. The following equations calculate the percentages of exploration and exploitation:
Exploration\% = \frac{Div(t)}{\max(Div)} \times 100\%
Exploitation\% = \frac{|Div(t) - \max(Div)|}{\max(Div)} \times 100\%
where max(Div) is the maximum diversity in T iterations.
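Given the positions of the population at every iteration, the diversity measure and the two percentages can be computed as in the sketch below (function and variable names are ours).

```python
import numpy as np

def exploration_exploitation(history):
    """Compute exploration/exploitation percentages from a list of
    populations, one (N, D) array per iteration.

    Implements Div_j = mean_i |median(x^j) - x_ij|, Div(t) = mean_j Div_j,
    and the two percentage formulas above.
    """
    div = []
    for X in history:
        med = np.median(X, axis=0)              # per-dimension median
        # mean over i and j at once equals (1/D) sum_j (1/N) sum_i |...|
        div.append(np.mean(np.abs(X - med)))
    div = np.array(div)
    dmax = div.max()
    exploration = div / dmax * 100.0
    exploitation = np.abs(div - dmax) / dmax * 100.0
    return exploration, exploitation
```

At the iteration of maximum diversity the exploration percentage is 100% and the exploitation percentage is 0%, which matches the qualitative behavior plotted in Figure 3.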
We plotted the exploration and exploitation convergence behavior using some CEC2020 test functions. Figure 3 shows the exploration and exploitation behavior of the cec01, cec02, cec05, cec07, cec09, and cec10 functions. For these functions, the CMRFO demonstrates highly dynamic behavior. As seen from the figure, the CMRFO tends to start the iterative process with a high exploration rate and a low exploitation rate. However, it remains more exploitative in the late iterations. The CMRFO maintains a tendency to balance exploration and exploitation during the search process.

3.4. Comparison of the CMRFO with Other Optimizers on 23 Benchmark Functions

The CMRFO is compared with some optimizers using 23 classical test suites in this section. These algorithms are the original MRFO, an improved MRFO, classical algorithms, and recently proposed algorithms, including MRFO [23], MRFO–GBO [44], DMRFO [38], SA–MRFO [43], PSO [46], GWO [47], HHO [48], AOA [49], CHOA [50], and MPA [51]. For all improved MRFOs, the parameter settings of these algorithms remain unchanged. Table 5 provides the parameter settings for other optimizers. Numerical results of 11 comparison algorithms using 23 classical benchmarks are shown in Figure 4 and Table 6.
To study the convergence characteristics of the CMRFO, Figure 4 shows the convergence curves of the 11 comparison methods on the 23 classical test functions. From Figure 4, we can see that the CMRFO, the MRFO, and the other improved MRFOs can reach the optimal value on unimodal functions F1–F4, while the optimization ability of the other comparison algorithms is not as strong. Among them, the CMRFO has the fastest convergence speed. In particular, for functions F5 and F7, the CMRFO achieves a great improvement in convergence accuracy and speed compared with the MRFO, and it is also clearly superior to the other comparison algorithms. For multimodal functions, the CMRFO is greatly improved compared with the MRFO and performs better than all comparison optimizers. The results on fixed-dimensional multimodal functions differ little across all 11 algorithms. Compared with the 10 comparison optimizers, the CMRFO possesses a good convergence rate at the initial stage of iteration and does not fall into local optima. The CMRFO algorithm shows better convergence accuracy and speed. In general, Figure 4 illustrates that the proposed CMRFO has obvious improvements in convergence characteristics and is strongly competitive.
The statistical results obtained by the 11 methods on the 23 classical benchmarks are given in Table 6. We can see that the CMRFO ranks first on 19 functions, showing its good optimization ability. On functions F16, F18, F20, and F21, the CMRFO ranks second, and it performs best on the remaining functions. From the final rank in Table 6, the CMRFO is number one and the MRFO is number four, which shows that the CMRFO algorithm achieves an obvious improvement and confirms its advantages. In addition, the standard deviation of the CMRFO on 19 functions is small, which indicates that the CMRFO is relatively stable on the benchmark functions. To sum up, the CMRFO displays excellent performance on the 23 classical benchmarks.
The rank sum test (RST) is usually used to verify the distinction of different intelligent optimization algorithms. Table 7 gives the p-values of the RST of each algorithm based on the CMRFO at a 95% significance level (α = 0.05) on 23 benchmark functions. “+/=/−” is the statistical result using the p-value and the rank on each function, which, respectively, represent that the CMRFO is significantly worse than/equal to/better than the comparison optimizer. Note that the data in bold are p-values greater than 0.05 in Table 7.
The last row of Table 7 has a statistical result of 1/12/10, 1/14/8, 1/15/7, 1/15/7, 1/6/16, 0/2/21, 0/5/18, 0/3/20, 0/3/20, and 3/7/13. The experimental results indicate that the CMRFO is superior to the original MRFO on 10 functions. For other improved MRFOs, the CMRFO is significantly better than the MRFO–GBO on eight functions and significantly superior to the DMRFO and the SA–MRFO on seven functions, which fully shows that the CMRFO has obvious advantages over other competitors. To sum up, the capability of the CMRFO is better than that of other comparison algorithms.
Figure 5 displays the radar charts of all 11 algorithms based on the rank on 23 functions. Obviously, the CMRFO has the smallest shadow region, demonstrating its superiority over other MRFOs. By comparing radar charts of the MRFO and other algorithms, we can also see which test functions the CMRFO has improved and performs well.

3.5. Comparison of the CMRFO with Other Optimizers on CEC2020

To further test the optimization capability of the CMRFO, the CMRFO is further tested on the CEC2020 test suite in this section. The comparison algorithms are the MRFO [23], the PSO [46], the SCA [52], the WOA [53], the HHO [48], the AOA [49], the CHOA [50], the SSA [54], and the SOA [55]. Table 8 provides the parameter setting for all optimizers. Numerical results of 10 comparison algorithms on IEEE CEC2020 are enumerated in Figure 6 and Table 9.
Figure 6 illustrates the convergence curves for 10 comparison algorithms on the 50-dimensional IEEE CEC2020 benchmark. Compared with the native MRFO, the convergence accuracy and speed of the CMRFO are also significantly improved on the 50-dimensional CEC2020 test suite. Compared with other algorithms, the CMRFO can obtain the optimum value quickly at the beginning of the iteration and it can avert running into the local optimum solution at the end of the iteration, which shows excellent search performance. Overall, the CMRFO possesses powerful competitiveness.
The experimental results of the 10 comparison algorithms on the 50-dimensional IEEE CEC2020 test suite are given in Table 9. From the last line of Table 9, we can see that the comprehensive rank of the CMRFO is first, which further verifies the superiority and effectiveness of the CMRFO for solving the 50-dimensional CEC2020 test suite.
Figure 7 further plots the boxplots of the 10 comparison algorithms on the CEC2020 test suite. Apparently, the CMRFO possesses the narrowest boxplot without outliers on F4, F5, F7, and F10, indicating that the CMRFO has excellent performance and stability for solving these functions. With respect to functions F1 and F8, although the CMRFO results contain a few outliers, they remain more competitive. As for F2, F6, and F9, the CMRFO boxplot lies much lower than that of the original MRFO and its median is smaller, indicating that the result of the CMRFO is closer to the optimal value. Based on the above analysis, it can be concluded that the proposed CMRFO is also effective for solving the 50-dimensional CEC2020 test suite.

4. Practical Engineering Application

Next, the CMRFO is used to deal with three practical applications involving nonlinear constrained optimization. The results of the CMRFO are compared with those of other algorithms, including the MRFO [23], AO [56], AOA [49], CHOA [50], TSA [57], SCA [52], SOA [55], GWO [47], HHO [48], JS [58], and MPA [51]. For all engineering applications, every comparison algorithm uses a population size of 50 and a maximum of 1000 iterations, and each algorithm is executed 30 times independently. In general, a practical engineering application with minimization constraints is defined as:

$$\text{Minimize: } f(X), \quad X = [x_1, x_2, \ldots, x_n],$$

$$\text{Subject to: } \begin{cases} g_i(X) \le 0, & i = 1, \ldots, m, \\ h_j(X) = 0, & j = 1, \ldots, l, \end{cases}$$

where m is the number of inequality constraints and l is the number of equality constraints [6]. Thus, the engineering optimization equation after constraint weighting is described as:

$$F(X) = f(X) + \alpha \sum_{i=1}^{m} \max\{g_i(X),\, 0\} + \beta \sum_{j=1}^{l} \max\{|h_j(X)|,\, 0\},$$

where α is the penalty weight of the inequality constraints and β is the penalty weight of the equality constraints.
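As a concrete illustration, the weighted-penalty objective above can be sketched as follows (the helper name and the use of |h_j| for the equality-violation term are our assumptions, not code from the paper):

```python
def penalized(f, ineq=(), eq=(), alpha=1e6, beta=1e6):
    """Build F(X) = f(X) + alpha*sum(max{g_i(X), 0}) + beta*sum(|h_j(X)|)."""
    def F(x):
        ineq_viol = sum(max(g(x), 0.0) for g in ineq)  # only positive g counts
        eq_viol = sum(abs(h(x)) for h in eq)           # any deviation from h = 0 counts
        return f(x) + alpha * ineq_viol + beta * eq_viol
    return F

# Example: minimize x^2 subject to g(x) = 1 - x <= 0 (i.e., x >= 1).
F = penalized(lambda x: x[0]**2, ineq=[lambda x: 1.0 - x[0]], alpha=10.0)
```

Any unconstrained optimizer (such as the CMRFO) can then be applied directly to `F`; feasible points incur no penalty, while infeasible ones are pushed back by the weighted violation terms.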

4.1. Pressure Vessel (PV) Design

Figure 8 graphically shows the structure of the PV, which has four design variables [51]. The objective function (OF) of the corresponding optimization task is the entire cost of the PV, which is minimized by optimizing the four design variables.
Let $W = [w_1, w_2, w_3, w_4] = [T_s, T_h, R, L]$. The optimization model of the PV design task can be mathematically described as follows:

$$\begin{aligned} \min\; & PV(W) = 0.6224 w_1 w_3 w_4 + 1.7781 w_2 w_3^2 + 3.1661 w_1^2 w_4 + 19.84 w_1^2 w_3 \\ \text{s.t.}\; & c_1(W) = 0.0193 w_3 - w_1 \le 0, \quad c_2(W) = 0.00954 w_3 - w_2 \le 0, \\ & c_3(W) = -\pi w_3^2 w_4 - \tfrac{4}{3}\pi w_3^3 + 1{,}296{,}000 \le 0, \quad c_4(W) = w_4 - 240 \le 0, \end{aligned}$$

where $0 \le w_1, w_2 \le 99$ and $10 \le w_3, w_4 \le 200$.
Table 10 gives the minimum cost and design variables found by the 11 algorithms on the PV optimization problem, and Table 11 gives the corresponding statistical analyses over 30 runs. The simulation results show that the value obtained by the CMRFO is the smallest, so the CMRFO is more effective than the 10 comparison algorithms for the PV optimization design.
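For reference, the PV model in Equation (15) translates directly into code; this is a sketch (the function names are ours), with constraints written so that $c_i(W) \le 0$ means feasible:

```python
import math

def pv_cost(w):
    """Total cost of the pressure vessel (objective of Eq. (15))."""
    w1, w2, w3, w4 = w
    return (0.6224 * w1 * w3 * w4 + 1.7781 * w2 * w3**2
            + 3.1661 * w1**2 * w4 + 19.84 * w1**2 * w3)

def pv_constraints(w):
    """Constraint values c_1..c_4; each value <= 0 indicates feasibility."""
    w1, w2, w3, w4 = w
    return [0.0193 * w3 - w1,
            0.00954 * w3 - w2,
            -math.pi * w3**2 * w4 - (4.0 / 3.0) * math.pi * w3**3 + 1_296_000.0,
            w4 - 240.0]
```

Combined with a penalty wrapper, `pv_cost` and `pv_constraints` form the fitness function handed to the CMRFO.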

4.2. Tension/Compression Spring (TCS) Optimization Problem

The TCS design task is mathematically an optimization problem with three optimization variables, and its structure is illustrated in Figure 9 [51]. The optimization model of the design task is given in Equation (16); its objective function, subject to four constraints, is the total weight of the TCS, which is minimized by optimizing the parameters in the model.
Consider $W = [w_1, w_2, w_3] = [d, D, N]$:

$$\begin{aligned} \min\; & spring(W) = (w_3 + 2)\, w_2 w_1^2 \\ \text{s.t.}\; & g_1(W) = \frac{4w_2^2 - w_1 w_2}{12{,}566(w_2 w_1^3 - w_1^4)} + \frac{1}{5108 w_1^2} - 1 \le 0, \quad g_2(W) = 1 - \frac{140.45 w_1}{w_2^2 w_3} \le 0, \\ & g_3(W) = 1 - \frac{w_2^3 w_3}{71{,}785 w_1^4} \le 0, \quad g_4(W) = \frac{w_1 + w_2}{1.5} - 1 \le 0, \end{aligned}$$

where $w_1 \in [0.05,\, 2]$, $w_2 \in [0.25,\, 1.3]$, and $w_3 \in [2,\, 15]$.
Table 12 gives the minimum weight and design variables of 11 algorithms to finish this optimization design task. Meanwhile, Table 13 gives the comparison results of all 11 algorithms after 30 runs, where the data in bold are the optimal values of the 11 algorithms. From Table 12 and Table 13, we can see that the simulation results of all 11 comparison methods are not significantly different, but the CMRFO can still provide a better solution to this optimization problem.
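The TCS model in Equation (16) can likewise be transcribed as code. A sketch follows (names are ours); as before, each constraint value ≤ 0 indicates feasibility:

```python
def spring_weight(w):
    """Total weight of the spring: (N + 2) * D * d^2."""
    w1, w2, w3 = w  # wire diameter d, mean coil diameter D, number of coils N
    return (w3 + 2.0) * w2 * w1**2

def spring_constraints(w):
    """Constraints g_1..g_4 of Eq. (16), each required to be <= 0."""
    w1, w2, w3 = w
    return [
        (4.0 * w2**2 - w1 * w2) / (12_566.0 * (w2 * w1**3 - w1**4))
            + 1.0 / (5108.0 * w1**2) - 1.0,
        1.0 - 140.45 * w1 / (w2**2 * w3),
        1.0 - w2**3 * w3 / (71_785.0 * w1**4),
        (w1 + w2) / 1.5 - 1.0,
    ]
```

Wrapping these in a penalty term, exactly as in the general formulation above, yields the fitness evaluated by each optimizer in Tables 12 and 13.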

4.3. Welded Beam (WB) Design

The OF of the mathematical model for welded beam (WB) optimization design is the entire cost of the WB, which is minimized by finding a set of feasible design variables [51]. The structure of this design is shown graphically in Figure 10. The mathematical model of WB design is defined in Equation (17), where $W = [w_1, w_2, w_3, w_4] = [h, l, t, b]$.
$$\begin{aligned} \min\; & Weldedbeam(W) = 1.10471 w_1^2 w_2 + 0.04811 w_3 w_4 (14.0 + w_2) \\ \text{s.t.}\; & h_1(W) = \tau(W) - \tau_{\max} \le 0, \quad h_2(W) = \sigma(W) - \sigma_{\max} \le 0, \\ & h_3(W) = \delta(W) - \delta_{\max} \le 0, \quad h_4(W) = w_1 - w_4 \le 0, \quad h_5(W) = P - P_c(W) \le 0, \\ & h_6(W) = 0.125 - w_1 \le 0, \quad h_7(W) = 1.10471 w_1^2 + 0.04811 w_3 w_4 (14.0 + w_2) - 5.0 \le 0, \end{aligned}$$

in which $0.1 \le w_1, w_4 \le 2.0$, $0.1 \le w_2, w_3 \le 10.0$, $\tau_{\max} = 13{,}600$ psi, $\sigma_{\max} = 30{,}000$ psi, $\delta_{\max} = 0.25$ in, and $P = 6000$ lb. The formulas for $\tau(W)$, $\sigma(W)$, $P_c(W)$, and $\delta(W)$ are:

$$\tau(W) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{w_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\, w_1 w_2}, \quad \tau'' = \frac{MR}{J}, \quad M = P\left(L + \frac{w_2}{2}\right), \quad R = \sqrt{\frac{w_2^2}{4} + \left(\frac{w_1 + w_3}{2}\right)^2},$$

$$J = 2\left\{\sqrt{2}\, w_1 w_2 \left[\frac{w_2^2}{4} + \left(\frac{w_1 + w_3}{2}\right)^2\right]\right\}, \quad \sigma(W) = \frac{6PL}{w_4 w_3^2}, \quad \delta(W) = \frac{6PL^3}{E w_3^2 w_4}, \quad P_c(W) = \frac{4.013 E \sqrt{w_3^2 w_4^6 / 36}}{L^2}\left(1 - \frac{w_3}{2L}\sqrt{\frac{E}{4G}}\right),$$

where $L = 14$ in, $E = 30 \times 10^6$ psi, and $G = 12 \times 10^6$ psi.
Table 14 shows the simulation results of the 11 algorithms on the WB optimization problem, and Table 15 shows the corresponding statistical analyses after 30 runs. From the experimental results, the CMRFO, the MRFO, JS, and MPA all perform well and obtain the same minimum value. However, Table 15 shows that the CMRFO has a smaller standard deviation, indicating that the CMRFO is more stable in solving the welded beam optimization problem.
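The WB model in Equations (17)–(19) can also be transcribed into code. The sketch below follows the formulas as printed above (function names are ours; G = 12 × 10⁶ psi is the standard shear modulus value associated with this benchmark in [51] and is our assumption, since only L and E are listed explicitly):

```python
import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6  # load (lb), length (in), moduli (psi)

def wb_cost(w):
    """Fabrication cost of the welded beam (objective of Eq. (17))."""
    w1, w2, w3, w4 = w
    return 1.10471 * w1**2 * w2 + 0.04811 * w3 * w4 * (14.0 + w2)

def wb_responses(w):
    """Shear stress tau, bending stress sigma, deflection delta, buckling load Pc."""
    w1, w2, w3, w4 = w
    tau_p = P / (math.sqrt(2) * w1 * w2)                     # primary shear
    M = P * (L + w2 / 2)                                     # bending moment
    R = math.sqrt(w2**2 / 4 + ((w1 + w3) / 2)**2)
    J = 2 * (math.sqrt(2) * w1 * w2 * (w2**2 / 4 + ((w1 + w3) / 2)**2))
    tau_pp = M * R / J                                       # secondary (torsional) shear
    tau = math.sqrt(tau_p**2 + 2 * tau_p * tau_pp * w2 / (2 * R) + tau_pp**2)
    sigma = 6 * P * L / (w4 * w3**2)
    delta = 6 * P * L**3 / (E * w3**2 * w4)
    Pc = (4.013 * E * math.sqrt(w3**2 * w4**6 / 36) / L**2
          * (1 - w3 / (2 * L) * math.sqrt(E / (4 * G))))
    return tau, sigma, delta, Pc
```

Evaluating `wb_responses` supplies the quantities needed by constraints h1, h2, h3, and h5, after which the penalty scheme of Section 4 applies unchanged.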
From what has been discussed above, the numerical results of three practical engineering applications show that the CMRFO is more effective than comparison optimizers in dealing with practical engineering applications.

5. Real-World Application: Construction of CG-Ball Curves with Optimal Shape

The CMRFO is now applied to a further challenging real-world optimization problem. In this section, an optimization model of CG-Ball curves based on minimizing the curvature variation in the curves is established. CG-Ball curves are a new kind of parametric curve with three shape control parameters that generalize cubic Ball curves; their advantage is that the shape of the curve can be changed freely via these control parameters.

5.1. Shape Optimization Model: Minimum Curvature Variation in Curves

Given four control points $P_i\ (i = 0, 1, 2, 3) \in \mathbb{R}^2$ or $\mathbb{R}^3$, the following parametric equation

$$P(s; \Omega) = \sum_{i=0}^{3} P_i\, b_{i,4}(s)$$

defines cubic generalized Ball (CG-Ball) curves [36], where $s \in [0, 1]$ and $\Omega = (\alpha, \beta, \gamma)$ collects the three shape parameters. The basis functions $b_{i,4}(s),\ i = 0, 1, 2, 3$, are defined as follows:

$$\begin{cases} b_{0,4}(s) = [1 - \alpha s(1 - s)](1 - s)^2, \\ b_{1,4}(s) = (2 + \alpha - \alpha s - \beta s)\, s (1 - s)^2, \\ b_{2,4}(s) = (2 + \gamma s + \beta - \beta s)\, s^2 (1 - s), \\ b_{3,4}(s) = (1 - \gamma s + \gamma s^2)\, s^2, \end{cases}$$

where $-2 \le \alpha, \gamma \le 4$ and $-2 \le \beta \le 2$.
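These basis functions sum to 1 for every $s$ and every admissible $(\alpha, \beta, \gamma)$, so the curve is an affine combination of its control points. A small sketch (function names are ours):

```python
def ball_basis(s, alpha, beta, gamma):
    """Cubic generalized Ball basis functions b_{i,4}(s) of Eq. (20)."""
    b0 = (1 - alpha * s * (1 - s)) * (1 - s)**2
    b1 = (2 + alpha - alpha * s - beta * s) * s * (1 - s)**2
    b2 = (2 + gamma * s + beta - beta * s) * s**2 * (1 - s)
    b3 = (1 - gamma * s + gamma * s**2) * s**2
    return b0, b1, b2, b3

def cg_ball_point(s, points, omega):
    """Evaluate P(s; Omega) = sum_i P_i b_{i,4}(s) componentwise."""
    b = ball_basis(s, *omega)
    return tuple(sum(bi * p[d] for bi, p in zip(b, points))
                 for d in range(len(points[0])))
```

At the endpoints the basis reduces to $(1, 0, 0, 0)$ and $(0, 0, 0, 1)$, so the curve interpolates $P_0$ and $P_3$ regardless of the shape parameters.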
The curvature variation (CV) of CG-Ball curves is given by Equation (23):

$$CV(\Omega) = \int_0^1 \left\| P^{(3)}(s; \Omega) \right\|^2 ds.$$
Further calculation shows that
$$\begin{aligned} \left\| P^{(3)}(s;\Omega) \right\|^2 = {} & \left[ b_{0,4}^{(3)}(s) \right]^2 \|P_0\|^2 + \left[ b_{1,4}^{(3)}(s) \right]^2 \|P_1\|^2 + \left[ b_{2,4}^{(3)}(s) \right]^2 \|P_2\|^2 + \left[ b_{3,4}^{(3)}(s) \right]^2 \|P_3\|^2 \\ & + 2 b_{0,4}^{(3)}(s)\, b_{1,4}^{(3)}(s)\, P_0 \cdot P_1 + 2 b_{0,4}^{(3)}(s)\, b_{2,4}^{(3)}(s)\, P_0 \cdot P_2 + 2 b_{0,4}^{(3)}(s)\, b_{3,4}^{(3)}(s)\, P_0 \cdot P_3 \\ & + 2 b_{1,4}^{(3)}(s)\, b_{2,4}^{(3)}(s)\, P_1 \cdot P_2 + 2 b_{1,4}^{(3)}(s)\, b_{3,4}^{(3)}(s)\, P_1 \cdot P_3 + 2 b_{2,4}^{(3)}(s)\, b_{3,4}^{(3)}(s)\, P_2 \cdot P_3. \end{aligned}$$
Substituting Equation (24) into Equation (23), we have
$$CV(\Omega) = \int_0^1 \left\| P^{(3)}(s;\Omega) \right\|^2 ds = l_0 \|P_0\|^2 + l_1 \|P_1\|^2 + l_2 \|P_2\|^2 + l_3 \|P_3\|^2 + l_4 P_0 \cdot P_1 + l_5 P_0 \cdot P_2 + l_6 P_0 \cdot P_3 + l_7 P_1 \cdot P_2 + l_8 P_1 \cdot P_3 + l_9 P_2 \cdot P_3,$$
where
$$\begin{aligned} & l_0 = 84\alpha^2, \qquad l_1 = 84\alpha^2 + 96\alpha\beta + 144\alpha + 48\beta^2 + 144, \\ & l_2 = 48\beta^2 - 96\beta\gamma + 84\gamma^2 + 144\gamma + 144, \qquad l_3 = 84\gamma^2, \\ & l_4 = -12\alpha(7\alpha + 4\beta + 6), \qquad l_5 = 12\alpha(4\beta - \gamma + 6), \\ & l_6 = 12\alpha\gamma, \qquad l_7 = 12\alpha\gamma - 48\alpha\beta - 72\alpha - 48\beta^2 + 48\beta\gamma - 72\gamma - 144, \\ & l_8 = -12\gamma(\alpha + 4\beta - 6), \qquad l_9 = -12\gamma(7\gamma - 4\beta + 6). \end{aligned}$$
Finally, the shape optimization model based on minimum curvature variation in curves is established, and the specific formula is:
$$\begin{aligned} \arg\min\; & CV(\Omega) = \int_0^1 \left\| P^{(3)}(s;\Omega) \right\|^2 ds = l_0 \|P_0\|^2 + l_1 \|P_1\|^2 + l_2 \|P_2\|^2 + l_3 \|P_3\|^2 \\ & \quad + l_4 P_0 \cdot P_1 + l_5 P_0 \cdot P_2 + l_6 P_0 \cdot P_3 + l_7 P_1 \cdot P_2 + l_8 P_1 \cdot P_3 + l_9 P_2 \cdot P_3, \\ \text{s.t.}\; & \alpha, \gamma \in [-2,\, 4], \quad \beta \in [-2,\, 2]. \end{aligned}$$
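The closed-form objective is cheap to evaluate, which is what makes it a convenient fitness function for the meta-heuristics compared below. The following is a direct transcription (a sketch; function names are ours, and the signs of the cross-term coefficients were restored from the derivation, so treat them as our reconstruction). A useful sanity check is the reversal symmetry of the model: reversing the control points while swapping α↔γ and negating β must leave the objective unchanged.

```python
def cv_coeffs(alpha, beta, gamma):
    """Coefficients l_0..l_9 of the closed-form curvature-variation objective."""
    return [
        84 * alpha**2,
        84 * alpha**2 + 96 * alpha * beta + 144 * alpha + 48 * beta**2 + 144,
        48 * beta**2 - 96 * beta * gamma + 84 * gamma**2 + 144 * gamma + 144,
        84 * gamma**2,
        -12 * alpha * (7 * alpha + 4 * beta + 6),
        12 * alpha * (4 * beta - gamma + 6),
        12 * alpha * gamma,
        12 * alpha * gamma - 48 * alpha * beta - 72 * alpha
            - 48 * beta**2 + 48 * beta * gamma - 72 * gamma - 144,
        -12 * gamma * (alpha + 4 * beta - 6),
        -12 * gamma * (7 * gamma - 4 * beta + 6),
    ]

def cv(points, omega):
    """Curvature-variation objective of Eq. (26); points = [P0, P1, P2, P3]."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    l = cv_coeffs(*omega)
    P0, P1, P2, P3 = points
    return (l[0] * dot(P0, P0) + l[1] * dot(P1, P1) + l[2] * dot(P2, P2)
            + l[3] * dot(P3, P3) + l[4] * dot(P0, P1) + l[5] * dot(P0, P2)
            + l[6] * dot(P0, P3) + l[7] * dot(P1, P2) + l[8] * dot(P1, P3)
            + l[9] * dot(P2, P3))
```

An optimizer then searches $(\alpha, \gamma) \in [-2, 4]$ and $\beta \in [-2, 2]$ for the minimum of `cv` with the control points held fixed.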

5.2. Modeling Examples

This section optimizes the shape of CG-Ball curves according to the proposed optimization model. Two examples are given to demonstrate the effectiveness of the CMRFO in solving the optimization model established in Equation (26).
Example 1.
The shape optimization of plane-S-type CG-Ball curves and the convergence curve of the CMRFO are given in this example. The control points of CG-Ball curves are
$$P_0 = (0,\, 0.1), \quad P_1 = (0.25,\, 0.8), \quad P_2 = (0.75,\, 0.1), \quad P_3 = (1,\, 0.8).$$
Figure 11 shows the optimized CG-Ball curves with minimum curvature variation obtained by five algorithms. Figure 11a–e shows the CG-Ball curves with optimal shape obtained by the proposed CMRFO, the SCA [52], the LFD [59], the AO [56], and the CHOA [50], respectively. The red curve uses the optimized shape parameter values, while all shape parameters of the blue curve are set to 1, which demonstrates that CG-Ball curves have flexible shape adjustability. The convergence curves of the five algorithms are illustrated in Figure 11f, and the optimal shape parameters and objective function values obtained by all five algorithms are shown in Table 16. According to the experimental results, the CMRFO is the most effective at solving the optimization model: the curvature variation in the CG-Ball curves obtained by the CMRFO is the minimum among the five algorithms (101.5338). Meanwhile, the CMRFO has the fastest convergence speed of the compared algorithms.
Example 2.
This example graphically presents the shape optimization of space-M-type CG-Ball curves and the convergence curves of the optimization model as the curvature variation in the curves converges to its optimal value. Take the control points of the curves as

$$P_0 = (0.2,\, 0.1,\, 0.1), \quad P_1 = (0,\, 0.8,\, 0.8), \quad P_2 = (1,\, 0.8,\, 0.8), \quad P_3 = (0.8,\, 0.1,\, 0.1).$$
Figure 12 displays the CG-Ball curves after optimization. Figure 12a–e displays the curves after shape optimization by each of the five algorithms, for which the curvature variation in the curves is minimized. The red curve uses the optimized shape parameters, and all shape parameters of the blue curve are set to 1. Figure 12f shows the convergence of the curvature variation in the CG-Ball curves. Table 17 gives the optimal shape parameters and the corresponding objective function values after shape optimization by the five algorithms. The curvature variation in the CG-Ball curve obtained by the CMRFO is the smallest; its optimal objective value is 252.6226.

6. Conclusions

This paper develops an improved manta ray foraging optimizer (CMRFO) that combines chaotic initialization, opposition-based learning, and an elite chaotic searching strategy to achieve higher accuracy and a faster convergence rate. Several correction techniques are applied to the MRFO. First, a chaos-based initialization strategy helps explore the search space comprehensively and improves the algorithm's efficiency during the search. Second, an opposition-based learning strategy improves the quality of candidate solutions. Finally, the elite chaotic searching strategy updates elite individuals and enhances the optimization capability. Fourteen different chaotic maps are used to initialize the population, and the experiments show that the cubic map provides the best searchability. In addition, the sensitivity of the CMRFO to a critical parameter (the elite selection proportion) is discussed. The CMRFO is competitive with numerous advanced intelligent algorithms on 23 benchmark functions, the well-known IEEE CEC2020 test suite, and three practical engineering applications. Moreover, a mathematical model of shape optimization for CG-Ball curves is established, and the CMRFO is used to solve the established shape optimization model in comparison with four popular advanced algorithms. The numerical results further verify the effectiveness and practicability of the CMRFO in solving challenging optimization problems in the engineering field.
Future work can extend the proposed CMRFO to other examples of CG-Ball curve shape optimization, and multiobjective optimization of CG-Ball curve shapes can also be considered. We further envision applications of the proposed CMRFO in other engineering fields, for example, feature selection, surface optimization, and path planning.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/math10162960/s1: The Matlab codes of the proposed CMRFO algorithm and other optimization algorithms.

Author Contributions

Conceptualization, J.Y. and G.H.; data curation, J.Y., Z.L. and G.H.; formal analysis, Z.L.; funding acquisition, J.Y. and G.H.; investigation, J.Y., Z.L. and X.Z.; methodology, J.Y., Z.L., X.Z. and G.H.; project administration, J.Y. and G.H.; resources, J.Y., X.Z. and G.H.; software, Z.L. and X.Z.; supervision, J.Y.; validation, Z.L., X.Z. and G.H.; visualization, X.Z.; writing—original draft, J.Y., Z.L., X.Z. and G.H.; writing—review and editing, J.Y. and G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Natural Science Foundation of Xijing University (Grant No. XJ190214).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article (and its Supplementary Information Files).

Acknowledgments

The authors are grateful to the reviewers for their insightful suggestions and comments, which helped us to improve the presentation and content of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hackwood, S.; Beni, G. Self-organization of sensors for swarm intelligence. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, 12–14 May 1992; pp. 819–829.
  2. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73.
  3. Barshandeh, S.; Dana, R.; Eskandarian, P. A learning automata-based hybrid MPA and JS algorithm for numerical optimization problems and its application on data clustering. Knowl. Based Syst. 2021, 236, 107682.
  4. Hu, G.; Du, B.; Wang, X.F.; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowl. Based Syst. 2022, 235, 107638.
  5. Hassan, M.H.; Kamel, S.; Abualigah, L.; Eid, A. Development and application of slime mould algorithm for optimal economic emission dispatch. Expert Syst. Appl. 2021, 182, 115205.
  6. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901.
  7. Hu, G.; Zhu, X.N.; Wei, G.; Chang, C.T. An improved marine predators algorithm for shape optimization of developable Ball surfaces. Eng. Appl. Artif. Intell. 2021, 105, 104417.
  8. Elsisi, M.; Ebrahim, M. Optimal design of low computational burden model predictive control based on SSDA towards autonomous vehicle under vision dynamics. Int. J. Intell. Syst. 2021, 36, 6968–6987.
  9. Elsisi, M.; Ismail, M.; Bendary, A. Optimal design of battery charge management controller for hybrid system PV/wind cell with storage battery. Int. J. Power Energy Convers. 2020, 11, 412.
  10. Elsisi, M.; Mahmoud, K.; Lehtonen, M.; Darwish, M.M.F. Effective Nonlinear Model Predictive Control Scheme Tuned by Improved NN for Robotic Manipulators. IEEE Access 2021, 9, 64278–64290.
  11. Elsisi, M.; Soliman, M.; Aboelela, M.; Mansour, W. ABC based design of PID controller for two area load frequency control with nonlinearities. Telkomnika Indones. J. Electr. Eng. 2015, 16, 58–64.
  12. Elsisi, M.; Tran, M.-Q.; Hasanien, H.M.; Turky, R.A.; Albalawi, F.; Ghoneim, S.S.M. Robust Model Predictive Control Paradigm for Automatic Voltage Regulators against Uncertainty Based on Optimization Algorithms. Mathematics 2021, 9, 2885.
  13. Elsisi, M. Optimal design of nonlinear model predictive controller based on new modified multitracker optimization algorithm. Int. J. Intell. Syst. 2020, 35, 1857–1878.
  14. Zheng, J.Y.; Hu, G.; Ji, X.; Qin, X. Quintic generalized Hermite interpolation curves: Construction and shape optimization using an improved GWO algorithm. Comput. Appl. Math. 2022, 41, 115.
  15. Elsisi, M.; Abdelfattah, H. New design of variable structure control based on lightning search algorithm for nuclear reactor power system considering load-following operation. Nucl. Eng. Technol. 2020, 52, 544–551.
  16. Hu, G.; Dou, W.; Wang, X.; Abbas, M. An enhanced chimp optimization algorithm for optimal degree reduction of Said–Ball curves. Math. Comput. Simul. 2022, 197, 207–252.
  17. Zhao, W.; Zhang, Z.; Mirjalili, S.; Wang, L.; Khodadadi, N.; Mirjalili, S.M. An effective multi-objective artificial hummingbird algorithm with dynamic elimination-based crowding distance for solving engineering design problems. Comput. Methods Appl. Mech. Eng. 2022, 398, 115223.
  18. Xie, Z.; Zhang, C.; Shao, X.; Lin, W.; Zhu, H. An effective hybrid teaching–learning-based optimization algorithm for permutation flow shop scheduling problem. Adv. Eng. Softw. 2014, 77, 35–47.
  19. Hu, G.; Li, M.; Wang, X.; Wei, G.; Chang, C.-T. An enhanced manta ray foraging optimization algorithm for shape optimization of complex CCG-Ball curves. Knowl. Based Syst. 2022, 240, 108071.
  20. Elsisi, M. Improved grey wolf optimizer based on opposition and quasi learning approaches for optimization: Case study autonomous vehicle including vision system. Artif. Intell. Rev. 2022, 1–24.
  21. Hu, G.; Du, B.; Li, H.N.; Wang, X.P. Quadratic interpolation boosted black widow spider-inspired optimization algorithm with wavelet mutation. Math. Comput. Simul. 2022, 200, 428–467.
  22. Zhao, W.; Shi, T.; Wang, L.; Cao, Q.; Zhang, H. An adaptive hybrid atom search optimization with particle swarm optimization and its application to optimal no-load PID design of hydro-turbine governor. J. Comput. Des. Eng. 2021, 8, 1204–1233.
  23. Zhao, W.G.; Zhang, Z.X.; Wang, L.Y. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300.
  24. Houssein, E.H.; Zaki, G.N.; Diab, A.A.Z.; Younis, E.M.G. An efficient manta ray foraging optimization algorithm for parameter extraction of three-diode photovoltaic model. Comput. Electr. Eng. 2021, 94, 107304.
  25. Fathy, A.; Rezk, H.; Yousri, D. A robust global MPPT to mitigate partial shading of triple-junction solar cell-based system using manta ray foraging optimization algorithm. Sol. Energy 2020, 207, 305–316.
  26. Ben, U.C.; Akpan, A.E.; Enyinyi, E.O.; Awak, E. Novel technique for the interpretation of gravity anomalies over geologic structures with idealized geometries using the manta ray foraging optimization. J. Asian Earth Sci. X 2021, 6, 100070.
  27. Ben, U.C.; Akpan, A.E.; Mbonu, C.C.; Ebong, E.D. Novel methodology for interpretation of magnetic anomalies due to two-dimensional dipping dikes using the manta ray foraging optimization. J. Appl. Geophys. 2021, 192, 104405.
  28. El-Hameed, M.A.; Elkholy, M.M.; El-Fergany, A.A. Three-diode model for characterization of industrial solar generating units using manta-rays foraging optimizer: Analysis and validations. Energy Convers. Manag. 2020, 219, 113048.
  29. Houssein, E.H.; Ibrahim, I.E.; Neggaz, N.; Hassaballah, M.; Wazery, Y.M. An efficient ECG arrhythmia classification method based on manta ray foraging optimization. Expert Syst. Appl. 2021, 181, 115131.
  30. Hemeida, M.G.; Ibrahim, A.A.; Mohamed, A.A.; Alkhalaf, S.; El-Dine, A.M.B. Optimal allocation of distributed generators DG based manta ray foraging optimization algorithm (MRFO). Ain Shams Eng. J. 2021, 12, 609–619.
  31. Elmaadawy, K.; Elaziz, M.A.; Elsheikh, A.H.; Moawad, A.; Liu, B.C.; Lu, S.F. Utilization of random vector functional link integrated with manta ray foraging optimization for effluent prediction of wastewater treatment plant. J. Environ. Manag. 2021, 298, 113520.
  32. Got, A.; Zouache, D.; Moussaoui, A. MOMRFO: Multi-objective manta ray foraging optimizer for handling engineering design problems. Knowl. Based Syst. 2022, 237, 107880.
  33. Zouache, D.; Abdelaziz, F.B. Guided manta ray foraging optimization using epsilon dominance for multi-objective optimization in engineering design. Expert Syst. Appl. 2022, 189, 116126.
  34. Kahraman, H.T.; Akbel, M.; Duman, S. Optimization of optimal power flow problem using multi-objective manta ray foraging optimizer. Appl. Soft Comput. 2022, 116, 108334.
  35. Elaziz, M.A.; Yousri, D.; Al-Qaness, M.A.A.; AbdelAty, A.M.; Radwan, A.G.; Ewees, A.A. A Grunwald–Letnikov based manta ray foraging optimizer for global optimization and image segmentation. Eng. Appl. Artif. Intell. 2021, 98, 104105.
  36. Hu, S.M.; Wang, G.Z.; Jin, T.G. Properties of two types of generalized Ball curves. Comput. Aided Des. 1996, 28, 125–133.
  37. Yousri, D.; AbdelAty, A.M.; Al-qaness, M.A.A.; Ewees, A.A.; Radwan, A.G.; Elaziz, M.A. Discrete fractional-order Caputo method to overcome trapping in local optima: Manta ray foraging optimizer as a case study. Expert Syst. Appl. 2022, 192, 116355.
  38. Xu, H.T.; Song, H.Q.; Xu, C.X.; Wu, X.W.; Yousefi, N. Exergy analysis and optimization of a HT-PEMFC using developed manta ray foraging optimization algorithm. Int. J. Hydrog. Energy 2020, 45, 30932–30941.
  39. Jena, B.; Naik, M.K.; Panda, R.; Abraham, A. Maximum 3D Tsallis entropy based multilevel thresholding of brain MR image using attacking manta ray foraging optimization. Eng. Appl. Artif. Intell. 2021, 103, 104293.
  40. Feng, J.Y.; Luo, X.G.; Gao, M.Z.; Abbas, A.; Xu, Y.P.; Pouramini, S. Minimization of energy consumption by building shape optimization using an improved manta-ray foraging optimization algorithm. Energy Rep. 2021, 7, 1068–1078.
  41. Liu, B.Z.; Wang, Z.Z.; Feng, L.; Jermsittiparsert, K. Optimal operation of photovoltaic/diesel generator/pumped water reservoir power system using modified manta ray optimization. J. Clean. Prod. 2021, 289, 125733.
  42. Sheng, B.Q.; Pan, T.H.; Luo, Y.; Jermsittiparsert, K. System identification of the PEMFCs based on balanced manta-ray foraging optimization algorithm. Energy Rep. 2020, 6, 2887–2896.
  43. Micev, M.; Ćalasan, M.; Ali, Z.M.; Hasanien, H.M.; Aleem, S.H.E.A. Optimal design of automatic voltage regulation controller using hybrid simulated annealing–manta ray foraging optimization algorithm. Ain Shams Eng. J. 2021, 12, 641–657.
  44. Hassan, M.H.; Houssein, E.H.; Mahdy, M.A.; Kamel, S. An improved manta ray foraging optimizer for cost-effective emission dispatch problems. Eng. Appl. Artif. Intell. 2021, 100, 104155.
  45. Jain, S.; Indora, S.; Atal, D.K. Rider manta ray foraging optimization-based generative adversarial network and CNN feature for detecting glaucoma. Biomed. Signal Process. Control 2022, 73, 103425.
  46. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  47. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  48. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H.L. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
  49. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
  50. Khishe, M.; Mosavi, M.R. Chimp optimization algorithm. Expert Syst. Appl. 2020, 149, 113338.
  51. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
  52. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
  53. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  54. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  55. Dhiman, G.; Kumar, V. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl. Based Syst. 2019, 165, 169–196.
  56. Abualigah, L.; Yousri, D.; Elaziz, M.A.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
  57. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
  58. Chou, J.S.; Truong, D.N. A novel metaheuristic optimizer inspired by behavior of jellyfish in ocean. Appl. Math. Comput. 2021, 389, 125535.
  59. Houssein, E.H.; Saad, M.R.; Hashim, F.A.; Shaban, H.; Hassaballah, M. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 94, 103731.
Figure 1. Fourteen different chaotic map images.
Figure 3. Exploration and exploitation phases in the CMRFO on the CEC2020 functions.
Figure 4. Convergence curves of 11 algorithms.
Figure 5. Radar charts of all 11 algorithms.
Figure 6. Convergence curves for 10 comparison algorithms (50-dimensional IEEE CEC2020).
Figure 7. Boxplots of all 10 algorithms on the 50-dimensional CEC2020 test suite.
Figure 8. Structure of the pressure vessel.
Figure 9. Structure of TCS.
Figure 10. Structure of WB.
Figure 11. Shape optimization of CG-Ball curves.
Figure 12. Shape optimization of CG-Ball curves.
Table 1. Fourteen chaotic maps.
| No. | Map Name | Map Equation |
| --- | --- | --- |
| 1 | Chebyshev (M1) | $\theta_{k+1} = \cos(k \cos^{-1}\theta_k)$ |
| 2 | Circle (M2) | $\theta_{k+1} = \mathrm{mod}\!\left(\theta_k + b - \frac{a}{2\pi}\sin(2\pi\theta_k),\, 1\right)$ |
| 3 | Gauss/mouse (M3) | $\theta_{k+1} = \begin{cases} 0, & \theta_k = 0 \\ \frac{1}{\theta_k} \bmod 1, & \text{otherwise} \end{cases}$ |
| 4 | Intermittency (M4) | $\theta_{k+1} = \begin{cases} \varepsilon + \theta_k + c\theta_k^n, & 0 < \theta_k \le P \\ \frac{\theta_k - P}{1 - P}, & P < \theta_k < 1 \end{cases}$ |
| 5 | Iterative (M5) | $\theta_{k+1} = \sin(a\pi/\theta_k),\ a \in (0, 1)$ |
| 6 | Liebovitch (M6) | $\theta_{k+1} = \begin{cases} a\theta_k, & 0 < \theta_k \le P_1 \\ \frac{P - \theta_k}{P_2 - P_1}, & P_1 < \theta_k \le P_2 \\ 1 - \beta(1 - \theta_k), & P_2 < \theta_k \le 1 \end{cases}$ |
| 7 | Logistic (M7) | $\theta_{k+1} = a\theta_k(1 - \theta_k)$ |
| 8 | Piecewise (M8) | $\theta_{k+1} = \begin{cases} \theta_k / P, & 0 \le \theta_k < P \\ \frac{\theta_k - P}{0.5 - P}, & P \le \theta_k < 0.5 \\ \frac{1 - P - \theta_k}{0.5 - P}, & 0.5 \le \theta_k < 1 - P \\ \frac{1 - \theta_k}{P}, & 1 - P \le \theta_k < 1 \end{cases}$ |
| 9 | Sine (M9) | $\theta_{k+1} = \frac{a}{4}\sin(\pi\theta_k),\ a \in (0, 4]$ |
| 10 | Singer (M10) | $\theta_{k+1} = \mu(7.86\theta_k - 23.31\theta_k^2 + 28.75\theta_k^3 - 13.302875\theta_k^4)$ |
| 11 | Sinusoidal (M11) | $\theta_{k+1} = a\theta_k^2\sin(\pi\theta_k)$ |
| 12 | Tent (M12) | $\theta_{k+1} = \begin{cases} \theta_k / 0.7, & \theta_k < 0.7 \\ \frac{10}{3}(1 - \theta_k), & \theta_k \ge 0.7 \end{cases}$ |
| 13 | β-chaotic (M13) | $\theta_{k+1} = k_\beta(\theta_k, \mu, \nu, \theta_1, \theta_2)$ |
| 14 | Cubic (M14) | $\theta_{k+1} = \rho\theta_k(1 - \theta_k^2)$ |
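As an illustration of how one of these maps drives population initialization, the following sketch iterates the cubic map (M14), $\theta_{k+1} = \rho\theta_k(1 - \theta_k^2)$, and scales each chaotic value into the search bounds. The control parameter $\rho = 2.595$ (which keeps the orbit inside $(0, 1)$) and the row-by-row scaling scheme are our assumptions, not details taken from the paper:

```python
def cubic_map(theta, rho=2.595):
    # Cubic map M14: theta_{k+1} = rho * theta_k * (1 - theta_k^2)
    return rho * theta * (1 - theta**2)

def chaotic_population(n, dim, lb, ub, rho=2.595, seed=0.3):
    """Build an n x dim population by iterating the cubic map in (0, 1)
    and linearly scaling every chaotic value into [lb, ub]."""
    pop, theta = [], seed
    for _ in range(n):
        row = []
        for _ in range(dim):
            theta = cubic_map(theta, rho)
            row.append(lb + theta * (ub - lb))
        pop.append(row)
    return pop
```

Replacing `cubic_map` with any other row of Table 1 yields the corresponding chaotic initializer, which is how the fourteen variants M1–M14 are compared.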
Table 2. Results of the CMRFO using 14 different chaotic maps on 23 benchmark functions.
Table 2. Results of the CMRFO using 14 different chaotic maps on 23 benchmark functions.
No.ResultCMRFO
M1M2M3M4M5M6M7
F1Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F2Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F3Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F4Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F5Mean8.700600.901441.762702.17 × 10−77.804300.897950.88399
Std79.759516.252029.50258.89 × 10−1378.617216.126315.6288
Rank1471131365
F6Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F7Mean3.26 × 10−55.28 × 10−54.33 × 10−54.65 × 10−55.44 × 10−54.37 × 10−53.88 × 10−5
Std4.58 × 10−101.29 × 10−91.20 × 10−91.71 × 10−91.63 × 10−91.87 × 10−97.27 × 10−10
Rank112681373
F8Mean−38,051.19−12,569.49−12,391.83−9584.84−39,165.73−12,569.49−12,569.49
Std0.02.35 × 10−236.31 × 1051.66 × 1065.57 × 10−231.41 × 10−231.60 × 10−23
Rank131011121457
F9Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F10Mean8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−16
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F11Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F12Mean1.88 × 10−301.04 × 10−313.61 × 10−319.12 × 10−322.06 × 10−308.87 × 10−322.68 × 10−31
Std2.61 × 10−595.64 × 10−622.93 × 10−616.34 × 10−622.56 × 10−592.65 × 10−627.27 × 10−61
Rank1351141439
F13Mean1.38 × 10−297.61 × 10−311.15 × 10−291.87 × 10−315.49 × 10−42.35 × 10−315.92 × 10−31
Std1.08 × 10−573.98 × 10−603.90 × 10−581.77 × 10−616.04 × 10−67.95 × 10−621.46 × 10−60
Rank1371221436
F14Mean0.9980.9980.9980.9980.9980.9980.998
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F15Mean3.53 × 10−43.07 × 10−43.53 × 10−43.07 × 10−43.53 × 10−43.07 × 10−43.07 × 10−4
Std4.19 × 10−85.88 × 10−384.19 × 10−83.37 × 10−384.19 × 10−82.68 × 10−382.26 × 10−38
Rank11101191132
F16Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Std5.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−32
Rank1.01.01.01.01.01.01.0
F17Mean0.397890.397890.397890.397890.397890.397890.39789
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F18Mean3333333
Std4.05 × 10−313.01 × 10−315.19 × 10−321.56 × 10−3107.78 × 10−319.34 × 10−32
Rank97361124
F19Mean−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628
Std5.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−30
Rank1.01.01.01.01.01.01.0
F20Mean−3.2863−3.3101−3.2863−3.3042−3.3101−3.2923−3.3161
Std3.12 × 10−31.34 × 10−33.12 × 10−31.90 × 10−31.34 × 10−32.79 × 10−37.07 × 10−4
Rank6263251
F21Mean−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532
Std1.13 × 10−291.33 × 10−291.08 × 10−291.08 × 10−299.80 × 10−301.08 × 10−291.18 × 10−29
Rank3722124
F22Mean−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029
Std7.31 × 10−308.30 × 10−309.80 × 10−308.30 × 10−307.31 × 10−309.30 × 10−309.80 × 10−30
Rank2464256
F23Mean−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364
Std3.32 × 10−303.65 × 10−303.16 × 10−303.32 × 10−302.99 × 10−303.32 × 10−304.32 × 10−30
Rank3423135
Mean Rank | 4.3478 | 3.7826 | 4.0435 | 2.9565 | 4.2609 | 2.8696 | 2.7826
Result | 13 | 8 | 10 | 5 | 12 | 4 | 3
Table 3. Results of CMRFO using 14 different chaotic maps to initialize the population (continued).
No. | Result | CMRFO: M8 | M9 | M10 | M11 | M12 | M13 | M14
F1Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F2Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F3Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F4Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F5Mean0.912031.680300.911821.77 × 10−77.79 × 10−73.555401.62 × 10−8
Std16.636126.747116.62822.01 × 10−135.36 × 10−1253.31941.04 × 10−15
Rank910824121
F6Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F7Mean4.89 × 10−55.01 × 10−54.30 × 10−53.75 × 10−56.05 × 10−55.23 × 10−54.14 × 10−5
Std6.98 × 10−102.06 × 10−91.05 × 10−91.08 × 10−92.74 × 10−91.30 × 10−99.62 × 10−10
Rank9105214114
F8Mean−12,569.49−12,569.49−12,569.49−12,569.49−12,569.49−12,569.49−12,569.49
Std5.22 × 10−241.36 × 10−231.76 × 10−231.95 × 10−231.57 × 10−231.22 × 10−236.27 × 10−24
Rank1489632
F9Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F10Mean8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−16
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F11Mean0.00.00.00.00.00.00.0
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F12Mean2.29 × 10−311.46 × 10−317.63 × 10−327.80 × 10−322.86 × 10−316.84 × 10−311.56 × 10−31
Std6.98 × 10−615.94 × 10−621.80 × 10−629.70 × 10−633.54 × 10−612.90 × 10−602.00 × 10−61
Rank861210127
F13Mean1.21 × 10−303.99 × 10−303.19 × 10−302.93 × 10−304.16 × 10−312.19 × 10−301.80 × 10−31
Std6.54 × 10−605.57 × 10−593.21 × 10−618.34 × 10−597.06 × 10−611.42 × 10−591.45 × 10−61
Rank811410591
F14Mean0.9980.9980.9980.9980.9980.9980.998
Std02.59 × 10−3300000
Rank1211111
F15Mean3.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.53 × 10−43.07 × 10−4
Std2.17 × 10−382.72 × 10−382.80 × 10−382.94 × 10−382.71 × 10−384.19 × 10−83.12 × 10−38
Rank15674118
F16Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Std5.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−32
Rank1.01.01.01.01.01.01.0
F17Mean0.397890.397890.397890.397890.397890.397890.39789
Std0.00.00.00.00.00.00.0
Rank1.01.01.01.01.01.01.0
F18Mean3333333
Std3.43 × 10−317.68 × 10−314.46 × 10−319.34 × 10−329.34 × 10−321.35 × 10−311.04 × 10−32
Rank811104452
F19Mean−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628
Std5.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−30
Rank1.01.01.01.01.01.01.0
F20Mean−3.2982−3.2923−3.2982−3.2923−3.2744−3.3042−3.2923
Std2.38 × 10−32.79 × 10−32.38 × 10−32.79 × 10−33.57 × 10−31.90 × 10−32.79 × 10−3
Rank4545735
F21Mean−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532
Std1.23 × 10−291.13 × 10−291.28 × 10−291.23 × 10−291.23 × 10−291.18 × 10−291.23 × 10−29
Rank5365545
F22Mean−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029
Std5.31 × 10−307.81 × 10−309.30 × 10−307.81 × 10−307.81 × 10−309.80 × 10−307.31 × 10−30
Rank1353362
F23Mean−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364
Std3.16 × 10−301.23 × 10−293.16 × 10−302.99 × 10−303.32 × 10−301.13 × 10−293.32 × 10−30
Rank2721363
Mean Rank | 2.9565 | 3.8261 | 3.0870 | 2.6957 | 3.3478 | 4.0870 | 2.2609
Result | 5 | 9 | 6 | 2 | 7 | 11 | 1
Table 4. Results of the CMRFO based on different p-values.
No. | Result | p = 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9
F1Mean000000000
Std000000000
Rank111111111
F2Mean000000000
Std000000000
Rank111111111
F3Mean000000000
Std000000000
Rank111111111
F4Mean04.94 × 10−3244.94 × 10−3244.94 × 10−3244.94 × 10−3244.94 × 10−3244.94 × 10−3244.94 × 10−3244.94 × 10−324
Std000000000
Rank122222222
F5Mean5.79 × 10−91.29 × 10−83.25 × 10−81.76 × 10−86.58 × 10−83.11 × 10−74.64 × 10−81.23 × 10−61.56 × 10−7
Std2.35 × 10−161.17 × 10−151.64 × 10−147.74 × 10−161.24 × 10−141.52 × 10−121.59 × 10−142.23 × 10−114.22 × 10−13
Rank124368597
F6Mean000000000
Std000000000
Rank111111111
F7Mean2.07 × 10−52.40 × 10−52.17 × 10−52.44 × 10−53.70 × 10−52.70 × 10−53.72 × 10−55.33 × 10−56.42 × 10−5
Std2.88 × 10−103.61 × 10−106.50 × 10−103.90 × 10−107.30 × 10−103.51 × 10−101.03 × 10−92.81 × 10−93.66 × 10−9
Rank132465789
F8Mean−12,569.49−12,504.34−12,563.56−12,534.94−12,569.49−12,489.54−12,569.49−12,541.85−12,532.97
Std6.62 × 10−248.49 × 10−47.01 × 10−21.67 × 10−46.62 × 10−241.28 × 1056.10 × 10−247.60 × 1032.67 × 104
Rank273528146
F9Mean000000000
Std000000000
Rank111111111
F10Mean8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−16
Std000000000
Rank111111111
F11Mean000000000
Std000000000
Rank111111111
F12Mean1.57 × 10−321.60 × 10−327.76 × 10−321.81 × 10−302.38 × 10−304.23 × 10−301.07 × 10−291.15 × 10−298.80 × 10−30
Std7.88 × 10−961.23 × 10−679.46 × 10−631.13 × 10−591.78 × 10−592.93 × 10−591.35 × 10−575.77 × 10−582.37 × 10−58
Rank123456897
F13Mean1.36 × 10−321.62 × 10−322.21 × 10−302.95 × 10−291.01 × 10−294.16 × 10−291.29 × 10−285.31 × 10−281.24 × 10−28
Std7.60 × 10−684.79 × 10−657.23 × 10−604.54 × 10−571.57 × 10−585.71 × 10−578.07 × 10−564.54 × 10−546.16 × 10−56
Rank123546897
F14Mean0.9980.9980.9980.9980.9980.9980.9980.9980.998
Std000000000
Rank111111111
F15Mean3.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−43.07 × 10−4
Std1.87 × 10−382.06 × 10−382.72 × 10−382.92 × 10−383.96 × 10−383.28 × 10−381.89 × 10−382.41 × 10−382.10 × 10−38
Rank136798254
F16Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Std2.55 × 10−208.55 × 10−205.83 × 10−412.23 × 10−161.03 × 10−181.41 × 10−181.02 × 10−191.70 × 10−181.58 × 10−12
Rank128745369
F17Mean0.397890.397890.397890.397890.397890.397890.397890.397890.39789
Std000000000
Rank111111111
F18Mean333333333
Std1.35 × 10−313.11 × 10−315.19 × 10−323.11 × 10−312.70 × 10−311.25 × 10−315.29 × 10−314.05 × 10−313.63 × 10−31
Rank351542876
F19Mean−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628
Std5.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−30
Rank111111111
F20Mean−3.2982−3.2863−3.2863−3.2804−3.2863−3.2804−3.2804−3.2625−3.2685
Std2.38 × 10−33.12 × 10−33.12 × 10−33.39 × 10−33.12 × 10−33.39 × 10−33.39 × 10−33.72 × 10−33.68 × 10−3
Rank122323354
F21Mean−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532−10.1532
Std1.18 × 10−291.28 × 10−291.33 × 10−291.23 × 10−291.23 × 10−291.28 × 10−291.33 × 10−291.28 × 10−291.28 × 10−29
Rank134223433
F22Mean−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029
Std5.81 × 10−308.30 × 10−307.31 × 10−307.81 × 10−305.81 × 10−309.30 × 10−309.30 × 10−306.81 × 10−309.30 × 10−30
Rank153416626
F23Mean−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364−10.5364
Std3.16 × 10−303.32 × 10−303.32 × 10−304.32 × 10−303.32 × 10−303.82 × 10−303.32 × 10−303.82 × 10−303.32 × 10−30
Rank122423232
Mean Rank | 1.1304 | 2.1739 | 2.3043 | 2.8261 | 2.5652 | 3.2609 | 3.0000 | 3.5652 | 3.5652
Result | 1 | 2 | 3 | 5 | 4 | 7 | 6 | 8 | 8
Table 5. Description of parameter settings.
Algorithm | Parameter Values
MRFO | S = 2
CMRFO | S = 2, p = 0.1
PSO | P1 = P2 = 2; ω: linearly decreases from 0.8 to 0.2
GWO | α: the value range of α is [0, 2]; increases linearly
HHO | E0: [−1, 1]
AOA | P1 = 2, P2 = 6, P3 = 1, P4 = 2
CHOA | f: non-linearly decreases from 2.5 to 0; chaotic map: tent map
MPA | F = 0.2, P = 0.5
Table 6. Comparison results of all 11 optimizers on 23 classical benchmarks.
No. | Result | MRFO | CMRFO | MRFO–GBO | DMRFO | SA–MRFO | PSO | GWO | HHO | AOA | CHOA | MPA
F1Mean000001.31 × 10−104.08 × 10−703.36 × 10−1876.99 × 10−1811.73 × 10−1251.28 × 10−49
Std000002.91 × 10−198.72 × 10−139001.24 × 10−2498.54 × 10−98
Rank11111752346
F2Mean000001.42 × 10−66.14 × 10−411.16 × 10−1002.80 × 10−912.07 × 10−661.29 × 10−27
Std000006.47 × 10−125.88 × 10−811.19 × 10−1991.56 × 10−1805.94 × 10−1319.56 × 10−54
Rank11111752346
F3Mean0000043.724.72 × 10−211.40 × 10−1605.16 × 10−1421.05 × 10−993.04 × 10−13
Std00000292.391.17 × 10−403.94 × 10−3195.31 × 10−2822.08 × 10−1976.34 × 10−25
Rank11111752346
F4Mean0004.94 × 10−32401.04111.47 × 10−171.74 × 10−1005.26 × 10−793.93 × 10−551.94 × 10−19
Std000009.58 × 10−23.01 × 10−342.14 × 10−1992.95 × 10−1561.79 × 10−1082.08 × 10−38
Rank11121873456
F5Mean17.34859.10 × 10−910.497916.52617.367937.873926.58771.44 × 10−328.831628.926222.2968
Std2.49 × 10−12.83 × 10−164.58224.82 × 10−13.12 × 10−11.48 × 1037.74 × 10−14.99 × 10−68.71 × 10−38.71 × 10−32.12 × 10−1
Rank5134611829107
F6Mean000005.00 × 10−200000
Std000005.00 × 10−200000
Rank11111211111
F7Mean5.98 × 10−51.54 × 10−51.08 × 10−45.41 × 10−53.37 × 10−58.66 × 10−35.20 × 10−43.60 × 10−52.56 × 10−46.50 × 10−55.71 × 10−4
Std2.13 × 10−91.21 × 10−103.99 × 10−91.52 × 10−99.95 × 10−105.97 × 10−61.28 × 10−77.04 × 10−102.10 × 10−83.16 × 10−91.10 × 10−7
Rank5174211838610
F8Mean−8432.83−12,569.49−9533.46−9485.30−8554.22−6758.13−5962.18−12,569.41−5.83 × 107−5866.21−10,161.35
Std7.61 × 1057.14 × 10−244.30 × 1051.29 × 1056.33 × 1054.85 × 1054.73 × 1051.09 × 10−23.24 × 10163.87 × 1031.05 × 105
Rank7145689211103
F9Mean0000043.67860.159206.19704.05210
Std00000145.04170.50710768.053598.10560
Rank11111521431
F10Mean8.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−168.88 × 10−162.58 × 10−71.44 × 10−148.88 × 10−1610.981219.95973.38 × 10−15
Std000003.54 × 10−136.11 × 10−3001.04 × 10−21.42 × 10−62.79 × 10−30
Rank11111431562
F11Mean000001.73 × 10−200000
Std000003.12 × 10−400000
Rank11111211111
F12Mean7.81 × 10−291.57 × 10−321.71 × 10−325.32 × 10−311.03 × 10−281.04 × 10−22.45 × 10−24.59 × 10−77.34 × 10−11.25 × 10−15.85 × 10−11
Std1.06 × 10−568.73 × 10−709.10 × 10−671.87 × 10−607.17 × 10−561.02 × 10−33.25 × 10−43.69 × 10−131.54 × 10−23.84 × 10−41.34 × 10−21
Rank4123589711106
F13Mean2.39481.41 × 10−231.47 × 10−21.76601.52571.65 × 10−33.74 × 10−12.75 × 10−52.88022.98169.50 × 10−10
Std1.37602.95 × 10−668.60 × 10−41.89562.18711.62 × 10−52.79 × 10−28.72 × 10−101.0921.58 × 10−33.87 × 10−19
Rank9158746310112
F14Mean0.9980.9980.9980.9980.9982.03493.64560.9981.05090.99810.998
Std000003.523114.05411.38 × 10−215.00 × 10−23.47 × 10−80
Rank11111562431
F15Mean3.53 × 10−43.07 × 10−43.07 × 10−43.53 × 10−44.05 × 10−42.44 × 10−32.36 × 10−33.20 × 10−45.27 × 10−41.35 × 10−33.07 × 10−4
Std4.19 × 10−86.81 × 10−392.23 × 10−384.19 × 10−87.90 × 10−83.77 × 10−53.80 × 10−52.95 × 10−101.78 × 10−73.81 × 10−95.82 × 10−37
Rank512561094783
F16Mean−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
Std5.19 × 10−322.44 × 10−205.19 × 10−325.19 × 10−325.19 × 10−325.19 × 10−321.01 × 10−173.40 × 10−269.69 × 10−116.32 × 10−94.67 × 10−32
Rank24222253671
F17Mean0.397890.397890.397890.397890.397890.397890.397890.397890.397910.398560.39789
Std0000001.68 × 10−149.53 × 10−173.85 × 10−93.40 × 10−70
Rank11111132451
F18Mean333333333.035333
Std4.26 × 10−311.35 × 10−311.04 × 10−322.59 × 10−314.67 × 10−315.61 × 10−311.57 × 10−111.31 × 10−175.48 × 10−31.50 × 10−101.06 × 10−30
Rank4213569811107
F19Mean−3.8628−3.8628−3.8628−3.8628−3.8628−3.8628−3.8616−3.8626−3.8580−3.8539−3.8628
Std5.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−305.19 × 10−307.18 × 10−63.93 × 10−74.30 × 10−58.75 × 10−75.19 × 10−30
Rank11111132451
F20Mean−3.2566−3.2923−3.2566−3.2863−3.2625−3.2863−3.2472−3.1740−3.0823−2.6656−3.322
Std3.6 × 10−32.79 × 10−33.68 × 10−33.12 × 10−33.72 × 10−33.12 × 10−37.71 × 10−34.50 × 10−31.38 × 10−21.86 × 10−11.18 × 10−29
Rank52534367891
F21Mean−8.8787−10.1532−8.8787−9.8983−9.6434−5.7660−9.6453−5.5628−7.4155−3.1803−10.1532
Std5.12951.23 × 10−295.12951.29952.462211.76232.44012.44323.84554.21114.32 × 10−30
Rank626358497101
F22Mean−9.8714−10.4029−9.6057−10.3412−9.3399−9.4930−10.1369−5.3528−7.4643−3.2238−10.4029
Std2.67657.81 × 10−303.79177.61 × 10−24.75825.13741.41241.40873.88693.93961.36 × 10−29
Rank5163874109112
F23Mean−9.4548−10.5364−9.7252−10.2660−10.5364−6.4475−10.5361−5.6641−8.7643−4.0086−10.5364
Std4.92563.32 × 10−303.92511.46233.823014.9472.48 × 10−82.72074.28422.60054.98 × 10−30
Rank7165294108113
Mean Rank | 3.2609 | 1.2609 | 2.6087 | 2.6087 | 3.0000 | 5.9130 | 5.3043 | 3.7826 | 6.1304 | 6.6957 | 3.3913
Result | 4 | 1 | 2 | 2 | 3 | 8 | 7 | 6 | 9 | 10 | 5
Table 7. Statistical results of each algorithm based on the CMRFO.
No. | MRFO | MRFO–GBO | DMRFO | SA–MRFO | PSO | GWO | HHO | AOA | CHOA | MPA
F1 | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9
F2 | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9
F3 | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9
F4 | 5.2 × 10^−2 | 3.3 × 10^−4 | 3.3 × 10^−4 | 2.1 × 10^−2 | 4.2 × 10^−8 | 4.2 × 10^−8 | 4.2 × 10^−8 | 4.2 × 10^−8 | 4.2 × 10^−8 | 4.2 × 10^−8
F5 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8 | 6.8 × 10^−8
F6 | NaN | NaN | NaN | NaN | 3.4 × 10^−1 | NaN | NaN | NaN | NaN | NaN
F7 | 2.3 × 10^−5 | 3.0 × 10^−7 | 4.2 × 10^−5 | 3.6 × 10^−2 | 6.8 × 10^−8 | 6.8 × 10^−8 | 8.4 × 10^−3 | 1.1 × 10^−7 | 6.6 × 10^−5 | 6.8 × 10^−8
F8 | 3.7 × 10^−8 | 3.7 × 10^−8 | 3.7 × 10^−8 | 3.7 × 10^−8 | 3.7 × 10^−8 | 3.7 × 10^−8 | 3.7 × 10^−8 | 2.8 × 10^−2 | 3.7 × 10^−8 | 3.7 × 10^−8
F9 | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 4.0 × 10^−2 | NaN | 3.4 × 10^−1 | 8.1 × 10^−2 | NaN
F10 | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 3.7 × 10^−9 | NaN | 6.7 × 10^−5 | 8.0 × 10^−9 | 5.0 × 10^−6
F11 | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | NaN | NaN | NaN | NaN | NaN
F12 | 1.9 × 10^−8 | 1.1 × 10^−7 | 1.9 × 10^−8 | 1.9 × 10^−8 | 1.9 × 10^−8 | 1.9 × 10^−8 | 1.9 × 10^−8 | 1.9 × 10^−8 | 1.9 × 10^−8 | 1.9 × 10^−8
F13 | 1.1 × 10^−8 | 1.5 × 10^−8 | 1.4 × 10^−8 | 1.4 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8
F14 | NaN | NaN | NaN | NaN | 2.0 × 10^−3 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | NaN
F15 | 2.1 × 10^−3 | 5.5 × 10^−3 | 1.4 × 10^−1 | 4.3 × 10^−2 | 6.1 × 10^−8 | 6.1 × 10^−8 | 6.1 × 10^−8 | 6.1 × 10^−8 | 6.1 × 10^−8 | 4.7 × 10^−7
F16 | 2.9 × 10^−8 | 2.9 × 10^−8 | 2.9 × 10^−8 | 2.9 × 10^−8 | 2.9 × 10^−8 | 1.2 × 10^−7 | 4.2 × 10^−1 | 9.1 × 10^−8 | 6.7 × 10^−8 | 1.2 × 10^−7
F17 | NaN | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | NaN
F18 | 9.5 × 10^−1 | 9.1 × 10^−2 | 6.2 × 10^−1 | 3.1 × 10^−1 | 7.1 × 10^−1 | 1.5 × 10^−8 | 5.0 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 5.7 × 10^−2
F19 | NaN | NaN | NaN | NaN | NaN | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | NaN
F20 | 4.5 × 10^−2 | 4.5 × 10^−2 | 5.8 × 10^−1 | 8.5 × 10^−2 | 5.8 × 10^−1 | 3.5 × 10^−5 | 6.8 × 10^−7 | 1.6 × 10^−7 | 3.4 × 10^−8 | 2.7 × 10^−2
F21 | 2.1 × 10^−2 | 1.6 × 10^−1 | 6.9 × 10^−4 | 5.9 × 10^−1 | 5.9 × 10^−6 | 1.5 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 1.5 × 10^−8 | 6.4 × 10^−7
F22 | 3.5 × 10^−2 | 1.9 × 10^−1 | 3.2 × 10^−1 | 4.1 × 10^−1 | 1.9 × 10^−1 | 4.1 × 10^−8 | 4.1 × 10^−8 | 4.1 × 10^−8 | 4.1 × 10^−8 | 5.7 × 10^−4
F23 | 2.0 × 10^−2 | 8.1 × 10^−2 | 8.1 × 10^−2 | 3.4 × 10^−1 | 6.6 × 10^−5 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 8.0 × 10^−9 | 5.4 × 10^−7
+/=/− | 1/12/10 | 1/14/8 | 1/15/7 | 1/15/7 | 1/6/16 | 0/2/21 | 0/5/18 | 0/3/20 | 0/3/20 | 3/7/13
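The p-values in Table 7 come from pairwise significance tests over the independent runs of the CMRFO versus each competitor; NaN entries arise when both algorithms return an identical result on every run (e.g., both always reach the exact optimum), leaving nothing to test. A rough sketch of such a test is given below, using a two-sided Wilcoxon rank-sum with the large-sample normal approximation; this is an illustrative stand-in, not necessarily the paper's exact routine.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the large-sample normal
    approximation, using midranks for ties. Returns NaN when every value
    in both samples is identical (the NaN entries in Table 7)."""
    combined = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    n1, n2 = len(a), len(b)
    if combined[0][0] == combined[-1][0]:        # all values identical
        return float("nan")
    # Assign midranks so tied values share an average 1-based rank.
    ranks = [0.0] * (n1 + n2)
    k = 0
    while k < len(combined):
        j = k
        while j + 1 < len(combined) and combined[j + 1][0] == combined[k][0]:
            j += 1
        mid = (k + j) / 2 + 1
        for t in range(k, j + 1):
            ranks[combined[t][1]] = mid
        k = j + 1
    w = sum(ranks[:n1])                          # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                  # mean under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
```

A p-value below 0.05 counts as a win or loss in the +/=/− row; otherwise the pair is scored as a tie.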
Table 8. Parameter settings of algorithms.
Algorithm | Parameter
MRFO | S = 2
CMRFO | S = 2, p = 0.1
PSO | F1 = 2, F2 = 2; s: linearly decreases from 0.8 to 0.2
SCA | α = 2
WOA | α: the value range of α is [0, 2]; increases linearly; b = 1
HHO | E0: [−1, 1]
CHOA | f: non-linearly decreases from 2.5 to 0; chaotic map: tent map
AOA | F1 = F4 = 2, F2 = 6, F3 = 1
SSA | v0 = 0
SOA | A: linearly decreases from 2 to 0; fc = 0
Table 9. Results of all 10 algorithms on the 50-dimensional CEC2020 test suite.
No. | Result | MRFO | CMRFO | PSO | SCA | WOA | HHO | CHOA | AOA | SSA | SOA
F1 | Mean | 6.80 × 10^3 | 2.56 × 10^3 | 6.68 × 10^8 | 5.45 × 10^10 | 3.48 × 10^9 | 1.48 × 10^8 | 5.69 × 10^10 | 1.05 × 10^11 | 8.03 × 10^3 | 3.70 × 10^10
F1 | Std | 3.75 × 10^7 | 7.12 × 10^6 | 9.35 × 10^17 | 6.69 × 10^19 | 2.92 × 10^18 | 1.08 × 10^16 | 2.53 × 10^18 | 7.30 × 10^19 | 5.92 × 10^7 | 6.81 × 10^19
F1 | Rank | 2 | 1 | 5 | 8 | 6 | 4 | 9 | 10 | 3 | 7
F2 | Mean | 8.58 × 10^3 | 8.57 × 10^3 | 7.46 × 10^3 | 1.52 × 10^4 | 1.15 × 10^4 | 1.03 × 10^4 | 1.56 × 10^4 | 1.51 × 10^4 | 8.61 × 10^3 | 1.24 × 10^4
F2 | Std | 1.19 × 10^6 | 9.48 × 10^5 | 6.85 × 10^5 | 1.80 × 10^5 | 1.13 × 10^6 | 1.42 × 10^6 | 1.49 × 10^5 | 2.58 × 10^5 | 9.92 × 10^5 | 1.14 × 10^5
F2 | Rank | 3 | 2 | 1 | 9 | 6 | 5 | 10 | 8 | 4 | 7
F3 | Mean | 1.45 × 10^3 | 1.51 × 10^3 | 9.29 × 10^2 | 1.76 × 10^3 | 1.79 × 10^3 | 1.83 × 10^3 | 1.75 × 10^3 | 1.99 × 10^3 | 1.21 × 10^3 | 1.54 × 10^3
F3 | Std | 1.79 × 10^4 | 6.04 × 10^4 | 4.08 × 10^3 | 9.22 × 10^3 | 1.41 × 10^4 | 7.42 × 10^3 | 1.65 × 10^3 | 4.92 × 10^3 | 2.55 × 10^4 | 9.80 × 10^3
F3 | Rank | 3 | 4 | 1 | 7 | 8 | 9 | 6 | 10 | 2 | 5
F4 | Mean | 1.90 × 10^3 | 1.90 × 10^3 | 1.91 × 10^3 | 2.04 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.90 × 10^3 | 1.93 × 10^3 | 1.90 × 10^3
F4 | Std | 0 | 0 | 3.77 | 1.75 × 10^4 | 0 | 0 | 0 | 0 | 1.04 × 10^2 | 0
F4 | Rank | 1 | 1 | 2 | 4 | 1 | 1 | 1 | 1 | 3 | 1
F5 | Mean | 7.44 × 10^5 | 4.10 × 10^5 | 1.76 × 10^6 | 7.65 × 10^7 | 1.18 × 10^8 | 1.01 × 10^7 | 5.96 × 10^7 | 4.27 × 10^8 | 2.62 × 10^6 | 1.28 × 10^7
F5 | Std | 2.25 × 10^11 | 4.32 × 10^10 | 6.46 × 10^11 | 1.60 × 10^15 | 3.92 × 10^15 | 2.67 × 10^13 | 2.09 × 10^14 | 1.08 × 10^16 | 1.65 × 10^12 | 6.12 × 10^13
F5 | Rank | 2 | 1 | 3 | 8 | 9 | 5 | 7 | 10 | 4 | 6
F6 | Mean | 3.36 × 10^3 | 3.17 × 10^3 | 2.85 × 10^3 | 6.34 × 10^3 | 5.77 × 10^3 | 4.55 × 10^3 | 5.79 × 10^3 | 7.61 × 10^3 | 3.86 × 10^3 | 4.22 × 10^3
F6 | Std | 2.07 × 10^5 | 1.15 × 10^5 | 1.04 × 10^5 | 4.45 × 10^5 | 9.15 × 10^5 | 1.81 × 10^5 | 1.08 × 10^5 | 8.27 × 10^5 | 1.83 × 10^5 | 2.67 × 10^5
F6 | Rank | 3 | 2 | 1 | 9 | 7 | 6 | 8 | 10 | 4 | 5
F7 | Mean | 3.94 × 10^5 | 2.36 × 10^5 | 1.52 × 10^6 | 2.06 × 10^7 | 1.33 × 10^7 | 4.26 × 10^6 | 2.35 × 10^7 | 3.72 × 10^7 | 2.16 × 10^6 | 7.55 × 10^6
F7 | Std | 6.89 × 10^10 | 1.79 × 10^10 | 1.43 × 10^12 | 7.40 × 10^13 | 3.48 × 10^13 | 6.53 × 10^12 | 4.31 × 10^13 | 2.76 × 10^14 | 1.88 × 10^12 | 1.73 × 10^13
F7 | Rank | 2 | 1 | 3 | 8 | 7 | 5 | 9 | 10 | 4 | 6
F8 | Mean | 9356.9042 | 8852.8621 | 8545.6711 | 16,798.8094 | 13,060.8894 | 11,558.1408 | 17,340.3701 | 16,235.7414 | 9446.7043 | 13,085.0313
F8 | Std | 6.33 × 10^6 | 9.47 × 10^6 | 3.22 × 10^6 | 2.79 × 10^5 | 1.15 × 10^6 | 1.29 × 10^6 | 1.07 × 10^5 | 5.20 × 10^5 | 5.59 × 10^5 | 1.39 × 10^6
F8 | Rank | 3 | 2 | 1 | 9 | 6 | 5 | 10 | 8 | 4 | 7
F9 | Mean | 3324.5494 | 3304.7364 | 3399.3970 | 3799.0453 | 3801.2554 | 4208.3563 | 4063.366 | 4906.9563 | 3155.6406 | 3306.2197
F9 | Std | 1.78 × 10^4 | 1.07 × 10^4 | 1.62 × 10^4 | 4.87 × 10^3 | 2.79 × 10^4 | 6.15 × 10^4 | 6.99 × 10^3 | 1.48 × 10^5 | 3.20 × 10^3 | 4.83 × 10^3
F9 | Rank | 4 | 2 | 5 | 6 | 7 | 9 | 8 | 10 | 1 | 3
F10 | Mean | 3075.667 | 3059.2648 | 3077.4469 | 7425.5938 | 3592.3986 | 3209.6626 | 11,968.0231 | 14,507.0025 | 3092.3489 | 5653.4024
F10 | Std | 5.95 × 10^2 | 1.07 × 10^3 | 2.48 × 10^3 | 5.82 × 10^5 | 3.10 × 10^4 | 1.43 × 10^3 | 5.49 × 10^5 | 1.59 × 10^6 | 6.87 × 10^2 | 8.37 × 10^5
F10 | Rank | 2 | 1 | 3 | 8 | 6 | 5 | 9 | 10 | 4 | 7
Mean Rank | 2.50 | 1.70 | 2.50 | 7.60 | 6.30 | 5.40 | 7.70 | 8.70 | 3.30 | 5.40
Result | 2 | 1 | 2 | 6 | 5 | 4 | 7 | 8 | 3 | 4
Table 10. Results of all 11 algorithms.
Algorithm | z1 | z2 | z3 | z4 | Minimum Cost
MRFO | 0.7745007 | 0.3832242 | 40.31993 | 199.9957 | 5870.1394
CMRFO | 0.7745476 | 0.3832055 | 40.31962 | 200.0000 | 5870.1240
AO | 0.8585610 | 0.4232512 | 44.60451 | 149.0881 | 6061.6264
AOA | 1.0039500 | 4.4602500 | 53.50010 | 72.7940 | 27,253.6871
CHOA | 1.3046100 | 0.6133140 | 66.01640 | 10.0000 | 7843.5750
TSA | 0.7724745 | 0.3829585 | 40.36756 | 200.0000 | 5895.4548
SCA | 0.8249876 | 0.4522591 | 41.32788 | 187.4612 | 6313.5757
SOA | 0.8284806 | 0.4054728 | 42.92673 | 166.8016 | 5984.1297
GWO | 0.7891866 | 0.3899311 | 41.06881 | 189.8338 | 5895.9798
HHO | 0.9126808 | 0.4497941 | 47.41997 | 120.2297 | 6150.9174
JS | 0.7745259 | 0.3831937 | 40.32009 | 199.9941 | 5870.1564
Table 11. Statistical analyses of all 11 algorithms after 30 runs.
Algorithm | Best | Worst | Mean | Std
MRFO | 5870.1245 | 5871.2569 | 5870.2136 | 6.22 × 10^−2
CMRFO | 5870.1240 | 5870.1240 | 5870.1240 | 4.62 × 10^−11
AO | 5988.6403 | 7564.7221 | 6666.1339 | 2.34 × 10^5
AOA | 6662.4718 | 142,714.8674 | 33,101.3667 | 1.69 × 10^9
CHOA | 7528.6200 | 362,414.1763 | 77,881.9124 | 1.45 × 10^10
TSA | 5873.3548 | 6418.7976 | 5956.9796 | 1.75 × 10^4
SCA | 6171.7885 | 6859.0257 | 6406.6324 | 3.74 × 10^4
SOA | 5878.9664 | 753,596.6961 | 80,183.5623 | 2.93 × 10^10
GWO | 5870.1956 | 7171.1804 | 5943.8058 | 8.42 × 10^4
HHO | 6015.6406 | 7482.8086 | 6684.3887 | 1.50 × 10^5
JS | 5870.1240 | 5873.7386 | 5871.1915 | 1.34
Table 12. Results of all 11 algorithms.
Algorithm | z1 | z2 | z3 | Minimum Weight
MRFO | 0.0517557 | 0.358265 | 11.2078 | 0.012675
CMRFO | 0.0516888 | 0.356712 | 11.2893 | 0.012665
AO | 0.0572140 | 0.504390 | 6.5055 | 0.014044
AOA | 0.0638360 | 0.725450 | 3.1224 | 0.015143
CHOA | 0.0500000 | 0.316611 | 14.2597 | 0.012870
TSA | 0.0526618 | 0.379585 | 10.1016 | 0.012739
SCA | 0.0500000 | 0.316202 | 14.2037 | 0.012809
SOA | 0.0500000 | 0.317083 | 14.0790 | 0.012746
GWO | 0.0506104 | 0.331238 | 12.9618 | 0.012694
HHO | 0.0564420 | 0.482210 | 6.4974 | 0.013053
JS | 0.0520076 | 0.364425 | 10.8523 | 0.012668
Table 13. Comparison results of all 11 algorithms after 30 runs.
Algorithm | Best | Worst | Mean | Std
MRFO | 0.012667 | 0.012754 | 0.012688 | 5.43 × 10^−10
CMRFO | 0.012666 | 0.012695 | 0.012679 | 7.95 × 10^−11
AO | 0.012983 | 0.020436 | 0.015930 | 5.25 × 10^−6
AOA | 0.012907 | 0.016086 | 0.013938 | 9.99 × 10^−7
CHOA | 0.012856 | 0.017668 | 0.014543 | 3.28 × 10^−6
TSA | 0.012713 | 0.013048 | 0.012801 | 5.45 × 10^−9
SCA | 0.012742 | 0.013197 | 0.012982 | 1.01 × 10^−8
SOA | 0.012730 | 0.012798 | 0.012758 | 3.83 × 10^−10
GWO | 0.012669 | 0.012766 | 0.012708 | 6.44 × 10^−10
HHO | 0.012805 | 0.014597 | 0.013365 | 3.30 × 10^−7
JS | 0.012671 | 0.012770 | 0.012710 | 1.02 × 10^−9
Table 14. Results of all 11 algorithms.
Algorithm | z1 | z2 | z3 | z4 | Minimum Cost
MRFO | 0.20573 | 3.2531 | 9.0366 | 0.20573 | 1.6952
CMRFO | 0.20573 | 3.2531 | 9.0366 | 0.20573 | 1.6952
AO | 0.16706 | 7.5426 | 8.9751 | 0.21036 | 2.1893
CHOA | 0.13745 | 5.2800 | 8.9767 | 0.21607 | 1.9093
TSA | 0.20558 | 3.2838 | 9.0614 | 0.20599 | 1.7054
SCA | 0.18648 | 3.6064 | 9.1512 | 0.20602 | 1.7355
SOA | 0.19182 | 3.5352 | 9.0470 | 0.20578 | 1.7143
GWO | 0.20408 | 3.2834 | 9.0390 | 0.20573 | 1.6973
HHO | 0.19402 | 3.6019 | 8.8153 | 0.21619 | 1.7636
JS | 0.20573 | 3.2531 | 9.0366 | 0.20573 | 1.6952
MPA | 0.20573 | 3.2531 | 9.0366 | 0.20573 | 1.6952
Table 15. Results of all 11 algorithms after 30 runs.
Algorithm | Best | Worst | Mean | Std
MRFO | 1.6952 | 1.6952 | 1.6952 | 1.48 × 10^−19
CMRFO | 1.6952 | 1.6952 | 1.6952 | 6.49 × 10^−21
AO | 1.7456 | 2.2250 | 1.8935 | 1.32 × 10^−2
CHOA | 1.7621 | 1.9147 | 1.8597 | 1.48 × 10^−3
TSA | 1.7013 | 1.7209 | 1.7105 | 2.57 × 10^−5
SCA | 1.7755 | 1.9083 | 1.8190 | 1.23 × 10^−3
SOA | 1.6996 | 1.7465 | 1.7100 | 1.33 × 10^−4
GWO | 1.6955 | 1.6994 | 1.6971 | 1.52 × 10^−6
HHO | 1.7248 | 1.8616 | 1.7949 | 1.66 × 10^−3
JS | 1.6952 | 1.6952 | 1.6952 | 3.81 × 10^−20
MPA | 1.6952 | 1.6952 | 1.6952 | 1.06 × 10^−15
Table 16. Optimal values of shape parameters and corresponding objective function.
Algorithm | α | β | γ | Objective Function
CMRFO | −0.91829 | 0.37267 | −0.24463 | 101.5338
SCA | −0.92874 | 0.38598 | −0.24488 | 101.5411
LFD | −0.85734 | 0.29792 | −0.28173 | 101.8180
AO | −0.92616 | 0.38264 | −0.24147 | 101.5378
CHOA | −0.91621 | 0.36598 | −0.25012 | 101.5374
Table 17. Optimal values of shape parameters and corresponding fitness value.
Algorithm | α | β | γ | Objective Function
CMRFO | −0.44182 | −0.17432 | −0.58420 | 252.6226
SCA | −0.43369 | −0.16005 | −0.57312 | 252.6623
LFD | −0.46342 | −0.16430 | −0.58057 | 252.6614
AO | −0.42406 | −0.17529 | −0.58248 | 252.6538
CHOA | −0.44194 | −0.17604 | −0.58321 | 252.6233
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yang, J.; Liu, Z.; Zhang, X.; Hu, G. Elite Chaotic Manta Ray Algorithm Integrated with Chaotic Initialization and Opposition-Based Learning. Mathematics 2022, 10, 2960. https://doi.org/10.3390/math10162960