Article

An Efficient and Robust Improved Whale Optimization Algorithm for Large Scale Global Optimization Problems

Guanglei Sun, Youlin Shang and Roxin Zhang
1 College of Information Engineering, Henan University of Science and Technology, Luoyang 471023, China
2 School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
3 Department of Mathematics and Computer Science, Northern Michigan University, Marquette, MI 49855, USA
* Author to whom correspondence should be addressed.
Electronics 2022, 11(9), 1475; https://doi.org/10.3390/electronics11091475
Submission received: 9 April 2022 / Revised: 27 April 2022 / Accepted: 30 April 2022 / Published: 4 May 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract

As an efficient meta-heuristic algorithm, the whale optimization algorithm (WOA) has been extensively applied to practical problems. However, WOA still converges slowly and has difficulty escaping from local optima, especially on large scale optimization problems. To overcome these defects, a modified whale optimization algorithm integrated with a crisscross optimization algorithm (MWOA-CS) is proposed. In MWOA-CS, each dimension of the optimization problem updates its position by randomly performing the improved WOA or the crisscross optimization algorithm throughout the iterative process. The improved WOA adopts a new nonlinear convergence factor and a nonlinear inertia weight to tune the balance between exploitation and exploration. To analyze the performance of MWOA-CS, a series of numerical experiments were performed on 30 benchmark test functions with dimensions ranging from 300 to 1000. The experimental results revealed that the presented MWOA-CS provided better convergence speed and accuracy, and meanwhile displayed significantly more effective and robust performance than the original WOA and other state-of-the-art meta-heuristic algorithms for solving large scale global optimization problems.

1. Introduction

In the information era of big data, large scale global optimization problems with high dimensionality are widespread in diverse practical areas, such as finance and economics [1], parameter estimation [2], machine learning [3] and image processing [4]. These problems cannot be effectively addressed by traditional optimization algorithms [5,6,7,8,9] because of their high dimensionality, nonlinearity, non-differentiability and, in some cases, the lack of an explicit expression. In this context, meta-heuristic algorithms, as intelligent computing methods, have attracted strong attention from scholars and have been favorably received by engineers.
Different from traditional optimization algorithms based on gradient information, meta-heuristic algorithms adopt a “trial and error” mechanism [10] to find the best solution possible during the iterative procedure. This mechanism is convenient to implement and can often discover a high-quality solution at a low computational cost. Therefore, meta-heuristic algorithms can be regarded as a better way to solve large scale global optimization problems. To date, driven by numerous practical optimization problems, many meta-heuristic algorithms have been investigated by scholars, such as particle swarm optimization (PSO) [11], the crisscross optimization algorithm (CSO) [12], the honey badger algorithm (HBA) [13], ant colony optimization (ACO) [14], artificial bee colony (ABC) [15], fish swarm optimization (FSO) [16], the grey wolf optimizer (GWO) [17] and the tunicate swarm algorithm (TSA) [18].
The complexity of large scale optimization problems increases exponentially as the number of dimensions increases, leading to the “curse of dimensionality” [19]. Although meta-heuristic algorithms exhibit better performance on some practical problems, they still suffer from slow convergence and low precision, especially on large scale optimization problems. How a meta-heuristic algorithm can maintain a fast convergence rate while avoiding premature convergence therefore remains a significant research topic. To overcome these drawbacks, much research has been devoted to improving the performance of standard algorithms by modifying parameters or introducing various mechanisms, and remarkable results have been achieved.
The whale optimization algorithm [20] has been proved to exhibit good optimization capability in engineering and industry [21], but it still has disadvantages such as an imbalance between exploration and exploitation, premature convergence and low accuracy. Hitherto, numerous improved versions of WOA have been presented. In [22,23,24], nonlinear parameters were introduced into WOA to better tune its exploitation and exploration capability. To enhance the ability of WOA on large scale optimization problems, Sun et al. [25] employed a nonlinear convergence factor based on the cosine function to balance the global and local search processes, and improved the ability to escape local minima through a Lévy flight strategy. Similarly, Abdel-Basset et al. [26] adopted Lévy flight and logistic chaos mapping to construct an improved WOA. Jin et al. [27] constructed a modified whale optimization algorithm with three components: different evaluation and random operators to strengthen exploration; a Lévy flight-based step factor to enhance exploitation; and an adaptive weight to tune the progress of exploration and exploitation. Saafan et al. [28] presented a hybrid algorithm integrating an improved whale optimization algorithm with the salp swarm algorithm: an improved WOA based on an exponential convergence factor was designed first, and then, according to a condition given in advance, the hybrid algorithm selected the improved WOA or the salp swarm algorithm to perform the optimization task. Elaziz et al. [29] constructed a hyper-heuristic DEWOA combining WOA and DE, where the DE algorithm automatically selected one chaotic mapping and one opposition-based learning scheme to obtain a high-quality initial population and to enhance the ability of exploring and escaping from local solutions. Chakraborty et al. [30] constructed an improved whale optimization algorithm by modifying parameters to solve high dimensional problems. Nadimi-Shahraki et al. [31] combined the whale optimization algorithm and the moth flame optimization algorithm into an effective hyper-heuristic algorithm, WMFO, to solve optimal power flow problems of complex structure. Nadimi-Shahraki et al. [32] adopted Lévy motion and Brownian motion to achieve a better balance between the exploration and exploitation of the whale optimization algorithm. Liu et al. [33] proposed an enhanced global exploration whale optimization algorithm (EGE-WOA) for complex multi-dimensional problems, which employed Lévy flights to improve global exploration and adopted new convergent dual adaptive weights to refine the convergence procedure. On the whole, these improved variants of WOA display better performance on optimization problems. However, according to the no free lunch theorem for optimization [34], they still suffer, to some extent, from entrapment in local optima and premature convergence when dealing with large scale optimization problems of higher dimension.
In order to enhance the optimization ability of WOA on large scale optimization problems, this paper presents a new modified WOA, namely MWOA-CS. MWOA-CS is composed of a modified WOA and the crisscross optimization algorithm, and contains the following improvements: (1) a new nonlinear convergence factor is designed to improve the balance between the exploration and exploitation of WOA; (2) a cosine-based inertia weight is introduced into WOA to further refine the optimization procedure; (3) in each iteration, each dimension randomly performs the modified whale optimization algorithm or the crisscross optimization algorithm; this update mechanism randomly divides the problem dimensions into two parts, utilizing the exploration capability of WOA and the exploitation ability of CSO to reduce search blind spots and avoid premature convergence in some dimensions; (4) a large number of numerical experiments verify that the proposed MWOA-CS displays better performance on large scale global optimization problems.
The rest of this paper is organized as follows. Section 2 presents a brief conception of WOA, followed by an introduction of the detailed improvements and contributions of the proposed MWOA-CS in Section 3. In Section 4, a discussion and analysis of numerical experiments are presented. Finally, Section 5 presents the conclusions of this paper.

2. Standard Whale Optimization Algorithm

As an intelligent computation algorithm for solving optimization problems, WOA is inspired by the predation behaviors of humpback whales. Its optimization process consists of three strategies: encircling prey, the bubble-net attacking method and searching for prey.

2.1. Encircling Prey

The positions of the whale population, considered as a candidate solution set, can be represented as:
$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,D} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,D} \end{bmatrix}_{N \times D}$
where N indicates the size of the whale population and D is the dimension of the optimization problem. $X_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$ represents the position vector of the i-th individual whale, i.e., a feasible solution of the given optimization problem. Since the optimum is unknown beforehand, the WOA algorithm regards the current best solution as the prey (the approximate optimum) in the iterative process. Once the optimal whale individual (the current best solution) is obtained, the others in the population update their positions towards the prey along the iterations. The corresponding mathematical formulas are defined as follows:
$D = \left| C \odot X^*(t) - X(t) \right|$
$X(t+1) = X^*(t) - A \odot D$
where t is the current iteration number, X(t) represents the whale population at iteration t, X*(t) indicates the best whale position achieved so far (updated whenever a better position is found in each iteration), $\odot$ denotes element-wise multiplication, and $A \odot D$ stands for the step size used to renew the location of the whale population. The coefficient vectors A and C are calculated by the following expressions:
$A = 2a \odot r_1 - a$
$C = 2 r_2$
where $r_1$ and $r_2$ are random vectors uniformly distributed in the interval [0, 1]. $a = 2 - 2t/t_{max}$ is a convergence factor that tunes the relationship between exploitation and exploration, where $t_{max}$ indicates the maximum number of iterations.
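The paper's experiments were implemented in Matlab; purely as an illustrative sketch, the encircling-prey update above can be rendered in Python/NumPy as follows (the function and variable names are ours, not the authors'):

```python
import numpy as np

def encircle(X, X_best, a, rng):
    """One encircling-prey step for the whole (N, D) population X."""
    N, D = X.shape
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    A = 2.0 * a * r1 - a            # coefficient vector A
    C = 2.0 * r2                    # coefficient vector C
    dist = np.abs(C * X_best - X)   # element-wise |C (.) X* - X|
    return X_best - A * dist        # X(t+1) = X* - A (.) dist
```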

2.2. Bubble-Net Attacking Method

The bubble-net attacking method of WOA performs a fine search near the current best solution in the feasible domain, which mimics the bubble-net feeding behavior of humpback whales. Its mathematical model is defined as:
$D' = \left| X^*(t) - X(t) \right|$
$X(t+1) = D' \odot e^{bl} \cos(2\pi l) + X^*(t)$
where $D'$ represents the distance between the current whale individual and the best solution, $l \in [-1, 1]$ is a random value and $b = 1$ is a constant defining the shape of the spiral.
When $|A| < 1$, the current whale population randomly executes the bubble-net attacking method or encircling prey to perform a local search near the prey (the current best solution), which constitutes the exploitation phase of the WOA algorithm. Accordingly, the location renewal mechanism of the current whale population can be described as:
$X(t+1) = \begin{cases} X^*(t) - A \odot D, & p < 0.5 \\ X^*(t) + D' \odot e^{bl} \cos(2\pi l), & p \ge 0.5 \end{cases}$
where $p \in [0, 1]$ is a random value.

2.3. Search for Prey

Search for prey is the part of the WOA algorithm that achieves the global search. When $|A| \ge 1$, this random search strategy is performed: the current whale population randomly chooses an individual as the reference (the prey) to update its position. This strategy attempts to find a better solution in the whole search space, representing the exploration (global search) ability of WOA. Its mathematical model is similar to that of encircling prey, but the reference prey is different:
$D = \left| C \odot X_{rand}(t) - X(t) \right|$
$X(t+1) = X_{rand}(t) - A \odot D$
where $X_{rand}(t)$ is a randomly chosen whale from the current population; the meanings of the other symbols are the same as in Section 2.1. Therefore, the optimization process of WOA is divided by $|A|$ into an exploitation (local search) phase and an exploration (global search) phase: if $|A| \ge 1$, WOA conducts the exploration phase, otherwise it performs the exploitation stage. The flowchart of WOA is depicted in Figure 1.
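Putting the three strategies together, one iteration of the standard WOA position update might look like the following Python sketch. This is a simplified reading in which the |A| test is applied once per individual (implementations vary on this detail); it is not the authors' code.

```python
import numpy as np

def woa_step(X, X_best, t, t_max, b=1.0, rng=np.random.default_rng()):
    """One iteration of the standard WOA update for an (N, D) population."""
    N, D = X.shape
    a = 2.0 - 2.0 * t / t_max          # linearly decreasing convergence factor
    X_new = X.copy()
    for i in range(N):
        r1, r2 = rng.random(D), rng.random(D)
        A, C = 2.0 * a * r1 - a, 2.0 * r2
        if rng.random() < 0.5:
            if np.all(np.abs(A) < 1):  # exploitation: encircle the best whale
                X_new[i] = X_best - A * np.abs(C * X_best - X[i])
            else:                      # exploration: move towards a random whale
                X_rand = X[rng.integers(N)]
                X_new[i] = X_rand - A * np.abs(C * X_rand - X[i])
        else:                          # bubble-net spiral around the best whale
            l = rng.uniform(-1.0, 1.0)
            X_new[i] = np.abs(X_best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new
```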

3. Some Improvements of WOA

Although WOA outperforms some meta-heuristic algorithms, such as PSO, ABC and DE [21], it still suffers from low precision and premature convergence, especially when solving large scale optimization problems. To overcome these shortcomings, this section proposes a new improved WOA constructed from nonlinear parameters and an additional search mechanism. The following details the improvements that constitute MWOA-CS.

3.1. Nonlinear Convergence Factor

In the basic WOA, the convergence factor a tunes the global and local search phases by controlling the value of A: the larger the value of a, the stronger the global search capability; conversely, the smaller the value of a, the stronger the local search ability. The ideal optimization process maintains a strong global search ability in the early stage and converges quickly with high accuracy in the later stage. Obviously, the linear decrement strategy for a does not meet this expectation. Based on the characteristics of the power function, a nonlinear convergence factor a is proposed, with the following update formula:
$a = 2 - 2\left(\dfrac{t}{t_{max}}\right)^{\mu}$
where $\mu$ is a nonlinear regulation coefficient that controls the smoothness of the curve of parameter a. Figure 2 illustrates the curves of a under different values of $\mu$. Extensive numerical results on test functions show that, compared with the linearly decreasing strategy, the nonlinear strategy is more beneficial to the optimization ability of the algorithm.
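As a minimal sketch (with μ = 2 as a placeholder value, not a setting stated here), the nonlinear schedule can be compared against the linear one in a few lines of Python:

```python
def convergence_factor(t, t_max, mu=2.0):
    """Nonlinear convergence factor; mu = 1 recovers the linear schedule."""
    return 2.0 - 2.0 * (t / t_max) ** mu

# With mu > 1, a stays large longer, favoring exploration early on:
# convergence_factor(250, 500, mu=2.0) -> 1.5, versus 1.0 for the linear case.
```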

3.2. Nonlinear Inertia Weight

As a significant parameter in PSO, the inertia weight represents the influence of the current optimal solution on the population location update. A larger inertia weight produces a larger update step and enhances the ability to escape from local solutions, which is conducive to global exploration, while a smaller inertia weight enables a finer-grained local search that improves convergence precision. Hence, an intuitive expectation is that the weight should decrease along the iterative process, which benefits the balance between exploration and exploitation. In the standard WOA, however, the inertia weight is fixed at 1 during the whole optimization procedure; this guarantees the baseline performance of the algorithm but may not be optimal. Constructing an appropriate dynamic inertia weight is therefore of great significance for improving the performance of WOA on optimization problems. In this paper, a nonlinear inertia weight function is proposed:
$w = \cos^2\left(\dfrac{n\pi t}{t_{max}}\right)$
Obviously, w is a cosine-based function with period $T = t_{max}/n$, where the parameter n controls the change period of w. Figure 3 illustrates the variation curves of w under different n over the whole optimization procedure. The value of n was determined by a large number of numerical experiments as n = 0.8 for unimodal functions and n = 2 for multimodal functions. According to Figure 3, the value of w first decreases and then increases over the optimization process, which runs counter to the intuitive expectation above and offers guidance for the construction of inertia weights.
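A one-line Python rendering of this weight, under our reading of the formula above, is:

```python
import numpy as np

def inertia_weight(t, t_max, n=0.8):
    """Cosine-based inertia weight; the paper reports n = 0.8 for unimodal
    and n = 2 for multimodal test functions."""
    return np.cos(n * np.pi * t / t_max) ** 2
```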
The new update mechanism of the modified WOA can be expressed as follows:
$X(t+1) = \begin{cases} w \cdot X^*(t) - A \odot D, & p < 0.5 \\ w \cdot X^*(t) + D' \odot e^{bl} \cos(2\pi l), & p \ge 0.5 \end{cases}$
$X(t+1) = w \cdot X_{rand}(t) - A \odot D$
The presented cosine-based inertia weight strategy lets WOA dynamically adjust the inertia weight with respect to the iteration number (see Figure 3). This enables the optimal whale position to exert a varying influence on the other individuals as they renew their locations during the iterative procedure, further tuning the exploration and exploitation abilities of the standard WOA algorithm.

3.3. Proposed MWOA-CS

WOA exhibits a poor convergence rate and accuracy when dealing with large scale global optimization problems. This may be because, for complex optimization problems, the update mechanism of WOA cannot reach certain search blind spots, and some dimensions drop into local optima. Meanwhile, we observed that the whale optimization algorithm has good exploration ability. The crisscross optimization algorithm (CSO) [12] combines a horizontal crossover operator with a vertical crossover operator, and achieves good accuracy on low dimensional problems. Inspired by this, CSO is embedded into the improved WOA in this paper to construct a hybrid algorithm (MWOA-CS) that enhances the overall optimization ability. In MWOA-CS, each dimension randomly performs MWOA or CSO within the feasible region of the optimization problem, which enhances the exploration ability of the algorithm. On the whole, according to the diversity of the current whale population, the problem dimensions are randomly divided into two parts: one part executes MWOA, and the other performs CSO.
The variation of the population during the optimization process can be measured as [35]:
$Div = \frac{1}{N}\sum_{i=1}^{N}\sqrt{\sum_{j=1}^{D}\left(x_{i,j} - X_j^{avg}\right)^2}$
where $X_j^{avg}$ is the j-th component of the average solution $X^{avg}$, computed as follows:
$X^{avg} = \left[\frac{1}{N}\sum_{i=1}^{N} x_{i,1}, \ldots, \frac{1}{N}\sum_{i=1}^{N} x_{i,D}\right]$
Then, $Div$ is normalized using the logistic function: $DR = 1/(1 + e^{-Div})$.
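A compact Python sketch of this diversity measure, under our reading of the formula (the mean Euclidean distance of individuals from the average solution, squashed by the logistic function), might be:

```python
import numpy as np

def diversity_ratio(X):
    """Population diversity Div and its logistic normalization DR."""
    X_avg = X.mean(axis=0)                           # component-wise average solution
    div = np.mean(np.sqrt(((X - X_avg) ** 2).sum(axis=1)))
    return 1.0 / (1.0 + np.exp(-div))                # DR lies in (0.5, 1)
```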
According to the value of DR, the dimensions of the current whale population X are randomly divided into two sub-populations $X_W$ and $X_C$, namely $X = [X_W \; X_C]$. Then $X_W$, with $D \cdot DR$ dimensions, is updated by the modified WOA, and $X_C$, with $D \cdot (1 - DR)$ dimensions, is updated by CSO. The specific update formulas of CSO are as follows:
Horizontal crossover:
$X_{C_{i_1}}^{hc} = r_3 X_{C_{i_1}} + (1 - r_3) X_{C_{i_2}} + c_1 (X_{C_{i_1}} - X_{C_{i_2}})$
$X_{C_{i_2}}^{hc} = r_4 X_{C_{i_2}} + (1 - r_4) X_{C_{i_1}} + c_2 (X_{C_{i_2}} - X_{C_{i_1}})$
where $r_3, r_4 \in [0, 1]$ and $c_1, c_2 \in [-1, 1]$ are random values, and $i_1, i_2 \in [1, D \cdot (1 - DR)]$. $X_{C_{i_1}}^{hc}$ and $X_{C_{i_2}}^{hc}$ are candidate solutions, the offspring of $X_{C_{i_1}}$ and $X_{C_{i_2}}$, respectively.
Vertical crossover:
$x_{c_{i,j_1}}^{vc} = r \, x_{c_{i,j_1}} + (1 - r) \, x_{c_{i,j_2}}$
where $r \in [0, 1]$ is a random number. That is, the $j_1$-th and $j_2$-th dimensions of the individual $X_{C_i}$ are chosen to conduct the vertical crossover operation and yield an offspring $x_{c_{i,j_1}}^{vc}$. Note that $X_{C_{i_1}}^{hc}$, $X_{C_{i_2}}^{hc}$ and $x_{c_{i,j_1}}^{vc}$ are compared with their parents, and the individual with better fitness is retained for the next iteration.
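The two crossover operators can be sketched in Python as below. The random pairing scheme in the horizontal operator is our assumption (the paper does not spell it out), and the greedy parent-offspring selection mentioned above is assumed to happen in the caller:

```python
import numpy as np

def horizontal_crossover(XC, rng):
    """Horizontal crossover on the X_C sub-population (sketch)."""
    N, d = XC.shape
    out = XC.copy()
    perm = rng.permutation(N)                 # random pairing of individuals
    for k in range(0, N - 1, 2):
        i1, i2 = perm[k], perm[k + 1]
        r3, r4 = rng.random(d), rng.random(d)
        c1, c2 = rng.uniform(-1, 1, d), rng.uniform(-1, 1, d)
        out[i1] = r3 * XC[i1] + (1 - r3) * XC[i2] + c1 * (XC[i1] - XC[i2])
        out[i2] = r4 * XC[i2] + (1 - r4) * XC[i1] + c2 * (XC[i2] - XC[i1])
    return out

def vertical_crossover(xc_i, j1, j2, rng):
    """Vertical crossover between dimensions j1 and j2 of one individual."""
    child = xc_i.copy()
    r = rng.random()
    child[j1] = r * xc_i[j1] + (1 - r) * xc_i[j2]
    return child
```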

3.4. The Pseudocode of MWOA-CS

The above contents detail the strategies used to improve the basic WOA. The corresponding pseudocode of the modified algorithm (MWOA-CS) is given in Algorithm 1.
Algorithm 1: The description of MWOA-CS
Input: population size N, maximum iteration t_max, problem dimension D, t = 1
1: Generate the whale population X
2: Evaluate the fitness of each individual and obtain the best individual X*
3: While t < t_max do
4:   Calculate a and w by Equations (10) and (11), respectively
5:   For i = 1 : N do
6:     Compute Div of the current population based on Equation (14)
7:     D̃ = randperm(D); update A, C, l and generate a random number p ∈ [0, 1]
8:     For j = 1 : D·DR do, with j_w = D̃(j)
9:       If p < 0.5 then
10:        If |A| < 1 then
11:          Update the position of X_W(i, j_w) by Equation (12)
12:        Else
13:          Select one random individual X_rand
14:          Update the position of X_W(i, j_w) by Equation (13)
15:        EndIf
16:      Else
17:        Update the position of X_W(i, j_w) by Equation (12)
18:      EndIf
19:    EndFor
20:  EndFor
21:  If D·DR + 1 < D then
22:    Update the positions of the sub-population X_C by Equations (16)–(18)
23:  EndIf
24:  Evaluate the fitness of each individual and update X*
25:  t = t + 1
26: EndWhile
Output: The optimal solution X*
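To make the control flow of Algorithm 1 concrete, the following is a heavily simplified Python sketch of the main loop. It reuses the diversity_ratio and horizontal_crossover helpers sketched above, omits the vertical crossover and the greedy parent-offspring selection, and is not the authors' Matlab implementation:

```python
import numpy as np

def mwoa_cs(f, lb, ub, D, N=30, t_max=500, mu=2.0, n=0.8, seed=0):
    """Simplified sketch of MWOA-CS for a problem with scalar bounds lb, ub."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (N, D))
    best = min(X, key=f).copy()
    for t in range(t_max):
        a = 2.0 - 2.0 * (t / t_max) ** mu            # nonlinear convergence factor
        w = np.cos(n * np.pi * t / t_max) ** 2       # cosine-based inertia weight
        DR = diversity_ratio(X)
        dims = rng.permutation(D)
        dw, dc = dims[:int(D * DR)], dims[int(D * DR):]  # split the dimensions
        for i in range(N):                           # modified WOA on the X_W part
            r1, r2 = rng.random(dw.size), rng.random(dw.size)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if rng.random() < 0.5:
                ref = best if np.all(np.abs(A) < 1) else X[rng.integers(N)]
                X[i, dw] = w * ref[dw] - A * np.abs(C * ref[dw] - X[i, dw])
            else:
                l = rng.uniform(-1.0, 1.0)
                X[i, dw] = (w * best[dw] + np.abs(best[dw] - X[i, dw])
                            * np.exp(l) * np.cos(2 * np.pi * l))
        if dc.size > 0:                              # CSO on the X_C part
            X[:, dc] = horizontal_crossover(X[:, dc], rng)
        X = np.clip(X, lb, ub)
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best, f(best)
```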

4. Experimental Results and Analysis

This section presents numerical experiments to verify and discuss the optimization ability of the proposed MWOA-CS on 30 large scale global optimization problems, listed in Table 1. MWOA-CS is compared with five other meta-heuristic algorithms in terms of convergence speed, precision and stability. The comparison algorithms are ESPSO [11], HBA [13], CSO [12], GWO [36] and WOA [20]. All experiments were programmed in Matlab and executed on a computer with an Intel Core i5-9500 CPU at 3.00 GHz and 8 GB of main memory.

4.1. Benchmark Functions and Experimental Settings

According to the number of local optima, the benchmark functions used to verify the performance of MWOA-CS fall into two categories: unimodal functions (UM) and multimodal functions (MM). $f_1$–$f_{15}$, each with a single global minimum, are the unimodal functions, employed to test the exploitation ability of the studied meta-heuristic algorithms. $f_{16}$–$f_{30}$, with multiple local optima, are the multimodal functions, utilized to measure exploration ability. The dimensions of all test functions are taken as 300, 500 and 1000, respectively.
For convenience, the parameter settings of the selected comparison algorithms are shown in Table 2. Since the algorithms contain random factors, each algorithm solves each test function 30 times independently to ensure the reliability of the experimental results.
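Under this protocol (30 independent runs per algorithm-function pair), the mean/std statistics reported below could be gathered with a small harness such as the following sketch, which assumes the mwoa_cs function sketched in Section 3.4:

```python
import numpy as np

def run_statistics(f, lb, ub, D, runs=30):
    """Mean and std of the final fitness over 30 independent runs."""
    finals = np.array([mwoa_cs(f, lb, ub, D, seed=r)[1] for r in range(runs)])
    return finals.mean(), finals.std()

# Example: the sphere function f1 with D = 300 on [-100, 100].
mean, std = run_statistics(lambda x: np.sum(x ** 2), -100, 100, 300)
```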

4.2. Comparison and Analysis of Statistical Results

In the numerical experiments, the dimension of the benchmark functions is set to 300, 500 and 1000, respectively. The experimental results for the different dimensions are presented in Table 3, Table 4 and Table 5, where the better results are highlighted in bold. The mean and standard deviation (std) are significant indicators of the performance of a meta-heuristic algorithm. Based on Table 3, Table 4 and Table 5, the following conclusions are drawn:
(1) For the unimodal functions, the proposed MWOA-CS exhibits significantly better exploitation performance than the five other algorithms. Moreover, as the dimension of the benchmark functions increases, the performance of MWOA-CS is not affected, while the other comparison algorithms degrade in some cases. For $f_2$, $f_9$, $f_{10}$, $f_{11}$ and $f_{14}$, although HBA attains the same best std as MWOA-CS, its solution accuracy is still inferior to MWOA-CS.
(2) For the multimodal functions, MWOA-CS displays good exploration ability except on $f_{24}$, $f_{25}$ and $f_{26}$. When dealing with $f_{18}$, $f_{20}$–$f_{23}$, $f_{27}$, $f_{28}$ and $f_{29}$, MWOA-CS is the most efficient method among those compared. For $f_{16}$ and $f_{30}$, both MWOA-CS and HBA reach the global solution, and WOA is the third most effective algorithm. For $f_{17}$ and $f_{19}$, MWOA-CS, WOA and HBA show the best optimization performance, followed by CSO. For $f_{24}$, MWOA-CS shows the best accuracy, but its standard deviation is worse than those of ESPSO, HBA and WOA, though better than those of CSO and GWO. For $f_{25}$ and $f_{26}$, CSO is the most competitive method and MWOA-CS the second best, followed by WOA.
Moreover, the Wilcoxon rank sum test [37] was adopted to statistically evaluate the performance of MWOA-CS. The results for all test functions with dimensions ranging from 300 to 1000 are shown in Table 6. The p-value obtained from the Wilcoxon rank sum test indicates the difference between MWOA-CS and the comparison method at a significance level of 0.05: h = 1 indicates that MWOA-CS has a statistically significant advantage over the comparison algorithm; h = NaN indicates that the optimal solutions of the compared algorithms are similar; h = 0 indicates that the statistical performance of MWOA-CS is weaker than that of the comparison algorithm. The results illustrate that the p-value for most test functions is less than 0.05, indicating that MWOA-CS solves the test optimization problems more robustly. There remain some cases where the p-value is larger than 0.05: MWOA-CS versus HBA on $f_{25}$ with dimension 300, and MWOA-CS versus CSO on $f_{25}$ and $f_{26}$ with dimensions 300, 500 and 1000.
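For reference, a sketch of this test using SciPy follows (the paper does not state its implementation; the h = NaN case corresponds to degenerate samples where both algorithms return essentially identical values):

```python
from scipy.stats import ranksums

def wilcoxon_compare(fitness_mwoa_cs, fitness_other, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test on two sets of 30 final fitness values."""
    _, p = ranksums(fitness_mwoa_cs, fitness_other)
    h = 1 if p < alpha else 0    # per the paper's coding of h
    return p, h
```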
In brief, the results illustrate that MWOA-CS shows an excellent ability to jump out of local solutions and good exploration ability across the whole feasible region. Hence, MWOA-CS can effectively solve large scale global optimization problems.

4.3. Convergence Analysis of Comparison Algorithms

To further observe the differences in the optimization processes of the comparison algorithms, Figure 4 provides the convergence curves averaged over 30 independent runs, where the x-axis and y-axis are the iteration number and the best fitness value obtained so far, respectively. The dimension of the test functions is set to 1000. It can be seen that MWOA-CS has outstanding advantages in terms of convergence speed and accuracy. Although MWOA-CS is weaker than CSO on $f_{25}$ and $f_{26}$, it is still superior to the other comparison algorithms. These results indicate that MWOA-CS can obtain a satisfactory solution to large scale optimization problems in less time.
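A convergence plot of this kind can be reproduced with a short matplotlib sketch (assuming each run records its best-so-far fitness per iteration in a list; the recording itself is not shown):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(histories, label="MWOA-CS"):
    """histories: 30 per-run best-so-far curves, each of length t_max."""
    curves = np.asarray(histories)
    plt.semilogy(curves.mean(axis=0), label=label)  # log scale suits fitness decay
    plt.xlabel("Iteration")
    plt.ylabel("Best fitness obtained so far")
    plt.legend()
    plt.show()
```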

4.4. Boxplot Analysis of Comparison Algorithms

The boxplots of the comparison algorithms on the benchmark functions with dimension 1000 are portrayed in Figure 5, further illustrating the stability of each meta-heuristic algorithm. Based on Table 5 and Table 6 and the graphs in Figure 5, it is easy to see that MWOA-CS outperforms the five other algorithms on most test problems with dimension 1000. On $f_{25}$ and $f_{26}$, although MWOA-CS exhibits some ability to jump out of local solutions, it still converges prematurely, and this avoidance ability remains weaker than that of CSO. Consequently, it can be concluded that MWOA-CS exhibits better efficiency and robustness than the other comparison algorithms in solving classical benchmark functions of large scale dimension.
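A boxplot like Figure 5 can be sketched as follows, where each entry of results holds the 30 final fitness values of one algorithm on one test function:

```python
import matplotlib.pyplot as plt

def plot_boxes(results, labels):
    """Boxplot of final fitness distributions over 30 runs per algorithm."""
    plt.boxplot(results, labels=labels)
    plt.ylabel("Final fitness over 30 runs")
    plt.show()
```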

5. Conclusions

This study proposed an improved WOA with good efficiency and robustness for solving large scale optimization problems. To enhance the optimization ability of WOA, a nonlinear convergence factor and a cosine-based inertia weight were designed to balance the exploration and exploitation abilities of WOA. Meanwhile, the update mechanism of CSO was introduced into the improved WOA to improve local optimum avoidance. According to the current population diversity, the problem dimensions were divided into two parts, one of which performed the improved WOA, which is more robust on large scale problems, while the other executed CSO, which provides higher accuracy on low dimensional problems. To test the performance of MWOA-CS, extensive numerical experiments were carried out on 30 benchmark functions of different dimensions, and the numerical results were compared with five other meta-heuristic algorithms. The comparison indicated that MWOA-CS achieved higher quality solutions within fewer iterations, and exhibited a faster convergence rate and stronger stability on the majority of the test functions. In addition, as the problem dimension increased, the proposed algorithm showed better robustness. MWOA-CS can be applied to large scale constrained optimization problems, multi-objective optimization and practical engineering problems in the near future.

Author Contributions

Conceptualization, G.S. and Y.S.; methodology, G.S. and Y.S.; software, G.S.; validation, G.S., Y.S. and R.Z.; formal analysis, Y.S. and R.Z.; investigation, Y.S. and R.Z.; resources, G.S., Y.S. and R.Z.; data curation, G.S., Y.S. and R.Z.; writing—original draft preparation, G.S.; writing—review and editing, G.S., Y.S. and R.Z.; visualization, G.S. and Y.S.; supervision, R.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 12071112, 11471102, 12101195, 202300410146) and the Basic Research Projects for Key Scientific Research Projects in Henan Province (Grant No. 20ZX001).

Acknowledgments

The authors would like to thank the editors and reviewers for handling and reviewing our paper.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Abbreviations

WOA	Whale optimization algorithm
CSO	Crisscross optimization algorithm
MWOA-CS	Modified whale optimization algorithm integrated with the crisscross optimization algorithm
ESPSO	Particle swarm optimization algorithm using eagle strategy
HBA	Honey badger algorithm
GWO	Grey wolf optimizer
std	Standard deviation
Dim	Dimension
UM	Unimodal function
MM	Multimodal function

References

1. Wang, Y.; Wang, H. Neural Network Model for Energy Low Carbon Economy and Financial Risk Based on PSO Intelligent Algorithms. J. Intell. Fuzzy Syst. 2019, 37, 6151–6163.
2. Rezk, H.; Arfaoui, J.; Gomaa, M.R. Optimal Parameter Estimation of Solar PV Panel Based on Hybrid Particle Swarm and Grey Wolf Optimization Algorithms. Int. J. Interact. Multimed. Artif. Intell. 2021, 6, 145.
3. Chen, H.; Zhang, Q.; Luo, J.; Xu, Y.; Zhang, X. An Enhanced Bacterial Foraging Optimization and Its Application for Training Kernel Extreme Learning Machine. Appl. Soft Comput. 2020, 86, 105884.
4. Du, Y.; Yang, N. Analysis of Image Processing Algorithm Based on Bionic Intelligent Optimization. Clust. Comput. 2019, 22, 3505–3512.
5. Shang, Y.; Zhang, L. A Filled Function Method for Finding a Global Minimizer on Global Integer Optimization. J. Comput. Appl. Math. 2005, 181, 200–210.
6. Shang, Y.; Zhang, L. Finding Discrete Global Minima with a Filled Function for Integer Programming. Eur. J. Oper. Res. 2008, 189, 31–40.
7. Shang, Y.; Pu, D.; Jiang, A. Finding Global Minimizer with One-Parameter Filled Function on Unconstrained Global Optimization. Appl. Math. Comput. 2007, 191, 176–182.
8. Shang, Y.-L.; Sun, Z.-Y.; Jiang, X.-Y. Modified Filled Function Method for Global Discrete Optimization. In Proceedings of the 3rd World Congress on Global Optimization in Engineering and Science, Anhui, China, 8–12 July 2013; Gao, D., Ruan, N., Xing, W., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 57–68.
9. Shang, Y.; Wang, W.; Zhang, L. Modified T-F Function Method for Finding Global Minimizer on Unconstrained Optimization. Math. Probl. Eng. 2010, 2010, 602831.
10. Boussaïd, I.; Lepagnot, J.; Siarry, P. A Survey on Optimization Metaheuristics. Inf. Sci. 2013, 237, 82–117.
11. Yapıcı, H.; Çetinkaya, N. An Improved Particle Swarm Optimization Algorithm Using Eagle Strategy for Power Loss Minimization. Math. Probl. Eng. 2017, 2017, 1063045.
12. Meng, A.; Chen, Y.; Yin, H.; Chen, S. Crisscross Optimization Algorithm and Its Application. Knowl.-Based Syst. 2014, 67, 218–229.
13. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New Metaheuristic Algorithm for Solving Optimization Problems. Math. Comput. Simul. 2022, 192, 84–110.
14. Polnik, W.; Stobiecki, J.; Byrski, A.; Kisiel-Dorohinicki, M. Ant Colony Optimization–Evolutionary Hybrid Optimization with Translation of Problem Representation. Comput. Intell. 2021, 37, 891–923.
15. Ghanem, W.A.H.M.; Jantan, A.; Ghaleb, S.A.A.; Nasser, A.B. An Efficient Intrusion Detection Model Based on Hybridization of Artificial Bee Colony and Dragonfly Algorithms for Training Multilayer Perceptrons. IEEE Access 2020, 8, 130452–130475.
16. Albert, P.; Nanjappan, M. An Efficient Kernel FCM and Artificial Fish Swarm Optimization-Based Optimal Resource Allocation in Cloud. J. Circuits Syst. Comput. 2020, 29, 2050253.
17. Mallika, C.; Selvamuthukumaran, S. A Hybrid Crow Search and Grey Wolf Optimization Technique for Enhanced Medical Data Classification in Diabetes Diagnosis System. Int. J. Comput. Intell. Syst. 2021, 14, 157.
18. Rizk-Allah, R.M.; Saleh, O.; Hagag, E.A.; Mousa, A.A.A. Enhanced Tunicate Swarm Algorithm for Solving Large-Scale Nonlinear Optimization Problems. Int. J. Comput. Intell. Syst. 2021, 14, 189.
19. Rahnamayan, S.; Wang, G.G. Solving Large Scale Optimization Problems by Opposition-Based Differential Evolution (ODE). WSEAS Trans. Comput. 2008, 7, 13.
20. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
21. Gharehchopogh, F.S.; Gholizadeh, H. A Comprehensive Survey: Whale Optimization Algorithm and Its Applications. Swarm Evol. Comput. 2019, 48, 1–24.
22. Kaur, G.; Arora, S. Chaotic Whale Optimization Algorithm. J. Comput. Des. Eng. 2018, 5, 275–284.
23. Sayed, G.I.; Darwish, A.; Hassanien, A.E. A New Chaotic Whale Optimization Algorithm for Features Selection. J. Classif. 2018, 35, 300–344.
24. Ding, H.; Wu, Z.; Zhao, L. Whale Optimization Algorithm Based on Nonlinear Convergence Factor and Chaotic Inertial Weight. Concurr. Comput. Pract. Exp. 2020, 32, e5949.
25. Sun, Y.; Wang, X.; Chen, Y.; Liu, Z. A Modified Whale Optimization Algorithm for Large-Scale Global Optimization Problems. Expert Syst. Appl. 2018, 114, 563–577.
26. Abdel-Basset, M.; Abdle-Fatah, L.; Sangaiah, A.K. An Improved Lévy Based Whale Optimization Algorithm for Bandwidth-Efficient Virtual Machine Placement in Cloud Computing Environment. Clust. Comput. 2019, 22, 8319–8334.
27. Jin, Q.; Xu, Z.; Cai, W. An Improved Whale Optimization Algorithm with Random Evolution and Special Reinforcement Dual-Operation Strategy Collaboration. Symmetry 2021, 13, 238.
28. Saafan, M.M.; El-Gendy, E.M. IWOSSA: An Improved Whale Optimization Salp Swarm Algorithm for Solving Optimization Problems. Expert Syst. Appl. 2021, 176, 114901.
29. Elaziz, M.A.; Mirjalili, S. A Hyper-Heuristic for Improving the Initial Population of Whale Optimization Algorithm. Knowl.-Based Syst. 2019, 172, 42–63.
30. Chakraborty, S.; Saha, A.K.; Chakraborty, R.; Saha, M. An Enhanced Whale Optimization Algorithm for Large Scale Optimization Problems. Knowl.-Based Syst. 2021, 233, 107543.
31. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Oliva, D. Hybridizing of Whale and Moth-Flame Optimization Algorithms to Solve Diverse Scales of Optimal Power Flow Problem. Electronics 2022, 11, 831.
32. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Abualigah, L.; Abd Elaziz, M.; Oliva, D. EWOA-OPF: Effective Whale Optimization Algorithm to Solve Optimal Power Flow Problem. Electronics 2021, 10, 2975.
33. Liu, J.; Shi, J.; Hao, F.; Dai, M. A Novel Enhanced Global Exploration Whale Optimization Algorithm Based on Lévy Flights and Judgment Mechanism for Global Continuous Optimization Problems. Eng. Comput. 2022, 1–29.
34. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
35. Bansal, J.C.; Farswan, P. A Novel Disruption in Biogeography-Based Optimization with Application to Optimal Power Flow Problem. Appl. Intell. 2017, 46, 590–615.
36. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
37. Družeta, S.; Ivić, S. Examination of Benefits of Personal Fitness Improvement Dependent Inertia for Particle Swarm Optimization. Soft Comput. 2017, 21, 3387–3400.
Figure 1. The flow diagram of WOA.
Figure 2. Change curves of the nonlinear parameter a with different μ.
Figure 3. Nonlinear inertia weight with diverse values of n.
Figure 4. Convergence curves for some selected unimodal/multimodal problems with dim = 1000.
Figure 5. Boxplots of some selected unimodal/multimodal test problems with dim = 1000.
Table 1. Description of benchmark test functions.
Function | Dim | Domain | Type
$f_1(x) = \sum_{j=1}^{D} x_j^2$ | 300/500/1000 | [−100, 100] | UM
$f_2(x) = \sum_{j=1}^{D} |x_j|^{j+1}$ | 300/500/1000 | [−1, 1] | UM
$f_3(x) = \sum_{j=1}^{D} j x_j^4 + random[0, 1)$ | 300/500/1000 | [−1.28, 1.28] | UM
$f_4(x) = \sum_{j=1}^{D} x_j^2 + \left(\sum_{j=1}^{D} 0.5 j x_j\right)^2 + \left(\sum_{j=1}^{D} 0.5 j x_j\right)^4$ | 300/500/1000 | [−5, 10] | UM
$f_5(x) = \sum_{j=1}^{D} \left(\sum_{k=1}^{j} x_k\right)^2$ | 300/500/1000 | [−100, 100] | UM
$f_6(x) = \sum_{j=1}^{D} |x_j| + \prod_{j=1}^{D} |x_j|$ | 300/500/1000 | [−100, 100] | UM
$f_7(x) = \max_{1 \le j \le D} |x_j|$ | 300/500/1000 | [−100, 100] | UM
$f_8(x) = \sum_{j=1}^{D} j x_j^2$ | 300/500/1000 | [−10, 10] | UM
$f_9(x) = 10^6 x_1^2 + \sum_{j=2}^{D} x_j^6$ | 300/500/1000 | [−1, 1] | UM
$f_{10}(x) = x_1^2 + 10^6 \sum_{j=2}^{D} x_j^6$ | 300/500/1000 | [−100, 100] | UM
$f_{11}(x) = \left(\sum_{j=1}^{D} x_j^2\right)^2$ | 300/500/1000 | [−100, 100] | UM
$f_{12}(x) = \sum_{j=2}^{D} (10^6)^{(j-1)/(D-1)} x_j^2$ | 300/500/1000 | [−100, 100] | UM
$f_{13}(x) = (x_1 - 1)^2 + \sum_{j=2}^{D} j (2 x_j^2 - x_{j-1})^2$ | 300/500/1000 | [−10, 10] | UM
$f_{14}(x) = \sum_{j=1}^{D} x_j^4$ | 300/500/1000 | [−100, 100] | UM
$f_{15}(x) = \sum_{j=1}^{D-1} \left[(x_j^2)^{(x_{j+1}^2 + 1)} + (x_{j+1}^2)^{(x_j^2 + 1)}\right]$ | 300/500/1000 | [−1, 4] | UM
$f_{16}(x) = \sum_{j=1}^{D} \left[x_j^2 - 10 \cos(2\pi x_j) + 10\right]$ | 300/500/1000 | [−5.12, 5.12] | MM
$f_{17}(x) = \sum_{j=1}^{D-1} \left[x_j^2 + 2 x_{j+1}^2 - 0.3 \cos(3\pi x_j) - 0.4 \cos(4\pi x_{j+1}) + 0.7\right]$ | 300/500/1000 | [−15, 15] | MM
$f_{18}(x) = \sum_{j=1}^{D} \left|x_j \sin(x_j) + 0.1 x_j\right|$ | 300/500/1000 | [−10, 10] | MM
$f_{19}(x) = \sum_{j=1}^{D} x_j^2 / 4000 - \prod_{j=1}^{D} \cos(x_j / \sqrt{j}) + 1$ | 300/500/1000 | [−600, 600] | MM
$f_{20}(x) = -20 \exp\left(-0.2 \sqrt{\sum_{j=1}^{D} x_j^2 / D}\right) - \exp\left(\sum_{j=1}^{D} \cos(2\pi x_j) / D\right) + 20 + e$ | 300/500/1000 | [−32, 32] | MM
$f_{21}(x) = 418.9829 D - \sum_{j=1}^{D} x_j \sin(\sqrt{|x_j|})$ | 300/500/1000 | [−500, 500] | MM
$f_{22}(x) = 1 - \cos\left(2\pi \sqrt{\sum_{j=1}^{D} x_j^2}\right) + 0.1 \sqrt{\sum_{j=1}^{D} x_j^2}$ | 300/500/1000 | [−100, 100] | MM
$f_{23}(x) = \sum_{j=1}^{D} (x_j^4 - 16 x_j^2 + 5 x_j) / D$ | 300/500/1000 | [−5, 5] | MM
$f_{24}(x) = \sum_{j=1}^{D} \left[\sum_{k=0}^{k_{max}} a^k \cos\left(2\pi b^k (x_j + 0.5)\right)\right] - D \sum_{k=0}^{k_{max}} a^k \cos(\pi b^k)$ | 300/500/1000 | [−0.5, 0.5] | MM
$f_{25}(x) = \sum_{j=1}^{D} x_j^6 \left[2 + \sin(1 / x_j)\right]$ | 300/500/1000 | [−1, 1] | MM
$f_{26}(x) = \frac{\pi}{D} \left\{10 \sin^2(\pi y_1) + \sum_{j=1}^{D-1} (y_j - 1)^2 \left[1 + 10 \sin^2(\pi y_{j+1})\right] + (y_D - 1)^2\right\} + \sum_{j=1}^{D} u(x_j, 10, 100, 4)$ | 300/500/1000 | [−50, 50] | MM
$f_{27}(x) = 0.1 \left\{\sin^2(3\pi x_1) + \sum_{j=1}^{D-1} (x_j - 1)^2 \left[1 + \sin^2(3\pi x_{j+1})\right] + (x_D - 1)^2 \left[1 + \sin^2(2\pi x_D)\right]\right\} + \sum_{j=1}^{D} u(x_j, 5, 100, 4)$ | 300/500/1000 | [−50, 50] | MM
$f_{28}(x) = 0.5 + \left[\sin^2\left(\sqrt{\sum_{j=1}^{D} x_j^2}\right) - 0.5\right] / \left(1 + 0.001 \sum_{j=1}^{D} x_j^2\right)^2$ | 300/500/1000 | [−100, 100] | MM
$f_{29}(x) = 0.5 \sum_{j=1}^{D} (x_j^4 - 16 x_j^2 + 5 x_j)$ | 300/500/1000 | [−5, 5] | MM
$f_{30}(x) = -\exp\left(-0.5 \sum_{j=1}^{D} x_j^2\right)$ | 300/500/1000 | [−1, 1] | MM
Table 2. Parameter settings.
Algorithm | Parameter Settings
ESPSO | $N = 30$, $t_{max} = 500$, $c_1 = c_2 = 2$, $w = 0.9 - (0.9 - 0.4) t / t_{max}$
HBA | $N = 30$, $t_{max} = 500$, $\beta = 6$, $C = 2$
CSO | $N = 30$, $t_{max} = 500$, $P_1 = 1$, $P_2 = 0.8$
GWO | $N = 30$, $t_{max} = 500$
WOA | $N = 30$, $t_{max} = 500$
MWOA-CS | $N = 30$, $t_{max} = 500$
Table 3. Comparison results of algorithms for test problems with dim = 300.
Function | Metric | ESPSO | HBA | CSO | GWO | WOA | MWOA-CS
f 1 mean67,914.495.02 × 10−1140.0004851.26 × 10−54.47 × 10−710
std47,782.432.67 × 10−1130.0026564.61 × 10−62.09 × 10−700
f 2 mean0.0001023.67 × 10−1991.22 × 10−124.01 × 10−98.75 × 10−1090
std9.69 × 10−501.81 × 10−121.69 × 10−83.47 × 10−1080
f 3 mean53.768610.0005650.0065640.0240470.0034608.83 × 10−5
std23.119560.0004050.0025550.0052530.0034987.27 × 10−5
f 4 mean3.53 × 1016153.2710787.54192034.0934830.5752.42 × 10−209
std1.42 × 1016230.1733253.9344254.4718612.99950
f 5 mean9.80 × 1057.87 × 10−674087.43982,336.881.17 × 1080
std2.65 × 1054.19 × 10−663350.43236,791.803,814,0780
f 6 meanInfInfInf1.13 × 101902.45 × 10−490
stdNanNanNanNan5.74 × 10−490
f 7 mean35.092431.95 × 10−300.14244847.3585479.281220
std2.5494773.40 × 10−300.1153797.11291421.698120
f 8 mean103270.21.15 × 10−1135.09 × 10−51.38 × 10−56.54 × 10−710
std76,559.304.65 × 10−1120.0002635.34 × 10−62.86 × 10−700
f 9 mean20.006941.50 × 10−2743.82 × 10−111.03 × 10−197.39 × 10−1030
std13.7475101.48 × 10−103.26 × 10−194.05 × 10−1020
f 10 mean5.49 × 10163.38 × 10−251114,602.30.0665231.10 × 10−960
std2.14 × 10170371,856.70.1061726.05 × 10−960
f 11 mean7.05 × 10091.93 × 10−2293.46 × 10−132.01 × 10−101.38 × 10−1430
std9.76 × 100901.77 × 10−121.57 × 10−105.85 × 10−1430
f 12 mean9.44 × 1082.51 × 10−10576,587.910.0193942.32 × 10−650
std4.00 × 1081.37 × 10−104400,931.50.0088631.26 × 10−640
f 13 mean2.23 × 1060.6668480.6693380.7914860.6707070.666667
std4.84 × 1068.98 × 10−50.1251070.1616710.0045063.79 × 10−7
f 14 mean1.36 × 1073.85 × 10−2100.0589053.21 × 10−89.60 × 10−990
std4.15 × 10600.2667972.81 × 10−85.11 × 10−980
f 15 mean3821.1239.24 × 10−11928.834715.08 × 10−71.01 × 10−730
std1595.5092.36 × 10−1188.0744711.68 × 10−64.40 × 10−730
f 16 mean2303.11000.00077034.506031.06 × 10−130
std231.641000.00418311.879672.84 × 10−130
f 17 mean5268.88206.93 × 10−68.66 × 10−600
std3670.68202.18 × 10−50.00598900
f 18 mean224.26882.45 × 10−620.0017190.0234801.58 × 10−470
std45.443433.49 × 10−620.0032820.0005868.64 × 10−470
f 19 mean461.670704.18 × 10−50.01234200
std480.981806.34 × 10−50.01794500
f 20 mean19.915433.3206890.0002660.0002243.73 × 10−158.88 × 10−16
std0.2786367.5522140.0010945.23 × 10−52.36 × 10−150
f 21 mean76,372.4073,537.38111,301.786,132.7619,785.73117.1536
std5641.7386493.7971060.2166673.87617,043.61256.8420
f 22 mean29.919300.0565020.3453110.7532060.1065650
std7.3072570.0330300.0494220.0937100.0784550
f 23 mean−44.4444−44.02731−57.8242−35.79526−73.57010−78.00831
std1.7368692.23771919.768012.0771646.9468140.170153
f 24 mean1.13 × 10−131.13 × 10−13416.0773473.20281.13 × 10−131.89 × 10−14
std005.7463178.25788804.30 × 10−14
f 25 mean6.62 × 1080.7346261.08 × 10−80.7587050.0896610.012489
std2.57 × 1080.0257675.75 × 10−80.0611420.0381560.002911
f 26 mean5.48 × 10729.182342.10 × 10−627.2053110.274550.312358
std2.49 × 1080.2856171.01 × 10−50.6394753.5824410.148969
f 27 mean0.0132299.51 × 10−3101.11 × 10−131.52 × 10−171.55 × 10−1120
std0.00606604.71 × 10−134.63 × 10−178.41 × 10−1120
f 28 mean0.4999861.86 × 10−50.0075050.0417420.0022880
std7.07 × 10−65.73 × 10−50.0017320.0071660.0021540
f 29 mean−6634.555−6653.709−9739.326−5249.963−11,004.02−11,747.62
std237.7986552.82602483.049269.0111989.38063.164895
f 30 mean−0.1389537−1−0.999999−0.999999−1−1
std0.03623602.21 × 10−73.52 × 10−104.61 × 10−170
Table 4. Comparison results of algorithms for test problems with dim = 500.
Function | Metric | ESPSO | HBA | CSO | GWO | WOA | MWOA-CS
f 1 mean207,002.65.80 × 10−1120.0004720.00156333.16 × 10−700
std47,782.432.67 × 10−1130.0026564.61 × 10−62.09 × 10−700
f 2 mean0.0001031.26 × 10−1821.47 × 10−120.0013101.17 × 10−1080
std9.05 × 10−501.52 × 10−120.0040296.02 × 10−1080
f 3 mean198.65780.0003800.0096400.0450370.0032839.49 × 10−5
std44.278250.0002450.0035920.0094430.0035697.77 × 10−5
f 4 mean2.15 × 1018423.00411229.4313773.3808141.6421.33 × 10−121
std7.69 × 1018480.8106478.0931475.34511411.2547.33 × 10−121
f 5 mean2.85 × 1063.92 × 10−6217,514.16328,531.93.29 × 1070
std7.18 × 1052.03 × 10−6118,074.7987,428.111.25 × 1070
f 6 meanInfInfInf8.89 × 10991.81 × 10460
stdNanNanNan4.84 × 101005.84 × 10−460
f 7 mean37.248119.95 × 10−280.11660666.5231378.732230
std3.3353712.75 × 10−270.0999435.10671524.382020
f 8 mean2.91 × 1053.61 × 10−1006.79 × 10−60.0033395.35 × 10−700
std1.77 × 1051.85 × 10−1093.70 × 10−50.0008672.92 × 10−690
f 9 mean39.562932.67 × 10−2673.10 × 10−106.35 × 10−154.52 × 10−1040
std18.8203109.82 × 10−101.64 × 10−142.47 × 10−1030
f 10 mean4.67 × 10161.59 × 10−254507,157.92994.7291.95 × 10−990
std9.35 × 101602,144,8044738.4888.50 × 10−990
f 11 mean2.14 × 10112.56 × 10−2178.08 × 10−90.0668761.85 × 10−1350
std3.56 × 101104.42 × 10−80.0324087.07 × 10−1350
f 12 mean3.04 × 1095.41 × 10−1068328.8732.8378384.64 × 10−650
std9.76 × 1081.52 × 10−10539,790.711.2317471.88 × 10−640
f 13 mean1.45 × 1070.6669432.8886350.9509250.6939070.666667
std3.14 × 1070.00014112.044910.1448860.0651221.48 × 10−6
f 14 mean1.87 × 1089.11 × 10−2066.5874500.0001305.31 × 10−1010
std4.09 × 108034.668600.0001342.80 × 10−1000
f 15 mean21274.506.07 × 10−116246.55490.0004279.46 × 10−760
std7109.8281.76 × 10−11533.690170.0012932.94 × 10−750
f 16 mean4333.04600.17557373.203379.09 × 10−140
std545.723600.60593322.253002.77 × 10−130
f 17 mean10,025.8401.98 × 10−70.00134400
std7439.00101.08 × 10−60.00052900
f 18 mean440.14763.84 × 10−610.0096880.0850133.52 × 10−490
std85.133271.07 × 10−600.0193080.1091781.52 × 10−480
f 19 mean1677.64802.72 × 10−50.04159500
std1151.78500.0001390.03460700
f 20 mean19.966525.3177030.0002220.0018454.32 × 10−158.88 × 10−16
std0.0001218.9691790.0007550.0003121.97 × 10−150
f 21 mean143,764.7137,963.4191,175.8155,007.434,080.73271.3243
std10,204.059448.3801162.90414,098.9528,543.86594.3385
f 22 mean42.512170.0504360.4069451.0798730.1132270
std13.304810.0294830.0532780.1126480.0681080
f 23 mean−38.13811−38.28829−31.15500−30.59179−74.23117−78.31819
std1.0887782.6739125.1944611.9591746.2311650.014089
f 24 mean1.13 × 10−131.13 × 10−13735.1724815.87621.13 × 10−131.13 × 10−14
std006.6274757.83634703.46 × 10−14
f 25 mean1.46 × 1080.7438531.49 × 10−90.7634990.0990710.001950
std3.48 × 1080.0321086.76 × 10−90.0429560.0502330.000890
f 26 mean2.06 × 10849.365330.00141950.7831517.725250.799122
std7.20 × 1080.2473910.0054161.2298915.4440780.368979
f 27 mean0.0319613.20 × 10−3011.90 × 10−131.28 × 10−132.26 × 10−1040
std0.01220906.45 × 10−132.36 × 10−131.24 × 10−1030
f 28 mean0.4999955.17 × 10−50.0090470.1268820.0022900
std1.94 × 10−60.0001880.0018050.0213070.0018190
f 29 mean−9611.591−9589.609−7547.245−7693.819−18,408.48−19,579.64
std272.2021880.5164247.1070397.14201781.4993.758783
f 30 mean−0.018264−1−0.999999−0.999999−1−1
std0.00775503.05 × 10−83.52 × 10−104.61 × 10−170
Table 5. Comparison results of algorithms for test problems with dim = 1000.
Function | Metric | ESPSO | HBA | CSO | GWO | WOA | MWOA-CS
f 1 mean5.23 × 1058.68 × 10−1100.0001030.247418.26 × 10−680
std47,782.432.67 × 10−1130.0026564.61 × 10−62.09 × 10−700
f 2 mean0.0001393.19 × 10−1661.06 × 10−120.0147372.12 × 10−1050
std0.00013601.33 × 10−120.0594751.12 × 10−1040
f 3 mean989.510.0004070.0096170.143720.0042340.000105
std211.570.0002830.0045420.0279350.0049077.77 × 10−5
f 4 mean1.02 × 10212354.73035.57897.9157344.06 × 10−21
std2.17 × 10213437.61193703.461004.52.22 × 10−20
f 5 mean1.05 × 1072.17 × 10−621.47 × 1051.61 × 1061.41 × 1080
std3.46 × 1061.16 × 10−611.23 × 1053.92 × 1053.78 × 1070
f 6 meanInfInfInf1.08 × 102094.88 × 10−470
stdNanNanNanNan1.73 × 10−460
f 7 mean40.640015.64 × 10−260.12662679.5457182.887720
std2.2864156.78 × 10−260.1540043.65374422.472920
f 8 mean2.03 × 1062.33 × 10−1090.0035400.9731365.97 × 10−670
std1.66 × 1066.31 × 10−1060.0193880.2197343.25 × 10−660
f 9 mean88.90461.13 × 10−2592.62 × 10−105.10 × 10−101.23 × 10−1100
std48.0627107.79 × 10−108.31 × 10−104.56 × 10−1100
f 10 mean1.21 × 10181.71 × 10−24234,213.932.55 × 1081.17 × 10−960
std3.11 × 10180137,102.35.08 × 10084.56 × 10−960
f 11 mean2.93 × 10111.55 × 10−2184.69 × 10−60.0728007.50 × 10−1310
std4.58 × 101102.54 × 10−50.0386544.11 × 10−1300
f 12 mean1.27 × 10121.67 × 10−10316,503.34701.52995.17 × 10−640
std7.84 × 1099.10 × 10−10368,226.77251.96162.33 × 10−630
f 13 mean6.14 × 1070.6780471.24385124.977250.7401830.666667
std1.71 × 1080.0608073.1972859.0499590.1113871.10 × 10−6
f 14 mean1.48 × 1084.57 × 10−2023.174010.6625524.45 × 10−1050
std3.57 × 10806.6642030.5112911.97 × 10−1040
f 15 mean77,855.561.21 × 10−1111755.92678.926214.65 × 10−720
std14,883.693.81 × 10−111226.1290297.50952.39 × 10−710
f 16 mean10,113.3100.048433202.28682.42 × 10−130
std1114.80700.11710343.125697.89 × 10−130
f 17 mean24,621.9801.13 × 10−63.19751900
std19,894.1905.01 × 10−61.68148500
f 18 mean997.46052.42 × 10−590.0119990.6949141.08 × 10−490
std266.34537.15 × 10−590.0362560.9416632.78 × 10−490
f 19 mean3607.65902.71 × 10−60.04326200
std3167.73701.36 × 10−50.06871900
f 20 mean19.957362.6552640.0001330.0181564.44 × 10−158.88 × 10−16
std0.0507086.8853660.0007200.0027672.79 × 10−150
f 21 mean326,104.4316,582.6393,233.5330,499.883,574.47422.7988
std16,893.5313,563.461466.71714261.6947,042.86930.2201
f 22 mean70.175120.0533660.5091561.9165400.1498900
std23.932700.0337730.0368210.1620620.0819680
f 23 mean−36.32393−32.89725−28.17181−25.51003−75.83808−78.31360
std0.6367131.9151510.5466091.6539294.9682300.017915
f 24 mean2.27 × 10−132.27 × 10−131550.0901673.6112.27 × 10−132.27 × 10−14
std007.65254314.1403106.93 × 10−14
f 25 mean4.81 × 1080.9253391.38 × 10−61.2442770.1150250.002789
std1.29 × 1090.0194127.55 × 10−60.2906780.0474010.001002
f 26 mean1.43 × 10999.320870.004964122.091533.916271.814436
std3.53 × 1090.6181980.0249017.43812612.922120.852835
f 27 mean0.0853351.39 × 10−2985.23 × 10−135.54 × 10−91.72 × 10−1110
std0.01960102.63 × 10−128.71 × 10−99.09 × 10−1110
f 28 mean0.4999980.0001180.0105210.4026520.0028120
std4.46 × 10−70.0003810.0037080.0293700.0014980
f 29 mean−18,118.04−16,234.30−14,094.66−12,624.36−37,312.62−39,159.47
std311.95521400.603381.9183634.66562990.9549.559104
f 30 mean−9.6 × 10−5−1−0.999999−0.999999−1−1
std0.00010009.12 × 10−62.76 × 10−64.60 × 10−170
Table 6. Comparison results of the Wilcoxon rank sum test for all test problems.
Function | Dim | ESPSO (p-Value, h) | HBA (p-Value, h) | CSO (p-Value, h) | GWO (p-Value, h) | WOA (p-Value, h)
f 1 3006.06 × 10−1316.06 × 10−1319.67 × 10−1116.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1318.28 × 10−1216.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 2 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 3 3001.51 × 10−1112.30 × 10−1011.51 × 10−1111.51 × 10−1111.43 × 10−101
5001.51 × 10−1113.26 × 10−811.51 × 10−1111.51 × 10−1112.78 × 10−101
10001.51 × 10−1113.02 × 10−711.51 × 10−1111.51 × 10−1114.17 × 10−81
f 4 3001.47 × 10−1111.47 × 10−1111.47 × 10−1111.47 × 10−1111.47 × 10−111
5001.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−111
10001.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−111
f 5 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 6 3008.42 × 10−1518.42 × 10−1518.42 × 10−1516.06 × 10−1316.06 × 10−131
5008.42 × 10−1518.42 × 10−1518.42 × 10−1516.06 × 10−1316.06 × 10−131
10008.42 × 10−1518.42 × 10−1518.42 × 10−1516.06 × 10−1316.06 × 10−131
f 7 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 8 3006.06 × 10−1316.06 × 10−1312.28 × 10−1216.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1312.28 × 10−1216.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 9 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 10 3006.06 × 10−1316.06 × 10−1318.28 × 10−1216.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 11 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 12 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 13 3001.51 × 10−1111.51 × 10−1114.23 × 10−911.51 × 10−1111.51 × 10−111
5001.51 × 10−1111.51 × 10−1112.78 × 10−1011.51 × 10−1111.51 × 10−111
10001.51 × 10−1111.51 × 10−1114.23 × 10−911.51 × 10−1111.51 × 10−111
f 14 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 15 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1310.04076116.06 × 10−1316.06 × 10−131
f 16 3006.06 × 10−1311NaN3.30 × 10−416.06 × 10−1310.0209321
5006.06 × 10−1311NaN1.46 × 10−416.06 × 10−1310.0407021
10006.06 × 10−1311NaN3.30 × 10−416.06 × 10−1310.0407461
f 17 3006.06 × 10−1311NaN4.12 × 10−416.06 × 10−1311NaN
5006.06 × 10−1311NaN4.12 × 10−416.06 × 10−1311NaN
10006.06 × 10−1311NaN0.00279216.06 × 10−1311NaN
f 18 3006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1318.48 × 10−916.06 × 10−1316.06 × 10−131
f 19 3006.06 × 10−1311NaN1.72 × 10−716.06 × 10−1311NaN
5006.06 × 10−1311NaN1.72 × 10−716.06 × 10−1311NaN
10006.06 × 10−1311NaN2.33 × 10−1116.06 × 10−1311NaN
f 20 3002.67 × 10−1310.00551715.48 × 10−1316.06 × 10−1314.36 × 10−81
5001.94 × 10−1310.00139415.27 × 10−1316.06 × 10−1311.02 × 10−101
10004.13 × 10−1310.02096315.27 × 10−1316.06 × 10−1311.87 × 10−81
f 21 3001.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−1116.64 × 10−111
5001.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−1111.73 × 10−101
10001.51 × 10−1111.51 × 10−1111.51 × 10−1111.51 × 10−1112.48 × 10−111
f 22 3006.06 × 10−1316.04 × 10−1316.06 × 10−1314.11 × 10−1312.65 × 10−131
5006.06 × 10−1316.04 × 10−1316.06 × 10−1314.78 × 10−1313.10 × 10−131
10006.06 × 10−1316.04 × 10−1316.06 × 10−1315.70 × 10−1313.43 × 10−131
f 23 3001.51 × 10−1111.51 × 10−1115.46 × 10−1111.51 × 10−1116.23 × 10−111
5001.51 × 10−1111.51 × 10−1115.46 × 10−1111.51 × 10−1111.66 × 10−111
10001.51 × 10−1111.51 × 10−1115.46 × 10−1111.51 × 10−1112.48 × 10−111
f 24 3004.49 × 10−1114.49 × 10−1112.57 × 10−1212.57 × 10−1214.49 × 10−111
5001.97 × 10−1211.97 × 10−1211.58 × 10−1211.58 × 10−1211.97 × 10−121
10001.97 × 10−1211.97 × 10−1211.58 × 10−1211.58 × 10−1211.97 × 10−121
f 25 3001.51 × 10−1110.99999900.99999901.51 × 10−1111.51 × 10−111
5001.51 × 10−1111.51 × 10−1110.99999901.51 × 10−1111.51 × 10−111
10001.51 × 10−1111.51 × 10−1110.99999901.51 × 10−1111.51 × 10−111
f 26 3001.51 × 10−1111.51 × 10−1110.99999901.51 × 10−1111.51 × 10−111
5001.51 × 10−1111.51 × 10−1110.99999901.51 × 10−1111.51 × 10−111
10001.51 × 10−1111.51 × 10−1110.99999901.51 × 10−1111.51 × 10−111
f 27 3006.06 × 10−1312.68 × 10−616.06 × 10−1316.06 × 10−1316.06 × 10−131
5006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
10006.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−1316.06 × 10−131
f 28 3006.06 × 10−1316.85 × 10−416.06 × 10−1316.06 × 10−1314.98 × 10−131
5006.06 × 10−1312.68 × 10−616.06 × 10−1316.06 × 10−1311.32 × 10−81
10006.06 × 10−1311.72 × 10−716.06 × 10−1316.06 × 10−1317.12 × 10−111
f 29 3001.51 × 10−1111.51 × 10−1112.98 × 10−911.51 × 10−1110.0010781
5001.51 × 10−1111.51 × 10−1112.98 × 10−911.51 × 10−1114.95 × 10−111
10001.51 × 10−1111.51 × 10−1112.98 × 10−911.51 × 10−1111.51 × 10−111
f 30 3006.06 × 10−1311NaN0.04076111NaN1NaN
5006.06 × 10−1311NaN0.04076115.71 × 10−1311NaN
10006.06 × 10−1311NaN0.02096315.75 × 10−1311NaN