
Green Anaconda Optimization: A New Bio-Inspired Metaheuristic Algorithm for Solving Optimization Problems

Department of Mathematics, Faculty of Science, University of Hradec Králové, 500 03 Hradec Králové, Czech Republic
Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
Author to whom correspondence should be addressed.
Biomimetics 2023, 8(1), 121;
Received: 21 February 2023 / Revised: 8 March 2023 / Accepted: 10 March 2023 / Published: 14 March 2023
(This article belongs to the Special Issue Bio-Inspired Computing: Theories and Applications)


A new metaheuristic algorithm called green anaconda optimization (GAO), which imitates the natural behavior of green anacondas, has been designed. The fundamental inspiration for GAO is the mechanism by which males recognize the position of females during the mating season and the hunting strategy of green anacondas. GAO’s mathematical modeling is presented based on the simulation of these two strategies of green anacondas in two phases of exploration and exploitation. The effectiveness of the proposed GAO approach in solving optimization problems is evaluated on twenty-nine objective functions from the CEC 2017 test suite and ten from the CEC 2019 test suite. The efficiency of GAO in providing solutions for optimization problems is compared with the performance of twelve well-known metaheuristic algorithms. The simulation results show that the proposed GAO approach has a high capability in exploration, exploitation, and creating a balance between them, and performs better compared to competitor algorithms. In addition, the implementation of GAO on twenty-one optimization problems from the CEC 2011 test suite indicates the effective capability of the proposed approach in handling real-world applications.

1. Introduction

Optimization has long been discussed in various branches of science as a means of achieving the best solution in multi-solution problems [1], and it is employed in addressing many challenges in technology, engineering, and real-life applications [2]. From a mathematical point of view, an optimization problem is modeled using decision variables that must be quantified, problem constraints that must be satisfied, and an objective function that must be optimized [3]. Problem-solving approaches in the field of optimization are classified into two classes: deterministic and stochastic techniques [4]. Deterministic techniques fall into two categories, gradient-based and non-gradient-based, and are effective in solving linear, convex, simple optimization problems and simple real-world applications [5]. The need for first- and second-order derivative information and the dependence on initial starting points are among the disadvantages of these techniques. With the advancement of science and technology, scientists are faced with more complex and emerging optimization problems that are non-linear, non-convex, high-dimensional, and non-differentiable in nature. These characteristics leave deterministic techniques unable to deal with such optimization problems, causing them to become stuck in local optima [6]. The difficulties of deterministic techniques led scholars to explore new techniques, called stochastic approaches, to handle complex optimization tasks. Effective performance on nonlinear, discontinuous, complex, high-dimensional, NP-hard, and non-convex optimization problems, as well as on nonlinear, discrete, and unknown search spaces, is among the advantages that have made metaheuristic algorithms popular [7].
The operation of searching for a solution in metaheuristic algorithms starts by randomly generating a certain number of initial candidate solutions. In each iteration, under the influence of algorithm steps, candidate solutions are improved. After completing the iterations of the algorithm, the best candidate solution is identified among the solutions and presented as the solution to the given problem [8].
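The iterative scheme described above can be sketched in Python. This is a minimal generic skeleton of a population-based metaheuristic, not GAO itself; the Gaussian perturbation rule, population size, iteration count, and sphere objective are illustrative assumptions:

```python
import random

def generic_metaheuristic(objective, lb, ub, n_pop=20, n_iter=100, seed=0):
    """Skeleton of a population-based metaheuristic: random initialization,
    iterative improvement of candidates, and return of the best solution found."""
    rng = random.Random(seed)
    m = len(lb)
    # Randomly generate the initial candidate solutions.
    pop = [[lb[d] + rng.random() * (ub[d] - lb[d]) for d in range(m)]
           for _ in range(n_pop)]
    fit = [objective(x) for x in pop]
    for _ in range(n_iter):
        for i in range(n_pop):
            # Propose a perturbed candidate (placeholder for algorithm-specific steps).
            cand = [min(ub[d], max(lb[d], pop[i][d] + rng.gauss(0, 0.1)))
                    for d in range(m)]
            f = objective(cand)
            if f < fit[i]:  # greedy acceptance: keep only improvements
                pop[i], fit[i] = cand, f
    best = min(range(n_pop), key=lambda i: fit[i])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = generic_metaheuristic(sphere, [-5.0] * 2, [5.0] * 2)
```

Any specific metaheuristic differs only in how the candidate-update step is defined; GAO's two update phases are introduced in Section 3.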
Metaheuristic algorithms should search the problem-solving space at both global and local levels well and carefully. The main goal of global search with the concept of exploration is the ability of the metaheuristic algorithm to identify the main optimal region and avoid getting stuck in local optima. The main goal of local search with the concept of exploitation is the ability of the metaheuristic algorithm to converge towards possible better solutions in the vicinity of the solutions discovered in the promising regions of the problem-solving space. Considering that exploration and exploitation pursue opposite goals, balancing them during the search process is the key to the success of metaheuristic algorithms [9]. The search process in metaheuristic algorithms has a random nature, which makes the solutions resulting from these approaches not guaranteed to be global optimal. At the same time, considering that these solutions are close to the global optimum, they are accepted as quasi-optimal solutions. Therefore, in the comparison of several metaheuristic algorithms, the algorithm that provides a quasi-optimal solution closer to the global optimum has superior performance. Achieving better quasi-optimal solutions and solutions closer to the global optimum to more effectively solve optimization problems is the motivation source of scientists in designing numerous metaheuristic algorithms [10].
Given the numerous metaheuristic algorithms introduced so far, the main research question is whether there is still a need to design new ones. In response to this question, the No Free Lunch (NFL) theorem [11] states that the successful performance of a metaheuristic algorithm on one set of optimization problems does not guarantee the same performance on all other optimization problems. In fact, according to the NFL theorem, no single metaheuristic algorithm is the best optimizer for all optimization applications; there is no way to know in advance whether applying a given metaheuristic algorithm to an optimization problem will be successful. The NFL theorem is the main source of motivation for researchers to search for better solutions to optimization problems by designing newer algorithms.
The novelty and innovation of this paper are in the introduction of a new metaheuristic algorithm called green anaconda optimization (GAO), which is used in dealing with optimization problems and providing solutions for them. The key contributions of this paper are given below:
  • GAO is designed based on mimicking the behavior of green anacondas in the wild.
  • The fundamental inspiration for GAO is the green anaconda’s tracking mechanism during the mating season and the hunting strategy they have when attacking prey.
  • The mathematical model of GAO is presented in two phases with the aim of forming exploration and exploitation in the search process.
  • GAO’s performance on optimization tasks is tested on twenty-nine benchmark functions from the CEC 2017 test suite and CEC 2019 test suite.
  • GAO’s ability to handle real-world applications is evaluated on twenty-one optimization problems from the CEC 2011 test suite.
  • The results obtained from GAO are compared with the performance of twelve well-known metaheuristic algorithms.
The structure of the paper is as follows: the literature review is presented in Section 2. Then, the proposed green anaconda optimization approach is introduced and modeled in Section 3. The simulation studies and results are presented in Section 4. The effectiveness of GAO in solving real-world applications is investigated in Section 5. Conclusions and suggestions for future research are provided in Section 6.

2. Literature Review

In their design, metaheuristic algorithms are inspired by various natural phenomena, animal behavior in nature, laws of physics, rules of games, biological sciences, human interactions, and other phenomena with an evolutionary process. Accordingly, in terms of the idea behind their design, metaheuristic algorithms fall into five classes: swarm-based, evolutionary-based, physics-based, human-based, and game-based approaches.
Swarm-based metaheuristic algorithms have been developed based on the simulation of various swarming phenomena in nature, including the behaviors and strategies of animals, insects, birds, aquatic organisms, and other living organisms. The most famous swarm-based algorithms include ant colony optimization (ACO) [12], artificial bee colony (ABC) [13], and particle swarm optimization (PSO) [14]. ACO is inspired by the ability of ants to identify the optimal route between nests and food sources. ABC was developed based on modeling the activities of a honey bee colony in obtaining food resources. PSO is designed based on the simulation of flocks of birds and schools of fish searching for food in the environment. Food provision is a basic activity among living organisms in nature, obtained through foraging, eating carrion, and hunting. This natural behavior has been the source of inspiration for the design of numerous algorithms, including: the grey wolf optimizer (GWO) [15], the orca predation algorithm (OPA) [16], the African vultures optimization algorithm (AVOA) [17], the marine predator algorithm (MPA) [18], the white shark optimizer (WSO) [19], the reptile search algorithm (RSA) [20], golden jackal optimization (GJO) [21], the whale optimization algorithm (WOA) [22], the honey badger algorithm (HBA) [23], and the tunicate swarm algorithm (TSA) [24].
Evolutionary-based metaheuristic algorithms are designed based on modeling the concepts of biological, genetic sciences, and natural selection. Genetic algorithm (GA) [25] and differential evolution (DE) [26] are among the most famous evolutionary algorithms whose main idea in their design is to simulate the reproduction process, the concept of survival of the fittest, Darwin’s theory of evolution, random selection, mutation, and crossover operators.
Physics-based metaheuristic algorithms are inspired by the phenomena, laws, concepts, and forces of physics. Simulated annealing (SA) [27] is one of the most famous approaches in this class of metaheuristic algorithms; it is inspired by the metal annealing phenomenon, a physical process in which the metal is first melted and then slowly cooled to achieve the ideal crystal. Physical forces have been a source of inspiration in designing algorithms such as the gravitational search algorithm (GSA) [28], inspired by the gravitational force; the spring search algorithm (SSA) [29], inspired by the spring force; and the momentum search algorithm (MSA) [30], inspired by the momentum force. The physical water cycle is employed in the design of the water cycle algorithm (WCA) [31]. Some of the other physics-based algorithms are: the Archimedes optimization algorithm (AOA) [32], Henry gas optimization (HGO) [33], the equilibrium optimizer (EO) [34], the Lichtenberg algorithm (LA) [35], nuclear reaction optimization (NRO) [36], electro-magnetism optimization (EMO) [37], the black hole algorithm (BHA) [38], the multi-verse optimizer (MVO) [39], and thermal exchange optimization (TEO) [40].
Human-based metaheuristic algorithms are formed based on the simulation of human behavior, activities, and interactions. Teaching–learning-based optimization (TLBO) is one of the most widely used human-based approaches, which is designed based on imitating the classroom learning environment and interactions between students and teachers [41]. Human interactions in the field of therapy between doctors and patients are employed in the design of doctor and patient optimization (DPO) [42]. The development of society and the improvement of people’s living standards under the influence of the leader of that society has been the origin of following optimization algorithm (FOA) design [43]. The strategy of military troops during ancient wars has been the main inspiration in the design of war strategy optimization (WSO) [44]. Some of the other human-based algorithms are: the teamwork optimization algorithm (TOA) [45], Ali Baba and the forty thieves (AFT) [46], driving-training-based optimization (DTBO) [6], the gaining–sharing-knowledge-based algorithm (GSK) [47], and the Coronavirus herd immunity optimizer (CHIO) [48].
Game-based metaheuristic algorithms have been introduced based on the modeling of game rules, players’ strategies, referees, and other influential persons in games. Players trying to find a hidden object in the game space has been the main idea in the design of the hide object game optimizer (HOGO) [49]. The strategy of players in changing the direction of movement based on the direction determined by the referee in the orientation game is employed in the design of the orientation search algorithm (OSA) [50]. The simulation of the volleyball league and the behavior of the players and coaches during the match are used in the design of the volleyball premier league (VPL) [51]. Some of the other game-based algorithms are: football-game-based optimization (FGBO) [52], the archery algorithm (AA) [7], the dice game optimizer (DGO) [53], ring-toss-game-based optimization (RTGBO) [54], and the puzzle optimization algorithm (POA) [55].
To the best of the authors’ knowledge, based on the literature review, no metaheuristic model has so far been designed based on simulating the natural behavior of green anacondas. Meanwhile, the males’ movement towards the females in the mating season and the hunting strategy of this animal are intelligent processes with special potential for the design of a metaheuristic algorithm. Therefore, in order to address this research gap, a new swarm-based metaheuristic algorithm is designed based on mimicking the natural behavior of green anacondas. It is described in the next section.

3. Green Anaconda Optimization

In this section, the inspiration source and theory of the proposed green anaconda optimization (GAO) approach are explained and its mathematical modeling is presented.

3.1. Inspiration of GAO

The green anaconda (Eunectes murinus) is a boa species that lives in South America and is also known by other names such as common anaconda, giant anaconda, common water boa, or sucuri. The green anaconda, one of the longest and heaviest snakes in existence, is similar to other boas and is a non-venomous constrictor [56]. Green anacondas have been reported to reach lengths of 5.21 m [57]. In general, the female, with an average length of 4.6 m, is usually much larger than the male, with an average length of 3 m [58]. The weight of green anacondas is reported to be between 30 and 70 kg [59]. Green anacondas are olive green in color and have black blotches along their bodies. Compared to their body size, they have a narrower head that is distinguished by orange–yellow striping. The green anaconda’s eyes are located on top of its head, allowing it to watch above the surface while swimming without exposing its body. Green anacondas have flexible jaw bones that enable them to swallow prey larger than their own head [60]. A picture of a green anaconda is shown in Figure 1.
Green anacondas have a varied diet that they obtain by hunting prey, including fish, birds, reptiles (caimans and turtles), and mammals (agoutis, pacas, tapirs, capybaras, peccaries, deer, etc.). There are also reports of green anacondas hunting prey animals over 40 kg, although this rarely happens [61]. Green anacondas spend most of their time in or around water. Although they are slow on land, they are very agile in water and are able to swim at high speeds. The green anaconda’s hunting strategy is to hide under the surface of the water with its snout placed above the surface. When the prey approaches or stops to drink water, the green anaconda strikes the prey, wraps around it, then contracts to suffocate it, and finally swallows it [62].
When the mating season arrives, males look for females. Normally, males are able to identify the position of females and move towards them by following a trail of pheromones that females produce and leave in their path. During this process, males are able to sense chemicals that indicate the presence of the female species by constantly flicking their tongues [63].
Among the natural behaviors of green anacondas in nature, the process of chasing female species by male species during the mating season and their strategy during hunting are much more significant. These natural behaviors of green anacondas are intelligent processes whose mathematical modeling is employed in designing the proposed GAO approach.

3.2. Algorithm Initialization

The proposed GAO is a population-based metaheuristic algorithm in which green anacondas are the population members. From a mathematical point of view, each green anaconda is a candidate solution to the problem whose position in the search space determines the values of the decision variables. Hence, each green anaconda can be modeled using a vector, and the population of green anacondas consisting of these vectors can be modeled using a matrix according to Equation (1). The initial position of each green anaconda in the search space is randomly generated at the beginning of the algorithm execution using Equation (2).
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix}_{N \times m} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,d} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,d} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}, \tag{1}$$
$$x_{i,d} = lb_d + r_{i,d} \cdot (ub_d - lb_d), \quad i = 1, 2, \ldots, N, \quad d = 1, 2, \ldots, m, \tag{2}$$
where $X$ is the GAO population matrix, $X_i$ is the $i$th green anaconda (candidate solution), $x_{i,d}$ is its $d$th dimension in the search space (decision variable), $N$ is the number of green anacondas, $m$ is the number of decision variables, $r_{i,d}$ are random numbers in the interval $[0, 1]$, and $lb_d$ and $ub_d$ are the lower and upper bounds of the $d$th decision variable, respectively.
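Equations (1) and (2) translate directly into code. The following sketch builds the $N \times m$ population matrix; the population size and bounds in the example are illustrative:

```python
import random

def initialize_population(N, lb, ub, seed=0):
    """Build the N x m population matrix of Equation (1), with each entry
    drawn via Equation (2): x_{i,d} = lb_d + r_{i,d} * (ub_d - lb_d)."""
    rng = random.Random(seed)
    m = len(lb)
    return [[lb[d] + rng.random() * (ub[d] - lb[d]) for d in range(m)]
            for _ in range(N)]

X = initialize_population(N=5, lb=[-10.0, -10.0, -10.0], ub=[10.0, 10.0, 10.0])
```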
Corresponding to the suggested values of each green anaconda for the decision variables, the objective function of the problem can be evaluated. This set of calculated values for the objective function can be represented from a mathematical point of view using a vector according to Equation (3).
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix}_{N \times 1} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}, \tag{3}$$
where $F$ is the vector of calculated objective function values and $F_i$ is the objective function value calculated for the $i$th green anaconda.
From the comparison of the calculated values for the objective function, the member corresponding to the best value calculated for the objective function is identified as the best member (the best candidate solution). Since in each iteration of GAO, the positions of the green anacondas and thus the values of the objective function are updated, the best member should also be updated.
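Equation (3) and the best-member bookkeeping can be sketched as follows; the sphere objective and sample positions are illustrative assumptions:

```python
def evaluate_population(X, objective):
    """Compute the objective vector F of Equation (3) and locate the best
    (minimum-objective) member of the population."""
    F = [objective(x) for x in X]
    best_idx = min(range(len(X)), key=lambda i: F[i])
    return F, best_idx

sphere = lambda x: sum(v * v for v in x)
X = [[2.0, 0.0], [0.5, 0.5], [3.0, 4.0]]
F, best_idx = evaluate_population(X, sphere)
# best member is X[1] = [0.5, 0.5] with F = 0.5
```

Because positions change in every iteration, this evaluation is repeated after each phase and the best member is refreshed accordingly.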

3.3. Mathematical Modelling of GAO

In the GAO design, the position of green anacondas in the search space has been updated based on the simulation of green anaconda behavior in two phases with the aim of providing exploration and exploitation in the search process.

3.3.1. Phase 1: Mating Season (Exploration)

During the mating season, female green anacondas leave pheromones along their path so that males can identify their position. Males use their tongues to sense the chemical traces of pheromones that indicate the presence of a female and move toward her. In the first phase of GAO, the position of the green anacondas is updated based on the males’ strategy of identifying the females’ positions and moving towards them during the mating season. This strategy leads to large displacements in the positions of the green anacondas in the search space, which provides the exploration ability of GAO in global search and accurate scanning of the problem-solving space to avoid becoming stuck in local optimal regions.
In order to mathematically simulate this process, it is assumed in the GAO design that for each green anaconda, members of the GAO population who have a better objective function value than it are considered as the female species of green anacondas. The set of candidate female species for each green anaconda is determined using Equation (4).
$$CFL^i = \left\{ X_{k_i} : F_{k_i} < F_i \text{ and } k_i \neq i \right\}, \quad \text{where } i = 1, 2, \ldots, N \text{ and } k_i \in \{1, 2, \ldots, N\}, \tag{4}$$
where $CFL^i$ is the set of candidate females’ locations for the $i$th green anaconda and $k_i$ is the row number, in the GAO population matrix and the objective function vector, of a member whose objective function value is better than that of the $i$th green anaconda.
The concentration of pheromones has a significant effect on the movement of green anacondas. To simulate the pheromone concentration, the objective function values are used; thus, the better the objective function value of a member, the higher the chance that a green anaconda selects it. The probability function of the pheromone concentration of the female species corresponding to each GAO member is calculated using Equation (5).
$$PC_j^i = \frac{CFF_j^i - CFF_{max}^i}{\sum_{n=1}^{n_i} \left( CFF_n^i - CFF_{max}^i \right)}, \quad \text{where } i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, n_i, \tag{5}$$
where $PC_j^i$ is the probability of the pheromone concentration of the $j$th candidate female for the $i$th green anaconda, $CFF^i$ is the vector of objective function values of the candidate females for the $i$th green anaconda, $CFF_j^i$ is its $j$th value, $CFF_{max}^i$ is its maximum value, and $n_i$ is the number of candidate females for the $i$th green anaconda.
In the GAO design, it is assumed that the green anaconda randomly selects one of the candidate females and moves towards it. To simulate this selection process, first the cumulative probability function of the candidate females is calculated using Equation (6). Then, based on the comparison of the cumulative probability function with a random number in the interval $[0, 1]$, the selected female for the green anaconda is determined according to Equation (7).
$$C_j^i = PC_j^i + C_{j-1}^i, \quad \text{where } i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, n_i, \quad \text{and } C_0^i = 0, \tag{6}$$
$$SF^i = CFL_j^i : C_{j-1}^i < r_i < C_j^i, \tag{7}$$
where $C_j^i$ is the cumulative probability function of the $j$th candidate female for the $i$th green anaconda, $SF^i$ is the selected female for the $i$th green anaconda, and $r_i$ is a random number in the interval $[0, 1]$.
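Equations (4)–(7) together amount to roulette-wheel selection over the better-ranked members. The following sketch is our reading of that procedure (the tie-handling fallback for equal objective values is an assumption, since the denominator of Equation (5) vanishes in that case):

```python
import random

def select_female(i, F, seed=None):
    """Select a mate for member i via Equations (4)-(7): candidates are all
    members with a strictly better objective value; selection probability is
    proportional to (CFF_j - CFF_max) over the candidate set, realized by
    roulette-wheel sampling on the cumulative probabilities."""
    rng = random.Random(seed)
    candidates = [k for k in range(len(F)) if F[k] < F[i] and k != i]  # Eq. (4)
    if not candidates:
        return None  # the best member has no candidate females
    cff = [F[k] for k in candidates]
    cff_max = max(cff)
    denom = sum(c - cff_max for c in cff)
    if denom == 0:  # all candidates tie; fall back to a uniform choice
        return rng.choice(candidates)
    pc = [(c - cff_max) / denom for c in cff]  # Eq. (5)
    r, cum = rng.random(), 0.0
    for k, p in zip(candidates, pc):  # Eqs. (6)-(7): roulette wheel
        cum += p
        if r < cum:
            return k
    return candidates[-1]

F = [3.0, 1.0, 2.0, 5.0]
mate = select_female(3, F, seed=1)  # member 3 may pick any of members 0-2
```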
After determining the selected female species, based on the simulation of green anaconda movement towards it, a random position in the search space for green anaconda is calculated using Equation (8). If the value of the objective function is improved in this new position, according to Equation (9), the position of corresponding green anaconda is updated to this new position, otherwise it remains in the previous position.
$$x_{i,d}^{P1} = x_{i,d} + r_{i,d} \cdot \left( SF_d^i - I_{i,d} \cdot x_{i,d} \right), \quad i = 1, 2, \ldots, N, \quad d = 1, 2, \ldots, m, \tag{8}$$
$$X_i = \begin{cases} X_i^{P1}, & F_i^{P1} < F_i, \\ X_i, & \text{else}, \end{cases} \tag{9}$$
where $X_i^{P1}$ is the new suggested position of the $i$th green anaconda based on the first phase of GAO, $x_{i,d}^{P1}$ is its $d$th dimension, $F_i^{P1}$ is its objective function value, $r_{i,d}$ are random numbers in the interval $[0, 1]$, $SF_d^i$ is the $d$th dimension of the selected female for the $i$th green anaconda, $I_{i,d}$ are random numbers from the set $\{1, 2\}$, $N$ is the number of green anacondas, and $m$ is the number of decision variables.
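The exploration move of Equations (8) and (9) can be sketched as follows; the sphere objective and the example positions are illustrative assumptions:

```python
import random

def phase1_update(x_i, f_i, sf, objective, seed=None):
    """Exploration move of Equations (8)-(9): step toward the selected female
    SF, with I_{i,d} drawn from {1, 2}; the move is kept only if it improves
    the objective function value."""
    rng = random.Random(seed)
    new_x = [x_i[d] + rng.random() * (sf[d] - rng.choice((1, 2)) * x_i[d])
             for d in range(len(x_i))]  # Eq. (8)
    new_f = objective(new_x)
    if new_f < f_i:  # greedy acceptance, Eq. (9)
        return new_x, new_f
    return x_i, f_i

sphere = lambda x: sum(v * v for v in x)
x, f = phase1_update([4.0, -3.0], 25.0, [0.5, 0.5], sphere, seed=2)
```

The greedy acceptance in Equation (9) guarantees the update never worsens a member, so the returned objective value is at most the input value.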

3.3.2. Phase 2: Hunting Strategy (Exploitation)

Green anacondas are powerful predators whose hunting strategy is to ambush underwater and wait for prey. When the prey stops drinking water or passes near the green anaconda, the anaconda attacks and surrounds the prey, then contracts to suffocate the prey, and finally swallows it. In the second phase of GAO, the position of the population members is updated based on the green anaconda’s strategy when hunting prey. This strategy leads to small displacements in the position of the green anacondas in the search space, which indicates the exploitation ability of GAO in local search to obtain possible better solutions near the discovered solutions.
In order to simulate the hunting strategy and change the position of the population members towards the prey that has approached them, first a random position is generated near each green anaconda using Equation (10). Then, according to Equation (11), if the value of the objective function is improved in this new position, it is acceptable to update the green anaconda location.
$$x_{i,d}^{P2} = x_{i,d} + \frac{(1 - 2 r_{i,d}) \left( ub_d - lb_d \right)}{t}, \quad i = 1, 2, \ldots, N, \quad d = 1, 2, \ldots, m, \quad t = 1, 2, \ldots, T, \tag{10}$$
$$X_i = \begin{cases} X_i^{P2}, & F_i^{P2} < F_i, \\ X_i, & \text{else}, \end{cases} \tag{11}$$
where $X_i^{P2}$ is the new suggested position of the $i$th green anaconda based on the second phase of GAO, $x_{i,d}^{P2}$ is its $d$th dimension, $F_i^{P2}$ is its objective function value, $t$ is the iteration counter of the algorithm, and $T$ is the maximum number of algorithm iterations.
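The exploitation move of Equations (10) and (11) can be sketched as follows; the bounds, iteration counter, and sphere objective in the example are illustrative assumptions:

```python
import random

def phase2_update(x_i, f_i, lb, ub, t, objective, seed=None):
    """Exploitation move of Equations (10)-(11): a local probe whose radius
    (ub_d - lb_d)/t shrinks as the iteration counter t grows, accepted only
    when it improves the objective function value."""
    rng = random.Random(seed)
    new_x = [x_i[d] + (1 - 2 * rng.random()) * (ub[d] - lb[d]) / t
             for d in range(len(x_i))]  # Eq. (10)
    new_f = objective(new_x)
    if new_f < f_i:  # greedy acceptance, Eq. (11)
        return new_x, new_f
    return x_i, f_i

sphere = lambda x: sum(v * v for v in x)
x, f = phase2_update([0.4, -0.2], 0.2, [-10.0, -10.0], [10.0, 10.0], 50, sphere, seed=3)
```

Note how the factor $1/t$ makes the probe radius decay over iterations, narrowing the search around solutions already found.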

3.4. Repetition Process, Pseudocode, and Flowchart of GAO

Various steps of GAO are presented in the form of a flowchart in Figure 2 and its pseudocode is presented in Algorithm 1. The first iteration of GAO is completed after updating the position of all green anacondas based on the first and second phases. After this, the algorithm enters the next iteration with the new values of the objective function and the new positions of the green anacondas, and the updating process continues according to Equations (4)–(11) until the last iteration of the algorithm. After the full implementation of GAO, the best candidate solution recorded during the execution of the algorithm is presented as a solution for the given problem.
Algorithm 1. Pseudocode of GAO
Start GAO.
1. Input problem information: variables, objective function, and constraints.
2. Set the GAO population size (N) and maximum number of iterations (T).
3. Generate the initial population matrix at random using Equation (2): x_{i,d} ← lb_d + r_{i,d} · (ub_d − lb_d).
4. Evaluate the objective function.
5. For t = 1 to T
6.   For i = 1 to N
7.     Phase 1: mating season (exploration)
8.     Identify the candidate females using Equation (4): CFL^i ← {X_{k_i} : F_{k_i} < F_i and k_i ≠ i}.
9.     Calculate the pheromone concentration probabilities of the candidate females using Equation (5): PC_j^i ← (CFF_j^i − CFF_max^i) / Σ_{n=1}^{n_i} (CFF_n^i − CFF_max^i).
10.    Calculate the cumulative probability function of the candidate females using Equation (6): C_j^i ← PC_j^i + C_{j−1}^i.
11.    Determine the selected female using Equation (7): SF^i ← CFL_j^i : C_{j−1}^i < r_i < C_j^i.
12.    Calculate the new position of the ith GAO member using Equation (8): x_{i,d}^{P1} ← x_{i,d} + r_{i,d} · (SF_d^i − I_{i,d} · x_{i,d}).
13.    Update the ith GAO member using Equation (9): X_i ← X_i^{P1} if F_i^{P1} < F_i, else X_i.
14.    Phase 2: hunting strategy (exploitation)
15.    Calculate the new position of the ith GAO member using Equation (10): x_{i,d}^{P2} ← x_{i,d} + (1 − 2 r_{i,d})(ub_d − lb_d)/t.
16.    Update the ith GAO member using Equation (11): X_i ← X_i^{P2} if F_i^{P2} < F_i, else X_i.
17.   End For i
18.   Save the best candidate solution so far.
19. End For t
20. Output the best quasi-optimal solution obtained with the GAO.
End GAO.
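Assembling the steps of Algorithm 1, a compact end-to-end sketch can be written as follows. This is one possible Python reading of the pseudocode, not the authors' reference implementation; the sphere objective, parameter values, and tie-handling fallback in the roulette wheel are illustrative assumptions:

```python
import random

def gao(objective, lb, ub, N=20, T=200, seed=0):
    """Minimal sketch of green anaconda optimization (Algorithm 1)."""
    rng = random.Random(seed)
    m = len(lb)
    X = [[lb[d] + rng.random() * (ub[d] - lb[d]) for d in range(m)]
         for _ in range(N)]                       # Eq. (2)
    F = [objective(x) for x in X]                 # Eq. (3)
    best_x, best_f = None, float("inf")
    for t in range(1, T + 1):
        for i in range(N):
            # --- Phase 1: mating season (exploration), Eqs. (4)-(9) ---
            cands = [k for k in range(N) if F[k] < F[i] and k != i]
            if cands:
                cff = [F[k] for k in cands]
                cff_max = max(cff)
                denom = sum(c - cff_max for c in cff)
                if denom == 0:
                    sf = X[rng.choice(cands)]     # ties: uniform fallback
                else:
                    r, cum, sf = rng.random(), 0.0, X[cands[-1]]
                    for k, c in zip(cands, cff):  # roulette wheel
                        cum += (c - cff_max) / denom
                        if r < cum:
                            sf = X[k]
                            break
                new_x = [X[i][d] + rng.random() * (sf[d] - rng.choice((1, 2)) * X[i][d])
                         for d in range(m)]       # Eq. (8)
                new_f = objective(new_x)
                if new_f < F[i]:                  # Eq. (9)
                    X[i], F[i] = new_x, new_f
            # --- Phase 2: hunting strategy (exploitation), Eqs. (10)-(11) ---
            new_x = [X[i][d] + (1 - 2 * rng.random()) * (ub[d] - lb[d]) / t
                     for d in range(m)]           # Eq. (10)
            new_f = objective(new_x)
            if new_f < F[i]:                      # Eq. (11)
                X[i], F[i] = new_x, new_f
        i_best = min(range(N), key=lambda j: F[j])
        if F[i_best] < best_f:                    # save the best solution so far
            best_x, best_f = list(X[i_best]), F[i_best]
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = gao(sphere, lb=[-100.0] * 5, ub=[100.0] * 5, N=20, T=200)
```

Since both phases use greedy acceptance, the recorded best objective value is monotonically non-increasing over iterations.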

3.5. Computational Complexity of GAO

In this section, the computational complexity of the proposed GAO approach is evaluated. GAO initialization for a problem with m number of decision variables is O ( N m ) where N is the number of green anacondas. In each iteration of GAO, the position of green anacondas is updated in two different phases, and this process has a computational complexity equal to O ( 2 N m T ) , where T is the maximum iterations of the algorithm. Therefore, the total computational complexity of GAO is equal to O ( N m ( 1 + 2 T ) ) .

4. Simulation Studies and Results

GAO’s ability to solve optimization problems is evaluated in this section on a set of thirty-nine benchmark functions from the CEC 2017 test suite and the CEC 2019 test suite. The CEC 2017 test suite includes 30 objective functions, among which C17-F1 to C17-F3 are unimodal, C17-F4 to C17-F10 are multimodal, C17-F11 to C17-F20 are hybrid, and C17-F21 to C17-F30 are composition functions. From this set, the function C17-F2 has been left out of the simulations due to its unstable behavior. The full description of the CEC 2017 test suite is provided in [64]. The CEC 2019 test suite includes ten complex objective functions, a full description of which is provided in [65]. The performance of GAO in optimization is compared with twelve well-known metaheuristic algorithms: GA [25], PSO [14], GSA [28], TLBO [41], MVO [39], GWO [15], WOA [22], MPA [18], TSA [24], RSA [20], AVOA [17], and WSO [19]. The values used for the control parameters of the competitor algorithms are shown in Table 1.
GAO and competing algorithms have been implemented on the mentioned thirty-nine benchmark functions in order to obtain suitable solutions for these functions. The simulation results are presented using six indicators: mean, best, worst, standard deviation (std), median, and rank.

4.1. Evaluation of the CEC 2017 Test Suite

In this subsection, GAO’s ability to solve optimization problems is tested on the CEC 2017 test suite. In order to analyze the scalability, GAO and competitor algorithms are employed to optimize this set for different dimensions equal to 10, 30, 50, and 100. The simulation results are reported in Table 2, Table 3, Table 4 and Table 5. Convergence curves of the performance of GAO and competitor algorithms on the CEC 2017 test suite for different dimensions are presented in Figure 3, Figure 4, Figure 5 and Figure 6. Simulation results for dimension equal to 10 show that GAO is the first best optimizer for C17-F1, C17-F3, C17-F4, C17-F7, C17-F9, C17-F10, C17-F12 to C17-F14, C17-F16, C17-F18, C17-F19, C17-F21, C17-F22, C17-F25, C17-F26, and C17-F29 compared to competitor algorithms. For dimension equal to 30, GAO is the first best optimizer for C17-F1, C17-F3 to C17-F5, C17-F7, C17-F12 to C17-F14, C17-F16 to C17-F18, C17-F21 to C17-F27, and C17-F29 compared to competitor algorithms. For dimension equal to 50, GAO is the first best optimizer for C17-F1, C17-F3 to C17-F14, C17-F16 to C17-F18, C17-F20, C17-F22 to C17-F26, C17-F28, and C17-F30 compared to competitor algorithms. For dimension equal to 100, GAO is the first best optimizer for C17-F1, C17-F3 to C17-F13, C17-F15 to C17-F23, C17-F26, C17-F27, C17-F29, and C17-F30 compared to competitor algorithms.
The unimodal functions C17-F1 and C17-F3 have no local optima; for that reason, they are suitable criteria for measuring the exploitation ability of metaheuristic algorithms in local search and convergence to the global optimum. The optimization results for the C17-F1 and C17-F3 functions show that the proposed GAO approach has a high exploitation ability. The multimodal functions C17-F4 to C17-F10 have several local optima in addition to the main optimum; for this reason, they are suitable criteria for measuring the exploration ability of metaheuristic algorithms in global search and in discovering the main optimal region. The simulation results show that GAO has high exploration quality and decent performance in solving multimodal functions. The hybrid functions C17-F11 to C17-F20 and the composition functions C17-F21 to C17-F30 are suitable options for measuring the ability of metaheuristic algorithms to balance exploration and exploitation during the search process. The optimization results show that GAO achieves acceptable results on these benchmark functions; thus, it can be said that GAO balances exploration and exploitation in the optimization process. It can be inferred from the simulation results that, by balancing exploration and exploitation, the proposed GAO approach has performed better than the competing algorithms in optimizing the CEC 2017 test suite for dimensions 10, 30, 50, and 100, and overall it has been ranked the best optimizer.

4.2. Evaluation of the CEC 2019 Test Suite

In this subsection, GAO’s ability to solve optimization problems is tested on the CEC 2019 test suite. This test suite consists of ten benchmark functions; the dimensions are 9 for C19-F1, 16 for C19-F2, 18 for C19-F3, and 10 for C19-F4 to C19-F10. The full description and details of the CEC 2019 test suite are provided in [65]. The results of employing the proposed GAO approach and the competitor algorithms on this test suite are reported in Table 6. The simulation results show that GAO is the best optimizer among the compared algorithms for C19-F1 to C19-F4 and C19-F6 to C19-F9. Analysis of the simulation results indicates that, by balancing exploration and exploitation, the proposed GAO approach performs better than the competitor algorithms and ranks first on the CEC 2019 test suite. Convergence curves of GAO and the competitor algorithms during optimization of the CEC 2019 test suite are presented in Figure 7.

4.3. Statistical Analysis

Reporting the optimization results of the objective functions through the mean, best, worst, standard deviation, median, and rank indicators provides valuable information about the performance of the metaheuristic algorithms and the proposed GAO approach. However, statistical analysis is needed to determine whether the superiority of the proposed GAO approach over the competitor algorithms is significant. For this purpose, the Wilcoxon rank-sum test [66], a non-parametric test for detecting a significant difference between two data samples, is applied. The results of the Wilcoxon rank-sum analysis of the simulation results on the CEC 2017 and CEC 2019 test suites are reported in Table 7. A p-value of less than 0.05 indicates that the proposed GAO approach is statistically significantly superior to the corresponding competitor algorithm.
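As an illustration of the procedure, the following self-contained sketch computes the two-sided rank-sum statistic with the usual normal approximation; the run values are hypothetical and the function name is ours, but the 0.05 threshold matches the criterion used above.

```python
import math

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Ranks the pooled samples (tied values receive their average rank),
    then compares the rank sum of the first sample with its expected
    value under the null hypothesis. Returns (z, p).
    """
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    w = sum(ranks[v] for v in x)              # rank sum of first sample
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (w - mean) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
    return z, p

# Hypothetical objective values from 10 independent runs of two algorithms:
gao_runs = [1.2, 1.5, 1.1, 1.3, 1.4, 1.0, 1.6, 1.2, 1.3, 1.5]
rival_runs = [3.4, 2.9, 3.8, 3.1, 3.5, 3.0, 3.9, 3.2, 3.6, 3.3]
z, p = rank_sum_test(gao_runs, rival_runs)
print(f"z = {z:.3f}, p = {p:.2e}, significant = {p < 0.05}")
```

Because the two hypothetical samples do not overlap at all, the rank sum is as extreme as possible and the test rejects the null hypothesis at the 0.05 level.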

5. GAO for Real-World Applications

In this section, the ability of the proposed GAO approach to handle optimization problems in real-world applications is evaluated. For this purpose, the CEC 2011 test suite, a collection of 22 real-world optimization problems, is employed. These problems are: parameter estimation for frequency-modulated (FM) sound waves, the Lennard–Jones potential problem, the bifunctional catalyst blend optimal control problem, optimal control of a non-linear stirred tank reactor, the Tersoff potential for model Si (B), the Tersoff potential for model Si (C), spread-spectrum radar poly-phase code design, the transmission network expansion planning (TNEP) problem, the large-scale transmission pricing problem, the circular antenna array design problem, the ELD problems (consisting of DED instance 1, DED instance 2, ELD instance 1, ELD instance 2, ELD instance 3, ELD instance 4, ELD instance 5, hydrothermal scheduling instance 1, hydrothermal scheduling instance 2, and hydrothermal scheduling instance 3), the Messenger spacecraft trajectory optimization problem, and the Cassini 2 spacecraft trajectory optimization problem. The C11-F3 function has been removed from this set in the simulation studies. The full description of the CEC 2011 test suite is provided in [67]. The results of implementing the proposed GAO approach and the competitor algorithms on the CEC 2011 test suite are reported in Table 8. The simulation results show that GAO is the best optimizer among the compared algorithms for C11-F1, C11-F2, C11-F4 to C11-F7, C11-F9 to C11-F12, C11-F14 to C11-F16, and C11-F20 to C11-F22. The analysis of the simulation results makes clear that the proposed GAO approach performs effectively on real-world applications and, based on the Wilcoxon rank-sum statistical analysis, ranks first among the compared algorithms.
The convergence curves of the performance of GAO and competitor algorithms during optimization of the CEC 2011 test suite are presented in Figure 8.
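As a concrete example of one of these objectives, the Lennard–Jones potential problem asks for atomic coordinates that minimize the total pairwise energy E(r) = 4ε[(σ/r)^12 − (σ/r)^6]. The sketch below, in reduced units with ε = σ = 1 and our own function names, shows the kind of objective function such an optimizer would minimize; it is illustrative, not the CEC 2011 reference implementation.

```python
import math
from itertools import combinations

def lj_pair(r, eps=1.0, sigma=1.0):
    """Lennard-Jones potential between two atoms at distance r."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def cluster_energy(coords):
    """Total potential energy of a cluster; coords is a list of (x, y, z).

    This is the objective a metaheuristic would minimize over the
    3N coordinates of an N-atom cluster.
    """
    return sum(lj_pair(math.dist(a, b)) for a, b in combinations(coords, 2))

# The pair potential is minimized at r = 2^(1/6)*sigma with E = -eps:
r_min = 2.0 ** (1.0 / 6.0)
print(lj_pair(r_min))  # approximately -1.0 in reduced units
```

The objective is highly multimodal for larger clusters (the number of local minima grows rapidly with N), which is precisely why it is used as a real-world stress test of exploration ability.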

6. Conclusions and Future Works

A new swarm-based optimization algorithm, called green anaconda optimization (GAO), for solving optimization problems is introduced in this paper. The natural behavior of green anacondas, namely the strategy by which males identify the position of females during the mating season and the species’ hunting strategy, is the fundamental inspiration for GAO. The mathematical model of GAO is presented in two phases, exploration and exploitation, based on modeling this natural behavior. The ability of the proposed GAO approach to handle optimization problems is tested on thirty-nine objective functions from the CEC 2017 and CEC 2019 test suites. The performance of GAO is also compared with that of twelve well-known metaheuristic algorithms. The simulation results show that the proposed GAO approach outperforms the competitor algorithms by creating a balance between exploration and exploitation. In addition, the implementation of GAO on twenty-one problems from the CEC 2011 test suite showed the high capability of the proposed approach in handling real-world applications.
Among several suggestions for future research, designing binary and multi-objective versions of the proposed GAO approach is the most prominent. Employing GAO to solve optimization problems in various sciences and in real-world applications, such as image clustering, image segmentation, medical applications, and engineering problems, is another suggestion for future research.

Author Contributions

Conceptualization, O.P.M. and P.T.; methodology, M.D.; software, M.D. and P.T.; validation, O.P.M., P.T. and M.D.; formal analysis, M.D.; investigation, P.T.; resources, M.D.; data curation, O.P.M.; writing—original draft preparation, M.D. and P.T.; writing—review and editing, O.P.M.; visualization, P.T.; supervision, M.D.; project administration, P.T.; funding acquisition, O.P.M. All authors have read and agreed to the published version of the manuscript.


Funding

Financial support from the Natural Sciences and Engineering Research Council of Canada in the form of a research grant is acknowledged by one of the authors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.


Acknowledgments

The authors thank the University of Hradec Králové and the University of Calgary for their support.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075.
  2. Jahani, E.; Chizari, M. Tackling global optimization problems with a novel algorithm–Mouth Brooding Fish algorithm. Appl. Soft Comput. 2018, 62, 987–1002.
  3. Sergeyev, Y.D.; Kvasov, D.; Mukhametzhanov, M. On the efficiency of nature-inspired metaheuristics in expensive global optimization with limited budget. Sci. Rep. 2018, 8, 453.
  4. Liberti, L.; Kucherenko, S. Comparison of deterministic and stochastic approaches to global optimization. Int. Trans. Oper. Res. 2005, 12, 263–285.
  5. Koc, I.; Atay, Y.; Babaoglu, I. Discrete tree seed algorithm for urban land readjustment. Eng. Appl. Artif. Intell. 2022, 112, 104783.
  6. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924.
  7. Zeidabadi, F.-A.; Dehghani, M.; Trojovský, P.; Hubálovský, Š.; Leiva, V.; Dhiman, G. Archery Algorithm: A Novel Stochastic Optimization Algorithm for Solving Optimization Problems. Comput. Mater. Contin. 2022, 72, 399–416.
  8. de Armas, J.; Lalla-Ruiz, E.; Tilahun, S.L.; Voß, S. Similarity in metaheuristics: A gentle step towards a comparison methodology. Nat. Comput. 2022, 21, 265–287.
  9. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473.
  10. Dehghani, M.; Montazeri, Z.; Dehghani, A.; Malik, O.P.; Morales-Menendez, R.; Dhiman, G.; Nouri, N.; Ehsanifar, A.; Guerrero, J.M.; Ramirez-Mendoza, R.A. Binary spring search algorithm for solving various optimization problems. Appl. Sci. 2021, 11, 1286.
  11. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
  12. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1996, 26, 29–41.
  13. Karaboga, D.; Basturk, B. Artificial Bee Colony (ABC) Optimization Algorithm for Solving Constrained Optimization Problems. In International Fuzzy Systems Association World Congress; Springer: Berlin/Heidelberg, Germany, 2007; pp. 789–798.
  14. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Perth, WA, Australia, 1995; Volume 4, pp. 1942–1948.
  15. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  16. Jiang, Y.; Wu, Q.; Zhu, S.; Zhang, L. Orca predation algorithm: A novel bio-inspired algorithm for global optimization problems. Expert Syst. Appl. 2022, 188, 116026.
  17. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408.
  18. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
  19. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 2022, 234, 108457.
  20. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  21. Chopra, N.; Ansari, M.M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert Syst. Appl. 2022, 198, 116924.
  22. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  23. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110.
  24. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
  25. Goldberg, D.E.; Holland, J.H. Genetic Algorithms and Machine Learning. Mach. Learn. 1988, 3, 95–99.
  26. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  27. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
  28. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
  29. Dehghani, M.; Montazeri, Z.; Dhiman, G.; Malik, O.; Morales-Menendez, R.; Ramirez-Mendoza, R.A.; Dehghani, A.; Guerrero, J.M.; Parra-Arroyo, L. A spring search algorithm applied to engineering optimization problems. Appl. Sci. 2020, 10, 6173.
  30. Dehghani, M.; Samet, H. Momentum search algorithm: A new meta-heuristic optimization algorithm inspired by momentum conservation law. SN Appl. Sci. 2020, 2, 1720.
  31. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166.
  32. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
  33. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667.
  34. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
  35. Pereira, J.L.J.; Francisco, M.B.; Diniz, C.A.; Oliver, G.A.; Cunha Jr., S.S.; Gomes, G.F. Lichtenberg algorithm: A novel hybrid physics-based meta-heuristic for global optimization. Expert Syst. Appl. 2021, 170, 114522.
  36. Wei, Z.; Huang, C.; Wang, X.; Han, T.; Li, Y. Nuclear reaction optimization: A novel and powerful physics-based algorithm for global optimization. IEEE Access 2019, 7, 66084–66109.
  37. Cuevas, E.; Oliva, D.; Zaldivar, D.; Pérez-Cisneros, M.; Sossa, H. Circle detection using electro-magnetism optimization. Inf. Sci. 2012, 182, 40–55.
  38. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184.
  39. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513.
  40. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84.
  41. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
  42. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.P.; Ramirez-Mendoza, R.A.; Matas, J.; Vasquez, J.C.; Parra-Arroyo, L. A new “Doctor and Patient” optimization algorithm: An application to energy commitment problem. Appl. Sci. 2020, 10, 5791.
  43. Dehghani, M.; Mardaneh, M.; Malik, O. FOA: ‘Following’ Optimization Algorithm for solving power engineering optimization problems. J. Oper. Autom. Power Eng. 2020, 8, 57–64.
  44. Ayyarao, T.L.; RamaKrishna, N.; Elavarasam, R.M.; Polumahanthi, N.; Rambabu, M.; Saini, G.; Khan, B.; Alatas, B. War Strategy Optimization Algorithm: A New Effective Metaheuristic Algorithm for Global Optimization. IEEE Access 2022, 10, 25073–25105.
  45. Dehghani, M.; Trojovský, P. Teamwork Optimization Algorithm: A New Optimization Approach for Function Minimization/Maximization. Sensors 2021, 21, 4567.
  46. Braik, M.; Ryalat, M.H.; Al-Zoubi, H. A novel meta-heuristic algorithm for solving numerical optimization problems: Ali Baba and the forty thieves. Neural Comput. Appl. 2022, 34, 409–455.
  47. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529.
  48. Al-Betar, M.A.; Alyasseri, Z.A.A.; Awadallah, M.A.; Abu Doush, I. Coronavirus herd immunity optimizer (CHIO). Neural Comput. Appl. 2021, 33, 5011–5042.
  49. Dehghani, M.; Montazeri, Z.; Saremi, S.; Dehghani, A.; Malik, O.P.; Al-Haddad, K.; Guerrero, J.M. HOGO: Hide objects game optimization. Int. J. Intell. Eng. Syst. 2020, 13, 4.
  50. Dehghani, M.; Montazeri, Z.; Malik, O.P.; Ehsanifar, A.; Dehghani, A. OSA: Orientation search algorithm. Int. J. Ind. Electron. Control. Optim. 2019, 2, 99–112.
  51. Moghdani, R.; Salimifard, K. Volleyball premier league algorithm. Appl. Soft Comput. 2018, 64, 161–185.
  52. Dehghani, M.; Mardaneh, M.; Guerrero, J.M.; Malik, O.; Kumar, V. Football game based optimization: An application to solve energy commitment problem. Int. J. Intell. Eng. Syst. 2020, 13, 514–523.
  53. Dehghani, M.; Montazeri, Z.; Malik, O.P. DGO: Dice game optimizer. Gazi Univ. J. Sci. 2019, 32, 871–882.
  54. Doumari, S.A.; Givi, H.; Dehghani, M.; Malik, O.P. Ring Toss Game-Based Optimization Algorithm for Solving Various Optimization Problems. Int. J. Intell. Eng. Syst. 2021, 14, 545–554.
  55. Zeidabadi, F.A.; Dehghani, M. POA: Puzzle Optimization Algorithm. Int. J. Intell. Eng. Syst. 2022, 15, 273–281.
  56. Hsiou, A.S.; Winck, G.R.; Schubert, B.W.; Avilla, L. On the presence of Eunectes murinus (Squamata, Serpentes) from the late Pleistocene of northern Brazil. Rev. Bras. De Paleontol. 2013, 16, 77–82.
  57. Rivas, J.A. The Life History of the Green Anaconda (Eunectes murinus), with Emphasis on Its Reproductive Biology. Ph.D. Thesis, The University of Tennessee, Knoxville, TN, USA, 1999.
  58. Rivas, J.A.; Burghardt, G.M. Understanding sexual size dimorphism in snakes: Wearing the snake’s shoes. Anim. Behav. 2001, 62, F1–F6.
  59. Pope, C.H. The Giant Snakes: The Natural History of the Boa Constrictor, the Anaconda, and the Largest Pythons, Including Comparative Facts about Other Snakes and Basic Information on Reptiles in General; Knopf: New York, NY, USA, 1961.
  60. Harvey, D. Smithsonian Super Nature Encyclopedia; Dorling Kindersley Publishing: London, UK, 2012.
  61. Thomas, O.; Allain, S. Review of prey taken by anacondas (Squamata, Boidae: Eunectes). Reptiles Amphib. 2021, 28, 329–334.
  62. Strimple, P. The Green Anaconda Eunectes murinus (Linnaeus). Litteratura Serpentium 1993, 13, 46–50.
  63. Burton, M.; Burton, R. International Wildlife Encyclopedia; Marshall Cavendish: New York, NY, USA, 2002; Volume 1.
  64. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016.
  65. Price, K.V.; Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the 100-Digit Challenge Special Session and Competition on Single Objective Numerical Optimization; Nanyang Technological University: Singapore, 2018.
  66. Wilcoxon, F. Individual comparisons by ranking methods. In Breakthroughs in Statistics; Springer: Berlin/Heidelberg, Germany, 1992; pp. 196–202.
  67. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Jadavpur University: Kolkata, India; Nanyang Technological University: Singapore, 2010; pp. 341–359.
Figure 1. Green anaconda ( (accessed on 20 February 2023)) taken from: free media Wikimedia Commons.
Figure 2. Flowchart of GAO.
Figure 3. Convergence curves of GAO and competitor algorithms performance on the CEC 2017 test suite (dimension m = 10).
Figure 4. Convergence curves of GAO and competitor algorithms performance on the CEC 2017 test suite (dimension m = 30).
Figure 5. Convergence curves of GAO and competitor algorithms performance on the CEC 2017 test suite (dimension m = 50).
Figure 6. Convergence curves of GAO and competitor algorithms performance on the CEC 2017 test suite (dimension m = 100).
Figure 7. Convergence curves of GAO and competitor algorithms performance on the CEC 2019 test suite.
Figure 8. Convergence curves of GAO and competitor algorithms performance on the CEC 2011 test suite.
Table 1. Control parameter values.

Type: real coded
Selection: roulette wheel (proportionate)
Crossover: whole arithmetic (probability = 0.8, α ∈ [−0.5, 1.5])
Mutation: Gaussian (probability = 0.05)
Topology: fully connected
Cognitive and social constants: (C1, C2) = (2, 2)
Inertia weight: linear reduction from 0.9 to 0.1
Velocity limit: 10% of dimension range
Alpha, G0, Rnorm, Rpower: 20, 100, 2, 1
TF (teaching factor): TF = round(1 + rand), where rand is a random number in [0, 1]
Convergence parameter (a): linear reduction from 2 to 0
Wormhole existence probability (WEP): Min(WEP) = 0.2 and Max(WEP) = 1
Exploitation accuracy over the iterations (p): p = 6
Convergence parameter (a): linear reduction from 2 to 0; r is a random vector in [0, 1]; l is a random number in [−1, 1]
Pmin and Pmax: 1, 4
c1, c2, c3: random numbers in [0, 1]
Constant number: P = 0.5
Random vector: R is a vector of uniform random numbers in [0, 1]
Fish aggregating devices (FADs): FADs = 0.2
Binary vector: U = 0 or 1
Sensitive parameter: β = 0.01
Sensitive parameter: α = 0.1
Evolutionary sense (ES): randomly decreasing values between 2 and −2
L1, L2: 0.8, 0.2
P1, P2, P3: 0.6, 0.4, 0.6
Fmin and Fmax: 0.07, 0.75
τ, a0, a1, a2: 4.125, 6.25, 100, 0.0005
Table 2. Optimization results of the CEC 2017 test suite (dimension m = 10). [Per-function mean, best, worst, standard deviation, median, and rank values for GAO and the twelve competitor algorithms, together with the sum-rank, mean-rank, and total-rank rows; the numeric columns were fused during extraction and are not recoverable here.]
Table 3. Optimization results of the CEC 2017 test suite (dimension m = 30). [Per-function mean, best, worst, standard deviation, median, and rank values for GAO and the twelve competitor algorithms, together with the sum-rank, mean-rank, and total-rank rows; the numeric columns were fused during extraction and are not recoverable here.]
Table 4. Optimization results of the CEC 2017 test suite (dimension m = 50). [Per-function mean, best, worst, standard deviation, median, and rank values for GAO and the twelve competitor algorithms; the numeric columns were fused during extraction and are not recoverable here.]