Article

Orthogonal Learning Rosenbrock’s Direct Rotation with the Gazelle Optimization Algorithm for Global Optimization

1 Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
2 Division of Engineering, New York University Abu Dhabi, Saadiyat Island, Abu Dhabi 129188, United Arab Emirates
3 Department of Civil and Urban Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA
4 Sorbonne Center of Artificial Intelligence, Sorbonne University-Abu Dhabi, Abu Dhabi 38044, United Arab Emirates
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(23), 4509; https://doi.org/10.3390/math10234509
Submission received: 16 October 2022 / Revised: 11 November 2022 / Accepted: 14 November 2022 / Published: 29 November 2022

Abstract:
An efficient optimization method is needed to address complicated problems and find optimal solutions. The gazelle optimization algorithm (GOA) is a global stochastic optimizer that is straightforward to comprehend and has powerful search capabilities. Nevertheless, the GOA is unsuitable for addressing multimodal, hybrid functions, and data mining problems. Therefore, the current paper proposes the orthogonal learning (OL) method with Rosenbrock’s direct rotation strategy to improve the GOA and sustain the solution variety; the resulting improved algorithm is denoted the IGOA. We performed comprehensive experiments based on various functions, including 23 classical and IEEE CEC2017 problems. Moreover, eight data clustering problems taken from the UCI repository were tested to further verify the proposed method’s performance. The IGOA was compared with several other proposed meta-heuristic algorithms. Moreover, the Wilcoxon signed-rank test further assessed the experimental results to conduct more systematic data analyses. The IGOA surpassed other comparative optimizers in terms of convergence speed and precision. The empirical results show that the proposed IGOA achieved better outcomes than the basic GOA and other state-of-the-art methods and performed better in terms of solution quality.

1. Introduction

In recent years, numerous disciplines, including data science, systems engineering, mathematical optimization, and their applications, have focused on real-world optimization challenges [1,2]. Traditional and meta-heuristic methods are the two main approaches for handling these issues. Newton and gradient descent are two simple examples of the first category of techniques [3]. The time-consuming nature of these procedures is, however, a disadvantage. Additionally, there is only one solution for each run. The effectiveness of these methods depends on the issues being addressed, as well as the restrictions, objective functions, search methods, and variables [4,5].
Numerous optimization techniques have been created, and optimization issues are frequently encountered in engineering and scientific study disciplines [6]. The search space generation and local optimal stagnation are two fundamental problems with current optimization paradigms [7]. Consequently, stochastic optimization techniques have received much attention in recent years. Because stochastic optimization methods merely modify the inputs, monitor the outputs of a given system for objective outcomes, and consider the optimization problem a “black box”, they can skip deriving the computational formula [8]. Additionally, stochastic optimization techniques can randomly complete optimization problems, giving them an inherent advantage over traditional optimization algorithms in their capacity for optimal local avoidance [9]. Stochastic optimization algorithms can be categorized into population-based and individual-based techniques depending on how many solutions are produced at each iteration of the entire process. Population-based methods have many random solutions and promote them throughout the optimization procedure [10,11]. In contrast, individual-based methods only produce one candidate solution at a time.
Nature-inspired stochastic techniques have received much interest recently [12,13,14]. Such optimization frequently imitates the social or individual behavior of a population of animals or other natural occurrences. Such algorithms begin the optimization process by generating several random solutions, which they then enhance as potential answers to a specific issue [15]. Stochastic optimization algorithms are used in various sectors because they outperform mathematical optimization methods. Despite the enormous number of suggested ways in the optimization area, it is crucial to understand why we require different optimization techniques. The no free lunch (NFL) theorem can be used to justify this problem [16]. It logically supports the idea that one method cannot provide an efficient solution for resolving any optimization issue. In other words, it cannot be guaranteed that an algorithm’s success in addressing a specific set of issues would enable it to solve all optimization problems of all types and natures. This theorem allows researchers to suggest brand-new optimization strategies or enhance the existing algorithms to solve various issues.
As mentioned above, hard optimization problems need powerful search methods to find the optimal solutions for different problems. Thus, the basic method faces several problems during the search process, which are premature convergence, a lack of balance between the search methods, and low convergence behavior. The gazelle optimization algorithm (GOA) is a global stochastic optimizer that is straightforward and has a powerful search capability. Nevertheless, the GOA requires a deep investigation to improve its search abilities and address various multimodal–hybrid functions and data mining problems efficiently. Therefore, this paper proposes a new hybrid method, called (IGOA), based on using the orthogonal learning (OL) method with Rosenbrock’s direct rotational strategy. This modification aims to improve the basic GOA performance and sustain solution varieties. We performed comprehensive experiments based on various functions, including 23 classical and IEEE CEC2017 problems. Moreover, eight data clustering problems taken from the UCI repository were tested to further verify the proposed method’s performance. The IGOA was compared with several other proposed meta-heuristic algorithms. Moreover, the Wilcoxon signed-rank test further assessed the experimental results to conduct more systematic data analyses. The IGOA surpassed other comparative optimizers in terms of convergence speed and precision. The empirical results show that the proposed IGOA achieved better results compared to the basic GOA and other state-of-the-art methods, and it obtained higher performance in terms of solution quality.
The remainder of this paper is structured as follows. Section 2 presents the related works. Section 3 presents the main procedure of the original GOA. Section 4 presents the proposed IGOA method. Section 5 presents the experiments and the results obtained by the comparative methods. The results are discussed in Section 6. Finally, the conclusion and potential future research directions are presented in Section 7.

2. Related Works

In this section, the related search methods that were used to solve various problems with their modifications are presented.
Moth–flame optimization (MFO) is a popular, straightforward technique inspired by nature. However, MFO may struggle to converge or tends to slip into local optima on some complicated optimization tasks, particularly high-dimensional and multimodal problems. To address these limitations, the integration of MFO with the Gaussian mutation (GM), the Cauchy mutation (CM), the Lévy mutation (LM), or a combination of GM, CM, and LM is presented in [17]. GM is added to the basic MFO specifically to enhance its neighborhood-informed capabilities. On an extensive collection of 23 benchmark problems and 30 CEC2017 benchmark tasks, the best-performing variation of MFO was compared against 15 cutting-edge methods and 4 well-known sophisticated optimization techniques. The extensive experiments show that the three mutation strategies considerably increase the basic MFO’s exploration and exploitation capabilities.
To direct the swarm, further enhance the balance between the exploratory and neighborhood-informed capacities of the traditional process, and improve the ability of the whale optimization algorithm (WOA) to deal with optimal control tasks, two novel strategies (Lévy flight and chaotic local search) were simultaneously introduced into the WOA [18]. The benchmark tasks include unimodal, multimodal, and fixed-dimension multimodal problems. The analytical and practical findings show that the suggested method can surpass its rivals in terms of convergence speed and accuracy. The proposed approach may be a powerful and valuable supplementary tool for more complicated optimization problems.
Implementing effective software products is becoming increasingly difficult due to the high-performance computing (HPC) industry’s rapidly changing hardware environment and growing specialization. Software abstraction is one approach to overcoming these difficulties. A parallel algorithm paradigm is proposed that enables optimal mappings to various complex architectures with global control of their synchronization and parallelization [19]. The provided model distinguishes between an algorithm’s structure and its functional execution and preserves them in an abstract pattern tree (APT), using a hierarchical decomposition of parallel process patterns as building blocks for computational frameworks. Based on the APT, a data-centric flow graph is created that serves as an intermediate description for complex and automatic structural changes. Three example algorithms are used to illustrate the usability of this paradigm, which results in runtime speedups between 1.83 and 2.45 on a typical mixed CPU/GPU system.
One of the recently developed algorithms, the salp swarm algorithm (SSA), is based on the simulated behavior of salps. However, it faces local optima stagnation and slow convergence (similar to most meta-heuristic methods). These issues were recently satisfactorily resolved using chaos theory. A brand-new hybrid approach built on SSA and the chaos theory was put forth in this work [20]. A total of 20 benchmark datasets and 14 benchmark unimodal and multimodal optimization tasks were used to test the proposed chaotic salp swarm algorithm (CSSA). Ten distinct chaotic maps were used to increase the convergence rate and ensuing precision. The suggested CSSA is an algorithm with promise, according to the simulation findings. The outcomes also demonstrate CSSA’s capacity to identify an ideal feature subset that maximizes classification performance while using the fewest possible features. Additionally, the results show that the chaotic logistic map is the best of the ten maps utilized and may significantly improve the effectiveness of the original SSA.
The sine–cosine algorithm (SCA), newly created to handle global optimization issues, is based on the properties of the sine and cosine trigonometric functions. The SCA is modified in [21] to improve the exploitation of solutions while reducing the diversity overflow in the traditional SCA search equations; the suggested algorithm is named ISCA. Its standout feature is the fusion of crossover skills with the individual optimal states of unique solutions and a combination of self-learning and global search methods. A traditional collection of common benchmark problems, the IEEE CEC2014 benchmark suite, and a more recent group of benchmark functions, the IEEE CEC2017 benchmark suite, were used to assess these skills in ISCA. Several performance criteria were used to guarantee the robustness and effectiveness of the method. In the study, five well-known engineering optimization problems were also solved using the suggested technique, ISCA. Toward the study’s conclusion, the proposed approach is also employed for multilevel thresholding in image segmentation.
The WOA, a novel and competitive population-based optimization technique, beats certain previous biologically inspired algorithms in terms of ease of use and effectiveness [22]. However, for large-scale global optimization (LSGO) problems, the WOA becomes entangled in local optima and loses accuracy. To solve LSGO difficulties, a modified version of the whale optimization algorithm (MWOA) is suggested. A nonlinear dynamic method based on a differential operator is provided to update the controller parameters and balance the exploration and exploitation capacities. The algorithm is forced to leave local optima by employing a Lévy flight technique. Additionally, the population’s leader is subjected to a quadratic extrapolation approach, which increases the precision of the solution and the effectiveness of local exploitation. A total of 25 well-known benchmark problems with dimensions varying from 100 to 1000 were used to evaluate the MWOA. The experimental findings show that the MWOA performs better on LSGO than other cutting-edge optimization methods in terms of solution correctness, fast convergence, and reliability.

3. Gazelle Optimization Algorithm: Procedure and Presentation

This section introduces the gazelle optimization algorithm (GOA) and formulates the suggested procedures for optimization.
The genus Gazella comprises the gazelles. The drylands throughout most of Asia, including China, the Arabian Peninsula, and a portion of the Sahara Desert in northern Africa, are home to gazelles. Additionally, they may be found in eastern Africa and the sub-Saharan Sahel, which stretches from Tanzania to the Horn of Africa. Most predators frequently prey on gazelles. There are about 19 different species of gazelles, ranging in size from the small Thomson’s and Speke’s gazelles to the large Dama gazelle. Gazelles have keen hearing, sight, and smell, and they move quickly. These evolutionary traits enable them to flee from predators, compensating for their chronic weaknesses. These peculiar characteristics of gazelles can be observed in nature [23].

3.1. Initialization

The GOA is a population-based optimization technique that employs gazelles (X) with randomly initialized search parameters. According to Equation (1), the search agents are represented by an n-by-d matrix of potential solutions. The GOA leverages the problem’s upper bound (UB) and lower bound (LB) constraints to determine the possible values of the population matrix stochastically.
X = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,j} & \cdots & X_{1,d-1} & X_{1,d} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,j} & \cdots & X_{2,d-1} & X_{2,d} \\ \vdots & \vdots & \cdots & X_{i,j} & \cdots & \vdots & \vdots \\ X_{n-1,1} & X_{n-1,2} & \cdots & X_{n-1,j} & \cdots & X_{n-1,d-1} & X_{n-1,d} \\ X_{n,1} & X_{n,2} & \cdots & X_{n,j} & \cdots & X_{n,d-1} & X_{n,d} \end{bmatrix} \quad (1)
where X is the matrix of the solution locations, each location is stochastically generated by Equation (2), X_{i,j} is the jth randomly generated location of the ith solution, n denotes the number of gazelles, and d is the dimension of the specified search space.
X_{i,j} = rand \times (UB_j - LB_j) + LB_j \quad (2)
Each iteration produces a candidate location for each X_{i,j}, where rand is a random number, and UB_j and LB_j are the upper and lower bounds of the specified search space, respectively. The best-obtained solution is appointed as the top gazelle to create an elite matrix (n × d), as given in Equation (3). This matrix is employed to guide the gazelles’ search in the next iteration.
Elite = \begin{bmatrix} X'_{1,1} & X'_{1,2} & \cdots & X'_{1,j} & \cdots & X'_{1,d-1} & X'_{1,d} \\ X'_{2,1} & X'_{2,2} & \cdots & X'_{2,j} & \cdots & X'_{2,d-1} & X'_{2,d} \\ \vdots & \vdots & \cdots & X'_{i,j} & \cdots & \vdots & \vdots \\ X'_{n-1,1} & X'_{n-1,2} & \cdots & X'_{n-1,j} & \cdots & X'_{n-1,d-1} & X'_{n-1,d} \\ X'_{n,1} & X'_{n,2} & \cdots & X'_{n,j} & \cdots & X'_{n,d-1} & X'_{n,d} \end{bmatrix} \quad (3)
where X'_{i,j} denotes the locations of the top gazelle. The predator and the gazelles are both regarded as search agents by the GOA. Since the gazelles are already running in unison for refuge by the time a predator is observed stalking them, the predator will have already searched the area when the gazelles flee. The elite matrix is updated at the end of each iteration whenever a superior gazelle replaces the top gazelle.
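A minimal sketch of this initialization, assuming NumPy and a hypothetical objective function, is given below; it builds the population matrix of Equation (2) and, as the text suggests, replicates the best gazelle into the Elite matrix of Equation (3).

```python
import numpy as np

def initialize_population(n, d, lb, ub, objective, seed=None):
    """Randomly initialize n gazelles in a d-dimensional search space (Eq. (2))
    and build the Elite matrix by replicating the best gazelle (Eq. (3))."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)

    # Population matrix X (Eq. (1)): each row is one candidate solution.
    X = rng.random((n, d)) * (ub - lb) + lb

    # Evaluate all gazelles and select the best one as the top gazelle.
    fitness = np.array([objective(x) for x in X])
    top_gazelle = X[np.argmin(fitness)]

    # Elite matrix (n x d): n copies of the top gazelle guide the next iteration.
    elite = np.tile(top_gazelle, (n, 1))
    return X, fitness, elite

# Example usage with the sphere function as a stand-in objective.
X, fitness, elite = initialize_population(
    n=30, d=10, lb=-100, ub=100, objective=lambda x: np.sum(x ** 2), seed=42)
```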

3.2. The Brownian Motion

Brownian motion is a seemingly random movement whose displacement follows a standard (Gaussian) probability distribution function with mean μ = 0 and variance σ² = 1. Equation (4) defines the probability density function of the normal Brownian motion [23].
f_B(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{x^2}{2}\right) \quad (4)

3.3. The Lévy Flight

The Lévy distribution from Equation (5) is used by the Lévy flight to conduct a random walk [24].
L(X_j) \approx |X_j|^{-1-\alpha} \quad (5)
where X_j indicates the flight length, and α ∈ (1, 2) is the power-law exponent. Equation (6) gives the Lévy stable distribution [25].
f_L(x; \alpha, \nu) = \frac{1}{\pi} \int_0^{\infty} \exp(-\nu q^{\alpha}) \cos(qx) \, dq \quad (6)
This work utilized an algorithm that yields a stable Lévy motion. The algorithm is employed with α within the range 0.3–1.99, as represented in Equation (7), where α is the distribution index that controls the motion process, and ν denotes the scale unit.
Levy(\alpha) = 0.05 \times \frac{x}{|y|^{1/\alpha}} \quad (7)
where α , x, and y are described as follows:
x = Normal(0, \sigma_x^2) \quad (8)
y = Normal(0, \sigma_y^2) \quad (9)
\sigma_x = \left[ \frac{\Gamma(1+\alpha)\,\sin(\pi\alpha/2)}{\Gamma\!\left(\frac{1+\alpha}{2}\right)\,\alpha\,2^{(\alpha-1)/2}} \right]^{1/\alpha} \quad (10)
where σ_y = 1 and α = 1.5.
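The two step generators can be sketched as follows, assuming NumPy; the Brownian steps are standard normal draws as in Equation (4), and the Lévy steps follow the Mantegna-style construction of Equations (7)–(10) with α = 1.5.

```python
import math
import numpy as np

def brownian_steps(shape, rng):
    # Brownian steps: standard normal draws with mean 0 and variance 1 (Eq. (4)).
    return rng.normal(0.0, 1.0, shape)

def levy_steps(shape, rng, alpha=1.5):
    # Mantegna-style stable Levy step generator following Eqs. (7)-(10).
    sigma_x = (
        (math.gamma(1 + alpha) * math.sin(math.pi * alpha / 2))
        / (math.gamma((1 + alpha) / 2) * alpha * 2 ** ((alpha - 1) / 2))
    ) ** (1 / alpha)                                   # Eq. (10)
    x = rng.normal(0.0, sigma_x, shape)                # Eq. (8)
    y = rng.normal(0.0, 1.0, shape)                    # Eq. (9), with sigma_y = 1
    return 0.05 * x / np.abs(y) ** (1 / alpha)         # Eq. (7)

# Example: generate step vectors for 30 gazelles in 10 dimensions.
rng = np.random.default_rng(0)
RB = brownian_steps((30, 10), rng)   # Brownian step vectors
RL = levy_steps((30, 10), rng)       # Levy step vectors
```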

3.4. Modeling the Basic GOA

The developed GOA algorithm mimics the way that gazelles survive. The optimization process entails grazing without a predator and fleeing to a refuge when one is seen. As a result, there are two steps to the described GOA algorithm optimization procedure [23].

3.4.1. Exploitation

In this stage, it is assumed that there is no predator present or that the predator is only stalking the gazelles while they calmly graze. During this phase, neighborhood regions of the domain are efficiently covered using the Brownian motion, characterized by uniform and controlled steps. As shown in Figure 1, it is expected that the gazelles walk in a Brownian motion during grazing.
Equation (11) illustrates the mathematical formula of this phenomenon.
gazelle_{i+1} = gazelle_i + s \cdot R \otimes R_B \otimes (Elite_i - R_B \otimes gazelle_i) \quad (11)
where gazelle_{i+1} is the solution of the following iteration, gazelle_i is the solution of the current iteration, s is the pace at which the gazelles graze, R is a vector of uniform random numbers in [0, 1], R_B is a vector of normally distributed random numbers representing the Brownian motion, and ⊗ denotes entry-wise multiplication.

3.4.2. Exploration

The exploration phase begins when a predator is seen. Gazelles respond to danger by flicking their tails, stomping their feet, or stotting up to 2 m in the air with all four feet; this 2 m height is mimicked by scaling it to a value between 0 and 1. This phase of the algorithm uses the Lévy flight, which involves a series of small steps and sporadic large jumps; this strategy has been shown in the optimization literature to enhance searchability. The exploration phase is shown in Figure 2. Once the predator is seen, the gazelle flees, and the predator pursues. Both runs exhibit a sharp turn in travel direction, symbolized by μ. This study assumed that this direction shift happens every iteration; when the iteration number is odd, the gazelle moves in one direction, and when it is even, it moves in the opposite direction. We hypothesized that because the gazelle reacts first, it uses the Lévy flight to move. The study assumed that the predator would take off using Brownian motion and then switch to Lévy flight, since the predator responds later.
Equation (12) illustrates the computational formula of the gazelle’s behavior after spotting the predator.
gazelle_{i+1} = gazelle_i + S \cdot \mu \cdot R_L \otimes (Elite_i - R_L \otimes gazelle_i) \quad (12)
where S represents the maximum speed the gazelle can reach, and R_L is a vector of random numbers generated from the Lévy distribution. Equation (13) gives the computational formula for the predator’s pursuit of the gazelle.
gazelle_{i+1} = gazelle_i + S \cdot \mu \cdot CF \cdot R \otimes R_B \otimes (Elite_i - R_L \otimes gazelle_i) \quad (13)
where
CF = \left(1 - \frac{iter}{max\_iter}\right)^{\left(2\,\frac{iter}{max\_iter}\right)} \quad (14)
Research on Mongolian gazelles reported that, even though the animals are not endangered, they have an annual survivorship of 0.66, which translates to predators being successful in only 0.34 of cases. The predator success rates (PSRs) affect the gazelle’s capacity to flee; modeling them helps the method avoid becoming stuck in a local minimum. Equation (15) models the impact of the PSRs.
gazelle_{i+1} = \begin{cases} gazelle_i + CF\left[LB + R \otimes (UB - LB)\right] \otimes U, & \text{if } r \leq PSRs \\ gazelle_i + \left[PSRs(1 - r) + r\right]\left(gazelle_{r1} - gazelle_{r2}\right), & \text{otherwise} \end{cases} \quad (15)
U = \begin{cases} 0, & \text{if } r < 0.34 \\ 1, & \text{otherwise} \end{cases} \quad (16)
The main procedure of the basic GOA is presented in Figure 3.
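The update rules above can be condensed into the sketch below, which assumes NumPy and reuses the brownian_steps and levy_steps helpers from the previous sketch; the parameter values (s, S, PSRs), the random choice between grazing and fleeing, and the half-and-half split between the fleeing gazelles (Equation (12)) and the pursuing predator (Equation (13)) are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def goa_update(X, elite, lb, ub, it, max_iter, rng, s=0.88, S=2.0, psrs=0.34):
    """One position update of the basic GOA, following Eqs. (11)-(16).
    The parameter values s, S, and psrs are illustrative assumptions."""
    n, d = X.shape
    CF = (1 - it / max_iter) ** (2 * it / max_iter)            # Eq. (14)
    mu = -1.0 if it % 2 else 1.0                               # direction flip per iteration
    RB = brownian_steps((n, d), rng)                           # Brownian vectors (earlier sketch)
    RL = levy_steps((n, d), rng)                               # Levy vectors (earlier sketch)
    R = rng.random((n, d))

    if rng.random() < 0.5:
        # Exploitation: grazing modeled with Brownian motion (Eq. (11)).
        X = X + s * R * RB * (elite - RB * X)
    else:
        # Exploration: fleeing gazelles use Levy flight (Eq. (12)),
        # the pursuing predator uses Brownian motion (Eq. (13));
        # splitting the population in half is an assumption of this sketch.
        h = n // 2
        X = X.copy()
        X[:h] = X[:h] + S * mu * RL[:h] * (elite[:h] - RL[:h] * X[:h])
        X[h:] = X[h:] + S * mu * CF * R[h:] * RB[h:] * (elite[h:] - RL[h:] * X[h:])

    # Predator success rate (PSR) escape, Eqs. (15)-(16).
    r = rng.random()
    if r <= psrs:
        U = (rng.random((n, d)) >= 0.34).astype(float)         # Eq. (16)
        X = X + CF * (lb + rng.random((n, d)) * (ub - lb)) * U
    else:
        r1, r2 = rng.permutation(n), rng.permutation(n)
        X = X + (psrs * (1 - r) + r) * (X[r1] - X[r2])
    return np.clip(X, lb, ub)
```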

4. The Proposed Method

In this section, the proposed IGOA is presented according to its main search procedures.

4.1. Orthogonal Learning (OL)

A fractional experiment is the key to the orthogonal design. By utilizing the features of a fractional experiment, a small set of trials can represent all possible level combinations, so the optimal level combination can be found quickly. The standard approach to finding the best level for each variable is to cycle through all levels, supposing that the experimental outcome of the objective problem depends on K factors, each of which may be split into Q levels. When only a few factors are involved, this strategy works quite well. However, when many factors are involved, it becomes very challenging to try every combination. The orthogonal design differs from this traversal approach: it combines the factors and levels through an orthogonal array. As an illustration, the construction procedure is as follows: build the basic columns first, then the nonbasic columns [26,27]. The following equation shows the orthogonal array L_9(3^4), which covers four factors with three levels each in only nine trials.
L_9(3^4) = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 2 \\ 1 & 3 & 3 & 3 \\ 2 & 1 & 2 & 3 \\ 2 & 2 & 3 & 1 \\ 2 & 3 & 1 & 2 \\ 3 & 1 & 3 & 2 \\ 3 & 2 & 1 & 3 \\ 3 & 3 & 2 & 1 \end{bmatrix} \quad (17)
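As an illustration of how an orthogonal array selects a small, balanced subset of level combinations, the sketch below hard-codes the standard L_9(3^4) array of Equation (17) and uses it to evaluate candidate factor-level combinations; the objective function and the level values are hypothetical.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 trials, 4 factors, 3 levels each (Eq. (17)).
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Hypothetical level values per factor (e.g., three candidate coordinates per dimension).
levels = np.array([[0.1, 0.5, 0.9],
                   [0.2, 0.4, 0.8],
                   [0.3, 0.6, 0.7],
                   [0.0, 0.5, 1.0]])

def ol_best_combination(objective):
    """Evaluate only the 9 combinations prescribed by L9 instead of all 3**4 = 81."""
    trials = np.array([[levels[f, L9[t, f] - 1] for f in range(4)] for t in range(9)])
    scores = np.array([objective(x) for x in trials])
    return trials[np.argmin(scores)]

# Example usage with a hypothetical objective centered at 0.5.
best = ol_best_combination(lambda x: np.sum((x - 0.5) ** 2))
```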

4.2. Rosenbrock’s Direct Rotational (RDR)

Rosenbrock’s direct rotation (RDR) is a local search technique that uses the coordinate axes as the initial search directions and then rotates these directions, moving to a new point, until at least one successful step and one failed step have been made in each search direction [28]. At that point, the current stage ends, and the orthonormal basis is revised to account for the cumulative effect of all successful steps in all dimensions [29]. Equation (18) displays the orthonormal basis update.
x_{k+1} - x_k = \sum_{i=1}^{n} \lambda_i d_i \quad (18)
Equation (19) defines the new set of directions, where λ_i is the accumulated sum of the successful step lengths along the ith direction d_i. The most beneficial search direction at this point is x_{k+1} - x_k; hence, it must be included in the revised search directions.
p_i = \begin{cases} d_i, & \lambda_i = 0 \\ \sum_{j=i}^{n} \lambda_j d_j, & \lambda_i \neq 0 \end{cases} \quad (19)
The search results are then updated using the Gram–Schmidt orthonormalization process as given in Equation (20).
q_i = \begin{cases} p_i, & i = 1 \\ p_i - \sum_{j=1}^{i-1} \dfrac{q_j^T p_i}{q_j^T q_j} q_j, & i \geq 2 \end{cases} \quad (20)
Equation (21) displays the modified search instructions upon normalization.
d_i = \frac{q_i}{\|q_i\|}, \quad i = 1, 2, 3, \ldots, n \quad (21)
After updating the search directions, the technique searches again along the new directions, continuing until the termination condition is fulfilled.
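A minimal sketch of the direction-rotation step of Equations (18)–(21), assuming NumPy and that the array lambdas holds the accumulated successful step lengths along the current directions, is given below.

```python
import numpy as np

def rotate_directions(D, lambdas):
    """Rosenbrock direction update (Eqs. (18)-(21)).
    D: (n, n) matrix whose rows are the current orthonormal search directions d_i.
    lambdas: accumulated successful step lengths along each direction."""
    n = D.shape[0]

    # Eq. (19): build the new (non-orthogonal) direction set p_i.
    P = np.empty_like(D)
    for i in range(n):
        if lambdas[i] == 0:
            P[i] = D[i]
        else:
            P[i] = sum(lambdas[j] * D[j] for j in range(i, n))

    # Eq. (20): Gram-Schmidt orthogonalization.
    Q = np.empty_like(D)
    for i in range(n):
        Q[i] = P[i]
        for j in range(i):
            Q[i] -= (Q[j] @ P[i]) / (Q[j] @ Q[j]) * Q[j]

    # Eq. (21): normalize to obtain the new orthonormal directions.
    return Q / np.linalg.norm(Q, axis=1, keepdims=True)

# Example: rotate the coordinate axes after successful steps of 0.5 and -0.2.
D_new = rotate_directions(np.eye(2), np.array([0.5, -0.2]))
```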

4.3. Procedure of the Proposed IGOA

The suggested IGOA is presented in this section to demonstrate its primary process and structure. The three primary methods (GOA, OL, and RDR) are employed in the suggested technique, following a transition mechanism with three phases. The proposed IGOA updates the positions of the solutions according to a random transition condition: if rand < 0.2, the search operations of the OL are carried out; otherwise, if rand < 0.5, the search operations of the RDR are carried out; otherwise, the search proceeds according to the GOA’s exploration and exploitation. At the end of each iteration, the termination condition is checked to decide whether the search stops or continues. The main procedure of the proposed IGOA is presented in Figure 4, and a minimal sketch of the transition logic follows.
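In the sketch below, the thresholds 0.2 and 0.5 come from the description above, while the helper names (ol_search, rdr_search, goa_update) are placeholders for the operators described in Sections 3 and 4; this is an illustrative outline, not the exact implementation.

```python
import numpy as np

def igoa_step(X, elite, lb, ub, it, max_iter, rng,
              ol_search, rdr_search, goa_update):
    """One IGOA iteration: choose between OL, RDR, and the basic GOA update
    according to the proposed transition mechanism (a sketch, not the exact code)."""
    r = rng.random()
    if r < 0.2:
        X = ol_search(X, elite)                               # orthogonal learning operator
    elif r < 0.5:
        X = rdr_search(X, elite)                              # Rosenbrock direct rotation operator
    else:
        X = goa_update(X, elite, lb, ub, it, max_iter, rng)   # basic GOA exploration/exploitation
    return np.clip(X, lb, ub)
```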
By creating a novel arrangement (transition mechanism) and utilizing three integrated approaches, the suggested method addresses the shortcomings of the conventional methods (i.e., GOA, OL, and RDR). A traditional approach such as the GOA suffers from premature convergence, stagnation in the local search region, and an imbalance between the search stages (exploration and exploitation). One of the critical issues with the GOA is that the diversity of the candidate solutions can be low. In order to solve clustering problems more effectively, the suggested technique arranges the existing methods appropriately to meet these issues.
In conclusion, we explain how these flaws were addressed. First, the imbalance between the search processes is addressed by performing the exploration searches of the OL and RDR for half of the iterations and the exploitation search of the GOA for the other half. Thus, the suggested arrangement can balance the search processes and increase the variety of the candidate solutions by selecting one exploration or exploitation process out of the three approaches in each iteration. Second, the adjustments made to the search procedure in accordance with the suggested transition mechanism regulate the pace of convergence. This affects the optimization process by causing it to avoid the local search region and continue searching for the optimal answer. Finally, using various updating mechanisms, as per the suggested methodology, preserves the diversity of the solutions employed.

4.4. Computational Complexity of the Proposed IGOA

The suggested IGOA’s total computing complexity is provided according to the initialization of the candidate solutions, the objective function of the existing solutions, and the updating of the candidate solutions.
Assume that N is the number of all utilized solutions, and O(N) is the time complexity of the solutions’ initialization. The time complexity of updating the solutions is O(T × N) + O(T × N × Dim), where T is the total number of iterations and Dim is the dimension size of the problem. Therefore, the time complexity of the presented IGOA is given as follows.
O(IGOA) = O(N) + O(GOA) + O(OL) + O(RDR) \quad (22)
The time complexity of the proposed method depends on three main search operators: GOA, OL, and RDR. The time complexities of these methods are calculated as follows.
O(OL) = O(N \times (max\_iter \times Dim + 1)) \quad (23)
O(GOA) = O(N \times (max\_iter \times Dim + 1)) \quad (24)
O(RDR) = O(N \times Dim) \quad (25)
Therefore, the total time complexity of the IGOA is given as follows.
O(IGOA) = O(max\_iter \times N \times (Dim + 1) + (N \times Dim) + (N \times Dim)) \quad (26)
O(IGOA) = O(max\_iter \times N \times Dim + N) \quad (27)

5. Experiments and Results

The suggested approach (IGOA) is tested in this section on optimization problems, including data clustering problems. The outcomes are assessed using various metrics. Additionally, the suggested method’s considerable benefits over previous comparative approaches in the literature are demonstrated using the Friedman ranking test (FRT) and the Wilcoxon signed-rank test (WRT).
To evaluate the outcomes of the suggested strategy, a number of optimizers are employed as comparison techniques.
These methods are the salp swarm algorithm (SSA) [30], whale optimization algorithm (WOA) [31], sine–cosine algorithm (SCA) [32], dragonfly algorithm (DA) [33], grey wolf optimizer (GWO) [34], equilibrium optimizer (EO) [35], particle swarm optimizer (PSO) [36], Aquila optimizer (AO) [37], ant lion optimizer (ALO) [38], marine predators algorithm (MPA) [39], and gazelle optimization algorithm (GOA).
The findings are reported after 30 separate runs of each comparison approach with a population size of 30 and 1000 iterations. The parameters for the comparison approaches are set out in Table 1 in the ’Parameter’ column. Table 2 lists the specifics of the computers that were used.

5.1. Experiments Series 1: Classical Benchmark Problems

In this section, the performance of the proposed method is evaluated using a set of 23 classical benchmark functions.
The mathematical representations and classifications of the 23 applied benchmark functions are shown in Table 3. The unimodal functions (F1–F7) have one optimal solution and are used to assess the exploitation capability of the proposed method. The multimodal functions (F8–F13) have several local optima and one global optimum and are used to evaluate the algorithm’s exploration. The fixed-dimension multimodal functions have a limited search space and assess the equilibrium between exploration and exploitation.
To find the best number of candidate solutions for the proposed method, experiments were conducted as shown in Figure 5. The influence of the number of solutions (i.e., N) was examined on the classical test functions (23 benchmark functions). Following the literature, several population sizes were tried: 5, 10, 15, 20, 25, 30, 35, 40, 45, and 50, comparing the effect of the population size over the iterations (i.e., 500 iterations).
Usually, the number of solutions used ranges from 5 to 50. For example, looking at F5, F6, F7, and F9, we find slight differences between the tested values, with 30 being almost the best number. On average, this size had the best results, as it is the smallest of the best-performing values (30 solutions), as given in F17–F22. Among several comparable values, the best number of solutions is the smallest one, to maintain a low computational time while obtaining the maximum performance. It can be observed from the results in Figure 5 that the proposed IGOA keeps its advantages across these population sizes, which means that the IGOA is robust and not overly sensitive to the population size. The proposed IGOA is stable when the population changes, such as in F10, F16, F17, F18, F19, F21, F22, and F23. In other words, the best number of solutions is 50 in most of the used benchmark functions (i.e., F1, F6, F9, F10, F11, etc.). However, because there is little difference between the tested numbers of solutions, the aforementioned claim is supported.
Deep investigations were conducted to show the performance of the proposed method compared to the original methods as given in Figure 6. In this Figure, in each row, four sub-figures are given: function topology, trajectory of the first dimension, average fitness values, and convergence curves of the tested methods.
In the first column in Figure 6, the standard 2D topologies of the fitness functions are provided. The second column presents the trajectory of the first dimension recorded by the proposed IGOA through the optimization process. The IGOA examines promising areas of the assigned search space for the used problems. Early positions are spread over a broad search space, while most later positions develop around the local region of the best solution, reflecting the difficulty levels of the provided problems. The trajectory patterns in the second column show that the solution undergoes significant and abrupt changes in the first steps of the optimization. This behavior confirms that the proposed IGOA can ultimately reach the optimal position. The third column presents the average fitness value of the candidate solutions in each iteration. The curves display a decreasing behavior on the employed problems. This proves that the proposed IGOA enhances the performance of the search process over the iterations. In the fourth column, the objective function value of the best-obtained solution in each iteration is presented. Consequently, the proposed IGOA has favorable exploration and exploitation aptitudes.
The results of the proposed IGOA, compared to the other comparative methods using the classical (F1–F13) benchmark functions with a dimension size of 10, are given in Table 4. An impressive indication is that the proposed method achieved the best results in all of the tested cases (F1–F13). According to the FRT, the proposed method ranked first, followed by AO, MPA, GOA, EO, WOA, GWO, PSO, SCA, SSA, ALO, and DA. In Table 5, the results of the proposed IGOA are compared to the other comparative methods using the classical benchmark functions (F14–F23), where the dimension size is fixed. From this table, we notice that the proposed method had the best results in almost all of the tested problems, except F15, where it obtained the second-best results. The achieved results show that the proposed IGOA has a promising ability to solve the benchmark problems. According to the FRT, the proposed IGOA ranked first, followed by MPA, SSA, GWO, EO, AO, ALO, PSO, DA, WOA, SCA, and GOA.
For further investigation, the WRT was applied in Table 6 to identify the significant improvements obtained by the proposed IGOA compared to the comparative methods (i.e., SSA, WOA, SCA, DA, GWO, PSO, ALO, MPA, EO, AO, and GOA). The proposed method had 16 significant improvements out of 23 compared to SSA; it obtained 14 significant improvements out of 23 compared to WOA; it obtained 17 significant improvements out of 23 compared to SSA; it obtained 19 significant improvements out of 23 compared to SCA; it obtained 16 significant improvements out of 23 compared to DA; it obtained 16 significant improvements out of 23 compared to GWO; it obtained 15 significant improvements out of 23 compared to PSO; it obtained 15 significant improvements out of 23 compared to ALO; it obtained 14 significant improvements out of 23 compared to MPA; it obtained 11 significant improvements out of 23 compared to EO; and it obtained 17 significant improvements out of 23 compared to GOA.
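A typical way to run the Wilcoxon signed-rank test on two algorithms' results over the 23 functions, assuming SciPy and hypothetical result arrays, is sketched below; a p-value below 0.05 is counted as a significant improvement.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical best-fitness values of two algorithms on the 23 benchmark functions
# (each entry could be the mean over 30 independent runs); placeholder data only.
rng = np.random.default_rng(7)
igoa_results = rng.random(23)
goa_results = igoa_results + np.abs(rng.random(23))

stat, p_value = wilcoxon(igoa_results, goa_results)
print(f"Wilcoxon statistic={stat:.3f}, p-value={p_value:.4f}, "
      f"significant={p_value < 0.05}")
```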
Figure 7 shows the convergence behaviors of the comparative algorithms on the classical test functions (F1–F23), where functions F1–F13 are fixed to 10 dimensions. These figures illustrate the optimization process during the iterations. It is clear that the proposed method finds the best solutions in all of the test cases and performs very well according to its convergence curves. For example, in the first test case (F1), the proposed IGOA shows good behavior through the optimization process: it reached the best results with a smooth curve and faster than all of the comparative methods. In F16, the proposed IGOA obtained the best solution compared with all of the comparison methods, and the difference in convergence speed is apparent.
Table 7 shows the results of the proposed IGOA compared to the other comparative methods using the classical benchmark functions (F1–F13), where the dimension size is 50. High-dimensional problems are used to validate the performance of the proposed IGOA compared to the other methods. The proposed IGOA obtained the best results in all of the test cases except F8, where it still obtained better results than most of the other methods. The performance of the IGOA is excellent for solving high-dimensional problems. According to the FRT, the IGOA ranked first, followed by AO, MPA, EO, WOA, GOA, GWO, PSO, SSA, SCA, DA, and ALO. In Table 8, the results of the proposed IGOA compared to the other comparative methods are presented using the classical benchmark functions (F1–F13), where the dimension size is 100. Again, the proposed IGOA had the best results in all of the test cases, and it recorded new best results in some cases. According to the FRT, the IGOA ranked first, followed by AO, MPA, WOA, EO, GOA, GWO, PSO, ALO, SSA, and SCA.
Figure 8 illustrates the convergence behaviors of the comparative algorithms on the classical test functions (F1–F13) with a higher dimension size (i.e., 100). Obviously, the proposed IGOA has the best solutions in all of the tested high-dimensional problems, and it shows promising behavior according to the given convergence curves. For example, in the third test case (F3), the proposed IGOA shows stable and smooth convergence behavior throughout the optimization process. It is faster than all of the comparative methods in solving high-dimensional problems.
Figure 9 shows the exploration and exploitation behavior during the optimization process. This figure shows how the search effort is divided between exploration and exploitation throughout the optimization process; this balance enhances the ability of the proposed algorithm to find new solutions in the available search space.

5.2. Experiments Series 2: Advanced CEC2017 Benchmark Problems

In this section, the proposed IGOA is tested further using a set of 30 advanced CEC2017 benchmark functions.
The types and descriptions of the used CEC2017 benchmark functions are presented in Table 9. The benchmark functions are unimodal (F1–F3) and multimodal (F4–F10), which are shifted and rotated functions. Hybrid functions range from F11 to F20. Composition functions range from F21 to F30. These functions are usually used to test the exploration and exploitation search processes and their equilibrium. The primary setting of the experiment in this section is the same as in the previous section.
The proposed IGOA is compared with other state-of-the-art methods using CEC2017, including the dimension-decided Harris hawks optimization (GCHHO) [42], multi-strategy mutative whale-inspired optimization (CCMWOA) [43], balanced whale optimization algorithm (BMWOA) [18], reinforced whale optimizer (BWOA) [44], hybridizing sine–cosine differential evolution (SCADE) [45], Cauchy and Gaussian sine–cosine algorithm (CGSCA) [46], improved opposition-based sine–cosine optimizer (OBSCA) [47], hybrid grey wolf differential evolution (HGWO) [48], mutation-driven salp chains-inspired optimizer (CMSSA) [49], and dynamic Harris hawks optimization (DHHOM) [50].
The results of the proposed IGOA compared to the other comparative state-of-the-art methods are presented in Table 10 using the 30 CEC2017 benchmark functions. This table shows that the proposed IGOA method obtained excellent results compared to the other methods. In most of the CEC2017 problems, the IGOA obtained better results and delivered high-quality solutions to these mathematical problems. To support this claim, the FRT was conducted in Table 11 to rank the comparative methods for solving the CEC2017 problems. In Table 11, the proposed IGOA is ranked first, followed by GCHHO, CMSSA, DHHOM, HGWO, BMWOA, BWOA, CGSCA, OBSCA, SCADE, and CCMWOA. The obtained results prove that the performance of the IGOA is promising compared to the other comparative methods. Since these mathematical problems are complex, it is challenging to reach their optimal solutions. The fact that the proposed method obtained a set of optimal solutions confirms its efficiency in solving complex and diverse CEC2017 problems, owing to its improved ability to escape local optima and increase population diversity.

5.3. Experiments Series 3: Data Clustering Problems

The proposed method was further tested on real-world problems (data clustering) to prove its searchability.

5.3.1. Description of the Data Clustering Problem

Suppose we have a collection of N objects that has to be divided into a predetermined number of clusters K. The clustering procedure allocates each object (O) to a specific cluster so as to minimize the Euclidean distance between the objects and the centroids of their assigned clusters. Equation (28) is the distance metric used to evaluate the similarity (connection) between data objects. The problem is formally described as follows:
ED(O, Z) = \sum_{i=1}^{N} \sum_{j=1}^{K} w_{ij} \, \|O_i - Z_j\|^2 \quad (28)
where \|O_i - Z_j\| is the distance measure between the ith object (O_i) and the jth cluster centroid (Z_j), N is the number of given objects, and K is the number of clusters. w_{ij} is the weight of the ith object (O_i) associated with the jth cluster (Z_j), which is either 1 or 0.
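The clustering objective of Equation (28) can be computed as in the following sketch, assuming NumPy and hard assignments in which w_ij is 1 for the nearest centroid and 0 otherwise.

```python
import numpy as np

def clustering_objective(objects, centroids):
    """Sum of squared Euclidean distances from each object to its nearest centroid (Eq. (28)).
    objects: (N, d) data matrix; centroids: (K, d) cluster centers (one candidate solution)."""
    # Pairwise squared distances between all objects and all centroids.
    d2 = ((objects[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    # With w_ij = 1 only for the closest centroid of each object, the double sum
    # reduces to taking the minimum distance per object and summing.
    return d2.min(axis=1).sum()

# Example: 6 random 2-D points evaluated against 2 candidate centroids.
rng = np.random.default_rng(1)
objs = rng.random((6, 2))
cents = rng.random((2, 2))
print(clustering_objective(objs, cents))
```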

5.3.2. Results of the Data Clustering Problems

This experiment tests the proposed IGOA using eight different UCI datasets, namely Cancer, Vowels, CMC, Iris, Seeds, Glass, Heart, and Water. The details of these datasets are presented in Table 12.
Table 13 shows the results obtained by the proposed IGOA compared to the other comparative methods on various data clustering problems. It is clear from the results presented in this table that the modification made in the proposed method effectively improved the ability of the basic algorithm. This modification helped the proposed algorithm obtain impressive results compared with the basic and the other methods. The proposed method had the best results in most cases, with a clear difference, and it obtained new best results in some cases. Thus, the proposed algorithm is able to deal with such complex problems and solve them efficiently.
In Table 14, the proposed method obtained significantly better results than the comparative methods according to the Wilcoxon rank test. The improvement rates were significant, which indicates the ability of the proposed algorithm to solve such problems more efficiently than the others. Based on the Friedman ranking test in Table 14, the proposed algorithm is ranked in first place. The PSO algorithm is in second place, the GWO algorithm is in third place, the AOA algorithm is in fourth place, the SCA and RSA algorithms are in fifth place, and the GOA algorithm is in seventh place. Figure 10 indicates that the proposed algorithm achieves results that exceed those of the other algorithms in most of the cases used in these experiments.

6. Discussions

In the tests indicated above, the OL technique and RDR method were incorporated in order to improve the performance of the original GOA method, and the improved resultant IGOA was thoroughly examined. Using benchmark datasets for clustering, the effectiveness of IGOA was evaluated.
From the results in the earlier sections, we observe that the IGOA produces significantly superior outcomes on the multi-dimensional classical test problems compared to other well-known optimization algorithms. The effectiveness of the other algorithms diminishes significantly as the dimension space expands. The results revealed that the proposed IGOA can maintain the equipoise between the exploratory and exploitative abilities regardless of the problem’s nature, even in a high-dimensional space.
From the results of the F1–F13 classical test functions, one can see a clear and significant gap among the results of the comparative algorithms, with superior solutions obtained by the proposed IGOA. As a result, the IGOA convergence speed is greatly improved. This observation proves the superior exploitative qualities of the IGOA. According to the solutions obtained for the multimodal and fixed-dimension multimodal functions, we conclude that the IGOA achieved superior solutions in most cases (and competitive solutions in some cases) owing to the equilibrium between its explorative and exploitative capabilities, as well as a smooth transition among the searching methods.
In addition, the performance of the IGOA was tested using the 30 advanced CEC2017 benchmark functions for further judgment. In the first part, regarding the CEC2017 unimodal functions (F1–F3), the IGOA has excellent results and optimization efficiency because the OL and RDR strategies supplement the diversity of the solutions of the conventional GOA and increase its likelihood of escaping the local optimal region. In addition, the RDR has different distribution properties that allow the conventional GOA to obtain a good stability point between the search mechanisms (exploration and exploitation), increasing the optimizer’s accuracy.
The importance of the proposed method is further validated in the CEC2017 multimodal functions (F4–F10). First, it can be recognized that the IGOA strategies thoroughly allow the search solutions at various regions to be determined. Then the data (from the solutions’ positions) interact with each other. As a result, it is hard for the implied areas to be trapped quickly. Furthermore, when the GOA is trapped with local optima, the stochastic disturbance of the OL can dramatically improve the GOA and its search accuracy. Finally, by combining the two strategies (OL and RDR) into the GOA, IGOA works well in the multimodal functions.
The combinations of CEC2017 unimodal and CEC2017 multimodal functions, called hybrid functions (F11–F20), were used to analyze the accuracy of the proposed IGOA. The RDR encourages the GOA to obtain the possibly optimal region faster. On the other hand, the OL encourages the GOA to obtain a potentially more reliable solution in the adjacent area. Nevertheless, due to the hardness of some problems, the combination of two search strategies (RDR and OL) interacting together leads to good results.
CEC2017 composite functions (F21–F30) examine the proposed IGOA’s ability to change between the search phases. Such benchmark analyses place more critical needs on the optimization process. For example, the OL principally assists in enhancing the ability in the exploration aspect, which enables IGOA to come near the better region quickly. Contrarily, the RDR encourages determining a more suitable solution close to the current best solution. However, the overall performance of IGOA was developed.
The IGOA is shown to be highly effective since the precision of the final findings is improved when compared to well-known cutting-edge methodologies in the literature. The IGOA’s efficacy is further demonstrated on clustering application challenges. Additionally, the outcomes of the data clustering problems show that the IGOA also performs remarkably well in locating ideal clusters and centroids.
To demonstrate its efficacy concerning convergence acceleration and optimization accuracy, the suggested approach presents an improved form of the GOA when compared with well-known and state-of-the-art methods. When using the suggested IGOA to solve actual clustering problems, whether they involve data or text, we also compare it to alternative optimization techniques. Researchers can learn how to apply the suggested optimizer to minimize or maximize the cost functions of clustering applications, as we minimize the data clustering objective in this work. In essence, one may obtain practical recommendations from this document. The experimental configuration was described in an earlier subsection.

7. Conclusions and Future Works

An effective optimization technique is required to handle challenging problems and identify the best solutions. Optimization problems arise in many forms across a variety of disciplines. It is challenging to solve all of these problems using the same approach since their nature varies. The GOA is a simple global stochastic optimizer with strong search capabilities. However, the GOA is inadequate for multimodal and hybrid functions and for data mining problems.
To enhance the GOA and maintain diversity in the solutions, the current article combines Rosenbrock’s direct rotation (RDR) technique with the orthogonal learning (OL) method; the resulting algorithm is called the IGOA. We carried out extensive tests based on several functions, including the 23 classical and IEEE CEC2017 problems. Several other meta-heuristic algorithms were compared with the IGOA. To further confirm the effectiveness of the proposed strategy, 8 data clustering problems collected from the UCI repository were tested. The Wilcoxon signed-rank test also evaluated the experimental findings for a more systematic data analysis. The other comparison optimizers could not compete with the IGOA’s convergence speed and accuracy. The empirical results demonstrate that the suggested IGOA outperformed the comparative approaches and the basic GOA in terms of outcomes and solution quality.
In future work, the proposed method can be improved using other search operators. Moreover, the proposed method can be tested to solve other hard problems, such as text clustering problems, scheduling problems, industrial engineering problems, advanced mathematical problems, parameter extraction problems, forecasting problems, feature selection problems, multi-objective problems, and others. Moreover, a deep investigation is needed to determine the main reasons for the current weaknesses in some cases.

Author Contributions

L.A.: conceptualization, supervision, methodology, formal analysis, resources, data curation, writing—original draft preparation. A.D.: conceptualization, supervision, writing—review and editing, project administration, funding acquisition. R.A.Z.: conceptualization, writing—review and editing, supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ibrahim, R.A.; Abd Elaziz, M.; Lu, S. Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Syst. Appl. 2018, 108, 1–27.
2. Wang, S.; Hussien, A.G.; Jia, H.; Abualigah, L.; Zheng, R. Enhanced Remora Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 1696.
3. Elaziz, M.A.; Abualigah, L.; Yousri, D.; Oliva, D.; Al-Qaness, M.A.; Nadimi-Shahraki, M.H.; Ewees, A.A.; Lu, S.; Ali Ibrahim, R. Boosting atomic orbit search using dynamic-based learning for feature selection. Mathematics 2021, 9, 2786.
4. Koziel, S.; Leifsson, L.; Yang, X.S. Solving Computationally Expensive Engineering Problems: Methods and Applications; Springer: Berlin/Heidelberg, Germany, 2014; Volume 97.
5. Baykasoglu, A. Design optimization with chaos embedded great deluge algorithm. Appl. Soft Comput. 2012, 12, 1055–1067.
6. Chen, H.; Wang, M.; Zhao, X. A multi-strategy enhanced Sine Cosine Algorithm for global optimization and constrained practical engineering problems. Appl. Math. Comput. 2020, 369, 124872.
7. Simpson, A.R.; Dandy, G.C.; Murphy, L.J. Genetic algorithms compared to other techniques for pipe optimization. J. Water Resour. Plan. Manag. 1994, 120, 423–443.
8. Rao, R.V.; Savsani, V.J.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
9. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L.; Liu, Y. A Hybrid Arithmetic Optimization and Golden Sine Algorithm for Solving Industrial Engineering Design Problems. Mathematics 2022, 10, 1567.
10. Droste, S.; Jansen, T.; Wegener, I. Upper and lower bounds for randomized search heuristics in black-box optimization. Theory Comput. Syst. 2006, 39, 525–544.
11. El Shinawi, A.; Ibrahim, R.A.; Abualigah, L.; Zelenakova, M.; Abd Elaziz, M. Enhanced adaptive neuro-fuzzy inference system using reptile search algorithm for relating swelling potentiality using index geotechnical properties: A case study at El Sherouk City, Egypt. Mathematics 2021, 9, 3295.
12. Attiya, I.; Abualigah, L.; Alshathri, S.; Elsadek, D.; Abd Elaziz, M. Dynamic Jellyfish Search Algorithm Based on Simulated Annealing and Disruption Operators for Global Optimization with Applications to Cloud Task Scheduling. Mathematics 2022, 10, 1894.
13. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Abualigah, L. Binary Aquila Optimizer for Selecting Effective Features from Medical Data: A COVID-19 Case Study. Mathematics 2022, 10, 1929.
14. Attiya, I.; Abualigah, L.; Elsadek, D.; Chelloug, S.A.; Abd Elaziz, M. An Intelligent Chimp Optimizer for Scheduling of IoT Application Tasks in Fog Computing. Mathematics 2022, 10, 1100.
15. Wen, C.; Jia, H.; Wu, D.; Rao, H.; Li, S.; Liu, Q.; Abualigah, L. Modified Remora Optimization Algorithm with Multistrategies for Global Optimization Problem. Mathematics 2022, 10, 3604.
16. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
17. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203.
18. Chen, H.; Xu, Y.; Wang, M.; Zhao, X. A balanced Whale Optimization Algorithm for constrained engineering design problems. Appl. Math. Model. 2019, 71, 45–59.
19. Miller, J.; Trümper, L.; Terboven, C.; Müller, M.S. A theoretical model for global optimization of parallel algorithms. Mathematics 2021, 9, 1685.
20. Sayed, G.I.; Khoriba, G.; Haggag, M.H. A novel chaotic Salp Swarm Algorithm for global optimization and feature selection. Appl. Intell. 2018, 48, 3462–3481.
21. Gupta, S.; Deep, K. Improved Sine Cosine Algorithm with crossover scheme for global optimization. Knowl. Based Syst. 2019, 165, 374–406.
22. Sun, Y.; Wang, X.; Chen, Y.; Liu, Z. A modified Whale Optimization Algorithm for large-scale global optimization problems. Expert Syst. Appl. 2018, 114, 563–577.
23. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Gazelle Optimization Algorithm: A novel nature-inspired metaheuristic optimizer. Neural Comput. Appl. 2022, 4, 1–33.
24. Haklı, H.; Uğuz, H. A novel particle swarm optimization algorithm with Lévy flight. Appl. Soft Comput. 2014, 23, 333–345.
25. Liu, Y.; Cao, B.; Li, H. Improving ant colony optimization algorithm with epsilon greedy and Lévy flight. Complex Intell. Syst. 2021, 7, 1711–1722.
26. Gao, W.F.; Liu, S.Y.; Huang, L.L. A novel artificial bee colony algorithm based on modified search equation and orthogonal learning. IEEE Trans. Cybern. 2013, 43, 1011–1024.
27. Hu, J.; Chen, H.; Heidari, A.A.; Wang, M.; Zhang, X.; Chen, Y.; Pan, Z. Orthogonal learning covariance matrix for defects of grey wolf optimizer: Insights, balance, diversity, and feature selection. Knowl. Based Syst. 2021, 213, 106684.
28. Rosenbrock, H. An automatic method for finding the greatest or least value of a function. Comput. J. 1960, 3, 175–184.
29. Lewis, R.M.; Torczon, V.; Trosset, M.W. Direct search methods: Then and now. J. Comput. Appl. Math. 2000, 124, 191–207.
30. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
31. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
32. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133.
33. Mirjalili, S. Dragonfly Algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073.
34. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
35. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium Optimizer: A novel optimization algorithm. Knowl. Based Syst. 2020, 191, 105190.
36. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
37. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization Algorithm. Comput. Ind. Eng. 2021, 157, 107250.
38. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
39. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
40. Abd Elaziz, M.; Mirjalili, S. A hyper-heuristic for improving the initial population of whale optimization algorithm. Knowl. Based Syst. 2019, 172, 42–63.
41. Jouhari, H.; Lei, D.; AA Al-qaness, M.; Abd Elaziz, M.; Ewees, A.A.; Farouk, O. Sine-cosine algorithm to enhance simulated annealing for unrelated parallel machine scheduling with setup times. Mathematics 2019, 7, 1120.
42. Song, S.; Wang, P.; Heidari, A.A.; Wang, M.; Zhao, X.; Chen, H.; He, W.; Xu, S. Dimension decided Harris hawks optimization with Gaussian mutation: Balance analysis and diversity patterns. Knowl. Based Syst. 2021, 215, 106425.
43. Luo, J.; Chen, H.; Heidari, A.A.; Xu, Y.; Zhang, Q.; Li, C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl. Math. Model. 2019, 73, 109–123.
44. Chen, H.; Yang, C.; Heidari, A.A.; Zhao, X. An efficient double adaptive random spare reinforced whale optimization algorithm. Expert Syst. Appl. 2020, 154, 113018.
45. Nenavath, H.; Jatoth, R.K. Hybridizing Sine Cosine Algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043.
46. Kumar, N.; Hussain, I.; Singh, B.; Panigrahi, B.K. Single sensor-based MPPT of partially shaded PV system for battery charging by using cauchy and gaussian sine cosine optimization. IEEE Trans. Energy Convers. 2017, 32, 983–992.
47. Abd Elaziz, M.; Oliva, D.; Xiong, S. An improved opposition-based Sine Cosine Algorithm for global optimization. Expert Syst. Appl. 2017, 90, 484–500.
48. Zhu, A.; Xu, C.; Li, Z.; Wu, J.; Liu, Z. Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J. Syst. Eng. Electron. 2015, 26, 317–328.
49. Zhang, Q.; Chen, H.; Heidari, A.A.; Zhao, X.; Xu, Y.; Wang, P.; Li, Y.; Li, C. Chaos-induced and mutation-driven schemes boosting salp chains-inspired optimizers. IEEE Access 2019, 7, 31243–31261.
50. Jia, H.; Lang, C.; Oliva, D.; Song, W.; Peng, X. Dynamic harris hawks optimization with mutation mechanism for satellite image segmentation. Remote. Sens. 2019, 11, 1421.
Figure 1. The gazelle’s grazing pattern denotes exploitation.
Figure 2. Gazelles fleeing the predator is a sign of exploration.
Figure 3. Procedure of the basic GOA.
Figure 4. Procedure of the proposed IGOA.
Figure 5. The influence of the population size tested on the classical test functions.
Figure 6. Qualitative results of the proposed method.
Figure 7. Convergence behaviors of the comparative algorithms on classical test functions (F1–F23).
Figure 8. Convergence behaviors of the comparative algorithms on classical test functions (F1–F13) with higher dimensional sizes (i.e., 100).
Figure 9. The exploration and exploitation of the optimization processes.
Figure 10. The ranking of the tested methods for the data clustering problems.
Table 1. Setting of the comparative methods’ parameters.
No. | Algorithm | Reference | Parameter | Value
1 | SSA | [32] | v0 | 0
2 | WOA | [40] | α | Decreased from 2 to 0
  |  |  | b | 2
3 | SCA | [41] | α | 0.05
4 | DA | [33] | w | 0.2–0.9
  |  |  | s, a, and c | 0.1
  |  |  | f and e | 1
5 | GWO | [34] | Convergence parameter (a) | Linear reduction from 2 to 0
6 | PSO | [36] | Topology | Fully connected
  |  |  | Cognitive and social constant (C1, C2) | 2, 2
  |  |  | Inertia weight | Linear reduction from 0.9 to 0.1
  |  |  | Velocity limit | 10% of dimension range
7 | ALO | [38] | α | ∈ [0, 1]
8 | MPA | [39] | γ | > 1
  |  |  | P | 0.0
9 | EO | [35] | r | 0.5
  |  |  | a | 4
  |  |  | GP | 0.5
10 | AO | [37] | α | 0.1
  |  |  | δ | 0.1
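Several entries in Table 1 are schedules rather than fixed constants, e.g., the GWO convergence parameter a (reduced from 2 to 0) and the PSO inertia weight (reduced from 0.9 to 0.1). The short Python sketch below shows how such linear reductions are commonly realized over the iteration budget; the 500-iteration budget and the function name are illustrative assumptions, not values taken from the paper.

```python
def linear_schedule(start, end, iteration, max_iterations):
    """Linearly interpolate a control parameter from `start` to `end`."""
    return start + (end - start) * iteration / max_iterations

# GWO convergence parameter a: linear reduction from 2 to 0 (Table 1, row 5).
# PSO inertia weight w: linear reduction from 0.9 to 0.1 (Table 1, row 6).
max_iterations = 500  # hypothetical iteration budget
for t in (0, 250, 500):
    a = linear_schedule(2.0, 0.0, t, max_iterations)
    w = linear_schedule(0.9, 0.1, t, max_iterations)
    print(f"iteration {t}: a = {a:.2f}, w = {w:.2f}")
```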
Table 2. Details of the utilized computers.
Name | Setting
Software
- Operating system | 64-Bit
- Windows | Windows 10
- Language | MATLAB R2015a
Hardware
- CPU | Intel(R) Core(TM) i7 processor
- Frequency | 2.3 GHz
- RAM | 16 GB
- Hard disk | 1000 GB
Table 3. Classical benchmark functions.
Function | Description | Dimensions | Range | f_min
F1 | $f(x)=\sum_{i=1}^{n} x_i^2$ | 10, 50, 100, 500 | [−100, 100] | 0
F2 | $f(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$ | 10, 50, 100, 500 | [−10, 10] | 0
F3 | $f(x)=\sum_{i=1}^{n}\left(\sum_{j=1}^{i}x_j\right)^2$ | 10, 50, 100, 500 | [−100, 100] | 0
F4 | $f(x)=\max_i\{|x_i|,\;1\le i\le n\}$ | 10, 50, 100, 500 | [−100, 100] | 0
F5 | $f(x)=\sum_{i=1}^{n-1}\left[100\,(x_{i+1}-x_i^2)^2+(1-x_i)^2\right]$ | 10, 50, 100, 500 | [−30, 30] | 0
F6 | $f(x)=\sum_{i=1}^{n}\left(\lfloor x_i+0.5\rfloor\right)^2$ | 10, 50, 100, 500 | [−100, 100] | 0
F7 | $f(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{random}[0,1)$ | 10, 50, 100, 500 | [−1.28, 1.28] | 0
F8 | $f(x)=\sum_{i=1}^{n}-x_i\sin\!\left(\sqrt{|x_i|}\right)$ | 10, 50, 100, 500 | [−500, 500] | −418.9829 × n
F9 | $f(x)=\sum_{i=1}^{n}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 10, 50, 100, 500 | [−5.12, 5.12] | 0
F10 | $f(x)=-20\exp\!\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\right)-\exp\!\left(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)+20+e$ | 10, 50, 100, 500 | [−32, 32] | 0
F11 | $f(x)=1+\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\!\left(\tfrac{x_i}{\sqrt{i}}\right)$ | 10, 50, 100, 500 | [−600, 600] | 0
F12 | $f(x)=\tfrac{\pi}{n}\left\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right]+(y_n-1)^2\right\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=\begin{cases}k(x_i-a)^m, & x_i>a\\ 0, & -a\le x_i\le a\\ k(-x_i-a)^m, & x_i<-a\end{cases}$ | 10, 50, 100, 500 | [−50, 50] | 0
F13 | $f(x)=0.1\left\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right]+(x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 10, 50, 100, 500 | [−50, 50] | 0
F14 | $f(x)=\left(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | [−65, 65] | 1
F15 | $f(x)=\sum_{i=1}^{11}\left[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\right]^2$ | 4 | [−5, 5] | 0.00030
F16 | $f(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316
F17 | $f(x)=\left(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\right)^2+10\left(1-\tfrac{1}{8\pi}\right)\cos x_1+10$ | 2 | [−5, 5] | 0.398
F18 | $f(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right]\times\left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | [−2, 2] | 3
F19 | $f(x)=-\sum_{i=1}^{4}c_i\exp\!\left(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\right)$ | 3 | [−1, 2] | −3.86
F20 | $f(x)=-\sum_{i=1}^{4}c_i\exp\!\left(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\right)$ | 6 | [0, 1] | −3.32
F21 | $f(x)=-\sum_{i=1}^{5}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.1532
F22 | $f(x)=-\sum_{i=1}^{7}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.4028
F23 | $f(x)=-\sum_{i=1}^{10}\left[(X-a_i)(X-a_i)^T+c_i\right]^{-1}$ | 4 | [0, 10] | −10.5363
Note: F1–F7 are unimodal functions, F8–F13 are multimodal functions, and F14–F23 are fixed-dimension multimodal functions.
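To make the definitions in Table 3 concrete, the following minimal Python sketch implements three representative functions (the unimodal F1 and the multimodal F9 and F10) using their standard forms. The vectorized NumPy style and the function names are illustrative choices, not code taken from the paper.

```python
import numpy as np

def f1_sphere(x):
    # F1: sum of squared components; global minimum 0 at x = 0.
    return np.sum(x ** 2)

def f9_rastrigin(x):
    # F9: multimodal Rastrigin function; global minimum 0 at x = 0.
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f10_ackley(x):
    # F10: Ackley function; global minimum 0 at x = 0.
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

if __name__ == "__main__":
    x = np.zeros(10)  # dimension size 10, as used in Table 4
    print(f1_sphere(x), f9_rastrigin(x), f10_ackley(x))  # all values are 0 at the optimum
```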
Table 4. The results of the proposed IGOA and other comparative methods using classical benchmark functions (F1–F13), where the dimension size is 10.
Measure | SSA | WOA | SCA | DA | GWO | PSO | ALO | MPA | EO | AO | GOA | IGOA
F1
Max5.6687E-063.0768E-299.2720E-033.7901E+024.5916E-186.0401E-035.2574E-011.9657E-293.8998E-269.5870E-811.6001E-404.0016E-111
Mean1.6182E-067.6986E-302.3193E-032.5349E+021.5936E-181.6560E-031.3154E-019.7494E-301.6302E-262.4125E-814.0002E-411.0004E-111
Min1.4128E-071.4455E-369.0439E-081.0313E+021.2000E-197.8798E-067.9433E-054.8694E-303.6988E-271.4810E-851.6323E-1222.5272E-123
Std2.7047E-061.5380E-294.6352E-031.2133E+022.0273E-182.9346E-032.6280E-016.9322E-301.5858E-264.7830E-818.0003E-412.0008E-111
Ranking841012791156231
F2
Max7.7959E+002.1492E-291.1036E-042.2195E+013.3639E-111.6219E-011.9575E+012.6474E-161.4528E-147.8147E-446.4440E-710.0000E+00
Mean3.0662E+005.9613E-304.4043E-051.1944E+011.2412E-111.1260E-018.5054E+001.4902E-163.8156E-152.0429E-441.6110E-710.0000E+00
Min5.4086E-013.1640E-345.9247E-063.0554E+001.8400E-129.3967E-031.4917E+001.7669E-172.9474E-171.0185E-493.4455E-950.0000E+00
Std3.3581E+001.0413E-294.6685E-057.8998E+001.4482E-117.0423E-027.7648E+001.0159E-167.1434E-153.8512E-443.2220E-710.0000E+00
Ranking104812791156321
F3
Max1.2878E+033.1583E+037.2366E+016.6942E+033.5163E-073.4736E+005.4055E+038.8765E-132.7141E-114.1515E-794.0712E-091.8119E-94
Mean5.0275E+021.7896E+033.3041E+014.9705E+031.2577E-071.5269E+003.5168E+032.3968E-131.0682E-111.0379E-791.0178E-094.5299E-95
Min6.5881E+017.8867E+021.0676E-013.4937E+021.0960E-084.5930E-022.0838E+036.1340E-174.4484E-131.3725E-900.0000E+001.4890E-115
Std5.5639E+021.0134E+033.7958E+013.0848E+031.5853E-071.4318E+001.3815E+034.3327E-131.1833E-112.0757E-792.0356E-099.0597E-95
Ranking910812671134251
F4
Max1.1350E+012.4947E+015.2459E-013.4546E+017.0394E-062.5740E-012.5872E+015.0672E-136.4122E-087.0382E-401.9988E-239.7425E-52
Mean4.7755E+001.0417E+013.4400E-011.8807E+012.2793E-061.6819E-011.8858E+013.1893E-131.8603E-081.8001E-404.9970E-242.4506E-52
Min1.0957E+002.8202E+001.5933E-018.1766E+002.3154E-079.0260E-021.2936E+015.3543E-147.8747E-101.2170E-481.8617E-773.5318E-62
Std4.5106E+001.0221E+011.9127E-011.1584E+013.1913E-066.8608E-025.7790E+002.1383E-133.0477E-083.4929E-409.9939E-244.8613E-52
Ranking910811671245231
F5
Max2.4884E+038.9652E+008.9499E+003.9872E+048.0846E+009.6438E+012.9905E+048.5294E+008.7210E+001.9262E-018.5801E+003.4160E-02
Mean1.1873E+038.8175E+008.4259E+002.4852E+047.6262E+003.5639E+018.2018E+037.2907E+007.7837E+008.6155E-028.2137E+001.8406E-02
Min7.7911E+008.5543E+008.0821E+001.3031E+047.1861E+007.4127E+002.5891E+026.2748E+007.1336E+002.0769E-067.7814E+001.4281E-03
Std1.3156E+031.9212E-013.6921E-011.1143E+045.0584E-014.1800E+011.4479E+049.2936E-017.5787E-019.7685E-023.4581E-011.5032E-02
Ranking108712491135261
F6
Max1.8987E-027.1473E-011.4622E+002.5496E+027.5229E-018.3608E-041.2734E-027.7604E-024.9193E-013.3793E-044.6477E-013.2478E-04
Mean6.9763E-034.8733E-011.0843E+002.1284E+023.7321E-013.8147E-043.2223E-031.9401E-021.2318E-011.9276E-043.4021E-019.2145E-05
Min1.1915E-063.2553E-018.8693E-011.8611E+021.4414E-051.9334E-052.9475E-059.0157E-102.0808E-055.9616E-062.5929E-014.4427E-08
Std9.0313E-031.7744E-012.6857E-013.1781E+013.2201E-013.6694E-046.3413E-033.8802E-022.4583E-011.4224E-049.4276E-021.5640E-04
Ranking510111293467281
F7
Max9.4570E-025.1017E-021.3523E-024.6264E-015.0253E-031.6015E-017.5349E-011.4525E-034.0995E-031.5900E-031.0205E-031.4519E-04
Mean7.4271E-021.3559E-027.8508E-032.1954E-013.0991E-038.1841E-024.5660E-011.1726E-031.8678E-035.9058E-045.0738E-049.7675E-05
Min5.6938E-026.9400E-044.3940E-039.3729E-022.1818E-033.0057E-028.7673E-028.4107E-049.6380E-041.7258E-052.2784E-044.4478E-05
Std1.9462E-022.4974E-024.2216E-031.6841E-011.3158E-036.1565E-022.7498E-012.9914E-041.4989E-037.0301E-043.6238E-044.4662E-05
Ranking987116101245321
F8
Max−2.2734E+03−1.8870E+03−1.6176E+03−1.8505E+03−2.1174E+03−1.2648E+03−1.9152E+03−2.8653E+03−2.4220E+03−1.6770E+03−2.0896E+03−2.7694E+07
Mean−2.5962E+03−2.7810E+03−1.9484E+03−2.1719E+03−2.4186E+03−1.4903E+03−2.0689E+03−3.3299E+03−2.8783E+03−2.4188E+03−2.2345E+03−1.2332E+08
Min−3.1435E+03−3.4289E+03−2.3834E+03−2.7042E+03−2.6000E+03−1.6333E+03−2.3523E+03−3.5369E+03−3.4762E+03−4.1864E+03−2.3109E+03−2.5083E+08
Std3.8896E+026.4835E+023.2334E+023.7546E+022.0990E+021.6288E+022.0383E+023.1353E+024.4133E+021.1838E+039.9463E+011.0036E+08
Ranking541197121023681
F9
Max3.6813E+011.5062E+001.2066E+007.0022E+014.7147E+003.0419E+017.6611E+011.4211E-143.0381E+002.6242E-010.0000E+000.0000E+00
Mean2.5869E+013.7655E-013.0890E-015.7496E+012.6264E+001.7201E+015.7956E+013.5527E-151.0083E+006.5605E-020.0000E+000.0000E+00
Min1.5920E+010.0000E+002.0227E-064.7361E+014.2633E-145.9917E+003.7810E+010.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
Std9.0092E+007.5310E-015.9859E-011.0614E+011.9778E+001.0360E+011.7468E+017.1054E-151.4322E+001.3121E-010.0000E+000.0000E+00
Ranking106511891237411
F10
Max1.9965E+014.3521E-141.3360E+011.5782E+013.2645E-102.3169E+001.5140E+017.9936E-157.6827E-138.8818E-168.8818E-168.8818E-16
Mean7.0934E+001.6875E-143.3500E+001.0033E+012.0063E-105.8999E-011.2971E+015.3291E-152.7711E-138.8818E-168.8818E-168.8818E-16
Min1.6466E+008.8818E-164.6053E-053.4909E+004.7813E-113.8237E-039.6145E+004.4409E-155.0626E-148.8818E-168.8818E-168.8818E-16
Std8.6251E+001.8687E-146.6735E+005.0775E+001.3298E-101.1513E+002.3725E+001.7764E-153.3477E-130.0000E+000.0000E+000.0000E+00
Ranking105911781246111
F11
Max1.9439E-010.0000E+004.7522E-017.2161E+006.3368E-025.2703E+005.5890E-011.7339E-022.6300E-020.0000E+003.5947E-020.0000E+00
Mean1.5599E-010.0000E+003.1614E-014.3403E+003.9139E-022.1440E+002.5481E-014.3347E-039.0506E-030.0000E+008.9867E-030.0000E+00
Min9.1527E-020.0000E+005.0005E-022.1045E+002.6828E-022.9533E-013.7795E-020.0000E+000.0000E+000.0000E+000.0000E+000.0000E+00
Std4.8640E-020.0000E+001.9205E-012.1867E+001.6448E-022.1658E+002.3401E-018.6694E-031.2411E-020.0000E+001.7973E-020.0000E+00
Ranking811012711946151
F12
Max1.1753E+012.6635E-011.8565E+001.7519E+011.5264E-016.1327E-044.0024E+011.5225E-022.0023E-023.8124E-043.3640E-012.5541E-04
Mean5.2622E+002.2869E-016.9748E-017.8833E+007.3856E-021.8414E-041.5694E+013.8159E-038.8384E-032.1247E-042.8413E-011.7172E-04
Min1.2927E+001.8560E-011.4509E-011.5513E+004.2429E-025.1148E-066.8637E+002.2171E-095.5321E-068.5093E-052.0181E-017.8122E-05
Std4.6576E+003.5965E-027.8947E-016.8630E+005.3004E-022.8779E-041.6242E+017.6063E-031.0349E-021.2764E-045.7632E-027.3077E-05
Ranking107911621245381
F13
Max2.5967E+005.1313E-019.8470E-013.6598E+055.2436E-011.0989E-021.7543E+015.4243E-022.4034E-017.9376E-039.9617E-016.4710E-04
Mean6.9400E-014.6523E-016.8633E-019.1861E+042.9147E-012.7604E-038.9895E+001.8705E-021.6238E-011.9946E-038.1993E-012.2333E-04
Min2.8184E-024.4162E-015.5358E-012.2382E+001.0911E-012.5130E-073.0920E-021.7929E-081.0272E-011.1894E-056.8063E-014.5897E-05
Std1.2691E+003.3200E-022.0330E-011.8275E+051.7691E-015.4854E-038.3440E+002.5601E-025.9648E-023.9620E-031.3519E-012.8682E-04
Ranking978126311452101
Friedman test
Mean Rank | 8.62 | 6.46 | 8.54 | 11.38 | 6.62 | 7.62 | 10.62 | 3.92 | 5.38 | 2.54 | 4.77 | 1.00
Final Ranking | 10 | 6 | 9 | 12 | 7 | 8 | 11 | 3 | 5 | 2 | 4 | 1
Table 5. The results of the proposed IGOA and other comparative methods using classical benchmark functions (F14–F23), where the dimension sizes are fixed.
Measure | SSA | WOA | SCA | DA | GWO | PSO | ALO | MPA | EO | AO | GOA | IGOA
F14
Max2.9821E+001.0763E+012.9821E+001.5504E+011.0763E+011.5504E+012.3809E+011.2671E+011.0763E+011.2671E+011.2671E+011.9920E+00
Mean1.4940E+003.9616E+002.4866E+005.3690E+008.8179E+001.1203E+011.0637E+015.1574E+004.6720E+007.0828E+001.2194E+011.2465E+00
Min9.9800E-019.9800E-011.0001E+009.9800E-012.9821E+005.9288E+003.9683E+001.9920E+009.9800E-019.9800E-011.0763E+019.9800E-01
Std9.9205E-014.6251E+009.9101E-016.8049E+003.8905E+005.0297E+009.0070E+005.0305E+004.6790E+006.4649E+009.5366E-014.9701E-01
Ranking243791110658121
F15
Max1.7018E-034.3071E-031.5907E-032.7670E-021.4827E-031.7771E-031.8452E-023.3202E-042.0363E-021.2317E-038.9716E-026.4899E-04
Mean1.5560E-031.6586E-031.3341E-031.2183E-026.4339E-041.0875E-037.7714E-033.1617E-041.0343E-027.0995E-044.7898E-025.0192E-04
Min1.4119E-033.3772E-048.9731E-046.5750E-043.1075E-047.6724E-042.9649E-033.0749E-043.0935E-043.3906E-042.9486E-024.4828E-04
Std1.5860E-041.7941E-033.0634E-041.1296E-025.6338E-044.7034E-047.3060E-031.1593E-051.1570E-024.0040E-042.8449E-029.8118E-05
Ranking786113591104122
F16
Max−1.0316E+00−1.0316E+00−1.0000E+00−1.0308E+00−1.0316E+00−1.0291E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0096E+00−1.0316E+00−1.0316E+00
Mean−1.0316E+00−1.0316E+00−1.0236E+00−1.0313E+00−1.0316E+00−1.0308E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0198E+00−1.0316E+00−1.0316E+00
Min−1.0316E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0316E+00−1.0249E+00−1.0316E+00−1.0316E+00
Std1.1358E-139.0053E-061.5707E-023.2551E-047.7555E-081.1721E-036.8452E-131.8559E-133.4995E-147.0603E-032.2852E-070.0000E+00
Ranking481196105321271
F17
Max3.9789E-014.2689E-014.3137E-013.9874E-013.9795E-014.8050E-013.9789E-013.9789E-013.9789E-014.0216E-011.5356E+003.9789E-01
Mean3.9789E-014.0740E-014.1413E-013.9832E-013.9791E-014.2242E-013.9789E-013.9789E-013.9789E-013.9922E-019.0263E-013.9789E-01
Min3.9789E-013.9866E-013.9906E-013.9789E-013.9789E-013.9863E-013.9789E-013.9789E-013.9789E-013.9790E-014.4057E-013.9789E-01
Std1.6369E-141.3370E-021.6334E-023.5436E-042.6611E-053.8869E-024.9152E-135.5505E-132.7422E-092.0199E-035.2840E-010.0000E+00
Ranking291076114358121
F18
Max3.0000E+003.0579E+013.0022E+003.0021E+008.4001E+013.0542E+003.0000E+003.0000E+003.0000E+003.6799E+008.4339E+013.0000E+00
Mean3.0000E+001.6688E+013.0013E+003.0010E+003.0001E+013.0141E+003.0000E+003.0000E+003.0000E+003.3010E+002.4463E+013.0000E+00
Min3.0000E+003.0053E+003.0007E+003.0001E+003.0001E+003.0000E+003.0000E+003.0000E+003.0000E+003.0190E+003.0000E+003.0000E+00
Std6.2815E-131.5784E+016.6423E-049.1491E-043.8184E+012.6794E-022.6479E-112.2338E-133.8520E-072.7651E-013.9974E+011.4950E-15
Ranking310761284259111
F19
Max−3.8572E+00−3.0723E+00−3.8398E+00−3.6103E+00−3.8549E+00−3.5817E+00−3.8596E+00−3.0898E+00−3.8609E+00−3.7221E+00−3.8227E+00−3.8628E+00
Mean−3.8596E+00−3.2723E+00−3.8494E+00−3.7415E+00−3.8601E+00−3.7763E+00−3.8610E+00−3.6695E+00−3.8623E+00−3.7943E+00−3.8444E+00−3.8628E+00
Min−3.8627E+00−3.8468E+00−3.8573E+00−3.8471E+00−3.8628E+00−3.8468E+00−3.8628E+00−3.8628E+00−3.8628E+00−3.8522E+00−3.8587E+00−3.8628E+00
Std2.5466E-033.8304E-017.7153E-031.1360E-013.6683E-031.2987E-011.3784E-033.8651E-019.5396E-046.3004E-021.5338E-024.4409E-16
Ranking512610493112871
F20
Max−3.1979E+00−1.7529E+00−1.2348E+00−3.1440E+00−2.2762E+00−3.2031E+00−2.5329E+00−3.3220E+00−3.2030E+00−2.5225E+00−2.6051E+00−3.3220E+00
Mean−3.2609E+00−2.7847E+00−2.2059E+00−3.1943E+00−2.9987E+00−3.2328E+00−2.7032E+00−3.3220E+00−3.2328E+00−2.8380E+00−2.8984E+00−3.3220E+00
Min−3.3220E+00−3.1841E+00−3.0581E+00−3.3162E+00−3.3219E+00−3.3220E+00−2.8814E+00−3.3220E+00−3.3220E+00−3.1264E+00−3.0035E+00−3.3220E+00
Std7.0573E-026.8900E-019.4395E-018.1983E-024.8517E-015.9447E-021.6440E-014.0937E-105.9494E-022.6187E-011.9564E-011.7172E-10
Ranking310126741125981
F21
Max−2.6305E+00−4.8157E+00−3.5065E-01−5.0500E+00−5.0966E+00−2.6829E+00−2.6305E+00−4.9540E+00−8.8199E-01−9.2575E+00−1.5798E+00−1.0153E+01
Mean−7.0094E+00−4.8932E+00−7.5598E-01−6.8195E+00−8.8813E+00−8.0375E+00−5.7235E+00−8.0029E+00−4.0233E+00−9.7647E+00−2.4100E+00−1.0153E+01
Min−1.0153E+01−5.0032E+00−1.3155E+00−8.6125E+00-1.0150E+01−1.0153E+01−1.0153E+01−9.9566E+00−5.1007E+00-1.0152E+01−3.5016E+00−1.0153E+01
Std3.7676E+007.9719E-024.3024E-012.0287E+002.5232E+003.6003E+003.1666E+002.2399E+002.0943E+004.3512E-018.3575E-011.3286E-08
Ranking691273485102111
F22
Max−2.7519E+00−9.0585E-01−9.0510E-01−1.8369E+00−1.0393E+01−1.8376E+00−5.1288E+00−5.0801E+00−5.0877E+00−5.0645E+00−1.2329E+00−1.0403E+01
Mean−6.8205E+00−5.1360E+00−1.4838E+00−3.3195E+00−1.0399E+01−4.4431E+00−9.0844E+00−8.7305E+00−6.4165E+00−9.0095E+00−2.1487E+00−1.0403E+01
Min−1.0403E+01−1.0235E+01−2.4565E+00−5.0164E+00−1.0403E+01−1.0403E+01−1.0403E+01−1.0272E+01−1.0403E+01−1.0399E+01−3.0909E+00−1.0403E+01
Std4.1556E+003.8525E+007.4057E-011.3673E+004.3456E-033.9973E+002.6371E+002.4694E+002.6576E+002.6311E+008.0696E-011.5991E-08
Ranking681210293574111
F23
Max−2.8711E+00−2.2850E+00−1.8691E+00−1.8570E+00−5.0079E+00−2.8711E+00−2.4217E+00−3.8354E+00−3.8354E+00−1.0183E+01−3.4634E+00−1.0522E+01
Mean−7.2799E+00−3.8940E+00−2.0020E+00−4.1990E+00−7.5155E+00−6.9448E+00−4.4504E+00−8.8612E+00−7.1852E+00−1.0361E+01−5.1392E+00−1.0526E+01
Min−1.0536E+01−5.0669E+00−2.0959E+00−1.0215E+01−1.0019E+01−1.0536E+01−1.0536E+01−1.0536E+01−1.0536E+01−1.0515E+01−8.7041E+00−1.0532E+01
Std3.8762E+001.3956E+001.1257E-014.0177E+002.8379E+004.1658E+004.0573E+003.3505E+003.8680E+001.3645E-012.4546E+004.4536E-03
Ranking511121047936281
Friedman test
Mean Rank | 3.91 | 8.09 | 8.27 | 7.55 | 5.09 | 7.09 | 6.00 | 3.73 | 5.18 | 6.00 | 9.00 | 1.00
Final Ranking | 3 | 10 | 11 | 9 | 4 | 8 | 6 | 2 | 5 | 6 | 12 | 1
Table 6. The results of the Wilcoxon ranking test of the comparative methods using 23 classical benchmark functions.
Measure | SSA | WOA | SCA | DA | GWO | PSO | ALO | MPA | EO | AO | GOA
F1
p-value2.7661E-023.5541E-023.5559E-015.8221E-031.6699E-023.0216E-024.3554E-023.0643E-023.5536E-023.5201E-013.5592E-01
Sign11011111100
F2
p-value1.1761E-022.9581E-011.0812E-022.3280E-021.3734E-011.8648E-024.1002E-022.6155E-023.2647E-023.2957E-023.5592E-02
Sign10110111111
F3
p-value1.2074E-021.2344E-021.3234E-021.8078E-021.6370E-014.6919E-022.2403E-033.1094E-011.2106E-023.5592E-023.5592E-02
Sign11110110111
F4
p-value7.8557E-028.7641E-021.1406E-021.7528E-022.0308E-012.7028E-036.1763E-042.4542E-022.6797E-013.4241E-013.5592E-01
Sign00110111000
F5
p-value1.2113E-011.1615E-107.5441E-094.2798E-038.9790E-081.3920E-023.0048E-014.3146E-068.7920E-072.1946E-015.9453E-09
Sign01111101101
F6
p-value1.7828E-011.6523E-032.0092E-041.0725E-054.3337E-021.9479E-015.2165E-014.5575E-023.8125E-011.8377E-014.1149E-04
Sign01111001001
F7
p-value2.7429E-043.3626E-011.3366E-024.0602E-028.9881E-033.8431E-021.6060E-022.9906E-021.2811E-018.4033E-016.5965E-02
Sign10111111000
F8
p-value4.9287E-024.9287E-024.9286E-024.9286E-024.9287E-024.9285E-024.9286E-024.9288E-024.9287E-024.9287E-024.9286E-02
Sign11111111111
F9
p-value1.2123E-033.5592E-013.4183E-013.6625E-053.7729E-021.5993E-025.6505E-043.5592E-012.0877E-013.5592E-011.2452E-02
Sign10011110001
F10
p-value1.5111E-021.3792E-023.5415E-027.5193E-032.3472E-023.4494E-013.4726E-052.4523E-031.4998E-013.8431E-027.5441E-09
Sign11111011011
F11
p-value6.7778E-041.0725E-051.6568E-027.3688E-033.1295E-034.5040E-027.2286E-023.5592E-011.9499E-014.3146E-063.5592E-01
Sign11111100010
F12
p-value4.5898E-021.4565E-051.2774E-014.1330E-023.1982E-029.3609E-011.0150E-013.7499E-021.4498E-015.9953E-016.2976E-05
Sign11011001001
F13
p-value3.1619E-011.3697E-075.1562E-043.5354E-011.6561E-023.9124E-017.4624E-021.9893E-011.6070E-034.0685E-011.9106E-05
Sign01101000101
F14
p-value2.0296E-027.3831E-013.3765E-022.9617E-022.9344E-021.4011E-023.2899E-021.7275E-028.9226E-014.6549E-023.3358E-02
Sign10111111011
F15
p-value2.8670E-052.4533E-012.0663E-034.1099E-024.6384E-024.0643E-029.3738E-021.3982E-019.3964E-033.5181E-011.5768E-02
Sign10111100101
F16
p-value2.2168E-012.2493E-013.9237E-014.9464E-022.2172E-012.2168E-012.2168E-012.2168E-012.2168E-012.1779E-022.2188E-01
Sign00010000010
F17
p-value2.5376E-024.9263E-027.0800E-012.6141E-022.5417E-022.5376E-022.5376E-022.5376E-022.5376E-022.7835E-021.1982E-02
Sign11011111111
F18
p-value3.3442E-021.3387E-023.7619E-023.6649E-012.0723E-023.3442E-023.3442E-023.3442E-023.3443E-024.4389E-023.2447E-02
Sign11101111111
F19
p-value2.4717E-024.7013E-023.0434E-026.9990E-012.4501E-022.3149E-022.4026E-026.1910E-012.3379E-028.1141E-013.3805E-02
Sign11101110101
F20
p-value7.8799E-048.2574E-013.3926E-011.7493E-032.9256E-019.1595E-042.8465E-042.8465E-049.1707E-044.1671E-011.7737E-01
Sign10010111100
F21
p-value6.6625E-033.2211E-027.1209E-044.6331E-026.2122E-019.8752E-012.8439E-021.0326E-024.0904E-021.7348E-013.3997E-03
Sign11110011101
F22
p-value4.5947E-011.6723E-021.3534E-038.6195E-032.2524E-011.1780E-028.5115E-012.2436E-022.4921E-021.8822E-022.2948E-03
Sign01110101111
F23
p-value9.2506E-016.1916E-028.1461E-032.2618E-014.8084E-028.2839E-012.6193E-025.6241E-012.8950E-022.2042E-022.5223E-02
Sign00101010111
Total | 16 | 14 | 17 | 19 | 16 | 16 | 15 | 15 | 14 | 11 | 17
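Table 6 reports, for each function, the p-value of the Wilcoxon signed-rank test between the IGOA and each competitor, together with a significance flag ("Sign"). As a hedged illustration of how such a paired test can be computed, the Python sketch below applies scipy.stats.wilcoxon to hypothetical per-run results; the arrays and the 30-run setup are placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-run best fitness values on one benchmark function
# (30 independent runs per algorithm); replace with real experiment logs.
rng = np.random.default_rng(0)
igoa_runs = rng.normal(loc=1e-5, scale=1e-6, size=30)
goa_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)

# Paired two-sided Wilcoxon signed-rank test on the run-wise differences.
stat, p_value = wilcoxon(igoa_runs, goa_runs)

# "Sign" row analogue: 1 if the difference is significant at the 5% level.
sign = int(p_value < 0.05)
print(f"p-value = {p_value:.4e}, significant = {sign}")
```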
Table 7. The results of the proposed IGOA and other comparative methods using classical benchmark functions (F1–F13), where the dimension size is 50.
Measure | SSA | WOA | SCA | DA | GWO | PSO | ALO | MPA | EO | AO | GOA | IGOA
F1
Max1.6218E+042.1984E-215.5910E+031.6915E+041.2783E-061.8019E+022.8149E+043.6023E-231.3670E-135.3554E-796.6370E-031.5722E-79
Mean1.1851E+045.4959E-222.4330E+031.1095E+046.6143E-078.8624E+012.1047E+041.1866E-234.4250E-141.3389E-793.1433E-033.9304E-80
Min6.9875E+034.3444E-359.0817E+024.7960E+032.2679E-073.1343E+011.4298E+041.5924E-244.0144E-161.0172E-865.1212E-052.7584E-106
Std3.8851E+031.0992E-212.1585E+035.0664E+034.8487E-076.3910E+016.5360E+031.6212E-236.3336E-142.6777E-793.4204E-037.8608E-80
Ranking114910681235271
F2
Max1.4023E+056.2386E-237.3347E+001.5847E+022.6315E-045.9956E+012.0823E+025.0305E-138.1913E-091.4280E-399.0945E-424.7159E-42
Mean4.9211E+041.5787E-234.5222E+001.1268E+021.3932E-043.9135E+011.6509E+021.7435E-135.4547E-094.1641E-402.2736E-421.2189E-42
Min1.3926E+024.2176E-306.1124E-016.7739E+014.5768E-052.2800E+017.3738E+012.9129E-142.6583E-091.1092E-424.7221E-591.8606E-51
Std6.6198E+043.1067E-233.0088E+004.3988E+019.3082E-051.5398E+016.3429E+012.2192E-132.2952E-096.7921E-404.5473E-422.3326E-42
Ranking124810791156321
F3
Max6.5413E+046.8367E+051.0594E+052.4096E+051.7714E+031.4516E+041.5527E+051.9532E+006.2031E+011.0137E-764.8398E-013.9901E-84
Mean5.8588E+044.5196E+057.2010E+041.8319E+058.7469E+021.0734E+041.2098E+056.1570E-011.7256E+012.5343E-772.4046E-019.9751E-85
Min5.1191E+041.8831E+054.2268E+041.1646E+051.4849E+028.7526E+031.0824E+055.9612E-041.3599E-028.5573E-1001.4096E-011.5318E-116
Std6.2005E+032.3483E+053.3319E+045.1089E+046.8684E+022.6554E+032.2920E+049.2270E-012.9999E+015.0687E-771.6308E-011.9950E-84
Ranking812911671045231
F4
Max6.8899E+019.1800E+018.6073E+019.1337E+016.4686E+002.6396E+016.6814E+017.2917E-091.7832E-031.7153E-407.7673E-025.5416E-47
Mean6.3772E+017.8942E+018.1343E+015.4232E+014.5319E+002.4381E+015.5044E+013.3935E-097.1452E-044.2899E-417.2182E-021.4531E-47
Min6.0251E+014.1870E+017.4059E+012.3414E+012.8701E+002.2176E+014.2960E+011.6458E-092.1465E-042.7587E-526.7869E-022.8523E-65
Std3.6504E+002.4719E+015.2408E+002.8233E+011.7816E+001.7632E+001.0941E+012.6186E-097.2951E-048.5756E-414.1536E-032.7287E-47
Ranking101112867934251
F5
Max1.1466E+074.8878E+012.5207E+072.0836E+074.8797E+016.0770E+043.2361E+074.8543E+014.8836E+019.2352E-014.8946E+017.5327E-02
Mean8.1869E+064.8787E+011.4245E+071.1847E+074.8365E+013.2787E+042.1723E+074.8080E+014.8736E+012.6935E-014.8824E+012.3955E-02
Min4.7476E+064.8704E+011.8179E+053.5789E+064.7957E+017.7683E+039.9082E+064.7882E+014.8561E+019.5134E-044.8635E+012.7754E-04
Std3.1123E+068.7276E-021.1337E+077.0631E+064.0259E-012.6510E+041.1468E+073.1421E-011.2161E-014.3930E-011.4145E-013.5332E-02
Ranking961110481235271
F6
Max1.9156E+048.1525E+007.6716E+032.0207E+048.4984E+001.2644E+022.4720E+043.9668E+006.4815E+001.2152E-029.1091E+001.0081E-01
Mean1.2957E+047.3447E+003.6820E+031.5112E+047.6074E+009.2171E+011.9583E+043.6884E+006.0817E+003.7421E-038.7934E+004.2876E-02
Min1.0247E+046.2648E+006.0131E+024.7529E+037.0766E+004.9020E+011.4009E+043.3131E+005.8648E+003.6313E-078.0696E+002.7665E-03
Std4.1735E+037.8671E-012.9352E+037.0247E+036.2472E-013.2030E+014.5631E+032.8150E-012.7298E-015.6647E-034.8543E-014.1387E-02
Ranking105911681234172
F7
Max2.9031E+013.1956E-029.8864E+002.2178E+013.0870E-024.9951E+023.9801E+013.1534E-031.7831E-028.0405E-041.2233E-036.5384E-04
Mean1.6099E+011.6597E-024.9402E+001.4279E+011.7940E-021.8972E+022.6277E+012.3485E-039.1412E-033.8734E-046.6861E-043.0638E-04
Min8.8668E+008.2613E-041.0335E+005.3067E+008.2160E-034.6662E+018.5259E+001.8104E-034.3024E-031.0980E-041.3892E-044.0132E-05
Std8.8703E+001.6283E-023.6637E+007.6343E+001.0628E-022.0972E+021.3917E+015.8897E-046.3451E-032.9642E-044.8153E-042.6922E-04
Ranking106897121145231
F8
Max−8.7775E+03−1.3214E+04−3.8859E+03−3.8665E+03−7.1314E+03−2.7860E+03−9.0295E+03−1.0965E+04−9.4877E+03−4.1272E+03−5.6384E+03−2.7814E+06
Mean−9.4181E+03−1.7744E+04−4.1214E+03−4.6595E+03−8.3000E+03−4.1270E+03−9.0295E+03−1.1250E+04−1.0613E+04−4.7543E+03−6.0615E+03−1.9399E+09
Min−1.0788E+04−2.0805E+04−4.3281E+03−6.2820E+03−9.1511E+03−5.2134E+03−9.0295E+03−1.1795E+04−1.2302E+04−6.1055E+03−6.6070E+03−5.7275E+09
Std9.2277E+023.6361E+031.9671E+021.1238E+039.8483E+021.0172E+030.0000E+003.7047E+021.1959E+039.1748E+024.6042E+022.6775E+09
Ranking521210711634981
F9
Max4.0940E+026.7153E-072.2995E+026.3180E+022.1152E+014.0120E+023.6963E+020.0000E+002.8863E+000.0000E+000.0000E+000.0000E+00
Mean3.8197E+021.6788E-071.4151E+025.6403E+029.0516E+003.5740E+023.3235E+020.0000E+007.2156E-010.0000E+000.0000E+000.0000E+00
Min3.6797E+020.0000E+004.2817E+015.1762E+021.6947E+003.3564E+023.0105E+020.0000E+005.6843E-130.0000E+000.0000E+000.0000E+00
Std1.9120E+013.3576E-077.7491E+014.8844E+018.4071E+002.9652E+013.3984E+010.0000E+001.4431E+000.0000E+000.0000E+000.0000E+00
Ranking115812710916111
F10
Max1.9306E+016.9722E-132.0650E+011.9961E+013.4051E-046.9781E+001.8075E+015.3024E-132.0638E-088.8818E-161.3498E-088.8818E-16
Mean1.8195E+011.8741E-131.8678E+011.7540E+012.0529E-045.4526E+007.9782E+003.7126E-131.1748E-088.8818E-163.3745E-098.8818E-16
Min1.7747E+014.4409E-151.2972E+011.6521E+016.9576E-054.3857E+008.8818E-162.2116E-134.6971E-098.8818E-168.8818E-168.8818E-16
Std7.4469E-013.4025E-133.8039E+001.6273E+001.2266E-041.1117E+008.7915E+001.3779E-136.6047E-090.0000E+006.7490E-090.0000E+00
Ranking113121078946151
F11
Max1.7362E+021.1102E-161.0601E+014.3677E+027.4534E-021.9413E+023.1461E+020.0000E+001.0418E-020.0000E+003.4754E+020.0000E+00
Mean1.2334E+025.5511E-176.9297E+002.3595E+021.8637E-021.3345E+021.9353E+020.0000E+002.6046E-030.0000E+002.1478E+020.0000E+00
Min9.4444E+010.0000E+001.2481E+001.1602E+024.0480E-071.0472E+029.0246E+010.0000E+001.9984E-150.0000E+001.2010E+020.0000E+00
Std3.4603E+016.4099E-174.2980E+001.4060E+023.7264E-024.2027E+019.7164E+010.0000E+005.2092E-030.0000E+009.6724E+010.0000E+00
Ranking847126910151111
F12
Max6.6041E+067.6181E-011.7325E+082.9504E+068.7300E-011.2819E+012.4959E+072.8493E-014.6137E-017.7594E-059.6619E-015.2576E-05
Mean4.7716E+066.7329E-018.5889E+071.3861E+065.2826E-018.2059E+009.8564E+062.0643E-013.9241E-013.0632E-059.3607E-012.1443E-05
Min7.2951E+054.2717E-015.0216E+076.7839E+054.0129E-013.6242E+001.9815E+061.1744E-013.2544E-011.9160E-069.0405E-013.9386E-07
Std2.7700E+061.6419E-015.8718E+071.0568E+062.2999E-014.0047E+001.0301E+076.8716E-026.0555E-023.2779E-052.8743E-022.5393E-05
Ranking106129581134271
F13
Max1.8640E+084.0618E+003.5703E+085.1925E+074.5126E+009.2536E+011.3806E+084.6401E+003.9209E+007.1058E-035.0124E+005.0486E-04
Mean7.4528E+073.2654E+001.3817E+082.0251E+074.1590E+007.7931E+019.0749E+074.1691E+003.6951E+003.4706E-034.9641E+001.9322E-04
Min2.7158E+072.5325E+003.4788E+067.2943E+063.7958E+003.5409E+016.6837E+073.7053E+003.4860E+003.5106E-044.9044E+003.0919E-05
Std7.5206E+078.1278E-011.6310E+082.1185E+073.3293E-012.8350E+013.3005E+073.8267E-011.7869E-012.9141E-035.0203E-022.1632E-04
Ranking103129581164271
Friedman test
Mean Rank | 9.62 | 5.46 | 9.92 | 10.08 | 6.08 | 8.69 | 10.23 | 3.31 | 4.85 | 2.31 | 5.62 | 1.08
Final Ranking | 9 | 5 | 10 | 11 | 7 | 8 | 12 | 3 | 4 | 2 | 6 | 1
Table 8. The results of the proposed IGOA and other comparative methods using classical benchmark functions (F1–F13), where the dimension size is 100.
Measure | SSA | WOA | SCA | DA | GWO | PSO | ALO | MPA | EO | AO | GOA | IGOA
F1
Max6.4450E+042.2100E-251.8971E+046.1119E+041.9274E-032.6248E+031.0401E+052.1620E-211.7958E-114.7366E-836.4311E-023.2757E-97
Mean5.8329E+046.0754E-261.5087E+043.0927E+041.2495E-031.8935E+038.6209E+048.2788E-228.9226E-121.1847E-834.5752E-028.1893E-98
Min4.6369E+046.0190E-301.0715E+047.4207E+038.4757E-041.2104E+035.7437E+045.0547E-238.6992E-133.0147E-933.5219E-024.1461E-126
Std8.2402E+031.0732E-254.0115E+032.3071E+044.9092E-046.0029E+022.0334E+049.2942E-229.1868E-122.3680E-831.3744E-021.6379E-97
Ranking113910681245271
F2
Max1.1688E+253.4538E-234.4337E+013.2154E+027.1401E-032.2877E+024.8464E+027.1408E-139.6007E-081.0283E-441.4934E-127.6707E-49
Mean2.9221E+248.6966E-243.5901E+012.8186E+024.9435E-031.9384E+024.0496E+023.6888E-136.5048E-083.7138E-454.0823E-131.9177E-49
Min3.6567E+074.6747E-272.3812E+012.0375E+023.8459E-031.5887E+022.2760E+021.6618E-135.5969E-095.0295E-545.8608E-242.8573E-57
Std5.8441E+241.7228E-239.0612E+005.3474E+011.5476E-033.1984E+011.1956E+022.4632E-134.1719E-084.5717E-457.2438E-133.8354E-49
Ranking123810791146251
F3
Max3.0940E+052.9857E+064.5705E+057.2240E+052.6564E+049.1971E+045.1265E+051.6998E+021.3121E+049.4571E-792.1274E+009.9554E-97
Mean2.3669E+051.7822E+062.9752E+056.5700E+051.9553E+048.5210E+044.3074E+055.5700E+014.8385E+032.3644E-798.8560E-015.5818E-97
Min1.6464E+059.4484E+052.0399E+055.8488E+051.2861E+047.6831E+043.9466E+052.5961E-019.3275E+013.7128E-933.2007E-014.0772E-112
Std6.7513E+049.2686E+051.1076E+056.5245E+046.5376E+036.8786E+035.5000E+047.7934E+016.1542E+034.7284E-798.4679E-014.1342E-97
Ranking812911671045231
F4
Max8.9996E+019.5933E+019.7350E+019.0926E+013.6873E+014.2037E+019.2671E+017.6362E-084.6973E+004.0962E-401.3932E-015.7872E-40
Mean8.1540E+018.5303E+019.5168E+017.3538E+013.0380E+013.8428E+017.3165E+013.6609E-081.3036E+001.0252E-401.1315E-011.4468E-40
Min7.1679E+016.9219E+019.2087E+015.7173E+012.7215E+013.6876E+015.9800E+019.5124E-091.5244E-029.4961E-618.1765E-027.0045E-51
Std7.5223E+001.1439E+012.2576E+001.5401E+014.4120E+002.4303E+001.4777E+012.8403E-082.2658E+002.0473E-402.7412E-022.8936E-40
Ranking101112967835142
F5
Max1.3626E+089.8819E+012.9568E+081.6827E+089.9508E+012.3804E+062.1146E+089.8561E+019.8782E+011.2431E+009.9003E+011.0128E+00
Mean1.1233E+089.8685E+011.8065E+081.3770E+089.8798E+011.6261E+061.3225E+089.8410E+019.8675E+016.8750E-019.8967E+012.5496E-01
Min9.6365E+079.8552E+011.0803E+089.1133E+079.7860E+011.1455E+067.1318E+079.7992E+019.8452E+012.2469E-019.8940E+017.9814E-04
Std1.7408E+071.1599E-018.7119E+073.6996E+076.8924E-015.3146E+055.8216E+072.7899E-011.5157E-014.5514E-012.7347E-025.0525E-01
Ranking951211681034271
F6
Max8.9225E+041.7458E+014.4523E+044.8419E+041.8732E+014.5079E+031.1998E+051.5020E+011.8757E+014.8547E-022.0854E+019.9236E-02
Mean6.7793E+041.5675E+012.2272E+043.1812E+041.7622E+012.3132E+038.8804E+041.4006E+011.7521E+011.4148E-022.0176E+013.3483E-02
Min5.0618E+041.4788E+015.6090E+031.7590E+041.6555E+018.9842E+025.7004E+041.2863E+011.5730E+014.5258E-061.9503E+012.3732E-03
Std1.6012E+041.2371E+001.8107E+041.4850E+048.8996E-011.5634E+033.2783E+049.4087E-011.3357E+002.3070E-026.8445E-014.4461E-02
Ranking114910681235172
F7
Max2.7204E+021.8463E-023.9305E+023.0559E+021.0116E-012.0273E+032.0268E+024.6610E-031.1035E-021.8323E-038.5588E-041.0198E-03
Mean1.4734E+029.5542E-031.6825E+021.9049E+025.8300E-021.7556E+031.3493E+022.3908E-038.0461E-038.9543E-045.3393E-043.7064E-04
Min7.9767E+016.0216E-037.5642E+011.0019E+022.5231E-021.4554E+035.2479E+011.5111E-036.0757E-036.8527E-061.7860E-052.0470E-05
Std8.6798E+015.9534E-031.5044E+028.9961E+013.5648E-022.5530E+026.5760E+011.5163E-032.2743E-038.5082E-043.6001E-044.6821E-04
Ranking961011712845321
F8
Max−1.3646E+04−2.1492E+04−5.3732E+03−6.6223E+03−1.1677E+04−5.1598E+03−1.8059E+04−1.6364E+04−1.4114E+04−4.9756E+03−7.1045E+03−4.9875E+06
Mean−1.5292E+04−2.7471E+04−5.9623E+03−7.6416E+03−1.3239E+04−5.9618E+03−1.8059E+04−1.8456E+04−1.7127E+04-6.9425E+03−7.5314E+03−1.1856E+11
Min−1.6871E+04−3.7725E+04−6.2835E+03−8.9144E+03−1.4785E+04−7.7160E+03−1.8059E+04−2.0714E+04−2.1019E+04−8.8904E+03−8.5248E+03−4.7145E+11
Std1.3184E+037.0723E+034.0828E+021.0152E+031.3767E+031.1981E+030.0000E+001.9681E+033.0433E+031.7483E+036.7273E+022.3526E+11
Ranking621187124351091
F9
Max9.9339E+020.0000E+004.8313E+021.1884E+039.8581E+011.1115E+039.3285E+020.0000E+002.4728E+000.0000E+000.0000E+000.0000E+00
Mean8.7313E+020.0000E+003.2001E+021.1454E+035.4544E+019.7640E+028.0171E+020.0000E+008.7120E-010.0000E+000.0000E+000.0000E+00
Min7.9510E+020.0000E+001.4363E+021.0557E+032.8686E+019.0364E+026.9063E+020.0000E+001.3642E-120.0000E+000.0000E+000.0000E+00
Std8.6539E+010.0000E+001.6490E+026.0815E+013.0661E+019.8033E+011.0578E+020.0000E+001.1695E+000.0000E+000.0000E+000.0000E+00
Ranking101812711916111
F10
Max1.9126E+013.5039E-122.0689E+011.9743E+016.6654E-031.0103E+011.8613E+012.2462E-122.5460E-074.4409E-154.8293E-038.8818E-16
Mean1.8807E+019.2015E-132.0650E+011.6857E+014.6223E-039.7618E+009.3010E+001.7613E-121.4055E-071.7764E-152.4339E-038.8818E-16
Min1.8189E+011.5099E-142.0599E+011.4417E+012.2334E-039.3897E+008.8818E-161.4539E-126.4001E-088.8818E-163.7348E-068.8818E-16
Std4.1994E-011.7234E-123.7988E-022.5906E+002.0241E-033.1516E-011.0740E+013.4094E-138.0772E-081.7764E-152.2733E-030.0000E+00
Ranking113121079845261
F11
Max7.6438E+020.0000E+002.4170E+025.5599E+021.3725E-013.4697E+028.8736E+020.0000E+004.4995E-110.0000E+001.5563E+030.0000E+00
Mean6.6258E+020.0000E+001.2059E+023.2260E+026.7813E-022.9022E+027.4397E+020.0000E+001.5632E-110.0000E+001.2242E+030.0000E+00
Min5.0940E+020.0000E+006.7609E+019.6513E+013.6218E-042.6014E+024.8176E+020.0000E+003.2496E-130.0000E+009.8697E+020.0000E+00
Std1.0973E+020.0000E+008.1316E+011.9609E+027.7747E-023.9246E+011.8275E+020.0000E+002.0097E-110.0000E+002.4182E+020.0000E+00
Ranking101796811151121
F12
Max1.0867E+086.2302E-011.1473E+091.3795E+077.8790E-018.7761E+041.9343E+084.1403E-016.5971E-013.2853E-051.1572E+001.3334E-05
Mean7.7271E+075.0075E-017.2816E+087.4978E+066.9134E-013.0279E+041.2189E+083.6533E-016.2007E-011.5033E-051.0648E+007.0284E-06
Min5.3628E+072.8339E-012.5478E+082.0035E+065.7241E-011.1431E+034.8932E+073.3838E-015.4336E-011.5344E-061.0026E+007.4445E-09
Std2.4450E+071.5492E-013.7221E+085.3527E+061.0843E-014.0208E+046.2288E+073.4113E-025.3034E-021.5254E-056.7986E-027.2560E-06
Ranking104129681135271
F13
Max6.1786E+089.3496E+001.0303E+095.1763E+081.0134E+019.1508E+057.5026E+089.7839E+009.2059E+004.2178E-021.0028E+012.5780E-02
Mean3.3934E+088.5770E+008.8837E+082.0178E+089.4544E+004.9212E+054.8928E+089.6897E+008.9239E+001.1599E-029.9911E+007.5849E-03
Min1.6900E+088.0494E+006.4915E+083.6750E+078.8932E+001.1739E+052.9431E+089.5922E+008.5903E+005.6404E-049.9129E+001.2093E-05
Std1.9376E+085.5539E-011.7019E+082.1606E+085.2363E-013.3819E+052.0447E+081.0628E-012.6804E-012.0401E-025.3388E-021.2292E-02
Ranking103129581164271
Friedman test
Mean Rank | 9.77 | 4.46 | 10.08 | 9.92 | 6.31 | 8.85 | 9.62 | 3.31 | 5.00 | 2.38 | 5.92 | 1.15
Final Ranking | 10 | 4 | 12 | 11 | 7 | 8 | 9 | 3 | 5 | 2 | 6 | 1
Table 9. Description of CEC2017 functions.
No. | Type | Description | Fi*
1 | Unimodal functions | SAR Bent Cigar Function | 100
2 | | SAR Sum of Different Power Functions | 200
3 | | SAR Zakharov Function | 300
4 | Simple Multimodal Functions | SAR Rosenbrock’s Function | 400
5 | | SAR Rastrigin’s Function | 500
6 | | SAR Expanded Schaffer’s F6 Function | 600
7 | | SAR Lunacek’s Bi-Rastrigin Function | 700
8 | | SAR Non-Continuous Rastrigin’s Function | 800
9 | | SAR Lévy Function | 900
10 | | SAR Schwefel’s Function | 1000
11 | Hybrid functions | HF1 (N = 3) | 1100
12 | | HF2 (N = 3) | 1200
13 | | HF3 (N = 3) | 1300
14 | | HF4 (N = 4) | 1400
15 | | HF5 (N = 4) | 1500
16 | | HF6 (N = 4) | 1600
17 | | HF6 (N = 5) | 1700
18 | | HF6 (N = 5) | 1800
19 | | HF6 (N = 5) | 1900
20 | | HF6 (N = 6) | 2000
21 | Composition Functions | CF1 (N = 3) | 2100
22 | | CF2 (N = 3) | 2200
23 | | CF3 (N = 4) | 2300
24 | | CF4 (N = 4) | 2400
25 | | CF5 (N = 5) | 2500
26 | | CF6 (N = 5) | 2600
27 | | CF7 (N = 6) | 2700
28 | | CF8 (N = 6) | 2800
29 | | CF9 (N = 3) | 2900
30 | | CF10 (N = 3) | 3000
Note: SAR = shifted and rotated, HF = hybrid function, CF = composition function.
Table 10. The results of the proposed IGOA and other comparative methods using 30 CEC2017 benchmark functions.
Measure | GCHHO | CCMWOA | BMWOA | BWOA | SCADE | CGSCA | OBSCA | HGWO | CMSSA | DHHOM | IGOA
F1
Mean3.30940E+032.02540E+102.37560E+081.57600E+082.01310E+101.42880E+101.67810E+108.13100E+091.35370E+091.34700E+073.25100E+03
Std4.00990E+034.34540E+091.12730E+089.52730E+072.37100E+091.99070E+092.57230E+091.16330E+098.03300E+082.56170E+064.65850E+04
F2
Mean1.42050E+067.75550E+372.63340E+221.86990E+261.55760E+361.78980E+353.60910E+351.18990E+347.57370E+317.66760E+151.32600E+03
Std4.35570E+062.78850E+381.01970E+237.00190E+264.19900E+364.60380E+358.75620E+352.40170E+343.86780E+322.11000E+164.23561E+06
F3
Mean5.72480E+027.75120E+046.95490E+045.78200E+045.93160E+044.18320E+046.17240E+047.78820E+045.91790E+041.54280E+045.72000E+02
Std2.29300E+026.00500E+038.30380E+031.08020E+046.77840E+037.55140E+035.80660E+035.29540E+037.90900E+033.37440E+032.15410E+04
F4
Mean4.92330E+023.37030E+036.13220E+026.11720E+023.69940E+031.71050E+032.67520E+038.98810E+026.97120E+025.38890E+024.93000E+02
Std2.20920E+011.07380E+034.52740E+017.44420E+017.85300E+023.16700E+028.37710E+021.21710E+021.25430E+023.28540E+012.15000E+02
F5
Mean7.16920E+028.35120E+027.94230E+027.67440E+028.28150E+027.89630E+028.06510E+027.50670E+027.16280E+027.35870E+027.15000E+02
Std3.58460E+013.41570E+014.81890E+013.47290E+012.18030E+012.43770E+012.45210E+011.39620E+015.00250E+013.09490E+011.21456E+01
F6
Mean6.52280E+026.68500E+026.64740E+026.67190E+026.60110E+026.54060E+026.57370E+026.37400E+026.52630E+026.62000E+026.37322E+02
Std7.04260E+007.43640E+001.22790E+014.92080E+005.71150E+006.12160E+005.42690E+002.75530E+001.67270E+016.79720E+001.23549E+01
F7
Mean1.07440E+031.28440E+031.18850E+031.23650E+031.17650E+031.14720E+031.17170E+031.03810E+039.80550E+021.23790E+039.80003E+02
Std9.50300E+017.86780E+011.04810E+027.38450E+013.21420E+015.09880E+013.47780E+012.95840E+016.57600E+017.19790E+012.15648E+01
F8
Mean9.51300E+021.05130E+031.01270E+039.77110E+021.08420E+031.05910E+031.06590E+031.00090E+039.89580E+029.53930E+029.50548E+02
Std2.50210E+013.14490E+013.44830E+012.20290E+011.78760E+012.15500E+011.96760E+011.30390E+013.35310E+012.24990E+011.23684E+01
F9
Mean4.99930E+037.99490E+037.26070E+036.21960E+038.16870E+036.13090E+036.89320E+033.48300E+034.79750E+037.15890E+033.48226E+03
Std6.67700E+021.18910E+031.33260E+038.87330E+021.20850E+031.22730E+031.13440E+034.07920E+021.93030E+037.65270E+021.85667E+02
F10
Mean4.93970E+037.11260E+037.41370E+036.54980E+038.23470E+038.14170E+037.36950E+036.65980E+036.29020E+035.52560E+034.93911E+03
Std7.02370E+026.56310E+027.24060E+029.75090E+022.22270E+022.97070E+023.50750E+024.53880E+026.78770E+025.45470E+021.98266E+02
F11
Mean1.22750E+033.51630E+031.63860E+031.74140E+033.43800E+032.24610E+032.69610E+035.03510E+032.11820E+031.25990E+031.22636E+03
Std4.61680E+017.12600E+021.73870E+022.18240E+026.15770E+022.93550E+025.75960E+029.23410E+023.80140E+024.31720E+013.26710E+01
F12
Mean1.18910E+062.18990E+097.96980E+071.34760E+081.92660E+091.43190E+092.04570E+095.89000E+081.78120E+081.54720E+071.18237E+06
Std1.05870E+061.40780E+096.51450E+078.69410E+074.82420E+083.09240E+086.73740E+081.53340E+081.86560E+081.09180E+071.53623E+04
F13
Mean1.51460E+048.10180E+073.16270E+052.49390E+056.42760E+085.11870E+086.32870E+083.06160E+087.74430E+054.53100E+051.51413E+04
Std1.55350E+041.01880E+082.76180E+051.49100E+052.80990E+081.88770E+082.45500E+081.52570E+083.73460E+063.41790E+051.23633E+04
F14
Mean4.37800E+041.36190E+064.51810E+059.58570E+053.42730E+051.83780E+052.12560E+057.58740E+053.33800E+051.26670E+054.36933E+04
Std3.37360E+041.06020E+062.97310E+051.03090E+061.51470E+051.04920E+051.04490E+056.32210E+053.16240E+051.06830E+051.54932E+04
F15
Mean7.98250E+036.08890E+068.76670E+041.20450E+058.49960E+061.02240E+071.38510E+071.27440E+072.58440E+046.76810E+047.98191E+03
Std7.03540E+037.18430E+069.03590E+048.97940E+045.00020E+061.16050E+071.72250E+071.57900E+072.18710E+044.31430E+047.54535E+03
F16
Mean2.75590E+033.82030E+033.29790E+033.67160E+033.91670E+033.76790E+033.84600E+033.29040E+033.18600E+033.27670E+032.75461E+03
Std3.40700E+025.19220E+023.55500E+025.13770E+022.38250E+022.22570E+022.22620E+021.67740E+023.03380E+022.92610E+021.33951E+02
F17
Mean2.32790E+032.74680E+032.37630E+032.60050E+032.52920E+032.46670E+032.60340E+032.41310E+032.38060E+032.61210E+032.32751E+03
Std2.20580E+023.49430E+022.56110E+022.84310E+021.63110E+021.85890E+021.61410E+021.69650E+021.98150E+022.77480E+021.25317E+02
F18
Mean2.25110E+058.74010E+062.91200E+063.17660E+063.73710E+063.13090E+064.22870E+061.46490E+062.46650E+061.09400E+062.25110E+05
Std1.61650E+058.78500E+062.41540E+063.06640E+062.58540E+061.39660E+062.36190E+061.30980E+063.60400E+069.71750E+051.65429E+04
F19
Mean6.63810E+035.12680E+067.74970E+053.48230E+062.33610E+072.46870E+074.16280E+071.38590E+076.29370E+064.04430E+056.63798E+03
Std7.38500E+037.32160E+068.59610E+053.12020E+061.15100E+071.22460E+072.74960E+071.55780E+075.81650E+062.62140E+054.62325E+03
F20
Mean2.56010E+032.70240E+032.76650E+032.79680E+032.74110E+032.62520E+032.70000E+032.65430E+032.60240E+032.71820E+032.55955E+03
Std2.02220E+022.16930E+021.62670E+022.18290E+021.08830E+021.38560E+021.31380E+021.30220E+022.05090E+021.30250E+021.12149E+02
F21
Mean2.48850E+032.61950E+032.53510E+032.57030E+032.57740E+032.56520E+032.44250E+032.50510E+032.46770E+032.55420E+032.48749E+03
Std3.85930E+015.12580E+014.56750E+015.02670E+012.56880E+013.32110E+018.21800E+011.56890E+015.50670E+014.03780E+011.51456E+02
F22
Mean4.62350E+037.04850E+035.53390E+036.18390E+034.59700E+033.93100E+034.11250E+033.23580E+032.94720E+036.79490E+032.94754E+03
Std2.41780E+031.54310E+033.21560E+032.48410E+032.19660E+022.56550E+023.62170E+022.77220E+021.26680E+031.67410E+033.20515E+02
F23
Mean2.92700E+033.18090E+032.99450E+033.08030E+033.01140E+033.00190E+033.01420E+032.90250E+032.84090E+033.16020E+032.99325E+03
Std7.37890E+011.18160E+028.17800E+011.07680E+023.48820E+013.15550E+014.54740E+011.68870E+016.23460E+011.38630E+022.35565E+02
F24
Mean3.08380E+033.30380E+033.09670E+033.19780E+033.16640E+033.14920E+033.18440E+033.05480E+032.95410E+033.44050E+032.95326E+03
Std5.04200E+011.16420E+027.36580E+018.09340E+013.72610E+012.53380E+013.43000E+012.60000E+013.86290E+011.45780E+023.52588E+01
F25
Mean2.90010E+033.44510E+033.02210E+033.01640E+033.45100E+033.28120E+033.38280E+033.08930E+033.09440E+032.91640E+033.00055E+03
Std1.49490E+011.41040E+023.39370E+014.69270E+019.77010E+011.02840E+021.57830E+023.12340E+015.98770E+012.30520E+011.22550E+01
F26
Mean5.65890E+039.01340E+036.69870E+038.11220E+037.41680E+037.13610E+037.05350E+035.92290E+035.48950E+037.64800E+035.47518E+03
Std1.72280E+031.03630E+039.25950E+027.67280E+022.41560E+023.74440E+025.36350E+024.49190E+021.27270E+031.08640E+034.52201E+02
F27
Mean3.26360E+033.63050E+033.31650E+033.39200E+033.44060E+033.38790E+033.45800E+033.31450E+033.33700E+033.38570E+033.26194E+03
Std2.42390E+011.79580E+027.97790E+018.83970E+015.54960E+014.08300E+015.42170E+012.33400E+018.83800E+019.26590E+015.66982E+01
F28
Mean3.21950E+034.67320E+033.39180E+033.38910E+034.37540E+033.89120E+034.18740E+033.60200E+033.52070E+033.27420E+033.29525E+03
Std2.31010E+014.05870E+024.45990E+014.22080E+012.39090E+021.21840E+021.91430E+024.75420E+011.14230E+022.25600E+012.35466E+01
F29
Mean4.04170E+035.18310E+034.78350E+035.07360E+035.08210E+034.76980E+034.94150E+034.48290E+034.76470E+034.49320E+034.04055E+03
Std2.30680E+024.91440E+024.43160E+025.09510E+022.77570E+022.08340E+022.10560E+021.72000E+024.50640E+023.90420E+022.35486E+02
F30
Mean1.15970E+045.82960E+077.26090E+061.50610E+078.68650E+078.04480E+071.16700E+087.21540E+072.47330E+072.64540E+061.93215E+05
Std4.23300E+035.37510E+075.42220E+061.03830E+073.83730E+072.22460E+074.32210E+073.48540E+072.61650E+071.50810E+066.45852E+05
Table 11. The results of the Friedman ranking test of the comparative methods using 30 CEC2017 benchmark functions.
Measure | GCHHO | CCMWOA | BMWOA | BWOA | SCADE | CGSCA | OBSCA | HGWO | CMSSA | DHHOM | IGOA
F12115410897631
F22114510897631
F32109574811631
F41105411897632
F53118610795241
F63119107562481
F74118975632101
F82874119106531
F94109611572381
F102795111086431
F112104597811631
F122114598107631
F132743119108651
F142118107459631
F152756891110341
F162967118105341
F172113876954101
F182116897104531
F192645910118731
F202710119465381
F214116910815273
F227118964531102
F233115976821104
F244105976832111
F251105411896723
F263115108764291
F272114897103561
F281115410897623
F292117910683541
F301745109118632
Summation | 73 | 294 | 181 | 202 | 272 | 210 | 246 | 172 | 130 | 158 | 42
Mean Rank | 2.4333 | 9.8 | 6.033 | 6.733 | 9.066 | 7.00 | 8.200 | 5.733 | 4.333 | 5.266 | 1.400
Final Ranking | 2 | 11 | 6 | 7 | 10 | 8 | 9 | 5 | 3 | 4 | 1
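The "Mean Rank" and "Final Ranking" rows in Tables 4, 5, 7, 8, and 11 follow the usual Friedman ranking procedure: rank the algorithms on every function, then average the ranks per algorithm. The sketch below shows one way to reproduce such rankings with SciPy, under the assumption of a functions-by-algorithms matrix of mean errors; the small matrix here is purely illustrative, not the paper's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean-error matrix: rows = benchmark functions, columns = algorithms.
# Replace with the measured means; lower values are better.
errors = np.array([
    [3.3e+03, 2.0e+10, 2.4e+08, 3.3e+03],
    [1.4e+06, 7.8e+37, 2.6e+22, 1.3e+03],
    [5.7e+02, 7.8e+04, 7.0e+04, 5.7e+02],
])

# Rank the algorithms on every function (rank 1 = best), then average per column.
ranks = rankdata(errors, axis=1)
mean_ranks = ranks.mean(axis=0)
final_ranking = rankdata(mean_ranks)
print("mean ranks:", mean_ranks, "final ranking:", final_ranking)

# Friedman test over the same matrix (one sample per algorithm column).
stat, p_value = friedmanchisquare(*errors.T)
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.4f}")
```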
Table 12. UCI data clustering benchmark datasets.
Dataset | Features No. | Instances No. | Classes No.
Cancer | 9 | 683 | 2
CMC | 10 | 1473 | 3
Glass | 9 | 214 | 7
Iris | 4 | 150 | 3
Seeds | 7 | 210 | 3
Heart | 13 | 270 | 2
Vowels | 6 | 871 | 3
Water | 13 | 178 | 3
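For the clustering experiments summarized in Tables 12–14, population-based optimizers such as the GOA and IGOA are typically applied by encoding the k cluster centroids in each candidate solution and minimizing the total distance between the samples and their nearest centroid. The sketch below illustrates that common fitness formulation; it is an assumption about the usual setup rather than a reproduction of the paper's exact objective, and the Iris-shaped random data and centroids are placeholders.

```python
import numpy as np

def clustering_fitness(solution, data, n_clusters):
    """Sum of Euclidean distances between samples and their nearest centroid.

    `solution` is a flat vector encoding n_clusters centroids, a common way to
    map a clustering task onto a continuous optimizer; this sketch assumes that
    formulation instead of the paper's exact fitness function.
    """
    centroids = solution.reshape(n_clusters, data.shape[1])
    # Pairwise distances between samples and centroids: shape (n_samples, n_clusters).
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    # Assign each sample to its closest centroid and accumulate the distances.
    return dists.min(axis=1).sum()

# Example with the Iris layout from Table 12 (150 instances, 4 features, 3 classes),
# using random data and a random candidate solution as stand-ins for a real run.
rng = np.random.default_rng(1)
data = rng.random((150, 4))
candidate = rng.random(3 * 4)
print(clustering_fitness(candidate, data, n_clusters=3))
```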
Table 13. The results obtained by the comparative methods on the data clustering problems.
Dataset | Metric | AOA | PSO | GWO | SCA | PDO | ARSA | GOA | IGOA
CancerWorst3.34E+032.01E+032.95E+033.47E+033.50E+033.55E+033.48E+033.73E+02
Average3.28E+031.12E+032.81E+033.16E+033.23E+033.38E+033.18E+032.49E+02
Best3.19E+035.62E+022.48E+032.88E+033.06E+033.05E+032.94E+031.54E+02
STD7.93E+015.98E+021.92E+022.85E+022.10E+021.90E+022.35E+029.07E+01
CMCWorst3.33E+029.60E+013.11E+023.35E+023.35E+023.35E+023.35E+028.08E+01
Average3.33E+028.95E+013.08E+023.33E+023.35E+023.34E+023.34E+027.76E+01
Best3.32E+028.14E+013.01E+023.32E+023.34E+023.33E+023.32E+027.43E+01
STD5.34E-016.48E+004.21E+009.86E-012.03E-018.63E-011.12E+002.77E+00
GlassWorst3.48E+011.08E+013.07E+013.49E+013.52E+013.43E+013.51E+011.23E+00
Average3.42E+016.40E+002.89E+013.44E+013.49E+013.37E+013.44E+017.67E-01
Best3.36E+010.00E+002.74E+013.37E+013.44E+013.23E+013.37E+010.00E+00
STD4.65E-014.20E+001.40E+004.16E-013.35E-018.74E-016.66E-014.62E-01
IrisWorst2.39E+016.19E+001.66E+012.43E+012.47E+012.47E+012.48E+012.16E+00
Average2.37E+014.48E+001.54E+012.37E+012.37E+012.40E+012.44E+011.60E+00
Best2.33E+016.16E-011.42E+012.29E+012.27E+012.33E+012.39E+019.03E-01
STD2.68E-012.23E+001.07E+005.47E-018.98E-015.31E-013.39E-015.38E-01
SeedsWorst4.92E+011.96E+014.52E+015.02E+015.01E+015.06E+015.04E+016.65E+00
Average4.86E+011.69E+013.89E+014.86E+014.92E+014.99E+014.97E+016.28E+00
Best4.80E+011.56E+013.59E+014.74E+014.78E+014.84E+014.82E+015.95E+00
STD5.41E-011.60E+003.65E+001.12E+009.53E-018.76E-019.81E-012.55E-01
Statlog (Heart)Worst1.66E+034.06E+029.85E+021.69E+031.69E+031.67E+031.66E+033.53E+01
Average1.58E+032.72E+029.14E+021.65E+031.59E+031.49E+031.60E+032.12E+01
Best1.50E+037.35E+017.40E+021.61E+031.48E+031.39E+031.43E+030.00E+00
STD6.06E+011.28E+021.01E+022.97E+017.84E+011.09E+029.47E+011.38E+01
VowelsWorst1.53E+022.51E+011.53E+021.53E+021.53E+021.53E+021.53E+022.10E+01
Average1.52E+022.05E+011.37E+021.53E+021.53E+021.52E+021.53E+021.97E+01
Best1.52E+021.56E+011.28E+021.52E+021.53E+021.51E+021.52E+021.85E+01
STD4.75E-015.43E+001.00E+012.97E-011.20E-014.95E-013.70E-011.04E+00
WaterWorst3.91E+031.66E+032.91E+034.02E+033.94E+033.91E+033.98E+033.63E+02
Average3.87E+031.20E+032.52E+033.91E+033.87E+033.83E+033.84E+033.08E+02
Best3.78E+038.05E+022.20E+033.79E+033.72E+033.77E+033.42E+032.48E+02
STD5.49E+013.39E+023.29E+029.24E+019.02E+015.02E+012.31E+024.17E+01
Table 14. The statistical test using the p-value, Wilcoxon rank test, and Friedman ranking test.
Dataset | Metric | AOA | PSO | GWO | SCA | PDO | ARSA | GOA | IGOA
Cancerp-value1.11E-110.0124553.83E-092.12E-082.11E-097.38E-105.2E-091
h1111111NaN
Rank72346851
CMCp-value4.01E-160.0054299.41E-145.47E-163.34E-164.8E-166.09E-161
h1111111NaN
Rank42358761
Glassp-value3.93E-140.0176651.01E-102.43E-141.1E-141.19E-122.02E-131
h1111111NaN
Rank52368471
Irisp-value5.39E-130.0225665.6E-093.76E-124.45E-112.96E-126.55E-131
h1111111NaN
Rank42356781
Seedsp-value2.85E-154.65E-074.18E-085.38E-131.39E-136.58E-141.58E-131
h1111111NaN
Rank52346871
Statlog (Heart)p-value1.15E-110.002484.79E-084.89E-147.74E-111.68E-093.2E-101
h1111111NaN
Rank52386471
Vowelsp-value5.48E-170.9654255.32E-093.43E-172.6E-175.98E-174E-171
h1111111NaN
Rank52368471
Waterp-value3.56E-140.0003844.03E-077.09E-136.53E-132.47E-146.74E-101
h1111111NaN
Rank62387451
Mean Ranking | 5.125 | 2.000 | 3.000 | 5.750 | 6.875 | 5.750 | 6.500 | 1.000
Final Ranking | 4 | 2 | 3 | 5 | 8 | 5 | 7 | 1
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
