Article

The Gaussian Mutational Barebone Dragonfly Algorithm: From Design to Analysis

1 School of Artificial Intelligence, Beijing Institute of Economics and Management, Beijing 100102, China
2 School of Information Engineering, Wenzhou Business College, Wenzhou 325035, China
3 College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325035, China
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(2), 331; https://doi.org/10.3390/sym14020331
Submission received: 5 January 2022 / Revised: 27 January 2022 / Accepted: 1 February 2022 / Published: 6 February 2022
(This article belongs to the Topic Dynamical Systems: Theory and Applications)

Abstract

The dragonfly algorithm (DA) is a swarm intelligence optimization algorithm that simulates the swarming behavior of dragonfly individuals. An efficient algorithm must maintain a symmetry of information between the participating entities. To further improve the global searching ability and convergence speed of the DA, this paper proposes an improved variant, named GGBDA, which adds Gaussian mutation and the Gaussian barebone to the basic DA. Gaussian mutation randomly updates individual positions to keep the algorithm from falling into local optima, while the Gaussian barebone quickens convergence and strengthens local exploitation. Enhancing algorithm efficiency relative to the symmetric concept is a critical challenge in the field of engineering design. To verify the superiority of GGBDA, this paper compares it with other algorithms on 30 benchmark functions taken from CEC2014 and on 4 engineering design problems. The experimental results show that Gaussian mutation and the Gaussian barebone effectively improve the performance of the DA: compared with the original DA, the proposed GGBDA presents improvements in global optimization competence, search accuracy, and convergence performance.

1. Introduction

The swarm intelligence optimization algorithm (SIOA) mainly simulates biological individuals’ group behaviors, such as cooperation and competition, to obtain the optimal solution to complex problems. Moreover, SIOAs have the benefits of a simple structure, few parameters, and easy implementation [1]. To date, a variety of SIOAs have been proposed by domestic and foreign scholars, namely the whale optimization algorithm (WOA) [2,3]; differential evolution (DE) [4]; genetic algorithm (GA) [5]; ant colony optimization (ACO) [6,7]; particle swarm optimization (PSO) [8,9]; firefly algorithm (FA) [10]; fruit fly optimization algorithm (FOA) [11,12,13]; slime mould algorithm (SMA) [14]; moth flame optimization (MFO) [15,16,17]; grey wolf optimizer (GWO) [18,19]; bat algorithm (BA) [20,21]; grasshopper optimization algorithm (GOA) [22,23]; Harris hawks optimization (HHO) [24]; colony predation algorithm (CPA) [25]; hunger games search (HGS) [26]; Runge–Kutta optimizer (RUN) [27]; and weighted mean of vectors (INFO) [28].
SIOAs have found application in many fields, namely expensive optimization problems [29,30]; performance optimization [31]; object tracking [32,33]; multi-objective or many-objective optimization problems [34,35,36]; the traveling salesman problem [37]; neural network training [38]; scheduling problems [39]; big data optimization problems [40]; fault diagnosis of rolling bearings [41]; evolving deep convolutional neural networks [42]; gate resource allocation [43,44]; and combinatorial optimization problems [45]. The dragonfly algorithm (DA) is a population-based heuristic search algorithm that was first proposed by Mirjalili [46] in 2015 and has since gained widespread adoption. It performs well and has a broad range of real-life applications; many of them, including parameter optimization [47], feature selection [48], load balancing [49], and modeling [50], among others [51], have been effectively implemented using it. On the other hand, many trials with complicated, high-dimensional, and multi-modal functions demonstrated that the DA has drawbacks in some situations. For example, the DA lacks internal memory, converges slowly, and is prone to falling into local optima. As a result, several researchers have put forth efforts to improve the DA.

1.1. Related Works

To solve the challenge of numerical optimization, Sree Ranjini and colleagues [52] suggested a new memory-based hybrid DA (HMDA), remedying the drawbacks of the DA by combining the advantages of the DA and the PSO. Moreover, N. S. et al. [53] integrated the DA with the crow search algorithm (CSA) to present the D-Crow optimization method and applied this algorithm to optimize the configuration of virtual machine migration. A method combining the DA and the pattern search algorithm was presented by Khadanga and colleagues [54] to optimize the controller settings and thereby improve the frequency control of a microgrid. Using a trained multi-layer perceptron, Ghanem et al. [55] developed a novel hybridized metaheuristic method, created by combining the artificial bee colony (ABC) algorithm with the DA, with improved properties in terms of attaining the best optimal value, convergence speed, avoiding local minima, and accuracy compared with previous algorithms. Shilaja and colleagues [56] used a combination of the enhanced grey wolf optimization and dynamic programming to handle nonlinearity problems, which was demonstrated to be more efficient than the conventional method. Using a dragonfly-based clustering method, CAVDO, Aadil et al. [57] proposed a solution to difficulties associated with the Internet of vehicles, such as scalability, dynamic topology changes, and finding the shortest routing path. To make the DA more random, Aci and colleagues [58] used Brownian motion, which they found to be more effective; the results of the experiments revealed that the new DA had superior properties compared with the old algorithm. Bao and colleagues [59] proposed a new DA that was modified using opposition-based learning.
According to the studies, it also had a faster convergence time and a more balanced exploration–exploitation ratio. Li et al. [60] improved the performance of the DA by incorporating an adaptive learning factor and the differential evolution (DE) approach into the algorithm. Sayed and colleagues [61] proposed a novel chaotic DA (CDA); to improve the DA, the researchers included chaotic maps in the searching iterations of the algorithm. According to the experimental findings, CDA outperformed the control group in classification performance and was capable of identifying more suitable feature subsets.
Mafarja et al. [62] collected eight transfer functions (s-type and v-type) in BDA for evaluation and proposed the time-varying s-type BDA, which gives the algorithm a high probability of changing element positions in the early optimization period but a low probability in the late optimization period. Hariharan et al. [63] proposed an improved binary dragonfly optimization algorithm (IBDFO) to solve the dimension problem and combined it with wavelet-packet-based feature extraction to improve the accuracy of identifying the type of infant crying. Zhang et al. [64] used the DA to obtain the optimal combination of parameters and improve the prediction accuracy of the support vector machine (SVM), proposing the DA-SVM model to realize short-term load prediction of the microgrid. Yuan et al. [65] combined the DA with the Coulomb force search strategy (CFSS) to obtain an algorithm with better exploration capability; the resultant algorithm gained both high accuracy and a remarkably improved convergence rate. Zhang et al. [66] quantized dragonfly behaviors to improve the search efficiency of the DA, obtaining a quantized dragonfly algorithm (QDA); furthermore, they put forward a new electric load forecasting model, based on complete ensemble empirical mode decomposition with adaptive noise, QDA, and support vector regression, to accurately forecast the electric load. Suresh et al. [67] adopted the DA as the optimization algorithm to solve static economic dispatch incorporating solar energy. Based on the modified dragonfly algorithm (MDA) and bat search algorithm (BSA), Sureshkumar et al. [68] put forward a new method that adopted the MDABSA technique to control power flow more efficiently; in this method, the MDA was used to develop the control signals of the voltage source. Xie et al. [69] adopted the DA to create a cancer classification algorithm, and comparative experiments proved it had a higher classification accuracy on cancer datasets. Xu et al. [70] adopted the DA and DE for color image segmentation; in this method, the DA was used for global search and DE for local search.

1.2. Needs for Research

However, although the literature discussed above made significant advances to the DA, it does not fully balance the algorithm’s exploration and exploitation capabilities. With the goal of further improving the exploration and exploitation exactness of the DA, as well as avoiding falling into local optima, this work proposes an upgraded DA that incorporates Gaussian mutation and the Gaussian barebone. Gaussian mutation is used to update each dragonfly’s location and improve the global search capability, and the Gaussian barebone is used to increase the local exploitation capability as well as the search speed. The simulations demonstrated that the algorithm outperforms the original DA and that its global optimization capability, search accuracy, and convergence performance are all greatly enhanced. In summary, the innovations and contributions of this paper are as follows.
  • An improved dragonfly algorithm (GGBDA) is proposed in this paper to further improve the global searching ability and the convergence speed of DA.
  • GGBDA achieves a great improvement in the ability of exploitation and exploration.
  • The performance of GGBDA is verified by comparison with some excellent algorithms.
  • GGBDA is applied to optimize the engineering optimization problems.
The following is a summary of the rest of this article. Section 2 introduces the DA; Section 3 describes the enhanced DA based on Gaussian mutation and Gaussian barebone; Section 4 presents the experimental findings of the benchmark functions; and Section 5 concludes the paper and provides an overview of the previous work as well as a forecast for future work.

2. Materials and Methods

2.1. Dragonfly Algorithm (DA)

The DA was inspired by the two idealized swarming states of dragonflies in nature. There are three principles in the core mathematical background of this method.
Separation aims to prevent search individuals from collisions with others in a static state within a partial range. The following is the calculation function:
$S_i = -\sum_{j=1}^{N}\left(X - X_j\right)$
where $X$ is the current individual’s position, $X_j$ is the j-th neighboring individual’s position, and $N$ is the number of neighboring individuals.
Alignment is aimed at matching velocity between individuals within a partial range. The following is the calculation function:
$A_i = \dfrac{\sum_{j=1}^{N} V_j}{N}$
where V j is the j-th velocity of the neighboring individual.
Cohesion is aimed at making individuals move closer towards the center of swarm aggregation. The following is the calculation function:
$C_i = \dfrac{\sum_{j=1}^{N} X_j}{N} - X$
where X is the current individual’s position, N is neighborhoods’ number, and X j is j-th neighboring individual’s position.
The following is the attraction towards a food source:
$F_i = X^{+} - X$
where $X$ is the current individual’s position and $X^{+}$ is the food source’s position.
The following is the distraction outwards from an enemy:
$E_i = X^{-} + X$
where $X$ is the current individual’s position and $X^{-}$ is the enemy’s position.
The step vector ($\Delta X$) and the position vector ($X$) are prerequisites to update and record the location of agents in the search domain. The step vector can be considered the velocity vector in PSO; it gives the direction of the agents’ motion. The following is the calculation function of the step vector:
$\Delta X_{t+1} = \left(sS_i + aA_i + cC_i + fF_i + eE_i\right) + w\Delta X_t$
where $S_i$, $A_i$, $C_i$, $F_i$, and $E_i$ denote the separation, alignment, cohesion, food attraction, and enemy distraction of the i-th individual; $s$, $a$, $c$, $f$, and $e$ represent the corresponding weights; $w$ is the inertia weight; $i$ indexes the individual; and $t$ is the number of the current iteration. The following is the calculation function of the position vector:
$X_{t+1} = X_t + \Delta X_{t+1}$
Search agents have some deficiencies in terms of random behavior and exploration ability, and they also lack adjacent solutions. Therefore, Levy flight-based patterns are used to update the position of agents. The following is the function to update location:
$X_{t+1} = X_t + \mathrm{Levy}(d) \times X_t$
where t is the current iteration number and d is the dimension of the position vector.
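As an illustration, the five behavioral terms and the step/position updates above can be combined into a single update function. The following is a minimal sketch (the function and argument names are our own, not the paper’s); it treats the current position, the neighbor matrix, and the food/enemy positions as NumPy arrays, and omits the Lévy-flight branch used when no neighbors exist:

```python
import numpy as np

def da_step(X, V, neighbors, food, enemy, s, a, c, f, e, w, dX):
    """One dragonfly update following Eqs. (1)-(7); names are illustrative."""
    S = -np.sum(X - neighbors, axis=0)          # separation, Eq. (1)
    A = V.mean(axis=0)                          # alignment, Eq. (2)
    C = neighbors.mean(axis=0) - X              # cohesion, Eq. (3)
    F = food - X                                # food attraction, Eq. (4)
    E = enemy + X                               # enemy distraction, Eq. (5)
    dX_new = s * S + a * A + c * C + f * F + e * E + w * dX  # step, Eq. (6)
    return X + dX_new, dX_new                   # position, Eq. (7)
```

With all weights except $s$ set to zero, the agent simply moves away from the mean offset of its neighbors, which makes the separation term easy to verify in isolation.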

2.2. Gaussian Mutation

To improve the performance of the DA, this paper uses Gaussian mutation to update the individual position of the dragonfly. Gaussian mutation has been applied in many optimizers [3,16,71,72]. The following is the mutation function of the Gaussian mutation:
$temp = X_j \times (1 + k)$
where $temp$ is a temporary individual position, $X_j$ is the j-th individual’s position, and $k$ is a random number drawn from the Gaussian distribution.
After updating the individual position of the dragonfly with this mutation function, whether the result of the Gaussian mutation is better than the previous result needs to be verified; if the temporary position obtains a better result, it is used as the new individual position of the dragonfly. As the population iterates, the DA may fall into a local optimum. The randomness of the Gaussian mutation quickens the scouting speed, effectively avoids slipping into local optima, improves the global optimization capacity, and eventually obtains the global optimum or a satisfactory solution.
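This mutate-then-verify step can be sketched as follows. This is a minimal illustration under the assumption that $k$ is drawn per-dimension from a standard Gaussian $N(0, 1)$; the function name and the sphere test function are ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_mutation(x, fitness):
    # temp = x * (1 + k), with k drawn per-dimension from N(0, 1)
    temp = x * (1.0 + rng.standard_normal(x.shape))
    # greedy verification: keep the mutant only if it improves the fitness
    return temp if fitness(temp) < fitness(x) else x

# usage: one mutation step on the sphere function (minimization)
sphere = lambda v: float(np.sum(v**2))
x = np.array([1.0, -2.0])
x_new = gaussian_mutation(x, sphere)
```

Because the mutant is accepted only on improvement, the returned position never has worse fitness than the input, which is what lets the mutation inject randomness without losing progress.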

2.3. Gaussian Barebone Mechanism

The speed of scouting for the optimal solution is a significant indicator of the performance of an algorithm. However, the scouting speed of the DA during iteration is unsatisfactory; therefore, this paper employs the Gaussian barebone to improve it. The Gaussian barebone mechanism has shown great potential in other optimizers [71,72]. It can help the DA scout the global optimum faster and more effectively by gathering individuals towards a food source. There are two methods to gather individuals. The first calculates the middle position between the food source and the individual’s position, as well as the distance between them, and then generates a random position whose value in each dimension is normally distributed based on these two calculated variables. The second obtains the per-dimension distances between two random individuals and uses them, together with the position of the food source, to calculate a new position. The following is the function:
$V_{i,j} = \begin{cases} \mathrm{normrnd}(mu,\, sigma), & \mathrm{rand}() < CR \\ FP_j + k\left(X_{k1,j} - X_{k2,j}\right), & \mathrm{rand}() \ge CR \end{cases}$
where $CR$ is a freely settable parameter; rand() is a random number between 0 and 1; $V_{i,j}$ is the new temporary position; $mu$ is the middle position between the food source’s position and the individual’s position; $sigma$ is the distance between the j-th dimension of the i-th individual and the j-th dimension of the food source; the normrnd function generates random numbers that follow a normal distribution, with $mu$ representing the mean and $sigma$ the standard deviation; $FP_j$ is the j-th dimension of the food source; $k$ is a random number; and $X_{k1,j}$ and $X_{k2,j}$ are the j-th dimensions of two random individuals in the population.
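The two gathering methods can be sketched as a single per-individual update. This is an illustrative helper (our names and our choice of random indices, not the paper’s implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def barebone_update(X_i, food, pop, CR=0.5):
    # X_i: one individual (d,); food: food-source position (d,); pop: (n, d)
    d = X_i.size
    mu = (food + X_i) / 2.0          # mean: midpoint between food and individual
    sigma = np.abs(food - X_i)       # std: per-dimension distance to the food
    k1, k2 = rng.choice(len(pop), size=2, replace=False)
    diff = food + rng.random(d) * (pop[k1] - pop[k2])  # DE-style difference term
    # per-dimension choice between the Gaussian barebone draw and the
    # difference term, gated by CR as in the equation above
    return np.where(rng.random(d) < CR, rng.normal(mu, sigma), diff)
```

Note how the spread of the Gaussian draw shrinks as an individual approaches the food source, which is what concentrates the population and accelerates convergence.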

3. Proposed Method

The DA lacks internal memory, converges slowly, and falls into local optima easily. To address these defects, this paper puts forward a new DA improved by Gaussian mutation and the Gaussian barebone, named GGBDA. It uses the Gaussian barebone to gather individuals towards the food source, quickening the search for the optimal solution and strengthening local exploitation; the individuals’ positions can thereby be updated based on the position of the food source. However, the Gaussian barebone alone could make the population fall into local optima. Therefore, this paper employs Gaussian mutation, which prevents the population from being trapped in local optima, to improve the global search capability, search accuracy, and convergence performance.
The Gaussian mutation is mainly used to randomly update individuals’ positions to escape local optima based on the Gaussian mutation function. The flowchart of the improved DA is shown in Figure 1, and the pseudocode of GGBDA is shown in Algorithm 1.
Algorithm 1. Pseudocode of GGBDA
Begin
   Initialize the dragonflies’ population $X_i$ ($i = 1, 2, \ldots, n$)
   Initialize the step vectors $\Delta X_i$ ($i = 1, 2, \ldots, n$)
   while the end condition is not satisfied
    Calculate the population fitness of all the dragonflies
    Update the food source and enemy
    Update w, s, a, c, f, and e
    Calculate S, A, C, F, and E by Equations (1)–(5)
    Update the neighboring radius
    if a dragonfly has at least one neighboring dragonfly
     Update the step vector by Equation (6)
     Update the position vector by Equation (7)
    else
     Update the position vector by Equation (8)
    end if
    Check and correct the new position according to the boundaries of the variables
    Update with the Gaussian mutation and Gaussian barebone
   end while
End
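For illustration, Algorithm 1 can be condensed into a runnable sketch. This is our simplification under stated assumptions: the neighbor-radius logic and the separate Lévy-flight branch are collapsed into a food-attraction update with linearly decreasing inertia, and only the Gaussian barebone and greedy Gaussian mutation steps of Sections 2.2 and 2.3 are kept; it is not the authors’ implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def ggbda_sketch(obj, dim=5, pop_size=20, iters=200, CR=0.5):
    """Simplified GGBDA loop: food attraction + inertia, then barebone, then
    greedy Gaussian mutation (an illustrative reduction of Algorithm 1)."""
    X = rng.uniform(-5, 5, (pop_size, dim))
    dX = np.zeros((pop_size, dim))
    fit = np.apply_along_axis(obj, 1, X)
    for t in range(iters):
        food = X[np.argmin(fit)].copy()        # best individual as food source
        w = 0.9 - 0.5 * t / iters              # linearly decreasing inertia
        dX = w * dX + rng.random((pop_size, dim)) * (food - X)
        X = X + dX
        # Gaussian barebone: per-dimension draw around the food/individual
        # midpoint, gated by CR
        mu, sigma = (food + X) / 2.0, np.abs(food - X)
        mask = rng.random((pop_size, dim)) < CR
        X = np.where(mask, rng.normal(mu, sigma), X)
        # Gaussian mutation with greedy selection
        temp = X * (1.0 + rng.standard_normal(X.shape))
        new_fit = np.apply_along_axis(obj, 1, temp)
        fit = np.apply_along_axis(obj, 1, X)
        better = new_fit < fit
        X[better], fit[better] = temp[better], new_fit[better]
    return X[np.argmin(fit)], fit.min()

# usage: minimize the 5D sphere function
best_x, best_f = ggbda_sketch(lambda v: float(np.sum(v**2)))
```

Even this stripped-down loop shows the division of labor the paper describes: the barebone step contracts the population around the food source, while the mutation step injects randomness without ever accepting a worse individual.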

4. Experimental Results

In this section, GGBDA was evaluated on the CEC2014 benchmarks and practical engineering problems. To obtain unbiased results, all the experiments were carried out in the same environment, and the maximum number of iterations and the population size were set to 500 and 30, respectively. Each algorithm was run 30 times independently on each function to reduce the influence of randomness. For the parameters of the algorithms involved in the comparison, we adopted the same values as in the original papers. In this paper, the average value and standard deviation of the experimental results of the optimization functions were used to evaluate and analyze the potential of the related techniques. To show the experimental results intuitively, the best values for each function are shown in bold.

4.1. Benchmark Functions

To compare the proposed algorithm and other algorithms, this experiment used 30 classical functions, including unimodal functions, multi-modal functions, hybrid functions, and composition functions.
These 30 functions are all taken from CEC2014 [73]. Thirty different types of benchmarks can more comprehensively estimate the performance of the proposed algorithm. The details of the thirty benchmarks are listed in Table 1.

4.2. Comparison with Classical Algorithms

In order to validate the effectiveness of the improved GGBDA, there are some representative algorithms employed for comparison: OBSCA [74], m_SCA [75], SCADE [76], ASCA_PSO [77], ACWOA [78], MFO [15], SCA [79], FA [80], and DA.
In the experimental part, the parameter values of the compared algorithms were set as shown in Table 2. To ensure the fairness of the experiments as far as possible, the experimental environment of the algorithms stayed the same. The experiments used the 30D classical functions to compare the proposed method with its rivals. Table 3 records the experimental results on 30D. Each algorithm was run independently 30 times, and the average (Ave) and standard deviation (Std) of the obtained optimal solutions are shown in these tables. “AVR” expresses the average of the algorithm’s ranking results on all functions. In this experiment, the maximum number of iterations and the population size (Pop) were set to 1000 and 30, and each algorithm was run on every function with 30 dimensions to test scalability. The symbol “+/=/−” indicates whether the performance of GGBDA is greater than, equal to, or worse than that of the compared algorithms.

4.2.1. Results on 30D Functions

F1–F7 do not have local optimal solutions, so they are very suitable for measuring the exploration competence of the algorithm. On F2, F3, and F6, the results of GGBDA are far superior to all the others; furthermore, on the remaining functions, the results of GGBDA are better than those of most comparison algorithms. The results on F1–F7 show that GGBDA has an advantage over other algorithms in its ability to explore unimodal landscapes.
F8–F13 represent the multi-modal functions, which have numerous local optimal solutions; they are very suitable for evaluating an algorithm’s ability to avoid local optima during the search. On F10 and F11, the results of GGBDA are near the global optimal solution, whereas the other comparison algorithms fall into non-global optimal solutions to different degrees. On the remaining functions, GGBDA still obtains results that are better than those of most other algorithms. In conclusion, the experimental results verify the global exploration ability of GGBDA.
From the convergence curves in Figure 2, we can estimate and evaluate the convergence performance of the algorithms. On F3, F10, F18, F20, F27, F28, F29, and F30, the convergence of GGBDA is better than that of the other comparison algorithms in the early iterations. From the convergence on F6 and F11, GGBDA does not obtain the best fitness in the early iterations but does in the later iterations. In summary, the “+/=/−” results show that GGBDA ranks first, with an AVR far lower than that of the second-placed SCA, and its performance is better than that of OBSCA, m_SCA, SCADE, ASCA_PSO, ACWOA, MFO, SCA, FA, and DA.

4.2.2. Balance Analysis

In this section, we conduct a qualitative analysis of GGBDA on the 30 functions of CEC14, selecting the original DA for comparison. Figure 3 shows the results of this analysis for GGBDA and the DA. There are five columns in the figure. The first column (a) is the distribution of the GGBDA search history on the three-dimensional plane. The second column (b) is the distribution of the GGBDA search history on the two-dimensional plane. The third column (c) is the trajectory of the first dimension of GGBDA during the iteration. The fourth column (d) shows the change in the average fitness of GGBDA during the iteration. The fifth column (e) shows the convergence curves of GGBDA and the DA. In Figure 3b, the red dot represents the location of the optimal solution, and the black dots represent the search locations of GGBDA. In the five selected function images, the black dots are denser in the area around the red dots, which shows that GGBDA has exploited the area in which the optimal solution is located. In Figure 3c, we can see that the first-dimension trajectory of GGBDA fluctuates greatly in the early period; this early volatility indicates that the algorithm has conducted extensive searches. The average fitness change of GGBDA across the whole iterative process is shown in Figure 3d, where the average fitness drops to a lower level in the mid-term, showing that GGBDA has a good convergence speed. In Figure 3e, we can clearly see that the convergence curve of GGBDA is lower than that of the DA, which shows that GGBDA can obtain a better solution.
The balance analysis and diversity analysis were carried out on the same functions. Figure 4 shows the results of the balance analysis of GGBDA and the DA; there are three curves in each graph. As shown in the figure, the blue curve and red curve represent exploitation and exploration, respectively; the larger the value of a curve, the more dominant the corresponding behavior is in the algorithm. The green curve indicates the incremental–decremental measure, which more intuitively reflects the changing trends of the two behaviors: when its value increases, exploration is dominant; otherwise, exploitation predominates; and when the curve drops to a negative value, it is set to zero. Comparing the curves of the two algorithms shows that both were dominated by exploration in the early stage, because a swarm intelligence optimization algorithm performs a global search first. However, the difference between the two algorithms’ curves is also very obvious. The DA spends more time on exploration than GGBDA, with its exploration behavior accounting for almost half of the entire iteration process, whereas the exploitation behavior of GGBDA quickly became dominant, indicating that it spent more time exploiting the target area. This is the impact of the two mechanisms added to GGBDA on its balance.
Figure 5 shows the result of the diversity analysis of GGBDA and the DA; the ordinate represents the population diversity. The diversity of both algorithms is very high at the beginning because the initial population is randomly generated. Then, in the iterative process, each algorithm continues to narrow its search range, so the diversity of the population decreases, and the diversity curves of both algorithms almost reach their lowest values during the iteration. However, the descent processes of the two algorithms are very different. We can clearly observe that the DA maintained a high diversity in the early stage; its diversity curve then dropped to its lowest value very quickly in the mid-term, completing this change in a very short time.
In contrast, the curve of GGBDA declined more gently: it drops rapidly only at the initial stage, after which the rate of decline slows down. This is obvious for F2 and F14 and shows that the two added mechanisms affect the diversity of the DA. Owing to its strong search capability, the proposed GGBDA can also be applied to other optimization problems, such as fault detection [81]; metabolomic data processing [82,83]; urban road planning [84]; multivariate time series analysis [85]; gene signature identification [86]; drug target discovery [87]; drug discovery [88]; pharmacoinformatics data mining [89]; service ecosystems [90,91]; information retrieval services [92,93,94]; kayak cycle phase segmentation [95]; covert communication systems [96,97,98]; location-based services [99,100]; and human motion capture [101].

4.3. Real-World Problems

4.3.1. Pressure Vessel Design (PVD) Problem

The PVD problem is a common engineering design problem. There are four constraints and four parameters in the PVD problem. The main aim is to obtain a pressure vessel that meets the conditions with relatively minimal costs.
The formula of this problem is listed below.
Consider:
$X = [x_1,\, x_2,\, x_3,\, x_4] = [T_s,\, T_h,\, R,\, L]$
Range of parameters:
$0 \le x_1 \le 99$, $0 \le x_2 \le 99$, $10 \le x_3 \le 200$, $10 \le x_4 \le 200$
Minimize:
$f(x) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$
Subject to:
$g_1(X) = -x_1 + 0.0193 x_3 \le 0$
$g_2(X) = -x_2 + 0.00954 x_3 \le 0$
$g_3(X) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0$
$g_4(X) = x_4 - 240 \le 0$
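To make the standard PVD formulation concrete, the objective and constraints can be evaluated directly. The sample design vector below is an illustrative rounded near-optimum from the benchmark literature (our choice, not output from GGBDA), and it reproduces a cost close to the 6059.73 value reported in Table 4:

```python
import numpy as np

def pvd_objective(x):
    x1, x2, x3, x4 = x  # shell thickness Ts, head thickness Th, radius R, length L
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pvd_constraints(x):
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000.0,
        x4 - 240.0,
    ])  # feasible when every entry is <= 0

# a rounded near-optimal design (illustrative values)
x = [0.8125, 0.4375, 42.0984, 176.6366]
cost = pvd_objective(x)
```

At this design, the first and third constraints are approximately active (close to zero), which is typical of the reported optima for this problem.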
Table 4 shows the results of GGBDA on the PVD problem, compared with other peers in the literature. The results show that the optimal value obtained by GGBDA was 6059.7298, which is better than those of CPSO, WOA, and branch-and-bound. Moreover, GGBDA performs similarly to MFO, HPSO, and BA.

4.3.2. Hydrostatic Thrust Bearings Design (HTBD) Problem

The goal of the HTBD problem is to minimize power loss while meeting some design constraints. There are four design variables: bearing step radius (R), recess radius (R0), oil viscosity (μ), and flow rate (Q). The mathematical model of this problem is shown below.
Minimize:
$f(x) = \dfrac{Q P_0}{0.7} + E_f$
Subject to:
$g_1(x) = \dfrac{\pi P_0}{2} \cdot \dfrac{R^2 - R_0^2}{\ln(R/R_0)} - W_s \ge 0$
$g_2(x) = P_{\max} - P_0 \ge 0$
$g_3(x) = \Delta T_{\max} - \Delta T \ge 0$
$g_4(x) = h - h_{\min} \ge 0$
$g_5(x) = R - R_0 \ge 0$
$g_6(x) = 0.001 - \dfrac{\gamma}{g P_0}\left(\dfrac{Q}{2\pi R h}\right) \ge 0$
$g_7(x) = 5000 - \dfrac{W}{\pi\left(R^2 - R_0^2\right)} \ge 0$
where
$P_0 = \dfrac{6\mu Q}{\pi h^3}\ln\dfrac{R}{R_0}$
$E_f = 9336\, Q \gamma C \Delta T$
$\Delta T = 2\left(10^{P} - 560\right)$
$P = \dfrac{\log_{10}\log_{10}\left(8.122\times10^{6}\mu + 0.8\right) - C_1}{n}$
$h = \left(\dfrac{2\pi N}{60}\right)^2 \dfrac{2\pi\mu}{E_f}\left(\dfrac{R^4}{4} - \dfrac{R_0^4}{4}\right)$
$C_1 = 10.04$, $n = -3.55$, $P_{\max} = 1000$, $W_s = 101000$, $\Delta T_{\max} = 50$, $h_{\min} = 0.001$, $g = 386.4$, $N = 750$
Variable ranges:
$5 \le D_e, D_i \le 15$, $0.01 \le t \le 6$, $0.05 \le h \le 0.5$
Table 5 shows the results of the HTBD problem. It can be seen that the optimal value of GGBDA is 19,508.76, which is better than those of PSO, SQP, and GASO. Moreover, GGBDA has almost the same effect as TNE and TLBO.

4.3.3. Welded Beam Design (WBD) Problem

The WBD problem aims to minimize the cost of a welded beam subject to constraints on shear stress (τ), bending stress (σ), buckling load (Pc), and deflection (δ). The variables in this problem are the welding seam thickness (h), welding joint length (l), beam width (t), and beam thickness (b). The mathematical model of this problem is given below.
Consider:
$x = [x_1,\, x_2,\, x_3,\, x_4] = [h,\, l,\, t,\, b]$
Minimize:
$f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 \left(14.0 + x_2\right)$
Subject to:
$g_1(x) = \tau(x) - \tau_{\max} \le 0$
$g_2(x) = \sigma(x) - \sigma_{\max} \le 0$
$g_3(x) = \delta(x) - \delta_{\max} \le 0$
$g_4(x) = x_1 - x_4 \le 0$
$g_5(x) = P - P_C(x) \le 0$
$g_6(x) = 0.125 - x_1 \le 0$
$g_7(x) = 1.10471 x_1^2 + 0.04811 x_3 x_4 \left(14.0 + x_2\right) - 5.0 \le 0$
Variable range:
$0.1 \le x_1 \le 2$, $0.1 \le x_2 \le 10$, $0.1 \le x_3 \le 10$, $0.1 \le x_4 \le 2$
where
$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\dfrac{x_2}{2R} + (\tau'')^2}$
$\tau' = \dfrac{P}{\sqrt{2}\, x_1 x_2}$, $\tau'' = \dfrac{MR}{J}$, $M = P\left(L + \dfrac{x_2}{2}\right)$
$R = \sqrt{\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2}$
$J = 2\left\{\sqrt{2}\, x_1 x_2 \left[\dfrac{x_2^2}{4} + \left(\dfrac{x_1 + x_3}{2}\right)^2\right]\right\}$
$\sigma(x) = \dfrac{6PL}{x_4 x_3^2}$, $\delta(x) = \dfrac{6PL^3}{E x_3^2 x_4}$
$P_C(x) = \dfrac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \dfrac{x_3}{2L}\sqrt{\dfrac{E}{4G}}\right)$
$P = 6000\ \mathrm{lb}$, $L = 14\ \mathrm{in}$, $\delta_{\max} = 0.25\ \mathrm{in}$, $E = 30\times10^{6}\ \mathrm{psi}$, $G = 12\times10^{6}\ \mathrm{psi}$, $\tau_{\max} = 13600\ \mathrm{psi}$, $\sigma_{\max} = 30000\ \mathrm{psi}$
The results of the WBD problem are shown in Table 6. The optimal value of GGBDA is 1.724527, the lowest among all the algorithms; GGBDA thus performs better than the other peers in the experiment.

4.3.4. Tension–Compression String Design (TCSD) Problem

The TCSD problem is to design a tension–compression spring with minimum weight that meets the constraints. The three variables in the problem are the wire diameter (d), mean coil diameter (D), and the number of active coils (N). The mathematical model of this problem is given below.
Consider:
$x = [x_1,\, x_2,\, x_3] = [d,\, D,\, N]$
Objective function:
Minimize $f(x) = \left(x_3 + 2\right) x_2 x_1^2$
Subject to:
$h_1(x) = 1 - \dfrac{x_2^3 x_3}{71785 x_1^4} \le 0$
$h_2(x) = \dfrac{4x_2^2 - x_1 x_2}{12566\left(x_2 x_1^3 - x_1^4\right)} + \dfrac{1}{5108 x_1^2} - 1 \le 0$
$h_3(x) = 1 - \dfrac{140.45 x_1}{x_2^2 x_3} \le 0$
$h_4(x) = \dfrac{x_1 + x_2}{1.5} - 1 \le 0$
Variable ranges:
$0.05 \le x_1 \le 2.00$, $0.25 \le x_2 \le 1.30$, $2.00 \le x_3 \le 15.0$
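The standard TCSD formulation is likewise easy to evaluate directly. The sample design below is an illustrative rounded near-optimum commonly reported for this benchmark (our choice, not output from GGBDA), and it reproduces a weight close to the 0.012665 value reported in Table 7:

```python
import numpy as np

def tcsd_objective(x):
    d, D, N = x  # wire diameter, mean coil diameter, number of active coils
    return (N + 2.0) * D * d**2

def tcsd_constraints(x):
    d, D, N = x
    return np.array([
        1.0 - (D**3 * N) / (71785.0 * d**4),
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4))
            + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ])  # feasible when every entry is <= 0

# a rounded near-optimal spring (illustrative values)
x = [0.051689, 0.356718, 11.288966]
weight = tcsd_objective(x)
```

At this design, the first two constraints are approximately active, which is consistent with the reported optima for this problem.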
Table 7 shows the results of the TCSD problem. The optimal values of GGBDA and NDE are both 0.012665, the lowest among the algorithms, showing that GGBDA also performs well on the TCSD problem.

5. Conclusions

The purpose of this research was to propose an enhanced DA that solves engineering design problems more efficiently and precisely. The Gaussian mutation and the Gaussian barebone are embedded into the DA, and the resulting algorithm is termed GGBDA. The Gaussian mutation is used to update the individual locations in a random manner and prevent the algorithm from slipping into local optima, improving the global searching ability; the Gaussian barebone is used to accelerate convergence and strengthen the local exploitation capacity. This study compared the performance of GGBDA with that of other competitive peers on 30 benchmarks and 4 engineering design problems. The experimental findings demonstrate that GGBDA outperforms the DA and the other competing algorithms in terms of solution accuracy and convergence speed.
In future work, we will further improve GGBDA’s performance, reduce its time cost, and address its remaining design issues. GGBDA may also be applied to energy optimization, image segmentation, and parameter optimization of machine learning methods.

Author Contributions

Conceptualization, L.Y., F.K. and H.C.; Methodology, H.C. and S.Z.; software, S.Z.; validation, H.C., L.Y., F.K. and S.Z.; formal analysis, L.Y. and F.K.; investigation, S.Z.; resources, H.C.; data curation, S.Z.; writing—original draft preparation, S.Z.; writing—review and editing, H.C., S.Z., L.Y. and F.K.; visualization, S.Z., L.Y. and F.K.; supervision, L.Y. and F.K.; project administration, S.Z.; funding acquisition, H.C., L.Y., F.K. and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Wenzhou Science and Technology Bureau (ZG2020030) and the Humanities and Social Science Research Planning Fund Project of the Ministry of Education (20YJA790090).

Data Availability Statement

The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2021, 55, 59–110. [Google Scholar] [CrossRef]
  2. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  3. Luo, J.; Chen, H.; Heidari, A.A.; Xu, Y.; Zhang, Q.; Li, C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl. Math. Model. 2019, 73, 109–123. [Google Scholar] [CrossRef]
  4. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  5. Booker, L.B.; Goldberg, D.E.; Holland, J.H. Classifier systems and genetic algorithms. Artif. Intell. 1989, 40, 235–282. [Google Scholar] [CrossRef]
  6. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 1996, 26, 29–41. [Google Scholar] [CrossRef] [Green Version]
  7. Deng, W.; Xu, J.; Zhao, H. An Improved Ant Colony Optimization Algorithm Based on Hybrid Strategies for Scheduling Problem. IEEE Access 2019, 7, 20281–20292. [Google Scholar] [CrossRef]
  8. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks—Conference Proceedings, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995. [Google Scholar]
  9. Zhang, X.; Hu, W.; Xie, N.; Bao, H.; Maybank, S. A Robust Tracking System for Low Frame Rate Video. Int. J. Comput. Vis. 2015, 115, 279–304. [Google Scholar] [CrossRef] [Green Version]
  10. Yang, X.-S. Firefly Algorithms for Multimodal Optimization. In Stochastic Algorithms: Foundations and Applications; SAGA 2009. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 5792. [Google Scholar]
  11. Pan, W.T. A new Fruit Fly Optimization Algorithm: Taking the financial distress model as an example. Knowl.-Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  12. Shen, L.; Chen, H.; Yu, Z.; Kang, W.; Zhang, B.; Li, H.; Yang, B.; Liu, D. Evolving support vector machines using fruit fly optimization for medical data classification. Knowl.-Based Syst. 2016, 96, 61–75. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Liu, R.; Asghar Heidari, A.; Wang, X.; Chen, Y.; Wang, M.; Chen, H. Towards Augmented Kernel Extreme Learning Models for Bankruptcy Prediction: Algorithmic Behavior and Comprehensive Analysis. Neurocomputing 2020, 430, 185–212. [Google Scholar] [CrossRef]
  14. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  15. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  16. Xu, Y.; Chen, H.; Luo, J.; Zhang, Q.; Jiao, S.; Zhang, X. Enhanced Moth-flame optimizer with mutation strategy for global optimization. Inf. Sci. 2019, 492, 181–203. [Google Scholar] [CrossRef]
  17. Xu, Y.; Chen, H.; Heidari, A.A.; Luo, J.; Zhang, Q.; Zhao, X.; Li, C. An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Expert Syst. Appl. 2019, 129, 135–155. [Google Scholar] [CrossRef]
  18. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  19. Zhao, X.; Zhang, X.; Cai, Z.; Tian, X.; Wang, X.; Huang, Y.; Chen, H.; Hu, L. Chaos enhanced grey wolf optimization wrapped ELM for diagnosis of paraquat-poisoned patients. Comput. Biol. Chem. 2019, 78, 481–490. [Google Scholar] [CrossRef]
  20. Yang, X.S. A new metaheuristic Bat-inspired Algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); (Studies in Computational Intelligence); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  21. Yu, H.; Zhao, N.; Wang, P.; Chen, H.; Li, C. Chaos-enhanced synchronized bat optimizer. Appl. Math. Model. 2020, 77, 1201–1215. [Google Scholar] [CrossRef]
  22. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  23. Luo, J.; Chen, H.; Zhang, Q.; Xu, Y.; Huang, H.; Zhao, X. An improved grasshopper optimization algorithm with application to financial stress prediction. Appl. Math. Model. 2018, 64, 654–668. [Google Scholar] [CrossRef]
  24. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  25. Tu, J.; Chen, H.; Wang, M.; Gandomi, A.H. The Colony Predation Algorithm. J. Bionic Eng. 2021, 18, 674–710. [Google Scholar] [CrossRef]
  26. Yang, Y.; Chen, H.; Heidari, A.A.; Gandomi, A.H. Hunger games search: Visions, conception, implementation, deep analysis, perspectives, and towards performance shifts. Expert Syst. Appl. 2021, 177, 114864. [Google Scholar] [CrossRef]
  27. Ahmadianfar, I.; Asghar Heidari, A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN Beyond the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  28. Ahmadianfar, I.; Asghar Heidari, A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An Efficient Optimization Algorithm based on Weighted Mean of Vectors. Expert Syst. Appl. 2022, 194, 116516. [Google Scholar] [CrossRef]
  29. Wu, S.-H.; Zhan, Z.-H.; Zhang, J. SAFE: Scale-adaptive fitness evaluation method for expensive optimization problems. IEEE Trans. Evol. Comput. 2021, 25, 478–491. [Google Scholar] [CrossRef]
  30. Li, J.-Y.; Zhan, Z.-H.; Wang, C.; Jin, H.; Zhang, J. Boosting data-driven evolutionary algorithm with localized data generation. IEEE Trans. Evol. Comput. 2020, 24, 923–937. [Google Scholar] [CrossRef]
  31. Ying, C.; Ying, C.; Ban, C. A performance optimization strategy based on degree of parallelism and allocation fitness. EURASIP J. Wirel. Commun. Netw. 2018, 2018, 1–8. [Google Scholar] [CrossRef] [Green Version]
  32. Hu, K.; Ye, J.; Fan, E.; Shen, S.; Huang, L.; Pi, J. A novel object tracking algorithm by fusing color and depth information based on single valued neutrosophic cross-entropy. J. Intell. Fuzzy Syst. 2017, 32, 1775–1786. [Google Scholar] [CrossRef] [Green Version]
  33. Hu, K.; He, W.; Ye, J.; Zhao, L.; Peng, H.; Pi, J. Online Visual Tracking of Weighted Multiple Instance Learning via Neutrosophic Similarity-Based Objectness Estimation. Symmetry 2019, 11, 832. [Google Scholar] [CrossRef] [Green Version]
  34. Zhang, W.; Hou, W.; Li, C.; Yang, W.; Gen, M. Multidirection Update-Based Multiobjective Particle Swarm Optimization for Mixed No-Idle Flow-Shop Scheduling Problem. Complex Syst. Modeling Simul. 2021, 1, 176–197. [Google Scholar] [CrossRef]
  35. Liu, X.-F.; Zhan, Z.-H.; Gao, Y.; Zhang, J.; Kwong, S.; Zhang, J. Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization. IEEE Trans. Evol. Comput. 2018, 23, 587–602. [Google Scholar] [CrossRef]
  36. Deng, W.; Zhang, X.; Zhou, Y.; Liu, Y.; Deng, W.; Chen, H.; Zhao, H. An enhanced fast non-dominated solution sorting genetic algorithm for multi-objective problems. Inf. Sci. 2021, 585, 441–453. [Google Scholar] [CrossRef]
  37. Lai, X.; Zhou, Y. Analysis of multiobjective evolutionary algorithms on the biobjective traveling salesman problem (1, 2). Multimed. Tools Appl. 2020, 79, 30839–30860. [Google Scholar] [CrossRef]
  38. Yang, Z.; Li, K.; Guo, Y.; Ma, H.; Zheng, M. Compact real-valued teaching-learning based optimization with the applications to neural network training. Knowl. Based Syst. 2018, 159, 51–62. [Google Scholar] [CrossRef]
  39. Han, X.; Han, Y.; Chen, Q.; Li, J.; Sang, H.; Liu, Y.; Pan, Q.; Nojima, Y. Distributed Flow Shop Scheduling with Sequence-Dependent Setup Times Using an Improved Iterated Greedy Algorithm. Complex Syst. Modeling Simul. 2021, 1, 198–217. [Google Scholar] [CrossRef]
  40. Yi, J.-H.; Deb, S.; Dong, J.; Alavi, A.H.; Wang, G.-G. An improved NSGA-III algorithm with adaptive mutation operator for Big Data optimization problems. Future Gener. Comput. Syst. 2018, 88, 571–585. [Google Scholar] [CrossRef]
  41. Deng, W.; Liu, H.; Xu, J.; Zhao, H.; Song, Y.J. An improved quantum-inspired differential evolution algorithm for deep belief network. IEEE Trans. Instrum. Meas. 2020, 69, 7319–7327. [Google Scholar] [CrossRef]
  42. Sun, Y.; Xue, B.; Zhang, M.; Yen, G.G. Evolving deep convolutional neural networks for image classification. IEEE Trans. Evol. Comput. 2019, 24, 394–407. [Google Scholar] [CrossRef] [Green Version]
  43. Deng, W.; Xu, J.; Zhao, H.; Song, Y. A Novel Gate Resource Allocation Method Using Improved PSO-Based QEA. IEEE Trans. Intell. Transp. Syst. 2020. [Google Scholar] [CrossRef]
  44. Deng, W.; Xu, J.; Song, Y.; Zhao, H. An Effective Improved Co-evolution Ant Colony Optimization Algorithm with Multi-Strategies and Its Application. Int. J. Bio-Inspired Comput. 2020, 16, 158–170. [Google Scholar] [CrossRef]
  45. Zhao, F.; Di, S.; Cao, J.; Tang, J. A novel cooperative multi-stage hyper-heuristic for combination optimization problems. Complex Syst. Modeling Simul. 2021, 1, 91–108. [Google Scholar] [CrossRef]
  46. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  47. Guha, D.; Roy, P.K.; Banerjee, S. Optimal tuning of 3 degree-of-freedom proportional-integral-derivative controller for hybrid distributed power system using dragonfly algorithm. Comput. Electr. Eng. 2018, 72, 137–153. [Google Scholar] [CrossRef]
  48. Wu, J.; Zhu, Y.; Wang, Z.; Song, Z.; Liu, X.; Wang, W.; Zhang, Z.; Yu, Y.; Xu, Z.; Zhang, T.; et al. A novel ship classification approach for high resolution SAR images based on the BDA-KELM classification model. Int. J. Remote Sens. 2017, 38, 6457–6476. [Google Scholar] [CrossRef]
  49. Ashok Kumar, C.; Vimala, R.; Aravind Britto, K.R.; Sathya Devi, S. FDLA: Fractional Dragonfly based Load balancing Algorithm in cluster cloud model. Clust. Comput. 2019, 22, 1401–1414. [Google Scholar] [CrossRef]
  50. VeeraManickam, M.R.M.; Mohanapriya, M.; Pandey, B.K.; Akhade, S.; Kale, S.A.; Patil, R.; Vigneshwar, M. Map-Reduce framework based cluster architecture for academic student’s performance prediction using cumulative dragonfly based neural network. Clust. Comput. 2019, 22, 1259–1275. [Google Scholar] [CrossRef]
  51. Yu, C.; Cai, Z.; Ye, X.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Li, C. Quantum-like mutation-induced dragonfly-inspired optimization approach. Math. Comput. Simul. 2020, 178, 259–289. [Google Scholar] [CrossRef]
  52. Sree Ranjini, S.R.; Murugan, S. Memory based Hybrid Dragonfly Algorithm for numerical optimization problems. Expert Syst. Appl. 2017, 83, 63–78. [Google Scholar]
  53. More, N.S.; Ingle, R.B. Energy-aware VM migration using dragonfly-crow optimization and support vector regression model in Cloud. Int. J. Modeling Simul. Sci. Comput. 2018, 9, 1850050. [Google Scholar] [CrossRef]
  54. Khadanga, R.K.; Padhy, S.; Panda, S.; Kumar, A. Design and Analysis of Tilt Integral Derivative Controller for Frequency Control in an Islanded Microgrid: A Novel Hybrid Dragonfly and Pattern Search Algorithm Approach. Arab. J. Sci. Eng. 2018, 43, 3103–3114. [Google Scholar] [CrossRef]
  55. Ghanem, W.A.H.M.; Jantan, A. A Cognitively Inspired Hybridization of Artificial Bee Colony and Dragonfly Algorithms for Training Multi-layer Perceptrons. Cogn. Comput. 2018, 10, 1096–1134. [Google Scholar] [CrossRef]
  56. Shilaja, C.; Arunprasath, T. Internet of medical things-load optimization of power flow based on hybrid enhanced grey wolf optimization and dragonfly algorithm. Future Gener. Comput. Syst. 2019, 98, 319–330. [Google Scholar]
  57. Aadil, F.; Ahsan, W.; Rehman, Z.U.; Shah, P.A.; Rho, S.; Mehmood, I. Clustering algorithm for internet of vehicles (IoV) based on dragonfly optimizer (CAVDO). J. Supercomput. 2018, 74, 4542–4567. [Google Scholar] [CrossRef]
  58. Aci, C.I.; Gülcan, H. A modified dragonfly optimization algorithm for single- and multiobjective problems using brownian motion. Comput. Intell. Neurosci. 2019, 2019, 6871298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Bao, X.; Jia, H.; Lang, C. Dragonfly algorithm with Opposition-based learning for multilevel thresholding color image segmentation. Symmetry 2019, 11, 716. [Google Scholar] [CrossRef] [Green Version]
  60. Li, L.L.; Zhao, X.; Tseng, M.L.; Tan, R.R. Short-term wind power forecasting based on support vector machine with improved dragonfly algorithm. J. Clean. Prod. 2020, 242, 118447. [Google Scholar] [CrossRef]
  61. Sayed, G.I.; Tharwat, A.; Hassanien, A.E. Chaotic dragonfly algorithm: An improved metaheuristic algorithm for feature selection. Appl. Intell. 2019, 49, 188–205. [Google Scholar] [CrossRef]
  62. Mafarja, M.; Aljarah, I.; Heidari, A.A.; Faris, H.; Fournier-Viger, P.; Li, X.; Mirjalili, S. Binary dragonfly optimization for feature selection using time-varying transfer functions. Knowl.-Based Syst. 2018, 161, 185–204. [Google Scholar] [CrossRef]
  63. Hariharan, M.; Sindhu, R.; Vijean, V.; Yazid, H.; Nadarajaw, T.; Yaacob, S.; Polat, K. Improved binary dragonfly optimization algorithm and wavelet packet based non-linear features for infant cry classification. Comput. Methods Programs Biomed. 2018, 155, 39–51. [Google Scholar] [CrossRef]
  64. Zhang, A.; Zhang, P.; Feng, Y. Short-term load forecasting for microgrids based on DA-SVM. COMPEL Int. J. Comput. Math. Electr. Electron. Eng. 2018, 38, 68–80. [Google Scholar] [CrossRef]
  65. Yuan, Y.; Lv, L.; Wang, X.; Song, X. Optimization of a frame structure using the Coulomb force search strategy-based dragonfly algorithm. Eng. Optim. 2019, 52, 915–931. [Google Scholar] [CrossRef]
  66. Zhang, Z.; Hong, W.C. Electric load forecasting by complete ensemble empirical mode decomposition adaptive noise and support vector regression with quantum-based dragonfly algorithm. Nonlinear Dyn. 2019, 98, 1107–1136. [Google Scholar] [CrossRef]
  67. Suresh, V.; Sreejith, S. Generation dispatch of combined solar thermal systems using dragonfly algorithm. Computing 2017, 99, 59–80. [Google Scholar] [CrossRef]
  68. Sureshkumar, K.; Ponnusamy, V. Power flow management in micro grid through renewable energy sources using a hybrid modified dragonfly algorithm with bat search algorithm. Energy 2019, 181, 1166–1178. [Google Scholar] [CrossRef]
  69. Xie, T.; Yao, J.; Zhou, Z. DA-based parameter optimization of combined kernel support vector machine for cancer diagnosis. Processes 2019, 7, 263. [Google Scholar] [CrossRef] [Green Version]
  70. Xu, L.; Jia, H.; Lang, C.; Peng, X.; Sun, K. A Novel Method for Multilevel Color Image Segmentation Based on Dragonfly Algorithm and Differential Evolution. IEEE Access 2019, 7, 19502–19538. [Google Scholar] [CrossRef]
  71. Zhang, Q.; Wang, Z.; Heidari, A.A.; Gui, W.; Shao, Q.; Chen, H.; Zaguia, A.; Turabieh, H.; Chen, M. Gaussian Barebone Salp Swarm Algorithm with Stochastic Fractal Search for medical image segmentation: A COVID-19 case study. Comput. Biol. Med. 2021, 139, 104941. [Google Scholar] [CrossRef]
  72. Xia, J.; Zhang, H.; Li, R.; Wang, Z.; Cai, Z.; Gu, Z.; Chen, H.; Pan, Z. Adaptive Barebones Salp Swarm Algorithm with Quasi-oppositional Learning for Medical Diagnosis Systems: A Comprehensive Analysis. J. Bionic Eng. 2022, 19, 1–17. [Google Scholar] [CrossRef]
  73. Liang, J.; Qu, B.Y.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report, 201311; Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  74. Abd Elaziz, M.; Oliva, D.; Xiong, S. An improved Opposition-Based Sine Cosine Algorithm for global optimization. Expert Syst. Appl. 2017, 90, 484–500. [Google Scholar] [CrossRef]
  75. Qu, C.; Zeng, Z.; Dai, J.; Yi, Z.; He, W. A Modified Sine-Cosine Algorithm Based on Neighborhood Search and Greedy Levy Mutation. Comput. Intell. Neurosci. 2018, 2018, 4231647. [Google Scholar] [CrossRef] [PubMed]
  76. Nenavath, H.; Jatoth, R.K. Hybridizing sine cosine algorithm with differential evolution for global optimization and object tracking. Appl. Soft Comput. 2018, 62, 1019–1043. [Google Scholar] [CrossRef]
  77. Issa, M.; Hassanien, A.E.; Oliva, D.; Helmi, A.; Ziedan, I.; Alzohairy, A. ASCA-PSO: Adaptive sine cosine optimization algorithm integrated with particle swarm for pairwise local sequence alignment. Expert Syst. Appl. 2018, 99, 56–70. [Google Scholar] [CrossRef]
  78. Elhosseini, M.A.; Haikal, A.Y.; Badawy, M.; Khashan, N. Biped robot stability based on an A–C parametric Whale Optimization Algorithm. J. Comput. Sci. 2019, 31, 17–32. [Google Scholar] [CrossRef]
  79. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  80. Yang, X.-S. Firefly Algorithms for Multimodal Optimization; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  81. Cui, H.; Guan, Y.; Chen, H.; Deng, W. A Novel Advancing Signal Processing Method Based on Coupled Multi-Stable Stochastic Resonance for Fault Detection. Appl. Sci. 2021, 11, 5385. [Google Scholar] [CrossRef]
  82. Fu, J.; Zhang, Y.; Wang, Y.; Zhang, H.; Liu, J.; Tang, J.; Yang, Q.; Sun, H.; Qiu, W.; Ma, Y. Optimization of metabolomic data processing using NOREVA. Nat. Protoc. 2021, 17, 129–151. [Google Scholar] [CrossRef]
  83. Li, B.; Tang, J.; Yang, Q.; Li, S.; Cui, X.; Li, Y.; Chen, Y.; Xue, W.; Li, X.; Zhu, F. NOREVA: Normalization and evaluation of MS-based metabolomics data. Nucleic Acids Res. 2017, 45, W162–W170. [Google Scholar] [CrossRef] [Green Version]
  84. Ran, X.; Zhou, X.; Lei, M.; Tepsan, W.; Deng, W. A novel k-means clustering algorithm with a noise algorithm for capturing urban hotspots. Appl. Sci. 2021, 11, 11202. [Google Scholar] [CrossRef]
  85. Wang, M.; Zhang, Q.; Chen, H.; Heidari, A.A.; Mafarja, M.; Turabieh, H. Evaluation of constraint in photovoltaic cells using ensemble multi-strategy shuffled frog leading algorithms. Energy Convers. Manag. 2021, 244, 114484. [Google Scholar] [CrossRef]
  86. Yang, Q.; Li, B.; Tang, J.; Cui, X.; Wang, Y.; Li, X.; Hu, J.; Chen, Y.; Xue, W.; Lou, Y. Consistent gene signature of schizophrenia identified by a novel feature selection strategy from comprehensive sets of transcriptomic data. Brief. Bioinform. 2020, 21, 1058–1068. [Google Scholar] [CrossRef] [PubMed]
  87. Li, Y.H.; Li, X.X.; Hong, J.J.; Wang, Y.X.; Fu, J.B.; Yang, H.; Yu, C.Y.; Li, F.C.; Hu, J.; Xue, W.W. Clinical trials, progression-speed differentiating features and swiftness rule of the innovative targets of first-in-class drugs. Brief. Bioinform. 2020, 21, 649–662. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  88. Zhu, F.; Qin, C.; Tao, L.; Liu, X.; Shi, Z.; Ma, X.; Jia, J.; Tan, Y.; Cui, C.; Lin, J. Clustered patterns of species origins of nature-derived drugs and clues for future bioprospecting. Proc. Natl. Acad. Sci. USA 2011, 108, 12943–12948. [Google Scholar] [CrossRef] [Green Version]
  89. Yin, J.; Sun, W.; Li, F.; Hong, J.; Li, X.; Zhou, Y.; Lu, Y.; Liu, M.; Zhang, X.; Chen, N. VARIDT 1.0: Variability of drug transporter database. Nucleic Acids Res. 2020, 48, D1042–D1050. [Google Scholar] [CrossRef]
  90. Xue, X.; Wang, S.F.; Zhan, L.J.; Feng, Z.Y.; Guo, Y.D. Social Learning Evolution (SLE): Computational Experiment-Based Modeling Framework of Social Manufacturing. IEEE Trans. Ind. Inform. 2019, 15, 3343–3355. [Google Scholar] [CrossRef]
  91. Xue, X.; Chen, Z.; Wang, S.; Feng, Z.; Duan, Y.; Zhou, Z. Value Entropy: A Systematic Evaluation Model of Service Ecosystem Evolution. IEEE Trans. Serv. Comput. 2020. [Google Scholar] [CrossRef]
  92. Wu, Z.; Li, R.; Xie, J.; Zhou, Z.; Guo, J.; Xu, X. A user sensitive subject protection approach for book search service. J. Assoc. Inf. Sci. Technol. 2020, 71, 183–195. [Google Scholar] [CrossRef]
  93. Wu, Z.; Shen, S.; Lian, X.; Su, X.; Chen, E. A dummy-based user privacy protection approach for text information retrieval. Knowl.-Based Syst. 2020, 195, 105679. [Google Scholar] [CrossRef]
  94. Wu, Z.; Shen, S.; Zhou, H.; Li, H.; Lu, C.; Zou, D. An effective approach for the protection of user commodity viewing privacy in e-commerce website. Knowl.-Based Syst. 2021, 220, 106952. [Google Scholar] [CrossRef]
  95. Qiu, S.; Hao, Z.; Wang, Z.; Liu, L.; Liu, J.; Zhao, H.; Fortino, G. Sensor Combination Selection Strategy for Kayak Cycle Phase Segmentation Based on Body Sensor Networks. IEEE Internet Things J. 2021, in press. [Google Scholar] [CrossRef]
  96. Zhang, L.; Zou, Y.; Wang, W.; Jin, Z.; Su, Y.; Chen, H. Resource Allocation and Trust Computing for Blockchain-Enabled Edge Computing System. Comput. Secur. 2021, 105, 102249. [Google Scholar] [CrossRef]
  97. Zhang, L.; Zhang, Z.; Wang, W.; Waqas, R.; Zhao, C.; Kim, S.; Chen, H. A Covert Communication Method Using Special Bitcoin Addresses Generated by Vanitygen. Comput. Mater. Contin. 2020, 65, 597–616. [Google Scholar]
  98. Zhang, L.; Zhang, Z.; Wang, W.; Jin, Z.; Su, Y.; Chen, H. Research on a Covert Communication Model Realized by Using Smart Contracts in Blockchain Environment. IEEE Syst. J. 2021, in press. [Google Scholar] [CrossRef]
  99. Wu, Z.; Li, G.; Shen, S.; Cui, Z.; Lian, X.; Xu, G. Constructing dummy query sequences to protect location privacy and query privacy in location-based services. World Wide Web 2021, 24, 25–49. [Google Scholar] [CrossRef]
  100. Wu, Z.; Wang, R.; Li, Q.; Lian, X.; Xu, G. A location privacy-preserving system based on query range cover-up for location-based services. IEEE Trans. Veh. Technol. 2020, 69, 5244–5254. [Google Scholar] [CrossRef]
  101. Qiu, S.; Zhao, H.; Jiang, N.; Wu, D.; Song, G.; Zhao, H.; Wang, Z. Sensor network oriented human motion capture via wearable intelligent system. Int. J. Intell. Syst. 2021, 37, 1646–1673. [Google Scholar] [CrossRef]
  102. Gandomi, A.; Yang, X.-S.; Alavi, A.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2013, 22, 1239–1255. [Google Scholar] [CrossRef]
  103. He, Q.; Wang, L. A Hybrid Particle Swarm Optimization with a Feasibility-based Rule for Constrained Optimization. Appl. Math. Comput. 2007, 186, 1407–1422. [Google Scholar] [CrossRef]
  104. Kaveh, A.; Talatahari, S. A Novel Heuristic Optimization Method: Charged System Search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  105. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  106. Kaveh, A.; Talatahari, S. An improved ant colony optimization for constrained engineering design problems. Eng. Comput. 2010, 27, 155–182. [Google Scholar] [CrossRef]
  107. Mezura-Montes, E.; Coello Coello, C.A.; Velázquez-Reyes, J.; Muñoz-Dávila, L. Multiple trial vectors in differential evolution for engineering design. Eng. Optim. 2007, 39, 567–589. [Google Scholar] [CrossRef]
  108. Sandgren, E. Nonlinear integer and discrete programming in mechanical design. In Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Kissimmee, FL, USA, 25–28 September 1988; The American Society of Mechanical Engineers: New York, NY, USA, 1988; Volume 14. [Google Scholar]
  109. Mohamed, A.W. A novel differential evolution algorithm for solving constrained engineering optimization problems. J. Intell. Manuf. 2018, 29, 659–692. [Google Scholar]
  110. Rao, V.R.; Savsani, J.V.; Vakharia, P.D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  111. Kentli, A.; Sahbaz, M. Optimisation of Hydrostatic Thrust Bearing Using Sequential Quadratic Programming. Oxid. Commun. 2014, 37, 1144–1152. [Google Scholar]
  112. He, S.; Prempain, E.; Wu, Q. An improved particle swarm optimizer for mechanical design optimization problems. Eng. Optim. 2004, 36, 585–605. [Google Scholar] [CrossRef]
  113. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray Optimization. Comput. Struct. 2012, 112–113, 283–294. [Google Scholar] [CrossRef]
  114. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  115. Huang, F.-Z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  116. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  117. Coello Coello, C.A. Use of a self-adaptive penalty approach for engineering optimization problems. Comput. Ind. 2000, 41, 113–127. [Google Scholar] [CrossRef]
  118. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
  119. Mezura-Montes, E.; Coello, C.A.C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int. J. Gen. Syst. 2008, 37, 443–473. [Google Scholar] [CrossRef]
Figure 1. Flowchart of GGBDA.
Figure 2. Convergence graph of the 12 benchmarks.
Figure 3. (a) Three-dimensional location distribution of GGBDA, (b) two-dimensional location distribution of GGBDA, (c) trajectory of GGBDA in the first dimension, (d) average fitness of GGBDA, and (e) convergence curves of GGBDA and DA.
Figure 4. Balance analysis of GGBDA and DA.
Figure 5. Diversity analysis of GGBDA and DA.
Table 1. Description of the 30 benchmark functions.
ID | Function Equation | Search Range | Optimum Value

CEC 2014 Unimodal Functions
F1 | Rotated High Conditioned Elliptic Function | [−100, 100] | f1(Xmin) = 100
F2 | Rotated Bent Cigar Function | [−100, 100] | f2(Xmin) = 200
F3 | Rotated Discus Function | [−100, 100] | f3(Xmin) = 300

CEC 2014 Simple Multi-Modal Functions
F4 | Shifted and Rotated Rosenbrock Function | [−100, 100] | f4(Xmin) = 400
F5 | Shifted and Rotated Ackley Function | [−100, 100] | f5(Xmin) = 500
F6 | Shifted and Rotated Weierstrass Function | [−100, 100] | f6(Xmin) = 600
F7 | Shifted and Rotated Griewank Function | [−100, 100] | f7(Xmin) = 700
F8 | Shifted Rastrigin Function | [−100, 100] | f8(Xmin) = 800
F9 | Shifted and Rotated Rastrigin Function | [−100, 100] | f9(Xmin) = 900
F10 | Shifted Schwefel Function | [−100, 100] | f10(Xmin) = 1000
F11 | Shifted and Rotated Schwefel Function | [−100, 100] | f11(Xmin) = 1100
F12 | Shifted and Rotated Katsuura Function | [−100, 100] | f12(Xmin) = 1200
F13 | Shifted and Rotated HappyCat Function | [−100, 100] | f13(Xmin) = 1300
F14 | Shifted and Rotated HGBat Function | [−100, 100] | f14(Xmin) = 1400
F15 | Shifted and Rotated Expanded Griewank Plus Rosenbrock Function | [−100, 100] | f15(Xmin) = 1500
F16 | Shifted and Rotated Expanded Scaffer F6 Function | [−100, 100] | f16(Xmin) = 1600

CEC 2014 Hybrid Functions
F17 | Hybrid Function 1 (N = 3) | [−100, 100] | f17(Xmin) = 1700
F18 | Hybrid Function 2 (N = 3) | [−100, 100] | f18(Xmin) = 1800
F19 | Hybrid Function 3 (N = 4) | [−100, 100] | f19(Xmin) = 1900
F20 | Hybrid Function 4 (N = 4) | [−100, 100] | f20(Xmin) = 2000
F21 | Hybrid Function 5 (N = 5) | [−100, 100] | f21(Xmin) = 2100
F22 | Hybrid Function 6 (N = 5) | [−100, 100] | f22(Xmin) = 2200

CEC 2014 Composition Functions
F23 | Composition Function 1 (N = 5) | [−100, 100] | f23(Xmin) = 2300
F24 | Composition Function 2 (N = 3) | [−100, 100] | f24(Xmin) = 2400
F25 | Composition Function 3 (N = 3) | [−100, 100] | f25(Xmin) = 2500
F26 | Composition Function 4 (N = 5) | [−100, 100] | f26(Xmin) = 2600
F27 | Composition Function 5 (N = 5) | [−100, 100] | f27(Xmin) = 2700
F28 | Composition Function 6 (N = 5) | [−100, 100] | f28(Xmin) = 2800
F29 | Composition Function 7 (N = 3) | [−100, 100] | f29(Xmin) = 2900
F30 | Composition Function 8 (N = 3) | [−100, 100] | f30(Xmin) = 3000
Table 2. Parameter settings of the algorithms in the experiment.
Algorithms | Pop | Maximum Iterations | Others
GGBDA | 30 | 1000 | w ∈ [0.9, 0.2]; s = 0.1; a = 0.1; c = 0.7; f = 1; e = 1
OBSCA | 30 | 1000 | a = 2
m_SCA | 30 | 1000 | a = 2
SCADE | 30 | 1000 | a = 2; CR = 0.8; LSF = 0.8; USF = 0.2
ASCA_PSO | 30 | 1000 | M = 4; N = 9; Vmax = 6; wMax = 0.9; wMin = 0.2; c1 = 2; c2 = 2
ACWOA | 30 | 1000 | B = 1
MFO | 30 | 1000 | B = 1
SCA | 30 | 1000 | a = 2
FA | 30 | 1000 | alpha = 0.5; betamin = 0.2; gamma = 1
DA | 30 | 1000 | w ∈ [0.9, 0.2]; s = 0.1; a = 0.1; c = 0.7; f = 1; e = 1
Table 3. Experimental results of the 30 dimensions (30Ds).
F1 F2 F3
AveStdAveStdAveStd
GGBDA3.3428 × 1072.3615 × 1075.7799 × 1071.32174 × 1072.2502 × 1031.0237 × 103
OBSCA3.8095 × 1081.2188 × 1082.4577 × 10103.9982 × 1095.1744 × 1047.3043 × 103
m_SCA7.2766 × 1073.9039 × 1076.4809 × 1092.7501 × 1092.6967 × 1047.4237 × 103
SCADE4.3235 × 1081.0258 × 1082.9383 × 10104.9065 × 1095.3542 × 1046.3130 × 103
ASCA_PSO1.5733 × 1077.8447 × 1065.7234 × 1087.6338 × 1082.0200 × 1045.3347 × 103
ACWOA1.3598 × 1085.9536 × 1077.6372 × 1093.3593 × 1095.1123 × 1048.7487 × 103
MFO7.0131 × 1078.4361 × 1071.3759 × 10107.4030 × 1099.8036 × 1046.1005 × 104
SCA2.2033 × 1087.5726 × 1071.6600 × 10103.2678 × 1093.5442 × 1046.2559 × 103
FA2.5375 × 1085.0283 × 1071.5600 × 10102.0292 × 1096.3396 × 1049.7529 × 103
DA8.11892 × 1084.3376 × 1082.2171 × 10102.2383 × 10105.9107 × 1041.5850 × 104
Algorithm | F4 Ave | F4 Std | F5 Ave | F5 Std | F6 Ave | F6 Std
GGBDA | 5.9527 × 10^2 | 8.7643 × 10^1 | 5.2093 × 10^2 | 5.5872 × 10^−2 | 6.2033 × 10^2 | 4.0175 × 10^0
OBSCA | 2.4186 × 10^3 | 8.0598 × 10^2 | 5.2097 × 10^2 | 4.9147 × 10^−2 | 6.3202 × 10^2 | 1.7351 × 10^0
m_SCA | 7.5730 × 10^2 | 1.0198 × 10^2 | 5.2061 × 10^2 | 1.4096 × 10^−1 | 6.2114 × 10^2 | 3.2807 × 10^0
SCADE | 2.4370 × 10^3 | 5.6808 × 10^2 | 5.2094 × 10^2 | 6.3764 × 10^−2 | 6.3419 × 10^2 | 2.3689 × 10^0
ASCA_PSO | 5.7201 × 10^2 | 1.5123 × 10^2 | 5.2094 × 10^2 | 4.1898 × 10^−2 | 6.2512 × 10^2 | 3.2965 × 10^0
ACWOA | 1.0827 × 10^3 | 2.3891 × 10^2 | 5.2083 × 10^2 | 1.2246 × 10^−1 | 6.3454 × 10^2 | 3.1803 × 10^0
MFO | 1.4154 × 10^3 | 1.1476 × 10^3 | 5.2026 × 10^2 | 2.0197 × 10^−1 | 6.2398 × 10^2 | 3.0738 × 10^0
SCA | 1.4155 × 10^3 | 3.0882 × 10^2 | 5.2093 × 10^2 | 4.4333 × 10^−2 | 6.3375 × 10^2 | 2.6530 × 10^0
FA | 1.5386 × 10^3 | 1.7232 × 10^2 | 5.2095 × 10^2 | 5.1811 × 10^−2 | 6.3392 × 10^2 | 6.4751 × 10^−1
DA | 7.2148 × 10^3 | 5.0944 × 10^3 | 5.2096 × 10^2 | 3.8523 × 10^−2 | 6.3831 × 10^2 | 3.8669 × 10^0
Algorithm | F7 Ave | F7 Std | F8 Ave | F8 Std | F9 Ave | F9 Std
GGBDA | 7.0154 × 10^2 | 1.4093 × 10^−1 | 8.8953 × 10^2 | 1.4755 × 10^1 | 1.0718 × 10^3 | 3.5377 × 10^1
OBSCA | 9.1188 × 10^2 | 3.2095 × 10^1 | 1.0564 × 10^3 | 1.5937 × 10^1 | 1.2007 × 10^3 | 1.8331 × 10^1
m_SCA | 7.5112 × 10^2 | 2.7312 × 10^1 | 9.4797 × 10^2 | 2.0587 × 10^1 | 1.0570 × 10^3 | 2.4289 × 10^1
SCADE | 8.9697 × 10^2 | 3.1487 × 10^1 | 1.0680 × 10^3 | 1.3258 × 10^1 | 1.2072 × 10^3 | 1.7261 × 10^1
ASCA_PSO | 7.1122 × 10^2 | 1.5224 × 10^1 | 9.5707 × 10^2 | 2.6319 × 10^1 | 1.1114 × 10^3 | 3.7255 × 10^1
ACWOA | 7.2872 × 10^2 | 1.6207 × 10^1 | 9.9483 × 10^2 | 2.5768 × 10^1 | 1.1277 × 10^3 | 2.1651 × 10^1
MFO | 8.1621 × 10^2 | 7.0326 × 10^1 | 9.3286 × 10^2 | 3.1243 × 10^1 | 1.1154 × 10^3 | 4.2025 × 10^1
SCA | 8.3820 × 10^2 | 2.6572 × 10^1 | 1.0372 × 10^3 | 1.6583 × 10^1 | 1.1745 × 10^3 | 1.5443 × 10^1
FA | 8.3255 × 10^2 | 9.9991 × 10^0 | 1.0236 × 10^3 | 1.5241 × 10^1 | 1.1575 × 10^3 | 8.8945 × 10^0
DA | 1.0796 × 10^3 | 2.5224 × 10^2 | 1.0603 × 10^3 | 8.6112 × 10^1 | 1.1875 × 10^3 | 4.3320 × 10^1
Algorithm | F10 Ave | F10 Std | F11 Ave | F11 Std | F12 Ave | F12 Std
GGBDA | 2.1632 × 10^3 | 4.0569 × 10^2 | 4.3103 × 10^3 | 5.8058 × 10^2 | 1.2016 × 10^3 | 6.4654 × 10^−1
OBSCA | 6.1914 × 10^3 | 3.3800 × 10^2 | 7.3712 × 10^3 | 3.8870 × 10^2 | 1.2022 × 10^3 | 3.8380 × 10^−1
m_SCA | 4.2173 × 10^3 | 6.7303 × 10^2 | 4.6926 × 10^3 | 5.6709 × 10^2 | 1.2007 × 10^3 | 2.8914 × 10^−1
SCADE | 7.3873 × 10^3 | 2.0852 × 10^2 | 8.2043 × 10^3 | 2.8866 × 10^2 | 1.2026 × 10^3 | 2.9637 × 10^−1
ASCA_PSO | 5.3236 × 10^3 | 6.1947 × 10^2 | 6.0330 × 10^3 | 1.0051 × 10^3 | 1.2024 × 10^3 | 3.2840 × 10^−1
ACWOA | 4.3616 × 10^3 | 9.4361 × 10^2 | 6.5284 × 10^3 | 8.8174 × 10^2 | 1.2018 × 10^3 | 5.3507 × 10^−1
MFO | 4.2961 × 10^3 | 1.0010 × 10^3 | 5.2553 × 10^3 | 5.8399 × 10^2 | 1.2004 × 10^3 | 1.6921 × 10^−1
SCA | 6.9536 × 10^3 | 5.2169 × 10^2 | 8.1744 × 10^3 | 2.6469 × 10^2 | 1.2024 × 10^3 | 2.8490 × 10^−1
FA | 7.5877 × 10^3 | 2.4931 × 10^2 | 7.8979 × 10^3 | 2.2794 × 10^2 | 1.2024 × 10^3 | 3.1798 × 10^−1
DA | 7.8983 × 10^3 | 8.8564 × 10^2 | 8.2497 × 10^3 | 7.2246 × 10^2 | 1.2024 × 10^3 | 3.9165 × 10^−1
Algorithm | F13 Ave | F13 Std | F14 Ave | F14 Std | F15 Ave | F15 Std
GGBDA | 1.3006 × 10^3 | 1.1023 × 10^−1 | 1.4003 × 10^3 | 4.9476 × 10^−2 | 1.5246 × 10^3 | 3.8702 × 10^0
OBSCA | 1.3037 × 10^3 | 4.2284 × 10^−1 | 1.4669 × 10^3 | 1.1727 × 10^1 | 1.7547 × 10^4 | 9.8027 × 10^3
m_SCA | 1.3007 × 10^3 | 3.3452 × 10^−1 | 1.4142 × 10^3 | 1.1462 × 10^1 | 2.2627 × 10^3 | 8.4352 × 10^2
SCADE | 1.3038 × 10^3 | 2.5871 × 10^−1 | 1.4902 × 10^3 | 1.1514 × 10^1 | 2.0450 × 10^4 | 8.8527 × 10^3
ASCA_PSO | 1.3006 × 10^3 | 1.4205 × 10^−1 | 1.4035 × 10^3 | 7.1583 × 10^0 | 1.5545 × 10^3 | 1.2124 × 10^2
ACWOA | 1.3017 × 10^3 | 1.0761 × 10^0 | 1.4166 × 10^3 | 1.0655 × 10^1 | 1.9949 × 10^3 | 5.8404 × 10^2
MFO | 1.3019 × 10^3 | 1.2975 × 10^0 | 1.4267 × 10^3 | 1.5955 × 10^1 | 3.3650 × 10^5 | 8.2577 × 10^5
SCA | 1.3029 × 10^3 | 3.7934 × 10^−1 | 1.4443 × 10^3 | 9.4586 × 10^0 | 5.0147 × 10^3 | 3.4034 × 10^3
FA | 1.3029 × 10^3 | 1.9248 × 10^−1 | 1.4403 × 10^3 | 4.8273 × 10^0 | 1.5752 × 10^4 | 4.4028 × 10^3
DA | 1.3068 × 10^3 | 1.9095 × 10^0 | 1.5637 × 10^3 | 8.3347 × 10^1 | 2.4757 × 10^4 | 7.1463 × 10^4
Algorithm | F16 Ave | F16 Std | F17 Ave | F17 Std | F18 Ave | F18 Std
GGBDA | 1.6122 × 10^3 | 3.8012 × 10^−1 | 2.1700 × 10^6 | 2.7205 × 10^6 | 1.5189 × 10^4 | 5.2280 × 10^4
OBSCA | 1.6130 × 10^3 | 1.4281 × 10^−1 | 1.1486 × 10^7 | 5.1039 × 10^6 | 1.9793 × 10^8 | 1.4800 × 10^8
m_SCA | 1.6115 × 10^3 | 5.1409 × 10^−1 | 1.5833 × 10^6 | 1.7905 × 10^6 | 3.4874 × 10^7 | 4.7812 × 10^7
SCADE | 1.6127 × 10^3 | 1.9941 × 10^−1 | 1.4197 × 10^7 | 6.7951 × 10^6 | 1.6517 × 10^8 | 1.1211 × 10^8
ASCA_PSO | 1.6126 × 10^3 | 3.3022 × 10^−1 | 1.2265 × 10^6 | 1.0213 × 10^6 | 3.6646 × 10^6 | 1.0393 × 10^6
ACWOA | 1.6123 × 10^3 | 4.6588 × 10^−1 | 1.6366 × 10^7 | 1.4017 × 10^7 | 4.6377 × 10^7 | 3.8096 × 10^7
MFO | 1.6128 × 10^3 | 4.8526 × 10^−1 | 4.0035 × 10^6 | 5.0310 × 10^6 | 3.9147 × 10^7 | 1.0322 × 10^8
SCA | 1.6127 × 10^3 | 2.8567 × 10^−1 | 6.9907 × 10^6 | 3.6926 × 10^6 | 1.6756 × 10^8 | 8.8211 × 10^7
FA | 1.6129 × 10^3 | 2.3262 × 10^−1 | 6.7491 × 10^6 | 2.2624 × 10^6 | 2.6476 × 10^8 | 7.8340 × 10^7
DA | 1.6129 × 10^3 | 2.4315 × 10^−1 | 8.5018 × 10^7 | 4.2101 × 10^7 | 4.0928 × 10^9 | 1.8915 × 10^9
Algorithm | F19 Ave | F19 Std | F20 Ave | F20 Std | F21 Ave | F21 Std
GGBDA | 1.9217 × 10^3 | 8.2441 × 10^0 | 2.2795 × 10^3 | 6.9312 × 10^1 | 1.9235 × 10^5 | 2.8749 × 10^5
OBSCA | 2.0091 × 10^3 | 1.1149 × 10^1 | 3.0362 × 10^4 | 1.2377 × 10^4 | 2.3649 × 10^6 | 1.5032 × 10^6
m_SCA | 1.9453 × 10^3 | 2.5699 × 10^1 | 1.0286 × 10^4 | 4.6386 × 10^3 | 4.6439 × 10^5 | 4.6037 × 10^5
SCADE | 2.0209 × 10^3 | 1.7879 × 10^1 | 2.7828 × 10^4 | 1.2075 × 10^4 | 2.7903 × 10^6 | 1.0593 × 10^6
ASCA_PSO | 1.9258 × 10^3 | 2.5713 × 10^1 | 6.0026 × 10^3 | 2.2111 × 10^3 | 3.2508 × 10^5 | 2.5701 × 10^5
ACWOA | 2.0062 × 10^3 | 3.5162 × 10^1 | 4.0828 × 10^4 | 1.8916 × 10^4 | 5.1240 × 10^6 | 4.8145 × 10^6
MFO | 1.9722 × 10^3 | 6.5003 × 10^1 | 6.7453 × 10^4 | 3.5593 × 10^4 | 7.3786 × 10^5 | 1.1693 × 10^6
SCA | 1.9950 × 10^3 | 2.2940 × 10^1 | 1.7570 × 10^4 | 5.4464 × 10^3 | 1.3486 × 10^6 | 6.6249 × 10^5
FA | 2.0029 × 10^3 | 1.1339 × 10^1 | 2.1545 × 10^4 | 8.6661 × 10^3 | 1.8937 × 10^6 | 6.2858 × 10^5
DA | 2.2044 × 10^3 | 1.3085 × 10^2 | 7.3418 × 10^4 | 4.0133 × 10^4 | 2.4506 × 10^7 | 2.0130 × 10^7
Algorithm | F22 Ave | F22 Std | F23 Ave | F23 Std | F24 Ave | F24 Std
GGBDA | 2.6667 × 10^3 | 1.3749 × 10^2 | 2.5001 × 10^3 | 8.2205 × 10^−2 | 2.6001 × 10^3 | 4.2571 × 10^−2
OBSCA | 3.0956 × 10^3 | 1.6521 × 10^2 | 2.6865 × 10^3 | 1.6694 × 10^1 | 2.6000 × 10^3 | 2.6232 × 10^−4
m_SCA | 2.6529 × 10^3 | 1.6213 × 10^2 | 2.6396 × 10^3 | 1.0453 × 10^1 | 2.6000 × 10^3 | 6.3375 × 10^−4
SCADE | 3.1130 × 10^3 | 1.5936 × 10^2 | 2.5000 × 10^3 | 0.0000 × 10^0 | 2.6000 × 10^3 | 1.0671 × 10^−7
ASCA_PSO | 2.7768 × 10^3 | 1.7913 × 10^2 | 2.6237 × 10^3 | 3.9400 × 10^0 | 2.6366 × 10^3 | 8.2081 × 10^0
ACWOA | 3.0574 × 10^3 | 2.1215 × 10^2 | 2.5367 × 10^3 | 7.4780 × 10^1 | 2.6000 × 10^3 | 8.5021 × 10^−6
MFO | 2.9977 × 10^3 | 2.5111 × 10^2 | 2.6671 × 10^3 | 4.5312 × 10^1 | 2.6827 × 10^3 | 3.0780 × 10^1
SCA | 2.9493 × 10^3 | 1.4065 × 10^2 | 2.6668 × 10^3 | 1.2152 × 10^1 | 2.6001 × 10^3 | 5.8342 × 10^−2
FA | 2.9399 × 10^3 | 1.0040 × 10^2 | 2.7354 × 10^3 | 1.4354 × 10^1 | 2.7065 × 10^3 | 4.4005 × 10^0
DA | 1.3035 × 10^4 | 1.1958 × 10^4 | 2.8764 × 10^3 | 2.2534 × 10^2 | 2.6261 × 10^3 | 5.0498 × 10^0
Algorithm | F25 Ave | F25 Std | F26 Ave | F26 Std | F27 Ave | F27 Std
GGBDA | 2.7000 × 10^3 | 1.3977 × 10^−3 | 2.7006 × 10^3 | 1.8063 × 10^−1 | 2.9000 × 10^3 | 1.8895 × 10^−3
OBSCA | 2.7000 × 10^3 | 1.0817 × 10^−3 | 2.7039 × 10^3 | 4.7598 × 10^−1 | 3.2360 × 10^3 | 4.5158 × 10^1
m_SCA | 2.7134 × 10^3 | 2.6641 × 10^0 | 2.7008 × 10^3 | 3.4050 × 10^−1 | 3.1926 × 10^3 | 1.5161 × 10^2
SCADE | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7037 × 10^3 | 6.1565 × 10^−1 | 3.1829 × 10^3 | 2.6437 × 10^2
ASCA_PSO | 2.7125 × 10^3 | 5.1192 × 10^0 | 2.7006 × 10^3 | 1.2849 × 10^−1 | 3.5114 × 10^3 | 2.3638 × 10^2
ACWOA | 2.7000 × 10^3 | 0.0000 × 10^0 | 2.7471 × 10^3 | 5.0332 × 10^1 | 3.6882 × 10^3 | 3.2535 × 10^2
MFO | 2.7190 × 10^3 | 1.0042 × 10^1 | 2.7023 × 10^3 | 1.5257 × 10^0 | 3.6672 × 10^3 | 1.8397 × 10^2
SCA | 2.7242 × 10^3 | 1.1442 × 10^1 | 2.7023 × 10^3 | 5.9638 × 10^−1 | 3.4473 × 10^3 | 3.1999 × 10^2
FA | 2.7342 × 10^3 | 4.0567 × 10^0 | 2.7023 × 10^3 | 2.8881 × 10^−1 | 3.8003 × 10^3 | 2.8675 × 10^1
DA | 2.7109 × 10^3 | 4.7079 × 10^0 | 2.7740 × 10^3 | 4.0242 × 10^1 | 4.2646 × 10^3 | 2.7086 × 10^2
Algorithm | F28 Ave | F28 Std | F29 Ave | F29 Std | F30 Ave | F30 Std
GGBDA | 3.0000 × 10^3 | 2.8946 × 10^−2 | 3.1087 × 10^3 | 5.4266 × 10^0 | 3.5861 × 10^3 | 6.0874 × 10^2
OBSCA | 5.3347 × 10^3 | 3.2181 × 10^2 | 1.8861 × 10^7 | 1.0186 × 10^7 | 4.5744 × 10^5 | 1.5327 × 10^5
m_SCA | 3.9404 × 10^3 | 2.3055 × 10^2 | 1.6245 × 10^6 | 4.3077 × 10^6 | 4.6418 × 10^4 | 2.2798 × 10^4
SCADE | 5.2213 × 10^3 | 5.2511 × 10^2 | 1.5436 × 10^7 | 7.9392 × 10^6 | 4.1012 × 10^5 | 1.8490 × 10^5
ASCA_PSO | 4.4056 × 10^3 | 3.2811 × 10^2 | 5.1473 × 10^6 | 6.2012 × 10^6 | 4.1476 × 10^4 | 3.1316 × 10^4
ACWOA | 4.2050 × 10^3 | 1.1911 × 10^3 | 2.1367 × 10^7 | 1.7150 × 10^7 | 3.9650 × 10^5 | 2.1202 × 10^5
MFO | 3.9192 × 10^3 | 1.3812 × 10^2 | 2.6412 × 10^6 | 3.4748 × 10^6 | 5.7740 × 10^4 | 4.9279 × 10^4
SCA | 4.7438 × 10^3 | 2.5806 × 10^2 | 1.1250 × 10^7 | 6.3057 × 10^6 | 2.4763 × 10^5 | 7.9063 × 10^4
FA | 4.2782 × 10^3 | 1.8313 × 10^2 | 3.2845 × 10^6 | 1.2612 × 10^6 | 1.7562 × 10^5 | 4.0487 × 10^4
DA | 8.5874 × 10^3 | 1.1741 × 10^3 | 2.7820 × 10^8 | 2.7534 × 10^8 | 4.1499 × 10^6 | 2.4981 × 10^6
Algorithm | Overall Rank | +/=/−
GGBDA | 1 | ~
OBSCA | 9 | 28/0/2
m_SCA | 2 | 23/0/7
SCADE | 8 | 27/0/3
ASCA_PSO | 3 | 24/0/6
ACWOA | 5 | 26/0/4
MFO | 4 | 28/0/2
SCA | 6 | 28/0/2
FA | 7 | 30/0/0
DA | 10 | 30/0/0
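The +/=/− column tallies, over the 30 benchmark functions, how often GGBDA performs better than, equal to, or worse than each competitor. A minimal sketch of such a tally is shown below, assuming lower mean error is better (CEC2014 minimization) and using a simple tolerance for ties; the paper's actual counts may come from a statistical test such as the Wilcoxon rank-sum rather than a direct mean comparison.

```python
def count_wtl(baseline, competitor, tol=1e-8):
    """Count functions on which `baseline` wins / ties / loses against
    `competitor`, given one mean result per benchmark function."""
    win = tie = loss = 0
    for b, c in zip(baseline, competitor):
        if abs(b - c) <= tol:
            tie += 1
        elif b < c:      # lower objective value is better
            win += 1
        else:
            loss += 1
    return win, tie, loss
```

For example, `count_wtl(ggbda_means, obsca_means)` over all 30 functions would yield the 28/0/2 entry reported for OBSCA.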
Table 4. Comparison results of the PVD problem between GGBDA and other approaches.
Algorithm | Ts | Th | R | L | Optimum Cost
GGBDA | 0.8125 | 0.4375 | 42.0983 | 176.6380 | 6059.7298
MFO [15] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
BA [102] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
HPSO [103] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143
CSS [104] | 0.8125 | 0.4375 | 42.1036 | 176.5727 | 6059.0888
CPSO [105] | 0.8125 | 0.4375 | 42.0912 | 176.7465 | 6061.0777
ACO [106] | 0.8125 | 0.4375 | 42.1036 | 176.5727 | 6059.0888
GWO [18] | 0.8125 | 0.4345 | 42.0892 | 176.7587 | 6051.5639
WOA [2] | 0.8125 | 0.4375 | 42.0983 | 176.6390 | 6059.7410
MDDE [107] | 0.8125 | 0.4375 | 42.0984 | 176.6360 | 6059.7017
Branch-bound [108] | 1.1250 | 0.6250 | 47.7000 | 117.7010 | 8129.1036
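The reported PVD costs can be sanity-checked against the standard pressure vessel design objective from the engineering-design literature (stated here as an assumption, since the paper's own formulation is not quoted in this excerpt): the fabrication cost as a function of shell thickness Ts, head thickness Th, inner radius R, and cylinder length L.

```python
def pvd_cost(Ts, Th, R, L):
    """Standard pressure vessel design cost: material, forming, and welding."""
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L
            + 19.84 * Ts ** 2 * R)

# GGBDA's optimum variables from Table 4 reproduce its reported cost of 6059.7298
cost = pvd_cost(0.8125, 0.4375, 42.0983, 176.6380)
```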
Table 5. Comparison results of the hydrostatic thrust bearing problem between GGBDA and other approaches.
Algorithm | R | R0 | μ | Q | Optimum Cost
GGBDA | 5.956071 | 5.389334 | 5.36 × 10^−6 | 2.271766 | 19,508.7584
PSO [8] | 5.956868 | 5.389175 | 5.4021 × 10^−6 | 2.301546 | 19,586.5788
NDE [109] | 5.955781 | 5.389013 | 5.3586 × 10^−6 | 2.269656 | 19,506.0090
TLBO [110] | 5.955781 | 5.389013 | 5.3586 × 10^−6 | 2.269656 | 19,505.3132
SQP [111] | 5.955800 | 5.389040 | 8.6332 × 10^−6 | 8.000010 | 26,114.5450
GASO [112] | 6.271000 | 12.90100 | 5.6050 × 10^−6 | 2.938000 | 23,403.4320
Table 6. Comparison results of the WBD problem between GGBDA and other approaches.
Algorithm | h | l | t | b | Optimum Cost
GGBDA | 0.187156 | 3.615020 | 9.056672 | 0.206464 | 1.724527
RO [113] | 0.203687 | 3.528467 | 9.004233 | 0.207241 | 1.735344
SSA [114] | 0.205700 | 3.471400 | 9.036600 | 0.205700 | 1.724910
CDE [115] | 0.203137 | 3.542998 | 9.033498 | 0.206179 | 1.733462
GWO [18] | 0.205700 | 3.478400 | 9.036800 | 0.205800 | 1.726240
GSA [116] | 0.182129 | 3.856979 | 10.00000 | 0.202376 | 1.879950
NDE [109] | 0.205729 | 3.470488 | 9.903662 | 0.205729 | 1.724852
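As with the PVD problem, the WBD costs in Table 6 follow from the standard welded beam design objective (weld height h, weld length l, bar thickness t, bar width b); the formula below is the common formulation from the literature, stated as an assumption rather than quoted from this paper.

```python
def wbd_cost(h, l, t, b):
    """Standard welded beam design fabrication cost (weld + bar material)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# GGBDA's optimum variables from Table 6 reproduce its reported cost of 1.724527
cost = wbd_cost(0.187156, 3.615020, 9.056672, 0.206464)
```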
Table 7. Comparison results of the TCSD problem between GGBDA and other approaches.
Algorithm | d | D | N | Optimum Cost
GGBDA | 0.051652 | 0.355837 | 11.34081 | 0.012665
GA [117] | 0.051480 | 0.351661 | 11.63220 | 0.012705
RO [113] | 0.051370 | 0.349096 | 11.76279 | 0.012679
IHS [118] | 0.051154 | 0.349871 | 12.07643 | 0.012671
ES [119] | 0.051989 | 0.363965 | 10.89052 | 0.012681
GSA [116] | 0.050276 | 0.323680 | 13.52541 | 0.012702
WOA [2] | 0.051207 | 12.00430 | 0.345215 | 0.012676
PSO [8] | 0.015728 | 11.24454 | 0.357644 | 0.012675
NDE [109] | 0.051689 | 0.356718 | 11.28896 | 0.012665
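The TCSD objective admits an equally quick check: the spring weight in the standard tension/compression spring design formulation is proportional to (N + 2)·D·d², where d is the wire diameter, D the mean coil diameter, and N the number of active coils (again stated as an assumption from the common literature formulation).

```python
def tcsd_weight(d, D, N):
    """Standard tension/compression spring design weight: (N + 2) * D * d^2."""
    return (N + 2) * D * d ** 2

# GGBDA's optimum variables from Table 7 reproduce its reported weight of 0.012665
weight = tcsd_weight(0.051652, 0.355837, 11.34081)
```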
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Yuan, L.; Kuang, F.; Zhang, S.; Chen, H. The Gaussian Mutational Barebone Dragonfly Algorithm: From Design to Analysis. Symmetry 2022, 14, 331. https://doi.org/10.3390/sym14020331
