Article

An Enhancing Differential Evolution Algorithm with a Rank-Up Selection: RUSDE

1 Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
2 Cyberspace Security Research Center, Pengcheng Laboratory, Shenzhen 518055, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work and are co-first authors.
Mathematics 2021, 9(5), 569; https://doi.org/10.3390/math9050569
Submission received: 7 February 2021 / Revised: 25 February 2021 / Accepted: 3 March 2021 / Published: 7 March 2021
(This article belongs to the Special Issue Evolutionary Algorithms in Artificial Intelligent Systems)

Abstract

Recently, the differential evolution (DE) algorithm has been widely used to solve many practical problems; however, DE may suffer from stagnation during the iteration process. We therefore propose an enhanced differential evolution algorithm with rank-up selection, named RUSDE. First, the rank-up individuals in the current population are selected and stored in a new archive; second, a mutation strategy chosen according to the updating status of the current population decides the parent selection. Both methods improve the performance of DE. We conducted numerical experiments on the CEC 2014 benchmark functions, where the results demonstrate the excellent performance of this algorithm. Furthermore, the algorithm is applied to the real-world optimization problem of four-bar linkages, where the results show that RUSDE outperforms other algorithms.

1. Introduction

In real-world optimization problems, people have proposed various algorithms inspired by evolutions of lives, social behaviors, or physical phenomena, e.g., Genetic Algorithm (GA) [1], Ant Colony Optimization Algorithm (ACO) [2], Particle Swarm Optimization (PSO) [3], Cuckoo Search (CS) [4], Teach-Learning-Based Optimization (TLBO) [5,6,7], Krill Herd (KH) [8], Backtracking Search Optimization Algorithm (BSA) [9] and differential evolution (DE) [10]. Among them, the differential evolution (DE) algorithm and its variants showed excellent performance in competitions held under the IEEE Congress on Evolutionary Computation (CEC) conference series [11,12]. In recent years, DE has been widely applied in diverse fields, such as geography [13], chemistry [14], engineering design [15,16,17,18], image processing [19,20], and software development [21]. Additionally, there are many successful applications of the DE in these fields [22,23,24].
DE starts from a randomly generated initial population. A new individual is generated by summing the vector differences of any two individuals and the third individual. Then the new individual is compared with the corresponding individual in the current population. When the fitness of the new individual is better than that of the current individual, the old individual will be replaced by the new one in the next generation. Through continuous evolution, competition, retention, and deletion, the optimal solution is obtained.
However, two problems exist in DE: first, successful solutions may stop being generated; second, the population may fail to converge to a fixed point. Both can lead to stagnation. To improve DE's performance and reduce stagnation, researchers have proposed diverse variants. Qin et al. adjusted the parameters of DE and adaptively determined a more suitable generation strategy along with its parameter settings during the search process (SaDE) [25]. Brest et al. introduced an efficient technique for adapting the control parameter settings of DE (jDE) [26]. Zhang et al. created an optimal archive containing newly updated individuals, which helps to generate better ones (JADE) [27]. An adaptive strategy for the control parameters was later introduced on the basis of JADE, obtaining better results (SHADE) [28]. Guo et al. improved the optimal archive with all updating individuals and built a new successful-parent-selecting framework (SPSDE) [29]; later, they introduced the concept of eigenvectors and proposed a new crossover method (EIGDE) [30]. The ranking idea has been utilized to search for better positions, for example, by having low-ranking individuals learn from top-ranking ones [31,32]. Tang et al. combined adaptive population classification, adaptive control parameters, and mutation to create an individual-dependent mechanism (IDE) [33]. Other concepts or methods have also been introduced, such as a neighborhood-based mutation operator (DEGL) [34], an ensemble of parameters and mutation strategies (EPSDE) [35], MDEpBX [36], a similarity-based mutant vector generation strategy (DE-SIM) [37], multipopulation-based mutation operators (CAMDE) [38], a random-neighbors-based strategy (RNDE) [39], and adaptive Lagrange interpolation search (ADELI) [40].
To date, plenty of algorithms adopt two learning strategies: individuals with low rank learn from those with high rank, as in rank-jDE [31] and TLBO [5]; and individuals that fail to update learn from those that update, as in JADE [27], SHADE [28], and SPSDE [29]. Both strategies have been proven effective in improving the performance of DE. Therefore, this study proposes a new learning strategy to further reduce the stagnation of DE and accelerate convergence: individuals whose rank does not rise learn from those whose rank rises. We call the resulting method the enhanced differential evolution algorithm with rank-up selection (RUSDE). The proposed framework contains two parts: an archive of the rank-up individuals is created during the search, and a learning strategy is adopted according to the consecutive update counts of the top-ranked individual and the current individual. We discuss the influence of the archive size and test RUSDE's performance on the CEC 2014 benchmarks [41]. The results show that RUSDE performs better than 11 other algorithms. We also apply RUSDE to the optimal synthesis problem of four-bar mechanisms and obtain a competitive result.
The rest of this paper is organized as follows. We discuss the original DE in Section 2, and RUSDE is proposed in Section 3. Section 4 presents the experimental results on different functions, which show the superiority of RUSDE. Section 5 presents an application example, the synthesis problem of a four-bar mechanism. Section 6 concludes the paper.

2. Related Works

2.1. Differential Evolution

Differential evolution is a powerful evolutionary algorithm for single-objective optimization problems [10]. Its process contains four steps.
Step 1, Initialization
Initialize the population X of size N. For a D-dimensional problem, the d-th component of the i-th individual is randomly generated within the bounds.
Step 2, Mutation
The mutation is used to generate a new population by a change or perturbation of parents. The mutation strategies are diverse; Reference [10] lists some of the most frequently used ones, such as "DE/rand/1", "DE/best/1", "DE/current-to-best/1", "DE/rand/2", and "DE/best/2". As an example, "DE/rand/1" is shown in Equation (1).
v_i^child = x_a + F · (x_b − x_c)   (1)
where x_a, x_b, and x_c are distinct individuals randomly chosen from the population X, all different from index i; F is the user-specified scaling factor in the range (0, 2).
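As an illustration, the DE/rand/1 mutation of Equation (1) can be sketched in a few lines. This is a minimal sketch assuming a NumPy population array; the function name is ours, not from the paper.

```python
import numpy as np

def de_rand_1(X, i, F=0.5, rng=None):
    """DE/rand/1 mutation of Equation (1): v = x_a + F * (x_b - x_c)."""
    rng = np.random.default_rng() if rng is None else rng
    # draw three distinct indices a, b, c, all different from i
    candidates = [k for k in range(X.shape[0]) if k != i]
    a, b, c = rng.choice(candidates, 3, replace=False)
    return X[a] + F * (X[b] - X[c])
```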
Step 3, Crossover
To increase population diversity, a new child individual u_{i,d}^child is generated by the crossover operation. The crossover scheme of DE can be written as Equation (2):
u_{i,d}^child = v_{i,d}^child, if (rand(0, 1) < CR) or (d = d_rand); x_{i,d}, otherwise;   d = 1, 2, …, D   (2)
where d_rand is randomly chosen from [1, D]; CR is a user-defined crossover rate in the range [0, 1].
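A minimal sketch of the binomial crossover in Equation (2), assuming NumPy vectors (function and variable names are illustrative):

```python
import numpy as np

def binomial_crossover(x, v, CR=0.9, rng=None):
    """Binomial crossover of Equation (2): take v_d with probability CR,
    forcing dimension d_rand to come from the mutant so that u != x."""
    rng = np.random.default_rng() if rng is None else rng
    D = x.size
    mask = rng.random(D) < CR
    mask[rng.integers(D)] = True   # d_rand: at least one component from v
    return np.where(mask, v, x)
```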
Step 4, Selection
After the crossover process, a greedy selection mechanism determines which individuals survive into the next generation according to their fitness values: only the better one survives. The selection procedure can be expressed as Equation (3):
x_i = u_i^child, if f(u_i^child) < f(x_i); x_i, otherwise   (3)

2.2. Differential Evolution Variants

In recent years, a number of DE variants have emerged to improve DE's performance. We divide them into four categories.
Improving the mutation, crossover, or both strategies: For example, opposition numbers are generated in population initialization (ODE) to realize generation jumping and local improvement [42,43]; differential covariance matrix information is extracted to determine the search direction, similar to CMA-ES [44] combined with DE (DCMA) [45,46]. Covariance matrix learning is also used to create a proper coordinate system for the crossover operator, as in the eigenvector approach (EIGDE) [30] and bimodal distribution parameter setting [47]. Other examples include orthogonal crossover [48], adaptive Lagrange interpolation search (ADELI) [40], Gaussian mutation [49], and multiple strategies [50,51].
Adaptive control parameter strategies: The control parameters (scaling factor and crossover rate) have an effect on the performance. To balance the exploration and exploitation, adaptive or self-adaptive mechanisms are proposed, such as SaDE [25], jDE [26], SHADE [28], JADE [27], LTMA [52], IPOPCMAES [53] and jSO [54].
Alternating the parents: Some researchers select the parents according to the fitness values or rank to generate a better solution, such as neighborhood and directional Information [55], successful-parent-selecting framework (SPSDE) [29], and rank-based selection (rank-jDE) [31].
Building the population structure: Multi-population strategies are introduced to adaptively enhance the performance of DE, such as IDE [33] and CAMDE [38]; an aging strategy is employed to jump out of local optima [56]. The pseudo-code for DE is given in Algorithm 1.
Algorithm 1 The pseudo code of DE/rand/1
Initialize a population X(N*D); Maximum iteration number MaxDT;
for G = 1 to MaxDT do
for i = 1 to N do
   Randomly select xa, xb, xc from the population X;
   Generate vi by Equation (1);
   for d = 1 to D do
    Generate ui by Equation (2);
   end for d
end for i
for i = 1 to N do
  Compute the fitness of u_i^child;
   if f(u_i^child) < f(x_i) then
    Replace x_i with u_i^child;
   end if
end for i
end for G
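Algorithm 1 can be assembled into a runnable sketch; here we minimize the sphere function as a stand-in objective (all names are ours, not from the paper):

```python
import numpy as np

def de_rand_1_optimize(f, bounds, N=30, F=0.5, CR=0.9, max_gen=200, seed=0):
    """Minimal DE/rand/1/bin following Algorithm 1."""
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, dtype=float) for b in bounds)
    D = low.size
    X = rng.uniform(low, high, size=(N, D))          # Step 1: initialization
    fit = np.array([f(x) for x in X])
    for _ in range(max_gen):
        for i in range(N):
            # Step 2: mutation, Equation (1)
            a, b, c = rng.choice([k for k in range(N) if k != i], 3, replace=False)
            v = np.clip(X[a] + F * (X[b] - X[c]), low, high)
            # Step 3: binomial crossover, Equation (2)
            mask = rng.random(D) < CR
            mask[rng.integers(D)] = True
            u = np.where(mask, v, X[i])
            # Step 4: greedy selection, Equation (3)
            fu = f(u)
            if fu < fit[i]:
                X[i], fit[i] = u, fu
    best = int(fit.argmin())
    return X[best], fit[best]
```

For example, `de_rand_1_optimize(lambda x: float((x**2).sum()), (np.full(5, -5.0), np.full(5, 5.0)))` drives the 5-dimensional sphere function toward zero.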

3. The Proposed Methods

To date, the main idea of most ranking algorithms is that individuals with low rank learn from individuals with high rank. This is similar to human social behavior: elites occupy the top positions and attract attention, so everyone likes to learn from them. However, another group of people also attracts attention. Although they are not ranked high in their industries, their rankings have recently risen fast for some reason, so people begin to study the reasons for their rise and learn from them. The development of national economies is similar: some developed countries have advanced technology but experience periods of slow economic growth, while some developing countries with less advanced technology grow rapidly. As a result, people analyze why these countries' economies grow rapidly and learn from them to develop their own. This prompts us to propose the rank-up algorithm in this paper, which contains two parts.
First, to increase the learning speed of DE, individuals whose rank does not rise learn from those whose rank rises. In detail, we store the individuals whose rank has recently risen in an archive Y. The archive size is M; Y is an M × D matrix, where D is the dimension.
To judge the rank-up of the individual, we can use Equation (4):
rankupvalue(i, G) = rankvalue(i, G) − rankvalue(i, G − 1)   (4)
where rankupvalue(i, G) is the i-th individual's rank-up value in the G-th generation and rankvalue(i, G) is its rank index in the G-th generation. Hence, if rankupvalue(i, G) < 0, the individual's rank has risen, and it is stored into the archive Y following the rule that the newest entry replaces the oldest. Second, the mutation is decided according to the status of the current population. We use flagrank(i) to denote the number of consecutive generations in which the i-th individual's rank has not risen, and flagbest to denote the number of consecutive generations in which the global best individual has not been updated, as shown in Equations (5) and (6). We adapt the parent selection to perform the mutation process. The learning behavior of the current individual is divided into six cases according to its ranking status: if the current individual's rank rises, the individual itself is used as the base of the learning behavior; if its rank has not risen for two or more consecutive generations, the base is replaced by a random individual from the archive Y; if its rank has not risen for exactly one generation, the base is replaced by a random individual from the current population X. For the global best individual of the current population, if it has not been updated, no individual learns from it; otherwise, all individuals learn from it, as shown in Equation (7).
flagrank(i) = 0, if the i-th individual's rank rises; flagrank(i) + 1, otherwise   (5)
flagbest = 0, if the best individual's fitness updates; flagbest + 1, otherwise   (6)
V_i =
  Y_1 + F_i (Y_2 − Y_3) + F_i (x_best − Y_3), if flagrank(i) ≥ 2 and flagbest = 0
  Y_1 + F_i (Y_2 − Y_3), if flagrank(i) ≥ 2 and flagbest ≠ 0
  X_i + F_i (Y_2 − Y_3) + F_i (x_best − Y_3), if flagrank(i) = 0 and flagbest = 0
  X_i + F_i (Y_2 − Y_3), if flagrank(i) = 0 and flagbest ≠ 0
  X_1 + F_i (x_better − X_2) + F_i (x_best − X_2), if flagrank(i) = 1 and flagbest = 0
  X_1 + F_i (x_better − X_2), if flagrank(i) = 1 and flagbest ≠ 0   (7)
where Y_1, Y_2, and Y_3 are individuals randomly selected from the archive Y; X_1 and X_2 are two random individuals in the current population; x_best is the global best individual in the current population; x_better is an individual ranked better than the current individual; X_i is the current individual; and F_i is the i-th individual's learning factor. In this study, F_i is set as in Ref. [26], shown in Equation (8).
F_i = 0.1 + 0.9 · rand, if rand < 0.1; 0.5, otherwise   (8)
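Equation (8) can be sampled per individual as follows. This is a one-line sketch with illustrative names; we draw the condition and the value from two independent uniform deviates, an assumption on our part since Equation (8) does not state whether the same deviate is reused.

```python
import random

def sample_F(rng=None):
    """Per-individual scaling factor of Equation (8): with probability 0.1,
    draw F uniformly from [0.1, 1.0); otherwise keep F = 0.5."""
    rng = rng or random.Random()
    return 0.1 + 0.9 * rng.random() if rng.random() < 0.1 else 0.5
```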
Crossover: in this study, we use the same crossover method as [26], shown in Equation (9).
u_{i,j}^child = V_{i,j}, if rand ≤ CR or j = rn(i); X_{i,j}, otherwise   (9)
where rn(i) represents a random integer from the set {1, 2, …, D}, which ensures that at least one dimension of u_i^child comes from V_i. CR is the crossover rate, whose value range is (0, 1); in this study, CR is set to 0.9, the same as in the literature [26].
Selection: when the new individual u_i^child is generated, its fitness value is calculated and compared with that of X_i; only the better one survives.
In summary, the pseudo-code of the rank-up selection DE algorithm is shown in Algorithm 2.
In this paper, the crossover and selection parts of RUSDE are the same as those of jDE [26]; hence, RUSDE is built on the framework of jDE. Besides, to alleviate the stagnation problem of DE and improve the individual updating rate, most algorithms create an archive of updated individuals and then randomly extract individuals from the archive to generate offspring, as in the SPS framework [29]. RUSDE instead builds the archive from rank-up individuals. The difference is that the fitness value of a rank-up individual must have improved, while the rank of an individual whose fitness improved may not rise; in other words, this filter is stricter. Therefore, in the numerical experiments, RUSDE is compared with jDE and SPS-jDE to show its performance.
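The rank-up bookkeeping of Equations (4) and (5) can be sketched as follows. This is a hypothetical helper mirroring the pseudo-code's variable names; the circular index implements the newest-replaces-oldest rule.

```python
import numpy as np

def update_rank_archive(X, fit, prev_rank, Y, count, flagrank):
    """One generation of rank bookkeeping (Equations (4) and (5)).

    rank[i] is individual i's position after sorting by fitness (0 = best).
    A negative rank-up value (Equation (4)) means the rank rose, so the
    individual is copied into the archive Y, overwriting the oldest slot.
    """
    M = Y.shape[0]
    order = np.argsort(fit)                  # best fitness first
    rank = np.empty(len(fit), dtype=int)
    rank[order] = np.arange(len(fit))
    rankup = rank - prev_rank                # Equation (4)
    for i in range(len(fit)):
        if rankup[i] < 0:                    # rank rose
            Y[count % M] = X[i]              # newest replaces oldest
            count += 1
            flagrank[i] = 0
        else:
            flagrank[i] += 1                 # Equation (5)
    return rank, count
```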
Algorithm 2 The pseudo code of RUSDE
Initialize a population X(N*D); maximum iteration number MaxDT; archive size M; let count = 0, flagrank = zeros(N,1), rankupvalue = zeros(N,MaxDT). Sort all individuals and calculate the current rank rankvalue(i,1) of the ith individual. The first M individuals are stored into the archive Y.
for G = 2 to MaxDT do
  Sort all individuals in terms of fitness values;
  Record rank index of the ith individual in rankvalue(i, G)
  Calculate the rank-up value rankupvalue(i, G) of the ith individual by Equation (4)
for i = 1 to N do
   if rankupvalue(i, G) < 0 (% individual's rank up) then
    count = count + 1;
    index = mod(count,M);
    Y(index,:) = x(i,:)(%Store this individual into Y, and replace the earliest one);
    flagrank(i) = 0;
  else
    flagrank(i) = flagrank(i) + 1;
  end if
end for i
for i = 1 to N do
   Randomly select X1, X2, X3 from the population X and Y1, Y2, Y3 from the archive Y;
  Generate Vi by Equation (7);
  for d = 1 to D do
   Generate ui by Equation (9);
  end for d
end for i
for i = 1 to N do
   Compute the fitness of u_i^child;
   if f(u_i^child) < f(x_i) then
    Replace x_i with u_i^child;
    if f(u_i^child) < f(gbest) then
     gbest = u_i^child;
     flagbest = 0;
    else
     flagbest = flagbest + 1;
   end if
   end if
end for i
end for G

4. Experiments

4.1. Benchmarks and Experimental Settings

This study used the CEC 2014 benchmark suite [41] to evaluate RUSDE in comparison with other algorithms. The CEC 2014 benchmark functions can be divided into four categories: F1–F3 are unimodal functions, F4–F16 are simple multimodal functions, F17–F22 are hybrid functions, and F23–F30 are composition functions [57]. The shifted global optimum was determined by the random shift vector o = [o1, o2, …, oD], and the rotation matrix M was generated as described in a previous study [58]. The benchmarks are shown in Appendix A.
We compared RUSDE with DE and its improved versions, i.e., DErand [10], jDE [26], SaDE [25], rank-jDE [31], SPS-jDE [29], jDE-EIG [30], and LSHADE [59], as well as other well-known algorithms and improvements, i.e., KH [8], LBSA [60], DGSTLBO [61], and SCA [62]. Each algorithm uses the same parameters as in the original literature. To be fair, the common variables were set as follows: for the 30-dimensional problems, the swarm population N = 100 and the number of function evaluations FES = 200,000.

4.2. The Influence of Parameter M

In RUSDE, the size of the selection archive Y may have a crucial influence on finding the optimal solution. We used the CEC 2014 benchmark functions (see Appendix A) to test the effect of different values of the parameter M. For the 30-dimensional problems, we set different M values, N = 100, and FES = 200,000, and each configuration was run 20 times. Table 1 lists the average ranks. As the parameter M increases, the archive Y enlarges and its information may become out of date, worsening the average rank; if M is too small, the diversity of Y deteriorates, which also worsens the average rank. When M = 50, half of the population size, the average rank of RUSDE is the smallest.
To assess the statistical significance of the influence of the parameter M, we use the Friedman test. Assume h0: different values of the parameter M have no effect on the results of RUSDE. From the F-test results, the F-score is 5.117, which is larger than the critical value of 2.545 (significance level 0.05, 10 degrees of freedom). This result rejects the null hypothesis of the Friedman test; therefore, the parameter M has a significant effect on the optimal values computed by RUSDE. In the later experiments, we thus set M = 50.
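Such a Friedman test can be reproduced with standard tools. The sketch below uses scipy.stats.friedmanchisquare on illustrative per-function results for three archive sizes (the numbers are made up, not the paper's data); note it computes the chi-square form of the statistic, whereas the paper reports the F-score variant.

```python
from scipy.stats import friedmanchisquare

# Illustrative average results over five benchmark functions for three
# archive sizes M (made-up numbers, not the paper's data).
m10 = [1.2, 3.4, 2.2, 4.5, 1.1]
m50 = [1.0, 2.9, 1.8, 4.0, 0.9]
m100 = [1.5, 3.6, 2.5, 4.8, 1.3]

stat, p = friedmanchisquare(m10, m50, m100)
# Reject h0 (the choice of M has no effect) when p is below the 0.05 level.
print(stat, p)
```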

4.3. Comparisons with Other Algorithms and Statistical Analysis of The Results

We used CEC 2014 to test RUSDE and 11 other evolutionary algorithms (EAs) on the 30- and 50-dimensional problems and ranked the average values. The best, mean, and std values are shown in Table 2 and Table 3. For the 30-dimensional problems, the average ranks of LSHADE, RUSDE, and SaDE were first, second, and third, respectively. RUSDE ranked first for F2, F7, F9, F12, F16, F19, and F26 (seven first ranks), while LSHADE ranked first for F1, F7, F9, F11, F15, F17, F18, F20, F21, F26, and F29, the most first ranks overall; SaDE took third place. For the 50-dimensional problems, the average ranks of LSHADE, RUSDE, and SPS-jDE were first, second, and third, respectively. LSHADE ranked first for F1, F9, F11, F18, F21, F26, F29, and F30 (eight first ranks); RUSDE ranked first for F16 and F19 (two first ranks); and SPS-jDE ranked first for F6, F7, F11, F12, F13, and F15 (six first ranks). From a comprehensive perspective, the average rank of RUSDE was the second smallest, indicating that RUSDE has the best overall performance except for LSHADE.
Suppose the results of all algorithms obey normal distributions. We used a two-tailed t-test to compare RUSDE with the other 10 algorithms, with the significance level set to 0.05. Assume h0: there is no difference between the compared algorithm and RUSDE. To determine whether a difference is statistically significant, the t-test computes a t-value, from which the p-value is obtained directly. The t-value measures the size of the difference relative to the variation between RUSDE and the other algorithms; the greater its magnitude, the greater the evidence against the null hypothesis, and a smaller p-value means stronger evidence in favor of the alternative hypothesis. P and T stand for p-value and t-value in Table 4 and Table 5, and "+", "-", and "=" indicate significantly better than, worse than, and no significant difference from RUSDE, respectively. The results are shown in Table 4 and Table 5. For the 30-dimensional problems, the number of "+" for the other 10 algorithms was greater than the number of "-"; for SaDE and SPS-jDE, the number of "+" was only slightly larger than the number of "-". This indicates that RUSDE performed slightly better than SaDE and SPS-jDE, slightly worse than LSHADE, and better than the remaining eight algorithms. For the 50-dimensional problems, the number of "+" for the other 10 algorithms was greater than the number of "-", and for SPS-jDE the numbers of "+" and "-" were equal. This shows that RUSDE is not significantly different from SPS-jDE and LSHADE and is better than the remaining nine algorithms.
The Friedman test was used to evaluate the algorithms [63]. The null hypothesis (H0) states that there is no difference between the algorithms, and the alternative hypothesis (H1) states that at least one algorithm obtains different results from the others. There are 12 algorithms in total, so the degrees of freedom are 11. At a confidence level of 0.05, the critical value is 2.4663. According to the calculations, the F-scores for the mean-value ranks of the 30D and 50D problems are 19.869 and 23.008, respectively, both much higher than the critical value; hence, hypothesis H1 is accepted. Since the F-tests show significant differences between the algorithms, we used the post hoc Duncan's test to assess the differences between RUSDE and the other 11 algorithms. Table 6 shows the results based on the significance (p-value): for the 30D problems, RUSDE performs better than the other algorithms except SaDE, SPS-jDE, and LSHADE, and for the 50D problems, RUSDE performs better than the other algorithms except SaDE, rank-jDE, SPS-jDE, and LSHADE.

4.4. Convergence Performance of RUSDE

Figure 1 shows the convergence curves of RUSDE and the 11 other algorithms on CEC 2014. For the unimodal function F2, RUSDE always had a faster convergence speed; for the multimodal function F7, RUSDE converged faster while also escaping local optima; for the multimodal function F12, RUSDE converged at a normal speed in the early stage but escaped the local optimum in the late stage and obtained a better solution; for the hybrid and composition functions F16, F19, and F26, RUSDE performed better than the other EAs.

5. Application Example

The four-bar mechanism is a common mechanism widely used in many machines and devices. The dimensional synthesis of four-bar mechanisms aims to synthesize a mechanism with the minimum errors between the coupler points and desired points to meet the design requirements. A typical case presented in [64] is tested in this paper.

5.1. The Classic Case of Four-Bar Mechanism

The schematic of the four-bar mechanism together with the variables is shown in Figure 2. In the world coordinate system XOY, the position of coupler point C can be written as Equation (10).
Cx_i(x) = x0 + r2 cos(θ2_i + θ0) + rcx cos(θ3_i + θ0) − rcy sin(θ3_i + θ0)
Cy_i(x) = y0 + r2 sin(θ2_i + θ0) + rcx sin(θ3_i + θ0) + rcy cos(θ3_i + θ0)   (10)
where r1, r2, r3, and r4 are the lengths of the links; rcx and rcy are the coordinates of the coupler point C in the relative coordinate system XcOYc; x0 and y0 are the coordinates of joint O2 in the world coordinate system XOY; θ0 is the angle of the stationary link with respect to the X-axis; and θ2_i is the ith input angle corresponding to the ith desired point.
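Equation (10) translates directly into code. This is a minimal sketch with our own names; note that in a full synthesis routine the coupler angle θ3_i must first be solved from the loop-closure equations of the linkage, which we take as a given input here.

```python
import math

def coupler_point(x0, y0, r2, rcx, rcy, theta0, theta2, theta3):
    """Coupler point C of Equation (10), in world coordinates XOY."""
    cx = (x0 + r2 * math.cos(theta2 + theta0)
          + rcx * math.cos(theta3 + theta0) - rcy * math.sin(theta3 + theta0))
    cy = (y0 + r2 * math.sin(theta2 + theta0)
          + rcx * math.sin(theta3 + theta0) + rcy * math.cos(theta3 + theta0))
    return cx, cy
```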

5.2. The Constraints and Goal Function

The links’ length must satisfy the Grashof condition, which can be expressed by Equation (11).
h1: r1 + r2 ≤ r3 + r4   (11)
In order to avoid the order defect, the sequence of input angles should fulfill the clockwise or counterclockwise rotation, shown in Equation (12).
h2: θ2^k < θ2^{k−1} < … < θ2^2 < θ2^1 < θ2^n < … < θ2^{k+1}, or θ2^k < θ2^{k+1} < … < θ2^n < θ2^1 < θ2^2 < … < θ2^{k−1}   (12)
The objective function can be divided into two parts. The first part is the sum of the squared Euclidean distances between the coupler points and the corresponding desired points; the second part is the penalty functions, where the Grashof condition and the sequence condition are introduced as penalties. The objective function of the optimization problem can thus be expressed as Equation (13):
f_obj(x) = Σ_{i=1}^{n} [(Cx_i(x) − Cdx_i(x))^2 + (Cy_i(x) − Cdy_i(x))^2] + h1(x)·M1 + h2(x)·M2   (13)
where h1(x) and h2(x) are the Grashof and sequence conditions, respectively; each equals 0 if the corresponding condition is satisfied and 1 otherwise. M1 and M2 are large values that penalize the objective function; both are set to 10^4.
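The penalized objective of Equation (13) can be sketched as follows (a hypothetical helper that takes the coupler points and the two constraint-check results as inputs):

```python
def objective(coupler_pts, desired_pts, grashof_ok, sequence_ok,
              M1=1e4, M2=1e4):
    """Penalized objective of Equation (13).

    h1/h2 are 0 when the Grashof/sequence condition holds and 1 otherwise,
    so a violated constraint adds a large constant (M1 or M2) to the
    sum of squared distances between coupler and desired points.
    """
    err = sum((cx - dx) ** 2 + (cy - dy) ** 2
              for (cx, cy), (dx, dy) in zip(coupler_pts, desired_pts))
    h1 = 0 if grashof_ok else 1
    h2 = 0 if sequence_ok else 1
    return err + h1 * M1 + h2 * M2
```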

5.3. The Experimental Settings and Results

We compare RUSDE with five algorithms, KH, DGSTLBO, SCA, DE, and jDE, on the synthesis problem of four-bar mechanisms. The parameter settings are N = 100, D = 15, and FEs = 100,000.
Then, the design variables of this example are:
X = [r1, r2, r3, r4, rcx, rcy, x0, y0, θ0, θ2^1, θ2^2, θ2^3, θ2^4, θ2^5, θ2^6]
This problem requires tracing a trajectory along eighteen given points arranged in an irregular closed path and with prescribed timing. The desired points are:
{Cd_i} = [(0.5, 1.1); (0.4, 1.1); (0.3, 1.1); (0.2, 1); (0.1, 0.9); (0.005, 0.75); (0.02, 0.6); (0, 0.5); (0, 0.4); (0.03, 0.3); (0.1, 0.25); (0.15, 0.2); (0.2, 0.3); (0.3, 0.4); (0.4, 0.5); (0.5, 0.7); (0.6, 0.9); (0.6, 1)]
{θ2^i} = {θ2^1 + (i − 1)·π/9}, i = 1, …, 18
The values of the best, mean, and standard deviations obtained by the six algorithms for 30 runs are presented in Table 7. It demonstrates that the best solution obtained by RUSDE is the most accurate among the six algorithms. The convergence graph of the best values for the synthesis problem of the four-bar mechanism is shown in Figure 3. The results show that RUSDE has the fastest convergence speed and the highest convergence accuracy among the six competitive algorithms. Therefore, RUSDE is superior to the other five algorithms in convergence speed and accuracy.

6. Conclusions

In this study, we proposed an enhanced learning method to improve the performance of differential evolution (DE), named RUSDE. In this method, we select the rank-up individuals and store them in an archive Y; compared with selecting fitness-updated individuals, this selection is stricter. We adopted different mutation strategies inspired by human social behavior, in which parents are selected according to their updating status. We used the CEC 2014 benchmarks, comprising unimodal, basic multimodal, expanded multimodal, and hybrid problems, to test the performance of RUSDE, and discussed the influence of the archive size M. The results showed that when the size is half of the population, a reasonable average rank is obtained. We compared the performance of RUSDE with DErand, jDE, SaDE, rank-jDE, SPS-jDE, jDE-EIG, KH, LBSA, DGSTLBO, LSHADE, and SCA. The numerical results showed that RUSDE outperformed DErand, jDE, SaDE, rank-jDE, jDE-EIG, KH, LBSA, DGSTLBO, and SCA, except for SPS-jDE, on the 30D and 50D problems. The statistical analysis showed that for the 30D problems, RUSDE performs better than the other algorithms except SaDE, SPS-jDE, and LSHADE, and for the 50D problems, RUSDE performs better than the other algorithms except SaDE, rank-jDE, SPS-jDE, and LSHADE. An application example of optimizing a four-bar mechanism showed that RUSDE performed better than the other algorithms. A limitation of RUSDE is that the size of the selection archive Y has a crucial influence on finding the optimal solution; for different problems, RUSDE with different archive sizes performs differently. In future research, an adaptive mechanism to decide the archive size M will be developed for RUSDE. Moreover, we will introduce this method to other similar algorithms and apply it to practical problems in the field of engineering control systems.

Author Contributions

Conceptualization, K.Z.; Methodology, K.Z. and Y.Y.; Supervision, Y.Y.; Visualization, Y.Y.; Writing—original draft, K.Z.; Writing—review & editing, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by a grant (#DMR-186-168) from National Chung Hsing University, Taichung, Taiwan.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Benchmarks in CEC 2014.
No. | Function Name | Expression | Fi*
F1 | Rotated High Conditioned Elliptic Function | F1(x) = f1(M(x − o1)) + F1* | 100
F2 | Rotated Bent Cigar Function | F2(x) = f2(M(x − o2)) + F2* | 200
F3 | Rotated Discus Function | F3(x) = f3(M(x − o3)) + F3* | 300
F4 | Shifted and Rotated Rosenbrock's Function | F4(x) = f4(M(2.048(x − o4)/100) + 1) + F4* | 400
F5 | Shifted and Rotated Ackley's Function | F5(x) = f5(M(x − o5)) + F5* | 500
F6 | Shifted and Rotated Weierstrass Function | F6(x) = f6(M(0.5(x − o6)/100)) + F6* | 600
F7 | Shifted and Rotated Griewank's Function | F7(x) = f7(M(600(x − o7)/100)) + F7* | 700
F8 | Shifted Rastrigin's Function | F8(x) = f8(M(5.12(x − o8)/100)) + F8* | 800
F9 | Shifted and Rotated Rastrigin's Function | F9(x) = f8(M(5.12(x − o9)/100)) + F9* | 900
F10 | Shifted Schwefel's Function | F10(x) = f9(M(1000(x − o10)/100)) + F10* | 1000
F11 | Shifted and Rotated Schwefel's Function | F11(x) = f9(M(1000(x − o11)/100)) + F11* | 1100
F12 | Shifted and Rotated Katsuura Function | F12(x) = f10(M(5(x − o12)/100)) + F12* | 1200
F13 | Shifted and Rotated HappyCat Function | F13(x) = f11(M(5(x − o13)/100)) + F13* | 1300
F14 | Shifted and Rotated HGBat Function | F14(x) = f12(M(5(x − o14)/100)) + F14* | 1400
F15 | Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function | F15(x) = f13(M(5(x − o15)/100) + 1) + F15* | 1500
F16 | Shifted and Rotated Expanded Scaffer's F6 Function | F16(x) = f14(M(x − o16) + 1) + F16* | 1600
F(x) = g1(M1 z1) + g2(M2 z2) + … + gN(MN zN) + F*(x)
where
z1 = [y_S(1), y_S(2), …, y_S(n1)], z2 = [y_S(n1+1), y_S(n1+2), …, y_S(n1+n2)], …, zN = [y_S(n1+…+n(N−1)+1), …, y_S(D)],
with y = x − o and S a random permutation of {1, 2, …, D}, and
n1 = ⌈p1 D⌉, n2 = ⌈p2 D⌉, …, nN = D − (n1 + n2 + … + n(N−1))
F17 | Hybrid Function 1 | N = 3, p = [0.3, 0.3, 0.4], g = [f9, f8, f1] | 1700
F18 | Hybrid Function 2 | N = 3, p = [0.3, 0.3, 0.4], g = [f2, f12, f8] | 1800
F19 | Hybrid Function 3 | N = 4, p = [0.2, 0.2, 0.3, 0.3], g = [f7, f6, f4, f14] | 1900
F20 | Hybrid Function 4 | N = 4, p = [0.2, 0.2, 0.3, 0.3], g = [f12, f3, f13, f8] | 2000
F21 | Hybrid Function 5 | N = 5, p = [0.1, 0.2, 0.2, 0.2, 0.3], g = [f14, f12, f4, f9, f1] | 2100
F22 | Hybrid Function 6 | N = 5, p = [0.1, 0.2, 0.2, 0.2, 0.3], g = [f10, f11, f13, f9, f5] | 2200
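The dimension split used by the hybrid functions (the sizes n_i and groups z_i above) can be reproduced as follows. This is a minimal sketch; the function name `hybrid_partition` and the fixed `seed` are our own illustrative choices.

```python
import numpy as np

def hybrid_partition(D, p, seed=0):
    """Split D shuffled dimension indices into N groups of sizes
    n_i = ceil(p_i * D), the last group taking the remainder."""
    rng = np.random.default_rng(seed)
    S = rng.permutation(D)                       # random dimension ordering
    sizes = [int(np.ceil(pi * D)) for pi in p[:-1]]
    sizes.append(D - sum(sizes))                 # n_N = D - sum(n_1..n_{N-1})
    groups, start = [], 0
    for n in sizes:
        groups.append(S[start:start + n])
        start += n
    return groups
```

For F17 with D = 30 and p = [0.3, 0.3, 0.4], this yields three groups of 9, 9, and 12 dimensions, each passed to its own basic function g_i.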
F23 | Composition Function 1 | N = 5, σ = [10, 20, 30, 40, 50], λ = [1, 1 × 10^−6, 1 × 10^−26, 1 × 10^−6, 1 × 10^−6], bias = [0, 100, 200, 300, 400], g = [F4′, F1′, F2′, F3′, F1′] | 2300
F24 | Composition Function 2 | N = 3, σ = [20, 20, 20], λ = [1, 1, 1], bias = [0, 100, 200], g = [F10′, F9′, F11′] | 2400
F25 | Composition Function 3 | N = 3, σ = [10, 30, 50], λ = [0.25, 1, 1 × 10^−7], bias = [0, 100, 200], g = [F11′, F9′, F1′] | 2500
F26 | Composition Function 4 | N = 5, σ = [10, 10, 10, 10, 10], λ = [0.25, 1, 1 × 10^−7, 2.5, 10], bias = [0, 100, 200, 300, 400], g = [F11′, F13′, F1′, F6′, F7′] | 2600
F27 | Composition Function 5 | N = 5, σ = [10, 10, 10, 10, 20], λ = [10, 10, 2.5, 25, 1 × 10^−6], bias = [0, 100, 200, 300, 400], g = [F14′, F9′, F11′, F6′, F1′] | 2700
F28 | Composition Function 6 | N = 5, σ = [10, 20, 30, 40, 50], λ = [2.5, 10, 2.5, 5 × 10^−4, 1 × 10^−6], bias = [0, 100, 200, 300, 400], g = [F15′, F13′, F11′, F16′, F1′] | 2800
F29 | Composition Function 7 | N = 3, σ = [10, 30, 50], λ = [1, 1, 1], bias = [0, 100, 200], g = [F17′, F18′, F19′] | 2900
F30 | Composition Function 8 | N = 3, σ = [10, 30, 50], λ = [1, 1, 1], bias = [0, 100, 200], g = [F20′, F21′, F22′] | 3000
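The composition functions blend their components g_i with distance-based weights controlled by the σ_i values above. The sketch below follows the weighting scheme of the CEC 2014 technical report [41]; the helper name `composition_weights` is ours.

```python
import numpy as np

def composition_weights(x, optima, sigma):
    """Normalized weights ω_i for the CEC 2014 composition functions:
    each component g_i is weighted by proximity of x to its shifted
    optimum o_i, with spread controlled by sigma_i."""
    x = np.asarray(x, float)
    optima = np.asarray(optima, float)       # shape (N, D): one row per o_i
    sigma = np.asarray(sigma, float)
    D = x.size
    d2 = np.sum((x - optima) ** 2, axis=1)   # squared distance to each o_i
    if np.any(d2 == 0):
        # x sits exactly on an optimum: that component gets full weight.
        w = (d2 == 0).astype(float)
    else:
        w = np.exp(-d2 / (2.0 * D * sigma ** 2)) / np.sqrt(d2)
    return w / w.sum()                        # ω_i, summing to 1
```

The composite value is then F(x) = Σ_i ω_i (λ_i g_i(x) + bias_i) + F*, so components with small σ_i dominate only near their own optimum.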
Table A2. Definitions of the basic functions in CEC 2014.
No. | Function Name | Expression
f1 | High Conditioned Elliptic Function | f1(x) = Σ_{i=1..D} (10^6)^((i−1)/(D−1)) x_i^2
f2 | Bent Cigar Function | f2(x) = x_1^2 + 10^6 Σ_{i=2..D} x_i^2
f3 | Discus Function | f3(x) = 10^6 x_1^2 + Σ_{i=2..D} x_i^2
f4 | Rosenbrock's Function | f4(x) = Σ_{i=1..D−1} [100(x_i^2 − x_{i+1})^2 + (x_i − 1)^2]
f5 | Ackley's Function | f5(x) = −20 exp(−0.2 √((1/D) Σ_{i=1..D} x_i^2)) − exp((1/D) Σ_{i=1..D} cos(2πx_i)) + 20 + e
f6 | Weierstrass Function | f6(x) = Σ_{i=1..D} (Σ_{k=0..kmax} a^k cos(2πb^k (x_i + 0.5))) − D Σ_{k=0..kmax} a^k cos(2πb^k · 0.5), where a = 0.5, b = 3, kmax = 20
f7 | Griewank's Function | f7(x) = Σ_{i=1..D} x_i^2/4000 − Π_{i=1..D} cos(x_i/√i) + 1
f8 | Rastrigin's Function | f8(x) = Σ_{i=1..D} (x_i^2 − 10 cos(2πx_i) + 10)
f9 | Modified Schwefel's Function | f9(x) = 418.9829 × D − Σ_{i=1..D} g(z_i), where z_i = x_i + 4.209687462275036 × 10^2 and
g(z_i) = z_i sin(|z_i|^(1/2)) if |z_i| ≤ 500; (500 − mod(z_i, 500)) sin(√|500 − mod(z_i, 500)|) − (z_i − 500)^2/(10000 D) if z_i > 500; (mod(|z_i|, 500) − 500) sin(√|mod(|z_i|, 500) − 500|) − (z_i + 500)^2/(10000 D) if z_i < −500
f10 | Katsuura Function | f10(x) = (10/D^2) Π_{i=1..D} (1 + i Σ_{j=1..32} |2^j x_i − round(2^j x_i)|/2^j)^(10/D^1.2) − 10/D^2
f11 | HappyCat Function | f11(x) = |Σ_{i=1..D} x_i^2 − D|^(1/4) + (0.5 Σ_{i=1..D} x_i^2 + Σ_{i=1..D} x_i)/D + 0.5
f12 | HGBat Function | f12(x) = |(Σ_{i=1..D} x_i^2)^2 − (Σ_{i=1..D} x_i)^2|^(1/2) + (0.5 Σ_{i=1..D} x_i^2 + Σ_{i=1..D} x_i)/D + 0.5
f13 | Expanded Griewank's plus Rosenbrock's Function | f13(x) = f7(f4(x_1, x_2)) + f7(f4(x_2, x_3)) + … + f7(f4(x_{D−1}, x_D)) + f7(f4(x_D, x_1))
f14 | Expanded Scaffer's F6 Function | g(x, y) = 0.5 + (sin^2(√(x^2 + y^2)) − 0.5)/(1 + 0.001(x^2 + y^2))^2;
f14(x) = g(x_1, x_2) + g(x_2, x_3) + … + g(x_{D−1}, x_D) + g(x_D, x_1)
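Several of the basic functions in Table A2 are simple to check directly. The sketch below implements f5, f7, and f8 without the shift/rotation wrappers of Table A1; the Python function names are ours.

```python
import numpy as np

def ackley(x):
    """f5 in Table A2; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x))) + 20.0 + np.e)

def griewank(x):
    """f7 in Table A2; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def rastrigin(x):
    """f8 in Table A2; global minimum 0 at the origin."""
    x = np.asarray(x, float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```

All three evaluate to 0 at the origin, which is a convenient sanity check before applying the shift vectors o_i and rotation matrices M of Table A1.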

References

  1. Holland, J. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; MIT Press: Cambridge, MA, USA, 1975. [Google Scholar]
  2. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 1992. [Google Scholar]
  3. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar]
  4. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009. [Google Scholar]
  5. Rao, R.; Savsani, V.; Vakharia, D. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  6. Črepinšek, M.; Liu, S.-H.; Mernik, L. A note on teaching–learning-based optimization algorithm. Inf. Sci. 2012, 212, 79–93. [Google Scholar] [CrossRef]
  7. Črepinšek, M.; Liu, S.-H.; Mernik, L.; Mernik, M. Is a comparison of results meaningful from the inexact replications of computational experiments? Soft Comput. 2016, 20, 223–235. [Google Scholar] [CrossRef] [Green Version]
  8. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  9. Civicioglu, P. Backtracking Search Optimization Algorithm for numerical optimization problems. Appl. Math. Comput. 2013, 219, 8121–8144. [Google Scholar] [CrossRef]
  10. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar]
  11. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 Special Session on Real Parameter Optimization. J. Heuristics 2009, 15, 617–644. [Google Scholar] [CrossRef]
  12. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  13. Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential evolution with subpopulations for high-dimensional seismic inversion. Geophys. Prospect. 2018, 66, 1060–1069. [Google Scholar] [CrossRef] [Green Version]
  14. Elferik, S.; Hassan, M.; Alnaser, M. Adaptive Valve Stiction Compensation Using Differential Evolution. J. Chem. Eng. Jpn. 2018, 51, 407–417. [Google Scholar] [CrossRef]
  15. Qiu, X.; Xu, J.-X.; Xu, Y.H.; Tan, K.C. A New Differential Evolution Algorithm for Minimax Optimization in Robust Design. IEEE Trans. Cybern. 2018, 48, 1355–1368. [Google Scholar] [CrossRef] [PubMed]
  16. Aguitoni, M.C.; Pavão, L.V.; Siqueira, P.H.; Jiménez, L.; Ravagnani, M.A.D.S.S. Heat exchanger network synthesis using genetic algorithm and differential evolution. Comput. Chem. Eng. 2018, 117, 82–96. [Google Scholar] [CrossRef]
  17. Ak, C.; Yildiz, A.; Yıldız, A. A Novel Closed-Form Expression Obtained by Using Differential Evolution Algorithm to Calculate Pull-In Voltage of MEMS Cantilever. J. Microelectromech. Syst. 2018, 27, 392–397. [Google Scholar] [CrossRef]
  18. Zhao, W.-J.; Liu, E.-X.; Wang, B.; Gao, S.-P.; Png, C.E. Differential Evolutionary Optimization of an Equivalent Dipole Model for Electromagnetic Emission Analysis. IEEE Trans. Electromagn. Compat. 2018, 60, 1635–1639. [Google Scholar] [CrossRef]
  19. Kaur, M.; Kumar, V. Colour image encryption technique using differential evolution in non-subsampled contourlet transform domain. IET Image Process. 2018, 12, 1273–1283. [Google Scholar]
  20. Zhou, Z.; Gao, X.; Zhang, J.; Zhu, Z.; Hu, X. A novel hybrid model using the rotation forest-based differential evolution online sequential extreme learning machine for illumination correction of dyed fabrics. Text. Res. J. 2019, 89, 1180–1197. [Google Scholar] [CrossRef]
  21. Wang, Y.; Zhou, M.; Song, X.; Gu, M.; Sun, J. Constructing Cost-Aware Functional Test-Suites Using Nested Differential Evolution Algorithm. IEEE Trans. Evol. Comput. 2017, 22, 334–346. [Google Scholar] [CrossRef]
  22. Wang, L.; Zeng, Y.; Chen, T. Back propagation neural network with adaptive differential evolution algorithm for time series forecasting. Expert Syst. Appl. 2015, 42, 855–863. [Google Scholar] [CrossRef]
  23. Baioletti, M.; Milani, A.; Santucci, V. Learning Bayesian networks with algebraic differential evolution. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Coimbra, Portugal, 8–12 September 2018. [Google Scholar]
  24. Zamuda, A.; Hernández Sosa, J.D. Success history applied to expert system for underwater glider path planning using differential evolution. Expert Syst. Appl. 2019, 119, 155–170. [Google Scholar]
  25. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential Evolution Algorithm with Strategy Adaptation for Global Numerical Optimization. IEEE Trans. Evol. Comput. 2008, 13, 398–417. [Google Scholar] [CrossRef]
  26. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-Adapting Control Parameters in Differential Evolution: A Comparative Study on Numerical Benchmark Problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  27. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  28. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013. [Google Scholar]
  29. Guo, S.-M.; Yang, C.-C.; Hsu, P.-H.; Tsai, J.S.H. Improving Differential Evolution with a Successful-Parent-Selecting Framework. IEEE Trans. Evol. Comput. 2015, 19, 717–730. [Google Scholar] [CrossRef]
  30. Guo, S.-M.; Yang, C.-C. Enhancing Differential Evolution Utilizing Eigenvector-Based Crossover Operator. IEEE Trans. Evol. Comput. 2015, 19, 31–49. [Google Scholar] [CrossRef]
  31. Gong, W.; Cai, Z. Differential Evolution with Ranking-Based Mutation Operators. IEEE Trans. Cybern. 2013, 43, 2066–2081. [Google Scholar] [CrossRef] [PubMed]
  32. Guo, L.; Li, X.; Gong, W. Ranking-Based Differential Evolution for Large-Scale Continuous Optimization. Comput. Inform. 2018, 37, 49–75. [Google Scholar] [CrossRef]
  33. Tang, L.; Dong, Y.; Liu, J. Differential Evolution with an Individual-Dependent Mechanism. IEEE Trans. Evol. Comput. 2014, 19, 560–574. [Google Scholar] [CrossRef] [Green Version]
  34. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential Evolution Using a Neighborhood-Based Mutation Operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553. [Google Scholar] [CrossRef] [Green Version]
  35. Mallipeddi, R.; Suganthan, P.; Pan, Q.; Tasgetiren, M. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696. [Google Scholar] [CrossRef]
  36. Islam, S.M.; Das, S.; Ghosh, S.; Roy, S.; Suganthan, P.N. An Adaptive Differential Evolution Algorithm with Novel Mutation and Crossover Strategies for Global Numerical Optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 482–500. [Google Scholar] [CrossRef]
  37. Segredo, E.; Lalla-Ruiz, E.; Hart, E. A novel similarity-based mutant vector generation strategy for differential evolution. In Proceedings of the Genetic and Evolutionary Computation Conference, Kyoto, Japan, 15–19 July 2018. [Google Scholar]
  38. Xu, B.; Tao, L.; Chen, X.; Cheng, W. Adaptive differential evolution with multi-population-based mutation operators for constrained optimization. Soft Comput. 2019, 23, 3423–3447. [Google Scholar] [CrossRef]
  39. Peng, H.; Guo, Z.; Deng, C.; Wu, Z. Enhancing differential evolution with random neighbors based strategy. J. Comput. Sci. 2018, 26, 501–511. [Google Scholar] [CrossRef]
  40. Huang, Q.; Zhang, K.; Song, J.; Zhang, Y.; Shi, J. Adaptive differential evolution with a Lagrange interpolation argument algorithm. Inf. Sci. 2019, 472, 180–202. [Google Scholar] [CrossRef]
  41. Liang, J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  42. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-Based Differential Evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79. [Google Scholar] [CrossRef] [Green Version]
  43. Wang, H.; Wu, Z.; Rahnamayan, S. Enhanced opposition-based differential evolution for solving high-dimensional continuous optimization problems. Soft Comput. 2010, 15, 2127–2140. [Google Scholar] [CrossRef]
  44. Hansen, N.; Ostermeier, A. Completely Derandomized Self-Adaptation in Evolution Strategies. Evol. Comput. 2001, 9, 159–195. [Google Scholar] [CrossRef]
  45. Hansen, N.; Niederberger, A.S.P.; Guzzella, L.; Koumoutsakos, P. A Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion. IEEE Trans. Evol. Comput. 2008, 13, 180–197. [Google Scholar] [CrossRef]
  46. Ghosh, S.; Das, S.; Roy, S.; Islam, S.M.; Suganthan, P. A Differential Covariance Matrix Adaptation Evolutionary Algorithm for real parameter optimization. Inf. Sci. 2012, 182, 199–219. [Google Scholar] [CrossRef]
  47. Wang, Y.; Li, H.-X.; Huang, T.; Li, L. Differential evolution based on covariance matrix learning and bimodal distribution parameter setting. Appl. Soft Comput. 2014, 18, 232–247. [Google Scholar] [CrossRef]
  48. Wang, Y.; Cai, Z.; Zhang, Q. Enhancing the search ability of differential evolution through orthogonal crossover. Inf. Sci. 2012, 185, 153–177. [Google Scholar] [CrossRef]
  49. Wang, H.; Rahnamayan, S.; Sun, H.; Omran, M.G.H. Gaussian Bare-Bones Differential Evolution. IEEE Trans. Cybern. 2013, 43, 634–647. [Google Scholar] [CrossRef] [PubMed]
  50. Wang, Y.; Cai, Z.; Zhang, Q. Differential Evolution with Composite Trial Vector Generation Strategies and Control Parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66. [Google Scholar] [CrossRef]
  51. Dorronsoro, B.; Bouvry, P. Improving Classical and Decentralized Differential Evolution with New Mutation Operator and Population Topologies. IEEE Trans. Evol. Comput. 2011, 15, 67–98. [Google Scholar] [CrossRef]
  52. Črepinšek, M.; Liu, S.-H.; Mernik, M.; Ravber, M. Long Term Memory Assistance for Evolutionary Algorithms. Mathematics 2019, 7, 1129. [Google Scholar] [CrossRef] [Green Version]
  53. Auger, A.; Hansen, N. A restart CMA evolution strategy with increasing population size. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Scotland, UK, 2–5 September 2005; Volume 2, pp. 1769–1776. [Google Scholar]
  54. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastián, Spain, 5–8 June 2017; pp. 1311–1318. [Google Scholar]
  55. Cai, Y.; Wang, J. Differential evolution with neighborhood and direction information for numerical optimization. IEEE Trans. Cybern. 2013, 43, 2202–2215. [Google Scholar] [CrossRef]
  56. Brest, J.; Korosec, P.; Šilc, J.; Zamuda, A.; Bošković, B.; Maučec, M.S. Differential evolution and differential ant-stigmergy on dynamic optimisation problems. Int. J. Syst. Sci. 2013, 44, 663–679. [Google Scholar] [CrossRef]
  57. Zhang, K.; Huang, Q.; Zhang, Y. Enhancing comprehensive learning particle swarm optimization with local optima topology. Inf. Sci. 2019, 471, 1–18. [Google Scholar] [CrossRef]
  58. Salomon, R. Re-evaluating genetic algorithm performance under coordinate rotation of benchmark functions. A survey of some theoretical and practical aspects of genetic algorithms. BioSystems 1996, 39, 263–278. [Google Scholar] [CrossRef]
  59. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014. [Google Scholar]
  60. Chen, D.; Zou, F.; Lu, R.; Wang, P. Learning backtracking search optimisation algorithm and its application. Inf. Sci. 2017, 376, 71–94. [Google Scholar] [CrossRef]
  61. Zou, F.; Wang, L.; Hei, X.; Chen, D.; Yang, D. Teaching–learning-based optimization with dynamic group strategy for global optimization. Inf. Sci. 2014, 273, 112–131. [Google Scholar] [CrossRef]
  62. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  63. Veček, N.; Črepinšek, M.; Mernik, M. On the influence of the number of algorithms, problems, and independent runs in the comparison of evolutionary algorithms. Appl. Soft Comput. 2017, 54, 23–45. [Google Scholar] [CrossRef]
  64. Singh, R.; Chaudhary, H.; Singh, A.K. Defect-free optimal synthesis of crank-rocker linkage using nature-inspired optimization algorithms. Mech. Mach. Theory 2017, 116, 105–122. [Google Scholar] [CrossRef]
Figure 1. Convergence curves in 30D problems of the CEC 2014 benchmarks.
Figure 2. Variables of the four-bar mechanism.
Figure 3. Convergence curves of the four-bar mechanism with the six algorithms.
Table 1. The influence of parameter M.
Func.A1.M = 20M = 40M = 50M = 60M = 80M = 100
F1Mean2.94 × 1055.10 × 1053.96 × 1064.01 × 1065.00 × 1063.43 × 106
F2Mean1.24 × 10−92.19 × 10−84.35 × 10−101.76 × 10−82.95 × 10−71.84 × 10−6
F3Mean3.48 × 10−105.47 × 10−91.04 × 10−139.74 × 10−161.92 × 10−152.03 × 10−5
F4Mean11.525.28.111.70199.05
F5Mean20.620.620.420.420.420.3
F6Mean0.1424.19 × 10−40.13512.314.414.3
F7Mean0.007.40 × 10−40.000.000.001.11 × 10−17
F8Mean0.000000110.00003586.16 × 10−104.41 × 10−101.52 × 10−91.7 × 10−9
F9Mean33.658.649.152.153.054.8
F10Mean40.859.84.91 × 10−35.78 × 10−36.90 × 10−31.61 × 10−2
F11Mean2.23 × 1034.16 × 1032.70 × 1032.74 × 1032.77 × 1032.73 × 103
F12Mean0.5690.9520.5170.5360.4890.549
F13Mean0.2710.280.2370.2390.2250.292
F14Mean0.290.2920.2780.2550.2620.251
F15Mean3.9610.06.366.596.286.52
F16Mean10.911.210.09.9510.110.2
F17Mean5.53 × 10−34.04 × 1031.29 × 1036.64 × 1026.48 × 1026.34 × 102
F18Mean44.847.214.415.217.713.2
F19Mean3.324.134.574.825.026.15
F20Mean11.612.012.117.09.119.47
F21Mean9.48 × 1028.27 × 1023.26 × 1023.15 × 1023.61 × 1023.47 × 102
F22Mean1.29 × 1021.01 × 1021.85 × 1021.70 × 1021.38 × 1022.21 × 102
F23Mean3.15 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 102
F24Mean2.23 × 1022.23 × 1022.23 × 1022.23 × 1022.24 × 1022.24 × 102
F25Mean2.03 × 1022.03 × 1022.05 × 1022.05 × 1022.05 × 1022.05 × 102
F26Mean1.00 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 102
F27Mean3.14 × 1023.09 × 1023.00 × 1023.00 × 1023.00 × 1023.04 × 102
F28Mean8.04 × 1028.16 × 1027.95 × 1028.10 × 1028.09 × 1028.14 × 102
F29Mean1.06 × 1039.73 × 1029.58 × 1027.95 × 1028.00 × 1021.30 × 103
F30Mean1.08 × 1031.29 × 1031.29 × 1031.40 × 1031.78 × 1031.87 × 103
Average Rank 3.074.202.372.573.073.80
evaluations FEs = 300,000; for 50-dimensional problems, N = 100 and FEs = 500,000. The experiments were carried out on a PC with a 3.6 GHz CPU and 32 GB of memory, using MATLAB R2016b in a 64-bit Windows environment.
Table 2. The comparative results of 30-dimensional problems for CEC 2014.
Func.A1.KHLBSADGSTLBOSCADErandjDESaDErank-jDEjDE-EIGSPS-jDELSHADERUSDE
F1Mean1.27 × 106 1.20 × 1053.82 × 1052.54 × 1081.59 × 1056.87 × 1051.42 × 1041.00 × 1058.17 × 1041.77 × 1052.00 × 1021.13 × 105
Std.4.16 × 105 1.01 × 1052.11 × 1058.21 × 1071.72 × 1053.89 × 1051.40 × 1046.46 × 1048.81 × 1041.14 × 1051.60× 10−56.53 × 104
F2Mean1.28 × 104 4.65 × 10−232.85 × 10−21.65 × 101025.10.001.82 × 10−230.002.52 × 10−240.001.19 × 10−170.00
Std.6.42 × 103 1.03 × 10−220.1152.49 × 1091.12 × 1020.005.62 × 10−230.001.13 × 10−230.001.15 × 10−170.00
F3Mean3.83 × 104 8.53 × 10−303.96 × 10−43.73 × 1041.464.95 × 10−254.77 × 10−281.87 × 10−291.25 × 10−279.77 × 10−282.13 × 10−223.08 × 10−28
Std.1.08 × 104 2.37 × 10−291.61 × 10−36.46 × 1033.976.89 × 10−251.22 × 10−274.28 × 10−293.11 × 10−271.92 × 10−273.21 × 10−221.32 × 10−27
F4Mean65.6 7.2592.61.12 × 10321.00.5660.3436.04 × 10−30.1064.100.5570.403
Std.37.2 20.727.43.08 × 10237.80.2950.221.63 × 10−20.15514.90.4510.212
F5Mean20.0 20.220.920.921.020.520.620.520.620.420.220.4
Std.6.45 × 10−43.45 × 10−23.84 × 10−25.29 × 10−24.80 × 10−24.42 × 10−25.36 × 10−24.58 × 10−24.73 × 10−20.1163.83 × 10−20.111
F6Mean20.5 7.0620.834.40.6630.3610.8061.330.5930.58110.30.432
Std.3.28 1.872.702.591.000.7651.491.080.8810.720.8150.82
F7Mean2.81 × 10−21.08 × 10−28.91 × 10−21.43 × 1021.48 × 10−23.70 × 10−41.84 × 10−32.46 × 10−34.93 × 10−40.000.000.00
Std.1.24 × 10−21.34 × 10−20.10921.83.04 × 10−31.65 × 10−36.02 × 10−34.52 × 10−32.20 × 10−30.000.000.00
F8Mean1.04 × 102 8.88 × 10−1786.92.44 × 10212.80.000.000.004.843.412.11 × 10−40.497
Std.25.6 3.97 × 10−1620.318.54.650.000.000.004.031.992.10 × 10−40.685
F9Mean1.10 × 102 38.51.01 × 1022.74 × 1062.395.838.463.991.134.622.634.2
Std.25.97.2119.016.455.69.8412.426.47.529.753.7310.9
F10Mean3.70 × 103 4.472.48 × 1035.97 × 1031.96 × 10248.41.1212.63.31 × 10222.16.664.38
Std.8.89 × 102 2.086.03 × 1024.28 × 1021.63 × 10214.11.087.5064.636.21.523.03
F11Mean3.80 × 103 1.94 × 1033.53 × 1037.14 × 1036.43 × 1034.23 × 1033.59 × 1034.08 × 1034.47 × 1032.07 × 1031.42 × 1032.11 × 103
Std.6.51 × 102 1.91 × 1028.74 × 1022.30 × 1021.10 × 1033.41 × 1023.09 × 1023.12 × 1023.58 × 1025.87 × 1021.97 × 1025.84 × 102
F12Mean0.258 0.2882.002.532.210.8820.8130.9011.060.2020.2280.177
Std.0.141 4.45 × 10−20.5230.2460.3610.1290.1108.95 × 10−20.1430.1102.73 × 10−29.74 × 10−2
F13Mean0.498 0.2610.4783.030.2740.300.2630.2580.2800.1300.1880.182
Std.9.24 × 10−25.02 × 10−20.1130.2664.34 × 10−24.22 × 10−24.95 × 10−23.96 × 10−23.06 × 10−23.66 × 10−22.00 × 10−25.04 × 10−2
F14Mean0.299 0.2250.25847.80.2760.2770.2700.2760.2560.2860.2440.261
Std.4.74 × 10−2 4.18 × 10−25.01 × 10−210.33.12 × 10−22.87 × 10−24.05 × 10−23.24 × 10−23.74 × 10−24.09 × 10−21.68 × 10−23.61 × 10−2
F15Mean16.6 5.8345.13.90 × 10314.29.947.019.339.563.032.653.43
Std.4.72 1.7023.44.11 × 1031.700.7191.010.8620.9521.080.3411.15
F16Mean13.0 9.5411.112.812.111.110.810.811.41.14E+019.429.339.09
Std.0.408 0.400.7050.2910.3080.2650.3530.3480.2770.9310.4631.15
F17Mean1.44 × 105 1.29 × 1044.40 × 1036.45 × 1066.24 × 1031.53 × 1043.96 × 1039.04 × 1031.40 × 1037.49 × 1033.66 × 1029.16 × 103
Std.7.44 × 104 1.27 × 1042.13 × 1033.17 × 1064.72 × 1031.52 × 1044.30 × 1039.00 × 1036.92 × 1026.43 × 10391.98.20 × 103
F18Mean9.04 × 102 81.92.84 × 1031.71 × 10815.664.333.81.28 × 10233.11.02 × 1037.1926.5
Std.7.71 × 102 36.14.34 × 1039.66 × 1079.6255.417.34.29 × 10218.23.02 × 1033.2022.9
F19Mean20.5 6.0614.083.32.874.924.644.334.152.993.362.74
Std.18.6 1.352.0125.10.7080.6170.7401.020.9640.8340.5560.549
F20Mean2.37 × 104 35.03.74 × 1021.63 × 10422.219.019.315.918.413.75.1419.0
Std.6.99 × 10315.41.19 × 1025.64 × 10321.07.745.567.289.514.021.6718.1
F21Mean1.14 × 105 5.20 × 1034.19 × 1031.80 × 1067.60 × 1021.34 × 1035.21 × 1021.22 × 1033.65 × 1021.67 × 1033.12 × 1021.79 × 103
Std.4.84 × 104 8.00 × 1033.67 × 1039.79 × 1059.40 × 1028.29 × 1023.02 × 1021.24 × 1031.02 × 1022.08 × 10369.91.94 × 103
F22Mean5.84 × 102 1.65 × 1024.19 × 1027.85 × 1021.08 × 10261.795.01.44 × 10281.62.74 × 10257.21.83 × 102
Std.1.94 × 102 87.02.38 × 1021.08 × 1021.34 × 10258.677.11.05 × 10262.41.32 × 10247.01.80 × 102
F23Mean3.15 × 102 3.15 × 1022.00 × 1023.66 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 1023.15 × 102
Std.4.28 × 10−3 1.17 × 10−132.26 × 10−1410.29.22 × 10−145.83 × 10−147.72 × 10−145.83 × 10−145.22 × 10−148.35 × 10−145.83 × 10−146.12 × 10−14
F24Mean2.19 × 1022.29 × 1022.00 × 1022.00 × 1022.25 × 1022.24 × 1022.26 × 1022.29 × 1022.24 × 1022.25 × 1022.24 × 1022.24 × 102
Std.9.88 5.333.37 × 10−64.89 × 10−21.241.223.345.481.283.220.5961.74
F25Mean2.06 × 102 2.07 × 1022.00 × 1022.29 × 1022.03 × 1022.03 × 1022.04 × 1022.04 × 1022.05 × 1022.04 × 1022.03 × 1022.04 × 102
Std.3.34 2.895.83 × 10−145.380.4370.4211.070.9991.080.7150.1860.524
F26Mean1.80 × 102 1.00 × 1021.35 × 1021.03 × 1021.05 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 102
Std.40.87.18 × 10−248.70.40322.35.86 × 10−22.89 × 10−23.41 × 10−24.05 × 10−23.62 × 10−22.79 × 10−23.37 × 10−2
F27Mean8.26 × 102 4.62 × 1022.00 × 1028.01 × 1023.59 × 1023.64 × 1023.56 × 1023.41 × 1023.72 × 1023.22 × 1023.00 × 1023.16 × 102
Std.1.37 × 102 70.62.26 × 10−143.43 × 10245.148.451.440.245.538.11.19 × 10−434.0
F28Mean1.47 × 103 9.45 × 1025.73 × 1022.07 × 1038.33 × 1028.04 × 1028.21 × 1028.29 × 1028.41 × 1028.32 × 1027.97 × 1028.09 × 102
Std.4.27 × 102 1.53 × 1026.20 × 1023.24 × 10236.321.724.828.149.530.011.863.8
F29Mean4.45 × 105 8.88 × 1051.12 × 1061.15 × 1078.50 × 1021.01 × 1037.83 × 1028.43 × 1027.67 × 1029.26 × 1027.18 × 1028.62 × 102
Std.1.93 × 106 2.74 × 1062.21 × 1066.25 × 1062.14 × 1021.10 × 1021.95 × 1021.31 × 10294.33.38 × 1024.901.58 × 102
F30Mean7.96 × 103 2.34 × 1036.08 × 1032.18 × 1059.65 × 1021.42 × 1031.12 × 1031.76 × 1031.07 × 1031.63 × 1031.01 × 1031.68 × 103
Std.2.00 × 103 9.37 × 1025.80 × 1038.02 × 1046.08 × 1026.33 × 1023.78 × 1029.87 × 1025.55 × 1026.92 × 1022.96 × 1027.08 × 102
Avg.Rank9.606.307.9311.336.875.704.705.235.434.772.703.70
Table 3. The comparative results of 50-dimensional problems for CEC 2014.
Func.A1.KHLBSADGSTLBOSCADErandjDESaDErank-jDEjDE-EIGSPS-jDELSHADERUSDE
F1Mean2.79 × 106 8.45 × 1052.27 × 1066.60 × 1089.38 × 1053.02 × 1063.37 × 1051.01 × 1068.04 × 1051.57 × 1061.01 × 1051.10 × 106
Std.4.74 × 105 4.60 × 1059.25 × 1051.58 × 1084.37 × 1051.15 × 1061.85 × 1053.61 × 1052.76 × 1056.08 × 1053.62 × 1041.98 × 105
F2Mean2.32 × 104 2.07 × 10−21.25 × 1065.41 × 10105.06 × 10385.536.30.6573.34 × 1034.18 × 1032.98 × 1022.20 × 102
Std.1.09 × 104 4.24 × 10−25.39 × 1065.87 × 1094.01 × 1031.04 × 1021.61 × 1021.162.87 × 1035.58 × 1031.93 × 1023.27 × 102
F3Mean8.26 × 104 2.672.10 × 1028.83 × 1042.90 × 1038.066.780.6846.28 × 10262.71.93 × 10340.6
Std.1.25 × 104 6.535.31 × 1021.39 × 1041.54 × 10311.416.71.127.74 × 10273.82.62 × 10396.9
F4Mean97.1 89.01.88 × 1027.86 × 10393.494.993.062.877.095.392.793.9
Std.37.438.852.31.73 × 10314.63.2233.233.132.02.434.073.14
F5Mean20.0 20.421.121.221.120.820.820.820.920.520.520.6
Std.8.48 × 10−42.93 × 10−23.37 × 10−22.72 × 10−24.65 × 10−24.21 × 10−24.47 × 10−23.90 × 10−24.34 × 10−28.75 × 10−25.02 × 10−20.175
F6Mean43.418.943.166.34.512.665.524.614.641.7428.63.24
Std.3.41 2.583.843.741.911.792.462.892.281.781.491.85
F7Mean0.1141.27 × 10−20.4025.16 × 1027.13 × 10−33.70 × 10−43.57 × 10−31.60 × 10−35.18 × 10−32.61 × 10−162.88 × 10−113.00 × 10−16
Std.3.00 × 10−21.22 × 10−20.71560.71.15 × 10−21.65 × 10−35.33 × 10−35.07 × 10−34.99 × 10−31.31 × 10−162.86 × 10−111.91 × 10−16
F8Mean2.25 × 1021.71 × 10−101.87 × 1025.13 × 10234.336.70.99510.775.714.515.13.68
Std.31.86.95 × 10−1028.528.36.906.291.128.748.394.141.712.04
| Func. | | KH | LBSA | DGSTLBO | SCA | DErand | jDE | SaDE | rank-jDE | jDE-EIG | SPS-jDE | LSHADE | RUSDE |
| F9 | Mean | 2.47×10^2 | 1.08×10^2 | 2.17×10^2 | 5.78×10^2 | 2.14×10^2 | 2.52×10^2 | 1.55×10^2 | 1.95×10^2 | 2.23×10^2 | 71.3 | 60.2 | 63.0 |
| | Std. | 45.2 | 21.8 | 27.4 | 34.4 | 1.38×10^2 | 17.2 | 29.5 | 65.4 | 8.79 | 20.6 | 8.50 | 14.3 |
| F10 | Mean | 6.70×10^3 | 18.4 | 5.61×10^3 | 1.25×10^4 | 1.43×10^3 | 1.18×10^3 | 63.5 | 7.46×10^2 | 2.92×10^3 | 1.88×10^2 | 3.39×10^2 | 34.2 |
| | Std. | 1.03×10^3 | 6.49 | 1.13×10^3 | 7.71×10^2 | 1.50×10^3 | 2.48×10^2 | 31.5 | 1.54×10^2 | 3.13×10^2 | 1.54×10^2 | 70.3 | 43.5 |
| F11 | Mean | 6.86×10^3 | 4.83×10^3 | 6.84×10^3 | 1.35×10^4 | 1.32×10^4 | 9.82×10^3 | 8.60×10^3 | 9.64×10^3 | 9.84×10^3 | 4.36×10^3 | 4.86×10^3 | 5.11×10^3 |
| | Std. | 8.68×10^2 | 4.63×10^2 | 1.33×10^3 | 3.92×10^2 | 2.70×10^2 | 2.66×10^2 | 5.35×10^2 | 4.57×10^2 | 4.71×10^2 | 1.02×10^3 | 4.55×10^2 | 9.90×10^2 |
| F12 | Mean | 0.367 | 0.404 | 2.91 | 3.48 | 3.44 | 1.35 | 1.30 | 1.35 | 1.56 | 0.119 | 0.417 | 0.290 |
| | Std. | 0.281 | 4.10×10^−2 | 0.234 | 0.377 | 0.289 | 0.122 | 0.129 | 0.140 | 0.162 | 4.73×10^−2 | 4.64×10^−2 | 0.227 |
| F13 | Mean | 0.667 | 0.476 | 0.603 | 4.69 | 0.438 | 0.432 | 0.391 | 0.39 | 0.411 | 0.233 | 0.273 | 0.335 |
| | Std. | 7.69×10^−2 | 8.10×10^−2 | 9.53×10^−2 | 0.294 | 5.40×10^−2 | 4.91×10^−2 | 4.48×10^−2 | 4.27×10^−2 | 4.74×10^−2 | 4.98×10^−2 | 1.52×10^−2 | 6.77×10^−2 |
| F14 | Mean | 0.355 | 0.284 | 0.366 | 1.34×10^2 | 0.385 | 0.376 | 0.318 | 0.434 | 0.311 | 0.339 | 0.313 | 0.340 |
| | Std. | 4.14×10^−2 | 3.69×10^−2 | 0.136 | 18.5 | 0.128 | 0.163 | 4.10×10^−2 | 0.193 | 2.69×10^−2 | 9.14×10^−2 | 1.93×10^−2 | 9.99×10^−2 |
| F15 | Mean | 43.2 | 21.2 | 1.03×10^3 | 1.57×10^5 | 30.1 | 24.1 | 21.7 | 22.6 | 23.7 | 6.81 | 8.42 | 7.91 |
| | Std. | 7.37 | 4.32 | 9.18×10^2 | 8.10×10^4 | 1.61 | 1.45 | 2.36 | 1.64 | 1.89 | 1.47 | 1.10 | 2.58 |
| F16 | Mean | 22.1 | 18.7 | 20.7 | 22.7 | 22.1 | 20.8 | 20.6 | 20.6 | 21.0 | 18.6 | 18.7 | 18.1 |
| | Std. | 0.602 | 0.364 | 0.620 | 0.166 | 0.203 | 0.280 | 0.365 | 0.321 | 0.238 | 0.974 | 0.316 | 1.07 |
| F17 | Mean | 2.84×10^5 | 6.92×10^4 | 2.16×10^5 | 4.99×10^7 | 5.37×10^4 | 1.34×10^5 | 4.38×10^4 | 6.30×10^4 | 3.60×10^4 | 8.69×10^4 | 1.76×10^3 | 5.40×10^4 |
| | Std. | 9.00×10^4 | 3.01×10^4 | 2.22×10^5 | 1.73×10^7 | 2.18×10^4 | 1.01×10^5 | 3.04×10^4 | 3.04×10^4 | 2.14×10^4 | 7.71×10^4 | 4.65×10^2 | 3.18×10^4 |
| F18 | Mean | 2.73×10^3 | 7.18×10^2 | 2.29×10^3 | 1.52×10^9 | 7.06×10^2 | 3.88×10^2 | 5.02×10^2 | 7.03×10^2 | 3.52×10^2 | 4.96×10^2 | 92.0 | 2.13×10^2 |
| | Std. | 1.37×10^3 | 6.70×10^2 | 1.36×10^3 | 3.63×10^8 | 8.39×10^2 | 4.34×10^2 | 5.97×10^2 | 5.95×10^2 | 3.13×10^2 | 4.91×10^2 | 17.5 | 2.74×10^2 |
| F19 | Mean | 50.1 | 30.6 | 57.9 | 2.88×10^2 | 12.8 | 17.9 | 26.7 | 20.4 | 14.7 | 9.74 | 11.6 | 8.24 |
| | Std. | 33.3 | 28.2 | 26.7 | 29.9 | 4.59 | 9.74 | 10.6 | 10.7 | 6.99 | 2.60 | 0.575 | 1.67 |
| F20 | Mean | 2.53×10^4 | 3.62×10^2 | 9.11×10^2 | 3.61×10^4 | 7.68×10^2 | 2.00×10^2 | 1.62×10^2 | 1.48×10^2 | 1.89×10^2 | 5.06×10^2 | 5.72×10^3 | 3.08×10^2 |
| | Std. | 7.84×10^3 | 2.06×10^2 | 3.31×10^2 | 1.13×10^4 | 5.41×10^2 | 76.7 | 1.16×10^2 | 77.6 | 78.6 | 4.50×10^2 | 5.69×10^3 | 2.01×10^2 |
| F21 | Mean | 2.07×10^5 | 7.84×10^4 | 1.75×10^4 | 1.09×10^7 | 3.31×10^4 | 6.24×10^4 | 1.75×10^4 | 4.73×10^4 | 1.27×10^4 | 5.96×10^4 | 1.02×10^3 | 4.12×10^4 |
| | Std. | 8.20×10^4 | 5.12×10^4 | 8.73×10^3 | 5.07×10^6 | 1.72×10^4 | 4.73×10^4 | 1.64×10^4 | 4.00×10^4 | 8.03×10^3 | 2.61×10^4 | 2.20×10^2 | 3.31×10^4 |
| F22 | Mean | 1.67×10^3 | 6.16×10^2 | 1.20×10^3 | 2.43×10^3 | 8.02×10^2 | 6.17×10^2 | 4.28×10^2 | 3.76×10^2 | 6.24×10^2 | 7.73×10^2 | 4.41×10^2 | 6.79×10^2 |
| | Std. | 3.18×10^2 | 1.97×10^2 | 3.44×10^2 | 3.03×10^2 | 4.61×10^2 | 2.71×10^2 | 1.87×10^2 | 2.47×10^2 | 1.92×10^2 | 2.74×10^2 | 1.15×10^2 | 2.45×10^2 |
| F23 | Mean | 3.44×10^2 | 3.44×10^2 | 2.00×10^2 | 7.00×10^2 | 3.44×10^2 | 3.44×10^2 | 3.44×10^2 | 3.44×10^2 | 3.44×10^2 | 3.44×10^2 | 3.44×10^2 | 3.44×10^2 |
| | Std. | 8.12×10^−2 | 1.39×10^−13 | 9.22×10^−15 | 65.6 | 1.26×10^−13 | 1.78×10^−13 | 1.58×10^−13 | 2.61×10^−14 | 1.66×10^−13 | 1.14×10^−13 | 1.08×10^−12 | 4.52×10^−14 |
| F24 | Mean | 2.74×10^2 | 2.81×10^2 | 2.00×10^2 | 2.41×10^2 | 2.76×10^2 | 2.69×10^2 | 2.73×10^2 | 2.72×10^2 | 2.72×10^2 | 2.70×10^2 | 2.75×10^2 | 2.71×10^2 |
| | Std. | 11.5 | 3.32 | 1.52×10^−7 | 60.7 | 2.47 | 2.90 | 2.21 | 2.60 | 2.26 | 2.68 | 0.917 | 1.95 |
| F25 | Mean | 2.00×10^2 | 2.28×10^2 | 2.00×10^2 | 2.69×10^2 | 2.08×10^2 | 2.08×10^2 | 2.16×10^2 | 2.08×10^2 | 2.10×10^2 | 2.09×10^2 | 2.05×10^2 | 2.09×10^2 |
| | Std. | 1.63×10^−3 | 4.93 | 4.88×10^−14 | 27.1 | 1.22 | 1.47 | 7.60 | 2.41 | 8.58 | 1.40 | 0.301 | 1.97 |
| F26 | Mean | 1.90×10^2 | 1.50×10^2 | 1.60×10^2 | 1.05×10^2 | 1.55×10^2 | 1.05×10^2 | 1.10×10^2 | 1.30×10^2 | 1.45×10^2 | 1.05×10^2 | 1.00×10^2 | 1.30×10^2 |
| | Std. | 30.6 | 51.1 | 50.0 | 0.495 | 50.9 | 22.3 | 30.7 | 46.9 | 50.9 | 22.3 | 2.62×10^−2 | 46.9 |
| F27 | Mean | 1.48×10^3 | 9.03×10^2 | 2.00×10^2 | 2.08×10^3 | 4.92×10^2 | 3.91×10^2 | 4.56×10^2 | 4.91×10^2 | 4.79×10^2 | 3.81×10^2 | 3.61×10^2 | 4.28×10^2 |
| | Std. | 1.13×10^2 | 82.1 | 2.26×10^−14 | 1.89×10^2 | 58.4 | 37.9 | 50.6 | 72.7 | 60.1 | 57.9 | 1.32×10^2 | 58.2 |
| F28 | Mean | 2.82×10^3 | 1.73×10^3 | 2.00×10^2 | 5.81×10^3 | 1.18×10^3 | 1.10×10^3 | 1.19×10^3 | 1.13×10^3 | 1.20×10^3 | 1.11×10^3 | 1.13×10^3 | 1.11×10^3 |
| | Std. | 7.93×10^2 | 4.99×10^2 | 2.53×10^−14 | 8.73×10^2 | 1.09×10^2 | 41.3 | 61.4 | 65.7 | 86.1 | 30.3 | 35.1 | 42.9 |
| F29 | Mean | 5.53×10^6 | 1.11×10^7 | 1.78×10^7 | 2.01×10^8 | 1.38×10^3 | 1.70×10^3 | 1.18×10^3 | 1.48×10^3 | 1.28×10^3 | 1.48×10^3 | 9.34×10^2 | 1.76×10^6 |
| | Std. | 2.46×10^7 | 2.00×10^7 | 2.37×10^7 | 4.66×10^7 | 1.95×10^2 | 5.05×10^2 | 2.16×10^2 | 3.85×10^2 | 2.32×10^2 | 2.62×10^2 | 67.0 | 7.88×10^6 |
| F30 | Mean | 3.11×10^4 | 1.06×10^4 | 2.18×10^4 | 1.90×10^6 | 9.38×10^3 | 8.70×10^3 | 9.16×10^3 | 9.18×10^3 | 9.87×10^3 | 9.03×10^3 | 8.68×10^3 | 8.93×10^3 |
| | Std. | 7.92×10^3 | 2.09×10^3 | 1.39×10^4 | 7.91×10^5 | 6.13×10^2 | 3.88×10^2 | 4.45×10^2 | 8.10×10^2 | 5.65×10^2 | 4.57×10^2 | 3.77×10^2 | 4.10×10^2 |
| Avg. Rank | | 9.10 | 5.87 | 7.93 | 11.33 | 7.50 | 5.87 | 4.80 | 5.13 | 6.17 | 4.23 | 3.67 | 4.13 |
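The Avg. Rank row above is obtained by ranking the twelve algorithms on each benchmark function by mean error (1 = best) and averaging each algorithm's rank over all functions. A minimal pure-Python sketch of that ranking procedure, run on made-up numbers rather than the table's data (ties, which would need averaged ranks, are not handled):

```python
def average_ranks(results):
    """results: dict algo -> list of per-function mean errors (lower is better).
    Returns dict algo -> average rank across all functions."""
    algos = list(results)
    n_funcs = len(next(iter(results.values())))
    totals = {a: 0.0 for a in algos}
    for f in range(n_funcs):
        # sort algorithms by their mean error on function f; rank 1 = best
        order = sorted(algos, key=lambda a: results[a][f])
        for rank, a in enumerate(order, start=1):
            totals[a] += rank
    return {a: totals[a] / n_funcs for a in algos}

# toy data: three algorithms on two functions
ranks = average_ranks({"A": [1.0, 3.0], "B": [2.0, 1.0], "C": [3.0, 2.0]})
# → {'A': 2.0, 'B': 1.5, 'C': 2.5}
```

The same procedure, applied per dimension setting, yields the final Avg. Rank row of the table.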
Table 4. Comparison between RUSDE and other algorithms using a t-test with double tail for 30-dimensional problems.
| Func. | | KH | LBSA | DGSTLBO | SCA | DErand | jDE | SaDE | rank-jDE | jDE-EIG | SPS-jDE | LSHADE |
| F1 | t | 12.3 | 0.245 | 5.43 | 13.8 | 1.1 | 6.51 | −6.63 | −0.635 | −1.28 | 2.16 | −7.73 |
| | p | 7.54×10^−15 | 0.807 | 3.44×10^−6 | 2.00×10^−16 | 0.269 | 1.14×10^−7 | 7.92×10^−8 | 0.529 | 0.207 | 3.68×10^−2 | 2.56×10^−9 |
| | sig. | + | = | + | + | = | + | − | = | = | + | − |
| F2 | t | 8.94 | 2.01 | 1.11 | 29.5 | 1.00 | 0.00 | 1.45 | 0.00 | 1.00 | 0.00 | 4.64 |
| | p | 6.96×10^−11 | 5.15×10^−2 | 0.275 | 8.33×10^−28 | 0.324 | 0.00 | 0.155 | 0.00 | 0.324 | 0.00 | 4.06×10^−5 |
| | sig. | + | = | = | + | = | = | = | = | = | = | + |
| F3 | t | 15.8 | −1.01 | 1.10 | 25.8 | 1.64 | 3.21 | 0.420 | −0.980 | 1.25 | 1.28 | 2.97 |
| | p | 2.63×10^−18 | 0.317 | 0.279 | 1.04×10^−25 | 0.110 | 2.73×10^−3 | 0.677 | 0.333 | 0.220 | 0.207 | 5.14×10^−3 |
| | sig. | + | = | = | + | = | + | = | = | = | = | + |
| F4 | t | 7.85 | 1.48 | 15.0 | 16.2 | 2.44 | 2.02 | −0.866 | −8.32 | −5.05 | 1.11 | 1.39 |
| | p | 1.79×10^−9 | 0.148 | 1.39×10^−17 | 1.26×10^−18 | 1.97×10^−2 | 5.09×10^−2 | 0.392 | 4.32×10^−10 | 1.15×10^−5 | 0.273 | 0.174 |
| | sig. | + | = | + | + | + | = | = | − | − | = | = |
| F5 | t | −16.8 | −6.60 | 19.8 | 19.1 | 20.0 | 4.34 | 4.96 | 4.97 | 7.71 | −1.10 | −8.07 |
| | p | 3.53×10^−19 | 8.72×10^−8 | 1.18×10^−21 | 4.65×10^−21 | 9.21×10^−22 | 1.00×10^−4 | 1.49×10^−5 | 1.47×10^−5 | 2.76×10^−9 | 0.278 | 9.28×10^−10 |
| | sig. | − | − | + | + | + | + | + | + | + | = | − |
| F6 | t | 26.5 | 14.5 | 32.3 | 55.9 | 0.797 | −0.285 | 0.980 | 2.96 | 0.596 | 0.608 | 38.3 |
| | p | 4.11×10^−26 | 4.63×10^−17 | 3.14×10^−29 | 4.22×10^−38 | 0.431 | 0.777 | 0.333 | 5.24×10^−3 | 0.555 | 0.547 | 5.50×10^−32 |
| | sig. | + | + | + | + | = | = | = | + | = | = | + |
| F7 | t | 10.2 | 3.62 | 3.66 | 29.2 | 2.18 | 1.00 | 1.37 | 2.44 | 1.00 | 0.00 | 0.00 |
| | p | 2.23×10^−12 | 8.49×10^−4 | 7.70×10^−4 | 1.18×10^−27 | 3.56×10^−2 | 0.324 | 0.178 | 1.96×10^−2 | 0.324 | 0.00 | 0.00 |
| | sig. | + | + | + | + | + | = | = | + | = | = | = |
| F8 | t | 18.1 | −3.25 | 19.1 | 58.8 | 11.7 | −3.25 | −3.25 | −3.25 | 4.75 | 6.18 | −3.25 |
| | p | 3.05×10^−20 | 2.43×10^−3 | 4.87×10^−21 | 6.17×10^−39 | 3.64×10^−14 | 2.43×10^−3 | 2.43×10^−3 | 2.43×10^−3 | 2.91×10^−5 | 3.23×10^−7 | 2.44×10^−3 |
| | sig. | + | − | + | + | + | − | − | − | + | + | − |
| F9 | t | 12.0 | 1.46 | 13.6 | 54.4 | 2.21 | 18.8 | 1.13 | 4.64 | 19.2 | 0.123 | −4.55 |
| | p | 1.72×10^−14 | 0.154 | 3.84×10^−16 | 1.16×10^−37 | 3.29×10^−2 | 7.84×10^−21 | 0.265 | 4.04×10^−5 | 3.52×10^−21 | 0.903 | 5.40×10^−5 |
| | sig. | + | = | + | + | + | + | = | + | + | = | − |
| F10 | t | 18.5 | 0.12 | 0.183 | 0.623 | 5.26 | 0.136 | −4.53 | 4.57 | 0.226 | 2.18 | 3.01 |
| | p | 1.25×10^−20 | 0.905 | 2.04×10^−20 | 7.19×10^−40 | 5.83×10^−6 | 3.30×10^−16 | 5.75×10^−5 | 5.07×10^−5 | 1.26×10^−23 | 3.53×10^−2 | 4.60×10^−3 |
| | sig. | + | = | + | + | + | + | − | + | + | + | + |
| F11 | t | 8.58 | −1.22 | 6.03 | 0.359 | 0.155 | 0.140 | 9.94 | 0.132 | 0.153 | −0.247 | −4.97 |
| | p | 2.03×10^−10 | 0.229 | 5.17×10^−7 | 6.47×10^−31 | 5.38×10^−188 | 1.30×10^−16 | 3.99×10^−12 | 8.60×10^−16 | 7.36×10^−18 | 0.806 | 1.45×10^−5 |
| | sig. | + | = | + | + | + | + | + | + | + | = | − |
| F12 | t | 2.09 | 4.64 | 15.3 | 39.8 | 24.3 | 19.5 | 19.3 | 24.4 | 22.9 | 0.740 | 2.23 |
| | p | 4.30×10^−2 | 4.11×10^−5 | 7.81×10^−18 | 1.37×10^−32 | 9.77×10^−25 | 2.25×10^−21 | 3.01×10^−21 | 7.60×10^−25 | 7.80×10^−24 | 0.464 | 3.17×10^−2 |
| | sig. | + | + | + | + | + | + | + | + | + | = | + |
| F13 | t | 13.4 | 4.96 | 10.7 | 47.1 | 6.12 | 8.00 | 5.11 | 5.25 | 7.42 | −3.79 | 0.450 |
| | p | 5.72×10^−16 | 1.53×10^−5 | 5.21×10^−13 | 2.63×10^−35 | 3.85×10^−7 | 1.15×10^−9 | 9.31×10^−6 | 6.12×10^−6 | 6.61×10^−9 | 5.22×10^−4 | 0.655 |
| | sig. | + | + | + | + | + | + | + | + | + | − | = |
| F14 | t | 2.83 | −2.94 | −0.239 | 20.7 | 1.35 | 1.55 | 0.729 | 1.37 | −0.497 | 2.00 | −1.97 |
| | p | 7.33×10^−3 | 5.55×10^−3 | 0.812 | 2.81×10^−22 | 0.186 | 0.130 | 0.470 | 0.179 | 0.622 | 5.31×10^−2 | 5.56×10^−2 |
| | sig. | + | − | = | + | = | = | = | = | = | = | = |
| F15 | t | 12.1 | 5.23 | 7.95 | 4.23 | 23.4 | 21.5 | 10.4 | 18.4 | 18.4 | −1.15 | −2.91 |
| | p | 1.21×10^−14 | 6.46×10^−6 | 1.32×10^−9 | 1.41×10^−4 | 3.45×10^−24 | 7.21×10^−23 | 9.96×10^−13 | 1.64×10^−20 | 1.60×10^−20 | 0.256 | 5.94×10^−3 |
| | sig. | + | + | + | + | + | + | + | + | + | = | − |
| F16 | t | 14.2 | 1.66 | 6.69 | 14.1 | 11.3 | 7.64 | 6.35 | 6.19 | 8.79 | 0.991 | 0.866 |
| | p | 7.96×10^−17 | 0.105 | 6.50×10^−8 | 1.16×10^−16 | 9.38×10^−14 | 3.42×10^−9 | 1.91×10^−7 | 3.09×10^−7 | 1.09×10^−10 | 0.328 | 0.392 |
| | sig. | + | = | + | + | + | + | + | + | + | = | = |
| F17 | t | 8.09 | 1.11 | −2.51 | 9.09 | −1.38 | 1.60 | −2.51 | −3.87×10^−2 | −4.22 | −0.715 | −4.79 |
| | p | 8.62×10^−10 | 0.273 | 1.66×10^−2 | 4.47×10^−11 | 0.177 | 0.117 | 1.63×10^−2 | 0.969 | 1.47×10^−4 | 0.479 | 2.53×10^−5 |
| | sig. | + | = | − | + | = | = | − | = | − | = | − |
| F18 | t | 5.09 | 5.80 | 2.90 | 7.91 | −1.96 | 2.82 | 1.14 | 1.06 | 1.01 | 1.47 | −3.74 |
| | p | 1.0×10^−5 | 1.07×10^−6 | 6.17×10^−3 | 1.52×10^−9 | 5.68×10^−2 | 7.55×10^−3 | 0.262 | 0.298 | 0.318 | 0.150 | 6.08×10^−4 |
| | sig. | + | + | + | + | = | + | = | = | = | = | − |
| F19 | t | 4.26 | 10.2 | 24.3 | 14.4 | 0.636 | 11.8 | 9.18 | 6.14 | 5.65 | 1.08 | 3.55 |
| | p | 1.30×10^−4 | 1.94×10^−12 | 9.98×10^−25 | 6.06×10^−17 | 0.529 | 2.93×10^−14 | 3.45×10^−11 | 3.70×10^−7 | 1.71×10^−6 | 0.287 | 1.05×10^−3 |
| | sig. | + | + | + | + | = | + | + | + | + | = | + |
| F20 | t | 15.2 | 2.99 | 13.2 | 12.9 | 0.503 | −1.17×10^−2 | 6.85×10^−2 | −0.716 | −0.133 | −1.29 | −3.41 |
| | p | 1.02×10^−17 | 4.85×10^−3 | 8.78×10^−16 | 1.69×10^−15 | 0.618 | 0.991 | 0.946 | 0.478 | 0.895 | 0.204 | 1.54×10^−3 |
| | sig. | + | + | + | + | = | = | = | = | = | = | − |
| F21 | t | 10.4 | 1.85 | 2.59 | 8.20 | −2.10 | −0.928 | −2.85 | −1.09 | −3.24 | −0.182 | −3.37 |
| | p | 1.20×10^−12 | 7.15×10^−2 | 1.34×10^−2 | 6.21×10^−10 | 4.21×10^−2 | 0.359 | 6.97×10^−3 | 0.284 | 2.47×10^−3 | 0.856 | 1.76×10^−3 |
| | sig. | + | = | + | + | − | = | − | = | − | = | − |
| F22 | t | 6.78 | −0.398 | 3.54 | 12.8 | −1.49 | −2.86 | −2.00 | −0.832 | −2.37 | 1.82 | −3.01 |
| | p | 4.92×10^−8 | 0.693 | 1.08×10^−3 | 2.16×10^−15 | 0.144 | 6.91×10^−3 | 5.25×10^−2 | 0.411 | 2.29×10^−2 | 7.68×10^−2 | 4.57×10^−3 |
| | sig. | + | = | + | + | = | − | = | = | − | = | − |
| F23 | t | 0.00 | 1.93 | −7.90×10^15 | 22.4 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | p | 0.00 | 6.11×10^−2 | 0.00 | 1.74×10^−23 | 0.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| | sig. | = | = | − | + | = | = | = | = | = | = | = |
| F24 | t | −2.36 | 3.37 | −62.7 | −62.4 | 1.55 | −0.119 | 1.54 | 3.37 | −0.545 | 0.834 | −1.04 |
| | p | 2.36×10^−2 | 1.76×10^−3 | 5.79×10^−40 | 6.82×10^−40 | 0.128 | 0.906 | 0.132 | 1.75×10^−3 | 0.589 | 0.410 | 0.304 |
| | sig. | − | + | − | − | = | = | = | + | = | = | = |
| F25 | t | 3.46 | 5.11 | −30.5 | 21.3 | −1.89 | −2.37 | 1.42 | 0.459 | 3.52 | 0.449 | −7.44 |
| | p | 1.36×10^−3 | 9.49×10^−6 | 2.64×10^−28 | 9.90×10^−23 | 6.66×10^−3 | 2.29×10^−2 | 0.165 | 0.649 | 1.15×10^−3 | 0.656 | 6.37×10^−9 |
| | sig. | + | + | − | + | = | − | = | = | + | = | − |
| F26 | t | 8.78 | 0.00 | 3.22 | 28.0 | 1.02 | 0.00 | 0.00 | 0.00 | 0.00 | −0.603 | 1.56 |
| | p | 1.12×10^−10 | 0.00 | 2.59×10^−3 | 5.59×10^−27 | 0.313 | 0.00 | 0.00 | 0.00 | 0.00 | 0.550 | 0.128 |
| | sig. | + | = | + | + | = | = | = | = | = | = | = |
| F27 | t | 16.1 | 8.35 | −15.2 | 6.30 | 3.41 | 3.64 | 2.88 | 2.10 | 4.44 | 0.537 | −2.08 |
| | p | 1.36×10^−18 | 3.96×10^−10 | 9.33×10^−18 | 2.23×10^−7 | 1.54×10^−3 | 8.14×10^−4 | 6.44×10^−3 | 4.24×10^−2 | 7.58×10^−5 | 0.594 | 4.42×10^−2 |
| | sig. | + | + | − | + | + | + | + | + | + | = | − |
| F28 | t | 6.70 | 3.66 | −1.70 | 16.9 | 1.45 | −0.328 | 0.765 | 1.28 | 1.77 | 1.43 | −0.831 |
| | p | 6.20×10^−8 | 7.55×10^−4 | 9.79×10^−2 | 2.87×10^−19 | 0.154 | 0.745 | 0.449 | 0.209 | 8.40×10^−2 | 0.160 | 0.411 |
| | sig. | + | + | = | + | = | = | = | = | = | = | = |
| F29 | t | 1.03 | 1.45 | 2.27 | 8.23 | −0.186 | 3.47 | −1.40 | −0.410 | −2.29 | 0.769 | −4.05 |
| | p | 0.311 | 0.155 | 2.92×10^−2 | 5.64×10^−10 | 0.853 | 1.31×10^−3 | 0.170 | 0.684 | 2.76×10^−2 | 0.447 | 2.46×10^−4 |
| | sig. | = | = | + | + | = | + | = | = | − | = | − |
| F30 | t | 13.3 | 2.55 | 3.38 | 12.1 | −3.37 | −1.16 | −3.05 | 0.312 | −3.00 | −0.182 | −3.86 |
| | p | 7.91×10^−16 | 1.50×10^−2 | 1.70×10^−3 | 1.44×10^−14 | 1.72×10^−3 | 0.255 | 4.12×10^−3 | 0.757 | 4.71×10^−3 | 0.857 | 4.21×10^−4 |
| | sig. | + | + | + | + | − | = | − | = | − | = | − |
| + | | 26 | 13 | 21 | 29 | 12 | 14 | 8 | 13 | 12 | 3 | 6 |
| = | | 2 | 14 | 4 | 0 | 16 | 13 | 16 | 15 | 12 | 26 | 9 |
| − | | 2 | 3 | 5 | 1 | 2 | 3 | 6 | 2 | 6 | 1 | 15 |
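Each cell in the comparison tables pairs a two-tailed t statistic with its p-value over the 30 independent runs, and the sign row marks a significant difference ('+'/'−') or none ('='). A hedged pure-Python sketch of the pooled two-sample t statistic and a threshold-based verdict; the critical value 2.0 is an illustrative stand-in for the exact two-tailed 5% quantile (≈2.0017 at 58 degrees of freedom), and whether '+' reads as "RUSDE better" depends on argument order and on lower-is-better scoring:

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(x, y):
    """Pooled (equal-variance) two-sample t statistic."""
    nx, ny = len(x), len(y)
    # pooled sample variance
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / nx + 1 / ny))

def verdict(t, t_crit=2.0):
    # '+'/'-' when |t| exceeds the critical value, '=' otherwise;
    # t_crit ~2.0 approximates the two-tailed 5% threshold at df = 58
    if t > t_crit:
        return "+"
    if t < -t_crit:
        return "-"
    return "="

t = t_statistic([1, 2, 3], [2, 3, 4])   # ≈ −1.2247 on this toy data
```

On the toy samples above, `verdict(t)` returns "=", i.e., no significant difference at the chosen threshold.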
Table 5. Comparison between RUSDE and other algorithms using a t-test with double tail for 50-dimensional problems.
| Func. | | KH | LBSA | DGSTLBO | SCA | DErand | jDE | SaDE | rank-jDE | jDE-EIG | SPS-jDE | LSHADE |
| F1 | t | 14.7 | −2.26 | 5.53 | 18.6 | −1.49 | 7.35 | −12.6 | −0.979 | −3.87 | 3.31 | −0.221 |
| | p | 2.77×10^−17 | 2.98×10^−2 | 2.52×10^−6 | 1.07×10^−20 | 0.145 | 8.34×10^−9 | 4.15×10^−15 | 0.334 | 4.15×10^−4 | 2.03×10^−3 | 2.54×10^−23 |
| | sig. | + | − | + | + | = | + | − | = | + | − | |
| F2 | t | 9.42 | −3.00 | 1.04 | 41.2 | 5.37 | −1.75 | −2.25 | −2.99 | 4.86 | 3.16 | 0.928 |
| | p | 1.75×10^−11 | 4.72×10^−3 | 0.305 | 3.83×10^−33 | 4.16×10^−6 | 8.86×10^−2 | 3.05×10^−2 | 4.83×10^−3 | 2.07×10^−5 | 3.05×10^−3 | 0.359 |
| | sig. | + | − | = | + | + | = | − | − | + | + | = |
| F3 | t | 29.6 | −1.75 | 1.41 | 28.3 | 8.32 | −1.49 | −1.54 | −1.84 | 3.37 | 0.810 | 3.23 |
| | p | 7.13×10^−28 | 8.86×10^−2 | 0.167 | 3.71×10^−27 | 4.31×10^−10 | 0.144 | 0.132 | 7.31×10^−2 | 1.74×10^−3 | 0.423 | 2.54×10^−3 |
| | sig. | + | = | = | + | + | = | = | = | + | = | + |
| F4 | t | 0.389 | −0.561 | 8.05 | 20.1 | −0.145 | 0.974 | −0.124 | −4.18 | −2.35 | 1.56 | −1.07 |
| | p | 0.70 | 0.578 | 9.78×10^−10 | 7.55×10^−22 | 0.885 | 0.336 | 0.902 | 1.64×10^−4 | 2.42×10^−2 | 0.127 | 0.292 |
| | sig. | = | = | + | + | = | = | = | − | − | = | = |
| F5 | t | −14.5 | −3.77 | 14.4 | 14.8 | 14.3 | 5.77 | 5.57 | 5.23 | 7.23 | −1.14 | −2.07 |
| | p | 4.93×10^−17 | 5.63×10^−4 | 6.15×10^−17 | 2.54×10^−17 | 7.63×10^−17 | 1.17×10^−6 | 2.25×10^−6 | 6.42×10^−6 | 1.19×10^−8 | 0.261 | 4.53×10^−2 |
| | sig. | − | − | + | + | + | + | + | + | + | = | − |
| F6 | t | 46.2 | 22.0 | 41.8 | 67.6 | 2.13 | −1.00 | 3.31 | 1.79 | 2.14 | −2.61 | 47.8 |
| | p | 5.15×10^−35 | 3.11×10^−23 | 2.25×10^−33 | 3.28×10^−41 | 3.96×10^−2 | 0.322 | 2.04×10^−3 | 8.19×10^−2 | 3.90×10^−2 | 1.29×10^−2 | 1.48×10^−35 |
| | sig. | + | + | + | + | + | = | + | = | + | − | + |
| F7 | t | 17.0 | 4.66 | 2.52 | 38.0 | 2.78 | 1.00 | 3.00 | 1.41 | 4.64 | −0.750 | 4.51 |
| | p | 2.20×10^−19 | 3.87×10^−5 | 1.62×10^−2 | 7.65×10^−32 | 8.46×10^−3 | 0.324 | 4.77×10^−3 | 0.166 | 4.04×10^−5 | 0.458 | 6.08×10^−5 |
| | sig. | + | + | + | + | + | = | + | = | + | = | + |
| F8 | t | 31.1 | −8.05 | 28.7 | 80.2 | 19.0 | 22.3 | −5.16 | 3.48 | 37.3 | 10.5 | 19.1 |
| | p | 1.28×10^−28 | 9.68×10^−10 | 2.35×10^−27 | 5.23×10^−44 | 5.34×10^−21 | 1.93×10^−23 | 8.14×10^−6 | 1.28×10^−3 | 1.54×10^−31 | 9.27×10^−13 | 4.47×10^−21 |
| | sig. | + | − | + | + | + | + | − | + | + | + | + |
| F9 | t | 17.3 | 7.70 | 22.3 | 61.8 | 4.88 | 37.8 | 12.6 | 8.81 | 42.6 | 1.49 | −0.729 |
| | p | 1.23×10^−19 | 2.83×10^−9 | 1.98×10^−23 | 9.61×10^−40 | 1.95×10^−5 | 9.16×10^−32 | 3.70×10^−15 | 1.03×10^−10 | 1.05×10^−33 | 0.144 | 0.471 |
| | sig. | + | + | + | + | + | + | + | + | + | = | = |
| F10 | t | 29.0 | −1.60 | 22.1 | 72.0 | 4.17 | 20.1 | 2.44 | 19.9 | 40.7 | 4.30 | 16.5 |
| | p | 1.68×10^−27 | 0.118 | 2.61×10^−23 | 3.07×10^−42 | 1.72×10^−4 | 7.02×10^−22 | 1.94×10^−2 | 1.10×10^−21 | 5.79×10^−33 | 1.13×10^−4 | 6.19×10^−19 |
| | sig. | + | = | + | + | + | + | + | + | + | + | + |
| F11 | t | 5.89 | −1.15 | 4.65 | 35.3 | 35.0 | 20.5 | 13.9 | 18.6 | 19.3 | −2.39 | −1.07 |
| | p | 8.13×10^−7 | 0.258 | 3.92×10^−5 | 1.17×10^−30 | 1.54×10^−30 | 3.70×10^−22 | 1.88×10^−16 | 1.16×10^−20 | 3.30×10^−21 | 2.20×10^−2 | 0.290 |
| | sig. | + | = | + | + | + | + | + | + | + | − | = |
| F12 | t | 0.954 | 2.21 | 36.0 | 32.4 | 38.3 | 18.3 | 17.3 | 17.8 | 20.4 | −3.29 | 2.45 |
| | p | 0.346 | 3.35×10^−2 | 5.89×10^−31 | 2.64×10^−29 | 5.61×10^−32 | 1.78×10^−20 | 1.30×10^−19 | 5.19×10^−20 | 4.44×10^−22 | 2.15×10^−3 | 1.90×10^−2 |
| | sig. | = | + | + | + | + | + | + | + | + | − | + |
| F13 | t | 14.5 | 5.97 | 10.3 | 64.6 | 5.30 | 5.18 | 3.10 | 3.03 | 4.13 | −5.45 | −4.01 |
| | p | 4.53×10^−17 | 6.30×10^−7 | 1.65×10^−12 | 1.83×10^−40 | 5.16×10^−6 | 7.63×10^−6 | 3.60×10^−3 | 4.33×10^−3 | 1.94×10^−4 | 3.23×10^−6 | 2.77×10^−4 |
| | sig. | + | + | + | + | + | + | + | + | + | − | − |
| F14 | t | 0.610 | −2.35 | 0.671 | 32.4 | 1.22 | 0.832 | −0.929 | 1.93 | −1.29 | −3.07×10^−2 | −1.21 |
| | p | 0.546 | 2.42×10^−2 | 0.506 | 2.85×10^−29 | 0.229 | 0.411 | 0.359 | 6.16×10^−2 | 0.206 | 0.976 | 0.234 |
| | sig. | = | − | = | + | = | = | = | = | = | = | = |
| F15 | t | 20.2 | 11.8 | 4.99 | 8.67 | 32.6 | 24.5 | 17.7 | 21.5 | 22.1 | −1.65 | 0.810 |
| | p | 6.32×10^−22 | 2.66×10^−14 | 1.36×10^−5 | 1.53×10^−10 | 2.25×10^−29 | 7.44×10^−25 | 6.49×10^−20 | 6.73×10^−23 | 2.88×10^−23 | 0.106 | 0.423 |
| | sig. | + | + | + | + | + | + | + | + | + | = | = |
| F16 | t | 14.4 | 2.55 | 9.35 | 18.9 | 16.5 | 11.0 | 9.95 | 10.1 | 11.9 | 1.52 | 2.37 |
| | p | 5.24×10^−17 | 1.51×10^−2 | 2.13×10^−11 | 6.97×10^−21 | 6.51×10^−19 | 2.19×10^−13 | 3.89×10^−12 | 2.41×10^−12 | 2.41×10^−14 | 0.136 | 2.29×10^−2 |
| | sig. | + | + | + | + | + | + | + | + | + | = | + |
| F17 | t | 10.8 | 1.56 | 3.24 | 12.9 | −3.11×10^−2 | 3.40 | −1.03 | 0.917 | −2.10 | 1.77 | −7.34 |
| | p | 3.88×10^−13 | 0.128 | 2.49×10^−3 | 2.03×10^−15 | 0.975 | 1.60×10^−3 | 0.308 | 0.365 | 4.29×10^−2 | 8.49×10^−2 | 8.65×10^−9 |
| | sig. | + | = | + | + | = | + | = | = | − | = | − |
| F18 | t | 8.12 | 3.12 | 6.72 | 18.8 | 2.50 | 1.52 | 1.97 | 3.35 | 1.50 | 2.25 | −1.97 |
| | p | 7.91×10^−10 | 3.46×10^−3 | 5.91×10^−8 | 8.29×10^−21 | 1.70×10^−2 | 0.137 | 5.65×10^−2 | 1.85×10^−3 | 0.143 | 3.06×10^−2 | 5.57×10^−2 |
| | sig. | + | + | + | + | + | = | = | + | = | + | = |
| F19 | t | 5.61 | 3.53 | 8.29 | 41.9 | 4.20 | 4.39 | 7.65 | 5.03 | 4.03 | 2.18 | 8.53 |
| | p | 1.92×10^−6 | 1.09×10^−3 | 4.70×10^−10 | 2.02×10^−33 | 1.56×10^−4 | 8.61×10^−5 | 3.31×10^−9 | 1.22×10^−5 | 2.58×10^−4 | 3.58×10^−2 | 2.34×10^−10 |
| | sig. | + | + | + | + | + | + | + | + | + | + | + |
| F20 | t | 14.2 | 0.849 | 6.96 | 14.2 | 3.56 | −2.24 | −2.80 | −3.31 | −2.45 | 1.80 | 4.26 |
| | p | 7.98×10^−17 | 0.401 | 2.77×10^−8 | 9.79×10^−17 | 1.01×10^−3 | 3.12×10^−2 | 8.05×10^−3 | 2.04×10^−3 | 1.90×10^−2 | 7.94×10^−2 | 1.30×10^−4 |
| | sig. | + | = | + | + | + | − | − | − | − | = | + |
| F21 | t | 8.40 | 2.73 | −3.11 | 9.61 | −0.975 | 1.64 | −2.88 | 0.521 | −3.75 | 1.95 | −5.44 |
| | p | 3.46×10^−10 | 9.63×10^−3 | 3.56×10^−3 | 1.01×10^−11 | 0.336 | 0.109 | 6.53×10^−3 | 0.606 | 5.93×10^−4 | 5.84×10^−2 | 3.32×10^−6 |
| | sig. | + | + | − | + | = | = | − | = | − | = | − |
| F22 | t | 10.9 | −0.900 | 5.49 | 20.2 | 1.05 | −0.763 | −3.65 | −3.90 | −0.802 | 1.14 | −3.94 |
| | p | 2.72×10^−13 | 0.374 | 2.81×10^−6 | 6.94×10^−22 | 0.301 | 0.450 | 7.89×10^−4 | 3.77×10^−4 | 0.428 | 0.260 | 3.37×10^−4 |
| | sig. | + | = | + | + | = | = | − | − | = | = | − |
| F23 | t | 0.00 | 0.00 | −1.40×10^16 | 24.31 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| | p | 0.00 | 0.00 | 0.00 | 9.69×10^−25 | 0.00 | 0.00 | 0.00 | 1.00 | 0.00 | 0.00 | 0.00 |
| | sig. | = | = | − | + | = | = | = | = | = | = | = |
| F24 | t | 0.803 | 10.8 | −1.64×10^2 | −2.27 | 5.94 | −3.21 | 2.33 | 0.486 | 0.262 | −1.37 | 6.90 |
| | p | 0.427 | 4.04×10^−13 | 8.54×10^−56 | 2.92×10^−2 | 6.90×10^−7 | 2.71×10^−3 | 2.53×10^−2 | 0.629 | 0.795 | 0.178 | 3.34×10^−8 |
| | sig. | = | + | − | − | + | − | + | = | = | = | + |
| F25 | t | −20.1 | 16.5 | −20.1 | 9.84 | −2.61 | −0.684 | 4.31 | −1.41 | 0.730 | 0.515 | −7.60 |
| | p | 8.02×10^−22 | 6.24×10^−19 | 7.76×10^−22 | 5.35×10^−12 | 1.29×10^−2 | 0.498 | 1.10×10^−4 | 0.166 | 0.470 | 0.610 | 3.85×10^−9 |
| | sig. | − | + | − | + | − | = | + | = | = | = | − |
| F26 | t | 4.78 | 1.29 | 1.95 | −2.38 | 1.61 | −2.14 | −1.59 | 4.39×10^−3 | 0.967 | −2.15 | −2.86 |
| | p | 2.60×10^−5 | 0.206 | 5.87×10^−2 | 2.23×10^−2 | 0.115 | 3.90×10^−2 | 0.120 | 0.997 | 0.340 | 3.77×10^−2 | 6.79×10^−3 |
| | sig. | + | = | = | − | = | − | = | = | = | − | − |
| F27 | t | 36.6 | 21.1 | −17.5 | 37.1 | 3.43 | −2.41 | 1.60 | 2.98 | 2.72 | −2.61 | −2.10 |
| | p | 3.03×10^−31 | 1.42×10^−22 | 8.18×10^−20 | 1.87×10^−31 | 1.47×10^−3 | 2.10×10^−2 | 0.118 | 4.97×10^−3 | 9.89×10^−3 | 1.30×10^−2 | 4.27×10^−2 |
| | sig. | + | + | − | + | + | − | = | + | + | − | − |
| F28 | t | 9.59 | 5.45 | −95.3 | 24.1 | 2.06 | −1.12 | 3.97 | 1.13 | 3.81 | −0.433 | 0.959 |
| | p | 1.09×10^−11 | 3.24×10^−6 | 7.57×10^−47 | 1.34×10^−24 | 4.60×10^−2 | 0.270 | 3.05×10^−4 | 0.266 | 4.95×10^−4 | 0.667 | 0.344 |
| | sig. | + | + | − | + | + | = | + | = | + | = | = |
| F29 | t | 0.652 | 1.94 | 2.87 | 18.9 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00 | −1.00 |
| | p | 0.518 | 6.04×10^−2 | 6.68×10^−3 | 6.74×10^−21 | 0.324 | 0.324 | 0.324 | 0.324 | 0.324 | 0.324 | 0.324 |
| | sig. | = | = | + | + | = | = | = | = | = | = | = |
| F30 | t | 12.5 | 3.53 | 4.13 | 10.7 | 2.65 | −1.92 | 1.60 | 1.16 | 5.98 | 0.742 | −2.05 |
| | p | 5.14×10^−15 | 1.11×10^−3 | 1.93×10^−4 | 4.91×10^−13 | 1.16×10^−2 | 6.28×10^−2 | 0.118 | 0.252 | 6.06×10^−7 | 0.463 | 4.71×10^−2 |
| | sig. | + | + | + | + | + | = | = | = | + | = | − |
| + | | 22 | 15 | 20 | 28 | 20 | 12 | 14 | 12 | 17 | 6 | 10 |
| = | | 6 | 10 | 4 | 0 | 9 | 14 | 10 | 14 | 8 | 18 | 10 |
| − | | 2 | 5 | 6 | 2 | 1 | 4 | 6 | 4 | 5 | 6 | 10 |
Table 6. The post hoc Duncan's test results for CEC 2014.
| Dim. | KH | LBSA | DGSTLBO | SCA | DErand | jDE | SaDE | rank-jDE | jDE-EIG | SPS-jDE | LSHADE |
| 30D | 0.000 | 0.047 | 0.000 | 0.000 | 0.002 | 0.234 | 0.987 | 0.594 | 0.474 | 0.996 | 0.991 |
| 50D | 0.000 | 0.762 | 0.001 | 0.000 | 0.000 | 0.369 | 1.000 | 0.988 | 0.116 | 1.000 | 1.000 |
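Duncan's multiple range test reported in Table 6 is a post hoc procedure that follows a one-way ANOVA over the algorithms' result samples. Duncan's step itself (studentized-range comparisons of ordered group means) is not shipped by common Python libraries, so the sketch below covers only the preceding stage, the one-way ANOVA F statistic, in pure Python on toy groups rather than the paper's data:

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA; groups is a list of observation lists."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f = one_way_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])  # → 3.0
```

Only when this F test (or its p-value) indicates a group effect does a post hoc procedure like Duncan's test make pairwise statements such as the per-algorithm values in Table 6.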
Table 7. The results of the four-bar mechanism with the six algorithms.
| Parameter | KH | DGSTLBO | SCA | DErand | jDE | RUSDE |
| r1 | 55.42712 | 46.54123 | 23.70608 | 50.861491 | 41.744822 | 50.921529 |
| r2 | 32.90918 | 6.00479 | 56.032425 | 10.760724 | 9.7486632 | 10.555366 |
| r3 | 58.25947 | 21.47507 | 27.71803 | 27.302929 | 22.983601 | 25.963469 |
| r4 | 55.22884 | 52.0851 | 29.26344 | 45.535705 | 38.477082 | 44.544581 |
| rcx | −1.86351 | 39.11445 | 5.270246 | 27.279789 | 25.521236 | 24.896262 |
| rcy | 24.41642 | 20.138 | −29.3624 | 23.886111 | 18.161774 | 19.182391 |
| x0 | 11.77316 | 57.68502 | 2.196234 | −4.925943 | −0.655142 | 1.6991638 |
| y0 | 35.90325 | 11.7483 | 17.080032 | 59.604688 | 55.047776 | 57.587445 |
| θ0 | 6.282858 | 0.516842 | 1.051096 | 3.6943843 | 3.7322061 | 3.658706 |
| θ21 | 5.918298 | 5.492722 | 0.676841 | 1.7660023 | 1.4301979 | 1.6464615 |
| θ22 | 6.017541 | 0.000418 | 2.031381 | 2.470655 | 2.5290628 | 2.4183316 |
| θ23 | 6.10178 | 0.410755 | 2.52128 | 2.9248265 | 2.9902006 | 2.8825657 |
| θ24 | 6.175316 | 0.734077 | 5.97276 | 3.3459071 | 3.4651034 | 3.3720796 |
| θ25 | 6.262379 | 1.08325 | 6.030824 | 3.7737419 | 3.9645854 | 3.8854152 |
| θ26 | 6.281808 | 1.575339 | 0 | 4.2562343 | 4.6160322 | 4.5596146 |
| fobj | 14.8 | 2.30 | 3.34×10^2 | 1.35×10^−2 | 0.421 | 8.71×10^−2 |
| Mean | 43.0 | 37.7 | 5.53×10^2 | 2.39 | 9.01 | 0.996 |
| Std. | 27.7 | 57.3 | 2.28×10^2 | 3.86 | 7.47 | 1.47 |
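The design variables in Table 7 (link lengths r1–r4, coupler-point offsets rcx/rcy, frame pose x0/y0/θ0, and the input angles θ21–θ26 at the precision points) are the usual parameterization of four-bar path synthesis, where the objective is typically the summed squared distance between generated and desired coupler points. The sketch below is a generic planar four-bar position analysis by circle intersection, not necessarily the paper's exact formulation; the role assignment (r1 ground, r2 crank, r3 coupler, r4 rocker) and the branch convention are assumptions:

```python
from math import cos, sin, sqrt, pi, hypot

def fourbar_joints(r1, r2, r3, r4, theta2, branch=+1):
    """Positions of the moving pins of a planar four-bar.
    Assumed roles: r1 ground (pins at the origin and (r1, 0)), r2 crank,
    r3 coupler, r4 rocker; theta2 is the crank angle.
    Returns (A, B): crank pin and coupler/rocker pin, or None if the
    chain cannot close at this crank angle."""
    ax, ay = r2 * cos(theta2), r2 * sin(theta2)      # crank pin A
    dx, dy = r1, 0.0                                 # rocker ground pin D
    d = hypot(dx - ax, dy - ay)
    if d > r3 + r4 or d < abs(r3 - r4) or d == 0:
        return None                                  # no assembly here
    # intersect circles centered at A (radius r3) and D (radius r4)
    a = (d * d + r3 * r3 - r4 * r4) / (2 * d)
    h = sqrt(max(r3 * r3 - a * a, 0.0))
    mx, my = ax + a * (dx - ax) / d, ay + a * (dy - ay) / d
    bx = mx + branch * h * (dy - ay) / d
    by = my - branch * h * (dx - ax) / d
    return (ax, ay), (bx, by)

A, B = fourbar_joints(5.0, 2.0, 3.0, 3.0, pi / 2)
```

The coupler point would then be placed at offsets (rcx, rcy) in the frame of segment AB, and the whole mechanism translated and rotated by (x0, y0, θ0) before comparing against the target path points.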
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Zhang, K.; Yu, Y. An Enhancing Differential Evolution Algorithm with a Rank-Up Selection: RUSDE. Mathematics 2021, 9, 569. https://doi.org/10.3390/math9050569