Article

On the Efficacy of Ensemble of Constraint Handling Techniques in Self-Adaptive Differential Evolution

by Hassan Javed 1,†, Muhammad Asif Jan 1,*,†, Nasser Tairan 2,†, Wali Khan Mashwani 1,†, Rashida Adeeb Khanum 3,†, Muhammad Sulaiman 4,†, Hidayat Ullah Khan 5,† and Habib Shah 2,†
1 Institute of Numerical Sciences, Kohat University of Science & Technology, Kohat 26000, Pakistan
2 College of Computer Science, King Khalid University, Abha 61321, Saudi Arabia
3 Jinnah College for Women, University of Peshawar, Peshawar 25000, Pakistan
4 Department of Mathematics, Abdul Wali Khan University, Mardan 23200, Pakistan
5 Department of Economics, Abbottabad University of Science & Technology, Abbottabad 22010, Pakistan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(7), 635; https://doi.org/10.3390/math7070635
Submission received: 9 June 2019 / Revised: 8 July 2019 / Accepted: 10 July 2019 / Published: 17 July 2019
(This article belongs to the Special Issue Evolutionary Algorithms in Intelligent Systems)

Abstract

Self-adaptive variants of evolutionary algorithms (EAs) tune their parameters on the fly by learning from the search history. Adaptive differential evolution with optional external archive (JADE) and self-adaptive differential evolution (SaDE) are two well-known self-adaptive versions of differential evolution (DE). Both are unconstrained search and optimization algorithms. However, if constraint handling techniques (CHTs) are incorporated into their frameworks, they can be used to solve constrained optimization problems (COPs). In earlier work, an ensemble of constraint handling techniques (ECHT) was probabilistically hybridized with the basic version of DE. The ECHT consists of four different CHTs: superiority of feasible solutions, self-adaptive penalty, the ε-constraint handling technique and stochastic ranking. This paper employs the ECHT in the selection schemes of JADE and SaDE, where offspring compete with their parents for survival to the next generation. As a result, JADE-ECHT and SaDE-ECHT, the constrained variants of JADE and SaDE, are developed. Both algorithms are tested on 24 COPs, and the experimental results are collected and compared according to the CEC’06 algorithms’ evaluation criteria. Their comparison, in terms of feasibility rate (FR) and success rate (SR), shows that SaDE-ECHT surpasses JADE-ECHT in terms of FR, while JADE-ECHT outperforms SaDE-ECHT in terms of SR.

1. Introduction

Evolutionary algorithms (EAs) are nature-inspired, population-based stochastic search and optimization methods that work on the principle of natural evolution. In EAs, population members selected on the basis of a fitness/selection scheme, the so-called parents, undergo perturbation by the genetic operators mutation and crossover to produce offspring. A selection scheme is then adopted to select, with a certain probability, the fittest individuals among the parents and offspring for the next generation. Since 1959, many EAs, such as genetic algorithms (GAs), differential evolution (DE), particle swarm optimization (PSO), the firefly algorithm (FA), the bee algorithm (BA), ant colony optimization (ACO) and evolution strategies (ES), have been designed for unconstrained optimization problems using different genetic operators and selection schemes.
Differential evolution (DE) [1] has proven to be a simple and efficient EA for many optimization problems. A number of DE variants have been developed and are in use for unconstrained/constrained optimization [2,3,4,5,6,7,8,9]. In DE, a random initial population of size NP is generated to cover the search space as widely as possible, and the fittest/best member, i.e., the one with the minimum function value, is identified. DE then invokes one of several mutation strategies, such as DE/rand/1, DE/current-to-best/2, DE/rand/2 or DE/current-to-rand/1, to generate a mutant vector. For example, in DE/rand/1, the difference between two population vectors, scaled by a factor F ∈ [0, 2], is added to a third randomly chosen vector to generate the mutant vector. Afterwards, the components of the mutant and target vectors are mixed with a certain crossover probability Cr ∈ [0, 1] to produce the trial vector. Using the one-to-one spawning selection mechanism, if the objective function value of the trial vector is less than that of the target vector (in the minimization sense), the trial vector replaces the target vector and becomes a parent for the next generation. The three steps of producing the mutant vector, producing the trial vector and comparing the target and trial vectors are repeated until a stopping criterion is met. Also, the fittest/best individual is updated after every generation by comparing the function value of the trial vector, if it is successful in the selection process, with that of the fittest/best individual found so far. For more details of DE and the different mutation strategies used in it, the reader is referred to [10,11].
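As an illustration, the DE building blocks described above (DE/rand/1 mutation, binomial crossover and one-to-one selection) can be sketched as follows; the function names and default parameter values are our choices, not from the original DE paper:

```python
import numpy as np

def de_rand_1(pop, i, F=0.5, rng=np.random):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3), where r1, r2, r3
    are distinct population indices, all different from the target index i."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(candidates, 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def binomial_crossover(target, mutant, Cr=0.9, rng=np.random):
    """Binomial crossover: take each component from the mutant with
    probability Cr; j_rand guarantees at least one mutant component."""
    mask = rng.rand(len(target)) < Cr
    mask[rng.randint(len(target))] = True
    return np.where(mask, mutant, target)

def one_to_one_select(target, trial, f):
    """One-to-one spawning selection (minimization): the trial vector
    replaces the target if it is not worse."""
    return trial if f(trial) <= f(target) else target
```

These three operators form one DE generation when applied to every population member in turn.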
The performance of the original DE algorithm is highly dependent on the mutation strategies and parameter settings [11,12,13,14]. During different evolution stages, different strategies and parameter settings with different global and local search capabilities might be preferred. Huang et al. developed a self-adaptive DE variant, SaDE [10], which automatically adapts the learning strategies and parameter settings during evolution. It probabilistically selects one of four mutation strategies, DE/rand/1, DE/current-to-best/2, DE/rand/2 and DE/current-to-rand/1, for each individual in the current population. J. Zhang and A. C. Sanderson developed another self-adaptive DE version, adaptive differential evolution with optional external archive (JADE) [15]. JADE also automates the parameter settings and employs the mutation strategy DE/current-to-pbest with an optional external archive. The strategy DE/current-to-pbest uses not only the information of the best solution, but also the information of the other good solutions. The external archive keeps a record of inferior solutions, which are then used to maintain diversity among population members and avoid premature convergence.
For recent advances in DE, the reader is referred to [16,17]. EAs suit a variety of applications in the fields of engineering and science [18,19,20,21,22,23,24]. Generally, EAs outperform traditional optimization algorithms on problems that are discontinuous, non-differentiable, multi-modal, noisy or not well-defined. However, EAs are unconstrained optimization techniques; they cannot directly solve COPs with constraints of any kind (e.g., equality, inequality, linear and non-linear). To overcome this problem, CHTs are used with EAs to handle all types of constraints. The last three decades have witnessed many techniques for handling constraints in EAs [20,25]. Michalewicz and Schoenauer [26] categorized them into five classes: preserving feasibility of solutions, adopting penalty functions, separating feasible solutions from infeasible ones, decoding, and hybridizing different techniques. However, according to the no free lunch (NFL) theorem [27], a single CHT cannot outperform all other CHTs on every problem. The same is true for different EAs. Thus, one has to try and combine different CHTs and EAs to design a suitable algorithm that can solve most of the problems. Keeping in mind the NFL theorem and some difficult individual COPs, an ensemble of constraint handling techniques (ECHT) was combined with the basic version of DE in [28,29]. The ECHT consists of four different CHTs: superiority of feasible solutions, self-adaptive penalty, the ε-constraint handling technique and stochastic ranking. SaDE and JADE, being advanced self-adaptive variants, are both unconstrained search and optimization algorithms. Like other EAs, they also need additional CHTs to solve constrained optimization problems (COPs).
In this work, the ECHT is implemented in the selection scheme, where offspring and parents compete for survival to next generation, of JADE and SaDE. As a result, constrained versions of JADE and SaDE, denoted by JADE-ECHT and SaDE-ECHT, are developed. The performance of JADE-ECHT and SaDE-ECHT is tested and compared based on feasibility rate (FR) and success rate (SR) on 24 COPs according to algorithms’ evaluation criteria of CEC’06.
The rest of this paper is organized as follows. The general COP and the ECHT are detailed in Section 2. Section 3 presents the proposed modified algorithms, JADE-ECHT and SaDE-ECHT. Section 4 presents and discusses the experimental results obtained with JADE-ECHT and SaDE-ECHT. Finally, Section 5 gives the concluding remarks of this work.

2. Constrained Optimization Problem and ECHT

This section first describes the constrained optimization problem to be considered in this work. It then illustrates the four CHTs of ECHT.

2.1. Constrained Optimization Problem (COP)

Constraints of various types (e.g., time, physical and geometric) exist in most real-world optimization problems. Such problems can be modelled as COPs. Mathematically, a COP, in the case of minimization, can be formulated as follows [30]:
$$
\begin{aligned}
\text{Minimize } \ & f(\mathbf{x}), \qquad \mathbf{x} = [x_1, x_2, \ldots, x_n]^T \\
\text{subject to } \ & g_i(\mathbf{x}) \le 0, \qquad i = 1, \ldots, l, \\
& h_j(\mathbf{x}) = 0, \qquad j = l+1, \ldots, p, \\
& l_i \le x_i \le u_i, \qquad i = 1, 2, \ldots, n.
\end{aligned} \tag{1}
$$
In problem (1), f(x) is the cost function to be minimized. To maximize the cost function instead, it is multiplied by a negative sign. The n-dimensional vector x is the decision variable vector. There are l inequality and p − l equality constraints. An inequality constraint g_j(x) ≤ 0 becomes active if g_j(x*) = 0, where x* is the global optimum solution, whereas equality constraints h_j(x) = 0 are active by default. Generally, equality constraints are converted into inequality constraints via |h_j(x)| − ϵ ≤ 0, where ϵ is an acceptable tolerance for equality constraints. According to the CEC’06 [30] evaluation criteria, ϵ is set to 0.0001 (we use the same value in this work). l_i and u_i are the lower and upper bounds of component x_i of the vector x; together they define the whole search space S. A solution x ∈ S is feasible if it satisfies all the equality and inequality constraints of problem (1); otherwise, it is infeasible. We denote by F the set of all feasible solutions; normally F ⊆ S. The total constraint violation of an infeasible solution is defined as [28,29]:
$$
v(\mathbf{x}) = \frac{\sum_{i=1}^{p} c_i\, G_i(\mathbf{x})}{\sum_{i=1}^{p} c_i}, \tag{2}
$$

where

$$
G_i(\mathbf{x}) =
\begin{cases}
\max\{g_i(\mathbf{x}),\, 0\}, & i = 1, \ldots, l, \\
\max\{|h_j(\mathbf{x})| - \epsilon,\, 0\}, & j = l+1, \ldots, p,
\end{cases} \tag{3}
$$
and c_i (= 1/G_max,i) denotes a weight parameter, where G_max,i is the maximum violation of constraint i obtained so far. It may be noted that c_i changes during the evolution process. This helps balance the contribution of each constraint to the problem irrespective of their different numerical ranges. The four constraint handling techniques used in this work are detailed as follows.
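The violation measure of Equations (2) and (3) can be sketched as follows; the helper name and the default unit weights are our choices:

```python
import numpy as np

def total_violation(g_vals, h_vals, eps=1e-4, weights=None):
    """Weighted mean constraint violation v(x) of Equation (2):
    G_i = max{g_i(x), 0} for inequalities, max{|h_j(x)| - eps, 0} for
    equalities; weights c_i default to 1 (c_i = 1/G_max,i in the paper)."""
    g = np.maximum(np.asarray(g_vals, dtype=float), 0.0)
    h = np.maximum(np.abs(np.asarray(h_vals, dtype=float)) - eps, 0.0)
    G = np.concatenate([g, h])
    c = np.ones_like(G) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(c * G) / np.sum(c))
```

A solution is feasible exactly when this measure is zero, which is how the CHTs below distinguish feasible from infeasible individuals.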

2.2. Superiority of Feasible Solutions (SF)

As the name suggests, in SF feasible solutions have priority over infeasible ones. SF was first suggested by Deb [31]. In this method, two solutions compete: a parent x_i and an offspring x_j. The parent x_i is considered better than the offspring x_j if any of the following three conditions is met [31]:
  • The parent x_i is feasible and the offspring x_j is infeasible.
  • Both x_i and x_j are feasible, but the parent x_i has a smaller fitness value than the offspring x_j.
  • Both x_i and x_j are infeasible, and the overall constraint violation v(x_i) of the parent is less than the overall constraint violation v(x_j) of the offspring, where v(x_i) and v(x_j) are calculated using Equation (2).
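The three rules above translate directly into a comparison function; this sketch (names ours) takes the cost and violation of each solution and assumes v(x) = 0 denotes feasibility:

```python
def sf_better(f_p, v_p, f_o, v_o):
    """Deb's feasibility rules: does the parent (cost f_p, violation v_p)
    beat the offspring (cost f_o, violation v_o)?"""
    if v_p == 0 and v_o > 0:        # rule 1: feasible beats infeasible
        return True
    if v_p == 0 and v_o == 0:       # rule 2: both feasible -> smaller cost wins
        return f_p < f_o
    if v_p > 0 and v_o > 0:         # rule 3: both infeasible -> smaller v(x) wins
        return v_p < v_o
    return False                    # parent infeasible, offspring feasible
```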

2.3. Self-Adaptive Penalty (SP)

Penalty methods are the most common approaches to handle constraints in the family of CHTs. In these techniques, to penalize an infeasible solution, a penalty term corresponding to its constraint violation is added to its cost value in the minimization sense (subtracted in the maximization sense). SP [28] attempts to help the algorithm search for feasible solutions when few exist, and to find the optimum when there are enough feasible solutions. For this purpose, two penalties are added to the cost of an infeasible solution. This helps in identifying the best infeasible solutions in the current population. The amount of the added penalty depends on the number of feasible solutions in the current population: if there are few feasible solutions in the combined population of parents and offspring, infeasible individuals with higher constraint violation are penalized more heavily; conversely, with many feasible solutions, the infeasible solutions that are fittest in terms of cost are penalized less.
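Since the exact two-penalty formulation is given in [28], the following is only a simplified sketch of the underlying idea (normalised cost plus a violation penalty weighted by the population's infeasibility ratio); it is not the authors' formulation:

```python
import numpy as np

def sp_scores(costs, violations):
    """Simplified self-adaptive penalty sketch: normalise cost and violation,
    then weight the violation penalty by the fraction of infeasible solutions.
    With few feasible solutions the violation term dominates the ranking;
    with many feasible solutions the cost term dominates."""
    f = np.asarray(costs, dtype=float)
    v = np.asarray(violations, dtype=float)
    f_range = np.ptp(f)
    fn = (f - f.min()) / (f_range if f_range > 0 else 1.0)   # cost in [0, 1]
    v_max = v.max()
    vn = v / (v_max if v_max > 0 else 1.0)                   # violation in [0, 1]
    r_f = np.mean(v == 0)                                    # feasibility ratio
    return fn + (1.0 - r_f) * vn                             # smaller = better
```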

2.4. The ε -Constraint (EC) Handling Technique

The ε-constraint (EC) handling technique [32] uses the parameter ε to relax the active constraints. The parameter ε is updated until a fixed generation counter is reached; afterwards, ε is set to 0 to obtain individuals with no constraint violation (for the detailed formulation of this technique, please see [28,32]).
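A minimal sketch of the ε-comparison, with an assumed power-law schedule for ε in the spirit of [32] (the schedule constants are illustrative, not taken from the paper):

```python
def eps_better(f_a, v_a, f_b, v_b, eps):
    """ε-comparison: if both violations are within the current ε level (or
    are equal), compare by cost; otherwise the smaller violation wins."""
    if (v_a <= eps and v_b <= eps) or v_a == v_b:
        return f_a < f_b
    return v_a < v_b

def eps_level(eps0, t, Tc, cp=5):
    """Assumed ε schedule: shrink from eps0 over the generations and fix
    eps = 0 once the control generation Tc is reached."""
    return eps0 * (1.0 - t / Tc) ** cp if t < Tc else 0.0
```

Once ε reaches 0, the comparison reduces to the feasibility rules of Section 2.2.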

2.5. Stochastic Ranking (SR)

SR [33] stochastically balances the overall constraint violation and the fitness function value. A solution is ranked based on its cost value if it is feasible or if a randomly generated number is smaller than a probability factor p_f; otherwise, it is ranked on its constraint violation. The proposed value is p_f = 0.475. Alternatively, instead of this constant value, p_f can be decreased linearly from 0.475 in the initial generation to 0.025 in the last generation.
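The stochastic ranking procedure can be sketched as a probabilistic bubble sort; the implementation details here (sweep count, early exit) are our choices:

```python
import random

def stochastic_rank(costs, violations, pf=0.475):
    """Stochastic ranking [33]: repeated bubble-sort sweeps in which adjacent
    pairs are compared by cost when both are feasible or with probability pf,
    and by constraint violation otherwise. Returns indices, best rank first."""
    n = len(costs)
    idx = list(range(n))
    for _ in range(n):
        swapped = False
        for k in range(n - 1):
            a, b = idx[k], idx[k + 1]
            use_cost = (violations[a] == 0 and violations[b] == 0) \
                or random.random() < pf
            key = costs if use_cost else violations
            if key[a] > key[b]:
                idx[k], idx[k + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```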
In [28], the ECHT is tested with evolutionary programming (EP) and basic DE. In this paper, we hybridize ECHT with the advanced versions of DE, JADE [15] and SaDE [10].

3. JADE-ECHT and SaDE-ECHT

In this section, we first give the algorithmic details of JADE-ECHT, which is then followed by the details of SaDE-ECHT.

3.1. JADE-ECHT

JADE [15] is an updated version of DE. It is also an unconstrained optimization algorithm, so it needs additional CHTs to solve COPs. In this work, we embed the four above-discussed CHTs in the selection scheme of JADE to modify it for solving COPs. The whole procedure of the proposed technique, JADE-ECHT, shown in Figure 1, is as follows.
Step 1:
generate the initial population P; set the generation number t = 1, the initial crossover probability μ_CR = 0.5, the initial mutation factor μ_F = 0.5, the archives of inferior solutions A_i = ∅, and the sets of successful mutation factors and crossover probabilities S_F^i = ∅ and S_CR^i = ∅, respectively, where i = 1, …, 4.
Step 2:
divide the population P into four subpopulations P_i, i = 1, …, 4, each of size PS (the population size to be handled by each CHT). Set the parameters PAR_i, i = 1, …, 4, of PS individuals each with dimension D according to the rules of JADE and the corresponding CHT. Also, calculate F_l^i and CR_l^i, l ∈ {1, …, PS}, where CR_l^i = randn_l^i(μ_CR, 0.1) is drawn from a normal distribution and F_l^i = randc_l^i(μ_F, 0.1) from a Cauchy distribution, as in JADE [15].
Step 3:
compute the cost and the total constraints’ violation for every solution in each subpopulation using Equations (1)–(3).
Step 4:
each parent subpopulation (P_i, i = 1, …, 4) generates an offspring subpopulation (OFF_i, i = 1, …, 4) by applying the mutation and crossover operators, respectively, as follows [15]:
$$
\mathbf{v}_{l,t}^{i} = \mathbf{x}_{l,t}^{i} + F_l^i \left( \mathbf{x}_{pbest,t}^{i} - \mathbf{x}_{l,t}^{i} \right) + F_l^i \left( \mathbf{x}_{r1_l,t}^{i} - \tilde{\mathbf{x}}_{r2_l,t}^{i} \right), \tag{4}
$$
where x_pbest,t is one of the top 100p% best vectors, and x_{r1_l,t}^i ≠ x̃_{r2_l,t}^i ≠ x_{l,t}^i are chosen randomly from the current population P and from the union P ∪ A of the current and archived populations, respectively. The archive A retains the parent individuals that are unsuccessful in the selection scheme.
$$
u_{j,l,t}^{i} =
\begin{cases}
v_{j,l,t}^{i}, & \text{if } rand_j[0,1] < CR_l^i \ \text{or} \ j = j_{rand}, \\
x_{j,l,t}^{i}, & \text{otherwise},
\end{cases} \tag{5}
$$
In Equations (4) and (5), v_{l,t}^i, u_{l,t}^i and x_{l,t}^i are the lth mutant, trial and target vectors of the ith subpopulation in generation t.
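The mutation strategy of Equation (4) can be sketched as follows; the handling of index distinctness is simplified relative to JADE, and the function name and defaults are ours:

```python
import numpy as np

def current_to_pbest_1(pop, archive, i, F, p=0.05, rng=np.random):
    """DE/current-to-pbest/1 with optional external archive, as in JADE:
    v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x~_r2), where x_pbest is drawn
    from the top 100p% vectors, x_r1 from the population and x~_r2 from the
    union of the population and the archive. `pop` is assumed sorted
    best-first; distinctness checks on r1, r2 are omitted for brevity."""
    NP = len(pop)
    top = max(1, int(round(p * NP)))
    pbest = pop[rng.randint(top)]
    r1 = rng.randint(NP)
    union = pop if len(archive) == 0 else np.vstack([pop, archive])
    r2 = rng.randint(len(union))
    return pop[i] + F * (pbest - pop[i]) + F * (pop[r1] - union[r2])
```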
Step 5:
evaluate the cost and the total constraints’ violation for every offspring in each subpopulation using Equations (1)–(3). Every offspring holds the cost and constraints values distinctly.
Step 6:
each parent subpopulation is grouped together with its own offspring and the offspring produced by the remaining three subpopulations corresponding to different CHTs. This way four different groups of populations are generated.
Step 7:
the parent population P for the next generation is selected from the four groups according to the rule of each CHT. Unsuccessful parents are added to the archive A_i. All successful crossover probabilities from {CR_l^i} and mutation factors from {F_l^i} are added to S_CR^i and S_F^i, respectively.
Step 8:
randomly remove solutions from A_i so that |A_i| ≤ PS. Update μ_CR and μ_F adopting the formulations of [28].
Step 9:
If the stopping criteria are not met, go to Step 2; otherwise, stop.

3.2. SaDE-ECHT

SaDE [10] is also an unconstrained optimization algorithm. Like JADE and other EAs, it needs additional mechanisms to solve COPs. In this work, the four CHTs of the ECHT are used in the selection scheme of SaDE for solving COPs. The whole procedure of the proposed SaDE-ECHT, shown in Figure 2, is as follows:
Step 1:
generate the initial population P; initialize the generation counter t = 1, the initial crossover probability μ_CR = 0.5 and the initial mutation factor μ_F = 0.5.
Step 2:
divide the population P into four subpopulations P_i, i = 1, …, 4, each of size PS. Set the parameters PAR_i, i = 1, …, 4, of PS individuals each with dimension D, and generate F in [0, 2] and CR in (0, 1) using a normal distribution according to the rules of SaDE and the corresponding CHT.
Step 3:
compute the cost and the total constraints’ violation for every solution in each subpopulation using Equations (1)–(3).
Step 4:
each parent in each subpopulation produces an offspring using one of the four mutation strategies, DE/rand/1, DE/current-to-best/2, DE/rand/2 and DE/current-to-rand/1 (for details of these strategies, please see [10]), and the binomial crossover operator described in Section 3.1. For the first 20 generations, the strategy selection probabilities are fixed at p_1 = p_2 = p_3 = p_4 = 0.25. Afterwards, roulette wheel selection is adopted to update the respective probability p_i as follows [10]:
$$
p_i = \frac{ns_i}{ns_i + nf_i}, \qquad i = 1, 2, 3, 4, \tag{6}
$$

where ns_i and nf_i are the numbers of trial vectors generated by the ith strategy that succeeded and failed, respectively, in entering the next generation during the learning period.
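The probability update can be sketched as follows; the smoothing constant eps is our addition to avoid division by zero when a strategy has no recorded outcomes, and is not part of SaDE:

```python
def update_strategy_probs(ns, nf, eps=0.01):
    """Success-rate-based strategy probabilities for roulette wheel
    selection: p_i ~ ns_i / (ns_i + nf_i), normalised to sum to 1."""
    rates = [(s + eps) / (s + f + 2.0 * eps) for s, f in zip(ns, nf)]
    total = sum(rates)
    return [r / total for r in rates]
```

Strategies that produced more successful offspring are thus applied more often in later generations.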
Step 5:
evaluate the cost and the total constraints’ violation for every offspring in each subpopulation using Equations (1)–(3).
Step 6:
each parent subpopulation is grouped together with its own offspring and the offspring produced by the remaining three subpopulations corresponding to different CHTs. This way four different groups of populations are generated.
Step 7:
the parent population P for the next generation is selected from the four groups according to the rule of each CHT.
Step 8:
recalculate crossover probability after every five generations according to the mean of recorded C R values.
Step 9:
if the stopping criteria are not met, go to Step 2; otherwise, stop.

4. Experimental Results

The performance of JADE-ECHT and SaDE-ECHT was evaluated on the CEC’06 suite, which contains twenty-four benchmark functions. The PC configuration and parameter settings are given in Table 1 and Table 2.

Results Achieved

In Table 3, Table 4, Table 5 and Table 6, a comparison of both algorithms after 5 × 10^5 FEs is shown. All results are gathered according to the CEC’06 [30] algorithms’ evaluation criteria for problems g01 to g24. The criteria include collecting statistics of the best (minimum), worst (maximum), median, mean and standard deviation of the function error values f(x) − f(x*), where f(x) is the best objective function value obtained by the algorithm after 5 × 10^5 FEs and f(x*) is the known objective function value at the optimal solution. The numbers in parentheses after the objective function values show the number of violated constraints, whereas c gives the numbers of constraints at the median solution violated by more than 0.1, 0.001 and 0.0001, respectively. v̄ shows the mean violation at the median solution, FR is the feasibility rate, defined as the number of feasible runs over the total runs, and SR is the success rate, given by the number of successful runs over the total runs. A run is feasible if the algorithm attains at least one feasible solution within max_FEs. Likewise, a run is successful if the algorithm finds a feasible solution whose function error value is smaller than 0.0001 within max_FEs.
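Under these criteria, FR and SR can be computed from per-run records as follows; the data layout (one tuple per independent run) is our assumption:

```python
def fr_sr(runs, tol=1e-4):
    """Feasibility rate and success rate per the CEC'06 criteria. Each run is
    a tuple (found_feasible, error): the run is feasible if it found at least
    one feasible solution within max_FEs, and successful if, in addition,
    the function error f(x) - f(x*) is below tol."""
    n = len(runs)
    feasible = sum(1 for ok, _ in runs if ok)
    successful = sum(1 for ok, err in runs if ok and err < tol)
    return feasible / n, successful / n
```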
Table 3 compares the experimental results achieved by JADE-ECHT and SaDE-ECHT for problems g01–g06. The table shows that SaDE-ECHT achieved better statistics in terms of best, median, mean and standard deviation values than JADE-ECHT on problems g01 and g03, whereas JADE-ECHT surpasses SaDE-ECHT on problems g02 and g05, except for the best value of g02. It can also be observed from the same table that both algorithms show comparable performance on problems g04 and g06. The table also shows that both algorithms achieved 100% FR on all six problems, as confirmed by the 0s in parentheses after the objective function values and the columns for c and v̄. The SR of SaDE-ECHT on problems g01–g03 is higher than that of JADE-ECHT. JADE-ECHT’s SR is better than SaDE-ECHT’s on problem g05, while both algorithms obtained the same SR of 100% on problems g04 and g06.
Table 4 presents the experimental statistics achieved by JADE-ECHT and SaDE-ECHT for problems g07–g12. The results show that both algorithms obtained comparable statistics for problems g08, g11 and g12. The table also shows the superior performance of SaDE-ECHT in terms of median, mean and standard deviation values over JADE-ECHT on problems g07, g09 and g10, except for the best values on problems g07 and g10, where JADE-ECHT obtained better values. The table also confirms that both algorithms achieved 100% FR on all six problems, as can be seen from the 0s in parentheses after the objective function values and the columns for c and v̄. The SR of JADE-ECHT on problems g07 and g10 is higher than that of SaDE-ECHT. SaDE-ECHT’s SR is better than JADE-ECHT’s on problem g09, while both algorithms obtained the same SR of 100% on problems g08, g11 and g12.
Table 5 demonstrates the experimental results achieved by JADE-ECHT and SaDE-ECHT for problems g13–g18. The results show that both algorithms performed similarly on problem g16. The table also shows the superior performance of SaDE-ECHT in terms of best, median, mean and standard deviation values over JADE-ECHT on problems g13 and g18, except for the standard deviation of g13, while JADE-ECHT performed better than SaDE-ECHT on problems g14, g15 and g17, except for the mean and standard deviation values of problem g14, where SaDE-ECHT obtained better values. The table also confirms that both algorithms achieved 100% FR on all six problems, as can be seen from the 0s in parentheses after the objective function values and the columns for c and v̄. The SR of JADE-ECHT on problems g14, g15 and g17 is higher than that of SaDE-ECHT. SaDE-ECHT’s SR is better than JADE-ECHT’s on problems g13 and g18, while both algorithms obtained the same SR of 100% on problem g16.
Table 6 presents the experimental results achieved by JADE-ECHT and SaDE-ECHT for problems g19–g24. The results show that both algorithms performed similarly on problem g24. The table also shows the superior performance of JADE-ECHT in terms of best, median, mean and standard deviation values over SaDE-ECHT on problems g19, g20, g21 and g23, except for the best value of problem g20 and the standard deviation value of problem g23, while SaDE-ECHT performed better than JADE-ECHT on problem g22. The table also confirms that both algorithms achieved 100% FR on problems g19 and g24, as can be seen from the 0s in parentheses after the objective function values and the columns for c and v̄. Both algorithms are unsuccessful in solving problems g20 and g22. The FR of JADE-ECHT on problem g21 is lower than that of SaDE-ECHT, while the situation is reversed for SR. The FR and SR of JADE-ECHT on problem g23 are higher than those of SaDE-ECHT.
Figure 3 compares the convergence graphs of JADE-ECHT and SaDE-ECHT for problems g01–g06. This figure shows that JADE-ECHT converges faster than SaDE-ECHT on problems g01, g05 and g06, as it uses fewer FEs. On problem g04, SaDE-ECHT converges faster than JADE-ECHT, while on problems g02 and g03 both algorithms converge at the same rate.
Figure 4 compares the constraint violations vs. FEs graphs of JADE-ECHT and SaDE-ECHT for problems g01–g06. This figure shows that both algorithms converge quickly to the feasible region, and the optimal solution(s) thus obtained have zero constraint violations.
Figure 5 compares the convergence graphs of JADE-ECHT and SaDE-ECHT for problems g07–g12. This figure shows that both JADE-ECHT and SaDE-ECHT converge at the same rate for all six problems except g11, where JADE-ECHT converges faster than SaDE-ECHT.
Figure 6 compares the constraint violations vs. FEs graphs of JADE-ECHT and SaDE-ECHT for problems g07–g12. This figure, too, shows that both algorithms converge quickly to the feasible region, and the optimal solution(s) thus obtained have zero constraint violations.
Figure 7 compares the convergence graphs of JADE-ECHT and SaDE-ECHT for problems g13–g18. This figure shows that both JADE-ECHT and SaDE-ECHT converge at the same rate for all six problems except g15, where JADE-ECHT converges faster than SaDE-ECHT.
Figure 8 compares the constraint violations vs. FEs graphs of JADE-ECHT and SaDE-ECHT for problems g13–g18. This figure shows that both algorithms explore the infeasible region for about 1000 iterations and then converge to the feasible region. As a result, the optimal solution(s) thus obtained have zero constraint violations.
Figure 9 compares the convergence graphs of JADE-ECHT and SaDE-ECHT for problems g19–g24. This figure shows that both JADE-ECHT and SaDE-ECHT converge almost at the same rate for all six problems and utilize the maximum function evaluations.
Figure 10 compares the constraint violations vs. FEs graphs of JADE-ECHT and SaDE-ECHT for problems g19–g24. This figure clearly shows that both algorithms failed to obtain any feasible solution for problems g20 and g22, although the maximum function evaluations were used.
Figure 3, Figure 5, Figure 7 and Figure 9 compare the convergence graphs vs. FEs of both algorithms for all problems g01–g24, whereas Figure 4, Figure 6, Figure 8 and Figure 10 show their comparison graphs of the constraint violations vs. FEs.
Overall, it can be concluded from the tabulated results and figures that both algorithms achieved feasible and near-optimal solutions on 22 of the 24 problems, the exceptions being g20 and g22. The tables show that the FR of JADE-ECHT is 100% on 20 of the 24 problems, and that of SaDE-ECHT is 100% on 22 of the 24 problems. The SR of JADE-ECHT on most of the problems is better than that of SaDE-ECHT. On the two problems g20 and g22, the FR and SR of both algorithms are 0%. The dimension of these two problems is higher than that of the other 22 problems, and both have a large number of equality constraints. Our experiments, and the wider literature, indicate that equality constraints are hard to handle.
Table 7 compares the FR and SR of JADE-ECHT and SaDE-ECHT with the other competing algorithms of CEC’2006. It can be seen from the table that JADE-ECHT and SaDE-ECHT achieved better FR and can be placed at the second and fourth positions, respectively. However, they failed to achieve a better SR than the competing algorithms. One reason could be the use of four different CHTs, where the resources (FEs) are distributed based on the success of each individual CHT, while the competing algorithms used just one CHT. The same can also be observed from Table 8 and Table 9, where the median and standard deviation values obtained after 5 × 10^5 FEs by JADE-ECHT and SaDE-ECHT are compared with the other competing algorithms (the values for the competing algorithms are taken from their source papers). Another reason for the low SR can be observed from the figures showing constraint violations vs. FEs: both algorithms converge quickly to the feasible region. As a result, they explore the infeasible region less and consequently suffer from stagnation and premature convergence.

5. Conclusions and Future Work

This paper employed the ECHT in the frameworks of two self-adaptive variants of DE, JADE and SaDE. Thus, constrained versions of the two algorithms, denoted JADE-ECHT and SaDE-ECHT, were developed. The proposed algorithms were tested and compared on the CEC’06 benchmark test suite. The experimental results show that the SR of JADE-ECHT on most of the tested problems is better than that of SaDE-ECHT, while SaDE-ECHT surpasses JADE-ECHT in terms of FR. Both algorithms, like other algorithms in the literature, failed to solve problems g20 and g22 due to the hard nature of these problems. In the future, we intend to design an ECHT of other CHTs, embed it in DE and swarm-based algorithms to develop constrained evolutionary algorithms, and test the newly developed algorithms on real-world and engineering optimization problems. In addition, we plan to apply the proposed work to multipath routing protocols [34] and video streaming systems [35], to combine the benefits of those approaches with those of the proposed algorithms.

Author Contributions

Conceptualization, H.J. and M.A.J.; methodology, H.K., M.A.J. and W.K.M.; software, R.A.K., N.T. and H.S.; validation, H.U.K. and M.S.; formal analysis, H.J., M.A.J., R.A.K. and H.S.; investigation, H.J., M.A.J. and M.S.; resources, N.T. and H.S.; writing—original draft preparation, H.J. and M.A.J.; writing—review and editing, H.U.K. and M.S.; supervision, M.A.J. and W.K.M.; project administration, N.T. and H.S.; funding acquisition, N.T. and H.S.

Funding

The authors would like to thank King Khalid University of Saudi Arabia for supporting this research under the grant number R.G.P.2/7/38.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Li, Z.; Shang, Z.; Liang, J.J.; Niu, B. An improved differential evolution for constrained optimization with dynamic constraint-handling mechanism. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation (CEC), Brisbane, Australia, 10–15 June 2012; pp. 1–6. [Google Scholar]
  3. Elsayed, S.M.; Sarker, R.A.; Essam, D.L. An improved self-adaptive differential evolution algorithm for optimization problems. IEEE Trans. Ind. Inform. 2013, 9, 89–99. [Google Scholar] [CrossRef]
  4. Li, G.; Lin, Q.; Cui, L.; Du, Z.; Liang, Z.; Chen, J.; Lu, N.; Ming, Z. A novel hybrid differential evolution algorithm with modified CoDE and JADE. Appl. Soft Comput. 2016, 47, 577–599. [Google Scholar] [CrossRef]
  5. Ali, M.; Kajee-Bagdadi, Z. A local exploration-based differential evolution algorithm for constrained global optimization. Appl. Math. Comput. 2009, 208, 31–48. [Google Scholar] [CrossRef]
  6. Ameca-Alducin, M.Y.; Mezura-Montes, E.; Cruz-Ramírez, N. Dynamic differential evolution with combined variants and a repair method to solve dynamic constrained optimization problems: an empirical study. Soft Comput. 2018, 22, 541–570. [Google Scholar] [CrossRef]
  7. Shah, T.; Jan, M.; Mashwani, W.K.; Wazir, H. Adaptive Differential Evolution for Constrained Optimization Problems. Sci. Int. 2016, 28, 2313–2320. [Google Scholar]
  8. Wazir, H.; Jan, M.; Mashwani, W.; Shah, T. A penalty function based differential evolution algorithm for constrained optimization. Nucleus 2016, 53, 155–166. [Google Scholar]
  9. Jan, M.A.; Khanum, R.A.; Tairan, N.M.; Mashwani, W.K. Performance of a Constrained Version of MOEA/D on CTP-series Test Instances. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 496–505. [Google Scholar]
  10. Brest, J.; Zumer, V.; Maucec, M.S. Self-adaptive differential evolution algorithm in constrained real-parameter optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 215–222. [Google Scholar]
  11. Caraffini, F.; Kononova, A.V.; Corne, D. Infeasibility and structural bias in Differential Evolution. arXiv 2019, arXiv:1901.06153. [Google Scholar] [CrossRef]
  12. Caraffini, F.; Kononova, A.V. Structural Bias in Differential Evolution: a preliminary study. AIP Conf. Proc. 2019, 2070, 020005. [Google Scholar]
  13. Yaman, A.; Iacca, G.; Caraffini, F. A comparison of three differential evolution strategies in terms of early convergence with different population sizes. AIP Conf. Proc. 2019, 2070, 020002. [Google Scholar]
  14. Iacca, G.; Neri, F.; Caraffini, F.; Suganthan, P.N. A differential evolution framework with ensemble of parameters and strategies and pool of local search algorithms. In Proceedings of the European Conference on the Applications of Evolutionary Computation, Granada, Spain, 23–25 April 2014; pp. 615–626. [Google Scholar]
  15. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958. [Google Scholar] [CrossRef]
  16. Caraffini, F.; Neri, F. A study on rotation invariance in differential evolution. Swarm Evol. Comput. 2018. [Google Scholar] [CrossRef]
  17. Caraffini, F.; Neri, F. Rotation invariance and rotated problems: An experimental study on differential evolution. In Proceedings of the International Conference on the Applications of Evolutionary Computation, Parma, Italy, 4–6 April 2018; pp. 597–614. [Google Scholar]
  18. Liu, C.; Wang, G.; Xie, Q.; Zhang, Y. Vibration sensor-based bearing fault diagnosis using ellipsoid-ARTMAP and differential evolution algorithms. Sensors 2014, 14, 10598–10618. [Google Scholar] [CrossRef] [PubMed]
  19. Datta, R.; Deb, K.; Kim, J.H. CHIP: Constraint Handling with Individual Penalty approach using a hybrid evolutionary algorithm. Neural Comput. Appl. 2018, 1–17. [Google Scholar] [CrossRef]
  20. Shakibayifar, M.; Hassannayebi, E.; Mirzahossein, H.; Taghikhah, F.; Jafarpur, A. An intelligent simulation platform for train traffic control under disturbance. Int. J. Model. Simul. 2019, 39, 135–156. [Google Scholar] [CrossRef]
  21. Cheraitia, M.; Haddadi, S.; Salhi, A. Hybridizing plant propagation and local search for uncapacitated exam scheduling problems. Int. J. Serv. Oper. Manag. 2017. [Google Scholar] [CrossRef]
  22. Wang, G.G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2017, 49, 542–555. [Google Scholar] [CrossRef]
  23. Fister, I.; Fister, I., Jr. Adaptation and Hybridization in Computational Intelligence; Springer: Berlin, Germany, 2015; Volume 18. [Google Scholar]
  24. Wang, H.; Yi, J.H. An improved optimization method based on krill herd and artificial bee colony with information exchange. Memet. Comput. 2018, 10, 177–198. [Google Scholar] [CrossRef]
  25. Coit, D.W.; Smith, A.E.; Tate, D.M. Adaptive penalty methods for genetic optimization of constrained combinatorial problems. INFORMS J. Comput. 1996, 8, 173–182. [Google Scholar] [CrossRef]
  26. Michalewicz, Z.; Schoenauer, M. Evolutionary algorithms for constrained parameter optimization problems. Evol. Comput. 1996, 4, 1–32. [Google Scholar] [CrossRef]
  27. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  28. Mallipeddi, R.; Suganthan, P.N. Ensemble of constraint handling techniques. IEEE Trans. Evol. Comput. 2010, 14, 561–579. [Google Scholar] [CrossRef]
  29. Mallipeddi, R.; Suganthan, P.N. Differential evolution with ensemble of constraint handling techniques for solving CEC 2010 benchmark problems. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar]
  30. Liang, J.; Runarsson, T.P.; Mezura-Montes, E.; Clerc, M.; Suganthan, P.N.; Coello, C.C.; Deb, K. Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. J. Appl. Mech. 2006, 41, 8–31. [Google Scholar]
  31. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  32. Takahama, T.; Sakai, S. Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation. In Proceedings of the 2010 IEEE Congress on Evolutionary Computation (CEC), Barcelona, Spain, 18–23 July 2010; pp. 1–9. [Google Scholar]
  33. Runarsson, T.P.; Yao, X. Stochastic ranking for constrained evolutionary optimization. IEEE Trans. Evol. Comput. 2000, 4, 284–294. [Google Scholar] [CrossRef]
  34. Iqbal, Z.; Khan, S.; Mehmood, A.; Lloret, J.; Alrajeh, N.A. Adaptive cross-layer multipath routing protocol for mobile ad hoc networks. J. Sens. 2016, 2016, 5486437. [Google Scholar] [CrossRef]
  35. Taha, M.; Garcia, L.; Jimenez, J.M.; Lloret, J. SDN-based throughput allocation in wireless networks for heterogeneous adaptive video streaming applications. In Proceedings of the 2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), Valencia, Spain, 26–30 June 2017; pp. 963–968. [Google Scholar]
Figure 1. Flowchart of adaptive differential evolution with optional external archive (JADE)-ensemble of constraint handling techniques (ECHT).
Figure 2. Flowchart of SaDE-ECHT.
Figure 3. Convergence comparison of JADE-ECHT and SaDE-ECHT for g01–g06.
Figure 4. Constraint violation comparison of JADE-ECHT and self-adaptive differential evolution (SaDE)-ECHT for g01–g06.
Figure 5. Convergence comparison of JADE-ECHT and SaDE-ECHT for g07–g12.
Figure 6. Constraint violation comparison of JADE-ECHT and SaDE-ECHT for g07–g12.
Figure 7. Convergence comparison of JADE-ECHT and SaDE-ECHT for g13–g18.
Figure 8. Constraint violation comparison of JADE-ECHT and SaDE-ECHT for g13–g18.
Figure 9. Convergence comparison of JADE-ECHT and SaDE-ECHT for g19–g24.
Figure 10. Constraint violation comparison of JADE-ECHT and SaDE-ECHT for g19–g24.
Table 1. Configuration of the PC.
System: Windows 8
CPU: 3.00 GHz
RAM: 2 GB
Language: MATLAB 2012 (8.0.0.783)
Table 2. Parameters’ settings.
Parameters’ Description | Parameters’ Settings
Population size for each CHT | PS = 25
Whole population size | NP = 4 × PS = 100
Maximum number of generations | t = 2500
Total number of runs | runs = 25
Initial value of mutation factor | μF = 0.5
Initial value of crossover probability | μCR = 0.5
Termination criterion (maximum function evaluations) | max_FEs = 500,000
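The initial values μF = μCR = 0.5 in Table 2 are not fixed: JADE adapts them each generation from the crossover probabilities and mutation factors that produced successful offspring, using an arithmetic mean for μCR and a Lehmer mean for μF [15]. A minimal sketch of that update step, assuming the standard learning rate c = 0.1 from the JADE paper (function names are ours):

```python
def lehmer_mean(xs):
    """Lehmer mean sum(x^2)/sum(x), which biases mu_F toward the larger,
    more explorative F values among the successful ones."""
    return sum(x * x for x in xs) / sum(xs)

def update_means(mu_cr, mu_f, s_cr, s_f, c=0.1):
    """One generation of JADE's self-adaptation: blend the old means with
    the successful CR values (arithmetic mean) and successful F values
    (Lehmer mean). Empty success sets leave the means unchanged."""
    if s_cr:
        mu_cr = (1.0 - c) * mu_cr + c * sum(s_cr) / len(s_cr)
    if s_f:
        mu_f = (1.0 - c) * mu_f + c * lehmer_mean(s_f)
    return mu_cr, mu_f
```

Individual CR_i and F_i values are then sampled around these means (normal and Cauchy distributions, respectively, in JADE), so the search gradually learns parameter settings that suit the problem at hand.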
Table 3. Comparison of adaptive differential evolution with optional external archive (JADE)-ensemble of constraint handling techniques (ECHT) and self-adaptive differential evolution (SaDE)-ECHT after FEs = 500,000 for g01–g06. The bold numbers indicate the better results.
Prob | Algorithm | Best | Median | Worst | c | v̄ | Mean | Std | FR | SR
g01 | JADE-ECHT | 0(0) | 0(0) | 2.0000(0) | 0, 0, 0 | 0 | 0.0800 | 0.4000 | 100% | 96%
g01 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g02 | JADE-ECHT | 0.0001(0) | 0.0004(0) | 0.0276(0) | 0, 0, 0 | 0 | 0.0064 | 0.0091 | 100% | 16%
g02 | SaDE-ECHT | 0(0) | 0.0110(0) | 0.1263(0) | 0, 0, 0 | 0 | 0.0191 | 0.0254 | 100% | 24%
g03 | JADE-ECHT | 0.0250(0) | 0.1015(0) | 0.4245(0) | 0, 0, 0 | 0 | 0.1385 | 0.1036 | 100% | 0%
g03 | SaDE-ECHT | 0(0) | 0.0122(0) | 0.1524(0) | 0, 0, 0 | 0 | 0.0243 | 0.0343 | 100% | 12%
g04 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g04 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g05 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g05 | SaDE-ECHT | 0(0) | 91.4773(0) | 515.4900(0) | 0, 0, 0 | 0 | 110.1546 | 101.2496 | 100% | 4%
g06 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g06 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
Table 4. Comparison of JADE-ECHT and SaDE-ECHT after FEs = 500,000 for g07–g12. The bold numbers indicate the better results.
Prob | Algorithm | Best | Median | Worst | c | v̄ | Mean | Std | FR | SR
g07 | JADE-ECHT | 0(0) | 0.0879(0) | 0.2651(0) | 0, 0, 0 | 0 | 0.0976 | 0.0726 | 100% | 4%
g07 | SaDE-ECHT | 0.0001(0) | 0.0114(0) | 0.3230(0) | 0, 0, 0 | 0 | 0.0518 | 0.0850 | 100% | 0%
g08 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g08 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g09 | JADE-ECHT | 0(0) | 0.0039(0) | 0.0714(0) | 0, 0, 0 | 0 | 0.0132 | 0.0192 | 100% | 20%
g09 | SaDE-ECHT | 0(0) | 0(0) | 0.0006(0) | 0, 0, 0 | 0 | 0.0001 | 0.0002 | 100% | 76%
g10 | JADE-ECHT | 0(0) | 133.9677(0) | 343.5425(0) | 0, 0, 0 | 0 | 143.0809 | 105.4501 | 100% | 4%
g10 | SaDE-ECHT | 0.0012(0) | 0.1709(0) | 11.9004(0) | 0, 0, 0 | 0 | 1.1748 | 3.0352 | 100% | 0%
g11 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g11 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g12 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g12 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
Table 5. Comparison of JADE-ECHT and SaDE-ECHT after FEs = 500,000 for g13–g18. The bold numbers indicate the better results.
Prob | Algorithm | Best | Median | Worst | c | v̄ | Mean | Std | FR | SR
g13 | JADE-ECHT | 0.3849(0) | 0.9118(0) | 0.9459(0) | 0, 0, 0 | 0 | 0.8275 | 0.1750 | 100% | 0%
g13 | SaDE-ECHT | 0(0) | 0.3870(0) | 0.8491(0) | 0, 0, 0 | 0 | 0.3608 | 0.2828 | 100% | 4%
g14 | JADE-ECHT | 0(0) | 0.0174(0) | 5.5402(0) | 0, 0, 0 | 0 | 1.9415 | 2.2940 | 100% | 40%
g14 | SaDE-ECHT | 0.4527(0) | 1.6397(0) | 3.3912(0) | 0, 0, 0 | 0 | 1.7600 | 0.6956 | 100% | 0%
g15 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g15 | SaDE-ECHT | 0(0) | 0.0009(0) | 2.5449(0) | 0, 0, 0 | 0 | 0.3333 | 0.6971 | 100% | 44%
g16 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g16 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g17 | JADE-ECHT | 0(0) | 0(0) | 74.0580(0) | 0, 0, 0 | 0 | 8.8870 | 24.5623 | 100% | 88%
g17 | SaDE-ECHT | 7.9251(0) | 91.2351(0) | 297.1687(0) | 0, 0, 0 | 0 | 92.9967 | 50.4589 | 100% | 0%
g18 | JADE-ECHT | 0(0) | 0.0001(0) | 0.0206(0) | 0, 0, 0 | 0 | 0.0011 | 0.0041 | 100% | 52%
g18 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
Table 6. Comparison of JADE-ECHT and SaDE-ECHT after FEs = 500,000 for g19–g24. The bold numbers indicate the better results.
Prob | Algorithm | Best | Median | Worst | c | v̄ | Mean | Std | FR | SR
g19 | JADE-ECHT | 0(0) | 1.4028(0) | 3.6498(0) | 0, 0, 0 | 0 | 1.5502 | 1.0136 | 100% | 12%
g19 | SaDE-ECHT | 0.3671(0) | 1.7022(0) | 6.6604(0) | 0, 0, 0 | 0 | 2.3120 | 1.9699 | 100% | 0%
g20 | JADE-ECHT | 3.2029(9) | 6.2057(8) | 15.4062(12) | 1, 1, 2 | 1.1209 | 7.2582 | 3.5087 | 0% | 0%
g20 | SaDE-ECHT | 2.4461(11) | 14.8045(9) | 18.3511(11) | 2, 4, 4 | 3.1946 | 13.1617 | 4.8304 | 0% | 0%
g21 | JADE-ECHT | 0(0) | 0.0633(0) | 263.7866(1) | 0, 0, 0 | 0 | 39.1073 | 63.9006 | 96% | 44%
g21 | SaDE-ECHT | 0(0) | 77.3185(0) | 110.2441(0) | 0, 0, 0 | 0 | 71.8631 | 25.3368 | 100% | 4%
g22 | JADE-ECHT | 390.4334(4) | 10,565.5111(3) | 19,715.2233(4) | 3, 3, 3 | 175,401.6096 | 10,557.6213 | 6162.3243 | 0% | 0%
g22 | SaDE-ECHT | 292.6511(3) | 8834.7836(3) | 19,258.8965(3) | 3, 3, 3 | 90,196.1317 | 9289.3437 | 4998.2886 | 0% | 0%
g23 | JADE-ECHT | 0(0) | 8.5726(0) | 601.1293(0) | 0, 0, 0 | 0 | 117.5730 | 198.2664 | 36% | 36%
g23 | SaDE-ECHT | 182.7482(0) | 357.7081(0) | 518.9083(0) | 0, 0, 0 | 0 | 344.3397 | 87.4764 | 0% | 0%
g24 | JADE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
g24 | SaDE-ECHT | 0(0) | 0(0) | 0(0) | 0, 0, 0 | 0 | 0 | 0 | 100% | 100%
Table 7. Comparison of JADE-ECHT and SaDE-ECHT in terms of feasibility rate (FR) and success rate (SR) with algorithms of CEC 2006.
Algorithm | FR | SR
DE | 95.65% | 78.09%
DMS-PSO | 100% | 90.61%
ϵDE | 100% | 95.65%
GDE | 92.00% | 77.39%
jDE-2 | 95.65% | 80.00%
MDE | 95.65% | 87.65%
MPDE | 94.96% | 87.65%
PCX | 95.65% | 94.09%
PESO+ | 95.48% | 67.83%
SaDE | 100% | 87.13%
JADE-ECHT | 95.30% | 57.04%
SaDE-ECHT | 95.65% | 46.43%
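The FR and SR columns above follow the CEC'06 evaluation criteria [30]: FR is the fraction of the 25 runs in which at least one feasible solution was found, and SR is the fraction of runs whose best feasible solution reached f(x) − f(x*) ≤ 10⁻⁴ within the allowed function evaluations. A small illustrative computation (the function name and the per-run record format are ours):

```python
def feasibility_and_success_rates(runs, tol=1e-4):
    """Compute (FR, SR) as fractions from per-run records.

    runs: list of (best_feasible_error, found_feasible) pairs, one per
    independent run; best_feasible_error is f(x) - f(x*) for the best
    feasible solution of that run (None if no feasible point was found)."""
    n = len(runs)
    feasible = sum(1 for err, ok in runs if ok)
    success = sum(1 for err, ok in runs if ok and err is not None and err <= tol)
    return feasible / n, success / n
```

For example, four runs with errors 0.0, 2.0, no feasible point, and 5 × 10⁻⁵ would give FR = 75% and SR = 50%, which is how a table row can show a high FR alongside a low SR.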
Table 8. Comparison of median values of JADE-ECHT, SaDE-ECHT and CEC 2006 algorithms achieved after 500,000 FEs. The bold numbers indicate the better results.
Prob | DE | DMS-PSO | ϵDE | GDE | jDE-2 | MDE | MPDE | PCX | PESO+ | SaDE | JADE-ECHT | SaDE-ECHT
g010(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)
g025.1700 × 10 8 (0)0(0)3.0933 × 10 8 (0)2.3251 × 10 7 (0)3.3051 × 10 9 (0)0.017460(0)3.5608 × 10 6 0(0)1.4314 × 10 6 (0)3.0800 × 10 9 (0)0.0004(0)0.0110(0)
g036.7110 × 10 1 (0)0(0)−4.4409 × 10 16 (0)9.3634 × 10 1 (0)0.3481(0)0(0)−2.8866 × 10 15 0(0)1.5890 × 10 7 (0)1.7770 × 10 8 (0)0.1015(0)0.0122(0)
g047.6398 × 10 11 (0)0(0)0(0)8.0036 × 10 11 (0)0(0)0(0)3.6380 × 10 12 0(0)1.0000 × 10 10 (0)2.1667 × 10 7 (0)0(0)0(0)
g05−9.0949 × 10 13 (0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)91.4773(0)
g064.5475 × 10 11 0(0)1.1823 × 10 11 (0)6.1846 × 10 11 (0)1.1823 × 10 11 (0)0(0)1.0914 × 10 11 0(0)1.0000 × 10 10 (0)4.5475 × 10 11 (0)0(0)0(0)
g077.9783 × 10 11 (0)0(0)−1.8474 × 10 13 (0)3.6402 × 10 10 (0)−1.8829 × 10 13 (0)0(0)−1.8474 × 10 13 0(0)9.4367 × 10 6 (0)1.4608 × 10 7 (0)0.0879(0)0.0114(0)
g088.1964 × 10 11 (0)0(0)4.1633 × 10 17 (0)8.1964 × 10 11 (0)4.1633 × 10 17 (0)0(0)4.1633 × 10 17 0(0)1.0000 × 10 10 (0)8.1964 × 10 11 (0)0(0)0(0)
g09−9.8112 × 10 11 (0)0(0)0(0)−9.7884 × 10 11 (0)2.2737 × 10 13 (0)0(0)1.1369 × 10 13 0(0)1.0000 × 10 10 (0)3.7440 × 10 7 (0)0.0039(0)0(0)
g106.2755 × 10 11 (0)1.0124 × 10 8 (0)−9.0949 × 10 13 (0)6.9122 × 10 11 (0)−9.0949 × 10 13 (0)0(0)−9.0949 × 10 13 0(0)1.3432 × 10 3 (0)1.8120 × 10 6 (0)133.9677(0)0.1709(0)
g110(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)
g120(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)0(0)
g133.8486 × 10 1 (0)0(0)9.7145 × 10 17 (0)3.8486 × 10 1 (0)0.6800(0)0(0)3.8486 × 10 1 0(0)1.1200 × 10 8 (0)4.1898 × 10 11 (0)0.9118(0)0.3870(0)
g148.5123 × 10 12 (0)0(0)2.1316 × 10 14 (0)6.3148 × 10 8 (0)2.1316 × 10 14 (0)0(0)−3.9961 × 10 4 0(0)3.2912 × 10 3 (0)1.4793 × 10 5 (0)0.0174(0)1.6397(0)
g156.0822 × 10 11 (0)0(0)0(0)6.0936 × 10 11 (0)0(0)0(0)0(0)0(0)1.0000 × 10 10 (0)6.0822 × 10 11 (0)0(0)0.0009(0)
g166.5214 × 10 11 (0)0(0)4.4409 × 10 15 (0)6.5216 × 10 11 (0)5.1070 × 10 15 (0)0(0)5.3291 × 10 15 0(0)1.0000 × 10 10 (0)6.5214 × 10 11 (0)0(0)0(0)
g177.4058 × 10 1 (0)7.4058 × 10 1 (0)1.8190 × 10 12 (0)7.4052 × 10 1 (0)10.4896(0)0(0)7.4058 × 10 1 0(0)13.9638(0)7.4058 × 10 1 (0)0(0)91.2351(0)
g181.5561 × 10 11 (0)0(0)3.3307 × 10 16 (0)4.6362 × 10 11 (0)4.4408 × 10 16 (0)0(0)4.4409 × 10 16 0(0)4.0000 × 10 10 (0)1.5561 × 10 11 (0)0.0001(0)0(0)
g194.6370 × 10 11 (0)0(0)5.2162 × 10 8 (0)5.2669 × 10 9 (0)4.2632 × 10 14 (0)0.387033(0)3.5527 × 10 14 0(0)2.6302 × 10 2 (0)1.3868 × 10 10 (0)1.4028(0)1.7022(0)
g20−2.4674 × 10 2 (8)−7.3330 × 10 2 (17)−2.4674 × 10 2 (8)1.3503 × 10 1 (20)0.1082(2)0.103314(20)1.0151 × 10 1 0.0675(12)3.2600 × 10 2 (8)2.3757 × 10 1 (20)6.2057(8)14.8045(9)
g21−2.8371 × 10 10 (0)3.5911 × 10 6 (0)−2.8422 × 10 14 (0)6.5523 × 10 8 (0)−2.8421 × 10 14 (0)0(0)1.4211 × 10 13 0(0)81.3460(0)2.5785 × 10 8 (0)0.0633(0)77.3185(0)
g221.0336 × 10 4 (15)1.2200 × 10 2 (0)1.2332 × 10 1 (0)9.7885 × 10 3 (19)8033.6537(8)9210.082460(18)8.7919 × 10 3 9888.6409(15)14,198.8059(19)4.6907 × 10 1 (0)10,565.5111(3)8834.7836(3)
g233.0005 × 10 2 (0)1.0267 × 10 8 (0)0(0)1.0569 × 10 1 (0)2.2737 × 10 13 (0)0(0)0(0)0(0)130.5043(0)3.9790 × 10 13 (0)8.5726(0)357.7081(0)
g244.6736 × 10 12 (0)0(0)5.7732 × 10 14 (0)4.7269 × 10 12 (0)5.5067 × 10 14 (0)0(0)7.1054 × 10 14 0(0)0(0)4.6372 × 10 12 (0)0(0)0(0)
Table 9. Comparison of standard deviation values of JADE-ECHT, SaDE-ECHT and CEC 2006 algorithms achieved after 500,000 FEs. The bold numbers indicate the better results.
Prob | DE | DMS-PSO | ϵDE | GDE | jDE-2 | MDE | MPDE | PCX | PESO+ | SaDE | JADE-ECHT | SaDE-ECHT
g016.4146 × 10 15 0000000000.40000
g021.0015 × 10 18 4.6953 × 10 3 1.7523 × 10 8 7.5112 × 10 3 3.0488 × 10 3 02.7802 × 10 3 07.1385 × 10 3 4.9786 × 10 3 0.00910.0254
g035.2098 × 10 14 02.9582 × 10 31 1.9833 × 10 1 1.0140 × 10 1 08.3681 × 10 2 01.5396 × 10 6 3.4743 × 10 5 0.10360.0343
g043.9644 × 10 13 000000001.8550 × 10 12 00
g050001.6854 × 10 2 2.419303.6380 × 10 13 001.8190 × 10 13 0101.2496
g06000000000000
g076.4146 × 10 15 02.1831 × 10 15 3.8948 × 10 7 1.7405 × 10 15 03.0215 × 10 15 02.2678 × 10 5 1.4993 × 10 5 0.07260.0850
g081.0015 × 10 18 01.2326 × 10 32 01.2580 × 10 32 00003.8426 × 10 18 00
g095.2098 × 10 14 004.9555 × 10 14 4.2538 × 10 14 00007.9851 × 10 14 0.01920.0002
g103.9644 × 10 13 8.8141 × 10 9 4.2426 × 10 13 09.1279 × 10 8 03.0165 × 10 13 05.8279 × 10 2 1.5244 × 10 6 105.45013.0352
g1100004.5540 × 10 4 07.0638 × 10 5 00000
g12000000000000
g1301.8204 × 10 6 03.8650 × 10 1 2.2376 × 10 1 02.8719 × 10 1 06.3801 × 10 6 2.7832 × 10 7 0.17500.2828
g143.4809 × 10 15 01.3924 × 10 15 3.8556 × 10 3 3.4809 × 10 15 07.9441 × 10 15 02.8553 × 10 2 6.4986 × 10 5 2.29400.6956
g150009.6094 × 10 1 2.2020 × 10 2 04.3027 × 10 6 00000.6971
g162.2092 × 10 16 01.5777 × 10 30 1.7764 × 10 16 00000000
g173.0234 × 10 1 01.2117 × 10 27 8.2114 × 10 1 3.8319 × 10 1 03.4111 × 10 1 042.2861.6168 × 10 1 24.562350.4589
g184.8817 × 10 17 02.1756 × 10 17 6.4619 × 10 1 3.6822 × 10 17 04.1541 × 10 17 03.8184 × 10 2 5.2898 × 10 2 0.00410
g191.2568 × 10 5 01.2568 × 10 5 4.5735 × 10 5 1.0531 × 10 13 0.8475173.5527 × 10 14 01.6158 × 10 1 7.2976 × 10 10 1.01361.9699
g204.2362 × 10 2 6.9516 × 10 3 4.2362 × 10 2 1.9580 × 10 0 1.1510 × 10 2 0.0216882.8984 × 10 0 0.02194.9951 × 10 3 1.0638 × 10 1 3.50874.8304
g216.5489 × 10 1 2.0784 × 10 6 3.3417 × 10 14 8.6788 × 10 1 3.6266 × 10 1 06.2358 × 10 1 067.0191.5058 × 10 3 63.900625.3368
g225.7875 × 10 3 2.7703 × 10 1 1.5690 × 10 1 5.9865 × 10 3 5.1748 × 10 3 4808.8009694.8022 × 10 3 4421.53264963.73.0415 × 10 1 6162.32434998.2886
g238.8097 × 10 6 4.5997 × 10 4 1.1139 × 10 14 1.9119 × 10 2 5.9983 × 10 1 05.1159 × 10 14 087.6343.4116 × 10 4 198.266487.4764
g248.8525 × 10 20 02.5244 × 10 29 01.9323 × 10 29 0000000
