Article

Differential Evolution Using Enhanced Mutation Strategy Based on Random Neighbor Selection

1 Department of Computer Science, Faculty of Computing and Information Technology, International Islamic University Islamabad, Islamabad 44000, Pakistan
2 Department of Computer Science, Hazara University, Mansehra 21120, Pakistan
3 Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
4 Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
5 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
* Authors to whom correspondence should be addressed.
Symmetry 2023, 15(10), 1916; https://doi.org/10.3390/sym15101916
Submission received: 31 July 2023 / Revised: 7 September 2023 / Accepted: 1 October 2023 / Published: 14 October 2023
(This article belongs to the Special Issue Symmetries in Differential Equation and Application)

Abstract: Symmetry in differential evolution (DE) transforms a solution without impacting the family of solutions. For symmetrical problems in differential equations, DE is a strong evolutionary algorithm that provides a powerful solution to global optimization problems. DE/best/1 and DE/rand/1 are the two most commonly used mutation strategies in DE. The former provides better exploitation while the latter ensures better exploration. DE/Neighbor/1 is an improved form of DE/rand/1 that maintains a balance between exploration and exploitation; it was used in the random neighbor-based differential evolution (RNDE) algorithm. However, this mutation strategy slows down convergence. It should achieve a global minimum within 1000 × D function evaluations, where D is the dimension, but due to the exploration-exploitation trade-off, it cannot achieve the global minimum within this budget on some of the objective functions. To overcome this issue, this paper introduces a new and enhanced mutation strategy, DE/Neighbor/2, along with an improved random neighbor-based differential evolution (IRNDE) algorithm. The new DE/Neighbor/2 mutation strategy also uses neighbor information, as DE/Neighbor/1 does; in addition, however, we add weighted differences after various tests. The DE/Neighbor/2 mutation strategy and IRNDE algorithm have been tested on the same 27 commonly used benchmark functions on which the DE/Neighbor/1 mutation strategy and RNDE were tested. Experimental results demonstrate that the DE/Neighbor/2 mutation strategy and IRNDE algorithm show overall better and faster convergence than the DE/Neighbor/1 mutation strategy and RNDE algorithm. The parametric significance test shows that there is a significant difference in the performance of the RNDE and IRNDE algorithms at the 0.05 level of significance.

1. Introduction

Today, the modern world has entered a post-petascale era; requirements for computation and data processing are growing exponentially, and the need for high-performance computation is increasing day by day; thus, the trend has shifted from serial execution to high-performance computation. To achieve high-performance computation, several hurdles need to be tackled, for example, problems whose solutions are very hard to find or barely exist, e.g., NP-complete problems. Well-known heuristic techniques provide solutions to these types of problems, but those solutions are not completely optimized. However, using optimization algorithms, such as differential evolution (DE) and particle swarm optimization (PSO), optimized solutions to such problems can still be found.
Moreover, we will observe and discuss the variants of DE. The original DE was first proposed by Storn and Price [1] in 1995 and drew the attention of many researchers, as it is a simple algorithm that provides optimized solutions to many real-world problems. Thus, based on the original DE algorithm, different variants were introduced later. Some of the well-known approaches are hybridization with other techniques, modification of mutation strategies, adaptation of the mutation strategy and parameter settings, and use of neighbor information.

1.1. Problem Statement

The local optima issue becomes challenging if the population loses its diversity in the differential evolution algorithm. The selection of parents is important to incorporate diversity into the mutation and crossover operations of the DE algorithm. A vector that evolves the population only around its neighborhood will become stuck in local optima because of the imbalance between the exploration and exploitation capabilities of the algorithm. The RNDE algorithm utilizes only one difference vector and a neighbor-best vector from a set of N neighbors, where N is taken from the interval [N_lb, N_ub] between the lower and upper bound limits. Low diversity and slow convergence degrade the convergence speed of the RNDE algorithm (Algorithm 1) [2].
Algorithm 1: Improved Random Neighbor-Based Differential Evolution
1: Randomly initialize the population
2: Evaluate the objective function
3: FEs = NP
4: while FEs < Max(FEs) do
5:   Calculate the number of neighbors N_i for each individual:
       N_i = N_lb + (N_ub − N_lb) · (f(X_i) − f_min + ψ) / (Σ_{j=1}^{NP} (f(X_j) − f_min + ψ))
6:   for i = 1 : NP do
7:     Randomly choose N_i neighbors for the i-th individual and the best one, X_nbest
8:     Generate the IRNDE mutant vector V_i according to V_i = X_nbest + F(X_r1 − X_r2) + F(X_r3 − X_r4)
9:     Execute the crossover operation to generate a trial vector U_i
10:    Evaluate the trial vector U_i
11:    FEs = FEs + 1
12:    if f(U_i) < f(X_i) then
13:      X_i ← U_i
14:    else
15:      Update CR using the adaptive shift:
16:      if f(U_i) > f(X_i) then
17:        Flag = −Flag
18:      end if
19:      if Flag == 1 then
20:        CR = CR_large + 0.1 · randn
21:      else
22:        CR = CR_small + 0.1 · randn
23:      end if
24:    end if
25:  end for
26: end while
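To make the pseudocode concrete, the following is a minimal Python sketch of the IRNDE loop. It is not the authors' reference implementation: the adaptive CR shift of lines 15–23 is simplified to a fixed crossover rate, and all function names and parameter defaults are illustrative.

```python
import random

def irnde(f, bounds, NP=30, F=0.5, CR=0.9, max_fes=3000,
          N_lb=5, N_ub=12, psi=1e-12):
    """Sketch of the IRNDE loop (simplified: fixed CR instead of the
    adaptive shift). f: objective to minimize; bounds: (low, high) per
    dimension. Returns (best fitness, best vector)."""
    D = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
    fit = [f(x) for x in pop]
    fes = NP
    while fes < max_fes:
        fmin = min(fit)
        denom = sum(fj - fmin + psi for fj in fit)
        for i in range(NP):
            # self-adaptive neighbor count, clamped to [N_lb, N_ub]
            Ni = int(N_lb + (N_ub - N_lb) * (fit[i] - fmin + psi) / denom)
            Ni = max(N_lb, min(N_ub, Ni))
            nbrs = random.sample([k for k in range(NP) if k != i], Ni)
            nbest = min(nbrs, key=lambda k: fit[k])          # X_nbest
            r1, r2, r3, r4 = random.sample(
                [k for k in range(NP) if k not in (i, nbest)], 4)
            # DE/Neighbor/2: two weighted difference vectors
            v = [pop[nbest][j]
                 + F * (pop[r1][j] - pop[r2][j])
                 + F * (pop[r3][j] - pop[r4][j]) for j in range(D)]
            # binomial crossover to build the trial vector
            jrand = random.randrange(D)
            u = [v[j] if (random.random() <= CR or j == jrand) else pop[i][j]
                 for j in range(D)]
            fu = f(u)
            fes += 1
            if fu <= fit[i]:                                  # greedy selection
                pop[i], fit[i] = u, fu
    return min(zip(fit, pop))

random.seed(42)
best_f, best_x = irnde(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(best_f)
```

On a smooth objective such as the sphere function, the loop converges quickly because the best-of-neighbors base vector steers the search while the two difference vectors preserve diversity.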

1.2. Research Significance

The selection of the number of parents used in the perturbation of any individual is considered important in the evolution of the DE algorithm. The RNDE algorithm utilizes a single difference vector, which reduces the diversity in the population; as a result, the algorithm converges slowly. Perturbation around one neighborhood's best vector results in more exploitation than exploration and ultimately in getting stuck in local optima, which can be fixed by increasing the exploration capability of the DE algorithm.

1.3. Research Contributions

  • This paper presents a novel mutation strategy in the RNDE algorithm to maintain the balance between the exploration and exploitation of the DE algorithm. The proposed IRNDE is helpful in increasing the convergence speed and the average fitness quality of solutions.
  • Experimental results show that the performance of the improved RNDE algorithm is superior to that of the RNDE algorithm on the standard test suite of benchmark functions.
  • Convergence graphs confirm the quick convergence of the proposed IRNDE algorithm, and statistical results show the significance of the IRNDE algorithm.

1.4. Research Question and Hypothesis

  • Ways to increase population diversity and incorporate a balance between exploration and exploitation during the evolution process of the RNDE algorithm.
  • Determining whether there is a significant difference in performance between the RNDE algorithm and the proposed algorithm.
In the rest of the paper, Section 2 shows how the DE algorithm works; a brief literature review is presented in Section 3; materials and methods are given in Section 4; results and discussion are presented in Section 5; the statistical analysis is given in Section 5.4; and the conclusion and future work are given in the last section.

2. Principle of the Classical Differential Evolution Algorithm

As mentioned earlier, the purpose of the DE algorithm is to provide optimized solutions [3]. The algorithm keeps searching for the best individual among the given population [4]. It is also considered that DE can solve the problem for immediate goals using a given population and a set of parameters [5]. It is a population-based algorithm, like genetic algorithms, and uses crossover and mutation operators, with selection as the last step. Moreover, it is self-adaptive, and all solutions have the same chance of being selected, no matter what their fitness values are [6]. It follows the greedy approach, especially in the selection phase. DE uses NP (population size) D-dimensional parameter vectors and is a parallel direct search method. Once we obtain new offspring from the DE algorithm, we compare the new offspring/generation with their parents and evaluate both the parents and the new generation based on their fitness values. We obtain a new individual by applying the mutation, crossover, and selection operators. Those who are better in fitness are kept, no matter whether they belong to the new generation or their parents. In the selection operation, greedy selection is applied to select the individual among the target vector and trial vector [7]. An individual is represented by X_{i,j}, i = 1, 2, …, NP, j = 1, 2, …, D, where NP is the population size of each generation G. The classical DE works in three phases: mutation, crossover, and selection.

2.1. Mutation Phase

The mutation phase is used to generate a mutant vector, or donor vector, that is then used in the crossover operation. For each target vector X_{i,G}, i = 1, 2, …, NP, the donor or mutant vector is generated according to
V_{i,G+1} = X_{r1,G} + F · (X_{r2,G} − X_{r3,G})
This equation generates a donor vector V_{i,G+1} during the G-th generation, where r1, r2, r3 ∈ {1, 2, …, NP} are mutually different integers, also chosen to be different from the running index i; thus, NP should be greater than or equal to four to meet this condition. F > 0 is a real constant factor ∈ [0, 2] responsible for the amplification of the differential variation (X_{r2,G} − X_{r3,G}). A two-dimensional illustration is commonly used to show how V_{i,G+1} is generated.
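As a concrete illustration (a sketch, not the paper's code), the DE/rand/1 donor-vector generation can be written in a few lines of Python; the function name and the default value of F are illustrative.

```python
import random

def mutate_rand_1(population, i, F=0.5):
    """DE/rand/1: V_i = X_r1 + F * (X_r2 - X_r3), with r1, r2, r3
    mutually distinct and different from the target index i."""
    candidates = [k for k in range(len(population)) if k != i]
    r1, r2, r3 = random.sample(candidates, 3)
    x1, x2, x3 = population[r1], population[r2], population[r3]
    return [a + F * (b - c) for a, b, c in zip(x1, x2, x3)]

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]  # NP=6, D=3
donor = mutate_rand_1(pop, 0, F=0.8)
print(donor)  # one donor vector with D components
```

Setting F = 0 reduces the donor to the base vector X_r1, which is why F > 0 is required for the strategy to actually perturb the population.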

2.2. Crossover Phase

The crossover is introduced to increase the diversity of the perturbed parameter vectors [8]. The trial vector is
U_{i,G+1} = (u_{1i,G+1}, u_{2i,G+1}, …, u_{Di,G+1})
where
u_{ji,G+1} = v_{ji,G+1} if randb(j) ≤ CR or j = rnbr(i), and u_{ji,G+1} = x_{ji,G} otherwise,
with j = 1, 2, …, D.
In the crossover phase, randb(j) is the j-th evaluation of a uniform random number generator with outcome ∈ [0, 1]. CR is the crossover constant ∈ [0, 1], which is set by the user. rnbr(i) is a randomly chosen index ∈ {1, 2, …, D} that ensures u_{i,G+1} always obtains at least one parameter from v_{i,G+1}.
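A minimal sketch of the binomial crossover described above, assuming the donor vector has already been produced by the mutation phase (the function name and example vectors are illustrative):

```python
import random

def binomial_crossover(target, donor, CR=0.9):
    """Binomial crossover: take the donor component when randb(j) <= CR
    or when j equals the randomly chosen index rnbr(i), so the trial
    vector inherits at least one component from the donor."""
    D = len(target)
    jrand = random.randrange(D)          # rnbr(i)
    return [donor[j] if (random.random() <= CR or j == jrand) else target[j]
            for j in range(D)]

random.seed(1)
x = [0.0, 0.0, 0.0, 0.0]                 # target vector
v = [1.0, 1.0, 1.0, 1.0]                 # donor vector
u = binomial_crossover(x, v, CR=0.5)
print(u)
```

Because component jrand always comes from the donor, the trial vector is guaranteed to differ from the target whenever the donor does.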

2.3. Selection Phase

The selection phase is responsible for deciding whether an individual should become a member of generation G + 1 or not. Hence, the trial vector u_{i,G+1} is compared with the target vector x_{i,G} using the greedy approach: if u_{i,G+1} achieves a lower fitness value than x_{i,G}, then x_{i,G+1} is set to u_{i,G+1}; otherwise, the old value x_{i,G} is retained [1].
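The greedy selection rule can be sketched as follows (a minimal illustration for a minimization objective; accepting equal-fitness trials via `<=` follows common DE practice and is an assumption here):

```python
def select(target, trial, f):
    """Greedy selection: keep the trial vector only if it achieves an
    equal or lower objective value than the target (minimization)."""
    return trial if f(trial) <= f(target) else target

sphere = lambda x: sum(v * v for v in x)       # simple test objective
print(select([2.0, 2.0], [1.0, 1.0], sphere))  # trial wins
print(select([1.0, 1.0], [3.0, 0.0], sphere))  # target kept
```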

2.4. Commonly Used Mutation Strategies

As the focus of the current study is DE neighbor information, for classical DE and other variants of DE, the most commonly used group of mutation strategies [9] is given below:
DE/rand/1: V_i = X_r1 + F · (X_r2 − X_r3)
DE/best/1: V_i = X_best + F · (X_r1 − X_r2)
DE/current-to-best/1: V_i = X_i + F · (X_best − X_i) + F · (X_r1 − X_r2)
DE/rand/2: V_i = X_r1 + F · (X_r2 − X_r3) + F · (X_r4 − X_r5)
DE/best/2: V_i = X_best + F · (X_r1 − X_r2) + F · (X_r3 − X_r4)
The above-mentioned mutation strategies are used, not only in neighbor information types of DE algorithms, but also by different researchers of different variants of DE. Moreover, these strategies are also used in the classical version of DE.
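For reference, the five strategies listed above can be expressed as one small Python dispatcher (an illustrative sketch; the helper names and the convention that `best` is the current best vector are assumptions):

```python
import random

def diff(a, b, F):
    # weighted difference F * (a - b), component-wise
    return [F * (x - y) for x, y in zip(a, b)]

def add(*vs):
    # component-wise sum of any number of vectors
    return [sum(t) for t in zip(*vs)]

def mutate(strategy, pop, i, best, F=0.5):
    """Generate a mutant vector for target index i using one of the
    five classical DE mutation strategies."""
    r = random.sample([k for k in range(len(pop)) if k != i], 5)
    x = [pop[k] for k in r]
    if strategy == "DE/rand/1":
        return add(x[0], diff(x[1], x[2], F))
    if strategy == "DE/best/1":
        return add(best, diff(x[0], x[1], F))
    if strategy == "DE/current-to-best/1":
        return add(pop[i], diff(best, pop[i], F), diff(x[0], x[1], F))
    if strategy == "DE/rand/2":
        return add(x[0], diff(x[1], x[2], F), diff(x[3], x[4], F))
    if strategy == "DE/best/2":
        return add(best, diff(x[0], x[1], F), diff(x[2], x[3], F))
    raise ValueError(strategy)

random.seed(3)
pop = [[float(k), float(k)] for k in range(8)]
best = pop[0]
for name in ("DE/rand/1", "DE/best/1", "DE/current-to-best/1",
             "DE/rand/2", "DE/best/2"):
    print(name, mutate(name, pop, 1, best))
```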

2.5. Major Contributions of Study

A number of studies by various researchers are available in the literature to handle the local optima issue, balance between exploration and exploitation, improve the convergence speed and improve the solution quality of the DE algorithm. A few of the variants introduced by researchers include tournament selection-based DE [10], rank-based DE [11], fuzzy-based DE [12], self-adaptive DE [13], adaptive DE [14], and Pool-based DE [15] to maintain the balance between exploration and exploitation as well as to improve the convergence performance of the DE algorithm in their research work.
There are two commonly used mutation strategies for DE. The first is DE/best/1, which provides better exploitation, as it uses the best individual of the population, but results in poor exploration. The second is DE/rand/1, in which exploration is better, as it chooses the base vector randomly, but exploitation is not good; in both cases, there is no balance between exploration and exploitation. To overcome this issue, the DE/Neighbor/1 mutation strategy and the random neighbor-based differential evolution (RNDE) algorithm were introduced in [2] a few years ago and tested on 27 extensively used benchmark functions. The authors stated that the DE/Neighbor/1 strategy and the RNDE algorithm succeed in maintaining the balance between exploration and exploitation, using lower and upper bound limits to control this balance. However, this mutation strategy shows slow convergence. It should achieve a global minimum within 1000 × D function evaluations, but due to the exploration-exploitation trade-off, it is unable to obtain the global minimum within this budget on some of the objective functions.
This study introduces a new approach, based on the RNDE variant, namely, the improved random neighbor-based differential evolution (IRNDE). The proposed algorithm uses neighbor information similar to RNDE; however, in addition, we added a new concept: weighted differences after various tests. The proposed IRNDE is tested on the same 27 commonly used benchmark functions on which RNDE was tested. Experiments are performed to compare its performance with RNDE. Results demonstrate faster convergence of IRNDE and its superior performance compared to RNDE.
The rest of this article is organized as follows. The related work is given in Section 3, which is followed by a description of the proposed IRNDE algorithm in Section 4. Section 5 presents the results, while the conclusion is given in Section 6.

3. Related Work

Many researchers have proposed models/techniques to improve the DE algorithm to provide better and more optimized results [16,17]. Some researchers provided techniques or other algorithms that work with the DE algorithm as hybrid techniques, obtaining more optimized and satisfactory results. The DE algorithm has attracted many scholars around the globe; based on their work, research on the DE algorithm can be categorized into the following sections.

3.1. Hybridization with Other Techniques

The study [18] proposed a hybrid algorithm, CADE, which combines a customized canonical version of CA and DE. The canonical CA uses the 'Accept()' function, which selects the best individuals from the population; the belief space knowledge source is then updated using the 'Update()' function. The 'Influence()' function selects the knowledge source that affects the evolution of the next generation of the population. The authors state that in CA, the major source of exploration is topographic knowledge, which is the knowledge about the functional landscape. Moreover, DE can also provide a complementary source of exploration knowledge; hence, it makes a perfect complement to CA. Both algorithms share the same population space and hence follow high-level teamwork. The study [19] proposed a mechanism called ADE-ALC, short for adaptive DE algorithm with an aging leader and challengers, which is helpful for solving optimization problems. It is introduced in the framework of DE, which helps in maintaining the diversity of the population; in the DE algorithm, retaining the diversity of the evolutionary population is critical for solving multimodal optimization problems. ADE-ALC achieves the optimal solution with fast convergence speed. In the ADE-ALC approach, the key parameters are updated depending on a given probability distribution that can learn from successful experience in the next generation. In the end, the effectiveness of the ADE-ALC algorithm is checked by numerical results on twenty-five benchmark test functions, where ADE-ALC shows better, or at least competitive, optimization performance in terms of statistical performance. The authors proposed a hybrid technique in [20] to provide statistically better performance on optimization problems. The authors used a combination of the DE algorithm and the stochastic fractal search algorithm.
As a hybrid approach is used, the combination has the strengths of both competent algorithms and produces better results than a single algorithm. Moreover, to test the performance of the hybrid approach, they used the 30-function IEEE CEC2014 benchmark suite. The results show a better performance of the hybrid approach compared to a single algorithm and demonstrate the statistical superiority of the hybrid approach.

3.2. Modification of Mutation Strategies

The study [21] proposed an approach to improve the search efficiency of the DE algorithm. The performance of DE is badly affected by parameter settings and evolutionary operators, e.g., the mutation, crossover, and selection processes. To overcome this issue, the authors proposed a new technique called a combined mutation strategy. A guiding individual-based parameter setting method and a diversity-based selection strategy are used. The proposed algorithm uses the concept of sub-populations and divides the population into two subcategories, superior and inferior. Experiments are performed using the CEC 2005 and CEC 2014 benchmarks. Moreover, their algorithm differs from greedy selection strategies; hence, they showed that their algorithm produces more efficient results than previously proposed techniques. The study [22] points out that DE uses only the best solution to deal with global optimization problems. Similarly, mutation strategies in the existing literature utilize only one best solution. The authors challenged this concept and introduced the concept of m best candidates, proposing that the m best candidates should be selected to obtain better gains. A technique called the collective information-powered DE (CIPDE) algorithm is proposed to obtain the m best candidates and enhance the power of DE. The CEC2013 benchmark functions are used for experiments, which prove that the CIPDE technique is much better than existing mutation strategies. The study [23] proposed a new technique in which the structure of the DE algorithm is improved. The authors argue that the performance of DE is based on the control parameters and the mutation strategy; if both the selection of a proper mutation strategy and the control parameters are enhanced, better results can be obtained. An automated system is proposed to produce an evolution matrix that takes the place of the control parameter crossover rate, Cr. Furthermore, parameter F is renewed during the evolution process. The mutation strategy, along with a time stamp system, also evolves in this study. The experimental results showed that the proposed technique is very competitive with existing strategies.

3.3. Adaptation of Mutation Strategy and Parameter Settings

The study [24] proposed a new algorithm that can investigate problem landscape information and the performance histories of operators to dynamically select the most suitable DE operator during the evolution process. The need for this mutation strategy is justified by the fact that predominantly existing works use a single mutation strategy. The authors present the concept of using multiple mutation strategies. Multiple mutation strategy-based algorithms are reported to provide far better results than single mutation-based algorithms. In such algorithms, the emphasis is on obtaining the better-performing evolutionary operator, based entirely on performance history, for creating new offspring. This procedure is carried out dynamically; it selects the most suitable evolutionary operator. Experimental results using 45 optimization problems show the efficacy of the proposed algorithm. The study [21] proposed a new and improved version of the DE algorithm. Firstly, the search strategy of the previous DE is improved by using the information of individuals to set the parameters of DE and update the population, and a combined mutation strategy is produced by combining two single mutation strategies. Secondly, the fitness values of the original and guiding individuals are used. Finally, a diversity-based selection strategy is developed by applying a greedy selection strategy. The performance is evaluated using the CEC 2005 and CEC 2014 benchmarks, and better results are reported. The study [25] investigates the high-level ensemble of mutation strategies in DE algorithms. For this purpose, a multi-population-based framework (MFT) was introduced. An ensemble of differential evolution variants (EDEV) based on three popular and efficient DE versions is utilized: JADE (adaptive DE with an optional external archive), CoDE (DE with composite trial vector generation strategies and control parameters), and EPSDE (DE with an ensemble of parameters and mutation strategies). Furthermore, the whole population of EDEV is divided into four subpopulations. In the end, EDEV is tested on the CEC 2005 and CEC 2014 benchmarks, which show the better performance of EDEV.

3.4. Use of Neighbor Information

The study [26] proposed an adaptive social learning (ASL) strategy for the DE algorithm so that neighborhood relationship information of individuals in the current population can be extracted; this is called the social learning DE (SL-DE). In the classical DE algorithm, parents for mutation are randomly selected from the population. In the ASL strategy, however, the selection of parents is intelligently guided: every individual can only interact with its neighbors and parents. To check the efficacy of SL-DE, it is applied to advanced DE algorithms. Results demonstrate that SL-DE can achieve better performance than most of the existing variants of DE. The study [27] proposed a technique in which the authors applied an index-based neighborhood to DE for global numerical optimization. In this technique, the authors used neighborhood information of the population to enhance the performance of DE; in the existing literature, neighborhood information of the current population had not been systematically exploited in DE design. The authors proposed neighborhood-adaptive DE (NaDE). The NaDE technique is based on a pool of index-based neighborhood topologies. Firstly, several neighborhood interactions for every individual are recorded and later used adaptively for specific function selection. Secondly, the authors introduced a neighborhood-directional mutation operator in NaDE to obtain new solutions in the designated neighborhood topology. Finally, NaDE is easy to operate and implement and compares well with earlier DE versions on different kinds of optimization problems. The authors proposed a new approach, called enhancing DE with a random neighbor-based strategy, in [2]. Traditionally, the DE/rand/1 and DE/best/1 mutation strategies are used with DE. In DE/rand/1, the base vector is chosen randomly from the population for better exploration. On the other hand, the DE/best/1 strategy has better exploitation but poor exploration.
To overcome this issue, the authors proposed DE/Neighbor/1. In the proposed technique, for each individual of the population at every generation, the neighbors are chosen from the population in a random manner, and the base vector of the DE/Neighbor/1 mutation strategy is the best one among the neighbors. Xiong et al. [28] introduced a speciation-based DE algorithm in their research work. The presented algorithm utilizes an adaptive neighborhood mechanism on multimodal benchmark functions. They used the concept of an archive to store inferior individuals in each iteration and removed similar-performing individuals using a crowding-relief mechanism. In their approach, the user can fine-tune the parameters adaptively. Liao et al. [29] considered systems of non-linear equations using the DE algorithm in their research work. They utilized neighborhood-based information to increase the exploitation capability of the DE algorithm. The size of the neighborhood is dynamically selected with the adjustment of parameter adaptation in the state of evolution. The search efficiency of the DE algorithm was enhanced, achieving significant results. The research work [30] presented a binary differential evolution based on a self-adaptive neighborhood method for change detection in super-pixels. The change detection process is carried out by using a binary DE mutation strategy to reduce the dimension of super-pixels. Liao et al. [31] introduced a variable neighborhood-based DE algorithm utilizing a history archive in their research work. During the evolution process, the neighborhood size is dynamically controlled in their approach. The information exchange process is performed between the current population and the population stored in the archive. This information exchange helps escape from local optima during the evolutionary process. Liu et al. [32] considered the economic dispatch problem by incorporating a direction-induced strategy in a neighborhood-based DE algorithm in their research work. They used a new mutation strategy, named the neighborhood-based non-elite direction strategy, that enhances the exploitation capability of the presented algorithm. Sheng et al. [33] introduced the concept of an adaptive neighborhood-based mutation in the DE algorithm. The presented technique helps focus on an intensive search following the initial search of the DE algorithm. They also used a Gaussian local search to evolve promising individuals during the search process. Wang et al. [34] introduced an adaptive memetic-based neighborhood crossover strategy in their research work. They used the concept of multi-niching sampling for the evolution of sub-populations to ensure an intensive search. They also presented the design of an adaptive elimination-based local search. Their neighborhood crossover strategy focuses on the exploitation capability of the DE algorithm to encourage good-quality solutions. Cai et al. [35] presented a self-organizing DE algorithm that helps guide the search process by utilizing neighborhood information. The adaptive adjustment of various individuals in their work uses cosine similarity in a self-organizing map. Segredo et al. [36] proposed a proximity-based neighborhood in the DE algorithm that helps balance exploration and exploitation during the evolution process. They used Euclidean distance to measure the similarity between neighboring individuals and termed it a similarity-based neighborhood search. Baioletti et al. [37] presented an algebraic differential evolution based on a variable neighborhood concept in their research work. Their algorithm utilizes the information of three neighborhoods for shifting and swapping purposes to form permutations.
Tian and Gao [38] introduced an adaptive evolution method using the neighborhood mechanism in the DE algorithm. They used a probability-based selection of individuals, as well as two neighborhood-based mutation operators, to improve the evolution process. They also used a simple reduction method to adjust the population size and incorporate diversity in the DE algorithm. Tarkhaneh and Moser [39] performed a cluster analysis by incorporating a neighborhood search and an Archimedean spiral in the DE algorithm in their research work. The Mantegna Lévy flight mechanism was used in the Archimedean spiral, generating robust solutions to balance exploration and exploitation during the search process. In this section, we analyzed the DE variants in terms of mutation strategies, use of neighbor information, hybridization of the DE algorithm, etc. Experimental results and performance reports from these works indicate that the performance of DE can be enhanced in several ways. Some of the studies used hybrid approaches to achieve the enhancement, while others used combinations of mutation strategies, parameter settings, and test functions. We can say that, to some extent, researchers were able to obtain better performance from enhanced versions than from the simple version of the DE algorithm. However, to achieve a better performance of DE, they had to make a trade-off. We realize that there are many research challenges for DE to further improve its performance. This research aims to enhance the concept of random neighbors; the focus is to obtain a faster convergence compared to the existing random neighbors approach.

4. Materials and Methods

4.1. DE with Random Neighbor-Based Mutation Strategy

For this research, we selected the random neighbor-based differential evolution (RNDE) approach of [2]. It was proposed to achieve a balance between exploration and exploitation, which cannot be achieved using the traditional DE/rand/1 and DE/best/1 mutation strategies. The mutation phase of RNDE is given as
V_i = X_nbest + F · (X_r1 − X_r2)
The number of neighbors N plays a critical role in controlling the balance between exploration and exploitation through the upper and lower bound limits. A small value of N makes the mutation strategy similar to the DE/rand/1 strategy, which results in better exploration and poor exploitation. Conversely, a large value of N (close to NP) makes the mutation strategy similar to DE/best/1, which provides better exploitation. A large value of N is not a wise choice because it can make the algorithm become stuck in a local optimum. The authors also proposed a self-adaptive strategy that dynamically updates the number of neighbors N_i for each individual X_i, as follows
N_i = N_lb + (N_ub − N_lb) · (f(X_i) − f_min + ψ) / (Σ_{j=1}^{NP} (f(X_j) − f_min + ψ))
where N_lb and N_ub denote the lower and upper bounds, respectively, f_min is the smallest (best) value of the objective function in the population of the current generation, and ψ is a small constant used to avoid a division-by-zero error.
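The self-adaptive formula can be sketched in Python as follows (an illustration with the RNDE bounds N_lb = 3 and N_ub = 10; the function name and the example fitness values are assumptions):

```python
def neighbor_counts(fitness, N_lb=3, N_ub=10, psi=1e-12):
    """Self-adaptive neighbor count N_i per individual, following the
    formula above: individuals at f_min get roughly N_lb neighbors,
    while worse individuals get proportionally more."""
    fmin = min(fitness)
    denom = sum(fj - fmin + psi for fj in fitness)
    return [N_lb + (N_ub - N_lb) * (fi - fmin + psi) / denom for fi in fitness]

fits = [4.0, 1.0, 9.0, 1.0]        # example objective values
Ns = neighbor_counts(fits)
print([round(n, 2) for n in Ns])
```

Note that all N_i stay within [N_lb, N_ub], and the two best individuals (fitness 1.0) receive counts at the lower bound.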
The RNDE is successful in maintaining the balance between exploration and exploitation, as it was built to use the lower and upper bound limits to control the balance between exploration and exploitation. However, as the whole focus of the RNDE algorithm is to maintain the balance between exploration and exploitation, this mutation strategy makes convergence very slow, thus requiring a larger number of iterations in achieving the global optimum.

4.2. Proposed Approach

To overcome the slower convergence of RNDE, this study proposes an improved random neighbor-based differential evolution (IRNDE). The flow chart of the proposed IRNDE is given in Figure 1. IRNDE also uses neighbor information, as do the RNDE algorithm and the DE/Neighbor/1 mutation strategy; in addition, however, we add another term of weighted differences in the DE/Neighbor/2 mutation strategy, finalized after various tests. Since an extra weighted difference vector is added in the mutation phase, the upper and lower bound limits on the number of neighbors N are also increased. The proposed IRNDE mutation equation is given as
V_i = X_{nbest} + F \cdot (X_{r1} - X_{r2}) + F \cdot (X_{r3} - X_{r4})
The original/base RNDE algorithm and DE/Neighbor/1 mutation strategy have one weighted difference vector
V_i = X_{nbest} + F \cdot (X_{r1} - X_{r2})
In contrast, the proposed IRNDE algorithm and DE/Neighbor/2 mutation strategy have two weighted difference vectors
V_i = X_{nbest} + F \cdot (X_{r1} - X_{r2}) + F \cdot (X_{r3} - X_{r4})
In addition, the upper and lower neighbor-bound limits are adjusted accordingly. In the base RNDE algorithm, the DE/Neighbor/1 lower bound N_{lb} was set to 3 and the upper bound N_{ub} to 10 after experimentation. For the proposed IRNDE, using the DE/Neighbor/2 mutation strategy, we updated the bound limits accordingly and set N_{lb} to 5 and N_{ub} to 12.
We use a lower bound of 5 because the mutation equation requires a minimum of five distinct vectors. Moreover, the range of the scaling factor F in both the base algorithm RNDE and the proposed algorithm IRNDE is between 0 and 2, varied according to the nature of the objective functions; the value of F is set separately for each function until the best result is achieved. We faced many difficulties during the implementation of IRNDE. In the RNDE algorithm, N denotes the number of neighbors and is very important in maintaining the balance between exploration and exploitation: if an individual in the population learns the best information from its neighbors, the efficiency of the overall algorithm is enhanced and fitter offspring can be obtained.
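To make the strategy concrete, the DE/Neighbor/2 mutant construction can be sketched as follows. This is a hedged Python sketch: the paper does not specify whether the four difference indices are drawn from the neighborhood or from the whole population, so this sketch draws them from the whole population excluding the current individual and the neighborhood best:

```python
import random

def mutate_neighbor2(pop, fitness, i, n_i, F=0.5):
    """DE/Neighbor/2: V_i = X_nbest + F*(X_r1 - X_r2) + F*(X_r3 - X_r4).

    Draws n_i random neighbors of individual i, takes the fittest of them
    as X_nbest, then adds two weighted difference vectors built from four
    further distinct indices (an assumption; see lead-in).
    """
    others = [j for j in range(len(pop)) if j != i]
    neighbors = random.sample(others, n_i)
    nbest = min(neighbors, key=lambda j: fitness[j])
    r1, r2, r3, r4 = random.sample([j for j in others if j != nbest], 4)
    return [pop[nbest][d]
            + F * (pop[r1][d] - pop[r2][d])
            + F * (pop[r3][d] - pop[r4][d])
            for d in range(len(pop[i]))]
```

With five distinct base indices required (nbest plus r1–r4), the lower neighbor bound of 5 follows naturally.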
Another major change from the original DE, used by both RNDE and the proposed IRNDE, is the dynamic update of CR when the trial vector is worse than the target/current vector. The idea is that if the trial vector is worse than the target vector, i.e., f(U_i) > f(X_i), the current CR cannot provide the best solution and needs to be updated; conversely, if a small value of CR is not suitable, this method can shift the value to a larger one. This strategy, called the adaptive shift strategy, is based on two means, CR_l (the large value) and CR_s (the small value), with a standard deviation of 0.1, where randn denotes a random number drawn from a Gaussian distribution. The value of CR_l is set to 0.85 and the value of CR_s to 0.1 after conducting many experiments. If the fitness of the trial vector is worse than that of the target vector, CR is shifted from CR_l to CR_s, or from CR_s to CR_l, using the following equations
CR = CR_l + 0.1 \times randn
CR = CR_s + 0.1 \times randn
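The adaptive shift can be sketched as follows. This is a hedged Python sketch; the flag bookkeeping is our own simplification: the new mean is whichever of CR_l/CR_s the current CR is not near, and the result is clipped into [0, 1] so it stays a valid crossover probability:

```python
import random

def update_cr(cr, cr_large=0.85, cr_small=0.1, sigma=0.1):
    """Adaptive shift of the crossover rate after a failed trial vector.

    Jumps CR to the opposite mean (large <-> small), perturbed by Gaussian
    noise with standard deviation 0.1, and clips the result into [0, 1].
    """
    mean = cr_small if abs(cr - cr_large) < abs(cr - cr_small) else cr_large
    return min(1.0, max(0.0, mean + sigma * random.gauss(0.0, 1.0)))
```

For example, a CR currently near 0.85 that keeps producing losing trial vectors is shifted to a value scattered around 0.1, and vice versa.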
In IRNDE, the crossover phase is given as
U_{ij} = \begin{cases} V_{ij}, & \text{if } rand_j(0,1) \le CR \text{ or } j = j_{rand} \\ X_{ij}, & \text{otherwise} \end{cases}
Equation (16) performs the crossover operation, as in the classical DE crossover phase. Here, i = 1, 2, \ldots, NP and j = 1, 2, \ldots, D; j_{rand} is an integer chosen randomly from \{1, 2, \ldots, D\}; rand_j is a random value uniformly distributed in [0, 1]; and CR, being a crossover probability, normally lies in [0, 1].
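A minimal sketch of this binomial crossover (illustrative names; the forced index j_rand guarantees that at least one component is inherited from the mutant vector):

```python
import random

def crossover(target, mutant, cr):
    """Binomial crossover: take the mutant component when rand_j <= CR or
    at the forced index j_rand, otherwise keep the target's component."""
    d = len(target)
    j_rand = random.randrange(d)
    return [mutant[j] if random.random() <= cr or j == j_rand else target[j]
            for j in range(d)]
```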
However, in RNDE and IRNDE, CR is dynamically updated: a large value of CR makes the trial vector learn more from the mutant vector and less from the target vector, which increases population diversity, whereas a small value of CR makes the trial vector learn more from the target vector and less from the mutant vector. The IRNDE selection phase is given as
X_i^{G+1} = \begin{cases} U_i, & \text{if } f(U_i) \le f(X_i) \\ X_i, & \text{otherwise} \end{cases}
Equation (17) shows the selection phase of IRNDE, which differs from that of classical DE. In classical DE, a greedy choice is made between the trial vector U_i and the target vector X_i: if U_i is better, X_i is replaced with U_i and survives into the next generation. In RNDE and IRNDE, however, if the trial vector U_i is not better than the target vector X_i, then X_i is retained and CR is dynamically updated. Here, NP is the number of individuals in the population, FEs is the number of function evaluations, Max(FEs) is the maximum number of function evaluations, V_i is the mutant vector around the individual X_i (also called the target vector), U_i is the trial vector, f_{min} is the minimum (best) value of the objective function in the population at the current generation, and \psi is the smallest constant in the computer, used to avoid a zero-division error. Flag is used to invert the value of CR and is initialized to 0, CR is the crossover probability, CR_{large} is the large mean value generated by the Gaussian distribution, CR_{small} is the small mean value generated by the Gaussian distribution, and 0.1 is the standard deviation. After experimentation and surveys, CR_{large} is set to 0.85 and CR_{small} is set to 0.1.
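The selection rule, together with the signal to trigger the CR update, can be sketched as follows (an illustrative Python sketch; the actual RNDE/IRNDE implementation may organize this bookkeeping differently):

```python
def select(target, trial, f):
    """Greedy selection: the trial vector survives only if it is at least
    as good as the target; otherwise the target is retained and the caller
    should trigger the adaptive CR shift (returned as a flag)."""
    if f(trial) <= f(target):
        return trial, False   # trial replaces the target; CR unchanged
    return target, True       # target retained; CR should be updated
```

The returned flag is what connects selection back to the adaptive shift strategy: a failed trial means the current CR was unhelpful and should jump to the opposite mean.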

5. Results and Discussions

To evaluate the effectiveness of the proposed IRNDE algorithm and the new enhanced mutation strategy, namely DE/Neighbor/2, we utilized the 27 commonly used benchmark functions on which the previous RNDE algorithm and DE/Neighbor/1 mutation strategy were tested. For a fair comparison with the base algorithm, we implemented both RNDE and the proposed IRNDE, using the DE/Neighbor/1 and DE/Neighbor/2 mutation strategies, respectively, with the same parameter settings.

5.1. Parameter Settings

As mentioned earlier, the original/base RNDE algorithm and the DE/Neighbor/1 mutation strategy use one amplified difference vector, whereas the proposed IRNDE algorithm and DE/Neighbor/2 mutation strategy use two amplified difference vectors to generate the mutant (donor) vector. In addition, the upper and lower bound limits for the calculation of neighbors are adjusted accordingly. In the base RNDE algorithm, the DE/Neighbor/1 lower bound N_{lb} was set to 3 and the upper bound N_{ub} to 10 after some research and experimentation. We improved the mutation equation in our algorithm by adding an extra weighted difference vector in the proposed DE/Neighbor/2 mutation strategy, and updated the bound limits accordingly, setting N_{lb} = 5 and N_{ub} = 12.
The lower bound is set to 5 because the mutation equation requires a minimum of five distinct vectors. Moreover, the range of the scaling factor F in both the base algorithm RNDE and the proposed algorithm IRNDE is between 0 and 2, varied according to the nature of the objective functions; the value of F is set separately for each function until the best result is achieved. Finally, the selection phase is the same as in the original DE algorithm except for the CR updating process, which was explained earlier.

5.2. Benchmark Functions

For experimental evaluation, we used a test suite with 27 benchmark functions, which was also used by the RNDE algorithm. The details of the test suite are provided in Table 1.

5.3. Results

Table 1 lists the benchmark functions, their search ranges, and their global minima. These are the functions we used to check the performance of both algorithms, RNDE and IRNDE. For the experiments, 5000 iterations are used to evaluate the performance of both algorithms. Convergence graphs are shown only for f1 to f6; however, tabular data are presented for f1 to f15.
Figure 2 shows the graphical representation of the fitness results of f1 to f6, where iterations = 5000, population NP = 150, and dimension D = 50. The overall improvement of the proposed IRNDE algorithm can be clearly observed.
The performance of both algorithms, RNDE and IRNDE, is analyzed with respect to variations in the population size (NP) and the number of dimensions, denoted by D. Fitness values are reported in Table 2 for population size NP = 150, dimension size D = 50, and iterations = 5000. The results are divided into groups of five benchmark functions: f1 to f5, f6 to f10, and f11 to f15. It can be clearly observed that the proposed IRNDE performs far better than the base algorithm RNDE; there is a visible difference, as IRNDE converges more quickly than RNDE.
Results given in Table 3 are generated using population size NP = 150, dimension size D = 50, and iterations = 5000 for f6 to f10. It can be observed that the proposed algorithm IRNDE shows better performance compared to the base algorithm RNDE.
The results for f11 to f15 are given in Table 4, which indicates the superior performance of the proposed IRNDE. The results demonstrate that the proposed IRNDE algorithm can obtain the global optimum in fewer iterations than the RNDE algorithm.
In Table 5, results are given for both RNDE and IRNDE regarding the best, mean, and worst values, the standard deviation, and the number of iterations needed to reach the global optimum. Results are generated using population size NP = 150, dimension size D = 10, and iterations = 5000. It can be observed that, for f14, RNDE still could not reach the global optimum at the 5000th iteration, as −450 is a rounded value and the actual value is −449.99983911013300, whereas IRNDE reached the global optimum in 3976 iterations. Observing the number of iterations for f1 to f27, IRNDE achieves the global optimum in far fewer iterations than RNDE, which shows the superiority of the proposed IRNDE algorithm.
Results given in Table 5, Table 6 and Table 7 report the performance of both RNDE and IRNDE for dimensions D of 10, 30, and 50. The performance is analyzed with respect to the number of fitness evaluations (NFE). This test is based on 10 runs of fitness evaluation, each running until the termination condition of 10,000 × D iterations is satisfied, where D varies from 10 and 30 to 50. For example, if the number of dimensions is D = 30, the algorithm runs for up to 10,000 × 30 = 300,000 iterations to achieve the global minimum. If the algorithm reaches the global minimum within this budget, we record the number of iterations required; if it does not, the run is marked as not having achieved the global minimum and the output is the error, as shown for f9 and f25 in Table 6 and Table 7, where both RNDE and IRNDE are unable to obtain the global minimum. It follows that if the number of dimensions changes, the iteration budget changes with it, as there is a direct relation between dimensions and iterations.
Moreover, we performed the above-mentioned tests on the same well-known 27 benchmark functions that were used to evaluate the RNDE algorithm. In addition, we discussed different scenarios, such as how many iterations each algorithm requires to achieve the global minimum from the worst, median, or best population data, and the success rate of both algorithms in achieving the global minimum during the calculation of the NFE. Finally, Figure 3 and Figure 4 show bar graphs of the median values of both algorithms and demonstrate their performance. It is noteworthy that the proposed IRNDE outperforms the RNDE algorithm.

5.4. Statistical Significance

The statistical significance of the average fitness values of the two algorithms is analyzed using a two-tailed paired t-test. The null hypothesis states that there is no significant difference between the average fitness performance of the RNDE algorithm (μ1) and the IRNDE algorithm (μ2), i.e., μ1 − μ2 = 0, while the alternative hypothesis states that there is a significant difference, i.e., μ1 ≠ μ2. We used a 0.05 level of significance; the t-test results are reported in Table 8. The degrees of freedom were 34 for the 35 observations used to generate the test statistics, along with the sample mean, variance, Pearson correlation, and two-tailed p-values against the critical t-value. We generated significance values for fifteen functions; the results for the remaining functions were similar. It can be observed from the table that all p-values except those for f4 and f14 are less than the level of significance (0.05). It can be summarized that, overall, there is a significant difference between the performance of the RNDE and IRNDE algorithms.
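For reference, the paired t statistic used here can be computed directly. This is a generic sketch, not the authors' code; |t| is compared against the two-tailed critical value, which is about 2.03 for df = 34 at the 0.05 level:

```python
import math
import statistics

def paired_t(sample1, sample2):
    """Two-tailed paired t statistic for H0: mu1 - mu2 = 0.

    Returns (t, degrees of freedom). With 35 paired observations the
    degrees of freedom are 34, and H0 is rejected at the 0.05 level when
    |t| exceeds the two-tailed critical value (about 2.03).
    """
    diffs = [a - b for a, b in zip(sample1, sample2)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1
```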

6. Conclusions

Differential evolution is a strong evolutionary algorithm that provides a powerful means of solving global optimization problems. However, the existing mutation strategies, DE/best/1 and DE/rand/1, do not provide a balance between exploitation and exploration. To overcome this issue, the DE/Neighbor/1 mutation strategy and the RNDE algorithm were introduced. The DE/Neighbor/1 mutation strategy maintains a balance between exploration and exploitation, as it was built to use lower and upper bound limits; however, it makes convergence very slow and requires a higher number of iterations. This study overcomes this limitation by introducing IRNDE with the DE/Neighbor/2 mutation strategy. In contrast to the DE/Neighbor/1 mutation strategy in the RNDE algorithm, the proposed IRNDE adds an extra weighted difference term, finalized after various tests. The proposed IRNDE algorithm and DE/Neighbor/2 mutation strategy were tested on the same 27 commonly used benchmark functions on which the DE/Neighbor/1 mutation strategy and RNDE algorithm were tested. Experimental results demonstrate that the new DE/Neighbor/2 mutation strategy and IRNDE algorithm converge better and faster overall. Moreover, while performing tests on both algorithms, we examined different situations during the implementation of the benchmark functions concerning the minimum, mean, and worst values. Although the proposed IRNDE has proven better and more successful overall, not only in maintaining the balance between exploration and exploitation but also in converging more quickly than the base RNDE algorithm, it may not provide optimal results in some scenarios: on the 27-function test suite, IRNDE performs better than the base RNDE except for the f9 Rastrigin function and f25 Schwefel's Problem 2.13.
The experimental results of the average fitness confirm a significant difference between the performance of the RNDE and IRNDE algorithms using a two-tailed paired t-test at the 0.05 level of significance. A limitation of this study is that it is applied to constrained problems; applying it to unconstrained problems could be a good direction for future work. Finally, we intend to apply the proposed algorithm to complex, real-world problems, such as steganography, which remains an attractive topic. Another future direction could be the use of memory to store the convergence track of the parameters associated with the proposed algorithm over user-defined time periods.

Author Contributions

Conceptualization, M.H.B. and Q.A.; Data curation, Q.A. and J.A.; Formal analysis, M.H.B. and J.A.; Funding acquisition, S.A.; Investigation, K.M. and S.A.; Methodology, J.A.; Project administration, S.A. and M.S.; Software, K.M. and M.S.; Supervision, I.A.; Validation, I.A.; Visualization, K.M. and M.S.; Writing—original draft, M.H.B. and Q.A.; Writing—review and editing, I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research is funded by the Researchers Supporting Project Number (RSPD2023R890), King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to King Saud University for funding this research through Researchers Supporting Project Number (RSPD2023R890), King Saud University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Storn, R.; Price, K. Differential evolution–A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  2. Peng, H.; Guo, Z.; Deng, C.; Wu, Z. Enhancing differential evolution with random neighbors based strategy. J. Comput. Sci. 2018, 26, 501–511. [Google Scholar] [CrossRef]
  3. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Song, Y.; Xu, J. An improved differential evolution algorithm and its application in optimization problem. Soft Comput. 2021, 25, 5277–5298. [Google Scholar] [CrossRef]
  4. Hu, Z.; Gong, W.; Li, S. Reinforcement learning-based differential evolution for parameters extraction of photovoltaic models. Energy Rep. 2021, 7, 916–928. [Google Scholar] [CrossRef]
  5. Kharchouf, Y.; Herbazi, R.; Chahboun, A. Parameter’s extraction of solar photovoltaic models using an improved differential evolution algorithm. Energy Convers. Manag. 2022, 251, 114972. [Google Scholar] [CrossRef]
  6. Yu, X.; Liu, Z.; Wu, X.; Wang, X. A hybrid differential evolution and simulated annealing algorithm for global optimization. J. Intell. Fuzzy Syst. 2021, 41, 1375–1391. [Google Scholar] [CrossRef]
  7. Cheng, J.; Pan, Z.; Liang, H.; Gao, Z.; Gao, J. Differential evolution algorithm with fitness and diversity ranking-based mutation operator. Swarm Evol. Comput. 2021, 61, 100816. [Google Scholar] [CrossRef]
  8. Kumar, R.; Singh, K. A survey on soft computing-based high-utility itemsets mining. Soft Comput. 2022, 26, 1–46. [Google Scholar] [CrossRef]
  9. Abbas, Q.; Ahmad, J.; Jabeen, H. The analysis, identification and measures to remove inconsistencies from differential evolution mutation variants. Scienceasia 2017, 43S, 52–68. [Google Scholar] [CrossRef]
  10. Abbas, Q.; Ahmad, J.; Jabeen, H. A novel tournament selection based differential evolution variant for continuous optimization problems. Math. Probl. Eng. 2015, 2015, 1–21. [Google Scholar] [CrossRef]
  11. Li, J.; Yang, L.; Yi, J.; Yang, H.; Todo, Y.; Gao, S. A simple but efficient ranking-based differential evolution. IEICE Trans. Inf. Syst. 2022, 105, 189–192. [Google Scholar] [CrossRef]
  12. Kaliappan, P.; Ilangovan, A.; Muthusamy, S.; Sembanan, B. Temperature Control Design with Differential Evolution Based Improved Adaptive-Fuzzy-PID Techniques. Intell. Autom. Soft Comput. 2023, 36, 781–801. [Google Scholar] [CrossRef]
  13. Chen, X.; Shen, A. Self-adaptive differential evolution with Gaussian–Cauchy mutation for large-scale CHP economic dispatch problem. Neural Comput. Appl. 2022, 34, 11769–11787. [Google Scholar] [CrossRef]
  14. Deng, W.; Ni, H.; Liu, Y.; Chen, H.; Zhao, H. An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation. Appl. Soft Comput. 2022, 127, 109419. [Google Scholar] [CrossRef]
  15. Abbas, Q.; Ahmad, J.; Jabeen, H. Random controlled pool base differential evolution algorithm (RCPDE). Intell. Autom. Soft Comput. 2017, 24, 377–390. [Google Scholar] [CrossRef]
  16. Thakur, S.; Dharavath, R.; Shankar, A.; Singh, P.; Diwakar, M.; Khosravi, M.R. RST-DE: Rough Sets-Based New Differential Evolution Algorithm for Scalable Big Data Feature Selection in Distributed Computing Platforms. Big Data 2022, 10, 356–367. [Google Scholar] [CrossRef]
  17. Zhan, Z.H.; Li, J.Y.; Zhang, J. Evolutionary deep learning: A survey. Neurocomputing 2022, 483, 42–58. [Google Scholar] [CrossRef]
  18. Awad, N.H.; Ali, M.Z.; Suganthan, P.N.; Reynolds, R.G. CADE: A hybridization of cultural algorithm and differential evolution for numerical optimization. Inf. Sci. 2017, 378, 215–241. [Google Scholar] [CrossRef]
  19. Fu, C.; Jiang, C.; Chen, G.; Liu, Q. An adaptive differential evolution algorithm with an aging leader and challengers mechanism. Appl. Soft Comput. 2017, 57, 60–73. [Google Scholar] [CrossRef]
  20. Awad, N.H.; Ali, M.Z.; Suganthan, P.N.; Jaser, E. A decremental stochastic fractal differential evolution for global numerical optimization. Inform. Sci. 2016, 372, 470–491. [Google Scholar] [CrossRef]
  21. Tian, M.; Gao, X.; Dai, C. Differential evolution with improved individual-based parameter setting and selection strategy. Appl. Soft Comput. 2017, 56, 286–297. [Google Scholar] [CrossRef]
  22. Zheng, L.M.; Zhang, S.X.; Tang, K.S.; Zheng, S.Y. Differential evolution powered by collective information. Inf. Sci. 2017, 399, 13–29. [Google Scholar] [CrossRef]
  23. Meng, Z.; Pan, J.S. QUasi-Affine TRansformation Evolution with External ARchive (QUATRE-EAR): An enhanced structure for differential evolution. Knowl.-Based Syst. 2018, 155, 35–53. [Google Scholar] [CrossRef]
  24. Sallam, K.M.; Elsayed, S.M.; Sarker, R.A.; Essam, D.L. Landscape-based adaptive operator selection mechanism for differential evolution. Inf. Sci. 2017, 418, 383–404. [Google Scholar] [CrossRef]
  25. Wu, G.; Shen, X.; Li, H.; Chen, H.; Lin, A.; Suganthan, P.N. Ensemble of differential evolution variants. Inf. Sci. 2018, 423, 172–186. [Google Scholar] [CrossRef]
  26. Cai, Y.; Liao, J.; Wang, T.; Chen, Y.; Tian, H. Social learning differential evolution. Inf. Sci. 2018, 433, 464–509. [Google Scholar] [CrossRef]
  27. Cai, Y.; Sun, G.; Wang, T.; Tian, H.; Chen, Y.; Wang, J. Neighborhood-adaptive differential evolution for global numerical optimization. Appl. Soft Comput. 2017, 59, 659–706. [Google Scholar] [CrossRef]
  28. Xiong, S.; Gong, W.; Wang, K. An adaptive neighborhood-based speciation differential evolution for multimodal optimization. Expert Syst. Appl. 2023, 211, 118571. [Google Scholar] [CrossRef]
  29. Liao, Z.; Zhu, F.; Mi, X.; Sun, Y. A neighborhood information-based adaptive differential evolution for solving complex nonlinear equation system model. Expert Syst. Appl. 2023, 216, 119455. [Google Scholar] [CrossRef]
  30. Gao, T.; Li, H.; Gong, M.; Zhang, M.; Qiao, W. Superpixel-based multiobjective change detection based on self-adaptive neighborhood-based binary differential evolution. Expert Syst. Appl. 2023, 212, 118811. [Google Scholar] [CrossRef]
  31. Liao, Z.; Mi, X.; Pang, Q.; Sun, Y. History archive assisted niching differential evolution with variable neighborhood for multimodal optimization. Swarm Evol. Comput. 2023, 76, 101206. [Google Scholar] [CrossRef]
  32. Liu, D.; Hu, Z.; Su, Q. Neighborhood-based differential evolution algorithm with direction induced strategy for the large-scale combined heat and power economic dispatch problem. Inf. Sci. 2022, 613, 469–493. [Google Scholar] [CrossRef]
  33. Sheng, M.; Chen, S.; Liu, W.; Mao, J.; Liu, X. A differential evolution with adaptive neighborhood mutation and local search for multi-modal optimization. Neurocomputing 2022, 489, 309–322. [Google Scholar] [CrossRef]
  34. Wang, Z.; Chen, Z.; Wang, Z.; Wei, J.; Chen, X.; Li, Q.; Zheng, Y.; Sheng, W. Adaptive memetic differential evolution with multi-niche sampling and neighborhood crossover strategies for global optimization. Inf. Sci. 2022, 583, 121–136. [Google Scholar] [CrossRef]
  35. Cai, Y.; Wu, D.; Zhou, Y.; Fu, S.; Tian, H.; Du, Y. Self-organizing neighborhood-based differential evolution for global optimization. Swarm Evol. Comput. 2020, 56, 100699. [Google Scholar] [CrossRef]
  36. Segredo, E.; Lalla-Ruiz, E.; Hart, E.; Voß, S. A similarity-based neighbourhood search for enhancing the balance exploration–Exploitation of differential evolution. Comput. Oper. Res. 2020, 117, 104871. [Google Scholar] [CrossRef]
  37. Baioletti, M.; Milani, A.; Santucci, V. Variable neighborhood algebraic differential evolution: An application to the linear ordering problem with cumulative costs. Inf. Sci. 2020, 507, 37–52. [Google Scholar] [CrossRef]
  38. Tian, M.; Gao, X. Differential evolution with neighborhood-based adaptive evolution mechanism for numerical optimization. Inf. Sci. 2019, 478, 422–448. [Google Scholar] [CrossRef]
  39. Tarkhaneh, O.; Moser, I. An improved differential evolution algorithm using Archimedean spiral and neighborhood search based mutation approach for cluster analysis. Future Gener. Comput. Syst. 2019, 101, 921–939. [Google Scholar] [CrossRef]
Figure 1. Flowchart of proposed IRNDE algorithms.
Figure 2. Convergence graphs of f1 to f6 for RNDE and IRNDE with NP = 150, D = 30, and iterations = 5000. (a) f1; (b) f2; (c) f3; (d) f4; (e) f5; (f) f6.
Figure 3. Number of fitness evaluations for RNDE and IRNDE. (a) NFE comparison of RNDE and IRNDE when NP = 150 and D = 10; (b) NFE comparison of RNDE and IRNDE when NP = 150 and D = 30.
Figure 4. NFE comparison of RNDE and IRNDE when NP = 150 and D = 50.
Table 1. Test suite with 27 benchmark functions.
Function | Name | Search Range | Global Optimum
Unimodal Functions
f1 | Sphere | [−100, 100] | 0
f2 | Schwefel 2.22 | [−10, 10] | 0
f3 | Schwefel 1.2 | [−100, 100] | 0
f4 | Schwefel 2.21 | [−100, 100] | 0
f5 | Rosenbrock's | [−30, 30] | 0
f6 | Step | [−1.28, 1.28] | 0
f7 | Quartic with Noise | [−100, 100] | 0
Multimodal Functions
f8 | Schwefel 2.26 | [−500, 500] | −418.98
f9 | Rastrigin's | [−5.12, 5.12] | 0
f10 | Ackley | [−32, 32] | 0
f11 | Griewank's | [−600, 600] | 0
f12 | Penalized 1 | [−50, 50] | 0
f13 | Penalized 2 | [−50, 50] | 0
Shifted Unimodal Functions
f14 | Shifted Sphere Function | [−100, 100] | −450
f15 | Shifted Schwefel's Problem 1.2 | [−100, 100] | −450
f16 | Shifted Rotated High Conditioned Elliptic Function | [−100, 100] | −450
f17 | Shifted Schwefel's Problem 1.2 with Noise in Fitness | [−100, 100] | −450
f18 | Schwefel's Problem 2.6 with Global Optimum on Bounds | [−100, 100] | −310
Shifted Multimodal Functions
f19 | Shifted Rosenbrock's Function | [−100, 100] | 390
f20 | Shifted Rotated Griewank's Function without Bounds | [0, 600] | −180
f21 | Shifted Rotated Ackley's Function with Global Optimum on Bounds | [−32, 32] | −140
f22 | Shifted Rastrigin's Function | [−5, 5] | −330
f23 | Shifted Rotated Rastrigin's Function | [−5, 5] | −330
f24 | Shifted Rotated Weierstrass Function | [−0.5, 0.5] | 90
f25 | Schwefel's Problem 2.13 | [−π, π] | −460
f26 | Shifted Expanded Griewank's plus Rosenbrock's Function (F8F2) | [−3, 1] | −130
f27 | Shifted Rotated Expanded Scaffer's F6 Function | [−100, 100] | −300
Table 2. Fitness values of function f1 to f5 for NP = 150, D = 50, and iterations = 5000.
Iterations | f1 (RNDE) | f1 (IRNDE) | f2 (RNDE) | f2 (IRNDE) | f3 (RNDE) | f3 (IRNDE) | f4 (RNDE) | f4 (IRNDE) | f5 (RNDE) | f5 (IRNDE)
1 | 107,166 | 107,166 | 197,013 | 197,013 | 91.4529 | 91.4529 | 2.19833e+71 | 2.19833e+71 | 4.31944e+08 | 4.31944e+08
5 | 107,166 | 107,166 | 197,013 | 197,013 | 91.4529 | 91.4529 | 1.56318e+71 | 1.25631e+69 | 4.31944e+08 | 4.31944e+08
10 | 106,763 | 98,577.1 | 197,013 | 197,013 | 91.4529 | 91.4529 | 1.56318e+71 | 9.13478e+68 | 4.31944e+08 | 4.31944e+08
15 | 106,763 | 98577.1 | 197,013 | 197,013 | 91.4529 | 91.4529 | 1.36815e+71 | 6.45164e+68 | 4.31944e+08 | 4.31944e+08
20 | 106,763 | 98577.1 | 196,295 | 168,441 | 91.4529 | 91.4529 | 6.09134e+69 | 5.62354e+67 | 4.31944e+08 | 4.31944e+08
30 | 102,924 | 93,241.7 | 196,295 | 168,441 | 91.4529 | 91.4529 | 1.10354e+69 | 3.20042e+67 | 4.31944e+08 | 4.31944e+08
40 | 93,490.6 | 93,241.7 | 196,295 | 168,441 | 91.4529 | 91.4529 | 8.49963e+68 | 1.1789e+65 | 4.31944e+08 | 4.31944e+08
50 | 93,490.6 | 91,660.3 | 196,295 | 168,441 | 91.4529 | 91.4529 | 3.2853e+67 | 6.41441e+63 | 4.31944e+08 | 4.31944e+08
100 | 86,431.9 | 56,403.3 | 196,295 | 168,441 | 91.4065 | 89.2837 | 1.27491e+63 | 3.44804e+55 | 4.04911e+08 | 2.99116e+08
150 | 69,868.9 | 40,416.5 | 171,358 | 143,138 | 90.3058 | 89.2837 | 1.15333e+62 | 5.90041e+53 | 4.04911e+08 | 2.77088e+08
200 | 64,281 | 34,341.9 | 171,358 | 143,138 | 90.3058 | 89.0754 | 4.09299e+60 | 2.93405e+51 | 3.90937e+08 | 1.99466e+08
400 | 31,954.4 | 6800.34 | 164,102 | 113,800 | 89.5071 | 85.3248 | 8.52268e+57 | 6.13965e+47 | 2.92851e+08 | 1.51033e+07
600 | 17,025.5 | 1761.99 | 153,406 | 96,057.6 | 89.5071 | 74.6393 | 5.21376e+49 | 9.92589e+44 | 2.45835e+08 | 1.92694e+06
800 | 7159.23 | 271.509 | 153,406 | 96,057.6 | 88.0884 | 67.8502 | 1.49244e+47 | 4.0476e+43 | 1.54781e+08 | 350747
1000 | 2954.65 | 85.033 | 113,349 | 78,643.5 | 85.2505 | 56.0395 | 2.36147e+46 | 3.02577e+42 | 5.83435e+07 | 57207.6
1200 | 1411.61 | 12.6946 | 113,349 | 78571.1 | 81.3025 | 51.9173 | 4.34581e+45 | 7.79156e+40 | 1.66845e+07 | 23060.8
1400 | 705.528 | 3.36049 | 113,349 | 78,508.4 | 64.4952 | 43.103 | 3.27444e+42 | 1.04872e+38 | 1.06145e+07 | 6365.95
1600 | 330.4 | 0.705804 | 103,116 | 77,692 | 64.4952 | 38.9589 | 2.92916e+37 | 9.65888e+34 | 3.61785e+06 | 3784.11
1800 | 143.321 | 0.148906 | 103,116 | 76,766.7 | 58.7125 | 27.6752 | 2.68602e+33 | 1.27846e+34 | 1.06625e+06 | 1897.68
2000 | 72.6645 | 0.027825 | 102008 | 70697.2 | 49.5419 | 24.4122 | 2.62547e+33 | 1.74314e+32 | 861487 | 1077.72
2200 | 28.5524 | 0.00816997 | 100,863 | 61497.6 | 48.856 | 21.7797 | 2.3213e+31 | 2.27938e+30 | 263,768 | 916.068
2400 | 15.1582 | 0.00170155 | 97,335.3 | 51,589.8 | 43.1867 | 18.7337 | 2.27744e+30 | 1.26613e+27 | 195775 | 700.118
2600 | 5.75624 | 0.000385194 | 94,308.2 | 51,589.8 | 39.6438 | 14.7789 | 2.27744e+30 | 2.4849e+26 | 112,458 | 642.347
2800 | 2.84793 | 8.46806e-05 | 94308.2 | 46713.4 | 36.3745 | 12.6463 | 1.06585e+30 | 1.44058e+26 | 108,988 | 578.795
3000 | 1.30338 | 1.73096e-05 | 89,843.4 | 45,302.9 | 32.3394 | 9.99071 | 3.97908e+28 | 2.10136e+24 | 80144.5 | 515.251
3200 | 0.591442 | 4.38451e-06 | 89,843.4 | 38085.9 | 27.8329 | 8.72914 | 3.97908e+28 | 9.46517e+23 | 49,036 | 462.433
3400 | 0.243082 | 1.02105e-06 | 85,722.3 | 38,085.9 | 25.2841 | 6.37169 | 3.97908e+28 | 2.44522e+22 | 38,846.7 | 347.158
3600 | 0.112868 | 2.96409e-07 | 58,333.6 | 37,141.8 | 25.2841 | 0.4367 | 3.97908e+28 | 3.53667e+20 | 30,109.7 | 347.158
3800 | 0.0454132 | 4.85207e-08 | 58,087.2 | 33,533.5 | 22.1458 | 4.33106 | 3.97908e+28 | 3.23352e+19 | 21,079.1 | 301.099
4000 | 0.0233552 | 1.12556e-08 | 58,087.2 | 32429.3 | 20.2136 | 3.77252 | 1.09389e+28 | 1.09388e+18 | 19,657.9 | 248.39
4200 | 0.0104778 | 2.35218e-09 | 50,409.5 | 28,602.2 | 19.4409 | 3.06997 | 2.02774e+26 | 5.27283e+16 | 13542.1 | 188.169
4400 | 0.00507579 | 3.57897e-10 | 48,908.5 | 24,143 | 16.6239 | 2.6146 | 1.14581e+21 | 2.90392e+16 | 7989.71 | 129.291
4600 | 0.00191955 | 5.69887e-11 | 48,908.5 | 21,842.5 | 14.7613 | 2.18745 | 1.14581e+21 | 4.48367e+12 | 5393.15 | 75.9085
4800 | 0.000790519 | 1.34205e-11 | 48,908.5 | 19,772.3 | 14.4084 | 1.63888 | 1.14581e+21 | 1.43737e+11 | 4743.71 | 56.1816
5000 | 0.000285069 | 2.70302e-12 | 48,861.7 | 19,772.3 | 12.7042 | 1.38764 | 1.14581e+21 | 1.43737e+11 | 4269.41 | 49.7603
Table 3. Fitness values of function f6 to f10 for NP = 150, D = 50, and iterations = 5000.
Iterations | f6 (RNDE) | f6 (IRNDE) | f7 (RNDE) | f7 (IRNDE) | f8 (RNDE) | f8 (IRNDE) | f9 (RNDE) | f9 (IRNDE) | f10 (RNDE) | f10 (IRNDE)
1 | 21 | 21 | 1.42628e+10 | 1.42628e+10 | −3516.54 | −3516.54 | 728.486 | 728.486 | 20.7073 | 20.7073
5 | 20 | 21 | 1.42628e+10 | 1.42628e+10 | −4303.27 | −4484.28 | 710.839 | 728.486 | 20.7073 | 20.7073
10 | 20 | 19 | 1.42628e+10 | 1.42628e+10 | −6262.3 | −6764.29 | 710.839 | 728.486 | 20.6636 | 20.6009
15 | 20 | 19 | 1.42628e+10 | 1.42628e+10 | −6282.13 | −6935.1 | 710.839 | 724.258 | 20.6636 | 20.6009
20 | 20 | 19 | 1.42628e+10 | 1.12205e+10 | −7428.17 | −10,057 | 710.839 | 724.258 | 20.6636 | 20.5467
30 | 20 | 16 | 1.42628e+10 | 1.12205e+10 | −9398.02 | −14,537.2 | 710.839 | 724.258 | 20.6381 | 20.5467
40 | 20 | 16 | 1.42628e+10 | 1.05978e+10 | −9534.25 | −16,106.3 | 710.839 | 682.222 | 20.5408 | 20.5467
50 | 20 | 16 | 1.32638e+10 | 1.05978e+10 | −10,322.7 | −19,528.8 | 710.839 | 679.314 | 20.4485 | 20.4571
100 | 17 | 13 | 1.26344e+10 | 6.24556e+09 | −12,489.3 | −20,949 | 698.499 | 594.514 | 20.3311 | 19.7722
150 | 16 | 11 | 1.17699e+10 | 3.04017e+09 | −14,918.2 | −20,949 | 698.499 | 533.429 | 19.8555 | 19.2148
200 | 15 | 7 | 1.17699e+10 | 1.56747e+09 | −16,420 | −20,949 | 698.499 | 494.476 | 19.7466 | 17.9631
400 | 10 | 2 | 5.23897e+09 | 2.18584e+08 | −20,949 | −20,949 | 698.499 | 455.163 | 18.7771 | 13.0003
600 | 7 | 0 | 4.14472e+09 | 1.98573e+07 | −20,949 | −20,949 | 698.499 | 404.723 | 16.8826 | 8.77267
800 | 4 | 0 | 1.17689e+09 | 2.39732e+06 | −20,949 | −20,949 | 698.499 | 362.02 | 14.5523 | 5.26086
1000 | 4 | 0 | 2.94735e+08 | 229,054 | −20,949 | −20,949 | 682.614 | 362.02 | 12.3433 | 3.72313
1200 | 3 | 0 | 5.57891e+07 | 34,096.6 | −20,949 | −20,949 | 682.614 | 352.496 | 9.90552 | 2.51792
1400 | 2 | 0 | 1.79379e+07 | 2328.99 | −20,949 | −20,949 | 682.614 | 349.761 | 7.68546 | 1.84401
1600 | 2 | 0 | 4.92482e+06 | 268.371 | −20,949 | −20,949 | 682.614 | 349.212 | 5.72134 | 0.697559
1800 | 1 | 0 | 1.60315e+06 | 22.3301 | −20,949 | −20,949 | 661.983 | 349.212 | 4.52571 | 0.242736
2000 | 1 | 0 | 291,084 | 1.7914 | −20,949 | −20,949 | 661.983 | 349.212 | 4.08072 | 0.0935032
2200 | 1 | 0 | 149,086 | 0 | −20,949 | −20,949 | 661.983 | 346.007 | 3.42992 | 0.0420455
2400 | 1 | 0 | 64,393.3 | 0 | −20,949 | −20,949 | 661.983 | 346.007 | 2.92868 | 0.01849
2600 | 0 | 0 | 22,563.6 | 0 | −20,949 | −20,949 | 661.983 | 346.007 | 2.76194 | 0.00828884
2800 | 0 | 0 | 6538.42 | 0 | −20,949 | −20,949 | 661.983 | 346.007 | 2.45848 | 0.00367194
3000 | 0 | 0 | 1510.69 | 0 | −20,949 | −20,949 | 645.042 | 323.189 | 2.18228 | 0.00170531
3200 | 0 | 0 | 434.261 | 0 | −20,949 | −20,949 | 645.042 | 323.189 | 1.65711 | 0.00071923
3400 | 0 | 0 | 96.3754 | 0 | −20,949 | −20,949 | 637.334 | 323.189 | 0.958296 | 0.000306232
3600 | 0 | 0 | 29.8399 | 0 | −20,949 | −20,949 | 637.334 | 319.548 | 0.389258 | 0.000158771
3800 | 0 | 0 | 6.60063 | 0 | −20,949 | −20,949 | 618.245 | 319.548 | 0.233483 | 6.64578e-05
4000 | 0 | 0 | 3.47587 | 0 | −20,949 | −20,949 | 618.245 | 319.548 | 0.176824 | 3.53399e-05
4200 | 0 | 0 | 0 | 0 | −20,949 | −20,949 | 618.245 | 311.84 | 0.113753 | 1.68176e-05
4400 | 0 | 0 | 0 | 0 | −20,949 | −20,949 | 618.245 | 311.84 | 0.0665963 | 6.99799e-06
4600 | 0 | 0 | 0 | 0 | −20,949 | −20,949 | 618.245 | 311.84 | 0.041563 | 3.74626e-06
4800 | 0 | 0 | 0 | 0 | −20,949 | −20,949 | 601.333 | 311.84 | 0.0263947 | 1.71841e-06
5000 | 0 | 0 | 0 | 0 | −20,949 | −20,949 | 601.333 | 311.84 | 0.0188713 | 8.13691e-07
Table 4. Fitness values of function f11 to f15 for NP = 150, D = 50, and iterations = 5000; * indicates the global optimum could not be reached.
Iterations | f11 (RNDE) | f11 (IRNDE) | f12 (RNDE) | f12 (IRNDE) | f13 (RNDE) | f13 (IRNDE) | f14 (RNDE) | f14 (IRNDE) | f15 (RNDE) | f15 (IRNDE)
1 | 1055.15 | 1055.15 | 1.08539e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 168,301 | 168,301 | 398,585 | 398,585
5 | 1055.15 | 1055.15 | 1.0548e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 159,042 | 166,735 | 355,510 | 334,098
10 | 1055.15 | 1039.8 | 1.05323e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 125,808 | 163,621 | 355,510 | 283,650
15 | 1055.15 | 1039.8 | 1.05323e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 125,808 | 162,531 | 335,683 | 250,962
20 | 1052.27 | 937.941 | 1.05323e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 120,207 | 131,129 | 335,683 | 250,962
30 | 1052.27 | 937.941 | 1.05323e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 120,207 | 118,288 | 335,683 | 250,962
40 | 1052.27 | 894.572 | 1.05323e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 106,490 | 114,755 | 335,683 | 250,962
50 | 1002.21 | 710.905 | 1.05323e+09 | 1.08539e+09 | 2.00701e+09 | 2.00701e+09 | 106,490 | 109,345 | 263,847 | 228,577
100 | 1002.21 | 542.417 | 1.05323e+09 | 1.01396e+09 | 1.88035e+09 | 1.94988e+09 | 85,847.8 | 64,180.9 | 224,621 | 185,333
150 | 1002.21 | 347.765 | 1.05323e+09 | 5.11633e+08 | 1.88035e+09 | 1.11306e+09 | 81,634.2 | 55,196.3 | 224,621 | 155,871
200 | 966.712 | 270.401 | 9.49775e+08 | 4.21792e+08 | 1.84609e+09 | 8.48217e+08 | 69,772.1 | 34,856.8 | 224,452 | 155,871
400 | 917.708 | 52.6602 | 5.81916e+08 | 3.97822e+07 | 1.3873e+09 | 7.41636e+07 | 24,635.1 | 5930.8 | 209,400 | 152,495
600 | 917.708 | 13.7598 | 5.63661e+08 | 953,763 | 1.26871e+09 | 3.5074e+06 | 12,060.9 | 1400.9 | 177,338 | 144,816
800 | 759.071 | 3.21318 | 4.32972e+08 | 1005.07 | 1.2686e+09 | 43,217.1 | 5247.32 | −48.6701 | 170,882 | 125,566
1000 | 729.991 | 1.60481 | 2.97419e+08 | 22.796 | 7.18698e+08 | 767.809 | 2420.72 | −370.845 | 154,607 | 97,994.6
1200 | 632.36 | 1.10341 | 4.74073e+07 | 9.4321 | 4.88062e+08 | 38.0572 | 838.294 | −432.312 | 154,607 | 97,768.6
1400 | 564.399 | 1.01736 | 1.54405e+07 | 4.53885 | 1.71841e+08 | 16.5448 | 114.425 | −446.984 | 153,222 | 94,129.1
1600 | 450.362 | 0.658333 | 3.30737e+06 | 4.25549 | 4.75411e+07 | 3.86112 | −167.851 | −449.317 | 153,222 | 79,506.3
1800 | 272.759 | 0.226983 | 10,283 | 2.83376 | 1.47191e+07 | 1.10332 | −296.65 | −449.78 | 141,583 | 77,475.6
2000 | 254.433 | 0.0445405 | 75,962.6 | 2.3151 | 3.43088e+06 | 0.308245 | −398.253 | −449.961 | 135,500 | 74,589.6
2200 | 198.676 | 0.00737067 | 126.916 | 2.12297 | 1.16831e+06 | 0.0776809 | −424.14 | −449.993 | 135,500 | 69,304.9
2400 | 124.491 | 0.00171678 | 31.6576 | 1.76203 | 258,691 | 0.0147251 | −437.264 | −449.998 | 127,620 | 62,349.5
2600 | 96.0986 | 0.000409527 | 19.7927 | 1.47519 | 19,247.4 | 0.00279349 | −444.947 | −450 | 127,620 | 56,522
2800 | 66.9394 | 9.89848e-05 | 12.1895 | 0.995179 | 296.106 | 0.000594427 | −446.741 | −450 | 123,796 | 55,272.9
3000 | 46.4022 | 2.55152e-05 | 11.0036 | 0.913088 | 62.8248 | 0.000129401 | −449.27 | −450 | 121,615 | 51,567.8
3200 | 34.6919 | 5.18012e-06 | 7.55257 | 0.536102 | 52.5342 | 2.94385e-05 | −449.568 | −450 | 112,014 | 47,125.5
3400 | 21.599 | 6.62209e-07 | 6.5113 | 0.282837 | 33.2308 | 7.30486e-06 | −449.82 | −450 | 97,448.5 | 45,670.9
3600 | 17.9387 | 1.96707e-07 | 6.23303 | 0.0455196 | 14.0115 | 1.12448e-06 | −449.929 | −450 | 97,356.9 | 44,078.9
3800 | 10.8625 | 4.72381e-08 | 5.51551 | 0.00797489 | 6.56287 | 2.64319e-07 | −449.976 | −450 * | 95,037.9 | 43,505.3
4000 | 7.99392 | 9.67328e-09 | 3.45958 | 0.00158262 | 3.83055 | 5.15215e-08 | −449.988 | −450 | 95,037.9 | 30,530.6
4200 | 5.55138 | 1.67994e-09 | 3.33907 | 0.000302378 | 1.87559 | 6.99077e-09 | −449.994 | −450 | 79,005.2 | 28,242.5
4400 | 4.71148 | 4.01571e-10 | 3.33907 | 6.66714e-05 | 0.657885 | 1.60463e-09 | −449.997 | −450 | 79,005.2 | 27,167.2
4600 | 3.69465 | 6.71285e-11 | 3.27146 | 1.49248e-05 | 0.458076 | 2.50137e-10 | −449.999 | −450 | 79,005.2 | 27,167.2
4800 | 2.52784 | 1.32597e-11 | 3.27146 | 3.3658e-06 | 0.255931 | 5.15735e-11 | −450 | −450 | 76,609.3 | 18,035
5000 | 2.1493 | 2.38842e-12 | 3.27146 | 6.39102e-07 | 0.107353 | 9.56412e-12 | −450 * | −450 | 69,629.2 | 17,402.5
Table 5. Number of function evaluations for functions f1 to f27 for NP = 150, D = 10, and iterations = 1000 × D.
10 Runs Fitness Evaluations for NP = 150/D = 10
Function | Algorithm | Best | Median | Worst | Mean ± Std. Dev. | Success Rate | RNDE vs. IRNDE (# of Iterations)
f1 | RNDE | 504 | 518 | 526 | 5.157e+2 ± 8.68012e+0 | 100% | 516
f1 | IRNDE | 340 | 345 | 357 | 3.468e+2 ± 5.05085e+0 | 100% | 348
f2 | RNDE | 1457 | 1480 | 1528 | 1.4885e+3 ± 2.25549e+1 | 100% | 1549
f2 | IRNDE | 784 | 819 | 875 | 8.26e+2 ± 3.32031e+1 | 100% | 802
f3 | RNDE | 820 | 846 | 874 | 8.456e+2 ± 1.42533e+1 | 100% | 477
f3 | IRNDE | 590 | 625 | 647 | 6.207e+2 ± 1.82638e+1 | 100% | 336
f4 | RNDE | 808 | 823 | 847 | 8.251e+2 ± 1.19856e+1 | 100% | 832
f4 | IRNDE | 342 | 349 | 359 | 3.495e+2 ± 5.01664e+0 | 100% | 618
f5 | RNDE | 1652 | 1715 | 1757 | 1.7163e+3 ± 3.54622e+1 | 100% | 1737
f5 | IRNDE | 1034 | 1072 | 1132 | 1.0773e+3 ± 3.23661e+1 | 100% | 1087
f6 | RNDE | 1 | 15 | 27 | 1.45e+1 ± 7.15309e+0 | 100% | 13
f6 | IRNDE | 7 | 13 | 22 | 1.34e+1 ± 4.29987e+0 | 100% | 12
f7 | RNDE | 187 | 192 | 204 | 1.955e+2 ± 6.62067e+0 | 100% | 205
f7 | IRNDE | 122 | 132 | 139 | 1.316e+2 ± 5.05964e+0 | 100% | 138
f8 | RNDE | 18 | 27 | 31 | 2.66e+1 ± 3.62706e+0 | 100% | 30
f8 | IRNDE | 9 | 11 | 17 | 1.22e+1 ± 2.65832e+0 | 100% | 11
f9 | RNDE | 830 | 871 | 908 | 8.742e+2 ± 2.23696e+1 | 100% | 915
f9 | IRNDE | 906 | 1126 | 1265 | 1.1178e+3 ± 1.04919e+2 | 100% | 1227
f10 | RNDE | 797 | 817 | 839 | 8.198e+2 ± 1.18771e+1 | 100% | 815
f10 | IRNDE | 552 | 565 | 574 | 5.653e+2 ± 7.33409e+0 | 100% | 569
f11 | RNDE | 1729 | 2065 | 2328 | 2.037e+3 ± 2.02333e+2 | 100% | 2621
f11 | IRNDE | 1996 | 2586 | 3700 | 2.7613e+3 ± 5.16669e+2 | 100% | 3453
f12 | RNDE | 446 | 468 | 493 | 4.671e+2 ± 1.26179e+1 | 100% | 484
f12 | IRNDE | 321 | 331 | 344 | 3.327e+2 ± 7.39444e+0 | 100% | 328
f13 | RNDE | 472 | 488 | 506 | 4.879e+2 ± 1.13671e+1 | 100% | 476
f13 | IRNDE | 332 | 338 | 346 | 3.393e+2 ± 4.49815e+0 | 100% | 327
f14 | RNDE | 503 | 511 | 524 | 5.112e+2 ± 6.47731e+0 | 100% | 503
f14 | IRNDE | 337 | 353 | 359 | 3.512e+2 ± 7.56894e+0 | 100% | 353
f15 | RNDE | 1440 | 1484 | 1558 | 1.4951e+3 ± 3.86018e+1 | 100% | 1539
f15 | IRNDE | 810 | 824 | 887 | 8.363e+2 ± 2.70803e+1 | 100% | 858
f16 | RNDE | 496 | 522 | 542 | 5.204e+2 ± 1.26947e+1 | 100% | 499
f16 | IRNDE | 343 | 350 | 359 | 3.519e+2 ± 5.95259e+0 | 100% | 345
f17 | RNDE | 1471 | 1532 | 1603 | 1.5474e+3 ± 4.45676e+1 | 100% | 1496
f17 | IRNDE | 844 | 858 | 907 | 8.693e+2 ± 2.34902e+1 | 100% | 832
f18 | RNDE | 616 | 643 | 679 | 6.494e+2 ± 1.96932e+1 | 100% | 652
f18 | IRNDE | 513 | 535 | 558 | 5.346e+2 ± 1.12862e+1 | 100% | 518
f19 | RNDE | 265 | 274 | 295 | 2.777e+2 ± 9.84378e+0 | 100% | 265
f19 | IRNDE | 220 | 227 | 238 | 2.292e+2 ± 5.82714e+0 | 100% | 233
f20 | RNDE | 400 | 431 | 484 | 4.396e+2 ± 2.63236e+1 | 100% | 518
f20 | IRNDE | 228 | 261 | 308 | 2.683e+2 ± 2.55345e+1 | 100% | 352
f21 | RNDE | 343 | 355 | 369 | 3.562e+2 ± 8.25698e+0 | 100% | 365
f21 | IRNDE | 297 | 304 | 320 | 3.063e+2 ± 8.28721e+0 | 100% | 310
f22 | RNDE | 388 | 396 | 411 | 3.991e+2 ± 7.37036e+0 | 100% | 393
f22 | IRNDE | 257 | 268 | 274 | 2.681e+2 ± 5.21643e+0 | 100% | 267
f23 | RNDE | 286 | 308 | 316 | 3.066e+2 ± 1.01784e+1 | 100% | 306
f23 | IRNDE | 195 | 206 | 213 | 2.059e+2 ± 5.46606e+0 | 100% | 196
f24 | RNDE | 761 | 803 | 831 | 8.008e+2 ± 1.98203e+1 | 100% | 773
f24 | IRNDE | 616 | 634 | 649 | 6.343e+2 ± 1.09143e+1 | 100% | 612
f25 | RNDE | 3620 | 5080 | 9495 | 5.83986e+3 ± 2.00043e+3 | 70% | 3217
f25 | IRNDE | 286 | 423 | 1088 | 4.74111e+2 ± 2.39794e+2 | 90% | 1608
f26 | RNDE | 179 | 182 | 216 | 1.918e+2 ± 1.31976e+1 | 100% | 172
f26 | IRNDE | 108 | 117 | 123 | 1.164e+2 ± 4.16867e+0 | 100% | 127
f27 | RNDE | 214 | 223 | 229 | 2.234e+2 ± 4.74224e+0 | 100% | 228
f27 | IRNDE | 162 | 169 | 171 | 1.683e+2 ± 2.71006e+0 | 100% | 169
Table 6. Number of function evaluations for functions f1 to f27 for NP = 150, D = 30, and iterations = 1000 × D.
10 Runs Fitness Evaluations for NP = 150/D = 30
Function | Algorithm | Best | Median | Worst | Mean ± Std. Dev. | Success Rate | RNDE vs. IRNDE (# of Iterations)
f1 | RNDE | 2555 | 2627 | 2745 | 2.6491e+3 ± 6.40424e+1 | 100% | 2675
f1 | IRNDE | 1736 | 1753 | 1792 | 1.7634e+3 ± 2.09401e+1 | 100% | 1802
f2 | RNDE | 57,202 | 59,993 | 66,380 | 6.05669e+4 ± 2.46933e+3 | 100% | 57,718
f2 | IRNDE | 23,162 | 23,828 | 24,961 | 2.39414e+4 ± 5.28169e+2 | 100% | 24,016
f3 | RNDE | 8581 | 8825 | 9258 | 8.9154e+3 ± 2.28402e+2 | 100% | 9119
f3 | IRNDE | 5464 | 5532 | 5841 | 5.6021e+3 ± 1.32403e+2 | 100% | 5546
f4 | RNDE | 4294 | 4340 | 4476 | 4.356e+3 ± 6.34333e+1 | 100% | 4480
f4 | IRNDE | 3086 | 3169 | 3395 | 3.1891e+3 ± 8.24155e+1 | 100% | 3303
f5 | RNDE | 12,504 | 12,915 | 13,721 | 1.30622e+4 ± 4.15667e+2 | 100% | 13,426
f5 | IRNDE | 7377 | 7580 | 7803 | 7.5919e+3 ± 1.51639e+2 | 100% | 7659
f6 | RNDE | 179 | 225 | 272 | 2.27e+2 ± 3.39706e+1 | 100% | 227
f6 | IRNDE | 118 | 143 | 168 | 1.437e+2 ± 1.75439e+1 | 100% | 142
f7 | RNDE | 1150 | 1224 | 1289 | 1.2318e+3 ± 4.22816e+1 | 100% | 1239
f7 | IRNDE | 811 | 840 | 902 | 8.455e+2 ± 2.53213e+1 | 100% | 886
f8 | RNDE | 70 | 108 | 137 | 1.017e+2 ± 2.17718e+1 | 100% | 102
f8 | IRNDE | 25 | 35 | 58 | 3.68e+1 ± 8.9666e+0 | 100% | 52
f9 | RNDE | 39,881 | 43,293 | 48,764 | 4.37697e+4 ± 2.6549e+3 | 100% | 41,835
f9 | IRNDE | - | - | - | - | 0% | 6.58946e+1
f10 | RNDE | 3998 | 4089 | 4208 | 4.0932e+3 ± 7.4265e+1 | 100% | 4147
f10 | IRNDE | 2700 | 2730 | 2770 | 2.7372e+3 ± 2.18469e+1 | 100% | 2718
f11 | RNDE | 4072 | 4147 | 4297 | 4.1676e+3 ± 8.43567e+1 | 100% | 4146
f11 | IRNDE | 1827 | 2017 | 2161 | 2.0213e+3 ± 1.14815e+2 | 100% | 2092
f12 | RNDE | 2569 | 2701 | 2840 | 2.6904e+3 ± 7.63081e+1 | 100% | 2782
f12 | IRNDE | 1719 | 1739 | 1823 | 1.7558e+3 ± 3.35354e+1 | 100% | 1752
f13 | RNDE | 2586 | 2616 | 2719 | 2.6462e+3 ± 5.21575e+1 | 100% | 2661
f13 | IRNDE | 1738 | 1769 | 1838 | 1.7794e+3 ± 3.34139e+1 | 100% | 1809
f14 | RNDE | 2646 | 2667 | 2742 | 2.6824e+3 ± 3.43615e+1 | 100% | 2630
f14 | IRNDE | 1746 | 1780 | 1833 | 1.7887e+3 ± 3.41989e+1 | 100% | 1867
f15 | RNDE | 58,341 | 60,131 | 62,726 | 6.07395e+4 ± 1.69443e+3 | 100% | 59,989
f15 | IRNDE | 23,938 | 24,436 | 24,956 | 2.44474e+4 ± 3.02649e+2 | 100% | 23,667
f16 | RNDE | 2574 | 2642 | 2752 | 2.651e+3 ± 5.04094e+1 | 100% | 2674
f16 | IRNDE | 1770 | 1803 | 1842 | 1.8056e+3 ± 2.39499e+1 | 100% | 1780
f17 | RNDE | 61,575 | 61,985 | 65,619 | 6.29908e+4 ± 1.38551e+3 | 100% | 62,862
f17 | IRNDE | 25,129 | 26,032 | 27,197 | 2.62584e+4 ± 7.24199e+2 | 100% | 27,668
f18 | RNDE | 4695 | 4781 | 4878 | 4.7949e+3 ± 6.23154e+1 | 100% | 4890
f18 | IRNDE | 3486 | 3638 | 3821 | 3.6549e+3 ± 8.90661e+1 | 100% | 3741
f19 | RNDE | 1018 | 1039 | 1088 | 1.0458e+3 ± 1.88255e+1 | 100% | 1023
f19 | IRNDE | 805 | 823 | 844 | 8.251e+2 ± 1.46246e+1 | 100% | 803
f20 | RNDE | 4386 | 4728 | 5203 | 4.7566e+3 ± 2.72762e+2 | 100% | 4829
f20 | IRNDE | 2633 | 2936 | 3887 | 3.0857e+3 ± 3.68177e+2 | 100% | 3050
f21 | RNDE | 1257 | 1279 | 1311 | 1.2815e+3 ± 1.75768e+1 | 100% | 1307
f21 | IRNDE | 1029 | 1050 | 1056 | 1.0457e+3 ± 1.02746e+1 | 100% | 1082
f22 | RNDE | 2078 | 2151 | 2228 | 2.1589e+3 ± 4.80681e+1 | 100% | 2653
f22 | IRNDE | 1399 | 1424 | 1507 | 1.4387e+3 ± 3.33801e+1 | 100% | 1532
f23 | RNDE | 1629 | 1685 | 1734 | 1.6867e+3 ± 3.29209e+1 | 100% | 1699
f23 | IRNDE | 1084 | 1121 | 1181 | 1.1264e+3 ± 2.89988e+1 | 100% | 1111
f24 | RNDE | 6375 | 6465 | 6618 | 6.4829e+3 ± 8.03388e+1 | 100% | 6556
f24 | IRNDE | 4785 | 4939 | 5162 | 4.946e+3 ± 1.2686e+2 | 100% | 4926
f25 | RNDE | 50,303 | - | - | - | 10% | 2549.54
f25 | IRNDE | - | - | - | - | 0% | 966.653
f26 | RNDE | 1988 | 2448 | 3554 | 2.5159e+3 ± 4.72809e+2 | 100% | 1570
f26 | IRNDE | 985 | 1202 | 1350 | 1.1722e+3 ± 1.12576e+2 | 100% | 937
f27 | RNDE | 1069 | 1104 | 1148 | 1.1106e+3 ± 2.85081e+1 | 100% | 1088
f27 | IRNDE | 765 | 788 | 806 | 7.859e+2 ± 1.30933e+1 | 100% | 778
Table 7. Number of function evaluations for functions f1 to f27 for NP = 150, D = 50, and iterations = 1000 × D.
10 Runs Fitness Evaluations for NP = 150/D = 50
Function | Algorithm | Best | Median | Worst | Mean ± Std. Dev. | Success Rate | RNDE vs. IRNDE (# of Iterations)
f1 | RNDE | 7214 | 7445 | 7721 | 7.4651e+3 ± 1.41869e+2 | 100% | 7391
f1 | IRNDE | 3869 | 3921 | 4028 | 3.9451e+3 ± 5.64298e+1 | 100% | 3888
f2 | RNDE | 109,913 | 113,260 | 120,046 | 1.14284e+5 ± 2.98052e+3 | 100% | 109,893
f2 | IRNDE | 67,118 | 69,546 | 70,788 | 6.91903e+4 ± 1.32311e+3 | 100% | 74,815
f3 | RNDE | 38,400 | 39,679 | 41,609 | 4.01089e+4 ± 9.56063e+2 | 100% | 37,964
f3 | IRNDE | 22,579 | 23,704 | 24,492 | 2.35874e+4 ± 6.57373e+2 | 100% | 24,331
f4 | RNDE | 19,633 | 21,491 | 22,899 | 2.14575e+4 ± 8.63521e+2 | 100% | 20,941
f4 | IRNDE | 12,048 | 14,038 | 15,583 | 1.4151e+4 ± 1.31694e+3 | 100% | 11,980
f5 | RNDE | 53,747 | 56,423 | 65,062 | 5.67883e+4 ± 3.33907e+3 | 100% | 57,945
f5 | IRNDE | 21,985 | 22,926 | 23,565 | 2.29314e+4 ± 5.33276e+2 | 100% | 23,786
f6 | RNDE | 2093 | 2695 | 3858 | 2.7862e+3 ± 4.73772e+2 | 100% | 2801
f6 | IRNDE | 390 | 558 | 644 | 5.562e+2 ± 7.65721e+1 | 100% | 464
f7 | RNDE | 4389 | 4620 | 4960 | 4.6433e+3 ± 1.99158e+2 | 100% | 4743
f7 | IRNDE | 2012 | 2074 | 2110 | 2.0685e+3 ± 3.06132e+1 | 100% | 2166
f8 | RNDE | 174 | 220 | 378 | 2.451e+2 ± 6.85492e+1 | 100% | 472
f8 | IRNDE | 40 | 60 | 77 | 6.e+1 ± 1.09341e+1 | 100% | 64
f9 | RNDE | 363,687 | - | - | - | 10% | 2.69166e+1
f9 | IRNDE | - | - | - | - | 0% | 2.17122e+2
f10 | RNDE | 11,610 | 11,833 | 12,182 | 1.19068e+4 ± 1.86668e+2 | 100% | 12,179
f10 | IRNDE | 5818 | 5979 | 6111 | 5.9913e+3 ± 9.06116e+1 | 100% | 6069
f11 | RNDE | 20,074 | 21,698 | 23,867 | 2.19229e+4 ± 1.05793e+3 | 100% | 22,594
f11 | IRNDE | 3846 | 4025 | 4310 | 4.0539e+3 ± 1.37402e+2 | 100% | 4014
f12 | RNDE | 16,969 | 18,456 | 24,620 | 1.90873e+4 ± 2.41333e+3 | 100% | 21,185
f12 | IRNDE | 5077 | 5680 | 6303 | 5.7656e+3 ± 4.00958e+2 | 100% | 5537
f13 | RNDE | 8787 | 9155 | 10,266 | 9.484e+3 ± 5.46782e+2 | 100% | 9104
f13 | IRNDE | 3997 | 4158 | 4232 | 4.154e+3 ± 6.94358e+1 | 100% | 4149
f14 | RNDE | 7136 | 7254 | 7679 | 7.3257e+3 ± 1.76951e+2 | 100% | 7320
f14 | IRNDE | 3905 | 3945 | 4055 | 3.9735e+3 ± 5.40684e+1 | 100% | 3999
f15 | RNDE | 111,558 | 113,908 | 123,780 | 1.15099e+5 ± 3.86545e+3 | 100% | 115,568
f15 | IRNDE | 64,774 | 68,523 | 71,652 | 6.89078e+4 ± 1.95258e+3 | 100% | 68,125
f16 | RNDE | 7278 | 7311 | 7637 | 7.3623e+3 ± 1.05346e+2 | 100% | 7636
f16 | IRNDE | 3836 | 3996 | 4063 | 3.9862e+3 ± 7.47274e+1 | 100% | 4004
f17 | RNDE | 117,296 | 124,557 | 128,845 | 1.24642e+5 ± 3.76015e+3 | 100% | 125,185
f17 | IRNDE | 74,880 | 78,511 | 82,122 | 7.8798e+4 ± 2.27369e+3 | 100% | 80,990
f18 | RNDE | 15,527 | 16,158 | 17,455 | 1.64022e+4 ± 6.10143e+2 | 100% | 17,064
f18 | IRNDE | 12,863 | 13,177 | 13,876 | 1.32738e+4 ± 3.59021e+2 | 100% | 12,389
f19 | RNDE | 1896 | 1937 | 2031 | 1.9478e+3 ± 4.20972e+1 | 100% | 1949
f19 | IRNDE | 1519 | 1530 | 1580 | 1.5368e+3 ± 1.88078e+1 | 100% | 1531
f20 | RNDE | 14,092 | 16,088 | 20,233 | 1.67794e+4 ± 2.10221e+3 | 100% | 21,606
f20 | IRNDE | 13,981 | 18,900 | 34,239 | 2.20096e+4 ± 5.75139e+3 | 100% | 25,853
f21 | RNDE | 2333 | 2433 | 2483 | 2.4255e+3 ± 4.67529e+1 | 100% | 2434
f21 | IRNDE | 1873 | 1933 | 2044 | 1.9509e+3 ± 5.22865e+1 | 100% | 1949
f22 | RNDE | 5807 | 6420 | 7360 | 6.4919e+3 ± 4.23616e+2 | 100% | 6823
f22 | IRNDE | 3137 | 3243 | 3660 | 3.3141e+3 ± 1.69914e+2 | 100% | 3916
f23 | RNDE | 4805 | 5018 | 5360 | 5.0267e+3 ± 1.55067e+2 | 100% | 5029
f23 | IRNDE | 2509 | 2575 | 2685 | 2.5812e+3 ± 5.33621e+1 | 100% | 2592
f24 | RNDE | 15,928 | 16,317 | 16,709 | 1.63417e+4 ± 2.60714e+2 | 100% | 15,566
f24 | IRNDE | 12,581 | 13,007 | 13,582 | 1.31475e+4 ± 3.50596e+2 | 100% | 12,686
f25 | RNDE | 401,842 | - | - | - | 10% | 3.44991e+3
f25 | IRNDE | - | - | - | - | 0% | 7.49417e+5
f26 | RNDE | 5996 | 7521 | 8927 | 7.407e+3 ± 9.59153e+2 | 100% | 6808
f26 | IRNDE | 3368 | 4557 | 6620 | 4.7084e+3 ± 1.10688e+3 | 100% | 4394
f27 | RNDE | 2314 | 2404 | 2460 | 2.4018e+3 ± 5.12636e+1 | 100% | 2415
f27 | IRNDE | 1644 | 1686 | 1750 | 1.6957e+3 ± 3.59755e+1 | 100% | 1710
Table 8. Statistically paired two-sample significance t-test for means of RNDE vs. IRNDE.
Function | Algorithm | Mean | Variance | Pearson Correlation | t-Stat | p-Value
f1 | RNDE | 3.16e+04 | 2.00e+09 | - | - | -
f1 | IRNDE | 2.65e+04 | 1.75e+09 | 9.79e-01 | 3.23e+00 | 2.77e-03
f2 | RNDE | 1.23e+05 | 3.08e+09 | - | - | -
f2 | IRNDE | 9.24e+04 | 3.78e+09 | 9.72e-01 | 1.21e+01 | 7.92e-14
f3 | RNDE | 5.92e+01 | 9.70e+02 | - | - | -
f3 | IRNDE | 4.53e+01 | 1.42e+03 | 9.69e-01 | 7.66e+00 | 6.71e-09
f4 | RNDE | 1.94e+70 | 3.02e+141 | - | - | -
f4 | IRNDE | 6.36e+69 | 1.38e+141 | 6.40e-01 | 1.82e+00 | 7.82e-02
f5 | RNDE | 1.55e+08 | 3.83e+16 | - | - | -
f5 | IRNDE | 1.21e+08 | 3.48e+16 | 9.25e-01 | 2.71e+00 | 1.04e-02
f6 | RNDE | 7.00e+00 | 7.35e+01 | - | - | -
f6 | IRNDE | 5.14e+00 | 6.32e+01 | 9.59e-01 | 4.50e+00 | 7.47e-05
f7 | RNDE | 4.58e+09 | 3.97e+19 | - | - | -
f7 | IRNDE | 3.19e+09 | 2.89e+19 | 9.10e-01 | 3.10e+00 | 3.86e-03
f8 | RNDE | -1.72e+04 | 3.59e+07 | - | - | -
f8 | IRNDE | -1.85e+04 | 2.79e+07 | 8.99e-01 | 2.82e+00 | 7.95e-03
f9 | RNDE | 6.70e+02 | 1.39e+03 | - | - | -
f9 | IRNDE | 4.44e+02 | 2.64e+04 | 7.98e-01 | 9.92e+00 | 1.43e-11
f10 | RNDE | 9.63e+00 | 7.74e+01 | - | - | -
f10 | IRNDE | 7.37e+00 | 8.45e+01 | 9.51e-01 | 4.69e+00 | 4.35e-05
f11 | RNDE | 5.00e+02 | 2.03e+05 | - | - | -
f11 | IRNDE | 2.54e+02 | 1.67e+05 | 7.69e-01 | 4.93e+00 | 2.14e-05
f12 | RNDE | 3.85e+08 | 2.30e+17 | - | - | -
f12 | IRNDE | 3.05e+08 | 2.25e+17 | 9.19e-01 | 2.46e+00 | 1.92e-02
f13 | RNDE | 7.72e+08 | 8.16e+17 | - | - | -
f13 | IRNDE | 5.73e+08 | 7.80e+17 | 8.91e-01 | 2.83e+00 | 7.72e-03
f14 | RNDE | 3.74e+04 | 3.17e+09 | - | - | -
f14 | IRNDE | 3.68e+04 | 3.73e+09 | 9.78e-01 | 2.63e-01 | 7.94e-01
f15 | RNDE | 1.82e+05 | 9.61e+09 | - | - | -
f15 | IRNDE | 1.23e+05 | 1.01e+10 | 9.84e-01 | 1.93e+01 | 6.53e-20
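Table 8 compares the two algorithms per function with a paired two-sample t-test on the matched samples, alongside the Pearson correlation between the pairs; a p-value below 0.05 indicates a significant difference. A self-contained sketch of both statistics on toy data is shown below (the arrays are illustrative only, not the paper's run data; in practice `scipy.stats.ttest_rel` would also return the p-value from the t-distribution with n − 1 degrees of freedom):

```python
import math

def paired_t_statistic(x, y):
    """t-statistic of the paired t-test: the mean of the per-pair differences
    divided by its standard error (sample std of differences / sqrt(n))."""
    n = len(x)
    d = [a - b for a, b in zip(x, y)]
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

# Toy paired samples (illustration only).
rnde = [2.0, 4.0, 6.0, 8.0]
irnde = [1.0, 3.0, 5.0, 9.0]
print(paired_t_statistic(rnde, irnde), pearson_r(rnde, irnde))
```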
Baig, M.H.; Abbas, Q.; Ahmad, J.; Mahmood, K.; Alfarhood, S.; Safran, M.; Ashraf, I. Differential Evolution Using Enhanced Mutation Strategy Based on Random Neighbor Selection. Symmetry 2023, 15, 1916. https://doi.org/10.3390/sym15101916