Article

Improving Dual-Population Differential Evolution Based on Hierarchical Mutation and Selection Strategy

School of Artificial Intelligence and Computer Science, Jiangnan University, Lihu Avenue, Wuxi 214122, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(1), 62; https://doi.org/10.3390/electronics13010062
Submission received: 4 December 2023 / Revised: 14 December 2023 / Accepted: 19 December 2023 / Published: 22 December 2023

Abstract

The dual-population differential evolution (DDE) algorithm is an optimization technique that simultaneously maintains two populations to balance global and local search. It has been demonstrated to outperform single-population differential evolution algorithms. However, existing improvements to dual-population differential evolution algorithms often overlook the importance of selecting appropriate mutation and selection operators to enhance algorithm performance. In this paper, we propose a dual-population differential evolution (DPDE) algorithm based on a hierarchical mutation and selection strategy. We divided the population into elite and normal subpopulations based on fitness values. Information exchange between the two subpopulations was facilitated through a hierarchical mutation strategy, promoting a balanced exploration–exploitation trade-off in the algorithm. Additionally, this paper presents a new hierarchical selection strategy aimed at improving the population’s capacity to avoid local optima. It achieves this by accepting discarded trial vectors differently compared to previous methods. We expect that the newly introduced hierarchical selection and mutation strategies will work in synergy, effectively harnessing their potential to enhance the algorithm’s performance. Extensive experiments were conducted on the CEC 2017 and CEC 2011 test sets. The results showed that the DPDE algorithm offers competitive performance, comparable to six state-of-the-art differential evolution algorithms.

1. Introduction

In recent years, nature-inspired optimization algorithms [1,2,3,4,5] have been extensively researched for solving complex optimization problems. Storn and Price, inspired by natural evolution, proposed the differential evolution (DE) algorithm, which has been successfully applied to a broad spectrum of optimization challenges, ranging from multi-objective problems [6] to constrained optimization [7] and large-scale optimization [8,9]. Over the past few years, many excellent DE variants have been proposed [10,11,12,13,14]. This article primarily focuses on enhancing the performance of DE for single-objective numerical optimization.
The single-population differential evolution algorithm constructs a population of multiple individuals and then iteratively evolves them through the mutation, crossover, and selection operators to continuously find better solutions. The mutation strategy has a significant impact on the performance of the algorithm [15,16,17,18]. Improper setting of the mutation strategy may lead to two extremes: (1) premature convergence and (2) inability to converge. For instance, the mutation strategy DE/rand/1 is recognized for its robust exploration capabilities, because it does not depend on specific individuals but selects the vectors driving evolution entirely at random; this helps the algorithm explore different regions of the search space. However, it is incapable of fully exploiting high-quality regions, so more iterations are needed to find the global optimal solution. The mutation strategy DE/best/1 uses the best individual in the population as the base vector and thus possesses significant convergence capabilities, efficiently guiding the search towards optimized solutions. However, this inevitably leads to a rapid decline in population diversity. To better balance exploration and exploitation, some researchers have proposed new mutation strategies. Zhang and Anderson [19] enhanced diversity and achieved a more-balanced exploration and exploitation of the evolutionary process by introducing an external archive to guide the mutation operators. However, this method does not fully leverage the potential of superior individuals. Yintong Li [20] further proposed a new directional mutation strategy to ensure that evolution progresses in a better direction. Wang [21] introduced the DE/current-to-lpbest/1 mutation strategy, which leverages a blend of the current individual's information and the elite solutions.
This approach involves guiding the mutation process with insights from multiple elite individuals, enhancing the algorithm’s ability to explore potential areas more effectively. EFADE [22] introduced a new triangular mutation operator to better balance exploration and exploitation tendencies. Some researchers have found that combining multiple mutation strategies can help further enhance the algorithm’s performance when faced with different problems. Based on the above discussion, how to choose an appropriate mutation strategy to balance exploration and exploitation is challenging.
Besides the mutation strategy, the survivor selection strategy also significantly impacts the performance of the differential evolution algorithm. Most research tends to use a greedy selection strategy. This strategy, based on the principle of natural selection, accepts individuals with better fitness values and eliminates those with poorer ones. Although this can make the population converge quickly to better values, it discards information from the rejected trial individuals that could help improve the algorithm's performance, and it is inefficient when facing local optima. Some studies have shown that improving the survivor selection strategy can also enhance the performance of differential evolution algorithms. Pravesh Kumar and Millie Pant [23] proposed a new selection strategy in which, instead of a competition between the target vector and its corresponding trial vector, all candidate vectors from both are screened, allowing better individuals to enter the next generation. Tian [24] proposed a new selection strategy in which the selection of individuals is related not only to their fitness values, but also to their positions, enabling full utilization of exploration information and enhancing diversity. Guo [25] proposed a subset-to-subset selection operator that divides the target population and the trial population into multiple subsets and uses a rank-based selection operator in the corresponding subsets to improve the convergence performance of DE. Zeng [26] introduced a new selection operator designed to aid stagnant individuals. When an individual is stuck, this operator first replaces it with the best vector the individual previously discarded. If that does not lead to evolution, it then uses the second-best vector, and so on, through all the vectors from the individual's past successful updates. If none of these vectors help the individual improve, the operator replaces the individual with the best individual from its own history. This strategy aims to assist the individual in escaping local optima.
Based on the above discussion, previous studies have typically applied improved mutation and selection strategies to single-population differential evolution algorithms. However, due to the fact that single-population DE algorithms maintain only one population, the inevitable exchange guidance between individuals during the evolution process may lead to a decrease in population diversity, shifting the focus towards local search. This can result in the algorithm prematurely converging to local optima, without fully exploring other potential global optima in the search space.
In response to the limitations of single-population DE algorithms, many researchers have attempted to improve DE algorithms using multi-population techniques, achieving promising results [27,28,29]. Among them, the performance of dual-population algorithms stands out, in particular. Pan [30] proposed a dual-population differential evolution algorithm based on different combinations of mutation strategies. This approach enhances the robustness of populations when facing various problems by applying mutation strategies with different exploration and exploitation capabilities to different populations. Li [31] introduced a dual-population DE algorithm based on fitness stratification, utilizing a leader population to guide an adjoint population in exploring more-promising regions while maintaining good convergence properties. Furthermore, Wang [32] presented a DE algorithm that co-evolves a population with an accompanying population. It stores suboptimal solutions discarded by the main population in the previous generation and uses these individuals to assist in the subsequent evolution through mutation strategies, greatly enhancing population diversity in this manner.
These studies indicate that existing dual-population differential evolution algorithms can be categorized into three types:
1. Dual-population with varied mutation strategies: This approach uses different mutation strategies for exploration and exploitation to achieve a balance between global and local searches. It enhances the algorithm's convergence speed, global search capability, and robustness across various problems, facilitating the discovery of high-quality solutions more efficiently. However, it overlooks inter-population information exchange, potentially impairing the algorithm's full perception of target areas and leading to performance decline. Moreover, its tendency to employ a greedy selection strategy can result in stagnation when encountering local optima.
2. Hierarchical guidance in dual-populations: In this approach, a hierarchy is established between two populations: an elite population guides a standard population. This guidance aims to facilitate the exchange of information and strategies between the two groups, leveraging the strengths of the elite population to improve overall performance. However, this method has its pitfalls. Over-reliance on elite individuals can decrease diversity within the population, as the algorithm may focus too narrowly on the solutions proposed by the elite. This reduced diversity can make the algorithm susceptible to becoming trapped in local optima, where it repeatedly explores suboptimal solutions without finding better alternatives.
3. Main and accompanying populations' collaboration: This type involves creating an accompanying population to support the main population's evolution. The accompanying population stores individuals discarded by the main population, which can later be reintegrated into the mutation strategies. This reintegration aims to increase diversity within the main population, countering the natural tendency of evolutionary algorithms to converge on similar solutions. However, this approach also faces a significant challenge when dealing with local optima: too many stagnant individuals, who do not contribute to further evolution, in both the main and auxiliary populations can severely hinder the evolutionary process, making it difficult for the algorithm to progress beyond suboptimal solutions.
Based on the above discussion, we propose a dual-population differential evolution algorithm (DPDE) based on a hierarchical mutation and selection strategy. On the one hand, we divided the population into different subpopulations based on fitness and implemented a hierarchical mutation strategy to better balance exploration and exploitation. On the other hand, we introduced a new selection strategy to enhance individuals' ability to escape local optima. This selection strategy consists of two main steps: (1) determining individual stagnation using our proposed state definition and (2) if stagnation is detected, applying the appropriate survivor selection strategy based on the individual's population. In this way, we hope to improve the performance of the DE algorithm when facing local optima and to allow better cooperation with the mutation strategy.
This paper’s primary contributions are outlined as follows:
1. In this study, a hierarchical mutation strategy is proposed for the dual-population differential evolution algorithm. Specifically, the population is divided into elite and normal subpopulations based on the fitness values. The elite subpopulation focuses on optimizing solutions, while the normal subpopulation is dedicated to exploring the search space. By adopting this hierarchical mutation strategy, effective information exchange between subpopulations is facilitated, thereby enhancing the overall performance of the algorithm.
2. An innovative selection strategy is proposed in this paper. Firstly, a new criterion is introduced to identify whether an individual has fallen into a local optimum. Subsequently, for individuals in stagnation, survivors in the selection strategy are not limited to those with superior fitness, but may include different individuals. This approach enhances the performance of the dual-population differential evolution algorithm when facing local optima.
3. Comprehensive evaluations were performed using the CEC 2017 and CEC 2011 benchmark suites. The results indicated that our proposed DPDE algorithm exhibited superior performance in comparison to several state-of-the-art algorithms.
The rest of this paper is structured as follows. Section 2 introduces the traditional DE algorithm. Section 3 details our proposed algorithm, DPDE. In Section 4, we present the comparative results of DPDE with other algorithms on the CEC 2017 and CEC 2011 test suites, along with an analysis of the parameter sensitivity and the method’s effectiveness. Finally, Section 5 provides a summary of the paper.

2. Basic Differential Evolution

In this section, we will provide a detailed introduction to the structure of the differential evolution algorithm, mainly through the four operations of initialization, mutation, crossover, and selection. The minimization problem considered in this article is as follows:
$\min f(X_i), \quad X_i \in S$

In this formula, $f(\cdot)$ denotes the objective function value and $X_i$ stands for an individual of the population. The search space S is defined as

$S = \prod_{j=1}^{D} [X_{j,\min}, X_{j,\max}]$

where $X_{j,\min}$ and $X_{j,\max}$ correspond to the lower and upper bounds of the j-th dimension of the individual X, respectively. D represents the maximum dimension of the individual.

2.1. Initialization

The initialization operation generates a population $pop$ consisting of $NP$ individuals, $pop = (X_1, X_2, \ldots, X_{NP})$. Each individual is initialized as follows:

$X_{i,j} = X_{j,\min} + \mathrm{rand}(0,1) \times (X_{j,\max} - X_{j,\min})$

where $\mathrm{rand}(0,1)$ is a random number uniformly distributed in the range [0, 1] and $X_{i,j}$ represents the j-th dimension of individual i.
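As a minimal Python sketch of Equation (3) (function and variable names are ours, not the paper's):

```python
import numpy as np

def initialize_population(pop_size, dim, x_min, x_max, rng=None):
    """Uniform random initialization within [x_min, x_max], as in Equation (3)."""
    rng = rng if rng is not None else np.random.default_rng()
    # X_{i,j} = X_{j,min} + rand(0,1) * (X_{j,max} - X_{j,min})
    return x_min + rng.random((pop_size, dim)) * (x_max - x_min)

pop = initialize_population(pop_size=10, dim=5, x_min=-100.0, x_max=100.0)
print(pop.shape)  # (10, 5)
```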

2.2. Mutation

In the g-th generation, the mutation strategy generates a mutant vector $V_i^g$ based on the target vector $X_i^g$. The following are six widely used mutation strategies:
1. DE/rand/1:
$V_i^g = X_{r_1}^g + F \times (X_{r_2}^g - X_{r_3}^g)$
2. DE/rand/2:
$V_i^g = X_{r_1}^g + F \times (X_{r_2}^g - X_{r_3}^g) + F \times (X_{r_4}^g - X_{r_5}^g)$
3. DE/best/1:
$V_i^g = X_{best}^g + F \times (X_{r_1}^g - X_{r_2}^g)$
4. DE/best/2:
$V_i^g = X_{best}^g + F \times (X_{r_1}^g - X_{r_2}^g) + F \times (X_{r_3}^g - X_{r_4}^g)$
5. DE/current-to-rand/1:
$V_i^g = X_i^g + F \times (X_{r_1}^g - X_i^g) + F \times (X_{r_2}^g - X_{r_3}^g)$
6. DE/current-to-pbest/1:
$V_i^g = X_i^g + F \times (X_{pbest}^g - X_i^g) + F \times (X_{r_1}^g - \bar{X}_{r_a}^g)$
Among them, $X_{best}^g$ represents the best individual in the g-th generation of the population, $X_{pbest}^g$ represents an individual randomly selected from the top 100p% best-fitness individuals in the g-th generation of the population, $r_a$ is a randomly generated integer from the set $\{1, 2, \ldots, NP + |A|\}$, A is an archive that stores the parent vectors discarded by the selection strategy, $\bar{X}^g$ is the union of the population and the archive A, and the subscripts $r_1, r_2, r_3, r_4$, and $r_5$ are mutually distinct integers randomly generated from the set $\{1, 2, \ldots, NP\}$.
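As an illustrative sketch (in Python; the names and the archive handling are our own assumptions), two of these strategies might be implemented as:

```python
import numpy as np

def de_rand_1(pop, i, F, rng):
    """DE/rand/1: v = x_r1 + F * (x_r2 - x_r3), with distinct random indices != i."""
    r1, r2, r3 = rng.choice([k for k in range(len(pop)) if k != i],
                            size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_current_to_pbest_1(pop, fitness, archive, i, F, p, rng):
    """DE/current-to-pbest/1: v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_ra),
    where x_ra is drawn from the union of the population and the archive."""
    n = len(pop)
    pbest = rng.choice(np.argsort(fitness)[:max(1, int(round(p * n)))])
    r1 = rng.choice([k for k in range(n) if k != i])
    union = np.vstack([pop, archive]) if len(archive) else pop
    ra = rng.choice(len(union))
    return pop[i] + F * (pop[pbest] - pop[i]) + F * (pop[r1] - union[ra])
```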

2.3. Crossover

After the mutation operation, the mutant vector and the target vector are crossed to obtain a trial vector. The commonly used binomial crossover is defined as follows:

$U_{i,j}^g = \begin{cases} V_{i,j}^g & \text{if } \mathrm{rand}(0,1) \le CR \ \text{or}\ j = j_{rand} \\ X_{i,j}^g & \text{otherwise} \end{cases}$

where CR represents the crossover rate, $\mathrm{rand}(0,1)$ is a random number uniformly distributed in the range [0, 1], $j_{rand}$ is a random integer generated from the range [1, D], and D is the maximum value of the individual's dimension.
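A minimal sketch of this binomial crossover (assuming NumPy; the naming is ours):

```python
import numpy as np

def binomial_crossover(x, v, cr, rng):
    """Binomial crossover of Equation (10): inherit v_j when rand(0,1) <= CR
    or j == j_rand; j_rand guarantees at least one mutant dimension survives."""
    d = len(x)
    mask = rng.random(d) <= cr
    mask[rng.integers(d)] = True  # the j_rand position
    return np.where(mask, v, x)
```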

2.4. Selection

Following the crossover process, the selection mechanism decides between the trial vector U i g and the associated target vector X i g , with the superior of the two progressing to the subsequent generation. The formula based on the greedy survivor selection strategy is as follows:
$X_i^{g+1} = \begin{cases} X_i^g & \text{if } f(X_i^g) < f(U_i^g) \\ U_i^g & \text{otherwise} \end{cases}$
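A sketch of the greedy selection for a minimization problem (names are ours):

```python
def greedy_select(x, u, f):
    """Greedy survivor selection of Equation (11): keep the target x only if it is
    strictly better than the trial u; otherwise the trial enters the next generation."""
    return x if f(x) < f(u) else u

# usage with a simple sphere objective
sphere = lambda vec: sum(c * c for c in vec)
print(greedy_select([1.0, 2.0], [0.5, 0.5], sphere))  # [0.5, 0.5]
```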

3. Dual-Population Differential Evolution Based on Hierarchical Mutation and Selection Strategy

In differential evolution algorithms, achieving a balance between exploration and exploitation, as well as assisting individuals in escaping local optima, poses significant challenges. Traditional dual-population differential evolution algorithms often struggle to address these issues effectively. On the one hand, the selection of appropriate mutation strategies is a challenging task. On the other hand, when individuals become trapped in local optima, traditional greedy selection strategies often struggle to efficiently drive further population evolution.
To overcome these challenges, this paper initially divides the population into two subpopulations, then introduces a hierarchical mutation strategy to balance the exploration and exploitation capabilities of the population. A new hierarchical selection strategy is introduced to assist the dual-population differential evolution algorithm to better address the local optima issues. These two strategies work in tandem to enhance the overall performance of the population. Ultimately, based on these strategies, the paper proposes the DPDE algorithm.

3.1. Dual-Population Strategy

The dual-population differential evolution algorithm has been extensively studied by researchers in recent years. It can adopt different mutation strategies in different populations, thereby facilitating the maintenance of good exploration and exploitation capabilities. In this paper, we employed fitness-based grouping of the population, where individuals ranking within the top bp% are classified as the elite population (ep), while the remaining individuals are categorized as the normal population (np). Notably, the ep focuses more on refining solution quality, whereas the np specializes in exploring potential solution spaces. We introduce a novel hierarchical mutation strategy aimed at maximizing information exchange between these two subpopulations, enhancing search capabilities and preventing premature convergence. As illustrated in Figure 1, individuals within the ep (depicted by red circles) represent the top-performing fraction, tasked with uncovering superior solutions. In contrast, the np, guided by the ep, explores areas with untapped potential within the solution space. This approach leverages the information from the ep to guide the evolution of the np, thus promoting a balanced exploration–exploitation strategy. Furthermore, we incorporated a novel hierarchical selection strategy designed to improve individuals' ability to escape local optima. Detailed explanations of the hierarchical mutation and selection strategies follow in subsequent sections. At the end of each generation, individuals in these two subpopulations are reclassified based on their fitness values, facilitating information exchange and ensuring the leadership of the ep.
The value of bp influences the sizes of the np and ep, which, in turn, affect the algorithm's performance. We designed a larger bp value in the early stage, benefiting the normal population in exploring more potential regions and, thus, boosting the exploration capability. Later, we reduced the bp value, allowing both populations to learn from the top-performing individuals in the entire population, thereby enhancing the algorithm's convergence capability. The formula for bp is as follows:

$bp = 0.29 \times \left(1 - \frac{nfes}{maxnfes}\right) + 0.11$

Here, $nfes$ represents the current number of function evaluations and $maxnfes$ represents the maximum number of function evaluations.
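The bp schedule is straightforward to express in code (a sketch; the function name is ours):

```python
def elite_fraction(nfes, maxnfes):
    """Equation (12): bp decreases linearly from 0.40 at the start of the run
    to 0.11 when the evaluation budget is exhausted."""
    return 0.29 * (1.0 - nfes / maxnfes) + 0.11

print(round(elite_fraction(0, 10000), 2), round(elite_fraction(10000, 10000), 2))  # 0.4 0.11
```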

3.2. Hierarchical Mutation Strategy

In this section, we will introduce the hierarchical mutation strategy used in this paper. The hierarchical mutation strategy consists primarily of two mutation strategies, namely DE/current-to-pbest/1 and improved DE/current-to-pbest/1, employed for the elite population and the normal population, respectively.
For the elite population: We adopted the DE/current-to-pbest/1 mutation strategy from JADE. This strategy selects from the top 100p% individuals of the population and utilizes an external archive A to maintain diversity. This approach possesses an impressive convergence rate. Through this strategy, we aimed for the individuals to search within the best regions of the population, thereby finding improved solutions.
$V_i^g = X_i^g + F \times (X_{pbest}^g - X_i^g) + F \times (X_{r_1}^g - \bar{X}_{r_a}^g)$

where $X_{pbest}^g$ represents an individual randomly selected from the 100p% best-fitness individuals of the population in the g-th generation, $r_a$ is a randomly generated integer from the set $\{1, 2, \ldots, NP + |A|\}$, A is an archive storing the parent vectors eliminated by the selection strategy, and $\bar{X}^g$ is the union of the population and the archive A.
For the normal population: We employed the improved DE/current-to-pbest/1. This strategy modifies DE/current-to-pbest/1 by replacing the selection from the top 100p% individuals of the population with a selection from the elite population. Through this strategy, the less-proficient individuals in the population accept guidance from the superior individuals in the elite population, thereby exploring more-promising regions and enhancing the algorithm's exploration capabilities while maintaining a certain degree of convergence.
$V_i^g = X_i^g + F \times (X_{ep_1}^g - X_i^g) + F \times (X_{r_1}^g - \bar{X}_{r_a}^g)$

where $X_{ep_1}^g$ is an individual randomly selected from the elite population.
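Putting Equations (13) and (14) together, a sketch of the hierarchical mutation (our own naming and index handling; not the authors' reference implementation) could look like:

```python
import numpy as np

def hierarchical_mutation(pop, fitness, archive, i, F, bp, p, rng):
    """Elite individuals (top bp% by fitness) use DE/current-to-pbest/1
    (Equation (13)); normal individuals are instead guided by a randomly
    chosen elite member (Equation (14))."""
    n = len(pop)
    order = np.argsort(fitness)                      # ascending: better first
    n_elite = max(1, int(round(bp * n)))
    union = np.vstack([pop, archive]) if len(archive) else pop
    r1 = rng.choice([k for k in range(n) if k != i])
    ra = rng.choice(len(union))
    if i in order[:n_elite]:                         # elite population
        guide = rng.choice(order[:max(1, int(round(p * n)))])
    else:                                            # normal population
        guide = rng.choice(order[:n_elite])
    return pop[i] + F * (pop[guide] - pop[i]) + F * (pop[r1] - union[ra])
```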

3.3. Hierarchical Selection Strategy

As mentioned earlier, we propose a dual-population DE algorithm. This approach achieves a balance between exploration and exploitation through multiple mutation strategies. However, during the evolutionary process, individuals often get stuck in local optima. Traditional greedy selection strategies struggle to provide an effective solution. Therefore, to address this issue more effectively, we designed a new selection strategy. This strategy consists of two steps: (1) determining if an individual has fallen into a local optimum; (2) deciding on the selection strategy based on the population to which the individual belongs. We will now provide a detailed explanation of these two steps.
First phase: Determining whether an individual is trapped in a local optimum is challenging. While many studies lean towards a simplified approach, i.e., considering that the individual has stagnated when its fitness has not improved over a certain number of generations, this method overlooks the probability of an individual getting trapped in local optima at different stages of evolution. In the early stages of evolution, individuals with poorer fitness are more likely to improve. If there is no significant improvement in fitness over a long time, it is highly probable that they have fallen into a local optimum. In contrast, in the later stages of evolution, individuals often focus on convergence, and improving fitness becomes challenging. Therefore, a more-stringent stagnation judgment criterion is required. To make a more-accurate judgment, we propose a new formula for determining individual stagnation based on the iteration phase, as shown in Equation (15).
$T = \begin{cases} T_1 & \text{if } nfes \le 0.5 \times maxnfes \\ T_1 + \frac{nfes - 0.5 \times maxnfes}{0.5 \times maxnfes} \times (T_2 - T_1) & \text{if } nfes > 0.5 \times maxnfes \end{cases}$

In the formula, T represents the threshold used for the individual stagnation judgment. T1 and T2 represent the minimum and maximum values of the threshold, respectively. $nfes$ denotes the current number of function evaluations, while $maxnfes$ stands for the maximum number of function evaluations. Specific settings will be discussed in detail in the experimental section. As the formula suggests, in the early stage of evolution, we set a relatively small fixed stagnation threshold for the population. It is noteworthy that this value should not be set too low, to reduce misjudgments. In the later stage of evolution, we employed a linearly increasing stagnation threshold, adapting to the increased difficulty of judging stagnation as evolution progresses and helping the population converge better.
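Equation (15) can be sketched as follows (function and parameter names are ours):

```python
def stagnation_threshold(nfes, maxnfes, t1, t2):
    """Equation (15): a fixed threshold T1 during the first half of the run,
    then a linear increase from T1 to T2 over the second half."""
    half = 0.5 * maxnfes
    if nfes <= half:
        return t1
    return t1 + (nfes - half) / half * (t2 - t1)

print(stagnation_threshold(25, 100, 20, 60),
      stagnation_threshold(75, 100, 20, 60))  # 20 40.0
```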
Second phase: When an individual falls into a local optimum, we adopted a targeted selection strategy based on the population’s characteristics.
For individuals in the normal population that fall into a local optimum, our first approach was to accept the suboptimal trial individual they generate. If they remain stagnant after accepting this for T3 generations, we concluded that the current individual is in a difficult-to-improve area. At that point, we randomly selected the best discarded trial vector from the historical data of an individual in the elite population. This approach has two advantages. First, it reduces the computational resources consumed by stagnant individuals in the normal population. Second, by incorporating a high-quality trial vector with a good fitness value, we gain insights from superior regions. By employing this strategy, we can better explore potentially promising regions. As illustrated in Figure 2, when individual A from the normal population falls into a local optimum, we first accept the inferior trial vector C, which may be perturbed into vector D, thereby escaping the local optimum. When the perturbation range around A is small and we accept trial individual B, it is still difficult to escape the local optimum. If, after T3 generations, the fitness of the individual still has not improved, we then accept the unused best discarded vector C from the history of the elite population. We hope to further explore promising regions based on C and assist individual A in escaping the local optimum.
Stagnant individuals in the elite population already possess relatively good fitness values, so we focused more on whether the surrounding fitness landscape can help them escape local optima. Therefore, we selected the discarded trial individuals generated in the current generation. Through this method, we expected better local exploitation. With the hierarchical selection (HS) strategy, the population's exploration and exploitation abilities are further enhanced. It is worth noting that the best individuals still use a greedy selection strategy to preserve the algorithm's stability.

3.4. Control Parameter

The efficiency of the differential evolution algorithm is significantly affected by its parameters. For the parameter settings, Success-History-Based Parameter Adaptation for Differential Evolution (SHADE) has achieved widespread success, so we also adopted it in this paper. The SHADE method maintains two sets, $S_{CR}$ and $S_F$, which store the crossover rate CR and scaling factor F values used during successful updates, respectively. In the G-th generation, the F and CR values for each individual in the population are generated using the following formulae:
$F_{i,G} = \mathrm{randc}(u_{F,r}, 0.1)$

$CR_{i,G} = \mathrm{randn}(u_{CR,r}, 0.1)$

where randc represents the Cauchy distribution and randn represents the normal distribution. $u_{F,r}$ and $u_{CR,r}$ are randomly selected from the historical memories $M_F$ and $M_{CR}$, respectively, where $r = 1, \ldots, H$ and H is the pre-defined size of the memory. Initially, all entries of $M_F$ and $M_{CR}$ are set to 0.5 and are updated at the end of each generation. The update formulas for $M_F$ and $M_{CR}$ are as follows:
$w_k = \frac{\Delta f_k}{\sum_{k=1}^{|S_{CR}|} \Delta f_k}$

$M_{CR} = \mathrm{mean}_{WL}(S_{CR}) = \frac{\sum_{k=1}^{|S_{CR}|} w_k \times S_{CR,k}^2}{\sum_{k=1}^{|S_{CR}|} w_k \times S_{CR,k}}$

$M_F = \mathrm{mean}_{WL}(S_F) = \frac{\sum_{k=1}^{|S_F|} w_k \times S_{F,k}^2}{\sum_{k=1}^{|S_F|} w_k \times S_{F,k}}$

where $\Delta f_k = |f(U_{k,G}) - f(X_{k,G})|$ represents the improvement magnitude of the fitness value for individual k that was successfully updated in the G-th generation, and $|S|$ denotes the size of the corresponding set. This method effectively achieves the adaptive adjustment of the parameters. In this paper, F is truncated to 0.6 in the early stage and extended to the range [0, 1] in the later stage.
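The weighted Lehmer mean update of Equations (18)–(20) can be sketched as follows (names are ours):

```python
def weighted_lehmer_mean(s_vals, delta_f):
    """Weighted Lehmer mean used to update M_F and M_CR (Equations (18)-(20)).
    delta_f[k] is the fitness improvement of the k-th successful individual."""
    total = sum(delta_f)
    w = [d / total for d in delta_f]                 # Equation (18)
    num = sum(wk * s * s for wk, s in zip(w, s_vals))
    den = sum(wk * s for wk, s in zip(w, s_vals))
    return num / den

print(weighted_lehmer_mean([0.2, 0.8], [1.0, 1.0]))  # ≈ 0.68, biased toward larger values
```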
Reducing the population size during the iteration process can enhance the performance of the algorithm. In this paper, we adopted the Linear Population Size Reduction strategy (LPSR). Under this strategy, we used a linear formula to adjust the population size in each generation. The specific formula for updating the population size is as follows:
$N_{G+1} = \mathrm{round}\left(\frac{N_{\min} - N_{init}}{maxnfes} \times nfes + N_{init}\right)$

where $N_{init}$ represents the initial population size and $N_{\min}$ represents the predetermined minimum population size. At the end of each generation, the $(N_G - N_{G+1})$ individuals with the worst fitness in the population are removed.
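A sketch of the LPSR schedule of Equation (21) (the naming is ours):

```python
def lpsr_population_size(nfes, maxnfes, n_init, n_min):
    """Equation (21): linearly shrink the population from N_init down to N_min."""
    return round((n_min - n_init) / maxnfes * nfes + n_init)

print(lpsr_population_size(0, 10000, 100, 4),
      lpsr_population_size(10000, 10000, 100, 4))  # 100 4
```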
Based on the above discussion, the pseudocode for DPDE is shown as Algorithm 1.
Algorithm 1 DPDE algorithm.
1: Set NP = 18·D, N_min = 4, and initialize T1, T2, T3
2: Initialize p_0 randomly according to Equation (3), p_0 = {X_1, X_2, …, X_NP}
3: Set M_F = 0.5, M_CR = 0.5, A = ∅, nfes = NP, set all values in count to 0, H = 5
4: while nfes < maxnfes do
5:     S_F = ∅, S_CR = ∅, CR = ∅, F = ∅
6:     for i = 1 to NP do
7:         r = an index randomly selected from [1, H]
8:         if M_CR,r < 0 then
9:             CR_i = 0
10:        else
11:            CR_i = N(M_CR,r, 0.1)
12:        end if
13:        F_i = C(M_F,r, 0.1)
14:        if nfes < 0.6·maxnfes and F_i > 0.6 then
15:            F_i = 0.6
16:        end if
17:        Calculate bp according to Equation (12)
18:        Calculate T according to Equation (15)
19:        if i ≤ bp·NP then
20:            Compute V_i^g by mutation Equation (13)
21:        else
22:            Compute V_i^g by mutation Equation (14)
23:        end if
24:        Compute U_i^g by crossover Equation (10)
25:    end for
26:    Sort the population by ascending fitness values
27:    for i = 1 to NP do
28:        if f(U_i^g) < f(x_i^g) then
29:            x_i^{g+1} = U_i^g, count(i) = 0, flag(i) = 0
30:            x_i^g → A, F_i → S_F, CR_i → S_CR
31:        else
32:            count(i) = count(i) + 1
33:            if i < bp·NP and count(i) ≥ T then
34:                x_i^{g+1} = U_i^g
35:            else if i ≥ bp·NP and count(i) ≥ T then
36:                if flag(i) == 0 then
37:                    x_i^{g+1} = U_i^g
38:                else
39:                    flag(i) = flag(i) + 1
40:                end if
41:                if flag(i) ≥ T3 then
42:                    x_i^{g+1} = the historical best discarded vector that individual i has never used
43:                end if
44:            end if
45:        end if
46:        Update A if necessary
47:    end for
48:    Calculate NP_new according to Equation (21)
49:    NP = NP_new
50:    if S_F ≠ ∅ then
51:        Calculate W_k according to Equation (18)
52:        Update M_F,k and M_CR,k according to Equations (19) and (20)
53:    end if
54:    nfes = nfes + NP
55: end while
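The hierarchical selection step (lines 27–47 of Algorithm 1) can be sketched in Python as follows. This is an illustrative simplification, not the authors' MATLAB implementation: the function and variable names are ours, the archive update and sorting are omitted, the stagnation threshold T is passed in as a constant, and we explicitly set flag(i) = 1 when a stagnant normal-subpopulation individual first accepts a worse trial, which the listing leaves implicit.

```python
def hierarchical_select(pop, trials, fit, trial_fit, count, flag, bp, T, T3, best_discarded):
    """One generation of the hierarchical selection strategy (sketch).

    Individuals with index i < bp*NP form the elite subpopulation (the lists
    are assumed sorted by ascending fitness); count tracks consecutive
    non-improving generations; best_discarded maps a normal individual's
    index to a stored historical discarded (vector, fitness) pair.
    """
    NP = len(pop)
    for i in range(NP):
        if trial_fit[i] < fit[i]:
            # Greedy acceptance: trial improves on the target vector.
            pop[i], fit[i] = trials[i], trial_fit[i]
            count[i], flag[i] = 0, 0
        else:
            count[i] += 1
            if count[i] >= T:                       # individual is stagnant
                if i < bp * NP:
                    # Elite: accept the current discarded trial vector.
                    pop[i], fit[i] = trials[i], trial_fit[i]
                elif flag[i] == 0:
                    # Normal: first escape attempt also accepts the trial.
                    pop[i], fit[i] = trials[i], trial_fit[i]
                    flag[i] = 1
                else:
                    flag[i] += 1
                    if flag[i] >= T3 and i in best_discarded:
                        # Fall back to a stored historical discarded vector.
                        pop[i], fit[i] = best_discarded[i]
    return pop, fit
```

Called once per generation, this reproduces the behavior described in Section 3: elite individuals escape stagnation by taking the current generation's discarded trial, while normal individuals that stay stagnant for T3 further generations fall back to a historical discarded vector.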

4. Experiments’ Analysis and Comparison

In this section, we primarily discuss the performance of the proposed DPDE on the CEC 2017 test set [33], comparing it with several advanced DE algorithms. We also verify the effectiveness of the proposed selection strategy and discuss the settings of the parameters involved in the method in detail. The test dimensions in this paper are 10, 30, 50, and 100, with a search range of [−100, 100]. For each function, the optimal value is known. Each algorithm was run independently 51 times, recording the mean and standard deviation of the fitness error value f(X_best) − f(X*), where X_best represents the best solution achieved by the algorithm in a single run and X* denotes the true global optimum of each function. When the fitness error value is less than 10^(−8), it is treated as 0. For the CEC 2011 test set [34], the maximum number of evaluations was set to 150,000, and each algorithm was run 25 times on each instance. Moreover, to better assess the algorithms' performance, this study conducted non-parametric statistical tests at the 5% significance level, namely the Wilcoxon signed-rank test and the Friedman test.
All algorithms were implemented in MATLAB 2021. The experiments were executed on a system with an AMD Ryzen 7 5800H processor (3.20 GHz) and 16 GB of RAM.

4.1. Experimental Setup

In this subsection, we delve into the specifics of the DPDE algorithm on the CEC 2017 test set. The CEC 2017 test suite includes unimodal functions (f1–f3), simple multi-modal functions (f4–f10), hybrid functions (f11–f20), and composition functions (f21–f30). The symbols “+”, “−”, and “=” indicate whether DPDE performed better than, worse than, or similarly to the corresponding algorithm, respectively. The algorithms taken for comparison were JADE, LSHADE, jSO [35], MPEDE [36], HIP-DE [37], and PaDE [38]. JADE is a classic differential evolution algorithm. LSHADE was the champion of the CEC 2014 real-parameter single-objective optimization competition. jSO was the top performer in the CEC 2017 competition. MPEDE is a classic multi-population differential evolution algorithm. Both HIP-DE and PaDE are recent competitive differential evolution algorithms. The specific parameter settings are displayed in Table 1. For DPDE, the parameters were set as follows: T1 was 48 for 10D and 24 for 30D, 50D, and 100D; T2 was 208 for 10D, 30D, and 50D, while it was 160 for 100D; T3 was consistently set to 16.

4.2. Optimization Accuracy

To thoroughly validate the DPDE algorithm’s effectiveness, this study conducted an extensive analysis using the CEC 2017 test suite. The results are showcased in Table A1, Table A2, Table A3 and Table A4. In these tables, the lowest average error values for each function are highlighted in bold. For 10D, among the 30 benchmarks, our algorithm performed significantly better than HIP-DE for 11 functions, outperformed PaDE for 8, surpassed jSO for 12, was superior to LSHADE for 10, exceeded JADE for 14, and outshone MPEDE for 19. Our approach achieved the best performance for the functions f5, f7, f10, f17, and f29. For 30D, our algorithm outperformed HIP-DE for 12 functions, PaDE for 12, jSO for 15, LSHADE for 17, JADE for 21, and MPEDE for 17, securing the best results for functions f5, f7, f8, f10, f16, f17, f21, and f29. For 50D, our technique surpassed HIP-DE in 17 instances, PaDE in 15, jSO in 14, LSHADE in 16, JADE in 26, and MPEDE in 23, achieving optimal performance for functions f5, f6, f7, f8, f10, f16, f17, f20, f21, f26, and f29. For 100D, our algorithm bested HIP-DE on 16 functions, PaDE on 21, jSO on 14, LSHADE on 19, JADE on 22, and MPEDE on 23, excelling for functions f5, f7, f8, f10, f16, f17, f20, f21, f22, f23, f24, f26, f28, and f29.
Table 2 lists the rankings of the different algorithms under the Friedman test on the CEC 2017 benchmark functions. A lower rank denotes better algorithmic performance, and the top-ranked algorithm is emphasized in bold. It can be observed that, for 10 dimensions, the PaDE algorithm ranked first, while our proposed technique came in a close second. In the 30D, 50D, and 100D settings, our method clinched the top ranking. This underscores the observation that, as the dimensionality increased, the advantage of our algorithm became more pronounced.
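The Friedman mean ranks behind such a table are obtained by ranking the algorithms on each benchmark function (ties receiving midranks) and averaging over functions. A minimal sketch of this computation (our own helper, not the statistical package actually used) is:

```python
def friedman_mean_ranks(errors):
    """errors[f][a] = mean error of algorithm a on function f.
    Returns the mean rank of each algorithm (lower is better); ties get midranks."""
    n_funcs = len(errors)
    n_algs = len(errors[0])
    totals = [0.0] * n_algs
    for row in errors:
        order = sorted(range(n_algs), key=lambda a: row[a])
        ranks = [0.0] * n_algs
        j = 0
        while j < n_algs:
            k = j
            # Extend k over the block of algorithms tied with position j.
            while k + 1 < n_algs and row[order[k + 1]] == row[order[j]]:
                k += 1
            midrank = (j + k) / 2 + 1  # average of 1-based positions j..k
            for m in range(j, k + 1):
                ranks[order[m]] = midrank
            j = k + 1
        for a in range(n_algs):
            totals[a] += ranks[a]
    return [t / n_funcs for t in totals]
```

The Friedman test statistic is then computed from these mean ranks; in practice a library routine (e.g., a statistics toolbox) performs both steps.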

4.3. Comparison on Convergence Speed

To better analyze the convergence characteristics of DPDE, this paper presents the convergence curves of the algorithm for functions f5, f10, f21, and f24 across the dimensions 10D, 30D, 50D, and 100D. As shown in Figure 3, Figure 4, Figure 5 and Figure 6, for the function f24, the convergence curve of our algorithm was fairly comparable to the other algorithms. However, for functions f5, f10, and f21, it can be observed that our algorithm did not converge as quickly during the early iterations. Yet, in the later stages, it managed to converge to a value better than the other algorithms. This indicates that our method can identify promising regions and converge to more-optimal solutions, reflecting its robust exploration and exploitation capabilities.
From the above analysis, we can conclude that the performance of the algorithm proposed in this paper had an advantage compared to the six outstanding differential evolution algorithms that were examined as a point of comparison.

4.4. Effectiveness of HS Strategy and New Stagnation Detection Formula

This paper introduces a new selection strategy (HS) and a new stagnation detection formula. To demonstrate the effectiveness of these two strategies, ablation experiments were conducted. As shown in Table 3, DPDE-1 represents the DPDE algorithm without the new selection strategy. The ranking results of the two algorithms after the Friedman test are presented in the table. The DPDE algorithm showed significant improvements across all four dimensions, with the smallest improvement in the 10-dimensional space. Table 4 illustrates the validity of the new stagnation detection formula. We set up three comparative experiment groups, namely DPDE-fixed24, DPDE-fixed48, and DPDE-fixed96, which use a fixed count of 24, 48, and 96 consecutive non-improving generations, respectively, to determine whether an individual is stagnant. Our method outperformed the DPDE variants using fixed counts across all four dimensions. Among them, the performance gap was the smallest between DPDE-fixed96 and DPDE in 10D and between DPDE-fixed24 and DPDE in 30D, 50D, and 100D. Therefore, it was deduced that the novel HS strategy and the formula for detecting stagnation contributed positively to improving the differential evolution algorithm’s performance.

4.5. Parameter Analysis

The settings of T1 and T2 play a significant role in determining whether an individual is stagnant. Specifically, T1 and T2 represent the minimum and maximum values for the stagnation threshold, respectively. To investigate the impact of the settings of T1 and T2 on the algorithm performance, three values were set for each: T1 at 24, 48, and 96 and T2 at 160, 208, and 256. Combining these yielded nine variants; the ranking results based on the Friedman test are displayed in Table 5. It was revealed that DPDE-T5 performed best in 10D, DPDE-T2 in 30D and 50D, and the DPDE-T1 setting in 100D. T3 plays a crucial role in helping individuals of the normal population that are stuck in local optima. It represents the maximum stagnant count after which an individual can accept a worse trial solution. In our experiments, we tested T3 values of 8, 16, and 32. The results in Table 6 show that T3 = 8 performed best in 10D, while T3 = 16 was optimal in 30D, 50D, and 100D. The chosen value for T3 in this paper was 16.

4.6. Comparison Results for the Real-World Problems on CEC 2011

To validate the effectiveness of the algorithm proposed in this paper in real-world problems, we selected four bounded problems for testing: problem 1, problem 5, problem 6, and problem 7. The dimensions of these problems are 30, 30, 30, and 20, respectively. As shown in Table 7, it can be observed that the DPDE algorithm significantly outperformed or was at least comparable to six of the most-advanced DE variants. Notably, our method outshone all other approaches for problem 7. Moreover, JADE, PaDE, HIP-DE, MPEDE, and jSO performed significantly worse than the DPDE algorithm on half of the problems. The experimental results suggested that the DPDE algorithm had potential for solving real-world problems.

5. Conclusions

In this paper, we introduced the application of hierarchical mutation and selection strategies to the dual-population differential evolution algorithm. This aimed to address the challenge of balancing exploration and exploitation in the algorithm and mitigating the issue of evolution stagnation. As a result, we proposed the DPDE algorithm. In the hierarchical mutation strategy, the DE/current-to-pbest/1 and improved DE/current-to-pbest/1 strategies were applied separately to the elite population and the normal population. Due to the guiding role of the elite population in the mutation strategy, we facilitated information exchange between these two subpopulations, further achieving a balance between exploration and exploitation. In the novel hierarchical selection strategy, when individuals were trapped in local optima, elite subpopulation individuals accepted discarded vectors from the current generation, while the normal subpopulation individuals accepted both discarded vectors and historical optimal discarded vectors from the elite subpopulation. This strategy makes full use of historically generated trial vectors, thus enhancing individuals’ ability to escape local optima and improving the overall algorithm performance. Additionally, we employed adaptive parameter techniques and population size adjustment strategies to enhance the algorithm’s exploration and exploitation capabilities at different stages.
Experimentally, we rigorously tested our approach on the CEC 2017 benchmark suite and compared it with six state-of-the-art differential evolution algorithms, namely HIP-DE, PaDE, jSO, LSHADE, JADE, and MPEDE. The experimental results indicated that the algorithm presented in this paper significantly outperformed the other compared differential evolution algorithms. The ablation study demonstrated the effectiveness of the HS strategy and the new stagnation detection formula introduced in this paper. The results on the CEC 2011 benchmark further demonstrated the potential of our method to solve real-world problems.
Lastly, while the algorithm in this paper primarily targets single-objective parameter problems, it will be meaningful to explore the feasibility of employing the DPDE algorithm for multi-objective optimization challenges in the future.

Author Contributions

Methodology: Y.H.; Validation: Y.H., X.Q. and W.S.; Writing—original draft preparation: Y.H.; Writing—review and editing: Y.H. and X.Q.; Project administration: X.Q. and W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (62076110) and the Natural Science Foundation of Jiangsu Province (BK20181341).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. Mean errors of the algorithms under 10D of CEC 2017.

| Function | | DPDE | HIP-DE | PaDE | jSO | LSHADE | JADE | MPEDE |
|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.46E-09 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.65E-09 |
| F2 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.75E-10 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.82E-09 | 0.00E+00 |
| F3 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F4 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.28E-08 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.33E-08 |
| F5 | Mean | 3.13E-01 | 1.68E+00 | 2.27E+00 | 2.11E+00 | 2.89E+00 | 3.76E+00 | 6.13E+00 |
| | Std | 4.67E-01 | 8.78E-01 | 9.75E-01 | 7.87E-01 | 8.02E-01 | 1.07E+00 | 1.50E+00 |
| F6 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.25E-05 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.62E-06 |
| F7 | Mean | 1.11E+01 | 1.18E+01 | 1.25E+01 | 1.21E+01 | 1.23E+01 | 1.35E+01 | 1.78E+01 |
| | Std | 4.30E-01 | 5.40E-01 | 1.22E+00 | 4.84E-01 | 7.24E-01 | 1.85E+00 | 1.82E+00 |
| F8 | Mean | 3.71E-01 | 1.93E+00 | 2.36E+00 | 2.28E+00 | 2.54E+00 | 3.81E+00 | 6.41E+00 |
| | Std | 5.96E-01 | 8.99E-01 | 7.95E-01 | 7.78E-01 | 1.04E+00 | 1.04E+00 | 1.85E+00 |
| F9 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F10 | Mean | 1.18E+00 | 3.86E+01 | 2.77E+01 | 2.89E+01 | 1.91E+01 | 1.34E+02 | 2.74E+02 |
| | Std | 1.63E+00 | 7.55E+01 | 4.38E+01 | 4.90E+01 | 3.34E+01 | 1.07E+02 | 1.08E+02 |
| F11 | Mean | 0.00E+00 | 4.54E-01 | 5.29E-01 | 0.00E+00 | 5.37E-01 | 1.50E+00 | 2.25E+00 |
| | Std | 0.00E+00 | 7.77E-01 | 7.62E-01 | 0.00E+00 | 7.46E-01 | 1.10E+00 | 6.39E-01 |
| F12 | Mean | 1.19E+02 | 2.67E+01 | 2.37E+01 | 1.45E+01 | 5.22E+01 | 8.14E+02 | 1.10E+01 |
| | Std | 6.49E+01 | 5.03E+01 | 4.79E+01 | 3.89E+01 | 7.07E+01 | 3.24E+03 | 1.42E+01 |
| F13 | Mean | 4.20E+00 | 1.37E+00 | 1.92E+00 | 4.34E+00 | 4.40E+00 | 4.70E+00 | 5.56E+00 |
| | Std | 2.02E+00 | 2.34E+00 | 2.37E+00 | 1.71E+00 | 1.73E+00 | 2.93E+00 | 2.01E+00 |
| F14 | Mean | 4.31E-01 | 7.41E-01 | 1.72E-01 | 9.75E-02 | 3.07E-01 | 2.45E+00 | 4.32E+00 |
| | Std | 2.80E+00 | 7.13E-01 | 3.59E-01 | 2.99E-01 | 4.04E-01 | 5.96E+00 | 1.81E+00 |
| F15 | Mean | 3.67E-01 | 1.68E-01 | 1.06E-01 | 2.21E-01 | 1.77E-01 | 2.84E-01 | 7.72E-01 |
| | Std | 1.75E-01 | 2.04E-01 | 1.67E-01 | 2.19E-01 | 1.96E-01 | 2.09E-01 | 2.29E-01 |
| F16 | Mean | 6.96E-01 | 4.62E-01 | 3.18E-01 | 4.45E-01 | 3.57E-01 | 5.22E+00 | 2.35E+00 |
| | Std | 2.09E-01 | 2.03E-01 | 1.48E-01 | 3.80E-01 | 2.04E-01 | 2.33E+01 | 7.68E-01 |
| F17 | Mean | 1.30E-01 | 2.56E-01 | 1.62E-01 | 4.09E-01 | 1.64E-01 | 1.67E+00 | 7.15E+00 |
| | Std | 1.72E-01 | 2.90E-01 | 1.60E-01 | 3.66E-01 | 2.02E-01 | 5.48E+00 | 2.44E+00 |
| F18 | Mean | 7.64E-01 | 2.39E-01 | 2.49E-01 | 2.64E-01 | 1.88E-01 | 6.68E+00 | 2.75E+00 |
| | Std | 2.81E+00 | 2.08E-01 | 2.05E-01 | 2.07E-01 | 1.95E-01 | 9.38E+00 | 1.50E+00 |
| F19 | Mean | 2.76E-02 | 1.38E-02 | 1.02E-02 | 1.33E-02 | 1.44E-02 | 6.28E-02 | 6.05E-01 |
| | Std | 3.28E-02 | 2.67E-02 | 1.07E-02 | 1.25E-02 | 1.12E-02 | 2.93E-01 | 1.93E-01 |
| F20 | Mean | 2.75E-01 | 0.00E+00 | 0.00E+00 | 3.55E-01 | 6.12E-03 | 4.90E-01 | 1.05E+00 |
| | Std | 1.35E-01 | 0.00E+00 | 0.00E+00 | 1.08E-01 | 4.37E-02 | 2.79E+00 | 6.60E-01 |
| F21 | Mean | 1.38E+02 | 1.61E+02 | 1.31E+02 | 1.59E+02 | 1.58E+02 | 1.81E+02 | 1.21E+02 |
| | Std | 4.96E+01 | 5.01E+01 | 4.76E+01 | 5.57E+01 | 5.14E+01 | 4.46E+01 | 4.28E+01 |
| F22 | Mean | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 9.24E+01 | 9.41E+01 |
| | Std | 0.00E+00 | 6.43E-14 | 1.48E-13 | 1.48E-13 | 4.85E-02 | 2.64E+01 | 2.38E+01 |
| F23 | Mean | 3.03E+02 | 3.01E+02 | 3.02E+02 | 3.03E+02 | 3.04E+02 | 3.06E+02 | 3.06E+02 |
| | Std | 1.98E+00 | 1.43E+00 | 1.63E+00 | 1.63E+00 | 1.07E+00 | 2.05E+00 | 1.64E+00 |
| F24 | Mean | 2.41E+02 | 3.09E+02 | 3.03E+02 | 2.95E+02 | 3.05E+02 | 3.11E+02 | 2.35E+02 |
| | Std | 1.13E+02 | 6.08E+01 | 6.80E+01 | 8.49E+01 | 7.31E+01 | 7.03E+01 | 1.19E+02 |
| F25 | Mean | 4.11E+02 | 4.19E+02 | 4.18E+02 | 4.03E+02 | 4.15E+02 | 4.23E+02 | 4.05E+02 |
| | Std | 2.09E+01 | 2.30E+01 | 2.28E+01 | 1.49E+01 | 2.23E+01 | 2.33E+01 | 1.67E+01 |
| F26 | Mean | 3.00E+02 | 3.00E+02 | 3.00E+02 | 3.00E+02 | 3.00E+02 | 3.05E+02 | 3.00E+02 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.60E+01 | 1.52E-08 |
| F27 | Mean | 3.89E+02 | 3.93E+02 | 3.93E+02 | 3.89E+02 | 3.89E+02 | 3.91E+02 | 3.89E+02 |
| | Std | 1.39E-01 | 1.74E+00 | 1.96E+00 | 1.67E-01 | 1.39E-01 | 7.65E+00 | 3.65E-01 |
| F28 | Mean | 3.67E+02 | 3.56E+02 | 3.11E+02 | 3.76E+02 | 3.82E+02 | 4.35E+02 | 3.11E+02 |
| | Std | 1.24E+02 | 1.15E+02 | 5.56E+01 | 1.30E+02 | 1.35E+02 | 1.49E+02 | 5.56E+01 |
| F29 | Mean | 2.31E+02 | 2.33E+02 | 2.32E+02 | 2.34E+02 | 2.35E+02 | 2.53E+02 | 2.51E+02 |
| | Std | 3.34E+00 | 4.46E+00 | 3.66E+00 | 3.23E+00 | 3.41E+00 | 1.26E+01 | 5.67E+00 |
| F30 | Mean | 1.64E+04 | 4.01E+02 | 4.00E+02 | 4.85E+04 | 3.25E+04 | 1.06E+05 | 3.96E+02 |
| | Std | 1.14E+05 | 1.67E+01 | 1.57E+01 | 1.94E+05 | 1.60E+05 | 2.94E+05 | 9.56E-01 |
| + | | | 11 | 8 | 12 | 10 | 14 | 19 |
| − | | | 9 | 11 | 4 | 6 | 3 | 7 |
| = | | | 10 | 11 | 14 | 14 | 13 | 4 |

The top-ranked algorithm is emphasized in bold.
Table A2. Mean errors of the algorithms under 30D of CEC 2017.

| Function | | DPDE | HIP-DE | PaDE | jSO | LSHADE | JADE | MPEDE |
|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F2 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F3 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.74E+03 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F4 | Mean | 5.86E+01 | 5.63E+01 | 5.62E+01 | 5.86E+01 | 5.86E+01 | 5.08E+01 | 5.35E+01 |
| | Std | 2.72E-14 | 1.15E+01 | 1.44E+01 | 2.69E-14 | 2.72E-14 | 2.14E+01 | 1.77E+01 |
| F5 | Mean | 2.95E+00 | 7.35E+00 | 8.05E+00 | 8.58E+00 | 6.84E+00 | 2.68E+01 | 2.74E+01 |
| | Std | 1.87E+00 | 1.18E+00 | 1.46E+00 | 1.91E+00 | 1.45E+00 | 4.60E+00 | 7.32E+00 |
| F6 | Mean | 0.00E+00 | 8.75E-09 | 0.00E+00 | 4.62E-08 | 1.38E-09 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 3.28E-08 | 0.00E+00 | 1.61E-07 | 6.88E-09 | 0.00E+00 | 0.00E+00 |
| F7 | Mean | 3.48E+01 | 3.62E+01 | 3.87E+01 | 3.93E+01 | 3.80E+01 | 5.50E+01 | 5.62E+01 |
| | Std | 1.33E+00 | 9.40E-01 | 1.64E+00 | 2.29E+00 | 1.20E+00 | 3.53E+00 | 7.25E+00 |
| F8 | Mean | 3.35E+00 | 7.42E+00 | 9.12E+00 | 9.55E+00 | 7.53E+00 | 2.55E+01 | 2.72E+01 |
| | Std | 1.93E+00 | 1.23E+00 | 1.75E+00 | 1.90E+00 | 1.52E+00 | 4.04E+00 | 7.26E+00 |
| F9 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 8.91E-03 | 1.42E-02 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 6.36E-02 | 6.64E-02 |
| F10 | Mean | 1.16E+02 | 1.52E+03 | 1.54E+03 | 1.40E+03 | 1.44E+03 | 1.92E+03 | 2.63E+03 |
| | Std | 1.23E+02 | 1.78E+02 | 2.89E+02 | 2.71E+02 | 1.94E+02 | 2.01E+02 | 3.58E+02 |
| F11 | Mean | 2.34E+01 | 1.42E+01 | 1.27E+01 | 1.41E+01 | 1.76E+01 | 3.12E+01 | 2.30E+01 |
| | Std | 2.74E+01 | 2.14E+01 | 1.79E+01 | 2.20E+01 | 2.41E+01 | 2.54E+01 | 1.77E+01 |
| F12 | Mean | 1.12E+03 | 1.14E+03 | 1.00E+03 | 3.13E+02 | 1.05E+03 | 1.25E+03 | 8.99E+02 |
| | Std | 3.73E+02 | 4.07E+02 | 3.95E+02 | 2.42E+02 | 3.44E+02 | 4.05E+02 | 4.01E+02 |
| F13 | Mean | 1.48E+01 | 1.52E+01 | 1.39E+01 | 1.95E+01 | 1.85E+01 | 3.71E+01 | 2.17E+01 |
| | Std | 6.56E+00 | 7.09E+00 | 7.04E+00 | 3.59E+00 | 6.58E+00 | 1.55E+01 | 8.49E+00 |
| F14 | Mean | 1.93E+01 | 2.14E+01 | 2.11E+01 | 2.16E+01 | 2.16E+01 | 5.22E+03 | 1.66E+01 |
| | Std | 4.74E+00 | 3.99E+00 | 4.79E+00 | 2.33E+00 | 3.16E+00 | 9.17E+03 | 1.05E+01 |
| F15 | Mean | 2.85E+00 | 3.08E+00 | 3.08E+00 | 1.51E+00 | 3.86E+00 | 6.50E+02 | 8.86E+00 |
| | Std | 2.28E+00 | 1.74E+00 | 1.78E+00 | 1.15E+00 | 1.53E+00 | 2.11E+03 | 3.28E+00 |
| F16 | Mean | 1.47E+01 | 1.16E+02 | 1.09E+02 | 4.05E+01 | 3.89E+01 | 4.12E+02 | 4.01E+02 |
| | Std | 2.08E+00 | 9.12E+01 | 9.92E+01 | 5.39E+01 | 3.82E+01 | 1.55E+02 | 1.76E+02 |
| F17 | Mean | 2.19E+01 | 3.10E+01 | 2.90E+01 | 3.23E+01 | 3.28E+01 | 7.58E+01 | 5.38E+01 |
| | Std | 5.86E+00 | 7.28E+00 | 7.00E+00 | 6.76E+00 | 7.11E+00 | 3.12E+01 | 1.46E+01 |
| F18 | Mean | 2.20E+01 | 2.28E+01 | 2.14E+01 | 2.09E+01 | 2.22E+01 | 2.14E+04 | 2.24E+01 |
| | Std | 1.52E+00 | 1.64E+00 | 2.98E+00 | 4.16E-01 | 1.43E+00 | 5.90E+04 | 8.71E+00 |
| F19 | Mean | 5.20E+00 | 5.08E+00 | 5.22E+00 | 5.36E+00 | 6.20E+00 | 1.39E+03 | 7.13E+00 |
| | Std | 1.64E+00 | 2.03E+00 | 1.90E+00 | 1.70E+00 | 1.98E+00 | 3.35E+03 | 1.94E+00 |
| F20 | Mean | 2.16E+01 | 3.40E+01 | 4.04E+01 | 2.73E+01 | 3.13E+01 | 1.11E+02 | 7.82E+01 |
| | Std | 3.99E+00 | 7.35E+00 | 2.40E+01 | 6.30E+00 | 4.65E+00 | 5.48E+01 | 5.66E+01 |
| F21 | Mean | 2.05E+02 | 2.07E+02 | 2.08E+02 | 2.10E+02 | 2.08E+02 | 2.26E+02 | 2.28E+02 |
| | Std | 1.95E+00 | 1.62E+00 | 1.43E+00 | 1.86E+00 | 1.58E+00 | 4.77E+00 | 7.39E+00 |
| F22 | Mean | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 | 1.00E+02 |
| | Std | 1.44E-14 | 1.44E-14 | 1.44E-14 | 6.39E-14 | 1.44E-14 | 1.44E-14 | 1.44E-14 |
| F23 | Mean | 3.51E+02 | 3.44E+02 | 3.45E+02 | 3.57E+02 | 3.54E+02 | 3.73E+02 | 3.76E+02 |
| | Std | 4.69E+00 | 3.69E+00 | 3.68E+00 | 3.73E+00 | 3.66E+00 | 6.11E+00 | 1.02E+01 |
| F24 | Mean | 4.25E+02 | 4.18E+02 | 4.20E+02 | 4.28E+02 | 4.26E+02 | 4.47E+02 | 4.49E+02 |
| | Std | 1.63E+00 | 1.98E+00 | 2.49E+00 | 2.74E+00 | 2.24E+00 | 4.84E+00 | 8.17E+00 |
| F25 | Mean | 4.50E+02 | 4.48E+02 | 4.48E+02 | 4.49E+02 | 4.48E+02 | 4.58E+02 | 4.59E+02 |
| | Std | 1.44E-02 | 2.05E-02 | 2.74E-02 | 7.98E-03 | 2.46E-02 | 1.35E-01 | 1.32E+00 |
| F26 | Mean | 4.74E+02 | 4.71E+02 | 4.72E+02 | 4.72E+02 | 4.71E+02 | 4.79E+02 | 4.80E+02 |
| | Std | 3.54E+01 | 4.06E+01 | 4.24E+01 | 3.94E+01 | 2.85E+01 | 6.98E+01 | 1.07E+02 |
| F27 | Mean | 4.98E+02 | 4.94E+02 | 4.95E+02 | 4.95E+02 | 4.94E+02 | 5.00E+02 | 5.01E+02 |
| | Std | 3.37E+00 | 4.99E+00 | 5.09E+00 | 5.59E+00 | 4.34E+00 | 7.49E+00 | 6.27E+00 |
| F28 | Mean | 1.90E+03 | 1.91E+03 | 1.91E+03 | 1.90E+03 | 1.90E+03 | 1.97E+03 | 1.98E+03 |
| | Std | 5.22E+01 | 4.90E+01 | 4.89E+01 | 4.36E+01 | 5.40E+01 | 5.51E+01 | 4.92E+01 |
| F29 | Mean | 1.96E+03 | 1.96E+03 | 1.96E+03 | 1.96E+03 | 1.96E+03 | 2.02E+03 | 2.02E+03 |
| | Std | 9.65E+00 | 6.29E+00 | 1.06E+01 | 1.40E+01 | 9.46E+00 | 2.73E+01 | 2.77E+01 |
| F30 | Mean | 2.01E+03 | 2.07E+03 | 2.04E+03 | 1.96E+03 | 2.00E+03 | 2.36E+03 | 2.02E+03 |
| | Std | 6.47E+01 | 7.30E+01 | 5.33E+01 | 2.79E+01 | 5.98E+01 | 1.06E+03 | 8.62E+01 |
| + | | | 12 | 12 | 15 | 17 | 21 | 17 |
| − | | | 6 | 4 | 6 | 0 | 1 | 4 |
| = | | | 12 | 14 | 9 | 13 | 8 | 9 |

The top-ranked algorithm is emphasized in bold.
Table A3. Mean errors of the algorithms under 50D of CEC 2017.

| Function | | DPDE | HIP-DE | PaDE | jSO | LSHADE | JADE | MPEDE |
|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F2 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F3 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.31E+04 | 7.77E-04 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.70E+04 | 3.38E-03 |
| F4 | Mean | 6.82E+01 | 9.54E+01 | 7.51E+01 | 5.42E+01 | 6.69E+01 | 7.24E+01 | 5.97E+01 |
| | Std | 4.75E+01 | 4.79E+01 | 5.07E+01 | 4.97E+01 | 5.22E+01 | 4.79E+01 | 4.47E+01 |
| F5 | Mean | 3.53E+00 | 1.48E+01 | 1.69E+01 | 1.58E+01 | 1.18E+01 | 8.11E+01 | 5.37E+01 |
| | Std | 1.66E+00 | 2.01E+00 | 1.98E+00 | 3.39E+00 | 2.28E+00 | 8.11E+00 | 1.21E+01 |
| F6 | Mean | 9.64E-09 | 6.52E-08 | 8.41E-04 | 5.07E-07 | 3.31E-05 | 5.59E-06 | 6.97E-04 |
| | Std | 3.03E-08 | 1.06E-07 | 2.54E-03 | 9.51E-07 | 2.36E-04 | 6.29E-06 | 2.09E-03 |
| F7 | Mean | 6.05E+01 | 6.11E+01 | 6.47E+01 | 6.70E+01 | 6.35E+01 | 1.36E+02 | 1.07E+02 |
| | Std | 1.98E+00 | 1.67E+00 | 2.53E+00 | 3.19E+00 | 1.84E+00 | 7.67E+00 | 1.13E+01 |
| F8 | Mean | 3.10E+00 | 1.53E+01 | 1.78E+01 | 1.55E+01 | 1.23E+01 | 8.24E+01 | 5.53E+01 |
| | Std | 1.81E+00 | 1.99E+00 | 2.43E+00 | 3.50E+00 | 2.25E+00 | 7.36E+00 | 1.23E+01 |
| F9 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.51E-02 | 9.35E-01 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.64E-01 | 8.92E-01 |
| F10 | Mean | 7.18E+02 | 3.22E+03 | 3.09E+03 | 3.04E+03 | 3.20E+03 | 5.40E+03 | 4.85E+03 |
| | Std | 2.79E+02 | 2.46E+02 | 2.97E+02 | 4.46E+02 | 2.79E+02 | 2.57E+02 | 7.28E+02 |
| F11 | Mean | 4.20E+01 | 5.03E+01 | 6.52E+01 | 2.87E+01 | 4.95E+01 | 1.32E+02 | 1.04E+02 |
| | Std | 7.73E+00 | 1.05E+01 | 1.28E+01 | 2.82E+00 | 9.93E+00 | 7.75E+01 | 2.51E+01 |
| F12 | Mean | 2.61E+03 | 2.28E+03 | 2.23E+03 | 2.09E+03 | 2.37E+03 | 4.10E+03 | 9.81E+03 |
| | Std | 6.31E+02 | 5.45E+02 | 4.12E+02 | 5.32E+02 | 5.96E+02 | 2.33E+03 | 7.02E+03 |
| F13 | Mean | 6.04E+01 | 6.19E+01 | 5.91E+01 | 3.72E+01 | 5.66E+01 | 1.15E+02 | 9.52E+01 |
| | Std | 3.23E+01 | 2.33E+01 | 2.29E+01 | 1.92E+01 | 2.81E+01 | 4.50E+01 | 3.51E+01 |
| F14 | Mean | 2.47E+01 | 3.23E+01 | 3.01E+01 | 2.37E+01 | 3.15E+01 | 6.53E+03 | 6.14E+01 |
| | Std | 2.57E+00 | 4.05E+00 | 3.58E+00 | 2.11E+00 | 3.86E+00 | 2.61E+04 | 1.64E+01 |
| F15 | Mean | 5.35E+01 | 5.76E+01 | 4.14E+01 | 2.39E+01 | 4.90E+01 | 1.54E+02 | 7.83E+01 |
| | Std | 1.68E+01 | 1.58E+01 | 1.17E+01 | 2.65E+00 | 1.56E+01 | 7.93E+01 | 4.32E+01 |
| F16 | Mean | 1.42E+02 | 3.99E+02 | 3.59E+02 | 4.02E+02 | 3.70E+02 | 1.00E+03 | 9.67E+02 |
| | Std | 3.35E+01 | 1.08E+02 | 1.12E+02 | 1.48E+02 | 1.35E+02 | 1.58E+02 | 2.91E+02 |
| F17 | Mean | 5.51E+01 | 3.07E+02 | 2.92E+02 | 2.55E+02 | 2.15E+02 | 7.98E+02 | 5.88E+02 |
| | Std | 4.06E+01 | 7.49E+01 | 7.34E+01 | 9.18E+01 | 7.57E+01 | 1.43E+02 | 1.99E+02 |
| F18 | Mean | 4.96E+01 | 5.57E+01 | 4.05E+01 | 2.54E+01 | 4.94E+01 | 1.20E+04 | 1.21E+02 |
| | Std | 2.24E+01 | 2.20E+01 | 1.10E+01 | 2.36E+00 | 1.81E+01 | 8.43E+04 | 9.31E+01 |
| F19 | Mean | 4.69E+01 | 4.07E+01 | 2.75E+01 | 1.46E+01 | 3.48E+01 | 1.01E+02 | 4.44E+01 |
| | Std | 1.95E+01 | 1.35E+01 | 8.62E+00 | 2.56E+00 | 1.17E+01 | 3.20E+01 | 1.86E+01 |
| F20 | Mean | 4.28E+01 | 1.80E+02 | 1.72E+02 | 1.17E+02 | 1.55E+02 | 6.25E+02 | 3.50E+02 |
| | Std | 5.64E+00 | 6.97E+01 | 7.29E+01 | 6.75E+01 | 5.55E+01 | 1.17E+02 | 1.74E+02 |
| F21 | Mean | 2.05E+02 | 2.17E+02 | 2.18E+02 | 2.19E+02 | 2.14E+02 | 2.83E+02 | 2.53E+02 |
| | Std | 3.13E+00 | 1.92E+00 | 2.51E+00 | 2.75E+00 | 2.58E+00 | 8.76E+00 | 1.37E+01 |
| F22 | Mean | 4.40E+02 | 1.00E+02 | 4.69E+02 | 2.23E+03 | 2.29E+03 | 4.31E+03 | 3.68E+03 |
| | Std | 2.51E+02 | 1.37E+00 | 1.11E+03 | 1.53E+03 | 1.67E+03 | 2.58E+03 | 2.67E+03 |
| F23 | Mean | 4.28E+02 | 4.29E+02 | 4.28E+02 | 4.36E+02 | 4.32E+02 | 5.05E+02 | 4.81E+02 |
| | Std | 6.23E+00 | 6.68E+00 | 6.76E+00 | 5.57E+00 | 3.49E+00 | 1.28E+01 | 1.39E+01 |
| F24 | Mean | 5.05E+02 | 5.07E+02 | 5.04E+02 | 5.19E+02 | 5.11E+02 | 5.55E+02 | 5.43E+02 |
| | Std | 2.68E+00 | 5.73E+00 | 5.25E+00 | 4.14E+00 | 3.17E+00 | 8.80E+00 | 1.34E+01 |
| F25 | Mean | 4.82E+02 | 4.82E+02 | 4.98E+02 | 4.81E+02 | 4.82E+02 | 5.18E+02 | 4.98E+02 |
| | Std | 1.17E+01 | 4.26E+00 | 2.95E+01 | 3.31E+00 | 3.77E+00 | 3.52E+01 | 3.44E+01 |
| F26 | Mean | 1.08E+03 | 1.12E+03 | 1.12E+03 | 1.22E+03 | 1.19E+03 | 1.79E+03 | 1.56E+03 |
| | Std | 4.83E+01 | 5.92E+01 | 8.05E+01 | 4.36E+01 | 4.73E+01 | 1.07E+02 | 1.68E+02 |
| F27 | Mean | 5.37E+02 | 5.36E+02 | 5.40E+02 | 5.37E+02 | 5.35E+02 | 5.34E+02 | 5.43E+02 |
| | Std | 1.18E+01 | 1.33E+01 | 1.35E+01 | 2.05E+01 | 1.11E+01 | 1.72E+01 | 2.41E+01 |
| F28 | Mean | 4.70E+02 | 4.91E+02 | 4.99E+02 | 4.59E+02 | 4.64E+02 | 4.85E+02 | 4.89E+02 |
| | Std | 5.08E+00 | 5.14E+00 | 5.28E+00 | 1.59E+01 | 6.06E+00 | 2.63E+01 | 1.22E+01 |
| F29 | Mean | 3.32E+02 | 3.65E+02 | 3.53E+02 | 3.58E+02 | 3.49E+02 | 5.12E+02 | 4.39E+02 |
| | Std | 3.07E+01 | 3.32E+01 | 3.36E+01 | 4.78E+01 | 3.39E+01 | 5.40E+01 | 6.45E+01 |
| F30 | Mean | 6.46E+05 | 6.19E+05 | 6.11E+05 | 6.41E+05 | 6.57E+05 | 6.59E+05 | 6.80E+05 |
| | Std | 5.72E+04 | 3.27E+04 | 3.88E+04 | 6.82E+04 | 7.53E+04 | 6.39E+04 | 9.69E+04 |
| + | | | 17 | 15 | 14 | 16 | 26 | 23 |
| − | | | 2 | 5 | 8 | 1 | 0 | 0 |
| = | | | 11 | 10 | 8 | 13 | 4 | 7 |

The top-ranked algorithm is emphasized in bold.
Table A4. Mean errors of the algorithms under 100D of CEC 2017.

| Function | | DPDE | HIP-DE | PaDE | jSO | LSHADE | JADE | MPEDE |
|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| | Std | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 |
| F2 | Mean | 3.38E+05 | 4.63E+01 | 6.85E+00 | 2.85E+04 | 4.08E+04 | 1.72E+03 | 9.32E+14 |
| | Std | 2.10E+06 | 1.30E+02 | 4.13E+01 | 1.27E+05 | 2.39E+05 | 1.23E+04 | 6.66E+15 |
| F3 | Mean | 2.46E-04 | 1.65E-07 | 6.49E-08 | 3.83E-06 | 4.66E-06 | 2.08E+05 | 1.73E+01 |
| | Std | 6.27E-04 | 1.49E-07 | 1.02E-07 | 5.03E-06 | 6.55E-06 | 2.08E+05 | 5.49E+01 |
| F4 | Mean | 1.99E+02 | 1.96E+02 | 1.30E+02 | 2.01E+02 | 1.98E+02 | 1.28E+02 | 8.84E+01 |
| | Std | 6.04E+00 | 5.40E+00 | 6.87E+01 | 9.20E+00 | 6.58E+00 | 5.71E+01 | 6.43E+01 |
| F5 | Mean | 4.53E+00 | 4.00E+01 | 4.60E+01 | 3.55E+01 | 2.79E+01 | 2.55E+02 | 1.57E+02 |
| | Std | 1.82E+00 | 3.83E+00 | 5.37E+00 | 5.58E+00 | 5.66E+00 | 1.77E+01 | 2.75E+01 |
| F6 | Mean | 1.01E-03 | 1.18E-03 | 1.60E-02 | 2.36E-05 | 1.52E-03 | 2.66E-05 | 1.69E-01 |
| | Std | 9.28E-04 | 1.07E-03 | 1.58E-02 | 2.12E-05 | 1.17E-03 | 1.66E-04 | 1.40E-01 |
| F7 | Mean | 1.22E+02 | 1.31E+02 | 1.42E+02 | 1.40E+02 | 1.36E+02 | 3.78E+02 | 2.98E+02 |
| | Std | 8.61E+00 | 3.14E+00 | 4.77E+00 | 6.63E+00 | 4.35E+00 | 2.14E+01 | 3.50E+01 |
| F8 | Mean | 3.78E+00 | 4.20E+01 | 4.63E+01 | 3.34E+01 | 2.83E+01 | 2.60E+02 | 1.46E+02 |
| | Std | 1.69E+00 | 4.09E+00 | 4.31E+00 | 6.68E+00 | 4.62E+00 | 2.10E+01 | 2.33E+01 |
| F9 | Mean | 2.30E-02 | 1.15E-01 | 1.07E+00 | 8.78E-03 | 1.13E-01 | 1.64E+00 | 3.59E+01 |
| | Std | 8.06E-02 | 2.41E-01 | 9.39E-01 | 2.69E-02 | 1.94E-01 | 1.12E+00 | 2.36E+01 |
| F10 | Mean | 3.59E+03 | 1.04E+04 | 9.56E+03 | 9.68E+03 | 1.02E+04 | 1.70E+04 | 1.11E+04 |
| | Std | 6.55E+02 | 5.14E+02 | 5.89E+02 | 5.61E+02 | 5.12E+02 | 4.77E+02 | 1.05E+03 |
| F11 | Mean | 2.92E+02 | 4.47E+02 | 5.79E+02 | 1.02E+02 | 3.89E+02 | 1.31E+04 | 7.82E+02 |
| | Std | 8.17E+01 | 8.65E+01 | 9.38E+01 | 3.29E+01 | 1.02E+02 | 1.18E+04 | 2.24E+02 |
| F12 | Mean | 3.15E+04 | 2.20E+04 | 1.93E+04 | 1.93E+04 | 2.53E+04 | 1.88E+04 | 3.43E+04 |
| | Std | 1.35E+04 | 8.16E+03 | 7.46E+03 | 8.28E+03 | 1.02E+04 | 6.88E+03 | 2.26E+04 |
| F13 | Mean | 2.46E+03 | 2.03E+03 | 9.79E+02 | 2.23E+02 | 1.16E+03 | 2.24E+03 | 6.46E+02 |
| | Std | 5.58E+02 | 8.66E+02 | 7.06E+02 | 5.66E+01 | 7.54E+02 | 1.85E+03 | 7.56E+02 |
| F14 | Mean | 2.45E+02 | 2.51E+02 | 2.73E+02 | 7.28E+01 | 2.54E+02 | 4.24E+02 | 4.83E+02 |
| | Std | 3.35E+01 | 3.86E+01 | 3.76E+01 | 1.25E+01 | 2.92E+01 | 1.06E+02 | 1.26E+02 |
| F15 | Mean | 2.47E+02 | 2.45E+02 | 2.50E+02 | 2.17E+02 | 2.58E+02 | 3.01E+02 | 3.32E+02 |
| | Std | 4.21E+01 | 4.50E+01 | 4.20E+01 | 5.48E+01 | 4.90E+01 | 6.15E+01 | 1.25E+02 |
| F16 | Mean | 1.96E+02 | 1.84E+03 | 1.48E+03 | 1.68E+03 | 1.54E+03 | 3.56E+03 | 2.83E+03 |
| | Std | 1.17E+02 | 2.46E+02 | 2.61E+02 | 3.63E+02 | 2.80E+02 | 3.20E+02 | 5.86E+02 |
| F17 | Mean | 1.27E+02 | 1.27E+03 | 1.17E+03 | 1.14E+03 | 1.06E+03 | 2.68E+03 | 1.92E+03 |
| | Std | 5.70E+01 | 2.03E+02 | 1.76E+02 | 2.53E+02 | 1.78E+02 | 2.60E+02 | 4.57E+02 |
| F18 | Mean | 2.00E+02 | 2.12E+02 | 2.27E+02 | 1.99E+02 | 2.19E+02 | 3.01E+02 | 1.41E+03 |
| | Std | 3.69E+01 | 3.92E+01 | 4.80E+01 | 3.96E+01 | 4.26E+01 | 8.51E+01 | 1.23E+03 |
| F19 | Mean | 1.69E+02 | 1.76E+02 | 1.84E+02 | 1.41E+02 | 1.80E+02 | 2.07E+02 | 2.39E+02 |
| | Std | 1.96E+01 | 2.10E+01 | 3.04E+01 | 1.93E+01 | 2.34E+01 | 5.28E+01 | 6.49E+01 |
| F20 | Mean | 1.89E+02 | 1.60E+03 | 1.56E+03 | 1.26E+03 | 1.43E+03 | 2.81E+03 | 1.85E+03 |
| | Std | 6.55E+01 | 1.87E+02 | 1.81E+02 | 2.55E+02 | 2.20E+02 | 2.29E+02 | 3.75E+02 |
| F21 | Mean | 2.32E+02 | 2.66E+02 | 2.71E+02 | 2.62E+02 | 2.58E+02 | 4.80E+02 | 3.64E+02 |
| | Std | 3.05E+00 | 5.17E+00 | 5.33E+00 | 5.48E+00 | 4.83E+00 | 2.13E+01 | 2.45E+01 |
| F22 | Mean | 2.62E+03 | 1.12E+04 | 1.08E+04 | 9.71E+03 | 1.09E+04 | 1.81E+04 | 1.22E+04 |
| | Std | 5.64E+02 | 2.19E+03 | 6.04E+02 | 6.55E+02 | 5.90E+02 | 4.76E+02 | 1.11E+03 |
| F23 | Mean | 5.58E+02 | 6.12E+02 | 5.94E+02 | 5.65E+02 | 5.62E+02 | 7.42E+02 | 6.94E+02 |
| | Std | 9.45E+00 | 1.13E+01 | 1.78E+01 | 9.24E+00 | 7.62E+00 | 1.12E+01 | 2.85E+01 |
| F24 | Mean | 9.04E+02 | 9.28E+02 | 9.25E+02 | 9.25E+02 | 9.17E+02 | 1.10E+03 | 1.03E+03 |
| | Std | 6.51E+00 | 1.53E+01 | 1.50E+01 | 1.00E+01 | 6.32E+00 | 1.60E+01 | 2.67E+01 |
| F25 | Mean | 7.51E+02 | 7.33E+02 | 7.30E+02 | 7.21E+02 | 7.50E+02 | 7.42E+02 | 7.41E+02 |
| | Std | 2.52E+01 | 3.67E+01 | 4.04E+01 | 4.36E+01 | 2.95E+01 | 4.26E+01 | 6.04E+01 |
| F26 | Mean | 3.15E+03 | 3.36E+03 | 3.36E+03 | 3.37E+03 | 3.40E+03 | 5.00E+03 | 4.51E+03 |
| | Std | 8.58E+01 | 8.98E+01 | 1.21E+02 | 1.30E+02 | 1.12E+02 | 2.12E+02 | 2.89E+02 |
| F27 | Mean | 6.36E+02 | 6.41E+02 | 6.50E+02 | 6.25E+02 | 6.54E+02 | 6.60E+02 | 7.00E+02 |
| | Std | 1.89E+01 | 1.34E+01 | 1.85E+01 | 2.03E+01 | 1.70E+01 | 2.02E+01 | 3.69E+01 |
| F28 | Mean | 5.23E+02 | 5.34E+02 | 5.27E+02 | 5.34E+02 | 5.31E+02 | 5.37E+02 | 5.26E+02 |
| | Std | 1.60E+01 | 3.51E+01 | 3.34E+01 | 2.87E+01 | 3.05E+01 | 3.23E+01 | 3.59E+01 |
| F29 | Mean | 9.85E+02 | 1.25E+03 | 1.21E+03 | 1.46E+03 | 1.42E+03 | 2.41E+03 | 2.43E+03 |
| | Std | 1.30E+02 | 1.55E+02 | 1.72E+02 | 2.11E+02 | 1.64E+02 | 2.31E+02 | 4.55E+02 |
| F30 | Mean | 2.42E+03 | 2.55E+03 | 2.58E+03 | 2.50E+03 | 2.40E+03 | 3.39E+03 | 2.55E+03 |
| | Std | 1.42E+02 | 1.38E+02 | 1.84E+02 | 2.20E+02 | 1.40E+02 | 1.03E+03 | 1.93E+02 |
| + | | | 16 | 21 | 14 | 19 | 22 | 23 |
| − | | | 3 | 5 | 10 | 4 | 5 | 2 |
| = | | | 11 | 4 | 6 | 7 | 3 | 5 |

The top-ranked algorithm is emphasized in bold.

References

1. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
2. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
3. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
4. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
5. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
6. Liang, J.; Qiao, K.; Yue, C.; Yu, K.; Qu, B.; Xu, R.; Li, Z.; Hu, Y. A clustering-based differential evolution algorithm for solving multimodal multi-objective optimization problems. Swarm Evol. Comput. 2021, 60, 100788.
7. Mezura-Montes, E.; Velázquez-Reyes, J.; Coello, C.C. Modified differential evolution for constrained optimization. In Proceedings of the 2006 IEEE International Conference on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 25–32.
8. Deng, W.; Shang, S.; Cai, X.; Zhao, H.; Zhou, Y.; Chen, H.; Deng, W. Quantum differential evolution with cooperative coevolution framework and hybrid mutation strategy for large scale optimization. Knowl.-Based Syst. 2021, 224, 107080.
9. Wang, Z.J.; Zhan, Z.H.; Kwong, S.; Jin, H.; Zhang, J. Adaptive granularity learning distributed particle swarm optimization for large-scale optimization. IEEE Trans. Cybern. 2020, 51, 1175–1188.
10. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78.
11. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 1658–1665.
12. Zhang, X.; Liu, Q.; Qu, Y. An adaptive differential evolution algorithm with population size reduction strategy for unconstrained optimization problem. Appl. Soft Comput. 2023, 138, 110209.
13. Zeng, Z.; Zhang, M.; Zhang, H.; Hong, Z. Improved differential evolution algorithm based on the sawtooth-linear population size adaptive method. Inf. Sci. 2022, 608, 1045–1071.
14. Li, Y.; Wang, S.; Yang, B.; Chen, H.; Wu, Z.; Yang, H. Population reduction with individual similarity for differential evolution. Artif. Intell. Rev. 2023, 56, 3887–3949.
15. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the 2005 IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–4 September 2005; Volume 2, pp. 1785–1791.
16. Wang, Y.; Cai, Z.; Zhang, Q. Differential evolution with composite trial vector generation strategies and control parameters. IEEE Trans. Evol. Comput. 2011, 15, 55–66.
17. Peng, H.; Han, Y.; Deng, C.; Wang, J.; Wu, Z. Multi-strategy co-evolutionary differential evolution for mixed-variable optimization. Knowl.-Based Syst. 2021, 229, 107366.
18. Zhong, X.; Cheng, P. An elite-guided hierarchical differential evolution algorithm. Appl. Intell. 2021, 51, 4962–4983.
19. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
20. Li, Y.; Han, T.; Wang, X.; Zhou, H.; Tang, S.; Huang, C.; Han, B. MjSO: A modified differential evolution with a probability selection mechanism and a directed mutation strategy. Swarm Evol. Comput. 2023, 78, 101294.
21. Wang, H.B.; Ren, X.N.; Li, G.Q.; Tu, X.Y. APDDE: Self-adaptive parameter dynamics differential evolution algorithm. Soft Comput. 2018, 22, 1313–1333.
22. Mohamed, A.W.; Suganthan, P.N. Real-parameter unconstrained optimization based on enhanced fitness-adaptive differential evolution algorithm with novel mutation. Soft Comput. 2018, 22, 3215–3235.
23. Kumar, P.; Pant, M.; Singh, V. Information preserving selection strategy for differential evolution algorithm. In Proceedings of the 2011 World Congress on Information and Communication Technologies, Mumbai, India, 11–14 December 2011; pp. 462–466.
24. Tian, M.; Gao, X.; Dai, C. Differential evolution with improved individual-based parameter setting and selection strategy. Appl. Soft Comput. 2017, 56, 286–297.
25. Guo, J.; Li, Z.; Yang, S. Accelerating differential evolution based on a subset-to-subset survivor selection operator. Soft Comput. 2019, 23, 4113–4130.
  26. Zeng, Z.; Zhang, M.; Chen, T.; Hong, Z. A new selection operator for differential evolution algorithm. Knowl.-Based Syst. 2021, 226, 107150. [Google Scholar] [CrossRef]
  27. Zhan, Z.H.; Wang, Z.J.; Jin, H.; Zhang, J. Adaptive distributed differential evolution. IEEE Trans. Cybern. 2019, 50, 4633–4647. [Google Scholar] [CrossRef] [PubMed]
  28. Xia, X.; Gui, L.; Zhang, Y.; Xu, X.; Yu, F.; Wu, H.; Wei, B.; He, G.; Li, Y.; Li, K. A fitness-based adaptive differential evolution algorithm. Inf. Sci. 2021, 549, 116–141. [Google Scholar] [CrossRef]
  29. Deng, L.; Li, C.; Han, R.; Zhang, L.; Qiao, L. TPDE: A tri-population differential evolution based on zonal-constraint stepped division mechanism and multiple adaptive guided mutation strategies. Inf. Sci. 2021, 575, 22–40. [Google Scholar] [CrossRef]
  30. Pan, J.S.; Liu, N.; Chu, S.C. A hybrid differential evolution algorithm and its application in unmanned combat aerial vehicle path planning. IEEE Access 2020, 8, 17691–17712. [Google Scholar] [CrossRef]
  31. Li, Y.; Wang, S.; Yang, H.; Chen, H.; Yang, B. Enhancing differential evolution algorithm using leader-adjoint populations. Inf. Sci. 2023, 622, 235–268. [Google Scholar] [CrossRef]
  32. Wang, M.; Ma, Y. A differential evolution algorithm based on accompanying population and piecewise evolution strategy. Appl. Soft Comput. 2023, 143, 110390. [Google Scholar] [CrossRef]
  33. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  34. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Jadavpur University: Kolkata, India; Nanyang Technological University: Singapore, 2010; pp. 341–359. [Google Scholar]
  35. Brest, J.; Maučec, M.S.; Bošković, B. Single objective real-parameter optimization: Algorithm jSO. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, 5–8 June 2017; pp. 1311–1318. [Google Scholar]
  36. Wu, G.; Mallipeddi, R.; Suganthan, P.N.; Wang, R.; Chen, H. Differential evolution with multi-population based ensemble of mutation strategies. Inf. Sci. 2016, 329, 329–345. [Google Scholar] [CrossRef]
  37. Meng, Z.; Yang, C. Hip-DE: Historical population based mutation strategy in differential evolution with parameter adaptive mechanism. Inf. Sci. 2021, 562, 44–77. [Google Scholar] [CrossRef]
  38. Meng, Z.; Pan, J.S.; Tseng, K.K. PaDE: An enhanced Differential Evolution algorithm with novel control parameter adaptation schemes for numerical optimization. Knowl.-Based Syst. 2019, 168, 80–99. [Google Scholar] [CrossRef]
Figure 1. The framework of DPDE.
Figure 2. Illustration of how the selection strategy helps to escape from local optima.
Figure 3. Convergence curves of the mean fitness on certain test functions in 10D.
Figure 4. Convergence curves of the mean fitness on certain test functions in 30D.
Figure 5. Convergence curves of the mean fitness on certain test functions in 50D.
Figure 6. Convergence curves of the mean fitness on certain test functions in 100D.
Table 1. Parameter settings.

Algorithm | Year | Parameters' Initial Settings
MPEDE | 2016 | N = 250, ng = 20, λ1 = λ2 = λ3 = 0.2
JADE | 2009 | N = 100, uF = 0.5, uCR = 0.5, p = 0.05, c = 0.1
LSHADE | 2014 | Nmax = 18 × D, Nmin = 4, H = 6, rarc = 2.6, MF = 0.5, MCR = 0.5
jSO | 2017 | Nmax = 25 × log(D) × sqrt(D), H = 5, MF = 0.3, MCR = 0.8, rarc = 2.6, pmin = 0.125, pmax = 0.25
PaDE | 2019 | 25 × log(D) × sqrt(D), uF = 0.8, uCR = 0.6, k = 4, p = 0.11, rarc = 1.6, T0 = 70
HIP-DE | 2021 | Nmax = 15 × D, Nmin = 4, MF = 0.6, MCR = 0.8, τF = τCR = 0.9, K = 6
DPDE |  | Nmax = 18 × D, Nmin = 4, H = 6, rarc = 2.6, MF = 0.5, MCR = 0.5
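The Nmax/Nmin settings shared by LSHADE and DPDE in Table 1 correspond to the linear population size reduction (LPSR) scheme of Tanabe and Fukunaga [11], in which the population shrinks linearly from Nmax down to Nmin as function evaluations are consumed. A minimal sketch under that assumption (the function name `lpsr_size` is illustrative, not from the paper):

```python
def lpsr_size(fes: int, max_fes: int, n_max: int, n_min: int) -> int:
    """Linear population size reduction: interpolate the population size
    from n_max (at 0 evaluations) down to n_min (at max_fes evaluations)."""
    return round(((n_min - n_max) / max_fes) * fes + n_max)

# Example with the LSHADE/DPDE settings for D = 10: Nmax = 18 * D = 180, Nmin = 4.
sizes = [lpsr_size(fes, 100_000, 180, 4) for fes in (0, 50_000, 100_000)]
# -> [180, 92, 4]: full size at the start, halfway in between, Nmin at the end.
```

After each generation, individuals beyond the new target size are typically removed worst-first, so the surviving population concentrates on promising regions as the budget runs out.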
Table 2. Average rankings between DPDE and the other algorithms according to the Friedman test at the 0.05 significance level.

Algorithm | 10D | 30D | 50D | 100D
DPDE | 3.2333 | 2.7667 | 2.5000 | 2.5000
HIP-DE | 3.4500 | 3.4667 | 3.7333 | 3.8000
PaDE | 3.1500 | 3.1833 | 3.6000 | 3.7333
jSO | 3.5333 | 3.7333 | 2.9333 | 2.7667
LSHADE | 3.9000 | 4.2000 | 3.2667 | 3.6000
JADE | 5.7167 | 5.8000 | 6.2000 | 5.9000
MPEDE | 5.0167 | 4.8500 | 5.7667 | 5.7000
The top-ranked algorithm is emphasized in bold.
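The average ranks in Table 2 are obtained by ranking all algorithms on each test function (rank 1 = best mean error, ties sharing the average of their positions) and then averaging each algorithm's ranks across functions. A sketch of that ranking step with toy data (the numbers below are illustrative, not the paper's results):

```python
def friedman_average_ranks(results):
    """results: dict mapping algorithm name -> list of mean errors,
    one entry per test function (lower is better).
    Returns a dict mapping algorithm name -> average rank."""
    algs = list(results)
    n_funcs = len(next(iter(results.values())))
    totals = {a: 0.0 for a in algs}
    for f in range(n_funcs):
        ordered = sorted(algs, key=lambda a: results[a][f])
        i = 0
        while i < len(ordered):
            # Group tied values and assign them the average of their positions.
            j = i
            while j < len(ordered) and results[ordered[j]][f] == results[ordered[i]][f]:
                j += 1
            avg_rank = (i + 1 + j) / 2  # mean of ranks i+1 .. j
            for k in range(i, j):
                totals[ordered[k]] += avg_rank
            i = j
    return {a: totals[a] / n_funcs for a in algs}

# Toy data: three algorithms on two functions.
ranks = friedman_average_ranks({"A": [1.0, 2.0], "B": [2.0, 1.0], "C": [3.0, 3.0]})
# -> {"A": 1.5, "B": 1.5, "C": 3.0}
```

The Friedman test itself then checks whether these average ranks differ more than would be expected by chance at the chosen significance level.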
Table 3. Comparison of average ranks between DPDE and DPDE-1 using the Friedman test.

Algorithm | 10D | 30D | 50D | 100D | Average
DPDE | 1.3333 | 1.1333 | 1.2000 | 1.3000 | 1.2417
DPDE-1 | 1.6667 | 1.8667 | 1.8000 | 1.7000 | 1.7583
The top-ranked algorithm is emphasized in bold.
Table 4. Comparison of average ranks between DPDE and its variants using the Friedman test.

Algorithm | 10D | 30D | 50D | 100D | Average
DPDE | 2.2167 | 2.0167 | 2.0000 | 1.9667 | 2.0500
DPDE-fixed24 | 2.7667 | 2.3167 | 2.2000 | 2.3667 | 2.4125
DPDE-fixed48 | 2.6500 | 2.6167 | 2.5333 | 2.7667 | 2.6417
DPDE-fixed96 | 2.3667 | 3.0500 | 3.2667 | 2.9000 | 2.8958
The top-ranked algorithm is emphasized in bold.
Table 5. Comparison of average ranks between DPDE and its variants using the Friedman test.

Algorithm | 10D | 30D | 50D | 100D
DPDE-T1 | 5.7500 | 3.9500 | 3.6333 | 3.0833
DPDE-T2 | 5.2500 | 3.6667 | 3.5333 | 4.4500
DPDE-T3 | 5.7500 | 4.1000 | 3.7667 | 3.3167
DPDE-T4 | 4.3833 | 4.1833 | 5.6667 | 5.3833
DPDE-T5 | 4.3000 | 4.5333 | 4.8000 | 4.5500
DPDE-T6 | 4.7833 | 4.8667 | 4.6000 | 5.1000
DPDE-T7 | 5.0833 | 6.6667 | 6.2667 | 6.2333
DPDE-T8 | 4.8833 | 6.2500 | 6.1333 | 6.8667
DPDE-T9 | 4.8167 | 6.7833 | 6.6000 | 6.0167
The top-ranked algorithm is emphasized in bold.
Table 6. Comparison of average ranks between DPDE and its variants using the Friedman test under different values of T3.

Algorithm | 10D | 30D | 50D | 100D | Average
DPDE_8 | 1.9333 | 2.1000 | 2.0667 | 1.9667 | 2.0167
DPDE_16 | 1.9500 | 1.8667 | 1.8667 | 1.8667 | 1.8875
DPDE_32 | 2.1167 | 2.0333 | 2.0667 | 2.1667 | 2.0958
The top-ranked algorithm is emphasized in bold.
Table 7. Comparison results for the real-world problems.

Problem | MPEDE | JADE | LSHADE | jSO | PaDE | HIP-DE | DPDE
problem 2 | −2.17E+01 (+) | −2.33E+01 (+) | −2.62E+01 (=) | −2.59E+01 (+) | −2.62E+01 (=) | −2.64E+01 (=) | −2.64E+01
problem 5 | −3.41E+01 (+) | −3.59E+01 (=) | −3.62E+01 (=) | −3.57E+01 (=) | −3.61E+01 (=) | −3.61E+01 (=) | −3.59E+01
problem 6 | −2.82E+01 (=) | −2.90E+01 (=) | −2.92E+01 (−) | −2.91E+01 (−) | −2.60E+01 (+) | −2.11E+01 (+) | −2.83E+01
problem 7 | 1.29E+00 (+) | 1.17E+00 (+) | 1.15E+00 (+) | 1.14E+00 (+) | 1.08E+00 (+) | 1.13E+00 (+) | 7.43E-01
Total +/−/= | 3/0/1 | 2/0/2 | 1/1/2 | 2/1/1 | 2/0/2 | 2/0/2 |
The top-ranked algorithm is emphasized in bold.
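In Table 7 the parenthesized signs summarize per-problem statistical comparisons against DPDE ("+": DPDE performs significantly better than the competitor, "−": significantly worse, "=": no significant difference), and the final row tallies them per competitor. A small sketch of that tallying (the helper name `tally_signs` is illustrative):

```python
from collections import Counter

def tally_signs(signs):
    """Summarize per-problem comparison outcomes as 'wins/losses/ties'."""
    c = Counter(signs)
    return f"{c['+']}/{c['-']}/{c['=']}"

# MPEDE column of Table 7: DPDE better on problems 2, 5, and 7, tied on 6.
summary = tally_signs(["+", "+", "=", "+"])  # -> "3/0/1"
```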
Huang, Y.; Qian, X.; Song, W. Improving Dual-Population Differential Evolution Based on Hierarchical Mutation and Selection Strategy. Electronics 2024, 13, 62. https://doi.org/10.3390/electronics13010062