Article

A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization

1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
3 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1072; https://doi.org/10.3390/math10071072
Received: 4 February 2022 / Revised: 13 March 2022 / Accepted: 24 March 2022 / Published: 26 March 2022
(This article belongs to the Topic Soft Computing)

Abstract
High-dimensional optimization problems are increasingly common in the era of big data and the Internet of Things (IoT), and they seriously challenge the performance of existing optimizers. To solve such problems effectively, this paper devises a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO), which integrates the valuable evolutionary information of different elite particles in the swarm to guide the updating of inferior ones. Specifically, the swarm is first separated into two exclusive sets, namely the elite set (ES), containing the top-ranked individuals, and the non-elite set (NES), consisting of the remaining individuals. Then, the dimensions of each particle in NES are randomly divided into several groups of equal size. Subsequently, each dimension group of each non-elite particle is guided by two different elites randomly selected from ES. In this way, each non-elite particle in NES is comprehensively guided by multiple elite particles in ES, so that high diversity can be maintained while fast convergence remains likely. To alleviate the sensitivity of DGCELSO to its associated parameters, we further devise dynamic adjustment strategies that change the parameter settings during the evolution. With the above mechanisms, DGCELSO is expected to explore and exploit the solution space properly to find the optimal solutions of optimization problems. Extensive experiments conducted on two commonly used large-scale benchmark problem sets demonstrate that DGCELSO achieves highly competitive, or even much better, performance than several state-of-the-art large-scale optimizers.

1. Introduction

Large-scale optimization problems, also called high-dimensional problems, are ubiquitous in daily life and industrial engineering in the era of big data and the Internet of Things (IoT), such as water distribution optimization problems [1], cyber-physical systems design problems [2], control of pollutant spreading on social networks [3], and offshore wind farm collector system planning problems [4]. As the dimensionality of optimization problems increases, most existing optimization methods encounter the degradation of optimization effectiveness, due to the “curse of dimensionality” [5,6].
Specifically, the increase in dimensionality results in the following challenges for existing optimization algorithms: (1) With the growth of dimensionality, the properties of optimization problems become much more complicated. In particular, in a high-dimensional environment, optimization problems are usually non-convex, non-differentiable, or even non-continuous [7,8,9], which renders traditional gradient-based optimization algorithms infeasible. (2) The solution space grows exponentially as the dimensionality increases [10,11,12,13], which greatly challenges the optimization efficiency of most existing algorithms. (3) The landscape of optimization problems becomes more complex in a high-dimensional space. On the one hand, some unimodal problems may become multimodal as the dimensionality increases; on the other hand, in some multimodal problems, not only does the number of locally optimal regions increase rapidly, but the local regions also become much wider and flatter [11,12,14]. This often leads existing optimization techniques to premature convergence and stagnation.
As a kind of metaheuristic algorithm, particle swarm optimization (PSO) maintains a population of particles, each of which represents a feasible solution to the optimization problem, to search the solution space for globally optimal solutions [15,16,17]. Owing to its merits, such as strong global search ability, independence from the mathematical properties of optimization problems, and inherent parallelism [17], PSO has witnessed rapid development and excellent success in solving complex optimization problems [18,19,20,21,22] since it was proposed in 1995 [15]. As a result, PSO has been widely employed to solve real-world optimization problems in daily life and industrial engineering [1,23].
However, most existing PSOs are initially designed for low-dimensional optimization problems. Confronted with large-scale optimization problems, their effectiveness usually deteriorates due to the previously mentioned challenges [24,25,26]. To improve the optimization effectiveness of PSO in tackling high-dimensional problems, researchers have been devoted to designing novel and effective evolution mechanisms for PSO. Broadly speaking, existing large-scale PSOs can be divided into two categories [27], namely cooperative coevolutionary large-scale PSOs [6,28,29] and holistic large-scale PSOs [24,26,30,31,32].
Cooperative coevolutionary PSOs (CCPSOs) [6,28,29,33] adopt the divide-and-conquer technique to decompose one large-scale optimization problem into several exclusive smaller sub-problems and then optimize these sub-problems individually with traditional PSOs designed for low-dimensional problems, so as to find the optimal solution to the large-scale problem. Since the decomposed sub-problems are optimized separately, the key component of CCPSOs is the decomposition strategy [6,28]. Ideally, a good decomposition strategy should place interacting variables into the same sub-problem, so that they can be optimized together. However, without prior knowledge, it is considerably difficult to decompose a large-scale problem accurately. As a result, current research on CCPSOs focuses on developing novel decomposition strategies that divide the large-scale optimization problem as accurately as possible, and many effective decomposition strategies [6,34,35,36,37,38] have been put forward.
However, CCPSOs heavily rely on the quality of the decomposition strategies. According to the no free lunch theorem, there is no decomposition strategy suitable for all large-scale problems. Therefore, some researchers attempt to design large-scale PSOs from another perspective, namely the holistic large-scale PSOs [5,26,30,39].
In contrast to CCPSOs, holistic large-scale PSOs [5,26,30,39,40] still optimize all variables simultaneously, as traditional PSOs do. Since the learning strategy used to update the velocity of particles plays the most important role in PSO [15,16,18], the key to improving the effectiveness of PSO for large-scale optimization is to devise effective learning strategies for particles, which should not only help particles explore the solution space efficiently to locate promising areas quickly, but also aid particles in exploiting the promising areas effectively to obtain high-quality solutions. Along this line, researchers have developed many remarkable learning strategies for PSO to solve high-dimensional problems, such as the competitive learning scheme [26], the social learning strategy [30], the two-phase learning method [1], and the level-based learning approach [25]. Recently, some researchers have even attempted to develop novel coding schemes for PSO to improve its optimization performance on large-scale optimization problems [41].
Although the above-mentioned large-scale PSOs have presented excellent optimization performance on some large-scale optimization problems, they still encounter limitations, such as premature convergence and stagnation in local areas, when solving complicated high-dimensional problems, especially those with overlapping correlated variables or fully non-separable variables. Therefore, the optimization performance of PSOs in tackling large-scale optimization still deserves improvement, and it remains an open and active topic in the evolutionary computation community.
In nature, individuals with better fitness usually preserve more valuable evolutionary information than those with worse fitness, to guide the evolution of one species [42]. Moreover, in general, different individuals usually preserve different useful genes. Inspired by these observations, in this paper, we propose a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO) by integrating useful genes embedded in different elite individuals to guide the update of particles to search the large-scale solution space effectively and efficiently. Specifically, the main components of the proposed DGCELSO are summarized as follows:
(1)
A dimension group-based comprehensive elite learning scheme is proposed to guide the update of inferior particles by learning from multiple superior ones. Instead of learning from at most two exemplars, as in existing holistic large-scale PSOs [24,25,26,30], the devised learning strategy first randomly divides the dimensions of each inferior particle into several equally sized groups and then employs different superior particles to guide the update of different dimension groups. Moreover, unlike existing elite strategies that use only one elite to direct the evolution of an individual [43,44], it employs a random dimension group-based recombination technique to integrate the valuable evolutionary information of multiple elites to guide the update of each non-elite particle. In this way, the learning diversity of particles can be largely promoted, which helps particles avoid falling into local traps. Moreover, the useful evolutionary information embedded in different superior particles can be integrated to direct the learning of inferior particles, which may help particles approach promising areas quickly.
(2)
Dynamic adjustment strategies for the control parameters involved in the proposed learning strategy are further designed to cooperate with the learning strategy to help PSO search the large-scale solution space properly. With these dynamic strategies, the developed DGCELSO could appropriately compromise the intensification and diversification of the search process at the swarm level and the particle level.
To verify the effectiveness of the proposed DGCELSO, extensive experiments are conducted to compare DGCELSO with several state-of-the-art large-scale optimizers on the widely used CEC’2010 [7] and CEC’2013 [8] large-scale benchmark optimization problem sets. Meanwhile, deep investigations on DGCELSO are also conducted to discover what contributes to its good performance.
The rest of this paper is organized as follows. Section 2 introduces the classical PSO and large-scale PSO variants. Then, the proposed DGCELSO is elucidated in detail in Section 3. Section 4 conducts extensive experiments to verify the effectiveness of the proposed DGCELSO. Finally, Section 5 concludes this paper.

2. Related Work

In this paper, a D-dimensional single-objective minimization problem is considered, which is defined as follows:
min f(x), x ∈ ℝ^D  (1)
where x, consisting of D variables, is a feasible solution to the optimization problem, and D is the dimension size. In this paper, we directly use the function value as the fitness value of a particle.
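As a concrete instance of the minimization problem defined above, a simple separable benchmark objective such as the sphere function can serve as f; the fitness of a particle is simply its function value. The function choice here is illustrative, not one of the benchmark functions used later in the paper.

```python
import numpy as np

def sphere(x):
    """f(x) = sum_d x_d^2; globally minimized at x = 0 with f(0) = 0."""
    return float(np.sum(x ** 2))

D = 1000                 # dimension size of the problem
x = np.zeros(D)          # a feasible solution in R^D
fitness = sphere(x)      # fitness value of the particle at position x
```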

2.1. Canonical PSO

In the canonical PSO [15,16], each particle is represented by two vectors, namely the position vector x and the velocity vector v. During the evolution, each particle is guided by its historically best personal position and the historically best position of the whole swarm. Specifically, each particle is updated as follows:
v_i^d ← w · v_i^d + c1 · r1 · (pbest_i^d − x_i^d) + c2 · r2 · (gbest^d − x_i^d)  (2)
x_i^d ← x_i^d + v_i^d  (3)
where v_i^d is the d-th dimension of the velocity of the i-th particle, x_i^d is the d-th dimension of the position of the i-th particle, pbest_i^d is the d-th dimension of the historically best personal position found by the i-th particle, and gbest^d is the d-th dimension of the historically best global position found by the whole swarm. As for the parameters, c1 and c2 are two acceleration coefficients, r1 and r2 are two real random numbers uniformly generated within [0, 1], and w represents the inertia weight.
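The canonical update rules above can be sketched in Python as follows. This is a minimal sketch; the values of w, c1, and c2 are common defaults from the PSO literature, not settings prescribed by this paper.

```python
import numpy as np

def canonical_pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One velocity/position update of the canonical PSO.

    x, v, pbest: (NP, D) arrays for NP particles in D dimensions.
    gbest: (D,) array, the historically best position of the swarm.
    Returns the updated (x, v)."""
    if rng is None:
        rng = np.random.default_rng()
    NP, D = x.shape
    r1 = rng.random((NP, D))   # fresh uniform randoms per particle and dimension
    r2 = rng.random((NP, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    return x, v
```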
As shown in Equation (2), in the canonical PSO, each particle is cognitively directed by its pbest (the second term on the right-hand side of Equation (2)) and socially guided by the gbest of the whole swarm (the third term on the right-hand side of Equation (2)). Due to the greedy attraction of gbest, the swarm in the canonical PSO usually becomes trapped in local areas when tackling multimodal problems [18,45]. Therefore, to improve the effectiveness of PSO in searching multimodal spaces with many local areas, researchers have developed many novel learning strategies to guide the learning of particles, such as the comprehensive learning strategy [46], the genetic learning strategy [47], the scatter learning strategy [18], and the orthogonal learning strategy [48].
Although many novel learning strategies have helped PSO achieve very promising performance on multimodal problems, most of them are designed specifically for low-dimensional optimization problems. When confronted with large-scale optimization problems, most existing PSOs lose their effectiveness due to the "curse of dimensionality" and the aforementioned challenges of high-dimensional problems.

2.2. Large-Scale PSO

To address the previously mentioned challenges of large-scale optimization, researchers have devoted extensive attention to designing novel PSOs. As a result, numerous large-scale PSO variants have sprung up [1,26]. Broadly, existing large-scale PSOs can be classified into the following two categories.

2.2.1. Cooperative Coevolutionary Large-Scale PSO (CCPSO)

Cooperative coevolutionary PSOs (CCPSOs) [6,29,49] mainly use the divide-and-conquer technique to separate all variables of a high-dimensional problem into several exclusive groups, and then optimize each group of variables independently to obtain the optimal solution to the high-dimensional problem. Van den Bergh and Engelbrecht put forward the earliest CCPSO [49]. In this algorithm, all variables of a large-scale optimization problem are randomly divided into K groups, each containing D/K variables (where D is the dimension size). Then the canonical PSO described in Section 2.1 is employed to optimize each group of variables. Nevertheless, the performance of this algorithm heavily relies on the setting of the number of groups (namely K). To alleviate this issue, an improved CCPSO, named CCPSO2 [29], was proposed, which first predefines a set of group numbers and then randomly selects a group number in each iteration to separate variables into groups. In the above two algorithms, the correlations between variables are not explicitly taken into account. Hence, their optimization effectiveness degrades dramatically on problems with many interacting variables [11,12].
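The random grouping of the earliest CCPSO and the per-iteration group-size re-selection of CCPSO2 can be sketched as follows. This is a sketch, not the authors' code, and the set of candidate group sizes shown is illustrative rather than the exact set used in [29].

```python
import random

def random_grouping(D, K, rng=None):
    """Randomly split D variable indices into K exclusive groups of
    D // K variables each, as in the earliest CCPSO."""
    if rng is None:
        rng = random.Random()
    idx = list(range(D))
    rng.shuffle(idx)                 # random assignment of variables to groups
    size = D // K
    return [idx[g * size:(g + 1) * size] for g in range(K)]

def ccpso2_pick_group_size(rng=None, sizes=(2, 5, 10, 50, 100)):
    """CCPSO2 re-selects the group size each iteration from a predefined
    set; the set shown here is illustrative."""
    if rng is None:
        rng = random.Random()
    return rng.choice(sizes)
```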
To alleviate the above issue, researchers have attempted to design effective variable grouping strategies that separate variables into groups by detecting the correlations between them [6,35,36,37]. In the literature, the most representative grouping strategy is the differential grouping (DG) method [6], which uses differences of function values to detect the correlation between any two variables by exerting the same disturbance on each of them. Based on the detected correlations, DG can separate variables into groups satisfactorily. However, this method has two drawbacks: (1) it cannot detect indirect interactions between variables [36], and (2) it consumes a large number of fitness evaluations (O(D²), where D is the number of variables) in the variable decomposition stage [35,37].
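The core pairwise check behind differential grouping can be sketched as follows: two variables are deemed interacting if perturbing one of them changes the function value by a different amount when the other takes a different value. The probe points, δ, and ε below are common choices for illustration, not necessarily the exact settings of [6].

```python
import numpy as np

def interact(f, D, i, j, lb=-100.0, ub=100.0, delta=None, eps=1e-3):
    """Differential-grouping style interaction test between variables i and j.

    f: objective taking a length-D numpy array; lb/ub: box bounds."""
    if delta is None:
        delta = (ub - lb) / 2
    x1 = np.full(D, lb)                      # base point
    x2 = x1.copy(); x2[i] += delta           # perturb variable i
    d1 = f(x2) - f(x1)                       # effect of the perturbation at x_j = lb
    x3 = x1.copy(); x3[j] = (lb + ub) / 2    # move variable j elsewhere
    x4 = x3.copy(); x4[i] += delta           # same perturbation of variable i
    d2 = f(x4) - f(x3)                       # effect of the perturbation now
    return abs(d1 - d2) > eps                # differing effects => interaction
```

For a separable function the two differences coincide, so no interaction is reported; for a product term such as f(x) = x_0 · x_1 the differences diverge and the pair is flagged.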
To fill the first gap, Sun et al. devised an extended DG (XDG) [36], and Mei et al. proposed a global DG (GDG) [50] to detect both direct and indirect interactions between variables. To alleviate the second issue, a fast DG, named DG2 [35], and a recursive DG (RDG) [37] were put forward to reduce the consumption of fitness evaluations in the variable grouping stage. To further improve the detection efficiency of RDG, an efficient recursive differential grouping (ERDG) [51] was devised to reduce the fitness evaluations used in the decomposition stage; to alleviate the sensitivity of RDG to parameters, an improved version, named RDG2, was developed [52] by adaptively adjusting the parameter settings. In [53], Ma et al. proposed a merged differential grouping method based on subset-subset interaction and binary search, which first identifies separable and non-separable variables, then puts all separable variables into the same subset while dividing the non-separable variables into multiple subsets via a binary-tree-based iterative merging method. To further promote variable grouping accuracy, Liu et al. proposed a deep grouping method that considers both the variable interactions and the essentialness of each variable when decomposing a high-dimensional problem [54]. Instead of decomposing a large-scale optimization problem into fixed variable groups, Zhang et al. developed a dynamic grouping strategy that dynamically separates variables into groups during the evolution [55]. Specifically, the proposed algorithm first evaluates the contribution of variables based on historical information and then constructs dynamic variable groups for the next generation based on the evaluated contributions and the detected interaction information.
Owing to their promising performance on large-scale optimization problems, cooperative coevolutionary algorithms have been widely applied to various industrial engineering problems. For instance, Neshat et al. [56] proposed a novel multi-swarm cooperative co-evolution algorithm combining the multi-verse optimizer, the equilibrium optimization method, and the moth flame optimization approach to optimize the layout of offshore wave energy converters. Pan et al. [57] proposed a cooperative co-evolutionary algorithm with a collaboration model and a re-initialization scheme to tackle distributed flowshop group scheduling problems. In [58], a hybrid cooperative co-evolution algorithm with a symmetric local search plus Nelder–Mead was devised to optimize the positions and the power-take-off settings of wave energy converters. In [59], Liang et al. developed a cooperative coevolutionary multi-objective evolutionary algorithm to tackle the transit network design and frequency setting problem.
Although the above-mentioned cooperative coevolutionary algorithms, including CCPSOs, have achieved good performance on certain kinds of high-dimensional problems and have been applied to real-world problems, they still face limitations on complicated high-dimensional problems. On the one hand, according to the No Free Lunch theorem, there is no universal grouping method that can accurately separate variables into groups for all types of large-scale optimization problems; on the other hand, when faced with high-dimensional problems with overlapping variable correlations, most existing variable grouping strategies would place all such variables into the same group, leading to a very large variable group. In this situation, the traditional low-dimensional PSOs used within CCPSO still cannot effectively optimize such a large group of variables. As a result, some researchers have attempted to design large-scale PSOs from another perspective, elucidated next.

2.2.2. Holistic Large-Scale PSO

Unlike CCPSOs, holistic large-scale PSOs [18,26] still consider all variables as a whole and optimize them simultaneously like in traditional low-dimensional PSOs [16]. To solve the previously mentioned challenges of large-scale optimization, the key to holistic large-scale PSOs is to devise effective and efficient learning strategies for particles to largely promote the swarm diversity so that particles could explore the exponentially increased solution space efficiently and exploit the promising areas extensively to obtain high-quality solutions.
In [60], a dynamic multi-swarm PSO along with the Quasi-Newton local search method (DMS-L-PSO) was proposed to optimize large-scale optimization problems by dynamically separating particles into smaller sub-swarms in each generation. Taking inspiration from the competitive learning scheme in human society, Cheng and Jin proposed a competitive swarm optimizer (CSO) [26]. Specifically, this optimizer first separates particles into exclusive pairs and then lets each pair of particles compete with each other. After the competition, the winner is not updated and thus directly enters the next generation, while the loser is updated by learning from the winner. Likewise, inspired by the social learning strategy in animals, a social learning PSO (SLPSO) [61] was devised to let each particle probabilistically learn from those which are better than itself. By extending the pairwise competition mechanism in CSO to a tri-competitive strategy, Mohapatra et al. [62] developed a modified CSO (MCSO) to accelerate the convergence speed of the swarm to tackle high-dimensional problems. Taking inspiration from the comprehensive learning strategy designed for low-dimensional problems [46] and the competitive learning approach in CSO [26], Yang et al. designed a segment-based predominant learning swarm optimizer (SPLSO) [30] to cope with large-scale optimization. Specifically, this optimizer first uses the pairwise competition mechanism in CSO to divide particles into two groups, namely the relatively good particles and the relatively poor particles. Then, it further randomly separates the dimensions of each relatively poor particle into a certain number of exclusive segments, and subsequently randomly selects a relatively good particle to direct the update of each segment of the inferior particle.
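The pairwise competition at the heart of CSO [26], described above, can be sketched as follows. This is a simplified sketch under common assumptions: the loser's velocity is pulled toward the winner and, weighted by a small coefficient phi, toward the swarm mean position; the value of phi here is illustrative.

```python
import numpy as np

def cso_iteration(X, V, f, phi=0.1, rng=None):
    """One generation of a competitive swarm optimizer (sketch).

    X, V: (NP, D) position and velocity arrays (NP assumed even).
    f: objective to minimize. Winners pass to the next generation
    unchanged; each loser learns from its winner and the swarm mean."""
    if rng is None:
        rng = np.random.default_rng()
    NP, D = X.shape
    perm = rng.permutation(NP)          # random exclusive pairing
    xmean = X.mean(axis=0)              # mean position of the swarm
    for k in range(0, NP - 1, 2):
        a, b = perm[k], perm[k + 1]
        winner, loser = (a, b) if f(X[a]) < f(X[b]) else (b, a)
        r1, r2, r3 = (rng.random(D) for _ in range(3))
        V[loser] = (r1 * V[loser]
                    + r2 * (X[winner] - X[loser])
                    + phi * r3 * (xmean - X[loser]))
        X[loser] = X[loser] + V[loser]
    return X, V
```

Because the best particle always wins its pair and enters the next generation unchanged, the best fitness in the swarm never deteriorates across an iteration.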
Unlike the above large-scale PSOs [26,30,62], which let each updated particle learn from only one superior particle, Yang et al. devised a level-based learning swarm optimizer (LLSO) [25] by taking inspiration from teaching theory in pedagogy. Specifically, this optimizer first separates particles into different levels and then lets each particle in the lower levels learn from two random superior exemplars selected from higher levels. Inspired by the cooperative learning behavior in human society, Lan et al. put forward a two-phase learning swarm optimizer (TPLSO) [24]. This optimizer separates the learning of each particle into a mass learning phase and an elite learning phase. In the mass learning phase, the tri-competitive mechanism is employed to update particles, while in the elite learning phase, the elite particles are picked out to learn from each other to further exploit promising areas and refine the found solutions. Similarly, Wang et al. proposed a multiple strategy learning particle swarm optimization (MSL-PSO) [40], in which different learning strategies are used to update particles in different evolution stages. In the first stage, each particle learns from those with better fitness and from the mean position of the swarm to probe promising positions. Then, all of the best probed positions are sorted by fitness, and the top ones are used to update particles in the second stage. In [41], Jian et al. developed a novel region encoding scheme that extends the solution representation from a single point to a region, together with a novel adaptive region search strategy to keep the search diverse. These two schemes were then embedded into SLPSO to tackle large-scale optimization problems.
To find a good compromise between exploration and exploitation, Li et al. devised a learning structure to decouple exploration and exploitation for PSO in [63] to solve large-scale optimization. In particular, an exploration learning strategy was devised to direct particles to sparse areas based on a local sparseness degree measurement, and then an adaptive exploitation learning strategy was developed to let particles exploit the found promising areas. Deng et al. [39] devised a ranking-based biased learning swarm optimizer (RBLSO) based on the principle that the fitness difference between learners and exemplars should be maximized. In particular, in this algorithm, a ranking paired learning (RPL) scheme was designed to let the worse particles learn peer-to-peer from the better ones, and at the same time, a biased center learning (BCL) strategy was devised to let each particle learn from the weighted mean position of the whole swarm. Lan et al. [64] proposed a hierarchical sorting swarm optimizer (HSSO) to tackle large-scale optimization. Specifically, this optimizer first divides particles into a good swarm and a bad swarm with equal sizes based on their fitness. Then, particles in the bad group are updated by learning from those in the good one. Subsequently, the good swarm is taken as a new swarm to execute the above swarm division and particle updating operations until there is only one particle in the good swarm. Kong et al. [65] devised an adaptive multi-swarm particle swarm optimizer to cope with high-dimensional problems. Specifically, it first adaptively divides particles into several sub-swarms and then employs the competition mechanism to select exemplars for particle updating. Huang et al. [66] put forward a convergence speed controller to cooperate with PSO to deal with large-scale optimization. Specifically, this controller is triggered periodically to produce an early warning to PSO before it falls into premature convergence.
Though most existing large-scale PSOs have demonstrated success on certain kinds of high-dimensional problems, their effectiveness still degrades on complicated high-dimensional problems [11,12,27,67], especially those with many wide and flat local areas. Therefore, promoting the effectiveness and efficiency of PSO for large-scale optimization still deserves extensive attention, and this research direction remains an active topic in the evolutionary computation community.

3. Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer

In nature, during the evolution of one species, those elite individuals with better adaptability to the environment usually preserve more valuable evolutionary information, such as genes, to direct the evolution of the species [42]. Moreover, different individuals may preserve different useful genes. Likewise, during the evolution of the swarm in PSO, different particles may contain useful variable values that may be close to the true global optimal solutions. Therefore, a natural idea is to integrate those useful values embedded in different particles to guide the evolution of the swarm. To this end, this paper proposes a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO) to tackle large-scale optimization. The detailed components of this optimizer are elucidated as follows.

3.1. Dimension Group-Based Comprehensive Elite Learning

Given that NP particles are maintained in the swarm, the proposed DGCEL strategy first partitions the swarm into two exclusive sets, namely the elite set, denoted by ES, and the non-elite set, denoted by NES. Specifically, ES contains the best es particles in the swarm, while NES consists of the remaining nes = (NP − es) particles. Since the size of ES, namely es, is related to NP, we set es = [tp × NP] (where tp is the ratio of elite particles in ES to the whole swarm) for the convenience of parameter fine-tuning.
Since elite particles usually preserve more valuable evolutionary information than the non-elite ones, in this paper, we first develop an elite learning strategy (EL). Specifically, we let the elite particles in ES directly enter the next generation, while only updating the non-elite particles in NES. Moreover, the elite particles in ES are employed to guide the learning of non-elite particles in NES.
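The ES/NES partition can be sketched as follows for a minimization problem. The ratio tp = 0.2 and the lower bound of two elites (needed later so that two distinct exemplars can be drawn from ES) are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def split_swarm(X, fitness, tp=0.2):
    """Partition the swarm into the elite set ES (top es = round(tp * NP)
    particles by fitness) and the non-elite set NES.

    X: (NP, D) positions; fitness: (NP,) values to minimize.
    Returns (es_idx, nes_idx) as index arrays into the swarm."""
    NP = len(X)
    es = max(2, int(round(tp * NP)))   # assumption: at least two elites,
                                       # so two distinct exemplars exist
    order = np.argsort(fitness)        # ascending: best (smallest) first
    return order[:es], order[es:]
```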
As for the elite particles, although they may still be far from the globally optimal area during the evolution, they usually contain valuable genes that are very close to the true globally optimal solution. To integrate the useful evolutionary information embedded in different elites, we propose a dimension group-based comprehensive learning strategy (DGCL). Specifically, during the update of each non-elite particle, all dimensions of this particle are first randomly shuffled and then partitioned into NDG dimension groups (where NDG denotes the number of dimension groups), with each group containing D/NDG dimensions. In this way, the dimensions of each non-elite particle are randomly divided into NDG groups, namely DG = [DG_1, DG_2, …, DG_NDG].
It should be mentioned that, since the dimensions of each non-elite particle are shuffled independently, the division into dimension groups is likely to differ between non-elite particles. In addition, if D % NDG is not zero, the remaining dimensions are allocated one each to the first (D % NDG) groups, i.e., each of the first (D % NDG) groups contains (D/NDG + 1) dimensions.
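The shuffle-and-partition step described above, including the allocation of the D % NDG leftover dimensions to the first groups, can be sketched as follows.

```python
import numpy as np

def partition_dimensions(D, NDG, rng=None):
    """Randomly shuffle the D dimension indices and split them into NDG
    groups; when D % NDG != 0, the first (D % NDG) groups each receive
    one extra dimension. Returns a list of index arrays."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.permutation(D)           # independent shuffle per particle
    base, rem = divmod(D, NDG)
    groups, start = [], 0
    for g in range(NDG):
        size = base + (1 if g < rem else 0)
        groups.append(idx[start:start + size])
        start += size
    return groups
```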
Subsequently, unlike most existing large-scale PSOs [25,26,30] which use the same exemplars to update all dimensions of one inferior particle, the proposed DGCL uses one exemplar to update each dimension group of each non-elite particle, and thus one non-elite particle could learn from different exemplars.
Incorporating the proposed EL into the DGCL, the DGCEL is developed by using the elite particles in ES to direct the update of each dimension group of a non-elite particle. Specifically, each non-elite particle is updated as follows:
V_NESj^DGi ← r1 · V_NESj^DGi + r2 · (X_ESr1^DGi − X_NESj^DGi) + φ · r3 · (X_ESr2^DGi − X_NESj^DGi)  (4)
X_NESj^DGi ← V_NESj^DGi + X_NESj^DGi  (5)
where NES_j represents the j-th non-elite particle in NES; DG_i denotes the i-th dimension group of the j-th non-elite particle; X_NESj^DGi and V_NESj^DGi are the i-th dimension group of the position and velocity of the j-th particle in NES, respectively; ES_r1 and ES_r2 are two different elite particles randomly selected from ES; r1, r2, and r3 are three random real numbers uniformly sampled within [0, 1]; and φ ∈ [0, 1] is a control parameter governing the influence of the second elite particle.
As for the update of each non-elite particle in NES, as shown in Equation (4), careful attention should be paid to the following details:
(1)
As previously mentioned, for each non-elite particle, the dimensions are randomly shuffled. As a result, the partition of dimension groups is different for different non-elite particles.
(2)
For each dimension group D G i , two different elite particles X E S r 1 and X E S r 2 are first randomly selected from ES. Then, the better one between these two elites (suppose it is X E S r 1 ) acts as the first exemplar in Equation (4), while the worse one (suppose it is X E S r 2 ) acts as the second exemplar to guide the update of the dimension group of the non-elite particle.
(3)
The two elite particles guiding the update of each dimension group are both randomly selected. Therefore, they are likely to be different for different dimension groups.
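Combining Equations (4) and (5) with the exemplar-ordering rule in point (2), the per-group update can be sketched as below. This is an illustrative fragment with names of our own choosing, not the authors' implementation; `elite1` is assumed to already be the fitter of the two randomly selected elites.

```python
import random

def update_dimension_group(x, v, elite1, elite2, dims, phi):
    """Apply Equations (4) and (5) to one dimension group of a non-elite
    particle.  x and v are the particle's position and velocity (updated
    in place); dims holds the dimension indices of this group."""
    for d in dims:
        r1, r2, r3 = random.random(), random.random(), random.random()
        v[d] = (r1 * v[d]
                + r2 * (elite1[d] - x[d])          # pull toward the fitter elite
                + phi * r3 * (elite2[d] - x[d]))   # weaker pull toward the other elite
        x[d] = x[d] + v[d]
    return x, v
```

Note that when both elites coincide with the particle's position, the velocity can only shrink (it is multiplied by a random factor in [0, 1]), so the particle settles rather than diverges.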
As a whole, a complete flowchart of the proposed DGCEL is shown in Figure 1. Taking deep analysis on Equation (4) and Figure 1, we find that the proposed DGCEL strategy brings the following advantages to PSO:
(1)
Instead of using historical evolutionary information, such as the historically global best position (gbest), the personal best positions (pbest), and the neighborhood best position (nbest) used in traditional PSOs [18,47], the devised DGCEL employs the elite particles in the current swarm to direct the learning of the non-elite particles. In contrast to the historical information, which may remain unchanged for many generations, particles in the swarm are usually updated generation by generation. Therefore, in the proposed DGCEL, the two selected guiding exemplars are not only likely different for different particles, but also probably different for the same particle in different generations. This is very beneficial for promoting swarm diversity.
(2)
Instead of updating each particle with the same exemplars for all dimensions, as is done in most existing large-scale PSOs [5,24,25,26,30], the proposed DGCEL updates non-elite particles at the dimension group level. Therefore, the two guiding exemplars are likely different for different dimension groups. In this way, not only could one non-elite particle learn from multiple different elite ones, but the useful genes hidden in different elites could also be incorporated to direct the evolution of the swarm. As a result, both the learning diversity and the learning efficiency of particles could be improved.
(3)
In DGCEL, each dimension group of a non-elite particle is guided by two randomly selected elite particles in ES. With the guidance of multiple elites, each non-elite particle is expected to approach promising areas quickly. In addition, since the elite particles in ES are not updated and directly enter the next generation, the useful evolutionary information in the current swarm is protected from being destroyed by uncertain updates. Therefore, the elites in ES become better and better as the evolution iterates, and at last, it is expected that these elites converge to the optimal areas.

Remark

To the best of our knowledge, four existing PSOs are very similar to the proposed DGCELSO: CLPSO [46], OLPSO [48], GLPSO [47], and SPLSO [30]. The first three were originally designed for low-dimensional problems, while the last was initially devised for large-scale optimization. Compared with these existing PSOs, the developed DGCELSO is distinguished in the following ways:
(1)
In contrast to the three low-dimensional PSOs [46,47,48], the proposed DGCELSO uses the elite particles in the swarm to comprehensively guide the learning of the non-elite particles at the dimension group level. First, the three low-dimensional PSOs all use the personal best positions (pbests) of particles to construct only one guiding exemplar for each updated particle, whereas DGCELSO leverages the elite particles in the current swarm to construct two different guiding exemplars for each non-elite particle. Second, the three low-dimensional PSOs construct the guiding exemplar dimension by dimension. Nevertheless, DGCELSO constructs the two guiding exemplars group by group. With these two differences, DGCELSO is expected to construct more promising guiding exemplars for the updated particles, and thus the learning effectiveness and efficiency of particles could be largely promoted to explore the large-scale solution space.
(2)
In contrast to the large-scale PSO, namely SPLSO [30], DGCELSO uses two different elite particles to direct the update of each dimension group of each non-elite particle. First, the partition of the swarm in DGCELSO is very different from the one in SPLSO. In DGCELSO, the swarm is divided into two exclusive sets according to the fitness of particles, with the best es particles entering ES and the rest entering NES. However, in SPLSO, particles in the swarm are paired together and each paired two particles compete with each other, with the winner entering the relatively good set and the loser entering the relatively poor set. Second, for each non-elite particle, DGCELSO adopts two random elites in ES to guide the update of each dimension group, whereas in SPLSO, each dimension group of a loser is updated by only one random relatively good particle with the other exemplar being the mean position of the relatively good set, which is shared by all updated particles. Therefore, it is expected that the learning effectiveness and efficiency of particles in DGCELSO are higher than in SPLSO. Hence, DGCELSO is expected to explore and exploit the large-scale solution space more appropriately than SPLSO.

3.2. Adaptive Strategies for Control Parameters

A close investigation of the proposed DGCELSO shows that, apart from the swarm size NP, it has three control parameters: the ratio tp of elite particles in the whole swarm, the number of dimension groups NDG, and the control parameter $\phi$ in Equation (4). The swarm size NP is a common parameter of all evolutionary algorithms and is usually problem-dependent; it is thus left to be fine-tuned. As for $\phi$, it subtly controls the influence of the second guiding exemplar in the velocity update; like NP, it is also left to be fine-tuned in the experiments. For the other two control parameters, we devise the following dynamic adjustment schemes to alleviate the sensitivity of DGCELSO to them.

3.2.1. Dynamic Adjustment for tp

With respect to the ratio tp of elite particles in the whole swarm, it determines the size of the elite set ES. When tp is large, on the one hand, a large number of particles are preserved and enter the next generation directly; on the other hand, the learning of non-elite particles is diversified thanks to the large number of candidate exemplars, namely the elite particles. In this situation, the swarm is biased toward exploring the solution space. In contrast, when tp is small, only a few elites are preserved. In this case, the learning of non-elite particles concentrates on exploiting the promising areas where the elites are located, so the swarm is biased toward exploitation. However, it should be mentioned that such a bias does not come at a serious sacrifice of swarm diversity, because the two guiding exemplars are randomly selected for each dimension group of each non-elite particle.
Based on the above consideration, it seems rational not to keep tp fixed during the evolution. To this end, we devise a dynamic adjustment strategy for tp as follows:
$$ tp = 0.4 - 0.2 \times \frac{fes}{Fes_{max}} \quad (6) $$
where fes represents the number of fitness evaluations consumed so far, and $Fes_{max}$ is the maximum number of fitness evaluations.
From Equation (6), it is found that tp is linearly decreased from 0.4 to 0.2. Therefore, tp is high at the early stage and small at the late stage. As a result, as the evolution proceeds, the swarm gradually shifts toward exploiting the solution space. This matches the expectation that the swarm should explore the solution space fully in the early stage to find promising areas, while exploiting the found promising areas in the late stage to obtain high-quality solutions. The effectiveness of this dynamic adjustment scheme will be verified in the experiments in Section 4.3.
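Equation (6) amounts to a one-line schedule; a trivial sketch (the function name is ours):

```python
def elite_ratio(fes, fes_max):
    """Linearly decrease the elite ratio tp from 0.4 to 0.2 (Eq. (6))."""
    return 0.4 - 0.2 * fes / fes_max
```

For instance, with $Fes_{max} = 3 \times 10^6$, tp starts at 0.4, passes through 0.3 at the halfway point, and reaches 0.2 when the evaluation budget is exhausted.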

3.2.2. Dynamic Adjustment for NDG

In terms of the number of dimension groups NDG, it directly affects the learning of non-elite particles. A large NDG allows a large number of elite particles to participate in the learning of each non-elite particle. This might be useful when the useful genes are scattered over very diverse dimensions: with a large NDG, the chance of integrating these useful genes to direct the learning of non-elite particles is promoted. By contrast, when the useful genes are concentrated in a few dimensions, a small NDG is preferred. However, without prior knowledge of where the useful genes are embedded in the elite particles, it is difficult to give a proper setting of NDG.
To alleviate the above concern, we devise the following dynamic adjustment of NDG for each non-elite particle based on the Cauchy distribution:
$$ NDG_{NES_j} \sim Cauchy(60, 10) \quad (7) $$
$$ NDG_{NES_j} = floor(NDG_{NES_j}/10) \times 10 + \begin{cases} 0 & \text{if } mod(NDG_{NES_j}, 10) < 5 \\ 10 & \text{otherwise} \end{cases} \quad (8) $$
where $NDG_{NES_j}$ denotes the setting of NDG for the jth particle in NES; Cauchy(60, 10) is a Cauchy distribution with position parameter 60 and scaling parameter 10; floor(x) returns the largest integer not greater than x; and mod(x, y) returns the remainder of x divided by y.
In Equations (7) and (8), two details deserve careful attention. First, the Cauchy distribution is used because it generates values around the position parameter with a long fat tail; with this distribution, the NDGs generated for different non-elite particles are likely diversified. Second, with Equation (8), the setting of NDG for each non-elite particle is kept at a multiple of 10. This is adopted both to promote the difference between two different values of NDG, thereby improving the learning diversity of non-elite particles, and for convenience of computation.
From Equations (7) and (8), it is found that different non-elite particles likely preserve different NDGs. On the one hand, the learning diversity of non-elite particles could be further improved. On the other hand, the chance of integrating useful genes embedded in different elite particles is likely promoted with different settings of NDG. The effectiveness of this dynamic adjustment scheme for NDG will be verified in the experiments in Section 4.3.
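Equations (7) and (8) can be sketched as below. We draw the Cauchy deviate via the inverse-CDF transform; resampling non-positive draws, which the heavy tail occasionally produces, is our own assumption, as the paper does not specify how such draws are handled.

```python
import math
import random

def sample_ndg(loc=60, scale=10):
    """Draw NDG from Cauchy(loc, scale) (Eq. (7)) and round it to the
    nearest multiple of 10 (Eq. (8))."""
    while True:
        # inverse-CDF sampling of the Cauchy distribution
        raw = loc + scale * math.tan(math.pi * (random.random() - 0.5))
        ndg = math.floor(raw / 10) * 10 + (0 if raw % 10 < 5 else 10)
        if ndg >= 10:           # assumption: discard non-positive draws
            return ndg
```

Because of the fat tail, most draws land near 60 but occasional much larger values appear, which is exactly what diversifies the NDG settings across non-elite particles.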

3.3. Overall Procedure of DGCELSO

By integrating the above components, DGCELSO is developed with the overall procedure outlined in Algorithm 1 and the complete flowchart shown in Figure 2. Specifically, after the swarm is initialized and evaluated (Line 1), the algorithm goes to the main iteration loop (Lines 2~17). First, the swarm is partitioned into the elite set (ES) and the non-elite set (NES) as shown in Lines 3 and 4. Then, each particle in NES is updated as shown in Lines 5~16. During the update of one non-elite particle, the dimensions of this particle are first separated into several dimension groups (Lines 6 and 7). Then, for each dimension group of the non-elite particle, two different elite particles are randomly selected from ES (Line 9), and then the dimension group is updated by learning from these two elites (Line 13). The above process iterates until the termination condition is met. At the end of the algorithm, the best solution in the swarm is output (Line 18).
With respect to the computational complexity in time, from Algorithm 1, it is found that in each generation, it takes O(NPlog2NP) to sort the swarm and O(NP) to partition the swarm into two sets in Line 4; then, it takes O(NPD) to shuffle the dimensions and O(NPD) to partition the shuffled dimensions into groups for all non-elite particles (Line 7); at last, it takes O(NPD) to update all non-elite particles (Lines 8~14). To sum up, the time complexity of DGCELSO is O(NPD) based on the consideration that the swarm size is usually much smaller than the dimension size in large-scale optimization.
Algorithm 1: The Pseudocode of DGCELSO.
Input:Population size NP, Maximum number of fitness evaluations FESmax, Control parameter ϕ ;
1:Initialize NP particles randomly and calculate their fitness; fes = NP;
2:While (fesFESmax) do
3: Calculate tp according to Equation (6) and obtain the elite set size es = ⌊tp × NP⌋;
4: Sort particles based on their fitness and divide them into two sets, namely ES and NES;
5:For each non-elite particle $NES_j$ in NES do
6:  Generate $NDG_{NES_j}$ based on Equations (7) and (8);
7:  Randomly shuffle the dimensions and then split them into $NDG_{NES_j}$ groups;
8:  For each dimension group D G i do
9:   Randomly select two different elite particles from ES: $X_{ES_{r1}}$ and $X_{ES_{r2}}$;
10:   If (f($X_{ES_{r2}}$) < f($X_{ES_{r1}}$)) then
11:    Swap $ES_{r1}$ and $ES_{r2}$;
12:   End If
13:   Update the dimension group of $NES_j$ according to Equations (4) and (5);
14:  End For
15:  Calculate the fitness of the updated $NES_j$, and fes ++;
16:End For
17:End While
18:Obtain the best solution in the swarm gbest and its fitness f(gbest)
Output: f(gbest) and gbest
Regarding the computational complexity in space occupation, in Algorithm 1, we can see that except for O(NPD) to store the positions of all particles and O(NPD) to store the velocities of all particles, it only takes extra O(NP) to store the index of particles in the two sets, and O(D) to store the dimension groups. Comprehensively, DGCELSO only takes O(NPD) space.
Based on the above time and space complexity analysis, it is found that the proposed DGCELSO remains as efficient as the classical PSO, which also takes O(NPD) time in each generation and O(NPD) space.
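For a runnable picture of Algorithm 1, the following self-contained Python sketch condenses the main loop at a toy scale (minimization). The parameter values, the search bounds [-100, 100], and the simplified stand-in for the Cauchy-based NDG sampler of Equations (7) and (8) are all illustrative choices of ours, not the authors' C implementation.

```python
import random

def dgcelso(f, D, NP=30, phi=0.4, fes_max=3000):
    """Condensed sketch of Algorithm 1 on a toy scale (minimization)."""
    X = [[random.uniform(-100.0, 100.0) for _ in range(D)] for _ in range(NP)]
    V = [[0.0] * D for _ in range(NP)]
    fit = [f(x) for x in X]
    fes = NP
    while fes < fes_max:
        tp = 0.4 - 0.2 * fes / fes_max                 # Eq. (6)
        order = sorted(range(NP), key=lambda i: fit[i])
        es = max(2, int(tp * NP))                      # elite set size
        elites, non_elites = order[:es], order[es:]
        for j in non_elites:
            ndg = random.choice([2, 4, 5])             # toy stand-in for Eqs. (7)-(8)
            dims = list(range(D))
            random.shuffle(dims)
            size = max(1, D // ndg)
            for start in range(0, D, size):
                group = dims[start:start + size]
                e1, e2 = random.sample(elites, 2)
                if fit[e2] < fit[e1]:                  # fitter elite acts first
                    e1, e2 = e2, e1
                for d in group:
                    r1, r2, r3 = (random.random() for _ in range(3))
                    V[j][d] = (r1 * V[j][d]
                               + r2 * (X[e1][d] - X[j][d])
                               + phi * r3 * (X[e2][d] - X[j][d]))
                    X[j][d] += V[j][d]                 # Eqs. (4)-(5)
            fit[j] = f(X[j])
            fes += 1
        # elite particles enter the next generation unchanged
    best = min(range(NP), key=lambda i: fit[i])
    return X[best], fit[best]
```

Run on a 20-D sphere function, for example, the swarm improves markedly on a random starting point within a few thousand evaluations, illustrating the fast convergence the elite-guided update is designed for.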

4. Experimental Section

To verify the effectiveness of the proposed DGCELSO, extensive experiments are conducted in this section on two sets of large-scale optimization problems, namely the CEC’2010 [7] and the CEC’2013 [8] large-scale benchmark sets. The CEC’2010 set contains 20 high-dimensional problems with 1000 dimensions, while the CEC’2013 set consists of 15 problems with 1000 dimensions as well. In particular, the CEC’2013 set is an extension of the CEC’2010 set that introduces more complicated features, such as overlapping interactions among variables and imbalanced contributions of variables. Therefore, compared with the CEC’2010 problems, the CEC’2013 problems are more complicated and more difficult to optimize. For more detailed information on the two benchmark large-scale problem sets, readers are referred to [7,8].
In this section, we first investigate the settings of two key parameters (namely the swarm size NP and the control parameter ϕ ) for DGCELSO in Section 4.1. Then, extensive experiments are conducted on the two benchmark sets to compare DGCELSO with several state-of-the-art large-scale optimizers in Section 4.2. At last, a deep investigation into the proposed DGCELSO is performed to observe what contributes to the good performance of DGCELSO.
In the experiments, unless otherwise stated, the maximum number of fitness evaluations is set as 3000 × D, where D is the dimension size. In this paper, the dimension size of all optimization problems is 1000, and thus the total number of fitness evaluations is 3 × 106. To make fair and comprehensive comparisons, the median, the mean, and the standard deviation (Std) values over 30 independent runs are used to evaluate the performance of all algorithms. Moreover, to tell the statistical significance, the Wilcoxon rank-sum test at the significance level of “α = 0.05” was conducted to compare two different algorithms. Furthermore, to obtain the overall ranks of different algorithms on one whole benchmark set, the Friedman test at the significance level of “α = 0.05” was conducted on each benchmark set.
Lastly, it is worth noting that the proposed DGCELSO was implemented in the C programming language with the Code::Blocks IDE. Moreover, all experiments were run on a PC with an 8-core Intel Core i7-10700 2.90-GHz CPU, 8-GB memory, and a 64-bit Ubuntu 12.04 LTS system.

4.1. Parameter Setting

Thanks to the two proposed dynamic adjustment strategies for the associated parameters in DGCELSO, only two parameters, namely the swarm size NP and the control parameter $\phi$, need fine-tuning. Therefore, to investigate the optimal setting of these two parameters for DGCELSO in solving 1000-D large-scale optimization problems, we conduct experiments on the CEC’2010 benchmark set, varying NP from 100 to 600 and $\phi$ from 0.1 to 0.9. Table 1 shows the mean fitness values obtained by DGCELSO with different settings of NP and $\phi$ on the CEC’2010 set. In this table, the best results are highlighted in bold, and the average rank of each configuration, obtained using the Friedman test at the significance level of “α = 0.05”, is also presented.
From this table, we obtain the following findings. (1) From the perspective of the Friedman test, when NP is fixed, the optimal setting of $\phi$ is neither too small nor too large, and usually lies within [0.3, 0.6]. Specifically, when NP is 100 and 200, the optimal $\phi$ is 0.6 and 0.5, respectively. When NP is within [300, 500], the optimal $\phi$ is consistently 0.4. When NP is 600, the optimal $\phi$ is 0.3. (2) More specifically, we find that when NP is small, such as 100, the optimal $\phi$ is usually large. This is because a small NP cannot afford enough diversity for DGCELSO to explore the solution space. Therefore, to improve the diversity, $\phi$ should be large to enhance the influence of the second guiding exemplar in Equation (4), which prevents the updated particle from being greedily attracted by the first guiding exemplar. On the contrary, when NP is large, such as 600, a small $\phi$ is preferred. This is because a large NP already offers high diversity, which slows down the convergence of DGCELSO. Consequently, to let particles fully exploit the found promising areas, $\phi$ should be small to decrease the influence of the second guiding exemplar in Equation (4). (3) Comparing all settings of NP along with their associated optimal settings of $\phi$, we find that DGCELSO with NP = 300 and $\phi$ = 0.4 achieves the best overall performance.
Based on the above observation, NP = 300 and ϕ = 0.4 are adopted for DGCELSO in the experiments related to 1000-D optimization problems.

4.2. Comparisons with State-of-the-Art Methods

To comprehensively verify the effectiveness of the devised DGCELSO, this section conducts extensive comparison experiments to compare DGCELSO with several state-of-the-art large-scale algorithms. Specifically, nine popular and latest large-scale methods are selected, namely TPLSO [24], SPLSO (The source code can be downloaded from https://gitee.com/mmmyq/SPLSO, accessed on 1 January 2022) [30], LLSO (The source code can be downloaded from https://gitee.com/mmmyq/LLSO, accessed on 1 January 2022) [25], CSO (The source code can be downloaded from http://www.soft-computing.de/CSO_Matlab_New.zip, accessed on 1 January 2022) [26], SLPSO (The source code can be downloaded from http://www.soft-computing.de/SL_PSO_Matlab.zip, accessed on 1 January 2022) [61], DECC-GDG (The source code can be downloaded from https://ww2.mathworks.cn/matlabcentral/mlc-downloads/downloads/submissions/45783/versions/1/download/zip/CC-GDG-CMAES.zip, accessed on 1 January 2022) [50], DECC-DG2 (The source code can be downloaded from https://bitbucket.org/mno/differential-grouping2/src/master/, accessed on 1 January 2022) [35], DECC-RDG (The source code can be downloaded from https://www.researchgate.net/profile/Yuan-Sun-18/publications, accessed on 1 January 2022) [37], and DECC-RDG2 (The source code can be downloaded from https://www.researchgate.net/profile/Yuan-Sun-18/publications, accessed on 1 January 2022) [52]. The former five large-scale optimizers are state-of-the-art holistic large-scale PSO variants, while the latter four algorithms are state-of-the-art cooperative coevolutionary evolutionary algorithms. Compared with these nine different state-of-the-art large-scale optimizers, the effectiveness of DGCELSO is expected to be demonstrated.
Table 2 and Table 3 display the comparison results between DGCELSO and the nine compared algorithms on the 1000-D CEC’2010 and the 1000-D CEC’2013 large-scale benchmark sets, respectively. In these two tables, the symbols “+”, “−”, and “=” above the p-values obtained from the Wilcoxon rank-sum test denote that the proposed DGCELSO is significantly superior to, significantly inferior to, and equivalent to the associated compared algorithm on the related function, respectively. “w/t/l” in the second-to-last rows of the two tables counts the numbers of functions where DGCELSO performs significantly better than, equivalently to, and significantly worse than the associated compared method; these are the counts of “+”, “=”, and “−”, respectively. In the last rows of the two tables, the average ranks of all algorithms obtained from the Friedman test are presented as well.
In Table 2, the comparison results on the CEC’2010 set are summarized as follows. (1) From the perspective of the Friedman test, as shown in the last row, it is found that the proposed DGCELSO has the lowest rank value, which is much smaller than those of the compared algorithms. This means that DGCELSO achieves the best overall performance and shows great superiority over the compared algorithms. (2) With respect to the Wilcoxon rank-sum test, as shown in the second-to-last row, it is observed that DGCELSO performs significantly better than the compared algorithms on at least 14 problems. In particular, compared with the four cooperative coevolutionary evolutionary algorithms, DGCELSO presents significant superiority over them on at least 16 problems and shows inferiority on at most four problems. In comparison with the five holistic large-scale PSO variants, DGCELSO is significantly superior to SLPSO on 18 problems, achieves much better performance than TPLSO on 16 problems, outperforms both LLSO and CSO on 15 problems, and beats SPLSO on 14 problems. The superiority of DGCELSO over the five holistic large-scale PSOs demonstrates the effectiveness of the proposed DGCEL strategy.
In Table 3, we summarize the comparison results on the CEC’2013 set as follows. (1) From the perspective of the Friedman test, as shown in the last row, it is found that the rank value of the proposed DGCELSO is still the lowest among the ten algorithms, and such a rank is still much smaller than those of the nine compared algorithms. This demonstrates that DGCELSO still achieves the best overall performance on the complicated CEC’2013 benchmark set and shows great dominance over the compared algorithms. (2) With respect to the Wilcoxon rank-sum test, as shown in the second-to-last row, it is observed that, except for SPLSO, DGCELSO shows significantly better performance than the other eight compared algorithms on at least 10 problems and shows inferiority on at most three problems. Compared with SPLSO, DGCELSO beats it on eight problems and is defeated on only three problems. The superiority of DGCELSO over the compared algorithms on the CEC’2013 benchmark set demonstrates that it is promising for complicated large-scale optimization problems.
The above experiments demonstrated the effectiveness of the proposed DGCELSO. To further demonstrate its efficiency in solving large-scale optimization problems, we conduct experiments on the two large-scale benchmark sets to investigate the convergence speed of the proposed DGCELSO in comparison with the nine compared methods. In this experiment, the maximum number of fitness evaluations is set as 5 × 106. Figure 3 and Figure 4 show the convergence comparison results on the CEC’2010 and the CEC’2013 benchmark sets, respectively.
In Figure 3, on the CEC’2010 benchmark set, the following findings can be obtained. (1) At first glance, it is found that the proposed DGCELSO obviously obtains faster convergence along with better solutions than all nine compared algorithms on nine problems (F1, F4, F7, F9, F11, F12, F14, F16, and F17). On F3, F13, F18, and F20, DGCELSO achieves very similar solution quality to some compared algorithms but obtains much faster convergence than the associated compared algorithms. (2) More specifically, we find that DGCELSO shows much better performance in both convergence speed and solution quality than the five holistic large-scale PSO variants, namely TPLSO, SPLSO, LLSO, CSO, and SLPSO, on 17, 16, 15, 16, and 17 problems, respectively. In competition with the four cooperative coevolutionary evolutionary algorithms, namely DECC-GDG, DECC-DG2, DECC-RDG, and DECC-RDG2, DGCELSO shows clear superiority in both convergence speed and solution quality on 17, 17, 17, and 15 problems, respectively.
From Figure 4, similar observations on the CEC’2013 benchmark set can be attained. (1) At first glance, it is found that the proposed DGCELSO obtains faster convergence along with better solutions than all nine compared algorithms on six problems (F1, F4, F7, F11, F13, and F14). On F8, F9, and F12, DGCELSO shows superiority in both convergence speed and solution quality over eight compared algorithms and is inferior to only one compared algorithm. (2) More specifically, we find that DGCELSO performs better, with faster convergence speed and higher solution quality, than TPLSO, SPLSO, LLSO, CSO, and SLPSO on 11, 11, 9, 12, and 10 problems, respectively. In competition with DECC-GDG, DECC-DG2, DECC-RDG, and DECC-RDG2, DGCELSO presents great dominance over them on 11, 9, 11, and 12 problems, respectively.
To sum up, compared with these state-of-the-art large-scale algorithms, DGCELSO performs much better in both convergence speed and solution quality. The superiority of DGCELSO mainly benefits from the proposed DGCEL strategy, which could implicitly assemble useful information embedded in elite particles to guide the evolution of the swarm. In particular, the superiority of DGCELSO over the five holistic large-scale PSOs, which also adopt elite particles in the current swarm to direct the evolution of the swarm, demonstrates that the assembly of evolutionary information in elites is effective. Such assembly not only improves the learning diversity of particles, due to the random selection of guiding exemplars from the elites, but also promotes the learning effectiveness of particles, because each updated particle could learn from multiple different elites with the help of the dimension group-based learning. As a result, DGCELSO could balance search intensification and diversification well to explore and exploit the large-scale solution space appropriately and locate satisfactory solutions.

4.3. Deep Investigation on DGCELSO

In this section, we conduct extensive experiments on the 1000-D CEC’2010 benchmark set to verify the effectiveness of the main components in the proposed DGCELSO.

4.3.1. Effectiveness of the Proposed DGCEL

First, we conduct experiments to investigate the effectiveness of the proposed DGCEL strategy. To this end, we first incorporate the segment-based predominance learning strategy (SPL) of SPLSO, which is the work most similar to the proposed DGCELSO, to replace the DGCEL strategy, leading to a new variant of DGCELSO, which we denote as “DGCELSO-SPL”. In addition, we also develop two extreme cases of DGCELSO, where the number of dimension groups (NDG) is set as 1 and 1000, respectively. The former, which we denote as “DGCELSO-1”, considers all dimensions as one group, and thus can be regarded as DGCELSO without the dimension group-based comprehensive learning, while the latter, which we denote as “DGCELSO-1000”, considers each dimension as a group. This can be regarded as DGCELSO with the comprehensive learning strategy of CLPSO [46] replacing the dimension group-based comprehensive learning. Then, we conduct experiments on the CEC’2010 benchmark set to compare the above four versions of DGCELSO. Table 4 shows the comparison results among the four versions of DGCELSO. In this table, the best results are highlighted in bold.
From Table 4, the following observations can be attained. (1) From the perspective of the Friedman test, it is found that the rank value of DGCELSO is the smallest among the four versions. This demonstrates that DGCELSO achieves the best overall performance. (2) Compared with DGCELSO-SPL, DGCELSO shows great superiority. This demonstrates that the proposed DGCEL strategy is much better than SPL. It should be mentioned that, like DGCEL, SPL also lets each particle learn from multiple elites in the swarm based on dimension groups. The differences between DGCEL and SPL lie in two aspects. On the one hand, SPL lets particles learn from relatively better elites, which are determined by competition between two randomly paired particles, while DGCEL lets particles learn from absolutely better elites, namely the top tp × NP best particles in the swarm. On the other hand, the second exemplar in the velocity update of SPL is the mean position of the whole swarm, which is shared by all updated particles, while the second exemplar in DGCEL is also randomly selected from the elite particles. The observed superiority of DGCEL over SPL demonstrates that the exemplar selection in DGCEL is better than that in SPL. (3) Compared with DGCELSO-1 and DGCELSO-1000, DGCELSO presents great superiority. This superiority demonstrates the effectiveness of the proposed dimension group-based comprehensive learning strategy. Instead of learning from only two exemplars as in DGCELSO-1, which considers all dimensions as one group, or learning from multiple exemplars dimension by dimension as in DGCELSO-1000, which considers each dimension as a group, DGCELSO lets each updated particle learn from multiple exemplars at the dimension group level. In this way, the potentially useful information embedded in different exemplars is more likely to be assembled in DGCELSO than in DGCELSO-1 and DGCELSO-1000.
Based on the above observations, it is found that the proposed DGCEL strategy is effective and plays a crucial role in helping DGCELSO achieve promising performance.

4.3.2. Effectiveness of the Proposed Dynamic Adjustment Schemes for Parameters

In this subsection, we conduct experiments to verify the effectiveness of the proposed dynamic adjustment schemes for the two control parameters, namely the elite ratio tp and the number of dimension groups NDG.
First, we conduct experiments to investigate the effectiveness of the proposed dynamic scheme for tp. To this end, we first set tp as different fixed values from 0.1 to 0.9. Then, we compare the DGCELSO with the dynamic scheme with these DGCELSOs with different fixed tp values. Table 5 shows the comparison results between the DGCELSO with the dynamic scheme and the ones with different values of tp on the CEC’2010 benchmark set. In this table, the best results are highlighted in bold.
From Table 5, the following findings can be obtained. (1) From the perspective of the Friedman test, it is found that DGCELSO with the dynamic tp ranks first among all versions of DGCELSO with different settings of tp. This demonstrates that DGCELSO with the dynamic tp achieves the best overall performance. (2) More specifically, we find that DGCELSO with the dynamic strategy obtains the best results on four problems, and its results on the other problems are very close to the best ones obtained by DGCELSO with the associated optimal settings of tp. These two observations demonstrate that the dynamic strategy for tp is helpful in achieving good performance for DGCELSO.
Then, we conduct experiments to verify the dynamic scheme for the number of dimension groups (NDG). To this end, we first set NDG as different fixed values from 20 to 100. Subsequently, we conduct experiments on the CEC’2010 set to compare the DGCELSO with the dynamic scheme for NDG and the ones with different fixed values of NDG. Table 6 shows the comparison results among the above versions of DGCELSO. In this table, the best results are highlighted in bold.
From Table 6, we can obtain the following findings. (1) From the perspective of the Friedman test, it is found that the rank value of the DGCELSO with the dynamic scheme for NDG is the smallest among all versions of DGCELSO with different settings of NDG. This demonstrates that DGCELSO with the dynamic strategy achieves the best overall performance. (2) More specifically, we find that DGCELSO with the dynamic strategy obtains the best results on nine problems, while DGCELSO with fixed NDG obtains the best results on at most four problems. In particular, on the other 11 problems where DGCELSO with the dynamic strategy does not achieve the best results, its optimization results are very close to the best ones obtained by DGCELSO with the associated optimal NDG. These two observations verify the effectiveness of the dynamic strategy for NDG.
To sum up, the above comparative experiments demonstrate the effectiveness and efficiency of DGCELSO in solving large-scale optimization problems. In particular, the deeper investigation experiments validate that the proposed DGCEL strategy, together with the two dynamic parameter strategies, plays a crucial role in the promising performance of DGCELSO.

5. Conclusions

This paper proposed a dimension group-based comprehensive elite learning swarm optimizer (DGCELSO) to effectively solve large-scale optimization problems. Specifically, this optimizer first partitions the swarm into two exclusive sets, namely the elite set and the non-elite set. Then, the non-elite particles are updated by learning from the elite ones, while the elite particles directly enter the next generation. During the update of each non-elite particle, its dimensions are separated into several dimension groups. Subsequently, for each dimension group, two elites are randomly selected from the elite set and act as the guiding exemplars that direct the update of the group. In this way, each non-elite particle comprehensively learns from multiple elites. Moreover, not only do the guiding exemplars differ between non-elite particles, but they are also likely to differ between the dimension groups of the same particle. As a result, both the learning diversity and the learning efficiency of particles are promoted. To further help the optimizer explore and exploit the solution space properly, we designed two dynamic adjustment strategies for the associated control parameters of DGCELSO.
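The per-generation update summarized above can be sketched as follows. The elite/non-elite split, the random dimension groups, and the two randomly selected elite exemplars per group follow the description in this paper; the concrete velocity rule, the coefficient phi, and the minimization convention are assumptions in the style of related level-based and competitive swarm optimizers, not the paper's exact equations.

```python
import numpy as np

def dgcel_step(X, V, fit, tp=0.4, ndg=5, phi=0.3,
               rng=np.random.default_rng(0)):
    """One hedged DGCEL-style generation (minimization assumed).

    Elites (top tp*NP by fitness) survive unchanged; each non-elite
    particle is updated group-wise, each group pulled toward two
    randomly chosen elites. Assumes ndg divides the dimensionality.
    """
    NP, D = X.shape
    order = np.argsort(fit)                   # ascending fitness
    n_elite = max(2, int(tp * NP))
    elite, non_elite = order[:n_elite], order[n_elite:]

    size = D // ndg
    dims = rng.permutation(D)                 # fresh random grouping
    for i in non_elite:
        for g in range(ndg):
            idx = dims[g * size:(g + 1) * size]
            e1, e2 = rng.choice(elite, size=2, replace=False)
            if fit[e2] < fit[e1]:             # fitter elite leads
                e1, e2 = e2, e1
            r1, r2, r3 = rng.random((3, size))
            V[i, idx] = (r1 * V[i, idx]
                         + r2 * (X[e1, idx] - X[i, idx])
                         + phi * r3 * (X[e2, idx] - X[i, idx]))
            X[i, idx] += V[i, idx]
    return X, V

# Toy usage on the sphere function: elites survive, non-elites move.
rng = np.random.default_rng(7)
X = rng.standard_normal((8, 10))
V = np.zeros_like(X)
fit = (X ** 2).sum(axis=1)
X, V = dgcel_step(X, V, fit, tp=0.4, ndg=5)
```

Because exemplars are redrawn per group and per particle, two non-elite particles (and even two groups of the same particle) are usually guided by different elites, which is the source of the learning diversity discussed above.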
Experiments conducted on the 1000-D CEC’2010 and CEC’2013 large-scale benchmark sets verified the effectiveness of the proposed DGCELSO against nine state-of-the-art large-scale methods. The results demonstrated that DGCELSO achieves highly competitive or even much better performance than the compared methods in terms of both solution quality and convergence speed.

Author Contributions

Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. K.-X.Z.: Implementation, formal analysis, and writing—original draft preparation. X.-D.G.: Methodology and writing—review and editing. D.-D.X.: Writing—review and editing. Z.-Y.L.: Writing—review and editing, and funding acquisition. S.-W.J.: Writing—review and editing. J.Z.: Conceptualization and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grants 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705), and in part by the Startup Foundation for Introducing Talent of NUIST.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Flowchart of the proposed DGCEL strategy.
Figure 2. Flowchart of the proposed DGCELSO.
Figure 3. Convergence behavior comparison between DGCELSO and the compared algorithms on each 1000-D CEC’2010 benchmark problem.
Figure 4. Convergence behavior comparison between DGCELSO and the compared algorithms on each 1000-D CEC’2013 benchmark problem.
Table 1. Comparison results among DGCELSO with different settings of NP and ϕ on the 1000-D CEC’2010 problems.
FNP = 100NP = 200
ϕ = 0.1ϕ = 0.2ϕ = 0.3ϕ = 0.4ϕ = 0.5ϕ = 0.6ϕ = 0.7ϕ = 0.8ϕ = 0.9ϕ = 0.1ϕ = 0.2ϕ = 0.3ϕ = 0.4ϕ = 0.5ϕ = 0.6ϕ = 0.7ϕ = 0.8ϕ = 0.9
F13.31 × 1025.18 × 1078.23 × 1071.34 × 1071.12 × 1032.92 × 10−239.51 × 10−205.27 × 1051.01 × 1085.12 × 10−266.22 × 10−295.73 × 10−270.00 × 1001.10 × 10−262.11 × 10−229.04 × 1025.08 × 1071.22 × 109
F22.93 × 1033.58 × 1033.64 × 1033.22 × 1032.50 × 1031.62 × 1031.12 × 1039.12 × 1031.13 × 1041.16 × 1031.61 × 1031.69 × 1031.40 × 1038.84 × 1022.95 × 1031.07 × 1041.14 × 1041.19 × 104
F35.74 × 1001.13 × 1011.14 × 1018.23 × 1003.24 × 1002.18 × 10−16.43 × 10−143.89 × 10−11.36 × 1013.47 × 10−142.90 × 10−21.19 × 10−13.42 × 10−143.81 × 10−144.88 × 10−143.63 × 10−11.27 × 1011.71 × 101
F44.77 × 10115.57 × 10125.93 × 10122.88 × 10121.66 × 10111.14 × 10111.53 × 10112.90 × 10117.08 × 10111.74 × 10111.96 × 10114.21 × 10111.28 × 10111.25 × 10111.67 × 10112.48 × 10116.04 × 10112.14 × 1013
F52.96 × 1073.16 × 1073.01 × 1073.54 × 1071.30 × 1082.75 × 1082.86 × 1082.96 × 1083.05 × 1082.81 × 1082.36 × 1082.24 × 1082.55 × 1082.77 × 1082.84 × 1082.91 × 1083.04 × 1083.09 × 108
F61.99 × 1012.02 × 1012.02 × 1012.01 × 1011.99 × 1012.01 × 1012.15 × 1012.15 × 1012.03 × 1011.94 × 1011.97 × 1011.97 × 1011.98 × 1011.96 × 1014.00 × 10−93.82 × 10−11.34 × 1011.78 × 101
F72.94 × 1069.82 × 1081.28 × 1093.37 × 1087.90 × 1057.40 × 1051.17 × 1058.40 × 1047.80 × 1053.34 × 10−63.70 × 1041.75 × 1061.40 × 1038.73 × 10−63.14 × 10−12.51 × 1045.02 × 1051.76 × 107
F83.39 × 1074.89 × 1074.72 × 1074.42 × 1071.67 × 1056.68 × 1041.58 × 1074.17 × 1074.89 × 1073.33 × 1055.47 × 1062.47 × 1071.86 × 1033.95 × 1031.73 × 1073.97 × 1074.52 × 1074.62 × 107
F98.72 × 1071.03 × 1091.17 × 1096.34 × 1082.80 × 1071.75 × 1074.08 × 1074.70 × 1081.36 × 10101.97 × 1073.44 × 1076.48 × 1071.98 × 1071.47 × 1074.03 × 1073.65 × 1092.29 × 10104.11 × 1010
F103.14 × 1033.85 × 1033.97 × 1033.43 × 1032.65 × 1031.69 × 1032.66 × 1031.09 × 1041.16 × 1041.20 × 1031.76 × 1031.82 × 1031.48 × 1039.59 × 1021.01 × 1041.08 × 1041.14 × 1041.20 × 104
F117.08 × 1019.71 × 1019.38 × 1018.60 × 1015.22 × 1013.03 × 1012.47 × 1012.53 × 1016.32 × 1011.57 × 1012.00 × 1012.03 × 1012.00 × 1011.09 × 1011.85 × 10−131.38 × 1005.04 × 1011.41 × 102
F129.54 × 1041.03 × 1061.13 × 1066.89 × 1054.82 × 1038.18 × 1026.14 × 1045.07 × 1066.64 × 1062.35 × 1031.71 × 1047.25 × 1041.72 × 1031.92 × 1032.36 × 1065.14 × 1066.61 × 1067.97 × 106
F135.89 × 1033.83 × 1064.40 × 1069.77 × 1055.29 × 1033.06 × 1032.35 × 1036.58 × 1041.36 × 1086.55 × 1028.02 × 1021.02 × 1035.58 × 1024.97 × 1025.48 × 1022.96 × 1033.95 × 1079.38 × 109
F142.68 × 1082.11 × 1092.30 × 1091.45 × 1098.31 × 1074.56 × 1071.39 × 1083.47 × 1093.23 × 10105.82 × 1071.12 × 1082.20 × 1086.07 × 1074.61 × 1072.11 × 1082.02 × 10105.18 × 10107.58 × 1010
F153.33 × 1034.07 × 1034.13 × 1033.53 × 1032.81 × 1031.11 × 1041.09 × 1041.12 × 1041.17 × 1041.07 × 1043.71 × 1033.33 × 1031.07 × 1041.05 × 1041.05 × 1041.08 × 1041.14 × 1041.21 × 104
F161.89 × 1022.56 × 1022.56 × 1022.22 × 1021.47 × 1028.35 × 1015.10 × 1016.61 × 1012.58 × 1026.10 × 10−11.60 × 1012.71 × 1016.75 × 1003.42 × 10−22.93 × 10−131.35 × 1012.52 × 1023.39 × 102
F173.02 × 1051.61 × 1061.71 × 1061.27 × 1063.52 × 1041.06 × 1042.08 × 1069.92 × 1061.40 × 1074.93 × 1049.98 × 1042.77 × 1052.24 × 1041.18 × 1056.85 × 1061.08 × 1071.47 × 1071.80 × 107
F181.70 × 1047.87 × 1081.16 × 1093.75 × 1072.67 × 1031.71 × 1032.72 × 1036.17 × 1064.49 × 10101.95 × 1032.54 × 1033.90 × 1031.66 × 1031.30 × 1031.59 × 1031.71 × 1072.95 × 10101.40 × 1011
F192.34 × 1064.66 × 1064.74 × 1063.92 × 1061.72 × 1066.52 × 1061.48 × 1072.01 × 1072.49 × 1079.10 × 1062.46 × 1062.41 × 1065.98 × 1061.09 × 1071.60 × 1072.09 × 1072.58 × 1073.04 × 107
F208.41 × 1031.01 × 1091.39 × 1094.32 × 1072.93 × 1031.29 × 1031.25 × 1037.61 × 1064.93 × 10101.41 × 1032.13 × 1032.72 × 1031.51 × 1031.10 × 1031.02 × 1032.27 × 1073.25 × 10101.47 × 1011
Rank3.756.357.055.302.802.453.255.758.303.254.255.253.152.453.956.257.708.75
FNP = 300NP = 400
ϕ = 0.1ϕ = 0.2ϕ = 0.3ϕ = 0.4ϕ = 0.5ϕ = 0.6ϕ = 0.7ϕ = 0.8ϕ = 0.9ϕ = 0.1ϕ = 0.2ϕ = 0.3ϕ = 0.4ϕ = 0.5ϕ = 0.6ϕ = 0.7ϕ = 0.8ϕ = 0.9
F19.78 × 10−270.00 × 1000.00 × 1000.00 × 1000.00 × 1007.97 × 10−105.83 × 1051.86 × 1082.23 × 1096.50 × 10−240.00 × 1000.00 × 1000.00 × 1008.36 × 10−244.18 × 10−33.39 × 1063.30 × 1083.01 × 109
F26.93 × 1021.06 × 1031.15 × 1038.88 × 1025.90 × 1021.04 × 1041.09 × 1041.15 × 1041.21 × 1045.75 × 1028.12 × 1028.78 × 1026.57 × 1029.82 × 1031.05 × 1041.10 × 1041.16 × 1041.22 × 104
F33.36 × 10−143.05 × 10−143.15 × 10−143.18 × 10−143.88 × 10−141.15 × 10−75.14 × 1001.47 × 1011.77 × 1013.51 × 10−142.99 × 10−142.98 × 10−143.15 × 10−143.98 × 10−143.29 × 10−47.64 × 1001.54 × 1011.79 × 101
F42.22 × 10112.13 × 10112.00 × 10111.60 × 10111.57 × 10112.06 × 10113.80 × 10111.31 × 10126.16 × 10132.88 × 10112.60 × 10112.27 × 10111.96 × 10111.82 × 10112.48 × 10115.19 × 10113.12 × 10121.09 × 1014
F52.83 × 1082.81 × 1082.76 × 1082.80 × 1082.82 × 1082.86 × 1082.93 × 1083.02 × 1083.18 × 1082.82 × 1082.82 × 1082.81 × 1082.78 × 1082.83 × 1082.89 × 1082.94 × 1083.06 × 1083.16 × 108
F64.00 × 10−96.18 × 1001.86 × 1014.00 × 10−94.00 × 10−92.08 × 10−75.36 × 1001.54 × 1011.84 × 1014.00 × 10−93.88 × 10−94.00 × 10−94.00 × 10−94.00 × 10−94.72 × 10−48.08 × 1001.61 × 1011.86 × 101
F73.43 × 10−31.20 × 10−33.63 × 10−22.15 × 10−52.54 × 10−33.60 × 1021.83 × 1059.66 × 1054.68 × 1088.32 × 10−13.50 × 10−23.16 × 10−27.06 × 10−31.03 × 1007.18 × 1033.40 × 1055.23 × 1061.55 × 109
F81.37 × 1073.51 × 1053.72 × 1044.36 × 1039.82 × 1053.01 × 1074.30 × 1074.58 × 1074.65 × 1072.23 × 1071.00 × 1073.36 × 1066.67 × 1051.33 × 1073.52 × 1074.41 × 1074.61 × 1074.67 × 107
F92.47 × 1072.28 × 1072.49 × 1071.77 × 1072.14 × 1071.69 × 1081.40 × 10103.19 × 10105.08 × 10103.12 × 1072.42 × 1072.41 × 1072.01 × 1073.01 × 1071.36 × 1091.92 × 10103.70 × 10105.55 × 1010
F108.49 × 1021.13 × 1031.22 × 1039.23 × 1029.75 × 1031.05 × 1041.09 × 1041.15 × 1041.22 × 1049.74 × 1038.59 × 1029.25 × 1021.11 × 1031.02 × 1041.05 × 1041.10 × 1041.16 × 1041.22 × 104
F111.25 × 10−132.23 × 10−16.68 × 1001.10 × 10−131.17 × 10−133.45 × 10−78.82 × 1007.59 × 1011.59 × 1021.30 × 10−131.11 × 10−131.05 × 10−131.06 × 10−131.31 × 10−137.60 × 10−41.51 × 1019.53 × 1011.67 × 102
F122.50 × 1044.39 × 1035.55 × 1032.55 × 1038.74 × 1044.13 × 1065.75 × 1067.14 × 1068.45 × 1061.71 × 1059.83 × 1037.41 × 1031.18 × 1042.03 × 1064.62 × 1066.16 × 1067.49 × 1068.77 × 106
F135.69 × 1025.35 × 1025.91 × 1025.15 × 1024.69 × 1024.85 × 1021.08 × 1055.63 × 1081.67 × 10105.31 × 1025.36 × 1025.46 × 1024.93 × 1024.50 × 1024.80 × 1023.72 × 1051.46 × 1092.23 × 1010
F147.79 × 1076.96 × 1077.62 × 1075.17 × 1077.69 × 1075.73 × 1093.85 × 10106.50 × 10108.96 × 10101.14 × 1087.11 × 1077.20 × 1076.01 × 1071.43 × 1081.68 × 10104.63 × 10107.22 × 10109.64 × 1010
F151.04 × 1041.05 × 1041.05 × 1041.04 × 1041.03 × 1041.06 × 1041.10 × 1041.16 × 1041.22 × 1041.04 × 1041.03 × 1041.04 × 1041.03 × 1041.03 × 1041.06 × 1041.11 × 1041.17 × 1041.23 × 104
F162.15 × 10−135.86 × 10−29.78 × 10−21.55 × 10−132.01 × 10−132.36 × 10−61.02 × 1022.92 × 1023.52 × 1022.39 × 10−131.61 × 10−131.53 × 10−131.60 × 10−132.44 × 10−137.14 × 10−31.52 × 1023.07 × 1023.57 × 102
F171.53 × 1065.71 × 1045.65 × 1046.57 × 1044.44 × 1068.75 × 1061.28 × 1071.65 × 1071.98 × 1074.81 × 1061.72 × 1051.03 × 1057.07 × 1056.20 × 1061.02 × 1071.37 × 1071.72 × 1072.06 × 107
F181.56 × 1031.65 × 1031.56 × 1031.31 × 1031.13 × 1031.26 × 1031.57 × 1095.42 × 10101.76 × 10111.31 × 1031.31 × 1031.33 × 1031.25 × 1031.12 × 1033.88 × 1034.92 × 1096.87 × 10101.94 × 1011
F191.33 × 1078.92 × 1068.14 × 1061.02 × 1071.43 × 1071.88 × 1072.25 × 1072.75 × 1073.21 × 1071.45 × 1071.09 × 1071.02 × 1071.21 × 1071.53 × 1071.94 × 1072.47 × 1072.87 × 1073.36 × 107
F201.12 × 1031.32 × 1031.40 × 1031.08 × 1039.79 × 1021.00 × 1031.88 × 1095.74 × 10101.82 × 10119.89 × 1021.15 × 1031.10 × 1039.85 × 1029.82 × 1021.72 × 1035.69 × 1097.40 × 10102.03 × 1011
Rank3.803.484.032.032.885.006.907.958.953.902.902.502.053.955.707.008.009.00
FNP = 500NP = 600
ϕ = 0.1ϕ = 0.2ϕ = 0.3ϕ = 0.4ϕ = 0.5ϕ = 0.6ϕ = 0.7ϕ = 0.8ϕ = 0.9ϕ = 0.1ϕ = 0.2ϕ = 0.3ϕ = 0.4ϕ = 0.5ϕ = 0.6ϕ = 0.7ϕ = 0.8ϕ = 0.9
F18.02 × 10−220.00 × 1000.00 × 1000.00 × 1006.91 × 10−214.51 × 1008.52 × 1064.72 × 1083.65 × 1092.72 × 10−192.86 × 10−260.00 × 1005.33 × 10−269.43 × 10−161.90 × 1021.57 × 1076.02 × 1084.18 × 109
F28.73 × 1036.81 × 1027.34 × 1025.74 × 1021.01 × 1041.06 × 1041.11 × 1041.16 × 1041.23 × 1049.87 × 1036.03 × 1026.59 × 1024.15 × 1031.02 × 1041.06 × 1041.11 × 1041.17 × 1041.23 × 104
F33.92 × 10−142.96 × 10−142.90 × 10−143.12 × 10−148.55 × 10−141.24 × 10−29.15 × 1001.58 × 1011.80 × 1018.00 × 10−132.97 × 10−142.90 × 10−143.16 × 10−146.07 × 10−111.06 × 10−11.02 × 1011.61 × 1011.82 × 101
F43.35 × 10113.11 × 10112.86 × 10112.49 × 10112.16 × 10113.21 × 10116.31 × 10111.05 × 10131.22 × 10144.16 × 10113.83 × 10113.30 × 10112.84 × 10112.67 × 10113.79 × 10117.59 × 10111.73 × 10131.39 × 1014
F52.80 × 1082.76 × 1082.78 × 1082.77 × 1082.77 × 1082.90 × 1082.95 × 1083.06 × 1083.20 × 1082.80 × 1082.79 × 1082.76 × 1082.79 × 1082.84 × 1082.87 × 1082.99 × 1083.03 × 1083.18 × 108
F64.00 × 10−94.00 × 10−94.00 × 10−94.00 × 10−94.00 × 10−91.69 × 10−29.72 × 1001.66 × 1011.88 × 1014.09 × 10−93.88 × 10−93.88 × 10−94.00 × 10−96.07 × 10−91.45 × 10−11.08 × 1011.68 × 1011.89 × 101
F72.61 × 1011.21 × 1007.81 × 10−14.75 × 10−13.43 × 1013.08 × 1044.80 × 1053.01 × 1072.65 × 1092.63 × 1021.79 × 1011.21 × 1018.88 × 1003.52 × 1027.34 × 1046.46 × 1051.15 × 1083.95 × 109
F82.74 × 1071.73 × 1071.16 × 1079.50 × 1062.05 × 1073.79 × 1074.47 × 1074.63 × 1074.68 × 1073.08 × 1072.23 × 1071.74 × 1071.59 × 1072.52 × 1073.96 × 1074.50 × 1074.64 × 1074.69 × 107
F93.94 × 1072.71 × 1072.62 × 1072.33 × 1074.11 × 1074.17 × 1092.23 × 10104.01 × 10105.87 × 10104.84 × 1073.01 × 1072.85 × 1072.62 × 1075.78 × 1076.49 × 1092.49 × 10104.26 × 10106.14 × 1010
F101.00 × 1041.35 × 1037.94 × 1029.73 × 1031.02 × 1041.06 × 1041.11 × 1041.17 × 1041.23 × 1041.02 × 1049.45 × 1036.43 × 1039.99 × 1031.03 × 1041.06 × 1041.11 × 1041.17 × 1041.24 × 104
F111.67 × 10−131.11 × 10−131.04 × 10−131.10 × 10−135.51 × 10−132.59 × 10−21.94 × 1011.08 × 1021.72 × 1026.69 × 10−121.12 × 10−131.05 × 10−131.13 × 10−134.00 × 10−101.90 × 10−12.66 × 1011.17 × 1021.75 × 102
F121.54 × 1062.34 × 1041.47 × 1044.12 × 1043.02 × 1064.87 × 1066.37 × 1067.75 × 1069.01 × 1062.65 × 1064.99 × 1042.82 × 1041.18 × 1053.37 × 1065.07 × 1066.50 × 1067.83 × 1069.11 × 106
F135.27 × 1024.92 × 1024.67 × 1024.65 × 1024.69 × 1024.80 × 1021.08 × 1062.45 × 1092.62 × 10104.82 × 1025.44 × 1025.20 × 1024.56 × 1024.42 × 1025.63 × 1023.15 × 1063.37 × 1092.90 × 1010
F141.66 × 1088.14 × 1077.67 × 1077.21 × 1073.28 × 1082.36 × 10105.14 × 10107.78 × 10101.30 × 10112.61 × 1089.02 × 1078.54 × 1078.71 × 1078.67 × 1082.77 × 10105.44 × 10107.81 × 10101.03 × 1011
F151.03 × 1041.03 × 1041.03 × 1041.03 × 1041.03 × 1041.07 × 1041.11 × 1041.17 × 1041.24 × 1041.03 × 1041.03 × 1041.03 × 1041.03 × 1041.03 × 1041.07 × 1041.12 × 1041.17 × 1041.24 × 104
F162.83 × 10−131.65 × 10−131.55 × 10−131.66 × 10−131.34 × 10−122.74 × 10−11.83 × 1023.16 × 1023.61 × 1021.54 × 10−111.74 × 10−131.58 × 10−131.80 × 10−131.18 × 10−92.41 × 1002.05 × 1023.22 × 1023.63 × 102
F175.98 × 1067.91 × 1052.80 × 1053.07 × 1067.05 × 1061.07 × 1071.43 × 1071.76 × 1072.10 × 1076.68 × 1062.56 × 1061.14 × 1064.24 × 1067.51 × 1061.11 × 1071.44 × 1071.81 × 1072.12 × 107
F181.18 × 1031.26 × 1031.18 × 1031.13 × 1031.00 × 1031.10 × 1058.30 × 1097.98 × 10102.08 × 10111.08 × 1031.22 × 1031.22 × 1031.05 × 1039.59 × 1021.41 × 1061.12 × 10108.67 × 10102.19 × 1011
F191.55 × 1071.23 × 1071.15 × 1071.31 × 1071.64 × 1072.02 × 1072.50 × 1072.92 × 1073.32 × 1071.62 × 1071.31 × 1071.23 × 1071.40 × 1071.72 × 1072.06 × 1072.48 × 1072.97 × 1073.49 × 107
F209.94 × 1021.02 × 1031.06 × 1039.70 × 1029.78 × 1021.10 × 109.17 × 1098.52 × 10102.20 × 10119.91 × 1029.85 × 1029.94 × 1029.77 × 1029.86 × 1021.53 × 1061.28 × 10109.33 × 10102.30 × 1011
Rank4.252.752.002.004.155.857.008.009.004.102.651.852.304.205.907.008.009.00
Table 2. Fitness comparison between DGCELSO and the compared algorithms on the 1000-D CEC’2010 problems with 3 × 106 fitness evaluations.
F | Quality | DGCELSO | TPLSO | SPLSO | LLSO | CSO | SLPSO | DECC-GDG | DECC-DG2 | DECC-RDG | DECC-RDG2
F1 | Median | 0.00 × 10^0 | 1.98 × 10^−18 | 7.70 × 10^−20 | 2.97 × 10^−22 | 4.64 × 10^−12 | 7.65 × 10^−18 | 6.53 × 10^0 | 1.95 × 10^−1 | 2.60 × 10^−3 | 1.05 × 10^−3
F1 | Mean | 0.00 × 10^0 | 1.93 × 10^−18 | 7.73 × 10^−20 | 3.13 × 10^−22 | 4.75 × 10^−12 | 7.73 × 10^−18 | 6.54 × 10^0 | 7.34 × 10^−1 | 6.42 × 10^0 | 8.08 × 10^−3
F1 | Std | 0.00 × 10^0 | 3.04 × 10^−19 | 6.95 × 10^−21 | 6.93 × 10^−23 | 7.77 × 10^−13 | 8.84 × 10^−19 | 9.35 × 10^−1 | 1.61 × 10^0 | 3.41 × 10^1 | 3.28 × 10^−2
F1 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F2 | Median | 8.85 × 10^2 | 1.13 × 10^3 | 4.45 × 10^2 | 9.71 × 10^2 | 7.52 × 10^3 | 1.94 × 10^3 | 1.40 × 10^3 | 3.00 × 10^3 | 2.98 × 10^3 | 2.99 × 10^3
F2 | Mean | 8.88 × 10^2 | 1.11 × 10^3 | 4.45 × 10^2 | 9.78 × 10^2 | 7.48 × 10^3 | 1.93 × 10^3 | 1.40 × 10^3 | 3.00 × 10^3 | 2.98 × 10^3 | 3.00 × 10^3
F2 | Std | 4.13 × 10^1 | 8.28 × 10^1 | 1.63 × 10^1 | 5.17 × 10^1 | 2.60 × 10^2 | 8.05 × 10^1 | 2.67 × 10^1 | 1.34 × 10^2 | 1.16 × 10^2 | 1.35 × 10^2
F2 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F3 | Median | 3.24 × 10^−14 | 1.44 × 10^0 | 2.56 × 10^−13 | 2.89 × 10^−14 | 2.56 × 10^−9 | 1.88 × 10^0 | 1.12 × 10^1 | 1.08 × 10^1 | 1.12 × 10^1 | 1.11 × 10^1
F3 | Mean | 3.18 × 10^−14 | 1.45 × 10^0 | 2.52 × 10^−13 | 2.76 × 10^−14 | 2.57 × 10^−9 | 1.84 × 10^0 | 1.11 × 10^1 | 1.09 × 10^1 | 1.11 × 10^1 | 1.10 × 10^1
F3 | Std | 1.32 × 10^−15 | 1.34 × 10^−1 | 1.86 × 10^−14 | 2.16 × 10^−15 | 1.82 × 10^−10 | 2.62 × 10^−1 | 5.69 × 10^−1 | 6.40 × 10^−1 | 6.46 × 10^−1 | 6.88 × 10^−1
F3 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F4 | Median | 1.58 × 10^11 | 2.77 × 10^11 | 4.36 × 10^11 | 4.48 × 10^11 | 6.92 × 10^11 | 2.68 × 10^11 | 1.37 × 10^14 | 1.44 × 10^12 | 1.39 × 10^12 | 1.37 × 10^12
F4 | Mean | 1.60 × 10^11 | 2.89 × 10^11 | 4.30 × 10^11 | 4.54 × 10^11 | 6.87 × 10^11 | 2.83 × 10^11 | 1.38 × 10^14 | 1.69 × 10^12 | 1.49 × 10^12 | 1.44 × 10^12
F4 | Std | 3.72 × 10^10 | 9.22 × 10^10 | 8.17 × 10^10 | 1.29 × 10^11 | 1.76 × 10^11 | 8.77 × 10^10 | 2.68 × 10^13 | 6.16 × 10^11 | 6.33 × 10^11 | 5.35 × 10^11
F4 | p-value | − | 1.00 × 10^0 (=) | 3.49 × 10^−3 (+) | 4.32 × 10^−8 (+) | 1.02 × 10^−3 (+) | 3.19 × 10^−7 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 3.19 × 10^−7 (+)
F5 | Median | 2.82 × 10^8 | 1.63 × 10^7 | 5.97 × 10^6 | 1.09 × 10^7 | 2.00 × 10^6 | 2.89 × 10^7 | 3.84 × 10^8 | 1.72 × 10^8 | 1.75 × 10^8 | 1.72 × 10^8
F5 | Mean | 2.80 × 10^8 | 1.59 × 10^7 | 6.30 × 10^6 | 1.16 × 10^7 | 2.46 × 10^6 | 3.04 × 10^7 | 3.82 × 10^8 | 1.75 × 10^8 | 1.71 × 10^8 | 1.73 × 10^8
F5 | Std | 9.11 × 10^6 | 4.51 × 10^6 | 1.73 × 10^6 | 2.93 × 10^6 | 1.33 × 10^6 | 8.42 × 10^6 | 1.54 × 10^7 | 1.84 × 10^7 | 1.84 × 10^7 | 1.50 × 10^7
F5 | p-value | − | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−)
F6 | Median | 4.00 × 10^−9 | 2.08 × 10^0 | 1.00 × 10^−8 | 4.00 × 10^−9 | 8.18 × 10^−7 | 2.14 × 10^1 | 3.51 × 10^5 | 8.81 × 10^0 | 1.07 × 10^1 | 1.06 × 10^1
F6 | Mean | 4.00 × 10^−9 | 2.20 × 10^0 | 9.44 × 10^−9 | 4.00 × 10^−9 | 8.16 × 10^−7 | 1.95 × 10^1 | 3.58 × 10^5 | 8.90 × 10^0 | 1.05 × 10^1 | 1.05 × 10^1
F6 | Std | 3.73 × 10^−15 | 3.74 × 10^−1 | 1.18 × 10^−9 | 8.27 × 10^−25 | 2.57 × 10^−8 | 4.13 × 10^0 | 4.27 × 10^4 | 6.50 × 10^−1 | 7.02 × 10^−1 | 6.84 × 10^−1
F6 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F7 | Median | 1.89 × 10^−5 | 9.21 × 10^2 | 4.51 × 10^2 | 6.58 × 10^0 | 2.13 × 10^4 | 6.26 × 10^4 | 2.98 × 10^10 | 1.80 × 10^3 | 4.86 × 10^1 | 5.18 × 10^1
F7 | Mean | 2.15 × 10^−5 | 5.86 × 10^3 | 4.76 × 10^2 | 2.31 × 10^1 | 2.13 × 10^4 | 6.49 × 10^4 | 3.10 × 10^10 | 1.98 × 10^3 | 6.40 × 10^1 | 5.87 × 10^1
F7 | Std | 1.55 × 10^−5 | 1.03 × 10^4 | 1.29 × 10^2 | 7.45 × 10^1 | 4.53 × 10^3 | 3.81 × 10^4 | 4.19 × 10^9 | 9.49 × 10^2 | 4.67 × 10^1 | 3.71 × 10^1
F7 | p-value | − | 2.07 × 10^−6 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F8 | Median | 4.28 × 10^3 | 4.78 × 10^5 | 3.11 × 10^7 | 2.33 × 10^7 | 3.86 × 10^7 | 7.51 × 10^6 | 6.78 × 10^8 | 6.05 × 10^2 | 6.57 × 10^−1 | 3.68 × 10^−1
F8 | Mean | 4.36 × 10^3 | 4.98 × 10^5 | 3.11 × 10^7 | 2.33 × 10^7 | 3.87 × 10^7 | 7.57 × 10^6 | 8.05 × 10^8 | 2.71 × 10^5 | 6.65 × 10^5 | 7.43 × 10^−1
F8 | Std | 4.17 × 10^2 | 1.43 × 10^5 | 9.43 × 10^4 | 2.96 × 10^5 | 8.47 × 10^4 | 2.44 × 10^6 | 4.70 × 10^8 | 9.94 × 10^5 | 1.49 × 10^6 | 1.24 × 10^0
F8 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F9 | Median | 1.76 × 10^7 | 4.25 × 10^7 | 4.57 × 10^7 | 4.64 × 10^7 | 6.65 × 10^7 | 3.31 × 10^7 | 7.45 × 10^8 | 2.15 × 10^8 | 1.76 × 10^8 | 1.77 × 10^8
F9 | Mean | 1.77 × 10^7 | 4.32 × 10^7 | 4.59 × 10^7 | 4.48 × 10^7 | 6.68 × 10^7 | 3.35 × 10^7 | 7.43 × 10^8 | 2.18 × 10^8 | 1.73 × 10^8 | 1.77 × 10^8
F9 | Std | 1.69 × 10^6 | 4.10 × 10^6 | 2.99 × 10^6 | 4.16 × 10^6 | 4.38 × 10^6 | 3.63 × 10^6 | 3.71 × 10^7 | 1.73 × 10^7 | 1.22 × 10^7 | 1.66 × 10^7
F9 | p-value | − | 4.32 × 10^−8 (+) | 1.00 × 10^0 (=) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F10 | Median | 9.18 × 10^2 | 9.67 × 10^2 | 7.99 × 10^3 | 8.87 × 10^2 | 9.58 × 10^3 | 2.59 × 10^3 | 4.16 × 10^3 | 6.73 × 10^3 | 6.32 × 10^3 | 6.27 × 10^3
F10 | Mean | 9.23 × 10^2 | 9.84 × 10^2 | 7.99 × 10^3 | 8.88 × 10^2 | 9.58 × 10^3 | 2.79 × 10^3 | 4.15 × 10^3 | 6.72 × 10^3 | 6.32 × 10^3 | 6.27 × 10^3
F10 | Std | 3.82 × 10^1 | 8.52 × 10^1 | 1.25 × 10^2 | 3.50 × 10^1 | 6.49 × 10^1 | 1.28 × 10^3 | 5.70 × 10^1 | 9.30 × 10^1 | 1.12 × 10^2 | 1.09 × 10^2
F10 | p-value | − | 1.06 × 10^−2 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F11 | Median | 1.11 × 10^−13 | 3.48 × 10^0 | 3.02 × 10^−12 | 2.90 × 10^0 | 3.98 × 10^−8 | 2.37 × 10^1 | 5.58 × 10^0 | 5.39 × 10^0 | 4.76 × 10^0 | 4.86 × 10^0
F11 | Mean | 1.10 × 10^−13 | 3.50 × 10^0 | 3.05 × 10^−12 | 5.51 × 10^0 | 3.98 × 10^−8 | 2.42 × 10^1 | 5.53 × 10^0 | 5.59 × 10^0 | 4.75 × 10^0 | 4.86 × 10^0
F11 | Std | 2.36 × 10^−15 | 1.30 × 10^0 | 2.84 × 10^−13 | 5.43 × 10^0 | 3.19 × 10^−9 | 3.03 × 10^0 | 5.49 × 10^−1 | 6.12 × 10^−1 | 4.79 × 10^−1 | 3.88 × 10^−1
F11 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F12 | Median | 2.55 × 10^3 | 1.23 × 10^4 | 9.39 × 10^4 | 1.24 × 10^4 | 4.25 × 10^5 | 1.30 × 10^4 | 2.87 × 10^5 | 3.99 × 10^4 | 2.22 × 10^4 | 2.21 × 10^4
F12 | Mean | 2.55 × 10^3 | 1.23 × 10^4 | 9.53 × 10^4 | 1.23 × 10^4 | 4.37 × 10^5 | 1.54 × 10^4 | 2.87 × 10^5 | 3.94 × 10^4 | 2.21 × 10^4 | 2.19 × 10^4
F12 | Std | 2.13 × 10^2 | 1.30 × 10^3 | 6.64 × 10^3 | 1.32 × 10^3 | 6.49 × 10^4 | 7.06 × 10^3 | 1.10 × 10^4 | 2.17 × 10^3 | 1.28 × 10^3 | 1.45 × 10^3
F12 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 3.19 × 10^−7 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F13 | Median | 4.64 × 10^2 | 7.29 × 10^2 | 4.50 × 10^2 | 7.82 × 10^2 | 4.68 × 10^2 | 8.87 × 10^2 | 1.39 × 10^3 | 1.65 × 10^3 | 8.25 × 10^2 | 8.17 × 10^2
F13 | Mean | 5.15 × 10^2 | 7.54 × 10^2 | 5.48 × 10^2 | 7.91 × 10^2 | 5.53 × 10^2 | 9.81 × 10^2 | 1.42 × 10^3 | 1.77 × 10^3 | 8.24 × 10^2 | 8.40 × 10^2
F13 | Std | 1.49 × 10^2 | 1.07 × 10^2 | 1.66 × 10^2 | 2.37 × 10^2 | 1.75 × 10^2 | 3.86 × 10^2 | 3.40 × 10^2 | 5.06 × 10^2 | 1.35 × 10^2 | 1.98 × 10^2
F13 | p-value | − | 5.90 × 10^−5 (+) | 3.49 × 10^−3 (+) | 4.32 × 10^−8 (+) | 2.73 × 10^−1 (=) | 2.85 × 10^−2 (+) | 3.49 × 10^−3 (+) | 5.90 × 10^−5 (+) | 1.18 × 10^−5 (+) | 2.07 × 10^−6 (+)
F14 | Median | 5.10 × 10^7 | 1.29 × 10^8 | 1.61 × 10^8 | 1.23 × 10^8 | 2.46 × 10^8 | 8.61 × 10^7 | 8.59 × 10^8 | 8.71 × 10^8 | 7.19 × 10^8 | 7.18 × 10^8
F14 | Mean | 5.17 × 10^7 | 1.32 × 10^8 | 1.60 × 10^8 | 1.22 × 10^8 | 2.46 × 10^8 | 8.55 × 10^7 | 8.64 × 10^8 | 8.60 × 10^8 | 7.23 × 10^8 | 7.25 × 10^8
F14 | Std | 2.76 × 10^6 | 9.33 × 10^6 | 8.42 × 10^6 | 6.41 × 10^6 | 1.29 × 10^7 | 7.57 × 10^6 | 3.30 × 10^7 | 4.17 × 10^7 | 3.65 × 10^7 | 3.44 × 10^7
F14 | p-value | − | 4.32 × 10^−8 (+) | 2.07 × 10^−6 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F15 | Median | 1.04 × 10^4 | 1.04 × 10^4 | 9.92 × 10^3 | 8.30 × 10^2 | 1.01 × 10^4 | 1.12 × 10^4 | 6.75 × 10^3 | 6.73 × 10^3 | 6.55 × 10^3 | 6.56 × 10^3
F15 | Mean | 1.04 × 10^4 | 8.88 × 10^3 | 9.91 × 10^3 | 8.97 × 10^2 | 1.01 × 10^4 | 1.12 × 10^4 | 6.76 × 10^3 | 6.73 × 10^3 | 6.55 × 10^3 | 6.55 × 10^3
F15 | Std | 6.65 × 10^1 | 3.41 × 10^3 | 6.31 × 10^1 | 3.47 × 10^2 | 6.48 × 10^1 | 1.19 × 10^2 | 8.82 × 10^1 | 7.27 × 10^1 | 8.86 × 10^1 | 8.39 × 10^1
F15 | p-value | − | 1.44 × 10^−1 (=) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 1.06 × 10^−2 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−)
F16 | Median | 1.55 × 10^−13 | 1.78 × 10^1 | 4.66 × 10^−12 | 4.40 × 10^0 | 5.64 × 10^−8 | 2.12 × 10^1 | 3.98 × 10^−4 | 3.89 × 10^−4 | 1.92 × 10^−5 | 1.88 × 10^−5
F16 | Mean | 1.55 × 10^−13 | 1.89 × 10^1 | 4.68 × 10^−12 | 4.33 × 10^0 | 5.68 × 10^−8 | 2.36 × 10^1 | 3.97 × 10^−4 | 3.90 × 10^−4 | 1.93 × 10^−5 | 1.89 × 10^−5
F16 | Std | 2.66 × 10^−15 | 7.46 × 10^0 | 4.41 × 10^−13 | 2.50 × 10^0 | 6.21 × 10^−9 | 1.11 × 10^1 | 1.44 × 10^−5 | 1.33 × 10^−5 | 8.87 × 10^−7 | 8.30 × 10^−7
F16 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F17 | Median | 6.70 × 10^4 | 9.65 × 10^4 | 6.90 × 10^5 | 9.17 × 10^4 | 2.19 × 10^6 | 8.64 × 10^4 | 2.64 × 10^5 | 2.64 × 10^5 | 1.99 × 10^5 | 1.97 × 10^5
F17 | Mean | 6.57 × 10^4 | 9.83 × 10^4 | 6.84 × 10^5 | 9.12 × 10^4 | 2.21 × 10^6 | 8.74 × 10^4 | 2.65 × 10^5 | 2.63 × 10^5 | 1.98 × 10^5 | 1.98 × 10^5
F17 | Std | 7.55 × 10^3 | 9.90 × 10^3 | 3.57 × 10^4 | 5.43 × 10^3 | 2.07 × 10^5 | 1.39 × 10^4 | 7.79 × 10^3 | 7.33 × 10^3 | 8.75 × 10^3 | 9.45 × 10^3
F17 | p-value | − | 4.32 × 10^−8 (+) | 7.15 × 10^−2 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F18 | Median | 1.25 × 10^3 | 2.29 × 10^3 | 1.25 × 10^3 | 2.49 × 10^3 | 1.38 × 10^3 | 2.95 × 10^3 | 1.15 × 10^3 | 1.14 × 10^3 | 1.08 × 10^3 | 1.11 × 10^3
F18 | Mean | 1.31 × 10^3 | 2.36 × 10^3 | 1.35 × 10^3 | 2.51 × 10^3 | 1.64 × 10^3 | 2.92 × 10^3 | 1.16 × 10^3 | 1.13 × 10^3 | 1.07 × 10^3 | 1.10 × 10^3
F18 | Std | 2.94 × 10^2 | 4.19 × 10^2 | 3.81 × 10^2 | 7.42 × 10^2 | 8.13 × 10^2 | 8.08 × 10^2 | 1.31 × 10^2 | 1.29 × 10^2 | 1.08 × 10^2 | 1.02 × 10^2
F18 | p-value | − | 5.90 × 10^−5 (+) | 3.49 × 10^−3 (+) | 2.61 × 10^−4 (+) | 2.73 × 10^−1 (=) | 2.85 × 10^−2 (+) | 3.19 × 10^−7 (−) | 3.19 × 10^−7 (−) | 3.19 × 10^−7 (−) | 4.32 × 10^−8 (−)
F19 | Median | 1.02 × 10^7 | 3.94 × 10^6 | 8.19 × 10^6 | 1.85 × 10^6 | 9.78 × 10^6 | 5.20 × 10^6 | 2.11 × 10^6 | 2.09 × 10^6 | 1.96 × 10^6 | 1.93 × 10^6
F19 | Mean | 1.02 × 10^7 | 3.89 × 10^6 | 8.20 × 10^6 | 1.82 × 10^6 | 9.86 × 10^6 | 5.23 × 10^6 | 2.12 × 10^6 | 2.10 × 10^6 | 1.95 × 10^6 | 1.92 × 10^6
F19 | Std | 7.69 × 10^5 | 2.64 × 10^5 | 4.61 × 10^5 | 9.22 × 10^4 | 5.07 × 10^5 | 9.15 × 10^5 | 8.77 × 10^4 | 9.92 × 10^4 | 7.80 × 10^4 | 1.05 × 10^5
F19 | p-value | − | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−)
F20 | Median | 1.06 × 10^3 | 2.04 × 10^3 | 9.79 × 10^2 | 1.88 × 10^3 | 9.87 × 10^2 | 1.73 × 10^3 | 5.43 × 10^3 | 5.33 × 10^3 | 4.32 × 10^3 | 4.25 × 10^3
F20 | Mean | 1.08 × 10^3 | 2.08 × 10^3 | 1.06 × 10^3 | 1.92 × 10^3 | 1.07 × 10^3 | 1.73 × 10^3 | 5.45 × 10^3 | 5.46 × 10^3 | 4.28 × 10^3 | 4.34 × 10^3
F20 | Std | 7.30 × 10^1 | 2.00 × 10^2 | 1.75 × 10^2 | 3.00 × 10^2 | 1.70 × 10^2 | 1.53 × 10^2 | 3.32 × 10^2 | 3.37 × 10^2 | 2.29 × 10^2 | 3.20 × 10^2
F20 | p-value | − | 4.32 × 10^−8 (+) | 5.90 × 10^−5 (−) | 4.32 × 10^−8 (+) | 5.90 × 10^−5 (−) | 1.18 × 10^−5 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
w/t/l |  | − | 16/2/2 | 14/1/5 | 15/0/5 | 15/2/3 | 18/0/2 | 17/0/3 | 16/0/4 | 16/0/4 | 16/0/4
Rank |  | 2.75 | 4.80 | 4.65 | 3.70 | 6.25 | 6.05 | 8.20 | 7.10 | 5.85 | 5.65
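The "+", "=", and "−" marks in the p-value rows come from pairwise Wilcoxon rank-sum tests at a 0.05 significance level, and the w/t/l row counts those marks per competitor. As a rough illustration of that marking logic (not the authors' code; the significance level and the use of mean fitness to decide the sign of a significant difference are assumptions here), a normal-approximation rank-sum test can be sketched as:

```python
import math

def rank_sum_p(a, b):
    # Two-sided Wilcoxon rank-sum test via the normal approximation,
    # adequate for the modest run counts typical of such comparisons.
    combined = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average (1-based) rank shared by tied values
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    r1 = sum(ranks[:n1])  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def mark(dgcelso_runs, other_runs, alpha=0.05):
    # '+': competitor significantly worse (larger fitness, minimization),
    # '-': significantly better, '=': no significant difference.
    if rank_sum_p(dgcelso_runs, other_runs) >= alpha:
        return '='
    better = sum(other_runs) / len(other_runs) > sum(dgcelso_runs) / len(dgcelso_runs)
    return '+' if better else '-'
```

Counting the '+', '=', and '−' marks over all benchmark functions then yields a w/t/l triple such as 16/2/2.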
Table 3. Fitness comparison between DGCELSO and the compared algorithms on the 1000-D CEC'2013 problems with 3 × 10^6 fitness evaluations.
F | Quality | DGCELSO | TPLSO | SPLSO | LLSO | CSO | SLPSO | DECC-GDG | DECC-DG2 | DECC-RDG | DECC-RDG2
F1 | Median | 0.00 × 10^0 | 3.21 × 10^−18 | 1.17 × 10^−19 | 4.02 × 10^−22 | 7.92 × 10^−12 | 1.03 × 10^−17 | 7.06 × 10^0 | 3.46 × 10^0 | 2.04 × 10^−2 | 2.96 × 10^−2
F1 | Mean | 0.00 × 10^0 | 3.81 × 10^−18 | 1.18 × 10^−19 | 4.28 × 10^−22 | 7.88 × 10^−12 | 1.65 × 10^−17 | 7.43 × 10^0 | 6.31 × 10^0 | 3.51 × 10^−2 | 1.08 × 10^−1
F1 | Std | 0.00 × 10^0 | 1.57 × 10^−18 | 1.04 × 10^−20 | 1.29 × 10^−22 | 1.19 × 10^−12 | 3.25 × 10^−17 | 9.38 × 10^−1 | 7.78 × 10^0 | 3.88 × 10^−2 | 2.08 × 10^−1
F1 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 9.63 × 10^−7 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F2 | Median | 8.61 × 10^2 | 1.30 × 10^3 | 9.64 × 10^2 | 1.14 × 10^3 | 8.58 × 10^3 | 2.09 × 10^3 | 1.43 × 10^3 | 7.81 × 10^3 | 7.81 × 10^3 | 7.69 × 10^3
F2 | Mean | 8.77 × 10^2 | 1.34 × 10^3 | 1.06 × 10^3 | 1.14 × 10^3 | 8.58 × 10^3 | 2.10 × 10^3 | 1.43 × 10^3 | 7.88 × 10^3 | 7.74 × 10^3 | 7.74 × 10^3
F2 | Std | 4.28 × 10^1 | 1.75 × 10^2 | 4.38 × 10^2 | 5.00 × 10^1 | 1.76 × 10^2 | 1.61 × 10^2 | 2.43 × 10^1 | 4.07 × 10^2 | 3.47 × 10^2 | 3.56 × 10^2
F2 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F3 | Median | 2.16 × 10^1 | 2.22 × 10^1 | 2.16 × 10^1 | 2.16 × 10^1 | 2.16 × 10^1 | 2.16 × 10^1 | 2.15 × 10^1 | 2.15 × 10^1 | 2.14 × 10^1 | 2.15 × 10^1
F3 | Mean | 2.16 × 10^1 | 2.31 × 10^1 | 2.16 × 10^1 | 2.16 × 10^1 | 2.16 × 10^1 | 2.16 × 10^1 | 2.15 × 10^1 | 2.15 × 10^1 | 2.14 × 10^1 | 2.15 × 10^1
F3 | Std | 6.26 × 10^−3 | 1.72 × 10^0 | 7.11 × 10^−15 | 7.11 × 10^−15 | 7.11 × 10^−15 | 2.37 × 10^−1 | 3.00 × 10^−2 | 4.23 × 10^−2 | 4.82 × 10^−2 | 4.90 × 10^−2
F3 | p-value | − | 3.49 × 10^−3 (+) | 2.61 × 10^−4 (−) | 2.61 × 10^−4 (−) | 2.07 × 10^−6 (−) | 7.15 × 10^−1 (=) | 4.65 × 10^−1 (=) | 4.65 × 10^−1 (=) | 7.15 × 10^−1 (=) | 7.15 × 10^−1 (=)
F4 | Median | 2.55 × 10^9 | 4.23 × 10^9 | 9.14 × 10^9 | 6.40 × 10^9 | 1.22 × 10^10 | 4.28 × 10^9 | 4.15 × 10^11 | 8.12 × 10^10 | 7.45 × 10^10 | 6.10 × 10^10
F4 | Mean | 2.52 × 10^9 | 4.27 × 10^9 | 9.41 × 10^9 | 6.55 × 10^9 | 1.35 × 10^10 | 4.33 × 10^9 | 4.20 × 10^11 | 7.79 × 10^10 | 7.16 × 10^10 | 6.78 × 10^10
F4 | Std | 6.55 × 10^8 | 1.03 × 10^9 | 1.86 × 10^9 | 1.40 × 10^9 | 3.12 × 10^9 | 9.91 × 10^8 | 7.75 × 10^10 | 2.19 × 10^10 | 1.92 × 10^10 | 2.32 × 10^10
F4 | p-value | − | 1.44 × 10^−1 (=) | 1.02 × 10^−3 (+) | 1.06 × 10^−2 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F5 | Median | 7.83 × 10^5 | 6.80 × 10^5 | 6.43 × 10^5 | 6.51 × 10^5 | 5.90 × 10^5 | 8.89 × 10^5 | 8.62 × 10^6 | 6.10 × 10^6 | 5.81 × 10^6 | 5.72 × 10^6
F5 | Mean | 7.91 × 10^5 | 6.79 × 10^5 | 6.30 × 10^5 | 6.56 × 10^5 | 5.97 × 10^5 | 8.90 × 10^5 | 8.66 × 10^6 | 6.06 × 10^6 | 5.72 × 10^6 | 5.67 × 10^6
F5 | Std | 1.03 × 10^5 | 1.10 × 10^5 | 1.00 × 10^5 | 1.01 × 10^5 | 1.03 × 10^5 | 1.31 × 10^5 | 2.80 × 10^5 | 2.40 × 10^5 | 4.24 × 10^5 | 3.61 × 10^5
F5 | p-value | − | 1.06 × 10^−2 (−) | 1.18 × 10^−5 (−) | 3.49 × 10^−3 (−) | 2.61 × 10^−4 (−) | 2.85 × 10^−2 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F6 | Median | 1.06 × 10^6 | 1.17 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6
F6 | Mean | 1.06 × 10^6 | 1.22 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6 | 1.06 × 10^6
F6 | Std | 1.27 × 10^3 | 1.61 × 10^5 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 3.00 × 10^3 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F6 | p-value | − | 5.90 × 10^−5 (+) | 4.65 × 10^−1 (=) | 1.18 × 10^−5 (−) | 2.73 × 10^−1 (=) | 2.61 × 10^−4 (−) | 4.65 × 10^−1 (=) | 1.44 × 10^−1 (=) | 2.73 × 10^−1 (=) | 2.85 × 10^−2 (−)
F7 | Median | 7.93 × 10^4 | 1.22 × 10^6 | 5.42 × 10^6 | 1.70 × 10^6 | 5.45 × 10^6 | 1.47 × 10^6 | 7.45 × 10^8 | 7.36 × 10^7 | 2.84 × 10^8 | 8.36 × 10^7
F7 | Mean | 9.71 × 10^4 | 1.24 × 10^6 | 5.50 × 10^6 | 1.87 × 10^6 | 5.81 × 10^6 | 1.58 × 10^6 | 7.67 × 10^8 | 7.79 × 10^7 | 3.65 × 10^8 | 8.25 × 10^7
F7 | Std | 5.54 × 10^4 | 5.05 × 10^5 | 2.23 × 10^6 | 1.08 × 10^6 | 3.04 × 10^6 | 7.53 × 10^5 | 1.32 × 10^8 | 2.73 × 10^7 | 2.63 × 10^8 | 2.06 × 10^7
F7 | p-value | − | 1.18 × 10^−5 (+) | 1.18 × 10^−5 (+) | 4.32 × 10^−8 (+) | 2.61 × 10^−4 (+) | 3.19 × 10^−7 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F8 | Median | 5.58 × 10^13 | 7.07 × 10^13 | 1.56 × 10^14 | 1.37 × 10^14 | 2.43 × 10^14 | 9.65 × 10^13 | 1.70 × 10^16 | 9.35 × 10^15 | 6.96 × 10^15 | 5.83 × 10^15
F8 | Mean | 6.15 × 10^13 | 7.28 × 10^13 | 1.55 × 10^14 | 1.36 × 10^14 | 2.46 × 10^14 | 1.09 × 10^14 | 1.65 × 10^16 | 9.32 × 10^15 | 6.95 × 10^15 | 6.38 × 10^15
F8 | Std | 2.08 × 10^13 | 4.02 × 10^13 | 2.92 × 10^13 | 3.39 × 10^13 | 8.71 × 10^13 | 5.44 × 10^13 | 4.49 × 10^15 | 2.71 × 10^15 | 1.64 × 10^15 | 1.99 × 10^15
F8 | p-value | − | 5.90 × 10^−5 (+) | 1.44 × 10^−1 (=) | 2.85 × 10^−2 (−) | 1.18 × 10^−5 (−) | 3.49 × 10^−3 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−)
F9 | Median | 4.67 × 10^7 | 4.52 × 10^7 | 7.23 × 10^7 | 1.11 × 10^8 | 5.94 × 10^7 | 8.05 × 10^7 | 5.62 × 10^8 | 5.55 × 10^8 | 5.40 × 10^8 | 5.32 × 10^8
F9 | Mean | 4.47 × 10^7 | 4.28 × 10^7 | 8.08 × 10^7 | 1.29 × 10^8 | 6.08 × 10^7 | 7.99 × 10^7 | 5.61 × 10^8 | 5.59 × 10^8 | 5.38 × 10^8 | 5.31 × 10^8
F9 | Std | 1.37 × 10^7 | 7.49 × 10^6 | 2.21 × 10^7 | 8.85 × 10^7 | 1.29 × 10^7 | 1.18 × 10^7 | 3.24 × 10^7 | 2.93 × 10^7 | 3.03 × 10^7 | 2.33 × 10^7
F9 | p-value | − | 4.65 × 10^−1 (=) | 3.19 × 10^−7 (+) | 4.32 × 10^−8 (+) | 2.61 × 10^−4 (+) | 3.19 × 10^−7 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F10 | Median | 9.40 × 10^7 | 9.44 × 10^7 | 9.40 × 10^7 | 9.41 × 10^7 | 9.41 × 10^7 | 9.37 × 10^7 | 9.46 × 10^7 | 9.46 × 10^7 | 9.46 × 10^7 | 9.45 × 10^7
F10 | Mean | 9.40 × 10^7 | 9.52 × 10^7 | 9.39 × 10^7 | 9.41 × 10^7 | 9.40 × 10^7 | 9.27 × 10^7 | 9.46 × 10^7 | 9.46 × 10^7 | 9.46 × 10^7 | 9.45 × 10^7
F10 | Std | 2.95 × 10^5 | 1.70 × 10^6 | 2.18 × 10^5 | 2.23 × 10^5 | 2.14 × 10^5 | 1.99 × 10^6 | 2.57 × 10^5 | 2.51 × 10^5 | 1.98 × 10^5 | 2.78 × 10^5
F10 | p-value | − | 1.02 × 10^−3 (+) | 6.79 × 10^−2 (=) | 1.18 × 10^−5 (+) | 2.07 × 10^−6 (+) | 1.02 × 10^−3 (−) | 3.19 × 10^−7 (+) | 3.19 × 10^−7 (+) | 4.32 × 10^−8 (+) | 2.07 × 10^−6 (+)
F11 | Median | 6.44 × 10^7 | 1.88 × 10^8 | 9.22 × 10^11 | 9.23 × 10^11 | 9.26 × 10^11 | 9.38 × 10^11 | 6.80 × 10^8 | 1.99 × 10^10 | 5.75 × 10^8 | 1.33 × 10^10
F11 | Mean | 7.14 × 10^7 | 1.83 × 10^8 | 9.27 × 10^11 | 9.28 × 10^11 | 9.29 × 10^11 | 9.34 × 10^11 | 6.84 × 10^8 | 2.52 × 10^10 | 5.68 × 10^8 | 1.49 × 10^10
F11 | Std | 2.45 × 10^7 | 5.62 × 10^7 | 9.35 × 10^9 | 9.68 × 10^9 | 9.63 × 10^9 | 8.96 × 10^9 | 1.09 × 10^8 | 1.38 × 10^10 | 9.23 × 10^7 | 7.57 × 10^9
F11 | p-value | − | 3.49 × 10^−3 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 2.61 × 10^−4 (+) | 4.32 × 10^−8 (+) | 3.49 × 10^−3 (+) | 4.32 × 10^−8 (+)
F12 | Median | 1.12 × 10^3 | 2.19 × 10^3 | 1.03 × 10^3 | 1.80 × 10^3 | 1.04 × 10^3 | 1.76 × 10^3 | 5.54 × 10^3 | 5.42 × 10^3 | 4.28 × 10^3 | 4.25 × 10^3
F12 | Mean | 1.14 × 10^3 | 2.13 × 10^3 | 1.05 × 10^3 | 1.82 × 10^3 | 1.08 × 10^3 | 1.77 × 10^3 | 5.51 × 10^3 | 5.59 × 10^3 | 4.34 × 10^3 | 4.30 × 10^3
F12 | Std | 9.96 × 10^1 | 2.72 × 10^2 | 5.45 × 10^1 | 1.52 × 10^2 | 7.45 × 10^1 | 1.69 × 10^2 | 3.67 × 10^2 | 7.64 × 10^2 | 3.24 × 10^2 | 2.48 × 10^2
F12 | p-value | − | 4.32 × 10^−8 (+) | 2.85 × 10^−2 (−) | 4.32 × 10^−8 (+) | 1.00 × 10^0 (=) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+)
F13 | Median | 4.89 × 10^7 | 2.01 × 10^8 | 1.20 × 10^9 | 2.98 × 10^8 | 7.08 × 10^8 | 4.01 × 10^8 | 1.56 × 10^9 | 1.43 × 10^9 | 2.87 × 10^9 | 7.08 × 10^8
F13 | Mean | 6.40 × 10^7 | 2.21 × 10^8 | 1.20 × 10^9 | 3.42 × 10^8 | 7.48 × 10^8 | 5.20 × 10^8 | 1.50 × 10^9 | 1.47 × 10^9 | 2.98 × 10^9 | 7.17 × 10^8
F13 | Std | 5.35 × 10^7 | 1.24 × 10^8 | 4.91 × 10^8 | 1.42 × 10^8 | 2.85 × 10^8 | 4.85 × 10^8 | 3.35 × 10^8 | 3.46 × 10^8 | 7.23 × 10^8 | 1.57 × 10^8
F13 | p-value | − | 3.19 × 10^−7 (+) | 1.00 × 10^0 (=) | 4.32 × 10^−8 (+) | 1.02 × 10^−3 (+) | 4.32 × 10^−8 (+) | 1.44 × 10^−1 (=) | 2.73 × 10^−1 (=) | 4.32 × 10^−8 (+) | 3.49 × 10^−3 (+)
F14 | Median | 1.77 × 10^7 | 5.86 × 10^7 | 5.19 × 10^9 | 8.06 × 10^7 | 2.90 × 10^9 | 1.51 × 10^8 | 4.45 × 10^9 | 4.54 × 10^9 | 2.23 × 10^9 | 2.50 × 10^9
F14 | Mean | 1.78 × 10^7 | 6.05 × 10^7 | 8.31 × 10^9 | 1.59 × 10^8 | 3.67 × 10^9 | 2.51 × 10^8 | 5.28 × 10^9 | 4.58 × 10^9 | 2.78 × 10^9 | 3.33 × 10^9
F14 | Std | 2.62 × 10^6 | 1.34 × 10^7 | 6.56 × 10^9 | 2.27 × 10^8 | 3.32 × 10^9 | 2.25 × 10^8 | 3.84 × 10^9 | 1.83 × 10^9 | 1.85 × 10^9 | 2.09 × 10^9
F14 | p-value | − | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 2.07 × 10^−6 (+) | 3.19 × 10^−7 (+) | 2.85 × 10^−2 (+) | 6.79 × 10^−2 (=) | 1.00 × 10^0 (=) | 6.79 × 10^−2 (=)
F15 | Median | 3.53 × 10^7 | 1.29 × 10^7 | 4.13 × 10^7 | 4.58 × 10^6 | 7.60 × 10^7 | 5.99 × 10^7 | 8.60 × 10^6 | 8.82 × 10^6 | 7.75 × 10^6 | 8.04 × 10^6
F15 | Mean | 3.54 × 10^7 | 1.26 × 10^7 | 4.13 × 10^7 | 4.59 × 10^6 | 7.61 × 10^7 | 6.03 × 10^7 | 8.98 × 10^6 | 8.95 × 10^6 | 7.96 × 10^6 | 8.07 × 10^6
F15 | Std | 7.60 × 10^6 | 1.36 × 10^6 | 3.05 × 10^6 | 3.22 × 10^5 | 6.14 × 10^6 | 6.54 × 10^6 | 8.90 × 10^5 | 9.38 × 10^5 | 9.30 × 10^5 | 9.78 × 10^5
F15 | p-value | − | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (+) | 4.32 × 10^−8 (+) | 1.18 × 10^−5 (+) | 4.65 × 10^−1 (=) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−) | 4.32 × 10^−8 (−)
w/t/l |  | − | 11/2/2 | 8/4/3 | 12/0/3 | 11/2/2 | 11/2/2 | 11/3/1 | 10/4/1 | 11/3/1 | 11/2/2
Rank |  | 2.73 | 4.47 | 4.87 | 4.13 | 5.80 | 5.00 | 8.13 | 7.40 | 6.47 | 6.00
Table 4. Comparison results among different versions of DGCELSO on the 1000-D CEC'2010 problems.
F | DGCELSO | DGCELSO-1 | DGCELSO-1000 | DGCELSO-SPL
F1 | 0.00 × 10^0 | 3.85 × 10^−26 | 0.00 × 10^0 | 1.81 × 10^3
F2 | 8.88 × 10^2 | 1.98 × 10^3 | 8.70 × 10^2 | 1.54 × 10^3
F3 | 3.18 × 10^−14 | 1.08 × 10^0 | 3.16 × 10^−14 | 1.97 × 10^−2
F4 | 1.60 × 10^11 | 2.15 × 10^11 | 1.56 × 10^11 | 9.44 × 10^11
F5 | 2.80 × 10^8 | 6.93 × 10^7 | 2.79 × 10^8 | 1.07 × 10^7
F6 | 4.00 × 10^−9 | 1.96 × 10^1 | 4.00 × 10^−9 | 3.74 × 10^−1
F7 | 2.15 × 10^−5 | 4.01 × 10^3 | 2.17 × 10^−5 | 6.15 × 10^6
F8 | 4.36 × 10^3 | 6.84 × 10^5 | 4.26 × 10^3 | 3.27 × 10^7
F9 | 1.77 × 10^7 | 3.28 × 10^7 | 1.77 × 10^7 | 1.05 × 10^8
F10 | 9.23 × 10^2 | 2.02 × 10^3 | 9.34 × 10^2 | 3.63 × 10^3
F11 | 1.10 × 10^−13 | 2.08 × 10^1 | 1.10 × 10^−13 | 6.66 × 10^−1
F12 | 2.55 × 10^3 | 4.60 × 10^3 | 2.63 × 10^3 | 1.99 × 10^5
F13 | 5.15 × 10^2 | 7.69 × 10^2 | 4.87 × 10^2 | 1.42 × 10^3
F14 | 5.17 × 10^7 | 9.78 × 10^7 | 5.13 × 10^7 | 3.42 × 10^8
F15 | 1.04 × 10^4 | 2.04 × 10^3 | 1.05 × 10^4 | 1.00 × 10^4
F16 | 1.55 × 10^−13 | 2.92 × 10^1 | 2.93 × 10^−2 | 5.72 × 10^−1
F17 | 6.57 × 10^4 | 4.30 × 10^4 | 7.12 × 10^4 | 7.10 × 10^5
F18 | 1.31 × 10^3 | 2.30 × 10^3 | 1.33 × 10^3 | 2.38 × 10^4
F19 | 1.02 × 10^7 | 1.33 × 10^6 | 1.06 × 10^7 | 6.52 × 10^6
F20 | 1.08 × 10^3 | 1.98 × 10^3 | 1.08 × 10^3 | 2.11 × 10^4
Rank | 1.80 | 2.90 | 1.90 | 3.40
Table 5. Comparison results between DGCELSO with the dynamic strategy for tp and the ones with different fixed settings of tp on the 1000-D CEC'2010 problems.
F | tp = 0.1 | tp = 0.2 | tp = 0.3 | tp = 0.4 | tp = 0.5 | tp = 0.6 | tp = 0.7 | tp = 0.8 | tp = 0.9 | Dynamic
F1 | 9.55 × 10^−3 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 3.31 × 10^−26 | 0.00 × 10^0
F2 | 2.33 × 10^3 | 1.38 × 10^3 | 1.05 × 10^3 | 8.21 × 10^2 | 6.71 × 10^2 | 1.03 × 10^3 | 9.26 × 10^3 | 9.83 × 10^3 | 1.00 × 10^4 | 8.88 × 10^2
F3 | 1.41 × 10^0 | 3.30 × 10^−14 | 3.17 × 10^−14 | 3.14 × 10^−14 | 2.98 × 10^−14 | 2.96 × 10^−14 | 2.99 × 10^−14 | 2.93 × 10^−14 | 2.98 × 10^−14 | 3.18 × 10^−14
F4 | 6.60 × 10^11 | 1.64 × 10^11 | 1.80 × 10^11 | 1.89 × 10^11 | 2.01 × 10^11 | 2.24 × 10^11 | 2.28 × 10^11 | 2.52 × 10^11 | 2.53 × 10^11 | 1.60 × 10^11
F5 | 5.90 × 10^7 | 2.64 × 10^8 | 2.75 × 10^8 | 2.76 × 10^8 | 2.83 × 10^8 | 2.79 × 10^8 | 2.81 × 10^8 | 2.82 × 10^8 | 2.83 × 10^8 | 2.80 × 10^8
F6 | 1.99 × 10^1 | 2.00 × 10^1 | 1.98 × 10^1 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 3.88 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9
F7 | 1.15 × 10^6 | 9.66 × 10^−8 | 5.36 × 10^−5 | 8.46 × 10^−3 | 3.32 × 10^−1 | 2.98 × 10^0 | 1.56 × 10^1 | 8.23 × 10^1 | 3.67 × 10^2 | 2.15 × 10^−5
F8 | 4.15 × 10^7 | 1.10 × 10^3 | 7.82 × 10^3 | 1.19 × 10^5 | 2.07 × 10^6 | 6.99 × 10^6 | 1.07 × 10^7 | 1.36 × 10^7 | 1.57 × 10^7 | 4.36 × 10^3
F9 | 1.21 × 10^8 | 1.93 × 10^7 | 1.85 × 10^7 | 1.78 × 10^7 | 2.05 × 10^7 | 2.03 × 10^7 | 2.21 × 10^7 | 2.17 × 10^7 | 2.34 × 10^7 | 1.77 × 10^7
F10 | 2.44 × 10^3 | 1.53 × 10^3 | 1.06 × 10^3 | 2.19 × 10^3 | 9.48 × 10^3 | 9.80 × 10^3 | 1.01 × 10^4 | 1.02 × 10^4 | 1.02 × 10^4 | 9.23 × 10^2
F11 | 2.86 × 10^1 | 2.04 × 10^1 | 1.05 × 10^1 | 1.11 × 10^−13 | 1.09 × 10^−13 | 1.11 × 10^−13 | 1.11 × 10^−13 | 1.13 × 10^−13 | 1.15 × 10^−13 | 1.10 × 10^−13
F12 | 1.68 × 10^5 | 1.12 × 10^3 | 2.48 × 10^3 | 7.33 × 10^3 | 2.81 × 10^4 | 1.16 × 10^5 | 5.74 × 10^5 | 1.53 × 10^6 | 2.08 × 10^6 | 2.55 × 10^3
F13 | 1.68 × 10^3 | 4.27 × 10^2 | 5.13 × 10^2 | 4.12 × 10^2 | 6.18 × 10^2 | 4.30 × 10^2 | 4.50 × 10^2 | 4.88 × 10^2 | 5.19 × 10^2 | 5.15 × 10^2
F14 | 3.74 × 10^8 | 5.88 × 10^7 | 5.39 × 10^7 | 5.68 × 10^7 | 5.82 × 10^7 | 6.54 × 10^7 | 6.97 × 10^7 | 8.12 × 10^7 | 8.92 × 10^7 | 5.17 × 10^7
F15 | 2.66 × 10^3 | 1.08 × 10^4 | 1.05 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4
F16 | 7.57 × 10^1 | 5.55 × 10^0 | 1.62 × 10^−13 | 1.64 × 10^−13 | 1.68 × 10^−13 | 1.76 × 10^−13 | 1.79 × 10^−13 | 1.87 × 10^−13 | 1.94 × 10^−13 | 1.55 × 10^−13
F17 | 5.02 × 10^5 | 2.01 × 10^4 | 5.70 × 10^4 | 1.86 × 10^6 | 3.51 × 10^6 | 4.29 × 10^6 | 4.92 × 10^6 | 5.16 × 10^6 | 5.46 × 10^6 | 6.57 × 10^4
F18 | 4.17 × 10^3 | 1.45 × 10^3 | 1.35 × 10^3 | 1.45 × 10^3 | 1.09 × 10^3 | 1.43 × 10^3 | 1.16 × 10^3 | 1.12 × 10^3 | 1.16 × 10^3 | 1.31 × 10^3
F19 | 2.18 × 10^6 | 6.26 × 10^6 | 1.05 × 10^7 | 1.20 × 10^7 | 1.32 × 10^7 | 1.39 × 10^7 | 1.42 × 10^7 | 1.50 × 10^7 | 1.53 × 10^7 | 1.02 × 10^7
F20 | 3.09 × 10^3 | 1.30 × 10^3 | 1.19 × 10^3 | 1.10 × 10^3 | 1.06 × 10^3 | 1.03 × 10^3 | 1.02 × 10^3 | 9.94 × 10^2 | 9.86 × 10^2 | 1.08 × 10^3
Rank | 7.75 | 4.98 | 4.53 | 4.23 | 4.85 | 5.35 | 5.9 | 6.28 | 7.73 | 3.43
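Table 5 shows that no fixed elite ratio tp wins across all functions, while the dynamic strategy attains the best overall rank. The paper's actual adjustment rule is defined in its method section, which lies outside this excerpt; purely as a hypothetical illustration, a schedule that moves tp between two bounds as fitness evaluations are consumed could look like the following (the bounds, the linear shape, and the direction of change are all assumptions here):

```python
def dynamic_tp(fes, max_fes, tp_min=0.2, tp_max=0.5):
    # Hypothetical linear schedule: begin with a larger elite fraction
    # (more diverse guidance, favoring exploration) and shrink it as the
    # evaluation budget is consumed, focusing learning on the top elites.
    # This is NOT the paper's exact rule, only a generic sketch.
    frac = fes / max_fes  # fraction of the budget already spent, in [0, 1]
    return tp_max - (tp_max - tp_min) * frac
```

With a 3 × 10^6 evaluation budget, such a schedule would sweep tp from 0.5 down to 0.2 over the run, so that different stages of the search effectively experience different fixed settings.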
Table 6. Comparison results between DGCELSO with the dynamic strategy for NDG and the ones with different fixed settings of NDG on the 1000-D CEC'2010 problems.
F | NDG = 20 | NDG = 30 | NDG = 40 | NDG = 50 | NDG = 60 | NDG = 70 | NDG = 80 | NDG = 90 | NDG = 100 | Dynamic
F1 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0
F2 | 8.46 × 10^2 | 8.56 × 10^2 | 8.49 × 10^2 | 8.41 × 10^2 | 8.53 × 10^2 | 8.48 × 10^2 | 8.52 × 10^2 | 8.51 × 10^2 | 8.44 × 10^2 | 8.88 × 10^2
F3 | 3.18 × 10^−14 | 3.18 × 10^−14 | 3.19 × 10^−14 | 3.22 × 10^−14 | 3.21 × 10^−14 | 3.24 × 10^−14 | 3.19 × 10^−14 | 3.21 × 10^−14 | 3.19 × 10^−14 | 3.18 × 10^−14
F4 | 1.69 × 10^11 | 1.69 × 10^11 | 1.68 × 10^11 | 1.65 × 10^11 | 1.54 × 10^11 | 1.56 × 10^11 | 1.65 × 10^11 | 1.58 × 10^11 | 1.65 × 10^11 | 1.60 × 10^11
F5 | 2.78 × 10^8 | 2.79 × 10^8 | 2.80 × 10^8 | 2.78 × 10^8 | 2.78 × 10^8 | 2.78 × 10^8 | 2.79 × 10^8 | 2.78 × 10^8 | 2.79 × 10^8 | 2.80 × 10^8
F6 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 4.00 × 10^−9 | 3.88 × 10^−9 | 4.00 × 10^−9
F7 | 2.36 × 10^−5 | 3.22 × 10^−5 | 2.64 × 10^−5 | 2.58 × 10^−5 | 2.18 × 10^−5 | 1.92 × 10^−5 | 2.22 × 10^−5 | 2.00 × 10^−5 | 2.93 × 10^−5 | 2.15 × 10^−5
F8 | 5.49 × 10^3 | 5.20 × 10^3 | 5.15 × 10^3 | 5.12 × 10^3 | 5.14 × 10^3 | 4.98 × 10^3 | 5.07 × 10^3 | 5.01 × 10^3 | 5.07 × 10^3 | 4.36 × 10^3
F9 | 1.80 × 10^7 | 1.78 × 10^7 | 1.73 × 10^7 | 1.81 × 10^7 | 1.74 × 10^7 | 1.76 × 10^7 | 1.78 × 10^7 | 1.74 × 10^7 | 1.82 × 10^7 | 1.77 × 10^7
F10 | 8.97 × 10^2 | 8.94 × 10^2 | 8.94 × 10^2 | 8.94 × 10^2 | 8.92 × 10^2 | 8.92 × 10^2 | 9.02 × 10^2 | 9.14 × 10^2 | 8.89 × 10^2 | 9.23 × 10^2
F11 | 1.11 × 10^−13 | 1.11 × 10^−13 | 1.11 × 10^−13 | 1.11 × 10^−13 | 1.11 × 10^−13 | 1.10 × 10^−13 | 1.10 × 10^−13 | 1.11 × 10^−13 | 1.11 × 10^−13 | 1.10 × 10^−13
F12 | 3.13 × 10^3 | 3.13 × 10^3 | 3.24 × 10^3 | 3.18 × 10^3 | 3.24 × 10^3 | 3.21 × 10^3 | 3.11 × 10^3 | 3.14 × 10^3 | 3.28 × 10^3 | 2.55 × 10^3
F13 | 4.48 × 10^2 | 5.03 × 10^2 | 5.13 × 10^2 | 4.83 × 10^2 | 5.30 × 10^2 | 5.09 × 10^2 | 4.51 × 10^2 | 4.64 × 10^2 | 4.82 × 10^2 | 5.15 × 10^2
F14 | 5.26 × 10^7 | 5.26 × 10^7 | 5.24 × 10^7 | 5.16 × 10^7 | 5.16 × 10^7 | 5.07 × 10^7 | 5.23 × 10^7 | 5.18 × 10^7 | 5.24 × 10^7 | 5.17 × 10^7
F15 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4 | 1.04 × 10^4
F16 | 1.59 × 10^−13 | 1.57 × 10^−13 | 1.58 × 10^−13 | 1.58 × 10^−13 | 3.85 × 10^−2 | 1.58 × 10^−13 | 2.93 × 10^−2 | 1.59 × 10^−13 | 1.59 × 10^−13 | 1.55 × 10^−13
F17 | 1.19 × 10^5 | 1.20 × 10^5 | 1.19 × 10^5 | 1.20 × 10^5 | 1.28 × 10^5 | 1.22 × 10^5 | 1.28 × 10^5 | 1.34 × 10^5 | 1.35 × 10^5 | 6.57 × 10^4
F18 | 1.19 × 10^3 | 1.21 × 10^3 | 1.28 × 10^3 | 1.31 × 10^3 | 1.24 × 10^3 | 1.18 × 10^3 | 1.31 × 10^3 | 1.31 × 10^3 | 1.28 × 10^3 | 1.31 × 10^3
F19 | 1.10 × 10^7 | 1.09 × 10^7 | 1.12 × 10^7 | 1.10 × 10^7 | 1.12 × 10^7 | 1.13 × 10^7 | 1.12 × 10^7 | 1.12 × 10^7 | 1.11 × 10^7 | 1.02 × 10^7
F20 | 1.12 × 10^3 | 1.11 × 10^3 | 1.09 × 10^3 | 1.11 × 10^3 | 1.08 × 10^3 | 1.11 × 10^3 | 1.09 × 10^3 | 1.08 × 10^3 | 1.10 × 10^3 | 1.08 × 10^3
Rank | 6.05 | 6.05 | 6.18 | 5.33 | 5.78 | 4.70 | 5.50 | 5.18 | 6.00 | 4.25
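The mechanisms that Tables 4–6 probe can be summarized in compact form. The sketch below follows the description in the abstract only: split the swarm into an elite set (ES) holding the top tp fraction and a non-elite set (NES) with the rest, randomly partition each non-elite particle's dimensions into NDG equal-size groups, and let each group learn from two elites sampled from ES. The function names, the CSO-style blend, and the coefficient phi are illustrative assumptions, not the paper's exact velocity update.

```python
import random

def partition_swarm(swarm, fitness, tp):
    # Sort indices by fitness (minimization): the top tp fraction forms
    # the elite set ES; the remaining indices form the non-elite set NES.
    order = sorted(range(len(swarm)), key=lambda i: fitness[i])
    n_elite = max(2, int(tp * len(swarm)))  # need at least two elites
    return order[:n_elite], order[n_elite:]

def dimension_groups(dim, ndg):
    # Shuffle the dimension indices and cut them into NDG equal-size
    # groups (dim is assumed divisible by ndg in this sketch).
    idx = list(range(dim))
    random.shuffle(idx)
    size = dim // ndg
    return [idx[g * size:(g + 1) * size] for g in range(ndg)]

def guide_non_elite(particle, elites, ndg, phi=0.3):
    # Each dimension group of a non-elite particle is pulled toward two
    # distinct elites sampled at random from ES, so the whole particle
    # is comprehensively guided by multiple elites. The blend below is
    # only an illustrative learning rule.
    new = particle[:]
    for group in dimension_groups(len(particle), ndg):
        e1, e2 = random.sample(elites, 2)
        for d in group:
            new[d] = (particle[d]
                      + random.random() * (e1[d] - particle[d])
                      + phi * random.random() * (e2[d] - particle[d]))
    return new
```

Because the grouping is re-randomized for every non-elite particle, each group draws its two exemplars independently, which is the source of the diversity the ablations in Tables 4 and 6 examine.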
Yang, Q.; Zhang, K.-X.; Gao, X.-D.; Xu, D.-D.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. A Dimension Group-Based Comprehensive Elite Learning Swarm Optimizer for Large-Scale Optimization. Mathematics 2022, 10, 1072. https://doi.org/10.3390/math10071072
