Article

Hierarchical Collaborated Fireworks Algorithm

School of Artificial Intelligence, Peking University, Beijing 100871, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(6), 948; https://doi.org/10.3390/electronics11060948
Submission received: 19 February 2022 / Revised: 9 March 2022 / Accepted: 15 March 2022 / Published: 18 March 2022
(This article belongs to the Special Issue Advances in Swarm Intelligence, Data Science and Their Applications)

Abstract

The fireworks algorithm (FWA) achieves significant global optimization ability by organizing multiple simultaneous local searches. By dynamically decomposing the target problem and handling each sub-problem with a sub-population, it presents distinct properties and applicability compared with traditional evolutionary algorithms. In this paper, we extend the theoretical model of the fireworks algorithm based on search space partition to obtain a hierarchical collaboration model. It maintains multiple local fireworks for local exploitation and one global firework for overall population distribution control. The implemented hierarchical collaborated fireworks algorithm combines the advantages of both classic evolutionary algorithms and fireworks algorithms. Several experiments are provided for in-depth analysis and discussion of the proposed algorithm. The effectiveness of the proposed strategy is demonstrated on the benchmark test suite from CEC 2020. Experimental results validate that the hierarchical collaborated fireworks algorithm outperforms former fireworks algorithms significantly and achieves results similar to those of state-of-the-art evolutionary algorithms.

1. Introduction

Global optimization of non-convex problems has been a significant task for numerous academic studies and industrial applications. However, traditional gradient methods are facing difficulties in dealing with complex optimization situations, such as multi-modal or non-differentiable objective functions. Recently, a great number of evolutionary algorithms (EAs) and swarm intelligence optimization algorithms (SIOAs) have been proposed, developed, and widely applied in optimization tasks for their advantages in terms of flexibility and robustness.
The fireworks algorithm (FWA [1]) is a novel swarm intelligence optimization framework inspired by the explosion of fireworks in the night sky. Unlike classic EAs or SIOAs, the fireworks algorithm maintains several individuals called fireworks, each exploring a different aspect of the objective function through a sub-population composed of basic individuals called sparks. Meanwhile, the fireworks collaborate on their local strategies to achieve efficient global optimization. For example, some studies on the fireworks algorithm have proposed variant approaches with multi-scale [2] or multi-local [3] collaboration strategies. Fireworks algorithms are particularly suitable for complex problems with a considerable number of local extrema and for scenarios where large-scale parallel computation is supported. They have been widely applied in fields such as portfolio optimization [4], image processing [5], and power system reconfiguration [6], and have also been implemented for complex optimization scenarios such as multi-objective optimization [7].
The fireworks algorithm framework is able to significantly improve the efficiency of optimization by dynamically decomposing the target problem and handling each one with a sub-population, but it can also be harmful if the decomposition and collaboration are not performed properly. For example, when optimization is decomposed into multiple local searches, collaboration must effectively integrate local information and analyze the overall landscape of the objective function. Otherwise, information about the overall trend becomes unavailable and the efficiency of optimization is compromised.
This paper deals with this problem by combining multi-local and multi-scale decomposition and collaboration methods in the fireworks algorithm. A hierarchical model of the fireworks population is developed, analyzed, and implemented. In the proposed fireworks algorithm, a global firework controls a sub-population that approximates the overall distribution of the entire spark population, while multiple local fireworks perform local searches within dynamically partitioned sub-regions. Specifically, this paper contributes to the following aspects of the fireworks algorithm.
  • A hierarchical model. A theoretical model based on information theory is developed for the proposed collaboration framework, which is very helpful for the design and interpretation of related algorithms.
  • A refined control of the individual search strategy. The basic individual optimization strategy adopted from CMA-ES is analyzed and improved for the specific requirements of the FWA framework.
  • A unified collaboration strategy. Both multi-local collaboration and multi-scale collaboration are implemented in a unified strategy, which is intuitive but effectively combines the advantages of both approaches.
  • An efficient FWA variant. The proposed algorithm is able to combine the advantages of traditional EAs and FWAs. It achieves excellent results on benchmark test problems.
The remainder of this paper is organized as follows: Section 2 introduces the background of the target problem and related works. Then, Section 3 develops a theoretical model of the hierarchical collaboration framework. In Section 4, the individual optimization strategy is introduced and analyzed for global and local fireworks. The unified collaboration strategy is described in Section 5. Several sets of experimental designs and results are exhibited and discussed in Section 6. Finally, Section 7 concludes the paper.

2. Background

2.1. Problem Definition

This paper targets the general continuous single-objective black-box global optimization problem with boundary constraints, which is formulated in Equation (1).

x^* = \arg\min_{x \in S} f(x)    (1)

where the objective function f: \mathbb{R}^D \to \mathbb{R} is sampled from an unknown distribution p(f) related to the specific task scenario. The search space or feasible space S = \{ x \mid lb_i \le x_i \le ub_i,\ i = 1, \dots, D \} restricts each variable to a finite interval.
A general optimizer approximates x^* by proposing a finite number of solutions or samples x_i within S. For a black-box problem, the objective function provides only one scalar y = f(x) for each sample x. A well-developed optimizer combines the historical evaluation data D_n = \{(x_i, y_i)\}_{i=1}^{n} with specific prior knowledge about p(f) to generate additional valid samples in each iteration. A minimal sketch of this interaction loop is given below.
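The sampling loop described above can be summarized by an ask-evaluate-tell protocol. The Python sketch below is illustrative only: the `Optimizer` interface with `ask`/`tell` methods is an assumed abstraction for exposition, not an API from this paper.

```python
def minimize(f, optimizer, budget):
    """Minimal black-box optimization loop: propose samples, evaluate the
    scalar objective, and feed the history D_n = {(x_i, y_i)} back."""
    history = []
    best_x, best_y = None, float("inf")
    while len(history) < budget:
        xs = optimizer.ask()               # propose a batch of solutions within S
        ys = [float(f(x)) for x in xs]     # the black box returns one scalar per sample
        optimizer.tell(xs, ys)             # update the optimizer's internal state
        history.extend(zip(xs, ys))
        for x, y in zip(xs, ys):
            if y < best_y:
                best_x, best_y = x, y
    return best_x, best_y
```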
In particular, for EAs and SIOAs, a considerable number of samples are generated in each iteration. In this case, each sample is also referred to as an individual, and the set of all individuals in each iteration (or generation) is referred to as the population. The objective function is also referred to as the fitness function.

2.2. Fireworks Algorithm

The fireworks algorithm (FWA) is a novel swarm intelligence optimization algorithm that focuses on combining multiple sub-populations with a unified local optimization strategy. During the optimization process, the primary individuals called "sparks" are divided into multiple groups, each of which performs a local search on some aspect of the objective function under the leadership of an individual called a "firework". The general fireworks algorithm performs the following steps in each optimization iteration until the termination condition is satisfied. First, each firework distributes and evaluates explosion sparks around itself. Sometimes, several additional mutation sparks are generated utilizing the information obtained by the explosion. Then, a new generation of fireworks is selected from the current fireworks and sparks. Finally, collaboration strategies are sometimes applied to further tune the fireworks or their explosion parameters, including the range and number of their sparks. Such a framework has significant advantages in dealing with problems with many local extrema and in scenarios where a large-scale population is available. Currently, FWA has shown great potential for global optimization efficiency and has become a representative swarm-based optimization algorithm. A minimal schematic of this loop is sketched below.
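To make the loop above concrete, the following toy Python sketch implements a bare-bones explosion-evaluation-selection cycle with a simple dynamic amplitude rule. It is an illustrative assumption for exposition, not any specific published FWA variant; all names and constants are made up for the example.

```python
import numpy as np

def toy_fwa(f, dim=2, n_fireworks=4, sparks_each=10, iters=100, seed=0):
    """A minimal, concrete sketch of the generic FWA loop: each firework
    explodes sparks around itself, evaluates them, and keeps the best."""
    rng = np.random.default_rng(seed)
    fw = rng.uniform(-5, 5, size=(n_fireworks, dim))   # firework positions
    amp = np.full(n_fireworks, 2.0)                    # explosion amplitudes
    fw_fit = np.array([f(x) for x in fw])
    for _ in range(iters):
        for i in range(n_fireworks):
            sparks = fw[i] + rng.uniform(-amp[i], amp[i], (sparks_each, dim))
            fit = np.array([f(x) for x in sparks])
            j = int(np.argmin(fit))
            if fit[j] < fw_fit[i]:                     # independent selection per firework
                fw[i], fw_fit[i] = sparks[j], fit[j]
                amp[i] *= 1.2                          # amplify on success (cf. dynamic amplitudes)
            else:
                amp[i] *= 0.9                          # shrink on failure
    k = int(np.argmin(fw_fit))
    return fw[k], fw_fit[k]

# Usage: best_x, best_y = toy_fwa(lambda x: float(np.sum(x ** 2)))
```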
In the early FWAs, the individual strategies of the fireworks were decided directly by comparing fitness values. The original FWA [1] decides the spark number and explosion amplitude of a firework linearly with respect to its relative fitness value. As a significant number of studies focusing on the efficiency of the individual search were proposed in this period, the fireworks became more and more independent. The enhanced FWA [8] made several corrections to the original FWA, including the primary explosion method. Then, both [9,10] proposed dynamic individual explosion amplitude strategies that improve FWA's efficiency significantly. Meanwhile, a large number of FWA variants with mutation operators, such as [11,12], were proposed to utilize the information obtained from explosion sparks.
Later, more and more studies turned their attention to the collaboration of fireworks. The bare-bones FWA [13] achieved decent efficiency with a single firework and a minimalist strategy, which exposed the failure of former collaboration methods and provided a reliable individual optimization strategy. In the cooperative framework of FWA [14], a variant algorithm with an independent selection strategy was proposed, in which each firework selects a child from its own sparks. A crowdedness-avoiding cooperative strategy is also applied to repel fireworks from the explosion range of the best firework. A large number of subsequent studies are based on this independent selection. In [15], a loser-out tournament strategy was proposed to estimate the potential of each firework and timely restart the ones that are unlikely to outperform the current best. Recently, refs. [2,3] proposed collaboration strategies that assign fireworks to different scales or different local areas, respectively.

2.3. Related Works

The main contribution of this paper is to propose a hierarchical multi-population collaborative optimization framework. The related works are introduced in two directions.

2.3.1. Study on Optimization Algorithm with Multiple Population

To our knowledge, there are two primary research directions for adopting multiple populations in global optimization.
Many optimization algorithms that adopt multiple populations are called multi-population methods. In those methods, all populations evolve to solve the optimization problem simultaneously with diversified strategies or parameters. In the competition of CEC 2020 [16], several differential evolution (DE) methods achieved the best results with such a framework, including IMODE [17], J2020 [18], MP-EEH [19], and mpmL-SHADE [20]. Such methods handle the original problem with multiple sub-populations and usually collaborate by individual sharing. In contrast, the proposed algorithm handles decomposed sub-problems with sub-populations and directly collaborates strategies of each one.
Another branch of optimization algorithms with multiple populations is called cooperative co-evolution. Those methods handle large-scale problems by decomposing the variables of the solution. Ref. [21] provides an example of automatic solution decomposition and cooperative optimization. Those methods share a similar idea with the proposed algorithm but target different problems.

2.3.2. Study on Population Structure

There have been successful applications of static population topologies since the early research of particle swarm optimization (PSO), such as [22,23,24]. In [23,25], basic static population structures, including the star, ring, and von Neumann topologies, and their influence on the performance of PSO were analyzed. A looser topology generally provides better population diversity, while a tighter topology leads to a faster convergence rate. The population topology mechanism has also been naturally applied to differential evolution [26] and genetic algorithm [27].
Dynamic topologies are common in recent EAs and SIOAs. Some of them are simple extensions of classic static structures. For example, [28] proposed PSO with increasing topology connectivity to adapt to the different needs of different search stages. Other methods achieve greater flexibility by dynamically adapting the topology, including elite strategy [29], clustering [30], or randomization [31].
The hierarchical population topology model applied in this paper can be explained as a combination of basic topology models, sometimes referred to as the island model. Such a framework has already been applied in particle swarm optimization [32], differential evolution [33], genetic algorithms [34], and genetic programming [35]. However, the proposed algorithm adopts the covariance matrix adaptation evolution strategy (CMA-ES) for each sub-population and combines the population structure with the geometric relationships in the search space. Therefore, it has more intuitive and stronger control over the population.

3. Hierarchical Collaboration Model

In this section, the theoretical model of the proposed hierarchical framework is built from an information-theoretic perspective.
For an optimization task, the unknown objective function can be regarded as a random sample from a distribution p(f). The desired optimal solution x^* is then a random variable within the feasible space S. With the history evaluation data D_t = \{(x_i, y_i)\}_{i=1}^{t}, the posterior distribution of f can be obtained, and so can the posterior of x^* in Equation (2).

p(x^* \mid D_t) = \int_f p(f \mid D_t) \cdot \mathbb{I}\left[ x^* = \arg\min_{x \in S} f(x) \right] df    (2)
With a limited amount of sample data, an optimizer aims to obtain information about x^*, that is, to reduce the entropy of its posterior distribution, which is given in Equation (3).

H(p(x^* \mid D_t)) = -\int_{x^* \in S} p(x^* \mid D_t) \log p(x^* \mid D_t) \, dx^*    (3)
The expected entropy reduction caused by new data (x, y) is an ideal criterion for sampling x and is adopted in algorithms called entropy search [36]. Consider a partition \{S_i\}_{i=1}^{N} of the feasible space such that S = \bigcup_{i=1}^{N} S_i and S_i \cap S_j = \emptyset for i \ne j; Appendix A shows that the entropy can be decomposed over the subspaces, as in Equation (4).

H(p_S(x^*)) = \sum_{i=1}^{N} p(x^* \in S_i) \cdot H(p_{S_i}(x^*)) + H\left( \{ p(x^* \in S_i) \}_{i=1}^{N} \right)    (4)

where the conditioning on the history data D_t is omitted from each distribution of x^* for brevity. p_S(x^*) denotes the distribution of the optimal solution restricted to S. The second term, called the region entropy, is the entropy of the discrete distribution describing in which local region the global optimum is located.
The entropy decomposition model illustrates that the overall optimization can be performed through independent optimization within each subspace S_i together with the estimation of the probability that the global optimum is located in each subspace. This model corresponds directly to the idea of the fireworks algorithm, in which fireworks optimize in different local areas and collaborate. A small numerical check of Equation (4) is given below.
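The decomposition in Equation (4) is an exact chain rule for entropy. The following short Python check (a toy example with a discrete distribution over four cells split into two sub-regions; not from the paper) verifies it numerically.

```python
import numpy as np

def H(p):
    """Shannon entropy of a probability vector (zero entries are skipped)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

p = np.array([0.1, 0.2, 0.3, 0.4])            # p(x*) over four cells of S
parts = [p[:2], p[2:]]                        # partition S = S1 U S2
w = np.array([part.sum() for part in parts])  # region probabilities p(x* in S_i)

lhs = H(p)                                    # global entropy
rhs = sum(wi * H(part / wi) for wi, part in zip(w, parts)) + H(w)
assert abs(lhs - rhs) < 1e-12                 # weighted local entropies + region entropy
```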
The reduction in region entropy in Equation (4) has to be achieved through effective collaboration between the fireworks. However, the region entropy can also be regarded as a discrete approximation of H(p_S(x^*)) that ignores the local details within each subspace. As the subspaces become smaller and smaller, the region entropy gradually approaches the global entropy. Inspired by this fact, Equation (4) can be rewritten as follows.

H(p_S(x^*)) = (1 - \alpha) \cdot H(p_S(x^*)) + \alpha \cdot \sum_{i=1}^{N} p(x^* \in S_i) \cdot H(p_{S_i}(x^*)) + \alpha \cdot H\left( \{ p(x^* \in S_i) \}_{i=1}^{N} \right)    (5)
Since the entropy in the third term of Equation (5) is an approximation of the entropy in the first term, the two can be combined and represented by the entropy of an auxiliary distribution, as in Equation (6).

H(p_S(x^*)) \approx H\left( p_S^{(\alpha)}(x^*) \right) + \alpha \cdot \sum_{i=1}^{N} p(x^* \in S_i) \cdot H(p_{S_i}(x^*))    (6)
where p_S^{(\alpha)}(x^*) represents an auxiliary distribution balanced between the original distribution p_S(x^*) and the discrete distribution p(x^* \in S_i). The reduction of its entropy corresponds to the optimization of a low-fidelity objective function that is smoothed over each sub-region.
Equation (6) describes the theoretical model of the proposed algorithm, which contains N local fireworks optimizing in the subspaces \{S_i\}_{i=1}^{N} and a global firework optimizing a low-fidelity target over the whole feasible space S. Usually, when a local optimization algorithm is used for the global firework, its sampling range is a subset S_0 \subseteq S of the feasible space, which implies the possible range of the global optimum. In this case, the approximation still holds when \{S_i\}_{i=1}^{N} is a partition of S_0.
For any efficient local optimization algorithm, the model illustrates that applying it as the individual optimizer within this framework offers the following advantages over direct optimization.
  • Task Decomposition. The hierarchical model decomposes the overall optimization task into N + 1 independent optimization tasks. All of those sub-tasks are easier than the original optimization. The linear decomposition equation ensures that more overall entropy reduction can be achieved by processing multiple simpler tasks simultaneously.
  • Sample Efficiency. On a modern parallel computation device, the number of samples that can be evaluated at the same time is much larger than the number of samples needed per generation for most optimization algorithms. By adopting N + 1 independent optimization, the proposed framework can utilize the computation device more efficiently.
  • Flexibility and Simplicity. The theoretical model does not make any limitation on the strategy of individual optimization. They can be unified for simplicity or varied for specific requirements. The global firework directly controls the overall balance of exploration and exploitation. No additional collaboration strategy is necessary once the search space partition is satisfied.
  • Multi-Scale Optimization. The framework is particularly suitable for objective functions with both global trends and local patterns, which is quite common in practical problems.
For better geometric intuition, this paper proposes an algorithm that adopts a dynamic local Gaussian distribution model for individual optimization, which evolves according to an extended CMA-ES algorithm. Furthermore, the collaboration dynamically adjusts the fireworks' search ranges towards a partition with minimal loss of independent optimization efficiency. These advantages will be further examined and discussed in the experiments.

4. Individual Optimization Strategy

During the optimization, each firework maintains sparks that follow a multivariate Gaussian distribution. The individual optimization strategy samples sparks from the distribution and adapts it according to their evaluations. In the remainder of this section, the unified algorithm flow is introduced first, followed by a detailed explanation of the different parameter choices for local and global fireworks.

4.1. Sparks Generation

Sparks of each firework are generated as i.i.d. samples from a multivariate Gaussian distribution according to Equation (7).

x_{i,1:\lambda_i} \sim m_i + \sigma_i \times \mathcal{N}(0, C_i)    (7)

where \lambda_i is the number of sparks, m_i and C_i are the mean and covariance matrix, respectively, and \sigma_i is an overall scale factor. All sparks beyond the boundaries are re-mapped with the mirrored mapping rule in Equation (8).

x_{i,j,k} = \begin{cases} 2\, lb_k - x_{i,j,k}, & \text{if } x_{i,j,k} < lb_k \\ x_{i,j,k}, & \text{if } lb_k \le x_{i,j,k} \le ub_k \\ 2\, ub_k - x_{i,j,k}, & \text{if } x_{i,j,k} > ub_k \end{cases}    (8)

where lb_k and ub_k are the lower and upper bounds on the k-th dimension, respectively. All samples are collected and evaluated by the objective function y_{ij} = f(x_{ij}).
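A short Python sketch of Equations (7) and (8) follows; it applies a single mirroring pass, which assumes samples overshoot the box by less than one box width. The function name and arguments are illustrative.

```python
import numpy as np

def generate_sparks(m, sigma, C, lam, lb, ub, rng=None):
    """Sample lam sparks from N(m, sigma^2 * C) (Equation (7)) and mirror
    out-of-bound coordinates back into [lb, ub] (Equation (8))."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.multivariate_normal(mean=m, cov=(sigma ** 2) * C, size=lam)
    x = np.where(x < lb, 2 * lb - x, x)   # mirror across the lower bound
    x = np.where(x > ub, 2 * ub - x, x)   # mirror across the upper bound
    return x

# Example: five sparks in 2-D within the box [-5, 5]^2
sparks = generate_sparks(np.zeros(2), 1.0, np.eye(2), 5, -5.0, 5.0)
```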

4.2. Mean Shift

The new mean position m_i^{(l)} is first adapted as a weighted average of the sparks.

m_i^{(l)} = m_i + c_m \sum_{j=1}^{\lambda_i} w_{ij} (x_{ij} - m_i)    (9)

where c_m \in [0, 1] is the learning rate. The recombination weights satisfy w_{ij} \ge 0 and \sum_{j=1}^{\lambda_i} w_{ij} = 1. Usually, the better individuals receive higher weights.

4.3. Covariance Adaptation

Both the rank-\mu update and the rank-1 update are applied for the adaptation of the covariance matrix according to Equation (10).

C_i^{(l)} = (1 - c_\mu - c_1) C_i + c_\mu \sum_{j=1}^{\lambda_i} w_{ij}\, y_{ij} y_{ij}^T + c_1\, p_{c,i}\, p_{c,i}^T    (10)

where c_\mu and c_1 are learning rates. The second term corresponds to the rank-\mu update, with sample bias y_{ij} = x_{ij} - m_i^{(r)}, where the reference mean m_i^{(r)} is a position selected between the original and the new mean in order to balance the exploitation and exploration abilities.

m_i^{(r)} = (1 - c_r) m_i + c_r m_i^{(l)}    (11)

The third term in Equation (10) corresponds to the rank-1 update, which adjusts the distribution according to the historical trajectory of the mean position. The evolution path p_{c,i} is updated according to Equation (12).

p_{c,i}^{\text{new}} = (1 - c_c)\, p_{c,i} + \sqrt{c_c (2 - c_c)\, \mu_{\text{eff}}} \times \frac{m_i^{(l)} - m_i}{\sigma_i}    (12)

where c_c is the learning rate and \mu_{\text{eff}} = (\|w\|_1 / \|w\|_2)^2 is the variance effective selection mass of the sample weights.
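The following Python sketch assembles Equations (10)-(12). The sigma-normalization of the sample biases follows standard CMA-ES practice and is an assumption here, as are the argument names.

```python
import numpy as np

def adapt_covariance(C, p_c, m, m_new, sigma, X, w, c_mu, c_1, c_c, c_r, mu_eff):
    """Rank-mu plus rank-1 covariance adaptation (Equations (10)-(12)).
    X holds one spark per row; w is the array of recombination weights."""
    m_ref = (1 - c_r) * m + c_r * m_new                       # Equation (11)
    Y = (X - m_ref) / sigma                                   # sample biases y_ij
    rank_mu = (w[:, None] * Y).T @ Y                          # sum_j w_j y_j y_j^T
    p_c = (1 - c_c) * p_c \
        + np.sqrt(c_c * (2 - c_c) * mu_eff) * (m_new - m) / sigma   # Equation (12)
    C_new = (1 - c_mu - c_1) * C + c_mu * rank_mu + c_1 * np.outer(p_c, p_c)  # Equation (10)
    return C_new, p_c
```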

4.4. Scale Adaptation

The overall scale \sigma is adjusted according to the state of the individual optimization, which is reflected by the conjugate evolution path p_\sigma.

p_{\sigma,i}^{\text{new}} = (1 - c_\sigma)\, p_{\sigma,i} + \sqrt{c_\sigma (2 - c_\sigma)\, \mu_{\text{eff}}} \times C_i^{-\frac{1}{2}}\, \frac{m_i^{(l)} - m_i}{\sigma_i}    (13)

The Euclidean norm of p_\sigma is compared with the expected norm of a sample from the standard Gaussian. A longer conjugate evolution path indicates that the mean position has been moving consistently in the same direction, so the overall scale \sigma should be amplified; otherwise, it shrinks.

\ln \sigma_i^{(l)} = \ln \sigma_i + \frac{c_\sigma}{d_\sigma} \left( \frac{\| p_{\sigma,i}^{\text{new}} \|}{E\| \mathcal{N}(0, I) \|} - 1 \right)    (14)

where c_\sigma is the learning rate and the damping factor d_\sigma controls the magnitude of the update.
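A sketch of Equations (13) and (14) follows; C^{-1/2} is computed by eigendecomposition, and E||N(0, I)|| uses the standard CMA-ES approximation (both standard practice, assumed here rather than taken from the paper).

```python
import numpy as np

def adapt_scale(sigma, p_sigma, C, m, m_new, c_sigma, d_sigma, mu_eff):
    """Cumulative step-size adaptation (Equations (13) and (14))."""
    dim = len(m)
    vals, B = np.linalg.eigh(C)
    inv_sqrt_C = B @ np.diag(vals ** -0.5) @ B.T              # C^{-1/2}
    p_sigma = (1 - c_sigma) * p_sigma \
        + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * (inv_sqrt_C @ (m_new - m)) / sigma  # Eq. (13)
    chi_n = np.sqrt(dim) * (1 - 1 / (4 * dim) + 1 / (21 * dim ** 2))  # E||N(0, I)||
    sigma_new = sigma * np.exp((c_sigma / d_sigma)
                               * (np.linalg.norm(p_sigma) / chi_n - 1))  # Equation (14)
    return sigma_new, p_sigma
```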

4.5. Restart

Fireworks that are stagnant or unlikely to surpass the current optimum should be reset in time. The primary restart conditions include the following; a sketch of these checks is given at the end of this subsection.
  • Fitness Converged. \text{std}(y_{i,1:\lambda_i}) \le \epsilon_v.
  • Position Converged. \sigma_i \times \|C_i\|_2 \le \epsilon_p.
  • Not Improving. The firework's optimal solution has not improved for \epsilon_l iterations.
  • Mean Converged. The firework's mean position converges with that of a better firework, that is, \|m_i - m_j\| < \epsilon_p.
  • Covered by Better. The firework's explosion range is covered by that of a better firework, which is verified when more than 90% of its sparks lie within the explosion range of the other firework.
Once a restart condition is met, the firework is re-initialized in a random location within the feasible space. The explosion range will be defined later in Equation (18).
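The checks above can be sketched as follows. The `fw` record with fields (m, C, sigma, sparks, spark_fitness, stagnation_iters, d_B) and the helper `covered_fraction` are hypothetical illustrations; the thresholds follow the text.

```python
import numpy as np

def covered_fraction(sparks, fw):
    """Fraction of the given sparks inside fw's explosion range (Equation (18))."""
    z = np.linalg.solve(np.linalg.cholesky(fw.C), (sparks - fw.m).T) / fw.sigma
    return float(np.mean(np.linalg.norm(z, axis=0) <= fw.d_B))

def should_restart(fw, better_fireworks, eps_v=1e-5, eps_p=1e-5, eps_l=100):
    if np.std(fw.spark_fitness) <= eps_v:                    # fitness converged
        return True
    if fw.sigma * np.linalg.norm(fw.C, ord=2) <= eps_p:      # position converged
        return True
    if fw.stagnation_iters >= eps_l:                         # not improving
        return True
    for other in better_fireworks:
        if np.linalg.norm(fw.m - other.m) < eps_p:           # mean converged with a better one
            return True
        if covered_fraction(fw.sparks, other) > 0.9:         # covered by a better firework
            return True
    return False
```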

4.6. Parameter Settings

Within the unified individual optimization framework, local and global fireworks require different parameter settings for their distinct targets. In general, the local fireworks are expected to conduct an efficient search in local areas. The global firework, on the other hand, requires a gradual and steady narrowing of its search range, starting from the whole feasible space. The parameter settings of both global and local fireworks are described below, and their behaviors are illustrated in Figure 1.

4.6.1. Initialization

Each firework in the proposed algorithm maintains the same number of sparks, \lambda_i = \lambda / (N + 1). The local fireworks are located uniformly within the feasible space, while the global firework is sampled near the origin. All covariance matrices are initialized with the identity matrix. Additionally, the initial overall scale of the global firework is given in Equation (15).

\sigma_{\text{global}} = \frac{ub - lb}{2 \times E\| \mathcal{N}(0, I) \|}    (15)

where ub = \max(ub_k) and lb = \min(lb_k) are the upper and lower bounds of the feasible space. Local fireworks are initialized with \sigma_{\text{local}} = \sigma_{\text{global}} / N.

4.6.2. Recombination Weights

For efficient exploitation, local fireworks select the better half of the sparks (\mu = \lambda / 2) and assign logarithmic weights according to rank, as follows, where w_j is the weight for the j-th best spark.

w_j \propto \max\left( 0,\ \log(\mu + 0.5) - \log j \right)    (16)

For stable exploration, the global firework eliminates the worst 5% of sparks (\mu = 0.95\lambda) and assigns uniform weights to the rest. Both weighting schemes are sketched below.
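A minimal Python rendering of the two schemes (assuming sparks are indexed by rank, 1 being the best; function names are illustrative):

```python
import numpy as np

def local_weights(lam):
    """Logarithmic weights over the better half (Equation (16))."""
    mu = lam // 2
    ranks = np.arange(1, lam + 1)                            # rank 1 = best spark
    w = np.maximum(0.0, np.log(mu + 0.5) - np.log(ranks))    # zero beyond rank mu
    return w / w.sum()

def global_weights(lam):
    """Uniform weights over the best 95% of sparks."""
    mu = int(np.floor(0.95 * lam))
    w = np.zeros(lam)
    w[:mu] = 1.0 / mu
    return w
```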

4.6.3. Referenced Mean

The scalar c_r \in [0, 1] determines the reference mean and balances the exploration and exploitation of the rank-\mu update. The original CMA-ES uses c_r = 0 for a highly exploratory population. The global firework takes c_r = 1.0, so the adapted covariance matrix tends to reproduce its selected samples with maximum probability. The local fireworks take c_r = 0.5 for balanced local optimization.

4.6.4. Rank-1 Update

The rank-1 update adjusts the sample distribution based on the historical movement trajectory of the mean position. It is very effective for local fireworks but not suitable for the global firework.

4.6.5. Scale Update

The damping factor d_\sigma in Equation (14) controls the magnitude of the scale update, which is shrunk to balance against the effect of collaboration. The local fireworks reduce d_\sigma to 0.5 times its original value designed in CMA-ES, while the global firework takes the update magnitude as zero, because selection and collaboration are sufficient to control its search range.

4.6.6. Restart Conditions

The value and position precisions of all fireworks are both taken as \epsilon_v = \epsilon_p = 10^{-5}. Local fireworks restart after \epsilon_l = 100 failed iterations, while N \times \epsilon_l iterations are allowed for the global firework.

4.6.7. Overall Learning Rate

The convergence rate of the global firework is already limited through the parameter selections above. However, during the optimization, it should be significantly slower than that of the local fireworks. An additional overall learning rate c_g is therefore set for the global firework to slow down its optimization further, as follows; c_g = 1/N is assigned in the proposed algorithm.

m_0^{(l)} \leftarrow c_g\, m_0^{(l)} + (1 - c_g)\, m_0
C_0^{(l)} \leftarrow c_g\, C_0^{(l)} + (1 - c_g)\, C_0
\sigma_0^{(l)} \leftarrow c_g\, \sigma_0^{(l)} + (1 - c_g)\, \sigma_0    (17)

4.7. Individual Optimization Framework

The framework of the individual optimization strategy is shown in Algorithm 1.
Algorithm 1 Individual optimization framework.
  • for all fireworks F_i do
  •   Generate sparks x_ij according to Equation (7)
  • end for
  • Gather and evaluate the sparks y_ij = f(x_ij)
  • for all fireworks F_i do
  •   Compute m_i^(l) according to Equation (9)
  •   Update the evolution path p_{c,i} according to Equation (12)
  •   Compute C_i^(l) according to Equation (10)
  •   Update the conjugate evolution path p_{σ,i} according to Equation (13)
  •   Compute the overall scale σ_i^(l) according to Equation (14)
  • end for
  • Adjust the global firework according to Equation (17)

5. Collaboration Strategy

Based on the model in Equation (6), the proposed algorithm mainly considers collaboration by adjusting the fireworks' search ranges towards a partition. Since each firework maintains a multivariate Gaussian distribution, its search range is defined by the bounded region where the probability density exceeds a specific value, whose boundary is given in Equation (18).

B_F = \left\{ x \ \middle|\ \left\| C^{-\frac{1}{2}} (x - m) / \sigma \right\| = d_B \right\}    (18)

Equation (18) defines the boundary of firework F_i as B_{F_i}, abbreviated as B_i, which corresponds to an ellipsoidal shell. The enclosed region S_F = \bar{B}_F is then the firework's search range. In the proposed algorithm, d_B = \text{mean}(\chi_D) + 0.5 \times \text{std}(\chi_D) is taken, where \chi_D is the chi distribution with D degrees of freedom, by which the defined search range S_F covers about 70% of the samples in arbitrary dimension D.
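The radius d_B can be computed directly from the chi distribution, as in the short SciPy sketch below (the function name is illustrative).

```python
from scipy.stats import chi

def boundary_radius(dim):
    """d_B = mean(chi_D) + 0.5 * std(chi_D) from Equation (18)."""
    return chi.mean(dim) + 0.5 * chi.std(dim)

# For D = 20, d_B is roughly 4.8, and chi.cdf(boundary_radius(20), 20) is
# roughly 0.7, i.e., the range covers about 70% of samples, as stated above.
```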
For simplicity, the collaboration of the fireworks' search ranges is approximated in pairs, via the following steps.

5.1. Computation of Dividing Points

For each pair of fireworks F_i and F_j, a dividing point can be obtained on the connecting line \overline{m_i m_j}, which specifies the cut-off point of their boundaries under ideal collaboration. Let r_{ij} be the radius of S_i along the line \overline{m_i m_j} and d_{ij} = \|m_i - m_j\|. The dividing point \mathbf{d}_{ij} is obtained by adjusting r_{ij} and r_{ji} simultaneously until the two boundaries coincide on \overline{m_i m_j}. For local fireworks, this is completed by solving Equation (19).

r_{ij}\, e^{a_{ij} w} + r_{ji}\, e^{a_{ji} w} = d_{ij}    (19)

where r_{ij} changes to r_{ij} e^{a_{ij} w} through collaboration, which also equals \|m_i - \mathbf{d}_{ij}\|. The sensitivity factors a_{ij} and a_{ji} control the relative magnitudes of the changes for F_i and F_j. The collaboration is also illustrated in Figure 2.
Equation (20) should be solved instead when F_i is the global firework.

r_{ij}\, e^{-a_{ij} w} - r_{ji}\, e^{a_{ji} w} = d_{ij}    (20)

where the negative sensitivity of the global firework indicates that its boundary changes in the opposite direction to that of the local firework. The collaboration with the global firework is illustrated in Figure 3.
The sensitivity factors are set to balance the influence of the global collaboration on the individual optimizations. For each pair of fireworks F_i and F_j, a comparison is made between their optimization states. Only when the worst spark of F_i is still better than the best spark of F_j are a_{ij} = 0.0 and a_{ji} = 1.0 taken, so that the optimization of F_i is not influenced. Otherwise, a_{ij} = a_{ji} = 1.0, so the two fireworks collaborate with the same magnitude. In order to protect effective individual optimization, the sensitivity of a local firework is set to zero if it has improved within the last 0.2\,\epsilon_l iterations. When both sensitivities are zero, the dividing points fall on the boundaries of the respective fireworks, so they do not change during collaboration.
The sensitivity factor of the global firework is amplified by c_a > 1.0 to ensure the efficiency of local optimization. In the proposed algorithm, a significantly large value c_a = 5.0 is taken.
For both equations, the left-hand side is monotonic in w when the sensitivity factors are positive. Therefore, the equation is guaranteed to be solved quickly to a given accuracy within a given range, for example by bisection, as sketched below.
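A bisection sketch for Equation (19) follows (the same procedure applies to the monotonic left-hand side of Equation (20)); positive sensitivity factors are assumed so that the left-hand side is strictly monotonic.

```python
import numpy as np

def solve_w(r_ij, r_ji, a_ij, a_ji, d_ij, tol=1e-10):
    """Bisection solve of r_ij*e^(a_ij*w) + r_ji*e^(a_ji*w) = d_ij (Eq. (19))."""
    g = lambda w: r_ij * np.exp(a_ij * w) + r_ji * np.exp(a_ji * w) - d_ij
    lo, hi = -1.0, 1.0
    while g(lo) > 0:                 # expand left until the root is bracketed
        lo *= 2.0
    while g(hi) < 0:                 # expand right until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Example: two overlapping ranges (r_ij + r_ji > d_ij) shrink until they touch,
# so the solved w is negative here.
w = solve_w(r_ij=2.0, r_ji=1.5, a_ij=1.0, a_ji=1.0, d_ij=3.0)
```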

5.2. Selection of Feature Points

The dividing points describe the boundaries of each pair of fireworks under complete and ideal collaboration. However, only a few dividing points should be selected and adapted to for firework F_i during optimization.
First, the collaboration should only be performed between adjacent fireworks. In order to avoid the complex computation of geometric relationships in high-dimensional space, the \tau = 2 most critical dividing points \{\mathbf{d}_{ik}\}_{k=1}^{\tau} are selected for each firework. For local fireworks, the dividing points with the highest probability density are selected. On the contrary, the dividing points with the lowest density are selected for the global firework.
Then, the selected points are clipped towards the distribution boundary to avoid excessive change. Based on the analysis of the individual searches, the magnitude of change is restricted to the interval [0.85, 1.20] times the radius. The feature point is calculated in Equation (21), and the clipping is given in Equation (22).

f_{ik} = m_i^{(l)} + d_{ik}^{(clip)} \times \frac{\mathbf{d}_{ik} - m_i^{(l)}}{\left\| \mathbf{d}_{ik} - m_i^{(l)} \right\|}    (21)

where d_{ik}^{(clip)} is the clipped distance from the mean position m_i^{(l)} to the feature point f_{ik}.

d_{ik}^{(clip)} = \begin{cases} \alpha_l\, r_{ij}, & \text{if } \left\| \mathbf{d}_{ik} - m_i^{(l)} \right\| < \alpha_l\, r_{ij} \\ \alpha_u\, r_{ij}, & \text{if } \left\| \mathbf{d}_{ik} - m_i^{(l)} \right\| > \alpha_u\, r_{ij} \\ \left\| \mathbf{d}_{ik} - m_i^{(l)} \right\|, & \text{otherwise} \end{cases}    (22)

with (\alpha_l, \alpha_u) = (0.85, 1.20).
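Equations (21) and (22) amount to clipping the distance from the new mean to the dividing point into [alpha_l, alpha_u] times the radius, as in this short sketch (the endpoint values follow the [0.85, 1.20] interval in the text):

```python
import numpy as np

def feature_point(m_new, d_point, r_ij, alpha_l=0.85, alpha_u=1.20):
    """Project the dividing point onto its direction from the new mean with
    the distance clipped by Equation (22), returning Equation (21)."""
    v = d_point - m_new
    dist = np.linalg.norm(v)
    clipped = np.clip(dist, alpha_l * r_ij, alpha_u * r_ij)  # Equation (22)
    return m_new + clipped * v / dist                        # Equation (21)
```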

5.3. Adaptation to Feature Points

The distribution of F_i after individual adaptation is further adjusted to fit the feature points \{f_{ik}\}_{k=1}^{\tau} on its boundary. For simplicity, the fitting is performed one feature point at a time.

5.3.1. Mean Shift

First, the mean position is shifted to balance the overall change in boundary shape. For each feature point f_{ik}, the reverse of its shortest vector to the boundary B_i is computed, and these vectors are averaged. The shift vector of the mean position is calculated as follows.

\text{mv}_i = \frac{1}{\tau} \sum_{k=1}^{\tau} (f_{ik} - q_{ik})    (23)

The point q_{ik} is the closer intersection of B_i with the line \overline{m_i f_{ik}}. The shift vector is restricted to 20% of the radius in the corresponding direction for local fireworks and to 5% for the global firework.

m_i^{(g)} = m_i^{(l)} + \text{mv}_i \times \min\left( 1,\ \frac{\alpha_m r_i}{\| \text{mv}_i \|} \right)    (24)

According to the experiments, the mean shift also helps reduce the condition number of the resulting covariance matrix and avoid possible numerical problems.

5.3.2. Boundary Adaptation

The boundary collaboration is completed separately for each feature point and then averaged. The following theorem is easily verified by substituting the matrix in Equation (25) into the definition of the firework boundary.
Theorem 1.
For a multivariate Gaussian distribution \mathcal{N}(m^{(g)}, C^{(l)}) with overall scale \sigma^{(l)} and a feature point f, the following matrix C_f^{(g)} satisfies that f lies on its boundary, and it has the same radii as C^{(l)} in the conjugate directions.
C_f^{(g)} = C^{(l)} + \frac{\lambda}{\sigma^2}\, (f - m^{(g)})(f - m^{(g)})^T    (25)

where

\lambda = \frac{1}{d_B^2} - \frac{1}{z^T z}    (26)

and

z_{ik} = C_i^{(l)\,-\frac{1}{2}}\, (f_{ik} - m_i^{(g)}) / \sigma_i^{(l)}    (27)
Then, the collaborated covariance matrix C_i^{(g)} is obtained by averaging the effects of all feature points.

C_i^{(g)} = C_i^{(l)} + \frac{1}{\tau} \sum_{k=1}^{\tau} \frac{\lambda_{ik}}{\sigma_i^2}\, (f_{ik} - m_i^{(g)})(f_{ik} - m_i^{(g)})^T    (28)

Since the change in the distribution's scale is carried by the covariance matrix, \sigma_i remains the same.

\sigma_i^{(g)} = \sigma_i^{(l)}    (29)
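The following Python sketch applies Equations (25)-(27) for a single feature point and numerically checks the statement of Theorem 1: after the rank-1 correction, the feature point lies exactly on the new boundary. The test matrices and values are arbitrary.

```python
import numpy as np

def adapt_to_feature(C, m, sigma, f, d_B):
    u = f - m
    # z has the same norm as C^{-1/2} u / sigma (Equation (27))
    z = np.linalg.solve(np.linalg.cholesky(C), u) / sigma
    lam = 1.0 / d_B ** 2 - 1.0 / (z @ z)                   # Equation (26)
    return C + (lam / sigma ** 2) * np.outer(u, u)         # Equation (25)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
C = A @ A.T + 4.0 * np.eye(4)                              # a valid covariance matrix
m, sigma, f, d_B = np.zeros(4), 0.7, rng.normal(size=4), 2.5
C_f = adapt_to_feature(C, m, sigma, f, d_B)
r = np.sqrt((f - m) @ np.linalg.solve(C_f, f - m)) / sigma # ||C_f^{-1/2}(f - m)/sigma||
assert abs(r - d_B) < 1e-8                                 # f lies on the new boundary
```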

5.4. Collaboration Framework

The proposed algorithm is shown in Algorithm 2. An overall reboot is conducted when the population has not improved for over M = 100 iterations.
Algorithm 2 Hierarchical collaborated fireworks algorithm.
  • while the termination condition is not satisfied do
  •   Initialize the local firework population {F_i}_{i=1}^{N}
  •   Initialize the global firework F_0
  •   while the population improved within M iterations do
  •     # Individual Optimization
  •     for all fireworks F_i, i = 0, 1, ..., N do
  •       Generate sparks x_ij
  •     end for
  •     Collect and evaluate all sparks
  •     for all fireworks F_i, i = 0, 1, ..., N do
  •       Compute m_i^(l), C_i^(l), σ_i^(l)
  •       if any restart condition is satisfied then
  •         Re-initialize F_i
  •       end if
  •     end for
  •     # Collaboration
  •     for all pairs of fireworks F_i and F_j do
  •       Compute the dividing point d_ij as in Section 5.1
  •     end for
  •     for all fireworks F_i do
  •       Select and adjust the feature points {f_ik}_{k=1}^{τ} as in Section 5.2
  •       Compute m_i^(g), C_i^(g), σ_i^(g) as in Section 5.3
  •       Update the firework distribution: m_i ← m_i^(g), C_i ← C_i^(g), σ_i ← σ_i^(g)
  •     end for
  •   end while
  • end while
  • Return the best evaluated solution

6. Experiments and Discussions

In this section, the efficiency and properties of the proposed algorithm are examined and discussed through experiments. All experiments are run on Ubuntu 18.04 with an Intel(R) Xeon(R) CPU E5-2675 v3.

6.1. Experiments on the Algorithm Efficiency

The efficiency of the proposed algorithm is examined on the bound-constrained single objective benchmark test suite from the IEEE Congress on Evolutionary Computation (CEC) 2020 [16]. The benchmark set contains ten objective functions, including uni-modal, basic, hybrid, and composition problems. It provides a relatively comprehensive test of different aspects of performance for optimization algorithms. Since consistent experimental results are observed in 10, 15, and 20 dimensions, only 20-dimensional performance is presented in this paper.
The hierarchical collaborated FWA (HCFWA) is first compared with previous important FWAs to examine the effectiveness of the proposed strategy. The loser-out tournament FWA (LoTFWA [15]) is a classic FWA that has received widespread attention and serves as the foundation of many subsequent studies. CMAFWA replaces the local search of LoTFWA with CMA-ES and is used as a comparison algorithm. The FWA based on search space partition (FWASSP [3]) uses a collaboration strategy similar to that of the proposed algorithm but adopts only local fireworks.
Each algorithm is tested 30 times on the 20-dimensional problems with a maximum of 10,000,000 evaluations. The proposed algorithm keeps most parameters consistent with the previous algorithms as they were published. The results of the comparisons are shown in Table 1, with the best result for each problem shown in bold.
Wilcoxon rank-sum tests are performed to verify the significance of the differences in optimization results. The proposed algorithm obtains the best average results on five problems. CMAFWA presents the most outstanding local optimization ability with the original CMA-ES strategy, especially on the first and second problems: a uni-modal problem and a multi-modal problem with high local search capability requirements. FWASSP and HCFWA, which use fireworks collaboration, obtain all the best average results on the remaining problems. However, their local exploitation ability is slightly weaker because of the defined minimum optimization precision \epsilon_v = \epsilon_p = 10^{-5} and the adverse effects of collaboration. At a confidence level of 95%, HCFWA outperforms FWASSP significantly on five problems but is worse on two, which might be caused by an improper global convergence rate.
The proposed algorithm is also compared with important algorithms including IPOP-CMA-ES [37] and SHADE [38], which are currently among the most widely applied and efficient evolutionary algorithms. The results of LoTFWA, IPOP-CMA-ES, SHADE, and HCFWA are listed in Table 2. It has been difficult for fireworks algorithms to compete with the various differential evolution (DE) algorithms on the benchmarks after CEC 2017. The proposed algorithm achieves results similar to those of SHADE and is better suited to specific problems.
Although it remains tough to compete with some state-of-the-art implementations of DE, such as [17,39], on this benchmark, HCFWA achieves better results on problems with a large number of local optima, such as functions (3) and (8), which will be further discussed in later experiments. Function (2) also contains plenty of local optima, but many of its local areas have huge condition numbers, which is disadvantageous to the proposed algorithm's individual optimization.

6.2. Experiments on the Population Behavior

In the second set of experiments, the effect of global firework is analyzed and examined from the perspective of population behavior.
The population behavior of the fireworks algorithm is quite different from that of traditional evolutionary algorithms. For most EAs, such as CMA-ES or DE, the population is first dispersed throughout the feasible space and then gradually converges to a single point. In contrast, fireworks in FWA are uniformly distributed within the feasible space, and each sub-population converges to its corresponding firework. As a result, the population of an EA eventually converges even for multi-modal problems, while the population of the fireworks algorithm usually does not.
The proposed algorithm forms a compromise between these two types of methods. The optimization of the global firework makes the overall population distribution converge gradually, as in a general evolutionary algorithm. Meanwhile, the local fireworks remain relatively independent and rapidly exploit their local areas. The 2D visualizations of the optimization progress in Figure 4 show the behavior of the population in the proposed algorithm.
The images in the first row present the optimization process on the uni-modal cigar function. As all fireworks approach the exact global optimum, the search ranges of the local fireworks quickly connect. The global firework also narrows its boundary quickly under both individual optimization and collaboration. Then, the collaboration prevents the local fireworks' search ranges from overlapping. Only the best local firework independently exploits the area around the optimum position and converges. The global firework gradually narrows down the distribution of the whole population at a much slower rate. Moreover, the remaining local fireworks fill the remaining search area of the global firework.
The images in the second row present the optimization process on the multi-modal Rastrigin's function. In this case, each local firework might converge to a local minimum near its initial position. Only when a local firework is unable to keep up with the optimization progress of a better neighbor does it become more sensitive in collaboration and fill up the search range of the global firework. In the last picture, two local fireworks have converged, while the other two are much more collaborative. The global firework keeps slowly narrowing around the local fireworks. This steady state is maintained until a converged firework finishes its local exploitation or another firework fails to discover potential solutions within the remaining space of the global firework for too long.

6.3. Experiments on the Algorithm Applicability

Combining the analyses of the theoretical model and the population behavior, the applicability of the proposed algorithm is discussed and examined below.
According to the "no free lunch" (NFL) theorem [40], any optimization algorithm can only be effective for a particular class of target problems. The distinct population behaviors of classic EAs and FWAs result in different performance characteristics during optimization, leading to their respective suitability for different objective functions.
Evolutionary algorithms usually distribute their population widely within the feasible space and slowly narrow the search range until convergence. Therefore, they always consider the global trend of the objective function first and turn to the local pattern after entering a specific local area. On the other hand, the fireworks algorithm directly starts exploiting random local areas, so the global trend information needs to be obtained and utilized by an effective collaboration strategy, which is usually absent from previous algorithms. As analyzed before, the proposed HCFWA combines both types of population behaviors: the global firework keeps continuously optimizing the global trend, while the local fireworks keep exploiting different local areas. This compromise is built on collaboration and achieved at the cost of the individual optimization efficiency of some of the worse fireworks.
A set of toy experiments on Rastrigin's function is conducted to show the optimization characteristics of these algorithms. CMAFWA, SHADE, and HCFWA with different global firework learning rates are tested on the adjusted objective function, which contains many local minima and considerable high-frequency amplitude. Each algorithm is repeated 30 times. The mean and variance of the best fitness evaluated in each iteration are presented in Figure 5.
The first image in Figure 5 presents the optimization performance of CMAFWA, SHADE, and HCFWA. CMAFWA and SHADE exhibit very different search patterns. As discussed before, the single-population evolutionary algorithm SHADE first observes the global trend of the objective function; therefore, it improves relatively slowly but stably, with significantly low variance. On the contrary, CMAFWA directly exploits random local areas, improving rapidly in the early stage but with considerable variance. As HCFWA shares properties of both algorithms, its performance lies in between.
The second image in Figure 5 presents the optimization performance of HCFWA with different global firework learning rates. As the learning rate increases, the behavior pattern of the proposed algorithm gradually shifts from that of CMAFWA towards that of SHADE. However, if the learning rate is too large, the pattern can become much more complicated due to restarts of the global firework.
Such an optimization pattern indicates that the proposed algorithm is more suitable for problems that have both global trends and local patterns. Its convergence rate is best specified based on the global structure of the target problem. When the global feature is significant, the global firework should optimize faster and guide the local fireworks to more potential areas. When the global feature is not significant, the global firework should optimize slowly or stop, guiding the local fireworks towards broad exploration. Fortunately, in plenty of practical optimization scenarios, such a global structure can usually be evaluated at a relatively small cost by techniques such as approximation, sampling, and simulation.

7. Conclusions

This paper extends the fireworks algorithm based on search space partition into a hierarchical collaborative framework. A theoretical model is developed from an information-theoretic perspective and used to guide the algorithm's design and analyze its properties. In the proposed framework, multiple local fireworks exploit their local areas, as in classic fireworks algorithms, while a global firework optimizes on a larger scale and controls the overall distribution of the whole population. The hierarchical collaborated fireworks algorithm is implemented based on a unified individual optimization algorithm and collaboration strategy. Experimental results on the CEC 2020 benchmark demonstrate that the proposed algorithm achieves better performance than former variants of FWA and competitive efficiency compared with other successful frameworks, especially on complex multi-modal problems. Additional experiments are provided to analyze the properties of the proposed framework. It can be observed that HCFWA can simultaneously maintain optimization of the global trend and of local patterns at multiple locations. Therefore, the stability of global exploration and the convergence speed of local exploitation can be guaranteed simultaneously.
The theoretical model of the hierarchical collaborative fireworks algorithm helps analyze the fundamental principles of multi-local and multi-scale optimization and helps build effective optimization algorithms with multiple populations. Furthermore, the significant experimental results also imply the outstanding ability of the proposed algorithm on specific types of problems. Fireworks algorithms based on such a model hold considerable potential for further efficiency improvements.
The proposed hierarchical collaborated fireworks algorithm is a basic implementation of the theoretical model, and there is still considerable room for improvement in many of the approximation details of the collaboration. For example, utilizing the spatial neighbor relationships of fireworks and dynamically adjusting the strategy of the global firework both appear to offer significant potential for improving the efficiency of the algorithm.

Author Contributions

Conceptualization, Y.L.; Data curation, Y.L.; Formal analysis, Y.L.; Funding acquisition, Y.T.; Investigation, Y.L.; Methodology, Y.L.; Project administration, Y.T.; Resources, Y.T.; Software, Y.L.; Supervision, Y.T.; Validation, Y.L.; Visualization, Y.L.; Writing—original draft, Y.L.; Writing—review and editing, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Grant No. 62076010), and partially supported by Science and Technology Innovation 2030—“New Generation Artificial Intelligence” Major Project (Grant Nos.: 2018AAA0102301 and 2018AAA0100302). (Y.T. is the corresponding author.)

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Decomposition of Entropy

Here, the entropy decomposition of a random variable x \in S over the partition \{S_i\}_{i=1}^{n} is presented in Equation (A1). In the derivation, p(x \mid A) = p(x) / p(A) denotes the distribution restricted to A \subseteq S, and H(x \mid A) denotes the entropy of p(x \mid A).

H(x \mid S) = -\int_{x \in S} p(x \mid S) \log p(x \mid S) \, dx
= -\sum_{i=1}^{n} \int_{x \in S_i} p(x \mid S) \log p(x \mid S) \, dx
= -\sum_{i=1}^{n} \int_{x \in S_i} p(x \mid S_i)\, p(S_i) \log \left[ p(x \mid S_i)\, p(S_i) \right] dx
= -\sum_{i=1}^{n} \int_{x \in S_i} p(x \mid S_i)\, p(S_i) \log p(x \mid S_i) \, dx - \sum_{i=1}^{n} \int_{x \in S_i} p(x \mid S_i)\, p(S_i) \log p(S_i) \, dx
= \sum_{i=1}^{n} p(S_i)\, H(x \mid S_i) + H(\{p(S_i)\}_{i=1}^{n})    (A1)

References

  1. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. In Proceedings of the International Conference in Swarm Intelligence, Beijing, China, 12–15 June 2010; pp. 355–364.
  2. Li, Y.; Tan, Y. Multi-Scale Collaborative Fireworks Algorithm. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  3. Li, Y.; Tan, Y. Enhancing Fireworks Algorithm in Local Adaptation and Global Collaboration. In Proceedings of the International Conference on Swarm Intelligence, Qingdao, China, 17–21 July 2021; pp. 451–465.
  4. Bacanin, N.; Tuba, M. Fireworks algorithm applied to constrained portfolio optimization problem. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1242–1249.
  5. Tuba, E.; Tuba, M.; Dolicanin, E. Adjusted fireworks algorithm applied to retinal image registration. Stud. Inform. Control 2017, 26, 33–42.
  6. Imran, A.M.; Kowsalya, M. A new power system reconfiguration scheme for power loss minimization and voltage profile enhancement using fireworks algorithm. Int. J. Electr. Power Energy Syst. 2014, 62, 312–322.
  7. Liu, L.; Zheng, S.; Tan, Y. S-metric based multi-objective fireworks algorithm. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1257–1264.
  8. Zheng, S.; Janecek, A.; Tan, Y. Enhanced fireworks algorithm. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2069–2077.
  9. Li, J.; Zheng, S.; Tan, Y. Adaptive fireworks algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 3214–3221.
  10. Zheng, S.; Janecek, A.; Li, J.; Tan, Y. Dynamic search in fireworks algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 3222–3229.
  11. Yu, C.; Li, J.; Tan, Y. Improve enhanced fireworks algorithm with differential mutation. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 264–269.
  12. Yu, C.; Tan, Y. Fireworks algorithm with covariance mutation. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 1250–1256.
  13. Li, J.; Tan, Y. The bare bones fireworks algorithm: A minimalist global optimizer. Appl. Soft Comput. 2018, 62, 454–462.
  14. Zheng, S.; Li, J.; Janecek, A.; Tan, Y. A cooperative framework for fireworks algorithm. IEEE/ACM Trans. Comput. Biol. Bioinform. 2015, 14, 27–41.
  15. Li, J.; Tan, Y. Loser-out tournament-based fireworks algorithm for multimodal function optimization. IEEE Trans. Evol. Comput. 2017, 22, 679–691.
  16. Suganthan, P.N. 2020-Bound-Constrained-Opt-Benchmark. 2020. Available online: https://github.com/P-N-Suganthan/2020-Bound-Constrained-Opt-Benchmark (accessed on 18 February 2022).
  17. Sallam, K.M.; Elsayed, S.M.; Chakrabortty, R.K.; Ryan, M.J. Improved multi-operator differential evolution algorithm for solving unconstrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  18. Brest, J.; Maučec, M.S.; Bošković, B. Differential evolution algorithm for single objective bound-constrained optimization: Algorithm j2020. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  19. Bolufé-Rohler, A.; Chen, S. A multi-population exploration-only exploitation-only hybrid on CEC-2020 single objective bound constrained problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  20. Jou, Y.C.; Wang, S.Y.; Yeh, J.F.; Chiang, T.C. Multi-population Modified L-SHADE for Single Objective Bound Constrained optimization. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
  21. Omidvar, M.N.; Li, X.; Mei, Y.; Yao, X. Cooperative co-evolution with differential grouping for large scale optimization. IEEE Trans. Evol. Comput. 2013, 18, 378–393.
  22. Bratton, D.; Kennedy, J. Defining a standard for particle swarm optimization. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 120–127.
  23. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No. 02TH8600), Honolulu, HI, USA, 12–17 May 2002; Volume 2, pp. 1671–1676.
  24. Li, X. Niching without niching parameters: Particle swarm optimization using a ring topology. IEEE Trans. Evol. Comput. 2009, 14, 150–169.
  25. Kennedy, J. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. In Proceedings of the 1999 Congress on Evolutionary Computation, CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, pp. 1931–1938.
  26. De Falco, I.; Della Cioppa, A.; Maisto, D.; Scafuri, U.; Tarantino, E. Impact of the topology on the performance of distributed differential evolution. In Proceedings of the European Conference on the Applications of Evolutionary Computation, Granada, Spain, 23–25 April 2014; pp. 75–85.
  27. Tomassini, M. The parallel genetic cellular automata: Application to global function optimization. In Artificial Neural Nets and Genetic Algorithms; Springer: Berlin/Heidelberg, Germany, 1993; pp. 385–391.
  28. Lim, W.H.; Isa, N.A.M. Particle swarm optimization with increasing topology connectivity. Eng. Appl. Artif. Intell. 2014, 27, 80–102.
  29. Fieldsend, J.E.; Everson, R.M.; Singh, S. Using unconstrained elite archives for multiobjective optimization. IEEE Trans. Evol. Comput. 2003, 7, 305–323.
  30. Halder, U.; Das, S.; Maity, D. A cluster-based differential evolution algorithm with external archive for optimization in dynamic environments. IEEE Trans. Cybern. 2013, 43, 881–897.
  31. Ni, Q.; Deng, J. A new logistic dynamic particle swarm optimization algorithm based on random topology. Sci. World J. 2013, 2013, 409167.
  32. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
  33. Price, K.; Storn, R.M.; Lampinen, J.A. Differential Evolution: A Practical Approach to Global Optimization; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006.
  34. Mirjalili, S. Genetic algorithm. In Evolutionary Algorithms and Neural Networks; Springer: Berlin/Heidelberg, Germany, 2019; pp. 43–55.
  35. Koza, J.R.; Poli, R. Genetic programming. In Search Methodologies; Springer: Berlin/Heidelberg, Germany, 2005; pp. 127–164.
  36. Hennig, P.; Schuler, C.J. Entropy Search for Information-Efficient Global Optimization. J. Mach. Learn. Res. 2012, 13.
  37. Loshchilov, I. CMA-ES with restarts for solving CEC 2013 benchmark problems. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 369–376.
  38. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 71–78.
  39. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K.; Awad, N.H. Evaluating the Performance of Adaptive Gaining-Sharing Knowledge Based Algorithm on CEC 2020. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC) as part of the IEEE World Congress on Computational Intelligence (IEEE WCCI), Online, 19–24 July 2020.
  40. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
Figure 1. Individual optimization of local and global fireworks. The solid and dashed lines are the contours of the distribution before and after the local single-step optimization, respectively.
Figure 2. Dividing point in local collaboration.
Figure 3. Dividing point in global collaboration.
Figure 4. Optimization process of HCFWA on 2D functions. The solid, dashed, and dotted ellipses correspond to each firework's original distribution, locally adapted distribution, and collaborated distribution. The dots represent sparks. The learning rates of the fireworks' individual optimization are reduced for better visualization.
Figure 5. Optimization curves on Rastrigin's function.
Table 1. Comparison with FWAs on 20-dimensional problems of CEC 2020. For each comparison algorithm, +, -, and = indicate that its result is significantly worse than, significantly better than, or statistically similar to that of HCFWA (Wilcoxon rank-sum test).

Func. | LoTFWA Mean (Std) | CMA-FWA Mean (Std) | FWASSP Mean (Std) | HCFWA Mean (Std)
1 | 1.625e+06 (4.048e+05) + | 0.000e+00 (0.000e+00) - | 1.238e-05 (3.640e-06) = | 1.751e-05 (1.929e-06)
2 | 1.531e+03 (4.151e+02) + | 2.647e+02 (1.215e+02) - | 4.299e+02 (1.681e+02) = | 6.815e+02 (2.293e+02)
3 | 6.873e+01 (9.701e+00) + | 2.437e+01 (8.288e-01) + | 6.181e+02 (2.962e+01) + | 1.721e+01 (8.254e+00)
4 | 1.074e+01 (1.604e+00) + | 1.421e+00 (3.200e-01) + | 1.867e+00 (6.521e-01) + | 6.636e-01 (1.049e-01)
5 | 2.692e+05 (1.768e+05) + | 1.230e+03 (3.018e+02) + | 1.891e+02 (4.939e+01) - | 3.757e+02 (1.054e+02)
6 | 4.579e+02 (2.063e+02) + | 7.375e+00 (7.963e+00) + | 1.594e+02 (5.865e+01) + | 1.632e+00 (2.585e-01)
7 | 6.508e+04 (5.798e+04) + | 4.565e+02 (2.158e+02) + | 1.005e+02 (4.884e+01) - | 2.374e+02 (8.577e+01)
8 | 1.084e+02 (1.010e+01) + | 4.589e+02 (1.463e+02) + | 1.000e+02 (2.272e-07) + | 7.201e+01 (4.043e+01)
9 | 4.505e+02 (1.856e+01) + | 4.049e+02 (1.659e+00) + | 2.112e+02 (9.651e+01) + | 1.067e+02 (2.494e+01)
10 | 4.185e+02 (1.358e+01) + | 4.063e+02 (5.418e-03) + | 4.024e+02 (5.840e+00) = | 4.028e+02 (5.095e+00)
Result | 10 vs. 0 | 8 vs. 2 | 5 vs. 2 | -
AR | 3.80 | 2.40 | 2.10 | 1.70
Table 2. Comparison with classic algorithms on 20-dimensional problems of CEC 2020. For each comparison algorithm, +, -, and = indicate that its result is significantly worse than, significantly better than, or statistically similar to that of HCFWA (Wilcoxon rank-sum test).

Func. | LoTFWA Mean (Std) | IPOP-CMA-ES Mean (Std) | SHADE Mean (Std) | HCFWA Mean (Std)
1 | 1.63e+06 (4.05e+05) + | 0.00e+00 (0.00e+00) - | 0.00e+00 (0.00e+00) - | 1.75e-05 (1.93e-06)
2 | 1.53e+03 (4.15e+02) + | 2.16e+03 (2.41e+01) + | 2.16e+01 (9.14e+00) - | 6.82e+02 (2.29e+02)
3 | 6.87e+01 (9.70e+00) + | 5.43e+01 (7.97e+00) + | 2.08e+01 (2.20e-01) = | 1.72e+01 (8.25e+00)
4 | 1.07e+01 (1.60e+00) + | 2.32e+00 (2.78e-01) + | 6.48e-01 (6.49e-02) = | 6.64e-01 (1.05e-01)
5 | 2.69e+05 (1.77e+05) + | 1.23e+03 (2.83e+02) + | 4.37e+01 (3.89e+01) - | 3.76e+02 (1.05e+02)
6 | 4.58e+02 (2.06e+02) + | 4.91e+02 (2.19e+00) + | 2.07e+00 (2.12e-01) + | 1.63e+00 (2.59e-01)
7 | 6.51e+04 (5.80e+04) + | 7.18e+02 (2.10e+02) + | 1.50e+00 (9.57e-01) - | 2.37e+02 (8.58e+01)
8 | 1.08e+02 (1.01e+01) + | 2.48e+03 (1.85e+02) + | 1.00e+02 (0.00e+00) + | 7.20e+01 (4.04e+01)
9 | 4.51e+02 (1.86e+01) + | 4.32e+02 (1.48e+00) + | 4.07e+02 (2.19e+00) + | 1.07e+02 (2.49e+01)
10 | 4.19e+02 (1.36e+01) + | 4.30e+02 (4.55e-01) + | 4.06e+02 (6.97e-03) + | 4.03e+02 (5.10e+00)
Result | 10 vs. 0 | 9 vs. 1 | 4 vs. 4 | -
AR | 3.60 | 3.25 | 1.55 | 1.60
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
