Article

Running-Time Analysis of Brain Storm Optimization Based on Average Gain Model

Guizhen Mai, Fangqing Liu, Yinghan Hong, Dingrong Liu, Junpeng Su, Xiaowei Yang and Han Huang
1 School of Physics and Electronic Engineering, Hanshan Normal University, Chaozhou 521041, China
2 School of Software Engineering, South China University of Technology, Guangzhou 510006, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(2), 117; https://doi.org/10.3390/biomimetics9020117
Submission received: 24 November 2023 / Revised: 24 January 2024 / Accepted: 31 January 2024 / Published: 15 February 2024
(This article belongs to the Special Issue Bioinspired Algorithms)

Abstract

The brain storm optimization (BSO) algorithm has received increasing attention in the field of evolutionary computation. While BSO has been applied in numerous industrial scenarios due to its effectiveness and accessibility, there are few theoretical results about its running time. Running-time analysis can be conducted by estimating the upper bounds of the expected first hitting time to evaluate the efficiency of BSO. This study estimates the upper bounds of the expected first hitting time of six single individual BSO variants (BSOs with one individual) based on the average gain model. The theoretical analysis indicates the following results. (1) The time complexity of the six BSO variants is O(√n) on equal coefficient linear functions regardless of the presence or absence of the disrupting operator, where n is the number of dimensions. Moreover, the coefficients of the upper bounds on the expected first hitting time show that the single individual BSOs with the disrupting operator require fewer iterations to obtain the target solution than the single individual BSOs without the disrupting operator. (2) The upper bounds on the expected first hitting time of single individual BSOs with the standard normally distributed mutation operator are lower than those of BSOs with the uniformly distributed mutation operator. (3) The upper bounds on the expected first hitting time of single individual BSOs with the U(−1/2, 1/2) mutation operator are approximately twice those of BSOs with the U(−1, 1) mutation operator. The corresponding numerical results are consistent with the theoretical analysis.

1. Introduction

Swarm intelligence algorithms are nature-inspired optimization algorithms that simulate the behavior of biological groups in nature [1,2,3]. Over the past two decades, many different types of swarm intelligence algorithms have been proposed, such as particle swarm optimization (PSO) [4,5], ant colony optimization (ACO) [6,7], artificial bee colony (ABC) [8,9], and brain storm optimization (BSO) [10,11,12]. Different from traditional methods, these algorithms solve problems by simulating the behavior of animal or human groups, with higher flexibility and adaptability.
BSO, a novel swarm intelligence algorithm, is inspired by the human brainstorming process. It is a continuous evolutionary algorithm that simulates the collective behavior of human beings. In recent years, BSOs have seen various practical applications in power systems [13,14,15,16], aviation design [17,18,19], mobile robot path planning [20,21], antenna design [22], financial optimization [23,24,25], and many other fields [26,27,28,29,30].
In addition, the theoretical analysis of BSO is also very important, especially for the practical application of BSO. Theoretical analysis helps researchers understand the mechanism of the algorithm and guides its design, improvement, and application in practice. Theoretical analysis can be divided into convergence analysis and running-time analysis. Zhou et al. [31], Qiao et al. [32], and Zhang et al. [33] have performed corresponding convergence analyses of BSO. The BSO–CAPSO algorithm, proposed by Michaloglou and Tsitsas [1], effectively enhances the computational efficiency of BSO through hybridization with chaotic accelerated particle swarm optimization. However, there are few works on the running-time analysis of BSO.
Some theoretical methods have been proposed as general analysis tools to investigate the running time of random heuristic algorithms, including the fitness level method [34], drift analysis method [35], switch analysis method [36], wave model [37], etc. These methods are mainly used to analyze discrete random heuristic algorithms. In contrast, fewer theoretical analysis results have been obtained for continuous random heuristic algorithms [38,39,40]. However, a large number of practical application problems are continuous. Therefore, the running time of continuous random heuristic algorithms has important research significance. To analyze the running time of continuous random heuristic algorithms, Huang et al. [41] proposed the average gain model.
Huang et al. [41,42] and Zhang et al. [43] used the average gain model to evaluate the expected first hitting time of the (1+1) evolutionary algorithm ((1+1) EA), the evolution strategy (ES), and the covariance matrix adaptation evolution strategy (CMA-ES). The first hitting time refers to the minimum number of iterations required before the algorithm finds an optimal solution [35]. The expected first hitting time represents the average number of iterations needed to find the optimal solution, which reflects the average time complexity of the algorithm [44]. Wang [45] employed the average gain model to analyze the computational efficiency of a proposed swarm intelligence algorithm and provided theoretical evidence for its effectiveness. Therefore, the expected first hitting time is a core metric in running-time analysis. Based on the average gain model, the expected first hitting time of BSO is analyzed in depth in this paper.
The core of BSO consists of three key components: clustering, disrupting, and updating. The mutation operator plays an important role in BSO and is included in both the disrupting operation and the updating operation. Specifically, the mutation operator helps the algorithm jump out of locally optimal solutions and further explore a broader search space by introducing random factors into the search process. For example, Zhan et al. [46] proposed a modified BSO (MBSO) in which the mutation operator employs a novel idea difference strategy (IDS). This strategy takes advantage of the differences among the ideas of individuals in the group and increases the diversity of the group by introducing random factors, thus increasing the probability of the algorithm finding the globally optimal solution. In addition, El-Abd [47] improved the step-size equation of the mutation operator and improved the performance of BSO by adjusting the step size and distribution. This improvement helps the algorithm to better balance local search and global search, so that it can find the globally optimal solution more effectively when solving complex optimization problems.
The time complexity of the single individual BSO is analyzed in this paper, following a research process from simple to complex. The single individual BSO without the disrupting operation is the same as the (1+1) ES [41]. However, the corresponding results can explain the influence of the mutation operator and the disrupting operator on the time complexity of BSO. In this paper, we choose the three most classic and representative distributions, N(0,1), U(−1/2, 1/2), and U(−1, 1), as the analysis objects for the mutation operator. Therefore, six BSO variants are obtained as the analyzed algorithms based on the combination of the three mutation operators and the presence or absence of the disrupting operator.
The remainder of this paper is organized as follows. Section 2 introduces the process of BSO and the mathematical model of running-time analysis for BSO. Section 3 provides the theoretical analysis results for the running-time analysis of three different BSO variants. Section 4 presents the corresponding experimental results to evaluate the theoretical results. Finally, Section 5 concludes the paper.

2. Mathematical Model for Running-Time Analysis of BSO

2.1. Brain Storm Optimization

BSO was proposed by Shi [48,49] in 2011, and it is simple in concept and easy to implement. The procedure of BSO is summarized in Algorithm 1.
In Steps 4 and 5, a new solution is generated by x̃ = x + Δ, where x is the original individual, x̃ is the newly generated individual, and Δ is a vector generated according to the mutation operator. In this paper, we focus on the running time of BSO with three different mutation operators. The same mutation operator is used to generate new individuals in both Steps 4 and 5. The superposition of different mutation operators is not considered in this work.
To accurately observe the effects of the disrupting and updating operations on the running time of BSO, we select a single individual form of BSO as the analyzed object. The single individual BSO framework simplifies the effect of the population size, which helps to evaluate the effect of the disrupting and updating operations on the running time. Furthermore, following the principle from simple to difficult, the single individual BSO is a suitable starting point for the running-time analysis of BSO. Moreover, the randomness of Δ and the design of the operations in Steps 4 and 5 are derived from the mutation operator design of evolutionary programming [50,51]. Therefore, the conclusion of this analysis will have positive implications for the study of similar mutation operators in evolutionary programming algorithms.

2.2. Stochastic Process Model of BSO

BSO can be represented as a stochastic process. In this section, we introduce the terminology for the analysis of the running time of BSO.
Definition 1 
(Hill-climbing task). Given a search space S ⊆ R^n and a fitness function f : S → R, the hill-climbing task is to find a solution x* ∈ S whose fitness reaches the target value H, i.e., f(x*) ≥ H.
In this paper, we focus on analyzing the BSO running time in the hill-climbing task of a continuous search space.
Definition 2 
(State of BSO). The state of BSO at the t-th (t = 0, 1, ...) iteration is defined as P_t = {ξ_1^t, ξ_2^t, ..., ξ_λ^t}, where λ is the size of the population, and ξ_1^t, ξ_2^t, ..., ξ_λ^t ∈ S.
Definition 3 
(State space of BSO). The set of all possible BSO states is called the state space of BSO, denoted as
$$\Omega = S^{\lambda} = \{(\xi_1, \xi_2, \ldots, \xi_\lambda) \mid \xi_k \in S,\; k = 1, \ldots, \lambda\}.$$
An optimization problem is a mapping from a decision space to an objective space, and the state space of BSO represents the corresponding decision space.
Definition 4 
(Expected first hitting time). Let {X_t}_{t=0}^{∞} be a stochastic process, where X_t ≥ 0 holds for any t ≥ 0. Suppose that X_t is the Euclidean distance of the t-th iteration state of BSO to the target solution and that the target threshold satisfies ε > 0. The first hitting time [35] of the ε-approximation solution can be defined by
$$T_{\varepsilon} = \min\{t \ge 0 : X_t \le \varepsilon\}.$$
Therefore, the expected first hitting time [44] of BSO can be denoted by E(T_ε | X_0).
The expected first hitting time refers to the average number of iterations required for the BSO algorithm to reach the target fitness value. This metric measures the performance of an algorithm more accurately because it takes the probability distribution of the algorithm over different iterations into account. Through this metric, we can evaluate the average time complexity of the algorithm in finding the optimal solution and thus better understand its efficiency.

2.3. Running-Time Analysis of BSO Based on Average Gain Model

Inspired by drift analysis [52] and the idea of Riemann integrals [53], Huang et al. [41] proposed the average gain model. Zhang et al. [43] extended this model by introducing the concepts of the supermartingale and the stopping time. Based on these earlier results [41,43] on the average gain model, Huang et al. [42] proposed an experimental method to estimate the running time of continuous evolutionary algorithms.
The expected one-step variation
$$\delta_t = E(X_t - X_{t+1} \mid H_t), \quad t \ge 0,$$
is called the average gain, where {X_t}_{t=0}^{∞} is a stochastic process and H_t = σ(X_0, X_1, ..., X_t). The σ-algebra H_t contains all the events generated by X_0, X_1, ..., X_t; all the information observed from the original population to the t-th iteration is recorded in H_t.
Based on Definition 2, P_t = {ξ_1^t, ξ_2^t, ..., ξ_λ^t} is the state of BSO at the t-th iteration. The process of BSO solving the hill-climbing task is considered as the gradual evolution of the stochastic state from the initial population to a population that contains the optimal solution. Let f(P_t) = max{f(x) : x ∈ P_t} be the highest fitness value of the individuals in P_t. X_t is used to measure the distance of the current population to the target population: X_t = f* − f(P_t), where f* is the fitness value of the optimal solution. Obviously, {X_t}_{t=0}^{∞} is a non-negative stochastic process.
The state of BSO in the (t+1)-th iteration, P_{t+1}, only depends on P_t. In other words, the stochastic process {P_t}_{t=0}^{∞} can be modeled by a Markov chain [42]. Similarly, {X_t}_{t=0}^{∞} can also be regarded as a Markov chain. In this case, the average gain δ_t = E(X_t − X_{t+1} | H_t) can be simplified to δ_t = E(X_t − X_{t+1} | X_t). Based on Theorem 2 of [43], the expectation of T_ε for BSO can be estimated as follows.
Theorem 1. 
Suppose that {X_t}_{t=0}^{∞} is a stochastic process associated with BSO, where X_t ≥ 0 for all t ≥ 0. Let h : [0, A] → R^+ be a monotonically increasing and integrable function. If E(X_t − X_{t+1} | X_t) ≥ h(X_t) whenever X_t > ε > 0, then it holds for T_ε that
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{X_0} \frac{1}{h(x)}\,\mathrm{d}x.$$
Theorem 1 gives the upper bounds on the expected first hitting time of BSO based on the average gain model. The average gain δ_t plays a key role in analyzing the first hitting time T_ε of the ε-approximation solution for BSO. A higher average gain indicates a more efficient iteration of the optimization process.
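To make the use of Theorem 1 concrete, the following minimal Python sketch evaluates the right-hand side of the bound numerically. The constant gain lower bound c = 0.5, the initial distance X_0 = 100, and the threshold ε = 1e-8 are illustrative assumptions only; with a constant h, the bound reduces to 1 + (X_0 − ε)/c.

```python
from scipy.integrate import quad

def hitting_time_bound(x0, eps, h):
    """Numerically evaluate the bound of Theorem 1:
    E(T_eps | X_0) <= 1 + integral from eps to x0 of 1/h(x) dx."""
    integral, _ = quad(lambda x: 1.0 / h(x), eps, x0)
    return 1.0 + integral

c = 0.5  # assumed constant lower bound on the average gain
print(hitting_time_bound(x0=100.0, eps=1e-8, h=lambda x: c))  # about 201
```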

3. Running-Time Analysis of BSO Instances for Equal Coefficient Linear Functions

In this section, we present the theoretical analysis results based on the average gain model to analyze the expected first hitting time of BSO for equal coefficient linear functions. The running time of BSO with three different mutation operators is analyzed from the perspective of whether the disrupting operation exists. In this paper, we refer to the BSO without a disrupting operation as BSO-I, and the BSO with a disrupting operation as BSO-II.
On this basis, the equal coefficient linear functions are selected as the research object [54,55,56]. These functions are a form of basic continuous optimization problem whose function expression is as follows:
$$f(x_1, x_2, \ldots, x_n) = k(x_1 + x_2 + \cdots + x_n) = k\sum_{i=1}^{n} x_i,$$
where (x_1, x_2, ..., x_n) ∈ S. It is assumed that the search starts from the origin and that the target fitness value is set to na, where a > 0. The objective of optimizing the equal coefficient linear function is to find a solution x* ∈ S such that f(x*) ≥ na.
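For illustration, the optimization task of this section can be written down directly; the sketch below (Python, with n = 10, a = 10, and k = 1 taken from the experimental setup of Section 4) only defines the objective and the target value.

```python
import numpy as np

def equal_coefficient_linear(x, k=1.0):
    """Equal coefficient linear function f(x) = k * sum_i x_i."""
    return k * np.sum(x)

# Hill-climbing target: reach the fitness value n * a, starting from the origin.
n, a, k = 10, 10.0, 1.0
x0 = np.zeros(n)
print(equal_coefficient_linear(x0, k), n * a)  # 0.0 100.0
```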
Mutation operators that obey the Gaussian distribution and the uniform distribution are selected for the evaluation of BSO. The Gaussian distribution and the uniform distribution are common tools for the design of mutation operators [50,51,57], so it is representative to select these two distributions as research cases.

3.1. Case Study of BSO without Disrupting Operator

Since the single individual BSO is analyzed in this paper, λ is equal to 1 in the state P_t = {ξ_1^t, ξ_2^t, ..., ξ_λ^t} of BSO. The single individual BSO has only one individual, so ξ_1^t represents both the individual being optimized and the random state of the algorithm. The procedure of the single individual BSO without the disrupting operation (i.e., with Step 4 in Algorithm 1 ignored) is described as Algorithm 2.
Algorithm 1 Brain Storm Optimization (BSO)
1: Initialization: Randomly generate λ individuals (potential solutions) to form the initial population P = {ξ_1, ξ_2, ..., ξ_λ} and evaluate the λ individuals;
2: while the predetermined maximum number of iterations has not been reached do
3:    Clustering: Use clustering algorithms to divide the λ individuals into m clusters;
4:    Disrupting: With a certain probability, replace the central individual of a randomly selected cluster with a randomly generated new individual;
5:    Updating: Randomly choose one or two clusters to create a new individual;
      Compare the newly generated individual and the original individual with the same individual index, and save the better one as the new individual;
      Update the whole population; the offspring population is recorded as P = {ξ_1, ξ_2, ..., ξ_λ}; evaluate the individuals in P;
6: end while
7: Output the best solution discovered.
Here, x^t = (x_1^t, x_2^t, ..., x_n^t) ∈ S, t = 0, 1, ..., is the t-th generation of the algorithm, and
$$X_t = na - f(x^t) = na - k(x_1^t + x_2^t + \cdots + x_n^t)$$
is defined as the Euclidean distance of the t-th iteration to the optimal solution. The gain at the t-th iteration is given by
$$\eta_t = X_t - X_{t+1} = k(x_1^{t+1} + x_2^{t+1} + \cdots + x_n^{t+1}) - k(x_1^t + x_2^t + \cdots + x_n^t).$$

3.1.1. When z_i ∼ N(0, 1)

If the mutation operator obeys the standard normal distribution N ( 0 , 1 ) , the distribution function of η t is as presented by Lemma 1.
Lemma 1. 
For BSO-I, if its mutation operator z obeys N(0,1), the distribution function F(u) = P(η_t ≤ u) of the gain η_t is
$$F(u) = \begin{cases} 0, & u < 0, \\ \frac{1}{2}, & u = 0, \\ \frac{1}{\sqrt{2\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{t^2}{2nk^2}}\,\mathrm{d}t, & u > 0. \end{cases}$$
Proof. 
According to Step 4 and Step 5 of Algorithm 2, the (t+1)-th individual is
$$x^{t+1} = \begin{cases} x^t, & f(y^t) \le f(x^t), \\ y^t, & f(y^t) > f(x^t). \end{cases}$$
Algorithm 2 BSO-I
1: Initialization: Randomly generate an individual x = (x_1, x_2, ..., x_n) ∈ R^n based on the uniform distribution;
2: while the stopping criterion is not satisfied do
3:    y = x + z, where z is the mutation operator;
4:    if y adapts better than x then
5:       replace x with y;
6:    end if
7: end while
Output: x
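A minimal Python sketch of Algorithm 2 (BSO-I) with the N(0,1) mutation operator on the equal coefficient linear function is given below; it returns the first hitting time of the target fitness na. The defaults a = 10, k = 1, and ε = 1e-8 mirror the setup of Section 4 and are otherwise assumptions of this sketch.

```python
import numpy as np

def bso_i(n, k=1.0, a=10.0, eps=1e-8, rng=None):
    """Sketch of BSO-I (Algorithm 2) with z ~ N(0,1) mutation on
    f(x) = k * sum(x); returns the first hitting time of n * a."""
    rng = np.random.default_rng() if rng is None else rng
    f = lambda v: k * np.sum(v)
    x = np.zeros(n)                     # start from the origin
    t = 0
    while n * a - f(x) > eps:           # X_t = n*a - f(x^t) > eps
        y = x + rng.standard_normal(n)  # mutation step y = x + z
        if f(y) > f(x):                 # keep the better individual
            x = y
        t += 1
    return t

print(bso_i(n=10))  # one run; Table 2 averages 300 such runs
```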
According to the definition of η_t, where t = 0, 1, ...:
(1) If f(y^t) ≤ f(x^t),
$$\eta_t = k(x_1^{t+1} + \cdots + x_n^{t+1}) - k(x_1^t + \cdots + x_n^t) = k(x_1^t + \cdots + x_n^t) - k(x_1^t + \cdots + x_n^t) = 0.$$
(2) If f(y^t) > f(x^t),
$$\eta_t = k(y_1^t + \cdots + y_n^t) - k(x_1^t + \cdots + x_n^t) = k[(y_1^t - x_1^t) + \cdots + (y_n^t - x_n^t)] = k(z_1^t + \cdots + z_n^t) = f(z^t).$$
Since z_i ∼ N(0,1) and z_1, ..., z_n are independent of each other, all of the z_i satisfy the additivity of the normal distribution, so f(z^t) obeys N(0, nk²).
Hence, the distribution function of η_t is F(u) = P(η_t ≤ u):
(1) If u < 0, then F(u) = 0, since η_t ≥ 0 by definition.
(2) If u = 0, the probability density function of N(0, nk²) is symmetric about the y-axis, so F(u) = P(η_t ≤ u) = P(η_t = 0) = 1/2.
(3) If u > 0, $F(u) = P(\eta_t \le u) = \frac{1}{\sqrt{2\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{t^2}{2nk^2}}\,\mathrm{d}t$.
Lemma 1 holds.    □
Theorem 2 is presented based on the above proof.
Theorem 2. 
If the mutation operator z of BSO-I obeys N(0,1), the upper bound on the expected first hitting time to reach the target fitness value na is derived as follows:
$$E(T_{\varepsilon} \mid X_0) \le 1 + \frac{\sqrt{2\pi n}\,a}{k} - \frac{\sqrt{2\pi}\,\varepsilon}{\sqrt{n}\,k}.$$
Proof. 
$$E(X_t - X_{t+1} \mid X_t) = E(\eta_t \mid X_t) = \int_{-\infty}^{+\infty} u\,\mathrm{d}F(u) = \int_{0}^{+\infty} \frac{u}{\sqrt{2\pi n}\,k}\, e^{-\frac{u^2}{2nk^2}}\,\mathrm{d}u = \frac{k\sqrt{n}}{\sqrt{2\pi}}.$$
It is assumed that the algorithm starts from the origin at initialization, where x^0 = (0, 0, ..., 0), i.e.,
$$X_0 = na - f(x^0) = na - k(0 + 0 + \cdots + 0) = na.$$
According to Theorem 1, the upper bound on the expected first hitting time is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{na} \frac{\sqrt{2\pi}}{k\sqrt{n}}\,\mathrm{d}x = 1 + \frac{\sqrt{2\pi n}\,a}{k} - \frac{\sqrt{2\pi}\,\varepsilon}{\sqrt{n}\,k}.$$
Theorem 2 holds.    □
Theorem 2 indicates that, for BSO-I, if its mutation operator z obeys N(0,1), the computational time complexity of BSO-I for the equal coefficient linear function is E(T_ε|X_0) = O(√n).
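The average gain used in the proof of Theorem 2 can be checked by Monte Carlo sampling. The sketch below, with assumed values n = 10 and k = 1, compares the empirical mean of the one-step gain max{k·Σz_i, 0} with the value k√n/√(2π) derived from Lemma 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, samples = 10, 1.0, 10**6

# One-step gain of BSO-I: eta = max(k * sum(z), 0) with z_i ~ N(0,1),
# since a new point is accepted only when it improves the fitness.
s = k * rng.standard_normal((samples, n)).sum(axis=1)
empirical_gain = np.maximum(s, 0.0).mean()
theoretical_gain = k * np.sqrt(n) / np.sqrt(2 * np.pi)
print(empirical_gain, theoretical_gain)  # both approximately 1.26
```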

3.1.2. When z_i ∼ U(−1/2, 1/2)

The uniform distribution does not satisfy additivity as the normal distribution does. The Lindeberg–Levy central limit theorem [58] provides an approach to finding the distribution of η_t. The Lindeberg–Levy central limit theorem is introduced below.
Suppose that X_1, X_2, ... is a sequence of independent and identically distributed random variables with E(X_i) = μ and Var(X_i) = σ² > 0; let
$$Y_n^* = \frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sigma\sqrt{n}},$$
then
$$\lim_{n\to\infty} P(Y_n^* \le y) = \Phi(y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{y} e^{-\frac{t^2}{2}}\,\mathrm{d}t$$
is satisfied for any real number y.
The Lindeberg–Levy central limit theorem [58] shows that if n is sufficiently large, Y_n^* approximately obeys N(0,1), so ∑_{i=1}^{n} X_i approximately obeys N(nμ, nσ²). Generally, the case of higher dimensions requires more attention in the study of the computational time complexity of algorithms. If the mutation operator obeys U(−1/2, 1/2), the distribution function of η_t can be represented by Lemma 2.
Lemma 2. 
For BSO-I, if its mutation operator z obeys U(−1/2, 1/2), the distribution function F(u) = P(η_t ≤ u) of the gain η_t is
$$F(u) = \begin{cases} 0, & u < 0, \\ \frac{1}{2}, & u = 0, \\ \frac{\sqrt{6}}{\sqrt{\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{6t^2}{nk^2}}\,\mathrm{d}t, & u > 0. \end{cases}$$
Proof. 
According to the definition of η_t, where t = 0, 1, ...:
(1) If f(y^t) ≤ f(x^t), η_t = 0.
(2) If f(y^t) > f(x^t), η_t = k(z_1^t + z_2^t + ... + z_n^t), where z_i ∼ U(−1/2, 1/2) and z_1, ..., z_n are independent of each other. According to the Lindeberg–Levy central limit theorem, η_t approximately obeys N(0, (1/12)nk²).
Hence, the distribution function of η_t is F(u) = P(η_t ≤ u):
(1) If u < 0, F(u) = 0.
(2) If u = 0, F(u) = P(η_t = 0) = 1/2.
(3) If u > 0, $F(u) = \frac{\sqrt{6}}{\sqrt{\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{6t^2}{nk^2}}\,\mathrm{d}t$.
Lemma 2 holds.    □
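The normal approximation invoked in the proof of Lemma 2 is easy to check numerically; in the sketch below, the values n = 100, k = 1, and 100,000 samples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, samples = 100, 1.0, 100_000

# Sum of n i.i.d. U(-1/2, 1/2) mutation components: by the Lindeberg-Levy
# central limit theorem it is approximately N(0, n * k**2 / 12).
s = k * rng.uniform(-0.5, 0.5, size=(samples, n)).sum(axis=1)
print(s.mean(), s.var())        # approximately 0 and n/12 = 8.33
print(np.maximum(s, 0).mean())  # approx. k*sqrt(n)/(2*sqrt(6*pi)) = 1.15
```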
Theorem 3 is presented based on the above proof.
Theorem 3. 
For BSO-I, if its mutation operator z obeys U(−1/2, 1/2), the upper bound on the expected first hitting time to reach the target fitness value na is derived as follows:
$$E(T_{\varepsilon} \mid X_0) \le 1 + \frac{2\sqrt{6\pi n}\,a}{k} - \frac{2\sqrt{6\pi}\,\varepsilon}{\sqrt{n}\,k}.$$
The proof of this theorem is based on the same principle as Theorem 2. The detailed derivation is given as follows.
Proof. 
According to Lemma 2, we have
$$E(X_t - X_{t+1} \mid X_t) = E(\eta_t \mid X_t) = \int_{-\infty}^{+\infty} u\,\mathrm{d}F(u) = \int_{0}^{+\infty} \frac{\sqrt{6}\,u}{\sqrt{\pi n}\,k}\, e^{-\frac{6u^2}{nk^2}}\,\mathrm{d}u = \frac{k\sqrt{n}}{2\sqrt{6\pi}},$$
and the algorithm starts from the origin at initialization, x^0 = (0, 0, ..., 0), i.e., X_0 = na. According to Theorem 1, the upper bound on the expected first hitting time is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{na} \frac{2\sqrt{6\pi}}{k\sqrt{n}}\,\mathrm{d}x = 1 + \frac{2\sqrt{6\pi n}\,a}{k} - \frac{2\sqrt{6\pi}\,\varepsilon}{\sqrt{n}\,k}. \qquad \square$$
Theorem 3 indicates that if the mutation operator z of BSO-I obeys U(−1/2, 1/2), its computational time complexity for the equal coefficient linear function is E(T_ε|X_0) = O(√n).

3.1.3. When z_i ∼ U(−1, 1)

If the mutation operator obeys U(−1, 1), the distribution function of η_t can be represented by Lemma 3.
Lemma 3. 
For BSO-I, if its mutation operator z obeys U(−1, 1), the distribution function F(u) = P(η_t ≤ u) of the gain η_t is
$$F(u) = \begin{cases} 0, & u < 0, \\ \frac{1}{2}, & u = 0, \\ \frac{\sqrt{3}}{\sqrt{2\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{3t^2}{2nk^2}}\,\mathrm{d}t, & u > 0. \end{cases}$$
The proof of this lemma is based on the same principle as Lemma 2.
Proof. 
According to the definition of η_t, t = 0, 1, ...:
(1) If f(y^t) ≤ f(x^t), η_t = 0.
(2) If f(y^t) > f(x^t), η_t = f(z^t).
Since z_i ∼ U(−1, 1) and z_1, ..., z_n are independent of each other, according to the Lindeberg–Levy central limit theorem, f(z^t) approximately obeys N(0, (1/3)nk²).
Hence, the distribution function of η_t, F(u) = P(η_t ≤ u), is
(1) If u < 0, F(u) = 0.
(2) If u = 0, F(u) = P(η_t = 0) = 1/2.
(3) If u > 0, $F(u) = \frac{\sqrt{3}}{\sqrt{2\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{3t^2}{2nk^2}}\,\mathrm{d}t$.
   □
According to Lemma 3 and Theorem 1, Theorem 4 can be inferred.
Theorem 4. 
For BSO-I, if its mutation operator z obeys U(−1, 1), the upper bound on the expected first hitting time to reach the target fitness value na is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \frac{\sqrt{6\pi n}\,a}{k} - \frac{\sqrt{6\pi}\,\varepsilon}{\sqrt{n}\,k}.$$
The proof of this theorem is based on the same principle as Theorem 2. The proof is given as follows.
Proof. 
According to Lemma 3, we have
$$E(X_t - X_{t+1} \mid X_t) = E(\eta_t \mid X_t) = \int_{-\infty}^{+\infty} u\,\mathrm{d}F(u) = \int_{0}^{+\infty} \frac{\sqrt{3}\,u}{\sqrt{2\pi n}\,k}\, e^{-\frac{3u^2}{2nk^2}}\,\mathrm{d}u = \frac{k\sqrt{n}}{\sqrt{6\pi}},$$
and the algorithm starts from the origin at initialization, x^0 = (0, 0, ..., 0), i.e., X_0 = na. According to Theorem 1, the upper bound on the expected first hitting time is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{na} \frac{\sqrt{6\pi}}{k\sqrt{n}}\,\mathrm{d}x = 1 + \frac{\sqrt{6\pi n}\,a}{k} - \frac{\sqrt{6\pi}\,\varepsilon}{\sqrt{n}\,k}. \qquad \square$$
Theorem 4 indicates that if the mutation operator z of BSO-I obeys U(−1, 1), its computational time complexity for the equal coefficient linear function is E(T_ε|X_0) = O(√n).
The time complexity of BSO-I with the three different mutation operators is O(√n). In the next section, we discuss the running time of BSO for the case with a disrupting operation.

3.2. Case Study of BSO with Disrupting Operator

Based on the average gain model, this section analyzes the upper bounds of the expected first hitting time in three BSO-II cases. When the disrupting operation is added to the single individual BSO, the algorithm can be simplified as follows (Algorithm 3).
In Algorithm 3, the disrupting operations, which are executed with a small probability, are shown in Steps 3 to 6. Let A = {p | p < P_b} indicate that replacement occurs, while Ā = {p | p ≥ P_b} indicates that no replacement occurs, where p is the randomly generated value and P_b is the pre-determined probability.
Algorithm 3 BSO-II
1: Initialization: Randomly generate an individual x = (x_1, x_2, ..., x_n) ∈ R^n based on the uniform distribution;
2: while the stopping criterion is not satisfied do
3:    Randomly generate a value p from 0 to 1 based on the uniform distribution;
4:    if p is smaller than a pre-determined probability P_b then
5:       replace x with a randomly generated individual b = (b_1, b_2, ..., b_n) based on the uniform distribution;
6:    end if
7:    if p < P_b then
8:       y = b + z, where z is the mutation operator;
9:    else
10:      y = x + z;
11:   end if
12:   if y has better fitness than x then
13:      replace x with y;
14:   end if
15: end while
Output: x
To highlight the effect of each mutation operator on the algorithm, we choose the same mutation operators as in BSO-I in Steps 5 and 8 to generate new individuals. As a result, b_i = x_i + Δx_i, where Δx_i and the mutation operator z_i follow the same distribution.
Here, x^t = (x_1^t, x_2^t, ..., x_n^t) ∈ S is still the t-th individual of the algorithm. We have X_t = na − f(x^t) = na − k(x_1^t + x_2^t + ... + x_n^t), and the corresponding gain at iteration t is given by η_t = X_t − X_{t+1} = k(x_1^{t+1} + ... + x_n^{t+1}) − k(x_1^t + ... + x_n^t).
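A minimal Python sketch of Algorithm 3 (BSO-II) with the N(0,1) mutation operator is shown below. The disrupting probability P_b = 0.2 follows [59]; the disrupted individual is built as b = x + Δx with Δx drawn from the same distribution as z, as described above. The remaining defaults mirror Section 4 and are assumptions of this sketch.

```python
import numpy as np

def bso_ii(n, k=1.0, a=10.0, eps=1e-8, p_b=0.2, rng=None):
    """Sketch of BSO-II (Algorithm 3) with z ~ N(0,1) mutation on
    f(x) = k * sum(x); returns the first hitting time of n * a."""
    rng = np.random.default_rng() if rng is None else rng
    f = lambda v: k * np.sum(v)
    x = np.zeros(n)
    t = 0
    while n * a - f(x) > eps:
        if rng.random() < p_b:                 # disrupting with probability P_b
            base = x + rng.standard_normal(n)  # b = x + dx, dx ~ N(0,1)^n
        else:
            base = x
        y = base + rng.standard_normal(n)      # mutation y = b + z or x + z
        if f(y) > f(x):                        # keep the better individual
            x = y
        t += 1
    return t

print(bso_ii(n=10))  # one run; Table 2 averages 300 such runs
```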

3.2.1. When z_i ∼ N(0, 1)

(1)
If p ≥ P_b, the case is the same as that without the disrupting operation in Section 3.1, and the average gain is
$$E(X_t - X_{t+1} \mid X_t, \bar{A}) = E(\eta_t \mid X_t, \bar{A}) = \frac{k\sqrt{n}}{\sqrt{2\pi}}.$$
(2)
If p < P_b and the mutation operator obeys N(0,1), the distribution function of η_t is represented by Lemma 4.
Lemma 4. 
For BSO-II, if its mutation operator z obeys N(0,1) and p < P_b, the distribution function F(u) = P(η_t ≤ u) of the gain η_t is
$$F(u) = \begin{cases} 0, & u < 0, \\ \frac{1}{2}, & u = 0, \\ \frac{1}{2k\sqrt{\pi n}}\int_{-\infty}^{u} e^{-\frac{t^2}{4nk^2}}\,\mathrm{d}t, & u > 0. \end{cases}$$
Proof. 
According to Step 12 and Step 13 of Algorithm 3, the (t+1)-th individual is
$$x^{t+1} = \begin{cases} x^t, & f(y^t) \le f(x^t), \\ y^t, & f(y^t) > f(x^t). \end{cases}$$
According to the definition of η_t, t = 0, 1, ...:
(1) If f(y^t) ≤ f(x^t), η_t = 0.
(2) If f(y^t) > f(x^t),
$$\eta_t = k(x_1^{t+1} + \cdots + x_n^{t+1}) - k(x_1^t + \cdots + x_n^t) = k[(b_1^t + \cdots + b_n^t) + (z_1^t + \cdots + z_n^t)] - k(x_1^t + \cdots + x_n^t) = k[(\Delta x_1^t + \cdots + \Delta x_n^t) + (z_1^t + \cdots + z_n^t)].$$
Since z_i ∼ N(0,1), Δx_i ∼ N(0,1), z_1, ..., z_n are independent of each other, and Δx_1, Δx_2, ..., Δx_n are also independent of each other, all of the z_i and Δx_i satisfy the additivity of the normal distribution. As a result, η_t obeys N(0, 2nk²).
Hence, the distribution function of η_t is F(u) = P(η_t ≤ u):
(1) If u < 0, F(u) = 0.
(2) If u = 0, F(u) = P(η_t ≤ u) = P(η_t = 0) = 1/2.
(3) If u > 0, $F(u) = P(\eta_t \le u) = \frac{1}{2k\sqrt{\pi n}}\int_{-\infty}^{u} e^{-\frac{t^2}{4nk^2}}\,\mathrm{d}t$.
Lemma 4 holds. □
Based on the above proofs, we can conclude that
$$E(X_t - X_{t+1} \mid X_t, A) = E(\eta_t \mid X_t, A) = \int_{0}^{+\infty} \frac{u}{2k\sqrt{\pi n}}\, e^{-\frac{u^2}{4nk^2}}\,\mathrm{d}u = \frac{k\sqrt{n}}{\sqrt{\pi}}.$$
Assume that the probability P_b is equal to 0.2 [59]; Theorem 5 can be obtained according to Theorem 1.
Theorem 5. 
For BSO-II, if its mutation operator z obeys N(0,1), the upper bound on the expected first hitting time to reach the target fitness value na is derived as follows:
$$E(T_{\varepsilon} \mid X_0) \le 1 + \frac{5\sqrt{2\pi n}\,a}{(4+\sqrt{2})\,k} - \frac{5\sqrt{2\pi}\,\varepsilon}{(4+\sqrt{2})\sqrt{n}\,k}.$$
Proof. 
$$E(X_t - X_{t+1} \mid X_t) = P(\bar{A})\,E(X_t - X_{t+1} \mid X_t, \bar{A}) + P(A)\,E(X_t - X_{t+1} \mid X_t, A) \;(\text{full expectation formula}) = (1-0.2)\times\frac{k\sqrt{n}}{\sqrt{2\pi}} + 0.2\times\frac{k\sqrt{n}}{\sqrt{\pi}} = \frac{4k\sqrt{n} + \sqrt{2}\,k\sqrt{n}}{5\sqrt{2\pi}}.$$
The algorithm starts from the origin at initialization, x^0 = (0, 0, ..., 0), i.e., X_0 = na. According to Theorem 1, the upper bound on the expected first hitting time is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{na} \frac{5\sqrt{2\pi}}{(4+\sqrt{2})\,k\sqrt{n}}\,\mathrm{d}x = 1 + \frac{5\sqrt{2\pi n}\,a}{(4+\sqrt{2})\,k} - \frac{5\sqrt{2\pi}\,\varepsilon}{(4+\sqrt{2})\sqrt{n}\,k}.$$
Theorem 5 holds. □
According to the proof of Theorem 5, if the mutation operator z of BSO-II obeys N(0,1), its computational time complexity for the equal coefficient linear function is E(T_ε|X_0) = O(√n).
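The combined average gain used in the proof of Theorem 5 can also be verified by sampling. The sketch below, with assumed values n = 10 and k = 1, mixes the gains of the non-disrupted and disrupted branches with weights 0.8 and 0.2 and compares the result with (4+√2)k√n/(5√(2π)).

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p_b, samples = 10, 1.0, 0.2, 500_000

# Gains of the two branches of BSO-II with N(0,1) mutation:
#   no disruption: eta = max(k * sum(z), 0),      z ~ N(0,1)^n
#   disruption:    eta = max(k * sum(dx + z), 0), dx, z ~ N(0,1)^n
gain_no = np.maximum(k * rng.standard_normal((samples, n)).sum(1), 0).mean()
gain_dis = np.maximum(k * rng.standard_normal((samples, 2 * n)).sum(1), 0).mean()

combined = (1 - p_b) * gain_no + p_b * gain_dis
theory = (4 + np.sqrt(2)) * k * np.sqrt(n) / (5 * np.sqrt(2 * np.pi))
print(combined, theory)  # both approximately 1.37
```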

3.2.2. When z_i ∼ U(−1/2, 1/2)

(1)
If p ≥ P_b, the result is the same as the case in Section 3.1 with no disrupting operation. The average gain is
$$E(X_t - X_{t+1} \mid X_t, \bar{A}) = E(\eta_t \mid X_t, \bar{A}) = \frac{k\sqrt{n}}{2\sqrt{6\pi}}.$$
(2)
If p < P_b and the mutation operator obeys U(−1/2, 1/2), the distribution function of η_t is represented by Lemma 5.
Lemma 5. 
If the mutation operator z of BSO-II obeys U(−1/2, 1/2) and p < P_b, the distribution function F(u) = P(η_t ≤ u) of the gain η_t is
$$F(u) = \begin{cases} 0, & u < 0, \\ \frac{1}{2}, & u = 0, \\ \frac{\sqrt{3}}{\sqrt{\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{3t^2}{nk^2}}\,\mathrm{d}t, & u > 0. \end{cases}$$
The proof of this lemma is based on the same principle as Lemma 4. The detailed derivation is given as follows.
Proof. 
According to the definition of η_t, t = 0, 1, ...:
(1) If f(y^t) ≤ f(x^t), η_t = 0.
(2) If f(y^t) > f(x^t), η_t = k[(Δx_1^t + ... + Δx_n^t) + (z_1^t + ... + z_n^t)].
Since z_i ∼ U(−1/2, 1/2), Δx_i ∼ U(−1/2, 1/2), z_1, ..., z_n are independent of each other, and Δx_1, Δx_2, ..., Δx_n are also independent of each other, η_t approximately obeys N(0, (1/6)nk²) according to the Lindeberg–Levy central limit theorem.
Hence, the distribution function of η_t, F(u) = P(η_t ≤ u), is
(1) If u < 0, F(u) = 0.
(2) If u = 0, F(u) = P(η_t ≤ u) = P(η_t = 0) = 1/2.
(3) If u > 0, $F(u) = P(\eta_t \le u) = \frac{\sqrt{3}}{\sqrt{\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{3t^2}{nk^2}}\,\mathrm{d}t$.    □
Theorem 6 can be concluded based on Lemma 5 and Theorem 1.
Theorem 6. 
If the mutation operator z of BSO-II obeys U(−1/2, 1/2), the upper bound on the expected first hitting time to reach the target fitness value na is derived as follows:
$$E(T_{\varepsilon} \mid X_0) \le 1 + \frac{10\sqrt{6\pi n}\,a}{(4+\sqrt{2})\,k} - \frac{10\sqrt{6\pi}\,\varepsilon}{(4+\sqrt{2})\sqrt{n}\,k}.$$
The proof of this theorem is based on the same principle as Theorem 5. The detailed derivation is presented as follows.
Proof. 
According to Lemma 5, we have
$$E(X_t - X_{t+1} \mid X_t, A) = E(\eta_t \mid X_t, A) = \int_{0}^{+\infty} \frac{\sqrt{3}\,u}{\sqrt{\pi n}\,k}\, e^{-\frac{3u^2}{nk^2}}\,\mathrm{d}u = \frac{k\sqrt{n}}{2\sqrt{3\pi}}.$$
Suppose that the probability P_b = 0.2; according to the full expectation formula, the following conclusion can be drawn:
$$E(X_t - X_{t+1} \mid X_t) = P(\bar{A})\,E(X_t - X_{t+1} \mid X_t, \bar{A}) + P(A)\,E(X_t - X_{t+1} \mid X_t, A) = 0.8\times\frac{k\sqrt{n}}{2\sqrt{6\pi}} + 0.2\times\frac{k\sqrt{n}}{2\sqrt{3\pi}} = \frac{4k\sqrt{n} + \sqrt{2}\,k\sqrt{n}}{10\sqrt{6\pi}}.$$
The algorithm starts from the origin at initialization, x^0 = (0, 0, ..., 0), i.e., X_0 = na. According to Theorem 1, the upper bound on the expected first hitting time is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{na} \frac{10\sqrt{6\pi}}{(4+\sqrt{2})\,k\sqrt{n}}\,\mathrm{d}x = 1 + \frac{10\sqrt{6\pi n}\,a}{(4+\sqrt{2})\,k} - \frac{10\sqrt{6\pi}\,\varepsilon}{(4+\sqrt{2})\sqrt{n}\,k}. \qquad \square$$
Theorem 6 indicates that, for BSO-II, if its mutation operator z obeys U(−1/2, 1/2), the computational time complexity of BSO-II for the equal coefficient linear function is E(T_ε|X_0) = O(√n).

3.2.3. When z_i ∼ U(−1, 1)

(1)
If p ≥ P_b, the result is the same as the case in Section 3.1 with no disrupting operation. The average gain is
$$E(X_t - X_{t+1} \mid X_t, \bar{A}) = E(\eta_t \mid X_t, \bar{A}) = \frac{k\sqrt{n}}{\sqrt{6\pi}}.$$
(2)
If p < P_b and the mutation operator obeys U(−1, 1), the distribution function of η_t is represented by Lemma 6.
Lemma 6. 
If the mutation operator z of BSO-II obeys U(−1, 1) and p < P_b, the distribution function F(u) = P(η_t ≤ u) of the gain η_t is
$$F(u) = \begin{cases} 0, & u < 0, \\ \frac{1}{2}, & u = 0, \\ \frac{\sqrt{3}}{2\sqrt{\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{3t^2}{4nk^2}}\,\mathrm{d}t, & u > 0. \end{cases}$$
The proof of Lemma 6 is based on the same principle as Lemma 4. The proof is given as follows, which is used to support the proof of Theorem 7.
Proof. 
According to the definition of η_t, t = 0, 1, ...:
(1) If f(y^t) ≤ f(x^t), η_t = 0.
(2) If f(y^t) > f(x^t), η_t = k[(Δx_1^t + ... + Δx_n^t) + (z_1^t + ... + z_n^t)].
Since z_i ∼ U(−1, 1), Δx_i ∼ U(−1, 1), z_1, ..., z_n are independent of each other, and Δx_1, Δx_2, ..., Δx_n are also independent of each other, η_t approximately obeys N(0, (2/3)nk²) according to the Lindeberg–Levy central limit theorem.
Hence, the distribution function of η_t, F(u) = P(η_t ≤ u), is
(1) If u < 0, F(u) = 0.
(2) If u = 0, F(u) = P(η_t ≤ u) = P(η_t = 0) = 1/2.
(3) If u > 0, $F(u) = P(\eta_t \le u) = \frac{\sqrt{3}}{2\sqrt{\pi n}\,k}\int_{-\infty}^{u} e^{-\frac{3t^2}{4nk^2}}\,\mathrm{d}t$.    □
Theorem 7. 
If the mutation operator z of BSO-II obeys U(−1, 1), the upper bound on the expected first hitting time to reach the target fitness value na is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \frac{5\sqrt{6\pi n}\,a}{(4+\sqrt{2})\,k} - \frac{5\sqrt{6\pi}\,\varepsilon}{(4+\sqrt{2})\sqrt{n}\,k}.$$
The proof of this theorem is based on the same principle as Theorem 5. The proof is given as follows.
Proof. 
According to Lemma 6, we have
$$E(X_t - X_{t+1} \mid X_t, A) = E(\eta_t \mid X_t, A) = \int_{0}^{+\infty} \frac{\sqrt{3}\,u}{2\sqrt{\pi n}\,k}\, e^{-\frac{3u^2}{4nk^2}}\,\mathrm{d}u = \frac{k\sqrt{n}}{\sqrt{3\pi}}.$$
Suppose that the probability P_b = 0.2; according to the full expectation formula, the following conclusion can be drawn:
$$E(X_t - X_{t+1} \mid X_t) = P(\bar{A})\,E(X_t - X_{t+1} \mid X_t, \bar{A}) + P(A)\,E(X_t - X_{t+1} \mid X_t, A) = 0.8\times\frac{k\sqrt{n}}{\sqrt{6\pi}} + 0.2\times\frac{k\sqrt{n}}{\sqrt{3\pi}} = \frac{4k\sqrt{n} + \sqrt{2}\,k\sqrt{n}}{5\sqrt{6\pi}}.$$
The algorithm starts from the origin at initialization, x^0 = (0, 0, ..., 0), i.e., X_0 = na. According to Theorem 1, the upper bound on the expected first hitting time is derived as
$$E(T_{\varepsilon} \mid X_0) \le 1 + \int_{\varepsilon}^{na} \frac{5\sqrt{6\pi}}{(4+\sqrt{2})\,k\sqrt{n}}\,\mathrm{d}x = 1 + \frac{5\sqrt{6\pi n}\,a}{(4+\sqrt{2})\,k} - \frac{5\sqrt{6\pi}\,\varepsilon}{(4+\sqrt{2})\sqrt{n}\,k}. \qquad \square$$
Theorem 7 indicates that if the mutation operator z of BSO-II obeys U(−1, 1), its computational time complexity for the equal coefficient linear function is E(T_ε|X_0) = O(√n).

3.3. Summary

Overall, we summarize the theoretical results of the running-time analysis of the single individual BSO in solving the n-dimensional equal coefficient linear function in six different situations. The theoretical results are shown in Table 1. The time complexity of the single individual BSO in these six cases is O(√n). However, the coefficients in the explicit expressions are different.
Table 1 shows the relationship between the expected first hitting time, the dimension n, the slope k, and the parameter a. The upper bounds on the expected first hitting time of BSO-II are lower than those of BSO-I. This means that the performance of BSO-II is better than that of BSO-I in solving the equal coefficient linear function; the disrupting operation in BSO helps to reduce the running time of the algorithm. Moreover, the upper bounds on the expected first hitting time of the algorithms using the standard normal distribution mutation operator are lower than those of the algorithms with the uniform distribution mutation operators. In addition, the upper bounds on the expected first hitting time of the algorithm using the U(−1/2, 1/2) mutation operator are approximately twice those of the algorithm using the U(−1, 1) mutation operator.
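The six upper bounds listed in Table 1 can be evaluated side by side for a given dimension. The short Python sketch below uses the parameters of Section 4 (a = 10, k = 1, ε = 1e-8); for n = 10 it reproduces, for example, the value 80.27 for BSO-I with N(0,1) and makes the roughly twofold gap between the two uniform mutation operators easy to inspect.

```python
import numpy as np

def table1_bounds(n, a=10.0, k=1.0, eps=1e-8):
    """Upper bounds of Table 1: 1 + (n*a - eps) / (average gain),
    where the average gain is k*sqrt(n) times the listed coefficient."""
    coeff = {
        ("BSO-I",  "N(0,1)"):      1 / np.sqrt(2 * np.pi),
        ("BSO-I",  "U(-1/2,1/2)"): 1 / (2 * np.sqrt(6 * np.pi)),
        ("BSO-I",  "U(-1,1)"):     1 / np.sqrt(6 * np.pi),
        ("BSO-II", "N(0,1)"):      (4 + np.sqrt(2)) / (5 * np.sqrt(2 * np.pi)),
        ("BSO-II", "U(-1/2,1/2)"): (4 + np.sqrt(2)) / (10 * np.sqrt(6 * np.pi)),
        ("BSO-II", "U(-1,1)"):     (4 + np.sqrt(2)) / (5 * np.sqrt(6 * np.pi)),
    }
    return {key: 1 + (n * a - eps) / (k * np.sqrt(n) * c) for key, c in coeff.items()}

for key, bound in table1_bounds(10).items():
    print(key, round(bound, 2))  # e.g. ('BSO-I', 'N(0,1)') 80.27
```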

4. Experimental Results

In Section 3, we obtain the theoretical analysis results of the expected first hitting time of single individual BSOs through the average gain model. To verify the correctness of the analysis results, numerical experiments are presented in this section.
As the number of samples increases, the arithmetic mean gradually approaches the true mathematical expectation according to Khinchine's law of large numbers [58]. Khinchine's law of large numbers is introduced as follows.
Suppose that X_1, X_2, ... is a sequence of independent and identically distributed random variables with E(X_i) = μ; then, for any ε > 0, the following equation holds:
$$\lim_{n\to\infty} P\left(\left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| < \varepsilon\right) = 1.$$
Khinchine's law of large numbers indicates that if the number of samples is sufficiently large, the mathematical expectation is approximately equal to the mean of the samples X_1, X_2, ..., X_n. Therefore, we use the arithmetic mean of the first hitting times of multiple experiments to estimate the actual expected first hitting time.
In this section, the parameters are set as follows. The target precision is ε = 1 × 10⁻⁸, the initial individual is x^0 = (x_1^0, x_2^0, ..., x_n^0) = (0, 0, ..., 0), the slope is k = 1, and the target fitness parameter is a = 10. The problem dimension n is set from 10 to 280. BSO-I and BSO-II are run on the n-dimensional equal coefficient linear function for 300 independent runs for each dimension. The termination criterion of each run is that the error of the optimal solution falls below the predefined threshold ε. Table 2 shows the numerical results of the practical expected first hitting time Ê(T_ε|X_0) and the theoretical upper bounds, where Ê(T_ε|X_0) = (1/300)∑_{i=1}^{300} T_ε^i, and T_ε^i is the first hitting time of the ε-approximation solution at the i-th run.
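For reference, one cell of Table 2 can be reproduced by averaging the first hitting times of independent runs. The sketch below does this for BSO-I with the N(0,1) mutation operator at n = 10, using 300 runs as in this section; the random seed is an arbitrary assumption.

```python
import numpy as np

def first_hitting_time(n, k=1.0, a=10.0, eps=1e-8, rng=None):
    """One run of BSO-I with N(0,1) mutation (see Algorithm 2)."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = np.zeros(n), 0
    while n * a - k * x.sum() > eps:
        y = x + rng.standard_normal(n)
        if k * y.sum() > k * x.sum():
            x = y
        t += 1
    return t

rng = np.random.default_rng(42)
times = np.array([first_hitting_time(10, rng=rng) for _ in range(300)])
print(times.mean())  # close to the theoretical upper bound 80.27 for n = 10
```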
As shown in Table 2, the experimental results fit the theoretical upper bounds well, and the estimated values deviate from the theoretical upper bounds only slightly. The points larger than the theoretical upper bounds are marked in the table. The arithmetic mean of multiple experiments is used to estimate the expected first hitting time; in practice, only the arithmetic mean of 300 runs is used, which allows a certain statistical error. According to the central limit theorem, the means obtained from 300 independent runs approximately follow a normal distribution. The null hypothesis H_0 is that the mean value of the first hitting time in the 300 runs is less than or equal to the corresponding theoretical upper bound. The corresponding significance level α is 0.05 for the t-test. Moreover, as shown in Figure 1, the actual expected running time follows the estimated upper bound derived from our proofs. All the detailed results are shown in Table 3.
Table 3 provides the numerical results, where h represents the hypothesis testing result, p represents the p-value of the test, and c_i is the confidence interval. As shown by the t-test results in Table 3, h = 0 and p > α in all cases. The null hypothesis H_0 is accepted at the significance level α = 0.05. Therefore, the analytic expressions of the running time of BSO obtained from the average gain model characterize the actual upper bounds of the running time of the six BSO variants.
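The hypothesis test reported in Table 3 is a one-sample, one-sided t-test. The sketch below illustrates the procedure with a synthetic (assumed) sample of 300 first hitting times and an assumed upper bound of 80.27; it relies on scipy.stats.ttest_1samp with the alternative='greater' option, which is available in SciPy 1.6 and later.

```python
import numpy as np
from scipy import stats

# Synthetic sample standing in for 300 observed first hitting times
# (mean and spread are assumptions for illustration only).
rng = np.random.default_rng(0)
observed = rng.normal(loc=79.6, scale=12.0, size=300)
upper_bound = 80.27  # theoretical upper bound for this cell

# H0: mean first hitting time <= upper bound; H1: mean > upper bound.
t_stat, p_value = stats.ttest_1samp(observed, upper_bound, alternative="greater")
h = int(p_value < 0.05)  # h = 0 means H0 is not rejected at alpha = 0.05
print(h, p_value)
```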

5. Conclusions

The running time of six BSO variants for the equal coefficient linear function is analyzed in this paper based on the average gain model. The additivity of the normal distribution and the Lindeberg–Levy central limit theorem are applied to deal with the superposition of the normally distributed mutation operator and the uniformly distributed mutation operator, respectively. Furthermore, the full expectation formula is utilized to deal with the probabilistic individual replacement in the disrupting operator. This paper also derives the upper bounds on the expected first hitting time of the single individual BSO on equal coefficient linear functions.
The analysis results show that the time complexity of both BSO-I and BSO-II is O(√n) on the equal coefficient linear function, although their coefficients are different. On the linear function with equal coefficients, the upper bound on the expected first hitting time of BSO-II is smaller than that of BSO-I. In addition, the single individual BSO using the standard normally distributed mutation operator has a lower upper bound on the expected first hitting time than the corresponding algorithm using a uniformly distributed mutation operator. The upper bounds on the expected first hitting time of single individual BSOs with the U(−1/2, 1/2) mutation operator are approximately twice those of BSOs with the U(−1, 1) mutation operator.
In our future work, we will analyze the running time of the population-based BSO in the equal coefficient linear function. The running time of population-based BSOs in practical optimization problems is also an important topic. Moreover, it is crucial to extend our research to practical optimization problems that are encountered in real-world applications with complex constraints, non-linear relationships, or high-dimensional spaces.

Author Contributions

Conceptualization, G.M. and F.L.; methodology, G.M., F.L. and Y.H.; formal analysis, G.M. and Y.H.; investigation, D.L., J.S., X.Y. and H.H.; writing original draft preparation, F.L., D.L. and J.S.; writing review and editing, G.M. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (62276103), Natural Science Foundation of Guangdong Province (2022A1515110058, 2022A1515011551), Guangdong Provincial Department of Education (No.2021KTSCX070, 2021KCXTD038, 2023KCXTD002), Guangzhou Science and Technology Planning Project (2023A04J1684), Doctor Starting Fund of Hanshan Normal University, China (No. QD20190628, QD2021201), Scientific Research Talents Fund of Hanshan Normal University, China (No. Z19113), The quality of teaching construction project of Hanshan Normal University (E22022, E23045), Research Platform Project of Hanshan Normal University (PNB221102), The quality of teaching construction project of Guangdong Provincial Department of Education in 2023 (No.350) and Guangdong Provincial Key Laboratory of Data Science and Intelligent Education (2022KSYS003).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Michaloglou, A.; Tsitsas, N.L. A Brain Storm and Chaotic Accelerated Particle Swarm Optimization Hybridization. Algorithms 2023, 16, 208. [Google Scholar] [CrossRef]
  2. Slowik, A.; Kwasnicka, H. Nature inspired methods and their industry applications-swarm intelligence algorithms. IEEE Trans. Ind. Inform. 2017, 14, 1004–1015. [Google Scholar] [CrossRef]
  3. Xue, Y.; Jiang, J.; Zhao, B.; Ma, T. A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft Comput. 2018, 22, 2935–2952. [Google Scholar] [CrossRef]
  4. Liu, X.; Zhan, Z.; Gao, Y.; Zhang, J.; Kwong, S.; Zhang, J. Coevolutionary Particle Swarm Optimization With Bottleneck Objective Learning Strategy for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 587–602. [Google Scholar] [CrossRef]
  5. Zhao, Q.; Li, C. Two-Stage Multi-Swarm Particle Swarm Optimizer for Unconstrained and Constrained Global Optimization. IEEE Access 2020, 8, 124905–124927. [Google Scholar] [CrossRef]
  6. Yu, X.; Chen, W.; Gu, T.; Yuan, H.; Zhang, H.; Zhang, J. ACO-A*: Ant Colony Optimization Plus A* for 3-D Traveling in Environments with Dense Obstacles. IEEE Trans. Evol. Comput. 2019, 23, 617–631. [Google Scholar] [CrossRef]
  7. Lyu, Z.; Wang, Z.; Duan, D.; Lin, L.; Li, J.; Yang, Y.; Chen, Y.; Li, Y. Tilting Path Optimization of Tilt Quad Rotor in Conversion Process Based on Ant Colony Optimization Algorithm. IEEE Access 2020, 8, 140777–140791. [Google Scholar] [CrossRef]
  8. Zhang, L.; Wang, S.; Zhang, K.; Zhang, X.; Sun, Z.; Zhang, H.; Chipecane, M.T.; Yao, J. Cooperative Artificial Bee Colony Algorithm with Multiple Populations for Interval Multiobjective Optimization Problems. IEEE Trans. Fuzzy Syst. 2019, 27, 1052–1065. [Google Scholar] [CrossRef]
  9. Kumar, D.; Mishra, K. Co-variance guided artificial bee colony. Appl. Soft Comput. 2018, 70, 86–107. [Google Scholar] [CrossRef]
  10. Cheng, S.; Qin, Q.; Chen, J.; Shi, Y. Brain storm optimization algorithm: A review. Artif. Intell. Rev. 2016, 46, 445–458. [Google Scholar] [CrossRef]
  11. Cheng, S.; Sun, Y.; Chen, J.; Qin, Q.; Chu, X.; Lei, X.; Shi, Y. A comprehensive survey of brain storm optimization algorithms. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastián, Spain, 5–8 June 2017; pp. 1637–1644. [Google Scholar]
  12. Cheng, S.; Lei, X.; Hui, L.; Zhang, Y.; Shi, Y. Generalized pigeon-inspired optimization algorithms. Sci. China Inf. Sci. 2019, 62, 120–130. [Google Scholar] [CrossRef]
  13. Xiong, G.; Shi, D.; Zhang, J.; Zhang, Y. A binary coded brain storm optimization for fault section diagnosis of power systems. Electr. Power Syst. Res. 2018, 163, 441–451. [Google Scholar] [CrossRef]
  14. Wang, Z.; He, J.; Xu, Y.; Crossley, P.; Zhang, D. Multi-objective optimisation method of power grid partitioning for wide-area backup protection. IET Gener. Transm. Distrib. 2018, 12, 696–703. [Google Scholar] [CrossRef]
  15. Ogawa, S.; Mori, H. A Hierarchical Scheme for Voltage and Reactive Power Control with Predator-Prey Brain Storm Optimization. In Proceedings of the 2019 20th International Conference on Intelligent System Application to Power Systems (ISAP), New Delhi, India, 10–14 December 2019; pp. 1–6. [Google Scholar] [CrossRef]
  16. Matsumoto, K.; Fukuyama, Y. Voltage and Reactive Power Control by Parallel Modified Brain Storm Optimization. In Proceedings of the 2020 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), Fukuoka, Japan, 19–21 February 2020; pp. 553–558. [Google Scholar] [CrossRef]
  17. Soyinka, O.K.; Duan, H. Optimal Impulsive Thrust Trajectories for Satellite Formation via Improved Brainstorm Optimization. In Proceedings of the Advances in Swarm Intelligence; Tan, Y., Shi, Y., Niu, B., Eds.; Springer: Cham, Switzerland, 2016; pp. 491–499. [Google Scholar]
  18. Li, J.; Duan, H. Simplified brain storm optimization approach to control parameter optimization in F/A-18 automatic carrier landing system. Aerosp. Sci. Technol. 2015, 42, 187–195. [Google Scholar] [CrossRef]
  19. Zhang, C.; Xu, X.; Shi, Y.; Deng, Y.; Li, C.; Duan, H. Binocular Pose Estimation for UAV Autonomous Aerial Refueling via Brain Storm Optimization. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 254–261. [Google Scholar] [CrossRef]
  20. Tuba, E.; Strumberger, I.; Zivkovic, D.; Bacanin, N.; Tuba, M. Mobile Robot Path Planning by Improved Brain Storm Optimization Algorithm. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  21. Li, G.; Zhang, D.; Shi, Y. An Unknown Environment Exploration Strategy for Swarm Robotics Based on Brain Storm Optimization Algorithm. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 1044–1051. [Google Scholar] [CrossRef]
  22. Aldhafeeri, A.; Rahmat-Samii, Y. Brain Storm Optimization for Electromagnetic Applications: Continuous and Discrete. IEEE Trans. Antennas Propag. 2019, 67, 2710–2722. [Google Scholar] [CrossRef]
  23. Sun, Y. A Hybrid Approach by Integrating Brain Storm Optimization Algorithm with Grey Neural Network for Stock Index Forecasting. Abstr. Appl. Anal. 2014, 2014, 1–10. [Google Scholar] [CrossRef]
  24. Xiong, G.; Shi, D. Hybrid biogeography-based optimization with brain storm optimization for non-convex dynamic economic dispatch with valve-point effects. Energy 2018, 157, 424–435. [Google Scholar] [CrossRef]
  25. Wu, Y.; Wang, X.; Xu, Y.; Fu, Y. Multi-objective Differential-Based Brain Storm Optimization for Environmental Economic Dispatch Problem. In Proceedings of the Brain Storm Optimization Algorithms: Concepts, Principles and Applications; Cheng, S., Shi, Y., Eds.; Springer: Cham, Switzerland, 2019; pp. 79–104. [Google Scholar]
  26. Ma, X.; Jin, Y.; Dong, Q. A generalized dynamic fuzzy neural network based on singular spectrum analysis optimized by brain storm optimization for short-term wind speed forecasting. Appl. Soft Comput. 2017, 54, 296–312. [Google Scholar] [CrossRef]
  27. Liang, J.J.; Wang, P.; Yue, C.T.; Yu, K.; Li, Z.H.; Qu, B. Multi-objective Brainstorm Optimization Algorithm for Sparse Optimization. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar] [CrossRef]
  28. Fu, Y.; Tian, G.; Fathollahi-Fard, A.M.; Ahmadi, A.; Zhang, C. Stochastic multi-objective modelling and optimization of an energy-conscious distributed permutation flow shop scheduling problem with the total tardiness constraint. J. Clean. Prod. 2019, 226, 515–525. [Google Scholar] [CrossRef]
  29. Pourpanah, F.; Shi, Y.; Lim, C.P.; Hao, Q.; Tan, C.J. Feature selection based on brain storm optimization for data classification. Appl. Soft Comput. 2019, 80, 761–775. [Google Scholar] [CrossRef]
  30. Peng, S.; Wang, H.; Yu, Q. Multi-Clusters Adaptive Brain Storm Optimization Algorithm for QoS-Aware Service Composition. IEEE Access 2020, 8, 48822–48835. [Google Scholar] [CrossRef]
  31. Zhou, Z.; Duan, H.; Shi, Y. Convergence analysis of brain storm optimization algorithm. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 3747–3752. [Google Scholar] [CrossRef]
  32. Qiao, Y.; Huang, Y.; Gao, Y. The Global Convergence Analysis of Brain Storm Optimization. NeuroQuantology 2018, 16, 6. [Google Scholar] [CrossRef]
  33. Zhang, Y.; Huang, H.; Hongyue, W.; Hao, Z. Theoretical analysis of the convergence property of a basic pigeon-inspired optimizer in a continuous search space. Sci. China Inf. Sci. 2019, 62, 86–94. [Google Scholar] [CrossRef]
  34. Sudholt, D. A New Method for Lower Bounds on the Running Time of Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2013, 17, 418–435. [Google Scholar] [CrossRef]
  35. He, J.; Yao, X. Average Drift Analysis and Population Scalability. IEEE Trans. Evol. Comput. 2017, 21, 426–439. [Google Scholar] [CrossRef]
  36. Yu, Y.; Qian, C.; Zhou, Z.H. Switch Analysis for Running Time Analysis of Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2015, 19, 777–792. [Google Scholar] [CrossRef]
  37. Li, Y.; Xiang, Z.; Ji, D. Wave models and dynamical analysis of evolutionary algorithms. Sci. China Inf. Sci. 2019, 62, 53–68. [Google Scholar]
  38. Lehre, P.K.; Witt, C. Concentrated Hitting Times of Randomized Search Heuristics with Variable Drift. In Proceedings of the Algorithms and Computation; Ahn, H.K., Shin, C.S., Eds.; Springer: Cham, Switzerland, 2014; pp. 686–697. [Google Scholar]
  39. Witt, C. Fitness levels with tail bounds for the analysis of randomized search heuristics. Inf. Process. Lett. 2014, 114, 38–41. [Google Scholar] [CrossRef]
  40. Yu, Y.; Qian, C. Running time analysis: Convergence-based analysis reduces to switch analysis. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; pp. 2603–2610. [Google Scholar] [CrossRef]
  41. Huang, H.; Xu, W.; Zhang, Y.; Lin, Z.; Hao, Z. Runtime analysis for continuous (1 + 1) evolutionary algorithm based on average gain model. Sci. China Inf. Sci. 2014, 44, 811–824. [Google Scholar]
  42. Huang, H.; Su, J.; Zhang, Y.; Hao, Z. An Experimental Method to Estimate Running Time of Evolutionary Algorithms for Continuous Optimization. IEEE Trans. Evol. Comput. 2020, 24, 275–289. [Google Scholar] [CrossRef]
  43. Zhang, Y.; Huang, H.; Hao, Z.; Hu, G. First hitting time analysis of continuous evolutionary algorithms based on average gain. Clust. Comput. 2016, 19, 1323–1332. [Google Scholar]
  44. Yu, Y.; Zhou, Z.H. A new approach to estimating the expected first hitting time of evolutionary algorithms. Artif. Intell. 2006, 172, 1809–1832. [Google Scholar] [CrossRef]
  45. Wang, Y. Application of data mining based on swarm intelligence algorithm in financial support of livestock and poultry breeding insurance. Soft Comput. 2023. [Google Scholar] [CrossRef]
  46. Zhan, Z.; Zhang, J.; Shi, Y.; Liu, H. A modified brain storm optimization. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, QLD, Australia, 10–15 June 2012; pp. 1–8. [Google Scholar]
  47. El-Abd, M. Brain storm optimization algorithm with re-initialized ideas and adaptive step size. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 2682–2686. [Google Scholar]
  48. Shi, Y. Brain Storm Optimization Algorithm. In Proceedings of the Advances in Swarm Intelligence; Tan, Y., Shi, Y., Chai, Y., Wang, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 303–309. [Google Scholar]
  49. Shi, Y. An Optimization Algorithm Based on Brainstorming Process. Int. J. Swarm Intell. Res. 2011, 2, 35–62. [Google Scholar] [CrossRef]
  50. Yao, X.; Xu, Y. Recent advances in evolutionary computation. J. Comput. Sci. Technol. 2006, 21, 1–18. [Google Scholar] [CrossRef]
  51. Agapie, A.; Agapie, M.; Zbaganu, G. Evolutionary algorithms for continuous-space optimisation. Int. J. Syst. Sci. 2013, 44, 502–512. [Google Scholar] [CrossRef]
  52. He, J.; Yao, X. Drift analysis and average time complexity of evolutionary algorithms. Artif. Intell. 2001, 127, 57–85. [Google Scholar] [CrossRef]
  53. Hassler, U. Riemann Integrals. In Proceedings of the Stochastic Processes and Calculus: An Elementary Introduction with Applications; Springer: Cham, Switzerland, 2016; pp. 179–197. [Google Scholar]
  54. Jägersküpper, J. Combining Markov-chain analysis and drift analysis: The (1 + 1) evolutionary algorithm on linear functions reloaded. Algorithmica 2011, 59, 409–424. [Google Scholar] [CrossRef]
  55. Witt, C. Tight Bounds on the Optimization Time of a Randomized Search Heuristic on Linear Functions. Comb. Probab. Comput. 2013, 22, 294–318. [Google Scholar] [CrossRef]
  56. Hao, Z.; Huang, H.; Zhang, X.; Tu, K. A Time Complexity Analysis of ACO for Linear Functions. In Proceedings of the Simulated Evolution and Learning, 6th International Conference, SEAL 2006, Hefei, China, 15–18 October 2006. [Google Scholar]
  57. Jägersküpper, J. Algorithmic analysis of a basic evolutionary algorithm for continuous optimization. Theor. Comput. Sci. 2007, 379, 329–347. [Google Scholar] [CrossRef]
  58. Feller, W. An Introduction to Probability Theory and Its Applications; John Wiley & Sons: Hoboken, NJ, USA, 2008; Volume 2. [Google Scholar]
  59. Zhan, Z.H.; Chen, W.N.; Lin, Y.; Gong, Y.J.; Li, Y.L.; Zhang, J. Parameter investigation in brain storm optimization. In Proceedings of the 2013 IEEE Symposium on Swarm Intelligence (SIS), Singapore, 16–19 April 2013; pp. 103–110. [Google Scholar] [CrossRef]
Figure 1. The curve of the expected and the actual running time. The six figures, arranged from top to bottom and left to right, depict the three distributions corresponding to BSO-I and the three distributions corresponding to BSO-II, respectively.
Table 1. Analysis of the running time of BSO in six different situations.
Algorithm | Mutation Operator z | Explicit Expression for E(T_ε|X_0) | Time Complexity T(n)
BSO-I | N(0,1) | E(T_ε|X_0) ≤ 1 + √(2πn)·a/k − √(2π)·ε/(√n·k) | O(√n)
BSO-I | U(−1/2, 1/2) | E(T_ε|X_0) ≤ 1 + 2√(6πn)·a/k − 2√(6π)·ε/(√n·k) | O(√n)
BSO-I | U(−1, 1) | E(T_ε|X_0) ≤ 1 + √(6πn)·a/k − √(6π)·ε/(√n·k) | O(√n)
BSO-II | N(0,1) | E(T_ε|X_0) ≤ 1 + 5√(2πn)·a/((4+√2)k) − 5√(2π)·ε/((4+√2)√n·k) | O(√n)
BSO-II | U(−1/2, 1/2) | E(T_ε|X_0) ≤ 1 + 10√(6πn)·a/((4+√2)k) − 10√(6π)·ε/((4+√2)√n·k) | O(√n)
BSO-II | U(−1, 1) | E(T_ε|X_0) ≤ 1 + 5√(6πn)·a/((4+√2)k) − 5√(6π)·ε/((4+√2)√n·k) | O(√n)
Table 2. Comparison of the estimation of the expected first hitting time and the theoretical upper bounds.
Algorithm | z | n | 10 | 40 | 70 | 100 | 130 | 160 | 190 | 220 | 250 | 280
BSO-I | N(0,1) | theoretical upper bound | 80.27 | 159.53 | 210.72 | 251.66 | 286.80 | 318.07 | 346.51 | 372.79 | 397.33 | 420.44
BSO-I | N(0,1) | Ê(T_ε|X_0) | 79.65 | 159.57* | 212.51* | 252.09* | 287.14* | 316.84 | 345.61 | 373.66* | 398.08* | 418.75
BSO-I | U(−1/2,1/2) | theoretical upper bound | 275.59 | 550.17 | 727.49 | 869.32 | 991.04 | 1099.35 | 1197.90 | 1288.93 | 1373.94 | 1453.98
BSO-I | U(−1/2,1/2) | Ê(T_ε|X_0) | 275.81* | 548.63 | 727.97* | 868.67 | 990.57 | 1102.96* | 1198.00* | 1288.83 | 1373.06 | 1455.95*
BSO-I | U(−1,1) | theoretical upper bound | 138.29 | 275.59 | 364.24 | 435.16 | 496.02 | 550.17 | 599.45 | 644.96 | 687.47 | 727.49
BSO-I | U(−1,1) | Ê(T_ε|X_0) | 137.27 | 275.45 | 363.95 | 435.81* | 498.34* | 553.30* | 598.41 | 641.36 | 685.46 | 723.91
BSO-II | N(0,1) | theoretical upper bound | 74.20 | 147.40 | 194.68 | 232.49 | 264.93 | 293.81 | 320.08 | 344.35 | 367.01 | 388.35
BSO-II | N(0,1) | Ê(T_ε|X_0) | 72.97 | 145.55 | 193.03 | 232.77* | 262.05 | 294.26* | 318.51 | 347.10* | 365.51 | 387.87
BSO-II | U(−1/2,1/2) | theoretical upper bound | 254.58 | 508.16 | 671.91 | 802.89 | 915.30 | 1015.32 | 1106.33 | 1190.40 | 1268.90 | 1342.82
BSO-II | U(−1/2,1/2) | Ê(T_ε|X_0) | 251.80 | 505.49 | 674.53* | 804.58* | 915.61* | 1014.22 | 1103.21 | 1190.82* | 1271.88* | 1342.12
BSO-II | U(−1,1) | theoretical upper bound | 127.79 | 254.58 | 336.45 | 401.95 | 458.15 | 508.16 | 553.67 | 595.70 | 634.95 | 671.91
BSO-II | U(−1,1) | Ê(T_ε|X_0) | 128.10* | 253.61 | 334.30 | 400.20 | 458.07 | 507.69 | 554.34* | 594.77 | 634.45 | 669.75
The points larger than the theoretical upper bounds are marked with an asterisk (*).
Table 3. Statistical results of hypothesis testing.
Algorithm | z | n | 10 | 40 | 70 | 100 | 130 | 160 | 190 | 220 | 250 | 280
BSO-I | N(0,1) | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
BSO-I | N(0,1) | p | 0.80 | 0.48 | 0.06 | 0.38 | 0.41 | 0.81 | 0.72 | 0.31 | 0.34 | 0.83
BSO-I | N(0,1) | c_i | 78.41 | 157.86 | 210.57 | 249.84 | 284.76 | 314.52 | 343.03 | 370.82 | 395.13 | 415.87
BSO-I | U(−1/2,1/2) | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
BSO-I | U(−1/2,1/2) | p | 0.44 | 0.79 | 0.42 | 0.60 | 0.57 | 0.10 | 0.49 | 0.51 | 0.61 | 0.27
BSO-I | U(−1/2,1/2) | c_i | 273.43 | 545.49 | 724.19 | 864.51 | 986.22 | 1098.22 | 1193.50 | 1283.96 | 1367.75 | 1450.62
BSO-I | U(−1,1) | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
BSO-I | U(−1,1) | p | 0.85 | 0.54 | 0.57 | 0.36 | 0.10 | 0.07 | 0.69 | 0.95 | 0.81 | 0.94
BSO-I | U(−1,1) | c_i | 135.64 | 273.05 | 361.29 | 432.88 | 495.38 | 549.76 | 595.05 | 637.79 | 681.73 | 720.20
BSO-II | N(0,1) | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
BSO-II | N(0,1) | p | 0.95 | 0.96 | 0.93 | 0.42 | 0.99 | 0.38 | 0.83 | 0.06 | 0.81 | 0.61
BSO-II | N(0,1) | c_i | 71.71 | 143.86 | 191.16 | 230.52 | 259.89 | 291.83 | 315.84 | 344.26 | 362.73 | 385.06
BSO-II | U(−1/2,1/2) | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
BSO-II | U(−1/2,1/2) | p | 0.98 | 0.92 | 0.12 | 0.23 | 0.45 | 0.65 | 0.86 | 0.44 | 0.18 | 0.59
BSO-II | U(−1/2,1/2) | c_i | 249.47 | 502.38 | 670.91 | 800.79 | 911.60 | 1009.53 | 1098.54 | 1185.87 | 1266.44 | 1337.20
BSO-II | U(−1,1) | h | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
BSO-II | U(−1,1) | p | 0.37 | 0.77 | 0.92 | 0.84 | 0.52 | 0.60 | 0.37 | 0.66 | 0.59 | 0.83
BSO-II | U(−1,1) | c_i | 126.53 | 251.47 | 331.78 | 397.34 | 455.24 | 504.80 | 551.07 | 591.02 | 630.94 | 665.97
All confidence intervals are one-sided intervals of the form [c_i, +Inf).

