Article

Improved Environmental Stimulus and Biological Competition Tactics Interactive Artificial Ecological Optimization Algorithm for Clustering

School of Science, Xi’an University of Technology, Xi’an 710054, China
*
Author to whom correspondence should be addressed.
Biomimetics 2023, 8(2), 242; https://doi.org/10.3390/biomimetics8020242
Submission received: 21 April 2023 / Revised: 2 June 2023 / Accepted: 5 June 2023 / Published: 7 June 2023
(This article belongs to the Special Issue Nature-Inspired Computer Algorithms)

Abstract

An interactive artificial ecological optimization algorithm (SIAEO) based on environmental stimulus and a competition mechanism is devised for complex optimization problems, in which the sequential execution of the consumption and decomposition stages of the artificial ecological optimization (AEO) algorithm often causes the search to become trapped in a local optimum. Firstly, an environmental stimulus defined by population diversity makes the population execute the consumption operator and the decomposition operator interactively, which reduces the unevenness of the algorithm. Secondly, the three predation modes of the consumption stage are regarded as three different tasks, and the task each individual executes is determined by its maximum cumulative success rate. Furthermore, a biological competition operator is introduced to modify the consumption update strategy, so that SIAEO also performs exploitation during the exploration stage, breaks the equal-probability execution mode of AEO, and promotes competition among operators. Finally, a stochastic mean term is introduced into the later exploitation process of the algorithm, which greatly strengthens the ability of SIAEO to escape from local optima. SIAEO is compared with other improved algorithms on the CEC2017 and CEC2019 test sets.

1. Introduction

In the realm of optimization, emerging heuristic algorithms have become a focus of researchers due to their convenience and comprehensibility. Swarm intelligence (SI) is an important branch of biologically inspired computing. An SI algorithm takes advantage of the collaborative and competitive relationships within a population to guide it toward the optimal solution, which has attracted a great deal of investigators. Several novel algorithms have been put forward in recent years, for example, SFO [1], HHO [2], BMO [3], EJS [4], MCSA [5], SDABWO [6], STOA [7], SCA [8] and SSA [9]. Swarm-based intelligent optimization algorithms have been widely applied in engineering optimization because they abandon the gradient information required by traditional optimization algorithms; applications include time series prediction [10], image segmentation [11], feature selection [12] and cloud-computing task scheduling [13].
The AEO algorithm [14] is an SI algorithm introduced by Zhao et al. in 2019. The AEO algorithm is designed according to the energy flow and material circulation in natural ecosystems, and its model is established through three stages: production, consumption and decomposition. The first role is the producer, which obtains energy directly from nature and models the plants in the ecosystem; the second role is the consumer, whose main body is an animal; the final role is the decomposer, which feeds on both producers and consumers.
In the population of the AEO algorithm, all individuals are called consumers except for one producer and one decomposer. According to their foraging behavior, consumers are divided into herbivores, carnivores and omnivores. Energy levels in an ecosystem decrease along the food chain, so producers have the highest energy level and decomposers the lowest. The AEO algorithm balances global search and local exploitation well, is easy to implement, has few design parameters, and has been applied successfully in many fields. One study [15] improved AEO by introducing self-adaptive nonlinear weights, optional cross-learning and Gaussian mutation strategies, devising the IAEO algorithm to handle the parameter optimization problem of PEMFC. In [16], the authors applied an EAEO, which combines AEO with sine and cosine operators, to the real-world problem of distributed power allocation in distribution systems, so as to cut down the power loss of the distribution network. In [17], the authors proposed the MAEO algorithm for PEMFC parameter estimation by introducing a linear operator H to balance exploration and exploitation. The authors of [18] proposed a promising hybrid multi-population algorithm, HMPA, for managing multiple engineering optimization problems. HMPA adopts a new population-division method, which splits the population into several subpopulations that dynamically exchange solutions; an associative algorithm based on AEO and HHO was presented, and multiple strategies were combined to maximize the efficiency of HMPA. In [19], the authors proposed a new random vector functional link (RVFL) network combined with the AEO algorithm to compute the optimal outcome of the SWGH system. The authors of [20] proposed a new fitness-function reconstruction technique based on the AEO algorithm for photovoltaic array reconfiguration optimization.
The authors of [21] used an AEO algorithm to obtain the ideal tuning parameters of a PID controller for an AVR system.
Although the AEO algorithm has been successfully used in the above domains, it first explores with all individuals and then executes the exploitation sequentially, which increases the computational burden and slows down the optimization speed of AEO. Secondly, the update rule of AEO in the exploration stage places each individual at the end of the biological chain, which does not account for biological competition in nature; that is, every creature preys on other species and is itself a target of prey. As a result, the solution accuracy is not high.
To improve the characteristics of AEO and the accuracy of its optimization results, we take inspiration from environmental stimulation and competition mechanisms: when the population diversity in the environment is large, fewer similar organisms are distributed per unit space and the exploration ability is strong, so exploitation should be strengthened, and vice versa. Based on this, this paper proposes an improved artificial ecological optimization algorithm (SIAEO) that combines environmental stimuli with a biological competition mechanism. The improvements to AEO are twofold: (1) an environmental stimulation mechanism is introduced into the AEO algorithm, in which an external stimulus defined by population diversity makes the population perform the consumption and decomposition operators interactively, reducing the computational complexity of the algorithm; (2) in the consumption stage, biological competition is considered by allowing an individual to die due to predation, and the maximum cumulative success rate of each individual performing the different tasks guides the individual to choose a more suitable consumption update strategy, which contributes to the exploitation of the algorithm in the exploration stage. In the decomposition stage, two arbitrarily chosen individuals are introduced to define a potential exploitation and update formula, which effectively helps the algorithm leave local optima, so as to better balance the global search and local optimization of the algorithm.
K-means [22,23] is an excellent algorithm for complex clustering problems. After the number of classes and the cluster centers are determined, the nearest-neighbor principle is used to assign samples to the categories determined by the K cluster centers, and the within-class distance is minimized and the between-class distance maximized by constantly updating the K cluster centers. The determination of the K cluster centers is itself a high-dimensional optimization problem.
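For illustration, the within-class distance that such a search over cluster centers minimizes can be written as a simple objective. This is a minimal sketch; the function name and the sum-of-distances form are our own choices (K-means is also often stated with squared distances):

```python
import numpy as np

def kmeans_objective(centers, X):
    """Total in-class distance: each sample is assigned to its nearest center."""
    # Pairwise distances between every sample and every center via broadcasting
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).sum()

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
good = np.array([[0.0, 0.5], [10.0, 10.5]])   # centers at the two cluster means
bad = np.array([[5.0, 5.0], [5.0, 6.0]])      # both centers between the clusters
```

An optimizer such as SIAEO treats the flattened `centers` array as the decision vector and this objective as the fitness function.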
To demonstrate the effectiveness and practicality of the SIAEO algorithm, the experiments consist of two parts. Firstly, 40 benchmark functions drawn from the CEC2017 and CEC2019 sets are used to measure the performance of SIAEO on complex high-dimensional numerical optimization, and the results are compared with nine other excellent algorithms, proving the numerical optimization potential of SIAEO. Secondly, the validity of SIAEO in solving engineering optimization problems and in finding optimal K-means clustering centers is verified on four engineering optimization problems and nine benchmark data sets from the UCI machine learning repository. The results confirm the significance and competitiveness of the SIAEO–K-means algorithm.
The structure of this article is as follows: Section 2 outlines the AEO, and Section 3 presents the improvement strategies of the improved AEO algorithm. In Section 4, numerical examples, practical application examples and cluster optimization show the excellent performance of SIAEO. In the conclusions, we review the entire paper and point out directions for future research.

2. AEO Algorithm

For the D-dimensional optimization problem:

$$\min f(x_1, x_2, \ldots, x_D) \quad \mathrm{s.t.}\ \; low_j \le x_j \le up_j,\; j = 1, 2, \ldots, D,$$

where f(x_1, x_2, ..., x_D) represents the objective function, x_j is the j-th decision variable, and up_j and low_j are the upper and lower bounds of x_j, respectively.
The AEO algorithm, presented in 2019, simulates the energy flow of an ecosystem. According to the composition of the food chain, it is divided into three stages: production, consumption and decomposition. The detailed steps of the AEO algorithm for solving the optimization problem are:
(1)
Initialization: Suppose N denotes the population size. Randomly generate N individuals in the D-dimensional decision space to compose the initial population $X(0) = \{X_i(0)\}_{i=1}^{N}$; the j-th dimension of individual $X_i(0) = \{x_{ij}(0)\}_{j=1}^{D}$ is created by

$$x_{ij}(0) = rand(0, 1)\,(up_j - low_j) + low_j, \tag{1}$$

in which rand(0, 1) is a random number within (0, 1).
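The initialization in Equation (1) can be sketched as follows; the function name and the NumPy-based sampling are implementation choices of this sketch:

```python
import numpy as np

def initialize_population(N, D, low, up, rng=None):
    """Eq. (1): x_ij(0) = rand(0,1) * (up_j - low_j) + low_j."""
    rng = np.random.default_rng(rng)
    # One uniform draw per dimension of every individual, scaled into [low, up]
    return low + rng.random((N, D)) * (up - low)

low = np.full(3, -5.0)
up = np.full(3, 5.0)
X0 = initialize_population(10, 3, low, up, rng=0)
```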
(2)
Production phase: Suppose the population at iteration t is $X(t) = \{X_i(t)\}_{i=1}^{N}$, ranked in descending order of fitness value so that $X_N(t)$ is the best individual. In the production stage, only a temporary location update is performed on the producer $X_1(t)$. The specific update strategy is a linear combination of the best individual $X_N(t)$ and a randomly generated individual $X_r(t)$, and the update formula is:

$$X_1^{new}(t+1) = (1 - a)\,X_N(t) + a\,X_r(t), \tag{2}$$

where $a = r_1 (1 - t/T)$ is the linear weighting coefficient; $X_r(t) = \{x_{rj}(t)\}_{j=1}^{D}$ with $x_{rj}(t) = low_j + rand(0,1)(up_j - low_j)$ represents an individual arbitrarily generated in the solution space; t and T denote the current and final iteration, respectively; and $r_1$ is a random value in [0, 1]. Equation (2) shows that in the early stage of the algorithm, the producer tends to explore widely around $X_r(t)$, while in the later stage it focuses on further exploitation near $X_N(t)$.
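A minimal sketch of the producer update of Equation (2); the helper name and the use of a NumPy generator are our own choices:

```python
import numpy as np

def producer_update(X_best, t, T, low, up, rng=None):
    """Eq. (2): X1_new = (1 - a) * X_N + a * X_r, with a = r1 * (1 - t/T)."""
    rng = np.random.default_rng(rng)
    a = rng.random() * (1 - t / T)                       # shrinks to 0 as t -> T
    X_r = low + rng.random(X_best.shape) * (up - low)    # random point in the space
    return (1 - a) * X_best + a * X_r

low, up = np.full(4, -10.0), np.full(4, 10.0)
X_best = np.zeros(4)
early = producer_update(X_best, t=1, T=100, low=low, up=up, rng=1)
late = producer_update(X_best, t=100, T=100, low=low, up=up, rng=1)
```

At t = T the weight a vanishes, so the producer collapses onto the best individual, matching the exploration-to-exploitation transition described above.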
(3)
Consumption phase: In this stage, the second to N-th individuals are temporarily updated according to a random-walk strategy generated by Lévy flight. Let $v_1$ and $v_2$ be two random numbers with standard normal distribution; the consumption factor is defined as

$$C = \frac{1}{2}\,\frac{v_1}{|v_2|}, \quad v_1 \sim N(0, 1),\; v_2 \sim N(0, 1), \tag{3}$$

in which N(0, 1) denotes the standard normal distribution.
The i-th individual is classified and updated according to probability using the formulas for herbivores (Task 1), carnivores (Task 2) and omnivores (Task 3), i.e.,

$$X_i^{new}(t+1) = \begin{cases} X_i(t) + C\,(X_i(t) - X_1(t)), & i \in [2, N],\; 0 < r_2 \le \tfrac{1}{3}, & (\text{Task 1})\\ X_i(t) + C\,(X_i(t) - X_k(t)), & i \in [3, N],\; \tfrac{1}{3} < r_2 \le \tfrac{2}{3}, & (\text{Task 2})\\ X_i(t) + C\,\big(r_3\,(X_i(t) - X_1(t)) + (1 - r_3)\,(X_i(t) - X_k(t))\big), & i \in [3, N],\; \tfrac{2}{3} < r_2 < 1, & (\text{Task 3}) \end{cases} \tag{4}$$

where k is a random integer in [2, i−1] and $r_2, r_3$ are random values in [0, 1].
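Equations (3) and (4) can be sketched together; indices here are 0-based (X[0] is the producer), and the function names are our own:

```python
import numpy as np

def consumption_factor(rng):
    """Eq. (3): C = v1 / (2 |v2|), with v1, v2 ~ N(0, 1)."""
    v1, v2 = rng.standard_normal(2)
    return 0.5 * v1 / abs(v2)

def consume(X, i, rng):
    """Eq. (4) for the i-th consumer (0-based; X[0] is the producer)."""
    C = consumption_factor(rng)
    r2 = rng.random()
    if i == 1 or r2 <= 1 / 3:                        # Task 1: herbivore
        return X[i] + C * (X[i] - X[0])
    # k is a random consumer ranked before i (1-based k in [2, i-1])
    k = 1 if i == 2 else int(rng.integers(1, i - 1))
    if r2 <= 2 / 3:                                  # Task 2: carnivore
        return X[i] + C * (X[i] - X[k])
    r3 = rng.random()                                # Task 3: omnivore
    return X[i] + C * (r3 * (X[i] - X[0]) + (1 - r3) * (X[i] - X[k]))

rng = np.random.default_rng(0)
X = rng.random((6, 2))
new_i = consume(X, 3, rng)
```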
(4)
Decomposition phase: The temporarily updated population $X^{new}(t+1)$ is further revised to obtain the final population:

$$X_i(t+1) = X_N(t+1) + 3u\,\big(e\,X_N(t+1) - h\,X_i^{new}(t+1)\big), \tag{5}$$

where u is a normally distributed random number, $e = r_4 \cdot randi(1, 2) - 1$, $h = 2 r_4 - 1$, and $r_4$ is a random number in [0, 1].
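A sketch of the decomposition update of Equation (5); treating u as a scalar is an assumption of this sketch (a per-dimension vector u also fits the formula):

```python
import numpy as np

def decompose(X_best, X_i_new, rng):
    """Eq. (5): move individual i around the decomposer (best individual X_N)."""
    u = rng.standard_normal()           # assumed scalar normal weight
    r4 = rng.random()
    e = r4 * rng.integers(1, 3) - 1     # r4 * randi{1, 2} - 1
    h = 2 * r4 - 1
    return X_best + 3 * u * (e * X_best - h * X_i_new)

rng = np.random.default_rng(2)
out = decompose(np.ones(3), np.zeros(3), rng)
```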
In summary, the AEO algorithm randomly creates the initial solutions. In each generation, the position of the producer is revised by Equation (2), consumers are updated with the equal-probability decision method of Equation (4), and the final decomposition process is performed by Equation (5); when the termination condition is met, the algorithm ends.

3. An Improved Interactive Artificial Ecological Optimization Algorithm Based on Environmental Stimulus and Biological Competition Mechanism

In an ecosystem, the survival and death of organisms, the predation of animals, and the stability and balance of the ecological chain are all greatly affected by the environment. In the process of predation, animals respond to different environmental stimuli in order to cope with the effect of stimulus intensity on their predation patterns. The incentive mechanism describes the relationship between environmental stimulus and individual response. Combined with the AEO algorithm, an environmental stimulus S is defined to guide individuals to switch between consumption and decomposition, so that the two operators can be executed interactively rather than sequentially, thereby reducing the complexity of the algorithm. In the process of predation, animals have incentive responses to different environmental stimuli and form their own preferences, which are influenced by natural enemies. Furthermore, since the survival of organisms follows the competition mechanism of survival of the fittest, introducing a competitive tendency in the consumption stage better simulates the real situation of an ecosystem.
The purpose of this paper is to improve the exploration and exploitation ability of the AEO algorithm. Therefore, we introduce into AEO an interactive execution of the consumption and decomposition tasks driven by environmental stimulus, together with an incentive mechanism and biological competition relations. The consumption behavior of the individuals changes with the search environment, and the largest cumulative task success rate guides each individual toward its best-performing way to feed. The equal-probability execution of AEO is abandoned, the convergence of AEO is accelerated, and the exploitation ability of AEO in the exploration stage is intensified. At the same time, during the decomposition stage, the introduction of arbitrary individuals boosts the exploration ability in the exploitation stage.

3.1. Environmental Stimulus

Environmental stimulus describes the external drive of an individual to perform a task. In the process of an algorithm search, the execution opportunity of consumption and decomposition can be measured by population diversity.
Population diversity is an indicator that describes the differences between individuals and reflects the distribution of the population [24]. For the population at iteration t, population diversity [25] is measured as:

$$Diversity(t) = \frac{1}{N \cdot L} \sum_{i=1}^{N} \sqrt{\sum_{j=1}^{D} \big(x_{ij}(t) - \bar{x}_j(t)\big)^2}, \tag{6}$$

$$\bar{x}_j(t) = \frac{1}{N} \sum_{i=1}^{N} x_{ij}(t), \tag{7}$$

where N and L represent the population size and the length of the diagonal of the search space, respectively, D is the dimension, and $\bar{X}(t) = \{\bar{x}_j(t)\}_{j=1}^{D}$ is the population center.
Environmental stimulus is defined as:

$$S(t) = Diversity(t) \cdot p, \tag{8}$$

where p indicates the sensitivity coefficient of the stimulus to diversity. Referring to the value in reference [25] and our simulation experiments, p is set to 50 in this paper.
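Equations (6)–(8) can be sketched as follows. Reading L as the diagonal length of the search space is our assumption, as is the function name:

```python
import numpy as np

def environmental_stimulus(X, low, up, p=50.0):
    """Eqs. (6)-(8): diversity-based stimulus S(t) = Diversity(t) * p."""
    N, _ = X.shape
    L = np.linalg.norm(up - low)        # assumed: diagonal of the search space
    center = X.mean(axis=0)             # Eq. (7)
    diversity = np.sqrt(((X - center) ** 2).sum(axis=1)).sum() / (N * L)  # Eq. (6)
    return diversity * p                # Eq. (8)

low, up = np.full(2, -10.0), np.full(2, 10.0)
spread = np.array([[-9.0, -9.0], [9.0, 9.0], [-9.0, 9.0], [9.0, -9.0]])
collapsed = np.zeros((4, 2))
S_spread = environmental_stimulus(spread, low, up)
S_collapsed = environmental_stimulus(collapsed, low, up)
```

A well-spread population gives S ≥ 1 (consumption is executed), while a fully converged population gives S = 0 (decomposition is executed).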
When $S(t) \ge 1$, the diversity of the population is good and the algorithm is in the exploration stage. Increasing the exploitation potential of the algorithm while carrying out exploration at this moment helps to improve the calculation accuracy. Therefore, when $S(t) \ge 1$, the consumption stage of the AEO algorithm is executed, and the update formula is modified using a competition relationship to enhance the exploitation ability. When $S(t) < 1$, the population diversity is relatively poor and the individuals have converged to the vicinity of the optimal solution. At this time, exploitation should be strengthened to help the algorithm seek out the best solution; nevertheless, at the same time, the spread of the population should be maintained to prevent individuals from getting trapped in a local optimum. Therefore, when $S(t) < 1$, the decomposition stage of the AEO algorithm is executed, in which individuals learn from the best individual and a random mean is integrated to enhance exploration. By balancing the global and local searching of AEO in this way, the calculation precision of the algorithm is improved.

3.2. Incentive Mechanism

In the incentive mechanism, the individual’s success in performing a search task refers to the improvement of its fitness, and the individual’s propensity to perform a task is measured by the cumulative success rate. The cumulative success rate should increase when the individual successfully performs the task and decrease when the individual fails to perform the task. The greater the cumulative success rate, the greater the propensity to perform the task, and vice versa.
By absorbing the advantages of the incentive mechanism, the three different consumer predation modes (Task 1), (Task 2) and (Task 3) in the consumption stage are regarded as three different search tasks. Therefore, we utilize the cumulative times of individual  i  successfully executing task  j  until the  t -th iteration to calculate the cumulative success rate. The specific methods are as follows:
(a)
Cumulative success rate initialization. For each individual $X_i(0) = (x_{i1}(0), x_{i2}(0), \ldots, x_{iD}(0))$ in the initial population, a three-dimensional cumulative success rate vector $CSR_i(0) = (CSR_{i1}(0), CSR_{i2}(0), CSR_{i3}(0))$ is initialized, where $CSR_{ij}(0) \in [0, 1]$ represents the initial cumulative success rate of the i-th individual performing the j-th task. The assignment method is:

$$CSR_{i1}(0) = CSR_{i2}(0) = rand(0, 0.4), \tag{9}$$

$$CSR_{i3}(0) = 1 - CSR_{i1}(0) - CSR_{i2}(0), \tag{10}$$

where rand(a, b) stands for a random value in [a, b], and Equation (10) ensures that the elements sum to 1 after initialization. Then initialize the three-dimensional count vector $Count_i(0) = (Count_{i1}(0), Count_{i2}(0), Count_{i3}(0))$, which stores the number of times individual i successfully executes task j, with $Count_{ij}(0) = 0$.
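A sketch of this initialization (Equations (9)–(10)); the vectorized layout, one row per individual, is an implementation choice:

```python
import numpy as np

def init_csr(N, rng=None):
    """Eqs. (9)-(10): CSR_i1 = CSR_i2 = rand(0, 0.4), CSR_i3 = 1 - CSR_i1 - CSR_i2."""
    rng = np.random.default_rng(rng)
    csr = np.empty((N, 3))
    csr[:, 0] = rng.uniform(0, 0.4, N)
    csr[:, 1] = csr[:, 0]                 # the same draw is shared by tasks 1 and 2
    csr[:, 2] = 1 - csr[:, 0] - csr[:, 1]
    counts = np.zeros((N, 3), dtype=int)  # Count_ij(0) = 0
    return csr, counts

csr, counts = init_csr(5, rng=0)
```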
(b)
Cumulative success rate update process. Firstly, the update process of the count vector $Count_i(t)$ is introduced. Let $Flag_{ij}(t)$ represent the identifier of the j-th task performed by the i-th individual in the t-th generation: when individual i performs task j, $Flag_{ij}(t) = 1$; otherwise, it is 0. Since each individual in each generation performs only one of the three tasks in Equation (4), for individual i only one element of $Flag_{ij}(t)$ (j = 1, 2, 3) is 1. Let $X_i^{new}(t)$ represent the new individual obtained by individual i in the t-th generation after performing task j, so that $Flag_{ij}(t) = 1$. If $X_i^{new}(t)$ is superior to $X_i(t)$, that is, individual i successfully performs task j, then the probability of executing task j next time should be increased while the chance of executing the other tasks should be weakened. The j-th element of the count vector $Count_i(t+1)$ increases by 1 while the remaining elements stay unchanged, that is,

$$\begin{cases} Count_{ij}(t+1) = Count_{ij}(t) + 1 \\ Count_{ik}(t+1) = Count_{ik}(t), \; k \ne j \end{cases} \quad \text{if } Flag_{ij}(t) = 1 \text{ and } fit(X_i^{new}(t)) < fit(X_i(t)), \tag{11}$$

where $fit(\cdot)$ refers to the individual's fitness, which is defined as the objective function value in this paper.

If $X_i^{new}(t)$ is inferior to $X_i(t)$, that is, individual i fails to execute task j, then the probability of executing task j next time should be reduced and the chance of executing the other tasks should be increased. The j-th element of the count vector $Count_i(t+1)$ remains unchanged, while the other elements increase by 1, that is,

$$\begin{cases} Count_{ij}(t+1) = Count_{ij}(t) \\ Count_{ik}(t+1) = Count_{ik}(t) + 1, \; k \ne j \end{cases} \quad \text{if } Flag_{ij}(t) = 1 \text{ and } fit(X_i^{new}(t)) \ge fit(X_i(t)). \tag{12}$$

At this point, the cumulative success rate vector $CSR_i(t+1)$ of the next generation of individual i is computed from the count vector $Count_i(t+1)$ as:

$$CSR_{ij}(t+1) = \frac{Count_{ij}(t+1)}{2\,(N-1)\,t}. \tag{13}$$
For each individual, the cumulative success rate vector $CSR_i(t+1) = (CSR_{i1}(t+1), CSR_{i2}(t+1), CSR_{i3}(t+1))$ can be calculated. Let $CSR_i^*(t+1) = \max_{1 \le j \le 3} \{CSR_{ij}(t+1)\}$ and take $CSR_i^*(t+1)$ as the indicator of the task to be executed in the next consumption phase. In the consumption phase of AEO, the corresponding updates are executed with equal probability without considering the incentive mechanism. Now, Equation (13) is used to calculate the cumulative success rate of each individual performing the three tasks, and the strategy corresponding to task * is determined according to the task-propensity indicator $CSR_i^*(t+1)$, so as to motivate individuals to explore more promising areas in a more appropriate way. The pseudocode of the incentive mechanism is given in Algorithm 1.
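The count and success-rate updates of Equations (11)–(13) can be sketched for a single individual; the helper names and the per-individual (rather than vectorized) form are our own choices:

```python
import numpy as np

def update_counts(counts_i, task, improved):
    """Eqs. (11)-(12): counts_i is one individual's 3-element count vector."""
    counts_i = counts_i.copy()
    if improved:                      # success: only the executed task is credited
        counts_i[task] += 1
    else:                             # failure: the other two tasks are credited
        counts_i[[k for k in range(3) if k != task]] += 1
    return counts_i

def csr_from_counts(counts_i, t, N):
    """Eq. (13): CSR_ij(t+1) = Count_ij(t+1) / (2 (N - 1) t)."""
    return counts_i / (2 * (N - 1) * t)

c = update_counts(np.zeros(3, dtype=int), task=0, improved=True)
c = update_counts(c, task=0, improved=False)
csr = csr_from_counts(c, t=1, N=6)
```

The task with the largest CSR entry (obtained with, e.g., `np.argmax(csr)`) is the one executed in the next consumption phase.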
Algorithm 1: Incentive mechanism (N, X(t), CSR(t), Flag)
1:  For the t-th generation population X(t) = {X_i(t)}, i = 1, ..., N
2:    Calculate the population diversity Diversity(t) and the environmental stimulus S(t)
3:    if S(t) ≥ 1
4:      for i = 1:N
5:        CSR_i*(t) = max_j(CSR_ij(t))
6:        Perform Task *; Flag_i*(t) = 1
7:      end for   % obtain the updated population X^new(t) = {X_i^new(t)}, i = 1, ..., N
8:      for i = 1:N
9:        for j = 1:3
10:         if Flag_ij == 1 && fit(X_i^new(t)) < fit(X_i(t))
11:           Update the optimal solution and fitness value
12:           Count_ij(t+1) = Count_ij(t) + 1; Count_ik(t+1) = Count_ik(t), k ≠ j
13:         elseif Flag_ij == 1 && fit(X_i^new(t)) >= fit(X_i(t))
14:           Count_ij(t+1) = Count_ij(t); Count_ik(t+1) = Count_ik(t) + 1, k ≠ j
15:         end if
16:         CSR_ij(t+1) = Count_ij(t+1) / (2(N−1)t)
17:         Flag_ij(t+1) = 0
18:       end for
19:     end for
20:   else
21:     for i = 1:N
22:       Execute the decomposition phase
23:     end for
24:   end if
Output: the (t+1)-th generation population

3.3. Updating Based on Competition Mechanism

Due to ecological competition, individuals in the consumption stage of an ecosystem eat both lower-ranking organisms and higher-ranking natural enemies. In Equation (4), the influence of natural enemies is not taken into account, which gives the algorithm strong exploration ability but poor exploitation ability in the consumption stage. In Equation (5), the algorithm tends to fall into local optima because only the guidance of the optimal individual is considered. When choosing the execution strategy of the algorithm according to the environmental stimulus, each strategy should have both exploration and exploitation performance. Therefore, the influence of predators is incorporated into Equation (4), and the task to be executed is determined by the maximum cumulative success rate: if $CSR_i^*(t) = \max_j(CSR_{ij}(t))$, then Task * is executed and the individual is updated as

$$X_i(t+1) = \begin{cases} X_i(t) + C\,(X_i(t) - X_1(t)), & i \in [2, N], & (\text{Task 1})\\ X_i(t) + C\,\big(r_5\,(X_i(t) - X_k(t)) + (1 - r_5)\,(X_j(t) - X_i(t))\big), & i \in [3, N], & (\text{Task 2})\\ X_i(t) + C\,\big(r_6\,(X_i(t) - X_1(t)) + (1 - r_6)\,(X_i(t) - X_k(t)) + r_7\,(X_j(t) - X_i(t))\big), & i \in [3, N], & (\text{Task 3}) \end{cases} \tag{14}$$

where k is a random integer in the interval [2, i−1], j is a random integer in [i+1, N], and $r_5, r_6, r_7$ are random numbers in (0, 1).
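A sketch of the competition-based update of Equation (14), with 0-based indices (X[0] is the producer); the function name and the inline sampling of C, k and j are our own choices, and the sketch assumes i is not the last-ranked individual so that a predator j > i exists:

```python
import numpy as np

def compete_consume(X, i, task, rng):
    """Eq. (14): consumption with predator pressure from a lower-ranked X[j]."""
    v1, v2 = rng.standard_normal(2)
    C = 0.5 * v1 / abs(v2)                            # Eq. (3)
    N = len(X)
    k = 1 if i <= 2 else int(rng.integers(1, i - 1))  # prey ranked before i
    j = int(rng.integers(i + 1, N))                   # predator ranked after i
    if task == 1:
        return X[i] + C * (X[i] - X[0])
    if task == 2:
        r5 = rng.random()
        return X[i] + C * (r5 * (X[i] - X[k]) + (1 - r5) * (X[j] - X[i]))
    r6, r7 = rng.random(2)
    return X[i] + C * (r6 * (X[i] - X[0]) + (1 - r6) * (X[i] - X[k])
                       + r7 * (X[j] - X[i]))

rng = np.random.default_rng(3)
X = rng.random((8, 2))
out = compete_consume(X, 4, task=3, rng=rng)
```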
In Equation (5), the pull of random individuals is integrated to enable the algorithm to escape from local optima. For two randomly selected individuals $X_{rand1}(t)$ and $X_{rand2}(t)$ in the current population, a more reasonable update formula is put forward to replace the decomposition strategy of the original algorithm; Equation (15) replaces the original position-update strategy of Equation (5):

$$X_i(t+1) = X_N(t+1) + 3u\,\big(e\,X_N(t+1) - h\,X_i^{new}(t+1)\big) + \frac{X_{rand1}(t) + X_{rand2}(t)}{2}. \tag{15}$$

The other parameters are the same as in Equation (5). In Equation (15), the insertion of two heterogeneous individuals from the population increases the breadth of exploration and helps the algorithm avoid local optima.
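A sketch of the modified decomposition of Equation (15); as in the Equation (5) sketch, treating u as a scalar is an assumption:

```python
import numpy as np

def decompose_with_mean(X_best, X_i_new, X_rand1, X_rand2, rng):
    """Eq. (15): Eq. (5) plus the mean of two randomly selected individuals."""
    u = rng.standard_normal()
    r4 = rng.random()
    e = r4 * rng.integers(1, 3) - 1
    h = 2 * r4 - 1
    return (X_best + 3 * u * (e * X_best - h * X_i_new)
            + (X_rand1 + X_rand2) / 2)

rng = np.random.default_rng(4)
out = decompose_with_mean(np.ones(2), np.zeros(2),
                          np.full(2, 0.5), np.full(2, -0.5), rng)
```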

3.4. Implementation Steps of SIAEO Algorithm

With the above preparations, the incentive mechanism and competition relationship are integrated into AEO: the environmental stimulus is defined according to population diversity, and the next-generation search strategy is determined by the cumulative success rate of the search tasks, which effectively improves the convergence speed and calculation precision of AEO. The balance between global search and local optimization is further promoted by modifying the update formulas of the consumption and decomposition stages through the competition relationship. Based on this, this paper proposes an improved artificial ecological optimization algorithm (SIAEO) that integrates the environmental stimulus and competition mechanisms. The execution steps of SIAEO are as follows:
Step 1: Set algorithm parameters.
Step 2: According to Equation (1), the initial population  X ( 0 ) = { X i ( 0 ) } i = 1 N  is randomly generated and the cumulative success rate of each individual performing three different search tasks is initialized and assigned.
Step 3: The diversity (Diversity) of the population and the environmental stimulus S are calculated through Equations (6)–(8), and the individuals are rearranged according to their fitness values.
Step 4: According to the numerical value of  S , the corresponding strategy is executed to obtain the new population, as follows:
If S ≥ 1, execute the consumption operator: each individual is updated with the corresponding formula in Equation (14) according to its maximum cumulative success rate, and the cumulative success rate of each individual performing the three search tasks is then recalculated.
If S < 1, use Equation (15) to perform the decomposition operator.
Step 5: Judge whether t < T holds; if so, set t = t + 1 and return to Step 3. Otherwise, the algorithm ends and outputs the best solution and the corresponding function value.
The working frame of the SIAEO is displayed in Figure 1.
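The five steps above can be condensed into a runnable sketch. It deliberately simplifies the paper's procedure (consumption uses only the herbivore rule, Task 1, and the CSR bookkeeping is omitted), so it illustrates the stimulus-driven interaction of the two operators under stated assumptions rather than reproducing the full SIAEO:

```python
import numpy as np

def siaeo_sketch(f, low, up, N=30, T=200, p=50.0, seed=0):
    rng = np.random.default_rng(seed)
    D = len(low)
    diag = np.linalg.norm(up - low)                 # L in Eq. (6), assumed diagonal
    X = low + rng.random((N, D)) * (up - low)       # Eq. (1)
    fit = np.apply_along_axis(f, 1, X)
    for t in range(1, T + 1):
        order = np.argsort(fit)[::-1]               # descending fitness: best is X[-1]
        X, fit = X[order], fit[order]
        best = X[-1].copy()
        # Environmental stimulus, Eqs. (6)-(8)
        S = p * np.sqrt(((X - X.mean(0)) ** 2).sum(1)).sum() / (N * diag)
        Xn = X.copy()
        a = rng.random() * (1 - t / T)              # producer update, Eq. (2)
        Xn[0] = (1 - a) * best + a * (low + rng.random(D) * (up - low))
        if S >= 1:                                  # consumption (Task 1 only)
            for i in range(1, N):
                C = 0.5 * rng.standard_normal() / abs(rng.standard_normal())
                Xn[i] = X[i] + C * (X[i] - X[0])
        else:                                       # decomposition, Eq. (15)
            for i in range(1, N):
                u, r4 = rng.standard_normal(), rng.random()
                e, h = r4 * rng.integers(1, 3) - 1, 2 * r4 - 1
                r1, r2 = rng.integers(0, N, 2)
                Xn[i] = best + 3 * u * (e * best - h * X[i]) + (X[r1] + X[r2]) / 2
        Xn = np.clip(Xn, low, up)
        fn = np.apply_along_axis(f, 1, Xn)
        keep = fn < fit                             # greedy one-to-one selection
        X[keep], fit[keep] = Xn[keep], fn[keep]
    return X[np.argmin(fit)], fit.min()

best_x, best_f = siaeo_sketch(lambda x: float((x ** 2).sum()),
                              np.full(5, -10.0), np.full(5, 10.0))
```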

3.5. Time Complexity Analysis of the Algorithm

In SIAEO, with N, D and T as defined above, the time complexity of the initialization phase is O(ND). The time complexities of the population-diversity computation and the production phase are O(NDT) and O(DT), while the consumption and decomposition phases are both O(NDT). According to the incentive mechanism, only one of the two can be executed in an iteration; but because the consumption and decomposition operators each contain both exploration and exploitation capacity, the stimulus-driven incentive mechanism alternates between these two operators and better balances the relationship between them. The complexity of evaluating the fitness of every individual is O(NT), and the complexity of sorting the fitness values is O(N log N · T). Consequently, the total computational time complexity of the SIAEO algorithm is:

$$O(SIAEO) = O\big((N + T)\,D + (2ND + N + N\log N)\,T\big). \tag{16}$$

4. Numerical Experiment

To test the superiority of SIAEO in solving complicated problems and its practical applicability, the recent standard test set CEC2019 is first used in Section 4.1 to analyze the validity of each improvement strategy separately. In Section 4.2, the CEC2017 test set is selected for high-dimensional numerical experiments, and CEC2019 is used to test the performance differences between SIAEO and other algorithms. In Section 4.3, SIAEO is applied to four engineering minimization problems. Finally, the high-dimensional K-means clustering problem is selected to demonstrate the precision and accuracy of clustering with SIAEO.

4.1. Strategy Effectiveness Analysis

To validate the influence of each single improvement strategy on the SIAEO algorithm, experiments were set up to compare SIAEO with SAEO, which only adds the environmental stimulus strategy; CAEO, which only adds the incentive mechanism and biological competition strategy in the consumption stage; DAEO, which only adds the local escape operator in the decomposition stage; and the original AEO [14].

4.1.1. Test Functions

The effectiveness of the improved strategies was evaluated using the ten multi-modal functions of the CEC2019 standard test set [26]. Table 1 gives the range of variables, the dimensions and the theoretical optimal value F(x*) of the CEC2019 test functions. Although the dimensions of the CEC2019 test set are not high, each function has many local optima; among them, f4, f6, f7 and f8 have especially many, which greatly challenges the global minimization ability of an intelligent algorithm.

4.1.2. Strategy Effectiveness Verification of the SIAEO

Table 2 compares the outcomes of SIAEO with the SAEO, CAEO, DAEO and AEO algorithms on the CEC2019 test set. The same parameter settings were used for all five algorithms: 30 independent runs, N = 50, and a maximum of N × 10,000 function evaluations; the average (Ave) and standard deviation (Std) of the error between the obtained optimum and the theoretical value over the 30 runs, as well as the average CPU time of the 30 runs, were calculated. Two-tailed t-tests and Friedman tests were applied for the statistical comparison of all the methods of concern. Bold data represent the optimal results among the five algorithms; (+), (=) and (−) respectively signify that, under a two-tailed t-test at significance level α = 0.05, SIAEO is superior to, equal to, or inferior to the comparison algorithm, and the (#) line records the number of corresponding results. The Friedman rank gives the ordering produced by the Friedman test.
It can be noted from Table 2 that, on the f1 function, all five algorithms reach the optimal value, while the SAEO, CAEO, DAEO and SIAEO each have advantages on the remaining test functions. Among them, the SAEO performs best on the f3 and f4 functions. The CAEO obtains the best mean and standard deviation on the f2 function. While the DAEO achieves the best average results on three functions (f6, f8 and f9), the SIAEO captures the best average results on the f5, f7 and f10 functions. On the whole, each strategy added individually advances the AEO algorithm in some way, and each strategy affects different test functions differently. Therefore, the SIAEO algorithm, which combines the three improved strategies, is more promising and competitive.

4.1.3. Analysis of Statistical Test Results

The last four lines of Table 2 show that the results of the two statistical tests, the two-tailed t-test and the Friedman test, demonstrate the clear superiority of the improved SIAEO algorithm. The number of functions on which the AEO, SAEO, CAEO and DAEO perform similarly to the SIAEO is 2, 6, 4 and 7, respectively, while the number of functions on which they are inferior to the SIAEO is 7, 3, 5 and 2, respectively. On the one hand, this shows that the SIAEO algorithm has significant advantages and superior performance over the AEO algorithm and confirms the effectiveness of the improvement strategies. On the other hand, it also reveals the orientation and degree of influence of the three strategies in the iterative optimization process. The DAEO algorithm, which only introduces the local escape operator in the decomposition stage, benefits more than the other strategies: on as many as seven functions its performance is close to the SIAEO's. The SAEO, which only attaches the environmental stimulus mechanism to reduce computational complexity, performs close to the SIAEO on six functions.
The Friedman test ranks the overall performance of the five algorithms, with smaller values indicating a better algorithm. The SIAEO algorithm obtained the minimum value of 2.419 and ranked first overall, followed by the DAEO, CAEO, SAEO and AEO algorithms, which demonstrates the superiority of the improved strategies.

4.1.4. The Tendency Analysis of Consumption Operator

There are three predation modes in the consumption stage, denoted Task1, Task2 and Task3. As the iterations proceed, the algorithm shifts from global exploration to local exploitation. The original algorithm, however, executes the three tasks with equal probability, which wastes part of the search resources in the late iterations; the incentive mechanism improves this situation to a large extent. In each iteration, we recorded how often individuals performed each of the three tasks and accumulated these counts over the iterations. Figure 2 shows the results on the selected test functions of the CEC2019. The number of executions of Task2 is much higher than that of the other tasks, followed by Task3, while the number of executions of Task1 decreases in the final stage of the iteration.
As Figure 2 shows, in the early iterations of the SIAEO, the algorithm emphasizes global exploration and executes the consumption operator, so the cumulative number of task executions increases with each iteration. The improved carnivorous and weedy strategies have a higher probability of being selected because they combine exploration and exploitation, which results in a higher slope of the corresponding cumulative curves. At the end of the iteration process, environmental stimuli guide the algorithm to perform decomposition operations; since the consumption operator is no longer executed, individuals stop performing the three tasks and the cumulative curves flatten into horizontal lines.
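A minimal sketch of the incentive idea described above: each individual tracks cumulative successes per task and mostly executes the task with the highest cumulative success rate. The epsilon-exploration step and the per-task success probabilities here are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tasks = 3                  # Task1, Task2, Task3 (the three predation modes)
success = np.zeros(n_tasks)  # cumulative successful executions per task
attempts = np.ones(n_tasks)  # executions per task (init at 1 to avoid 0/0)

def pick_task(eps=0.1):
    # Mostly greedy: run the task with the maximum cumulative success rate,
    # with a small exploration probability so no task is starved early on.
    if rng.random() < eps:
        return int(rng.integers(n_tasks))
    return int(np.argmax(success / attempts))

# Simulated run: a task "succeeds" when it improves the individual's fitness;
# here the success is drawn with illustrative per-task probabilities.
task_success_prob = (0.2, 0.6, 0.4)
for _ in range(200):
    task = pick_task()
    attempts[task] += 1
    if rng.random() < task_success_prob[task]:
        success[task] += 1

print(success / attempts)  # empirical success rate of each task
```

Over many iterations the task with the highest empirical success rate tends to accumulate the most executions, which mirrors the unequal slopes of the cumulative curves in Figure 2.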

4.2. Numerical Optimization Experiment

4.2.1. Parameter Setting

To show the effectiveness and stability of the SIAEO in handling high-dimensional optimization problems, a total of 40 benchmark test functions from the CEC2019 [26] and CEC2017 [27] were numerically optimized. The CEC2017 test set comprises 30 functions, including single-peak functions (F1–F3), simple multi-peak functions (F4–F10), mixed functions (F11–F20) and composite functions (F21–F30). In the experiment, the SIAEO algorithm was compared with the AEO [14] and its improved variants IAEO [15] and EAEO [16], as well as other representative swarm intelligence algorithms, including the AOA [28], TSA [29], HHO [2], QPSO [30], WOA [31] and GWO [32]. Table 3 shows the parameter values of each algorithm used in the experiment. For fairness, all algorithms were run independently 30 times with  N = 50 ,  D = 100  and a maximum of  N × 10 , 000  function evaluations. The computer is configured with an Intel(R) Core i7 CPU at 3.60 GHz, 8 GB of memory and the Windows 7 64-bit operating system, and the programs were written in MATLAB R2014a.

4.2.2. Comparison of Results between SIAEO and Comparison Algorithm

Table 4 and Table 5 report the results of the SIAEO and nine other algorithms running the CEC2017 and CEC2019 test sets independently 30 times; the Ave, Std and average CPU Time of the 30 runs are calculated. Bold data indicate the minimum value obtained across the algorithms. The symbols "+", "−" and "=" in parentheses after Time indicate that the SIAEO is significantly better than, worse than or similar to the competing algorithm when the t-test is executed between the SIAEO and each of the nine comparison algorithms. The last four rows of Table 4 and Table 5 show the number of best solutions achieved on all test functions ((#)best), the "+", "−" and "=" tallies of the t-test, and the Friedman test results.
The data in Table 4 demonstrate that the SIAEO performed best on the high-dimensional and challenging CEC2017 test set, achieving the largest number of optimal solutions with a (#)best value of 21. Specifically, the SIAEO obtained the best results on six of the seven simple multi-peak functions (F4–F10), on seven of the mixed functions (F11–F20) and on eight of the composite functions (F21–F30). On the three unimodal functions (F1–F3), its performance is slightly inferior to that of the EAEO. In general, the SIAEO performs slightly less well on unimodal functions, but its performance on simple multi-peak, mixed and composite functions is the most promising. Thus, the SIAEO algorithm has more advantages and more comprehensive performance in solving high-dimensional complex optimization problems.
The data in Table 5 illustrate that the SIAEO's (#)best value is nine, which indicates that the SIAEO holds a significant advantage on the fixed-dimension CEC2019 test set. In conclusion, the SIAEO algorithm shows more promising performance on both the high-dimensional CEC2017 test set and the fixed-dimension CEC2019 test set, which confirms that the SIAEO outperforms the other nine comparison algorithms in terms of scalability.
Figure 3 compares the convergence curves of the SIAEO and the other algorithms on three simple multi-peak functions (F5, F6 and F8), three mixed functions (F15, F16 and F19) and three composite functions (F21, F23 and F24) of the CEC2017 test set. For ease of observation, the convergence curves are plotted with  log 10 ( F ( x ) )  as the ordinate. Figure 3 illustrates that, although the SIAEO algorithm converges more slowly than the AEO on the simple multi-peak, mixed and composite functions, it has strong exploration and exploitation performance: it is clearly better than the other nine reference algorithms in solution precision and has a strong ability to escape local optima.

4.2.3. Analysis of Statistical Test Results

Statistical testing [33] is an essential mathematical method for analyzing the results of intelligent optimization algorithms. At a significance level of  α = 0.05 , three statistical tests, the two-tailed t-test, the Wilcoxon signed-rank test and the Friedman test, are used to assess the performance of the comparison algorithms.
(1)
Two-tailed t-test
In Table 4 and Table 5, the t-test results of the SIAEO against the other nine comparison algorithms are given in parentheses after Time, and (#)best gives the number of winning results. To assess all algorithms comprehensively, the comprehensive performance CP is defined as the number of wins minus the number of losses, i.e., "(#)+" − "(#)−". When the CP is positive, i.e., CP > 0, the SIAEO is superior to the comparison algorithm, and vice versa. Table 6 shows the CP values of the SIAEO against the other comparison algorithms on the CEC2017 and CEC2019 test sets. The data in Table 6 clearly indicate that the SIAEO is an excellent algorithm on the CEC2017 test set. The CP value of the SIAEO compared with the AEO is 13, which exhibits the superiority of the SIAEO amendment scheme. For the EAEO and IAEO, the CP values are 29 and 16, respectively, which suggests that the proposed algorithm is significantly superior to both. Against the other six intelligent algorithms the SIAEO achieves an overwhelming superiority, reaching the maximum CP value of 30 against four of them, with CP values of 28 against the WOA and HHO. For the CEC2019 test set, the CP values of the SIAEO are positive against the AEO, EAEO and IAEO, while against the other six comparison algorithms they are close to the maximum. Overall, this shows that the SIAEO algorithm is more advantageous and promising for high-dimensional minimization problems.
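The CP measure is straightforward to compute from the t-test tallies; the counts in the usage line below are illustrative, not values from Table 6.

```python
def comprehensive_performance(n_plus: int, n_minus: int) -> int:
    """CP = "(#)+" - "(#)-": positive means the SIAEO beats the comparison algorithm."""
    return n_plus - n_minus

# Hypothetical tallies over a 30-function test set: 21 wins, 8 losses, 1 tie
print(comprehensive_performance(21, 8))  # CP = 13
```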
(2)
Wilcoxon signed-rank test
For the CEC2017 and CEC2019 test sets, the Wilcoxon signed-rank test at a significance level of 0.05 was used to test the difference between the SIAEO and the other competitive algorithms. In Table 7, R+ and R− represent the sums of the ranks over all test functions on which the SIAEO is better or worse, respectively, than its competitor. The greater the difference between R+ and R−, the better the performance of the algorithm. p is the probability value corresponding to the test result; p < 0.05 means that a significant difference exists between the two algorithms. Significance denotes the significance of the difference: "+" indicates that the underlying algorithm is prominently better than the comparison algorithm, and "=" indicates similar performance. Table 7 shows that, on the CEC2019 test set, the SIAEO significantly outperforms all nine algorithms except the IAEO: not only is p < 0.05, but R+ far exceeds R−, and against the EAEO, AOA, TSA, HHO, QPSO, WOA and GWO, R− is 0. On the CEC2017 test set, although the SIAEO is not significantly superior to the EAEO and AEO in terms of p, it shows more potential than both in R+ and R−. In general, the SIAEO has a significant advantage over the nine comparison algorithms based on the p-value; moreover, it outperforms all competitors in R+ and R− and shows more promising performance in high-dimensional cases.
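The quantities in Table 7 can be sketched with SciPy's `wilcoxon` plus a manual R+/R− tally; the paired error samples below are synthetic, with the rival made consistently worse.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical mean errors of two algorithms on 30 test functions (paired)
siaeo = rng.normal(1.0, 0.1, size=30)
rival = siaeo + np.abs(rng.normal(0.5, 0.1, size=30))  # rival always worse here

res = stats.wilcoxon(siaeo, rival, alternative="two-sided")

# R+ / R- tally: rank the absolute paired differences, then sum the ranks on
# the functions where the SIAEO has the smaller (better) error.
diff = siaeo - rival
ranks = stats.rankdata(np.abs(diff))
r_plus = ranks[diff < 0].sum()
r_minus = ranks[diff > 0].sum()
print(res.pvalue, r_plus, r_minus)
```

With 30 functions the ranks sum to 465, so an R− of 0 (as reported against several competitors) means the SIAEO won on every single function.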
(3)
The Friedman test
The nonparametric Friedman test [34] was used to assess the overall performance of the ten comparison algorithms. During the test, the average values of the 30 independent runs were used as input data to calculate the overall ranking of each algorithm. The Friedman test results and rankings for the CEC2017 and CEC2019 test sets are shown in the last rows of Table 4 and Table 5. The results indicate that the SIAEO ranked first both in the 100-dimensional CEC2017 and in the fixed-dimension CEC2019, with the AEO, EAEO and IAEO ranking 2nd, 3rd and 4th, respectively. In addition, the ranking does not change with dimension, illustrating that the SIAEO has significant robustness and the ability to solve high-dimensional problems. Figure 4 shows the differences in the Friedman rankings and illustrates the SIAEO's prominent advantage over the other algorithms.
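The Friedman ranking procedure can be sketched with SciPy; the per-function error matrix below is synthetic, with the first column made deliberately best so the resulting mean ranks are unambiguous.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical mean errors of three algorithms on 10 functions (rows = functions)
errs = np.column_stack([
    rng.uniform(0.0, 0.2, 10),   # algorithm A: consistently smallest errors
    rng.uniform(0.4, 0.6, 10),   # algorithm B
    rng.uniform(0.8, 1.0, 10),   # algorithm C: consistently largest errors
])

# Friedman test over the three paired samples
stat, p = stats.friedmanchisquare(errs[:, 0], errs[:, 1], errs[:, 2])

# Per-function ranks (1 = best), averaged per algorithm as in Table 4/5
mean_ranks = stats.rankdata(errs, axis=1).mean(axis=0)
print(p, mean_ranks)
```

The algorithm with the smallest mean rank is reported as ranked first, which is how the overall orderings in Table 4, Table 5 and Figure 4 are obtained.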

4.3. Engineering Optimization Experiment

In this subsection, to validate the ability of the SIAEO to solve optimization problems in real-world applications, four real-world engineering problems are solved with the proposed algorithm. In the comparison results, the optimal value is shown in bold font.

4.3.1. Three-Bar Truss Design Problem

This example considers a three-bar planar truss structure (Figure 5) whose volume is minimized subject to stress constraints on each truss member [35]. With the cross-sectional areas  A 1 ,  A 2  as decision variables, collected in the vector  x = ( x 1 , x 2 ) = ( A 1 , A 2 ) , the problem can be mathematically modeled as follows:
$$
\begin{aligned}
\min\ & f_{eo1}(x) = \left(2\sqrt{2}\,x_1 + x_2\right) l \\
\text{s.t.}\ & g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0, \\
& g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0, \\
& g_3(x) = \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0, \\
& 0 \le x_1, x_2 \le 1.
\end{aligned}
$$
The results of the SIAEO and other excellent comparative methods are given in Table 8, which also lists the best decision variables of the optimal solution for all compared approaches. Table 8 shows that, by supplying the optimal variables  x * = ( x 1 * , x 2 * ) = ( 0.788675136 , 0.40824828 )  with minimum objective function value  f e o 1 ( x * ) = 263.895843 , the proposed SIAEO achieved better or comparable outcomes than the other optimization methods.
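A sketch of the truss model above in Python, assuming the commonly used parameter values l = 100 cm and P = σ = 2 kN/cm² (the excerpt does not restate them, so they are an assumption here); evaluating at the reported optimum recovers the reported volume.

```python
import math

# Assumed standard parameter values for the three-bar truss benchmark:
# member length l = 100 cm, load P = 2 kN/cm^2, stress limit sigma = 2 kN/cm^2.
L_BAR, P, SIGMA = 100.0, 2.0, 2.0

def f_eo1(x1, x2):
    """Truss volume to be minimized."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def constraints(x1, x2):
    """Stress constraints g1..g3; the design is feasible when all are <= 0."""
    s2 = math.sqrt(2.0)
    denom = s2 * x1**2 + 2.0 * x1 * x2
    g1 = (s2 * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (x1 + s2 * x2) * P - SIGMA
    return (g1, g2, g3)

x_star = (0.788675136, 0.40824828)
vol = f_eo1(*x_star)                            # ~263.8958, matching Table 8
ok = all(g <= 1e-4 for g in constraints(*x_star))  # g1 is active at the optimum
print(vol, ok)
```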

4.3.2. Himmelblau’s Nonlinear Problems

Himmelblau's nonlinear problem [40], formulated with the vector  x = ( x 1 , x 2 , x 3 , x 4 , x 5 ) , can be stated mathematically as:
$$
\begin{aligned}
\min\ & f_{eo2}(x) = 5.3578547\,x_3^2 + 0.8356891\,x_2x_5 + 37.293239\,x_1 - 40792.141 \\
\text{s.t.}\ & 0 \le g_1(x) = 85.334407 + 0.0056858\,x_2x_5 + 0.0006262\,x_1x_4 - 0.0022053\,x_3x_5 \le 92, \\
& 90 \le g_2(x) = 80.51249 + 0.0071317\,x_2x_5 + 0.0029955\,x_1x_2 - 0.0021813\,x_3^2 \le 110, \\
& 20 \le g_3(x) = 9.300961 + 0.0047026\,x_3x_5 + 0.0012547\,x_1x_3 + 0.0019085\,x_3x_4 \le 25, \\
& 78 \le x_1 \le 102, \quad 33 \le x_2 \le 45, \quad 27 \le x_3, x_4, x_5 \le 45.
\end{aligned}
$$
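The formulation above translates directly into Python; this sketch only evaluates the objective and checks feasibility of a hand-picked point (the point itself is illustrative, not a reported solution, and the penalty handling an optimizer would need is omitted).

```python
def f_eo2(x):
    """Himmelblau objective as formulated above."""
    x1, x2, x3, x4, x5 = x
    return (5.3578547 * x3**2 + 0.8356891 * x2 * x5
            + 37.293239 * x1 - 40792.141)

def feasible(x):
    """Check the double-sided constraints g1..g3 and the box bounds."""
    x1, x2, x3, x4, x5 = x
    g1 = 85.334407 + 0.0056858 * x2 * x5 + 0.0006262 * x1 * x4 - 0.0022053 * x3 * x5
    g2 = 80.51249 + 0.0071317 * x2 * x5 + 0.0029955 * x1 * x2 - 0.0021813 * x3**2
    g3 = 9.300961 + 0.0047026 * x3 * x5 + 0.0012547 * x1 * x3 + 0.0019085 * x3 * x4
    in_box = (78 <= x1 <= 102 and 33 <= x2 <= 45
              and all(27 <= v <= 45 for v in (x3, x4, x5)))
    return in_box and 0 <= g1 <= 92 and 90 <= g2 <= 110 and 20 <= g3 <= 25

x = (80.0, 34.0, 40.0, 40.0, 40.0)   # an arbitrary feasible point, for illustration
print(f_eo2(x), feasible(x))
```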
The best results obtained by the various methods are shown in Table 9, which reveals that the SIAEO outperforms the existing approaches and also has engineering practicability.

4.3.3. Tubular Column Design Problem

Figure 6 illustrates the uniform tubular column design problem [44]. Its purpose is to design a uniform tubular column at the lowest possible cost. The problem is described mathematically with the variable vector  x = ( x 1 , x 2 )  as follows:
$$
\begin{aligned}
\min\ & f_{eo3}(x) = 9.82\,x_1x_2 + 2\,x_1 \\
\text{s.t.}\ & g_1(x) = \frac{P}{\pi\,\sigma_y\,x_1x_2} - 1 \le 0, \quad
g_2(x) = \frac{8PL^2}{\pi^3 E\,x_1x_2\left(x_1^2 + x_2^2\right)} - 1 \le 0, \\
& g_3(x) = \frac{2.0}{x_1} - 1 \le 0, \quad g_4(x) = \frac{x_1}{14} - 1 \le 0, \\
& g_5(x) = \frac{0.2}{x_2} - 1 \le 0, \quad g_6(x) = \frac{x_2}{0.8} - 1 \le 0, \\
& 2 \le x_1 \le 14, \quad 0.2 \le x_2 \le 0.8.
\end{aligned}
$$
Although the tubular column design problem has few optimization variables, its many constraints increase the difficulty of finding feasible solutions. The best results of each algorithm on this problem are given in Table 10. The variance of the SIAEO method is clearly lower than that of the other approaches; in addition, the SIAEO attains the best result, supplying the optimal variables  x * = ( x 1 * , x 2 * ) = ( 5.4512 , 0.29167 )  with minimum objective function value  f e o 3 ( x * ) = 26.526 .

4.3.4. Gas Transmission Compressor Design Problem

This four-variable mechanical design problem was first proposed by Beightler and Phillips [46]. The goal is to find the variables that send natural gas through the gas pipeline transmission system at the lowest possible cost. The problem is described mathematically with the vector  x = ( x 1 , x 2 , x 3 , x 4 )  as follows:
$$
\begin{aligned}
\min\ & f_{eo4}(x) = 8.61\times 10^{5}\,x_1^{1/2}x_2\,x_3^{-2/3}x_4^{-1/2} + 3.69\times 10^{4}\,x_3 + 7.72\times 10^{8}\,x_1^{-1}x_2^{0.219} - 765.43\times 10^{6}\,x_1^{-1} \\
\text{s.t.}\ & g_1(x) = x_4x_2^{-2} + x_2^{-2} - 1 \le 0, \\
& 20 \le x_1 \le 50, \quad 1 \le x_2 \le 10, \quad 20 \le x_3 \le 50, \quad 0.1 \le x_4 \le 60.
\end{aligned}
$$
The objective function of this problem is complex and highly nonlinear, which puts forward higher requirements for the optimization algorithm.
The optimal results of the different methods on this problem are given in Table 11. The variance of the SIAEO method is clearly lower than that of the other approaches; in addition, the SIAEO attains the best result, supplying the optimal variables  x * = ( x 1 * , x 2 * , x 3 * , x 4 * ) = ( 50 , 1.17828 , 24.59259 , 0.38835 )  with minimum objective function value  f e o 4 ( x * ) = 2964895 .
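A sketch of the compressor model above; evaluating at the reported optimal variables approximately reproduces the reported objective value, and the constraint g1 is essentially active there.

```python
def f_eo4(x):
    """Gas transmission compressor cost as formulated above."""
    x1, x2, x3, x4 = x
    return (8.61e5 * x1**0.5 * x2 * x3**(-2.0 / 3.0) * x4**(-0.5)
            + 3.69e4 * x3
            + 7.72e8 * x2**0.219 / x1
            - 765.43e6 / x1)

def g1(x):
    """Single inequality constraint, feasible when <= 0."""
    x1, x2, x3, x4 = x
    return x4 / x2**2 + 1.0 / x2**2 - 1.0

x_star = (50.0, 1.17828, 24.59259, 0.38835)
cost = f_eo4(x_star)       # ~2.9649e6, matching Table 11
print(cost, g1(x_star))    # g1 is approximately 0 (active) at the optimum
```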

4.4. Application of SIAEO in Clustering Problem

K-means clustering is a very effective clustering method [51,52]. When the number of classes is fixed, determining the clustering centers is the key step of this method, and finding them can be cast as the optimization problem of minimizing the intra-class distance. For a data set  D a t a = { o 1 , o 2 , , o V }  of V samples divided into  M  classes, let  P i  be the cluster center of class  i ( i = 1 , 2 , , M )  and  V i  the number of data points in cluster i; the clustering centers  P i ( i = 1 , 2 , , M )  of K-means clustering are then the optimal solution of the following optimization model:
$$
\min \sum_{i=1}^{M}\sum_{j=1}^{V_i} \left\| o_j - P_i \right\|_2^2
$$
To demonstrate the SIAEO algorithm's competitiveness in solving high-dimensional practical problems by verifying its optimization ability on high-dimensional clustering, this section uses the SIAEO to search for the optimal clustering centers of K-means clustering.

4.4.1. SIAEO—K-Means Algorithm

Table 12 gives, for nine data sets from the UCI standard database [53], the number of data Instances, the number of Classes, the number of Features, and the Dimension of the decision variables in optimization problem (17). The process of solving optimization problem (17) with the SIAEO algorithm is as follows. First, N individuals of dimension Dimension are generated as N groups of clustering centers. For each individual, that is, each group of clustering centers, the K-means assignment step determines the number of data points contained in each class. The fitness of each individual is calculated with Formula (17), and the best individual is identified. Then, the SIAEO algorithm updates each individual until the optimal clustering centers are found.
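The fitness evaluation described above can be sketched as follows: a flat decision vector of length M × Features is reshaped into M centers, each sample is assigned to its nearest center, and the total intra-class squared distance of Formula (17) is returned (the function and variable names are illustrative, not from the paper).

```python
import numpy as np

def clustering_fitness(individual, data, m_classes):
    """Total within-cluster squared distance for one candidate set of centers."""
    centers = np.asarray(individual, dtype=float).reshape(m_classes, -1)  # (M, Features)
    # Squared Euclidean distance from every sample to every center
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)  # nearest-center assignment (K-means step)
    return d2[np.arange(len(data)), labels].sum()

# Tiny illustrative data set: two well-separated blobs in 2-D
data = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
good = [0.05, 0.0, 5.05, 5.0]    # centers near the blob means -> small fitness
bad = [2.5, 2.5, 2.5, 2.5]       # both centers in the middle  -> large fitness
print(clustering_fitness(good, data, 2), clustering_fitness(bad, data, 2))
```

In the SIAEO–K-means scheme, this function plays the role of the objective that each individual (a flattened set of M centers) is evaluated against during the search.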

4.4.2. Experimental Setup and Performance Evaluation

In the experiment,  N = 20 ,  T = 100 , and 30 independent runs were performed. The IAEO, EAEO, AEO, DE and PSO were selected and combined with K-means as comparison algorithms. Table 13 provides the mean (Ave), standard deviation (Std) and Rank obtained by the SIAEO and the comparison algorithms when searching the optimization model of Formula (17) on the data in Table 12. The last three lines of Table 13 give the evaluation indices: Count indicates the number of data sets on which each optimization algorithm obtains the best result, Avg_Rank represents the average ranking, and Total Rank quantitatively expresses the contrast in optimization ability between the algorithms. For the nine data sets, Figure 7 depicts the search process of each algorithm to reflect its characteristics and its exploration of the global optimum during operation.

4.4.3. Comparison and Analysis of Results

Table 13 shows that, compared with the other algorithms, the SIAEO achieves the best intra-class distance on seven data sets and an average rank of 1.33, placing it first overall. The EAEO and IAEO came in second and third place, the PSO and DE in fourth and fifth, while the AEO ranked last, differing greatly from the SIAEO. Figure 7 shows that, compared with the other algorithms, the SIAEO has a strong capability to escape local minima and explore the global minimum, making it an outstanding heuristic algorithm for solving high-dimensional clustering problems.

5. Conclusions

In artificial ecosystem optimization algorithms, the balance between exploration and exploitation has always been the focus of research. In this paper, an environmental stimulus incentive mechanism, defined by the population diversity of the external environment, was introduced to help the population switch between consumption and decomposition, and the maximum cumulative success rate of individuals performing different tasks was used to guide each consuming individual toward the most suitable update strategy. This method decreases the complexity of the AEO and improves its calculation precision.
At the same time, in the consumption stage, a new exploration and update method that uses biological competition to endow the search with beneficial randomness was proposed. On this basis, an improved artificial ecosystem optimization algorithm (SIAEO) built on the environmental stimulus incentive mechanism and biological competition was devised. Two groups of experiments were carried out to confirm the superiority of the SIAEO. The first group compared the SIAEO with other intelligent algorithms and AEO variants on the CEC2017 and CEC2019 benchmark functions. According to the experimental results, the SIAEO has greater solution precision and convergence speed than the contrast algorithms, and the statistical tests and convergence curves further demonstrate its better stability and robustness.
The second group of experiments verified the practical usefulness of the SIAEO. Four engineering minimization problems verified its efficiency on complex nonlinear engineering search problems. The SIAEO–K-means model was then established to optimize the K-means clustering centers, with nine UCI standard data sets used in the experiment. The results showed that SIAEO–K-means obtained higher evaluation index values and better performance on high-dimensional clustering data sets. In light of the two groups of experimental data, the SIAEO exceeds the comparative algorithms in optimization ability and in addressing high-dimensional problems.
As the AEO is a new heuristic algorithm, more effective improvement strategies for balancing its exploration and exploitation abilities remain to be studied. The authors of [54] provided a dynamic data stream clustering method based on intelligent algorithms, offering a reference for fusing dynamic data clustering with heuristic algorithms, and another study [55] required an optimized estimation of hydraulic jump roller length. These new requirements call for further exploration of the large-scale clustering ability of the AEO algorithm and its deeper application in the mechanical field in the future.

Author Contributions

W.G.: supervision, methodology, formal analysis, writing—original draft, writing—review and editing. M.W.: software, data curation, conceptualization, visualization, formal analysis, writing—original draft. F.D.: methodology, supervision, data curation, writing—review and editing. Y.Q.: methodology, formal analysis, data curation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China No. 61976176.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

This study did not conduct data archiving, as the algorithm has randomness, and the results obtained each time are different.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shadravan, S.; Naji, H.R.; Bardsiri, V.K. The sailfish optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 80, 20–34.
2. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
3. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H. Barnacles Mating Optimizer: A Bio-Inspired Algorithm for Solving Optimization Problems. In Proceedings of the 2018 19th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Busan, Republic of Korea, 27–29 June 2018; pp. 265–270.
4. Hu, G.; Wang, J.; Li, M.; Hussien, A.G.; Abbas, M. EJS: Multi-strategy enhanced jellyfish search algorithm for engineering applications. Mathematics 2023, 11, 851.
5. Hu, G.; Yang, R.; Qin, X.; Wei, G. MCSA: Multi-strategy boosted chameleon-inspired optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2023, 403, 115676.
6. Hu, G.; Du, B.; Wang, X.; Wei, G. An enhanced black widow optimization algorithm for feature selection. Knowl.-Based Syst. 2022, 235, 107638.
7. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intell. 2019, 82, 148–174.
8. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl.-Based Syst. 2016, 96, 120–133.
9. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
10. Al-Qaness, M.A.A.; Ewees, A.A.; Fan, H.; Aziz, M. Optimization method for forecasting confirmed cases of COVID-19 in China. J. Clin. Med. 2020, 9, 674.
11. Ewees, A.A.; Elaziz, M.A.; Al-Qaness, M.A.A.; Khalil, H.A.; Kim, S. Improved artificial bee colony using sine-cosine algorithm for multi-level thresholding image segmentation. IEEE Access 2020, 8, 26304–26315.
12. Rajamohana, S.; Umamaheswari, K. Hybrid approach of improved binary particle swarm optimization and shuffled frog leaping for feature selection. Comput. Electr. Eng. 2018, 67, 497–508.
13. Attiya, I.; Elaziz, M.A.; Xiong, S. Job scheduling in cloud computing using a modified Harris hawks optimization and simulated annealing algorithm. Comput. Intell. Neurosci. 2020, 16, 3504642.
14. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2020, 32, 9383–9425.
15. Rizk-Allah, R.M.; El-Fergany, A.A. Artificial ecosystem optimizer for parameters identification of proton exchange membrane fuel cells model. Int. J. Hydrog. Energy 2020, 46, 37612–37627.
16. Sultana, H.M.; Menesy, A.S.; Kamel, S.; Korashy, A.; Almohaimeed, S.A.; Abdel-Akher, M. An improved artificial ecosystem optimization algorithm for optimal configuration of a hybrid PV/WT/FC energy system. Alex. Eng. J. 2021, 60, 1001–1025.
17. Menesy, A.S.; Sultan, H.M.; Korashy, A.; Banakhr, F.A.; Ashmawy, M.G.; Kamel, A.S. Effective parameter extraction of different polymer electrolyte membrane fuel cell stack models using a modified artificial ecosystem optimization algorithm. IEEE Access 2020, 8, 31892–31909.
18. Barshandeh, S.; Piri, F.; Sangani, S.R. HMPA: An innovative hybrid multi-population algorithm based on artificial ecosystem-based and Harris hawks optimization algorithms for engineering problems. Eng. Comput. 2022, 38, 1581–1625.
19. Essa, F.A.; Mohamed, A. Prediction of power consumption and water productivity of seawater greenhouse system using random vector functional link network integrated with artificial ecosystem-based optimization. Process Saf. Environ. 2020, 144, 322–329.
20. Yousri, D.; Babu, T.S.; Mirjalili, S.; Rajasekar, N.; Elaziz, M.A. A novel objective function with artificial ecosystem-based optimization for relieving the mismatching power loss of large-scale photovoltaic array. Energy Convers. Manag. 2020, 225, 113385.
21. Calasan, M.; Micev, M.; Djurovic, Z.; Mageed, H.M. Artificial ecosystem-based optimization for optimal tuning of robust PID controllers in AVR systems with limited value of excitation voltage. Int. J. Electr. Eng. Educ. 2020, 1–28.
22. Mohammadrezapour, O.; Kisi, O.; Pourahmad, F. Fuzzy c-means and K-means clustering with genetic algorithm for identification of homogeneous regions of groundwater quality. Neural Comput. Appl. 2020, 32, 3763–3775.
23. Capó, M.; Pérez, A.; Lozano, J.A. An efficient approximation to the k-means clustering for massive data. Knowl.-Based Syst. 2017, 117, 56–69.
24. Peng, Z.; Zheng, J.; Zou, J. A population diversity maintaining strategy based on dynamic environment evolutionary model for dynamic multiobjective optimization. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 274–281.
25. Ursem, R. Diversity-guided evolutionary algorithms. In Parallel Problem Solving from Nature; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2439, pp. 462–471.
26. Price, K.V.; Awad, N.H.; Ali, M.Z.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the 100-Digit Challenge Special Session and Competition on Single Objective Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2018.
27. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016.
28. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551.
29. Kaur, S.; Awasthi, L.; Sangal, A.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
30. Sun, J.; Fang, W.; Wu, X.; Palade, V.; Xu, W. Quantum-behaved particle swarm optimization: Analysis of individual particle behavior and parameter selection. Evol. Comput. 2012, 20, 349–393.
31. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
32. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
33. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
34. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92.
35. Nowacki, H. Optimization in pre-contract ship design. In Computer Applications in the Automation of Shipyard Operation and Ship Design; Fujita, Y., Lind, K., Williams, T.J., Eds.; Elsevier: North-Holland, NY, USA, 1974; Volume 2, pp. 327–338.
36. Feng, Z.; Niu, W.; Liu, S. Cooperation search algorithm: A novel metaheuristic evolutionary intelligence algorithm for numerical optimization and engineering optimization problems. Appl. Soft Comput. 2020, 98, 106734.
37. Ray, T.; Saini, P. Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng. Optim. 2001, 33, 735–748.
38. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612.
39. Abualigah, L.; Shehab, M.; Diaba, A.; Abraham, A. Selection scheme sensitivity for a hybrid salp swarm algorithm: Analysis and applications. Eng. Comput. 2020, 38, 1149–1175.
40. Himmelblau, D. Applied Nonlinear Programming; McGraw-Hill: New York, NY, USA, 1972.
41. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338.
42. He, S.; Prempain, E.; Wu, H. An improved particle swarm optimizer for mechanical design optimization problems. Eng. Optim. 2004, 36, 585–605.
43. Dimopoulos, G. Mixed-variable engineering optimization based on evolutionary and social metaphors. Comput. Methods Appl. Mech. Eng. 2007, 196, 803–817.
44. Bard, J. Engineering Optimization: Theory and Practice; John Wiley & Sons: New York, NY, USA, 1997.
45. Hsu, L.; Liu, C. Developing a fuzzy proportional-derivative controller optimization engine for engineering design optimization problems. Eng. Optim. 2007, 39, 679–700.
  46. Beightler, C.; Phillips, D. Applied Geometric Programming; Wiley: New York, NY, USA, 1976. [Google Scholar]
  47. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2018, 23, 715. [Google Scholar] [CrossRef]
  48. Cheng, M.; Prayogo, D. Symbiotic organisms search: A new metaheuristic optimization algorithm. Comput. Struct. 2014, 139, 98–112. [Google Scholar] [CrossRef]
  49. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  50. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  51. Peña, M.; Lozano, A.; Larranaga, P. An empirical comparison of four initialization methods for the k-means algorithm. Pattern Recognit. Lett. 1999, 20, 1027–1040. [Google Scholar] [CrossRef]
  52. Askari, S. Fuzzy C-Means clustering algorithm for data with unequal cluster sizes and contaminated with noise and outliers: Review and development. Expert. Syst. Appl. 2021, 165, 113856. [Google Scholar] [CrossRef]
  53. Dheeru, D.; Taniskidou, E. UCI Repository of machine learning databases. Available online: http://www.ics.uci.edu/ (accessed on 2 June 2023).
  54. Yeoh, J.M.; Caraffini, F.; Homapour, E.; Santucci, V.; Milani, A. A clustering system for dynamic data streams based on metaheuristic optimisation. Mathematics 2019, 7, 1229. [Google Scholar] [CrossRef] [Green Version]
  55. Agresta, A.; Biscarini, C.; Caraffini, F.; Santucci, V. An intelligent optimised estimation of the hydraulic jump roller length. In Applications of Evolutionary Computation: 26th European Conference, EvoApplications; Springer: Cham, Switzerland, 2023; pp. 475–490. [Google Scholar]
Figure 1. The flowchart of the SIAEO algorithm.
Figure 2. The cumulative number of tasks performed.
Figure 3. Convergence curves on nine benchmark functions of the CEC2017 test set.
Figure 4. Friedman test results of the SIAEO and comparison algorithms on CEC2017 and CEC2019.
Figure 5. Sketch map of the three-bar truss design problem.
Figure 6. Sketch map of the tubular column design problem.
Figure 7. Convergence curves of SIAEO–K-means and the comparison algorithms on all data sets.
Table 1. CEC2019 benchmark functions.

| Function | Name | Range | Dim | F(x*) |
|---|---|---|---|---|
| f1 | Storn's Chebyshev Polynomial Fitting Problem | [−8192, 8192] | 9 | 1 |
| f2 | Inverse Hilbert Matrix Problem | [−16348, 16348] | 16 | 1 |
| f3 | Lennard–Jones Minimum Energy Cluster | [−4, 4] | 18 | 1 |
| f4 | Rastrigin's Function | [−100, 100] | 10 | 1 |
| f5 | Griewangk's Function | [−100, 100] | 10 | 1 |
| f6 | Weierstrass Function | [−100, 100] | 10 | 1 |
| f7 | Modified Schwefel Function | [−100, 100] | 10 | 1 |
| f8 | Expanded Schaffer F6 Function | [−100, 100] | 10 | 1 |
| f9 | Happy Cat Function | [−100, 100] | 10 | 1 |
| f10 | Ackley Function | [−100, 100] | 10 | 1 |
Table 2. Comparison of results of SIAEO's improved strategies on the CEC2019 functions.

| Function | Metric | AEO | SAEO | CAEO | DAEO | SIAEO |
|---|---|---|---|---|---|---|
| f1 | Ave | 0 | 0 | 0 | 0 | 0 |
| | Std | 0 | 0 | 0 | 0 | 0 |
| | Time | 10 (NaN) | 12 (NaN) | 12 (NaN) | 14 (NaN) | 15 |
| f2 | Ave | 3.35 | 3.38 | 3.25 | 3.28 | 3.36 |
| | Std | 2.68 × 10^−1 | 2.65 × 10^−1 | 1.87 × 10^−2 | 1.47 × 10^−1 | 2.21 × 10^−1 |
| | Time | 10 (=) | 12 (=) | 13 (=) | 14 (=) | 17 |
| f3 | Ave | 4.09 × 10^−1 | 3.58 × 10^−1 | 4.09 × 10^−1 | 1.02 | 3.62 × 10^−1 |
| | Std | 1.34 × 10^−15 | 1.45 × 10^−1 | 6.28 × 10^−16 | 1.90 | 1.46 × 10^−1 |
| | Time | 10 (−) | 11 (=) | 12 (−) | 13 (−) | 15 |
| f4 | Ave | 2.00 × 10^1 | 1.13 × 10^1 | 2.54 × 10^1 | 1.31 × 10^1 | 1.16 × 10^1 |
| | Std | 6.48 | 2.43 | 1.10 × 10^1 | 6.07 | 3.89 |
| | Time | 11 (−) | 13 (=) | 13 (−) | 15 (=) | 18 |
| f5 | Ave | 2.79 × 10^−1 | 1.45 × 10^−1 | 1.81 × 10^−1 | 1.75 × 10^−1 | 1.10 × 10^−1 |
| | Std | 1.59 × 10^−1 | 8.35 × 10^−2 | 9.21 × 10^−2 | 8.64 × 10^−2 | 5.37 × 10^−2 |
| | Time | 10 (−) | 11 (=) | 11 (−) | 13 (=) | 13 |
| f6 | Ave | 3.09 | 1.25 | 2.19 | 3.11 × 10^−1 | 4.76 × 10^−1 |
| | Std | 2.01 | 1.08 | 1.40 | 5.28 × 10^−1 | 1.51 × 10^−1 |
| | Time | 27 (−) | 28 (=) | 29 (−) | 30 (=) | 34 |
| f7 | Ave | 6.40 × 10^2 | 6.74 × 10^2 | 4.81 × 10^2 | 6.47 × 10^2 | 4.40 × 10^2 |
| | Std | 1.36 × 10^2 | 1.97 × 10^2 | 2.90 × 10^2 | 2.60 × 10^2 | 1.43 × 10^2 |
| | Time | 11 (−) | 13 (−) | 14 (=) | 15 (−) | 19 |
| f8 | Ave | 2.47 | 2.78 | 2.44 | 1.75 | 2.36 |
| | Std | 1.60 × 10^−1 | 2.71 × 10^−1 | 4.72 × 10^−1 | 6.22 × 10^−1 | 4.77 × 10^−1 |
| | Time | 10 (=) | 12 (−) | 13 (=) | 14 (=) | 18 |
| f9 | Ave | 2.51 × 10^−1 | 1.70 × 10^−1 | 2.67 × 10^−1 | 1.34 × 10^−1 | 1.45 × 10^−1 |
| | Std | 9.58 × 10^−2 | 7.84 × 10^−2 | 9.51 × 10^−2 | 3.49 × 10^−2 | 2.54 × 10^−2 |
| | Time | 10 (−) | 12 (=) | 12 (−) | 14 (=) | 18 |
| f10 | Ave | 1.78 × 10^1 | 2.00 × 10^1 | 1.29 × 10^1 | 1.50 × 10^1 | 8.14 |
| | Std | 6.25 | 9.61 × 10^−4 | 9.82 | 9.26 | 7.56 |
| | Time | 10 (−) | 12 (−) | 13 (=) | 14 (=) | 17 |
| (#)+ | | 7 | 3 | 5 | 2 | – |
| (#)= | | 2 | 6 | 4 | 7 | – |
| (#)− | | 0 | 0 | 0 | 0 | – |
| Friedman rank | | 3.434 | 3.139 | 3.095 | 2.639 | 2.419 |
Table 3. Algorithm parameter settings.

| Algorithm | Parameter | Value |
|---|---|---|
| SIAEO | p | 50 |
| AOA | C1, C2, C3, C4 | 2, 6, 2, 0.5 |
| TSA | pmax, pmin | 4, 1 |
| HHO | β | 1.5 |
| QPSO | ωmax, ωmin | 1, 0.5 |
| GWO | amax, amin | 2, 0 |
| WOA | amax, amin | 2, 0 |
Table 4. Comparison between the SIAEO and comparison algorithms on the CEC2017 test set (D = 100).

| Function | Metric | SIAEO | IAEO | EAEO | AEO | AOA | TSA | HHO | QPSO | WOA | GWO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Ave | 1.12 × 10^4 | 8.56 × 10^9 | 5.90 × 10^3 | 6.62 × 10^3 | 2.58 × 10^11 | 1.09 × 10^11 | 3.49 × 10^8 | 1.57 × 10^11 | 1.07 × 10^7 | 6.51 × 10^10 |
| | Std | 1.03 × 10^4 | 1.11 × 10^9 | 4.68 × 10^3 | 5.37 × 10^3 | 1.71 × 10^10 | 1.28 × 10^10 | 3.84 × 10^7 | 6.62 × 10^9 | 1.81 × 10^6 | 9.21 × 10^9 |
| | Time | 34 | 27 (+) | 28 (=) | 26 (=) | 124 (+) | 165 (+) | 26 (+) | 31 (+) | 82 (+) | 28 (+) |
| F2 | Ave | 4.62 × 10^77 | 8.67 × 10^112 | 8.53 × 10^42 | 2.14 × 10^46 | 8.04 × 10^170 | 5.89 × 10^140 | 8.06 × 10^93 | 6.30 × 10^149 | 5.83 × 10^129 | 2.97 × 10^132 |
| | Std | 9.80 × 10^77 | 2.19 × 10^113 | 2.34 × 10^43 | 6.04 × 10^46 | Inf | 1.67 × 10^141 | 2.28 × 10^94 | 4.00 × 10^149 | 1.65 × 10^130 | 8.40 × 10^132 |
| | Time | 34 | 28 (=) | 29 (=) | 26 (=) | 125 (=) | 166 (=) | 27 (=) | 32 (+) | 83 (=) | 29 (=) |
| F3 | Ave | 1.25 × 10^5 | 2.81 × 10^5 | 4.81 × 10^3 | 1.07 × 10^4 | 3.01 × 10^5 | 2.00 × 10^5 | 1.45 × 10^5 | 2.69 × 10^5 | 7.10 × 10^5 | 2.19 × 10^5 |
| | Std | 1.53 × 10^4 | 1.40 × 10^4 | 1.40 × 10^3 | 4.61 × 10^3 | 1.06 × 10^4 | 2.61 × 10^4 | 2.01 × 10^4 | 1.23 × 10^4 | 1.22 × 10^5 | 1.91 × 10^4 |
| | Time | 33 | 27 (+) | 28 (−) | 25 (−) | 123 (+) | 164 (+) | 26 (+) | 31 (+) | 83 (+) | 31 (+) |
| F4 | Ave | 3.14 × 10^2 | 1.36 × 10^3 | 2.91 × 10^2 | 2.97 × 10^2 | 8.83 × 10^4 | 1.73 × 10^4 | 5.94 × 10^2 | 3.71 × 10^4 | 5.61 × 10^2 | 5.61 × 10^3 |
| | Std | 3.72 × 10^1 | 1.61 × 10^2 | 6.11 × 10^1 | 7.65 × 10^1 | 1.01 × 10^4 | 5.18 × 10^3 | 1.23 × 10^2 | 6.72 × 10^3 | 8.31 × 10^1 | 8.65 × 10^2 |
| | Time | 34 | 27 (+) | 28 (=) | 25 (=) | 123 (+) | 163 (+) | 26 (+) | 31 (+) | 83 (+) | 31 (+) |
| F5 | Ave | 2.64 × 10^2 | 9.40 × 10^2 | 7.82 × 10^2 | 7.99 × 10^2 | 1.19 × 10^3 | 1.50 × 10^3 | 9.64 × 10^2 | 1.34 × 10^3 | 9.24 × 10^2 | 1.11 × 10^3 |
| | Std | 4.37 × 10^1 | 6.04 × 10^1 | 7.06 × 10^1 | 5.26 × 10^1 | 8.16 × 10^1 | 9.99 × 10^1 | 3.79 × 10^1 | 2.56 × 10^1 | 7.10 × 10^1 | 5.93 × 10^1 |
| | Time | 36 | 30 (+) | 30 (+) | 28 (+) | 125 (+) | 166 (+) | 29 (+) | 33 (+) | 85 (+) | 33 (+) |
| F6 | Ave | 5.77 | 5.74 × 10^1 | 4.43 × 10^1 | 2.32 × 10^1 | 9.69 × 10^1 | 1.13 × 10^2 | 8.20 × 10^1 | 8.98 × 10^1 | 8.22 × 10^1 | 6.12 × 10^1 |
| | Std | 2.09 | 8.74 | 6.65 | 4.02 | 3.70 | 7.17 | 4.71 | 2.47 | 1.04 × 10^1 | 2.57 |
| | Time | 43 | 40 (+) | 37 (+) | 35 (+) | 132 (+) | 173 (+) | 38 (+) | 40 (+) | 92 (+) | 40 (+) |
| F7 | Ave | 6.27 × 10^2 | 1.60 × 10^3 | 1.98 × 10^3 | 2.04 × 10^3 | 3.07 × 10^3 | 2.83 × 10^3 | 2.93 × 10^3 | 2.40 × 10^3 | 2.57 × 10^3 | 1.74 × 10^3 |
| | Std | 1.10 × 10^2 | 1.85 × 10^2 | 2.56 × 10^2 | 2.55 × 10^2 | 1.78 × 10^2 | 2.05 × 10^2 | 1.88 × 10^2 | 9.26 × 10^1 | 9.73 × 10^1 | 5.81 × 10^1 |
| | Time | 36 | 30 (+) | 30 (+) | 28 (+) | 125 (+) | 166 (+) | 29 (+) | 33 (+) | 85 (+) | 33 (+) |
| F8 | Ave | 2.62 × 10^2 | 9.45 × 10^2 | 8.72 × 10^2 | 8.65 × 10^2 | 1.29 × 10^3 | 1.53 × 10^3 | 1.07 × 10^3 | 1.40 × 10^3 | 1.04 × 10^3 | 1.15 × 10^3 |
| | Std | 4.06 × 10^1 | 5.87 × 10^1 | 7.96 × 10^1 | 7.47 × 10^1 | 1.48 × 10^2 | 1.43 × 10^2 | 6.84 × 10^1 | 3.04 × 10^1 | 8.37 × 10^1 | 3.58 × 10^1 |
| | Time | 36 | 30 (+) | 30 (+) | 28 (+) | 125 (+) | 166 (+) | 29 (+) | 33 (+) | 85 (+) | 33 (+) |
| F9 | Ave | 1.97 × 10^3 | 4.40 × 10^4 | 2.05 × 10^4 | 2.21 × 10^4 | 6.96 × 10^4 | 1.08 × 10^5 | 4.04 × 10^4 | 5.64 × 10^4 | 3.59 × 10^4 | 4.09 × 10^4 |
| | Std | 9.92 × 10^2 | 6.36 × 10^3 | 2.11 × 10^3 | 2.73 × 10^3 | 3.26 × 10^3 | 2.26 × 10^4 | 2.41 × 10^3 | 3.10 × 10^3 | 1.05 × 10^4 | 4.44 × 10^3 |
| | Time | 36 | 30 (+) | 30 (+) | 28 (+) | 124 (+) | 166 (+) | 29 (+) | 33 (+) | 85 (+) | 33 (+) |
| F10 | Ave | 1.27 × 10^4 | 1.95 × 10^4 | 1.43 × 10^4 | 1.48 × 10^4 | 2.89 × 10^4 | 2.63 × 10^4 | 1.87 × 10^4 | 2.94 × 10^4 | 1.96 × 10^4 | 2.86 × 10^4 |
| | Std | 1.05 × 10^3 | 1.69 × 10^3 | 1.44 × 10^3 | 1.09 × 10^3 | 8.93 × 10^2 | 1.34 × 10^3 | 1.48 × 10^3 | 1.25 × 10^3 | 2.48 × 10^3 | 7.18 × 10^2 |
| | Time | 38 | 33 (+) | 32 (+) | 30 (+) | 127 (+) | 167 (+) | 32 (+) | 35 (+) | 87 (+) | 35 (+) |
| F11 | Ave | 8.33 × 10^2 | 2.38 × 10^4 | 1.08 × 10^3 | 9.27 × 10^2 | 1.40 × 10^5 | 7.28 × 10^4 | 2.36 × 10^3 | 9.15 × 10^4 | 4.61 × 10^3 | 5.17 × 10^4 |
| | Std | 1.36 × 10^2 | 4.98 × 10^3 | 1.84 × 10^2 | 1.50 × 10^2 | 1.41 × 10^4 | 1.41 × 10^4 | 3.67 × 10^2 | 2.34 × 10^3 | 8.23 × 10^2 | 1.07 × 10^4 |
| | Time | 34 | 28 (+) | 29 (+) | 26 (=) | 124 (+) | 164 (+) | 27 (=) | 32 (+) | 84 (+) | 32 (+) |
| F12 | Ave | 2.18 × 10^7 | 6.97 × 10^8 | 2.83 × 10^6 | 2.59 × 10^6 | 1.85 × 10^11 | 5.52 × 10^10 | 4.16 × 10^8 | 1.02 × 10^11 | 6.44 × 10^8 | 1.89 × 10^10 |
| | Std | 8.83 × 10^6 | 5.07 × 10^7 | 1.23 × 10^6 | 1.14 × 10^6 | 1.56 × 10^10 | 1.55 × 10^10 | 1.95 × 10^8 | 8.02 × 10^9 | 2.72 × 10^8 | 2.33 × 10^9 |
| | Time | 35 | 31 (+) | 30 (−) | 28 (−) | 125 (+) | 166 (+) | 29 (+) | 33 (+) | 86 (+) | 33 (+) |
| F13 | Ave | 2.76 × 10^3 | 1.04 × 10^7 | 1.11 × 10^4 | 4.24 × 10^3 | 4.30 × 10^10 | 1.46 × 10^10 | 5.64 × 10^6 | 2.79 × 10^10 | 8.92 × 10^4 | 2.89 × 10^9 |
| | Std | 1.45 × 10^3 | 3.88 × 10^6 | 5.21 × 10^3 | 2.95 × 10^3 | 3.95 × 10^9 | 5.36 × 10^9 | 1.48 × 10^6 | 1.69 × 10^8 | 3.95 × 10^4 | 9.55 × 10^8 |
| | Time | 34 | 29 (+) | 29 (+) | 27 (=) | 123 (+) | 164 (+) | 28 (+) | 32 (+) | 84 (+) | 32 (+) |
| F14 | Ave | 8.20 × 10^4 | 2.84 × 10^6 | 3.95 × 10^4 | 5.87 × 10^4 | 4.36 × 10^7 | 6.45 × 10^6 | 1.34 × 10^6 | 1.18 × 10^7 | 1.21 × 10^6 | 8.47 × 10^6 |
| | Std | 5.17 × 10^4 | 9.97 × 10^5 | 1.57 × 10^4 | 3.12 × 10^4 | 1.70 × 10^7 | 4.70 × 10^6 | 2.10 × 10^5 | 1.43 × 10^6 | 4.81 × 10^5 | 3.47 × 10^6 |
| | Time | 37 | 33 (+) | 32 (=) | 30 (=) | 127 (+) | 167 (+) | 31 (+) | 35 (+) | 87 (+) | 35 (+) |
| F15 | Ave | 9.44 × 10^2 | 5.96 × 10^5 | 2.48 × 10^3 | 2.46 × 10^3 | 2.24 × 10^10 | 8.01 × 10^9 | 1.49 × 10^6 | 1.55 × 10^10 | 8.04 × 10^4 | 8.42 × 10^8 |
| | Std | 9.83 × 10^2 | 1.97 × 10^5 | 2.17 × 10^3 | 2.93 × 10^3 | 3.29 × 10^9 | 4.85 × 10^9 | 3.45 × 10^5 | 1.17 × 10^9 | 4.73 × 10^4 | 1.90 × 10^8 |
| | Time | 34 | 28 (+) | 29 (+) | 26 (=) | 123 (+) | 164 (+) | 27 (+) | 32 (+) | 84 (+) | 32 (+) |
| F16 | Ave | 3.90 × 10^3 | 5.09 × 10^3 | 4.46 × 10^3 | 4.51 × 10^3 | 1.84 × 10^4 | 1.08 × 10^4 | 5.86 × 10^3 | 1.37 × 10^4 | 7.20 × 10^3 | 8.15 × 10^3 |
| | Std | 5.91 × 10^2 | 4.47 × 10^2 | 6.25 × 10^2 | 8.49 × 10^2 | 2.06 × 10^3 | 1.87 × 10^3 | 7.92 × 10^2 | 1.17 × 10^3 | 1.42 × 10^3 | 7.17 × 10^2 |
| | Time | 35 | 30 (+) | 30 (+) | 27 (=) | 124 (+) | 165 (+) | 28 (+) | 33 (+) | 85 (+) | 33 (+) |
| F17 | Ave | 2.51 × 10^3 | 3.74 × 10^3 | 3.70 × 10^3 | 3.50 × 10^3 | 4.28 × 10^6 | 1.11 × 10^5 | 4.70 × 10^3 | 5.59 × 10^4 | 4.92 × 10^3 | 6.42 × 10^3 |
| | Std | 4.75 × 10^2 | 4.15 × 10^2 | 6.34 × 10^2 | 6.24 × 10^2 | 2.73 × 10^6 | 1.14 × 10^5 | 5.67 × 10^2 | 6.07 × 10^3 | 4.37 × 10^2 | 5.28 × 10^2 |
| | Time | 40 | 37 (+) | 35 (+) | 33 (+) | 130 (+) | 171 (+) | 35 (+) | 39 (+) | 91 (+) | 38 (+) |
| F18 | Ave | 2.26 × 10^5 | 4.19 × 10^6 | 1.72 × 10^5 | 1.58 × 10^5 | 5.85 × 10^7 | 7.24 × 10^6 | 3.18 × 10^6 | 2.25 × 10^7 | 1.78 × 10^6 | 1.15 × 10^7 |
| | Std | 1.46 × 10^5 | 1.33 × 10^6 | 5.65 × 10^4 | 7.04 × 10^4 | 2.21 × 10^7 | 3.98 × 10^6 | 1.18 × 10^6 | 1.13 × 10^7 | 5.22 × 10^5 | 3.57 × 10^6 |
| | Time | 35 | 30 (+) | 30 (=) | 27 (=) | 124 (+) | 165 (+) | 28 (+) | 33 (+) | 85 (=) | 33 (+) |
| F19 | Ave | 8.99 × 10^2 | 1.22 × 10^6 | 4.49 × 10^3 | 5.03 × 10^3 | 2.12 × 10^10 | 6.43 × 10^9 | 4.93 × 10^6 | 1.04 × 10^10 | 1.03 × 10^7 | 6.47 × 10^8 |
| | Std | 7.49 × 10^2 | 3.79 × 10^5 | 6.07 × 10^3 | 4.33 × 10^3 | 2.48 × 10^9 | 5.20 × 10^9 | 1.30 × 10^6 | 2.63 × 10^9 | 7.15 × 10^6 | 1.20 × 10^8 |
| | Time | 71 | 78 (+) | 66 (+) | 64 (+) | 161 (+) | 202 (+) | 73 (+) | 69 (+) | 121 (+) | 69 (+) |
| F20 | Ave | 2.22 × 10^3 | 3.04 × 10^3 | 3.46 × 10^3 | 3.20 × 10^3 | 5.30 × 10^3 | 4.24 × 10^3 | 3.88 × 10^3 | 4.71 × 10^3 | 4.05 × 10^3 | 4.35 × 10^3 |
| | Std | 3.23 × 10^2 | 5.23 × 10^2 | 3.70 × 10^2 | 7.41 × 10^2 | 3.25 × 10^2 | 3.90 × 10^2 | 4.55 × 10^2 | 3.64 × 10^2 | 4.24 × 10^2 | 6.19 × 10^2 |
| | Time | 42 | 39 (+) | 37 (+) | 34 (+) | 131 (+) | 172 (+) | 38 (+) | 40 (+) | 92 (+) | 40 (+) |
| F21 | Ave | 5.04 × 10^2 | 1.15 × 10^3 | 1.05 × 10^3 | 1.10 × 10^3 | 2.12 × 10^3 | 2.03 × 10^3 | 1.91 × 10^3 | 1.74 × 10^3 | 1.72 × 10^3 | 1.34 × 10^3 |
| | Std | 4.17 × 10^1 | 8.51 × 10^1 | 8.91 × 10^1 | 1.49 × 10^2 | 1.68 × 10^2 | 1.67 × 10^2 | 1.52 × 10^2 | 1.90 × 10^1 | 2.20 × 10^2 | 5.08 × 10^1 |
| | Time | 65 | 69 (+) | 59 (+) | 57 (+) | 153 (+) | 194 (+) | 64 (+) | 62 (+) | 114 (+) | 62 (+) |
| F22 | Ave | 1.46 × 10^4 | 2.19 × 10^4 | 1.71 × 10^4 | 1.59 × 10^4 | 3.12 × 10^4 | 2.83 × 10^4 | 2.09 × 10^4 | 2.75 × 10^4 | 2.09 × 10^4 | 2.99 × 10^4 |
| | Std | 1.55 × 10^3 | 7.94 × 10^2 | 1.30 × 10^3 | 2.02 × 10^3 | 1.02 × 10^3 | 1.61 × 10^3 | 1.66 × 10^3 | 3.36 × 10^3 | 1.76 × 10^3 | 8.82 × 10^2 |
| | Time | 68 | 72 (+) | 62 (+) | 59 (=) | 157 (+) | 197 (+) | 68 (+) | 65 (+) | 117 (+) | 65 (+) |
| F23 | Ave | 9.05 × 10^2 | 1.42 × 10^3 | 1.38 × 10^3 | 1.24 × 10^3 | 4.26 × 10^3 | 3.03 × 10^3 | 2.65 × 10^3 | 3.89 × 10^3 | 2.66 × 10^3 | 1.72 × 10^3 |
| | Std | 5.75 × 10^1 | 1.02 × 10^2 | 1.78 × 10^2 | 6.39 × 10^1 | 3.66 × 10^2 | 3.31 × 10^2 | 2.83 × 10^2 | 7.78 × 10^1 | 2.87 × 10^2 | 6.67 × 10^1 |
| | Time | 79 | 88 (+) | 74 (+) | 71 (+) | 168 (+) | 209 (+) | 80 (+) | 76 (+) | 129 (+) | 76 (+) |
| F24 | Ave | 1.31 × 10^3 | 2.13 × 10^3 | 2.31 × 10^3 | 2.05 × 10^3 | 8.98 × 10^3 | 4.19 × 10^3 | 3.66 × 10^3 | 6.00 × 10^3 | 3.32 × 10^3 | 2.27 × 10^3 |
| | Std | 1.04 × 10^2 | 9.40 × 10^1 | 2.53 × 10^2 | 9.81 × 10^1 | 1.69 × 10^3 | 2.47 × 10^2 | 3.54 × 10^2 | 1.65 × 10^2 | 2.94 × 10^2 | 7.89 × 10^1 |
| | Time | 74 | 83 (+) | 69 (+) | 67 (+) | 156 (+) | 192 (+) | 75 (+) | 71 (+) | 119 (+) | 72 (+) |
| F25 | Ave | 8.98 × 10^2 | 1.84 × 10^3 | 7.95 × 10^2 | 8.11 × 10^2 | 2.53 × 10^4 | 8.41 × 10^3 | 1.12 × 10^3 | 1.16 × 10^4 | 1.03 × 10^3 | 4.87 × 10^3 |
| | Std | 5.61 × 10^1 | 1.22 × 10^2 | 4.70 × 10^1 | 7.14 × 10^1 | 1.95 × 10^3 | 1.59 × 10^3 | 5.81 × 10^1 | 2.05 × 10^2 | 5.37 × 10^1 | 5.55 × 10^2 |
| | Time | 79 | 89 (+) | 73 (−) | 71 (=) | 160 (+) | 196 (+) | 81 (+) | 76 (+) | 124 (+) | 76 (+) |
| F26 | Ave | 8.64 × 10^3 | 1.25 × 10^4 | 1.72 × 10^4 | 1.56 × 10^4 | 4.95 × 10^4 | 2.96 × 10^4 | 2.18 × 10^4 | 2.62 × 10^4 | 2.74 × 10^4 | 1.77 × 10^4 |
| | Std | 8.66 × 10^2 | 8.07 × 10^3 | 6.07 × 10^3 | 6.34 × 10^3 | 2.29 × 10^3 | 1.83 × 10^3 | 2.67 × 10^3 | 2.87 × 10^2 | 3.34 × 10^3 | 1.11 × 10^3 |
| | Time | 86 | 97 (+) | 80 (+) | 78 (+) | 167 (+) | 203 (+) | 89 (+) | 83 (+) | 131 (+) | 83 (+) |
| F27 | Ave | 8.97 × 10^2 | 1.29 × 10^3 | 1.39 × 10^3 | 1.31 × 10^3 | 8.13 × 10^3 | 3.26 × 10^3 | 1.76 × 10^3 | 5.26 × 10^3 | 2.57 × 10^3 | 1.71 × 10^3 |
| | Std | 5.36 × 10^1 | 1.00 × 10^2 | 1.46 × 10^2 | 2.58 × 10^2 | 4.21 × 10^3 | 6.08 × 10^2 | 4.70 × 10^2 | 1.32 × 10^2 | 1.02 × 10^3 | 1.66 × 10^2 |
| | Time | 97 | 114 (+) | 92 (+) | 90 (+) | 179 (+) | 215 (+) | 102 (+) | 95 (+) | 143 (+) | 95 (+) |
| F28 | Ave | 7.63 × 10^2 | 1.96 × 10^3 | 6.55 × 10^2 | 6.48 × 10^2 | 3.28 × 10^4 | 1.31 × 10^4 | 8.43 × 10^2 | 1.05 × 10^4 | 8.17 × 10^2 | 6.44 × 10^3 |
| | Std | 4.63 × 10^1 | 1.77 × 10^2 | 2.89 × 10^1 | 2.68 × 10^1 | 2.76 × 10^3 | 3.27 × 10^3 | 3.61 × 10^1 | 2.50 × 10^2 | 3.41 × 10^1 | 7.42 × 10^2 |
| | Time | 92 | 107 (+) | 87 (−) | 85 (−) | 174 (+) | 209 (+) | 98 (+) | 90 (+) | 138 (+) | 90 (+) |
| F29 | Ave | 2.94 × 10^3 | 4.84 × 10^3 | 4.40 × 10^3 | 4.07 × 10^3 | 2.62 × 10^5 | 1.48 × 10^4 | 6.15 × 10^3 | 6.21 × 10^4 | 1.07 × 10^4 | 7.88 × 10^3 |
| | Std | 4.85 × 10^2 | 4.31 × 10^2 | 3.50 × 10^2 | 5.72 × 10^2 | 1.23 × 10^5 | 5.60 × 10^3 | 3.93 × 10^2 | 2.89 × 10^3 | 3.23 × 10^3 | 5.99 × 10^2 |
| | Time | 61 | 67 (+) | 56 (+) | 54 (+) | 144 (+) | 179 (+) | 61 (+) | 59 (+) | 107 (+) | 59 (+) |
| F30 | Ave | 1.83 × 10^4 | 2.65 × 10^7 | 2.32 × 10^4 | 1.90 × 10^4 | 3.69 × 10^10 | 1.21 × 10^10 | 3.55 × 10^7 | 1.93 × 10^10 | 1.77 × 10^8 | 2.45 × 10^9 |
| | Std | 8.12 × 10^3 | 6.51 × 10^6 | 1.49 × 10^4 | 8.99 × 10^3 | 6.59 × 10^9 | 4.17 × 10^9 | 9.20 × 10^6 | 1.92 × 10^9 | 5.02 × 10^7 | 6.43 × 10^8 |
| | Time | 88 | 103 (+) | 84 (=) | 81 (=) | 188 (+) | 229 (+) | 106 (+) | 97 (+) | 150 (+) | 96 (+) |
| (#)best | | 21 | 0 | 6 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| (#)+ | | – | 29 | 20 | 16 | 30 | 30 | 28 | 30 | 28 | 30 |
| (#)= | | – | 1 | 6 | 11 | 0 | 0 | 2 | 0 | 2 | 0 |
| (#)− | | – | 0 | 4 | 3 | 0 | 0 | 0 | 0 | 0 | 0 |
| Friedman rank | | 1.69 | 4.45 | 2.52 | 2.38 | 9.57 | 8.12 | 5.34 | 8.54 | 5.63 | 6.47 |
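The Friedman ranks reported in the last row of Table 4 (and in Tables 2 and 5) are average ranks: on each benchmark function the algorithms are ranked by mean error, ties receive the average of the tied ranks, and the per-function ranks are averaged over all functions. A minimal sketch of that computation (an illustration, not the authors' code):

```python
def friedman_average_ranks(results):
    """results: one row per benchmark function, each row holding one error
    value per algorithm (lower is better). Returns each algorithm's average
    rank across all functions, as used in the Friedman test."""
    n_algos = len(results[0])
    totals = [0.0] * n_algos
    for row in results:
        order = sorted(range(n_algos), key=lambda j: row[j])
        ranks = [0.0] * n_algos
        i = 0
        while i < n_algos:
            j = i
            # extend the group while values are tied
            while j + 1 < n_algos and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for j in range(n_algos):
            totals[j] += ranks[j]
    return [t / len(results) for t in totals]
```

For example, `friedman_average_ranks([[1, 2, 3], [1, 3, 2]])` gives `[1.0, 2.5, 2.5]`: the first algorithm always ranks first, while the other two split ranks 2 and 3.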
Table 5. Comparison between the SIAEO and comparison algorithms on the CEC2019 test set.

| Function | Metric | SIAEO | IAEO | EAEO | AEO | AOA | TSA | HHO | QPSO | WOA | GWO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | Ave | 0.00 | 0.00 | 0.00 | 0.00 | 3.75 × 10^−15 | 1.78 × 10^2 | 0.00 | 1.16 × 10^−9 | 3.94 × 10^6 | 4.02 × 10^4 |
| | Std | 0.00 | 0.00 | 0.00 | 0.00 | 1.19 × 10^−14 | 3.97 × 10^2 | 0.00 | 2.41 × 10^−9 | 5.26 × 10^6 | 1.06 × 10^5 |
| | Time | 1.71 | 0.95 (=) | 1.26 (=) | 1.08 (=) | 1.34 (=) | 1.71 (+) | 0.96 (=) | 1.47 (=) | 1.18 (+) | 0.78 (+) |
| f2 | Ave | 3.43 | 3.31 | 3.55 | 3.37 | 3.90 | 5.58 × 10^2 | 3.98 | 4.00 | 6.60 × 10^3 | 7.05 × 10^2 |
| | Std | 1.70 × 10^−1 | 3.39 × 10^−2 | 3.18 × 10^−1 | 2.35 × 10^−1 | 1.89 × 10^−1 | 1.67 × 10^2 | 6.51 × 10^−2 | 4.28 × 10^−4 | 6.17 × 10^2 | 3.85 × 10^2 |
| | Time | 1.57 | 0.82 (=) | 1.18 (=) | 1.00 (=) | 1.84 (+) | 2.46 (+) | 0.85 (+) | 1.39 (+) | 1.42 (+) | 0.72 (+) |
| f3 | Ave | 3.73 × 10^−1 | 3.14 | 4.09 × 10^−1 | 4.38 × 10^−1 | 3.82 | 7.68 | 2.75 | 5.77 | 2.90 | 3.89 |
| | Std | 1.31 × 10^−1 | 1.02 | 2.41 × 10^−8 | 9.17 × 10^−2 | 5.37 × 10^−1 | 2.27 | 1.22 | 1.05 | 2.59 | 4.41 × 10^−1 |
| | Time | 1.65 | 0.81 (+) | 1.18 (=) | 0.98 (=) | 2.01 (+) | 2.61 (+) | 0.83 (+) | 1.42 (+) | 1.47 (+) | 0.72 (+) |
| f4 | Ave | 1.55 × 10^1 | 1.80 × 10^1 | 1.69 × 10^1 | 1.69 × 10^1 | 5.20 × 10^1 | 6.34 × 10^1 | 4.22 × 10^1 | 6.07 × 10^1 | 5.19 × 10^1 | 2.81 × 10^1 |
| | Std | 3.17 | 6.95 | 5.31 | 6.76 | 7.68 | 1.07 × 10^1 | 8.57 | 4.67 | 1.99 × 10^1 | 2.35 |
| | Time | 1.48 | 0.82 (=) | 1.16 (=) | 0.98 (=) | 1.33 (+) | 1.72 (+) | 0.86 (+) | 1.38 (+) | 1.12 (+) | 0.68 (+) |
| f5 | Ave | 2.07 × 10^−1 | 9.30 × 10^−1 | 2.10 × 10^−1 | 2.73 × 10^−1 | 4.35 × 10^1 | 4.93 | 9.67 × 10^−1 | 5.22 × 10^1 | 7.94 × 10^−1 | 2.60 |
| | Std | 1.72 × 10^−1 | 1.39 × 10^−1 | 5.42 × 10^−2 | 1.31 × 10^−1 | 9.09 | 3.62 | 3.39 × 10^−1 | 9.89 | 5.18 × 10^−1 | 6.25 × 10^−1 |
| | Time | 1.72 | 0.84 (+) | 1.17 (=) | 0.99 (=) | 1.34 (+) | 1.73 (+) | 0.86 (+) | 1.38 (+) | 1.13 (+) | 0.69 (+) |
| f6 | Ave | 1.09 | 1.68 | 2.59 | 3.32 | 7.30 | 3.79 | 5.35 | 6.62 | 6.23 | 3.22 |
| | Std | 7.40 × 10^−1 | 2.78 × 10^−1 | 6.61 × 10^−1 | 1.63 | 8.01 × 10^−1 | 8.79 × 10^−1 | 8.07 × 10^−1 | 5.26 × 10^−1 | 1.24 | 2.76 × 10^−1 |
| | Time | 3.29 | 3.04 (+) | 2.83 (+) | 2.64 (+) | 2.97 (+) | 3.36 (+) | 2.85 (+) | 3.03 (+) | 2.77 (+) | 2.33 (+) |
| f7 | Ave | 5.75 × 10^2 | 7.65 × 10^2 | 7.43 × 10^2 | 1.05 × 10^3 | 1.47 × 10^3 | 1.62 × 10^3 | 1.27 × 10^3 | 1.57 × 10^3 | 1.48 × 10^3 | 1.09 × 10^3 |
| | Std | 2.36 × 10^2 | 2.88 × 10^2 | 2.93 × 10^2 | 3.09 × 10^2 | 2.13 × 10^2 | 1.36 × 10^2 | 3.96 × 10^2 | 2.68 × 10^2 | 3.44 × 10^2 | 2.29 × 10^2 |
| | Time | 1.37 | 0.83 (=) | 1.16 (=) | 0.98 (+) | 1.32 (+) | 1.70 (+) | 0.88 (+) | 1.36 (+) | 1.11 (+) | 0.69 (+) |
| f8 | Ave | 2.73 | 2.92 | 2.89 | 2.90 | 3.47 | 3.09 | 3.54 | 3.36 | 3.28 | 2.96 |
| | Std | 2.83 × 10^−1 | 3.03 × 10^−1 | 3.20 × 10^−1 | 3.72 × 10^−1 | 1.44 × 10^−1 | 2.22 × 10^−1 | 1.24 × 10^−1 | 1.13 × 10^−1 | 2.53 × 10^−1 | 2.29 × 10^−1 |
| | Time | 1.32 | 0.81 (=) | 1.13 (=) | 0.96 (=) | 1.31 (+) | 1.68 (+) | 0.87 (+) | 1.34 (+) | 1.10 (=) | 0.67 (=) |
| f9 | Ave | 2.16 × 10^−1 | 2.68 × 10^−1 | 2.74 × 10^−1 | 3.41 × 10^−1 | 1.73 | 3.88 × 10^−1 | 5.99 × 10^−1 | 6.53 × 10^−1 | 5.88 × 10^−1 | 2.55 × 10^−1 |
| | Std | 3.05 × 10^−2 | 6.60 × 10^−2 | 1.11 × 10^−1 | 9.51 × 10^−2 | 4.13 × 10^−1 | 1.18 × 10^−1 | 1.18 × 10^−1 | 9.44 × 10^−2 | 1.50 × 10^−1 | 8.00 × 10^−2 |
| | Time | 1.30 | 0.78 (=) | 1.12 (=) | 0.94 (+) | 1.28 (+) | 1.67 (+) | 0.82 (+) | 1.32 (+) | 1.08 (+) | 0.65 (=) |
| f10 | Ave | 1.50 × 10^1 | 1.58 × 10^1 | 1.68 × 10^1 | 1.80 × 10^1 | 2.03 × 10^1 | 2.04 × 10^1 | 2.02 × 10^1 | 2.03 × 10^1 | 2.00 × 10^1 | 2.04 × 10^1 |
| | Std | 8.54 | 7.23 | 6.85 | 6.32 | 3.83 × 10^−2 | 1.09 × 10^−1 | 1.23 × 10^−1 | 1.47 × 10^−1 | 4.14 × 10^−2 | 5.64 × 10^−2 |
| | Time | 1.36 | 0.81 (=) | 1.14 (=) | 0.96 (=) | 1.31 (+) | 1.68 (+) | 0.87 (+) | 1.34 (+) | 1.10 (+) | 0.67 (+) |
| (#)best | | 9 | 2 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
| (#)+ | | – | 3 | 1 | 3 | 9 | 10 | 9 | 9 | 9 | 8 |
| (#)= | | – | 7 | 9 | 7 | 1 | 0 | 1 | 1 | 1 | 2 |
| (#)− | | – | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Friedman rank | | 2.16 | 3.88 | 2.88 | 3.21 | 7.46 | 7.61 | 6.26 | 7.93 | 7.16 | 6.12 |
Table 6. The CP values of the SIAEO and comparison algorithms on the CEC2017 and CEC2019 test sets.

| SIAEO vs. | CEC2017 (D = 100) (#)+ | CEC2017 (#)− | CEC2017 CP | CEC2019 (#)+ | CEC2019 (#)− | CEC2019 CP |
|---|---|---|---|---|---|---|
| IAEO | 29 | 0 | 29 | 3 | 0 | 3 |
| EAEO | 20 | 4 | 16 | 1 | 0 | 1 |
| AEO | 16 | 3 | 13 | 3 | 0 | 3 |
| AOA | 30 | 0 | 30 | 9 | 0 | 9 |
| TSA | 30 | 0 | 30 | 10 | 0 | 10 |
| HHO | 28 | 0 | 28 | 9 | 0 | 9 |
| QPSO | 30 | 0 | 30 | 9 | 0 | 9 |
| WOA | 28 | 0 | 28 | 9 | 0 | 9 |
| GWO | 30 | 0 | 30 | 8 | 0 | 8 |
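The CP values in Table 6 are consistent with CP = (#)+ − (#)−, i.e. the number of functions on which SIAEO is significantly better minus the number on which it is significantly worse:

```python
def cp_value(wins, losses):
    # competitive performance: significant wins minus significant losses
    return wins - losses

# reproducing two CEC2017 entries of Table 6
assert cp_value(29, 0) == 29  # SIAEO vs. IAEO
assert cp_value(20, 4) == 16  # SIAEO vs. EAEO
```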
Table 7. Wilcoxon signed-rank test results of the SIAEO and comparison algorithms at the 0.05 significance level.

| SIAEO vs. | CEC2017 (D = 100) R+ | R− | p | Significance | CEC2019 R+ | R− | p | Significance |
|---|---|---|---|---|---|---|---|---|
| IAEO | 465 | 0 | 0.0000 | + | 43 | 2 | 0.0117 | + |
| EAEO | 295 | 170 | 0.1986 | = | 45 | 0 | 0.0039 | + |
| AEO | 293 | 172 | 0.2134 | = | 44 | 1 | 0.0078 | + |
| AOA | 465 | 0 | 0.0000 | + | 55 | 0 | 0.0020 | + |
| TSA | 465 | 0 | 0.0000 | + | 55 | 0 | 0.0020 | + |
| HHO | 465 | 0 | 0.0000 | + | 45 | 0 | 0.0039 | + |
| QPSO | 465 | 0 | 0.0000 | + | 55 | 0 | 0.0020 | + |
| WOA | 465 | 0 | 0.0000 | + | 55 | 0 | 0.0020 | + |
| GWO | 465 | 0 | 0.0000 | + | 55 | 0 | 0.0020 | + |
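The R+ and R− columns are the Wilcoxon signed-rank sums over the paired per-function results; with 30 CEC2017 functions and no ties, R+ + R− = 30 × 31 / 2 = 465, which is why a clean sweep appears as 465/0. A minimal sketch of how the two rank sums are formed (zero differences dropped, tied |d| given average ranks); the p-value lookup, e.g. via `scipy.stats.wilcoxon`, is omitted:

```python
def signed_rank_sums(errors_a, errors_b):
    """Wilcoxon signed-rank sums: R+ collects the functions on which
    algorithm A beats algorithm B (errors_b - errors_a > 0), R- the rest."""
    diffs = [b - a for a, b in zip(errors_a, errors_b) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # group equal |d| values and give them their average rank
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # 1-based average rank
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return r_plus, r_minus

# a clean sweep over 30 functions gives the 465/0 seen in Table 7
assert signed_rank_sums([0] * 30, [1] * 30) == (465.0, 0.0)
```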
Table 8. Optimal results of various methods for the three-bar truss design problem.

| Algorithm | x1 | x2 | Minimum Cost |
|---|---|---|---|
| CS [5] | 0.78867 | 0.40902 | 263.9716 |
| CSA [36] | 0.788638976 | 0.408350573 | 263.895844 |
| SSA [9] | 0.78866541 | 0.408275784 | 263.89584 |
| Ray and Saini [37] | 0.795 | 0.395 | 264.3 |
| MBA [38] | 0.7885650 | 0.4085597 | 263.89585 |
| PHSSA [39] | 0.82299 | 0.31925 | 264.701723 |
| SIAEO | 0.788675136 | 0.40824828 | 263.895843 |
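Under the standard formulation of this benchmark (assumed here: bar length L = 100 cm, load P = 2 kN/cm², allowable stress σ = 2 kN/cm²), the SIAEO solution in Table 8 can be checked directly against the objective and the binding stress constraint:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # assumed standard problem constants

def weight(x1, x2):
    # objective: structural weight (2*sqrt(2)*x1 + x2) * L
    return (2 * math.sqrt(2) * x1 + x2) * L

def g1(x1, x2):
    # main stress constraint; the design is feasible when g1 <= 0
    return (math.sqrt(2) * x1 + x2) / (math.sqrt(2) * x1**2 + 2 * x1 * x2) * P - SIGMA

x1, x2 = 0.788675136, 0.40824828  # SIAEO's solution from Table 8
assert abs(weight(x1, x2) - 263.895843) < 1e-2  # matches the reported cost
assert g1(x1, x2) <= 1e-6                       # constraint is (tightly) satisfied
```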
Table 9. Optimal results of various methods for Himmelblau's nonlinear problem.

| Algorithm | x1 | x2 | x3 | x4 | x5 | Minimum Cost |
|---|---|---|---|---|---|---|
| Himmelblau [40] | N/A | N/A | N/A | N/A | N/A | −30,373.949 |
| Deb [41] | N/A | N/A | N/A | N/A | N/A | −30,665.539 |
| He et al. [42] | 78 | 33 | 29.995256 | 45 | 36.775813 | −30,665.539 |
| Dimopoulos [43] | 78 | 33 | 29.995256 | 45 | 36.775813 | −30,665.54 |
| CS [5] | 78 | 33 | 29.99616 | 45 | 36.77605 | −30,665.233 |
| CSA [36] | 78 | 33 | 29.995256 | 45 | 36.775813 | −30,665.53867 |
| SIAEO | 78 | 33 | 29.995256 | 45 | 36.775812 | −30,665.53867 |
Table 10. Optimal results of each algorithm for the tubular column design problem.

| Algorithm | x1 | x2 | Minimum Cost |
|---|---|---|---|
| CS [5] | 5.45139 | 0.29196 | 26.53217 |
| CSA [36] | 5.45116 | 0.29196 | 26.5313 |
| Hsu and Liu [45] | 5.4507 | 0.292 | 25.5316 |
| Rao [44] | 5.44 | 0.293 | 26.5323 |
| SIAEO | 5.4512 | 0.29167 | 26.526 |
Table 11. Optimal results of various methods for the gas transmission compressor design problem.

| Algorithm | x1 | x2 | x3 | x4 | Minimum Cost |
|---|---|---|---|---|---|
| WOA [31] | 50 | 1.18 | 24.58 | 0.3883 | 2,964,900 |
| SSA [9] | 26.19 | 1.10 | 21.47 | 0.2119 | 3,034,100 |
| BOA [47] | 33.19 | 1.10 | 26.48 | 0.2162 | 3,007,000 |
| SOS [48] | 50 | 1.18 | 24.58 | 0.3883 | 2,964,900 |
| PSO [49] | 31.79 | 1.10 | 31.57 | 0.2224 | 3,050,900 |
| DE [50] | 50 | 1.18 | 24.59 | 0.3884 | 2,964,900 |
| SIAEO | 50 | 1.17828 | 24.59259 | 0.38835 | 2,964,895 |
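The SIAEO result in Table 11 can be verified against the widely used objective for this benchmark (the formulation below is the standard one from the literature, assumed here; it reproduces the reported cost of 2,964,895 when evaluated at the SIAEO solution):

```python
def gas_compressor_cost(x1, x2, x3, x4):
    # standard objective of the gas transmission compressor design problem
    # (assumed formulation; coefficients from the usual benchmark statement)
    return (8.61e5 * x1**0.5 * x2 * x3**(-2.0 / 3.0) * x4**(-0.5)
            + 3.69e4 * x3
            + 7.72e8 * x2**0.219 / x1
            - 765.43e6 / x1)

# SIAEO's solution from Table 11
cost = gas_compressor_cost(50, 1.17828, 24.59259, 0.38835)
assert abs(cost - 2_964_895) < 5e3  # agrees with the reported minimum cost
```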
Table 12. Characteristics of the nine datasets.

| Dataset | Instances | Classes | Features | Dimension |
|---|---|---|---|---|
| Cancer | 683 | 2 | 9 | 18 |
| Heartstatlog | 270 | 2 | 13 | 26 |
| Wine | 178 | 3 | 13 | 39 |
| Ecoli | 336 | 8 | 7 | 56 |
| WDBC | 569 | 2 | 30 | 60 |
| Vehicle | 846 | 4 | 18 | 72 |
| Segmentation | 210 | 7 | 18 | 126 |
| Air | 359 | 3 | 64 | 192 |
| Abalone | 4177 | 29 | 7 | 203 |
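The "Dimension" column in Table 12 equals Classes × Features because each candidate solution concatenates all cluster centroids into one vector; for Wine, 3 classes × 13 features gives a 39-dimensional search space. A minimal sketch of this encoding and of the K-means objective (sum of squared distances to the nearest centroid) that the optimizer presumably minimizes; the decoding scheme and fitness are assumptions for illustration, not the authors' exact code:

```python
def decode(position, k, n_features):
    # a candidate solution concatenates k centroids, so len(position) == k * n_features
    return [position[i * n_features:(i + 1) * n_features] for i in range(k)]

def sse(data, position, k, n_features):
    """K-means objective: assign each point to its nearest centroid and
    sum the squared Euclidean distances (assumed fitness of the hybrids)."""
    centroids = decode(position, k, n_features)
    total = 0.0
    for point in data:
        total += min(sum((p - c) ** 2 for p, c in zip(point, cen))
                     for cen in centroids)
    return total

# two points sitting exactly on the two centroids give zero error
assert sse([[0.0, 0.0], [2.0, 2.0]], [0.0, 0.0, 2.0, 2.0], 2, 2) == 0.0
```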
Table 13. Comparison of K-means optimization results of six algorithms on nine datasets.

| Dataset | Metric | SIAEO-K-means | IAEO-K-means | EAEO-K-means | AEO-K-means | DE-K-means | PSO-K-means |
|---|---|---|---|---|---|---|---|
| Cancer | Mean | 3.67 × 10^3 | 4.65 × 10^3 | 3.84 × 10^3 | 4.36 × 10^3 | 5.85 × 10^3 | 6.03 × 10^3 |
| | Std | 2.94 × 10^2 | 1.59 | 3.23 × 10^2 | 2.99 × 10^2 | 4.01 × 10^2 | 4.32 × 10^2 |
| | Rank | 1 | 4 | 2 | 3 | 5 | 6 |
| Heartstatlog | Mean | 1.13 × 10^4 | 1.40 × 10^4 | 1.16 × 10^4 | 1.48 × 10^4 | 1.40 × 10^4 | 1.45 × 10^4 |
| | Std | 7.02 × 10^1 | 1.60 × 10^2 | 5.32 × 10^2 | 0.00 | 6.37 × 10^2 | 1.94 × 10^1 |
| | Rank | 1 | 3 | 2 | 6 | 4 | 5 |
| Wine | Mean | 1.75 × 10^4 | 1.91 × 10^4 | 1.71 × 10^4 | 2.10 × 10^4 | 1.91 × 10^4 | 2.36 × 10^4 |
| | Std | 1.26 × 10^3 | 1.05 × 10^3 | 2.42 × 10^2 | 1.94 × 10^3 | 8.75 × 10^2 | 7.46 × 10^1 |
| | Rank | 2 | 3 | 1 | 5 | 4 | 6 |
| Ecoli | Mean | 9.88 × 10^1 | 1.21 × 10^2 | 9.96 × 10^1 | 1.52 × 10^2 | 1.36 × 10^2 | 1.07 × 10^2 |
| | Std | 3.63 | 1.17 × 10^1 | 1.04 × 10^1 | 0.00 | 8.32 | 4.21 |
| | Rank | 1 | 4 | 2 | 6 | 5 | 3 |
| WDBC | Mean | 1.65 × 10^5 | 2.46 × 10^5 | 2.00 × 10^5 | 2.96 × 10^5 | 2.80 × 10^5 | 2.98 × 10^5 |
| | Std | 4.22 × 10^3 | 3.20 × 10^1 | 3.96 × 10^4 | 3.98 × 10^3 | 2.07 × 10^4 | 2.17 × 10^1 |
| | Rank | 1 | 3 | 2 | 5 | 4 | 6 |
| Vehicle | Mean | 9.07 × 10^4 | 9.96 × 10^4 | 1.02 × 10^5 | 1.28 × 10^5 | 1.14 × 10^5 | 1.27 × 10^5 |
| | Std | 7.10 × 10^3 | 5.36 × 10^3 | 2.97 × 10^3 | 0.00 | 7.32 × 10^3 | 1.40 × 10^1 |
| | Rank | 1 | 2 | 3 | 6 | 4 | 5 |
| Segmentation | Mean | 2.90 × 10^4 | 3.53 × 10^4 | 3.08 × 10^4 | 3.56 × 10^4 | 3.66 × 10^4 | 3.91 × 10^4 |
| | Std | 9.79 × 10^2 | 5.38 × 10^2 | 2.66 × 10^2 | 5.05 × 10^3 | 1.33 × 10^3 | 1.50 × 10^2 |
| | Rank | 1 | 3 | 2 | 4 | 5 | 6 |
| Air | Mean | 4.48 × 10^1 | 3.57 × 10^1 | 3.97 × 10^1 | 7.72 × 10^1 | 1.35 × 10^2 | 6.22 × 10^1 |
| | Std | 3.92 | 2.71 | 6.68 × 10^−1 | 1.81 | 0.00 | 7.03 |
| | Rank | 3 | 1 | 2 | 5 | 6 | 4 |
| Abalone | Mean | 1.17 × 10^3 | 1.59 × 10^3 | 1.27 × 10^3 | 1.91 × 10^3 | 1.91 × 10^3 | 1.58 × 10^3 |
| | Std | 6.17 | 4.60 × 10^1 | 9.40 × 10^1 | 6.98 × 10^1 | 3.51 × 10^1 | 1.87 × 10^2 |
| | Rank | 1 | 4 | 2 | 5 | 6 | 3 |
| Count | | 7 | 1 | 1 | 0 | 0 | 0 |
| Avg. Rank | | 1.33 | 3.00 | 2.00 | 5.00 | 4.78 | 4.89 |
| Total Rank | | 1 | 3 | 2 | 6 | 4 | 5 |

Share and Cite

MDPI and ACS Style

Guo, W.; Wu, M.; Dai, F.; Qiang, Y. Improved Environmental Stimulus and Biological Competition Tactics Interactive Artificial Ecological Optimization Algorithm for Clustering. Biomimetics 2023, 8, 242. https://doi.org/10.3390/biomimetics8020242
