1. Introduction
Meta-heuristic algorithms are widely used in path planning [1], image detection [2], system control [3], and shop floor scheduling [4] due to their excellent flexibility, practicality, and robustness. Common meta-heuristic algorithms include the genetic algorithm (GA) [5,6], particle swarm optimization (PSO) [7,8], the gray wolf optimization algorithm (GWO) [9,10], chicken swarm optimization (CSO) [11], the sparrow search algorithm (SSA) [12], and the whale optimization algorithm (WOA) [13].
Different intelligent optimization algorithms employ different search approaches, but most aim to balance population diversity and search ability and to avoid premature convergence while ensuring convergence accuracy and speed [14]. In pursuit of these goals, numerous scholars have proposed improvements to the intelligent algorithms they studied. For example, Zhi-jun Teng et al. [15] introduced the PSO idea into the gray wolf optimization algorithm, preserving the individual optimum while improving the algorithm's ability to jump out of local optima; Hussien A. G. et al. [16] proposed two transfer functions (S-shaped and V-shaped) to map the continuous search space to a binary space, improving the search accuracy and speed of the whale optimization algorithm; Wang et al. [17] introduced a fuzzy system into the chicken swarm optimization process that adaptively adjusts the number of individuals, as well as the random factors, to balance the algorithm's local exploitation and global search abilities; Tian et al. [18] used logistic chaotic mapping to improve the initial population quality of the particle swarm algorithm while applying an auxiliary velocity mechanism to the globally optimal particle, effectively improving convergence; Li et al. [19] integrated two strategies, Levy flight and dimension-by-dimension evaluation, into the moth-flame optimization algorithm to improve its global search capability and overall effectiveness.
The chimpanzee optimization algorithm (ChoA) is a heuristic optimization algorithm based on the social behavior of chimpanzee populations, proposed by Khishe et al. [20] in 2020. Compared with traditional algorithms, ChoA has the advantages of fewer parameters, ease of understanding, and high stability. However, it also suffers from boundary aggregation in the initialized population, slow convergence, low accuracy, and a tendency to fall into local optima. To address these problems, many researchers have proposed different improvements. Du et al. [21] introduced a somersault foraging strategy into ChoA to keep the population from easily falling into local optima and to improve early-population diversity; however, this relatively narrow improvement has a limited effect. Kumari et al. [22] combined the SHO algorithm with ChoA, which improved the convergence accuracy of ChoA itself and enhanced its local exploitation ability on high-dimensional problems. Houssein et al. [23] applied opposition-based learning (OBL) in the initialization phase to expand population diversity in the search space of ChoA.
In summary, there are numerous improvements to the chimpanzee optimization algorithm; the improved algorithms suit certain individual optimization problems but reveal shortcomings on others. Therefore, in order to improve the optimization performance of ChoA, an improved chimpanzee optimization algorithm incorporating multiple strategies (IMSChoA) is proposed in this paper. Firstly, an improved sine chaotic mapping is used to initialize the population and eliminate boundary aggregation in the initial distribution. Secondly, a linear inertia weight and an adaptive acceleration factor from the particle swarm algorithm are introduced and combined with an improved nonlinear convergence factor to balance the search ability of the algorithm, accelerate convergence, and improve convergence accuracy. Finally, a sparrow elite mutation improved by an adaptively changing water wave factor and a Bernoulli chaotic mapping strategy are introduced to improve the ability of individuals to jump out of local optima. Optimization tests on 21 standard test functions, together with Wilcoxon rank sum statistical tests on the results, verify the robustness and applicability of the improved algorithm. Finally, the IMSChoA algorithm is applied to two engineering examples to further verify its superiority in dealing with mechanical structure optimization design problems.
The remainder of the article is organized as follows: Section 2 presents the mathematical model of the traditional ChoA algorithm; Section 3 presents the specific improvement strategies incorporated on top of ChoA; Section 4 compares and analyzes the results of IMSChoA against four other optimization algorithms on 21 standard test functions; Section 5 applies the IMSChoA algorithm to two engineering examples and analyzes the optimization results; finally, Section 6 summarizes the paper and discusses future work.
2. Basic Chimpanzee Algorithm
The ChoA algorithm is an intelligent algorithm that simulates the prey-hunting behavior of chimpanzee groups. According to the roles shown in the hunting process, individual chimpanzees are classified as drivers, barriers, chasers, and attackers. Group hunting is divided mainly into an exploration phase — driving, blocking, and chasing the prey — and an exploitation phase, attacking the prey. Each type of chimpanzee can think independently and search for the location of prey in its own way, while chimpanzees are also driven by social and sexual motivations, which makes individual hunting behavior appear chaotic in the final stage. It is assumed that the driver, barrier, chaser, and attacker are best able to predict the location of prey, and the others update their positions according to the chimpanzee closest to the prey. The model for chimpanzees driving and chasing prey is shown in Equations (1) and (2).
where XP is the position of the prey, XE is the position of the chimpanzee, m is a chaotic vector, t is the number of iterations, d(t) is the distance between the chimpanzee and the prey, and a and c are coefficient vectors. a and c are calculated by Equations (3) and (4), respectively. When |a| < 1, the chimpanzee moves toward the prey; when |a| > 1, the chimpanzee has deviated from the prey's position and expanded the search range.
where r1 and r2 are random numbers in [0, 1], a is a random variable in [−2f, 2f], and f is a linear convergence factor calculated by Equation (5). During the iterations, f decays linearly from 2 to 0, and tmax is the maximum number of iterations.
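The bodies of Equations (1)–(5) are not reproduced above, so the following sketch follows the standard ChoA formulation reconstructed from the surrounding definitions; note that with the usual coefficient formula a = 2f·r1 − f, a lies in [−f, f] (the text quotes [−2f, 2f]), so treat the ranges here as an assumption:

```python
import numpy as np

def chase_update(x_prey, x_chimp, t, t_max, m=1.0, rng=np.random.default_rng()):
    """One exploration-phase update for a single chimpanzee (Eqs (1)-(5)).

    Sketch of the standard ChoA formulation: f decays linearly from 2 to 0,
    a = 2*f*r1 - f, c = 2*r2, d(t) = |c*x_prey - m*x_chimp|, and the new
    position is x_prey - a*d. The chaotic vector m is simplified to a scalar.
    """
    f = 2.0 - 2.0 * t / t_max                 # Eq (5): linear convergence factor
    r1 = rng.random(x_chimp.shape)
    r2 = rng.random(x_chimp.shape)
    a = 2.0 * f * r1 - f                      # Eq (3): coefficient vector a
    c = 2.0 * r2                              # Eq (4): coefficient vector c
    d = np.abs(c * x_prey - m * x_chimp)      # Eq (1): distance to the prey
    return x_prey - a * d                     # Eq (2): updated chimpanzee position
```

At t = tmax the factor f reaches 0, so a vanishes and the chimpanzee sits exactly on the prey, matching the intended convergence behavior.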
The position of chimpanzees in the population is co-determined by the position of the driver, barrier, chaser, and attacker. The mathematical model of chimpanzee attack on prey is shown in Equations (6)–(8).
From Equations (6)–(8), the position of the prey is estimated from the position of the driver, barrier, chaser, and attacker. Other chimpanzees update their position in the direction of the prey.
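Equations (6)–(8) are likewise not shown above; the sketch below follows the standard ChoA attack phase, in which each of the four leaders yields a candidate position and the chimpanzee moves to their average (function name and scalar chaotic factor are mine):

```python
import numpy as np

def attack_update(x_chimp, leaders, t, t_max, rng=np.random.default_rng()):
    """Attack-phase update (Eqs (6)-(8)), sketched from the standard ChoA.

    `leaders` holds the positions of the attacker, barrier, chaser, and
    driver. Each leader produces a candidate position for the chimpanzee,
    and the candidates are averaged: the prey location is estimated from
    the four best individuals, as the text describes.
    """
    f = 2.0 - 2.0 * t / t_max
    candidates = []
    for x_lead in leaders:
        r1 = rng.random(x_chimp.shape)
        r2 = rng.random(x_chimp.shape)
        a = 2.0 * f * r1 - f
        c = 2.0 * r2
        d = np.abs(c * x_lead - x_chimp)      # Eqs (6)-(7): distance to each leader
        candidates.append(x_lead - a * d)
    return np.mean(candidates, axis=0)        # Eq (8): average of the four estimates
```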
In the final stage of the population's predation, when individuals have obtained enough food, chimpanzees release their natural instincts and scramble chaotically for food. This chaotic behavior in the final stage helps to further alleviate two problems — local optimum traps and slow convergence — on high-dimensional problems. To simulate this chaotic behavior, it is assumed that there is a 50% probability of choosing either the normal position update mechanism or the chaotic model; the model formulation is shown in Equation (9) [24].
where μ is a random number in [0, 1], and Chaotic is the chaotic mapping used to update the position.
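Equation (9) reduces to a coin flip between the two mechanisms; a minimal sketch (the chaotic map itself is left as a caller-supplied function, since its exact form comes from ref. [24], not this text):

```python
import numpy as np

def final_stage_update(x_normal, x_chimp, chaotic_map, rng=np.random.default_rng()):
    """Eq (9): with probability 0.5 use the normal position update,
    otherwise apply a chaotic update. `chaotic_map` is any map producing
    a perturbed position; its exact form is not fixed by the text."""
    mu = rng.random()                 # mu ~ U[0, 1]
    if mu < 0.5:
        return x_normal               # normal update mechanism
    return chaotic_map(x_chimp)       # chaotic update mechanism
```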
3. Improving the Chimpanzee Algorithm
Firstly, ChoA initializes its population by random distribution. This leads to poor population diversity and uniformity, a tendency toward boundary aggregation, and largely blind individual searches for the optimum. Secondly, the linearly decaying convergence factor that balances local and global search does not match the nonlinear optimization characteristics of the algorithm. Finally, the chaotic perturbation that lets the algorithm jump out of local optima is triggered with low probability, which causes great instability.
In summary, the corresponding improvement strategies are introduced for the problems of the ChoA algorithm, as follows.
3.1. Improved Sine Chaotic Mapping for Initializing Populations
Because the value of each dimension of an individual chimpanzee is randomly generated in the initialization stage, population diversity is poor, boundary aggregation is serious, and individual variability is low. Chaotic searches are based on non-repetition and ergodicity, unlike stochastic search methods, which are based on probabilities [25]. Common chaotic mappings include the circle, tent, iterative, logistic, and sine chaotic mappings. Among them, the sine mapping has good stability and high coverage, but it still exhibits uneven distribution and boundary aggregation.
The expression of the original sine chaos mapping is:
Therefore, to address the above problems, the sine mapping is improved by introducing the Chebyshev mapping. At the same time, a high-dimensional chaotic mapping is established on top of the original so that it better exhibits chaotic properties.
The expression of the improved sine chaos mapping is:
where λ and μ are random numbers in [0, 1] satisfying λ + μ = 1.
The dimensional distribution maps and histograms of the initial solutions before and after the improvement are shown in Figure 1, where Figure 1a,c shows the original sine mapping and Figure 1b,d shows the improved sine mapping. Comparing Figure 1a,b and Figure 1c,d, the value distribution of the improved sine chaotic mapping is more uniform, and the boundary aggregation problem is effectively solved.
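Since the expression of Equation (12) is not reproduced above, the sketch below is an illustrative combination only: a weighted sum of the sine map and the (absolute) Chebyshev map with weights λ and μ = 1 − λ, as the text requires; the particular map forms and the parameter k are assumptions:

```python
import numpy as np

def improved_sine_map(n, dim, k=4, lam=0.5, rng=np.random.default_rng()):
    """Generate an n x dim chaotic sequence in [0, 1].

    Illustrative sketch of the improved sine mapping: a lam-weighted sine
    map x <- sin(pi*x) blended with a mu-weighted absolute Chebyshev map
    x <- |cos(k*arccos(x))|, with lam + mu = 1 per the text.
    """
    mu = 1.0 - lam
    x = rng.random(dim)                       # random starting values per dimension
    seq = np.empty((n, dim))
    for i in range(n):
        sine_part = np.sin(np.pi * x)                                  # in [0, 1]
        cheb_part = np.abs(np.cos(k * np.arccos(np.clip(x, -1.0, 1.0))))  # in [0, 1]
        x = lam * sine_part + mu * cheb_part
        seq[i] = x
    return seq

def init_population(n, dim, lb, ub):
    """Map the chaotic sequence into the search space [lb, ub]."""
    return lb + improved_sine_map(n, dim) * (ub - lb)
```

Because both component maps stay in [0, 1], the blended sequence does too, and the affine mapping places every individual inside the search bounds.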
3.2. PSO Idea and Nonlinear Convergence Factor
3.2.1. PSO Idea
In order to balance the global and local search abilities in the early and late stages of the ChoA algorithm, the particle swarm idea is introduced to improve the position update of ChoA. The best position experienced by the particle itself and the best position of the population are used to update the particle's current position, realizing information exchange between individual chimpanzees and the population [26]. The position update formula is Equation (13).
where w is the inertia weight coefficient and C is the acceleration factor. The values of w and C relate to how the particle's past motion influences its present motion: when w and C are large, the particle's search space expands; when they are small, the particle's direction changes frequently and the search space is relatively small, which can lead the algorithm into a local optimum. Adjusting w and C therefore regulates the local and global search abilities of the algorithm and accelerates its convergence [27].
In order to improve the global search ability of the algorithm in the early stage while strengthening the local optimization ability in the later stage, this paper introduces an adaptive linear acceleration factor. At the beginning of the iteration, a larger value of C enables a wide-ranging search, helping the algorithm quickly locate the globally optimal region; as the number of iterations increases and the algorithm gradually converges, individuals search locally for the optimal solution, so a smaller C value enables accurate, small-step exploration around the optimal position and improves convergence accuracy. At the same time, to prevent the algorithm from falling into local optima during iteration, a cosine function is introduced to correct the acceleration factor and keep it fluctuating. The adaptive acceleration factor model is shown in Equation (14), and the variation of C with the number of iterations is shown in Figure 2.
where g is the adjustment trade-off factor and tmax is the maximum number of iterations.
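Equation (14) is not reproduced above, so the following is only an illustrative shape with the properties the text describes — large early, decaying with iterations, and perturbed by a cosine term so it keeps fluctuating; the linear base, oscillation amplitude, and c_min floor are all assumptions:

```python
import numpy as np

def acceleration_factor(t, t_max, g=2.0, c_min=0.5):
    """Illustrative adaptive acceleration factor (the paper's Eq (14) is
    not reproduced here). It decays linearly from the trade-off factor g
    toward c_min, while a cosine term keeps it oscillating to help the
    search avoid settling into a local optimum."""
    base = c_min + (g - c_min) * (1.0 - t / t_max)                 # linear decay g -> c_min
    return base * (1.0 + 0.1 * np.cos(4.0 * np.pi * t / t_max))    # cosine fluctuation
```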
At the same time, in order to accelerate convergence and improve convergence accuracy, a linear inertia weight model is introduced. At the beginning of the iteration, a large weight makes the population search the solution space extensively in large steps and locate the globally optimal region quickly. As the number of iterations increases and the algorithm gradually converges, the inertia weight coefficient decreases to facilitate a fine, small-step search around the optimal position and to improve convergence accuracy. The linear inertia weight model is shown in Equation (15), and the variation of w with the number of iterations is shown in Figure 3.
where wmax and wmin are the maximum and minimum weight coefficients, respectively, t is the number of iterations, and tmax is the maximum number of iterations.
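The linear inertia weight of Equation (15) can be sketched directly from the description above (the 0.9/0.4 defaults are the usual PSO choices, not values fixed by the text):

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linear inertia weight (Eq (15)): decays from w_max at t = 0 to
    w_min at t = t_max, trading wide early exploration for fine late
    exploitation."""
    return w_max - (w_max - w_min) * t / t_max
```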
3.2.2. Nonlinear Decay Convergence Factor
One of the important factors in evaluating the performance of a heuristic algorithm is its ability to balance global and local search. From the analysis of the chimpanzee algorithm, when |a| < 1, the individual converges toward the prey, and, when |a| > 1, the chimpanzee has deviated from the prey's position and expanded the search range. The change in the convergence factor therefore determines the global and local search abilities of the algorithm. Accordingly, this paper introduces a nonlinear decay model that cooperates with the adaptive acceleration factor from the particle swarm idea to jointly balance the global and local search abilities of the algorithm. Meanwhile, a control factor σ is introduced to control the decay amplitude. The nonlinear decay convergence factor is described by Equation (16).
where t is the number of iterations, tmax is the maximum number of iterations, and fg is the initial convergence factor. σ ∈ [1, 10], and the larger σ is, the slower the decay rate, as shown in Figure 4.
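Equation (16) is not reproduced above; one nonlinear decay with the stated properties (decays from fg to 0, and a larger σ keeps the factor near fg longer, i.e. a slower decay) is the power-law form below, which is an assumption rather than the paper's exact formula:

```python
def nonlinear_convergence_factor(t, t_max, f_g=2.0, sigma=5.0):
    """Illustrative nonlinear decay convergence factor. Decays from the
    initial value f_g to 0 over the run; the control factor sigma
    (sigma in [1, 10]) flattens the early decay, so larger sigma means
    a slower decay rate, as the text describes."""
    return f_g * (1.0 - (t / t_max) ** sigma)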
3.3. Improved Sparrow Elite Mutation and Bernoulli Chaotic Mapping
In the ChoA algorithm, the individual update is affected by the previous optimal individual in each iteration, so the algorithm easily converges to a local optimum during the iterative process. To address this problem, an optimization strategy combining sparrow elite mutation improved by an adaptive water wave factor with Bernoulli chaotic mapping is proposed.
3.3.1. Improved Sparrow Elite Mutation
The sparrow search algorithm is an efficient swarm intelligence optimization algorithm that divides the search population into three parts — explorers, followers, and early warners — which divide the work of finding the optimal value among themselves [28]. Sparrow elite mutation assigns the capabilities of individuals with higher search performance to the current optimal individual. At each ChoA iteration, the individuals in the top 40% by current fitness value are given a stronger optimization ability, and an adaptive water wave factor is added to the mutant individual update formula [29] to further improve the optimization ability of mutant individuals. The sparrow elite mutation is described by Equation (17).
where X(t)0.4 denotes the individuals in the top 40% by current fitness value, Q is a random number obeying a normal distribution, L is a 1 × d matrix with all elements equal to 1, ST is the warning value, taken as 0.6, and v is the water wave factor, which varies adaptively with the number of iterations. The adaptive water wave factor is modeled by Equation (18).
As the iterations increase, the uncertainty of the iterative process and the dramatic abrupt changes in the water wave factor enhance the ability of individuals to jump out of local optima. The variation of the water wave factor is shown in Figure 5.
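Equations (17)–(18) are not reproduced above; the sketch below follows the sparrow-search explorer update (exponential contraction below the warning value ST, otherwise a normally distributed step scaled by the all-ones vector L), and the fluctuating water wave factor v is an assumed form chosen only to exhibit the abrupt changes the text describes:

```python
import numpy as np

def sparrow_elite_mutation(x, t, t_max, st=0.6, rng=np.random.default_rng()):
    """Illustrative sparrow-elite mutation applied to a top-40% individual.

    Below the warning value ST the individual contracts exponentially,
    modulated by an adaptive water wave factor v; otherwise it takes a
    step Q * L with Q ~ N(0, 1). Both the v formula and the contraction
    form are assumptions, since Eqs (17)-(18) are not shown in the text.
    """
    # assumed water wave factor: decays overall but oscillates sharply
    v = (1.0 - t / t_max) * np.abs(np.cos(10.0 * np.pi * t / t_max)) + 0.1
    r2 = rng.random()
    if r2 < st:
        return x * np.exp(-(t + 1) / (v * t_max))   # contract toward the origin
    q = rng.normal()                                # Q ~ N(0, 1)
    return x + q * np.ones_like(x)                  # step by Q * L (L is all ones)
```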
3.3.2. Bernoulli Chaotic Mappings
Bernoulli chaotic mapping is a classical representative of chaotic mappings and is widely used [30]. Its mathematical expression is shown in Equation (19).
where t is the number of chaotic iterations and λ is the conditioning factor, generally taken as 0.4. The resulting new chaotic sequence is then mapped into the search space of the solution as follows.
where Xtd is the position of the tth element in the dth dimension, XU and XL are the upper and lower bounds of the search space, and Ztd is the chaotic value generated by Equation (19).
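Since Equations (19)–(20) are not reproduced above, the sketch below uses the usual piecewise form of the Bernoulli map with λ = 0.4 and the standard affine mapping into the search bounds; treat the exact piecewise form as an assumption consistent with the surrounding definitions:

```python
import numpy as np

def bernoulli_map(z, lam=0.4):
    """One step of the Bernoulli chaotic map (Eq (19)) in its usual
    piecewise form, with conditioning factor lam = 0.4:
    z/(1-lam) on [0, 1-lam], and (z-(1-lam))/lam on (1-lam, 1]."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 1.0 - lam, z / (1.0 - lam), (z - (1.0 - lam)) / lam)

def chaos_to_search_space(z, x_lower, x_upper):
    """Eq (20): map a chaotic value z in [0, 1] into [x_lower, x_upper]."""
    return x_lower + z * (x_upper - x_lower)
```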
3.4. IMSChoA Algorithm Flow
The specific implementation steps of the IMSChoA algorithm are as follows.
Step 1: Initialize the population using the improved sine chaotic mapping and set the relevant parameters, including the number of individuals N, the maximum number of iterations tmax, the dimension d, the search boundaries ub and lb, the maximum and minimum weight factors, and the adjustment trade-off factor g.
Step 2: Update the acceleration factor, inertia weight, convergence factor, and water wave factor.
Step 3: Calculate the position of each chimpanzee.
Step 4: Update the positions of the drivers, barriers, chasers, and attackers.
Step 5: Calculate the fitness values and their average to find the global optimum and the individual optima.
Step 6: Compare the individual fitness value f with the average fitness value favg. If f < favg, apply a Bernoulli perturbation, determine whether the perturbed individual is better than the original, and update it if so; otherwise keep the original individual unchanged. If f ≥ favg, apply sparrow elite mutation and replace the individual if the mutant is better; otherwise keep it.
Step 7: Update the global optimal value of the population and the individual optimal value.
Step 8: Determine whether the condition is satisfied, and output the result if satisfied, otherwise return to step 2 for execution.
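The steps above can be sketched as the following skeleton. Only the control flow is meant to be faithful: the strategy sub-formulas (chaotic initialization, adaptive factors, elite mutation, Bernoulli perturbation) are simplified to placeholder updates, and all names are mine:

```python
import numpy as np

def imschoa_sketch(fitness, n=30, dim=2, lb=-10.0, ub=10.0, t_max=200,
                   rng=np.random.default_rng(42)):
    """Structural sketch of IMSChoA steps 1-8 (simplified placeholders)."""
    pop = lb + rng.random((n, dim)) * (ub - lb)        # step 1 (chaotic init omitted)
    fit = np.apply_along_axis(fitness, 1, pop)
    g_best = pop[np.argmin(fit)].copy()
    for t in range(t_max):
        f = 2.0 * (1.0 - t / t_max)                    # step 2: decaying factor(s)
        for i in range(n):                             # steps 3-4: move toward the best
            r1, r2 = rng.random(dim), rng.random(dim)
            a, c = 2.0 * f * r1 - f, 2.0 * r2
            d = np.abs(c * g_best - pop[i])
            pop[i] = np.clip(g_best - a * d, lb, ub)
        fit = np.apply_along_axis(fitness, 1, pop)     # step 5: fitness and its mean
        f_avg = fit.mean()
        for i in range(n):                             # step 6: conditional perturbation
            if fit[i] < f_avg:                         # better half: small jitter
                cand = pop[i] + 0.05 * (rng.random(dim) - 0.5)
            else:                                      # worse half: larger mutation step
                cand = pop[i] + rng.standard_normal(dim)
            cand = np.clip(cand, lb, ub)
            cand_fit = fitness(cand)
            if cand_fit < fit[i]:                      # keep only improvements
                pop[i], fit[i] = cand, cand_fit
        if fit.min() < fitness(g_best):                # step 7: update the global best
            g_best = pop[np.argmin(fit)].copy()
    return g_best, fitness(g_best)                     # step 8: output the result
```

On a 2-D sphere function this skeleton contracts the population toward the incumbent best while the greedy perturbation step refines it, which is the qualitative behavior the eight steps describe.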
3.5. Time Complexity Analysis
Time complexity is an important index of algorithm performance [31]. Assume that the chimpanzee population size is N, the search space dimension is n, the initialization time is t1, the update time of an individual chimpanzee position is t2, and the time to evaluate the objective fitness function is f(n). The time complexity of the ChoA algorithm is then:
In the IMSChoA algorithm, the time required to initialize the parameters is the same as in standard ChoA. Let t3 be the time used to initialize the population with the improved sine mapping and, in the loop phase, let t4, t5, and t6 be the times required for the particle swarm idea, the nonlinear convergence factor, and the improved sparrow elite mutation with Bernoulli chaotic mapping, respectively. The time complexity of IMSChoA is then:
Equations (21) and (22) show that the time complexity of IMSChoA and ChoA is the same, as expressed in Equation (23).
In summary, the improvement strategy proposed in this paper for the ChoA defect does not increase the time complexity.
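The expressions referenced as Equations (21)–(23) are not reproduced in the text; under the stated assumptions (initialization runs once, position updates and fitness evaluations run every iteration for every individual) they can be sketched as follows, where the specific grouping of terms is a reconstruction rather than the paper's exact notation:

```latex
% Eq (21): standard ChoA
T_{\mathrm{ChoA}} = O\!\left(t_1 + t_{\max}\, N \left(n\, t_2 + f(n)\right)\right)

% Eq (22): IMSChoA -- the extra strategies add constant per-individual cost
T_{\mathrm{IMSChoA}} = O\!\left(t_3 + t_{\max}\, N \left(n\,(t_2 + t_4 + t_5 + t_6) + f(n)\right)\right)

% Eq (23): both collapse to the same order
T_{\mathrm{IMSChoA}} = T_{\mathrm{ChoA}} = O\!\left(t_{\max}\, N \left(n + f(n)\right)\right)
```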
6. Conclusions
In this paper, we propose an improved chimpanzee search algorithm with multi-strategy fusion, IMSChoA, to address the low convergence accuracy of the ChoA optimization algorithm and its tendency to fall into local optima. Firstly, an improved sine chaotic mapping initializes the population and eliminates boundary aggregation in the initial distribution. Secondly, the particle swarm idea, cooperating with the improved nonlinear convergence factor, balances the search ability of the algorithm, accelerates convergence, and improves convergence accuracy. Finally, sparrow elite mutation improved by an adaptive water wave factor and a Bernoulli chaotic mapping strategy improve the ability of individuals to jump out of local optima. Optimization tests on 21 standard test functions, analyzed with Wilcoxon rank sum statistical tests, verified the robustness and applicability of the algorithm. Finally, the IMSChoA optimization algorithm was applied to a spring design case study and to the optimization of a fully automatic piston manometer control system; the experimental results show that IMSChoA also has good applicability to mechanical structure optimization design problems, although its comprehensive performance on low-dimensional, small-range, high-precision searches remains inadequate. The next step will therefore be to combine the IMSChoA algorithm with deep learning to overcome its limitations on high-precision and complex problems, and to apply it to more practical engineering problems.