Article

An Improved Grey Wolf Optimization with Multi-Strategy Ensemble for Robot Path Planning

School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(18), 6843; https://doi.org/10.3390/s22186843
Submission received: 30 July 2022 / Revised: 29 August 2022 / Accepted: 6 September 2022 / Published: 9 September 2022
(This article belongs to the Section Sensors and Robotics)

Abstract

Grey wolf optimization (GWO) is a meta-heuristic algorithm inspired by the hierarchy and hunting behavior of grey wolves. GWO has the advantages of a simple concept and few adjustment parameters, and has been widely used in different fields. However, it has difficulty avoiding premature convergence and tends to fall into local optima. This paper presents an improved grey wolf optimization (IGWO) to ameliorate these drawbacks. Firstly, a modified position update mechanism for pursuing high-quality solutions is developed. By designing an improved position update formula, a proper balance between exploration and exploitation is achieved. Moreover, the leadership hierarchy is strengthened by proposing adaptive weights for α, β and δ. Then, a dynamic local optimum escape strategy is proposed to reinforce the ability of the algorithm to escape from local stagnation. Finally, some individuals are repositioned with the aid of the positions of the leaders. These individuals are pulled to new positions near the leaders, helping to accelerate the convergence of the algorithm. To verify the effectiveness of IGWO, a series of contrast experiments are conducted. On the one hand, IGWO is compared with some state-of-the-art GWO variants and several promising meta-heuristic algorithms on 20 benchmark functions. Experimental results indicate that IGWO performs better than the other competitors. On the other hand, the applicability of IGWO is verified on a robot global path planning problem, and simulation results demonstrate that IGWO can plan shorter and safer paths. Therefore, IGWO is successfully applied to path planning as a new method.

1. Introduction

In recent years, with the development of robot technology, more and more work has come to rely on robots. Mobile robots have gradually become a part of industrial manufacturing and daily human life [1,2,3,4]. Robots can execute operations such as perception and decision-making, assisting or even replacing human beings in heavy, repetitive or dangerous tasks. At present, research topics on robots cover robot navigation, robot mechanism design, human-robot interaction and so on [5,6,7,8]. One of the key technologies in robot navigation is global path planning [9,10], which refers to generating a safe and effective trajectory on the premise of knowing all environmental information. Path planning algorithms can be divided into traditional algorithms and meta-heuristic algorithms [11]. Traditional path planning algorithms include the A* algorithm, rapidly-exploring random tree (RRT), the Dijkstra algorithm, etc. Meta-heuristic algorithms include particle swarm optimization (PSO) [12], the ant colony algorithm (ACO) [13], the cuckoo search algorithm (CS) [14], the bat algorithm (BA) [15], the grey wolf optimizer (GWO) [16], etc. On account of the advantages of fewer parameters and easy implementation, research on meta-heuristic algorithms has expanded rapidly [17].
At present, heuristic algorithms have been widely applied. Wu et al. [18] successfully solved the graph partitioning problem using a deterministic annealing neural network. In [19], Wu proposed a deterministic annealing neural network approach and verified the effectiveness of the algorithm on test problems. A meta-heuristic algorithm can be regarded as an improved heuristic algorithm with the addition of random elements. Meta-heuristic algorithms, also known as intelligent optimization algorithms, are a class of methods that solve optimization problems based on computational intelligence mechanisms. They imitate rules, behaviors or mechanisms found in physical, biological and social systems [20]. Due to their stochastic character and their ability to retain information along the optimization routine, meta-heuristic algorithms are feasible for settling complex and challenging optimization problems. In addition, meta-heuristic algorithms can be adapted to different applications with small adjustments [21]. Extensive research and experiments in recent years indicate that meta-heuristic algorithms have unique advantages in solving robot path planning problems [22,23]. In [24], an improved ant colony algorithm was proposed and applied to the path planning of mobile robots. By extending the number of search directions of ants to 16, each individual can query 24 neighborhoods of the current node at each step, thus increasing the optional directions and search range in the search process and improving the search efficiency and accuracy of the algorithm. To enhance the ability to jump out of local minima, an improved PSO was proposed by employing an adaptive velocity formula in PSO [25]. Simultaneously, a robot path planning method with a smoothing strategy was proposed by combining high-order Bezier curves. To improve the performance of PSO and solve the unmanned aerial vehicle (UAV) path planning problem in complex environments, a spherical vector was introduced into PSO in [26]. Song et al. [27] designed a parallel cuckoo search algorithm (CS) to help UAVs avoid obstacles and reach the destination in a three-dimensional complex environment. In [28], a PSO variant based on evolutionary operators was proposed by introducing a crossover operator and a bee colony operator to solve multi-robot path planning problems. In [29], a hybrid PSO-GWO algorithm combining the characteristics of GWO and PSO was proposed to optimize the robot trajectory.
GWO [16] was proposed by Seyedali Mirjalili et al. in 2014. This algorithm simulates the hunting process of grey wolves and has become one of the most popular algorithms in recent years [30,31]. GWO not only has the advantages of a simple structure and few parameters like other meta-heuristic algorithms, but also possesses an adaptive regulatory factor, which can better balance intensification and diversification compared with other algorithms [32,33]. Therefore, GWO has attracted extensive attention from scholars and has been successfully applied in multifarious fields, including parameter extraction [34], job shop scheduling [35], feature selection [36], and disease classification prediction [37]. However, GWO also has the common problems of meta-heuristic algorithms, such as prematurity and slow convergence [38,39]. In order to improve population diversity, reinforcement learning (RL) was introduced into GWO in [40]; the improved GWO was then successfully applied to UAV path planning. In [41], aiming at the weakness that GWO easily falls into local optima in high-dimensional tasks, random leaders and random spiral-form motions were introduced into GWO. Heidari et al. [42] proposed an improved GWO based on Levy flight in order to cope with the disadvantage of prematurity and enhance the ability to solve global optimization problems. Luo [43] proposed an enhanced GWO variant to strengthen the search guidance of the head wolves. Experimental results proved that the convergence speed and accuracy of the algorithm were significantly improved. In [44], a new GWO variant was proposed for robot path planning, where the concept of lion swarm optimization (LSO) and dynamic weights were introduced into the original GWO to augment the searching ability of α, β and δ on the premise of ensuring population diversity. The above studies have shown that GWO is useful in solving optimization problems, and that the performance of the algorithm can be improved from different aspects.
In order to improve the performance of the algorithm, an improved grey wolf optimization (IGWO) is proposed in this paper, and its feasibility in the field of robot path planning is demonstrated by simulation experiments. It mainly offers the following three improvements:
1. A modified position update mechanism is proposed to improve the accuracy of the solution by providing the algorithm with a better tradeoff of exploration and exploitation.
2. A dynamic local optimum escape strategy is designed, which can help the algorithm jump out of local optima traps when the algorithm is considered to be trapped in local stagnation.
3. An individual repositioning method is proposed. This strategy forcibly pulls some individuals back to the vicinity of the current leaders to accelerate the convergence of the algorithm in the later stage of the iterations.
To verify the performance of IGWO and its feasibility for solving the path planning problem, this paper carries out extensive comparative experiments, including numerical optimization experiments and robot path planning examples. In the numerical optimization experiments, 20 benchmark functions are selected for two groups of statistical experiments. In the first set of experiments, IGWO is compared with five excellent recently proposed GWO variants. In the second group, seven well-established meta-heuristic algorithms including GWO are compared with IGWO. Experimental results illustrate that IGWO has obvious advantages in both convergence accuracy and speed on the majority of benchmark functions. In the robot path planning examples, this paper designs two obstacle environment models and fully verifies the applicability of IGWO with several comparison tests. In addition, cubic spline interpolation is used to smooth the path and obtain a path more aligned with the kinematic characteristics of the robot. Experimental results exhibit that, compared with the contrast algorithms, the routes planned by IGWO are not only shorter but also more stable, indicating that IGWO has higher application value.
The remaining parts of the paper are structured as follows: Section 2 reviews the original GWO briefly. Section 3 provides a detailed description of the proposed approach. Section 4 illustrates the information of the benchmark functions used in experiments and analyzes the statistical results in detail, which proves that the proposed IGWO has better performance. The application of IGWO to settle robot path planning problems is highlighted in Section 5. The comparative test results of path planning are presented to verify the effectiveness and feasibility of the improved algorithm applied to path planning. Finally, Section 6 summarizes the conclusion and identifies future works.

2. Review of GWO

2.1. Leadership Hierarchy

Grey wolves live in a pack with a strict social hierarchy, which is one of the inspirations for GWO, as shown in Figure 1. A grey wolf pack can be divided into four levels: alpha (α), beta (β), delta (δ) and omega (ω), from high to low, corresponding to the structural pyramid in Figure 1 from top to bottom. Each wolf plays a specific role in the pack according to the level it belongs to. α, also called the dominant wolf, is responsible for planning and deciding the hunting behavior of the group, representing the optimal solution obtained by the algorithm so far. β follows α in the rankings, and its duty is to assist α in decision-making. δ, ranking at the third level in the social hierarchy, takes charge of reconnoitering, surveilling and guarding the pack as scouts, sentinels and guardians. Although ω wolves occupy the lowest status in the tribe, their presence is essential to maintain pack peace [16]. In the mathematical model of GWO, the three best-fit solutions in the population are regarded as the α, β and δ wolves, and the remaining wolves are regarded as ω wolves.

2.2. Hunting Mechanism

GWO not only mimics the leadership hierarchy of grey wolves in nature, but also mimics their hunting mechanism [45]. The mathematical model is constructed with respect to the behavior of encircling prey, and the action mode can be described by Equations (1)–(5).
$D = \left| C \cdot X_p^t - X^t \right|$ (1)
$X^{t+1} = X_p^t - A \cdot D$ (2)
$A = 2a \cdot r_1 - a$ (3)
$C = 2 \cdot r_2$ (4)
$a = 2 - 2\,(t/\mathit{Max\_iteration})$ (5)
where a is the convergence factor, decreasing linearly from 2 to 0 with the number of iterations. r_1 and r_2 represent random vectors within [0, 1]. Both A and C are coefficient vectors. D denotes the approximate distance between the current wolf position and the prey, and X^t and X^{t+1} stand for the positions of the wolf at the t-th and (t+1)-th iteration, respectively. X_p^t represents the position of the prey at the t-th iteration.
To mathematically describe the hunting mechanism, GWO assumes that the higher-level wolves are closer to the prey in the pack, that is, α, β and δ occupy the most powerful positions in the whole pack. Thus, the positions of remaining wolves are determined by the three wolves, which can be calculated as follows:
$D_\alpha = \left| C_1 \cdot X_\alpha^t - X^t \right|,\quad D_\beta = \left| C_2 \cdot X_\beta^t - X^t \right|,\quad D_\delta = \left| C_3 \cdot X_\delta^t - X^t \right|$ (6)
$X_1^t = X_\alpha^t - A_1 \cdot D_\alpha,\quad X_2^t = X_\beta^t - A_2 \cdot D_\beta,\quad X_3^t = X_\delta^t - A_3 \cdot D_\delta$ (7)
$X^{t+1} = \dfrac{X_1^t + X_2^t + X_3^t}{3}$ (8)
where C_1, C_2 and C_3 are random coefficient vectors defined by Equation (4). A_1, A_2 and A_3 are coefficient vectors defined by Equation (3). X_α^t, X_β^t and X_δ^t represent the positions of α, β and δ. The pseudocode of GWO is shown in Algorithm 1, and the position update of the wolves is illustrated in Figure 2.
Algorithm 1. The pseudocode of conventional GWO
1. Generate a population Xi (i = 1, 2, …, n) randomly
2. Initialize the parameters of GWO (max_iteration, a, A and C)
3. Calculate the fitness values and assign α, β and δ
4. While (t < max_iteration)
5.         For each grey wolf
6.                 Update the position of the current grey wolf using Equations (6)–(8)
7.         End for
8.         Update a, A and C
9.         Amend the grey wolves’ positions beyond boundary limits
10.       Calculate the fitness values of the new positions
11.       Update the α, β and δ
12.        t = t + 1
13. End while
14. Return the position of α
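To make Algorithm 1 concrete, the following is a minimal Python sketch of the conventional GWO loop built from Equations (1)–(8). It is an illustration rather than the authors' MATLAB implementation; details such as re-selecting the leaders from the current pack in every iteration and the function name gwo are assumptions of this sketch.

```python
import numpy as np

def gwo(f, dim, lb, ub, n_wolves=30, max_iteration=500):
    """Minimal sketch of conventional GWO (Equations (1)-(8))."""
    X = lb + (ub - lb) * np.random.rand(n_wolves, dim)       # random initial pack
    fit = np.apply_along_axis(f, 1, X)
    idx = np.argsort(fit)
    alpha, beta, delta = X[idx[0]].copy(), X[idx[1]].copy(), X[idx[2]].copy()

    for t in range(max_iteration):
        a = 2 - 2 * t / max_iteration                         # Equation (5)
        for i in range(n_wolves):
            X1, X2, X3 = np.empty(dim), np.empty(dim), np.empty(dim)
            for leader, Xk in zip((alpha, beta, delta), (X1, X2, X3)):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A = 2 * a * r1 - a                            # Equation (3)
                C = 2 * r2                                    # Equation (4)
                D = np.abs(C * leader - X[i])                 # Equation (6)
                Xk[:] = leader - A * D                        # Equation (7)
            X[i] = (X1 + X2 + X3) / 3                         # Equation (8)
        X = np.clip(X, lb, ub)                                # amend positions beyond boundaries
        fit = np.apply_along_axis(f, 1, X)
        idx = np.argsort(fit)
        alpha, beta, delta = X[idx[0]].copy(), X[idx[1]].copy(), X[idx[2]].copy()
    return alpha, f(alpha)

# Example: minimize the Sphere function in 30 dimensions
best_pos, best_val = gwo(lambda x: np.sum(x**2), dim=30, lb=-100.0, ub=100.0)
```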

3. Development of IGWO

Three modifications are proposed in this section to overcome the defects of GWO. Firstly, a modified position update mechanism is designed, in which an improved updating formula is constructed and an enhanced diversity method combining an opposition-based learning strategy and Cauchy mutation is introduced. In the proposed updating formula, the leaders (α, β and δ) compose the leader-determined term. The leaders are assigned different leadership authorities according to their fitness values, which is better suited to the hierarchy in nature. Furthermore, a random term is added on the basis of the leader-determined term to increase the exploration capability of the algorithm. Secondly, a dynamic local optimum escape strategy is proposed to escape from local optimal values. In this strategy, a tuning parameter is employed to dynamically adjust the degree of variation. Finally, an individual repositioning method is introduced to speed up the convergence rate of the algorithm by using the positions of α, β and δ. The framework of IGWO is presented in Figure 3, and the details of IGWO are introduced and discussed as follows.

3.1. Modified Position Update Mechanism

Wolves in nature have a strict hierarchy. The higher the rank of a wolf, the more dominant it is over the whole population and the stronger its influence in the hunting process. A rigid hierarchy is crucial during hunting. However, in the hunting mechanism of GWO, as described by the position updating formula in Equation (8), the leaders have the same power (1/3) over the population. It is apparent that this distribution of power is incongruent with the real hierarchy.
Since the fitness values of the individuals reflect their merits and determine their hierarchy, this paper takes the fitness value into consideration and designs an adaptive weight coefficient based on the fitness value to better simulate the real hierarchy of the wolf pack. The mathematical expression is given in Equations (9)–(11).
$\theta_i = \begin{cases} \dfrac{1}{\left| f(X_i) \right| + 0.0001}, & i \in \{\alpha, \beta, \delta\}, \text{ when the objective function is minimized} \\ \left| f(X_i) \right|, & i \in \{\alpha, \beta, \delta\}, \text{ when the objective function is maximized} \end{cases}$ (9)
where θ_i, i ∈ {α, β, δ}, is an intermediate variable for calculating the adaptive weight coefficient. The larger θ_i is, the closer the corresponding position is to the prey. In addition, the constant 0.0001 is added to prevent infinity when f(X_i) equals 0 in the minimization problem. Then, the weight coefficients of α, β and δ can be calculated using Equation (10), and the position updating formula is rewritten as Equation (11).
$w_i = \theta_i / (\theta_\alpha + \theta_\beta + \theta_\delta),\quad i \in \{\alpha, \beta, \delta\}$ (10)
$X^{t+1} = rand \cdot \left( w_\alpha X_\alpha^t + w_\beta X_\beta^t + w_\delta X_\delta^t \right)$ (11)
where rand is a random number between 0 and 1. Taking the optimization of the Sphere function as an example, Figure 4 shows the change of the leaders' weight coefficients (w_α, w_β and w_δ) calculated by Equations (9) and (10). It is obvious that during the optimization, the corresponding weights of α, β and δ descend sequentially, which is more consistent with the hierarchical system of a wolf pack.
Because the individuals are randomly initialized and distributed in the search space, α, β and δ have a probability of approaching the prey similar to that of ω in the preliminary phase. Apparently, the leader wolves are not yet competent to guide the population search at the beginning, so using the leaders to guide the search would increase the risk of premature convergence. To settle this deficiency, this paper adds a random term to the position update formula, which randomly selects individuals from the community to take charge of the next search. Combining this with Equation (11), the improved position updating formula is ultimately constructed as Equation (12).
$X^{t+1} = n_1 \cdot rand_1 \cdot \left( w_\alpha X_1^t + w_\beta X_2^t + w_\delta X_3^t \right)/3 + n_2 \cdot \left( X^t + rand_2 \cdot (X_{rand}^t - X^t) \right)$ (12)
$n_1 = t/\mathit{Max\_iteration}$ (13)
$n_2 = 1 - n_1$ (14)
where X^{t+1} and X^t denote the current wolf's position at the (t+1)-th and t-th iteration, respectively. X_{rand}^t represents the position of a randomly selected wolf. rand_1 and rand_2 are random numbers in [0, 1]. n_1 and n_2 are adjustment parameters that establish a tradeoff between the leader-determined term and the random term. Moreover, n_1 and n_2 are set to add up to 1, ensuring the convergence of the solution. n_1 increases linearly to 1 with the number of iterations, and the corresponding search behavior can be described as follows: while n_1 is smaller than n_2 in the early stage, the random term provides the main guidance of movement for the pack, conducting a full exploration of the search space. In the later stage, n_1 increases toward 1, and the dominating right returns to the three leaders, enhancing the exploitation capacity of the approach. Compared with Equation (8), the new formula better simulates the hierarchy of wolf society and balances the global and local searches.
In addition, this paper introduces population opposition-based learning [46,47,48] and Cauchy mutation to increase population diversity. The mathematical model is shown in Equation (15), where gamma is the scale parameter of the mutation. A threshold τ is set to decide how to renew the solutions: the enhanced diversity method is executed with probability τ and Equation (12) is used with probability (1 − τ). In this paper, τ is set to 0.3. Finally, a greedy strategy is employed to preserve the high-quality solutions, whose mathematical expression is defined as Equation (16).
$X_i^{new} = \begin{cases} X_i + Cauchy(0, gamma), & i \in \{\alpha, \beta, \delta\} \\ rand(1, D) \cdot (ub + lb) - X_i, & \text{else} \end{cases}$ (15)
$X^{t+1} = \begin{cases} X^t, & \text{if } f(X^t) < f(X^{t+1}) \\ X^{t+1}, & \text{otherwise} \end{cases}$ (16)
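For illustration, the adaptive weights of Equations (9) and (10), the blended update of Equations (12)–(14) and the diversity step of Equation (15) could be coded as in the Python sketch below. This is a minimal, minimization-only reading of the text; the helper names, the exact placement of rand_1 and rand_2, and the way X1, X2 and X3 are taken from Equation (7) are assumptions of this sketch.

```python
import numpy as np

def leader_weights(f_alpha, f_beta, f_delta):
    """Adaptive weights for a minimization problem, Equations (9)-(10)."""
    theta = 1.0 / (np.abs([f_alpha, f_beta, f_delta]) + 1e-4)
    return theta / theta.sum()                        # (w_alpha, w_beta, w_delta)

def modified_update(X, X1, X2, X3, X_rand, w, t, max_iteration):
    """Improved position update, Equations (12)-(14).

    X1, X2, X3 are the leader-guided candidates of Equation (7);
    X_rand is a randomly selected pack member.
    """
    n1 = t / max_iteration                            # Equation (13)
    n2 = 1.0 - n1                                     # Equation (14)
    rand1, rand2 = np.random.rand(), np.random.rand()
    leader_term = rand1 * (w[0] * X1 + w[1] * X2 + w[2] * X3) / 3.0
    random_term = X + rand2 * (X_rand - X)
    return n1 * leader_term + n2 * random_term        # Equation (12)

def diversity_step(X, lb, ub, gamma, is_leader):
    """Enhanced diversity method, Equation (15): Cauchy mutation for the
    leaders, opposition-based learning for the remaining wolves."""
    if is_leader:
        return X + gamma * np.random.standard_cauchy(X.shape)
    return np.random.rand(*X.shape) * (ub + lb) - X
```

Equation (16) then amounts to a simple greedy comparison: after each update, the new position replaces the old one only if its fitness is not worse.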

3.2. Dynamic Local Optimum Escape Strategy

The ultimate goal of this subsection is to address the concern that GWO is prone to falling into local minima. Inspired by the random walk strategy in the improved symbiotic organisms search algorithm (ISOS) [49], we design a dynamic local optimum escape strategy. The search process of ISOS can be classified into three fundamental phases: the mutualism phase, the commensalism phase and the parasitism phase. Previous research points out that, in the commensalism phase in nature, it is very likely that there are no suitable individuals nearby with which to establish a commensalism relationship, so it is crucial to explore new individuals and build commensalism. Miao [49] implemented a random walk strategy for individual i to achieve a disturbance and jump out of the current solution, after which the individual can build a commensalism relationship with the new result. More precisely, it is the second component of Equation (17) that serves as the disturbance term to help the algorithm generate the new solution. The specific model of the random walk can be defined as follows:
$X_{new} = X + rand(1, D) \cdot (X_m^t - X_n^t) \cdot R_i$ (17)
$R_i = \begin{cases} 1, & r > K \\ 0, & \text{otherwise} \end{cases}$ (18)
where rand(1, D) is a 1 × D random vector with elements in the range [0, 1]. (X_m^t − X_n^t) represents the distance of the random walk. X_m^t and X_n^t are random candidates in the search space, satisfying m ≠ n ≠ i. R_i is a logical value that determines whether to execute the random walk strategy. K is a fixed threshold and r is a random number. When r is greater than K, the random walk strategy is executed; otherwise, the original solution is maintained.
In view of the large variation step of the Cauchy mutation, we propose a dynamic local optimum escape strategy by importing the concept of Cauchy mutation. Its mathematical model can be described as follows:
$X^{t+1} = Cauchy(X^t, \sigma) + rand_3 \cdot X^t - rand_4 \cdot X_{rand}^t$ (19)
$\sigma = \dfrac{a}{2} \cdot \left| X_\alpha^t - X_i^t \right|$ (20)
where rand_3 and rand_4 are random numbers in [0, 1]. σ is a tuning parameter standing for the extent of the mutation, which changes dynamically during the search process, and a changes according to Equation (5). The term rand_3 · X^t − rand_4 · X_{rand}^t is a small random disturbance that further increases the ability to flee local optima.
In addition, we select the better half of the agents as the subgroup GS. The average fitness value of GS serves as the strategy's enabling switch and can be calculated by Equation (21). The search is considered to have fallen into a stagnation area, and the dynamic local optimum escape strategy is activated immediately, if Equation (22) is satisfied.
$mean\_GSvalue^t = \dfrac{2}{n} \sum_{i=1}^{n/2} f(X_i^t)$ (21)
$mean\_GSvalue^{t+1} = mean\_GSvalue^t$ (22)
In terms of the convergence of the algorithm, since the tuning parameter σ is regulated according to the number of iterations and the distance from the predicted prey, the proposed strategy does not affect the convergence of the algorithm. Assume that at the end of the evolutionary process the average fitness of GS remains unchanged and a is close to 0. The whole pack is located near the prey in this period, so |X_α^t − X_i^t| and rand_3 · X^t − rand_4 · X_{rand}^t are both almost equal to 0. It follows that the disturbance to the current position caused by the dynamic local optimum escape strategy is quite small, guaranteeing the convergence of the algorithm.
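A possible realization of the escape step is sketched below in Python. The stagnation test follows Equations (21) and (22) by comparing the better-half mean fitness between consecutive iterations; the a/2 scale used for Equation (20) and the helper names are assumptions made for this sketch.

```python
import numpy as np

def gs_mean(fitness):
    """Mean fitness of the better half of the pack, Equation (21)."""
    n = len(fitness)
    return np.sort(fitness)[: n // 2].mean()          # minimization: smaller is better

def escape_step(X_i, X_alpha, X_rand, a):
    """Dynamic local optimum escape, Equations (19)-(20)."""
    sigma = (a / 2.0) * np.abs(X_alpha - X_i)         # Equation (20), a/2 scale assumed
    cauchy = X_i + sigma * np.random.standard_cauchy(X_i.shape)
    rand3, rand4 = np.random.rand(), np.random.rand()
    return cauchy + rand3 * X_i - rand4 * X_rand      # Equation (19)

# Trigger (Equation (22)): activate escape_step when the better-half mean stalls,
# e.g. if np.isclose(gs_mean(fit_next), gs_mean(fit_current)).
```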

3.3. Individual Repositioning Method

Miao et al. [50] repositioned the three wolves with the worst quality by using α, β, δ and their original positions in each iteration to remove the poor candidates and prevent search bias. Inspired by this idea, we propose a new individual repositioning method, which ignores the original position information and uses only the positions of the head wolves, to speed up the convergence of the algorithm. The corrected position of an agent can be calculated by Equation (23), and the relocation process is shown in Figure 5.
$X_{new\_worst} = r_1 \cdot X_\alpha + r_2 \cdot X_\beta + r_3 \cdot X_\delta$ (23)
$r_2 = rand\!\left( \dfrac{1 - r_1}{2},\; 1 - r_1 \right)$ (24)
$r_3 = 1 - r_1 - r_2$ (25)
where Xnew_worst is the wolf’s position after repositioning, and r1 is a random number within [0.5, 1]. The sum of r1, r2 and r3 is set to 1 to ensure the convergence of the solutions.
The individuals that need to be relocated are those with low fitness. The purpose of the individual repositioning method is to make the search converge rapidly to the optimal value by artificially correcting poorly behaved agents. As the three leader wolves possess the best experience in the whole pack, it is hoped that these individuals can be repositioned to the leaders' neighborhood, which is considered more likely to be close to the prey. This concept is embodied in Equation (23), where the positions of α, β and δ are used to generate a new position. Furthermore, as mentioned in Section 3.1, the authority ranking of the leaders is α > β > δ. To carry the concept of the wolf hierarchy through, the weights of the leaders are set such that r_1 > r_2 > r_3 in Equation (23).
It can be observed in Figure 5 that, since the proposed individual repositioning method forces some individuals to relocate near the leaders, it partly reduces the population diversity. However, the early phase of the search requires a high population diversity to find more potentially optimal solutions. In other words, the individual repositioning method proposed in this subsection should remain dormant in the early stage of the search. Therefore, the position revising method is not activated until the later stage, to maintain population diversity.
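The repositioning of Equations (23)–(25) amounts to drawing a random convex combination of the three leaders, as in the short Python sketch below; which agents count as the worst, and when the late stage begins, are left to the caller and are not specified here beyond what the text states.

```python
import numpy as np

def reposition_worst(X_alpha, X_beta, X_delta):
    """Relocate a poorly performing wolf near the leaders, Equations (23)-(25)."""
    r1 = np.random.uniform(0.5, 1.0)                  # r1 drawn from [0.5, 1]
    r2 = np.random.uniform((1 - r1) / 2.0, 1 - r1)    # Equation (24)
    r3 = 1.0 - r1 - r2                                # Equation (25), so r1 + r2 + r3 = 1
    return r1 * X_alpha + r2 * X_beta + r3 * X_delta  # Equation (23)
```

Because r_1 lies in [0.5, 1] and r_2 is drawn from [(1 − r_1)/2, 1 − r_1], the weights satisfy r_1 > r_2 > r_3 and sum to 1, matching the hierarchy described in the text.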

4. Numerical Optimization Experiments

In this section, the proposed algorithm is examined on 20 benchmark functions. These functions are all minimization problems, including 6 unimodal functions, 4 multimodal functions, 3 shifted and rotated multimodal functions, 4 fixed-dimension functions and 3 composite functions. The parameter settings of the functions are presented in Table 1 and Table 2, where Dim and Range represent the dimension and the boundary of the solution domain respectively, and F* symbolizes the optimal value for each benchmark function. To verify the superiority of IGWO, we conduct two suites of comparison experiments. In the first suite, IGWO is compared with five well-established GWO variants namely MixedGWO [51], GWOCS [34], LearnGWO [33], mGWO [52], and RW_GWO [53]. In the second suite, some mature meta-heuristic algorithms are chosen for the comparison of IGWO. These algorithms include GWO [16], particle swarm optimization (PSO) [12], artificial bee colony (ABC) [54], sine cosine algorithm (SCA) [55], whale optimization algorithm (WOA) [56], multi-verse optimizer (MVO) [57] and tunicate swarm algorithm (TSA) [58]. Additionally, to ensure the fairness and objectivity of the comparison experiments, all experiments are run on CPU Core I5-9400F and 8 GB RAM, and programmed on MATLAB R2018b.

4.1. Comparison of IGWO with Different GWO Variants

In this subsection, IGWO is compared with five state-of-the-art GWO variants. The number of iterations is set to 1000 and the population size is fixed to 30 for all algorithms. All algorithms run independently thirty times on each benchmark function to reduce the randomness of the experimental results. Moreover, the Wilcoxon rank-sum test at the 5% significance level is adopted. "+", "−" and "≈" respectively indicate that IGWO is superior to, inferior to, and similar to the corresponding contrast method. The average value (Mean) and the standard deviation (Std) of the numerical results of the 30 independent experiments are listed in Table 3 and Table 4, where the best Mean and Std are highlighted in bold. Meanwhile, the Wilcoxon rank-sum test results (T) and the algorithm ranking (R) are also recorded in Table 3 and Table 4. Finally, the overall Wilcoxon rank-sum test results and the mean ranking of the algorithms are provided in Table 5.
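For reference, the per-function comparison described here (Mean, Std and the 5% rank-sum test) could be computed as in the short Python sketch below, assuming res_igwo and res_variant hold the 30 final fitness values of IGWO and one competitor on a single benchmark; the random placeholder data are illustrative only, not from the paper.

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder result vectors: 30 independent final fitness values per algorithm
res_igwo = np.random.rand(30) * 1e-8
res_variant = np.random.rand(30) * 1e-6

mean, std = res_igwo.mean(), res_igwo.std()
stat, p = ranksums(res_igwo, res_variant)      # Wilcoxon rank-sum test
if p < 0.05:                                   # significant at the 5% level
    symbol = "+" if mean < res_variant.mean() else "-"
else:
    symbol = "≈"                               # no significant difference
print(f"Mean = {mean:.3e}, Std = {std:.3e}, T = {symbol}")
```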

4.1.1. Analysis of Numerical Results

Referring to the data presented in Table 3 and Table 4, IGWO hits the best results on fourteen of the twenty benchmark functions (14/20), RW_GWO on three (3/20), and MixedGWO and LearnGWO on two each (2/20). Finally, GWOCS and mGWO obtain zero out of twenty (0/20).
According to Table 3, with regard to the unimodal functions (F1–F6), IGWO has a higher chance of hitting the optimal value on most unimodal functions. It can be observed that IGWO ranks first on four functions (F1–F4) and obtains the theoretical optimal solutions of these benchmarks. Although IGWO misses the best result on F5 and F6, it comes in third place on F5 and second place on F6, with results very close to the best. The improved updating formula brings about an excellent tradeoff between exploitation and exploration, increasing the chance of locating the optimum, while the greedy strategy ensures the preservation of worthy candidates in each generation, which further improves solution accuracy. For the multimodal functions (F7–F10), IGWO hits the global optimal values on F7 and F9 and obtains the best result among the six variants on F8, signifying the better search performance of the proposed IGWO. As the dynamic local optimum escape strategy conducts more random operations, the algorithm can explore the search space more fully and avoid falling into the multiple local optimal peaks. For the shifted and rotated multimodal functions (F11–F13), IGWO gains the best results on F12 and F13. Additionally, the results gained by RW_GWO are more accurate than those of the remaining methods.
Table 4 indicates that, with respect to the multimodal fixed-dimension functions (F14–F17), IGWO yields the best results on F14 and F15, and locates the theoretical optimal values on F15 and F16. Meanwhile, MixedGWO converges to the theoretical optimal values on F16 and F17 and shows slightly more stable search performance. For the composite functions (F18–F20), IGWO is superior to the other approaches. Although LearnGWO gets the same mean value as IGWO on F19 and F20, its standard deviation is larger than that of IGWO, which means that IGWO is more stable and reliable than LearnGWO. The improvement in performance should be attributed to the modified strategies presented in this paper, which have diverse impacts in dealing with the different characteristics of the complex functions.
Furthermore, Table 5 summarizes the results above, showing the overall ranking of the six approaches is: IGWO > GWOCS > RW_GWO > LearnGWO > mGWO > MixedGWO.

4.1.2. Analysis of Convergence Curves

Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 exhibit the convergence curves of the six GWO variants on eight test functions (F1, F3, F8, F9, F12, F14, F18, and F20). The x-axis and the y-axis represent the iterations and the corresponding best average fitness values achieved by thirty independent experiments, respectively.
Figure 6 and Figure 7 indicate that the curve of IGWO converges to the global optimum very rapidly, confirming that IGWO has better search accuracy and speed on F1 and F3. Furthermore, LearnGWO performs relatively well on F1 and F3 in comparison with MixedGWO, GWOCS, RW_GWO and mGWO, with accuracy up to 1E−120. Figure 8 illustrates that after about forty iterations, IGWO has already reached the final result with higher accuracy. From Figure 9, it can be observed that both IGWO and mGWO converge rapidly to the optimal value on F9, but IGWO can avoid the local optimum more efficiently. Figure 10 shows that RW_GWO and IGWO achieve higher convergence accuracy on F12, and IGWO ranks first by virtue of slightly higher accuracy. For function F14, Figure 11 exhibits that the convergence speed of the six algorithms is similar, but the convergence accuracy of IGWO is slightly better. As shown in Figure 12 and Figure 13, compared with the other improved versions, IGWO has noticeable advantages in both convergence speed and accuracy. In addition, Figure 13 demonstrates that the accuracy gap among the six algorithms is small, yet IGWO possesses an obvious advantage in search speed. In summary, as presented in Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, the proposed strategies can visibly improve the search accuracy and convergence speed on most functions.
Therefore, from the above discussions of Table 3, Table 4 and Table 5 and Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, it can be concluded that the proposed IGWO has better optimization precision and convergence speed for the majority of the twenty benchmarks, which indicates that the proposed strategies can effectively improve the performance of the algorithm.

4.2. Comparison of IGWO with Other Meta-Heuristic Algorithms

In this subsection, IGWO is compared with the original GWO and six other well-known meta-heuristic algorithms, including PSO, ABC, SCA, WOA, MVO and TSA. The benchmark functions used in this suite are enumerated in Table 1 and Table 2. Each algorithm runs independently 30 times, and the average value (Mean) and the standard deviation (Std) of the results are recorded in Table 6 and Table 7. The optimal Mean and Std obtained by the eight algorithms are marked in bold. In addition, the Wilcoxon rank-sum test is also conducted at the 5% significance level to test whether there are significant differences between the proposed variant and the other competitors, and the algorithms are ranked. The rank-sum test results (T) and the final ranking results (R) are shown in Table 6 and Table 7. Table 8 summarizes the data above.

4.2.1. Analysis of Numerical Results

Referring to the results presented in Table 6 and Table 7, IGWO hits the best results on thirteen of the twenty benchmark functions (13/20), TSA on three (3/20), PSO and ABC on two each (2/20), and GWO and WOA on one each (1/20). Finally, SCA and MVO obtain zero out of twenty (0/20). As shown in Table 6, IGWO overtakes all the comparison algorithms on the unimodal functions except F5. In particular, IGWO reaches the theoretical optimal values on F1–F4. For the multimodal functions (F7–F10), IGWO beats the other contrast meta-heuristic algorithms on F7–F9, and converges to the theoretical optimal values on F7 and F9. On F8, compared with GWO, the search accuracy of IGWO is improved by two orders of magnitude. With respect to the shifted and rotated multimodal functions (F11–F13), IGWO obtains the best result among the meta-heuristic algorithms on F12. PSO and TSA do best on F11 and F13, respectively, whereas IGWO ranks third on F11 and second on F13, with results close to the best.
For the fixed-dimension functions (F14–F17), IGWO obtains the best result on F14 and reaches the theoretical optimal values on F15 and F16. TSA ranks first on F17, but the result gained by IGWO is also of good quality. The results acquired by IGWO are closer to the optimal solution on the composite functions (F18–F20), and the search performance is more stable, as can be seen from the corresponding Std. Besides, compared with the original GWO, IGWO shows competitive performance except on F5 and F10, which is inseparable from the strategies proposed in this paper.
Table 8 exhibits the overall Wilcoxon rank-sum test results and average ranking of the algorithms according to Table 6 and Table 7. The average ranking of IGWO is 1.7, manifesting that IGWO has outstanding search ability in function optimization problems compared with the other seven algorithms. Furthermore, we can reach the conclusion that the performance ranking of the algorithms in this part is IGWO > GWO > WOA > PSO > TSA > MVO > ABC > SCA.

4.2.2. Analysis of Convergence Curve

Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21 illustrate the convergence curves of IGWO and seven contrast algorithms on partial functions (F1, F2, F7, F9, F12, F14, F18 and F20). The number of iterations of each algorithm is set to 1000, and the value corresponding to each iteration in the curve is the mean value of thirty independent experimental results.
Figure 14 and Figure 15 indicate that IGWO outperforms all other methods on F1 and F2, where IGWO converges to the global optimal value at the fastest speed. Meanwhile, WOA reaches second place by means of faster convergence speed and higher accuracy on F1 and F2. Figure 16 reveals that the performance of IGWO is significantly better than the others on F7, and WOA achieves similar search accuracy to IGWO. It is worth mentioning that WOA is trapped in local minima from about the 200th iteration to the 500th, while IGWO converges rapidly; therefore, IGWO has better local optimum avoidance ability. The preeminence of IGWO in terms of search accuracy and speed on the multimodal function F9 can be noted in Figure 17. Figure 18 shows that all algorithms obtain similar results, with IGWO obtaining a slightly better result on F12. As shown in Figure 19, IGWO and TSA both achieve better results on F14, while the result obtained by IGWO has slightly higher accuracy. Figure 20 and Figure 21 indicate that the solution accuracy gained by IGWO ranks first on the composite functions F18 and F20, with an obvious speed advantage at the same time. Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21 vividly show the search performance of the eight methods, certifying that the proposed IGWO has excellent performance on most functions.

5. Application of IGWO in Robot Path Planning

To verify the feasibility of IGWO for solving the robot path planning problem, this section designs two terrains to simulate the environments in which a robot works: one simple and one complex. The proposed IGWO is compared with MEGWO [59] and RMPSO [60] in the experiments. In addition, we introduce cubic spline interpolation to smooth the planned track, making it more consistent with the dynamic characteristics of robots and increasing its application value.

5.1. Environment Models

To simplify the problem, we make two assumptions: (a) since the area of a circle is larger than that of a square with equal perimeter, each obstacle is set as a circle; (b) the size of the robot is added to the radius of the obstacle, so that the robot can be regarded as a mass point. Based on the above assumptions, the two kinds of environment models are designed as shown in Figure 22. In the environment models, the starting points, ending points and obstacles are represented by yellow squares, green pentagrams and black circles, respectively. The simple obstacle environment model consists of three obstacles, with the starting point set at (0, 0) and the end point at (4, 6), while the complex environment model contains nine obstacles, with the starting point at (0, 0) and the end point at (10, 10). Additionally, the Euclidean distance between the beginning and the end is considered to be the shortest possible path length. The information of the two terrains is listed in Table 9. The mathematical expression of an obstacle in the coordinate system is defined as follows:
$r^2 = (x - x_{obs})^2 + (y - y_{obs})^2$ (26)
where r is the radius of the obstacle, and (xobs, yobs) represents the coordinates of the center of the obstacle.

5.2. Path Smoothing

Wang et al. [61] smoothed the planned path by utilizing inner arcs to ensure the continuity of robot motion. Liu et al. [62] pointed out that using cubic spline interpolation to smooth a path has significant advantages over circular arcs or straight lines and fewer constraints than fillet fitting. Therefore, we adopt cubic spline interpolation to smooth the trajectory. Cubic spline interpolation is a piecewise interpolation method which obtains a smooth curve through a series of interpolation points based on cubic polynomials. The interpolation process can be described as follows:
Take n + 1 nodes on a given interval [a, b]; the interval can then be divided into n subintervals $[(x_0, x_1), (x_1, x_2), \ldots, (x_{n-1}, x_n)]$, namely the segmentation process. The value at each node is given in advance as follows:
$f(x_i) = f_i,\quad i = 0, 1, 2, \ldots, n$ (27)
Suppose there is a function S(x). If S(x) satisfies the following conditions:
1. $S(x)$, $S'(x)$ and $S''(x)$ are continuous.
2. $S(x_i) = f_i,\; i = 0, 1, 2, \ldots, n$.
3. $S(x) = S_i(x)$ for $x \in [x_i, x_{i+1}]$ is a cubic polynomial.
Then S(x) is called a cubic spline interpolation function, whose curve can be utilized to generate the required curve. Furthermore, on each subinterval [xi, xi+1], S(x) is defined as follows:
$S(x) = a_i x^3 + b_i x^2 + c_i x + d_i,\quad i = 0, 1, 2, \ldots, n-1$ (28)
where a_i, b_i, c_i and d_i are undetermined coefficients. As S(x) has 4n undetermined coefficients, at least 4n known conditions are needed to solve for them; the specific conditions are given as follows:
n + 1 conditions: $S(x_i) = f_i,\; i = 0, 1, 2, \ldots, n$
n − 1 conditions: $S'_-(x_i) = S'_+(x_i),\; i = 1, 2, \ldots, n-1$
n − 1 conditions: $S''_-(x_i) = S''_+(x_i),\; i = 1, 2, \ldots, n-1$
The other two conditions can be obtained from boundary conditions. The three commonly used boundary conditions are described as follows:
1. Clamped spline: $S'(x_0) = A,\; S'(x_n) = B$, where A and B are specified.
2. Natural spline: $S''(x_0) = S''(x_n) = 0$.
3. Not-a-knot spline: $S'''(x_0) = S'''(x_1),\; S'''(x_{n-1}) = S'''(x_n)$.
The proposed IGWO is applied to robot path planning problems. The common points between adjacent subintervals are considered as path nodes, and each node represents a turn along the planned path. Thus, we take the number of path nodes as the dimension of a grey wolf, that is, an individual of the wolf pack represents a candidate path.
Suppose D path nodes are given, with coordinates $(x_{d1}, y_{d1}), (x_{d2}, y_{d2}), \ldots, (x_{dD}, y_{dD})$, and let the starting and ending coordinates be $(x_s, y_s)$ and $(x_t, y_t)$. Firstly, split the abscissas and ordinates of the above D + 2 points into the sets $\{wx\} = \{x_s, x_{d1}, x_{d2}, \ldots, x_{dD}, x_t\}$ and $\{wy\} = \{y_s, y_{d1}, y_{d2}, \ldots, y_{dD}, y_t\}$. Then, apply cubic spline interpolation to $\{wx\}$ and $\{wy\}$ separately to obtain the abscissas and ordinates of n interpolation points, namely $\{x_1, x_2, \ldots, x_n\}$ and $\{y_1, y_2, \ldots, y_n\}$. Finally, $\{(x_s, y_s), (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n), (x_t, y_t)\}$ is the smoothed path of the robot.
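The decoding and smoothing described above could be realized with an off-the-shelf cubic spline, as in the Python sketch below (using SciPy's CubicSpline, whose default not-a-knot boundary condition corresponds to the third boundary condition listed above). Parameterizing the waypoints by their index and the helper name smooth_path are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(nodes, start, goal, n_interp=100):
    """Smooth a candidate path with cubic spline interpolation.

    nodes: (D, 2) array of path nodes decoded from one wolf;
    start, goal: (x, y) tuples; returns an (n_interp, 2) smoothed path.
    """
    pts = np.vstack([start, nodes, goal])        # the D + 2 waypoints
    u = np.arange(len(pts))                      # parameterize by waypoint index (assumption)
    spl_x = CubicSpline(u, pts[:, 0])            # interpolate the abscissa set {wx}
    spl_y = CubicSpline(u, pts[:, 1])            # interpolate the ordinate set {wy}
    uu = np.linspace(0, len(pts) - 1, n_interp)
    return np.column_stack([spl_x(uu), spl_y(uu)])
```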

5.3. Construction of Fitness Function

The purpose of robot path planning is to plan the shortest path from the starting point $(x_s, y_s)$ to the end point $(x_t, y_t)$ without collision with obstacles [63]. Therefore, a fitness function is constructed in this subsection to measure the obstacle avoidance performance and path length of a robot. The mathematical model is defined as follows:
$f = f_l \cdot (1 + \lambda \cdot f_{obs})$ (29)
where λ is a penalty factor employed to exclude candidates passing through the obstacle area. f_l is the sum of the Euclidean distances between the interpolation points, calculated as shown in Equation (30). f_obs is a marker variable that evaluates the obstacle avoidance level of the path, whose initial value is 0; f_obs can be calculated using Equations (31)–(33).
$f_l = \sum_{i=1}^{n} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}$ (30)
$d_{ki} = \sqrt{(x_i - x\_obs_k)^2 + (y_i - y\_obs_k)^2},\quad i = 1, 2, \ldots, n,\; k = 1, 2, \ldots, n_{obs}$ (31)
$\varepsilon_k = \dfrac{1}{n} \sum_{i=1}^{n} \mathrm{MAX}\!\left( 1 - d_{ki}/r_k,\; 0 \right),\quad k = 1, 2, \ldots, n_{obs}$ (32)
$f_{obs} = \sum_{k=1}^{n_{obs}} \varepsilon_k$ (33)
where x_i and y_i respectively represent the x-coordinate and y-coordinate of the i-th interpolation point on a path. (x_obs_k, y_obs_k) represents the center coordinates of the k-th obstacle. n is the number of interpolation points of the path, n_obs is the total number of obstacles, and r_k is the radius of the corresponding obstacle. Equation (31) computes the Euclidean distance between an interpolation point on the candidate path and the center of the k-th obstacle. Equation (32) determines whether the path intersects with the k-th obstacle: if a path point enters the k-th obstacle, then ε_k > 0; otherwise, ε_k = 0. Moreover, if a planned path avoids all the obstacles successfully, f_obs = 0.
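Equations (29)–(33) translate into a few array operations, as in the following Python sketch, which continues the conventions of the previous snippet; the concrete value of the penalty factor λ is an assumption, since the text only requires it to be large enough to exclude colliding candidates.

```python
import numpy as np

def path_fitness(path, obstacles, radii, lam=100.0):
    """Fitness of a smoothed path, Equations (29)-(33).

    path: (n, 2) interpolated points; obstacles: (n_obs, 2) centers;
    radii: (n_obs,) obstacle radii; lam: penalty factor (assumed value).
    """
    seg = np.diff(path, axis=0)
    f_l = np.sum(np.hypot(seg[:, 0], seg[:, 1]))             # Equation (30): path length
    # Equation (31): distances from every path point to every obstacle center
    d = np.linalg.norm(path[:, None, :] - obstacles[None, :, :], axis=2)
    # Equation (32): per-obstacle penetration level, averaged over the path points
    eps = np.maximum(1.0 - d / radii[None, :], 0.0).mean(axis=0)
    f_obs = eps.sum()                                         # Equation (33)
    return f_l * (1.0 + lam * f_obs)                          # Equation (29)
```

A candidate wolf can then be evaluated by decoding its path nodes, smoothing them with the smooth_path sketch above, and passing the result to path_fitness.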

5.4. Experimental Environment and Parameter Setting

To pursue objective and fair experimental results, RMPSO, MEGWO and IGWO all use the same software and hardware platform. The population size, the number of path nodes (i.e., the individual dimension), the number of interpolation points, and the maximum number of iterations of the three algorithms remain consistent, as shown in Table 10.

5.5. Analysis of Path Planning Results

5.5.1. Single Contrast Experiment

In this subsection, IGWO is applied to robot path planning and compared with MEGWO and RMPSO to verify the superiority of IGWO in solving robot path planning problems. The simulation results of the two cases are shown in Figure 23, Figure 24, Figure 25 and Figure 26. The schematic diagrams of the routes planned by the three algorithms are exhibited in Figure 23 and Figure 24. Figure 25 and Figure 26 show the trend of the path length with the number of iterations.
Figure 23 and Figure 24 indicate that, even as the environment becomes more complex, IGWO generates the path closest to the optimal in both cases. Additionally, Figure 25 demonstrates that IGWO can generate a shorter path than RMPSO and MEGWO in case 1. It can be seen from Figure 26 that when the number and density of environmental obstacles are increased, IGWO keeps its advantage in path length. Therefore, the improved algorithm can plan shorter paths in both simple and complex environments and has better path optimization performance.

5.5.2. Thirty Independent Contrast Experiments

Since the results of a single experiment may be accidental, in this subsection we use the three algorithms to conduct thirty independent experiments in the two obstacle environments, which makes up for the deficiency of the single experiment. The simulation results of path planning in the two terrains are shown in Figure 27 and Figure 28. The statistical results are listed in Table 11 and Table 12, where Mean, Best and Worst represent the average, the shortest and the longest length of the thirty planned paths, respectively. Unsafe path represents the number of paths that touch obstacles in the thirty independent tests. Success rate signifies the percentage of secure paths obtained in the thirty independent experiments. To visualize the differences between the three algorithms, we convert the experimental results into Figure 29 and Figure 30, where the shortest path length listed in Table 9 is subtracted from the three indexes (Mean, Best, and Worst) so that the gaps to the shortest path can be observed more conveniently. Moreover, Figure 31 and Figure 32 exhibit the path lengths obtained by the three methods in the thirty experiments, which makes it easier to compare the performance of these methods in each experiment.
Regarding the performance of the algorithms in case 1, Figure 27 shows that the trajectories planned by RMPSO and IGWO are relatively stable, while the thirty trajectories of MEGWO fluctuate slightly. Table 11 shows that the average, shortest, and longest lengths of the paths planned by IGWO are all the smallest among the three algorithms. Furthermore, RMPSO and MEGWO generate three failed paths in the thirty independent experiments, whereas IGWO successfully plans thirty safe paths, indicating that IGWO possesses better stability. Figure 29 turns the statistical results into a bar graph, showing the superiority of IGWO in case 1 more intuitively.
With respect to case 2, Figure 28 shows that with the increase in obstacles, the gaps between the three algorithms become prominent. The behaviors of RMPSO and MEGWO exhibit obvious jitter, while IGWO plans the path in a more efficient and safer manner. Table 12 illustrates that although MEGWO plans the shortest single path, IGWO has a better performance on the remaining indicators. It is worth mentioning that the success rate of IGWO remains at 100% despite the increased complexity of the environment, while the success rates of RMPSO and MEGWO are 86.67% and 90%, respectively. In addition, the statistical data presented in Figure 30 vividly signify the better search stability and accuracy of IGWO.
Figure 31 and Figure 32 exhibit the lengths of the thirty paths obtained by each method. It can be observed that there are obvious differences in the performance of the three methods. In case 1, IGWO obtains the optimal path in every attempt, and its path length is the most stable, while MEGWO shows apparent oscillation. For case 2, IGWO obtains the maximum number of optimal paths compared with RMPSO and MEGWO. Even with the increased environment complexity, the lengths of the paths generated by IGWO stay stably between 14.5 and 15. In contrast, the path lengths obtained by RMPSO and MEGWO are sometimes less than 15 and sometimes more than 17. Therefore, it can be concluded that the trajectories generated by IGWO are both the shortest and the safest.

5.6. Contrast Experiment in Complex Environment with Irregular Obstacles

To further test the capability of the proposed IGWO to solve the path planning problem, a 50 × 50 complex environment model with more irregular obstacles is designed in this section, as shown in Figure 33. Inspired by the polygonal obstacles designed by Dai et al. [64], 18 irregular-shaped obstacles are used in the new environment model. The path starts at (10, 30) and ends at (25, 12).
Beiravand et al. [65] provide standards and guidelines for researchers to test the performance of optimization algorithms, and the comparison experiments in this section are designed with reference to [65]. To examine the performance of the proposed IGWO, RMPSO, MEGWO and mGWO are selected as the comparative algorithms. To ensure the fairness and objectivity of the comparison experiments, all experiments are run on an Intel Core i5-9400F CPU with 8 GB RAM and programmed in MATLAB R2018b. To avoid randomness of the results, all algorithms are run independently 30 times. Beiravand et al. indicated that algorithm performance metrics can be classified into three categories: efficiency, solution quality and reliability [65]. Therefore, Iteration, Path length and Success Rate are used as performance metrics in this section. The final results are shown in Table 13, with the optimal values among the four algorithms marked in bold. Iteration is the number of iterations at which the algorithm obtains its final result, and Path length represents the length of the final path obtained by the algorithm. The values of Iteration and Path length are averaged over the results of the 30 independent experiments. Success Rate is equal to the ratio of the number of safe paths planned by the algorithm to the total number of independent experiments.
Figure 33 shows the trajectories planned by the four algorithms, visually demonstrating that the proposed IGWO is capable of planning a shorter safe path. Table 13 shows that IGWO achieves the best results on all three metrics compared with the comparative algorithms. In terms of the Iteration metric, the average number of iterations taken by IGWO is 72, which is 24, 21 and 29 generations earlier than RMPSO, MEGWO and mGWO, respectively, indicating that IGWO can search for safe paths more efficiently. In terms of the Path length metric, IGWO plans the shortest paths among the four algorithms. Since a shorter path length means lower energy consumption, the paths obtained by IGWO are of higher quality. In addition, the planning success rate of IGWO is 90%, ahead of the other algorithms, implying that IGWO has higher safety and stability in the complex environment model.

6. Conclusions and Future Work

In this paper, an improved grey wolf optimizer, namely IGWO, is proposed and applied to robot path planning. The proposed IGWO uses three well-designed enhancement strategies to balance the global and local search abilities and to mitigate, to some extent, the defects of prematurity and slow convergence. More exactly, a modified position update mechanism with an improved updating formula is first proposed to strengthen the leadership hierarchy of the pack and, at the same time, coordinate the global and local searches more reasonably. Simultaneously, population opposition-based learning and Cauchy mutation are introduced to ensure the diversity of the population. Then, inspired by the random walk strategy, a dynamic local optimum escape strategy is designed to help the algorithm jump out of local stagnation. Finally, the repositioning method for bad individuals is proposed to effectively accelerate the convergence.
IGWO is tested on twenty benchmark functions and compared first with five state-of-the-art GWO variants and then with seven other well-known meta-heuristic algorithms. Furthermore, the Wilcoxon rank-sum test is utilized to judge the significance of the differences between IGWO and each comparison algorithm. Empirical results show that IGWO outperforms the competitors on most functions. Although IGWO misses the optimal solutions on some functions, according to the No Free Lunch (NFL) theorem, no algorithm can always obtain optimal results in all fields. Afterwards, IGWO is used to deal with robot path planning problems, and cubic spline interpolation is introduced to smooth the trajectory of the robot. Experimental results reveal that IGWO is a reliable method for solving robot path planning problems, whether in a simple environment or a complex one.
For future work, we will conduct further research on the following issues. First, IGWO will be applied to more challenging terrains, taking dynamic obstacles into consideration. Second, it is worth carrying out practical experiments to verify the application value and practical significance of the algorithm. Finally, GWO could be combined with other excellent meta-heuristic algorithms, and the resulting hybrid algorithm may deliver more promising performance.

Author Contributions

Conceptualization, L.D.; formal analysis, X.Y. (Xianfeng Yuan) and B.Y.; investigation, L.D. and X.Y. (Xianfeng Yuan); methodology, L.D. and X.Y. (Xianfeng Yuan); resources, X.Y. (Xianfeng Yuan), Y.S. and Q.X.; software, X.Y. (Xianfeng Yuan), B.Y., Q.X. and X.Y. (Xiongyan Yang); supervision, Y.S.; validation, L.D.; visualization, L.D.; writing—original draft, L.D.; writing—review & editing, X.Y. (Xianfeng Yuan). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The National Natural Science Foundation of China (Grant No. 61803227, 61773242, 61973184), Independent Innovation Foundation of Shandong University (Grant No. 2018ZQXM005), Young Scholars Program of Shandong University, Weihai (Grant No. 20820211010).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Figure 1. Leadership hierarchy.
Figure 2. Update of position in GWO.
Figure 3. Framework of IGWO.
Figure 4. Adaptive weight coefficients of the leaders. (a) 2-D version of Sphere function; (b) Values of wα, wβ, and wδ.
Figure 5. Reposition the worst wolves.
Figure 6. Comparison results of GWO variants on F1.
Figure 7. Comparison results of GWO variants on F3.
Figure 8. Comparison results of GWO variants on F8.
Figure 9. Comparison results of GWO variants on F9.
Figure 10. Comparison results of GWO variants on F12.
Figure 11. Comparison results of GWO variants on F14.
Figure 12. Comparison results of GWO variants on F18.
Figure 13. Comparison results of GWO variants on F20.
Figure 14. Comparison results on F1.
Figure 15. Comparison results on F2.
Figure 16. Comparison results on F7.
Figure 17. Comparison results on F9.
Figure 18. Comparison results on F12.
Figure 19. Comparison results on F14.
Figure 20. Comparison results on F18.
Figure 21. Comparison results on F20.
Figure 22. Basic environment models. (a) Simple obstacle environment model (case 1); (b) Complex obstacle environment model (case 2).
Figure 23. Comparison of three algorithms in case 1, where the yellow square and green five-pointed star respectively represent the starting point and ending point.
Figure 24. Comparison of three algorithms in case 2, where the yellow square and green five-pointed star respectively represent the starting point and ending point.
Figure 25. Iteration curves in case 1.
Figure 26. Iteration curves in case 2.
Figure 27. Results of the three algorithms in case 1, where the yellow square, green five-pointed star and lines in different colors represent the starting point, ending point and paths, respectively. (a) Results of RMPSO; (b) Results of MEGWO; (c) Results of IGWO.
Figure 28. Results of the three algorithms in case 2, where the yellow square, green five-pointed star and lines in different colors represent the starting point, ending point and paths, respectively. (a) Results of RMPSO; (b) Results of MEGWO; (c) Results of IGWO.
Figure 29. The bar chart of the results in case 1.
Figure 30. The bar chart of the results in case 2.
Figure 31. The path length curves in case 1.
Figure 32. The path length curves in case 2.
Figure 33. Paths planned by four algorithms in complex obstacle environment. (a) Results of RMPSO; (b) Results of MEGWO; (c) Results of mGWO; (d) Results of IGWO.
Table 1. Unimodal, multimodal and shifted and rotated multimodal benchmark functions. F* symbolizes the optimal value for each benchmark function.

Function | Test Function | Dim | Range | F*
F1 | $f_1 = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100]^n | 0
F2 | $f_2 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10]^n | 0
F3 | $f_3 = \sum_{i=1}^{n} \big( \sum_{j=1}^{i} x_j \big)^2$ | 30 | [−100, 100]^n | 0
F4 | $f_4 = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 30 | [−100, 100]^n | 0
F5 | $f_5 = \sum_{i=1}^{n-1} \big[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \big]$ | 30 | [−30, 30]^n | 0
F6 | $f_6 = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28]^n | 0
F7 | $f_7 = \sum_{i=1}^{n} \big[ x_i^2 - 10 \cos(2\pi x_i) + 10 \big]$ | 30 | [−5.12, 5.12]^n | 0
F8 | $f_8 = -20 \exp\big( -0.2 \sqrt{\tfrac{1}{n} \sum_{j=1}^{n} x_j^2} \big) - \exp\big( \tfrac{1}{n} \sum_{j=1}^{n} \cos(2\pi x_j) \big) + 20 + e$ | 30 | [−32, 32]^n | 0
F9 | $f_9 = \tfrac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\big( \tfrac{x_i}{\sqrt{i}} \big) + 1$ | 30 | [−600, 600]^n | 0
F10 | $f_{10} = \tfrac{\pi}{n} \big\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \big[ 1 + 10 \sin^2(\pi y_{i+1}) \big] + (y_n - 1)^2 \big\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 30 | [−50, 50]^n | 0
F11 | Shifted and Rotated Katsuura Function | 30 | [−100, 100]^n | 1200
F12 | Shifted and Rotated HappyCat Function | 30 | [−100, 100]^n | 1300
F13 | Shifted and Rotated HGBat Function | 30 | [−100, 100]^n | 1400
Table 2. Fixed-dimension multimodal and composition benchmark functions. F* symbolizes the optimal value for each benchmark function.

Function | Test Function | Dim | Range | F*
F14 | $f_{14} = \sum_{i=1}^{11} \big[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \big]^2$ | 4 | [−5, 5]^n | 0.00030
F15 | $f_{15} = \big( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \big)^2 + 10 \big( 1 - \tfrac{1}{8\pi} \big) \cos(x_1) + 10$ | 2 | [−5, 5]^n | 0.398
F16 | $f_{16} = \big[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \big] \times \big[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \big]$ | 2 | [−2, 2]^n | 3
F17 | $f_{17} = -\sum_{i=1}^{5} \big[ (X - a_i)(X - a_i)^T + c_i \big]^{-1}$ | 4 | [0, 10]^n | −10.1532
F18 | Composition Function 1 (N = 5) | 30 | [−100, 100]^n | 2300
F19 | Composition Function 2 (N = 3) | 30 | [−100, 100]^n | 2400
F20 | Composition Function 3 (N = 3) | 30 | [−100, 100]^n | 2500
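For reference, the following is a minimal sketch (an assumed implementation, not code from the paper) of three of the classical benchmarks in Table 1 (F1, F7 and F8), written directly from the formulas above.

```python
# Sketch implementations of Sphere (F1), Rastrigin (F7) and Ackley (F8).
import numpy as np

def sphere(x):                       # F1, optimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def rastrigin(x):                    # F7, optimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def ackley(x):                       # F8, optimum 0 at x = 0
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

# All three should evaluate to (approximately) zero at the origin in 30 dimensions.
print(sphere(np.zeros(30)), rastrigin(np.zeros(30)), ackley(np.zeros(30)))
```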
Table 3. Results of GWO variants on unimodal, multimodal and shifted and rotated multimodal benchmark functions. The best values are highlighted in bold.

Function | Metric | MixedGWO | GWOCS | LearnGWO | mGWO | RW_GWO | IGWO
F1 | Mean | 3.83 × 10−30 | 1.60 × 10−59 | 1.3498 × 10−123 | 7.04 × 10−60 | 1.46 × 10−22 | 0
F1 | Std | 3.94 × 10−29 | 2.27 × 10−58 | 2.8395 × 10−122 | 1.47 × 10−58 | 9.50 × 10−22 | 0
F1 | R/T | 5/+ | 4/+ | 2/+ | 3/+ | 6/+ | 1
F2 | Mean | 1.09 × 10−18 | 4.00 × 10−35 | 6.09 × 10−69 | 3.02 × 10−38 | 1.04 × 10−11 | 0
F2 | Std | 1.02 × 10−17 | 2.50 × 10−34 | 4.57 × 10−68 | 7.56 × 10−37 | 1.96 × 10−11 | 0
F2 | R/T | 5/+ | 4/+ | 2/+ | 3/+ | 6/+ | 1
F3 | Mean | 8.28 × 10−5 | 3.57 × 10−15 | 2.83 × 10−89 | 3.35 × 10+3 | 1.02 × 10−9 | 0
F3 | Std | 0.0011 | 9.19 × 10−14 | 7.08 × 10−88 | 1.23 × 10+4 | 1.41 × 10−8 | 0
F3 | R/T | 5/+ | 3/+ | 2/+ | 6/+ | 4/+ | 1
F4 | Mean | 7.28 × 10−6 | 1.01 × 10−14 | 2.91 × 10−51 | 3.44 × 10−10 | 5.46 × 10−8 | 0
F4 | Std | 6.54 × 10−5 | 1.03 × 10−13 | 3.76 × 10−50 | 9.61 × 10−9 | 1.37 × 10−6 | 0
F4 | R/T | 6/+ | 3/+ | 2/+ | 4/+ | 5/+ | 1
F5 | Mean | 28.8373 | 26.9401 | 28.5497 | 28.6775 | 26.5379 | 27.5219
F5 | Std | 0.1036 | 3.7477 | 1.9255 | 0.7313 | 2.9594 | 3.7676
F5 | R/T | 6/+ | 2/− | 4/+ | 5/+ | 1/− | 3
F6 | Mean | 0.0025 | 8.88 × 10−4 | 8.10 × 10−5 | 4.16 × 10−4 | 9.20 × 10−4 | 8.95 × 10−5
F6 | Std | 0.0051 | 0.0018 | 3.21 × 10−4 | 0.0021 | 2.70 × 10−3 | 2.50 × 10−4
F6 | R/T | 6/+ | 4/+ | 1/− | 3/+ | 5/+ | 2
F7 | Mean | 19.3405 | 1.3586 | 0 | 7.0415 | 1.4371 | 0
F7 | Std | 50.5218 | 19.1616 | 0 | 144.5174 | 17.6327 | 0
F7 | R/T | 5/+ | 2/+ | 1/≈ | 4/+ | 3/+ | 1
F8 | Mean | 1.58 × 10−14 | 1.55 × 10−14 | 4.56 × 10−15 | 4.56 × 10−15 | 6.18 × 10−12 | 8.88 × 10−16
F8 | Std | 2.87 × 10−14 | 1.58 × 10−14 | 3.49 × 10−15 | 6.12 × 10−15 | 1.19 × 10−11 | 0
F8 | R/T | 5/+ | 4/+ | 2/+ | 3/+ | 6/+ | 1
F9 | Mean | 0.0021 | 5.75 × 10−4 | 1.11 × 10−4 | 0.0021 | 3.26 × 10−4 | 0
F9 | Std | 0.0409 | 0.0112 | 0.0033 | 0.0632 | 0.0096 | 0
F9 | R/T | 5/+ | 4/+ | 2/≈ | 6/≈ | 3/+ | 1
F10 | Mean | 0.5419 | 0.0396 | 0.6079 | 0.0619 | 0.0012 | 0.1822
F10 | Std | 0.8062 | 0.1137 | 0.8432 | 0.1819 | 0.009 | 0.5448
F10 | R/T | 5/+ | 2/− | 6/+ | 3/− | 1/− | 4
F11 | Mean | 1203.1 | 1202.5 | 1203.2 | 1202.0 | 1200.9 | 1201.5
F11 | Std | 1.4643 | 7.837 | 2.5871 | 1.8016 | 5.2961 | 2.1432
F11 | R/T | 4/+ | 6/+ | 5/+ | 3/+ | 1/− | 2
F12 | Mean | 1302.1 | 1300.7 | 1304.4 | 1304.0 | 1300.6 | 1300.5
F12 | Std | 6.7783 | 3.1098 | 2.5492 | 4.0168 | 0.5621 | 0.0695
F12 | R/T | 4/+ | 3/+ | 6/+ | 5/+ | 2/+ | 1
F13 | Mean | 1435.3 | 1410.8 | 1499.5 | 1472.9 | 1400.8 | 1400.4
F13 | Std | 107.6842 | 50.4025 | 124.8757 | 126.7876 | 1.7689 | 1.7390
F13 | R/T | 4/+ | 3/+ | 6/+ | 5/+ | 2/+ | 1
Table 4. Results of GWO variants on fixed-dimension multimodal and composition benchmarks. The best values are highlighted in bold.

Function | Metric | MixedGWO | GWOCS | LearnGWO | mGWO | RW_GWO | IGWO
F14 | Mean | 0.0099 | 3.78 × 10−4 | 5.70 × 10−3 | 8.73 × 10−4 | 0.0023 | 3.33 × 10−4
F14 | Std | 0.0073 | 0.0013 | 4.37 × 10−2 | 0.0047 | 0.0329 | 2.58 × 10−4
F14 | R/T | 6/+ | 2/+ | 4/+ | 3/+ | 5/+ | 1
F15 | Mean | 0.4576 | 0.398 | 0.4003 | 0.398 | 0.398 | 0.398
F15 | Std | 0 | 8.74 × 10−7 | 0.0224 | 3.97 × 10−8 | 2.14 × 10−6 | 1.03 × 10−8
F15 | R/T | 6/+ | 3/+ | 5/+ | 2/+ | 4/+ | 1
F16 | Mean | 3 | 3 | 3.0093 | 3 | 3 | 3
F16 | Std | 0 | 3.69 × 10−5 | 0.273 | 1.06 × 10−12 | 6.53 × 10−5 | 3.56 × 10−5
F16 | R/T | 1/− | 4/+ | 6/+ | 2/− | 5/+ | 3
F17 | Mean | −10.4028 | −9.873 | −4.8074 | −9.1627 | −9.481 | −10.057
F17 | Std | 2.83 × 10−4 | 7.1031 | 3.8401 | 12.3134 | 11.3324 | 5.7099
F17 | R/T | 1/− | 3/+ | 6/+ | 5/+ | 4/+ | 2
F18 | Mean | 2662.4 | 2642.7 | 2745.2 | 2673.7 | 2626.2 | 2500
F18 | Std | 115.9621 | 69.9378 | 210.9216 | 190.9163 | 23.952 | 0
F18 | R/T | 5/+ | 3/+ | 4/+ | 6/+ | 2/+ | 1
F19 | Mean | 2600.2 | 2600 | 2600 | 2600.4 | 2600 | 2600
F19 | Std | 0.3285 | 0.0604 | 2.1052 × 10−6 | 1.4886 | 0.09 | 0
F19 | R/T | 5/+ | 3/+ | 2/+ | 6/+ | 4/+ | 1
F20 | Mean | 2704.5 | 2700.4 | 2700 | 2700.9 | 2712.5 | 2700
F20 | Std | 38.3336 | 13.1324 | 6.4311 × 10−13 | 27.4906 | 30.3025 | 4.5475 × 10−13
F20 | R/T | 5/+ | 3/+ | 2/+ | 4/+ | 6/+ | 1
Table 5. Overall Wilcoxon rank-sum test results and mean rank results of GWO variants.

Result | MixedGWO | GWOCS | LearnGWO | mGWO | RW_GWO | IGWO
+/≈/− | 18/0/2 | 18/0/2 | 17/2/1 | 17/1/2 | 17/0/3 | ~
Mean rank | 4.7 | 3.25 | 3.5 | 4.05 | 3.75 | 1.5
Overall rank | 6 | 2 | 4 | 5 | 3 | 1
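As an illustration of how a "Mean rank" row such as the one above can be aggregated, the sketch below averages per-function ranks taken from the R/T rows of Table 3; the column labels follow the reconstructed table header, and the three-function excerpt is only for demonstration.

```python
# Sketch: averaging per-function ranks into a mean rank per algorithm.
# Assumption: 'ranks' holds the ranks parsed from the R/T rows of Table 3.
import numpy as np

algorithms = ["MixedGWO", "GWOCS", "LearnGWO", "mGWO", "RW_GWO", "IGWO"]
ranks = np.array([
    [5, 4, 2, 3, 6, 1],   # F1
    [5, 4, 2, 3, 6, 1],   # F2
    [5, 3, 2, 6, 4, 1],   # F3
])
mean_rank = ranks.mean(axis=0)          # column-wise average over the functions
for name, r in zip(algorithms, mean_rank):
    print(f"{name}: {r:.2f}")
```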
Table 6. Results of unimodal, multimodal and shifted and rotated multimodal benchmark functions.

Function | Metric | GWO | SCA | PSO | WOA | ABC | TSA | MVO | IGWO
F1 | Mean | 2.07 × 10−59 | 1.25 × 10−2 | 9.43 × 10−9 | 6.59 × 10−150 | 0.7887 | 6.02 × 10−6 | 0.2963 | 0
F1 | Std | 1.33 × 10−58 | 1.07 × 10−1 | 1.88 × 10−7 | 1.77 × 10−148 | 2.1909 | 1.59 × 10−5 | 0.3969 | 0
F1 | R/T | 3/+ | 6/+ | 4/+ | 2/+ | 8/+ | 5/+ | 7/+ | 1
F2 | Mean | 1.83 × 10−34 | 1.20 × 10−5 | 6.0004 | 1.27 × 10−102 | 0.0363 | 0.0208 | 0.4606 | 0
F2 | Std | 1.78 × 10−33 | 1.26 × 10−5 | 38.9865 | 2.18 × 10−101 | 0.056 | 0.0305 | 1.1093 | 0
F2 | R/T | 3/+ | 4/+ | 8/+ | 2/+ | 6/+ | 5/+ | 7/+ | 1
F3 | Mean | 1.20 × 10−12 | 6.57 × 103 | 16.3022 | 1.76 × 104 | 3.41 × 104 | 2.67 × 104 | 45.3638 | 0
F3 | Std | 3.44 × 10−11 | 4.86 × 103 | 46.3105 | 4.31 × 104 | 2.04 × 104 | 1.49 × 104 | 120.8618 | 0
F3 | R/T | 2/+ | 5/+ | 3/+ | 6/+ | 8/+ | 7/+ | 4/+ | 1
F4 | Mean | 2.25 × 10−14 | 17.9 | 0.6072 | 36.9902 | 53.3308 | 26.7721 | 1.0175 | 0
F4 | Std | 1.07 × 10−13 | 9.85 | 0.67181 | 13.8156 | 24.6552 | 17.5272 | 1.7441 | 0
F4 | R/T | 2/+ | 5/+ | 3/+ | 7/+ | 8/+ | 6/+ | 4/+ | 1
F5 | Mean | 26.7285 | 579.7010 | 56.6336 | 27.2767 | 1.72 × 104 | 125.1067 | 425.0734 | 27.6068
F5 | Std | 3.1903 | 1.41 × 104 | 233.9869 | 3.1856 | 5.29 × 104 | 232.5186 | 3.53 × 103 | 3.7676
F5 | R/T | 1/− | 6/+ | 4/+ | 2/− | 8/+ | 5/+ | 7/+ | 3
F6 | Mean | 8.28 × 10−4 | 5.09 × 10−2 | 4.8865 | 0.0021 | 0.2127 | 0.2946 | 0.0222 | 9.13 × 10−5
F6 | Std | 0.0016 | 7.87 × 10−2 | 26.4596 | 0.0104 | 0.1851 | 0.2579 | 0.0465 | 2.49 × 10−4
F6 | R/T | 2/+ | 5/+ | 8/+ | 3/+ | 6/+ | 7/+ | 4/+ | 1
F7 | Mean | 0.7531 | 10.4 | 86.7988 | 8.53 × 10−15 | 224.9901 | 107.1537 | 111.7582 | 0
F7 | Std | 10.9081 | 6.41 | 23.4636 | 1.21 × 10−13 | 93.4056 | 53.0099 | 198.3001 | 0
F7 | R/T | 3/+ | 4/+ | 5/+ | 2/+ | 8/+ | 6/+ | 7/+ | 1
F8 | Mean | 1.67 × 10−14 | 16.1 | 4.24 × 10−05 | 4.97 × 10−15 | 1.569 | 1.3307 | 0.9705 | 8.88 × 10−16
F8 | Std | 1.55 × 10−14 | 8.48 | 2.64 × 10−04 | 1.36 × 10−14 | 2.5771 | 4.6003 | 3.2009 | 0
F8 | R/T | 3/+ | 8/+ | 4/+ | 2/+ | 7/+ | 6/+ | 5/+ | 1
F9 | Mean | 0.0036 | 0.335 | 0.0094 | 0.0036 | 0.8348 | 0.2145 | 0.554 | 0
F9 | Std | 0.0299 | 0.335 | 0.0443 | 0.0699 | 0.4122 | 0.3981 | 0.5881 | 0
F9 | R/T | 2/+ | 6/+ | 4/+ | 3/≈ | 8/+ | 5/+ | 7/+ | 1
F10 | Mean | 0.0427 | 4.1166 | 0.0173 | 0.0069 | 6.3958 × 103 | 0.6925 | 1.2145 | 0.1822
F10 | Std | 0.1214 | 68.7625 | 0.2116 | 0.0234 | 6.5686 × 104 | 2.1067 | 4.9150 | 0.5448
F10 | R/T | 3/− | 7/+ | 2/− | 1/− | 8/+ | 5/+ | 6/+ | 4
F11 | Mean | 1202.3 | 1203.1 | 1200.4 | 1202.3 | 1203.3 | 1201.5 | 1200.7 | 1201.5
F11 | Std | 6.2334 | 1.2522 | 1.6106 | 3.9568 | 2.4321 | 2.8913 | 1.8405 | 1.1522
F11 | R/T | 6/+ | 7/+ | 1/− | 5/+ | 8/+ | 4/+ | 2/− | 3
F12 | Mean | 1300.6 | 1303.8 | 1300.5 | 1300.6 | 1300.6 | 1300.6 | 1300.7 | 1300.5
F12 | Std | 2.0869 | 1.7531 | 0.6553 | 0.7098 | 0.4312 | 0.3043 | 0.8459 | 0.0700
F12 | R/T | 6/+ | 8/+ | 2/+ | 5/+ | 4/+ | 3/+ | 7/+ | 1
F13 | Mean | 1405.5 | 1472.1 | 1400.7 | 1403.0 | 1400.8 | 1400.3 | 1400.7 | 1400.6
F13 | Std | 44.3370 | 74.1032 | 8.0383 | 26.9729 | 0.2715 | 0.2175 | 1.9733 | 1.9105
F13 | R/T | 7/+ | 8/+ | 4/+ | 6/+ | 5/+ | 1/− | 3/+ | 2
Table 7. Results of fixed-dimension multimodal and composition benchmark functions.

Function | Metric | GWO | SCA | PSO | WOA | ABC | TSA | MVO | IGWO
F14 | Mean | 0.0014 | 1.00 × 10−3 | 0.0057 | 6.09 × 10−4 | 6.22 × 10−4 | 3.83 × 10−4 | 0.0034 | 3.33 × 10−4
F14 | Std | 0.0195 | 4.23 × 10−4 | 0.0335 | 0.0015 | 4.51 × 10−4 | 2.51 × 10−4 | 0.0366 | 2.58 × 10−4
F14 | R/T | 6/+ | 5/+ | 8/+ | 3/+ | 4/+ | 2/+ | 7/+ | 1
F15 | Mean | 0.398 | 0.399 | 0.398 | 0.398 | 0.398 | 0.398 | 0.398 | 0.398
F15 | Std | 9.97 × 10−5 | 1.45 × 10−3 | 0 | 2.43 × 10−6 | 0 | 0 | 1.16 × 10−6 | 5.71 × 10−08
F15 | R/T | 5/+ | 6/+ | 1/− | 4/+ | 1/− | 1/− | 3/+ | 2
F16 | Mean | 3 | 3 | 3 | 3 | 3 | 3 | 5.7 | 3
F16 | Std | 2.95 × 10−05 | 1.37 × 10−05 | 4.97 × 10−15 | 1.50 × 10−04 | 1.54 × 10−15 | 3.29 × 10−15 | 79.6386 | 1.07 × 10−06
F16 | R/T | 6/+ | 5/+ | 3/− | 7/+ | 1/− | 2/− | 8/+ | 4
F17 | Mean | −9.8160 | −3.45 | −7.9464 | −8.5398 | −10.1526 | −10.1532 | −7.6246 | −10.0786
F17 | Std | 6.9026 | 12.5768 | 15.1201 | 13.6539 | 0.0175 | 1.46 × 10−14 | 15.1507 | 2.3102
F17 | R/T | 4/+ | 8/+ | 7/+ | 5/+ | 2/− | 1/− | 6/+ | 3
F18 | Mean | 2641.6 | 2713.4 | 2616.4 | 2668.3 | 2617.4 | 2615.3 | 2623.7 | 2500
F18 | Std | 68.8862 | 142.2616 | 12.1211 | 264.5586 | 4.1657 | 0.0414 | 30.339 | 0
F18 | R/T | 6/+ | 8/+ | 3/+ | 7/+ | 4/+ | 2/+ | 5/+ | 1
F19 | Mean | 2600 | 2607.1 | 2624.4 | 2608.7 | 2638.5 | 2633.5 | 2636.5 | 2600
F19 | Std | 0.0727 | 40.0755 | 43.3383 | 5.6168 | 20.2054 | 8.6373 | 5.5488 | 0
F19 | R/T | 2/+ | 3/+ | 5/+ | 4/+ | 8/+ | 6/+ | 7/+ | 1
F20 | Mean | 2712.3 | 2739.6 | 2718.1 | 2722.9 | 2741.4 | 2719.9 | 2708.1 | 2700
F20 | Std | 31.8856 | 50.1478 | 26.4854 | 112.5271 | 42.1794 | 13.3521 | 11.7235 | 4.55 × 10−13
F20 | R/T | 3/+ | 7/+ | 4/+ | 6/+ | 8/+ | 5/+ | 2/+ | 1
Table 8. Overall Wilcoxon rank-sum test results and mean rank results.

Result | GWO | SCA | PSO | WOA | ABC | TSA | MVO | IGWO
+/≈/− | 18/0/2 | 20/0/0 | 16/0/4 | 17/1/2 | 17/0/3 | 16/0/4 | 19/0/1 | ~
Mean rank | 3.75 | 6.05 | 4.15 | 4.1 | 6 | 4.2 | 5.4 | 1.7
Overall rank | 2 | 8 | 4 | 3 | 7 | 5 | 6 | 1
Table 9. Environmental model information.

Item | Simple Environment (Case 1) | Complex Environment (Case 2)
Obstacles | 3 | 9
Starting point | (0, 0) | (0, 0)
Ending point | (4, 6) | (10, 10)
The shortest length | 7.21 | 14.14
Table 10. Experimental parameter configuration.

Parameter | Simple Environment (Case 1) | Complex Environment (Case 2)
Population size | 30 | 30
Path points | 2 | 2
Interpolation points | 100 | 100
Iterations | 100 | 100
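To make the configuration in Tables 9 and 10 concrete, the sketch below shows one plausible way a candidate solution (two free path points between the fixed start and end points) can be interpolated with a cubic spline at 100 points and scored by path length plus an obstacle penalty. The helper names, the circular-obstacle representation and the penalty weight are assumptions for illustration, not the authors' exact formulation.

```python
# Sketch: spline-smoothed path evaluation for the case-2 setting of Tables 9 and 10.
import numpy as np
from scipy.interpolate import CubicSpline

def spline_path(free_points, start=(0.0, 0.0), end=(10.0, 10.0), n_interp=100):
    """free_points: array of shape (2, 2) -- the two intermediate waypoints."""
    pts = np.vstack([start, free_points, end])
    t = np.linspace(0.0, 1.0, len(pts))
    cs_x, cs_y = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])
    tt = np.linspace(0.0, 1.0, n_interp)
    return np.column_stack([cs_x(tt), cs_y(tt)])

def path_cost(path, obstacles, penalty=100.0):
    """Path length plus a penalty for each obstacle the interpolated path enters."""
    length = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    violations = sum(np.any(np.linalg.norm(path - c, axis=1) < r) for c, r in obstacles)
    return length + penalty * violations

# Hypothetical circular obstacles given as (centre, radius) pairs.
obstacles = [(np.array([5.0, 5.0]), 1.0), (np.array([7.0, 3.0]), 0.8)]
path = spline_path(np.array([[3.0, 2.0], [7.0, 8.0]]))
print(path_cost(path, obstacles))
```

In such a setting, each wolf in IGWO would encode only the coordinates of the two free path points, and the cost returned above would serve as its fitness.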
Table 11. Results comparison in case 1, where the best values are highlighted in bold.

Algorithm | Mean | Best | Worst | Unsafe Paths | Success Rate
RMPSO | 7.8486 | 7.8308 | 7.8989 | 3 | 90%
MEGWO | 7.9557 | 7.7764 | 8.3557 | 3 | 90%
IGWO | 7.6529 | 7.5669 | 7.6981 | 0 | 100%
Table 12. Results comparison in case 2, where the best values are highlighted in bold.

Algorithm | Mean | Best | Worst | Unsafe Paths | Success Rate
RMPSO | 15.5894 | 14.9284 | 17.5323 | 4 | 86.67%
MEGWO | 16.1491 | 14.5662 | 17.6817 | 3 | 90%
IGWO | 14.7052 | 14.5691 | 14.8698 | 0 | 100%
Table 13. Experimental results in complex obstacle environment, and the best values are highlighted in bold.

Algorithm | Iteration | Path Length | Success Rate
RMPSO | 96 | 36.289 | 86.67%
MEGWO | 93 | 40.1963 | 83.33%
mGWO | 101 | 39.3321 | 80%
IGWO | 72 | 31.8779 | 90%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
