Article

An Efficient End-to-End Obstacle Avoidance Path Planning Algorithm for Intelligent Vehicles Based on Improved Whale Optimization Algorithm

1
College of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China
2
Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fuzhou 350118, China
*
Authors to whom correspondence should be addressed.
Mathematics 2023, 11(8), 1800; https://doi.org/10.3390/math11081800
Submission received: 14 March 2023 / Revised: 6 April 2023 / Accepted: 7 April 2023 / Published: 10 April 2023
(This article belongs to the Section Engineering Mathematics)

Abstract

End-to-end obstacle avoidance path planning for intelligent vehicles has been a widely studied topic. To address the typical weaknesses of existing solving algorithms, namely weak global optimization ability, a tendency to fall into local optima, and slow convergence, this paper proposes an efficient optimization method based on the whale optimization algorithm. We present an adaptive adjustment mechanism that dynamically modifies search behavior during the iteration process of the whale optimization algorithm. Meanwhile, in order to coordinate the global and local search of the solving algorithm, we introduce a controllable variable that can be reset according to the specific routing scenario. The evolutionary strategy of differential variation is also applied to further update the locations of search individuals. In numerical experiments, we compared the proposed algorithm with the following six well-known swarm intelligence optimization algorithms: Particle Swarm Optimization (PSO), the Bat Algorithm (BA), the Grey Wolf Optimizer (GWO), the Dragonfly Algorithm (DA), the Ant Lion Optimizer (ALO), and the traditional Whale Optimization Algorithm (WOA). Our method gave better results on the typical twenty-three benchmark functions. On path planning problems, we observed an average improvement of 18.95% in achieving optimal solutions and 77.86% in stability. Moreover, our method exhibited faster convergence than the existing approaches compared.

1. Introduction

In recent years, with the advancement of computer technology and the rise of artificial intelligence, intelligent vehicles and autonomous driving have developed rapidly. Ever more researchers are focusing on the field of path planning for intelligent vehicles and robots. The task of path planning for an intelligent vehicle or robot is to find a continuous path from the initial point to the final goal. The best path is determined by constraints and conditions, such as whether to find the shortest path between start point and end point or the shortest collision-free travel time [1], and path planning is an NP-hard problem [2].
The path planning problem is a classic combinatorial optimization problem [3]. Traditional solutions mainly include the Dijkstra algorithm, the Floyd algorithm, the artificial potential field method, the A* algorithm, and so on. In recent years, swarm intelligence optimization algorithms have attracted increasing attention, and more researchers have adopted them to solve path planning problems. In addition, with breakthroughs in key machine learning technologies, some scholars have applied machine learning to path planning. Designing more efficient algorithms for path planning therefore has clear research value. However, designing such algorithms is challenging in practical applications: the real environment is complex and changeable, and the design process must cope with complexity, randomness and multiple constraints. First, the real environment must be converted into a mathematical model that a swarm intelligence optimization algorithm can solve, which itself requires careful study. Second, designing and improving an algorithm that finds the optimal solution while avoiding obstacles and ensuring the shortest path requires multiple, detailed considerations. Finally, the applicability of the algorithm needs to be verified by a large number of experiments.
Although the path planning problem is complex to study, its application value is very high. Path planning can be applied in military, industrial, agricultural and other fields: it can plan the trajectory of missiles, guide robots through various tasks in dangerous environments, plan the navigation of unmanned vehicles in urban road networks, and plan the paths of crop-spraying drones [4]. Therefore, the study of path planning problems has become increasingly practical.
In order to solve the end-to-end obstacle avoidance path planning problem of intelligent driving vehicles, this paper proposes an improved whale optimization algorithm and carries out a series of experimental studies. The main work and contributions of this paper are as follows:
(1)
According to the theory of directed graph, this paper gives the process of transforming an obstacle environment into a mathematical model in detail, which lays the foundation for research work on obstacle avoidance path planning.
(2)
This paper proposes an improved whale optimization algorithm, named IWOA. Firstly, the adaptive dynamic adjustment mechanism is designed. IWOA adds a new parameter to adjust the update mechanism of the algorithm and to improve the optimization ability of the whale optimization algorithm. Secondly, IWOA introduces a controllable variable, which developers can adjust according to actual problems to control whether the algorithm performs global optimization or local optimization to a greater or lesser extent. Finally, the IWOA algorithm uses the differential evolution strategy to update the individual position again after the end of an iteration to improve the optimization ability of the algorithm.
(3)
In order to test the performance of the improved whale optimization algorithm and its ability to solve the obstacle avoidance path planning problem, this paper designed a large number of benchmark test function experiments and path map experiments of different sizes, and compared the experimental results against six algorithms: PSO, BA, GWO, DA, ALO and WOA. An experimental analysis and summary are provided.
The rest of the paper is organized as follows. Section 2 introduces related work on swarm intelligence optimization algorithms. Section 3 presents the problem formulation for obstacle avoidance path planning. The improved whale optimization algorithm is described in detail in Section 4. Section 5 conducts experiments on 23 benchmark functions and compares the results with those of six common swarm intelligence optimization algorithms. In Section 6, we apply the improved whale optimization algorithm to path planning, and conduct simulation experiments and results analysis on the path planning problem with the above six algorithms. Section 7 concludes, summarizing the results of this study, pointing out its remaining limitations, and outlining directions for future work.

2. Related Work

The concept of swarm intelligence was first proposed by Beni G. and Wang J. in 1989 [5]. In the 1990s, inspired by the evolutionary processes and foraging behaviors of certain biological groups in nature, scientists simulated the intelligent behaviors of these groups and developed many swarm intelligence optimization algorithms. As early as 1991, M. Dorigo et al. developed ant colony optimization (ACO) [6] by simulating the path-finding behavior of foraging ants, opening the door to the research field of swarm intelligence optimization algorithms. Then, in 1995, Eberhart and Kennedy proposed Particle Swarm Optimization (PSO), which mainly simulates the predatory behavior of bird flocks [7,8]. With the advances in science and technology of the 21st century, swarm intelligence optimization algorithms have developed comprehensively. In 2002, Mehdi N. et al. proposed the Artificial Fish Swarm Algorithm (AFSA) [9]. This algorithm assumes that the place where the largest number of fish live is the most nutrient-rich place in the water, and it simulates the feeding behavior of fish to achieve optimization. In 2008, Yang X.S., a Cambridge scholar, proposed the Firefly Algorithm (FA) [10,11], based on the luminous characteristics and mutual attraction behavior of individual fireflies. In 2010, Yang X.S. et al. proposed the Bat Algorithm (BA) [12], which mainly simulates the predatory behavior of bats using echolocation. In 2014, Seyedali Mirjalili, Seyed Mohammad Mirjalili, and Andrew Lewis of Griffith University in Australia proposed the Grey Wolf Optimizer (GWO) [13], inspired by the predatory behavior of grey wolves. In 2015, Seyedali Mirjalili et al. developed the Dragonfly Algorithm (DA) [14], based on the habits of dragonflies in avoiding natural enemies and finding food. In 2016, Seyedali Mirjalili proposed the Ant Lion Optimizer (ALO) [15], imitating the predatory behavior of antlions.
In the same year, Seyedali Mirjalili and Andrew Lewis proposed the whale optimization algorithm (WOA) [16]. In 2020, Xue Jiankai and Shen Bo of Donghua University in China proposed the sparrow search algorithm (SSA), which mainly simulates the foraging and anti-predation behaviors of sparrow groups [17].
The excellent performance of the swarm intelligence algorithm in solving benchmark functions has led more and more researchers to use it to solve more practical engineering problems, including the path planning problem of intelligent vehicles [18,19,20,21,22]. However, most of the swarm intelligence optimization algorithms are random search algorithms based on probability, so they have strong randomness when solving specific engineering problems, weak coordination between global optimization and local optimization, and a propensity to fall into the local optimum [23]. Therefore, researchers have also improved the optimization algorithms, by strengthening individual optimization capabilities, deploying global and local optimization mechanisms, and fusing different algorithms. Studies have shown that the improved swarm intelligence optimization algorithms can achieve better results in solving path planning problems [24,25,26,27,28,29,30,31,32].
Among the many swarm intelligence optimization algorithms, the whale optimization algorithm has characteristics of simple operation and strong ability to jump out of local optimization [33]. However, shortcomings in solving the path planning problem of intelligent vehicles remain. Therefore, we improved the whale optimization algorithm, by adding controllable factors to strengthen the optimization ability of the algorithm, designing an adaptive dynamic adjustment mechanism to coordinate the search behavior of the algorithm, and integrating the idea of differential evolution into the whale optimization algorithm to optimize individual positions. The experimental results showed that the improved whale optimization algorithm not only obtains the optimal solution, but also has stronger robustness and faster convergence speed.

3. Problem Formulation for Obstacle Avoidance Path Planning

According to whether the road conditions and environment change with time, path planning can be divided into dynamic path planning and static path planning. According to the target scope of planning, path planning can be divided into global path planning and local path planning [34,35]. The category of path planning in this paper is global static path planning to find the optimal path from a given starting point to an end point in the road environment with obstacles. There are three important factors: knowing the starting point and end point, avoiding all obstacles, and finding the shortest path as far as possible. In general, our solution to this problem is to use graph theory to establish a mathematical model, create the objective function and constraints, and then use the optimization algorithm to solve it. Here, in order to facilitate the modeling of the problem and the display of the path, we established a directed graph from the starting point to the end point in combination with a grid graph. We then establish the mathematical model and the optimization formula of the problem, and then show the planned path in the grid diagram.

3.1. Grid Diagram

In this paper, the grid method is used to simulate situations that may exist in the actual environment. The basic idea of the grid method is to decompose the terrain environment into several regular units (usually squares), which are grids. We describe the actual terrain through the feature information of the grid [36,37]. This paper uses the structure of the quadtree, so a moving object has four moving directions in the grid: up, down, left and right. Special attention is paid to the size of the grid: the smaller the unit grid, the more accurately the terrain is described, but the greater the amount of calculation required. Conversely, the larger the unit grid, the smaller the computational effort, but the lower the accuracy. At the same time, when the number of grids increases, the space complexity of the algorithm increases accordingly. In short, this method can approximately describe the environment. Here, the actual driving environment of the vehicle is shown in Figure 1a, where there are static obstacles such as parks, lakes, schools, etc. The road environment can be divided into a grid map using the grid method, as shown in Figure 1b, where the black squares represent obstacles in the way and the white areas are traversable paths. The grid has a side length of 1. In the graph, one can specify the start point (green point) and end point (red point).
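As a minimal illustration of the grid method (the map data below are hypothetical, not the paper's Figure 1), a binary occupancy matrix can encode obstacles, with motion restricted to the four directions described above:

```python
# Minimal occupancy-grid sketch (hypothetical map, not the paper's Figure 1):
# 0 = traversable cell, 1 = obstacle; motion is 4-connected (up/down/left/right).
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def neighbors(grid, r, c):
    """Free cells reachable in one step from (r, c)."""
    rows, cols = len(grid), len(grid[0])
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # up, down, left, right
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
            yield nr, nc

print(sorted(neighbors(grid, 2, 2)))  # free 4-neighbors of the centre cell
```

The smaller the cell size, the larger such a matrix becomes, which is exactly the accuracy/cost trade-off described above.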

3.2. Parameter Setting

We use the center of each grid cell in the grid graph to represent a node on the travel path, that is, a vertex of the directed graph. Let V = {V_1, V_2, ..., V_n} be the set of n vertices, where V_i and V_j denote the i-th and j-th vertices. The connection between two adjacent vertices constitutes an edge, denoted by E, with (V_i, V_j) ∈ E. In addition, we can assign a weight to each edge according to the time cost and distance cost of the vehicle running in the actual scene, represented by ω; ω_ij denotes the weight on the edge from vertex V_i to vertex V_j. In this way, we combine the grid image to build a weighted directed graph G = (V, E, ω), as shown in Figure 2. The parameters involved and their meanings are given in Table 1.
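The grid-to-graph conversion can be sketched as follows; unit edge weights are an assumption here (the paper allows ω to encode time and distance costs):

```python
def grid_to_digraph(grid, weight=1.0):
    """Build a weighted directed graph G = (V, E, w) from an occupancy grid.
    Vertices are free cells; each pair of adjacent free cells yields two
    directed edges with the given weight (unit cost here, by assumption)."""
    rows, cols = len(grid), len(grid[0])
    w = {}  # (Vi, Vj) -> edge weight
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    w[((r, c), (nr, nc))] = weight
    return w

# 2x2 toy grid with one obstacle at cell (1, 0):
# free cells (0,0), (0,1), (1,1) give 4 directed edges.
w = grid_to_digraph([[0, 0], [1, 0]])
print(len(w))  # 4
```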

3.3. Objective Function and Constraints

Before describing the objective function, we make the following assumptions:
(1)
The motion area in the road environment is a two-dimensional limited area.
(2)
There are a finite number of stationary obstacles in the environment, and their heights have no effect on moving objects.
(3)
The position and size of obstacles in the motion area never change.
We can build the following mathematical optimization model.
Objective function:
f = \min \sum_{i=1}^{n} \sum_{j=1}^{n} \omega_{ij} X_{ij},  (1)
s.t.
\sum_{j=1}^{n} X_{ij} - \sum_{j=1}^{n} X_{ji} = \begin{cases} 1, & i = 1 \\ -1, & i = n \\ 0, & i \neq 1, n \end{cases}  (2)
where,
X_{ij} = \begin{cases} 1, & \text{accessible from } V_i \text{ to } V_j \\ \infty, & \text{not accessible from } V_i \text{ to } V_j \end{cases}  (3)
In Equation (1), ω_ij represents the weight on edge E, which is a value greater than 0. X_ij is a decision variable: when V_j is accessible from V_i, the value of X_ij is 1; when it is not accessible, the value of X_ij is infinite, as shown in Equation (3). Equation (2) is the constraint condition that characterizes all feasible paths from vertex 1 to vertex n. The principle is as follows:
When i = 1, the balance equals 1: exactly one more edge of the path leaves vertex 1 than enters it, so the path must depart from vertex 1.
When i = n, the balance equals −1: exactly one more edge of the path enters vertex n than leaves it, so the path must terminate at vertex n.
When i ≠ 1, n, the balance equals 0: any path edge entering an intermediate vertex must be matched by one leaving it, so the path continues through intermediate vertices until it reaches vertex n.
So, by using Formulas (1)–(3), we obtain the length of the shortest accessible path.
In this way, when solving the vehicle path planning problem, our solution can be described as follows: first, create a grid map, based on the actual environment, and specify the start and end points in the grid map. Then, using Formula (1) as the objective condition, solve it using the swarm intelligence optimization algorithm. Finally, record the solution results and display the driving path in a grid graph.
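To make constraint (2) concrete, the flow balance of a candidate path's decision variables can be checked numerically (a toy three-vertex path, not a map from the paper):

```python
def flow_balance(n, path_edges):
    """Return sum_j X_ij - sum_j X_ji for each vertex i (1-based),
    where X_ij = 1 exactly when edge (i, j) lies on the chosen path."""
    out_deg = [0] * (n + 1)
    in_deg = [0] * (n + 1)
    for i, j in path_edges:
        out_deg[i] += 1
        in_deg[j] += 1
    return [out_deg[i] - in_deg[i] for i in range(1, n + 1)]

# Path 1 -> 2 -> 3 on three vertices: the balance must be [1, 0, -1],
# i.e. +1 at the start, -1 at the end, 0 at the intermediate vertex.
print(flow_balance(3, [(1, 2), (2, 3)]))  # [1, 0, -1]
```

Any assignment of the X_ij that violates this balance vector cannot represent a single path from vertex 1 to vertex n.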

4. An Improved Whale Optimization Algorithm

The whale optimization algorithm is a meta-heuristic algorithm that realizes the optimization of complex problems by simulating the hunting behavior of humpback whales [38]. The predatory behavior of humpback whales is mainly divided into three parts: surrounding the prey, attacking the prey and searching for the prey.
When surrounding prey, humpback whales can autonomously identify prey and surround them. According to this feature, when the algorithm is designed, the value corresponding to the position of each individual is brought into the fitness function, and the position of an individual is determined to be the current optimal solution position according to the calculation result. This position can generally be understood as the position closest to the prey. Other individuals update their position according to the position of this individual. The position update formula is as follows:
X(t+1) = X^*(t) - A \cdot D,  (4)
A = 2a \cdot r - a,  (5)
D = |C \cdot X^*(t) - X(t)|,  (6)
C = 2 \cdot r.  (7)
where t is the current iteration number, X^*(t) is the best position found so far, A and C are coefficient vectors, D is the distance between the individual and the best individual, a is a parameter decreasing from 2 to 0, and r is a random number in [0, 1]. Humpback whales have two behavioral modes when attacking their prey: the shrinking encircling mechanism and the spiral position update. From Formula (5), the value range of A is [−2, 2]. When A falls in [−1, 1], the algorithm adopts the shrinking encircling mechanism, that is, the individual position is updated according to Formula (4). In addition, humpback whales can use a spiral mechanism for position updates. The spiral position update creates a spiral formula, as in (8), between the whale and prey positions to simulate the whale's spiral motion.
X(t+1) = D \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t),  (8)
where D is as shown in Equation (6), b is the logarithmic spiral shape constant, and l is a random number in [−1, 1]. Assuming that humpback whales adopt these two behaviors with equal probability when attacking their prey, we can represent this with the mathematical model in (9), in which p is a random number in [0, 1]. The process of attacking the prey corresponds to the exploitation phase of the algorithm, that is, local optimization.
X(t+1) = \begin{cases} X^*(t) - A \cdot D, & p < 0.5 \\ D \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t), & p \ge 0.5 \end{cases}  (9)
In addition to the above behaviors, humpback whales can also conduct random searches for prey, based on the positions of other individuals. When A falls in [−2, −1] or [1, 2], the algorithm adopts the mechanism expressed by Equation (10). Here, X_rand is a randomly selected individual position in the current population. The behavior of searching for prey corresponds to the exploration phase of the algorithm, that is, global optimization.
X(t+1) = X_{rand}(t) - A \cdot D,  (10)
D = |C \cdot X_{rand}(t) - X(t)|.  (11)
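Equations (4)–(11) can be sketched as a single scalar position update (the paper's vector update applies the same rule component-wise; taking the spiral distance as |X* − X| is an assumption, since the paper refers back to Equation (6)):

```python
import math
import random

def woa_step(x, x_best, x_rand, t, T, b=1.0):
    """One scalar WOA position update following Equations (4)-(11)."""
    a = 2.0 * (1.0 - t / T)             # a decreases linearly from 2 to 0
    A = 2.0 * a * random.random() - a   # Eq. (5)
    C = 2.0 * random.random()           # Eq. (7)
    p = random.random()
    if p < 0.5:
        if abs(A) < 1:                  # shrinking encirclement, Eqs. (4), (6)
            return x_best - A * abs(C * x_best - x)
        # random search for prey, Eqs. (10)-(11)
        return x_rand - A * abs(C * x_rand - x)
    l = random.uniform(-1.0, 1.0)       # spiral position update, Eq. (8)
    return abs(x_best - x) * math.exp(b * l) * math.cos(2 * math.pi * l) + x_best
```

Repeated over a population, this single rule already drives agents toward the current best position as a shrinks.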
Based on this analysis of the whale optimization algorithm, we improve it to overcome its weak global optimization ability, its tendency to fall into local optima, its slow convergence speed, and other shortcomings. The result is an improved whale optimization algorithm (IWOA).

4.1. Adaptive Dynamic Adjustment Mechanism

According to the analysis of the whale optimization algorithm, the algorithm decides which position update mechanism to perform according to the value of p. When p < 0.5, the shrinking encircling mechanism is performed, and when p ≥ 0.5, the spiral position update is performed. Moreover, only when p < 0.5 and |A| ≥ 1 does the algorithm perform a global search, which weakens its global search ability and makes it prone to falling into local optima. Therefore, we designed an adaptive dynamic adjustment mechanism to improve the global and local optimization capabilities of the whale optimization algorithm. We introduce a parameter pp that changes dynamically with the number of iterations. Its formula is shown in (12), and the improved position update formula is shown in (13). In the formula, T is the maximum number of iterations of the algorithm. The values of pp before and after the improvement are shown in Figure 3.
pp = \frac{1}{1 + e^{(t/T) - 0.4}},  (12)
X(t+1) = \begin{cases} X^*(t) - A \cdot D, & p < pp \\ D \cdot e^{bl} \cdot \cos(2\pi l) + X^*(t), & p \ge pp \end{cases}  (13)
From Figure 3, we can see that the improved pp gradually decreases from about 0.6 to 0.34 as the number of iterations increases, which means that at the beginning of the algorithm the shrinking encircling mechanism is used more often to update positions. In this way, the probability that IWOA performs global optimization increases. When the algorithm enters its later stage, as pp decreases, the spiral position update is used more often, and the probability of local optimization increases accordingly. That is to say, the improved whale optimization algorithm does not select its position update mechanism through a fixed random split. On the contrary, it automatically adjusts the value of pp according to the number of iterations: the improved algorithm explores more initially, and exploits more as the number of iterations increases.
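Reading Equation (12) as pp = 1/(1 + e^{(t/T) − 0.4}) (a reconstruction; the exact exponent is an assumption, chosen to be consistent with the stated 0.6 → 0.34 range), the schedule can be checked numerically:

```python
import math

def pp_schedule(t, T):
    """Adaptive threshold pp, one reading of Eq. (12): starts near 0.6
    and decays monotonically toward roughly 0.35 as t approaches T."""
    return 1.0 / (1.0 + math.exp(t / T - 0.4))

T = 10000
values = [pp_schedule(t, T) for t in range(T + 1)]
print(round(values[0], 3), round(values[-1], 3))  # ~0.599 and ~0.354
```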

4.2. Introduction of Controllable Variables

From the previous description, the whale optimization algorithm determines whether to perform global or local optimization according to the value range of A in the shrinking encircling mechanism. When |A| < 1, a local position update is performed according to Formula (4), and when |A| ≥ 1, a global position update is performed according to Formula (10). It can be seen from Formula (5) that the value of A is determined by a, and in the original algorithm a simply decreases linearly. This cannot meet the requirements of practical situations, because in practice we need to decide, for each problem, whether the algorithm should lean toward global or local optimization. Therefore, to make a adjustable, we introduce a controllable variable k. The value of a is given in Formula (14). After the improvement, the effect of different k values on a is shown in Figure 4.
a = \begin{cases} 2 - \frac{t^2}{2k^2}, & 0 \le t < \frac{2k^2}{T} \\ \frac{2(T - t)^2}{T^2 - 2k^2}, & \frac{2k^2}{T} \le t \le T \end{cases}  (14)
As can be seen from the figure, different values of k affect a differently. When k < 0.5T, a is below 1 for a larger proportion of the run, and this tendency grows as k decreases. When k ≥ 0.5T, a is above 1 for a larger proportion of the run, and this tendency grows as k increases. The value of a directly affects the value of A, which in turn determines whether the algorithm performs global or local optimization. The effect of different k values on A is shown in Figure 5. In the figure, we can see that as k increases, the probability that A lies in [−1, 1] becomes smaller and, conversely, the probability that A lies in [−2, −1] or [1, 2] becomes greater. That is to say, the probability of |A| ≥ 1 increases with k. In this way, by introducing the controllable variable k we can directly control the value of a, and thereby the value range of A. Therefore, when solving practical problems we can steer the algorithm toward either global or local optimization.
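Taking Equation (14) as the piecewise schedule below (a reconstruction of the garbled formula; the exact coefficients are an assumption), the endpoints a(0) = 2 and a(T) = 0 can be verified for any k < T/√2:

```python
def a_schedule(t, T, k):
    """One reading of the piecewise Equation (14) (coefficients assumed):
    a falls from 2 at t = 0 to 0 at t = T, and k shifts how much of the
    run is spent above or below a = 1."""
    t0 = 2.0 * k * k / T                       # switchover iteration
    if t < t0:
        return 2.0 - t * t / (2.0 * k * k)     # early, slowly-decaying branch
    return 2.0 * (T - t) ** 2 / (T * T - 2.0 * k * k)  # late branch, hits 0 at t = T

T = 100
samples = {k: [a_schedule(t, T, k) for t in range(T + 1)] for k in (20, 40, 60)}
```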

4.3. Differential Mutation Evolution Strategy

Differential Evolution (DE) is a heuristic random search algorithm based on population differences, originally developed for Chebyshev polynomial fitting problems [39]. In order to address the low search efficiency of the WOA algorithm and its tendency to fall into local optima, this paper introduces the idea of differential mutation. In a whale group, each individual goes through three stages: surrounding prey, attacking prey, and searching for prey, each with a corresponding position update strategy. The differential mutation evolution strategy introduced in this paper updates an individual's position again, through random differential mutation, after the regular position update is completed. The differential evolution formula used in this paper is Formula (15). Here, r is a random number in [0, 1], X_rand is a randomly selected individual position in the current population, and the scale factor f varies with the number of iterations, as in Formula (16). After each iteration, we compare the current individual position with the position obtained by the differential mutation evolution strategy, and keep the better of the two. This not only accelerates the convergence of the population, but also effectively prevents premature convergence to local optima, improving the algorithm's optimization performance.
X(t+1) = r \cdot (X^*(t) - X(t)) + f \cdot (X_{rand}(t) - X(t)),  (15)
f = \frac{1 - e^{t/T}}{1 - e^{1}}.  (16)
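The post-iteration differential step (Eqs. (15)–(16), as reconstructed here) with greedy selection can be sketched as follows; the scalar positions and sphere fitness are illustrative assumptions:

```python
import math
import random

def de_refresh(x, x_best, x_rand, t, T, fitness):
    """Post-iteration differential refresh: propose the Eq. (15) candidate
    and greedily keep whichever of the two positions is fitter."""
    f = (1.0 - math.exp(t / T)) / (1.0 - math.e)      # scale factor, Eq. (16)
    r = random.random()
    candidate = r * (x_best - x) + f * (x_rand - x)   # Eq. (15)
    return candidate if fitness(candidate) < fitness(x) else x

random.seed(0)
sphere = lambda x: x * x
refreshed = de_refresh(3.0, 0.0, 1.0, t=50, T=100, fitness=sphere)
```

Because the candidate is only accepted when it improves fitness, this refresh can never make an individual worse, which is what prevents the strategy from disrupting convergence.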

4.4. Steps to Improve Whale Optimization Algorithm

The steps of the improved whale optimization algorithm are as follows:
Step 1. Initialize parameters (number of population, number of iterations, etc.).
Step 2. Calculate the fitness value of each individual according to its position in the population, and select the optimal individual X*.
Step 3. While t is less than the maximum number of iterations, update the parameters a, A, C, l, k, p and pp.
Step 4. Compare p with pp and select a position update mechanism accordingly. When p < pp, the position is updated using the shrinking encircling mechanism: here, when |A| < 1 the algorithm performs local optimization, and when |A| ≥ 1 it performs global optimization. When p ≥ pp, the spiral position update mechanism is used.
Step 5. Use the differential mutation evolution strategy to update the individual position and record the better individual position.
Step 6. Determine whether the maximum number of iterations is reached. If yes, calculate the fitness value of each individual and record the optimal value. If not, repeat step three.
Figure 6 is a flowchart of the improved whale optimization algorithm.
The pseudo code of our improved whale optimization algorithm is given as Algorithm 1.
Algorithm 1 Improved whale optimization algorithm (IWOA).
Input: The whale population, maximum number of iterations, etc.
Output: The best fitness value
 Calculate the fitness of each search agent and pick the best search agent X*.
while t < Max-iterations do
     for each search agent do
         if p < pp then
            if |A| < 1 then
                Update the position of the current search agent using Equation (4)
            else
                Update the position of the current search agent using Equation (10)
            end if
         else
            Update the position of the current search agent using Equation (8)
         end if
   Update the position using Equation (15) and record the better value.
    end for
   Calculate the fitness of each search agent, and return the optimal value.
   Update X* if there is a better solution.
   t = t + 1.
end while
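The pseudo code above can be sketched as a compact, runnable loop. The scalar sphere objective, the box constraints, the linear fallback schedule for a, and the reconstructed pp and f schedules are all illustrative assumptions rather than the authors' exact implementation:

```python
import math
import random

def iwoa(fitness, dim=5, n_agents=20, T=300, lb=-10.0, ub=10.0, b=1.0):
    """Sketch of the Algorithm 1 loop for box-constrained minimisation."""
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    best = min(X, key=fitness)[:]
    for t in range(T):
        a = 2.0 * (1.0 - t / T)                        # linear schedule for a
        pp = 1.0 / (1.0 + math.exp(t / T - 0.4))       # adaptive threshold, Eq. (12)
        f = (1.0 - math.exp(t / T)) / (1.0 - math.e)   # differential scale, Eq. (16)
        for i, x in enumerate(X):
            A = 2.0 * a * random.random() - a          # Eq. (5)
            C = 2.0 * random.random()                  # Eq. (7)
            if random.random() < pp:                   # encirclement / random search
                ref = best if abs(A) < 1 else random.choice(X)
                new = [ref[d] - A * abs(C * ref[d] - x[d]) for d in range(dim)]
            else:                                      # spiral update, Eq. (8)
                l = random.uniform(-1.0, 1.0)
                new = [abs(best[d] - x[d]) * math.exp(b * l)
                       * math.cos(2 * math.pi * l) + best[d] for d in range(dim)]
            # Differential-mutation refresh with greedy selection, Eq. (15)
            xr, r = random.choice(X), random.random()
            cand = [r * (best[d] - new[d]) + f * (xr[d] - new[d]) for d in range(dim)]
            new = min((new, cand), key=fitness)
            X[i] = [min(ub, max(lb, v)) for v in new]
            if fitness(X[i]) < fitness(best):
                best = X[i][:]
    return best

random.seed(42)
sphere = lambda x: sum(v * v for v in x)
best = iwoa(sphere)
```

Because the best position is only replaced on strict improvement, the returned fitness is monotonically non-increasing across iterations.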

5. Benchmark Function Experiment

In order to test the performance of the improved whale optimization algorithm, we used the improved whale optimization algorithm to solve 23 standard test functions commonly found in the literature [40,41,42,43,44]. Furthermore, we compared the solution results with six typical swarm intelligence algorithms: Particle Swarm Optimization (PSO), Bat Algorithm (BA), Gray Wolf Optimization Algorithm (GWO), Dragonfly Algorithm (DA), Ant Lion Algorithm (ALO), and Whale Optimization Algorithm (WOA). In this section, we provide experimental values and graphs.

5.1. Parameter Setting and Environment Configuration

In the experiment, we set the population to 30 individuals and the number of iterations to 10,000. The other parameter settings of the algorithms compared in the experiment are shown in Table 2. It is worth noting that the 23 standard test functions we used can be divided into three categories. F1–F7 are uni-modal benchmark functions. This type of function has only one optimum in the global scope and is generally used to test the exploitation ability of an algorithm, that is, its local refinement and convergence behavior. F8–F13 are multi-modal benchmark functions, which can be used to test the exploration ability of the algorithm, that is, its optimization ability in the global scope. In addition, from the results obtained we can also judge whether the algorithm easily falls into local optima. F14–F23 are multi-modal benchmark functions with fixed dimensions. In general, the difficulty of a benchmark function can be controlled by changing its dimension, which allows the performance of the algorithm to be tested at different scales.
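For concreteness, the standard Sphere and Rastrigin definitions are typical members of the first two categories (that they coincide exactly with the paper's F1 and F9 is an assumption); both attain their global minimum of 0 at the origin:

```python
import math

def sphere(x):
    """Uni-modal: a single global optimum, tests local refinement."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multi-modal: many regularly spaced local optima, tests exploration."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

origin = [0.0] * 30
print(sphere(origin), rastrigin(origin))  # both evaluate to 0.0 at the origin
```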
The experiments were run in MATLAB R2021a on Microsoft Windows 11 Home Edition, with an AMD Ryzen 7 5800H processor (with Radeon Graphics, 3.20 GHz) and 16 GB of RAM.

5.2. Experimental Results and Analysis

In the standard test function experiment, we ran each algorithm 10 times independently on each function. The mean and standard deviation of the 10 results were recorded. The experimental data are shown in Table 3, with the optimal results in bold.
The iterative graphs of the uni-modal test function, the multi-modal test function, and the fixed dimension multi-modal test function in the experiment are shown in Figure 7, Figure 8 and Figure 9, respectively.
Assessing the experimental results, we first observe from the data in Table 3 that the improved algorithm IWOA performed well on the uni-modal test functions. The result for F6 was slightly worse than that of the ALO algorithm, but the optimal result was obtained for all other functions. More importantly, IWOA had the smallest standard deviation of all the algorithms. Although GWO and WOA also obtained the optimal results for the F1 and F2 functions, it can be clearly seen from Figure 7 that IWOA converged fastest. This result shows that the improved algorithm greatly improved exploitation, providing strong local optimization ability, good stability and fast convergence. Secondly, it can be seen from the multi-modal results in Table 3 and Figure 8 that IWOA obtained the best results, the smallest standard deviations, and the fastest convergence. This shows that IWOA had a greater advantage in exploration: it is not only faster in global optimization, but also more stable and obtains better solutions. Finally, for the fixed-dimension multi-modal test functions, IWOA still obtained the best results, although its stability was worse than ALO's when solving F19 and F20, and GWO, ALO, and WOA sometimes achieved comparable results. From Figure 9, it can be seen that the convergence speed of IWOA remained very fast on these complex function optimization problems, while other algorithms could also reach the optimal value. Even in this case, IWOA retains certain advantages.
In general, compared with some existing algorithms, IWOA significantly improved the quality of solving the benchmark test functions, and had higher stability and faster convergence speed.
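The benchmark protocol used above (10 independent runs per algorithm on each function, recording the mean and standard deviation) can be sketched as follows. The random-search optimizer and the Sphere function (F 1) below are illustrative placeholders standing in for the compared algorithms and the full benchmark set, not the paper's implementations.

```python
import random
import statistics

def sphere(x):
    # F1 (Sphere), a typical uni-modal benchmark with optimum 0 at the origin
    return sum(v * v for v in x)

def random_search(f, dim, bounds, iters=100):
    # Placeholder optimizer standing in for PSO / GWO / WOA / IWOA, etc.
    lo, hi = bounds
    best = float("inf")
    for _ in range(iters):
        x = [random.uniform(lo, hi) for _ in range(dim)]
        best = min(best, f(x))
    return best

def benchmark(f, dim=30, bounds=(-100, 100), runs=10):
    # 10 independent runs; report mean and standard deviation of the bests
    results = [random_search(f, dim, bounds) for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)

ave, std = benchmark(sphere)
print(f"ave={ave:.4g}, std={std:.4g}")
```

Each table cell in Table 3 corresponds to one such (ave, std) pair for a given algorithm–function combination.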

6. Application of Path Planning Problem

In order to test the performance of IWOA on the path planning problem, we applied the mathematical model described in Section 3 with IWOA and the above-mentioned six swarm intelligence algorithms. With all other environmental settings held constant, we gradually increased the grid map size from 10 × 10 to 100 × 100. We performed 10 runs for each map size and compared the means and standard deviations. The population size of each swarm intelligence algorithm was 30 and the number of iterations was 100. As before, the optimal values are in bold. The experimental data are presented in Table 4.
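As a rough illustration of the fitness used in these map experiments, a candidate path on a grid map can be scored by its Euclidean length, with obstacle cells rendering it infeasible. The grid encoding below (1 = obstacle, 0 = free, paths as lists of cells) is our simplifying assumption for illustration, not the exact model of Section 3.

```python
import math

def path_length(path, grid):
    # path: list of (row, col) cells; grid: 2D list, 1 = obstacle, 0 = free.
    # Returns the Euclidean path length, or infinity if any cell is blocked.
    if any(grid[r][c] == 1 for r, c in path):
        return math.inf
    return sum(math.hypot(r2 - r1, c2 - c1)
               for (r1, c1), (r2, c2) in zip(path, path[1:]))

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
# A feasible path around the central obstacle, from (0, 0) to (2, 2)
print(path_length([(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)], grid))  # 4.0
```

Minimizing this length over candidate paths is the objective each swarm algorithm optimizes on the grid maps.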
During the experiment, we recorded the results of each algorithm on every map size and randomly selected one run out of each set of 10 experiments. Based on the variation of the obtained values with the number of iterations, we plotted the convergence curves shown in Figure 10. In addition, we randomly selected one experiment, recorded the actual driving paths of the seven algorithms on the ten map sizes, and plotted the resulting trajectories in Figures 11–20. In these figures, green dots mark the starting point and red dots the ending point.
As can be seen from Table 4, except on the 60 × 60 map, where the average value obtained by IWOA was slightly worse than that of GWO, IWOA obtained the best average value in all cases. Compared with the other algorithms, when solving for the optimal value, IWOA achieved a maximum improvement of 49.12% and a minimum improvement of 10.92%, with an average improvement of 18.95%. In terms of stability, except on the 50 × 50 and 60 × 60 maps, where the standard deviations of ALO and GWO, respectively, were smaller than that of IWOA, the standard deviation of IWOA was the smallest for all sizes. IWOA improved stability by up to 86.22% and by at least 34.99%, with an average improvement of 77.86%. This not only shows the feasibility of the improved algorithm for path planning, but also implies that the effectiveness and stability of IWOA on the path planning problem exceed those of the other swarm intelligence algorithms.
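Assuming the reported percentages are relative improvements of the form (baseline − IWOA) / baseline — the paper does not state the formula explicitly, so this is our assumption — a quick check against the Table 4 averages looks like this:

```python
def improvement(baseline, improved):
    # Relative improvement in percent, assuming the convention
    # (baseline - improved) / baseline; smaller path lengths are better.
    return (baseline - improved) / baseline * 100.0

# Example with the 10 x 10 averages from Table 4 (PSO = 23.0 vs. IWOA = 18.0)
print(f"{improvement(23.0, 18.0):.2f}%")  # 21.74%
```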
It can be seen from Figure 10 that, compared to the other six swarm intelligence optimization algorithms, IWOA had the fastest convergence speed while still finding the optimal solution. In addition, Figures 11–20 show intuitively that IWOA effectively avoided obstacles when planning the path and found the shortest path that met the requirements. These experimental results show that the optimization ability of the algorithm was greatly improved after adding the controllable variable and the adaptive dynamic adjustment mechanism, and that the stability and convergence speed greatly improved after introducing the differential mutation evolution strategy. Together, these results illustrate the feasibility of IWOA for path planning problems and the effectiveness of our improvement strategy.
In short, compared to some existing algorithms, IWOA obtained better values and stronger stability when solving path planning problems, and it also converged faster during the solving process.

7. Conclusions

This paper proposes an improved Whale Optimization Algorithm (IWOA) for end-to-end obstacle avoidance path planning for intelligent driving vehicles and robots. IWOA can help robots avoid obstacles and reach their destinations in unknown or complex environments, plan the movement of robots and machines on assembly lines, optimize the distribution routes of logistics vehicles, and plan efficient routes for agricultural vehicles spraying pesticides, among other applications.
In this paper, an adaptive dynamic adjustment mechanism is used to modify the search behavior during the algorithm's iterations. At the beginning of the iteration process, IWOA uses the shrinking encircling mechanism for global search; as the number of iterations increases, the local search behavior of the algorithm increases. At the same time, we introduce a controllable variable to adjust the optimization range of the algorithm: developers can tune this parameter according to the specific routing scenario to determine whether the algorithm favors global or local optimization. In addition, after each individual updates its position during an iteration, we apply the differential evolution strategy to update the position again, keeping the better of the two positions. For the experimental design, we carried out benchmark function experiments and path planning application experiments, verifying the feasibility of IWOA for path planning of intelligent vehicles and robots. Moreover, we compared IWOA with PSO, BA, GWO, DA, ALO and WOA; the experimental results showed that IWOA improved the solution quality, enhanced the stability of the solution, and accelerated convergence, for both the benchmark test functions and the path planning problems.
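A minimal sketch of the update loop described above might look as follows. The parameter names (a, A, C, p, the controllable parameter k, and the decreasing probability adjustment parameter pp) follow Table 2, but the concrete update formulas, the DE scale factor of 0.5, and the linear schedules are simplifying assumptions rather than the paper's exact equations.

```python
import math
import random

def iwoa(f, dim, bounds, pop_size=30, max_iter=100, k=None):
    # Hedged sketch of the improved WOA described above, not the paper's
    # exact algorithm: adaptive decrement of a governed by k, a probability
    # pp decreasing from 0.6 to 0.34, and a DE/rand/1 re-update per whale.
    lo, hi = bounds
    if k is None:
        k = 0.6 * max_iter          # controllable parameter, k = 0.6T in Table 2
    clip = lambda x: [min(hi, max(lo, v)) for v in x]
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in X]
    g = min(range(pop_size), key=fit.__getitem__)
    best, best_fit = X[g][:], fit[g]

    for t in range(max_iter):
        a = max(0.0, 2.0 * (1.0 - t / k))       # adaptive decrement of a
        pp = 0.6 - 0.26 * t / max_iter          # pp decreases from 0.6 to 0.34
        for i in range(pop_size):
            if random.random() < pp:            # encircling / search-for-prey
                A = a * (2.0 * random.random() - 1.0)
                C = 2.0 * random.random()
                ref = best if abs(A) < 1 else X[random.randrange(pop_size)]
                X[i] = clip([r - A * abs(C * r - x) for r, x in zip(ref, X[i])])
            else:                               # spiral update toward the best
                l = random.uniform(-1.0, 1.0)
                X[i] = clip([abs(r - x) * math.exp(l) * math.cos(2 * math.pi * l) + r
                             for r, x in zip(best, X[i])])
            # Differential-variation step: DE/rand/1 trial, keep the better one
            r1, r2, r3 = random.sample(range(pop_size), 3)
            trial = clip([X[r1][d] + 0.5 * (X[r2][d] - X[r3][d])
                          for d in range(dim)])
            if f(trial) < f(X[i]):
                X[i] = trial
            fit[i] = f(X[i])
            if fit[i] < best_fit:
                best, best_fit = X[i][:], fit[i]
    return best, best_fit

sphere = lambda x: sum(v * v for v in x)
best, val = iwoa(sphere, dim=10, bounds=(-100, 100))
print(val)
```

The greedy comparison after the DE trial is what "records the better individual position before and after the update"; the extra fitness evaluation per whale is also the source of the increased computation time noted below.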
There are still directions for study and improvement. Firstly, when modeling the problem as a grid graph, this paper utilized the idea of a quadtree structure, which places certain restrictions on the selection of paths; modeling with other structures may be considered in future work. In addition, although the differential mutation evolution strategy introduced in this paper improved the performance of the algorithm, the computation time increased because each individual's position update is evaluated independently in every iteration. Finally, when applied to intelligent driving vehicles and robots, the driving environment may be more complex, and dynamic obstacles would also need to be avoided, which requires designing more sophisticated algorithms. These problems need further study and improvement.

Author Contributions

Conceptualization, C.-H.W.; methodology, C.-H.W. and S.C.; software, S.C. and Q.Z.; validation, C.-H.W., S.C. and Q.Z.; formal analysis, S.C. and Y.S.; investigation, C.-H.W. and S.C.; data curation, S.C. and Y.S.; writing—original draft preparation, S.C. and Y.S.; writing—review and editing, C.-H.W. and Q.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by the Fujian Provincial Department of Science and Technology under Grant No. 2021J011070, and the Fujian University of Technology under Grant No. GY-Z18148.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karur, K.; Sharma, N.; Dharmatti, C.; Siegel, J. A Survey of Path Planning Algorithms for Mobile Robots. Vehicles 2021, 3, 448–468. [Google Scholar] [CrossRef]
  2. Kim, M.; Park, J. Learning collaborative policies to solve NP-hard routing problems. Adv. Neural Inf. Process. Syst. 2021, 34, 10418–10430. [Google Scholar]
  3. Lu, N.; Cheng, N.; Zhang, N.; Shen, X.M.; Mark, J.W. Connected vehicles: Solutions and challenges. IEEE Internet Things J. 2014, 1, 289–299. [Google Scholar] [CrossRef]
  4. Wang, C.-H.; Lee, C.-J.; Wu, X.J. A Coverage-Based Location Approach and Performance Evaluation for the Deployment of 5G Base Stations. IEEE Access 2020, 8, 123320–123333. [Google Scholar] [CrossRef]
  5. Beni, G.; Wang, J. Swarm intelligence in cellular robotic systems. In Robots and Biological Systems: Towards a New Bionics? Springer: Berlin/Heidelberg, Germany, 1993; pp. 703–712. [Google Scholar]
  6. Dorigo, M.; Birattari, M.; Stützle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar]
  7. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN’95), Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  8. Venter, G.; Sobieszczanski-Sobieski, J. Particle swarm optimization. AIAA J. 2003, 41, 1583–1589. [Google Scholar]
  9. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997. [Google Scholar]
  10. Yang, X.S.; He, X. Swarm Intelligence Algorithms: Firefly algorithm. Int. J. Swarm Intell. 2020, 1, 36–50. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, C.-H.; Nguyen, T.-T.; Pan, J.-S.; Dao, T.-k. An Optimization Approach for Potential Power Generator Outputs Based on Parallelized Firefly Algorithm. Adv. Intell. Inf. Hiding Multimed. Signal Process. 2016, 64, 21–23. [Google Scholar]
  12. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  13. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  14. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  15. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  17. Xue, J.K.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  18. Qin, Y.Q.; Sun, D.B.; Li, N.; Cen, Y.G. Path planning for mobile robot using the particle swarm optimization. In Proceedings of the 2004 International Conference on Machine Learning and Cybernetics, Shanghai, China, 26–29 August 2004; Volume 4, pp. 2473–2478. [Google Scholar]
  19. Hsiao, Y.T.; Chuang, C.L.; Chien, C.C. Ant colony optimization for best path planning. In Proceedings of the International Symposium on Communications and Information Technology, Sapporo, Japan, 26–29 October 2004; Volume 1, pp. 109–113. [Google Scholar]
  20. Tsai, P.W.; Nguyen, T.T.; Dao, T.K. Robot path planning optimization based on multiobjective grey wolf optimizer. In Proceedings of the Tenth International Conference on Genetic and Evolutionary Computing, Fuzhou, China, 7–9 November 2016; pp. 166–173. [Google Scholar]
  21. Petrović, M.; Miljković, Z.; Jokić, A. A novel methodology for optimal single mobile robot scheduling using whale optimization algorithm. Appl. Soft Comput. 2019, 81, 105520. [Google Scholar]
  22. Guo, J.; Gao, Y.; Cui, G. The path planning for mobile robot based on bat algorithm. Int. J. Autom. Control 2015, 9, 50–60. [Google Scholar] [CrossRef]
  23. Dong, Z.Y.; Wang, C.-H.; Zhao, Q.G.; Wei, Y.; Chen, S.M.; Yang, Q.P. A Study on Intelligent Optimization Algorithms for Capacity Allocation of Production Networks. In Proceedings of the 2021 Chinese Intelligent Systems Conference, Fuzhou, China, 16–17 October 2021; Volume 804, pp. 734–743. [Google Scholar]
  24. Chen, S.M.; Wang, C.-H.; Dong, Z.Y.; Zhao, Q.G.; Wei, Y.; Huang, G.S. Performance Evaluation of Three Intelligent Optimization Algorithms for Obstacle Avoidance Path Planning. In Proceedings of the Fourteenth International Conference on Genetic and Evolutionary Computing, Jilin, China, 21–23 October 2021; Volume 833, pp. 60–69. [Google Scholar]
  25. Dewangan, R.K.; Shukla, A.; Godfrey, W.W. A solution for priority-based multi-robot path planning problem with obstacles using ant lion optimization. Mod. Phys. Lett. B 2020, 34, 2050137. [Google Scholar] [CrossRef]
  26. Liu, J.Y.; Wei, X.X.; Huang, H.J. An Improved grey wolf optimization algorithm and Its application in path planning. IEEE Access 2021, 9, 121944–121956. [Google Scholar] [CrossRef]
  27. Chhillar, A.; Choudhary, A. Mobile robot path planning based upon updated whale optimization algorithm. In Proceedings of the 2020 10th International Conference on Cloud Computing, Qufu, China, 11–12 December 2020; pp. 684–691. [Google Scholar]
  28. Xu, Y.; Wei, Y.; Jiang, K.; Wang, D.; Deng, H. Multiple UAVs path planning based on deep reinforcement learning in communication denial environment. Mathematics 2023, 11, 405. [Google Scholar] [CrossRef]
  29. Liu, L.J.; Luo, S.N.; Guo, F.; Tan, S.Y. Multi-point shortest path planning based on an Improved discrete bat algorithm. Appl. Soft Comput. 2020, 95, 106498. [Google Scholar] [CrossRef]
  30. Yuan, X.; Wang, X. Path planning for mobile robot based on improved bat algorithm. Sensors 2021, 21, 4389. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, F.; Wang, Z.; Lin, M. Robot path planning based on improved particle swarm optimization. In Proceedings of the IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering, Nanchang, China, 26–28 March 2021; pp. 887–891. [Google Scholar]
  32. Akka, K.; Khaber, F. Mobile robot path planning using an improved ant colony optimization. Int. J. Adv. Robot. Syst. 2018, 15, 1–7. [Google Scholar] [CrossRef] [Green Version]
  33. Zan, J.; Ku, P.T.; Jin, S.F. Research on robot path planning based on whale optimization algorithm. In Proceedings of the IEEE 2021 5th Asian Conference on Artificial Intelligence Technology, Haikou, China, 29–31 October 2021; pp. 500–504. [Google Scholar]
  34. Cao, Y.; Zhou, Y.; Zhang, Y. Path planning for obstacle avoidance of mobile robot based on optimized A* and DWA algorithm. Mach. Tool Hydraul. 2020, 48, 246–252. [Google Scholar]
  35. Rodríguez-Molina, A.; Herroz-Herrera, A.; Aldape-Pérez, M.; Flores-Caballero, G.; Antón-Vargas, J.A. Dynamic Path Planning for the Differential Drive Mobile Robot Based on Online Metaheuristic Optimization. Mathematics 2022, 10, 3990. [Google Scholar] [CrossRef]
  36. Su, Q.G.; Yu, W.W.; Liu, J. Mobile robot path planning based on improved ant colony algorithm. In Proceedings of the IEEE 2021 Asia-Pacific Conference on Communications Technology and Computer Science, Shenyang, China, 22–24 January 2021; Volume 35, pp. 220–224. [Google Scholar]
  37. Wang, C.-H.; Hsing, P.L. Analysis of Bandwidth Allocation on End-to-End QoS Networks under Budget Control. Comput. Math. Appl. 2011, 62, 419–439. [Google Scholar] [CrossRef] [Green Version]
  38. Becerra-Rozas, M.; Cisternas-Caneo, F.; Crawford, B.; Soto, R.; García, J.; Astorga, G.; Palma, W. Embedded Learning Approaches in the Whale Optimizer to Solve Coverage Combinatorial Problems. Mathematics 2022, 10, 4529. [Google Scholar] [CrossRef]
  39. Pant, M.; Zaheer, H.; Garcia-Hernandez, L.; Abraham, A. Differential Evolution: A review of more than two decades of research. Eng. Appl. Artif. Intell. 2020, 90, 103479. [Google Scholar]
  40. Molga, M.; Smutnicki, C. Test functions for optimization needs. 2005. [Google Scholar]
  41. Digalakis, J.G.; Margaritis, K.G. On benchmarking functions for genetic algorithms. Int. J. Comput. Math. 2001, 77, 481–506. [Google Scholar] [CrossRef]
  42. Liang, J.J.; Suganthan, P.N.; Deb, K. Novel composition test functions for numerical global optimization. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 68–75. [Google Scholar]
  43. Yang, X.S. Firefly algorithm: Stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  44. Yao, X.; Liu, Y.; Lin, G.M. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar]
Figure 1. Demonstration of how to convert the actual environment into a grid map.
Figure 2. Weighted directed graph based on grid graph.
Figure 3. Trend of p p changes with iteration times before and after improvement.
Figure 4. The trend of a changing with the number of iterations when k takes different values.
Figure 5. The impact on A when k takes different values, where T is the number of iterations.
Figure 6. Flow chart of improved whale optimization algorithm.
Figure 7. Different algorithms solve F 1 F 7 uni-modal test functions, and the solution value changes with the number of iterations.
Figure 8. Different algorithms solve F 8 F 13 multi-modal test functions, and the solution value changes with the number of iterations.
Figure 9. Different algorithms solve F 14 F 23 fixed dimension multi-modal test functions, and the solution value changes with the number of iterations.
Figure 10. Different algorithms solve different size maps, and the solution value changes with the number of iterations.
Figure 11. Actual path traces of seven different algorithms on 10 × 10 size map.
Figure 12. Actual path traces of seven different algorithms on 20 × 20 size map.
Figure 13. Actual path traces of seven different algorithms on 30 × 30 size map.
Figure 14. Actual path traces of seven different algorithms on 40 × 40 size map.
Figure 15. Actual path traces of seven different algorithms on 50 × 50 size map.
Figure 16. Actual path traces of seven different algorithms on 60 × 60 size map.
Figure 17. Actual path traces of seven different algorithms on 70 × 70 size map.
Figure 18. Actual path traces of seven different algorithms on 80 × 80 size map.
Figure 19. Actual path traces of seven different algorithms on 90 × 90 size map.
Figure 20. Actual path traces of seven different algorithms on 100 × 100 size map.
Table 1. Parameters in the formula and their corresponding meanings.
| Parameter | Meaning of Parameter |
| --- | --- |
| V | The vertices in the directed graph; e.g., V_i, V_j denote the i-th and j-th vertices. |
| E | The edge between two adjacent vertices. |
| ω | The weight of an edge; e.g., ω_ij denotes the weight of the edge from vertex V_i to V_j. |
| G | The directed graph, which contains three elements: V, E, and ω. |
Table 2. Comparison table of algorithm parameters.
| Algorithm | Parameter | Value |
| --- | --- | --- |
| PSO | Learning factor C_1 | 1.4 |
|  | Learning factor C_2 | 1.4 |
|  | Inertia coefficient w | 1.2 |
|  | Initial speed v | Random number in [0, 1] |
|  | Maximum flight speed V_max | 3 |
| BA | Loudness A | Random number in [0, 1] |
|  | Pulse emission frequency r | Random number in [0, 1] |
|  | Maximum frequency F_max | 2 |
|  | Minimum frequency F_min | 0 |
| GWO | Parameter a | Decrement from 2 to 0 |
|  | Coefficient vector A | [−a, a] |
|  | Coefficient vector C | Random number in [0, 2] |
| DA | Resolution s | 0.05 |
|  | Alignment a | 0.06 |
|  | Degree of aggregation c | 0.1 |
|  | Food attractiveness f | 0.5 |
|  | Natural enemy dispel e | 0.1 |
|  | Inertia coefficient w | 0.8 |
| ALO | Walking step length t | 2 |
|  | Adjusting parameter w | [2, 6] |
| WOA | Parameter a | Decrement from 2 to 0 |
|  | Coefficient vector A | [−a, a] |
|  | Coefficient vector C | Random number in [0, 2] |
|  | Probability parameter p | Random number in [0, 1] |
|  | Spiral parameter l | [−1, 1] |
| IWOA | Parameter a | Decrement from 2 to 0 |
|  | Controllable parameter k | 0.6T |
|  | Coefficient vector A | [−a, a] |
|  | Coefficient vector C | Random number in [0, 2] |
|  | Probability parameter p | Random number in [0, 1] |
|  | Probability adjustment parameter p_p | Decrement from 0.6 to 0.34 |
|  | Spiral parameter l | [−1, 1] |
Table 3. Average and standard deviation of benchmark function results from different algorithms.
FPSOBAGWODAALOWOAIWOA
avestdavestdavestdavestdavestdavestdavestd
F 1 0.1040.0231.3481.597007.67210.1242.358 × 10 9 6.765 × 10 10 0000
F 2 4.2906.3711.7413.48001.0140.7710.0100.0250000
F 3 11.613.7662.9195.8917.733 × 10 18 087.34945.4783.760 × 10 6 3.812 × 10 6 68.235128.91800
F 4 0.9600.4250.1750.2600.5180.3041.8761.2260.0040.00112.11123.00200
F 5 107.24467.064125.617133.23726.4380.8191429.7291353.69854.39275.09824.3450.34722.7260.186
F 6 0.0840.0242.0232.9690.4380.3328.0946.4102.687 × 10 9 1.359 × 10 9 6.845 × 10 6 3.394 × 10 6 2.920 × 10 8 8.76 × 10 9
F 7 26.9748.1420.2530.4935.743 × 10 5 3.758 × 10 5 0.0210.0130.0150.0111.360 × 10 4 1.207 × 10 4 2.570 × 10 6 2.100 × 10 6
F 8 −7074.739933.797−116.5230.300−5732.601509.102−2821.746293.180−2233.767310.066−12,512.1891490.997−12,569.4850.001
F 9 55.50526.51219.13415.6980029.88917.31720.8949.3025.684 × 10 15 1.705 × 10 14 00
F 10 1.8700.6100.6900.8237.638 × 10 15 1.066 × 10 15 3.0661.4010.2310.4623.375 × 10 15 2.275 × 10 15 8.880 × 10 16 0
F 11 0.0220.0120.0310.065000.8940.2480.2560.1064.028 × 10 3 7.254 × 10 3 00
F 12 1.3271.4960.2820.1190.0300.0141.3450.6880.8591.2697.561 × 10 6 5.354 × 10 7 6.190 × 10 9 1.560 × 10 9
F 13 0.0930.0320.8120.1860.3500.1540.8920.8820.0020.0041.125 × 10 3 3.295 × 10 3 1.260 × 10 7 4.470 × 10 8
F 14 10.6025.63112.6713.042 × 10 10 6.4684.7390.9981.165 × 10 9 1.6930.6360.9982.866 × 10 15 0.9980
F 15 0.0080.0090.0050.0050.0060.0090.0016.361 × 10 4 0.0050.0084.226 × 10 4 2.742 × 10 4 3.075 × 10 4 0
F 16 −1.0323.844 × 10 5 −0.8660.161−1.0324.069 × 10 11 −1.0321.186 × 10 5 −1.0322.676 × 10 14 −1.0325.188 × 10 15 −1.0320
F 17 0.3980.0010.6850.2070.3984.102 × 10 9 0.3983.141 × 10 6 0.3983.881 × 10 14 0.3986.177 × 10 10 0.3980
F 18 3.0110.01278.73330.7673.0006.869 × 10 8 3.0006.925 × 10 5 3.0008.204 × 10 14 3.0001.278 × 10 7 3.0000
F 19 −3.8200.102−3.0520.408−3.8630.002−3.8631.042 × 10 4 −3.8636.732 × 10 15 −3.8632.235 × 10 3 −3.8630.002
F 20 −2.6590.436−0.8940.608−3.2850.056−3.2430.108−3.2860.054−3.2260.140−3.2860.087
F 21 −10.1531.491 × 10 4 −3.1020.997−10.1532.655 × 10 6 −7.8522.842−7.1132.843−10.1533.561 × 10 7 −10.1530
F 22 −10.4031.592 × 10 4 −2.7620.762−10.4032.464 × 10 6 −8.4573.027−6.1362.996−10.4032.366 × 10 7 −10.4030
F 23 −10.5360.001−3.267−3.267−10.5362.817 × 10 6 −9.2252.670−7.3513.249−10.3562.899 × 10 7 −10.3560
Bold in the table represents the best result compared to different algorithms when solving functions.
Table 4. Comparison of average value and standard deviation of different sized maps solved by different algorithms.
| Map Size | PSO | BA | GWO | DA | ALO | WOA | IWOA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 10 × 10 | 23.0 (2.160) | 22.2 (3.327) | 18.6 (1.350) | 20.2 (2.573) | 18.6 (0.966) | 19.2 (2.700) | 18.0 (0.843) |
| 20 × 20 | 45.6 (2.066) | 43.2 (2.530) | 38.6 (1.350) | 41.8 (4.158) | 39.4 (2.119) | 39.2 (1.687) | 38.2 (0.632) |
| 30 × 30 | 71.0 (4.137) | 63.8 (7.208) | 59.8 (3.824) | 69.6 (10.146) | 59.2 (1.687) | 62.8 (6.197) | 58.8 (1.398) |
| 40 × 40 | 87.0 (8.124) | 91.4 (10.916) | 79.6 (2.633) | 85.4 (8.113) | 83.0 (2.160) | 82.2 (2.898) | 79.2 (1.687) |
| 50 × 50 | 120.8 (9.295) | 113.4 (11.853) | 105.6 (10.058) | 113.4 (8.644) | 101.0 (3.432) | 104.0 (8.743) | 102.4 (7.706) |
| 60 × 60 | 139.0 (3.162) | 120.0 (3.877) | 118.2 (0.632) | 126.0 (7.944) | 120.0 (2.828) | 123.6 (9.419) | 118.8 (1.932) |
| 70 × 70 | 157.4 (5.337) | 143.2 (10.799) | 141.4 (7.545) | 163.0 (19.419) | 140.6 (3.658) | 142.2 (8.867) | 138.6 (0.966) |
| 80 × 80 | 180.4 (2.066) | 163.4 (6.467) | 159.0 (2.160) | 168.8 (9.484) | 162.2 (3.327) | 160.4 (5.641) | 158.6 (1.897) |
| 90 × 90 | 198.2 (7.146) | 190.4 (15.629) | 182.4 (7.706) | 205.4 (15.693) | 181.2 (3.553) | 180.0 (5.657) | 178.8 (1.932) |
| 100 × 100 | 225.2 (15.411) | 210.2 (17.900) | 204.2 (13.147) | 220.8 (17.287) | 204.0 (6.532) | 202.0 (6.799) | 200.6 (5.168) |

Entries give the average value (standard deviation) over 10 runs.
Bold in the table represents the best results compared to algorithms for solving different sized maps.

Share and Cite

MDPI and ACS Style

Wang, C.-H.; Chen, S.; Zhao, Q.; Suo, Y. An Efficient End-to-End Obstacle Avoidance Path Planning Algorithm for Intelligent Vehicles Based on Improved Whale Optimization Algorithm. Mathematics 2023, 11, 1800. https://doi.org/10.3390/math11081800

