Article

Quality-Aware Autonomous Navigation with Dynamic Path Cost for Vision-Based Mapping toward Drone Landing

1 Mechatronics Engineering Department, Istanbul Technical University, Istanbul 34469, Turkey
2 Control and Automation Engineering Department, Istanbul Technical University, Istanbul 34469, Turkey
* Authors to whom correspondence should be addressed.
Drones 2023, 7(4), 278; https://doi.org/10.3390/drones7040278
Submission received: 1 March 2023 / Revised: 30 March 2023 / Accepted: 17 April 2023 / Published: 19 April 2023
(This article belongs to the Section Drone Design and Development)

Abstract

This article presents a novel autonomous navigation approach that is capable of increasing map exploration and accuracy while minimizing the distance traveled for autonomous drone landings. For terrain mapping, a probabilistic sparse elevation map is proposed to represent measurement accuracy and allow map quality to be increased by continuously incorporating new measurements with Bayes inference. For exploration, the Quality-Aware Best View (QABV) planner is proposed for autonomous navigation with a dual focus: map exploration and quality. Generated paths allow for visiting viewpoints that provide new measurements for exploring the proposed map and increasing its quality. To reduce the distance traveled, we handle the path-cost information in the framework of control theory to dynamically adjust the path cost of visiting a viewpoint. The proposed methods treat the QABV planner as a system to be controlled and regulate the information contribution of the generated paths. As a result, the path cost is increased to reduce the distance traveled or decreased to escape from a low-information area and avoid getting stuck. The usefulness of the proposed mapping and exploration approach is evaluated in detailed simulation studies, including a real-world scenario for a package delivery drone.

1. Introduction

Academic research on drones is increasing as drone applications have been successfully used in search and rescue missions [1,2], warehouse management [3,4], surveillance and inspection [5,6], agriculture [7], and package delivery [8]. Although most drone autopilots have autonomous take-off and cruise functions [9,10], landing in an unknown environment is still one of the challenges to be tackled. To accomplish a safe landing, the drone must be aware of the area underneath it to pick a safe landing area autonomously. In this context, large unmanned aerial vehicles use lidar-based active scanners to generate elevation maps [11,12], yet these scanners increase not only the weight but also the energy consumption of the vehicle. Alternatively, drones can be equipped with camera systems that are lightweight and energy efficient. In developing a camera-based method for autonomous landing, the main challenges are (1) terrain mapping and (2) exploration.
For terrain mapping, a single-camera setup can be used for distance measurements of the terrain. In one of the earliest works [13], a method based on the recursive multi-frame planar parallax algorithm [14] is presented to map the environment via a single camera. In [15], a dense motion stereo algorithm processing a single camera is presented to concurrently assess feasible rooftop landing sites for a drone. In [16], a drone equipped with a single camera explores a previously unknown environment and generates a 3D point cloud map with a monocular visual simultaneous localization and mapping (SLAM) system. Then, a grid map is generated from the point cloud data and appropriate landing zones are selected. In [17], fast semi-direct monocular visual odometry (SVO) [18] with a single camera is used for generating a sparse elevation map with Bayes inference to select a safe landing area away from obstacles. SVO has the highest computational efficiency among monocular state estimation and mapping techniques according to [19]. On the other hand, a stereo-camera setup is useful for depth measurements, as in [20,21,22]. Compared to a single camera, a stereo camera (comprising a depth sensor) provides more accurate measurements but is heavier and more expensive. Additionally, the resulting measurement error is still a challenge. For instance, in [23], a stereo-camera system is used for the depth map generation of catastrophe-struck environments. In the depth map, measurements from higher altitudes have higher costs due to lower accuracy, and landing area selection is performed by considering this cost.
For exploration, there are many path planning methods in the literature. In [13], a box-search method serves as the global plan, combined with a predictive controller that acts as a local path planner using online sensor information. In [15], to improve the landing site assessment, a path planner calculates optimal polynomial trajectories while ensuring that a point of interest can be visually monitored. In [24], the use of SVO is extended to a perception-aware receding horizon approach for drones to solve the active SLAM [25] problem and reach a target location safely and accurately. Arc-shaped candidate trajectories are generated with [26] using the information from SVO, and the best trajectory is then selected by trading off collision probability, perception quality, and distance to the target location. To calculate the perception quality of each candidate trajectory, position samples are taken from each trajectory using a constant time interval. Then, for each sample, the position estimation error is calculated by using the visible landmarks from SVO. The intuition is that more visible landmarks result in less position estimation error, and accurate pose estimation along the trajectory can help better triangulate new landmarks, which is beneficial for the subsequent pose estimation. In [20], the receding horizon “Next-Best-View” (NBV) planner is employed. The NBV planner makes use of rapidly-exploring random trees (RRTs) [27] and suggests a receding horizon planning strategy. In every planning iteration, an RRT is generated and the best branch of the tree (i.e., a sequence of waypoints) is determined for map exploration. The intuition is that when a drone visits a waypoint in an RRT branch that provides a better viewpoint for map exploration, the exploration will be faster. The NBV planner selects the best branch with this intuition, but only the first step is executed, in the receding horizon fashion. This process continues until the environment is explored except for residual locations such as occluded places and narrow pockets between objects. To improve the NBV planner regarding global map exploration, Schmid et al. [21] proposed using RRT* [28]; it achieves global exploration by continuously expanding a single trajectory tree, allowing the algorithm to maximize a single objective function. This enables the method to reason about the global utility of a path and refine trajectories to maximize their value. For the same purpose, Selin et al. [22] suggest using frontier exploration planning [29] as a global planner while using the NBV planner as a local planner.
In this study, we propose a novel autonomous navigation approach by considering map exploration and accuracy while minimizing the distance traveled for autonomous drone landings. The proposed overall approach is illustrated in Figure 1.
For terrain mapping, we propose a “Probabilistic Sparse Elevation Map” to represent the area underneath a drone. As illustrated in Figure 1, the SVO approach takes image frames from a down-looking single camera attached to the drone, estimates the drone position, and generates point cloud data where each point is represented with a normal distribution. To generate the proposed map, we use point cloud data as height measurements and apply new measurements continuously with Bayes inference to increase the accuracy of the measurements. During this process, a path planner is also required to process map information and guide the drone to take the necessary measurements for map exploration and quality. When the map generation is completed, the final map can be used to identify safe landing areas for the drone.
For exploration, we propose the “Quality-Aware Best View (QABV) planner” to increase not only the accuracy of the map but also to reduce the distance traveled. To accomplish such a goal, we develop a novel information gain and a dynamic path cost that allow the QABV planner to generate local waypoints considering map accuracy and exploration together with the distance traveled. In every planning iteration of QABV, an RRT rooted at the drone position is built, and the best branch of the RRT is determined according to the information gain, which is the sum of the information contributions of the visible map cells along the subject branch. The intuition is that the more information we obtain from the visible area when following the RRT branch, the more the map quality and exploration will increase. In this context, we calculate the information contribution of a visible map cell using its variance from the proposed probabilistic sparse elevation map if it has been updated before. If it has not, we estimate its information contribution by calculating the height deviation of the visible area. After determining the best branch, the first node of the branch is sent to the position controller as the best waypoint. Through the proposed dynamic path cost approach, the QABV planner is capable of increasing map exploration and accuracy without increasing the distance traveled, thus ensuring traveling efficiency. We propose using the node count between the subject node and the root node as the path cost and handling the dynamic path cost problem within the framework of control theory. We develop four novel approaches to increase efficiency by handling the QABV planner as the system to be controlled in order to regulate the information contribution of the generated paths. As a result, the path cost is increased to reduce the distance traveled or decreased to escape from a low-information area and avoid getting stuck.
The main contributions and outcomes of the study are summarized as follows:
  • A probabilistic sparse elevation map generation algorithm with Bayes inference using SVO point cloud data and a down-looking single-camera setup. The generated map enables the representation of the sensor measurement accuracy to increase map quality by cooperating with the planner.
  • A novel QABV planner for autonomous navigation with a dual focus: map exploration and quality. Generated paths via the novel information gain allow for visiting viewpoints that provide new measurements that increase the performance of the probabilistic sparse elevation map concerning exploration and accuracy.
  • A novel dynamic path cost representation for the QABV planner to reduce the distance traveled and escape from low-information areas. To dynamically adjust the path cost, we propose four control methods: Method 1 offers a simple approach with an on–off controller, Method 2 employs a PD controller to adjust the path cost, Method 3 switches its control action depending on the reference, and Method 4 generates its reference depending on the map information.
  • To show the performance improvements of the proposed approach, we present comparative simulation results. We demonstrate that the developed QABV planners have significantly better performances compared to the NBV planner regarding measurement accuracy and distance traveled.
  • To verify the performance, we also solve the safe landing problem of a delivery drone within a realistic simulation environment. The results verify the usefulness of the proposed mapping and exploration approach.

2. Probabilistic Sparse Elevation Map Generation with SVO Point Cloud Data

In this section, we present the generation of the proposed probabilistic sparse elevation map that enhances the SVO point cloud data through the deployment of Bayes inference. The generated map provides measurement accuracy alongside the height of the terrain, which is processed by the proposed QABV planner to increase the accuracy of a location through Bayes inference.

2.1. SVO Point Cloud Data Generation

SVO estimates the position of image features (e.g., edges, points, and lines), and the estimation accuracy is defined by its uncertainty model [17,30]. In a single down-looking camera scenario, as illustrated in Figure 2, SVO first finds the translation vector d = C₀C₁ between the camera positions, and then solves the feature correspondence problem between consecutive images I₀ and I₁. Once an image feature correspondence is found between the two images as u and u′, the position of the feature p is estimated with triangulation.
As shown in Figure 2, the accuracy of this estimation is modeled as N(μ, σ²), where σ² corresponds to the assumption that a one-pixel error has occurred along the epipolar line passing through u and u′ while solving the feature correspondence problem. In this case, the position of the feature p is perturbed to p⁺, and the distance between them determines σ². To provide a clear explanation of the calculation of σ² and p⁺, let us define the unit vector f = C₀p/‖C₀p‖ and the vector a = C₁p as follows:
$\mathbf{a} = \overrightarrow{C_0 p} - \mathbf{d}.$  (1)
By using vector algebra, we can define the angle α between the vectors f and d as:
$\alpha = \arccos\left(\frac{\mathbf{f} \cdot \mathbf{d}}{\|\mathbf{d}\|}\right)$  (2)
and the angle β between the vectors a and d as follows:
$\beta = \arccos\left(\frac{-\mathbf{a} \cdot \mathbf{d}}{\|\mathbf{a}\|\,\|\mathbf{d}\|}\right).$  (3)
We can add the angle spanning one pixel to β to obtain β⁺, where ζ is the camera focal length [30]:
$\beta^{+} = \beta + 2\tan^{-1}\!\left(\frac{1}{2\zeta}\right).$  (4)
Now, we can calculate the angle γ illustrated in Figure 2 by using the triangle sum theorem as follows:
$\gamma = \pi - \alpha - \beta^{+}.$  (5)
Then, the norm of C₀p⁺ is obtained via the law of sines:
$\left\|\overrightarrow{C_0 p^{+}}\right\| = \|\mathbf{d}\|\,\frac{\sin\beta^{+}}{\sin\gamma}.$  (6)
Therefore, when there is a one-pixel error while solving the feature correspondence problem for p, the measurement uncertainty is [30]:
$\sigma^2 = \left(\left\|\overrightarrow{C_0 p^{+}}\right\| - \left\|\overrightarrow{C_0 p}\right\|\right)^2.$  (7)
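For clarity, the derivation above can be condensed into a short numerical sketch. The function below only illustrates Equations (1)–(7) and is not the authors' implementation; the camera positions, feature position, and focal length in the usage example are assumed values.

```python
import numpy as np

def svo_depth_variance(c0, c1, p, focal_length_px):
    """Illustrative one-pixel-error uncertainty model, Eqs. (1)-(7).

    c0, c1          : camera centers C0 and C1 (3-vectors)
    p               : triangulated feature position (3-vector)
    focal_length_px : camera focal length in pixels (zeta)
    """
    d = c1 - c0                                   # translation between camera centers
    f = (p - c0) / np.linalg.norm(p - c0)         # unit bearing vector from C0 to p
    a = (p - c0) - d                              # vector from C1 to p, Eq. (1)

    alpha = np.arccos(np.dot(f, d) / np.linalg.norm(d))                            # Eq. (2)
    beta = np.arccos(-np.dot(a, d) / (np.linalg.norm(a) * np.linalg.norm(d)))      # Eq. (3)
    beta_plus = beta + 2.0 * np.arctan(1.0 / (2.0 * focal_length_px))              # Eq. (4)
    gamma = np.pi - alpha - beta_plus                                              # Eq. (5)

    depth_plus = np.linalg.norm(d) * np.sin(beta_plus) / np.sin(gamma)             # Eq. (6)
    depth = np.linalg.norm(p - c0)
    return (depth_plus - depth) ** 2                                               # Eq. (7)

# Assumed example: two camera poses 0.6 m apart at 2 m altitude above a ground feature.
var = svo_depth_variance(np.array([0.0, 0.0, 2.0]),
                         np.array([0.6, 0.0, 2.0]),
                         np.array([0.3, 0.0, 0.0]),
                         focal_length_px=460.0)
print(var)  # on the order of 1e-4 m^2 for this geometry
```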

2.2. Probabilistic Sparse Elevation Map Generation via Bayesian Inference

The proposed probabilistic sparse elevation map M uses a 2D matrix to represent the area underneath the drone on a grid, with each element (Cell(i, j)) corresponding to a point on the grid and representing a height measurement. During the generation of the map, each point p = [x, y, z] in the SVO point cloud, along with its corresponding uncertainty (σ²), is continuously processed using Bayesian inference to update the corresponding Cell(i, j) in M. This process takes place as the drone flies over an area, as shown in Figure 3.
In Algorithm 1, the pseudo-code of the proposed map generation approach is presented. Here, each Cell(i, j) of M is initialized as an unmapped cell, defined as follows:
$\mathrm{Unmapped\;Cell} = \{\, Cell(i,j) \mid Cell(i,j) \in M \ \text{and}\ Cell(i,j) = \varnothing \,\}.$  (8)
Then, GetPointCloud() parses the SVO data stream and extracts every p = [x, y, z] together with its σ². For each p in the point cloud, GetRelatedCell(M, p) retrieves the corresponding Cell(i, j) according to the x and y positions of p so that it can be updated. In updating Cell(i, j):
  • If it is an unmapped cell, then its height z and variance σ² are updated via p.
  • If it has been updated before but has more than ±1 cm height error (i.e., a threshold value), then it is defined as an inaccurate cell as follows:
$\mathrm{Inaccurate\;Cell} = \{\, Cell(i,j) \mid Cell(i,j) \in M \ \text{and}\ \sigma^2 > 0.00001 \,\},$  (9)
and it is updated with N(μ, σ²) by applying the Bayes inference defined as follows:
$N_{new}(\mu_n, \sigma_n^2) = N_{old}(\mu_o, \sigma_o^2)\, N(\mu, \sigma^2),$  (10)
where μ is the height measurement z, and N_old(μ_o, σ_o²) is the prior distribution. The resulting μ_n and σ_n² are obtained using Bayes inference as:
$\mu_n = \frac{\sigma_o^2}{\sigma_o^2 + \sigma^2}\,\mu + \frac{\sigma^2}{\sigma_o^2 + \sigma^2}\,\mu_o$  (11)
$\sigma_n^2 = \frac{1}{\frac{1}{\sigma_o^2} + \frac{1}{\sigma^2}}.$  (12)
The generation of M continues by parsing the SVO point cloud and updating the corresponding Cell(i, j) until one of the selected stop criteria, such as map exploration or map accuracy, is satisfied. The map exploration measure is defined as follows:
$M_{Map}\,(\%) = \frac{n(M) - n(\mathrm{Unmapped\;Cell})}{n(M)} \times 100.$  (13)
Map accuracy, on the other hand, is a more challenging measure since the set of inaccurate cells also includes all unmapped cells; it is defined as:
$M_{Acc}\,(\%) = \frac{n(M) - n(\mathrm{Inaccurate\;Cell})}{n(M)} \times 100.$  (14)
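As a minimal sketch of how a cell is fused and how the stop criteria can be evaluated, the snippet below implements Equations (11)–(14) under the assumption that unmapped cells are marked with NaN heights; the function and variable names are illustrative rather than the authors' implementation.

```python
import numpy as np

def bayes_update(mu_old, var_old, z, var_z):
    """Fuse a new height measurement N(z, var_z) with the prior N(mu_old, var_old), Eqs. (11)-(12)."""
    mu_new = (var_old * z + var_z * mu_old) / (var_old + var_z)
    var_new = 1.0 / (1.0 / var_old + 1.0 / var_z)
    return mu_new, var_new

def map_measures(heights, variances, inaccurate_threshold=1e-5):
    """Exploration and accuracy percentages, Eqs. (13)-(14).

    heights, variances : 2D arrays; a NaN height marks an unmapped cell (assumed convention).
    """
    n_total = heights.size
    unmapped = np.isnan(heights)
    # Unmapped cells are counted as inaccurate, consistent with the text above Eq. (14).
    inaccurate = unmapped | (variances > inaccurate_threshold)
    m_map = 100.0 * (n_total - unmapped.sum()) / n_total
    m_acc = 100.0 * (n_total - inaccurate.sum()) / n_total
    return m_map, m_acc
```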
The computational complexity of Algorithm 1 cannot be stated in closed form since satisfying the stop criterion depends on external processes such as the QABV planner and SVO; however, for a single pass of the while loop, the complexity of GetPointCloud() is O(size_pc), where size_pc is the size of the point cloud data. The for loop runs for every point in the cloud, and every operation inside the loop takes constant time, so its complexity is O(size_pc). Lastly, the complexity of checking either of the stop criteria defined in (13) and (14) is O(size_M), where size_M is the size of M. In this case, the total complexity is O(2·size_pc + size_M).
Algorithm 1. Sparse Elevation Map Generation via Bayes Inference
1:  M ← Initialize with unmapped cells
2: while the stop criterion is not satisfied (M_Map or M_Acc)
3:     pc ← GetPointCloud()
4:     for each p in pc do
5:         Cell(i, j) ← GetRelatedCell(M, p)
6:         if Cell(i, j) is Unmapped Cell
7:             Cell(i, j).height ← p.height
8:             Cell(i, j).variance ← p.variance
9:         else if Cell(i, j) is Inaccurate Cell
10:             Cell(i, j).height ← BayesInference(Cell(i, j).height, p.height)
11:             Cell(i, j).variance ← BayesInference(Cell(i, j).variance, p.variance)
12:         end if
13:     end for
14: end while
Figure 4 is an example 3D illustration of the proposed probabilistic sparse elevation map constructed from M. Unlike [17], which shows only height information, our map keeps the variance information of the SVO point cloud and presents the height information in a probabilistic way.

3. Autonomous Navigation for Map Exploration and Accuracy

In this section, we present the proposed autonomous navigation method for map exploration and accuracy that is illustrated in Figure 1. Here, the developed QABV planner generates the best viewpoints for map exploration while considering measurement accuracy and distance traveled. The proposed QABV planner, similar to the NBV planner [20], generates a geometric tree with RRT [27] and selects the best branch of the tree (i.e., a series of waypoints that provides the best viewpoints) by analyzing the map M. The main differences between the QABV and NBV planners are:
  • The QABV planner is capable of acquiring information and guiding the drone with a single down-looking camera, unlike the NBV planner, which uses a stereo camera, since it relies on the generated sparse probabilistic elevation map described in Section 2.
  • Unlike the NBV planner which focuses only on map exploration, the proposed QABV planner has a dual focus on map exploration and quality thanks to a novel information gain function definition. The QABV analyzes the unmapped and inaccurate cells defined in (8) and (9) to generate the best waypoints such that new map cells are updated while the accuracy of previously updated cells is improved.
  • The NBV planner has a global map coverage problem and gets stuck in large environments [21,22]. This increases the distance traveled or even causes failure in exploring all regions. On the other hand, the QABV planner uses a dynamic path cost to reduce the distance traveled.
  • Especially in real-world applications, the path cost of the NBV planner becomes ineffective when the drone operates at a constant speed and the planner algorithm runs periodically. The QABV planner tackles this issue by processing the node numbers of the RRT branch as the path cost and adjusting its behavior dynamically while the map is being generated.
In the remaining part of this section, we first give brief information on the NBV planner and describe its potential issues, and then present the details of the proposed QABV planner.

3.1. The NBV Planner: The Receding Horizon Next Best View Planner

As presented in Algorithm 2, the NBV planner employs a sampling-based receding horizon path planning paradigm for exploring a bounded 3D space V ⊂ ℝ³ via a drone equipped with a stereo camera [20]. Through the information acquired from the depth images of the stereo camera, it generates an occupancy map M, dividing V into cubical volumes m ∈ M [31]. These cubical volumes can be marked as free, occupied, or unmapped. The map M is used both for collision-free navigation through the known free space and for determining the exploration progress.
In [20], a collision-free vehicle position is ξ = (x, y, z, φ)ᵀ, a path from ξ_{k−1} to ξ_k is denoted by l_{k−1}^k, and the cost of following this path is defined as:
$c(l_{k-1}^{k}) = \sqrt{(x_k - x_{k-1})^2 + (y_k - y_{k-1})^2 + (z_k - z_{k-1})^2}.$  (15)
In every planning iteration, starting from the current position of the vehicle ξ 0 , a geometric tree T is incrementally built with RRT [27] by adding a new node n . The resulting T contains N T nodes connected with collision-free and trackable paths, considering the dynamic and kinematic constraints of the drone.
The NBV planner determines the best branch of T with a gain function focusing on map exploration. For a given M, the set of visible and unmapped cells from position ξ is denoted as Visible(M, ξ) [20]. Every cell in this set lies in the Field of View (FoV) of the camera, and the direct line of sight does not cross other cells, so the cells are not occluded. In this case, the gain of a node, Gain(n), is the summation of the unmapped cells that can be explored by the nodes along the branch. The gain of node k is as follows [20]:
$\mathrm{Gain}(n_k) = \mathrm{Gain}(n_{k-1}) + \mathrm{Visible}(M, \xi_k)\, e^{-\lambda c(l_{k-1}^{k})}$  (16)
where λ is the tuning factor for penalizing high path costs.
As presented in Algorithm 2, after every replanning iteration of the NBV planner, the first segment of the branch to the best node, ExtractBestPathSegment(n_best), is returned and executed by the drone, where n_best is the node with the highest gain. The remainder of the best branch is used to reinitialize T in the next run after re-evaluating its gain using the updated map M [20]. The creation of T is generally stopped at N_T = N_max, but if the best gain g_best remains zero, tree construction continues until g_best > 0. In the NBV algorithm, the map exploration problem is considered solved when N_T = N_TOL while g_best remains zero, where N_TOL is a tolerance value significantly higher than N_max.
Algorithm 2. Receding Horizon Next Best View Planner [20]
1:  ξ_0 ← Current vehicle position
2: Initialize T with ξ_0 and, unless the first planner call, also the previous best branch
3:  g_best ← 0, set the best gain to zero
4:  n_best ← n_0(ξ_0), set the best node to the root
5:  N_T ← Number of initial nodes in T
6: while N_T < N_max or g_best = 0 do
7:     Incrementally build T by adding n_new(ξ_new)
8:     N_T ← N_T + 1
9:     if Gain(n_new) > g_best then
10:         n_best ← n_new
11:         g_best ← Gain(n_new)
12:     end if
13:     if N_T > N_TOL then
14:         Terminate exploration
15:     end if
16: end while
17:  l ← ExtractBestPathSegment(n_best)
18: Delete T
19: return l
The computational complexity of Algorithm 2 depends on the number of nodes in the tree N_T, the stereo camera range d_sensor, and the bounded 3D space to be explored V. The RRT creation complexity in a fixed environment is O(N_T log(N_T)), and the query of the best node is O(N_T) [20]. The complexity of a query to the map M is O(log(V/v³)), where v³ is the volume of a map element, and the collision check complexity is O(N_T/v³ · log(V/v³)) for planning paths through known free space in M [20]. Additionally, the gains for every node in the RRT are computed. Considering a visible area of the stereo camera proportional to V_sensor ∝ (d_sensor)³, the number of queries to M approximates V_sensor/v³ for a node gain calculation. Additionally, the map elements on the ray, d_sensor/v of them, are checked for visibility. So, the complexity of the gain calculation for one node is O((d_sensor/v)⁴ · log(V/v³)) [20]. As a result, the total complexity of a single planning iteration is the sum of tree construction, collision checking, and gain computation as follows [20]:
$O\!\left(N_T \log(N_T) + \frac{N_T}{v^3}\log\!\left(\frac{V}{v^3}\right) + N_T \left(\frac{d_{sensor}}{v}\right)^{4} \log\!\left(\frac{V}{v^3}\right)\right).$  (17)
The potential issues and improvement points of the NBV planner are as follows:
(P-i)
The gain function presented in (16) focuses only on map exploration without considering map quality; however, the measurement accuracy of the camera affects the quality of the map. By improving the Visible(M, ξ) function, the exploration progress can be determined while also improving the map quality.
(P-ii)
The path cost function in (15) uses the distance between adjacent nodes; when the algorithm runs periodically and the drone speed is constant, the distance between adjacent nodes is equal. Thus, the NBV planner cannot penalize long paths. For a clear understanding, let us explain this issue via the scenario depicted in Figure 5. In this case, the path cost (15) is the same for the following paths:
$c(l_1^2) = c(l_1^3) = c(l_3^4)$  (18)
When the visible unexplored map information at n_2(ξ_2) in RRT Branch 1 is equal to the visible unexplored map information of both n_3(ξ_3) and n_4(ξ_4) in RRT Branch 2 as follows:
$\mathrm{Visible}(M, \xi_2) = \mathrm{Visible}(M, \xi_3) + \mathrm{Visible}(M, \xi_4),$  (19)
then, according to (16), the gains of n_2(ξ_2) and n_4(ξ_4) are identical since (18) and (19) hold. Thus, we can state that:
$\mathrm{Visible}(M, \xi_2)\, e^{-\lambda c(l_1^2)} = \mathrm{Visible}(M, \xi_3)\, e^{-\lambda c(l_1^3)} + \mathrm{Visible}(M, \xi_4)\, e^{-\lambda c(l_3^4)}$  (20)
$\mathrm{Gain}(n_2) = \mathrm{Gain}(n_4).$  (21)
This shows that the NBV planner cannot decide in favor of Branch 1, even though it provides the same visible unexplored map information as Branch 2 at a shorter distance. Therefore, there is a need to improve the path cost function definition (a small numerical check of this tie is sketched after this list of issues).
(P-iii)
As mentioned in [21,22], the NBV planner has a global map coverage problem and gets stuck in large environments. This can partly be handled by careful tuning of the λ parameter in (16); however, this has to be done for every environment and typically requires several attempts. A better approach is to update λ while the map is being explored rather than keeping it constant as in the NBV planner.
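The tie described in (P-ii) can be checked numerically. The sketch below evaluates the NBV gain (16) for the two branches of Figure 5 under equal segment costs; the cell counts and the 0.6 m node spacing (V·T = 0.2 m/s × 3 s) are illustrative assumptions, not values from the original experiments.

```python
import math

def nbv_branch_gain(visible_per_node, segment_cost, lam=0.5):
    """Cumulative NBV gain, Eq. (16): each node is discounted only by the
    cost of its own segment c(l_{k-1}^k), Eq. (15)."""
    gain = 0.0
    for visible in visible_per_node:
        gain += visible * math.exp(-lam * segment_cost)
    return gain

# Figure 5 scenario with equal node spacing of 0.6 m:
# Branch 1 sees 100 unmapped cells at its single node,
# Branch 2 sees 40 + 60 unmapped cells at its two nodes, so Eq. (19) holds.
print(nbv_branch_gain([100], segment_cost=0.6))      # ~74.1
print(nbv_branch_gain([40, 60], segment_cost=0.6))   # ~74.1 -> the NBV gain cannot
                                                     # distinguish the branches, Eq. (21)
```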

3.2. The Proposed QABV Planner: Quality-Aware Best View Planner

The QABV planner presented in Algorithm 3 is a new local planner for guiding the drone to the best viewpoints with a dual focus, namely map exploration and quality. It increases the map quality during exploration, as it processes the probabilistic sparse elevation map M presented in Section 2. The QABV planner employs a sampling-based receding horizon path planning paradigm like the NBV planner, as presented in Algorithm 3. In every planning iteration (r), T is generated with RRT, starting from ξ_0, by incrementally adding new collision-free nodes (i.e., waypoints) until the number of nodes reaches N_max.
Algorithm 3. Quality-Aware Best View Planner
1:  ξ_0 ← Current vehicle position
2: Initialize T with ξ_0 and, unless the first planner call, also the previous best branch
3: Initialize g′_best with −1 in the first run; otherwise, keep the previous g_best
4: Initialize Δg_best as an empty vector in the first run
5: Initialize λ with λ_init in the first run
6: Set the λ update method in the first run: method ∈ {1, 2, 3, 4}
7:  g_best ← 0, set the best gain to zero
8:  n_best ← n_0(ξ_0), set the best node to the root
9:  N_T ← Number of initial nodes in T
10: switch method
11:     case 1: λ ← Method1(Δg_best, λ)
12:     case 2: λ ← Method2(g′_best, Δg_best, λ)
13:     case 3: λ ← Method3(g′_best, Δg_best, λ)
14:     case 4:
15:         M_Acc ← GetMapAccuracy(M)
16:         λ ← Method4(g′_best, Δg_best, λ, M_Acc)
17: end switch
18: while N_T < N_max or g_best = 0 do
19:     Incrementally build T by adding n_new(ξ_new)
20:     N_T ← N_T + 1
21:     if Gain(n_new, λ) > g_best then
22:         n_best ← n_new
23:         g_best ← Gain(n_new, λ)
24:     end if
25: end while
26: if g′_best ≥ 0
27:     Δg_best ← g_best − g′_best
28:     Append Δg_best to the vector Δg_best
29: end if
30:  wp ← ExtractBestPathWaypoint(n_best)
31: Delete T
32: return wp
The QABV planner provides solutions to the underlined issues of the NBV planner while determining the best branch of T in Algorithm 3 as follows:
  • As a solution to (P-i), with a novel information gain function, the QABV planner determines the best branch of T to increase the measurement accuracy of M while exploring new areas. We propose InformationGain(M, ξ), instead of Visible(M, ξ) of the NBV planner, to calculate the information contribution of the visible unmapped and inaccurate cells from position ξ. The InformationGain(M, ξ) is defined as follows:
$\mathrm{InformationGain}(M, \xi_k) = vucc \cdot \sigma_k + \sum_{m=1}^{vicc} \sigma_m$  (22)
$vucc = \mathrm{VisibleUnmappedCellCount}(M, \xi_k)$  (23)
$vicc = \mathrm{VisibleInaccurateCellCount}(M, \xi_k)$  (24)
where vucc is the number of visible unmapped cells (defined in (8)) and vicc is the number of visible inaccurate cells (defined in (9)), both calculated via M as illustrated in Figure 6. Every cell in these sets lies in the FoV of the camera, and the direct line of sight does not cross other cells.
The variance of visible inaccurate cells (σ_m) is obtained from the Bayes inference defined in (10), whereas that of visible unmapped cells (σ_k) is calculated from the height deviation of the visible area at ξ_k as follows (a minimal numerical sketch of (22)–(26) is given after this list):
$\sigma_k = \mathrm{CalculateSigma}(M, \xi_k).$  (25)
  • As stated in (P-ii), if the planner algorithm runs periodically every rT seconds, where r is the planning iteration and T is the sampling period, while the drone flies at a constant speed, then the cost function of the NBV planner given in (15) becomes ineffective. We solve (P-ii) by using the number of nodes between the subject node and the root node as the path cost instead of (15). Together with the new path cost and the information gain, the new gain function for RRT nodes to determine the best branch in Algorithm 3 (steps 21 and 23) is constructed as follows:
$\mathrm{Gain}(n_k, \lambda) = \mathrm{Gain}(n_{k-1}) + \mathrm{InformationGain}(M, \xi_k)\, e^{-\lambda \eta}.$  (26)
In (26), η is the path cost representing the number of nodes between the root and the subject node, and λ > 0 determines the behavior of the path cost. To test our proposal, let us revisit the scenario depicted in Figure 5. When the information contribution of the visible unmapped and inaccurate cells in RRT Branch 1 is equal to that of both n_3(ξ_3) and n_4(ξ_4) in RRT Branch 2 as below:
$\mathrm{InformationGain}(M, \xi_2) = \mathrm{InformationGain}(M, \xi_3) + \mathrm{InformationGain}(M, \xi_4)$  (27)
then, by using (26), the relation between the node gains of n_2(ξ_2) and n_4(ξ_4) becomes:
$\mathrm{InformationGain}(M, \xi_2)\, e^{-2\lambda} > \mathrm{InformationGain}(M, \xi_3)\, e^{-2\lambda} + \mathrm{InformationGain}(M, \xi_4)\, e^{-3\lambda}$  (28)
$\mathrm{Gain}(n_2, \lambda) > \mathrm{Gain}(n_4, \lambda).$  (29)
As can be seen from (29), the QABV planner always decides in favor of the shorter branch by using the new gain function given in (26), as also illustrated in the numerical sketch following this list.
  • For (P-iii), we propose updating λ > 0 to dynamically change the path cost within (26) while M is being generated or updated. This allows the QABV planner to reduce the distance traveled while maintaining map coverage, as follows:
    When there are many areas to explore near the drone, the information contribution of the nodes in T increases. In this case, we increase λ to aggressively penalize long paths. This increases the focus of the drone on exploring nearby information-rich areas. Additionally, the distance traveled is reduced since closer paths are followed.
    The information contribution of the nodes diminishes as the nearby areas are explored. In this case, the drone should exit the area, but an aggressive path cost prevents further exploration and causes the drone to get stuck. We decrease λ to prevent this situation so that the drone can reach information-rich areas at longer distances.
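To make the dual-focus gain concrete, the sketch below combines Equations (22)–(26) and reproduces the Figure 5 comparison numerically. The cell counts, variances, node-count convention (η = 2 for the first node after the root, following (28)), and the use of the standard deviation of the visible heights as a stand-in for CalculateSigma are illustrative assumptions, not the authors' implementation.

```python
import math
import numpy as np

def information_gain(vucc, inaccurate_vars, visible_heights):
    """Information contribution of one viewpoint, Eqs. (22)-(25).

    vucc            : number of visible unmapped cells, Eq. (23)
    inaccurate_vars : variances of the visible inaccurate cells, Eq. (24)
    visible_heights : heights of the already-mapped visible cells; their standard
                      deviation stands in for CalculateSigma, Eq. (25)
    """
    sigma_k = float(np.std(visible_heights)) if len(visible_heights) else 0.0
    return vucc * sigma_k + float(np.sum(inaccurate_vars))

def qabv_branch_gain(info_per_node, lam):
    """Cumulative QABV gain, Eq. (26), with the node count eta as the path cost."""
    gain = 0.0
    for eta, info in enumerate(info_per_node, start=2):  # eta = 2 for the first node, per Eq. (28)
        gain += info * math.exp(-lam * eta)
    return gain

# Figure 5 scenario: both branches see the same total information, Eq. (27).
lam = 0.5
print(qabv_branch_gain([100.0], lam))        # ~36.8 (Branch 1, one node)
print(qabv_branch_gain([40.0, 60.0], lam))   # ~28.1 (Branch 2, two nodes)
# The shorter branch obtains the higher gain, Eq. (29).
```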
In this context, we handle the λ generation problem from a control-theoretic perspective. The proposed QABV planner, presented in Algorithm 3, can be seen as a system to be controlled via the hyperparameter λ (i.e., the control signal) such that the best node gain g_best (i.e., the output signal) results in a waypoint satisfying the presented effects. We propose four novel methods, based on control system theory, to be used in Algorithm 3 in steps 11, 12, 13, and 16 to regulate the proposed QABV planner. The proposed control methods are shown in Figure 7 and are explained in the remaining part of this section.
After obtaining the best branch of T via (22) and (26), the change in the best node gain (Δg_best) is stored in Algorithm 3, step 28, to be used in the proposed λ update methods in the next planning iterations. Then, the first waypoint of the branch to the best node, wp, is determined with ExtractBestPathWaypoint(n_best) and returned to the drone position controller, where n_best is the node with the highest gain. The rest of the branch is used for initializing T after the first iteration. The creation of T is generally stopped at N_T = N_max, but if the best gain g_best remains zero, tree construction continues until g_best > 0. The QABV planner depends on the generation of the probabilistic sparse elevation map and is terminated when Algorithm 1 finishes. Therefore, N_TOL of Algorithm 2 is not used in the termination of the exploration.
The computational complexity of Algorithm 3 depends on the number of nodes N_T for the RRT creation, the distance between adjacent nodes for collision checking, and the visible map segment M_v, which is centered at the drone position and whose size is proportional to the flight altitude, for the node gain calculations. The complexity of the λ update methods is insignificant compared to the rest of the algorithm and is discussed in the remaining part of this section. The RRT creation complexity is O(N_T log(N_T)), as in the NBV planner, and the query of the best node is O(N_T). The complexity of a single query to M is O(1) because the map presented in Section 2 stores data in a 2D matrix and positions are encoded into matrix indexes as M[x][y]. The collision check complexity for the N_T − 1 paths in the RRT is O(N_T · size_Mp), where size_Mp is the size of a rectangular map segment with adjacent nodes at its diagonal vertices. For the node gain calculations of the RRT, the number of queries to M is equal to the size of M_v (size_Mv). For visibility, a rectangular area with the drone and the subject cell at its diagonal vertices is checked, and in the worst case the number of queries to M is a quarter of the size of M_v (size_Mv/4), when checking the edges of M_v. So, the complexity of the gain calculation for one node is O((size_Mv)²/4). As a result, the total complexity of a single planning iteration is the sum of tree construction, collision checking, and gain computation as follows:
$O\!\left(N_T \log(N_T) + N_T \cdot size_{Mp} + N_T \cdot (size_{Mv})^2 / 4\right).$  (30)

3.2.1. λ Update Method 1: On–Off Controller Approach

As shown in Figure 7a, Method 1 acts like an on–off controller [32] to generate λ; however, rather than using an error signal, the QABV planner with Method 1 (QABV-1) observes Δg_best to understand whether the information contribution of the area is increasing or decreasing and updates λ according to our update strategy, as presented in Algorithm 4. Δg_best is calculated with the backward difference method:
$\Delta g_{best} = g_{best} - g'_{best}$  (31)
in Algorithm 3, step 27, where g′_best is the g_best value from the previous planning iteration. Based on Δg_best, QABV-1 increases λ when Δg_best > 0:
$\lambda_{new} = K\,\lambda.$  (32)
On the other hand, when Δg_best < 0, QABV-1 decreases λ:
$\lambda_{new} = \lambda / K,$  (33)
where the λ change rate can be adjusted by K > 1. Every operation in Method 1 takes constant time; therefore, the computational complexity is O(1).
Algorithm 4. Method 1: On–off controller
1: Method1(Δg_best, λ)
2:     if size(Δg_best) > 0
3:         Δg_best ← Δg_best[end]
4:         if Δg_best > 0
5:             λ_new ← K λ
6:         else
7:             λ_new ← λ / K
8:         end if
9:     else
10:         λ_new ← λ
11:     end if
12: return λ_new

3.2.2. λ Update Method 2: PD Controller Approach

As shown in Figure 7b, rather than directly setting λ as in Method 1, the QABV planner with Method 2 (QABV-2) updates λ as follows:
$\lambda_{new} = \lambda + \Delta\lambda$  (34)
where Δλ is generated via a PD controller defined as:
$\Delta\lambda = K_P E + K_D \Delta\bar{g}_{best}.$  (35)
Here, K_P and K_D are the proportional and derivative gains of the PD controller, respectively. As shown in Figure 7b, the derivative action of the PD controller is placed in the feedback loop to eliminate derivative kicks [33] and is implemented with a moving average filter over the past w values of Δg_best (stored in the vector Δg_best) from previous planning iterations of the planner:
$\Delta\bar{g}_{best} = \frac{1}{w}\sum_{h=0}^{w-1} \Delta g_{best}[\mathrm{end}-h].$  (36)
The proportional action of the PD controller is driven via the error signal E:
$E = R - g'_{best},$  (37)
where R is the desired information contribution value (i.e., the reference signal) so that the best nodes provide the expected information contribution in every planning iteration. Note that (34) and (35) can also be seen as the velocity form of a PI controller. In defining R in (37), we need to consider the following:
  • If the value of R is too big, then λ may saturate at later stages of the map generation process: when the stop criterion is close to being met, there is limited information left in the environment, so the drone may not be able to collect enough information, and the PD controller keeps decreasing λ to raise g_best toward a large R value until λ saturates.
  • If R is too small, then the PD controller may increase λ excessively in the first stages of the map generation process: at the beginning, there is plenty of information to gather from the environment and the information gain is at its highest, so this situation can slow down the exploration process and even make the drone get stuck in the early stages.
The implementation of Method 2 is presented in Algorithm 5. Note that if λ_new ≤ 0, to satisfy λ > 0, we set λ_new to a small positive value ϵ. The computational complexity is O(w) and depends on the filter size in (36).
Algorithm 5. Method 2: PD controller
1: Method2(g′_best, Δg_best, λ)
2:     if size(Δg_best) ≥ w
3:         calculate Δḡ_best using (36)
4:         E ← R − g′_best
5:         Δλ ← K_P E + K_D Δḡ_best
6:         λ_new ← λ + Δλ
7:         if λ_new ≤ 0
8:             λ_new ← ϵ
9:         end if
10:     else
11:         λ_new ← λ
12:     end if
13: return λ_new
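A compact sketch of the Method 2 update loop (Equations (34)–(37)) is given below. The class name and history handling are illustrative; the gains, reference, and filter size follow the values examined later in Section 4.3, with K_P taken as negative per the note there that K_P < 0.

```python
class LambdaPD:
    """Illustrative PD-style lambda update of Method 2 (Algorithm 5), Eqs. (34)-(37)."""

    def __init__(self, lam_init=0.5, k_p=-0.0001, k_d=0.001, reference=300.0,
                 window=5, eps=1e-6):
        self.lam = lam_init
        self.k_p, self.k_d = k_p, k_d
        self.reference = reference       # R, desired information contribution
        self.window = window             # moving-average filter size w
        self.eps = eps                   # small positive bound keeping lambda > 0
        self.delta_g_history = []        # stored Delta g_best values

    def update(self, g_best_prev, delta_g_best):
        """g_best_prev is g'_best; delta_g_best is the latest change in the best gain."""
        self.delta_g_history.append(delta_g_best)
        if len(self.delta_g_history) < self.window:
            return self.lam              # not enough history yet (else branch of Algorithm 5)
        delta_g_bar = sum(self.delta_g_history[-self.window:]) / self.window  # Eq. (36)
        error = self.reference - g_best_prev                                  # Eq. (37)
        self.lam = max(self.lam + self.k_p * error + self.k_d * delta_g_bar,  # Eqs. (34)-(35)
                       self.eps)
        return self.lam
```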

3.2.3. λ Update Method 3: Switching Controller Approach

The QABV planner with Method 3 (QABV-3) is a variation of QABV-2 with the same computational complexity and can be seen as a switching controller, as shown in Figure 7c. In Algorithm 6, we present the implementation, where Δḡ_best is calculated using (36).
We define two switching conditions:
  • When R > g′_best, the proportional action in the forward loop is activated to update λ using (35). When Δḡ_best is small, the derivative action alone cannot decrease λ fast enough, and the drone gets stuck in the information-poor area. The proportional action solves this problem by reducing λ proportionally to the magnitude of E.
  • When R < g′_best, the proportional action is deactivated. In this case, map exploration and accuracy will increase beyond the desired level; therefore, only the derivative action updates λ to allow extra information and stabilize g_best as follows:
$\Delta\lambda = K_D \Delta\bar{g}_{best}.$  (38)
In this way, when there is a change in g_best, the derivative action updates λ in proportion to Δḡ_best to push g_best in the reverse direction. As a result, the best viewpoints collect a similar amount of information from the environment during map generation.
Note that if λ_new ≤ 0, λ_new is set to ϵ > 0.
Algorithm 6. Method 3: Switching controller
1: Method3(g′_best, Δg_best, λ)
2:     if size(Δg_best) ≥ w
3:         calculate Δḡ_best using (36)
4:         E ← 0 (initialize with zero)
5:         if R > g′_best
6:             E ← R − g′_best
7:         end if
8:         Δλ ← K_P E + K_D Δḡ_best
9:         λ_new ← λ + Δλ
10:         if λ_new ≤ 0
11:             λ_new ← ϵ
12:         end if
13:     else
14:         λ_new ← λ
15:     end if
16: return λ_new

3.2.4. λ Update Method 4: 2 DOF PD Controller Approach

As shown in Figure 7d, rather than keeping R constant, the QABV planner with Method 4 (QABV-4) generates R dynamically while the map is generated via:
$R = R_{min} + (R_{max} - R_{min})\, e^{-M_{Acc}/\tau}$  (39)
where M_Acc is the map accuracy of the current planning iteration, as defined in (14). The desired information contribution value R decreases exponentially from R_max to R_min as M_Acc increases. The speed is determined by the exponential decay constant τ, and lower τ values speed up the decay from R_max to R_min, as shown in Figure 8.
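As a brief numeric illustration of the reference decay in (39), the sketch below evaluates R for a few accuracy levels; the R_max, R_min, and τ values are the ones later examined in Section 4.5 and are used here only as assumed inputs.

```python
import math

def dynamic_reference(m_acc, r_max=1000.0, r_min=300.0, tau=30.0):
    """Desired information contribution R as map accuracy grows, Eq. (39)."""
    return r_min + (r_max - r_min) * math.exp(-m_acc / tau)

for m_acc in (0, 25, 50, 75):
    print(m_acc, round(dynamic_reference(m_acc), 1))
# 0 -> 1000.0, 25 -> 604.2, 50 -> 432.2, 75 -> 357.5
```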
The motivation behind using M_Acc to estimate the amount of information the drone can gather from the environment is as follows:
  • In the first planning iterations, the drone can gather maximum information from the environment as there is no prior information in the map and M_Acc is zero. That is why we prefer a big R value in the first planning iteration and set it to R_max, which is equal to the value of g_best in the first planning iteration.
  • As we progress through the planning iterations and represent the gathered information on the map, M_Acc increases. As a result, the available information in the environment becomes limited; therefore, we gradually decrease R until it reaches its minimum value, R_min, towards the end of the map generation process.
In Algorithm 7, we present the implementation of Method 4. Here, E is defined in (37) and calculated with R as defined in (39). λ is updated using (35) after calculating Δḡ_best via (36). Similarly, if λ_new ≤ 0, λ_new is set to ϵ > 0. When calculating the computational complexity, notice that the precalculated M_Acc is obtained from the proposed map in Section 2, so the complexity of Method 4 is the same as that of Methods 2 and 3.
Algorithm 7. Method 4: 2 DOF PD controller
1: Method4(g′_best, Δg_best, λ, M_Acc)
2:     if size(Δg_best) ≥ w
3:         calculate Δḡ_best using (36)
4:         R ← R_min + (R_max − R_min) e^(−M_Acc/τ)
5:         E ← R − g′_best
6:         Δλ ← K_P E + K_D Δḡ_best
7:         λ_new ← λ + Δλ
8:         if λ_new ≤ 0
9:             λ_new ← ϵ
10:         end if
11:     else
12:         λ_new ← λ
13:     end if
14: return λ_new

4. Comparative Performance Analysis

In this section, to validate our proposed approach and show its superiority, we present comparative experimental results. All experiments were conducted in the Gazebo [34] simulation environment, and the test drone used in the experiments is specified in Table 1. The test drone has a down-looking camera that is used by SVO for position estimation and the creation of the point cloud. Figure 9 shows the simulation environment alongside a sample snapshot from the down-looking camera.
The performances of the NBV planner and the proposed QABV planner are tested on the three different maps shown in Figure 10. To compare the performances of the planners, the following criteria are used:
  • Map exploration as defined in (13);
  • Map accuracy as defined in (14);
  • Distance traveled to indicate the path length followed by the drone during the execution of Algorithm 1.
To consider the random behavior of RRT and to have a general understanding of the performance of the planners, all tests are repeated three times for each map.
In the test scenarios, the drone flies at a 2 m constant height on the XY plane of the map. The ArduPilot flight controller [10] is used as the position controller of the drone, and position estimates are provided by SVO. In our simulation studies, a closed-loop system model was needed since both planners require a system model of the drone to generate a feasible node tree considering the dynamic and kinematic constraints on the XY plane. In this context, we performed a naïve system identification experiment with a sampling (i.e., planning) period T = 3 s and a fixed linear velocity V = 0.2 m/s. The XY dynamics are obtained via the System Identification Toolbox of Matlab® [35] and are as follows:
$X(t) \approx 0.9944\,X(t-1) + 9.197\times 10^{-12}\,X(t-2) + 2.86\,U_x(t-1) + 0.1169\,U_x(t-2), \quad U_x(t) = V\cos\theta(t)$  (40)
$Y(t) \approx 0.9968\,Y(t-1) + 3.209\times 10^{-15}\,Y(t-2) + 2.894\,U_y(t-1) + 0.08926\,U_y(t-2), \quad U_y(t) = V\sin\theta(t),$  (41)
where θ(t) is the yaw angle (−π < θ < π).
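A brief sketch that rolls out the identified discrete-time XY dynamics in (40) and (41) for a constant yaw angle is given below; it only illustrates how candidate node positions can be propagated from the model, and the zero initial state and constant input before t = 0 are assumptions.

```python
import math

# Identified discrete-time XY dynamics, Eqs. (40)-(41), T = 3 s, V = 0.2 m/s.
AX, BX = (0.9944, 9.197e-12), (2.86, 0.1169)
AY, BY = (0.9968, 3.209e-15), (2.894, 0.08926)

def rollout(theta, steps=5, v=0.2):
    """Propagate the identified model for a constant yaw angle theta (rad)."""
    x = [0.0, 0.0]                      # x(t-1), x(t-2); zero initial state assumed
    y = [0.0, 0.0]
    ux = [v * math.cos(theta)] * 2      # u_x(t-1), u_x(t-2), constant input assumed
    uy = [v * math.sin(theta)] * 2
    path = []
    for _ in range(steps):
        x_t = AX[0]*x[0] + AX[1]*x[1] + BX[0]*ux[0] + BX[1]*ux[1]
        y_t = AY[0]*y[0] + AY[1]*y[1] + BY[0]*uy[0] + BY[1]*uy[1]
        x, y = [x_t, x[0]], [y_t, y[0]]
        path.append((x_t, y_t))
    return path

print(rollout(theta=math.pi / 4))       # predicted XY positions at the next planning steps
```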

4.1. NBV Planner Results

The NBV planner focuses on map exploration and does not consider map accuracy, as given in (16). Thus, we defined the stop criterion as 95% map exploration. Additionally, as an enhancement to the NBV planner and for a fair comparison, we used the map representation defined in Section 2 instead of the occupancy map used in the original study [20].
In Table 2, the averaged test results are shown for each map when λ in (16) is set to 0.5. On average, the drone travels 79.29 m to explore 95% of the map, and 74.2% of the explored map is accurate according to the Bayes inference defined in (10) in Section 2 applied to the SVO point cloud measurements.
To obtain more insight into the NBV planner, we can look at the test result in Figure 11. The resultant elevation map from the top view is shown in Figure 11a. Although inaccurate cells also need to be visited again to become accurate via Bayes inference, the path created by the planner focuses only on exploring empty cells. Furthermore, the drone got stuck, as mentioned in (P-iii): in Figure 11b, map exploration and accuracy remain stable for iterations between 70 and 100, and a tangle occurs in Figure 11c. This results in a low g_best, as can be seen in Figure 11d, and the drone stays in the local minimum until the gain there drops off.

4.2. QABV-1 Planner Results

The QABV-1 planner focuses on map accuracy and exploration at the same time, as given in (22) and (26) in Section 3, so the stop criteria are chosen as 75%, 80%, and 85% map accuracy as defined in (14). We set λ_init in Algorithm 3 to the same λ value as the NBV planner, λ_init = 0.5.
In the QABV-1 planner tests, we examined how K affects the performance by analyzing the results for K ∈ {1.2, 1.3, 1.4}. Table 3, Table 4 and Table 5 show the averaged test results for each test map and each K value. As can be seen from the results, the drone generally travels more in Map 1 to achieve results similar to those in the other maps. The reason for this behavior can be explained by the number and positions of the cubes: because of the camera view angle, cubes occlude the area behind them if they intercept the line of sight of the camera. The planner finds alternative routes for those occluded areas, which naturally causes a longer distance traveled.
To compare the performance of the QABV-1 planner against different K values, we can look at the average performance of the method on the test maps. In Figure 12, the average performances of map exploration and distance traveled are presented. Map accuracy is the stop criterion, which is why it is the same for all test cases. As can be seen from the figure, QABV-1 performs better with K = 1.2 because the drone travels less while having better map exploration on average compared to K = 1.3 and K = 1.4.
To obtain more insight into the QABV-1 planner, we can look at the test result in Figure 13 when K = 1.2 and the stop criterion is 75% map accuracy. The resultant elevation map is shown in Figure 13a. Notice that QABV-1 updates λ during map exploration, unlike NBV. Although the drone stays in a low-gain area during iterations between 70 and 100 (map statistics remain stable in Figure 13b and a tangle occurs in the followed path in Figure 13c), QABV-1 decreases λ to escape from the low-gain area and reach high-gain areas at a longer distance. After that, QABV-1 starts to increase λ to reduce the distance traveled. After iteration 70, λ is decreased according to the decreasing g_best value and is increased again after iteration 100 with the increasing g_best value (see Figure 13d).

4.3. QABV-2 Planner Results

In the QABV-2 planner tests, we set the moving average filter size as w = 5, λ_init = 0.5, and K_P = −0.0001 and K_D = 0.001 to scale with the magnitudes of E and Δḡ_best in (35). Note that we set K_P < 0 as there is an inverse relation between λ and g_best. We examined the performance for R ∈ {200, 300}. Table 6 and Table 7 show the averaged test results for the test maps.
To compare the performance of the QABV-2 planner against different R values, we can look at the average performance of the method on the test maps. In Figure 14, the average performances for map exploration and distance traveled are shown. As can be seen from the figure, QABV-2 performs better with R = 300 because the drone travels less while having more map exploration on average compared to R = 200.
To obtain more insight into the QABV-2 planner, we can look at the test result in Figure 15 when R = 300 and the stop criterion is 75% map accuracy. The resultant elevation map from the top view is shown in Figure 15a. Notice that QABV-2 updates λ more smoothly by using a PD controller than QABV-1 does in Figure 13. On the other hand, QABV-2 waits longer than QABV-1 before updating λ, until enough information is gathered, because of the moving average filter defined in (36) in Section 3.2.2; QABV-1 only waits for the calculation of Δg_best and starts updating λ in the third iteration.

4.4. QABV-3 Planner Results

In the QABV-3 planner tests, we used the same values as the QABV-2 planner tests. Table 8 and Table 9 show the averaged test results for each test map and each R value.
To compare the performance of the QABV-3 planner against different R values, we can look at the average performance of the method on the test maps. In Figure 16, the average performances for map exploration and distance traveled are shown. As can be seen from the figure, QABV-3 performs better with R = 300 because the drone travels less while having more map exploration on average compared to R = 200.
To obtain more insight into the QABV-3 planner, we can look at the test result in Figure 17 when R = 300 and the stop criterion is 75% map accuracy. The resultant elevation map from the top view is shown in Figure 17a. Notice that QABV-3 allows for g b e s t > R with its switching controller compared to QABV-2 in Figure 15.

4.5. QABV-4 Planner Results

In the QABV-4 tests, we set R m a x = 1000 and examined the performance for R m i n { 200 , 300 } and τ { 20 , 30 , 40 } . The remaining parameters are set to the same values as QABV-3. Table 10 and Table 11 show the results for R m i n = 200 and R m i n = 300 , respectively.
To compare the performance of the QABV-4 planner with different R m i n and τ settings, we can examine the average performances presented in Figure 18. As can be seen, QABV-4 performs better with R m i n = 300 and τ = 30 since the drone travels less while having more map exploration on average than other combinations.
To obtain more insight into the QABV-4 planner, we can look at the test result in Figure 19 when R_min = 300, τ = 30, and the stop criterion is 75% map accuracy. The resultant elevation map from the top view is shown in Figure 19a. In Figure 19b, notice that map exploration and accuracy remain stable at iterations between 70 and 100 because, in Figure 19c, we can see that the planned path is overlapping. In Figure 19d, notice that λ is decreased at the same iterations to keep g_best at R. Also notice that λ is not increased in the first iterations as it is in QABV-2 in Figure 15d, because R is set to R_max in the first iteration according to (39) in Section 3.2.4, and λ is decreased to elevate g_best. In the QABV-2 test, on the other hand, R is set to 300, and λ is increased to reduce g_best.

4.6. Overall Performance Comparison

To make an overall performance comparison, we tabulate the baseline NBV planner results against the best-performing QABV planner configurations, which are:
  • QABV-1 when K = 1.2;
  • QABV-2 when R = 300, K_P = −0.0001, K_D = 0.001, w = 5;
  • QABV-3 when R = 300, K_P = −0.0001, K_D = 0.001, w = 5;
  • QABV-4 when R_max = 1000, R_min = 300, τ = 30, K_P = −0.0001, K_D = 0.001, w = 5.
As tabulated in Table 12, when the test stop criterion is 75% map accuracy, the QABV planner significantly reduces the distance traveled compared to the NBV planner:
  • 20.28% less distance traveled using QABV-1;
  • 30.19% less distance traveled using QABV-2;
  • 31% less distance traveled using QABV-3;
  • 31.02% less distance traveled using QABV-4;
while having negligible differences in map exploration and accuracy.
From the results presented in Table 13, when the test stop criterion is 80% map accuracy, the QABV planner continues to reduce the distance traveled although 7.8% more map accuracy is required compared to the NBV planner:
  • 5% less distance traveled using QABV-1;
  • 3.72% less distance traveled using QABV-2;
  • 15.69% less distance traveled using QABV-3;
  • 10.76% less distance traveled using QABV-4;
while having negligible differences in map exploration; however, the distance traveled increases when the stop criterion is 85% map accuracy.
Lastly, to compare the performance of the dynamic λ update methods, Figure 20 visualizes the test results in Table 12 and Table 13 for average map exploration and distance traveled. Referring to the figure, the average map exploration is around 96% for each method, and QABV-4 performs slightly better than the others; however, the average distance traveled differs between the methods, and QABV-3 performs significantly better than the others with 68.82 m. Although the map exploration performance of QABV-4 is slightly better, the performance of QABV-3 in reducing the distance traveled makes it the first choice for implementation. On the other hand, QABV-1 is the least performant method but also the easiest to implement.

5. A Real-World Scenario

To see the performance of the proposed planner in a real-world scenario, such as delivering a package to a residential area, the map in Figure 21 is generated. In the scenario, the drone flies at a 5 m constant height on the XY plane. Thus, the down-looking camera of the drone can see a broader area than before since the flight altitude is increased. Additionally, the distance between adjacent nodes in the RRT is increased in response to the broader visible area of the camera.
Table 14 shows the test results when the stop criterion is 95% map exploration for the NBV planner and 75% map accuracy for the QABV planners. The tests are repeated for each dynamic λ update method with the same parameters as in Section 4.6. Referring to Table 14, there is a noticeable increase in the average distance traveled for two main reasons. First, the map area is doubled; second, large objects occlude the camera view. More specifically, the houses in the environment block the visibility of the area behind them since, because of their height, they are closer to the camera.
When the planners calculate the gain of the RRT nodes, the nodes near the houses obtain low gains because of this occlusion. That is why, in Figure 22, the drone prefers to travel in the non-occluded area until the information there is depleted: in Figure 22a, the middle of the map is marked with blue cells, indicating that the garden is lower than the surrounding objects, and the tangles in the followed path in Figure 22c show that the drone stays in this region longer. However, when the drone finds itself in a low-information-gain area, the dynamic cost function decreases λ and the QABV planner selects the nodes near the houses as the best viewpoints.
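Conceptually, each RRT node is scored by the new information its viewpoint would contribute (unmapped and inaccurate cells that are actually visible, i.e., not occluded by the houses), discounted by a λ-weighted path cost measured in nodes. The exact gain function is defined in the methodology; the sketch below is a simplified illustration under that reading, with the exponential discount borrowed from receding-horizon NBV planning.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    visible_unmapped: int    # v_ucc: unmapped cells visible from this viewpoint
    visible_inaccurate: int  # v_icc: inaccurate cells visible from this viewpoint
    path_nodes: int          # path cost counted in RRT nodes from the current pose

def node_score(node: Node, lam: float) -> float:
    """Simplified viewpoint score: information contribution discounted by an
    exponential penalty on the node-count path cost, weighted by lambda."""
    info = node.visible_unmapped + node.visible_inaccurate
    return info * math.exp(-lam * node.path_nodes)

# Nearby open-area node with almost no new information vs. a node near a house
# that still sees unexplored cells but is several nodes away.
nearby_depleted = Node(visible_unmapped=2, visible_inaccurate=6, path_nodes=2)
near_house = Node(visible_unmapped=20, visible_inaccurate=15, path_nodes=9)

for lam in (0.5, 0.05):
    best = max((nearby_depleted, near_house), key=lambda n: node_score(n, lam))
    label = "near house" if best is near_house else "nearby (depleted)"
    print(f"lambda = {lam}: best viewpoint is {label}")
# With a large lambda the cheap nearby node wins; once lambda is decreased in a
# low-information region, the planner accepts the longer path toward the houses.
```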

6. Conclusions and Future Work

In this study, we proposed a new terrain mapping and exploration system to be used for autonomous drone landings. For mapping, we presented the probabilistic sparse elevation map to enable the representation of sensor measurement error. For exploration, we presented the QABV planner for autonomous drone navigation with a dual focus: map exploration and quality. We showed that by exploiting the probabilistic nature of the proposed map with the novel information gain function, the QABV planner increases the measurement accuracy while exploring the map.
To reduce the distance traveled, we showed that using the node count as the path cost is more effective in our scenario than the sum of distances between adjacent nodes used by the NBV planner. Furthermore, we presented four control methods to dynamically adjust the path cost. We showed that the QABV planner can be treated as a system to be controlled via λ, such that shorter paths are generated when the drone is in an information-rich area, while longer paths are allowed to escape an information-poor area and avoid getting stuck. In simulation studies, we tested our proposals against the NBV planner and verified their usefulness. In particular, QABV-3 and QABV-4 achieved superior results regarding the distance traveled and map quality. Although QABV-4 has slightly better map exploration performance, the distance savings of QABV-3 make it the first option for implementation.
One limitation of the proposed system is that the drone flies at a constant altitude while the probabilistic sparse elevation map is generated, which may limit the accuracy and resolution of the map, especially over terrain with varying elevation. Another limitation is the hyperparameters of the λ update methods, such as K, R, and τ, which must be determined heuristically. Our analysis in the test results shows that the performance of the methods is sensitive to these parameters, and better-tuned values can yield better performance.
As future work, we will address these limitations by exploring adaptive control methods for generating elevation maps and by automating the determination of the hyperparameters. We also plan to deploy the proposed approach on a physical platform to examine how it performs in the presence of real-world uncertainties.

Author Contributions

Conceptualization, O.S. and T.K.; methodology, O.S. and T.K.; software, O.S.; investigation, O.S.; data curation, O.S.; writing—original draft preparation, O.S.; writing—review and editing, O.S. and T.K.; visualization, O.S.; supervision, T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Probabilistic sparse elevation map generation and the QABV planner, marked in blue, are the main components of the proposed autonomous navigation approach for map exploration and accuracy toward drone landing. The flight controller and the vision system are used by the proposed approach during the map generation process.
Figure 2. Illustration of the position estimation approach with SVO through a single down-looking camera.
Figure 3. Update of a map cell with a point from the SVO point cloud. By analyzing consecutive images from a single down-looking camera, the SVO algorithm produces a probabilistic measurement of a point p = [x, y, z] represented as a normal distribution N(μ, σ²). Cell(i, j) is updated with z and σ² after matching p in the 2D x-y map M.
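The cell update described in Figure 3 is a Bayesian fusion of Gaussian estimates. A minimal sketch, assuming each cell stores a height mean and variance and each SVO point arrives as an independent Gaussian measurement (z, σ²); the paper's exact formulation is given in the mapping section.

```python
def update_cell(cell_mean: float, cell_var: float,
                z: float, var_z: float) -> tuple:
    """Sketch of a Bayes update fusing a new height measurement z ~ N(z, var_z)
    into a map cell that stores N(cell_mean, cell_var).

    Standard product-of-Gaussians update: the fused variance is smaller than
    either input, so repeated measurements steadily raise the cell's accuracy.
    """
    fused_var = (cell_var * var_z) / (cell_var + var_z)
    fused_mean = (cell_mean * var_z + z * cell_var) / (cell_var + var_z)
    return fused_mean, fused_var

# Example: a cell at 0.40 m with high uncertainty refined by a 0.52 m measurement.
print(update_cell(cell_mean=0.40, cell_var=0.09, z=0.52, var_z=0.04))
```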
Figure 4. An example of constructed 3D sparse elevation map with SVO point cloud data. Here, the ground area is 12 m × 18.6 m and it consists of 10 cm × 10 cm map cells. Color codes indicate the status of the cells; green indicates inaccurate cells, and color tones between blue and red indicate cell height: blue for low, and red for high height. Void sections indicate unmapped cells.
Figure 5. Illustration of a geometric tree based on RRT.
Figure 6. Illustration of v_ucc and v_icc in the visible area: (a) drone position and camera viewpoint in the simulation environment; and (b) corresponding segment of M according to the drone position. Unmapped cells are represented as white cells. Inaccurate cells are represented as green cells.
Figure 7. The proposed λ updating methods for the QABV planner: (a) Method 1: on–off controller, (b) Method 2: PD controller, (c) Method 3: switching controller, and (d) Method 4: 2 DOF PD controller.
Figure 8. R against M_Acc with different decay constants.
Figure 9. The test drone flies at a 2 m altitude in the simulation environment on the left side. On the right side, a sample picture captured by the down-looking camera is shown. The camera can see a 4 m × 6.2 m ground area at a 2 m altitude. Green points in the picture are corner features that are measured by SVO for position estimation and point cloud data.
Figure 10. Test maps in Gazebo simulation environment: (a) map 1; (b) map 2; and (c) map 3. In the maps, cubes with side lengths of 1 m and 0.5 m are put on a flat surface with different positions, and each map is 18.6 m × 12 m in size.
Figure 11. A detailed result for the NBV planner: (a) generated sparse elevation map (top view); (b) map exploration and map accuracy; (c) the planned path; and (d) g_best and λ.
Figure 12. Analyzing test results for different hyperparameters for QABV-1 planner. Average map exploration and distance traveled from 75%, 80%, and 85% map accuracy test results are shown.
Figure 13. A detailed result of the QABV-1 planner: (a) generated sparse elevation map (top view); (b) map exploration and map accuracy; (c) the planned path when K = 1.2; and (d) g_best and λ.
Figure 14. Analyzing test results for different hyperparameters for QABV-2 planner. Average map exploration and distance traveled from 75%, 80%, and 85% map accuracy test results are shown.
Figure 15. A detailed result of the QABV-2 planner: (a) generated sparse elevation map (top view); (b) map exploration and map accuracy; (c) the planned path when R = 300; and (d) g_best and λ.
Figure 16. Analyzing test results for different hyperparameters for QABV-3 planner. Average map exploration and distance traveled from 75%, 80%, and 85% map accuracy test results are shown.
Figure 17. A detailed result of the QABV-3 planner: (a) generated sparse elevation map (top view); (b) map exploration and map accuracy; (c) the planned path when R = 300; and (d) g_best and λ.
Figure 18. Analyzing test results for different hyperparameters for QABV-4 planner. Average map exploration and distance traveled for 75%, 80%, and 85% map accuracy test results are shown.
Figure 19. A detailed result of the QABV-4 planner: (a) generated sparse elevation map (top view); (b) map exploration and map accuracy; (c) the planned path when R_min = 300, τ = 30; and (d) g_best and λ.
Figure 20. Analyzing test results for different QABV planner dynamic λ update methods. Average map exploration and distance traveled for 75%, 80%, and 85% map accuracy test results are shown.
Figure 21. Test map for real-world scenario tests. The map is 37.2 m × 24 m in size, and it represents the backyard of the houses.
Figure 22. A detailed result of the QABV-3 planner in a real-world scenario: (a) generated sparse elevation map (top view); (b) map exploration and map accuracy; (c) the planned path when R = 300, K_P = 0.0001, K_D = 0.001, w = 5; and (d) g_best and λ.
Table 1. Specifications of the simulation environment.
Drone Specifications
Motor-to-motor dimension | 550 mm
Height | 100 mm
Weight (with battery) | 1282 g
Propellers | (2) 10 × 4.7 normal-rotation, (2) 10 × 4.7 reverse-rotation
Motors | AC 2830, 850 kV
Camera Specifications
Image width | 752 pixels
Image height | 480 pixels
Image format | Black and white
Horizontal FOV | 115 degrees
Update rate | 20 Hz
Simulation Environment Specifications for Rendering
Near clip plane | 0.05
Far clip plane | 15,000
Table 2. Experiment results for the NBV planner.
Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Exploration (%) | 95.00 | 95.00 | 95.00 | 95.00
Map Accuracy (%) | 72.00 | 73.00 | 77.70 | 74.20
Distance Traveled (m) | 82.60 | 73.50 | 81.76 | 79.29
Table 3. QABV-1 planner test results for K = 1.2.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 96.60 | 93.60 | 93.25 | 94.48
Map Accuracy 75% | Distance Traveled (m) | 68.03 | 64.45 | 57.15 | 63.21
Map Accuracy 80% | Map Exploration (%) | 96.80 | 96.80 | 95.25 | 96.28
Map Accuracy 80% | Distance Traveled (m) | 87.00 | 71.94 | 67.06 | 75.33
Map Accuracy 85% | Map Exploration (%) | 98.50 | 97.75 | 96.50 | 97.58
Map Accuracy 85% | Distance Traveled (m) | 101.48 | 83.85 | 73.35 | 86.23
Table 4. QABV-1 planner test results for K = 1.3.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 95.00 | 93.33 | 91.66 | 93.33
Map Accuracy 75% | Distance Traveled (m) | 63.59 | 68.13 | 68.28 | 66.67
Map Accuracy 80% | Map Exploration (%) | 97.20 | 97.00 | 95.50 | 96.57
Map Accuracy 80% | Distance Traveled (m) | 81.55 | 73.15 | 73.00 | 75.90
Map Accuracy 85% | Map Exploration (%) | 98.60 | 98.00 | 97.66 | 98.09
Map Accuracy 85% | Distance Traveled (m) | 91.50 | 94.37 | 77.15 | 87.67
Table 5. QABV-1 planner test results for K = 1.4.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 95.00 | 94.80 | 89.00 | 92.93
Map Accuracy 75% | Distance Traveled (m) | 82.80 | 59.21 | 59.69 | 67.23
Map Accuracy 80% | Map Exploration (%) | 97.20 | 95.00 | 95.33 | 95.84
Map Accuracy 80% | Distance Traveled (m) | 95.24 | 74.00 | 69.70 | 79.65
Map Accuracy 85% | Map Exploration (%) | 98.60 | 97.25 | 97.00 | 97.62
Map Accuracy 85% | Distance Traveled (m) | 119.55 | 85.29 | 87.82 | 97.55
Table 6. QABV-2 planner test results for R = 200.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 95.67 | 93.33 | 91.66 | 93.55
Map Accuracy 75% | Distance Traveled (m) | 62.81 | 65.27 | 59.12 | 62.40
Map Accuracy 80% | Map Exploration (%) | 97.33 | 94.66 | 94.66 | 95.55
Map Accuracy 80% | Distance Traveled (m) | 75.56 | 79.75 | 71.65 | 75.65
Map Accuracy 85% | Map Exploration (%) | 98.00 | 98.66 | 96.00 | 97.55
Map Accuracy 85% | Distance Traveled (m) | 102.50 | 84.18 | 83.08 | 89.92
Table 7. QABV-2 planner test results for R = 300.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 95.67 | 94.33 | 93.67 | 94.56
Map Accuracy 75% | Distance Traveled (m) | 57.45 | 56.00 | 52.61 | 55.35
Map Accuracy 80% | Map Exploration (%) | 97.00 | 96.50 | 95.00 | 96.17
Map Accuracy 80% | Distance Traveled (m) | 73.79 | 77.13 | 78.10 | 76.34
Map Accuracy 85% | Map Exploration (%) | 97.67 | 98.00 | 96.67 | 97.45
Map Accuracy 85% | Distance Traveled (m) | 93.18 | 80.66 | 91.55 | 88.46
Table 8. QABV-3 planner test results for R = 200.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 94.00 | 94.66 | 93.00 | 93.89
Map Accuracy 75% | Distance Traveled (m) | 68.41 | 54.86 | 53.62 | 58.96
Map Accuracy 80% | Map Exploration (%) | 97.33 | 96.00 | 95.33 | 96.22
Map Accuracy 80% | Distance Traveled (m) | 75.18 | 66.16 | 63.27 | 68.20
Map Accuracy 85% | Map Exploration (%) | 97.33 | 97.33 | 98.00 | 97.55
Map Accuracy 85% | Distance Traveled (m) | 89.76 | 95.88 | 89.62 | 91.75
Table 9. QABV-3 planner test results for R = 300.
Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
Map Accuracy 75% | Map Exploration (%) | 96.00 | 95.00 | 94.00 | 95.00
Map Accuracy 75% | Distance Traveled (m) | 55.30 | 52.12 | 56.70 | 54.71
Map Accuracy 80% | Map Exploration (%) | 97.00 | 96.33 | 94.33 | 95.89
Map Accuracy 80% | Distance Traveled (m) | 66.46 | 62.78 | 71.31 | 66.85
Map Accuracy 85% | Map Exploration (%) | 97.66 | 96.66 | 96.00 | 96.77
Map Accuracy 85% | Distance Traveled (m) | 77.90 | 87.43 | 89.42 | 84.92
Table 10. QABV-4 planner test results for R_min = 200.
Decay Constant | Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
τ = 20 | Map Accuracy 75% | Map Exploration (%) | 95.00 | 94.00 | 95.00 | 94.67
τ = 20 | Map Accuracy 75% | Distance Traveled (m) | 71.99 | 64.82 | 62.52 | 66.44
τ = 20 | Map Accuracy 80% | Map Exploration (%) | 96.60 | 96.00 | 96.00 | 96.20
τ = 20 | Map Accuracy 80% | Distance Traveled (m) | 73.53 | 72.56 | 71.67 | 72.59
τ = 20 | Map Accuracy 85% | Map Exploration (%) | 98.00 | 98.60 | 97.60 | 98.07
τ = 20 | Map Accuracy 85% | Distance Traveled (m) | 95.78 | 88.61 | 86.86 | 90.42
τ = 30 | Map Accuracy 75% | Map Exploration (%) | 93.66 | 93.00 | 95.33 | 94.00
τ = 30 | Map Accuracy 75% | Distance Traveled (m) | 67.87 | 64.78 | 62.52 | 65.06
τ = 30 | Map Accuracy 80% | Map Exploration (%) | 96.00 | 93.66 | 96.33 | 95.33
τ = 30 | Map Accuracy 80% | Distance Traveled (m) | 92.26 | 88.13 | 71.67 | 84.02
τ = 30 | Map Accuracy 85% | Map Exploration (%) | 98.33 | 98.33 | 97.66 | 98.11
τ = 30 | Map Accuracy 85% | Distance Traveled (m) | 96.18 | 100.10 | 86.86 | 94.38
τ = 40 | Map Accuracy 75% | Map Exploration (%) | 95.66 | 93.33 | 91.33 | 93.44
τ = 40 | Map Accuracy 75% | Distance Traveled (m) | 69.37 | 60.07 | 64.12 | 64.52
τ = 40 | Map Accuracy 80% | Map Exploration (%) | 95.66 | 96.33 | 95.00 | 95.66
τ = 40 | Map Accuracy 80% | Distance Traveled (m) | 72.24 | 70.76 | 71.56 | 71.52
τ = 40 | Map Accuracy 85% | Map Exploration (%) | 97.33 | 97.66 | 95.66 | 96.88
τ = 40 | Map Accuracy 85% | Distance Traveled (m) | 94.74 | 86.67 | 85.76 | 89.06
Table 11. QABV-4 planner test results for R_min = 300.
Decay Constant | Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
τ = 20 | Map Accuracy 75% | Map Exploration (%) | 93.00 | 92.33 | 92.00 | 92.44
τ = 20 | Map Accuracy 75% | Distance Traveled (m) | 70.36 | 62.40 | 60.00 | 64.25
τ = 20 | Map Accuracy 80% | Map Exploration (%) | 96.00 | 96.33 | 93.66 | 95.33
τ = 20 | Map Accuracy 80% | Distance Traveled (m) | 80.94 | 78.72 | 69.45 | 76.37
τ = 20 | Map Accuracy 85% | Map Exploration (%) | 97.33 | 97.00 | 96.00 | 96.78
τ = 20 | Map Accuracy 85% | Distance Traveled (m) | 93.29 | 97.04 | 78.58 | 89.64
τ = 30 | Map Accuracy 75% | Map Exploration (%) | 96.33 | 95.66 | 94.00 | 95.33
τ = 30 | Map Accuracy 75% | Distance Traveled (m) | 50.39 | 56.61 | 57.08 | 54.69
τ = 30 | Map Accuracy 80% | Map Exploration (%) | 96.33 | 95.33 | 96.00 | 95.89
τ = 30 | Map Accuracy 80% | Distance Traveled (m) | 70.50 | 73.19 | 68.60 | 70.76
τ = 30 | Map Accuracy 85% | Map Exploration (%) | 98.00 | 98.20 | 97.00 | 97.73
τ = 30 | Map Accuracy 85% | Distance Traveled (m) | 107.60 | 92.61 | 76.35 | 92.19
τ = 40 | Map Accuracy 75% | Map Exploration (%) | 94.33 | 94.00 | 93.00 | 93.70
τ = 40 | Map Accuracy 75% | Distance Traveled (m) | 69.81 | 58.06 | 61.81 | 63.23
τ = 40 | Map Accuracy 80% | Map Exploration (%) | 96.00 | 95.00 | 95.00 | 95.33
τ = 40 | Map Accuracy 80% | Distance Traveled (m) | 78.95 | 78.71 | 69.51 | 75.72
τ = 40 | Map Accuracy 85% | Map Exploration (%) | 97.66 | 97.66 | 97.00 | 97.44
τ = 40 | Map Accuracy 85% | Distance Traveled (m) | 93.02 | 82.93 | 86.56 | 87.50
Table 12. Baseline NBV planner performance results against the best resulting QABV planners.
Planner | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
NBV planner | Map Accuracy (%) | 72.00 | 73.00 | 77.70 | 74.20
NBV planner | Map Exploration (%) | 95.00 | 95.00 | 95.00 | 95.00
NBV planner | Distance Traveled (m) | 82.60 | 73.50 | 81.76 | 79.29
QABV-1 planner | Map Accuracy (%) | 75.00 | 75.00 | 75.00 | 75.00
QABV-1 planner | Map Exploration (%) | 96.60 | 93.60 | 93.25 | 94.48
QABV-1 planner | Distance Traveled (m) | 68.03 | 64.45 | 57.15 | 63.21
QABV-2 planner | Map Accuracy (%) | 75.00 | 75.00 | 75.00 | 75.00
QABV-2 planner | Map Exploration (%) | 95.67 | 94.33 | 93.67 | 94.56
QABV-2 planner | Distance Traveled (m) | 57.45 | 56.00 | 52.61 | 55.35
QABV-3 planner | Map Accuracy (%) | 75.00 | 75.00 | 75.00 | 75.00
QABV-3 planner | Map Exploration (%) | 96.00 | 95.00 | 94.00 | 95.00
QABV-3 planner | Distance Traveled (m) | 55.30 | 52.12 | 56.70 | 54.71
QABV-4 planner | Map Accuracy (%) | 75.00 | 75.00 | 75.00 | 75.00
QABV-4 planner | Map Exploration (%) | 96.33 | 95.66 | 94.00 | 95.33
QABV-4 planner | Distance Traveled (m) | 50.39 | 56.61 | 57.08 | 54.69
Table 13. The best resulting QABV planners with stop criteria of 80% and 85% map accuracy.
Planner | Stop Criterion | Performance Criteria | Map 1 | Map 2 | Map 3 | Average
QABV-1 planner | Map Accuracy 80% | Map Exploration (%) | 96.80 | 96.80 | 95.25 | 96.28
QABV-1 planner | Map Accuracy 80% | Distance Traveled (m) | 87.00 | 71.94 | 67.06 | 75.33
QABV-1 planner | Map Accuracy 85% | Map Exploration (%) | 98.50 | 97.75 | 96.50 | 97.58
QABV-1 planner | Map Accuracy 85% | Distance Traveled (m) | 101.48 | 83.85 | 73.35 | 86.23
QABV-2 planner | Map Accuracy 80% | Map Exploration (%) | 97.00 | 96.50 | 95.00 | 96.17
QABV-2 planner | Map Accuracy 80% | Distance Traveled (m) | 73.79 | 77.13 | 78.10 | 76.34
QABV-2 planner | Map Accuracy 85% | Map Exploration (%) | 97.67 | 98.00 | 96.67 | 97.45
QABV-2 planner | Map Accuracy 85% | Distance Traveled (m) | 93.18 | 80.66 | 91.55 | 88.46
QABV-3 planner | Map Accuracy 80% | Map Exploration (%) | 97.00 | 96.33 | 94.33 | 95.89
QABV-3 planner | Map Accuracy 80% | Distance Traveled (m) | 66.46 | 62.78 | 71.31 | 66.85
QABV-3 planner | Map Accuracy 85% | Map Exploration (%) | 97.66 | 96.66 | 96.00 | 96.77
QABV-3 planner | Map Accuracy 85% | Distance Traveled (m) | 77.90 | 87.43 | 89.42 | 84.92
QABV-4 planner | Map Accuracy 80% | Map Exploration (%) | 96.33 | 95.33 | 96.00 | 95.89
QABV-4 planner | Map Accuracy 80% | Distance Traveled (m) | 70.50 | 73.19 | 68.60 | 70.76
QABV-4 planner | Map Accuracy 85% | Map Exploration (%) | 98.00 | 98.20 | 97.00 | 97.73
QABV-4 planner | Map Accuracy 85% | Distance Traveled (m) | 107.60 | 92.61 | 76.35 | 92.19
Table 14. Performance measures of NBV and QABV planners in the real-world scenario test.
Performance Criteria | NBV Planner | QABV-1 Planner | QABV-2 Planner | QABV-3 Planner | QABV-4 Planner
Map Exploration (%) | 95.00 | 95.00 | 95.50 | 95.50 | 95.50
Map Accuracy (%) | 74.00 | 75.00 | 75.00 | 75.00 | 75.00
Distance Traveled (m) | 276.34 | 216.41 | 205.90 | 200.88 | 203.29