Article

Trajectories Generation for Unmanned Aerial Vehicles Based on Obstacle Avoidance Located by a Visual Sensing System

by Luis Felipe Muñoz Mendoza 1, Guillermo García-Torales 1, Cuauhtémoc Acosta Lúa 2,3, Stefano Di Gennaro 3,4 and José Trinidad Guillen Bonilla 1,*
1 Departamento de Electro–Fotónica, Centro Universitario de Ciencias Exactas e Ingenierías (C.U.C.E.I.), Universidad de Guadalajara (U. de G.), Blvd. M. García Barragán 1421, Guadalajara 44410, Jalisco, Mexico
2 Departamento de Ciencias Tecnológicas, Centro Universitario de La Ciénega, Universidad de Guadalajara, Av. Universidad 1115, Ocotlán 47820, Jalisco, Mexico
3 Center of Excellence DEWS, University of L’Aquila, Via Vetoio, Loc. Coppito, 67100 L’Aquila, Italy
4 Department of Information Engineering, Computer Science and Mathematics, University of L’Aquila, Via Vetoio, Loc. Coppito, 67100 L’Aquila, Italy
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1413; https://doi.org/10.3390/math11061413
Submission received: 30 January 2023 / Revised: 8 March 2023 / Accepted: 10 March 2023 / Published: 15 March 2023

Abstract: In this work, vectorial trajectories for unmanned aerial vehicles are generated by a new algorithm named trajectory generation based on object avoidance (TGBOA), which uses a UAV-mounted camera as a visual sensor to define collision-free trajectories in scenarios with randomly distributed objects. The location information of the objects is collected by the visual sensor and processed in real time. This proposal has two advantages. First, the system improves efficiency by focusing the algorithm on object detection and the drone position, thus reducing computational complexity. Second, trajectory references are generated online and updated in real time. To define a collision-free trajectory and avoid a collision between the UAV and a detected object, a reference is generated and expressed through vector, symmetric, and parametric equations. Such vectors are used as references in a PI-like controller based on the Newton–Euler mathematical model. Experimentally, the TGBOA algorithm is corroborated in three experiments using the F-450 quadcopter, MATLAB® 2022a, the PI-like controller, and Wi-Fi communication. The TGBOA algorithm and the PI-like controller show their functionality, since the controller always follows the vector generated by the obstacle avoidance.

1. Introduction

The study of unmanned aerial vehicles (UAVs) has grown in recent decades due to their features and the wide field of applications found in agronomy, surveillance, and tracking, among others [1,2]. Research on new methods for autonomous navigation focuses mainly on three stages [3]: obtaining information about the environment using different sensors and methodologies [4,5]; processing the data to obtain a reference trajectory using a localization, mapping, or optimization algorithm [6,7]; and implementing flight controllers to keep the drone altitude and track the reference signal [8,9,10]. The first stage relies on a great variety of sensors such as cameras, light detection and ranging (LiDAR), or inertial measurement units (IMU). The second stage uses algorithms for object detection and collision avoidance, as well as route optimization to find the best trajectory. The third stage consists of developing the flight controller, which drives the UAV toward the desired position and attitude by means of a mathematical model and some control technique.
One way to obtain the reference is based on generating a vector whose origin is the position where the UAV starts its route and whose endpoint is the target to be reached, as shown in reference [11]: there, a vector is generated to move the UAV among separate trees by means of a photogrammetric technique. However, various factors influence the drone path, such as changes in the environment and collisions; therefore, it becomes necessary to implement methods for obstacle detection and avoidance while the UAV trajectory is updated [5]. For this purpose, two approaches are considered: manual and automatic. In the first, an operator sends a signal from a remote control to enable a flight mode [12,13]. In the second, a trajectory generator algorithm and a flight controller are combined [14,15]. The generator can be executed on the UAV or on a remote workstation.
To solve the trajectory generation problem, two main methods are considered: offline and online [16]. In the offline method, an algorithm analyzes a pre-loaded scenario, consisting of a static image or model of the place with constraints already known by the system before the UAV takes off, and calculates a trajectory for the controller. The environment must not change: if it does (if some object appears), the technique loses its functionality because a collision between the UAV and the object becomes very likely. On the other hand, the online method uses an algorithm based on a dynamic analysis of its sensors, detecting the objects' movement to generate a customized trajectory [17,18]. Both methods are applicable; however, each has its advantages and disadvantages. For example, the offline method is effective if the scenario is static because the UAV trajectory is calculated before its movement, but for a changing scenario this method should not be considered because a collision between the UAV and an object is very possible. If the scenario changes because of an object's movement or the UAV movement, online trajectory creation is recommended [19] because the UAV trajectory is updated based on the scenario analysis. The algorithm's effectiveness, however, depends on the complexity of the data and the execution time. The literature has proposed different methods and devices to counter these drawbacks. Some works report waypoints obtained with GPS, as in [20], where strategic route planning for a multi-UAV routing problem is shown, using a multi-step plan which includes automated definition of GNSS-based waypoints using a georeferenced three-dimensional domain model, derivation of obstacle-free candidate routes, assignment of waypoints, and definition of time-stamped trajectories for all UAVs. Their aim is to ensure that the designed trajectories are reliable given the available positioning accuracy, using propagated covariance as a metric. On the other hand, the authors of [21] improve a computationally fast trajectory generation algorithm using linear interpolation, which gives a fast calculation response; numerical simulation results are shown, and a nonlinear flight controller is also designed to track the generated trajectories. In addition, different works use artificial intelligence and optimization methods to obtain the best path in predetermined scenarios, as in [22], where a vision-based controller for a quadcopter is presented to enable tracking of a moving target by a neural network that extracts image features.
For online trajectory cases, the most complex situation occurs when both the UAV and the objects are moving at the same time. Here, the algorithm must develop dynamic trajectories at high speed according to the data analysis. To solve the problems mentioned above, several authors have proposed different methodologies. For example, in [23], the authors use a convolutional neural network for image recognition and path planning, implemented on an FPGA. In reference [24], the authors use the so-called simultaneous localization and mapping (SLAM) algorithm, which allows building or updating a map of an unknown environment while tracking the drone location. A route planner is developed that updates itself as it explores various areas of the surrounding environment. Such methods can be functional in unknown, but not dynamic, scenarios, and the processing load is heavy due to the constant updating of the map. In [19], the authors propose a direct visual servoing system for UAVs, using an onboard high-speed monocular camera with an integrated GPU. This technology is characterized by environment recognition through artificial vision and by sending signals directly to the controller, which reduces latency. However, the equipment used for this technique is expensive and requires prior knowledge or estimation of the objects in its spatial environment. In [25], a vision-guided autonomous flight system for quadrotor navigation is presented. Using a real-time obstacle avoidance technique based on optical flow, frontal obstacles can be detected to guide the quadrotor in a proper direction. Nevertheless, real-time optical flow is difficult to achieve on low-cost hardware. On the other hand, the generated trajectories are implemented by flight controllers that allow the drone to move through space by manipulating the speed of its rotors. The flight control system is a fundamental aspect of the quadrotor, making it an important platform for UAV research and development, as in [25], where different control techniques are reported. Furthermore, Newton–Euler, Euler–Lagrange, or quaternion formulations [26,27,28,29,30,31] are examples of mathematical models applied in controllers whose aim is to follow the reference, reducing the tracking error by adjusting the UAV posture and its rotational and translational movement.
Finally, Table 1 shows a comparison between some of the related works mentioned above.
In this article, a solution with low computational resources is presented for obstacle detection and avoidance, trajectory generation, and autonomous drone navigation using only a camera and without specialized high-performance hardware. It focuses on implementing a novel algorithm called trajectory generation based on object avoidance (TGBOA), which performs a real-time analysis of video frames taken from the camera placed on the UAV and transmitted to a workstation. Furthermore, a PI-like controller is developed and simulated, using as a reference the vector generated by the TGBOA algorithm. The Newton–Euler mathematical model is used by the controller to represent the rigid-body equations of the drone.

2. Materials and Methods

2.1. Materials

Figure 1 shows the proposed system for trajectory generation based on video analysis and the evasion of localized objects through a vision system. The required material consists of a model OV5647 camera (Raspberry PI REES52, Taiwan) that works as a sensor together with a Raspberry Pi 4B (Raspberry Pi Foundation, Maurice Wilkes Building, St. John’s Innovation Park, Cambridge, UK) responsible for transmitting the video, a model F-450 quadcopter (INVENTO, India) (shown in Figure 2) with a flight controller card (Pixhawk 3, Bombay Electronics, Mumbai, India) installed, and a laptop as a workstation. The camera’s technical features are an M12 × 0.5 lens, a 5-megapixel sensor, a resolution of 640 × 480 pixels, 90 frames/s, and a size of 2.5 cm × 2.5 cm × 2.6 cm. The Raspberry Pi features a 1.5 GHz quad-core 64-bit ARM Cortex processor, 2 GB of LPDDR4 SDRAM, and dual-band 802.11ac wireless networking. The drone has four brushless direct current (BLDC) motors and a 3DR-PIXHAWK controller card. The laptop features 8 GB of RAM and an Intel i7 CPU. Finally, the software used is MATLAB® 2022a running on the Windows 11 operating system.
The drone camera observes the frontal area toward which the UAV moves and generates a video during the flight. The video is transmitted in real time to the workstation via Wi-Fi using the Raspberry Pi, and the transmission rate depends on the Wi-Fi signal bandwidth [32] and the workstation adapter. In this case, the connection is a 2.4 GHz wireless link with a usable speed of 50–70 Mbps. Subsequently, the workstation analyzes the video frames with the following objectives: object detection on the stage, collision-free zone calculation, and generation of collision-free trajectories for the flight controller.
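The streaming setup itself is not detailed further in the paper; as an illustration only, a minimal sketch of how the workstation could grab frames from the transmitted video with OpenCV is shown below (the stream URL, port, and protocol are hypothetical placeholders, not the actual configuration used by the authors).

```python
import cv2

# Hypothetical address of the video stream served by the Raspberry Pi over the
# 2.4 GHz Wi-Fi link; the real URL, port, and protocol depend on the streaming setup.
STREAM_URL = "http://192.168.1.50:8080/video"

def grab_frame(url: str = STREAM_URL):
    """Open the UAV video stream and return one 640 x 480 BGR frame (or None)."""
    cap = cv2.VideoCapture(url)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

if __name__ == "__main__":
    frame = grab_frame()
    if frame is not None:
        print("Frame received:", frame.shape)  # expected (480, 640, 3)
```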

2.2. Object Detection and Assignation of Collision-Free Zones

The generated video is 90 frames/s with 640 × 480 pixels per frame and is sent from the UAV to the workstation. For the analysis, a frame is taken from the video, obtaining a digital RGB image as shown in Figure 3a. The image is represented through the discrete function $f(m,n)$, where $m = 1, 2, \ldots, M$ and $n = 1, 2, \ldots, N$ are the coordinates of the image pixels. Subsequently, the RGB image is transformed into an 8-bit gray-scale image and converted by the Otsu algorithm into a binary image [33]. In the binary image, the detected objects are white and the background is marked in black, as illustrated in Figure 3b. Finally, the size, position, and centroid are obtained for each object.
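As an illustration of this processing chain (a minimal sketch, not the authors' code), the gray-scale conversion, Otsu binarization, and extraction of each object's size, position, and centroid could be written with OpenCV as follows:

```python
import cv2
import numpy as np

def detect_objects(frame_bgr: np.ndarray):
    """Binarize a video frame with Otsu's method and return per-object statistics."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # 8-bit gray-scale image
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Depending on the scene, the binary image may need to be inverted so that the
    # detected objects appear white and the background black, as in Figure 3b.
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    objects = []
    for i in range(1, n_labels):  # label 0 is the background
        x, y, w, h, area = stats[i]
        objects.append({"position": (int(x), int(y)), "size": (int(w), int(h)),
                        "area": int(area), "centroid": tuple(centroids[i])})
    return binary, objects
```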
The next step is calculating the collision-free regions, so the image $f(m,n)$ is divided into thirteen regions $R_k$ $(k = 1, 2, \ldots, 13)$. The first region $R_1$ is assigned according to the UAV size and is located in the image center, while the image resolution limits the remaining twelve zones, as observed in Figure 3c. If objects in the scenario lie in one or more regions, then a correlation area $A$ between the detected objects and the region exists, and the following condition holds:

$$R_k = \begin{cases} R_{k,\alpha} & \text{iff } A = 0 \\ R_{k,\beta} & \text{iff } A \neq 0, \end{cases} \tag{1}$$

where $R_{k,\alpha}$ indicates the $\alpha$-th collision-free region and $R_{k,\beta}$ indicates the $\beta$-th region with possible collision. Based on Equation (1), a region $R_{k,\alpha}$ is collision-free if, and only if, the correlation area $A$ is equal to 0. On the other hand, a region $R_{k,\beta}$ has a possible collision if, and only if, the correlation area $A$ is different from zero. To illustrate the above, Figure 3c is considered, where the objects are in the regions $R_4$, $R_5$, and $R_9$–$R_{13}$ (marked in red), and consequently, the collision-free regions are $R_1$–$R_3$ and $R_6$–$R_8$ (marked in blue).
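A short sketch of how condition (1) could be evaluated in practice is given below, assuming each of the thirteen regions is stored as a rectangle in pixel coordinates (the single rectangle shown for $R_1$ is only illustrative; the actual layout is the one of Figure 3c):

```python
import numpy as np

def classify_regions(binary: np.ndarray, regions: dict) -> dict:
    """Condition (1): a region R_k is collision-free iff its overlap area A with
    the detected (white) objects is zero.  `regions` maps k -> (x, y, w, h)."""
    status = {}
    for k, (x, y, w, h) in regions.items():
        A = int(np.count_nonzero(binary[y:y + h, x:x + w]))  # correlation area A
        status[k] = "collision-free" if A == 0 else "possible collision"
    return status

# Illustrative layout: R_1 centered in the 640 x 480 image (sized to the UAV),
# with the remaining twelve rectangles tiling the rest of the frame.
regions = {1: (270, 190, 100, 100)}
```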

2.3. Trajectory Generation

Once the collision-free regions have been defined, it is possible to generate flight trajectories for the UAV controller to avoid a collision and improve the success of missions.
Figure 4 shows the geometry for generating the controller trajectories. Based on that figure, the origin is the drone position with coordinates $P_c(0, 0, 0)$, and the endpoint of the trajectory is the center of a collision-free region with coordinates $C_k(x_k, y_k, z_k)$: $x_k$ is the distance from the UAV to the scenario, $y_k$ represents the lateral length, and $z_k$ is the altitude. To generate the trajectory, the vector equation is used:

$$\vec{f}_k = P_c + t_i\,\vec{v}_k, \tag{2}$$

where $P_c$ is the UAV position vector, $\vec{v}_k = \overrightarrow{P_c C_k}$ is the vector calculated from the UAV to the center of the $k$-th region $R_k$, $\vec{f}_k$ is the collision-free trajectory, and $t_i$, $i = 1, 2, \ldots$, are instants of time. From here, the problem goes from 2D to 3D, as illustrated in Figure 4.
Taking into consideration the geometry of the problem, Equation (2) can be rewritten as follows:
$$\vec{f}_k = (0,\, 0,\, 0) + t\,(x_k,\, y_k,\, z_k). \tag{3}$$
From Equation (3), the following parametric equations are obtained,
$$x = x_k t, \qquad y = y_k t, \qquad z = z_k t. \tag{4}$$
Finally, the vector equation is represented in the form of symmetric equations,
$$\frac{x}{x_k} = \frac{y}{y_k} = \frac{z}{z_k}. \tag{5}$$
Equations (3)–(5) can be applied as references in flight controllers for the UAV. Figure 4 shows the graphical representation of Equations (3)–(5) (possible trajectory vectors) $\vec{f}_k$ $(k = 1, 2, 3, 6\text{–}8)$, where the free regions are $R_k$ $(k = 1, 2, 3, 6\text{–}8)$ and objects are detected in the regions $R_k$ $(k = 4, 5, 9\text{–}13)$. Notice that the vectors $\vec{f}_k$ $(k = 1, 2, 3, 6\text{–}8)$ are collision-free trajectories, and the drone does not collide thanks to the object evasion.
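To make Equations (2)–(5) concrete, the following sketch builds the trajectory vector and samples its parametric form from the UAV position $P_c$ and a free-region center $C_k$ (the numerical values are only an example):

```python
import numpy as np

def trajectory(Ck, Pc=(0.0, 0.0, 0.0), samples=5):
    """Return v_k and sampled points of f_k = P_c + t * v_k, t in [0, 1] (Eqs. (2)-(4))."""
    Pc, Ck = np.asarray(Pc, float), np.asarray(Ck, float)
    v = Ck - Pc                            # vector v_k from the UAV to the region center
    t = np.linspace(0.0, 1.0, samples)     # instants of time (normalized)
    pts = Pc + t[:, None] * v              # parametric points (x, y, z) = (x_k t, y_k t, z_k t)
    return v, pts

# Example: a free-region center 10 m ahead, 2.4 m to the side, and 0.613 m up.
v, pts = trajectory(Ck=(10.0, 2.4, 0.613))
# The symmetric form x/x_k = y/y_k = z/z_k of Eq. (5) holds for every sampled point:
ratios = pts[1:] / v
assert np.allclose(ratios, ratios[:, :1])
```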

2.4. Controller Design

In this work, a controller is designed to verify the functionality of the trajectories generated in Section 2.3. To achieve this, the Newton–Euler mathematical model is used, which is presented below.

2.4.1. Mathematical Model

The UAV considered in this work is the quadcopter shown in Figure 5, which consists of a rigid body with four rotors. The movement of the drone is generated from the interaction of the forces produced by each motor, which are represented as $F_i = b\,\omega_{p,i}^2$, where $b$ is the thrust coefficient calculated from the physical and dynamic characteristics of the propellers, while $\omega_{p,i}$, $i = 1, 2, 3, 4$, are the angular velocities of each motor. Propellers 1 and 3 rotate clockwise, and propellers 2 and 4 rotate counterclockwise.
For the mathematical model, two frames are considered: the one that remains fixed with respect to the Earth, denoted $E_E(O_E, \epsilon_{E1}, \epsilon_{E2}, \epsilon_{E3})$, and the frame of reference of the drone located at its center of mass, $E_D(O_D, \epsilon_{D1}, \epsilon_{D2}, \epsilon_{D3})$. The absolute position of the UAV in terms of $E_E$ is expressed by $\rho = (x, y, z)^T$, and its attitude is described by the Euler angles $\Theta = (\phi, \theta, \psi)^T$: roll ($\phi$) rotates about the x-axis, pitch ($\theta$) about the y-axis, and the z-axis rotation is called yaw ($\psi$). All angles lie in the range $(-\pi/2, \pi/2)$. This gives the drone six degrees of freedom in the frame $E_D$: three translational, $v_D = (v_x, v_y, v_z)^T$, and three rotational, $\Omega = (\omega_1, \omega_2, \omega_3)^T$, which are the linear and angular velocities, respectively.
To continue with the Newton–Euler model, the dynamics of the rigid body are obtained from the forces applied to the center of mass of the quadcopter, and the equations of motion can be described by means of Figure 5 and the translational dynamics, expressed in the following system of equations:

$$\begin{aligned} \dot{\rho} &= v_D \\ \dot{v}_D &= \frac{1}{m}\, R_{DE}\, F_{prop} - \frac{1}{m}\, F_{grav} \\ \dot{\Theta} &= W_{\Theta}\,\Omega \\ J\,\dot{\Omega} &= -S_\omega\, J\,\Omega + \tau_{prop} - \tau_{gyro} \end{aligned} \tag{6}$$

where $m$ is the quadcopter mass, $J = \mathrm{diag}(J_x, J_y, J_z)$ is the inertia matrix of the quadcopter expressed in $E_D$, and $F_{prop}$ and $\tau_{prop}$ are the forces and moments produced by the propellers. Moreover, $F_{grav}$ is the force applied to the body of the quadcopter by its own weight, and $\tau_{gyro}$ collects the gyroscopic effects resulting from the propeller rotations. The matrix $W_{\Theta}$, relating the angular velocity to the Euler angle rates, is given below in Equation (9). The forces and torques due to external disturbances are assumed here to be negligible.
In Equation (6), $S_\omega$ is the antisymmetric matrix defined as

$$S_\omega = \begin{pmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{pmatrix}. \tag{7}$$

Equation (7) is known as the dyadic representation of $\Omega$.
The matrix $R_{DE}$ describes the spatial orientation of the drone frame $E_D$ with respect to the Earth frame $E_E$. The sequence $R_{z,\psi} R_{y,\theta} R_{x,\phi}$ is used [34], and it is obtained by applying the product of the three rotation matrices for each axis:

$$R_{x,\phi} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c_\phi & -s_\phi \\ 0 & s_\phi & c_\phi \end{pmatrix}; \quad R_{y,\theta} = \begin{pmatrix} c_\theta & 0 & s_\theta \\ 0 & 1 & 0 \\ -s_\theta & 0 & c_\theta \end{pmatrix}; \quad R_{z,\psi} = \begin{pmatrix} c_\psi & -s_\psi & 0 \\ s_\psi & c_\psi & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

$$R_{DE} = R_{z,\psi} R_{y,\theta} R_{x,\phi} = \begin{pmatrix} c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\psi s_\phi \\ c_\theta s_\psi & c_\phi c_\psi + s_\phi s_\theta s_\psi & c_\phi s_\theta s_\psi - c_\psi s_\phi \\ -s_\theta & s_\phi c_\theta & c_\phi c_\theta \end{pmatrix}, \tag{8}$$

with $c_\gamma = \cos(\gamma)$ and $s_\gamma = \sin(\gamma)$, $\gamma = \phi, \theta, \psi$.
The Euler angle rates are related to the angular velocities through the matrix

$$W_{\Theta} = \begin{pmatrix} 1 & s_\phi\, tg_\theta & c_\phi\, tg_\theta \\ 0 & c_\phi & -s_\phi \\ 0 & s_\phi\, sc_\theta & c_\phi\, sc_\theta \end{pmatrix}, \tag{9}$$

with $tg_\gamma = \tan(\gamma)$ and $sc_\gamma = \sec(\gamma)$. Moreover, for small angles, Equation (9) reduces to the identity matrix $I_{3\times 3}$. This assumption is justified by the fact that the movements of the quadcopter are slow and smooth [35].
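For reference, a small numerical sketch of the rotation matrix of Equation (8) and the Euler-rate matrix of Equation (9) (including the small-angle reduction mentioned above) is shown below; it is an illustration, not part of the authors' implementation.

```python
import numpy as np

def R_DE(phi, theta, psi):
    """Rotation matrix of Eq. (8): drone frame E_D to Earth frame E_E (Z-Y-X sequence)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([[ct * cp, sf * st * cp - cf * sp, cf * st * cp + sp * sf],
                     [ct * sp, cf * cp + sf * st * sp, cf * st * sp - cp * sf],
                     [-st,     sf * ct,                cf * ct]])

def W_Theta(phi, theta):
    """Euler-rate matrix of Eq. (9); for small angles it tends to the 3x3 identity."""
    cf, sf = np.cos(phi), np.sin(phi)
    tt, sc = np.tan(theta), 1.0 / np.cos(theta)
    return np.array([[1.0, sf * tt, cf * tt],
                     [0.0, cf,      -sf],
                     [0.0, sf * sc, cf * sc]])

# Small-angle check: for 2 degrees of roll and pitch, W_Theta is close to identity.
print(np.round(W_Theta(np.radians(2.0), np.radians(2.0)), 3))
```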
Furthermore, the input moments are the torque $\tau_1$ applied to the roll angle, produced by the forces of motors $F_4$ and $F_2$; the torque $\tau_3$ applied to the pitch angle, produced by the forces $F_1$ and $F_3$; and, due to Newton’s third law, a yawing torque $\tau_2$ produced by the propellers on the body of the quadrotor in the direction opposite to the propeller rotation. The drag of the propellers produces a torque about the yaw axis of the quadcopter body and is represented by $c$. The distance between the quadcopter’s center of mass and each rotor shaft is represented by $l$. All these variables are defined as:

$$F_{prop} = \begin{pmatrix} 0 \\ 0 \\ \Sigma F_i \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ u_1 \end{pmatrix}, \quad F_{grav} = \begin{pmatrix} 0 \\ 0 \\ mg \end{pmatrix}, \quad \tau_{prop} = \begin{pmatrix} \tau_1 \\ \tau_3 \\ \tau_2 \end{pmatrix} = \begin{pmatrix} l\,(F_4 - F_2) \\ l\,(F_1 - F_3) \\ c\,(-F_1 + F_2 - F_3 + F_4) \end{pmatrix} = \begin{pmatrix} l\,u_3 \\ l\,u_2 \\ u_4 \end{pmatrix}. \tag{10}$$
Next, the gyroscopic term $\tau_{gyro}$ can be added to the dynamic model (6); it is caused by the combination of the rotations of the airframe and the four rotors, i.e., by the fact that the rotor pair 1–3 rotates in the direction opposite to the pair 2–4 [36]:

$$\tau_{gyro} = \sum_{i=1}^{4} (-1)^{i}\, J_R\, \omega_{R,i}\, S_\omega\, \epsilon_3, \tag{11}$$

where $\epsilon_3 = (0\ 0\ 1)^T$ is the unit vector along the z-axis in $E_E$, $J_R$ is the moment of inertia of each propeller with respect to its axis of rotation, and $\omega_R = \omega_{R,1} - \omega_{R,2} + \omega_{R,3} - \omega_{R,4}$ is the so-called relative rotor speed.
Finally, using Equations (6) and (10) and under the small angle assumption, the quadcopter model is
$$\begin{aligned} \ddot{x} &= \frac{1}{m}\left(c_\phi s_\theta c_\psi + s_\phi s_\psi\right) u_1 \\ \ddot{y} &= \frac{1}{m}\left(c_\phi s_\theta s_\psi - s_\phi c_\psi\right) u_1 \\ \ddot{z} &= -g + \frac{1}{m}\, c_\phi c_\theta\, u_1 \\ \ddot{\phi} &= \left(\frac{J_y - J_z}{J_x}\right)\dot{\theta}\dot{\psi} - \frac{J_R}{J_x}\,\dot{\theta}\,\omega_R + \frac{l}{J_x}\, u_3 \\ \ddot{\theta} &= \left(\frac{J_z - J_x}{J_y}\right)\dot{\phi}\dot{\psi} + \frac{J_R}{J_y}\,\dot{\phi}\,\omega_R + \frac{l}{J_y}\, u_2 \\ \ddot{\psi} &= \left(\frac{J_x - J_y}{J_z}\right)\dot{\phi}\dot{\theta} + \frac{1}{J_z}\, u_4 \end{aligned} \tag{12}$$
where the control inputs $u_i$, $i = 1, 2, 3, 4$, are defined as in (10), and the parameter values used in (12) are listed in Table 2.
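A compact sketch of Equation (12) written as a state-derivative function, using the parameter values of Table 2, is shown below; it is an illustrative re-implementation under the small-angle model, not the authors' simulation code.

```python
import numpy as np

# Parameter values from Table 2
m, l, g = 1.3, 0.233, 9.81
Jx, Jy, Jz, JR = 8.825e-3, 8.825e-3, 14.39e-3, 104e-6

def quadcopter_dynamics(state, u, omega_R=0.0):
    """state = (x, y, z, phi, theta, psi, dx, dy, dz, dphi, dtheta, dpsi);
    u = (u1, u2, u3, u4) as in Eq. (10).  Returns d(state)/dt following Eq. (12)."""
    _, _, _, phi, th, psi, dx, dy, dz, dphi, dth, dpsi = state
    u1, u2, u3, u4 = u
    cf, sf, ct, st = np.cos(phi), np.sin(phi), np.cos(th), np.sin(th)
    cp, sp = np.cos(psi), np.sin(psi)
    ddx = (cf * st * cp + sf * sp) * u1 / m
    ddy = (cf * st * sp - sf * cp) * u1 / m
    ddz = -g + cf * ct * u1 / m
    ddphi = ((Jy - Jz) / Jx) * dth * dpsi - (JR / Jx) * dth * omega_R + (l / Jx) * u3
    ddth = ((Jz - Jx) / Jy) * dphi * dpsi + (JR / Jy) * dphi * omega_R + (l / Jy) * u2
    ddpsi = ((Jx - Jy) / Jz) * dphi * dth + u4 / Jz
    return np.array([dx, dy, dz, dphi, dth, dpsi, ddx, ddy, ddz, ddphi, ddth, ddpsi])
```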
Subsequently, the control problem consists of ensuring the convergence of the set of variables $X = (x, y, z, \psi)$ to the references $X_{ref} = (x_{ref}, y_{ref}, z_{ref}, \psi_{ref})$. To achieve this, control subsystems composed of the rotational and translational dynamics of the system are considered, which relate the control inputs $u_i$ to the spatial coordinates $(x, y, z)$ by means of the Euler angles $(\phi, \theta, \psi)$. The following change of coordinates is considered: $\phi = \phi_1$, $\dot{\phi}_1 = \phi_2$; $\theta = \theta_1$, $\dot{\theta}_1 = \theta_2$; $\psi = \psi_1$, $\dot{\psi}_1 = \psi_2$ (and analogously $x = x_1$, $\dot{x}_1 = x_2$; $y = y_1$, $\dot{y}_1 = y_2$; $z = z_1$, $\dot{z}_1 = z_2$), and the system of Equation (12) becomes:

$$\begin{aligned} \dot{x}_1 &= x_2, & \dot{x}_2 &= \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} c_{\psi_1} + s_{\phi_1} s_{\psi_1}\right) u_1 \\ \dot{y}_1 &= y_2, & \dot{y}_2 &= \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} s_{\psi_1} - s_{\phi_1} c_{\psi_1}\right) u_1 \\ \dot{z}_1 &= z_2, & \dot{z}_2 &= -g + \frac{1}{m}\, c_{\phi_1} c_{\theta_1}\, u_1 \\ \dot{\phi}_1 &= \phi_2, & \dot{\phi}_2 &= \left(\frac{J_y - J_z}{J_x}\right)\theta_2\psi_2 - \frac{J_R}{J_x}\,\theta_2\,\omega_R + \frac{l}{J_x}\, u_3 \\ \dot{\theta}_1 &= \theta_2, & \dot{\theta}_2 &= \left(\frac{J_z - J_x}{J_y}\right)\phi_2\psi_2 + \frac{J_R}{J_y}\,\phi_2\,\omega_R + \frac{l}{J_y}\, u_2 \\ \dot{\psi}_1 &= \psi_2, & \dot{\psi}_2 &= \left(\frac{J_x - J_y}{J_z}\right)\phi_2\theta_2 + \frac{1}{J_z}\, u_4 \end{aligned} \tag{13}$$

Therefore, the following four subsystems are considered: the altitude subsystem $S_1$, the longitudinal subsystem $S_2$, the lateral subsystem $S_3$, and the yaw subsystem $S_4$; the set of Equations (13) then becomes:

$$S_1 = \begin{cases} \dot{z}_1 = z_2 \\ \dot{z}_2 = -g + \frac{1}{m}\, c_{\phi_1} c_{\theta_1}\, u_1 \end{cases} \qquad S_2 = \begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} c_{\psi_1} + s_{\phi_1} s_{\psi_1}\right) u_1 \\ \dot{\theta}_1 = \theta_2 \\ \dot{\theta}_2 = \left(\frac{J_z - J_x}{J_y}\right)\phi_2\psi_2 + \frac{J_R}{J_y}\,\phi_2\,\omega_R + \frac{l}{J_y}\, u_3 \end{cases}$$

$$S_3 = \begin{cases} \dot{y}_1 = y_2 \\ \dot{y}_2 = \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} s_{\psi_1} - s_{\phi_1} c_{\psi_1}\right) u_1 \\ \dot{\phi}_1 = \phi_2 \\ \dot{\phi}_2 = \left(\frac{J_y - J_z}{J_x}\right)\theta_2\psi_2 - \frac{J_R}{J_x}\,\theta_2\,\omega_R + \frac{l}{J_x}\, u_2 \end{cases} \qquad S_4 = \begin{cases} \dot{\psi}_1 = \psi_2 \\ \dot{\psi}_2 = \left(\frac{J_x - J_y}{J_z}\right)\phi_2\theta_2 + \frac{1}{J_z}\, u_4 \end{cases} \tag{14}$$
Following this, each of the subsystems is described.

2.4.2. Altitude Control

As mentioned, in order to verify the applicability of the references for the UAV flight controller, an altitude control is proposed in this section. The altitude control subsystem $S_1$ is defined by the following:

$$\dot{z}_1 = z_2, \qquad \dot{z}_2 = -g + \frac{1}{m}\, c_{\phi_1} c_{\theta_1}\, u_1.$$
Since the aim is to keep the UAV at a constant altitude according to the desired reference $z_{ref}$, with $z_{2,ref} = \dot{z}_{1,ref}$, the tracking error and its dynamics for the altitude control are set as follows:

$$e_{z_1} = z_1 - z_{1,ref}, \qquad \dot{e}_{z_1} = z_2 - \dot{z}_{1,ref}.$$

For the virtual control, gains $k$ are used such that

$$-k_{z_1} e_{z_1} - k_{0z_1} I_{e_{z_1}} = z_{2,d} - \dot{z}_{1,ref},$$

where an integral term on $e_{z_1}$ is added, such that $\dot{I}_{e_{z_1}} = e_{z_1}$, and therefore,

$$z_{2,d} = \dot{z}_{1,ref} - k_{z_1} e_{z_1} - k_{0z_1} I_{e_{z_1}}. \tag{17}$$

The derivative of (17) with respect to time is:

$$\dot{z}_{2,d} = \ddot{z}_{1,ref} - k_{z_1}\dot{e}_{z_1} - k_{0z_1}\dot{I}_{e_{z_1}} = \ddot{z}_{1,ref} - k_{z_1}\left(z_2 - \dot{z}_{1,ref}\right) - k_{0z_1} e_{z_1}.$$
Now, the tracking error $e_{z_2}$ is defined as

$$e_{z_2} = z_2 - z_{2,d},$$

and its dynamics are

$$\dot{e}_{z_2} = -g + \frac{1}{m}\, c_{\phi_1} c_{\theta_1}\, u_1 - \dot{z}_{2,d};$$

then the proposed altitude control input $u_1$ is

$$u_1 = \frac{m}{c_{\phi_1} c_{\theta_1}}\left(\dot{z}_{2,d} + g - k_{z_2} e_{z_2} - k_{0z_2} I_{e_{z_2}}\right), \tag{20}$$

with $c_{\phi_1} \neq 0$, $c_{\theta_1} \neq 0$.
Considering [37], the closed-loop dynamics obtained from (17) and (20) become

$$\begin{pmatrix} \dot{I}_{e_{z_1}} \\ \dot{e}_{z_1} \end{pmatrix} = A_{z,1}\begin{pmatrix} I_{e_{z_1}} \\ e_{z_1} \end{pmatrix}, \quad A_{z,1} = \begin{pmatrix} 0 & 1 \\ -k_{0z_1} & -k_{z_1} \end{pmatrix}; \qquad \begin{pmatrix} \dot{I}_{e_{z_2}} \\ \dot{e}_{z_2} \end{pmatrix} = A_{z,2}\begin{pmatrix} I_{e_{z_2}} \\ e_{z_2} \end{pmatrix}, \quad A_{z,2} = \begin{pmatrix} 0 & 1 \\ -k_{0z_2} & -k_{z_2} \end{pmatrix},$$

where the integral gains $k_{0z_1}$, $k_{0z_2}$ and the proportional gains $k_{z_1}$, $k_{z_2}$ are fixed so that the error dynamics are exponentially stable.
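As an illustration of Equations (17)–(20), a minimal discrete-time sketch of the PI-like altitude law is given below; the gains and sampling step are placeholders, not the values tuned by the authors.

```python
import numpy as np

# Placeholder gains and sampling time (not the authors' tuned values)
kz1, k0z1, kz2, k0z2 = 2.0, 1.0, 4.0, 2.0
m, g, dt = 1.3, 9.81, 0.01

def altitude_control(z1, z2, z_ref, dz_ref, ddz_ref, phi, theta, I1, I2):
    """Return the thrust u1 of Eq. (20) and the updated integral states I1, I2."""
    ez1 = z1 - z_ref                       # altitude tracking error
    dez1 = z2 - dz_ref
    I1 += ez1 * dt                         # integral of e_z1
    z2d = dz_ref - kz1 * ez1 - k0z1 * I1   # virtual control, Eq. (17)
    dz2d = ddz_ref - kz1 * dez1 - k0z1 * ez1
    ez2 = z2 - z2d                         # velocity tracking error
    I2 += ez2 * dt                         # integral of e_z2
    u1 = m / (np.cos(phi) * np.cos(theta)) * (dz2d + g - kz2 * ez2 - k0z2 * I2)
    return u1, I1, I2
```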

2.4.3. Longitudinal Motion Control

In this subsection, making use of the subsystem $S_2$ and applying the virtual control technique, the longitudinal motion is studied, which is described by the following system of equations:

$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} c_{\psi_1} + s_{\phi_1} s_{\psi_1}\right) u_1, \qquad \dot{\theta}_1 = \theta_2, \qquad \dot{\theta}_2 = \left(\frac{J_z - J_x}{J_y}\right)\phi_2\psi_2 + \frac{J_R}{J_y}\,\phi_2\,\omega_R + \frac{l}{J_y}\, u_3.$$
The tracking error $e_{x_1} = x_1 - x_{1,ref}$ is considered, and its dynamics are:

$$\dot{e}_{x_1} = x_2 - \dot{x}_{1,ref}.$$

To eliminate disturbances, an integral term $\dot{I}_{e_{x_1}} = e_{x_1}$ is added to the controller, and the virtual control is proposed as

$$x_{2,d} = \dot{x}_{1,ref} - k_{x_1} e_{x_1} - k_{0x_1} I_{e_{x_1}}.$$

Differentiating $x_{2,d}$ with respect to time,

$$\dot{x}_{2,d} = \ddot{x}_{1,ref} - k_{x_1}\dot{e}_{x_1} - k_{0x_1}\dot{I}_{e_{x_1}} = \ddot{x}_{1,ref} - k_{x_1}\left(x_2 - \dot{x}_{1,ref}\right) - k_{0x_1} e_{x_1}.$$

The error is $e_{x_2} = x_2 - x_{2,d}$, and its dynamics are:

$$\dot{e}_{x_2} = \dot{x}_2 - \dot{x}_{2,d} = \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} c_{\psi_1} + s_{\phi_1} s_{\psi_1}\right) u_1 - \dot{x}_{2,d}; \tag{26}$$

for the stabilization of $e_{x_2}$, the value of $s_{\theta_1}$ can be fixed by solving (26) as follows:

$$s_{\theta_1,d} = \frac{m}{c_{\phi_1} c_{\psi_1} u_1}\left(\dot{x}_{2,d} - \frac{1}{m}\, s_{\phi_1} s_{\psi_1}\, u_1 - k_{x_2} e_{x_2} - k_{0x_2} I_{e_{x_2}}\right).$$
To impose the reference on $\theta$, the tracking error $e_{\theta_1} = s_{\theta_1} - s_{\theta_1,d}$ is considered, together with its derivative

$$\dot{e}_{\theta_1} = \dot{s}_{\theta_1} - \dot{s}_{\theta_1,d} = c_{\theta_1}\theta_2 - \dot{s}_{\theta_1,d}.$$

For the virtual control, $\theta_{2,d}$ is used:

$$-k_{\theta_1} e_{\theta_1} - k_{0\theta_1} I_{e_{\theta_1}} = c_{\theta_1}\theta_{2,d} - \dot{s}_{\theta_1,d}; \qquad \theta_{2,d} = \frac{1}{c_{\theta_1}}\left(\dot{s}_{\theta_1,d} - k_{\theta_1} e_{\theta_1} - k_{0\theta_1} I_{e_{\theta_1}}\right).$$

Defining $e_{\theta_2}$ as

$$e_{\theta_2} = \theta_2 - \theta_{2,d}, \qquad \dot{e}_{\theta_2} = \dot{\theta}_2 - \dot{\theta}_{2,d},$$

the error dynamics become

$$\dot{e}_{\theta_2} = \left(\frac{J_z - J_x}{J_y}\right)\phi_2\psi_2 + \frac{J_R}{J_y}\,\phi_2\,\omega_R + \frac{l}{J_y}\, u_3 - \dot{\theta}_{2,d},$$

and finally, the control input $u_3$ is obtained as

$$u_3 = \frac{J_y}{l}\left(\dot{\theta}_{2,d} - k_{\theta_2} e_{\theta_2} - k_{0\theta_2} I_{e_{\theta_2}} - \frac{J_z - J_x}{J_y}\,\phi_2\psi_2 - \frac{J_R}{J_y}\,\phi_2\,\omega_R\right).$$

2.4.4. Lateral Movement Control

A lateral movement controller is designed in this section; its action is along the y-axis of the UAV. Using a procedure similar to that described in Section 2.4.3, the subsystem $S_3$ is used:

$$\dot{y}_1 = y_2, \qquad \dot{y}_2 = \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} s_{\psi_1} - s_{\phi_1} c_{\psi_1}\right) u_1, \qquad \dot{\phi}_1 = \phi_2, \qquad \dot{\phi}_2 = \left(\frac{J_y - J_z}{J_x}\right)\theta_2\psi_2 - \frac{J_R}{J_x}\,\theta_2\,\omega_R + \frac{l}{J_x}\, u_2. \tag{33}$$
The tracking error and its dynamics are defined by

$$e_{y_1} = y_1 - y_{1,ref}, \qquad \dot{e}_{y_1} = y_2 - \dot{y}_{1,ref};$$

adding an integral term $\dot{I}_{e_{y_1}} = e_{y_1}$ to the controller, the virtual control $y_{2,d}$ is obtained:

$$-k_{y_1} e_{y_1} - k_{0y_1} I_{e_{y_1}} = y_{2,d} - \dot{y}_{1,ref}; \qquad y_{2,d} = \dot{y}_{1,ref} - k_{y_1} e_{y_1} - k_{0y_1} I_{e_{y_1}}. \tag{35}$$

Differentiating Equation (35) with respect to time yields

$$\dot{y}_{2,d} = \ddot{y}_{1,ref} - k_{y_1}\dot{e}_{y_1} - k_{0y_1}\dot{I}_{e_{y_1}} = \ddot{y}_{1,ref} - k_{y_1}\left(y_2 - \dot{y}_{1,ref}\right) - k_{0y_1} e_{y_1}.$$

Subsequently, the tracking error $e_{y_2} = y_2 - y_{2,d}$ is defined, and its dynamics are as follows:

$$\dot{e}_{y_2} = \dot{y}_2 - \dot{y}_{2,d} = \frac{1}{m}\left(c_{\phi_1} s_{\theta_1} s_{\psi_1} - s_{\phi_1} c_{\psi_1}\right) u_1 - \dot{y}_{2,d}.$$

Now, solving for $s_{\phi_1}$ gives

$$-k_{y_2} e_{y_2} - k_{0y_2} I_{e_{y_2}} = \frac{1}{m}\, c_{\phi_1} s_{\theta_1} s_{\psi_1}\, u_1 - \frac{1}{m}\, s_{\phi_1} c_{\psi_1}\, u_1 - \dot{y}_{2,d}; \qquad s_{\phi_1,d} = \frac{m}{c_{\psi_1} u_1}\left(\frac{1}{m}\, c_{\phi_1} s_{\theta_1} s_{\psi_1}\, u_1 - \dot{y}_{2,d} + k_{y_2} e_{y_2} + k_{0y_2} I_{e_{y_2}}\right).$$
Considering the angular error $e_{\phi_1} = s_{\phi_1} - s_{\phi_1,d}$, its dynamics can be expressed as

$$\dot{e}_{\phi_1} = \dot{s}_{\phi_1} - \dot{s}_{\phi_1,d} = c_{\phi_1}\phi_2 - \dot{s}_{\phi_1,d}.$$

On the other hand, for the virtual control, $\phi_{2,d}$ is used:

$$-k_{\phi_1} e_{\phi_1} - k_{0\phi_1} I_{e_{\phi_1}} = c_{\phi_1}\phi_{2,d} - \dot{s}_{\phi_1,d}; \qquad \phi_{2,d} = \frac{1}{c_{\phi_1}}\left(\dot{s}_{\phi_1,d} - k_{\phi_1} e_{\phi_1} - k_{0\phi_1} I_{e_{\phi_1}}\right).$$

Furthermore, with $e_{\phi_2} = \phi_2 - \phi_{2,d}$, the dynamics are defined by

$$\dot{e}_{\phi_2} = \dot{\phi}_2 - \dot{\phi}_{2,d}. \tag{41}$$

Now, using (33) in (41), the angular velocity error dynamics are expressed as

$$\dot{e}_{\phi_2} = \left(\frac{J_y - J_z}{J_x}\right)\theta_2\psi_2 - \frac{J_R}{J_x}\,\theta_2\,\omega_R + \frac{l}{J_x}\, u_2 - \dot{\phi}_{2,d}.$$

Finally, the proposed control input $u_2$ is

$$u_2 = \frac{J_x}{l}\left(\dot{e}_{\phi_2} - \frac{J_y - J_z}{J_x}\,\theta_2\psi_2 + \frac{J_R}{J_x}\,\theta_2\,\omega_R + \dot{\phi}_{2,d}\right); \qquad u_2 = \frac{J_x}{l}\left(\dot{\phi}_{2,d} - k_{\phi_2} e_{\phi_2} - k_{0\phi_2} I_{e_{\phi_2}} - \frac{J_y - J_z}{J_x}\,\theta_2\psi_2 + \frac{J_R}{J_x}\,\theta_2\,\omega_R\right).$$

2.4.5. Yaw Motion Control

For the yaw rotation control, the subsystem $S_4$ is considered:

$$\dot{\psi}_1 = \psi_2, \qquad \dot{\psi}_2 = \left(\frac{J_x - J_y}{J_z}\right)\phi_2\theta_2 + \frac{1}{J_z}\, u_4,$$

where the tracking error is $e_{\psi_1} = \psi_1 - \psi_{1,ref}$, and its dynamics are

$$\dot{e}_{\psi_1} = \psi_2 - \dot{\psi}_{1,ref}.$$

Defining the virtual control as in the previous sections, $\psi_{2,d}$ is:

$$\psi_{2,d} = \dot{\psi}_{1,ref} - k_{\psi_1} e_{\psi_1} - k_{0\psi_1} I_{e_{\psi_1}}. \tag{46}$$
Differentiating (46) with respect to time gives

$$\dot{\psi}_{2,d} = \ddot{\psi}_{1,ref} - k_{\psi_1}\dot{e}_{\psi_1} - k_{0\psi_1}\dot{I}_{e_{\psi_1}}.$$

Finally, with the error $e_{\psi_2} = \psi_2 - \psi_{2,d}$, its dynamics are defined as

$$\dot{e}_{\psi_2} = \dot{\psi}_2 - \dot{\psi}_{2,d} = \left(\frac{J_x - J_y}{J_z}\right)\phi_2\theta_2 + \frac{1}{J_z}\, u_4 - \dot{\psi}_{2,d};$$

then the controller input $u_4$ is defined by

$$u_4 = J_z\left(\dot{\psi}_{2,d} - k_{\psi_2} e_{\psi_2} - k_{0\psi_2} I_{e_{\psi_2}} - \left(\frac{J_x - J_y}{J_z}\right)\phi_2\theta_2\right).$$
As in Section 2.4.2, the exponential stability can be inferred from the yaw error dynamics, i.e.:

$$\begin{pmatrix} \dot{I}_{e_{\psi_1}} \\ \dot{e}_{\psi_1} \end{pmatrix} = A_{\psi,1}\begin{pmatrix} I_{e_{\psi_1}} \\ e_{\psi_1} \end{pmatrix}, \quad A_{\psi,1} = \begin{pmatrix} 0 & 1 \\ -k_{0\psi_1} & -k_{\psi_1} \end{pmatrix}; \qquad \begin{pmatrix} \dot{I}_{e_{\psi_2}} \\ \dot{e}_{\psi_2} \end{pmatrix} = A_{\psi,2}\begin{pmatrix} I_{e_{\psi_2}} \\ e_{\psi_2} \end{pmatrix}, \quad A_{\psi,2} = \begin{pmatrix} 0 & 1 \\ -k_{0\psi_2} & -k_{\psi_2} \end{pmatrix},$$

where the integral gains $k_{0\psi_1}$, $k_{0\psi_2}$ and the proportional gains $k_{\psi_1}$, $k_{\psi_2}$ are fixed so that the error dynamics are exponentially stable.
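The exponential stability requirement on the error dynamics can be checked numerically for any chosen gain pair: each matrix $A_{\cdot,1}$, $A_{\cdot,2}$ must be Hurwitz (all eigenvalues with negative real part), which holds whenever both gains are positive. A quick sketch with placeholder gains:

```python
import numpy as np

def is_exponentially_stable(k0, k):
    """The error matrix A = [[0, 1], [-k0, -k]] is Hurwitz iff all of its
    eigenvalues have a strictly negative real part (true for k > 0 and k0 > 0)."""
    A = np.array([[0.0, 1.0], [-k0, -k]])
    return bool(np.all(np.real(np.linalg.eigvals(A)) < 0.0))

# Placeholder gains (not the authors' values): both loops are exponentially stable.
print(is_exponentially_stable(k0=1.0, k=2.0), is_exponentially_stable(k0=2.0, k=4.0))
```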

3. Results

Three experiments are documented to show the functionality of our proposal. For each case, the collision-free trajectories are calculated with one frame by the TGBOA algorithm in real-time. Subsequently, a trajectory is selected and employed as a reference for the PI-like controller. MATLAB 2022a is used for the controller simulation because of its compatibility with other platforms and its wide variety of tools and different uses in the industry, allowing the use of the algorithm in later works.

3.1. Experiment 1

Figure 6a shows the video frame transmitted from the UAV to the workstation for the first experiment. In the digital image, thirteen regions $R_1$–$R_{13}$ are defined, and objects are detected in regions $R_1$ and $R_9$–$R_{13}$, whose centers are marked in red. Consequently, collision-free regions are detected from region $R_2$ to region $R_8$ (centers marked in blue). Subsequently, using the procedure described in Section 2.3, the collision-free trajectories between the objects and the UAV are calculated. The time required by the TGBOA algorithm for this frame is 26 ms. These trajectories are expressed through their vector equations, as seen in Table 3. Then, the trajectory $\vec{f}_7$ is selected, and it is marked in blue. Figure 6b illustrates the reference vector for the controller described in Section 2.4.
On the other hand, the controller simulation results are shown in Figure 7: Figure 7a shows the results for the position control, including horizontal, frontal, and altitude control; Figure 7b presents the results for the attitude control; Figure 7c shows the control input signals; and finally, Figure 7d shows a 3D representation of the simulation. In each case, the generated reference is discretized into five points, going from the origin to the center of the selected free zone determined by the video analysis (see Figure 6a). The interval between reference points is 5 s. Based on Figure 7, at every point the PI-like controller reaches stability within 4 s, and 25 s is the total time required to complete the reference vector.
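A sketch of how the selected reference vector could be discretized into the five waypoints used by the controller (one every 5 s) is shown below; the vector components correspond to $\vec{f}_7$ in Table 3, and the interpolation itself is illustrative.

```python
import numpy as np

def discretize_reference(v_k, n_points=5, dwell=5.0):
    """Split the trajectory vector v_k into n_points waypoints, one every `dwell` seconds."""
    fractions = np.linspace(1.0 / n_points, 1.0, n_points)   # 0.2, 0.4, ..., 1.0
    waypoints = fractions[:, None] * np.asarray(v_k, float)  # (x, y, z) targets
    times = dwell * np.arange(1, n_points + 1)                # 5, 10, 15, 20, 25 s
    return times, waypoints

times, wps = discretize_reference((10.0, 2.4, 0.613))
# Five waypoints reached at t = 5, 10, ..., 25 s, matching the ~25 s total time reported.
```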

3.2. Experiment 2

For this case, the video frame acquired from the UAV camera is shown in Figure 8a. As observed, there are six regions ($R_2$, $R_9$–$R_{13}$) with detected objects, whose centers are marked with red dots, and the free regions found are $R_1$ and $R_3$–$R_8$ (centers marked with blue dots). Using the UAV position coordinates and the central coordinates of the collision-free regions, trajectories are generated and represented through vector, parametric, and symmetric equations. These equations can be seen in Table 4. Next, the vector $\vec{f}_1$ is used as a reference for the UAV controller, and its graph is displayed in Figure 8b. In this case, the TGBOA algorithm produces the trajectory in 21 ms. On the other hand, Figure 9 shows the results obtained for the second experiment when the vector $\vec{f}_1$ is applied as a reference in the controller.
For this experiment, five points are taken from the chosen trajectory with a duration of five seconds between them. Each reference point is reached in 4 s as observed in Figure 9a. The longitudinal movement retains the same behavior as in Experiment 1, while the lateral movement remains constant as the UAV follows a line forward. In addition, the altitude is maintained until the final point is reached. Again, the last point stabilizes after 24 s.

3.3. Experiment 3

This experiment uses the same methodology as the first: the objects are moved, a new frame is taken for the TGBOA algorithm, the positions are detected, the collision-free zones are found, and the optimal trajectory is generated. In Figure 10a, we can see the video frame for Experiment 3, where the objects are in the regions $R_1$ and $R_8$, with their centers marked in red. Therefore, the collision-free regions are $R_2$–$R_7$ and $R_9$–$R_{13}$, whose centers are marked in blue. Following the procedure described in Section 2.3, the possible trajectories for the UAV are calculated, and each trajectory is represented through vector, parametric, and symmetric equations. These equations are shown in Table 5. The references are obtained in 28 ms by the TGBOA algorithm. Subsequently, the trajectory $\vec{f}_{12}$ is used as a reference in the controller (see Figure 10b), and the numerical results can be seen in Figure 11.
For this last experiment, the trajectory pointing towards zone number 12 is used, and the controller simulation results are shown in Figure 11, where the dotted line represents the reference and the solid line the numerical results. It is observed that the PI-like controller is stable, since it reaches each point of the trajectory in approximately 4 s and holds it until the next point, reaching the center of the collision-free zone in 25 s.

3.4. Object Avoidance

With the results obtained in the three experiments (Sections 3.1–3.3), the object avoidance behavior is shown in Figure 12.
Analyzing Figure 12 reveals that the UAV travels along an initial trajectory preloaded in the controller until an object is detected with the vision sensing system, at which point the TGBOA algorithm creates a new trajectory. The new trajectory is then passed to the PI-like controller. Note that the new trajectory $\vec{f}_7$ (Experiment 1) evades the object located during the flight, as observed in Figure 6. This trajectory $\vec{f}_7$ is followed until the visual sensing system detects other objects in the UAV path; as a consequence, the TGBOA algorithm creates a new trajectory, $\vec{f}_1$ (Experiment 2), which is then passed to the PI-like controller, updating the UAV with the new objects in the scenario. The trajectory $\vec{f}_1$ makes the drone evade them in accordance with the new values provided to the algorithm. Notice that the UAV follows $\vec{f}_1$ until the vision sensing system again detects objects in the scenario, and then it is necessary to create a new trajectory with the TGBOA algorithm; in this case, the trajectory vector $\vec{f}_{12}$ (Experiment 3) is created. Please refer to Figure 11 and Table 5. As in the other cases, $\vec{f}_{12}$ allows the drone to evade objects thanks to a change in direction or trajectory.
Therefore, considering the results shown in Figure 12, the TGBOA algorithm creates new trajectories in real time by evading the objects detected with the visual sensing system. The trajectory vectors are then passed to the controller, and with this new information the drone moves to a new position.
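Putting the pieces together, the detect-and-replan behavior of Figure 12 can be summarized by the high-level loop sketched below; the function names reuse the illustrative snippets of Section 2, while `region_center_3d`, `select_region`, and the `controller.track` interface are hypothetical helpers, not part of the authors' implementation.

```python
def navigation_loop(get_frame, regions, controller, select_region, region_center_3d):
    """TGBOA loop: detect objects, update the collision-free reference, track it."""
    current_ref = None
    while True:
        frame = get_frame()                       # frame transmitted from the UAV
        if frame is None:
            break
        binary, _ = detect_objects(frame)         # object detection (Section 2.2)
        status = classify_regions(binary, regions)
        free = [k for k, s in status.items() if s == "collision-free"]
        if free:                                  # new collision-free reference (Section 2.3)
            k = select_region(free)               # e.g. the free region closest to the goal
            current_ref, _ = trajectory(Ck=region_center_3d(k))
        if current_ref is not None:
            controller.track(current_ref)         # PI-like controller follows the vector
```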

4. Discussion

In this work, a vision system was applied for object detection in order to generate real-time trajectories for drones, which were then tested as references in a newly designed PI-like controller. A new algorithm called trajectory generation based on object avoidance (TGBOA) was reported. In the TGBOA algorithm, the trajectories were created based on object evasion, using the drone position and the central coordinates of the collision-free regions. Their mathematical representation was calculated and expressed as vector, parametric, and symmetric equations. In each experiment, one vector was selected and used as a reference in the PI-like controller (see Section 2.4) through simulation software. The numerical results obtained verify the operation of our proposal, since the drone always reached the reference generated by the TGBOA algorithm. Based on the results, the following points can be highlighted:
  • The new trajectory generation based on object avoidance (TGBOA) algorithm is proposed to generate real-time trajectories for drones. The TGBOA algorithm is based on evading detected objects using a vision sensing system.
  • The TGBOA algorithm was verified experimentally: the trajectory vectors were generated from a video analysis and then applied as references in the designed PI-like controller.
  • Our new proposal generates new trajectories in real time for drone controllers, taking into consideration scenarios that change because of the UAV's own movement as well as random natural scenarios.
  • The TGBOA algorithm creates new drone trajectories efficiently, requiring fewer computational operations than the optical-flow-based approaches discussed above.
  • The TGBOA algorithm is easy to implement because it is based on the detection of objects in a changing scenario and not on the optical flow concept [35]. As a consequence, the number of computational operations and the mathematical complexity are reduced.
  • With the object evasion provided by the TGBOA algorithm, the drone can be more autonomous when it travels from an initial point $f_A$ to a final point $f_B$, even in a changing scenario.
  • The TGBOA algorithm can be implemented in specialized development boards.
However, the applied software, the data transmission, and the controller impose timing limitations on the execution, such as the following:
  • For each frame, the TGBOA algorithm creates a new trajectory in 30 ms.
  • Including the transmission time between the UAV and the workstation, the total time to produce and update a trajectory is up to 390 ms, due to the delay in the Wi-Fi network; the data transmission alone therefore takes up to 360 ms.
  • Experimentally, the execution time is long since the drone’s controller requires up to 25 s to reach the center of the chosen free-collision zone.
  • The total execution time is experimentally 25.360 s.
  • MATLAB 2022a is used due to its compatibility with other platforms and its wide variety of tools and different uses in the industry, allowing the use of the algorithm in future projects.
Comparing our trajectory generation proposal (the TGBOA algorithm) with those reported in the references, the TGBOA algorithm is functional for random and dynamic scenarios, is based on object evasion, operates in real time, and can work online. In contrast, in several of the reported techniques the surrounding environment cannot change, the controller references must be preloaded, and the trajectory is computed offline. Based on the above, the TGBOA algorithm offers advantages over the previously described techniques because it operates in changing and random situations.
Comparing the TGBOA algorithm with the proposals reported in references [7,19,24,25], the TGBOA algorithm reduces complexity because our work is based on the detection of moving objects, while the indicated references rely on the calculation of optical flow, which is computed through the frequency components of the image, or on optimization algorithms. Table 6 shows the comparison between the TGBOA algorithm and the references where the trajectory of the drone is determined by applying the optical flow technique.
Based on Tables 3–5 and Figures 5–12, the TGBOA algorithm works correctly in the real-time generation of trajectories only if the vision system is correctly applied in the evasion of objects. However, the runtime is high due to the software used in our experiments. Therefore, our future work has the following directions: the first is to reduce the execution time using electronic development cards, the second is to improve the data transmission rate, and the third is to apply the TGBOA algorithm while the UAV moves through random and changing scenarios.

5. Conclusions

The new algorithm called trajectory generation based on object avoidance (TGBOA) is capable of real-time flight planning from object detection by a vision sensor, with low computational resources, by processing a video transmitted from a UAV to a workstation. The algorithm can generate collision-free trajectories that can be used as references in the UAV controller to avoid objects during the flight. However, the optimal trajectory will depend on the detected objects, the UAV position, and the endpoint. Thirteen regions were defined in the image for the video analysis. This number can be increased to generate more trajectories in different directions; however, as a consequence, the execution time increases due to the number of computational operations required. This problem can be solved by programming the TGBOA algorithm on electronic cards with greater computational power and by using different platforms dedicated to robotic systems. A PI-like controller was developed for quadcopter position and yaw control. The dynamics of the UAV were divided into four subsystems, each with one control input and one output variable. Using the references obtained from the TGBOA algorithm, the flight controller led to satisfactory results in the experiments, since the objects located in different areas were evaded and the appropriate trajectory was chosen for each case. The numerical simulations of the proposed controllers were implemented using the mathematical software MATLAB 2022a, and the results show a good performance of the proposed control law, reaching the proposed references in 4 s. Future work will concern the implementation of the algorithm and different controllers using other simulation platforms and development boards.

Author Contributions

Conceptualization, J.T.G.B., C.A.L., S.D.G. and G.G.-T.; methodology, J.T.G.B., C.A.L. and S.D.G.; software, L.F.M.M.; validation, L.F.M.M., C.A.L. and J.T.G.B.; formal analysis, J.T.G.B. and C.A.L.; investigation, L.F.M.M. and J.T.G.B.; resources, J.T.G.B. and C.A.L.; data curation, L.F.M.M.; writing—original draft preparation, L.F.M.M.; writing—review and editing, J.T.G.B., C.A.L., S.D.G. and G.G.-T.; visualization, L.F.M.M.; supervision, G.G.-T., J.T.G.B. and C.A.L.; project administration, J.T.G.B. and C.A.L.; funding acquisition, L.F.M.M., J.T.G.B. and C.A.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank Mexico’s National Council of Science and Technology (CONACyT) and the University of Guadalajara for the support granted. Luis Felipe Muñoz Mendoza also thanks CONACyT for the scholarship.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alzahrani, B.; Oubbati, O.S.; Barnawi, A.; Atiquzzaman, M.; Alghazzawi, D. UAV assistance paradigm: State-of-the-art in applications and challenges. J. Netw. Comput. Appl. 2020, 166, 102706. [Google Scholar] [CrossRef]
  2. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A review on UAV-based applications for precision agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef] [Green Version]
  3. Woo, J.W.; An, J.-Y.; Cho, M.G.; Kim, C.-J. Integration of path planning, trajectory generation and trajectory tracking control for aircraft mission autonomy. Aerosp. Sci. Technol. 2021, 118, 107014. [Google Scholar] [CrossRef]
  4. Ahmed, F.; Jenihhin, M. A Survey on UAV Computing Platforms: A Hardware Reliability Perspective. Sensors 2022, 22, 6286. [Google Scholar] [CrossRef]
  5. Ahmed, F.; Mohanta, J.C.; Keshari, A.; Yadav, P.S. Recent Advances in Unmanned Aerial Vehicles: A Review. Arab. J. Sci. Eng. 2022, 47, 7963–7984. [Google Scholar] [CrossRef] [PubMed]
  6. Gupta, A.; Fernando, X. Simultaneous Localization and Mapping (SLAM) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges. Drones 2022, 6, 85. [Google Scholar] [CrossRef]
  7. Chen, Y.; Yu, J.; Mei, Y.; Zhang, S.; Ai, X.; Jia, Z. Trajectory optimization of multiple quad-rotor UAVs in collaborative assembling task. Chin. J. Aeronaut. 2016, 29, 184–201. [Google Scholar] [CrossRef] [Green Version]
  8. Li, L.; Sun, L.; Jin, J. Survey of advances in control algorithms of quadrotor unmanned aerial vehicle. In Proceedings of the 2015 IEEE 16th International Conference on Communication Technology (ICCT), Hangzhou, China, 18–21 August 2015; pp. 107–111. [Google Scholar] [CrossRef]
  9. Li, Y.; Song, S. A survey of control algorithms for Quadrotor Unmanned Helicopter. In Proceedings of the 2012 IEEE Fifth International Conference on Advanced Computational Intelligence (ICACI), Nanjing, China, 18–20 October 2012; pp. 365–369. [Google Scholar] [CrossRef]
  10. Antonio-Toledo, M.E.; Sanchez, E.N.; Alanis, A.Y. Neural Inverse Optimal Control Applied to Quadrotor UAV. In Proceedings of the 2018 IEEE Latin American Conference on Computational Intelligence (LA-CCI), Guadalajara, Mexico, 7–9 November 2018; pp. 1–8. [Google Scholar] [CrossRef]
  11. Guillén-Bonilla, J.T.; García, C.C.V.; Di Gennaro, S.; Morales, M.E.S.; Lúa, C.A. Vision-Based Nonlinear Control of Quadrotors Using the Photogrammetric Technique. Math. Probl. Eng. 2020, 2020, 5146291. [Google Scholar] [CrossRef]
  12. Rahman, Y.A.A.; Hajibeigy, M.T.; Al-Obaidi, A.S.M.; Cheah, K.H. Design and Fabrication of Small Vertical-Take-Off-Landing Unmanned Aerial Vehicle. MATEC Web Conf. 2018, 152, 02023. [Google Scholar] [CrossRef] [Green Version]
  13. Gupta, N.; Chauhan, R.; Chadha, S. Unmanned Aerial Vehicle (UAV) for Parcel Delivery. Int. J. Eng. Res. Technol. 2020, 13, 2824–2830. [Google Scholar] [CrossRef]
  14. Yang, H.; Lee, Y.; Jeon, S.; Lee, D. Multi-rotor drone tutorial: Systems, mechanics, control and state estimation. Intell. Serv. Robot. 2017, 10, 79–93. [Google Scholar] [CrossRef]
  15. Eslamiat, H.; Li, Y.; Wang, N.; Sanyal, A.K.; Qiu, Q. Autonomous Waypoint Planning, Optimal Trajectory Generation and Nonlinear Tracking Control for Multi-rotor UAVs. In Proceedings of the 2019 18th European Control Conference (ECC), Naples, Italy, 25–28 June 2019. [Google Scholar] [CrossRef]
  16. Shiller, Z. Off-Line and On-Line Trajectory Planning. In Motion and Operation Planning of Robotic Systems. Mechanisms and Machine Science; Springer: Cham, Switzerland, 2015; Volume 29, pp. 29–62. [Google Scholar]
  17. Muñoz, J.; López, B.; Quevedo, F.; Monje, C.A.; Garrido, S.; Moreno, L.E. Multi UAV Coverage Path Planning in Urban Environments. Sensors 2021, 21, 7365. [Google Scholar] [CrossRef]
  18. Youn, W.; Ko, H.; Choi, H.S.; Choi, I.H.; Baek, J.-H.; Myung, H. Collision-free Autonomous Navigation of A Small UAV Using Low-cost Sensors in GPS-denied Environments. Int. J. Control Autom. Syst. 2021, 19, 953–968. [Google Scholar] [CrossRef]
  19. Chuang, H.-M.; He, D.; Namiki, A. Autonomous Target Tracking of UAV Using High-Speed Visual Feedback. Appl. Sci. 2019, 9, 4552. [Google Scholar] [CrossRef] [Green Version]
  20. Causa, F.; Fasano, G. Multiple UAVs trajectory generation and waypoint assignment in urban environment based on DOP maps. Aerosp. Sci. Technol. 2021, 110, 106507. [Google Scholar] [CrossRef]
  21. Farid, G.; Mo, H.; Zahoor, M.I.; Liwei, Q. Computationally efficient algorithm to generate a waypoints-based trajectory for a quadrotor UAV. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018; pp. 4414–4419. [Google Scholar] [CrossRef]
  22. Shirzadeh, M.; Asl, H.J.; Amirkhani, A.; Jalali, A.A. Vision-based control of a quadrotor utilizing artificial neural networks for tracking of moving targets. Eng. Appl. Artif. Intell. 2017, 58, 34–48. [Google Scholar] [CrossRef]
  23. Zhilenkov, A.A.; Epifantsev, I.R. The use of convolution artificial neural networks for drones autonomous trajectory planning. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), Moscow, Russia, 29 January–1 February 2018; pp. 1044–1047. [Google Scholar] [CrossRef]
  24. Alzugaray, I.; Teixeira, L.; Chli, M. Short-term UAV path-planning with monocular-inertial SLAM in the loop. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2017), Singapore, 29 May–3 June 2017; pp. 2739–2746. [Google Scholar] [CrossRef] [Green Version]
  25. Lin, H.-Y.; Peng, X.-Z. Autonomous Quadrotor Navigation with Vision Based Obstacle Avoidance and Path Planning. IEEE Access 2021, 9, 102450–102459. [Google Scholar] [CrossRef]
  26. Mo, H.; Farid, G. Nonlinear and Adaptive Intelligent Control Techniques for Quadrotor UAV–A Survey. Asian J. Control. 2018, 21, 989–1008. [Google Scholar] [CrossRef]
  27. Leon, J.A.R.; Lua, C.A.; Morales, M.E.S.; Di Gennaro, S.; Guzman, A.N. Altitude and attitude non linear controller applied to a quadrotor with a slung load. In Proceedings of the IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 13–15 November 2019; pp. 1–6. [Google Scholar] [CrossRef]
  28. Raiesdana, S. Control of quadrotor trajectory tracking with sliding mode control optimized by neural networks. Proc. Inst. Mech. Eng. Part I: J. Syst. Control. Eng. 2020, 234, 1101–1119. [Google Scholar] [CrossRef]
  29. Wu, W.; Jin, X.; Tang, Y. Vision-based trajectory tracking control of quadrotors using super twisting sliding mode control. Cyber-Phys. Syst. 2020, 6, 207–230. [Google Scholar] [CrossRef]
  30. Shankaran, V.P.; Azid, S.I.; Mehta, U.; Fagiolini, A. Improved Performance in Quadrotor Trajectory Tracking Using MIMO PIλ-D Control. IEEE Access 2022, 10, 110646–110660. [Google Scholar] [CrossRef]
  31. Kumar, R.; Bhargavapuri, M.; Deshpande, A.M.; Sridhar, S.; Cohen, K.; Kumar, M. Quaternion Feedback Based Autonomous Control of a Quadcopter UAV with Thrust Vectoring Rotors. In Proceedings of the American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020. [Google Scholar] [CrossRef]
  32. Benhadhria, S.; Mansouri, M.; Benkhlifa, A.; Gharbi, I.; Jlili, N. VAGADRONE: Intelligent and Fully Automatic Drone Based on Raspberry Pi and Android. Appl. Sci. 2021, 11, 3153. [Google Scholar] [CrossRef]
  33. Xing, H.; Zhu, L.; Chen, B.; Liu, C.; Niu, J.; Li, X.; Feng, Y.; Fang, W. A comparative study of threshold selection methods for change detection from very high-resolution remote sensing images. Earth Sci. Informatics 2022, 15, 369–381. [Google Scholar] [CrossRef]
  34. Hughes, P.C. Spacecraft Attitude Dynamics; Dover Publications, Inc.: Mineola, NY, USA, 1986. [Google Scholar]
  35. Nagaty, A.; Saeedi, S.; Thibault, C.; Seto, M.; Li, H. Control and Navigation Framework for Quadrotor Helicopters. J. Intell. Robot. Syst. 2012, 70, 1–12. [Google Scholar] [CrossRef]
  36. Tayebi, A.; McGilvray, S. Attitude stabilization of a VTOL quadrotor aircraft. IEEE Trans. Control. Syst. Technol. 2006, 14, 562–571. [Google Scholar] [CrossRef] [Green Version]
  37. Guzman, A.N.; Di Gennaro, S.; Dominguez, J.R.; Lua, C.A.; Loukianov, A.G.; Castillo-Toledo, B. Enhanced Discrete-Time Modeling via Variational Integrators and Digital Controller Design for Ground Vehicles. IEEE Trans. Ind. Electron. 2016, 63, 6375–6385. [Google Scholar] [CrossRef]
  38. Cheng, H.-W.; Chen, T.-L.; Tien, C.-H. Motion Estimation by Hybrid Optical Flow Technology for UAV Landing in an Unvisited Area. Sensors 2019, 19, 1380. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Vision system proposed for the TGBOA algorithm.
Figure 2. Quadcopter F-450.
Figure 3. Test case: (a) RGB image obtained from vision sensor; (b) objects located using the binary image obtained from the environment; (c) free zones marked in blue, and zones with objects marked in red.
Figure 4. Geometry and collision-free paths generated from object detection and definition of collision-free regions.
Figure 5. Frameworks used for quadcopter orientation.
Figure 6. First experiment. (a) Frame acquired from the UAV video transmitted to the workstation, divided into regions: free regions marked in blue and collision regions marked in red; (b) graphical representation of the vector reference $\vec{f}_7$ selected for the controller; the reference originates at the drone, and its endpoint is the central coordinates of region R7.
Figure 7. Experimental numerical results obtained when the vector $\vec{f}_7$ is used as a reference: (a) motion control; (b) attitude control; (c) control inputs; (d) 3D simulation.
Figure 8. Second experiment. (a) Frame used during the second experiment: objects are located in regions R2 and R9–R13; such regions are identified in red, and the collision-free regions R1 and R3–R8 are marked in blue; (b) graphical representation of the vector reference $\vec{f}_1$: the reference starts at the drone, and its endpoint is the central coordinates of region R1.
Figure 9. Results obtained when the vector $\vec{f}_1$ is used as a reference: (a) motion control; (b) attitude control; (c) control inputs; (d) 3D simulation.
Figure 10. Third experiment. (a) Frame used in the third experiment: the object is in the regions R2–7, and R9–13 identified with a red circle and R1 collision−free regions and are identified R8 with a blue circle; (b) graphical representation of vector reference f 12 .
Figure 11. Results obtained in the third experiment when the vector f12 is used as a reference: (a) motion control; (b) attitude control; (c) control inputs; (d) 3D simulation.
Figure 12. Trajectory generated by the TGBOA algorithm from the results of the three experiments and its use in the PI-like controller.
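The overall trajectory of Figure 12 combines the references selected in the three experiments (f7, f1, and f12). A minimal way to build such a piecewise path from the endpoints printed in Tables 3–5 is sketched below; chaining the segments end to end, with each new segment expressed in a local frame that starts where the previous one finished, and the sampling density are assumptions made for illustration.

```python
import numpy as np

# Endpoints as printed in Tables 3-5: f7, f1, and f12 (metres),
# each expressed in a local frame originating at the drone.
segment_endpoints = [
    np.array([10.0, 2.4, 0.613]),   # f7, first experiment
    np.array([10.0, 0.0, 0.0]),     # f1, second experiment
    np.array([10.0, 2.4, 1.838]),   # f12, third experiment
]

def piecewise_trajectory(endpoints, samples_per_segment=50):
    """Chain the straight-line references end to end, so each segment
    starts at the point where the previous one ended."""
    origin = np.zeros(3)
    points = [origin.copy()]
    for end in endpoints:
        for t in np.linspace(0.0, 1.0, samples_per_segment + 1)[1:]:
            points.append(origin + t * end)
        origin = origin + end        # the next segment starts here
    return np.vstack(points)

trajectory = piecewise_trajectory(segment_endpoints)
print(trajectory.shape)   # (151, 3)
```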
Table 1. Comparison between different UAV trajectory-generation methods.
| Ref. | Relevant Aspect | Disadvantage |
|---|---|---|
| [19] | High image sampling rates to improve environment recognition; the position control signals are transmitted directly to the flight controller. | Expensive hardware; requires prior knowledge or estimation of the objects in its spatial environment. |
| [23] | Convolutional neural networks used for possible path recognition; ARM and FPGA implementation. | Large amount of computational resources required by the CNN; three vision sensors needed. |
| [24] | Monocular–inertial SLAM in the navigation loop in a previously unknown environment. | Updating the environment requires considerable time and computational resources; dynamic scenarios are not considered. |
| [25] | Path planning and real-time obstacle-avoidance techniques; frontal static obstacles can be detected to guide the quadrotor in a proper direction. | The presence of moving obstacles is not considered; optical flow is not sufficient to achieve real-time performance on low-cost hardware. |
Table 2. Coefficients and variables of the quadcopter.
| Parameter | Variable | Value |
|---|---|---|
| Mass | m | 1.3 kg |
| Frame length | l | 0.233 m |
| Inertia | J_x | 8.825 × 10⁻³ kg·m² |
| | J_y | 8.825 × 10⁻³ kg·m² |
| | J_z | 14.39 × 10⁻³ kg·m² |
| Propeller inertia | J_R | 104 × 10⁻⁶ kg·m² |
| Gravity | g | 9.81 m/s² |
| Thrust | b | 52.5 × 10⁻⁶ |
| Drag | c | 1.15 × 10⁻⁶ N·s²·rad⁻² |
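When reproducing the quadcopter model numerically, the coefficients of Table 2 can be grouped into a small data structure. The sketch below does so in Python and, as a simple consistency check, computes a hover rotor speed under the common assumption that the total thrust equals b·Σωᵢ²; that thrust model and the resulting figure are illustrative assumptions, not values reported in the paper.

```python
from dataclasses import dataclass
from math import sqrt

@dataclass(frozen=True)
class QuadcopterParams:
    """Physical coefficients of the F-450 quadcopter (Table 2)."""
    m: float = 1.3          # mass [kg]
    l: float = 0.233        # frame length [m]
    Jx: float = 8.825e-3    # inertia about x [kg m^2]
    Jy: float = 8.825e-3    # inertia about y [kg m^2]
    Jz: float = 14.39e-3    # inertia about z [kg m^2]
    JR: float = 104e-6      # propeller inertia [kg m^2]
    g: float = 9.81         # gravity [m/s^2]
    b: float = 52.5e-6      # thrust coefficient
    c: float = 1.15e-6      # drag coefficient [N s^2 rad^-2]

p = QuadcopterParams()
# Assumed thrust model T = b * sum(w_i^2); hovering requires T = m * g,
# shared equally by the four rotors.
w_hover = sqrt(p.m * p.g / (4.0 * p.b))
print(f"hover rotor speed ~ {w_hover:.1f} rad/s")
```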
Table 3. Collision-free trajectories generated by object detection for the frame shown in Figure 6a.
| Region | Vector Equation | Parametric Equations | Symmetric Equations |
|---|---|---|---|
| R_k | f_k = (0, 0, 0) + t(x_k, y_k, z_k) (m) | x = x_k·t, y = y_k·t, z = z_k·t (m) | x/x_k = y/y_k = z/z_k (m) |
| R2 | f2 = (0, 0, 0) + t(10, 2.4, 0.613) | x2 = 10t, y2 = 2.4t, z2 = 0.613t | x2/10 = y2/2.4 = z2/0.613 |
| R3 | f3 = (0, 0, 0) + t(10, 2.4, 1.838) | x3 = 10t, y3 = 2.4t, z3 = 1.838t | x3/10 = y3/2.4 = z3/1.838 |
| R4 | f4 = (0, 0, 0) + t(10, 0.8, 1.838) | x4 = 10t, y4 = 0.8t, z4 = 1.838t | x4/10 = y4/0.8 = z4/1.838 |
| R5 | f5 = (0, 0, 0) + t(10, 0.8, 1.838) | x5 = 10t, y5 = 0.8t, z5 = 1.838t | x5/10 = y5/0.8 = z5/1.838 |
| R6 | f6 = (0, 0, 0) + t(10, 2.4, 1.838) | x6 = 10t, y6 = 2.4t, z6 = 1.838t | x6/10 = y6/2.4 = z6/1.838 |
| R7 * | f7 = (0, 0, 0) + t(10, 2.4, 0.613) | x7 = 10t, y7 = 2.4t, z7 = 0.613t | x7/10 = y7/2.4 = z7/0.613 |

* The vector f7 is marked in blue because it is used as the reference in the UAV controller.
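Each row of Table 3 follows directly from the central coordinates (x_k, y_k, z_k) of a collision-free region: the line starts at the drone, taken as the origin, so the vector, parametric, and symmetric forms differ only in notation. A small helper that prints the three forms for a given region centre is sketched below; the function name is introduced here for illustration, and the coordinate values passed to it are those listed for R7.

```python
def trajectory_equations(name, xk, yk, zk):
    """Return the vector, parametric, and symmetric equations (in metres)
    of the line from the drone at the origin to the region centre."""
    vector = f"f{name} = (0, 0, 0) + t({xk}, {yk}, {zk})"
    parametric = f"x{name} = {xk}t, y{name} = {yk}t, z{name} = {zk}t"
    # The symmetric form divides out the parameter t; it is not defined for
    # components equal to zero (e.g. region R1 in Table 4).
    symmetric = f"x{name}/{xk} = y{name}/{yk} = z{name}/{zk}"
    return vector, parametric, symmetric

# Central coordinates of region R7 as listed in Table 3.
for equation in trajectory_equations("7", 10, 2.4, 0.613):
    print(equation)
```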
Table 4. Collision-free trajectories generated by the TGBOA algorithm using the frame shown in Figure 8a.
| Region | Vector Equation | Parametric Equations | Symmetric Equations |
|---|---|---|---|
| R_k | f_k = (0, 0, 0) + t(x_k, y_k, z_k) (m) | x = x_k·t, y = y_k·t, z = z_k·t (m) | x/x_k = y/y_k = z/z_k (m) |
| R1 * | f1 = (0, 0, 0) + t(10, 0, 0) | x1 = 10t, y1 = 0t, z1 = 0t | x1/10 = y1/0 = z1/0 |
| R3 | f3 = (0, 0, 0) + t(10, 2.4, 1.838) | x3 = 10t, y3 = 2.4t, z3 = 1.838t | x3/10 = y3/2.4 = z3/1.838 |
| R4 | f4 = (0, 0, 0) + t(10, 0.8, 1.838) | x4 = 10t, y4 = 0.8t, z4 = 1.838t | x4/10 = y4/0.8 = z4/1.838 |
| R5 | f5 = (0, 0, 0) + t(10, 0.8, 1.838) | x5 = 10t, y5 = 0.8t, z5 = 1.838t | x5/10 = y5/0.8 = z5/1.838 |
| R6 | f6 = (0, 0, 0) + t(10, 2.4, 1.838) | x6 = 10t, y6 = 2.4t, z6 = 1.838t | x6/10 = y6/2.4 = z6/1.838 |
| R7 | f7 = (0, 0, 0) + t(10, 2.4, 0.613) | x7 = 10t, y7 = 2.4t, z7 = 0.613t | x7/10 = y7/2.4 = z7/0.613 |
| R8 | f8 = (0, 0, 0) + t(10, 2.4, 0.613) | x8 = 10t, y8 = 2.4t, z8 = 0.613t | x8/10 = y8/2.4 = z8/0.613 |

* The vector f1 is marked in blue because it is used as the reference in the UAV controller.
Table 5. Collision-free trajectories generated by the TGBOA algorithm using the frame shown in Figure 10a.
| Region | Vector Equation | Parametric Equations | Symmetric Equations |
|---|---|---|---|
| R_k | f_k = (0, 0, 0) + t(x_k, y_k, z_k) (m) | x = x_k·t, y = y_k·t, z = z_k·t (m) | x/x_k = y/y_k = z/z_k (m) |
| R2 | f2 = (0, 0, 0) + t(10, 2.4, 0.613) | x2 = 10t, y2 = 2.4t, z2 = 0.613t | x2/10 = y2/2.4 = z2/0.613 |
| R3 | f3 = (0, 0, 0) + t(10, 2.4, 1.838) | x3 = 10t, y3 = 2.4t, z3 = 1.838t | x3/10 = y3/2.4 = z3/1.838 |
| R4 | f4 = (0, 0, 0) + t(10, 0.8, 1.838) | x4 = 10t, y4 = 0.8t, z4 = 1.838t | x4/10 = y4/0.8 = z4/1.838 |
| R5 | f5 = (0, 0, 0) + t(10, 0.8, 1.838) | x5 = 10t, y5 = 0.8t, z5 = 1.838t | x5/10 = y5/0.8 = z5/1.838 |
| R6 | f6 = (0, 0, 0) + t(10, 2.4, 1.838) | x6 = 10t, y6 = 2.4t, z6 = 1.838t | x6/10 = y6/2.4 = z6/1.838 |
| R7 | f7 = (0, 0, 0) + t(10, 2.4, 0.613) | x7 = 10t, y7 = 2.4t, z7 = 0.613t | x7/10 = y7/2.4 = z7/0.613 |
| R9 | f9 = (0, 0, 0) + t(10, 2.4, 1.838) | x9 = 10t, y9 = 2.4t, z9 = 1.838t | x9/10 = y9/2.4 = z9/1.838 |
| R10 | f10 = (0, 0, 0) + t(10, 0.8, 1.838) | x10 = 10t, y10 = 0.8t, z10 = 1.838t | x10/10 = y10/0.8 = z10/1.838 |
| R11 | f11 = (0, 0, 0) + t(10, 0.8, 1.838) | x11 = 10t, y11 = 0.8t, z11 = 1.838t | x11/10 = y11/0.8 = z11/1.838 |
| R12 * | f12 = (0, 0, 0) + t(10, 2.4, 1.838) | x12 = 10t, y12 = 2.4t, z12 = 1.838t | x12/10 = y12/2.4 = z12/1.838 |
| R13 | f13 = (0, 0, 0) + t(10, 2.4, 0.613) | x13 = 10t, y13 = 2.4t, z13 = 0.613t | x13/10 = y13/2.4 = z13/0.613 |

* The vector f12 is marked in blue because it is used as the reference in the UAV controller.
Table 6. Comparison between related works and the TGBOA algorithm.
| Ref. | Relevant Aspect | Our Proposal |
|---|---|---|
| [19] | High image sampling and position-control signal rates; expensive hardware; requires prior knowledge or estimation of the objects in its spatial environment. | Only needs a vision sensor, and online trajectory references are generated and updated in real time. |
| [23] | Large amount of computational resources used by the CNN; three vision sensors needed; high-performance boards used to improve performance. | Improves efficiency by focusing the algorithm on object detection and drone position, thus reducing computational complexity. |
| [24] | Updating the environment requires considerable time and computational resources; dynamic scenarios are not considered. | Focuses on the position of moving objects and dynamic scenarios without relying on full-scenario reconstruction or feature extraction. |
| [25] | The optical flow constructed from the image sequence makes it difficult to achieve real-time performance on low-cost hardware; the presence of moving obstacles is not considered. | Easy to implement because it is based on the detection of objects in a changing scenario rather than on the optical-flow concept with the Fourier transform [38]. |