Article

An Innovative Collision-Free Image-Based Visual Servoing Method for Mobile Robot Navigation Based on the Path Planning in the Image Plan

1 Department of Electrical Engineering, College of Engineering, Jouf University, Sakakah 72388, Saudi Arabia
2 NOCCS Laboratory, National School of Engineering of Sousse, University of Sousse, Sousse 4054, Tunisia
3 Electrical Engineering Department, College of Engineering, University of Business and Technology, Jeddah 21589, Saudi Arabia
4 Engineering Mathematics Department, Faculty of Engineering, Alexandria University, Alexandria 5424041, Egypt
* Author to whom correspondence should be addressed.
Sensors 2023, 23(24), 9667; https://doi.org/10.3390/s23249667
Submission received: 22 September 2023 / Revised: 30 November 2023 / Accepted: 5 December 2023 / Published: 7 December 2023
(This article belongs to the Special Issue Mobile Robots for Navigation)

Abstract: In this article, we present an innovative approach to 2D visual servoing (IBVS), aiming to guide an object to its destination while avoiding collisions with obstacles and keeping the target within the camera's field of view. Our method relies solely on the visual data provided by a single monocular sensor. The fundamental idea is to manage and control the dynamics associated with any trajectory generated in the image plane. We show that the differential flatness of the system's dynamics can be used to impose arbitrary paths in the image plane, parameterized by the points on the object that need to be reached; this creates a link between the current configuration and the desired configuration. The number of required points depends on the number of control inputs of the robot used and determines the dimension of the flat output of the system. For a two-wheeled mobile robot, for instance, the coordinates of a single point on the object in the image plane are sufficient, whereas, for a quadcopter with four rotors, the trajectory needs to be defined by the coordinates of two points in the image plane. By guaranteeing precise tracking of the chosen trajectory in the image plane, we ensure that collisions with obstacles and loss of the target from the camera's field of view are avoided. Our approach is based on the principle of the inverse problem: once a point on the object is selected in the image plane, it will not be occluded by obstacles or leave the camera's field of view during movement. Admittedly, proposing an arbitrary trajectory in the image plane can lead to non-intuitive movements (back and forth) in the Cartesian plane. During backward motion, the robot may collide with obstacles as it navigates without direct vision. It is therefore essential to perform optimal trajectory planning that avoids backward movements. To assess the effectiveness of our method, our study focuses exclusively on the challenge of implementing the generated trajectory in the image plane within the specific context of a two-wheeled mobile robot. Numerical simulations illustrate the performance of the control strategy we have developed.

1. Introduction

In the context of 2D visual servoing, also known as image-based visual servoing (IBVS), one of the major challenges we face is obstacle avoidance. Our main objective is to use visual data from a 2D camera to control the movement of a robotic system, with the image the camera captures serving as our central reference point. The complexity of the task arises when we need to guide the robot to a specific destination while avoiding possible obstacles that may be in its path. This requires real-time analysis of visual data to detect the presence of obstacles, estimate their position and trajectory, and then adjust the robot’s path accordingly while ensuring that the visual target remains within the camera’s field of view (FOV). This task requires a combination of skills in computer vision, trajectory planning in the Cartesian plane, and control to ensure both environmental safety and mission accomplishment. In summary, obstacle avoidance in the context of 2D visual servoing is an essential and challenging task that requires an innovative and effective approach to ensure the success of robotic applications.
In many research studies, various approaches have been explored to solve this complex problem. For example, in [1], navigation is based on vanishing points to guide a quadcopter through corridors without colliding with walls. Other researchers, such as [2,3,4,5,6], have used different types of sensors, including stereo, monocular, and RGB-D cameras, to obtain three-dimensional information about the environment. They have also implemented techniques such as simultaneous localization and mapping (SLAM) to reconstruct maps of the environment, which are then used for motion planning and obstacle avoidance. Stereo cameras have also been used, as seen in the works of [7,8,9], combining visual, depth, and inertial information for navigation and obstacle avoidance. Studies like [10,11,12,13] have explored optical flow to find obstacles and estimate the speed difference between the robot and the obstacles. This is followed by trajectory planning and adaptive control to achieve navigation tasks.
In parallel, some studies have focused on the “teach and repeat” strategy, where a set of key images of the environment is captured, stored, and ordered for the robot’s navigation. This allows the robot to follow the same path multiple times, with examples such as [14,15,16,17,18,19]. A learning phase is required to capture the environment’s topology from these key images.
In a recent study, authors in [20] proposed a hierarchical visual servo control scheme for visual trajectory tracking of quadcopters in indoor environments. The control scheme is capable of handling three tasks, taking into account a hierarchy order: first, the collision avoidance task, then the visibility task, and finally, the visual task.
Path planning and tracking in the control loop can address the above issues of IBVS and deal with complex scenes, including obstacles. Ideally, the idea is to generate a trajectory in the image plane and force the robot to follow it. According to [21], this is an extremely complex problem. To date, no control law generated, in a deterministic way, from a trajectory imposed in the image plane has been proposed. Despite this, many efforts are being made. In [21], when the target is very far from the robot, the visual servoing algorithm may diverge. As a solution, the authors create a sequence of images between the image captured at the initial instant and the desired (final) image. They use a trajectory in 3D Cartesian space, which they project into the image plane. The trajectory is generated using the potential field method. Refs. [22,23] proposed a driving assistance system in which a trajectory in the image plane is generated. The outcomes of these two projects are used in a teleoperation context. The works presented in [24,25,26,27,28,29] seek to find a feasible trajectory in Cartesian space (3D) while avoiding obstacles and guaranteeing the visibility of the object in the image plane. An estimation of the robot's pose and a priori knowledge of the obstacles are necessary. Once the trajectory is generated, an IBVS control algorithm called the Image-Based Tracker is used to follow this trajectory. In [30], a 2D visual servoing method without camera calibration, based on homographic projection, is proposed. This method uses another form of the interaction matrix based on the homographic projection onto a horizontal plane. The control law resembles that of classical visual servoing, with the interaction matrix replaced by the proposed matrix. Since the proposed method does not allow direct control of the robot's motion, leading to undesirable movements, Ref. [30] suggested adding a trajectory optimization method to the homographic projection. The idea is to generate a feasible trajectory in the projective homographic space. Subsequently, the large initial error is divided into small errors used to follow the discretized trajectory. The optimization of the trajectory is performed in the image plane. Ref. [31] attempted to determine an optimal trajectory using dynamic programming. The possible trajectories are divided into several smaller trajectories, and a cost function is integrated to penalize undesirable sub-trajectories. Constraints are added on the borders of the image (to guarantee the visibility of the scene), on the limits of the control, and on the limits of the robot's joints. In that work, the authors did not consider the presence of obstacles; handling them would require a priori knowledge of their location in the Cartesian coordinate system. A navigation algorithm using visual memory is proposed in [32]. A keyframe topology map is generated from the movement of a camera over a model of the real scene. During navigation, a sequential comparison of stored images with captured images is performed. The desired trajectory here is defined by a number of waypoints, each taking the form of a desired intermediate image. No direct planning in the image plane is performed.
In this paper, we propose a new method of path planning and tracking in the image plane to perform visual servoing of a differential-drive mobile robot using the concept of differential flatness. For the first time in the field of visual servoing based on path planning, it becomes possible to generate any trajectory in the image plane and ensure its tracking while guaranteeing the controllability of the robot, the mechanical feasibility of the motion, high tracking accuracy, and control of position, velocity, and acceleration along any desired trajectory and, consequently, of the robot itself.
We show that using the differential flatness of the system's dynamics lets us impose arbitrary trajectories in the image plane, parameterized by the points the object needs to reach, creating a link between the current configuration and the desired configuration. The number of required points depends on the number of control inputs of the robot being used and determines the dimension of the flat output of the system. For a two-wheeled mobile robot, for instance, the coordinates of a single point on the object in the image plane are sufficient, whereas, for a quadrotor with four rotors, the trajectory needs to be defined by the coordinates of two points in the image plane. By ensuring precise tracking of the chosen trajectory in the image plane, we ensure that collisions with obstacles and departures from the camera's field of view are avoided. Our approach is based on the inverse problem principle: once a point on the object is selected in the image plane, it will in no way be occluded by obstacles or exit the camera's field of view during movement. Admittedly, proposing an arbitrary trajectory in the image plane can result in non-intuitive movements (back and forth) in the Cartesian plane. During backward movement, the robot may collide with obstacles as it navigates without direct vision. It is therefore essential to perform optimal path planning that avoids backward movements. To evaluate the effectiveness of our method, we focus on the specific case of a two-wheeled mobile robot. Numerical simulations are presented to demonstrate the efficiency of the proposed control strategy.
The following points outline the major contributions of this work:
The proposed trajectory planning algorithm includes, implicitly, a permanent guarantee of target visibility and thus the impossibility of encountering an obstacle while ensuring robot controllability.
The problem of a distant target no longer arises. The FOV is guaranteed even in the presence of obstacles using only a single monocular sensor.
The integration of multiple sensors may require complex data fusion algorithms to combine information from different sources. With a single sensor, software complexity is reduced, which can simplify system development and maintenance.
In the case of a mobile robot, the trajectory of a single point on the target object is sufficient. Any curve in the image plane between the two boundary points (initial and desired) can be ensured.
Using a single point on the object as a descriptor provides greater robustness to the detection and recognition phases.
Since this trajectory is defined by a certain number of points (called collocation points), we can introduce the notion of time between these points. This means that we impose a dynamic on the trajectory (in position and velocity) and thus on the robot.
The imposed trajectory is physically feasible thanks to the concept of differential flatness (there is equivalence between differential flatness and controllability).
No a priori knowledge of the 3D environment is necessary.
The proposed algorithm is able to ensure robust tracking of the imposed trajectory.
The time required to complete this trajectory can be freely fixed (within the limits of the robot's physical constraints).
The paper’s structure is as follows: Section 2 exposes the problem formulation. In Section 3, we present the general control law design, including the flatness model, path planning, and path tracking process. The synthesis of the mobile robot control law is detailed in Section 4. Finally, Section 5 discusses simulation results that prove the effectiveness of the proposed method.

2. Problem Formulation

2.1. Differential Mobile Robot

The mobile robot considered in this work is of the unicycle type, shown schematically in Figure 1. This robot is equipped with two independently controlled drive wheels and a passive free wheel that ensures its stability.
$(x_r, y_r)$ are the coordinates of the midpoint between the two driving wheels, $\theta$ is the orientation of the robot, $\omega_1$ and $\omega_2$ are the angular velocities of the two driving wheels, $l$ is the distance between the two driving wheels, and $L$ is the diameter of a drive wheel. It is assumed that the robot moves without slipping. The robot kinematic model can be expressed as follows:
$$\dot{x}_r = v_r \cos\theta, \qquad \dot{y}_r = v_r \sin\theta, \qquad \dot{\theta} = \omega_r \tag{1}$$
where $v_r$ is the linear speed of the robot, given by
$$v_r = \frac{L}{4}\left(\omega_1 + \omega_2\right), \tag{2}$$
and $\omega_r$ is the angular velocity of the robot, expressed by
$$\omega_r = \frac{L}{2l}\left(\omega_2 - \omega_1\right). \tag{3}$$
The mobile robot state is given by $q = \left[x_r, y_r, \theta\right]^T$, and the two control inputs are $v_r$ and $\omega_r$.
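To make the kinematic model concrete, the following minimal Python sketch (our illustration, not code from the original work) integrates Equations (1)–(3) with a forward-Euler step; the wheel diameter, wheel separation, time step, and wheel speeds are arbitrary example values.

```python
import numpy as np

def unicycle_step(q, omega1, omega2, L=0.2, l=0.3, dt=0.01):
    """One Euler step of the kinematic model (1)-(3).

    q         : state [x_r, y_r, theta]
    omega1,2  : wheel angular velocities (rad/s)
    L         : wheel diameter (m); l: distance between the wheels (m)
    """
    v_r = (L / 4.0) * (omega1 + omega2)        # Equation (2)
    w_r = (L / (2.0 * l)) * (omega2 - omega1)  # Equation (3)
    x, y, th = q
    return np.array([x + dt * v_r * np.cos(th),   # Equation (1)
                     y + dt * v_r * np.sin(th),
                     th + dt * w_r])

# Example: constant, unequal wheel speeds produce a circular arc.
q = np.zeros(3)
for _ in range(1000):
    q = unicycle_step(q, omega1=4.0, omega2=5.0)
print(q)
```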

2.2. Relations between Coordinate Systems

According to Figure 1, the robot is equipped with a camera. By utilizing the following homogeneous transformation matrix, we can calculate the robot–camera coordinate system transformation:
$$T_c^r = \begin{bmatrix} R_c^r & t_c^r \\ 0 & 1 \end{bmatrix}, \tag{4}$$
where $T_c^r \in \mathbb{R}^{4\times4}$, and $R_c^r = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \in \mathbb{R}^{3\times3}$ is the rotation matrix.
$t_c^r = \left[t_x, t_y, t_z\right]^T \in \mathbb{R}^3$ is the position vector, where $t_x$, $t_y$, $t_z$ are the relative displacements between the robot coordinate system and the camera coordinate system. Since the camera is fixed on the base of the mobile robot, the robot–camera velocity relationship can be expressed as follows:
$$\zeta_r^r = \left[0,\ 0,\ v_r,\ 0,\ \omega_r,\ 0\right]^T = \begin{bmatrix} R_c^r & s\!\left(t_c^r\right) R_c^r \\ 0_{3\times3} & R_c^r \end{bmatrix} \zeta_c^c. \tag{5}$$
In Equation (5), $\zeta_c^c = \left[v_x, v_y, v_z, \omega_x, \omega_y, \omega_z\right]^T$ represents the velocity of the camera expressed in the camera coordinate system, $\zeta_r^r$ is the velocity of the robot expressed in the robot coordinate system, $s\!\left(t_c^r\right)$ is the skew-symmetric matrix of the vector $t_c^r$, and $v_r$ and $\omega_r$ are, respectively, the translational and rotational velocities of the robot.
By simplifying Equation (5), we obtain
$$\begin{bmatrix} v_r \\ \omega_r \end{bmatrix} = \begin{bmatrix} r_{33} & r_{22}\, t_x - r_{12}\, t_y \\ 0 & r_{22} \end{bmatrix} \begin{bmatrix} v_c \\ \omega_c \end{bmatrix}, \tag{6}$$
where $\left[v_c\ \ \omega_c\right]^T$ is the camera velocity vector expressed in the camera coordinate system. The rotational velocities of the two drive wheels are given by the following expression:
$$\begin{bmatrix} \omega_1 \\ \omega_2 \end{bmatrix} = \begin{bmatrix} 2/L & -\,l/L \\ 2/L & l/L \end{bmatrix} \begin{bmatrix} v_r \\ \omega_r \end{bmatrix}. \tag{7}$$
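As a quick numerical sanity check of Equation (7) (a sketch we add here, under the assumption that (7) is the inverse of the velocity relations (2) and (3)), the snippet below maps the robot velocities to wheel speeds and verifies that applying (2)–(3) to the result recovers the original velocities.

```python
import numpy as np

L, l = 0.2, 0.3               # wheel diameter and wheel separation (example values)
v_r, w_r = 0.5, 0.8           # desired robot linear and angular velocities

# Closed-form inverse of Equations (2)-(3), i.e., Equation (7).
M = np.array([[2.0 / L, -l / L],
              [2.0 / L,  l / L]])
omega = M @ np.array([v_r, w_r])            # [omega_1, omega_2]

# Check: mapping the wheel speeds back through (2)-(3) recovers (v_r, w_r).
A = np.array([[L / 4.0,        L / 4.0],
              [-L / (2.0 * l), L / (2.0 * l)]])
print(omega, A @ omega)                     # second vector ~ [0.5, 0.8]
```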

2.3. Perspective Transformation Model

Figure 2 shows the perspective transformation model.
Let $P$ be any point in space, and let $p$ be the projection of $P$ onto the 2D plane called $\pi$. $O$ is the center of projection, $o$ is the intersection between the optical axis passing through $O$ and the image plane, $f$ is the focal length of the camera, and $R_c$ is the camera coordinate system. Let $\mathbf{X} = \left(X_c, Y_c, Z_c\right)$ be the coordinates of the point $P$ in the 3D Cartesian coordinate system; the projection of this point onto the plane $\pi$ is located at $p$, whose coordinates $(x, y)$ in the plane $\pi$ are expressed in mm. These coordinates are given by the following relations:
$$x = \frac{X_c}{Z_c} = \frac{u - c_u}{f\, \alpha_u}, \qquad y = \frac{Y_c}{Z_c} = \frac{v - c_v}{f\, \alpha_v}. \tag{8}$$
The pair $(u, v)$ represents the coordinates of the image point $p$ expressed in pixels. $a = \left(c_u, c_v, f, \alpha_u, \alpha_v\right)$ represents the camera intrinsic parameters: $\left(c_u, c_v\right)$ are the coordinates of the principal point, $f$ is the focal distance, and $\alpha_u$, $\alpha_v$ are the vertical and horizontal scale factors expressed in pixel/mm. Taking the time derivative of Equation (8) yields
$$\dot{x} = \frac{\dot{X}_c}{Z_c} - \frac{X_c\, \dot{Z}_c}{Z_c^2} = \frac{\dot{X}_c - x\, \dot{Z}_c}{Z_c}, \qquad \dot{y} = \frac{\dot{Y}_c}{Z_c} - \frac{Y_c\, \dot{Z}_c}{Z_c^2} = \frac{\dot{Y}_c - y\, \dot{Z}_c}{Z_c}. \tag{9}$$
We can relate the velocity of the 3D point $P$ to the velocity of the camera through
$$\dot{\mathbf{X}} = -\,v_c - \omega_c \times \mathbf{X}. \tag{10}$$
Equation (10) can be expanded to obtain the following form:
$$\dot{X}_c = -\,v_x - \omega_y Z_c + \omega_z Y_c, \qquad \dot{Y}_c = -\,v_y - \omega_z X_c + \omega_x Z_c, \qquad \dot{Z}_c = -\,v_z - \omega_x Y_c + \omega_y X_c. \tag{11}$$
Using Equations (9) and (11) we can write
$$\dot{x} = -\frac{v_x}{Z_c} + \frac{x\, v_z}{Z_c} + x y\, \omega_x - \left(1 + x^2\right) \omega_y + y\, \omega_z, \qquad \dot{y} = -\frac{v_y}{Z_c} + \frac{y\, v_z}{Z_c} + \left(1 + y^2\right) \omega_x - x y\, \omega_y - x\, \omega_z. \tag{12}$$
We can deduce the following compact form:
$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = L_s \cdot V, \tag{13}$$
where  L s  is the interaction matrix, commonly known as the image’s Jacobian:
$$L_s = \begin{bmatrix} -\dfrac{1}{Z_c} & 0 & \dfrac{x}{Z_c} & x y & -\left(1 + x^2\right) & y \\[4pt] 0 & -\dfrac{1}{Z_c} & \dfrac{y}{Z_c} & 1 + y^2 & -x y & -x \end{bmatrix}, \tag{14}$$
$L_s$ links the variation in the position of the point $p$ (and, consequently, the variation of its pixel coordinates $(u, v)$) to the velocity of the camera $V = \left[v_x, v_y, v_z, \omega_x, \omega_y, \omega_z\right]^T$. In the interaction matrix expression, $Z_c$ represents the depth of the point $P$; any control law using this form of the interaction matrix must first estimate the value of $Z_c$. The intrinsic parameters of the camera are involved in the calculation of $x$ and $y$. Since the robot moves in the 2D plane, with its linear velocity along the $Z_c$ axis and its angular velocity about the $Y_c$ axis, the interaction matrix can be reduced as follows:
$$L_s = \begin{bmatrix} \dfrac{x}{Z_c} & -\left(1 + x^2\right) \\[4pt] \dfrac{y}{Z_c} & -x y \end{bmatrix}. \tag{15}$$
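To make Equations (8), (14), and (15) concrete, here is a small illustrative sketch (ours, not the authors' code) that converts a pixel measurement to normalized coordinates and builds both the full and the reduced interaction matrices; the intrinsic parameters and the example pixel coordinates are the values used later in Section 5.

```python
import numpy as np

def pixel_to_normalized(u, v, cu=512.0, cv=512.0, f=0.718, au=800.0, av=800.0):
    """Equation (8): pixel coordinates -> normalized image coordinates."""
    return (u - cu) / (f * au), (v - cv) / (f * av)

def interaction_matrix(x, y, Zc):
    """Full 2x6 interaction matrix of a point feature, Equation (14)."""
    return np.array([
        [-1.0 / Zc, 0.0, x / Zc, x * y, -(1.0 + x**2),  y],
        [0.0, -1.0 / Zc, y / Zc, 1.0 + y**2, -x * y,   -x]])

def reduced_interaction_matrix(x, y, Zc):
    """Reduced 2x2 matrix of Equation (15): only v_z and omega_y act."""
    return np.array([[x / Zc, -(1.0 + x**2)],
                     [y / Zc, -x * y]])

x, y = pixel_to_normalized(472.0, 552.0)   # one corner of the object (Section 5)
print(reduced_interaction_matrix(x, y, Zc=2.5))
```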

3. Proposed Control Law Design

3.1. Flatness of a Model

A system described by the equation
$$\Phi\left(\dot{X}(t), X(t), u(t)\right) = 0, \tag{16}$$
where $X(t)$ denotes the state and $u(t)$ denotes the command, is flat if there exists a vector $z(t)$ such that
$$z(t) = h\left(X(t), u(t), \dot{u}(t), \ldots, u^{(\delta)}(t)\right), \tag{17}$$
whose components are differentially independent, and if there exist two functions $A(\cdot)$ and $B(\cdot)$ such that
$$X(t) = A\left(z(t), \dot{z}(t), \ldots, z^{(\alpha)}(t)\right), \tag{18}$$
$$u(t) = B\left(z(t), \dot{z}(t), \ldots, z^{(\beta)}(t)\right), \tag{19}$$
where $\alpha$, $\beta$, and $\delta$ are three finite integers. This formulation refers to the vector $z(t)$ as the flat output of the system. Through the functions $A(\cdot)$ and $B(\cdot)$, this flat output is composed of a set of variables that parameterizes all other system variables: the state, the command, and also the output $y(t)$. Indeed, if the output of the system is defined by a relation of the form $y(t) = \Psi\left(X(t), u(t), \ldots, u^{(p)}(t)\right)$, then the quantities described in (18) and (19) necessarily imply that there exists an integer $\gamma$ such that
$$y(t) = C\left(z(t), \ldots, z^{(\gamma)}(t)\right). \tag{20}$$
The flat output groups all of the free (unconstrained) variables of the system, since the components of $z(t)$ are differentially independent. However, we can also consider, on the basis of Equation (17), that the flat output $z(t)$ depends only on the state and the command. This makes it an endogenous variable of the system, in contrast to the state of an observer, which would be an example of an exogenous variable of the observed system. In addition, the Lie–Bäcklund notion of differential equivalence [33] shows that the number of components of $z(t)$ is the same as the number of components of the command:
$$\dim z(t) = \dim u(t). \tag{21}$$
This basic property tells us how many free variables must be found in a model to show that it is flat. One of the advantages of the flatness property is that the previous definition is not restricted to state-space models but applies to any model of the form
$$\Phi\left(X^{(n)}(t), \ldots, \dot{X}(t), X(t), u^{(m)}(t), \ldots, \dot{u}(t), u(t)\right) = 0. \tag{22}$$
This means that we can work directly with the equations that describe how the system behaves; there is no need to rewrite them as a state-space model.

3.2. Path Planning in the Image Plane

From Equation (19), if we want the flat system described by Equation (16) to follow the trajectory $z_d(t)$ for $t$ from $t_0$ to $t_f$, it is sufficient to apply, on the same time segment, the open-loop control given by
$$u_d(t) = B\left(z_d(t), \dot{z}_d(t), \ldots, z_d^{(\beta)}(t)\right). \tag{23}$$
Assuming a perfect model, we will then have, for $t$ from $t_0$ to $t_f$, $z(t) = z_d(t)$, and therefore
$$X(t) = X_d(t) = A\left(z_d(t), \dot{z}_d(t), \ldots, z_d^{(\alpha)}(t)\right), \tag{24}$$
$$y(t) = y_d(t) = C\left(z_d(t), \dot{z}_d(t), \ldots, z_d^{(\gamma)}(t)\right). \tag{25}$$
The only constraint is that the desired trajectory of the flat output must be at least $\max(\alpha, \beta, \gamma)$ times differentiable on $\left[t_0, t_f\right]$. To satisfy this differentiability constraint along the entire trajectory, we generally use, without loss of generality, piecewise polynomial trajectories, interpolation polynomials, or $C^{\infty}$ functions, most of the time with continuity conditions at the start and at the arrival. We can also impose waypoints or reversal points, or plan avoidance trajectories according to possible events.
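As a minimal sketch of this planning step (our own illustration, not part of the original paper), the coefficients of a polynomial flat-output trajectory satisfying rest-to-rest boundary conditions can be obtained by solving a small linear system; the degree, endpoints, and time interval below are arbitrary examples.

```python
import numpy as np

def rest_to_rest_poly(z0, zf, t0, tf, n_der=2):
    """Coefficients of a polynomial z(t) with z(t0)=z0, z(tf)=zf and the
    first n_der derivatives equal to zero at both endpoints
    (degree 2*n_der + 1)."""
    deg = 2 * n_der + 1
    A, b = [], []
    for t, z in ((t0, z0), (tf, zf)):
        for k in range(n_der + 1):
            # k-th derivative of sum_j c_j t^j evaluated at t
            row = [np.prod(range(j - k + 1, j + 1)) * t**(j - k) if j >= k else 0.0
                   for j in range(deg + 1)]
            A.append(row)
            b.append(z if k == 0 else 0.0)
    return np.linalg.solve(np.array(A), np.array(b))   # c_0 ... c_deg

coeffs = rest_to_rest_poly(0.0, 1.0, t0=0.0, tf=2.0, n_der=2)
print(np.polyval(coeffs[::-1], [0.0, 1.0, 2.0]))       # ~ [0, 0.5, 1]
```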

3.3. Asymptotic Trajectory Pursuit

With knowledge of Equation (19), the following command is proposed:
$$u(t) = B\left(z(t), \dot{z}(t), \ldots, z^{(\beta-1)}(t), \vartheta(t)\right), \tag{26}$$
where $\vartheta(t)$ is a new command. When $\partial B(\cdot)/\partial z^{(\beta)}$ is locally invertible, this leads to the following decoupled system:
$$z^{(\beta)}(t) = \vartheta(t). \tag{27}$$
This result should be compared with the feedback linearization and decoupling of nonlinear systems, which are always conditioned by the stability of the system zeros [34]. Indeed, we obtain here an unconditional decoupling and linearization (note that this property is at the origin of the choice of the term flatness). However, an additional stabilization loop is obviously necessary; $\vartheta(t)$ then becomes
$$\vartheta(t) = z_d^{(\beta)}(t) + \sum_{i=0}^{\beta-1} k_i \left(z_d^{(i)}(t) - z^{(i)}(t)\right). \tag{28}$$
Let $K(p) = p^{\beta} + \sum_{i=0}^{\beta-1} k_i\, p^{i}$ be a diagonal matrix whose elements are polynomials whose roots have negative real parts; $u(t)$ then becomes
$$u = B\left(z, \ldots, z^{(\beta-1)}, z_d^{(\beta)}(t) + \sum_{i=0}^{\beta-1} k_i \left(z_d^{(i)}(t) - z^{(i)}(t)\right)\right), \tag{29}$$
or
$$u = \Phi\left(z, \ldots, z^{(\beta-1)}, K(p)\, z_d(t)\right), \tag{30}$$
which makes it possible to ensure an asymptotic trajectory pursuit with
$$\lim_{t \to \infty} \left(z_d(t) - z(t)\right) = 0. \tag{31}$$
As $z(t)$ and all its derivatives are endogenous variables of the process, the feedback $u(t)$ is called an endogenous feedback.
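To illustrate Equations (27), (28), and (31) on the simplest possible case, the sketch below (our example, with $\beta = 2$ and a double integrator playing the role of the decoupled system) applies the endogenous feedback to an arbitrary smooth reference and shows the tracking error vanishing.

```python
import numpy as np

# Reference z_d(t) and its first two derivatives (arbitrary smooth example).
zd  = lambda t: np.sin(t)
zd1 = lambda t: np.cos(t)
zd2 = lambda t: -np.sin(t)

k0, k1, dt = 4.0, 4.0, 1e-3          # p^2 + k1 p + k0 has its roots in the left half-plane
z, z1 = 1.0, 0.0                     # initial condition away from the reference

for i in range(20000):
    t = i * dt
    # Equation (28) with beta = 2: new input for the decoupled system z'' = v.
    v = zd2(t) + k1 * (zd1(t) - z1) + k0 * (zd(t) - z)
    z, z1 = z + dt * z1, z1 + dt * v  # Euler integration of z'' = v, Equation (27)

print(abs(z - zd(20000 * dt)))       # tracking error -> ~0, cf. Equation (31)
```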

4. Synthesis of the Proposed Mobile Robot Control Law

We showed in [35] that the coordinates of a single point of the object in the image plane, $p(x, y)$, can act as a flat output for the mobile robot. To simplify the problem, we consider, in Figure 1, the camera coordinate system to coincide with the robot coordinate system. In this case, we can write
$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \begin{bmatrix} \dfrac{x}{Z_c} & -\left(1 + x^2\right) \\[4pt] \dfrac{y}{Z_c} & -x y \end{bmatrix} \begin{bmatrix} v_r \\ \omega_r \end{bmatrix}. \tag{32}$$
The two mobile robot control inputs are given by
$$\begin{bmatrix} v_r \\ \omega_r \end{bmatrix} = \begin{bmatrix} \dfrac{x}{Z_c} & -\left(1 + x^2\right) \\[4pt] \dfrac{y}{Z_c} & -x y \end{bmatrix}^{-1} \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}. \tag{33}$$
Writing the inverse of the interaction matrix explicitly, we obtain the following:
$$\begin{bmatrix} v_r \\ \omega_r \end{bmatrix} = \begin{bmatrix} -x Z_c & \dfrac{Z_c \left(1 + x^2\right)}{y} \\[4pt] -1 & \dfrac{x}{y} \end{bmatrix} \begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix}. \tag{34}$$
Utilizing the exact feedforward linearization based on differential flatness introduced by Hagenmeyer and Delaleau in [36], we can achieve an exact linearization. The resulting linearized system is equivalent to the following chain of integrators:
$$\dot{x} = \vartheta_x, \qquad \dot{y} = \vartheta_y, \tag{35}$$
where $\vartheta_x$ and $\vartheta_y$ are the two auxiliary control inputs to be specified, given by Equation (28), which enable the asymptotic pursuit of the planned trajectory. The final form of the control law for the mobile robot is as follows:
$$\begin{bmatrix} v_r \\ \omega_r \end{bmatrix} = \begin{bmatrix} -x Z_c & \dfrac{Z_c \left(1 + x^2\right)}{y} \\[4pt] -1 & \dfrac{x}{y} \end{bmatrix} \begin{bmatrix} \vartheta_x \\ \vartheta_y \end{bmatrix}. \tag{36}$$
The two auxiliary control inputs ensuring the asymptotic tracking of the desired trajectory are given by
$$\vartheta_x = \dot{x}^{*} + k_1 \left(x^{*} - x\right), \tag{37}$$
$$\vartheta_y = \dot{y}^{*} + k_2 \left(y^{*} - y\right). \tag{38}$$
Let us consider $e_x = x^{*} - x$ and $e_y = y^{*} - y$. The error dynamics can be written as follows:
$$\dot{e}_x + k_1 e_x = 0, \tag{39}$$
$$\dot{e}_y + k_2 e_y = 0, \tag{40}$$
where $k_1$ and $k_2$ are chosen so that the error dynamics are asymptotically stable. In this case, it suffices to take $k_1 > 0$ and $k_2 > 0$, which ensures asymptotic pursuit of the desired trajectory $\left(x^{*}, y^{*}\right)$.
Figure 3 shows the proposed control scheme.
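A compact implementation sketch of the control law (36)–(38) is given below (our illustration; the function name and the example numerical values are ours). Note that the inverse interaction matrix of Equation (34) is singular when $y = 0$, which is precisely the controllability issue discussed in Section 5.

```python
import numpy as np

def ibvs_flatness_control(x, y, x_ref, y_ref, dx_ref, dy_ref, Zc, k1=100.0, k2=100.0):
    """Control law of Equations (36)-(38): returns (v_r, omega_r).

    (x, y)           : current normalized coordinates of the tracked point
    (x_ref, y_ref)   : desired point on the planned image trajectory
    (dx_ref, dy_ref) : its time derivative
    Zc               : depth of the point (assumed known or estimated)
    """
    # Auxiliary inputs, Equations (37)-(38).
    vx = dx_ref + k1 * (x_ref - x)
    vy = dy_ref + k2 * (y_ref - y)
    # Inverse of the reduced interaction matrix, Equations (34)/(36);
    # singular when y = 0 (point on the horizontal axis of symmetry).
    Linv = np.array([[-x * Zc, Zc * (1.0 + x**2) / y],
                     [-1.0,    x / y]])
    return Linv @ np.array([vx, vy])

v_r, w_r = ibvs_flatness_control(0.05, 0.08, 0.06, 0.09, 0.001, 0.001, Zc=2.5)
print(v_r, w_r)
```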

5. Simulation Results

To demonstrate the efficiency of our control algorithm, we propose an arbitrary trajectory in the image plane that connects the initial and final positions of any point $P$ of the object, except those situated along the horizontal axis of symmetry of the object. Indeed, since the movement of the mobile robot is conducted in the 2D plane (presented in yellow in Figure 4), the horizontal axis of symmetry of the object remains invariant during the movement of the robot. Consequently, the dynamics of any point situated along this horizontal axis always remain zero or constant (unless the object is centered in the image). This situation contradicts the principle of differential flatness, which asserts that knowledge of the dynamics of the flat output allows the dynamics of all variables in the system to be deduced. If the dynamics along the y-axis remain zero or constant, a singularity arises, also known as a controllability problem. It is important to emphasize that it is possible to select any trajectory connecting the starting point to the endpoint, even one drawn freehand on the screen, using an interpolation algorithm to generate an analytical expression for this trajectory. In our specific example, we choose a polynomial trajectory due to the simplicity of calculating its derivatives.
Since we can impose a dynamic (in velocity and acceleration) on any desired trajectory, we propose a polynomial-type trajectory. Let $P_i = \left(x_i, y_i\right)$ be the initial position, in pixels, of a point $P$ of the object in the image plane at time $t_i$, and $P_f = \left(x_f, y_f\right)$ its final position at time $t_f$. Suppose we want the trajectory connecting these two points to pass through a maximum, for example the point with coordinates $\left(\frac{x_f + x_i}{2},\ 2 y_f - y_i\right)$, which represents the maximum of a curve between $y_i$ and $y_f$. It should be noted here that the initial situation and the desired final situation must be captured by the camera installed on the real robot. We propose the following dynamic: a slow start, an acceleration in the middle of the trajectory, and finally, a slow convergence. The desired trajectory $y^{*}\left(x^{*}\right)$ must therefore satisfy the following constraints:
$$y^{*}\left(x_i\right) = y_i, \quad y^{*}\left(x_f\right) = y_f, \quad y^{*}\!\left(\frac{x_f + x_i}{2}\right) = 2 y_f - y_i, \quad \left.\frac{d y^{*}}{d x}\right|_{\frac{x_f + x_i}{2}} = 0, \quad \left.\frac{d^2 y^{*}}{d x^2}\right|_{\frac{x_f + x_i}{2}} < 0. \tag{41}$$
As an example, we suggest the following polynomial equation in  x  that satisfies the conditions stated in (41):
$$y(x) = y_i + \left(y_f - y_i\right) \frac{x - x_i}{x_f - x_i} \left[\,9 - 12\, \frac{x - x_i}{x_f - x_i} + 4 \left(\frac{x - x_i}{x_f - x_i}\right)^{2}\right]. \tag{42}$$
Therefore, it is necessary to define a time variation $x(t)$ that satisfies the following boundary conditions:
$$x\left(t_i\right) = x_i, \quad \dot{x}\left(t_i\right) = 0, \quad \ldots, \quad x^{(5)}\left(t_i\right) = 0, \tag{43}$$
$$x\left(t_f\right) = x_f, \quad \dot{x}\left(t_f\right) = 0, \quad \ldots, \quad x^{(5)}\left(t_f\right) = 0. \tag{44}$$
This produces the following 11th-degree polynomial:
$$x(t) = x_i + \left(x_f - x_i\right) \sigma^{6}(t) \left[\,462 - 1980\, \sigma(t) + 3465\, \sigma^{2}(t) - 3080\, \sigma^{3}(t) + 1386\, \sigma^{4}(t) - 252\, \sigma^{5}(t)\right], \tag{45}$$
with
$$\sigma(t) = \frac{t - t_i}{t_f - t_i}. \tag{46}$$
The dynamics of the desired trajectory are described in Figure 5. Figure 5a,b present the desired position trajectory, which connects the two boundary points in the image plane and passes through a maximum. Figure 5c,d show the velocity dynamics of this trajectory, and Figure 5e,f give the acceleration dynamics. We impose on the robot a slow start, an acceleration in the middle of the trajectory, and finally, a slow convergence. The camera intrinsic parameters are taken as $\left(c_u, c_v\right) = \left(1024/2,\ 1024/2\right)$, $f = 0.718$, and $\left(\alpha_u, \alpha_v\right) = \left(800, 800\right)$.
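The desired trajectory of Equations (42), (45), and (46) can be generated as in the following sketch (ours; we assume the tracked point $P$ corresponds to the first column of $P_i$ and $P_d$ given in Section 5.1, and the planning is done here in pixel coordinates, to be converted with Equation (8) before use in the controller; the reference velocities are obtained by simple finite differences).

```python
import numpy as np

def sigma(t, ti, tf):                               # Equation (46)
    return (t - ti) / (tf - ti)

def x_of_t(t, xi, xf, ti, tf):                      # Equation (45)
    s = sigma(t, ti, tf)
    return xi + (xf - xi) * s**6 * (462 - 1980*s + 3465*s**2
                                    - 3080*s**3 + 1386*s**4 - 252*s**5)

def y_of_x(x, xi, xf, yi, yf):                      # Equation (42)
    s = (x - xi) / (xf - xi)
    return yi + (yf - yi) * s * (9 - 12*s + 4*s**2)

# Endpoints of point P in pixels (assumed: first columns of P_i / P_d).
ti, tf = 0.0, 10.0
ui, vi, uf, vf = 472.0, 472.0, 199.0, 406.0
t = np.linspace(ti, tf, 1001)
u_des = x_of_t(t, ui, uf, ti, tf)                   # horizontal pixel coordinate
v_des = y_of_x(u_des, ui, uf, vi, vf)               # vertical pixel coordinate
du_des, dv_des = np.gradient(u_des, t), np.gradient(v_des, t)   # reference velocities
```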

5.1. Simulation Conditions

We consider an object described by four points. Figure 5a shows this rectangular-shaped object in its initial position (the small blue rectangle) and in its final position (the large blue rectangle). Our algorithm uses only one point (denoted $P$ in Figure 5a) to generate the control law. The initial position $P_i$ and the desired position $P_d$ of the object in the image plane are given by the coordinates of the following characteristic points of the object (the coordinates of point $P$ appear in blue):
$$P_i = \begin{bmatrix} 472 & 472 & 552 & 552 \\ 472 & 552 & 552 & 472 \end{bmatrix},$$
$$P_d = \begin{bmatrix} 199 & 199 & 411 & 411 \\ 406 & 618 & 613 & 411 \end{bmatrix}.$$
The simulation parameters are chosen as follows: $L = 0.3\ \mathrm{m}$, $R = 0.1\ \mathrm{m}$, $k_1 = k_2 = 100$, $T = 10\ \mathrm{s}$, and $Z_c = 2.5\ \mathrm{m}$.

5.2. Results and Interpretations

The simulation results are given in Figure 6. Figure 6a shows the trajectories performed by the four points. The desired end position and the reached end position of the four points are:
$$P_d = \begin{bmatrix} 199 & 199 & 411 & 411 \\ 406 & 618 & 613 & 411 \end{bmatrix}, \qquad P_a = \begin{bmatrix} 199 & 199 & 412 & 412 \\ 406 & 618 & 612 & 410 \end{bmatrix}.$$
We notice that the two positions are almost identical; this confirms that a single object point is enough to perform 2D visual servoing in the case of a mobile robot. Figure 6b shows the performed trajectory and the desired trajectory of a point $P$ of the object in the image plane, and we observe a high accuracy of the tracking process. The error between the performed trajectory and the desired trajectory does not exceed 0.3% (Figure 6c). Figure 6d shows the movement performed by the robot in the Cartesian plane. We note that the robot moves back and forth before reaching its final position, which is expected since we imposed an arbitrary trajectory in the image plane, which necessarily generates an arbitrary trajectory in the Cartesian plane. Figure 6e shows the two control inputs $v_r$ and $\omega_r$ applied to the robot. Both controls are continuous and smooth. We also notice that the two control laws begin with a slow variation (a slow start of the robot), followed by an acceleration in the middle and a slow convergence at the end. This confirms that we control the dynamics of the robot via the choice of the desired trajectory in the image plane. The variations in the rotational speeds of the two driving wheels are given in Figure 6f.
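For readers who wish to reproduce the image-plane part of these results, the sketch below (our illustration, reusing the functions defined in the earlier sketches) closes the loop by integrating the feature kinematics (32) under the control law (36)–(38). For simplicity, the depth $Z_c$ is held constant, so this only checks the image-plane tracking loop, not the full 3D simulation reported in the paper.

```python
import numpy as np

# Reuses pixel_to_normalized, reduced_interaction_matrix, ibvs_flatness_control,
# x_of_t, and y_of_x from the earlier sketches; intrinsics as given in Section 5.
ti, tf, dt, Zc, k1, k2 = 0.0, 10.0, 1e-3, 2.5, 100.0, 100.0
t_grid = np.arange(ti, tf + dt, dt)

# Desired pixel trajectory of point P (first columns of P_i / P_d assumed).
u_des = x_of_t(t_grid, 472.0, 199.0, ti, tf)
v_des = y_of_x(u_des, 472.0, 199.0, 472.0, 406.0)
x_des, y_des = pixel_to_normalized(u_des, v_des)              # Equation (8)
dx_des, dy_des = np.gradient(x_des, t_grid), np.gradient(y_des, t_grid)

# Closed loop: integrate the feature kinematics (32) under the control (36)-(38).
# Small initial offset, kept on the same side of the image's horizontal axis
# so that the y = 0 singularity of Equation (34) is never crossed.
x, y = pixel_to_normalized(478.0, 466.0)
for i in range(len(t_grid) - 1):
    v_r, w_r = ibvs_flatness_control(x, y, x_des[i], y_des[i],
                                     dx_des[i], dy_des[i], Zc, k1, k2)
    xdot, ydot = reduced_interaction_matrix(x, y, Zc) @ np.array([v_r, w_r])
    x, y = x + dt * xdot, y + dt * ydot

print(x - x_des[-1], y - y_des[-1])   # residual tracking errors (close to zero)
```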

6. Conclusions

In this article, we introduced a novel approach to 2D visual servoing in complex environments containing obstacles. Our method only makes use of visual data obtained by a single monocular sensor. The fundamental idea is to manage and control the dynamics associated with any trajectory generated in the image plane. By ensuring precise tracking of the selected trajectory in the image plane, we prevent issues such as collisions with obstacles and loss of camera visibility. In the specific context of this work, which focuses on a mobile robot, we have demonstrated using the concept of differential flatness that the use of a single point on the object in the image plane as a descriptor is sufficient. In the future, this approach could be extended to robots with multiple degrees of freedom, such as manipulator arms or quadcopters, using other points on the object to represent the flat output dimension of the robot. It is important to note that proposing an arbitrary trajectory in the image plane can lead to non-intuitive movements, including back-and-forth motions, in the Cartesian plane. This can create a problem, especially when the robot is moving backward, which could result in collisions with obstacles as the robot moves without direct vision. Therefore, in future work, it would be essential to develop optimal trajectory planning that avoids backward movements. We were able to impose a dynamic in position, velocity, and acceleration on the robot by using the concept of flatness. It thus becomes possible to decide on any trajectory in the image plane of a primitive initially visible by the robot so that the robot executes the necessary movement that guarantees the exact following of this trajectory. In this paper, we were interested in visual servoing in the case of a static target. Since we have demonstrated that it is possible to control positions, velocities, and accelerations in the image plane, we can also use the proposed concept to carry out dynamic visual servoing.

Author Contributions

Conceptualization, M.A. and H.M.; methodology, M.A. and H.M.; software, K.K.; validation, K.K., H.M., M.A. and A.Y.; formal analysis, K.K.; investigation, M.A.; resources, K.K. and H.M.; writing—original draft preparation, M.A.; writing—review and editing, H.M. and A.Y.; visualization, K.K.; supervision, K.K. and H.M.; project administration, M.A.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Deanship of Scientific Research at Jouf University under grant No. DSR2022-NF-06.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Garcia, A.; Mattison, E.; Ghose, K. High-Speed Vision-Based Autonomous Indoor Navigation of a Quadcopter. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015. [Google Scholar] [CrossRef]
  2. Iacono, M.; Sgorbissa, A. Path Following and Obstacle Avoidance for an Autonomous UAV Using a Depth Camera. Robot. Auton. Syst. 2018, 106, 38–46. [Google Scholar] [CrossRef]
  3. Mercado, D.; Castillo, P.; Lozano, R. Sliding Mode Collision-Free Navigation for Quadrotors Using Monocular Vision. Robotica 2018, 36, 1493–1509. [Google Scholar] [CrossRef]
  4. Park, J.; Kim, Y. Collision Avoidance for Quadrotor Using Stereo Vision Depth Maps. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 3226–3241. [Google Scholar] [CrossRef]
  5. Yang, X.; Chen, J.; Dang, Y.; Luo, H.; Tang, Y.; Liao, C.; Chen, P.; Cheng, K.-T. Fast Depth Prediction and Obstacle Avoidance on a Monocular Drone Using Probabilistic Convolutional Neural Network. IEEE Trans. Intell. Transp. Syst. 2021, 22, 156–167. [Google Scholar] [CrossRef]
  6. Roy, R.; Tu, Y.-P.; Sheu, L.-J.; Chieng, W.-H.; Tang, L.-C.; Ismail, H. Path Planning and Motion Control of Indoor Mobile Robot under Exploration-Based SLAM (e-SLAM). Sensors 2023, 23, 3606. [Google Scholar] [CrossRef] [PubMed]
  7. Lin, J.; Zhu, H.; Alonso-Mora, J. Robust Vision-Based Obstacle Avoidance for Micro Aerial Vehicles in Dynamic Environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA) 2020, Paris, France, 31 May–31 August 2020. [Google Scholar] [CrossRef]
  8. Loianno, G.; Brunner, C.; McGrath, G.; Kumar, V. Estimation, Control, and Planning for Aggressive Flight with a Small Quadrotor with a Single Camera and IMU. IEEE Robot. Autom. Lett. 2017, 2, 404–411. [Google Scholar] [CrossRef]
  9. Nam, D.V.; Gon-Woo, K. Robust Stereo Visual Inertial Navigation System Based on Multi-Stage Outlier Removal in Dynamic Environments. Sensors 2020, 20, 2922. [Google Scholar] [CrossRef]
  10. Chiang, M.-L.; Tsai, S.-H.; Huang, C.-M.; Tao, K.-T. Adaptive Visual Servoing for Obstacle Avoidance of Micro Unmanned Aerial Vehicle with Optical Flow and Switched System Model. Processes 2021, 9, 2126. [Google Scholar] [CrossRef]
  11. Lin, H.-Y.; Peng, X.-Z. Autonomous Quadrotor Navigation with Vision Based Obstacle Avoidance and Path Planning. IEEE Access 2021, 9, 102450–102459. [Google Scholar] [CrossRef]
  12. Tang, Z.; Cunha, R.; Cabecinhas, D.; Hamel, T.; Silvestre, C. Quadrotor Going through a Window and Landing: An Image-Based Visual Servo Control Approach. Control Eng. Pract. 2021, 112, 104827. [Google Scholar] [CrossRef]
  13. Cho, G.; Kim, J.; Oh, H. Vision-Based Obstacle Avoidance Strategies for MAVs Using Optical Flows in 3-D Textured Environments. Sensors 2019, 19, 2523. [Google Scholar] [CrossRef]
  14. Courbon, J.; Mezouar, Y.; Guénard, N.; Martinet, P. Vision-Based Navigation of Unmanned Aerial Vehicles. Control Eng. Pract. 2010, 18, 789–799. [Google Scholar] [CrossRef]
  15. Do, T.; Carrillo-Arce, L.C.; Roumeliotis, S.I. Autonomous Flights Through Image-Defined Paths. In Springer Proceedings in Advanced Robotics; Springer: Cham, Switzerland, 2017; pp. 39–55. [Google Scholar] [CrossRef]
  16. Kozak, V.; Pivonka, T.; Avgoustinakis, P.; Majer, L.; Kulich, M.; Preucil, L.; Camara, L.G. Robust Visual Teach and Repeat Navigation for Unmanned Aerial Vehicles. In Proceedings of the 2021 European Conference on Mobile Robots (ECMR) 2021, Bonn, Germany, 31 August–3 September 2021. [Google Scholar] [CrossRef]
  17. Nguyen, T.; Mann, G.K.I.; Gosine, R.G.; Vardy, A. Appearance-Based Visual-Teach-And-Repeat Navigation Technique for Micro Aerial Vehicle. J. Intell. Robot. Syst. 2016, 84, 217–240. [Google Scholar] [CrossRef]
  18. Warren, M.; Greeff, M.; Patel, B.; Collier, J.; Schoellig, A.P.; Barfoot, T.D. There’s No Place Like Home: Visual Teach and Repeat for Emergency Return of Multirotor UAVs During GPS Failure. IEEE Robot. Autom. Lett. 2019, 4, 161–168. [Google Scholar] [CrossRef]
  19. Zhang, T.; Hu, X.; Xiao, J.; Zhang, G. A Machine Learning Method for Vision-Based Unmanned Aerial Vehicle Systems to Understand Unknown Environments. Sensors 2020, 20, 3245. [Google Scholar] [CrossRef] [PubMed]
  20. Toro-Arcila, C.A.; Becerra, H.M.; Arechavaleta, G. Visual Path Following with Obstacle Avoidance for Quadcopters in Indoor Environments. Control Eng. Pract. 2023, 135, 105493. [Google Scholar] [CrossRef]
  21. Mezouar, Y.; Chaumette, F. Path Planning for Robust Image-Based Control. IEEE Trans. Robot. Autom. 2002, 18, 534–549. [Google Scholar] [CrossRef]
  22. Hummel, B.; Kammel, S.; Dang, T.; Duchow, C.; Stiller, C. Vision-Based Path-Planning in Unstructured Environments. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Meguro-Ku, Japan, 13–15 June 2006. [Google Scholar] [CrossRef]
  23. Otte, M.W.; Richardson, S.G.; Mulligan, J.; Grudic, G. Local Path Planning in Image Space for Autonomous Robot Navigation in Unstructured Environments. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007. [Google Scholar] [CrossRef]
  24. Abdul Hafez, A.H.; Nelakanti, A.K.; Jawahar, C.V. Path Planning Approach to Visual Servoing with Feature Visibility Constraints: A Convex Optimization Based Solution. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007. [Google Scholar] [CrossRef]
  25. Chesi, G.; Hung, Y.S. Global Path-Planning for Constrained and Optimal Visual Servoing. IEEE Trans. Robot. 2007, 23, 1050–1060. [Google Scholar] [CrossRef]
  26. Chesi, G. Visual Servoing Path Planning via Homogeneous Forms and LMI Optimizations. IEEE Trans. Robot. 2009, 25, 281–291. [Google Scholar] [CrossRef]
  27. Chesi, G.; Shen, T. Conferring Robustness to Path-Planning for Image-Based Control. IEEE Trans. Control Syst. Technol. 2012, 20, 950–959. [Google Scholar] [CrossRef]
  28. Shen, T.; Chesi, G. Visual Servoing Path-Planning with Spheres. In Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics, ICINCO, Rome, Italy, 28–31 July 2012. [Google Scholar] [CrossRef]
  29. Hafez, A.H.A.; Nelakanti, A.K.; Jawahar, C.V. Path planning for visual servoing and navigation using convex optimization. Int. J. Robot. Autom. 2015, 30, 299–307. [Google Scholar] [CrossRef]
  30. Gong, Z.; Tao, B.; Qiu, C.; Yin, Z.; Ding, H. Trajectory Planning With Shortest Path for Modified Uncalibrated Visual Servoing Based on Projective Homography. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1076–1083. [Google Scholar] [CrossRef]
  31. Allen, M.; Westcoat, E.; Mears, L. Optimal Path Planning for Image Based Visual Servoing. Procedia Manuf. 2019, 39, 325–333. [Google Scholar] [CrossRef]
  32. Rodriguez Martinez, E.A.; Caron, G.; Pegard, C.; Alabazares, D.L. Photometric Path Planning for Vision-Based Navigation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar] [CrossRef]
  33. Fliess, M.; Levine, J.; Martin, P.; Rouchon, P. A Lie-Backlund Approach to Equivalence and Flatness of Nonlinear Systems. IEEE Trans. Autom. Control 1999, 44, 922–937. [Google Scholar] [CrossRef]
  34. Levine, J. Analysis and Control of Nonlinear Systems; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  35. Kaaniche, K.; Rashid, N.; Miraoui, I.; Mekki, H.; El-Hamrawy, O.I. Mobile Robot Control Based on 2D Visual Servoing: A New Approach Combining Neural Network With Variable Structure and Flatness Theory. IEEE Access 2021, 9, 83688–83694. [Google Scholar] [CrossRef]
  36. Hagenmeyer, V.; Delaleau, E. Exact Feedforward Linearization Based on Differential Flatness. Int. J. Control 2003, 76, 537–556. [Google Scholar] [CrossRef]
Figure 1. A two-wheeled mobile robot.
Figure 2. Perspective projection model.
Figure 3. Proposed control scheme.
Figure 4. Simulation environment. (a) Three-dimensional environment; (b) image captured by the camera (size 1024 × 1024). The red point represents a single detected point on the object.
Figure 5. (a,b) The desired trajectory in the image plane of a point $P$ of the object (the big and small boxes represent, respectively, the desired and initial target positions; only the blue point is used to generate the control laws); (c,d) the velocities (in pixels) of the desired trajectory; (e,f) the accelerations (in pixels) of the desired trajectory.
Figure 6. (a) The trajectories made by the four points (the big and small boxes represent, respectively, the desired and initial target positions; the blue, green, red, and turquoise thick lines represent the four point trajectories in the image plane as the robot moves); (b) the actual and the desired trajectory in the image plane of a point $P$ of the object; (c) the percentage error of the performed trajectory along $u$ and along $v$; (d) the trajectory performed by the robot in the 2D Cartesian plane; (e) the evolution of the two commands $v_r$ and $\omega_r$ applied to the robot; (f) the variations in the two angular speeds $\omega_1$ and $\omega_2$ of the two driven wheels.