Article

Autonomous Navigation of a Forestry Robot Equipped with a Scanning Laser

LERMAB, University of Lorraine, USC1445 INRAE, 88026 Epinal, France
* Authors to whom correspondence should be addressed.
AgriEngineering 2023, 5(1), 1-11; https://doi.org/10.3390/agriengineering5010001
Submission received: 1 August 2022 / Revised: 18 November 2022 / Accepted: 25 November 2022 / Published: 20 December 2022
(This article belongs to the Special Issue Selected Papers from The Ag Robotic Forum—World FIRA 2021)

Abstract
This paper gives an overview of our research project entitled “Innovative Forest Plantation”, currently in progress. The aim of this project is to automate traditionally manual tasks in poplar plantations during the first years after planting, in particular mechanical weeding without the use of herbicides. A poplar forest is considered a semi-structured environment where the dense canopy prevents the use of GPS signals, so laser sensors are often preferred to localize the vehicle. In this paper, we focus on one of the main functionalities: autonomous navigation, which consists in detecting and locating trees in order to move safely in such a complex environment. Autonomous navigation requires a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a well-suited approach. The constructed map can be reliably used to plan semantic paths for the mobile robot in order to treat each tree specifically. Simulations conducted with Gazebo and the Robot Operating System (ROS) have shown that the robot can navigate autonomously in a poplar forest.

1. Introduction

Faced with the challenges of global warming, various decision makers and governments have set objectives of carbon neutrality for our consumption patterns. All scenarios show that reaching these objectives requires using wood as a resource. This is, for example, the case in construction: the emergence of wooden buildings is one way to reduce the carbon footprint. This is only possible if forestry exploitation is carried out in a reasonable and reasoned way. It means, for example, intensifying forestry practices and planting more trees to meet the growing demand for wood, while protecting our forests. Technological development is therefore strongly recommended in this field to reduce planting costs, improve working conditions and enhance wood quality for massive reforestation.
At a time when mobile robotics is changing agricultural practices by increasing productivity and reducing work intensity and drudgery, a technological and innovation gap has emerged between agricultural and silvicultural practices. Autonomous navigation of vehicles in structured agricultural environments has received a great deal of attention in the last few decades. However, navigation in a forest is more challenging than in open crop fields. First, the large dimensions of the forest lead to accumulated error, which has a significant effect on the estimated pose and makes wheel odometry an unreliable measure for mobile robot localization; indeed, the irregular terrain may cause the vehicle to skid. Another common approach in crop fields is Real-Time Kinematic (RTK) GPS. In the forestry case, however, GPS signals are frequently degraded at ground level because of the large tree canopies.
In a forest, trees are natural landmarks providing sufficient information for the mobile robot to navigate and track its location. Hence, one of the main tasks of a mobile robot is to detect individual trees or tree rows in order to construct a substantive map. Different parts of the trees, such as the trunk, stem, and canopy, can be detected using the robot's on-board laser scanner or a machine-vision-based sensor. For instance, [1] proposed a classification-based tree trunk detection method using image segmentation to extract useful information for the navigation of a forest vehicle; they also estimated the distance between the vehicle and the base of the detected trees. However, one limiting disadvantage of machine vision compared with laser sensors is its sensitivity to varying ambient illumination, especially in outdoor environments. Laser scanners provide high resolution and more robust ranging data for object detection under different weather conditions. Researchers have investigated the use of 2D or 3D terrestrial laser scanners for tree detection and row following in orchards and forests, based on methods such as the RANSAC algorithm [2], the Hough transform [3] or least-squares adjustment [4]. Further, ref. [5] used combinations of cameras and laser scanners to detect individual tree trunks, which allows discriminating between trees and non-tree objects. For localization, an Extended Kalman Filter (EKF) is proposed to fuse data from different sensors such as wheel odometry, Inertial Measurement Unit (IMU) data, Light Detection and Ranging (LiDAR) scans and camera streams.
A GPS-independent localization process for an Unmanned Ground Vehicle (UGV) from a ground-level perspective is a challenging problem in several mobile robot applications. To overcome the above challenges, [6] provided additional information to the ground vehicle using top-view images of the orchard acquired in real time by Unmanned Aerial Vehicles (UAV). In this work, they first designed two computer vision procedures applied to the top-view images: canopy extraction and trunk approximation. These techniques provided a semantic tree map that transforms a two-dimensional high-altitude image into a binary canopy mask and a set of points representing the approximated trunk positions. Second, they used Adaptive Monte Carlo Localization (AMCL) to estimate the robot pose against the canopy map. Additionally, they proposed a semantic global path planner that plans paths between trees using a cost map derived from the canopy mask.
However, fewer research works have addressed path planning and motion control for mobile forestry robots. Our contribution is to specify computational methods suitable for the complex forest environment. Our forestry robot needs to navigate between rows, detect trees and weed at the foot of the plants. To accomplish this mission, a map of the forest environment is needed; this map allows us to locate the robot and plan its movements. To this end, we equipped the robot with a LiDAR, used to correct the robot position given by the odometer, since the laser scans enable perception of the environment. The proposed approach relies only on data from a LiDAR and an odometer for state estimation and path following, independently of GPS signals.
To implement this, we used the ROS middleware. ROS relies on the concept of a computation graph that collects and connects software frameworks. The core elements of ROS are nodes that communicate with each other through topics [7]; this communication between nodes is made possible by the ROS Master. ROS has the benefit of being an open-source robotic middleware for the design of highly sophisticated applications such as mapping, localization and path planning [8]. ROS also provides hardware abstraction, low-level device control, inter-process message passing and package management. This allows researchers to focus on specific research problems without having to deal with hardware drivers and interfaces: their contributions consist of integrating new algorithms into ROS packages and testing their efficiency.
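As a minimal illustration of this node/topic model, the sketch below (assuming ROS 1 and rospy; the node name is arbitrary, while /cmd_vel and geometry_msgs/Twist are standard ROS conventions) publishes velocity commands that any subscribing node, such as a base controller, can consume:

```python
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node('simple_driver')           # register the node with the ROS Master
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)                       # publish at 10 Hz
    msg = Twist()
    msg.linear.x = 0.5                          # 0.5 m/s forward
    while not rospy.is_shutdown():
        pub.publish(msg)                        # subscribers to /cmd_vel receive this message
        rate.sleep()

if __name__ == '__main__':
    drive_forward()
```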
The paper is organized as follows: the concept of a forestry robot is presented in Section 2. The mapping and localization algorithms are described in Section 3, as well as the motion planning algorithm. The achievable performance is analyzed in Section 4 through ROS simulations. Global conclusions and perspectives are finally given in Section 5.

2. Forestry Robot Concept

Poplar forests are considered a semi-structured environment, since trees of the same type are planted systematically with a uniform row arrangement. Natural landmarks such as trees provide adequate information for the localization, mapping and path planning tasks in a poplar forest. For this, we equip the forestry robot with a LiDAR used to detect objects, in particular individual trees. The map construction of the forest environment then allows us to locate the robot and to plan and execute its movements. This strategy permits us to treat each plant specifically and weed around the tree trunk.
As shown in the flowchart in Figure 1, the overall process can be summarized by the following six steps.
  • The tree detection algorithm discriminates between trees and non-tree objects using laser scanner data, as shown in Figure 2. Some studies [9] focus on the use of terrestrial 2D laser scanners to estimate the width of tree trunks and non-tree objects; the width is determined by detecting the start and end edges of each object. Such laser scanners provide scan lines in a typically horizontal plane. In this study, the LiDAR scan data are used to detect tree trunks (a minimal sketch of this detection step is given after this list).
  • Map generation is the process of creating a map of the robot environment based on a laser scanner. The map can be loaded manually in the memory of the robot (graphical representation, matrix representation) or can be built progressively while the robot explores the new environment. To implement tree weeding tasks, the ideal option is to estimate the position of the individual trees rather than the tree rows. Therefore, the proposed forest map is built based on tree trunk detection and consists of the 2D location of the individual trees.
  • Localization consists in estimating the robot position and orientation with respect to the environment. In this case, the forestry robot does not have any prior details about its environment. For accurate localization, sensor data fusion is proposed to correct the robot position given by the odometer and IMU data.
  • Path planning or motion planning consists in finding the optimal path between initial and final locations. The path planning problem can be divided into global and local planners. The local planner takes the path generated by the global planner and sends velocity commands to the base controller to execute it.
  • Path execution, or the base controller, is the lowest level of motion control that turns the drive wheels at a desired velocity. It is usually done in two separate steps: (i) the lateral control calculates the vehicle heading and (ii) the longitudinal control determines the braking and traction torques of the driving wheels in order to follow the desired path.
  • Tree weeding is a common manual task performed by professional foresters in the first years after planting. Our forestry robot is equipped with a robotic manipulator that can perform weeding tasks around the tree trunk, as shown in Figure 3. The onboard manipulator could be further improved with an interchangeable end-effector allowing different functionalities.
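As a rough illustration of the trunk detection step above, the following sketch (not the authors' implementation; the range-jump threshold and the plausible trunk widths are assumed values) clusters consecutive scan returns separated by range jumps and keeps clusters whose chord width could correspond to a young trunk:

```python
import numpy as np

def detect_trunks(ranges, angle_min, angle_inc,
                  max_range=10.0, gap=0.30, min_w=0.03, max_w=0.40):
    """Segment a 2D scan into clusters separated by range jumps, then keep
    clusters whose chord width is plausible for a young poplar trunk."""
    trunks, pts = [], []

    def close(cluster):
        if len(cluster) >= 2:
            p = np.array(cluster)
            width = np.linalg.norm(p[0] - p[-1])     # chord between edge returns
            if min_w <= width <= max_w:
                cx, cy = p.mean(axis=0)
                trunks.append((cx, cy, width))       # trunk centre and width (m)

    prev = None
    for i, r in enumerate(ranges):
        jump = prev is not None and abs(r - prev) > gap
        if r >= max_range or jump:                   # invalid return or object boundary
            close(pts)
            pts = []
        if r < max_range:
            a = angle_min + i * angle_inc
            pts.append((r * np.cos(a), r * np.sin(a)))
            prev = r
        else:
            prev = None
    close(pts)
    return trunks
```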
To sum up, the forestry robot has three tasks: the first is to navigate in the forest, the second is to recognize trees so that the robot can tell the difference between trees and other objects, and the third is to weed around the tree trunk.

3. Autonomous Navigation

3.1. Localization and Mapping Algorithm

Localization consists in estimating the robot position and orientation with respect to the environment. With a known map, a robot is able to observe its surroundings using sensors. However, in many cases, robots do not have any prior information about their environment. One well-established solution that minimizes the mapping error is Simultaneous Localization and Mapping (SLAM). SLAM algorithms not only estimate the robot location but also construct and update a map. This minimizes the errors in terms of mapping accuracy by using the map as a means of feedback for the location estimator and vice versa. Many algorithms have been developed under SLAM, such as GMapping, Hector SLAM, Graph SLAM, and Core SLAM. GMapping is one of the most widely used laser-based SLAM algorithms available in the ROS platform.
GMapping uses a Rao-Blackwellized Particle Filter (RBPF) to deal with the SLAM problem. Each particle in an RBPF represents a possible robot trajectory and a map. Based on the approach of [10], ref. [11] presents an improved approach to learning grid maps from raw laser range finder data. Instead of using a fixed distribution, the proposed algorithm computes a highly accurate distribution that takes into account the observation probability of the most recent sensor data, a scan-matching process and the odometry. This approach provides more accurate robot localization and dramatically reduces the number of required particles. Additionally, a selective resampling strategy is applied to overcome the risk of particle depletion (where the removal of a large number of particles from the sample set during the resampling step may reject correct hypotheses and reduce accuracy).
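A minimal sketch of this selective resampling criterion, using the effective sample size N_eff = 1 / Σ w_i² and the N/2 threshold of [11] (the particle representation itself is left abstract):

```python
import numpy as np

def selective_resample(particles, weights, threshold_ratio=0.5):
    """Resample only when the effective sample size drops below a threshold,
    which limits the risk of particle depletion."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                             # normalize the importance weights
    n_eff = 1.0 / np.sum(w ** 2)             # effective number of particles
    n = len(particles)
    if n_eff < threshold_ratio * n:          # resample only on depletion risk
        idx = np.random.choice(n, size=n, p=w)
        particles = [particles[i] for i in idx]
        w = np.full(n, 1.0 / n)              # weights reset after resampling
    return particles, w
```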

3.2. Motion Planning Algorithm

The mobile robot has to find the shortest or fastest path from a start to a final position while maintaining safety constraints. Path planning can be divided into two classes: global and local planners [12]. The global planner provides an obstacle-free path from the start point to the goal location, having a prior perception of the environment modeled as a static map. The local planner, in contrast, supposes that the robot has no preliminary information about its surroundings (i.e., an uncertain environment). Therefore, it uses local sensor data to build an estimated local cost map in real time during the search process in order to avoid obstacles.
As shown in Algorithm 1, we first analyze the global planner algorithm and focus on grid map path planning (eight neighbours) such as A* (A-star) [13]. To address the path planning problem, the initial and final positions must be known in advance by the robot, as well as the grid map. In the process of finding the optimal path, each grid cell is evaluated according to the following criterion
Total-Cost = Heuristic-Cost + Cell-Cost
where Total-Cost is the minimum cost between the initial cell S and the final cell G, Cell-Cost is the accumulated cost of reaching the current cell N from the initial cell S, and Heuristic-Cost is the Euclidean distance from the current cell N to the final cell G.
The A* approach relies on two lists: (i) the open set contains the cells whose Total-Cost has already been calculated; (ii) the closed set is the list of cells evaluated once all their neighbors have been checked. The search begins by exploring the neighboring cells of the initial position S. The neighbor cell with the lowest Total-Cost is picked from the open list, expanded and placed on the closed list. This process is repeated at each iteration until the target position is reached. Finally, the optimal path in the grid map is found by following the parents back from the goal cell until the starting cell is reached (working backward from the goal cell).
Secondly, local planners take the path generated by the global planner and try to track it as closely as possible, taking into account the kinematic and dynamic constraints of the robot as well as moving obstacle information. The local planner first discretely samples the control space, then runs a forward simulation, and selects among the candidate controls based on a cost function. To evaluate each candidate, the controller considers the distance to the final position, the distance to obstacles, the distance to the global plan, and the robot velocity. Finally, the controller sends the velocity command of the best evaluated path to the base controller for execution.
Algorithm 1: The A* Algorithm.
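To make the search above concrete, here is a compact Python sketch of A* on an 8-neighbour grid, consistent with the cost definitions above (illustrative only, not the planner implementation used in the navigation stack):

```python
import heapq, itertools, math

def astar(grid, start, goal):
    """A* on an 8-neighbour occupancy grid (0 = free, 1 = occupied)."""
    def h(c):                                   # Heuristic-Cost: Euclidean distance to G
        return math.hypot(c[0] - goal[0], c[1] - goal[1])

    tie = itertools.count()                     # tie-breaker so the heap never compares cells
    open_set = [(h(start), 0.0, next(tie), start, None)]
    parents, closed = {}, set()
    while open_set:
        _, cost, _, cell, parent = heapq.heappop(open_set)
        if cell in closed:
            continue                            # already expanded at a lower Total-Cost
        closed.add(cell)
        parents[cell] = parent
        if cell == goal:                        # walk the parents back to the start cell
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (cell[0] + dx, cell[1] + dy)
                if (dx or dy) and 0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0]) \
                        and grid[nxt[0]][nxt[1]] == 0 and nxt not in closed:
                    step = math.hypot(dx, dy)   # 1 straight, sqrt(2) diagonal (Cell-Cost)
                    heapq.heappush(open_set,
                                   (cost + step + h(nxt), cost + step, next(tie), nxt, cell))
    return None                                 # no obstacle-free path exists
```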

4. ROS Architecture and Simulation Results

We used ROS and Gazebo as the main simulation environments in this project. Different autonomous mobile platforms are compatible with ROS, such as the Husky robot, an outdoor mobile robot developed by Clearpath Robotics [14], as seen in Figure 4.
The Husky robot can be used in an unstructured environment such as a poplar forest. It is also compatible with many robot accessories, including robotic arms. Table 1 presents the technical specifications of the Husky robot.

4.1. Data Visualization and 3D Simulator

Gazebo is a dynamic 3D simulator used to simulate mobile robots indoors and outdoors. It allows testing the performance of the robot model in the simulated environment and integrating sensor information. An XML file in the Unified Robot Description Format (URDF) is provided to describe the robot and a simplified model of the poplar trees.
Rviz is an effective 3D visualization tool for ROS. It allows the user to view the simulated robot model and replay the logged sensor information. Figure 5 shows the data that are collected and visualized with Rviz.

4.2. Coordinate Frames Transformations

Coordinate frames are a commonly used concept in robots equipped with sensors. In localization, the controller has to continuously know the position of the robot center with respect to the environment. The planning and obstacle avoidance applications are performed using sensor data that can be represented relative to the robot center position, possibly with a noise model. For this, the tf library is designed to provide a standard way to track coordinate frames and transform data in an entire system [15]. The standard coordinate frames for a basic mobile robot are:
  • map frame represents the world map to which the robot pose is related. Its origin is located at some arbitrarily chosen point in the world. This coordinate frame is fixed in the world.
  • odom frame is fixed in the world. Its origin is at the point where the robot is initialized.
  • base_footprint frame moves as the robot moves. Its origin is directly under the center of the robot.
  • base_link frame is rigidly attached to the mobile robot base. This frame is a moving frame.
  • sensor_link frame remains fixed relative to the base_link frame. This coordinate frame has its origin at the center of the onboard sensor (LiDAR, IMU).
Frame transformations represent the rotation and translation matrices specifying the linear and rotational offset between coordinate frames. For instance, the ekf_localization node publishes the transform from the /odom frame to the /base_link frame.
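As an illustration, any node can query this transform tree through the tf2 Python API (a sketch assuming ROS 1; the frame names follow the list above, written with the underscores used in ROS):

```python
import rospy
import tf2_ros

rospy.init_node('frame_reader')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)       # fills the buffer from the /tf topic
rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # pose of base_link expressed in the odom frame, latest available transform
        t = buf.lookup_transform('odom', 'base_link', rospy.Time(0))
        rospy.loginfo('x=%.2f y=%.2f', t.transform.translation.x,
                      t.transform.translation.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        pass                                    # transform not yet available
    rate.sleep()
```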

4.3. Navigation Stack

The Navigation Stack integrates localization, mapping and path planning [16]. It can be initialized with or without a prior map. When initialized without a map, the robot only receives information from its sensors and can detect and avoid the obstacles it perceives. When the navigation stack is initialized with a static map, however, the robot can produce informed plans to reach its goal, using the map as prior obstacle information. Below, we detail each module of the ROS architecture shown in Figure 5.

4.3.1. Mapping Generation

Map generation is carried out by the gmapping package. The slam_gmapping node obtains the laser scan data from the /scan topic and the frame transformations from the /tf topic. As a result, the node publishes the /map topic, which is visualized in Rviz. Figure 6 illustrates an occupancy grid map of an unknown environment. The white, black and gray pixels represent the free, occupied, and unexplored areas of the environment, respectively. The map is then stored in a pair of files: a YAML file describing the map metadata and an image file encoding the occupancy data. The proposed forest map is built based on tree trunk detection and consists of the 2D locations of the individual trees.
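For illustration, a tree map of this kind can be published as a standard nav_msgs/OccupancyGrid message; the sketch below marks hypothetical trunk positions as occupied cells (the resolution, map size and trunk coordinates are made-up values):

```python
import rospy
from nav_msgs.msg import OccupancyGrid

rospy.init_node('tree_map_publisher')
pub = rospy.Publisher('/map', OccupancyGrid, queue_size=1, latch=True)

grid = OccupancyGrid()
grid.header.frame_id = 'map'
grid.info.resolution = 0.05                    # metres per cell
grid.info.width, grid.info.height = 200, 200   # 10 m x 10 m map
grid.info.origin.position.x = -5.0
grid.info.origin.position.y = -5.0
cells = [0] * (200 * 200)                      # 0 = free, 100 = occupied, -1 = unknown
for tx, ty in [(2.0, 1.0), (2.0, 3.0)]:        # hypothetical trunk positions (m)
    col = int((tx - grid.info.origin.position.x) / grid.info.resolution)
    row = int((ty - grid.info.origin.position.y) / grid.info.resolution)
    cells[row * grid.info.width + col] = 100   # mark the trunk cell as occupied
grid.data = cells
pub.publish(grid)                              # latched: late subscribers still get it
rospy.spin()
```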

4.3.2. Localization

Localization is solved using an EKF as a sensor fusion technique [17,18]. The localization algorithm consists of two main parts: prediction and correction. First, in the prediction stage, an estimate of the robot 2D pose is obtained using the drive model. Second, based on the sensor measurements, the predicted state is corrected for accurate localization. The ekf_localization node receives the wheel odometry data from the /wheel/odom topic, the IMU data from the /imu/data topic and the frame transformations, and publishes the estimated pose of the robot on the /odom topic. In this project, the robot pose is tracked without any prior information about its environment.
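A generic sketch of the two EKF stages for a 2D pose (a unicycle drive model and a linearized measurement update; this illustrates the structure, not the exact filter configuration used in the project):

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt):
    """Prediction: propagate the pose [x, y, theta] with a unicycle drive
    model driven by odometry velocities u = [v, w]."""
    v, w = u
    th = x[2]
    x = x + dt * np.array([v * np.cos(th), v * np.sin(th), w])
    F = np.array([[1, 0, -dt * v * np.sin(th)],
                  [0, 1,  dt * v * np.cos(th)],
                  [0, 0, 1]])                    # Jacobian of the motion model
    return x, F @ P @ F.T + Q

def ekf_correct(x, P, z, H, R):
    """Correction: fuse a linear(ised) measurement z = H x + noise,
    e.g. an IMU yaw reading with H = [[0, 0, 1]]."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```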

4.3.3. Motion Planning

Motion planning is realized in the move_base package and is composed of global and local planning modules. These planners need global and local cost maps that are frequently updated with new sensor data in order to maintain an up-to-date view of the robot localization and the environment. The global cost map is generally initialized from a static map (if available). The global planner used in the move_base package is A*.
The move_base node subscribes to the odometry data from the /odom topic, the laser scan data from the /scan topic, a prebuilt map from the /map topic, a goal position and the frame transformations. Given all the required inputs, the global planner generates a high-level plan from the start to the goal location, published on the /plan topic. Once it has the global plan, the local planner produces velocity commands on the /cmd_vel topic while taking into consideration the kinematic and dynamic constraints of the robot and the obstacle information provided in the cost map. As can be seen in Figure 7, the robot follows the green global plan as closely as possible and successfully reaches the goal position.
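For reference, a goal can be sent to the move_base node through its standard actionlib interface; the sketch below assumes ROS 1, with a made-up goal position:

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()                       # wait until move_base is up

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'       # goal expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 4.0         # hypothetical tree location (m)
goal.target_pose.pose.position.y = 1.5
goal.target_pose.pose.orientation.w = 1.0      # face along the map x axis

client.send_goal(goal)
client.wait_for_result()                       # blocks until the goal is reached or aborted
```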
In each cycle, the controller creates a grid around the robot (the grid size is determined by the size of the local cost map). As the grid covers only a small area of the global path, a local goal position is defined each time. The controller then generates a local path published on the /local_plan topic. To examine the performance of the local planner, a new static obstacle is added in the middle of the robot path. As shown in Figure 8, the robot is able to detect and overcome this unseen obstacle thanks to the local cost map and the local planner.
To generate safe velocity commands, the local planner uses the Dynamic Window Approach (DWA) based on a cost function [19]. Consequently, the robot behavior can be significantly modified by adjusting the weights of the cost function components. To test the performance of the controller, a new goal position is chosen for the robot. As can be seen in Figure 9, the robot is able to move between two trees.
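A schematic sketch of a DWA-style selection loop (the window discretization, the weights and the clearance threshold are illustrative assumptions, not the move_base defaults):

```python
import math

def simulate(pose, v, w, dt=0.1, steps=10):
    """Forward-simulate a unicycle for one (v, w) rollout."""
    x, y, th = pose
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y, th

def dwa_pick(pose, goal, clearance, v_max=1.0, w_max=1.5,
             w_goal=1.0, w_obs=2.0, w_vel=0.2):
    """Sample the velocity window, score each rollout with the cost function,
    and return the best (v, w). `clearance(x, y)` gives the distance to the
    nearest obstacle in the local cost map."""
    best, best_cost = (0.0, 0.0), float('inf')
    for i in range(11):                          # discretize the linear velocities
        for j in range(11):                      # discretize the angular velocities
            v = v_max * i / 10.0
            w = -w_max + 2 * w_max * j / 10.0
            x, y, _ = simulate(pose, v, w)
            d_obs = clearance(x, y)
            if d_obs < 0.2:                      # rollout ends too close: reject it
                continue
            cost = (w_goal * math.hypot(goal[0] - x, goal[1] - y)
                    + w_obs / d_obs              # penalize obstacle proximity
                    - w_vel * v)                 # prefer faster rollouts
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```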

4.4. Base Controller

The base controller node subscribes to the /cmd_vel topic and translates Twist messages into motor signals that actually turn the wheels. The robot position and orientation in the global coordinate frame are specified by the vector [x, y, θ]^T, where [x, y]^T is the 2D position of the robot in the inertial frame and θ is its yaw angle in the inertial frame. Since the Husky robot can move forward/backward along its longitudinal axis and rotate around its vertical axis, we only need the linear longitudinal component v and the yaw angular component w of the Twist message. The proposed controller forces the tracking error [x_e, y_e, θ_e]^T to zero and is given by the following equation
v = v_r cos θ_e + k_x x_e
w = w_r + v_r (k_y y_e + k_θ sin θ_e)
where v_r and w_r are the reference linear and angular velocities, respectively. The two parameters k_x and k_θ are bounded continuous functions and k_y is a positive constant. Following [20], the parameters k_x and k_θ are adjusted in this controller using the universal approximation ability of neural networks.
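A direct transcription of this control law (with constant illustrative gains standing in for the neural-network-adjusted k_x and k_θ of [20]):

```python
import math

def tracking_control(xe, ye, the, vr, wr, kx=1.0, ky=5.0, kth=2.0):
    """Velocity commands from the tracking error [xe, ye, the], following
    the control law above; constant gains stand in for the NN-tuned ones."""
    v = vr * math.cos(the) + kx * xe
    w = wr + vr * (ky * ye + kth * math.sin(the))
    return v, w                                  # published as a Twist on /cmd_vel
```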

5. Conclusions

The forest environment presents a major challenge for autonomous navigation. Mapping each individual tree, rather than the tree rows, allows the forestry robot to navigate to a specific tree in the poplar forest in order to perform weeding tasks. To construct an accurate map of such a complex environment, we used the SLAM approach. In addition, a suitable motion planning scheme has been proposed to generate an optimal path and reach the goal position. To validate the proposed approach, a ROS simulation of the Husky robot in a poplar forest is presented. Furthermore, the obstacle avoidance and individual tree detection performances are evaluated in the Gazebo simulator. Simulation results have shown that the robot can navigate autonomously in a semi-structured environment. An implementation on a real robot system will be presented in future work.

Author Contributions

Conceptualization, P.-J.M., F.B.A. and A.B.; methodology, F.B.A.; software, F.B.A.; validation, P.-J.M. and A.B.; formal analysis, F.B.A.; investigation, F.B.A.; resources, F.B.A.; data curation, F.B.A.; writing—original draft preparation, F.B.A.; writing—review and editing, P.-J.M. and A.B.; visualization, F.B.A.; supervision, P.-J.M. and A.B.; project administration, P.-J.M.; funding acquisition, P.-J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Union [FEADER] and East Region of France [RLOR160219CR041000801].

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ali, W.; Georgsson, F.; Hellstrom, T. Visual tree detection for autonomous navigation in forest environment. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 560–565.
  2. Olofsson, K.; Holmgren, J.; Olsson, H. Tree stem and height measurements using terrestrial laser scanning and the RANSAC algorithm. Remote Sens. 2014, 6, 4323–4344.
  3. Safaie, A.H.; Rastiveis, H.; Shams, A.; Sarasua, W.A.; Li, J. Automated street tree inventory using mobile LiDAR point clouds based on Hough transform and active contours. ISPRS J. Photogramm. Remote Sens. 2021, 174, 19–34.
  4. Pfeifer, N.; Gorte, B.; Winterhalder, D. Automatic reconstruction of single trees from terrestrial laser scanner data. In Proceedings of the 20th ISPRS Congress, Istanbul, Turkey, 12–13 July 2004; pp. 114–119.
  5. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion—Part A: Tree detection. Comput. Electron. Agric. 2015, 119, 267–278.
  6. Shalev, O.; Degani, A. Canopy-based Monte Carlo localization in orchards using top-view imagery. IEEE Robot. Autom. Lett. 2020, 5, 2403–2410.
  7. Goebel, P. ROS by Example; Lulu: Morrisville, NC, USA, 2015.
  8. Koubâa, A.; Bennaceur, H.; Chaari, I.; Trigui, S.; Ammar, A.; Sriti, M.F.; Alajlan, M.; Cheikhrouhou, O.; Javed, Y. Robot Path Planning and Cooperation: Foundations, Algorithms and Experimentations; Springer: Berlin/Heidelberg, Germany, 2018.
  9. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. Orchard mapping and mobile robot localisation using on-board camera and laser scanner data fusion—Part B: Mapping and localisation. Comput. Electron. Agric. 2015, 119, 254–266.
  10. Murphy, K.P. Bayesian map learning in dynamic environments. In Proceedings of the Advances in Neural Information Processing Systems 12 (NIPS 1999), Denver, CO, USA, 29 November–4 December 1999; pp. 1015–1021.
  11. Grisetti, G.; Stachniss, C.; Burgard, W. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans. Robot. 2007, 23, 34–46.
  12. Ajeil, F.H.; Ibraheem, I.K.; Azar, A.T.; Humaidi, A.J. Autonomous navigation and obstacle avoidance of an omnidirectional mobile robot using swarm optimization and sensors deployment. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420929498.
  13. Muhtadin; Zanuar, R.M.; Purnama, I.K.E.; Purnomo, M.H. Autonomous navigation and obstacle avoidance for service robot. In Proceedings of the 2019 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), Surabaya, Indonesia, 19–20 November 2019; pp. 1–8.
  14. Clearpath Robotics. Available online: https://clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/ (accessed on 24 November 2022).
  15. Foote, T. tf: The transform library. In Proceedings of the IEEE Conference on Technologies for Practical Robot Applications, Woburn, MA, USA, 22–23 April 2013; pp. 1–6.
  16. Blok, P.M.; van Boheemen, K.; van Evert, F.K.; IJsselmuiden, J.; Kim, G.-H. Robot navigation in orchards with localization based on particle filter and Kalman filter. Comput. Electron. Agric. 2019, 157, 261–269.
  17. Terejanu, G.A. Extended Kalman Filter Tutorial; University at Buffalo: Getzville, NY, USA, 2008.
  18. Alatise, M.B.; Hancke, G.P. Pose estimation of a mobile robot based on fusion of IMU data and vision data using an extended Kalman filter. Sensors 2017, 17, 2164.
  19. Missura, M.; Bennewitz, M. Predictive collision avoidance for the dynamic window approach. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019.
  20. Ye, J. Tracking control for nonholonomic mobile robots: Integrating the analog neural network into the backstepping technique. Neurocomputing 2008, 71, 3373–3378.
Figure 1. Flowchart of the weeding forestry robot.
Figure 2. Map of individual trees.
Figure 3. Weeding around the tree trunk.
Figure 4. Clearpath Husky A200™. This figure has been reproduced with permission from Clearpath Robotics [14].
Figure 5. ROS architecture for autonomous navigation.
Figure 6. Forest map of individual trees.
Figure 7. Tracking the green path.
Figure 8. Avoiding an unseen obstacle.
Figure 9. Moving between two trees.
Table 1. Technical specifications of the Husky robot. This table has been reproduced with permission from Clearpath Robotics [14].
Technical Specifications
Dimensions: 990 × 670 × 390 mm
Weight: 50 kg
Wheel radius: 330 mm
Maximum payload: 75 kg
Maximum payload (all terrain): 20 kg
Maximum speed: 1 m/s
Transmission: 4 × 4 driven wheels
Maximum slope: 45°
Drivers/API: ROS, C++, Python
