Review

Advances in Agriculture Robotics: A State-of-the-Art Review and Challenges Ahead

by Luiz F. P. Oliveira 1, António P. Moreira 1,2 and Manuel F. Silva 1,3,*
1 Centre for Robotics in Industry and Intelligent Systems (CRIIS), INESC TEC, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
2 Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto, Portugal
3 Department of Electrical Engineering, School of Engineering, Polytechnic of Porto, Rua Dr. António Bernardino de Almeida, 431, 4249-015 Porto, Portugal
* Author to whom correspondence should be addressed.
Robotics 2021, 10(2), 52; https://doi.org/10.3390/robotics10020052
Submission received: 11 February 2021 / Revised: 9 March 2021 / Accepted: 18 March 2021 / Published: 24 March 2021
(This article belongs to the Special Issue Advances in Agriculture and Forest Robotics)

Abstract
The constant advances in agricultural robotics aim to overcome the challenges imposed by population growth, accelerated urbanization, the high competitiveness of high-quality products, environmental preservation and a lack of qualified labor. In this sense, this review paper surveys the main existing applications of agricultural robotic systems for the execution of land preparation before planting, sowing, planting, plant treatment, harvesting, yield estimation and phenotyping. In general, all robots were evaluated according to the following criteria: their locomotion system, their final application, whether they include sensors, a robotic arm and/or a computer vision algorithm, their development stage and the country and continent to which they belong. After evaluating these shared characteristics, in order to expose the research trends, common pitfalls and the characteristics that hinder commercial development, and to identify which countries are investing in Research and Development (R&D) in these technologies for the future, four major areas needing future research work to enhance the state of the art in smart agriculture were highlighted: locomotion systems, sensors, computer vision algorithms and communication technologies. The results of this research suggest that investment in agricultural robotic systems allows short-term objectives, such as harvest monitoring, and long-term objectives, such as yield estimation, to be achieved.

Graphical Abstract

1. Introduction

There are currently 193 countries that have officially joined the United Nations (UN), each with its particular problems. However, according to the UN, all of these countries must pay attention to a common problem: global population growth. Planet Earth has about 7.6 billion people and it is estimated that by 2050 the world population will increase to 9.8 billion people, an increase of 28.94%, with half of the world’s population growth being concentrated in just nine countries: India, Nigeria, the Democratic Republic of the Congo, Pakistan, Ethiopia, the United Republic of Tanzania, the United States of America, Uganda and Indonesia [1]. This population growth challenges farmers to change the way they control, monitor and manage their farms to meet the growing demand for food (requiring double the current food production capacity by 2050 [2]) of high quality, because more and more people are looking for healthier foods, without the addition of herbicides and pesticides [3]. On the other hand, the global urbanization process is transforming rural landscapes into urban ones, and 68% of the population is expected to reside in urban environments by 2050 [4]. As a result, rural producers are looking for new ways to produce their food in increasingly smaller areas, since in 1991 the share of world arable land was around 39.47% and in 2013 it was around 37.7%, that is, a reduction of 1.77 percentage points in the availability of arable land [5].

1.1. Global Socio-Economic Challenges in Times of the Pandemic

Agricultural activities still rely heavily on human labor, which in turn is subject to health problems such as the worldwide public health crisis generated by the Coronavirus pandemic (COVID-19), which, in addition to causing a large number of deaths around the world (about 2,527,891 confirmed deaths until 1 March 2021, according to the World Health Organization (WHO) [6]), imposed several forms of restriction on social and economic activities. In that sense, poor countries that are heavily dependent on food provided by small farmers, livestock farmers and artisanal fishermen will be most impacted by the pandemic. According to the Food and Agriculture Organization (FAO), the social isolation measures imposed by COVID-19 prevent farmers from accessing input and product markets, increasing post-harvest losses [7]. On the other hand, in rich countries, such as the United States, because agricultural tasks are considered “hard work” of low profitability, youngsters are looking for job opportunities in urban regions, so farmers are looking for new ways to automate their farms and thus recover any losses [8].

1.2. Precision Agriculture

Fortunately, to overcome the challenges imposed by population growth, accelerated urbanization, high competitiveness of high-quality products, lack of qualified labor and the vulnerability of human labor to situations of health risk, scientific advances in different areas of human knowledge are transforming the way agricultural activities are managed, increasingly reducing human intervention. According to [9], the term Precision Agriculture (PA) can be defined as “that kind of agriculture that increases the number of (correct) decisions per unit area of land per unit time with associated net benefits”. This definition is more generic, so that decision-making can be done either by electronic devices or by humans. On the other hand, in [10] PA is described as “a management strategy that uses electronic information and other technologies to gather, process, and analyze spatial and temporal data for the purpose of guiding targeted actions that improve efficiency, productivity, and sustainability of agricultural operations”. This definition clearly describes the use of technologies as a means of improving agricultural operations. In this review, these technologies were grouped into three main areas: robotics, Artificial Intelligence (AI) and the Internet of Things (IoT). These technologies can be used either alone or together, as illustrated in Figure 1.
PA has gained prominence with the use of robotic systems and electronic devices in agricultural tasks such as land preparation, sowing, planting, pest control and harvesting [11]. The market value of Precision Agriculture was estimated at $3.67 billion in 2016 and, growing at a rate of 14.7%, is expected to reach $7.29 billion by 2021 [12].
As seen in Figure 1, although this review addresses the use of robots in agriculture, the technological areas of AI and IoT are often part of the subsystems of an application involving the use of robots for the execution of agricultural activities. According to Zha, who conducted a review of the use of AI in agriculture, AI can be used in soil management, weed management and in cooperation with IoT technologies [13]. He describes that computer vision algorithms, such as Deep Belief Networks (DBN) and Convolutional Neural Networks (CNN), show promising results in fruit classification and weed detection in complex environments, that is, with varying ambient lighting, complex backgrounds, varying capture angles and variation in the shapes and colors of the fruits/weeds. Elijah et al. carried out a review of the benefits of using IoT and data analytics in agriculture [14]. According to them, IoT technologies make it possible to monitor farms through sensors of different types, such as optical, mechanical, electrochemical, dielectric soil moisture and location sensors. Due to the existence of short/long-range communication technologies, these sensors work as a data source for prediction, storage management, decision-making, farm management and precise application algorithms. The advantages of using IoT in agriculture include: safety control and fraud prevention, competitive advantages, wealth creation and distribution, reduction of costs and wastage, operational efficiency, awareness and asset management. As open challenges, the authors highlight the need for technological innovations, the realization of applications in real large-scale scenarios (pilot projects) and the cost reduction, standardization and regulation of IoT technologies to facilitate their use in agriculture. Another review on IoT-based smart agriculture, which also describes the use of several electronic sensors to improve agricultural control and monitoring tasks, is presented in [3]. One aspect in common among the surveys mentioned above is that all report the use of robots as tools for the technological improvement of agricultural activities. The performance of robots in agriculture depends not only on the type of crop, but also on the type of task that the robot is intended to perform. In this sense, there are reviews that address the use of robots in agriculture to perform general tasks, as in [15,16,17], or to improve the performance of specific tasks, such as harvesting high-value crops [18] and solving wheeled mobile robots’ navigation problems [19].
Bac et al. analyzed harvesting robots in the period between 1984 and 2014 [18], Oliveira et al. studied about 21 robots [15] and Fountas et al. carried out a systematic review on various agricultural tasks [16]. Therefore, this review article contributes an updated analysis of the works carried out until 2021, expanding the analysis made by Bac et al. [18], explores new robotic systems, expanding the results obtained by Oliveira et al. [15], and suggests new challenges for future research work, complementing the conclusions obtained by Fountas et al. [16].
Bearing in mind these ideas, the rest of the work is organized as follows: Section 2 describes the materials and methods adopted as sources of research in this review. The main robotics applications in agriculture are described in Section 3, presenting a critical analysis of each of the tasks performed by robots in such environments. Section 4 discusses the locomotion, vision and navigation systems most used in the reviewed works, unresolved problems and proposals for new research work. Finally, Section 5 presents the conclusions of this review.

2. Materials and Methods

The materials used as sources of data and scientific information for this review article were: Google Scholar, Scopus, IEEE Xplore, Wiley, SpringerLink and Web of Science. For robotic systems in the commercial stage, the search was made through the analysis of their datasheets and commercial websites. A total of about 62 works/projects of robotic systems applied in the agricultural area were counted at the end of the information search process.
In general, all robots were evaluated according to the following criteria: their locomotion system, their final intended application, whether they include sensors, a robotic arm and/or a computer vision algorithm, their development stage and the country and continent to which they belong. These evaluation criteria were chosen in order to highlight the main technologies used to perform each agricultural task and to identify the countries and continents with the largest number of agricultural robotic systems, in order to correlate the use of such systems with the real needs of the agricultural market in each region.

3. Robotic Applications in Agriculture

The various agricultural activities were grouped into subsections: land preparation before planting, sowing/planting, plant treatment, harvesting, and yield estimation and phenotyping. Therefore, the following subsections address the various types of robotic system applications in each of these agricultural activities.

3.1. Robotic Applications in Agriculture for Land Preparation before Planting

Preparing the land before planting is one of the first agricultural tasks to be carried out, and includes plowing the land and applying fertilizers. The practice of plowing the land (inversion of soil layers) allows a greater introduction of oxygen and expulsion of carbon dioxide; however, depending on the local climate conditions, it can harm future crops by considerably reducing the soil’s carbon stocks. On the other hand, soil fertilization serves to replace the nutrients necessary for the development of crops. The review article [20], published in 1987, notes that one of the main challenges in the development of robots that operate in rough terrain, as in a plowed field, is the creation of a robotic system that is precisely controlled. To create a robot capable of assisting rural workers not only in soil fertilization but also in pest control, soil management, harvesting and transportation, the German company Raussendorf developed, in 2014, the robot named Cäsar, shown in Figure 2a. The commercially available Cäsar robot [21] can fertilize the soil in two ways: via remote control or autonomously. To perform tasks automatically, it uses Real-Time Kinematic (RTK) technology for the Global Navigation Satellite System (GNSS) (GNSS is the navigation device; it can use different services, such as the North American Global Positioning System (GPS), the Russian GLONASS or the European GALILEO [22,23]), resulting in a location accuracy of up to 3 cm. Designed to work on the farm together with human workers, the Cäsar robot has a collision detection system based on ultrasonic sensors, with a maximum detection distance of 5 m, that guarantees an immediate stop.
The Greenbot robot (shown in Figure 2b), also commercially available [24], is capable of carrying out the tasks of fertilizing, plowing or seeding throughout the day. It has a Four-Wheel Steering (4WS) system and, through its 100 HP diesel engine, it is capable of transporting up to 750 kg and 1500 kg in its front and rear compartments, respectively. Like the Cäsar robot, the Greenbot also has collision detection sensors to detect objects ahead and make emergency stops [24].
Unlike the terrestrial robots Cäsar and Greenbot, the Chinese company DJI developed an Unmanned Aerial Vehicle (UAV) to carry out agricultural activities. The ability to fly over farms eliminates the interference of rocks, holes, elevations and branches along the UAV’s course, as these are terrestrial obstacles. However, UAVs can collide with branches of tall vegetation or high-voltage wires (power lines), have a flight time limited by battery capacity and have trajectories strongly affected by rain and wind. In this sense, the improvement of the energy consumption of UAVs and, therefore, of their flight time, has made them more efficient in carrying out agricultural tasks without contact with the soil. In general, the greater the number of rotors in a UAV, the greater its payload and control capacity. Thus, in 2016 DJI developed the octocopter AGRAS MG-1P to carry out the precise application of liquid fertilizers, pesticides and herbicides. With the capacity to transport up to 10 L of payload over a maximum distance of up to 3 km, it has a spraying capacity of 6 ha/h and allows up to 5 UAVs to be controlled with a single remote control. To avoid collisions with high-voltage wires or tall vegetation, it has an anti-collision system using an omnidirectional radar with a maximum detection distance of up to 15 m. To carry out the spraying precisely, it integrates RTK GPS with an Inertial Measurement Unit (IMU) (gyroscope, accelerometer and compass), guaranteeing an accuracy of 1 cm + 1 ppm. The UAV can remain flying steadily even if there is a malfunction in one of its rotors, because it has propeller rotor redundancy [25].
Unlike the robots mentioned above, the AgBot robot (seen in Figure 2c) is still in the research stage. With a Two-Wheel Drive (2WD) system, the robot was developed to apply fertilizer and herbicide on a corn farm. The AgBot robot has four distinct reservoirs, allowing an exclusive reservoir for each type of herbicide and/or fertilizer. Its control and navigation system consists of components and platforms for the development of low-cost embedded systems (Arduino and Raspberry Pi). Through a low-cost Red Green Blue (RGB) camera and using the Haar feature-based cascade classifiers machine learning algorithm, the robot is able to detect three different types of weeds commonly found in corn farms: giant ragweed, redroot pigweed and cocklebur. The authors conclude that, although the weed detection algorithm was able to identify three different species, the low-cost RGB camera used proved to be unsuitable for outdoor use, requiring further research work [23].
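To illustrate how a Haar feature-based cascade classifier of this kind is typically applied to camera frames, the sketch below uses OpenCV with a hypothetical pre-trained cascade file (weed_cascade.xml) and a placeholder image name; it is only an illustrative approximation, not the AgBot’s actual implementation.

```python
import cv2

# Hypothetical cascade trained offline on weed samples (e.g., giant ragweed).
weed_cascade = cv2.CascadeClassifier("weed_cascade.xml")

def detect_weeds(frame_bgr):
    """Return bounding boxes (x, y, w, h) of weed candidates in a BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # reduce sensitivity to outdoor lighting changes
    # Multi-scale sliding-window detection with the Haar cascade.
    return weed_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5, minSize=(24, 24))

if __name__ == "__main__":
    image = cv2.imread("corn_row.jpg")  # placeholder field image
    for (x, y, w, h) in detect_weeds(image):
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite("weed_detections.jpg", image)
```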
Table 1 summarizes the main information from the aforementioned works.
Since the farm is considered a semi-structured environment, all the robots previously discussed integrate the RTK system with GNSS to navigate the entire farm with precise positioning data. Therefore, regarding the control of robots in real agricultural environments, it is noted that, through the use of RTK/GNSS technologies, there has been a great improvement in the control of the location and positioning of robots compared to the review article published in 1987 [20], since RTK technology only emerged in the mid-1990s. Figure 3 exemplifies the difference between the obstacle detection systems of the robots listed in Table 1.

3.2. Robotic Applications in Agriculture for Sowing and Planting

Traditionally, sowing and planting tasks are performed by specialized planting equipment attached behind a tractor. However, tractors are heavy machinery and, therefore, their constant locomotion throughout the farm intensifies soil compaction. Soil compaction has several damaging effects on agricultural environments, such as increasing the apparent density and soil resistance and reducing porosity, the rate of water infiltration and aeration, in addition to affecting chemical properties and biogeochemical cycles, harming the development of plants and soil biodiversity [26]. In 1996, Sakaue was already developing robotic systems to automate the sowing and planting process in Japan [27], with the elaboration of four variations of a wheeled robot. Despite having a rudimentary structure, prototype number four was capable of planting cabbage, lettuce, broccoli, cauliflower or celery at a rate of up to 2200 plants/h. However, the system was not able to move autonomously.
Motivated to produce a small, high-precision robot capable of moving easily and with agility along Chinese wheat farms, Haibo et al. developed the Lumai-5 robot, shown in Figure 4a. For this type of task, the robot must ensure that the sowing process remains constant, even at different operating speeds. The Lumai-5 robot has a 4WS system, a closed-loop control system and speed, angle and pressure sensors to accurately carry out wheat sowing. The size of the planting tray, the vacuum chamber pressure and the planting speed were the main factors that directly affected the seeding quality [28].
The Australian Centre for Field Robotics (ACFR) at the University of Sydney started, in 2015, the development of the Digital Farmhand project. With the clear objective of developing a robotic system based on the off-the-shelf concept, both digital and physical, the Australian researchers created the Di-Wheel robot (depicted in Figure 4b). The Di-Wheel concept consists of a 2WD robot supported by and moving on only two wheels, reducing the size, weight and mechanical complexity of the robot, in addition to simplifying its transport and assembly (about 15 min). The robot, whose distance between the wheels can vary to adjust to different crop types, was designed to perform the tasks of precision seeding (depicted in Figure 5), spraying and weeding, with all electronic devices located in its central part. Since financial cost is the biggest barrier preventing the use of robotic systems by small producers, the Di-Wheel has a support to fix smartphones at a height and angle that enables the use of the smartphone’s internal sensors, such as temperature, light and humidity sensors, RGB cameras, as well as gyroscopes, accelerometers and GNSS devices. The authors conclude that, due to constant advances in the field of computing, the use of open-source machine learning frameworks (OpenAI, TensorFlow and PaddlePaddle) can contribute to better crop control and management [29].
Unlike tractors (heavy vehicles), which are used to work on large-scale farms, small and low-cost robots can be used on small farms. As shown in Figure 4c, a 4WD seeding robot under development in Pakistan was used to sow corn using an individual seed selector, capable of distributing the number of seeds suitable for planting. According to the authors, the prototype sows 5 times faster than the conventional way of sowing, at a sowing rate of 90 seeds/min, totaling a cycle of 0.66 acres/h [30]. To maximize the ratio between the robot’s weight and soil compaction, Indian researchers presented in 2016 a prototype of a small-sized seed drill robot with the capacity to transport a reservoir with up to 17 kg of payload, using a tracked locomotion system to carry heavy loads on non-uniform soils. As it is a prototype in the initial research phase, the authors propose the use of a photovoltaic panel to recharge the robot’s electrical system, as well as the implementation of a Kalman Filter to improve the estimation of the robot’s position and location along the crop [31].
Table 2 summarizes the works previously mentioned.
According to Table 2, all the robots studied are still in the prototype/research phase and have different locomotion systems. Although the proposals rely on low-cost devices, only the Di-Wheel robot has a modular physical and digital structure, using the off-the-shelf concept to eliminate the cost of unnecessary sensors on the robot, since it makes use of the sensors embedded in smartphones.

3.3. Robotic Applications in Agriculture for Plant Treatment

After finishing the sowing stage, keeping the planting growing properly, that is, free of diseases and pests, requires constant monitoring by the farmer. According to FAO data, about 20 to 40% of world crop production is lost due to pests and diseases [32]. The growth of crops invaded by weeds is strongly compromised, and weeds can even destroy crops completely. Weeds can also attract pests, in addition to small animals like snakes and mice. Thus, the sooner they are extracted, the lower the financial losses. As an example of these costs, in Australia weed management costs are close to $4 billion a year [33]. Plant treatment is commonly done by applying herbicides and pesticides (insecticides and fungicides). Efforts to automate the identification of diseases in plants and the detection of weeds are not recent; there have been works in this area since 1998. In [34] a robotic weed control system for tomatoes is described, based on the Bayesian classifier algorithm, that correctly identified 73.1% of tomatoes and 68.8% of weeds in the validation set of field images. In [35], researchers Lee and Slaughter chose to develop a hardware-based neural network, rather than the Bayesian classifier, to improve plant identification. With this new classification method, the robotic system correctly identified 38.9% of tomato cotyledons and 85.7% of weeds. These were just the first robotic systems applied to weed control, from a series of robots that have been developed over the years. Therefore, this subsection addresses several works related to these agricultural tasks, focusing on the latest developments.
Using a 6 Degrees of Freedom (DoF) manipulator arm, an RGB camera and a laser distance sensor (DT35, SICK), all mounted on a fixed platform, Schor et al. developed a robotic system to detect powdery mildew and tomato spotted wilt virus diseases in greenhouses. The RGB camera and the laser sensor were placed on the end-effector of the manipulator (shown in Figure 6a) to capture images from different angles and avoid collisions with the plant. The images were used in the disease detection process, using Principal Component Analysis (PCA) and the Coefficient of Variation (CV). The system obtained an accuracy rate of 64% for the classification of plants with powdery mildew disease in an early stage of evolution and up to 90% for the case of tomato spotted wilt virus, thus enabling accurate detection of diseases in their initial stage [36]. For the same purpose, the mobile robot eAGROBOT (depicted in Figure 6b) was used to identify pests in cotton and groundnut crops. Through the application of artificial intelligence algorithms, such as artificial neural networks and K-means, to the images acquired by an RGB camera of crops in the initial sowing stage (the period of emergence of diseases such as leaf spot and anthracnose), the robot achieved a precision of 83–96% for disease identification in normal images and 89% for wide images [37].
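To give an idea of how PCA can act as a dimensionality-reduction step before classifying healthy versus diseased leaf images, the following scikit-learn sketch uses synthetic, flattened image patches; the data, feature set and classifier are illustrative assumptions and do not reproduce the pipelines of [36] or [37].

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative data: each row is a flattened 32x32x3 leaf patch;
# labels: 0 = healthy, 1 = diseased. Replace with real, labelled field images.
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32 * 3))
y = rng.integers(0, 2, size=200)

# PCA compresses the high-dimensional pixel features into a few principal
# components; a classifier then separates healthy from diseased samples.
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```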
Concerning weed control, robots can perform the detection and subsequent removal of weeds, through the application of herbicides and/or through mechanical tools. A study focused on the detection of weeds in broccoli and lettuce crops was carried out using an RGB-Depth (RGB-D) Kinect v2 camera. Image processing was divided into four stages: segmentation using the Random Sample Consensus (RANSAC) technique, plant extraction (2D connected-component method), feature extraction (length, width and height of leaves, arrangement of ribs and area) and classification of plants (based on these features). The authors report that, through the proposed method, tested in a real environment, the robotic system (shown in Figure 6c) achieved a detection rate of 91.7% for broccoli and 90.8% for lettuce [38].
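A common way to separate crop foliage from the soil in RGB-D data is to fit the dominant ground plane with RANSAC and keep the points above it; the numpy-only sketch below illustrates this idea under that assumption and is not the exact pipeline used in [38].

```python
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.01, seed=0):
    """Fit the dominant plane (e.g., the soil surface) to an N x 3 point cloud
    and return a boolean mask of its inlier points."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (nearly collinear) sample
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)  # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Illustrative use: points not belonging to the soil plane are plant candidates.
cloud = np.random.default_rng(1).random((5000, 3))  # placeholder point cloud
soil_mask = ransac_plane(cloud)
plant_points = cloud[~soil_mask]
print(f"{soil_mask.sum()} soil points, {len(plant_points)} plant candidate points")
```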
According to Jorgensen et al., weed control can be done in two ways: mechanical combined with the use of herbicides or fully mechanical, covering 90% and 10%, respectively, of the total outdoor gardening area in Denmark [39]. In this review, this concept was generalized, dividing the weed control into two ways: the use of mechanical tools and the use of chemical products (herbicides). Following this concept, in addition to detecting weeds, the Australian AgBot II robot mechanically performs the removal of weeds from crops, through the action of three types of tools: an arrow-shaped hoe, a toothed tool and a cutting tool. The AgBot II uses techniques such as Local Binary Pattern (LBP) and Covariance Feature in its image processing stage, collected by the RGB camera, to identify weeds [40].
As examples of the use of autonomous robots in weed control at the commercial stage, one can mention the French robots Oz (Figure 6d), Dino (Figure 6e) and Ted (Figure 6f), all from the company Naio Technologies and designed, respectively, for market farmers (vegetables, nurseries, horticulture), for large-scale vegetable farms (vegetables in rows and on beds) and for wine growers (vines with row width > 150 cm/60 inches) [41,42,43]. All of these robots use mechanical tools to eliminate weeds and are powered entirely by Lithium batteries with an autonomy of 8 h (depending on the type of tool used and the soil). According to Naio Technologies, 70 Oz robots were sold in 2018 alone, with 80% of sales to the French domestic market, 15% to European countries and 5% to the rest of the world. As they are high-tech robots (containing RTK/GPS sensors, RGB cameras and Light Detection and Ranging (LiDAR)) operating autonomously in large crops (avoiding the need for human supervision), they are all tracked and have a communication protocol for sending Short Message Service (SMS) messages in possible theft situations.
Through mechanical cutting tools, the VITIROVER and Tertill robots (depicted in Figure 6g,h, respectively) are light, small, have photovoltaic panels in their mechanical structures, operate under conditions of sun and rain and perform the removal of weeds. Both allow the control and monitoring of robot information through a mobile application [44,45], exploring the IoT concepts. In addition to having a cutting tool, the Tertill (the first robot designed to remove weeds from residential gardens) has wheels designed to assist in weed removal.
Small robots with mechanical weeding were also used in paddy fields. Rice is grown in flooded fields of arable land, in which case both the rice seeds and the weed seeds grow underwater. The 4WS K-Weedbot robot (shown in Figure 6i) was developed to remove weeds while moving, guided by a high-precision image processing system (using a gray image, a median filter, the Otsu method, noise elimination, image segmentation and K-means clustering) so that the robot does not collide with the plants. To improve the extraction of weeds, the K-Weedbot has gears instead of wheels. Through a common RGB camera and a row detection algorithm, the robot moves around the rice field with a maximum deviation of 1° in its trajectory [46].
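The gray-image, median-filter and Otsu steps mentioned above form a standard OpenCV pre-processing pipeline; a minimal sketch of such a stage (with illustrative parameters, not the K-Weedbot’s actual row-detection code) could look as follows.

```python
import cv2

def segment_rows(frame_bgr):
    """Binarize a paddy-field image so that crop rows stand out from the background."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.medianBlur(gray, 5)  # suppress speckle noise
    # Otsu's method selects the threshold automatically from the histogram.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphological opening removes small residual noise blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```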
The AIGAMO-ROBOT (seen in Figure 6j) was developed to be compact, battery-powered (to prevent oil leakage and emissions of polluting gases into the atmosphere) and easy to use. It uses its tracked locomotion system to remove weeds, and thus the robot reduces the emergence of weeds both between rows (inter-row) and within rows (intra-row) [47].
Japanese researchers Sori et al. analyzed the development of paddy fields under three conditions: with weeds, with weeds removed by herbicide application and with weeds removed by the robot (shown in Figure 6k). As the results in Figure 7 indicate, the use of the robotic system together with more widely spaced rice crops allows several improvements in growth and crop yield, such as bigger roots, stems and leaves and greater height and weight of the rice [48].
Unlike the K-Weedbot (4WS) and AIGAMO-ROBOT (track) robots, a robotic system (shown in Figure 6l) capable of floating on the surface of paddy fields was used to cause disturbances in the water, leaving it turbid to reduce the incidence of sunlight and reduce weed photosynthesis. To carry out the disturbances in the water, the researchers used chains distributed evenly and fixed on the rear of the robot. During the dragging of these chains on the surface of the soaked soil, the weed species detach themselves from the soil surface and head towards the water surface. Although the robot used is not autonomous, it was also able to reduce the emergence of weeds [49].
In addition to being high-value products, pesticides are harmful to human health. For a rural worker to apply pesticide to the plants, he must wear several items of personal protective equipment. To relocate the rural worker who applies pesticides to a safe environment, through human–machine interaction, researchers Adamides et al. developed two robots, AgriRobot and SAVSAR (Figure 6m,n), to remotely spray pesticides on vineyards using a Human Machine Interface (HMI) [50]. Another mobile robot (shown in Figure 6o) was used to reduce the amount of material sprayed; in this case, the robot has an RGB camera and distance sensors to automatically open the pesticide spray valve based on the machine vision Foliage Detection Algorithm (FDA) and Grape clusters Detection Algorithm (GDA), resulting in a 45% reduction in pesticide material [51].
Another way to perform weed control is through the use of herbicides. The RIPPA [52] and Ladybird [53] robots were also developed to remove weeds, as shown in Figure 6p,q, respectively. RIPPA is smaller than Ladybird, but uses several of its technologies, being a prototype for the commercial version. Unlike the AgBot II, the RIPPA and Ladybird robots, in addition to having a photovoltaic panel in their mechanical structures, eliminate weeds by spraying herbicide in the appropriate place. The RIPPA and Ladybird robots also capture hyperspectral images. Spectral information can indicate plant health (through the use of machine learning algorithms). Thus, if a plant is labeled with a low health value, the same system will spray an appropriate amount of fertilizer. It is noted, therefore, that robotic systems that have a liquid spraying system can be used not only to remove weeds, but also to strengthen crops by spraying fertilizers.
The BoniRob robot (as shown in Figure 6r) is more complete, as it performs the tasks of detecting (through cameras and ultrasonic sensors) and removing weeds, not only using mechanical tools, but also spraying herbicide [54].
Like the RIPPA and Ladybird robots, the Swagbot robot (seen in Figure 6s) was developed by Australian researchers from ACFR. The robot was built for general purposes, to perform activities such as autonomous weed identification and spraying, soil and pasture analysis, biomass estimation and livestock monitoring [55]. The objective of developing robots with a wide variety of applications is to be able to establish a process of standardization and modularization of robotic systems.
A UAV, navigating the farm using the GPS service and an IMU, was used to monitor the soil and the efficiency of irrigation system management [56]. To determine the need to activate the irrigation system, the UAV has a multispectral camera to calculate wine-growing vegetation indices, based on the Normalized Difference Vegetation Index (NDVI). In addition to avoiding the use of airplanes and satellites, crop monitoring by UAV is carried out at low altitude, without the interference of clouds. A UAV can contribute to reducing the use of pesticides and maximizing the efficiency of crop management [3,57]. Proposing to assist in monitoring the planting and the identification of weeds, Sánchez et al. carried out a behavioral study of the vegetation in wheat fields, before and after sowing, through the application of herbicide. In this case, they performed multi-temporal mapping of the vegetation fraction at the beginning of the season through various indices, such as CIVE, ExG, ExGR, the Woebbecke Index, NGRDI and VEG [58,59].
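The NDVI on which such irrigation decisions are based is computed per pixel from the near-infrared (NIR) and red bands of the multispectral camera as NDVI = (NIR − Red)/(NIR + Red); the short numpy sketch below shows this computation with hypothetical band arrays and an assumed vegetation threshold.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel Normalized Difference Vegetation Index from NIR and red bands."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Illustrative thresholding of the vegetation map (NDVI ranges from -1 to 1;
# dense, healthy vegetation typically yields high positive values).
nir_band = np.random.default_rng(0).random((480, 640))
red_band = np.random.default_rng(1).random((480, 640))
vegetation_mask = ndvi(nir_band, red_band) > 0.4  # assumed threshold
print("vegetated pixels:", int(vegetation_mask.sum()))
```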
Unlike the vineyards commonly found around the world, situated on flat terrain, the vineyards in the Ribeira Sacra region of Galicia in Spain, the Douro region in Portugal, Banyuls and Rhône–Alpes in France and the Rhine and Moselle in Germany are located on steep slope terrain. This characteristic imposes strong difficulties on the mobility of robots. To carry out the controlled application of pesticides in such crops, Italian researchers developed the UAV Bly-c-agri (depicted in Figure 6t), with the capacity to carry a tank of up to 10 L of pesticide and, thus, eliminate all problems related to terrestrial locomotion [60]. Another UAV was used to spray Urea (an organic compound) within predefined regions. With a maximum load capacity of 5 L, this type of application also reduces expenses from the blanket application of herbicides [61].
Kiwi cultivation, among other treatments, depends on pollination. In [62], the authors perform this task using a machine vision system based on a CNN, which controls the spray timing of a mechanical system composed of 20 nozzles. The robot (shown in Figure 6u) was able to pollinate about 79.5% of the kiwi flowers at a speed of 3.5 km/h [62].
Although it is complex for a robot to perform, pruning the plants is an important task. According to Verbiest et al., who carried out research activities in pome orchards, the main challenges of this type of task consist of scanning and measuring the structures of the plants to know the exact place to prune [63]. Furthermore, one way to improve pruning performance is to adapt the crop’s geometric characteristics to the technical requirements of robotic systems [64]. In [65], which describes a robotic system for the automatic pruning of grape vines, to inhibit the interference of variations in natural lighting, as well as the interference of the background landscape, researchers Botterill et al. developed a mobile platform (as shown in Figure 6v) that has a 6-DoF manipulator, Light-Emitting Diodes (LED) and three RGB cameras. Pruning is carried out through a saw connected to the end of the robotic arm; the classification of branches to be cut is based on the Support Vector Machine (SVM) learning algorithm, and the arm motion is planned with the Rapidly Exploring Random Tree (RRT) variant RRT-Connect. The system proved to be able to cut the branches of vines [65].
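As a rough idea of how an SVM can label candidate branches as "prune" or "keep" from hand-crafted descriptors, the scikit-learn sketch below uses made-up features and labels; it does not reproduce the descriptors or training data of [65].

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative descriptors per branch segment (e.g., thickness, length, angle,
# distance to the cordon); labels: 1 = prune, 0 = keep. Replace with real data.
rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)  # synthetic labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```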
In [66] a platform composed of a 3-DoF prismatic manipulator equipped with low-cost RGB-D cameras was used to perform automated green shoot thinning in vineyards. In this case, the system design is composed of a Faster R-CNN-based approach to extract the cordon trajectories and a control system (based on a 6th order polynomial) to operate the thinning end-effector. The robotic platform obtained a Root Mean Square Error (RMSE) of 1.47 cm in the thinning end-effector position at a forward speed of 6.6 cm/s.
Although the robots in Figure 6 are from different regions of the world and used in different crops, many have wheels, use 4WS and have sensors such as GNSS and RGB cameras. To standardize the components commonly used in agricultural robotic systems, the company SAGA Robotics developed a modular robotic system, the Thorvald II. Figure 6w shows the different possible configurations of the robot, namely the following versions: models with two possibilities for a differential motor drive with caster wheels for support, models with different track widths, models with/without suspension modules, versions with different heights and models containing three to six wheels [67]. Similarly, Clearpath [68] also has several general-purpose robotic systems (as shown in Figure 6x), being used in some of the robots described in the following sections. Finally, the Russian company Avrora Robotics designed its AgroBot universal control system that can be installed on any special equipment or tractor [69].
Table 3 summarizes the characteristics of the various applications of robots in plant treatment.
According to the data in Table 3, the following aspects can be discussed:
  • Disease identification: Using conventional RGB cameras, researchers Schor et al. [36] and Pilli et al. [37] were able to detect diseases in plants, with hit rates between 64–96%;
  • Mechanical weeding: Several works, both in the research phase and commercially available, use mechanical tools to remove weeds, eliminating the application of herbicidal products of high financial value and allowing the cultivation of organic products. As previously described, with the use of a low-cost robot, researchers Sori et al. [48] report the various benefits generated by the mechanical removal of weeds;
  • Chemical weeding: Most robots that perform this task have a specific computer vision technique/algorithm to reduce expenses with over spraying. Vegetation indices such as ExG-ExR and NDVI were used to extract the crop characteristics and perform its subsequent classification. After its proper classification, the specific spray system applies the herbicide to the weed. Therefore, the same spraying system for herbicidal products can be used for the precise application of fertilizers on plants classified as having a low health value [52,53];
  • General tasks: As seen in the column “Locomotion Systems” in Table 3, the robots used have terrestrial (4WD, 4WS, track), aerial (hexacopter, octocopter) and marine (boat) locomotion systems. Not only to avoid the re-creation of existing systems but also to speed up the transition from research to the commercial stage, the Swagbot, Thorvald II, Clearpath and AgroBot platforms were developed for use in carrying out different tasks and in different agricultural environments.

3.4. Robotic Applications in Agriculture for Harvesting

In addition to being a repetitive task requiring agile execution, the harvesting activity requires a lot of effort from the harvester. According to Hayashi et al., harvesting operations represent around 25% of all hours of agricultural work in Japan [70]. Concerning financial costs (an important factor in farmers’ decision-making), in 2014, in Australia, labor costs represented between 20% and 30% of total cash costs [71] and in China, in 2019, they represented more than 75% of production costs, increasing annually [72]. This difference in values between Australia and China is due to the higher level of automation of Australian crops. In this way, several works (see Figure 8) propose the use of robotic systems to perform agricultural harvesting activities, and several scientific articles address different image processing techniques in different types of crops. In [73], during the design and implementation of an aided fruit-harvesting robot, Ceres et al. found that the main difficulties in the development of these types of systems were: driving the robot through the field from tree to tree and from tree row to tree row; detection and localization of fruits; and grasping and detaching of selected targets. Another relevant work carried out in the 1990s was the autonomous mobile robot AURORA for greenhouse operation, capable of autonomously navigating the greenhouse corridors using only the information provided by ultrasonic sensors [74]. The robot was built to be a multitasking platform, to perform tasks such as harvesting, transportation of fruits and plant inspection. In this case, the project requirements were: navigation in unaltered greenhouses, low cost, flexibility, multi-functionality, supervisable autonomous operation and a friendly user interface. In their work, Bac et al. analyzed about 50 applications of robots from different regions of the world in carrying out agricultural harvesting activities [18]. However, despite all the reviewed works proposing the use of robots to carry out harvesting activities, only the task of capturing fruits showed an improvement trend when compared to the works done in the period between 1984 and 2014. In the other activities, the evaluated robots had not yet performed well enough to replace a human harvester [18].
The Agrobot E-Series robot is made entirely of stainless steel and military-grade aluminum to withstand the diverse climatic conditions of the field. It has 24 independent Cartesian robotic arms that move around the robot’s structure to pick strawberries. With a total of three wheels (the electric motor is located in the central part), the robot allows its mechanical structure to be adjusted to suit the crop dimensions. As it is a large robot (as shown in Figure 8a), it uses data from a LiDAR sensor to avoid collisions with any workers on the farm [75].
With the equivalent yield of 25 to 30 human harvesters, the Berry 5 robot has a picking speed of 8 s per fruit, moving through strawberry beds at a speed of 1.6 km/h and harvesting up to eight acres of strawberries a day. Created by the American company Harvest CROO Robotics, this automatic harvester is making strides towards commercialization [76]. Like the Agrobot E-Series, the various mechanisms of the Berry 5 robot are protected by patents, making their scientific analysis difficult.
In 2017, researchers Leu et al. developed a green asparagus harvesting robot (named GARotics) [77]. Asparagus must be harvested when it reaches a height of between 15.24 and 20.32 cm, otherwise it will not be accepted by the market. This type of harvest is difficult to automate because asparagus tends to grow in groups and the tenderness of the stalk makes it susceptible to breaking. In this case, two robotic arms with customized grippers were developed to catch the asparagus without damaging it. The robotic arms have a single pneumatic cylinder that converts linear motion into a resulting circular motion. To identify asparagus ready to be harvested in real time, the robot has an RGB-D camera that provides the planting data for its vision module, consisting of the following tasks: point cloud generation, camera calibration (via Template Point Cloud (TPC) and Model Point Cloud (MPC)) and online asparagus tracking (RANSAC and Euclidean clustering methods). As a result of the work, the German robot achieved an average locomotion speed of 0.2 m/s and a harvest cycle of 2 s per robotic arm, resulting in 90% successful harvests [77].
A robotic lettuce harvesting system, Vegebot (depicted in Figure 8b), was developed by English researchers in 2018 [78]. Since lettuce is a very fragile vegetable, the challenges of this task are correct vegetable identification and removing the lettuce without damage. Using two RGB cameras located above and at 45° from the vegetation, the head of the lettuce was identified based on a Region-based Convolutional Neural Network (R-CNN). After locating the lettuce, Vegebot extracts it using a 6-DoF robotic arm and a gripper system with closed-loop monitoring of the applied force. The system obtained a success rate of 91% in locating lettuces and 82% accuracy in the correct classification of vegetables [78].
In 2019, Ge et al. developed an algorithm to locate and collect strawberries using a robot (developed by Noronn AS) equipped with an RGB-D camera, seen in Figure 8c. The detection of strawberries was done using R-CNN and the collision-free path-planning algorithm was based on 2D images and the 3D point cloud. After several tests in real environments, 74.1% of the identified ripe strawberries were successfully harvested, with an F1 score (the harmonic mean of precision and recall) of 0.9 [79].
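For reference, the F1 score reported in several of these works is the harmonic mean of precision and recall; the short sketch below shows how it is computed from counts of true positives, false positives and false negatives (the counts are illustrative only).

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, computed from detection counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative counts: 90 correct detections, 10 false alarms, 10 missed fruits.
print(f1_score(tp=90, fp=10, fn=10))  # 0.9
```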
Sepúlveda et al. addressed robotic aubergine harvesting using dual-arm manipulation [80]. Through two 6-DoF robotic arms, as shown in Figure 8d, underactuated grippers with a set of three flexible fingers (off-the-shelf) and two cameras, the robotic system was used to analyze the benefits generated by the collaborative action of the manipulators. Just as a human picker often uses one of his hands to clear the path to reach the fruit and harvest it with his other hand, the robotic system was used to identify aubergines partially blocked by leaves, to lift the leaves so that the camera captures the image of the aubergine in its entirety and then to perform its collection. An algorithm based on the SVM classifier was used to classify the image into four classes: aubergines, leaves, branches and background. The proposed occlusion algorithm is based on the difference between the distances between the centroids of the aubergines and the leaves identified in the image, creating a vector pointing in the direction in which the leaf must be lifted to unblock the aubergine. After several experimental tests, the success rate of the harvesting robot was 95% for two isolated fruits and 80% for an occluded fruit, both using two arms. The system improved the total processing time (image processing, inverse kinematics and action) per fruit from 42.90 s with one arm to 26.54 s with two arms. The main challenges encountered by the authors were changes in lighting conditions, which directly interfere with the system’s performance.
Using the modularization concept described in Section 3.3, the strawberry picker robot seen in Figure 8e uses the locomotion system of the Thorvald II robots, from the Saga Robotics group. To improve its harvest time, it features a 3-DoF Cartesian-type dual-arm system. With simpler inverse kinematics to calculate, the two Cartesian robotic arms were used not only to optimize harvesting efficiency but also to avoid collisions. In this sense, Xiong et al. developed a novel active obstacle-separation path-planning strategy for cluster picking [81]. The occlusion of fruits (by leaves, branches or other unripe fruits) is a recurring problem in several applications of harvesting robots. Thus, using an RGB-D camera and an algorithm based on Hue Saturation Value (HSV) color-thresholding, the picker robot identifies strawberries and their obstacles, calculates the trajectory necessary to reach the target and, when it approaches an obstruction, clears the path by pushing obstacles to the side. The HSV color-thresholding technique is less sensitive to changes in ambient lighting. If the robotic arm does not pick the fruit on the first attempt, the system makes a new attempt. On the first attempt, the strawberry picker robot achieved a success rate of 97.1% for isolated strawberries without neighbors and 5% for strawberries completely surrounded by unripe strawberries; on the second attempt, the robot achieved success rates of 100% and 20% for the same cases. With the dual-arm approach, the robot reduces the fruit handling time from 6.1 s (with only one robotic arm) to 4.6 s (with both robotic arms) [81].
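A minimal OpenCV sketch of HSV color-thresholding for ripe (red) fruit, in the spirit of the approach described above but with illustrative threshold values rather than those used in [81], is shown below.

```python
import cv2
import numpy as np

def ripe_strawberry_mask(frame_bgr):
    """Binary mask of red (ripe) regions using HSV color-thresholding.
    Red wraps around the hue axis, so two hue ranges are combined;
    the threshold values are illustrative only."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower_red = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))
    upper_red = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower_red, upper_red)
    # Morphological opening cleans the mask before extracting fruit contours.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```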
Visual perception is continually being improved. As an example of current advances in this task, Kang et al. developed a prototype of an apple harvesting robot equipped with a 6-DoF robotic arm and a soft-finger-based gripper (so as not to damage the surface of the apples), together with an RGB-D camera (as shown in Figure 8f), to perform the following tasks: vision perception, motion planning, fruit verification and fruit detachment [82]. Fruit recognition was done through the Dasnet deep convolutional neural network and the fruit pose computation through the 3D Sphere Hough Transform (3D-SHT). The ambient light strongly interfered with the distance measurements captured by the RGB-D camera and, to solve this problem, the authors applied a distance-based denoising method to the points before pose computation. Thus, all fruits that do not have a sufficient number of points or have severely imbalanced lengths along the X, Y and Z axes are removed from the list of discovered fruits. The point cloud obtained by the RGB-D camera was used to model the environment where the apples are located; in this case, the authors used an octree-based representation of occupied space in the working environment. The system obtained an F1 score of 0.871 for fruit detection and, as improvements, the authors suggest the recognition of ripe and damaged fruits [82].
To perform real-time visual localization of the picking points for a ridge-planting strawberry harvesting robot, Yu et al. propose the Rotate-YOLO (R-YOLO) method, a variation of the original YOLO deep learning algorithm [83]. The bounding box is rotated by an angle α to follow the strawberry’s orientation, allowing greater precision in the location of the picking point. Specially customized to operate on strawberry ridge-planting, the robot (seen in Figure 8g) has fiber sensors on its end-effector to speed up control, avoiding real-time distance measurement [83]. Using the R-YOLO recognition method proposed by the researchers, the robot achieved a strawberry detection accuracy rate of 94.43%, with a processing time of 0.056 s for 640 × 480 images captured with a conventional RGB camera.
As an example of harvesting robots that also use artificial intelligence algorithms, but are mounted on a mobile platform and are used to harvest sweet pepper in protected cropping environments, one can mention the Harvey platform and SWEEPER, shown in Figure 8h,i respectively. The Harvey platform opted for the use of Deep Convolutional Neural Network (DCNN), while the SWEEPER robot used the techniques of deep learning, shape and color-based detection algorithm and Hough Transform (HT) [84,85,86]. Both have 6-DoF robotic arms, but they have different capture methods and cutting systems for detachment, as Harvey performs the suction of sweet pepper employing a vacuum pump, while SWEEPER holds the sweet pepper through flexible fingers. As a result, the SWEEPER robot took an average of 4.3 s to detect sweet pepper and 14.5 s for detachment, while the Harvey platform took about 3.7 s and 2.2 s to perform the same activities.
Coconut harvesting activities in developing countries are commonly done without any safety equipment. In addition to causing serious injuries, a fall from a coconut tree can be fatal. In this sense, the Indian researchers Megalingam et al. developed Amaran, an unmanned robotic coconut tree climber and harvester [87]. The Amaran robot climbs coconut trees through a light mechanical structure composed of eight mecanum wheels, four located at the top and four at the bottom. Through a certain activation sequence, Amaran can move up, down, left or right. To detach the coconuts, Amaran has a 4-DoF robotic arm and a cutting tool as an end-effector, both customized to be light and not compromise the robot’s movement along the coconut tree, as shown in Figure 8j. Without any type of computer vision algorithm, the robot’s RGB camera serves to assist the human operator, located in a safe region on the ground, in remotely controlling and monitoring the robot. Taking advantage of the IoT concept, the Amaran robot can be controlled through a smartphone application, via the Bluetooth communication protocol. After several tests, Amaran proved to be able to successfully climb trees up to 15.2 m in height, with slopes of 30° and with diameters ranging from 0.66 m to 0.92 m [87]. Although the total harvest time (preparation and harvest time) of the Amaran (21.9 min) is longer than that of a professional climber (11.8 min), the robot can climb as many coconut trees as necessary without exposing the human operator to work that is harmful to his health or even to potentially fatal accidents.
Table 4 summarizes the main characteristics of the robotic applications for harvesting.
When analyzing the data in Table 4, the following is observed:
  • Challenges: Despite constant technological advances, fruit occlusion and changes in ambient lighting are still challenges that merit further scientific studies and work to enable the use of robots in agricultural environments;
  • Simplicity and efficiency: In addition to the challenges of occlusion and changes in ambient lighting, the simplicity of construction and the efficiency of the robotic system are two factors that help streamline the commercialization process. The system efficiency is directly related to the computer vision algorithms used and, in this sense, the improvement of such algorithms will increase the efficiency of the robotic system as a whole. To the authors’ knowledge, only the Agrobot E-Series and Berry 5 robots in Table 4 are in the commercialization phase;
  • Evolution between 2014–2021: As previously described, Bac et al. [18] carried out a detailed study of 30 years of evolution (1984–2014) of harvesting robots. Thus, the values (average; minimum–maximum) of their work are compared with the analyses of the present work (which cover 2013–2021). Bac et al. reached the following values: harvest success rate (66%; 40–86%) and cycle time (33 s; 1–227 s), while this work found the following values: harvest success rate (81.17%; 50–100%) and cycle time (18.88 s; 2–36.9 s); the cycle time of the Amaran robot was disregarded, as it depends on the skills of the operator. Thus, in general terms, there is a 22.98% increase in the average harvest success rate and a 42.78% reduction in the average cycle time, indicating an evolution in the performance of harvesting robots, as verified in the short computation below.
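The relative changes quoted in the last item follow directly from the reported averages, as the short computation below shows.

```python
def relative_change(old, new):
    """Percentage change of `new` with respect to `old`."""
    return 100.0 * (new - old) / old

# Average harvest success rate: 66% (1984-2014) vs. 81.17% (2013-2021).
print(relative_change(66.0, 81.17))   # 22.98... % increase
# Average cycle time: 33 s vs. 18.88 s.
print(relative_change(33.0, 18.88))   # -42.78... %, i.e., a reduction in cycle time
```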

3.5. Robotic Applications in Agriculture for Yield Estimation and Phenotyping

Through accurate data on the quantity and quality of fruit growth, provided by more sophisticated tools, farmers can manage their crops more efficiently. Yield estimation is nothing more than monitoring the entire crop and estimating the produced fruit. On the other hand, several variables, such as climate change and soil quality, can interfere with the development of plants. Thus, by identifying the phenotype of the plants, it is possible to link it to their respective genotype, making it possible to identify the proper growth conditions. It is noted, however, that for a robot to estimate phenotyping and/or yield, it must have not only reliable sensory data, but also efficient computer vision algorithms. In 1998 and 2001 there were already researchers proposing the use of sensors and machine vision algorithms to detect crop rows and gather field information [88,89]. In this case, a camera and an RTK/GPS device were installed on a tractor to create spatial maps relating the crop height and width. Through an Artificial Neural Network (ANN), the robotic system obtained a hit rate of 84%, indicating that a machine vision system could be used as a crop prediction sensor. Thus, this subsection addresses several robots developed over the years to perform the tasks of yield estimation and phenotyping, with emphasis on those developed in recent years.
The Shrimp robotic system shown in Figure 9a, equipped with six RGB cameras, was used to estimate the yield of apple orchards under natural lighting conditions. To locate each sampled image, the Shrimp platform relies on the integration of an Inertial Navigation System (INS) with a GPS. The image processing is based on Multiscale Multilayer Perceptron (MLP) and CNN, and the detection of the apples was performed through Watershed (WS) segmentation and the Circular Hough Transform (CHT). The Shrimp platform obtained an apple detection rate of 82.5%, an F1 score of 0.791 and a coefficient of determination (r²) of 0.826, using CNN and WS [90].
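To illustrate the WS post-processing step described above, the following is a minimal sketch (not the Shrimp implementation; the probability-map input and the 0.6 marker threshold are assumptions) of how touching fruit can be separated and counted:

```python
# Minimal sketch: counting fruit from a per-pixel fruit-probability map
# using watershed segmentation, in the spirit of the WS step reported for
# the Shrimp platform [90]. Thresholds are illustrative assumptions.
import cv2
import numpy as np

def count_fruit(prob_map, thresh=0.5):
    """prob_map: HxW float array in [0, 1], e.g. the output of a fruit-detection CNN."""
    mask = (prob_map > thresh).astype(np.uint8)
    # Peaks of the distance transform give one marker per (possibly touching) fruit.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    n_markers, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1                      # background label becomes 1
    unknown = cv2.subtract(mask * 255, sure_fg)
    markers[unknown == 255] = 0                # pixels to be resolved by watershed
    color = cv2.cvtColor(mask * 255, cv2.COLOR_GRAY2BGR)
    cv2.watershed(color, markers)              # refines the fruit boundaries in place
    return n_markers - 1                       # number of fruit markers found
```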
With a highly competitive market, the yield of vines can vary from region to region, due to climatic conditions, variety, soil and the techniques specific to each agricultural producer. In this sense, monitoring the grapes throughout the crop cycle allows quantifying the quality of the harvested grapes. Two projects funded by the European Union’s Seventh Framework Programme have been developed in this area, VINBOT and VineRobot, shown in Figure 9b,c, respectively. VINBOT uses CNN to detect grapes, then computes the area of grape occupation in the images and estimates their respective weight in kilograms [91]. VineRobot monitors parameters such as grape yield, vegetative growth, vineyard water status and grape composition using the following techniques: chlorophyll-based fluorescence, RGB machine vision and thermography [92].
Brazil is one of the largest food exporters in the world. For this reason, the Brazilian Agricultural Research Corporation (EMBRAPA) financed the development of AgriBOT (seen in Figure 9d), a modular agricultural robotic platform for data acquisition and yield estimation in orange and sugar cane crops. In 2011, Abrahão et al. compared the navigation efficiency of the D* and Focused D* algorithms for use by the 4WS AgriBOT. The Focused D* algorithm proved to be more efficient than D* in environments in which maps were incomplete or inaccurate [93]. In 2016, Lulio and Lugli et al. implemented a J Segmentation (JSEG) algorithm, statistical Artificial Neural Network (ANN) image segmentation techniques and sensor fusion in the AgriBOT robot, based on the extraction of objects from real natural scenes, identifying items such as fruits, grasses, stems, branches and leaves [94,95].
The 4WD Agrob V14 robot was developed to monitor the vineyards of the Douro region in Portugal, which have steep slope terrain. The robot has RGB cameras, infrared (IR) sensors, LiDAR and encoders, as seen in Figure 9e, and was designed to work autonomously even in cases of unavailability of the GNSS signal [96]. The high content of stones in the soil interferes with the odometry and IMU data. To solve these problems, Santos et al. [97] integrated Simultaneous Localization and Mapping (SLAM) techniques with the data generated by Radio Frequency IDentification (RFID) tags located at the beginning and the end of each vineyard row. The authors conclude that Agrob V14 can overcome ditches, rocks and slopes of up to 30% inclination [97].
The Agrob V16 robot (shown in Figure 9f), developed for yield estimation and pruning tasks, took another approach to improving the robot’s localization and positioning, based on a wireless sensor network. Integrating the concepts of IoT and SLAM, Agrob V16 (which has the Clearpath locomotion system mentioned in Section 3.3) reads the Received Signal Strength Indication (RSSI) signals generated by a Bluetooth Low Energy (BLE) transmission module and, according to the received signal strength, estimates the position of the signal source. The fusion of encoder data with RSSI-based distance signals was done by an Extended Kalman Filter (EKF). After applying the EKF, the use of RSSI allowed a 25% reduction in the standard deviation of the robot’s trajectory [98].
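The sketch below illustrates the kind of fusion just described under simplifying assumptions (the path-loss parameters, the single known beacon position and the noise value R are hypothetical, not taken from [98]):

```python
# Illustrative sketch only: converting BLE RSSI into a range estimate with a
# log-distance path-loss model and fusing it with odometry through a minimal
# EKF measurement update. Parameter values are assumptions.
import numpy as np

def rssi_to_range(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Log-distance path-loss model; returns an estimated distance in metres."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def ekf_range_update(x, P, z, beacon, R=0.5):
    """EKF update of a planar pose x = [px, py, theta] with a range measurement z
    to a beacon at known position beacon = [bx, by]."""
    dx, dy = x[0] - beacon[0], x[1] - beacon[1]
    d = np.hypot(dx, dy)
    H = np.array([[dx / d, dy / d, 0.0]])   # Jacobian of the range measurement
    S = H @ P @ H.T + R                     # innovation covariance (1x1)
    K = P @ H.T / S                         # Kalman gain
    x = x + (K * (z - d)).ravel()           # correct the pose with the range residual
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

Starting from, e.g., x = np.zeros(3) and P = np.eye(3), each BLE reading would be converted with rssi_to_range() and then passed to ekf_range_update() between odometry prediction steps.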
A biologically-inspired hexapod robot was developed to monitor the healthy growth of agricultural fields, gathering, for example, agronomic information on soil nutrients. However, according to Iida et al., the use of high-precision navigation devices (RTK/GNSS) is not viable, due to the hexapod’s size and power supply limits [99]. Just as the locomotion system was biologically inspired by insects, the researchers presented a new way of guiding the robot without using RTK/GNSS, inspired by the insect’s sense of smell. Through the use of an anemoscope and CO2 gas sensors, the hexapod travels autonomously throughout the crop, following the sources of CO2 and the wind direction. The hexapod robot in Figure 9g used the tripod gait, varying its speed according to the control update time and/or adjusting the length of its steps. After experimental results, the authors suggest that the hexapod can be guided autonomously by air currents and simultaneously monitor the concentration of CO2 gas discharged from crops and soil [99].
Like the hexapod robot, the TerraSentia robot also has small dimensions, monitors the crops and does not have an RTK/GNSS system. In this case, the TerraSentia robot depicted in Figure 9h was used to move between corn and sorghum crops, which have tall vegetation, relying on a single LiDAR for under-canopy light detection and ranging-based autonomous navigation [100]. TerraSentia moves around using a LiDAR-based navigation algorithm: after reading the LiDAR input data, it filters out outlier points and estimates its trajectory using a set of heuristics and least squares. Several tests were carried out and the robot covered more than 6 km of straight rows autonomously.
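A minimal sketch of that idea is given below (assumptions: the 2D LiDAR points have already been split into left-row and right-row sets in the robot frame, with x lateral and y forward; this is an illustration, not the TerraSentia algorithm):

```python
# Least-squares fitting of crop-row lines from LiDAR points, used to derive a
# heading error and a lateral offset for in-row navigation. Illustrative only.
import numpy as np

def fit_row(points):
    """Fit x = m*y + c to the (x, y) points of one crop row; returns (m, c)."""
    y, x = points[:, 1], points[:, 0]
    m, c = np.polyfit(y, x, 1)
    return m, c

def heading_and_offset(left_pts, right_pts):
    """Average the two row fits: heading error (rad) relative to the rows and
    lateral offset (m) of the robot from the row centre line."""
    ml, cl = fit_row(left_pts)
    mr, cr = fit_row(right_pts)
    heading_error = np.arctan((ml + mr) / 2.0)
    lateral_offset = (cl + cr) / 2.0
    return heading_error, lateral_offset
```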
In [101], researchers developed a canopy density estimation method, evaluated at four separate locations in South Australia, using AgScan3D, a mobile vehicle-mounted 3D spinning LiDAR system that generates a globally registered ray cloud. The AgScan3D is mounted on the rear of a Kubota farm vehicle and consists of a 3D spinning LiDAR, a 3DM-Gx3 IMU and a GPS unit. The AgScan3D system applies a Continuous-Time SLAM algorithm to build a globally registered 3D ray cloud, runs a series of extraction and segmentation algorithms for the ground and the rows of vines, and uses a variable resolution method to perform the canopy density estimation. In experimental tests, a total traversal of 160 km and approximately 93,000 vines were scanned, obtaining a repeatability with a root mean square error of 3.8% for the vehicle traveling at an average speed of 5 to 6 km/h.
Plant phenotype information can be extracted in various ways, such as, for example, by monitoring plant height, weight, biomass, shape, color, volume, light absorption and temperature [102]. Two robotic platforms were used together to extract the phenotypic characteristics of corn plants: Vinobot and Vinoculer, shown in Figure 9i,j, respectively. Vinoculer is a fixed platform that constantly monitors height and extracts 3D data from the crop (reconstructed using Visual Structure From Motion—VisualSFM—depicted in Figure 10), while Vinobot moves around the entire crop and extracts the individual phenotypic characteristics of each plant. In this way, it is possible to correlate the data collected by Vinoculer (general data) with those from Vinobot (individual data). In addition to monitoring plant height, these robots calculate the Leaf Area Index (LAI) and measure light intensity and air temperature, both using multiple sensors. Filtering the point cloud based on vegetation height reduces the interference of unwanted vegetation, such as weeds. Thus, the collection of such data from the farm greatly speeds up the phenotyping process [103].
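A small sketch of that height-based point-cloud filtering is given below (assumptions: a generic NumPy point cloud with z measured from the ground plane and illustrative height limits; this is not the Vinobot/Vinoculer code):

```python
# Keep only points inside the expected crop height band, so that low-growing
# weeds (and overhanging clutter) are excluded from the phenotyping point cloud.
import numpy as np

def filter_by_height(points, min_h=0.30, max_h=2.50):
    """points: Nx3 array (x, y, z) with z in metres above the ground."""
    z = points[:, 2]
    return points[(z >= min_h) & (z <= max_h)]
```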
The architecture consists of two robotic platforms: an autonomous ground vehicle (Vinobot) and a mobile observation tower (Vinoculer). The ground vehicle collects data from individual plants, while the observation tower oversees an entire field, identifying specific plants for further inspection by the Vinobot. The advantage of this architecture is threefold: first, it allows the system to inspect large areas of a field at any time, during the day and night, while identifying specific regions affected by biotic and/or abiotic stresses; second, it provides high-throughput plant phenotyping in the field by either comprehensive or selective acquisition of accurate and detailed data from groups or individual plants; and third, it eliminates the need for expensive and cumbersome aerial vehicles or similarly expensive and confined field platforms.
A customized helicopter (Pheno-Copter, depicted in Figure 9k) was used to estimate variations in the land cover of sorghum (early season), the cover temperature in sugar cane (mid-season) and three-dimensional measures of crop lodging in wheat (late season), using concepts from RANSAC and Digital Elevation Models (DEM), showing the ability to meet different levels of needs and image coverage [104].
Designed for scouting and phenotyping applications, the Ara robot from ecoRobotix (depicted in Figure 9l) can receive RTK/GPS corrections via GSM/3G communication; however, it does not provide any built-in algorithm or sensor solution, being designed to be integrated with several sensors from different manufacturers. Weighing around 130 kg, the robot can be controlled by a smartphone via WiFi or 3G/4G communication [105].
Table 5 shows a summary of the aforementioned works.
The following observations can be made after analyzing the data in Table 5:
  • Sensors: The micro-level observation of the biological phenomena of each plant, whether for yield estimation or phenotyping, requires specific and highly reliable sensors, ranging from fluorescence detection, multispectral, Near-Infrared (NIR), IR and environmental sensors to RGB cameras;
  • SLAM: Whether due to physical dimensions, power supply requirements or vegetation height, the unavailability of GNSS-based systems drives the improvement of SLAM techniques. Thus, to improve navigation in these conditions, robots use both natural features (such as the generation of trajectories based on the average distance between rows and the direction of the airflow) and artificial ones (such as the use of RFID tags and wireless sensors). Several SLAM algorithms and path planning techniques for agricultural and forestry robots are described in detail in [106,107,108];
  • Artificial intelligence: Based on the specific characteristics of each crop, vegetation indices (such as NDVI and chlorophyll-based fluorescence) and artificial intelligence algorithms (such as MLP, CNN and SVM) can be used. Therefore, one must seek to establish a balance between the computational complexity of the proposed approach and the expected efficiency/result.

4. Discussion

After surveying and discussing the main robotic systems presently available or in the research phase, several data were collected for analysis. Therefore, this section discusses the research trends, common difficulties and the factors that hinder commercial development, as well as which countries are investing in research on these types of solutions and, finally, the desirable requirements for robotic agricultural systems.

4.1. Agricultural Robots

When evaluating the various characteristics commonly found in all agricultural applications discussed in this article, it was possible to create the charts presented in Figure 11.
The data in Figure 11 reveal that, in general terms, most robotic system applications in agricultural environments are 4WD robots without a robotic arm, used for the removal of weeds, and use RGB cameras, as depicted in Figure 11a–d, respectively. The research shows that although 32.23% of the works use RGB cameras, most of them do not have/do not report the use of computer vision algorithms, as depicted in Figure 11e, where the specific functions are, for instance, the Otsu method, HT and CV. Of all 62 projects analyzed, 80.65% are in the research stage (shown in Figure 11f). Most of the analyzed work was done by Australian researchers/companies (see Figure 11g) and, according to the FAO [109], considering the years 1997, 2007 and 2017, although Australia does not yet dominate the world market for agricultural production, its production has been growing every 10 years.
Another interesting fact concerns the amount of work developed by researchers or companies by continent, depicted in Figure 11h. It is significant to note that no work was carried out by researchers/companies from the African continent, a continent characterized by having the highest rates of poverty, hunger and lack of qualified labor in the world.

4.2. Unsolved Issues

To improve current agricultural robotic systems, several proposals are presented to assist future researchers. According to Figure 11a, most agricultural robots are 4WD; however, the agricultural environment is classified as semi-structured and, in this case, 4WD robots are strongly affected by soil characteristics, as mentioned earlier in [97]. As reported in [23], there is still a very large trade-off between the quality and cost of cameras used in agriculture. Unlike the works carried out in the last century, there are currently several types of computer vision algorithms that can be implemented in high-performance embedded systems, but the choice of the most suitable algorithm for each type of situation still needs to be improved. Since IoT devices are electronic systems that work in the same environment in which the robots are inserted, they should be used in conjunction with the robots. Therefore, in general, it is possible to group the proposals into four areas: locomotion systems, sensors, computer vision algorithms and communication technologies.

4.2.1. Locomotion Systems

As noted in Figure 11, most locomotion systems for agricultural robots are 4WD. However, wheeled systems are strongly affected by the characteristics of the local terrain, such as rocks and branches. Besides, the constant locomotion of these robots throughout the farm results in a high rate of soil compaction. Concerning UAVs, improvements in flight time can increase their use in agricultural environments [57].
Another alternative for locomotion in unstructured environments is the use of legged robots, as discussed in [15,18]. As advantages, these robots do not need constant contact with the ground to move around and can adjust their posture according to the slope of the terrain, allowing them to navigate rugged and difficult-to-access environments [110,111]. Figure 12 shows some commercially available off-the-shelf legged robots.
All robots shown in Figure 12 are quadrupeds and are already commercially available. However, each one has its own solutions for trajectory planning, locomotion patterns, navigation systems and computer vision algorithms [112,113,114]. The common characteristics of these robots are being relatively light, small and autonomous and having locomotion patterns that adapt to the environment. Despite reducing damage to the ground, the feet of the robots in Figure 12 have a small contact area, creating great pressure on the foot placement region, as the worked example below illustrates. In this sense, to prevent robots’ feet from penetrating soft soils and becoming trapped, legged robots need specifically designed foot-ground contact areas (based on the concepts of soil mechanics) that reduce the pressure applied to the soil during locomotion by increasing the foot-ground contact area. Thus, as they are widespread robotic platforms, their risk of rejection by the agricultural market is reduced.
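For illustration only (the mass and foot area are hypothetical values, not specifications of the robots in Figure 12), the pressure under each foot follows from p = F/A: a 30 kg quadruped momentarily supported on two feet of 25 cm² each exerts

```latex
p = \frac{mg}{nA} = \frac{30 \times 9.81}{2 \times 25 \times 10^{-4}} \approx 5.9 \times 10^{4}\,\mathrm{Pa},
```

so doubling the foot-ground contact area would roughly halve the pressure applied to the soil.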

4.2.2. Sensors

According to Figure 11d, the most used sensor in the reviewed works was the RGB camera. RGB-D, thermal, hyperspectral and multispectral cameras, despite providing more information (depth, temperature and more spectral data), have a higher financial cost, hindering their practical implementation. The trade-off between financial cost and quality must be chosen according to the minimum requirements of the system to be developed, because in an agricultural environment the temperature, humidity and incidence of dust can directly interfere with the proper functioning of the sensors. In this sense, the development of sensors with high Ingress Protection (IP) ratings (IP65, IP66 or IP67), which operate over wide temperature and humidity ranges and, mainly, with a low financial cost, may contribute to the construction of agricultural robotic systems that are more robust to variations in climatic conditions (sun and rain), extending their useful life.

4.2.3. Computer Vision Algorithms

Through the extraction of crop characteristics (such as ExG-ExR, NDVI, chlorophyll-based fluorescence and RGB/thermal/hyperspectral/multispectral images), the use of artificial intelligence algorithms (such as MLP, CNN, R-CNN, R-YOLO and SVM) allows the identification of diseases, detection of weeds, selective application of herbicide/pesticide, location of fruits/vegetables, classification of ripeness (ripe/unripe) and yield estimation. Again, the short-term (ambient lighting) and long-term (seasons) variations of the crop may interfere with the efficiency of the computer vision algorithm. Thus, it is proposed to improve and/or create new computer vision algorithms that adapt to the short- and long-term changes of the crops and that are optimized to operate on devices with low processing power and cost; a small sketch of one such crop-characteristic extraction is given below.
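The following sketch computes the ExG-ExR vegetation index on a standard RGB image (the 2g − r − b and 1.4r − g definitions are the commonly used ones; the zero threshold is illustrative and is not tied to any specific robot reviewed here):

```python
# ExG-ExR vegetation index on an RGB image (channel order assumed R, G, B).
import numpy as np

def exg_exr_mask(rgb):
    """rgb: HxWx3 uint8 image. Returns a boolean vegetation mask (ExG-ExR > 0)."""
    img = rgb.astype(np.float32)
    s = img.sum(axis=2) + 1e-6                  # avoid division by zero
    r, g, b = img[..., 0] / s, img[..., 1] / s, img[..., 2] / s
    exg = 2.0 * g - r - b                       # excess green
    exr = 1.4 * r - g                           # excess red
    return (exg - exr) > 0.0
```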

4.2.4. IoT-Based Smart Agriculture

Just as the concept of smart cities is strongly linked to the use of IoT technologies [115,116], so must the concept of smart agriculture be. The use of various artificial intelligence techniques, such as CNN, demonstrated a high degree of adaptation to rapid variations in natural lighting, changing seasons and crop growth. Thus, integrating the IoT devices used in agricultural activities [117,118] with the various types of artificial intelligence algorithms and with the numerous robotic systems reviewed and described in this article may contribute to better control, monitoring, preservation and standardization of processes, developing precise multi-purpose systems to solve short-term (harvest monitoring) and long-term (yield estimation) problems. In addition, the integration of IoT sensors with mobile robots has great potential for improving the concepts of parallelism and robot swarms, due to the possibility of exchanging Machine-to-Machine (M2M) information, as sketched below.
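A deliberately minimal sketch of such an M2M exchange is given below (the payload fields, port and broadcast address are hypothetical; real deployments would typically use an IoT messaging protocol with authentication):

```python
# Minimal M2M illustration: a field robot broadcasts a yield-estimate reading
# over UDP so that nearby IoT nodes or other robots can consume it.
import json
import socket

reading = {"robot_id": "harvest-robot-01", "row": 12, "fruit_count": 248}
payload = json.dumps(reading).encode("utf-8")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(payload, ("255.255.255.255", 5005))  # hypothetical farm-network port
sock.close()
```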

5. Conclusions

To propose new technical and scientific advances in the field of smart agriculture, it is first necessary to know the main existing works, exposing their advantages, limitations and common pitfalls in order to identify the real needs for improvement. After a systematic review of agricultural robotic systems applied to land preparation before planting, sowing, planting, plant treatment, harvesting, yield estimation and phenotyping, it was observed that 37% of the robotic systems are 4WD, 64.52% do not have a robotic arm, 22.06% are used in weeding tasks, 32.23% use RGB cameras, 35.48% did not include/report computer vision algorithms, 80.65% are in the research stage, 16.67% are designed by Australian companies/researchers and 41.94% are developed by countries on the European continent. The main characteristics observed were the limited use of off-the-shelf components, parallelism, robot swarms and simple and efficient computer vision algorithms, in addition to multi-purpose platforms that adapt appropriately to the type of crop studied. To improve current agricultural robotic systems, four main areas have been proposed for future research work: locomotion systems, sensors, computer vision algorithms and IoT-based smart agriculture. This paper covered about 62 agricultural robotic systems and, just as a 22.98% increase in the average harvest success rate and a 42.78% reduction in the average cycle time of harvesting robots were observed between 2014–2021 (described in Section 3.4), it is expected that, with the improvement of the previously mentioned areas, the development of agricultural robotic systems will continue to increase their efficiency and robustness. Therefore, it is believed that this work was able to show not only the notable advances in the field of mobile robotics but also to correlate the advantages of investing in technologies that act as tools for transforming nature.

Author Contributions

The contributions of the authors of this work are as follows: conceptualization, M.F.S. and A.P.M.; methodology, L.F.P.O. and M.F.S.; investigation, L.F.P.O., M.F.S. and A.P.M.; writing—original draft preparation, L.F.P.O., M.F.S. and A.P.M.; writing—review and editing, L.F.P.O., M.F.S. and A.P.M.; visualization, L.F.P.O. and M.F.S.; supervision, M.F.S. and A.P.M.; project administration, M.F.S. and A.P.M.. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed by National Funds through the Portuguese funding agency, FCT—Fundação para a Ciência e a Tecnologia, within project UIDB/50014/2020.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the following individuals and institutions for giving permission to reprint the figures and photographs presented in this paper: Figure 2a: Sebastian Golbs (Raussendorf Maschinen and Gerätebau GmbH); Figure 2b: Angelique Houmes (Precision Makers); Figure 2c: Sohel Anwar (Purdue School of Engineering and Technology, IUPUI); Figure 4a: Lin Haibo (permission granted under Creative Commons CC BY 3.0 licence terms); Figure 4b,c, Figure 5 and Figure 6p,q,s: Khalid Rafique (The Australian Centre for Field Robotics at The University of Sydney); Figure 4c: Masood Ul Hassan (Deakin University); Figure 6a: Sigal Berman (Ben-Gurion University of the Negev); Figure 6b: Vivek Diwanji; Figure 6c: Lie Tang (Iowa State University); Figure 6d–f: Anouck Lefebvre (Naïo Technologies); Figure 6g: Arnaud de la Fouchardière (Vitirover Solutions); Figure 6h: Joe Jones (Franklin Robotics); Figure 6i: Keun Ha Choi (Korea Advanced Institute Science and Technology); Figure 6j: Takahiro Kobayashi (Institute of Advanced Media Arts and Sciences—IAMAS); Figure 6k: Hitoshi Sori (Tsuyama National College of Technology); Figure 6l: Osman Tokhi (permission granted under CLAWAR Association Ltd licence terms); Figure 6m,n: George Adamides (Agricultural Research Institute, Cyprus); Figure 6o: Ron berenstein (Agriculture Research Organization, Volcani Center); Figure 6r: Cyrill Stachniss (University of Bonn); Figure 6t: Daniele Sarri (permission granted under Creative Commons CC BY-NC 4.0 licence terms); Figure 6u: Henry Williams (The University of Auckland); Figure 6v: Tom Botterill (University of Canterbury); Figure 6w: Lars Grimstad (permission granted under Creative Commons CC BY-NC-ND 4.0 licence terms); Figure 6x: Christopher Bogdon (Clearpath Robotics); Figure 7: Hitoshi Sori (Tsuyama National College of Technology); Figure 8a: Juan Bravo (Agricultural robots—Agrobot); Figure 8b: Josie Hughes (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 8c: Yuanyue Ge (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 8d: Delia SepúLveda (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 8e: Ya Xiong (permission granted under Creative Commons CC BY-NC-ND 4.0 licence terms); Figure 8f: Hanwen Kang (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 8g: Kailiang Zhang (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 8h: Chris Lehnert (School of Electrical Engineering and Robotics, Queensland University of Technology); Figure 8i: Ola Ringdah (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 8j: Rajesh Kannan Megalingam (Amrita Vishwa Vidyapeetham University); Figure 9a: James Underwood (The Australian Centre for Field Robotics at The University of Sydney); Figure 9b: Carlos Lopes (University of Lisbon); Figure 9c: Manuel Javier Tardáguila Laso (University of La Rioja); Figure 9d: Marcelo Becker (University of São Paulo); Figure 9e,f: Filipe Neves Dos Santos (Centre for Robotics in Industry and Intelligent Systems—CRIIS INESC TEC); Figure 9g: Michihisa Iida (Kyoto University); Figure 9h: Chinmay Soman (EarthSense); Figure 9i: Guilherme de Souza (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 9j: Guilherme de Souza (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 9k: Scott Chapman (permission granted under Creative Commons CC BY-NC-SA 3.0 licence terms); Figure 9l: Team Vente Yverdon (ecoRobotix SA); 
Figure 10: Guilherme de Souza (permission granted under Creative Commons CC BY 4.0 licence terms); Figure 12a: Olivier Reinl (ANYbotics); Figure 12b: Irving Chen (Unitree Robotics); Figure 12c: WEILAN Team (WEILAN).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. United Nations. World Population Projected to Reach 9.8 Billion in 2050. Available online: https://www.un.org/development/desa/en/news/population/world-population-prospects-2017.html (accessed on 8 March 2021).
  2. Zhang, X.; Davidson, E.A. Improving Nitrogen and Water Management in Crop Production on a National Scale. AGU Fall Meeting Abstracts. 2018, Volume 1, pp. 1–2. Available online: https://ui.adsabs.harvard.edu/abs/2018AGUFM.B22B..01Z/abstract (accessed on 1 March 2021).
  3. Ayaz, M.; Ammad-Uddin, M.; Sharif, Z.; Mansour, A.; Aggoune, E.M. Internet of Things (IoT) Based Smart Agriculture: Toward Making the Fields Talk. IEEE Access 2019, 1. [Google Scholar] [CrossRef]
  4. United Nations. World Urbanization Prospects: The 2018 Revision. Econ. Soc. Aff. 2018, 1, 1–2. [Google Scholar]
  5. Zhang, L.; Dabipi, I.K.; Brown, W.L., Jr. Internet of Things Applications for Agriculture; John Wiley & Sons, Ltd: Hoboken, NJ, USA, 2018; Chapter 18; pp. 507–528. [Google Scholar]
  6. World Health Organization. WHO Coronavirus Disease (COVID-19) Dashboard. 2020. Available online: https://covid19.who.int/ (accessed on 1 March 2021).
  7. FAO. Keeping food and agricultural systems alive: Analyses and solutions in response to COVID-19. FAO 2020, 64. [Google Scholar] [CrossRef]
  8. CFBF. Still Searching for Solutions: Adapting to Farm Worker Scarcity Survey 2019. Available online: https://www.cfbf.com/wp-content/uploads/2019/06/LaborScarcity.pdf (accessed on 1 March 2021).
  9. McBratney, A.; Whelan, B.; Ancev, T.; Bouma, J. Future Directions of Precision Agriculture. Precis. Agric. 2005, 6, 7–23. [Google Scholar] [CrossRef]
  10. Lowenberg-DeBoer, J.; Erickson, B. Setting the Record Straight on Precision Agriculture Adoption. Agron. J. 2019, 111, 1552–1569. [Google Scholar] [CrossRef] [Green Version]
  11. Tarannum, N.; Rhaman, M.K.; Khan, S.A.; Shakil, S.R. A Brief Overview and Systematic Approch for Using Agricultural Robot in Developing Countries. J. Mod. Sci. Technol. 2015, 3, 88–101. [Google Scholar]
  12. Santesteban, L.G. Precision viticulture and advanced analytics. A short review. Food Chem. 2019, 279, 58–62. [Google Scholar] [CrossRef]
  13. Zha, J. Artificial Intelligence in Agriculture. J. Phys. Conf. Ser. 2020, 1693, 012058. [Google Scholar] [CrossRef]
  14. Elijah, O.; Rahman, T.A.; Orikumhi, I.; Leow, C.Y.; Hindia, M.N. An Overview of Internet of Things (IoT) and Data Analytics in Agriculture: Benefits and Challenges. IEEE Int. Things J. 2018, 5, 3758–3773. [Google Scholar] [CrossRef]
  15. Oliveira, L.F.P.; Silva, M.F.; Moreira, A.P. Agricultural Robotics: A State of the Art Survey. In Proceedings of the 23rd International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (CLAWAR 2020), Moscow, Russian, 24–26 August 2020; pp. 279–286. [Google Scholar] [CrossRef]
  16. Fountas, S.; Mylonas, N.; Malounas, I.; Rodias, E.; Hellmann Santos, C.; Pekkeriet, E. Agricultural Robotics for Field Operations. Sensors 2020, 20, 2672. [Google Scholar] [CrossRef]
  17. Shamshiri, R.R.; Weltzien, C.; Hameed, I.A.; Yule, I.J.; Grift, T.E.; Balasundram, S.K.; Pitonakova, L.; Ahmad, D.; Chowdhary, G. Research and development in agricultural robotics: A perspective of digital farming. Int. J. Agric. Biol. Eng. 2018, 11, 1–14. [Google Scholar] [CrossRef]
  18. Bac, C.W.; Henten, E.J.v.; Hemming, J.; Edan, Y. Harvesting Robots for High-value Crops: State-of-the-art Review and Challenges Ahead. J. Field Robot. 2014, 31. [Google Scholar] [CrossRef]
  19. Gao, X.; Li, J.; Fan, L.; Zhou, Q.; Yin, K.; Wang, J.; Song, C.; Huang, L.; Wang, Z. Review of Wheeled Mobile Robots’ Navigation Problems and Application Prospects in Agriculture. IEEE Access 2018, 6, 49248–49268. [Google Scholar] [CrossRef]
  20. Sistler, F. Robotics and intelligent machines in agriculture. IEEE J. Robot. Autom. 1987, 3, 3–6. [Google Scholar] [CrossRef]
  21. Raussendorf. Fruit Robot. Available online: https://www.raussendorf.de/en/fruit-robot.html (accessed on 1 March 2021).
  22. Siciliano, B.; Khatib, O. Springer Handbook of Robotics, 2nd ed.; Springer Publishing Company: Cham, Switzerland, 2016. [Google Scholar]
  23. Khan, N.; Medlock, G.; Graves, S.; Anwar, S. GPS Guided Autonomous Navigation of a Small Agricultural Robot with Automated Fertilizing System; SAE Technical Paper; SAE International: Warrendale PA, USA, 2018; Volume 1, p. 1. [Google Scholar] [CrossRef]
  24. Precision Makers. GREENBOT. Available online: https://www.precisionmakers.com/en/greenbot-2/ (accessed on 1 March 2021).
  25. DJI. AGRAS MG-1P SERIES: Innovative Insights. Increased Efficiency. Available online: https://www.dji.com/br/mg-1p (accessed on 8 March 2021).
  26. Nawaz, M.; Bourrié, G.; Trolard, F. Soil compaction impact and modelling: A review. Agron. Sustain. Dev. 2012, 33. [Google Scholar] [CrossRef] [Green Version]
  27. Sakaue, O. Development of seeding production robot and automated transplanter system. Jpn. Agric. Res. Q. 1996, 30, 221–226. [Google Scholar]
  28. Haibo, L.; Dong, S.; Zunmin, L.; Chuijie, Y. Study and Experiment on a Wheat Precision Seeding Robot. J. Robot. 2015, 1, 1–9. [Google Scholar] [CrossRef]
  29. Sukkarieh, S. Mobile on-farm digital technology for smallholder farmers. In Proceedings of the 2017 Crawford Fund Annual Conference on Transforming Lives and Livelihoods: The Digital Revolution in Agriculture, Canberra, Australia, 7–8 August 2017; p. 9. [Google Scholar]
  30. Hassan, M.U.; Ullah, M.; Iqbal, J. Towards autonomy in agriculture: Design and prototyping of a robotic vehicle with seed selector. In Proceedings of the 2016 2nd International Conference on Robotics and Artificial Intelligence (ICRAI), Los Angeles, CA, USA, 20–22 April 2016; pp. 37–44. [Google Scholar]
  31. Srinivasan, N.; Prabhu, P.; Smruthi, S.S.; Sivaraman, N.V.; Gladwin, S.J.; Rajavel, R.; Natarajan, A.R. Design of an autonomous seed planting robot. In Proceedings of the 2016 IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Agra, India, 21–23 December 2016; pp. 1–4. [Google Scholar]
  32. FAO. Keeping Plant Pests and Diseases at Bay: Experts Focus on Global Measures. Available online: http://www.fao.org/news/story/en/item/280489/icode/ (accessed on 15 February 2021).
  33. Sinden, J.A.; for Australian Weed Management (Australia), C.R.C. The Economic Impact of Weeds in Australia: Report to the CRC for Australian Weed Management; CRC Weed Management: Adelaide, Australia, 2004; p. 55. [Google Scholar]
  34. Lee, W.S.; Slaughter, D.C.; Giles, D.K. Robotic Weed Control System for Tomatoes. Precis. Agric. 1999, 1, 95–113. [Google Scholar] [CrossRef]
  35. Lee, W.S.; Slaughter, D.C. Plant recognition using hardware-based neural network. In Proceedings of the 1998 ASAE Annual International Meeting, Orlando, FL, USA, 12–16 July 1998; pp. 1–14. [Google Scholar]
  36. Schor, N.; Bechar, A.; Ignat, T.; Dombrovsky, A.; Elad, Y.; Berman, S. Robotic Disease Detection in Greenhouses: Combined Detection of Powdery Mildew and Tomato Spotted Wilt Virus. IEEE Robot. Autom. Lett. 2016, 1, 354–360. [Google Scholar] [CrossRef]
  37. Pilli, S.K.; Nallathambi, B.; George, S.J.; Diwanji, V. eAGROBOT—A robot for early crop disease detection using image processing. In Proceedings of the 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India, 26–27 February 2015; pp. 1684–1689. [Google Scholar]
  38. Gai, J.; Tang, L.; Steward, B.L. Automated crop plant detection based on the fusion of color and depth images for robotic weed control. J. Field Robot. 2020, 37, 35–52. [Google Scholar] [CrossRef]
  39. Jorgensen, R.; Sorensen, C.; Maagaard, J.; Havn, I.; Jensen, K.; Sogaard, H.; Sorensen, L. HortiBot: A System Design of a Robotic Tool Carrier for High-tech Plant Nursing. CIGR J. Sci. Res. Dev. 2006, IX, 1–13. [Google Scholar]
  40. McCool, C.; Beattie, J.; Firn, J.; Lehnert, C.; Kulk, J.; Bawden, O.; Russell, R.; Perez, T. Efficacy of Mechanical Weeding Tools: A Study Into Alternative Weed Management Strategies Enabled by Robotics. IEEE Robot. Autom. Lett. 2018, 3, 1184–1190. [Google Scholar] [CrossRef]
  41. Naio Tecnologies. OZ-Weeding, Transportation and Harvest Assistance Robot. Available online: https://www.naio-technologies.com/wp-content/uploads/2019/04/brochure-OZ-ENGLISH-HD.pdf (accessed on 20 February 2021).
  42. Naio Tecnologies. Dino-Autonomous Mechanical Weeding Robot. 2020. Available online: https://www.naio-technologies.com/wp-content/uploads/2019/04/brochure-DINO-ENGLISH-HD.pdf (accessed on 20 February 2021).
  43. Naio Tecnologies. Ted—Multifunctional Straddling Vineyard Robot. 2020. Available online: https://www.naio-technologies.com/wp-content/uploads/2019/04/brochure-TED-ENGLISH-3.pdf (accessed on 20 February 2021).
  44. VITIROVER Solutions. VITIROVER—A Revolution in Soil Grassing Management. 2020. Available online: https://www.vitirover.fr/en-home (accessed on 23 February 2021).
  45. Franklin Robotics. Meet Tertill—A Better Way to Weed. 2020. Available online: https://tertill.com/ (accessed on 23 February 2021).
  46. Choi, K.H.; Han, S.K.; Han, S.H.; Park, K.H.; Kim, K.S.; Kim, S. Morphology-based guidance line extraction for an autonomous weeding robot in paddy fields. Comput. Electron. Agric. 2015, 113, 266–274. [Google Scholar] [CrossRef]
  47. Mitsui, T.; Kobayashi, T.; Kagiya, T.; Inaba, A.; Ooba, S. Verification of a Weeding Robot “AIGAMO-ROBOT” for Paddy Fields. J. Robot. Mechatron. 2008, 20, 228–233. [Google Scholar] [CrossRef]
  48. Sori, H.; Inoue, H.; Hatta, H.; Ando, Y. Effect for a Paddy Weeding Robot in Wet Rice Culture. J. Robot. Mechatron. 2018, 30, 198–205. [Google Scholar] [CrossRef]
  49. Uchida, T.F.; Yamano, T. Development of a remoto control type weeding machine with stirring chains for a paddy field. In Proceedings of the 22nd International Conference on Climbing and Walking Robots and Support Technologies for Mobile Machines (CLAWAR 2019), Kuala Lumpur, Malaysia, 26–28 August 2019; pp. 61–68. [Google Scholar] [CrossRef]
  50. Adamides, G.; Katsanos, C.; Constantinou, I.; Christou, G.; Xenos, M.; Hadzilacos, T.; Edan, Y. Design and development of a semi-autonomous agricultural vineyard sprayer: Human–robot interaction aspects. J. Field Robot. 2017, 34, 1407–1426. [Google Scholar] [CrossRef]
  51. Berenstein, R.; Edan, Y. Automatic Adjustable Spraying Device for Site-Specific Agricultural Application. IEEE Trans. Autom. Sci. Eng. 2018, 15, 641–650. [Google Scholar] [CrossRef]
  52. Bogue, R. Robots poised to revolutionise agriculture. Ind. Robot Int. J. 2016, 43, 450–456. [Google Scholar] [CrossRef]
  53. Underwood, J.P.; Calleija, M.; Taylor, Z.; Hung, C.; Nieto, J.M.G.; Fitch, R.; Sukkarieh, S. Real-time target detection and steerable spray for vegetable crops. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
  54. Wu, X.; Aravecchia, S.; Lottes, P.; Stachniss, C.; Pradalier, C. Robotic weed control using automated weed and crop classification. J. Field Robot. 2020, 37, 322–340. [Google Scholar] [CrossRef] [Green Version]
  55. Wallace, N.D.; Kong, H.; Hill, A.J.; Sukkarieh, S. Energy Aware Mission Planning for WMRs on Uneven Terrains. IFAC-PapersOnLine 2019, 52, 149–154. [Google Scholar] [CrossRef]
  56. Turner, D.; Lucieer, A.; Watson, C. Development of an Unmanned Aerial Vehicle (UAV) for Hyper-Resolution Vineyard Mapping Based on Visible, Multispectral and Thermal Imagery. The GEOSS Era: Towards Operational Environmental Monitoring. 2011, Volume 1. Available online: https://www.isprs.org/proceedings/2011/isrse-34/211104015Final00547.pdf (accessed on 26 February 2021).
  57. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned Aerial Vehicles in Agriculture: A Review of Perspective of Platform, Control, and Applications. IEEE Access 2019, 7, 105100–105115. [Google Scholar] [CrossRef]
  58. Sánchez, J.T.; Peña, J.M.; Castro, A.I.; Granados, F.L. Multi-temporal mapping of the vegetation fraction in early-season wheat fields using images from UAV. Comput. Electron. Agric. 2014, 103, 104–113. [Google Scholar] [CrossRef]
  59. Xiang, H.; Tian, L. Development of a low-cost agricultural remote sensing system based on an autonomous unmanned aerial vehicle (UAV). Biosyst. Eng. 2011, 108, 174–190. [Google Scholar] [CrossRef]
  60. Sarri, D.; Martelloni, L.; Rimediotti, M.; Lisci, R.; Lombardo, S.; Vieri, M. Testing a multi-rotor unmanned aerial vehicle for spray application in high slope terraced vineyard. J. Agric. Eng. 2019, 50, 38–47. [Google Scholar] [CrossRef]
  61. Meivel, S.; Dinakaran, K.; Gandhiraj, N.; Srinivasan, M. Remote sensing for UREA Spraying Agricultural (UAV) system. In Proceedings of the 2016 3rd International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 22–23 January 2016; Volume 1, pp. 1–6. [Google Scholar] [CrossRef]
  62. Williams, H.; Nejati, M.; Hussein, S.; Penhall, N.; Lim, J.Y.; Jones, M.H.; Bell, J.; Ahn, H.S.; Bradley, S.; Schaare, P.; et al. Autonomous pollination of individual kiwifruit flowers: Toward a robotic kiwifruit pollinator. J. Field Robot. 2020, 37, 246–262. [Google Scholar] [CrossRef]
  63. Verbiest, R.; Ruysen, K.; Vanwalleghem, T.; Demeester, E.; Kellens, K. Automation and robotics in the cultivation of pome fruit: Where do we stand today? J. Field Robot. 2020. [Google Scholar] [CrossRef]
  64. Karkee, M.; Adhikari, B.; Amatya, S.; Zhang, Q. Identification of pruning branches in tall spindle apple trees for automated pruning. Comput. Electron. Agric. 2014, 103, 127–135. [Google Scholar] [CrossRef]
  65. Botterill, T.; Paulin, S.; Green, R.; Williams, S.; Lin, J.; Saxton, V.; Mills, S.; Chen, X.; Corbett-Davies, S. A Robot System for Pruning Grape Vines. J. Field Robot. 2017, 34, 1100–1122. [Google Scholar] [CrossRef]
  66. Majeed, Y.; Karkee, M.; Zhang, Q.; Fu, L.; Whiting, M.D. Development and performance evaluation of a machine vision system and an integrated prototype for automated green shoot thinning in vineyards. J. Field Robot. 2021. [Google Scholar] [CrossRef]
  67. Grimstad, L.; From, P.J. Thorvald II—A Modular and Re-configurable Agricultural Robot. IFAC-PapersOnLine 2017, 50, 4588–4593. [Google Scholar] [CrossRef]
  68. Clearpath Robotics. Boldy Go Where No Robot Has Gone before. 2020. Available online: https://clearpathrobotics.com/ (accessed on 25 February 2021).
  69. Avrora Robotics. Agrobot Project—Automation of Agriculture. 2020. Available online: https://avrora-robotics.com/en/projects/agrobot/ (accessed on 2 March 2021).
  70. Hayashi, S.; Yamamoto, S.; Saito, S.; Ochiai, Y.; Kamata, J.; Kurita, M.; Yamamoto, K. Field Operation of a Movable Strawberry-harvesting Robot using a Travel Platform. Jpn. Agric. Res. Q. 2014, 48, 307–316. [Google Scholar] [CrossRef] [Green Version]
  71. ABARES. Australian Vegetable Growing Farms: An Economic Survey, 2012-13 and 2013-14. 2014. Available online: https://data.gov.au/dataset/ds-dga-a00deb73-3fd1-4ae7-bc01-be5f37cffeee/details (accessed on 2 March 2021).
  72. Jie, L.; Jiao, S.; Wang, X.; Wang, H. A new type of facility strawberry stereoscopic cultivation mode. J. China Agric. Univ. 2019, 24, 61–68. [Google Scholar]
  73. Ceres, R.; Pons, J.; Jiménez, A.; Martín, J.; Calderón, L. Design and implementation of an aided fruit-harvesting robot (Agribot). Ind. Robot Int. J. 1998, 25, 337–346. [Google Scholar] [CrossRef]
  74. Mandow, A.; Gomez-de-Gabriel, J.M.; Martinez, J.L.; Munoz, V.F.; Ollero, A.; Garcia-Cerezo, A. The autonomous mobile robot AURORA for greenhouse operation. IEEE Robot. Autom. Mag. 1996, 3, 18–28. [Google Scholar] [CrossRef] [Green Version]
  75. Agrobot. The First Pre-Commercial Robotic Harvesters for Gently Harvest Strawberries. 2020. Available online: https://www.agrobot.com/e-series (accessed on 2 March 2021).
  76. Robert, B. Fruit picking robots: Has their time come? Ind. Robot. Int. J. Robot. Res. Appl. 2020, 47, 141–145. [Google Scholar] [CrossRef]
  77. Leu, A.; Razavi, M.; Langstädtler, L.; Ristić-Durrant, D.; Raffel, H.; Schenck, C.; Gräser, A.; Kuhfuss, B. Robotic Green Asparagus Selective Harvesting. IEEE/ASME Trans. Mechatron. 2017, 22, 2401–2410. [Google Scholar] [CrossRef]
  78. Birrell, S.; Hughes, J.; Cai, J.Y.; Iida, F. A field-tested robotic harvesting system for iceberg lettuce. J. Field Robot. 2020, 37, 225–245. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  79. Ge, Y.; Xiong, Y.; Tenorio, G.L.; From, P.J. Fruit Localization and Environment Perception for Strawberry Harvesting Robots. IEEE Access 2019, 7, 147642–147652. [Google Scholar] [CrossRef]
  80. Sepúlveda, D.; Fernández, R.; Navas, E.; Armada, M.; González-De-Santos, P. Robotic Aubergine Harvesting Using Dual-Arm Manipulation. IEEE Access 2020, 8, 121889–121904. [Google Scholar] [CrossRef]
  81. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P.J. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. J. Field Robot. 2020, 37, 202–224. [Google Scholar] [CrossRef] [Green Version]
  82. Kang, H.; Zhou, H.; Chen, C. Visual Perception and Modeling for Autonomous Apple Harvesting. IEEE Access 2020, 8, 62151–62163. [Google Scholar] [CrossRef]
  83. Yu, Y.; Zhang, K.; Liu, H.; Yang, L.; Zhang, D. Real-Time Visual Localization of the Picking Points for a Ridge-Planting Strawberry Harvesting Robot. IEEE Access 2020, 8, 116556–116568. [Google Scholar] [CrossRef]
  84. Lehnert, C.; McCool, C.; Sa, I.; Perez, T. Performance improvements of a sweet pepper harvesting robot in protected cropping environments. J. Field Robot. 2020, 37, 1197–1223. [Google Scholar] [CrossRef]
  85. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  86. Lehnert, C.; English, A.; McCool, C.; Tow, A.W.; Perez, T. Autonomous Sweet Pepper Harvesting for Protected Cropping Systems. IEEE Robot. Autom. Lett. 2017, 2, 872–879. [Google Scholar] [CrossRef] [Green Version]
  87. Megalingam, R.K.; Kuttankulangara Manoharan, S.; Mohan, S.M.; Vadivel, S.R.R.; Gangireddy, R.; Ghanta, S.; Kotte, S.; Perugupally, S.T.; Sivanantham, V. Amaran: An Unmanned Robotic Coconut Tree Climber and Harvester. IEEE/ASME Trans. Mechatron. 2020, 26, 288–299. [Google Scholar] [CrossRef]
  88. Noguchi, N.; Reid, J.; Benson, E.; Stombaugh, T. Vision Intelligence for an Agricultural Mobile Robot Using a Neural Network. IFAC Proc. Vol. 1998, 31, 139–144. [Google Scholar] [CrossRef]
  89. Noguchi, N.; Reid, J.F.; Ishii, K.; Terao, H. Multi-Spectrum Image Sensor for Detecting Crop Status by Robot Tractor. IFAC Proc. Vol. 2001, 34, 111–115. [Google Scholar] [CrossRef]
  90. Bargoti, S.; Underwood, J.P. Image Segmentation for Fruit Detection and Yield Estimation in Apple Orchards. J. Field Robot. 2017, 34, 1039–1060. [Google Scholar] [CrossRef] [Green Version]
  91. Lopes, C.; Graça, J.; Sastre, J.; Reyes, M.; Guzman, R.; Braga, R.; Monteiro, A.; Pinto, P. Vineyard Yield Estimation by Vinbot Robot—Preliminary Results with the White Variety Viosinho. In Proceedings of the 11th International Terroir Congress, McMinnville, OR, USA, 10–14 July 2016. [Google Scholar] [CrossRef]
  92. VineRobot. Available online: http://www.vinerobot.eu/ (accessed on 3 March 2021).
  93. Abrahão, G.Q.S.; Megda, P.T.; Guerrero, H.B.; Becker, M. AgriBOT project: Comparison between the D* and focussed D* navigation algorithms. In Proceedings of the International Congress of Mechanical Engineering—COBEM, Natal, Brazil, 24–28 October 2011. [Google Scholar]
  94. Lulio, L.C. Fusão Sensorial por Classificação Cognitiva Ponderada no Mapeamento de Cenas Naturais Agrícolas para Análise Quali-Quantitativa em Citricultura. Ph.D. Thesis, Escola de Engenharia de São Carlos, São Paulo, Brazil, 2016. [Google Scholar]
  95. Lugli, L.; Tronco, M.; Porto, V. JSEG Algorithm and Statistical ANN Image Segmentation Techniques for Natural Scenes. In Image Segmentation; IntechOpen: Rijeka, Croatia, 2011; Chapter 18. [Google Scholar] [CrossRef] [Green Version]
  96. Santos, F.N.; Sobreira, H.; Campos, D.; Morais, R.; Moreira, A.P.; Contente, O. Towards a Reliable Robot for Steep Slope Vineyards Monitoring. J. Intell. Robot. Syst. 2016, 83, 429–444. [Google Scholar] [CrossRef]
  97. Santos, F.B.N.; Sobreira, H.M.P.; Campos, D.F.B.; Santos, R.M.P.M.; Moreira, A.P.G.M.; Contente, O.M.S. Towards a Reliable Monitoring Robot for Mountain Vineyards. In Proceedings of the 2015 IEEE International Conference on Autonomous Robot Systems and Competitions, Vila Real, Portugal, 8–10 April 2015; pp. 37–43. [Google Scholar]
  98. Reis, R.; Mendes, J.; Santos, F.N.; Morais, R.; Ferraz, N.; Santos, L.; Sousa, A. Redundant robot localization system based in wireless sensor network. In Proceedings of the 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Torres Vedras, Portugal, 25–27 April 2018; pp. 154–159. [Google Scholar]
  99. Iida, M.; Kang, D.; Taniwaki, M.; Tanaka, M.; Umeda, M. Localization of CO2 source by a hexapod robot equipped with an anemoscope and a gas sensor. Comput. Electron. Agric. 2008, 63, 73–80. [Google Scholar] [CrossRef]
  100. Higuti, V.A.H.; Velasquez, A.E.B.; Magalhaes, D.V.; Becker, M.; Chowdhary, G. Under canopy light detection and ranging-based autonomous navigation. J. Field Robot. 2019, 36, 547–567. [Google Scholar] [CrossRef]
  101. Lowe, T.; Moghadam, P.; Edwards, E.; Williams, J. Canopy density estimation in perennial horticulture crops using 3D spinning lidar SLAM. J. Field Robot. 2021. [Google Scholar] [CrossRef]
  102. Shafiekhani, A.; Kadam, S.; Fritschi, F.B.; DeSouza, G.N. Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping. Sensors 2017, 17, 214. [Google Scholar] [CrossRef]
  103. Shafiekhani, A.; Fritschi, F.; Desouza, G. Vinobot and Vinoculer: From Real to Simulated Platforms. In Proceedings of the SPIE Commercial + Scientific Sensing and Imaging, Orlando, FL, USA, 15–19 April 2018. [Google Scholar]
  104. Chapman, S.C.; Merz, T.; Chan, A.; Jackway, P.; Hrabar, S.; Dreccer, M.F.; Holland, E.; Zheng, B.; Ling, T.J.; Jimenez-Berni, J. Pheno-Copter: A Low-Altitude, Autonomous Remote-Sensing Robotic Helicopter for High-Throughput Field-Based Phenotyping. Agronomy 2014, 4, 279–301. [Google Scholar] [CrossRef] [Green Version]
  105. EcoRobotix. ARA Swuitch to Smart Scouting. 2020. Available online: https://www.ecorobotix.com/wp-content/uploads/2019/09/ECOX_FlyerPres19-EN-3.pdf (accessed on 21 March 2021).
  106. Santos, L.C.; Aguiar, A.S.; Santos, F.N.; Valente, A.; Petry, M. Occupancy Grid and Topological Maps Extraction from Satellite Images for Path Planning in Agricultural Robots. Robotics 2020, 9, 77. [Google Scholar] [CrossRef]
  107. Aguiar, A.S.; dos Santos, F.N.; Cunha, J.B.; Sobreira, H.; Sousa, A.J. Localization and Mapping for Robots in Agriculture and Forestry: A Survey. Robotics 2020, 9, 97. [Google Scholar] [CrossRef]
  108. Iqbal, J.; Xu, R.; Sun, S.; Li, C. Simulation of an Autonomous Mobile Robot for LiDAR-Based In-Field Phenotyping and Navigation. Robotics 2020, 9, 46. [Google Scholar] [CrossRef]
  109. FAO. World Food and Agriculture—Statistical pocketbook 2019. FAO 2019, 1, 254. [Google Scholar]
  110. Oliveira, L.F.P.; Rossini, F.L. Modeling, Simulation and Analysis of Locomotion Patterns for Hexapod Robots. IEEE Latin Am. Trans. 2018, 16, 375–383. [Google Scholar] [CrossRef]
  111. Silva, M.F.; Machado, J.T. A literature review on the optimization of legged robots. J. Vib. Control 2012, 18, 1753–1767. [Google Scholar] [CrossRef]
  112. Fankhauser, P. ANYmal C. 2020. Available online: https://www.anybotics.com/anymal-legged-robot/ (accessed on 7 March 2021).
  113. Unitree Robotics. Available online: https://www.unitree.com/ (accessed on 7 March 2021).
  114. Weilan. AlphaDog. 2020. Available online: http://www.weilan.com/ (accessed on 7 March 2021).
  115. Oliveira, L.F.P.; Manera, L.T.; Luz, P.D.G. Development of a Smart Traffic Light Control System with Real-Time Monitoring. IEEE Int. Things J. 2020, 1. [Google Scholar] [CrossRef]
  116. Oliveira, L.F.P.; Manera, L.T.; Luz, P.D.G. Smart Traffic Light Controller System. In Proceedings of the Sixth International Conference on Internet of Things: Systems, Management and Security (IOTSMS), Granada, Spain, 22–25 October 2019; pp. 155–160. [Google Scholar]
  117. Neumann, G.B.; Almeida, V.P.; Endler, M. Smart Forests: Fire detection service. In Proceedings of the 2018 IEEE Symposium on Computers and Communications (ISCC), Natal, Brazil, 25–28 June 2018; pp. 01276–01279. [Google Scholar]
  118. Cui, F. Deployment and integration of smart sensors with IoT devices detecting fire disasters in huge forest environment. Comput. Commun. 2020, 150, 818–827. [Google Scholar] [CrossRef]
Figure 1. Technological areas associated with Precision Agriculture (PA).
Figure 2. Examples of robots used in agriculture for land preparation before planting.
Figure 3. Difference between obstacle detection systems.
Figure 4. Examples of robots used in agriculture for sowing and planting.
Figure 5. Di-Wheel’s seeding mechanism [29].
Figure 6. Examples of robots used in agriculture for plant treatment.
Figure 7. Crop yield effect of each area, described in detail in [48].
Figure 8. Examples of robots used in agriculture for harvesting.
Figure 9. Examples of robots used in agriculture for yield estimation and phenotyping.
Figure 10. Comparison between the 3D reconstruction of a corn plant by VisualSFM generated using different ways of collecting data, described in detail in [102].
Figure 11. Summary of analyzed agricultural robots.
Figure 12. Examples of commercially available off-the-shelf quadruped robots.
Table 1. Comparison between the revised robotic applications for land preparation.

| Robots | Locomotion System | Final Application | Navigation Sensors | Obstacle Detection Sensors | Development Stage | Year |
|---|---|---|---|---|---|---|
| Cäsar [21] | 4WD | Orchard or vineyard | RTK GNSS | Ultrasonic sensor | Commercial | 2014 |
| Greenbot [24] | 4WS | Horticulture, fruit and arable farming | RTK GPS | Bump sensor | Commercial | 2015 |
| AGRAS MG-1P [25] | UAV (Octocopter) | Rice, soy and corn | RTK GPS, RGB camera, gyroscope, accelerometer and compass | Omnidirectional radar | Commercial | 2016 |
| AgBot [23] | 2WD | Corn | RTK GPS, RGB camera, compass and accelerometer | — | Research | 2017 |
Table 2. Comparison between the revised robotic applications for sowing and planting.

| Robots | Locomotion System | Final Application | Guidance Sensors | Seeding Mechanism | Development Stage | Year |
|---|---|---|---|---|---|---|
| Lumai-5 [28] | 4WS | Wheat | Angle and speed | Seeding motor and vacuum fan | Research | 2010 |
| Di-Wheel [29] | 2WD | Horticulture in general | Smartphone embedded sensors | Roll type seeder | Research | 2015 |
| Sowing robot 1 [30] | 4WD | Corn | Ultrasonic | Linear actuator and vacuum motor | Research | 2016 |
| Sowing robot 2 [31] | Track | Seeds in general | Ultrasonic and magnetometer | Solenoid actuator | Research | 2016 |
Table 3. Comparison between the analyzed robotic applications for plant treatment.

| Task | Robots | Locomotion System | Final Application | Location Sensors | Sensors Used to Perform the Task | Computer Vision Algorithms |
|---|---|---|---|---|---|---|
| Disease identification | Disease robot [36] | Not included | Bell pepper | — | RGB camera and laser | PCA and CV |
| Disease identification | eAGROBOT [37] | 4WD | Cotton and groundnut | — | RGB camera | K-means and Neural Networks |
| Mechanical weeding | Weeding robot 1 [38] | 4WD | Broccoli and lettuce | — | RGB-D camera | RANSAC |
| Mechanical weeding | AgBot II [40] | 4WS | Cotton, sow thistle, feather top rhodes grass and wild oats | — | RGB camera | LBP |
| Mechanical weeding | Oz [41] | 4WD | Vegetables, nurseries, and horticulture | LiDAR | RGB camera | — |
| Mechanical weeding | Dino [42] | 4WS | Vegetables in row and on beds | RTK/GPS | RGB camera | — |
| Mechanical weeding | Ted [43] | 4WS | Grape | RTK/GPS | RGB camera | — |
| Mechanical weeding | VITIROVER [44] | 4WD | Soil grass | RTK/GNSS | — | — |
| Mechanical weeding | Tertill [45] | 4WD | Residential gardens | — | Capacitive sensors | — |
| Mechanical weeding | K-Weedbot [46] | 4WS | Paddy field | — | RGB camera | Hough transform |
| Mechanical weeding | AIGAMO-ROBOT [47] | Track | Paddy field | — | — | — |
| Mechanical weeding | Weeding robot 2 [48] | 4WD | Paddy field | Capacitive and azimuth sensors | — | — |
| Mechanical weeding | Weeding robot 3 [49] | Boat | Paddy field | GPS and IMU | — | — |
| Chemical weeding | AgriRobot [50] | 4WD | Grape | — | RGB camera and LiDAR | FDA and GDA |
| Chemical weeding | SAVSAR [50] | 4WD | Grape | — | RGB camera and LiDAR | FDA and GDA |
| Chemical weeding | Robotic sprayer [51] | 4WD | Grape | — | RGB camera and laser | FDA and GDA |
| Chemical weeding | RIPPA [52] | 4WS | Lettuce, cauliflower and broccoli | RTK/GPS/INS and LiDAR | Hyperspectral and thermal cameras | ExG-ExR |
| Chemical weeding | LadyBird [53] | 4WS | Lettuce, cauliflower and broccoli | RTK/GPS/INS and LiDAR | Hyperspectral and thermal cameras | ExG-ExR |
| Chemical weeding | BoniRob [54] | 4WS | Sugar beet | — | RGB, NIR cameras and ultrasonic sensor | CNN |
| Chemical weeding | Aerial robot [56] | UAV (Octocopter) | Grape | GPS and IMU | Multispectral camera | NDVI |
| Chemical weeding | Bly-c-agri [60] | UAV (Hexacopter) | Grape | GNSS | — | — |
| Pollination | Pollinator robot [62] | 4WD | Kiwi | Odometry | RGB camera | CNN |
| Pruning | Pruning robot 1 [65] | Mobile platform | Grape | — | RGB camera | SVM |
| Pruning | Pruning robot 2 [66] | Mobile platform | Grape | — | RGB-D camera | Faster R-CNN |
| General purpose | Swagbot [55] | 4WS | General farms | GPS and LiDAR | RGB-D, IR and hyperspectral cameras | NDVI |
| General purpose | Thorvald II [67] | Many forms | General farms | Depends on the application | Depends on the application | Depends on the application |
| General purpose | Clearpath robots [68] | Many forms | General farms | Depends on the application | Depends on the application | Depends on the application |
| General purpose | AgroBot [69] | 4WD | General farms | — | — | — |
Table 4. Comparison between the analyzed robotic applications for harvesting.

| Robot | Robotic Arm | Final Application | Location Sensors | Sensors Used to Perform the Task | Computer Vision Algorithm | Success Rate (Cycle Time) |
|---|---|---|---|---|---|---|
| Agrobot E-Series [75] | 24 Cartesian arms | Strawberry | LiDAR | RGB camera, ultrasonic and inductive sensors | — | — |
| Berry 5 [76] | Multiple robotic components | Strawberry | GPS and LiDAR | RGB camera | — | — |
| GARotics [77] | Pneumatic cylinder with two blades | Green asparagus | — | RGB-D camera | RANSAC and Euclidean clustering | 90% (2 s) |
| Vegebot [78] | 6-DoF and a custom end effector | Lettuce | — | RGB camera | R-CNN | 88.2% (31.7 s) |
| Noronn AS [79] | 5-DoF | Strawberry | — | RGB-D camera | R-CNN | 74.1% |
| Harvester robot 1 [80] | 6-DoF dual-arm | Aubergines | — | RGB-D and ToF cameras | SVM | 91.67% (26 s) |
| Harvester robot 2 [81] | 3-DoF cartesian dual-arm | Strawberry | LiDAR and encoder | RGB-D camera | HSV color-thresholding | 50–97.1% (4.6 s) |
| Harvester robot 3 [82] | 6-DoF soft-finger based gripper | Apple | — | RGB-D camera | Dasnet, 3D-SHT and Octree | F1: 0.81 (7 s) |
| Harvester robot 4 [83] | 6-DoF | Strawberry | — | RGB and laser sensors | R-YOLO | 84.35% |
| Harvey platform [84] | 6-DoF | Sweet pepper | — | RGB-D camera, pressure and separation sensors | DCNN | 76.5% (36.9 s) |
| SWEEPER [85] | 6-DoF with custom designed end effector | Sweet pepper | — | RGB-D camera | Deep learning, shape, color-based detection and HT | 61% (24 s) |
| Amaran [87] | 4-DoF | Coconut | — | RGB camera | — | 80–100% (21.9 min) |
Table 5. Comparison between the reviewed robotic applications for yield estimation and phenotyping.

| Task | Robot | Final Application | Location Sensors | Sensors Used to Perform the Task | Computer Vision Algorithm |
|---|---|---|---|---|---|
| Yield estimation | Shrimp [90] | Apple | — | RGB camera | MLP and CNN |
| Yield estimation | VINBOT [91] | Grape | RTK, DGPS and LiDAR | RGB and NIR cameras | NDVI |
| Yield estimation | VineRobot [92] | Grape | — | FA-Sense LEAF, FA-Sense ANTH, ultrasonic and RGB camera | Chlorophyll-based fluorescence and RGB machine vision |
| Yield estimation | AgriBOT [93] | Orange and sugar cane | GPS/INS and LiDAR | RGB camera | — |
| Yield estimation | Agrob V14 [96] | Grape | LiDAR | RGB camera | SVM |
| Yield estimation | Agrob V16 [98] | Grape | RTK/GPS/INS and LiDAR | Stereo, RGB-D and RGB cameras | hLBP and SVM |
| Yield estimation | Hexapod [99] | General farms | — | CO2 gas module, anemoscope and infrared distance sensor | — |
| Yield estimation | Kubota farm vehicle [101] | Grape | GPS and IMU | LiDAR | Continuous-Time SLAM |
| Phenotyping | TerraSentia [100] | Corn | RTK/GPS and LiDAR | RGB camera | LiDAR-based navigation |
| Phenotyping | Vinobot [102] | Corn | DGPS and LiDAR | Stereo camera and environmental sensors | VisualSFM |
| Phenotyping | Vinoculer [103] | Corn | — | Stereo RGB and IR cameras and air temperature sensors | VisualSFM |
| Phenotyping | Pheno-Copter [104] | Sorghum, sugarcane and wheat | — | RGB and thermal cameras and LiDAR | RANSAC and DEM |
| Phenotyping | Ara ecoRobotix [105] | General farms | RTK/GPS and compass | RGB camera | — |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
