Review

Development Challenges of Fruit-Harvesting Robotic Arms: A Critical Review

by
Abdul Kaleem
1,†,
Saddam Hussain
1,2,3,*,†,
Muhammad Aqib
1,4,*,
Muhammad Jehanzeb Masud Cheema
1,5,
Shoaib Rashid Saleem
6,7,8 and
Umar Farooq
6
1
National Center of Industrial Biotechnology, PMAS Arid Agriculture University Rawalpindi, Rawalpindi 46000, Pakistan
2
Department of Irrigation and Drainage, University of Agriculture Faisalabad, Faisalabad 38000, Pakistan
3
Department of Agricultural and Biological Engineering, Tropical Research and Educational Center, IFAS, University of Florida, Homestead, FL 33031, USA
4
University Institute of Information Technology, PMAS Arid Agriculture University Rawalpindi, Rawalpindi 46000, Pakistan
5
Faculty of Agricultural Engineering and Technology, PMAS Arid Agriculture University Rawalpindi, Rawalpindi 46000, Pakistan
6
Data-Driven Smart Decision Platform, PMAS Arid Agriculture University Rawalpindi, Rawalpindi 46000, Pakistan
7
Center for Precision Agriculture, PMAS Arid Agriculture University Rawalpindi, Rawalpindi 46000, Pakistan
8
Department of Farm Machinery and Precision Engineering, PMAS Arid Agriculture University Rawalpindi, Rawalpindi 46000, Pakistan
*
Authors to whom correspondence should be addressed.
† These authors contributed equally to this work and should be considered co-first authors of this paper.
AgriEngineering 2023, 5(4), 2216-2237; https://doi.org/10.3390/agriengineering5040136
Submission received: 24 July 2023 / Revised: 31 October 2023 / Accepted: 6 November 2023 / Published: 17 November 2023
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)

Abstract

To increase production in the current challenging environment, where the demand for manual farming is decreasing due to the unavailability of skilled labor, high costs, and labor shortages, research and development in advanced technology must be promoted in agriculture. In the last two decades, the demand for fruit-harvesting technologies, i.e., mechanized harvesting, manned and unmanned aerial systems, and robotics, has increased. Several industries are working toward low-cost, industrial-scale production of advanced harvesting technologies, but to date, no commercial robotic arm has been developed for the selective harvesting of valuable fruits and vegetables, especially within controlled structures, i.e., greenhouse and hydroponic contexts. This article focuses on the parameters that govern the development of automated robotic arms. A broad review of related research from the past two decades (2000 to 2022) is presented, including limitations and performance. Data were obtained from various sources depending on the topic and scope of the review; the main sources were peer-reviewed journals, book chapters, and conference proceedings retrieved through Google Scholar. A complete fruit harvester comprises a manipulator for mechanical movement, a vision system for localizing and recognizing fruit, and an end-effector for detachment. The performance of several developments, in terms of harvesting time, harvesting accuracy, and detection efficiency, is summarized in this work. It is observed that improving harvesting efficiency and custom end-effector design are the main areas of interest for researchers. The harvesting efficiency of a system is increased by implementing optimal techniques in its vision system that achieve low recognition error rates.

1. Introduction

Using robotics to accomplish work has become desirable over the last two decades in every field, e.g., industry [1], medicine [2], agriculture [3], and the military [4]. Many researchers have discussed the development challenges that arise in agriculture for fruit and vegetable harvesting and other field operations, e.g., sowing, weeding, and spraying [5,6,7]. With advancements in the Internet of Things (IoT), sensors, high-speed internet (4G/5G/6G), and robotic automation with the required attachments, development in agriculture is at its peak, and many operations, e.g., crop quality and quantity measurement, disease detection, planting, and harvesting, are now performed by machinery in developed countries [8,9,10,11]. The word automation here refers to equipment and devices that work in place of manual procedures. Globally, the demand for agricultural products is increasing rapidly; it is expected that, over the next 30 years, agricultural production will need to increase by about 50% to meet global food demand [12,13].
Many regions around the world are facing labor shortages in the agricultural sector [14]. Robotic arms can help address this issue by automating the harvesting process and reducing dependency on human labor, especially during peak harvesting seasons when the demand for workers is high. The supply of manual farm labor is decreasing due to the unavailability of skilled agricultural workers and shortages caused by extreme weather conditions in the field [15]. Labor shortages can also arise from pandemic conditions and the resulting worldwide travel restrictions [16,17,18,19], which affect the availability of migrant laborers, in addition to several other factors, such as rural-to-urban migration, a lack of interest in farming, and an aging farming population. As a result, much fruit spoils in fields where harvesting relies on seasonal workers [20]. In some cases, manual harvesting affects workers' health: improper body posture can cause musculoskeletal disorders [21], and workers also highlight the problem of operating in a harsh environment, which can cause several other health issues. Moreover, manual harvesting carries high labor costs and inefficiency. Surveys from multiple companies show that about 35–40% of the value of the citrus yield is spent on manual harvesting [22]. Another study on the production costs of sweet cherries revealed that half of the production margin is spent on manual harvesting [23].
To overcome labor shortages, protect workers' health, and increase agricultural productivity in the current challenging environment, research and development must be promoted in agriculture. Many agricultural products, like potato, wheat, sugarcane, and corn, ripen uniformly and can be harvested in a single pass. However, some high-value crops, such as apples, sweet peppers, and tomatoes, do not ripen at the same time [24]. Harvesting crops with diverse ripening stages is a laborious, repetitive, and costly process, making it a prime candidate for automation in agriculture. Apart from addressing labor shortages, automation also plays a crucial role in ensuring the quality of the harvested crop, making it fit for human consumption [25]. Automated systems can harvest crops with high speed and precision, surpassing human capabilities, and can work without fatigue, which increases the efficiency of the harvesting process [26]. Significant research and development efforts have been dedicated to advancing agricultural automation, but no fully satisfactory commercial solutions have been introduced so far. Numerous automated machines have been designed to alleviate the challenges faced in agriculture; this review focuses specifically on robotic arm harvesters.
Every effort is made in this review paper to discuss all the technical challenges that may occur in the development of fruit-harvesting robotic arms, their needs, and their importance in current scenarios. Furthermore, each main component of fruit-harvesting robotic arms, along with their concerns and challenges, is discussed.

2. Recent Developments

Previous work on the development of robotic arm harvesters is discussed in this section, along with the performance and limitations reported in the literature.
In the early stages of fruit-harvesting robotic arms, the designs were rudimentary, with basic grippers and single-axis movements characterizing their mechanical form [27,28,29]. These early arms lacked adaptability to different fruit shapes and sizes, leading to a quest for more sophisticated solutions [30]. As the years progressed, a breakthrough came with the introduction of multi-axis movement, granting these robotic arms greater flexibility and reach [31]. The incorporation of force sensors and vision systems marked another milestone, enabling the arms to delicately handle fruits and precisely locate them within their environment [32]. However, it was the 2010s that saw a true revolution in gripper design. Adaptive grippers emerged, their flexible fingers and suction cups adapting seamlessly to a variety of fruit shapes [33,34,35,36]. Soft grippers, employing compliant materials, further improved interactions, reducing the risk of damage. In the subsequent years, advanced materials and electromagnetic actuators transformed the landscape, making arms lightweight, energy-efficient, and precise. The hybrid gripper systems of the 2020s brought a new level of versatility, combining different gripping mechanisms for enhanced fruit handling. Simultaneously, artificial intelligence emerged as a key player, empowering these arms with real-time decision-making capabilities [37,38,39]. Through specialized attachments and end-effectors, from gentle suctions for delicate fruits to cutting tools for stalked ones, these robotic arms evolved into versatile, indispensable tools in fruit harvesting [40,41,42]. Ergonomics and safety considerations also received due attention, ensuring that these mechanical marvels not only excelled in function but also in user-friendly operation. 
The design history of the fruit-harvesting robotic arm is one of continuous innovation, driven by a quest for precision, adaptability, and efficiency that is reshaping the landscape of agricultural practices.
Researchers have made significant strides in addressing the challenge of localizing fruits in fruit-harvesting robotic arms since the last century [43]. Before the 2000s, research in the field of fruit-harvesting robotic arms faced significant challenges in localizing fruits due to the limited availability of advanced technology [44]. Their efforts primarily focused on rudimentary methods of fruit detection and localization. One of the earliest approaches involved basic computer vision techniques. Researchers developed algorithms to analyze 2D images of fruit-bearing trees [45,46,47]. These algorithms relied on color-based segmentation and shape recognition to identify potential fruit locations [48,49]. However, this method had limitations, especially in varying lighting conditions and with occluded or partially hidden fruits.
Another approach was the use of ultrasonic sensors [50,51]. These sensors emitted high-frequency sound waves and measured the time taken for the waves to bounce back after hitting an object. Although they were effective in detecting obstacles, they were less precise in identifying individual fruits. Furthermore, some researchers explored the use of infrared sensors to detect temperature variations associated with ripe fruits [52]. This approach, however, had limited success, as factors like ambient temperature and humidity could affect the readings. Researchers also experimented with mechanical contact-based systems. These systems involved probes or robotic arms that physically touched fruits to assess their ripeness [53]. Although this approach could provide valuable information, it was not suitable for delicate fruits or those in high-density clusters. Initially, in the early 2000s, the focus was on basic computer vision techniques for fruit recognition. This involved developing algorithms to detect and identify fruits based on color, shape, and texture. As technology advanced, researchers began integrating more sophisticated sensing systems, such as multi-spectral imaging, which allowed for a more detailed analysis of fruit characteristics. This enabled the robots to distinguish between ripe and unripe fruits with higher accuracy. In the mid-2010s, the implementation of depth sensors marked a crucial advancement. These sensors provided precise information about the location of fruits in three-dimensional space, allowing for more accurate localization [54,55,56]. Additionally, force feedback systems were introduced to ensure the gentle handling of delicate fruits during the harvesting process [38]. A significant breakthrough came with the introduction of machine learning algorithms around 2013. 
These algorithms enabled the robotic systems to learn and adapt to different fruit varieties and environmental conditions, enhancing their ability to accurately identify and localize fruits [57,58,59,60]. By 2015, researchers had achieved simultaneous multi-fruit harvesting capability, a milestone that greatly increased the efficiency of fruit harvesting robots [61]. Furthermore, the development of robotic arms with multiple degrees of freedom allowed for more flexible and precise manipulation, further improving localization accuracy. In recent years, advancements in artificial intelligence (AI) have played a pivotal role. Integration of AI algorithms has enabled real-time assessment of fruit ripeness, allowing the robots to make decisions on the spot. Edge computing, introduced around 2021, further enhanced the speed and efficiency of AI processing directly on the robot, reducing reliance on external computational resources [62]. Furthermore, the integration of LiDAR technology has provided the capability for 3D mapping of orchard terrain. This allows robots to navigate and localize fruits in complex and dynamic environments [63,64]. Overall, researchers have employed a combination of computer vision, sensor technologies, machine learning, and AI algorithms to progressively refine the localization capabilities of fruit harvesting robots. These advancements have not only improved the efficiency and accuracy of fruit harvesting but also have the potential to revolutionize the agricultural industry by addressing labor shortages and increasing productivity. Figure 1 shows the timeline of the significant contributions to the design and development of fruit-harvesting robotic arms in decades. It can be seen that there has been a major contribution to its development in the recent decade due to its growing demand in the agricultural sector.
An autonomous robotic arm with a custom-made end-effector was developed for apple harvesting. The prototype was manufactured using a Pentium IV 2 GHz PC with 1 GB RAM, a Panasonic VR006L robotic arm, a tractor, a generator, and a touch-panel PC with an HMI.
Huang [65] developed a robotic arm with a path-planning platform simulated in MATLAB. The focus was to enable the end-effector to reach its destination with high precision by implementing an inverse kinematics technique.
Another low-cost prototype for a fruit harvester was designed in [66] using a stereo vision camera. The vision system is installed in the end-effector so that updated information on arm movement is sent to the controller for further action. The prototype was constructed using a Minoru 3D camera for the vision system, DC gear motors, arm parts 3D-printed on a Dimension SST 1200es printer, and an STM32F407VGT6 controller.
The literature indicates that achieving a low harvesting time is a key issue, which was examined in [67]. A low-cost, high-speed robotic apple harvester was proposed that can sense, plan, and harvest. The entire prototype is mounted on a John Deere Gator electric utility vehicle and comprises a manipulator, custom-made end-effectors, a time-of-flight (TOF) camera, and a color camera. The detection of objects (apples in this case) is achieved using circular Hough transform (CHT) and blob analysis (BA) techniques.
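The circular Hough transform used for apple detection can be illustrated with a minimal, pure-Python voting sketch. This is not the implementation from [67]; it shows the core idea for a single, known radius: each edge pixel votes for all candidate circle centres at that radius, and the accumulator maximum marks the most likely fruit centre.

```python
import math
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Vote for circle centres at a fixed radius: every edge pixel
    (y, x) casts votes on the ring of candidate centres around it.
    Returns the (x, y) of the strongest centre candidate."""
    acc = np.zeros(shape, dtype=int)
    for (y, x) in edge_points:
        for t in range(0, 360, 5):          # sample the ring every 5 degrees
            a = int(round(x - radius * math.cos(math.radians(t))))
            b = int(round(y - radius * math.sin(math.radians(t))))
            if 0 <= b < shape[0] and 0 <= a < shape[1]:
                acc[b, a] += 1
    b, a = np.unravel_index(acc.argmax(), acc.shape)
    return (a, b)
```

A production system would sweep a range of radii (fruit sizes vary) and run an edge detector first; libraries such as OpenCV provide an optimized multi-radius version of this transform.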
An autonomous sweet pepper recognition and tracking system has been developed using a simulation approach [68]. The simulation is carried out using MATLAB, V-REP, and ROS software for analysis. The algorithms are executed in MATLAB, and the real-time images captured by an external camera are filtered to remove color noise. The system’s performance was evaluated using 78 images.
A real-time image processing algorithm is used to determine the location of fruit, and their kinematic solution is obtained using C# [69]. A mechanical custom-made end-effector is developed to detach the fruit, and the pressure required for the desired fruit is calculated precisely by determining the tolerable pressure. Visual tasks are carried out using a two-dimensional camera with the addition of a depth sensor. The results were validated using 100 images, demonstrating an acceptable level of accuracy in fruit localization and harvesting.
A geometrical solution that transforms peduncle coordinates into robotic arm joint angles, using a Convolutional Neural Network (CNN) for detection, was validated in [70]. A Kinect RGB-Depth camera was mounted on the robot for real-time image processing. Validation showed a harvest success rate of about 52%.
Another robotic arm harvesting prototype based on deep learning was designed in [71], using PointNet-based and Mobile-DasNet models for fruit harvesting and recognition, respectively. The prototype comprises a Universal Robots UR5 manipulator, a self-designed end-effector, an Intel RealSense D435 depth camera, and an NVIDIA GTX-1070 GPU.
Feng incorporated laser technology into an apple-harvesting robot [72]: a laser vision system allows the robot to accurately measure the distance to the target. This system performs a three-dimensional scan of the target scene, capturing detailed spatial geometry and analyzing the relationship between the fruits and branches.
An integrated prototype was designed in [73] and tested and validated in a greenhouse. The system is built from a six-degree-of-freedom (DOF) robotic arm, a Fotonic F80 depth camera, a GPU, and a PLC. The software counterpart consists of C++ and ROS Indigo on Ubuntu. Although the work is promising, it has several drawbacks in terms of harvest success rate and detachment time.
A novel apple harvesting robot end effector that incorporates a pneumatic flexible actuator as a curved joint is created in [74]. This innovative design allows for a substantial output force, enabling the end effector to firmly grasp the target fruit with excellent flexibility. This breakthrough development has the potential to significantly enhance the efficiency and effectiveness of apple harvesting processes.
A localization-based detection and fruit harvesting prototype is prepared in [75] with a custom cutting mechanism. The proposed system contains a Braccio robotic arm, depth camera, and Arduino Due controller. The software part is established using MATLAB and Arduino IDE environment. The prototype has several limitations in terms of performance, and the system is not fully automated because, during testing, it requires some manual modification.
A close-range vision-based chili-harvesting robot with a custom end-effector was designed in [52]. The proposed manipulator detaches the chili using a suction and grasping mechanism; to minimize fruit damage, the detachment process mimics manual harvesting. The prototype contains a laser diode, a laser receiver [76], a five-DOF manipulator, and a computer. The drawback of this design is its single-trunk training model, which works only on plants trained to a single aligned trunk, so this type of prototype cannot be used for harvesting in an open-field environment.
Zhang designed a manipulator for an apple-picking robot and conducted experiments to study the control stability of the manipulator [77]. The control area was divided into a stretching area and a harvesting area to ensure smooth operation. Trials showed that once the robot is within range of the target, the required stabilization time is less than two seconds, suggesting that the manipulator's control system is stable and settles quickly once it reaches the harvesting area.
Another prototype for an automated mushroom harvester is proposed in [78]. Special custom end-effectors are designed for the delicate handling of mushrooms. The algorithm developed for localizing and identifying mushrooms is based on the calculated area of the mushroom cap; the maximum cap diameter harvested by the system is 75 mm. To solve the problem of low-lighting conditions, a Philips TLE 23 W bulb is mounted near the end-effectors to give the vision system a better environment for target identification. The damage rate claimed by the authors during testing of the developed prototype is 3%.
Zhao developed an apple-harvesting robot featuring a five-DOF serial-joint arm with both lifting and telescopic capabilities [79]. The robot's performance was tested under controlled laboratory conditions, giving an 80% success rate with a harvesting period of 15 s.
An automated robotic arm for orange harvesting was developed in [80]. The vision system for fruit identification comprises two cameras (a ZED stereo camera and an A4Tech webcam) and one ultrasonic sensor. The stereo camera detects orange trees by applying a green-detection algorithm, the webcam guides the end-effector to the fruit, and the ultrasonic sensor measures the distance from the end-effector to the fruit.
The intelligent fruit-harvesting robot designed by Gu was equipped with advanced technologies such as autonomous navigation, computer vision, and robotic arm control [81]. The robot was able to autonomously navigate through the orchard using sensors and algorithms to detect obstacles and plan its path. It could identify mature fruits using computer vision algorithms with a high recognition rate in the field of agricultural robotics.
Most of the above developments show that the basic aim of the work is the successful harvesting of fruit using an optimal detachment mechanism. Results claimed by the developers of these systems are shown in Table 1.
It is observed from previous work that harvesting time, harvesting success rate, and localization success rate are the parameters most discussed in the literature. The harvesting cycle is the time taken to detach a single fruit, from the initial manipulator movement until the fruit reaches the container. The localization success rate is the ratio of the number of fruits detected by the vision system to the total number of available fruits. The harvesting success rate is the ratio of the number of fruits successfully harvested to the number of fruits detected by the system.
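The three figures of merit above can be made concrete with a small helper. This is an illustrative sketch of the definitions as stated, not a function from any of the cited works:

```python
def harvest_metrics(available, detected, harvested):
    """Performance figures used throughout the survey, as percentages:
    localization rate = detected / available,
    harvest rate      = harvested / detected,
    overall rate      = harvested / available."""
    localization_rate = 100.0 * detected / available
    harvest_rate = 100.0 * harvested / detected
    overall_rate = 100.0 * harvested / available
    return localization_rate, harvest_rate, overall_rate
```

For example, a trial with 100 fruits on the tree, 90 detected, and 72 brought to the container gives a 90% localization rate, an 80% harvest rate, and a 72% overall rate.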

3. System Requirements

In this section, the entire set of requirements for the development of fruit-harvesting robotic arms is discussed. The design contains a manipulator for mechanical movement, a vision system for localizing and recognizing objects, and an end-effector for detachment. The development of a software platform is also an important part of the system: it manages the collaboration between components so that a well-defined workflow is established. Data processing in software depends on manipulator coordinates, image information, and sensor data. A basic flow chart for an automated fruit harvester working in a field environment is shown in Figure 2.
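The detect–localize–plan–detach workflow of such a flow chart can be sketched as a single harvesting cycle. All four collaborators here (`vision`, `planner`, `arm`, `gripper`) are assumed interfaces invented for illustration, not an API from any cited system:

```python
def harvest_cycle(vision, planner, arm, gripper):
    """One fruit-by-fruit harvesting cycle:
    detect -> localize -> plan -> approach -> detach -> deposit.
    Returns the number of fruits successfully harvested this cycle."""
    fruits = vision.detect()                    # vision system finds candidate fruits
    harvested = 0
    for target in fruits:
        path = planner.plan(arm.pose(), target)  # e.g., inverse kinematics + path plan
        if path is None:
            continue                             # unreachable target: skip, avoid damage
        arm.follow(path)                         # manipulator approaches the fruit
        if gripper.detach(target):               # grasp and cut/pull the fruit
            arm.to_container()                   # carry the fruit to the container
            gripper.release()
            harvested += 1
    return harvested
```

In a real system, this loop would also handle sensor re-triggering after each detachment, since removing one fruit can shift the positions of its neighbours.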
The working principle and mechanical structure of each component are discussed in this section; the associated challenges are considered in the next section.

3.1. Manipulator

For the successful harvesting of valuable fruit, a mechanical architecture consisting of a robotic manipulator and end-effector is needed. The function of the manipulator is to guide the end-effector to the target, where the end-effector grasps the fruit and the manipulator brings it to the container [82]. The entire workspace must be known to the system so that it can reach the fruit without damaging either the system or the fruit. Workspace parameters depend on plant height, system height, number of fruits, and the distance and size of the target fruit.
Another specification in manipulator development is the payload of the desired fruit: for example, manipulators designed for watermelons and for apples have different load-capacity requirements. An easy way to understand a manipulator is that it resembles a human arm [83]. The human arm corresponds to a six-DOF manipulator, with joints equivalent to the shoulder, arm, and wrist, as shown in Figure 3.
The shoulder joint of the manipulator is attached to a fixed base rather than a movable structure, and the remaining joints lie along the robotic arm. Usually, the fourth DOF of the robotic arm is used for wrist movement, the fifth for adjusting rotation or orientation, and the sixth for gripping [84]. A substantial amount of drive power is required for the motors inside the joints; this is described by the manufacturer's power rating, which depends on the motor requirements, the weight of the manipulator, and the type of work expected from the robotic arm. Every DOF carries a motor inside its joint for mechanical movement. The heaviest payload lies on the second DOF, so a servo motor with high torque is especially required for this joint [85].
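The joint-angle computation that lets such an arm reach a target can be illustrated on the simplest case: a planar two-link arm (shoulder plus elbow), a reduced analogue of the six-DOF arm in Figure 3. This closed-form sketch is a standard textbook result, not the kinematics of any specific harvester reviewed here:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm.
    (x, y): target in the arm's base frame; l1, l2: link lengths.
    Returns (theta1, theta2) in radians, or None if out of reach."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None                                 # target outside the workspace
    theta2 = math.acos(c2)                          # elbow-down solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

Real six-DOF harvesters solve the same problem in 3D, typically numerically, and must also choose among multiple valid joint configurations to avoid collisions with branches.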

3.2. Vision System

The vision system is an important component of all automated robotic systems, known as Vision-Guided Robotic (VGR) systems. In agricultural robotic harvesting, the object (fruit, in our case) is first identified by the vision system, and the manipulator reaches the target based on the coordinates it provides [86]. For several decades, active research has been performed on developing optimal vision platforms that identify fruit with high accuracy. Many algorithms have been applied in prototypes where accuracy in computational and sensing parameters is the first requirement for efficient results. The most important feature for fruit sensing is color-based identification [87,88]. Several distinct segmentation-based algorithms have been developed for apples, citrus, mangoes, tomatoes, pineapples, etc. For visual identification, researchers have also used other techniques, like the Circular Hough Transform [89], a global mixture of Gaussians [90], blob identification [91], and many more, achieving 70% to 95% success rates in controlled environments. The development of an efficient vision system is the key to increasing the success rate of harvesting devices. Different types of sensors are used for fruit identification, i.e., ultrasonic sensors, laser scanners, Light Detection and Ranging (LiDAR) sensors, stereo vision cameras, depth cameras, and RGB cameras. The sensors used for robotic arm vision guidance are shown in Figure 4.
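The color-based identification mentioned above can be shown with a deliberately simple NumPy sketch for red fruit: a pixel is classified as fruit when its red channel is bright and clearly dominates green and blue. The thresholds are illustrative assumptions, not tuned values from the cited works:

```python
import numpy as np

def red_fruit_mask(rgb, r_min=120, dominance=1.4):
    """Toy colour-based segmentation: return a boolean mask that is
    True where a pixel is bright red (likely ripe red fruit).
    rgb: H x W x 3 uint8 array in RGB channel order."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # Bright red AND red dominates both other channels.
    return (r > r_min) & (r > dominance * g) & (r > dominance * b)
```

Real systems typically work in HSV or YCbCr space and add shape or texture cues, precisely because raw RGB thresholds fail under the varying field illumination discussed later in this review.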
Many manufacturers make universal sensors with multiple functionalities in a single module, e.g., depth plus RGB color sensing [96]. Using these sensors, different properties of plants and fruits, like fruit size, location, color, pose, and orientation, can be acquired. There are two types of VGR systems: the arm-mounted camera VGR system and the stationary camera-mount VGR system. In the stationary camera-mount system, the vision sensors are mounted on a base separate from the manipulator. This type of system requires additional infrastructure to monitor the field of view within its range. The captured information is sent to the software system, which calculates the position of the target using inverse kinematics [97]. The manipulator reaches the target according to the computed variable values and completes the desired task. Calibration is required to relate the vision sensor position and the manipulator position to each other. In the arm-mounted VGR system, the vision sensor is attached to the manipulator and repeatedly changes position with arm movement. This type of system does not require additional camera infrastructure. During the task, updated information is received by the software, so the system knows its current position in 3D space relative to the initial point. In this scenario, the distance from the fruit to the camera is first evaluated, and then the displacement between the camera, the gripper, and the target in the workspace is estimated by scaling [98].
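The camera-to-manipulator calibration step can be illustrated with the standard pinhole back-projection followed by a rigid transform into the robot's base frame. This is the textbook geometric relation, not the calibration routine of any specific system reviewed here; `fx, fy, cx, cy` are the camera intrinsics and `(R, t)` the calibrated extrinsics:

```python
import numpy as np

def pixel_to_camera_xyz(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading into a 3D point
    in the camera frame using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_base(p_cam, R, t):
    """Map a camera-frame point into the manipulator base frame via
    the rotation R (3x3) and translation t (3,) from calibration."""
    return R @ p_cam + t
```

For a stationary camera, `(R, t)` is fixed once; for an arm-mounted camera, it must be re-derived from the arm's current joint pose at every frame, which is why hand-eye calibration is central to the arm-mounted design.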
In [99], a monocular camera is used for vision, and a stepper motor drives the manipulator. In this scenario, a depth camera is not required; depth information is acquired through an optical phenomenon: the degree of blur is used to estimate the distance of an object in the scene. The accuracy of this system is not as good as that of stereovision camera-based systems. Another system design based on a stereovision camera that can estimate object depth can be found in [100]. The blob technique is used to extract image features, detection is carried out on each camera image, and segmentation occurs before a command is delivered to the attached modules. The term "each camera image" refers to a stereovision camera, which uses two cameras at a fixed baseline that work like human eyes. Results show better accuracy and target-size estimation than monocular cameras.
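The reason stereovision outperforms blur-based monocular depth comes down to a simple geometric relation: for a rectified stereo pair, depth is focal length times baseline divided by disparity. The values below are illustrative, not from the cited prototypes:

```python
def stereo_depth(focal_px, baseline_m, x_left, x_right):
    """Depth of a matched point from a rectified stereo pair:
    Z = f * B / d, where d is the disparity in pixels.
    Returns None for zero/negative disparity (point at infinity
    or a matching error)."""
    disparity = x_left - x_right
    if disparity <= 0:
        return None
    return focal_px * baseline_m / disparity
```

For example, with a 700 px focal length, a 6 cm baseline, and a 30 px disparity, the fruit is about 1.4 m away. Because depth error grows with the square of distance, stereo rigs for harvesting work best at the close ranges (tens of centimetres to a couple of metres) reported in these studies.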
An apple fruit picker containing stereovision cameras has been developed [101]. Its performance was validated under different conditions with targets ranging from 30 cm to 110 cm. Validation was carried out in two scenarios: first, a manual procedure was performed by placing a small object at the center of an apple, giving a distance error of approximately 0.63%; second, the centroid of the apple was found through computational analysis, with an error of about 3.54%. For the detection of ripened tomatoes, a structured-light-based system was developed in [102]. In this work, structured light is used for fruit localization, and the Cb layer is used to estimate ripeness; an error of a few millimeters was observed during system validation. As seen in the literature, most development is based on stereovision cameras. Another robotic manipulator designed for harvesting lychees is also based on a stereovision system [103]. This fruit picker was tested at the laboratory level for path planning and fruit detachment; the manipulator was developed in a virtual environment, with path-planning kinematics coded in Microsoft Visual C++. Most of the research reviewed up to this point is based purely on either the arm-mounted or the stationary-mount camera technique. In contrast, a novel technique has been proposed that combines both vision systems [104]: a stationary-mount camera provides a larger field of view of the workspace, while accuracy is gained from the arm-mounted camera. Testing in a citrus orchard showed 95% accuracy. A prototype has also been designed for detecting and picking small asparagus, 230 mm in height [105]. Its vision system consists of two slit lasers and a TV camera; the two lasers are used to gate the desired height of the asparagus plants.
Another laser-camera system was designed for the detection and detachment of apples. Machine vision is used for apple detection, and laser sensors are used for fruit localization [106]. The accuracy of the system, as claimed by the authors, is 100% in detection and 90% in fruit detachment.

3.3. End-Effector

There are two types of fruit-harvesting techniques: the bulk technique and the fruit-by-fruit harvesting technique [107]. The design of the end-effector does not matter in the bulk fruit-picking technique, where detachment is performed by a Rapid Displacement Actuator (RDA). Branches bearing ripened fruit are detected by the vision system; the RDA is then actuated so that the fruit is shaken loose and caught by another system for collection in a container. The RDA applies vibration to the branches of the tree [108]. However, this type of harvesting has many disadvantages, discussed in the subsequent section. End-effector design has a significant impact on fruit-by-fruit harvesting techniques, and considerable research addresses custom harvester shapes. So far, different types of harvesting tools have been invented, such as the scissor type, the suction mechanism [109], and the swallow harvester [110].
The suction-based approach is commonly used for harvesting and is comparatively reliable and easy to design [73]. It employs a cap-type end-effector and a tube body connected to the suction cap; using air pressure, the target fruit is detached and transferred to a storage container via a discharge tube. The second type of harvesting tool is usually made for fragile crops and requires complex engineering for its cutting mechanism. This type of end-effector consists of scissors that cut the peduncle of the fruit and a gripper that holds the fruit until it reaches the target container. A specially trained detection system is required to locate the cutting point on the peduncle. Research efforts have focused more on the development of peduncle-cutting systems than on other harvesting systems [111]; such systems have been developed for tomatoes and sweet peppers in [112,113]. Another type of end-effector works much like a human hand: it reaches the target and grasps it with mechanical fingers, after which the fruit is detached from the plant. However, the gripping and detachment forces may need to be calculated to avoid damaging the fruit and plant [114]. Force is applied through manipulator movement, and sometimes twisting the wrist provides enough effort for detachment. This harvesting technique is mainly applied to rigid plants and trees, where fruit picking may cause some disturbance without harming them. Thus, in the development of swallow end-effectors, fruit shape, fruit size, fruit weight, and the nature of the plant are all important considerations. A large share of research effort has gone into developing reliable end-effectors, as shown in Figure 5.

4. Development Challenges

In this section, the hurdles faced in the development of automated fruit-harvesting robotic arms are reviewed. Every component of the system faces a series of challenges, which depend on parameters such as cost, working environment, system efficiency, prototype size, sensor types, the implemented algorithm, harvesting time, manipulator material, and hardware. Adjusting one parameter in the name of improvement typically compromises or degrades another. An overview of the development challenges arising in the different components of robotic arms is shown in Figure 6.

4.1. Vision Challenges

The vision system for fruit-harvesting robotic arms encounters a myriad of challenges. These encompass varying lighting conditions, potential obstruction by leaves and branches, overlapping fruits, and the challenge of accurately detecting distant fruits. Natural variations in fruit color and shape, as well as occlusion by plant elements, further complicate recognition. Environmental factors like wind and rain introduce dynamic elements, while seasonal changes in plant and fruit characteristics demand adaptability. Ensuring dust-free camera lenses, precise calibration, and real-time processing capabilities are imperative. Depth perception, resilience to environmental variations, and adaptability to diverse fruit types are also crucial. Additionally, the system’s capacity for learning and generalization across different environments and fruit types is essential for its effectiveness and versatility in the field.
Identification of fruit by visual sensing is an important element of the system and suffers from several challenges. Over several decades, significant work has been carried out on fruit identification, and many techniques have been developed and implemented in prototypes, with researchers reporting acceptable results. One problem reported by researchers is that varying lighting environments affect the segmentation process, which plays a vital role in fruit identification [115,116,117]. In other words, designs often look good and achieve high accuracy under controlled laboratory lighting but run into problems in the orchard/field environment due to intense sunlight in the daytime and very dim light at night. These issues are addressed to some extent by harvesting at sunset or providing artificial light at night, but such workarounds still reduce the efficiency of the system [118]. A prototype can be called highly efficient only if it works in every environment without depending on a limited time window.
Variable lighting problems arise from the effect of unbalanced exposure on the image, which disturbs saturation and contrast. In these situations, image quality may hamper fruit identification. An exposure fusion technique is developed in [119]. This method produces a well-exposed picture by retaining only the best areas of each frame from a set of images taken at various exposures, as shown in Figure 7.
To compute the optimal portions and weights, the image layers are merged into a final image using visual quality measures such as sharpness, saturation, and well-exposedness. This method yields an evenly exposed view of the target with few shadows. Another proposed technique performs fruit identification independently of color properties; in that work, fruit is identified from 3D geometric planes [120].
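The well-exposedness weighting at the heart of such exposure fusion can be sketched as follows. This is a simplified, single-scale illustration assuming grayscale inputs in [0, 1]; the full method in [119] additionally uses saturation and sharpness weights and multiscale blending:

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Blend differently exposed grayscale images (values in [0, 1]) by weighting
    each pixel by its 'well-exposedness', i.e., closeness to mid-gray (0.5).
    Pixels that are nearly black or blown out receive low weight."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])  # (N, H, W)
    # Gaussian well-exposedness weight: pixels near 0.5 contribute the most
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)                     # normalize per pixel
    return (weights * stack).sum(axis=0)                              # weighted blend

# Assumed example: an under- and an over-exposed version of the same 2x2 scene
under = np.array([[0.05, 0.10], [0.20, 0.30]])
over  = np.array([[0.50, 0.60], [0.80, 0.95]])
fused = exposure_fusion([under, over])
```

Because the blend is a per-pixel convex combination, each fused pixel stays between the corresponding input values while favoring whichever frame exposed that region best.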
As with vision algorithms, the selection of visual sensors is an important issue [121]. Two vision scenarios are available in VGR, as described in Section 3.2. In the stationary camera mount, the vision sensor is fixed to a frame, also known as the eye-to-hand method. This type of system has the advantage of a known fixed position serving as a reference point in the working area. Its limitations are the installation of an extra frame for the vision sensor and the need for a high-resolution camera for object detection. The camera must be three-dimensional, with the ability to extract depth information; for adequate identification of fruit quality, the image used for target tracking should be no lower than 5 MP. Selecting the mounting positions of the camera and manipulator is also challenging: they must be placed so that the camera’s view is not blocked during manipulator movement. Complex computation is required to localize fruit from a fixed point, and depth data obtained from a fixed position carry larger errors.
The challenges of this scenario are tackled using the arm-mounted technique, also known as the eye-in-hand technique, in which the vision camera is mounted on the manipulator’s wrist or on the end-effector [122]. Here, the distance to the target is monitored at every instant in a closed loop relative to the position of the end-effector and vision sensor, and a low-cost two-dimensional camera suffices for vision. The arm-mounted configuration solves many of the challenges faced in the stationary mount scenario, such as the need for a separate camera frame, interference between camera and manipulator, and the requirement for a high-quality camera. Along with these advantages, there are also limitations. A camera mounted in the middle or at the top of the end-effector adds payload to the manipulator, and some manipulators cannot tolerate the weight of heavy integrated cameras, as in the case of the Microsoft Kinect depth sensor used in [123], which weighs approximately 1.5 kg. In such scenarios, a stationary camera mount system is the better choice.
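In the eye-in-hand configuration, localizing the fruit for the controller reduces to chaining coordinate transforms: a position measured in the camera frame is mapped into the robot base frame through the end-effector pose (from forward kinematics) and the fixed camera-mounting transform. A minimal sketch, with the pose values assumed purely for illustration:

```python
import numpy as np

def target_in_base(T_base_ee, T_ee_cam, p_cam):
    """Express a fruit position measured in the camera frame in the base frame.
    T_base_ee : 4x4 pose of the end-effector in the base frame (forward kinematics)
    T_ee_cam  : 4x4 fixed mounting transform of the camera on the end-effector
    p_cam     : 3-vector, fruit position in the camera frame (metres)"""
    p_h = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coordinates
    return (T_base_ee @ T_ee_cam @ p_h)[:3]

# Assumed example: end-effector translated 0.5 m along base x,
# camera offset 0.1 m along the end-effector z axis
T_base_ee = np.eye(4); T_base_ee[0, 3] = 0.5
T_ee_cam  = np.eye(4); T_ee_cam[2, 3]  = 0.1
p = target_in_base(T_base_ee, T_ee_cam, [0.0, 0.0, 0.3])  # fruit 0.3 m ahead of camera
# p -> [0.5, 0.0, 0.4]
```

The same chain explains the closed loop: as the manipulator moves, T_base_ee is updated from joint encoders, so the fruit’s base-frame position can be refined at every instant.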

4.2. End-Effector Challenges

The selection of a harvesting technique is also a challenging task. In the bulk technique, a heavy mass of fruit is harvested in a short time by applying vibration to the plant or tree. This process was adopted many years ago and performs well in terms of time consumption [124], but fruit growers report a major disadvantage: the applied detachment force damages fruit and plants. The problem is limited for large-canopy trees and hardy fruits but severe for fragile fruit and plants [125]. Another problem with bulk harvesting is that detachment is carried out without identifying the characteristics of individual fruits, so much of the harvested fruit is unripe. As a result, bulk-harvested produce is less acceptable in the market than manually harvested produce because of its damaged and unripened condition. Research is still in progress on harvesting fruit at maturity, which is not a trivial process and affects harvesting time. The alternative is fruit-by-fruit, or selective, harvesting [126]. Implementing this technique also involves technical challenges, such as the selection of the end-effector, the type of vision system, and the DOF of the robotic arm. For successful harvesting, end-effector selection is an important factor. Three types of end-effectors have been developed for selective harvesting: the cutting end-effector, the suction mechanism, and the swallow end-effector. A suction mechanism is used in most cases, as the end-effector makes no harsh contact with the plant, although physical contact with the fruit does occur during harvesting. The drawbacks of the suction-based technique are the need for a separate suction mechanism, which adds cost to the system, and possible damage to fragile plants and fruit.
A cutting mechanism is needed for harvesting fragile fruit and plants [127]. An additional mechanism attached to the end-effector consists of a scissor-type cutting tool: when the end-effector reaches the target fruit, an actuator drives the cutter to sever the peduncle. Beyond the cutting mechanism, a gripping mechanism is also needed to hold the fruit until the manipulator reaches the destination container. This arrangement is very useful for sensitive plants and fruit, but the challenge lies in identifying the cutting point, which requires high-resolution cameras and complex algorithms. Cutting-mechanism end-effectors are also problematic in fruit clusters. Sometimes, after harvesting, part of the peduncle remains stuck in the cutting tool, preventing the cutter from cleanly severing subsequent peduncles [128]. Before designing the cutting tool, the required applied force must be calculated mathematically based on the strength of the peduncle, which is itself a challenging task. Many prototypes based on the cutting mechanism use two power sources, increasing the cost of the system. An optimal solution is developed in [36]: a novel gripper that shares a single power source between the fruit gripper and the cutter.
Another type of end-effector is the swallow end-effector, which works much like a human hand. At its edge, a finger-like mechanism picks the target fruit, much as in traditional manual harvesting. In the swallow-type end-effector, the detachment force is also borne by the manipulator: detachment is performed by grasping the fruit and applying force through the manipulator’s movement, which can damage the plant, the fruit, and the manipulator’s body. Consequently, the selection of an end-effector depends on the type of task the system performs. When the system is developed for hardy fruit and plants, the suction mechanism is appropriate; the swallow technique suits fragile fruit on hardy plants; and a cutting mechanism is developed for systems where both plant and fruit are sensitive to force.

4.3. Manipulator Challenges

The selection of the number of DOFs in a robotic arm manipulator is also an important design consideration. Increasing the DOF of the robotic arm improves its working feasibility in a complex environment, making it easier to extend to the target. Better outcomes are obtained by using kinematic simulations to control the manipulator precisely. The manipulator’s kinematic analysis concentrates on the movement of each link before considering any force that causes the movement [129]. Every DOF introduces a variable needed to express the position of one link relative to the preceding link, adding to the complexity of the manipulator [130].
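The idea that each DOF adds one variable to the kinematic description can be illustrated with a planar two-link arm: the end-effector position follows by chaining the two joint angles, one per DOF (link lengths here are assumed for illustration):

```python
import math

def fk_2link(theta1: float, theta2: float, l1: float = 0.4, l2: float = 0.3):
    """Forward kinematics of a planar 2-DOF arm.
    Each joint angle positions its link relative to the preceding link;
    the end-effector position follows by chaining the two links."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended along the base x axis: both joints at 0 -> reach l1 + l2
x, y = fk_2link(0.0, 0.0)  # -> (0.7, 0.0)
```

Adding a third joint would add a third angle to this chain, enlarging the reachable workspace but also enlarging the state that the controller and any inverse-kinematics solver must handle, which is exactly the complexity trade-off noted above.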
An additional constraint is that the system must be able to operate continuously in its working environment, whether an open agricultural field or a laboratory test setting. Outdoor robots are subjected to more demanding operational circumstances: rain, wind, dust, and extreme temperature variations may necessitate protection for actuators, sensors, and control systems. In laboratory testing, most operating tools are readily available, but field operation requires additional generators, power sources, and sufficient lighting.

5. Conclusions

The development of fruit-harvesting robotic arms presents a multifaceted set of challenges across three critical domains. In terms of vision challenges, accurately detecting, locating, and assessing the ripeness of fruits in complex and dynamically changing environments remains difficult; overcoming this necessitates advanced sensor fusion techniques, robust computer vision algorithms, and real-time data processing capabilities. In manipulator design, accommodating the diverse range of fruit sizes, shapes, and locations demands an adaptable kinematic structure coupled with precise control mechanisms, and ensuring gentle yet effective fruit handling necessitates innovations in compliance and actuation systems. Lastly, end-effector challenges require an intricate balance of delicacy and adaptability, calling for specialized materials, tactile sensors, and advanced computer vision algorithms. Addressing these challenges will be pivotal in realizing the full potential of fruit-harvesting robotic arms and revolutionizing agricultural practices for increased efficiency and sustainability. Technological advancement is permeating many fields, such as industry, medicine, education, defense, navigation, and even domestic appliances, making work easier, cheaper, and faster than traditional approaches, and agriculture, too, is being modernized accordingly. For many years, research has been performed in agriculture to facilitate the farmer. Many commercial automated machines have been developed for harvesting crops that ripen and are harvested all at once, such as potato, wheat, sugarcane, and corn. However, robotic arms for harvesting those agricultural products that require repeated picking at different intervals are still under testing.
The reason behind the delay in this effort is the series of challenges that appear in every component of the robotic arm and obstruct its development. The main components of automated robotic arms are the vision system, the manipulator, and the end-effector. The main problems arising in the vision system are the selection of the camera-mounting scenario and the applied algorithm, while manipulator development faces the challenge of selecting the number of DOFs. A system can be called highly efficient when the vision system identifies all the fruit in its field of view, the algorithm calculates their locations, the manipulator reaches every target fruit, the end-effector detaches the fruit from the plant, and the manipulator holds the fruit securely until it is transferred to the fruit container. Previous work shows that the development of a robotic arm is tied to its application: factors such as the working environment, fruit type, nature of the fruit, and cost constraints must be known to the developer. This review has sought to compile all the considerations necessary for the development of such a robotic arm.

6. Future Works

Future advancements in the mechanical design domain of fruit-harvesting robotic arms will prioritize structural optimization for a balance of strength and weight (i.e., low weight and high strength). Innovations in articulation, linkage mechanisms, and kinematics will enhance maneuverability in complex orchard layouts. Integration of torque and force sensors, along with redundancy and safety features, will ensure precise and gentle fruit handling. Sealing mechanisms and protective enclosures can protect components from environmental factors, while vibration control techniques can minimize oscillations. Ergonomic considerations will streamline operation and maintenance, and advanced safety features will prevent accidents. Future strides in the vision systems of fruit-harvesting robotic arms will usher in a new era of precision. Advanced 3D cameras and machine vision technologies will be deployed to swiftly identify ripe fruits, assess their size, and determine their position and orientation. Additionally, multispectral imaging may play a pivotal role in distinguishing fruit from foliage, further enhancing accuracy, and integration of AI-powered algorithms for real-time decision-making will optimize picking trajectories. As for end-effectors, the focus will shift towards adaptable, multi-modal designs. Grippers equipped with compliant materials and adjustable features will gently yet securely grasp fruits of varying shapes and sizes, and modular end-effector attachments will enable seamless transitions between different fruit types, ensuring versatility in harvesting operations. These advancements in vision systems and end-effectors promise to revolutionize fruit harvesting with unprecedented speed and precision.

Author Contributions

Conceptualization, A.K., M.A. and S.H.; methodology, A.K., S.H., M.J.M.C. and S.R.S.; investigation, A.K., S.H., S.R.S. and U.F.; resources, M.A., S.H. and M.J.M.C.; data curation, A.K., M.A. and U.F.; writing—original draft preparation, A.K. and S.H.; writing—review and editing, M.A., S.H., S.R.S. and U.F.; visualization, A.K. and S.H.; supervision, M.A., S.H., M.J.M.C. and S.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This study is part of a pilot project for the Establishment of a National Center of Industrial Biotechnology for Pilot Manufacturing of Bio-products Using Synthetic Biology and Metabolic Engineering Technologies (ENCIB) PSDP-funded project No. 321.

Acknowledgments

The authors would like to acknowledge all funding agencies and the scientific and support staff of the NCIB project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kanade, P.; Alva, P.; Kanade, S.; Ghatwal, S. Automated Robot ARM using Ultrasonic Sensor in Assembly Line. Int. Res. J. Eng. Technol. 2020, 7, 615–620. [Google Scholar]
  2. Racu, C.M.; Doroftei, I.; Plesu, G.; Doroftei, I.A. Simulation of an ankle rehabilitation system based on scotch-yoke mechanism. IOP Conf. Ser. Mater. Sci. Eng. 2016, 147, 012084. [Google Scholar] [CrossRef]
  3. Fountas, S.; Mylonas, N.; Malounas, I.; Rodias, E.; Santos, C.H.; Pekkeriet, E. Agricultural robotics for field operations. Sensors 2020, 20, 2672. [Google Scholar] [CrossRef]
  4. Ghute, M.S. Design of Military Surveillance Robot. In Proceedings of the 2018 First International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, India, 15–17 December 2018; pp. 270–272. [Google Scholar] [CrossRef]
  5. Kumar, P.; Ashok, G. Design and fabrication of smart seed sowing robot. Mater. Today Proc. 2020, 39, 354–358. [Google Scholar] [CrossRef]
  6. Blasco, J.; Aleixos, N.; Roger, J.M.; Rabatel, G.; Moltó, E. Robotic weed control using machine vision. Biosyst. Eng. 2002, 83, 149–157. [Google Scholar] [CrossRef]
  7. Wang, Y.; Xie, L.; Wang, H.; Zeng, W.; Ding, Y.; Hu, T.; Zheng, T.; Liao, H.; Hu, J. Intelligent spraying robot for building walls with mobility and perception. Autom. Constr. 2022, 139, 104270. [Google Scholar] [CrossRef]
  8. Chaudhury, A.; Ward, C.; Talasaz, A.; Ivanov, A.G.; Huner, N.P.; Grodzinski, B.; Patel, R.V.; Barron, J.L. Computer Vision Based Autonomous Robotic System for 3D Plant Growth Measurement. In Proceedings of the 2015 12th Conference on Computer and Robot Vision, Halifax, NS, Canada, 3–5 June 2015; pp. 290–296. [Google Scholar] [CrossRef]
  9. Chen, G.; Muriki, H.; Sharkey, A.; Pradalier, C.; Chen, Y.; Dellaert, F. A Hybrid Cable-Driven Robot for Non-Destructive Leafy Plant Monitoring and Mass Estimation using Structure from Motion. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023. [Google Scholar]
  10. Concepcion, R., II; Lauguico, S.; Valenzuela, I.; Bandala, A.; Dadios, E. Denavit-Hartenberg-based Analytic Kinematics and Modeling of 6R Degrees of Freedom Robotic Arm for Smart Farming. J. Comput. Innov. Eng. Appl. 2021, 5, 1–7. [Google Scholar]
  11. Lauguico, S.C.; Concepcion, R.S.; MacAsaet, D.D.; Alejandrino, J.D.; Bandala, A.A.; Dadios, E.P. Implementation of Inverse Kinematics for Crop-Harvesting Robotic Arm in Vertical Farming. In Proceedings of the 2019 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Bangkok, Thailand, 18–20 November 2019; pp. 298–303. [Google Scholar] [CrossRef]
  12. Peters, C.J.; Bills, N.L.; Wilkins, J.L.; Fick, G.W. Foodshed analysis and its relevance to sustainability. Renew. Agric. Food Syst. 2009, 24, 1–7. [Google Scholar] [CrossRef]
  13. Davis, K.F.; Gephart, J.A.; Emery, K.A.; Leach, A.M.; Galloway, J.N.; D’Odorico, P. Meeting future food demand with current agricultural resources. Glob. Environ. Change 2016, 39, 125–132. [Google Scholar] [CrossRef]
  14. Chand, R.; Srivastava, S.K. Changes in the rural labour market and their implications for agriculture. Econ. Polit. Wkly. 2014, 49, 47–54. [Google Scholar]
  15. Mutekwa, V.T. Climate change impacts and adaptation in the agricultural sector: The case of smallholder farmers in Zimbabwe. J. Sustain. Dev. Afr. 2009, 11, 237–256. [Google Scholar]
  16. Siche, R. What is the impact of COVID-19 disease on agriculture? Sci. Agropecu. 2020, 11, 3–6. [Google Scholar] [CrossRef]
  17. Martini, M.; Gazzaniga, V.; Bragazzi, N.L.; Barberis, I. The Spanish Influenza Pandemic: A lesson from history 100 years after 1918. J. Prev. Med. Hyg. 2019, 60, E64–E67. [Google Scholar] [CrossRef] [PubMed]
  18. Corley, A.; Hammond, N.E.; Fraser, J.F. The experiences of health care workers employed in an Australian intensive care unit during the H1N1 Influenza pandemic of 2009: A phenomenological study. Int. J. Nurs. Stud. 2010, 47, 577–585. [Google Scholar] [CrossRef]
  19. Piot, P.; Bartos, M.; Ghys, P.D.; Walker, N.; Schwartlander, B. The global impact of HIV/AIDS. Nature 2001, 410, 968–973. [Google Scholar] [CrossRef]
  20. Chandler, A. Fruit and Veg Risk Rotting in Australia on Second COVID-19 Wave. Available online: https://www.bnnbloomberg.ca/fruit-and-veg-risk-rotting-in-australia-on-second-COVID-19-wave-1.1469292 (accessed on 22 October 2023).
  21. Jain, R.; Meena, M.L.; Dangayach, G.S.; Bhardwaj, A.K. Risk factors for musculoskeletal disorders in manual harvesting farmers of Rajasthan. Ind. Health 2018, 56, 241–248. [Google Scholar] [CrossRef]
  22. Singerman, A.; Burani-Arouca, M.; Futch, S.H. Harvesting Charges for Florida Citrus: Picking, Roadsiding, and Hauling, 2015/16. EDIS 2017, 2017, 1–5. [Google Scholar] [CrossRef]
  23. Ampatzidis, Y.G.; Vougioukas, S.G.; Whiting, M.D.; Zhang, Q. Applying the machine repair model to improve efficiency of harvesting fruit. Biosyst. Eng. 2014, 120, 25–33. [Google Scholar] [CrossRef]
  24. Leu, A.; Razavi, M.; Langst, L.; Schenck, C.; Gr, A. Robotic Green Asparagus Selective Harvesting. Mechatronics 2017, 22, 2401–2410. [Google Scholar] [CrossRef]
  25. Raffo, A.; Baiamonte, I.; Nardo, N.; Nicoli, S.; Moneta, E.; Peparaio, M.; Sinesio, F.; Paoletti, F. Impact of early harvesting and two cold storage technologies on eating quality of red ripe tomatoes. Eur. Food Res. Technol. 2018, 244, 805–818. [Google Scholar] [CrossRef]
  26. Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng. 2016, 149, 94–111. [Google Scholar] [CrossRef]
  27. Ceres, R.; Pons, J.L.; Jiménez, A.R.; Martín, J.M.; Calderón, L. Design and implementation of an aided fruit-harvesting robot (Agribot). Ind. Rob. 1998, 25, 337–346. [Google Scholar] [CrossRef]
  28. Kondo, N.; Monta, M.; Fujiura, T. Fruit harvesting robots in Japan. Adv. Space Res. 1996, 18, 181–184. [Google Scholar] [CrossRef]
  29. Edan, Y. Design of an autonomous agricultural robot. Appl. Intell. 1995, 51, 41–50. [Google Scholar] [CrossRef]
  30. Plebe, A.; Grasso, G. Localization of spherical fruits for robotic harvesting. Mach. Vis. Appl. 2001, 13, 70–79. [Google Scholar] [CrossRef]
  31. Yoshikawa, T. Manipulability and redundancy control of robotic mechanisms. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; pp. 1004–1009. [Google Scholar] [CrossRef]
  32. Allotta, B.; Buttazzo, G.; Dario, P.; Quaglia, F.; Levi, P. A force/torque sensor-based technique for robot harvesting of fruits and vegetables. In Proceedings of the IEEE International Workshop on Intelligent Robots and Systems, Towards a New Frontier of Applications, Ibaraki, Japan, 3–6 July 1990; pp. 231–235. [Google Scholar] [CrossRef]
  33. Friedrich, W.E.; Lim, P.K. Smart End-effector Sensing for Variable Object Handling. In Field and Service Robotics; Springer: Berlin/Heidelberg, Germany, 1998; pp. 447–450. [Google Scholar] [CrossRef]
  34. Jia, B.; Zhu, A.; Yang, S.X.; Mittal, G.S. Integrated gripper and cutter in a mobile robotic system for harvesting greenhouse products. In Proceedings of the 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO), Guilin, China, 19–23 December 2009; pp. 1778–1783. [Google Scholar] [CrossRef]
  35. Baeten, J.; Donné, K.; Boedrij, S.; Beckers, W.; Claesen, E. Autonomous fruit picking machine: A robotic apple harvester. In Field and Service Robotics; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 42, pp. 531–539. [Google Scholar] [CrossRef]
  36. Hayashi, S.; Shigematsu, K.; Yamamoto, S.; Kobayashi, K.; Kohno, Y.; Kamata, J.; Kurita, M. Evaluation of a strawberry-harvesting robot in a field test. Biosyst. Eng. 2010, 105, 160–171. [Google Scholar] [CrossRef]
  37. Paradkar, V.; Raheman, H.; Rahul, K. Development of a metering mechanism with serial robotic arm for handling paper pot seedlings in a vegetable transplanter. Artif. Intell. Agric. 2021, 5, 52–63. [Google Scholar] [CrossRef]
  38. Kultongkham, A.; Kumnon, S.; Thintawornkul, T.; Chanthsopeephan, T. The design of a force feedback soft gripper for tomato harvesting. J. Agric. Eng. 2021, 52, 1090. [Google Scholar] [CrossRef]
  39. Chen, J.I.-Z.; Chang, J.-T. Applying a 6-axis Mechanical Arm Combine with Computer Vision to the Research of Object Recognition in Plane Inspection. J. Artif. Intell. Capsul. Netw. 2020, 2, 77–99. [Google Scholar] [CrossRef]
  40. Bu, L.; Chen, C.; Hu, G.; Sugirbay, A.; Sun, H.; Chen, J. Design and evaluation of a robotic apple harvester using optimized picking patterns. Comput. Electron. Agric. 2022, 198, 107092. [Google Scholar] [CrossRef]
  41. Williams, H.A.; Jones, M.H.; Nejati, M.; Seabright, M.J.; Bell, J.; Penhall, N.D.; Barnett, J.J.; Duke, M.D.; Scarfe, A.J.; Ahn, H.S.; et al. Robotic kiwifruit harvesting using machine vision, convolutional neural networks, and robotic arms. Biosyst. Eng. 2019, 181, 140–156. Available online: https://www.sciencedirect.com/science/article/pii/S153751101830638X (accessed on 5 October 2023). [CrossRef]
  42. Kouritem, S.A.; Abouheaf, M.I.; Nahas, N.; Hassan, M. A multi-objective optimization design of industrial robot arms. Alex. Eng. J. 2022, 61, 12847–12867. Available online: https://www.sciencedirect.com/science/article/pii/S1110016822004355 (accessed on 5 October 2023). [CrossRef]
  43. Arikapudi, R.; Vougioukas, S.G. Robotic Tree-fruit harvesting with arrays of Cartesian Arms: A study of fruit pick cycle times. Comput. Electron. Agric. 2023, 211, 108023. Available online: https://www.sciencedirect.com/science/article/pii/S0168169923004118 (accessed on 5 October 2023). [CrossRef]
  44. Sarig, Y. Robotics of Fruit Harvesting: A State-of-the-art Review. J. Agric. Eng. Res. 1993, 54, 265–280. [Google Scholar] [CrossRef]
  45. Van Kollenburg-Crisan, L.M.; Bontsema, J.; Wennekes, P. Mechatronic System for Automatic Harvesting of Cucumbers. IFAC Proc. Vol. 1998, 31, 289–293. [Google Scholar] [CrossRef]
  46. Jiménez, A.R.; Ceres, R.; Pons, J.L. A Survey of Computer Vision Methods for Locating Fruit on Trees. Trans. ASAE 2000, 43, 1911–1920. [Google Scholar] [CrossRef]
  47. Edan, Y.; Rogozin, D.; Flash, T.; Miles, G.E. Robotic melon harvesting. IEEE Trans. Robot. Autom. 2000, 16, 831–835. [Google Scholar] [CrossRef]
  48. Harrell, R.C.; Slaughter, D.C.; Adsit, P.D. A fruit-tracking system for robotic harvesting. Mach. Vis. Appl. 1989, 2, 69–80. [Google Scholar] [CrossRef]
  49. Slaughter, D.C.; Harrell, R.C. Color Vision in Robotic Fruit Harvesting. Trans. ASAE 1987, 30, 1144–1148. [Google Scholar] [CrossRef]
  50. Harrell, R.C.; Adsit, P.D.; Pool, T.A.; Hoffman, R. The Florida Robotic Grove-Lab. Trans. ASAE 1990, 33, 391–399. [Google Scholar] [CrossRef]
  51. Guo, J.; Zhao, D.A.; Ji, W.; Xia, W. Design and control of the open apple-picking-robot manipulator. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010; pp. 5–8. [Google Scholar] [CrossRef]
  52. Tanigaki, K.; Fujiura, T.; Akase, A.; Imagawa, J. Cherry-harvesting robot. Comput. Electron. Agric. 2008, 63, 65–72. [Google Scholar] [CrossRef]
  53. Scimeca, L.; Maiolino, P.; Cardin-Catalan, D.; Pobil, A.P.D.; Morales, A.; Iida, F. Non-destructive robotic assessment of mango ripeness via multi-point soft haptics. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 1821–1826. [Google Scholar] [CrossRef]
  54. Neupane, C.; Koirala, A.; Wang, Z.; Walsh, K.B. Evaluation of Depth Cameras for Use in Fruit Localization and Sizing: Finding a Successor to Kinect v2. Agronomy 2021, 11, 1780. [Google Scholar] [CrossRef]
  55. Ge, Y.; Xiong, Y.; Tenorio, G.L.; From, P.J. Fruit Localization and Environment Perception for Strawberry Harvesting Robots. IEEE Access 2019, 7, 147642–147652. [Google Scholar] [CrossRef]
  56. Sarabu, H.; Ahlin, K.; Hu, A.P. Leveraging Deep Learning and RGB-D Cameras for Cooperative Apple-Picking Robot Arms. In Proceedings of the 2019 ASABE Annual International Meeting, Boston, MA, USA, 7–10 July 2019; p. 1. [Google Scholar] [CrossRef]
  57. Tang, Y.; Chen, M.; Wang, C.; Luo, L.; Li, J.; Lian, G.; Zou, X. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. Front. Plant Sci. 2020, 11, 520170. [Google Scholar] [CrossRef]
  58. Onishi, Y.; Yoshida, T.; Kurita, H.; Fukao, T.; Arihara, H.; Iwai, A. An automated fruit harvesting robot by using deep learning. ROBOMECH J. 2019, 6, 13. [Google Scholar] [CrossRef]
  59. Sun, T.; Zhang, W.; Miao, Z.; Zhang, Z.; Li, N. Object localization methodology in occluded agricultural environments through deep learning and active sensing. Comput. Electron. Agric. 2023, 212, 108141. [Google Scholar] [CrossRef]
  60. Li, T.; Fang, W.; Zhao, G.; Gao, F.; Wu, Z.; Li, R.; Fu, L.; Dhupia, J. An improved binocular localization method for apple based on fruit detection using deep learning. Inf. Process. Agric. 2023, 10, 276–287. [Google Scholar] [CrossRef]
  61. Yu, Y.; Zhang, K.; Yang, L.; Zhang, D. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846. [Google Scholar] [CrossRef]
  62. Huang, X.R.; Chen, W.H.; Hu, W.C.; Chen, L.B. An AI Edge Computing-Based Robotic Arm Automated Guided Vehicle System for Harvesting Pitaya. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 7–9 January 2022. [Google Scholar] [CrossRef]
  63. Geer, L.; Gu, D.; Wang, F.; Mohan, V.; Dowling, R. Novel Software Architecture for an Autonomous Agricultural Robotic Fruit Harvesting System. In Proceedings of the 2022 27th International Conference on Automation and Computing (ICAC), Bristol, UK, 1–3 September 2022. [Google Scholar] [CrossRef]
  64. Vrochidou, E.; Tziridis, K.; Nikolaou, A.; Kalampokas, T.; Papakostas, G.A.; Pachidis, T.P.; Mamalis, S.; Koundouras, S.; Kaburlasos, V.G. An Autonomous Grape-Harvester Robot: Integrated System Architecture. Electronics 2021, 10, 1056. [Google Scholar] [CrossRef]
  65. Huang, G.S.; Tung, C.K.; Lin, H.C.; Hsiao, S.H. Inverse kinematics analysis trajectory planning for a robot arm. In Proceedings of the 2011 8th Asian Control Conference (ASCC), Kaohsiung, Taiwan, 15–18 May 2011; pp. 965–970. [Google Scholar]
  66. Font, D.; Pallejà, T.; Tresanchez, M.; Runcan, D.; Moreno, J.; Martínez, D.; Teixidó, M.; Palacín, J. A proposal for automatic fruit harvesting by combining a low cost stereovision camera and a robotic arm. Sensors 2014, 14, 11557–11579. [Google Scholar] [CrossRef]
  67. Silwal, A.; Davidson, J.R.; Karkee, M.; Mo, C.; Zhang, Q.; Lewis, K. Design, integration, and field evaluation of a robotic apple harvester. J. Field Robot. 2017, 34, 1140–1159. [Google Scholar] [CrossRef]
  68. Ivanova, N.; Gugleva, V.; Dobreva, M.; Pehlivanov, I.; Stefanov, S.; Andonova, V. IntechOpen: London, UK, 2016; p. 13. [Google Scholar]
  69. Kahya, E.; Arın, S. Design of a Robotic Pneumatic Pruner for Robotic Apple Harvesting. Turk. Eur. J. Eng. Nat. Sci. 2019, 3, 11–17. [Google Scholar]
  70. Zhang, T.; Huang, Z.; You, W.; Lin, J.; Tang, X.; Huang, H. An autonomous fruit and vegetable harvester with a low-cost gripper using a 3D sensor. Sensors 2020, 20, 93. [Google Scholar] [CrossRef]
  71. Kang, H.; Zhou, H.; Wang, X.; Chen, C. Real-time fruit recognition and grasping estimation for robotic apple harvesting. Sensors 2020, 20, 5670. [Google Scholar] [CrossRef]
  72. Feng, J.; Liu, G.; Si, Y.; Wang, S.; Zhou, W. Construction of laser vision system for apple harvesting robot. Nongye Gongcheng Xuebao/Trans. Chin. Soc. Agric. Eng. 2013, 29 (Suppl. S1), 32–37. [Google Scholar]
  73. Arad, B.; Balendonck, J.; Barth, R.; Ben-Shahar, O.; Edan, Y.; Hellström, T.; Hemming, J.; Kurtser, P.; Ringdahl, O.; Tielen, T.; et al. Development of a sweet pepper harvesting robot. J. Field Robot. 2020, 37, 1027–1039. [Google Scholar] [CrossRef]
  74. Bao, G.; Zhang, S.; Chen, L.; Yang, Q. Design of spherical fruit end-grasper based on FPA. Trans. Chin. Soc. Agric. 2013, 44, 242–246. [Google Scholar]
  75. Masood, M.U.; Haghshenas-Jaryani, M. A study on the feasibility of robotic harvesting for chile pepper. Robotics 2021, 10, 94. [Google Scholar] [CrossRef]
  76. Lescure, M. A Scanning Range Finder Using the Self-Mixing Effect inside a Laser Diode for 3-D Vision. In Proceedings of the Quality Measurement: The Indispensable Bridge between Theory and Reality (No Measurements? No Science!) Joint Conference—1996: IEEE Instrumentation and Measurement Technology Conference and IMEKO Tec, Brussels, Belgium, 4–6 June 1996; IEEE: Piscataway, NJ, USA, 1996; pp. 226–231. [Google Scholar]
  77. Zhang, Z.Y.; He, D.J.; Zhang, J.F. Research on controlled system of apple picking robot arm. J. Chin. Agric. Univ. 2008, 13, 78–82. [Google Scholar]
  78. Reed, J.N.; Miles, S.J.; Butler, J.; Baldwin, M.; Noble, R. Automatic Mushroom Harvester Development. J. Agric. Eng. Res. 2001, 78, 15–23. [Google Scholar] [CrossRef]
  79. De-An, Z.; Jidong, L.; Wei, J.; Ying, Z.; Yu, C. Design and control of an apple harvesting robot. Biosyst. Eng. 2011, 110, 112–122. [Google Scholar] [CrossRef]
  80. Almendral, K.A.M.; Babaran, R.M.G.; Carzon, B.J.C.; Cu, K.P.K.; Lalanto, J.M.; Abad, A.C. Autonomous Fruit Harvester with Machine Vision. J. Telecommun. Electron. Comput. Eng. 2018, 10, 79–86. [Google Scholar]
  81. Gu, B.; Ji, C.; Tian, G.; Zhang, G.; Wang, L. Design and experiment of intelligent mobile fruit picking robot. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2012, 43, 153–160. [Google Scholar] [CrossRef]
  82. Davidson, J.; Bhusal, S.; Mo, C.; Karkee, M.; Zhang, Q. Robotic manipulation for specialty crop harvesting: A review of manipulator and end-effector technologies. Glob. J. Agric. Allied Sci. 2020, 2, 25–41. [Google Scholar] [CrossRef]
  83. Zahedi, F.; Lee, H. Human arm stability in relation to damping-defined mechanical environments in physical interaction with a robotic arm. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; Available online: https://ieeexplore.ieee.org/abstract/document/9561794/ (accessed on 5 October 2023).
  84. Huertas, V.; Rohaľ-Ilkiv, B.; Krasňanský, P.; Tóth, F. Basic laboratory experiments with an educational robotic arm. In Proceedings of the 2013 International Conference on Process Control (PC), Štrbské Pleso, Slovakia, 18–21 June 2013. [Google Scholar] [CrossRef]
  85. Haibin, Y.; Cheng, K.; Junfeng, L.; Guilin, Y. Modeling of grasping force for a soft robotic gripper with variable stiffness. Mech. Mach. Theory 2018, 128, 254–274. Available online: https://www.sciencedirect.com/science/article/pii/S0094114X17314957 (accessed on 5 October 2023). [CrossRef]
  86. Megalingam, R.K.; Vivek, G.V.; Bandyopadhyay, S.; Rahi, M.J. Robotic arm design, development and control for agriculture applications. In Proceedings of the 2017 4th International Conference on Advanced Computing and Communication Systems, Coimbatore, India, 6–7 January 2017. [Google Scholar] [CrossRef]
  87. Tu, S.; Pang, J.; Liu, H.; Zhuang, N.; Chen, Y.; Zheng, C.; Wan, H.; Xue, Y. Passion fruit detection and counting based on multiple scale faster R-CNN using RGB-D images. Precis. Agric. 2020, 21, 1072–1091. [Google Scholar] [CrossRef]
  88. Borianne, P.; Borne, F.; Sarron, J.; Faye, E. Deep Mangoes: From fruit detection to cultivar identification in colour images of mango trees. arXiv 2019, arXiv:1909.10939. [Google Scholar]
  89. Lin, G.; Tang, Y.; Zou, X.; Cheng, J.; Xiong, J. Fruit detection in natural environment using partial shape matching and probabilistic Hough transform. Precis. Agric. 2020, 21, 160–177. [Google Scholar] [CrossRef]
  90. Luo, W.; Sycara, K. Adaptive sampling and online learning in multi-robot sensor coverage with mixture of gaussian processes. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; Available online: https://ieeexplore.ieee.org/abstract/document/8460473/ (accessed on 5 October 2023).
  91. Fernandez, R.; Montes, H.; Surdilovic, J.; Surdilovic, D.; Gonzalez-De-Santos, P.; Armada, M. Automatic detection of field-grown cucumbers for robotic harvesting. IEEE Access 2018, 6, 35512–35527. Available online: https://ieeexplore.ieee.org/abstract/document/8399731/ (accessed on 5 October 2023). [CrossRef]
  92. Kim, E.-S.; Park, S.-Y. Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes. Sensors 2019, 20, 52. [Google Scholar] [CrossRef]
  93. Raju, V.B.; Sazonov, E. FOODCAM: A Novel Structured Light-Stereo Imaging System for Food Portion Size Estimation. Sensors 2022, 22, 3300. [Google Scholar] [CrossRef]
  94. Mendonck, M.; Hern, M.G. Ultrasonic Propagation in Liquid and Ice Water Drops. Effect of Porosity. Sensors 2021, 21, 4790. [Google Scholar] [CrossRef] [PubMed]
  95. Kim, H.; Jeon, C.; Kim, K.; Seo, J. Uncertainty Assessment of Wave Elevation Field Measurement Using a Depth Camera. J. Mar. Sci. Eng. 2023, 11, 657. [Google Scholar] [CrossRef]
  96. Grunnet-Jepsen, A.; Tong, D. Depth Post-Processing for Intel® RealSense™ D400 Depth Cameras. Available online: https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/Intel-RealSense-Depth-PostProcess.pdf (accessed on 5 October 2023).
  97. Goldenberg, A.; Benhabib, B.; Fenton, R. A complete generalized solution to the inverse kinematics of robots. IEEE J. Robot. Autom. 1985, 1, 14–20. [Google Scholar] [CrossRef]
  98. Almusawi, A.R.J.; Dülger, L.C.; Kapucu, S. A new artificial neural network approach in solving inverse kinematics of robotic arm (Denso VP6242). Comput. Intell. Neurosci. 2016, 2016, 5720163. [Google Scholar] [CrossRef]
  99. Billiot, B.; Cointault, F.; Journaux, L.; Simon, J.-C.; Gouton, P. 3D image acquisition system based on shape from focus technique. Sensors 2013, 13, 5040–5053. [Google Scholar] [CrossRef]
  100. Mustafah, Y.M.; Noor, R.; Hasbi, H.; Azma, A.W. Stereo vision images processing for real-time object distance and size measurements. In Proceedings of the 2012 International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, 3–5 July 2012; Available online: https://ieeexplore.ieee.org/abstract/document/6271270/ (accessed on 5 October 2023).
  101. Li, J.; Cui, S.; Zhang, C.; Chen, H. Research on localization of apples based on binocular stereo vision marked by cancroids matching. In Proceedings of the 2012 Third International Conference on Digital Manufacturing & Automation, Guilin, China, 31 July–2 August 2012; Available online: https://ieeexplore.ieee.org/abstract/document/6298609/ (accessed on 5 October 2023).
  102. Feng, Q.; Cheng, W.; Zhou, J.; Wang, X. Design of structured-light vision system for tomato harvesting robot. Int. J. Agric. Biol. Eng. 2014, 7, 19–26. [Google Scholar] [CrossRef]
  103. Zou, X.; Zou, H.; Lu, J. Virtual manipulator-based binocular stereo vision positioning system and errors modelling. Mach. Vis. Appl. 2010, 23, 43–63. [Google Scholar] [CrossRef]
  104. Mehta, S.; Burks, T. Vision-based control of robotic manipulator for citrus harvesting. Comput. Electron. Agric. 2014, 102, 146–158. Available online: https://www.sciencedirect.com/science/article/pii/S0168169914000052 (accessed on 5 October 2023). [CrossRef]
  105. Irie, N.; Taguchi, N.; Horie, T.; Ishimatsu, T. Asparagus harvesting robot coordinated with 3-D vision sensor. In Proceedings of the 2009 IEEE International Conference on Industrial Technology, Churchill, Australia, 10–13 February 2009; Available online: https://ieeexplore.ieee.org/abstract/document/4939556/ (accessed on 5 October 2023).
  106. Bulanon, D.M.; Kataoka, T. Fruit detection system and an end effector for robotic harvesting of Fuji apples. Agric. Eng. Int. CIGR J. 2010, 12, 203–210. Available online: http://cigrjournal.org/index.php/Ejounral/article/view/1285 (accessed on 5 October 2023).
  107. Setiawan, A.I.; Furukawa, T.; Preston, A. A low-cost gripper for an apple picking robot. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Available online: https://ieeexplore.ieee.org/abstract/document/1302418/ (accessed on 5 October 2023).
  108. Wu, D.; Ding, D.; Cui, B.; Jiang, S.; Zhao, E.; Liu, Y.; Cao, C. Design and experiment of vibration plate type camellia fruit picking machine. Int. J. Agric. Biol. Eng. 2022, 15, 130–138. [Google Scholar] [CrossRef]
  109. Dimeas, F.; Sako, D.V.; Moulianitis, V.C.; Aspragathos, N.A. Design and fuzzy control of a robotic gripper for efficient strawberry harvesting. Robotica 2014, 33, 1085–1098. [Google Scholar] [CrossRef]
  110. Benavides, M.; Cantón-Garbín, M.; Sánchez-Molina, J.A.; Rodríguez, F. Automatic tomato and peduncle location system based on computer vision for use in robotized harvesting. Appl. Sci. 2020, 10, 5887. [Google Scholar] [CrossRef]
  111. Kounalakis, N.; Kalykakis, E.; Pettas, M.; Makris, A.; Kavoussanos, M.M.; Sfakiotakis, M.; Fasoulas, J. Development of a Tomato Harvesting Robot: Peduncle Recognition and Approaching. In Proceedings of the 2021 3rd International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey, 11–13 June 2021; Available online: https://ieeexplore.ieee.org/abstract/document/9461281/ (accessed on 5 October 2023).
  112. van Herck, L.; Kurtser, P.; Wittemans, L.; Edan, Y. Crop design for improved robotic harvesting: A case study of sweet pepper harvesting. Biosyst. Eng. 2020, 192, 294–308. Available online: https://www.sciencedirect.com/science/article/pii/S1537511020300337 (accessed on 5 October 2023). [CrossRef]
  113. Chen, Y.; Gunderman, A.; Collins, J.A. Soft Robotic Gripper for Berry Harvesting. U.S. Patent Application 17/525, 12 May 2022. Available online: https://patents.google.com/patent/US20220142050A1/en (accessed on 5 October 2023).
  114. Lü, Q.; Cai, J.; Liu, B.; Deng, L.; Zhang, Y. Identification of fruit and branch in natural scenes for citrus harvesting robot using machine vision and support vector machine. Int. J. Agric. Biol. Eng. 2014, 7, 115–121. [Google Scholar] [CrossRef]
  115. Ghazal, S.; Qureshi, W.S.; Khan, U.S.; Iqbal, J.; Rashid, N.; Tiwana, M.I. Analysis of visual features and classifiers for Fruit classification problem. Comput. Electron. Agric. 2021, 187, 106267. Available online: https://www.sciencedirect.com/science/article/pii/S0168169921002842 (accessed on 5 October 2023). [CrossRef]
  116. Choudhary, P.; Khandekar, R.; Borkar, A.; Chotaliya, P. Image processing algorithm for fruit identification. Int. Res. J. Eng. Technol. 2017, 4, 2741–2743. Available online: https://www.academia.edu/download/53539706/IRJET-V4I3691.pdf (accessed on 5 October 2023).
  117. Payne, A.; Walsh, K.; Subedi, P.; Jarvis, D. Estimating mango crop yield using image analysis using fruit at ’stone hardening’ stage and night time imaging. Comput. Electron. Agric. 2014, 100, 160–167. Available online: https://www.sciencedirect.com/science/article/pii/S0168169913002810 (accessed on 5 October 2023). [CrossRef]
  118. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion: A simple and practical alternative to high dynamic range photography. In Computer Graphics Forum; Blackwell Publishing Ltd.: Oxford, UK, 2009; Volume 28, pp. 161–171. [Google Scholar] [CrossRef]
  119. Barnea, E.; Mairon, R.; Ben-Shahar, O. Colour-agnostic shape-based 3D fruit detection for crop harvesting robots. Biosyst. Eng. 2016, 146, 57–70. Available online: https://www.sciencedirect.com/science/article/pii/S1537511016000131 (accessed on 5 October 2023). [CrossRef]
  120. Perks, A. Advanced vision guided robotics provide ‘future-proof’ flexible automation. Assem. Autom. 2006, 26, 216–220. [Google Scholar] [CrossRef]
  121. Copot, C.; Shi, L.; Vanlanduit, S. Automatic tuning methodology of visual servoing system using predictive approach. In Proceedings of the 2019 IEEE 15th International Conference on Control and Automation (ICCA), Edinburgh, UK, 16–19 July 2019; Available online: https://ieeexplore.ieee.org/abstract/document/8899522/ (accessed on 5 October 2023).
  122. Nakamura, T. Real-time 3-D object tracking using Kinect sensor. In Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, Thailand, 7–11 December 2011; Available online: https://ieeexplore.ieee.org/abstract/document/6181382/ (accessed on 5 October 2023).
  123. Sola-Guirado, R.R.; Castro-Garcia, S.; Blanco-Roldán, G.L.; Gil-Ribes, J.A.; González-Sánchez, E.J. Performance evaluation of lateral canopy shakers with catch frame for continuous harvesting of oranges for juice industry. Int. J. Agric. Biol. Eng. 2020, 13, 88–93. [Google Scholar] [CrossRef]
  124. Zhou, H.; Wang, X.; Au, W.; Kang, H.; Chen, C. Intelligent robots for fruit harvesting: Recent developments and future challenges. Precis. Agric. 2022, 23, 1856–1907. [Google Scholar] [CrossRef]
  125. Huang, Z.; Gomez, A.; Bird, R.; Kalsi, A.; Jansen, C.; Liu, Z.; Miyauchi, G.; Parsons, S.; Sklar, E. Understanding human responses to errors in a collaborative human-robot selective harvesting task. In Proceeding of the 3rd UK-RAS Conference for PhD Students & Early Career Researchers, Virtual, 20 April 2020. [Google Scholar] [CrossRef]
  126. Ayomide, O.B.; Ajayi, O.O.; Ajayi, A.A.; Zhang, Q.; Zuo, J.; Yu, T. Design and research on the End Actuator of Tomato Picking Robot. J. Phys. Conf. Ser. 2019, 1314, 012112. [Google Scholar] [CrossRef]
  127. Han, K.-S.; Kim, S.-C.; Lee, Y.-B.; Kim, S.-C.; Im, D.-H.; Choi, H.-K.; Hwang, H. Strawberry Harvesting Robot for Bench-type Cultivation. J. Biosyst. Eng. 2012, 37, 65–74. [Google Scholar] [CrossRef]
  128. Raza, M.; Islam, U.; Khan, H.; Iqbal, J.; Ul Islam, R. Modeling and analysis of a 6 DOF robotic arm manipulator. Can. J. Electr. Electron. Eng. 2012, 3, 300–306. Available online: https://www.researchgate.net/profile/Jamshed-Iqbal-2/publication/280643085_Modeling_and_analysis_of_a_6_DOF_robotic_arm_manipulator/links/55c0a56b08aed621de13cf59/Modeling-and-analysis-of-a-6-DOF-robotic-arm-manipulator.pdf (accessed on 5 October 2023).
  129. Briot, S.; Khalil, W. Dynamics of Parallel Robots; Springer Science and Business Media LLC: Dordrecht, The Netherlands, 2015; p. 35. [Google Scholar] [CrossRef]
  130. Staicu, S. Dynamics of Parallel Robots; Springer International Publishing: Cham, Switzerland, 2019. [Google Scholar] [CrossRef]
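Several of the references above (e.g., [65,97,98]) address inverse-kinematics solutions for robotic arms. As a minimal illustrative sketch only — not a reproduction of any cited method — the closed-form inverse kinematics of a hypothetical two-link planar arm (link lengths and function names are this sketch's own) can be written as:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm
    (elbow-down branch). Returns joint angles (theta1, theta2) that
    place the end-effector at (x, y), or None if the target lies
    outside the reachable workspace."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # cos(theta2) from the law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None  # target unreachable
    theta2 = math.acos(c2)
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, useful to verify an IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Running the forward kinematics on the returned angles and checking that the original target is recovered is a simple sanity test; higher-DOF arms such as those in [67,79] generally require numerical or learned solvers instead of a closed form.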
Figure 1. Timeline showing significant contributions in fruit-harvesting robotic arms development.
Figure 2. Flowchart of automated fruit picker.
Figure 3. Comparison of a human arm with a robotic arm.
Figure 4. Sensors for robot vision guiding: (A) LiDAR distance measurement sensor [92], (B) stereovision camera [93], (C) ultrasonic sensor [94], and (D) depth camera sensor [95].
Figure 5. Custom-designed end-effectors of fruit pickers: (A) Citrus, (B,C) Sweet Pepper, (D,E) Tomato, (F) Cucumber, (G,H) Strawberry, and (I–K) Apple [68].
Figure 6. Overview of development challenges in every component of a robotic arm [17,18,20,23,26,62,63,64,67,68,69,73,74].
Figure 7. Exposure-fused technique applied to an apple with variable lighting conditions.
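Figure 7 illustrates exposure fusion [118] applied to an apple under variable lighting. A minimal single-scale sketch of the underlying idea is shown below — the published method [118] additionally weights by contrast and saturation and blends with multiresolution pyramids, so the function name and `sigma` value here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def exposure_fuse(images, sigma=0.2):
    """Fuse differently exposed images of the same scene by weighting
    each pixel by its 'well-exposedness' (closeness to mid-gray), a
    simplified single-scale variant of exposure fusion."""
    # Normalize the 8-bit stack to [0, 1].
    stack = np.stack([img.astype(np.float64) / 255.0 for img in images])
    # Gaussian weight centred on 0.5: well-exposed pixels count most.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    # Per-pixel weighted average across exposures.
    fused = (weights * stack).sum(axis=0)
    return (fused * 255).astype(np.uint8)
```

Feeding an under- and an over-exposed frame of the same fruit yields a fused image whose pixel values sit between the two inputs, biased toward whichever exposure is better exposed at each pixel.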
Table 1. Overview of recent developments in automated fruit-harvesting prototypes.

| Ref. | Component/Equipment Used | Software/Algorithm Used | Performance Results |
|------|--------------------------|-------------------------|---------------------|
| [75] | Braccio robotic arm; Arduino Due; Intel RealSense depth camera | MATLAB; Arduino IDE | Localization success rate 37.7%; detachment success rate 65.5%; harvest success rate 24.7%; cycle time 7 s |
| [73] | Six-DOF robotic arm; Fotonic F80 camera; custom-built LED-based illumination; GPU and PLC | C++; Python; ROS Indigo on Ubuntu 14.04 | Harvest success rate 61%; cycle time 24 s |
| [67] | Seven-DOF manipulator; custom end-effector; single CCD color camera; TOF-based 3D camera | MATLAB; C++ | Localization success rate 100%; harvest success rate 84.6%; cycle time 7.6 s |
| [68] | Logitech webcam; supplementary halogen lamps; Adafruit GPS breakout module; Arduino Uno | V-REP; ROS; MATLAB | Localization success rate 94%; cycle time 2 s |
| [65] | Servos (RX-64); cine camera | MATLAB | N/A |
| [70] | Kinect RGB-D camera; six-DOF arm; custom-made gripper | Mask R-CNN; ROS | Fruit detection success 87%; cutting-point detection 71%; harvest success 52% |
| [35] | Pentium IV 2 GHz PC with 1 GB RAM; industrial robot (Panasonic VR006L); touch-panel PC with HMI; custom gripper | Halcon software | Fruit detection success 80%; fruit harvesting success 80%; cycle time 9 s |
| [71] | Universal Robots UR5; customized soft end-effector; Intel D-435 camera; NVIDIA GTX-1070 GPU | ROS; Linux Ubuntu 16.04; RealSense package; ROS MoveIt | DasNet recognition success 91%; Mobile-DasNet recognition success 90% |
| [69] | Custom-made end-effector | C# | Detection success 85%; harvesting success 73% |
| [66] | Minoru 3D USB webcam; 3D-printed robotic arm; STM32F407VGT6 controller | N/A | Distance error 6%; cycle time 16 s |
| [52] | Five-DOF robotic arm; infrared laser diodes; AC servo motors; computer | N/A | Localization success rate 60%; fruit harvesting success 80%; cycle time 14 s |
| [78] | 486 DX 33 MHz computer; fluorescent tube (Philips TLE 23 W/29); Pulnix TM 500 camera | N/A | Localization success rate 70%; fruit harvesting success 76%; cycle time 8 s |
| [80] | Six-DOF robotic manipulator; electric cart; Arduino microcontroller; ZED stereo camera; webcam (A4Tech); ultrasonic sensor (US-100) | Arduino IDE; green-detection algorithm; OpenCV library; Visual Studio | Localization success rate 75%; fruit harvesting success 85%; cycle time 9.41 s |

Kaleem, A.; Hussain, S.; Aqib, M.; Cheema, M.J.M.; Saleem, S.R.; Farooq, U. Development Challenges of Fruit-Harvesting Robotic Arms: A Critical Review. AgriEngineering 2023, 5, 2216-2237. https://doi.org/10.3390/agriengineering5040136