Review

Row Detection-Based Navigation and Guidance for Agricultural Robots and Autonomous Vehicles in Row-Crop Fields: Methods and Applications

1 College of Engineering, Nanjing Agricultural University, Nanjing 210095, China
2 School of Electrical Information Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
3 College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210095, China
* Author to whom correspondence should be addressed.
Agronomy 2023, 13(7), 1780; https://doi.org/10.3390/agronomy13071780
Submission received: 12 June 2023 / Revised: 28 June 2023 / Accepted: 29 June 2023 / Published: 30 June 2023
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

Abstract:
Crop row detection is one of the foundational and pivotal technologies of agricultural robots and autonomous vehicles for navigation, guidance, path planning, and automated farming in row crop fields. However, due to a complex and dynamic agricultural environment, crop row detection remains a challenging task. The surrounding background, such as weeds, trees, and stones, can interfere with crop appearance and increase the difficulty of detection. The detection accuracy of crop rows is also impacted by different growth stages, environmental conditions, curved rows, and occlusion. Therefore, appropriate sensors and multiple adaptable models are required to achieve high-precision crop row detection. This paper presents a comprehensive review of the methods and applications related to crop row detection for agricultural machinery navigation. Particular attention has been paid to the sensors and systems used for crop row detection to improve their perception and detection capabilities. The advantages and disadvantages of current mainstream crop row detection methods, including various traditional methods and deep learning frameworks, are also discussed and summarized. Additionally, the applications of crop row detection for different tasks, including irrigation, harvesting, weeding, and spraying, in various agricultural scenarios, such as drylands, paddy fields, orchards, and greenhouses, are reported.

1. Introduction

The global population and food challenges have called for advances in agricultural science. Integrating advanced technologies such as artificial intelligence, navigation, sensing systems, and communication, modern agricultural equipment can improve agricultural productivity and promote the development of smart agriculture [1]. Autonomous navigation technology is essential in realizing the intellectualization and modernization of agricultural machinery. This technology allows machinery to move with precision and accuracy, perform field tasks efficiently, and monitor crop growth and health. Figure 1 depicts a few examples of field applications, including fertilization robots, irrigation robots, weeding robots, and picking robots. However, the complexity and unstructured nature of the agricultural environment make accurate navigation and the autonomous operation of agricultural machinery challenging. Accurate row detection can effectively promote autonomous navigation and the safe operation of robots and vehicles in agricultural environments [2].
Diversified robotic operations enabled by row detection are of great significance for precision agriculture [3,4]. Based on crop row detection results, the motion controller guides the agricultural robot to operate automatically and safely without damaging the crop by adjusting the forward speed and direction of the front and rear wheels. The row detection-based navigation of agricultural machinery is widely used in different agricultural tasks such as spraying, mowing, irrigation, harvesting, fertilization, and plant protection [5]. The complexity and diversity of the agricultural environment impose varying requirements on crop row detection and the autonomous navigation of agricultural robots. Typical agricultural scenarios include drylands, paddy fields, orchards, and greenhouses. Drylands are usually uneven, and crop growth can be messy; paddy fields have different water depths, and different crops can grow together; orchards have a dense canopy of fruit trees, and other weeds and plants may grow alongside the fruit trees; greenhouses contain a variety of plants that need to be distinguished [6]. In addition, curved crop rows are common in some agricultural fields due to topography. Curved crop rows usually have complex shapes and occur not only in agricultural terraces but also in flat plots with irregular geometry. This makes it difficult to accurately detect and measure target crop rows; traditional line detection algorithms struggle to cope with this situation, which poses challenges for the safe navigation of agricultural robots [7]. Therefore, in order to improve the accuracy of crop row detection, appropriate sensors and computer algorithms need to be selected according to different application scenarios [8].
With the development of computer technology and intelligent equipment, various types of intelligent sensors and systems have been widely used for row detection in agricultural environments to help solve the problem of the autonomous navigation of agricultural robots [9,10]. Vision sensors have been extensively used as a reliable source of information for agricultural robots due to the advantages of a wide measurement range, rich signal information, and low costs [11,12]. Based on the detected crop row images, information on crop row spacing, contours, and obstacle locations can be obtained in real time for the robot to achieve row tracking and navigation [13,14]. Scholars have developed many algorithms for crop row recognition and path extraction based on the above vision sensors and systems, including traditional methods and learning-based methods. Traditional crop row detection methods are simple to implement, widely applicable, cost-effective, and well-visualized [15]. However, environmental factors such as lighting and shading have a significant impact on the accuracy and reliability of traditional crop row detection methods in practical applications. In recent years, the development and application of machine learning and deep learning have provided strong theoretical support for vision-based crop row detection [16]. In image segmentation, feature detection, target recognition, and other visual information processing tasks, machine learning can replace traditional methods to reduce the interference of environmental noise and vegetation overlap and improve the accuracy of crop row detection [17,18]. Moreover, traditional visual detection techniques are prone to occlusion and missed detections in agricultural environments with high crop density, such as orchards, sorghum fields, and corn fields. In this case, LiDAR is a good alternative, with strong penetration and high accuracy. In addition, multi-sensor fusion is an important development direction in crop row detection. By integrating information from multiple sensors, multi-sensor fusion makes up for the limitations of a single sensor and improves the accuracy and robustness of crop row detection. In general, the application of different sensors and their corresponding algorithms can effectively improve the crop row detection accuracy and navigation robustness of agricultural robots and promote the transition of modern agriculture to high efficiency, automation, and precision [19,20].
Despite the increasing popularity of this topic in agricultural navigation, the related methods and applications based on crop row detection have not been summarized in detail or systematically. This is detrimental to the understanding of agricultural robot navigation methods based on crop row detection. Therefore, in order to promote the further development of crop row detection technology and agricultural robotics navigation, this paper provides a comprehensive review of the literature regarding the current state of research on crop row detection. This paper is organized as follows: Section 2 provides a comprehensive introduction to the sensors and systems used in navigation systems, as well as their advantages and disadvantages. In Section 3, crop row recognition and detection algorithms are classified into two main categories: traditional methods and machine learning methods. The applications of crop row detection in robot navigation are presented in Section 4 in accordance with various perceptual conditions. Section 5 discusses the challenges and prospects of this topic.

2. Sensors and Systems for Crop-Row Detection

2.1. Monocular Cameras

Monocular vision is a fundamental building block for other vision systems, such as binocular stereo-vision and multi-vision systems [21]. The imaging principle of a monocular camera is to generate a projection onto the camera plane, reflecting the three-dimensional (3D) world in a two-dimensional (2D) form [22]. According to different signal readout processes, monocular cameras are usually divided into two kinds of image sensors: the charge-coupled device (CCD) and the complementary metal-oxide-semiconductor (CMOS) [23]. The advantages of the monocular camera include a simple structure, low cost, and low power consumption, making it a convenient tool for crop row detection. Additionally, monocular cameras can detect color, texture, and other features in agricultural scenes, providing useful information for the positioning and navigation of agricultural robots and vehicles [24]. However, due to the limited amount of information provided by a single camera, auxiliary algorithms are often required to estimate the distance between the target and the camera [25,26]. Moreover, depth information cannot be directly collected from monocular cameras because of their single-view angle.

2.2. Binocular Cameras

The binocular stereo vision technique is commonly utilized in crop row detection because it can provide precise and efficient depth information [27]. A binocular camera comprises two monocular cameras and is commonly used as a passive rangefinder. By capturing two images of an object from different positions, binocular cameras can determine the 3D geometric information of the object by calculating the position deviation between the corresponding points of the images [28]. This process is based on the principle of parallax, and it enables the camera to accurately measure the distance to the object, providing important information for crop row detection [29]. Stereo vision-based crop row detection has demonstrated superior performance in challenging field conditions such as high crop density or varying lighting [30]. Compared to monocular vision, stereo vision is less sensitive to the effects of shadows and sunlight, providing more reliable results in these environments [31]. However, the configuration and calibration of binocular cameras can be complex and require careful attention to detail. Additionally, in the absence of texture features, stereo-matching algorithms can fail to accurately identify the corresponding points between images [32].
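As a rough numerical illustration of the parallax principle described above, the sketch below computes depth from disparity for a calibrated stereo pair; the focal length, baseline, and disparity values are illustrative assumptions rather than figures from any cited study.

```python
import numpy as np

# Depth from disparity for a calibrated stereo pair: Z = f * B / d.
# f is the focal length in pixels, B the baseline between the two
# cameras in meters, and d the disparity of a matched point in pixels.
# All numbers below are illustrative.
f_px, baseline_m = 800.0, 0.12
disparity_px = np.array([40.0, 20.0, 10.0])
depth_m = f_px * baseline_m / disparity_px   # larger disparity -> closer object
print(depth_m)                               # [2.4 4.8 9.6] meters
```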

2.3. RGB-D Cameras

RGB-D cameras have become increasingly widespread in crop row detection due to their capacity to capture both color and depth information [33]. Structured light and Time-of-Flight (ToF) cameras, which are capable of measuring both RGB and depth information, fall under the category of RGB-D cameras [34]. Structured light systems can capture both 2D planar and 3D depth information [35]. Such a system is composed of projectors, cameras, image acquisition systems, and processing systems, which work together to project a particular pattern of light onto the surface of the object to be measured and then calculate the position and depth information according to the changes in the light signal on the surface of the object [36]. ToF cameras work on the principle of emitting a short burst of infrared light and measuring the time it takes for the light to return after reflecting off objects in the scene [37]. The camera sensor detects the reflected light and calculates the distance to each point based on the time of flight. These data are then used to create a depth map of the scene, which provides information about the distances between the camera and various objects within the field of view [38]. Unlike the passive ranging of binocular cameras, RGB-D sensors actively emit signal waves and capture the waves reflected back from objects [39]. The depth information obtained from RGB-D cameras provides additional geometric information that can be used to detect crop rows and estimate plant height more accurately [40]. However, the performance of RGB-D cameras may be limited in situations with high crop density or occlusion. In such scenarios, some crop plants may be hidden from view, or the depth data may be noisy, leading to inaccurate detection [41]. In addition, the processing of large amounts of 3D data generated by RGB-D cameras is also a computationally intensive and time-consuming process [42].
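The time-of-flight principle mentioned above reduces to a simple round-trip calculation; the minimal sketch below uses an assumed flight time purely for illustration.

```python
# Time-of-flight range equation d = c * t / 2; the factor of 2 accounts
# for the round trip of the emitted pulse. The flight time is an
# illustrative assumption.
c = 299_792_458.0        # speed of light, m/s
t_round_trip = 20e-9     # assumed 20 ns measured flight time
distance_m = c * t_round_trip / 2
print(f"{distance_m:.2f} m")   # ~3.00 m
```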

2.4. Panorama Cameras

Panoramic cameras enable the large-scale, blind-spot-free monitoring of crops in the field [43]. Based on bionics technology, panoramic cameras work by using the spherical mirror transmission and reflection of physical optics for imaging. By rotating the camera, the photographic field can be scanned at a large angle, and a panoramic field of view of up to 360° can be achieved using stitching technology [44]. The remarkable feature of panoramic cameras is that they can capture a large amount of the surrounding crop structure information in a single image, making them more suitable for large-scale farmland monitoring [45]. Fisheye panoramic cameras are one of the most commonly used types of panoramic cameras in agriculture due to their broad field of view and ability to capture rich environmental information [46]. However, the images captured by fisheye panoramic cameras suffer from large distortion and lack the details required for accurate object detection, which can limit their usefulness in certain applications [47,48]. To address these issues, deformation correction techniques, such as equidistant, stereographic, or orthographic projection models, can be used to rectify these images and remove the distortion [49].

2.5. Spectral Imaging Systems

The spectral imaging system is a combination of imaging technology and spectral information acquisition technology, which shows great potential in crop detection applications [50]. This system obtains the data cube composed of 2D spatial information and the one-dimensional (1D) spectral information of the measured object by spectral scanning [51]. According to different spectral resolution capabilities, common spectral imaging techniques can be divided into multi-spectral, hyper-spectral, and ultra-spectral [52]. Hyperspectral imaging systems provide a higher spectral resolution, enabling the more detailed and accurate identification of different objects in the field. By acquiring spectral data and analyzing the differences in their spectral reflection, this system can differentiate between crops and non-crop areas more accurately [53,54]. However, for early-growing crops, which may have similar spectral characteristics to weeds, spectral detection may not be as effective [55]. In addition, processing a large amount of imaging spectral data quickly and reliably remains a challenge for spectral imaging systems [56].
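As one concrete, hypothetical example of exploiting spectral reflectance differences, the sketch below computes the widely used NDVI vegetation index from co-registered red and near-infrared bands. NDVI is not discussed in the text above and is only one of many indices derivable from multi- or hyperspectral data; the band arrays and threshold are assumptions for illustration.

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): healthy vegetation reflects strongly
# in the near-infrared and absorbs red light, so high-NDVI pixels are
# likely vegetation and low-NDVI pixels are likely soil or background.
def ndvi_vegetation_mask(nir, red, threshold=0.4):
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    ndvi = (nir - red) / (nir + red + 1e-9)    # small epsilon avoids division by zero
    return ndvi, ndvi > threshold              # index map and a simple vegetation mask
```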

2.6. LiDAR Sensors

LiDAR, which stands for Light Detection and Ranging, is a highly advanced and reliable sensor that has been widely used in the field of crop row detection and robot navigation. This sensor is known for its high precision, wide range, and powerful anti-jamming capabilities [57]. The principle of the LiDAR operation is based on the emission of visible or near-infrared light waves by the transmitting system. These waves are then reflected off the target and detected by the receiving system. The data obtained are subsequently processed to produce parameter information, including distance [58]. LiDAR sensors have been utilized in crop row detection to provide highly accurate and detailed 3D maps of crop canopies [59]. Additionally, LiDAR sensors are able to penetrate vegetation and capture data from the ground surface, which can aid in detecting crop rows even in highly vegetated fields [60]. However, the high cost of LiDAR sensors remains a foremost drawback, limiting their use in small-scale farming operations. Additionally, LiDAR sensors require high computational power to process large amounts of data, which could be a bottleneck in real-time applications [61].

2.7. Multi-Sensor Fusion Systems

As mentioned above, both vision and LiDAR sensors have their own limitations in crop row detection and robot navigation. A multi-sensor fusion system can exploit the complementary and redundant nature of multiple sensor data to fuse different environmental information and enhance the performance of crop row detection [62,63]. Considering the variability of soil disturbance, vegetation level, and machinery speed in agricultural environments, the fusion of LiDAR and vision sensors can enable robust weed detection and crop row tracking tasks [64]. By combining the planar range data provided by the LiDAR with the color information provided by the images, noise from grass or leaves in the environment can be eliminated and the detection of crop rows improved [65]. Since the boundaries between harvested and unharvested crops in the field are not always straight lines, the fusion of vision sensors with the Global Positioning System (GPS) is also an effective fusion scheme [66]. The use of vision sensors alone for crop row detection may result in missed detections during image processing, while using a GPS device alone can produce certain errors in determining the navigation baseline. The fusion of vision and GPS improves the integrity of crop row feature information extraction and enhances localization accuracy and detection robustness. In addition, the fusion of vision sensors, encoders, and inertial measurement units (IMU) is also common in crop row detection [67]. Although multi-sensor fusion systems introduce additional complexity, this can be effectively mitigated with appropriate fusion techniques [68]. When the data are properly integrated, the information from different sensors can yield an accurate crop row detection model for the current agricultural environment.

3. Methods and Algorithms for Crop-Row Detection

3.1. Traditional Methods

3.1.1. Hough Transform (HT)

HT is a classical computer vision algorithm for crop row detection and navigation line extraction [69]. The idea behind this approach is to transform the image-coordinate space to the Hough-parameter space using the mapping relationship between points and lines, followed by detecting the target lines in the image. The HT-based detection approach is robust to image noise and outliers and performs well even in parallel-structure crop fields with gaps [70]. To improve the efficiency and accuracy of the detection results, edge detection and image binarization are often performed prior to the HT-based detection process [71]. One limitation of the classic Hough transform is its high computational complexity, which makes it unsuitable for real-time applications. Another limitation of the classic Hough transform is its sensitivity to noise and outliers. To address this issue, researchers have proposed various modifications to HT, such as the Probabilistic Hough Transform (PHT), which uses a probabilistic voting scheme to reduce the effect of noise and outliers [72]. Other modifications include the Directional Hough Transform (DHT), which was designed to detect lines with a specific orientation [73], and the Multi-scale Hough Transform (MHT), which detects lines of different scales [74].
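A minimal sketch of how such an HT-based pipeline might look in practice, using OpenCV's probabilistic Hough transform on a binarized vegetation mask; the excess-green segmentation step and all parameter values are illustrative assumptions rather than settings from the cited works.

```python
import cv2
import numpy as np

def detect_rows_hough(bgr_image):
    # Simple excess-green (ExG) segmentation to highlight plant pixels.
    b, g, r = cv2.split(bgr_image.astype(np.float32))
    exg = 2 * g - r - b
    mask = (exg > exg.mean()).astype(np.uint8) * 255
    # Edge detection and binarization precede the Hough transform, as noted above.
    edges = cv2.Canny(mask, 50, 150)
    # Probabilistic Hough transform: returns candidate line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=100, maxLineGap=30)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```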

3.1.2. Linear Regression Method (LRM)

LRM is a widely utilized technique for detecting row crops in agriculture through image analysis. In regression analysis, one or more independent variables are studied to determine their impact on the dependent variable, with the aim of modeling the relationship between them [75]. The most common implementation of LRM is the least squares method, where the sum of the squared errors between the predicted and actual values is minimized to find the best-fit line. In the context of crop row detection, LRM can be used to predict the position and orientation of crop rows using image data. The goal is to find a linear relationship between the independent variables (such as pixel coordinates) and the dependent variable (crop row position or orientation). Before applying LRM to crop row detection, image preprocessing steps such as image segmentation and feature extraction can be performed to isolate the crop rows from the background and extract useful features for regression analysis [76]. One of the advantages of LRM is its simplicity and computational efficiency. However, it may encounter difficulties in handling complex data with noise in farmlands. In such cases, additional preprocessing steps, such as separating weed and crop pixels or using non-linear regression techniques, may be necessary to improve the accuracy of the model [77].
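A minimal sketch of least-squares centerline fitting for a single crop row. The point array stands in for plant-pixel coordinates obtained after segmentation (an assumption), and fitting x as a function of the image row avoids the ill-conditioned case of near-vertical rows.

```python
import numpy as np

def fit_row_line(row_points):
    # row_points: (N, 2) array of (x, y) pixel coordinates belonging to one crop row.
    x, y = row_points[:, 0], row_points[:, 1]
    # Least-squares fit of x = a*y + b (x as a function of image row).
    a, b = np.polyfit(y, x, deg=1)
    return a, b

# Illustrative plant-pixel coordinates for a single, slightly slanted row.
points = np.array([[102, 10], [105, 60], [109, 120], [113, 180]])
print(fit_row_line(points))   # slope and intercept of the fitted centerline
```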

3.1.3. Horizontal Strips Method

The horizontal strips method is a reliable approach for detecting crop rows using agronomic image analysis [78]. The key concept of this technique is to divide the input image into several horizontal strips, which can serve as regions of interest (ROI). Within each ROI, feature points are determined based on the calculated center of gravity. Compared with other crop row detection methods, the horizontal strip analysis method does not require an additional image segmentation step, which improves the computational efficiency of image processing and reduces storage space [79]. Moreover, this technique was clearly superior in terms of real-time performance and precision in continuous crop rows with low weed density. Nevertheless, the horizontal strip method might not perform well in agricultural environments where crop rows are partially missing or overgrown with weeds, as these factors can affect the accuracy of feature point detection. Furthermore, the accuracy of this method is sensitive to the camera angle, which can affect the determination of feature pixel values. To mitigate this issue, the vertical projection method is often used in conjunction with the horizontal strip method to enhance accuracy [80].
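A minimal sketch of the horizontal-strip idea on a binary vegetation mask: the center of gravity of plant pixels in each strip serves as a row feature point for later line fitting. The strip count is an illustrative assumption.

```python
import numpy as np

def strip_feature_points(mask, n_strips=10):
    # mask: binary image where nonzero pixels are vegetation.
    h = mask.shape[0]
    points = []
    for i in range(n_strips):
        top = i * h // n_strips
        strip = mask[top:(i + 1) * h // n_strips, :]
        ys, xs = np.nonzero(strip)
        if xs.size:                      # skip strips with no vegetation
            cx = xs.mean()               # horizontal center of gravity of the strip
            cy = ys.mean() + top         # shift back to full-image coordinates
            points.append((cx, cy))
    return points                        # feature points for subsequent row fitting
```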

3.1.4. Blob Analysis (BA)

The Blob Analysis (BA) method is a useful technique for crop row detection that operates on binarized images to group connected pixels into blobs with the same gray value [81]. The blobs that contain more than a certain number of pixels are then used to generate straight lines that represent crop rows. Unlike other machine vision techniques, BA considers features in an image as objects rather than individual pixels or lines, leading to more accurate identification of crop rows [82]. This approach leverages the unique shape and color characteristics of crop rows to accurately locate and identify them by calculating the center of gravity and principal axis position of each crop row [83]. In crop row detection, the BA technique has proven effective, particularly in situations where the crop rows have a clear definition and a distinct contrast with the surrounding field, such as in the case of newly planted crops with a different color or texture than the soil. However, BA may have limitations in fields with a high weed density or an unclear crop row definition. In such cases, the noise in the clustered blobs can lead to errors, which can affect the accuracy of the crop row detection results [84].
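A minimal sketch of blob analysis on a binarized image: connected components are filtered by area, and the center of gravity and principal axis of each remaining blob approximate a crop-row segment. The area threshold is an illustrative assumption.

```python
import cv2
import numpy as np

def blob_row_axes(mask, min_area=500):
    # mask: uint8 binary image. Group connected foreground pixels into blobs.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    axes = []
    for i in range(1, n):                              # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:      # discard small (noise) blobs
            continue
        ys, xs = np.nonzero(labels == i)
        pts = np.column_stack([xs, ys]).astype(np.float64)
        pts -= pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts, full_matrices=False)
        # (center of gravity, unit vector of the blob's principal axis)
        axes.append((tuple(centroids[i]), tuple(vt[0])))
    return axes
```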

3.1.5. Random Sample Consensus (RANSAC)

The RANSAC algorithm is a robust and widely used technique for row detection in crops. The algorithm estimates a mathematical model and calculates the optimal solution of parameters from a dataset that may contain outliers [85]. In crop row detection, outliers can be weed points, soil points, or other objects that do not belong to the crop row. This property makes it suitable for the centerline fitting of crop rows, even when a significant proportion of weed data points are present [86]. Furthermore, the RANSAC algorithm can optimize point cloud matching and 3D coordinate calculations for complex 3D crop row detection [87]. However, the effectiveness of the RANSAC algorithm depends on several factors, such as the number of iterations, the threshold values, and the size of the data set. In the case of crop row detection, the quality of the feature points extracted from the image data also plays a crucial role in the success of the algorithm [88]. In recent years, several variations of the RANSAC algorithm have been proposed to address some of its limitations in crop row detection, such as the Progressive Sample Consensus (PROSAC) algorithm and the M-estimator Sample Consensus (MSAC) algorithm [89].
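A minimal sketch of RANSAC line fitting for a crop-row centerline from candidate plant points that may contain weed or soil outliers; the iteration count and inlier tolerance are illustrative assumptions.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=5.0, seed=0):
    # points: (N, 2) array of (x, y) candidate plant-pixel coordinates.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Hypothesize a line from two randomly sampled points.
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        d = p2 - p1
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            continue
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(d[0] * (points[:, 1] - p1[1]) -
                      d[1] * (points[:, 0] - p1[0])) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set with ordinary least squares.
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    a, b = np.polyfit(y, x, deg=1)
    return a, b, best_inliers
```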

3.1.6. Frequency Analysis

Frequency analysis is a signal processing technique for analyzing local spatial patterns, which is widely used in crop row detection [90]. This mathematical method involves converting images from the image space to the frequency space through frequency domain filtering. By analyzing the resulting spectrum, this method can extract details from the image and enhance object detection with some simple logical operations. Common methods used in frequency-domain characterization include Fourier transform (FT), fast Fourier transform (FFT), and wavelet analysis [91]. Through these methods, the grayscale levels of weeds and shadows (tractors or crops) in field images can be attenuated, enabling the efficient detection of the position and direction of crop rows [92]. However, the frequency analysis method may not be suitable for the detection of curved crop rows with irregular crop spacing. Furthermore, the accuracy of this method may be affected by factors such as lighting conditions and the presence of noise in the image [93].
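A minimal sketch of a frequency-domain view of crop-row periodicity: the vertical projection of a binary vegetation mask is roughly periodic across the image, and its dominant FFT frequency gives the row spacing in pixels. This assumes rows roughly aligned with the image columns, which is an illustrative simplification.

```python
import numpy as np

def estimate_row_spacing(mask):
    # Vertical projection: count vegetation pixels in each image column.
    profile = mask.astype(np.float64).sum(axis=0)
    profile -= profile.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=1.0)    # cycles per pixel column
    k = spectrum[1:].argmax() + 1                   # strongest non-zero frequency bin
    return 1.0 / freqs[k]                           # dominant period = row spacing in pixels
```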

3.2. Machine Learning Methods

3.2.1. Clustering

The clustering algorithm is an unsupervised learning method that automatically groups data points into clusters according to various standard attributes or features like color, texture, or edge information [94]. This method does not require labeled data, which makes it a useful tool for detecting crop rows. The cluster-based algorithm is known for its quick detection of objects, high efficiency, and fast operation speed [95]. Data clustering methods mainly include partition-based methods, density-based methods, and hierarchical methods. Among these, the K-means clustering algorithm is the simplest and most commonly used method in crop row detection [96]. It can cluster data effectively, even when weed pixels are present between rows and are significantly smaller than planting crops. The scalability and efficiency of the K-means algorithm make it suitable for processing large datasets in cropland [97]. However, it has been noted that the K-means algorithm assumes that the clusters are spherical, equally sized, and have similar densities, which can lead to over-clustering or under-clustering in certain situations [98]. In recent years, several studies have attempted to address the limitations of traditional clustering algorithms in crop row detection. For example, some researchers have used hybrid clustering algorithms that combine the strengths of multiple clustering methods to achieve better results. Others have developed clustering algorithms that can detect irregularly shaped clusters, such as Gaussian mixture models (GMMs) or fuzzy clustering algorithms [99].
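A minimal sketch of K-means-based row grouping: plant-pixel x-coordinates from a binary mask are clustered into a known number of rows. The row count and the use of scikit-learn are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_rows(mask, n_rows=4, seed=0):
    # Collect plant-pixel coordinates from the binary vegetation mask.
    ys, xs = np.nonzero(mask)
    # Cluster on the x-coordinate only, assuming roughly vertical rows.
    km = KMeans(n_clusters=n_rows, n_init=10, random_state=seed)
    labels = km.fit_predict(xs.reshape(-1, 1))
    centres = np.sort(km.cluster_centers_.ravel())   # approximate row-center columns
    return labels, centres
```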

3.2.2. Deep Learning

Deep learning is a new research direction of machine learning that has been applied to crop row detection [100]. Unlike traditional shallow learning, deep learning places more emphasis on the depth and feature learning of model structures, with the goal of establishing a neural network that can analyze and learn in a manner similar to the human brain. This method has demonstrated significant improvements over traditional computer vision algorithms for identifying crop rows, especially in challenging conditions such as variable lighting, weather, and field conditions [101]. One of the main advantages of deep learning is that it can autonomously learn from large datasets and adapt to new data distributions. This makes it well-suited for precision agriculture, where it can be used to identify crops, pests, and diseases, optimize planting patterns, and monitor crop growth and health. Object detection and semantic segmentation play crucial roles in crop row detection by enhancing the accuracy and understanding of field images. Object detection algorithms enable the identification and localization of crop rows within an image, allowing for the precise mapping and measurement of their positions. This helps when optimizing planting patterns and ensuring uniform spacing between the rows. Moreover, object detection enables the detection of other objects or obstacles in the field, such as machinery or structures, which can help to avoid potential collisions or disturbances during farming operations [102]. On the other hand, semantic segmentation goes beyond object detection by providing detailed pixel-level labeling of an image. In the context of crop row detection, semantic segmentation helps differentiate the crop rows from other objects or background elements that are present in the image. By accurately segmenting the crop rows, semantic segmentation facilitates the analysis of their spatial distribution and arrangement [103]. It enables the identification of irregularities or gaps between rows, which can indicate potential issues such as missing plants, weed infestations, or uneven growth. This information is invaluable for farmers when making informed decisions regarding subsequent farming operations. Recent studies have used deep learning techniques such as Faster R-CNN, YOLOv3, Mask R-CNN, and DeepLabv3+ to detect crop rows from images captured by drones, tractors, or robots [104]. The significant challenge of deep learning-based crop detection is a lack of annotated training data for specific crops, growth stages, and field conditions [105]. Creating such datasets requires significant time and resources, and their quality and size can significantly impact the accuracy and robustness of the models. Moreover, the computational cost of training deep learning models can be prohibitive for resource-constrained devices and systems [106].
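A minimal sketch of a semantic-segmentation pipeline of the kind cited above, using a DeepLabv3 model from torchvision with a two-class (background vs. crop row) head. The untrained weights and random input tensor are placeholders; in practice the network would be fine-tuned on annotated field images.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two-class head: background vs. crop row. Weights are untrained here; a real
# system would fine-tune on an annotated crop-row dataset.
model = deeplabv3_resnet50(weights=None, num_classes=2)
model.eval()

image = torch.rand(1, 3, 512, 512)        # placeholder for a normalized RGB field image
with torch.no_grad():
    logits = model(image)["out"]          # (1, 2, 512, 512) per-pixel class scores
row_mask = logits.argmax(dim=1)           # pixel-level crop-row mask
```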

4. Applications of Row Detection Based Navigation in Row-Crop Fields

Most crops are planted or cultivated in fields with regular parallel structures and spaced line patterns. For one thing, this structure can make full use of land and space, help crop growth and development, and increase crop yields [107]. For another, the crops grown in rows are conducive to field operations and management, improving the working efficiency of agricultural robots [108]. To achieve row detection-based navigation, various sensors, systems, and detection algorithms have been developed and applied in different agricultural scenarios such as drylands, paddy fields, orchards, and greenhouses [109]. For example, in dryland fields, visible light and near-infrared cameras have been used to detect crop rows, while in paddy fields, depth sensors and LiDAR have been utilized due to the presence of water. In orchards, stereo vision and 3D laser scanning have been employed to detect tree trunks and branches, which can be used as reference points for navigation. These technologies provide dependable crop row information, which can facilitate the implementation of precise navigation in agricultural machinery [110]. Moreover, the focus and challenges of row detection in different applications vary due to different applied objects, differences in image interpretation, and differences in weed interference [102]. In irrigation, crop row detection usually needs to identify the position of the crop row in vegetation-covered images, while in weed detection and picking, crop row detection mainly assists when distinguishing crops from weeds [111]. In terms of weed interference differences, one of the main challenges in weed detection is to accurately detect and segment weed areas, and crop row detection can help establish a frame of reference for weed distribution analysis and targeted weed control. By contrast, in harvesting, the main concern is to accurately detect and locate crop rows. The specific implementations of crop row detection in different agricultural environments and applications are detailed and summarized below [82].

4.1. Applications of Row Detection in Drylands

Dryland refers to arable land that relies on natural precipitation to grow crops, accounting for about 65% of the total arable land [112]. Water and nutrients are the main factors affecting agricultural production in dryland. Common dryland crops include important food crops (legumes, cereals, potatoes, etc.), as well as typical cash crops (fiber, oilseeds, etc.). In drylands, complex and diverse terrain, and unstable light conditions are the main factors affecting the accuracy of crop row detection. To overcome these challenges, intelligent technology can be employed to increase the precise identification of characteristics such as the position, shape, and direction of crop rows [113,114]. This can provide technical support for the guidance and operation of agricultural robots, making agricultural production more intelligent and efficient. Table 1 details the applications of sensors for crop row detection in drylands, including sensors, scenarios, methods, and crop row detection accuracy (CRDA).

4.1.1. Row Detection for Irrigation

A dryland irrigation robot is intelligent equipment that realizes dryland irrigation and has the characteristics of high efficiency, safety, and environmental protection [173]. It can adjust irrigation quantity and irrigation time, reduce labor costs and avoid the waste of water resources [174]. Smart agricultural irrigation robots integrate unmanned driving, the Internet of Things, multi-sensor fusion, and other modern technologies. Irrigation tasks between crop rows can be performed by automatic irrigation robots in accordance with a set amount and time of irrigation, relying on positioning equipment and predefined paths [175]. Information such as the spacing of crop rows and the number of crop rows obtained by crop row detection technology, combined with navigation technology, can support the decision-making of dryland irrigation robots. Without human intervention, robots can independently judge the irrigation needs of each location, thus achieving targeted irrigation. Various approaches have been proposed in the design, development, and manufacture of irrigation robots. The flowchart of crop row detection applications in drylands is shown in Figure 2. The development of vision-based navigation in agricultural robots heavily relies on crop row detection, while machine vision technology remains a critical area that requires significant improvement. To avoid crop crushing by field machinery during irrigation, Wu et al. [176] proposed a partial differential equation (PDE)-based diffusion method that reduced the effect of local interference and strengthened the texture and detailed clarity of crop images. Considering the field variability of dryland crops, Ronchetti et al. [177] combined the threshold segmentation algorithm, classification algorithm, and Bayesian segmentation algorithm to effectively separate crop rows from soil background and weeds, optimizing the operational management of irrigation robots and improving the quality of crop yields. To solve the problem of slow visual navigation line extraction for irrigation robots, Cao et al. [163] enhanced the ENet semantic segmentation network model for the row segmentation of crop images in drylands. By designing the network structure of shunt processing and compressing the traditional ENet network, the accuracy of the beet field boundary’s location and row-to-row segmentation was improved. Faced with overwatering in irrigation systems, Singh et al. [178] designed a MAMDANI fuzzy inference method, which was applied to dryland irrigation robots to optimize the acquisition and processing of crop row information, helping to better control water flow and automate dryland crop irrigation.

4.1.2. Row Detection for Weeding

Efficient and environmentally friendly weeding robots working in drylands have advantages in saving pesticides and promoting healthy crop growth. Equipped with various sensors, intelligent control systems, and operating tools, weeding robots can integrate sensing, decision-making, and control to achieve real-time and autonomous weeding tasks [180]. The weeding robot’s sensing system, consisting of a variety of sensors such as vision and LiDAR, is used to sense environmental information and work objects in real time. With crop row detection, the robot can determine the spacing and number of rows of the crop, thus distinguishing weeds from the crop and reducing false weeding [181]. Finally, the decision-making and control modules enable the weeding robot to walk and weed autonomously. Due to the influence of illumination, weeds, and soil color, it is not an easy task for weeding robots to locate and navigate between crop rows. Wendel and Underwood [182] introduced a self-supervised training method that processed the hyperspectral imaging data of corn crop rows. This method could adapt to seasonal, lighting, or geographic variations and help distinguish easily confused weeds from crops, which supported the plant classification efforts of weed control robots. Louargant et al. [54] combined spatial and spectral information to detect linearly aligned maize seedlings and classified pixels within and between rows using an SVM, achieving a weed detection rate of 89%. Weeds within dryland crop rows can easily be mislabeled as crops and affect the performance of spot spraying. To solve this problem, Ota et al. [162] used deep learning and K-means clustering algorithms to detect cabbage rows and realize the automation of mechanical weeding robots. Extracting the target features of weeds or crops using traditional machine learning technology requires extensive manual feature engineering and the manual tuning of parameters, which can be solved by exploiting the powerful learning capabilities of deep learning. To minimize damage to the surrounding crops while the weeding robot is working, Su et al. [141] used a geometric position-based DNN learning method for segmentation training to improve the accuracy and speed of identifying ryegrass weeds between the rows.

4.1.3. Row Detection for Harvesting

Automatic harvesting technology for mature crops in dryland has great prospects in the agricultural robot industry [183]. The traditional manual harvesting method is time-consuming and laborious, while intelligent harvesting robots can save labor costs by virtue of their automation and mechanization. There have been many applications for harvesting dryland crops such as corn, grain, and wheat. Harvesting robots sense the terrain of the operating area and the height and location of the crop to determine the harvesting mode. Their orientation and trajectory can be generated by a positioning system installed in the vehicle, which allows for autonomous navigation and route planning. In addition, harvester selection and harvesting strategies need to vary with different crop types and characteristics. However, image quality is easily affected by changes in outdoor lighting conditions, and traditional color-based detection methods can produce errors under shadows and excessive or poor illumination [184]. Considering this difficulty in the parametric navigation of the harvester at the crop boundary, Benson et al. [185] developed a new image processing method, namely an adaptive fuzzy sequential linear regression algorithm. Trials in corn fields demonstrated the accuracy of this method in estimating crop row position and orientation, supporting more accurate navigation of the combine harvester. Pilarski et al. [186] designed the Demeter automatic harvesting system by combining a camera and GPS to solve the problem of the vision system being easily affected by light conditions and crop distribution density. It optimized the tracking and steering between crop rows and the navigation performance of harvesters. To address the interference caused by tilted and missing ramie plants, Chen et al. [187] applied a U-Net neural network-based method for crop row detection to improve navigation line fitting for harvesters. In cotton fields, numerous plant components and ground cover make it difficult to identify the track of crop rows. Therefore, Xu et al. [188] improved the visual navigation system performance of a cotton-picking robot using the Two-Pass algorithm, an iterative method, and the LSM method. Robots in the field also face changing environments and machine vibration, which introduce noise into navigation. Fue et al. [108] reduced the effect of environmental noise and enhanced the performance of a boll harvester by using sliding window and perspective transformation algorithms to detect the left and right boll rows in 3D images. The results showed that the vision system they designed achieved 92.3% accuracy when detecting cotton rows.

4.2. Applications of Row Detection in Paddy Fields

Paddy fields are farmland with seasonal water accumulation; they are often planted with aquatic crops such as rice, lotus root, and gorgon fruit. The autonomous navigation of farming machinery in paddy fields allows farmers to use agricultural resources more efficiently and maintain ecological sustainability. Crop row detection and robots in paddy fields can be combined to fertilize, irrigate, weed, harvest, and perform other operations. This complex water environment is a major challenge for farm machinery operating in paddy fields because water depth, water quality, and water temperature can interfere with sensor data collection and processing. There are numerous uses for crop row detection in paddy fields when paired with autonomous farming techniques, as demonstrated in Figure 3. Additionally, Table 2 lists the applications of sensors for row detection in paddy fields.

4.2.1. Row Detection for Transplanting

The deployment of transplanting robots in paddy field operations is essential for increasing operational effectiveness and lowering the burden of physical labor [201]. The rice transplanter is equipped with devices, including sensors and controllers, that can detect information such as crop rows, soil quality, and moisture conditions in paddy fields in real-time, which maintains the depth and spacing of rice seedlings and prevents inconsistent depth and irregular spacing [202]. Currently, complex terrain, crop growth variability, and high accuracy requirements are the main challenges facing rice transplanting robots in paddy fields. It is not easy to obtain good crop segmentation in a complex rice paddy environment. A color index-based segmentation method for rice seedlings was proposed by [203], combining the conversion from the RGB to YCrCb color space and Otsu threshold segmentation. It was applied to a visual crop row detection system to provide reliable navigation information for the transplanter, and the experimental results also verified an advanced segmentation effect and work quality. Keeping the rows of seedlings evenly spaced is beneficial to increase rice yield and reduce crop damage, yet this remains a challenge for autonomous navigation transplanters. In response, Lin et al. [196] developed a Faster R-CNN algorithm for row detection to provide the navigation parameters for rice transplanters and to control transplanting operations in rice fields in a more intelligent and automated way. In view of the poor path adaptability of rice transplanters in the paddy field, Liao et al. [197] improved previous research and designed an integrated positioning method combining GPS, INS, and VNS to reduce rice seedling pressing and improve detection accuracy.

4.2.2. Row Detection for Harvesting

Paddy field harvesting machinery can achieve the autonomous harvest of crops, which has far-reaching significance for the realization of efficient and precise agricultural automation. To assist in autonomous harvesting, paddy robots need to incorporate image processing and machine learning techniques to determine the spatial distribution and growth pattern of crop rows, as well as autonomously avoid obstacles and accidental injuries [204]. Automatic harvesters have the characteristics of high performance, multi-function, and strong flexibility and can effectively prevent over-tillage or under-tillage in complex rice environments. However, harvesting in paddy fields presents unique challenges compared to other row-crop fields due to the presence of standing water, which may cause difficulties for machines to navigate and detect crop rows accurately [205]. Research on rice harvesting has led to many applications, especially in Japan, where the technology is relatively mature. Mud and bubbles in the rice field can easily cause interference in row detection. Therefore, Tian et al. [171] first combined a custom shear binary image algorithm and an LSM algorithm to help detect high-stubble rice plants. Their method has been proven to meet the needs of real-time processing due to a segmentation speed of 0.6s and a segmentation accuracy of 96.7%. At present, the navigation path of common harvesting robots in the market is prone to interference and is challenging to identify accurately. Li et al. [206] identified paths by analyzing the 3D spatial geometric relationships of rice fields and used an improved random sampling consistency algorithm, and perfected boundary identification by collecting boundary angles. This experiment was found to have a success rate of 94.6%. To avoid data errors caused by locally high crop in images, Wang et al. [207] realized the detection of rice rows and boundary lines through a series of conventional image processing algorithms, including morphological operations and the Sobel operator. It ensured the stability of the harvesting robot’s guidance and work in the paddy field.

4.2.3. Row Detection for Weeding

Paddy weeding robots safeguard the quality and yield of rice and other forms of aquatic crop production [208]. The distinction between the crop and weed requires that the paddy weed robot utilize sensors and algorithms to analyze the image information containing the crop and weed. According to the preset weeding scheme, the control system operates a robotic arm or spraying system to perform targeted weeding [209]. The autonomous weeding robot demonstrates a strong working ability when weeding plants in their early and late stages of growth, which helps reduce the impact of pesticide use on the environment and the human body. However, the detection of crops in the paddy fields is more challenging than in dry fields due to noise, such as green duckweed, cyanophytes, and eutrophication water. To reduce the use of herbicides, Chen et al. [210] designed a machine vision system based on passing a known point of Hough transform (PKPHT) to guide a micro-weeding robot between rows of rice seedlings. Zhang et al. [211] proposed a real-time crop row detection method based on a color model and nearest neighbor clustering, which could accurately extract features and adapt to environmental changes. To facilitate the weeding robot to weed the rice field environment without damaging the crop plants, Choi et al. [212] designed robust regression and HT algorithms to extract navigation lines based on rice morphological features. Given that weeds, duckweeds, and cyanobacteria growing in paddy fields tend to interfere with rice crop detection, Zhang et al. [213] proposed an improved sequential clustering algorithm and an angular-based image processing method.

4.3. Applications of Row Detection in Orchards

Orchards are typically agricultural lands that are planted with relatively tall trees or shrubs, which belong to a semi-structured environment. Many tasks, including monitoring, management, and harvesting, cannot be performed without the aid of orchard mobile vehicles and robotic autonomous navigation platforms [214]. Fruit tree row detection can help the robot to accurately locate the position and shape of fruit tree rows, and improve the accuracy of the robot’s autonomous navigation, thus helping fruit farmers to better manage their orchards and improve the yield and quality of fruit trees. Figure 4 shows the navigation and path detection applications under the orchard canopy. The applications of sensors for crop row detection in orchards are presented in Table 3.

4.3.1. Row Detection for Picking

The selective harvesting of fruits and vegetables is one of the most time-consuming and costly links in traditional agricultural production. The development of the orchard-picking robot can effectively replace the manual picking of fruit, improve picking efficiency, and reduce labor costs [226]. In the orchard environment, picking robots need to be equipped with GPS positioning or a LIDAR sensor to realize autonomous positioning and navigation and accurately move and locate the position of the fruit tree. Equipped with vision systems, robotic arms, and pickers, the picking robot can autonomously identify the location, shape, and ripeness of the fruit and accurately locate the fruit as well as complete the picking task [227]. Fruit tree row detection is the basis of the picking robot’s ability to achieve autonomous navigation and positioning. The accurate detection of fruit tree rows can help the picking robot to identify the position, shape, and distribution of fruit trees and then determine the path and attitude of the robot, as well as the placement and expansion of the robot arm [228]. In addition, it helps the robot to identify obstacles and passable areas to avoid damage to fruit trees and other robots. However, fruit trees are diverse and complex in form, and some fruit trees also have outstretched branches and weeds, which can easily block the view of the picking robot, making it more difficult to detect fruit tree rows [229]. Lyu et al. [219] applied the naive Bayesian classifier to detect tree trunk rows and nadir points and generate a centerline by connecting these points as information for the movement path or navigation of the picking robot. This algorithm was able to effectively classify trunk points and noise points in the orchard and reduce the noise caused by small branches, soil, and ground tree shadows. In response to the random spatial arrangement of tree trunks and inconspicuous target differences in a wolfberry plantation, Ma et al. [230] proposed an autonomous navigation method for a wolfberry-picking robot based on visual cues and fuzzy control. This method extracts the trunk rows of wolfberry plants in the far field of view and dynamically tracks the navigation line by calculating a variable-slope region of interest. LiDAR can provide high-precision 3D maps for picking robots and detecting and identifying fruit tree rows, fixed obstacles, etc. Blok et al. [231] evaluated the applicability of the Kalman filtering (KF) and particle filtering (PF) localization algorithms of 2D LiDAR scanners for the in-row navigation of a picking robot in apple orchards. The results showed that for the in-row navigation of orchard-picking robots, the PF algorithm with a laser beam model had better localization performance.

4.3.2. Row Detection for Spraying

Spraying robots are becoming increasingly popular in orchards due to their efficiency and precision in applying pesticides and fertilizers to crops. Spraying robots are equipped with tanks and spraying nozzles that can hold and distribute the necessary number of pesticides and fertilizers, which reduces the labor and costs associated with manual spraying [232]. These robots use advanced sensors and mapping technologies to accurately target specific areas of a crop, avoiding obstacles and ensuring precise application [233]. Research on orchard spraying robots for fertilization, pest control, weeding management, or other controlled treatments is constantly improving. Fruit tree row detection plays an equally important role in the operation of spraying robots in orchards. Based on fruit tree row detection, spraying robots can navigate through orchard rows more quickly and accurately, reducing the likelihood of missing crops or spraying too much in one area as well as the time and resources required for spraying [234]. This can allow for the more precise and effective application of pesticides and fertilizers. Kim et al. [235] proposed an intelligent spraying system for the semantic segmentation of fruit trees in pear orchards. The system applied the SegNet model to detect fruit trees and control nozzles to accurately spray pesticides on them, which reduced the overall pesticide usage. In the framework of a vineyard or orchard, Danton et al. [222] proposed a control method that applied LiDAR to sense fruit tree row information and control the movement of the robot, and automate spraying. The horizontal LiDAR was used to guide the spraying robot to ensure the accurate positioning of the sprayer relative to the vegetation, and a vertical LiDAR was used to achieve an estimate of the vegetation covered and optimize the spraying efficiency. Liu et al. [236] used a 3D LiDAR sensor to perceive the information on fruit trees around the spraying robot and performed 2D processing on the point cloud in the region of interest. The vertical distance from the robot to the centerline of the fruit tree rows was determined using the RANSAC algorithm based on the centroid coordinates on both sides of the fruit tree rows. This method achieved automatic navigation and the precise variable speed spraying of the spraying robot, reducing the amount of pesticide application, air drift, and ground loss and effectively controlling the pollution caused by pesticides to the environment. To improve the accuracy and reliability of the orchard spraying robot, Zhang et al. [237] designed an integrated BDS/IMU navigation algorithm for the position and heading measurement based on error Kalman filtering. Combining the kinematic model and the pure tracking model, the detection of citrus tree rows and the path tracking control of the spraying robot were realized.

4.4. Applications of Row Detection in Greenhouses

A greenhouse is a kind of agricultural application scene with environmental control equipment. Agricultural greenhouses consist of a skeleton and film covering to provide a controlled space in which to grow crops [238]. Due to the partially structured characteristics of greenhouses, their mechanization and automation are conducive to the implementation of precision agriculture. Autonomous navigation and the control of agricultural robots have a significant impact on the efficiency, productivity, and sustainability of greenhouse farming. In the greenhouse environment, the navigation precision and accuracy of the robot may be affected by the obstruction and influence of objects such as plants and equipment [239]. The robot needs crop row detection to determine plant location, growth, and health information and to better perform navigation, inspections, watering, fertilization, and other operations. The relatively enclosed environment inside greenhouses causes issues with signal interference, while insufficient lighting also impairs the accuracy of vision sensors. Therefore, reliable sensors and algorithms need to be selected to apply to crop rows in the greenhouse. Figure 5 illustrates the application of crop row detection technology in greenhouse robot navigation. In addition, reliable sensors also play an important role in the greenhouse. A detailed summary of sensors is displayed in Table 4.

4.4.1. Row Detection for Inspection

The greenhouse inspection robot is an agricultural robot specially designed to perform inspection tasks in a greenhouse environment. Equipped with various sensors and cameras, these robots can collect data on the growing environment and the health of plants in the greenhouse, detect pests and diseases, and monitor parameters such as temperature and humidity [247]. Introducing intelligent inspection robots in greenhouses can effectively improve the efficiency of greenhouse operations, reduce labor costs, and promote the transformation of facility agriculture from manual to intelligent inspection [248]. With technologies such as maps, sensors, and algorithms, the autonomous navigation system can help inspection robots avoid colliding with plants and equipment in the greenhouse and plan the optimal path to minimize inspection time and energy consumption. Because of the dense growth of plants and the complex environmental conditions in greenhouses, the inspection robot needs to determine the position and growth of plants through crop row detection so as to perform inspection operations more accurately. Based on two Logitech C170 cameras mounted on an inspection robot, Mahmud et al. [249] implemented crop detection and robot navigation using BT709 grayscale conversion, HSL, and channel filtering algorithms. Greenhouse crop row detection nevertheless faces difficulties such as changes in light and shadow and crop overlap. In a lemongrass greenhouse environment, Mahmud et al. [245] obtained the coordinates of lemongrass plants using a color segmentation method based on Mahalanobis distance and used them as the input to a probabilistic roadmap planner to achieve the navigation of the inspection robot. To achieve the low-cost and non-destructive inspection of crops in a greenhouse environment, Wang et al. [250] designed information acquisition and motion control systems with a Raspberry Pi and an embedded chip as the respective cores and integrated them into a greenhouse mobile inspection robot. Field tests showed that the efficient measurement of crops and the agility of the designed robot in motion improved the efficiency of greenhouse crop research and inspection. Realizing the autonomous navigation of inspection robots in a greenhouse environment with obstacles remains a practical problem. Zhang et al. [251] designed a hypervolume estimation algorithm to shorten the navigation distance and perform autonomous obstacle avoidance, solving the path-following problem between crop rows in greenhouses and improving the efficiency of inspection robots while further reducing costs.
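As a rough illustration of distance-based color segmentation of crop pixels (taking the cited distance measure to be the Mahalanobis distance; the function name, threshold, and sample-based calibration are assumptions for illustration, not the procedure of [245]), the sketch below labels pixels whose color is statistically close to a set of sample crop-leaf colors.

```python
import numpy as np

def mahalanobis_crop_mask(rgb_image, crop_samples, threshold=3.0):
    """Return a boolean mask of pixels whose color lies within `threshold`
    Mahalanobis units of the crop color distribution estimated from
    `crop_samples`, an (N, 3) array of RGB values taken from known crop pixels."""
    samples = np.asarray(crop_samples, dtype=float)
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(3)  # regularize for stability
    cov_inv = np.linalg.inv(cov)

    pixels = rgb_image.reshape(-1, 3).astype(float) - mean
    # Squared Mahalanobis distance of every pixel to the crop color distribution.
    d2 = np.einsum("ij,jk,ik->i", pixels, cov_inv, pixels)
    return (d2 < threshold ** 2).reshape(rgb_image.shape[:2])
```

The centroids of the connected regions in such a mask could then serve as plant coordinates for a line fit or, as in the cited work, as waypoints for a probabilistic roadmap planner.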

4.4.2. Row Detection for Spraying

The greenhouse spraying robot is an intelligent piece of agricultural machinery integrating high-precision spraying technology, sensor technology, and mechatronics [252]. It usually consists of a robotic chassis and a robotic arm equipped with a sprayer and a fertilizer tank, which are used to spray substances necessary for plant growth. The greenhouse spraying robot automatically sprays water and fertilizer based on the environmental parameters in the greenhouse, ensuring optimal growing conditions for plants and reducing the use of chemical fertilizers and their negative impact on the environment. Crop row detection can help spraying robots pinpoint the position and rows of plants so that spraying and treatment can be performed more precisely and production efficiency improved. Similar to greenhouse inspection robots, greenhouse spraying robots may face challenges such as false and missed detections, occlusion and interference, and complex environmental conditions when performing crop row detection [253]. Several studies have been conducted to develop greenhouse spraying robots. Guiding vehicles through crop rows is a challenge in greenhouse spraying applications and has become a focus of research. Wang [246] successfully segmented rows of strawberry plants from plastic film, shadow, and light using a machine vision algorithm combining threshold segmentation with a center-point detection method. This scheme offered strong applicability and simple operation while meeting the robustness and real-time requirements of path detection and following for greenhouse spraying robots. Targeting the low efficiency and high cost of greenhouse robot operations, Xue et al. [243] designed a multi-functional spraying robot that performs crop row detection on the RGB images output by vision sensors. The algorithms used included vertical projection and strip division to help the spraying robot perform path-following and spraying tasks in a greenhouse growing green vegetables. Using RGB images of tomatoes and cucumbers in a greenhouse, Chen et al. [241] developed a crop row segmentation and detection algorithm based on the least squares method (LSM) and the traditional Hough transform (HT). Robustness and rapidity were demonstrated by fitting the navigation path in 7.13 ms on a tracked spraying robot used as the experimental platform. LiDAR has also contributed to greenhouse spraying robots because of its long-range and contactless advantages. Abanay et al. [240] acquired point cloud data of strawberry greenhouse crops with an embedded 2D LiDAR sensor and applied an estimation approach to guide the vehicle heading and speed between rows while automating motion control through a pesticide spraying applicator system.
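To show how a Hough-transform-based row line step of this kind might look in practice (a minimal sketch of the generic technique, not the algorithm of [241]; the OpenCV parameters and the least-squares merging of segments are assumptions), the following Python snippet extracts candidate row segments from a binary crop mask and fits a single navigation line.

```python
import cv2
import numpy as np

def detect_row_line(mask, min_length=80, max_gap=20):
    """Detect near-vertical crop row segments in a binary mask (uint8, 0/255)
    with the probabilistic Hough transform, then merge their endpoints into one
    navigation line x = a*y + b by least squares. Returns (a, b) or None."""
    segments = cv2.HoughLinesP(mask, 1, np.pi / 180, 60,
                               minLineLength=min_length, maxLineGap=max_gap)
    if segments is None:
        return None
    points = []
    for x1, y1, x2, y2 in segments[:, 0]:
        if abs(y2 - y1) > abs(x2 - x1):       # keep roughly vertical, row-like segments
            points.extend([(x1, y1), (x2, y2)])
    if len(points) < 2:
        return None
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)  # least-squares fit over kept endpoints
    return a, b
```

The binary mask would typically come from a greenness-based threshold segmentation of the RGB image, and the fitted line (a, b) would be converted into steering commands for the spraying platform.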

5. Conclusions

In conclusion, crop row detection is a critical task in precision agriculture that enables various agricultural applications, including pesticide spraying, crop health monitoring, and weed detection. Traditional image processing techniques, machine learning-based approaches, and deep learning-based methods are all viable options for crop row detection, each with its own advantages and limitations. Nonetheless, recent advances in computer vision and machine learning technologies have made deep learning-based methods, notably convolutional neural networks, the most promising option for crop row detection.
Deep learning methods have demonstrated superior performance in various computer vision tasks, including object detection and semantic segmentation, which are fundamental to crop row detection. CNNs excel at learning complex features and patterns from large datasets, enabling them to automatically extract relevant information from field images and accurately identify crop rows. The ability of deep learning models to generalize well to different lighting conditions, weather variations, and field environments has further contributed to their suitability for crop row detection in real-world scenarios.
Despite the progress made, several challenges persist in crop row detection. One challenge is ensuring that detection algorithms are robust to diverse lighting conditions and environmental factors, as agricultural settings can be highly variable and unpredictable. The scalability of these algorithms to different crops is also essential, as crop rows can vary in shape, size, and appearance across plant species. Integrating crop row detection with other agricultural technologies, such as robotics, drones, and data analytics, presents a further challenge that demands seamless interoperability.
The prospects for crop row detection in precision agriculture are promising. Researchers and industry experts are actively working on developing more accurate, efficient, and scalable methods that address existing challenges. Advancements in deep learning architectures, such as novel CNN architectures and attention mechanisms, hold the potential to further improve crop row detection performance. Furthermore, the fusion of crop row detection with other advanced technologies, including remote sensing, the Internet of Things (IoT), and big data analytics, could enhance the overall effectiveness of precision agriculture systems.
Overall, the advancement of crop row detection in precision agriculture has the potential to revolutionize farming practices, leading to improved productivity, resource management, and agricultural sustainability. By leveraging the power of deep learning and embracing collaboration, crop row detection holds great promise for enhancing crop yield, minimizing environmental impact, and transforming the agricultural industry as a whole.

Author Contributions

Conceptualization, J.S. and Y.B.; methodology, J.S. and Y.B.; analysis, J.S.; investigation, J.S., Y.B., Z.D., J.Z., X.Y. and B.Z.; resources, Z.D. and B.Z.; data curation, J.S.; writing—original draft preparation, J.S.; writing—review and editing, J.S., Y.B. and X.Y.; visualization, J.S.; supervision, J.Z. and Z.D.; funding acquisition, B.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Jiangsu Modern Agricultural Equipment and Technology Demonstration & Promotion Project (project No. NJ2021-60), the National Natural Science Foundation of China (project No. 31901415), and the Jiangsu Agricultural Science and Technology Innovation Fund (JASTIF) (Grant No. CX (21) 3146).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maddikunta, P.K.R.; Hakak, S.; Alazab, M.; Bhattacharya, S.; Gadekallu, T.R.; Khan, W.Z.; Pham, Q.V. Unmanned aerial vehicles in smart agriculture: Applications, requirements, and challenges. IEEE Sens. J. 2021, 21, 17608–17619. [Google Scholar] [CrossRef]
  2. Subeesh, A.; Mehta, C.R. Automation and digitization of agriculture using artificial intelligence and internet of things. Artif. Intell. Agric. 2021, 5, 278–291. [Google Scholar] [CrossRef]
  3. Karkee, M. Fundamentals of Agricultural and Field Robotics; Zhang, Q., Ed.; Springer International Publishing: Cham, Switzerland, 2021. [Google Scholar]
  4. Plessen, M.G. Freeform path fitting for the minimisation of the number of transitions between headland path and interior lanes within agricultural fields. Artif. Intell. Agric. 2021, 5, 233–239. [Google Scholar] [CrossRef]
  5. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. A preliminary evaluation of vision and laser sensing for tree trunk detection and orchard mapping. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA 2013), Sydney, Australia, 2–4 December 2013; pp. 1–10, Australasian Robotics and Automation Association. [Google Scholar]
  6. McCarthy, C.L.; Hancock, N.H.; Raine, S.R. Applied machine vision of plants: A review with implications for field deployment in automated farming operations. Intell. Serv. Robot. 2010, 3, 209–217. [Google Scholar] [CrossRef] [Green Version]
  7. Rocha, B.M.; Vieira, G.S.; Fonseca, A.U.; Sousa, N.M.; Pedrini, H.; Soares, F. Detection of Curved Rows and Gaps in Aerial Images of Sugarcane Field Using Image Processing Techniques. IEEE Can. J. Electr. Comput. Eng. 2022, 45, 303–310. [Google Scholar] [CrossRef]
  8. Singh, N.; Tewari, V.K.; Biswas, P.K.; Pareek, C.M.; Dhruw, L.K. Image processing algorithms for in-field cotton boll detection in natural lighting conditions. Artif. Intell. Agric. 2021, 5, 142–156. [Google Scholar] [CrossRef]
  9. Emmi, L.; Gonzalez-de-Soto, M.; Pajares, G.; Gonzalez-de-Santos, P. Integrating sensory/actuation systems in agricultural vehicles. Sensors 2014, 14, 4014–4049. [Google Scholar] [CrossRef]
  10. Bonadies, S.; Gadsden, S.A. An overview of autonomous crop row navigation strategies for unmanned ground vehicles. Eng. Agric. Environ. Food 2019, 12, 24–31. [Google Scholar] [CrossRef]
  11. Vázquez-Arellano, M.; Griepentrog, H.W.; Reiser, D.; Paraforos, D.S. 3-D imaging systems for agricultural applications—A review. Sensors 2016, 16, 618. [Google Scholar] [CrossRef] [Green Version]
  12. Tian, H.; Wang, T.; Liu, Y.; Qiao, X.; Li, Y. Computer vision technology in agricultural automation—A review. Inf. Process. Agric. 2020, 7, 1–19. [Google Scholar] [CrossRef]
  13. English, A.; Ross, P.; Ball, D.; Corke, P. Vision based guidance for robot navigation in agriculture. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Beijing, China, 20–21 May 2014; pp. 1693–1698. [Google Scholar]
  14. Zhai, Z.; Zhu, Z.; Du, Y.; Zhang, S.; Mao, E. Method for detecting crop rows based on binocular vision with Census transformation. Trans. Chin. Soc. Agric. Eng. 2016, 32, 205–213. [Google Scholar]
  15. Tang, Y.; Chen, M.; Wang, C.; Luo, L.; Li, J.; Lian, G.; Zou, X. Recognition and localization methods for vision-based fruit picking robots: A review. Front. Plant Sci. 2020, 11, 510. [Google Scholar] [CrossRef]
  16. Pajares, G.; García-Santillán, I.; Campos, Y.; Montalvo, M.; Guerrero, J.M.; Emmi, L.; Gonzalez-de-Santos, P. Machine-vision systems selection for agricultural vehicles: A guide. J. Imaging 2016, 2, 34. [Google Scholar] [CrossRef] [Green Version]
  17. Zheng, Y.Y.; Kong, J.L.; Jin, X.B.; Wang, X.Y.; Su, T.L.; Zuo, M. CropDeep: The crop vision dataset for deep-learning-based classification and detection in precision agriculture. Sensors 2019, 19, 1058. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Fayyad, J.; Jaradat, M.A.; Gruyer, D.; Najjaran, H. Deep learning sensor fusion for autonomous vehicle perception and localization: A review. Sensors 2020, 20, 4220. [Google Scholar] [CrossRef] [PubMed]
  19. Shamshiri, R.R.; Weltzien, C.; Hameed, I.A.; Yule, I.J.; Grift, T.E.; Balasundram, S.K.; Chowdhary, G. Research and development in agricultural robotics: A perspective of digital farming. Int. J. Agric. Biol. Eng. 2018, 11, 1–14. [Google Scholar] [CrossRef]
  20. Han, X.; Xu, L.; Peng, Y.; Wang, Z. Trend of Intelligent Robot Application Based on Intelligent Agriculture System. In Proceedings of the 2021 3rd International Conference on Artificial Intelligence and Advanced Manufacture (AIAM), Manchester, UK, 23–25 October 2021; pp. 205–209. [Google Scholar]
  21. Delmerico, J.; Scaramuzza, D. A benchmark comparison of monocular visual-inertial odometry algorithms for flying robots. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2502–2509. [Google Scholar]
  22. Aharchi, M.; Ait Kbir, M. A review on 3D reconstruction techniques from 2D images. In Proceedings of the 4th International Conference on Smart City Applications (SCA‘19), Casablanca, Morocco, 2–4 October 2019; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 510–522, Innovations in Smart Cities Applications Edition 3. [Google Scholar]
  23. Huang, R.; Yamazato, T. A Review on Image Sensor Communication and Its Applications to Vehicles. Photonics 2023, 10, 617. [Google Scholar] [CrossRef]
  24. Ai, C.; Geng, D.; Qi, Z.; Zheng, L.; Feng, Z. Research on AGV Navigation System Based on Binocular Vision. In Proceedings of the 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, 15–19 July 2021; pp. 851–856. [Google Scholar]
  25. Chen, Y.; Hou, C.; Tang, Y.; Zhuang, J.; Lin, J.; He, Y.; Luo, S. Citrus tree segmentation from UAV images based on monocular machine vision in a natural orchard environment. Sensors 2019, 19, 5558. [Google Scholar] [CrossRef] [Green Version]
  26. Zhou, C.; Ye, H.; Hu, J.; Shi, X.; Hua, S.; Yue, J.; Yang, G. Automated counting of rice panicle by applying deep learning model to images from unmanned aerial vehicle platform. Sensors 2019, 19, 3106. [Google Scholar] [CrossRef] [Green Version]
  27. Ball, D.; Upcroft, B.; Wyeth, G.; Corke, P.; English, A.; Ross, P.; Bate, A. Vision-based obstacle detection and navigation for an agricultural robot. J. Field Robot. 2016, 33, 1107–1130. [Google Scholar] [CrossRef]
  28. Vrochidou, E.; Oustadakis, D.; Kefalas, A.; Papakostas, G.A. Computer vision in self-steering tractors. Machines 2022, 10, 129. [Google Scholar] [CrossRef]
  29. Ren, J.; Guan, F.; Wang, T.; Qian, B.; Luo, C.; Cai, G.; Li, X. High Precision Calibration Algorithm for Binocular Stereo Vision Camera using Deep Reinforcement Learning. Comput. Intell. Neurosci. 2022, 2022, 6596868. [Google Scholar] [CrossRef]
  30. Königshof, H.; Salscheider, N.O.; Stiller, C. Realtime 3d object detection for automated driving using stereo vision and semantic information. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 1405–1410. [Google Scholar]
  31. Kneip, J.; Fleischmann, P.; Berns, K. Crop edge detection based on stereo vision. Robot. Auton. Syst. 2020, 123, 103323. [Google Scholar] [CrossRef]
  32. Lati, R.N.; Filin, S.; Eizenberg, H. Plant growth parameter estimation from sparse 3D reconstruction based on highly-textured feature points. Precis. Agric. 2013, 14, 586–605. [Google Scholar] [CrossRef]
  33. Aghi, D.; Mazzia, V.; Chiaberge, M. Local motion planner for autonomous navigation in vineyards with a RGB-D camera-based algorithm and deep learning synergy. Machines 2020, 8, 27. [Google Scholar] [CrossRef]
  34. Giancola, S.; Valenti, M.; Sala, R. A Survey on 3D Cameras: Metrological Comparison of Time-of-Flight, Structured-Light and Active Stereoscopy Technologies; Springer Nature: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  35. Rosell-Polo, J.R.; Cheein, F.A.; Gregorio, E.; Andújar, D.; Puigdomènech, L.; Masip, J.; Escolà, A. Advances in structured light sensors applications in precision agriculture and livestock farming. Adv. Agron. 2015, 133, 71–112. [Google Scholar]
  36. Wu, D.; Chen, T.; Li, A. A high precision approach to calibrate a structured light vision sensor in a robot-based three-dimensional measurement system. Sensors 2016, 16, 1388. [Google Scholar] [CrossRef]
  37. Zanuttigh, P.; Marin, G.; Dal Mutto, C.; Dominio, F.; Minto, L.; Cortelazzo, G.M. Time-of-flight and structured light depth cameras. In Technology and Applications; Springer International: Berlin/Heidelberg, Germany, 2016. [Google Scholar] [CrossRef] [Green Version]
  38. Condotta, I.C.; Brown-Brandl, T.M.; Pitla, S.K.; Stinn, J.P.; Silva-Miranda, K.O. Evaluation of low-cost depth cameras for agricultural applications. Comput. Electron. Agric. 2020, 173, 105394. [Google Scholar] [CrossRef]
  39. Shahnewaz, A.; Pandey, A.K. Color and depth sensing sensor technologies for robotics and machine vision. In Machine Vision and Navigation; Springer: Cham, Switzerland, 2020; pp. 59–86. [Google Scholar]
  40. Chen, M.K.; Liu, X.; Wu, Y.; Zhang, J.; Yuan, J.; Zhang, Z.; Tsai, D.P. A Meta-Device for Intelligent Depth Perception. Adv. Mater. 2022, 9, 2107465. [Google Scholar] [CrossRef]
  41. Qiu, R.; Zhang, M.; He, Y. Field estimation of maize plant height at jointing stage using an RGB-D camera. Crop J. 2022, 10, 1274–1283. [Google Scholar] [CrossRef]
  42. Milella, A.; Marani, R.; Petitti, A.; Reina, G. In-field high throughput grapevine phenotyping with a consumer-grade depth camera. Comput. Electron. Agric. 2019, 156, 293–306. [Google Scholar] [CrossRef]
  43. Birklbauer, C.; Bimber, O. Panorama light-field imaging. In Computer Graphics Forum; Wiley Online Library: Hoboken, NJ, USA, 2014; Volume 33, pp. 43–52. [Google Scholar]
  44. Gao, S.; Yang, K.; Shi, H.; Wang, K.; Bai, J. Review on Panoramic Imaging and Its Applications in Scene Understanding. arXiv 2014, arXiv:2205.05570. [Google Scholar] [CrossRef]
  45. Lai, J.S.; Peng, Y.C.; Chang, M.J.; Huang, J.Y. Panoramic Mapping with Information Technologies for Supporting Engineering Education: A Preliminary Exploration. ISPRS Int. J. Geo-Inf. 2020, 9, 689. [Google Scholar] [CrossRef]
  46. Yang, K.; Hu, X.; Chen, H.; Xiang, K.; Wang, K.; Stiefelhagen, R. Ds-pass: Detail-sensitive panoramic annular semantic segmentation through swaftnet for surrounding sensing. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 457–464. [Google Scholar]
  47. Häne, C.; Heng, L.; Lee, G.H.; Fraundorfer, F.; Furgale, P.; Sattler, T.; Pollefeys, M. 3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection. Image Vis. Comput. 2017, 68, 14–27. [Google Scholar] [CrossRef] [Green Version]
  48. Kumar, V.R.; Eising, C.; Witt, C.; Yogamani, S. Surround-view Fisheye Camera Perception for Automated Driving: Overview, Survey and Challenges. arXiv 2022, arXiv:2205.13281. [Google Scholar] [CrossRef]
  49. Chan, S.; Zhou, X.; Huang, C.; Chen, S.; Li, Y.F. An improved method for fisheye camera calibration and distortion correction. In Proceedings of the 2016 International Conference on Advanced Robotics and Mechatronics (ICARM), Macau, China, 18–20 August 2016; pp. 579–584. [Google Scholar]
  50. Signoroni, A.; Savardi, M.; Baronio, A.; Benini, S. Deep learning meets hyperspectral image analysis: A multidisciplinary review. J. Imaging 2019, 5, 52. [Google Scholar] [CrossRef] [Green Version]
  51. Liu, X.; Jiang, Z.; Wang, T.; Cai, F.; Wang, D. Fast hyperspectral imager driven by a low-cost and compact galvo-mirror. Optik 2020, 224, 165716. [Google Scholar] [CrossRef]
  52. Shaikh, M.S.; Jaferzadeh, K.; Thörnberg, B.; Casselgren, J. Calibration of a hyper-spectral imaging system using a low-cost reference. Sensors 2021, 21, 3738. [Google Scholar] [CrossRef]
  53. Lottes, P.; Hoeferlin, M.; Sander, S.; Müter, M.; Schulze, P.; Stachniss, L.C. An effective classification system for separating sugar beets and weeds for precision farming applications. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5157–5163. [Google Scholar]
  54. Louargant, M.; Jones, G.; Faroux, R.; Paoli, J.N.; Maillot, T.; Gée, C.; Villette, S. Unsupervised classification algorithm for early weed detection in row-crops by combining spatial and spectral information. Remote Sens. 2018, 10, 761. [Google Scholar] [CrossRef] [Green Version]
  55. Su, W.H. Crop plant signaling for real-time plant identification in smart farm: A systematic review and new concept in artificial intelligence for automated weed control. Artif. Intell. Agric. 2020, 4, 262–271. [Google Scholar] [CrossRef]
  56. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef] [Green Version]
  57. Wang, X.; Pan, H.; Guo, K.; Yang, X.; Luo, S. The evolution of LiDAR and its application in high precision measurement. IOP Conf. Ser.: Earth Environ. Sci. 2020, 502, 012008. [Google Scholar] [CrossRef]
  58. Chazette, P.; Totems, J.; Hespel, L.; Bailly, J.S. Principle and physics of the LiDAR measurement. In Optical Remote Sensing of Land Surface; Elsevier: Amsterdam, The Netherlands, 2016; pp. 201–247. [Google Scholar]
  59. Moreno, H.; Valero, C.; Bengochea-Guevara, J.M.; Ribeiro, Á.; Garrido-Izard, M.; Andújar, D. On-ground vineyard reconstruction using a LiDAR-based automated system. Sensors 2020, 20, 1102. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  60. Liu, J.; Sun, Q.; Fan, Z.; Jia, Y. TOF lidar development in autonomous vehicle. In Proceedings of the 2018 IEEE 3rd Optoelectronics Global Conference (OGC), Shenzhen, China, 4–7 September 2018; pp. 185–190. [Google Scholar]
  61. Wang, T.; Chen, B.; Zhang, Z.; Li, H.; Zhang, M. Applications of machine vision in agricultural robot navigation: A review. Comput. Electron. Agric. 2022, 198, 107085. [Google Scholar] [CrossRef]
  62. Gao, X.; Li, J.; Fan, L.; Zhou, Q.; Yin, K.; Wang, J.; Wang, Z. Review of wheeled mobile robots’ navigation problems and application prospects in agriculture. IEEE Access 2018, 6, 49248–49268. [Google Scholar] [CrossRef]
  63. Qu, Y.; Yang, M.; Zhang, J.; Xie, W.; Qiang, B.; Chen, J. An outline of multi-sensor fusion methods for mobile agents indoor navigation. Sensors 2021, 21, 1605. [Google Scholar] [CrossRef]
  64. Shalal, N.; Low, T.; McCarthy, C.; Hancock, N. A review of autonomous navigation systems in agricultural environments. In Proceedings of the SEAg 2013: Innovative Agricultural Technologies for a Sustainable Future, Barton, Australia, 22–25 September 2013. [Google Scholar]
  65. Benet, B.; Lenain, R. Multi-sensor fusion method for crop row tracking and traversability operations. In Proceedings of the Conférence AXEMA-EURAGENG 2017, Paris, France, 10–11 February 2017; p. 10. [Google Scholar]
  66. Shaikh, T.A.; Rasool, T.; Lone, F.R. Towards leveraging the role of machine learning and artificial intelligence in precision agriculture and smart farming. Comput. Electron. Agric. 2022, 198, 107119. [Google Scholar] [CrossRef]
  67. Yan, Y.; Zhang, B.; Zhou, J.; Zhang, Y.; Liu, X.A. Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments. Agronomy 2022, 12, 1740. [Google Scholar] [CrossRef]
  68. Kolar, P.; Benavidez, P.; Jamshidi, M. Survey of datafusion techniques for laser and vision based sensor integration for autonomous navigation. Sensors 2020, 20, 2180. [Google Scholar] [CrossRef] [Green Version]
  69. de Silva, R.; Cielniak, G.; Gao, J. Towards agricultural autonomy: Crop row detection under varying field conditions using deep learning. arXiv 2021, arXiv:2109.08247. [Google Scholar]
  70. Meng, Q.; Qiu, R.; He, J.; Zhang, M.; Ma, X.; Liu, G. Development of agricultural implement system based on machine vision and fuzzy control. Comput. Electron. Agric. 2015, 112, 128–138. [Google Scholar] [CrossRef]
  71. Xu, Z.; Shin, B.S.; Klette, R. Closed form line-segment extraction using the Hough transform. Pattern Recognit. 2015, 48, 4012–4023. [Google Scholar] [CrossRef]
  72. Marzougui, M.; Alasiry, A.; Kortli, Y.; Baili, J. A lane tracking method based on progressive probabilistic Hough transform. IEEE Access 2020, 8, 84893–84905. [Google Scholar] [CrossRef]
  73. Chung, K.L.; Huang, Y.H.; Tsai, S.R. Orientation-based discrete Hough transform for line detection with low computational complexity. Appl. Math. Comput. 2014, 237, 430–437. [Google Scholar] [CrossRef]
  74. Chai, Y.; Wei, S.J.; Li, X.C. The multi-scale Hough transform lane detection method based on the algorithm of Otsu and Canny. Adv. Mater. Res. 2014, 1042, 126–130. [Google Scholar] [CrossRef]
  75. Akinwande, M.O.; Dikko, H.G.; Samson, A. Variance inflation factor: As a condition for the inclusion of suppressor variable(s) in regression analysis. Open J. Stat. 2015, 5, 754. [Google Scholar] [CrossRef] [Green Version]
  76. Andargie, A.A.; Rao, K.S. Estimation of a linear model with two-parameter symmetric platykurtic distributed errors. J. Uncertain. Anal. Appl. 2013, 1, 13. [Google Scholar] [CrossRef] [Green Version]
  77. Milioto, A.; Lottes, P.; Stachniss, C. Real-time semantic segmentation of crop and weed for precision agriculture robots leveraging background knowledge in CNNs. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 2229–2235. [Google Scholar]
  78. Yang, Z.; Yang, Y.; Li, C.; Zhou, Y.; Zhang, X.; Yu, Y.; Liu, D. Tasseled Crop Rows Detection Based on Micro-Region of Interest and Logarithmic Transformation. Front. Plant Sci. 2022, 13, 916474. [Google Scholar] [CrossRef]
  79. Zheng, L.Y.; Xu, J.X. Multi-crop-row detection based on strip analysis. In Proceedings of the 2014 International Conference on Machine Learning and Cybernetics, Lanzhou, China, 13–16 July 2014; Volume 2, pp. 611–614. [Google Scholar]
  80. Zhou, Y.; Yang, Y.; Zhang, B.; Wen, X.; Yue, X.; Chen, L. Autonomous detection of crop rows based on adaptive multi-ROI in maize fields. Int. J. Agric. Biol. Eng. 2021, 14, 217–225. [Google Scholar] [CrossRef]
  81. Zhai, Z.; Zhu, Z.; Du, Y.; Song, Z.; Mao, E. Multi-crop-row detection algorithm based on binocular vision. Biosyst. Eng. 2016, 150, 89–103. [Google Scholar] [CrossRef]
  82. Benson, E.R.; Reid, J.F.; Zhang, Q. Machine vision–based guidance system for an agricultural small–grain harvester. Trans. ASAE 2003, 46, 1255. [Google Scholar] [CrossRef]
  83. Fontaine, V.; Crowe, T.G. Development of line-detection algorithms for local positioning in densely seeded crops. Can. Biosyst. Eng. 2006, 48, 7. [Google Scholar]
  84. Wang, A.; Zhang, W.; Wei, X. A review on weed detection using ground-based machine vision and image processing techniques. Comput. Electron. Agric. 2019, 158, 226–240. [Google Scholar] [CrossRef]
  85. Zhou, M.; Xia, J.; Yang, F.; Zheng, K.; Hu, M.; Li, D.; Zhang, S. Design and experiment of visual navigated UGV for orchard based on Hough matrix and RANSAC. Int. J. Agric. Biol. Eng. 2021, 14, 176–184. [Google Scholar] [CrossRef]
  86. Khan, N.; Rajendran, V.P.; Al Hasan, M.; Anwar, S. Clustering Algorithm Based Straight and Curved Crop Row Detection Using Color Based Segmentation. In Proceedings of the ASME 2020 International Mechanical Engineering Congress and Exposition, Virtual, 16–19 November 2020; American Society of Mechanical Engineers: New York, NY, USA, 2020; Volume 84553, p. V07BT07A003. [Google Scholar]
  87. Ghahremani, M.; Williams, K.; Corke, F.; Tiddeman, B.; Liu, Y.; Wang, X.; Doonan, J.H. Direct and accurate feature extraction from 3D point clouds of plants using RANSAC. Comput. Electron. Agric. 2021, 187, 106240. [Google Scholar] [CrossRef]
  88. Guo, J.; Wei, Z.; Miao, D. Lane detection method based on improved RANSAC algorithm. In Proceedings of the 2015 IEEE Twelfth International Symposium on Autonomous Decentralized Systems, Taichung, Taiwan, 25–27 March 2015; pp. 285–288. [Google Scholar]
  89. Ma, S.; Guo, P.; You, H.; He, P.; Li, G.; Li, H. An image matching optimization algorithm based on pixel shift clustering RANSAC. Inf. Sci. 2021, 562, 452–474. [Google Scholar] [CrossRef]
  90. Bossu, J.; Gée, C.; Jones, G.; Truchetet, F. Wavelet transform to discriminate between crop and weed in perspective agronomic images. Comput. Electron. Agric. 2009, 65, 133–143. [Google Scholar] [CrossRef]
  91. Arts, L.P.; van den Broek, E.L. The fast continuous wavelet transformation (fCWT) for real-time, high-quality, noise-resistant time–frequency analysis. Nat. Comput. Sci. 2022, 2, 47–58. [Google Scholar] [CrossRef]
  92. Hague, T.; Tillett, N.D. A bandpass filter-based approach to crop row location and tracking. Mechatronics 2001, 11, 1–12. [Google Scholar] [CrossRef]
  93. García-Santillán, I.D.; Montalvo, M.; Guerrero, J.M.; Pajares, G. Automatic detection of curved and straight crop rows from images in maize fields. Biosyst. Eng. 2017, 156, 61–79. [Google Scholar] [CrossRef]
  94. Saxena, A.; Prasad, M.; Gupta, A.; Bharill, N.; Patel, O.P.; Tiwari, A.; Lin, C.T. A review of clustering techniques and developments. Neurocomputing 2017, 267, 664–681. [Google Scholar] [CrossRef] [Green Version]
  95. Vidović, I.; Scitovski, R. Center-based clustering for line detection and application to crop rows detection. Comput. Electron. Agric. 2014, 109, 212–220. [Google Scholar] [CrossRef]
  96. Behura, A. The cluster analysis and feature selection: Perspective of machine learning and image processing. Data Anal. Bioinform. Mach. Learn. Perspect. 2021, 10, 249–280. [Google Scholar]
  97. Steward, B.L.; Gai, J.; Tang, L. The use of agricultural robots in weed management and control. Robot. Autom. Improv. Agric. 2019, 44, 1–25. [Google Scholar]
  98. Yu, Y.; Bao, Y.; Wang, J.; Chu, H.; Zhao, N.; He, Y.; Liu, Y. Crop row segmentation and detection in paddy fields based on treble-classification otsu and double-dimensional clustering method. Remote Sens. 2021, 13, 901. [Google Scholar] [CrossRef]
  99. Ezugwu, A.E.; Ikotun, A.M.; Oyelade, O.O.; Abualigah, L.; Agushaka, J.O.; Eke, C.I.; Akinyelu, A.A. A comprehensive survey of clustering algorithms: State-of-the-art machine learning applications, taxonomy, challenges, and future research prospects. Eng. Appl. Artif. Intell. 2022, 110, 104743. [Google Scholar] [CrossRef]
  100. Lachgar, M.; Hrimech, H.; Kartit, A. Optimization techniques in deep convolutional neuronal networks applied to olive diseases classification. Artif. Intell. Agric. 2022, 6, 77–89. [Google Scholar]
  101. Kamilaris, A.; Prenafeta-Boldú, F.X. Deep learning in agriculture: A survey. Comput. Electron. Agric. 2018, 147, 70–90. [Google Scholar] [CrossRef] [Green Version]
  102. De Castro, A.I.; Torres-Sánchez, J.; Peña, J.M.; Jiménez-Brenes, F.M.; Csillik, O.; López-Granados, F. An automatic random forest-OBIA algorithm for early weed mapping between and within crop rows using UAV imagery. Remote Sens. 2018, 10, 285. [Google Scholar] [CrossRef] [Green Version]
  103. You, J.; Liu, W.; Lee, J. A DNN-based semantic segmentation for detecting weed and crop. Comput. Electron. Agric. 2020, 178, 105750. [Google Scholar] [CrossRef]
  104. Doha, R.; Al Hasan, M.; Anwar, S.; Rajendran, V. Deep learning based crop row detection with online domain adaptation. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery Data Mining, Singapore, 14–18 August 2021; pp. 2773–2781. [Google Scholar]
  105. Picon, A.; San-Emeterio, M.G.; Bereciartua-Perez, A.; Klukas, C.; Eggers, T.; Navarra-Mestre, R. Deep learning-based segmentation of multiple species of weeds and corn crop using synthetic and real image datasets. Comput. Electron. Agric. 2022, 194, 106719. [Google Scholar] [CrossRef]
  106. de Silva, R.; Cielniak, G.; Wang, G.; Gao, J. Deep learning-based Crop Row Following for Infield Navigation of Agri-Robots. arXiv 2022, arXiv:2209.04278. [Google Scholar]
  107. Kumar, R.; Singh, M.P.; Kumar, P.; Singh, J.P. Crop Selection Method to maximize crop yield rate using machine learning technique. In Proceedings of the 2015 International Conference on Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), Avadi, India, 6–8 May 2015; pp. 138–145. [Google Scholar]
  108. Fue, K.G.; Porter, W.M.; Barnes, E.M.; Rains, G.C. An extensive review of mobile agricultural robotics for field operations: Focus on cotton harvesting. AgriEngineering 2020, 2, 150–174. [Google Scholar] [CrossRef] [Green Version]
  109. Sankaran, S.; Khot, L.R.; Espinoza, C.Z.; Jarolmasjed, S.; Sathuvalli, V.R.; Vandemark, G.J.; Pavek, M.J. Low-altitude, high-resolution aerial imaging systems for row and field crop phenotyping: A review. Eur. J. Agron. 2015, 70, 112–123. [Google Scholar] [CrossRef]
  110. Rejeb, A.; Rejeb, K.; Zailani, S.; Keogh, J.G.; Appolloni, A. Examining the interplay between artificial intelligence and the agri-food industry. Artif. Intell. Agric. 2022, 6, 111–128. [Google Scholar] [CrossRef]
  111. Jiang, Y.; Li, C.; Paterson, A.H.; Sun, S.; Xu, R.; Robertson, J. Quantitative analysis of cotton canopy size in field conditions using a consumer-grade RGB-D camera. Front. Plant Sci. 2018, 8, 2233. [Google Scholar] [CrossRef] [Green Version]
  112. Yao, Y.; Sun, J.; Tian, Y.; Zheng, C.; Liu, J. Alleviating water scarcity and poverty in drylands through telecouplings: Vegetable trade and tourism in northwest China. Sci. Total Environ. 2020, 741, 140387. [Google Scholar] [CrossRef]
  113. Jha, K.; Doshi, A.; Patel, P.; Shah, M. A comprehensive review on automation in agriculture using artificial intelligence. Artif. Intell. Agric. 2019, 2, 1–12. [Google Scholar] [CrossRef]
  114. Yu, J.; Cheng, T.; Cai, N.; Zhou, X.G.; Diao, Z.; Wang, T.; Zhang, D. Wheat Lodging Segmentation Based on Lstm_PSPNet Deep Learning Network. Drones 2023, 7, 143. [Google Scholar] [CrossRef]
  115. Emmi, L.; Herrera-Diaz, J.; Gonzalez-de-Santos, P. Toward Autonomous Mobile Robot Navigation in Early-Stage Crop Growth. In Proceedings of the 19th International Conference on Informatics in Control 2022, Automation and Robotics-ICINCO, Lisbon Portugal, 14–16 July 2022; pp. 411–418. [Google Scholar]
  116. Liang, X.; Chen, B.; Wei, C.; Zhang, X. Inter-row navigation line detection for cotton with broken rows. Plant Methods 2022, 18, 1–12. [Google Scholar] [CrossRef]
  117. Wei, C.; Li, H.; Shi, J.; Zhao, G.; Feng, H.; Quan, L. Row anchor selection classification method for early-stage crop row-following. Comput. Electron. Agric. 2022, 192, 106577. [Google Scholar] [CrossRef]
  118. Winterhalter, W.; Fleckenstein, F.; Dornhege, C.; Burgard, W. Localization for precision navigation in agricultural fields—Beyond crop row following. J. Field Robot. 2021, 38, 429–451. [Google Scholar] [CrossRef]
  119. Bakken, M.; Ponnambalam, V.R.; Moore, R.J.; Gjevestad, J.G.O.; From, P.J. Robot-supervised Learning of Crop Row Segmentation. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 2185–2191. [Google Scholar]
  120. Xie, Y.; Chen, K.; Li, W.; Zhang, Y.; Mo, J. An Improved Adaptive Threshold RANSAC Method for Medium Tillage Crop Rows Detection. In Proceedings of the 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 9–11 April 2021; pp. 1282–1286. [Google Scholar]
  121. He, C.; Chen, Q.; Miao, Z.; Li, N.; Sun, T. Extracting the navigation path of an agricultural plant protection robot based on machine vision. In Proceedings of the 2021 40th Chinese Control Conference (CCC), Shanghai, China, 26–28 July 2021; pp. 3576–3581. [Google Scholar]
  122. Gai, J.; Xiang, L.; Tang, L. Using a depth camera for crop row detection and mapping for under-canopy navigation of agricultural robotic vehicle. Comput. Electron. Agric. 2021, 188, 106301. [Google Scholar] [CrossRef]
  123. Ahmadi, A.; Nardi, L.; Chebrolu, N.; Stachniss, C. Visual servoing-based navigation for monitoring row-crop fields. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May 2020; pp. 4920–4926. [Google Scholar]
  124. Iqbal, J.; Xu, R.; Sun, S.; Li, C. Simulation of an autonomous mobile robot for LiDAR-based in-field phenotyping and navigation. Robotics 2020, 9, 46. [Google Scholar] [CrossRef]
  125. Ponnambalam, V.R.; Bakken, M.; Moore, R.J.; Glenn Omholt Gjevestad, J.; Johan From, P. Autonomous crop row guidance using adaptive multi-roi in strawberry fields. Sensors 2020, 20, 5249. [Google Scholar] [CrossRef] [PubMed]
  126. Velasquez, A.E.B.; Higuti, V.A.H.; Guerrero, H.B.; Gasparino, M.V.; Magalhaes, D.V.; Aroca, R.V.; Becker, M. Reactive navigation system based on H∞ control system and LiDAR readings on corn crops. Precis. Agric. 2020, 21, 349–368. [Google Scholar] [CrossRef]
  127. Xiuzhi, L.I.; Xiaobin, P.E.N.G.; Huimin, F.A.N.G. Navigation path detection of plant protection robot based on RANSAC algorithm. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach. 2020, 51, 41–46. [Google Scholar]
  128. Liao, J.; Wang, Y.; Zhu, D.; Zou, Y.; Zhang, S.; Zhou, H. Automatic segmentation of crop/background based on luminance partition correction and adaptive threshold. IEEE Access 2020, 8, 202611–202622. [Google Scholar] [CrossRef]
  129. Simon, N.A.; Min, C.H. Neural Network Based Corn Field Furrow Detection for Autonomous Navigation in Agriculture Vehicles. In Proceedings of the 2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Vancouver, BC, Canada, 9–12 September 2020; pp. 1–6. [Google Scholar]
  130. Higuti, V.A.; Velasquez, A.E.; Magalhaes, D.V.; Becker, M.; Chowdhary, G. Under canopy light detection and ranging-based autonomous navigation. J. Field Robot. 2019, 36, 547–567. [Google Scholar] [CrossRef]
  131. Winterhalter, W.; Fleckenstein, F.V.; Dornhege, C.; Burgard, W. Crop row detection on tiny plants with the pattern hough transform. IEEE Robot. Autom. Lett. 2018, 3, 3394–3401. [Google Scholar] [CrossRef]
  132. Zhang, X.; Li, X.; Zhang, B.; Zhou, J.; Tian, G.; Xiong, Y.; Gu, B. Automated robust crop-row detection in maize fields based on position clustering algorithm and shortest path method. Comput. Electron. Agric. 2018, 154, 165–175. [Google Scholar] [CrossRef]
  133. Li, J.; Zhu, R.; Chen, B. Image detection and verification of visual navigation route during cotton field management period. Int. J. Agric. Biol. Eng. 2018, 11, 159–165. [Google Scholar] [CrossRef] [Green Version]
  134. Meng, Q.; Hao, X.; Zhang, Y.; Yang, G. Guidance line identification for agricultural mobile robot based on machine vision. In Proceedings of the 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 October 2018; pp. 1887–1893. [Google Scholar]
  135. Yang, S.; Mei, S.; Zhang, Y. Detection of maize navigation centerline based on machine vision. IFAC-PapersOnLine 2018, 51, 570–575. [Google Scholar] [CrossRef]
  136. Reiser, D.; Miguel, G.; Arellano, M.V.; Griepentrog, H.W.; Paraforos, D.S. Crop row detection in maize for developing navigation algorithms under changing plant growth stages. In Proceedings of the Robot 2015: Second Iberian Robotics Conference, Lisbon, Portugal, 19–21 November 2016; Springer: Cham, Switzerland, 2016; pp. 371–382. [Google Scholar]
  137. Liu, L.; Mei, T.; Niu, R.; Wang, J.; Liu, Y.; Chu, S. RBF-based monocular vision navigation for small vehicles in narrow space below maize canopy. Appl. Sci. 2016, 6, 182. [Google Scholar] [CrossRef] [Green Version]
  138. Jiang, G.; Wang, X.; Wang, Z.; Liu, H. Wheat rows detection at the early growth stage based on Hough transform and vanishing point. Comput. Electron. Agric. 2016, 123, 211–223. [Google Scholar] [CrossRef]
  139. Tu, C.; Van Wyk, B.J.; Djouani, K.; Hamam, Y.; Du, S. An efficient crop row detection method for agriculture robots. In Proceedings of the 2014 7th International Congress on Image and Signal Processing, Dalian, China, 14–16 October 2014; pp. 655–659. [Google Scholar]
  140. Zhu, Z.X.; He, Y.; Zhai, Z.Q.; Liu, J.Y.; Mao, E.R. Research on cotton row detection algorithm based on binocular vision. Appl. Mech. Mater. 2014, 670, 1222–1227. [Google Scholar] [CrossRef]
  141. Su, D.; Qiao, Y.; Kong, H.; Sukkarieh, S. Real time detection of inter-row ryegrass in wheat farms using deep learning. Biosyst. Eng. 2021, 204, 198–211. [Google Scholar] [CrossRef]
  142. Du, Y.; Mallajosyula, B.; Sun, D.; Chen, J.; Zhao, Z.; Rahman, M.; Jawed, M.K. A Low-cost Robot with Autonomous Recharge and Navigation for Weed Control in Fields with Narrow Row Spacing. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27–30 September 2021; pp. 3263–3270. [Google Scholar]
  143. Rabab, S.; Badenhorst, P.; Chen, Y.P.P.; Daetwyler, H.D. A template-free machine vision-based crop row detection algorithm. Precis. Agric. 2021, 22, 124–153. [Google Scholar] [CrossRef]
  144. Czymmek, V.; Schramm, R.; Hussmann, S. Vision based crop row detection for low cost uav imagery in organic agriculture. In Proceedings of the 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, 25–28 May 2020; pp. 1–6. [Google Scholar]
  145. Pusdá-Chulde, M.; Giusti, A.D.; Herrera-Granda, E.; García-Santillán, I. Parallel CPU-based processing for automatic crop row detection in corn fields. In Proceedings of the XV Multidisciplinary International Congress on Science and Technology, Quito, Ecuador, 19–23 October 2020; Springer: Cham, Switzerland, 2020; pp. 239–251. [Google Scholar]
  146. Kulkarni, S.; Angadi, S.A.; Belagavi, V.T.U. IoT based weed detection using image processing and CNN. Int. J. Eng. Appl. Sci. Technol. 2019, 4, 606–609. [Google Scholar]
  147. Czymmek, V.; Harders, L.O.; Knoll, F.J.; Hussmann, S. Vision-based deep learning approach for real-time detection of weeds in organic farming. In Proceedings of the 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Auckland, New Zealand, 20–23 May 2019; pp. 1–5. [Google Scholar]
  148. Hassanein, M.; Khedr, M.; El-Sheimy, N. Crop row detection procedure using low-cost UAV imagery system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 349–356. [Google Scholar] [CrossRef] [Green Version]
  149. Bah, M.D.; Hafiane, A.; Canals, R. CRowNet: Deep network for crop row detection in UAV images. IEEE Access 2019, 8, 5189–5200. [Google Scholar] [CrossRef]
  150. Tenhunen, H.; Pahikkala, T.; Nevalainen, O.; Teuhola, J.; Mattila, H.; Tyystjärvi, E. Automatic detection of cereal rows by means of pattern recognition techniques. Comput. Electron. Agric. 2019, 162, 677–688. [Google Scholar] [CrossRef]
  151. García-Santillán, I.; Guerrero, J.M.; Montalvo, M.; Pajares, G. Curved and straight crop row detection by accumulation of green pixels from images in maize fields. Precis. Agric. 2018, 19, 18–41. [Google Scholar] [CrossRef]
  152. Kaur, M.; Min, C.H. Automatic crop furrow detection for precision agriculture. In Proceedings of the 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS), Windsor, ON, Canada, 5–8 August 2018; pp. 520–523. [Google Scholar]
  153. Hamuda, E.; Mc Ginley, B.; Glavin, M.; Jones, E. Improved image processing-based crop detection using Kalman filtering and the Hungarian algorithm. Comput. Electron. Agric. 2018, 148, 37–44. [Google Scholar] [CrossRef]
  154. Bah, M.D.; Hafiane, A.; Canals, R. Deep learning with unsupervised data labeling for weed detection in line crops in UAV images. Remote Sens. 2018, 10, 1690. [Google Scholar] [CrossRef] [Green Version]
  155. Malavazi, F.B.; Guyonneau, R.; Fasquel, J.B.; Lagrange, S.; Mercier, F. LiDAR-only based navigation algorithm for an autonomous agricultural robot. Comput. Electron. Agric. 2018, 154, 71–79. [Google Scholar] [CrossRef]
  156. Lavania, S.; Matey, P.S. Novel method for weed classification in maize field using Otsu and PCA implementation. In Proceedings of the 2015 IEEE International Conference on Computational Intelligence Communication Technology, Ghaziabad, India, 13–14 February 2015; pp. 534–537. [Google Scholar]
  157. Nan, L.; Chunlong, Z.; Ziwen, C.; Zenghong, M.; Zhe, S.; Ting, Y.; Junxiong, Z. Crop positioning for robotic intra-row weeding based on machine vision. Int. J. Agric. Biol. Eng. 2015, 8, 20–29. [Google Scholar]
  158. Pérez-Ortiz, M.; Peña, J.M.; Gutiérrez, P.A.; Torres-Sánchez, J.; Hervás-Martínez, C.; López-Granados, F. A semi-supervised system for weed mapping in sunflower crops using unmanned aerial vehicles and a crop row detection method. Appl. Soft Comput. 2015, 37, 533–544. [Google Scholar] [CrossRef]
  159. Kiani, S.; Jafari, A. Crop detection and positioning in the field using discriminant analysis and neural networks based on shape features. J. Agr. Sci. Tech. 2012, 14, 755–765. [Google Scholar]
  160. Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346. [Google Scholar] [CrossRef] [Green Version]
  161. Hemming, J.; Nieuwenhuizen, A.T.; Struik, L.E. Image Analysis System to Determine Crop Row and Plant Positions for an Intra-Row Weeding Machine. 2011, 6, pp. 1–7. Available online: https://edepot.wur.nl/180044 (accessed on 11 June 2023).
  162. Ota, K.; Kasahara, J.Y.L.; Yamashita, A.; Asama, H. Weed and Crop Detection by Combining Crop Row Detection and K-means Clustering in Weed Infested Agricultural Fields. In Proceedings of the 2022 IEEE/SICE International Symposium on System Integration (SII), Narvik, Norway, 9–12 January 2022; pp. 985–990. [Google Scholar]
  163. Cao, M.; Tang, F.; Ji, P.; Ma, F. Improved Real-Time Semantic Segmentation Network Model for Crop Vision Navigation Line Detection. Front. Plant Sci. 2022, 13, 898131. [Google Scholar] [CrossRef] [PubMed]
  164. Basso, M.; Pignaton de Freitas, E. A UAV guidance system using crop row detection and line follower algorithms. J. Intell. Robot. Syst. 2020, 97, 605–621. [Google Scholar] [CrossRef]
  165. Fue, K.; Porter, W.; Barnes, E.; Li, C.; Rains, G. Evaluation of a stereo vision system for cotton row detection and boll location estimation in direct sunlight. Agronomy 2020, 10, 1137. [Google Scholar] [CrossRef]
  166. Fareed, N.; Rehman, K. Integration of remote sensing and GIS to extract plantation rows from a drone-based image point cloud digital surface model. ISPRS Int. J. Geo-Inf. 2020, 9, 151. [Google Scholar] [CrossRef] [Green Version]
  167. Wang, L.; Yang, Y.; Shi, J. Measurement of harvesting width of intelligent combine harvester by improved probabilistic Hough transform algorithm. Measurement 2020, 151, 107130. [Google Scholar] [CrossRef]
  168. Li, X.; Lloyd, R.; Ward, S.; Cox, J.; Coutts, S.; Fox, C. Robotic crop row tracking around weeds using cereal-specific features. Comput. Electron. Agric. 2022, 197, 106941. [Google Scholar] [CrossRef]
  169. Casuccio, L.; Kotze, A. Corn planting quality assessment in very high-resolution RGB UAV imagery using Yolov5 and Python. AGILE GISci. Ser. 2022, 3, 28. [Google Scholar] [CrossRef]
  170. LeVoir, S.J.; Farley, P.A.; Sun, T.; Xu, C. High-Accuracy adaptive low-cost location sensing subsystems for autonomous rover in precision agriculture. IEEE Open J. Ind. Appl. 2020, 1, 74–94. [Google Scholar] [CrossRef]
  171. Tian, Z.; Junfang, X.; Gang, W.; Jianbo, Z. Automatic navigation path detection method for tillage machines working on high crop stubble fields based on machine vision. Int. J. Agric. Biol. Eng. 2014, 7, 29. [Google Scholar]
  172. Ulloa, C.C.; Krus, A.; Barrientos, A.; del Cerro, J.; Valero, C. Robotic fertilization in strip cropping using a CNN vegetables detection-characterization method. Comput. Electron. Agric. 2022, 193, 106684. [Google Scholar] [CrossRef]
  173. Azeta, J.; Bolu, C.A.; Alele, F.; Daranijo, E.O.; Onyeubani, P.; Abioye, A.A. Application of Mechatronics in Agriculture: A review. J. Phys. Conf. Ser. 2019, 1378, 032006. [Google Scholar] [CrossRef]
  174. Klein, L.J.; Hamann, H.F.; Hinds, N.; Guha, S.; Sanchez, L.; Sams, B.; Dokoozlian, N. Closed loop controlled precision irrigation sensor network. IEEE Internet Things J. 2018, 5, 4580–4588. [Google Scholar] [CrossRef]
  175. Rehman, A.; Saba, T.; Kashif, M.; Fati, S.M.; Bahaj, S.A.; Chaudhry, H. A revisit of internet of things technologies for monitoring and control strategies in smart agriculture. Agronomy 2022, 12, 127. [Google Scholar] [CrossRef]
  176. Wu, J.; Deng, M.; Fu, L.; Miao, J. Vanishing Point Conducted Diffusion for Crop Rows Detection. In Proceedings of the International Conference on Intelligent and Interactive Systems and Applications, Bangkok, Thailand, 28–30 June 2018; Springer: Cham, Switzerland, 2018; pp. 404–416. [Google Scholar]
  177. Ronchetti, G.; Mayer, A.; Facchi, A.; Ortuani, B.; Sona, G. Crop row detection through UAV surveys to optimize on-farm irrigation management. Remote Sens. 2020, 12, 1967. [Google Scholar] [CrossRef]
  178. Singh, A.K.; Tariq, T.; Ahmer, M.F.; Sharma, G.; Bokoro, P.N.; Shongwe, T. Intelligent Control of Irrigation Systems Using Fuzzy Logic Controller. Energies 2022, 15, 7199. [Google Scholar] [CrossRef]
  179. Pang, Y.; Shi, Y.; Gao, S.; Jiang, F.; Veeranampalayam-Sivakumar, A.N.; Thompson, L.; Liu, C. Improved crop row detection with deep neural network for early-season maize stand count in UAV imagery. Comput. Electron. Agric. 2020, 178, 105766. [Google Scholar] [CrossRef]
  180. Zhang, W.; Miao, Z.; Li, N.; He, C.; Sun, T. Review of Current Robotic Approaches for Precision Weed Management. Curr. Robot. Rep. 2022, 3, 139–151. [Google Scholar] [CrossRef]
  181. Li, Y.; Guo, Z.; Shuang, F.; Zhang, M.; Li, X. Key technologies of machine vision for weeding robots: A review and benchmark. Comput. Electron. Agric. 2022, 196, 106880. [Google Scholar] [CrossRef]
  182. Wendel, A.; Underwood, J. Self-supervised weed detection in vegetable crops using ground based hyperspectral imaging. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5128–5135. [Google Scholar]
  183. Zhao, Y.; Gong, L.; Huang, Y.; Liu, C. A review of key techniques of vision-based control for harvesting robot. Comput. Electron. Agric. 2016, 127, 311–323. [Google Scholar] [CrossRef]
  184. Li, B.; Yang, Y.; Qin, C.; Bai, X.; Wang, L. Improved random sampling consensus algorithm for vision navigation of intelligent harvester robot. Ind. Robot. Int. J. Robot. Res. Appl. 2020, 47, 881–887. [Google Scholar] [CrossRef]
  185. Benson, E.R.; Reid, J.F.; Zhang, Q.; Pinto, F.A.C. An adaptive fuzzy crop edge detection method for machine vision. In Proceedings of the Annual International Meeting Paper, New York, NY, USA, 3–5 July 2000; No. 001019. pp. 49085–49659. [Google Scholar]
  186. Pilarski, T.; Happold, M.; Pangels, H.; Ollis, M.; Fitzpatrick, K.; Stentz, A. The demeter system for automated harvesting. Auton. Robot. 2002, 13, 9–20. [Google Scholar] [CrossRef]
  187. Chen, J.; Qiang, H.; Wu, J.; Xu, G.; Wang, Z. Navigation path extraction for greenhouse cucumber-picking robots using the prediction-point Hough transform. Comput. Electron. Agric. 2021, 180, 105911. [Google Scholar] [CrossRef]
  188. Xu, S.; Wu, J.; Zhu, L.; Li, W.; Wang, Y.; Wang, N. A novel monocular visual navigation method for cotton-picking robot based on horizontal spline segmentation. In MIPPR 2015: Automatic Target Recognition and Navigation; SPIE: Bellingham, WA, USA, 2015; Volume 9812, pp. 310–315. [Google Scholar]
  189. Choi, K.H.; Han, S.K.; Park, K.H.; Kim, K.S.; Kim, S. Vision based guidance line extraction for autonomous weed control robot in paddy field. In Proceedings of the 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO), Zhuhai, China, 6–9 December 2015; pp. 831–836. [Google Scholar]
  190. Li, D.; Li, B.; Long, S.; Feng, H.; Xi, T.; Kang, S.; Wang, J. Rice seedling row detection based on morphological anchor points of rice stems. Biosyst. Eng. 2023, 226, 71–85. [Google Scholar] [CrossRef]
  191. Hu, Y.; Huang, H. Extraction Method for Centerlines of Crop Row Based on Improved Lightweight Yolov4. In Proceedings of the 2021 6th International Symposium on Computer and Information Processing Technology (ISCIPT), Changsha, China, 11–13 June 2021; pp. 127–132. [Google Scholar]
  192. Tao, Z.; Ma, Z.; Du, X.; Yu, Y.; Wu, C. A crop root row detection algorithm for visual navigation in rice fields. In Proceedings of the 2020 ASABE Annual International Virtual Meeting, Joseph, MI, USA, 3–5 April 2020; p. 1. [Google Scholar]
  193. Kanagasingham, S.; Ekpanyapong, M.; Chaihan, R. Integrating machine vision-based row guidance with GPS and compass-based routing to achieve autonomous navigation for a rice field weeding robot. Precis. Agric. 2020, 21, 831–855. [Google Scholar] [CrossRef]
  194. Adhikari, S.P.; Yang, H.; Kim, H. Learning semantic graphics using convolutional encoder–decoder network for autonomous weeding in paddy. Front. Plant Sci. 2019, 10, 1404. [Google Scholar] [CrossRef] [Green Version]
  195. Sodjinou, S.G.; Mohammadi, V.; Mahama, A.T.S.; Gouton, P. A deep semantic segmentation-based algorithm to segment crops and weeds in agronomic color images. Inf. Process. Agric. 2022, 9, 355–364. [Google Scholar] [CrossRef]
  196. Lin, S.; Jiang, Y.; Chen, X.; Biswas, A.; Li, S.; Yuan, Z.; Qi, L. Automatic detection of plant rows for a transplanter in paddy field using faster r-cnn. IEEE Access 2020, 8, 147231–147240. [Google Scholar] [CrossRef]
  197. Liao, J.; Wang, Y.; Yin, J.; Liu, L.; Zhang, S.; Zhu, D. Segmentation of rice seedlings using the YCrCb color space and an improved Otsu method. Agronomy 2018, 8, 269. [Google Scholar] [CrossRef] [Green Version]
  198. Zhang, F. Detecting Crop Rows for Automated Rice Transplanters Based on Radon Transform. Sens. Lett. 2013, 11, 1100–1105. [Google Scholar] [CrossRef]
  199. Chen, J.; Song, J.; Guan, Z.; Lian, Y. Measurement of the distance from grain divider to harvesting boundary based on dynamic regions of interest. Int. J. Agric. Biol. Eng. 2021, 14, 226–232. [Google Scholar] [CrossRef]
  200. Huang, S.; Wu, S.; Sun, C.; Ma, X.; Jiang, Y.; Qi, L. Deep localization model for intra-row crop detection in paddy field. Comput. Electron. Agric. 2020, 169, 105203. [Google Scholar] [CrossRef]
  201. Khadatkar, A.; Mathur, S.M.; Dubey, K.; BhusanaBabu, V. Development of embedded automatic transplanting system in seedling transplanters for precision agriculture. Artif. Intell. Agric. 2021, 5, 175–184. [Google Scholar] [CrossRef]
  202. Paradkar, V.; Raheman, H.; Rahul, K. Development of a metering mechanism with serial robotic arm for handling paper pot seedlings in a vegetable transplanter. Artif. Intell. Agric. 2021, 5, 52–63. [Google Scholar] [CrossRef]
  203. Liao, J.; Wang, Y.; Yin, J.; Bi, L.; Zhang, S.; Zhou, H.; Zhu, D. An integrated navigation method based on an adaptive federal Kalman filter for a rice transplanter. Trans. ASABE 2021, 64, 389–399. [Google Scholar] [CrossRef]
  204. Oliveira, L.F.; Moreira, A.P.; Silva, M.F. Advances in agriculture robotics: A state-of-the-art review and challenges ahead. Robotics 2021, 10, 52. [Google Scholar] [CrossRef]
  205. Bao, Y.; Gai, J.; Xiang, L.; Tang, L. Field robotic systems for high-throughput plant phenotyping: A review and a case study. In High-Throughput Crop Phenotyping; Springer International Publishing: Cham, Switzerland, 2021; pp. 13–38. [Google Scholar]
  206. Li, Y.; Nie, J.; Chao, X. Do we really need deep CNN for plant diseases identification? Comput. Electron. Agric. 2020, 178, 105803. [Google Scholar] [CrossRef]
  207. Wang, S.; Wang, L.; Xiao, H.; Qin, C. Visual measurement method of crop height based on color feature in harvesting robot. SN Appl. Sci. 2023, 5, 59. [Google Scholar] [CrossRef]
  208. Peng, H.; Li, Z.; Zhou, Z.; Shao, Y. Weed detection in paddy field using an improved RetinaNet network. Comput. Electron. Agric. 2022, 199, 107179. [Google Scholar] [CrossRef]
  209. Mousazadeh, H. A technical review on navigation systems of agricultural autonomous off-road vehicles. J. Terramechanics 2013, 50, 211–232. [Google Scholar] [CrossRef]
  210. Chen, B.; Tojo, S.; Watanabe, K. Machine vision for a micro weeding robot in a paddy field. Biosyst. Eng. 2003, 85, 393–404. [Google Scholar] [CrossRef]
  211. Zhang, Q.; Huang, X.; Li, B. Detection of rice seedlings rows’ centerlines based on color model and nearest neighbor clustering algorithm. Trans. Chin. Soc. Agric. Eng. 2012, 28, 163–171. [Google Scholar]
  212. Choi, K.H.; Han, S.K.; Han, S.H.; Park, K.H.; Kim, K.S.; Kim, S. Morphology-based guidance line extraction for an autonomous weeding robot in paddy fields. Comput. Electron. Agric. 2015, 113, 266–274. [Google Scholar] [CrossRef]
  213. Zhang, Q.; Chen, M.S.; Li, B. A visual navigation algorithm for paddy field weeding robot based on image understanding. Comput. Electron. Agric. 2017, 143, 66–78. [Google Scholar] [CrossRef]
  214. Bai, Y.; Zhang, B.; Xu, N.; Zhou, J.; Shi, J.; Diao, Z. Vision-based navigation and guidance for agricultural autonomous vehicles and robots: A review. Comput. Electron. Agric. 2023, 205, 107584. [Google Scholar] [CrossRef]
  215. Bell, J.; MacDonald, B.A.; Ahn, H.S. Row following in pergola structured orchards by a monocular camera using a fully convolutional neural network. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 640–645. [Google Scholar]
  216. Cerrato, S.; Mazzia, V.; Salvetti, F.; Chiaberge, M. A deep learning driven algorithmic pipeline for autonomous navigation in row-based crops. arXiv 2021, arXiv:2112.03816. [Google Scholar]
  217. Huang, P.; Zhu, L.; Zhang, Z.; Yang, C. An End-to-End Learning-Based Row-Following System for an Agricultural Robot in Structured Apple Orchards. Math. Probl. Eng. 2021, 2021, 6221119. [Google Scholar] [CrossRef]
  218. Opiyo, S.; Okinda, C.; Zhou, J.; Mwangi, E.; Makange, N. Medial axis-based machine-vision system for orchard robot navigation. Comput. Electron. Agric. 2021, 185, 106153. [Google Scholar] [CrossRef]
  219. Lyu, H.K.; Park, C.H.; Han, D.H.; Kwak, S.W.; Choi, B. Orchard free space and center line estimation using Naive Bayesian classifier for unmanned ground self-driving vehicle. Symmetry 2018, 10, 355. [Google Scholar] [CrossRef] [Green Version]
  220. Radcliffe, J.; Cox, J.; Bulanon, D.M. Machine vision for orchard navigation. Comput. Ind. 2018, 98, 165–171. [Google Scholar] [CrossRef]
  221. Nehme, H.; Aubry, C.; Rossi, R.; Boutteau, R. An Anomaly Detection Approach to Monitor the Structured-Based Navigation in Agricultural Robotics. In Proceedings of the 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), Lyon, France, 23–27 August 2021; pp. 1111–1117. [Google Scholar]
  222. Danton, A.; Roux, J.C.; Dance, B.; Cariou, C.; Lenain, R. Development of a spraying robot for precision agriculture: An edge following approach. In Proceedings of the 2020 IEEE Conference on Control Technology and Applications (CCTA), Montreal, QC, Canada, 24–26 August 2020; pp. 267–272. [Google Scholar]
  223. Benet, B.; Lenain, R.; Rousseau, V. Development of a sensor fusion method for crop row tracking operations. Adv. Anim. Biosci. 2017, 8, 583–589. [Google Scholar] [CrossRef]
  224. Comba, L.; Gay, P.; Primicerio, J.; Aimonino, D.R. Vineyard detection from unmanned aerial systems images. Comput. Electron. Agric. 2015, 114, 78–87. [Google Scholar] [CrossRef]
  225. Rosell Polo, J.R.; Sanz Cortiella, R.; Llorens Calveras, J.; Arnó Satorra, J.; Escolà i Agustí, A.; Ribes Dasi, M.; Palacín Roca, J. A tractor-mounted scanning LIDAR for the non-destructive measurement of vegetative volume and surface area of tree-row plantations: A comparison with conventional destructive measurements. Biosyst. Eng. 2009, 102, 128–134. [Google Scholar] [CrossRef] [Green Version]
  226. Wang, J.; Sun, X.; Xu, Y.; Zhou, W.; Tang, H.; Wang, Q. Timeliness harvesting loss of rice in cold region under different mechanical harvesting methods. Sustainability 2021, 13, 6345. [Google Scholar] [CrossRef]
  227. Jia, W.; Zhang, Y.; Lian, J.; Zheng, Y.; Zhao, D.; Li, C. Apple harvesting robot under information technology: A review. Int. J. Adv. Robot. Syst. 2020, 17, 1–16. [Google Scholar] [CrossRef]
  228. Ding, H.; Zhang, B.; Zhou, J.; Yan, Y.; Tian, G.; Gu, B. Recent developments and applications of simultaneous localization and mapping in agriculture. J. Field Robot. 2022, 39, 956–983. [Google Scholar] [CrossRef]
  229. Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19. [Google Scholar] [CrossRef]
  230. Ma, Y.; Zhang, W.; Qureshi, W.S.; Gao, C.; Zhang, C.; Li, W. Autonomous navigation for a wolfberry picking robot using visual cues and fuzzy control. Inf. Process. Agric. 2021, 8, 15–26. [Google Scholar] [CrossRef]
  231. Blok, P.M.; van Boheemen, K.; van Evert, F.K.; IJsselmuiden, J.; Kim, G.H. Robot navigation in orchards with localization based on Particle filter and Kalman filter. Comput. Electron. Agric. 2019, 157, 261–269. [Google Scholar] [CrossRef]
  232. Simon, S.; Bouvier, J.C.; Debras, J.F.; Sauphanor, B. Biodiversity and pest management in orchard systems. Sustain. Agric. 2011, 2, 693–709. [Google Scholar]
  233. Gong, J.; Fan, W.; Peng, J. Application analysis of hydraulic nozzle and rotary atomization sprayer on plant protection UAV. Int. J. Precis. Agric. Aviat. 2019, 2, 25–29. [Google Scholar] [CrossRef]
  234. Gao, G.; Xiao, K.; Jia, Y. A spraying path planning algorithm based on colour-depth fusion segmentation in peach orchards. Comput. Electron. Agric. 2020, 173, 105412. [Google Scholar] [CrossRef]
  235. Kim, J.; Seol, J.; Lee, S.; Hong, S.W.; Son, H.I. An intelligent spraying system with deep learning-based semantic segmentation of fruit trees in orchards. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May 2020; pp. 3923–3929. [Google Scholar]
  236. Liu, L.; Liu, Y.; He, X.; Liu, W. Precision Variable-Rate Spraying Robot by Using Single 3D LIDAR in Orchards. Agronomy 2022, 12, 2509. [Google Scholar] [CrossRef]
  237. Zhang, L.; Zhu, X.; Huang, J.; Huang, J.; Xie, J.; Xiao, X.; Fang, K. BDS/IMU Integrated Auto-Navigation System of Orchard Spraying Robot. Appl. Sci. 2022, 12, 8173. [Google Scholar] [CrossRef]
  238. Yano, A.; Cossu, M. Energy sustainable greenhouse crop cultivation using photovoltaic technologies. Renew. Sustain. Energy Rev. 2019, 109, 116–137. [Google Scholar] [CrossRef]
  239. Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng. 2016, 149, 94–111. [Google Scholar] [CrossRef]
  240. Abanay, A.; Masmoudi, L.; El Ansari, M.; Gonzalez-Jimenez, J.; Moreno, F.A. LIDAR-based autonomous navigation method for an agricultural mobile robot in strawberry greenhouse: AgriEco Robot. AIMS Electron. Electr. Eng. 2022, 6, 317–328. [Google Scholar] [CrossRef]
  241. Chen, J.; Qiang, H.; Wu, J.; Xu, G.; Wang, Z.; Liu, X. Extracting the navigation path of a tomato-cucumber greenhouse robot based on a median point Hough transform. Comput. Electron. Agric. 2020, 174, 105472. [Google Scholar] [CrossRef]
  242. Le, T.D.; Ponnambalam, V.R.; Gjevestad, J.G.; From, P.J. A low-cost and efficient autonomous row-following robot for food production in polytunnels. J. Field Robot. 2020, 37, 309–321. [Google Scholar] [CrossRef] [Green Version]
  243. Xue, J.L.; Fan, B.W.; Zhang, X.X.; Feng, Y. An agricultural robot for multipurpose operations in a greenhouse. In Proceedings of the 2017 International Conference on Mechanical and Mechatronics Engineering (ICMME 2017), Kortrijk, Belgium, 24–26 February 2017. [Google Scholar]
  244. Wang, H.; Ji, C.; An, Q.; Ding, Q. Detection of navigation route in greenhouse environment with machine vision. In Proceedings of the Fourth International Conference on Machine Vision (ICMV 2011): Machine Vision, Image Processing, and Pattern Analysis, Singapore, 11–13 January 2012; Volume 8349, pp. 375–380. [Google Scholar]
  245. Mahmud, M.A.; Abidin, M.Z.; Mohamed, Z. Crop identification and navigation design based on probabilistic roadmap for crop inspection robot. In Proceedings of the International Conference on Agricultural and Food Engineering (CAFEi2016), Copenhagen, Denmark, 11–13 August 2016; Volume 23, p. 25. [Google Scholar]
  246. Wang, F. Guidance line detection for strawberry field in greenhouse. In Proceedings of the 2010 Symposium on Photonics and Optoelectronics, Chengdu, China, 19–21 June 2010; pp. 1–4. [Google Scholar]
  247. Aravind, K.R.; Raja, P.; Pérez-Ruiz, M. Task-based agricultural mobile robots in arable farming: A review. Span. J. Agric. Res. 2017, 15, e02R01. [Google Scholar] [CrossRef]
  248. Fountas, S.; Mylonas, N.; Malounas, I.; Rodias, E.; Hellmann Santos, C.; Pekkeriet, E. Agricultural robotics for field operations. Sensors 2020, 20, 2672. [Google Scholar] [CrossRef] [PubMed]
  249. Mahmud, M.S.A.; Abidin, M.S.Z.; Mohamed, Z. Development of an autonomous crop inspection mobile robot system. In Proceedings of the 2015 IEEE Student Conference on Research and Development (SCOReD), Kuala Lumpur, Malaysia, 13–14 December 2015; pp. 105–110. [Google Scholar]
  250. Wang, B.; Ding, Y.; Wang, C.; Li, D.; Wang, H.; Bie, Z.; Xu, S. G-ROBOT: An Intelligent Greenhouse Seedling Height Inspection Robot. J. Robot. 2022, 2022, 9355234. [Google Scholar] [CrossRef]
  251. Zhang, X.; Guo, Y.; Yang, J.; Li, D.; Wang, Y.; Zhao, R. Many-objective evolutionary algorithm based agricultural mobile robot route planning. Comput. Electron. Agric. 2022, 200, 107274. [Google Scholar] [CrossRef]
  252. Xie, D.; Chen, L.; Liu, L.; Chen, L.; Wang, H. Actuators and Sensors for Application in Agricultural Robots: A Review. Machines 2022, 10, 913. [Google Scholar] [CrossRef]
  253. Hu, N.; Su, D.; Wang, S.; Nyamsuren, P.; Qiao, Y.; Jiang, Y.; Cai, Y. LettuceTrack: Detection and tracking of lettuce for robotic precision spray in agriculture. Front. Plant Sci. 2022, 13, 1003243. [Google Scholar] [CrossRef]
Figure 1. Applications of agricultural robots working in row-crop fields.
Figure 2. Applications of crop row detection in drylands. (a) Flowchart of vision-based line detection in a corn field [117]; (b) Schematic of the camera position while collecting data; (c) Results of crop row detection through UAV images and the improved deep neural network [179].
Figure 3. Applications of crop row detection in paddy fields. (a) Vision-based guidance line extraction in the paddy field [189]; (b) Schematic of the rows detected by the clustering algorithm based on dynamic search direction [190]; (c) Detection steps based on 2D clustering method [98].
Figure 4. Applications of navigation and path detection under orchard canopy. (a) Mobile orchard robots; (b) Vision-based row detection in orchards; (c) LiDAR-based row detection in kiwifruit orchards [215]; (d) A visual representation of the simulation environment; (e) Visual representation of navigation path; (f) Representation of the linear and angular offsets of the robot in the coordinate system [216].
Figure 5. Applications of navigation-based crop row detection in greenhouses. (a) Position of the robot relative to the crop [240]; (b) Robot motion path extraction results in a greenhouse tomato field [241]; (c) Snapshots of the robot movement in simulation; (d) Topological map for navigation [242].
Table 1. A detailed summary of applications of sensors for crop row detection in drylands.
| Application | Sensors | Scenarios | Methods | CRDA | Work |
|---|---|---|---|---|---|
| Navigation | RGB and ToF camera | Corn and wheat fields | DBSCAN | - | [115] |
| | Camera | Cotton crops | Iterative least squares method | - | [116] |
| | Camera | Corn crop field | RASCM | 97.90% | [117] |
| | GNSS and color camera | Cabbage field | Pattern RANSAC | 83% | [118] |
| | RGB camera | Strawberry field | Convolutional neural network (CNN) | - | [119] |
| | RGB camera | Corn and soybean field | Adaptive threshold RANSAC | 92.18% | [120] |
| | Camera | Soybean seedlings | Least squares method (LSM) | 90% | [121] |
| | ToF camera | Corn and sorghum fields | Euclidean clustering algorithm and linear programming | - | [122] |
| | Camera | Crop field | Sliding window technique | - | [123] |
| | 2D LiDAR | Cotton field | RANSAC | - | [124] |
| | Stereo camera | Strawberry fields | Adaptive multi-ROI algorithm | - | [125] |
| | LiDAR | Maize fields | LST and filter method | 95% | [126] |
| | RGB camera | Corn seedling | LSM | 93.8% | [127] |
| | Camera | Crop images | Luminance partition correction and adaptive threshold | - | [128] |
| | Camera | Corn field | Neural network model | 97% | [129] |
| | LiDAR | Corn crop | LSM | - | [130] |
| | 3D laser and vision camera | Winter canola and sugar beet plants | Pattern Hough transform | - | [131] |
| | Color camera | Maize fields | LSM | - | [132] |
| | Camera | Cotton field | HT | 100% | [133] |
| | Color video camera | Soybeans, wheat and cabbage | Improved genetic algorithm (IGA) | - | [134] |
| | RGB camera | Maize fields | LSM | 92% | [135] |
| | 3D LiDAR laser scanner | Maize rows | RANSAC | - | [136] |
| | DFK camera | Maize field | Radial basis function (RBF) algorithm | 95% | [137] |
| | Color camera | Wheat rows | Vanishing point detection based on k-means clustering | 90% | [138] |
| | Camera | Cropland | Quadrangle method | - | [139] |
| | 3D stereo camera | Cotton field | Parallax distance measuring method and HT | 90% | [140] |
| Weeding | RGB and NIR camera | Ryegrass | Deep neural network (DNN) | - | [141] |
| | Monocular camera | Flaxseed fields | Contour algorithm | 93.5% | [142] |
| | Camera | Ryegrass | Perspective projection method | 90% | [143] |
| | Raspberry Pi camera | Carrot field | PPHLT | - | [144] |
| | Camera | Corn field | LRM | - | [145] |
| | Camera | Maize field | CNN | 85% | [146] |
| | Industrial camera | Carrot fields | CNN | 89% | [147] |
| | Sequoia camera | Canola field | Design line scanner scans | - | [148] |
| | Camera | Maize field | HT | 93.58% | [149] |
| | Camera | Cereal fields | K-means clustering method | 94% | [150] |
| | Monocular camera | Maize fields | LRM | - | [151] |
| | - | Maize fields | HT and BA | - | [152] |
| | RGB camera | Cauliflower | Multi-target tracking algorithm | 99.3404% | [153] |
| | RGB camera | Bean and spinach | HT | - | [154] |
| | LiDAR | Crop field | PEARL method | 100% | [155] |
| | Multispectral imaging system | Sugar beet field | HT and SVM | 92% | [54] |
| | Camera | Maize field | LRM | - | [156] |
| | Color camera | Lettuce, cauliflower and maize | Otsu and median filtering | 95% | [157] |
| | Multispectral camera | Sunflower crops | HT | 75% | [158] |
| | Digital camera | Corn field | Morphological operations and stepwise discriminant analysis | 96% | [159] |
| | Camera | Maize field | Automatic threshold adjustment method | 80% | [160] |
| | CCD camera | Lettuce and celeriac | Template fitting algorithm | 99% | [161] |
| Irrigation | RGB-D camera | Cabbage field | K-means clustering | - | [162] |
| | Camera | Beet field | ENet and LSM | 91.2% | [163] |
| | Raspberry Pi camera | Maize crop | HT | 100% | [164] |
| Harvesting | Stereo camera | Cotton field | Pixel-based algorithm | 92.3% | [165] |
| | LiDAR and CMOS sensor | Crop plantations | PCA transform | 92% | [166] |
| | Binocular vision camera | Rice and wheat field | Probabilistic Hough transform | 96.025% | [167] |
| Seeding | ZED camera | Wheat field | Mean shift algorithm | - | [168] |
| | DJI H20 sensor | Corn crops | CNN | 99.35% | [169] |
| | LiDAR and camera | Maize fields | Ordinary linear regression method | 92–94% | [170] |
| Ploughing | Digital camera | Rice, rape and wheat field | LSM | 96.7% | [171] |
| Fertilization | Multi-spectral camera | Cabbage rows | CNN | 90.5% | [172] |
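Many of the dryland entries in Table 1 share a two-stage structure: vegetation/soil segmentation followed by a line extraction step such as the Hough transform (HT) or a least squares fit. The Python/OpenCV sketch below is a minimal illustration of that generic pipeline, not the implementation of any cited work; the excess-green index, Otsu threshold, morphology kernel, Hough parameters, function name, and file names are all assumptions chosen for readability.

```python
# Illustrative sketch (assumed parameters): excess-green (ExG) segmentation
# followed by a probabilistic Hough transform to extract candidate crop row lines.
import cv2
import numpy as np

def detect_row_lines(bgr_image):
    """Return candidate crop-row line segments, assuming green crops on a soil background."""
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b                                  # excess-green vegetation index
    exg_u8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's method separates vegetation from soil without a hand-tuned threshold.
    _, mask = cv2.threshold(exg_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Probabilistic Hough transform on the vegetation mask yields row line segments.
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=30)
    return [] if lines is None else [tuple(l[0]) for l in lines]

if __name__ == "__main__":
    img = cv2.imread("field.jpg")                          # hypothetical input image
    if img is not None:
        for x1, y1, x2, y2 in detect_row_lines(img):
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.imwrite("rows_detected.jpg", img)
```

In practice, the vegetation index and the Hough parameters must be retuned for crop type, growth stage, and illumination, which is one reason Table 1 lists so many method variants for essentially the same task.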
Table 2. A detailed summary of applications of sensors for crop row detection in paddy fields.
| Application | Sensors | Methods | CRDA | Work |
|---|---|---|---|---|
| Navigation | Industrial camera | Treble-classification Otsu and double-dimensional clustering method | - | [98] |
| | Industrial camera | Double-dimensional adaptive clustering method | 93.6% | [191] |
| | CCD camera | K-means clustering | 82.35% | [192] |
| Weeding | Camera | Linear regression sliding window technique | - | [193] |
| | Camera | ESNet | - | [194] |
| | Camera | Sequential clustering algorithm and HT | - | [195] |
| Transplanting | Color camera | CNN and LSM | 89.8% | [196] |
| | Camera | Improved Otsu method | - | [197] |
| | Camera | Radon transform | - | [198] |
| Harvesting | Monocular camera | Probabilistic Hough transform | 94.8% | [199] |
| Spraying | Color digital CMOS camera | CNN | 93.22% | [200] |
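Several of the paddy-field methods in Table 2 cluster segmented seedling pixels into rows and then fit a centerline per cluster (e.g., the k-means clustering of [192] or the CNN-plus-LSM combination of [196]). The sketch below illustrates only that generic two-step idea under simplifying assumptions (a binary vegetation mask is already available and the approximate number of rows is known); it is not the algorithm of any cited paper.

```python
# Illustrative sketch: k-means clustering of plant pixels into rows,
# followed by a least-squares centerline fit per cluster.
import numpy as np
from sklearn.cluster import KMeans

def fit_row_centerlines(mask, n_rows=4):
    """mask: 2D 0/1 array of vegetation pixels; returns one (a, b) per row for the line x = a*y + b."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return []
    # Cluster on the horizontal coordinate, since rows run roughly vertically in a forward-looking image.
    labels = KMeans(n_clusters=n_rows, n_init=10, random_state=0).fit_predict(xs.reshape(-1, 1))
    lines = []
    for k in range(n_rows):
        xk, yk = xs[labels == k], ys[labels == k]
        if xk.size < 2:
            continue
        a, b = np.polyfit(yk, xk, 1)        # least-squares fit of x as a function of y
        lines.append((float(a), float(b)))
    return lines
```

Clustering on a single coordinate is only adequate when rows are close to vertical in the image; curved or strongly perspective-distorted rows are why the cited works move to adaptive or two-dimensional clustering schemes.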
Table 3. A detailed summary of the applications of sensors for crop row detection in orchards.
| Application | Sensors | Scenarios | Methods | CRDA | Work |
|---|---|---|---|---|---|
| Navigation | Monocular camera | Apple orchard | CNN | - | [217] |
| | RGB camera | Apple orchard | K-means clustering | - | [218] |
| | RGB-D camera | Vineyard and pear orchard | Deep learning | - | [216] |
| | LiDAR | Vineyards | HT | 98.8% | [219] |
| | Camera | Apple orchard | LSM | - | [220] |
| Farming | LiDAR | Vineyards | HT and filtering algorithm | - | [221] |
| Spraying | LiDAR | Vineyard | LSM | - | [222] |
| Tracking | Color camera | Vineyard | LS and HT | - | [223] |
| Pollination and harvesting | Monocular camera | Kiwifruit orchard | Fully convolutional neural network | - | [215] |
| Viticulture | Tetracam ADC-lite camera | Vineyard | Hough space clustering | 95.13% | [224] |
| PA | LiDAR | Orchards and vineyards | LRM | 75% | [225] |
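For the LiDAR-based orchard entries in Table 3, a robust line fit over planar point returns is a typical building block, since trunks of the opposite row, posts, and weeds produce outliers. The sketch below uses RANSAC regression as a hedged stand-in for the cited HT and LSM variants; the residual threshold, coordinate convention, and synthetic test data are assumptions for illustration only.

```python
# Illustrative sketch: robust single-row line fit from 2D LiDAR returns using RANSAC.
import numpy as np
from sklearn.linear_model import RANSACRegressor

def fit_tree_row(points_xy, residual_threshold=0.15):
    """points_xy: (N, 2) array of planar LiDAR returns in metres.
    Returns (slope, intercept) of the fitted row line y = slope * x + intercept."""
    x = points_xy[:, 0].reshape(-1, 1)
    y = points_xy[:, 1]
    ransac = RANSACRegressor(residual_threshold=residual_threshold, random_state=0)
    ransac.fit(x, y)                        # points within residual_threshold of the line count as inliers
    return float(ransac.estimator_.coef_[0]), float(ransac.estimator_.intercept_)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = rng.uniform(0.0, 10.0, 200)
    row = np.column_stack([xs, 0.02 * xs + 1.5 + rng.normal(0.0, 0.05, 200)])  # trunks along one row
    clutter = rng.uniform([0.0, -2.0], [10.0, 4.0], (50, 2))                   # weeds / opposite row
    print(fit_tree_row(np.vstack([row, clutter])))
```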
Table 4. A detailed summary of applications of sensors for crop row detection in greenhouses.
| Application | Sensors | Scenarios | Methods | Work |
|---|---|---|---|---|
| Spraying | Industrial camera | Cucumber greenhouse | Median point Hough transform | [241] |
| | 2D LiDAR and stereo camera | Strawberry greenhouse | Hokuyo node production | [240] |
| | Camera | Green vegetable greenhouse | Vertical projection method and SA | [243] |
| Picking | Industrial camera | Cucumber greenhouse | Prediction point Hough transform algorithm | [187] |
| | Camera | Cucumber greenhouse | LSM | [244] |
| Inspection | Camera | Greenhouse | HT | [245] |
| Navigation | Color camera | Strawberry greenhouse | HT | [246] |
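Once a row centerline has been extracted by any of the methods in Tables 1–4, a guidance controller typically consumes it as a lateral offset and a heading error (cf. the linear and angular offsets illustrated for [216] in Figure 4f). The sketch below shows one hypothetical way to derive those two quantities from an image-space line; the pixel-to-metre scale, the flat-ground geometry, and the function name are assumptions rather than any cited formulation.

```python
# Illustrative sketch (hypothetical geometry): convert a detected centerline x = a*y + b (pixels)
# into the lateral offset and heading error used for row-following guidance.
import math

def guidance_errors(a, b, image_width, image_height, metres_per_pixel):
    """Return (lateral_offset_m, heading_error_rad) of the row line relative to the image centre."""
    x_bottom = a * (image_height - 1) + b            # where the row line crosses the bottom image edge
    lateral_offset = (x_bottom - image_width / 2.0) * metres_per_pixel
    heading_error = math.atan(a)                     # angle between the row line and the image vertical
    return lateral_offset, heading_error

# Example: a 640x480 image, line x = 0.05*y + 300, roughly 2 mm per pixel near the bottom edge.
print(guidance_errors(0.05, 300, 640, 480, 0.002))
```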
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
