Article

Crop Row Detection in the Middle and Late Periods of Maize under Sheltering Based on Solid State LiDAR

1 School of Engineering, Anhui Agricultural University, Hefei 230036, China
2 Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, China
3 Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China
* Author to whom correspondence should be addressed.
Agriculture 2022, 12(12), 2011; https://doi.org/10.3390/agriculture12122011
Submission received: 18 October 2022 / Revised: 19 November 2022 / Accepted: 22 November 2022 / Published: 25 November 2022
(This article belongs to the Section Agricultural Technology)

Abstract

As a basic link of autonomous navigation in agriculture, accurate crop row detection is vital. Machine vision algorithms are easily affected by factors such as changes in field lighting and weather conditions, and most of them target the early periods of crops; detecting crop rows under the heavy sheltering of the middle and late periods remains challenging. In this paper, a crop row detection algorithm based on LiDAR is proposed for the middle and late crop periods, which performs well compared with conventional machine vision algorithms. The algorithm consists of three steps: point cloud preprocessing, feature point extraction, and crop row centerline detection. Firstly, the detection area is divided into equal horizontal strips; an improved K-means algorithm and the prior information of the previous horizontal strip are used to obtain the candidate points of the current horizontal strip; the candidate point information is then used to filter and extract the feature points according to the corresponding thresholds; and finally, the least squares method is used to fit the crop row centerlines. The experimental results show that the algorithm can detect the centerlines of crop rows in the middle and late periods of maize under heavy sheltering. In the middle period, the average correct extraction rate of maize row centerlines was 95.1% and the average processing time was 0.181 s; in the late period, the average correct extraction rate was 87.3% and the average processing time was 0.195 s. The results also demonstrate the accuracy and superiority of the algorithm over machine vision algorithms, providing a solid foundation for autonomous navigation in agriculture.

1. Introduction

The goal of autonomous navigation is to control the driving path of vehicles and keep a constant distance from the adjacent driving line [1,2], which facilitates agricultural work such as planting, fertilizing, spraying pesticides, and harvesting, and reduces unnecessary resource waste. As a crucial part of autonomous navigation in agriculture [3,4,5,6,7,8,9], crop row detection can plan the field vehicle navigation path [2,10,11] and support complex field operations [12] for efficient agricultural work. Consequently, it is of great significance to detect crop rows precisely.
At present, crop row detection research is primarily based on classic machine vision and LiDAR. Machine vision uses a camera as a sensor to obtain image information about the target crop and detect crop rows. Common machine vision methods are mainly based on the Hough transformation, path determination, vanishing points, horizontal strips, etc. The Hough [13] transformation is a feature extraction technique used to detect geometric features [14]. The classical Hough transformation is too slow for large computations and is therefore rarely used in real-time systems [15]. Leemans and Destain [16] studied the adaptive Hough transformation and improved the applicability of the classical Hough transformation, even in extremely noisy environments. The results were still satisfactory when the rows lacked plants, and the method was found to be effective for both chicory rows and sowing rows. However, crop rows could be challenging to identify in a variety of situations since the algorithm is sensitive to variations in lighting conditions. A crop row detection method based on vanishing points was proposed by Pla et al. [17]: the image was first segmented, and then the skeletal information of the crop rows was recovered using vanishing points. This method worked well for extracting simple field crop rows or early crop rows; however, it relied on skeleton extraction [18], had poor real-time performance, and was inappropriate for extracting complicated field crop rows. Jiang, Wang, and Liu [19] constructed multiple ROIs to estimate candidate points. The ROIs concentrated the features of multiple rows into a single optimization process based on the equal spacing of crop rows and used clustering methods to determine the real feature points. Many different types of crops can be detected by this algorithm, even in fields with severe weed pressure or a crop shortage. However, the detection accuracy drops as the significance of the green spectral components decreases.
In general, although machine vision is relatively mature, the accuracy of crop row detection based on machine vision is low and the processing time is long for crops with high height, lush growth, and mutual sheltering. Additionally, the detection results and image analysis will be greatly affected by varying and discontinuous brightness due to weather and lighting conditions [20].
The working principle of LiDAR is to send signals in the direction of the target, compare the returned echo with the transmitted signal to obtain more information about the target, and use this information to directly calculate the distance between the LiDAR and the reflecting target [21]. In recent years, more academics have become interested in LiDAR because of its decreasing cost. Barawid et al. [22] developed a real-time guidance system based on LiDAR, which can guide autonomous vehicles in real time between tree rows. Weiss and Biber [23] discussed the benefits of 3D sensors in environmental sensing for agricultural robots, which can operate in a variety of inclement weather, including dusty and foggy environments. They also developed a navigation method based on 3D LiDAR that obtained 3D point clouds to detect and segment plants and the ground, and used statistical models to identify the point clusters representing plants. Höfle [24] used the radiometric and geometric information provided by ground LiDAR scanning to map single plants in the early growth period of maize with high accuracy, reducing the amplitude change in uniform areas through radiometric correction and providing clear separability between maize plants and the soil. To lessen the effects of environmental uncertainty, Hiremath et al. [25] proposed a particle filter method based on 2D LiDAR for autonomous navigation in maize fields, but the algorithm needs to be calibrated during the testing stage. To avoid the need for a priori conditions, Malavazi et al. [26] used LiDAR for the autonomous navigation of a field robot; navigation was improved with an enhanced PEARL algorithm that outperformed the conventional PEARL and RANSAC-based algorithms. Iqbal et al. [27] introduced a LiDAR-based scene representation and navigation simulation system for an autonomous mobile robot. After creating a high-fidelity cotton crop simulation scenario, the system successfully guided the robot through the field using a combination of GPS waypoint tracking and LiDAR navigation. However, GNSS-based guidance may fail along the path when satellite signals are lost, so Velasquez et al. [28] proposed an alternative scheme based on LiDAR and a robust controller, tested in a farm environment on the Helvis3 platform. The experimental results demonstrated that the navigation system can manage the robot's displacement between crop rows and keep it in the middle of the relevant path with minimal error, despite environmental interference. LiDAR is more stable in agricultural environments since it is unaffected by ambient lighting conditions. Ground laser scanning provides high-precision, high-density three-dimensional measurements of objects [21], and its viewing range is larger than that of a camera [29], so more information can be captured.
Designed to address the issue that image processing algorithms struggle to detect middle and late period crops in heavily sheltered environments, this paper proposes a LiDAR-based crop row detection algorithm for maize in its middle and late periods, which is composed of three modules: point cloud preprocessing, feature point extraction, and crop row centerline detection. The flow chart is shown in Figure 1. Firstly, the point cloud is preprocessed by extracting a detection area and denoising. Next, the detection area is divided into horizontal strips; an improved K-means algorithm and the information of the previous horizontal strip are used iteratively to obtain candidate points, which are filtered with the threshold of each crop row to obtain the final feature points; the crop row centerlines are then fitted using the least squares method. According to the experimental results, the algorithm has high accuracy and real-time performance in complex field environments.

2. Materials and Methods

2.1. Point Cloud Information Collection and Preprocessing

The point cloud information used in this paper was collected at the Wanbei Comprehensive Experimental Station of Anhui Agricultural University in the Yongqiao District of Suzhou City, Anhui Province, China. The target crop of this paper is maize. The point cloud information of maize in the middle and late periods was collected with a Livox Horizon LiDAR (DJI, China). Figure 2a shows the field collection environment and collection equipment. The maize row point clouds and maize image information were collected simultaneously with the Livox Horizon LiDAR and a camera installed on the collection platform. An enlarged image of the LiDAR is shown in Figure 2b. The Livox Horizon LiDAR features high field coverage and high environmental adaptability. Its maximum detection range is 260 m, and its noise rate is less than 0.01% under severe sunlight interference of 100 klx. The FOV is 81.7 degrees horizontally and 25.1 degrees vertically, with a point cloud data rate of 240,000 points per second, a random range error of less than 2 cm, and a random angular error of less than 0.05 degrees. The maize rows must be in front of the LiDAR during installation to capture sufficient information; additionally, the height of the LiDAR must be adjusted appropriately for the actual height of the maize rows, and the point clouds of at least six maize rows must be collected.
According to the statistical measurement results, maize typically grows to an average height of 1 m in the middle period and 2.5 m in the late period. The point cloud information of maize in the middle period is shown in Figure 2c. In addition to the point clouds of the maize rows, the LiDAR also captures the point clouds of other objects in the field environment, such as weeds and the ground. To accurately extract crop row point clouds, it is necessary to reduce the influence of invalid point clouds outside the maize rows. Since the height difference between maize and weeds is obvious during the middle period of maize, as shown in Figure 3a, most weed point clouds can be eliminated using this feature. Straight through filtering can quickly remove outliers from point clouds and perform basic, simple filtering. To extract the crop row point clouds and accomplish the first rough processing step, straight through filtering removes the point clouds beyond the threshold in the Z direction according to the height threshold h. The value of h is determined with Equation (1):
$h = h_1 - h_2$  (1)
where h is the threshold value of the straight through filter, h1 is the installation height of the LiDAR, and h2 is the average height of the weeds.
Point cloud density is a crucial property that can accurately depict the distribution of the crop. As the distance between the crop and the LiDAR increases, the density of the point cloud decreases [30], and the unequal distribution of the scanned point cloud would affect subsequent crop row detection. Consequently, the target detection area must be extracted, as shown in Figure 3b. From the planting positions of the crops in the field, the row spacing is about 0.6 m, and the target detection area needs to contain six rows of crops. Since the point clouds at the rear of the scanned field are sparse, the size of the target detection area was set to L × W, with L = 5 m and W = 3.6 m, combining the distribution of the LiDAR FOV and the point clouds, to ensure that effective information can be extracted while avoiding too small a detection area, as shown in Figure 3c. A substantial number of point clouds remained after the aforementioned procedure. A second step is required to speed up subsequent recognition, reduce the search space for feature point extraction, and increase the processing speed of the point clouds. The point cloud in the detection area was denoised using Gaussian filtering in accordance with the threshold value; the result is shown in Figure 3d. The crop row point clouds were then projected onto the X-O-Y plane.
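The preprocessing described above can be condensed into a short sketch. The following Python example (using NumPy) is illustrative only: it assumes the point cloud is an N × 3 array in a sensor-centred frame with x pointing forward, y lateral, and z up, and the function name and coordinate convention are not from the original paper; the Gaussian denoising step is omitted for brevity.

```python
import numpy as np

def preprocess_point_cloud(points, h1, h2, L=5.0, W=3.6):
    """Rough preprocessing sketch: straight through (pass-through) filter in Z,
    extraction of the L x W detection area, then projection to the X-O-Y plane.

    points : (N, 3) array of (x, y, z) in metres; x forward, y lateral, z up,
             origin at the LiDAR (assumed convention).
    h1     : LiDAR installation height above the ground (m).
    h2     : average weed height (m).
    """
    h = h1 - h2                                   # Equation (1)
    # Under the assumed convention, weeds and ground lie at or below z = -h
    # in the sensor frame, so discard everything below that level.
    above_weeds = points[points[:, 2] > -h]

    # Detection area: 0 < x < L ahead of the sensor, |y| < W/2 laterally.
    x, y = above_weeds[:, 0], above_weeds[:, 1]
    roi = above_weeds[(x > 0.0) & (x < L) & (np.abs(y) < W / 2.0)]

    # Project the crop-row points onto the X-O-Y plane for the later steps.
    return roi[:, :2]
```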

2.2. Feature Point Extraction

2.2.1. Horizontal Strips Dividing

The target detection area obtained by preprocessing is 5 m long and 3.6 m wide. According to the planting situation, the maize rows are spaced at a fixed value and are roughly parallel, and the plants are spaced at almost equal intervals along the X-axis. The horizontal strips need to be divided to suit the feature points under various conditions. Zhang et al. [31] projected pixels vertically based on horizontal strips. A similar process can be applied to the point clouds in the target detection area, taking the bottom center of the detection area as the coordinate origin O. The detection area was equally divided into N horizontal strips from origin O along the x-axis in accordance with Δh, as shown in Figure 4a. N is calculated as follows:
$N = L / \Delta h$  (2)
where L is 5 m, the plant spacing is about 0.2 m according to measurement, and Δh is 0.2 m accordingly. N is 25 in this paper.

2.2.2. Candidate Points Acquisition

Figure 4b shows the initial horizontal strip. Starting from the leftmost side, the strip is divided into n = W/Δw grids, with Δw = 0.01 m as the grid width, denoted as Gi,j (j ∈ [1, n]), where W is 3.6 m. The number of point clouds in each grid is counted and denoted as Qi,j; taking Gi,j as the abscissa and Qi,j as the ordinate gives the point cloud quantity graph of the initial horizontal strip grids, as shown in Figure 4c. The total number of point clouds in the initial horizontal strip is denoted as Si. If Si = 0, there is no useful information in the current horizontal strip, and the feature points are taken from the following strip. The grids with non-zero point cloud numbers are counted, and their total number is denoted as ni1. To improve the accuracy of the feature points, the point cloud in a grid is retained only when Qi,j > Ti. The filtered results are shown in Figure 4d. The calculation of Ti is given in Equation (3):
$T_i = S_i / n_{i1}, \quad i = 1, 2, \ldots, N$  (3)
where Ti is the threshold value of the grid in the i-th horizontal strip.
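The strip division, grid counting, and threshold filtering of Equations (2) and (3) can be illustrated with the following sketch. It is a minimal NumPy example under the same assumed coordinate convention as above; the variable and function names are hypothetical and not from the original paper.

```python
import numpy as np

DELTA_H = 0.2    # strip height along x (m)
DELTA_W = 0.01   # grid width along y (m)
L, W = 5.0, 3.6
N_STRIPS = int(L / DELTA_H)        # Equation (2): N = 25
N_GRIDS = int(W / DELTA_W)         # n = W / delta_w = 360 grids per strip

def filter_strip(points_xy, i):
    """Return the retained points of the i-th horizontal strip (0-based here).

    points_xy: (N, 2) array of projected (x, y) points, x in [0, L], y in [-W/2, W/2].
    """
    strip = points_xy[(points_xy[:, 0] >= i * DELTA_H) &
                      (points_xy[:, 0] < (i + 1) * DELTA_H)]
    S_i = len(strip)
    if S_i == 0:                    # no useful information in this strip
        return strip, None

    # Count the points Q_{i,j} falling in each of the n grids along y.
    grid_idx = ((strip[:, 1] + W / 2.0) / DELTA_W).astype(int)
    grid_idx = np.clip(grid_idx, 0, N_GRIDS - 1)
    Q = np.bincount(grid_idx, minlength=N_GRIDS)

    n_i1 = np.count_nonzero(Q)      # grids containing at least one point
    T_i = S_i / n_i1                # Equation (3)

    # Keep only the points whose grid satisfies Q_{i,j} > T_i.
    keep = Q[grid_idx] > T_i
    return strip[keep], T_i
```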
The conventional K-means [32] method clusters point clouds based on similarities in their characteristics. After a specific k value is chosen and initial centroid points are randomly selected, the filtered point clouds are iteratively separated into k clusters based on the distance to each cluster; each cluster corresponds to one centroid point. This paper proposes an improved K-means algorithm that uses prior information. Firstly, every 60 grids of the initial horizontal strip are grouped from left to right, and the mean x and y values of the point clouds in each group determine the x and y coordinates of that group's initial centroid point. Compared with randomly selecting the initial centroid points, the improved K-means algorithm accelerates clustering and decreases the number of iterations required.
After clustering, the clustering areas were denoted (from left to right) as Ci,k, and the k centroid points acquired through iteration were denoted as candidate points Di,k, k ∈ [1, 6]. The candidate points of the initial horizontal strip are shown in Figure 4e. According to the coordinate system, the maximum x value Xi,k(max), minimum x value Xi,k(min), maximum y value Yi,k(max), and minimum y value Yi,k(min) of the point clouds in each cluster region were found from left to right; these values give the four coordinates A (Xi,k(max), Yi,k(max)), B (Xi,k(max), Yi,k(min)), C (Xi,k(min), Yi,k(min)), and D (Xi,k(min), Yi,k(max)), which serve as the four vertices of the iteration rectangle Ri,k. The iteration rectangle in the initial horizontal strip is shown in Figure 4e. The iteration rectangle Ri,k was used as the prior information for the second horizontal strip.
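A possible implementation of the improved K-means initialization and the iteration rectangle computation is sketched below. It assumes scikit-learn's KMeans as the underlying clusterer and adds a fallback centroid for an empty 60-grid group; the fallback, the function name, and the left-to-right ordering by centroid y are assumptions rather than details taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_strip(strip_pts, n_rows=6, w=3.6, delta_w=0.01, grids_per_group=60):
    """Improved K-means for one horizontal strip.

    The initial centroids are the mean (x, y) of the points in each group of
    60 consecutive grids (6 groups for 6 rows) instead of random seeds.
    Returns the candidate points D_{i,k} and the iteration rectangles R_{i,k}.
    """
    group_width = grids_per_group * delta_w            # 0.6 m, roughly one row spacing
    group_idx = ((strip_pts[:, 1] + w / 2.0) / group_width).astype(int)
    group_idx = np.clip(group_idx, 0, n_rows - 1)

    # Prior-information initialization; the empty-group fallback is an assumption.
    init = np.array([strip_pts[group_idx == k].mean(axis=0)
                     if np.any(group_idx == k) else
                     [strip_pts[:, 0].mean(), -w / 2 + (k + 0.5) * group_width]
                     for k in range(n_rows)])

    km = KMeans(n_clusters=n_rows, init=init, n_init=1).fit(strip_pts)
    labels = km.labels_

    candidates, rects = [], []
    # Order the clusters from left to right by their centroid y value.
    for k in np.argsort(km.cluster_centers_[:, 1]):
        pts = strip_pts[labels == k]
        candidates.append(pts.mean(axis=0))            # candidate point D_{i,k}
        rects.append((pts[:, 0].min(), pts[:, 0].max(),
                      pts[:, 1].min(), pts[:, 1].max()))  # iteration rectangle R_{i,k}
    return np.array(candidates), rects
```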
The candidate points of the initial horizontal strip were thus determined. To improve the accuracy of feature point extraction, starting from the second horizontal strip, the information of the candidate points of the previous horizontal strip is used to evaluate and compare the initial candidate points obtained by K-means, yielding the candidate points of the current horizontal strip. Figure 5a shows the process of obtaining candidate points from the second horizontal strip onward.
Firstly, it is determined whether the current horizontal strip contains valid point cloud information; if Si > 0, the next step of fitting candidate points is carried out. The n grids Gi,j are obtained by equal division, and filtering is conducted according to the threshold Ti. The initial candidate points Fi,k are then obtained by improved K-means clustering. Finally, a comparison determines the final candidate points. The y value of the candidate point D(i − 1),k in the (i − 1)-th horizontal strip, yp(i − 1),k, is compared with the y value of the corresponding initial candidate point Fi,k; the current candidate point is retained only when the absolute difference d between them is less than the offset E. Generally, the row spacing of maize is about 0.6 m. The effects of selecting points were compared for different values of E, and the effect is best when E is 0.1 m; to ensure the precision of the feature points, E is therefore set to 0.1 m. If the absolute difference between the y values is larger than or equal to E, the initial candidate point and the point cloud in the current clustering area are discarded. The corresponding iteration rectangle R(i − 1),k of the (i − 1)-th horizontal strip is then moved up by Δh into the i-th horizontal strip as the corresponding iteration rectangle Ri,k, as shown in Figure 5b. The point clouds falling within the iteration rectangle are classified into the clustering area Ci,k, and the final candidate points are obtained by re-clustering their centroids.
In the other horizontal strips, the flow chart is applied in the same way: first checking whether the point cloud information is valid, then fitting the initial candidate points, and finally determining the final candidate points based on the information from the previous strip. Figure 6 shows the iteration rectangles in the horizontal strips and the candidate points acquired during iteration.
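The per-strip candidate update described above might look like the following sketch. The handling of an empty iteration rectangle (carrying the previous candidate forward) is an added assumption, and the function and variable names are hypothetical.

```python
import numpy as np

E = 0.1        # maximum lateral offset between adjacent strips (m)
DELTA_H = 0.2  # strip height (m)

def refine_candidates(strip_pts, clusters, prev_D, prev_R):
    """Final candidate points D_{i,k} of the current strip.

    clusters: list of 6 (n_k, 2) arrays from improved K-means, left to right;
              their means are the initial candidate points F_{i,k}.
    prev_D:   (6, 2) candidate points of the previous strip, D_{i-1,k}.
    prev_R:   list of 6 rectangles (x_min, x_max, y_min, y_max), R_{i-1,k}.
    """
    D_i, R_i = [], []
    for k in range(6):
        F_ik = clusters[k].mean(axis=0)
        if abs(F_ik[1] - prev_D[k][1]) < E:
            # The initial candidate stays close to the previous row: accept it.
            pts = clusters[k]
        else:
            # Otherwise discard the cluster and fall back on the previous
            # iteration rectangle, shifted up by one strip height.
            x_min, x_max, y_min, y_max = prev_R[k]
            x_min, x_max = x_min + DELTA_H, x_max + DELTA_H
            inside = ((strip_pts[:, 0] >= x_min) & (strip_pts[:, 0] <= x_max) &
                      (strip_pts[:, 1] >= y_min) & (strip_pts[:, 1] <= y_max))
            pts = strip_pts[inside]
            if len(pts) == 0:
                # Nothing inside the shifted rectangle: carry the previous
                # candidate forward (assumed behaviour).
                D_i.append(prev_D[k])
                R_i.append((x_min, x_max, y_min, y_max))
                continue
        D_i.append(pts.mean(axis=0))   # re-clustered centroid = candidate D_{i,k}
        R_i.append((pts[:, 0].min(), pts[:, 0].max(),
                    pts[:, 1].min(), pts[:, 1].max()))
    return np.array(D_i), R_i
```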

2.2.3. Feature Points Determination

The final feature points were determined after the candidate points were obtained. The candidate points required additional processing to improve the precision of crop row detection. First, the number of candidate points Nk in each row was counted from left to right, and the sum of the y values of the candidate points in each row was denoted as Sk. The average y value of each row's candidate points, denoted Mk, was obtained using Equation (4). The candidate points of each crop row were then filtered according to the threshold interval of that row: a candidate point is retained if its y value falls within the threshold interval and filtered out otherwise. The expression of the threshold interval Tk is given in Equation (5).
$M_k = S_k / N_k, \quad k = 1, 2, \ldots, 6$  (4)
$T_{k\max} = M_k + E, \quad T_{k\min} = M_k - E, \quad T_k \in [T_{k\min}, T_{k\max}], \quad k \in [1, 6]$  (5)
In Equation (5), Tkmax and Tkmin are the upper and lower limits of the threshold interval, respectively. Figure 7b shows the final feature points obtained with Equation (5), which are then added to the feature point set Pk.
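Equations (4) and (5) amount to a simple per-row filter on the y values of the accumulated candidate points, as in this illustrative sketch (the names are hypothetical):

```python
import numpy as np

E = 0.1  # offset (m), same value as in the candidate-point stage

def extract_feature_points(candidates_per_row):
    """Filter the candidate points of each row into the final feature point set P_k.

    candidates_per_row: list of 6 arrays, each (N_k, 2), the candidate points
                        collected over all horizontal strips for one crop row.
    """
    P = []
    for row_pts in candidates_per_row:
        M_k = row_pts[:, 1].mean()                  # Equation (4): mean y of the row
        T_min, T_max = M_k - E, M_k + E             # Equation (5): threshold interval
        keep = (row_pts[:, 1] >= T_min) & (row_pts[:, 1] <= T_max)
        P.append(row_pts[keep])
    return P
```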

2.3. Crop Row Centerlines Detection

After the feature points were extracted, the centerlines of the crop rows were fitted. The least squares method is an optimization technique that minimizes the sum of squared residuals between the samples and the regression function, obtaining the unknown parameters accurately and conveniently and producing the best fit. A univariate linear regression model is used in this paper to obtain the crop row centerline fit that best matches the data. The univariate linear regression model is given in Equation (6):
$Y_i = \beta_0 X_i + \beta_1$  (6)

$\beta_0 = \dfrac{n \sum X_i Y_i - \sum X_i \sum Y_i}{n \sum X_i^2 - \left(\sum X_i\right)^2}$  (7)

$\beta_1 = \dfrac{\sum X_i^2 \sum Y_i - \sum X_i \sum X_i Y_i}{n \sum X_i^2 - \left(\sum X_i\right)^2}$  (8)
In the equations above, β0 is the slope and β1 is the intercept of the univariate linear regression model, and y = β0x + β1 is the objective function this paper seeks to acquire. β0 and β1 are the parameters to be calculated; they are obtained by substituting the feature point pairs in Pk into Equations (7) and (8). The fitting result of the crop row centerlines is shown in Figure 8.
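The closed-form least squares solution of Equations (6)-(8) can be computed directly per row, for example as in the following sketch (np.polyfit with degree 1 would give the same coefficients; the function name is hypothetical):

```python
import numpy as np

def fit_row_centerlines(P):
    """Fit one centerline y = beta0 * x + beta1 per crop row with least squares.

    P: list of 6 arrays of feature points (x, y), one array per row.
    Returns a list of (beta0, beta1) pairs.
    """
    lines = []
    for pts in P:
        x, y = pts[:, 0], pts[:, 1]
        n = len(pts)
        # Equations (7) and (8): the closed-form ordinary least squares solution.
        denom = n * np.sum(x ** 2) - np.sum(x) ** 2
        beta0 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
        beta1 = (np.sum(x ** 2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / denom
        lines.append((beta0, beta1))
    return lines
```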

3. Results and Discussion

The centerlines of the crop rows in the middle period of maize were obtained following the procedure in Section 2.3. To confirm the applicability of the algorithm, late period maize rows were also processed; the procedure is shown in Figure 9, and Figure 9f shows the fitted late period crop row centerlines. As maize nears its late period, it grows luxuriantly. The method successfully detected the crop row centerlines under the severe mutual sheltering of late period maize leaves.
To verify the accuracy of the algorithm for detecting crop row centerlines, calibration lines were selected as the benchmark for comparison. Crops are typically planted in rows according to the applicable agronomic management measures, so the calibration lines should be as parallel and evenly spaced as possible. Drawing the calibration lines must take all relevant elements into account, including how weeds on the ground affect crop row fitting, and must avoid too much overlap between crop and weed; only then can the most reasonable calibration line be chosen. In Figure 10, the red lines are the calibration lines and the blue lines are the crop row centerlines produced by this algorithm. The error angle, denoted ∆θ, is the angle between a red line and the corresponding blue line. For verification, 127 frames of point cloud data from the middle and late periods of maize were chosen to obtain more trustworthy results. The precision of maize row centerline extraction is evaluated using Equation (9).
$a_i = \left(1 - \dfrac{2\Delta\theta_i}{\pi}\right) \times 100\%$  (9)
$A_i = \dfrac{1}{n_f} \sum_{f=1}^{n_f} a_i, \; i \in [1, 6]; \qquad A = \dfrac{1}{6} \sum_{i=1}^{6} A_i; \qquad T = \dfrac{1}{n_f} \sum_{f=1}^{n_f} T_f$  (10)
In addition, the average accuracy and average processing time are calculated in accordance with Equation (10) to evaluate the stability and real-time performance of the algorithm.
In Equation (10), Ai is the average accuracy of a given crop row over the 127 frames; ai is the extraction accuracy of the centerline of the i-th crop row (from left to right) in each frame; and nf is the number of processed frames, 127. A is the average accuracy of the six crop rows over the 127 frames; T is the average processing time of the 127 frames of point cloud data; and Tf is the time for processing each frame of point cloud data. The test results are shown in Table 1.
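Equations (9) and (10) reduce to a few lines of array arithmetic, for example (a sketch with hypothetical variable and function names):

```python
import numpy as np

def evaluate(delta_theta, frame_times):
    """Accuracy and timing statistics following Equations (9) and (10).

    delta_theta: (n_frames, 6) array of error angles (rad) between the fitted
                 centerlines and the calibration lines.
    frame_times: (n_frames,) array of per-frame processing times (s).
    """
    a = (1.0 - 2.0 * delta_theta / np.pi) * 100.0   # Equation (9), per row and frame
    A_i = a.mean(axis=0)                            # average accuracy of each row
    A = A_i.mean()                                  # average accuracy over the 6 rows
    T = np.mean(frame_times)                        # average processing time
    return A_i, A, T
```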
According to the results in Table 1, the extraction accuracy of maize rows in the middle period was 95.1% with a processing time of 0.181 s; in the late period, the extraction accuracy was 87.3% with a processing time of 0.195 s. This indicates that even when maize is heavily sheltered in the middle and late periods, the algorithm still maintains a high extraction rate and obtains good results. To verify the superiority of this algorithm, different processing algorithms were used to fit the maize row centerlines in the middle and late periods and compared with this algorithm. Figure 11 shows the fitting process of the LiDAR-based and image processing algorithms in the middle and late periods of maize. The red lines in Figure 11 are the crop row centerlines extracted by the different algorithms.
For the middle period of maize, the average accuracy of crop row detection based on this method was 95.1% and the average processing time was 0.181 s; Figure 11b shows the extracted centerlines. The accuracies of the crop rows detected by the four comparison algorithms were 84.7%, 85.3%, 87.5%, and 89.6%, respectively, and the average processing times were 0.196 s, 0.255 s, 0.268 s, and 0.260 s, respectively. In the middle period, the mutual sheltering of maize rows is lessened, the density is moderate, and the leaves of adjacent maize rows shield each other less across the row spacing, resulting in low environmental pressure in the field, as shown in Figure 11a. Figure 11c shows the crop rows extracted by Algorithm 1 [33]: the algorithm extracted only two crop rows, and the centerlines slightly deviated from the crop rows even under low sheltering pressure. It can be seen from Figure 11d that Algorithm 2 [34] can extract only the crop rows at the bottom of the image, and the rows deviate when extended to the top of the image, which reduces the precision of crop row extraction. Algorithm 3 [35] can accurately extract crop rows, but only two crop rows can be extracted, as shown in Figure 11e. Figure 11f shows the results of crop row extraction by Algorithm 4 [31]: the intersection of the fitted centerlines indicates wrong detections, resulting in low average extraction accuracy, which cannot meet the requirements of crop row detection. Table 2 indicates that the average accuracy of this algorithm for extracting maize rows is higher than that of the comparison algorithms, and its average processing time is lower.
For the late period of maize, the average accuracy of crop row detection based on this method was 87.3% and the average processing time was 0.195 s; Figure 11h shows the extracted centerlines. The accuracies of the crop rows detected by the four comparison algorithms were 80.4%, 82.2%, 75.4%, and 70.1%, respectively, and the average processing times were 0.237 s, 0.330 s, 0.412 s, and 0.399 s, respectively. Figure 11g shows the late period of maize, the second development stage toward maturity, in which growth is more prosperous and dense and mutual sheltering is severe, making it harder to extract crop rows. The crop rows extracted by Algorithm 1 [33] are displayed in Figure 11i: the left centerline diverged from the real crop row under severe sheltering, resulting in low overall extraction accuracy. Figure 11j shows the results of crop row extraction by Algorithm 2 [34]: two crop rows can be detected, but the average accuracy is poor. It can be seen from Figure 11k that Algorithm 3 [35] has low adaptability to high sheltering environments; the extracted crop rows seriously deviated from the actual crop rows, leading to detection errors. Algorithm 4 [31] can detect the crops at the bottom and middle of the image, but when the crops are at the top of the image, the extracted crop rows intersect and the extraction accuracy is low, as shown in Figure 11l. Crop rows are challenging to extract accurately because of the severe mutual sheltering in the late period, which makes it difficult for the image processing algorithms to distinguish the row spacing; as a result, the average extraction rate of the late period maize row centerlines is low. The extensive field coverage of LiDAR enables it to fully collect the point cloud information of maize rows. This algorithm can collect more point clouds, although it takes more time to process the point clouds of maize rows in the late period because of the flourishing growth and severe leaf sheltering. Consequently, the average accuracy for late period crop rows is lower than that for middle period maize, and the processing time is longer.

4. Conclusions and Future Research

A crop row detection algorithm based on LiDAR is proposed for the middle and late periods of maize. Despite the high sheltering pressure in the field, the algorithm remains adaptable. First, point clouds are collected with LiDAR and preprocessed, and the obtained target detection area is divided into equal horizontal strips. The conventional K-means algorithm is improved and used for clustering. The candidate points of the current horizontal strip are evaluated, compared, and iterated using the information from the previous horizontal strip. The final feature points are then filtered and extracted in accordance with the threshold interval of each crop row, and the maize row centerlines are fitted using the least squares method.
By contrasting the crop row centerlines extracted by the algorithm with the drawn calibration lines, the accuracy of this algorithm is confirmed. To demonstrate its superiority, it is also contrasted with another LiDAR-based algorithm and three image processing-based algorithms. The sheltering of the row spacing worsens as crops mature, especially in the late period of maize when the rows are about to close. The experimental results show that the other LiDAR-based algorithm has low extraction accuracy in the late period when maize growth is very prosperous and that the image processing algorithms have trouble detecting the late period crop rows. In summary, when maize is growing luxuriantly and is seriously sheltered in the middle and late periods, the algorithm in this paper can successfully extract the row centerlines, and its extraction accuracy and processing speed are higher than those of the comparison algorithms. As a result, this algorithm can provide a solid foundation for agricultural autonomous navigation. In this paper, the algorithm only detects the centerlines of maize rows; by combining LiDAR and machine vision, future work can attempt to detect the crop row centerlines of various crops.

Author Contributions

S.Z.: writing original draft; Y.Y. and Q.M.: writing review and editing; S.C. and D.A.: validation; Z.Y. and B.M.: supervision. All authors have read and agreed to the published version of the manuscript.

Funding

The research in this article was funded by the National Natural Science Foundation of China Youth Fund Project (51905004) and the University Synergy Innovation Program of Anhui Province (Grant No. GXXT-2020-011).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Backman, J.; Oksanen, T.; Visala, A. Parallel guidance system for tractor-trailer system with active joint. In Precision Agriculture; Wageningen Academic Publishers: Wageningen, The Netherlands, 2009; pp. 615–622. [Google Scholar]
  2. Backman, J.; Oksanen, T.; Visala, A. Navigation system for agricultural machines: Nonlinear model predictive path tracking. Comput. Electron. Agric. 2012, 82, 32–43. [Google Scholar] [CrossRef]
  3. Keicher, R.; Seufert, H. Automatic guidance for agricultural vehicles in Europe. Comput. Electron. Agric. 2000, 25, 169–194. [Google Scholar] [CrossRef]
  4. Lulio, L.C.; Tronco, M.L.; Porto, A.J.V. Cognitive-merged statistical pattern recognition method for image processing in mobile robot navigation. In Proceedings of the 2012 Brazilian Robotics Symposium and Latin American Robotics Symposium, Fortaleza, Brazil, 16–19 October 2012; pp. 279–283. [Google Scholar]
  5. Eaton, R.; Katupitiya, J.; Siew, K.W.; Howarth, B. Autonomous farming: Modelling and control of agricultural machinery in a unified framework. Int. J. Intell. Syst. Technol. Appl. 2010, 8, 444–457. [Google Scholar] [CrossRef]
  6. English, A.; Ross, P.; Ball, D.; Corke, P. Vision based guidance for robot navigation in agriculture. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–5 June 2014; pp. 1693–1698. [Google Scholar]
  7. Li, M.; Imou, K.; Wakabayashi, K.; Yokoyama, S. Review of research on agricultural vehicle autonomous guidance. Int. J. Agric. Biol. Eng. 2009, 2, 1–16. [Google Scholar]
  8. Bergerman, M.; Maeta, S.M.; Zhang, J.; Freitas, G.M.; Hamner, B.; Singh, S.; Kantor, G. Robot farmers: Autonomous orchard vehicles help tree fruit production. IEEE Robot. Autom. Mag. 2015, 22, 54–63. [Google Scholar] [CrossRef]
  9. Xie, D.; Chen, L.; Liu, L.; Wang, H. Actuators and Sensors for Application in Agricultural Robots: A Review. Machines 2022, 10, 913. [Google Scholar] [CrossRef]
  10. Han, X.Z.; Kim, H.J.; Kim, J.Y.; Yi, S.Y.; Moon, H.C.; Kim, J.H.; Kim, Y.J. Path-tracking simulation and field tests for an auto-guidance tillage tractor for a paddy field. Comput. Electron. Agric. 2015, 112, 161–171. [Google Scholar] [CrossRef]
  11. Zhang, S.; Wang, Y.; Zhu, Z.; Li, Z.; Du, Y.; Mao, E. Tractor path tracking control based on binocular vision. Inf. Process. Agric. 2018, 5, 422–432. [Google Scholar] [CrossRef]
  12. Liu, L.; Mei, T.; Niu, R.; Wang, J.; Liu, Y.; Chu, S. RBF-based monocular vision navigation for small vehicles in narrow space below maize canopy. Appl. Sci. 2016, 6, 182. [Google Scholar] [CrossRef]
  13. Hough, P.V.C. A Method and Means for Recognizing Complex Patterns. US Patent 3069654, 18 December 1962. [Google Scholar]
  14. Tsuji, S.; Matsumoto, F. Detection of ellipses by a modified Hough transformation. IEEE Trans. Comput. 1978, 27, 777–781. [Google Scholar] [CrossRef]
  15. Ji, R.; Qi, L. Crop-row detection algorithm based on Random Hough Transformation. Math. Comput. Model. 2011, 54, 1016–1020. [Google Scholar] [CrossRef]
  16. Leemans, V.; Destain, M.F. Line cluster detection using a variant of the Hough transform for culture row localisation. Image Vis. Comput. 2006, 24, 541–550. [Google Scholar] [CrossRef]
  17. Pla, F.; Sanchiz, J.M.; Marchant, J.A.; Brivot, R. Building perspective models to guide a row crop navigation vehicle. Image Vis. Comput. 1997, 15, 465–473. [Google Scholar] [CrossRef]
  18. Guerrero, J.M.; Guijarro, M.; Montalvo, M.; Romeo, J.; Emmi, L.; Ribeiro, A.; Pajares, G. Automatic expert system based on images for accuracy crop row detection in maize fields. Expert Syst. Appl. 2013, 40, 656–664. [Google Scholar] [CrossRef]
  19. Jiang, G.; Wang, Z.; Liu, H. Automatic detection of crop rows based on multi-ROIs. Expert Syst. Appl. 2015, 42, 2429–2441. [Google Scholar] [CrossRef]
  20. Papari, G.; Petkov, N. Edge and line oriented contour detection: State of the art. Image Vis. Comput. 2011, 29, 79–103. [Google Scholar] [CrossRef]
  21. George, H.; Andy, L. Laser Scanning for the Environmental Sciences; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  22. Barawid, O.C., Jr.; Mizushima, A.; Ishii, K.; Noguchi, N. Development of an autonomous navigation system using a two-dimensional laser scanner in an orchard application. Biosyst. Eng. 2007, 96, 139–149. [Google Scholar] [CrossRef]
  23. Weiss, U.; Biber, P. Plant detection and mapping for agricultural robots using a 3D LIDAR sensor. Robot. Auton. Syst. 2011, 59, 265–273. [Google Scholar] [CrossRef]
  24. Höfle, B. Radiometric correction of terrestrial LiDAR point cloud data for individual maize plant detection. IEEE Geosci. Remote Sens. Lett. 2013, 11, 94–98. [Google Scholar] [CrossRef]
  25. Hiremath, S.A.; Van Der Heijden, G.W.A.M.; Van Evert, F.K.; Stein, A.; ter Braak, C.J.F. Laser range finder model for autonomous navigation of a robot in a maize field using a particle filter. Comput. Electron. Agric. 2014, 100, 41–50. [Google Scholar] [CrossRef]
  26. Malavazi, F.B.P.; Guyonneau, R.; Fasquel, J.B.; Lagrange, S.; Mercier, F. LiDAR-only based navigation algorithm for an autonomous agricultural robot. Comput. Electron. Agric. 2018, 154, 71–79. [Google Scholar] [CrossRef]
  27. Iqbal, J.; Xu, R.; Sun, S.; Li, C. Simulation of an autonomous mobile robot for LiDAR-based in-field phenotyping and navigation. Robotics 2020, 9, 46. [Google Scholar] [CrossRef]
  28. Velasquez, A.E.B.; Higuti, V.A.H.; Guerrero, H.B.; Magalhães, D.V.; Aroca, R.V.; Becker, M. Reactive navigation system based on H∞ control system and LiDAR readings on corn crops. Precis. Agric. 2020, 21, 349–368. [Google Scholar] [CrossRef]
  29. Hoffmeister, D.; Curdt, C.; Tilly, N.; Bendig, J. 3D terres-trial laser scanning for field crop modelling. In Proceedings of the Workshop on Remote Sensing Methods for Change Detection and Process Modelling, Köln, Germany, 18–19 November 2010; pp. 11–19. [Google Scholar]
  30. Chazette, P.; Totems, J.; Hespel, L.; Bailly, J.S. Principle and Physics of the LiDAR Measurement. In Optical Remote Sensing of Land Surface; Elsevier: Amsterdam, The Netherlands, 2016; pp. 201–247. [Google Scholar]
  31. Zhang, X.; Li, X.; Zhang, B.; Zhou, J.; Tian, G.; Xiong, Y.; Gu, B. Automated robust crop-row detection in maize fields based on position clustering algorithm and shortest path method. Comput. Electron. Agric. 2018, 154, 165–175. [Google Scholar] [CrossRef]
  32. Kanungo, T.; Mount, D.M.; Netanyahu, N.S.; Piatko, C.D.; Silverman, R.; Wu, A.Y. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 881–892. [Google Scholar] [CrossRef]
  33. Yang, Y.; Ma, Q.; Chen, Z.; Wen, X.; Zhang, G.; Zhang, T.; Dong, X.; Chen, L. Real-time extraction of the navigation lines between sugarcane ridges using LiDAR. Trans. Chin. Soc. Agric. Eng. 2022, 38, 178–185. [Google Scholar] [CrossRef]
  34. Yang, Z.; Yang, Y.; Li, C.; Zhou, Y.; Zhang, X.; Yu, Y.; Liu, D. Tasseled Crop Rows Detection Based on Micro-Region of Interest and Logarithmic Transformation. Front. Plant Sci. 2022, 13, 916474. [Google Scholar] [CrossRef]
  35. Zhou, Y.; Yang, Y.; Zhang, B.; Wen, X.; Yue, X.; Chen, L. Autonomous detection of crop rows based on adaptive multi-ROI in maize fields. Int. J. Agric. Biol. Eng. 2021, 14, 217–225. [Google Scholar] [CrossRef]
Figure 1. Algorithm flow chart.
Figure 2. Point cloud information collection: (a) field collection environment and point cloud collection equipment; (b) enlarged LiDAR image; and (c) point cloud of middle period maize.
Figure 3. Point cloud preprocessing: (a) straight through filtering segmentation schematic diagram; (b) detection area extraction, where W is the width of the detection area (m) and L is the length of the detection area (m); (c) enlarged image of the detection area; (d) denoising using Gaussian filtering.
Figure 4. Determination of initial horizontal strip candidate points: (a) horizontal strip division in the detection area; (b) initial horizontal strip; (c) number of point clouds in the initial horizontal strip grids after filtering; (d) number of point clouds in the initial horizontal strip grids; (e) initial horizontal strip candidate points and iteration rectangle, with A, B, C, and D as the four vertices of the iteration rectangle.
Figure 5. Candidate point acquisition process: (a) Gi,j represents the j-th grid of the i-th horizontal strip; Qi,j represents the number of the j-th grid of the i-th horizontal strip; Fi,k is the initial candidate point fitted in the k-th clustering area from left to right of the i-th horizontal strip; yp(i − 1),k is the y value of D(i − 1),k; ypi,k is the y value of Di,k; D(i − 1),k represents the candidate point in the k-th clustering area in the (i − 1)-th horizontal strip from left to right; R(i − 1),k is the iterative rectangle in the clustering area from left to right in the (i − 1)-th horizontal strip; Di,k is a candidate point in the k-th clustering area in the i-th horizontal strip from left to right; (b) Comparison between initial candidate points and candidate points: d is the absolute distance in the Y direction between candidate and feature points in two adjacent horizontal strips; E is offset and its value is 0.1; and Δh is the width of the strip.
Figure 6. Candidate points obtained: (a) iteration rectangles in the horizontal strips; (b) candidate points in the horizontal strips.
Figure 7. Comparison between final feature points and candidate points: (a) candidate points; (b) feature points.
Figure 8. Fitting results of crop row centerlines.
Figure 9. Late period maize row centerline extraction: (a) the environment of the late period maize field; (b) the point clouds of late period maize; (c) enlarged image of the detection area; (d) denoising using Gaussian filtering; (e) feature points of late period maize; and (f) the centerlines of late maize rows.
Figure 10. Error angle: the red line is the calibration line, and the blue line is the centerline of the crop row fitted by this algorithm; θ1 and θ2 are the error angles.
Figure 11. Comparison of maize processing algorithms in the middle and late periods: (a) image of real maize in the middle period; (b,c) two resulting figures based on the LiDAR processing algorithms in the middle period; (df) three resulting figures based on the image processing algorithms in the middle period; (g) image of real maize in the late period; (h,i) two resulting figures based on the LiDAR processing algorithms in the late period; and (jl) three resulting figures based on the image processing algorithms in the late period.
Table 1. Results of experiment.

Growth Period    A/%     T/s
Middle period    95.1    0.181
Late period      87.3    0.195
Table 2. Results of contrast experiment.

Growth Period    Algorithm           A/%     T/s
Middle period    This paper          95.1    0.181
Middle period    Algorithm 1 [33]    84.7    0.196
Middle period    Algorithm 2 [34]    85.3    0.255
Middle period    Algorithm 3 [35]    87.5    0.268
Middle period    Algorithm 4 [31]    89.6    0.260
Late period      This paper          87.3    0.195
Late period      Algorithm 1 [33]    80.4    0.237
Late period      Algorithm 2 [34]    82.2    0.330
Late period      Algorithm 3 [35]    75.5    0.412
Late period      Algorithm 4 [31]    70.1    0.399
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
