Article

Crop Row Detection through UAV Surveys to Optimize On-Farm Irrigation Management

1 Department of Civil and Environmental Engineering, Politecnico di Milano, 20133 Milan, Italy
2 Department of Agricultural and Environmental Sciences – Production, Landscape, Agroenergy, Università degli Studi di Milano, 20133 Milan, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 1967; https://doi.org/10.3390/rs12121967
Submission received: 11 May 2020 / Revised: 12 June 2020 / Accepted: 16 June 2020 / Published: 18 June 2020
(This article belongs to the Special Issue UAS-Remote Sensing Methods for Mapping, Monitoring and Modeling Crops)

Abstract

Climate change and competition among water users are increasingly reducing the water available for irrigation; at the same time, traditionally non-irrigated crops now require irrigation to achieve high quality standards. In the context of precision agriculture, particular attention is given to the optimization of on-farm irrigation management, based on knowledge of the within-field variability of crop and soil properties, to increase crop yield quality and ensure efficient water use. Unmanned Aerial Vehicle (UAV) imagery is used in precision agriculture to monitor crop variability, but in the case of row-crops, image post-processing is required to separate crop rows from soil background and weeds. This study focuses on the detection and extraction of crop rows from images acquired with a UAV during the 2018 cropping season. Thresholding algorithms, classification algorithms, and Bayesian segmentation are tested and compared on three different crop types, namely grapevine, pear, and tomato, to analyze the suitability of these methods with respect to the characteristics of each crop. The obtained results are promising, with overall accuracy greater than 90% and producer's accuracy over 85% for the class "crop canopy". The methods' performances vary according to crop type, input data, and the parameters used. Some important outcomes can be pointed out from our study: NIR information does not give any particular added value, and RGB sensors should be preferred to identify crop rows; the presence of shadows in the inter-row spaces may affect crop detection in vineyards. Finally, the best methodologies to be adopted for practical applications are discussed.

Graphical Abstract

1. Introduction

According to the most recent projections presented by the Intergovernmental Panel on Climate Change (IPCC), the variation in precipitation is altering hydrological systems in many agricultural areas of the planet, affecting water resources in terms of both quantity and quality [1]. In particular, climate change is leading to more frequent extreme events (droughts and heat waves, occurring more often and lasting longer), to the alteration of spatial and temporal precipitation (less precipitation concentrated in a few heavy rainfall events), as well as to an increase in air temperatures and crop water needs in many geographical areas.
In Mediterranean countries, freshwater resources are currently highly exploited due to the rapid population growth and intensive water use in agriculture, industry, and tourism activities. In many areas of Europe, including Italy, the effects of climate change are already detectable and have led to the need to irrigate crops and areas traditionally not irrigated [2].
In the context of Precision Agriculture (PAg), particular attention is given to the optimization of on-farm irrigation management, since water resources for agricultural use have become scarcer due to the combined effect of climate change and the increased competition among different water uses [1,3]. Moreover, due to the intensification of extreme weather events, irrigation is becoming an important tool to guarantee adequate quality standards to agricultural products [4]. The optimization of on-farm irrigation management through variable rate irrigation systems is based on the detection of the spatial variability of soil and crop properties. Variable rate irrigation is aimed at managing water inputs to match the spatial variability of the water requirements found in the field, by providing irrigation in different amounts depending on real crop requirements. This approach represents a valid solution to increase water use efficiency and water savings, and for certain crops, variable water application might lead also to an increase of yield and product quality [5,6,7].
Currently, the detection of within-field spatial variability of crops can be easily carried out through many different technologies, including proximal and remote sensing techniques, sensors mounted on agricultural machinery, georeferencing systems, and geographical information systems [8]. Among them, the noteworthy development of Unmanned Aerial Vehicles (UAVs) registered in the last few years has allowed filling the gap between remote sensing and terrestrial techniques in agricultural applications [9]. Using UAVs is a good compromise between the large coverage obtainable with remote platforms (mainly satellite and aircraft) and the accuracy of the terrestrial data, with advantages in terms of time consumption and the costs of the surveys.
By allowing information to be collected from close above the crop canopy, UAVs have introduced a new point of view for agricultural surveys, giving rise to several applications. As early as 2008, Nebiker et al. [10] proposed the prototype of a multispectral sensor suitable for mounting on a UAV and used it in experiments to assess vegetation health, with promising results. Starting from this experience, a variety of studies can be found in the literature on the effectiveness of UAV surveys conducted for PAg purposes. The main applications involve in-field weed mapping, vegetation growth monitoring, crop water stress analysis, and optimization of irrigation management [9,11].
The major strength of UAV surveys is to provide information in a rapid, non-destructive way, with a spatial resolution high enough to detect within-field variation in detail. Nevertheless, when UAV imagery is used to monitor crop variability in row-crops, a post-processing procedure must be set up to identify and extract crop rows from soil background and weeds, thus generating a crop mask. This step is crucial to manage precision irrigation properly, by adjusting water supplies according to crop water requirements, considering the actual crop water status and vigor and their spatial variability. Moreover, crop row detection is crucial to reliably assess the effectiveness of variable rate irrigation in optimizing crop growth by reducing crop variability. In both cases, an inaccurate recognition of crop rows could induce a misleading assessment of crop status and, hence, an erroneous evaluation of the water amounts to be applied through irrigation, as well as an erroneous validation of irrigation management. The same considerations hold for fertilization practices based on prescription maps obtained from NDVI maps acquired through UAV surveys: the more accurate the crop row detection, the more effective the fertilization. This applies to fruit growing, viticulture, and horticulture, where nutrients are applied by fertigation through drip irrigation systems. On the other hand, crop row detection can be useful even for herbaceous crops sown in rows, such as maize. In the case of maize, procedures to generate crop masks are fundamental to manage precision fertilization or to control weeds in the early growth stages, before canopy closure, when the crop rows are still recognizable [12,13].
Different studies can be found in the literature, proposing (semi)automatic methods, using image-processing techniques on single-band images, maps of Vegetation Indices (VIs), or Digital Elevation Models (DEMs), to detect crop canopy [14]. Poblete-Echeverría et al. [15] compared the performance of four classification methods, including standard and well known methods (i.e., K-means and VIs’ thresholding) and machine learning methods (i.e., artificial neural networks and random forest), to detect vine canopy using ultra-high-resolution RGB imagery acquired with a conventional camera mounted on a low-cost UAV. Marques et al. [16] presented a UAV-based automatic method to detect chestnut trees, by using RGB and CIR (Color Infrared) orthomosaics combined with the canopy height model. In [17], potato plant objects were extracted from bare soil using the excess green index and Otsu thresholding methods.
This study focuses on the detection and extraction of crop rows by analyzing and post-processing images acquired with a UAV. Different methodologies are tested and compared, including several segmentation methods, such as supervised classifications, Bayesian segmentation, and thresholding algorithms developed ad hoc for this purpose. Extraction algorithms are applied both to geometric products (i.e., the digital elevation model) and to vegetation index maps. As already presented by other studies [15,16,18], the proposed methods exploit existing indicators, including NDVI and RGB-derived indices. In this way, this study aims at demonstrating the importance and effectiveness of UAV systems in precision agriculture and at providing less experienced users with the opportunity to use them. As an added value, using simple tools and proving their usefulness would allow the diffusion of these methodologies to a wide audience, even outside the academic environment. All the described methods are semi-automatic and require little human intervention for parameter selection and fine-tuning. The assessment of the methods was performed on three different crop types, grapevine, pear, and tomato, to analyze the suitability of the proposed methods with respect to the characteristics of each crop. The application of the proposed algorithms to different agro-ecosystems, each with its own peculiarities, represents the real novelty of this study with respect to the existing literature.

2. Materials

2.1. Study Sites

This study is part of the two-year project NUTRIPRECISO (RDP-EU, Measure 1.2.01, Lombardy Region), which involved the Department of Agricultural and Environmental Sciences (DiSAA), University of Milan, the Department of Civil and Environmental Engineering (DICA), Politecnico di Milano, and CREA (Consiglio per la ricerca in agricoltura e l'analisi dell'economia agraria). The project was aimed at developing and disseminating PAg practices in fruit growing, viticulture, and horticulture. Consequently, this study presents simple and reliable methodologies, based on commonly available data, that can be proposed to farmers and other potential users.
Three different types of crops were investigated in the NUTRIPRECISO project: grapevine, pear, and tomato; therefore, three different areas in the Lombardy Region were chosen as study sites, to be representative of each crop. The location of the vineyard is shown in Figure 1a, while the pear orchard and tomato sites are reported in Figure 1b. In the following, the main characteristics of the three study sites are described.

2.1.1. Vineyard

The vineyard, with an extent of 1 ha, is located in Olfino di Monzambano, in the province of Mantova, the heart of the Morainic Hills region (Northern Italy).
The vineyard is nearly flat and located at an altitude of about 88 m a.s.l. A soil survey with an Electro-Magnetic Induction (EMI) sensor followed by a traditional pedological survey showed that the field, despite its small dimension, was characterized by four different soil types with coarse soils in the northwestern part and fine heavy soils in the eastern part [19].
According to the ARPA (Regional Environmental Protection Agency) agro-meteorological station located at Ponti sul Mincio (about 6.5 km from the vineyard), the climatic conditions are warm and mild, with average maximum and minimum summer temperatures of about 30 °C and 20 °C, respectively, and rainfall concentrated in spring and autumn, with a mean annual value of 765 mm (values calculated as averages over the period 1993–2017), which provide favorable conditions for growing grapevine.
The vineyard variety was Chardonnay, cultivated in rows oriented along the east-west axis, with a distance of plants on the row of 0.8 m and a distance between the rows of 2.4 m, while the row height was about 1.3 m. The plant cover fraction in the phase of maximum development of the canopy was estimated to be about 25% (equivalent to a row width of 0.6 m). The soil, both under the rows and between the rows, was grass-covered with periodic mowing to regulate the excessive development of vegetation [19].

2.1.2. Pear Orchard

The pear orchard was situated in the “Dotti” farm, a research facility of the University of Milan, located in Arcagna locality, Montanaso Lombardo (province of Lodi (LO)).
The orchard covers an area of about 1 ha and is flat, positioned at about 80 m a.s.l. According to the data recorded in the period 1993–2017 at the nearest ARPA agro-meteorological station (located at about 12 km southeast from the experimental site in Cavenago d’Adda), the climate is characterized by two rainy periods, respectively in April and September, while the highest temperatures occur in July, when rain is minimum.
The soil is loam with more clay in deeper horizons. In the orchard, four different pear varieties were cultivated, namely Williams, Abate, Kaiser, and Conference varieties, distributed in 17 rows with an inter-row distance of about 4 m, while the distance between plants on the row was about 1.5 m, depending on the variety; the soil, under the rows and between the rows, was grass-covered with periodic cutting.

2.1.3. Tomato Field

The third site included in the project is an area of about one hectare cultivated with the industrial tomato "Pietra Rossa F1", inside the CREA research center in Montanaso Lombardo (LO). The site is characterized by loamy soils with clay content increasing with depth, while climatic conditions are the same as those described above for the pear orchard site. At the time of image acquisition, the tomato plants were 0.3 m high, cultivated in parallel rows 0.5 m wide with a distance between rows of about 1.5 m.

2.2. UAV Surveys and Photogrammetric Processing

Two UAV surveys were conducted on each study site, during the agricultural season of 2018. For each site, in the first survey, the Parrot Bebop 2 (Parrot S.A., Paris, France) was used to acquire RGB images, and the Parrot Sequoia camera (Parrot S.A., Paris, France) was mounted on the Parrot Disco (Parrot S.A., Paris, France) to collect multispectral imagery during the second survey. The photogrammetric processing of all the surveys was performed with Pix4Dmapper Pro software (Pix4D S.A., Prilly, Switzerland), Version 4.1.1. The details of the surveys and their processing are described in the following sections.

2.2.1. Vineyard

The RGB survey of the vineyard took place on 28 June 2018, while the multispectral survey took place six days later, on 4 July. The flight height of the Parrot Bebop 2 used in the RGB survey was set to 40 m Above Ground Level (AGL), while the Parrot Disco flew at 60 m AGL during the multispectral survey. The same overlaps among images were fixed for the two surveys: longitudinal overlap equal to 80% and transversal overlap equal to 70%. A total of 130 and 560 images were collected during the RGB and multispectral surveys, respectively. According to the sensors' characteristics (i.e., the Parrot Bebop 2 fisheye camera and the Sequoia camera), the final Ground Sample Distance (GSD) of the acquired images was about 0.1 m in both cases. As suggested in [20], the coordinates of nine Ground Control Points (GCPs) were measured with the Global Navigation Satellite System (GNSS) receiver Leica Viva GS14 (Leica Geosystems, Heerbrugg, Switzerland) in Network Real-Time Kinematic (NRTK) mode, to ensure the georeferencing of the photogrammetric products with high accuracy. According to the results of [20], eight GCPs were distributed around the perimeter of the vineyard, while the ninth target was placed in the middle of the field (Figure 2). In all the study sites, some of the GCPs were adopted as Check Points (CPs) during the photogrammetric processing, to assess its accuracy.
At the end of the two independent photogrammetric processes, performed with Pix4Dmapper Pro, a Digital Surface Model (DSM), an RGB orthophoto, and a four-band multispectral orthophoto were produced, with a GSD of around 0.1 m, to be exploited for crop row detection. The DSM generated from the multispectral survey had lower quality than the RGB one; therefore, it was not considered in further analysis. Table 1 summarizes the residuals on the GCPs after photogrammetric processing, while Figure 3 shows the DSM and the orthophotos of the vineyard.

2.2.2. Pear Orchard

In the pear orchard site, the RGB and multispectral surveys were conducted on 26 June and 2 July, respectively. The characteristics of the flights were the same as for the surveys performed on the vineyard site: longitudinal overlap among images equal to 80%, transversal overlap equal to 70%, flight height set at 40 m and 60 m for the multirotor UAV and the fixed-wing UAV, respectively, thus ensuring a GSD of the images of about 0.1 m. A total of 140 images were acquired during the RGB survey and 540 during the multispectral one. During the UAV flights, seven targets to be used as GCPs were placed on the terrain, well distributed around the orchard, as shown in Figure 4.
As in the case of the vineyard, the DSM and the orthophotos were produced in Pix4Dmapper Pro with a spatial resolution of 0.1 m (Figure 5). The residuals on the GCPs computed after the photogrammetric workflow are reported in Table 2.

2.2.3. Tomato Field

Given the proximity of the two sites, the tomato field was surveyed on the same days as the pear orchard, with the same equipment and flight characteristics. During the RGB survey, 96 images were acquired, while 388 images came from the multispectral flight. From these datasets, one DSM and two orthophotos, with a GSD equal to 0.1 m, were generated after bundle block adjustment in Pix4Dmapper Pro. The various photogrammetric products were georeferenced by means of six GCPs, whose center coordinates were measured with GNSS-NRTK on the dates of the surveys. The distribution of the GCPs on the tomato site and their computed residuals are shown in Figure 6 and Table 3, respectively. The photogrammetric products are reported in Figure 7.

3. Crop Row Detection Methods

Differentiating between crop canopy and background can be very challenging. Moreover, the different crop types considered in this study called for several methods to extract crop rows, each one more suitable for a specific crop type. In the following sections, five different detection methods are proposed; some were taken from the existing literature (i.e., classification algorithms and Bayesian segmentation), while two, labeled as thresholding algorithms, were developed ad hoc for the purposes of the project.
In order to achieve the best possible crop mask for identifying crop rows on orthophotos, Vegetation Indices (VIs) were chosen as the inputs of the detection methods, as already proposed by many authors [15,16,21]. For vegetation, most of these indices take into account the Red (R) and NIR reflectance bands (ρ_λ): the greater the difference between ρ_Red and ρ_NIR, the greater the amount of green and healthy vegetation in that particular pixel. Among all the possible VIs, only those composed of spectral bands that the sensors involved in the surveys could provide were used in this study (Table 4).
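As an illustration, the VIs in Table 4 reduce to simple per-pixel band arithmetic. The following minimal Python sketch (an illustrative addition, not part of the original MATLAB/QGIS workflows) computes them from reflectance bands stored as NumPy arrays; the SAVI coefficient L and the ARVI coefficient γ are set here to common default values, which are not reported in the paper:

```python
import numpy as np

def compute_vis(red, green, blue, nir, L=0.5, gamma=1.0):
    """Compute the VIs of Table 4 from reflectance bands of equal shape.

    L and gamma are the usual SAVI and ARVI coefficients; the values
    used here are common defaults, not values taken from this study.
    """
    eps = 1e-10  # guard against division by zero on dark pixels
    rb = red - gamma * (blue - red)  # atmospherically corrected red (ARVI)
    return {
        "NDVI": (nir - red) / (nir + red + eps),
        "SR": nir / (red + eps),
        "SAVI": (nir - red) / (nir + red + L + eps) * (1 + L),
        "ARVI": (nir - rb) / (nir + rb + eps),
        "ExG": 2 * green - (red + blue),
        "G%": green / (red + green + blue + eps),
    }
```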
To exploit the proposed methods fully, the DSM and the RGB orthophoto were also used individually as inputs. This ensured that the methods remained operational even when only imagery from UAVs carrying RGB sensors, such as the Parrot Bebop 2, was available.
The assessment of the final results was performed by computing error matrices and classification accuracies (Overall Accuracy (OA), User's Accuracy (UA), and Producer's Accuracy (PA)) on validation samples manually identified on the orthophotos through visual inspection. In particular, the quality of the crop detection was defined according to the PA of the class crop canopy: the greater the PA, the lower the probability of omitting crop pixels and, therefore, of underestimating the detected crop rows.
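For the binary crop canopy/background case considered here, OA, PA, and UA follow directly from the entries of the error matrix. A minimal sketch of the computation (an illustrative addition, assuming the predicted and reference masks are NumPy arrays with 1 = crop canopy and 0 = background):

```python
import numpy as np

def accuracy_metrics(predicted, reference):
    """OA, PA, and UA of the class 'crop canopy' from validation pixels."""
    p = np.asarray(predicted).ravel()
    r = np.asarray(reference).ravel()
    tp = np.sum((p == 1) & (r == 1))  # crop correctly detected
    fp = np.sum((p == 1) & (r == 0))  # background mapped as crop (commission)
    fn = np.sum((p == 0) & (r == 1))  # crop omitted from the mask (omission)
    tn = np.sum((p == 0) & (r == 0))  # background correctly rejected
    oa = (tp + tn) / (tp + tn + fp + fn)  # Overall Accuracy
    pa = tp / (tp + fn)                   # Producer's Accuracy
    ua = tp / (tp + fp)                   # User's Accuracy
    return oa, pa, ua
```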

3.1. Thresholding Algorithms

Two algorithms (specifically, local maxima extraction and threshold selection) were developed ad hoc for the purpose of the project, starting from [28], who proposed a method for crop row extraction using the 3D point cloud as input. Both methods were implemented in MATLAB R2017b [29] and are based on the concept that high pixel values generally correspond to crop rows. They are illustrated in the following sections.

3.1.1. Local Maxima Extraction

This method aims at generating a binary raster, where non-null values refer to the presence of the crop canopy. First, the input raster (e.g., VI map or DSM) is divided into square cells (macro-cells), then inside each macro-cell, the crop pixels are identified as those corresponding to a percentage of pixels with the highest values. It is a semi-automatic algorithm, where the user has to define the dimensions of the cell and the percentage value.
This method is sensitive to the user's choices: in particular, the macro-cell dimension should be selected so as to include both crop and ground pixels, thus being close to the distance between rows or slightly larger. The macro-cell size should be neither too small nor too large with respect to the distance between rows. If it is too small, a wrong selection of pixels is performed whatever the chosen percentage is: when the cell includes only crop pixels, any percentage lower than 100% causes an underestimation of crop pixels; conversely, when the cell overlaps only ground pixels, these are selected as crop pixels, thus producing an overestimation in the crop mask. Overestimation also arises when the macro-cell is too large, because the probability of selecting pixels belonging to the ground increases with the cell size.
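The original algorithm was implemented in MATLAB; the following minimal Python sketch reproduces the same idea (function and parameter names are hypothetical, introduced here only for illustration):

```python
import numpy as np

def local_maxima_extraction(raster, cell_size_px, percentage):
    """Binary crop mask: inside each square macro-cell, flag as crop the
    fraction `percentage` of pixels with the highest values.

    raster       : 2D array (e.g., a VI map or the DSM)
    cell_size_px : macro-cell side in pixels (about inter-row distance / GSD)
    percentage   : fraction of pixels to retain per cell, e.g., 0.3 for 30%
    """
    mask = np.zeros(raster.shape, dtype=bool)
    rows, cols = raster.shape
    for i in range(0, rows, cell_size_px):
        for j in range(0, cols, cell_size_px):
            cell = raster[i:i + cell_size_px, j:j + cell_size_px]
            # value above which the highest `percentage` of the cell lies
            thr = np.nanquantile(cell, 1.0 - percentage)
            mask[i:i + cell_size_px, j:j + cell_size_px] = cell >= thr
    return mask
```

With the vineyard settings of Table 5 (G% input, 5 m cells, 30% of maxima) and the 0.1 m GSD of the surveys, `cell_size_px` would be 50 and `percentage` 0.3.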

3.1.2. Threshold Selection

This method produces a binary crop mask, by selecting as crop pixels all pixels with values higher than a reference value. The challenging part of this algorithm is the definition of this reference value. When both DSM and Digital Terrain Model (DTM) of the area are available, the Canopy Height Model (CHM) could be derived as difference between DSM and DTM, and zero could be considered as the reference value, while in other cases (i.e., VIs as input, no availability of an accurate DTM), it should be determined as described below.
Starting from the input raster, a smoothed raster is created with a moving-window average filter. The smoothed raster is then subtracted from the input raster, and a threshold on the resulting differences is defined as the reference value, by visually checking the results with an empirical trial-and-error approach. Pixels with values greater than the threshold are retained as crop. In this algorithm too, the user intervention is twofold, namely choosing the dimensions of the moving window and the value of the threshold, and could cause under-/over-estimation of crop canopy pixels.
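A minimal Python sketch of this procedure (again an illustrative addition under the assumptions above, not the original MATLAB implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_selection(raster, window_px, threshold):
    """Binary crop mask: subtract a moving-window average from the input
    raster (typically the DSM) and keep pixels whose residual exceeds a
    reference value found by visual trial and error.

    window_px : side of the moving average window, in pixels
    threshold : reference value (e.g., 0 when the residual behaves
                like a canopy height model, as for the pear orchard)
    """
    smoothed = uniform_filter(raster.astype(float), size=window_px)
    residual = raster - smoothed   # positive where the surface rises
    return residual > threshold    # above the locally averaged terrain
```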

3.2. Classification Algorithms

Two well-known classification algorithms were exploited in this study, K-means clustering and the Minimum Distance to the Mean (MDM) classifier, to be representative of both unsupervised and supervised classification algorithms. Both methods were applied in QGIS (Version 3.4) [30], to allow users not familiar with programming languages to run the algorithms thanks to a dedicated user-friendly GUI.

3.2.1. K-Means Clustering

This is a well-known algorithm for hard unsupervised thematic classification [31]. The clustering performed by K-means is based on the minimization of an objective function f(Ω), defined as the sum of the Euclidean distances of the samples of each cluster from the respective centroid.
The number of classes (K) is known a priori. Once K is defined, the method consists of three iterative steps. In the first step, a centroid is automatically chosen for each class K_i. In the second step, the Euclidean distances of each sample from the centroids are computed, and each sample is assigned to the cluster for which the computed distance is minimum. In the last step, the centroids are recalculated and all the samples are reassigned. These steps are iterated until the clustering converges to a stable solution, namely when the cluster centroids no longer change meaningfully.
Although K-means can, in general, converge to different local minima, in practice the final configuration is largely insensitive to the arbitrarily selected initial positions of the centroids, which mainly influence the number of iterations needed to reach convergence.
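In this study, K-means was run through the QGIS GUI; purely as an illustration, an equivalent sketch with scikit-learn (hypothetical function and parameter names; the merging of clusters into crop and background reflects the labeling step performed by the user after clustering) is:

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_crop_mask(bands, k, crop_clusters):
    """Cluster a multi-band raster with K-means, then merge the clusters
    visually labeled as vegetation into a binary crop mask.

    bands         : 3D array (rows, cols, n_bands), e.g., an RGB orthophoto
    k             : number of clusters (6 was used for the vineyard site)
    crop_clusters : ids of the clusters assigned to crop canopy by the user
    """
    rows, cols, nb = bands.shape
    samples = bands.reshape(-1, nb).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(samples)
    return np.isin(labels, crop_clusters).reshape(rows, cols)
```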

3.2.2. Minimum Distance to Mean Classifier

This method computes the mean values of all the training sets and classifies each image pixel according to the class mean to which it is closest. The process is performed for all image pixels, one at a time. Class boundaries are determined using statistics derived from the training sets, and the distance used is the Euclidean one.
As for all supervised algorithms, a crucial phase of the MDM classifier is the selection of the training samples. They are used to compute the class spectral signatures and must therefore be representative of all the classes. In this study, training samples were defined by visual inspection of the UAV images and grouped into two macro-classes: crop canopy and background, the latter including weed, soil, and shadow pixels.
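A minimal NumPy sketch of the MDM rule (an illustrative addition; the study used the QGIS implementation):

```python
import numpy as np

def mdm_classify(bands, training):
    """Minimum Distance to the Mean classifier.

    bands    : 3D array (rows, cols, n_bands)
    training : dict mapping class name -> (n_samples, n_bands) array of
               training pixels, e.g., {"crop canopy": ..., "background": ...}
    Returns, for every pixel, the name of the closest class mean.
    """
    rows, cols, nb = bands.shape
    pixels = bands.reshape(-1, nb).astype(float)
    names = list(training)
    # class spectral signature = mean of its training samples
    means = np.array([training[c].mean(axis=0) for c in names])
    # Euclidean distance of every pixel from every class mean
    dist = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    best = dist.argmin(axis=1)
    return np.array(names, dtype=object)[best].reshape(rows, cols)
```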

3.3. Bayesian Segmentation

This method relies on the Bayesian approach, in which any uncertain quantity is treated as a random variable fully described by a probability distribution [32]. Given the vector of data y and the vector of parameters x, the conditional distribution of the parameters is described by the Bayes theorem [33]:
$$P(x \mid y) = \frac{P(x, y)}{P(y)} = \frac{P(y \mid x)\,P(x)}{P(y)} \qquad (1)$$
where:
  • P(x|y) is called the posterior probability and describes the new level of knowledge of the unknown parameters x given the observed data y.
  • P(y) is a normalization constant, equal to the sum of P(y|x)P(x) over all possible x, which ensures that the posterior probabilities sum to one.
  • P(x) represents the prior probability distribution. It describes the knowledge of the unknown parameters x without the contribution of the observed data.
  • P(y|x) is defined as the likelihood and, seen as a function of x, describes the way in which the a priori knowledge is modified by the data; it depends on the noise distribution.
The terms in Equation (1) can be adapted to the purpose of this study, the detection of crop rows: the posterior probability is the probability of a pixel belonging to the class crop canopy or background; the prior probability is defined starting from mean and standard deviation values assigned a priori to each class; and the likelihood is described by a Gaussian distribution, whose parameters are the mean and standard deviation of the two classes:
$$P(y_i \mid x_i) = \frac{1}{\sigma_{x_i}\sqrt{2\pi}}\,\exp\!\left(-\frac{(y_i - \mu_{x_i})^2}{2\sigma_{x_i}^2}\right) \qquad (2)$$
The final goal of the Bayesian approach is to find the optimal parameters x that maximize the posterior probability distribution P(x|y). This is called the Maximum A Posteriori (MAP) estimate [34], and it is defined as:
$$x_{MAP} = \arg\max_{x}\, P(x \mid y) \qquad (3)$$
In crop row detection, it consists of assigning a unique class to each pixel of the image, depending on the posterior probabilities estimated for each pixel. In order to obtain outputs less affected by pixel noise, smoothing filters or image adjustment can be applied on the input raster.
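A minimal sketch of this per-pixel MAP rule applied to a VI map (an illustrative addition; the class priors are an assumption, since the paper does not report how P(x) was parameterized beyond the class means and standard deviations):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import norm

def bayesian_map_mask(vi_map, mu_bg, sd_bg, mu_crop, sd_crop,
                      prior_crop=0.5, sigma_filter=3):
    """Assign each pixel to crop canopy or background by maximizing the
    posterior of Equation (1), with Gaussian likelihoods as in Equation (2).

    prior_crop   : assumed prior probability of the class crop canopy
    sigma_filter : Gaussian smoothing applied to the input, as done for
                   the vineyard and pear orchard sites (Tables 5 and 7)
    """
    y = gaussian_filter(vi_map.astype(float), sigma=sigma_filter)
    # unnormalized posteriors: likelihood x prior (P(y) cancels in the argmax)
    post_crop = norm.pdf(y, mu_crop, sd_crop) * prior_crop
    post_bg = norm.pdf(y, mu_bg, sd_bg) * (1.0 - prior_crop)
    return post_crop > post_bg  # True where crop canopy is the MAP class

# Example with the vineyard ExG parameters of Table 5:
# mask = bayesian_map_mask(exg, mu_bg=0.2, sd_bg=0.2, mu_crop=0.7, sd_crop=0.25)
```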

4. Results

Considering all the detection methods, their parameters, and all the possible input rasters, the number of potentially available crop masks is very high: 191 outputs were tested for the vineyard site, while 166 and 104 were tested for the pear orchard and tomato sites, respectively. For the sake of brevity, only the best masks, one for each detection method, are reported and compared. An exhaustive analysis of all the tests performed is given in [35].

4.1. Vineyard

For the vineyard site, the parameters of each detection method that yielded the best results are reported in Table 5.
For the computation of the error matrices, 104 polygons (N pixels = 42,732) were defined for the class crop canopy and 97 polygons (N pixels = 64,309) for the class background. In Table 6, the accuracies of the five selected best results are summarized, and the detail of each crop mask is shown in Figure 8.

4.2. Pear Orchard

The parameters for the best results of each detection method for the pear orchard are summarized in Table 7.
The error matrices were computed starting from a validation set composed of 37 polygons (N pixels = 44,889) for the class crop canopy and 34 polygons (N pixels = 62,378) for the class background. The five selected best results and their respective accuracies are presented in Figure 9 and Table 8.

4.3. Tomato Field

Table 9 reports the parameters chosen for each detection method, which gave the best crop mask outputs.
The assessment of the results for the tomato site was performed on 52 polygons (N pixels = 81,290) for the class crop canopy and 42 polygons (N pixels = 155,296) for the class background. The accuracy values are presented in Table 10, while Figure 10 shows the crop detection for the five selected methods.

5. Discussion

As a general finding, it can be stated that all the methods tested in this study performed well for crop row detection, with OA close to or even greater than 0.9. The vineyard site proved the most challenging (OA values lower than 0.9 for some methods), due to the concurrent presence of weeds, bare soil, and shadow in the inter-row space, while the high contrast between bare soil and crop canopy facilitated crop detection in the tomato site (OA values always higher than 0.9).
Despite the high accuracy values achieved, the proposed algorithms did not require any particular computational resources, and the calculation time was reasonable on mass-market hardware, even if it varied according to the method. Local Maxima Extraction (LME) and Bayesian Segmentation (BS) overall returned the best outputs in terms of accuracy values, but with different performances in terms of time cost and parameter setting. The first method was faster: choosing a cell size comparable with the row spacing, or slightly larger, and a percentage of maxima between 30% and 40% produced high quality results in all cases. The latter required a high level of a priori knowledge, and its parameters had to be fine-tuned ad hoc with a time-consuming trial-and-error approach. The Threshold Selection (TS) algorithm needs an accurate DSM, and the definition of the reference value is crucial and can sometimes fail, especially on fields characterized by a relevant slope, as demonstrated by the low accuracy values registered in the vineyard site (OA = 0.76). On hilly fields and non-flat areas, the use of a real CHM is necessary and cannot be bypassed by creating a smoothed raster on which the reference terrain height is identified. Classification algorithms are easy to run, especially in the QGIS implementation, and widely used in remote sensing, but they cannot reach, in all cases, the same level of accuracy as the other methods. In addition, these algorithms require considerable human intervention, either in the labeling phase, as in the case of K-means clustering, or in the delineation of the training samples needed to start the MDM classifier.
Regarding input rasters, DSM, RGB orthophotos, and VIs obtained as a combination of RGB bands are the most adopted in the selected methods. Only the cases of Bayesian segmentation on the pear orchard site and classification algorithms for tomato site require NDVI and NDVI jointly with SAVI to obtain the best results. Therefore, NIR information does not give any particular additional value in crop row detection, and RGB sensors can perform accurate canopy extraction, as already demonstrated by other authors [15,28], saving the time and cost of the UAV surveys and processing.
According to the crop characteristics, specific considerations can be made for each single crop.
In the case of a vineyard, it is important to maintain the continuity of the crop rows when detecting the crop canopy. This characteristic was preserved by the Bayesian segmentation, as shown in Figure 8e, thanks also to the Gaussian filter applied to the input raster before launching the algorithm. The continuity of the vine rows was also guaranteed by the local maxima extraction algorithm (Figure 8a), apart from some rare and sparse pixels. Indeed, the two aforementioned methods registered the highest accuracy values, in particular PA values of 0.97 and 0.95 for BS and LME, respectively, considerably greater than the PA values of the other detection methods. The major issue in detecting vine rows is the presence of shadows, weeds, and bare soil in the inter-row space. Our results demonstrated that the shadows made the classification methods practically unusable on the vineyard: in Figure 8c,d, it is clearly visible how the pixels at the edges of the shadowed areas were detected as crop pixels. Classification algorithms are unable to separate the vine canopy from its shadow on the terrain, resulting in an overestimation of the crop rows (UA values around 0.8).
In the orchard, pear trees are planted quite distant from one another (around 1.5 m in our study site); therefore, a good detection has to identify single plants rather than rows. In these terms, the classification algorithms return the best results, as visible in Figure 9c,d and confirmed by the highest UA values, practically equal to one (Table 8). On the other hand, these detection methods also returned the noisiest outputs and underestimated the presence of pear trees in the orchard, in particular the MDM classifier, with a PA equal to 0.68. The height of the trees favored their extraction from the background, even in the presence of weeds in the inter-row space. To exploit this characteristic fully, it is advisable to use the DSM as the input of any detection method; in particular, the threshold selection algorithm gave the best outcomes in the pear orchard site, with a PA for the class crop canopy of 0.97.
As already mentioned, the tomato field site showed the highest accuracy values for the crop detection, thanks to the regular alternation of bare soil and vegetation canopy. The OA values for all the tested methods were higher than 0.9; the LME algorithm, BS, and K-means also returned PA values above 0.9, while the PA of the TS method was slightly lower than 0.9 due to the use of the DSM as its input. At the time of the survey, the plants had a height of 0.3 m, equivalent to only a few image GSDs; therefore, the errors in the photogrammetric processing (Height column in Table 3) affected the results of the canopy detection in this specific case.
Precision viticulture is already widespread in the world, and recent articles have demonstrated the added value that remote sensing from UAV platforms can give to this sector [36]. Hence, numerous studies can be found in the literature dealing with vine canopy extraction [15,28,37,38]. The results presented in this work show accuracy values similar to those available in the literature. To the best of the authors' knowledge, very few studies have been published on the detection of pear trees in orchards or of tomato canopy, which hampers the availability of reference values against which to compare the outputs; the assessment of the reliability of the illustrated results was therefore based on similar case studies in the literature. The detection of pear plants was performed with accuracy values slightly lower than the results obtained by [39] in two orchards in China, but close to the outcomes of the chestnut tree extraction described in [16]. The high values found for the tomato site are in agreement with the results on the estimation of crop emergence in potatoes presented in [17].
The potential utility of this study in precision agriculture is high. The methods described herein allow vegetation properties specifically related to the characteristics of the crop under investigation to be derived from UAV imagery. In particular, in the irrigation management context, it must be taken into account that irrigation for row-crops is usually provided through localized pressurized systems, wetting only the areas near the crop. The soil between rows is usually grass-covered, so analyzing crop maps (for example, NDVI or thermal maps) of the field without masking the inter-row soil could lead to errors when evaluating the water status or vigor of the crop under study. In Figure 11, Figure 12 and Figure 13, the NDVI maps for the three study sites are shown: on the left, the original maps; on the right, the canopy maps generated after the extraction of the crop rows. This information could be used in precision agriculture applications for mapping vegetation stress status or to optimize on-farm irrigation management. As an example, the use of a crop row detection method to delineate Site-Specific Management Zone (SSMZ) maps on a vineyard was described in [40].
Moreover, it is worth stressing that, in the case of row-crops, the use of UAV imagery instead of satellite imagery becomes fundamental. In satellite images, in fact, pixels have larger dimensions, and the operation of extracting crop rows is not possible. This leads to the presence of mixed pixels, for which Vegetation Index maps give information about both row and inter-row vegetation, and which are therefore of little use for the management of agronomic inputs [41,42].

6. Conclusions

This study demonstrates the feasibility of performing crop row detection from high-resolution UAV imagery for different crop types, including vineyards, orchards, and horticultural crops. The DSM and RGB or multispectral orthophotos can be used as inputs for the detection methods; in particular, the DSM performs better with tall crops (i.e., grapevine and pear), even in the presence of inter-row weeds, but it should be avoided for detecting horticultural crops (i.e., tomato). Commercial RGB sensors give high accuracy values for crop row detection; therefore, for this purpose, it is not necessary to perform surveys with more expensive multispectral cameras, if no additional infrared information is required. Furthermore, in the presence of shadows cast by the crop canopy on the terrain, indices based on the NIR band and classification algorithms could lead to an overestimation of the crop rows.
Although all the applied methods need some level of human intervention, the local maxima extraction algorithm, developed ad hoc within this study, reaches the best compromise in terms of time cost, automation, and quality of results. Bayesian segmentation applied to VIs performs better than the other methods in the presence of bare soil, but it depends on a priori information.

Author Contributions

Conceptualization, G.R., A.F., B.O., and G.S.; methodology, G.R. and G.S.; software, G.R. and A.M.; validation, G.R.; data curation, G.R., A.M.; writing, original draft preparation, G.R. and A.M.; writing, review and editing, G.R., A.M., B.O., A.F., and G.S.; visualization, G.R. and A.M.; supervision, B.O., A.F., and G.S.; project administration, B.O., A.F., and G.S.; funding acquisition, B.O., A.F., and G.S. All authors read and agreed to the published version of the manuscript.

Funding

We wish to thank Regione Lombardia for funding the NUTRIPRECISO project (EU-RDP 2017), in the context of which this research was developed.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

References

  1. Pachauri, R.K.; Allen, M.R.; Barros, V.R.; Broome, J.; Cramer, W.; Christ, R.; Church, J.A.; Clarke, L.; Dahe, Q.; Dasgupta, P.; et al. Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; IPCC: Geneva, Switzerland, 2014. [Google Scholar]
  2. Costa, J.; Vaz, M.; Escalona, J.; Egipto, R.; Lopes, C.; Medrano, H.; Chaves, M. Modern viticulture in southern Europe: Vulnerabilities and strategies for adaptation to water scarcity. Agric. Water Manag. 2016, 164, 5–18. [Google Scholar] [CrossRef]
  3. UN-Water. 2018 UN World Water Development Report, Nature-Based Solutions for Water; UNESCO: Paris, France, 2018. [Google Scholar]
  4. Castellarin, S.D.; Bucchetti, B.; Falginella, L.; Peterlunger, E. Influenza del deficit idrico sulla qualità delle uve: Aspetti fisiologici e molecolari. Italus Hortus 2011, 18, 63–79. [Google Scholar]
  5. Monaghan, J.M.; Daccache, A.; Vickers, L.H.; Hess, T.M.; Weatherhead, E.K.; Grove, I.G.; Knox, J.W. More ‘crop per drop’: Constraints and opportunities for precision irrigation in European agriculture. J. Sci. Food Agric. 2013, 93, 977–980. [Google Scholar] [CrossRef]
  6. Sanchez, L.; Sams, B.; Alsina, M.; Hinds, N.; Klein, L.; Dokoozlian, N. Improving vineyard water use efficiency and yield with variable rate irrigation in California. Adv. Anim. Biosci. 2017, 8, 574–577. [Google Scholar] [CrossRef]
  7. McClymont, L.; Goodwin, I.; Whitfield, D.; O’Connell, M. Effects of within-block canopy cover variability on water use efficiency of grapevines in the Sunraysia irrigation region, Australia. Agric. Water Manag. 2019, 211, 10–15. [Google Scholar] [CrossRef]
  8. Matese, A.; Di Gennaro, S.F. Technology in precision viticulture: A state of the art review. Int. J. Wine Res. 2015, 7, 69–81. [Google Scholar] [CrossRef] [Green Version]
  9. Pádua, L.; Vanko, J.; Hruška, J.; Adão, T.; Sousa, J.J.; Peres, E.; Morais, R. UAS, sensors, and data processing in agroforestry: A review towards practical applications. Int. J. Remote Sens. 2017, 38, 2349–2391. [Google Scholar] [CrossRef]
  10. Nebiker, S.; Annen, A.; Scherrer, M.; Oesch, D. A light-weight multispectral sensor for micro UAV: Opportunities for very high resolution airborne remote sensing. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1193–1199. [Google Scholar]
  11. Tsouros, D.C.; Bibi, S.; Sarigiannidis, P.G. A Review on UAV-Based Applications for Precision Agriculture. Information 2019, 10, 349. [Google Scholar] [CrossRef] [Green Version]
  12. Corti, M.; Cavalli, D.; Cabassi, G.; Vigoni, A.; Degano, L.; Gallina, P.M. Application of a low-cost camera on a UAV to estimate maize nitrogen-related variables. Precis. Agric. 2019, 20, 675–696. [Google Scholar] [CrossRef]
  13. Noh, H.; Zhang, Q. Shadow effect on multi-spectral image for detection of nitrogen deficiency in corn. Comput. Electron. Agric. 2012, 83, 52–57. [Google Scholar] [CrossRef]
  14. Pádua, L.; Marques, P.; Hruška, J.; Adão, T.; Bessa, J.; Sousa, A.; Peres, E.; Morais, R.; Sousa, J.J. Vineyard properties extraction combining UAS-based RGB imagery with elevation data. Int. J. Remote Sens. 2018, 39, 5377–5401. [Google Scholar] [CrossRef]
  15. Poblete-Echeverría, C.; Olmedo, G.; Ingram, B.; Bardeen, M. Detection and segmentation of vine canopy in ultra-high spatial resolution RGB imagery obtained from unmanned aerial vehicle (UAV): A case study in a commercial vineyard. Remote Sens. 2017, 9, 268. [Google Scholar] [CrossRef] [Green Version]
  16. Marques, P.; Pádua, L.; Adão, T.; Hruška, J.; Peres, E.; Sousa, A.; Sousa, J.J. UAV-based automatic detection and monitoring of chestnut trees. Remote Sens. 2019, 11, 855. [Google Scholar] [CrossRef] [Green Version]
  17. Li, B.; Xu, X.; Han, J.; Zhang, L.; Bian, C.; Jin, L.; Liu, J. The estimation of crop emergence in potatoes by UAV RGB imagery. Plant Methods 2019, 15, 15. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Pádua, L.; Adão, T.; Sousa, A.; Peres, E.; Sousa, J.J. Individual Grapevine Analysis in a Multi-Temporal Context Using UAV-Based Multi-Sensor Imagery. Remote Sens. 2020, 12, 139. [Google Scholar] [CrossRef] [Green Version]
  19. Ortuani, B.; Facchi, A.; Mayer, A.; Bianchi, D.; Bianchi, A.; Brancadoro, L. Assessing the Effectiveness of Variable-Rate Drip Irrigation on Water Use Efficiency in a Vineyard in Northern Italy. Water 2019, 11, 1964. [Google Scholar] [CrossRef] [Green Version]
  20. Ronchetti, G.; Pagliari, D.; Sona, G. DTM generation through UAV survey with a fisheye camera on a vineyard. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, XLII-2, 983–989. [Google Scholar] [CrossRef] [Green Version]
  21. Pádua, L.; Marques, P.; Hruška, J.; Adão, T.; Peres, E.; Morais, R.; Sousa, J. Multi-Temporal Vineyard Monitoring through UAV-Based RGB Imagery. Remote Sens. 2018, 10, 1907. [Google Scholar] [CrossRef] [Green Version]
  22. Rouse, J., Jr.; Haas, R.; Schell, J.; Deering, D. Monitoring Vegetation Systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309. [Google Scholar]
  23. Birth, G.S.; McVey, G.R. Measuring the color of growing turf with a reflectance spectrophotometer 1. Agron. J. 1968, 60, 640–643. [Google Scholar] [CrossRef]
  24. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  25. Kaufman, Y.J.; Tanre, D. Atmospherically resistant vegetation index (ARVI) for EOS-MODIS. IEEE Trans. Geosci. Remote Sens. 1992, 30, 261–270. [Google Scholar] [CrossRef]
  26. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  27. Richardson, A.D.; Jenkins, J.P.; Braswell, B.H.; Hollinger, D.Y.; Ollinger, S.V.; Smith, M.L. Use of digital webcam images to track spring green-up in a deciduous broadleaf forest. Oecologia 2007, 152, 323–334. [Google Scholar] [CrossRef]
  28. Weiss, M.; Baret, F. Using 3D point clouds derived from UAV RGB imagery to describe vineyard 3D macro-structure. Remote Sens. 2017, 9, 111. [Google Scholar] [CrossRef] [Green Version]
  29. MATLAB. Version 9.3 (R2017b); The MathWorks Inc.: Natick, MA, USA, 2017; Available online: https://www.mathworks.com/downloads/ (accessed on 14 December 2018).
  30. QGIS Development Team. QGIS Geographic Information System; Open Source Geospatial Foundation, 2009. Available online: http://qgis.osgeo.org (accessed on 2 January 2020).
  31. MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 1 January 1967; Volume 1, pp. 281–297. [Google Scholar]
  32. Ross, S.M. Probabilità e statistica per l’ingegneria e le scienze; Apogeo Editore: Milano, Italy, 2003. [Google Scholar]
  33. Bayes, T. LII. An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, FRS communicated by Mr. Price, in a letter to John Canton, AMFR S. Philos. Trans. R. Soc. Lond. 1763, 53, 370–418. [Google Scholar] [CrossRef]
  34. Geman, S.; Geman, D. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 1984, 721–741. [Google Scholar] [CrossRef]
  35. Marino, A.; Marotta, F. Crop Rows Detection through UAV Images. Master's Thesis, Politecnico di Milano, Milan, Italy, 2019. Available online: https://www.politesi.polimi.it/ (accessed on 28 November 2019).
  36. Borgogno Mondino, E.; Gajetti, M. Preliminary considerations about costs and potential market of remote sensing from UAV in the Italian viticulture context. Eur. J. Remote Sens. 2017, 50, 310–319. [Google Scholar] [CrossRef]
  37. Cinat, P.; Di Gennaro, S.F.; Berton, A.; Matese, A. Comparison of Unsupervised Algorithms for Vineyard Canopy Segmentation from UAV Multispectral Images. Remote Sens. 2019, 11, 1023. [Google Scholar] [CrossRef] [Green Version]
  38. De Castro, A.; Jiménez-Brenes, F.; Torres-Sánchez, J.; Peña, J.; Borra-Serrano, I.; López-Granados, F. 3-D characterization of vineyards using a novel UAV imagery-based OBIA procedure for precision viticulture applications. Remote Sens. 2018, 10, 584. [Google Scholar] [CrossRef] [Green Version]
  39. Dong, X.; Zhang, Z.; Yu, R.; Tian, Q.; Zhu, X. Extraction of Information about Individual Trees from High-Spatial-Resolution UAV-Acquired Images of an Orchard. Remote Sens. 2020, 12, 133. [Google Scholar] [CrossRef] [Green Version]
  40. Ortuani, B.; Sona, G.; Ronchetti, G.; Mayer, A.; Facchi, A. Integrating Geophysical and Multispectral Data to Delineate Homogeneous Management Zones within a Vineyard in Northern Italy. Sensors 2019, 19, 3974. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Di Gennaro, S.F.; Dainelli, R.; Palliotti, A.; Toscano, P.; Matese, A. Sentinel-2 Validation for Spatial Variability Assessment in Overhead Trellis System Viticulture Versus UAV and Agronomic Data. Remote Sens. 2019, 11, 2573. [Google Scholar] [CrossRef] [Green Version]
  42. Khaliq, A.; Comba, L.; Biglia, A.; Ricauda Aimonino, D.; Chiaberge, M.; Gay, P. Comparison of satellite and UAV-based multispectral imagery for vineyard variability assessment. Remote Sens. 2019, 11, 436. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The experimental sites: (a) the vineyard; (b) in yellow the pear orchard and in red the tomato field. Coordinate Reference System (CRS): WGS84/UTM Zone 32N. Map data: ©OpenStreetMap contributors.
Figure 2. Ground Control Points' (GCPs) distribution for the surveys on the vineyard. Map data: ©Google Satellite.
Figure 3. Vineyard site: DSM (a) and orthophoto (b) produced through photogrammetric processing of the RGB dataset; false color orthophoto (c), generated from the multispectral dataset.
Figure 4. Ground Control Points' (GCPs) distribution for the surveys on the pear orchard. Map data: ©Google Satellite.
Figure 5. Pear orchard site: DSM (a) and orthophoto (b) produced through photogrammetric processing of the RGB dataset; false color orthophoto (c), generated from the multispectral dataset.
Figure 6. Ground Control Points' (GCPs) distribution for the surveys on the tomato site. Map data: ©Google Satellite.
Figure 7. Tomato site: DSM (a) and orthophoto (b) produced through photogrammetric processing of the RGB dataset; false color orthophoto (c), generated from the multispectral dataset.
Figure 8. Vineyard site: crop row detection results for (a) local maxima extraction, (b) threshold selection, (c) K-means clustering, (d) MDM classifier, and (e) Bayesian segmentation. Figures refer to the area included in the red box in (f).
Figure 9. Pear orchard site: crop row detection results for (a) local maxima extraction, (b) threshold selection, (c) K-means clustering, (d) MDM classifier, and (e) Bayesian segmentation. Figures refer to the area included in the red box in (f).
Figure 10. Tomato field site: crop row detection results for (a) local maxima extraction, (b) threshold selection, (c) K-means clustering, (d) MDM classifier, and (e) Bayesian segmentation. Figures refer to the area included in the red box in (f).
Figure 11. Vineyard site: NDVI map before (a) and after (b) the crop rows' extraction.
Figure 12. Pear orchard site: NDVI map before (a) and after (b) the crop rows' extraction.
Figure 13. Tomato field site: NDVI map before (a) and after (b) the crop rows' extraction.
Table 1. Vineyard site: residuals on the GCPs after bundle block adjustment.

| Label | Easting (m) | Northing (m) | Height (m) |
|-------|-------------|--------------|------------|
| v1 | −0.029 | −0.008 | 0.088 |
| v2 | −0.007 | −0.012 | −0.012 |
| v3 | −0.025 | −0.030 | −0.063 |
| v4 | 0.008 | 0.025 | 0.062 |
| v5 | 0.015 | 0.005 | 0.000 |
| v6 | 0.022 | −0.013 | 0.024 |
| v7 | 0.030 | −0.013 | 0.027 |
| v8 | 0.020 | −0.015 | −0.086 |
| v9 | 0.012 | 0.008 | 0.012 |
| RMSE | 0.021 | 0.016 | 0.052 |
Table 2. Pear orchard site: residuals on the GCPs after bundle block adjustment.

| Label | Easting (m) | Northing (m) | Height (m) |
|-------|-------------|--------------|------------|
| p1 | −0.050 | −0.017 | 0.007 |
| p2 | −0.032 | −0.004 | −0.017 |
| p3 | 0.018 | −0.060 | 0.021 |
| p4 | 0.028 | −0.089 | 0.113 |
| p5 | 0.003 | 0.023 | −0.290 |
| p6 | 0.100 | 0.053 | 0.018 |
| p7 | −0.085 | 0.003 | 0.054 |
| RMSE | 0.056 | 0.047 | 0.119 |
Table 3. Tomato site: residuals on the GCPs after bundle block adjustment.

| Label | Easting (m) | Northing (m) | Height (m) |
|-------|-------------|--------------|------------|
| t1 | −0.098 | 0.071 | −0.101 |
| t2 | 0.095 | 0.068 | 0.046 |
| t3 | 0.028 | 0.071 | 0.077 |
| t4 | −0.026 | −0.127 | −0.054 |
| t5 | −0.019 | −0.154 | 0.149 |
| t6 | −0.053 | −0.038 | 0.114 |
| RMSE | 0.062 | 0.096 | 0.097 |
Table 4. Vegetation Indices (VIs) used in this study.

| Index | Name | Formula | Reference |
|-------|------|---------|-----------|
| NDVI | Normalized Difference Vegetation Index | (NIR − Red) / (NIR + Red) | [22] |
| SR | Simple Ratio | NIR / Red | [23] |
| SAVI | Soil-Adjusted Vegetation Index | [(NIR − Red) / (NIR + Red + L)] · (1 + L) | [24] |
| ARVI | Atmospherically Resistant Vegetation Index | (NIR − RB) / (NIR + RB), where RB = Red − γ(Blue − Red) | [25] |
| ExG | Excess Green | 2·Green − (Red + Blue) | [26] |
| G% | Normalized Green Channel Brightness | Green / (Red + Green + Blue) | [27] |
Table 5. Vineyard site: parameters for the best results of each detection method. MDM, Minimum Distance to the Mean.

| Method | Input | User's Choices |
|--------|-------|----------------|
| Local Maxima Extraction | G% | cell size: 5 m; percentage: 30% |
| Threshold Selection | DSM | cell size: 3 m; threshold: 0.3 |
| K-means Clustering | RGB orthophoto | classes: 6 |
| MDM Classifier | RGB orthophoto | classes: 2 |
| Bayesian Segmentation | ExG, Gaussian filter (σ = 3) | background: μ = 0.2, σ = 0.2; crop canopy: μ = 0.7, σ = 0.25 |
Table 6. Vineyard site: assessment of the best results of each detection method.

| Method | OA | PA Crop Canopy | UA Crop Canopy |
|--------|----|----------------|----------------|
| Local Maxima Extraction | 0.94 | 0.95 | 0.91 |
| Threshold Selection | 0.76 | 0.41 | 0.99 |
| K-means Clustering | 0.82 | 0.73 | 0.80 |
| MDM Classifier | 0.87 | 0.84 | 0.83 |
| Bayesian Segmentation | 0.96 | 0.97 | 0.94 |
Table 7. Pear orchard site: parameters for the best results of each detection method.

| Method | Input | User's Choices |
|--------|-------|----------------|
| Local Maxima Extraction | DSM | cell size: 4 m; percentage: 40% |
| Threshold Selection | DSM | cell size: 4 m; threshold: 0 |
| K-means Clustering | RGB orthophoto | classes: 5 |
| MDM Classifier | RGB orthophoto | classes: 2 |
| Bayesian Segmentation | NDVI, Gaussian filter (σ = 3) | background: μ = 0.8, σ = 0.04; crop canopy: μ = 0.93, σ = 0.04 |
Table 8. Pear orchard site: assessment of the best results of each detection method.

| Method | OA | PA Crop Canopy | UA Crop Canopy |
|--------|----|----------------|----------------|
| Local Maxima Extraction | 0.92 | 0.88 | 0.93 |
| Threshold Selection | 0.95 | 0.97 | 0.92 |
| K-means Clustering | 0.95 | 0.90 | 0.99 |
| MDM Classifier | 0.87 | 0.68 | 0.99 |
| Bayesian Segmentation | 0.94 | 0.91 | 0.95 |
Table 9. Tomato field site: parameters for the best results of each detection method.

| Method | Input | User's Choices |
|--------|-------|----------------|
| Local Maxima Extraction | G% | cell size: 3 m; percentage: 30% |
| Threshold Selection | DSM | cell size: 4 m; threshold: 0 |
| K-means Clustering | SAVI + NDVI | classes: 5 |
| MDM Classifier | SAVI + NDVI | classes: 2 |
| Bayesian Segmentation | ExG, histogram adjustment | background: μ = 0.05, σ = 0.15; crop canopy: μ = 0.65, σ = 0.35 |
Table 10. Tomato field site: assessment of the best results of each detection method.

| Method | OA | PA Crop Canopy | UA Crop Canopy |
|--------|----|----------------|----------------|
| Local Maxima Extraction | 0.98 | 0.94 | 0.98 |
| Threshold Selection | 0.97 | 0.87 | 0.99 |
| K-means Clustering | 0.93 | 0.93 | 0.79 |
| MDM Classifier | 0.90 | 0.60 | 0.92 |
| Bayesian Segmentation | 0.98 | 0.91 | 0.98 |
