
Deep Learning-Based Object Detection System for Identifying Weeds Using UAS Imagery

Department of Agricultural and Biological Engineering, Purdue University, West Lafayette, IN 47907, USA
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(24), 5182;
Submission received: 28 October 2021 / Revised: 13 December 2021 / Accepted: 14 December 2021 / Published: 20 December 2021
(This article belongs to the Special Issue Advances of Remote Sensing in Precision Agriculture)


Abstract

Current methods of broadcast herbicide application have negative environmental and economic impacts. Computer vision methods, specifically those related to object detection, have been reported to aid site-specific weed management procedures for targeted herbicide application within a field. However, a major challenge in developing a weed detection system is the requirement for a properly annotated database to differentiate between weeds and crops under field conditions. This research involved creating an annotated database of 374 red, green, and blue (RGB) color images organized into monocot and dicot weed classes. The images were acquired from corn and soybean research plots located in north-central Indiana using an unmanned aerial system (UAS) flown at 30 and 10 m heights above ground level (AGL). A total of 25,560 individual weed instances were manually annotated. The annotated database consisted of four different subsets (Training Image Sets 1–4) used to train the You Only Look Once version 3 (YOLOv3) deep learning model for five separate experiments. The best results were observed with Training Image Set 4, consisting of images acquired at 10 m AGL. For monocot and dicot weeds, respectively, average precision (AP) scores of 91.48% and 86.13% were observed at a 25% IoU threshold (AP @ T = 0.25), and of 63.37% and 45.13% at a 50% IoU threshold (AP @ T = 0.5). This research demonstrates the need to develop large, annotated weed databases to evaluate deep learning models for weed identification under field conditions. It also affirms the findings of the limited number of other research studies utilizing object detection for weed identification under field conditions.
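The AP @ T metrics above depend on intersection-over-union (IoU) between predicted and annotated boxes. As a minimal illustrative sketch (not the paper's evaluation code; box coordinates and names are hypothetical), the following shows how IoU is computed and how the 0.25 and 0.5 thresholds decide whether a detection counts as a true positive:

```python
# Illustrative sketch: IoU between two axis-aligned boxes given as
# (x_min, y_min, x_max, y_max) in pixel coordinates, and the threshold
# test used when scoring detections (e.g., AP @ T = 0.25 vs. T = 0.5).

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned bounding boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero-sized if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, truth, threshold):
    """A detection is a true positive when its IoU meets the threshold."""
    return iou(pred, truth) >= threshold

pred = (10, 10, 50, 50)    # hypothetical predicted weed box
truth = (20, 20, 60, 60)   # hypothetical annotated ground-truth box
print(round(iou(pred, truth), 3))                 # 0.391
print(is_true_positive(pred, truth, 0.25))        # True  (counts at T = 0.25)
print(is_true_positive(pred, truth, 0.5))         # False (missed at T = 0.5)
```

The same detection can therefore be a true positive at T = 0.25 but a false positive at T = 0.5, which is why AP falls as the threshold tightens.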

1. Introduction

Weed infestations have been globally reported to cause yield losses in all field crops. In 2018, noxious weed infestation alone accounted for 30% of total yield loss worldwide [1]. In North America alone, weed infestations were reported to cause a USD 40 billion loss in harvest profit during the 2018 growing season [2]. Chemical weed control via the use of herbicides is a crucial component of crop health and yield. Broadcast application, the current standard in agriculture, involves the uniform distribution of herbicide over the entire field, regardless of whether weeds are present. This practice has negative environmental implications and is financially detrimental to farming operations [3]. The ability to detect, identify, and control weed growth in the early stages of plant development is essential for crop development. In the early crop production season, an effective management strategy helps prevent weed infestation from spreading to other field areas. Early-season site-specific weed management (ESSWM) is achievable by implementing this strategy at a plant-by-plant level [4]. In the practice of ESSWM, an automatic weed detection strategy can be utilized to spray only where a weed is present in the field. Advances in computer vision techniques have generated researchers’ interest in developing automated systems capable of accurately identifying weeds.
Various computer vision techniques have been used across different engineering disciplines. Computer vision is commonly used in the healthcare industry to diagnose diseases [5], evaluate lesions [6,7], and detect cancer [8]; it was observed that the YOLO object-detection model performed the best for breast cancer detection [8]. In addition, computer vision underpins autonomous vehicles such as self-driving cars [9], ground robots [10], and unmanned aerial systems [11]. Security applications such as facial recognition [12], pedestrian avoidance [13], and obstacle avoidance [14] also rely on computer vision. Although computer vision is widely used, its recent implementation in precision agriculture has shown promising results for the detection of different stresses within crop fields, such as weeds [15], diseases [16], pests [17], and nutrient deficiencies [18]. It has also been used for fruit counting [19], crop height detection [20], automation [21], and assessment of fruit and vegetable quality [22]. Data from different sensors are utilized to implement computer vision techniques in agriculture. Stereo camera sensors have been used for computer vision applications [10,14,20]. Hyperspectral and multispectral sensors are commonly used for weed identification because they capture detailed information across multiple spectral channels [23]. Although research has been conducted and solutions have been developed using a range of sensors, red, green, and blue (RGB) sensors remain the most popular because they are low-cost, easy to use, and readily available [23,24].
Before the rise of deep learning-based computer vision, traditional image processing and machine learning algorithms were commonly used by the research community. Computer vision systems based on image processing were developed to discriminate between crop rows and weeds in real time [25] and to identify weeds using multispectral and RGB imagery [26]. Machine learning was also recently used to identify weeds in corn using hyperspectral imagery [1] and in rice using stereo computer vision [27]. However, because traditional image processing and machine learning algorithms relied on manual feature extraction [28], they were less generalizable [29] and prone to bias [30]. Training deep learning models therefore gained popularity, as these models rely on convolutional neural networks capable of automatically extracting important features from images [31]. Deep learning was recently used for weed identification in corn using the You Only Look Once version 3 (YOLOv3) algorithm [15].
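The contrast above can be made concrete: traditional pipelines apply hand-crafted convolution kernels, whereas a convolutional neural network learns its kernel weights from training images. As a minimal sketch (illustrative only; the toy image and kernel below are assumptions, not from the paper), this applies one fixed vertical-edge kernel of the kind a manual pipeline would design:

```python
# Illustrative sketch: a single 2-D "valid-mode" convolution pass with a
# hand-crafted vertical-edge kernel. A CNN layer performs the same sliding
# dot product, but learns the kernel values during training instead of
# having them specified manually.

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation) on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + m][j + n] * kernel[m][n]
                for m in range(kh) for n in range(kw)
            )
    return out

# Toy image: a vertical edge between a dark region (0) and a bright one (1).
image = [[0, 0, 0, 1, 1, 1]] * 4
edge_kernel = [[-1, 0, 1]] * 3  # responds where brightness rises left-to-right

response = conv2d(image, edge_kernel)
print(response)  # strong response near the edge, zero in flat regions
```

Running this produces `[[0, 3, 3, 0], [0, 3, 3, 0]]`: the filter fires only where the edge sits inside its window, which is the feature-map behavior a CNN discovers automatically.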
Although promising results have been reported for weed identification, deep learning models capable of accurately identifying weeds from UAS imagery remain limited. UAS equipped with hyperspectral and multispectral sensors [32] have been used for weed identification. However, as RGB sensors are cost-effective, machine learning was recently applied to RGB imagery acquired by a UAS at 30, 60, and 90 m altitudes for weed identification [33]. Support vector machines (SVM), along with the YOLOv3 and Mask R-CNN deep learning models, were used for weed identification using multispectral imagery acquired by a UAS at an altitude of 2 m [30]. YOLOv3 was also used to identify weeds in winter wheat using UAS imagery acquired at 2 m altitude [34].
Flying a UAS at a low altitude yields higher-spatial-resolution imagery than manned aircraft or satellites can provide [35]. A UAS also provides high temporal resolution for tracking physical and biological changes in a field over time [36]. UAS-based imagery was used to train a deep neural network (DNN) for weed detection [37], resulting in high testing accuracy. Similarly, UAS-based multispectral imagery was successfully used to develop a crop/weed segmentation and mapping framework on a whole-field basis [32].
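The altitude-resolution relationship described above follows from the ground sample distance (GSD), the ground footprint of a single pixel. As a hedged illustration (the camera parameters below are assumptions for a typical 1-inch-sensor UAS camera, not the sensor used in this study):

```python
# Illustrative sketch: ground sample distance (GSD) for a nadir-pointing
# camera, showing why a 10 m flight yields finer spatial resolution than
# a 30 m flight. Standard photogrammetric relation:
#   GSD = (sensor_width * altitude) / (focal_length * image_width)

def gsd_cm_per_px(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    """Ground sample distance in cm/pixel (assumes flat terrain, nadir view)."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm sensor width, 8.8 mm focal length,
# 5472-pixel-wide images.
for altitude_m in (10, 30):
    gsd = gsd_cm_per_px(13.2, 8.8, altitude_m, 5472)
    print(f"{altitude_m} m AGL -> {gsd:.2f} cm/px")
```

With these assumed parameters the 10 m flight resolves roughly 0.27 cm/px versus about 0.82 cm/px at 30 m; GSD scales linearly with altitude, so tripling the flight height triples the ground footprint of each pixel.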
Despite a few successful research outcomes reported previously, weed detection has proven difficult within tilled and no-till row-crop fields. These fields present a complex and challenging environment for a computer vision application. Weeds and crops have similar spectral characteristics and share physical similarities early in the growing season. Soil conditions may also vary heavily within a small area, and the presence of stalks and debris in no-till or mini