Computer Vision for Intelligent Crop Identification and Crop Protection

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (20 December 2023) | Viewed by 20,626

Special Issue Editors

College of Engineering, China Agricultural University, Beijing 100083, China
Interests: smart urban agriculture; artificial intelligence; agricultural robotics; automated control; unmanned aerial vehicle; plant phenotyping; computer vision; crop plant signaling; machine (deep) learning; food processing and safety; fluorescence imaging; hyper/multispectral imaging; Vis/NIR/MIR imaging spectroscopy

Special Issue Information

Dear Colleagues,

Affected by climate change and other factors, crops are susceptible to a variety of diseases, pests, and weeds, resulting in production loss and quality degradation. Crop protection is the science and practice of managing the plant diseases, weeds, and pests that damage agricultural crops. Herbicides, insecticides, and fungicides are widely used for crop protection in agricultural areas. However, a series of factors, including resistance to chemicals, environmental pollution, and growing consumer concern and interest in organic food, limit the acceptability of chemical reagents in future applications. Conventional protocols for weed control and for phenotyping crop disease severity are costly and time-consuming. This context requires the development of smart technologies to accelerate the selection of disease-resistant crops, or to apply compounds or alternative products to targets to control diseases, pests, or weeds.

This Special Issue focuses on computer vision using near-ground and airborne cameras to identify plant traits for crop protection. We invite experts and researchers in the field to contribute original, high-quality research articles and reviews to the peer-reviewed Special Issue "Computer Vision for Intelligent Crop Identification and Crop Protection" in Agronomy or Agriculture.

You may choose our Joint Special Issue in Agriculture.

Dr. Wen-Hao Su
Dr. Zhou Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • crop plant signaling
  • automatic system
  • robot–plant interaction
  • high-throughput phenotyping
  • machine learning
  • image segmentation
  • plant detection
  • physiology of crops and weeds
  • monitoring and control of diseases, pests, and weeds

Published Papers (9 papers)

Research

20 pages, 5093 KiB  
Article
Strawberry Maturity Recognition Based on Improved YOLOv5
by Zhiqing Tao, Ke Li, Yuan Rao, Wei Li and Jun Zhu
Agronomy 2024, 14(3), 460; https://doi.org/10.3390/agronomy14030460 - 26 Feb 2024
Viewed by 478
Abstract
Strawberry maturity detection plays an essential role in modern strawberry yield estimation and robot-assisted picking and sorting. Because of the small size of strawberries and their complex growth environment, existing recognition systems still have problems with accuracy and maturity classification. This article proposes a strawberry maturity recognition algorithm based on an improved YOLOv5s model, named YOLOv5s-BiCE. The model replaces the original upsampling operator with a CARAFE module, whose content-aware processing widens the field of view while maintaining a high level of efficiency, resulting in improved object detection capabilities. The article also introduces a BiFormer double attention mechanism for small-target detection, optimizing computing allocation and enhancing content-perception flexibility; combined with multi-scale feature fusion, the double attention mechanism reduces redundant computation. Additionally, the Focal-EIoU optimization method was introduced to improve accuracy and address uneven sample classification in the loss function. The YOLOv5s-BiCE algorithm recognized strawberry maturity better than the original YOLOv5s model, achieving a 2.8% increase in mean average precision and a 7.4% increase in accuracy on the strawberry maturity dataset. The improved algorithm also outperformed other networks, such as YOLOv4-tiny, YOLOv4-lite-e, YOLOv4-lite-s, YOLOv7, and Fast RCNN, with recognition accuracy improvements of 3.3%, 4.7%, 4.2%, 1.5%, and 2.2%, respectively. In addition, we developed a corresponding detection app and combined the algorithm with DeepSort to deploy it on patrol robots. The detection algorithm exhibits a fast real-time detection speed, can support intelligent estimation of strawberry yield, and can assist picking robots. Full article
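The Focal-EIoU loss mentioned above can be sketched as follows. The penalty terms follow the general EIoU formulation (1 − IoU plus center-distance, width, and height penalties, re-weighted by IoU^γ); the focal exponent gamma=0.5 is an assumed default, not the authors' setting.

```python
def focal_eiou_loss(box_a, box_b, gamma=0.5):
    """Focal-EIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2).
    gamma is an assumed default focal exponent, not taken from the paper."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection-over-union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + 1e-9)
    # smallest enclosing box
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    # EIoU adds center-distance, width, and height penalties to 1 - IoU
    d2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    dist = d2 / (cw ** 2 + ch ** 2 + 1e-9)
    wp = ((ax2 - ax1) - (bx2 - bx1)) ** 2 / (cw ** 2 + 1e-9)
    hp = ((ay2 - ay1) - (by2 - by1)) ** 2 / (ch ** 2 + 1e-9)
    eiou = 1.0 - iou + dist + wp + hp
    return iou ** gamma * eiou  # focal re-weighting by IoU
```

Perfectly overlapping boxes give a loss near zero, while partially overlapping boxes receive a positive loss scaled by how confident (high-IoU) the prediction is.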

20 pages, 6336 KiB  
Article
A SPH-YOLOv5x-Based Automatic System for Intra-Row Weed Control in Lettuce
by Bo Jiang, Jian-Lin Zhang, Wen-Hao Su and Rui Hu
Agronomy 2023, 13(12), 2915; https://doi.org/10.3390/agronomy13122915 - 27 Nov 2023
Cited by 1 | Viewed by 665
Abstract
Weeds have a serious impact on lettuce cultivation. Weeding is an efficient way to increase lettuce yields. Due to the increasing costs of labor and the harm of herbicides to the environment, there is an increasing need to develop a mechanical weeding robot to remove weeds. Accurate weed recognition and crop localization are prerequisites for automatic weeding in precision agriculture. In this study, an intra-row weeding system is developed based on a vision system and open/close weeding knives. This vision system combines the improved you only look once v5 (YOLOv5) identification model and the lettuce–weed localization method. Compared with models including YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5n, and YOLOv5x, the optimized SPH-YOLOv5x model exhibited the best identification performance, with precision, recall, F1-score, and mean average precision (mAP) values of 95%, 93.32%, 94.1%, and 96%, respectively. The proposed weed control system successfully removed the intra-row weeds with 80.25% accuracy at 3.28 km/h. This study demonstrates the robustness and efficacy of the automatic system for intra-row weed control in lettuce. Full article

17 pages, 25331 KiB  
Article
A Grape Dataset for Instance Segmentation and Maturity Estimation
by Achilleas Blekos, Konstantinos Chatzis, Martha Kotaidou, Theocharis Chatzis, Vassilios Solachidis, Dimitrios Konstantinidis and Kosmas Dimitropoulos
Agronomy 2023, 13(8), 1995; https://doi.org/10.3390/agronomy13081995 - 27 Jul 2023
Cited by 2 | Viewed by 1904
Abstract
Grape maturity estimation is vital in precision agriculture as it enables informed decision making for disease control, harvest timing, grape quality, and quantity assurance. Despite its importance, there are few large publicly available datasets that can be used to train accurate and robust grape segmentation and maturity estimation algorithms. To this end, this work proposes the CERTH grape dataset, a new sizeable dataset that is designed explicitly for evaluating deep learning algorithms in grape segmentation and maturity estimation. The proposed dataset is one of the largest currently available grape datasets in the literature, consisting of around 2500 images and almost 10,000 grape bunches, annotated with masks and maturity levels. The images in the dataset were captured under various illumination conditions and viewing angles and with significant occlusions between grape bunches and leaves, making it a valuable resource for the research community. Thorough experiments were conducted using a plethora of general object detection methods to provide a baseline for the future development of accurate and robust grape segmentation and maturity estimation algorithms that can significantly advance research in the field of viticulture. Full article
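As an illustration of how mask-and-maturity annotations of this kind might be consumed, the sketch below counts bunch instances per maturity level in a COCO-style JSON file. The schema, in particular a `maturity` field on each annotation, is a hypothetical example for illustration, not the published CERTH format.

```python
import json
from collections import Counter


def summarize_annotations(path):
    """Count grape-bunch instances per maturity level in a COCO-style
    annotation file. The per-annotation 'maturity' field is an assumed,
    illustrative schema, not the published dataset format."""
    with open(path) as f:
        coco = json.load(f)
    return Counter(ann["maturity"] for ann in coco["annotations"])
```

A loader like this is a convenient first sanity check on class balance before training a segmentation model.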

21 pages, 5816 KiB  
Article
Estimating the Reduction in Cover Crop Vitality Followed by Pelargonic Acid Application Using Drone Imagery
by Eliyeh Ganji, Görres Grenzdörffer and Sabine Andert
Agronomy 2023, 13(2), 354; https://doi.org/10.3390/agronomy13020354 - 26 Jan 2023
Cited by 3 | Viewed by 1249
Abstract
Cultivation of cover crops is a valuable practice in sustainable agriculture. In cover crop management, the method of desiccation is an important consideration, and one widely used method for this is the application of glyphosate. With the use of glyphosate likely to be banned soon in Europe, the purpose of this study was to evaluate the herbicidal effect of pelargonic acid (PA) as a bio-based substitute for glyphosate. This study presents the results of a two-year field experiment (2019 and 2021) conducted in northeast Germany. The experimental setup included an untreated control, three different dosages (16, 8, and 5 L/ha) of PA, and the active ingredients glyphosate and pyraflufen. A completely randomised block design was established. The effect of the herbicide treatments was assessed by a visual estimate of the percentage of crop vitality and a comparison assessment provided by an Ebee+ drone. Four vegetation indices (VIs) calculated from the drone images were used to verify the credibility of colour (RGB)-based and near-infrared (NIR)-based vegetation indices. The results of both types of assessment indicated that pelargonic acid was reasonably effective in controlling cover crops within a week of application. In both experimental years, the PA (16 L/ha) and PA_2T (double application of 8 L/ha) treatments demonstrated their highest herbicidal effect for up to seven days after application. PA (16 L/ha) vitality loss decreased over time, while PA_2T (double application of 8 L/ha) continued to exhibit an almost constant effect for longer due to the second application one week later. The PA dosage of 5 L/ha, pyraflufen, and a mixture of the two exhibited a smaller vitality loss than the other treatments. However, except for glyphosate, the herbicidal effect of all the other treatments decreased over time. At the end of the experiment, the glyphosate treatment (3 L/ha) demonstrated the lowest estimated vitality.
The results of the drone assessments indicated that vegetation indices (VIs) can provide detailed information regarding crop vitality following herbicide application and that RGB-based indices, such as EXG, have the potential to be applied efficiently and cost-effectively utilising drone imagery. The results of this study demonstrate that pelargonic acid has considerable potential for use as an additional tool in integrated crop management. Full article
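Among the RGB-based indices mentioned, Excess Green (ExG) is straightforward to compute from drone imagery. A minimal sketch, assuming reflectance normalised to [0, 1]; the chromatic normalisation and epsilon are common implementation choices, not the authors' code:

```python
import numpy as np


def excess_green(rgb):
    """Excess Green (ExG) vegetation index from an RGB image.
    rgb: float array of shape (H, W, 3), values in [0, 1].
    ExG = 2g - r - b on band-normalised (chromatic) coordinates."""
    total = rgb.sum(axis=2, keepdims=True) + 1e-9  # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, 2, 0)       # chromatic coordinates
    return 2 * g - r - b
```

Pure green pixels score the maximum of 2, grey (soil-like) pixels score 0, so thresholding ExG separates living vegetation from background cheaply.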

17 pages, 4873 KiB  
Article
Identification Method of Corn Leaf Disease Based on Improved Mobilenetv3 Model
by Chunguang Bi, Suzhen Xu, Nan Hu, Shuo Zhang, Zhenyi Zhu and Helong Yu
Agronomy 2023, 13(2), 300; https://doi.org/10.3390/agronomy13020300 - 18 Jan 2023
Cited by 12 | Viewed by 2983
Abstract
Corn is one of the main food crops in China, and its planted area ranks in the top three in the world. However, corn leaf diseases have seriously affected the yield and quality of corn. To identify corn leaf diseases quickly and accurately, so that timely and effective treatment can reduce yield loss, we propose an improved MobileNetv3 model (CD-Mobilenetv3). Based on MobileNetv3, we replaced the model's cross-entropy loss function with a bias loss function to improve accuracy, replaced the squeeze-and-excitation (SE) module with the efficient channel attention (ECA) module to reduce parameters, introduced cross-layer connections between Mobile modules to utilize features synthetically, and introduced dilated convolutions to increase the receptive field. We also integrated a hybrid open-source corn leaf disease dataset (CLDD). On CLDD, the accuracy reached 98.23%, the precision 98.26%, the recall 98.26%, and the F1 score 98.26%, an improvement over classic deep learning (DL) models such as ResNet50, ResNet101, ShuffleNet_x2, VGG16, SqueezeNet, and InceptionNetv3. The loss value was 0.0285, and the parameter count was lower than that of most contrasting models. The experimental results verify the validity of the CD-Mobilenetv3 model for identifying corn leaf diseases and provide technical support for the timely control of corn leaf diseases. Full article
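The ECA module referenced above replaces SE's fully connected bottleneck with a cheap 1D convolution across channels. A minimal NumPy sketch; the uniform (untrained) convolution weights are an assumption for illustration, since a real layer learns them:

```python
import numpy as np


def eca_attention(feature_map, kernel_size=3):
    """Efficient Channel Attention (ECA), minimal NumPy sketch.
    feature_map: array of shape (C, H, W)."""
    c = feature_map.shape[0]
    squeeze = feature_map.mean(axis=(1, 2))        # global average pool -> (C,)
    pad = kernel_size // 2
    padded = np.pad(squeeze, pad, mode="edge")
    weights = np.ones(kernel_size) / kernel_size   # assumed (untrained) 1D-conv weights
    conv = np.array([padded[i:i + kernel_size] @ weights for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-conv))             # sigmoid -> per-channel gate
    return feature_map * gate[:, None, None]       # channel-wise rescaling
```

Unlike SE, no channel-dimension reduction happens, which is why ECA adds almost no parameters.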

20 pages, 7849 KiB  
Article
Efficient Identification of Apple Leaf Diseases in the Wild Using Convolutional Neural Networks
by Qing Yang, Shukai Duan and Lidan Wang
Agronomy 2022, 12(11), 2784; https://doi.org/10.3390/agronomy12112784 - 09 Nov 2022
Cited by 18 | Viewed by 2320
Abstract
Efficient identification of apple leaf diseases (ALDs) can reduce the use of pesticides and increase the quality of apple fruit, which is of significance to smart agriculture. However, existing research into identifying ALDs lacks models/methods that satisfy efficient identification in the wild environment, hindering the application of smart agriculture in the apple industry. Therefore, this paper explores an accurate, lightweight, and robust convolutional neural network (CNN) called EfficientNet-MG, which improves the conventional EfficientNet network via a multistage feature fusion (MSFF) method and the Gaussian error linear unit (GELU) activation function. The shallow and deep convolutional layers usually contain detailed and semantic information, respectively, but conventional EfficientNets do not fully utilize the convolutional layers of different stages. Thus, MSFF was adopted to improve the semantic representation capacity of the last layer of features, and GELU was used to adapt to complicated tasks. Further, a comprehensive ALD dataset called AppleLeaf9 was constructed for the wild environment. The experimental results show that EfficientNet-MG achieves higher accuracy (99.11%) with fewer parameters (8.42 M) than five classical CNN models, proving that EfficientNet-MG achieves more competitive results on ALD identification. Full article
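The GELU activation adopted here is commonly implemented via its tanh approximation; a minimal sketch of that standard formula:

```python
import numpy as np


def gelu(x):
    """Tanh approximation of the Gaussian Error Linear Unit (GELU)."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))
```

GELU behaves like the identity for large positive inputs and decays smoothly to zero for negative ones, which often trains better than ReLU's hard cutoff on complicated tasks.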

16 pages, 2795 KiB  
Article
A Systematic Study of Estimating Potato N Concentrations Using UAV-Based Hyper- and Multi-Spectral Imagery
by Jing Zhou, Biwen Wang, Jiahao Fan, Yuchi Ma, Yi Wang and Zhou Zhang
Agronomy 2022, 12(10), 2533; https://doi.org/10.3390/agronomy12102533 - 17 Oct 2022
Cited by 5 | Viewed by 1916
Abstract
Potato growth depends largely on nitrogen (N) availability in the soil. However, this shallow-rooted crop, commonly cultivated in coarse-textured soils, has poor N use efficiency. Fast and accurate estimations of potato tissue N concentrations are urgently needed to assist decision making in precision fertilization management. Remote sensing has been utilized to evaluate the potato N status by correlating spectral information with lab tests on leaf N concentrations. In this study, a systematic comparison was conducted to quantitatively evaluate the performance of hyperspectral and multispectral images in estimating the potato N status, providing a reference for the trade-off between sensor cost and performance. In the experiment, two potato varieties were planted under four fertilization rates with replicates. UAV images were acquired multiple times during the season with a narrow-band hyperspectral imager. Multispectral reflectance was simulated by merging the relevant narrow bands into broad bands to mimic commonly used multispectral cameras. The whole-leaf total N concentration and petiole nitrate-N concentration were obtained from 160 potato leaf samples. A partial least squares regression model was developed to estimate the two N status indicators using different groups of image features. The best estimation accuracies were given by reflectance of the full spectra with 2.2 nm narrow bands, with a coefficient of determination (R2) of 0.78 and root mean square error (RMSE) of 0.41 for the whole-leaf total N concentration, while for the petiole nitrate-N concentration, the 10 nm bands performed best (R2 = 0.87 and RMSE = 0.13). Generally, model performance decreased as the spectral bandwidth increased. The hyperspectral full spectra largely outperformed all three simulated multispectral cameras, but there was no significant difference among the three brands of multispectral cameras. The results also showed that spectral bands in the visible region (400–700 nm) were the most highly correlated with potato N concentrations. Full article
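The band-merging step described above, simulating broadband multispectral reflectance from narrow hyperspectral bands, can be sketched as follows. The boxcar (flat) spectral response is a simplifying assumption; real multispectral cameras have smoother, roughly Gaussian response curves.

```python
import numpy as np


def merge_to_broadband(wavelengths, reflectance, band_ranges):
    """Simulate broadband multispectral reflectance by averaging narrow
    hyperspectral bands.
    wavelengths: (B,) band centers in nm; reflectance: (..., B);
    band_ranges: list of (low_nm, high_nm) per simulated broad band."""
    merged = []
    for lo, hi in band_ranges:
        mask = (wavelengths >= lo) & (wavelengths <= hi)  # boxcar response (assumed)
        merged.append(reflectance[..., mask].mean(axis=-1))
    return np.stack(merged, axis=-1)
```

The merged values can then feed the same regression pipeline as the full spectra, making the hyperspectral-versus-multispectral comparison a fair one on identical flights.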

18 pages, 3866 KiB  
Article
SE-YOLOv5x: An Optimized Model Based on Transfer Learning and Visual Attention Mechanism for Identifying and Localizing Weeds and Vegetables
by Jian-Lin Zhang, Wen-Hao Su, He-Yi Zhang and Yankun Peng
Agronomy 2022, 12(9), 2061; https://doi.org/10.3390/agronomy12092061 - 29 Aug 2022
Cited by 21 | Viewed by 2565
Abstract
Weeds in the field affect the normal growth of lettuce crops by competing with them for resources such as water and sunlight. The increasing costs of weed management and limited herbicide choices are threatening the profitability, yield, and quality of lettuce. The application of intelligent weeding robots is an alternative to control intra-row weeds. The prerequisite for automatic weeding is accurate differentiation and rapid localization of different plants. In this study, a squeeze-and-excitation (SE) network combined with You Only Look Once v5 (SE-YOLOv5x) is proposed for weed-crop classification and lettuce localization in the field. Compared with models including classical support vector machines (SVM), YOLOv5x, single-shot multibox detector (SSD), and faster-RCNN, the SE-YOLOv5x exhibited the highest performance in weed and lettuce plant identification, with precision, recall, mean average precision (mAP), and F1-score values of 97.6%, 95.6%, 97.1%, and 97.3%, respectively. Based on plant morphological characteristics, the SE-YOLOv5x model detected the location of lettuce stem emerging points in the field with an accuracy of 97.14%. This study demonstrates the capability of SE-YOLOv5x for the classification of lettuce and weeds and the localization of lettuce, which provides theoretical and technical support for automated weed control. Full article
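The squeeze-and-excitation recalibration at the heart of SE-YOLOv5x can be sketched in a few lines. The two weight matrices and the reduction ratio follow the generic SE convention; they are passed in here for illustration, whereas a network learns them.

```python
import numpy as np


def squeeze_excitation(feature_map, w1, w2):
    """Squeeze-and-excitation channel recalibration, minimal NumPy sketch.
    feature_map: (C, H, W); w1: (C // r, C) and w2: (C, C // r) are the
    excitation FC layers for reduction ratio r (assumed, untrained here)."""
    z = feature_map.mean(axis=(1, 2))            # squeeze: global average pool
    s = np.maximum(w1 @ z, 0.0)                  # excitation: FC -> ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))       # FC -> sigmoid, per-channel gate
    return feature_map * gate[:, None, None]     # channel-wise rescaling
```

The gate lets the network amplify channels that respond to crop-like textures and suppress ones dominated by background, which is the mechanism the abstract credits for the accuracy gain.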

Review

25 pages, 2589 KiB  
Review
Convolutional Neural Networks in Computer Vision for Grain Crop Phenotyping: A Review
by Ya-Hong Wang and Wen-Hao Su
Agronomy 2022, 12(11), 2659; https://doi.org/10.3390/agronomy12112659 - 27 Oct 2022
Cited by 31 | Viewed by 4558
Abstract
Computer vision (CV) combined with deep convolutional neural networks (CNNs) has emerged as a reliable analytical method to effectively characterize and quantify high-throughput phenotyping of different grain crops, including rice, wheat, corn, and soybean. In addition to rapidly obtaining information on plant organs and abiotic stresses and segmenting crops from weeds, such techniques have been used to detect pests and plant diseases and to identify grain varieties. The development of corresponding imaging systems to assess the phenotypic parameters, yield, and quality of crop plants will increase the confidence of stakeholders in grain crop cultivation, thereby bringing technical and economic benefits to advanced agriculture. Therefore, this paper provides a comprehensive review of CNNs in computer vision for grain crop phenotyping, intended as a roadmap for future research in this thriving area. The CNN models (e.g., VGG, YOLO, and Faster R-CNN) used in CV tasks including image classification, object detection, semantic segmentation, and instance segmentation are discussed, and the main results of recent studies on crop phenotype detection are summarized. Additionally, the challenges and future trends of phenotyping techniques in grain crops are presented. Full article
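A metric common to the detection and segmentation tasks surveyed here is intersection-over-union (IoU); a minimal sketch for binary masks:

```python
import numpy as np


def mask_iou(pred, target):
    """Intersection-over-union for binary segmentation masks.
    Returns 1.0 for two empty masks by convention."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```

Averaging this score over classes or instances gives the mIoU and mask-mAP numbers that the reviewed segmentation studies report.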
