Machine Vision Systems in Digital Agriculture

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (1 March 2023) | Viewed by 13361

Special Issue Editors

Dr. Zhanyou Xu
Guest Editor
Plant Science Research Unit, USDA Agricultural Research Service (USDA-ARS), University of Minnesota, St. Paul, MN, USA
Interests: genomics; genetics; bioinformatics; machine learning; deep learning; plant breeding

Dr. Reka Howard
Guest Editor
Department of Statistics, University of Nebraska–Lincoln, Lincoln, NE 68583-0963, USA
Interests: statistics; genomic prediction; model development for genotype-by-environment interaction in genomic prediction; digital agriculture

Prof. Dr. Lizhi Wang
Guest Editor
Department of Industrial and Manufacturing Systems Engineering (IMSE), Iowa State University, Ames, IA, USA
Interests: agricultural engineering; industrial and manufacturing systems engineering; electrical and computer engineering (ECpE); machine learning for cyber agriculture

Special Issue Information

Dear Colleagues,

Background and history of this topic:

Climate change and a growing population make food security a top-priority challenge for agriculture. To address this challenge and build sustainable production systems, agriculture is transitioning from conventional cultivation to digital farming driven by machine vision and artificial intelligence. Innovative agriculture will have to rely on intelligent farming technologies to be productive, efficient, and sustainable. Significant progress in machine vision-based smart agriculture has been made in both academia and industry. New journals have emerged to facilitate the communication of innovations and discoveries and have provided excellent channels for publishing domain-specific research results, for example, Sensors for sensor-related research and Plant Phenotyping for collecting phenotypic data. However, few interdisciplinary journals cover machine vision for digital agriculture.

Aim and scope of the Special Issue:

This Special Issue of Agronomy aims to offer researchers a unique opportunity to publish interdisciplinary studies integrating machine vision, sensors, artificial intelligence, and digital and precision farming for modern agriculture.

The scope of this Special Issue covers the latest technologies in machines (robots, tractors, and combines), sensors (remote sensing with satellites and drones, proximal sensing with robots and cameras), phenotyping for data acquisition, machine learning and deep learning modeling, and their applications in digital crop planting, stress management, breeding, harvesting, post-harvest processing, and other agronomy-related domains.

Cutting-edge research:

  • Robotic systems for agronomic data collection in the greenhouse and field.
  • 3D vision technologies for robot-optimized harvesting automation systems.
  • Multispectral and hyperspectral sensors for early detection of abiotic and biotic stresses and for recommendation systems.
  • Machine vision for monitoring greenhouse gas emissions and CO2 and/or methane sequestration for climate-smart, sustainable agriculture.
  • More efficient machine vision-enabled targeted monitoring, precise fertilization, pesticide application, farmland irrigation, and sustainable cultivation.
  • Satellite monitoring for disease control and yield prediction.

What kinds of papers we are soliciting:

Applications of machine vision and digital agriculture for precision farming.

Dr. Zhanyou Xu
Dr. Reka Howard
Prof. Dr. Lizhi Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine vision
  • digital agriculture
  • precision farming

Published Papers (6 papers)


Research

17 pages, 4540 KiB  
Article
RDE-YOLOv7: An Improved Model Based on YOLOv7 for Better Performance in Detecting Dragon Fruits
by Jialiang Zhou, Yueyue Zhang and Jinpeng Wang
Agronomy 2023, 13(4), 1042; https://doi.org/10.3390/agronomy13041042 - 31 Mar 2023
Cited by 9 | Viewed by 2975
Abstract
There is a great demand for dragon fruit in China and Southeast Asia, and manual picking of dragon fruit requires a great deal of labor, so it is imperative to study dragon fruit-picking robots. The visual guidance system is an important part of a picking robot. To realize automatic picking of dragon fruit, this paper proposes a detection method based on RDE-YOLOv7 to identify and locate dragon fruit more accurately. RepGhost modules and a decoupled head are introduced into YOLOv7 to better extract features and predict results. In addition, multiple ECA blocks are introduced at various locations in the network to extract effective information from a large amount of information. The experimental results show that RDE-YOLOv7 improves precision, recall, and mean average precision by 5.0%, 2.1%, and 1.6%, respectively. RDE-YOLOv7 also achieves high accuracy for fruit detection under different lighting conditions and degrees of blur. Using RDE-YOLOv7, we built a dragon fruit picking system and conducted positioning and picking experiments. The spatial positioning errors of the system are only 2.51 mm, 2.43 mm, and 1.84 mm. The picking experiments indicate that RDE-YOLOv7 can accurately detect dragon fruits, theoretically supporting the development of dragon fruit-picking robots. Full article
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)
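The abstract's key architectural addition is the Efficient Channel Attention (ECA) block inserted at several points in YOLOv7. The following is a minimal sketch of such a block, written from the original ECA-Net design rather than the authors' code; the class name and the kernel-size heuristic are assumptions.

```python
# Minimal ECA block sketch (PyTorch); not the authors' implementation.
import math
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count, as in ECA-Net.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze spatial dims, model cross-channel interaction with a 1-D conv,
        # then rescale the feature map channel-wise.
        y = self.pool(x)                                   # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))       # (N, 1, C)
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))  # (N, C, 1, 1)
        return x * y

feats = torch.randn(1, 256, 40, 40)
print(ECABlock(256)(feats).shape)  # torch.Size([1, 256, 40, 40])
```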

20 pages, 5522 KiB  
Article
Deep Learning for Detecting and Classifying the Growth Stages of Consolida regalis Weeds on Fields
by Abeer M. Almalky and Khaled R. Ahmed
Agronomy 2023, 13(3), 934; https://doi.org/10.3390/agronomy13030934 - 21 Mar 2023
Cited by 4 | Viewed by 1978
Abstract
Due to the massive surge in the world population, expansion of the agricultural cycle is necessary to accommodate the anticipated demand. However, this expansion is challenged by weed invasion, a detrimental factor for agricultural production and quality. Therefore, an accurate, automatic, low-cost, environmentally friendly, and real-time weed detection technique is required to control weeds in fields. Furthermore, automating the classification of weeds according to growth stage is crucial for selecting appropriate weed-control techniques, a gap in current research. The main focus of the research described in this paper is on providing a feasibility study for the agricultural community that uses recent deep-learning models to address this gap in classifying weed growth stages. We used a drone to collect a dataset covering four growth stages of the weed Consolida regalis. In addition, we developed and trained one-stage and two-stage models: YOLOv5, RetinaNet (with ResNet-101-FPN and ResNet-50-FPN backbones), and Faster R-CNN (with ResNet-101-DC5, ResNet-101-FPN, and ResNet-50-FPN backbones). The results show that the YOLOv5-small model succeeds in detecting weeds and classifying growth stages in real time with the highest recall of 0.794. RetinaNet with a ResNet-101-FPN backbone shows accurate results in the testing phase (average precision of 87.457). Although YOLOv5-large showed the highest precision in classifying almost all weed growth stages, it could not detect all objects in the tested images. Overall, RetinaNet with a ResNet-101-FPN backbone is accurate and precise, whereas YOLOv5-small has the shortest inference time for detecting a weed and classifying its growth stage in real time. Full article
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)
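For readers who want a feel for how such a one-stage detector is deployed, the sketch below shows inference with a custom-trained YOLOv5-small model via the standard ultralytics/yolov5 torch.hub interface. The weight file and image name are hypothetical; this is not the authors' pipeline.

```python
# Hedged sketch: applying a trained YOLOv5-small weed-stage detector to a drone image.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="weed_stages_yolov5s.pt")
model.conf = 0.25  # confidence threshold for reported detections

results = model("drone_field_image.jpg")     # letterboxing, inference, and NMS
detections = results.pandas().xyxy[0]        # one row per box: coordinates, confidence, class name
for _, det in detections.iterrows():
    print(f"{det['name']}: conf={det['confidence']:.2f}, "
          f"box=({det['xmin']:.0f},{det['ymin']:.0f},{det['xmax']:.0f},{det['ymax']:.0f})")
```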

10 pages, 2492 KiB  
Communication
Location of Fruits by Counting: A Point-to-Point Approach
by Bo Li and Cheng Chen
Agronomy 2022, 12(11), 2863; https://doi.org/10.3390/agronomy12112863 - 16 Nov 2022
Cited by 1 | Viewed by 1191
Abstract
The emergence of deep learning-based methods for harvesting and yield estimation, including object detection and image segmentation-based methods, has notably improved performance but has also resulted in large annotation workloads. Considering the difficulty of such annotation, this study develops a method for locating fruit using only center-point labeling information. To address point labeling, the weighted Hausdorff distance is chosen as the loss function of the corresponding network, while deep layer aggregation (DLA) is used to contend with variability in the visible area of the fruit. The performance of our method in terms of both detection and positioning is not inferior to a Mask R-CNN-based method. Experiments on a public apple dataset further demonstrate the performance of the proposed method. Specifically, no more than two targets had positioning deviations exceeding five pixels within the field of view. Full article
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)
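The loss named in the abstract, the weighted Hausdorff distance, penalizes a predicted point-probability map for being far from the annotated fruit centers and vice versa. Below is a simplified, hedged sketch of such a loss; the paper's exact formulation differs in detail (e.g., it uses a generalized mean rather than the plain minimum and mean used here), so treat this only as an illustration of the idea.

```python
# Simplified weighted-Hausdorff-style loss between a probability map and point annotations.
import torch

def weighted_hausdorff_loss(prob_map: torch.Tensor, gt_points: torch.Tensor,
                            eps: float = 1e-6) -> torch.Tensor:
    """prob_map: (H, W) values in [0, 1]; gt_points: (N, 2) pixel coordinates (row, col)."""
    h, w = prob_map.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pixels = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()   # (H*W, 2)
    p = prob_map.reshape(-1)                                        # (H*W,)

    # Pairwise distances between every pixel and every annotated point.
    d = torch.cdist(pixels, gt_points.float())                      # (H*W, N)

    # Term 1: high-probability pixels should lie close to some ground-truth point.
    term1 = (p * d.min(dim=1).values).sum() / (p.sum() + eps)
    # Term 2: every ground-truth point should have a confident pixel nearby.
    term2 = (p.unsqueeze(1) * d + (1 - p.unsqueeze(1)) * d.max()).min(dim=0).values.mean()
    return term1 + term2

logits = torch.rand(64, 64, requires_grad=True)
centers = torch.tensor([[10.0, 12.0], [40.0, 50.0]])
weighted_hausdorff_loss(torch.sigmoid(logits), centers).backward()
```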

17 pages, 6234 KiB  
Article
A Real-Time Detection Algorithm for Sweet Cherry Fruit Maturity Based on YOLOX in the Natural Environment
by Zhiyong Li, Xueqin Jiang, Luyu Shuai, Boda Zhang, Yiyu Yang and Jiong Mu
Agronomy 2022, 12(10), 2482; https://doi.org/10.3390/agronomy12102482 - 12 Oct 2022
Cited by 11 | Viewed by 2591
Abstract
Fast, accurate, and non-destructive large-scale detection of sweet cherry ripeness is the key to determining the optimal harvesting period and accurately grading by ripeness. Due to the complexity and variability of the orchard environment and the multi-scale, obscured, and even overlapping fruit, detection accuracy remains low even with the mainstream algorithm YOLOX in the absence of a large amount of labeled data. In this paper, we propose an improved YOLOX target detection algorithm to quickly and accurately detect sweet cherry ripeness categories in complex environments. First, we took a total of 2400 high-resolution images of immature, semi-ripe, and ripe sweet cherries in an orchard in Hanyuan County, Sichuan Province, covering complex conditions such as sunny and cloudy days, branch and leaf shading, fruit overlap, distant views, and green fruits similar in color to the leaves, and formed a dataset dedicated to sweet cherry ripeness detection, named SweetCherry, by manually labeling 36,068 samples. On this basis, an improved YOLOX target detection algorithm, YOLOX-EIoU-CBAM, was proposed, which embeds the Convolutional Block Attention Module (CBAM) between the backbone and neck of the YOLOX model to improve the model's attention across channel and spatial dimensions, and replaces the original bounding box loss function of the YOLOX model with the Efficient IoU (EIoU) loss to make the regression of the prediction box more accurate. Finally, we validated the feasibility and reliability of the YOLOX-EIoU-CBAM network on the SweetCherry dataset. The experimental results showed that the method significantly outperforms the traditional Faster R-CNN and SSD300 algorithms in terms of mean Average Precision (mAP), recall, model size, and single-image inference time. Compared with the YOLOX model, the mAP of this method is improved by 4.12%, recall by 4.6%, and F-score by 2.34%, while model size and single-image inference time remain basically comparable. The method copes well with complex backgrounds such as fruit overlap and branch and leaf occlusion, and can provide a data basis and technical reference for other similar target detection problems. Full article
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)
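The EIoU loss the abstract swaps in for YOLOX's box-regression loss adds center-distance, width, and height penalties to the plain IoU term. The sketch below is intended to follow the published EIoU definition and is not the authors' code; boxes are assumed to be in (x1, y1, x2, y2) format.

```python
# Hedged sketch of the Efficient IoU (EIoU) bounding-box regression loss.
import torch

def eiou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Intersection over union.
    ix1, iy1 = torch.max(pred[:, 0], target[:, 0]), torch.max(pred[:, 1], target[:, 1])
    ix2, iy2 = torch.min(pred[:, 2], target[:, 2]), torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box (width, height, squared diagonal).
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Center-distance, width-difference, and height-difference penalties.
    center_d2 = ((pred[:, 0] + pred[:, 2]) - (target[:, 0] + target[:, 2])) ** 2 / 4 + \
                ((pred[:, 1] + pred[:, 3]) - (target[:, 1] + target[:, 3])) ** 2 / 4
    w_d2 = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    h_d2 = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2

    return (1 - iou + center_d2 / c2 + w_d2 / (cw ** 2 + eps) + h_d2 / (ch ** 2 + eps)).mean()

pred = torch.tensor([[10., 10., 50., 60.]], requires_grad=True)
gt = torch.tensor([[12., 8., 48., 62.]])
eiou_loss(pred, gt).backward()
```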

21 pages, 5850 KiB  
Article
Evaluating Impacts between Laboratory and Field-Collected Datasets for Plant Disease Classification
by Gianni Fenu and Francesca Maridina Malloci
Agronomy 2022, 12(10), 2359; https://doi.org/10.3390/agronomy12102359 - 30 Sep 2022
Cited by 6 | Viewed by 1808
Abstract
Deep learning with convolutional neural networks has been the most widely used approach in recent years for classifying leaf diseases. The literature has extensively addressed the problem using laboratory-acquired datasets with a homogeneous background. In this article, we explore the variability factors that influence the classification of plant diseases by analyzing the same plant and disease under different conditions, i.e., in the field and in the laboratory. Two plant species and five biotic stresses are analyzed using different architectures, such as EfficientNetB0, MobileNetV2, InceptionV2, ResNet50, and VGG16. Experiments show that model performance drops drastically when using more representative datasets, and the features the network learns to determine the class do not always belong to the leaf lesion. In the worst case, the accuracy drops from 92.67% to 54.41%. Our results indicate that while deep learning is an effective technique, there are technical issues to consider when applying it to more representative datasets collected in the field. Full article
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)
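The cross-domain evaluation described here amounts to training on laboratory images and testing on field images of the same classes. The sketch below illustrates that protocol with a Keras MobileNetV2 classifier; the directory layout and hyperparameters are hypothetical and this is not the authors' training setup.

```python
# Hedged sketch: fine-tune on lab images, then measure the lab-vs-field accuracy gap.
import tensorflow as tf

IMG_SIZE = (224, 224)
lab_ds = tf.keras.utils.image_dataset_from_directory("data/lab", image_size=IMG_SIZE, batch_size=32)
field_ds = tf.keras.utils.image_dataset_from_directory("data/field", image_size=IMG_SIZE, batch_size=32)
num_classes = len(lab_ds.class_names)

base = tf.keras.applications.MobileNetV2(input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # train only the classification head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(lab_ds, epochs=5)

# The gap between these two numbers is the lab-vs-field performance drop.
print("lab accuracy:", model.evaluate(lab_ds, verbose=0)[1])
print("field accuracy:", model.evaluate(field_ds, verbose=0)[1])
```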

20 pages, 2304 KiB  
Article
Mobile Plant Disease Classifier, Trained with a Small Number of Images by the End User
by Nikos Petrellis, Christos Antonopoulos, Georgios Keramidas and Nikolaos Voros
Agronomy 2022, 12(8), 1732; https://doi.org/10.3390/agronomy12081732 - 22 Jul 2022
Cited by 2 | Viewed by 1566
Abstract
Mobile applications that can be used for the training and classification of plant diseases are described in this paper. Professional agronomists can select the species and diseases supported by the developed tool and follow an automatic training procedure using a small number of indicative photographs. The employed classification method is based on features that represent distinct aspects of the diseased plant, such as the color level distribution in the regions of interest. These features are extracted from photographs that display a plant part such as a leaf or a fruit. Multiple reference ranges are determined for each feature during training. When a new photograph is analyzed, its feature values are compared with the reference ranges, and different grades are assigned depending on whether a feature value falls within a range or not. The new photograph is classified as the disease with the highest grade. Ten tomato diseases are used as a case study, and the applications are trained with 40–100 segmented and normalized photographs per disease. An accuracy between 93.4% and 96.1% is experimentally measured in this case. An additional dataset of pear disease photographs that are neither segmented nor normalized is also tested, with an average accuracy of 95%. Full article
(This article belongs to the Special Issue Machine Vision Systems in Digital Agriculture)
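To make the reference-range idea concrete, here is a minimal sketch, an assumption rather than the paper's implementation: extract a simple color-distribution feature from a leaf photograph, learn per-disease min/max ranges from a handful of training images, and grade a new image by how many of its feature values fall inside each disease's ranges. The file names are hypothetical.

```python
# Hedged sketch of classification by per-feature reference ranges.
import numpy as np
from PIL import Image

def color_features(path: str, bins: int = 8) -> np.ndarray:
    """Normalized per-channel histogram of the image as a flat feature vector."""
    rgb = np.asarray(Image.open(path).convert("RGB"))
    hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    feats = np.concatenate(hists).astype(float)
    return feats / feats.sum()

def train_ranges(paths_by_disease: dict) -> dict:
    """For each disease, store the min and max observed for every feature."""
    ranges = {}
    for disease, paths in paths_by_disease.items():
        feats = np.stack([color_features(p) for p in paths])
        ranges[disease] = (feats.min(axis=0), feats.max(axis=0))
    return ranges

def classify(path: str, ranges: dict) -> str:
    """Grade = number of features inside the reference ranges; highest grade wins."""
    f = color_features(path)
    grades = {d: int(np.sum((f >= lo) & (f <= hi))) for d, (lo, hi) in ranges.items()}
    return max(grades, key=grades.get)

# Hypothetical usage with a few labeled photos per disease:
# ranges = train_ranges({"early_blight": ["eb1.jpg", "eb2.jpg"], "leaf_mold": ["lm1.jpg"]})
# print(classify("unknown_leaf.jpg", ranges))
```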
