Artificial Intelligence in Agriculture

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 31 July 2024 | Viewed by 22560

Special Issue Information

Dear Colleagues,

The agriculture industry has long used technology to improve farming practices and yields. However, traditional technological methods alone cannot provide for the growing world population: it is estimated that food production needs to increase by approximately 60% to feed an additional two billion people by 2050. This need is driving farmers and the agriculture industry to devise new ways of increasing production, improving crop quality, and reducing waste and the nitrogen footprint. Artificial intelligence (AI) is emerging as a promising means for the agriculture industry to meet these food challenges in the coming years. AI can be utilized in all stages of farming and food production, from seed sowing to monitoring and harvesting, and it is an important enabling technology for precision agriculture. AI can be used for crop monitoring to detect diseases, nutrient deficiencies, and pest infestations, and for soil monitoring to detect nutrient deficiencies and soil defects. It can also detect weeds and then intelligently and precisely spray herbicides only in the right areas, reducing herbicide usage. AI-enabled autonomous agricultural robots outfitted with various sensors and actuators can assist not only in crop harvesting and fruit picking, but also in crop and soil monitoring. Furthermore, AI can provide predictive insights for maximizing crop productivity, such as predicting the impact of weather conditions on crops, the best time to sow seeds, expected crop yields, and crop prices in the coming weeks.

This Special Issue on “Artificial Intelligence in Agriculture” focuses on fundamental and applied research targeting AI in all stages of agriculture, from soil preparation to the sowing of seeds, addition of fertilizers, irrigation, weed protection, harvesting, storage, packing, and transportation. Topics of interest include but are not limited to the following:

  • Smart farming and agriculture;
  • AI-assisted precision agriculture;
  • AI-based soil and plant nutrient analysis;
  • AI-assisted sowing;
  • Computer vision in agriculture;
  • Spatial AI-based agricultural robotics;
  • AI-based crop monitoring;
  • AI-based disease detection in crops;
  • AI-based pest infestation detection and management;
  • Intelligent irrigation for agriculture;
  • Intelligent spraying of crops;
  • AI-assisted phenotyping and genotyping;
  • Predictive analytics for agriculture;
  • Computational intelligence in agriculture;
  • Livestock health monitoring;
  • Smart Internet of things in agriculture;
  • Edge AI in agriculture;
  • AI in food supply chain.

Dr. Arslan Munir
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering, logging in to the website, and completing the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • agriculture
  • smart farming
  • precision agriculture
  • agricultural robotics
  • intelligent irrigation
  • crop monitoring
  • predictive analytics
  • computational intelligence

Published Papers (7 papers)


Research

14 pages, 31064 KiB  
Article
Enhancing Tuta absoluta Detection on Tomato Plants: Ensemble Techniques and Deep Learning
by Nikolaos Giakoumoglou, Eleftheria-Maria Pechlivani, Nikolaos Frangakis and Dimitrios Tzovaras
AI 2023, 4(4), 996-1009; https://doi.org/10.3390/ai4040050 - 20 Nov 2023
Cited by 2 | Viewed by 1397
Abstract
Early detection and efficient management practices to control Tuta absoluta (Meyrick) infestation are crucial for safeguarding tomato production yield and minimizing economic losses. This study investigates the detection of T. absoluta infestation on tomato plants using object detection models combined with ensemble techniques. Additionally, this study highlights the importance of utilizing a dataset captured in real settings in open-field and greenhouse environments to address the complexity of real-life challenges in object detection of plant health scenarios. The effectiveness of deep-learning-based models, including Faster R-CNN and RetinaNet, was evaluated in terms of detecting T. absoluta damage. The initial model evaluations revealed diminishing performance levels across various model configurations, including different backbones and heads. To enhance detection predictions and improve mean Average Precision (mAP) scores, ensemble techniques such as Non-Maximum Suppression (NMS), Soft Non-Maximum Suppression (Soft NMS), Non-Maximum Weighted (NMW), and Weighted Boxes Fusion (WBF) were applied. The outcomes showed that the WBF technique significantly improved the mAP scores, resulting in a 20% improvement from 0.58 (the maximum mAP from individual models) to 0.70. The results of this study contribute to the field of agricultural pest detection by emphasizing the potential of deep learning and ensemble techniques in improving the accuracy and reliability of object detection models.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
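The paper's code is not reproduced here, but a minimal sketch of the core idea, fusing boxes from two detectors with Weighted Boxes Fusion via the open-source ensemble-boxes package, might look as follows; the boxes, scores, and thresholds are illustrative assumptions, not the authors' values.

```python
# Sketch: fusing two detectors' predictions with Weighted Boxes Fusion (WBF).
# Assumes the open-source `ensemble-boxes` package (pip install ensemble-boxes);
# box coordinates are normalized [x1, y1, x2, y2] in [0, 1].
from ensemble_boxes import weighted_boxes_fusion

# Hypothetical outputs from two detectors (e.g., Faster R-CNN and RetinaNet)
# for a single image: one list of boxes/scores/labels per model.
boxes_list = [
    [[0.10, 0.12, 0.35, 0.40], [0.55, 0.50, 0.80, 0.78]],  # model A
    [[0.11, 0.11, 0.36, 0.42], [0.57, 0.52, 0.82, 0.80]],  # model B
]
scores_list = [[0.91, 0.64], [0.88, 0.71]]
labels_list = [[0, 0], [0, 0]]  # single class: T. absoluta damage

fused_boxes, fused_scores, fused_labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[1, 1],      # equal trust in both models
    iou_thr=0.55,        # boxes overlapping above this IoU are merged
    skip_box_thr=0.05,   # drop very low-confidence boxes before fusion
)
print(fused_boxes, fused_scores, fused_labels)
```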

19 pages, 3076 KiB  
Article
A General Machine Learning Model for Assessing Fruit Quality Using Deep Image Features
by Ioannis D. Apostolopoulos, Mpesi Tzani and Sokratis I. Aznaouridis
AI 2023, 4(4), 812-830; https://doi.org/10.3390/ai4040041 - 27 Sep 2023
Cited by 2 | Viewed by 4792
Abstract
Fruit quality is a critical factor in the produce industry, affecting producers, distributors, consumers, and the economy. High-quality fruits are more appealing, nutritious, and safe, boosting consumer satisfaction and revenue for producers. Artificial intelligence can aid in assessing fruit quality using images. This paper presents a general machine learning model for assessing fruit quality using deep image features. The model leverages the learning capabilities of vision transformers (ViT), a recently successful class of networks for image classification. The ViT model is built and trained on a combination of various fruit datasets and taught to distinguish between good and rotten fruit images based on their visual appearance rather than predefined quality attributes. The general model demonstrated impressive results in accurately identifying the quality of various fruits, such as apples (with 99.50% accuracy), cucumbers (99%), grapes (100%), kakis (99.50%), oranges (99.50%), papayas (98%), peaches (98%), tomatoes (99.50%), and watermelons (98%). However, it showed slightly lower performance in identifying guavas (97%), lemons (97%), limes (97.50%), mangoes (97.50%), pears (97%), and pomegranates (97%).
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
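As an illustration of the general approach, fine-tuning a pretrained vision transformer as a binary good/rotten classifier, a minimal PyTorch/torchvision sketch is given below; the dataset path, backbone choice (vit_b_16), and hyperparameters are placeholders, not the paper's configuration.

```python
# Sketch: a binary good/rotten fruit classifier built on a pretrained
# vision transformer (ViT). Dataset paths and hyperparameters are
# illustrative placeholders, not the paper's actual setup.
import torch
import torch.nn as nn
from torchvision import datasets
from torchvision.models import vit_b_16, ViT_B_16_Weights

weights = ViT_B_16_Weights.DEFAULT
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # good vs. rotten

# Hypothetical ImageFolder layout: fruit_quality/train/{good,rotten}/*.jpg
train_set = datasets.ImageFolder("fruit_quality/train", transform=weights.transforms())
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```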

14 pages, 3156 KiB  
Article
Comparison of Various Nitrogen and Water Dual Stress Effects for Predicting Relative Water Content and Nitrogen Content in Maize Plants through Hyperspectral Imaging
by Hideki Maki, Valerie Lynch, Dongdong Ma, Mitchell R. Tuinstra, Masanori Yamasaki and Jian Jin
AI 2023, 4(3), 692-705; https://doi.org/10.3390/ai4030036 - 18 Aug 2023
Viewed by 1545
Abstract
Water and nitrogen (N) are major factors in plant growth and agricultural production. However, they are often confounded and produce overlapping symptoms of plant stress. The objective of this study is to verify whether different levels of N treatment influence water status prediction, and vice versa, in hyperspectral modeling. We cultivated 108 maize plants in a greenhouse under three-level N treatments in combination with three-level water treatments. Hyperspectral images were collected from these plants, and then Relative Water Content (RWC) and N content were measured as ground truth. Partial Least Squares (PLS) regression was used to build prediction models for RWC and N content, and their accuracy and robustness were compared across the different N treatment datasets and different water treatment datasets, respectively. The results demonstrated that the PLS prediction for RWC using hyperspectral data was impacted by differences in N stress (Ratio of Performance to Deviation, RPD, from 0.87 to 2.27). Furthermore, the dataset with water and N dual stresses improved model accuracy and robustness (RPD from 1.69 to 2.64). Conversely, the PLS prediction for N content was found to be robust against differences in water stress (RPD from 2.33 to 3.06). In conclusion, we suggest that water and N dual treatments can be helpful in building models with wide applicability and high accuracy for evaluating plant water status such as RWC.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
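A minimal scikit-learn sketch of the modeling step described above, PLS regression on hyperspectral data scored with RPD (standard deviation divided by RMSE), is shown below; the spectra and RWC values are random placeholders, and the number of latent components is an assumption.

```python
# Sketch: predicting relative water content (RWC) from hyperspectral
# reflectance with partial least squares (PLS) regression, scored with
# the ratio of performance to deviation (RPD = SD / RMSE).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((108, 200))  # 108 plants x 200 spectral bands (placeholder)
y = rng.random(108)         # measured RWC (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)  # component count is an assumption
pls.fit(X_tr, y_tr)
y_pred = pls.predict(X_te).ravel()

rmse = np.sqrt(mean_squared_error(y_te, y_pred))
rpd = np.std(y_te) / rmse  # RPD > 2 is commonly taken as a reliable model
print(f"RMSE={rmse:.3f}, RPD={rpd:.2f}")
```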

13 pages, 3794 KiB  
Article
Application of Machine Learning for Insect Monitoring in Grain Facilities
by Querriel Arvy Mendoza, Lester Pordesimo, Mitchell Neilsen, Paul Armstrong, James Campbell and Princess Tiffany Mendoza
AI 2023, 4(1), 348-360; https://doi.org/10.3390/ai4010017 - 22 Mar 2023
Cited by 8 | Viewed by 5313
Abstract
In this study, a basic insect detection system consisting of a manual-focus camera, a Jetson Nano (a low-cost, low-power single-board computer), and a trained deep learning model was developed, and the model was validated through a live visual feed. Detecting, classifying, and monitoring insect pests in a grain storage or food facility in real time is vital to making insect control decisions. The camera captures an image of the insect and passes it to the Jetson Nano, which runs the trained deep-learning model to detect the presence and species of insects; the detection results are displayed on a monitor. The system was evaluated under three different lighting conditions (white LED light, yellow LED light, and no lighting), validated using F1 scores, and compared for accuracy across light sources. Tested with a variety of stored-grain insect pests, it was able to detect and classify adult cigarette beetles and warehouse beetles with acceptable accuracy. The results demonstrate that the system is an effective and affordable automated solution to insect detection. Such an automated insect detection system can help reduce pest control costs and save producers time and energy while safeguarding the quality of stored products.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
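As a small illustration of the validation metric mentioned above, the snippet below computes F1 scores from per-lighting detection counts; the counts are hypothetical, not the study's results.

```python
# Sketch: comparing detection quality across lighting conditions with F1 scores.
# F1 = 2 * precision * recall / (precision + recall); counts are placeholders.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

counts = {  # hypothetical (true positive, false positive, false negative) counts
    "white LED": (48, 4, 3),
    "yellow LED": (45, 6, 6),
    "no light": (30, 9, 21),
}
for condition, (tp, fp, fn) in counts.items():
    print(f"{condition}: F1 = {f1_score(tp, fp, fn):.2f}")
```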

15 pages, 1696 KiB  
Article
Data Synthesis for Alfalfa Biomass Yield Estimation
by Jonathan Vance, Khaled Rasheed, Ali Missaoui and Frederick W. Maier
AI 2023, 4(1), 1-15; https://doi.org/10.3390/ai4010001 - 21 Dec 2022
Viewed by 1747
Abstract
Alfalfa is critical to global food security, and its data are abundant in the U.S. nationally but often scarce locally, limiting the potential performance of machine learning (ML) models in predicting alfalfa biomass yields. Training ML models on local-only data results in very low estimation accuracy when the datasets are very small. Therefore, we explore synthesizing non-local data to estimate biomass yields labeled as high, medium, or low. One option to remedy scarce local data is to train models using non-local data; however, this only works about as well as using local data. Therefore, we propose a novel pipeline that trains models using data synthesized from non-local data to estimate local crop yields. Our pipeline, synthesized non-local training (SNLT, pronounced like sunlight), achieves a gain of 42.9% in accuracy over the best results from regular non-local and local training on our very small target dataset. This pipeline produced the highest accuracy of 85.7% with a decision tree classifier. From these results, we conclude that SNLT can be a useful tool in helping to estimate crop yields with ML. Furthermore, we propose a software application called Predict Your CropS (PYCS, pronounced like Pisces) designed to help farmers and researchers estimate and predict crop yields based on pretrained models.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
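The SNLT pipeline itself is not reproduced here; as a rough sketch of the idea under stated assumptions (Gaussian-noise jittering stands in for the paper's actual synthesis step, and the arrays are random placeholders), training a decision tree on synthesized non-local data and scoring it on a small local set could look like this:

```python
# Rough sketch of the synthesized non-local training (SNLT) idea: train on
# data synthesized from non-local records, evaluate on a small local set.
# The jittering below is a stand-in for the paper's synthesis step.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Non-local data: features and yield class {0: low, 1: medium, 2: high} (placeholders).
X_nonlocal = rng.random((500, 6))
y_nonlocal = rng.integers(0, 3, 500)

# Synthesize extra samples by jittering non-local records (placeholder synthesis).
X_synth = np.vstack([X_nonlocal + rng.normal(0, 0.02, X_nonlocal.shape) for _ in range(3)])
y_synth = np.tile(y_nonlocal, 3)

# Small local target dataset (placeholder).
X_local = rng.random((14, 6))
y_local = rng.integers(0, 3, 14)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(np.vstack([X_nonlocal, X_synth]), np.concatenate([y_nonlocal, y_synth]))
print("local accuracy:", accuracy_score(y_local, clf.predict(X_local)))
```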

11 pages, 337 KiB  
Article
Dimensionality Reduction Statistical Models for Soil Attribute Prediction Based on Raw Spectral Data
by Marcelo Chan Fu Wei, Ricardo Canal Filho, Tiago Rodrigues Tavares, José Paulo Molin and Afrânio Márcio Corrêa Vieira
AI 2022, 3(4), 809-819; https://doi.org/10.3390/ai3040049 - 30 Sep 2022
Cited by 1 | Viewed by 1934
Abstract
To obtain better performance when modeling soil spectral data for attribute prediction, researchers frequently resort to data pretreatment, aiming to reduce noise and highlight spectral features. Although dimensionality reduction statistical approaches can cope with the sparse dimensionality of such data, few studies have explored their applicability in soil sensing. Therefore, this study’s objective was to assess the predictive performance of two dimensionality reduction statistical models that are not widespread in the proximal soil sensing community: principal components regression (PCR) and the least absolute shrinkage and selection operator (lasso). These two approaches were compared with multiple linear regression (MLR). All of the modelling strategies were applied without employing pretreatment techniques for soil attribute determination using X-ray fluorescence spectroscopy (XRF) and visible and near-infrared diffuse reflectance spectroscopy (Vis-NIR) data. In addition, the achieved results were compared against those reported in the literature using pretreatment techniques. The study was carried out with 102 soil samples from two distinct fields. Predictive models were developed for nine chemical and physical soil attributes using lasso, PCR, and MLR. Both Vis-NIR and XRF raw spectral data showed strong performance for soil attribute prediction when modelled with PCR and the lasso method. In general, similar results were found when comparing the root mean squared error (RMSE) and coefficient of determination (R2) from this study and from the literature that applied pretreatment techniques. For example, considering base saturation (V%), for Vis-NIR combined with PCR, this study found RMSE and R2 values of 10.60 and 0.79, compared with 10.38 and 0.80, respectively, in the literature. In addition, for potassium (K), XRF associated with lasso yielded an RMSE value of 0.60 and R2 of 0.92, compared with an RMSE of 0.53 and R2 of 0.95 in the literature. The major discrepancy was observed for phosphorus (P) and organic matter (OM) prediction when applying PCR to the XRF data, which showed R2 of 0.33 (for P) and 0.52 (for OM) without pretreatment techniques in this study, versus R2 of 0.01 (for P) and 0.74 (for OM) with preprocessing techniques in the literature. These results indicate that data pretreatment can be dispensable for predicting some soil attributes when Vis-NIR and XRF raw data are modeled with dimensionality reduction statistical models. Despite this, there is no consensus on the best way to calibrate data, as this appears to be attribute and area specific.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
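A minimal scikit-learn sketch of the two dimensionality-reduction models compared above, PCR (PCA followed by linear regression) and lasso, applied to raw spectra; the data, component count, and regularization strength are illustrative assumptions.

```python
# Sketch: principal components regression (PCR = PCA + linear regression)
# and lasso on raw spectral data, evaluated with RMSE and R2.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.random((102, 300))  # 102 samples x 300 raw spectral variables (placeholder)
y = rng.random(102)         # measured soil attribute, e.g., potassium (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "PCR": make_pipeline(PCA(n_components=10), LinearRegression()),
    "lasso": Lasso(alpha=0.01, max_iter=10000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: RMSE={rmse:.3f}, R2={r2_score(y_te, pred):.2f}")
```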
20 pages, 11177 KiB  
Article
A Spatial AI-Based Agricultural Robotic Platform for Wheat Detection and Collision Avoidance
by Sujith Gunturu, Arslan Munir, Hayat Ullah, Stephen Welch and Daniel Flippo
AI 2022, 3(3), 719-738; https://doi.org/10.3390/ai3030042 - 30 Aug 2022
Cited by 3 | Viewed by 3347
Abstract
To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint for agronomists is not to run over the wheat while driving. Accordingly, we have trained a spatial deep learning model that helps the robot navigate autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled wheat images, along with wheat images that we collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI Camera. Together, the newly trained model and camera achieve a frame rate of 18–23 frames per second (fps), fast enough for the robot to process its surroundings once every 2–3 inches of driving. Once we confirmed that the robot accurately detects its surroundings, we addressed its autonomous navigation. The new stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that utilizes this distance information to help the robot see its surroundings and maneuver in the field, thereby precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results obtained by our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as the YOLO V5 and Faster region-based convolutional neural network (R-CNN) models. The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
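The authors' navigation algorithm is not reproduced here; the sketch below is a much-simplified, hypothetical distance-threshold avoidance rule using detections paired with stereo depth estimates, with the Detection structure and safety distance as assumptions.

```python
# Simplified sketch of a distance-based avoidance decision: given wheat
# detections with estimated depth (as a stereo depth camera would provide),
# steer away from any detection closer than a safety threshold.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    x_center: float  # horizontal box center, normalized to [0, 1]
    depth_m: float   # estimated distance to the object in meters

def avoidance_command(detections: List[Detection], safety_dist_m: float = 0.5) -> str:
    """Return a coarse steering command based on the nearest wheat detection."""
    close = [d for d in detections if d.depth_m < safety_dist_m]
    if not close:
        return "forward"
    nearest = min(close, key=lambda d: d.depth_m)
    # Steer away from the side on which the nearest obstacle appears.
    return "steer_right" if nearest.x_center < 0.5 else "steer_left"

print(avoidance_command([Detection(0.3, 0.4), Detection(0.7, 1.2)]))  # -> steer_right
```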
