
Sensor and AI Technologies in Intelligent Agriculture

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Smart Agriculture".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 18425

Special Issue Editors


Guest Editor
College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: bionic intelligent agricultural machinery; autonomous navigation; target recognition based on visual bionics; agricultural drones; agricultural artificial intelligence; soil and plant sensing; agricultural machinery information collection and control

Guest Editor
The College of Biological and Agricultural Engineering, Jilin University, 5988 Renmin Street, Changchun 130025, China
Interests: agricultural robots; smart agricultural technology

Special Issue Information

Dear Colleagues,

In recent years, sensor and artificial intelligence (AI) technologies have attracted increasing interest in both academia and industry and have been extensively applied in intelligent agriculture. Accelerating the adoption of agricultural sensors and AI technologies is urgently needed for the development of modern agriculture and will help to advance smart farming. This Special Issue aims to showcase outstanding implementations of agricultural sensors and AI technologies for intelligent agricultural applications and to provide an opportunity for researchers to publish work related to the topic. Articles that address agricultural sensors and AI technologies applied to crop and animal production are welcome, including both original research articles and reviews. The scope of this Special Issue includes but is not limited to the following topics:

  • Crop sensing and sensors;
  • Animal perception and sensors;
  • Environmental information perception and sensors;
  • Agricultural equipment information collection and processing;
  • Key technologies of smart agriculture;
  • Artificial intelligence in agriculture;
  • Intelligent farm equipment;
  • Intelligent orchard equipment;
  • Intelligent garden equipment;
  • Intelligent pasture equipment;
  • Intelligent fishing-ground equipment.

Prof. Dr. Jiangtao Qi
Prof. Dr. Haiye Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensors
  • artificial intelligence
  • intelligent agriculture

Published Papers (11 papers)


Research

16 pages, 3541 KiB  
Article
Development of a Low-Cost Distributed Computing Pipeline for High-Throughput Cotton Phenotyping
by Vaishnavi Thesma, Glen C. Rains and Javad Mohammadpour Velni
Sensors 2024, 24(3), 970; https://doi.org/10.3390/s24030970 - 02 Feb 2024
Viewed by 633
Abstract
In this paper, we present the development of a low-cost distributed computing pipeline for cotton plant phenotyping using Raspberry Pi, Hadoop, and deep learning. Specifically, we use a cluster of several Raspberry Pis in a primary-replica distributed architecture using the Apache Hadoop ecosystem and a pre-trained Tiny-YOLOv4 model for cotton bloom detection from our past work. We feed cotton image data collected from a research field in Tifton, GA, into our cluster’s distributed file system for robust file access and distributed, parallel processing. We then submit job requests to our cluster from our client to process cotton image data in a distributed and parallel fashion, from pre-processing to bloom detection and spatio-temporal map creation. Additionally, we present a comparison of our four-node cluster performance with centralized, one-, two-, and three-node clusters. This work is the first to develop a distributed computing pipeline for high-throughput cotton phenotyping in field-based agriculture. Full article
(This article belongs to the Special Issue Sensor and AI Technologies in Intelligent Agriculture)
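The primary-replica pattern described in this abstract can be illustrated with a minimal sketch: a primary process farms image files out to worker (replica) processes for parallel handling. This is a hedged stand-in using Python's standard library, not the authors' Hadoop pipeline; `preprocess` is a hypothetical placeholder for the real per-image work (pre-processing and Tiny-YOLOv4 bloom detection).

```python
from concurrent.futures import ThreadPoolExecutor

def preprocess(image_path):
    """Placeholder for per-image work (resizing, normalization,
    Tiny-YOLOv4 inference); here it only reports whether the
    file looks like a JPEG."""
    return image_path, image_path.endswith(".jpg")

def run_pipeline(paths, workers=4):
    """The primary thread distributes image files to worker threads,
    mirroring the map step of a MapReduce-style job over a
    distributed file system."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(preprocess, paths))
```

In the actual system, the worker pool corresponds to Raspberry Pi replica nodes and the file list to paths in HDFS; the map step scales out by adding nodes rather than threads.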

22 pages, 4354 KiB  
Article
Monitoring of a Productive Blue-Green Roof Using Low-Cost Sensors
by Afsana Alam Akhie and Darko Joksimovic
Sensors 2023, 23(24), 9788; https://doi.org/10.3390/s23249788 - 12 Dec 2023
Viewed by 802
Abstract
Considering the rising concern over climate change and the need for local food security, productive blue-green roofs (PBGRs) can be an effective solution to mitigate many relevant environmental issues. However, their cost of operation is high because they are intensive, and an economical operation and maintenance approach would render them a more viable alternative. Low-cost sensors connected through the Internet of Things can provide reliable solutions for the real-time management and distributed monitoring of such roofs by monitoring plant as well as soil conditions. This research assesses the extent to which a low-cost image sensor can be deployed to perform continuous, automated monitoring of an urban rooftop farm as a PBGR and evaluates the thermal performance of the roof for additional crops. An RGB-depth image sensor was used in this study to monitor crop growth. Images collected from weekly scans were processed by segmentation to estimate the plant heights of three crop species. The devised technique performed well for leafy and tall-stemmed plants such as okra, and the correlation between the estimated and observed growth characteristics was acceptable. For smaller plants, bright light and shadow considerably influenced the image quality, decreasing the precision. Six other crop species were monitored using a wireless sensor network to investigate how different crop varieties respond in terms of thermal performance. Celery, snow peas, and potato recorded the highest maximum daily cooling, while beet and zucchini showed sound cooling effects in terms of mean daily cooling. Full article

22 pages, 13151 KiB  
Article
SFHG-YOLO: A Simple Real-Time Small-Object-Detection Method for Estimating Pineapple Yield from Unmanned Aerial Vehicles
by Guoyan Yu, Tao Wang, Guoquan Guo and Haochun Liu
Sensors 2023, 23(22), 9242; https://doi.org/10.3390/s23229242 - 17 Nov 2023
Cited by 3 | Viewed by 1101
Abstract
The counting of pineapple buds relies on target recognition when estimating pineapple yield from unmanned aerial vehicle (UAV) photography. This research proposes the SFHG-YOLO method, with YOLOv5s as the baseline, to address the practical need to identify small objects (pineapple buds) in UAV imagery and the drawbacks of existing algorithms in terms of real-time performance and accuracy. Pineapple buds in the field are small objects that can occur at high density, so a lightweight network model was constructed to detect them. To construct the lightweight network model, the first step involves utilizing the coordinate attention module and MobileNetV3. Additionally, to fully leverage feature information across various levels and enhance the perception of tiny objects, we developed both an enhanced spatial attention module and an adaptive context information fusion module, increasing detection accuracy and robustness. Experiments were conducted to validate the proposed algorithm’s performance in detecting small objects. The SFHG-YOLO model exhibited significant gains in evaluation metrics, achieving mAP@0.5 and mAP@0.5:0.95 improvements of 7.4% and 31%, respectively, compared to the baseline model YOLOv5s. Considering the model size and computational cost, the findings underscore the superior performance of the proposed technique in detecting high-density small objects, offering a reliable detection approach for estimating pineapple yield. Full article

16 pages, 6841 KiB  
Article
Development and Experiment of an Innovative Row-Controlled Device for Residual Film Collector to Drive Autonomously along the Ridge
by Zhijian Chen, Jianjun Yin, Jiaxin Yang, Maile Zhou, Xinzhong Wang and Sheikh Muhammad Farhan
Sensors 2023, 23(20), 8484; https://doi.org/10.3390/s23208484 - 16 Oct 2023
Cited by 1 | Viewed by 715
Abstract
The field harvesting process of harvesting machinery is often affected by high workloads and environmental factors that can impede or delay manual row following, thereby lowering the efficiency and quality of residual film collection. To address this challenge, an automatic row-following control system using the 4mz-220d self-propelled residual film collector as the experimental carrier was proposed in this study. Cotton stalks in the ridges were chosen as the research object, and key technologies of machinery and electronic control were comprehensively applied, incorporating a pure pursuit model as the path-tracking control method. To achieve automatic row following while traveling in the field, the fuzzy control principle was implemented to dynamically adjust the look-ahead distance within the pure pursuit model, and the expected steering angle of the steering wheel was determined from the kinematic model of the collector. MATLAB/Simulink software was used to simulate and analyze the proposed model, achieving significant improvements in the automation level of the residual film collector. Field harvesting tests showed that the average deviation of manual row following was 0.144 m, while that of automatic row following was 0.066 m, a reduction of 0.078 m, with 95.71% of deviations within 0.1 m. The study demonstrated that the designed automatic row-following system exhibited high stability and robustness, meeting the requirements of autonomous row-following operations of residual film collectors. The results can serve as a reference for future research on autonomous navigation technology in agriculture. Full article
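The pure pursuit law referenced in this abstract computes a steering angle from a look-ahead goal point on the row line, the vehicle wheelbase, and the look-ahead distance. The sketch below is the generic textbook form of that geometry under a kinematic bicycle model, not the authors' implementation; the parameter names are illustrative.

```python
import math

def pure_pursuit_steering(dx, dy, wheelbase, lookahead):
    """Front-wheel steering angle (rad) for a kinematic bicycle model
    chasing a goal point dx ahead and dy to the left of the vehicle."""
    alpha = math.atan2(dy, dx)  # heading error toward the goal point
    # classic pure pursuit law: delta = atan(2 * L * sin(alpha) / l_d)
    return math.atan(2.0 * wheelbase * math.sin(alpha) / lookahead)
```

A fuzzy controller, as in the paper, would then tune `lookahead` online: a shorter look-ahead tracks the row more tightly, while a longer one smooths the steering response.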

15 pages, 2094 KiB  
Article
Resistive Sensing of Seed Cotton Moisture Regain Based on Pressure Compensation
by Liang Fang, Ruoyu Zhang, Hongwei Duan, Jinqiang Chang, Zhaoquan Zeng, Yifu Qian and Mianzhe Hong
Sensors 2023, 23(20), 8421; https://doi.org/10.3390/s23208421 - 12 Oct 2023
Viewed by 888
Abstract
The measurement of seed cotton moisture regain (MR) during harvesting operations is an open and challenging problem. In this study, a new method for the resistive sensing of seed cotton MR based on pressure compensation is proposed. First, an experimental platform was designed. After that, changes in cotton bale parameters during the cotton picker packaging process were simulated on the platform, and the correlations among the compression volume, compression density, contact pressure, and conductivity of seed cotton were analyzed. Then, support vector regression (SVR), random forest (RF), and a backpropagation neural network (BPNN) were employed to build seed cotton MR prediction models. Finally, the performance of the method was evaluated through platform tests. The results showed a weak correlation between contact pressure and compression volume, but a significant correlation (p < 0.01) between contact pressure and compression density. Moreover, nonlinear mathematical models fitted the relationships among compression density, contact pressure, and conductivity better than linear ones. A comparative analysis of the three MR prediction models showed that the BPNN had the highest prediction accuracy, with a coefficient of determination (R2) of 0.986 and a root mean square error (RMSE) of 0.204%. The mean RMSE and mean coefficient of variation (CV) of the performance evaluation tests were 0.20% and 2.22%, respectively, indicating that the proposed method is reliable. The study provides a technical reference for the accurate and rapid measurement of seed cotton MR during harvesting operations. Full article
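The core pressure-compensation idea, predicting moisture regain from conductivity while correcting for contact pressure, can be sketched as a small regression problem. The linear-in-features model below is a simplified illustration, not the SVR/RF/BPNN models from the paper; the variable names and the log-conductivity feature are assumptions.

```python
import numpy as np

def fit_mr_model(conductivity, pressure, mr):
    """Fit MR ~ a*log(conductivity) + b*pressure + c by least squares.
    The pressure term compensates for the contact-pressure effect on
    the resistive reading."""
    X = np.column_stack([np.log(conductivity), pressure, np.ones(len(mr))])
    coef, *_ = np.linalg.lstsq(X, mr, rcond=None)
    return coef

def predict_mr(coef, conductivity, pressure):
    """Predict moisture regain for new conductivity/pressure readings."""
    X = np.column_stack([np.log(conductivity), pressure,
                         np.ones(len(conductivity))])
    return X @ coef
```

The paper's nonlinear regressors play the same role as this fit but can capture the interactions among density, pressure, and conductivity that a linear model misses.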

25 pages, 21439 KiB  
Article
Accuracy vs. Energy: An Assessment of Bee Object Inference in Videos from On-Hive Video Loggers with YOLOv3, YOLOv4-Tiny, and YOLOv7-Tiny
by Vladimir A. Kulyukin and Aleksey V. Kulyukin
Sensors 2023, 23(15), 6791; https://doi.org/10.3390/s23156791 - 29 Jul 2023
Cited by 3 | Viewed by 1252
Abstract
A continuing trend in precision apiculture is to use computer vision methods to quantify characteristics of bee traffic in managed colonies at the hive’s entrance. Since traffic at the hive’s entrance is a contributing factor to the hive’s productivity and health, we assessed the potential of three open-source convolutional network models, YOLOv3, YOLOv4-tiny, and YOLOv7-tiny, to quantify omnidirectional traffic in videos from on-hive video loggers on regular, unmodified one- and two-super Langstroth hives and compared their accuracies, energy efficacies, and operational energy footprints. We trained and tested the models with a 70/30 split on a dataset of 23,173 flying bees manually labeled in 5819 images from 10 randomly selected videos and manually evaluated the trained models on 3600 images from 120 randomly selected videos from different apiaries, years, and queen races. We designed a new energy efficacy metric as a ratio of performance units per energy unit required to make a model operational in a continuous hive monitoring data pipeline. In terms of accuracy, YOLOv3 ranked first, YOLOv7-tiny second, and YOLOv4-tiny third. All models underestimated the true amount of traffic due to false negatives. YOLOv3 was the only model with no false positives, but it had the lowest energy efficacy and the highest operational energy footprint in a deployed hive monitoring data pipeline. YOLOv7-tiny had the highest energy efficacy and the lowest operational energy footprint in the same pipeline. Consequently, YOLOv7-tiny is a model worth considering for training on larger bee datasets if a primary objective is the discovery of non-invasive computer vision models of traffic quantification with higher energy efficacies and lower operational energy footprints. Full article

15 pages, 4089 KiB  
Article
PlantInfoCMS: Scalable Plant Disease Information Collection and Management System for Training AI Models
by Dong Jin, Helin Yin, Ri Zheng, Seong Joon Yoo and Yeong Hyeon Gu
Sensors 2023, 23(11), 5032; https://doi.org/10.3390/s23115032 - 24 May 2023
Viewed by 1391
Abstract
In recent years, the development of deep learning technology has significantly benefited agriculture in domains such as smart and precision farming. Deep learning models require a large amount of high-quality training data, but collecting and managing large amounts of guaranteed-quality data is a critical issue. To meet these requirements, this study proposes a scalable plant disease information collection and management system (PlantInfoCMS). The proposed PlantInfoCMS consists of data collection, annotation, data inspection, and dashboard modules to generate accurate and high-quality pest and disease image datasets for training purposes. Additionally, the system provides various statistical functions that allow users to easily check the progress of each task, making management highly efficient. Currently, PlantInfoCMS handles data on 32 types of crops and 185 types of pests and diseases, and it stores and manages 301,667 original and 195,124 labeled images. PlantInfoCMS is expected to contribute significantly to the diagnosis of crop pests and diseases by providing high-quality training images and by facilitating the management of crop pest and disease data. Full article

28 pages, 2393 KiB  
Article
Ambient Electromagnetic Radiation as a Predictor of Honey Bee (Apis mellifera) Traffic in Linear and Non-Linear Regression: Numerical Stability, Physical Time and Energy Efficiency
by Vladimir A. Kulyukin, Daniel Coster, Anastasiia Tkachenko, Daniel Hornberger and Aleksey V. Kulyukin
Sensors 2023, 23(5), 2584; https://doi.org/10.3390/s23052584 - 26 Feb 2023
Cited by 2 | Viewed by 1416
Abstract
Since bee traffic is a contributing factor to hive health and electromagnetic radiation has a growing presence in the urban milieu, we investigate ambient electromagnetic radiation as a predictor of bee traffic in the hive’s vicinity in an urban environment. To that end, we built two multi-sensor stations and deployed them for four and a half months at a private apiary in Logan, UT, USA, to record ambient weather and electromagnetic radiation. We placed two non-invasive video loggers on two hives at the apiary to extract omnidirectional bee motion counts from videos. The time-aligned datasets were used to evaluate 200 linear and 3,703,200 non-linear (random forest and support vector machine) regressors to predict bee motion counts from time, weather, and electromagnetic radiation. In all regressors, electromagnetic radiation was as good a predictor of traffic as weather, and both were better predictors than time. On the 13,412 time-aligned weather, electromagnetic radiation, and bee traffic records, random forest regressors had higher maximum R2 scores and resulted in more energy-efficient parameterized grid searches. Both types of regressors were numerically stable. Full article

11 pages, 1511 KiB  
Article
Detection of Tomato Leaf Miner Using Deep Neural Network
by Seongho Jeong, Seongkyun Jeong and Jaehwan Bong
Sensors 2022, 22(24), 9959; https://doi.org/10.3390/s22249959 - 17 Dec 2022
Cited by 7 | Viewed by 1760
Abstract
As a result of climate change and global warming, plant diseases and pests are drawing attention because they are dispersing more quickly than ever before. The tomato leaf miner destroys the growth structure of the tomato, resulting in 80 to 100 percent tomato loss. Despite extensive efforts to prevent its spread, the tomato leaf miner can be found on most continents. To protect tomatoes from the tomato leaf miner, inspections must be performed on a regular basis throughout the tomato life cycle. To find a better deep neural network (DNN) approach for detecting tomato leaf miner, we investigated two DNN models for classification and segmentation. The same RGB images of tomato leaves captured from real-world agricultural sites were used to train the two DNN models. Precision, recall, and F1-score were used to compare the performance of two DNN models. In terms of diagnosing the tomato leaf miner, the DNN model for segmentation outperformed the DNN model for classification, with higher precision, recall, and F1-score values. Furthermore, there were no false negative cases in the prediction of the DNN model for segmentation, indicating that it is adequate for detecting plant diseases and pests. Full article
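The precision, recall, and F1-score used above to compare the two DNN models reduce to counts of true positives (TP), false positives (FP), and false negatives (FN). A minimal reference implementation of these standard definitions:

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from raw counts; returns (P, R, F1)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1
```

The abstract's observation that the segmentation model produced no false negatives corresponds to FN = 0, which forces recall to 1.0 regardless of the other counts.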

19 pages, 16915 KiB  
Article
Water Color Identification System for Monitoring Aquaculture Farms
by Hsiang-Chieh Chen, Sheng-Yao Xu and Kai-Han Deng
Sensors 2022, 22(19), 7131; https://doi.org/10.3390/s22197131 - 20 Sep 2022
Cited by 2 | Viewed by 2630
Abstract
This study presents a vision-based water color identification system designed for monitoring aquaculture ponds. The algorithm proposed in this system can identify water color, which is an important factor in aquaculture farming management. To address the effect of outdoor lighting conditions on the proposed system, a color correction method using a color checkerboard was introduced. Several candidates for water-only image patches were extracted by performing image segmentation and fuzzy inferencing. Finally, a deep learning-based model was employed to identify the color of these patches and then find the representative color of the water. Experiments at different aquaculture sites verified the effectiveness of the proposed system and its algorithm. The color identification accuracy exceeded 96% for the test data. Full article
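Checkerboard-based color correction, as mentioned in this abstract, is commonly realized as a least-squares affine map from the observed patch colors to their known reference colors. The sketch below shows that generic technique and is not the authors' exact method; the array shapes and names are assumptions.

```python
import numpy as np

def fit_color_correction(observed, reference):
    """Fit a 4x3 affine matrix M so that [observed, 1] @ M ~ reference.

    observed, reference: (N, 3) arrays of RGB values for the N
    checkerboard patches, captured under the current outdoor lighting
    vs. their standard values."""
    aug = np.hstack([observed, np.ones((len(observed), 1))])
    M, *_ = np.linalg.lstsq(aug, reference, rcond=None)
    return M

def apply_correction(M, pixels):
    """Apply the fitted correction to an (N, 3) array of pixel colors."""
    aug = np.hstack([pixels, np.ones((len(pixels), 1))])
    return aug @ M
```

Once fitted per frame, the same matrix is applied to the water-only patches before color identification, so the classifier sees colors that are stable across lighting conditions.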

17 pages, 19813 KiB  
Article
Estimation of Greenhouse Lettuce Growth Indices Based on a Two-Stage CNN Using RGB-D Images
by Min-Seok Gang, Hak-Jin Kim and Dong-Wook Kim
Sensors 2022, 22(15), 5499; https://doi.org/10.3390/s22155499 - 23 Jul 2022
Cited by 13 | Viewed by 4514
Abstract
Growth indices can quantify crop productivity and establish optimal environmental, nutritional, and irrigation control strategies. A convolutional neural network (CNN)-based model is presented for estimating various growth indices (i.e., fresh weight, dry weight, height, leaf area, and diameter) of four varieties of greenhouse lettuce using red, green, blue, and depth (RGB-D) data obtained using a stereo camera. Data from an online autonomous greenhouse challenge (Wageningen University, June 2021) were employed in this study. The data were collected using an Intel RealSense D415 camera. The developed model has a two-stage CNN architecture based on ResNet50V2 layers. The developed model provided coefficients of determination from 0.88 to 0.95, with normalized root mean square errors of 6.09%, 6.30%, 7.65%, 7.92%, and 5.62% for fresh weight, dry weight, height, diameter, and leaf area, respectively, on unknown lettuce images. Using red, green, blue (RGB) and depth data employed in the CNN improved the determination accuracy for all five lettuce growth indices due to the ability of the stereo camera to extract height information on lettuce. The average time for processing each lettuce image using the developed CNN model run on a Jetson SUB mini-PC with a Jetson Xavier NX was 0.83 s, indicating the potential for the model in fast real-time sensing of lettuce growth indices. Full article
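The normalized root mean square errors reported above are RMSE values scaled to a percentage so that errors on indices with different units (grams, centimeters, square centimeters) are comparable. A minimal sketch, assuming normalization by the observed range (papers sometimes normalize by the mean instead):

```python
import math

def nrmse_percent(actual, predicted):
    """RMSE normalized by the range of the observed values, in percent."""
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return 100.0 * rmse / (max(actual) - min(actual))
```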
