AI, Sensors and Robotics for Smart Agriculture

A special issue of Agronomy (ISSN 2073-4395). This special issue belongs to the section "Precision and Digital Agriculture".

Deadline for manuscript submissions: closed (25 July 2023) | Viewed by 53067

Special Issue Editors


Dr. Baohua Zhang
Guest Editor
College of Artificial Intelligence, Nanjing Agricultural University, Nanjing 210095, China
Interests: quality and safety assessment of agricultural products; harvesting robots; robot vision; robotic grasping; spectral analysis and modeling; robotic systems and their applications in agriculture

Dr. Yongliang Qiao
Guest Editor
Australian Centre for Field Robotics (ACFR), University of Sydney, Sydney, NSW 2006, Australia
Interests: field robotics; intelligent perception; visual localization; artificial intelligence; image processing; pattern recognition

Special Issue Information

Dear Colleagues,

Food security is a major challenge for human society, and the traditional labor-based model of agricultural production can no longer meet growing demand. With the continuous progress of artificial intelligence (AI), sensors and robotics, smart agriculture is gradually being applied to agricultural production all over the world. The purpose of smart agriculture is to improve the efficiency of agricultural production, modernize production and management methods, implement green production and preserve the ecological environment.

Smart agriculture is the deep integration of Internet of Things (IoT) technology with traditional agriculture. The IoT will elevate agriculture to a new level, and smart agriculture is becoming increasingly common among farmers. Using the IoT, sensor technology and agricultural robots, smart agriculture can achieve precise control and scientific management of production and operation processes, realize intelligent control of agricultural cultivation, and promote the transformation of agriculture toward intensive, large-scale development. The aim of this Special Issue is to share studies and practices on AI, sensors and robots in smart agriculture.

Dr. Baohua Zhang
Dr. Yongliang Qiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Agronomy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • panchromatic, multispectral, and hyperspectral approaches
  • field phenotyping and yield estimation
  • disease and stress detection
  • computer vision
  • robot sensing systems
  • artificial intelligence and machine learning
  • sensor fusion in agri-robotics
  • variable-rate applications
  • farm management information systems
  • remote sensing
  • ICT applications
  • agri-robotics navigation and awareness

Published Papers (26 papers)


Research


15 pages, 19203 KiB  
Article
Improved Faster Region-Based Convolutional Neural Networks (R-CNN) Model Based on Split Attention for the Detection of Safflower Filaments in Natural Environments
by Zhenguo Zhang, Ruimeng Shi, Zhenyu Xing, Quanfeng Guo and Chao Zeng
Agronomy 2023, 13(10), 2596; https://doi.org/10.3390/agronomy13102596 - 11 Oct 2023
Cited by 2 | Viewed by 1011
Abstract
The accurate acquisition of safflower filament information is a prerequisite for robotic picking operations. To detect safflower filaments accurately under different illumination, branch and leaf occlusion, and weather conditions, an improved Faster R-CNN model for filaments was proposed. Because safflower filaments appear dense and small in safflower images, the model adopted ResNeSt-101, a residual network structure, as the backbone feature extraction network to enhance the expressive power of the extracted features. Then, Region of Interest (ROI) Align was used in place of ROI Pooling to reduce the feature errors caused by double quantization. In addition, partitioning around medoids (PAM) clustering was employed to optimize the scale and number of the network's initial anchors and thus improve the detection accuracy of small-sized safflower filaments. The test results showed that the mean Average Precision (mAP) of the improved Faster R-CNN reached 91.49%. Compared with Faster R-CNN, YOLOv3, YOLOv4, YOLOv5, and YOLOv6, the improved Faster R-CNN increased the mAP by 9.52%, 2.49%, 5.95%, 3.56%, and 1.47%, respectively. The mAP of safflower filament detection was higher than 91% on sunny, cloudy, and overcast days, in sunlight and backlight, and under branch and leaf occlusion and dense occlusion. The improved Faster R-CNN can accurately detect safflower filaments in natural environments and can provide technical support for the recognition of small-sized crops. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
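
As an aside on the anchor-optimization step described above, the following is a minimal sketch of partitioning around medoids (PAM) applied to labeled box sizes to derive anchor scales. It is not the authors' code; the NumPy implementation, k = 9, and the synthetic box data are all illustrative assumptions.

```python
import numpy as np

def pam_anchors(wh, k=9, iters=100, seed=0):
    """Cluster (width, height) pairs with k-medoids; the medoids become anchors."""
    rng = np.random.default_rng(seed)
    medoids = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest medoid (Euclidean distance on w, h).
        d = np.linalg.norm(wh[:, None, :] - medoids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = wh[labels == j]
            if len(members) == 0:
                continue
            # The medoid is the member minimizing total distance within its cluster.
            intra = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
            new_medoids[j] = members[intra.sum(axis=1).argmin()]
        if np.allclose(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids[np.argsort(medoids.prod(axis=1))]  # sorted by box area

# Illustrative stand-in for labeled filament boxes: 500 random small (w, h) pairs.
boxes_wh = np.abs(np.random.default_rng(1).normal([30, 40], [10, 15], (500, 2)))
print(pam_anchors(boxes_wh, k=9).round(1))
```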

20 pages, 21888 KiB  
Article
Design and Testing of Bionic-Feature-Based 3D-Printed Flexible End-Effectors for Picking Horn Peppers
by Lexing Deng, Tianyu Liu, Ping Jiang, Aolin Qi, Yuchen He, Yujie Li, Mingqin Yang and Xin Deng
Agronomy 2023, 13(9), 2231; https://doi.org/10.3390/agronomy13092231 - 26 Aug 2023
Cited by 2 | Viewed by 1219
Abstract
To solve the problems of the poor adaptability and large size of pepper harvesting machinery in facility agriculture, and to enhance the efficiency and quality of pepper harvesting and ultimately boost farmers' income, several flexible end-effectors were designed. These end-effectors were tailored to the unique morphologies of horn peppers, drawing inspiration from biomimicry, and their performance was then validated experimentally. Four biological features, namely the outer contours of a Vicia faba L. fruit, an Abelmoschus esculentus fruit, the upper jaw of a Lucanidae, and a Procambarus clarkii claw, were selected and designed using 3D software. To ascertain structural viability and establish the initial design framework for the test end-effector, a simulation analysis of the strength and deformation of the flexible end-effector under various pepper-picking conditions was conducted. PLA material and 3D printing technology were used to create the end-effectors, which were combined with the mobile robotic arm platform ROSMASTER X3 PLUS to build a test prototype; a pepper tensile test was performed to pre-determine the reasonableness of the picking scheme, and the prototype was then used for actual pepper picking to compare the effectiveness of the several flexible end-effectors. In six experiments, each flexible end-effector harvested 120 horn peppers. The Vicia faba L. flexible end-effector had both the lowest average breakage rate, at 1.7%, and the lowest average drop rate, at 3.3%. The test results indicated that the flexible end-effector emulating the outer contour of the Vicia faba L. fruit performed best. This design exhibited high working efficiency and the lowest rates of fruit breakage and fruit drops, surpassing both artificial and traditional machine picking methods and effectively fulfilling the requirements for pepper-picking operations in facility agriculture. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

16 pages, 8895 KiB  
Article
Detection and Positioning of Camellia oleifera Fruit Based on LBP Image Texture Matching and Binocular Stereo Vision
by Xiangming Lei, Mingliang Wu, Yajun Li, Anwen Liu, Zhenhui Tang, Shang Chen and Yang Xiang
Agronomy 2023, 13(8), 2153; https://doi.org/10.3390/agronomy13082153 - 16 Aug 2023
Viewed by 983
Abstract
To achieve the rapid recognition and accurate picking of Camellia oleifera fruits, a binocular vision system composed of two industrial cameras was used to collect images of Camellia oleifera fruits in natural environments. The YOLOv7 convolutional neural network model was used for iterative training, and the optimal weight model was selected to recognize the images and obtain the anchor frame regions of the Camellia oleifera fruits. The local binary pattern (LBP) maps of the anchor frame regions were extracted and matched using the normalized correlation coefficient template matching algorithm to obtain the positions of the center point in the left and right images. The recognition results showed that the accuracy rate, recall rate, mAP and F1 of the model were 97.3%, 97.6%, 97.7% and 97.4%, respectively. The recognition rate of Camellia oleifera fruit with slight shading was 93.13%, and with severe shading it was 75.21%; the recognition rate was 90.64% under sunlight conditions and 91.34% under shading conditions. The orchard experiment results showed that, in the depth range of 400–600 mm, the maximum error of the binocular stereo vision system in the depth direction was 4.279 mm, with a standard deviation of 1.142 mm. The detection and three-dimensional positioning accuracy of the binocular stereo vision system for Camellia oleifera fruits can basically meet the working requirements of a Camellia oleifera fruit-picking robot. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
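
The matching-and-triangulation steps outlined above can be sketched with standard library calls: scikit-image's local_binary_pattern for the LBP maps, OpenCV's TM_CCOEFF_NORMED template matching for the left–right correspondence, and the pinhole relation Z = fB/d for depth. The LBP parameters, focal length, and baseline below are placeholder assumptions, not the paper's calibration.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_map(gray, P=8, R=1):
    """Uniform LBP texture map of a grayscale image."""
    return local_binary_pattern(gray, P, R, method="uniform").astype(np.float32)

def match_right_position(lbp_left_roi, lbp_right_strip):
    """Locate the left-image ROI inside a search strip of the right LBP map
    using normalized correlation coefficient template matching."""
    res = cv2.matchTemplate(lbp_right_strip, lbp_left_roi, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    return max_loc  # top-left corner of the best match

def depth_mm(x_left, x_right, focal_px=1200.0, baseline_mm=60.0):
    """Pinhole stereo depth: Z = f * B / d, with disparity d in pixels."""
    d = float(x_left - x_right)
    return focal_px * baseline_mm / d if d > 0 else None
```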

16 pages, 17129 KiB  
Article
Cucumber Picking Recognition in Near-Color Background Based on Improved YOLOv5
by Liyang Su, Haixia Sun, Shujuan Zhang, Xinyuan Lu, Runrun Wang, Linjie Wang and Ning Wang
Agronomy 2023, 13(8), 2062; https://doi.org/10.3390/agronomy13082062 - 04 Aug 2023
Cited by 2 | Viewed by 1113
Abstract
Rapid and precise detection of cucumbers is a key element in enhancing the capability of intelligent harvesting robots. Problems such as near-color background interference, branch and leaf occlusion of fruits, and target scale diversity in greenhouse environments pose higher requirements for cucumber detection algorithms. Therefore, a lightweight YOLOv5s-Super model was proposed based on the YOLOv5s model. First, the bidirectional feature pyramid network (BiFPN) and a C3CA module were added to the YOLOv5s-Super model with the goal of capturing long-range dependencies of cucumber shoulder features and dynamically fusing multi-scale features in near-color backgrounds. Second, the Ghost module was added to reduce the inference time and floating-point computation of the model. Finally, this study visualized different feature fusion methods for the BiFPN module and independently designed a C3SimAM module to compare parametric and non-parametric attention mechanisms. The results showed that the YOLOv5s-Super model achieves an mAP of 87.5%, which is 4.2% higher than YOLOv7-tiny and 1.9% higher than YOLOv8s. The improved model can more accurately and robustly detect multi-scale targets in complex near-color backgrounds while remaining lightweight. These results can provide technical support for the implementation of intelligent cucumber picking. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
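
For readers unfamiliar with the Ghost module mentioned in the abstract, a minimal PyTorch sketch follows: a thin ordinary convolution produces primary features, and cheap depthwise convolutions "ghost" the remaining channels. The ratio, kernel size, and activation are illustrative defaults rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, c_in, c_out, ratio=2, dw_kernel=3):
        super().__init__()
        c_primary = c_out // ratio          # channels from the costly conv
        c_cheap = c_out - c_primary         # channels from cheap depthwise ops
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_primary, 1, bias=False),
            nn.BatchNorm2d(c_primary), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_primary, c_cheap, dw_kernel, padding=dw_kernel // 2,
                      groups=c_primary, bias=False),
            nn.BatchNorm2d(c_cheap), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        # Concatenate primary features with their cheaply generated "ghosts".
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostModule(64, 128)(torch.randn(1, 64, 40, 40)).shape)  # (1, 128, 40, 40)
```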

16 pages, 6657 KiB  
Article
DCF-Yolov8: An Improved Algorithm for Aggregating Low-Level Features to Detect Agricultural Pests and Diseases
by Lijuan Zhang, Gongcheng Ding, Chaoran Li and Dongming Li
Agronomy 2023, 13(8), 2012; https://doi.org/10.3390/agronomy13082012 - 29 Jul 2023
Cited by 5 | Viewed by 2904
Abstract
The invasion of agricultural diseases and insect pests is a major obstacle to crop growth, and their detection is a very challenging task. The diversity of diseases and pests in terms of shapes, colors, and sizes, as well as changes in the lighting environment, has a massive impact on detection accuracy. We improved the C2F module based on DenseBlock and propose DCF to extract low-level features such as the edge textures of pests and diseases. Through the sensitivity of low-level features to the diversity of pests and diseases, the DCF module can better cope with complex detection tasks and improve the accuracy and robustness of detection. The complex background environments of pests and diseases and the varied lighting conditions give the IP102 dataset strong nonlinear characteristics. The Mish activation function was selected to replace the activation in the CBS module, yielding a CBM module that can better learn the nonlinear characteristics of the data and effectively alleviate gradient vanishing during training. Experiments show that our algorithm improves on the advanced YOLOv8: compared with YOLOv8, it raises mAP@50, precision, and recall by 2%, 1.3%, and 3.7%, respectively. The model in this paper thus offers higher accuracy and versatility. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
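
The CBS-to-CBM substitution described in the abstract amounts to swapping the activation in a Conv-BN-Act block for Mish, where Mish(x) = x·tanh(softplus(x)). A generic PyTorch sketch, assuming a standard block layout rather than the exact YOLOv8 definition:

```python
import torch
import torch.nn as nn

class CBM(nn.Module):
    """Conv + BatchNorm + Mish: the 'CBM' variant of a Conv-BN-SiLU ('CBS') block."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Mish()  # Mish(x) = x * tanh(softplus(x)); smooth, non-monotonic

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

print(CBM(3, 16)(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```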

18 pages, 4046 KiB  
Article
Adaptive Fusion Positioning Based on Gaussian Mixture Model for GNSS-RTK and Stereo Camera in Arboretum Environments
by Shenghao Liang, Wenfeng Zhao, Nuanchen Lin and Yuanjue Huang
Agronomy 2023, 13(8), 1982; https://doi.org/10.3390/agronomy13081982 - 27 Jul 2023
Viewed by 771
Abstract
The integration of Global Navigation Satellite System (GNSS) Real-Time Kinematics (RTK) can provide high-precision, real-time location information with global coverage in open areas. In arboretum environments, however, the ability to achieve continuous high-precision positioning with global positioning technology is limited by various sources of interference, such as multi-path effects, signal obstruction, and environmental noise. To achieve precise navigation in challenging GNSS signal environments, visual SLAM systems are widely used owing to their ability to adapt to different environmental features. This paper therefore proposes an optimized solution that integrates measurements from GNSS-RTK and stereo cameras. The presented approach aligns the coordinates of the two sensors and then employs an adaptive sliding window, which dynamically adjusts the window size and optimizes the poses within it. At the same time, to address the variations and uncertainties of GNSS signals in non-ideal environments, this paper proposes modeling the potential noise in GNSS signals with a Gaussian Mixture Model (GMM) and estimating the GMM parameters online with a Variational Bayesian Inference (VBI) method. The integration of this model with an optimization-based approach further enhances positioning accuracy and robustness. Evaluation results from real vehicle tests show that, in challenging GNSS arboretum environments, the GMM applied to GNSS/VO integration achieves higher accuracy and better robustness. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
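
In the spirit of the GMM-with-variational-inference noise model described above, scikit-learn's BayesianGaussianMixture can fit such a mixture to GNSS position residuals; the synthetic clean/multipath residuals and the prior settings below are assumptions for illustration, not the authors' estimator.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic ENU residuals: mostly clean fixes plus a multipath-corrupted mode.
clean = rng.normal(0.0, 0.02, size=(900, 2))      # ~2 cm noise
multipath = rng.normal(0.5, 0.30, size=(100, 2))  # biased, noisy outliers
residuals = np.vstack([clean, multipath])

# Variational inference over mixture weights, means, and covariances.
gmm = BayesianGaussianMixture(n_components=3, covariance_type="full",
                              weight_concentration_prior=1e-2).fit(residuals)

# Per-epoch responsibilities could then down-weight suspect GNSS factors
# in a sliding-window pose optimization.
responsibilities = gmm.predict_proba(residuals)
print(gmm.weights_.round(3))
```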

22 pages, 9649 KiB  
Article
Maize Nitrogen Grading Estimation Method Based on UAV Images and an Improved Shufflenet Network
by Weizhong Sun, Bohan Fu and Zhao Zhang
Agronomy 2023, 13(8), 1974; https://doi.org/10.3390/agronomy13081974 - 26 Jul 2023
Cited by 3 | Viewed by 993
Abstract
Maize is a vital crop in China for both food and industry, and nitrogen content plays a crucial role in its growth and yield. Previous researchers have studied the nitrogen content of single maize plants extensively from a regression perspective; however, the partition management techniques of precision agriculture require plants to be divided by zones and classes. In this study, the focus is therefore shifted to plot classification and graded nitrogen estimation in maize plots using various machine learning and deep learning methods. Firstly, panoramic unmanned aerial vehicle (UAV) images of maize farmland were collected by UAV and preprocessed to obtain UAV images of each maize plot, from which the required datasets were constructed. The dataset comprises three classes—low nitrogen, medium nitrogen, and high nitrogen—with 154, 94, and 46 sets of UAV images, respectively. The training set accounts for eighty percent of the dataset and the test set for the remaining twenty percent. The dataset was then used to train and evaluate models based on machine learning and convolutional neural network algorithms: five machine learning classifiers and four convolutional neural networks were compared, followed by a separate assessment of the best model of each kind. Finally, the ShuffleNet network was enhanced by incorporating SENet and enlarging the kernel size of its depthwise separable convolutions. The findings demonstrate that the enhanced ShuffleNet had the highest performance, with classification accuracy, precision, recall, and F1 scores of 96.8%, 97.0%, 97.1%, and 97.0%, respectively. RegNet, the best of the deep learning models, achieved accuracy, precision, recall, and F1 scores of 96.4%, 96.9%, 96.5%, and 96.6%, respectively, while logistic regression, the best of the machine learning classifiers, attained an accuracy of 77.6%, a precision of 79.5%, a recall of 77.6%, and an F1 score of 72.6%. The improved ShuffleNet thus surpassed logistic regression by 19.2% in accuracy, 17.5% in precision, 19.5% in recall, and 24.4% in the F1 score, and exceeded RegNet by a modest 0.4% in accuracy, 0.1% in precision, 0.6% in recall, and 0.4% in the F1 score. Moreover, the improved ShuffleNet had a substantially lower loss of 0.117, which was 0.039 lower than that of RegNet (0.156). These results indicate the value of the improved ShuffleNet for the nitrogen classification of maize plots, providing strong support for agricultural zoning management and precise fertilization. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
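
A short PyTorch sketch of the squeeze-and-excitation (SENet) block that the abstract reports inserting into ShuffleNet; the reduction ratio of 16 is a common default and an assumption here.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: global average pooling
        w = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel gates in (0, 1)
        return x * w                     # reweight the feature maps

print(SEBlock(128)(torch.randn(2, 128, 14, 14)).shape)  # (2, 128, 14, 14)
```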

15 pages, 6107 KiB  
Article
YOLOv5-ASFF: A Multistage Strawberry Detection Algorithm Based on Improved YOLOv5
by Yaodi Li, Jianxin Xue, Mingyue Zhang, Junyi Yin, Yang Liu, Xindan Qiao, Decong Zheng and Zezhen Li
Agronomy 2023, 13(7), 1901; https://doi.org/10.3390/agronomy13071901 - 19 Jul 2023
Cited by 4 | Viewed by 1895
Abstract
The smart farm is currently a hot topic in the agricultural industry. Due to the complex field environment, intelligent monitoring models applicable to this environment require high hardware performance, and real-time detection of ripe strawberries on a small automatic picking robot is difficult to realize. This research proposes a real-time multistage strawberry detection algorithm, YOLOv5-ASFF, based on improved YOLOv5. By introducing the ASFF (adaptive spatial feature fusion) module into YOLOv5, the network can adaptively learn the fused spatial weights of strawberry feature maps at each scale so as to fully exploit the image feature information of strawberries. To verify the superiority and availability of YOLOv5-ASFF, a strawberry dataset containing a variety of complex scenarios, including leaf shading, overlapping fruit, and dense fruit, was constructed. The method achieved 91.86% and 88.03% for mAP and F1, respectively, and 98.77% for the AP of mature-stage strawberries, showing strong robustness and generalization ability, better than SSD, YOLOv3, YOLOv4, and YOLOv5s. The YOLOv5-ASFF algorithm can overcome the influence of complex field environments and improve the detection of strawberries under dense distribution and shading conditions, and it can provide technical support for yield estimation monitoring and harvest planning in intelligent strawberry field management. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
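
One common reading of adaptive spatial feature fusion (ASFF) is sketched below: feature maps from each scale are resized to a shared resolution and blended with softmax-normalized, learned spatial weights. The channel count and three-level setup are illustrative assumptions, not the authors' module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFF(nn.Module):
    def __init__(self, channels=256, n_levels=3):
        super().__init__()
        # 1x1 convs predict a single-channel weight map per input level.
        self.weight_convs = nn.ModuleList(
            [nn.Conv2d(channels, 1, 1) for _ in range(n_levels)])

    def forward(self, feats):  # feats: list of (B, C, Hi, Wi) with identical C
        h, w = feats[0].shape[2:]
        resized = [F.interpolate(f, size=(h, w), mode="nearest") for f in feats]
        logits = torch.cat(
            [conv(f) for conv, f in zip(self.weight_convs, resized)], dim=1)
        alphas = torch.softmax(logits, dim=1)  # (B, n_levels, H, W), sums to 1
        return sum(alphas[:, i:i + 1] * resized[i] for i in range(len(resized)))

levels = [torch.randn(1, 256, s, s) for s in (52, 26, 13)]
print(ASFF()(levels).shape)  # fused at the finest resolution: (1, 256, 52, 52)
```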

24 pages, 6605 KiB  
Article
Design of a Tomato Sorting Device Based on the Multisine-FSR Composite Measurement
by Zizhao Yang, Ahmed Amin, Yongnian Zhang, Xiaochan Wang, Guangming Chen and Mahmoud A. Abdelhamid
Agronomy 2023, 13(7), 1778; https://doi.org/10.3390/agronomy13071778 - 30 Jun 2023
Cited by 1 | Viewed by 1936
Abstract
The ripeness of tomatoes is crucial to determining their shelf life and quality. Most current methods for picking and sorting tomatoes take a long time, so this paper designs a tomato sorting device based on force and bioelectrical impedance measurement. A force sensor installed on each of its four fingers can also be used as an impedance measurement electrode. When picking a tomato, the electrical impedance analysis circuit is first connected for pre-grasping: by applying a certain pre-tightening force, the FSR sensor on each end-effector finger is pressed tightly against the tomato, establishing a current pathway. The electrical parameters of the tomato are then measured to determine its maturity, and some of these parameters are used to compensate the force monitoring. A force analysis is then conducted, considering the resistance of the FSR under stress. According to the voltage-division principle of a complex impedance circuit, the voltage signal across the tomato is determined, and the grasping force is obtained in real time from the pre-experiment calibration together with the compensation applied during detection. The bioelectrical parameters of tomatoes can thus not only be used to judge ripeness but also to compensate the force measurement stage, achieving more accurate non-destructive sorting. The experimental results showed that, within 0.6 s of stable grasping, this system could complete tomato ripeness detection, improving overall sorting efficiency and achieving 95% accuracy in identifying ripeness through impedance. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
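
The voltage-division relation the abstract relies on can be made concrete with a small numeric example: for a fruit impedance Z_t in series with a known resistor R_s, the voltage across the fruit is V_t = V_s·Z_t/(Z_t + R_s). The excitation frequency and the parallel-RC stand-in for tomato tissue below are made-up values, not measured data.

```python
import numpy as np

f = 10e3                      # excitation frequency (Hz), assumed
omega = 2 * np.pi * f
R_series = 1_000.0            # known series resistor (ohm), assumed

# Simple parallel RC stand-in for tomato tissue impedance.
R_t, C_t = 4_700.0, 22e-9
Z_t = 1 / (1 / R_t + 1j * omega * C_t)

V_source = 3.3                # excitation amplitude (V)
V_tomato = V_source * Z_t / (Z_t + R_series)  # complex voltage divider
print(f"|Z| = {abs(Z_t):.0f} ohm, phase = {np.degrees(np.angle(Z_t)):.1f} deg")
print(f"|V_tomato| = {abs(V_tomato):.3f} V")
```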

24 pages, 7690 KiB  
Article
Mango Fruit Fly Trap Detection Using Different Wireless Communications
by Federico Hahn, Salvador Valle, Roberto Rendón, Oneyda Oyorzabal and Alondra Astudillo
Agronomy 2023, 13(7), 1736; https://doi.org/10.3390/agronomy13071736 - 28 Jun 2023
Viewed by 1592
Abstract
Fruit flies cause production losses in mango orchards and affect fruit quality. A National Campaign against Fruit Flies (NCFF) evaluates farm status using the fruit flies per trap per day index (FTD). Traps with attractant are installed manually within orchards in Mexico, but counting the flies trapped every week requires excessive numbers of trained personnel. Electronic traps (e-traps) use sensors to monitor fruit fly populations, saving labor and providing the orchard infestation status in real time. The objective of this work was to acquire an image within an e-trap at 17:00 whenever an insect was detected and to binarize the information in real time to count the number of flies. Each e-trap was implemented with a polyethylene PET bottle screwed to a cap containing an ESP32-CAM camera. E-traps across several hectares of mango trees were sampled and their data transmitted through wireless sensor networks (WSNs). This original system uses a star topology network within each hectare, with a long-range LoRa transceiver at the central tower that receives the fly counts from five e-traps and finally transmits the data to the house tower endpoint. Another contribution of this research was the use of a DJI mini2 drone for acquiring the e-trap data; the 8-ha flight took 15 min and 35 s, a period that can be reduced if the drone flies higher. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
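
A minimal OpenCV sketch of the on-trap counting step described above: binarize the e-trap image with Otsu thresholding and count fly-sized connected components. The thresholding scheme and area limits are placeholder assumptions that would need tuning on real trap images.

```python
import cv2

def count_flies(image_path, min_area=15, max_area=400):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Flies appear dark on the bright attractant panel; invert after Otsu.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Label 0 is the background; filter the rest by plausible fly blob area.
    areas = stats[1:, cv2.CC_STAT_AREA]
    return int(((areas >= min_area) & (areas <= max_area)).sum())
```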

24 pages, 12270 KiB  
Article
Realtime Picking Point Decision Algorithm of Trellis Grape for High-Speed Robotic Cut-and-Catch Harvesting
by Zhujie Xu, Jizhan Liu, Jie Wang, Lianjiang Cai, Yucheng Jin, Shengyi Zhao and Binbin Xie
Agronomy 2023, 13(6), 1618; https://doi.org/10.3390/agronomy13061618 - 15 Jun 2023
Cited by 3 | Viewed by 1644
Abstract
For high-speed robotic cut-and-catch harvesting, efficient trellis grape recognition and picking point positioning are crucial. In this study, a new method for the rapid positioning of picking points based on synchronous inference for multiple grape bunches was proposed. Firstly, a three-dimensional region of interest (ROI) for a finite number of grape bunches was constructed according to the "eye to hand" configuration. Then, a feature-enhanced recognition deep learning model called YOLO v4-SE, combined with multi-channel inputs of RGB and depth images, was put forward to identify occluded or overlapping grapes and to synchronously infer picking points above the prediction boxes of the bunches imaged completely within the three-dimensional ROI. Finally, the accuracy of each dimension of the picking points was corrected, and a global continuous picking sequence was planned within the three-dimensional ROI. A recognition experiment in the field showed that YOLO v4-SE has good detection performance on various samples with different interference. A positioning experiment using different numbers of grape bunches from the field demonstrated that the average recognition success rate is 97% and the average positioning success rate is 93.5%; the average recognition time is 0.0864 s and the average positioning time is 0.0842 s. The average positioning errors in the x, y, and z directions are 2.598, 2.012, and 1.378 mm, respectively, and the average Euclidean distance between the true and predicted picking points is 7.69 mm. In field synchronous harvesting experiments with different fruiting densities, the average recognition success rate is 97%, the average positioning success rate is 93.606%, and the average picking success rate is 92.78%. The average picking speed is 6.18 s·bunch⁻¹, which meets the requirements of high-speed cut-and-catch harvesting robots. This method is promising for overcoming the time-consuming harvesting caused by the problematic positioning of grape stems. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

14 pages, 7627 KiB  
Article
Pests Identification of IP102 by YOLOv5 Embedded with the Novel Lightweight Module
by Lijuan Zhang, Cuixing Zhao, Yuncong Feng and Dongming Li
Agronomy 2023, 13(6), 1583; https://doi.org/10.3390/agronomy13061583 - 12 Jun 2023
Cited by 4 | Viewed by 2008
Abstract
The development of the agricultural economy is hindered by various pest-related problems, and most pest detection studies focus on only a single pest category, which is not suitable for practical application scenarios. This paper presents a deep learning algorithm based on YOLOv5 that aims to help agricultural workers efficiently diagnose 102 types of pests. To achieve this, we propose a new lightweight convolutional module called C3M, inspired by the MobileNetV3 network. Compared to the original convolution module C3, C3M occupies less computing memory and yields faster inference, with detection precision improved by 4.6%. In addition, the Global Attention Mechanism (GAM) is introduced into the neck of YOLOv5, further improving the detection capability of the model. The experimental results indicate that the C3M-YOLO algorithm outperforms YOLOv5 on IP102, a public dataset consisting of 102 pest categories. Specifically, the detection precision P is 2.4% higher than that of the original model, mAP@0.75 increased by 1.7%, and the F1-score improved by 1.8%. Furthermore, the mAP@0.5 and mAP@0.75 of the C3M-YOLO algorithm are higher than those of the YOLOX detection model by 5.1% and 6.2%, respectively. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

18 pages, 12324 KiB  
Article
TS-YOLO: An All-Day and Lightweight Tea Canopy Shoots Detection Model
by Zhi Zhang, Yongzong Lu, Yiqiu Zhao, Qingmin Pan, Kuang Jin, Gang Xu and Yongguang Hu
Agronomy 2023, 13(5), 1411; https://doi.org/10.3390/agronomy13051411 - 19 May 2023
Cited by 8 | Viewed by 1500
Abstract
Accurate and rapid detection of tea shoots within the tea canopy is essential for the automatic picking of famous tea. Current detection models suffer from two main issues: low inference speed and difficulty of deployment on movable platforms, which constrain the development of intelligent tea-picking equipment. Furthermore, the detection of tea canopy shoots has so far been limited to natural daylight conditions, with no reported studies on detecting tea shoots under artificial light at night, although an all-day tea-picking platform would significantly improve picking efficiency. In view of these problems, the research objective was to propose an all-day lightweight detection model for tea canopy shoots (TS-YOLO) based on YOLOv4. Firstly, image datasets of tea canopy shoots were collected under low light (6:30–7:30 and 18:30–19:30), medium light (8:00–9:00 and 17:00–18:00), high light (11:00–15:00), and artificial light at night. Then, the feature extraction network of YOLOv4 and the standard convolutions of the entire network were replaced with the lightweight neural network MobileNetV3 and depthwise separable convolutions. Finally, to compensate for the reduced feature extraction ability of the lightweight network, a deformable convolutional layer and coordinate attention modules were added. The results showed that the improved model size was 11.78 M, 18.30% of that of YOLOv4, and the detection speed was improved by 11.68 FPS. The detection accuracy, recall, and AP of tea canopy shoots under different light conditions were 85.35%, 78.42%, and 82.12%, respectively, which were 1.08%, 12.52%, and 8.20% higher than those of MobileNetV3-YOLOv4. The developed lightweight model can effectively and rapidly detect tea canopy shoots under all-day light conditions, which provides the potential to develop an all-day intelligent tea-picking platform. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

15 pages, 3765 KiB  
Article
Origin Identification of Saposhnikovia divaricata by CNN Embedded with the Hierarchical Residual Connection Block
by Dongming Li, Chenglin Yang, Rui Yao and Li Ma
Agronomy 2023, 13(5), 1199; https://doi.org/10.3390/agronomy13051199 - 24 Apr 2023
Cited by 2 | Viewed by 1318
Abstract
This paper proposes a method for recognizing the origin of Saposhnikovia divaricata using the IResNet model to achieve computer vision-based classification. Firstly, we created a small sample dataset and applied data augmentation techniques to enhance its diversity. After that, we introduced the hierarchical residual connection block in the early stage of the original model to expand the perceptual field of the neural network and enhance the extraction of scale features. Meanwhile, a depth-separable convolution operation was adopted in the later stage of the model to replace the conventional convolution operation and further reduce the time cost of the model. The experimental results demonstrate that the improved network model achieved a 5.03% improvement in accuracy compared to the original model while also significantly reducing the number of parameters required for the model. In our experiments, we compared the accuracy of the proposed model with several classical convolutional neural network models, including ResNet50, Resnest50, Res2net50, RepVggNet_B0, and ConvNext_T. The results showed that our proposed model achieved an accuracy of 93.72%, which outperformed ResNet50 (86.68%), Resnest50 (89.38%), Res2net50 (91.83%), RepVggNet_B0 (88.68%), and ConvNext_T (92.18%). Furthermore, our proposed model achieved the highest accuracy among the compared models, with a transmission frame rate of 158.9 fps and an inference time of only 6.29 ms. The research methodology employed in this study has demonstrated the ability to reduce potential errors caused by manual observation, effectively improving the recognition ability of Saposhnikovia divaricata based on existing data. Furthermore, the findings of this study provide valuable reference and support for future efforts to develop lightweight models in this area. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

16 pages, 10184 KiB  
Article
An Information Entropy Masked Vision Transformer (IEM-ViT) Model for Recognition of Tea Diseases
by Jiahong Zhang, Honglie Guo, Jin Guo and Jing Zhang
Agronomy 2023, 13(4), 1156; https://doi.org/10.3390/agronomy13041156 - 19 Apr 2023
Cited by 2 | Viewed by 1195
Abstract
Tea is one of the most popular drinks in the world, and the rapid and accurate recognition of tea diseases is of great significance for taking targeted preventive measures. In this paper, an information entropy masked vision transformer (IEM-ViT) model is proposed for the rapid and accurate recognition of tea diseases. An information entropy weighting (IEW) method is used to calculate the information entropy of each image segment, so that the model can learn the maximum amount of knowledge and information more quickly and accurately. An asymmetric encoder–decoder architecture is used in the masked autoencoder (MAE), where the encoder operates on only the subset of visible patches and the decoder recovers the masked patches, reconstructing the missing pixels for parameter sharing and data augmentation. The experimental results showed that the proposed IEM-ViT achieved an accuracy of 93.78% in recognizing seven types of tea diseases. Compared with currently common image recognition algorithms, including ResNet18, VGG16, and VGG19, the recognition accuracy was improved by nearly 20%. Additionally, compared with six other published tea disease recognition methods, the proposed IEM-ViT model can recognize more types of tea diseases with simultaneously improved accuracy. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
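
The information-entropy weighting idea can be illustrated with a short NumPy sketch: compute the Shannon entropy of each image patch and keep the most informative patches visible for an MAE-style encoder. The patch size and keep ratio are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of a patch's gray-level histogram, in bits."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_visible_patches(img, patch=16, keep_ratio=0.25):
    H, W = img.shape
    coords = [(r, c) for r in range(0, H - patch + 1, patch)
                     for c in range(0, W - patch + 1, patch)]
    ent = [patch_entropy(img[r:r + patch, c:c + patch]) for r, c in coords]
    order = np.argsort(ent)[::-1]               # highest entropy first
    n_keep = max(1, int(len(coords) * keep_ratio))
    return [coords[i] for i in order[:n_keep]]  # patches fed to the encoder

img = (np.random.rand(224, 224) * 255).astype(np.uint8)
print(len(select_visible_patches(img)), "patches kept visible")
```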

22 pages, 9388 KiB  
Article
Spectral Quantitative Analysis and Research of Fusarium Head Blight Infection Degree in Wheat Canopy Visible Areas
by Yanyu Chen, Xiaochan Wang, Xiaolei Zhang, Ye Sun, Haiyan Sun, Dezhi Wang and Xin Xu
Agronomy 2023, 13(3), 933; https://doi.org/10.3390/agronomy13030933 - 21 Mar 2023
Viewed by 1333
Abstract
Obtaining complete and consistent spectral images of wheat ears in the visible areas of in situ wheat canopies poses a significant challenge due to the varying growth posture of wheat. Nevertheless, detecting the presence and degree of wheat Fusarium head blight (FHB) in situ is critical for formulating measures that ensure stable grain production and supply while promoting green development in agriculture. In this study, a spectral quantitative analysis model was developed to evaluate the degree of FHB infection in the visible areas of an in situ wheat canopy, using a dedicated spectral acquisition method. Hyperspectral images were utilized to obtain spectral data from healthy and mildly, moderately, and severely infected wheat ear canopies. The spectral data were preprocessed and characteristic wavelengths were extracted using twelve spectral preprocessing methods and four characteristic wavelength extraction methods. Subsequently, sixty-five spectral quantitative prediction models for the degree of FHB infection were established with the PLSR method, based on the original spectral data, the preprocessed spectral data, the characteristic wavelengths extracted from the original spectra, and the characteristic wavelengths extracted from the preprocessed spectra. Comparative analysis indicated that the MMS + CARS + PLSR model exhibited the best prediction performance and could serve as the spectral quantitative analysis model for evaluating the degree of FHB infection in the visible areas of an in situ wheat canopy. The model used thirty-five extracted characteristic wavelengths, with a modeling-set coefficient of determination (R²) of 0.9490 and a root-mean-square error (RMSE) of 0.2384; the testing-set R² was 0.9312, with an RMSE of 0.2588. The model can facilitate the spectral quantitative analysis of the degree of FHB infection in the visible areas of in situ wheat canopies, thereby aiding in the implementation of China's targeted poverty alleviation and agricultural power strategy. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
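
A bare-bones sketch of the modeling route the abstract describes: preprocess canopy spectra, select characteristic wavelengths, and fit a PLSR model. Here standard normal variate (SNV) preprocessing and a simple correlation filter stand in for the paper's twelve preprocessing methods and CARS selection, and the spectra are synthetic stand-ins.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score, mean_squared_error

def snv(X):
    """Standard normal variate: center and scale each spectrum individually."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Synthetic stand-ins: 120 canopy spectra x 300 bands, graded infection score y.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))
y = 0.7 * X[:, 80] + 0.3 * X[:, 150] + rng.normal(0, 0.1, 120)

Xs = snv(X)
corr = np.abs([np.corrcoef(Xs[:, j], y)[0, 1] for j in range(Xs.shape[1])])
selected = np.argsort(corr)[::-1][:35]  # 35 "characteristic wavelengths"

pls = PLSRegression(n_components=8).fit(Xs[:100, selected], y[:100])
pred = pls.predict(Xs[100:, selected]).ravel()
print(f"R2 = {r2_score(y[100:], pred):.3f}, "
      f"RMSE = {mean_squared_error(y[100:], pred) ** 0.5:.3f}")
```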

18 pages, 8232 KiB  
Article
Estimation of Aboveground Biomass for Winter Wheat at the Later Growth Stage by Combining Digital Texture and Spectral Analysis
by Ling Zheng, Qun Chen, Jianpeng Tao, Yakun Zhang, Yu Lei, Jinling Zhao and Linsheng Huang
Agronomy 2023, 13(3), 865; https://doi.org/10.3390/agronomy13030865 - 16 Mar 2023
Cited by 1 | Viewed by 1356
Abstract
Aboveground biomass (AGB) is an important indicator used to predict crop yield. Traditional spectral features or image textures have been proposed to estimate the AGB of crops, but they perform poorly at high biomass levels. This study therefore evaluated the ability of spectral features, image textures, and their combinations to estimate winter wheat AGB. Spectral features were obtained from the wheat canopy reflectance spectra at 400–1000 nm, including original wavelengths and seven vegetation indices. Effective wavelengths (EWs) were screened using the successive projection algorithm, and the optimal vegetation index was selected by correlation analysis. Image texture features, including texture features and the normalized difference texture index, were extracted using gray-level co-occurrence matrices. Effective variables, including the optimal texture subset (OTEXS) and the optimal normalized difference texture index subset (ONDTIS), were selected by ranking feature importance with the random forest (RF) algorithm. Linear regression (LR), partial least squares regression (PLS), and RF models were established to evaluate the relationship between each calculated feature and AGB. The results demonstrate that the ONDTIS with PLS exhibited better performance in estimating AGB for the post-seedling stage on the validation datasets (R² = 0.75, RMSE = 0.04). Moreover, the combinations of the OTEXS and EWs exhibited the highest prediction accuracy for the seedling stage when based on the PLS model (R² = 0.94, RMSE = 0.01), for the post-seedling stage when based on the LR model (R² = 0.78, RMSE = 0.05), and for all stages when based on the RF model (R² = 0.87, RMSE = 0.05). Hence, the combined use of spectral features and image textures can effectively improve the accuracy of AGB estimation, especially at the post-seedling stage. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
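
The gray-level co-occurrence texture features mentioned above can be extracted with scikit-image as sketched below; the quantization level, offsets, and property list are common defaults, not necessarily the study's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_u8, levels=32):
    # Quantize 8-bit gray values down to `levels` bins to keep the GLCM small.
    q = (gray_u8.astype(np.float64) / 256 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    # Average each property over the distance/angle combinations.
    return {p: float(graycoprops(glcm, p).mean()) for p in props}

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(glcm_features(img))
```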

21 pages, 8363 KiB  
Article
Design of a Virtual Multi-Interaction Operation System for Hand–Eye Coordination of Grape Harvesting Robots
by Jizhan Liu, Jin Liang, Shengyi Zhao, Yingxing Jiang, Jie Wang and Yucheng Jin
Agronomy 2023, 13(3), 829; https://doi.org/10.3390/agronomy13030829 - 12 Mar 2023
Cited by 3 | Viewed by 1692
Abstract
In harvesting operations, simulation verification of hand–eye coordination in a virtual canopy is critical for harvesting robot research. More realistic scenarios, vision-based driving motion, and cross-platform interaction information are needed to achieve such simulations, which is very challenging. Current simulations focus mainly on path planning operations in simplified, consistent scenarios, which falls far short of these requirements. To this end, a new approach of visual-servo multi-interaction simulation in real scenarios is proposed, using a dual-arm grape harvesting robot in the laboratory as an example. To overcome these challenges, a multi-software federation is first proposed to establish communication and the cross-software exchange of image information, coordinate information, and control commands. Then, the fruit recognition and positioning algorithm, the forward and inverse kinematic models, and the simulation model are embedded in OpenCV and MATLAB, respectively, to drive the simulation run of the robot in V-REP, thus realizing the multi-interaction simulation of hand–eye coordination in a virtual trellis vineyard. Finally, the simulation is verified, and the results show that the average running time of the string-picking simulation system is 6.5 s, and the success rate of accurate picking point grasping reached 83.3%. A complex closed loop of "scene–image recognition–grasping" is formed by the processing and transmission of the various information, which can effectively realize the continuous hand–eye coordination multi-interaction simulation of the harvesting robot in a virtual environment. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

15 pages, 2739 KiB  
Article
Edge Device Detection of Tea Leaves with One Bud and Two Leaves Based on ShuffleNetv2-YOLOv5-Lite-E
by Shihao Zhang, Hekai Yang, Chunhua Yang, Wenxia Yuan, Xinghui Li, Xinghua Wang, Yinsong Zhang, Xiaobo Cai, Yubo Sheng, Xiujuan Deng, Wei Huang, Lei Li, Junjie He and Baijuan Wang
Agronomy 2023, 13(2), 577; https://doi.org/10.3390/agronomy13020577 - 17 Feb 2023
Cited by 10 | Viewed by 3321
Abstract
In order to solve the problem of accurate recognition of tea for picking by tea-picking robots, an edge device detection method based on ShuffleNetv2-YOLOv5-Lite-E is proposed in this paper for tea with one bud and two leaves. The original feature extraction network is replaced by removing the Focus layer and using the ShuffleNetv2 algorithm, followed by channel pruning of the YOLOv5 neck and head layers, thus reducing the model size. The results show that the size of the improved generated weight file is 27% of that of the original YOLOv5 model, and the mAP values of ShuffleNetv2-YOLOv5-Lite-E are 97.43% and 94.52% on the PC and the edge device, respectively, which are 1.32% and 1.75% lower than those of the original YOLOv5 model. The detection speeds of ShuffleNetv2-YOLOv5-Lite-E, YOLOv5, YOLOv4, and YOLOv3 were 8.6 fps, 2.7 fps, 3.2 fps, and 3.4 fps, respectively, after importing the models into the edge device, making the improved YOLOv5 3.2 times faster than the original YOLOv5 model. Through this detection method, the size of the original YOLOv5 model is effectively reduced while recognition accuracy is essentially maintained, and the detection speed is significantly improved. This is conducive to the realization of intelligent and accurate picking in future tea gardens, laying a solid foundation for tea-picking robots. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

22 pages, 5888 KiB  
Article
Summer Maize Growth Estimation Based on Near-Surface Multi-Source Data
by Jing Zhao, Fangjiang Pan, Xiao Xiao, Lianbin Hu, Xiaoli Wang, Yu Yan, Shuailing Zhang, Bingquan Tian, Hailin Yu and Yubin Lan
Agronomy 2023, 13(2), 532; https://doi.org/10.3390/agronomy13020532 - 12 Feb 2023
Cited by 8 | Viewed by 2050
Abstract
Rapid and accurate estimation of crop chlorophyll content and the leaf area index (LAI) is crucial for guiding field management and improving crop yields. This paper proposes an accurate monitoring method for LAI and soil plant analytical development (SPAD) values (which are closely related to leaf chlorophyll content and are used here in place of relative chlorophyll content) based on the fusion of ground and aerial multi-source data. Firstly, in 2020 and 2021, unmanned aerial vehicle (UAV) multispectral data, ground hyperspectral data, UAV visible-light data, and environmental accumulated temperature data were collected for multiple growth stages of summer maize. Secondly, the effective plant height (canopy height model (CHM)), effective accumulated temperature (growing degree days (GDD)), canopy vegetation indices (mainly spectral vegetation indices), and canopy hyperspectral features of maize were extracted, and sensitive features were screened by correlation analysis. Then, based on single-source and multi-source data, multiple linear regression (MLR), partial least-squares regression (PLSR), and random forest (RF) regression were used to construct LAI and SPAD inversion models. Finally, the distributions of the LAI and SPAD prescription plots were generated and their trends analyzed. The results were as follows: (1) The correlations of the hyperspectral red-edge position and the first-order differential value in the red edge with LAI and SPAD were all greater than 0.5; the correlation of vegetation indices including a red and a near-infrared band with LAI and SPAD was above 0.75; and the correlation of crop height and effective accumulated temperature with LAI and SPAD was above 0.7. (2) The inversion models based on multi-source data were more effective than those built with single-source data; the RF model with multi-source data fusion achieved the highest accuracy of all models, with testing-set R² values of 0.9315 and 0.7767 and RMSEs of 0.4895 and 2.8387 for the LAI and SPAD models, respectively. (3) The absolute error between the extraction results of each model prescription map and the measured values was small: the error between the predicted and measured values of the LAI prescription map generated by the RF model was less than 0.4895, and the difference between the predicted and measured values of the SPAD prescription map was less than 2.8387. The LAI and SPAD of summer maize first increased and then decreased as the growth period advanced, in line with actual growth conditions. These results indicate that the proposed method can effectively monitor maize growth parameters and provide a scientific basis for summer maize field management. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

16 pages, 6667 KiB  
Article
Branch Interference Sensing and Handling by Tactile Enabled Robotic Apple Harvesting
by Hongyu Zhou, Hanwen Kang, Xing Wang, Wesley Au, Michael Yu Wang and Chao Chen
Agronomy 2023, 13(2), 503; https://doi.org/10.3390/agronomy13020503 - 09 Feb 2023
Cited by 5 | Viewed by 2027
Abstract
In the dynamic and unstructured environments where horticultural crops grow, obstacles and interference frequently occur but are rarely addressed, which poses significant challenges for robotic harvesting. This work proposed a tactile-enabled robotic grasping method that combines deep learning, tactile sensing, and soft robotics. By integrating fin-ray fingers with embedded tactile sensing arrays and customized perception algorithms, the robot gains the ability to sense and handle branch interference during the harvesting process and thus reduce potential mechanical fruit damage. Experimental validation demonstrated an overall grasping status detection success rate of 83.3–87.0% and a promising interference handling method. The proposed grasping method can also be extended to broader robotic grasping applications wherever undesirable foreign object intrusion needs to be addressed. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)

24 pages, 10679 KiB  
Article
Research on Instance Segmentation Algorithm of Greenhouse Sweet Pepper Detection Based on Improved Mask RCNN
by Peichao Cong, Shanda Li, Jiachao Zhou, Kunfeng Lv and Hao Feng
Agronomy 2023, 13(1), 196; https://doi.org/10.3390/agronomy13010196 - 07 Jan 2023
Cited by 13 | Viewed by 2643
Abstract
The fruit quality and yield of sweet peppers can be effectively improved by accurately and efficiently controlling the growth conditions and taking timely measures to manage the planting process dynamically. Deep-learning-based image recognition that accurately segments sweet pepper instances is an important means of achieving these goals. However, the accuracy of existing instance segmentation algorithms is seriously affected by complex scenes, such as changes in ambient light and shade, similarity between the pepper color and the background, fruit overlap, and leaf occlusion. Therefore, this paper proposes an instance segmentation algorithm that integrates the Swin Transformer attention mechanism into the backbone network of a Mask region-based convolutional neural network (Mask RCNN) to enhance its feature extraction ability. In addition, UNet3+ is used to improve the mask head and the segmentation quality of the masks. The experimental results show that the proposed algorithm can effectively segment different categories of sweet peppers under conditions of extreme light, fruit overlap, and leaf occlusion. The detection AP, AR, segmentation AP, and F1 score were 98.1%, 99.4%, 94.8%, and 98.8%, respectively. The average frame rate was 5 FPS, which satisfies the requirement for dynamic monitoring of sweet pepper growth status. These findings provide important theoretical support for the intelligent management of greenhouse crops. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
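For orientation, the sketch below shows the standard torchvision recipe for re-heading an off-the-shelf Mask R-CNN for sweet pepper classes. It does not reproduce the paper's specific modifications (Swin Transformer backbone, UNet3+ mask head); it only illustrates the baseline that such modifications plug into, and the class count is an assumption:

```python
# Minimal sketch: adapt a pretrained Mask R-CNN's box and mask heads to a
# custom class set (standard torchvision fine-tuning pattern).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 3  # background + e.g. ripe / unripe pepper (illustrative)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor with one sized for our classes.
in_feat = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)

# Replace the mask predictor likewise; the paper swaps this stage for UNet3+.
in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
```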
22 pages, 11851 KiB  
Article
Design and Testing of an Intelligent Multi-Functional Seedling Transplanting System
by Shengyi Zhao, Jizhan Liu, Yucheng Jin, Zongchun Bai, Jianlong Liu and Xin Zhou
Agronomy 2022, 12(11), 2683; https://doi.org/10.3390/agronomy12112683 - 28 Oct 2022
Cited by 5 | Viewed by 1875
Abstract
Transplanting is a core part of factory nurseries and a key factor in determining the healthy growth of seedlings. While transplanting itself is a single operational function, the complete process of sorting, transplanting and replanting is complex. This paper proposes an intelligent multi-functional seedling transplanting system in which the sorting, transplanting and replanting functions are achieved with a single machine. Key strategies of dynamic seedling detection during transplanting, and of performing transplanting and replanting within the same tray, realize the integration and miniaturization of the all-in-one machine. Then, through the flat design of the transplanting–replanting mechanism and the construction of a multi-module cooperative control strategy, stable and reliable multi-functional synchronous operation is realized. Finally, an integrated operation experiment shows that the transplanting efficiency of the whole machine is 5000 plants/h and that the qualification rate after replanting is as high as 99.33%, which meets the operational needs of factory nurseries. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
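One way to picture the multi-module cooperation is as a state machine over the sort–transplant–inspect–replant cycle. The sketch below is an illustrative abstraction only; the print calls are hypothetical stand-ins for the real sensing and actuation modules:

```python
# Minimal sketch: the integrated tray cycle as a state machine.
from enum import Enum, auto

class State(Enum):
    SORT = auto()
    TRANSPLANT = auto()
    INSPECT = auto()
    REPLANT = auto()
    DONE = auto()

def run_tray(cells):
    """cells: one bool per tray cell, True if a healthy seedling is detected."""
    state = State.SORT
    while state is not State.DONE:
        if state is State.SORT:
            healthy = [i for i, ok in enumerate(cells) if ok]
            state = State.TRANSPLANT
        elif state is State.TRANSPLANT:
            for i in healthy:
                print(f"transplant cell {i}")   # stands in for the gripper module
            state = State.INSPECT
        elif state is State.INSPECT:
            empties = [i for i, ok in enumerate(cells) if not ok]
            state = State.REPLANT if empties else State.DONE
        elif state is State.REPLANT:
            for i in empties:
                print(f"replant cell {i} from spare tray")  # hypothetical module
            state = State.DONE

run_tray([True, False, True, True])
```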
14 pages, 5144 KiB  
Article
Analysing Airflow Velocity in the Canopy to Improve Droplet Deposition for Air-Assisted Spraying: A Case Study on Pears
by Rongkai Shi, Hao Sun, Wei Qiu, Xiaolan Lv, Fiaz Ahmad, Jiabing Gu, Hongfeng Yu and Zhengwei Zhang
Agronomy 2022, 12(10), 2424; https://doi.org/10.3390/agronomy12102424 - 06 Oct 2022
Cited by 3 | Viewed by 1673
Abstract
The suitability of airflow velocity for air-assisted spraying operations in orchards is mostly evaluated on the basis of the airflow velocity at the canopy inlet and outlet. However, the airflow velocity required to penetrate the inner canopy, which is prone to pests and diseases, is still unclear due to variation in the geometry of plant canopies. In this study, pear trees were selected as an example to characterize how airflow attenuates within the canopy. Furthermore, we examined mist droplet formation in the inner canopy to determine a suitable inner-canopy airflow end velocity (ICAEV) for air-assisted application, and we conducted a field validation test. The results showed that the majority of airflow velocity loss occurred in the middle and outer parts of the canopy: airflow declined rapidly in the 0–0.3 m section and slowly in the 0.3–0.8 m section. When the ICAEV was in the range of 2.70–3.18 m/s, the spraying effect was better: the droplet deposition variation coefficient was 42.25%, compared with 51.25% in the conventional airflow delivery mode, and droplet drift was reduced by 12.59 μg/cm2. The results of this study can help identify a suitable ICAEV for air-assisted spraying in orchards. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
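To make the ICAEV idea concrete, the sketch below inverts an assumed attenuation model to size the sprayer outlet velocity for a target inner-canopy end velocity. The exponential form and the decay coefficient are illustrative assumptions, not the attenuation law measured in the paper:

```python
# Minimal sketch: required outlet velocity for a target ICAEV, assuming
# v(d) = v0 * exp(-k * d) through canopy depth d (model and k are assumed).
import math

def outlet_velocity_for(icaev, depth_m, k=1.2):
    """Invert v(d) = v0 * exp(-k * d) for the outlet velocity v0."""
    return icaev * math.exp(k * depth_m)

# Paper's recommended ICAEV window, evaluated at the 0.8 m canopy depth it reports.
for target in (2.70, 3.18):
    print(f"target {target} m/s at 0.8 m depth -> "
          f"outlet {outlet_velocity_for(target, 0.8):.2f} m/s")
```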
20 pages, 7505 KiB  
Article
Real-Time Localization and Mapping Utilizing Multi-Sensor Fusion and Visual–IMU–Wheel Odometry for Agricultural Robots in Unstructured, Dynamic and GPS-Denied Greenhouse Environments
by Yaxuan Yan, Baohua Zhang, Jun Zhou, Yibo Zhang and Xiao’ang Liu
Agronomy 2022, 12(8), 1740; https://doi.org/10.3390/agronomy12081740 - 23 Jul 2022
Cited by 27 | Viewed by 4868
Abstract
Autonomous navigation in greenhouses requires agricultural robots to localize and generate a globally consistent map of their surroundings in real time. However, accurate and robust localization and mapping are still challenging for agricultural robots due to unstructured, dynamic and GPS-denied environmental conditions. In this study, a real-time localization and mapping system is presented that achieves precise pose estimation and dense three-dimensional (3D) point cloud mapping in complex greenhouses by utilizing multi-sensor fusion and visual–IMU–wheel odometry. Measurements from wheel odometry, an inertial measurement unit (IMU) and a tightly coupled visual–inertial odometry (VIO) are integrated into a loosely coupled framework based on the Extended Kalman Filter (EKF) to obtain a more accurate state estimate of the robot. In the multi-sensor fusion algorithm, the pose estimates from the wheel odometry and IMU serve as predictions, and the localization results from the VIO are used as observations to update the state vector. Simultaneously, a dense 3D map of the greenhouse is reconstructed in real time using a modified ORB-SLAM2. The performance of the proposed system was evaluated in modern standard solar greenhouses under harsh environmental conditions. Taking advantage of complementary measurements from the individual sensors, the method is robust enough to cope with various challenges, as shown by extensive experiments conducted in greenhouses and an outdoor campus environment. The results also show that the proposed framework improves the localization accuracy of the visual–inertial odometry, demonstrating the capability of the approach and its promise for the autonomous navigation of agricultural robots. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
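The loosely coupled scheme can be illustrated with a planar EKF: wheel-odometry/IMU motion drives the prediction, and a VIO pose fix supplies the update. This is a simplified 2D sketch with illustrative noise covariances, not the paper's full estimator:

```python
# Minimal sketch: EKF with odometry/IMU prediction and VIO pose update.
import numpy as np

x = np.zeros(3)                    # state: [x, y, theta]
P = np.eye(3) * 0.1                # state covariance
Q = np.diag([0.02, 0.02, 0.01])    # process noise (odometry/IMU), assumed
R = np.diag([0.05, 0.05, 0.02])    # measurement noise (VIO), assumed

def predict(v, omega, dt):
    """Propagate with wheel-odometry speed v and IMU yaw rate omega."""
    global x, P
    th = x[2]
    x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, omega * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],   # motion-model Jacobian
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0,  1]])
    P = F @ P @ F.T + Q

def update(vio_pose):
    """Correct the state with a VIO pose observation (H = identity)."""
    global x, P
    K = P @ np.linalg.inv(P + R)   # Kalman gain for H = I
    x += K @ (vio_pose - x)
    P = (np.eye(3) - K) @ P

predict(v=0.5, omega=0.1, dt=0.1)
update(np.array([0.05, 0.0, 0.01]))
print(x)
```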
Review

30 pages, 7408 KiB  
Review
Row Detection-Based Navigation and Guidance for Agricultural Robots and Autonomous Vehicles in Row-Crop Fields: Methods and Applications
by Jiayou Shi, Yuhao Bai, Zhihua Diao, Jun Zhou, Xingbo Yao and Baohua Zhang
Agronomy 2023, 13(7), 1780; https://doi.org/10.3390/agronomy13071780 - 30 Jun 2023
Cited by 10 | Viewed by 5037
Abstract
Crop row detection is one of the foundational and pivotal technologies of agricultural robots and autonomous vehicles for navigation, guidance, path planning, and automated farming in row-crop fields. However, due to the complex and dynamic agricultural environment, crop row detection remains a challenging task. Background elements such as weeds, trees, and stones can interfere with crop appearance and increase the difficulty of detection, and detection accuracy is further impacted by different growth stages, environmental conditions, curved rows, and occlusion. Therefore, appropriate sensors and multiple adaptable models are required to achieve high-precision crop row detection. This paper presents a comprehensive review of the methods and applications related to crop row detection for agricultural machinery navigation. Particular attention is paid to the sensors and systems used for crop row detection and to improving their perception and detection capabilities. The advantages and disadvantages of current mainstream crop row detection methods, including various traditional methods and deep learning frameworks, are also discussed and summarized. Additionally, applications to different crop row detection tasks, including irrigation, harvesting, weeding, and spraying, in various agricultural scenarios, such as drylands, paddy fields, orchards, and greenhouses, are reported. Full article
(This article belongs to the Special Issue AI, Sensors and Robotics for Smart Agriculture)
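As one example of the traditional methods such reviews cover, the sketch below runs a classical excess-green (ExG) segmentation followed by a probabilistic Hough transform with OpenCV. The image path and thresholds are placeholders, and real pipelines usually add morphological cleanup and row clustering:

```python
# Minimal sketch: classical crop row detection via ExG + Hough lines.
import cv2
import numpy as np

img = cv2.imread("field.jpg")                     # placeholder field image
b, g, r = cv2.split(img.astype(np.float32) / 255.0)
exg = 2 * g - r - b                               # excess-green highlights plants
mask = ((exg > 0.1) * 255).astype(np.uint8)       # scene-dependent threshold

lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=40)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)  # candidate rows
cv2.imwrite("rows.jpg", img)
```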