Recent Advances in Computer Vision Technology and Its Agricultural Application

A special issue of Machines (ISSN 2075-1702). This special issue belongs to the section "Robotics, Mechatronics and Intelligent Machines".

Deadline for manuscript submissions: closed (30 September 2022) | Viewed by 27915

Special Issue Editors


School of Mechanical Engineering, Xinjiang University, Urumqi 830000, China
Interests: agricultural machinery design; agricultural automation

Biosystems Engineering and Soil Sciences, The University of Tennessee, Knoxville, TN, USA
Interests: sensors; smart agriculture; agricultural robotics; computer vision

Department of Agricultural Engineering, Northwest A & F University, Xi'an 712100, China
Interests: bionic robotics; computer vision; precision agriculture

Special Issue Information

Dear Colleagues,

Computer vision is a technology for detecting, locating, and tracking objects. It has been extensively studied in the industrial and precision agriculture fields, particularly for autonomous driving, surface defect detection, object detection and localization, automatic harvesting robotics, plant phenotyping, and crop yield estimation.

Autonomous driving allows a motor vehicle to sense its environment and operate without human involvement. Uncertainties on the road, such as illumination changes and occlusion, make autonomous driving highly challenging; pedestrian detection and lane line detection, for example, are difficult to achieve.

Surface defect detection distinguishes desired features from anomalies and is an important technology for production automation. It requires algorithms that can detect and analyze defects quickly and robustly, which is highly challenging.

Deep learning techniques can facilitate object detection and localization, but deep learning algorithms require burdensome computation and large amounts of storage, which poses a significant challenge to their implementation. A mobile, fast deep learning algorithm is therefore preferred.

Automatic fruit/vegetable harvesting robots have been researched for several decades, but there is still no such commercial product available. Existing harvesting robots struggle with target fruit/vegetable detection and localization, obstacle detection and localization, collision-free path planning, vision-based servo control, and other problems.

The “plant phenotype” refers to all the measurable features of a plant (e.g., leaf color and shape) and reflects the combined effects of genotype and environment on a plant's measurable characteristics. Measuring plant phenotypes via computer vision is a promising research direction, as it may allow for the accurate detection, segmentation, and reconstruction of plant parts.

This research topic relates to several computer vision applications in industrial and agricultural fields, mainly autonomous driving, surface defect detection, object detection and localization, automatic harvesting robotics, plant phenotyping, and crop yield estimation.

Topics of interest include (but are not limited to):

  • Computer vision-based autonomous driving algorithms that can robustly detect pedestrians, lane lines, and other targets.
  • Generalizable, real-time surface defect detection methods.
  • End-to-end, real-time, deep neural networks that can detect and locate objects simultaneously.
  • Fruit/vegetable detection and localization for automatic harvesting robots, specifically end-to-end, real-time, and precise deep neural networks that can detect target produce in fields.
  • Generalizable methods for segmenting or reconstructing plant parts to support plant phenotyping processes.
  • Crop yield estimation.

We look forward to receiving your contributions.

Dr. Xiangjun Zou
Dr. Hao Gan
Dr. Zhiguo Li
Dr. Yunchao Tang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • precision agriculture
  • computer vision
  • autonomous driving
  • surface defect detection
  • harvesting robot
  • crop yield estimation

Published Papers (9 papers)


Research

13 pages, 4978 KiB  
Article
Identification and Detection of Biological Information on Tiny Biological Targets Based on Subtle Differences
by Siyu Chen, Yunchao Tang, Xiangjun Zou, Hanlin Huo, Kewei Hu, Boran Hu and Yaoqiang Pan
Machines 2022, 10(11), 996; https://doi.org/10.3390/machines10110996 - 30 Oct 2022
Cited by 5 | Viewed by 1380
Abstract
To detect dynamic, tiny biological targets with subtle features more accurately and efficiently, this paper proposes classifying and identifying local contour edge images of biological features across different target types, whose subtle features are highly similar. Pigeons are taken as the study object: female and male pigeons differ little in appearance, so traditional sexing methods require manual observation of the morphology near the anus, chromosome examination, or even molecular biological testing. This paper proposes a compound marker region for extracting sex features; the region correlates strongly with sex differences in pigeons while occupying a small proportion of the image, which reduces computational cost. A dual-weight image fusion feature enhancement algorithm based on edge detection is also proposed: after the color and contour information of the image is extracted, a new feature-enhanced image is fused according to a pair of weights, amplifying the differences between tiny features so that pigeon sex can be detected and identified visually. The results show a detection accuracy of 98% and an F1 score of 0.98; compared with the original dataset without enhancement, accuracy increased by 32% and the F1 score by 0.35. The experiments show that this method achieves accurate visual sex classification of pigeons and provides intelligent decision data for pigeon breeding.
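
The dual-weight fusion step lends itself to a compact illustration. The following Python/OpenCV sketch fuses a color image with its Canny edge map by a weighted sum; the weights and Canny thresholds are illustrative assumptions, not the values used in the paper.

```python
import cv2

def fuse_color_and_edges(bgr, w_color=0.6, w_edge=0.4):
    """Dual-weight fusion of color and contour information (sketch).

    Weights and Canny thresholds are illustrative, not the paper's values.
    """
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                    # contour information
    edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
    # Weighted fusion of the color image and the edge map
    return cv2.addWeighted(bgr, w_color, edges_bgr, w_edge, 0)
```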

17 pages, 8339 KiB  
Article
Real-Time Detection of Eichhornia crassipes Based on Efficient YOLOV5
by Yukun Qian, Yalun Miao, Shuqin Huang, Xi Qiao, Minghui Wang, Yanzhou Li, Liuming Luo, Xiyong Zhao and Long Cao
Machines 2022, 10(9), 754; https://doi.org/10.3390/machines10090754 - 01 Sep 2022
Cited by 5 | Viewed by 2119
Abstract
The rapid propagation of Eichhornia crassipes has a threatening impact on the aquatic environment. For most small water areas with good ecology, daily manual monitoring and salvage require considerable financial and material resources, so unmanned boats have important practical significance for the automatic monitoring and cleaning of Eichhornia crassipes. To ensure that the target can be accurately detected, we address the problems of lightweight model algorithms, such as low accuracy and poor detection of targets with small or unclear features. Taking YOLOV5m version 6.0 as the baseline model and given the computational limits of real-time detection, this paper proposes using EfficientNet-Lite0 as the backbone, adopting the ELU activation function, modifying the pooling mode in SPPF, embedding the SA attention mechanism, and adding the RFB module to the feature fusion network to improve the feature extraction ability of the whole model. The dataset comprises water hyacinth images collected from ponds and lakes in Guangxi and Yunnan and from the China Plant Image Library. The test results show that efficient YOLOV5 reached 87.6% mAP, 7.1% higher than YOLOV5s, with an average detection speed of 62 FPS. An ablation experiment verifies the effectiveness of each module of efficient YOLOV5, and its detection accuracy and model parameters meet the real-time detection requirements of an Eichhornia crassipes unmanned cleaning boat.
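
The activation swap described above is easy to picture. Below is a minimal PyTorch sketch of a Conv-BN-ELU block standing in for YOLOv5's default Conv-BN-SiLU unit; the layer sizes are illustrative, and this is not the authors' exact implementation.

```python
import torch.nn as nn

class ConvELU(nn.Module):
    """Conv-BN-ELU unit: a sketch of swapping YOLOv5's default SiLU
    activation for ELU, as the paper describes. Sizes are illustrative."""

    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ELU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```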

20 pages, 13069 KiB  
Article
Tracking and Counting of Tomato at Different Growth Period Using an Improving YOLO-Deepsort Network for Inspection Robot
by Yuhao Ge, Sen Lin, Yunhe Zhang, Zuolin Li, Hongtai Cheng, Jing Dong, Shanshan Shao, Jin Zhang, Xiangyu Qi and Zedong Wu
Machines 2022, 10(6), 489; https://doi.org/10.3390/machines10060489 - 17 Jun 2022
Cited by 24 | Viewed by 4361
Abstract
To monitor tomato growth periods and predict yield in tomato cultivation, our study proposes a visual object tracking network called YOLO-deepsort to identify and count tomatoes in different growth periods. Based on the YOLOv5s model, our model uses shufflenetv2, combined with the CBAM attention mechanism, to compress the model size at the algorithm level. In the neck of the network, the BiFPN multi-scale fusion structure is used to improve prediction accuracy. Once the detection network outputs a bounding box for a target, a Kalman filter, called the tracker in this paper, predicts the target's location in the next frame; the error between the predicted bounding box and the box output by the detection network is then used to update the filter's parameters, and these steps are repeated to track tomato fruits and flowers. After obtaining the tracking results, we use OpenCV to create a virtual count line and count the targets. These methods achieved competitive results: the mean average precision for flowers, green tomatoes, and red tomatoes was 93.1%, 96.4%, and 97.9%, respectively. Moreover, we demonstrate the model's tracking ability and the counting process by counting tomato flowers. Overall, the YOLO-deepsort model can fulfill the practical requirements of tomato yield forecasting in the greenhouse scene, providing theoretical support for crop growth status detection and yield forecasting.
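
For readers unfamiliar with the tracking loop, a minimal Python/OpenCV sketch follows: a constant-velocity Kalman filter over a box centroid plus a virtual-count-line test. It simplifies the paper's tracker, which filters the full bounding-box state, and the matrices here are generic defaults.

```python
import cv2
import numpy as np

def make_centroid_tracker():
    """Constant-velocity Kalman filter over a box centroid (cx, cy).

    A simplified stand-in: the paper's tracker filters the full
    bounding-box state, and these matrices are generic defaults.
    """
    kf = cv2.KalmanFilter(4, 2)  # state (cx, cy, vx, vy), measurement (cx, cy)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    return kf

def crossed_count_line(prev_cy, cy, line_y):
    """Virtual count line: a track is counted once when its centroid
    crosses the horizontal line y = line_y between consecutive frames."""
    return prev_cy < line_y <= cy or prev_cy > line_y >= cy
```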

9 pages, 1939 KiB  
Communication
An Open-Source Low-Cost Imaging System Plug-In for Pheromone Traps Aiding Remote Insect Pest Population Monitoring in Fruit Crops
by Mark Jacob Schrader, Peter Smytheman, Elizabeth H. Beers and Lav R. Khot
Machines 2022, 10(1), 52; https://doi.org/10.3390/machines10010052 - 10 Jan 2022
Cited by 13 | Viewed by 2362
Abstract
This note describes the development of a plug-in imaging system for pheromone delta traps used in pest population monitoring. The plug-in comprises an RGB imaging sensor integrated with a microcontroller unit and associated hardware for optimized power usage and data capture. The plug-in can be attached to the top of a modified delta trap to capture periodic images of the trap liner (17.8 cm × 17.8 cm). As configured, the captured images are stored on a microSD card with ~0.01 cm2 pixel−1 spatial resolution. The plug-in hardware is configured to conserve power, entering sleep mode when idle. Twenty traps with plug-in units were constructed and evaluated in the 2020 field season for codling moth (Cydia pomonella) population monitoring in a research study. The units reliably captured images at a daily interval over two weeks with a 350 mAh DC power source. The captured images provided the temporal population dynamics of codling moths, which would otherwise require daily manual trap monitoring. The system's build cost is about $33 per unit, and it has potential for scaling to commercial applications through the integration of Internet of Things-enabled technologies.
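
The power-saving duty cycle described above reduces to a capture-then-sleep loop. The Python sketch below illustrates that logic only; `camera` and `storage` are hypothetical interfaces, and the actual plug-in firmware runs on a microcontroller with hardware sleep rather than `time.sleep()`.

```python
import time

CAPTURE_INTERVAL_S = 24 * 3600  # one image per day, as in the field trial

def run(camera, storage):
    """Capture-then-sleep duty cycle (sketch only).

    `camera` and `storage` are hypothetical interfaces standing in for
    the plug-in's imaging sensor and microSD card; the real firmware
    uses hardware sleep, not time.sleep().
    """
    while True:
        frame = camera.capture()  # wake and grab one trap-liner image
        storage.save(frame, time.strftime("%Y%m%d_%H%M%S.jpg"))
        time.sleep(CAPTURE_INTERVAL_S)  # idle in low-power mode
```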

12 pages, 3944 KiB  
Communication
A CNN-Based Method for Counting Grains within a Panicle
by Liang Gong and Shengzhe Fan
Machines 2022, 10(1), 30; https://doi.org/10.3390/machines10010030 - 01 Jan 2022
Cited by 6 | Viewed by 1765
Abstract
The number of grains within a panicle is an important index for rice breeding. Counting manually is laborious and time-consuming and hardly meets the requirements of rapid breeding, so it is necessary to develop an image-based method for automatic counting. However, general image processing methods cannot effectively extract the features of grains within a panicle, resulting in large deviations. The convolutional neural network (CNN) is a powerful tool for analyzing complex images and has been applied to many image-related problems in recent years. To count grains in images both efficiently and accurately, this paper applies a CNN-based method to detect grains; the grains can then be easily counted by locating the connected domains. The final error is within 5%, which confirms the feasibility of the CNN-based method for counting grains within a panicle.
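
The counting step maps directly onto a connected-component pass. A minimal Python/OpenCV sketch, assuming the CNN outputs a binary grain mask:

```python
import cv2

def count_grains(mask):
    """Count grains by locating connected domains in a binary mask.

    `mask` is assumed to be a uint8 image in which the CNN has marked
    grain pixels as nonzero.
    """
    _, binary = cv2.threshold(mask, 0, 255, cv2.THRESH_BINARY)
    n_labels, _ = cv2.connectedComponents(binary)
    return n_labels - 1  # label 0 is the background
```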

18 pages, 4978 KiB  
Article
Grape Berry Detection and Size Measurement Based on Edge Image Processing and Geometric Morphology
by Lufeng Luo, Wentao Liu, Qinghua Lu, Jinhai Wang, Weichang Wen, De Yan and Yunchao Tang
Machines 2021, 9(10), 233; https://doi.org/10.3390/machines9100233 - 13 Oct 2021
Cited by 28 | Viewed by 3566
Abstract
Counting grape berries and measuring their size can provide accurate data for robot picking behavior decision-making, yield estimation, and quality evaluation. During picking, the external environment and the shape of the grapes are highly uncertain, making berry counting and size measurement challenging tasks. Computer vision has made major breakthroughs in this field. Berry detection based on 3D point cloud information can estimate berry number and yield but relies on scanning equipment and is difficult to generalize; detection based on 2D images is an effective way to overcome this limitation. However, traditional algorithms struggle to accurately measure berry size and related parameters, and berry counting still lacks robustness. To address these problems, we propose a grape berry detection method based on edge image processing and geometric morphology. Edge contour search and a corner detection algorithm are introduced to detect the concave points of the berry edge contours extracted by the Canny algorithm and obtain the best contour segments. To correctly recover the edge contour of each berry and reduce erroneous grouping of contour segments, this paper proposes an algorithm for combining contour segments based on a clustering search strategy and rotation direction determination, which correctly reorganizes the segmented contours and enables an accurate count of the berries and an accurate measurement of their size. The experimental results show that our method detects the concave points of the edge contours of different grape varieties with an average accuracy of 87.76%, achieving good edge contour segmentation. The average accuracy of the berry count is 91.42%, which is 4.75% higher than that of the Hough transform. The average error between the measured and actual berry size is 2.30 mm and the maximum error is 5.62 mm, both within a reasonable range. These results demonstrate that the proposed method is robust enough to detect different types of grape berries.
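
Concave-point detection on touching berries can be approximated in a few lines. The Python/OpenCV sketch below uses contour convexity defects as a stand-in for the paper's Canny-plus-corner-detection pipeline; the depth threshold is an illustrative assumption.

```python
import cv2

def concave_points(binary, min_depth=500):
    """Find concave junctions between touching berries (sketch).

    Uses contour convexity defects as a stand-in for the paper's
    Canny-plus-corner-detection pipeline; min_depth (in fixed-point
    1/256 pixel units) is an illustrative threshold.
    """
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    points = []
    for cnt in contours:
        if len(cnt) < 4:
            continue
        hull = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull)
        if defects is None:
            continue
        for start, end, farthest, depth in defects[:, 0]:
            if depth > min_depth:
                points.append(tuple(cnt[farthest][0]))  # (x, y) concave point
    return points
```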

15 pages, 6111 KiB  
Article
Row End Detection and Headland Turning Control for an Autonomous Banana-Picking Robot
by Peichen Huang, Lixue Zhu, Zhigang Zhang and Chenyu Yang
Machines 2021, 9(5), 103; https://doi.org/10.3390/machines9050103 - 18 May 2021
Cited by 14 | Viewed by 2778
Abstract
A row-following system based on machine vision for a picking robot was designed in our previous study. However, test results showed that visual perception could not provide reliable information during headland turning. A complete navigation system for a picking robot working in an orchard must support both accurate row following and headland turning. To fill this gap, a headland turning method for an autonomous picking robot was developed in this paper. Three steps are executed during headland turning. First, the row end is detected based on machine vision. Second, the deviation is further reduced before turning using the designed fast posture adjustment algorithm based on satellite information. Third, a curve path tracking controller is used for turning control. In MATLAB simulations and experimental tests, different controllers were developed and compared with the designed method. The results show that the designed turning method enabled the robot to converge to the path more quickly and remain on it with lower radial errors, leading to reductions in time, space, and deviation during headland turning.
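
The abstract does not specify the curve-path tracking law, so the Python sketch below uses a classic pure-pursuit controller as a stand-in to illustrate the turning-control step; the pose convention and parameters are assumptions.

```python
import math

def pure_pursuit_steer(pose, goal, wheelbase, lookahead):
    """Steering angle toward a look-ahead point on the turning curve.

    A classic pure-pursuit law standing in for the paper's controller,
    whose details are not given in the abstract.
    pose = (x, y, heading in radians); goal = look-ahead point (x, y).
    """
    x, y, theta = pose
    dx, dy = goal[0] - x, goal[1] - y
    # Lateral offset of the look-ahead point in the robot frame
    ly = -math.sin(theta) * dx + math.cos(theta) * dy
    curvature = 2.0 * ly / (lookahead ** 2)
    return math.atan(curvature * wheelbase)  # front-wheel steering angle
```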

17 pages, 6211 KiB  
Article
A Method of Fast Segmentation for Banana Stalk Exploited Lightweight Multi-Feature Fusion Deep Neural Network
by Tianci Chen, Rihong Zhang, Lixue Zhu, Shiang Zhang and Xiaomin Li
Machines 2021, 9(3), 66; https://doi.org/10.3390/machines9030066 - 18 Mar 2021
Cited by 15 | Viewed by 2429
Abstract
In an orchard environment with a complex background and changing light conditions, the banana stalk, fruit, branches, and leaves are very similar in color, so fast and accurate detection and segmentation of the banana stalk are crucial to automatic picking by a banana picking robot. In this paper, a banana stalk segmentation method based on a lightweight multi-feature fusion deep neural network (MFN) is proposed. The proposed network is mainly composed of encoding and decoding networks, in which the sandglass bottleneck design is adopted to alleviate information loss in high dimensions. In the decoding network, dilated convolution kernels of different sizes are used to make the extracted banana stalk features denser. The proposed network is verified experimentally, using detection precision, segmentation accuracy, number of parameters, operational efficiency, and average execution time as evaluation metrics, and is compared with Resnet_Segnet, Mobilenet_Segnet, and several other networks. The experimental results show that, compared to the other networks, the proposed network has significantly fewer parameters, an improved running frame rate, and a shorter average execution time.
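
The multi-rate dilated convolution idea in the decoder can be sketched compactly in PyTorch. Channel counts and dilation rates below are illustrative, not the paper's configuration.

```python
import torch.nn as nn

class DilatedFusion(nn.Module):
    """Parallel dilated convolutions with different rates (sketch).

    Illustrates the decoder idea of using differently sized dilated
    kernels to densify features; channels and rates are illustrative.
    """

    def __init__(self, channels, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        )

    def forward(self, x):
        # Sum the multi-rate responses to enlarge the receptive field
        return sum(branch(x) for branch in self.branches)
```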

Review

22 pages, 8350 KiB  
Review
Computer Vision in Self-Steering Tractors
by Eleni Vrochidou, Dimitrios Oustadakis, Axios Kefalas and George A. Papakostas
Machines 2022, 10(2), 129; https://doi.org/10.3390/machines10020129 - 11 Feb 2022
Cited by 16 | Viewed by 4582
Abstract
Automatic navigation of agricultural machinery is an important aspect of Smart Farming. Intelligent agricultural machinery applications increasingly rely on machine vision algorithms to guarantee enhanced in-field navigation accuracy by precisely locating the crop lines and mapping the navigation routes of vehicles in real time. This work presents an overview of vision-based tractor systems. More specifically, it deals with (1) system architecture, (2) safety of usage, (3) the most commonly faced navigation errors, and (4) the navigation control systems of tractors, and presents (5) state-of-the-art image processing algorithms for in-field navigation route mapping. In recent research, stereovision systems have emerged as superior to monocular systems for real-time in-field navigation, demonstrating higher stability and control accuracy, especially in extensive crops such as cotton, sunflower, and maize. A detailed overview is provided for each topic, with illustrative examples that focus on specific agricultural applications. Several computer vision algorithms based on different optical sensors have been developed for autonomous navigation in structured or semi-structured environments, such as orchards, yet they are affected by illumination variations. Multispectral imaging can overcome these noise limitations and successfully extract navigation paths in orchards by exploiting the contrast between the trees' foliage and the sky background. In summary, this work reviews the current status of self-steering agricultural vehicles and presents basic guidelines for adapting computer vision to autonomous in-field navigation.
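
A representative crop-line extraction pipeline of the kind the review surveys can be sketched briefly in Python/OpenCV: vegetation segmentation with the excess-green (ExG) index followed by a probabilistic Hough transform. All thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def crop_line_candidates(bgr, exg_threshold=20):
    """Candidate crop lines for navigation route mapping (sketch).

    A classic pipeline of the kind the review surveys: excess-green
    (ExG) vegetation segmentation followed by a probabilistic Hough
    transform. All thresholds are illustrative.
    """
    b, g, r = cv2.split(bgr.astype(np.float32))
    exg = 2 * g - r - b  # excess-green vegetation index
    mask = (exg > exg_threshold).astype(np.uint8) * 255
    return cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=20)
```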
