

Advances of Computer Vision in Precision Agriculture

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 17177

Special Issue Editors


Dr. Telmo Adão
Guest Editor
Graphic Computation Center (CCG), University of Minho Campus de Azurém, Edifício 14, 4800-058 Guimarães, Portugal
Interests: deep learning in agriculture/forestry; computer vision

Dr. Emanuel Peres
Guest Editor
Department of Engineering, School of Sciences and Technology, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
Interests: remote sensing; precision agriculture; in-field data processing; remote monitoring; UAV; UAS; precision forestry; sensors and data processing; human–computer interfaces; augmented reality; virtual reality; embedded systems

Special Issue Information

Dear Colleagues,

Combining image-based data analytics with precision agriculture, in both short- and long-range modalities, has proven highly useful not only in supporting scientific research in the agricultural sector, but also in providing related professionals with cutting-edge decision-support tools for sustainable yields in farming contexts. Increasingly sophisticated sensors (RGB, hyperspectral, multispectral, etc.) and computer vision algorithms, whether based on classic techniques or on artificial intelligence approaches, have been proposed, contributing to the digitalization and modernization of agriculture toward Agriculture 4.0. Aligned with that scope, this Special Issue welcomes original papers covering all types of image sensors combined with digital image processing and computer vision algorithms, including those based on deep learning, with relevant and innovative proposals applied to smart farming.

The works expected for this Special Issue may include the application of custom-made or off-the-shelf image sensors to produce agriculture-related data for which processing and computer vision algorithms will be proposed and discussed. A wide range of the topics of MDPI's Sensors is covered, including: physical sensors; remote sensors; smart/intelligent sensors; sensor devices; sensor technology and applications; optoelectronic and photonic sensors; optomechanical sensors; the Internet of Things; sensing systems; localization and object tracking; sensing and imaging; image sensors; vision/camera-based sensors; action recognition; machine/deep learning and artificial intelligence in sensing and imaging; and 3D sensing.

Dr. Telmo Adão
Dr. Emanuel Peres
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • smart farming
  • Agriculture 4.0
  • precision agriculture
  • remote sensing
  • close-range sensing
  • RGB sensors
  • hyperspectral sensors
  • multispectral sensors
  • 3D sensors
  • computer vision
  • image data processing
  • deep learning
  • machine learning
  • artificial intelligence
  • image analytics

Published Papers (7 papers)


Research

Jump to: Other

17 pages, 4061 KiB  
Article
Real-Time Recognition and Detection of Bactrocera minax (Diptera: Trypetidae) Grooming Behavior Using Body Region Localization and Improved C3D Network
by Yong Sun, Wei Zhan, Tianyu Dong, Yuheng Guo, Hu Liu, Lianyou Gui and Zhiliang Zhang
Sensors 2023, 23(14), 6442; https://doi.org/10.3390/s23146442 - 16 Jul 2023
Cited by 1 | Viewed by 992
Abstract
Pest management has long been a critical aspect of crop protection. Insect behavior is of great research value as an important indicator for assessing insect characteristics. Currently, insect behavior research is increasingly based on the quantification of behavior. Traditional manual observation and analysis methods can no longer meet the requirements of data volume and observation time. In this paper, we propose a method based on region localization combined with an improved 3D convolutional neural network for six grooming behaviors of Bactrocera minax: head grooming, foreleg grooming, fore-mid leg grooming, mid-hind leg grooming, hind leg grooming, and wing grooming. The overall recognition accuracy reached 93.46%. We compared the results obtained from the detection model with manual observations; the average difference was about 12%. This shows that the model reached a level close to manual observation. Additionally, recognition time using this method is only one-third of that required for manual observation, making it suitable for real-time detection needs. Experimental data demonstrate that this method effectively eliminates the interference caused by the walking behavior of Bactrocera minax, enabling efficient and automated detection of grooming behavior. Consequently, it offers a convenient means of studying pest characteristics in the field of crop protection. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
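The paper's improved C3D pipeline is not reproduced here, but the core idea of classifying short video clips with 3D convolutions can be sketched with a toy PyTorch model (all layer sizes and the six-class head are illustrative assumptions, not the authors' architecture):

```python
import torch
import torch.nn as nn

class TinyC3D(nn.Module):
    """Minimal 3D-conv video classifier in the spirit of C3D (illustrative only)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                  # pool space, keep time
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))                  # global spatio-temporal pool
        self.fc = nn.Linear(32, n_classes)

    def forward(self, clip):                          # clip: (B, C, T, H, W)
        return self.fc(self.features(clip).flatten(1))

clips = torch.randn(2, 3, 8, 64, 64)   # two 8-frame RGB clips
logits = TinyC3D(n_classes=6)(clips)   # one score per grooming behavior
print(logits.shape)                    # torch.Size([2, 6])
```

In the paper, such a network is fed body-region crops rather than whole frames, which is what suppresses the interference from walking behavior.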

19 pages, 9722 KiB  
Article
Developing a Colorimetric Equation and a Colorimetric Model to Create a Smartphone Application That Identifies the Ripening Stage of Lady Finger Bananas in Thailand
by Bhoomin Tanut, Watcharapun Tatomwong and Suwichaya Buachard
Sensors 2023, 23(14), 6387; https://doi.org/10.3390/s23146387 - 13 Jul 2023
Viewed by 1162
Abstract
This article develops a colorimetric equation and a colorimetric model to create a smartphone application that identifies the ripening stage of the lady finger banana (LFB) (Musa AA group ‘Kluai Khai’, กล้วยไข่ “gluay kai” in Thai). The mobile application photographs an LFB, automatically analyzes the color of the banana, and tells the user the number of days until the banana ripens and the number of days the banana will remain edible. The application is called the Automatic Banana Ripeness Indicator (ABRI, pronounced like “Aubrey”), and the rapid analysis that it provides is useful to anyone involved in the storage and distribution of bananas. The colorimetric equation interprets the skin color with the CIE L*a*b* color model in conjunction with the Pythagorean theorem. The colorimetric model has three parts. First, COCO-SSD object detection locates and identifies the banana in the image. Second, the Automatic Power-Law Transformation, developed here, adjusts the illumination to a standard derived from the average of a set of laboratory images. After removing the image background and converting the image to L*a*b*, the data are sent to the colorimetric equation to calculate the ripening stage. Results show that ABRI correctly detects a banana with 91.45% accuracy and the Automatic Power-Law Transformation correctly adjusts the image illumination with 95.72% accuracy. The colorimetric equation correctly identifies the ripening stage of all incoming images. ABRI is thus an accurate and robust tool that quickly, conveniently, and reliably provides the user with any LFB’s ripening stage and the remaining days for consumption. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
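The abstract does not give the colorimetric equation itself, but CIE L*a*b* combined with the Pythagorean theorem suggests a Euclidean color distance (ΔE*ab), and the Automatic Power-Law Transformation builds on the standard gamma transform. A minimal sketch under those assumptions (the reference color values are hypothetical):

```python
import math

def delta_e(lab1, lab2):
    """Euclidean (Pythagorean) distance between two CIE L*a*b* colors (ΔE*ab)."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

def power_law(value, gamma, c=1.0):
    """Power-law (gamma) transformation of a normalized intensity in [0, 1]."""
    return c * value ** gamma

# Distance of a sample banana-skin color to a hypothetical "ripe" reference
ripe_ref = (80.0, 5.0, 60.0)   # illustrative L*, a*, b* values
sample = (77.0, 9.0, 56.0)
print(round(delta_e(ripe_ref, sample), 2))  # 6.4
```

The application would pick the ripeness stage whose reference color minimizes this distance, after the power-law step has normalized the image's illumination.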

18 pages, 5974 KiB  
Article
Fruit Volume and Leaf-Area Determination of Cabbage by a Neural-Network-Based Instance Segmentation for Different Growth Stages
by Nils Lüling, David Reiser, Jonas Straub, Alexander Stana and Hans W. Griepentrog
Sensors 2023, 23(1), 129; https://doi.org/10.3390/s23010129 - 23 Dec 2022
Cited by 3 | Viewed by 2530
Abstract
Fruit volume and leaf area are important indicators for drawing conclusions about the growth condition of a plant. However, current methods of manually measuring morphological plant properties, such as fruit volume and leaf area, are time-consuming and mainly destructive. In this research, an image-based approach for the non-destructive determination of fruit volume and total leaf area over three growth stages is presented for cabbage (Brassica oleracea). For this purpose, a mask-region-based convolutional neural network (Mask R-CNN) with a ResNet-101 backbone was trained to segment the cabbage fruit from the leaves and assign it to the corresponding plant. Combining the segmentation results with depth information obtained through a structure-from-motion approach, the leaf length of single leaves, as well as the fruit volume of individual plants, can be calculated. The results indicated that, even with a single RGB camera, the developed methods provided a mean fruit-volume accuracy of 87% and a mean total-leaf-area accuracy of 90.9% over three growth stages at the individual-plant level. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
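The paper derives volume from structure-from-motion depth; as a much-simplified illustration of the same idea, the sketch below integrates height over a segmentation mask, assuming a flat ground plane and a top-down view (both assumptions are mine, not the authors'):

```python
import numpy as np

def fruit_volume(mask, depth, px_area):
    """Approximate volume by summing (height above ground) x pixel footprint
    over the segmented fruit region.

    mask:    HxW bool array, True where the fruit was segmented
    depth:   HxW array of camera-to-surface distances in metres (nadir view)
    px_area: ground footprint of one pixel in m^2
    """
    ground = np.median(depth[~mask])           # assumed flat ground plane
    height = np.clip(ground - depth, 0, None)  # fruit surface is nearer the camera
    return float((height * mask).sum() * px_area)

mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True                 # toy 8-pixel fruit region
depth = np.full((4, 4), 1.0)       # ground 1.0 m from the camera
depth[mask] = 0.9                  # fruit surface 10 cm above ground
print(fruit_volume(mask, depth, px_area=1e-4))  # ≈ 8 px x 0.1 m x 1e-4 m²
```

In practice the per-pixel footprint follows from camera intrinsics and flying height, and the Mask R-CNN output supplies the mask.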

17 pages, 7638 KiB  
Article
A Multi-Flow Production Line for Sorting of Eggs Using Image Processing
by Fatih Akkoyun, Adem Ozcelik, Ibrahim Arpaci, Ali Erçetin and Sinan Gucluer
Sensors 2023, 23(1), 117; https://doi.org/10.3390/s23010117 - 23 Dec 2022
Cited by 2 | Viewed by 3726
Abstract
In egg production facilities, the classification of eggs is carried out either manually or using sophisticated systems such as load cells. However, there is a need for egg classification to be carried out with faster and cheaper methods. In the agri-food industry, the use of image processing technology is continuously increasing due to its data processing speed and cost-effective solutions. In this study, an image processing approach was used to classify chicken eggs in real time on an industrial roller conveyor line. A color camera acquired images in an illumination cabinet as the eggs moved along the motorized roller conveyor. The system successfully operated for the grading of eggs in the industrial multi-flow production line in real time. Weights estimated after image processing correlated significantly with the actual weights: the coefficient of linear correlation (R²) between measured and actual weights was 0.95. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
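The reported R² of 0.95 is the squared Pearson correlation between image-derived and actual weights; it can be computed from paired measurements as follows (the weight values shown are made up for illustration):

```python
def r_squared(x, y):
    """Squared Pearson correlation between two paired measurement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov * cov / (var_x * var_y)

# Hypothetical image-estimated vs. actual egg weights in grams
estimated = [55.2, 58.1, 60.4, 63.0, 66.2, 69.5]
actual = [55.0, 58.6, 60.1, 62.5, 66.7, 69.0]
print(round(r_squared(estimated, actual), 3))
```

An R² near 1 means the image-based estimates track the true weights closely enough to drive the grading decision on the line.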

25 pages, 7523 KiB  
Article
Improving the Reliability of Scale-Free Image Morphometrics in Applications with Minimally Restrained Livestock Using Projective Geometry and Unsupervised Machine Learning
by Catherine McVey, Daniel Egger and Pablo Pinedo
Sensors 2022, 22(21), 8347; https://doi.org/10.3390/s22218347 - 31 Oct 2022
Cited by 1 | Viewed by 1584
Abstract
Advances in neural networks have garnered growing interest in applications of machine vision in livestock management, but simpler landmark-based approaches suitable for small, early stage exploratory studies still represent a critical stepping stone towards these more sophisticated analyses. While such approaches are well-validated for calibrated images, the practical limitations of such imaging systems restrict their applicability in working farm environments. The aim of this study was to validate novel algorithmic approaches to improving the reliability of scale-free image biometrics acquired from uncalibrated images of minimally restrained livestock. Using a database of 551 facial images acquired from 108 dairy cows, we demonstrate that, using a simple geometric projection-based approach to metric extraction, a priori knowledge may be leveraged to produce more intuitive and reliable morphometric measurements than conventional informationally complete Euclidean distance matrix analysis. Where uncontrolled variations in image annotation, camera position, and animal pose could not be fully controlled through the design of morphometrics, we further demonstrate how modern unsupervised machine learning tools may be used to leverage the systematic error structures created by such lurking variables in order to generate bias correction terms that may subsequently be used to improve the reliability of downstream statistical analyses and dimension reduction. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
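The paper's projection-based morphometrics are more involved, but the basic appeal of scale-free image biometrics can be illustrated with the simplest such measurement, a ratio of two landmark distances, which is unchanged by uniform image scaling (the landmark coordinates below are arbitrary):

```python
import math

def scale_free_ratio(p, q, r, s):
    """Ratio of landmark distances |pq| / |rs| -- invariant to uniform image
    scaling, the simplest form of a scale-free morphometric."""
    return math.dist(p, q) / math.dist(r, s)

# Hypothetical facial landmarks on a cow image (pixel coordinates)
eye_l, eye_r = (120.0, 80.0), (200.0, 82.0)
poll, muzzle = (160.0, 20.0), (162.0, 180.0)
ratio = scale_free_ratio(eye_l, eye_r, poll, muzzle)

# The same photo at half resolution yields the same ratio
half = lambda pt: (pt[0] / 2, pt[1] / 2)
assert math.isclose(ratio, scale_free_ratio(*map(half, (eye_l, eye_r, poll, muzzle))))
```

Such ratios remove the unknown camera-to-animal distance but remain sensitive to pose and annotation error, which is exactly the residual variation the paper's unsupervised bias-correction step targets.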

12 pages, 2210 KiB  
Article
Development of an Intelligent Imaging System for Ripeness Determination of Wild Pistachios
by Kamran Kheiralipour, Mohammad Nadimi and Jitendra Paliwal
Sensors 2022, 22(19), 7134; https://doi.org/10.3390/s22197134 - 21 Sep 2022
Cited by 11 | Viewed by 2107
Abstract
Rapid, non-destructive, and smart assessment of the maturity levels of fruit facilitates their harvesting and handling operations throughout the supply chain. Recent studies have introduced machine vision systems as a promising candidate for non-destructive evaluations of the ripeness levels of various agricultural and forest products. However, the reported models have been fruit-specific and cannot be applied to other fruit. In this regard, the current study aims to evaluate the feasibility of estimating the ripeness levels of wild pistachio fruit using image processing and artificial intelligence techniques. Images of wild pistachios at four ripeness levels were recorded using a digital camera, and 285 color and texture features were extracted from 160 samples. Using the quadratic sequential feature selection method, 16 efficient features were identified and used to estimate the maturity levels of samples. Linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and an artificial neural network (ANN) were employed to classify samples into four ripeness levels: initial unripe, secondary unripe, ripe, and overripe. The developed machine vision system achieved correct classification rates (CCRs) of 93.75%, 97.5%, and 100% for LDA, QDA, and the ANN, respectively. The high accuracy of the developed models confirms the capability of the low-cost visible imaging system in assessing the ripeness of wild pistachios in a non-destructive, automated, and rapid manner. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
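The classification step can be sketched with scikit-learn's LDA and QDA on a synthetic stand-in for the 160-sample, 16-feature table described above (the data and class separation are fabricated; the paper's actual features are color and texture descriptors):

```python
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(0)
# Synthetic stand-in: 160 samples x 16 selected features, labels 0-3
# for the four ripeness levels (initial unripe ... overripe)
X = rng.normal(size=(160, 16))
y = rng.integers(0, 4, size=160)
X += y[:, None] * 0.8      # shift class means so the toy data is separable

for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    model.fit(X, y)
    print(type(model).__name__, round(model.score(X, y), 2))
```

In the paper this is preceded by quadratic sequential feature selection, which reduces the 285 extracted features to the 16 used for classification.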

Other

Jump to: Research

30 pages, 27857 KiB  
Technical Note
UAV-Based Hyperspectral Monitoring Using Push-Broom and Snapshot Sensors: A Multisite Assessment for Precision Viticulture Applications
by Joaquim J. Sousa, Piero Toscano, Alessandro Matese, Salvatore Filippo Di Gennaro, Andrea Berton, Matteo Gatti, Stefano Poni, Luís Pádua, Jonáš Hruška, Raul Morais and Emanuel Peres
Sensors 2022, 22(17), 6574; https://doi.org/10.3390/s22176574 - 31 Aug 2022
Cited by 14 | Viewed by 4022
Abstract
Hyperspectral aerial imagery is becoming increasingly available due to both technology evolution and a somewhat affordable price tag. However, selecting a proper UAV + hyperspectral sensor combination for a specific context is still challenging and lacks proper documental support. While selecting a UAV is relatively straightforward, as it mostly involves sensor compatibility, autonomy, reliability, and cost, a hyperspectral sensor has much more to be considered. This note provides an assessment of two hyperspectral sensors (push-broom and snapshot) regarding practicality and suitability within a precision viticulture context. The aim is to provide researchers, agronomists, winegrowers, and UAV pilots with dependable data collection protocols and methods, enabling faster processing techniques and helping to integrate multiple data sources. Furthermore, both the benefits and drawbacks of using each technology within a precision viticulture context are highlighted. Hyperspectral sensors, UAVs, flight operations, and the processing methodology for each imaging type's datasets are presented through a qualitative and quantitative analysis. For this purpose, four vineyards in two countries were selected as case studies, supporting the extrapolation of both the advantages and the issues related to the two types of hyperspectral sensors in different contexts. Sensor performance was compared by evaluating field operation complexity, processing time, and the qualitative accuracy of the results, namely the quality of the generated hyperspectral mosaics. The results showed overall excellent geometrical quality, with no distortions or overlapping faults for either technology, using the proposed mosaicking and reconstruction process. The multi-site assessment facilitates the qualitative and quantitative exchange of information throughout the UAV hyperspectral community. In addition, all of the major benefits and drawbacks of each hyperspectral sensor regarding its operation and data features are identified. Lastly, the operational complexity in the context of precision agriculture is also presented. Full article
(This article belongs to the Special Issue Advances of Computer Vision in Precision Agriculture)
