Article

Real-Time AI-Assisted Push-Broom Hyperspectral System for Precision Agriculture

1 Department of Physics and Geology, University of Perugia, Via A. Pascoli, 06123 Perugia, Italy
2 Materials Foundry (IOM-CNR), National Research Council, c/o Department of Physics and Geology, Via A. Pascoli, 06123 Perugia, Italy
3 Consiglio per la Ricerca in Agricoltura e l'Analisi dell'Economia Agraria (CREA)—Centro di Ricerca Ingegneria e Trasformazioni Agroalimentari, Via della Pascolare 16, Monterotondo, 00015 Rome, Italy
4 Department of Agriculture and Forest Sciences (DAFNE), Tuscia University, Via S. Camillo De Lellis, 01100 Viterbo, and Via Angelo Maria Ricci 35a, 02100 Rieti, Italy
* Authors to whom correspondence should be addressed.
Sensors 2024, 24(2), 344; https://doi.org/10.3390/s24020344
Submission received: 30 November 2023 / Revised: 2 January 2024 / Accepted: 4 January 2024 / Published: 6 January 2024

Abstract

In the ever-evolving landscape of modern agriculture, the integration of advanced technologies has become indispensable for optimizing crop management and ensuring sustainable food production. This paper presents the development and implementation of a real-time AI-assisted push-broom hyperspectral system for plant identification. The push-broom hyperspectral technique, coupled with artificial intelligence, offers unprecedented detail and accuracy in crop monitoring. This paper details the design and construction of the spectrometer, including optical assembly and system integration. The real-time acquisition and classification system, utilizing an embedded computing solution, is also described. The calibration and resolution analysis demonstrates the accuracy of the system in capturing spectral data. As a test, the system was applied to the classification of plant leaves. The AI algorithm, based on neural networks, allows for the continuous analysis of hyperspectral data from up to 720 ground positions at 50 fps.

1. Introduction

In the ever-evolving landscape of modern agriculture, the integration of advanced technologies has become indispensable for optimizing crop management and ensuring sustainable food production. One such cutting-edge technology that has gained prominence in precision agriculture is the push-broom hyperspectral technique [1,2,3]. Unlike traditional multispectral imaging systems, push-broom hyperspectral sensors offer a continuous and high-resolution spectral data-set. This characteristic allows for a more comprehensive analysis of the optical reflectance spectrum, enabling the precise identification and characterization of various materials, including plants and their health indicators [4,5,6]. This innovative approach to remote sensing, when coupled with the power of artificial intelligence (AI), holds great promise for enhancing our ability to monitor and manage crops with unprecedented detail and accuracy [7,8,9,10,11].
Thanks to the high spectral resolution employed, this analysis can provide additional sensitivity for agricultural applications. Pereira et al. demonstrated the feasibility of detecting early bacterial disease onset on tomato using hyperspectral point measurements [12]. Gold et al. proved the efficacy of hyperspectral measurements for the detection of presymptomatic late blight and early blight in potato [13]. Multispectral devices for field applications are nowadays commercially sold all over the world [14,15,16]. However, the hyperspectral approach is often used for research purposes, aerial/UAV mapping, or proximal mapping, rather than for real-time applications. This is mainly due to the heavy computational power required to elaborate the huge amount of data produced by imaging sensors. In this respect, AI is going to have a huge technological impact on the agrifood and forestry sectors, due to its ability to extract synthetic information from a large amount of collected data [17]. As a result, it allows us to face old problems from a new and effective perspective [18,19,20]. In particular, some authors have used AI algorithms for the extraction of synthetic indices [21,22] and for the early detection of biotic and abiotic stress in rocket leaves [23]. Moreover, AI can push hyperspectral analysis towards real-time applications when combined with open-source and commercial spectrometers [24,25,26].
The present work aims to develop and implement a real-time AI-assisted push-broom hyperspectral system to be used on a UGV (unmanned ground vehicle) featuring an innovative ultra-wideband autonomous navigation system. The system is designed to produce a real-time output signal used by an actuator to perform an action such as the site-specific application of a control treatment in a greenhouse or open field. We start from a meticulous analysis of the resolution in both the spectral and spatial domains, comparing the data generated by this hyperspectral system with those of a conventional commercial point spectrometer. As a test-bench, we then use hyperspectral data collected from lettuce and arugula leaves to train a neural network able to classify the plant species based on their spectral characteristics. The developed system showed the ability to classify the plant species used for the test with high accuracy (i.e., 0.996) at a high frame rate.

2. Materials and Methods

2.1. Push-Broom Spectrometer Design

2.1.1. Optical Assembly

In a line spectrograph, the main elements are (1) the collection optics, which determine the investigated region, (2) the dispersing element responsible for the separation of the spectral features, and (3) the recording device, with suitable speed and sensitivity. The features of the different elements have to be chosen in order to satisfy the application requirements. Specifically, the real-world application that informed the design of the system was monitoring crops cultivated in 1 m wide strips, with the optical device positioned on an unmanned ground vehicle (UGV) moving longitudinally along the strip. Ensuring a spatial resolution of at least 1 cm² is imperative to enable the discernment of weeds or diseases.
In the following, we detail the main elements of the push-broom spectrometer, as sketched in Figure 1:
(1)
A wide-angle (81°) objective, TTArtisan APS-C 17 mm F1.4 [27] (TTArtisan Tech Co., Limited, Shenzhen, China), acting as collecting lens L1, focuses the incoming light on a 20 mm long and 200 μm wide slit, 3D-printed in black PLA. Considering the objective focal length (f = 26 mm, considering the crop factor), at a distance of 1.2 m the slit selects on the soil a line which is about 1 cm wide and 1 m long. It should be noted that in such conditions the depth of field is about 25 cm, sufficient to accommodate the different heights of plants. The collimating lens L2, f = 75 mm [28] (Thorlabs Inc., Newton, NJ, USA), then collimates the light emerging from the slit toward the prism.
(2)
An F2 equilateral prism [29] (Thorlabs Inc., Newton, NJ, USA) was chosen for dispersing the collected light. For this application, the prism presents an advantageous alternative to a grating by offering simplicity and robustness, important features for a setup mounted on a ground vehicle moving on rough terrain, while also avoiding the complexities associated with higher diffraction orders. The light is dispersed by the prism in a direction perpendicular to the slit length, so that, after the prism, the vertical angle of the light rays with respect to the optical axis depends on the position on the soil, while the horizontal angle depends (mainly) on the wavelength.
(3)
The re-imaging lens L3, f = 25 mm [30] (Edmund Optics Inc., Barrington, NJ, USA), focuses the parallel light rays on the detector so that the horizontal coordinate on the sensor depends on the wavelength, while the vertical coordinate depends on the position on the soil. The two lenses, L2 and L3, are in a telescopic configuration with a magnification factor equal to the ratio of the focal distances (M = 1/3). The sensor is the monochrome camera Allied Vision Alvium 1800 U-040m [31] (Allied Vision, Stadtroda, Germany). It satisfies the requirements of continuous acquisition and real-time analysis (max. frame rate at full resolution, 495 fps), together with the needed spectral and spatial resolution (728 × 544 px). In fact, at 50 fps, considering a UGV speed of 0.5 km/h (i.e., ∼14 cm/s), consecutive snapshots differ by less than 3 mm, enough to measure the changes in different leaves. Moreover, the number of pixels allows for a nominal spatial resolution of 0.16 cm/px, with an average nominal spectral resolution in the sensitivity region of the detector (300–1000 nm) lower than 2 nm/px.
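The nominal figures quoted above can be cross-checked with a quick thin-lens calculation. The following Python sketch is purely illustrative and is not part of the original analysis; the thin-lens approximation, the assignment of the 544-pixel axis to the spatial (slit-length) direction, and the ∼14 cm/s ground speed quoted in the text are assumptions.

```python
# Back-of-the-envelope check of the nominal optical figures quoted above
# (illustrative sketch; axis assignment of the 728 x 544 px sensor is assumed).

f_mm = 26.0                 # objective focal length including the crop factor
distance_mm = 1200.0        # working distance
slit_len_mm, slit_width_mm = 20.0, 0.2
spatial_px = 544            # assumed spatial (slit-length) axis of the sensor

m = f_mm / (distance_mm - f_mm)          # |magnification| of the objective
line_len_cm = slit_len_mm / m / 10       # imaged soil line length
line_width_cm = slit_width_mm / m / 10   # imaged soil line width

speed_cm_s = 14.0           # UGV ground speed used in the text (~0.5 km/h)
fps = 50
shift_mm = speed_cm_s / fps * 10         # ground shift between consecutive frames

print(f"soil line: ~{line_len_cm / 100:.2f} m long, ~{line_width_cm:.1f} cm wide")
print(f"spatial sampling: ~{line_len_cm / spatial_px:.3f} cm/px")
print(f"shift per frame at 50 fps: ~{shift_mm:.1f} mm")
```

Run as-is, the sketch returns a soil line of roughly 0.9 m × 0.9 cm, a sampling of about 0.17 cm/px, and a per-frame shift below 3 mm, in line with the figures above.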
The final design was refined using 3DOptix (https://www.3doptix.com/, accessed on 19 June 2023) [32] (3DOptix, Rehovot, Israel). Optical simulation results are reported in Figure 2.
To investigate the performance of the spectrometer, we simulated three light sources lying on a line along the scanned dimension. The light is focused by the collecting lens on the slit and then collimated and dispersed by the prism into its color components. The re-imaging lens focuses the light on the detector. The output of the detector is pictured in the top-left corner of Figure 2. Each original point light source corresponds to a line on the detector, along which the light is dispersed into its frequency components. We can observe that the hyperspectral image is subject to a distortion, where the wavelength positions are not constant across the scanned dimension. This distortion depends on the arrival angles on the detector along the scanned dimension and can be easily corrected by remapping the hyperspectral image.

2.1.2. 3D Printing and Machining

The system support and the enclosure were designed using the CAD software SolidWorks 2017, as shown in Figure 3. The position of every optical element (objective, slit, prism, lenses, and camera) is fixed into a base following the optical path.
The base is attached over a larger base connected to an enclosure designed to be resistant to humidity and water splashes. Figure 4 shows the photos of the two prototypes of the spectrometer, one 3D-printed (left) and the other machined (right). The 3D-printed version was realized in tough PLA, printed with an Ultimaker 3. The second prototype was realized in aluminum by CNC machining. The 3D-printed version is more convenient to fabricate and is lightweight; however, at a high frame rate, it suffers from overheating and misalignment due to the thermal deformation of PLA. Indeed, heat production from the camera sensor is proportional to the camera frame rate; thus, for high-throughput applications, versions 3D-printed in thermally resistant polymers or machined in aluminum are preferable. In the present case, we chose to use the aluminum machined prototype, which is heavier but also more robust and better suited for a UGV moving on rough terrain.

2.2. Real-Time Acquisition and Classification System

2.2.1. Acquisition System

The custom-built real-time acquisition and classification system used for this project was developed with the single-board computer Raspberry Pi 4 Model B [33]. The Raspberry Pi module encompasses features such as a Micro SD port for external storage access, Bluetooth, wireless LAN, USB ports, and GPIO pins for external communication. The detector was connected to the Raspberry module through a USB 3.0 port in order to support its maximum frame rate of 495 frames per second, with eight bits per pixel (Mono8), at transmission speeds higher than 250 MByte/s. The system runs the Linux-based Raspberry Pi OS (Bullseye) distribution. To access data from the camera, we used the Vimba SDK v6.0 for Linux ARMv8 64-bit [34] (Allied Vision, Stadtroda, Germany), which provides APIs for C, C++, .NET, and Python. Each acquired hyperspectral image was converted into a NumPy array and then remapped with OpenCV [35] to correct the distortions introduced by the dispersive element of the optical train. The wireless LAN module of the Raspberry Pi was used to remotely communicate the classification results using Apache Kafka [36], which provides unified, high-throughput, low-latency, real-time data feeds.
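To make the per-frame data flow concrete, the following is a minimal sketch of the acquisition pipeline described above. It is illustrative only: the Vimba SDK frame grab is replaced by a placeholder returning synthetic data, the remap tables are identity maps so the snippet can run stand-alone, and the broker address and topic name are hypothetical (a reachable Kafka broker is needed for the final send).

```python
# Sketch of the per-frame pipeline: grab a Mono8 frame, undo the prism-induced
# distortion with a precomputed OpenCV remap, and publish a result over Kafka.
import json
import numpy as np
import cv2
from kafka import KafkaProducer   # kafka-python client

H, W = 544, 728                   # sensor resolution, 8 bits per pixel (Mono8)

# In the real system these float32 maps encode the distortion correction;
# identity maps are used here so the sketch runs without calibration files.
map_x, map_y = np.meshgrid(np.arange(W, dtype=np.float32),
                           np.arange(H, dtype=np.float32))

producer = KafkaProducer(bootstrap_servers="broker:9092")   # hypothetical broker

def acquire_frame() -> np.ndarray:
    """Placeholder for a Vimba SDK frame grab returning an H x W uint8 array."""
    return np.random.randint(0, 256, (H, W), dtype=np.uint8)

for _ in range(10):                                          # real system: endless loop
    raw = acquire_frame()
    line = cv2.remap(raw, map_x, map_y, cv2.INTER_LINEAR)    # distortion correction
    msg = json.dumps({"mean_counts": float(line.mean())}).encode("utf-8")
    producer.send("hyperspectral-lines", msg)                # downstream real-time feed
```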

2.2.2. Data-Set

For training and classification purposes, we set up a data-set of reflectance spectra from two different plant species: lettuce and arugula. The reflectance spectra were recorded using the designed spectrometer. We placed several leaves from each plant along the scanned direction. The plant leaves were placed on a black surface with a white reference panel. The images were captured in natural, environmental light conditions. Reflectance was automatically calculated for each point along the scanned dimension by dividing, point by point, the spectrum acquired on the leaves by that acquired on the white panel, which was considered to have a constant reflectance equal to 1. This allowed us to take into account both the variability of the natural illumination and the spectral responsivity of the whole system. Due to the absence of sharp peaks in the reflectance spectra (see Section 3.2) and to reduce the noise in the spectrum, the number of spectral features was reduced to 50, each feature comprising a range of about 10 nm in wavelength. Moving the spectrometer along the motion direction, we acquired a total of 296,862 spectra, 224,867 from lettuce and 71,995 from arugula. For the training procedure, the data-set was split into random train and test subsets in the proportions 0.7 and 0.3, respectively. The samples in the data-set are reported in Table 1.
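A minimal sketch of the reflectance normalization and spectral binning steps described above is given below; the array names, shapes, and synthetic data are assumptions, not the recorded data-set.

```python
# Sketch of per-position reflectance normalisation and spectral binning.
import numpy as np

N_POS, N_PIX, N_FEAT = 720, 728, 50       # ground positions, spectral pixels, features

rng = np.random.default_rng(0)
leaf_counts = rng.random((N_POS, N_PIX))          # raw counts on the leaves (synthetic)
white_counts = rng.random((N_POS, N_PIX)) + 1.0   # counts on the white reference panel

# Reflectance: point-by-point ratio to the white panel (assumed reflectance = 1),
# which also absorbs illumination changes and the system's spectral responsivity.
reflectance = leaf_counts / white_counts

# Reduce to 50 features, each averaging a block of spectral pixels (~10 nm wide).
edges = np.linspace(0, N_PIX, N_FEAT + 1).astype(int)
features = np.stack([reflectance[:, a:b].mean(axis=1)
                     for a, b in zip(edges[:-1], edges[1:])], axis=1)
print(features.shape)   # (720, 50): one 50-feature spectrum per ground position
```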

2.2.3. Training and Classification

Training and classification were performed with scripts in Python, version 3.9, using the scikit-learn library [37], version 1.2.2. Hyperspectral data from the detector were split into single spectra, each one relative to a different observed region of space. The classification problem can be solved with different techniques, such as a decision tree or a neural network. A preliminary test using decision trees showed good classification accuracy, but the generated structure was overly complex, with the risk of losing generality. The model selected for the classification tests is the multilayer perceptron classifier (MLPClassifier). We tested several architectures, with different numbers of hidden layers and nodes per layer. With the aim of maintaining good accuracy in the classification and low computing complexity, we chose an architecture with one hidden layer consisting of 25 nodes. The neural network architecture schematic of the used classifier is reported in Figure 5. The output y of each node is given by
y = g\left( \sum_{i=0}^{d} w_i x_i + \mathrm{bias} \right)
where $w_i$ is the weight between layer $i$ and layer $i+1$, $x_i$ is the $i$th input of the node, and bias is the value added to layer $i+1$. The function $g(\cdot)$ is a nonlinear activation function. The rectified linear unit function (relu) was used as the activation function for each node in the hidden layer, while the logistic sigmoid function (logistic) was used for the output layer. The weight optimization was performed with the stochastic gradient-based optimizer (adam) [38], with a maximum number of iterations equal to 500. The accuracy of the classification was validated by computing the confusion matrix, along with the main classification metrics (precision, recall, and F-measure). Once trained and tested, the classifier was validated by implementing a classification function for the real-time hyperspectral data in a scenario with mixed plant species.
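The following sketch reproduces the training and evaluation pipeline with the scikit-learn components named above (MLPClassifier with one 25-node hidden layer, relu activation, adam solver, max_iter = 500, and a 0.7/0.3 split). The placeholder arrays X and y stand in for the real 50-feature spectra and species labels.

```python
# Sketch of the training/evaluation pipeline with scikit-learn (illustrative data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix, classification_report

rng = np.random.default_rng(0)
X = rng.random((1000, 50))            # placeholder for the 296,862 real spectra
y = rng.integers(0, 2, size=1000)     # 0 = lettuce, 1 = arugula (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, test_size=0.3, random_state=0
)

clf = MLPClassifier(
    hidden_layer_sizes=(25,),   # one hidden layer, 25 nodes
    activation="relu",          # hidden-layer activation (output is logistic for binary)
    solver="adam",              # stochastic gradient-based optimizer
    max_iter=500,
)
clf.fit(X_train, y_train)

y_pred = clf.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=["lettuce", "arugula"]))
```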

3. Results

3.1. Calibration and Resolution Analysis

The wavelength-to-pixel calibration in a spectrometer is a crucial process that establishes a precise correspondence between the detected wavelengths of light and the corresponding pixels in the sensor of the spectrometer. This calibration ensures accurate and reliable spectral measurements by aligning the pixel positions with specific wavelengths of incoming light. Typically achieved through the use of known spectral lines or standard calibration sources, this calibration process compensates for any variations or distortions in the optical system. We performed the calibration procedure using the emission lines of a fluorescent lamp, three laser diodes (red, green, and blue), and an IR diode as the known sources. All these sources were recorded both with the commercial spectrometer and with the developed spectrometer, extracting the peak position in wavelength and pixel, respectively. The obtained experimental wavelength-to-pixel relationship is presented in Figure 6.
The wavelength-to-pixel relationship depends on the exit angle of the prism, $\theta_{\mathrm{out}}$, which is a function of the wavelength-dependent refractive index $n(\lambda)$ as
\theta_{\mathrm{out}} = \theta_0 + \arcsin\left[ n(\lambda) \sin\left( \alpha - \arcsin\left( \frac{\sin\theta_0}{n(\lambda)} \right) \right) \right].
Considering a transparent medium, the relationship between the refractive index and the wavelength can be described empirically using Sellmeier's equation:
n^2(\lambda) = 1 + \sum_i \frac{B_i \lambda^2}{\lambda^2 - C_i}
where $B_i$ and $C_i$ are experimentally determined Sellmeier coefficients. Simplifying Sellmeier's equation to a one-term form, we obtain
n^2(\lambda) = A + \frac{B \lambda^2}{\lambda^2 - C}.
Linearizing Equation (2) around the refractive index at the yellow wavelength and combining it with Equation (4), we obtain the wavelength-to-pixel relation:
\mathrm{pixel} = \left( A + \frac{B \lambda^2}{\lambda^2 - C} \right)^{1/2} + D.
The obtained relation, fitted with experimental data, is plotted as a solid line in Figure 6, showing a good agreement between the data and the model.
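As an illustration of the fitting procedure, the sketch below fits the model of Equation (5) with scipy.optimize.curve_fit; the calibration points and parameter values are synthetic demo values, not the ones measured here.

```python
# Sketch of fitting the wavelength-to-pixel model of Equation (5) (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def wavelength_to_pixel(lam, A, B, C, D):
    """Equation (5): pixel = sqrt(A + B*lam^2 / (lam^2 - C)) + D."""
    lam2 = lam ** 2
    return np.sqrt(A + B * lam2 / (lam2 - C)) + D

# Synthetic 'measured' peaks (fluorescent lamp lines, laser diodes, IR diode).
lam_nm = np.array([405., 436., 488., 532., 546., 611., 650., 850.])
true_p = (0.0, 3.0e4, 1.5e5, 0.0)        # arbitrary demo parameters
pixel = wavelength_to_pixel(lam_nm, *true_p) \
        + np.random.default_rng(1).normal(0.0, 0.5, lam_nm.size)

p0 = (10.0, 1.0e4, 1.0e5, 0.0)
bounds = ([0.0, 0.0, 0.0, -np.inf], [np.inf, np.inf, 1.6e5, np.inf])
popt, _ = curve_fit(wavelength_to_pixel, lam_nm, pixel, p0=p0, bounds=bounds)
print("fitted A, B, C, D:", popt)
```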
In the following, we outline the procedures employed to validate the calibration of the system along both the frequency and spatial axes, also presenting the achieved spectral and spatial resolutions. The frequency response of the spectrometer was tested, measuring the spectrum of a fluorescent lamp, selected for its distinct spectral features.
The spectrum of the emitted light was recorded utilizing the presented system; the corresponding data are depicted in Figure 7 as a red line. As a comparative benchmark, the same measurement was performed using a commercial spectrometer, the Avantes AvaSpec-ULS2048-USB2-UA-50 (Avantes B.V., Apeldoorn, The Netherlands), working in the UV-VIS-NIR range. The resulting data are shown in the same figure as a black line. The excellent agreement in the peak positions of the two spectra substantiates the accuracy of the wavelength calibration of the custom-made system. Notably, the discernible differences in the peak widths reflect the different frequency resolutions of the two spectroscopic systems. To estimate the frequency resolution, Δλ, of our setup, we used the width of the sharp peak in the emission spectrum located at about 540 nm. The value so obtained, less than 20 nm, can be considered a good estimate of Δλ. This value perfectly meets the requirements for our specific application, ensuring the system's appropriateness for the designated scientific objectives, as the reflectance spectra of the leaves exhibit broad features rather than sharp peaks.
The contrast and the spatial resolution of the custom-made system were determined by analyzing the images of line patterns and performing a quantitative analysis on the obtained measurements. The black-and-white line pattern chosen for this purpose is reported in Figure 8a. It presents successive groups of lines of varying spatial frequency, in order to cover a range of spatial resolutions. The identification of the single lines in the captured image can be assessed from the intensity profile measured using the instrument and reported in Figure 8b. While thicker and more widely spaced lines are well resolved, as the lines converge and their thickness diminishes, the system's ability to distinguish them separately decreases. In particular, when the size of the single lines falls below 0.06 cm, as in the last line pattern, a single intensity peak is measured. The contrast parameter defines the system's ability to distinguish the separate lines of a given pattern. It is defined as
C = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}
and it was evaluated for the different groups of lines. The obtained values are reported in Figure 8c. The resulting spatial resolution estimated at C = 0.5 corresponds to ∼0.5 cm.
Moreover, the evaluation of the linearity of the spatial sampling involves the examination of the distances in pixels between two consecutive maxima of the intensity measured in Figure 8b. The corresponding data are presented in Figure 8d alongside the linear fit. The noteworthy agreement underscores that the optical components selected for the system construction show no measurable optical aberrations. It is worth noting that, in accordance with the theoretical expectations based on the arrangement of the chosen optical components, a single pixel of the camera corresponds to 0.1 mm in real space at 1 m.
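A compact sketch of the contrast and linearity analysis is given below; the sinusoidal intensity profile is synthetic and stands in for one group of lines of the test pattern.

```python
# Sketch of the contrast (Equation (6)) and peak-spacing (linearity) analysis.
import numpy as np
from scipy.signal import find_peaks

x = np.arange(200)
profile = 0.5 + 0.3 * np.cos(2 * np.pi * x / 24)   # synthetic line-group profile

peaks, _ = find_peaks(profile)          # maxima -> I_max and peak spacing
troughs, _ = find_peaks(-profile)       # minima -> I_min
i_max, i_min = profile[peaks].mean(), profile[troughs].mean()

C = (i_max - i_min) / (i_max + i_min)   # Equation (6); resolution criterion: C = 0.5
spacing_px = np.diff(peaks)             # distances between successive maxima (Fig. 8d)
print(f"C = {C:.2f}, mean peak spacing = {spacing_px.mean():.1f} px")
```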

3.2. Plant Classification Training, Tests, and Validation

As a stringent test for our system, we selected leaves from lettuce and arugula. In fact, while these leaves are easily recognizable from their shape, discriminating them only by their reflectance spectrum is a challenging task.
In Figure 9, we present the typical normalized reflectance spectra of lettuce (left) and arugula (right), captured at various angles and on different parts of the leaf. Each spectrum is depicted with a different color in order to make them distinguishable. As expected, the majority of the visible-range reflectance takes place in the green region for both species, exhibiting similar characteristics, with the primary distinction lying in the amplitude of the peak value within this green range. Nevertheless, relying solely on this feature proves inadequate for species discrimination due to the significant variation and overlap in this particular value between the two species. To address such complexities, we employed an artificial intelligence system based on neural networks.
The neural network architecture was trained on the data-set presented earlier. The data-set was split into random train and test subsets in the proportion 0.7 and 0.3, respectively, for evaluating the training procedure. Figure 10 reports the confusion matrix from the test set, which shows an accuracy of 0.996.
In Table 2, precision, recall, and F-measure are reported for each class. From the recall column, since we are dealing with a binary classifier, we can compute the sensitivity and specificity, which are both close to one.
The validation of the trained model was performed on real-time data from mixed leaves of lettuce and arugula, as pictured in Figure 11 (top image), where the region investigated using the push-broom spectrometer is highlighted; the scanned portion of the image is shown in color. An example of the output classification is reported in the bottom part of Figure 11, where different colors represent the classification outcome (green for lettuce, red for arugula). Before classification, we applied a filter to the spectra to skip those relative to the ground, which exhibit a low integrated diffuse reflectance. These regions are reported in gray in the classification output. As evident from the validation results, in some cases the leaves are misclassified. This is probably due to the proximity of leaves of different species, whose spectra are partially contaminated by light reflected from the whiter part of the adjacent leaf.
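The per-line validation step can be sketched as follows; the ground-rejection threshold and color codes are assumptions, and `clf` and `features` refer to the earlier training and binning sketches.

```python
# Sketch of the validation-time classification: skip low-reflectance (ground)
# spectra, classify the rest, and map predictions to display colors.
import numpy as np

GROUND_THRESHOLD = 5.0                  # arbitrary integrated-reflectance cut-off
COLOR = {0: "green", 1: "red"}          # 0 = lettuce, 1 = arugula

def classify_line(features: np.ndarray, clf) -> list:
    """Classify one push-broom line of binned spectra, shape (n_positions, 50)."""
    integrated = features.sum(axis=1)               # integrated diffuse reflectance
    is_plant = integrated >= GROUND_THRESHOLD
    labels = np.full(features.shape[0], "gray", dtype=object)
    if is_plant.any():
        preds = clf.predict(features[is_plant])     # MLP prediction, plant pixels only
        labels[is_plant] = [COLOR[int(p)] for p in preds]
    return labels.tolist()
```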

4. Discussion

As we have seen in the spectra used for the analysis, the reflectance spectrum of plant leaves exhibits significant variability, even within a confined spatial area. This variation is influenced by various factors. Some of these factors depend on the intrinsic features of the plant, including the pigments of the leaf, cell structure, water content, and biochemical composition [39,40,41]. While chlorophyll, the primary pigment responsible for photosynthesis, strongly absorbs light in the blue and red regions of the spectrum, giving leaves their characteristic green color, other compounds, such as carotenoids and anthocyanins, contribute to the overall reflectance pattern [42,43]. Moreover, the presence of pathogens can significantly alter the reflectance spectrum of plants due to the physiological and biochemical changes induced by the infection. Pathogens, such as bacteria, fungi, and viruses, can cause structural alterations and trigger biochemical responses in plant tissues [12,13]. These changes can affect the absorption and reflection of light. For instance, pathogens may cause disruptions in chlorophyll content, affecting its absorption peaks and resulting in an overall shift in reflectance patterns. Additionally, pathogen-induced stress can lead to changes in water content and cell structure and to the accumulation of secondary metabolites, all of which contribute to alterations in the reflectance spectrum.
Other features of the collected spectrum depend on the "experimental" conditions. The reflectance spectrum of a leaf can change based on the observation point, which includes both the angle and position of the incident light and the viewing angle [44,45,46]. When the observation point changes, the interaction of light with the leaf surface structure and pigments can lead to variations in the reflectance spectrum. As the angle of incidence increases or decreases, the amount of light that is absorbed, transmitted, or reflected by the leaf changes. This is influenced by factors such as the orientation of the leaf, the surface roughness, and the arrangement of the cells and pigments. To capture a comprehensive view of the properties of plant reflectance, it is usually necessary to use various angles and positions when conducting spectral analysis. Moreover, the natural light source is not constant; it can change with the time of day or the atmospheric conditions.
In a real scenario, as in the on-field measurement of crop status, the excellent classification results obtained in the controlled environment of the laboratory cannot be expected in general. In this complex scenario, the classification strategies often have to be optimized on the spot. These problems can sometimes be overcome by increasing the complexity of the classification model. The classifier used in the present work is based on neural networks, which indeed improve the model's performance and are less prone to overfitting [47]. Generalization is also aided by the data acquisition process, which samples a large variety of illumination conditions. VIS-NIR reflectance spectra of leaves are very broad, and it is more important to recognize their overall spectral signature than single sharp peaks. As a result, the spectral resolution can be acceptably reduced in favor of a short acquisition time. In this way, many realizations can be collected, and the influence of external conditions on the measurement can be reduced. Also, the noise in the spectrum is reduced by averaging over adjacent wavelength regions.
Thus, the present experimental setup is intermediate between a point-based setup, characterized by high spectral resolution but being very site specific, and a hyperspectral camera, very informative but very slow. The system presented here also allows for a continuous calibration of the actual reflectance spectrum features taken into account for real-time analysis. In fact, on the one hand, the training of the classification system can be carried out in a few minutes, even using the limited capabilities of the embedded Raspberry Pi; on the other hand, all the acquired spectra can be corrected considering the instantaneous illumination conditions, as the reference spectrum can always be acquired once a white reference panel, traveling with the UGV, is placed in the field of view of the camera.
In Table 3, we report the main specifications of the developed hyperspectral system. Note that the specificity of the application and the versatility of the analysis allow the requirements on the spectral resolution of the setup to be relaxed. In fact, leaf reflectance spectra usually present smooth features. Conversely, it is crucial to minimize the acquisition and elaboration time, a parameter of pivotal importance for increasing the velocity of the UGV in future applications. This is achieved thanks to the small number of spectral features used, which reduces the computational effort required for the analysis so as to meet the requirements for the real-time application of the system.

5. Conclusions

In this work, we presented the development of a real-time system for plant spectrum classification from hyperspectral data. The developed system encompasses a custom-designed push-broom spectrometer in which spectral resolution is traded in favor of light collection to obtain a high frame rate. The data are processed and classified by a neural network model on an embedded system. The fast acquisition time, in conjunction with the efficiency of the neural network architecture, permits the real-time analysis of the reflectance spectra. As a test example of its application, the system was trained to classify leaves from different plant species (i.e., lettuce and arugula), obtaining the classification of more than 35,000 spectra per second with an accuracy of 0.996. The same system can potentially be trained and used to detect weeds or diseases in a greenhouse or open-field plantation. Future studies will implement the sensor on a UGV featuring an innovative ultra-wideband autonomous navigation system for automated greenhouse treatment.

Author Contributions

Conceptualization, I.N., S.C. and M.M.; Data curation, I.N. and S.C.; Formal analysis, I.N., S.C. and M.M.; Funding acquisition, S.C., L.G. and M.M.; Investigation, I.N., S.C. and M.M.; Methodology, I.N., S.C., F.C. and M.M.; Project administration, S.C.; Resources, G.C., F.C., F.P., S.F., L.O., S.A. and L.G.; Software, I.N. and F.B.; Supervision, M.M.; Validation, I.N., S.C. and M.M.; Visualization, F.B. and F.C.; Writing—original draft, I.N., S.C., F.C. and M.M.; Writing—review and editing, I.N., S.C., F.B., G.C., F.C., F.P., S.F., L.O., L.G. and M.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work has been funded by the European Union, NextGenerationEU, under the Italian Ministry of University and Research (MUR) National Innovation Ecosystem grant ECS00000041-VITALITY-CUP J97G22000170005.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We acknowledge Università degli Studi di Perugia for support through “Fondo Ricerca di Ateneo, edizione 2021”, projects “Microspettrometro per imaging di trasmittanza e fluorescenza” and “Studio di scenari multirischio per i disastri naturali nell’area dell’Italia centro-meridionale e della Sicilia: capire il passato e il presente per proteggere il futuro”. This work was also funded by the Italian Ministry of Agriculture, Food Sovereignty and Forestry (MASAF), national program sub-project “Tecnologie digitali integrate per il rafforzamento sostenibile di produzioni e trasformazioni agroalimentari (AgroFiliere)” (AgriDigit program) (DM 36503.7305.2018 of 20 December 2018).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sousa, J.J.; Toscano, P.; Matese, A.; Di Gennaro, S.F.; Berton, A.; Gatti, M.; Poni, S.; Pádua, L.; Hruška, J.; Morais, R.; et al. UAV-Based Hyperspectral Monitoring Using Push-Broom and Snapshot Sensors: A Multisite Assessment for Precision Viticulture Applications. Sensors 2022, 22, 6574. [Google Scholar] [CrossRef] [PubMed]
  2. Sigernes, F.; Syrjäsuo, M.; Storvold, R.; Fortuna, J.; Grøtte, M.E.; Johansen, T.A. Do it yourself hyperspectral imager for handheld to airborne operations. Opt. Express 2018, 26, 6021–6035. [Google Scholar] [CrossRef] [PubMed]
  3. Jin-Ling, Z.; Dong-Yan, Z.; Ju-Hua, L.; Yang, H.; Lin-Sheng, H.; Wen-Jiang, H. A comparative study on monitoring leaf-scale wheat aphids using pushbroom imaging and non-imaging ASD field spectrometers. Int. J. Agric. Biol. 2012, 14, 136–140. [Google Scholar]
  4. Fu-Ping, G.; Run-Sheng, W.; Ai-Nai, M.; Su-Ming, Y. Investigation on physiological status of regional vegetation using pushbroom hyperspectral imager data. J. Integr. Plant Biol. 2002, 44, 983. [Google Scholar]
  5. Fan, S.; Li, C.; Huang, W. Data fusion of two hyperspectral imaging systems for blueberry bruising detection. In Proceedings of the 2017 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers, Spokane, WA, USA, 16–19 July 2017; p. 1. [Google Scholar]
  6. Moroni, M. Vegetation monitoring via a novel push-broom-sensor-based hyperspectral device. J. Phys. Conf. Ser. 2019, 1249, 012007. [Google Scholar] [CrossRef]
  7. Akhtman, Y.; Golubeva, E.; Tutubalina, O.; Zimin, M. Application of hyperspectural images and ground data for precision farming. Geogr. Environ. Sustain. 2017, 10, 117–128. [Google Scholar] [CrossRef]
  8. Dale, L.M.; Thewis, A.; Boudry, C.; Rotar, I.; Dardenne, P.; Baeten, V.; Pierna, J.A.F. Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: A review. Appl. Spectrosc. Rev. 2013, 48, 142–159. [Google Scholar] [CrossRef]
  9. Dharmaraj, V.; Vijayanand, C. Artificial intelligence (AI) in agriculture. Int. J. Curr. Microbiol. Appl. Sci. 2018, 7, 2122–2128. [Google Scholar] [CrossRef]
  10. Falcioni, R.; Gonçalves, J.V.F.; Oliveira, K.M.d.; Oliveira, C.A.d.; Demattê, J.A.; Antunes, W.C.; Nanni, M.R. Enhancing Pigment Phenotyping and Classification in Lettuce through the Integration of Reflectance Spectroscopy and AI Algorithms. Plants 2023, 12, 1333. [Google Scholar] [CrossRef]
  11. Subudhi, S.; Dabhade, R.G.; Shastri, R.; Gundu, V.; Vignesh, G.; Chaturvedi, A. Empowering sustainable farming practices with AI-enabled interactive visualization of hyperspectral imaging data. Meas. Sensors 2023, 30, 100935. [Google Scholar] [CrossRef]
  12. Reis Pereira, M.; Santos, F.N.d.; Tavares, F.; Cunha, M. Enhancing host-pathogen phenotyping dynamics: Early detection of tomato bacterial diseases using hyperspectral point measurement and predictive modeling. Front. Plant Sci. 2023, 14, 1242201. [Google Scholar] [CrossRef] [PubMed]
  13. Gold, K.M.; Townsend, P.A.; Chlus, A.; Herrmann, I.; Couture, J.J.; Larson, E.R.; Gevens, A.J. Hyperspectral measurements enable pre-symptomatic detection and differentiation of contrasting physiological effects of late blight and early blight in potato. Remote Sens. 2020, 12, 286. [Google Scholar] [CrossRef]
  14. Morlin Carneiro, F.; Angeli Furlani, C.E.; Zerbato, C.; Candida de Menezes, P.; da Silva Gírio, L.A.; Freire de Oliveira, M. Comparison between vegetation indices for detecting spatial and temporal variabilities in soybean crop using canopy sensors. Precis. Agric. 2019, 21, 979–1007. [Google Scholar] [CrossRef]
  15. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-based multispectral remote sensing for precision agriculture: A comparison between different cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [Google Scholar] [CrossRef]
  16. Su, J.; Zhu, X.; Li, S.; Chen, W.H. AI meets UAVs: A survey on AI empowered UAV perception systems for precision agriculture. Neurocomputing 2023, 518, 242–270. [Google Scholar] [CrossRef]
  17. European Parliament; Directorate-General for Parliamentary Research Services; De Baerdemaeker, J. Artificial Intelligence in the Agri-Food Sector: Applications, Risks and Impacts; Publications Office of the European Union: Luxembourg, 2023. [CrossRef]
  18. Kutyauripo, I.; Rushambwa, M.; Chiwazi, L. Artificial intelligence applications in the agrifood sectors. J. Agric. Food Res. 2023, 11, 100502. [Google Scholar] [CrossRef]
  19. Ortenzi, L.; Violino, S.; Costa, C.; Figorilli, S.; Vasta, S.; Tocci, F.; Moscovini, L.; Basiricò, L.; Evangelista, C.; Pallottino, F.; et al. An innovative technique for faecal score classification based on RGB images and artificial intelligence algorithms. J. Agric. Sci. 2023, 161, 291–296. [Google Scholar] [CrossRef]
  20. Sperandio, G.; Ortenzi, L.; Spinelli, R.; Magagnotti, N.; Figorilli, S.; Acampora, A.; Costa, C. A multi-step modelling approach to evaluate the fuel consumption, emissions, and costs in forest operations. Eur. J. For. Res. 2023. [Google Scholar] [CrossRef]
  21. Pane, C.; Manganiello, G.; Nicastro, N.; Ortenzi, L.; Pallottino, F.; Cardi, T.; Costa, C. Machine learning applied to canopy hyperspectral image data to support biological control of soil-borne fungal diseases in baby leaf vegetables. Biol. Control 2021, 164, 104784. [Google Scholar] [CrossRef]
  22. Moscovini, L.; Ortenzi, L.; Pallottino, F.; Figorilli, S.; Violino, S.; Pane, C.; Capparella, V.; Vasta, S.; Costa, C. An open-source machine-learning application for predicting pixel-to-pixel NDVI regression from RGB calibrated images. Comput. Electron. Agric. 2024, 216, 108536. [Google Scholar] [CrossRef]
  23. Navarro, A.; Nicastro, N.; Costa, C.; Pentangelo, A.; Cardarelli, M.; Ortenzi, L.; Pallottino, F.; Cardi, T.; Pane, C. Sorting biotic and abiotic stresses on wild rocket by leaf-image hyperspectral data mining with an artificial intelligence model. Plant Methods 2022, 18, 45. [Google Scholar] [CrossRef] [PubMed]
  24. Ortenzi, L.; Figorilli, S.; Violino, S.; Pallottino, F.; Costa, C. Artificial Intelligence approaches for fast and portable traceability assessment of EVOO. In Proceedings of the 2023 IEEE International Conference on Omni-layer Intelligent Systems (COINS), Berlin, Germany, 23–25 July 2023. [Google Scholar] [CrossRef]
  25. Tsakanikas, P.; Karnavas, A.; Panagou, E.Z.; Nychas, G.J. A machine learning workflow for raw food spectroscopic classification in a future industry. Sci. Rep. 2020, 10, 11212. [Google Scholar] [CrossRef] [PubMed]
  26. Heydarov, S.; Aydin, M.; Faydaci, C.; Tuna, S.; Ozturk, S. Low-cost VIS/NIR range hand-held and portable photospectrometer and evaluation of machine learning algorithms for classification performance. Eng. Sci. Technol. Int. J. 2023, 37, 101302. [Google Scholar] [CrossRef]
  27. TTArtisan APS-C 17mm F1.4-APS-C Lenses. Available online: https://en.ttartisan.com/?list_10%2F122.html (accessed on 29 November 2023).
  28. Thorlabs—AC127-075-A f = 75 mm, Ø1/2—thorlabs.com. Available online: https://www.thorlabs.com/thorproduct.cfm?partnumber=AC127-075-A (accessed on 29 November 2023).
  29. Thorlabs, Inc. PS852-F2 Equilateral Dispersive Prism, 25 mm. Available online: https://www.thorlabs.com/thorproduct.cfm?partnumber=PS852 (accessed on 29 November 2023).
  30. 25.0 mm FL, No IR-Cut Filter, f/2.5, Micro Video Lens—Edmundoptics.com. Available online: https://www.edmundoptics.com/p/250mm-fl-no-ir-cut-filter-f25-micro-video-lens/13716/ (accessed on 29 November 2023).
  31. Alvium 1800 U-040. Available online: https://www.alliedvision.com/fileadmin/pdf/en/Alvium_1800_U-040m_Closed-Housing_C-Mount_Standard_DataSheet_en.pdf (accessed on 29 November 2023).
  32. Suchowski, H. Cloud-based simulation tools for streamlined optical design: 3DOptix is a free, easy-to-use, cloud-based optical design and simulation software which includes a suite of discovery tools and drawings. PhotonicsViews 2021, 18, 46–48. [Google Scholar] [CrossRef]
  33. Raspberry Pi (Trading) Ltd. Raspberry Pi 4 Model B; Raspberry Pi (Trading) Ltd.: Cambridge, UK, 2019; Rel. 1; Available online: https://datasheets.raspberrypi.com/rpi4/raspberry-pi-4-datasheet.pdf (accessed on 3 July 2023).
  34. Allied Vision. Vimba for Linux ARMv8 64-bit, 6.0. 2022. Available online: https://www.alliedvision.com/en/products/vimba-sdk (accessed on 3 May 2023).
  35. Bradski, G. The openCV library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–123. [Google Scholar]
  36. Garg, N. Apache Kafka; Packt Publishing: Birmingham, UK, 2013. [Google Scholar]
  37. Kramer, O.; Kramer, O. Scikit-learn. In Machine Learning for Evolution Strategies; Springer: Cham, Switzerland, 2016; pp. 45–53. [Google Scholar]
  38. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  39. Grant, L. Diffuse and specular characteristics of leaf reflectance. Remote Sens. Environ. 1987, 22, 309–322. [Google Scholar] [CrossRef]
  40. Liu, C.; Sun, P.S.; Liu, S.R. A review of plant spectral reflectance response to water physiological changes. Chin. J. Plant Ecol. 2016, 40, 80. [Google Scholar]
  41. Mishra, P.; Asaari, M.S.M.; Herrero-Langreo, A.; Lohumi, S.; Diezma, B.; Scheunders, P. Close range hyperspectral imaging of plants: A review. Biosyst. Eng. 2017, 164, 49–67. [Google Scholar] [CrossRef]
  42. Merzlyak, M.; Gitelson, A.; Chivkunova, O.; Solovchenko, A.; Pogosyan, S. Application of reflectance spectroscopy for analysis of higher plant pigments. Russ. J. Plant Physiol. 2003, 50, 704–710. [Google Scholar] [CrossRef]
  43. Manjunath, K.; Ray, S.; Vyas, D. Identification of indices for accurate estimation of anthocyanin and carotenoids in different species of flowers using hyperspectral data. Remote Sens. Lett. 2016, 7, 1004–1013. [Google Scholar] [CrossRef]
  44. Zou, X.; Mõttus, M. Retrieving crop leaf tilt angle from imaging spectroscopy data. Agric. For. Meteorol. 2015, 205, 73–82. [Google Scholar] [CrossRef]
  45. Song, X.; Feng, W.; He, L.; Xu, D.; Zhang, H.Y.; Li, X.; Wang, Z.J.; Coburn, C.A.; Wang, C.Y.; Guo, T.C. Examining view angle effects on leaf N estimation in wheat using field reflectance spectroscopy. ISPRS J. Photogramm. Remote Sens. 2016, 122, 57–67. [Google Scholar] [CrossRef]
  46. Hu, P.; Huang, H.; Chen, Y.; Qi, J.; Li, W.; Jiang, C.; Wu, H.; Tian, W.; Hyyppä, J. Analyzing the angle effect of leaf reflectance measured by indoor hyperspectral light detection and ranging (LiDAR). Remote Sens. 2020, 12, 919. [Google Scholar] [CrossRef]
  47. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Available online: http://www.deeplearningbook.org (accessed on 7 September 2023).
Figure 1. Push-broom spectrometer working principle. The collecting lens focuses the target on a slit, selecting only a portion of the target. The light is then dispersed by a prism and refocused on the imaging sensor.
Figure 2. The 3D optical simulation of the push-broom spectrometer. Simulation was performed with 3DOptix software, v2.0. For representative purposes, the incoming white light starts from three points. The light is focused on the vertical slit and then dispersed by the prism and finally refocused on the sensor. Top-left inset shows three strips of dispersed light on the detector relative to the three starting points.
Figure 3. CAD drawing of the system with base support and optical elements.
Figure 4. The 3D-printed (left) and aluminum CNC-machined (right) versions of the prototype.
Figure 5. Neural network architecture schematic. The input features consist of the normalized reflectance at different wavelengths. A hidden layer with 25 nodes was weighted and combined with the spectral points. The binary output layer classified the spectra based on the results of the hidden layer.
Figure 6. Wavelength-to-pixel relationship obtained from the position of the peaks of a fluorescent lamp spectrum, laser diodes, and IR diode. Dots represent experimental data and the solid line represents the fit according to Equation (5).
Figure 7. Comparison of the spectrum of a fluorescent lamp acquired using a commercial spectrometer (Avantes AvaSpec-ULS2048-USB2-UA-50) and the custom-made spectroscopic system. Inset: The spectral resolution of the system, δ λ , can be estimated using the measured width of a very sharp peak. The estimation was conducted with an atomic emission line located at about 540 nm of the fluorescent lamp.
Figure 8. Spatial resolution of the spectrometer. (a) Line pattern used to calibrate the spatial response function and the contrast of the instrument. (b) Measured intensity as a function of the pixel position. (c) Contrast of the four-line patterns as a function of the frequency of the chosen pattern (dots); the line is a guide for the eyes. (d) Distance in pixels between two successive maxima as a function of the dimension of the pattern period (black dots) together with the linear fit (solid line).
Figure 9. Normalized reflectance spectra acquired from leaves of lettuce (left) and arugula (right) at different angles and positions. Each set of spectra shows a large variance due to the part of the leaf and acquisition angle selected.
Figure 10. Confusion matrix for lettuce and arugula classification. The rows correspond to the true class, and the columns correspond to the predicted class. Diagonal and off-diagonal cells correspond to correctly and incorrectly classified observations, respectively.
Figure 11. Classifier validation on mixed plant leaves. (Top) panel shows an image of the leaves where the colored region represents the dimension scanned by the push-broom spectrometer. (Bottom) panel shows an example of classification (green for lettuce and red for arugula) where most of the points are classified correctly.
Table 1. Samples in the data-set. The data-set contains 296,862 spectra, 224,867 from lettuce and 71,995 from arugula. The data-set was split into random train and test subsets in the proportion 0.7 and 0.3, respectively.

Samples   Lettuce   Arugula   Total
Train     157,407   50,396    207,803
Test      67,460    21,599    89,059
Total     224,867   71,995    296,862
Table 2. Classification report (precision, recall, F-measure, and accuracy) for each class. From the recall column, we observe a sensitivity and specificity close to one.

           Precision   Recall   F-Measure
Lettuce    1.00        1.00     1.00
Arugula    0.99        0.99     0.99
Accuracy                        0.996
Table 3. Summary of the technical data of the newly developed portable hyperspectral imaging device for remote sensing at the operating conditions. The table includes the spatial and frequency specifications, the principal physical parameters such as weight and dimensions, and the data treatment.

Relevant Working Parameters
Wavelength operation range: 300–1000 nm
Spectral resolution: <20 nm at 540 nm
Field of view at 1.2 m: soil line 1 cm wide and 1 m long
Working distance: from 0.9 to 1.10 m
Spatial resolution: ∼0.5 cm along the scanned line; ∼1 cm along the motion direction
Acquisition time: max. frame rate at full resolution, 495 fps
Classification speed: ∼35,000 spectra @ 50 fps
Data transfer: wireless connectivity
Weight: ∼1.5 kg
Dimensions: 30 cm × 20 cm × 10 cm (L × W × H)

Share and Cite

Neri, I.; Caponi, S.; Bonacci, F.; Clementi, G.; Cottone, F.; Gammaitoni, L.; Figorilli, S.; Ortenzi, L.; Aisa, S.; Pallottino, F.; et al. Real-Time AI-Assisted Push-Broom Hyperspectral System for Precision Agriculture. Sensors 2024, 24, 344. https://doi.org/10.3390/s24020344

