Technical Note

Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry

1 Institute for Systems and Computer Engineering, Technology and Science (INESC TEC, formerly INESC Porto), 4200-465 Porto, Portugal
2 Department of Engineering, School of Sciences and Technology, University of Trás-os-Montes e Alto Douro, 5000-801 Vila Real, Portugal
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(11), 1110; https://doi.org/10.3390/rs9111110
Submission received: 20 September 2017 / Revised: 22 October 2017 / Accepted: 27 October 2017 / Published: 30 October 2017
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract
Traditional imagery—provided, for example, by RGB and/or NIR sensors—has proven to be useful in many agroforestry applications. However, it lacks the spectral range and precision to profile materials and organisms in the way that only hyperspectral sensors can. This kind of high-resolution spectroscopy was first used on satellites and later on manned aircraft, which are significantly expensive platforms and extremely restrictive due to availability limitations and/or complex logistics. More recently, unmanned aircraft systems (UAS) have emerged as a very popular and cost-effective remote sensing technology, composed of aerial platforms capable of carrying small-sized and lightweight sensors. Meanwhile, hyperspectral technology developments have consistently resulted in smaller and lighter sensors that can currently be integrated in UAS for either scientific or commercial purposes. The hyperspectral sensors' ability to measure hundreds of bands raises complexity when considering the sheer quantity of acquired data, whose usefulness depends on both calibration and corrective tasks occurring in pre- and post-flight stages. Further steps regarding hyperspectral data processing must be performed towards the retrieval of relevant information, which provides the true benefits for assertive interventions in agricultural crops and forested areas. Considering the aforementioned topics and the goal of providing a global view focused on hyperspectral-based remote sensing supported by unmanned aerial vehicle (UAV) platforms, a survey including hyperspectral sensors, inherent data processing and applications focusing both on agriculture and forestry—wherein the combination of UAVs and hyperspectral sensors plays a central role—is presented in this paper. Firstly, the advantages of hyperspectral data over RGB imagery and multispectral data are highlighted. Then, hyperspectral acquisition devices are addressed, including sensor types, acquisition modes and UAV-compatible sensors that can be used for both research and commercial purposes. Pre-flight operations and post-flight pre-processing are pointed out as necessary to ensure the usefulness of hyperspectral data for further processing towards the retrieval of conclusive information. With the goal of simplifying hyperspectral data processing—by isolating the common user from the processes' mathematical complexity—several available toolboxes that allow direct access to level-one hyperspectral data are presented. Moreover, research works focusing on the combined use of UAVs and hyperspectral sensors for agriculture and forestry applications are reviewed, just before the paper's conclusions.


1. Introduction

Remote sensing relying on unmanned aircraft systems (UAS), although an emerging field of application, has been systematically applied to monitor vegetation and environmental parameters with the aim of optimizing agroforestry activities [1]. In this context, UAS have become suitable for assessing crop conditions by gathering huge amounts of raw data that require further processing to enable a wide range of applications, such as water status assessment [2], vigor assessment [3], biomass estimation [4] and disease monitoring [5]. Similarly, forestry and nature preservation can also greatly benefit from UAS technology, which allows the inspection of forestry operations [6], wildfire detection [7], health monitoring [8] and forest preservation [9].
Despite the large number of successful works applied to agroforestry and related areas using low-cost passive imaging sensors—such as visible (RGB) and near-infrared (NIR)—many applications require a higher spectral fidelity that only multispectral and hyperspectral [10,11,12,13] sensors can offer. Both spectral-based methods consist of the acquisition of images in which, for each of the image's spatially distributed elements, a spectrum of the energy reaching the respective sensor is measured. The main difference between them is the number of bands (also referred to as channels) and their width [14]. While multispectral imagery generally comprises 5 to 12 bands, each acquired using a remote sensing radiometer, hyperspectral imagery consists of a much higher number of bands—hundreds or thousands—each with a narrower bandwidth (5–20 nm). Figure 1 represents the differences between multi- and hyperspectral imaging, both of which rely on a spectroscopic approach that has been used in laboratory practice and in astronomy for more than 100 years.
In fact, both multi- and hyperspectral imagery have the potential to take data mining to a whole new exploration level in many areas, including food quality assessment [11] and agriculture [12]. For instance, productivity and stress indicators in both agricultural and forest ecosystems can be assessed through the quantification of photosynthetic light use efficiency, which can be obtained by measuring the photochemical reflectance index (PRI), relying on the narrowband absorbance of xanthophyll pigments at 531 and 570 nm [15]. However, while the higher spectral resolution of hyperspectral data allows remote sensing of narrowband spectral composition—also known as spectra, signature or, according to [16], spectral signature—multispectral data is sampled at larger intervals over the electromagnetic spectrum, which does not reach the same level of detail. Thus, hyperspectral data performs better at profiling materials and their respective endmembers due to its almost continuous spectra. On the one hand, it covers spectral detail that might go unnoticed in multispectral data due to the latter's discrete and sparse nature. For example, in Figure 2, since the red-edge (RE, 670–780 nm) is not accessible through the broadband sensor, leaf chlorophyll content, phenological state and vegetation stress—parameters that manifest in that spectral range—cannot be assessed. On the other hand, hyperspectral data has the ability to discriminate components that may be unwittingly grouped by multispectral bands (see, for example, [17] for more details).
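As an illustration, the PRI mentioned above reduces to a two-band normalized ratio once a calibrated reflectance cube is available. The following minimal Python sketch assumes the cube is a NumPy array and the band-center wavelengths are known; the variable names (`cube`, `wavelengths`) are illustrative, not part of any particular sensor's API:

```python
import numpy as np

def pri(cube, wavelengths):
    """Photochemical reflectance index: (R531 - R570) / (R531 + R570).

    cube:        reflectance array of shape (rows, cols, bands)
    wavelengths: 1-D array of band-center wavelengths in nm
    Bands are located by nearest neighbour, so the sensor only needs
    bands close to 531 nm and 570 nm, not exact matches.
    """
    b531 = int(np.argmin(np.abs(wavelengths - 531)))
    b570 = int(np.argmin(np.abs(wavelengths - 570)))
    r531, r570 = cube[..., b531], cube[..., b570]
    return (r531 - r570) / (r531 + r570 + 1e-12)  # epsilon avoids /0
```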
Along with the resolution improvement, the hyperspectral sensing approach also increases data processing complexity, since such imagery ranges from hundreds to thousands of narrow bands that can be difficult to handle in real time with reduced computational resources. Besides, spectral signatures can undergo variations depending on light exposure and atmospheric conditions, an issue that has led the scientific community to propose acquisition processes (to control environmental conditions) and/or analysis methodologies (to correct the noise resulting from environmental conditions). Such efforts allow spectral data to be accurately matched and material compositions to be identified.
The first developments of imaging spectrometry in remote sensing applications started with satellites, more specifically to support Landsat-1 data analysis through field spectral measurements, according to [19]. Studies mostly regarded mineral exploration [20], but also landmine detection [21] and agroforestry and related areas [22]. Back then, hyperspectral imaging technology did not have the supporting resources to go mainstream, because developments in electronics, computing and software were required. Progress in the 1980s eventually overcame these technological limitations, opening the doors for the dissemination of this remote sensing technique for earth monitoring by the 1990s [23]. However, its development is still considered an ongoing process [24]. Currently, satellite capabilities for covering wide spatial ranges, along with the improvements that have been carried out regarding spatial and spectral resolution [25], have been allowing the development of remote sensing works in—but not limited to—agriculture, forestry and related areas (e.g., [26,27,28]). Notwithstanding, the long distance to the earth's surface raises recurrent problems, according to [29]. For example, Pölönen et al. [30] pointed out that under conditions involving cloudiness and a short growing season, hyperspectral data acquired from traditional platforms can become useless in some cases. Other issues include the high cost of commercial satellite imagery, which only provides up to 30 cm resolution [31]. Even recent technology such as Sentinel-2 provides only up to 10 m resolution in RGB and NIR [32], which is too coarse for some applications. For a better insight, consider a scenario with vines in which consecutive rows are parted by 4 m: such imagery would mix at least two rows with a significant portion of soil. An alternative to satellites started to be designed by the National Aeronautics and Space Administration Jet Propulsion Laboratory (NASA/JPL) in 1983, with the development of hyperspectral hardware specific to aircraft, resulting in the Airborne Imaging Spectrometer (AIS). Later, in 1987, the airborne visible/infrared imaging spectrometer (AVIRIS) [33] came out as a high-quality hyperspectral data provider that became popular among the scientific community [19]. However, besides the costs involved in the use of this solution, a certified pilot for manning the aerial vehicle and flight-related logistics are required. Lately, a remote sensing platform capable of overcoming not only satellite but also manned aircraft issues—by bringing enhanced spectral and spatial resolutions, operational flexibility and affordability to users—has been emerging: the UAS [34]. Together with specialized sensors, UAS are becoming powerful sensing systems [35] that complement the traditional sensing techniques rather than competing with them [1]. According to Pádua et al. [1], a UAS can be defined as a power-driven and reusable aircraft, operated without a human pilot on board [36]. Usually, it is composed of a UAV that, in turn, is capable of carrying remote sensing devices. UAS can be remotely controlled or have a programmed route to perform an autonomous flight using the embedded autopilot. Generally, a ground-control station and communication devices are also required to carry out flight missions [37]. Colomina and Molina [38] share a similar perspective by referring that the UAV is usually the remotely piloted platform, whereas the UAS is regarded as the platform plus the control segment.
They also add that UAV and UAS are somewhat used interchangeably. Regarding hyperspectral data handling, a set of steps can be followed [13]: (1) image acquisition; (2) calibration; (3) spectral/spatial processing; (4) dimensionality reduction; and, finally, (5) computation-related tasks (e.g., analysis, classification, detection, etc.). Similarly, in [39], file reduction and subsetting, spectral library definition (e.g., made by selecting a portion of the image) and classification are pointed out as valid operations to constitute a processing chain. Remote sensing through the combination of UAVs and on-board hyperspectral sensors, relying on many of the aforementioned steps/operations, has been applied both to agriculture and forestry (e.g., [40,41,42]). However, available works are not so numerous when compared with other platforms, since this is a relatively new research field. Even so, they provide a proper demonstration of this approach's potential.
All in all, the main focus of this paper is, precisely, UAS-based remote sensing using hyperspectral sensors, applied both in agriculture and forestry. The acquisition equipment designed to be attached to UAVs is presented next, in Section 2. Then, Section 3 discusses the operations that should be carried out before and after flight missions, as well as pre-processing procedures for image calibration. Important approaches for data processing are reviewed in Section 4, specifically dimension reduction, target detection, classification and vegetation index operations. Supporting software tools and libraries are presented in Section 5. Applications relying on UAVs and hyperspectral sensors in agriculture and forestry are addressed in Section 6, right before some conclusions in Section 7. To provide guidance along this paper's reading, a glossary of the used abbreviations and acronyms is listed in Appendix A.

2. Hyperspectral Sensors

Independently of the aerial platform (airborne, satellite, UAV, etc.), sensors play an important role in data acquisition. According to [43], in which an extensive work addressing hyperspectral technology can be found, there are four main techniques for acquiring measurable data from a given target: hyperspectral imaging, multispectral imaging, spectroscopy and RGB imagery. The most significant differences are synthesized in Table 1, which considers not only the comparison carried out by [43] but also the vision of Sellar and Boreman [44], who stated that imaging sensors for remote sensing can be divided according to the methods by which they achieve (1) spatial discrimination and (2) spectral discrimination.
When compared with the others, hyperspectral imaging sensors are effectively capable of capturing more detail in both the spectral and spatial ranges. RGB imaging does not provide spectral information beyond the visible spectrum, which is of high importance for characterizing the chemical and physical properties of a specimen. On the other hand, spectroscopy is a proximity technology mainly used for sensing tiny areas (e.g., leaf spots), aiming at the acquisition of spectral samples without spatial definition. Regarding multispectral imaging, in spite of its capability for sensing both spectral and spatial data, it lacks a spectral resolution that only hyperspectral imaging provides, as pointed out in the previous section. Thereby, hyperspectral sensing technology should be preferred when it comes to sensing the chemical and physical properties of a specimen.
Regarding the concept of hyperspectral sensors, they are area detectors with the ability to quantify acquired light resulting from the conversion of incident photons into electrons [43]. Two types of sensors are prominently used to achieve such conversion: charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) sensors. Both consist of an array of photodiodes that might be built using different materials, as detailed in Figure 3.
CCD and CMOS sensors differ mainly in the way they treat incoming energy. On the one hand, a CCD requires moving the electric charges accumulated in the photodiodes to another place where the quantity of charge can be measured. On the other hand, a CMOS sensor integrates the photodetector and readout amplifier in a single part, converting the charge from incoming electrons—converted photons—into a voltage signal through transistors placed adjacent to each photodiode. This seems to be the reason why CMOS technology is faster at acquiring and measuring light intensity. However, it is more prone to noise and dark currents than CCD, due to the on-chip circuits used to transfer and amplify signals and as a result of its lower dynamic range and sensitivity, as explained in [43]. A dark current is a common, temperature-dependent phenomenon that introduces noise into a sensor's readings and needs to be considered in calibration tasks for correction purposes. According to [45], this phenomenon can be generated by the Shockley–Read–Hall (SRH) process—in which an impurity in the lattice introduces a new energy state within the band gap, through which transitioning electrons can pass—due to multiple factors, which end up resulting in so-called blemish pixels. Additional information can be found in [15], in which it is pointed out that CCD-based sensors have a higher sensitivity regarding band data acquisition while, on the other hand, high-grade CMOS sensors have a greater quantum efficiency in the NIR.
Regarding acquisition modes, reference [43] categorizes them into four main ones (largely in accordance with [44,46,47]): point scanning (or whiskbroom), line scanning (or pushbroom), plane scanning and single shot (Figure 4). While the whiskbroom mode acquires all the bands pixel by pixel, by moving the detector in the x-y space to store data in a band-interleaved-by-pixel (BIP) cube, the pushbroom mode proceeds similarly but, instead of pixel-based scanning, acquires an entire sequence of pixels forming a line, constituting a band-interleaved-by-line (BIL) cube. Other pushbroom characteristics include compact size, low weight, simpler operation and a higher signal-to-noise ratio [10]. More comparisons between the pushbroom and whiskbroom modes are presented in [48]. The plane scanning mode builds a band sequential (BSQ) cube constituted by several images, each one holding the data of a whole given x-y space for a single spectral band. Finally, there is a more recent mode, known as single shot, that acquires all of the spatial and spectral data at once. In [46], the snapshot imager seems to correspond to the referenced single shot mode inasmuch as it is presented as a device that collects an entire data cube within a single integration period. Additionally, some noteworthy issues are pointed out for each acquisition mode. Whiskbroom is a slow acquisition mode, and pushbroom must use exposure times short enough to avoid the risk of inconsistencies at the spectral band level (saturation or underexposure). Plane scanning is not suitable for moving environments, while single shot was reported as a technology under development that still lacks support for higher spatial resolutions.
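The BIP, BIL and BSQ interleaves mentioned above contain the same values and differ only in their ordering on disk. As a hedged illustration, the following Python sketch reshapes a flat sensor buffer into a common (rows, cols, bands) cube for each format; dimensions are assumed to be known from the file header:

```python
import numpy as np

def to_cube(raw, rows, cols, bands, interleave):
    """Reshape a flat buffer into a (rows, cols, bands) cube.

    BIP keeps all bands of a pixel together (whiskbroom-style output),
    BIL stores one image line for every band before the next line
    (pushbroom), and BSQ stores each full band image in sequence
    (plane scanning).
    """
    if interleave == "bip":
        return raw.reshape(rows, cols, bands)
    if interleave == "bil":
        return raw.reshape(rows, bands, cols).transpose(0, 2, 1)
    if interleave == "bsq":
        return raw.reshape(bands, rows, cols).transpose(1, 2, 0)
    raise ValueError(f"unknown interleave: {interleave}")
```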
The combination of hyperspectral sensors with UAVs is commonly made available through pre-built systems that require dealing with at least three parties: the sensor manufacturer, the UAV manufacturer and the party that provides system integration [15]. A list of commercially available hyperspectral sensors is presented in Table 2. Besides market options, others were developed in research initiatives (or projects). For example, in [49], a hyperspectral sensor was developed weighing 960 g, with support for capturing 324 spectral bands (or half that in the binned mode) between 361 and 961 nm. In [50], another sensor was proposed to deal with rice paddies cultivated under water. It weighs 400 g and has a capturing range of 256 bands between 340 and 763 nm. In [51], a whiskbroom imager based on a polygon mirror and compact spectrometers with promising low-cost applicability is presented. Sellar and Boreman [44] proposed a windowing approach, distinct from the time-delay integration technique used by some panchromatic imagers. Alternatively, a Fabry–Pérot interferometer (FPI) hyperspectral imager was developed by [30] as a Raman spectroscopy device [52] behaving like a single shot imager, since it acquires the whole 2D plane at once. The data processing steps related to this sensor are described in [53].
Pre- and post-flight operations are required to reach level-one data, i.e., spatially and spectrally reliable hyperspectral cubes, ready to use and process. The next section is devoted to such operations.

3. Operations Prior to Flight and Post-Acquisition Data Pre-Processing

Regardless of the platform from which hyperspectral data is acquired (airborne, satellite, laboratory, etc.), its processing steps and methods are similar, with the exception of the pre-processing stage, which raises platform-specific issues. For example, radiometric noise is the prominent concern regarding UAS—mainly due to light conditions—while atmospheric noise also needs to be considered as a major source of distortion in satellite imagery. Therefore, it is very important to overcome such issues to obtain the proper conditions that subsequently allow the application of the proper set of common processing stages, oriented towards an intended outcome (for example, classification or pixel detection). Bioucas-Dias [54] addresses some approaches for data restoration and spectral resolution improvement that seem to be more suitable for satellite imagery. Considering this work's focus, calibration operations and methods for pre-processing UAS-based hyperspectral imagery are presented next.
Before starting to acquire hyperspectral data, all the hardware needs to be calibrated, including the inertial measurement unit (IMU) and the imager. For the latter, incoming light—under typical conditions—must be managed by reducing or increasing the lens aperture. Currently, the simplest way of converting the sensor digital numbers (DN) to reflectance is via relative reflectance. In addition, [55] pointed out the low quality of UAV georeferencing and its negative impact, especially with pushbroom spectrometers. Indeed, navigation-grade GPS sensors do not facilitate hyperspectral pushbroom scanning on a UAV [49]. The use of ground control points (GCPs) is absolutely necessary for improving georeferencing. However, recent advances in direct georeferencing and imaging technologies—like integrated global navigation satellite systems (GNSS) and inertial navigation systems (INS)—enable precise mapping using a minimal number of GCPs. Foreseeing radiometric calibration at the pre-processing stage, white references are placed on the ground, at least four pixels wide and preferably 1 m² in size. Low-cost (and low-quality) calibration panels can be used for radiometric adjustments based on estimations that, however, cannot beat the accuracy provided by Spectralon panels. Recently, Headwall [56] launched a VNIR E-series hyperspectral sensor with a built-in module for radiometric corrections, claiming that there is no need for placing such ground references; still, validation tests are required. IMU calibration should be performed against the earth's local magnetic field; otherwise, the UAV's orientation can be compromised [15].
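To make the relative-reflectance conversion mentioned above concrete, a common empirical scheme divides dark-corrected DN by the dark-corrected response of the white panel. The sketch below is a generic illustration of that idea, not any vendor's exact procedure; the reference spectra are assumed to be averaged over panel pixels and closed-shutter frames:

```python
import numpy as np

def relative_reflectance(dn, white_ref, dark_ref):
    """Convert raw digital numbers to relative reflectance.

    dn:        (rows, cols, bands) raw cube
    white_ref: (bands,) mean DN over pixels of the white panel
    dark_ref:  (bands,) mean DN with the shutter closed (dark current)
    """
    denom = np.maximum(white_ref - dark_ref, 1)  # guard against zeros
    return np.clip((dn - dark_ref) / denom, 0.0, None)
```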
After the hyperspectral data acquisition stage, some pre-processing operations are required. Duan et al. [57] consider pre-processing a mandatory prerequisite to extract quantitative information from the sensed area. Hyperspectral data acquired by traditional platforms like satellites—whose limitations were identified in [29]—and aircraft is usually influenced by the climate—among other factors pointed out by Pölönen [30]—and this hampers its use, especially in precision agriculture, where information detail matters. On the other hand, UAVs have the advantage of flying closer to the earth's surface, where the influence of the atmosphere is not as significant. For this reason, there is no need for atmospheric corrections, as there would be with traditional platforms. However, the low acquisition height and unstable movement of the UAV, along with the viewing angle and the high influence of micro-relief on illumination, make the pre-processing operations more difficult [58].
The need for radiometric and geometric corrections [59] and spectral calibration [49] when using UAV-carried hyperspectral sensors, so as to preserve the scientific rigor of the acquired data, has been highlighted. In addition, sensors need to be recalibrated regularly. Regarding radiometric calibration, coefficients might be obtained in the laboratory to achieve the conversion to radiance [59]. In [49], an optical integrating sphere (USR-SR 2000S, Labsphere Inc., North Sutton, NH, USA) with four current-stabilized light sources (labelled as SI and INGAAS lamps), producing calibrated irradiance levels of traceable uncertainties, was used for radiometric correction purposes. Nevertheless, some factors—for example, the sensor's transportation and installation—are likely to lead to calibration loss that, in turn, results in decreased radiometric performance [57]. This phenomenon can be assessed by using targets of known reflectance in the field to later check the radiometric quality of the data. For geometric corrections, reference [49] suggests a triangulation transformation. Both radiometric and geometric corrections can be done using software tools—whenever these are provided by the hyperspectral sensor's manufacturer—which should georegister and convert raw DN data cubes to radiance. One way of achieving georegistration is to combine the external orientation data gathered during the flight with the sensor model (interior orientation) and an elevation model. In [58], it is mentioned that most of the available pre-processing software uses the exact IMU position and orientation. Presumably, to be able to use IMUs on board a UAV, a compromise between IMU weight and UAV payload needs to be found. Moreover, spectral calibration needs to be carried out using standard methodologies, as Lucieer et al. [49] did using an open-ray line source producing sharp gas absorption features at well-known wavelengths [60]. These pre-processing operations result in corrected data, which allows for comparisons between different time series and even between data acquired by different hyperspectral sensors.
Quality assessment of hyperspectral data is a relevant topic for pre-processing [57]. An important criterion for hyperspectral data quality characterization is the signal-to-noise ratio (SNR). In [61], it was stated that quantitatively analyzing hyperspectral data requires a precise evaluation of the SNR. Essentially, to optimize the use of the data, bands with a high ratio are needed while, contrastingly, bands with a low ratio must be removed due to their negative effect on the data.
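One simple (among several possible) ways to estimate a per-band SNR is to compute the mean-to-standard-deviation ratio over a spatially homogeneous region, so that spatial variation inside the region is attributed to noise. A minimal sketch, assuming such a region is marked by a Boolean mask:

```python
import numpy as np

def band_snr(cube, region):
    """Per-band SNR estimated over a spatially homogeneous region.

    cube:   (rows, cols, bands) reflectance or radiance array
    region: (rows, cols) boolean mask marking uniform pixels
    Returns the per-band ratio of mean to standard deviation.
    """
    pixels = cube[region]  # (n_pixels, bands)
    return pixels.mean(axis=0) / (pixels.std(axis=0) + 1e-12)

# noisy bands could then be dropped with a chosen floor, e.g.:
# keep = band_snr(cube, mask) > 50
```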
According to Bioucas-Dias [54], signal correction/improvement fits in a group of approaches identified as data fusion, found in works such as [62,63,64], which refers to the class of methods capable of dealing with the merging of spectral and spatial data. Such methods include restoration (i.e., noise reduction and spectral improvement), spatial data fusion (for resolution enhancement based on subpixel information), spatial-spectral data fusion (referring to resolution improvement based on the fusion of different parts of a hyperspectral imaging (HSI) cube, considering the cube's different bands) and multisource data fusion (resolution improvement considering more than one data provider, for example, satellites and UAVs).
A final remark in this section concerns data storage issues during the data acquisition stage, which arise mainly for two reasons: (1) huge amounts of data are generated with the use of hyperspectral sensors; and (2) UAVs have a limited payload, which implies the use of a confined number of lightweight storage devices. To tackle these issues, the authors in [65] proposed methods for hyperspectral images captured on board UAVs during mission flights. A pre-processing step is used to transform hyperspectral data cubes into speech-like signals. Afterwards, those signals undergo a compression stage that processes them by means of linear prediction.
After pre-processing operations over acquired data, the next stage is to perform data processing and analysis, which is addressed in the following section.

4. Data Processing and Analysis

This section focuses on approaches to hyperspectral data processing and analysis—including target detection, classification and vegetation index (VI) operations—which are usually carried out after the pre-processing operations that handle radiometric, geometric or even atmospheric calibrations.

4.1. Dimension Reduction

The hyperspectral data pre-processing stage is usually followed by a dimension reduction operation. Accordingly, Burger and Gowen [66] addressed the importance of handling hyperspectral data more efficiently to tackle dimensionality issues that have a harmful effect on computational processing in general, as well as on approaches like neural networks (Hughes phenomenon) in particular [67]. According to the authors, there are several methods for compression (spatial domain in image processing) and dimension reduction (reduction of hyperspectral cubes' dimensions) that aim to achieve efficient data handling: wavelet compression, band selection and projection methods. Among the latter, there is the principal component analysis (PCA) method for concentrating the spectral variance, which can be combined with others (like wavelet compression). Burger and Gowen [66] highlighted the maximum noise fraction (MNF) as a PCA-based method capable of achieving good classification performance with low distortion rates, even better than PCA alone [68,69]. Random projection and independent component analysis (ICA) are also mentioned as relevant projection methods, along with multivariate curve resolution (MCR) analysis, which aims to decompose mixed spectra into pure components and corresponding concentrations. In [54], dimension reduction problems belong to the data fusion scope and are handled by spectral data fusion approaches, in which reduction is achieved by redundancy analysis.
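As a concrete illustration of projection-based dimension reduction, the sketch below applies plain PCA to a hyperspectral cube using scikit-learn; it is a generic example, not the MNF variant discussed above, and the cube shape is an assumption:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_cube(cube, n_components=10):
    """Project a (rows, cols, bands) cube onto its first principal
    components, returning a (rows, cols, n_components) cube."""
    rows, cols, bands = cube.shape
    flat = cube.reshape(-1, bands)          # pixels as samples
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(flat)
    # fraction of spectral variance retained by the projection
    print("explained variance:", pca.explained_variance_ratio_.sum())
    return scores.reshape(rows, cols, n_components)
```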

4.2. Target and Anomaly Detection

Essentially, target/anomaly detection consists of a binary classification technique that labels every pixel in the hyperspectral cube as belonging to target/background or outlier/background, respectively [70]. Because its specific goal is to find a target/anomaly in the data through the classification of two classes, this detection approach is treated separately from the classification methods that will be addressed after this section. A graphical summary consisting of a taxonomy of target/anomaly detection is provided at the end of this subsection.
Manolakis and Shaw [71] have been bridging the gap between remote sensing and signal processing by providing an extensive overview of the statistical tools (mainly based on [72]) that can be used to handle hyperspectral data concerning pixels of interest. More specifically, they address target detection, which refers to identifying a specific material in the HSI cube. It involves a binary hypothesis testing problem in which each pixel is assigned a target or non-target label. In this task, the criterion is to maximize the detection probability while keeping false alarms at a predefined desirable rate (also known as the Neyman–Pearson criterion [72]).
Reflectance spectra are stored in libraries and must then be compared with the acquired data, after conversion to reflectance and considering some corrections related with distortions such as atmospheric noise. The absence of a priori information (libraries or the needed data to work with a target reflectance) calls for an approach in which spectral signatures that differ from the background are searched (anomaly detection).
The likelihood ratio (LR) is one of the best-known approaches to check the target probability. It is given by Equation (1):
$$\Delta(x) = \frac{p(x \mid \text{signal present})}{p(x \mid \text{signal absent})}$$
in which a target is declared when Δ(x) > η, where η is a threshold. The threshold that separates targets from non-targets should be set to properly balance the probabilities of false alarm and detection. Receiver operating characteristic (ROC) curves are useful for analyzing such a trade-off.
Whenever there are unknown target and background parameters, a trick can be used: replacing the unknown parameters in the LR by maximum likelihood estimates. Thresholds should be automatically determined in practical applications (i.e., without an operator's intervention), enabling unattended detection. To that end, Manolakis and Shaw [71] underline the importance of the constant false alarm rate (CFAR) processor for dealing with misclassifications.
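With labeled validation data, the threshold selection described above can be made concrete: sweep the detector scores along a ROC curve and keep the highest-detection-probability threshold whose false alarm rate stays below a chosen ceiling, in the CFAR spirit. A minimal sketch using scikit-learn (variable names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_far(scores, labels, target_far=1e-3):
    """Pick the detector threshold keeping the false alarm rate at or
    below a desired level.

    scores: per-pixel detector output (e.g., likelihood ratio values)
    labels: 1 for target pixels, 0 for background
    Returns (threshold, detection probability, false alarm rate).
    """
    far, pd, thresholds = roc_curve(labels, scores)
    admissible = np.where(far <= target_far)[0]
    best = admissible[-1]  # highest Pd among admissible operating points
    return thresholds[best], pd[best], far[best]
```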
In [16], the importance of choosing a good threshold to minimize the occurrence of missed targets and false alarms is also highlighted. While a ROC curve provides the means to analyze the threshold variation involved in detector evaluation, the Bayes and Neyman–Pearson criteria are suitable methods for optimum threshold selection in cases where the conditional densities used in the LR calculation are completely known. However, in most real cases, the probability densities rely on unknown targets and backgrounds. A widely used workaround is to replace unknown parameters with maximum-likelihood estimates through a process known as the generalized likelihood ratio (GLR), from which adaptive detectors result. However, as the target class is very small, a problem arises with adaptive detectors: target density parameters become difficult to estimate. In practice, target detection should rely on automatic threshold setters, considering that a high false alarm rate has its computational costs. Detectors based on CFAR represent a key approach to solving the referred issue inasmuch as they are immune to background variations and noise, according to another work by Manolakis and Shaw [16]. After some performance considerations, they proposed guidelines for designing good detection methods, dividing them into two main classes: full pixel and subpixel. Both will be addressed later in more detail.
There are many detection algorithms, but their application depends on factors such as the type of models used for spectral variability, pixel purity and the models used for describing mixed pixels. Spectral variability is usually described in one of the following forms:
  • geometrically: uses a subspace of the spectra in which the matrix defining the variability of the subspace can be a spectral signature library (on data or even vectors), obtained from statistical techniques;
  • statistically: spectral variability is described according to probability distribution models. Statistics such as the mean vector and covariance matrix are computed at different moments under a multivariate normal distribution assumption. Thus, variability can be measured from a uniform distribution of the data space.
Manolakis and Shaw [71] reinforced that variations in the material surface, along with factors induced by atmospheric conditions, sensor noise, material composition and location (among others), usually change spectra readings. Moreover, most of the spectra readings acquired for real applications are likely to be random due to the combination of certain variability factors, which makes direct correlations to spectral libraries—collections of reflectance spectra measured from materials of known composition [73]—infeasible. According to Manolakis and Shaw, such statistical variability is more suitably analyzed using probabilistic models, which can include the following approaches:
  • Probability density models: this kind of model consists of a scatter set of the reflectance values for a range of spectral bands, aiming to identify different spectral classes in a scene using clustering algorithms (e.g., k-means) and classification methods (e.g., color attribution);
  • Subspace models: are applied to analyze the variability within an M-dimensional band space, from a K-dimensional set, where M < K. PCA is one of the approaches within this probabilistic model;
  • Linear spectral mixing models: are used to estimate the composition of the image’s pixels in those cases wherein there are pixels composed of a small number of unique spectral signatures corresponding to different components (endmembers).
As aforementioned, target detection methods can be classified into the following fundamental types: full pixel and subpixel [16,71]. The former refers to pure pixels that do not have "contaminating" interference from the background. On the other hand, subpixel methods are more challenging, since they involve mixed spectra corresponding to distinct materials.
There are two main ways of analyzing mixed spectra: (1) using linear mixing models (LMM) [72], which assume that a pixel is constituted by a linear combination of a small set of spectra known as endmembers (also in agreement with [16]); and (2) through stochastic mixing models, in which the endmember spectra are treated as random and independent.
It turns out that mixed spectra can be framed as spectral unmixing problems. According to Bioucas-Dias [54], the purpose of spectral unmixing (see, for example, [74,75]) is to restore the ability to ascertain materials, i.e., to retrieve the spectral endmembers and their abundances. Several conditions can compromise that ability, such as light, intimate mixtures and the spectral resolution of the sensor itself. The general name for the mathematical model that deals with these kinds of issues is radiative transfer theory (RTT). Within the spectral unmixing scope, there are several approaches, such as linear unmixing (an inverse problem formulation for ICA analysis, widely used in the past decade), pure pixel-based approaches (which assume the presence in the data of one or more pure pixels per endmember), approaches based on non-pure pixels (endmembers are inferred by fitting a minimum volume simplex to the data in the absence of pure pixels) and sparse regression unmixing (a recent method that demands ground spectral libraries to find the optimal subset of signatures that can best model each pixel). Unmixing is associated with target detection by convenience, but it can also be applied to classification problems.
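Under the LMM, abundance estimation reduces to a constrained least-squares problem. The sketch below uses non-negative least squares from SciPy and renormalizes the result as a crude stand-in for the sum-to-one constraint; the endmember matrix is assumed to come from a library or from a prior endmember extraction step:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Abundance estimation under the linear mixing model x = E a + n.

    pixel:      (bands,) spectrum
    endmembers: (bands, n_endmembers) matrix E of endmember signatures
    Non-negativity is enforced by NNLS; renormalization approximates
    the sum-to-one constraint.
    """
    abundances, residual = nnls(endmembers, pixel)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances
```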
In the formulation of detection problems, "background only" and "background plus target" constitute the competing hypotheses. Unknown parameters, such as the background covariance matrix, have to be estimated. Then, a generalized likelihood ratio test (GLRT) approach [72] can be applied as an adaptive detector.

4.2.1. Full Pixel Detection

Usually, the process for full pixel detection includes clustering two groups of pixels, separated into target and background. Then, an LR (concept depicted in Figure 5) or a GLR approach is applied to decide whether a certain pixel falls in the target or in the background, by analyzing its position relative to some scalar decision boundary (scalar threshold). According to [16], the use of normal probability models leads to better performance.
To address the problem of overlapping background and target classes, which might lead to classification errors, some approaches are available: LR detectors and adaptive matched filters (AMF). The former uses multivariate normal vectors that can be checked for a spectrum through a probability density function that considers a mean vector and a covariance matrix. A Neyman–Pearson detector can be designed if the probability density function is satisfied, for example, with a matched filter like Fisher's linear discriminant, which enables detection using an automatically calculated threshold. Alternatively, an AMF approach can be applied to the same end. It uses the distribution of a sample covariance matrix (from libraries or local data), determined by Richmond [76], and focuses on the normal distribution of the background under the "target absent" test hypothesis. For target detection, the Spectral Angle Mapper (SAM), Equation (2), can be used as a valid algorithm if the conditions comply with well-separated distributions and small dispersions. Optimal properties are not applicable here, according to Manolakis [71].
$$\mathrm{SAM}(x, y) = \arccos\left(\frac{x^{T} y}{\lVert x \rVert \, \lVert y \rVert}\right)$$
where $x^{T} y$ is the dot product of vectors x and y and $\lVert \cdot \rVert$ denotes the Euclidean norm.
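Equation (2) translates directly into code. The sketch below returns the spectral angle in radians, with a clip guarding against floating-point rounding pushing the cosine outside the domain of arccos:

```python
import numpy as np

def sam(x, y):
    """Spectral angle (radians) between spectra x and y, Equation (2)."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

# a pixel matches a reference when the angle falls below a threshold,
# e.g.: match = sam(pixel, reference) < 0.1
```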

4.2.2. Subpixel Target Detection

If it is known that the target of interest only occupies part of the pixel (presence of a mixed background), another set of approaches should be applied, namely unstructured and structured background models.
Unstructured background models are meant for backgrounds modelled as additive noise following a multivariate normal distribution with a (forced) zero mean and a covariance matrix. The competing hypotheses in this approach are shown in Equation (3):
$$\begin{cases} H_0:\; x = b & (\text{target absent}) \\ H_1:\; x = S\alpha + b & (\text{target present}) \end{cases}$$
where Sα corresponds to a spectral signature subspace model, b is the background and x is the pixel under test.
In the presence of such backgrounds, it is assumed that materials are spread independently and identically within a given pixel. According to Manolakis [16], the design of a practical detector could involve merging the GLR detector proposed by Kelly [77,78]—which includes the threshold to set up the probability of detection (Pd) and the probability of false alarm (Pfa)—with the Reed and Yu [79] approach using a multivariate analysis of variance formulation. Such a merge results in the ability to estimate the Mahalanobis distance of the pixel under test from the background mean, which gains adaptability through the CFAR property. The authors highlighted its usefulness for anomaly detection in hyperspectral data applications (an approach for spectral data with an unknown target). Regarding target detection, one must consider that the amount of background is variable in unstructured cases. Thereby, estimators relying on variance—directly related to the fill factor of the target—can be used, such as the adaptive coherence/cosine estimator (ACE) [80,81]. On the other hand, structured backgrounds modelled by subspace models have, as their only source of randomness, additive noise with unknown variance. Manolakis et al. [16] proposed a GLRT detector [72,82,83] relying on the substitution of the likelihood ratio by maximum-likelihood estimates of the unknown quantities. Its geometrical interpretation is also provided to help understand the criteria for fitting test pixels to the different models, namely the full linear model (target plus background) and the reduced model (background). Conclusions regarding target presence/absence can be drawn based on such fitting.
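For reference, the ACE estimator mentioned above has a compact closed form once the background mean and covariance are estimated. The following sketch assumes targets are rare enough that global scene statistics approximate the background; the regularization constant is an implementation convenience, not part of the original formulation:

```python
import numpy as np

def ace(cube, target, eps=1e-6):
    """Adaptive coherence/cosine estimator over a (rows, cols, bands) cube.

    target: (bands,) reference signature.
    Scores close to 1 indicate pixels well aligned with the target
    after background whitening.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = np.cov(Xc, rowvar=False) + eps * np.eye(bands)  # regularized
    cov_inv = np.linalg.inv(cov)
    s = target - mu
    num = (Xc @ cov_inv @ s) ** 2
    # row-wise quadratic forms x^T C^-1 x for every pixel
    den = (s @ cov_inv @ s) * np.einsum("ij,jk,ik->i", Xc, cov_inv, Xc)
    return (num / (den + eps)).reshape(rows, cols)
```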
For materials that overlap with each other, making their spectral distinction difficult, a nonlinear decision boundary can be used. Manolakis et al. [16] provided an example of how to distinguish cars and trees overlapping in a scene. Plotting two-band data as points in a two-dimensional space, with the reflectance values as coordinates, gives origin to a pair of distinguishable data clusters related to the involved elements. According to the authors, this is the essence of multiple spectral-band data exploitation in remote sensing spectroscopy. However, both UAS flight altitudes and sensors' improved spatial resolution should alleviate mixed-spectra issues in tasks related to forestry and agriculture monitoring and assessment.
Meanwhile, other developments were made by Nasrabadi [70], who addressed anomaly and target detection with recent statistical signal processing and machine learning approaches. In spite of his experimental results concerning full-pixel targets, he claimed that the addressed methods can be extended to subpixel targets. For this reason, this work is referred to without being assigned to either targeting method. According to the author, anomaly detection aims at the pattern recognition of detected objects that stand out from the cluttered background. Along with some already mentioned anomaly detectors, such as the Reed–Xiaoli (RX) [79] and subspace-based [84] detectors, others were highlighted as emerging approaches, more specifically Kernel RX [85]—a non-linear version of the RX detector to deal with the non-Gaussian distribution of the clutter background, which cannot be easily modelled due to insufficient training data and knowledge about Gaussian mixtures—and the support vector data description (SVDD) [86], which is a classifier capable of estimating the support region for a given data set, avoiding prior information about its distribution. Anomalous change detection (ACD) was defined by [87] as an algorithm capable of suppressing environmental changes to highlight material transformations, given a pair of hyperspectral images; they also provided comparison data between several ACD algorithms. Regarding target detection, besides the spectral matched filter [88], matched subspace [82] and orthogonal subspace projection [89] detectors, kernel-based target detectors (proposed considering [90])—capable of extending previous target detectors, thus enabling the analysis of nonlinear features through kernels—and continuum fusion (CF) detectors (e.g., [91])—for dealing with the variability of target detection model parameters—were also presented. Additionally, theoretical background on sparse representation classifiers relying on training data for all the classes is provided.
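The global RX detector referred to above scores each pixel by its Mahalanobis distance from the background mean. A minimal sketch, assuming the background statistics can be estimated from the whole scene (local-window variants exist but are not shown):

```python
import numpy as np

def rx(cube, eps=1e-6):
    """Global Reed-Xiaoli anomaly score for a (rows, cols, bands) cube.

    High scores flag pixels far (in Mahalanobis distance) from the
    estimated background distribution.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + eps * np.eye(bands))
    diff = X - mu
    scores = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return scores.reshape(rows, cols)
```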
Still concerned with target detection gaps (Figure 6)—related to material discrimination in the presence of mixed pixel spectra and the large number of false alarms output by algorithms relying on classical detection theory and physics-based signal models—false alarm mitigation (FAM) and target identification (TID) algorithms were recently developed and presented, after a set of introductory subjects including spectral variability, subpixel target modelling, likelihood detectors, matched filters and ROC-based performance analysis [92]. While FAM [93] is considered a promising approach to deal with the aforementioned issue of false alarm rates, TID is useful to determine whether or not a given pixel within a certain detector contains the target. The latter is also suitable for mixed spectra problems, which are addressed by unmixing techniques and material identification through library spectra association. Typical approaches are inspired by existing methods (e.g., [94,95,96]).
To conclude this subsection devoted to target detection within hyperspectral data, there are other works worth consulting [54], along with some related references [71,97,98] that might complete and update some of the previously addressed approaches, such as anomaly detection (identification of pixels significantly different from the neighboring background, in which no prior knowledge is considered), signature-based target detection (a matched filter that finds pixels relying on prior spectra knowledge), sparse representation target detection (classification using a sparse representation given by a short sample of target and background libraries) and nonlinear detectors (machine learning applications).

4.3. Other Classification Methods

For Manolakis and Shaw [71], classification methods consist of the assignment of pixels to classes or themes (producing what are also known as thematic maps). For instance, in land cover classification tasks, the user has to determine classes by using training sets, spectral libraries and/or ground truth information, considering the minimization of the probability of misclassification as the criterion.
Burger and Gowen [66] point out the availability of a wide set of classification and regression algorithms, including unsupervised methods like k-nearest neighbor and hierarchical clustering, as well as supervised classification such as partial least squares discriminant analysis and SAM. With regard to statistical methods, [67] starts by explaining that class discrimination must rely on some statistical measurement—like the Bhattacharyya distance [99]—which typically involves the class-conditional density functions desirable for discrimination. Such functions have an associated mean value and covariance matrix. It turns out that classes result from maximum likelihood classifiers—which can be applied through, for example, the minimum distance to means, Fisher's linear discriminant or a Gaussian classifier—that should be applied considering two main aspects: variance behavior and spectra correlation. According to [67], neural networks perform well on classification but are demanding in terms of computation and training data sets. Feature extraction for class discrimination might be achieved using one of two approaches: discriminant analysis feature extraction (DAFE) or decision boundary feature extraction (DBFE). Finally, a data analysis paradigm was proposed, as well as a concrete example using a mall—located in Washington, District of Columbia (DC), USA—in which the final result is the discrimination of the scene's elements, including roofs, roads and grass.
Plaza et al. [100] revised some alternative approaches to classical statistical analysis (e.g., [90,101,102,103,104,105]) and proposed some methods based on support vector machines (SVM) and Markov-based techniques. SVMs rely on kernel functions that can be one of three types: Gaussian radial basis function (RBF), polynomial and SAM-based. They noticed that, with a small number of training samples—more specifically 10, 20, 40, 60, 80 and 100 pixels of a given class, randomly extracted from a wider set—all three kernel functions allowed the SVM approach to reach above 92% classification accuracy with the 10-pixel sample. The full training sample (of the wider set) did not increase this rate significantly. Transductive SVM (TSVM) was also explored by the authors, who defined it as an approach "based on specific iterative algorithms, which gradually search the optimal discriminant hyperplane in the feature space with a transductive process that incorporates unlabeled samples in the training phase". A comparison between SVM and TSVM, based on a subset of the training samples and considering some nuances—like variables to control the misclassification error—was carried out, and the results for the TSVM approach were better under the conditions specified by the authors. Kernel-based classifiers related to the spectral (e.g., Euclidean [106]) and spatial (e.g., using means and standard deviations) domains were then presented for a contextual information integration approach that leads to better classification results when compared to both of the aforementioned approaches. Additionally, they proposed an enhanced approach based on extended morphological profiles and Markov random fields, along with a method for spectral unmixing. A recent study [107] shows that hyperspectral image transductive classification is still being used, this time as an approach based on a co-training process to manage spectral and spatial information. Three kinds of classifiers (spectral, frequency-based and morphology-based spatial profile) are first learned from the originally labelled part of the input data. Then, an iterative co-training phase takes over, wherein classifiers are learned from the imagery data. Also, a separate set of classifiers is used to predict unlabeled pixels. According to the authors, this approach surpasses the accuracy of state-of-the-art methods of the same class of classifiers.
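A small-training-set SVM experiment along the lines summarized above can be reproduced in a few lines with scikit-learn. The sketch below is generic: the label map and its encoding (0 meaning unlabeled) are assumptions of this example, not part of any cited work:

```python
import numpy as np
from sklearn.svm import SVC

def train_pixel_svm(cube, label_map):
    """Train an RBF-kernel SVM on the labeled pixels of a cube.

    cube:      (rows, cols, bands) reflectance array
    label_map: (rows, cols) integer array, 0 = unlabeled, >0 = class id
    """
    mask = label_map > 0
    X = cube[mask]        # (n_labeled, bands) spectra as feature vectors
    y = label_map[mask]
    clf = SVC(kernel="rbf", gamma="scale", C=10.0)
    clf.fit(X, y)
    return clf

# full-scene prediction afterwards, e.g.:
# rows, cols, bands = cube.shape
# pred = clf.predict(cube.reshape(-1, bands)).reshape(rows, cols)
```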
Bioucas-Dias [54] agrees with the previous authors by stating that the goal underlying classification problems is to attribute labels to each pixel (e.g., [108,109]). They enumerated a set of techniques that can be applied to classification, including: feature mining (which, in turn, is subdivided into feature extraction and selection approaches), supervised classification (to discriminate classes of pixels based on, for example, SVM) and semi-supervised classification (enlargement of the training set composed of labelled samples by considering unlabeled ones).

4.4. Vegetation Indices

VIs enable the analysis of several properties, such as the leaf area index (LAI), and the assessment of biophysical, physiological or biochemical crop parameters. In [110], VIs were classified into broad and narrow bands, with the latter group considered more appropriate for hyperspectral data [111]. Some of the addressed narrowband indices include the chlorophyll absorption ratio index (CARI), greenness index (GI), greenness vegetation index (GVI), modified chlorophyll absorption ratio index (MCARI), modified normalized difference vegetation index (MNDVI), simple ratio (SR, including narrowband variants 1–4), transformed chlorophyll absorption ratio index (TCARI), triangular vegetation index (TVI), modified vegetation stress ratio (MVSR), modified soil-adjusted vegetation index (MSAVI) and PRI.
Indeed, VIs have been widely applied to hyperspectral data for several purposes [40,112,113,114,115,116,117]. For example, Haboudane et al. [112] carried out a study to evaluate the sensitivity of VIs to green LAI and to modify some of them in order to enhance their responsivity to LAI variations. Modified versions of TVI and MCARI stood out as the best predictors of green LAI. Vineyard condition assessment was addressed in [113], pointing out the PRI as one of the indices most sensitive to carotenoid content Cx+c and chlorophyll-carotenoid ratios Cab/Cx+c. TCARI combined with a broadband index known as the optimized soil-adjusted vegetation index (OSAVI) was the most consistent for estimating Cab on aggregated and pure vine pixels. Later, an assessment of 12 VIs was performed by Lin [114], in which the TCARI/OSAVI combination was highlighted as the best for chlorophyll estimation while, simultaneously, resisting LAI variations; TCARI/MSAVI occupied second place in that ranking. In [40], 3 years of UAV-acquired hyperspectral data over vineyard fields were used to carry out a study with VIs, leading to the conclusion that the combination of the R515/R570 (sensitive to Cx+c) and TCARI/OSAVI (sensitive to Ca+b) narrowband indices is suitable for mapping carotenoid concentration at the pure-vine level. According to Liang et al. [115], PRI and, once again, TCARI/OSAVI are the most suitable indices for leaf chlorophyll content estimation; regarding canopy chlorophyll content estimation, SR and SR2 are at the top of their ranking. In a recent work on rice [117], MSAVI is among the VIs that presented strong and significant relationships with LAI estimation at different phenological stages. For more information on hyperspectral VIs, the fourteenth section of the fifth part of [116]—a book devoted to hyperspectral remote sensing of vegetation—is available for consultation.
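As an illustration of the narrowband indices discussed above, the sketch below computes the TCARI/OSAVI ratio using the band formulations commonly published in the literature (550, 670, 700 and 800 nm); nearest-band selection is an assumption of this sketch, since sensors rarely sample these wavelengths exactly:

```python
import numpy as np

def band(cube, wavelengths, nm):
    """Reflectance at the band closest to the requested wavelength."""
    return cube[..., int(np.argmin(np.abs(wavelengths - nm)))]

def tcari_osavi(cube, wavelengths):
    """TCARI/OSAVI ratio, one of the chlorophyll proxies discussed above,
    using the commonly published narrowband formulations."""
    r550 = band(cube, wavelengths, 550)
    r670 = band(cube, wavelengths, 670)
    r700 = band(cube, wavelengths, 700)
    r800 = band(cube, wavelengths, 800)
    tcari = 3 * ((r700 - r670) - 0.2 * (r700 - r550) * (r700 / (r670 + 1e-12)))
    osavi = 1.16 * (r800 - r670) / (r800 + r670 + 0.16)
    return tcari / (osavi + 1e-12)
```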
In [12], a wide survey on optimal hyperspectral narrow bands (between 400 and 2500 nm) for studying vegetation and agricultural crop biophysical and biochemical properties was presented in the form of a table with sections regarding band classes, more specifically: blue, green, red, RE, NIR, far near-infrared (FNIR), early short-wave infrared (ESWIR) and far short-wave infrared (FSWIR).
As an alternative to the presented hyperspectral data processing, emerging approaches for dealing with HSI complexity based on deep learning (DL) are worth mentioning. For example, in [118], a framework that joins dimension reduction and deep learning to allow hyperspectral image classification based on spectral-spatial features is proposed. Discriminant algorithms for spectral features and convolutional neural networks (CNN) for spatial features are at the basis of this proposal, which is reported to outperform commonly used methods for hyperspectral image classification. For saliency detection regarding band selection in HSI, a manifold ranking approach—which is also extended to DL—is presented in [119]. The spectral-spatial residual network found in [120] consists of a supervised deep learning framework capable of discriminating features from abundant spectral signatures and spatial contexts in an HSI (provided as input and without the need for feature engineering). In [121], both attribute profiles and DL approaches were merged with the goal of classifying HSI. Profiling working together with a CNN showed better results than using each of the involved approaches individually; the need for lighter network architectures and deep network types was also highlighted. Even for anomaly detection, CNNs demonstrated the ability to outperform classical methods such as RX-based approaches [122].
To sum up this section, dimension reduction focuses on data simplification for computational performance improvement while ensuring information conservation. Target detection refers to the identification of some particular feature of interest through imagery spectral profile analysis using pure inference statistics. Full pixel and sub-pixel methods are available for different scenarios. Other types of classification provide extensive analysis and categorization of a whole scene. The machine learning-based approaches can be understood as general classification since their black boxes provide flexible ways of classifying a scene and detecting particular features. For estimating vegetation properties and status, several indices are available. Emerging DL approaches are also strongly contributing to HSI processing with methods that include dimension reduction, classification and anomaly detection.
To deal with the mathematical complexity entangled in pure statistical approaches, several efforts have been made towards the development of software and libraries focused on hyperspectral data processing. The results of such efforts are presented in the next section.

5. Software and Libraries for Dealing with Hyperspectral Data

Handling, processing and analyzing hyperspectral data demands skilled workmanship with advanced knowledge of statistics, as evidenced in Section 4. Therefore, professionals relying on remote sensing—for example, farmers, foresters or even code developers serving agriculture, forestry and related areas—are required to acquire that knowledge somehow, either through training courses or by hiring external resources. To tackle that situation, some entities have been making efforts to hide the mathematical complexity from those professionals. In this section, a brief overview of software tools and programming libraries that represent such efforts is presented.
Regarding software tools, there are some mature solutions that allow interacting with hyperspectral data while demanding only awareness of the processes needed to carry out data analysis, rather than of the inherent mathematics. For example, the Earth Resource Data Analysis System (ERDAS) [123] is a commercial and complex software package with a graphical user interface for manipulating not only hyperspectral imagery—for which many of the operations addressed in the previous section, such as supervised classification, are supported—but also optical panchromatic, multispectral, radio detection/direction and ranging (RADAR) and light detection and ranging (LIDAR) data. Another tool is the Environment for Visualizing Images (ENVI) software [124], which combines advanced image processing and proven geospatial technology to help extract meaningful information from all kinds of data and therefore improve the decision-making process. The Tactical Hyperspectral Operations Resource (THOR) is a special ENVI package that enables orthorectification, atmospheric corrections and target detection, among other operations on data cubes. There is also ImageLab [125], which processes data from various spectroscopic imaging techniques, such as submillimeter radiation (THz band), optical, ultraviolet-visible (UV-Vis), infrared, Raman or mass spectrometry. Based on multivariate statistical methods, it enables the user both to analyze and classify acquired hyperspectral images and to merge them with maps of physical properties and high-resolution RGB photos, through a convenient user interface that gives full control over its range of functions. EXPRESSO 13 [126] is the flagship software of Brandywine (a hyperspectral sensor manufacturer); it consists of real-time hyperspectral image processing and control with an intuitive interface and provides the standard algorithms (e.g., k-means clustering, matched filtering and the MNF transform). Finally, Spectronon [127] is another analysis software that provides tools such as hyperspectral classification (e.g., based on SAM and logistic regression), VI determination, advanced visualization tools and support for user-written Python plugins; it is designed to deal with Resonon's spectral sensors. Next, available programming libraries for hyperspectral data processing are addressed, including some application programming interfaces (APIs) released by the companies behind the aforementioned software tools.
Programming libraries, as it turns out, are fewer in number. A free open-source option that seems to be mature is the Spectral Python (SPy) module [128], released under the GNU General Public License (GPL); its documentation seems to be well structured and is validated by some published works [79,99]. There is also Hyperspectral Python (HypPy) [129], which works with the ENVI file format for images and can be used alongside other software, such as ENVI itself. Another option is the Hyperspectral Image Analysis Toolbox (HIAT) [130], a collection of functions for the analysis of hyperspectral and multispectral data in the Matlab environment; it provides processing methods such as discriminant analysis, principal component analysis, Euclidean distance or maximum likelihood, but it seems to no longer be covered by developer support [131]. MultiSpec [132] is a freeware processing system for analyzing multispectral and hyperspectral images developed by Landgrebe (author of [67], for example). Additionally, some of the addressed software tools provide libraries for programmers. For example, ENVI has a so-called interactive data language that is easy to learn and use, enabling scientists and professionals to handle hyperspectral data more smoothly, and a C/C++ development kit is also available from ERDAS. Resonon [127] likewise shares a C++ programming interface for those who wish to control hyperspectral imagers; however, hyperspectral data processing and analysis do not seem to be supported by this API.
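As a brief illustration of how such a library hides the underlying mathematics, the snippet below loads an ENVI-format cube with SPy, runs an unsupervised k-means classification and applies a principal components transform. The calls follow SPy's documented API, but the file name is a hypothetical placeholder.

```python
import spectral

# Load an ENVI-format hyperspectral cube (header file plus binary data)
img = spectral.open_image("flight_line.hdr")  # hypothetical file name
data = img.load()

# Unsupervised k-means: 10 clusters, at most 30 iterations
class_map, centers = spectral.kmeans(data, 10, 30)

# Principal components transform for dimension reduction
pc = spectral.principal_components(data)
reduced = pc.reduce(fraction=0.999).transform(data)  # keep 99.9% of variance
print(class_map.shape, reduced.shape)
```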
Just as for statistics-based hyperspectral data manipulation, some engines and libraries (e.g., TensorFlow [133] and Theano [134]) that support the development of machine/deep learning applications are available.
The next section presents a set of works that combine UAVs, hyperspectral sensors and data analysis, within the scope of agriculture, forestry and related areas.

6. Applications in Agriculture and Forestry Areas

In this section, applications employing hyperspectral data acquisition, processing and analysis in areas related to farming and wild vegetation management are presented, with a special focus on matters ranging from early sowing tasks to post-crop stages, passing through the estimation of parameters (e.g., carotenoid content, biomass and nitrogen) and disease monitoring. Works are presented with a clear division between the agriculture, forestry and agroforestry areas.

6.1. Agriculture

Methods for leaf carotenoid content estimation in vineyards, using high-resolution hyperspectral imagery acquired from a UAV, are presented in [40]. The atmospheric correction was conducted using the total incoming irradiance at 1 nm intervals, simulated with the simple model of the atmospheric radiative transfer of sunshine (SMARTS) developed by the National Renewable Energy Laboratory, United States (US) Department of Energy (Gueymard, 1995, 2001). Radiative transfer models and scaling-up methods were used to produce estimations from two years of hyperspectral imagery with good results, although slightly less accurate than in [135], a work with similar measuring purposes.
Pölönen et al. [30] used a Fabry-Perot interferometer (FPI) hyperspectral sensor and UAVs to estimate biomass and nitrogen content. Their approach was tested in the field on wheat and barley plantations. Pre-processing operations involved laboratory calibration as well as spectral and dark-signal corrections. Feature extraction was done using the NDVI spectral index, spectral unmixing and spatial features. Estimation relied on a machine learning approach that joins k-nearest neighbor (k-NN) classification with training data acquired in a laboratory environment.
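As a rough illustration of this kind of estimation, the sketch below trains a k-NN regressor on labeled, spectra-derived features and predicts biomass for new pixels using scikit-learn. The feature layout and the synthetic data are assumptions for illustration only; the authors' actual pipeline combines NDVI, spectral unmixing and spatial features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Illustrative training set: per-pixel feature vectors (e.g., NDVI plus
# unmixing abundances) paired with ground-truth biomass measurements
rng = np.random.default_rng(0)
X_train = rng.random((200, 5))                           # 200 reference pixels
y_train = 2.0 * X_train[:, 0] + rng.normal(0, 0.1, 200)  # synthetic biomass

model = KNeighborsRegressor(n_neighbors=5)
model.fit(X_train, y_train)

# Predict biomass for unlabeled pixels from a new flight campaign
X_new = rng.random((10, 5))
print(model.predict(X_new))
```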
Corresponding objectives were pursued in [136], using a similar sensor under adverse meteorological conditions, which represented a challenge for data processing. According to the authors, varying illumination conditions caused significant radiometric differences between images; however, their radiometric correction was able to reduce the grey-value variation in overlapping images from 14–18% to 6–8%.
In [41], a multitemporal analysis process of hydrological soil surface characteristics (H-SSC) was applied to Mediterranean vineyards, capable of discriminating land classes and translating possible evolutions into decision rules. Classification consists of correcting H-SSC class maps previously derived from any monotemporal classification (pixel- or image-object-based maps), relying on the comparison between the expert knowledge reflected in the possible H-SSC evolutions (successors) and a given monotemporal classification. Mechanical and chemical field interventions are prone to being detected with this approach.
In [49], the authors built their own hyperspectral-based UAS, which includes a 960 g hyperspectral sensor capable of capturing 324 spectral bands (or half of them in binned mode) between 361 and 961 nm, a multirotor drone, positional/orientation sensors and a power supply unit. Validation experiments included the assessment of chlorophyll content and green biomass of pasture and a barley crop, mainly based on calculations involving VIs.
The water status of a lemon orchard was investigated in [137] using field data collection (i.e., leaf-level measurements of chlorophyll fluorescence and PRI data) and seasonal time-series of crown temperature and PRI under different regulated deficit irrigation (RDI) treatments. A UAS—consisting of a UAV plus thermal and hyperspectral sensors—was used to gather data for analyzing pure crown temperature, radiance and reflectance spectra, with the objective of estimating chlorophyll fluorescence, visible ratios and structural indices for water stress detection. Radiometric calibration was performed in relation to a calibrated light source, while atmospheric correction was carried out using the SMARTS model. Index-based measurements and the FluorMOD model together with the FLD3 method were applied for fluorescence estimation, which was in agreement with ground measurements.
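The FLD principle referred to above can be stated compactly: fluorescence is inferred from how much a dark solar absorption line is "filled in" by the plant's own emission. The sketch below implements the basic two-band FLD estimate from irradiance (E) and radiance (L) measured inside and outside the absorption line; FLD3 extends this idea with additional reference bands. All numeric values are illustrative assumptions.

```python
def fld_fluorescence(e_in, e_out, l_in, l_out):
    """Two-band Fraunhofer Line Depth (FLD) fluorescence estimate.

    e_*: solar irradiance inside/outside the absorption line;
    l_*: measured canopy radiance at the same two bands.
    """
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Illustrative values around the O2-A absorption feature (~760 nm)
e_in, e_out = 0.10, 1.00    # irradiance drops sharply inside the line
l_in, l_out = 0.035, 0.250  # radiance partially "filled in" by emission

f = fld_fluorescence(e_in, e_out, l_in, l_out)
print(f"Estimated fluorescence: {f:.4f} (same units as radiance)")
```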
Verticillium wilt—a disease that affects olive crops by blocking water flow at the plant's vascular level—was studied in [138] through the use of thermal imagery, chlorophyll fluorescence and structural and physiological indices (xanthophyll, chlorophyll a + b, carotenoid and blue/green/red (B/G/R) indices) calculated from multispectral and hyperspectral data, as early indicators of the infection's presence and severity.
In [50], the estimation of chlorophyll density in rice paddies at low flight altitudes was the main concern. A hyperspectral sensor was assembled to that end, capable of reading 256 bands equally spaced between 340 and 763 nm. Comparisons of the sensor readings with ground truth demonstrated great accuracy in the estimation of chlorophyll density, particularly in the RE and NIR ranges.
The angular effect on several VIs using UAVs and a hyperspectral sensor was studied in [139], with a focus on wheat. The so-called goniometer system uses a structure from motion approach to assess pointing and position accuracy. As their main conclusion, the authors stated that the UAV-based goniometer represents a useful intermediate step towards the development of correction techniques to attenuate the effects of angular reflectance on spectra.
Researchers from a national laboratory and a state university in Idaho (USA) [140] presented several tests of their remote sensing system, evaluating the influence of improved flights on geometric accuracy, mosaicking capabilities and the ability to discriminate non-vegetation targets, grasses and shrubs. Unsupervised classification based on k-means was used for exploring the imagery acquired in the field, and a matched filter based on SAM was applied for the supervised classification of ground features (e.g., Sandberg bluegrass and bare soil). The former seems to support vegetation management objectives that rely on mapping shrub cover and distribution patterns, while the latter performed poorly, highlighting the need to improve the supervised classification procedures.
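Since SAM underlies the supervised step just described, the snippet below shows its core computation: the spectral angle between every pixel and a reference spectrum, with small angles indicating a likely match. The random cube and the decision threshold are illustrative assumptions.

```python
import numpy as np

def spectral_angle(cube, ref):
    """Angle (radians) between each pixel spectrum and a reference spectrum.

    cube: (rows, cols, bands) reflectance array; ref: (bands,) spectrum.
    """
    dot = np.tensordot(cube, ref, axes=([2], [0]))
    norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(ref)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

# Illustrative data: a small random cube and a target signature
cube = np.random.rand(100, 100, 224)
ref = np.random.rand(224)

angles = spectral_angle(cube, ref)
match_mask = angles < 0.1  # pixels spectrally similar to the reference
print(match_mask.sum(), "candidate pixels")
```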
A study demonstrating the relationship between steady-state fluorescence and net photosynthesis measured under natural light field conditions—both at the leaf and image levels—was carried out by Zarco-Tejada et al. [42], who used a 260-band hyperspectral sensor attached to a UAV to perform tests in twelve production vineyards of Ribera del Duero (northern Spain). Field samples were taken for the biochemical determination of chlorophyll a + b, carotenoids and anthocyanins, while other parameters (e.g., leaf steady-state and dark-adapted fluorescence) were collected on the ground and compared with imagery data, from which fluorescence estimations were derived using the Fraunhofer Line Depth (FLD) principle. Results point to a connection between the two.
Focusing on wheat fields, Kaivosoja et al. [141] exploited classified raster maps derived from hyperspectral data to generate a work task for precision fertilizer application. Flight campaigns were carried out over wheat fields (Finland) to produce classified raster maps with biomass and nitrogen content estimations. Such maps were combined with historical data (e.g., yield maps) to generate proper task maps for farm machinery, and some equations relating nitrogen with biomass were used for statistical purposes. Despite the potential usefulness of this kind of estimation practice, the authors stressed some inaccuracies related to the hyperspectral data processing chain; perhaps ground measurements taken at sparser temporal intervals could contribute to a more accurate classification, thus improving the results.

6.2. Forestry

A method to derive 3D hyperspectral information from lightweight snapshot sensors on board UAVs for vegetation monitoring was proposed in [142], based on a structure from motion approach that results in a digital surface model (DSM) as a representation of the 3D space linked to the objects' reflectance. The sensor used provides 125 bands in the 450–950 nm range at 21 cm resolution. Operations concerning radiometric calibration are extensively documented; since the center wavelengths and full width at half maximum (FWHM) were provided by the manufacturer, spectral calibration was not carried out.
To identify the presence of bark beetle infestations in Norway spruce, Näsi et al. [143] developed a new processing approach for analyzing the spectral characteristics of high-spatial-resolution photogrammetric and hyperspectral image data in a forested environment. The hyperspectral sensor used—an FPI—differs from most others in its stereoscopic capabilities, which suggests improved accuracy in point cloud gathering and subsequent DSM production. The analysis workflow includes image correction according to laboratory calibrations, creation of a 3D geometric model, determination of spectral image mosaics, identification of individual trees, spectral feature extraction and, finally, classification based on supervised k-NN [144]. For three color classes (healthy, infested, dead), an overall accuracy of 76% was obtained, while 90% was achieved when using two classes (healthy, dead).
Small tropical forests were the focus of [145], which presents a work on the assessment of altimetric accuracy involving Bundle Block Adjustment (BBA) with a couple of hyperspectral bands and GCPs. The conclusions were that: (1) ground control is needed to improve altimetric accuracy; and (2) fewer discrepancies were noticed for the GCPs placed at the image border than for those placed in its corners. Preliminary results showed a discrepancy of around 40 cm in the Z coordinate, which is sufficient for forestry applications.

6.3. Agroforestry

Saari et al. [146] integrated the Unmanned Aerial System Innovations (UASI) project, which aims to study how useful lightweight UAVs can be in fields related to forestry and agriculture. They start by pointing out forestry requirements (e.g., tree height estimation and species identification) and crop monitoring applications (e.g., feedstock estimation and nitrogen status assessment), as well as the requirements of UASI systems (collaborative support for farmers or forest professionals). They conclude that the FPI is a suitable sensor for the several tasks concerning the system; this sensor was used to demonstrate the technical feasibility of false color and hyperspectral data in forest inventory and agriculture applications. Later, this research group [147] focused on precision agriculture, with studies on biomass estimation based on support vector regression. Multi-temporal data and point clouds were their main data sources.
Summing up, several works ranging from vegetation status monitoring to disease detection demonstrate how versatile and useful hyperspectral remote sensing using UAVs can be for both agriculture and forestry.

7. Conclusions

UAS are becoming increasingly popular in agriculture and forestry insofar as they represent a cost-effective and readily available tool for surveying land and crops, with the purpose of acquiring data for further analysis to support decision-making and management processes. However, most of the sensors commonly applied to that end (e.g., RGB and multispectral sensors) usually only provide information over a very limited number of bands and/or spectral ranges, which might not suffice to characterize some components of a scanned scene, such as a forest section or a crop area.
To tackle this issue, hyperspectral sensors—until a few years ago used mainly on satellites or manned aircraft—are being redesigned to be lighter and smaller and, thus, more suitable to be carried by UAVs for Earth surface surveying purposes, covering a wider spectral range with narrower bands. Several of these sensors are commercially available and were presented in Section 2, along with a few proposals arising from scientific research.
Hyperspectral sensors usually generate huge amounts of data, since they retrieve sets of spectra composed of hundreds of bands across images of considerable spatial resolution. Such dimensionality demands proper analysis and processing methods. The most important ones were presented in Section 4, right before the software tools and programming libraries that implement most of the addressed mathematical approaches. For remote sensing professionals, land surveying technicians and programmers working on applications for agroforestry and related areas, such tools may prove to be relevant.
Works in the agriculture and forestry areas involving UAVs and hyperspectral sensors were reviewed, focusing on their main purposes and on their general processing and analysis approaches. Notwithstanding the considerable number of applications, they still seem scarce compared with applications involving UAVs and other types of sensors, probably because of the high price of high-resolution spectroscopy, which can compromise the cost-effectiveness of applications in agroforestry and related areas. Another reason for the low number of studies is the complexity of data acquisition and analysis, since a higher level of training is needed to acquire the data. Additionally, some researchers [148] expect that hyperspectral sensors will be applied only for research purposes. However, we disagree with that perspective, due to the increasing number of market solutions that are contributing to the spread of high-resolution spectroscopy devices. Moreover, it is expected that the technological developments of the upcoming years will bring smaller and more affordable devices, turning hyperspectral-based sensing into a mainstream approach for agriculture, forestry and related areas. Even knowing there is still a long way to go until UAS-based remote sensing using hyperspectral sensors becomes effectively affordable for common farmers and foresters, there is, undoubtedly, a window of opportunity for the dissemination of business models relying on regionalized and cost-effective service supply focused on this kind of survey and enabling more accurate decision-support tools. With this in mind, and following the proposal already presented in [149], future efforts will be directed towards the identification and discrimination of phytosanitary issues affecting vineyards through hyperspectral-based remote sensing carried out using UAVs.

Acknowledgments

This work was financed by the European Regional Development Fund (ERDF) through the Operational Programme for Competitiveness and Internationalisation—COMPETE 2020 under the PORTUGAL 2020 Partnership Agreement, and through the Portuguese National Innovation Agency (ANI) as a part of project “PARRA—Plataforma integrAda de monitoRização e avaliação da doença da flavescência douRada na vinhA” (N° 3447) and ERDF and North 2020—North Regional Operational Program, as part of project “INNOVINE & WINE—Vineyard and Wine Innovation Platform” (NORTE-01-0145-FEDER-000038).

Author Contributions

The authors, who form a multidisciplinary team working in informatics, earth sciences and electronics, provided a steady contribution to this paper, each one focusing on their respective area of expertise. Thereby, Telmo Adão, Jonáš Hruška, Luís Pádua and Miguel Bessa carried out the research related to hyperspectral data processing, software tools and applications in agroforestry and related areas. Hyperspectral sensors, as well as pre- and post-flight operations, were addressed by Emanuel Peres, Raul Morais and Joaquim João Sousa, who also designed the paper and managed its development.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

ACD	Anomalous Change Detection
AIS	Airborne Imaging Spectrometer
AMF	Adaptive Matched Filters
API	Application Programming Interface
AVIRIS	Airborne Visible/Infrared Imaging Spectrometer
BBA	Bundle Block Adjustment
BIL	Band-Interleaved-by-Line
BIP	Band-Interleaved-by-Pixel
BSQ	Band Sequential
CARI	Chlorophyll Absorption Ratio Index
CCD	Charge-Coupled Device
CF	Continuum Fusion
CFAR	Constant False Alarm Rate
CMOS	Complementary Metal-Oxide-Semiconductor
CNN	Convolutional Neural Networks
DAFE	Discriminant Analysis Feature Extraction
DBFE	Decision Boundary Feature Extraction
DL	Deep Learning
DN	Digital Numbers
DSM	Digital Surface Model
ENVI	Environment for Visualizing Images
ERDAS	Earth Resource Data Analysis System
ESWIR	Early Short-Wave InfraRed
FAM	False Alarm Mitigation
FLD	Fraunhofer Line Depth
FNIR	Far Near InfraRed
FPI	Fabry-Perot Interferometer
FSWIR	Far Short-Wave InfraRed
FWHM	Full Width at Half Maximum
GaAs	Gallium Arsenide
GCP	Ground Control Point
GI	Greenness Index
GLR	Generalized Likelihood Ratio
GLRT	Generalized Likelihood Ratio Test
GNSS	Global Navigation Satellite Systems
GPL	General Public License
GPS	Global Positioning System
GVI	Greenness Vegetation Index
HIAT	Hyperspectral Image Analysis Toolbox
HSI	Hyperspectral Imaging
H-SSC	Hydrological Soil Surface Characteristics
HypPy	Hyperspectral Python
ICA	Independent Component Analysis
IMU	Inertial Measurement Unit
InAs	Indium Arsenide
InGaAs	Indium Gallium Arsenide
INS	Inertial Navigation Systems
k-NN	k-Nearest Neighbor
LIDAR	Light Detection and Ranging
LMM	Linear Mixing Model
LR	Likelihood Ratio
MCARI	Modified Chlorophyll Absorption Ratio Index
MCR	Multivariate Curve Resolution
MCT, HgCdTe	Mercury Cadmium Tellurium
mNDVI	Modified Normalized Difference Vegetation Index
MNF	Maximum Noise Fraction
MSAVI	Modified Soil-Adjusted Vegetation Index
MVSR	Modified Vegetation Stress Ratio
NASA/JPL	National Aeronautics and Space Administration Jet Propulsion Laboratory
NDVI	Normalized Difference Vegetation Index
NIR	Near InfraRed
NPCI	Normalized Pigment Chlorophyll Index
OSAVI	Optimized Soil-Adjusted Vegetation Index
PCA	Principal Component Analysis
Pd	Probability of Detection
Pfa	Probability of False Alarm
PRI	Photochemical Reflectance Index
RADAR	Radio Detection/Direction and Ranging
RBF	Radial Basis Function
RDI	Regulated Deficit Irrigation
RE	Red-Edge
RGB	Red Green Blue
ROC	Receiver Operating Characteristic
RTT	Radiative Transfer Theory
RX	Reed-Xiaoli
SAM	Spectral Angle Mapper
Si	Silicon
SMARTS	Simple Model of the Atmospheric Radiative Transfer of Sunshine
SNR	Signal-to-Noise Ratio
SPy	Spectral Python
SR	Simple Ratio
SRPI	Simple Ratio Pigment Index
SVDD	Support Vector Data Description
SVM	Support Vector Machines
TCARI	Transformed Chlorophyll Absorption Ratio Index
THOR	Tactical Hyperspectral Operations Resource
THz	Submillimeter Radiation
TID	Target Identification
TSVM	Transductive Support Vector Machines
TVI	Triangular Vegetation Index
UAS	Unmanned Aircraft Systems
UASI	Unmanned Aerial System Innovations
UAV	Unmanned Aerial Vehicle
UV-Vis	Ultraviolet-Visible
VI	Vegetation Index
VNIR	Visible and Near-Infrared

References

  1. Pádua, L.; Vanko, J.; Hruška, J.; Adão, T.; Sousa, J.J.; Peres, E.; Morais, R. UAS, sensors, and data processing in agroforestry: A review towards practical applications. Int. J. Remote Sens. 2017, 38, 2349–2391. [Google Scholar] [CrossRef]
  2. Park, S.; Nolan, A.; Ryu, D.; Fuentes, S.; Hernandez, E.; Chung, H.; O’Connell, M. Estimation of crop water stress in a nectarine orchard using high-resolution imagery from unmanned aerial vehicle (UAV). In Proceedings of the 21st International Congress on Modelling and Simulation, Gold Coast, Australia, 29 November–4 December 2015; pp. 1413–1419. [Google Scholar]
  3. Primicerio, J.; Di Gennaro, S.F.; Fiorillo, E.; Genesio, L.; Lugato, E.; Matese, A.; Vaccari, F.P. A flexible unmanned aerial vehicle for precision agriculture. Precis. Agric. 2012, 13, 517–523. [Google Scholar] [CrossRef]
  4. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  5. Calderón, R.; Navas-Cortés, J.A.; Zarco-Tejada, P.J. Early Detection and Quantification of Verticillium Wilt in Olive Using Hyperspectral and Thermal Imagery over Large Areas. Remote Sens. 2015, 7, 5584–5610. [Google Scholar] [CrossRef]
  6. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2012, 3, 397–404. [Google Scholar] [CrossRef]
  7. Merino, L.; Caballero, F.; Martínez-de-Dios, J.R.; Maza, I.; Ollero, A. An Unmanned Aircraft System for Automatic Forest Fire Monitoring and Measurement. J. Intell. Robot. Syst. 2012, 65, 533–548. [Google Scholar] [CrossRef]
  8. Smigaj, M.; Gaulton, R.; Barr, S.L.; Suárez, J.C. Uav-Borne Thermal Imaging for Forest Health Monitoring: Detection of Disease-Induced Canopy Temperature Increase. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W3, 349–354. [Google Scholar] [CrossRef]
  9. Horcher, A.; Visser, R.J. Unmanned aerial vehicles: Applications for natural resource management and monitoring. In Proceedings of the 2004 Council on Forest Engineering (COFE) Conference: “Machines and People, The Interface”, Hot Springs, AR, USA, 27–30 April 2004. [Google Scholar]
  10. Coulter, D.; Hauff, P.L.; Kerby, W.L. Airborne Hyperspectral Remote Sensing. In Proceedings of the Exploration 07: Fifth Decennial International Conference on Mineral Exploration, Toronto, ON, Canada, 9–12 September 2007; pp. 375–386. [Google Scholar]
  11. Qin, J.; Chao, K.; Kim, M.S.; Lu, R.; Burks, T.F. Hyperspectral and multispectral imaging for evaluating food safety and quality. J. Food Eng. 2013, 118, 157–171. [Google Scholar] [CrossRef]
  12. Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Mohammed, I.A. Hyperspectral Remote Sensing of Vegetation and Agricultural Crops. Photogramm. Eng. Remote Sens. 2014, 80, 697–723. [Google Scholar]
  13. Park, B.; Lu, R. Hyperspectral Imaging Technology in Food and Agriculture; Food Engineering Series; Springer: New York, NY, USA, 2015; ISBN 978-1-4939-2836-1. [Google Scholar]
  14. Multispectral vs. Hyperspectral Imagery Explained. Available online: http://gisgeography.com/multispectral-vs-hyperspectral-imagery-explained/ (accessed on 15 September 2017).
  15. Proctor, C.; He, Y. Workflow for Building A Hyperspectral Uav: Challenges And Opportunities. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-1/W4, 415–419. [Google Scholar] [CrossRef]
  16. Manolakis, D.; Marden, D.; Shaw, G.A. Hyperspectral image processing for automatic target detection applications. Linc. Lab. J. 2003, 14, 79–116. [Google Scholar]
  17. AVIRIS—Airborne Visible/Infrared Imaging Spectrometer—Imaging Spectroscopy. Available online: https://aviris.jpl.nasa.gov/aviris/imaging_spectroscopy.html (accessed on 9 October 2017).
  18. Sahoo, R. Hyperspectral Remote Sensing (Sahoo’s Report); Indian Agricultural Statistics Research Institute: New Delhi, India, 2013; pp. 848–859. [Google Scholar]
  19. Goetz, A.F.H. Three decades of hyperspectral remote sensing of the Earth: A personal view. Remote Sens. Environ. 2009, 113, S5–S16. [Google Scholar] [CrossRef]
  20. Sabins, F.F. Remote sensing for mineral exploration. Ore Geol. Rev. 1999, 14, 157–183. [Google Scholar] [CrossRef]
  21. Maathuis, B.H.P.; van Genderen, J.L. A review of satellite and airborne sensors for remote sensing based detection of minefields and landmines. Int. J. Remote Sens. 2004, 25, 5201–5245. [Google Scholar] [CrossRef]
  22. Teke, M.; Deveci, H.S.; Haliloğlu, O.; Gürbüz, S.Z.; Sakarya, U. A short survey of hyperspectral remote sensing applications in agriculture. In Proceedings of the 2013 6th International Conference on Recent Advances in Space Technologies (RAST), Istanbul, Turkey, 12–14 June 2013; pp. 171–176. [Google Scholar]
  23. Mather, P.M. (Ed.) TERRA-1: Understanding the Terrestrial Environment, the Role of Earth Observations from Space; CRC Press: Boca Raton, FL, USA, 1992. [Google Scholar]
  24. Lin, J.; Singer, P.W. China to Launch Powerful Civilian Hyperspectral Satellite. Available online: http://www.popsci.com/china-to-launch-worlds-most-powerful-hyperspectral-satellite (accessed on 18 April 2017).
  25. Mulla, D.J. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  26. Datt, B.; McVicar, T.R.; Niel, T.G.V.; Jupp, D.L.B.; Pearlman, J.S. Preprocessing EO-1 Hyperion hyperspectral data to support the application of agricultural indexes. IEEE Trans. Geosci. Remote Sens. 2003, 41, 1246–1259. [Google Scholar] [CrossRef]
  27. Moharana, S.; Dutta, S. Spatial variability of chlorophyll and nitrogen content of rice from hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2016, 122, 17–29. [Google Scholar] [CrossRef]
  28. Clark, M.L.; Kilham, N.E. Mapping of land cover in northern California with simulated hyperspectral satellite imagery. ISPRS J. Photogramm. Remote Sens. 2016, 119, 228–245. [Google Scholar] [CrossRef]
  29. Zhang, N.; Wang, M.; Wang, N. Precision agriculture—A worldwide overview. Comput. Electron. Agric. 2002, 36, 113–132. [Google Scholar] [CrossRef]
  30. Pölönen, I.; Saari, H.; Kaivosoja, J.; Honkavaara, E.; Pesonen, L. Hyperspectral imaging based biomass and nitrogen content estimations from light-weight UAV. In Proceedings of the SPIE Remote Sensing, Dresden, Germany, 16 October 2013. [Google Scholar]
  31. WorldView-3 WorldView-3 Satellite Sensor|Satellite Imaging Corp. Available online: http://www.satimagingcorp.com/satellite-sensors/worldview-3/ (accessed on 19 April 2017).
  32. ESA Spatial-Resolutions-Sentinel-2 MSI—User Guides—Sentinel Online. Available online: https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/resolutions/spatial (accessed on 19 April 2017).
  33. AVIRIS—Airborne Visible/Infrared Imaging Spectrometer. Available online: https://aviris.jpl.nasa.gov/ (accessed on 1 August 2017).
  34. Pajares, G. Overview and Current Status of Remote Sensing Applications Based on Unmanned Aerial Vehicles (UAVs). Photogramm. Eng. Remote Sens. 2015, 81, 281–329. [Google Scholar] [CrossRef]
  35. Aasen, H. The Acquisition of Hyperspectral Digital Surface Models of Crops from UAV Snapshot Cameras. Ph.D. Thesis, Universität zu Köln, Köln, Germany, 2016. [Google Scholar]
  36. Sullivan, J.M. Evolution or revolution? The rise of UAVs. IEEE Technol. Soc. Mag. 2006, 25, 43–49. [Google Scholar] [CrossRef]
  37. Pappalardo, J. Unmanned Aircraft “Roadmap” Reflects Changing Priorities. Available online: http://www.nationaldefensemagazine.org/articles/2005/3/31/2005april-unmanned-aircraft-roadmap-reflects-changing-priorities (accessed on 1 September 2017).
  38. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  39. Bock, C.H.; Poole, G.H.; Parker, P.E.; Gottwald, T.R. Plant Disease Severity Estimated Visually, by Digital Photography and Image Analysis, and by Hyperspectral Imaging. Crit. Rev. Plant Sci. 2010, 29, 59–107. [Google Scholar] [CrossRef]
  40. Zarco-Tejada, P.J.; Guillén-Climent, M.L.; Hernández-Clemente, R.; Catalina, A.; González, M.R.; Martín, P. Estimating leaf carotenoid content in vineyards using high resolution hyperspectral imagery acquired from an unmanned aerial vehicle (UAV). Agric. For. Meteorol. 2013, 171–172, 281–294. [Google Scholar] [CrossRef]
  41. Corbane, C.; Jacob, F.; Raclot, D.; Albergel, J.; Andrieux, P. Multitemporal analysis of hydrological soil surface characteristics using aerial photos: A case study on a Mediterranean vineyard. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 356–367. [Google Scholar] [CrossRef]
  42. Zarco-Tejada, P.J.; Catalina, A.; González, M.R.; Martín, P. Relationships between net photosynthesis and steady-state chlorophyll fluorescence retrieved from airborne hyperspectral imagery. Remote Sens. Environ. 2013, 136, 247–258. [Google Scholar] [CrossRef]
  43. Wu, D.; Sun, D.-W. Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: A review—Part I: Fundamentals. Innov. Food Sci. Emerg. Technol. 2013, 19, 1–14. [Google Scholar] [CrossRef]
  44. Sellar, R.G.; Boreman, G.D. Classification of imaging spectrometers for remote sensing applications. Opt. Eng. 2005, 44, 13602. [Google Scholar] [CrossRef]
  45. Carrère, J.P.; Place, S.; Oddou, J.P.; Benoit, D.; Roy, F. CMOS image sensor: Process impact on dark current. In Proceedings of the 2014 IEEE International on Reliability Physics Symposium, Waikoloa, HI, USA, 1–5 June 2014; pp. 3C.1.1–3C.1.6. [Google Scholar]
  46. Hagen, N.; Kudenov, M.W. Review of snapshot spectral imaging technologies. Opt. Eng. 2013, 52, 090901. [Google Scholar] [CrossRef]
  47. Uto, K.; Seki, H.; Saito, G.; Kosugi, Y.; Komatsu, T. Development of a Low-Cost Hyperspectral Whiskbroom Imager Using an Optical Fiber Bundle, a Swing Mirror, and Compact Spectrometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 3909–3925. [Google Scholar] [CrossRef]
  48. Fowler, J.E. Compressive pushbroom and whiskbroom sensing for hyperspectral remote-sensing imaging. In Proceedings of the 2014 IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 684–688. [Google Scholar]
  49. Lucieer, A.; Malenovský, Z.; Veness, T.; Wallace, L. HyperUAS-Imaging Spectroscopy from a Multirotor Unmanned Aircraft System: HyperUAS-Imaging Spectroscopy from a Multirotor Unmanned. J. Field Robot. 2014, 31, 571–590. [Google Scholar] [CrossRef]
  50. Uto, K.; Seki, H.; Saito, G.; Kosugi, Y. Characterization of Rice Paddies by a UAV-Mounted Miniature Hyperspectral Sensor System. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 851–860. [Google Scholar] [CrossRef]
  51. Uto, K.; Seki, H.; Saito, G.; Kosugi, Y.; Komatsu, T. Development of a Low-Cost, Lightweight Hyperspectral Imaging System Based on a Polygon Mirror and Compact Spectrometers. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 861–875. [Google Scholar] [CrossRef]
  52. Rozas, G.; Jusserand, B.; Fainstein, A. Fabry-Pérot-multichannel spectrometer tandem for ultra-high resolution Raman spectroscopy. Rev. Sci. Instrum. 2014, 85, 13103. [Google Scholar] [CrossRef] [PubMed]
  53. Honkavaara, E.; Hakala, T.; Kirjasniemi, J.; Lindfors, A.; Mäkynen, J.; Nurminen, K.; Ruokokoski, P.; Saari, H.; Markelin, L. New light-weight stereoscopic spectrometric airborne imaging technology for high-resolution environmental remote sensing case studies in water quality mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, 1, W1. [Google Scholar] [CrossRef]
  54. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  55. Habib, A.; Xiong, W.; He, F.; Yang, H.L.; Crawford, M. Improving Orthorectification of UAV-Based Push-Broom Scanner Imagery Using Derived Orthophotos From Frame Cameras. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 262–276. [Google Scholar] [CrossRef]
  56. Photonics, Headwall VNIR. Available online: http://www.headwallphotonics.com/spectral-imaging/hyperspectral/vnir (accessed on 2 April 2017).
  57. Duan, S.-B.; Li, Z.-L.; Tang, B.-H.; Wu, H.; Ma, L.; Zhao, E.; Li, C. Land Surface Reflectance Retrieval from Hyperspectral Data Collected by an Unmanned Aerial Vehicle over the Baotou Test Site. PLoS ONE 2013, 8, e66972. [Google Scholar] [CrossRef]
  58. Jakob, S.; Zimmermann, R.; Gloaguen, R. The Need for Accurate Geometric and Radiometric Corrections of Drone-Borne Hyperspectral Data for Mineral Exploration: MEPHySTo—A Toolbox for Pre-Processing Drone-Borne Hyperspectral Data. Remote Sens. 2017, 9, 88. [Google Scholar] [CrossRef]
  59. Hruska, R.; Mitchell, J.; Anderson, M.; Glenn, N.F. Radiometric and Geometric Analysis of Hyperspectral Imagery Acquired from an Unmanned Aerial Vehicle. Remote Sens. 2012, 4, 2736–2752. [Google Scholar] [CrossRef]
  60. Chen, H.S. Remote Sensing Calibration Systems: An Introduction; A. Deepak: Oakland, CA, USA, 1997; ISBN 978-0-937194-38-6. [Google Scholar]
  61. Richter, R.; Schlapfer, D.; Muller, A. Operational Atmospheric Correction for Imaging Spectrometers Accounting for the Smile Effect. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1772–1780. [Google Scholar] [CrossRef]
  62. Mendez-Rial, R.; Calvino-Cancela, M.; Martin-Herrero, J. Accurate Implementation of Anisotropic Diffusion in the Hypercube. IEEE Geosci. Remote Sens. Lett. 2010, 7, 870–874. [Google Scholar] [CrossRef]
  63. Qian, S.E.; Chen, G. Enhancing Spatial Resolution of Hyperspectral Imagery Using Sensor’s Intrinsic Keystone Distortion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 5033–5048. [Google Scholar] [CrossRef]
  64. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L.M. Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef] [Green Version]
  65. Herrero, R.; Cadirola, M.; Ingle, V.K. Preprocessing and compression of Hyperspectral images captured onboard UAVs. In Proceedings of the SPIE 9647, Unmanned/Unattended Sensors and Sensor Networks XI; and Advanced Free-Space Optical Communication Techniques and Applications, Toulouse, France, 13 October 2015; p. 964705. [Google Scholar]
  66. Burger, J.; Gowen, A. Data handling in hyperspectral image analysis. Chemom. Intell. Lab. Syst. 2011, 108, 13–22. [Google Scholar] [CrossRef]
  67. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  68. Du, Q.; Raksuntorn, N. Hyperspectral image analysis using noise-adjusted principal component transform. In Proceedings of the SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XII, Orlando, FL, USA, 4 May 2006. [Google Scholar]
  69. Chen, C. Comparison of principal components analysis and minimum noise fraction transformation for reducing the dimensionality of hyperspectral imagery. Geogr. Res. 2000, 163–178. [Google Scholar]
  70. Nasrabadi, N.M. Hyperspectral Target Detection : An Overview of Current and Future Challenges. IEEE Signal Process. Mag. 2014, 31, 34–44. [Google Scholar] [CrossRef]
  71. Manolakis, D.; Shaw, G. Detection algorithms for hyperspectral imaging applications. IEEE Signal Process. Mag. 2002, 19, 29–43. [Google Scholar] [CrossRef]
  72. Kay, S.M. Fundamentals of Statistical Signal Processing; Prentice-Hall: Englewood Cliffs, NJ, USA, 1998. [Google Scholar]
  73. Shippert, P. Introduction to hyperspectral image analysis. Online J. Space Commun. 2003, 3, 13. [Google Scholar]
  74. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef] [Green Version]
  75. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  76. Richmond, C.D. Derived PDF of maximum likelihood signal estimator which employs an estimated noise covariance. IEEE Trans. Signal Process. 1996, 44, 305–315. [Google Scholar] [CrossRef]
  77. Kelly, E.J. An Adaptive Detection Algorithm. IEEE Trans. Aerosp. Electron. Syst. 1986, AES-22, 115–127. [Google Scholar] [CrossRef]
  78. Kelly, E.J. Adaptive Detection in Non-Stationary Interference, Part III; MIT Lincoln Laboratory: Lexington, MA, USA, 1987. [Google Scholar]
  79. Reed, I.S.; Yu, X. Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1760–1770. [Google Scholar] [CrossRef]
  80. Kraut, S.; Scharf, L.L. The CFAR adaptive subspace detector is a scale-invariant GLRT. In Proceedings of the Ninth IEEE Signal on Workshop on Statistical Signal and Array Processing, Portland, OR, USA, 14–16 September 1998; pp. 57–60. [Google Scholar]
  81. Kraut, S.; Scharf, L.L.; McWhorter, L.T. Adaptive subspace detectors. IEEE Trans. Signal Process. 2001, 49, 1–16. [Google Scholar] [CrossRef]
  82. Scharf, L.L.; Friedlander, B. Matched subspace detectors. IEEE Trans. Signal Process. 1994, 42, 2146–2157. [Google Scholar] [CrossRef]
  83. Manolakis, D.; Siracusa, C.; Shaw, G. Hyperspectral subpixel target detection using the linear mixing model. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1392–1409. [Google Scholar] [CrossRef]
  84. Goldberg, H.; Nasrabadi, N.M. A comparative study of linear and nonlinear anomaly detectors for hyperspectral imagery. In Proceedings of the SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII, Orlando, FL, USA, 9–13 April 2007; p. 656504. [Google Scholar]
  85. Kwon, H.; Nasrabadi, N.M. Kernel RX-algorithm: A nonlinear anomaly detector for hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2005, 43, 388–397. [Google Scholar] [CrossRef]
  86. Banerjee, A.; Burlina, P.; Diehl, C. A support vector method for anomaly detection in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2282–2291. [Google Scholar] [CrossRef]
  87. Pieper, M.; Manolakis, D.; Truslow, E.; Cooley, T.; Brueggeman, M.; Weisner, A.; Jacobson, J. Comparison of hyperspectral change detection algorithms. In Proceedings of the SPIE Optical Engineering + Applications, San Diego, CA, USA, 9–13 August 2015; p. 96110Z. [Google Scholar]
  88. Robey, F.C.; Fuhrmann, D.R.; Kelly, E.J.; Nitzberg, R. A CFAR adaptive matched filter detector. IEEE Trans. Aerosp. Electron. Syst. 1992, 28, 208–216. [Google Scholar] [CrossRef]
  89. Harsanyi, J.C.; Chang, C.I. Hyperspectral image classification and dimensionality reduction: An orthogonal subspace projection approach. IEEE Trans. Geosci. Remote Sens. 1994, 32, 779–785. [Google Scholar] [CrossRef]
  90. Scholkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2001; ISBN 978-0-262-19475-4. [Google Scholar]
  91. Schaum, A. Continuum fusion: A theory of inference, with applications to hyperspectral detection. Opt. Express 2010, 18, 8171–8181. [Google Scholar] [CrossRef] [PubMed]
  92. Manolakis, D.; Truslow, E.; Pieper, M.; Cooley, T.; Brueggeman, M. Detection Algorithms in Hyperspectral Imaging Systems: An Overview of Practical Algorithms. IEEE Signal Process. Mag. 2014, 31, 24–33. [Google Scholar] [CrossRef]
  93. DiPietro, R.S.; Manolakis, D.; Lockwood, R.B.; Cooley, T.; Jacobson, J. Hyperspectral matched filter with false-alarm mitigation. Opt. Eng. 2012, 51, 16202. [Google Scholar] [CrossRef]
  94. Pieper, M.L.; Manolakis, D.; Truslow, E.; Cooley, T.; Brueggeman, M. False alarm mitigation techniques for hyperspectral target detection. In Proceedings of the SPIE Defense, Security, and Sensing, Baltimore, MD, USA, 29 April–3 May 2013; p. 874304. [Google Scholar] [CrossRef]
  95. Burr, T.; Fry, H.; McVey, B.; Sander, E.; Cavanaugh, J.; Neath, A. Performance of Variable Selection Methods in Regression Using Variations of the Bayesian Information Criterion. Commun. Stat. Simul. Comput. 2008, 37, 507–520. [Google Scholar] [CrossRef]
  96. Keshava, N. Distance metrics and band selection in hyperspectral processing with applications to material identification and spectral libraries. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1552–1565. [Google Scholar] [CrossRef]
  97. Matteoli, S.; Diani, M.; Corsini, G. A tutorial overview of anomaly detection in hyperspectral images. IEEE Aerosp. Electron. Syst. Mag. 2010, 25, 5–28. [Google Scholar] [CrossRef]
  98. Kwon, H.; Nasrabadi, N.M. A Comparative Analysis of Kernel Subspace Target Detectors for Hyperspectral Imagery. EURASIP J. Adv. Signal Process. 2006, 2007, 29250. [Google Scholar] [CrossRef]
  99. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis: An Introduction; Springer: New York, NY, USA, 1990. [Google Scholar]
  100. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113 (Suppl. S1), S110–S122. [Google Scholar] [CrossRef]
  101. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A Training Algorithm for Optimal Margin Classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, New York, NY, USA, 27–29 July 1992; pp. 144–152. [Google Scholar]
  102. Mercier, G.; Lennon, M. Support vector machines for hyperspectral image classification with spectral-based kernels. In Proceedings of the 2003 IEEE International Conferences on Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003; pp. 288–290. [Google Scholar]
  103. Chi, M.; Bruzzone, L. Semisupervised Classification of Hyperspectral Images by SVMs Optimized in the Primal. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1870–1880. [Google Scholar] [CrossRef]
  104. Kasetkasem, T.; Arora, M.K.; Varshney, P.K. Super-resolution land cover mapping using a Markov random field based approach. Remote Sens. Environ. 2005, 96, 302–314. [Google Scholar] [CrossRef]
  105. Chen, Y.; Wang, G.; Dong, S. Learning with progressive transductive support vector machine. Pattern Recognit. Lett. 2003, 24, 1845–1855. [Google Scholar] [CrossRef]
  106. Tadjudin, S.; Landgrebe, D. Classification of High Dimensional Data with Limited Training Samples. Available online: http://docs.lib.purdue.edu/ecetr/56/ (accessed on 20 March 2017).
  107. Appice, A.; Guccione, P.; Malerba, D. A novel spectral-spatial co-training algorithm for the transductive classification of hyperspectral imagery data. Pattern Recognit. 2017, 63, 229–245. [Google Scholar] [CrossRef]
  108. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  109. Camps-Valls, G.; Marsheva, T.V.B.; Zhou, D. Semi-Supervised Graph-Based Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054. [Google Scholar] [CrossRef]
  110. Agapiou, A.; Hadjimitsis, D.G.; Alexakis, D.D. Evaluation of Broadband and Narrowband Vegetation Indices for the Identification of Archaeological Crop Marks. Remote Sens. 2012, 4, 3892–3919. [Google Scholar] [CrossRef]
  111. Stagakis, S.; Markos, N.; Sykioti, O.; Kyparissis, A. Monitoring canopy biophysical and biochemical parameters in ecosystem scale using satellite hyperspectral imagery: An application on a Phlomis fruticosa Mediterranean ecosystem using multiangular CHRIS/PROBA observations. Remote Sens. Environ. 2010, 114, 977–994. [Google Scholar] [CrossRef]
  112. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  113. Zarco-Tejada, P.; Berjon, A.; Lopezlozano, R.; Miller, J.; Martin, P.; Cachorro, V.; Gonzalez, M.; Defrutos, A. Assessing vineyard condition with hyperspectral indices: Leaf and canopy reflectance simulation in a row-structured discontinuous canopy. Remote Sens. Environ. 2005, 99, 271–287. [Google Scholar] [CrossRef]
  114. Lin, P.; Qin, Q.; Dong, H.; Meng, Q. Hyperspectral vegetation indices for crop chlorophyll estimation: Assessment, modeling and validation. In Proceedings of the 2012 IEEE International conferences on Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 4841–4844. [Google Scholar]
  115. Liang, L.; Qin, Z.; Zhao, S.; Di, L.; Zhang, C.; Deng, M.; Lin, H.; Zhang, L.; Wang, L.; Liu, Z. Estimating crop chlorophyll content with hyperspectral vegetation indices and the hybrid inversion method. Int. J. Remote Sens. 2016, 37, 2923–2949. [Google Scholar] [CrossRef]
  116. Thenkabail, P.S.; Lyon, J.G. Hyperspectral Remote Sensing of Vegetation; CRC Press: Boca Raton, FL, USA, 2016; ISBN 978-1-4398-4538-7. [Google Scholar]
  117. Din, M.; Zheng, W.; Rashid, M.; Wang, S.; Shi, Z. Evaluating Hyperspectral Vegetation Indices for Leaf Area Index Estimation of Oryza sativa L. at Diverse Phenological Stages. Front. Plant Sci. 2017, 8. [Google Scholar] [CrossRef] [PubMed]
  118. Zhao, W.; Du, S. Spectral-Spatial Feature Extraction for Hyperspectral Image Classification: A Dimension Reduction and Deep Learning Approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554. [Google Scholar] [CrossRef]
  119. Wang, Q.; Lin, J.; Yuan, Y. Salient Band Selection for Hyperspectral Image Classification via Manifold Ranking. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1279–1289. [Google Scholar] [CrossRef] [PubMed]
  120. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral-Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2017, PP, 1–12. [Google Scholar] [CrossRef]
  121. Aptoula, E.; Ozdemir, M.C.; Yanikoglu, B. Deep Learning With Attribute Profiles for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1970–1974. [Google Scholar] [CrossRef]
  122. Li, W.; Wu, G.; Du, Q. Transferred Deep Learning for Anomaly Detection in Hyperspectral Imagery. IEEE Geosci. Remote Sens. Lett. 2017, 14, 597–601. [Google Scholar] [CrossRef]
  123. Hexagon Geospatial Erdas Imagine® 2016 Product Features and Comparisons. Available online: http://www.hexagongeospatial.com/technical-documents/product-descriptions-2016/erdas-imagine-2016-product-description (accessed on 9 October 2017).
  124. Harris Geospatial ENVI Software Platform. Available online: http://www.harrisgeospatial.com/ (accessed on 29 March 2017).
  125. Image Lab Software Bio-Rad. Available online: http://www.bio-rad.com/en-us/product/image-lab-software (accessed on 29 March 2017).
  126. Brandywine Photonics Hyperspectral Imaging and CMOS Image Sensors. Available online: http://brandywinephotonics.com/ (accessed on 29 March 2017).
  127. Resonon Inc. SpectrononPro Manual (Release 5.0). Available online: http://docs.resonon.com/spectronon/pika_manual/SpectrononProManual.pdf (accessed on 29 March 2017).
  128. Welcome to Spectral Python (SPy)—Spectral Python 0.18 documentation. Available online: http://www.spectralpython.net/ (accessed on 29 March 2017).
  129. Jelmer Oosthoek Hyperspectral Python (HypPy). Available online: https://www.itc.nl/personal/bakker/hyppy.html (accessed on 29 March 2017).
  130. Rosario-Torres, S.; Arzuaga-Cruz, E.; Velez-Reyes, M.; Jimenez-Rodriguez, L.O. An update on the MATLAB hyperspectral image analysis toolbox. In Proceedings of the Defense and Security, Orlando, FL, USA, 1 June 2005; pp. 743–752. [Google Scholar]
  131. Isaac Gerg Matlab Hyperspectral Toolbox. Available online: https://github.com/isaacgerg/matlabHyperspectralToolbox (accessed on 29 March 2017).
  132. Landgrebe, D.; Biehl, L. An Introduction & Reference for MultiSpec. Available online: ftp://bsa.bf.lu.lv/pub/TIS/atteelu_analiize/MultiSpec/Intro9_11.pdf (accessed on 29 March 2017).
  133. TensorFlow. Available online: https://www.tensorflow.org/ (accessed on 16 August 2017).
  134. Welcome—Theano 0.9.0 Documentation. Available online: http://deeplearning.net/software/theano/ (accessed on 16 August 2017).
  135. Yamada, N.; Fujimura, S. Nondestructive measurement of chlorophyll pigment content in plant leaves from three-color reflectance and transmittance. Appl. Opt. 1991, 30, 3964–3973. [Google Scholar] [CrossRef] [PubMed]
  136. Honkavaara, E.; Saari, H.; Kaivosoja, J.; Pölönen, I.; Hakala, T.; Litkey, P.; Mäkynen, J.; Pesonen, L. Processing and Assessment of Spectrometric, Stereoscopic Imagery Collected Using a Lightweight UAV Spectral Camera for Precision Agriculture. Remote Sens. 2013, 5, 5006–5039. [Google Scholar] [CrossRef] [Green Version]
  137. Zarco-Tejada, P.J.; González-Dugo, V.; Berni, J.A.J. Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera. Remote Sens. Environ. 2012, 117, 322–337. [Google Scholar] [CrossRef]
  138. Calderón, R.; Navas-Cortés, J.A.; Lucena, C.; Zarco-Tejada, P.J. High-resolution airborne hyperspectral and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and narrow-band spectral indices. Remote Sens. Environ. 2013, 139, 231–245. [Google Scholar] [CrossRef]
  139. Burkart, A.; Aasen, H.; Alonso, L.; Menz, G.; Bareth, G.; Rascher, U. Angular Dependency of Hyperspectral Measurements over Wheat Characterized by a Novel UAV Based Goniometer. Remote Sens. 2015, 7, 725–746. [Google Scholar] [CrossRef] [Green Version]
  140. Mitchell, J.J.; Glenn, N.F.; Anderson, M.O.; Hruska, R.C.; Halford, A.; Baun, C.; Nydegger, N. Unmanned aerial vehicle (UAV) hyperspectral remote sensing for dryland vegetation monitoring. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–10. [Google Scholar]
  141. Kaivosoja, J.; Pesonen, L.; Kleemola, J.; Pölönen, I.; Salo, H.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Rajala, A. A case study of a precision fertilizer application task generation for wheat based on classified hyperspectral data from UAV combined with farm history data. In Proceedings of the SPIE Remote Sensing, Dresden, Germany, 15 October 2013. [Google Scholar]
  142. Aasen, H.; Burkart, A.; Bolten, A.; Bareth, G. Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance. ISPRS J. Photogramm. Remote Sens. 2015, 108, 245–259. [Google Scholar] [CrossRef]
  143. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-Based Photogrammetry and Hyperspectral Imaging for Mapping Bark Beetle Damage at Tree-Level. Remote Sens. 2015, 7, 15467–15493. [Google Scholar] [CrossRef]
  144. Kotsiantis, S.B. Supervised Machine Learning: A Review of Classification Techniques. Informatica 2007, 31. [Google Scholar]
  145. Berveglieri, A.; Tommaselli, A.M.G. Exterior Orientation of Hyperspectral Frame Images Collected with Uav for Forest Applications. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XL-3/W4, 45–50. [Google Scholar] [CrossRef]
  146. Saari, H.; Pellikka, I.; Pesonen, L.; Tuominen, S.; Heikkilä, J.; Holmlund, C.; Mäkynen, J.; Ojala, K.; Antila, T. Unmanned Aerial Vehicle (UAV) operated spectral camera system for forest and agriculture applications. In Proceedings of the SPIE Remote Sensing for Agriculture, Ecosystems, and Hydrology XIII, Prague, Czech Republic, 15 October 2011; Volume 8174. [Google Scholar]
  147. Honkavaara, E.; Kaivosoja, J.; Mäkynen, J.; Pellikka, I.; Pesonen, L.; Saari, H.; Salo, H.; Hakala, T.; Marklelin, L.; Rosnell, T. Hyperspectral Reflectance Signatures and Point Clouds for Precision Agriculture by Light Weight Uav Imaging System. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 7, 353–358. [Google Scholar] [CrossRef]
  148. Salamí, E.; Barrado, C.; Pastor, E. UAV Flight Experiments Applied to the Remote Sensing of Vegetated Areas. Remote Sens. 2014, 6, 11051–11081. [Google Scholar] [CrossRef] [Green Version]
  149. Adão, T.; Peres, E.; Pádua, L.; Hruška, J.; Sousa, J.J.; Morais, R. UAS-based hyperspectral sensing methodology for continuous monitoring and early detection of vineyard anomalies. In Proceedings of the Small Unmanned Aerial Systems for Environmental Research, Vila Real, Portugal, 28–30 June 2017. [Google Scholar]
Figure 1. Spectrum representation including: (A) a multispectral example, with 5 wide bands; and (B) a hyperspectral example consisting of several narrow bands that usually extend to hundreds or even thousands (image not drawn to scale; based on [14]).
Figure 2. Broadband sensors' inability to capture the spectral shift of the red edge (RE, 670–780 nm) slope, which is associated with leaf chlorophyll content, phenological state and vegetation stress, in contrast with wide-range narrowband sensors (re-edited from [18]).
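To make the red-edge argument concrete, the following minimal Python sketch estimates the red-edge position (REP) with the classical four-point linear-interpolation method, which requires narrowband reflectance at approximately 670, 700, 740 and 780 nm, wavelengths that a broadband RGB/NIR sensor cannot sample individually. The reflectance values used are purely illustrative, not measured data.

```python
# A minimal sketch (not from the paper): estimating the red-edge position
# (REP) with the four-point linear-interpolation method, which needs
# narrowband reflectance at ~670, 700, 740 and 780 nm. All values below
# are illustrative.

def red_edge_position(r670, r700, r740, r780):
    """Return the REP wavelength (nm) via linear interpolation."""
    r_inflection = (r670 + r780) / 2.0  # reflectance at the inflection point
    return 700.0 + 40.0 * (r_inflection - r700) / (r740 - r700)

# Hypothetical reflectances for a healthy canopy:
print(red_edge_position(0.05, 0.12, 0.45, 0.52))  # -> 720.0 nm
```

A stress-induced shift of the slope moves this estimate toward shorter wavelengths, a change that narrowband sensors can track but broadband sensors cannot.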
Figure 3. Materials used in the fabrication of hyperspectral sensors (inspired by [43]): silicon (Si) is used to acquire the ultraviolet, visible and shortwave NIR regions; indium arsenide (InAs) and gallium arsenide (GaAs) have a spectral response between 900 and 1700 nm; indium gallium arsenide (InGaAs) extends the previous range to 2600 nm; and mercury cadmium telluride (MCT or HgCdTe) is characterized by a wide spectral range and a high quantum efficiency, enabling it to reach both the NIR region (about 800–2500 nm) and the mid-infrared region (about 2500–25,000 nm).
Figure 4. Hyperspectral data acquisition modes: (A) represents point scanning or whiskbroom mode; (B) represents line scanning or pushbroom mode; (C,D) correspond to plane (or area) scanning and single shot modes, respectively (adapted from [43]).
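As an illustration of the pushbroom mode in Figure 4B, the sketch below assembles a hypercube line by line, with the platform's forward motion supplying the along-track dimension. The geometry (1024 pixels per line, 270 bands, 500 lines) and the read_line stand-in are assumptions made for illustration only.

```python
import numpy as np

# Sketch of pushbroom (line-scanning) acquisition: each exposure yields one
# spatial line with full spectral depth; the UAV's forward motion supplies
# the along-track dimension. Sizes and read_line() are assumed.

LINE_PIXELS, N_BANDS, N_LINES = 1024, 270, 500

def read_line():
    # Stand-in for a real frame grab from the sensor.
    return np.random.rand(LINE_PIXELS, N_BANDS).astype(np.float32)

cube = np.empty((N_LINES, LINE_PIXELS, N_BANDS), dtype=np.float32)
for i in range(N_LINES):  # one line per platform step
    cube[i] = read_line()

print(cube.shape)  # (500, 1024, 270): along-track x across-track x bands
```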
Figure 5. LR detector approach, according to [16]. A statistical test classifies a given spectrum as target or background, depending on a threshold.
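The LR test of Figure 5 can be sketched as follows, with both class-conditional densities modeled as Gaussians; the means, covariance and threshold are assumed values that would, in practice, be estimated from labeled spectra.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Sketch of the likelihood-ratio test: a pixel spectrum x is labelled
# target when p(x|target)/p(x|background) exceeds a threshold eta.
# All parameters here are assumed values for illustration.

n_bands = 5
mu_target = np.full(n_bands, 0.6)       # assumed mean target spectrum
mu_background = np.full(n_bands, 0.3)   # assumed mean background spectrum
cov = 0.01 * np.eye(n_bands)            # shared covariance (assumed)

def lr_detect(x, eta=1.0):
    ratio = (multivariate_normal.pdf(x, mu_target, cov) /
             multivariate_normal.pdf(x, mu_background, cov))
    return ratio > eta  # True -> target, False -> background

print(lr_detect(np.full(n_bands, 0.55)))  # close to target mean -> True
```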
Figure 6. Target detection overall summary, organized by goal, approach and testing hypothesis. Terms are associated through a color code; grey and white labels relate to more than one goal and constitute the taxonomy of supporting statistical tools. For full-pixel detection, adaptive matched filter (AMF), continuum fusion and kernel-based algorithms can be applied. Linear unmixing, sparse regression and non-/pure pixel-based approaches address sub-pixel detection (cases of mixed material spectra). Anomaly detection, which is used to recognize patterns standing out from cluttered backgrounds, can be addressed with approaches based on Reed-Xiaoli (RX), support vector data description (SVDD) and anomalous change detection (ACD). Spectral angle mapper (SAM), Mahalanobis distance and the likelihood ratio (LR) test whether a pixel is a target, using a threshold to discriminate spectra of interest from the background. Such a threshold should balance false alarm rate against detection probability (towards satisfying the Neyman-Pearson criterion), a trade-off that can be analyzed through receiver operating characteristic (ROC) curves. Since probability densities commonly depend on unknown target and background parameters, a generalized likelihood ratio (GLR) workaround, the basis of adaptive detectors, replaces the unknown parameters with maximum likelihood estimates. Constant false alarm rate (CFAR) design yields detectors immune to background variations and noise. False alarm mitigation (FAM) and target identification (TID) are self-explanatory concepts related to recent algorithmic developments.
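Among the statistical tools named in the caption, the spectral angle mapper (SAM) is the simplest to illustrate: the angle between a pixel spectrum and a reference target spectrum is compared against a threshold, and that threshold embodies the false-alarm/detection trade-off mentioned above. The spectra and threshold in the sketch below are hypothetical.

```python
import numpy as np

# Sketch of the spectral angle mapper (SAM) test: the angle between a
# pixel spectrum and a reference target spectrum is compared against a
# threshold (in radians). Spectra and threshold are hypothetical.

def spectral_angle(pixel, reference):
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) *
                                      np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

target = np.array([0.10, 0.12, 0.45, 0.50, 0.48])  # e.g., healthy vegetation
pixel = np.array([0.11, 0.13, 0.42, 0.49, 0.47])

THRESHOLD = 0.10  # radians; sets the false-alarm/detection trade-off
print(spectral_angle(pixel, target) < THRESHOLD)   # -> True (target)
```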
Table 1. Main differences between hyperspectral and multispectral imaging, spectroscopy and RGB imagery (merging perspectives of [43,44]). A classification based on a bullet rate (1–3) was used to quantify, in relative terms, both the spectral and the spatial information associated with each acquisition technique.
Technique | Spectral Information | Spatial Information
Hyperspectral Imaging | ••• | •••
Multispectral Imaging | •• | •••
Spectroscopy | ••• | —
RGB Imagery | — | •••
Table 2. List of hyperspectral sensors (and their respective characteristics) that can be coupled with UAVs.
Manuf. | Sensor | Spectral Range (nm) | No. Bands | Spectral Resol. (nm) | Spatial Resol. (px) | Acquis. Mode | Weight (g)
BaySpec | OCI-UAV-1000 | 600–1000 | 100 | <5 b | 2048 d | P | 272
Brandywine Photonics | CHAI S-640 | 825–2125 | 260 | 5 c | 640 × 512 | P | 5000
Brandywine Photonics | CHAI V-640 | 350–1080 | 256 | 2.5/5/10 c | 640 × 512 | P | 480
Cubert GmbH | S 185—FIREFLEYE SE | 450–950 | 125 | 4 c | 50 × 50 | S | 490
Cubert GmbH | S 485—FIREFLEYE XL | 355–750/450–950/550–1000 | 125 | 4.5 c | 70 × 70 | S | 1200
Cubert GmbH | Q 285—FIREFLEYE QE | 450–950 | 125 | 4 c | 50 × 50 | S | 3000
Headwall Photonics Inc., Fitchburg, MA, USA | Nano HyperSpec | 400–1000 | 270 | 6 b | 640 d | P | 1200 e
Headwall Photonics Inc., Fitchburg, MA, USA | Micro Hyperspec VNIR | 380–1000 | 775/837/923 | 2.5 b | 1004 d/1600 d | P | ≤3900
HySpex | VNIR-1024 | 400–1000 | 108 | 5.4 c | 1024 d | P | 4000
HySpex | Mjolnir V-1240 | 400–1000 | 200 | 3 c | 1240 d | P | 4200
HySpex | HySpex SWIR-384 | 1000–2500 | 288 | 5.45 c | 384 d | P | 5700
MosaicMill | Rikola | 500–900 | 50 a | 10 b | 1010 × 1010 | S | 720
NovaSol | vis-NIR microHSI | 400–800/400–1000/380–880 | 120/180/150 | 3.3 c | 680 d | P | <450
NovaSol | Alpha-vis micro HSI | 400–800/350–1000 | 40/60 | 10 c | 1280 d | P | <2100
NovaSol | SWIR 640 microHSI | 850–1700/600–1700 | 170/200 | 5 c | 640 d | P | 3500
NovaSol | Alpha-SWIR microHSI | 900–1700 | 160 | 5 c | 640 d | P | 1200
NovaSol | Extra-SWIR microHSI | 964–2500 | 256 | 6 c | 320 d | P | 2600
PhotonFocus | MV1-D2048x1088-HS05-96-G2 | 470–900 | 150 | 10–12 b | 2048 × 1088 | P | 265
Quest Innovations | Hyperea 660 C1 | 400–1000 | 660 | – | 1024 d | P | 1440
Resonon | Pika L | 400–1000 | 281 | 2.1 c | 900 d | P | 600
Resonon | Pika XC2 | 400–1000 | 447 | 1.3 c | 1600 d | P | 2200
Resonon | Pika NIR | 900–1700 | 164 | 4.9 c | 320 d | P | 2700
Resonon | Pika NUV | 350–800 | 196 | 2.3 c | 1600 d | P | 2100
SENOP | VIS-VNIR Snapshot | 400–900 | 380 | 10 b | 1010 × 1010 | S | 720
SPECIM | SPECIM FX10 | 400–1000 | 224 | 5.5 b | 1024 d | P | 1260
SPECIM | SPECIM FX17 | 900–1700 | 224 | 8 b | 640 d | P | 1700
Surface Optics Corp., San Diego, CA, USA | SOC710-GX | 400–1000 | 120 | 4.2 c | 640 d | P | 1250
XIMEA | MQ022HG-IM-LS100-NIR | 600–975 | 100+ | 4 c | 2048 × 8 | P | 32
XIMEA | MQ022HG-IM-LS150-VISNIR | 470–900 | 150+ | 3 c | 2048 × 5 | P | 300
Note: a 380 in laboratory; b at FWHM; c by sampling; d pushbroom line length (the second spatial dimension depends on the sensor's sweep distance); e without lens and global positioning system (GPS); P—Pushbroom; S—Snapshot.
