Article

Multisensor UAS Mapping of Plant Species and Plant Functional Types in Midwestern Grasslands

1 Department of Geography & GIS, University of Illinois, Urbana, IL 61801, USA
2 Department of Plant Biology, University of Illinois, Urbana, IL 61801, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(14), 3453; https://doi.org/10.3390/rs14143453
Submission received: 30 May 2022 / Revised: 13 July 2022 / Accepted: 16 July 2022 / Published: 18 July 2022
(This article belongs to the Special Issue Hyperspectral Remote Sensing: Current Situation and New Challenges)

Abstract

Uncrewed aerial systems (UASs) have emerged as powerful ecological observation platforms capable of filling critical spatial and spectral observation gaps in plant physiological and phenological traits that have been difficult to measure from space-borne sensors. Despite recent technological advances, the high cost of drone-borne sensors limits the widespread application of UAS technology across scientific disciplines. Here, we evaluate the tradeoffs between off-the-shelf and sophisticated drone-borne sensors for mapping plant species and plant functional types (PFTs) within a diverse grassland. Specifically, we compared species and PFT mapping accuracies derived from hyperspectral, multispectral, and RGB imagery fused with light detection and ranging (LiDAR)- or structure-from-motion (SfM)-derived canopy height models (CHMs). Sensor–data fusions considered either a single observation period or near-monthly observation frequencies for the integration of phenological information (i.e., phenometrics). Results indicate that overall classification accuracies for plant species and PFTs were highest for hyperspectral and LiDAR-CHM fusions (78 and 89%, respectively), followed by multispectral and phenometric–SfM-CHM fusions (52 and 60%, respectively) and RGB and SfM-CHM fusions (45 and 47%, respectively). Our findings demonstrate clear tradeoffs in mapping accuracies between economical and expensive sensor networks but highlight that off-the-shelf multispectral sensors may achieve accuracies comparable to those of sophisticated UAS sensors by integrating phenometrics into machine learning image classifiers.

1. Introduction

Recent technological advances in spatial and spectral drone-borne remote sensing systems provide emerging opportunities to improve knowledge of ecological system dynamics (e.g., aboveground vegetation structure) that influence ecosystem function (e.g., carbon and nitrogen cycling). Uncrewed aerial systems (UASs) are capable of retrieving the very high-resolution spatial information (i.e., plant communities, functional types, and species) and spectral information necessary to detect seasonal to interannual patterns of plant community change, plant physiology, and plant phenology. As such, UAS platforms can not only characterize the spatial distribution of vegetation but also detect various patterns of change across natural and human-modified environments. Because UASs can match the spatial and temporal resolutions of local-scale investigations, they may provide new insights into patterns of plant community change in response to herbivore activity in tundra [1], habitat destruction in coastal meadows [2], and grazing management in grasslands [3]. In human-modified environments, UASs can detect plant physiological characteristics to monitor disease severity [4,5,6,7] and crop productivity [8,9,10]. UASs are also particularly well suited to observing plant phenological patterns [11,12,13,14,15,16], which have been challenging to measure from air- and space-borne platforms, with applications ranging from the timing of flowering in deciduous woodlands [17] to the identification of individual tree species in forest communities [18].
Remote sensing of plant phenology (i.e., recurring life stages driven by internal molecular processes that yield externally observable responses) has been widely applied with space-borne sensors such as the Landsat series, the Moderate Resolution Imaging Spectroradiometer (MODIS), and the Advanced Very High Resolution Radiometer (AVHRR); however, their relatively coarse spatial resolution has limited the use of such data-rich biological metrics in retrieving species- and community-specific information. Recent evidence suggests that the integration of plant phenological metrics (phenometrics; i.e., specific stages of the seasonal plant growth curve) in very high-resolution drone-borne analyses may uncover new insights into local-scale controls on the regional-scale heterogeneity of plant species distribution and productivity. For example, Wood et al. [19] found that time-series phenological data increased plant functional-type mapping accuracy by 5 to 10% relative to a single peak-summer observation in Montana grasslands. In addition, Berra et al. [17] used UAS normalized difference vegetation index (NDVI) phenometrics to estimate the timing of flowering in an array of temperate woodland species, which revealed substantial local-scale phenological heterogeneity that could not be detected within a single Landsat pixel. Similarly, the disparity in pixel resolution between drone- and space-borne sensors resulted in higher nonparametric correlations between ground and UAS observations than between ground and Sentinel-2 data (ρ = 0.70 and ρ = 0.58, respectively) [20]. UAS observations provide a wealth of information needed to begin closing observation gaps between individual plants and landscapes.
However, not all UAS sensor networks are created equal: some hyperspectral sensors are capable of identifying individual species and plant functional types (PFTs) [4,21,22,23,24], whereas lower spectral resolution optical sensors (e.g., RGB) may be unreliable or have difficulty discriminating between plant communities [25,26]. For example, UAS RGB imagery has been shown to produce lower vegetation classification accuracies than UAS multispectral imagery (64% and 96% accuracy, respectively) [27]. Although Tait et al. [28] also found UAS multispectral imagery to produce higher accuracies than UAS RGB, overall vegetation classification accuracy improved with the combination of RGB and multispectral bands (RGB: 79% accuracy; multispectral: 81% accuracy; combined: 90% accuracy). The benefit of sensor–data fusion was also observed by Näsi et al. [29], who fused UAS hyperspectral data with a structure-from-motion (SfM) digital surface model (DSM) to identify five unique tree species. Similarly, Melville et al. [24] applied an object-based random forest (RF) classifier to hyperspectral and SfM data to identify four grassland communities with an average of 93% classification accuracy. The combination of imaging spectroscopy with light detection and ranging (LiDAR) has also been shown to improve vegetation discrimination and mapping [22,23,30,31]. However, the method of DSM creation may impact classification accuracy; Sankey et al. [23] compared UAS-hyperspectral RF classification fused with SfM- and LiDAR-derived DSMs for forest monitoring and found that LiDAR (R2 = 0.92, p < 0.001) produced a higher correlation to ground-sampled data than SfM (R2 = 0.71, p < 0.001). Despite the potential for UAS sensor networks to improve the mapping of vegetation (and its properties) using spectral, LiDAR, and phenometric information, the tradeoffs between UAS cost and the ability to map vegetation remain uncertain.
Here, we used a suite of passive and active sensors onboard three UASs to evaluate a sensor–data fusion approach for mapping plant species and PFTs in a diverse tallgrass prairie. We used a combination of off-the-shelf UAS RGB and multispectral sensors, as well as a state-of-the-art hyperspectral sensor network, fused with three-dimensional canopy height models (CHMs) computed from SfM and LiDAR datasets. We used combinations of UAS datasets acquired from a single time period (i.e., peak growing season) and over the entire growing season (i.e., near-monthly observation frequencies) to evaluate random forest map accuracies and overall performance. Results demonstrate the tradeoffs in vegetation mapping accuracies between economical off-the-shelf UAS sensors and expensive state-of-the-art UAS sensor networks, providing recommendations for minimizing costs and maximizing mapping accuracies in diverse temperate grasslands.

2. Materials and Methods

2.1. Study Site

Our study site was located within a 35-acre restored native tallgrass prairie in Urbana, Illinois (latitude: 40.11°, longitude: −88.18°). The prairie was restored in 2003 and has been burned regularly in the spring (e.g., 2011, 2015, and 2017) by the Urbana Park District. The mean annual temperature, rainfall, and snowfall were 11.2 °C, 103.9 cm, and 52.8 cm, respectively (NOAA National Centers for Environmental Information U.S. Climate Normals, 1991–2020). Elevation across the prairie varied slightly, from 219 to 224 m a.s.l. (USGS Map Quadrant, accessed November 2021). Similar to other grasslands, plant community composition varies along elevation gradients due to differential patterns in soil drainage [32].
Prairies in central Illinois are typically classified as tallgrass black soil prairies, where the dominant plant species include Andropogon gerardii (big bluestem) and Sorghastrum nutans (yellow Indiangrass) [33]. In addition to these dominant plant species, Symphyotrichum pilosum (frost aster), Eupatorium serotinum (late boneset), Eryngium yuccifolium (rattlesnake master), Juniperus virginiana (Eastern red cedar), and Cornus drummondii (roughleaf dogwood) were observed and included in this assessment. Although we also considered Solidago (goldenrods), sedges, and deciduous saplings within our analysis, they were not identified to species. These dominant tallgrass prairie species represent five distinct plant functional types (PFTs), as described by Piper et al. [34] (Table 1).
The following sections describe field surveys (Section 2.2) used for training and testing UAS data-fusion products (Section 2.3) with random forest models (Section 2.4). A subset of field survey data was used to independently validate and evaluate the accuracy of resulting plant species and PFT maps (Section 2.5 and Section 2.6).

2.2. Field Surveys

Two methods were used to collect ground-truth data for UAS image classification and to assess map accuracy: (1) species cover and abundance surveys and (2) randomized cluster sampling. Species cover and abundance data were collected during peak growing season (i.e., 3 August 2020) within fourteen 5 × 5 m plots set up every 10 m along two 170 m parallel belt transects. Transects were oriented from north to south to capture patterns of vegetation compositional change along a natural soil moisture gradient (Figure 1, Table 2). All species cover and abundance datasets were measured following protocols described in Daubenmire [35]. We used nine of the fourteen field-surveyed plots that overlapped all UAS image products to evaluate map accuracy. A stratified random cluster sampling approach was used to collect spectral samples for random forest classifications and for an independent assessment of map accuracy using a confusion matrix. We measured 77 ground-truth data points (outside of the 5 × 5 m plots) corresponding to the dominant plant species, plus an additional ~15 points within the canopy of each individual to capture the species-specific spectral variability associated with leaf geometry [36], resulting in a dataset with a total of 612 points. All ground-truth data points were measured using a differential global positioning system (dGPS; Emlid ReachView 3 RTK) and post-processed to 2 cm accuracy using the National Continuously Operating Reference Station (CORS) system provided by the National Oceanic and Atmospheric Administration’s National Geodetic Survey.

2.3. Uncrewed Aerial Systems (UASs)

Hyperspectral and LiDAR data were acquired concurrently with a Headwall Nano-Hyperspec visible near-infrared (VNIR) sensor (Headwall Photonics, Bolton, MA, USA) and a Velodyne Puck LITE LiDAR sensor onboard a DJI Matrice 600 Pro hexacopter (DJI, Shenzhen, China) (Figure 2A). The Nano-Hyperspec is a push-broom sensor that collects 270 spectral bands across 640 spatial pixels in the 400–1000 nm wavelength range. The Velodyne Puck LITE is a 16-channel, dual-return LiDAR capable of 300,000 points per second, paired with a high-performance GPS/IMU. Universal Ground Control (UgCS) mission-planning software (SPH Engineering, Riga, Latvia) was used to control the Matrice 600 Pro with the following flight parameters: altitude of 60 m, speed of 2 m/s, and side overlap of 40%. These flight parameters enabled the acquisition of hyperspectral imagery at a 3 cm ground-sample distance (GSD) and high LiDAR point densities. The altitude of the drone was adjusted with surface topography to minimize variation in GSD throughout the mission.
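For context, the relationship between flight altitude and GSD follows from basic sensor geometry. The sketch below uses hypothetical pixel pitch and focal length values (not the manufacturer’s published specifications) purely to illustrate the calculation in R:

```r
# Approximate ground-sample distance (GSD) from flight altitude and sensor geometry.
# Pixel pitch and focal length are hypothetical illustration values only.
gsd_m <- function(altitude_m, pixel_pitch_um, focal_length_mm) {
  (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3)
}

gsd_m(altitude_m = 60, pixel_pitch_um = 7.4, focal_length_mm = 12)  # ~0.037 m per pixel
```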
Multispectral data were acquired with a MicaSense Rededge-M sensor (MicaSense Inc., Seattle, WA, USA) onboard a DJI Inspire II quadcopter (DJI, Shenzhen, China) (Figure 2B). The MicaSense Rededge sensor measures the following five narrow spectral bands: blue (475 nm), green (560 nm), red (668 nm), red edge (717 nm), and NIR (842 nm). UgCS mission-planning software was also used to control the Inspire II drone with the following flight parameters: altitude of 20 m, speed of 4 m/s, and front and side overlap of 65%. Each mission covered ~1.5 ha with a GSD of 1.5 cm.
Three-band RGB data were acquired with a Sony 1” CMOS 20 megapixel sensor onboard an Autel Evo II quadcopter (Autel Robotics, Bothell, WA, USA) (Figure 2C). Similar to multispectral data collection, each mission covered ~1.5 ha of prairie with a GSD of 0.26 cm and the following flight parameters: altitude of 20 m, speed of 2 m/s, and front and side overlap of 80% and 75%, respectively.

2.4. UAS Data Collection and Processing

Drone-borne hyperspectral, multispectral, and RGB data were acquired during peak growing season (i.e., late July to early August 2020; Figure 3). However, only multispectral data were collected monthly between May and October to estimate seasonal plant phenological metrics (adverse weather conditions shifted the exact timing of each monthly acquisition by up to approximately one week).

2.4.1. Hyperspectral and LiDAR Data Processing

Raw LiDAR point clouds were imported into LiDAR Tools (Headwall Photonics, Bolton, MA, USA), where rotational offset values were adjusted, point clouds were inspected, and outliers were removed (Figure 4). A digital surface model (DSM) and a digital terrain model (DTM) were created with 10 cm spacing using only the maximum and minimum elevation values, respectively. A LiDAR-CHM was derived by differencing the DSM and DTM [37].
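As a minimal sketch of this differencing step, the LiDAR-CHM can be reproduced with the terra package in R, assuming the DSM and DTM have already been exported as co-registered 10 cm rasters (file names are hypothetical):

```r
library(terra)

# Hypothetical file names for the 10 cm LiDAR-derived surface and terrain models
dsm <- rast("lidar_dsm_10cm.tif")
dtm <- rast("lidar_dtm_10cm.tif")

# Canopy height model: surface elevation minus terrain elevation
chm <- dsm - dtm
chm[chm < 0] <- 0  # clamp small negative differences caused by interpolation noise

writeRaster(chm, "lidar_chm_10cm.tif", overwrite = TRUE)
```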
We processed hyperspectral data cubes with SpectralView software (Headwall Photonics, Inc., Bolton, MA, USA); each image cube contained 270 bands with 3 cm pixels and dimensions of 639 × 1999 pixels. Hyperspectral data cubes were corrected for “dark current” noise (i.e., the electric current flowing in the photoelectric sensor when not exposed to incident illumination) and calibrated to surface reflectance using a white reference panel measured prior to data collection [38]. Although fifty-nine hyperspectral data cubes were acquired, only twelve were used in the analysis due to non-overlapping field surveys (i.e., reference data). The twelve hyperspectral data cubes were orthorectified using the LiDAR-DEM and corrected for rotational offsets (i.e., −1° pitch). Both dGPS ground control and check points were used for a block adjustment of the orthomosaic and to quantify the geopositioning accuracy within ArcMap (Esri, Redlands, CA, USA).
Orthomosaics were imported into ENVI (L3Harris Technologies, Melbourne, FL, USA) to perform the minimum noise fraction (MNF) dimensionality reduction algorithm. Such dimensionality reduction techniques reduce classification computation time while eliminating redundant spectral information [39]. For the hyperspectral–LiDAR fusion product, the 270 spectral bands were reduced to 9 MNF bands using a forward MNF rotation, and the LiDAR-CHM was appended as a tenth band. The selected bands had the highest eigenvalues produced by the noise reduction analysis; bands with eigenvalues less than 1.0 were rejected due to low spectral variance.
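The MNF rotation itself was performed in ENVI; as a rough stand-in, a principal component rotation can be sketched in R to show how a 270-band cube is compressed to a handful of informative bands. Note that MNF additionally whitens the noise before rotation, which the simplified sketch below omits, and the file name is hypothetical:

```r
library(terra)

hcube <- rast("hyperspectral_orthomosaic_270band.tif")  # hypothetical 270-band mosaic

# Sample pixels to fit the rotation, then apply it to the full cube
smp <- spatSample(hcube, size = 10000, method = "random", na.rm = TRUE, as.df = TRUE)
pca <- prcomp(smp, center = TRUE, scale. = TRUE)

components <- predict(hcube, pca)  # rotate every pixel
reduced    <- components[[1:9]]    # keep the 9 highest-variance components
```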

2.4.2. Multispectral and RGB Data Collection and Processing

Both multispectral and RGB drone-borne imagery acquired during peak growing season (i.e., 30 July to 2 August; Figure 3) were processed following the same workflow in Agisoft Metashape Professional (Agisoft LLC, St. Petersburg, Russia). Aerial images were aligned using an alignment optimization tool to generate a georeferenced dense point cloud. Using the set of overlapping images, an SfM technique [40] was used to create a DEM. This photogrammetrically derived product was used to orthorectify the multispectral and RGB photomosaics. For the multispectral imagery only, radiometric corrections and reflectance calibrations were derived from pre- and post-flight white reference panel images. Both dGPS ground control and check points were used for block adjustment of the orthomosaics and to quantify the geopositioning accuracies.
Similarly, multispectral and RGB CHMs were computed in Agisoft Metashape Professional by differencing the DSM and DTM. The DSM was generated from the dense point cloud, whereas the DTM required ground-point classification, tuned by modifying the maximum angle from the ground, the maximum distance between cells, and the ground-point cell size parameters; the DTM was then constructed using only ground-classified points. The photogrammetry-derived CHMs were combined with the five-band multispectral and three-band RGB orthomosaics, respectively (Figure 4).
In addition, we used all six multispectral image acquisitions collected throughout the growing season (Figure 3) to compute fifteen plant phenology metrics (Table 3) using the CropPhenology R package [41]. These metrics used repeated observations of the NDVI to derive the following phenometrics: (1) onset of NDVI value, (2) onset time, (3) maximum NDVI value, (4) time of maximum NDVI, (5) offset NDVI Value, (6) offset time, (7) length of growing season, (8) length of growing season before MaxT, (9) length of growing season after MaxT, (10) growth rate between onset and MaxT, (11) growth rate between MaxT and offset, (12) area under the NDVI curve, (13) area under the NDVI curve between onset and MaxT, (14) area under the NDVI curve between MaxT and offset, and (15) measure of asymmetry between NDVIBeforeMax and NDVIAfterMax. Refer to Table 3 for a complete definition and description of each phenometric.
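To illustrate the idea, a few of these phenometrics (maximum NDVI, time of maximum NDVI, and a simple time-integrated NDVI) can be approximated per pixel from a stack of per-date NDVI rasters. The CropPhenology package computes the full set listed in Table 3, so the sketch below, with hypothetical file names and a crude sum in place of a fitted growth curve, is only a simplified illustration:

```r
library(terra)

# Hypothetical per-acquisition NDVI rasters, May through October
ndvi <- rast(c("ndvi_may.tif", "ndvi_jun.tif", "ndvi_jul.tif",
               "ndvi_aug.tif", "ndvi_sep.tif", "ndvi_oct.tif"))

max_ndvi    <- app(ndvi, fun = max, na.rm = TRUE)  # maximum NDVI value (MaxV)
time_of_max <- app(ndvi, fun = which.max)          # acquisition index of the maximum (MaxT)
ti_ndvi     <- app(ndvi, fun = sum, na.rm = TRUE)  # crude time-integrated NDVI (area under curve)

phenometrics <- c(max_ndvi, time_of_max, ti_ndvi)
names(phenometrics) <- c("MaxV", "MaxT", "TINDVI")
```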

2.4.3. UAS Data Fusion

To evaluate the influence of various spectral and canopy structural (via SfM and LiDAR) datasets on plant species and PFT classifications, we generated a series of sensor-data-fusion products: (1) “Hyperspectral + LiDARCHM” (10 bands), (2) “Hyperspectral” (9 bands), (3) “Multispectral + SfMCHM + Phenology” (21 bands), (4) “Multispectral + SfMCHM” (6 bands), (5) “Multispectral” (5 bands), (6) “RGB + SfMCHM” (4 bands), and (7) “RGB” (3 bands). Prior to fusing data, we georeferenced all products to the same ground control points (centimeter precision) and resampled all products to 3 cm spatial resolution. These sensor data-fusion products were used in random forest classifications to evaluate UAS sensor network vegetation mapping accuracies.
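A minimal sketch of one such fusion (assuming the multispectral orthomosaic and SfM-CHM have already been georeferenced to the same control points; file and band names are hypothetical) simply resamples the layers onto a common 3 cm grid and stacks them:

```r
library(terra)

ms  <- rast("multispectral_orthomosaic.tif")  # 5-band MicaSense mosaic (hypothetical name)
chm <- rast("sfm_chm.tif")                    # SfM-derived canopy height model

# Define a common 3 cm grid from the multispectral extent, then resample both inputs onto it
grid_3cm <- rast(ext(ms), resolution = 0.03, crs = crs(ms))
ms_3cm   <- resample(ms, grid_3cm, method = "bilinear")
chm_3cm  <- resample(chm, grid_3cm, method = "bilinear")

fusion <- c(ms_3cm, chm_3cm)
names(fusion) <- c("blue", "green", "red", "rededge", "nir", "chm")
writeRaster(fusion, "multispectral_sfmchm_fusion.tif", overwrite = TRUE)
```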

2.5. Random Forest Classification

Random forest (RF) classification was selected due to its ability to adjust for correlation or interaction among predictor variables [42] and its ability to handle many features (i.e., bands) while requiring only two user-defined parameters: (1) the number of decision trees and (2) the number of features per subset. This machine learning algorithm produces each decision tree independently, splitting each node of the tree using a random subset of features [43,44]. Features are randomly selected and evaluated in each tree during classification, and the optimal value for the number of features per subset is determined. As random subsets of the training data are sampled, those subsets not selected are considered out-of-bag (OOB) data, which are used to measure variable importance (i.e., important features or bands used in classification) and to calculate OOB error rates [45].
Spectral information was extracted from the ground-truth data points (Figure 1) and split (50:50) into model training and testing subsets. We used the same reference datasets to train RF algorithms for the classification of the seven sensor–data fusion orthomosaics. For RGB and multispectral products, a “shadow” class was added to the training data to mask canopy shadows from plant species. Shadows were removed from hyperspectral orthomosaics during the MNF rotation. Using the RF package in R (R Core Team), we trained each classifier with 500 decision trees.
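A minimal sketch of this training step, assuming the extracted band values and class labels have already been assembled into a data frame (object and column names are hypothetical), uses the same parameterization:

```r
library(randomForest)

# 'samples' is a hypothetical data frame of extracted band values plus a 'class' column
set.seed(1)
train_idx <- sample(nrow(samples), size = 0.5 * nrow(samples))  # 50:50 split
train <- samples[train_idx, ]
test  <- samples[-train_idx, ]

rf_model <- randomForest(as.factor(class) ~ ., data = train,
                         ntree = 500, importance = TRUE)

print(rf_model)       # reports the OOB error rate
varImpPlot(rf_model)  # variable importance (cf. Figure 5)
pred <- predict(rf_model, newdata = test)
```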

2.6. Random Forest Model Evaluation

Models were validated using 10-fold cross-validation, which is commonly used to test algorithm accuracy and reduce variability [46]. Model accuracy was assessed according to OOB errors and Kappa coefficients; low OOB percentages and Kappa values close to 1 indicate minimal model error [47]. Classification predictions were validated with confusion matrices and RF model Kappa coefficients. Confusion matrices computed the producer, user, and overall map accuracies, which describe the correspondence between the independent reference (i.e., testing dataset) and modeled data [48] and were used to evaluate the map accuracy of plant species and PFT classifications (Table 1) [34]. Overall classification accuracies include only vegetation classes, excluding the “shadow” and “bare-ground” classes. Independent 5 × 5 m plot species and PFT percent cover datasets (Figure 1) were used to validate predictions made by each sensor and sensor-fusion product. PFT classification maps were synthesized by merging the corresponding plant species classes.
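As a sketch of the accuracy assessment (continuing the hypothetical objects from the previous snippet), producer, user, and overall accuracies and the Kappa coefficient can be computed directly from the confusion matrix of predicted versus reference labels:

```r
# Confusion matrix from the held-out testing subset
cm <- table(reference = test$class, predicted = pred)

overall_acc  <- sum(diag(cm)) / sum(cm)  # overall accuracy
producer_acc <- diag(cm) / rowSums(cm)   # per-class producer's accuracy
user_acc     <- diag(cm) / colSums(cm)   # per-class user's accuracy

# Kappa coefficient from observed vs. chance agreement
p_o   <- overall_acc
p_e   <- sum(rowSums(cm) * colSums(cm)) / sum(cm)^2
kappa <- (p_o - p_e) / (1 - p_e)
```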

3. Results

3.1. Model Performance

Random forest model performance varied for each sensor–data fusion product and was validated with cross-validation, confusion matrices, and independent percent cover accuracy assessments. Models using hyperspectral data-fusion products outperformed all others, yet all data-fusion products had relatively high cross-validation Kappa coefficients (Kappacv) (Table 4). The lowest out-of-bag (OOB) error rate was identified in “Hyperspectral + CHM”, with an OOB error rate of 29.8% (Table 4). Without the CHM, the “Hyperspectral” product had a slightly higher OOB error rate of 33.9%. OOB error progressively increased across the multispectral data-fusion products, from “Multispectral + CHM + Phenology” to “Multispectral + CHM” to “Multispectral”, with error rates of 46.6, 58.6, and 70.7%, respectively. Random forest models with RGB data-fusion products had the highest OOB error rates, as the “RGB + CHM” and “RGB” products had 66.5 and 80.5% OOB error rates, respectively.
Variable importance statistics revealed CHMs to be the most important bands included in all RF models (Figure 5). The CHM within the “Hyperspectral + CHM” data-fusion classification had the highest variable importance, followed by steadily decreasing MNF eigenvalue bands. These decreasing MNF eigenvalue bands also explained the most spectral variability in the “Hyperspectral” data-fusion product. Similarly, considering the “Multispectral + CHM + Phenology” data-fusion product, the most important variable was the CHM, followed by phenometrics: (i) time-integrated NDVI, (ii) maxNDVI, and (iii) time-integrated NDVI before maxNDVI, as well as the red band. These patterns were similar for the “Multispectral + CHM” and “Multispectral” product, as the highest to lowest variable importance values were CHM (only in “Multispectral + CHM”), red, blue, red-edge, NIR, and green bands (Figure 5). The highest to lowest variable importance values for “RGB + CHM” and “RGB” data-fusion products were the CHM (only in “RGB + CHM”), red, blue, and green bands.

3.2. Vegetation Classification

We developed seven plant species and PFT classification maps using RF models across our study site. In line with the OOB error rates (Table 4), the distribution of plant species and PFTs notably differed by sensor–data fusion product (e.g., Figure 6 and Figure 7). Overall patterns show that RGB and multispectral products had a higher distribution of woody and sedge species compared to hyperspectral products (Table 5). “Hyperspectral” and “Hyperspectral + CHM” had similar percent cover values for plant species and PFTs (Table 5) and, spatially, show nearly indistinguishable plant species distributions (Figure 6), although C4 grasses dominate in “Hyperspectral”, whereas PLF PFTs dominate in “Hyperspectral + CHM” (Figure 7). Comparatively, total percent cover values show that “Multispectral + CHM + Phenology” had less Bluestem but more Late boneset and Yellow Indiangrass than “Multispectral + CHM” (Table 5). This pattern is apparent in the vegetation maps, where Bluestem is the dominant species in “Multispectral + CHM”, and Yellow Indiangrass and Goldenrod are the dominant species in “Multispectral + CHM + Phenology” (Figure 6). Although plant species distribution differs considerably in these two products, PFT distribution is nearly identical (Table 5). For plant species and PFTs, “RGB” and “Multispectral” display similar patterns for both total percent cover and spatial distributions (Table 5, Figure 6 and Figure 7), and “RGB + CHM” and “Multispectral + CHM” share similar percent cover distributions across the study region (Table 5).

3.3. Accuracy Assessment

Overall classification accuracies ranged from 78% to 25% for plant species and 89% to 43% for PFT classifications, depending on the data-fusion product (Table 6). The highest to lowest overall accuracies by product were: (1) “Hyperspectral + CHM”, (2) “Hyperspectral”, (3) “Multispectral + CHM + Phenology”, (4) “Multispectral + CHM”, (5) “RGB + CHM”, (6) “Multispectral”, and (7) “RGB”. Accuracy assessments for “Hyperspectral + CHM” and “Hyperspectral” data-fusion products produced overall accuracies of 78% and 73% for species and 89% and 86% for PFTs and Kappa coefficients of 0.73 and 0.69, respectively (Table 6). “Multispectral”, “Multispectral + CHM”, and “Multispectral + CHM + Phenology” data-fusion products produced overall accuracies of 27%, 45% and 52% for species; 43%, 61%, and 60% for PFTs; and Kappa coefficients of 0.16, 0.37, and 0.45, respectively. “RGB” and “RGB + CHM” data-fusion products produced overall accuracies of 25% and 33% for species; 43% and 47% for PFTs; and Kappa coefficients of 0.12 and 0.22, respectively.
User and producer accuracies generally declined following the aforementioned pattern of overall accuracies by sensor–data fusion product. User and producer accuracies were relatively high for all plant species in the “Hyperspectral + CHM” and “Hyperspectral” classifications, with the exception of Frost aster (user: 48%, producer: 83% and user: 33%, producer: 50%, respectively; Figure 8, Supplemental Table S1). These high user and producer accuracies were maintained in the PFT accuracies. “Multispectral + CHM + Phenology” and “Multispectral + CHM” had similar user and producer accuracies for plant species classifications, with the exception of saplings (user: 7%, producer: 17% and user: 21%, producer: 23%, respectively) and the absence of Frost aster from the “Multispectral + CHM + Phenology” product. The limitations of the multispectral data-fusion products in identifying sapling species also resulted in low user accuracies for woody PFTs (user: 46%, producer: 75%). User and producer accuracies were substantially lower in the “Multispectral”, “RGB + CHM”, and “RGB” data-fusion products. However, the fusion of a CHM with RGB imagery notably improved both species and PFT accuracies (Figure 8, Supplemental Table S1). For example, Frost aster was not identified in “RGB” but had the highest user and producer accuracies in “RGB + CHM” (user: 0%, producer: 0% and user: 57%, producer: 48%, respectively; Figure 8, Supplemental Table S1).

3.4. Accuracy Assessment: Percent Cover Validation Plots

The relationships between the observed and predicted species and PFT percent cover estimates (e.g., 5 × 5 m plots) were compared with and without the addition of CHMs (Figure 9). For overall plant species cover estimates with CHMs, correlations were highest in “Hyperspectral + CHM” (R2 = 0.83, p < 0.0001), “Multispectral + CHM” (R2 = 0.48, p < 0.0001), and “RGB + CHM” (R2 = 0.33, p < 0.0001) but lowest in “Multispectral + CHM + Phenology” (R2 = 0.06, p < 0.0001) products. Without CHMs, correlations were strongest in “Hyperspectral” (R2 = 0.80, p < 0.0001), followed by “Multispectral” and “RGB” (R2 = 0.42, p < 0.0001 and R2 = 0.38, p < 0.0001) products. Both hyperspectral products had near one-to-one linear relationships with field-measured cover estimates, whereas multispectral fused with phenology had a much lower relationship with observed species percent cover than “Multispectral + CHM” and “Multispectral”, and RGB products had similarly low relationships. Plant species with the highest variation or misclassification across sensor-data fusion classification products were Big bluestem and Goldenrod (Figure 9A).
Aggregating to PFT percent cover increased observed–predicted correlations across all sensor–data products, both with and without CHM data fusion (Figure 9). For overall PFT estimates with fusion, the highest correlations were observed in “Hyperspectral + CHM” (R2 = 0.90, p < 0.0001), “Multispectral + CHM” (R2 = 0.54, p < 0.0001), and “RGB + CHM” (R2 = 0.55, p < 0.0001), with the lowest correlations in “Multispectral + CHM + Phenology” (R2 = 0.49, p < 0.0001). Without CHMs, correlations were strongest in “Hyperspectral” (R2 = 0.88), followed by “Multispectral” and “RGB” (R2 = 0.63, p < 0.0001 and R2 = 0.45, p < 0.0002). Woody species exhibited the least variation, whereas C4 grasses and perennial late-flowering (PLF) plants showed the most variation among all sensor types.
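As a sketch of this plot-level validation, the observed versus predicted percent cover comparison reduces to a simple linear regression per data-fusion product (the data frame and column names below are hypothetical):

```r
# 'cover' is a hypothetical data frame with one row per plot x class combination,
# containing field-observed and map-predicted percent cover for a given product
fit <- lm(predicted_cover ~ observed_cover, data = cover)
summary(fit)$r.squared  # R2, as reported in Figure 9
coef(fit)               # slope and intercept relative to a one-to-one line
```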

4. Discussion

Our UAS multisensor and data-fusion analyses identified a general stepwise increase in plant species and PFT classification accuracies with sensor spectral resolution and the fusion of CHMs. Our analysis found that the most accurate plant species and PFT maps were produced with the “Hyperspectral + CHM” data-fusion product. These results are consistent with a study by Sankey et al. [22], who reported that UAS hyperspectral data fused with a LiDAR-DSM had higher classification accuracies than UAS multispectral data fused with an SfM-DSM (Table 4, Figure 6). Although this result was not surprising, as this was also our most sophisticated UAS sensor network, our results further highlight that the fusion of plant phenometrics with off-the-shelf multispectral and CHM data may improve the overall discrimination of plant species (e.g., [40]) and achieve nearly identical accuracies to those obtained with “Hyperspectral + CHM” for select woody species, sedges, and late-flowering species (Figure 8, Supplemental Table S1).
Our results are in line with previous research suggesting that multitemporal phenometrics may increase species and PFT classification accuracies more than monotemporal UAS observations. For example, the integration of time-series phenometrics improved the accuracies of (1) invasive grass species classification from 54 to 64% [16], (2) PFT classification in moist and dry grasslands from 59 to 64% and 51 to 61%, respectively [19], and (3) estimates of crop yields from 85 to 93% [49]. Although gradual changes in plant phenology throughout the growing season have proven to be an important factor in improving classification accuracies [50,51,52,53,54] (e.g., Table 6, Figure 3), new phenological indices derived from synthetic aperture radar polarimetry have also achieved high species-mapping accuracies (i.e., 86 to 90%) [55]. Collectively, these results suggest that vegetation mapping will be improved by integrating plant phenometrics into machine learning classifiers, regardless of the retrieval method (i.e., optical versus active remote sensing).
In addition to plant phenometrics, canopy height models were found to disproportionately improve plant species and PFT classifications, depending on sensor spectral resolution. However, we found the importance of such canopy structural information to decrease with increasing spectral resolution. For example, the fusion of our CHM with hyperspectral imagery only slightly improved species accuracy from 73 to 78% and PFT accuracy from 86 to 89%, yet it substantially influenced multispectral classifications, increasing accuracies from “Multispectral” to “Multispectral + CHM” for both species (27 to 45%) and PFTs (43 to 61%) (Table 6). The minimal increase in hyperspectral accuracy is comparable to the findings of Anderson et al. [30] and Dalponte et al. [31], who reported that LiDAR fusion increased the ability of hyperspectral data to discriminate tree species by 8–9% [30] and 2% [31], respectively. These findings are also consistent with a study by Sankey et al. [22], who reported that LiDAR-CHM fusion increased overall hyperspectral classification accuracy from 71 to 87%. Although our results may suggest that SfM-derived CHMs had the greatest influence on classifications, this was likely due to the application of SfM-CHMs with lower spectral resolution sensors (e.g., RGB and multispectral), as numerous studies show LiDAR to consistently outperform SfM photogrammetry methods owing to higher point cloud densities [40], better 3D characterization of vegetation [56], and plant height values that are consistent, reliable, and nearly identical to field observations [57].
Although we robustly evaluated plant species and PFT map accuracies using two independent datasets (Figure 1), we did not account for all spectral, spatial, and temporal data permutations that may have influenced mapping accuracies. For example, although we achieved high map accuracies with our hyperspectral VNIR (400–1000 nm) sensor, our discrimination of woody and herbaceous plant species might have been further improved by the combined use of SWIR (1100–1600 nm) and thermal datasets [29,58,59]. In addition, despite using high spatial resolutions (3 cm GSD) for vegetation mapping, higher resolutions (0.35 cm GSD) have been shown to improve tallgrass and perennial wetland species mapping [60]. Furthermore, if we had increased our temporal observation frequencies (i.e., from monthly to weekly) during green-up and peak growing season, we may have been able to improve accuracies further [61], as many common annual and perennial species (e.g., asters and goldenrods) germinate, flower, and fruit within a few weeks. Lastly, although we acknowledge that the choice of machine learning algorithm may subtly affect map accuracies [62], the evaluation of such model differences was beyond the scope of this study. Therefore, to maintain continuity across our UAS vegetation mapping assessment, we applied a single widely used random forest learning algorithm [48,63,64] to all our UAS datasets (resampled to 3 cm spatial resolution) to evaluate the sensor–data fusion approach for mapping vegetation species and PFTs in midwestern grasslands.

5. Conclusions

We used systematic UAS data acquisition and processing methodologies to determine the key tradeoffs in plant species and PFT mapping accuracies among sophisticated and economical drone-borne sensor networks. Our results suggest that the sophisticated UAS simultaneously measuring hyperspectral data and LiDAR-CHMs most accurately mapped plant species using observations from a single peak-growing-season acquisition. More economical UAS sensor networks have the potential to match the PFT classification accuracies of our sophisticated UAS sensor network by increasing observation frequencies and fusing the resulting phenometrics with multispectral data and SfM-CHMs; however, these data-fusion products are inadequate for consistently and reliably detecting plant species. With the rising demand for hyperspectral and LiDAR sensor technology in machine vision, crop phenotyping, precision agriculture, and many other environmental, industrial, forensic, and medical applications [65,66,67], the cost of these cutting-edge sensors is decreasing [19]. Increased accessibility of these sensors will provide new opportunities for more rapid and accurate detection of the spread of invasive grassland species (e.g., Microstegium vimineum [68]) while improving knowledge of the responses of plant communities to environmental change and disturbance.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14143453/s1, Data that support the findings of this study are available. Table S1: “RGB” confusion matrix for plant species classification; Table S2: “RGB” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering); Table S3: “RGB + CHM” confusion matrix for plant species classification; Table S4: “RGB + CHM” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering); Table S5: “Multispectral” confusion matrix for plant species classification; Table S6: “Multispectral” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering); Table S7: “Multispectral + CHM” confusion matrix for plant species classification; Table S8: “Multispectral + CHM” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering); Table S9: “Multispectral + CHM + Phenology” confusion matrix for plant species classification; Table S10: “Multispectral + CHM + Phenology” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering); Table S11: “Hyperspectral” confusion matrix for plant species classification; Table S12: “Hyperspectral” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering); Table S13: “Hyperspectral + CHM” confusion matrix for plant species classification; Table S14: “Hyperspectral + CHM” confusion matrix for PFT classification (PEF = perennial early flowering, PLF = perennial late flowering).

Author Contributions

Conceptualization, M.J.L.; data curation, E.C.H. and M.J.L.; formal analysis, E.C.H.; funding acquisition, M.J.L.; investigation, E.C.H.; methodology, M.J.L.; project administration, M.J.L.; resources, E.C.H. and M.J.L.; software, E.C.H.; supervision, M.J.L.; validation, E.C.H. and M.J.L.; visualization, E.C.H.; writing—original draft, E.C.H.; writing—review and editing, E.C.H. and M.J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Science Foundation’s Environmental Engineering Program (EnvE-1928048).

Data Availability Statement

The data are in the process of being uploaded to the PANGAEA World Data Center.

Acknowledgments

Thank you to Aiden Schore and Yaping Chen for field survey assistance, as well as Matthew Balk at the Urbana Park District for permitting. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Siewert, M.B.; Olofsson, J. UAV Reveals Substantial but Heterogeneous Effects of Herbivores on Arctic Vegetation. Sci. Rep. 2021, 11, 19468. [Google Scholar] [CrossRef]
  2. Villoslada, M.; Bergamo, T.F.; Ward, R.D.; Burnside, N.G.; Joyce, C.B.; Bunce, R.G.H.; Sepp, K. Fine Scale Plant Community Assessment in Coastal Meadows Using UAV Based Multispectral Data. Ecol. Indic. 2020, 111, 105979. [Google Scholar] [CrossRef]
  3. Villoslada Peciña, M.; Bergamo, T.F.; Ward, R.D.; Joyce, C.B.; Sepp, K. A Novel UAV-Based Approach for Biomass Prediction and Grassland Structure Assessment in Coastal Meadows. Ecol. Indic. 2021, 122, 107227. [Google Scholar] [CrossRef]
  4. Chan, Y.K.; Koo, V.C.; Choong, E.H.K.; Lim, C.S. The Drone Based Hyperspectral Imaging System for Precision Agriculture. Nat. Volatiles Essent. Oils J. 2021, 8, 5561–5573. [Google Scholar]
  5. Bhandari, M.; Ibrahim, A.M.H.; Xue, Q.; Jung, J.; Chang, A.; Rudd, J.C.; Maeda, M.; Rajan, N.; Neely, H.; Landivar, J. Assessing Winter Wheat Foliage Disease Severity Using Aerial Imagery Acquired from Small Unmanned Aerial Vehicle (UAV). Comput. Electron. Agric. 2020, 176, 105665. [Google Scholar] [CrossRef]
  6. Huang, H.; Deng, J.; Lan, Y.; Yang, A.; Zhang, L.; Wen, S.; Zhang, H.; Zhang, Y.; Deng, Y. Detection of Helminthosporium Leaf Blotch Disease Based on UAV Imagery. Appl. Sci. 2019, 9, 558. [Google Scholar] [CrossRef] [Green Version]
  7. Zhang, T.; Yang, Z.; Xu, Z.; Li, J. Wheat Yellow Rust Severity Detection by Efficient DF-UNet and UAV Multispectral Imagery. IEEE Sens. J. 2022, 22, 9057–9068. [Google Scholar] [CrossRef]
  8. Maimaitiyiming, M.; Sagan, V.; Sidike, P.; Maimaitijiang, M.; Miller, A.J.; Kwasniewski, M. Leveraging Very-High Spatial Resolution Hyperspectral and Thermal UAV Imageries for Characterizing Diurnal Indicators of Grapevine Physiology. Remote Sens. 2020, 12, 3216. [Google Scholar] [CrossRef]
  9. Xue, W.; Ko, J.; Werner, C.; Tenhunen, J. A Spatially Hierarchical Integration of Close-Range Remote Sensing, Leaf Structure and Physiology Assists in Diagnosing Spatiotemporal Dimensions of Field-Scale Ecosystem Photosynthetic Productivity. Agric. For. Meteorol. 2017, 247, 503–519. [Google Scholar] [CrossRef]
  10. Romero-Trigueros, C.; Nortes, P.A.; Alarcón, J.J.; Hunink, J.E.; Parra, M.; Contreras, S.; Droogers, P.; Nicolás, E. Effects of Saline Reclaimed Waters and Deficit Irrigation on Citrus Physiology Assessed by UAV Remote Sensing. Agric. Water Manag. 2017, 183, 60–69. [Google Scholar] [CrossRef] [Green Version]
  11. Anderson, K.; Gaston, K.J. Lightweight Unmanned Aerial Vehicles Will Revolutionize Spatial Ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef] [Green Version]
  12. Klosterman, S.; Richardson, A.D. Observing Spring and Fall Phenology in a Deciduous Forest with Aerial Drone Imagery. Sensors 2017, 17, 2852. [Google Scholar] [CrossRef] [Green Version]
  13. Aeberli, A.; Johansen, K.; Robson, A.; Lamb, D.W.; Phinn, S. Detection of Banana Plants Using Multi-Temporal Multispectral Uav Imagery. Remote Sens. 2021, 13, 2123. [Google Scholar] [CrossRef]
  14. Thapa, S.; Garcia Millan, V.E.; Eklundh, L. Assessing Forest Phenology: A Multi-Scale Comparison of Near-Surface (UAV, Spectral Reflectance Sensor, Phenocam) and Satellite (MODIS, Sentinel-2) Remote Sensing. Remote Sens. 2021, 13, 1597. [Google Scholar] [CrossRef]
  15. Klosterman, S.; Melaas, E.; Wang, J.; Martinez, A.; Frederick, S.; O’Keefe, J.; Orwig, D.A.; Wang, Z.; Sun, Q.; Schaaf, C.; et al. Fine-Scale Perspectives on Landscape Phenology from Unmanned Aerial Vehicle (UAV) Photography. Agric. For. Meteorol. 2018, 248, 397–407. [Google Scholar] [CrossRef]
  16. Weisberg, P.J.; Dilts, T.E.; Greenberg, J.A.; Johnson, K.N.; Pai, H.; Sladek, C.; Kratt, C.; Tyler, S.W.; Ready, A. Phenology-Based Classification of Invasive Annual Grasses to the Species Level. Remote Sens. Environ. 2021, 263, 112568. [Google Scholar] [CrossRef]
  17. Berra, E.F.; Gaulton, R.; Barr, S. Assessing Spring Phenology of a Temperate Woodland: A Multiscale Comparison of Ground, Unmanned Aerial Vehicle and Landsat Satellite Observations. Remote Sens. Environ. 2019, 223, 229–242. [Google Scholar] [CrossRef]
  18. Fawcett, D.; Bennie, J.; Anderson, K. Monitoring Spring Phenology of Individual Tree Crowns Using Drone-Acquired NDVI Data. Remote Sens. Ecol. Conserv. 2021, 7, 227–244. [Google Scholar] [CrossRef]
  19. Wood, D.J.A.; Preston, T.M.; Powell, S.; Stoy, P.C. Multiple UAV Flights across the Growing Season Can Characterize Fine Scale Phenological Heterogeneity within and among Vegetation Functional Groups. Remote Sens. 2022, 14, 1290. [Google Scholar] [CrossRef]
  20. Assmann, J.J.; Myers-Smith, I.H.; Kerby, J.T.; Cunliffe, A.M.; Daskalova, G.N. Drone Data Reveal Heterogeneity in Tundra Greenness and Phenology Not Captured by Satellites. Environ. Res. Lett. 2020, 15, 125002. [Google Scholar] [CrossRef]
  21. Bolch, E.A.; Hestir, E.L. Using Hyperspectral UAS Imagery to Monitor Invasive Plant Phenology. In Hyperspectral Imaging and Sounding of the Environment, Proceedings of the Optical Sensors and Sensing Congress (ES, FTS, HISE, Sensors), San Jose, CA, USA, 25–27 June 2019; Optical Society of America: San Diego, CA, USA, 2019; ISBN 978-1-943580-61-3. [Google Scholar]
  22. Sankey, J.B.; Sankey, T.T.; Li, J.; Ravi, S.; Wang, G.; Caster, J.; Kasprak, A. Quantifying Plant-Soil-Nutrient Dynamics in Rangelands: Fusion of UAV Hyperspectral-LiDAR, UAV Multispectral-Photogrammetry, and Ground-Based LiDAR-Digital Photography in a Shrub-Encroached Desert Grassland. Remote Sens. Environ. 2021, 253, 112223. [Google Scholar] [CrossRef]
  23. Sankey, T.; Donager, J.; McVay, J.; Sankey, J.B. UAV Lidar and Hyperspectral Fusion for Forest Monitoring in the Southwestern USA. Remote Sens. Environ. 2017, 195, 30–43. [Google Scholar] [CrossRef]
  24. Melville, B.; Lucieer, A.; Aryal, J. Classification of Lowland Native Grassland Communities Using Hyperspectral Unmanned Aircraft System (Uas) Imagery in the Tasmanian Midlands. Drones 2019, 3, 5. [Google Scholar] [CrossRef] [Green Version]
  25. Zweig, C.L.; Burgess, M.A.; Percival, H.F.; Kitchens, W.M. Use of Unmanned Aircraft Systems to Delineate Fine-Scale Wetland Vegetation Communities. Wetlands 2015, 35, 303–309. [Google Scholar] [CrossRef]
  26. Marcial-Pablo, M.d.J.; Gonzalez-Sanchez, A.; Jimenez-Jimenez, S.I.; Ontiveros-Capurata, R.E.; Ojeda-Bustamante, W. Estimation of Vegetation Fraction Using RGB and Multispectral Images from UAV. Int. J. Remote Sens. 2019, 40, 420–438. [Google Scholar] [CrossRef]
  27. Furukawa, F.; Laneng, L.A.; Ando, H.; Yoshimura, N.; Kaneko, M.; Morimoto, J. Comparison of Rgb and Multispectral Unmanned Aerial Vehicle for Monitoring Vegetation Coverage Changes on a Landslide Area. Drones 2021, 5, 97. [Google Scholar] [CrossRef]
  28. Tait, L.; Bind, J.; Charan-Dixon, H.; Hawes, I.; Pirker, J.; Schiel, D. Unmanned Aerial Vehicles (UAVs) for Monitoring Macroalgal Biodiversity: Comparison of RGB and Multispectral Imaging Sensors for Biodiversity Assessments. Remote Sens. 2019, 11, 2332. [Google Scholar] [CrossRef] [Green Version]
  29. Näsi, R.; Honkavaara, E.; Tuominen, S.; Saari, H.; Pölönen, I.; Hakala, T.; Viljanen, N.; Soukkamäki, J.; Näkki, I.; Ojanen, H.; et al. UAS Based Tree Species Identification Using The Novel Fpi Based Hyperspectral Cameras In Visible, Nir and Swir Spectral Ranges. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B1, 1143. [Google Scholar] [CrossRef] [Green Version]
  30. Anderson, J.E.; Plourde, L.C.; Martin, M.E.; Braswell, B.H.; Smith, M.L.; Dubayah, R.O.; Hofton, M.A.; Blair, J.B. Integrating Waveform Lidar with Hyperspectral Imagery for Inventory of a Northern Temperate Forest. Remote Sens. Environ. 2008, 112, 1856–1870. [Google Scholar] [CrossRef]
  31. Dalponte, M.; Bruzzone, L.; Gianelle, D. Fusion of Hyperspectral and LIDAR Remote Sensing Data for Classification of Complex Forest Areas. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1416–1427. [Google Scholar] [CrossRef] [Green Version]
  32. Nelson, D.C.; Anderson, R.C. Factors Related to the Distribution of Prairie Plants along a Moisture Gradient. Am. Midl. Nat. 1983, 109, 367–375. [Google Scholar] [CrossRef]
  33. Old, S.M. Microclimate, Fire, and Plant Production in an Illinois Prairie. Ecol. Monogr. 1969, 39, 355–384. [Google Scholar] [CrossRef]
  34. Piper, J.K.; Schmidt, E.S.; Janzen, A.J. Effects of Species Richness on Resident and Target Species Components in a Prairie Restoration. Restor. Ecol. 2007, 15, 189–198. [Google Scholar] [CrossRef]
  35. Daubenmire, R. Canopy Coverage Method of Vegetation Analysis. NW Sci. 1959, 33, 39–64. [Google Scholar]
  36. Ollinger, S.V. Sources of Variability in Canopy Reflectance and the Convergent Properties of Plants. New Phytol. 2011, 189, 375–394. [Google Scholar] [CrossRef]
  37. Yin, D.; Wang, L. Individual Mangrove Tree Measurement Using UAV-Based LiDAR Data: Possibilities and Challenges. Remote Sens. Environ. 2019, 223, 34–49. [Google Scholar] [CrossRef]
  38. Barreto, M.A.P.; Johansen, K.; Angel, Y.; McCabe, M.F. Radiometric Assessment of a UAV-Based Push-Broom Hyperspectral Camera. Sensors 2019, 19, 4699. [Google Scholar] [CrossRef] [Green Version]
  39. Qian, S.E. Optical Satellite Signal Processing and Enhancement; SPIE: Bellingham, WA, USA, 2013; pp. 419–451. ISBN 978-0-8194-9328-6. [Google Scholar]
  40. Sun, Z.; Wang, X.; Wang, Z.; Yang, L.; Xie, Y.; Huang, Y. UAVs as Remote Sensing Platforms in Plant Ecology: Review of Applications and Challenges. J. Plant Ecol. 2021, 14, 1003–1023. [Google Scholar] [CrossRef]
  41. Araya, S.; Ostendorf, B.; Lyle, G.; Lewis, M. CropPhenology: An R Package for Extracting Crop Phenology from Time Series Remotely Sensed Vegetation Index Imagery. Ecol. Inform. 2018, 46, 45–56. [Google Scholar] [CrossRef]
  42. Chen, X.; Ishwaran, H. Random Forests for Genomic Data Analysis. Genomics 2012, 99, 323–329. [Google Scholar] [CrossRef] [Green Version]
  43. Belgiu, M.; Drăgu, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  44. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  45. Lowe, B.; Kulkarni, A. Multispectral Image Analysis Using Random Forest. Int. J. Soft Comp. 2015, 6, 1–14. [Google Scholar] [CrossRef]
  46. Sun, D.; Xu, J.; Wen, H.; Wang, Y. An Optimized Random Forest Model and Its Generalization Ability in Landslide Susceptibility Mapping: Application in Two Areas of Three Gorges Reservoir, China. J. Earth Sci. 2020, 31, 1068–1086. [Google Scholar] [CrossRef]
  47. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An Assessment of the Effectiveness of a Random Forest Classifier for Land-Cover Classification. ISPRS Photogram. Remote Sens. 2012, 67, 93–104. [Google Scholar] [CrossRef]
  48. Kulkarni, A.D.; Lowe, B. Random Forest Algorithm for Land Cover Classification. Int. J. Recent Innov. Trends Comp. Comm. 2016, 4, 58–63. [Google Scholar]
  49. Yang, B.; Zhu, W.; Rezaei, E.E.; Li, J.; Sun, Z.; Zhang, J. The Optimal Phenological Phase of Maize for Yield Prediction with High-Frequency UAV Remote Sensing. Remote Sens. 2022, 14, 1559. [Google Scholar] [CrossRef]
  50. Bargiel, D. A New Method for Crop Classification Combining Time Series of Radar Images and Crop Phenology Information. Remote Sens. Environ. 2017, 198, 369–383. [Google Scholar] [CrossRef]
  51. Hariharan, S.; Mandal, D.; Tirodkar, S.; Kumar, V.; Bhattacharya, A.; Lopez-Sanchez, J.M. A Novel Phenology Based Feature Subset Selection Technique Using Random Forest for Multitemporal PolSAR Crop Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4244–4258. [Google Scholar] [CrossRef] [Green Version]
  52. Htitiou, A.; Boudhar, A.; Lebrini, Y.; Hadria, R.; Lionboui, H.; Elmansouri, L.; Tychon, B.; Benabdelouahab, T. The Performance of Random Forest Classification Based on Phenological Metrics Derived from Sentinel-2 and Landsat 8 to Map Crop Cover in an Irrigated Semi-Arid Region. Remote Sens. Earth Syst. Sci. 2019, 2, 208–224. [Google Scholar] [CrossRef]
  53. Jin, Y.; Sung, S.; Lee, D.K.; Biging, G.S.; Jeong, S. Mapping Deforestation in North Korea Using Phenology-Based Multi-Index and Random Forest. Remote Sens. 2016, 8, 997. [Google Scholar] [CrossRef] [Green Version]
  54. Nguyen, L.H.; Joshi, D.R.; Clay, D.E.; Henebry, G.M. Characterizing Land Cover/Land Use from Multiple Years of Landsat and MODIS Time Series: A Novel Approach Using Land Surface Phenology Modeling and Random Forest Classifier. Remote Sens. Environ. 2020, 238, 111017. [Google Scholar] [CrossRef]
  55. Woźniak, E.; Rybicki, M.; Kofman, W.; Aleksandrowicz, S.; Wojtkowski, C.; Lewiński, S.; Bojanowski, J.; Musiał, J.; Milewski, T.; Slesiński, P.; et al. Multi-Temporal Phenological Indices Derived from Time Series Sentinel-1 Images to Country-Wide Crop Classification. Int. J. Appl. Earth Obs. Geoinf. 2022, 107, 102683. [Google Scholar] [CrossRef]
  56. Liao, J.; Zhou, J.; Yang, W. Comparing LiDAR and SfM Digital Surface Models for Three Land Cover Types. Open Geosci. 2021, 13, 497–504. [Google Scholar] [CrossRef]
  57. Obanawa, H.; Yoshitoshi, R.; Watanabe, N.; Sakanoue, S. Portable Lidar-Based Method for Improvement of Grass Height Measurement Accuracy: Comparison with SFM Methods. Sensors 2020, 20, 4809. [Google Scholar] [CrossRef]
  58. Nagai, T.; Makino, A. Differences between Rice and Wheat in Temperature Responses of Photosynthesis and Plant Growth. Plant Cell Physiol. 2009, 50, 744–755. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. Turner, D.; Lucieer, A.; Watson, C. Development of an Unmanned Aerial Vehicle (UAV) for Hyper Resolution Vineyard Mapping Based on Visible, Multispectral, and Thermal Imagery. In Proceedings of the 34th International Symposium on Remote Sensing of Environment—The GEOSS Era: Towards Operational Environmental Monitoring, Sydney, NSW, Australia, 10–15 April 2011; p. 4. [Google Scholar]
  60. Bertacchi, A.; Giannini, V.; di Franco, C.; Silvestri, N. Using Unmanned Aerial Vehicles for Vegetation Mapping and Identification of Botanical Species in Wetlands. Landsc. Ecol. Eng. 2019, 15, 231–240. [Google Scholar] [CrossRef]
  61. Yang, Q.; Shi, L.; Han, J.; Yu, J.; Huang, K. A near Real-Time Deep Learning Approach for Detecting Rice Phenology Based on UAV Images. Agric. For. Meteorol. 2020, 287, 107938. [Google Scholar] [CrossRef]
  62. Lan, Y.; Huang, Z.; Deng, X.; Zhu, Z.; Huang, H.; Zheng, Z.; Lian, B.; Zeng, G.; Tong, Z. Comparison of Machine Learning Methods for Citrus Greening Detection on UAV Multispectral Images. Comput. Electron. Agric. 2020, 171, 105234. [Google Scholar] [CrossRef]
  63. Cavender-Bares, J.; Gamon, J.A.; Hobbie, S.E.; Madritch, M.D.; Meireles, J.E.; Schweiger, A.K.; Townsend, P.A. Harnessing Plant Spectra to Integrate the Biodiversity Sciences across Biological and Spatial Scales. Am. J. Bot. 2017, 104, 966–969. [Google Scholar] [CrossRef] [Green Version]
  64. Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J. Random Forests for Classification in Ecology. Ecology 2007, 88, 2783–2792. [Google Scholar] [CrossRef] [PubMed]
  65. Maimaitijiang, M.; Ghulam, A.; Sidike, P.; Hartling, S.; Maimaitiyiming, M.; Peterson, K.; Shavers, E.; Fishman, J.; Peterson, J.; Kadam, S.; et al. Unmanned Aerial System (UAS)-Based Phenotyping of Soybean Using Multi-Sensor Data Fusion and Extreme Learning Machine. ISPRS J. Photogramm. Remote Sens. 2017, 134, 43–58. [Google Scholar] [CrossRef]
  66. Hu, J.B.; Zhang, J. Unmanned Aerial Vehicle Remote Sensing in Ecology: Advances and Prospects. Shengtai Xuebao Acta Ecol. Sin. 2018, 38, 20–30. [Google Scholar] [CrossRef]
  67. Wang, K.; Franklin, S.E.; Guo, X.; Cattet, M. Remote Sensing of Ecology, Biodiversity and Conservation: A Review from the Perspective of Remote Sensing Specialists. Sensors 2010, 10, 9647–9667. [Google Scholar] [CrossRef] [PubMed]
  68. Fusco, E.J.; Finn, J.T.; Balch, J.K.; Chelsea Nagy, R.; Bradley, B.A. Invasive Grasses Increase Fire Occurrence and Frequency across US Ecoregions. Proc. Natl. Acad. Sci. USA 2019, 116, 23594–23599. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Tallgrass prairie study site in Champaign County, Illinois. Species cover and abundance plots (black, field survey plots) are shown with randomized ground-truth data points. Plot numbers correspond to Table 2.
Figure 1. Tallgrass prairie study site in Champaign County, Illinois. Species cover and abundance plots (black, field survey plots) with randomized ground-truth data points. Plot numbers associate with Table 2.
Remotesensing 14 03453 g001
Figure 2. UAS sensor combinations used in the study: (A) DJI Matrice 600 Pro with Headwall Nano-Hyperspec VNIR and Velodyne Puck LITE LiDAR sensors, (B) DJI Inspire II with MicaSense RedEdge multispectral sensor, and (C) Autel Evo II with Sony 1″ CMOS 20 MP RGB sensor.
Figure 3. Landsat-derived seasonal patterns of green-up and senescence (acquired between 2004 and 2021) at our tallgrass prairie sites. UAS data acquired during 2020 are overlaid on a 5-day moving average (solid grey line).
Figure 4. Uncrewed aerial system (UAS) data-processing workflow.
Figure 5. Variable importance plots for each sensor product (x-axis: bands ranked by importance; y-axis: variable importance values): (A) RGB, (B) RGB + CHM, (C) Multispectral, (D) Multispectral + CHM, (E) Multispectral + CHM + Phenology, (F) Hyperspectral, and (G) Hyperspectral + CHM.
Figure 6. Example of random forest plant species maps. Panels show classifications from RGB (A), RGB + CHM (B), Multispectral (C), Multispectral + CHM (D), Multispectral + CHM + Phenology (E), Hyperspectral (F), and Hyperspectral + CHM (G), alongside orthorectified UAS RGB imagery overlaid with ground-delineated reference data by species (H). See Supplementary Material (Tables S1, S3, S5, S7, S9, S11 and S13) for species summaries of the entire study area derived from our top-performing UAS network.
Figure 7. Example of random forest plant functional type (PFT) maps. Panels show classifications from RGB (A), RGB + CHM (B), Multispectral (C), Multispectral + CHM (D), Multispectral + CHM + Phenology (E), Hyperspectral (F), and Hyperspectral + CHM (G), alongside orthorectified UAS RGB imagery overlaid with ground-delineated reference data by PFT (H). See Supplementary Material (Tables S2, S4, S6, S8, S10, S12 and S14) for PFT summaries of the entire study area derived from our top-performing UAS network.
Figure 8. User and producer accuracy results for each sensor-fusion product for plant species and PFTs computed by confusion matrices (available in Supplementary Materials). (A) Producer accuracy for PFTs, (B) producer accuracy for species, (C) user accuracy for PFTs, and (D) user accuracy for species.
Figure 9. Observed vs. predicted percent cover estimates for plant species (panels (AC)) and plant functional types (panel (D)). Random forest classification performance was compared among sensors (Hyperspectral, Multispectral, RGB) without CHM data fusion (panel (A)), with CHM data fusion (panel (B)), overall top-performing species models (panel (C)), and overall top-performing PFT models (panel (D)). Symbols refer to sensor product and color refers to species or PFT.
Table 1. Plant species and plant functional types considered in this study, adapted with permission from Piper et al. [34]. Note: sedges and deciduous saplings were not identified to species.

Plant Functional Type | Plant Species (Common Name)
C4 Grasses | Andropogon gerardii (Big bluestem); Sorghastrum nutans (Yellow Indiangrass)
C3 Sedges | Sedges
Woody Species | Deciduous saplings; Cornus drummondii (Roughleaf dogwood); Juniperus virginiana (Eastern red cedar)
Perennial Late Flowering (PLF) | Eupatorium serotinum (Late boneset); Solidago (Goldenrod); Symphyotrichum pilosum (Frost aster)
Perennial Early Flowering (PEF) | Eryngium yuccifolium (Rattlesnake master)
Table 2. Randomized dGPS points used to train and test classification models for each plant species. Corresponding percent cover values for each plant species are included for each of the fourteen 5 × 5 m² survey plots (Figure 1). Plot numbers with asterisks indicate the percent cover data used for validation (see Section 3.4). Not all plots add up to 100% cover. Note: sedges and deciduous saplings were not identified to species.

Plant Species (Common Name) | dGPS (n) | Percent Cover by Field Survey Plot # (plots 1, 2*, 3*, 4*, 5, 6*, 7*, 8, 9, 10*, 11*, 12, 13*, 14*)
Frost aster | 43 | 6040 14
Big bluestem | 118 | 25151610256078 10 60207040
Late boneset | 49 | 201810
Red cedar | 22 | 15 8 30
Roughleaf dogwood | 28 | 1020
Goldenrod | 126 | 1530745560 5254070 20225
Rattlesnake master | 7 | 2 2
Sapling | 28 | 10 10310
Sedge | 52 | 20
Yellow Indiangrass | 88 | 13 5223 10
Table 3. NDVI phenology metrics computed in the CropPhenology R package, adapted with permission from Araya et al. [41].

Phenometric | Definition | Description
Onset NDVI Value (OnsetV) | NDVI value measured at the start of a continuous positive slope above the minimum NDVI value before the NDVI peak | Identifies new leaf emergence and early growth stages
Onset Time (OnsetT) | Image acquisition time when OnsetV is derived | Shows the month when early growing stages occur
Maximum NDVI Value (MaxV) | Maximum NDVI value achieved in the time series; MaxV = Maximum(NDVI1:NDVI6) | Peak growing month
Time of Maximum NDVI (MaxT) | Image acquisition time when MaxV is derived | Shows the month with highest productivity
Offset NDVI Value (OffsetV) | NDVI value measured at the lowest slope before the minimum NDVI value | Signifies the end of the growing season
Offset Time (OffsetT) | Image acquisition time when OffsetV is derived | Shows the month when the growing season ends
Length of Growing Season (LengthGS) | Duration of time that the vegetation takes to go through all growing stages; LengthGS = OffsetT − OnsetT | Higher values indicate a longer growing season
Length of Growing Season Before MaxT (BeforeMaxT) | Length of time from OnsetT to MaxT; BeforeMaxT = MaxT − OnsetT | Duration of time from emergence to flowering
Length of Growing Season After MaxT (AfterMaxT) | Length of time from MaxT to OffsetT; AfterMaxT = OffsetT − MaxT | Duration of time from flowering to the end of the growing period
Growth Rate Between Onset and MaxT (GreenUpSlope) | GreenUpSlope = (MaxV − OnsetV)/(MaxT − OnsetT) | Rate of NDVI increase from emergence to flowering
Growth Rate Between MaxT and Offset (BrownDownSlope) | BrownDownSlope = (MaxV − OffsetV)/(OffsetT − MaxT) | Rate of NDVI decrease from post-flowering to the end of the growing season
Area Under the NDVI Curve (TINDVI) | Area under the NDVI curve between OnsetT and OffsetT; estimated using trapezoidal numerical integration | A measure of biomass productivity in the growing season
Area Under the NDVI Curve between Onset and MaxT (TINDVIBeforeMax) | Numerical integration of NDVI between OnsetT and MaxT; indicates plant growth pre-flowering | Shows biomass accumulation before flowering occurs
Area Under the NDVI Curve between MaxT and Offset (TINDVIAfterMax) | Numerical integration of NDVI between MaxT and OffsetT; indicates growth after flowering | Shows biomass accumulated after flowering occurs
Measure of Asymmetry between NDVIBeforeMax and NDVIAfterMax (Asymmetry) | Measures which part of the growing season attains relatively higher accumulated NDVI values; Asymmetry = TINDVIBeforeMax − TINDVIAfterMax | Displays the asymmetry of biomass before and after flowering in the growing season
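To make the Table 3 formulas concrete, the short base-R sketch below derives several of the phenometrics from a single pixel's six-date NDVI series. The NDVI values, acquisition months, and the simplified onset/offset rules are illustrative assumptions for this sketch and do not reproduce the CropPhenology implementation.

```r
# Hypothetical six-date NDVI series for one pixel and its acquisition months
ndvi  <- c(0.21, 0.35, 0.62, 0.78, 0.55, 0.30)
month <- c(4, 5, 6, 7, 9, 10)

# OnsetV/OnsetT: start of the first sustained NDVI increase (simplified rule)
onset_i <- which(diff(ndvi) > 0)[1]
OnsetV  <- ndvi[onset_i]; OnsetT <- month[onset_i]

# MaxV/MaxT: seasonal NDVI peak
max_i <- which.max(ndvi)
MaxV  <- ndvi[max_i]; MaxT <- month[max_i]

# OffsetV/OffsetT: end of the observed decline (simplified as the last date)
OffsetV <- ndvi[length(ndvi)]; OffsetT <- month[length(month)]

# Season length and green-up/brown-down rates (Table 3 formulas)
LengthGS       <- OffsetT - OnsetT
GreenUpSlope   <- (MaxV - OnsetV) / (MaxT - OnsetT)
BrownDownSlope <- (MaxV - OffsetV) / (OffsetT - MaxT)

# TINDVI: area under the NDVI curve via trapezoidal numerical integration
trapz  <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
TINDVI <- trapz(month[onset_i:length(month)], ndvi[onset_i:length(ndvi)])
```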
Table 4. Random forest (RF) model parameters and cross-validation statistics for each data-fusion product. The number of random variables (mtry), OOB error, and Kappacv are described for each product. Tree number was constant at 500 for all RF analyses. See text for descriptions of OOB error and Kappacv.

Data-Fusion Product | RF Model: mtry | RF Model: OOB Error (%) | Cross Validation: Kappacv | Cross Validation: OOB Error (%)
RGB | 1 | 80.2 | 0.98 | 1.7
RGB + CHM | 2 | 66.5 | 0.96 | 3.5
Multispectral | 2 | 70.7 | 0.89 | 10.0
Multispectral + CHM | 2 | 58.6 | 0.93 | 6.0
Multispectral + CHM + Phenology | 4 | 46.6 | 0.84 | 13.4
Hyperspectral | 3 | 33.9 | 0.97 | 2.3
Hyperspectral + CHM | 3 | 29.8 | 0.97 | 2.5
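For reference, the sketch below shows how a random forest with the parameters reported in Table 4 is typically fit in R, and how the out-of-bag (OOB) error and variable importance (cf. Figure 5) are extracted with the randomForest package; the training data frame `train_px` and its `species` column are hypothetical stand-ins for the pixel-level training samples.

```r
library(randomForest)

# train_px: hypothetical data frame with a factor response 'species' and one
# column per predictor (spectral bands, CHM height, and/or phenometrics)
set.seed(42)
rf <- randomForest(species ~ ., data = train_px,
                   ntree = 500,   # held constant across all data-fusion products
                   mtry  = 2,     # e.g., the value reported for Multispectral + CHM
                   importance = TRUE)

# OOB error (%) after the final tree, comparable to the values in Table 4
oob_error <- rf$err.rate[rf$ntree, "OOB"] * 100

# Variable importance (mean decrease in accuracy), as plotted in Figure 5
imp <- importance(rf, type = 1)
```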
Table 5. Classified percent cover for plant species and functional types for each data-fusion product across our tallgrass prairie study site. Shadows and ground (i.e., class "other" from Figure 6) are excluded. Note: sedges and deciduous saplings were not identified to species.

Plant species (% cover):
Data-Fusion Product | Aster | Bluestem | Late Boneset | Red Cedar | Dogwood | Goldenrod | Rattlesnake Master | Sapling | Sedge | Yellow Indiangrass
RGB | 5.8 | 24.9 | 7.0 | 1.9 | 2.2 | 23.2 | 0.5 | 2.8 | 5.8 | 21.9
RGB + CHM | 6.7 | 25.9 | 8.5 | 2.4 | 1.4 | 22.5 | 0.2 | 2.4 | 7.6 | 17.5
Multispectral | 4.1 | 25.6 | 5.5 | 0.8 | 2.0 | 24.9 | 0.7 | 2.3 | 4.4 | 21.2
Multispectral + CHM | 8.1 | 28.5 | 5.0 | 0.5 | 1.3 | 26.2 | 0.5 | 3.0 | 7.2 | 15.1
Multispectral + CHM + Phenology | 0.1 | 5.5 | 12.4 | 0.5 | 1.7 | 27.1 | 0.2 | 2.4 | 6.7 | 37.3
Hyperspectral | 3.0 | 42.1 | 3.3 | 0.3 | 0.5 | 32.4 | 3.2 | 0.0 | 0.8 | 11.4
Hyperspectral + CHM | 1.8 | 42.4 | 3.6 | 0.3 | 0.6 | 33.5 | 3.1 | 0.0 | 0.7 | 11.7

Plant functional types (% cover):
Data-Fusion Product | PLF | C4 Grasses | Woody Species | C3 Sedge | PEF
RGB | 36.0 | 46.8 | 6.9 | 5.8 | 0.5
RGB + CHM | 37.8 | 43.4 | 6.1 | 7.6 | 0.2
Multispectral | 34.6 | 46.8 | 5.1 | 4.4 | 0.7
Multispectral + CHM | 39.3 | 43.6 | 4.8 | 7.2 | 0.5
Multispectral + CHM + Phenology | 39.6 | 42.9 | 4.6 | 6.7 | 0.2
Hyperspectral | 38.7 | 53.6 | 0.8 | 0.8 | 3.2
Hyperspectral + CHM | 38.9 | 54.1 | 1.0 | 0.7 | 3.1
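The percent cover values in Table 5 follow from simple pixel counting over each classified map. A minimal sketch, assuming the predicted labels for every pixel are available as a vector `class_map` (a hypothetical name) and that shadow/ground pixels carry the label "other":

```r
# Pixel count per predicted class across the study area
counts <- table(class_map)

# Percent of the mapped area per class, reporting only vegetation classes
pcover <- 100 * counts / sum(counts)
pcover <- round(pcover[names(pcover) != "other"], 1)
```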
Table 6. Summary of the overall accuracy and Kappa coefficients of random forest mapping of 10 plant species and 5 plant functional types (PFTs) as computed from confusion matrices.

Data-Fusion Product | Kapparf | Overall Accuracy (Species) | Overall Accuracy (PFT)
RGB | 0.12 | 25% | 43%
RGB + CHM | 0.22 | 33% | 47%
Multispectral | 0.16 | 27% | 43%
Multispectral + CHM | 0.37 | 45% | 61%
Multispectral + CHM + Phenology | 0.45 | 52% | 60%
Hyperspectral | 0.69 | 73% | 86%
Hyperspectral + CHM | 0.73 | 78% | 89%
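The overall accuracies and Kappa coefficients in Table 6, and the user/producer accuracies in Figure 8, can all be read from the same confusion matrix. A minimal base-R sketch, assuming `cm` is a hypothetical square confusion matrix with predicted classes in rows and reference (ground-truth) classes in columns:

```r
n  <- sum(cm)
po <- sum(diag(cm)) / n                     # overall accuracy
pe <- sum(rowSums(cm) * colSums(cm)) / n^2  # expected chance agreement
kappa_rf <- (po - pe) / (1 - pe)            # Cohen's Kappa

producer_acc <- diag(cm) / colSums(cm)      # complement of omission error (Figure 8A,B)
user_acc     <- diag(cm) / rowSums(cm)      # complement of commission error (Figure 8C,D)
```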
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
