Article

Precise Quantification of Land Cover before and after Planned Disturbance Events with UAS-Derived Imagery

1 School of Transportation & Aviation Technology, Purdue University, 1401 Aviation Drive, West Lafayette, IN 47907, USA
2 Department of Forestry and Natural Resources, Purdue University, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Drones 2022, 6(2), 52; https://doi.org/10.3390/drones6020052
Submission received: 19 January 2022 / Revised: 7 February 2022 / Accepted: 10 February 2022 / Published: 18 February 2022

Abstract

This paper introduces a detailed procedure to utilize the high temporal and spatial resolution capabilities of an unmanned aerial system (UAS) to document vegetation at regular intervals both before and after a planned disturbance, a key component in natural disturbance-based management (NDBM), which uses treatments such as harvests and prescribed burns to remove vegetation fuel loads. We developed a protocol and applied it to timber harvest and prescribed burn events. Geographic object-based image analysis (GEOBIA) was used for the classification of UAS orthomosaics. The land cover classes included (1) bare ground, (2) litter, (3) green vegetation, and (4) burned vegetation for the prairie burn site, and (1) mature canopy, (2) understory vegetation, and (3) bare ground for the timber harvest site. Sample datasets for both kinds of disturbances were used to train a support vector machine (SVM) classifier, which produced four land cover classifications for each site. Statistical analysis (a two-tailed t-test) indicated there was no significant difference in image classification efficacies between the two disturbance types. This research provides a framework for using UASs to assess land cover, which is valuable for supporting effective land management practices and ensuring the sustainability of these and other planned disturbances, such as construction and mining.

1. Introduction

To combat destructive disturbance events, such as wildfire, land managers are increasingly implementing frequent low-intensity planned disturbances, such as selective timber harvest and prescribed burn “treatments”, to remove overgrown and non-native vegetation fuel loads in a controlled setting. Planned disturbance treatments like these belong to a methodology of forest management known in the industry as natural disturbance-based management or NDBM. NDBM treatments can significantly reduce damage from unplanned disturbances, as well as produce healthy, diverse, and resilient forest ecosystems [1,2].
Despite decades of research demonstrating the benefits of both prescribed burning and selective cutting, these treatments are inherently risky and can unintentionally accelerate forest degradation if done incorrectly [1]. Therefore, it is necessary when applying NDBM to regularly monitor changes to the physical environment to determine if the desired outcome was achieved and adjust practices accordingly [3]. To monitor changes, land managers must inventory the treated area both prior to and regularly after the planned disturbance event [4]. While there are many treatment efficacy and vegetation health indices, land cover is commonly used and proves to be a good indicator of a variety of forest health metrics [5,6]. To identify and document the cause-and-effect of NDBM, it is important to accurately document the before condition, and subsequently, the vegetation removed by NDBM; these steps are critical for developing the best practices for future applications of NDBM treatments.
Traditionally, land cover inventories involve manual ground-based methods, which are time-intensive, laborious, and even dangerous in some areas [7]. More recently, land managers have relied upon conventional aerial imaging platforms, such as manned aircraft or satellite networks to conduct inventories [8], both of which have limits. Manned aircraft operations can also be dangerous and are very expensive [9]. Satellite imagery often lacks the temporal (return rate) and spatial (pixel size) resolution needed for this scale of disturbance [10]. Many image-collecting satellites only return to an area every few weeks and are legally limited to pixel sizes of ≥30 cm; these limitations negate the benefits of their capability to cover large areas, such as multiple watersheds, to produce landscape-level land cover classification [11]. While this resolution is useful for disastrous unplanned disturbances, many planned disturbances are conducted at plot-level scales, where coarse spatial and temporal resolutions of traditional remote sensing platforms cannot provide the detail needed to derive useful land cover assessments [12]. Manned aircraft can provide more appropriate spatial resolutions but are often cost prohibitive for discrete planned disturbances [9,10,13]. Fortunately, drones or unmanned aerial systems (UASs) are now commonly used as a data collection tool in many industries and are useful given their ability to collect high-spatial-resolution imagery nearly on-demand [14]. UASs have been recognized as a cost-effective and efficient means to collect precise aerial imagery for repeated analyses of disturbances [4,9,10,13,15,16,17,18,19].
UASs have the potential to serve as flexible, inexpensive, and efficient aerial surveying platforms for land managers to conduct inventories before and after planned disturbance events [20,21]. Despite their demonstrated capability as an effective geospatial data collection tool, the integration of UASs in the forestry community has, until recently, been limited. With earlier UAS technologies, the vast extents of many forests could not be covered given the endurance limits of the aircraft. Today, advancements in UAS design, battery technology, and mission planning software make it easier than ever to survey large areas precisely and efficiently. Technology advances also make it possible for UAS surveys to achieve accurate georeferencing without ground control points (GCPs), even under a dense forest canopy. Surveying GCPs has historically been a time- and labor-intensive task but necessary for obtaining useful inventory measurements. Real-time kinematic (RTK) technology, which relays positional corrections to the UAS during flight via a fixed dual-frequency ground station, also has precision deficiencies in dense forest environments [22]. This limitation has been overcome with post-processing kinematic (PPK) technology, which uses an on-board GPS sensor to simultaneously trigger the camera and capture the coordinates of each image at sub-decimeter positional accuracies [23]. The combination of precision and rapid deployment capabilities now provides the means to produce multi-image datasets with pixel resolutions of a few centimeters.
To analyze the land cover change surrounding planned disturbances, UAS imagery has been used with geographic object-based image analysis (GEOBIA), a remote sensing technique in use for more than 30 years [24], to classify the phenological conditions of discrete forests for damage and regeneration assessments following disturbances. Land inventory studies that use GEOBIA typically employ object-based classification (OBC) methods rather than pixel-based classification (PBC) methods, since the image objects are composed of multiple pixels [24]. Examples of timber felling events inventoried with UAS-derived OBC were documented by Knoth [25] and Goodbody et al. [26]; each study examined previously cut sites that were under restoration. Knoth [25] was able to inventory four peatbog cover types (Waterlogged Bare Peat, Tussocks, Peat Moss, and European White Birch) with a 91% overall accuracy (OA) and individual class accuracies of 84–95%. Goodbody et al. [26] inventoried several conifer species, such as the Douglas Fir, Western Larch, Western White Pine, Lodgepole Pine, Engelmann Spruce, and White Spruce at 5, 10, and 15 years since planting. Three land covers (LCs) were classified to assess seral succession performance (conifer, deciduous, and bare ground), and the analysis achieved an OA of 86–95% between age groups.
For fire events, UAS-derived OBC has also been studied extensively. Larrinaga [27] employed OBC for the post-fire inventory of two successional pine species (black and scots pine) within an oak-dominant Mediterranean forest. White et al. [19] leveraged OBC for inventorying jack pine succession following a wildfire in northern Michigan, with OAs of 98%. Both Pádua et al. [28] and Anderson [29] performed manual OBC for inventorying the phenological condition of forests following fire events. Pádua [28] digitized three classes of post-fire damage severity, in addition to target tree species identification using image segmentation. Anderson [29] constructed a three-class vegetation height map and pine regeneration stand map through the visual interpretation and digitization of image objects. Pérez-Rodríguez et al. [30] achieved an 84.3% OA for a three-class vegetation burn severity inventory. Rousselet [4] performed image differencing of burn classes following Paradise, CA wildfires with multiple OBC maps. Samiappan et al. [31] also performed three-class OBC of burn severity following wildfires in the southern plains. Simpson et al. [32] assessed tropical peatland burns with OBC of LC and achieved a 96% OA when compared to verified high-resolution manned imagery.
While a range of OAs and individual class accuracies have been demonstrated, much of the existing research has focused on assessing the damage (burn severity or gap size/frequency) and conducting an inventory of land covers (heights, species, and percent coverage) only after the disturbance events; some studies have used satellite or manned aircraft imagery for pre-disturbance datasets [18,31,32,33]. One limitation of this methodology is the mismatch in spatial and temporal scales, which requires downsampling the high spatial and temporal resolution of post-disturbance UAS imagery [31], negating the advantages of the UAS. With the increasing implementation of planned disturbances and the importance of understanding their economic and ecological impacts, there is an excellent opportunity to leverage the rapid, precise, and efficient data collection methods provided by UASs. Moreover, due to the planned nature of prescribed burns and selective timber harvests, as well as advancements in on-board post-survey georeferencing methods (which eliminate the need for time-consuming terrestrial-based surveys) [23], there is the opportunity to inventory land cover not only repeatedly after a disturbance but also before it, developing a comprehensive disturbance analysis that would advance the silvicultural management of local forests [3,34,35].
The purpose of this research was to investigate the use of UAS technology to support land management, including the inventory of land covers prior to and repeatedly after planned disturbance events, and to confirm whether UAS results produce similar accuracies for both prairie burn and timber harvest treatments.

Research Objectives

Image acquisition parameters and processing methods vary significantly across similar studies. Consequently, the differences in UAS image quality and the resulting land cover classifications between these studies are vast. To better understand the factors influencing the quality of UAS imagery, diligent testing of data collection parameters and the associated results is needed to integrate UASs into forestry applications effectively [15]. Therefore, the first research objective was to establish effective data procurement methods with UASs for both before and after planned disturbances. This first objective was also intended to optimize the processing parameters for this application and to examine the extent to which those parameters can be modified to enhance poor-quality data. While collecting and processing quality data are important to any forest inventory, equally important are the quality and usefulness of the resulting data products [36].
Data classification parameters and results also vary significantly across other pertinent studies, yet are crucial for the replicability and effective implementation of this method [15]. Therefore, the second research objective was to establish effective data classification methods with UAS imagery to quantify the land cover from both before and after planned disturbances. By developing sequential maps of the land cover at every stage surrounding the disturbance treatment, quantifiable areas of vegetation loss, retention, and regeneration can be determined [3].
Because this study also involved validating the resulting classifications through visual interpretation, these results can be used to compare the output quality among land cover types [37]. Therefore, the third research objective was to determine if there was a significant difference in the output classification quality between the prescribed burn and the selective harvest treatment land covers. By comparing the quality of the classified map series between the two disturbance types, the reliability of this inventory method between open and closed canopy land covers can be examined. Completing these objectives established an effective methodology for the efficient inventorying of land cover change both before and after planned disturbance treatments.

2. Materials and Methods

2.1. Study Area

The Central Hardwood Forest (CHF) region is home to a variety of deciduous hardwood species that exist in a segmented mosaic of savannahs and prairies between discrete plantings of agricultural crops and hardwood forests [38]. Many prairies and hardwood forest plots in this region are protected areas that are privately owned and managed, ranging in size from 10 to 60 acres. To maintain native species in these areas, land managers often engage in controlled burns and selective timber harvests throughout the spring, fall, and winter. The goal of these planned disturbances is to mimic the frequency and intensity of natural fire and windthrow events, but in a controlled setting to facilitate native species regeneration [3]. This way, both the ecosystem and the land manager can benefit by reducing biomass accumulation, increasing favorable conditions for native species regeneration and wildlife habitat and producing valuable forestry goods [39].

2.1.1. Prescribed Prairie Burn Sites

The data collection for prescribed burn treatments was conducted first at three different prairie properties. Each of these prairies, symbolized as beige trees in Figure 1, contains between 8 and 60 acres of native prairie land over relatively flat terrain. The properties are segmented into multiple 1-acre plantings of native grass and forb seed mixtures, which include Big/Little Blue Stem, Black/Brown-eyed Susan, Stiff Goldenrod, Indian Grass, White/Purple Prairie Clover, Wild Rye, Side Oats Grama, Prairie Dropseed, and many others [40]. With 10–15 ft fire line paths separating the plantings, the structure of these properties allows ecology researchers to conduct prescribed fire seasonality studies, in which individual plots are burned at bi-annual, annual, or 2-year intervals so that correlations between fire frequency and species regeneration can be determined [41].
The Doak and Hermann properties (Figure 2) were used for preliminary flight testing with the UAS, and were burned on 19 September 2019 and 8 October 2019, respectively. Each of these sites was surveyed both immediately before and after the prescribed burn treatments to gather preliminary datasets with the UAS. Then, PWA (Figure 2) was burned on 2 April 2020, and was flown over multiple times; one day before the burn, one day after the burn, and multiple times thereafter (Table 1). The proximity of PWA made it possible to gather many datasets following the disturbance, achieving the best temporal resolution of the three sites. Therefore, this site was used in the final land cover assessment for prescribed burns in this project. By assessing land cover change surrounding the prescribed burn at PWA, land managers can leverage this information in conjunction with their current management procedures to assess the efficacy and ecological response of their burn treatment [3].

2.1.2. Selective Timber Harvest Sites

The data collection for selective timber harvests was conducted after the prescribed burns at seven different hardwood timber plantings (Figure 3). These plantings, symbolized as green trees in Figure 1, are owned and maintained by Pike Lumber Company for valuable hardwood lumber and veneer production [43]. Each of these properties is between 10 and 60 acres (Table 2), harboring species such as Oak, Maple, Hickory, Beech, Poplar, Basswood, Walnut, and many others (Table 3). These discrete plantings are surrounded by private agricultural fields and/or unmanaged forests, often with easement corridors that allowed sufficient vertical take-off and landing (VTOL) clearance with the UAS. However, some sites were more accessible than others, and this determined which sites were revisited (Table 2). Every site was flown over once before harvest, five sites were flown over more than once, and one site, Volz (Figure 3), was chosen for the resulting land cover classification analysis. Although Volz was farthest away, it was chosen for three primary reasons: (1) an open space surrounding the plot provided a safe environment for VTOL with the UAS; (2) Volz was the smallest site and, therefore, the most efficient for testing various data collection parameters; and (3) Pike assumed management of Volz in February 2010, making this harvest their first management action on the site. By assessing the land cover change surrounding the first harvest of Volz, Pike’s land managers can leverage this information in conjunction with their current management procedures to assess the efficacy and ecological response resulting from their harvest actions [44].
Once the sites for the NDBM treatments were established, the next step was to engage in data collection testing. This was done to optimize the atmospheric, sensor, and mission planning parameters for quality imagery surrounding the burn and harvest events. The data collection was performed with an industry-grade UAS and sensor combination.

2.2. UAS and Sensors Combination

To test the various data collection parameters and gather image datasets for each planned disturbance, a multi-rotor UAS was equipped with a mirrorless interchangeable lens (MIL) digital camera and a post-processing kinematic (PPK) sensor (Figure 4, Table 3 and Table 4). The UAS used was a DJI Matrice 600 Pro (M600). The platform was selected for its well-balanced combination of payload capacity, rapid deployment, and endurance. With six propeller motors affixed to a full carbon frame, this UAS was powered by six lithium polymer (LiPo) batteries and capable of a relatively long flight time (Max. ~26 min) while remaining steady in strong winds (up to 20 mph) [45].
The UAS and sensor combination was able to rapidly gather multiple datasets, sometimes up to three flights in one day, with precise and accurately geolocated images in a variety of field settings. This allowed for the efficient testing of multiple data collection parameters between flights until quality imagery was consistent. Because the sites were relatively discrete and high-temporal-resolution datasets were necessary, a multi-rotor was chosen over a fixed wing, which often requires more time in the field and larger deployment areas to be safe and worthwhile [13]. Furthermore, the vertical take-off and landing (VTOL) capability of the multi-rotor allowed the aircraft to launch and land in tight areas, such as road ditches, easement corridors, or small canopy openings, in addition to open areas [48], all of which were present in this study. Additionally, the M600 was lightweight and had ample surface area for mounting external payloads, such as the camera and georeferencing system used to capture and geolocate imagery.
Table 3. Camera payload specifications. NOTE: The DJI sensor was a Zenmuse XT2 [49] and was required to operate the UAS. The Sony A6000 was used for capturing georeferenced imagery and was independently operational from the UAS.
Image sensor: Sony A6000; weight: ~0.5 kg (~1 lb); lens: Voigtländer Color Skopar Aspherical; focal length: 21 mm; megapixels: 24.3 MP; ground sample distance (GSD): 2–3 cm per pixel. DJI sensor: Zenmuse XT2 (13 mm); weight: ~0.6 kg (~1.3 lbs).
Table 4. Georeferencing payload specifications. The UAS positioning system was fixed to the aircraft and controlled the position of the UAS during flight. The image georeferencing system was the GeoSnap PPK and was independently operational from the UAS.
Georeferencing system: Field of View GeoSnap PPK; uncorrected PPK accuracy: 30 cm vertical/horizontal; maximum corrected PPK accuracy: 2 cm vertical/horizontal; weight: 206 g (~0.5 lb); dimensions: 90 × 50 × 28 mm. UAS positioning: A3 GPS Compass Pro; GPS accuracy: 50 cm vertical, 150 cm horizontal.
The camera used in this study was chosen because of its interchangeable lens, which was able to fit a short (21 mm) lens with a wide field of view (91.2°) and had a 24.3-megapixel (MP) resolution. This allowed the camera to capture many overlapping pixels between images, thus providing the photogrammetric software with many tie-points when producing orthomosaic composites [36,50]. Additionally, the camera was able to capture intense detail (2.3 cm/pixel GSD) while suspended from a moving platform with a frequently changing depth of field. The resolution and wide field of view of this camera are important because, according to Seifert et al. [51], the greatest contributing factors to producing quality aerial image products are the camera’s resolution and heavy overlap between images.
The georeferencing system used for this study was a GeoSnap PPK [46]. This system was designed to mount onto almost any mapping-grade UAS and contains its own GPS receiver, inertial measurement unit (IMU), and central processing unit (CPU) to record its own position (independently from the UAS) and trigger the camera based on predefined user settings. By adjusting the PPK configuration file settings, the PPK was responsible for triggering the Sony A6000 to capture 80% lateral and frontal overlap between consecutive images in a grid-based mapping mission.
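The overlap achieved at a given altitude follows directly from the camera geometry. The sketch below illustrates that relationship, assuming the Sony A6000's published sensor dimensions (23.5 × 15.6 mm, 6000 × 4000 pixels) together with the 21 mm focal length and 121 m (400 ft) flight altitude reported here; the resulting numbers are illustrative and were not taken from the study.

```python
# Minimal sketch: image footprint and trigger spacing for 80% overlap.
# Sensor dimensions are assumed from the Sony A6000's published specs,
# with the long sensor axis assumed to be oriented across track.
SENSOR_W_MM, SENSOR_H_MM = 23.5, 15.6     # APS-C sensor (assumed)
IMG_W_PX, IMG_H_PX = 6000, 4000           # 24.3 MP image (assumed)
FOCAL_MM = 21.0                           # lens focal length (Table 3)
ALTITUDE_M = 121.0                        # 400 ft AGL mission altitude
OVERLAP = 0.80                            # 80% frontal and lateral overlap

def gsd_m(sensor_mm, img_px, focal_mm, alt_m):
    """Ground sample distance (m/pixel) along one sensor axis."""
    return sensor_mm * alt_m / (focal_mm * img_px)

gsd_x = gsd_m(SENSOR_W_MM, IMG_W_PX, FOCAL_MM, ALTITUDE_M)  # ~0.023 m/px
gsd_y = gsd_m(SENSOR_H_MM, IMG_H_PX, FOCAL_MM, ALTITUDE_M)  # ~0.022 m/px

footprint_x = gsd_x * IMG_W_PX                 # cross-track width, ~135 m
footprint_y = gsd_y * IMG_H_PX                 # along-track length, ~90 m

trigger_spacing = footprint_y * (1 - OVERLAP)  # ~18 m between exposures
line_spacing = footprint_x * (1 - OVERLAP)     # ~27 m between flight lines

print(f"GSD ~{gsd_x * 100:.1f} cm/px; trigger every {trigger_spacing:.0f} m; "
      f"flight lines every {line_spacing:.0f} m")
```

The computed GSD of roughly 2.3 cm/pixel matches the value reported for this camera and altitude, suggesting the assumed sensor geometry is consistent with the study's configuration.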
With this combination of UAS and sensors, each of the identified burn and harvest sites was surveyed (Figure 1), testing the various data collection parameters that affect image quality before processing the imagery and engaging in land cover assessments (Figure 5).

2.3. Data Collection

To address the first objective of this study, which was to establish effective data procurement methods with a UAS for both before and after planned disturbances, the many factors contributing to quality image collection with a UAS needed to be identified and tested to ensure quality aerial image products before engaging in land cover analyses [15]. Before conducting any flights with the UAS, the main data collection variables affecting image quality were identified through the literature and previous UAS survey experience. These variables were divided into three main categories—(1) atmospheric conditions, (2) sensor settings, and (3) mission planning (Table 5).
Atmospheric conditions are often only considered for how they might affect the safe operation of the UAS but should also be considered for quality image collection, especially in forested environments [36,51]. Conditions such as wind, sun illumination, and sun angle can all have an effect on the quality of images captured by the UAS and, thus, the quality of output image products [16,52]. Similarly, the sensor settings that affect the quality of imagery were defined as shutter speed, aperture, ISO, and zoom [17,27,32,51,53,54,55,56]. Finally, the mission planning settings that affect image quality included flight altitude, overlap/sidelap between images, and the number of images collected outside the study area [51,57,58]. Once these settings were identified, the next step was to test them in the field, first at the Doak and Hermann prairie burn sites (Figure 1 and Figure 2).

2.3.1. Prairie Burn Site Testing

The prairie burn sites were tested first due to the openness of these areas, which provided safe operational practice with the UAS and reduced the risk of tree, bird, and aircraft collisions [59]. Here, the “default” data collection variables were tested both before and after the prescribed burn was conducted at the Doak prairie site (Table 6). Following these flights, the images were examined to determine the output of the “default” parameters under the given atmospheric conditions. Many parameters remained the same across the prairie site testing, including image overlap (80% × 80%), ISO (auto), zoom (∞), flight altitude (121 m or 400 ft AGL), and the number of images outside the study area (1). However, the most critical parameters affecting the initial image quality at the prairie sites were adjusted and tested. Therefore, the shutter speed was increased, and a slightly lower sun angle was examined in the flights surrounding the Hermann burn (* in Table 6).
Upon examining the preliminary images, it was determined that the shutter speed used for the Hermann burn flights produced higher quality images than those collected from the Doak burn flights. Despite lower sun angles producing longer shadows, the shutter speed increase (1/2500) balanced the exposure and view into shaded areas and, therefore, was used for the data collection surrounding the PWA prairie burn. The atmospheric conditions of the repeat PWA flights varied slightly, but were mostly conducted during peak sun angle times (1100–1600), with clear skies and low wind (Table 7).
Repeat flights at the PWA site were flown using the same mission plan, which covered the same 26-acre area in roughly 15 min (not including the manual launch and landing times). The UAS flew a north-to-south pattern at a height of 400 ft AGL and a maximum ground speed of 20 MPH during image passes. While the mission was set to capture an 80% frontal overlap between consecutive images and an 80% lateral overlap between image passes, these overlaps were likely higher, given that the footprint of the PPK-triggered camera was wider than that of the DJI sensor on which the mission planning software based these percentages.
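At these settings, motion blur from the aircraft's forward motion should be negligible, as the quick check below suggests. The ground speed, shutter speed, and GSD are taken from the text; treating them this way is a simplification that ignores vibration and angular motion.

```python
# Rough check (simplified): ground distance travelled during one exposure
# at the PWA mission settings, expressed in pixels of the ~2.3 cm GSD.
ground_speed_ms = 20 * 0.44704   # 20 mph in m/s (~8.9 m/s)
exposure_s = 1 / 2500            # shutter speed used for the PWA flights
gsd = 0.023                      # metres per pixel (from Table 3)

blur_m = ground_speed_ms * exposure_s    # ~0.0036 m of ground motion
blur_px = blur_m / gsd                   # ~0.16 px, well under one pixel
print(f"Motion during exposure: {blur_m * 100:.2f} cm ({blur_px:.2f} px)")
```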

2.3.2. Timber Harvest Site Testing

To establish effective data collection parameters at the timber harvest sites, flights were conducted at each harvest site and optimized by the third flight (post-cut) at Volz. At first, Volz was flown before harvest with the same parameters that were established for the PWA site. However, when these settings were used in the following flights at other timber harvest sites, images displayed motion blur, overexposure, and low depth of field. Because of this, the flight altitude, shutter speed, and number of images outside the study area were adjusted for the mid-cut flight at Volz (Table 8). The flight was conducted at 500 ft (150 m) AGL to broaden the depth of field and field of view, as well as to reduce the motion blur and the overall number of images [57]. Since the canopy heights were around 100 ft (30 m), this increase in flight altitude was permissible under the 14 CFR Part 107 regulations at the time, which allowed the operation of UASs no more than 400 ft above the ground or above the tallest object in open airspace [60]. Additionally, the shutter speed was increased to 1/4000 s to further reduce the likelihood of overexposure and motion blur in images (Table 8).
Every flight at the Volz site used the same mission plan, which covered the same 10-acre area in 15 to 20 min by flying an east-to-west pattern. Since the flight height changed between the first and last three flights (400 ft AGL to 500 ft AGL), the number of images was reduced, but each image captured a wider field of view and greater depth of field.
Once all flights for the prescribed burn at PWA and the selective timber harvest at Volz were completed, the next step was to correct the image coordinates that were collected by the PPK and engage in photogrammetric processing (see Figure 5 for the workflow).

2.3.3. Data Processing

The next step in establishing effective data procurement methods for both before and after planned disturbances was to produce quality output products—specifically, georeferenced orthomosaics. Georeferencing is the process of pairing each image with its associated coordinates. This can be achieved in many different ways, such as with GCPs, RTK, PPK, or a combination of these methodologies [22,61,62,63]. Georeferenced orthomosaics were produced for each image dataset, while those surrounding the PWA and Volz disturbances were used in the final land cover classification assessments. When testing processing methods for the prairie and forested sites, the procedures were mostly the same; however, some forested datasets performed better after adjusting two specific parameters in Pix4D.
PPK was used in this study both to trigger images and to georeference them, but the recorded positions were corrected after the flight to ensure the most accurate image positions possible. To do this, EZSurv software was used for its compatibility with the selected PPK device and its user-friendly interface. A datum and projected coordinate system were set to establish the spatial reference used when performing the corrections (Table 9). Then, RINEX (raw positioning) data from the flight were corrected using the nearest CORS location for each site. This was done for each dataset surrounding the selected prairie burn (PWA) and timber harvest (Volz) sites (Table 10).
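The trajectory correction itself was performed in EZSurv against the nearest CORS base station. As a small illustration of the spatial-reference step only, the sketch below converts a geographic coordinate into the WGS 84 / UTM zone 16N system (EPSG:32616) used for these sites; the coordinate values are made up, and pyproj is simply one common open-source way to perform the projection.

```python
from pyproj import Transformer

# Hypothetical corrected image position in geographic coordinates
# (values roughly within Indiana, for illustration only).
lon, lat, alt_m = -86.91, 40.43, 215.0

# WGS 84 geographic (EPSG:4326) -> WGS 84 / UTM zone 16N (EPSG:32616)
to_utm16n = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)
easting, northing = to_utm16n.transform(lon, lat)

print(f"E {easting:.2f} m, N {northing:.2f} m, height {alt_m:.1f} m")
```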
Once the image coordinates were corrected, this information was brought into Pix4D along with the associated image dataset to produce georeferenced orthomosaics [57,65]. Within Pix4D, several parameters were adjusted that affect processing speed, output quality, and reconstruction detail. The keypoint image scale (KPIS) in the “initial processing” step defines the scale at which the processing algorithm looks for similar pixels of identifying objects between multiple images in order to properly position each image within the orthomosaic [66]. For the prescribed burn prairie sites, the default KPIS of “1” was used almost exclusively, while some selective harvest sites performed better with a reduced KPIS of “½” or “¼”, especially when some of the images contained motion blur. In some instances, adjusting the KPIS still was not enough to produce a quality georeferenced orthomosaic, and the calibration method was switched from “standard” to “alternative” [50]. Because the burn sites performed well with the default “standard” calibration method, the “alternative” method was not tested for PWA. However, after discovering issues within some of the orthomosaics for harvest sites, the Volz datasets were processed with both the “standard” and “alternative” calibration methods at each KPIS to determine the best combination (Table 11).
Orthomosaic composites, 3D Point Clouds, and DSMs were produced for each PWA and Volz dataset surrounding their given NDBM treatment. The best iterations of the PWA and Volz datasets were chosen as the basis for the land cover classifications.

2.4. Data Classification

To directly address the second objective in this study, which was to establish effective data classification methods with UAS imagery to quantify land cover from both before and after planned disturbances, the georeferenced orthomosaics were used to engage in geographic object-based image analysis (GEOBIA). Object-based classification (OBC) was used because the image objects were comprised of many pixels due to the high-resolution imagery [19,24]. OBC relies on image segmentation prior to classifying land covers, which groups pixels into objects with similar spectral and spatial characteristics to simplify texture and color variances [67]. Next, the user collects multiple sample polygons for each land cover type. These are then combined with the segmented image to train the classification algorithm. An example of the progression from orthomosaic imagery, through segmentation, and the resulting land cover maps can be seen in Figure 6.
A widely studied component of OBC is the classification algorithm. Numerous studies have suggested using support vector machines (SVMs) over other popular algorithms, such as random forest, maximum likelihood, and ISO cluster classifiers [19,68,69,70,71,72,73,74]. Additionally, few studies have examined the efficacy of an SVM classification algorithm in ArcGIS Pro for land cover change surrounding disturbances; most have instead focused on software packages such as eCognition, Agisoft, and others [53,54,65,68,75,76].
Furthermore, the accuracy of the resulting land cover classification is determined by plotting points randomly on top of the classified image and comparing the class each point falls within to the actual land cover at that point. Husson, Ecke, and Reese [37] validated this procedure by comparing the proportions of land covers in their classified dataset to proportions collected through field samples. After achieving an overall classification accuracy of 95% when compared to field measurements, the authors determined that standalone visual interpretation of their high-spatial-resolution orthomosaic image was sufficient for performing an accuracy assessment of land cover classifications produced with that imagery.
While these findings provided a good foundation for performing GEOBIA, the specific parameters for this application required further experimentation to produce quality land cover classifications for image datasets surrounding both the prescribed burn and selective timber harvest NDBM treatments.

2.4.1. Image Segmentation

Segmentation requires user-set parameters for spectral detail, spatial detail, and minimum segment size, which are typically optimized through trial and error and vary with pixel resolution [24]. In ArcGIS Pro, the default segmentation parameters are 15.5 (spectral), 15 (spatial), and 20 (minimum segment size). For the first test, the post-burn dataset from Doak was segmented using these default parameters and then adjusted until each image object was represented by as few segments as possible while avoiding the grouping of multiple objects into one segment [37]. This was done by increasing the spectral detail, decreasing the spatial detail, and increasing the minimum segment size. Figure 7 shows the progression of segmentation parameters tested, using the Doak post-burn dataset as an example.
Upon optimizing the segmentation parameters, orthomosaics for the PWA and remaining Volz sites were segmented using the “Segment Mean Shift” tool. The parameters used were 17 spectral detail, 10 spatial detail, and 80 minimum segment size. After this was completed, the next step was to define a classification schema for each type of NDBM treatment.
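The segmentation itself was performed with the Segment Mean Shift tool in ArcGIS Pro, whose parameters do not transfer directly to other software. As a minimal open-source sketch of the same trial-and-error tuning loop, the snippet below sweeps a small grid of settings for scikit-image's quickshift segmenter (a related clustering-based method, used here only as a stand-in) and reports the number of segments produced; the file name and parameter grid are illustrative.

```python
import itertools
import numpy as np
from skimage import io, segmentation

# Hypothetical RGB orthomosaic (or a representative tile of it, since
# segmenting a full cm-resolution mosaic at once can be very slow).
ortho = io.imread("ortho_tile.tif")[:, :, :3] / 255.0

# Sweep segmentation settings; the goal described in the text is the
# fewest segments per real-world object without merging adjacent objects.
for kernel_size, max_dist in itertools.product([5, 7, 9], [8, 12, 16]):
    segs = segmentation.quickshift(
        ortho, kernel_size=kernel_size, max_dist=max_dist, ratio=0.5
    )
    print(f"kernel_size={kernel_size}, max_dist={max_dist}: "
          f"{len(np.unique(segs))} segments")
```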

2.4.2. Classification Schema

In this study, two classification schemas were developed—one for each disturbance type. For the PWA datasets, four classes were used to assess the land covers surrounding the prescribed burn—(1) bare ground, (2) litter, (3) burned vegetation, and (4) green vegetation. For the Volz datasets, three classes were used to assess the land covers surrounding the selective timber harvest—(1) bare ground and woody debris, (2) understory vegetation, and (3) mature canopy. After defining the classification schemas for both types of NDBM treatments and respective land covers, samples were collected for each class within each dataset.

2.4.3. Sample Collection, Classification, and Area Calculations

Within the GEOBIA literature, the number of samples collected for each class is still widely debated [15]. Kucharczyk et al. [24] suggested collecting an equal number of samples per class, with a minimum of 50, to properly train a machine-learning classification algorithm. At first, 65 samples were collected for each class within the PWA dataset to train the classification algorithm. After promising outputs, the same method was applied to the mid-cut dataset for Volz, where it became apparent that more samples were needed to produce similar output quality. The sample size was therefore increased to 80 per class for the mid-cut dataset and applied to the remaining Volz datasets.
Upon collecting training sample polygons for each dataset, the next step was to use these samples and associated segmented images as inputs for the SVM algorithm to produce land cover classifications.
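The study performed this step with the SVM classifier in ArcGIS Pro. The sketch below shows the same idea with open-source tools, assuming a segment raster produced by a segmenter such as the one sketched above and a raster of training polygons burned in as class values; all file names, feature choices, and parameters are illustrative rather than the study's actual configuration.

```python
import numpy as np
from skimage import io
from sklearn.svm import SVC

# Hypothetical inputs:
#   ortho.tif    - RGB orthomosaic from photogrammetric processing
#   segments.npy - integer segment labels (e.g., from the quickshift sketch)
#   samples.tif  - training polygons rasterized so that pixel values
#                  1..n_classes mark sampled land covers and 0 = unsampled
ortho = io.imread("ortho.tif")[:, :, :3].astype(float)
segments = np.load("segments.npy")
samples = io.imread("samples.tif").astype(int)

# One feature vector per segment: mean R, G, B of its pixels.
seg_ids = np.unique(segments)
features = np.array([ortho[segments == s].mean(axis=0) for s in seg_ids])

# A segment becomes a training example if any sampled pixel falls inside
# it; its label is the most common sampled class among those pixels.
labels = np.zeros(len(seg_ids), dtype=int)
for i, s in enumerate(seg_ids):
    vals = samples[segments == s]
    vals = vals[vals > 0]
    if vals.size:
        labels[i] = np.bincount(vals).argmax()

train = labels > 0
clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # support vector machine
clf.fit(features[train], labels[train])

# Classify every segment, then paint predictions back onto the raster grid.
pred = clf.predict(features)
classified = pred[np.searchsorted(seg_ids, segments)]
io.imsave("classified.tif", classified.astype(np.uint8))
```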
Following each classification, a count of pixels within each land cover class was included in the resulting attribute table. The area of each land cover was calculated by multiplying the number of pixels in each class by the ground area of a single pixel, i.e., the square of the ground sample distance (GSD), which in this case was between 2 and 3 cm. Area calculations were conducted for each PWA and Volz dataset.
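As a minimal sketch of this conversion, the snippet below turns per-class pixel counts into areas; the counts are hypothetical placeholders, and a mid-range GSD of 2.5 cm is assumed.

```python
# Convert per-class pixel counts into areas. Counts are placeholders;
# the 2-3 cm GSD comes from the study, with 2.5 cm assumed here.
gsd = 0.025                          # metres per pixel (assumed mid-range)
pixel_area_m2 = gsd ** 2             # each pixel covers ~6.25 cm^2

pixel_counts = {                     # hypothetical counts for one PWA date
    "bare ground": 8_000_000,
    "litter": 55_000_000,
    "green vegetation": 70_000_000,
    "burned vegetation": 35_000_000,
}

for land_cover, count in pixel_counts.items():
    area_ha = count * pixel_area_m2 / 10_000   # m^2 to hectares
    print(f"{land_cover}: {area_ha:.2f} ha")
```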

2.4.4. Accuracy Assessment and Confusion Matrix

Accuracy assessments were performed using the “create accuracy assessment points” and “compute confusion matrix” tools in ArcGIS Pro. A sampling strategy of 250 equalized stratified random (ESR) points was used so that each class had either 62 (for four-class burn maps) or 84 (for three-class harvest maps) reference points. The accuracy assessment points tool projected an equal number of points for each class (ESR) onto the classified output, which generated a table with columns for (a) point ID, (b) classified as, and (c) ground truth. Each point was then examined using the georeferenced orthomosaic as a reference [37] to determine its true land cover for which the corresponding class value was entered into the “ground truth” column. Once the table was filled with both the “classified as” and “ground truth” values, the “compute confusion matrix” tool was used to generate a sample error matrix table. Then, all the sample error matrix tables were converted into estimated population error matrix tables [77,78], from which per-class and map-level classification accuracies and efficacies were derived [79] (see Section 3). Image classification efficacies were computed to compare the classification effectiveness between the two different classification schemas. The map-level image classification efficacies (MICE) were used for this comparison.
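The study performed these steps with ArcGIS Pro tools and spreadsheet calculations. The sketch below reproduces the core arithmetic under stated assumptions: it builds a sample error matrix from stratified points, converts it to an estimated population error matrix using mapped class proportions (one common area-weighted formulation; the study's exact procedure follows its cited references), and derives overall, user's, and producer's accuracies. The point labels and class proportions are simulated placeholders, and the efficacy (MICE) computation is not shown.

```python
import numpy as np

classes = ["bare ground", "litter", "green vegetation", "burned vegetation"]
k = len(classes)
rng = np.random.default_rng(0)

# Simulated accuracy-assessment points: 62 per mapped class (equalized
# stratified random design), with ~10% disagreement as a placeholder.
classified = np.repeat(np.arange(k), 62)
flip = rng.random(classified.size) < 0.10
reference = np.where(flip, rng.integers(0, k, classified.size), classified)

# Sample error matrix: rows = mapped class, columns = reference class.
n = np.zeros((k, k))
for c, r in zip(classified, reference):
    n[c, r] += 1

# Mapped area proportions W_i (placeholders; in practice these come from
# the per-class pixel counts of the classified map).
W = np.array([0.05, 0.33, 0.41, 0.21])

# Estimated population error matrix: p_ij = W_i * n_ij / n_i+
p = W[:, None] * n / n.sum(axis=1, keepdims=True)

overall_accuracy = np.trace(p)
users_accuracy = np.diag(p) / p.sum(axis=1)       # per mapped class
producers_accuracy = np.diag(p) / p.sum(axis=0)   # per reference class

print(f"Overall accuracy: {overall_accuracy:.3f}")
for i, name in enumerate(classes):
    print(f"{name}: user's {users_accuracy[i]:.2f}, "
          f"producer's {producers_accuracy[i]:.2f}")
```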

2.5. Significance Testing

To directly address the third objective in this study, which was to determine if there was a significant difference in the final classification quality between the prescribed burn and selective harvest treatment land covers, a two-tailed t-test assuming equal variances was performed. This was conducted to compare (1) the variance within each disturbance type/land cover and (2) the difference between disturbance types/land covers, testing the latter for significance at the 95% confidence level. By calculating these statistics, the reliability of the classification method for either type of disturbance or land cover could be determined. Excel was used for the calculation by first placing the overall classification accuracies (OA) for each disturbance/land cover type into their own columns. Then, the “t-Test: Two-Sample Assuming Equal Variances” data analysis tool was used. The range of OA values for each disturbance/land cover type was entered into the model, along with a “hypothesized mean difference” of “0”, corresponding to the null hypothesis of no difference between the mean accuracies. The data analysis tool generated output with the mean, variance, and observation count for each disturbance/land cover type, the pooled variance between both ranges of OAs, the hypothesized mean difference (“0”), the degrees of freedom, and then the t-statistic, the p-value, and the t-critical values.
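The same test can be reproduced outside of Excel. The sketch below uses SciPy's two-sample t-test with equal variances assumed; the overall accuracy values are placeholders within the range reported by the study, not the study's actual per-map results.

```python
from scipy import stats

# Placeholder overall accuracies, one per classified map (four burn maps
# at PWA and four harvest maps at Volz); values are illustrative only.
oa_burn = [0.952, 0.944, 0.968, 0.971]
oa_harvest = [0.912, 0.935, 0.808, 0.941]

# Two-sample, two-tailed t-test assuming equal variances (equivalent to
# Excel's "t-Test: Two-Sample Assuming Equal Variances" tool).
t_stat, p_value = stats.ttest_ind(oa_burn, oa_harvest, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A p-value below 0.05 would indicate a significant difference; the study
# reports p = 0.133, i.e., no significant difference between the types.
```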

3. Results

3.1. Objective 1: Data Procurement

Objective 1 was to establish effective data procurement methods for both before and after planned disturbances and was achieved by testing various data collection and processing parameters with UAS imagery at multiple prescribed burn and selective timber harvest sites in the Central Hardwood Forest (CHF) region. Key UAS data collection variables were tested for two types of land cover (prairie and forest) surrounding their respective natural disturbance-based management (NDBM) treatments (Table 12 and Table 13).

3.1.1. Optimized Data Collection Parameters

Quality image datasets were collected at PWA both before and repeatedly after the prescribed prairie burn. Following the preliminary data collection tests at the Doak and Hermann prescribed prairie burn sites, the atmospheric, sensor, and mission planning parameters were optimized and implemented for the PWA burn surveys (Table 12).
The optimal data collection parameters for harvest sites differed slightly from the prairie burn sites due to the height of vegetation and sometimes minimal ground visibility. Quality imagery was collected both before and repeatedly after the selective timber harvest at Volz with the parameters shown in Table 13.

3.1.2. Resulting Georeferenced Orthomosaics

Four georeferenced orthomosaics were produced for the PWA prairie burn and Volz timber harvest sites, respectively, to serve as the basis for land cover classification (Figure 8 and Figure 9).

3.2. Objective 2: Land Cover Classification

Objective 2 was to establish effective data classification methods to quantify land cover from both before and after planned disturbances and was achieved by testing various segmentation, sampling, and classification parameters for the prescribed prairie burn (PWA) and selective timber harvest (Volz) datasets. The same segmentation parameters were used for both the PWA and Volz datasets: 17 spectral detail, 10 spatial detail, and 80 minimum segment size. Then, classification schemas were established for each NDBM treatment and land cover type. A total of 65 samples were collected for each of the four classes in the PWA prescribed prairie burn datasets (Figure 10), while the Volz selective timber harvest datasets required 80 samples for each of the three classes to produce similar-quality land cover classifications (Figure 11). A support vector machine (SVM) classification algorithm was used to classify each of the datasets, and an accuracy assessment with 250 equalized stratified random (ESR) points was conducted to produce confusion matrices for the resulting classifications (Table 14 and Table 15).

3.3. Objective 3: Significance Testing

A two-tailed t-test assuming equal variances was conducted between the overall accuracies of each disturbance type to determine whether the difference in overall accuracy between the disturbance types/vegetation covers was significant at the 95% confidence level (α = 0.05) (Table 16). Because the GEOBIA process used in this study was mostly the same between the two types of disturbances and land covers, the variances were assumed to be equal. Furthermore, by conducting this particular type of t-test, the effectiveness of this method could be tested objectively for the two types of NDBM treatments commonly employed in the CHF region.
For a statistically significant difference between the two types of disturbances and land covers, the p-value (0.133) would need to be less than the significance level (0.05). In this case, the p-value is not less than the significance level, and therefore, there is not a statistically significant difference between the overall accuracies of the PWA prescribed prairie burn and the Volz selective timber harvest land cover classifications. This means that while PWA did have higher overall and individual class accuracies, they were not significantly better than those of Volz, so the collection and classification methods employed in this study were effective for both disturbance and land cover types.

4. Discussion

4.1. Objective 1—Part 1: Data Collection Parameters

Before collecting any data, the various parameters affecting quality image collection with a UAS were defined by reviewing similar studies within the literature and drawing on UAS survey experience gained prior to this research study. Unfortunately, as Buters et al. [15] point out, each study contained a unique set of parameters for various parts of the collection, processing, or classification steps; thus, standards for this UAS application were non-existent. This led to the iterative testing of atmospheric, sensor, and mission planning parameters, in conjunction with diligent note-taking, to infer how these variables affected the quality of imagery collected by the UAS. Fortunately, a number of suitable prairie burn and timber harvest sites were identified, which allowed for this type of iterative testing without compromising the production of final land cover classifications for either type of disturbance.
The results show that prairie burn sites were less sensitive to atmospheric conditions, such as sun angle, brightness/cloud cover, and wind (Table 12 and Table 13). This was likely due to the low height and visual simplicity of the vegetation being imaged. Conversely, timber harvest sites were more sensitive to all three atmospheric conditions, likely due in part to the tall height and visual complexity of mature forest canopies. Therefore, the best images were collected as close to solar noon as possible, with overcast skies, diffuse lighting, and low wind (Table 13). Furthermore, the sensor settings were mostly the same, with the exception of shutter speed, which depended mostly on lighting and wind (Table 12 and Table 13). Finally, the flight altitude and the number of images collected outside the study area differed between the two land covers. Ensuring sufficient distance between the sensor and vegetation was easily done at 121 m above ground level (AGL) for prairie sites, but given the height of mature trees at timber harvest sites (often between 24 and 30 m), the flight altitude was increased to roughly 121 m above the forest canopy (AFC). Additionally, while prairie burn sites required just 1 image outside the study area to reduce outer image distortion in resulting orthomosaics, timber harvest sites required 2 or more, depending on the adjacent vegetation. This was likely due to the presence of visible ground within the datasets: whereas prairies had ample visible ground within and surrounding the study area, timber harvest sites had visible ground only along plot perimeters, if at all.
Overall, quality images were gathered with the respective data collection parameters, despite logistical challenges that prevented ideal data collection scenarios, especially for timber harvest sites.

4.2. Objective 1—Part 2: Data Processing Parameters

After gathering quality images for each site, the next step was to engage in data processing. This involved correcting image coordinates collected by the post-processing kinematic (PPK) sensor during flight before synchronizing those coordinates with their associated image in photogrammetric processing software to produce georeferenced orthomosaics. Because the datasets were captured within the state of Indiana, or just across the border into Michigan or Ohio, the sites fell within the 16th zone of the Universal Transverse Mercator (UTM) projected coordinate system. This allowed each site to be corrected to the WGS 84 UTM Zone 16 North spatial reference, which has an estimated error of just 2 cm [64] (Table 9).
At the orthomosaic generation stage, the photogrammetric processing parameters diverged slightly. The prairie sites performed well with the default parameters, often achieving image calibration rates of 99% or better (Table 11). On the other hand, the timber harvest sites required more data manipulation to produce quality orthomosaics, particularly when the images were collected in adverse conditions (Table 11). Motion blur, overexposure, dark shadows, and/or too few images collected outside the study area proved to be problematic for the orthomosaic generation of the timber harvest sites. To salvage these datasets, the keypoint image scale (KPIS) was reduced from “1” to “1/2”, thereby reducing the number of tie-points the software searched for at one time and producing better mosaicking than the default KPIS [66]. Additionally, the calibration method was changed from “standard” to “alternative”, for which the back-end procedure is not publicly available. However, this setting likely reduced the number of images discarded when too few keypoints were identified, thus increasing the number of images retained in the resulting orthomosaic.

4.3. Objective 2: Data Classification Parameters

Using the high-quality georeferenced orthomosaics generated in the previous step, supervised data classification methods were used to produce a series of land cover classification maps surrounding both the PWA prescribed prairie burn and the Volz selective timber harvest sites (Figure 10 and Figure 11). This was done by adjusting the segmentation parameters that affected the spatial, spectral, and size details of the output segmented objects. As Kucharczyk et al. [24] noted, “determining an optimal segmentation parameter is a heuristic, subjective, challenging, and time-intensive trial-and-error process” (p. 7). Looking at the resulting classifications, it appears that this process might be to blame for not achieving even higher overall and individual classification accuracies. The “optimal” parameters for the centimeter-level image resolution used in this study were assumed to be suitable for all datasets collected with the same sensor, but this decision was made after processing only the PWA datasets. In hindsight, each iteration of the segmentations tried should have been classified to determine which parameters led to the best initial classification accuracies before segmenting the rest of the datasets. Furthermore, the timber harvest datasets would likely require different parameters, especially for the spectral detail and minimum segment size, to better delineate large tree crowns and distinguish felled timber stem colors from bare ground in canopy gaps.
Before collecting training samples, classification schemas were developed for both land covers, with specific and appropriate classes for each. When monitoring the vegetation loss and recovery surrounding the prescribed burns, it was important to know the proportions of no vegetation growth (bare ground), fuel (litter), healthy and/or unconsumed vegetation (green vegetation), and vegetation consumed by the fire (burned vegetation) [41]. By quantifying these areas both before and after the disturbance, land managers had an objective determination of the prescribed burn efficacy as it relates to the native ecosystem vitality. For selective timber harvests, it was important to monitor the proportions of no visible growth and/or felled stems (bare ground and woody debris), regenerative growth (understory vegetation), and well-established growth (mature canopy), with woody debris being a function of wildlife habitat and forest structure [80]. While in some ways, quantifying the volume of these classes in terms of biomass would be more desirable for forest managers, measuring the “area” of each land cover still provided a valuable insight into the patterns and rates of regeneration as they relate to harvest techniques [6,10,25,26,37,70].
Next, sample polygons were collected for each segmented image to train the machine-learning classification algorithm. Following the recommendation of an equal number of samples across all classes, with at least 50 per class [24], 65 samples per class were used for the PWA classifications. While this worked well for the PWA datasets, Volz required 80 samples per class to produce quality land cover classifications. This was likely due to the similar spectral profiles of the mature canopy and understory vegetation, combined with less-than-optimal segmentation parameters for the given land cover. Furthermore, distinguishing between understory vegetation and mature canopy proved difficult from visual interpretation alone. As a result, each Volz classification displayed instances of a ring around some tree crowns that was misclassified as understory vegetation. Layering a 3D digital surface model (DSM), which was a generated output from photogrammetric processing, could help differentiate the understory from the canopy through height filtering [27,56,71].
Despite the number of samples collected, the machine-learning algorithm that performed the best and, therefore, was selected for the final land cover classifications in this study, was the support vector machine (SVM).

4.4. Objective 3: Significance Testing

The results of the two-tailed t-test assuming equal variances revealed no significant difference in the resulting classification accuracies between the PWA burn and the Volz harvest sites, and thus, the data procurement and processing parameters applied were found to be effective for both types of NDBM treatment. Nevertheless, a larger sample of land cover classifications would provide a stronger test of this finding. Additionally, the variance within the Volz datasets was nearly double that of the PWA datasets, albeit by only tenths of a percent. This suggests that the timber harvest sites were slightly less reliable than the prairie burn sites, which, again, was likely due to the visual complexity of the Volz imagery.

5. Conclusions

Quantifying planned disturbance is an important component of any ecosystem management plan that allows land managers to objectively determine the efficacy of their treatment and adjust the frequency and severity of NDBM treatments accordingly. To do this, inventories of the land cover must be collected both before and repeatedly after the disturbance event. This way, vegetation recovery can be determined through the proper inventory of land cover present before the disturbance event, what was removed as a result of the disturbance treatment, and what grew back after the disturbance.
Traditionally, obtaining a pre-disturbance inventory has been challenging because disturbances are often unplanned events and thus do not enable temporally precise image datasets. Traditional remote sensing methods, such as satellite and manned aircraft, are not practical due to the resolution and expense. Planned disturbances present an opportunity to utilize the high temporal and spatial resolution afforded by UASs to obtain a pre-disturbance inventory, which, combined with regular and repeated post-disturbance inventories, support robust land management practices.
This study documented the application of land cover surveys for prescribed prairie burns and selective timber harvests, which are common planned disturbances in the CHF region. The prairie burn sites were less sensitive to atmospheric, sensor, and mission planning parameters than the forested sites during the data collection. The forested sites were successfully surveyed with the UAS by increasing the flight altitude, shutter speed, and the number of images collected outside the study area. The prairie burn site was successfully processed with default photogrammetric parameters, while the forest site required reducing the scale for keypoint identification and an alternative calibration method to produce quality georeferenced orthomosaics. Both land covers achieved overall accuracies between 80.8% and 97.1% and proportional efficacies between 70.2% and 93.6%. While the prairie site achieved higher overall accuracies than the forested site, the difference between the land cover types was not significant. Overall, the methods of data collection, processing, and classification produced high-quality land cover assessments for two common types of planned ecological disturbances within the CHF region.
In future studies, researchers can build on these findings by examining optimal temporal resolutions for post-disturbance inventories, verifying land covers through field sampling, using field samples to train classifier algorithms, and experimenting with different segmentation parameters. Determining the optimal frequency and duration of surveys following a planned disturbance treatment would enhance the usability of this method and deserves future research attention. Ground truthing through field sample collection is another recommendation to further validate these methods and determine the accuracy of the land cover classifications. Ground truthing would also enable field samples to be projected onto the imagery, which could support the collection of classifier training samples. Another area deserving further research is experimentation with minimum segment sizes to identify optimal segmentation parameters for ultra-high-resolution UAS imagery in GEOBIA, especially in forested sites, where fully distinguishing tree crowns would likely enhance classification accuracy. Furthermore, filtering vegetation height with a digital surface model would be an important elaboration on the procedures demonstrated by this study.
Myriad applications could benefit from the workflow developed in this study. For example, land cover remediation following construction, mining, or landscaping projects could employ a similar workflow to objectively determine vegetation recovery. Similarly, this methodology could be used for invasive species identification, preventative wildfire management, documentation of the impacts of pesticide and herbicide use in agriculture, water level monitoring (e.g., lakes, coastal areas, and dams), excavation or landslide analysis, wildlife habitat development, and documentation of the impacts of energy sector projects (e.g., solar, wind, and conventional projects). All of these examples present opportunities to examine the before and after effects of planned disturbances. The solutions provided in this study illustrate the benefits of UASs and geographic information science technology in environmental applications, and the results demonstrate how UASs can serve as valuable geospatial data collection tools for land managers, helping to ensure the sustainable management of Earth's precious and finite natural resources.

Author Contributions

Validation, Z.M., J.H. and G.S.; formal analysis, Z.M. and G.S.; investigation, Z.M., J.H. and G.S.; resources, J.H., G.S. and S.H.; data curation, Z.M., J.H., G.S. and S.H.; writing—original draft preparation, Z.M.; writing—review and editing, J.H., G.S. and S.H.; visualization, Z.M.; supervision, J.H., G.S. and S.H.; project administration, J.H., G.S. and S.H.; funding acquisition, J.H. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Hardwood Tree Improvement and Regeneration Center (funding number F.00162256.06.001).

Acknowledgments

We would like to thank Jarred Brooke from the Purdue FNR Wildlife Extension for conducting the prescribed burns and coordinating them with us, and Oliver Grimm from Pike Lumber Company for serving as our main contact and allowing us to fly over Pike's lumber production sites.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kuppinger, D.M.; Jenkins, M.A.; White, P.S. Predicting the post-fire establishment and persistence of an invasive tree species across a complex landscape. Biol. Invasions 2010, 12, 3473–3484. [Google Scholar] [CrossRef]
  2. O’Hara, K.L. What is close-to-nature silviculture in a changing world? Forestry 2016, 89, 1–6. [Google Scholar] [CrossRef]
  3. Fernandes, P.M.; Botelho, H.S. A review of prescribed burning effectiveness in fire hazard reduction. Int. J. Wildland Fire 2003, 12, 117–128. [Google Scholar] [CrossRef] [Green Version]
  4. Rousselet, G. Classification of Post-Wildfire Aerial Imagery Using Convolutional Neural Networks: A Study of Machine Learning and Resampling Techniques to Assist Post-Wildfire Efforts. Available online: https://www.diva-portal.org/smash/get/diva2:1353041/FULLTEXT01.pdf (accessed on 6 January 2021).
  5. Anderson, J.R.; Hardy, E.E.; Roach, J.T.; Witmer, R.E. A Land Use and Land Cover Classification System for Use with Remote Sensor Data; U.S. Government Printing Office: Washington, DC, USA, 1976.
  6. Lister, A.J.; Andersen, H.; Frescino, T.; Gatziolis, D.; Healey, S.; Heath, L.S.; Liknes, G.C.; McRoberts, R.; Moisen, G.G.; Nelson, M.; et al. Use of remote sensing data to improve the efficiency of national forest inventories: A case study from the united states national forest inventory. Forests 2020, 11, 1364. [Google Scholar] [CrossRef]
  7. Frayer, W.E.; Furnival, G.M. Forest survey sampling designs: A history. J. For. 1999, 97, 4–10. [Google Scholar]
  8. King, D.J. Airborne remote sensing in forestry: Sensors, analysis and applications. For. Chron. 2000, 76, 859–876. [Google Scholar] [CrossRef] [Green Version]
  9. Anderson, K.; Gaston, K.J. Lightweight Unmanned Aerial Vehicles Will Revolutionize Spatial Ecology. Front. Ecol. Environ. 2013, 11, 138–146. [Google Scholar] [CrossRef] [Green Version]
  10. Ruwaimana, M.; Satyanarayana, B.; Otero, V.; Muslim, A.M.; Muhammad Syafiq, A.; Ibrahim, S.; Raymaekers, D.; Koedam, N.; Dahdouh-Guehbas, F. The advantages of using drones over space-borne imagery in the mapping of mangrove forests. PLoS ONE 2018, 13, e0200288. [Google Scholar] [CrossRef] [Green Version]
  11. Sakumara, C. Seeing a Better World from Space. In Proceedings of the 11th Annual Purdue GIS Day Conference, West Lafayette, IN, USA, 7 November 2019. [Google Scholar]
  12. Asenova, M. GIS-Based Analysis of the Tree Health Problems Using UAV Images and Satellite Data. Surv. Geol. Min. Ecol. Manag. 2018, 18, 813–820. [Google Scholar] [CrossRef]
  13. Berie, H.T.; Burud, I. Application of Unmanned Aerial Vehicles in Earth Resources Monitoring: Focus on Evaluating Potentials for Forest Monitoring in Ethiopia. Eur. J. Remote Sens. 2018, 51, 326–335. [Google Scholar] [CrossRef] [Green Version]
  14. Merkert, R.; Bushell, J. Managing the drone revolution: A systematic literature review into the current use of airborne drones and future strategic directions for their effective control. J. Air Transp. Manag. 2020, 89, 101929. [Google Scholar] [CrossRef] [PubMed]
  15. Buters, T.M.; Bateman, P.W.; Robinson, T.; Belton, D.; Dixon, K.W.; Cross, A.T. Methodological Ambiguity and Inconsistency Constrain Unmanned Aerial Vehicles as A Silver Bullet for Monitoring Ecological Restoration. Remote Sens. 2019, 11, 1180. [Google Scholar] [CrossRef] [Green Version]
  16. Getzin, S.; Nuske, R.S.; Wiegand, K. Using Unmanned Aerial Vehicles (UAV) to Quantify Spatial Gap Patterns in Forests. Remote Sens. 2014, 6, 6988–7004. [Google Scholar] [CrossRef] [Green Version]
  17. Guerra-Hernández, J.; González-Ferreiro, E.; Monleón, V.; Faias, S.; Tomé, M.; Díaz-Varela, R. Use of Multi-Temporal UAV-Derived Imagery for Estimating Individual Tree Growth in Pinus Pinea Stands. Forests 2017, 8, 300. [Google Scholar] [CrossRef]
  18. Pádua, L.; Guimarães, N.; Adão, T.; Sousa, A.; Peres, E.; Sousa, J.J. Effectiveness of Sentinel-2 in Multi-Temporal Post-Fire Monitoring When Compared with UAV Imagery. ISPRS Int. J. Geo-Inf. 2020, 9, 225. [Google Scholar] [CrossRef] [Green Version]
  19. White, R.A.; Bomber, M.; Hupy, J.P.; Shortridge, A. UAS-GEOBIA Approach to Sapling Identification in Jack Pine Barrens after Fire. Drones 2018, 2, 40. [Google Scholar] [CrossRef] [Green Version]
  20. Hassler, S.C.; Baysal-Gurel, F. Unmanned Aircraft System (UAS) Technology and Applications in Agriculture. Agronomy 2019, 9, 618. [Google Scholar] [CrossRef] [Green Version]
  21. Mayes, M.T.; Estes, L.D.; Gago, X.; Débats, S.R.; Caylor, K.K.; Manfreds, S.; Oudemans, P.; Ciraolo, G.; Maltese, A.; Nadal, M.; et al. Using Small Drone (UAS) Imagery to Bridge the Gap Between Field-and Satellite-Based Measurements of Vegetation Structure and Change. Am. Geophys. Union Fall Meet. 2016, 2016, B53J-05. [Google Scholar]
  22. Tomaštík, J.; Mokroš, M.; Surový, P.; Grznárová, A.; Merganič, J. UAV RTK/PPK Method—An Optimal Solution for Mapping Inaccessible Forested Areas? Remote Sens. 2019, 11, 721. [Google Scholar] [CrossRef] [Green Version]
  23. Miller, Z.M.; Hupy, J.; Chandrasekaran, A.; Shao, G.; Fei, S. Application of Postprocessing Kinematic Methods with UAS Remote Sensing in Forest Ecosystems. J. For. 2021, 119, 454–466. [Google Scholar] [CrossRef]
  24. Kucharczyk, M.; Hay, G.J.; Ghaffarian, S.; Hugenholtz, C.H. Geographic Object-Based Image Analysis: A Primer and Future Directions. Remote Sens. 2020, 12, 2012. [Google Scholar] [CrossRef]
  25. Knoth, C.; Klein, B.; Prinz, T.; Kleinebecker, T. Unmanned aerial vehicles as innovative remote sensing platforms for high-resolution infrared imagery to support restoration monitoring in cut-over bogs. Appl. Veg. Sci. 2013, 16, 509–517. [Google Scholar] [CrossRef]
  26. Goodbody, T.R.H.; Coops, N.C.; Hermosilla, T.; Tompalski, P.; Crawford, P. Assessing the status of forest regeneration using digital aerial photogrammetry and unmanned aerial systems. Int. J. Remote Sens. 2018, 39, 5246–5264. [Google Scholar] [CrossRef]
  27. Larrinaga, A.R.; Brotons, L. Greenness indices from a low-cost uav imagery as tools for monitoring post-fire forest recovery. Drones 2019, 3, 6. [Google Scholar] [CrossRef] [Green Version]
  28. Pádua, L.; Adão, T.; Guimarães, N.; Sousa, A.; Peres, E.; Sousa, J.J. Post-fire forestry recovery monitoring using high-resolution multispectral imagery from unmanned aerial vehicles. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2019, 42, 301–305. [Google Scholar] [CrossRef] [Green Version]
  29. Anderson, C.L. Examining Aspen Expansion from before and after Prescribed Burning in a Native Fescue Grassland through Geospatial Techniques. Master’s Thesis, Michigan Technological University, Houghton, MI, USA, 2019. [Google Scholar]
  30. Pérez-Rodríguez, L.A.; Quintano, C.; Marcos, E.; Suarez-Seoane, S.; Calvo, L.; Fernández-Manso, A. Evaluation of prescribed fires from unmanned aerial vehicles (UAVs) imagery and machine learning algorithms. Remote Sens. 2020, 12, 1295. [Google Scholar] [CrossRef] [Green Version]
  31. Samiappan, S.; Hathcock, L.; Turnage, G.; McCraine, C.; Pitchford, J.; Moorhead, R. Remote sensing of wildfire using a small unmanned aerial system: Post-fire mapping, vegetation recovery and damage analysis in Grand Bay, Mississippi/Alabama, USA. Drones 2019, 3, 43. [Google Scholar] [CrossRef] [Green Version]
  32. Simpson, J.E.; Wooster, M.J.; Smith, T.E.L.; Trivedi, M.; Vernimmen, R.R.E.; Dedi, R.; Shakti, M.; Dinata, Y. Tropical peatland burn depth and combustion heterogeneity assessed using UAV photogrammetry and airborne LiDAR. Remote Sens. 2016, 8, 1000. [Google Scholar] [CrossRef] [Green Version]
  33. Sill, K. Use of GIS Spatial Analysis, Remote Sensing, and Unmanned Aerial Systems in Determining the Susceptibility to Wildfires in Barber County, Kansas. Master’s Thesis, Fort Hays State University, Hays, KS, USA, 2020. [Google Scholar]
  34. USDA Forest Service. FS-1079 Forest Inventory and Analysis Strategic Plan. Available online: https://www.fia.fs.fed.us/library/bus-org-documents/docs/strategic-plan-docs/FIA%20Strategic%20Plan%20FS-1079.pdf (accessed on 8 January 2020).
  35. Hardwood Tree Improvement and Regeneration Center (HTIRC). Strategic Plan: 2017–2021; HTIRC: West Lafayette, IN, USA, 2017; Available online: https://htirc.org//wp-content/uploads/2018/08/HTIRC-Strategic-Plan-2017-2021.pdf (accessed on 8 January 2020).
  36. Cromwell, C.; Giampaolo, J.; Hupy, J.P.; Miller, Z.M.; Chandrasekaran, A. A systematic review of best practices for UAS data collection in forestry-related applications. Forests 2021, 12, 957. [Google Scholar] [CrossRef]
  37. Husson, E.; Ecke, F.; Reese, H. Comparison of manual mapping and automated object-based image analysis of non-submerged aquatic vegetation from very-high-resolution UAS images. Remote Sens. 2016, 8, 724. [Google Scholar] [CrossRef] [Green Version]
  38. Evans, T.P.; Green, G.M.; Carlson, L.A. Multi-scale analysis of landcover composition and landscape management of public and private lands in Indiana. In GIS and Remote Sensing Applications in Biogeography and Ecology; Millington, A.C., Walsh, S.J., Osborne, P.E., Eds.; Springer: Boston, MA, USA, 2001; pp. 271–287. [Google Scholar] [CrossRef]
  39. Hesseln, H. The economics of prescribed burning: A research review. For. Sci. 2000, 46, 322–334. [Google Scholar] [CrossRef]
  40. Brooke, J. Personal Communication; Purdue FNR: West Lafayette, IN, USA, 2021. [Google Scholar]
  41. Miller, Z.M.; Brooke, J. Monitoring efficacy and impacts of differing prescribed fire seasonality with UAS. In Proceedings of the Wildlife Society Annual Conference, Morgan Monroe State Forest, Martinsville, IN, USA, 23–24 September 2019. [Google Scholar]
  42. Fralish, J.S. The Central Hardwood Forest: Its Boundaries and Physiographic Provinces; Technical Report; U.S. Department of Agriculture, Forest Service: Saint Paul, MN, USA, 2003.
  43. Pike Lumber, LLC. Volz Tract #125–Forest Stewardship Management Plan. 2017. Available online: https://www.pikelumber.com/forestry/ (accessed on 6 January 2020).
  44. Halpern, C.B.; Halaj, J.; Evans, S.A.; Dovčiak, M. Level and pattern of overstory retention interact to shape long-term responses of understories to timber harvest. Ecol. Appl. 2012, 22, 2049–2064. [Google Scholar] [CrossRef]
  45. DJI. Matrice 600 Pro: Product Information–Specifications. Available online: https://www.dji.com/matrice600-pro/info#specs (accessed on 8 January 2021).
  46. Field of View, LLC. GeoSnap PPK–Product Sheet. Available online: https://static1.squarespace.com/static/5d0d06f497c0e80001c6ca18/t/5ebdab9213269e6b5b3a3fd9/1589488532053/GeoSnap_PPK_Product_Sheet_rev12_web.pdf (accessed on 8 January 2021).
  47. Sony. Sony A6000 Specifications. Available online: https://www.bhphotovideo.com/c/product/1029860-REG/sony_ilce6000l_b_alpha_a6000_mirrorless_digital.html (accessed on 8 January 2021).
  48. Boon, M.A.; Drijfhout, A.P.; Tesfamichael, S. Comparison of a fixed-wing and multi-rotor UAV for environmental mapping applications: A case study. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2017, 42, 47–54. [Google Scholar] [CrossRef] [Green Version]
  49. DJI. Zenmuse XT2 User Manual. Available online: https://dl.djicdn.com/downloads/Zenmuse%20XT%202/Zenmuse%20XT%202%20User%20Manual%20v1.0.pdf (accessed on 8 January 2021).
  50. Pix4D. Support–Initial Processing–Calibration. Available online: https://support.pix4d.com/hc/en-us/articles/205327965-Menu-Process-Processing-Options-1-Initial-Processing-Calibration (accessed on 8 January 2021).
  51. Seifert, E.; Seifert, S.; Vogt, H.; Drew, D.; van Aardt, J.; Kunneke, A.; Seifert, T. Influence of drone altitude, image overlap, and optical sensor resolution on multi-view reconstruction of forest images. Remote Sens. 2019, 11, 1252. [Google Scholar] [CrossRef] [Green Version]
  52. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal altitude, overlap, and weather conditions for computer vision uav estimates of forest structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  53. Bagaram, M.B.; Giuliarelli, D.; Chirici, G.; Giannetti, F.; Barbati, A. UAV remote sensing for biodiversity monitoring: Are forest canopy gaps good covariates? Remote Sens. 2018, 10, 1397. [Google Scholar] [CrossRef]
  54. Díaz-Varela, R.A.; De la Rosa, R.; León, L.; Zarco-Tejada, P.J. High-resolution airborne UAV imagery to assess olive tree crown parameters using 3D photo reconstruction: Application in breeding trials. Remote Sens. 2015, 7, 4213–4232. [Google Scholar] [CrossRef] [Green Version]
  55. Frey, J.; Kovach, K.; Stemmler, S.; Koch, B. UAV photogrammetry of forests as a vulnerable process: A sensitivity analysis for a structure from motion RGB-image pipeline. Remote Sens. 2018, 10, 912. [Google Scholar] [CrossRef] [Green Version]
  56. Lehmann, J.R.K.; Prinz, T.; Ziller, S.R.; Thiele, J.; Heringer, G.; Meira-Neto, J.A.A.; Buttschardt, T.K. Open-source processing and analysis of aerial imagery acquired with a low-cost unmanned aerial system to support invasive plant management. Front. Environ. Sci. 2017, 5, 44. [Google Scholar] [CrossRef] [Green Version]
  57. Fraser, B.T.; Congalton, R.G. Issues in unmanned aerial systems (UAS) data collection of complex forest environments. Remote Sens. 2018, 10, 908. [Google Scholar] [CrossRef] [Green Version]
  58. Shahbazi, M.; Théau, J.; Ménard, P. Recent applications of unmanned aerial imagery in natural resource management. GIScience Remote Sens. 2014, 51, 339–365. [Google Scholar] [CrossRef]
  59. Hubbard, S.; Pak, A.; Gu, Y.; Jin, Y. UAS to support airport safety and operations: Opportunities and challenges. J. Unmanned Veh. Syst. 2017, 6, 1–17. [Google Scholar] [CrossRef] [Green Version]
  60. FAA. 14 CFR–Part 107: Small Unmanned Aerial Systems. Available online: https://www.ecfr.gov/current/title-14/chapter-I/subchapter-F/part-107 (accessed on 1 January 2019).
  61. Jurjević, L.; Gašparović, M.; Milas, A.S.; Balenović, I. Impact of UAS image orientation on accuracy of forest inventory attributes. Remote Sens. 2020, 12, 404. [Google Scholar] [CrossRef] [Green Version]
  62. Zhang, H.; Aldana-Jague, E.; Clapuyt, F.; Wilken, F.; Vanacker, V.; Van Oost, K. Evaluating the potential of post-processing kinematic (PPK) georeferencing for uav-based structure-from-motion (SfM) photogrammetry and surface change detection. Earth Surf. Dyn. 2019, 7, 807–827. [Google Scholar] [CrossRef] [Green Version]
  63. Zhang, H.; Aldana Jague, E.; Clapuyt, F.; Wilken, F.; Vanacker, V.; Oost, K. Evaluating the potential of PPK direct georeferencing for UAV-SfM photogrammetry and precise topographic mapping. Earth Surf. Dyn. Discuss. 2019, 1–34. [Google Scholar] [CrossRef]
  64. GISGeography, World Geodetic System (WGS84). Available online: https://gisgeography.com/wgs84-world-geodetic-system/ (accessed on 5 January 2022).
  65. Manfreda, S.; McCabe, M.F.; Miller, P.E.; Lucas, R.; Madrigal, V.P.; Mallinis, G.; Ben Dor, E.; Helman, D.; Estes, L.; Ciraolo, G.; et al. On the use of unmanned aerial systems for environmental monitoring. Remote Sens. 2018, 10, 641. [Google Scholar] [CrossRef] [Green Version]
  66. Pix4D, Support–Initial Processing–Calibrate (Image Scale and Keypoints). Available online: https://support.pix4d.com/hc/en-us/articles/202557759-Menu-Process-Processing-Options-1-Initial-Processing-General#label3 (accessed on 6 January 2021).
  67. Li, C.; Shao, G. Object-oriented classification of land use/cover using digital aerial orthophotography. Int. J. Remote Sens. 2012, 33, 922–938. [Google Scholar] [CrossRef]
  68. Al-Ali, Z.M.; Abdullah, M.M.; Asadalla, N.B.; Gholoum, M. A comparative study of remote sensing classification methods for monitoring and assessing desert vegetation using a UAV-based multispectral sensor. Environ. Monit. Assess. 2020, 192, 389. [Google Scholar] [CrossRef]
  69. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  70. Boardman, A.L. Next Generation of Land System Science: Integrating Meso-Scale Analysis and UAS Remote Sensing in Changing Plant Communities of the United States’ Southern Great Plains. Master’s Thesis, Oklahoma State University, Stillwater, OK, USA, 2020. [Google Scholar]
  71. Gibril, M.B.A.; Kalantar, B.; Al-Ruzouq, R.; Ueda, N.; Saeidi, V.; Shanableh, A.; Mansor, S.; Shafri, H.Z.M. Mapping heterogeneous urban landscapes from the fusion of digital surface model and unmanned aerial vehicle-based images using adaptive multiscale image segmentation and classification. Remote Sens. 2020, 12, 1081. [Google Scholar] [CrossRef] [Green Version]
  72. Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. GIScience Remote Sens. 2018, 55, 243–264. [Google Scholar] [CrossRef]
  73. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  74. Pande-Chhetri, R.; Abd-Elrahman, A.; Liu, T.; Morton, J.; Wilhelm, V.L. Object-based classification of wetland vegetation using very high-resolution unmanned air system imagery. Eur. J. Remote Sens. 2017, 50, 564–576. [Google Scholar] [CrossRef] [Green Version]
  75. Fraser, B.T.; Congalton, R.G. Evaluating the effectiveness of unmanned aerial systems (UAS) for collecting thematic map accuracy assessment reference data in New England forests. Forests 2019, 10, 24. [Google Scholar] [CrossRef] [Green Version]
  76. Martin, F.M.; Müllerová, J.; Borgniet, L.; Dommanget, F.; Breton, V.; Evette, A. Using single- and multi-date UAV and satellite imagery to accurately monitor invasive knotweed species. Remote Sens. 2018, 10, 1662. [Google Scholar] [CrossRef] [Green Version]
  77. Shao, G.F.; Tang, L.N.; Liao, J.F. Overselling overall map accuracy misinforms about research reliability. Landsc. Ecol. 2019, 34, 2487–2492. [Google Scholar] [CrossRef] [Green Version]
  78. Pontius, R.G., Jr.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
  79. Shao, G.F.; Tang, L.N.; Zhang, H. Introducing Image Classification Efficacies. IEEE Access 2021, 9, 134809–134816. [Google Scholar] [CrossRef]
  80. Nordén, B.; Rørstad, P.K.; Magnér, J.; Götmark, F.; Löf, M. The economy of selective cutting in recent mixed stands during restoration of temperate deciduous forest. Scand. J. For. Res. 2019, 34, 709–717. [Google Scholar] [CrossRef]
Figure 1. Study area and site map for prescribed burn and selective timber harvest plots throughout the state of Indiana. Sites selected for final analysis are outlined in red. The “CHF Boundary” locator map was adapted from The Central Hardwood Forest: Its Boundaries and Physiographic Provinces [42].
Figure 2. Purdue FNR prairie plots for prescribed fire seasonality research (outlined in yellow): (A) Doak, (B) Hermann, (C) PWA. PWA was selected for the final land cover analysis of prescribed burns.
Figure 3. Pike Lumber Company forest plots for timber production (outlined in yellow): (A) Rough, (B) Deardorff, (C) Jackson, (D) Urton, (E) McAffee, (F) Whiteman, (G) Volz. Volz was selected for the final land cover analysis.
Figure 4. The UAS platform used in this study was a DJI M600 Pro [45] equipped with a Geosnap PPK [46], which triggered a Sony A6000 Mirrorless Interchangeable Lens Camera [47].
Figure 5. Workflow diagram of methods. Yellow squares = user actions/processes, and blue squares = outputs (and inputs).
Figure 6. Geographic object-based image analysis (GEOBIA) process. (a) PWA original image; (b) PWA segmented image; (c) PWA classified image; (d) Volz original image; (e) Volz segmented image; (f) Volz classified image.
Figure 7. Segmented image parameters tested. (a) Original image; (b) “default” segmentation parameters—15.5 spectral, 15 spatial, 20 minimum segment size (MSS); (c) 17 spectral, 10 spatial, 20 MSS; (d) 17 spectral, 10 spatial, 80 MSS.
Figure 8. (a–d) PWA prairie burn orthomosaics. UAS flight area shown in yellow; study area for land cover classification shown in green.
Figure 9. (a–d) Volz harvest orthomosaics. Study area for land cover classification shown in yellow.
Figure 10. PWA prairie burn land cover classifications: (A) RB, (B) PB, (C) PB 2, (D) PB 3, where RB = pre-burn and PB = post-burn. Burn plots shown in red outline.
Figure 11. Volz timber harvest land cover classifications: (A) pre-cut, (B) mid-cut, (C) post-cut, (D) post-cut 2. Harvest area shown in yellow outline.
Table 1. Prescribed burn information for each prairie site surveyed.

Property | Prairie Coverage | Number of Plots Burned | Total Burned Area | Burn Date | UAS Survey Dates
Doak (49.2 acres) | 15 acres | 5 plots | ~5 acres | 9/19/19 | Pre- and post-burn: 9/19/19
Hermann (38.4 acres) | 8.9 acres | 4 plots | ~4 acres | 10/8/19 | Pre- and post-burn: 10/8/19
PWA (290 acres) | 60 acres | 9 plots | ~14.7 acres | 4/2/20 | Pre-burn: 4/1/20; Post-burn: 4/3/20, 4/11/20, 4/22/20, 4/28/20, 5/1/20, 5/9/20, 5/13/20, 6/21/20, 6/26/20, 7/17/20
Table 2. Selective timber harvest information for each forest site surveyed.

Property | Number of Trees Harvested | Amount Harvested (Board Feet) | Harvest Duration | UAS Survey Dates
Deardorff (17.3 acres) | 127 | 35,824 | 5/8/20–5/13/20 | Pre-cut: 5/7/20; Mid-cut: 5/18/20; Post-cut: 7/2/20, 10/9/20
Jackson (23.6 acres) | 126 | 58,702 | 6/11/20–8/20/20 | Pre-cut: 5/16/20
McAffee (27 acres) | 217 selected | Not yet harvested | NA | Pre-cut: 5/16/20
Rough (20.7 acres) | 199 | 11,268 | 6/1/20–6/10/21 | Pre-cut: 5/18/20, 10/9/20
Urton (51.9 acres) | 311 | 109,209 | 5/30/20–6/20/20 | Pre-cut: 5/22/20
Whiteman (56.2 acres) | 166 | 74,956 | 5/13/20–6/25/20 | Mid-cut: 5/16/20; Post-cut: 7/9/20
Volz (10 acres) | 65 | 39,515 | 5/7/20–6/25/20 | Pre-cut: 5/7/20; Mid-cut: 5/21/20; Post-cut: 7/2/20, 8/26/20
Table 5. Data collection parameters affecting image quality that were tested throughout the data collection phase.

Category | Data Collection Parameters
Atmospheric Conditions | Sun angle, brightness (cloud cover), and wind
Sensor Settings | Shutter speed, aperture, ISO, and zoom
Mission Planning | Flight altitude, overlap/sidelap, and number of images collected outside the study area
Table 6. Preliminary data collection parameters tested for the first two flights at the Doak prairie burn site and the second round of flights at the Hermann prairie burn site (values marked * correspond to the Hermann flights).

Subject | Parameter Tested | Notes
Sun angle (takeoff time) | 48.6° (1130) & 43.2° (1315) * | Shadows present in both datasets; slightly longer shadows present in Hermann flights
Shutter speed | 1/1600 & 1/2500 * | Slightly overexposed, difficult to see shaded areas
Table 7. Flight information for each UAS survey surrounding the PWA prescribed burn. NOTE: * = pre-burn; 1 = unstandardized data collection parameters.

Date | Takeoff | Landing | Total Duration | Number of Images | Clouds/Brightness 1 | Wind 1
4/01/20 * | 1500 | 1520 | 20 min | 303 | Scattered, sunny | Variable ~8 MPH
4/03/20 | 1113 | 1130 | 17 min | 378 | Overcast, diffuse | Calm < 3 MPH
4/11/20 | 1409 | 1428 | 19 min | 389 | Clear, sunny | Low < 5 MPH
5/01/20 | 1332 | 1347 | 15 min | 310 | Clear, sunny | Variable ~5 MPH
Table 8. UAS data collection parameters tested for preliminary imaging of forest sites. NOTE: the number of asterisks (*) denotes the iteration of data collection parameter tests.

Site | Date | Sun Angle° (Takeoff Time) | Cloud Cover and Brightness | Wind (MPH) | Shutter Speed | Flight Altitude (AGL) | Notes
Volz * (Pre-cut) | 5/7/20 | (0900) | Clear, sunny | Med ~9 | 1/2500 | 400 ft | Slightly overexposed
Deardorff * | 5/7/20 | (1230) | Clear, sunny | High ~11.5 | 1/2500 | 400 ft | Motion blur, overexposed
McAffee * | 5/16/20 | (1115) | Broken, variable | Med ~7 | 1/2500 | 400 ft | Motion blur, inconsistent lighting
Whiteman * | 5/16/20 | (1230) | Scattered, sunny | Low ~4.5 | 1/2500 | 400 ft | Motion blur, inconsistent lighting
Jackson * | 5/16/20 | (1545) | Scattered, variable | High ~10 | 1/2500 | 400 ft | Motion blur, good exposure
Rough ** | 5/18/20 | (1330) | Overcast, diffuse | VRB ~3, gusting 10 | 1/2500 | 500 ft | Some blur, slightly overexposed
Deardorff ** | 5/18/20 | (1615) | Overcast, diffuse | High ~10 | 1/2500 | 500 ft | Some blur, slightly overexposed
Volz *** (Mid-cut) | 5/21/20 | (1130) | Overcast, diffuse | Med ~8 | 1/4000 | 500 ft | Good detail, slightly underexposed
Urton **** | 5/22/20 | (1445) | Scattered, sunny | Med ~9 | 1/4000 | 500 ft | Good detail, good exposure
Volz ***** (Post-cut) | 7/2/20 | (1145) | Clear, sunny | Med ~7 | 1/4000 | 500 ft | Good detail, good exposure
Volz ***** (Post-cut 2) | 8/26/20 | (1130) | Clear, sunny | VRB ~8, gusting 10 | 1/4000 | 500 ft | Some motion blur, good exposure
Table 9. Datum and projected coordinate system used for georeferencing. NOTE: estimated error obtained from the World Geodetic System, "WGS 84" [64].

Datum | Projected Coordinate System | Zone | Orthometric Surface | Estimated Error
World Geodetic System 1984 (WGS 84) | Universal Transverse Mercator (UTM) | 16 North | Ellipsoidal | 2 cm
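For readers reproducing the georeferencing, a minimal sketch of projecting WGS 84 geographic coordinates into UTM Zone 16N with pyproj follows. The EPSG codes are the standard identifiers for these systems; the sample coordinate is illustrative rather than a surveyed point from the study.

```python
from pyproj import Transformer

# WGS 84 geographic (EPSG:4326) -> WGS 84 / UTM Zone 16N (EPSG:32616), per Table 9.
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32616", always_xy=True)

lon, lat = -86.91, 40.43          # illustrative point near West Lafayette, IN
easting, northing = transformer.transform(lon, lat)
print(f"Easting: {easting:.2f} m, Northing: {northing:.2f} m")
```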
Table 10. NOAA CORS stations used for correcting the PWA prairie burn (P775) and Volz timber harvest (KYBO) sites.

CORS Site | Distance from Site | Mean Fixed Epochs | Mean Float Epochs | Constellations Used | Mean Position Uncertainty (X) | Mean Position Uncertainty (Y) | Mean Position Uncertainty (Z)
P775 | 6 km (3.7 mi) | 30,086 (99.9%) | 3.2 (0.012%) | GPS | 2.15 cm | 2.18 cm | 2.43 cm
KYBO | 35 km (21.4 mi) | 23,302 (89.9%) | 4869 (10.1%) | GPS + GLONASS | 2.7 cm | 2.65 cm | 2.73 cm
Table 11. Photogrammetric processing parameters tested for georeferenced orthomosaic generation at the PWA prescribed burn and Volz selective timber harvest sites.

Dataset | Keypoint Image Scale | Calibration Method | Number of Calibrated Images
PWA pre-burn (4/1/20) | 1 | Standard | 615/621 (99%)
PWA pre-burn (4/1/20) | 1/2 | Standard | 618/621 (99%)
PWA pre-burn (4/1/20) | 1/4 | Standard | 615/621 (99%)
PWA post-burn (4/3/20) | 1 | Standard | 377/378 (99%)
PWA post-burn 2 (4/11/20) | 1 | Standard | 388/389 (99%)
PWA post-burn 3 (5/1/20) | 1 | Standard | 309/310 (99%)
Volz pre-cut (5/7/20) | 1 | Standard | 241/256 (94%)
Volz pre-cut (5/7/20) | 1/2 | Standard | 241/256 (94%)
Volz pre-cut (5/7/20) | 1/4 | Standard | 239/256 (93%)
Volz pre-cut (5/7/20) | 1/2 | Alternative | 243/256 (94%)
Volz pre-cut (5/7/20) | 1/4 | Alternative | 239/256 (93%)
Volz mid-cut (5/21/20) | 1 | Standard | 149/159 (94%)
Volz mid-cut (5/21/20) | 1/2 | Standard | 151/159 (95%)
Volz mid-cut (5/21/20) | 1/4 | Standard | 146/159 (92%)
Volz mid-cut (5/21/20) | 1 | Alternative | 148/159 (93%)
Volz mid-cut (5/21/20) | 1/2 | Alternative | 154/159 (97%)
Volz mid-cut (5/21/20) | 1/4 | Alternative | 149/159 (94%)
Volz post-cut (7/2/20) | 1 | Standard | 155/157 (99%)
Volz post-cut (7/2/20) | 1/2 | Standard | 156/157 (99%)
Volz post-cut (7/2/20) | 1/4 | Standard | 154/157 (98%)
Volz post-cut (7/2/20) | 1/2 | Alternative | 157/157 (100%)
Volz post-cut (7/2/20) | 1/4 | Alternative | 156/157 (99%)
Volz post-cut 2 (8/26/20) | 1 | Standard | 166/174 (95%)
Volz post-cut 2 (8/26/20) | 1/2 | Standard | 168/174 (96%)
Volz post-cut 2 (8/26/20) | 1/4 | Standard | 163/174 (93%)
Volz post-cut 2 (8/26/20) | 1/2 | Alternative | 169/174 (97%)
Volz post-cut 2 (8/26/20) | 1/4 | Alternative | 163/174 (93%)
Table 12. Optimal data collection parameters for surveying prescribed prairie burn sites.

Consideration | Subject | Recommendation | Justification
Atmospheric Conditions | Sun angle/time of collection | Mid-morning to mid-evening | Little to no tall objects that would cause long shadows; wider range of collection times than forested sites
Atmospheric Conditions | Brightness/cloud cover | Bright to diffuse/clear to overcast | Low vegetation height was less sensitive to overexposure from bright light
Atmospheric Conditions | Wind | 0–10 mph (med) | Motion blur less common with low vegetation height
Sensor Parameters | Shutter speed | 1/2500–1/3200 | Properly exposed image with sharp detail for given lighting conditions
Sensor Parameters | Aperture and focal length | f/3.5 and 21 mm | Wide field of view for high image overlap
Sensor Parameters | ISO and zoom | Auto and ∞ | Auto-balanced gain and focused images
Mission Planning | Flight altitude | 121 m AGL (400 ft) | Sufficient depth of field for sharp detail
Mission Planning | Overlap/sidelap | 80% × 80% | Ensured thorough coverage of study area
Mission Planning | Number of boundary images | ≥1 image | Inherently more visible ground cover than forested sites allowed for a smaller outside image buffer
Table 13. Optimal data collection parameters for surveying the Volz selective timber harvest site.

Consideration | Subject | Recommendation | Justification
Atmospheric Conditions | Sun angle/time of collection | Mid-day | Tall objects and dense canopy restricted collection to the highest sun angle possible
Atmospheric Conditions | Brightness/cloud cover | Diffuse/overcast | Tall vegetation was more sensitive to overexposure if too bright, and the view into canopy gaps was better with diffuse overcast lighting
Atmospheric Conditions | Wind | ≤5 mph (low) | Motion blur more common with tall vegetation; wind needed to be minimal
Sensor Parameters | Shutter speed | 1/3200–1/4000 | Properly exposed image with sharp detail for given lighting conditions
Sensor Parameters | Aperture and focal length | f/3.5 and 21 mm | Wide field of view for high image overlap
Sensor Parameters | ISO and zoom | Auto and ∞ | Auto-balanced gain and focused images
Mission Planning | Flight altitude | 152 m AGL (500 ft) | Sufficient depth of field for sharp detail
Mission Planning | Overlap/sidelap | 80% × 80% | Ensured thorough coverage of study area
Mission Planning | Number of boundary images | ≥2 images | Commonly less visible ground cover than prairie sites necessitated a larger outside image buffer
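As a rough check on the altitude recommendations in Tables 12 and 13, the sketch below estimates the nominal ground sample distance for the 21 mm lens using the Sony A6000's approximate sensor geometry (23.5 mm sensor width, 6000 pixels across). These sensor figures are nominal assumptions, and the GSD actually achieved in the study may differ.

```python
def ground_sample_distance_cm(altitude_m, focal_mm=21.0,
                              sensor_width_mm=23.5, image_width_px=6000):
    """Approximate nadir ground sample distance in cm/pixel."""
    gsd_m = (sensor_width_mm / 1000.0) * altitude_m / ((focal_mm / 1000.0) * image_width_px)
    return gsd_m * 100.0

print(f"Prairie sites, 121 m AGL: ~{ground_sample_distance_cm(121):.1f} cm/px")
print(f"Forest sites, 152 m AGL:  ~{ground_sample_distance_cm(152):.1f} cm/px")
# roughly 2.3 cm/px and 2.8 cm/px with the assumed sensor geometry
```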
Table 14. PWA prairie burn population matrices. NOTE: the dataset is denoted in the top-left corner of each sub-table, where RB = pre-burn and PB = post-burn. Classes are labeled as follows: BG = bare ground, LT = litter, GV = green vegetation, BV = burned vegetation. P = proportion of total, UA = user's accuracy, UE = UA-based efficacy, PA = producer's accuracy, PE = PA-based efficacy, OA = overall accuracy, and MICE = map-level image classification efficacy.

RB | BG | LT | GV | BV | P | UA | UE
BG | 0.0092 | 0.0015 | 0 | 0 | 1.1% | 85.7% | 85.5%
LT | 0 | 0.6998 | 0 | 0 | 70.0% | 100% | 100%
GV | 0 | 0.0035 | 0.101 | 0.0071 | 11.2% | 90.5% | 89.3%
BV | 0.0028 | 0.0085 | 0.0056 | 0.1609 | 17.8% | 90.5% | 88.6%
P | 1.2% | 71.3% | 10.7% | 16.8% | 1
PA | 76.5% | 98.1% | 94.7% | 95.8% | OA = 97.1%
PE | 76.2% | 93.4% | 94.1% | 94.9% | MICE = 93.6%

PB2 | BG | LT | GV | BV | P | UA | UE
BG | 0.1295 | 0.0127 | 0.0076 | 0.0102 | 16% | 81% | 75.9%
LT | 0.0121 | 0.2429 | 0 | 0 | 25.5% | 95.2% | 93.1%
GV | 0.0128 | 0.0096 | 0.1762 | 0.0032 | 20.2% | 87.3% | 84.2%
BV | 0.0547 | 0.0426 | 0.0122 | 0.2737 | 38.3% | 71.4% | 59.9%
P | 20.9% | 30.8% | 19.6% | 28.7% | 1
PA | 61.9% | 78.9% | 89.9% | 95.3% | OA = 82.2%
PE | 51.8% | 69.5% | 87.4% | 93.5% | MICE = 76%

PB | BG | LT | GV | BV | P | UA | UE
BG | 0.0364 | 0.0106 | 0.0008 | 0 | 4.8% | 76.2% | 74.5%
LT | 0.0106 | 0.3131 | 0.0106 | 0 | 33.4% | 93.7% | 90.6%
GV | 0.0207 | 0.0026 | 0.1373 | 0.0026 | 16.3% | 84.1% | 81.4%
BV | 0 | 0 | 0 | 0.4547 | 45.5% | 100% | 100%
P | 6.8% | 32.6% | 14.9% | 45.7% | 1
PA | 53.7% | 96% | 92.4% | 99.4% | OA = 94.1%
PE | 50.4% | 94% | 91% | 99% | MICE = 91.1%

PB3 | BG | LT | GV | BV | P | UA | UE
BG | 0.0076 | 0.0024 | 0.0002 | 0 | 1% | 74.6% | 73.8%
LT | 0.0152 | 0.2206 | 0.0038 | 0 | 24% | 92.1% | 89.8%
GV | 0 | 0 | 0.6955 | 0.0112 | 70.7% | 98.4% | 94.7%
BV | 0.0069 | 0.0007 | 0.0007 | 0.0352 | 4.3% | 81% | 80%
P | 3% | 22.4% | 70% | 4.6% | 1
PA | 25.7% | 98.6% | 99.3% | 75.8% | OA = 95.9%
PE | 23.4% | 98.2% | 97.8% | 74.6% | MICE = 91%
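For clarity on how the columns in Tables 14 and 15 relate, the sketch below derives the accuracy and efficacy metrics from the PWA pre-burn (RB) population matrix. The efficacy formulas follow our reading of the image classification efficacy framework [79] and reproduce the tabulated values within rounding; they are not an official implementation.

```python
import numpy as np

# Population (area-proportion) error matrix for the RB dataset in Table 14.
# Rows = map classes, columns = reference classes, order: BG, LT, GV, BV.
m = np.array([
    [0.0092, 0.0015, 0.0,    0.0   ],
    [0.0,    0.6998, 0.0,    0.0   ],
    [0.0,    0.0035, 0.1010, 0.0071],
    [0.0028, 0.0085, 0.0056, 0.1609],
])

map_p = m.sum(axis=1)                            # map (row) proportions, column "P"
ref_p = m.sum(axis=0)                            # reference (column) proportions, row "P"
diag = np.diag(m)

oa = diag.sum()                                  # overall accuracy
ua = diag / map_p                                # user's accuracy per class
pa = diag / ref_p                                # producer's accuracy per class
ue = (ua - ref_p) / (1 - ref_p)                  # UA-based efficacy
pe = (pa - map_p) / (1 - map_p)                  # PA-based efficacy
mice = (oa - np.sum(ref_p ** 2)) / (1 - np.sum(ref_p ** 2))

print(f"OA = {oa:.1%}, MICE = {mice:.1%}")       # approx. OA = 97.1%, MICE = 93.6%
for name, u, e in zip(["BG", "LT", "GV", "BV"], ua, ue):
    print(f"{name}: UA = {u:.1%}, UA-based efficacy = {e:.1%}")
```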
Table 15. Volz timber harvest population matrices. NOTE: the dataset is denoted in the top-left corner of each sub-table, where RH = pre-harvest, MH = mid-harvest, and PH = post-harvest. Classes are labeled as follows: MC = mature canopy, UV = understory vegetation, BWD = bare ground/woody debris. P = proportion of total, UA = user's accuracy, UE = UA-based efficacy, PA = producer's accuracy, PE = PA-based efficacy, OA = overall accuracy, and MICE = map-level image classification efficacy.

RH | MC | UV | BWD | P | UA | UE
MC | 0.3198 | 0.1032 | 0.0052 | 42.8% | 74.7% | 58.7%
UV | 0.0567 | 0.3008 | 0.0044 | 36.2% | 83.1% | 71.2%
BWD | 0.0101 | 0.0101 | 0.1897 | 21% | 90.4% | 88%
P | 38.7% | 41.4% | 19.9% | 1
PA | 82.7% | 72.6% | 95.2% | OA = 81%
PE | 71.8% | 53.3% | 94% | MICE = 70.3%

MH | MC | UV | BWD | P | UA | UE
MC | 0.463 | 0.0361 | 0 | 49.9% | 92.8% | 85.4%
UV | 0.0428 | 0.3083 | 0.0043 | 35.5% | 86.7% | 79.6%
BWD | 0 | 0.0053 | 0.1402 | 14.5% | 96.4% | 95.8%
P | 50.6% | 35% | 14.5% | 1
PA | 91.5% | 88.2% | 97% | OA = 91.2%
PE | 82.9% | 81.8% | 96.5% | MICE = 85.3%

PH | MC | UV | BWD | P | UA | UE
MC | 0.4951 | 0.0462 | 0.0066 | 54.8% | 90.4% | 80.1%
UV | 0.0202 | 0.2496 | 0.0101 | 28.0% | 89.2% | 84.4%
BWD | 0 | 0.0083 | 0.1639 | 17.2% | 95.2% | 94.1%
P | 51.5% | 30.4% | 18.1% | 1
PA | 96.1% | 82.1% | 90.7% | OA = 90.9%
PE | 91.9% | 74.2% | 88.7% | MICE = 85%

PH2 | MC | UV | BWD | P | UA | UE
MC | 0.3865 | 0.1104 | 0.0123 | 50.9% | 75.9% | 57.3%
UV | 0.0438 | 0.2096 | 0.0063 | 26.0% | 80.7% | 71.1%
BWD | 0.0056 | 0.0139 | 0.2116 | 23.1% | 91.6% | 89%
P | 43.6% | 33.4% | 23% | 1
PA | 88.7% | 62.8% | 92% | OA = 80.8%
PE | 79.9% | 44.1% | 89.5% | MICE = 70.2%
Table 16. Two-tailed t-test results.

Statistic | PWA Burn | Volz Harvest
Mean | 0.878 | 0.777
Variance | 0.00647 | 0.00739
Observations | 4 | 4
Pooled variance | 0.00694
Hypothesized mean difference | 0
df | 6
t-stat | 1.737
P(T ≤ t) two-tail | 0.132
t-critical two-tail | 2.447
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
