Article

Post-Hurricane Damage Severity Classification at the Individual Tree Level Using Terrestrial Laser Scanning and Deep Learning

1 School of Forest, Fisheries, and Geomatics Sciences, University of Florida, Gainesville, FL 32611, USA
2 Center for Tropical Research, Institute of the Environment and Sustainability, University of California Los Angeles (UCLA), Los Angeles, CA 90095, USA
3 NASA-Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109, USA
4 Cartographic Engineering Department, Military Institute of Engineering (IME), Praça Gen. Tibúrcio 80, Rio de Janeiro 22290-270, RJ, Brazil
5 Federal Institute of Education, Science and Technology of São Paulo, Avenida Doutor Ênio Pires de Camargo, Capivari 13365-010, SP, Brazil
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(4), 1165; https://doi.org/10.3390/rs15041165
Submission received: 30 December 2022 / Revised: 31 January 2023 / Accepted: 12 February 2023 / Published: 20 February 2023

Abstract
Natural disturbances like hurricanes can cause extensive disorder in forest structure, composition, and succession. Consequently, ecological, social, and economic alterations may occur. Terrestrial laser scanning (TLS) and deep learning have been used for estimating forest attributes with high accuracy, but to date, no study has combined TLS and deep learning for assessing the impact of hurricane disturbance at the individual tree level. Here, we aim to assess the capability of TLS and convolutional neural networks (CNNs) combined for classifying post-Hurricane Michael damage severity at the individual tree level in a pine-dominated forest ecosystem in the Florida Panhandle, Southern U.S. We assessed the combined impact of using either binary-color or multicolored-by-height TLS-derived 2D images along with six CNN architectures (Densenet201, EfficientNet_b7, Inception_v3, Resnet152v2, VGG16, and a simple CNN). The confusion matrices used for assessing the overall accuracy were symmetric in all six CNNs and 2D image variants tested, with overall accuracy ranging from 73% to 92%. We found higher F1-scores when classifying trees with damage severity ranging from extremely leaning, trunk snapped, and stem breakage to uprooted, compared to trees that were undamaged or slightly leaning (<45°). Moreover, we found higher accuracies when using VGG16 combined with multicolored-by-height TLS-derived 2D images than with the other methods. Our findings demonstrate the high capability of combining TLS with CNNs for classifying post-hurricane damage severity at the individual tree level in pine forest ecosystems. As part of this work, we developed a new open-source R package (rTLsDeep) implementing all methods tested herein. We hope that the promising results and the rTLsDeep R package developed in this study for classifying post-hurricane damage severity at the individual tree level will stimulate further research and applications not just in pine forests but also in other forest types in hurricane-prone regions.

1. Introduction

Forests are dynamic ecological systems influenced not only by topography, geographic location, and anthropogenic disturbances, but also by natural disturbances, such as those caused by windstorms. Disturbance affects forest structure and species composition [1,2,3], and thereby affects biodiversity and plant regeneration by increasing light intensity in the understory [4], as well as altering water and carbon cycles [5]. These natural disturbances and their consequences are essential components of forest dynamics and are important for a forest's health and development. However, frequent disturbance episodes caused by tropical storm events may cause negative outcomes when the forest cannot recover from the resulting damage. Forest resilience may be reduced by extreme and repeated disturbance, subsequent insect outbreaks, invasive species, intense wildfire fueled by woody debris [6,7,8,9,10], alteration of forest demography, and tree injuries that reduce harvested timber value [11,12].
Hurricanes are one of the major natural disturbances to forest ecosystems in the Southeastern United States (US). A severe hurricane can extensively influence forest structure, composition, and succession [13], and can consequently cause related ecological, social, and economic damage [14]. Hurricanes with winds classified as Category 4 (209–251 km/h) or 5 (>252 km/h) generally have 'catastrophic' effects on forests [15]. On 10 October 2018, Hurricane Michael made landfall in the Florida Panhandle as a Category 5 hurricane [16] and caused an estimated $15 billion in damages, including more than $5.18 billion in losses to the agriculture and timber industries. In Florida alone, Hurricane Michael damaged more than 1.1 million ha of forest, with over 560,000 ha severely or catastrophically damaged (>75% of the forest downed). In the Florida Panhandle, extraordinary winds from Hurricane Michael led to recorded tree mortality rates as high as 80% in some areas, and this single event affected 28% of all extant longleaf pine forests [17]. The storm continued inland, causing tree mortality rates exceeding 20% as far as 150 km inland, where it was still classified as a Category 2 storm [16] and caused catastrophic effects in forests in Georgia and Alabama. Hurricane Michael changed the overall structure of the affected forests, modifying forest canopies by creating large gaps and downed trees along its path [18].
High-frequency and high-intensity hurricane events can create different types and amounts of tree damage, potentially affecting pathogen outbreaks, fire regimes, and the subsequent availability of seed sources for regeneration [12,19]. Trees that are twisted or bent by strong winds can lose vascular connection to their root systems, making them susceptible to insect or pathogen attacks. Fire regimes could be altered if downed trees are lying entirely on the ground, where higher moisture content would reduce fire intensity, or if they are 'hung up' and elevated aboveground in some manner, where tissues would more easily dry out. In the latter case, hanging tree crowns could facilitate ground fires climbing into the canopy, where the resulting fires are more difficult to control and more likely to kill the remaining mature live trees. Tree mortality from either wind or any resulting factors (insects/pathogens, fire) could affect tree regeneration, especially for species, like longleaf pine (Pinus palustris Mill.), that mast or infrequently produce seeds. The devastated area in the Florida Panhandle includes longleaf pine habitat that is classified as a global biodiversity hotspot and endangered [14,15]. Although longleaf pine forests have been exposed to frequent storm events and wildfires for millennia, making them resistant to these disturbances [20,21], maintaining longleaf pine ecosystems requires a significant amount of management.
Measuring forest damage and other factors, such as carbon storage, tree diseases, and fuel amount, provides a basic quantification of a forest's potential scenarios and the consequent forest management practices [22]. The most accurate way to evaluate forest damage is through in situ forest inventory, which provides a qualitative and quantitative diagnosis to guide forest management. Forest attribute data may be collected by a trained team using ground assessments. However, forest inventory is expensive and time-consuming [23], and it is not possible to assess every tree or access all landscapes to be evaluated, particularly post-hurricane. Remote sensing has proven to have a high potential and capacity to map and estimate forest attributes, e.g., species composition, canopy height, and carbon and fuel amounts [24].
Remote sensing tools and techniques offer the possibility of monitoring and assessing the impacts of forest disturbances like windstorms at the landscape level. One of the first and most notable studies using remote sensing to assess hurricane damage was that of [25], which used Landsat and MODIS optical data to assess the damage from Hurricane Katrina, which struck the US Gulf Coast in 2005. Besides using remote sensing data, the authors also inventoried the affected areas to train their models; vegetation structure parameters are arduous to measure in the field, and doing so is especially challenging after a hurricane. More recently, light detection and ranging (lidar) data from terrestrial, airborne, or spaceborne systems have proven useful for mapping forest structure and canopy cover [26], determining forest health [27], detecting stems [28], estimating tree diameter [29] and biomass [30], and extracting tree variables such as height [31] and crown diameter [32]. The importance of the terrestrial lidar system, known as terrestrial laser scanning (TLS), has grown in view of its ability to provide 3D point cloud data with high precision, the fact that it is relatively easy to interpret, and the fact that parameter extraction can be automated [33,34,35]. However, most previous research has been conducted on stands and single trees [36], and none has addressed damage to single trees after a natural disturbance such as a hurricane.
In recent years, deep learning methods have become important in remote sensing [37] and are gaining popularity in the forestry field [38,39,40]. For instance, Ferreira et al. [41] proposed the species classification of Amazonian palms using convolutional neural networks (CNNs) on UAV images. CNNs automatically extract features of target objects (e.g., trees, buildings) from an input image (e.g., acquired from a UAV) and, based on these features, classify objects within the image (e.g., by tree species) [37]. Nezami et al. [42] used hyperspectral and RGB imagery with CNNs to facilitate tree species classification. Applying deep learning to 3D lidar data brings unique challenges and could provide the highest-performing approach to classifying image components [43]. Directly using the lidar points often requires prohibitive effort, as shown by Xi et al. [44], who assessed the effectiveness of machine learning and deep learning for wood filtering and tree species classification from TLS. In the case of airborne lidar data, one possibility is simply to use the canopy height model (CHM), that is, an image showing the top of the crowns, as was done in a study segmenting palm trees across the Brazilian Amazon forests using a U-Net model [40]. Another way of working with the point data is through a methodology named SimpleView, which takes snapshots of the 3D trees from different viewpoints and uses those to train deep learning models [43]. This method has recently been used to identify tree species from TLS data with a ResNet model [45]. In general, classification from image or scan datasets remains a challenge due to the uncertainty over classifier selection.
TLS and deep learning combined could provide an efficient way of assessing post-hurricane damage severity at the individual tree level. For instance, deep learning algorithms can be used to analyze images derived from TLS data, while TLS data can provide information about forest structure, allowing the deep learning algorithm to accurately classify damage severity. Therefore, the aim of this study was to assess the capability of TLS and CNNs to classify post-Hurricane Michael damage severity at the individual tree level in a pine-dominated forest ecosystem in the Florida Panhandle, Southern U.S. More specifically, we assessed the combined impact of using either binary-color or multicolored-by-height TLS-derived 2D images along with six CNN architectures for post-hurricane damage severity classification at the individual tree level, and sought to support and enhance TLS-based forest inventory, monitoring, and conservation initiatives. As part of this study, we developed rTLsDeep, a new open-source R package for post-hurricane individual tree-level damage severity classification using TLS and CNN architectures [46].

2. Materials and Methods

2.1. Study Area

Forests of the Southeastern U.S. are highly biodiverse (e.g., the longleaf pine (Pinus palustris Mill.) ecosystem) and productive (e.g., loblolly pine (Pinus taeda L.) forest plantations), and natural disturbances by hurricanes are a well-known and important source of mortality. For this study, we focused on the northwestern Panhandle area in Florida, near Apalachicola National Forest—ANF (30°19′15.74″N, 84°52′0.02″W), where Hurricane Michael caused moderate to severe destruction in October 2018 (Figure 1). This region includes the national forest, the nearby state forest, and some private forests that are managed for timber, recreation, and/or wildlife purposes. The United States Forest Service (USFS) has a network of permanent fixed-area plots as part of a RESTORE Act-funded project with objectives that include restoring hydrologic function in the zone. Further, this region is classified as a biodiversity "hotspot" in North America, where threatened and endangered species have been observed and studied. The topography is relatively flat, and small streams drain large quantities of water into either the Gulf of Mexico or the Apalachicola River. The predominant tree species are pines (Pinus spp.), though hardwoods and bald cypress (Taxodium distichum (L.) Rich.) can be found. A highly diverse understory of grasses, forbs, and shrubs is found in pine forests, where low-intensity fires are common. The majority of fauna and flora species in this zone are adapted to fire, and prescribed fires are used for fuel management to reduce the negative impacts of wildfires and to promote the development of desired understory plants. The mean elevation is 5 m, the mean annual precipitation is 1395.8 mm, and the mean annual temperature ranges from 15 to 25 °C.

2.2. TLS Data Acquisition and Processing

TLS data were obtained in November 2021 across longleaf pine and sand pine (Pinus clausa (Chapm. ex Engelm.)) forest stands within public and private lands in the Panhandle area affected by Hurricane Michael. The sites were pre-selected after the disturbance event based on a visual assessment of pre- and post-disturbance Google Earth aerial images. In the field, we visited the pre-selected areas and selected the final sites, covering the entire range of damage severity (light, moderate, severe, and catastrophic; see Figure 1). Within twelve 16.92 m fixed-radius plots (900 m²) distributed across three sites, five TLS scans (four at the edges—north, south, east, and west—and one in the center) were obtained using a Riegl VZ-400i coupled with a Nikon D850 45.7-megapixel digital camera and a differential GNSS RTK receiver. The scanner was configured with the panorama 20 setting and a pulse repetition rate of 1200 kHz. Point cloud pre-processing, including registration, noise removal, and clipping, was carried out using RiSCAN Pro® [48] (Figure 2a). Using CloudCompare® [49], a total of 90 individual trees were randomly and manually extracted from the point cloud, with 15 trees per damage severity class (Figure 2b): C1—non-damaged tree (intact tree, no visual damage); C2—leaning tree; C3—bending tree with a bow-like trunk; C4—trunk snapped, with stem or crown broken; C5—stem breakage (no crown); C6—fallen or uprooted tree.
Using the a priori damage severity classification, we derived twelve 2D images (1500 × 1500 pixels) from the 3D point cloud of each extracted individual tree using the tlsrotate3d function from the rTLsDeep R package developed in this study [46] (see Supplementary Material, Figures S1–S6). Each of the 12 images corresponds to a different viewing angle obtained by rotating around the Z-axis in 30° increments (from 30° to 360°) (Figure 3). We created two sets of 2D images, one using binary colors (black and white) and the other multicolored by tree height (0–30 m) (Figure 3). In total, we created 1080 2D images per color class (binary color and multicolored by height), which were then used as inputs to six CNN architectures (Figure 3).
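The paper's image generation is implemented in R (the tlsrotate3d function of rTLsDeep); the sketch below is a hypothetical Python equivalent showing the core idea, assuming a tree point cloud stored as an (N, 3) array: rotate around the Z-axis in 30° steps and rasterize each view onto the X–Z plane as a binary image. The function names and the 128-pixel raster size are illustrative, not the package's API.

```python
import numpy as np

def rotate_z(points, angle_deg):
    """Rotate an (N, 3) point cloud around the Z axis by angle_deg degrees."""
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T

def project_to_binary_image(points, size=1500):
    """Project a point cloud onto the X-Z plane as a binary (black/white) raster."""
    x, z = points[:, 0], points[:, 2]
    # Normalize coordinates into pixel indices (row 0 is the top of the image)
    col = ((x - x.min()) / max(np.ptp(x), 1e-9) * (size - 1)).astype(int)
    row = ((z.max() - z) / max(np.ptp(z), 1e-9) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[row, col] = 1
    return img

# One snapshot every 30 degrees, giving 12 views per tree as in the paper
rng = np.random.default_rng(0)
tree = rng.normal(size=(1000, 3))  # stand-in for a TLS-derived tree point cloud
views = [project_to_binary_image(rotate_z(tree, a), size=128)
         for a in range(30, 361, 30)]
```

A multicolored-by-height variant would replace the binary raster with one whose pixel values encode the Z coordinate.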

2.3. CNN Models and Accuracy Assessment

In this study, we used six widely used CNN architectures for damage severity classification at the tree level, as follows:
(i) Densenet201, a densely connected convolutional network (DenseNet), was proposed by Huang et al. [50] and introduced a novel framework to connect the layers of a CNN. In a DenseNet, each layer takes all preceding feature maps as input and passes its own feature maps to all subsequent layers. Here, we used a DenseNet architecture that is 201 layers deep and has been widely used in vegetation remote sensing [51].
(ii) EfficientNet_b7 [52]—EfficientNets are a family of CNNs that have achieved outstanding performances with a reduced number of parameters. They were introduced by Tan et al. [52] and rely on the so-called compound coefficient that uniformly scales all dimensions (depth, width, and resolution) of the network. EfficientNet_b7, used in this work, has a compound coefficient that equals 7 and achieved the best results among all EfficientNet variants tested by [52].
(iii) Inception_v3 [53]—The Inception architecture was first proposed by Szegedy et al. [53,54] in a network called GoogLeNet. It allowed GoogLeNet to have a reduced set of parameters (e.g., 12 times fewer parameters than AlexNet [55]) and still provide outstanding classification results. However, its architecture and design are complex and changes to the network are prohibitive and often hamper computational gains. Szegedy et al. [54] proposed a set of modifications to the Inception architecture, such as factorizing convolutions with large filter sizes, factorization into smaller convolutions, and spatial factorization into asymmetric convolutions. These modifications allowed a significant improvement in classification accuracy while maintaining network complexity and keeping computational costs low [53,54].
(iv) Resnet152v2 [56]—Previous research on CNNs (e.g., [57,58]) showed that increasing the number of layers (the depth) of a CNN can improve feature extraction and, consequently, classification accuracy. However, experimental analysis showed that an increase in network depth leads to an increase in the training error [58]. He et al. [59] proposed a novel framework that allowed training models with many layers, thus improving feature extraction while maintaining the trade-off between classification accuracy and computational cost. This framework is based on residual blocks with skip connections that forward the feature maps of a given layer to a deeper layer in the network, giving rise to the residual network (ResNet) family. Many ResNet variants exist that differ from each other by the number of residual layers. Here, we used the variant Resnet152v2 [56], which is composed of 152 layers with identity mappings as skip connections. Identity mappings reduce the difficulty of network convergence by transferring feature maps from shallow layers to deep layers.
(v) VGG16 [58]—The VGG16 model, proposed by Simonyan et al. [58], was one of the first networks to outperform AlexNet [55] in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a renowned international competition that evaluates algorithms for object detection and image classification. VGG16 is a simple model composed of only 16 layers.
(vi) A simple CNN variant composed of two convolutional layers and one max pooling layer. We have laid out our approach in Figure 4. First, an input image was passed through a set of convolutional, pooling, and fully connected layers for feature extraction (Figure 4b). Then, the softmax classifier was applied to retrieve class membership probabilities, and the input image was classified according to the class that achieved the highest probability score (Figure 4c). The parameters of all networks were initialized with pre-trained values from the ImageNet database [60], except for the simple CNN variant, which was trained from scratch. During training, the network parameters (weights and biases) were updated using the adaptive moment estimation (ADAM) optimizer [61].
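The building blocks shared by all of the architectures above—convolution for feature extraction, max pooling for down-sampling, and softmax for class-membership probabilities—can be illustrated with a minimal NumPy sketch. This is a toy, from-scratch illustration of the operations, not the paper's trained implementation; the array sizes and averaging kernel are arbitrary.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling: down-sample by keeping the maximum
    value in each size x size window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    """Turn raw scores into class-membership probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

img = np.arange(36, dtype=float).reshape(6, 6)     # toy 6x6 input image
feat = max_pool(conv2d(img, np.ones((3, 3)) / 9.0))  # convolve, then pool
probs = softmax(feat.flatten())                    # probabilities sum to 1
```

The predicted class is then simply the index of the highest probability, as in Figure 4c.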
We implemented all CNN models in our rTLsDeep package [46]. From the total of 1080 TLS-derived 2D images per color class (binary color and multicolored by height), 80% (n = 864) were used for training, and the remaining 20% (n = 216) were used for testing. During the construction of the training and testing sets, tree identity was respected: if a given tree was selected for training, all 12 of its images (corresponding to different viewing angles) were used to train the model and none were used for testing. For assessing classification accuracy, we computed the overall accuracy (OA), F1-score, and Kappa index (Equations (1)–(3)). Moreover, we computed the confusion matrices for each model. The diagonal cells of a confusion matrix show the number of correctly classified samples, while the off-diagonal cells show the misclassifications.
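The identity-respecting split can be sketched as follows; this is a hypothetical Python illustration (the paper's implementation lives in the rTLsDeep R package) in which trees, rather than images, are shuffled, so all 12 views of a tree land on the same side of the split.

```python
import random

def split_by_tree(tree_ids, test_frac=0.2, seed=42):
    """Split image indices so that all views of a given tree fall entirely
    in either the training or the testing set (no tree appears in both)."""
    ids = sorted(set(tree_ids))
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_test = round(len(ids) * test_frac)
    test_trees = set(ids[:n_test])
    train = [i for i, t in enumerate(tree_ids) if t not in test_trees]
    test = [i for i, t in enumerate(tree_ids) if t in test_trees]
    return train, test

# 90 trees x 12 views = 1080 images, matching the paper's dataset
tree_ids = [t for t in range(90) for _ in range(12)]
train, test = split_by_tree(tree_ids)
```

With 90 trees and a 20% test fraction, this yields the paper's 864 training and 216 testing images.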
OA = (TP + TN)/(TP + TN + FP + FN)	(1)
F1-score = TP/(TP + ½ × (FP + FN))	(2)
Kappa = (po − pe)/(1 − pe)	(3)
where TP is the true positives, FP the false positives, FN the false negatives, and TN the true negatives; po is the observed proportion of trees correctly classified and pe is the expected proportion of trees correctly classified by chance [62].
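For illustration, the three metrics can be computed directly from a multi-class confusion matrix. The Python sketch below generalizes the binary OA formula above to trace/total, under the assumption (ours, for this example) that rows are reference labels and columns are predictions; the 2 × 2 matrix is made-up toy data, not the paper's results.

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, per-class F1-scores, and Cohen's Kappa from a
    confusion matrix (rows = reference, columns = prediction)."""
    n = cm.sum()
    tp = np.diag(cm).astype(float)          # correctly classified, per class
    fp = cm.sum(axis=0) - tp                # predicted as class but wrong
    fn = cm.sum(axis=1) - tp                # reference class but missed
    oa = tp.sum() / n
    f1 = tp / (tp + 0.5 * (fp + fn))        # Equation (2), per class
    po = oa                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)            # Equation (3)
    return oa, f1, kappa

cm = np.array([[10, 2],
               [1, 12]])                    # toy 2-class confusion matrix
oa, f1, kappa = accuracy_metrics(cm)
```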

3. Results

Across the six deep learning architectures, we obtained a mean overall accuracy of 87.34% for the binary-color images and 86.59% for the multicolored-by-height images when classifying damage severity at the tree level from TLS data (Table 1). In general, all CNN classifiers had similar Kappa statistics and mean F1-score values, except the simple CNN architecture with multicolored-by-height 2D images, whose values were roughly 20% lower than the rest (Table 1). Considering the overall accuracy for binary-color images, VGG16 attained slightly higher accuracy than the other CNN architectures. For the multicolored-by-height images, VGG16, Inception_v3, and Resnet152v2 showed higher overall accuracy, F1-scores, and Kappa values than the other CNN architectures.
A detailed look at the F1-scores by class reveals higher accuracy for predicting damage severity in classes C3 to C6 than in classes C1 and C2 (Figure 5). In general, the CNN architectures performed better, based on the F1-score, when using images multicolored by height. VGG16 and EfficientNet_b7 had the highest F1-score values among the six CNN architectures when using binary-color images. For the multicolored-by-height images, VGG16 and Inception_v3 had the highest F1-scores in all classes assessed (Figure 5).
Based on the confusion matrices (Figure 6), for both the binary-color and multicolored-by-height 2D images, we found many samples that were mistakenly classified in classes C1 and C2. Commonly, C1 was misclassified as C2 and C3, and trees in C2 were confused with trees in C1 and C3. The confusion matrices show that all models accurately classified damage severity levels in C3 (95.7% to 100%), C4 (92.3% to 100%), C5 (90% to 100%), and C6 (94.6% to 100%) when using binary-color images, and in C3 (100%), C4 (59.0% to 100%), C5 (90.9% to 100%), and C6 (100%) when using multicolored-by-height images. Overall, VGG16, Inception_v3, and DenseNet201 were the most frequently accurate CNNs. The simple CNN variant showed the lowest accuracies of all the CNN architectures on both sets of 2D images.

4. Discussion

We tested six CNN architectures for classifying post-hurricane damage severity at the individual tree level using 2D images derived from 3D point clouds acquired by TLS. Previous research has demonstrated the potential of lidar data and deep learning methods for leaf-wood separation and tree species classification [14,40,44]. However, as far as we know, this is the first study demonstrating the potential of combining TLS with deep learning models for classifying damage severity at the tree level after a hurricane disturbance. The CNN architectures in our study performed satisfactorily. Accurate estimation of damage severity after a major disturbance event is very important for prioritizing forest management practices; for land managers, damage severity classification can inform responses to future hurricanes, economic timber evaluation, and the management of potential wildfires. In this section, we discuss the results and highlight future research needs.
The TLS dataset comes from an environment subjected to a major stressor, which induced significant changes to tree structure. Transforming the 3D point clouds into images allowed the use of powerful and established CNN-based image classification techniques, whereas the direct use of 3D point clouds would have been computationally onerous [63]. The accuracy results from all architectures tested in our approach were promising (overall accuracies ranging from 67.9% to 92.3%). Although overall accuracy can indicate performance, it can be somewhat misleading when the number of samples in the training dataset is unbalanced [42]. Hence, the F1-score and Kappa value were also used to assess damaged trees by class and by model.
Our results show that damaged trees in classes C3, C4, C5, and C6 could be classified with high accuracy rates (F1-score > 0.94). Comparing the confusion matrices, we can see that classes C3, C4, C5, and C6 were accurately classified. This classification success is attributed to the distinct architecture of these four classes: the shape of C4 resembles a triangle, C5 has no crown, and C6 is horizontally arranged. In contrast, C1, C2, and C3 share the presence of trunks and crowns, and the inclination of the whole tree is the main parameter distinguishing them. The image representation of point clouds multicolored by height is also more distinctive for classes C4, C5, and C6 than for C1, C2, and C3. Most samples in C1 and C2 span the full established color range, whereas trees in C4, C5, and C6 do not, as they are not as tall; C6, for example, is horizontally laid out and spans little of the height-based color range. The capability of CNN models to learn tree architectural characteristics from 2D images is inspired by the human visual cortex [64], and both CNNs and human vision are susceptible to errors. For example, we observed relatively high rates of misclassification among C1, C2, and C3 (C1 to C2 ~28% and C2 to C3 ~11%). The classification errors among these classes may have resulted from the differing points of view produced by rotation in the x-y plane. For instance, C2 trees may have been misclassified as C1 because their trunk inclination is less than 45° and, from certain viewpoints, the trunk can appear straight. Notably, trees with no damage (C1) or leaning (<45°) stems (C2) remain feasible for forest harvest, since these trees can still be merchandised as sawlogs [16].
Regarding the performance of the CNN architectures tested in our study, VGG16, DenseNet201, and Inception_v3 yielded results with less confusion and higher accuracy values than the other methods. The simple CNN model showed an overall weaker performance in three of the six damaged tree classes. One potential explanation for the difference in accuracy between the simple CNN and the remaining models is that all the other models were pre-trained with ImageNet data and then fine-tuned with the labeled training data, while the simple CNN relied entirely on our small training dataset to adjust the model weights. Therefore, all models but the simple CNN leveraged transfer learning; that is, weights previously trained on other images helped to better identify the tree damage classes. A direct and specific cause of the differences in prediction accuracy among the six CNN models is difficult to determine, as many factors may have affected accuracy, including the selection of the training and test datasets [63] and data resolution and attribution [44]. Additionally, small samples (<40 trees) may explain the poor performance of some CNN models, but limited sample sizes are not unusual for TLS research in many ecosystems [64]. Further, classification accuracy is affected not only by data resolution and sample size but also by the choice of classifier. In deep learning-based classification methods, the convolutional and pooling layers vary among architectures and affect these factors; pooling can be used to remove anomalous pixels and is a down-sampling process that takes the average or maximum value over a local neighborhood [65]. Deep learning classifiers such as VGG, ResNet, and DenseNet have different parameter sizes, and each responds distinctly in terms of accuracy, speed, and stability. For instance, DenseNet was designed to have a small parameter size and to be a lightweight model [44].
In this study, the whole procedure of using plot scans to classify damage severity at the tree level was not fully automated, and the segmentation of individual trees by damage severity class was executed manually. However, the generally high accuracy rates found herein demonstrate the advantages of our proposition—the application of deep learning and TLS data in disturbed areas—and are an important first step toward an automated post-hurricane forest inventory that would provide wood and carbon storage estimates, as well as support ecological process research. In this sense, deep learning approaches may be used in future applications given their potential to solve segmentation problems. Also, the direct use of 3D point clouds in the damage severity classification procedure might be considered for improving accuracy and assisting with the quantification and qualification of trees after a disturbance for forest management practices; however, the trade-off between accuracy and computation must be further investigated.

5. Conclusions

In this study, we assessed the capability of TLS and deep learning for classifying post-hurricane damage severity at the individual tree level in a pine-dominated forest ecosystem in the Florida Panhandle. Combining TLS with six types of CNNs was shown to be efficient for classifying post-hurricane damage severity at the individual tree level with high accuracy, with VGG16 and multicolored-by-height TLS-derived 2D images outperforming all the other methods tested. This is the first attempt to combine TLS and deep learning for classifying damage severity at the tree level. Despite the promising results found herein, there is still a long way to go before the proposed method can be applied at an operational scale. Improvements are needed not only in damage severity classification, but also in the efficiency and automation of methods for individual tree extraction from TLS data, especially for fallen trees. We hope the open-source rTLsDeep R package developed in this study for classifying post-hurricane damage severity at the individual tree level will stimulate further research and applications not just in longleaf pine but also in other forest types in hurricane-prone regions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs15041165/s1, Description of the rTLsDeep R package (Figures S1–S6).

Author Contributions

Conceptualization, C.K., R.D., M.P.F. and C.A.S.; methodology, C.K., R.D., M.P.F. and C.A.S.; software, C.K., R.D., M.P.F., C.H. and C.A.S.; validation, C.K., R.D., M.P.F. and C.A.S.; formal analysis, C.K., R.D., M.P.F. and C.A.S.; investigation, C.K., R.D., M.P.F. and C.A.S.; resources, C.K., R.D., M.P.F., J.V., E.B. and C.A.S.; data curation, C.K., R.D., M.P.F. and C.A.S.; writing—original draft preparation, C.K., R.D., M.P.F. and C.A.S.; writing—review and editing, C.K., R.D., M.P.F., C.H., E.B., J.V. and C.A.S.; visualization, C.K. and C.A.S.; supervision, C.A.S., E.B. and J.V.; project administration, C.K.; funding acquisition, J.V. and E.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the USDA NIFA Awards # 2020-67030-30714 and #2023-68016-39039.

Acknowledgments

We would like to express our sincere thanks to the funding agency, to Danilo R. F. Souza and Luiz A. Nogueira for assisting with the rTLsDeep R package, and to Rodrigo Leite for the field data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lugo, A.E.; Applefield, M.; Pool, D.J.; McDonald, R.B. The impact of Hurricane David on forests of Dominica. Can. J. For. Res. 1983, 13, 201–211. [Google Scholar] [CrossRef]
  2. Gresham, C.A.; Williams, T.M.; Lipscomb, D.J. Hurricane Hugo wind damage to southeastern U.S. coastal forest tree species. Biotropica 1991, 23, 420–446. [Google Scholar] [CrossRef]
  3. Tanner, E.V.J.; Kapos, V.; Healey, J.R. Hurricane effects on forest ecosystems in the Caribbean. Biotropica 1991, 23, 512–521. [Google Scholar] [CrossRef]
  4. Lin, T.C.; Hamburg, S.P.; Hsia, Y.J.; Lin, T.T.; King, H.B.; Wang, L.J.; Lin, K.C. Influence of typhoon disturbances on the understory light regime and stand dynamics of a subtropical rain forest in northeastern Taiwan. J. For. Res. 2003, 8, 139–145. [Google Scholar] [CrossRef]
  5. Mitchell, S.J. Wind as a natural disturbance agent in forests: A synthesis. Forestry 2013, 86, 147–157. [Google Scholar] [CrossRef] [Green Version]
  6. Whigham, D.F.; Olmsted, I.; Cano, E.C.; Harmon, M.E. The impact of Hurricane Gilbert on trees, litterfall, and woody debris in a dry tropical forest in the northeastern Yucatan Peninsula. Biotropica 1991, 32, 434–441. [Google Scholar] [CrossRef] [Green Version]
  7. Liu, K.B.; Lu, H.; Shen, C. A 1200-year proxy record of hurricanes and fires from the Gulf of Mexico coast: Testing the hypothesis of hurricane-fire interactions. Quat. Res. 2008, 69, 29–41. [Google Scholar] [CrossRef]
  8. Evans, A.M.; Camp, A.E.; Tyrrell, M.L.; Riely, C.C. Biotic and abiotic influences on wind disturbance in forests of NW Pennsylvania, USA. For. Ecol. Manag. 2007, 245, 44–53. [Google Scholar] [CrossRef]
  9. Achim, A.; Ruel, J.-C.; Gardiner, G.L.; Meunier, S. Modelling the vulnerability of balsam fir forests to wind damage. For. Ecol. Manag. 2005, 204, 35–50. [Google Scholar] [CrossRef]
  10. Schroeder, L.M.; Eidmann, H.H. Attacks of bark and wood boring Coleoptera on snow broken conifers over a two-year period. Scand. J. For. Res. 1993, 8, 257–265. [Google Scholar] [CrossRef]
  11. Oliver, C.D.; Larson, B.C. Forest Stand Dynamics; Wiley: New York, NY, USA, 1996; p. 520. [Google Scholar]
  12. Seidl, R.; Thom, D.; Kautz, M.; Martin-Benito, D.; Peltoniemi, M.; Vacchiano, G.; Wild, J.; Ascoli, D.; Petr, M.; Honkaniemi, J.; et al. Forest disturbances under climate change. Nat. Clim. Chang. 2017, 7, 395–402. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Sharma, A.; Ojha, S.K.; Dimov, L.D.; Vogel, J.; Nowak, J. Long-term effects of catastrophic wind on southern US coastal forests: Lessons from a major hurricane. PLoS ONE 2021, 16, e0243362. [Google Scholar] [CrossRef]
  14. Xi, W.; Peet, R.K.; Urban, D.L. Changes in forest structure, species diversity and spatial pattern following hurricane disturbance in a Piedmont North Carolina forest, USA. J. Plant Ecol. 2008, 1, 43–57. [Google Scholar] [CrossRef]
  15. Zampieri, N.E.; Pau, S.; Okamoto, D.K. The impact of Hurricane Michael on longleaf pine habitats in Florida. Sci. Rep. 2020, 10, 8483. [Google Scholar] [CrossRef]
  16. Rutledge, B.T.; Cannon, J.B.; McIntyre, R.K.; Holland, A.M.; Jack, S.B. Tree, Stand, and Landscape Factors Contributing to Hurricane Damage in a Coastal Plain Forest: Post-Hurricane Assessment in a Longleaf Pine Landscape. For. Ecol. Manag. 2021, 481, 118724. [Google Scholar] [CrossRef]
  17. Bigelow, S.; Looney, C.; Cannon, J.B. Hurricane effects on climate-adaptive silviculture treatments to longleaf pine woodland in southwestern Georgia, USA. For. Int. J. For. Res. 2020, 94, 395–406. [Google Scholar] [CrossRef]
  18. Willson, K.G.; Cox, L.E.; Hart, J.L.; Dey, D.C. Three-dimensional light structure of an upland Quercus stand post-tornado disturbance. J. For. Res. 2020, 31, 141–153. [Google Scholar] [CrossRef] [Green Version]
  19. Bender, M.A.; Knutson, T.R.; Tuleya, R.E.; Sirutis, J.J.; Vecchi, G.A.; Garner, S.T.; Held, I.M. Modeled impact of anthropogenic warming on the frequency of intense Atlantic hurricanes. Science 2010, 327, 454–458. [Google Scholar] [CrossRef] [Green Version]
  20. Gresham, F.M. Conceptualizing behavior disorders in terms of resistance to intervention. Sch. Psychol. Rev. 1991, 20, 23–36. [Google Scholar] [CrossRef]
  21. Provencher, L.; Litt, A.R.; Gordon, D.R.; LeRoy Rodgers, H.; Herring, B.J.; Galley, K.E.M.; McAdoo, J.P.; McAdoo, S.J.; Gobris, N.M.; Hardesty, J.L. Restoration fire and hurricanes in longleaf pine sandhills. Ecol. Restor. 2001, 19, 92–98. [Google Scholar] [CrossRef]
  22. McIntyre, R.K.; Jack, S.B.; Mitchell, R.J.; Hiers, J.K.; Neel, W.L. Multiple Value Management: The Stoddard-Neel Approach to Ecological Forestry in Longleaf Pine Grasslands; Joseph, W., Ed.; Jones Ecological Research Center at Ichauway: Newton, GA, USA, 2008; p. 31. [Google Scholar]
  23. Blackman, R.; Yuan, F. Detecting Long-Term Urban Forest Cover Change and Impacts of Natural Disasters Using High-Resolution Aerial Images and LiDAR Data. Remote Sens. 2020, 12, 1820. [Google Scholar] [CrossRef]
  24. Silva, C.A.; Saatchi, S.; Alonso, M.G.; Labriere, N.; Klauberg, C.; Ferraz, A.; Meyer, V.; Jeffery, K.J.; Abernethy, K.; White, L.; et al. Comparison of Small- and Large-Footprint Lidar Characterization of Tropical Forest Aboveground Structure and Biomass: A Case Study From Central Gabon. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3512–3526. [Google Scholar] [CrossRef] [Green Version]
  25. Chambers, J.Q.; Fisher, J.I.; Zeng, H.; Chapman, E.L.; Baker, D.B.; Hurtt, G.C. Hurricane Katrina’s carbon footprint on US Gulf Coast forests. Science 2007, 318, 1107. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Russell, M.; Eitel, J.U.H.; Link, T.E.; Silva, C.A. Important Airborne Lidar Metrics of Canopy Structure for Estimating Snow Interception. Remote Sens. 2021, 13, 4188. [Google Scholar] [CrossRef]
  27. Klauberg, C.; Hudak, A.T.; Silva, C.A.; Lewis, S.A.; Robichaud, P.R.; Jain, T.B. Characterizing fire effects on conifers at tree level from airborne laser scanning and high-resolution, multispectral satellite data. Ecol. Model. 2019, 412, 108820. [Google Scholar] [CrossRef]
  28. Silva, V.S.; Silva, C.A.; Mohan, M.; Cardil, A.; Rex, F.E.; Loureiro, G.H.; Almeida, D.R.A.; Broadbent, E.N.; Gorgens, E.B.; Dalla Corte, A.P.; et al. Combined Impact of Sample Size and Modeling Approaches for Predicting Stem Volume in Eucalyptus spp. Forest Plantations Using Field and LiDAR Data. Remote Sens. 2020, 12, 1438. [Google Scholar] [CrossRef]
  29. Cunha Neto, E.M.; Veras, H.F.P.; Moraes, A.; Klauberg, C.; Mohan, M.; Cardil, A.; Broadbent, E.N. Measuring Individual Tree Diameter and Height Using GatorEye High-Density UAV-Lidar in an Integrated Crop-Livestock-Forest System. Remote Sens. 2020, 12, 863. [Google Scholar] [CrossRef] [Green Version]
  30. Silva, C.A.; Duncanson, L.; Hancock, S.; Neuenschwander, A.; Thomas, N.; Hofton, M.; Simard, M.; Armston, J.; Feng, T.; Montesano, P.; et al. Mapping Tropical Forest Aboveground Biomass Density from Synergism of GEDI, ICESat-2, and NISAR data. Remote Sens. Environ. 2022. in review. [Google Scholar]
  31. Silva, C.A.; Klauberg, C.; Hudak, A.T.; Vierling, L.A.; Liesenberg, V.; Bernett, L.G.; Scheraiber, C.; Schoeninger, E. Estimating Stand Height and Tree Density in Pinus taeda plantations using in-situ data, airborne LiDAR and k-Nearest Neighbor Imputation. An. Acad. Bras. Ciências 2018, 90, 295–309. Available online: https://www.scielo.br/j/aabc/a/JjVJzFxjcyMWyQxRrbyCbjR/?lang=en (accessed on 10 December 2022). [CrossRef]
  32. Jaafar, W.S.W.M.; Woodhouse, I.H.; Silva, C.A.; Omar, H.; Maulud, K.N.A.; Hudak, A.T.; Klauberg, C.; Cardil, A.; Mohan, M. Improving Individual Tree Crown Delineation and Attributes Estimation of Tropical Forests Using Airborne LiDAR Data. Forests 2018, 9, 759. [Google Scholar] [CrossRef] [Green Version]
  33. Béland, M.; Widlowski, J.-L.; Fournier, R.A.; Côté, J.-F.; Verstraete, M.M. Estimating leaf area distribution in savanna trees from terrestrial LiDAR measurements. Agric. For. Meteorol. 2011, 151, 1252–1266. [Google Scholar] [CrossRef]
  34. Calders, K.; Newnham, G.; Burt, A.; Murphy, S.; Raumonen, P.; Herold, M.; Culvenor, D.S.; Avitabile, V.; Disney, M.; Armston, J.; et al. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods Ecol. Evol. 2014, 6, 198–208. [Google Scholar] [CrossRef]
  35. Dobre, A.C.; Pascu, I.-S.; Leca, Ș.; Garcia-Duro, J.; Dobrota, C.-E.; Tudoran, G.M.; Badea, O. Applications of TLS and ALS in Evaluating Forest Ecosystem Services: A Southern Carpathians Case Study. Forests 2021, 12, 1269. [Google Scholar] [CrossRef]
  36. Leite, R.V.; Silva, C.A.; Mohan, M.; Cardil, A.; Almeida, D.R.A.; Carvalho, S.P.C.; Jaafar, W.S.W.M.; Guerra-Hernández, J.; Weiskittel, A.; Hudak, A.T.; et al. Individual Tree Attribute Estimation and Uniformity Assessment in Fast-Growing Eucalyptus spp. Forest Plantations Using Lidar and Linear Mixed-Effects Models. Remote Sens. 2020, 12, 3599. [Google Scholar] [CrossRef]
  37. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar]
  38. Liu, H.; Shen, X.; Cao, L.; Yun, T.; Zhang, Z.; Fu, X.; Chen, X.; Liu, F. Deep Learning in Forest Structural Parameter Estimation Using Airborne LiDAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 1603–1618. [Google Scholar] [CrossRef]
  39. Abdi, O.; Uusitalo, J.; Kivinen, V.-P. Logging Trail Segmentation via a Novel U-Net Convolutional Neural Network and High-Density Laser Scanning Data. Remote Sens. 2022, 14, 349. [Google Scholar] [CrossRef]
  40. Dalagnol, R.; Wagner, F.H.; Emilio, T.; Streher, A.S.; Galvão, L.S.; Ometto, J.P.H.B.; Aragão, L.E.O.C. Canopy palm cover across the Brazilian Amazon forests mapped with airborne LiDAR data and deep learning. Remote Sens. Ecol. Conserv. 2022, 8, 579–760. Available online: https://zslpublications.onlinelibrary.wiley.co (accessed on 10 December 2022). [CrossRef]
  41. Ferreira, M.P.; de Almeida, D.R.A.; Papa, D.A.; Minervino, J.B.S.; Veras, H.F.P.; Formighieri, A.; Santos, C.A.N.; Ferreira, M.A.D.; Figueiredo, E.O.; Ferreira, E.J.L. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 2020, 475, 118397. [Google Scholar] [CrossRef]
  42. Nezami, N.S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree Species Classification of Drone Hyperspectral and RGB Imagery with Deep Learning Convolutional Neural Networks. Remote Sens. 2020, 12, 1070. [Google Scholar] [CrossRef] [Green Version]
  43. Goyal, A.; Law, H.; Liu, B.; Newell, A.; Deng, J. Revisiting point cloud shape classification with a simple and effective baseline. Int. Conf. Mach. Learn. 2021, 139, 3809–3820. [Google Scholar] [CrossRef]
  44. Xi, Z.; Hopkinson, C.; Rood, S.B.; Peddle, D.R. See the forest and the trees: Effective machine and deep learning algorithms for wood filtering and tree species classification from terrestrial laser scanning. ISPRS J. Photogramm. Remote Sens. 2020, 168, 1–16. [Google Scholar] [CrossRef]
  45. Allen, M.J.; Grieve, S.W.D.; Owen, H.J.F.; Lines, E.R. Tree species classification from complex laser scanning data in Mediterranean forests using deep learning. Methods Ecol. Evol. 2022, 1–11. [Google Scholar] [CrossRef]
  46. Klauberg, C.; Vogel, J.; Dalagnol, R.; Ferreira, M.; Broadbent, E.N.; Hamamura, C.; Souza, D.R.F.; Nogueira, L.G.A.; Silva, C.A. rTLsDeep: An R Package for Post-Hurricane Damage Severity Classification at the Individual Tree Level Using Terrestrial Laser Scanning and Deep Learning. Version 0.0.1. Available online: https://github.com/carlos-alberto-silva/rTLsDeep (accessed on 30 December 2022).
  47. Georgia Forestry Commission. TIMBER IMPACT ASSESSMENT Hurricane Michael. 10-11 October 2018. Available online: https://gatrees.org/wp-content/uploads/2020/01/Hurricane-MichaelTimber-Impact-Assessment-Georgia-October-10-11-2018-2.pdf (accessed on 3 October 2022).
  48. RiSCAN Pro® Version 2.9.0. RIEGL RIEGL VZ-400 VZ-400. RIEGL Laser Measurement Systems GmbH. 2019. Available online: http://www.riegl.com/products/software-packages/riscan-pro/ (accessed on 1 November 2022).
  49. CloudCompare®. CloudCompare (Version 2.12). Available online: http://www.cloudcompare.org/ (accessed on 1 July 2022).
  50. Huang, G.; Liu, Z.; Maaten, L.V.D.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2017, arXiv:1608.06993. [Google Scholar]
  51. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  52. Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. Available online: https://proceedings.mlr.press/v97/tan19a.html (accessed on 10 December 2022).
  53. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  54. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  55. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef] [Green Version]
  56. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 630–645. [Google Scholar]
  57. Shimodaira, H. Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plan. Inference 2000, 90, 227–244. [Google Scholar] [CrossRef]
  58. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  60. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  61. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  62. Cohen, J. A Coefficient of Agreement for Nominal Scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  63. Seidel, D.; Annighöfer, P.; Thielman, A.; Seifert, Q.E.; Thauer, J.H.; Glatthorn, J.; Ehbrecht, M.; Kneib, T.; Ammer, C. Predicting Tree Species From 3D Laser Scanning Point Clouds Using Deep Learning. Front. Plant Sci. 2021, 12, 635440. [Google Scholar] [CrossRef] [PubMed]
  64. Cadieu, C.F.; Hong, H.; Yamins, D.L.K.; Pinto, N.; Ardila, D.; Solomon, E.A.; Majaj, N.J.; DiCarlo, J.J. Deep neural networks rival the representation of primate it cortex for core visual object recognition. PLoS Comput. Biol. 2014, 10, e1003963. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Scherer, D.; Müller, A.; Behnke, S. Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks; Diamantaras, K., Duch, W., Iliadis, L.S., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6354, pp. 92–101. [Google Scholar] [CrossRef]
Figure 1. Hurricane Michael path and category in the Southern US, and study sites located within the area impacted by Michael in the Apalachicola National Forest (ANF) and private forest areas (a). Area impacted by Hurricane Michael (36,218.00 km2) and study site locations (b). The timber damage severity map was produced by the Georgia Forestry Commission and Florida Forest Service [47].
Figure 2. TLS-derived 3D point cloud at plot level (a) and extracted individual trees (b): C1—undamaged tree (intact, no visual damage), C2—leaning tree, C3—bending tree with a bow-like trunk, C4—trunk snapped with stem or crown broken, C5—stem breakage (no crown), C6—fallen or uprooted tree. The damage severity classification was adapted from previous studies (e.g., Rutledge et al. [16]).
Figure 3. An example of tree-level 3D point cloud rotation (Z-axis by 30°) and 2D image computation (a,b).
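The per-tree rotation and 2D rasterization illustrated in Figure 3 can be sketched as follows. This is a minimal Python/NumPy illustration of the general technique, not the rTLsDeep implementation; the grid resolution `res` is an assumed parameter.

```python
import numpy as np

def rotate_z(points, angle_deg):
    """Rotate an (N, 3) point cloud about the Z axis by angle_deg degrees."""
    a = np.deg2rad(angle_deg)
    rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return points @ rz.T

def project_to_image(points, res=0.1):
    """Rasterize the XZ side view of a point cloud into a binary 2D image.

    res is the grid cell size in the point cloud's units (assumed meters).
    """
    x, z = points[:, 0], points[:, 2]
    cols = np.floor((x - x.min()) / res).astype(int)
    rows = np.floor((z.max() - z) / res).astype(int)  # row 0 = top of the tree
    img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    img[rows, cols] = 1  # mark cells containing at least one point
    return img

# Example: side view rendered after a 30-degree rotation about Z
# img = project_to_image(rotate_z(tree_points, 30.0))
```

A multicolored-by-height variant would map each occupied cell to a color ramp over its z value instead of a binary 0/1.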
Figure 4. Damage severity classification approach using convolutional neural networks (CNNs). The input image (a) passes through a set of convolutions, pooling, and fully-connected layers (b) that perform feature extraction. At the end of the process (c), the softmax classifier is applied to retrieve class membership probabilities. The input image is then classified according to the class that achieved the highest probability.
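The final step in Figure 4c assigns the damage class with the highest softmax probability. A minimal Python/NumPy sketch of that step follows; the logit values in the usage example are illustrative assumptions, not actual network outputs.

```python
import numpy as np

CLASSES = ["C1", "C2", "C3", "C4", "C5", "C6"]  # damage severity classes

def softmax(logits):
    """Convert raw CNN outputs (logits) into class membership probabilities."""
    e = np.exp(logits - np.max(logits))  # shift by max for numerical stability
    return e / e.sum()

def classify(logits):
    """Return the class with the highest probability and that probability."""
    probs = softmax(logits)
    k = int(np.argmax(probs))
    return CLASSES[k], float(probs[k])
```

For example, `classify(np.array([0.2, 0.1, 0.3, 2.5, 0.4, 0.1]))` assigns class C4 (trunk snapped), the class with the largest logit.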
Figure 5. F1 score for each CNN architecture across post-hurricane damage severity categories (a,b).
Figure 6. Confusion matrices derived from CNN architectures and TLS-derived 2D images in (a,b). The percentage of misclassification between a pairwise combination of damaged trees is shown in the off-diagonal cells. Each cell contains absolute values (pixels) and relative percentages.
Table 1. Summary of overall accuracy, kappa statistics, and mean F1-score for six models’ approaches within image type (colored by height and black) from the validation dataset. The best results by image type are shown in bold.
| Validation Data | Architecture | Overall Accuracy (OA) | Kappa Statistic | Mean F1-Score |
| --- | --- | --- | --- | --- |
| Binary color (black and white) | DenseNet201 | 0.8725 | 0.84631 | 0.87107 |
| | EfficientNet_b7 | 0.8435 | 0.85814 | 0.88065 |
| | Inception_v3 | 0.8774 | 0.85228 | 0.87754 |
| | ResNet152v2 | 0.8725 | 0.84631 | 0.87027 |
| | Simple CNN | 0.8823 | 0.85828 | 0.88307 |
| | **VGG16** | **0.8922** | **0.86995** | **0.89321** |
| Multicolored by height | DenseNet201 | 0.8725 | 0.84637 | 0.87253 |
| | EfficientNet_b7 | 0.8627 | 0.83403 | 0.84620 |
| | Inception_v3 | 0.9020 | 0.88166 | 0.89777 |
| | ResNet152v2 | 0.9012 | 0.88182 | 0.90512 |
| | Simple CNN | 0.7353 | 0.67936 | 0.70929 |
| | **VGG16** | **0.9216** | **0.90556** | **0.92337** |
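The three summary statistics in Table 1—overall accuracy, Cohen's kappa [62], and mean F1-score—can all be derived from a confusion matrix such as those in Figure 6. A minimal Python/NumPy sketch (a generic illustration, not the rTLsDeep code):

```python
import numpy as np

def classification_metrics(cm):
    """Overall accuracy, Cohen's kappa, and macro-mean F1 from a confusion matrix.

    cm[i, j] = number of samples of true class i predicted as class j.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n
    # Cohen's kappa: agreement beyond chance (Cohen, 1960 [62])
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (oa - pe) / (1.0 - pe)
    # Per-class F1 = harmonic mean of precision and recall, then macro average
    with np.errstate(divide="ignore", invalid="ignore"):
        precision = np.diag(cm) / cm.sum(axis=0)
        recall = np.diag(cm) / cm.sum(axis=1)
        f1 = 2 * precision * recall / (precision + recall)
    return oa, kappa, float(np.nanmean(f1))
```

For a two-class matrix `[[8, 2], [1, 9]]` this yields OA = 0.85 and kappa = 0.70; the six-class matrices in Figure 6 are handled the same way.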
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
