Article

Model-Based Identification of Larix sibirica Ledeb. Damage Caused by Erannis jacobsoni Djak. Based on UAV Multispectral Features and Machine Learning

1 College of Geographical Science, Inner Mongolia Normal University, Hohhot 010022, China
2 Inner Mongolia Key Laboratory of Remote Sensing & Geography Information System, Hohhot 010022, China
3 Inner Mongolia Key Laboratory of Disaster and Ecological Security on the Mongolia Plateau, Hohhot 010022, China
4 Department of Geography, School of Arts and Sciences, National University of Mongolia, Ulaanbaatar 14200, Mongolia
5 Baotou Normal College, Inner Mongolia University of Science & Technology, Baotou 014030, China
6 Institute of Geography and Geoecology, Mongolian Academy of Sciences, Ulaanbaatar 15170, Mongolia
7 Institute of Biology, Mongolian Academy of Sciences, Ulaanbaatar 13330, Mongolia
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Forests 2022, 13(12), 2104; https://doi.org/10.3390/f13122104
Submission received: 27 October 2022 / Revised: 4 December 2022 / Accepted: 6 December 2022 / Published: 9 December 2022
(This article belongs to the Special Issue Monitoring and Control of Forest Pest and Disease)

Abstract:
While unmanned aerial vehicle (UAV) remote sensing technology has been successfully used in crop vegetation pest monitoring, a new approach to forest pest monitoring that can be replicated still needs to be explored. The aim of this study was to develop a model for identifying the degree of damage to forest trees caused by Erannis jacobsoni Djak. (EJD). By calculating UAV multispectral vegetation indices (VIs) and texture features (TF), the features sensitive to the degree of tree damage were extracted using the successive projections algorithm (SPA) and analysis of variance (ANOVA), and a one-dimensional convolutional neural network (1D-CNN), random forest (RF), and support vector machine (SVM) were used to construct damage degree recognition models. The overall accuracy (OA), Kappa, Macro-Recall (Rmacro), and Macro-F1 score (F1macro) of all models exceeded 0.8, and the best results were obtained for the 1D-CNN based on the vegetation index sensitive feature set (OA: 0.8950, Kappa: 0.8666, Rmacro: 0.8859, F1macro: 0.8839), while the SVM results based on both vegetation indices and texture features exhibited the poorest performance (OA: 0.8450, Kappa: 0.8082, Rmacro: 0.8415, F1macro: 0.8335). The results for the stand damage level identified by the models were generally consistent with the field survey results, but the results of SVMVIs+TF were poor. Overall, the 1D-CNN showed the best recognition performance, followed by the RF and SVM. Therefore, the results of this study can serve as an important and practical reference for the accurate and efficient identification of the damage level of forest trees attacked by EJD and for the scientific management of forest pests.

1. Introduction

Erannis jacobsoni Djak. (Lepidoptera: Geometridae) is a forest pest unique to the Mongolian Plateau with one generation per year: the hatching period runs from the beginning to the end of May; the larval stage (the damage period) lasts from the end of May to the end of June; the pupal stage lasts from the beginning of July to the beginning of September; and the fledging period lasts from the beginning of September to the middle of October, with eggs laid at the end of the fledging period, followed by the overwintering period. Outbreaks consume large numbers of Larix sibirica Ledeb. (LSL) needles, causing the death of forest trees and posing a serious threat to the ecosystem of Mongolia [1]. Forest disturbance is a destructive event that disrupts the health of the forest at any point in time or space and includes the effects of fire, disease, pests, or climate change [2,3]. Rising global temperatures provide more suitable conditions for pests to survive, and high temperatures also lead to frequent forest fires and a reduced capacity of forests to recover, helping pests thrive; thus, the probability of forest pest outbreaks continues to increase [4,5,6]. According to the Ministry of Nature and Tourism of Mongolia, Mongolia's forest area is approximately 18.6 million hectares, accounting for 11.9% of the country's land area. Forest resources consist mainly of LSL, which accounts for approximately 60% or more of the natural forest area and 80% of the forest reserves [7,8,9,10]. Since 1980, approximately 11.52 million hectares of forest in Mongolia have been infested with insects, forest cover has been reduced by 0.39%, the total forest area has been degraded by 30%, and 17% of the forests have been classified as heavily damaged and in danger of dying out. Of these pests, Dendrolimus sibiricus Tschtv. (DST) and Erannis jacobsoni Djak. (EJD) are the two most significant.
Since 2010, the larch area affected by EJD has increased from tens of thousands of hectares to hundreds of thousands of hectares, making it the most damaging larch pest. However, research in Mongolia has focused on implementing pest control by manually counting egg numbers during the overwintering period and monitoring single-wood population densities during infestation outbreaks, which cannot be achieved on a large scale and is sensitive to weather and topography-related factors. To overcome this problem, it is necessary to develop an efficient and wide-scale method to accurately monitor the location of EJD infestation and assess the degree of damage to forest trees; this is necessary for developing effective scientific measures to control insects and to protect the healthy and sustainable development of forest ecosystems.
Remote sensing can effectively provide contactless and spatially continuous pest monitoring, providing a new method for ground staff. With the continuous improvement of sensors and the rapid development of computer technology, a wide range of remote sensing data are being used for vegetation pest monitoring. Physiological and biochemical changes in vegetation caused by pest stress provide the physical basis for monitoring pests via remote sensing [11]. The changes mainly include the following: (1) a reduction in biomass, which includes the roots, stems, leaves, flowers, and fruits of the forest, and pest attacks of forest trees, resulting in a reduction in root systems, trunks and forest conifer tissue, and the biomass of the affected forest trees [12]; (2) vegetation leaf wilting, changes in leaf structure following pest attacks, reduced transpiration of leaves following massive water loss, and the inability to reduce leaf heat, allowing leaf tissue to be scorched and resulting in leaf wilting [13,14]; and (3) damage to the pigment system, damage to chloroplasts or other organelles, changes in pigment content (e.g., chlorophylls, carotenoids, and anthocyanins), and the rapid dissolution of pigments due to high temperatures, resulting in the discoloration of damaged leaves [15,16]. These changes cause the spectral reflectance of vegetation on remote sensing images to be different from that of healthy vegetation [17,18], and these changes become more pronounced with increasing damage. At different damage levels, the changes exhibit unique characteristics, making it possible to monitor vegetation health and distinguish between different levels of damage by remote sensing. Yu et al. 
[19] used multinomial logistic regression (MLR), the stepwise linear discriminant approach (SDA), and a random forest (RF) to classify damaged Yunnan pine in WorldView-3 satellite images and found that the SDA based on all spectral variables had the highest accuracy (OA: 78.33%, Kappa: 0.712). Harati et al. [20] used 16 years of Landsat data (1999–2014) combined with elevation, slope, and aspect data to simulate forest cover changes caused by the mountain pine beetle (MPB) between 2005 and 2014 using generalized linear regression and random forest algorithms. Although satellite remote sensing can be used to classify forest tree species and identify forest pests at the pixel scale, its accuracy is limited by the low spatial resolution of the images and by mixed pixels, and the monitoring of lightly damaged forest trees is especially difficult. In addition, the long revisit cycle makes it easy to miss the optimal monitoring window, so monitoring forest insect pests at high temporal frequency remains a challenge.
In recent years, the rapid development of UAV remote sensing technology has led to a rapid increase in the use of UAVs with various sensors for forest health monitoring. With this approach, images of higher spatial and temporal resolution are obtained at a lower cost; the technique is less affected by weather [21,22] and can distinguish vegetation from nonvegetation at the pixel scale to achieve high-accuracy regional monitoring. Unlike hyperspectral images, multispectral images do not have tens or even hundreds of continuous, narrow bands to characterize changes such as the moisture content of the forest canopy. Choosing features that are sensitive to the extent of forest damage from a relatively small amount of information is therefore the key to achieving high-accuracy monitoring. UAV multispectral images in the red, red-edge, and near-infrared bands are sensitive to vegetation health status, and each of these bands changes significantly as the degree of damage increases [23,24,25]. Meanwhile, vegetation indices and texture features derived from high-resolution UAV multispectral images are also sensitive to the health of forest trees, and their combination can effectively detect forest pests and diseases. For example, Ye et al. [26] used vegetation indices derived from UAV multispectral images to assess the relationship between vegetation indices and plant health status using a binary logistic regression method, which showed that the green chlorophyll index, normalized difference vegetation index, and normalized difference red edge index readily identified Banana Fusarium wilt. Guo et al.
[27] used UAV images to obtain vegetation indices and texture features, combined them to establish a disease monitoring model for different infection stages of wheat yellow rust using partial least squares regression, and found that the vegetation index model had the highest monitoring accuracy (R2 = 0.75) at mid-infection, texture features obtained the highest R2 at mid- and late-infection (0.65 and 0.82, respectively), and the vegetation index and texture models with a combination of the vegetation index and texture features had the highest accuracy in all infection periods. However, different vegetation pests exhibit different characteristics, so the selection of suitable features to characterize the degree of vegetation damage is crucial for the UAV remote sensing monitoring of forest trees.
Several algorithms have been developed to classify the damage level of forest trees; they can be broadly divided into mathematical and statistical algorithms, machine learning algorithms, and deep learning algorithms. Common methods include multiple linear regression (MLR), partial least squares regression (PLSR), random forests (RF), support vector machines (SVM), K-nearest neighbors (KNN), and convolutional neural networks (CNN) [28,29,30]. Among these, random forests, support vector machines, and convolutional neural networks have shown good results in forest damage classification. For example, Deng et al. [31] constructed a Region Proposal Network (RPN) and Faster Region Convolutional Neural Networks (Faster-RCNN), deep learning frameworks for training pine dieback detection models, and the accuracy of the optimized models reached approximately 90%; Duarte et al. [32] mapped forest density using RF after Otsu thresholding of vegetation indices derived from UAV multispectral images with good results (OA: 98.5%, Kappa: 0.94), suggesting that the combination of Otsu thresholding and random forests could provide appropriate support for forest pest management; Syifa et al. [33] used artificial neural networks (ANN) and support vector machines (SVM) to identify pine wilt disease and found that the SVM was more accurate than the ANN in classifying pine wilt stands. However, there is no universal method for identifying forest pests and diseases, and each method has been limited to identifying one or a few pests and diseases.
Until now, the remote sensing monitoring of vegetation pests and diseases has mainly focused on crops; there have been relatively few studies on forests. Crops have a short growth cycle, which makes it easy to control the occurrence of pests and diseases and makes remote sensing monitoring research on multiple types of pests and diseases feasible. Forest trees, on the other hand, have a growth cycle of decades or more, and an outbreak of pests or diseases that is not controlled in a timely fashion will cause serious damage to the forest ecosystem. In addition, crops are mostly cultivated on flat terrain close to urban areas, where a large amount of test equipment can be deployed for long periods to perform high-frequency monitoring. Forests are located in mountainous areas far from cities, and it is difficult to carry experimental equipment into them, making long-term, high-frequency monitoring difficult. At the same time, the spatial distribution of crops is orderly, which facilitates pixel segmentation and the control of variables such as pest occurrence time and type and the temperature, humidity, and nutrition of the growing season, making quantitative remote sensing research convenient; in contrast, the spatial distribution of forest trees is disorganized, and it is difficult to segment pixels and control variables, which is not conducive to quantitative remote sensing research [11,34]. For all these reasons, forest pest research takes more time than crop research and has been limited.
In this study, the typical outbreak area of EJD in Binder District, Khentii Province, Mongolia, was the test area, and UAV multispectral images and real ground measurement data were the data sources used to construct the UAV multispectral vegetation indices and texture features for the EJD forest tree damage recognition models. The specific objectives of the study were as follows: (1) to analyze the spectral changes in the canopy, sensitive vegetation indices, and texture features of forest trees with different damage levels and explore the indicators of the damage level of EJD forest trees; (2) to build identification models of EJD pest severity using a one-dimensional convolutional neural network, a random forest, and a support vector machine; and (3) to analyze the recognition potential of the models at different damage levels and to evaluate the overall recognition accuracy of the models to find the best recognition model in order to realize the recognition of the damage level of EJD forest trees. This study not only illustrates the application of UAV multispectral remote sensing for assessing the severity of EJD stands but also provides a framework for using UAV remote sensing to monitor forest insect pests.

2. Materials and Methods

2.1. Study Sites

The test area is located in the Binder Forest District, Khentii Province, Mongolia (48°439′–48°442′ N, 110°768′–110°776′ E), with an average elevation of 1059.34 m, as shown in Figure 1. The test area has a north temperate continental climate in a semi-arid region; the average temperature in June and July is approximately 20 °C, and the maximum temperature exceeds 30 °C. The average annual precipitation is 200 to 300 mm [9,35]. The warm, dry weather and suitable altitude provide convenient conditions for the hatching of eggs and the growth of larvae. The study area is a natural forest of LSL with a single tree species and poor stand quality. EJD has no natural enemies there, meaning that damage to the trees may occur year-round. The test was conducted in mid-June 2020, during a serious outbreak of EJD, and no other insect pests were found in the stands during the field survey. The forest trees showed obvious damage symptoms, which was sufficient to meet the needs of this study.

2.2. Data Acquisition

2.2.1. UAV Multispectral Remote Sensing Data

The UAV multispectral data were acquired with a DJI Phantom 4 RTK Multispectral Edition, which was equipped with an all-in-one multispectral imaging system integrating one visible (RGB) camera and five multispectral cameras (Table 1). The UAV flight day was clear and cloudless, and the images were acquired between 11:00 and 14:00. A calibration whiteboard was placed on the ground before the flight, and the sensor reflectivity was calibrated at approximately 5 m above the ground. The UAV followed the established route at an altitude of 60 m, with a heading overlap rate of 80%, a side overlap rate of 65%, and the camera lens at 90° to the ground. The acquired images were radiometrically calibrated and mosaicked in DJI Terra (DJI-Innovations Inc., Shenzhen, China) software, which finally generated a digital orthophoto map (DOM), a visible image (RGB), and five multispectral images (B, G, R, RE, NIR).

2.2.2. Data on the Extent of Damage to Forest Trees

The extent of forest tree damage was determined by the forest canopy appearance characteristics and canopy leaf loss rate obtained within 5 days before and after the UAV flight. Three branches were selected in each of the upper, middle, and lower parts of the forest canopy layer of a single tree, and the numbers of damaged needles and healthy needles were recorded. Then, the leaf loss rate of the three branches was calculated using Equation (1), and, finally, the average value of the leaf loss rate was obtained as the canopy leaf loss rate of the sample trees.
LLR = (1/n) Σ_{i=1}^{n} [NL_i / (NH_i + NL_i)] × 100%    (1)
where LLR is the canopy leaf loss rate of the sample tree and takes values from 0% to 100%, n is the number of sampled branches, NH_i is the number of healthy needles on the i-th branch, and NL_i is the number of damaged needles on the i-th branch.
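Equation (1) is straightforward to implement; the sketch below is an illustrative Python version with hypothetical needle counts per branch (the study sampled three branches from each of the upper, middle, and lower canopy).

```python
def leaf_loss_rate(branches):
    """Canopy leaf loss rate (LLR, percent): the mean per-branch loss
    ratio, as in Equation (1).

    branches: list of (healthy_needles, damaged_needles) tuples,
    one tuple per sampled branch.
    """
    ratios = [nl / (nh + nl) for nh, nl in branches]
    return sum(ratios) / len(ratios) * 100

# Hypothetical counts for three branches of one sample tree
print(leaf_loss_rate([(90, 10), (80, 20), (70, 30)]))  # 20.0
```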
The canopy appearance characteristics were obtained by field surveys, and three or more researchers confirmed the canopy appearance characteristics of trees with different degrees of damage in the field. The LLR was 0 to 5% for healthy stands with lush needles and no exposed branches in the canopy, 5 to 30% for lightly affected stands with lush needles and a few exposed branches in the canopy, 30 to 70% for moderately affected stands with sparse needles and a large number of exposed branches in the canopy with yellow and red colors, and 70 to 100% for heavily affected stands with only a few needles present in the canopy and all exposed branches in the canopy, with gray colors [9], as shown in Table 2. In the end, a total of 840 trees were selected as sample trees, 210 trees for each damage class.
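The LLR ranges above map to the four damage classes with a simple lookup. The boundary handling below (which class an exact threshold value such as 5% falls into) is our assumption, since the ranges in the text meet at their endpoints.

```python
def damage_class(llr):
    """Map a canopy leaf loss rate (percent) to the four damage levels
    used in the field survey (thresholds from Table 2)."""
    if llr <= 5:
        return "healthy"
    elif llr <= 30:
        return "light"
    elif llr <= 70:
        return "moderate"
    return "heavy"

print(damage_class(12.5))  # light
```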

2.3. Research Methodology

In this study, the degree of forest damage was obtained from ground truth data; vegetation indices and texture features were extracted from the UAV multispectral images and subjected to a sensitivity analysis, and SPA was then used to extract the sensitive features and build sensitive feature sets. Three methods, 1D-CNN, RF, and SVM, were selected to construct identification models of the EJD forest damage level, with the sensitive feature sets as the independent variables and the forest damage level as the dependent variable. The general research framework is shown in Figure 2.

2.3.1. UAV Multispectral Image Feature Selection

Irrelevant information in UAV multispectral images can lead to data redundancy. Principal component analysis (PCA) is a statistical method that recombines the original variables into a new set of mutually uncorrelated composite variables and, according to practical needs, selects a few of these components that retain as much of the original information as possible [36,37]; it can thus effectively remove redundant data. To eliminate the influence of ground vegetation and shadows on the final results, the sample trees were extracted by manual visual interpretation in ENVI 5.3 (Harris Corporation, Jersey City, NJ, USA) as regions of interest (ROIs). In this study, 60 vegetation indices selected from the relevant literature were calculated for each sample tree in ENVI 5.3, and the mean value over the ROI was taken as the value for that tree. The texture features comprised the mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment, and correlation [33]. After PCA of the images in ENVI 5.3, the texture feature values of the band with the largest weight were calculated with 3 × 3, 5 × 5, 7 × 7, and 9 × 9 window sizes, and the mean value was taken as the sample tree value.
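As a rough illustration of the PCA step, the sketch below uses scikit-learn rather than ENVI, with random values standing in for the five band values of each pixel; the first principal component is the band on which the GLCM texture windows would then be computed.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical stack of 5 multispectral bands (B, G, R, RE, NIR),
# flattened to shape (pixels, bands)
pixels = rng.random((1000, 5))

pca = PCA(n_components=5)
scores = pca.fit_transform(pixels)
# The first component carries the largest share of the variance
pc1 = scores[:, 0]
print(pca.explained_variance_ratio_)
```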

2.3.2. UAV Multispectral Image Sensitive Feature Extraction

The successive projections algorithm (SPA) is a forward feature selection method that selects the combination of variables containing the least redundant information and the least collinearity. Vector projection analysis is used to reduce the dimensionality of the candidate features and obtain the sensitive features needed for modeling [38]. In this study, SPA-based sensitive feature extraction was performed on the vegetation indices and texture features using MATLAB 2020a (The MathWorks, Inc., Natick, MA, USA). The algorithm starts from a selected feature, calculates in each cycle the projections of the unselected features, and adds the feature with the largest projection vector to the combination, finally forming a sensitive feature set.
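A minimal numpy sketch of the SPA selection loop, assuming a feature matrix with samples in rows and candidate features in columns; this illustrates the projection idea only and is not the MATLAB implementation used in the study.

```python
import numpy as np

def spa_select(X, n_features, start=0):
    """Successive projections algorithm (sketch): greedily add the
    column whose projection onto the orthogonal complement of the
    already-selected columns has the largest norm."""
    X = X.astype(float)
    selected = [start]
    P = X.copy()
    for _ in range(n_features - 1):
        v = P[:, selected[-1]]
        # Remove from every column its component along the last pick
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0  # never re-pick a selected column
        selected.append(int(norms.argmax()))
    return selected

rng = np.random.default_rng(0)
print(spa_select(rng.random((50, 10)), 4))
```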

2.3.3. Forest Tree Damage Recognition Model

A CNN is a deep neural network with a convolutional structure that can effectively reduce the amount of memory occupied by a deep network. It includes an input layer, multiple hidden layers, and an output layer, where the hidden layers contain convolutional layers, pooling layers, rectified linear unit (ReLU) layers, and fully connected layers [39,40], as shown in Figure A1. After the data enter the input layer, the raw values are mean-centered and normalized. The convolutional and ReLU layers perform multilevel abstraction of the input features, making the features more linearly separable; the learned features are mapped to the sample label space by the fully connected layer; finally, the classification results are produced by the output layer. In a 1D convolutional neural network, the input is a vector and a convolutional kernel, and the output is also a vector; usually, the input vector is much longer than the convolutional kernel.
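The 1D convolution described above can be illustrated in a few lines of numpy; the kernel values here are arbitrary, and, as in most CNN libraries, the operation is actually cross-correlation.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Valid 1D convolution (cross-correlation, as in CNNs): slide the
    kernel over the input vector and take dot products."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def relu(v):
    return np.maximum(v, 0)

# Input vector much longer than the kernel, as the text notes
x = np.array([0., 1., 2., 3., 4., 5.])
feat = relu(conv1d_valid(x, np.array([-1., 1.])))
print(feat)  # [1. 1. 1. 1. 1.] — neighbour differences, negatives clipped
```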
RF is a machine learning algorithm derived from decision trees, and it essentially combines multiple decision trees to improve them, with the generation of each tree relying on an independently sampled sample set [41]. The basic idea is to randomly select n samples from the original training set to be used as a new training set by autonomous sampling, generate a random forest consisting of n classification trees by constructing classification decision trees, and finally vote on the attribution of new samples based on the results of each classification tree [42].
The core idea of SVM is to map the data to a high-dimensional space and find the optimal classification hyperplane by minimizing an upper bound on the classification error [43,44]. Classical SVM solves binary classification problems; multiclass problems can be addressed by extending the SVM with suitable kernel functions, and radial basis functions have been shown to outperform other functions in terms of model efficiency [45]. There are two important parameters in an RBF-kernel SVM: the penalty factor (C) and gamma. C is the tolerance for error: the larger C is, the greater the probability of overfitting; the smaller C is, the greater the probability of underfitting. Gamma is a parameter of the radial basis function (RBF) that determines the distribution of the data after mapping to the new feature space; the larger gamma is, the smaller the influence range of each support vector, and vice versa [24,46].
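A minimal scikit-learn sketch of the RF and RBF-kernel SVM classifiers described above, trained on synthetic stand-in data; the C and gamma values are illustrative, not the tuned values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical sensitive-feature matrix: 840 trees x 9 vegetation indices,
# four damage classes (0 = healthy ... 3 = heavy)
X = rng.random((840, 9))
y = (X[:, 0] * 4).astype(int).clip(0, 3)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
# RBF-kernel SVM; C controls error tolerance, gamma the kernel width
svm = SVC(kernel="rbf", C=10, gamma="scale").fit(X, y)
print(rf.score(X, y), svm.score(X, y))
```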

2.3.4. Model Evaluation Metrics

The samples were divided into modeling and validation sets at a ratio of 0.75:0.25, with 640 samples used for model construction and 200 samples used for model evaluation. The overall accuracy (OA), Kappa, Macro-Recall (Rmacro), and Macro-F1 score (F1macro) were selected as model evaluation metrics. The overall accuracy is the probability that the classified result is consistent with the reference data for a random sample, as in Equation (2); the Kappa coefficient is a measure of classification agreement whose value usually lies between −1 and 1, with agreement being meaningful when Kappa > 0 (see Equation (3)); Macro-Recall (Rmacro) and the Macro-F1 score (F1macro) extend the binary Recall and F1-score to multiclass evaluation by averaging the per-class metrics (Precision/Recall/F1-score) with equal weight for all categories, giving Rmacro in Equation (4) and F1macro in Equation (5); F1macro takes values between 0 and 1, and the closer it is to 1, the more stable the model.
OA = (TP + TN) / (TP + TN + FP + FN)    (2)
where OA is the overall accuracy, TP and TN are the numbers of correctly classified samples, and FP and FN are the numbers of incorrectly classified samples.
K = (Po − Pe) / (1 − Pe)    (3)
where K is the Kappa coefficient, Po is the overall classification result, and Pe is the chance consistency error.
Rmacro = (1/n) Σ_{i=1}^{n} R_i    (4)
where Rmacro is the overall recall of the model and Ri is the recall of the i-th classification.
F1macro = (1/n) Σ_{i=1}^{n} F1_i    (5)
where F1macro is the overall F1-score of the model and F1i is the F1-score of the i-th classification.
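Equations (2)–(5) correspond directly to standard scikit-learn metrics; the labels below are hypothetical, purely to illustrate the calls.

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             f1_score, recall_score)

# Hypothetical labels for the four damage levels (0 = healthy ... 3 = heavy)
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 2, 2, 2, 3, 1]

oa = accuracy_score(y_true, y_pred)                      # Equation (2)
kappa = cohen_kappa_score(y_true, y_pred)                # Equation (3)
r_macro = recall_score(y_true, y_pred, average="macro")  # Equation (4)
f1_macro = f1_score(y_true, y_pred, average="macro")     # Equation (5)
print(oa, kappa, r_macro, f1_macro)
```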

3. Results

3.1. Sensitivity Analysis

3.1.1. Sensitivity of Spectral Reflectance to the Degree of Damage of Forest Trees

Pest attacks on forest trees result in changes in the biochemical components and physiological structure of the canopy, so the spectral reflectance differs from that of healthy trees. The spectral reflectance of the forest canopy was extracted from the UAV multispectral single-band images, and the changes in canopy spectral reflectance under different damage conditions were analyzed after normalization. As shown in Figure 3, the spectral reflectance of the canopy changed significantly with the degree of damage. Canopy reflectance increased in the blue and red bands with increasing damage, while it decreased in the red-edge and near-infrared bands; reflectance in the green band was higher for heavily damaged trees than for moderately damaged ones, which may be because the sparse needles on the branches allowed ground background to be mixed into the canopy pixels, increasing the apparent reflectance. The spectral reflectance of all bands varied to different degrees with the degree of damage. Therefore, vegetation indices derived from spectral reflectance can effectively characterize the severity of forest damage.

3.1.2. ANOVA of Spectral Characteristics on the Degree of Damage to Forest Trees

Variance is a measure in probability theory of the degree of deviation between a random variable and its expectation. In order to analyze the significance of the relationship between the selected variables and the degree of stand damage, variance values were calculated for all variables. Figure 4a shows the variance values of all variables, with the largest variances being those of NDVI* and NDSI*, both at 2629.8, and the smallest being that of EVI at 1.38; overall, the vegetation indices were more sensitive to the degree of damage than the texture features. A variance ratio (F) test was then applied to verify significant differences between the variables and the degree of stand damage, with a critical F value of 10.63 at alpha = 1 × 10−6. To clearly show the spectral features with variance values smaller than the critical value, the part of the plot with variance values between 1 and 100 is enlarged in Figure 4b. As can be seen, the variance values of EVI and EVIreg (1.38 and 3.79, respectively) are smaller than the critical F value, indicating that all the remaining spectral features are extremely significant, i.e., highly sensitive. We further analyzed the sensitivity of the features using the Honestly Significant Difference (HSD) post-hoc test (Table A1). Features with higher variance values are sensitive between classes, while features with lower variance values, especially texture features, are not sensitive between individual classes, and EVI and EVIreg, whose values are lower than the critical F value, are not sensitive between classes.
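The one-way ANOVA underlying Figure 4 can be sketched with scipy; the group means and spread below are invented, purely to show how a feature's F statistic would be compared against the critical value of 10.63.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical values of one vegetation index for the four damage
# classes (210 trees each); a feature is "sensitive" when its F
# statistic exceeds the critical value at the chosen alpha
groups = [rng.normal(mu, 0.05, 210) for mu in (0.8, 0.6, 0.4, 0.2)]
f_stat, p_value = f_oneway(*groups)
print(f_stat, p_value)
```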

3.2. Sensitive Feature Extraction for SPA

As the ranges of values of the different vegetation indices and texture features varied widely, all VIs and TF values were mapped uniformly between 0 and 1 through normalization, and the sets of vegetation indices (VIs) and vegetation indices and textural features (VIs + TF) sensitive to different damage levels of the forest trees were extracted separately by SPA. The change in the root mean square error (RMSE) of the SPA during the selection of sensitive features is shown in Figure 5; the orange line represents the VI set, and the blue line represents the combined VI + TF set. The orange box indicates the final SPA-selected sensitive vegetation index variables, the number of which is 9; the corresponding RMSE is 0.4079, which is close to the lowest value. The final selected vegetation indices are Clreg, GMSR, Int*, Int2*, NDSI*, SAVI, SI3*, SI2reg, and MSRreg. The blue box indicates the final selection of the sensitive vegetation indices and texture features by SPA, the number of which is 12; the corresponding RMSE is 0.4081, which is also close to the lowest value. The final selected combination of vegetation indices and texture features is ARI, DVIreg, GMNLI, MTVI2, NDVIreg, 2NLI, Int2reg, Dis3 × 3, Dis7 × 7, Hom3 × 3, and Hom7 × 7.
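The normalization step above can be sketched as a per-column min-max rescaling; the two feature columns below are hypothetical stand-ins for a vegetation index and a texture feature on very different scales.

```python
import numpy as np

def minmax_normalize(features):
    """Map each feature column to [0, 1] so that vegetation indices and
    texture features on different scales are comparable before SPA."""
    lo = features.min(axis=0)
    hi = features.max(axis=0)
    return (features - lo) / (hi - lo)

# Hypothetical matrix: rows are sample trees, columns are a VI and a TF
X = np.array([[0.2, 120.0], [0.5, 180.0], [0.8, 240.0]])
print(minmax_normalize(X))
```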

3.3. Construction of a Classification Model for the Degree of Damage of EJD in Forest Trees

A total of six EJD forest tree damage classification models were established by combining the three model types with the two feature sets (VIs and VIs + TF), and the results are shown in Table 3. The OA, Kappa, Rmacro, and F1macro values of all models were greater than 0.80, indicating that all six models successfully classified the degree of EJD damage to forest trees. CNNVIs achieved the highest OA, Kappa, Rmacro, and F1macro (0.8950, 0.8666, 0.8859, and 0.8839, respectively), while SVMVIs+TF achieved the lowest (0.8450, 0.8082, 0.8415, and 0.8335, respectively). Comparing the feature combinations, the models constructed with VIs as features had higher accuracies and smaller differences among metrics than the models constructed with VIs + TF. Across the three algorithms, the 1D-CNN produced a better severity classification model than RF and SVM. Among the six classification models, CNNVIs had the greatest potential to identify the damage level of EJD stands, followed by CNNVIs+TF, while SVMVIs+TF was the least effective.
Figure 6 shows the confusion matrices of the six classification models, with the number of correct classifications on the diagonal and the misclassifications elsewhere; the last row gives the producer accuracy and the last column the user accuracy. All models identified healthy stands best, followed by heavily damaged stands, while lightly and moderately damaged stands were identified poorly in terms of both producer and user accuracy. CNNVIs achieved the highest classification accuracies for healthy, lightly damaged, and moderately damaged stands (98.1%, 82.4%, and 80.6%, respectively), while CNNVIs+TF achieved the highest accuracy for heavily damaged stands (96.5%). Adding texture features improved the accuracy for the healthy and heavily damaged classes somewhat but lowered it for the lightly and moderately damaged classes. A comprehensive comparison revealed that the 1D-CNN models showed the best user accuracy of all models and classified the degree of EJD damage to forest trees with high confidence.
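The four reported scores and the producer/user accuracies derived from a confusion matrix can be computed as below; the labels are hypothetical stand-ins for the four damage classes, not the study's data.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             recall_score, f1_score, confusion_matrix)

# Hypothetical labels: 0 = healthy, 1 = light, 2 = moderate, 3 = severe.
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3, 1, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2, 3, 3, 1, 1])

oa       = accuracy_score(y_true, y_pred)                  # overall accuracy
kappa    = cohen_kappa_score(y_true, y_pred)               # Kappa
r_macro  = recall_score(y_true, y_pred, average="macro")   # Macro-Recall
f1_macro = f1_score(y_true, y_pred, average="macro")       # Macro-F1

cm = confusion_matrix(y_true, y_pred)        # rows: reference, cols: predicted
producer = cm.diagonal() / cm.sum(axis=1)    # per-class producer accuracy
user     = cm.diagonal() / cm.sum(axis=0)    # per-class user accuracy
```

The diagonal of `cm` holds the correct classifications, as in Figure 6; producer accuracy normalizes it by the reference totals and user accuracy by the predicted totals.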

3.4. Identification of the Severity of EJD Stands

The sensitive vegetation indices and texture features of all stands in the test area were calculated and input into the severity classification models to obtain the severity of the EJD-infested stands; the classification results are shown in Table 4. Except for SVMVIs+TF, the models showed only small differences in the number of trees assigned to each class. Closer analysis revealed that the severity classification models constructed from VIs identified the degree of stand damage more stably than those constructed from VIs + TF. Among the four damage classes, the models differed little for healthy and severely damaged stands but considerably for lightly and moderately damaged stands.
Among the six classification results, those of SVMVIs+TF differed from the other five. Its ability to identify healthy, lightly damaged, and moderately damaged stands was lower than that of the other models, while the number of stands it identified as severely damaged was much higher (it mainly misclassified lightly and moderately damaged stands as severely damaged), which is consistent with the accuracy of the SVMVIs+TF classification model. The remaining models yielded only small differences in the severity identification results for the test area.
Figure 7 shows the visualized results of EJD severity identification. Most of the trees in the test area were attacked by the pest, concentrated mainly in the upper-left and central parts of the area; the affected trees in the lower right were fewer and scattered as isolated points, consistent with the field survey results. The spatial distribution of damage was centered on the severely damaged stands and spread outward in a roughly circular pattern, with the degree of damage decreasing gradually with distance from the center. This is because, after hatching on the ground, EJD larvae feed on the needles of nearby trees and immediately move on to other nearby trees once the needles of the current tree are consumed. The stands marked with black circles in the focus areas of the figure belong to the model validation sample. Comparing the recognition results, we found that the models confused lightly damaged and heavily damaged stands when recognizing the degree of damage across the whole forest area, especially SVMVIs+TF, which is consistent with the per-class recognition accuracy of the models. In general, however, the models accurately identified the damage degree of the forest stands.

4. Discussion

After EJD attacks LSL, the stands differ from healthy vegetation in canopy color, biochemical components, and physiology, and changes in the spectral reflectance of the forest canopy can characterize these changes and thus enable classification of the degree of forest damage [11,12,13,14,15]. The results of this study showed significant differences in the spectral reflectance of forest canopies with different degrees of damage: the reflectance of the blue and red bands increased with increasing damage, while the reflectance of the red-edge and near-infrared bands decreased continuously. The green band was also expected to show decreasing reflectance with increasing damage; however, during manual visual interpretation, ground surface pixels may have been misclassified as vegetation canopy, so the green-band reflectance did not vary consistently. Pigment absorption in the conifer needles governs the spectral reflectance in the visible bands: the blue and red bands are strongly absorbed by chlorophyll, and because the canopy chlorophyll content decreases after damage, the reflectance in these bands increases. In contrast, the green, red-edge, and near-infrared bands are affected by a variety of biochemical components whose contents decrease with increasing damage, so their reflectance decreases significantly [47]. To date, the forest canopy has been extracted from remote sensing images by manual visual interpretation, watershed segmentation, threshold segmentation, deep learning, and other methods [48,49,50]. Although some progress has been made, the results remain short of ideal. In this study, we extracted the forest canopy by manual visual interpretation of the ROI.
In a future study, we will try to extract the forest canopy by canopy segmentation and compare the manual visual interpretation and canopy segmentation methods to find the optimal canopy extraction method to eliminate the effect of surface vegetation and shadows on the spectral reflectance of the forest canopy.
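The band behavior described above (red reflectance rising and near-infrared/red-edge reflectance falling with damage) is exactly what the vegetation indices exploit. As an illustration only, using standard index formulas rather than the study's full SPA-selected set, a damaged canopy yields a lower NDVI than a healthy one:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L damps the soil-background signal."""
    return (1 + L) * (nir - red) / (nir + red + L)

def ndvi_rededge(nir, rededge):
    """Red-edge variant of NDVI, using the red-edge band in place of red."""
    return (nir - rededge) / (nir + rededge)

# Illustrative reflectances: damage raises red and lowers NIR, so NDVI drops.
healthy = ndvi(nir=0.45, red=0.05)   # ~0.8
damaged = ndvi(nir=0.30, red=0.10)   # ~0.5
```

The reflectance values here are invented for the example; the soil-adjusted SAVI is included because exposed ground under defoliated canopies is precisely the confounder discussed above.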
The rational selection of spectral features sensitive to the degree of stand damage is key to effectively identifying the different damage levels. Therefore, to analyze the sensitivity of the selected features to the extent of stand damage, we calculated the variance of all features; ultimately, only EVI and EVIreg were not significant. This may be because EVI and EVIreg are most informative in densely vegetated areas, whereas the stands in our test area were heavily infested, and substantial needle loss exposed more of the ground surface, rendering EVI and EVIreg ineffective in distinguishing between the extents of stand damage. In addition, to eliminate redundancy in model construction, SPA was used to extract the variables sensitive to the degree of forest damage for building the classification models. Although the RMSE corresponding to the number of sensitive features extracted by SPA is not the lowest value, the number of selected features grows geometrically as the RMSE continues to decrease. We therefore believe that SPA can effectively extract features sensitive to the different degrees of damage. We also found that the vegetation indices extracted by SPA all contained either near-infrared or red-edge bands, which are important for reflecting vegetation health, confirming that these vegetation indices can effectively characterize the health of forest trees [51,52]. The texture features represented a smaller proportion of the sensitive features extracted by SPA. The HSD test revealed that the texture features distinguished healthy from heavily damaged stands relatively well but had difficulty distinguishing lightly from moderately damaged stands, making them appear less sensitive in identifying the different damage levels. As a result, texture features were given less weight in the set of sensitive features extracted by SPA.
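The variance-based sensitivity screening described above can be reproduced with a one-way ANOVA; the group samples below are synthetic stand-ins for the normalized values of one index across the four damage classes.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Synthetic normalized values of one index for the four damage classes;
# a sensitive feature shifts its mean as the damage level increases.
healthy  = rng.normal(0.80, 0.05, 30)
light    = rng.normal(0.60, 0.05, 30)
moderate = rng.normal(0.45, 0.05, 30)
severe   = rng.normal(0.30, 0.05, 30)

F, p = f_oneway(healthy, light, moderate, severe)
# A large F (tiny p) means the class means differ, i.e. the feature is
# sensitive to the damage level; non-significant features (EVI and EVIreg
# in this study) would be dropped before SPA.
```

A Tukey HSD test, as in Table A1, then asks the finer question of which *pairs* of classes a significant feature can actually separate.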
Although all the severity classification models in this study performed well in identifying the extent of stand damage, they exhibited some limitations. For example, the RF, SVM, and 1D-CNN models varied somewhat in their accuracy in identifying the degree of damage (healthy, lightly damaged, moderately damaged, and severely damaged stands). After EJD larvae hatch in early to late May, they crawl along the trunks and feed close to the trees, so the lower canopy may be attacked first and shed its needles, while the upper canopy remains uninfested and retains the same structure and color as a healthy stand. The variation between damage levels is more pronounced when observed from bottom to top during ground truthing, whereas drones observe the forest canopy from top to bottom, so the differences in the multispectral images are less pronounced, particularly between healthy and lightly damaged trees and between lightly and moderately damaged trees. This mismatch between the extent of forest damage recorded in the field survey and the vegetation indices and texture features extracted from the UAV multispectral imagery introduced errors into the models' damage identification. The key issue in the remote sensing monitoring of forest pests is the early detection of pest occurrence, which hinges on accurately identifying healthy and lightly damaged stands. All models in this study achieved recognition rates of over 70% for lightly damaged stands, with CNNVIs exceeding 80%. The models thus identified lightly damaged stands reasonably well, providing some support for the early monitoring of pest occurrence.
At this stage, although UAV multispectral imagery has been shown to be effective in monitoring the health of vegetation under pest and disease stress, its wavelength bands are limited, and the vegetation indices and texture features extracted from it are less able to characterize deeper feature changes. Hyperspectral remote sensing, which forms hundreds of spectrally contiguous bands, is more sensitive to changes in vegetation chlorophyll and water content and can identify subtle changes in forest canopies [53,54]; LiDAR, with its extremely high angular resolution, range resolution, and anti-interference ability, can obtain high-precision forest structure, spatial, and ground information and has unique advantages at the single-tree scale [55,56]. In the future, multisource remote sensing data combining multispectral, hyperspectral, and LiDAR sensors on board UAVs can be used to mine deeper canopy spectral information, canopy structure information, and ground information, improving the accuracy of forest damage severity classification and the early identification and monitoring of forest pests.
Unlike previous studies in which combining vegetation indices and texture features improved model classification accuracy, the recognition models based on VIs in this study were more accurate than those based on VIs + TF [24,57]. There are three reasons for this. First, texture is effective for images with large differences in coarseness and sparsity [58], but in this study the texture features did not differ significantly among lightly, moderately, and heavily damaged stands. Figure A2 shows the normalized texture feature values for the different damage levels, which did not vary consistently with damage level: the normalized value of DIS decreased with increasing damage but rose again for severe damage, while the normalized value of HOM increased with increasing damage but fell for severe damage. Second, the variance values of the texture features were relatively low compared with those of the vegetation indices, indicating that the texture features discriminated the damage levels less significantly; including them therefore made the differences between damage levels less pronounced than with the vegetation indices alone, reducing model accuracy. Finally, the trees in the study area differed greatly in height and sparsity, and some canopies overlapped; when the tree canopies were extracted, portions of the wrong canopy were included, so the texture features reflected characteristics that did not belong to the tree, causing misclassification and reducing accuracy after texture features were added.
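For reference, the two GLCM texture measures that SPA retained (dissimilarity and homogeneity, over 3 × 3 and 7 × 7 windows) can be computed as in this minimal, single-offset sketch; it is illustrative rather than the authors' implementation.

```python
import numpy as np

def glcm_features(window, levels=8):
    """Dissimilarity and homogeneity of a grey-level co-occurrence matrix
    built from horizontally adjacent pixels (offset (0, 1)) in one window."""
    q = np.floor(window * levels).clip(0, levels - 1).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    for row in q:                            # count horizontal neighbour pairs
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    glcm /= glcm.sum()                       # joint probabilities
    i, j = np.indices(glcm.shape)
    dis = (glcm * np.abs(i - j)).sum()         # dissimilarity: 0 for flat texture
    hom = (glcm / (1.0 + (i - j) ** 2)).sum()  # homogeneity: 1 for flat texture
    return dis, hom

flat = np.full((3, 3), 0.5)                  # a uniform 3 x 3 canopy window
dis_flat, hom_flat = glcm_features(flat)     # dissimilarity 0, homogeneity 1
```

The two measures move in opposite directions by construction, which is consistent with the opposing DIS and HOM trends in Figure A2.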
In this study, three methods, 1D-CNN, RF, and SVM, were used to construct classification models of EJD forest tree damage from the UAV multispectral vegetation indices and texture features. Comparing the recognition results of the different models, we found that the 1D-CNN performed best, followed by RF, with SVM performing worst. The best performance of the 1D-CNN in this study is in line with the findings of other studies [59]. The essence of a convolutional neural network is the sliding of filters over the layers and the computation of dot products between the filters and the layer values, which can reveal finer image features at more abstract, higher levels [39]. Its accuracy is mainly influenced by the number and size of the convolutional and pooling layers, the number of hidden units in the fully connected layers, the choice of optimization algorithm, and the learning rate [60]. The optimizer in this study was the 'Adam' algorithm, a stochastic objective-function optimization algorithm based on adaptive estimates of low-order moments, with high computational efficiency and low memory requirements [61]. Because the vegetation index set and the combined vegetation index and texture feature set contain different numbers of features, we set different learning rates for the vegetation index model (0.01) and the combined model (0.001) to achieve the best results and prevent over- or underfitting. Random forest is essentially an extension of decision trees, whose results are influenced by the correlation between any two trees and by the classification ability of each tree: when the number of features decreases, the classification ability of each tree decreases, and conversely it increases.
Therefore, the key to the classification accuracy of a random forest is determining the optimal number of features [62,63,64]. In this study, SPA characterized the degree of stand damage to the maximum extent with a minimum set of features, which led to better classification results. Support vector machines find the optimal hyperplane in an N-dimensional classification space and rely mainly on the choice of kernel function and the size of the penalty coefficient (C) [43,63,65]. In this study, the RBF kernel was chosen, and the best C and gamma values were determined by cross-validation. The lower accuracy of the support vector machine in this classification may be because the classification hyperplanes between the classes were not optimal, resulting in poorer results. Computer and image recognition technologies are currently developing rapidly; although the forest tree damage severity recognition models selected in this study can effectively classify the damage caused by EJD, they may not be the best possible models. Therefore, the next step is to seek the best classification model for the severity of forest tree damage caused by EJD to achieve fast and accurate recognition.
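The "sliding dot product" view of convolution described above can be made concrete in a few lines of NumPy; the input vector and the edge-style filter are invented for illustration.

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """1-D convolution as described in the text: slide the filter along the
    feature vector and take a dot product at every position."""
    k = len(kernel)
    n_out = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(n_out)])

x = np.array([0.9, 0.8, 0.4, 0.3, 0.3, 0.2])  # e.g. six normalized VI values
edge = np.array([1.0, -1.0])                  # filter that responds to drops
out = conv1d(x, edge)                         # strongest response where the
                                              # values fall fastest (index 1)
```

In a trained 1D-CNN, the filter weights are learned rather than fixed, and many such filters are stacked with pooling and fully connected layers; this sketch only shows the core operation.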
LSL extends from southwestern Russia to the southeast and into the northern parts of Mongolia, playing an important role in the forest ecosystems of the temperate regions of the Northern Hemisphere [65]. The factors that make the LSL canopy differ from that of healthy stands can be divided into biotic and abiotic influences: the biotic factors are mainly pest infestation, while the abiotic influences include temperature and precipitation. Currently, the pests that most affect LSL in Mongolia are DST and EJD, with outbreaks occurring from late May to late August and from late May to mid-July, respectively [8]. Although the two outbreaks occur at similar times, the probability of both pests attacking larches in the same area is extremely low, and the canopy changes they cause differ: larch canopies attacked by DST turn white, while those attacked by EJD turn red. Regarding temperature and precipitation, higher temperatures and reduced precipitation can subject canopy needles to moisture stress. When stands experience moisture stress during the growing season, the canopy spectral reflectance changes in ways similar to the early stages of pest attack, which can lead to confusion between early pest monitoring and moisture stress. A better solution to this problem is to use radar or LiDAR to obtain vertical information about the forest canopy and to distinguish the causes of stress from changes in canopy structure.
At this stage, small UAV platforms can be broadly divided into two types: multirotor and fixed-wing. Multirotor UAVs are highly stable, flexible, and easy to operate and are little affected by terrain, but their limited range makes large-area monitoring challenging. Fixed-wing UAVs have long endurance and can monitor large areas but are less flexible, less maneuverable, and more affected by terrain [66]. Forests are mostly located in mountainous areas with complex, uneven terrain, where the flexible maneuverability of multirotor drones makes monitoring easier. Mongolia's forests are extensive and would require a large number of flights to image with multirotor drones alone, while relying on fixed-wing drones would require training operators and would be heavily constrained by the terrain. Therefore, a combination of multirotor and fixed-wing UAVs can be used to achieve efficient, large-area forest health monitoring.

5. Conclusions

Erannis jacobsoni Djak. is the main larch-leaf-eating pest in Mongolia. In this study, multispectral images obtained by a DJI Phantom 4 RTK multispectral small multirotor UAV platform and the field survey results were used as datasets to extract the vegetation indices and texture features sensitive to the different damage levels of forest trees using ANOVA and the successive projections algorithm (SPA) and to build EJD pest severity models with the 1D-CNN, RF, and SVM. The following conclusions were drawn:
  • The spectral reflectance of forest trees with different degrees of damage had obvious differences, especially in the red band, red-edge band, and near-infrared band. With increasing damage, the spectral reflectance of the green, red-edge, and near-infrared bands decreased, while the spectral reflectance of the red band continued to increase.
  • Sensitive spectral indices and texture features were extracted by ANOVA with SPA for the different damage levels, yielding 9 sensitive VIs and a combination of 12 VIs + TF. The F values of the sensitive features extracted by SPA generally exceeded the critical value of the F test, indicating that the sensitive VIs and VIs + TF extracted using ANOVA with SPA differed significantly with the degree of stand damage.
  • The overall accuracy, Kappa, Rmacro, and F1macro coefficients of all six classification models exceeded 0.8, indicating that all models could effectively identify the degree of forest tree damage. CNNVIs had the best accuracy (OA: 0.8950, Kappa: 0.8666, Rmacro: 0.8859, and F1macro: 0.8839), followed by CNNVIs+TF (OA: 0.8800, Kappa: 0.8485, Rmacro: 0.8670, and F1macro: 0.8672), while SVMVIs+TF had the worst accuracy (OA: 0.8450, Kappa: 0.8082, Rmacro: 0.8415, and F1macro: 0.8335).

Author Contributions

L.M. and X.H. analyzed the data and wrote the paper; X.H. conceived and designed the experiments; G.D., T.N., A.D., M.A. and D.E. executed the experiments and measured the data; Q.H., B.G., S.T. and Y.B. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China (41861056), the Inner Mongolia Autonomous Region Science and Technology Plan Project (2021GG0183), the Natural Science Foundation of Inner Mongolia Autonomous Region (2022MS04005), the Young Scientific and Technological Talents in High Schools (NJYT22030), and the Introduction of High-Level Talents to Start Scientific Research Projects of Inner Mongolia Normal University (2020YJRC051).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Schematic diagram of the convolutional neural network model.
Table A1. Features’ honestly significant difference.
Table A1. Features’ honestly significant difference.
| Dependent Variable | I | J | Mean Difference (I–J) | Standard Error | Statistical Significance | 99.9999% CI Lower Limit | 99.9999% CI Upper Limit |
|---|---|---|---|---|---|---|---|
| WDRVI | Health | Mild | 0.3482 * | 0.0068 | 0.0000 | 0.3125 | 0.3840 |
| | | Moderate | 0.4627 * | 0.0068 | 0.0000 | 0.4270 | 0.4985 |
| | | Severe | 0.5571 * | 0.0068 | 0.0000 | 0.5214 | 0.5929 |
| | Mild | Health | −0.3482 * | 0.0068 | 0.0000 | −0.3840 | −0.3125 |
| | | Moderate | 0.1145 * | 0.0068 | 0.0000 | 0.0788 | 0.1502 |
| | | Severe | 0.2089 * | 0.0068 | 0.0000 | 0.1732 | 0.2446 |
| | Moderate | Health | −0.4627 * | 0.0068 | 0.0000 | −0.4985 | −0.4270 |
| | | Mild | −0.1145 * | 0.0068 | 0.0000 | −0.1502 | −0.0788 |
| | | Severe | 0.0944 * | 0.0068 | 0.0000 | 0.0586 | 0.1301 |
| | Severe | Health | −0.5571 * | 0.0068 | 0.0000 | −0.5929 | −0.5214 |
| | | Mild | −0.2089 * | 0.0068 | 0.0000 | −0.2446 | −0.1732 |
| | | Moderate | −0.0944 * | 0.0068 | 0.0000 | −0.1301 | −0.0586 |
| EVIreg | Health | Mild | −0.0003 | 0.0035 | 0.9996 | −0.0188 | 0.0181 |
| | | Moderate | 0.0030 | 0.0035 | 0.8278 | −0.0155 | 0.0215 |
| | | Severe | −0.0041 | 0.0035 | 0.6438 | −0.0226 | 0.0144 |
| | Mild | Health | −0.0033 | 0.0035 | 0.7752 | −0.0218 | 0.0151 |
| | | Moderate | −0.0030 | 0.0035 | 0.8278 | −0.0215 | 0.0155 |
| | | Severe | −0.0071 | 0.0035 | 0.1782 | −0.0256 | 0.0114 |
| | Moderate | Health | 0.0038 | 0.0035 | 0.7055 | −0.0147 | 0.0222 |
| | | Mild | 0.0041 | 0.0035 | 0.6438 | −0.0144 | 0.0226 |
| | | Severe | 0.0071 | 0.0035 | 0.1782 | −0.0114 | 0.0256 |
| | Severe | Health | 0.0003 | 0.0035 | 0.9996 | −0.0181 | 0.0188 |
| | | Mild | −0.0011 | 0.0031 | 0.9856 | −0.0177 | 0.0155 |
| | | Moderate | −0.0091 | 0.0031 | 0.0211 | −0.0257 | 0.0075 |
| EVI | Health | Mild | −0.0003 | 0.0031 | 0.9998 | −0.0169 | 0.0163 |
| | | Moderate | −0.0011 | 0.0031 | 0.9856 | −0.0177 | 0.0155 |
| | | Severe | −0.0091 | 0.0031 | 0.0211 | −0.0257 | 0.0075 |
| | Mild | Health | 0.0003 | 0.0031 | 0.9998 | −0.0163 | 0.0169 |
| | | Moderate | −0.0008 | 0.0031 | 0.9937 | −0.0174 | 0.0158 |
| | | Severe | −0.0088 | 0.0031 | 0.0271 | −0.0254 | 0.0078 |
| | Moderate | Health | 0.0011 | 0.0031 | 0.9856 | −0.0155 | 0.0177 |
| | | Mild | 0.0008 | 0.0031 | 0.9937 | −0.0158 | 0.0174 |
| | | Severe | −0.0080 | 0.0031 | 0.0555 | −0.0246 | 0.0086 |
| | Severe | Health | 0.0091 | 0.0031 | 0.0211 | −0.0075 | 0.0257 |
| | | Mild | 0.0088 | 0.0031 | 0.0271 | −0.0078 | 0.0254 |
| | | Moderate | 0.0080 | 0.0031 | 0.0555 | −0.0086 | 0.0246 |
| … | | | | | | | |
| Mean3 × 3 | Health | Mild | −0.1230 * | 0.0145 | 0.0000 | −0.1996 | −0.0464 |
| | | Moderate | −0.1816 * | 0.0145 | 0.0000 | −0.2581 | −0.1050 |
| | | Severe | −0.1701 * | 0.0145 | 0.0000 | −0.2467 | −0.0936 |
| | Mild | Health | 0.1230 * | 0.0145 | 0.0000 | 0.0464 | 0.1996 |
| | | Moderate | −0.0585 | 0.0145 | 0.0003 | −0.1351 | 0.0180 |
| | | Severe | −0.0471 | 0.0145 | 0.0066 | −0.1237 | 0.0295 |
| | Moderate | Health | 0.1816 * | 0.0145 | 0.0000 | 0.1050 | 0.2581 |
| | | Mild | 0.0585 | 0.0145 | 0.0003 | −0.0180 | 0.1351 |
| | | Severe | 0.0114 | 0.0145 | 0.8598 | −0.0651 | 0.0880 |
| | Severe | Health | 0.1701 * | 0.0145 | 0.0000 | 0.0936 | 0.2467 |
| | | Mild | 0.0471 | 0.0145 | 0.0066 | −0.0295 | 0.1237 |
| | | Moderate | −0.0114 | 0.0145 | 0.8598 | −0.0880 | 0.0651 |
| Hom3 × 3 | Health | Mild | −0.2696 * | 0.0107 | 0.0000 | −0.3260 | −0.2132 |
| | | Moderate | −0.3193 * | 0.0107 | 0.0000 | −0.3757 | −0.2629 |
| | | Severe | −0.4984 * | 0.0107 | 0.0000 | −0.5548 | −0.4421 |
| | Mild | Health | 0.2696 * | 0.0107 | 0.0000 | 0.2132 | 0.3260 |
| | | Moderate | −0.0497 | 0.0107 | 0.0000 | −0.1061 | 0.0067 |
| | | Severe | −0.2288 * | 0.0107 | 0.0000 | −0.2852 | −0.1724 |
| | Moderate | Health | 0.3193 * | 0.0107 | 0.0000 | 0.2629 | 0.3757 |
| | | Mild | 0.0497 | 0.0107 | 0.0000 | −0.0067 | 0.1061 |
| | | Severe | −0.1791 * | 0.0107 | 0.0000 | −0.2355 | −0.1228 |
| | Severe | Health | 0.4984 * | 0.0107 | 0.0000 | 0.4421 | 0.5548 |
| | | Mild | 0.2288 * | 0.0107 | 0.0000 | 0.1724 | 0.2852 |
| | | Moderate | 0.1791 * | 0.0107 | 0.0000 | 0.1228 | 0.2355 |
| Dis3 × 3 | Health | Mild | 0.1216 * | 0.0131 | 0.0000 | 0.0524 | 0.1908 |
| | | Moderate | 0.1743 * | 0.0131 | 0.0000 | 0.1052 | 0.2435 |
| | | Severe | 0.1592 * | 0.0131 | 0.0000 | 0.0901 | 0.2284 |
| | Mild | Health | −0.1216 * | 0.0131 | 0.0000 | −0.1908 | −0.0524 |
| | | Moderate | 0.0527 | 0.0131 | 0.0004 | −0.0164 | 0.1219 |
| | | Severe | 0.0376 | 0.0131 | 0.0218 | −0.0315 | 0.1068 |
| | Moderate | Health | −0.1743 * | 0.0131 | 0.0000 | −0.2435 | −0.1052 |
| | | Mild | −0.0527 | 0.0131 | 0.0004 | −0.1219 | 0.0164 |
| | | Severe | −0.0151 | 0.0131 | 0.6570 | −0.0843 | 0.0541 |
| | Severe | Health | −0.1592 * | 0.0131 | 0.0000 | −0.2284 | −0.0901 |
| | | Mild | −0.0376 | 0.0131 | 0.0218 | −0.1068 | 0.0315 |
| | | Moderate | 0.0151 | 0.0131 | 0.6570 | −0.0541 | 0.0843 |

* The significance level of the difference between the means is 1 × 10−6.
Figure A2. Variation characteristics of sensitive texture features of forest trees with different degrees of damage: (a) dissimilarity of 3 × 3 windows, (b) dissimilarity of 7 × 7 windows, (c) homogeneity of 3 × 3 windows, (d) homogeneity of 7 × 7 windows.

References

  1. Jugnee, P.; Dorjsuren, A.; Enkhnasan, D. Forest Insects of Mongolia; Best Color International Printing Company: Ulaanbaatar, Mongolia, 2020; p. 306. [Google Scholar]
  2. van Lierop, P.; Lindquist, E.; Sathyapala, S.; Franceschini, G. Global forest area disturbance from fire, insect pests, diseases and severe weather events. For. Ecol. Manag. 2015, 352, 78–88. [Google Scholar] [CrossRef] [Green Version]
  3. Foster, A.C.; Shuman, J.K.; Shugart, H.H.; Dwire, K.A.; Fornwalt, P.J.; Sibold, J.; Negron, J. Validation and application of a forest gap model to the southern Rocky Mountains. Ecol. Model. 2017, 351, 109–128. [Google Scholar] [CrossRef]
  4. Dulamsuren, C. Organic carbon stock losses by disturbance: Comparing broadleaved pioneer and late-successional conifer forests in Mongolia’s boreal forest. For. Ecol. Manag. 2021, 499, 119636. [Google Scholar] [CrossRef]
  5. Haynes, K.J.; Allstadt, A.J.; Klimetzek, D. Forest defoliator outbreaks under climate change: Effects on the frequency and severity of outbreaks of five pine insect pests. Glob. Chang. Biol. 2014, 20, 2004–2018. [Google Scholar] [CrossRef]
  6. Hall, R.J.; Castilla, G.; White, J.C.; Cooke, B.J.; Skakun, R.S. Remote sensing of forest pest damage: A review and lessons learned from a Canadian perspective. Can. Entomol. 2016, 148, 296–356. [Google Scholar] [CrossRef]
  7. Tungalag, M.; Altanzagas, B.; Gerelbaatar, S.; Dorjsuren, C. Tree-Level Above-Ground Biomass Equations for Pinus sylvestris L. in Mongolia. Mong. J. Biol. Sci. 2020, 18, 13–21. [Google Scholar] [CrossRef]
  8. Environmental Database (Байгаль Орчны Мэдээллийн Сан). Copyright Environmental Information Center. Available online: https://eic.mn/forestresource/ (accessed on 15 July 2022).
  9. Xi, G.; Huang, X.; Xie, Y.; Gang, B.; Bao, Y.; Dashzebeg, G.; Nanzad, T.; Dorjsuren, A.; Enkhnasan, D.; Ariunaa, M. Detection of Larch Forest Stress from Jas’s Larch Inchworm (Erannis jacobsoni Djak) Attack Using Hyperspectral Remote Sensing. Remote Sens. 2021, 14, 124. [Google Scholar] [CrossRef]
  10. Tumenjargal, B.; Ishiguri, F.; Aiso, H.; Takahashi, Y.; Nezu, I.; Takashima, Y.; Baasan, B.; Chultem, G.; Ohshima, J.; Yokota, S. Physical and mechanical properties of wood and their geographic variations in Larix sibirica trees naturally grown in Mongolia. Sci. Rep. 2020, 10, 12936. [Google Scholar] [CrossRef]
  11. Zhang, J.; Huang, Y.; Pu, R.; Gonzalez-Moreno, P.; Yuan, L.; Wu, K.; Huang, W. Monitoring plant diseases and pests through remote sensing technology: A review. Comput. Electron. Agric. 2019, 165, 104943. [Google Scholar] [CrossRef]
  12. Zhang, J.; Huang, Y.; Yuan, L.; Yang, G.; Chen, L.; Zhao, C. Using satellite multispectral imagery for damage mapping of armyworm (Spodoptera frugiperda) in maize at a regional scale. Pest Manag. Sci. 2016, 72, 335–348. [Google Scholar] [CrossRef]
  13. Calderón, R.; Navas-Cortés, J.A.; Lucena, C.; Zarco-Tejada, P.J. High-resolution airborne hyperspectral and thermal imagery for early detection of Verticillium wilt of olive using fluorescence, temperature and narrow-band spectral indices. Remote Sens. Environ. 2013, 139, 231–245. [Google Scholar] [CrossRef]
  14. Cheng, T.; Rivard, B.; Sánchez-Azofeifa, G.A.; Feng, J.; Calvo-Polanco, M. Continuous wavelet analysis for the detection of green attack damage due to mountain pine beetle infestation. Remote Sens. Environ. 2010, 114, 899–910. [Google Scholar] [CrossRef]
  15. Zhang, J.; Pu, R.; Huang, W.; Lin, Y.; Luo, J.; Wang, J. Using in-situ hyperspectral data for detecting and discriminating yellow rust disease from nutrient stresses. Field Crop. Res. 2012, 134, 165–174. [Google Scholar] [CrossRef]
  16. Huang, X.; Zhang, Q.; Hu, L.; Zhu, T.; Zhou, X.; Zhang, Y.; Xu, Z.; Ju, W. Monitoring Damage Caused by Pantana phyllostachysae Chao to Moso Bamboo Forests Using Sentinel-1 and Sentinel-2 Images. Remote Sens. 2022, 14, 5012. [Google Scholar] [CrossRef]
  17. Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens. Environ. 2001, 76, 156–172. [Google Scholar] [CrossRef]
  18. Barry, K.; Corkrey, R.; Thi, H.; Ridge, S.; Mohammed, C. Spectral characterization of necrosis from reflectance of Eucalyptus globulus leaves with Mycosphaerella leaf disease or subjected to artificial lesions. Int. J. Remote Sens. 2011, 32, 9243–9259. [Google Scholar] [CrossRef]
  19. Yu, L.; Zhan, Z.; Ren, L.; Zong, S.; Luo, Y.; Huang, H. Evaluating the Potential of WorldView-3 Data to Classify Different Shoot Damage Ratios of Pinus yunnanensis. Forests 2020, 11, 417. [Google Scholar] [CrossRef] [Green Version]
  20. Harati, S.; Pérez, L.; Molowny-Horas, R. Integrating Neighborhood Effect and Supervised Machine Learning Techniques to Model and Simulate Forest Insect Outbreaks in British Columbia, Canada. Forests 2020, 11, 1215. [Google Scholar] [CrossRef]
  21. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  22. Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447. [Google Scholar] [CrossRef]
  23. Su, J.; Liu, C.; Hu, X.; Xu, X.; Guo, L.; Chen, W.-H. Spatio-temporal monitoring of wheat yellow rust using UAV multispectral imagery. Comput. Electron. Agric. 2019, 167, 105035. [Google Scholar] [CrossRef]
  24. Xu, Z.; Zhang, Q.; Xiang, S.; Li, Y.; Huang, X.; Zhang, Y.; Zhou, X.; Li, Z.; Yao, X.; Li, Q.; et al. Monitoring the Severity of Pantana phyllostachysae Chao Infestation in Moso Bamboo Forests Based on UAV Multi-Spectral Remote Sensing Feature Selection. Forests 2022, 13, 418. [Google Scholar] [CrossRef]
  25. Severtson, D.; Callow, N.; Flower, K.; Neuhaus, A.; Olejnik, M.; Nansen, C. Unmanned aerial vehicle canopy reflectance data detects potassium deficiency and green peach aphid susceptibility in canola. Precis. Agric. 2016, 17, 659–677. [Google Scholar] [CrossRef] [Green Version]
  26. Ye, H.; Huang, W.; Huang, S.; Cui, B.; Dong, Y.; Guo, A.; Ren, Y.; Jin, Y. Recognition of Banana Fusarium Wilt Based on UAV Remote Sensing. Remote Sens. 2020, 12, 938. [Google Scholar] [CrossRef] [Green Version]
  27. Guo, A.; Huang, W.; Dong, Y.; Ye, H.; Ma, H.; Liu, B.; Wu, W.; Ren, Y.; Ruan, C.; Geng, Y. Wheat Yellow Rust Detection Using UAV-Based Hyperspectral Technology. Remote Sens. 2021, 13, 123. [Google Scholar] [CrossRef]
  28. Rodríguez, J.; Lizarazo, I.; Prieto, F.; Angulo-Morales, V. Assessment of potato late blight from UAV-based multispectral imagery. Comput. Electron. Agric. 2021, 184, 106061. [Google Scholar] [CrossRef]
  29. Ferentinos, K.P. Deep learning models for plant disease detection and diagnosis. Comput. Electron. Agric. 2018, 145, 311–318. [Google Scholar] [CrossRef]
  30. Wu, B.; Liang, A.; Zhang, H.; Zhu, T.; Zou, Z.; Yang, D.; Tang, W.; Li, J.; Su, J. Application of conventional UAV-based high-throughput object detection to the early diagnosis of pine wilt disease by deep learning. For. Ecol. Manag. 2021, 486, 118986. [Google Scholar] [CrossRef]
  31. Deng, X.; Tong, Z.; Lan, Y.; Huang, Z. Detection and Location of Dead Trees with Pine Wilt Disease Based on Deep Learning and UAV Remote Sensing. AgriEngineering 2020, 2, 294–307. [Google Scholar] [CrossRef]
  32. Duarte, A.; Acevedo-Muñoz, L.; Gonçalves, C.I.; Mota, L.; Valente, C. Detection of Longhorned Borer Attack and Assessment in Eucalyptus Plantations Using UAV Imagery. Remote Sens. 2020, 12, 3153. [Google Scholar] [CrossRef]
  33. Syifa, M.; Park, S.-J.; Lee, C.-W. Detection of the Pine Wilt Disease Tree Candidates for Drone Remote Sensing Using Artificial Intelligence Techniques. Engineering 2020, 6, 919–926. [Google Scholar] [CrossRef]
  34. Sishodia, R.P.; Ray, R.L.; Singh, S.K. Applications of Remote Sensing in Precision Agriculture: A Review. Remote Sens. 2020, 12, 3136. [Google Scholar] [CrossRef]
  35. Ulziibaatar, M.; Matsui, K. Herders’ Perceptions about Rangeland Degradation and Herd Management: A Case among Traditional and Non-Traditional Herders in Khentii Province of Mongolia. Sustainability 2021, 13, 7896. [Google Scholar] [CrossRef]
  36. Shlens, J. A Tutorial on Principal Component Analysis. arXiv 2014, arXiv:1404.1100. [Google Scholar]
  37. Zhang, L.; Dong, W.; Zhang, D.; Shi, G. Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recognit. 2010, 43, 1531–1549. [Google Scholar] [CrossRef] [Green Version]
  38. Araújo, M.C.U.; Saldanha, T.C.B.; Galvão, R.K.H.; Yoneyama, T.; Chame, H.C.; Visani, V. The successive projections algorithm for variable selection in spectroscopic multicomponent analysis. Chemom. Intell. Lab. Syst. 2001, 57, 65–73. [Google Scholar]
  39. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in Vegetation Remote Sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  40. Zhu, X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef] [Green Version]
  41. Blanchet, L.; Vitale, R.; van Vorstenbosch, R.; Stavropoulos, G.; Pender, J.; Jonkers, D.; Schooten, F.-J.v.; Smolinska, A. Constructing bi-plots for random forest: Tutorial. Anal. Chim. Acta 2020, 1131, 146–155. [Google Scholar] [CrossRef]
  42. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  43. Kumar, B.; Vyas, O.; Vyas, R. A comprehensive review on the variants of support vector machines. Mod. Phys. Lett. B 2019, 33, 1950303. [Google Scholar] [CrossRef]
  44. Burges, C.J.C. A Tutorial on Support Vector Machines for Pattern Recognition. Data Min. Knowl. Discov. 1998, 2, 121–167. [Google Scholar] [CrossRef]
  45. Amari, S.; Wu, S. Improving support vector machine classifiers by modifying kernel functions. Neural Netw. 1999, 12, 783–789. [Google Scholar] [CrossRef] [PubMed]
  46. Guo, A.; Huang, W.; Ye, H.; Dong, Y.; Ma, H.; Ren, Y.; Chao, R. Identification of Wheat Yellow Rust using Spectral and Texture Features of Hyperspectral Images. Remote Sens. 2020, 12, 1419. [Google Scholar] [CrossRef]
  47. Chen, Y.; Guerschman, J.P.; Cheng, Z.; Guo, L. Remote sensing for vegetation monitoring in carbon capture storage regions: A review. Appl. Energy 2019, 240, 312–326. [Google Scholar] [CrossRef]
  48. Yang, M.D.; Tseng, H.H.; Hsu, Y.C.; Hui, P.T. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sens. 2020, 12, 633. [Google Scholar] [CrossRef] [Green Version]
  49. Yang, J.; He, Y.; Caspersen, J. Region merging using local spectral angle thresholds: A more accurate method for hybrid segmentation of remote sensing images. Remote Sens. Environ. 2017, 190, 137–148. [Google Scholar] [CrossRef]
  50. Li, D.; Zhang, G.; Wu, Z.; Yi, L. An Edge Embedded Marker-Based Watershed Algorithm for High Spatial Resolution Remote Sensing Image Segmentation. IEEE Trans. Image Process. 2010, 19, 2781–2787. [Google Scholar]
  51. Gitelson, A.; Merzlyak, M.; Chivkunova, O. Optical Properties and Nondestructive Estimation of Anthocyanin Content in Plant Leaves. Photochem. Photobiol. 2001, 74, 38–45. [Google Scholar] [CrossRef]
  52. Niu, Q.; Feng, H.; Zhou, X.; Zhu, J.; Yong, B.; Li, H. Combining UAV Visible Light and Multispectral Vegetation Indices for Estimating SPAD Value of Winter Wheat. Trans. Chin. Soc. Agric. Mach. 2021, 52, 183–194. [Google Scholar]
  53. Govender, M.; Chetty, K.; Bulcock, H. A review of hyperspectral remote sensing and its application in vegetation and water resource studies. Water SA 2007, 33, 145–151. [Google Scholar] [CrossRef] [Green Version]
  54. Huo, L.; Lindberg, E.; Holmgren, J. Towards low vegetation identification: A new method for tree crown segmentation from LiDAR data based on a symmetrical structure detection algorithm (SSD). Remote Sens. Environ. 2022, 270, 112857. [Google Scholar] [CrossRef]
  55. Coops, N.C.; Tompalski, P.; Goodbody, T.; Queinnec, M.; Hermosilla, T. Modelling lidar-derived estimates of forest attributes over space and time: A review of approaches and future trends. Remote Sens. Environ. 2021, 260, 112477. [Google Scholar] [CrossRef]
  56. Zhou, Y.; Lao, C.; Yang, Y.; Zhang, Z.; Yang, N. Diagnosis of winter-wheat water stress based on UAV-borne multispectral image texture and vegetation indices. Agric. Water Manag. 2021, 256, 107076. [Google Scholar] [CrossRef]
  57. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  58. Kamilaris, A.; Prenafeta-Boldú, F.X. A review of the use of convolutional neural networks in agriculture. J. Agric. Sci. 2018, 156, 312–322. [Google Scholar] [CrossRef] [Green Version]
  59. Tao, H.; Li, C.; Zhao, D.; Deng, S.; Jing, W. Deep learning-based dead pine trees detection from unmanned aerial vehicle images. Int. J. Remote Sens. 2020, 41, 8238–8255. [Google Scholar] [CrossRef]
  60. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  61. Belgiu, M.; Draguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  62. Rodriguez-Galiano, V.; Sanchez-Castillo, M.; Chica-Olmo, M.; Chica-Rivas, M. Machine learning predictive models for mineral prospectivity: An evaluation of neural networks, random forest, regression trees and support vector machines. Ore Geol. Rev. 2015, 71, 804–818. [Google Scholar] [CrossRef]
  63. Boateng, E.Y.; Otoo, J.; Abaye, D. Basic Tenets of Classification Algorithms K-Nearest-Neighbor, Support Vector Machine, Random Forest and Neural Network: A Review. J. Data Anal. Inf. Process. 2020, 8, 341–357. [Google Scholar] [CrossRef]
  64. Nalepa, J.; Kawulok, M. Selecting training sets for support vector machines: A review. Artif. Intell. Rev. 2018, 52, 857–900. [Google Scholar] [CrossRef] [Green Version]
  65. Abaimov, A.P. Geographical Distribution and Genetics of Siberian Larch Species; Springer: Dordrecht, The Netherlands, 2010. [Google Scholar]
  66. Zhang, J.; Qiu, X.; Wu, Y.; Zhu, Y.; Cao, Q.; Liu, X.; Cao, W. Combining texture, color, and vegetation indices from fixed-wing UAS imagery to estimate wheat growth parameters using multivariate regression methods. Comput. Electron. Agric. 2021, 185, 106138. [Google Scholar] [CrossRef]
Figure 1. Geographical location of the study area: (a) elevation map of the test area, (b) sample tree distribution, and (c) visible image of the canopy of the affected forest.
Figure 2. The general workflow of the experiment.
Figure 3. UAV multispectral reflectance of the forest canopy with different degrees of damage: (a) Blue band; (b) Green band; (c) Red band; (d) Red-edge band; (e) Near-infrared band (NIR).
Figure 4. Vegetation index and variance of texture characteristics: (a) all characteristic variance values and (b) characteristics with variance values less than 100.
Figure 5. RMSE plot of the number of variables selected by SPA.
Figure 6. Confusion matrices of different classification models for forest tree damage severity.
Figure 7. Identification of different levels of damage in EJD stands, where (a–c) are VIs models and (d–f) are combined VIs + TF models.
Table 1. Multispectral camera parameters.

| Band | Wavelength Range/nm | Central Wavelength/nm | Bandwidth/nm |
|---|---|---|---|
| Blue band (B) | 434–466 | 450 | ±16 |
| Green band (G) | 544–576 | 560 | ±16 |
| Red band (R) | 634–666 | 650 | ±16 |
| Red-edge band (RE) | 714–746 | 730 | ±16 |
| Near-infrared band (NIR) | 814–866 | 840 | ±26 |
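As an illustration of how band reflectances like those in Table 1 feed into vegetation indices, the widely used NDVI can be computed from the Red and NIR bands. This is a minimal sketch with illustrative reflectance values, not data from the paper, which computes many more VIs than shown here:

```python
# Hedged sketch: deriving a common vegetation index (NDVI) from the
# Red and NIR bands listed in Table 1. Reflectance values below are
# illustrative, not measurements from the study.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from band reflectances."""
    return (nir - red) / (nir + red)

# Illustrative canopy reflectances (0-1 scale): defoliated crowns
# typically show lower NIR and higher Red reflectance than healthy ones.
print(ndvi(nir=0.45, red=0.05))  # healthy crown: high NDVI (0.8)
print(ndvi(nir=0.25, red=0.15))  # severely damaged crown: lower NDVI (0.25)
```
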
Table 2. Sample tree damage rating table.

| Tree Class / Victim Level | Healthy | Mild | Moderate | Severe |
|---|---|---|---|---|
| Leaf loss rate | 0%–5% | 6%–30% | 31%–70% | 71%–100% |
| Canopy appearance characteristics | Needles and leaves are abundant; no branches exposed; green crown | Needles are bushier; a few branches are bare; canopy is starting to turn yellow | Needles are sparse and a large number of branches are exposed; crown is yellow and red | Only a few needles; branches are all bare; canopy is gray |
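The leaf-loss thresholds in Table 2 amount to a simple lookup. A hedged sketch of that mapping, with boundary handling assumed from the table's integer-percent ranges:

```python
# Hedged sketch of the damage rating in Table 2: mapping a sample
# tree's leaf (needle) loss rate to one of four victim levels.
# Boundaries assume the table's integer-percent ranges are inclusive.

def damage_class(leaf_loss_pct):
    """Return the Table 2 victim level for a leaf loss percentage."""
    if leaf_loss_pct <= 5:
        return "Healthy"
    if leaf_loss_pct <= 30:
        return "Mild"
    if leaf_loss_pct <= 70:
        return "Moderate"
    return "Severe"

print(damage_class(3))   # Healthy
print(damage_class(45))  # Moderate
```
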
Table 3. Accuracy evaluation of classification models for different damage levels of forest trees.

| Metric | RF (VIs) | SVM (VIs) | 1D-CNN (VIs) | RF (VIs + TF) | SVM (VIs + TF) | 1D-CNN (VIs + TF) |
|---|---|---|---|---|---|---|
| OA | 0.8650 | 0.8700 | 0.8950 | 0.8600 | 0.8450 | 0.8800 |
| Kappa | 0.8311 | 0.8369 | 0.8666 | 0.8256 | 0.8082 | 0.8485 |
| Rmacro | 0.8536 | 0.8603 | 0.8859 | 0.8559 | 0.8415 | 0.8670 |
| F1macro | 0.8530 | 0.8582 | 0.8839 | 0.8506 | 0.8335 | 0.8672 |
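For reference, the four metrics reported in Table 3 can all be derived from a per-class confusion matrix such as those in Figure 6. The sketch below uses an illustrative 4-class matrix, not the paper's data:

```python
# Hedged sketch: computing OA, Cohen's Kappa, macro-Recall, and
# macro-F1 (the Table 3 metrics) from a 4-class confusion matrix.
# The matrix values below are illustrative only.

def metrics(cm):
    """cm[i][j] = number of samples of true class i predicted as class j."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(k))
    oa = diag / n  # overall accuracy

    # Cohen's Kappa: observed agreement corrected for chance agreement pe.
    row_tot = [sum(cm[i]) for i in range(k)]
    col_tot = [sum(cm[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row_tot[i] * col_tot[i] for i in range(k)) / (n * n)
    kappa = (oa - pe) / (1 - pe)

    # Macro averaging: unweighted mean of per-class recall and F1.
    recalls, f1s = [], []
    for i in range(k):
        rec = cm[i][i] / row_tot[i] if row_tot[i] else 0.0
        prec = cm[i][i] / col_tot[i] if col_tot[i] else 0.0
        recalls.append(rec)
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return oa, kappa, sum(recalls) / k, sum(f1s) / k

# Illustrative matrix (rows/cols: Healthy, Mild, Moderate, Severe).
cm = [[52, 3, 0, 0],
      [4, 45, 5, 0],
      [0, 6, 40, 4],
      [0, 0, 3, 38]]
oa, kappa, r_macro, f1_macro = metrics(cm)
print(f"OA={oa:.4f} Kappa={kappa:.4f} Rmacro={r_macro:.4f} F1macro={f1_macro:.4f}")
```
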
Table 4. Results of identifying the degree of EJD damage to forest trees. Unit: tree.

| Victimization Level | RF (VIs) | SVM (VIs) | CNN (VIs) | RF (VIs + TF) | SVM (VIs + TF) | CNN (VIs + TF) |
|---|---|---|---|---|---|---|
| Healthy | 979 | 987 | 1010 | 931 | 740 | 971 |
| Mild | 726 | 746 | 726 | 677 | 296 | 623 |
| Moderate | 635 | 543 | 611 | 599 | 359 | 612 |
| Severe | 602 | 666 | 595 | 735 | 1547 | 736 |
Ma, L.; Huang, X.; Hai, Q.; Gang, B.; Tong, S.; Bao, Y.; Dashzebeg, G.; Nanzad, T.; Dorjsuren, A.; Enkhnasan, D.; et al. Model-Based Identification of Larix sibirica Ledeb. Damage Caused by Erannis jacobsoni Djak. Based on UAV Multispectral Features and Machine Learning. Forests 2022, 13, 2104. https://doi.org/10.3390/f13122104
