Article

Eucalyptus Plantation Area Extraction Based on SLPSO-RFE Feature Selection and Multi-Temporal Sentinel-1/2 Data

1 College of Geomatics and Geoinformation, Guilin University of Technology, No. 12 Jian’gan Road, Guilin 541006, China
2 Guangxi Zhuang Autonomous Region Mineral Resources Reserve Evaluation Center, Nanning 530022, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(9), 1864; https://doi.org/10.3390/f14091864
Submission received: 13 August 2023 / Revised: 8 September 2023 / Accepted: 11 September 2023 / Published: 13 September 2023
(This article belongs to the Special Issue Imaging Sensors for Monitoring Forest Dynamics)

Abstract:
An accurate and efficient estimation of eucalyptus plantation areas is of paramount significance for forestry resource management and ecological environment monitoring. Currently, combining multidimensional optical and SAR images with machine learning has become an important method for eucalyptus plantation classification, but there are still some challenges in feature selection. This study proposes a feature selection method that combines multi-temporal Sentinel-1 and Sentinel-2 data with SLPSO (social learning particle swarm optimization) and RFE (Recursive Feature Elimination), which reduces the impact of information redundancy and improves classification accuracy. Specifically, this paper first fuses multi-temporal Sentinel-1 and Sentinel-2 data, and then carries out feature selection by combining SLPSO and RFE to mitigate the effects of information redundancy. Next, based on features such as the spectrum, red-edge indices, texture characteristics, vegetation indices, and backscatter coefficients, the study employs the Simple Non-Iterative Clustering (SNIC) object-oriented method and three different types of machine-learning models: Random Forest (RF), Classification and Regression Trees (CART), and Support Vector Machines (SVM) for the extraction of eucalyptus plantation areas. Each model uses a supervised-learning method, with labeled training data guiding the classification of eucalyptus plantation regions. Lastly, to validate the efficacy of selecting multi-temporal data and the performance of the SLPSO–RFE model in classification, a comparative analysis is undertaken against the classification results derived from single-temporal data and the ReliefF–RFE feature selection scheme. The findings reveal that employing SLPSO–RFE for feature selection significantly elevates the classification precision of eucalyptus plantations across all three classifiers. The overall accuracy rates were noted at 95.48% for SVM, 96% for CART, and 97.97% for RF. 
Compared with the classification results obtained from multi-temporal data and ReliefF–RFE, the overall accuracy of the three models increased by 10%, 8%, and 8.54%, respectively. The improvement was even more pronounced relative to single-temporal data and ReliefF–RFE, with gains of 15.25%, 13.58%, and 14.54%, respectively. The insights from this research carry theoretical and practical significance for identifying and extracting eucalyptus plantations by leveraging multi-temporal data and feature selection.

1. Introduction

Eucalyptus is one of the most widely planted broad-leaved forest species in the world. Its plantations cover 95 countries, and their total area has exceeded 22.57 million hectares [1]. In China, eucalyptus is mainly planted in Guangdong, Guangxi, and Hainan, and it can be cultivated on a large scale under a suitable environment [2,3]. In the Guangxi region specifically, eucalyptus timber production constitutes 70% of the region’s total timber output, meeting the needs of China’s timber, paper, and cellulose industries [4]. According to 2020 statistics, the eucalyptus plantation area in Guangxi spans 30 million acres, yielding a production value of 300 billion and providing substantial economic benefits to the local area [5]. However, eucalyptus cultivation also brings escalating negative impacts, and its implications for the ecological environment are becoming increasingly noticeable [6,7]. Owing to characteristics such as a short growth cycle and high growth rate, eucalyptus can contribute to reduced local soil fertility, overconsumption of water resources, and decreased biodiversity [8]. In the face of climate change, the climate adaptability of eucalyptus has also attracted attention: research has shown that certain eucalyptus species are more drought tolerant, while others can cope with temperature fluctuations, making them a sustainable forestry option under future climate uncertainty [9]. Hence, an accurate and efficient estimation of the eucalyptus planting area is crucial for monitoring its indiscriminate expansion and assessing the quality of the ecological environment [10].
Compared to traditional manual statistical methods, remote-sensing technology offers a more rapid and efficient approach to estimating the planting area of eucalyptus [11]. However, single remote-sensing images often fail to precisely reflect complex ground features due to multiple factors. Optical images, affected by elements such as spectral resolution and spatial resolution, often lead to phenomena like spectral confusion and variability [12]. Even though Synthetic Aperture Radar (SAR) data provide the benefits of all-weather, all-day operability, and immunity to cloudy and rainy conditions, they can be easily compromised by speckle noise interference [13]. Among the many high spatio-temporal resolution satellites available, the Sentinel series of active and passive remote-sensing satellites have emerged as the primary data source for vegetation monitoring [14]. Battsetseg Tuvdendorj et al. [15] used the time series of S1 and S2 to classify crops in Northern Mongolia and obtained an OA of 0.93. Rumeng Li et al. [16] classified Ta-pieh Mountain forest vegetation through time series S1 and S2, and they obtained the optimal OA of 99.68%. Consequently, the integration of optical and SAR images holds considerable significance for the accurate extraction of eucalyptus plantation areas.
Adding a temporal dimension, the fusion of multitemporal acquisitions of remote-sensing data offers the opportunity to detect changes in vegetation cover of eucalyptus and other forests around the world over time [17]. For example, in forest regions, multi-temporal data can discern seasonal shifts in spectral features and radiative backscatter [18]. Research by Schriever et al. [19] and Mickelson et al. [20] highlights the importance of satellite images from the start and end of the growing season, as spring and autumn exhibit the most pronounced phenological variation among tree species. Zhu et al. [21] leveraged a multi-seasonal Landsat dataset along with Support Vector Machines to classify pine, oak, and mixed mesophytic species. They found that the utilization of images from March to November yielded the highest Kappa accuracy (84%) compared to single-date classifications. The Sentinel series of active and passive remote sensing satellites, with their high-acquisition frequency, open up new possibilities for extracting multi-temporal vegetation information.
Presently, remote-sensing image classification predominantly utilizes pixel-based [22,23,24] and object-oriented methods [25,26,27,28] in conjunction with optical imagery. Traditional pixel-based techniques often generate fragmented object information. In addition, due to the influence of image resolution, the mixed pixels in the edge area are often misclassified, resulting in image noise enhancement [29]. In contrast, the object-oriented method has better anti-noise ability and can combine several adjacent homogeneous pixels into non-overlapping objects through different segmentation algorithms, effectively reducing noise and improving classification accuracy [30,31]. The Simple Non-Iterative Clustering (SNIC) super-pixel segmentation algorithm, an evolution of the Simple Linear Iterative Clustering (SLIC) algorithm, stands as one of the most advanced image-segmentation algorithms currently available [32,33]. It has the benefits of requiring less memory and delivering high computational efficiency. Numerous scholars have successfully applied the SNIC super-pixel segmentation in combination with machine-learning models, achieving commendable results in crop and vegetation extraction [34,35,36,37]. For instance, Lingbo Yang et al. [38] achieved impressive results in rice extraction using SNIC and the Random Forest model, with the highest overall accuracy reaching 0.84.
The phenomena of “same spectrum, different objects” and “same object, different spectrum” frequently occur due to spectral characteristics [39]. The synergy of optical and radar data offers a powerful solution to this problem. Radar data, which sense the physical structure of objects, combined with the spectral resolution of optical data, provide a powerful tool for classifying objects more accurately [40]. Therefore, many researchers have integrated various sources of remote-sensing data, vegetation indices, texture features, and backscatter characteristics as feature variables for classification, effectively mitigating the aforementioned phenomena [41]. However, while the comprehensive multi-feature approach enriches the feature space, it can also introduce information redundancy [42,43,44,45]. Consequently, feature selection is needed to determine the optimal feature subset. Presently, filter-based and wrapper-based methods dominate the realm of feature selection [46,47]. Filter-based methods [48,49] offer simplicity and high efficiency, but they do not account for the intercorrelation between feature variables. ReliefF, proposed by Kononenko [50] as an improvement of the Relief algorithm, is one of the most commonly used filter-based algorithms. Conversely, wrapper-based methods [51,52] consider the interaction between feature dimensions and can effectively evaluate how critical each feature is. However, they suffer from high computational complexity and low efficiency. Additionally, the search results of wrapper-based algorithms are directly influenced by the performance of the underlying machine-learning model. Recursive Feature Elimination (RFE), proposed by Guyon et al. [53], is representative of wrapper-based algorithms. Metaheuristic search algorithms [54,55] offer an advanced strategy for solving optimization problems.
They provide a general framework for exploring and exploiting the search space and are typically used to solve complex optimization problems. The advantage of metaheuristic search is that it can seek global optimal solutions, not just local ones [56]. This makes such algorithms highly effective for complex problems with multiple local optima. Particle Swarm Optimization (PSO) is one of the most commonly used metaheuristic search algorithms [57]. Numerous studies have used PSO for multi-objective optimization, yielding satisfactory results [58,59,60,61]. SL–PSO, proposed by Ran Cheng et al. [62], is an improved version of PSO. By introducing a learning sequence and dynamic factor adjustment, SL–PSO achieves higher efficiency and improved search performance.
Based on this premise, we present a synergistic use of SL–PSO and RFE for feature selection. We begin with multi-temporal Sentinel-1/2 data for remote-sensing data processing. Initial feature selection is undertaken with the ReliefF algorithm. This is followed by refining the features using the SLPSO–RFE model, leading to the delineation of an optimal feature subset. We also harness the SNIC super-pixel segmentation method in conjunction with three machine-learning algorithms (RF, CART, and SVM) for advanced spatial information extraction. By contrasting diverse classification frameworks, this research underscores the significant role of multi-temporal active and passive remote-sensing data and SLPSO–RFE feature selection in pinpointing eucalyptus plantations. The robustness of this methodology ensures precise and efficient data acquisition for eucalyptus plantation management.

2. Study Area and Data Source

2.1. Study Area Profile

The focus of this study is Luzhai County in Liuzhou City, a representative area for our investigation. Luzhai County [63], located in the southwestern part of Liuzhou City in the Guangxi Zhuang Autonomous Region, spans an area of 2974.8 square kilometers. It lies between ~24°14′–24°50′ N and ~109°28′–110°12′ E. The region has an average annual temperature of approximately 20.4 °C, with temperatures rising to 38 °C in summer and dropping to 0 °C in winter. It receives ample rainfall, with annual precipitation of approximately 1483.8 mm, occurring primarily between April and August. The central and southern parts of the county are predominantly flat, featuring gentle hills, plateaus, and small plains. Thanks to the region’s favorable geographical conditions and fertile soil, Luzhai County has abundant forestry resources, with almost 100,000 hectares covered by forests. Eucalyptus, a fast-growing and high-yielding timber species, is the principal artificial forest species in the study area, as depicted in Figure 1.

2.2. Data Source and Preprocessing

The data for this study are derived from multi-temporal Sentinel-1 (S1) and Sentinel-2 (S2) data from the Copernicus program, subjected to preprocessing procedures. To capture the significant phenological shifts within the annual cycle, four composite images were generated: a winter composite of both S1 and S2 following the senescence phase of the eucalyptus trees, and a summer composite of S1 and S2 after the eucalyptus flowering. All satellite imagery preprocessing was executed within the Google Earth Engine (GEE) environment. The four obtained images were fused through the stacking method [64] to obtain a 28-band image.

2.2.1. Sentinel-1 Data

Sentinel-1 is a Synthetic Aperture Radar (SAR) mission comprising two polar-orbiting satellites [65]. It conducts radar imaging via four channels, relying on the 5.405 GHz C-band, and operates in both vertical–vertical (VV) and vertical–horizontal (VH) dual polarization modes. With a swath of up to 400 km and a spatial resolution of 10 m, this sensor facilitates rapid data acquisition. In this study, the VH and VV bands of Sentinel-1 GRD data were selected for experimentation. Median composite images from the summer months of June to August 2020, as well as those from December 2020 to February 2021, were used to extract the eucalyptus plantation areas. Specific band information for Sentinel-1 can be found in Table 1.

2.2.2. Sentinel-2 Data

Sentinel-2 data are divided into two processing levels according to the state of atmospheric correction: Level-1C and Level-2A [66]. Level-1C delivers top-of-atmosphere reflectance data, whereas Level-2A offers surface reflectance data that have undergone atmospheric correction. For this study, we selected images captured during the summer (June to August 2020) and winter (December 2020 to February 2021) seasons. To mitigate the influence of clouds and cloud shadows, we utilized the QA60 band, in which the 10th and 11th bits indicate the presence of opaque clouds and cirrus clouds in a pixel, respectively. When the 10th bit is set to 1, the pixel contains clouds; when the 11th bit is set to 1, it contains cirrus clouds. Leveraging this information, we effectively masked clouds, cloud shadows, and cirrus clouds. Subsequently, we adopted the median composite method to synthesize quarterly median composite images for both the summer and winter seasons, ensuring high-quality surface reflectance data. Detailed information about the four scenes of image data can be found in Table 2.
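The QA60 bit test described above can be sketched in plain Python (outside the GEE API); the bit positions are those stated above for Sentinel-2 Level-1C, and the pixel values in the comments are hypothetical:

```python
def is_clear(qa60: int) -> bool:
    """Return True if a QA60 pixel value flags neither opaque nor cirrus cloud.

    Bit 10 set -> opaque cloud; bit 11 set -> cirrus cloud.
    """
    CLOUD_BIT = 1 << 10   # bit 10: opaque clouds
    CIRRUS_BIT = 1 << 11  # bit 11: cirrus clouds
    return (qa60 & CLOUD_BIT) == 0 and (qa60 & CIRRUS_BIT) == 0

# Hypothetical pixel values:
#   0                  -> clear sky
#   1 << 10            -> opaque cloud
#   1 << 11            -> cirrus cloud
#   (1 << 10)|(1 << 11) -> both flags set
```

In GEE, the same logic is applied with `bitwiseAnd` on the QA60 band to build the mask before median compositing.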

2.2.3. Sample Selection

The quality of samples significantly affects the extraction results of eucalyptus plantations. Thus, selecting representative and typical sample points is of paramount importance. In this study, we conducted field surveys using Google’s high-resolution imagery, documenting observations through photographs and written records. Following this, we selected training and testing samples for both eucalyptus and non-eucalyptus vegetation. We adhered to the principle that samples should be evenly distributed throughout the research area. In total, we selected 400 training samples, which comprised 180 eucalyptus vegetation samples, 130 non-eucalyptus vegetation samples, and 90 non-vegetation samples. Additionally, we chose 300 testing samples. The distribution of sample types and data from the field survey are depicted in Figure 2.

3. Methodology

This study harnesses fused data and employs the SLPSO–RFE model for feature selection, combined with an object-oriented approach and machine-learning models for classification. The specific technical process, depicted in Figure 3, involves the following steps. Firstly, dataset construction entails image selection, cloud masking, image cropping, index calculation, and median synthesis for Sentinel-2. Initial weights for each feature are calculated using ReliefF, and the 10% of features with the lowest weights are subsequently removed. The remaining features are input into the SLPSO–RFE algorithm to search for the optimal feature subset. Secondly, the extraction of eucalyptus plantations is conducted using SNIC super-pixel segmentation along with three machine-learning methods: RF, CART, and SVM. Finally, the accuracy of the extraction results is assessed using a confusion matrix and drone imagery.

3.1. Feature Set Construction

In this study, we utilized an Analytical Spectral Devices (ASD) FieldSpec4 spectrometer to measure the spectral reflectance of eucalyptus leaves. As depicted in Figure 4, the spectral reflectance of eucalyptus leaves exhibits a sharp increase in the red-edge band, showing a significant difference between the red and near-infrared bands. This observation is consistent with the research findings of Liang Ji et al. [18]. Consequently, the chosen vegetation indices should include characteristic indices of the near-infrared band or red-edge band. This study selects five representative vegetation index features and six red-edge index features to construct the feature set. Among these, the Normalized Difference Vegetation Index (NDVI) is widely used to accurately reflect the growth condition of vegetation-covered areas. Nevertheless, saturation might occur when vegetation coverage is high. The Enhanced Vegetation Index (EVI) effectively addresses the NDVI’s saturation issue but may increase the risk of misidentifying non-vegetated areas as vegetated ones. Thus, combining the two indices can enhance classification results. The Chlorophyll Absorption Ratio Index (CARI) effectively minimizes the impact of photosynthetic radiation changes caused by non-photosynthetic activity in the vegetation canopy. However, it remains vulnerable to soil reflectance influences. For this reason, we utilize the Transformed CARI (TCARI). The Vegetation Index green (VIgreen) and Green Normalized Difference Vegetation Index (GNDVI) incorporate the green band to replace the near-infrared and red bands in NDVI, respectively. Research by Wang Ziyi et al. [67] indicates that vegetation indices that include the red-edge band can effectively enhance vegetation classification accuracy. In this paper, GEE is used to calculate the characteristic variables. The selected feature variables and their respective calculation formulas are outlined in Table 3.
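As an illustration, three of the indices discussed above (NDVI, GNDVI, and EVI, the latter with its standard coefficients G = 2.5, C1 = 6, C2 = 7.5, L = 1) can be computed from reflectance values; the band values in the usage lines are hypothetical:

```python
def ndvi(nir: float, red: float) -> float:
    # NDVI = (NIR - Red) / (NIR + Red)
    return (nir - red) / (nir + red)

def gndvi(nir: float, green: float) -> float:
    # GNDVI replaces the red band in NDVI with the green band
    return (nir - green) / (nir + green)

def evi(nir: float, red: float, blue: float,
        G: float = 2.5, C1: float = 6.0, C2: float = 7.5, L: float = 1.0) -> float:
    # EVI = G * (NIR - Red) / (NIR + C1*Red - C2*Blue + L)
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# Hypothetical surface reflectances for a dense canopy pixel:
nir, red, green, blue = 0.5, 0.1, 0.15, 0.05
```

The same formulas are evaluated band-wise in GEE with `normalizedDifference` or `expression` when building the feature set.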

3.2. Construction and Principle of Feature Selection Model

The objective of feature selection [68] is to identify the optimal subset of features for pattern recognition or classification. It aims to mitigate the decrease in classification accuracy caused by information redundancy while enhancing the computational efficiency of the classification model. The ReliefF algorithm, an improvement over the Relief algorithm, is effective at eliminating irrelevant features but does not address information redundancy. In contrast, Recursive Feature Elimination (RFE) is a wrapper feature-selection algorithm that incrementally removes insignificant features via recursive elimination, thereby improving the model’s accuracy and interpretability. Nevertheless, RFE is computationally demanding and susceptible to local optima. To resolve this, we incorporated SL-PSO to optimize RFE. SL-PSO, a variant of Particle Swarm Optimization (PSO), introduces a social-learning mechanism to the standard PSO, allowing particles to learn from and exchange information with each other and thereby enhancing the algorithm’s global search performance. Consequently, this research employs a combination of SL-PSO and RFE for feature optimization, where RFE selects among three model variants (RF, CART, and SVM) and SL-PSO determines the optimal model and parameters for RFE. At the outset, the ReliefF algorithm allocates weights to individual features, and a threshold is set to eliminate invalid features, yielding the preliminary feature subset. Subsequently, SLPSO-RFE is used to search for the optimal feature subset. The feature selection process is shown in Figure 5.
The fundamental concept of the ReliefF algorithm [69] involves iteratively sampling the dataset and calculating distances between samples to assess feature importance. During each iteration, a base sample is randomly selected. The algorithm then calculates its distance to the closest samples within the same class and in different classes, thereby determining the differences between this sample and its nearest neighbors. If the discrepancy between the current sample and its nearest neighbor within the same class is small while the divergence from the nearest sample of a different class is large, the pertinent feature is assigned a larger weight. Conversely, if the difference between the base sample and the nearest neighbor from a different class is smaller, the corresponding feature receives a lower weight. The ReliefF weight calculation follows the formula below:
$$W_{i+1}(f_l) = W_i(f_l) - \sum_{j=1}^{k}\frac{\mathrm{diff}(f_l, R_i, H_j)}{m \times k} + \sum_{C \neq \mathrm{label}(R_i)}\left[\frac{P(C)}{1 - P(\mathrm{label}(R_i))} \times \sum_{j=1}^{k}\frac{\mathrm{diff}(f_l, R_i, M_j(C))}{m \times k}\right]$$
In Formula (1), $W_i(f_l)$ is the weight of the $l$-th feature $f_l$ at the $i$-th iteration, with the initial weight set to 0; $\mathrm{diff}(f_l, R_i, H_j)$ is the distance from $R_i$ to the same-class sample $H_j$; $H_j\ (j = 1, 2, \ldots, k)$ is the $j$-th of the $k$ nearest samples of the same class as $R_i$; $P(C)$ is the proportion of training samples belonging to class $C$; $P(\mathrm{label}(R_i))$ is the proportion of samples of the same class as $R_i$, where $\mathrm{label}(R_i)$ is the label of $R_i$; $\mathrm{diff}(f_l, R_i, M_j(C))$ is the distance from $R_i$ to the heterogeneous sample $M_j(C)$; $M_j(C)\ (j = 1, 2, \ldots, k)$ is the $j$-th of the $k$ nearest samples belonging to class $C$, a class different from that of $R_i$; and $m$ is the number of sampling iterations. The inter-sample distance is computed with the Euclidean metric. The calculation results of the ReliefF feature weights are shown in Figure 6.
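A minimal Python sketch of the weight update in Formula (1), assuming m equals the sample count, per-feature diffs normalized by each feature's range, and a toy two-class dataset; it illustrates the algorithm's logic rather than reproducing the implementation used in this study:

```python
def relieff_weights(X, y, k=1):
    """Simplified ReliefF: subtract diffs to nearest hits H_j, add
    class-prior-weighted diffs to nearest misses M_j(C), per Formula (1)."""
    n, d = len(X), len(X[0])
    classes = sorted(set(y))
    prior = {c: y.count(c) / n for c in classes}
    # per-feature range for normalizing diffs (1.0 if the feature is constant)
    span = [(max(r[f] for r in X) - min(r[f] for r in X)) or 1.0
            for f in range(d)]
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    w = [0.0] * d
    m = n  # sample every instance once
    for i in range(n):
        Ri = X[i]
        for c in classes:
            idx = [j for j in range(n) if y[j] == c and j != i]
            idx.sort(key=lambda j: dist(X[j], Ri))
            for j in idx[:k]:
                for f in range(d):
                    diff = abs(X[j][f] - Ri[f]) / span[f]
                    if c == y[i]:
                        w[f] -= diff / (m * k)              # nearest hit H_j
                    else:
                        scale = prior[c] / (1 - prior[y[i]])
                        w[f] += scale * diff / (m * k)      # nearest miss M_j(C)
    return w
```

On a toy dataset where only the first feature separates the classes, the first weight comes out positive and the constant feature's weight stays at zero, matching the intuition behind the weighting scheme.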
The Recursive Feature Elimination (RFE) method [70] is a feature-selection technique that recursively removes fewer significant features, thereby enhancing the model’s accuracy and interpretability [37]. To implement RFE, the model is initially trained to rank each feature’s importance. The least significant feature is then removed, creating a new feature subset. This subset is fed back into the model to repeat the process until the preset number of features is attained, yielding the optimal feature subset. However, RFE has its drawbacks: it is computationally intensive and vulnerable to various factors. If the feature count is too low, it may lead to model underfitting or overfitting. If the step size is too small, it can extend computation time; if too large, it may cause premature model convergence and essential feature exclusion. Moreover, each RFE iteration requires random forest parameter adjustment. Thus, this study introduces SLPSO for comprehensive RFE optimization. SLPSO [71], an enhancement of the Particle Swarm Optimization (PSO), determines the number of particle groups and iterations to seek the optimal path. In contrast to the original PSO, where each particle could only find the optimal path from historical and global optimum solutions, SL-PSO incorporates the concepts of learning sequence and dynamic adjustment of learning factors. The learning sequence allows each particle to learn from other particles’ historical experiences, enhancing SLPSO’s search capabilities. Dynamic adjustment of learning factors enables updating of these factors based on the particle’s fitness value, promoting better information exchange between particles and facilitating the location of the global optimal solution. Thus, this study uses SLPSO to optimize RFE. The RF, SVM, and CART classification models each serve as a single position, with one classifier model randomly selected to form a particle alongside other RFE parameters. 
RFE’s cv, n_features_to_select, and step parameters are encoded as particle dimensions. SLPSO was set up with 200 particles and 20 iterations. The optimal RFE base model identified by SLPSO is RF, with the optimal n_estimators set at 365. The optimal RFE parameters are: n_features_to_select = 12, step = 5, cv = 6. The feature-weight calculation result of SLPSO-RFE is illustrated in Figure 7. The optimal feature subset is shown in Table 4.
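The recursive-elimination loop that SLPSO tunes can be sketched as follows; the feature names and importance scores are hypothetical, and in this study the importances would come from the fitted RF model at each round rather than a fixed dictionary:

```python
def rfe(features, importance_fn, n_features_to_select, step=1):
    """Recursive Feature Elimination sketch: repeatedly drop the least
    important features until n_features_to_select remain."""
    selected = list(features)
    while len(selected) > n_features_to_select:
        scores = importance_fn(selected)  # re-score surviving features
        n_drop = min(step, len(selected) - n_features_to_select)
        # drop the n_drop lowest-ranked features, keeping original order
        ranked = sorted(selected, key=lambda f: scores[f])
        for f in ranked[:n_drop]:
            selected.remove(f)
    return selected

# Hypothetical importances standing in for RF feature importances:
importances = {'NDVI': 0.9, 'EVI': 0.8, 'VV': 0.3, 'VH': 0.2, 'B2': 0.1}
selected = rfe(list(importances), lambda feats: importances,
               n_features_to_select=2, step=2)
```

SLPSO's role is to search over `step`, `cv`, `n_features_to_select`, and the base classifier, treating each candidate configuration as a particle whose fitness is the cross-validated accuracy.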

3.3. SNIC Superpixel Segmentation

Traditional pixel-based classifications suffer from issues such as misclassification, omission, and noise. Object-oriented classifications address these issues effectively by considering given domain information and dividing the image into several objects based on specified parameters [72].
Simple Linear Iterative Clustering (SLIC) stands out as a leading super-pixel segmentation method, utilizing five-dimensional CIELAB color and local k-means optimization within the image domain, starting from regularly positioned seeds [73]. It has attracted attention for its application in remote-sensing image segmentation due to its simplicity, processing speed, efficient boundary adherence, and limited adjacency. The Simple Non-Iterative Clustering (SNIC) super-pixel segmentation algorithm [74], an improvement on the SLIC algorithm, is non-iterative, requires less memory, and enforces connectivity from the start, satisfying the requirements of good boundary adhesion and limited adjacency. SNIC first establishes K uniform seed grids over the remote-sensing image. The super-pixel centroid of the k-th grid is expressed as C[k] = {Xk, Ck}, where Xk and Ck denote the spatial position and CIELAB color, respectively. These K centroids initialize a priority queue ordered by smallest distance to a super-pixel centroid. Pixels with the shortest distance are dequeued and merged into the appropriate super-pixels. The distance between the k-th super-pixel centroid C[k] and the j-th candidate pixel is given by dj,k, which is derived as follows.
$$d_{j,k} = \sqrt{\frac{\left\| X_j - X_k \right\|_2^2}{s} + \frac{\left\| C_j - C_k \right\|_2^2}{m}}$$
where s and m are normalization factors for spatial and color distances, respectively. The main parameters of SNIC are super-pixel seed spacing and compactness factor, which refer to the distance between super-pixel centroids and the shape compactness of super pixels, respectively.
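The distance in Formula (2) can be written directly in Python; the coordinates, colors, and normalization factors in the test values are arbitrary illustrations:

```python
import math

def snic_distance(x_j, x_k, c_j, c_k, s, m):
    """SNIC distance d_{j,k}: spatial term normalized by s,
    CIELAB color term normalized by m, per Formula (2)."""
    spatial = sum((a - b) ** 2 for a, b in zip(x_j, x_k)) / s
    color = sum((a - b) ** 2 for a, b in zip(c_j, c_k)) / m
    return math.sqrt(spatial + color)
```

Raising s relative to m makes color differences dominate, producing less compact but better boundary-adhering super pixels, which is the trade-off the compactness parameter controls.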
This study utilizes the Google Earth Engine (GEE) remote-sensing processing platform to perform super-pixel segmentation. The details and structure of high-resolution images are conducive to segmentation, so we use 56 bands of the optimal feature subset for segmentation [75]. The selection of parameters for the super-pixel segmentation algorithm is vital. If the super-pixel segmentation size is too large, a super pixel may contain multiple categories, leading to a decrease in classification accuracy. Conversely, if the super-pixel size is too small, the segmentation efficiency is reduced. Additionally, compactness must also be balanced: the larger the compactness value, the worse the boundary connectivity [76]. After multiple experiments, this study has set the following five parameters for implementing SNIC: size is 5, compactness is 0, connectivity is 8, neighborhood size is 256, and seed is null. The result of the SNIC super-pixel segmentation is illustrated in Figure 8.

3.4. Classification Method

In this study, the Google Earth Engine (GEE) remote-sensing cloud platform was utilized to implement three machine-learning classifiers: SVM, CART, and RF. Random Forest (RF) [77] is an efficient machine-learning algorithm that performs classification by constructing multiple decision trees. Its fundamental idea is to conduct random sampling and feature selection on the training data, create multiple decision trees, and then integrate them for classification, thereby enhancing the model’s accuracy and stability. Classification and Regression Trees (CART) [78] selects training samples and their corresponding feature variables and classifies them by generating a classification rule tree through iterative binary splitting of the training samples. The Support Vector Machine (SVM) [79] delivers effective classification results even from intricate and noisy data. It originates from statistical learning theory and separates classes with a decision boundary that maximizes the margin between them. This boundary is commonly referred to as the optimal hyperplane, with the data points nearest to the hyperplane termed support vectors. Through multiple experiments, the number of trees for RF was set to 345, while other parameters were kept at their default values. SVM used LibSVM for classification, with kernelType set to RBF, gamma set to 0.5, and cost set to 50. The maxNodes for CART was set to 19.
The classification principles of the three classifiers are as follows:
Random Forest (RF) is an ensemble learning method that constructs multiple decision trees and amalgamates their outputs to provide a more accurate and stable classification prediction. Compared to a single decision tree, which might be prone to overfitting, the ensemble nature of RF aids in better generalization. For classification tasks, assuming we have C categories, for an input sample x, the voting result from each tree is denoted as Ti(x) (belonging to one of the C categories). The output classification of the Random Forest is:
$$Y(x) = \mathrm{mode}\left\{ T_1(x), T_2(x), \ldots, T_n(x) \right\}$$
Here, Y(x) is the predicted classification of input x, n is the total number of decision trees, Ti(x) is the classification prediction of the i-th tree for input x, and the mode function returns the most frequently occurring category among its input values.
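Formula (3) amounts to a majority vote over the trees' predictions. A minimal sketch, with three hypothetical single-split classifiers (and made-up thresholds) standing in for trained decision trees:

```python
from collections import Counter

def rf_predict(x, trees):
    """Majority vote Y(x) = mode{T_1(x), ..., T_n(x)} over the trees."""
    votes = [t(x) for t in trees]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical stump classifiers over feature values of a pixel object:
trees = [
    lambda x: 'eucalyptus' if x['NDVI'] > 0.6 else 'non-vegetation',
    lambda x: 'eucalyptus' if x['VH'] > -15 else 'non-eucalyptus',
    lambda x: 'eucalyptus' if x['EVI'] > 0.4 else 'non-eucalyptus',
]
```

Because each tree is trained on a bootstrap sample with a random feature subset, the vote averages out individual trees' errors, which is why RF generalizes better than any single tree.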
The Classification and Regression Tree (CART) is a decision-tree-structured model. It constructs a binary tree by iteratively partitioning the feature space into exclusive regions and assigning a class label to each region. At each node, the algorithm evaluates all potential splits (considering all features and their possible values) and chooses an optimal split point based on a specified criterion, typically a measure of impurity reduction such as the Gini impurity. Once the best split is determined, the data are divided into two subsets, and each subset further undergoes recursive splitting at its node. This process continues until certain stopping conditions are met, such as the number of data points in a node falling below a threshold or the purity reaching a specified level. To prevent overfitting, CART typically involves a post-pruning process: the algorithm initially builds a very expansive tree and then progressively prunes some leaves, streamlining the model to enhance its generalization on new data. The formula for calculating the Gini impurity is:
Gini(p) = 1 − Σ_{i=1}^{C} p_i²
Among them, C is the total number of categories, and p_i is the proportion of the i-th category in the node.
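As a quick numerical illustration of this formula (hypothetical node labels, not study data):

```python
import numpy as np

def gini_impurity(labels):
    """Gini(p) = 1 - sum_i p_i^2 over the class proportions in a node."""
    labels = np.asarray(labels)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

# A pure node has impurity 0; a perfectly mixed two-class node has 0.5.
print(gini_impurity([1, 1, 1, 1]))  # 0.0
print(gini_impurity([0, 0, 1, 1]))  # 0.5
```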
The Support Vector Machine (SVM) is a supervised learning model primarily tailored for classification and regression tasks. Its fundamental concept revolves around discovering a hyperplane that maximizes the margin between two classes. The crux of SVM’s efficacy lies in the choice of its kernel function, with performance highly contingent upon selecting an apt kernel. In this study, the Radial Basis Function (RBF) kernel was chosen. This kernel employs a nonlinear mapping function to project the input sample vectors into a high-dimensional feature space, in which an optimal classification hyperplane is constructed to facilitate classification. Thus, in contrast to the linear kernel, it can accommodate scenarios where the relationship between class labels and attributes is nonlinear. The number of hyperparameters also affects the complexity of model selection, and the RBF kernel has fewer hyperparameters than the polynomial kernel. A key point is that 0 < K_ij ≤ 1, unlike the polynomial kernel, whose value may tend to infinity (when γx_iᵀx_j + r > 1) or to zero (when γx_iᵀx_j + r < 1) at higher orders. The RBF kernel is therefore numerically more stable and robust than such alternatives. The RBF kernel is calculated as:
K(x, x′) = exp(−γ‖x − x′‖²)
Among them, x and x’ are data points, and γ is a positive constant that determines the distribution of data points mapped to high-dimensional space.
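The kernel value and the property 0 < K ≤ 1 noted above can be checked numerically (a sketch using the study’s γ = 0.5; the sample points are made up):

```python
import numpy as np

def rbf_kernel(x, x_prime, gamma=0.5):
    """K(x, x') = exp(-gamma * ||x - x'||^2); gamma matches the study's setting."""
    x, x_prime = np.asarray(x, float), np.asarray(x_prime, float)
    return np.exp(-gamma * np.sum((x - x_prime) ** 2))

a, b = [1.0, 2.0], [2.0, 0.0]
k = rbf_kernel(a, b)
print(k)                 # exp(-0.5 * 5)
print(rbf_kernel(a, a))  # identical points -> K = 1
assert 0 < k <= 1        # RBF values always lie in (0, 1]
```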
Detailed classification schemes are presented in Table 5.

4. Result Analysis and Accuracy Verification

4.1. Result Analysis

Comparative analysis was conducted between UAV (Unmanned Aerial Vehicle) imagery and the classification results. A schematic of the UAV imagery is shown in Figure 9. The UAV used for image capture was the Phantom 4 RTK, equipped with a 1-inch CMOS sensor with 20 million effective pixels, yielding a spatial resolution of 0.05 m.
Figure 10 showcases the results obtained from the object-oriented SNIC super-pixel segmentation approach, combined with the SLPSO–RFE feature selection technique and, subsequently, the SVM, CART, and RF classifiers on the integrated dataset. According to the research of Ziyan, W., et al. [63], non-eucalyptus plantation areas in Luzhai County are predominantly located in the eastern and northwestern parts, with the remaining forested areas being eucalyptus plantations. Within the first set of experiments (Schemes 1–3), the results from Schemes 1 and 2 are closely similar, with significant misclassifications of eucalyptus vegetation. For Scheme 3, the misclassifications in the eastern and northwestern regions are reduced, indicating that RF, as an ensemble-learning model, outperforms the other two machine-learning models. In the second set of experiments (Schemes 4–6), Scheme 4 exhibited a higher rate of misclassification, whereas Schemes 5 and 6 produced more comparable results; within this group, the RF model again delivered superior performance, while the CART model demonstrated an enhanced classification capability. In the third experimental set (Schemes 7–9), the classification results from Schemes 7 and 9 were commendable, but Scheme 8 exhibited significant misclassification. Taken as a whole, however, none of the nine schemes suffered from widespread misclassification across the study area. In terms of classifier performance, the RF model consistently emerged as the top performer in each experimental set, while the SVM and CART models exhibited their respective strengths.
Figure 11 and Figure 12 present the validation results derived from UAV imagery. The red vector boundary demarcates the spatial distribution of the eucalyptus forest, a commercially cultivated forest. The patches in the classification results represent the eucalyptus plantation areas classified by each respective scheme. From the UAV imagery, the two areas at points A and C in Figure 9 were selected for detailed analysis. In Area 1, Schemes 1–3 show a significant number of omissions and misclassifications, while the classification results from Schemes 4–9 appear more complete. For Area 2, Schemes 1–6 exhibit numerous omissions and misclassifications, whereas the results from Schemes 7–9 are more complete; however, there are notable omissions in the central part of Scheme 7. The outcomes from Schemes 8 and 9 are closely aligned, standing out as the two best among the nine schemes.

4.2. Accuracy Verification Based on Confusion Matrix

This study evaluates the accuracy of the classification results of the nine schemes by constructing a confusion matrix and computing Overall Accuracy (OA), the Kappa coefficient, F1-Score, User’s Accuracy (UA), and Producer’s Accuracy (PA). User’s Accuracy is the proportion of reference points assigned to a particular category on the classification map that are correctly classified. Producer’s Accuracy, in contrast, is the likelihood that the actual ground reference data for a particular category are correctly classified. Misclassification error refers to the probability of incorrect categorization, while omission error is the probability of overlooking a category during classification. Overall Accuracy is the ratio of the total number of correctly classified reference points over all land cover categories to the total number of reference points; in other words, within the confusion matrix, it is the sum of the diagonal values divided by the total number of samples. The Kappa coefficient reflects how much the classification result surpasses random classification, taking into account two types of consistency: the agreement between the automatic classification and the reference data, and the congruence between the sampling and the reference classification. Generally, the Kappa value falls between 0 and 1, with a higher value indicating superior classification accuracy. The F1-Score, a measure of test accuracy, harmonizes precision and recall. The results of this analysis are displayed in Table 6, Table 7 and Table 8.
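All of these metrics derive from a single confusion matrix; the following sketch shows the computations for a hypothetical 3-class matrix (the counts are invented, not the study’s):

```python
import numpy as np

# Rows = reference (ground truth), columns = classification result.
# Classes: eucalyptus, non-eucalyptus vegetation, non-vegetation.
cm = np.array([[50,  3,  1],
               [ 4, 40,  2],
               [ 0,  1, 30]], dtype=float)

n = cm.sum()
oa = np.trace(cm) / n                 # Overall Accuracy
pa = np.diag(cm) / cm.sum(axis=1)     # Producer's Accuracy (per class)
ua = np.diag(cm) / cm.sum(axis=0)     # User's Accuracy (per class)
f1 = 2 * pa * ua / (pa + ua)          # per-class F1-Score

# Kappa: agreement beyond what chance alone would produce.
pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2
kappa = (oa - pe) / (1 - pe)

print(round(oa, 3), round(kappa, 3), np.round(f1, 3))
```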
Figure 13 and Figure 14 visually present the results of the confusion matrix. As shown in Figure 13, Schemes 1–3, which are based on single-phase data, have Overall Accuracies (OA) of 80.23%, 82.42%, and 83.43%, with Kappa coefficients of 0.71, 0.75, and 0.75, respectively; these schemes form the group with the lowest classification accuracy among all the schemes. Schemes 4–6, based on multi-phase data, achieve OAs of 85.48%, 88%, and 89.43%, with Kappa coefficients of 0.79, 0.81, and 0.84, respectively. Judging from the OA and Kappa coefficients, all three classification models perform better with multi-phase data than with single-phase data, indicating that integrating multi-phase data from different seasons can significantly enhance classification accuracy. Lastly, Schemes 7–9, which employ ReliefF–RFE and SLPSO–RFE for feature selection on multi-phase data, achieve OAs of 95.48%, 96%, and 97.97%, and Kappa coefficients of 0.94, 0.94, and 0.96, respectively. This suggests a substantial improvement in the RFE search for the optimal feature subset when combined with SLPSO.
In addition to OA and the Kappa coefficient, we also computed Producer’s Accuracy (PA) and User’s Accuracy (UA) from the confusion matrix. As can be seen from Figure 14, for Schemes 7–9, the PA and UA values for eucalyptus, non-eucalyptus, and non-vegetation are all above 90%; furthermore, the PA values for eucalyptus and non-eucalyptus, as well as the UA value for non-vegetation, all reach 100%. This suggests that the optimal feature subsets obtained by ReliefF–RFE and SLPSO–RFE achieve satisfactory results across all three classifiers. Among these, RF yields the best results in the SLPSO search model and classification extraction, followed by CART, with SVM being the least effective of the three. For Schemes 4–6, the PA and UA values for eucalyptus, non-eucalyptus, and non-vegetation decline significantly compared to the previous group. While the UA values for non-vegetation in Schemes 4 and 5 and the PA and UA values for Scheme 6 are above 90%, most of the index values are below 90%, with several PA and UA values falling below 70%. This is likely because, without SLPSO integration, RFE was restricted to a single classification model, and running it under default parameters noticeably degraded its performance. The first group, consisting of Schemes 1–3, performed the worst of the three groups, with some PA and UA values falling below 70%; the number of PA and UA values below 80% is markedly higher than in the second group. The underlying reason is that the difference in eucalyptus reflectance within a year is most pronounced between summer and winter. Integrating multi-temporal data from these two contrasting seasons enhances the discriminative capacity for terrestrial objects, and augmenting the data dimensions and seeking optimal feature subsets within this higher-dimensional space yields more robust outcomes.

4.3. Comparative Analysis Based on UAV Image Verification

The UAV images were compared with the classification results, and the eucalyptus area within each verification region was calculated. The classification verification accuracy is obtained as the ratio of the classified area to the UAV-derived verification area; the results are shown in Table 9. The classification results of Schemes 7–9 all agree with the UAV verification area at levels above 90%, demonstrating the reliability of the multi-temporal data and SLPSO-RFE method proposed in this paper.
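The verification-accuracy calculation is a simple area ratio; for example, with hypothetical areas (not the values in Table 9):

```python
# Hypothetical areas in hectares; the ratio is the per-scheme
# verification accuracy reported in Table 9.
classified_ha = 182.4   # eucalyptus area from a classification scheme
uav_ha = 195.0          # UAV-delineated reference area

accuracy_pct = 100 * classified_ha / uav_ha
print(round(accuracy_pct, 2))  # 93.54
```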

5. Discussion

The research indicates that the synergy of multi-temporal data, feature selection algorithms, object-oriented techniques, and machine learning can significantly enhance the classification accuracy of Eucalyptus plantations (see Table 6, Table 7 and Table 8). Distinguishing Eucalyptus from other vegetation proves challenging within the 10 m resolution multispectral imagery of Sentinel-2, making sample selection particularly arduous. However, by integrating high-resolution UAV imagery (0.05 m resolution) and clear Google Maps imagery (0.5 m resolution), the authors were able to select samples reliably. These samples then informed the classification of Eucalyptus plantations within the 10 m resolution imagery from both Sentinel-1 and Sentinel-2. Leveraging SLPSO-RFE for multidimensional feature selection, the findings show variable overall accuracy (OA) across the three machine learning models, although the results were closely aligned; among them, the ensemble learning model RF outperformed the other two classifiers. R. Zhou et al. [80] sought the optimal feature set from multi-dimensional features through the RF–RFE algorithm. However, the search results of RFE are intricately tied to the classification model and its hyperparameters, so determining the optimal model and the best hyperparameter combination for a given classification scenario is of paramount significance. This study also shows that incorporating SLPSO into RFE for optimal machine learning model and hyperparameter selection (see Table 9) elevates the OA across all three machine learning models compared to using a single model with default parameters (see Table 8).
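The dependence of RFE on the wrapped estimator can be seen in a minimal scikit-learn sketch (synthetic features; this illustrates plain RFE, not the authors’ SLPSO-RFE pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

# RFE repeatedly fits the wrapped estimator and drops the weakest
# features. A different estimator (or different hyperparameters) can
# yield a different "optimal" subset -- the gap that SLPSO addresses
# by searching over models and hyperparameters.
selector = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
               n_features_to_select=5).fit(X, y)
print(selector.support_.sum())  # number of features retained
```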
Deep learning algorithms have been successfully employed in the classification of Eucalyptus plantations. J.O.N. Firigato et al. [81], using GEE combined with deep-learning models, produced maps of Eucalyptus cultivation in the Brazilian Savanna. However, deep learning typically demands a larger volume of labelled training data to boost model performance, and the complexity of these models often entails substantial computational cost. In contrast, our approach, harnessing object-oriented and machine learning techniques, yields satisfactory classification results in significantly less time with a limited number of classification samples. Across three experimental scenarios with varied feature combinations, RF consistently achieved the highest overall accuracy (OA), with SVM and CART producing closely aligned OA results. This is consistent with previous research: for example, Xiao Shang et al. [82] used RF and SVM to classify forest vegetation, obtaining OAs of 94.7% and 84.5%, respectively. Although the classification accuracies varied among the three algorithms, this research confirms the capability of SVM, CART, and RF for Eucalyptus plantation delineation. This conclusion resonates with the outcomes of Salvatore Praticò et al. [83], with our study achieving an even higher OA.
By partitioning the multi-dimensional data into distinct scenarios, this study examined the impact of various features on the accuracy of Eucalyptus plantation classification. The research combined median-composite imagery from the summer and winter seasons of Sentinel-1 and Sentinel-2, along with their derived data, to construct nine classification scenarios. Among these, while multi-temporal data boosted the overall accuracy (OA) of all three models compared to single-temporal data within the same scenario, the improvement was modest, suggesting that the seasonal variations of eucalyptus, while present, are not dramatic. The feature importance ranking derived from SLPSO-RFE revealed that red-edge features (both the red-edge bands and indices) and texture features consistently held high significance, implying that they play a crucial role in model construction. This aligns with the findings of Wang Ziyi et al. [67] and of Yaoliang Chen et al. [84]. Wang Ziyi and his team enhanced a decision tree model by incorporating red-edge features, leading to an 11.25% increase in OA compared to models without them. Yaoliang Chen and colleagues, using GLCM-derived texture features, established through RF and CART importance analysis that these texture features were highly significant. The accuracy achieved in this study surpasses that of the aforementioned research.
As depicted in Figure 15, the backscatter intensity for 195 Eucalyptus samples from 2020–2021 was analyzed on Sentinel-1’s VH and VV bands, focusing on statistical features such as the median and mean intensities. The analysis reveals a seasonal variation in the backscatter intensity of eucalyptus: during the summer months of June to August, the backscatter intensity peaks, whereas it reaches its lowest point in winter. The scattering intensities in spring and autumn do not differ significantly; however, there is a consistent rise in spring and a gradual decline in autumn. This suggests that the extreme seasonal fluctuations in eucalyptus backscatter intensity occur in summer and winter, corroborating the findings of H. Qiao et al. [11]. The SAR features used in this study include only backscattering coefficients and texture features, not polarization features. The research of Canran Tu et al. [64] has confirmed that the SAR polarization matrix can effectively improve classification accuracy. However, although GEE offers massive data storage and efficient cloud computing, it does not host the SLC data required for polarization-matrix calculations; therefore, this paper does not use the polarization features of Sentinel-1.
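The seasonal statistics behind Figure 15 amount to grouping per-sample backscatter by season and taking the median and mean; a sketch with invented VH values (in dB) that merely mimic the reported summer peak and winter minimum:

```python
import pandas as pd

# Illustrative VH backscatter (dB) for eucalyptus samples; the values
# are invented, chosen only to echo the summer-high / winter-low pattern.
obs = pd.DataFrame({
    "season": ["spring", "summer", "autumn", "winter"] * 3,
    "vh_db":  [-16.0, -12.1, -15.5, -23.8,
               -15.6, -11.9, -16.2, -24.3,
               -16.4, -12.4, -15.8, -24.0],
})

stats = obs.groupby("season")["vh_db"].agg(["median", "mean"])
print(stats)
```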

6. Conclusions

The precise identification of Eucalyptus plantations holds significant implications for global forest management and ecological conservation. In this study, we harness the synergistic classification of multi-temporal Sentinel-1 and Sentinel-2 (S1 and S2) data. We employed SLPSO-RFE for feature selection, integrating object-oriented and machine learning approaches for the extraction of Eucalyptus plantations. The study reveals that the proposed SLPSO-RFE feature selection technique effectively identifies the optimal feature combination. Compared to other feature selection strategies, SLPSO-RFE can boost the Overall Accuracy (OA) by up to 10%. When compared against the classification results of multi-temporal data with ReliefF–RFE feature selection, the OA for SVM, CART, and RF using SLPSO-RFE increased by 10%, 8%, and 8.54%, respectively. Against the results from single-temporal data with ReliefF–RFE, the gains are even more pronounced: 15.25%, 13.58%, and 14.54%, respectively. Using multi-temporal rather than single-temporal data improved the classification outcomes of the SVM, CART, and RF models by 5.25%, 5.58%, and 6%, respectively. Moreover, among the four seasons, the backscatter coefficients of Sentinel-1 show the greatest disparity between summer and winter: the backscatter intensities for VV and VH polarizations stand at −5 dB and −12 dB in summer, shifting to −13 dB and −24 dB in winter. Consequently, this research underlines that relying on summer and winter S1 and S2 imagery, as opposed to solely utilizing summer imagery, can enhance classification precision. Comparing the classification of Eucalyptus plantations against UAV imagery revealed that strategies based on single-temporal data (Scenarios 1–3) showed more misclassifications and omissions in both areas studied.
Scenarios based on multi-temporal data (Scenarios 4–6) displayed fewer such inaccuracies, and Scenarios 7–9 emerged as the most effective strategies in this study. UAV-based accuracy validation underscores that, within the same classification scenario, SLPSO-RFE feature selection substantially improves accuracy: SVM’s verification accuracy improved by 3.47% and 7.5% in Areas 1 and 2, respectively, CART’s by 3.37% and 11.88%, and RF’s by 4% and 10.54%. The UAV validation results align with the conclusions derived from the confusion matrix. Within the same scenario, classifications based on multi-temporal data generally outperformed those based on single-temporal data; it is worth noting, however, that SVM’s and CART’s verification accuracy in Area 2 slightly decreased in this comparison, whereas RF’s accuracy continued to rise. The UAV imagery-based accuracy verification further attests to the reliability and applicability of our methodology. Among the three machine learning models, RF achieved the highest classification accuracy, evidencing superior robustness and stability. In the future, we aspire to utilize fully polarized SAR imagery alongside higher spatial resolution optical imagery to investigate the polarization characteristics of SAR imagery and the applicability of high spatial resolution optical imagery to classifying eucalyptus plantations.

Author Contributions

Conceptualization, C.R.; methodology, X.L. and C.R.; software, X.L.; validation, X.L., Y.L. and W.Y.; formal analysis, X.L. and C.R.; investigation, all authors; resources, X.L.; data curation, X.L.; writing—original draft preparation, X.L. and C.R.; writing—review and editing, W.Y. and X.L.; visualization, X.L. and A.Y.; supervision, C.R.; funding acquisition, C.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 42064003).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, Y.; Wang, X. Geographical Spatial Distribution and Productivity Dynamic Change of Eucalyptus Plantations in China. Sci. Rep. 2021, 11, 19764. [Google Scholar] [CrossRef] [PubMed]
  2. de Oliveira, B.R.; da Silva, A.A.P.; Teodoro, L.P.R.; de Azevedo, G.B.; Azevedo, G.T.d.O.S.; Baio, F.H.R.; Sobrinho, R.L.; da Silva Junior, C.A.; Teodoro, P.E. Eucalyptus Growth Recognition Using Machine Learning Methods and Spectral Variables. For. Ecol. Manag. 2021, 497, 119496. [Google Scholar] [CrossRef]
  3. Whitehead, D.; Beadle, C.L. Physiological Regulation of Productivity and Water Use in Eucalyptus: A Review. For. Ecol. Manag. 2004, 193, 113–140. [Google Scholar] [CrossRef]
  4. Deng, X.; Guo, S.; Sun, L.; Chen, J. Identification of Short-Rotation Eucalyptus Plantation at Large Scale Using Multi-Satellite Imageries and Cloud Computing Platform. Remote Sens. 2020, 12, 2153. [Google Scholar] [CrossRef]
  5. Batish, D.R.; Singh, H.P.; Kohli, R.K.; Kaur, S. Eucalyptus Essential Oil as a Natural Pesticide. For. Ecol. Manag. 2008, 256, 2166–2174. [Google Scholar] [CrossRef]
  6. Sibanda, M.; Buthelezi, S.; Ndlovu, H.S.; Mothapo, M.C.; Mutanga, O. Mapping the Eucalyptus Spp Woodlots in Communal Areas of Southern Africa Using Sentinel-2 Multi-Spectral Imager Data for Hydrological Applications. Phys. Chem. Earth Parts A/B/C 2021, 122, 102999. [Google Scholar] [CrossRef]
  7. Oliveira, D.; Martins, L.; Mora, A.; Damásio, C.; Caetano, M.; Fonseca, J.; Ribeiro, R.A. Data Fusion Approach for Eucalyptus Trees Identification. Int. J. Remote Sens. 2021, 42, 4087–4109. [Google Scholar] [CrossRef]
  8. Bayle, G. Ecological and Social Impacts of Eucalyptus Tree Plantation on the Environment. J. Biodivers. Conserv. Bioresour. Manag. 2019, 5, 93–104. [Google Scholar] [CrossRef]
  9. Hughes, L.; Cawsey, E.M.; Westoby, M. Climatic Range Sizes of Eucalyptus Species in Relation to Future Climate Change. Glob. Ecol. Biogeogr. Lett. 1996, 5, 23–29. [Google Scholar] [CrossRef]
  10. Le Maire, G.; Marsden, C.; Nouvellon, Y.; Grinand, C.; Hakamada, R.; Stape, J.-L.; Laclau, J.-P. MODIS NDVI Time-Series Allow the Monitoring of Eucalyptus Plantation Biomass. Remote Sens. Environ. 2011, 115, 2613–2625. [Google Scholar] [CrossRef]
  11. Qiao, H.; Wu, M.; Shakir, M.; Wang, L.; Kang, J.; Niu, Z. Classification of Small-Scale Eucalyptus Plantations Based on NDVI Time Series Obtained from Multiple High-Resolution Datasets. Remote Sens. 2016, 8, 117. [Google Scholar] [CrossRef]
  12. Liao, C.; Wang, J.; Shan, B.; Shang, J.; Dong, T.; He, Y. Near Real-Time Detection and Forecasting of within-Field Phenology of Winter Wheat and Corn Using Sentinel-2 Time-Series Data. ISPRS J. Photogramm. Remote Sens. 2023, 196, 105–119. [Google Scholar] [CrossRef]
  13. Chen, Z.; Zhao, S. Automatic Monitoring of Surface Water Dynamics Using Sentinel-1 and Sentinel-2 Data with Google Earth Engine. Int. J. Appl. Earth Obs. 2022, 113, 103010. [Google Scholar] [CrossRef]
  14. Martinis, S.; Groth, S.; Wieland, M.; Knopp, L.; Rättich, M. Towards a Global Seasonal and Permanent Reference Water Product from Sentinel-1/2 Data for Improved Flood Mapping. Remote Sens. Environ. 2022, 278, 113077. [Google Scholar] [CrossRef]
  15. Tuvdendorj, B.; Zeng, H.; Wu, B.; Elnashar, A.; Zhang, M.; Tian, F.; Nabil, M.; Nanzad, L.; Bulkhbai, A.; Natsagdorj, N. Performance and the Optimal Integration of Sentinel-1/2 Time-Series Features for Crop Classification in Northern Mongolia. Remote Sens. 2022, 14, 1830. [Google Scholar] [CrossRef]
  16. Li, R.; Xia, H.; Zhao, X.; Guo, Y. Mapping Evergreen Forests Using New Phenology Index, Time Series Sentinel-1/2 and Google Earth Engine. Ecol. Indic. 2023, 149, 110157. [Google Scholar] [CrossRef]
  17. Bjerreskov, K.S.; Nord-Larsen, T.; Fensholt, R. Classification of Nemoral Forests with Fusion of Multi-Temporal Sentinel-1 and 2 Data. Remote Sens. 2021, 13, 950. [Google Scholar] [CrossRef]
  18. Persson, M.; Lindberg, E.; Reese, H. Tree Species Classification with Multi-Temporal Sentinel-2 Data. Remote Sens. 2018, 10, 1794. [Google Scholar] [CrossRef]
  19. Schriever, J.R.; Congalton, R.G. Evaluating Seasonal Variability as an Aid to Cover-Type Mapping from Landsat Thematic Mapper Data in the Northeast. Photogramm. Eng. Remote Sens. 1995, 61, 321–327. [Google Scholar]
  20. Mickelson, J.G.; Civco, D.L.; Silander, J.A. Delineating Forest Canopy Species in the Northeastern United States Using Multi-Temporal TM Imagery. Photogramm. Eng. Remote Sens. 1998, 64, 891–904. [Google Scholar]
  21. Zhu, X.; Liu, D. Accurate Mapping of Forest Types Using Dense Seasonal Landsat Time-Series. ISPRS J. Photogramm. Remote Sens. 2014, 96, 1–11. [Google Scholar] [CrossRef]
  22. Liang, J.; Zheng, Z.; Xia, S.; Zhang, X.; Tang, Y. Crop Recognition and Evaluationusing Red Edge Features of GF-6 Satellite. Yaogan Xuebao/J. Remote Sens. 2020, 24, 1168–1179. [Google Scholar] [CrossRef]
  23. Wu, N.; Crusiol, L.G.T.; Liu, G.; Wuyun, D.; Han, G. Comparing Machine Learning Algorithms for Pixel/Object-Based Classifications of Semi-Arid Grassland in Northern China Using Multisource Medium Resolution Imageries. Remote Sens. 2023, 15, 750. [Google Scholar] [CrossRef]
  24. Bindhu, J.S.; Pramod, K.V. Texture and Pixel-Based Satellite Image Classification Using Cellular Automata. Multimed. Tools Appl. 2023, 82, 9913–9937. [Google Scholar] [CrossRef]
  25. Wang, M.; Mao, D.; Xiao, X.; Song, K.; Jia, M.; Ren, C.; Wang, Z. Interannual Changes of Coastal Aquaculture Ponds in China at 10-m Spatial Resolution during 2016–2021. Remote Sens. Environ. 2023, 284, 113347. [Google Scholar] [CrossRef]
  26. Zhao, L.; Wang, S.; Xu, Y.; Sun, W.; Shi, L.; Yang, J.; Dash, J. Evaluating the Capability of Sentinel-1 Data in the Classification of Canola and Wheat at Different Growth Stages and in Different Years. Remote Sens. 2023, 15, 2731. [Google Scholar] [CrossRef]
  27. Cheng, K.; Su, Y.; Guan, H.; Tao, S.; Ren, Y.; Hu, T.; Ma, K.; Tang, Y.; Guo, Q. Mapping China’s Planted Forests Using High Resolution Imagery and Massive Amounts of Crowdsourced Samples. ISPRS J. Photogramm. Remote Sens. 2023, 196, 356–371. [Google Scholar] [CrossRef]
  28. Rizayeva, A.; Nita, M.D.; Radeloff, V.C. Large-Area, 1964 Land Cover Classifications of Corona Spy Satellite Imagery for the Caucasus Mountains. Remote Sens. Environ. 2023, 284, 113343. [Google Scholar] [CrossRef]
  29. Li, X.; Meng, Q.; Gu, X.; Jancso, T.; Yu, T.; Wang, K.; Mavromatis, S. A Hybrid Method Combining Pixel-Based and Object-Oriented Methods and Its Application in Hungary Using Chinese HJ-1 Satellite Images. Int. J. Remote Sens. 2013, 34, 4655–4668. [Google Scholar] [CrossRef]
  30. Li, B.; Gong, A.; Chen, Z.; Pan, X.; Li, L.; Li, J.; Bao, W. An Object-Oriented Method for Extracting Single-Object Aquaculture Ponds from 10 m Resolution Sentinel-2 Images on Google Earth Engine. Remote Sens. 2023, 15, 856. [Google Scholar] [CrossRef]
  31. Trinh, X.T.; Nguyen, L.D.; Takeuchi, W. Sentinel-2 Mapping of a Turbid Intertidal Seagrass Meadow in Southern Vietnam. Geocarto Int. 2023, 38, 2186490. [Google Scholar] [CrossRef]
  32. Zhang, S.; Xu, M.; Zhou, J.; Jia, S. Unsupervised Spatial-Spectral Cnn-Based Feature Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5524617. [Google Scholar] [CrossRef]
  33. Vizzari, M. PlanetScope, Sentinel-2, and Sentinel-1 Data Integration for Object-Based Land Cover Classification in Google Earth Engine. Remote Sens. 2022, 14, 2628. [Google Scholar] [CrossRef]
  34. Castelo-Cabay, M.; Piedra-Fernandez, J.A.; Ayala, R. Deep Learning for Land Use and Land Cover Classification from the Ecuadorian Paramo. Int. J. Digit. Earth 2022, 15, 1001–1017. [Google Scholar] [CrossRef]
  35. Zhang, S.; Tang, D.; Li, N.; Jia, X.; Jia, S. Superpixel-Guided Variable Gabor Phase Coding Fusion for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5523816. [Google Scholar] [CrossRef]
  36. Yang, H.; Liu, X.; Chen, Q.; Cao, Y. Mapping Dongting Lake Wetland Utilizing Time Series Similarity, Statistical Texture, and Superpixels With Sentinel-1 SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8235–8244. [Google Scholar] [CrossRef]
  37. Liu, Y.; Zhang, H.; Zhang, M.; Cui, Z.; Lei, K.; Zhang, J.; Yang, T.; Ji, P. Vietnam Wetland Cover Map: Using Hydro-Periods Sentinel-2 Images and Google Earth Engine to Explore the Mapping Method of Tropical Wetland. Int. J. Appl. Earth Obs. Geoinf. 2022, 115, 103122. [Google Scholar] [CrossRef]
  38. Yang, L.; Wang, L.; Abubakar, G.A.; Huang, J. High-Resolution Rice Mapping Based on SNIC Segmentation and Multi-Source Remote Sensing Images. Remote Sens. 2021, 13, 1148. [Google Scholar] [CrossRef]
  39. Al-Khafaji, S.L.; Zhou, J.; Zia, A.; Liew, A.W.-C. Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images. IEEE Trans. Image Process. 2017, 27, 837–850.
  40. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.R.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.T. A Review of the Application of Optical and Radar Remote Sensing Data Fusion to Land Use Mapping and Monitoring. Remote Sens. 2016, 8, 70.
  41. Li, B.; Zhang, H.; Xu, F. Water Extraction in High Resolution Remote Sensing Image Based on Hierarchical Spectrum and Shape Features. In Proceedings of the IOP Conference Series: Earth and Environmental Science, Beijing, China, 22–26 April 2014; IOP Publishing: Bristol, UK, 2014; Volume 17, p. 012123.
  42. Fan, Y.; Chen, B.; Huang, W.; Liu, J.; Weng, W.; Lan, W. Multi-Label Feature Selection Based on Label Correlations and Feature Redundancy. Knowl.-Based Syst. 2022, 241, 108256.
  43. Chen, Z.; Gu, S.; Lu, G.; Xu, D. Exploiting Intra-Slice and Inter-Slice Redundancy for Learning-Based Lossless Volumetric Image Compression. IEEE Trans. Image Process. 2022, 31, 1697–1707.
  44. Jin, X.-B.; Wang, Z.-Y.; Gong, W.-T.; Kong, J.-L.; Bai, Y.-T.; Su, T.-L.; Ma, H.-J.; Chakrabarti, P. Variational Bayesian Network with Information Interpretability Filtering for Air Quality Forecasting. Mathematics 2023, 11, 837.
  45. Zhang, P.; Li, T.; Yuan, Z.; Luo, C.; Wang, G.; Liu, J.; Du, S. A Data-Level Fusion Model for Unsupervised Attribute Selection in Multi-Source Homogeneous Data. Inf. Fusion 2022, 80, 87–103.
  46. Vommi, A.M.; Battula, T.K. A Hybrid Filter-Wrapper Feature Selection Using Fuzzy KNN Based on Bonferroni Mean for Medical Datasets Classification: A COVID-19 Case Study. Expert Syst. Appl. 2023, 218, 119612.
  47. Agrawal, U.; Rohatgi, V.; Katarya, R. Normalized Mutual Information-Based Equilibrium Optimizer with Chaotic Maps for Wrapper-Filter Feature Selection. Expert Syst. Appl. 2022, 207, 118107.
  48. Pashaei, E.; Pashaei, E. An Efficient Binary Chimp Optimization Algorithm for Feature Selection in Biomedical Data Classification. Neural Comput. Appl. 2022, 34, 6427–6451.
  49. Tiwari, A.; Chaturvedi, A. A Hybrid Feature Selection Approach Based on Information Theory and Dynamic Butterfly Optimization Algorithm for Data Classification. Expert Syst. Appl. 2022, 196, 116621.
  50. Robnik-Šikonja, M.; Kononenko, I. Theoretical and Empirical Analysis of ReliefF and RReliefF. Mach. Learn. 2003, 53, 23–69.
  51. Fu, B.; He, X.; Yao, H.; Liang, Y.; Deng, T.; He, H.; Fan, D.; Lan, G.; He, W. Comparison of RFE-DL and Stacking Ensemble Learning Algorithms for Classifying Mangrove Species on UAV Multispectral Images. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102890.
  52. Hu, J.; Gui, W.; Heidari, A.A.; Cai, Z.; Liang, G.; Chen, H.; Pan, Z. Dispersed Foraging Slime Mould Algorithm: Continuous and Binary Variants for Global Optimization and Wrapper-Based Feature Selection. Knowl.-Based Syst. 2022, 237, 107761.
  53. Guyon, I.; Weston, J.; Barnhill, S.; Vapnik, V. Gene Selection for Cancer Classification Using Support Vector Machines. Mach. Learn. 2002, 46, 389–422.
  54. Akinola, O.O.; Ezugwu, A.E.; Agushaka, J.O.; Zitar, R.A.; Abualigah, L. Multiclass Feature Selection with Metaheuristic Optimization Algorithms: A Review. Neural Comput. Appl. 2022, 34, 19751–19790.
  55. Ma, J.; Xia, D.; Guo, H.; Wang, Y.; Niu, X.; Liu, Z.; Jiang, S. Metaheuristic-Based Support Vector Regression for Landslide Displacement Prediction: A Comparative Study. Landslides 2022, 19, 2489–2511.
  56. He, B.; Armaghani, D.J.; Lai, S.H. Assessment of Tunnel Blasting-Induced Overbreak: A Novel Metaheuristic-Based Random Forest Approach. Tunn. Undergr. Space Technol. 2023, 133, 104979.
  57. Veeraiah, V.; Khan, H.; Kumar, A.; Ahamad, S.; Mahajan, A.; Gupta, A. Integration of PSO and Deep Learning for Trend Analysis of Meta-Verse. In Proceedings of the 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 28–29 April 2022; IEEE: New York, NY, USA, 2022; pp. 713–718.
  58. Du, B.; Huang, S.; Guo, J.; Tang, H.; Wang, L.; Zhou, S. Interval Forecasting for Urban Water Demand Using PSO Optimized KDE Distribution and LSTM Neural Networks. Appl. Soft Comput. 2022, 122, 108875.
  59. Garg, V.; Shukla, A.; Tiwari, R. AERPSO—An Adaptive Exploration Robotic PSO Based Cooperative Algorithm for Multiple Target Searching. Expert Syst. Appl. 2022, 209, 118245.
  60. Aslan, M.F.; Durdu, A.; Sabanci, K. Goal Distance-Based UAV Path Planning Approach, Path Optimization and Learning-Based Path Estimation: GDRRT*, PSO-GDRRT* and BiLSTM-PSO-GDRRT. Appl. Soft Comput. 2023, 137, 110156.
  61. Moazen, H.; Molaei, S.; Farzinvash, L.; Sabaei, M. PSO-ELPM: PSO with Elite Learning, Enhanced Parameter Updating, and Exponential Mutation Operator. Inf. Sci. 2023, 628, 70–91.
  62. Cheng, R.; Jin, Y. A Social Learning Particle Swarm Optimization Algorithm for Scalable Optimization. Inf. Sci. 2015, 291, 43–60.
  63. Duan, Y.; Yang, Z.; Yu, T.; Yang, Q.; Liu, X.; Ji, W.; Jiang, H.; Zhuo, X.; Wu, T.; Qin, J. Geogenic Cadmium Pollution in Multi-Medians Caused by Black Shales in Luzhai, Guangxi. Environ. Pollut. 2020, 260, 113905.
  64. Tu, C.; Li, P.; Li, Z.; Wang, H.; Yin, S.; Li, D.; Zhu, Q.; Chang, M.; Liu, J.; Wang, G. Synergetic Classification of Coastal Wetlands over the Yellow River Delta with GF-3 Full-Polarization SAR and Zhuhai-1 OHS Hyperspectral Remote Sensing. Remote Sens. 2021, 13, 4444.
  65. Torres, R.; Snoeij, P.; Geudtner, D.; Bibby, D.; Davidson, M.; Attema, E.; Potin, P.; Rommen, B.; Floury, N.; Brown, M. GMES Sentinel-1 Mission. Remote Sens. Environ. 2012, 120, 9–24.
  66. Wang, Q.; Shi, W.; Li, Z.; Atkinson, P.M. Fusion of Sentinel-2 Images. Remote Sens. Environ. 2016, 187, 241–252.
  67. Wang, Z.; Ren, C.; Liang, Y.; Shi, Y.; Li, X.; Zhang, S. Object-Oriented Eucalyptus Plantation Forest Information Extraction Based on the Red-Edge Feature of GF-6. Bull. Surv. Mapp. 2021, 6, 6–11.
  68. Chandrashekar, G.; Sahin, F. A Survey on Feature Selection Methods. Comput. Electr. Eng. 2014, 40, 16–28.
  69. Reyes, O.; Morell, C.; Ventura, S. Scalable Extensions of the ReliefF Algorithm for Weighting and Selecting Features on the Multi-Label Learning Context. Neurocomputing 2015, 161, 168–182.
  70. Shao, Z.; Yang, S.; Gao, F.; Zhou, K.; Lin, P. A New Electricity Price Prediction Strategy Using Mutual Information-Based SVM-RFE Classification. Renew. Sustain. Energy Rev. 2017, 70, 330–341.
  71. Sun, C.; Jin, Y.; Cheng, R.; Ding, J.; Zeng, J. Surrogate-Assisted Cooperative Swarm Optimization of High-Dimensional Expensive Problems. IEEE Trans. Evol. Comput. 2017, 21, 644–660.
  72. Valjarević, A.; Djekić, T.; Stevanović, V.; Ivanović, R.; Jandziković, B. GIS Numerical and Remote Sensing Analyses of Forest Changes in the Toplica Region for the Period of 1953–2013. Appl. Geogr. 2018, 92, 131–139.
  73. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
  74. Tassi, A.; Vizzari, M. Object-Oriented LULC Classification in Google Earth Engine Combining SNIC, GLCM, and Machine Learning Algorithms. Remote Sens. 2020, 12, 3776.
  75. Petitjean, F.; Kurtz, C.; Passat, N.; Gançarski, P. Spatio-Temporal Reasoning for the Classification of Satellite Image Time Series. Pattern Recogn. Lett. 2012, 33, 1805–1815.
  76. Wang, X.; Wang, J.; Lian, Z.; Yang, N. Semi-Supervised Tree Species Classification for Multi-Source Remote Sensing Images Based on a Graph Convolutional Neural Network. Forests 2023, 14, 1211.
  77. Das, S.; Imtiaz, M.S.; Neom, N.H.; Siddique, N.; Wang, H. A Hybrid Approach for Bangla Sign Language Recognition Using Deep Transfer Learning Model with Random Forest Classifier. Expert Syst. Appl. 2023, 213, 118914.
  78. Wei, M.; Wang, H.; Zhang, Y.; Li, Q.; Du, X.; Shi, G.; Ren, Y. Investigating the Potential of Crop Discrimination in Early Growing Stage of Change Analysis in Remote Sensing Crop Profiles. Remote Sens. 2023, 15, 853.
  79. Lu, J.; Han, L.; Liu, L.; Wang, J.; Xia, Z.; Jin, D.; Zha, X. Lithology Classification in Semi-Arid Area Combining Multi-Source Remote Sensing Images Using Support Vector Machine Optimized by Improved Particle Swarm Algorithm. Int. J. Appl. Earth Obs. Geoinf. 2023, 119, 103318.
  80. Zhou, R.; Yang, C.; Li, E.; Cai, X.; Yang, J.; Xia, Y. Object-Based Wetland Vegetation Classification Using Multi-Feature Selection of Unoccupied Aerial Vehicle RGB Imagery. Remote Sens. 2021, 13, 4910.
  81. Firigato, J.O.N.; Junior, J.M.; Gonçalves, W.N.; Bacani, V.M. Deep Learning and Google Earth Engine Applied to Mapping Eucalyptus. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; IEEE: New York, NY, USA, 2021; pp. 4696–4699.
  82. Shang, X.; Chisholm, L.A. Classification of Australian Native Forest Species Using Hyperspectral Remote Sensing and Machine-Learning Classification Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2481–2489.
  83. Praticò, S.; Solano, F.; Di Fazio, S.; Modica, G. Machine Learning Classification of Mediterranean Forest Habitats in Google Earth Engine Based on Seasonal Sentinel-2 Time-Series and Input Image Composition Optimisation. Remote Sens. 2021, 13, 586.
  84. Chen, Y.; Peng, Z.; Ye, Y.; Jiang, X.; Lu, D.; Chen, E. Exploring a Uniform Procedure to Map Eucalyptus Plantations Based on Fused Medium–High Spatial Resolution Satellite Images. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102462.
Figure 1. Location of the study area: (a) geographic location of the study area; (b) Sentinel-2 true-color composite of the study area.
Figure 2. Sample distribution and fieldwork display map.
Figure 3. Technical flow chart.
Figure 4. Eucalyptus spectral reflectance.
Figure 5. Feature selection flowchart.
Figure 6. Feature weights of ReliefF.
Figure 7. Feature weights of SLPSO-RFE.
Figure 8. (a) Superpixel segmentation result (partial); (b) the original image of the same area.
Figure 9. UAV data collection distribution map. (A–F) are UAV images taken in December 2020 at 0.05 m resolution; the image at the top left is a Zhuhai-1 false-color image. (A,B) are mountainous regions, (C,D) are mudflats, and (E,F) are flat terrain.
Figure 10. Classification results. (a–i) show the classification results of Schemes 1–9, respectively; green represents eucalyptus plantations, yellow represents non-eucalyptus vegetation, and red represents non-vegetation types.
Figure 11. Comparative analysis of classification results against the Area 1 UAV image.
Figure 12. Comparative analysis of classification results against the Area 2 UAV image.
Figure 13. OA, Kappa, and F1-Score results charts.
Figure 14. Plot of PA and UA results.
Figure 15. Eucalyptus VH and VV ranges and trends in different periods.
Table 1. Sentinel-1 parameters.

Band Name | Resolution/m
VH polarization | 10
VV polarization | 10
Table 2. Sentinel-2 parameters.

Band | Name | Central Wavelength/nm | Spatial Resolution/m
B1 | Coastal | 443 | 60
B2 | Blue | 490 | 10
B3 | Green | 560 | 10
B4 | Red | 665 | 10
B5 | Red-Edge1 | 705 | 20
B6 | Red-Edge2 | 740 | 20
B7 | Red-Edge3 | 783 | 20
B8 | NIR | 842 | 10
B8A | Narrow NIR | 865 | 20
B9 | Water vapour | 945 | 60
B10 | Cirrus | 1375 | 60
B11 | SWIR1 | 1610 | 20
B12 | SWIR2 | 2190 | 20
Table 3. Characteristic variables and their calculation formulae.

Sensor | Feature Type | Variable | Symbol or Calculation Formula
Sentinel-1 | Texture feature | Var, Contrast, Corr, Ent | —
Sentinel-2 | Index feature | NDVI | (ρ820 − ρ670)/(ρ820 + ρ670)
Sentinel-2 | Index feature | EVI | 2.5(ρ820 − ρ670)/(ρ820 + 6ρ670 − 7.5ρ500 + 1)
Sentinel-2 | Index feature | Vigreen | (ρ566 − ρ670)/(ρ566 + ρ670)
Sentinel-2 | Index feature | GNDVI | (ρ820 − ρ566)/(ρ820 + ρ566)
Sentinel-2 | Red-edge feature | IRECI | (ρ783 − ρ665)/(ρ705/ρ740)
Sentinel-2 | Red-edge feature | NDRE1 | (ρ750 − ρ705)/(ρ750 + ρ705)
Sentinel-2 | Red-edge feature | MCARI2 | [(ρ750 − ρ705) − 0.2(ρ750 − ρ550)] × (ρ750/ρ705)
Sentinel-2 | Red-edge feature | MNDRE | (ρ750 − ρ705)/(ρ750 + ρ705 − 2ρ400)
Sentinel-2 | Red-edge feature | NDVI710 | (ρ710 − ρ670)/(ρ710 + ρ670)
Sentinel-2 | Red-edge feature | NDVI750 | (ρ750 − ρ670)/(ρ750 + ρ670)
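As a concrete illustration of the Table 3 definitions, the sketch below computes several of the indices from Sentinel-2 surface reflectances. The function name, argument names, and the example reflectance values are illustrative assumptions, not taken from the paper; the band-to-wavelength mapping follows Table 2.

```python
# Hedged sketch: a handful of the Table 3 vegetation/red-edge indices
# computed from per-band surface reflectance values. Inputs follow the
# Table 2 wavelengths: blue ~490 nm (B2), green ~560 nm (B3),
# red ~665 nm (B4), re1 ~705 nm (B5), nir ~842 nm (B8).
def vegetation_indices(blue, green, red, re1, nir):
    eps = 1e-12  # guard against division by zero over water/shadow pixels
    return {
        "NDVI": (nir - red) / (nir + red + eps),
        "EVI": 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1),
        "Vigreen": (green - red) / (green + red + eps),
        "GNDVI": (nir - green) / (nir + green + eps),
        "NDVI710": (re1 - red) / (re1 + red + eps),
    }

# Example with plausible vegetation reflectances (illustrative values)
print({k: round(v, 3) for k, v in
       vegetation_indices(0.05, 0.12, 0.08, 0.25, 0.45).items()})
```

The same expressions apply band-wise to whole image arrays (e.g., NumPy arrays or Earth Engine bands) without modification.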
Table 4. SLPSO-RFE preferred feature results.

Feature Selection Strategy | Feature Number | Optimal Feature Subset
SLPSO-RFE | 56 | B1, B11, B11_1, B12, B12_1, B1_1, B2, B2_1, B3, B3_1, B4, B4_1, B5, B5_1, B6, B6_1, B7, B7_1, B8, B8_1, B8A, B8A_1, B9, B9_1, CIRE_S, CIRE_W, EVI_S, EVI_W, GNDVI_S, GNDVI_W, IRECI_W, MCARI2_S, MCARI_W, MNDRE_S, MNDRE_W, NDRE_S, NDRE_W, NDVI750_S, NDVI750_W, NDVI_W, VV_W_asm, VV_W_asm_1, VV_W_com, VV_W_con_1, VV_W_diss, VV_W_diss_1, VV_W_ent, VV_W_ent_1, VV_W_idm, VV_W_idm_1, VV_diss, VV_diss_1, VV_ent, VV_ent_1, VV_idm, VV_diidm_1
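To make the RFE half of the feature-selection pipeline concrete, here is a minimal scikit-learn sketch that prunes a synthetic feature table down to 56 features, mirroring the subset size in Table 4. This stands in for only the RFE stage; the SLPSO search over candidate subsets is not shown, and the dataset, estimator choice, and step size are illustrative assumptions rather than the paper's configuration.

```python
# Hedged sketch of recursive feature elimination (RFE) with a
# random-forest ranker on synthetic data, keeping 56 features as in
# Table 4. The SLPSO component of the paper's hybrid is omitted.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in for the multi-temporal Sentinel-1/2 feature table
X, y = make_classification(n_samples=300, n_features=80,
                           n_informative=15, random_state=0)

# Eliminate 5 features per iteration until 56 remain
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=56, step=5)
rfe.fit(X, y)

selected = rfe.get_support(indices=True)  # indices of the 56 kept features
print(len(selected))
```

In the paper's scheme, SLPSO would search over subsets like this one and RFE's ranking would guide which features each particle discards.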
Table 5. Classification schemes. The single-temporal data in Schemes 1–3 use only the summer median composite image; the multi-temporal data use both the summer and winter median composites.

Group | Scheme | Scheme Content
First set of experiments | 1 | SVM classification based on ReliefF-RFE feature selection, single-temporal data
First set of experiments | 2 | CART classification based on ReliefF-RFE feature selection, single-temporal data
First set of experiments | 3 | RF classification based on ReliefF-RFE feature selection, single-temporal data
Second set of experiments | 4 | SVM classification based on ReliefF-RFE feature selection, multi-temporal data
Second set of experiments | 5 | CART classification based on ReliefF-RFE feature selection, multi-temporal data
Second set of experiments | 6 | RF classification based on ReliefF-RFE feature selection, multi-temporal data
Third set of experiments | 7 | SVM classification based on SLPSO-RFE feature selection, multi-temporal data
Third set of experiments | 8 | CART classification based on SLPSO-RFE feature selection, multi-temporal data
Third set of experiments | 9 | RF classification based on SLPSO-RFE feature selection, multi-temporal data
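The three supervised classifiers compared across the schemes in Table 5 can be sketched with scikit-learn, using a synthetic 56-feature table in place of the selected Sentinel-1/2 feature subset. All dataset parameters and hyperparameters here are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: fit and score the three classifier families compared
# in Table 5 (SVM, CART, RF) on synthetic labeled data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # CART-style tree

# Three classes stand in for eucalyptus / other vegetation / non-vegetation
X, y = make_classification(n_samples=600, n_features=56, n_informative=20,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

models = {
    "SVM": SVC(kernel="rbf"),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)  # supervised learning on labeled samples
    scores[name] = accuracy_score(y_te, model.predict(X_te))
print(scores)
```

In the paper the same comparison is run per-object (SNIC segments) rather than per-pixel, but the fit/predict/score pattern is identical.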
Table 6. Scheme 1–3 confusion matrix accuracy evaluation results.

Class | Scheme 1 PA/% | Scheme 1 UA/% | Scheme 2 PA/% | Scheme 2 UA/% | Scheme 3 PA/% | Scheme 3 UA/%
Eucalyptus | 64.52 | 74.07 | 77.78 | 87.5 | 79 | 88.43
Other vegetation | 92.08 | 89.42 | 86 | 83.33 | 87.1 | 85.32
Non-vegetation | 64.71 | 61.11 | 88.89 | 85.03 | 89.05 | 88.67
OA/% | 80.23 | | 82.42 | | 83.43 |
Kappa | 0.71 | | 0.75 | | 0.75 |
F1-Score | 73.53 | | 75.07 | | 75.2 |
Table 7. Scheme 4–6 confusion matrix accuracy evaluation results.

Class | Scheme 4 PA/% | Scheme 4 UA/% | Scheme 5 PA/% | Scheme 5 UA/% | Scheme 6 PA/% | Scheme 6 UA/%
Eucalyptus | 74.33 | 73.33 | 77.78 | 77 | 79.41 | 80.32
Other vegetation | 81.25 | 76.47 | 82 | 83.33 | 82.98 | 81.76
Non-vegetation | 88.89 | 90.26 | 89.67 | 91.45 | 90.23 | 91.97
OA/% | 85.48 | | 88 | | 89.43 |
Kappa | 0.79 | | 0.81 | | 0.84 |
F1-Score | 80.33 | | 82.35 | | 83.23 |
Table 8. Scheme 7–9 confusion matrix accuracy evaluation results.

Class | Scheme 7 PA/% | Scheme 7 UA/% | Scheme 8 PA/% | Scheme 8 UA/% | Scheme 9 PA/% | Scheme 9 UA/%
Eucalyptus | 93.76 | 93.55 | 96.25 | 96.38 | 100 | 92.91
Other vegetation | 91.4 | 96.57 | 94.73 | 95.95 | 100 | 99
Non-vegetation | 98.45 | 97 | 98.89 | 99 | 90 | 100
OA/% | 95.48 | | 96 | | 97.97 |
Kappa | 0.94 | | 0.94 | | 0.96 |
F1-Score | 93.33 | | 94.01 | | 96.43 |
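The accuracy measures reported in Tables 6–8 (OA, Kappa, PA, UA, F1) all derive from the confusion matrix. The sketch below shows the standard computations; the function name and the example 2×2 matrix are illustrative, and rows are assumed to hold reference (ground-truth) counts with columns holding predicted counts.

```python
# Standard confusion-matrix accuracy measures: overall accuracy (OA),
# Cohen's kappa, producer's accuracy (PA), user's accuracy (UA), and
# the macro-averaged F1 score.
import numpy as np

def accuracy_metrics(cm):
    """cm[i, j] = number of reference-class-i samples predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                              # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    pa = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy (recall)
    ua = np.diag(cm) / cm.sum(axis=0)  # user's accuracy (precision)
    f1 = 2 * pa * ua / (pa + ua)       # per-class F1, then macro-average
    return {"OA": oa, "Kappa": kappa, "PA": pa, "UA": ua, "F1": f1.mean()}

# Illustrative 2-class matrix, not from the paper
m = accuracy_metrics([[50, 10], [5, 35]])
print(round(m["OA"], 3), round(m["Kappa"], 3))
```

With this convention, each table's PA/UA columns are the per-class `pa`/`ua` values, and OA, Kappa, and F1 summarize the whole matrix.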
Table 9. UAV image validation results.

Scheme | Area 1 Verification Accuracy/% | Area 2 Verification Accuracy/%
1 | 88.47 | 92.31
2 | 91.45 | 93.14
3 | 91.56 | 93
4 | 92.31 | 82.81
5 | 93.14 | 81.56
6 | 93 | 94.24
7 | 95.78 | 90.31
8 | 96.51 | 93.44
9 | 97 | 95.78

Lin, X.; Ren, C.; Li, Y.; Yue, W.; Liang, J.; Yin, A. Eucalyptus Plantation Area Extraction Based on SLPSO-RFE Feature Selection and Multi-Temporal Sentinel-1/2 Data. Forests 2023, 14, 1864. https://doi.org/10.3390/f14091864
