Correction published on 8 February 2023, see Remote Sens. 2023, 15(4), 929.
Article

Multiscale Superpixel-Based Fine Classification of Crops in the UAV-Based Hyperspectral Imagery

Shuang Tian, Qikai Lu and Lifei Wei
1 Faculty of Resources and Environmental Science, Hubei University, Wuhan 430062, China
2 Hubei Key Laboratory of Regional Development and Environmental Response, Hubei University, Wuhan 430062, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(14), 3292; https://doi.org/10.3390/rs14143292
Submission received: 7 May 2022 / Revised: 20 June 2022 / Accepted: 23 June 2022 / Published: 8 July 2022 / Corrected: 8 February 2023
(This article belongs to the Special Issue Hyperspectral Imaging for Precision Farming)

Abstract
As an effective approach to obtaining agricultural information, remote sensing has been applied to the classification of crop types. Unmanned aerial vehicle (UAV)-based hyperspectral sensors provide imagery with high spatial and spectral resolutions, and the detailed spatial information as well as the abundant spectral properties of UAV-based hyperspectral imagery open a new avenue to the fine classification of crops. In this manuscript, multiscale superpixel-based approaches are proposed for the fine identification of crops in UAV-based hyperspectral imagery. Specifically, multiscale superpixel segmentation is performed to obtain a series of superpixel maps. The multiscale information is then integrated into image classification by two strategies, namely pre-processing and post-processing. In the pre-processing strategy, the superpixel is regarded as the minimum unit for image classification, and its feature is obtained by averaging the spectral values of the pixels within it. At each scale, classification is performed on the basis of the superpixels, and the multiscale classification results are combined to generate the final map. In the post-processing strategy, pixel-wise classification is implemented to obtain the label and posterior probabilities of each pixel. Subsequently, superpixel-based voting is conducted at each scale, and the voting results obtained at different scales are fused to generate the multiscale voting result. To evaluate the effectiveness of the proposed approaches, three open-source UAV-based hyperspectral datasets are employed in the experiments, and seven training sets with different numbers of labeled samples as well as two classifiers are taken into account for further analysis. The results demonstrate that the multiscale superpixel-based approaches outperform the single-scale approaches, and that the post-processing strategy is superior to the pre-processing strategy in terms of classification accuracy in all the datasets.


1. Introduction

Agriculture is the foundation of the national economy, and crop production affects the quality of human life. In particular, obtaining the spatial distribution and growing status of crops is crucial for agricultural monitoring and policy development [1,2]. However, traditional field measurement, survey, and statistical methods are time-consuming and labor-intensive, making it difficult to obtain agricultural information over large areas in the required time [3,4].
Thanks to the development of earth observation technology, remote sensing has been widely applied in agriculture for years, as it can cover large areas of farmland with higher data collection frequency and lower cost [5,6,7]. In this context, researchers have paid attention to the interpretation of remote sensing images for extracting crop information, and state-of-the-art machine learning methods have been utilized and evaluated for agricultural crop classification in remote sensing images. Ok et al. [8] analyzed the performance of the random forest and maximum likelihood classification methods for crop recognition with multispectral SPOT 5 images. Zhao et al. [9] compared three deep learning models for early crop classification on Sentinel-1A imagery. Piedelobo et al. [10] explored high-resolution crop mapping over large areas by fusing open-source remote sensing data from the Sentinel-2 and Landsat-8 satellites. Chakhar et al. [11] combined Landsat-8 and Sentinel-2 information for irrigated crop classification and assessed the performance of 22 nonparametric algorithms for classifying crops. Kussul et al. [12] proposed a multilevel deep learning architecture for crop type classification based on images acquired by the Landsat-8 and Sentinel-1A satellites. Sonobe et al. [13] used data provided by the Sentinel-1A C-SAR and Sentinel-2A MultiSpectral Instrument for the identification of six crop types. Kumar et al. [14] viewed Resourcesat-2 as a highly suitable satellite for crop classification studies owing to its improved features and capabilities and compared the classification performance of several algorithms on this imagery.
In recent years, the unmanned aerial vehicle (UAV) has opened a new avenue to precision agriculture owing to its flexibility and intelligence [15,16]. Much research has been conducted on extracting crop distributions from UAV-based imagery. Senthilnath et al. [17] investigated the application of a UAV imaging platform for vegetation analysis based on spectral-spatial methods; vertical take-off and landing (VTOL) quadcopters and fixed-wing UAVs were used to acquire images, and the experiments illustrated the effectiveness of the spectral-spatial methods. Ye et al. [18] used a UAV equipped with a five-band multispectral sensor to capture imagery for the identification of banana fusarium wilt using supervised classification algorithms. Moreover, UAV-based hyperspectral imagery offers abundant spectral properties as well as detailed spatial information [19,20], making it a satisfactory data source for the accurate recognition of crops [21,22]. A survey on the combination of UAVs and hyperspectral sensors reviewed the sensors, the inherent data processing, and applications in both agriculture and forestry [23]. Ishida et al. [24] used a liquid crystal tunable filter to select the optimal combination of spectral bands for vegetation classification. Wei et al. [25] proposed a spectral-spatial-location fusion method based on conditional random fields, in which the spectral information, spatial context, spatial features, and spatial location information were integrated into the conditional random field for crop recognition. Zhong et al. [26] built a UAV-borne hyperspectral dataset with high spectral and spatial resolution and proposed a deep convolutional neural network with a conditional random field for precise crop identification.
Meanwhile, the concept of the superpixel has been introduced into hyperspectral image interpretation. A superpixel can be regarded as a region consisting of several spatially coherent pixels with similar properties; it is able to suppress outliers and preserve object boundaries in the image [27,28]. Researchers have viewed the superpixel as the minimum processing unit for image classification. Li et al. [29] developed a superpixel-level sparse representation classification framework with multitask learning for hyperspectral imagery. Fang et al. [30] proposed a superpixel-based discriminative sparse model for spectral-spatial classification of hyperspectral images, where pixels within each superpixel were represented via a joint sparse regularization and the label of the superpixel was determined by the recovered sparse coefficients. Cui et al. [31] proposed a hyperspectral image classification method based on superpixels and multi-classifier fusion, which made use of the spectral information of superpixels and the spatial information of hyperspectral images. Li et al. [32] combined the probability outputs of pixel-level and superpixel-level classification in a maximum a posteriori estimation model. The aforementioned research illustrated the superiority of superpixel-based approaches over conventional pixel-based ones. However, few works have paid attention to superpixel-based fine classification of crops. Meanwhile, the performance of superpixel-based approaches relies on the segmentation result, and it is difficult to select optimal parameters for describing the different kinds of objects in an agricultural hyperspectral image.
In this manuscript, multiscale superpixel-based approaches are developed for the fine classification of crops in UAV-based hyperspectral imagery. On the basis of the spectral similarity and spatial relationships among pixels, the image is segmented into a series of superpixels. To exploit the multiscale information of the remote sensing image, several segmentation maps with different numbers of superpixels are generated. The superpixel information can be introduced into classification by two different approaches, namely the pre-processing method and the post-processing method. Specifically, the pre-processing method regards each superpixel, instead of a pixel, as the minimum processing unit, while the post-processing method combines the superpixel segmentation maps with pixel-wise classification results by using a voting strategy. In the pre-processing method, classification is performed on superpixels, and the feature of a superpixel is calculated from the pixels located within it. For each scale, both crisp and soft classification outputs can be obtained; therefore, label-based and probability-based approaches are proposed to fuse the multiscale information. In the post-processing method, pixel-wise classification is first performed to obtain the label and probability information of each pixel. Based on the superpixel segmentation map at each scale, label-based and probability-based voting can be implemented, and the multiscale information is then fused by combining the voting results obtained at different scales. To test the effectiveness of the proposed methods, three hyperspectral images acquired by UAV over agricultural areas are adopted in the experiments.
The rest of this paper is organized as follows: the multiscale superpixel-based classification approaches are introduced in Section 2; Section 3 and Section 4 present the experimental results and discussion; the conclusions are drawn in Section 5.

2. Methodology

2.1. Superpixel Segmentation

To obtain the superpixel segmentation result, the entropy rate superpixel (ERS) algorithm is employed in this research. The ERS algorithm treats superpixel segmentation as a clustering problem. An image can be mapped to an undirected graph $G = (V, E)$, where $V$ is the vertex set and $E$ is the edge set. The vertices denote the pixels in the image, and the edge weights denote the similarity between vertices, given in the form of a similarity matrix. To segment an image into $K$ superpixels, we search for a subset of edges $A \subseteq E$ such that the resulting graph $G = (V, A)$ contains $K$ connected subgraphs. The superpixel segmentation result is obtained by optimizing the following objective function with respect to the edge set:
$$\max_{A} \; H(A) + \lambda B(A) \quad \text{subject to} \quad A \subseteq E \qquad (1)$$
where $H(A)$ represents the entropy rate of a random walk on the graph $G = (V, A)$, $B(A)$ denotes the balancing function, and $\lambda \geq 0$ is the weight of the balancing term. Specifically, the entropy rate of the random walk is employed as a criterion to achieve compact and homogeneous clusters, which encourages the division of images along perceptual boundaries and favors superpixels overlapping with only one object. The balancing function, in turn, encourages clusters with similar sizes and reduces the number of unbalanced superpixels. By combining the entropy rate and the balancing function, the objective function favors compact, homogeneous, and balanced clusters.
Note that, since the inclusion of any edge increases the uncertainty of a jump of the random walk, the entropy rate is monotonically increasing. The balancing function is also monotonically increasing and submodular under the given graph construction. Therefore, as a linear combination with non-negative coefficients, the objective function is submodular and monotonically increasing and can be optimized by a greedy algorithm. Starting with an empty edge set, the algorithm adds edges to the set sequentially; at each iteration, the edge that yields the largest gain in the objective function is selected. As the edge set is updated, the number of connected subgraphs changes. When the number of connected subgraphs reaches the preset $K$, the iterations stop and the superpixel segmentation result is obtained. More detailed information about the objective function and the resulting algorithm of ERS segmentation can be found in [33].
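To make the greedy optimization concrete, the following Python sketch shows its skeleton: a union-find structure tracks the number of connected subgraphs while edges are added in order of decreasing gain. This is a schematic rather than the ERS implementation of [33]: the `gain` callable is a placeholder for the true marginal gain of the entropy-rate-plus-balancing objective (which, being submodular, should be re-evaluated lazily as the edge set grows), and it defaults here to the raw edge similarity.

```python
class UnionFind:
    """Tracks connected components of the growing graph G = (V, A)."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.count = n  # current number of connected subgraphs

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        ri, rj = self.find(i), self.find(j)
        if ri == rj:
            return False
        self.parent[ri] = rj
        self.count -= 1
        return True

def greedy_segmentation(n_vertices, edges, K, gain=lambda w: w):
    """Greedily add edges until K connected subgraphs remain.

    edges: iterable of (weight, u, v); gain: surrogate for the marginal
    gain of the objective. In true ERS the gain combines the entropy
    rate H(A) and the balancing term B(A); here it defaults to the raw
    similarity w, giving a maximum-similarity spanning forest.
    """
    uf, A = UnionFind(n_vertices), []
    for w, u, v in sorted(edges, key=lambda e: gain(e[0]), reverse=True):
        if uf.count == K:            # preset number of superpixels reached
            break
        if uf.union(u, v):           # only edges merging two components help
            A.append((u, v))
    return uf, A
```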

2.2. Superpixel-Based Classification

A superpixel is composed of several spatially adjacent pixels with similar spectral properties that should be assigned the same label. Thus, classification can be performed on the basis of superpixels instead of the original pixels, where the superpixel is used as a minimum processing unit to avoid salt-and-pepper noise and preserve the boundaries of objects. Specifically, to describe the properties of each superpixel, we use the average spectral response of the pixels within it. The spectral properties of the superpixel are input into the pre-trained classifier to predict its label, and a classification map is then generated.
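As an illustration, the superpixel mean features and the broadcast of predicted labels back to pixels can be sketched with `numpy` as follows; the classifier is assumed to expose a scikit-learn-style `predict` method, which is our convention rather than the Matlab implementation used in the experiments.

```python
import numpy as np

def superpixel_mean_features(image, seg):
    """Average spectrum of the pixels inside each superpixel.

    image: (H, W, B) hyperspectral cube; seg: (H, W) integer superpixel
    labels in 0..K-1. Returns a (K, B) feature matrix.
    """
    H, W, B = image.shape
    flat, ids = image.reshape(-1, B), seg.ravel()
    K = ids.max() + 1
    sums = np.zeros((K, B))
    np.add.at(sums, ids, flat)                       # accumulate spectra
    counts = np.bincount(ids, minlength=K)[:, None]
    return sums / counts

# Classify each superpixel and broadcast its label back to the pixels:
# feats = superpixel_mean_features(image, seg)
# sp_labels = clf.predict(feats)   # any fit/predict classifier
# label_map = sp_labels[seg]       # (H, W) classification map
```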
On the other hand, inspired by the object-based voting strategy [34], a superpixel-based voting strategy is also developed. First, pixel-wise classification is conducted on the original image to obtain a classification map. Then, the superpixel segmentation result is introduced to improve the classification performance. In the superpixel-based voting strategy, the pixels located in the same superpixel are assigned the class of that superpixel. In this work, we use two methods to determine the label of a superpixel. First, the label of a superpixel can be determined by the dominant class of the pixels within it, where the most frequently occurring class is used as the label of the superpixel. Alternatively, the probability outputs can be used to determine the label of the superpixel. The posterior probabilities of pixels belonging to different classes are given by the classifier. For a superpixel, its class-specific probabilities are calculated as the mean probabilities of the pixels within it. The probabilities of a pixel located in the superpixel are then modified as
$$\bar{p}(x) = \frac{1}{N_{sp}} \sum_{x \in sp} p(x) \qquad (2)$$
where $p(x)$ is the probability output of pixel $x$ given by the pixel-wise classification, $sp$ is the superpixel that pixel $x$ belongs to, and $N_{sp}$ is the number of pixels contained in the superpixel $sp$. The label of the pixel is then assigned to the class with the highest probability. Therefore, the pixel-wise classification map is refined by tuning the labels of pixels according to the superpixel-based voting strategy.
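Both voting rules reduce to a per-superpixel aggregation of the pixel-wise outputs. A minimal `numpy` sketch (function and variable names are ours) of the label-based rule and of the probability rule in Eq. (2) is:

```python
import numpy as np

def probability_voting(prob, seg):
    """Replace each pixel's posteriors by the superpixel mean, Eq. (2).

    prob: (H, W, C) pixel-wise posteriors; seg: (H, W) superpixel ids.
    Returns the modified (H, W, C) posteriors.
    """
    H, W, C = prob.shape
    ids = seg.ravel()
    K = ids.max() + 1
    sums = np.zeros((K, C))
    np.add.at(sums, ids, prob.reshape(-1, C))         # sum posteriors per superpixel
    mean = sums / np.bincount(ids, minlength=K)[:, None]
    return mean[seg]                                  # broadcast back to pixels

def label_voting(labels, seg, n_classes):
    """Assign every pixel the dominant class of its superpixel."""
    ids = seg.ravel()
    K = ids.max() + 1
    hist = np.zeros((K, n_classes), dtype=np.int64)
    np.add.at(hist, (ids, labels.ravel()), 1)         # per-superpixel class counts
    return hist.argmax(axis=1)[seg]
```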

2.3. Multiscale Superpixel-Based Classification

Objects in remote sensing images often show different characteristics at different scales, making it difficult to select an optimal scale to represent the different kinds of objects. Hence, multiscale superpixel-based approaches are proposed for the fine classification of crops in the UAV-based hyperspectral image. Similar to the traditional superpixel-based classification approaches, the proposed approaches can be divided into pre-processing and post-processing methods. The flow chart of the multiscale superpixel-based approaches is shown in Figure 1.
For the pre-processing method, the average spectral feature of the pixels within a superpixel is used to represent its characteristics at each scale. Subsequently, a classifier produces the posterior probabilities and the label output of each superpixel, and each pixel inherits the label and probability information of the superpixel it belongs to. Two approaches are then developed to mine the multiscale information of hyperspectral images according to the label and probability outputs. Specifically, the multiscale label fusion (MLF) approach takes the dominant label of a pixel across the different scales as its final class label, while the multiscale probability fusion (MPF) approach combines the posterior probabilities obtained at a series of scales to generate a fused probability output, which can be described as
$$p_{MPF}(x) = \frac{1}{S} \sum_{s=1}^{S} p_s(\bar{x}) \qquad (3)$$
where $\bar{x}$ is the average feature of the pixels within the superpixel that pixel $x$ is located in, $p_s(\bar{x})$ is the corresponding probability output obtained at the $s$-th scale, $S$ is the number of scales, and $p_{MPF}(x)$ is the fused probability output of pixel $x$. For each pixel, the class with the highest fused probability is assigned as its label.
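Once the per-scale superpixel outputs are broadcast back to pixel-aligned maps, MLF reduces to a per-pixel majority over scales and MPF to the per-pixel mean of Eq. (3). A hedged sketch, with inputs assumed to be the pixel-aligned per-scale maps:

```python
import numpy as np

def fuse_prescale(prob_per_scale=None, label_per_scale=None):
    """Multiscale fusion of superpixel-level classification outputs.

    prob_per_scale: list of (H, W, C) probability maps, one per scale,
    each obtained by classifying superpixel mean spectra and broadcasting
    back to pixels. label_per_scale: list of (H, W) label maps.
    Returns a dict with the MPF and/or MLF label maps.
    """
    out = {}
    if prob_per_scale is not None:
        p = np.mean(prob_per_scale, axis=0)        # Eq. (3): average over scales
        out["MPF"] = p.argmax(axis=2)
    if label_per_scale is not None:
        stack = np.stack(label_per_scale)          # (S, H, W)
        C = stack.max() + 1
        votes = (stack[..., None] == np.arange(C)).sum(axis=0)
        out["MLF"] = votes.argmax(axis=2)          # dominant label across scales
    return out
```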
For the post-processing method, pixel-wise classification is performed on the original hyperspectral image to obtain the label and probabilities of each pixel. Based on the superpixel segmentation maps at different scales, superpixel-based voting is conducted and a series of voting results is generated. Both label-based and probability-based voting strategies are developed in this work. In the multiscale label voting (MLV) strategy, the superpixel-based voting result is calculated using the label information of the pixels at each scale; the voting results generated at the series of scales are then fused, and the class that occurs with the highest frequency is selected as the final label of the pixel. Meanwhile, in the multiscale probability voting (MPV) strategy, the initial pixel-wise posterior probabilities are modified by probability voting on the basis of the superpixel segmentation map at each scale. Then, for each pixel, the average of the voted probabilities obtained over the scales is used to represent its membership in each class:
$$p_{MPV}(x) = \frac{1}{S} \sum_{s=1}^{S} \bar{p}_s(x) \qquad (4)$$
where $\bar{p}_s(x)$ is the modified probability output obtained by probability voting at the $s$-th scale. The final label of the pixel is assigned as the class with the highest multiscale fused probability.
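The post-processing chain can be sketched by reusing `probability_voting` and `label_voting` from the Section 2.2 sketch: a single pixel-wise output is voted at every scale, and the voted maps are then averaged (MPV, Eq. (4)) or combined by majority (MLV). Again, a schematic under our naming conventions:

```python
import numpy as np

def mpv(prob, segs):
    """Multiscale probability voting, Eq. (4).

    prob: (H, W, C) pixel-wise posteriors from one classifier run;
    segs: list of (H, W) superpixel maps, one per scale.
    """
    voted = [probability_voting(prob, seg) for seg in segs]
    return np.mean(voted, axis=0).argmax(axis=2)       # (H, W) labels

def mlv(labels, segs, n_classes):
    """Multiscale label voting: majority over the per-scale voted labels."""
    voted = np.stack([label_voting(labels, seg, n_classes) for seg in segs])
    votes = (voted[..., None] == np.arange(n_classes)).sum(axis=0)
    return votes.argmax(axis=2)
```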

3. Experiments and Results

3.1. Dataset

In the experiments, the Wuhan UAV-borne hyperspectral image (WHU-Hi) dataset, collected and shared by the Intelligent Data Extraction, Analysis and Applications of Remote Sensing (RSIDEA) research group of Wuhan University, was employed to test the effectiveness of the multiscale superpixel-based methods for the fine classification of crops. For the WHU-Hi dataset, preprocessing, including radiometric calibration and geometric correction, was conducted with the HyperSpec software provided by the instrument manufacturer. In the radiometric calibration, the raw digital number values were converted into radiance values with the laboratory calibration parameters of the sensor. The dataset was acquired over farming areas in Hubei province, China, by a Headwall Nano-Hyperspec sensor mounted on a UAV platform, and it comprises three individual UAV-borne hyperspectral datasets, namely the LongKou, HanChuan, and HongHu datasets. An overview of these datasets is provided in Figure 2, Figure 3 and Figure 4 and Table 1, Table 2 and Table 3.
The LongKou dataset was acquired over a simple agricultural scene in Longkou Town, Hubei province, China. The size of the image is 550 × 400 pixels, with 270 bands from 400 to 1000 nm. The UAV flew at an altitude of 500 m, and the spatial resolution of the image is about 0.463 m. The HanChuan dataset was acquired over a rural-urban fringe zone in Hanchuan City, Hubei province, China. The size of the image is 1217 × 303 pixels, with 274 bands from 400 to 1000 nm. The UAV flew at an altitude of 250 m, and the spatial resolution of the image is about 0.109 m. The HongHu dataset was acquired over a complex agricultural scene in Honghu City, Hubei province, China. The size of the image is 940 × 475 pixels, with 270 bands from 400 to 1000 nm. The UAV flew at an altitude of 100 m, and the spatial resolution of the image is about 0.043 m.

3.2. Experimental Setup

To test the performance of the proposed multiscale superpixel-based classification approaches, two classifiers, namely the support vector machine (SVM) and random forest (RF), are utilized. For SVM, the radial basis function kernel is used, and 5-fold cross-validation is employed to select the optimal values of the bandwidth and penalty factor. For RF, the number of trees is set to 500. Meanwhile, superpixel-based classification with a single scale is also taken into consideration. The number of scales is set to 12, and the number of superpixels at each scale is related to the number of pixels contained in the image. At the $s$-th scale, the number of superpixels is set to $2^{\log_2 N - s}$, where $N$ is the total number of image pixels. The training sets are provided by the RSIDEA group, comprising seven sets with 25, 50, 100, 150, 200, 250, and 300 labeled samples per class. All of the experiments are implemented using Matlab/Simulink on a personal computer with an Intel(R) Core(TM) i7-8700K CPU and 32 GB RAM.
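Reading the schedule as $K_s = 2^{\log_2 N - s}$ (our interpretation of the formula above), the implied superpixel counts per scale can be enumerated as follows:

```python
import math

def superpixel_counts(n_pixels, n_scales=12):
    """Number of superpixels per scale: K_s = 2^(log2(N) - s)."""
    return [round(2 ** (math.log2(n_pixels) - s))
            for s in range(1, n_scales + 1)]

# e.g. the 550 x 400 LongKou image: roughly 110,000 superpixels at
# s = 1, halving at every scale down to about 54 at s = 12.
print(superpixel_counts(550 * 400))
```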
In this work, the overall accuracy (OA), Kappa coefficient, and class-specific accuracy are employed to evaluate the performance of the different approaches [35]. OA represents the probability that an individual sample will be correctly recognized by a classifier, that is, the number of correctly predicted samples divided by the total number of testing samples. Kappa takes both omission and commission errors into account, making it a more robust evaluation measure than OA. The value of Kappa ranges from −1 to 1: a value of 0 indicates that the classification is equivalent to a random classification, a negative value indicates that the classification is worse than random, and a value close to 1 indicates that the classification is significantly better than random. For the class-specific accuracy, the F-score is utilized, which can be expressed as
$$F = \frac{2 \times PA \times UA}{PA + UA} \qquad (5)$$
where PA and UA are the producer’s accuracy and user’s accuracy, respectively. The producer’s accuracy represents the probability of reference samples being correctly predicted, and the user’s accuracy indicates the probability that a predicted sample in the classification map actually represents the class on the ground.
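All three measures derive from the confusion matrix; a compact sketch (our helper, not the evaluation code used in the paper) is:

```python
import numpy as np

def accuracy_metrics(y_true, y_pred, n_classes):
    """OA, Kappa, and per-class F-score from the confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)            # rows: reference, cols: predicted
    n = cm.sum()
    oa = np.trace(cm) / n                          # overall accuracy
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # chance agreement
    kappa = (oa - pe) / (1 - pe)
    pa = np.diag(cm) / np.maximum(cm.sum(1), 1)    # producer's accuracy
    ua = np.diag(cm) / np.maximum(cm.sum(0), 1)    # user's accuracy
    f = 2 * pa * ua / np.maximum(pa + ua, 1e-12)   # Eq. (5)
    return oa, kappa, f
```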

3.3. Results

To analyze the superpixel-based classification method, we compare the performance of the single-scale approaches that use only one superpixel segmentation map for crop classification. The classification results obtained using SVM are shown in Figure 5, Figure 6 and Figure 7 for the Longkou, Hanchuan, and Honghu datasets, while the results with RF are presented in Figure 8, Figure 9 and Figure 10 for the three datasets. In each figure, the vertical axis denotes the OA obtained by different approaches, and the horizontal axis represents the superpixel segmentation scale; as the scale increases, the number of superpixels in the segmentation map decreases. From all of the figures, it can be found that the OAs increase steadily over the first few scales, reach a best accuracy, and then begin to decrease. This phenomenon can be attributed to the fact that, at the first few scales, the image is over-segmented and one object may be divided into several superpixels, whereas the superpixel number is much smaller at the last few scales, resulting in under-segmentation, where different objects may be included in one superpixel. Both over-segmentation and under-segmentation are unfavorable for the fine classification of crops.
Notably, the single-scale superpixel-based approaches also show different performances. The superpixel-based classification approach that views the superpixel as the minimum unit gives an unsatisfactory result. On one hand, its best classification accuracy is lower than that of the other approaches; on the other hand, its OA curve decreases rapidly with increasing scale, and at larger scales its OA is even lower than that of the pixel-wise classification. The reason is that the number of superpixels at larger scales is small, and a superpixel may contain many objects belonging to different crop types. The inclusion of different kinds of objects distorts the average spectral feature of the superpixel, resulting in misclassification of the superpixel and unsatisfactory performance. The accuracy curves of the voting approaches also decrease as the scale increases, but the descent rate is much slower. The classification accuracy given by the superpixel-based approaches is better than that of the original pixel-wise classification, since the superpixel introduces spatial information into the crop classification and thereby avoids isolated misclassified noise.
In particular, the highest accuracies of the single-scale approaches are all given by the post-processing methods. Among the 21 results obtained on 3 datasets with 7 training sets using SVM, the probability-based and label-based voting methods give 16 and 5 best performances, respectively. Specifically, the best results achieved with 25, 50, 100, 150, 200, 250, and 300 training samples per class are 97.77%, 98.56%, 98.45%, 98.53%, 98.45%, 98.68%, and 98.77% in the Longkou dataset, respectively. In the Hanchuan dataset, the highest OAs provided by the single-scale approach are 67.19%, 76.17%, 85.20%, 87.52%, 88.84%, 88.52%, and 88.85% with the different training sets, respectively. As for the Honghu dataset, the optimal accuracies are 85.31%, 84.77%, 88.04%, 88.81%, 89.92%, 90.68%, and 90.85% obtained with the different training sample numbers, respectively. When using RF, 17 of the 21 optimal results with different datasets and training sets are provided by the probability-based voting method, and the rest are given by the label-based voting method.
For the Longkou dataset, the best results provided by the single-scale approach are 95.44%, 96.41%, 95.07%, 96.83%, 97.99%, 97.27%, and 97.99% using 25, 50, 100, 150, 200, 250, and 300 training samples per class, respectively. In the Hanchuan dataset, the highest OAs achieved with the seven predefined training sets are 69.32%, 79.22%, 81.06%, 81.76%, 84.48%, 84.51%, and 85.13%, respectively. As for the Honghu dataset, the optimal accuracies obtained are 81.57%, 82.68%, 85.10%, 84.37%, 85.13%, 85.71%, and 87.07% with different numbers of training samples per class, respectively.
In the meantime, the OAs and Kappa coefficients provided by the multiscale superpixel-based approaches, including MLF, MPF, MLV, and MPV, as well as the pixel-wise spectral-based approach with SVM, are reported in Table 4, Table 5 and Table 6 for the Longkou, Hanchuan, and Honghu datasets, respectively. The classification results obtained with RF are shown in Table 7, Table 8 and Table 9 for the three datasets. Compared to the aforementioned single-scale approaches, the multiscale approaches clearly show more satisfactory classification performance: they give accuracies similar to, and even better than, the optimal results achieved by the single-scale approaches. This reveals that the employment of multiscale information benefits the recognition of crops, and that the multiscale superpixel-based method avoids the optimal-scale selection problem in image analysis. Comparing the results obtained with different sample sets, it can be observed that the accuracy improves as the number of training samples increases, which illustrates that sufficient samples are conducive to the construction of a discriminative classification model. Among the proposed multiscale superpixel-based methods, MLV and MPV show much better results than MLF and MPF in terms of higher OAs and Kappa coefficients. This is consistent with the results of the single-scale approaches, demonstrating that the post-processing strategy is more effective than the pre-processing strategy in identifying the crops in the hyperspectral image. It can also be found that MLF and MPF show similar results, and the accuracies given by MLV and MPV are very close. In fact, the OAs and Kappa coefficients of the probability-based approaches are slightly higher than those of the label-based approaches, suggesting that the probability-based approaches may be more suitable for crop classification. Moreover, the classification results obtained using SVM are better than those using RF, especially for the pixel-wise spectral-based approach. Although SVM and RF show different abilities in distinguishing the crops in hyperspectral images, the accuracy improvements achieved by introducing the multiscale superpixel information are evident in both cases. Overall, the proposed multiscale superpixel-based approaches give satisfactory results on the testing datasets, and MPV shows the most promising performance, as it obtains the highest accuracies in most cases.

4. Discussion

For further discussion and analysis, we compare the classification performances of the different approaches with 100 training samples per class in this section. Table 10 reports the OAs, Kappa coefficients, and class-specific accuracies given by SVM and RF for the Longkou dataset. The spectral-based classification accuracy with SVM is 94.22%, which is higher than the 86.03% obtained by RF. By introducing the multiscale superpixel information into crop classification, MLF, MPF, MLV, and MPV increase the accuracy by 3.22%, 3.72%, 4.25%, and 4.30% with SVM; for RF, the accuracy improvements obtained by the proposed methods are 3.81%, 5.57%, 7.77%, and 9.30%, respectively. In this image, sesame and narrow-leaf soybean achieve the lowest accuracies: the accuracies given by SVM are 73.64% and 72.65%, while those given by RF are 33.07% and 51.38%. However, MPV with SVM gives 97.90% and 89.39% for sesame and narrow-leaf soybean, which is a satisfactory result, and MPV with RF achieves 69.91% and 65.75% for these two crops, which is much better than the original spectral-based result.
The classification accuracies for the Hanchuan dataset obtained with 100 training samples are shown in Table 11. The OAs given by MLF, MPF, MLV, and MPV with SVM are 82.91%, 83.15%, 86.41%, and 86.21%, while the OA of the spectral-based approach is only 73.93%. As for RF, the spectral-based approach gives 71.46%, while MLF, MPF, MLV, and MPV give 74.00%, 75.64%, 79.62%, and 81.03%, respectively. In this dataset, the post-processing methods show much better performance than the pre-processing ones, as the accuracies obtained by MLV and MPV are more than 3% higher than those obtained by MLF and MPF. For the class-specific accuracy, all of the approaches achieve unsatisfactory results on water spinach, watermelon, plastic, and bare soil, with accuracies lower than 60%. Especially for water spinach, the accuracies given by MLF, MPF, MLV, and MPV with RF are only 17.92%, 18.61%, 22.03%, and 25.00%.
The classification accuracies given by different methods for the Honghu dataset using SVM and RF with 100 training samples per class are presented in Table 12. It can be observed that the best result on the Honghu image is achieved by MPV, whether using SVM or RF. Specifically, MPV with SVM gives an accuracy of 89.31%, which is 5.23% higher than the accuracy of 84.08% given by MPV with RF. As for the pixel-wise spectral-based classification, the accuracies obtained with SVM and RF are 73.42% and 67.28%. The comparison between the spectral-based and multiscale superpixel-based approaches illustrates the effectiveness of the employment of spatial information. Moreover, among the 22 classes, 7 classes achieve a satisfactory accuracy higher than 90%, and only 3 classes have accuracies lower than 60% in the result given by MPV with SVM.
For visual interpretation, the classification maps obtained with SVM for the Longkou, Hanchuan, and Honghu datasets are shown in Figure 11, Figure 12 and Figure 13, while Figure 14, Figure 15 and Figure 16 show the classification maps given by different approaches using RF. It is observed that the pixel-wise spectral-based classification suffers from salt-and-pepper misclassification noise, whereas the superpixel-based approaches show satisfactory results with less salt-and-pepper noise and more accurate object boundaries, which illustrates the superiority of the proposed multiscale superpixel-based methods in identifying the crops in hyperspectral imagery.
In addition, the effectiveness of the multiscale superpixel-based methods relies on the superpixel segmentation results. Different superpixel segmentation algorithms will result in different performances, and thus it is important to select a suitable segmentation algorithm to generate the superpixel results. Meanwhile, the number of scales used in the proposed method also affects the final classification performance: a larger number of scales always implies a higher computational burden and time cost, while a smaller number of scales cannot comprehensively exploit the spectral-spatial information of images.

5. Conclusions

In this manuscript, multiscale superpixel-based approaches were developed for the fine recognition of crop types in UAV-based hyperspectral images. Superpixel segmentation was performed with different parameters to exploit the multiscale information of objects, yielding several superpixel maps. To fuse the multiscale superpixel information, pre-processing and post-processing strategies were proposed according to different principles. Specifically, the pre-processing strategy views the superpixel as the minimum image processing unit, and classification was conducted at the superpixel level at each scale; the label of each pixel was then assigned to the dominant class among the multiscale results. The post-processing strategy was inspired by the voting approach, and the class information of a superpixel was determined by the majority class of the pixels within it; by fusing the voting results obtained at different scales, the final classification map is obtained. Note that, for both the pre-processing and post-processing methods, the class probability outputs and the label information were taken into consideration to generate the final classification results by different approaches.
The experiments were conducted on the WHU-Hi dataset provided by the RSIDEA research group, which contains three individual UAV-based hyperspectral images. For each dataset, seven training sets with different numbers of labeled samples were supplied along with the hyperspectral image. Meanwhile, SVM and RF were employed to test the effectiveness of the proposed methods. The comparison of the single-scale approaches demonstrates that it is hard to select an optimal scale for a complex image scene. Moreover, the best results among the single-scale superpixel-based approaches were inferior to those of the multiscale superpixel-based approaches. Furthermore, the post-processing strategy shows better results than the pre-processing strategy, which illustrates the effectiveness of the voting methods. Additionally, the classification maps show that the proposed method is able to preserve object boundaries while avoiding discrete misclassified pixels. Future work will focus on the extraction of superpixel-based features for better classification of crops.

Author Contributions

Conceptualization, S.T. and Q.L.; methodology, Q.L.; validation, Q.L.; formal analysis, Q.L.; resources, Q.L. and L.W.; data curation, S.T. and L.W.; writing—original draft preparation, S.T. and Q.L.; writing—review and editing, Q.L. and L.W.; visualization, S.T.; project administration, L.W.; funding acquisition, Q.L. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Hubei Key Research and Development Program (2021BID002), the Natural Science Foundation of Hubei Province (2021CFB116), the National Key Research and Development Program of China (2019YFB2102902), the Natural Science Foundation Key Projects of Hubei Province (2020CFA005), the Scientific Research Project of Hubei Provincial Education Department (Q20201003), the Opening Foundation of Hubei Key Laboratory of Regional Development and Environmental Response (2019(B)002 and 2020(B)001), and the Opening Foundation of Hunan Engineering and Research Center of Natural Resource Investigation and Monitoring (2020-2).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karthikeyan, L.; Chawla, I.; Mishra, A.K. A Review of Remote Sensing Applications in Agriculture for Food Security: Crop Growth and Yield, Irrigation, and Crop Losses. J. Hydrol. 2020, 586, 124905. [Google Scholar] [CrossRef]
  2. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop Type Classification Using a Combination of Optical and Radar Remote Sensing Data: A Review. Int. J. Remote Sens. 2019, 40, 6553–6595. [Google Scholar] [CrossRef]
  3. Mathur, A.; Foody, G.M. Crop Classification by Support Vector Machine with Intelligently Selected Training Data for an Operational Application. Int. J. Remote Sens. 2008, 29, 2227–2240. [Google Scholar] [CrossRef]
  4. Moran, M.S.; Inoue, Y.; Barnes, E.M. Opportunities and Limitations for Image-Based Remote Sensing in Precision Crop Management. Remote Sens. Environ. 1997, 61, 319–346. [Google Scholar] [CrossRef]
  5. Liaghat, S.; Balasundram, S.K. A Review: The Role of Remote Sensing in Precision Agriculture. Am. J. Agric. Biol. Sci. 2010, 5, 50–55. [Google Scholar] [CrossRef]
  6. Mulla, D.J. Twenty Five Years of Remote Sensing in Precision Agriculture: Key Advances and Remaining Knowledge Gaps. Biosyst. Eng. 2013, 114, 358–371. [Google Scholar] [CrossRef]
  7. Seelan, S.K.; Laguette, S.; Casady, G.M.; Seielstad, G.A. Remote Sensing Applications for Precision Agriculture: A Learning Community Approach. Remote Sens. Environ. 2003, 88, 157–169. [Google Scholar] [CrossRef]
  8. Ok, A.O.; Akar, O.; Gungor, O. Evaluation of Random Forest Method for Agricultural Crop Classification. Eur. J. Remote Sens. 2012, 45, 421–432. [Google Scholar] [CrossRef]
  9. Zhao, H.; Chen, Z.; Jiang, H.; Jing, W.; Sun, L.; Feng, M. Evaluation of Three Deep Learning Models for Early Crop Classification Using Sentinel-1A Imagery Time Series—A Case Study in Zhanjiang, China. Remote Sens. 2019, 11, 2673. [Google Scholar] [CrossRef]
  10. Piedelobo, L.; Hernández-López, D.; Ballesteros, R.; Chakhar, A.; Pozo, S.D.; González-Aguilera, D.; Moreno, M.A. Scalable Pixel-Based Crop Classification Combining Sentinel-2 and Landsat-8 Data Time Series: Case Study of the Duero River Basin. Agric. Syst. 2019, 171, 36–50. [Google Scholar] [CrossRef]
  11. Chakhar, A.; Ortega-Terol, D.; Hernández-López, D.; Ballesteros, R.; Ortega, J.F.; Moreno, M.A. Assessing the Accuracy of Multiple Classification Algorithms for Crop Classification Using Landsat-8 and Sentinel-2 Data. Remote Sens. 2020, 12, 1735. [Google Scholar] [CrossRef]
  12. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  13. Sonobe, R.; Yamaya, Y.; Tani, H.; Wang, X.; Kobayashi, N.; Mochizuki, K. Assessing the Suitability of Data from Sentinel-1A and 2A for Crop Classification. GISci. Remote Sens. 2017, 54, 918–938. [Google Scholar] [CrossRef]
  14. Kumar, P.; Gupta, D.K.; Mishra, V.N.; Prasad, R. Comparison of Support Vector Machine, Artificial Neural Network, and Spectral Angle Mapper Algorithms for Crop Classification Using LISS IV Data. Int. J. Remote Sens. 2015, 36, 1604–1617. [Google Scholar] [CrossRef]
  15. Hassler, S.C.; Baysal-Gurel, F. Unmanned Aircraft System (UAS) Technology and Applications in Agriculture. Agronomy 2019, 9, 618. [Google Scholar] [CrossRef]
  16. Maes, W.H.; Steppe, K. Perspectives for Remote Sensing with Unmanned Aerial Vehicles in Precision Agriculture. Trends Plant Sci. 2019, 24, 152–164. [Google Scholar] [CrossRef]
  17. Senthilnath, J.; Kandukuri, M.; Dokania, A.; Ramesh, K.N. Application of UAV Imaging Platform for Vegetation Analysis Based on Spectral-Spatial Methods. Comput. Electron. Agric. 2017, 140, 8–24. [Google Scholar] [CrossRef]
  18. Ye, H.; Huang, W.; Huang, S.; Cui, B.; Dong, Y.; Guo, A.; Ren, Y.; Jin, Y. Identification of Banana Fusarium Wilt Using Supervised Classification Algorithms with UAV-Based Multi-Spectral Imagery. Int. J. Agric. Biol. Eng. 2020, 13, 136–142. [Google Scholar] [CrossRef]
  19. Uto, K.; Seki, H.; Saito, G.; Kosugi, Y. Characterization of Rice Paddies by a UAV-Mounted Miniature Hyperspectral Sensor System. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 851–860. [Google Scholar] [CrossRef]
  20. Zhong, Y.; Wang, X.; Xu, Y.; Wang, S.; Jia, T.; Hu, X.; Zhao, J.; Wei, L.; Zhang, L. Mini-UAV-Borne Hyperspectral Remote Sensing: From Observation and Processing to Applications. IEEE Geosci. Remote Sens. Mag. 2018, 6, 46–62. [Google Scholar] [CrossRef]
  21. Lu, B.; Dao, P.D.; Liu, J.; He, Y.; Shang, J. Recent Advances of Hyperspectral Imaging Technology and Applications in Agriculture. Remote Sens. 2020, 12, 2659. [Google Scholar] [CrossRef]
  22. Sahoo, R.N.; Ray, S.S.; Manjunath, K.R. Hyperspectral Remote Sensing of Agriculture. Curr. Sci. 2015, 108, 848–859. [Google Scholar]
  23. Adão, T.; Hruška, J.; Pádua, L.; Bessa, J.; Peres, E.; Morais, R.; Sousa, J.J. Hyperspectral Imaging: A Review on UAV-Based Sensors, Data Processing and Applications for Agriculture and Forestry. Remote Sens. 2017, 9, 1110. [Google Scholar] [CrossRef]
  24. Ishida, T.; Kurihara, J.; Viray, F.A.; Namuco, S.B.; Paringit, E.C.; Perez, G.J.; Takahashi, Y.; Marciano, J.J. A Novel Approach for Vegetation Classification Using UAV-Based Hyperspectral Imaging. Comput. Electron. Agric. 2018, 144, 80–85. [Google Scholar] [CrossRef]
  25. Wei, L.; Yu, M.; Liang, Y.; Yuan, Z.; Huang, C.; Li, R.; Yu, Y. Precise Crop Classification Using Spectral-Spatial-Location Fusion Based on Conditional Random Fields for UAV-Borne Hyperspectral Remote Sensing Imagery. Remote Sens. 2019, 11, 2011. [Google Scholar] [CrossRef]
  26. Zhong, Y.; Hu, X.; Luo, C.; Wang, X.; Zhao, J.; Zhang, L. WHU-Hi: UAV-Borne Hyperspectral with High Spatial Resolution (H2) Benchmark Datasets and Classifier for Precise Crop Identification Based on Deep Convolutional Neural Network with CRF. Remote Sens. Environ. 2020, 250, 112012. [Google Scholar] [CrossRef]
  27. Li, D.; Wang, Q.; Kong, F. Superpixel-Feature-Based Multiple Kernel Sparse Representation for Hyperspectral Image Classification. Signal Process. 2020, 176, 107682. [Google Scholar] [CrossRef]
  28. Li, J.; Zhang, H.; Zhang, L. Efficient Superpixel-Level Multitask Joint Sparse Representation for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351. [Google Scholar]
  29. Lu, Q.; Wei, L. Multiscale Superpixel-Based Active Learning for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  30. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral–Spatial Classification of Hyperspectral Images with a Superpixel-Based Discriminative Sparse Model. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4186–4201. [Google Scholar] [CrossRef]
  31. Cui, B.; Cui, J.; Hao, S.; Guo, N.; Lu, Y. Spectral-Spatial Hyperspectral Image Classification Based on Superpixel and Multi-Classifier Fusion. Int. J. Remote Sens. 2020, 41, 6157–6182. [Google Scholar] [CrossRef]
  32. Li, S.; Lu, T.; Fang, L.; Jia, X.; Benediktsson, J.A. Probabilistic Fusion of Pixel-Level and Superpixel-Level Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7416–7430. [Google Scholar] [CrossRef]
  33. Liu, M.-Y.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2097–2104. [Google Scholar]
  34. Huang, X.; Lu, Q.; Zhang, L.; Plaza, A. New Postprocessing Methods for Remote Sensing Image Classification: A Systematic Study. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7140–7159. [Google Scholar] [CrossRef]
  35. Foody, G.M. Status of Land Cover Classification Accuracy Assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the research.
Figure 2. The LongKou dataset: (a) Hyperspectral image; (b) Ground truth; (c) Typical crop photos in the study area.
Figure 3. The HanChuan dataset: (a) Hyperspectral image; (b) Ground truth; (c) Typical crop photos in the study area.
Figure 4. The HongHu dataset: (a) Hyperspectral image; (b) Ground truth; (c) Typical crop photos in the study area.
Figure 5. Accuracies given by the single-scale approaches with (a) 25, (b) 50, (c) 100, (d) 150, (e) 200, (f) 250, and (g) 300 training samples per class for the Longkou dataset using SVM, where the vertical axis represents the OAs, and the horizontal axis represents the superpixel segmentation scale.
Figure 6. Accuracies given by the single-scale approaches with (a) 25, (b) 50, (c) 100, (d) 150, (e) 200, (f) 250, and (g) 300 training samples per class for the Hanchuan dataset using SVM, where the vertical axis represents the OAs, and the horizontal axis represents the superpixel segmentation scale.
Figure 7. Accuracies given by the single-scale approaches with (a) 25, (b) 50, (c) 100, (d) 150, (e) 200, (f) 250, and (g) 300 training samples per class for the Honghu dataset using SVM, where the vertical axis represents the OAs, and the horizontal axis represents the superpixel segmentation scale.
Figure 8. Accuracies given by the single-scale approaches with (a) 25, (b) 50, (c) 100, (d) 150, (e) 200, (f) 250, and (g) 300 training samples per class for the Longkou dataset using RF, where the vertical axis represents the OAs, and the horizontal axis represents the superpixel segmentation scale.
Figure 9. Accuracies given by the single-scale approaches with (a) 25, (b) 50, (c) 100, (d) 150, (e) 200, (f) 250, and (g) 300 training samples per class for the Hanchuan dataset using RF, where the vertical axis represents the OAs, and the horizontal axis represents the superpixel segmentation scale.
Figure 10. Accuracies given by the single-scale approaches with (a) 25, (b) 50, (c) 100, (d) 150, (e) 200, (f) 250, and (g) 300 training samples per class for the Honghu dataset using RF, where the vertical axis represents the OAs, and the horizontal axis represents the superpixel segmentation scale.
Figure 11. The classification results for the Longkou dataset using SVM: (a) Pixel-wise spectral classification; (b) MLF; (c) MPF; (d) MLV; (e) MPV.
Figure 12. The classification results for the Hanchuan dataset using SVM: (a) Pixel-wise spectral classification; (b) MLF; (c) MPF; (d) MLV; (e) MPV.
Figure 13. The classification results for the Honghu dataset using SVM: (a) Pixel-wise spectral classification; (b) MLF; (c) MPF; (d) MLV; (e) MPV.
Figure 14. The classification results for the Longkou dataset using RF: (a) Pixel-wise spectral classification; (b) MLF; (c) MPF; (d) MLV; (e) MPV.
Figure 15. The classification results for the Hanchuan dataset using RF: (a) Pixel-wise spectral classification; (b) MLF; (c) MPF; (d) MLV; (e) MPV.
Figure 16. The classification results for the Honghu dataset using RF: (a) Pixel-wise spectral classification; (b) MLF; (c) MPF; (d) MLV; (e) MPV.
Table 1. Ground truth classes for the LongKou dataset and the corresponding sample number.

No.   Class Name            Samples
C1    Corn                  34,511
C2    Cotton                 8374
C3    Sesame                 3031
C4    Broad-leaf soybean    63,212
C5    Narrow-leaf soybean    4151
C6    Rice                  11,854
C7    Water                 67,056
C8    Roads and houses       7124
C9    Mixed weed             5229
Table 2. Ground truth classes for the HanChuan dataset and the corresponding sample number.

No.   Class Name       Samples
C1    Strawberry       44,735
C2    Cowpea           22,753
C3    Soybean          10,287
C4    Sorghum           5353
C5    Water spinach     1200
C6    Watermelon        4533
C7    Greens            5903
C8    Trees            17,978
C9    Grass             9469
C10   Red roof         10,516
C11   Gray roof        16,911
C12   Plastic           3679
C13   Bare soil         9116
C14   Road             18,560
C15   Bright object     1136
C16   Water            75,401
Table 3. Ground truth classes for the Honghu dataset and the corresponding sample number.

No.   Class Name                 Samples
C1    Red roof                   14,041
C2    Road                        3512
C3    Bare soil                  21,821
C4    Cotton                    163,285
C5    Cotton firewood             6218
C6    Rape                       44,557
C7    Chinese cabbage            24,103
C8    Pakchoi                     4054
C9    Cabbage                    10,819
C10   Tuber mustard              12,394
C11   Brassica parachinensis     11,015
C12   Brassica chinensis          8954
C13   Small Brassica chinensis   22,507
C14   Lactuca sativa              7356
C15   Celtuce                     1002
C16   Film covered lettuce        7262
C17   Romaine lettuce             3010
C18   Carrot                      3217
C19   White radish                8712
C20   Garlic sprout               3486
Table 4. OAs (%) and Kappa coefficients obtained by different methods for the Longkou dataset using SVM with different numbers of training samples per class.

Training     Spectral        MLF             MPF             MLV             MPV
Samples      OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa
25           90.32  0.875    95.42  0.940    96.09  0.949    97.18  0.963    97.54  0.968
50           92.83  0.907    97.15  0.963    97.79  0.971    98.70  0.983    98.96  0.986
100          94.22  0.925    97.44  0.966    97.94  0.973    98.47  0.980    98.52  0.981
150          95.64  0.943    97.75  0.970    98.38  0.979    98.91  0.986    99.03  0.987
200          96.17  0.950    98.47  0.980    98.87  0.985    98.85  0.985    99.07  0.988
250          96.23  0.951    98.43  0.979    98.86  0.985    99.08  0.988    99.21  0.990
300          96.98  0.961    98.68  0.983    98.99  0.987    99.27  0.990    99.33  0.991
Table 5. OAs (%) and Kappa coefficients obtained by different methods for the Hanchuan dataset using SVM with different numbers of training samples per class.

Training     Spectral        MLF             MPF             MLV             MPV
Samples      OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa
25           59.28  0.541    63.06  0.583    64.36  0.597    64.43  0.598    66.60  0.622
50           68.67  0.643    73.30  0.695    75.04  0.715    76.65  0.733    77.09  0.738
100          73.93  0.700    82.91  0.802    83.15  0.805    86.41  0.842    86.21  0.840
150          78.24  0.749    85.59  0.833    86.23  0.840    89.75  0.881    89.65  0.880
200          80.08  0.770    87.06  0.850    87.48  0.855    90.88  0.894    90.67  0.891
250          80.66  0.776    86.64  0.845    87.02  0.849    90.47  0.889    90.68  0.892
300          80.89  0.779    87.77  0.858    87.96  0.860    90.64  0.891    90.79  0.893
Table 6. OAs (%) and Kappa coefficients obtained by different methods for the Honghu dataset using SVM with different numbers of training samples per class.

Training     Spectral        MLF             MPF             MLV             MPV
Samples      OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa
25           66.98  0.605    81.51  0.770    82.56  0.783    83.93  0.799    83.39  0.794
50           64.08  0.581    82.16  0.779    83.35  0.793    81.63  0.774    81.80  0.777
100          73.42  0.681    88.26  0.853    88.74  0.859    88.87  0.861    89.31  0.866
150          74.74  0.695    88.50  0.856    88.90  0.861    89.71  0.871    90.06  0.876
200          77.05  0.721    90.24  0.877    90.65  0.882    90.87  0.885    91.05  0.888
250          77.43  0.726    90.80  0.884    91.28  0.890    91.70  0.896    91.86  0.898
300          79.77  0.752    90.88  0.885    91.06  0.887    92.07  0.900    92.40  0.904
Table 7. OAs (%) and Kappa coefficients obtained by different methods for the Longkou dataset using RF with different numbers of training samples per class.

Training     Spectral        MLF             MPF             MLV             MPV
Samples      OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa
25           79.79  0.746    86.10  0.823    87.17  0.837    91.08  0.886    91.31  0.888
50           84.14  0.799    85.10  0.812    87.14  0.837    92.02  0.898    94.69  0.931
100          86.03  0.823    89.84  0.870    91.60  0.892    93.80  0.920    95.33  0.940
150          89.15  0.861    94.32  0.926    94.54  0.929    97.07  0.962    97.13  0.963
200          90.67  0.880    94.41  0.927    94.85  0.933    97.47  0.967    97.42  0.966
250          89.64  0.867    94.38  0.927    94.77  0.932    96.83  0.959    97.14  0.963
300          90.76  0.881    94.02  0.922    94.74  0.932    97.48  0.967    97.75  0.971
Table 8. OAs (%) and Kappa coefficients obtained by different methods for the Hanchuan dataset using RF with different numbers of training samples per class.

Training     Spectral        MLF             MPF             MLV             MPV
Samples      OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa
25           57.69  0.524    62.69  0.578    63.03  0.582    64.55  0.599    64.54  0.599
50           69.38  0.649    71.36  0.671    72.73  0.686    76.98  0.734    78.49  0.751
100          71.46  0.673    74.00  0.702    75.64  0.720    79.62  0.765    81.03  0.781
150          74.61  0.708    78.41  0.751    79.45  0.762    82.72  0.800    83.53  0.809
200          77.19  0.736    80.26  0.771    80.91  0.779    85.96  0.837    86.16  0.839
250          77.24  0.737    79.27  0.760    80.03  0.769    85.44  0.831    85.98  0.837
300          77.15  0.736    79.93  0.768    81.19  0.782    85.23  0.829    86.24  0.840
Table 9. OAs (%) and Kappa coefficients obtained by different methods for the Honghu dataset using RF with different numbers of training samples per class.

Training     Spectral        MLF             MPF             MLV             MPV
Samples      OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa    OA     Kappa
25           62.79  0.559    76.74  0.713    78.15  0.730    79.50  0.745    78.56  0.736
50           59.99  0.537    77.75  0.727    78.69  0.738    78.57  0.739    78.47  0.739
100          67.28  0.612    80.64  0.760    82.53  0.783    83.63  0.797    84.08  0.803
150          68.60  0.626    81.69  0.773    82.43  0.782    84.47  0.807    84.48  0.808
200          69.96  0.640    82.12  0.778    82.46  0.782    84.75  0.810    85.69  0.822
250          70.44  0.646    83.16  0.791    84.05  0.802    85.32  0.817    86.45  0.832
300          72.08  0.664    83.76  0.798    84.42  0.806    86.60  0.833    87.71  0.847
Table 10. Classification accuracies given by different methods for the Longkou dataset using SVM and RF with 100 training samples per class.

         SVM                                              RF
No.      Spectral  MLF     MPF     MLV     MPV       Spectral  MLF     MPF     MLV     MPV
C1       97.66     99.51   99.54   99.53   99.55     93.28     99.14   99.48   99.44   99.55
C2       79.16     97.08   97.81   95.96   96.04     68.39     68.17   68.25   97.09   95.37
C3       73.64     97.31   99.00   97.49   97.90     33.07     45.04   56.06   52.03   69.91
C4       92.21     98.18   98.55   98.19   98.10     81.64     88.90   91.50   90.19   92.86
C5       72.65     71.20   75.02   90.04   89.39     51.38     43.89   44.52   63.97   65.75
C6       98.44     98.91   98.88   99.15   99.21     90.44     98.15   98.57   98.85   98.61
C7       99.96     99.96   99.96   99.96   99.96     99.95     99.96   99.96   99.95   99.95
C8       88.15     83.72   85.38   94.37   95.24     84.91     79.55   82.13   91.99   93.48
C9       85.16     84.71   90.89   91.84   93.89     59.24     67.55   77.62   86.38   87.67
OA       94.22     97.44   97.94   98.47   98.52     86.03     89.84   91.60   93.80   95.33
Kappa    0.925     0.966   0.973   0.980   0.981     0.823     0.870   0.892   0.920   0.940
Table 11. Classification accuracies given by different methods for the Hanchuan dataset using SVM and RF with 100 training samples per class.

         SVM                                              RF
No.      Spectral  MLF     MPF     MLV     MPV       Spectral  MLF     MPF     MLV     MPV
C1       78.83     90.95   91.59   92.66   92.09     74.45     81.98   84.30   86.96   87.37
C2       60.19     72.79   73.71   81.51   78.99     44.52     41.14   42.44   55.55   60.12
C3       64.99     78.88   80.58   88.49   88.51     62.62     61.74   61.14   75.57   78.15
C4       87.91     91.43   91.66   91.61   92.67     82.06     92.27   94.86   93.52   95.45
C5       25.04     25.74   28.46   39.83   41.60     18.65     17.92   18.61   22.03   25.00
C6       28.57     51.90   53.21   55.93   55.84     23.28     36.14   42.73   40.11   42.44
C7       64.32     75.81   77.23   73.50   74.34     65.58     75.31   76.28   73.77   74.08
C8       60.59     73.35   75.93   78.77   80.09     61.74     67.70   68.87   71.71   73.01
C9       62.25     75.22   74.87   82.02   82.70     50.31     56.59   56.28   61.84   64.69
C10      85.52     88.78   90.34   92.38   92.16     79.61     85.34   86.64   84.57   86.17
C11      75.98     85.81   86.35   84.79   84.33     78.28     80.12   83.56   83.76   85.03
C12      33.12     55.45   50.31   65.09   58.88     32.32     37.47   38.62   50.22   49.86
C13      41.02     55.47   54.25   56.99   58.51     44.55     47.32   48.05   57.53   59.04
C14      68.35     69.45   68.90   78.00   78.99     66.54     60.62   64.03   69.07   73.31
C15      58.34     72.87   71.78   73.08   77.65     58.56     61.23   64.36   71.61   71.67
C16      95.70     97.96   97.84   98.06   98.08     97.05     96.53   97.46   98.54   98.69
OA       73.93     82.91   83.15   86.41   86.21     71.46     74.00   75.64   79.62   81.03
Kappa    0.700     0.802   0.805   0.842   0.840     0.673     0.702   0.720   0.765   0.781
Table 12. Classification accuracies given by different methods for the Honghu dataset using SVM and RF with 100 training samples per class.

         SVM                                              RF
No.      Spectral  MLF     MPF     MLV     MPV       Spectral  MLF     MPF     MLV     MPV
C1       92.20     96.01   95.90   96.73   97.37     85.09     91.06   90.74   93.78   94.31
C2       70.67     78.46   80.86   74.72   81.43     65.80     67.89   71.36   70.53   77.66
C3       83.69     87.96   87.61   91.34   91.68     80.79     82.25   84.30   90.16   91.08
C4       86.04     97.93   97.91   97.07   97.05     82.06     94.92   96.10   94.25   93.57
C5       34.13     72.91   78.04   67.70   69.78     26.02     50.26   60.04   50.97   47.77
C6       87.15     92.92   93.53   93.69   94.16     81.96     89.03   90.33   90.80   92.48
C7       68.43     77.39   77.86   80.53   81.07     56.80     62.95   61.55   67.51   70.17
C8       24.83     54.03   57.93   59.51   59.15     19.43     33.24   39.23   43.20   45.94
C9       94.55     95.58   95.49   95.87   96.13     92.11     94.73   94.99   95.94   96.15
C10      57.51     84.56   85.29   84.38   85.89     42.71     50.35   52.68   74.86   78.58
C11      41.06     73.91   73.23   73.82   72.99     33.37     52.18   57.80   67.53   64.05
C12      53.33     70.30   71.64   71.53   71.38     51.89     61.94   63.36   64.82   67.93
C13      59.75     70.38   72.71   72.08   74.06     58.60     68.62   72.61   69.54   72.22
C14      70.24     74.01   76.56   77.16   81.97     61.34     69.22   68.60   73.30   76.48
C15      13.66     63.98   63.89   78.17   76.22     13.62     32.29   31.13   80.04   63.29
C16      85.03     94.67   95.18   95.85   96.28     81.64     90.01   92.72   93.85   94.81
C17      70.28     88.99   89.70   93.41   94.33     68.13     78.13   84.79   91.22   93.47
C18      45.13     73.34   76.86   76.17   75.62     33.28     49.41   48.18   51.45   52.33
C19      75.09     89.08   88.87   87.49   88.25     67.27     83.61   83.23   81.09   84.80
C20      58.31     76.98   77.50   82.70   82.63     36.62     56.63   58.68   76.96   75.15
C21      21.25     29.08   29.77   36.58   41.69     20.39     21.20   22.54   32.80   38.83
C22      38.83     64.57   60.78   61.94   57.14     28.63     55.37   54.76   44.54   43.38
OA       73.42     88.26   88.74   88.87   89.31     67.28     80.64   82.53   83.63   84.08
Kappa    0.681     0.853   0.859   0.861   0.866     0.612     0.760   0.783   0.797   0.803
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
