Article

A Possibility-Based Method for Urban Land Cover Classification Using Airborne Lidar Data

School of Information and Communication Engineering, North University of China, Taiyuan 030051, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 5941; https://doi.org/10.3390/rs14235941
Submission received: 29 August 2022 / Revised: 11 November 2022 / Accepted: 21 November 2022 / Published: 24 November 2022

Abstract

Airborne light detection and ranging (LiDAR) has been recognized as a reliable and accurate measurement tool for forest volume estimation, urban scene reconstruction and land cover classification, where LiDAR data provide crucial and efficient features such as intensity, elevation and coordinates. Because of the complexity of urban environments, it is difficult to classify land cover accurately and quickly from remotely sensed data. Methods based on the Dempster–Shafer evidence theory (DS theory) offer a possible solution to this problem. However, the inconsistency in the correspondence between classification features and land cover attributes constrains the improvement of classification accuracy. Within the original DS evidence theory classification framework, we propose a novel method for constructing a basic probability assignment (BPA) function based on possibility distributions and apply it to airborne LiDAR land cover classification. The proposed approach first selects a feature classification subset according to single-feature classification results. Second, the possibility distributions of the four features are established, yielding the uncertainty relationship between feature values and land cover attributes. Then, suitable interval cut-off points are selected and the BPA functions are constructed. Finally, DS evidence theory is used for land cover classification. LiDAR data and co-registered imagery acquired by the TopoSys Falcon II were used in the performance tests of the proposed method. The experimental results show that the proposed method significantly improves the classification accuracy compared with the basic DS method.


1. Introduction

Land cover classification has made significant contributions to many applications [1,2,3], such as global ecosystem change [4], urban transport design and management [5], land planning [6] and smart city building [7]. Airborne laser scanning and ranging systems [8] can acquire high-resolution data sets [9] and 3D topographic data [10], which makes the data perform well in topographic and land surveys [11,12]. As urbanization accelerates, there is an urgent need for information on urban land cover, which consists of complex physical materials and surfaces [13,14]. It is difficult to accurately classify urban land cover from remotely sensed data [15,16]. Accurate land cover classification is a necessary prerequisite for airborne LiDAR to be used to unique advantage in applications such as environmental monitoring and 3D city modelling. Therefore, a fast and accurate method for land cover classification is essential for urban management planning, especially for developing countries undergoing rapid urbanization that need to monitor regional changes of interest.
The main content of land cover classification research is the analysis of observed land cover categories and attributes from different types of remotely sensed data through feature representation methods. Urban systems are among the most complex ecosystems, as they have a large number of different components. The process of urbanization is often closely linked to a city's economic development, cultural exchange and technological progress, and changes in local structures or landscapes may coincide with changes in national or regional planning objectives and land use regulations [17,18,19]. Overall, remote sensing is an important tool for obtaining land cover information in rapidly growing urban areas, especially remotely sensed data with high spatial and temporal resolution [20].
From the perspective of remotely sensed data acquisition, land cover classification studies can be divided into two categories: those based on point cloud data alone [21,22] and those based on the fusion of point cloud data with other sensor data [23,24]. In the absence of other sensor data, point cloud classification methods were the main research direction at the very beginning because of their intuitive features [25,26]. Elberink et al. [27] applied various heterogeneous texture features to unsupervised classification of raw LiDAR data. Emerging machine learning techniques are currently receiving increasing attention in the literature [28,29]. Machine learning-based classification rules use point classifiers such as support vector machines (SVM) [30] and random forests (RF) [31,32] to learn from labeled LiDAR training data and build a classification model. SVM has been shown to outperform other classifiers owing to its high capacity to generalize complex features [33]. Wei et al. [34] proposed a convolutional neural network (CNN)-based method for 3D target classification. Although such methods can extract features directly from high-resolution 3D point cloud data, the existing 3D point cloud features are limited in type and lack the rich true-color texture information present in the scene. Therefore, existing research on land cover classification is mainly based on remote sensing image processing or multi-source data fusion. Owing to the high resolution of airborne LiDAR point cloud data and the true-color characteristics of remote sensing image data, the fusion of these two data sources is gradually becoming an effective approach for land cover classification research [35]. Kim et al. [36] addressed the misclassification of building objects through output-level fusion of aerial imagery and LiDAR data. With increased spectral resolution, multispectral and hyperspectral sensors can discriminate features based on differences in spectral characteristics, expanding the amount of information available from remote sensing [37,38]. Chen et al. [39] examined the application of multispectral and airborne LiDAR data to land cover mapping in large urban areas; their experimental results show that multi-sensor data fusion outperforms methods based on point cloud data alone. Therefore, combining aerial image data with airborne LiDAR point cloud data is an effective method for ground object classification [40]. Our study is also based on this type of approach and starts from a feature-level fusion strategy.
DS evidence theory [41] is an effective method of information fusion [42]: it can deal not only with the uncertainty and inconsistency of multi-sensor data [43], but also with the inevitable ambiguity and instability under noise or possible interference [44,45]. It also provides a solution for fast and effective land cover classification from multi-source data. Rottensteiner et al. [46] proposed a layering technique based on generating DTMs and applying Dempster–Shafer theory to detect buildings from LiDAR data and multispectral images, but the method could not handle uncertain pixels located in mixed regions of different classes. Feng et al. [47] proposed a multi-classifier fusion method for the classification of high-resolution RS images. In evidence theory, the ability to express "uncertainty" directly is embodied in the mass function and is retained throughout the evidence synthesis process.
Possibility theory [48] builds on fuzzy theory and probabilistic processing methods; it can directly measure the possibility of an event occurring and effectively characterize the mapping relationship between data [49]. It is widely used in fields such as infrared image fusion and risk assessment [50]. The essence of land cover classification is to discern the possibility of a pixel belonging to different land cover categories. Possibility theory can reflect the relationship between classification features and land cover categories, efficiently modelling the possibility of classification results while maintaining good feature distribution characteristics.
In previous studies, probabilities were distributed to each category by a BPA function under the DS evidence theory framework. Yang et al. [51] used a hierarchical combination framework to classify urban land cover and constructed a linear trust assignment function based on basic DS evidence theory, which can solve the problem of classifying fuzzy points; however, their BPA function could not accurately describe the uncertain relationship between the classification feature data and the land cover class. Simple BPA functions may not conform to the true relationship between features and land cover categories, and using only a single trust assignment function ignores the complementarity and differences between features. This may lead to many classification errors in confused regions. In this study, we propose a method for constructing the BPA function based on the possibility distribution, which can accurately describe the uncertain information between features and classes.
The remainder of this paper is structured as follows. Section 2 presents our land cover classification method. Section 3 presents the experimental results, Section 4 discusses them, and Section 5 concludes the paper.

2. Materials and Methods

The workflow of the proposed methodology is shown in Figure 1. The proposed method takes as input airborne LiDAR point cloud data fused with co-registered RGB and infrared image data. First, a feature classification subset was established according to the physical meaning of each feature and its correspondence to land cover in space; second, the possibility distributions were constructed using a fuzzy statistics method, giving the degree of possibility of the different land cover categories and the possibility distribution curves of the four features; then, the change trends and interval threshold points were selected and the BPA functions were constructed; finally, the different features were synthesized using DS evidence theory and the final classification results were obtained by the maximum-probability principle.

2.1. Selection of Classification Features

Point cloud data features are based on the point set structure of airborne LiDAR point cloud data [52], which can be divided into two categories: direct features and indirect features. The direct features are the point cloud data directly obtained through external acquisition and internal decomposition, including absolute coordinates, intensity, number of echoes, echo number, etc. The indirect features are mainly calculated through the local geometric features and statistical characteristics of the point cloud, generally including elevation difference, elevation standard deviation, as well as normal vector, normal curvature and other statistical features calculated through the local geometric properties of the point cloud. The analysis of image features is mainly based on the different bands of remote sensing data products, which can be divided into spectral features and texture features, while spectral features are divided into direct spectral features and indirect spectral features. The most intuitive spectral features are the direct acquisition of the brightness values of different bands, which can be used for manual visual interpretation, such as the brightness values of R, G, B, IR and other bands. The indirect spectral characteristics are mainly indices that, through the analysis of the spectral properties of different features, can qualitatively and quantitatively assess the growth pattern of various vegetation and the distribution pattern of buildings.
When the laser pulse comes into contact with the target under test, the reflected signal of part of the pulse energy is received and recorded, while the remaining pulse energy continues to propagate and is reflected again when another target, or another part of the original target, is encountered, so that the airborne laser scanning system receives multiple echoes. We define the first echo (FE) as the first echo signal received and the last echo (LE) as the last echo signal received. Intensity (IN) is a measure of the strength of the LiDAR pulse echo generated at a point. In addition to these direct features, derived features such as the normalized difference vegetation index (NDVI) and height difference (HD) are also important. The HD between the first and last echoes is given by Equation (1), and the NDVI by Equation (2):
$HD = FE - LE$ (1)

$NDVI = \dfrac{NIR - R}{NIR + R}$ (2)
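As a concrete illustration, both derived features can be computed per pixel once the first-echo and last-echo elevations and the red and near-infrared bands have been gridded into co-registered rasters. The sketch below is a minimal example in Python/NumPy; the array names are placeholders and not the variables of the original implementation.

```python
import numpy as np

def height_difference(fe_elevation, le_elevation):
    """HD = FE - LE (Equation (1)): per-pixel difference between first- and last-echo elevations."""
    return fe_elevation - le_elevation

def ndvi(nir, red, eps=1e-6):
    """NDVI = (NIR - R) / (NIR + R) (Equation (2)); eps guards against division by zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Tiny synthetic rasters: large HD values suggest vegetation or building edges,
# zero HD suggests impenetrable surfaces such as roads and roofs.
fe = np.array([[12.0, 3.1], [2.9, 8.4]])   # first-echo elevation (m)
le = np.array([[ 2.5, 3.0], [2.9, 2.1]])   # last-echo elevation (m)
print(height_difference(fe, le))
```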
The key issues in classification are feature selection and suitability analysis of the data sources used. Feature selection means choosing a set of features from the remotely sensed data that can characterize the classes to be distinguished; by constructing an appropriate original feature space, better classification results can be achieved. The indicators above are well validated in the previous literature. In urban areas, a number of experiments have demonstrated that LiDAR-derived height features can clearly differentiate between high and low vegetation [53,54]. Song et al. [55] were the first to examine the use of airborne LiDAR intensity data as a feature for classifying urban land cover, which can serve as an additional feature in the classification domain. NDVI excels at separating grass cover from the ground [56]. We analyze the differences between the classification abilities of the features to construct the feature set.
To establish the relationship between features and classes, we chose an image of size 100 × 100 pixels that contains all the land cover classes to be classified. In this article, four classes are considered based on the data features and applications, namely buildings, trees, grass and roads. The grey-scale values corresponding to each class are counted; each class shows a relatively homogeneous pattern in a given feature space, from which the class/feature histograms are obtained. Histograms of the four features are shown in Figure 2, where the x-axis is the grey-scale value and the y-axis is the number of pixels with that value.
In our work, segmentation thresholds were selected based on feature category histograms to classify individual feature images. The thresholds used in this paper are shown in Table 1. Then, we analyzed complementarity and differences between the classified features through the classification results.
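One plausible reading of the two thresholds per feature in Table 1 is that they split each grey-value image into three coarse groups (below h1, within [h1, h2] and above h2), which are then mapped to land cover classes feature by feature. The sketch below illustrates this for 8-bit feature images, using the HD thresholds from Table 1; the mapping of groups to classes is feature-dependent and is not reproduced here.

```python
import numpy as np

def threshold_classify(feature_img, h1, h2):
    """Split a grey-value feature image into three groups using the interval [h1, h2].

    Returns 0 for values below h1, 1 for values inside [h1, h2] and 2 for values
    above h2; which land cover class each group corresponds to depends on the feature
    (e.g. low HD suggests ground or grass, high HD suggests trees or buildings).
    """
    groups = np.full(feature_img.shape, 1, dtype=np.uint8)
    groups[feature_img < h1] = 0
    groups[feature_img > h2] = 2
    return groups

hd_img = np.random.randint(0, 256, size=(100, 100))    # placeholder 8-bit HD image
hd_groups = threshold_classify(hd_img, h1=76, h2=178)  # HD thresholds from Table 1
```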

2.2. Possibility Distribution Construction

Possibility theory is an approach for dealing with information uncertainty [57]; it extends variables that take a single point value in probability theory to intervals, as in fuzzy theory. In the field of land cover classification, the traditional fixed threshold between categories is fuzzified into an interval $[a, b]$, effectively avoiding the problem of one-size-fits-all hard-threshold classification.
Let $X$ be a possibility variable that takes values in a universe of discourse $U$, and let $R(X)$ denote a fuzzy restriction associated with $X$; $R(X) = F$ means that $F$ acts as a fuzzy restriction on $X$. The function $\pi_X$ represents a flexible restriction on the values of $X$; we define $\pi_X$ to be the possibility distribution over $U$, with the following conventions:
when $x = u$ is impossible, we define $\pi_X(u) = 0$;
when $x = u$ is totally possible, we define $\pi_X(u) = 1$.
Let $F$ be a fuzzy subset of $U$ characterized by the affiliation function $\mu_F$. Then

$\pi_X(u) = \mu_F(u)$
Thus, the possibility distribution function is numerically equal to the affiliation function. However, the affiliation function and the possibility distribution are not identical concepts merely because of this equality. The relationship between the possibility distribution and the affiliation function is established through fuzzy constraints, which allows the possibility of future events to be predicted.
Consider an air quality assessment system in which the CO content is closely related to air quality: when the air quality is good, the CO value lies in the range $[0.05, 0.15]$. In the proposition "air quality is good", good is a fuzzy subset of the universe of CO values, characterized by the affiliation function:
$\mu(x) = \begin{cases} 1, & x \le 0.05 \\ 10(0.15 - x), & 0.05 < x < 0.15 \\ 0, & x \ge 0.15 \end{cases}$
Take for instance the value $x = 0.07$, whose affiliation grade in the fuzzy set good is roughly 0.8. First, we interpret 0.8 as the degree to which 0.07 is compatible with the term "good"; then, we hypothesize that the statement "air quality is good" converts this compatibility degree into the degree of possibility that $x = 0.07$ given that the air quality is good.
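A quick numerical check of this affiliation function confirms the value quoted above:

```python
def good_air_quality(co_value):
    """Affiliation grade of a CO value in the fuzzy set 'good' (piecewise form given above)."""
    if co_value <= 0.05:
        return 1.0
    if co_value < 0.15:
        return 10.0 * (0.15 - co_value)
    return 0.0

# ~0.8: interpreted as the possibility that the CO value is 0.07 given good air quality
print(good_air_quality(0.07))
```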
The basic steps in constructing a possibility distribution based on fuzzy statistic method are as follows:
Under each trial, make an exact determination of whether the feature grey value $u_0$ belongs to class A or class B.
If the number of times that element $u_0$ belongs to class A in the $n$ trials is $m$, then the affiliation frequency of element $u_0$ to class A is defined as

$\text{affiliation frequency} = \dfrac{m}{n}$

As $n$ increases, the affiliation frequency of element $u_0$ stabilizes around a certain number, and this stable value $\pi_{\tilde{A}}(u_0)$ is taken as the possibility of element $u_0$ belonging to the fuzzy set:

$\pi_{\tilde{A}}(u_0) = \lim_{n \to \infty} \dfrac{m}{n}$
The degree of attribution to a land cover class describes the degree of possibility of belonging to that land cover type at different feature values, so the possibility distribution function can well reveal the uncertainty relationship between features and land cover classes.
In this work, possibility theory is used to reveal the uncertainty relationship between classification features and land cover classes: the attribution degree of a land cover class describes the degree of possibility of belonging to that class when the classification feature takes different values, and a possibility distribution function is constructed to determine the basic form of the BPA function. At each grey-scale value $u_0$, the number of pixels belonging to each land cover category is counted to obtain the affiliation frequencies, from which the possibility distribution functions of the four features are obtained, as shown in Figure 3.
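The fuzzy-statistics construction amounts to counting, at each grey value, what fraction of the labelled pixels belongs to each class. The following is a minimal sketch of this counting step, assuming an 8-bit feature image and a co-registered ground-truth label map (the array and class names are illustrative); in practice the raw frequencies would be smoothed before being used as the curves of Figure 3.

```python
import numpy as np

def possibility_distribution(feature_img, label_img, classes, n_levels=256):
    """Estimate the affiliation frequency m/n of each class at every grey value u0.

    feature_img : 2D uint8 array of grey values in [0, n_levels)
    label_img   : 2D array of class labels with the same shape
    Returns {class: 1D array of length n_levels}, where entry u0 is the fraction of
    pixels with grey value u0 that belong to the class, i.e. an estimate of pi(u0).
    """
    values = feature_img.ravel()
    labels = label_img.ravel()
    totals = np.bincount(values, minlength=n_levels).astype(float)
    dist = {}
    for c in classes:
        counts = np.bincount(values[labels == c], minlength=n_levels).astype(float)
        dist[c] = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    return dist
```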
From the analysis in Figure 3, it can be seen that a feature can divide land cover types into classes A and B according to a threshold value. As the grey-scale value increases, the possibility of class A decreases while the possibility of class B increases.

2.3. BPA Function Construction

DS evidence theory is built on the BPA function, whose construction directly affects the classification results. In order to better apply DS evidence synthesis rules to classification confusion areas, the BPA functions are selected according to the change relationship between feature values and land cover classes. Three types of change relationship between feature values and categories are considered: linear change, fast followed by slow, and slow followed by fast. In this article, three typical basic BPA functions are used as examples, as shown in Figure 4.
The triangular probability assignment function assumes a linear relationship between the feature value and the land cover class; the ridge probability assignment function assumes a fast-then-slow relationship; and the pointed probability assignment function assumes a slow-then-fast relationship.
Considering the uncertainty of the information obtained from different data sources, $P_1 = 0.02$ and $P_2 = 0.98$ are chosen in this article to represent the lower and upper limits of uncertainty in the actual situation, where $P_A^i(x)$, $P_B^i(x)$ and $P_{AB}^i(x)$ denote the probabilities that feature point $x$ belongs to categories A, B and $A \cup B$ on data source $i$, respectively.
The parameters of the BPA function are also selected from the possibility distribution function, which determines the uncertainty regions where categories are difficult to distinguish during classification. The classification thresholds $[h_1, h_2]$ of the different features are taken as the fixed values of the BPA function parameters to construct a trust distribution function that is more consistent with the true distribution. We take the point at which the possibility distributions of classes A and B intersect to be $h_{12}$; this is where the two classes are most difficult to distinguish. The BPA parameters used in the experiment are given in Table 2.
After obtaining the possibility distribution function of features, in order to further determine the BPA function, we use the change trend of slope to characterize the relationship between land cover class and feature. In addition, we only analyzed half of the curves. The following Figure 5 shows the change of slope for different features corresponding to class A or class B.
We analysed the change in slope of the possibility distribution functions of the different features. The slopes for HD and NDVI change very little as the grey-scale value changes and can be approximated as constant. The slope for IN keeps decreasing, indicating that the possibility distribution curve changes quickly at first and then more slowly; this suggests that the BPA function for IN should be of the fast-then-slow type. The slope for FE changes from large to small, i.e., the possibility changes quickly and then slowly with the feature value. In this article, the triangular probability assignment function was therefore chosen for HD and NDVI, the ridge probability assignment function for IN, and the pointed probability assignment function for FE.
Relying on the acquired raw feature data, once the BPA function form and parameters have been selected, a BPA function that better describes the uncertainty can be obtained. The final BPA functions for the four features are shown in Figure 6.
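The exact analytic expressions of the three BPA forms are not reproduced here, so the following sketch should be read as one plausible parameterization rather than the authors' functions: the transition shapes (linear, fast-then-slow, slow-then-fast) mirror Figure 4, the parameters h1 and h2 come from Table 2 (h12 is treated simply as the crossover inside this interval), the masses are bounded by P1 = 0.02 and P2 = 0.98, and the mass assigned to the uncertain set {A, B} is an illustrative choice.

```python
import numpy as np

P1, P2 = 0.02, 0.98   # lower/upper limits of uncertainty used in the paper

def transition(x, h1, h2, shape):
    """Monotone rise from 0 at h1 to 1 at h2; the analytic forms are illustrative assumptions."""
    t = np.clip((x - h1) / float(h2 - h1), 0.0, 1.0)
    if shape == "triangular":   # linear change
        return t
    if shape == "ridge":        # fast first, then slow
        return np.sin(0.5 * np.pi * t)
    if shape == "pointed":      # slow first, then fast
        return t ** 2
    raise ValueError(shape)

def bpa(x, h1, h2, shape):
    """One plausible BPA over {A}, {B} and the uncertain set {A, B} for grey value x."""
    r = transition(x, h1, h2, shape)
    pi_a = float(np.clip(1.0 - r, P1, P2))   # possibility of class A
    pi_b = float(np.clip(r, P1, P2))         # possibility of class B
    m_ab = 0.5 * min(pi_a, pi_b)             # uncertainty is largest near the crossover h12
    m_a = (1.0 - m_ab) * pi_a / (pi_a + pi_b)
    m_b = (1.0 - m_ab) * pi_b / (pi_a + pi_b)
    return {"A": m_a, "B": m_b, "AB": m_ab}

# HD uses the triangular form with the Table 2 parameters h1 = 100, h2 = 180
print(bpa(146, h1=100, h2=180, shape="triangular"))
```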
On the basis of the constructed BPA functions, DS evidence synthesis is used to discriminate the results of pixel-level feature classification. DS evidence synthesis fuses the BPA functions of different bodies of evidence under the same proposition and compares the results according to the corresponding decision criteria, ultimately producing an accurate decision on the classification result.

2.4. DS Method

DS evidence theory allows flexible and efficient modelling of uncertainty without prior probabilities [58], which is highly advantageous in multi-sensor and multi-classifier information fusion. It consists mainly of the recognition framework, the BPA function, the belief function ($Bel$) and the plausibility function ($pl$). The problem here is to classify the input data into four mutually exclusive classes. We denote by $\Theta$ the space of hypotheses; in image classification, $\Theta$ is the set of hypotheses about the classes concerned. We set $\Theta$ = {tree, building, grass, road}, and HD, NDVI, FE and IN are selected as the bodies of evidence under this identification framework. DS evidence theory assigns probabilities to each hypothesis in the recognition framework; such a function $m$ is named the mass function or basic probability assignment (BPA) and satisfies
$m(\emptyset) = 0$

$\sum_{A \subseteq \Theta} m(A) = 1$
For any subset $A$ of $\Theta$, the $Bel$ and $pl$ functions can be defined as follows:

$Bel(A) = \sum_{B \subseteq A} m(B)$

$pl(A) = \sum_{B \cap A \neq \emptyset} m(B)$
$Bel(A)$ and $pl(A)$ denote the lower and upper bounds of the degree of trust in proposition $A$, respectively. In evidence theory, for any hypothesis in the identification framework, a belief function and a plausibility function can be calculated from the BPA, which together form a trust interval $[Bel(A), pl(A)]$ for the hypothesis.
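For a small frame of discernment such as {tree, building, grass, road}, Bel and pl can be computed directly from a mass function by summing over subsets. The sketch below uses an arbitrary toy mass function for illustration; the numbers are not taken from the experiments.

```python
def bel(mass, a):
    """Bel(A): sum of m(B) over all non-empty B contained in A."""
    return sum(v for b, v in mass.items() if b and b <= a)

def pl(mass, a):
    """pl(A): sum of m(B) over all B that intersect A."""
    return sum(v for b, v in mass.items() if b & a)

# Toy mass function on the frame {tree, building, grass, road}
m = {
    frozenset({"tree"}): 0.5,
    frozenset({"building"}): 0.2,
    frozenset({"tree", "grass"}): 0.2,
    frozenset({"tree", "building", "grass", "road"}): 0.1,
}
a = frozenset({"tree"})
print(bel(m, a), pl(m, a))   # 0.5 and 0.8: the trust interval [Bel(A), pl(A)] for 'tree'
```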
To construct the mass function for DS evidence synthesis, the information source is most commonly mapped to the [0, 1] space via a linear or non-linear generalized basic assignment function, and a fuzzy set is constructed to reduce the impact of class fuzziness on classification accuracy when probability is assigned to the feature data.
The core of DS evidence synthesis is the combination rule, which is essentially a combination of multiple evidence outputs. The combination rule between evidence is defined as
$m_1 \oplus m_2(A) = \dfrac{1}{K} \sum_{B \cap C = A} m_1(B)\, m_2(C)$

$K = \sum_{B \cap C \neq \emptyset} m_1(B)\, m_2(C) = 1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)$
where $K \in (0, 1]$ is the normalization factor and $1 - K$ is the conflicting mass between the two bodies of evidence; the smaller $K$ is, the stronger the conflict.
In the following, an example illustrates Dempster's synthesis rule. Two eyewitnesses provide evidence $\{m_1, m_2\}$ over the suspect framework $\Theta = \{Peter, Paul, Mary\}$. The details are shown in Table 3.
We calculated the result after the synthesis using the Dempster evidence synthesis formula.
Step 1: Calculate the normalization factor $K$.

$K = m_1(Peter) \cdot m_2(Peter) + m_1(Paul) \cdot m_2(Paul) + m_1(Mary) \cdot m_2(Mary) = 0.135$
Step 2: Calculate the combined BPA of Peter, Paul, and Mary according to the evidence synthesis rule.
$m_1 \oplus m_2(\{Peter\}) = \dfrac{1}{K} \sum_{B \cap C = \{Peter\}} m_1(B)\, m_2(C) = 0.1274$

$m_1 \oplus m_2(\{Paul\}) = 0.8666$

$m_1 \oplus m_2(\{Mary\}) = 0.006$
The combined probability of each hypothesis is given based on the probabilities provided by different witnesses. Based on the synthetic mass function obtained, we consider Paul to be a suspect.
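The worked example can be reproduced with a few lines of code. The sketch below follows the convention used above, in which K is the normalization factor (the total non-conflicting mass) by which the combined masses are divided.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions given as dicts {frozenset(hypotheses): mass}."""
    combined = {}
    k = 0.0   # normalization factor: total mass assigned to non-empty intersections
    for b, v1 in m1.items():
        for c, v2 in m2.items():
            a = b & c
            if a:
                combined[a] = combined.get(a, 0.0) + v1 * v2
                k += v1 * v2
    if k == 0.0:
        raise ValueError("total conflict: the evidence cannot be combined")
    return {a: v / k for a, v in combined.items()}

m1 = {frozenset({"Peter"}): 0.86, frozenset({"Paul"}): 0.13, frozenset({"Mary"}): 0.01}
m2 = {frozenset({"Peter"}): 0.02, frozenset({"Paul"}): 0.90, frozenset({"Mary"}): 0.08}
# K = 0.135; combined masses ~0.1274 (Peter), ~0.8667 (Paul), ~0.0059 (Mary),
# matching Table 3 up to rounding
print(dempster_combine(m1, m2))
```

For the land cover problem, the same combination is applied repeatedly to fuse the four single-feature mass functions of a pixel before the maximum-mass decision is taken.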
Considering the land cover classification problem, a pixel is evaluated by the four features in order to determine which class it most likely belongs to. Using the above synthesis rules, the mass values $m(B)$, $m(T)$, $m(G)$ and $m(R)$ of each pixel for the four land cover categories (building, tree, grass, road) are obtained, and the category with the maximum mass is selected as the final classification decision. The core idea of this rule is to select the maximum support given by the different features to the land cover classes, so as to judge which class the pixel most possibly corresponds to.

3. Results

3.1. Experiment Design

Experiments were designed to test the effectiveness of the method proposed in Section 2. The airborne LiDAR data set used in the experimental part of this paper was provided by the School of Computer Science, University of Reading, UK. The raw data were acquired by the TopoSys Falcon II airborne LiDAR system, integrated with an image data acquisition system for simultaneous acquisition of true-color and color-infrared image data. The data sets cover Mannheim, a city in the state of Baden-Württemberg, southwestern Germany, which lies on the right bank of the Rhine River opposite Ludwigshafen, at the mouth of the canalized Neckar River. Table 4 and Table 5 show the system parameters for data collection.
Two regions of the original data set were selected as experimental data sets. Test region A and test region B are marked with red boxes in the visible image shown in Figure 7 and have dimensions of 300 × 300 and 220 × 300 pixels, respectively. The location of test set 2 was chosen under physical and geographical conditions similar to those of test set 1 in order to verify the robustness of the proposed method.
Figure 8 and Figure 9 show the original feature images for the two data sets, respectively.
To evaluate the proposed feature classification method, the experimental results were compared with two reference methods: F-DS-FH [51] and DS. DS is the basic DS evidence theory approach, and F-DS-FH is a DS evidence theory approach based on a hierarchical framework with a fuzzy BPA function and median filtering. The performance of the proposed method was assessed objectively using a confusion matrix. In addition, the Kappa coefficient was calculated to measure the classification accuracy.
Assuming that the true number of samples in each category is $a_1, a_2, \ldots, a_c$, the number of samples assigned to each category in the classification result is $b_1, b_2, \ldots, b_c$, and the total number of samples is $n$, we have
$p_e = \dfrac{a_1 \times b_1 + a_2 \times b_2 + \cdots + a_c \times b_c}{n \times n}$

$Kappa = \dfrac{p_0 - p_e}{1 - p_e}$
where $p_0$ is the number of correctly classified samples divided by the total number of samples (the overall accuracy) and $p_e$ is an intermediate quantity in the calculation of the Kappa coefficient that represents the agreement expected by chance.
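As a consistency check, applying these formulas to the test set 1 confusion matrix (Table 8) reproduces the Kappa value reported in Section 3.3. The short sketch below assumes the matrix is arranged with predicted classes in rows and ground-truth classes in columns, as in Table 8.

```python
import numpy as np

def kappa_from_confusion(cm):
    """Overall accuracy p0, chance agreement pe and Kappa from a confusion matrix."""
    n = cm.sum()
    p0 = np.trace(cm) / n                                    # correctly classified / total
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2    # (a1*b1 + ... + ac*bc) / n^2
    return (p0 - pe) / (1 - pe)

# Rows: predicted building, tree, grass, road; columns: ground truth (Table 8, test set 1)
cm_test1 = np.array([
    [23121,   269,   198,   492],
    [  364, 18072,   903,   191],
    [  599,  1178, 24124,  2636],
    [  801,    48,   298, 16706],
])
print(kappa_from_confusion(cm_test1))   # ~0.881, matching the reported 88.10%
```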

3.2. Experimental Result

3.2.1. Feature Selection Experiments

Firstly, the histogram of the feature image is used to select a suitable threshold for the land cover classification, and the results are shown in Figure 10.
Tree and building outlines are clearly visible on the HD classification result map, while building tops, roads and grass are visible on the FE image. This is because LiDAR penetration of trees can create multiple echoes, and secondary echoes are prone to occur at building edges, whereas impenetrable land cover types produce only one echo and the height difference between the first and last echoes is zero. Hence, features with large height difference values, such as vegetation and buildings, can be separated from ground points and grass in the HD images. NDVI provides a good separation between vegetated and non-vegetated areas. The IN feature responds to the reflective properties of ground objects; trees have weak echoes and can therefore be well distinguished from the other categories.
From the above analysis, it is clear that IN and HD are a pair of complementary features: their combination can extract trees more accurately, combining them with NDVI can separate trees from grass, and using them together with FE can distinguish buildings from trees, finally achieving the classification of the four land cover types. We select FE, IN, HD and NDVI as the classification features to ensure that the algorithm runs efficiently with minimal computational cost.

3.2.2. Classification Experiments

Figure 11 and Figure 12 show the classification results of the proposed method on test set 1 and test set 2, compared with the two methods described above. Table 6, Table 7, Table 8 and Table 9 show the classification accuracy results of all three methods on test sets 1 and 2, respectively.
The results of the two test sets show that the proposed method is effective.

3.3. Subjective Evaluation and Objective Evaluation

The classification results of the proposed method are shown in Figure 11 and Figure 12, where red denotes buildings, blue trees, green grass and yellow roads. Compared with the DS method, there are clearly fewer misclassified pixels and the classification accuracy is greatly improved. The LiDAR echoes sometimes cannot penetrate dense foliage, which makes buildings and trees difficult to distinguish from each other. Compared to F-DS-FH, the classification contours of our method are clearer and the confusion between trees and buildings is effectively reduced. There is still some confusion between grass and road points, mainly due to the presence of bare ground in grassland areas, and also because small patches of tall grass are misclassified as trees.
Table 6 reports the accuracy on test set 1, where accuracy is the percentage of correctly classified pixels. The proposed method achieves the highest accuracy for buildings, trees and grass, demonstrating its effectiveness in improving classification accuracy. Comparing the average accuracy of each method, our method also gives the best average accuracy: it improves the average accuracy by 5.73% compared with DS and by 3.32% compared with F-DS-FH.
In test set 2, as shown in Table 7, the highest classification accuracies were also achieved for buildings, trees and grass, and when comparing the average accuracy of each method, our method P-DS improved the average accuracy by 7.83% compared to DS and by 3.16% compared to F-DS-FH.
Table 8 and Table 9 show the confusion matrices of the two test sets, from which the Kappa coefficients are calculated to be 88.10% and 85.82%, respectively. The classification results in this article can therefore be considered to be in high agreement with the actual land cover types, showing that P-DS has a certain authenticity and reliability. The results reflect that the combination of feature BPA functions can improve the accuracy of the classification results, and that the proposed method of constructing BPA functions based on possibility distributions can optimize the feature classification model according to the relationship between the different features and the land cover categories.

4. Discussion

Figure 13 presents the average classification accuracy for the various land cover types. Note that the proposed scheme achieves the highest accuracy on each test set. The experimental results show that the classification accuracy of three land cover types (buildings, trees and grassland) improved significantly. The possibility distribution function constructed according to our approach can more effectively describe the correspondence between features and land cover categories; as a result, the land cover classification method constructs a more accurate combination of BPA functions, which leads to the improvement of the classification accuracy. The classification accuracy of roads decreased; in DS evidence synthesis, small changes in the basic probability assignment can produce sharp changes in the combined result. We consider two causes for the inaccuracy of road detection. On the one hand, it comes from confusion between roads and other land cover classes, mainly including occlusion by vegetation around road junction areas and the difficulty of extracting bare road in some grassland areas; on the other hand, the choice of DS synthesis rules is also responsible for this inaccuracy. To produce accurate and robust classification results, we will further study the contribution of each feature to each land cover category in future work, in order to identify the influence of the DS synthesis rules on the different features.
We also note that when the features use different combinations of BPA functions, the classification accuracy is generally higher than when a single BPA function is used for all features. As shown in Table 10, the highest average accuracy among the three single-function variants is 89.75%, which is lower than that of the method proposed in this paper. By testing different combinations of feature BPA functions, the selected combination is the one whose distribution best fits each feature, and the constructed possibility distribution reflects the mapping relationship between land cover types and feature grey-scale values well, so the BPA function constructed from the possibility distribution gives good results in land cover classification.
Moreover, the results of the study show that the algorithm is highly efficient. It is worth mentioning that, owing to its non-iterative and unsupervised nature, the developed algorithm is computationally fast compared with most previous studies. Although it takes some time for the proposed scheme to reassign the BPAs, its computational cost remains low and stable compared with the other methods. The results on test data 2 are shown in Table 11: the fastest of the other three methods is ICM-MRF [59], which took 13.16 s, while our method P-DS required only 0.96 s.
Similar to any remotely sensed data processing algorithm, the proposed workflow still has certain limitations. This article is based on projecting the 3D point cloud data into 2D image data; however, at low point cloud density, voids appear in the feature images after projection, and these void pixels cannot be identified and classified, so the classification performance degrades. In addition, the experimental data sets are relatively limited, and the robustness of the algorithm needs further study.

5. Conclusions

In this study, we have proposed a method for the classification of remote sensing images based on possibility distribution construction and DS evidence theory. Four source features (NDVI, HD, FE and IN) are used to perform fuzzy class synthesis between bodies of evidence through the construction of feature-specific BPA functions, and the final classification results are obtained. The following points summarize the novelty and main contributions of this work:
• DS evidence theory is applicable when a priori knowledge is weak. In this article, we use specially designed BPA functions to synthesize the results of pixel-level feature classification with DS evidence theory, based on the constructed trust allocation functions. The results show that the improved BPA functions can effectively improve the classification accuracy, with an average classification accuracy of up to 90.21%.
• The possibility distribution can describe the mapping relationship between land cover type attributes and features. Fuzzy intervals are used instead of single-valued feature thresholds, and the basic form of the BPA function corresponding to each feature is selected accordingly. Relying on the original feature data, feature processing yields a trust distribution function that better describes the uncertainty.
The existing DS evidence theory is able to solve the problem of conflict when data are fused, but cannot handle the case of complete conflict between evidence. Therefore, subsequent research needs to construct the possibility distribution of each feature and propose new synthesis rules to establish a distribution synthesis method for coordinating the inconsistency of multiple categorical features.

Author Contributions

Conceptualization, L.J., F.Y. and D.Z.; methodology, D.Z. and X.L.; software; validation, D.Z. and F.Y.; formal analysis, D.Z. and X.L.; investigation, D.Z.; resources, X.L. and D.Z.; data curation, F.Y., X.L. and L.J.; writing—original draft preparation, D.Z. and F.Y.; writing—review and editing, L.J.; supervision, F.Y. and L.J.; project administration, F.Y.; funding acquisition, F.Y. and L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (NSFC), grant number 61972363, Central Government Leading Local Science and Technology Development Fund Project, grant number YDZJSX2021C008, the Postgraduate Education Innovation Project of Shanxi Province, grant number 2021Y612.

Data Availability Statement

The authors are grateful for the data provided by TopoSys GmbH and the Stadt Mannheim, Germany. If you are interested in the data used in our research work, you can contact dqzhao2020@163.com for the original dataset.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, C.K.; Tseng, Y.H.; Chu, H.J. Airborne Dual-Wavelength LiDAR Data for Classifying Land Cover. Remote Sens. 2014, 6, 700–715.
  2. Wilkinson, G.G. Results and Implications of a Study of Fifteen Years of Satellite Image Classification Experiments. IEEE Trans. Geosci. Remote Sens. 2005, 43, 433–440.
  3. Hanna, E. Radiative Forcing of Climate Change: Expanding the Concept and Addressing Uncertainties; National Academies Press: Washington, DC, USA, 2005; Volume 62, pp. 112–117.
  4. Lunetta, R.; Jayantha, E.; David, M.J.; John, G.L.; Alexa, J.M. Impacts of vegetation dynamics on the identification of land-cover change in a biologically complex community in North Carolina, USA. Remote Sens. Environ. 2002, 82, 258–270.
  5. Fazilah, H.A.; Muhamad, A.K.; Khairul Nizam, A.M.; Azlina, A. Perceived Usefulness of Airborne LiDAR Technology in Road Design and Management: A Review. Sustainability 2021, 13, 11773.
  6. Sharma, M.; Garg, R.D.; Badenko, V.; Fedotov, A.; Min, L.; Yao, A.D. Potential of airborne LiDAR data for terrain parameters extraction. Quat. Int. 2021, 575, 317–327.
  7. Kim, M. Airborne Waveform Lidar Simulator Using the Radiative Transfer of a Laser Pulse. Appl. Sci. 2019, 9, 2452.
  8. Wehr, A.; Lohr, U. Theme issue on airborne laser scanning. ISPRS J. Photogramm. Remote Sens. 1999, 54, 61–83.
  9. Glennie, C.L.; Carter, W.E.; Shrestha, R.L. Dietrich Geodetic Imaging with Airborne LiDAR: The Earth's surface revealed. Rep. Prog. Phys. 2013, 76, 086801.
  10. Gao, M.; Yang, F.; Wei, H.; Liu, X. Individual Maize Location and Height Estimation in Field from UAV-Borne LiDAR and RGB Images. Remote Sens. 2022, 14, 2292.
  11. Telling, J.; Lyda, A.; Hartzell, P. Review of Earth science research using terrestrial laser scanning. Earth Sci. Rev. 2017, 169, 35–68.
  12. Matikainen, L.; Karila, K.; Hyyppä, J.; Litkey, P.; Puttonen, E.; Ahokas, E. Object-based analysis of multispectral airborne laser scanner data for land cover classification and map updating. Remote Sens. 2017, 128, 298–313.
  13. Huang, A.; Shen, R.; Li, Y.; Han, H.; Di, W.; Hagan, D.F.T. A Methodology to Generate Integrated Land Cover Data for Land Surface Model by Improving Dempster-Shafer Theory. Remote Sens. 2022, 14, 972.
  14. Chirachawala, C.; Shrestha, S.; Babel, M.S.; Virdis, S.G.; Wichakul, S. Evaluation of global land use/land cover products for hydrologic simulation in the Upper Yom River Basin. Total Environ. 2020, 708, 135148.
  15. Coutts, A.M.; Harris, R.J.; Phan, T.; Livesley, S.J.; Williams, N.S.G.; Tapper, N.J. Thermal infrared remote sensing of urban heat: Hotspots, vegetation, and an assessment of techniques for use in urban planning. Remote Sens. Environ. 2016, 186, 637–651.
  16. Jürgens, C. Urban and suburban growth assessment with remote sensing. In Proceedings of the OICC 7th International Seminar on GIS Applications in Planning and Sustainable Development, Cairo, Egypt, 13–15 February 2001; pp. 13–15.
  17. Li, B.; Chen, C.; Hu, B. Governing urbanization and the New Urbanization Plan in China. Environ. Urban. 2016, 28, 515–534.
  18. Medeiros, E.; van der Zwet, A. Sustainable and Integrated Urban Planning and Governance in Metropolitan and Medium-Sized Cities. Sustainability 2020, 12, 5976.
  19. Ul Din, S.; Mak, H.W.L. Retrieval of Land-Use/Land Cover Change (LUCC) Maps and Urban Expansion Dynamics of Hyderabad, Pakistan via Landsat Datasets and Support Vector Machine Framework. Remote Sens. 2021, 13, 3337.
  20. Soni, P.K.; Rajpal, N.; Mehta, R.; Mishra, V.K. Urban land cover and land use classification using multispectral sentinal-2 imagery. Multimed. Tools Appl. 2022, 81, 36853–36867.
  21. Liao, L.; Tang, S.; Liao, J.; Li, X.; Wang, W.; Li, Y.; Guo, R. A Supervoxel-Based Random Forest Method for Robust and Effective Airborne LiDAR Point Cloud Classification. Remote Sens. 2022, 14, 1516.
  22. Nazir, D.; Afzal, M.Z.; Pagani, A.; Liwicki, M.; Stricker, D. Contrastive Learning for 3D Point Clouds Classification and Shape Completion. Sensors 2021, 21, 7392.
  23. Kuras, A.; Brell, M.; Rizzi, J.; Burud, I. Hyperspectral and Lidar Data Applied to the Urban Land Cover Machine Learning and Neural-Network-Based Classification: A Review. Remote Sens. 2021, 13, 3393.
  24. Zhou, L.; Geng, J.; Jiang, W. Joint Classification of Hyperspectral and LiDAR Data Based on Position-Channel Cooperative Attention Network. Remote Sens. 2022, 14, 3247.
  25. Miller, C.I.; Thomas, J.J.; Kim, A.M.; Metcalf, J.P.; Olsen, R.C. Application of image classification techniques to multispectral lidar point cloud data. In Laser Radar Technology and Applications XXI; SPIE: Bellingham, WA, USA, 2016; Volume 9832, pp. 286–297.
  26. Wichmann, V. Evaluating the potential of multispectral airborne lidar for topographic mapping and land cover classification. ISPRS Ann. Remote Sens. Spat. Informat. 2015, 2, 113–119.
  27. Elberink, S.O.; Maas, H.G. The Use of Anisotropic Height Texture Measurements for the Segmentation of Airborne Laser scanner Data. Int. Arch. Photogramm. Remote Sens. 2000, 33, 678–684.
  28. Wang, J.Y.; Michael, B. Machine learning in modelling land-use and land cover-change (LULCC): Current status, challenges and prospects. Sci. Total Environ. 2022, 822, 153559.
  29. Adugna, T.; Xu, W.; Fan, J. Comparison of Random Forest and Support Vector Machine Classifiers for Regional Land Cover Mapping Using Coarse Resolution FY-3C Images. Remote Sens. 2022, 14, 574.
  30. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  31. Dietterich, T.G. An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization. Mach. Learn. 2000, 40, 139–157.
  32. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32.
  33. Abdulhakim, M.A. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GISci. Remote Sens. 2020, 57, 1–20.
  34. Wei, S.; Zhang, L.F.; Tian, Y.F.; Fong, S.; Lin, J.M.; Gozho, A. CNN-based 3D object classification using Hough space of LiDAR point clouds. Hum. Cent. Comput. Inf. Sci. 2020, 10, 1–14.
  35. Morsy, S.; Shaker, A.; Larocque, P.E. Airborne multispectral lidar data for land-cover classification and land/water mapping using different spectral indexes. Remote Sens. Spat. Informat. 2016, 3, 217–224.
  36. Kim, Y. Generation of Land Cover Maps through the Fusion of Aerial Images and Airborne LiDAR Data in Urban Areas. Remote Sens. 2016, 8, 521.
  37. Luo, B.; Yang, J.; Song, S.; Shi, S.; Gong, W.; Wang, A.; Du, L. Target Classification of Similar Spatial Characteristics in Complex Urban Areas by Using Multispectral LiDAR. Remote Sens. 2022, 14, 238.
  38. Zhang, Y.; Yang, W.; Sun, Y.; Chang, C.; Yu, J.; Zhang, W. Fusion of Multispectral Aerial Imagery and Vegetation Indices for Machine Learning-Based Ground Classification. Remote Sens. 2021, 13, 1411.
  39. Chen, J.; Du, P.; Wu, C.; Xia, J.; Chanussot, J. Mapping Urban Land Cover of a Large Area Using Multiple Sensors Multiple Features. Remote Sens. 2018, 10, 872.
  40. Sothe, C.; Dalponte, M.; Almeida, C.M.D.; Schimalski, M.B.; Lima, C.L.; Liesenberg, V.; Tommaselli, A.M.G. Tree Species Classification in a Highly Diverse Subtropical Forest Integrating UAV-Based Photogrammetric Point Cloud and Hyperspectral Data. Remote Sens. 2019, 11, 1338.
  41. Ye, F.; Chen, J.; Li, Y. Improvement of DS Evidence Theory for Multi-Sensor Conflicting Information. Symmetry 2017, 9, 69.
  42. Innal, F.; Rauzy, A.; Dutuit, Y. Handling epistemic uncertainty in fault trees: New proposal based on evidence theory and Kleene ternary decision diagrams. In Proceedings of the 6th International Conference on System Reliability and Safety, Milan, Italy, 20–22 December 2017; pp. 354–359.
  43. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32.
  44. Jiang, W.; Cao, Y.; Deng, X. A Novel Z-network Model Based on Bayesian Network and Z-number. Fuzzy Syst. 2020, 28, 1585–1599.
  45. He, Z.; Jiang, W. An evidential Markov decision making model. Inf. Sci. 2018, 467, 357–372.
  46. Rottensteiner, F.; Trinder, J.; Clode, S. Using the Dempster-Shafer method for the fusion of LIDAR data and multi-spectral images for building detection. Inf. Fusion 2005, 6, 283–300.
  47. Feng, T.J.; Ma, H.R.; Cheng, X.W. Land-cover classification of high-resolution remote sensing image based on multi-classifier fusion and the improved Dempster–Shafer evidence theory. J. Appl. Remote Sens. 2021, 15, 014506.
  48. Zadeh, L.A. Fuzzy logic and the calculi of fuzzy rules, fuzzy graphs and fuzzy probabilities. Comput. Math. Appl. 1999, 37, 35.
  49. Yager, R.R. On the Conjunction of Possibility Measures. IEEE Trans. Fuzzy Syst. 2020, 28, 1572–1574.
  50. Mehmet, L.K. Risk assessment of a vertical breakwater using possibility and evidence theories. Ocean. Eng. 2009, 36, 1060–1066.
  51. Yang, F.B.; Wei, H.; Feng, P.P. A hierarchical Dempster-Shafer evidence combination framework for urban area land cover classification. Measurement 2020, 151, 105916.
  52. Li, D.L.; Shen, X.; Guan, H.Y.; Yu, Y.T.; Wang, H.Y.; Zhang, G.; Li, J.; Li, D.R. AGFP-Net: Attentive geometric feature pyramid network for land cover classification using airborne multispectral LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2022, 108, 102723.
  53. Charaniya, A.P.; Manduchi, R.; Lodha, S.K. Supervised parametric classification of aerial LiDAR data. In Proceedings of the IEEE 2004 Conference on Computer Vision and Pattern Recognition Workshop, Washington, DC, USA, 27 June–2 July 2004; Volume 3, pp. 1–8.
  54. Huang, Y.; Yu, B.; Zhou, J.; Hu, C.; Tan, W.; Hu, Z.; Wu, J. Toward automatic estimation of urban green volume using airborne LiDAR data and high resolution remote sensing images. Front. Earth Sci. 2013, 7, 43–54.
  55. Song, J.H.; Han, S.H.; Yu, K.Y.; Kim, Y.I. Assessing the possibility of land-cover classification using LiDAR intensity data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 259–262.
  56. MacFaden, S.W.; O'Neil-Dunne, J.P.; Royar, A.R.; Lu, J.W.; Rundle, A.G. High-resolution tree canopy mapping for New York City using LiDAR and object-based image analysis. J. Appl. Remote Sens. 2012, 6, 063567.
  57. Dominik, H.; Michael, H. A Universal approach to imprecise probabilities in possibility theory. Int. J. Approx. Reason. 2021, 133, 133–158.
  58. Jiang, W.; Zhan, J. A modified combination rule in generalized evidence theory. Appl. Intel. 2017, 46, 630–640.
  59. Cao, Y.; Wei, H.; Zhao, H.J. An effective approach for land-cover classification from airborne Lidar fused with co-registered data. Int. J. Remote Sens. 2012, 33, 5927–5953.
Figure 1. Workflow of the proposed method.
Figure 2. Feature category histograms: (a) Building; (b) Tree; (c) Grass; (d) Road.
Figure 3. The possibility distribution function of each feature: (a) HD; (b) IN; (c) FE; (d) NDVI.
Figure 4. Three typical BPA functions: (a) triangular; (b) ridge-shaped; (c) pointed.
Figure 5. The change in the slope of the feature possibility distribution functions: (a) HD; (b) IN; (c) FE; (d) NDVI.
Figure 6. The BPA functions of the features: (a) HD; (b) IN; (c) FE; (d) NDVI.
Figure 7. The labeling of experiment areas A and B.
Figure 8. Test set 1 used in the classification process: (a) FE; (b) LE; (c) HD; (d) IN; (e) NDVI; (f) RGB.
Figure 9. Test set 2 used in the classification process: (a) FE; (b) LE; (c) HD; (d) IN; (e) NDVI; (f) RGB.
Figure 10. Single-feature classification results: (a) FE; (b) IN; (c) HD; (d) NDVI.
Figure 11. The land cover map of test set 1: (a) DS; (b) F-DS-FH; (c) the proposed method P-DS; (d) ground truth.
Figure 12. The land cover map of test set 2: (a) DS; (b) F-DS-FH; (c) the proposed method P-DS; (d) ground truth.
Figure 13. Averaged classification accuracy: (a) test set 1; (b) test set 2.
Table 1. Thresholds of feature images.

Feature   h1    h2
HD        76    178
IN        72    180
FE        50    122
NDVI      102   200
Table 2. BPA function parameters.

Feature   h1    h12   h2
HD        100   146   180
IN        140   180   200
FE        50    80    140
NDVI      110   133   160
Table 3. Evidence of suspects.

Suspect   m1     m2     DS
Peter     0.86   0.02   0.1274
Paul      0.13   0.90   0.8666
Mary      0.01   0.08   0.006
Table 4. Key technical parameters of the point cloud data acquisition system.

Point Cloud Data Acquisition System   Key Technical Parameters
Scanning method                       Fiber optic scanning method
Flight height                         600 m
System scanning frequency             83,000 Hz
Field of view                         14.3°
Number of echoes                      9
Laser wavelength                      1560 nm
Vertical accuracy                     0.15 m
Horizontal accuracy                   0.25 m
Table 5. Key technical parameters of the image data acquisition system.

Image Data Acquisition System   Key Technical Parameters
Red band wavelength             620 nm
Green band wavelength           540 nm
Blue band wavelength            470 nm
Near-infrared wavelength        830 nm
Optical field of view           21.6°
Table 6. Classification accuracy (%) for land cover classes on test set 1.

Method    Building   Tree    Grass   Road    Average
DS        88.76      76.52   87.11   87.80   85.41
F-DS-FH   90.61      90.52   86.95   82.82   87.82
P-DS      92.91      92.36   94.52   83.43   91.14
Table 7. Classification accuracy (%) for land cover classes on test set 2.

Method    Building   Tree    Grass   Road    Average
DS        85.21      69.44   85.76   88.62   82.38
F-DS-FH   86.40      83.64   91.63   86.73   87.05
P-DS      92.22      87.62   93.38   85.49   90.21
Table 8. Confusion matrix on test set 1.

               Building   Tree     Grass    Road     Total
Building       23,121     269      198      492      24,080
Tree           364        18,072   903      191      19,530
Grass          599        1178     24,124   2636     28,537
Road           801        48       298      16,706   17,853
Ground-truth   24,885     19,567   25,523   20,025   90,000
Table 9. Confusion matrix on test set 2.

               Building   Tree     Grass    Road     Total
Building       17,917     744      394      591      19,646
Tree           301        13,598   411      153      14,463
Grass          489        1298     14,484   1485     17,756
Road           721        58       222      13,134   14,135
Ground-truth   19,428     15,698   15,511   15,363   66,000
Table 10. Classification result accuracy (%) by single BPA function on test set 1.

Method         Building   Tree    Grass   Road    Average
triangular     93.52      86.37   93.27   84.89   89.75
ridge-shaped   84.27      94.37   88.74   83.92   87.64
pointed        86.88      94.16   89.65   84.75   88.77
Table 11. Comparison of the computational cost on test data 2.

Method    Computational Time (s)
ICM-MRF   13.16
SA-MRF    38,572.49
EBP-MRF   127.85
Ours      0.96
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
