Article

PolSAR Image Feature Extraction via Co-Regularized Graph Embedding

State Key Lab of Management and Control for Complex System, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(11), 1738; https://doi.org/10.3390/rs12111738
Submission received: 16 April 2020 / Revised: 22 May 2020 / Accepted: 25 May 2020 / Published: 28 May 2020

Abstract

Dimensionality reduction (DR) methods based on graph embedding are widely used for feature extraction. For these methods, the weighted graph plays a vital role in the process of DR because it can characterize the structure information of the data. Moreover, the similarity measurement is a crucial factor for constructing a weighted graph. The Wishart distance of covariance matrices and the Euclidean distance of polarimetric features are two important similarity measurements for polarimetric synthetic aperture radar (PolSAR) image classification. To obtain a satisfactory PolSAR image classification performance, a co-regularized graph embedding (CRGE) method combining the two distances is proposed for PolSAR image feature extraction in this paper. Firstly, two weighted graphs are constructed based on the two distances to represent the local structure information of the data. Specifically, the neighbouring samples are sought in a local patch to decrease the computation cost and use spatial information. Next, the DR model is constructed based on the two weighted graphs and co-regularization. The co-regularization aims to minimize the dissimilarity of the low-dimensional features corresponding to the two weighted graphs. Two types of co-regularization are employed and the corresponding algorithms are proposed. Ultimately, the obtained low-dimensional features are used for PolSAR image classification. Experiments are implemented on three PolSAR data sets and the results show that co-regularized graph embedding can enhance the performance of PolSAR image classification.


1. Introduction

Polarimetric synthetic aperture radar (PolSAR) can acquire high-resolution images regardless of illumination and weather conditions. Moreover, PolSAR uses different polarization combinations to obtain more abundant information about land covers than single-polarization SAR [1,2,3]. Therefore, PolSAR is a powerful tool for land cover classification. The choice of features greatly affects PolSAR image classification performance. Thus, this paper focuses on feature extraction.
In the early years, Lee et al. [4] put forward the Wishart classifier for PolSAR image classification, in which the Wishart distance was defined by applying the Bayes maximum likelihood classifier to the complex Wishart distribution. The Wishart classifier has become one of the most classical PolSAR image classification methods and is widely used. Afterwards, Lee et al. [5] combined H/A/α decomposition with the Wishart classifier for unsupervised PolSAR image classification. Three important polarimetric parameters of H/A/α decomposition (i.e., entropy H, anisotropy A, and alpha angle α) and the total backscattering power SPAN were used to initialize cluster centers for the Wishart classifier in [6]. Wu et al. [7] employed a Markov random field (MRF) on the basis of the Wishart distribution and region-based processing for PolSAR image classification. The symmetric revised Wishart (SRW) distance was proposed to construct a weighted graph for spectral clustering in [8]. Ersahin et al. [9] used the SRW distance for spectral graph partitioning based on contour information and spatial proximity. The SRW distance has also been introduced into a superpixel segmentation method, i.e., simple linear iterative clustering (SLIC), for segmenting PolSAR images into superpixels [10]. Recently, the Wishart distance was used in a deep learning architecture, the deep stacking network (DSN), named the Wishart DSN, for PolSAR image classification [11].
For the majority of PolSAR image classification methods, polarimetric features are usually exploited as features [12,13,14,15,16,17]. Polarimetric features can be derived from the original PolSAR data, i.e., the scattering matrix, covariance matrix and coherency matrix. Moreover, polarimetric features are also commonly obtained from numerous polarimetric target decomposition methods, namely Yamaguchi decomposition, Pauli decomposition, Yang decomposition, Krogager decomposition, Huynen decomposition, Van Zyl decomposition, Freeman decomposition, and H/A/α decomposition [12,13,14,16]. Zhang et al. [14] used sparse representation based on polarimetric features to classify PolSAR images. The coherency matrix was converted into a 6-D feature vector as the input to a deep convolutional neural network (CNN) [15]. A complex-valued CNN [17] was put forward that exploits the amplitude and phase information of PolSAR data for classification. In some recent works, visual features, namely color features and texture features, were extracted for PolSAR image classification [18,19,20]. Uhlmann et al. [18] summarized three kinds of features, i.e., polarimetric, color and texture features of PolSAR images, and evaluated the performance of different feature combinations for image classification. Ren et al. [20] proposed a manifold regularized low-rank representation method based on polarimetric and texture features for PolSAR image classification.
To acquire low-dimensional features for PolSAR image classification, several dimensionality reduction (DR) methods have been applied. Tu et al. [12] used Laplacian eigenmaps (LE) to extract low-dimensional features for classification. Shi et al. [13] proposed supervised graph embedding (SGE) based on discriminative information for supervised PolSAR image classification. Because of the significance of spatial information, some tensor-based DR methods were introduced for PolSAR image feature extraction, such as tensor local discriminant embedding (TLDE) [21], tensorial independent component analysis (TICA) [22] and tensorial locally linear embedding [23]. Ren et al. [24] proposed a tensor embedding framework for PolSAR image feature extraction.
DR methods comprise linear and nonlinear methods. Traditional DR methods, i.e., linear discriminant analysis (LDA) and principal component analysis (PCA), are supervised and unsupervised linear methods, respectively [25]. The kernel trick is used to extend linear DR methods to nonlinear ones [26], e.g., kernel Fisher discriminant analysis and kernel PCA. Some nonlinear DR methods aim to preserve the local or global geometric information of the nonlinear manifold structure, such as Laplacian eigenmaps (LE) [27], Isomap [28] and locally linear embedding (LLE) [29]. Yan et al. proposed a graph embedding framework that reformulates all of the above DR methods [30]. In the graph embedding DR framework, a weighted graph is constructed to represent the structure information of the data that we attempt to preserve in the process of DR. Different weighted graphs generate different methods. The similarity measurement plays a key role in constructing a weighted graph. The weighted graph of LE is constructed based on local pairwise Euclidean distances [27], and the weighted graph of Isomap is constructed based on global pairwise geodesic distances [28].
However, the above DR methods are mainly designed for single-view data. Even for multi-view data, they simply concatenate the vectors from multiple views into a single vector and neglect the complementarity and correlation among views. Kan et al. [31] proposed multiview discriminant analysis (MDA) to extend LDA to multiview cases. Multiview spectral embedding (MSE) was proposed to learn the complementarity of multiple views [32]. Kumar et al. [33] put forward a spectral clustering method based on co-regularization for multiview data. These methods are also based on graph embedding and construct different graphs for the data from different views.
From the above works, we can see that the Wishart distance and polarimetric features are two indispensable factors for PolSAR image classification. For graph embedding DR methods, the similarity measurement is a crucial factor for constructing a weighted graph, and different distances can describe the similarity of samples more comprehensively from different perspectives. Therefore, both the Wishart distance of covariance matrices and the Euclidean distance of polarimetric features are used in this paper. To extract more comprehensive information for a better classification performance, this paper puts forward a feature extraction method that combines the Wishart distance of covariance matrices and the Euclidean distance of polarimetric features. Part of this work was introduced in [34]. Based on the two distances, we construct two weighted graphs to represent the local structure information. To use spatial information and decrease the computation cost, we seek neighbouring samples in a local patch instead of globally. Then the co-regularized graph embedding (CRGE) model is formed on the basis of the two weighted graphs and co-regularization. The co-regularization is defined to measure the dissimilarity of the low-dimensional features corresponding to the two graphs. The obtained low-dimensional features are used for the final image classification.
The remainder of this paper is organized as follows. Some background knowledge is briefly described in Section 2. The proposed co-regularized graph embedding method is presented in detail in Section 3. Section 4 shows the experiments conducted on three PolSAR data sets and the analysis of the results. Section 5 gives some discussion of the proposed method. Finally, Section 6 concludes the paper.

2. Background

2.1. Graph Embedding DR Framework

Given $n$ samples $E = [e_1, e_2, \ldots, e_n]^T \in \mathbb{R}^{n \times D}$, where $n$ and $D$ denote the number and the dimensionality of the samples, respectively, the graph embedding DR framework given in [30] aims to find low-dimensional representations $F = [f_1, f_2, \ldots, f_n]^T \in \mathbb{R}^{n \times d}$, where $d$ is the reduced dimensionality and $d < D$.
For graph embedding, we first need to construct an intrinsic weighted graph $G$ to describe the similarity relationships among samples. The optimization problem is given as:
$$F^{*} = \arg\min_{\operatorname{tr}(F^T P F) = b} \sum_{i \neq j} \left\| f_i - f_j \right\|^2 G_{ij} = \arg\min_{\operatorname{tr}(F^T P F) = b} \operatorname{tr}\left(F^T L F\right) \qquad (1)$$
where $L = D - G$ is the Laplacian matrix, and $D$ is a diagonal matrix with elements $D_{ii} = \sum_j G_{ij}$. $P$ can be the Laplacian matrix of a penalty graph $G^p$, which characterizes the similarity properties that we try to constrain, or a diagonal matrix for scale normalization. $b$ is usually a constant.
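To make the framework concrete, the following minimal sketch (in Python/NumPy, not taken from the original paper) shows the common special case in which $P$ is the identity, so the constraint reduces to $F^T F = I$ and the minimizer is given by the eigenvectors of $L$ with the smallest eigenvalues; the toy graph is purely illustrative.

```python
import numpy as np

def graph_embedding(G, d):
    """Minimal sketch of graph embedding, assuming P = I so that the
    constraint is F^T F = I and G is a symmetric weight matrix."""
    Dg = np.diag(G.sum(axis=1))           # degree matrix D_ii = sum_j G_ij
    L = Dg - G                            # unnormalized Laplacian L = D - G
    eigvals, eigvecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
    return eigvecs[:, :d]                 # F: the d eigenvectors with smallest eigenvalues

# toy usage: 5 samples with a ring-shaped similarity structure
G = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
F = graph_embedding(G, d=2)
print(F.shape)  # (5, 2)
```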

2.2. PolSAR Data

For PolSAR, an observed target can be characterized by the $2 \times 2$ scattering matrix $S$ [1]. That is,
$$S = \begin{bmatrix} S_{HH} & S_{HV} \\ S_{VH} & S_{VV} \end{bmatrix} \qquad (2)$$
where $S_{HH}$, $S_{HV}$, $S_{VH}$ and $S_{VV}$ represent the backscattering coefficients corresponding to the four channels HH, HV, VH and VV. Based on the reciprocity property, $S_{HV}$ is equal to $S_{VH}$.
The scattering matrix $S$ can be vectorized as the scattering vector $k = [S_{HH}, \sqrt{2} S_{HV}, S_{VV}]^T$. Regarding multilook PolSAR data, the covariance matrix is computed as follows:
$$C = \frac{1}{n_L} \sum_{i=1}^{n_L} k_i k_i^{*T} \qquad (3)$$
where $n_L$ is the number of looks, the superscript $*$ represents complex conjugation and $T$ represents transposition.
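As an illustration of Equation (3), the sketch below averages the outer products $k_i k_i^{*T}$ over the looks; the function and variable names are ours and the random scattering vectors are purely hypothetical test data.

```python
import numpy as np

def multilook_covariance(k_samples):
    """Minimal sketch: average k_i k_i^{*T} over n_L single-look
    scattering vectors k_i = [S_HH, sqrt(2) S_HV, S_VV]^T."""
    n_L = len(k_samples)
    C = np.zeros((3, 3), dtype=complex)
    for k in k_samples:
        C += np.outer(k, np.conj(k))      # outer product k_i k_i^{*T}
    return C / n_L

# toy usage with random complex scattering vectors (hypothetical data)
rng = np.random.default_rng(0)
looks = [rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(4)]
C = multilook_covariance(looks)
print(np.allclose(C, C.conj().T))  # True: the covariance matrix is Hermitian
```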

2.3. Wishart Distance

The Wishart distance is frequently used as the similarity measurement for PolSAR image classification [1,4,5]. It was derived by applying the Bayes maximum likelihood classifier to the complex Wishart distribution [4]. The original Wishart distance is defined as:
$$d_W(C, \Sigma) = \ln |\Sigma| + \operatorname{tr}\left(\Sigma^{-1} C\right) \qquad (4)$$
where $C$ is a sample covariance matrix and $\Sigma$ is the cluster mean of the class. In this paper, we use the symmetric revised Wishart (SRW) distance to construct a graph, because the SRW distance satisfies several conditions of a general metric, such as definiteness, generalised nonnegativity and symmetry [8]. The SRW distance between two $p \times p$ covariance matrices $A$ and $B$ is defined as:
$$d_{SRW}(A, B) = \operatorname{tr}\left(B^{-1} A + A^{-1} B\right) - p \qquad (5)$$
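A minimal sketch of Equation (5), assuming Hermitian positive-definite inputs; the function name and the random test matrices are illustrative only.

```python
import numpy as np

def srw_distance(A, B):
    """Minimal sketch of the SRW distance between two p x p covariance
    matrices: d_SRW(A, B) = tr(B^{-1} A + A^{-1} B) - p."""
    p = A.shape[0]
    term = np.linalg.solve(B, A) + np.linalg.solve(A, B)   # B^{-1} A + A^{-1} B
    return float(np.real(np.trace(term))) - p

# toy usage with two Hermitian positive-definite matrices (hypothetical data)
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Y = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A, B = X @ X.conj().T + 3 * np.eye(3), Y @ Y.conj().T + 3 * np.eye(3)
print(abs(srw_distance(A, B) - srw_distance(B, A)) < 1e-9)  # True: symmetric
```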

3. The Proposed Method

This section describes the proposed co-regularized graph embedding method in detail. Figure 1 presents the whole process of PolSAR image classification based on co-regularized graph embedding. The proposed method operates on superpixels rather than single pixels to decrease the computation cost and use spatial information. Therefore, we first segment PolSAR images into superpixels, which will be described in detail in Section 4. Based on the Wishart distance of covariance matrices and the Euclidean distance of polarimetric features, we construct two corresponding weighted graphs. Then the co-regularized graph embedding DR model is built based on the two graphs and co-regularization for feature extraction. The obtained low-dimensional features are used for PolSAR image classification.

3.1. Polarimetric Features

Polarimetric features can describe the polarimetric scattering and physical properties of targets and are usually extracted from PolSAR data and polarimetric decomposition methods. In the proposed method, we extract thirty polarimetric features from the covariance matrix $C$ and five decomposition methods (i.e., Pauli decomposition, H/A/α decomposition, Krogager decomposition, Huynen decomposition and Freeman decomposition). In detail,
  • six are from the elements of the covariance matrix $C$, i.e., $C_{11}$, $C_{22}$, $C_{33}$, $\mathrm{Re}(C_{12})$, $\mathrm{Im}(C_{12})$, $\mathrm{Re}(C_{13})$, $\mathrm{Im}(C_{13})$, $\mathrm{Re}(C_{23})$, $\mathrm{Im}(C_{23})$;
  • three are from the Pauli decomposition, i.e., $|a|^2$, $|b|^2$, $|c|^2$, which denote the powers of an isotropic odd-bounce scatterer, an even-bounce scatterer and a $\pi/4$-tilted even-bounce scatterer;
  • three are from the Krogager decomposition, i.e., $|k_s|^2$, $|k_d|^2$, $|k_h|^2$, which denote the powers of a sphere, a diplane and a helix scatterer;
  • three are from the Freeman decomposition, i.e., $P_s$, $P_d$, $P_v$, which denote the powers of surface, double-bounce and volume scattering;
  • nine are from the Huynen decomposition, i.e., $A_0$, $B_0 + B$, $B_0 - B$, $C$, $D$, $E$, $F$, $G$, $H$, which correspond to symmetry, nonsymmetry, irregularity, linearity, curvature, torsion, helicity, coupling (glue) and orientation;
  • six are from the H/A/α decomposition, i.e., $\lambda_1$, $\lambda_2$, $\lambda_3$, $H$, $A$, $\alpha$, which denote the three eigenvalues, entropy, anisotropy and alpha angle.
Therefore, sample $i$ is denoted by a 30-D feature vector $e_i \in \mathbb{R}^{30}$. Here, a “sample” means a superpixel rather than a single pixel. The details will be introduced in Section 4.
The similarity of two samples is characterized by the Euclidean distance of their polarimetric features. Concretely, for samples $i$ and $j$, the Euclidean distance $d_E$ is
$$d_E(e_i, e_j) = \left\| e_i - e_j \right\|_2 \qquad (6)$$

3.2. Constructing Two Graphs

Based on the Wishart distance of covariance matrices $d_{SRW}$ and the Euclidean distance of polarimetric features $d_E$, we construct two corresponding graphs $G^{(1)}$ and $G^{(2)}$.
To use spatial information and reduce the computation burden, we search for the $k$ neighbouring samples in an $h \times h$ local region rather than over the full image. Specifically, for sample $i$, we search for the $k$ nearest samples in the $h \times h$ region centered on sample $i$ based on $d_{SRW}$ and $d_E$, and denote the resulting neighbourhoods as $O(C_i, k)$ and $O(e_i, k)$, respectively.
Then we construct the two graphs $G^{(1)}$ and $G^{(2)}$ as follows:
  • if $C_j \in O(C_i, k)$, $G^{(1)}_{ij} = e^{-d_{SRW}(C_i, C_j)/t_1}$; otherwise $G^{(1)}_{ij} = 0$;
  • if $e_j \in O(e_i, k)$, $G^{(2)}_{ij} = e^{-d_E(e_i, e_j)/t_2}$; otherwise $G^{(2)}_{ij} = 0$;
where $t_1$ and $t_2$ are two parameters. Here, we select $t_1 = \max(d_{SRW}(C_i, C_j))$ and $t_2 = \max(d_E(e_i, e_j))$.
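The sketch below illustrates one possible way to build such a heat-kernel weighted graph from a precomputed pairwise distance matrix (either $d_{SRW}$ or $d_E$) and superpixel center coordinates. The patch-restricted search, the symmetrization and the choice of $t$ as the maximum distance follow the description above, but the function and variable names are ours, not the authors' code.

```python
import numpy as np

def local_knn_graph(dist, centers, h, k):
    """Minimal sketch: heat-kernel weighted graph with k nearest neighbours
    searched only inside the h x h patch around each sample.
    dist: (n, n) pairwise distances; centers: (n, 2) superpixel centers."""
    n = dist.shape[0]
    t = dist.max()                                      # t = maximum distance, as in the text
    G = np.zeros((n, n))
    for i in range(n):
        dr = np.abs(centers[:, 0] - centers[i, 0])
        dc = np.abs(centers[:, 1] - centers[i, 1])
        in_patch = np.where((dr <= h // 2) & (dc <= h // 2))[0]
        in_patch = in_patch[in_patch != i]
        if in_patch.size == 0:
            continue
        nearest = in_patch[np.argsort(dist[i, in_patch])[:k]]
        G[i, nearest] = np.exp(-dist[i, nearest] / t)   # heat-kernel weights
    return np.maximum(G, G.T)                           # symmetrize the graph
```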

3.3. Co-Regularized Graph Embedding Model

Based on the two graphs $G^{(1)}$ and $G^{(2)}$, we pursue the corresponding low-dimensional representations $F^{(1)} \in \mathbb{R}^{n \times d}$ and $F^{(2)} \in \mathbb{R}^{n \times d}$, which can be obtained via the following objective functions:
$$\min_{F^{(1)}} \operatorname{tr}\left(F^{(1)T} L^{(1)} F^{(1)}\right) \qquad (7)$$
$$\min_{F^{(2)}} \operatorname{tr}\left(F^{(2)T} L^{(2)} F^{(2)}\right) \qquad (8)$$
where $L^{(1)}$ and $L^{(2)}$ are the normalized Laplacian matrices of $G^{(1)}$ and $G^{(2)}$. Concretely, $L^{(1)} = I - {D^{(1)}}^{-\frac{1}{2}} G^{(1)} {D^{(1)}}^{-\frac{1}{2}}$ and $L^{(2)} = I - {D^{(2)}}^{-\frac{1}{2}} G^{(2)} {D^{(2)}}^{-\frac{1}{2}}$, where $D^{(1)}$ and $D^{(2)}$ are the diagonal degree matrices of $G^{(1)}$ and $G^{(2)}$, with diagonal elements $D^{(1)}_{ii} = \sum_j G^{(1)}_{ij}$ and $D^{(2)}_{ii} = \sum_j G^{(2)}_{ij}$, respectively.
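A one-function sketch of the normalized Laplacian used for both graphs; the guard for isolated nodes is our own assumption, added only to keep the toy code robust.

```python
import numpy as np

def normalized_laplacian(G):
    """Minimal sketch of L = I - D^{-1/2} G D^{-1/2} for a weight matrix G."""
    deg = G.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)  # guard isolated nodes
    return np.eye(G.shape[0]) - (d_inv_sqrt[:, None] * G) * d_inv_sqrt[None, :]
```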
To not only exploit the polarimetric features but also bring in the Wishart distance, we combine problems (7) and (8) by a co-regularization term. The optimization problem of the co-regularized graph embedding DR method is then constructed as follows:
$$\min_{F^{(1)}, F^{(2)}} \operatorname{tr}\left(\alpha F^{(1)T} L^{(1)} F^{(1)} + (1-\alpha) F^{(2)T} L^{(2)} F^{(2)}\right) + \lambda\, \mathrm{Cor}\left(F^{(1)}, F^{(2)}\right) \quad \text{s.t. } F^{(1)T} F^{(1)} = I,\; F^{(2)T} F^{(2)} = I \qquad (9)$$
where $\alpha$ and $1-\alpha$ are parameters that balance the two low-dimensional features, and $\lambda$ is the parameter of the co-regularization term. Here $\mathrm{Cor}(F^{(1)}, F^{(2)})$ is the co-regularization term, which measures the dissimilarity of the two low-dimensional features $F^{(1)}$ and $F^{(2)}$. A natural choice is:
$$\mathrm{Cor}_1\left(F^{(1)}, F^{(2)}\right) = \left\| F^{(1)} - F^{(2)} \right\|_F^2 \qquad (10)$$
As in [33], another commonly used measure is:
$$\mathrm{Cor}_2\left(F^{(1)}, F^{(2)}\right) = \left\| F^{(1)} F^{(1)T} - F^{(2)} F^{(2)T} \right\|_F^2 \qquad (11)$$

3.4. Optimization

For the two different co-regularization terms (10) and (11), we describe how to solve the corresponding optimization problems.
(1) When $\mathrm{Cor}_1(F^{(1)}, F^{(2)}) = \| F^{(1)} - F^{(2)} \|_F^2$, problem (9) becomes
$$\min_{F^{(1)}, F^{(2)}} \operatorname{tr}\left(\alpha F^{(1)T} L^{(1)} F^{(1)} + (1-\alpha) F^{(2)T} L^{(2)} F^{(2)}\right) + \lambda \left\| F^{(1)} - F^{(2)} \right\|_F^2 \quad \text{s.t. } F^{(1)T} F^{(1)} = I,\; F^{(2)T} F^{(2)} = I \qquad (12)$$
Let $f^{(1)}$ and $f^{(2)}$ be corresponding columns of $F^{(1)}$ and $F^{(2)}$. The Lagrange function of problem (12) is:
$$\mathcal{L}\left(f^{(1)}, f^{(2)}, \mu\right) = \alpha f^{(1)T} L^{(1)} f^{(1)} + (1-\alpha) f^{(2)T} L^{(2)} f^{(2)} + \lambda \left\| f^{(1)} - f^{(2)} \right\|^2 - \mu \left( f^{(1)T} f^{(1)} - 1 \right) - \mu \left( f^{(2)T} f^{(2)} - 1 \right) \qquad (13)$$
where $\mu$ is the Lagrange multiplier. Subsequently, we take the partial derivatives of $\mathcal{L}(f^{(1)}, f^{(2)}, \mu)$ and set $\partial \mathcal{L}/\partial f^{(1)} = 0$ and $\partial \mathcal{L}/\partial f^{(2)} = 0$, i.e.,
$$\frac{\partial \mathcal{L}}{\partial f^{(1)}} = 2\alpha L^{(1)} f^{(1)} + 2\lambda \left( f^{(1)} - f^{(2)} \right) - 2\mu f^{(1)} = 0 \qquad (14)$$
$$\frac{\partial \mathcal{L}}{\partial f^{(2)}} = 2(1-\alpha) L^{(2)} f^{(2)} + 2\lambda \left( f^{(2)} - f^{(1)} \right) - 2\mu f^{(2)} = 0 \qquad (15)$$
The above two equations can be written in matrix form as
$$\begin{bmatrix} \alpha L^{(1)} + \lambda I & -\lambda I \\ -\lambda I & (1-\alpha) L^{(2)} + \lambda I \end{bmatrix} \begin{bmatrix} f^{(1)} \\ f^{(2)} \end{bmatrix} = \mu \begin{bmatrix} f^{(1)} \\ f^{(2)} \end{bmatrix} \qquad (16)$$
Through the eigenvalue decomposition of the matrix $\begin{bmatrix} \alpha L^{(1)} + \lambda I & -\lambda I \\ -\lambda I & (1-\alpha) L^{(2)} + \lambda I \end{bmatrix}$, we select the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues to form $\begin{bmatrix} F^{(1)} \\ F^{(2)} \end{bmatrix}$. The detailed algorithm is given in Algorithm 1.
Algorithm 1: Co-regularized graph embedding (CRGE) with Cor1
Input: two Laplacian matrices $L^{(1)}$ and $L^{(2)}$; the reduced dimensionality $d$.
Step 1: Perform the eigenvalue decomposition of the matrix $\begin{bmatrix} \alpha L^{(1)} + \lambda I & -\lambda I \\ -\lambda I & (1-\alpha) L^{(2)} + \lambda I \end{bmatrix}$.
Step 2: Select the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues as $\begin{bmatrix} F^{(1)} \\ F^{(2)} \end{bmatrix}$.
Output: low-dimensional features $F^{(1)}$ and $F^{(2)}$.
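A compact sketch of Algorithm 1 (illustrative NumPy code, not the authors' implementation); the default values of $\alpha$ and $\lambda$ follow the settings reported later in Section 4.2.

```python
import numpy as np

def crge_cor1(L1, L2, d, alpha=0.1, lam=0.2):
    """Minimal sketch of Algorithm 1 (CRGE with Cor1): form the 2n x 2n block
    matrix and keep the d eigenvectors with the smallest eigenvalues."""
    n = L1.shape[0]
    I = np.eye(n)
    M = np.block([[alpha * L1 + lam * I, -lam * I],
                  [-lam * I, (1 - alpha) * L2 + lam * I]])
    eigvals, eigvecs = np.linalg.eigh(M)   # ascending eigenvalues
    V = eigvecs[:, :d]                     # stacked [F1; F2], shape (2n, d)
    F1, F2 = V[:n, :], V[n:, :]
    return F1, F2
```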
(2) When $\mathrm{Cor}_2(F^{(1)}, F^{(2)}) = \| F^{(1)} F^{(1)T} - F^{(2)} F^{(2)T} \|_F^2$, based on $F^{(1)T} F^{(1)} = I$, $F^{(2)T} F^{(2)} = I$ and $\| A \|_F^2 = \operatorname{tr}(A A^T)$, we have
$$\left\| F^{(1)} F^{(1)T} - F^{(2)} F^{(2)T} \right\|_F^2 = 2d - 2 \operatorname{tr}\left( F^{(1)} F^{(1)T} F^{(2)} F^{(2)T} \right) \qquad (17)$$
Ignoring the constant term and the scale factor, minimizing $\mathrm{Cor}_2$ is therefore equivalent to maximizing $\operatorname{tr}(F^{(1)} F^{(1)T} F^{(2)} F^{(2)T})$, and problem (9) becomes:
$$\min_{F^{(1)}, F^{(2)}} \operatorname{tr}\left(\alpha F^{(1)T} L^{(1)} F^{(1)} + (1-\alpha) F^{(2)T} L^{(2)} F^{(2)}\right) - \lambda \operatorname{tr}\left( F^{(1)} F^{(1)T} F^{(2)} F^{(2)T} \right) \quad \text{s.t. } F^{(1)T} F^{(1)} = I,\; F^{(2)T} F^{(2)} = I \qquad (18)$$
Problem (18) can be solved by an alternating optimization algorithm over $F^{(1)}$ and $F^{(2)}$. Firstly, the eigenvalue decompositions of $L^{(1)}$ and $L^{(2)}$ are computed, and the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues are used as the initial $F^{(1)}$ and $F^{(2)}$. The iterative process is as follows:
  • Fixing $F^{(1)}$, solve for $F^{(2)}$. Problem (18) becomes:
$$\min_{F^{(2)}} \operatorname{tr}\left( F^{(2)T} \left( (1-\alpha) L^{(2)} - \lambda F^{(1)} F^{(1)T} \right) F^{(2)} \right) \quad \text{s.t. } F^{(2)T} F^{(2)} = I \qquad (19)$$
    Therefore, the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues of $(1-\alpha) L^{(2)} - \lambda F^{(1)} F^{(1)T}$ form $F^{(2)}$.
  • Fixing $F^{(2)}$, solve for $F^{(1)}$. Problem (18) becomes:
$$\min_{F^{(1)}} \operatorname{tr}\left( F^{(1)T} \left( \alpha L^{(1)} - \lambda F^{(2)} F^{(2)T} \right) F^{(1)} \right) \quad \text{s.t. } F^{(1)T} F^{(1)} = I \qquad (20)$$
    Therefore, the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues of $\alpha L^{(1)} - \lambda F^{(2)} F^{(2)T}$ form $F^{(1)}$.
The stop error $\varepsilon$ or the maximum number of iterations $T_{max}$ can be used as the stopping criterion of the iterative process. The detailed algorithm is given in Algorithm 2.
Based on the obtained low-dimensional features $F^{(1)}$ and $F^{(2)}$, the concatenation of $F^{(1)}$ and $F^{(2)}$ is used as the feature for the final classification.
Algorithm 2: Co-regularized graph embedding (CRGE) with Cor2
Input: two Laplacian matrices $L^{(1)}$ and $L^{(2)}$; the reduced dimensionality $d$.
Initialize: the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues of $L^{(1)}$ form $F^{(1)}$; the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues of $L^{(2)}$ form $F^{(2)}$; $T = 0$.
Repeat:
Step 1: Fix $F^{(1)}$ to update $F^{(2)}$:
(1) Perform the eigenvalue decomposition of the matrix $(1-\alpha) L^{(2)} - \lambda F^{(1)} F^{(1)T}$.
(2) Select the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues as $F^{(2)}$.
Step 2: Fix $F^{(2)}$ to update $F^{(1)}$:
(1) Perform the eigenvalue decomposition of the matrix $\alpha L^{(1)} - \lambda F^{(2)} F^{(2)T}$.
(2) Select the $d$ eigenvectors corresponding to the $d$ smallest eigenvalues as $F^{(1)}$.
$T = T + 1$; until $T = T_{max}$.
Output: low-dimensional features $F^{(1)}$ and $F^{(2)}$.
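A compact sketch of Algorithm 2 (illustrative only; the convergence check on the change of $F^{(1)}$ and $F^{(2)}$ is one possible realization of the stop error $\varepsilon$, and the parameter defaults follow Section 4.2).

```python
import numpy as np

def crge_cor2(L1, L2, d, alpha=0.1, lam=0.2, t_max=10, eps=1e-6):
    """Minimal sketch of Algorithm 2 (CRGE with Cor2): alternate eigenvalue
    decompositions of (1-alpha)L2 - lam*F1 F1^T and alpha*L1 - lam*F2 F2^T."""
    def smallest_eigvecs(M, d):
        _, vecs = np.linalg.eigh(M)        # ascending eigenvalues
        return vecs[:, :d]

    F1 = smallest_eigvecs(L1, d)           # initialization from L1
    F2 = smallest_eigvecs(L2, d)           # initialization from L2
    for _ in range(t_max):
        F2_new = smallest_eigvecs((1 - alpha) * L2 - lam * (F1 @ F1.T), d)
        F1_new = smallest_eigvecs(alpha * L1 - lam * (F2_new @ F2_new.T), d)
        change = np.linalg.norm(F1_new - F1) + np.linalg.norm(F2_new - F2)
        F1, F2 = F1_new, F2_new
        if change < eps:                   # stop error criterion
            break
    return F1, F2
```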

3.5. Computational Complexity Analysis of the Proposed Method

The computational complexity of the proposed method mainly consists of two steps, i.e., constructing the two weighted graphs and solving the CRGE model. Firstly, because the neighbouring samples are searched in a local region, the number of candidate samples is approximately equal to $k$, so the computational complexity of graph construction is $O(nk)$, where $n$ is the number of samples and $k$ is the number of neighbouring samples. Then, for CRGE with Cor1, we need to perform the eigenvalue decomposition of a $2n \times 2n$ matrix, so the computational complexity is $O((2n)^3)$. For CRGE with Cor2, the alternating iteration algorithm is needed. Each iteration involves two eigenvalue decompositions of $n \times n$ matrices, so the computational complexity per iteration is $O(2n^3)$. Therefore, the total computational complexity of the whole iterative process is $O(2Tn^3)$, where $T$ is the number of iterations.

4. Experiments

To validate the effectiveness of the co-regularized graph embedding method, experiments are implemented on three PolSAR data sets.

4.1. Description of Data Sets

The first data set is the Flevoland data set, covering a cropland area in Flevoland in the Netherlands. The size of the Flevoland data set is 750 × 1024 pixels. Fifteen classes of crops are considered: stem beans, potatoes, peas, lucerne, wheat I, forest, beet, bare soil, grass, rapeseed, barley, wheat II, wheat III, water, and buildings. Figure 2a shows the PauliRGB image of this data set, where red corresponds to $|S_{HH} - S_{VV}|$, green to $|S_{HV} + S_{VH}|$, and blue to $|S_{HH} + S_{VV}|$. Figure 2b describes the corresponding ground truth for the fifteen classes of crops. To conduct more experiments for discussing the parameter setting, we first select a subarea of the Flevoland data set with 200 × 320 pixels. The subset consists of nine classes of crops: stem beans, rapeseed, wheat I, grass, lucerne, potatoes, bare soil, wheat II, and sugar beet. The PauliRGB image of the subset and its ground truth map are shown in Figure 2d,e.
The second data set is the Oberpfaffenhofen data set for the Oberpfaffenhofen area in Germany. The data set has 700 × 700 pixels and includes three classes of land covers: wood land, built-up areas and open areas. Figure 3a shows the corresponding PauliRGB image and Figure 3b is the ground truth map.
The third data set is the San Francisco Bay data set for the San Francisco Bay area in the USA. It is a four-look L-band data set with 900 × 1024 pixels. Four classes of land covers are considered, i.e., mountains, sea, buildings and grass [35]. Figure 4a shows the PauliRGB image and Figure 4b is the ground truth map.
Because of the coherent imaging mechanism, strong speckle noise exists in PolSAR images. Therefore, the refined Lee filter is first applied for speckle reduction. Furthermore, we segment each PolSAR image into superpixels to enhance the classification performance and decrease the computation cost. In our experiments, the adaptive superpixel generation method [36] is used to obtain superpixels. Figure 2c, Figure 3c and Figure 4c show the segmentation results for the three data sets, respectively. Then we conduct experiments based on superpixels rather than single pixels. The mean of the polarimetric feature vectors of the pixels in a superpixel is used as the polarimetric feature vector of the superpixel, and the mean of the covariance matrices of the pixels in a superpixel is used as the covariance matrix of the superpixel.
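As an illustration of the superpixel-level aggregation described above, the following sketch averages pixel-wise covariance matrices and 30-D feature vectors inside each superpixel; the array shapes and names are our assumptions, not the authors' code.

```python
import numpy as np

def superpixel_features(cov, feat, labels):
    """Minimal sketch: per-superpixel mean covariance matrix and mean
    polarimetric feature vector. cov: (H, W, 3, 3); feat: (H, W, 30);
    labels: (H, W) superpixel indices in 0..n-1."""
    n = labels.max() + 1
    sp_cov = np.zeros((n, 3, 3), dtype=cov.dtype)
    sp_feat = np.zeros((n, feat.shape[-1]))
    for s in range(n):
        mask = labels == s
        sp_cov[s] = cov[mask].mean(axis=0)    # mean covariance matrix of the superpixel
        sp_feat[s] = feat[mask].mean(axis=0)  # mean polarimetric feature vector
    return sp_cov, sp_feat
```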
Regarding methods for comparison, the classical PolSAR image classification method, i.e., the Wishart classifier (WC) [4], is used as a baseline. Moreover, two DR methods, i.e., principal component analysis (PCA) and Laplacian eigenmaps (LE) [12], are employed for comparison with the proposed method. To further validate the effectiveness of the combination by co-regularization, two extreme cases of the proposed method are also used for comparison. The case $\lambda = 0$, $\alpha = 1$ means that we only use the weighted graph based on the Wishart distance, which is equal to Wishart distance-based Laplacian embedding (WDLE). In the same way, the case $\lambda = 0$, $\alpha = 0$ means that we only use the weighted graph based on the Euclidean distance of polarimetric features, which is equal to polarimetric feature-based Laplacian embedding (PFLE). Please note that for WDLE and PFLE, neighbouring samples are searched in a local region, which is different from LE [12] (LE searches for neighbouring samples globally). In addition, all methods are based on superpixels. To emphasize the role of feature extraction in PolSAR image classification, we employ only the nearest neighbour (NN) classifier based on the low-dimensional features obtained by PCA, LE, PFLE, WDLE and the proposed CRGE. We randomly select 1% of the samples for training the NN classifier and compute the average accuracy over 10 experiments as the final accuracy.
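The evaluation protocol can be sketched as follows (illustrative only: a brute-force 1-nearest-neighbour classifier with 1% randomly drawn training samples, averaged over 10 runs).

```python
import numpy as np

def evaluate_nn(features, labels, train_ratio=0.01, runs=10, seed=0):
    """Minimal sketch of the evaluation protocol: 1-NN classification with 1%
    training samples, averaged over 10 random splits. features: (n, d');
    labels: (n,). Names and the random-split scheme are illustrative."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    accs = []
    for _ in range(runs):
        idx = rng.permutation(n)
        n_train = max(1, int(round(train_ratio * n)))
        tr, te = idx[:n_train], idx[n_train:]
        # 1-NN: assign each test sample the label of its closest training sample
        dists = np.linalg.norm(features[te, None, :] - features[None, tr, :], axis=-1)
        pred = labels[tr][np.argmin(dists, axis=1)]
        accs.append(np.mean(pred == labels[te]))
    return float(np.mean(accs))
```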

4.2. Parameter Setting

The parameter setting is discussed in this part. The experiments on parameter setting are conducted on the subset of the Flevoland data set. Regarding the two types of co-regularization mentioned in Section 3, i.e., Equations (10) and (11), the corresponding models are problems (12) and (18). The first is denoted as CRGE-Cor1 and the other as CRGE-Cor2. Experiments with CRGE-Cor1 and CRGE-Cor2 are carried out with different sizes of the local patch $h \times h$ and numbers of neighbouring samples $k$. The classification accuracies are shown in Figure 5. We list three cases: $h = 41$, $h = 61$ and $h = 81$. Because the proposed method is based on superpixels which contain about 200 pixels each, the number of superpixels in an $h \times h$ patch is limited; for example, a 41 × 41 region contains at most 7 superpixels, as shown in Figure 5a. We can see that CRGE-Cor2 performs much better than CRGE-Cor1. CRGE-Cor1 performs poorly under different sizes of the local region and numbers of neighbouring samples. The highest classification accuracy of CRGE-Cor1 is lower than 90% and much lower than the classification accuracy of CRGE-Cor2. Therefore, we adopt the second type of co-regularization, i.e., CRGE-Cor2, for the following experiments. Please note that CRGE means CRGE-Cor2 in the following description. Moreover, regarding the size of the local patch $h \times h$ and the number of neighbouring samples $k$, taking the classification performance and the computation burden into account, we select a 61 × 61 local region and 10 neighbouring samples, i.e., $h = 61$, $k = 10$, for the experiments on the subset of the Flevoland data set.
Regarding the reduced dimensionality $d$, Figure 6 shows the classification accuracy of the six methods with dimensionality ranging from 1 to 10. We can see that the classification accuracy stays almost unchanged when the dimensionality is larger than 5. Therefore, we select $d = 6$ as the reduced dimensionality. Moreover, the proposed co-regularized graph embedding method has a higher accuracy than the other methods across the range of reduced dimensionality. When the reduced dimensionality is $d = 4$, the classification accuracy of the proposed co-regularized graph embedding method already reaches nearly 100%. Moreover, for the other three data sets, i.e., the full Flevoland data set, the Oberpfaffenhofen data set and the San Francisco Bay data set, the classification accuracies of the proposed method with dimensionality ranging from 1 to 10 are shown in Figure 7. We can see that when the dimensionality reaches 6, the classification accuracy almost stops increasing. Therefore, the reduced dimensionality is set to 6 for the three data sets. The stop error for the iterative process is set to $\varepsilon = 10^{-6}$. The maximum number of iterations is set to $T_{max} = 10$. The parameters $\alpha$ and $\lambda$ are set to $\alpha = 0.1$ and $\lambda = 0.2$.

4.3. Experimental Results

Table 1 gives the classification accuracies of the six methods on the subset of the Flevoland data set, including the user accuracy (UA) and producer accuracy (PA) for each class and the overall accuracy (OA). Note that bold values denote the highest UA, PA or OA. Figure 8 presents the visual classification maps of the six methods on the subset of the Flevoland data set. We can see that LE performs worst among the six methods. WC performs nearly as well as PCA. PFLE performs a little better than PCA and WC. WDLE performs somewhat worse than WC, PCA and PFLE. The overall accuracy (OA) of the proposed CRGE is higher than that of the other methods, and the user accuracy (UA) for each class of the proposed method is higher than that of the other comparison methods. The overall accuracy of the proposed method exceeds 99% and the classification map is nearly identical to the ground truth map. Moreover, although neither PFLE nor WDLE achieves a satisfactory classification performance, the combination of the two distances based on co-regularization contributes to the improvement of the classification performance.
Table 2 gives the classification accuracies of the six methods on the Flevoland data set, including the user accuracy (UA) and producer accuracy (PA) for each class and the overall accuracy (OA). Figure 9 presents the visual classification maps of the six methods on the Flevoland data set. It is obvious that PFLE and WDLE perform very poorly, while LE performs better than them. The reason may be that the Flevoland data set is large and contains many classes. Samples from one class are distributed quite diversely, so a local search based on only one distance is not suitable for this data set. However, the proposed CRGE performs best among the comparison methods, which shows that the combination of the two distances by co-regularization can improve the classification performance. PCA performs better than LE and WC. The overall accuracy of the proposed CRGE method is about 5% higher than that of the other methods.
Table 3 gives the classification accuracies of the six methods on the Oberpfaffenhofen data set, including the user accuracy (UA) and producer accuracy (PA) for each class and the overall accuracy (OA). Figure 10 presents the visual classification maps of the six methods on the Oberpfaffenhofen data set. The size of the local patch is set to $h = 81$ and the number of neighbouring samples to $k = 15$. The Wishart classifier performs worst among the compared methods, mainly because a great number of “built-up areas” samples are misclassified, as shown in Figure 10b. Table 3 shows that the user accuracy of WC for the class “built-up areas” is 36.39%, while the user accuracies of this class for the other methods are all larger than 85%. PCA, LE, PFLE and WDLE perform similarly. The classification accuracy of the proposed CRGE method reaches 98%, and the classification map of the proposed method also seems closer to the ground truth map.
Table 4 gives the classification accuracies of the six methods on the San Francisco Bay data set, including the user accuracy (UA) and producer accuracy (PA) for each class and the overall accuracy (OA). Figure 11 presents the visual classification maps of the six methods on the San Francisco Bay data set. The size of the local patch and the number of neighbouring samples are set to $h = 101$ and $k = 20$. The Wishart classifier still performs worst among the compared methods, mainly because many samples from three classes of land covers, i.e., mountains, grass and buildings, are misclassified. PCA, LE and WDLE perform similarly, and PFLE performs worse than them. The proposed co-regularized graph embedding method is superior to the other methods: its classification accuracy is about 2% higher than that of the other methods, and the user accuracy and producer accuracy for each class of the proposed CRGE method are higher than those of the other comparison methods. Obviously, the classification map of the proposed method appears more similar to the ground truth map.

4.4. Computational Speed Comparison

Table 5 shows the computational time of the six methods on the three data sets. We can see that WC costs little time on the Oberpfaffenhofen data set and the San Francisco Bay data set because the classification of WC is based on the class means, whereas the other methods employ the nearest neighbour (NN) classifier, which compares each sample with every training sample. Therefore, the other methods cost more time than WC. However, for the Flevoland data set, WC takes more time than PCA because the number of classes is larger than in the other data sets. The computational time of PCA is less than that of LE, PFLE, WDLE and CRGE because PCA has an explicit projection matrix for extracting low-dimensional features, which reduces the computational time. PFLE and WDLE take less time than LE because of the local search. The proposed CRGE costs the most time because it requires an iterative process to pursue the low-dimensional features.

5. Discussion

Because the Wishart distance of covariance matrices and the Euclidean distance of polarimetric features are two important similarity measurements for PolSAR image classification, this paper proposes the co-regularized graph embedding DR method to combine the two distances. Two weighted graphs are constructed corresponding to the two distances. Then the DR model is built based on the two graphs and co-regularization.
Observing the experimental results, we can see that Cor2 is more appropriate for PolSAR image classification and the corresponding DR model can enhance the classification performance, as shown in Figure 5. Under different sizes of the local patch and numbers of neighbouring samples, CRGE-Cor1 performs much worse than CRGE-Cor2. Therefore, we employ CRGE-Cor2 in the experiments. It is obvious that the proposed method performs better than the other comparison methods on the three data sets, especially on the Flevoland data set, where the proposed method increases the overall accuracy by about 5% and produces a better classification map. In fact, classification of the Flevoland data set is a more difficult problem because it contains 15 classes and has a large size, which is strong evidence of the superiority of the proposed method. Moreover, the two extreme cases of the proposed method, i.e., PFLE and WDLE, perform worst on the Flevoland data set among the six methods, and on the other data sets neither of them achieves a satisfactory classification performance. Therefore, we can conclude that the combination of the two distances by co-regularization indeed improves the classification performance, in terms of both the numerical results and the visual results.
From Table 5, we can see that the proposed method is time-consuming for data sets with a large size. Taking both accuracy and speed into account, the proposed method is a good choice for PolSAR image classification on data sets of small size with many classes.

6. Conclusions

This paper put forward a co-regularized graph embedding method on the basis of two types of distances (i.e., the Wishart distance of covariance matrices and the Euclidean distance of polarimetric features) for PolSAR image classification. Two weighted graphs are constructed corresponding to the two distances. Then the DR model is constructed based on the two weighted graphs and co-regularization. The co-regularization aims to minimize the dissimilarity of the low-dimensional features corresponding to the two graphs. The co-regularized graph embedding DR model achieves a better classification performance than the compared methods.
Moreover, the proposed co-regularized graph embedding method mainly focuses on graphs corresponding to different similarity measurements rather than on multiple data sets from different perspectives. We directly pursue the low-dimensional features specific to the different graphs. Therefore, the original data sets become unnecessary and only the similarities between samples are indispensable, which results in two advantages: (1) for one data set, the proposed method can use different similarity measurements to extract features from different perspectives, which characterizes the data more comprehensively; (2) sometimes large differences among multiple data sets from different perspectives may degrade the classification performance or bring challenges for data representation, which can be avoided by the proposed method.
The proposed method has some limitations: (1) it is specific to two weighted graphs and cannot deal with more graphs; (2) although CRGE with Cor2 can enhance the classification performance, it is time consuming.
In future work, we will mainly focus on extending the proposed method to more graphs and on designing a co-regularization that meets the requirements of both accuracy and speed.

Author Contributions

Conceptualization, X.H.; methodology, X.H.; software, X.H. and X.N.; validation, X.H.; formal analysis, X.H. and X.N.; investigation, X.H.; resources, X.H.; data curation, X.N.; writing—original draft, X.H.; writing—review and editing, X.H. and X.N.; supervision, H.Q.; project administration, H.Q.; funding acquisition, H.Q. and X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program of China (2017YFB1300200, 2017YFB1300203) and the National Natural Science Foundation of China under Grants 61802408, 91648205, 61627808, 91948303 and 61806202. This work was also supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB32000000 and by the Fundamental Research Funds for the Central Universities under Grant 22120200149.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lee, J.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2009.
2. Liu, X.; Jiao, L.; Tang, X.; Sun, Q.; Zhang, D. Polarimetric Convolutional Network for PolSAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 3040–3054.
3. Xie, W.; Jiao, L.; Hou, B.; Ma, W.; Zhao, J.; Zhang, S.; Liu, F. POLSAR Image Classification via Wishart-AE Model or Wishart-CAE Model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3604–3615.
4. Lee, J.S.; Grunes, M.R.; Kwok, R. Classification of multi-look polarimetric SAR imagery based on complex Wishart distribution. Int. J. Remote Sens. 1994, 15, 2299–2311.
5. Lee, J.S.; Grunes, M.R.; Ainsworth, T.L.; Du, L.J.; Schuler, D.L.; Cloude, S.R. Unsupervised classification using polarimetric decomposition and the complex Wishart classifier. IEEE Trans. Geosci. Remote Sens. 1999, 37, 2249–2258.
6. Cao, F.; Hong, W.; Wu, Y.; Pottier, E. An Unsupervised Segmentation with an Adaptive Number of Clusters Using the SPAN/H/α/A Space and the Complex Wishart Clustering for Fully Polarimetric SAR Data Analysis. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3454–3467.
7. Wu, Y.; Ji, K.; Yu, W.; Su, Y. Region-Based Classification of Polarimetric SAR Images Using Wishart MRF. IEEE Geosci. Remote Sens. Lett. 2008, 5, 668–672.
8. Anfinsen, S.; Jenssen, R.; Eltoft, T. Spectral clustering of polarimetric SAR data with Wishart-derived distance measures. In Proceedings of POLInSAR 2007, Frascati, Italy, 22–26 January 2007.
9. Ersahin, K.; Cumming, I.G.; Ward, R.K. Segmentation and Classification of Polarimetric SAR Data Using Spectral Graph Partitioning. IEEE Trans. Geosci. Remote Sens. 2010, 48, 164–174.
10. Qin, F.; Guo, J.; Lang, F. Superpixel Segmentation for Polarimetric SAR Imagery Using Local Iterative Clustering. IEEE Geosci. Remote Sens. Lett. 2015, 12, 13–17.
11. Jiao, L.; Liu, F. Wishart Deep Stacking Network for Fast POLSAR Image Classification. IEEE Trans. Image Process. 2016, 25, 3273–3286.
12. Tu, S.T.; Chen, J.Y.; Yang, W.; Sun, H. Laplacian Eigenmaps-Based Polarimetric Dimensionality Reduction for SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2012, 50, 170–179.
13. Shi, L.; Zhang, L.; Yang, J.; Zhang, L.; Li, P. Supervised graph embedding for polarimetric SAR image classification. IEEE Geosci. Remote Sens. Lett. 2013, 10, 216–220.
14. Zhang, L.; Sun, L.; Zou, B.; Moon, W.M. Fully Polarimetric SAR Image Classification via Sparse Representation and Polarimetric Features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3923–3932.
15. Zhou, Y.; Wang, H.; Xu, F.; Jin, Y. Polarimetric SAR Image Classification Using Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1935–1939.
16. Wang, S.; Liu, K.; Pei, J.; Gong, M.; Liu, Y. Unsupervised Classification of Fully Polarimetric SAR Images Based on Scattering Power Entropy and Copolarized Ratio. IEEE Geosci. Remote Sens. Lett. 2013, 10, 622–626.
17. Zhang, Z.; Wang, H.; Xu, F.; Jin, Y. Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 7177–7188.
18. Uhlmann, S.; Kiranyaz, S. Integrating color features in polarimetric SAR image classification. IEEE Trans. Geosci. Remote Sens. 2015, 52, 2197–2216.
19. Masjedi, A.; Zoej, M.J.V.; Maghsoudi, Y. Classification of Polarimetric SAR Images Based on Modeling Contextual Information and Using Texture Features. IEEE Trans. Geosci. Remote Sens. 2016, 54, 932–943.
20. Ren, B.; Hou, B.; Zhao, J.; Jiao, L. Unsupervised Classification of Polarimetric SAR Image via Improved Manifold Regularized Low-Rank Representation with Multiple Features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 580–595.
21. Huang, X.; Qiao, H.; Zhang, B.; Nie, X. Supervised Polarimetric SAR Image Classification Using Tensor Local Discriminant Embedding. IEEE Trans. Image Process. 2018, 27, 2966–2979.
22. Tao, M.; Zhou, F.; Liu, Y.; Zhang, Z. Tensorial Independent Component Analysis-Based Feature Extraction for Polarimetric SAR Data Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2481–2495.
23. Liu, H.; Wang, Z.; Shang, F.; Yang, S.; Gou, S.; Jiao, L. Semi-Supervised Tensorial Locally Linear Embedding for Feature Extraction Using PolSAR Data. IEEE J. Sel. Top. Signal Process. 2018, 12, 1476–1490.
24. Ren, B.; Hou, B.; Chanussot, J.; Jiao, L. PolSAR Feature Extraction via Tensor Embedding Framework for Land Cover Classification. IEEE Trans. Geosci. Remote Sens. 2019, 1–15.
25. Martinez, A.M.; Kak, A.C. PCA versus LDA. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 228–233.
26. Muller, K.; Mika, S.; Ratsch, G.; Tsuda, K.; Scholkopf, B. An introduction to kernel-based learning algorithms. IEEE Trans. Neural Netw. 2001, 12, 181–201.
27. Belkin, M.; Niyogi, P. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Comput. 2003, 15, 1373–1396.
28. Tenenbaum, J.; Silva, V.; Langford, J. A Global Geometric Framework for Nonlinear Dimensionality Reduction. Science 2000, 290, 2319–2323.
29. Roweis, S.; Saul, L. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 2000, 290, 2323–2326.
30. Yan, S.; Xu, D.; Zhang, B.; Zhang, H.; Yang, Q.; Lin, S. Graph Embedding and Extensions: A General Framework for Dimensionality Reduction. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 40–51.
31. Kan, M.; Shan, S.; Zhang, H.; Lao, S.; Chen, X. Multi-View Discriminant Analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 188–194.
32. Xia, T.; Tao, D.; Mei, T.; Zhang, Y. Multiview Spectral Embedding. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2010, 40, 1438–1446.
33. Kumar, A.; Rai, P.; Daumé, H., III. Co-regularized Multi-view Spectral Clustering. In Proceedings of Advances in Neural Information Processing Systems 24 (NIPS 2011), Granada, Spain, 12–14 December 2011; pp. 1413–1421.
34. Huang, X.; Nie, X.; Qiao, H. PolSAR image feature extraction based on co-regularization. In Proceedings of the 2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 19–24 July 2020; under review.
35. He, C.; Deng, J.; Xu, L.; Li, S.; Duan, M.; Liao, M. A novel over-segmentation method for polarimetric SAR images classification. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 4299–4302.
36. Xiang, D.; Ban, Y.; Wang, W.; Su, Y. Adaptive superpixel generation for polarimetric SAR images with local iterative clustering and SIRV model. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3115–3131.
Figure 1. The whole process based on co-regularized graph embedding for PolSAR image classification.
Figure 2. Display of the Flevoland data set. (a) the PauliRGB image, (b) the corresponding ground truth and (c) the segmentation results of the complete Flevoland data set; (d–f) correspond to the subset of the Flevoland data set with 200 × 320 pixels: (d) the PauliRGB image of the subset, (e) the corresponding ground truth of the subset and (f) the segmentation results of the subset.
Figure 3. Display of the Oberpfaffenhofen data set. (a) the PauliRGB image, (b) the corresponding ground truth and (c) the segmentation results.
Figure 4. Display of the San Francisco Bay data set. (a) the PauliRGB image, (b) the corresponding ground truth and (c) the segmentation results.
Figure 5. Comparison of the classification accuracies of CRGE-Cor1 and CRGE-Cor2 with different local patch sizes h and numbers of neighbouring samples k on the subset of the Flevoland data set. (a) h = 41 with k ranging from 1 to 7, (b) h = 61 with k ranging from 1 to 13 and (c) h = 81 with k ranging from 1 to 21.
Figure 6. Classification accuracies of six methods with varying dimensionality ranging from 1 to 10 on the subset of the Flevoland data set.
Figure 7. Classification accuracies of the proposed method with dimensionality ranging from 1 to 10 on three data sets. (a) the Flevoland data set, (b) the Oberpfaffenhofen data set and (c) the San Francisco Bay data set.
Figure 8. Visual classification results on the subset of the Flevoland data set. (a) the Wishart classifier, (b) PCA, (c) LE, (d) PFLE, (e) WDLE and (f) the proposed method.
Figure 9. Visual classification results on the Flevoland data set. (a) the Wishart classifier, (b) PCA, (c) LE, (d) PFLE, (e) WDLE and (f) the proposed method.
Figure 10. Visual classification results on the Oberpfaffenhofen data set. (a) the Wishart classifier, (b) PCA, (c) LE, (d) PFLE, (e) WDLE and (f) the proposed method.
Figure 11. Visual classification results on the San Francisco Bay data set. (a) the Wishart classifier, (b) LE, (c) PCA, (d) PFLE, (e) WDLE and (f) the proposed method.
Table 1. Classification accuracies (%) of six methods on the subset of the Flevoland data set. Each cell gives UA/PA.
Class      | WC          | PCA         | LE          | PFLE        | WDLE        | CRGE
Potatoes   | 99.60/99.30 | 96.20/96.28 | 97.84/98.14 | 92.10/99.28 | 92.54/96.66 | 100/99.92
Grass      | 86.04/96.92 | 97.81/96.60 | 89.48/90.09 | 98.13/94.87 | 98.70/97.42 | 98.70/100
Beet       | 99.75/98.25 | 90.37/91.87 | 87.33/95.04 | 95.44/97.31 | 95.18/91.16 | 99.75/99.49
Lucerne    | 90.29/66.75 | 81.96/96.29 | 93.04/89.22 | 100/84.37   | 94.17/100   | 100/100
Wheat I    | 98.56/100   | 99.43/100   | 86.07/95.78 | 100/100     | 94.10/95.92 | 100/99.91
Wheat II   | 98.44/97.17 | 100/97.78   | 82.54/74.59 | 100/100     | 100/76.35   | 100/100
Stem beans | 96.33/97.64 | 96.33/60.97 | 96.33/56.45 | 100/91.74   | 97/92.09    | 100/87.46
Bare soil  | 100/100     | 100/100     | 88.90/51.89 | 100/100     | 100/100     | 100/100
Rapeseed   | 100/91.75   | 100/99.13   | 100/82.31   | 100/99.13   | 77.19/100   | 100/100
OA         | 95.50       | 95.65       | 91.07       | 96.76       | 94.88       | 99.47
Table 2. Classification accuracies (%) of six methods on the Flevoland data set. Each cell gives UA/PA.
Class      | WC          | PCA         | LE          | PFLE        | WDLE        | CRGE
Stem beans | 88.71/92.66 | 79.06/87.11 | 79.27/95.24 | 78.16/71.45 | 76.72/77.43 | 94.64/94.88
Peas       | 93.51/98.27 | 95.03/85.59 | 94.18/94.83 | 84.68/89.55 | 80.38/80.60 | 98.57/89.88
Forest     | 94.33/97.87 | 94.98/98.94 | 91.39/95.40 | 75.68/76.74 | 86.42/87.14 | 97.31/97.61
Lucerne    | 83.77/98.34 | 92.63/87.83 | 94.03/90.46 | 76.38/79.20 | 81.95/78.25 | 99.81/94.90
Wheat I    | 86/96.50    | 90.30/85.78 | 87.27/83.45 | 78.06/76.10 | 81.58/83.29 | 100/98.35
Beet       | 95.83/84.75 | 88.03/89.22 | 83.77/87.57 | 71/73.53    | 73.72/76.68 | 93.18/92.91
Potatoes   | 93.85/95.70 | 91.39/92    | 91.12/84.79 | 76.86/77.08 | 90.49/85.99 | 97.27/94.87
Bare soil  | 99.94/35.41 | 98.44/91.79 | 99.94/98.91 | 74.79/76.10 | 85.12/72.98 | 99.29/100
Grasses    | 79.53/91.64 | 88.93/87.70 | 82.60/77.47 | 69.64/78.06 | 77.80/91.93 | 95.52/98.49
Rapeseed   | 77.57/81.13 | 88.55/88.52 | 93.79/90.91 | 74.79/71.93 | 79.76/75.23 | 95.26/98.93
Barley     | 96.84/72.33 | 90.92/91.49 | 85.96/84.83 | 82.64/79.39 | 78.20/88.38 | 93.39/98.38
Wheat II   | 92.38/71.29 | 92.13/94.74 | 94.21/95.91 | 89.59/82.22 | 78.81/80.08 | 94.74/98.67
Wheat III  | 95/97.64    | 92.41/95.68 | 90.59/93.26 | 82.44/83.77 | 86.50/85.65 | 96.32/98.08
Water      | 58.36/100   | 97.02/99.65 | 99.09/100   | 81.01/82.19 | 98.51/97.25 | 98.95/99.67
Buildings  | 81.72/100   | 88.45/77.53 | 91.81/100   | 81.72/97.01 | 98.53/100   | 81.72/95.81
OA         | 87.68       | 91.66       | 90.78       | 78.73       | 83.77       | 96.87
Table 3. Classification accuracies (%) of six methods on the Oberpfaffenhofen data set. Each cell gives UA/PA.
Class          | WC          | PCA         | LE          | PFLE        | WDLE        | CRGE
Built-up areas | 36.39/47.52 | 90.33/90.43 | 85.80/86.32 | 95.21/95.16 | 96.52/94.81 | 96.83/95.38
Wood land      | 80.68/89.71 | 97.80/96.82 | 95.78/95.85 | 95.60/95.06 | 97.12/97.19 | 99.05/98.67
Open areas     | 98.89/74.55 | 95.99/97.60 | 96.59/96.08 | 90.91/91.83 | 93.78/94.80 | 96.99/98.62
OA             | 77.41       | 95.80       | 94.07       | 94.13       | 96.01       | 98
Table 4. Classification accuracies (%) of six methods on the San Francisco Bay data set. Each cell gives UA/PA.
Class     | WC          | PCA         | LE          | PFLE        | WDLE        | CRGE
Sea       | 97.54/98.26 | 99.01/97.52 | 98.93/98.37 | 90.16/89.77 | 96.67/95.54 | 99.06/98.63
Mountains | 78.51/56.78 | 89.10/94.48 | 90.96/90.92 | 81.16/80.27 | 90.12/88.68 | 96.79/98.38
Grass     | 71.12/49.40 | 85.44/82.06 | 84.99/83.01 | 77.90/74.55 | 89.07/86.92 | 93.11/89.49
Buildings | 72.51/92.41 | 93.08/94.93 | 93.57/95.03 | 88.79/90.79 | 93.54/95.72 | 95.97/97.52
OA        | 82.30       | 93.97       | 94.06       | 87.21       | 93.86       | 96.80
Table 5. Computational time of six methods on three data sets.
Data set               | WC   | PCA   | LE    | PFLE  | WDLE  | CRGE
Flevoland              | 70 s | 29 s  | 100 s | 70 s  | 90 s  | 420 s
Oberpfaffenhofen       | 30 s | 35 s  | 57 s  | 42 s  | 48 s  | 180 s
San Francisco Bay      | 17 s | 105 s | 298 s | 256 s | 210 s | 980 s
