Article

Kernel-Based Nonlinear Spectral Unmixing with Dictionary Pruning

School of Marine Science and Technology, Northwestern Polytechnical University, Key Laboratory of Ocean Acoustics and Sensing, Ministry of Industry and Information Technology, Xi’an 710072, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(5), 529; https://doi.org/10.3390/rs11050529
Submission received: 21 January 2019 / Revised: 26 February 2019 / Accepted: 1 March 2019 / Published: 5 March 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract
Spectral unmixing extracts subpixel information by decomposing observed pixel spectra into a collection of constituent spectral signatures and their associated fractions. Considering the restrictions of the linear mixing model, nonlinear unmixing algorithms find their applications in complex scenes. Kernel-based algorithms serve as important candidates for nonlinear unmixing, as they do not require a specific model assumption and have moderate computational complexity. In this paper, we focus on the linear mixture/nonlinear fluctuation model. We propose a two-step kernel-based unmixing algorithm to address the case where a large spectral library is used as the set of candidate endmembers, or where the mixture is sparse. A sparsity-inducing regularization is introduced to perform the endmember selection, and the candidate library is then pruned to provide more accurate results. Experimental results with synthetic and real data, particularly the labeled data created in our laboratory, show the effectiveness of the proposed algorithm compared with state-of-the-art methods.

Graphical Abstract

1. Introduction

Hyperspectral images integrate the spectra representing the radiant attributes of an observed scene with images encoding spatial and geometric relations. Thanks to their high spectral resolution and rich spectral information, hyperspectral imaging has been widely used in applications such as forest mapping and monitoring, land cover change detection, mineral exploration, and object identification.
Hyperspectral image analysis plays a key role in revealing information from high-dimensional data. In the last decade, spectral unmixing has received considerable attention. It aims to decompose the observed pixel spectra into a collection of constituent spectra (called endmembers) and estimate the fractions associated with each component (called abundances) [1]. Spectral unmixing is particularly useful for analyzing pixels with low spatial resolution or with photon interactions among materials. Unmixing techniques have been extensively studied, and most of them perform the endmember extraction and abundance estimation either independently or simultaneously. Provided that pure pixels are present in the observed scene, methods such as the pixel purity index (PPI) algorithm [2], vertex component analysis (VCA) [3] and the N-FINDR algorithm [4] have been proposed for extracting endmembers. Meanwhile, some algorithms consider generating virtual endmembers and abundances to handle the absence of pure pixels, for example, minimum volume simplex analysis (MVSA) [5] and minimum volume constrained nonnegative matrix factorization [6]. To pursue interpretable and tractable solutions, most algorithms are based on the linear mixing model (LMM). The LMM assumes that an observed pixel is a combination of signature spectra weighted by the abundances; to be physically meaningful, the problem is usually subject to two constraints: the abundance nonnegativity constraint (ANC) and the abundance sum-to-one constraint (ASC). Linear unmixing algorithms can be classified into least-squares [7], statistical [8,9], sparse-regression [10,11], and independent component analysis (ICA) based algorithms [9,12,13].
Since the LMM may not be appropriate in practical situations where light undergoes multiple reflections or interactions among materials [14], the nonlinear mixing model (NLMM) provides an alternative to overcome the limitations of the LMM. For instance, a specific class of nonlinear models referred to as bilinear models has been studied in [15,16,17,18,19] for modeling the second-order reflectance by adding bilinear terms to the LMM. The post-nonlinear mixing model (PNMM) has also been introduced in [20,21]; it considers an appropriate nonlinear function mapping from [0,1]^L into [0,1]^L [18]. In [22], based on nonlinear models, the authors proposed a parameter-free unmixing algorithm where the abundance fractions and a set of nonlinear parameters are subject to specific constraints that are enforced by minimizing a penalty function. The above algorithms are built on particular models and thus lack flexibility. Neural network based methods can be regarded as model-free methods. In [23], the authors designed a multi-layer perceptron combined with a Hopfield neural network to deal with nonlinear mixtures. In [24], an auto-associative neural network was introduced for nonlinear unmixing, consisting of a dimension reduction stage and a stage mapping features to abundance percentages. Furthermore, in [25] the authors proposed an end-to-end unmixing method based on a convolutional neural network to improve the accuracy by taking spatial information into account. Kernel-based methods can also be regarded as model-free methods.
Kernel-based methods serve as one of the most popular tools for addressing nonlinear learning problems. These methods map the data to a feature space of higher dimension, where the mapped data can be represented with a linear model [26]. It is important to note that finding the explicit mapping is bypassed via the kernel trick [27,28,29,30,31]. In [32,33], the authors proposed a model consisting of a linear mixture and a nonlinear fluctuation. The nonlinear fluctuation function characterizes the high-order interactions between endmembers and is restricted to a reproducing kernel Hilbert space (RKHS). The so-called K-Hype and SK-Hype algorithms are proposed therein. In contrast to several other classes of algorithms, kernel-based methods are independent of a specific observation model and generalize across multiple scenes. Thus, subsequent models have been further studied in the literature. An ℓ₁ spatial regularization term is added to the K-Hype problem in [34] to promote the piece-wise spatial continuity of the abundance maps. In [35,36], the nonlinear fluctuation term is also called the residual term, and the associated algorithm is called residual component analysis. Post-nonlinear models as well as some robust unmixing models that consider such a fluctuation are presented in [37]. In [38], the authors extended this method by accounting for band-dependent and neighboring nonlinear contributions using separable kernels. Further, kernel-based nonnegative matrix factorization (NMF) techniques have been studied to simultaneously capture nonlinear dependencies and estimate the abundances.
The above kernel-based algorithms have inherent limitations. On the one hand, these algorithms focus on abundance estimation and do not consider the extraction of the endmembers. On the other hand, the structures of the K-Hype-type algorithms and their variations impose a regularization with unclear physical interpretation on the abundance vector (as elaborated in Section 2), and the fluctuation is independent of the abundance fractions. In this work, we propose a kernel-based sparse nonlinear spectral unmixing algorithm. The algorithm is designed to run with a large number of candidate spectral signatures. A sparse regularization step and a dictionary pruning step are conducted sequentially: the former selects the endmembers with significant contributions; the latter then performs the abundance estimation with a better-posed optimization problem using the pruned endmember dictionary. The contributions of this work are summarized as follows:
  • A kernel-based sparse nonlinear unmixing problem is formulated and a two-step solving strategy is proposed. This strategy allows using a spectral library for selecting the endmembers and bypasses the endmember extraction problem in nonlinear unmixing.
  • A more reasonable formulation of the optimization problem is proposed for solving the linear mixture/nonlinear fluctuation model. This formulation improves the K-Hype formulation in several aspects and serves as a key component in the proposed sparse unmixing scheme.
  • The algorithm is tested using real data with ground truth created in our laboratory. The lack of publicly available datasets with ground truth makes it difficult to compare unmixing algorithms; most existing works rely on numerically produced synthetic data and real data without ground truth. Using labeled real data provides a more meaningful comparison.
The remainder of this paper is organized as follows. Section 2 introduces the related work based on kernel methods. The proposed kernel-based algorithm is described in Section 3. Section 4 describes experimental results with simulated hyperspectral data and real hyperspectral datasets. Finally, the conclusion is given in Section 5.

2. Kernel-Based Nonlinear Abundance Estimation

Notation. Scalars are denoted by italic letters. Vectors and matrices are denoted by boldface small and capital letters, respectively. Specifically, if each pixel of the hyperspectral image consists of a reflectance vector in L contiguous spectral bands, then r = [r₁, r₂, …, r_L]^T ∈ R^L is an observed pixel, M = [m₁, m₂, …, m_R] ∈ R^{L×R} is the endmember matrix, i.e., a spectral library whose columns are the R spectral signatures m_i, m_{λℓ} ∈ R^R is the vector of the R endmember signatures at the ℓ-th wavelength band (the ℓ-th row of M), and α = [α₁, α₂, …, α_R]^T ∈ R^R is the abundance vector.
In this section, we first review the general unmixing model and the kernel-based linear mixture/nonlinear fluctuation model proposed in [33]. A general mixing mechanism can be formulated as
r = ψ(M, α) + n,  (1)
where ψ is an unknown function that defines the interactions between the endmembers in matrix M parameterized by their associated abundance fractions α , and n is the modeling noise. Though general enough, this strategy may fail if the function ψ cannot be adequately and finitely parameterized. A semi-parametric model proposed in [33] is described by:
ψ(m_{λℓ}) = m_{λℓ}^T α + ψ_nlin(m_{λℓ}).  (2)
This model is composed of a linear mixture term and a nonlinear fluctuation term defined by ψ_nlin. Several useful nonlinear mixing models can be considered as specific cases of (2). For instance, (2) reduces to a bilinear model if a second-order polynomial is used for ψ_nlin. The models and algorithms of residual component analysis follow the same principle [35,36,37]. Assume that the endmember matrix M is known, and that ψ_nlin(m_{λℓ}) is a real-valued function of a reproducing kernel Hilbert space H endowed with the reproducing kernel κ, i.e.,
ψ_nlin(m_{λℓ}) = ⟨ψ_nlin, κ(·, m_{λℓ})⟩_H.  (3)
Selecting a proper kernel is essential for describing the mixture. For instance, the second-order homogeneous polynomial kernel
κ(m_{λk}, m_{λℓ}) = (m_{λk}^T m_{λℓ})²  (4)
is able to represent the second-order interactions between endmembers; and the Gaussian kernel
κ(m_{λk}, m_{λℓ}) = exp(−‖m_{λk} − m_{λℓ}‖² / (2σ²)) = exp(−(‖m_{λk}‖² + ‖m_{λℓ}‖²) / (2σ²)) Σ_{i=0}^∞ (m_{λk}^T m_{λℓ})^i / (i! σ^{2i})  (5)
involves interactions of infinite order, since it can be expanded as a sum of polynomials of all orders. The authors of [33] then propose to estimate the abundances and the nonlinear function by solving the following optimization problem:
(α*, ψ_nlin*) = argmin_{α, ψ_nlin ∈ H}  ½(‖α‖²₂ + ‖ψ_nlin‖²_H) + (1/2μ) Σ_{ℓ=1}^L e_ℓ²
s.t.  e_ℓ = r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}),  α ≥ 0 and 1^T α = 1,  (6)
where μ is a positive parameter. The above problem minimizes the reconstruction error as well as the regularity of ψ_nlin and α, characterized by the squared norms, under the ANC and ASC. This convex problem can be solved via duality theory and leads to the so-called K-Hype algorithm. The ℓ-th element of the pixel is then reconstructed by
r_ℓ* = m_{λℓ}^T α* + ψ_nlin*(m_{λℓ}),  (7)
where * denotes the optimal estimates. K-Hype has been shown to be efficient in addressing several nonlinear mixture scenarios. However, it has the following restrictions:
  • R1: While the ℓ₂-regularization on ψ_nlin controls its regularity, the ℓ₂-regularization on α does not possess a clear interpretation, as there is no reason to expect an abundance vector to have a small ℓ₂-norm.
  • R2: Problem (6) estimates the abundances with a known endmember matrix. When a large spectral library is used as the candidate set, specific variants for extracting the active endmembers are needed.
Facts R1 and R2 motivate us to derive an improved model and the associated unmixing algorithms by modifying the ℓ₂-regularization and considering an endmember selection strategy.
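For concreteness, the polynomial kernel (4) and the Gaussian kernel (5) can be evaluated over all pairs of wavelength rows m_{λℓ} of the endmember matrix to form the Gram matrix used by kernel-based unmixing algorithms. The following is a minimal NumPy sketch; the function names are ours, not from [33]:

```python
import numpy as np

def polynomial_gram(M):
    """Second-order homogeneous polynomial kernel (4) between the
    rows of the L x R endmember matrix M."""
    inner = M @ M.T        # pairwise inner products of wavelength rows
    return inner ** 2      # L x L Gram matrix

def gaussian_gram(M, sigma):
    """Gaussian kernel (5) between the rows of M, bandwidth sigma."""
    sq = np.sum(M ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (M @ M.T)  # squared distances
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))
```

Both routines return a symmetric L × L Gram matrix; the Gaussian one has a unit diagonal, and its bandwidth σ governs the order of interactions that effectively contribute.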

3. Kernel-Based Nonlinear Unmixing Problem

In this section, we derive a new sparse kernel-based nonlinear unmixing algorithm with the data model (2). We shall first address R1 by removing the ℓ₂-regularization of the abundances and devising the associated algorithm. Thereafter, we address R2 by proposing a sparse unmixing technique consisting of an ℓ₁-regularization step and a dictionary pruning step.

3.1. Nonlinear Unmixing with the Regularity Constraint on ψ_nlin

Under the assumption that the endmember matrix M is known, we estimate the abundance vector α and infer the nonlinear function ψ_nlin ∈ H by solving the following functional optimization problem:
(α*, ψ_nlin*) = argmin_{α, ψ_nlin ∈ H}  ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L e_ℓ²
s.t.  e_ℓ = r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}),  α ≥ 0 and 1^T α = 1.  (8)
In contrast to (6), the ℓ₂-norm regularization of α is discarded; the optimal α* can then no longer be explicitly expressed with the dual variables as in ([33], Equation (18)). In this case, resorting to the semi-parametric representer theorem [39] leads to the following expression for the nonlinear function ψ_nlin:
ψ_nlin = Σ_{ℓ=1}^L β_ℓ κ(·, m_{λℓ}).  (9)
By substituting (9) into (8), we obtain the following problem in quadratic form with respect to the parameters α and β:
min_{α,β}  ½ β^T K β + (1/2μ) ‖r − Mα − Kβ‖²₂
= ½ [β; α]^T [ K + (1/μ)K^T K,  (1/μ)K^T M ;  (1/μ)M^T K,  (1/μ)M^T M ] [β; α] − [ (1/μ)K^T r ;  (1/μ)M^T r ]^T [β; α]
s.t.  α ≥ 0 and 1^T α = 1,  (10)
where K is the Gram matrix with (k, ℓ)-th entry κ(m_{λk}, m_{λℓ}). The abundance vector can then be evaluated by solving problem (10) under the ASC and ANC via a quadratic programming solver.
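As an illustration, problem (10) for a single pixel can be handed to a generic constrained solver. The sketch below uses SciPy's SLSQP to minimize the quadratic objective under the ANC and ASC; it illustrates the formulation under our own naming and is not necessarily the solver one would use in practice, where a dedicated quadratic programming routine would be faster:

```python
import numpy as np
from scipy.optimize import minimize

def unmix_pixel(r, M, K, mu):
    """Solve problem (10) for one pixel: jointly estimate the kernel
    weights beta (length L) and the abundances alpha (length R)."""
    L, R = M.shape

    def cost(z):
        beta, alpha = z[:L], z[L:]
        e = r - M @ alpha - K @ beta                 # reconstruction error
        return 0.5 * beta @ (K @ beta) + 0.5 / mu * (e @ e)

    constraints = [{"type": "eq", "fun": lambda z: np.sum(z[L:]) - 1.0}]  # ASC
    bounds = [(None, None)] * L + [(0.0, None)] * R                       # ANC
    z0 = np.concatenate([np.zeros(L), np.full(R, 1.0 / R)])
    sol = minimize(cost, z0, method="SLSQP", bounds=bounds,
                   constraints=constraints)
    return sol.x[L:], sol.x[:L]   # alpha, beta
```

The returned abundances satisfy the simplex constraints up to solver tolerance; the kernel weights β parameterize the nonlinear fluctuation through (9).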

3.2. Kernel-Based Sparse Nonlinear Unmixing with Dictionary Pruning

In this subsection, we consider a kernel-based model with dictionary pruning via a two-step method. It is possible to use a large spectral library as a dictionary and then select a few of its elements as the endmembers that contribute to the mixture. This also provides a solution to simultaneous endmember extraction and abundance estimation for nonlinear unmixing.

3.3. Problem Formulation

We first add an ℓ₁-norm regularization to (8) to promote the sparsity of the estimated abundances. The optimization problem is then formulated as
(α′, ψ_nlin′) = argmin_{α, ψ_nlin ∈ H}  ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L e_ℓ² + λ‖α‖₁
s.t.  e_ℓ = r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}),  α ≥ 0.  (11)
Note that the ASC is discarded, since it is contradictory to the ℓ₁-norm regularization. In order to solve (11), we introduce an auxiliary variable ζ and reformulate the problem in an equivalent form:
(α′, ζ′, ψ_nlin′) = argmin_{α, ζ, ψ_nlin ∈ H}  ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L e_ℓ² + λ‖ζ‖₁ + I_{R₊}(ζ)
s.t.  e_ℓ = r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}),  α = ζ,  (12)
where I_{R₊}(·) is the indicator function that equals 0 if its argument belongs to the nonnegative orthant and +∞ otherwise. The new variable ζ allows us to decouple the non-smooth ℓ₁-norm functional from the constrained problem. As studied in [40], the split-Bregman iteration is an efficient method for dealing with a broad class of ℓ₁-regularized problems. Applying this framework to (12) yields the following iterations:
(α^{(k+1)}, ψ_nlin^{(k+1)}, ζ^{(k+1)}) = argmin_{α, ψ_nlin ∈ H, ζ}  ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L (r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}))² + λ‖ζ‖₁ + I_{R₊}(ζ) + (ρ/2)‖α − ζ + ξ^{(k)}‖²₂  (13)
and
ξ^{(k+1)} = ξ^{(k)} + α^{(k+1)} − ζ^{(k+1)}.  (14)
Because of the way we have split the components of the cost function, we can now perform the above minimization efficiently by iteratively minimizing with respect to (α, ψ_nlin) and ζ separately. The two steps to perform are as follows:
  • Optimization with respect to α and ψ nlin : Discarding irrelevant variables, the optimization problem (13) reduces to
    min_{α, ψ_nlin ∈ H}  ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L e_ℓ² + (ρ/2)‖α − ζ^{(k)} + ξ^{(k)}‖²₂
    s.t.  e_ℓ = r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}).  (15)
    By introducing the Lagrange multipliers { β } = 1 L , the Lagrange function associated with the problem (15) can be written as
    L₁ = ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L e_ℓ² + (ρ/2)‖α − ζ^{(k)} + ξ^{(k)}‖²₂ + Σ_{ℓ=1}^L β_ℓ (r_ℓ − m_{λℓ}^T α − ψ_nlin(m_{λℓ}) − e_ℓ).  (16)
    The conditions for optimality of L 1 with respect to the primal variables are given by
    α = (1/ρ) M^T β + ζ^{(k)} − ξ^{(k)}
    ψ_nlin = Σ_{ℓ=1}^L β_ℓ κ(·, m_{λℓ})
    e_ℓ = μ β_ℓ,  for ℓ = 1, …, L,  (17)
    where β = [β₁, …, β_L]^T. Substituting (17) into (16) results in a quadratic form with respect to the Lagrange multipliers, and we obtain the following dual problem:
    max_β  −½ β^T (K + μI + (1/ρ) M M^T) β + (M(ξ^{(k)} − ζ^{(k)}) + r)^T β.  (18)
    This is a quadratic program and the updated value β ( k + 1 ) is readily obtained by
    β^{(k+1)} = (K + μI + (1/ρ) M M^T)^{−1} (M(ξ^{(k)} − ζ^{(k)}) + r).  (19)
    Clearly, the matrix K + μI + (1/ρ) M M^T is positive definite and thus invertible.
  • Optimization with respect to ζ: Discarding irrelevant terms, the problem reduces to
    min_ζ  λ‖ζ‖₁ + I_{R₊}(ζ) + (ρ/2)‖α^{(k+1)} − ζ + ξ^{(k)}‖²₂.  (20)
    Its solution can be expressed by the well-known soft threshold function
    ζ^{(k+1)} = (S(α^{(k+1)} + ξ^{(k)}, λ/ρ))₊,  (21)
    where S ( · , · ) denotes the component-wise application of the soft threshold function, given by
    S(x, y) = sign(x) max(|x| − y, 0),  (22)
    and (·)₊ projects its argument onto the nonnegative orthant by setting negative values to 0.
These iterations are repeated until convergence. The stopping criteria in [41] can be used, where the primal and dual residuals must be smaller than some tolerance thresholds. We denote the optimal solution obtained from this step by α′, as this quantity will be used in the next step.
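One full iteration thus chains the closed-form multiplier update (19), the abundance update (17), the soft-thresholding step (21) and the Bregman update (14). A minimal NumPy sketch of the loop, using a fixed iteration count in place of the residual-based stopping rule of [41]; the names are ours:

```python
import numpy as np

def soft_threshold(x, t):
    """Component-wise soft-thresholding S(x, t) of (22)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_kernel_unmix(r, M, K, mu, lam, rho, n_iter=200):
    """Split-Bregman iterations for problem (12), one pixel."""
    L, R = M.shape
    zeta = np.zeros(R)          # auxiliary (sparse, nonnegative) copy of alpha
    xi = np.zeros(R)            # Bregman variable
    A = K + mu * np.eye(L) + (1.0 / rho) * (M @ M.T)   # system matrix of (19)
    for _ in range(n_iter):
        beta = np.linalg.solve(A, M @ (xi - zeta) + r)       # update (19)
        alpha = (1.0 / rho) * (M.T @ beta) + zeta - xi       # update (17)
        zeta = np.maximum(soft_threshold(alpha + xi, lam / rho), 0.0)  # (21)
        xi = xi + alpha - zeta                               # update (14)
    return zeta, beta           # zeta is the sparse abundance estimate
```

The per-iteration cost is dominated by the L × L linear solve; since A is fixed, a single Cholesky factorization could be reused across iterations.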

3.4. Dictionary Pruning

By solving the above problem, the abundance vector is obtained over a relatively large original dictionary. We then propose to refine the estimation by pruning the large spectral library to a smaller one, discarding the endmembers with zero fractions. In this second step, we therefore reformulate the optimization problem using the pruned dictionary as follows:
(α*, ψ_nlin*) = argmin_{α, ψ_nlin ∈ H}  ½‖ψ_nlin‖²_H + (1/2μ) Σ_{ℓ=1}^L [r_ℓ − α^T(m_{λℓ} ⊙ supp(α′)) − ψ_nlin(m_{λℓ} ⊙ supp(α′))]² + λ‖α‖₁
s.t.  α ≥ 0,  (23)
where supp(α′) ∈ {0,1}^R is a vector denoting the support of the abundance vector α′ estimated in the first step, and ⊙ denotes the Hadamard product. The Hadamard product between the endmember information and the support vector discards the endmembers that are inactive in α′. Note that with the model in Section 3.3 a large number of endmembers interact in ψ_nlin, whereas only the small active subset is involved after pruning the dictionary, which yields a better-posed model. Problem (23) can be solved in a similar way as in Section 3.3.
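The pruning operation itself is straightforward: extract the support of the first-step estimate and mask the inactive columns of the library. A short sketch, with a hypothetical tolerance parameter tol deciding which fractions count as zero:

```python
import numpy as np

def prune_dictionary(M, alpha1, tol=1e-6):
    """Build the pruned library used in (23): zero out the columns of M
    (endmembers) whose first-step abundance does not exceed tol."""
    support = (np.abs(alpha1) > tol).astype(float)   # supp(alpha') in {0,1}^R
    M_pruned = M * support[None, :]                  # Hadamard masking of each row
    active = np.flatnonzero(support)                 # indices of kept endmembers
    return M_pruned, active
```

Masking the rows m_{λℓ} entry-wise, as in (23), is equivalent to zeroing whole columns of M; keeping track of the active indices lets the second-step estimates be mapped back to the full library.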

4. Experiments

In this section, experiments are conducted to validate the proposed unmixing algorithms and compare them with several state-of-the-art methods. The algorithms are tested on synthetic data, on real data created in our laboratory with ground truth, and on the well-known Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) image of the Cuprite mining region in NV, USA. Note that testing unmixing algorithms on real data with ground truth produces more meaningful results.

4.1. Experiments with Synthetic Data

4.1.1. Simulation Settings

Two synthetic datasets are generated for the experiments. In the first scene (denoted by S1), a large spectral library is considered, since the sparse unmixing algorithm aims to select proper endmembers from this library. A pruned version of the endmember library [42] is used, with 342 candidates whose reflectance values are measured in 224 spectral bands distributed uniformly in the interval 0.4–2.5 μm. A total of 1000 pixels are generated for each mixture setting. In each pixel, three endmembers of the library are randomly activated, with the associated abundance vectors generated in the simplex defined by the nonnegativity and sum-to-one constraints.
In the second scene (denoted by S2), nine endmembers are selected from the above library to generate a 50-by-50 image with N = 2500 pixels, using the upper-left region of ([11], Figure 2) for the associated abundances. The mixtures in this scene involve only nine endmembers in total; however, very few endmembers are activated in each pixel, as shown in Figure 1.
Both scenes are generated with the generalized bilinear model (GBM) defined by:
r = Mα + Σ_{i=1}^{R−1} Σ_{j=i+1}^{R} γ_{i,j} α_i α_j m_i ⊙ m_j + n,  (24)
and the PNMM defined by
r = (Mα)^τ + n.  (25)
In these experiments, the parameter γ_{i,j} is simply set to 1 and τ is set to 0.7, the same as in [33]. Finally, all data are corrupted with additive white Gaussian noise at two SNR levels, namely 30 dB and 15 dB.
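For reference, a single pixel under models (24) and (25) can be simulated as follows; this is a sketch with our own naming, using γ_{i,j} = 1 and τ = 0.7 as in the text, and a noise power chosen to meet the prescribed SNR:

```python
import numpy as np

def add_awgn(r, snr_db, rng):
    """Additive white Gaussian noise at the prescribed SNR (dB)."""
    p_signal = np.mean(r ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return r + rng.normal(0.0, np.sqrt(p_noise), size=r.shape)

def gbm_pixel(M, alpha, snr_db, rng):
    """Generalized bilinear model (24) with all gamma_{i,j} = 1."""
    L, R = M.shape
    r = M @ alpha
    for i in range(R - 1):
        for j in range(i + 1, R):
            r = r + alpha[i] * alpha[j] * M[:, i] * M[:, j]  # bilinear terms
    return add_awgn(r, snr_db, rng)

def pnmm_pixel(M, alpha, snr_db, rng, tau=0.7):
    """PNMM (25): entry-wise power of the linear mixture."""
    return add_awgn((M @ alpha) ** tau, snr_db, rng)
```

Drawing the abundance vector uniformly over the simplex (e.g., by normalizing exponential variates) and calling either generator per pixel reproduces the kind of mixture settings described above.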

4.1.2. Comparative Simulations

Several algorithms are run in order to compare their unmixing performance. Their tuning parameters are set during preliminary experiments on independent data, via a simple search over the grids defined hereafter.
  • The FCLS algorithm [7]: This classical algorithm relies on the linear model and seeks the optimal solution of a least-square problem subject to the ASC and ANC.
  • The sparse unmixing by variable splitting and augmented Lagrangian (SUnSAL) [10]: This algorithm fits the observed (mixed) hyperspectral vectors with sparse linear mixtures of spectral signatures from a large dictionary available a priori. The parameter λ_SUnSAL controls the trade-off between fitting and sparsity.
  • The K-Hype algorithm [33]: This algorithm is based on the scalar RKHS model. The parameter μ K Hype controls the trade-off between the fitting and the functional regularity.
  • The nonlinear neighbor and band dependent unmixing (NDU) [38]: This method extends the K-Hype by accounting for band-dependent and neighboring nonlinear contributions with vector-valued kernel function. The separable (Sp.) structure is considered due to the high computational complexity. This algorithm is characterized by the trade-off parameter λ NDU , fitting parameter μ NDU and penalty parameter ρ NDU .
  • The improved incremental KNMF (IIKNMF) [31]: This method extends KNMF by introducing partition matrix theory and considering the relationships among dividing blocks. The incremental KNMF (IKNMF) is proposed to reduce the computing requirements for large-scale data and IIKNMF aims to further improve the abundance results. The parameter k controls the number of blocks. For each experiment of IIKNMF, 5 independent runs are carried out and the results are averaged.
The root mean square error between the true values α n and the estimates α ^ n defined by
RMSE = √( (1/(NR)) Σ_{n=1}^N ‖α_n − α̂_n‖² )  (26)
is used to evaluate the performance of the abundance estimation. All the kernel-based methods are tested with a Gaussian kernel (G) given by
κ(m_{λℓ}, m_{λp}) = exp(−‖m_{λℓ} − m_{λp}‖² / (2σ²)),  (27)
and a second-order homogeneous polynomial kernel (P) given by
κ(m_{λℓ}, m_{λp}) = (m_{λℓ}^T m_{λp})²,  (28)
respectively. The bandwidth σ of the Gaussian kernel in the algorithms K-Hype, NDU, IIKNMF and the proposed algorithm is varied within { 1 , 2 , 3 , , 10 } . In these algorithms, the tuning parameters λ SunSAL , λ NDC and λ of the sparse regularization are tested in the range { 1 , 0.5 , 0.1 , 0.05 , 0.01 , 0.005 , 0.001 , 0.0005 } , and parameters μ , μ K Hype and μ NDU for the fitting term are tested in the range { 1000 , 500 , 100 , 20 , 10 , 5 , 2 , 1 , 0.5 , 0.2 , 0.1 , 0.05 , 0.01 , 0.005 } . The penalty parameter ρ NDU in NDU and ρ in the proposed method are both set to 1. All the parameters used in the experiments of S 1 and S 2 are reported in Appendix Section (Table A1 and Table A2 respectively). In the experiment with S 1 , the results of IIKNMF are not given, since its computational time is beyond acceptance with a very large endmember library. Besides selecting the Gaussian kernel parameter from the candidate values, we also use an automatic parameter setting strategy, where we set σ = max , k d ( m λ , m λ k ) , so that the kernel width covers the variability range of the input. Associated performance of this setting is denoted by “(auto)” in the results.

4.1.3. Results

The obtained RMSEs for S1 and S2 are reported in Table 1 and Table 2. Generally, under the low SNR (15 dB), the compared algorithms have similar performance, while all algorithms show increased accuracy as the SNR increases. It is observed that NDU performs slightly better than the proposed algorithm in some settings of S1; however, its computational complexity is several times higher. In S2, the proposed algorithm outperforms the competing methods in most cases, and even in the low-SNR setting it significantly outperforms the others. We also observe that the automatic kernel parameter setting strategy leads to results comparable to those obtained with the pre-selected setting. Synthetic data can be too restrictive for a proper performance evaluation due to their inherent limitations. In the next section, we validate the advantage of the proposed algorithm with labeled real data.

4.1.4. Sensitivity Analysis

We analyze the sensitivity of the proposed method with respect to the algorithm parameters λ and μ. This test is conducted on S1 using both the Gaussian and polynomial kernels at an SNR of 30 dB. The results are shown in Figure 2 and Figure 3. As can be seen, within a reasonable range around the optimal parameter values the algorithm exhibits satisfactory RMSE.

4.2. Experiments with the Laboratory Created Real Data

4.2.1. Data Presentation

In order to perform a quantitative evaluation of unmixing performance, we designed several experimental scenes in our laboratory with the GaiaField (Sichuan Dualix Spectral Image Technology Co. Ltd., GaiaField-V10) and GaiaSorter systems. The GaiaField is a push-broom imaging spectrometer with an HSIA-OL50 lens, covering the visible and NIR wavelengths from 400 nm to 1000 nm. The GaiaSorter provides an environment isolated from external light and is equipped with a conveyor that moves samples for push-broom imaging. A dataset was created by imaging these scenes with the hyperspectral camera in our laboratory, providing mixtures with 256 wavelength bands. The experimental settings are strictly controlled so that the pure material spectral signatures and the material compositions are known. See [43,44] for detailed descriptions of the experiments and data.
In this subsection, uniform mixture #1, mixture #7 and two non-uniform mixture cases of Scene-II in [44] are used. (In [44], to obtain the ground truth, aligned high-resolution RGB images of S5 and S6 were captured; the percentage of each colored sand in a low-resolution hyperspectral pixel could then be determined with the help of the associated RGB image.) In this scene, quartz sands of different colors are mixed, and the diameter of the granules is controlled to be approximately 0.03 mm (see Figure 4). Seven endmembers (four pure colored quartz sands and three other materials in the dataset) are included as candidate spectral signatures, shown in Figure 5. The true abundances are provided either by a priori known fractions (uniform mixtures) or by analysis of the aligned high-resolution images (mixtures with spatial patterns). One uniform mixture is formed by mixing 50% blue sand and 50% white sand, so that the true abundance vector is α = [0.5, 0.5, 0, 0, 0, 0, 0]^T (denoted by S3). The other uniform mixture is formed by mixing blue, red and green sand, so that the true abundance vector is α = [1/3, 0, 1/3, 1/3, 0, 0, 0]^T (denoted by S4). S5 and S6 respectively denote the two mixtures with spatial patterns, formed by mixing two and three types of sand. As this scene mimics the intimate mixture model, the Gaussian kernel is used for this high-order nonlinear case.

4.2.2. Results

The obtained RMSEs are reported in Table 3, with the algorithm parameters listed in Table A3. The proposed algorithm achieves the lowest RMSEs, and the variant with automatically set bandwidth also performs satisfactorily. For data S5 and S6, the abundance maps estimated by the compared algorithms are shown in Figure 6 and Figure 7. Most algorithms recover the spatial pattern of the abundance maps; however, the K-Hype algorithm produces a low-contrast map for the first component of S5. The proposed algorithm yields sharper abundance maps with the lowest RMSEs. Experiments with these labeled real data show the benefit of the proposed algorithm.

4.3. Experiment with AVIRIS Cuprite Data

This section illustrates the performance of the proposed and competing algorithms when applied to real remote sensing hyperspectral data. The scene used in our experiment is the well-known image captured over the Cuprite mining district (NV, USA) by AVIRIS. A sub-image of 250 × 191 pixels with 188 spectral bands is chosen to evaluate the algorithms. This scene is denoted by S7. The same twelve spectral signatures as in [33] are used for the endmember matrix, as shown in Figure 8. Results of most existing works indicate that only a few materials are active in each pixel; thus sparse unmixing is potentially beneficial for analyzing this scene.
Note that although these real data are extensively used in the literature, no ground-truth information is available for an objective performance evaluation. The quality of the algorithms can be assessed by the reconstruction error (RE), defined as
RE = √( (1/(NL)) Σ_{n=1}^N ‖r_n − r̂_n‖² ).  (29)
In general, the RE provides quantitative information on the unmixing performance. However, as pointed out in [33], without ground-truth information on the abundances, the reconstruction quality measured by the RE is not necessarily proportional to the quality of the abundance estimation, and it can therefore only be considered complementary information. In this experiment, the selected parameters are given in Table A4, and the RE comparison of the linear and nonlinear models is reported in Table 4. Figure 9 illustrates the maps of the reconstruction error. These tables and figures clearly indicate that nonlinear unmixing algorithms lead to significantly lower REs compared with linear algorithms, especially in several particular spatial regions. The proposed algorithm exhibits an RE comparable to that of K-Hype with the Gaussian kernel; recall that it is common for a regularization term to increase the data-fitting cost. Figure 10 shows the abundance maps of selected materials estimated by these algorithms. The proposed algorithm provides sharper maps with several particular locations emphasized.

5. Conclusions

In this work a new nonlinear sparse unmixing method was proposed. Under the assumption that the mixing model consists of a linear mixture and a nonlinear fluctuation defined in an RKHS, the unmixing is conducted by a kernel-based method with a sparsity-promoting term and a dictionary pruning technique. Experiments with both synthetic and real data were conducted to compare the performance of the proposed algorithm with competing algorithms. The proposed method outperformed the competing ones in most cases. In particular, quantitative abundance estimation errors were reported on laboratory-created real data and highlighted the advantage of the proposed algorithm. Although NDU performed slightly better in one synthetic data setting, it has a higher computational complexity. In future work, automatic kernel parameter learning can be considered to increase the flexibility of the algorithm.
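As a rough illustration of the two-step procedure summarized above (sparsity-promoting estimation, followed by pruning of the candidate library and re-estimation), the sketch below applies the idea to a plain linear mixing model with an ISTA-style solver. It deliberately omits the RKHS nonlinear-fluctuation term and the sum-to-one constraint, and all names and default parameter values are ours, not the paper's.

```python
import numpy as np

def two_step_sparse_unmixing(r, M, lam=0.01, tau=0.05, n_iter=500, lr=0.1):
    """Schematic two-step unmixing for one pixel r (L,) with library M (L, R):
    1) sparsity-promoting abundance estimate via ISTA (soft-thresholded
       projected gradient on the l1-regularized least-squares cost),
    2) prune endmembers whose abundance falls below tau, then re-solve on
       the pruned library.
    Linear-model illustration only, NOT the paper's kernel-based solver.
    The step size lr must be below 1 / ||M||_2^2 for ISTA to converge.
    """
    def solve(Md):
        a = np.full(Md.shape[1], 1.0 / Md.shape[1])
        for _ in range(n_iter):
            grad = Md.T @ (Md @ a - r)                               # LS gradient
            a = a - lr * grad
            a = np.sign(a) * np.maximum(np.abs(a) - lr * lam, 0.0)   # soft threshold
            a = np.maximum(a, 0.0)                                   # nonnegativity
        return a

    a_full = solve(M)
    keep = a_full > tau              # dictionary pruning by abundance magnitude
    if not keep.any():
        keep[np.argmax(a_full)] = True
    a_pruned = np.zeros_like(a_full)
    a_pruned[keep] = solve(M[:, keep])
    return a_pruned, keep
```

With a large spectral library, the first pass selects the few active endmembers and the second pass refines their abundances on the pruned dictionary.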

Author Contributions

Conceptualization, J.C.; methodology, Z.L.; software, Z.L.; formal analysis, Z.L.; investigation, J.C. and Z.L.; supervision, J.C. and S.R.; writing–original draft preparation, Z.L.; writing–review and editing, J.C. and S.R.

Funding

The work of J. Chen was supported in part by NSFC grant 61671382, Natural Science Foundation of Shenzhen grant JCYJ2017030155315873, and the 111 Project (B18041).

Acknowledgments

The authors would like to thank Risheng Huang for providing MATLAB codes of IIKNMF used in our experiments.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The optimal parameters used in the experiments are provided in Table A1, Table A2, Table A3 and Table A4.
Table A1. The selected algorithm parameters for S 1 .
Algorithm | GBM (SNR = 15 dB) | PNMM (SNR = 15 dB) | GBM (SNR = 30 dB) | PNMM (SNR = 30 dB)
SUnSAL | λ = 0.1 | λ = 0.05 | λ = 0.005 | λ = 0.005
K-Hype ( G ) | μ = 0.2, σ = 10 | μ = 0.2, σ = 10 | μ = 0.005, σ = 10 | μ = 0.01, σ = 10
K-Hype ( P ) | μ = 0.2 | μ = 0.2 | μ = 0.01 | μ = 0.01
NDU ( G ) | λ = 0.1, μ = 100, σ = 10 | λ = 0.1, μ = 1, σ = 10 | λ = 0.01, μ = 0.1, σ = 10 | λ = 0.01, μ = 0.2, σ = 10
NDU ( P ) | λ = 0.1, μ = 100 | λ = 0.1, μ = 1 | λ = 0.01, μ = 0.05 | λ = 0.01, μ = 0.1
Proposed ( G ) | λ = 0.005, μ = 20, σ = 2 | λ = 0.001, μ = 100, σ = 10 | λ = 0.001, μ = 1, σ = 2 | λ = 0.001, μ = 2, σ = 10
Proposed (auto) ( G ) | λ = 0.5, μ = 5, σ = 6.62 | λ = 0.001, μ = 100, σ = 6.62 | λ = 0.001, μ = 1, σ = 6.62 | λ = 0.001, μ = 2, σ = 6.62
Proposed ( P ) | λ = 0.005, μ = 10 | λ = 0.001, μ = 100 | λ = 0.01, μ = 0.2 | λ = 0.001, μ = 2
Table A2. The selected algorithm parameters for S 2 .
Algorithm | GBM (SNR = 15 dB) | PNMM (SNR = 15 dB) | GBM (SNR = 30 dB) | PNMM (SNR = 30 dB)
SUnSAL | λ = 0.05 | λ = 0.1 | λ = 0.0005 | λ = 0.05
K-Hype ( G ) | μ = 0.01, σ = 10 | μ = 0.01, σ = 10 | μ = 0.005, σ = 10 | μ = 0.005, σ = 10
K-Hype ( P ) | μ = 0.1 | μ = 0.1 | μ = 0.005 | μ = 0.2
NDU ( G ) | λ = 0.1, μ = 100, σ = 1 | λ = 0.001, μ = 0.001, σ = 10 | λ = 0.005, μ = 0.005, σ = 2 | λ = 0.005, μ = 0.005, σ = 10
NDU ( P ) | λ = 0.1, μ = 10 | λ = 0.05, μ = 0.2 | λ = 0.1, μ = 1 | λ = 0.01, μ = 0.01
IIKNMF ( G ) | k = 100, σ = 10 | k = 100, σ = 10 | k = 100, σ = 10 | k = 100, σ = 10
IIKNMF ( P ) | k = 100 | k = 100 | k = 100 | k = 100
Proposed ( G ) | λ = 0.05, μ = 1, σ = 1 | λ = 0.05, μ = 2, σ = 2 | λ = 0.1, μ = 0.1, σ = 2 | λ = 0.05, μ = 0.1, σ = 1
Proposed (auto) ( G ) | λ = 0.05, μ = 1, σ = 1.27 | λ = 0.05, μ = 2, σ = 1.27 | λ = 0.1, μ = 0.1, σ = 1.27 | λ = 0.05, μ = 0.2, σ = 1.27
Proposed ( P ) | λ = 0.05, μ = 1 | λ = 0.1, μ = 1 | λ = 0.05, μ = 0.2 | λ = 0.05, μ = 1
Table A3. The parameters in algorithms of S 3 , S 4 , S 5 , S 6 .
Scene | SUnSAL | K-Hype | NDU | IIKNMF | Proposed | Proposed (auto)
S 3 | λ = 1 | μ = 1, δ = 10 | μ = 200, λ = 0.001, δ = 10 | k = 200, δ = 10 | μ = 500, λ = 0.01, δ = 1 | μ = 500, λ = 0.01, δ = 1.54
S 4 | λ = 0.001 | μ = 0.05, δ = 10 | μ = 1000, λ = 0.001, δ = 1 | k = 200, δ = 10 | μ = 10, λ = 0.001, δ = 1 | μ = 10, λ = 0.001, δ = 1.54
S 5 | λ = 0.001 | μ = 0.05, δ = 10 | μ = 200, λ = 0.001, δ = 10 | k = 200, δ = 10 | μ = 2, λ = 0.001, δ = 10 | μ = 5, λ = 0.001, δ = 1.54
S 6 | λ = 0.01 | μ = 0.2, δ = 10 | μ = 1000, λ = 0.001, δ = 10 | k = 200, δ = 10 | μ = 10, λ = 0.001, δ = 10 | μ = 10, λ = 0.001, δ = 1.54
Table A4. Parameters used in algorithms of S 7 .
Model | SUnSAL | K-Hype | NDU | IIKNMF | Proposed
LMM | λ = 0.001 | – | – | – | –
G | – | μ = 0.0001, σ = 10 | μ = 0.01, σ = 2, λ = 0.001 | k = 400, σ = 2 | μ = 0.0001, σ = 1, λ = 0.001
P | – | μ = 0.0001 | μ = 0.01, λ = 0.001 | k = 400 | μ = 0.001, λ = 0.001

References

  1. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  2. Boardman, J.W. Automating spectral unmixing of aviris data using convex geometry concepts. In Proceedings of the Fourth Annual JPL Airborne Geoscience Workshop, Arlington, VA, USA, 25–29 October 1993. [Google Scholar]
  3. Nascimento, J.M.P.; Bioucas-Dias, J.M. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  4. Winter, M.E. N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data. Proc. SPIE Image Spectrom. V 1999, 3753, 266–275. [Google Scholar]
  5. Li, J.; Bioucas-Dias, J.M. Minimum Volume Simplex Analysis: A Fast Algorithm to Unmix Hyperspectral Data. In Proceedings of the 2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 8–11 July 2008; Volume 3753, pp. 250–253. [Google Scholar]
  6. Miao, L.; Qi, H. Endmember Extraction From Highly Mixed Data Using Minimum Volume Constrained Nonnegative Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2007, 45, 765–777. [Google Scholar] [CrossRef]
  7. Heinz, D.C.; Chang, C. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2002, 39, 529–545. [Google Scholar] [CrossRef]
  8. Dobigeon, N.; Tourneret, J.Y.; Chang, C.I. Semi-supervised linear spectral unmixing using a hierarchical Bayesian model for hyperspectral imagery. IEEE Trans. Signal Process. 2008, 56, 2684–2695. [Google Scholar] [CrossRef]
  9. Moussaoui, S.; Schmidt, F.; Jutten, C.; Chanussot, J.; Brie, D.; Benediktsson, J.A. On the decomposition of Mars hyperspectral data by ICA and Bayesian positive source separation. Neurocomputing 2008, 71, 2194–2208. [Google Scholar] [CrossRef] [Green Version]
  10. Bioucas-Dias, J.M.; Figueiredo, M.A.T. Alternating Direction Algorithms for Constrained Sparse Regression: Application to Hyperspectral Unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  11. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Total variation spatial regularization for sparse hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2012, 50, 4484–4502. [Google Scholar] [CrossRef]
  12. Xia, W.; Liu, X.; Wang, B.; Zhang, L. Independent component analysis for blind unmixing of hyperspectral imagery with additional constraints. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2165–2179. [Google Scholar] [CrossRef]
  13. Jia, S.; Qian, Y. Constrained nonnegative matrix factorization for hyperspectral unmixing. IEEE Trans. Geosci. Remote Sens. 2009, 47, 161–173. [Google Scholar] [CrossRef]
  14. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379. [Google Scholar] [CrossRef]
  15. Halimi, A.; Altmann, Y.; Dobigeon, N.; Tourneret, J.Y. Nonlinear unmixing of hyperspectral images using a generalized bilinear model. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4153–4162. [Google Scholar] [CrossRef]
  16. Altmann, Y.; Dobigeon, N.; Tourneret, J.Y. Bilinear models for nonlinear unmixing of hyperspectral images. In Proceedings of the IEEE GRSS Workshop Hyperspectral Image Signal Process.: Evolution in Remote Sens. (WHISPERS), Shanghai, China, 6–9 June 2011; pp. 1–4. [Google Scholar]
  17. Fan, W.; Hu, B.; Miller, J.; Li, M. Comparative study between a new nonlinear model and common linear model for analysing laboratory simulated-forest hyperspectral data. Int. J. Remote Sens. 2009, 30, 2951–2962. [Google Scholar] [CrossRef]
  18. Qu, Q.; Nasrabadi, N.M.; Tran, T.D. Abundance Estimation for Bilinear Mixture Models via Joint Sparse and Low-Rank Representation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4404–4423. [Google Scholar]
  19. Sigurdsson, J.; Ulfarsson, M.O.; Sveinsson, J.R. Blind Sparse Nonlinear Hyperspectral Unmixing Using an ℓq Penalty. IEEE Geosci. Remote Sens. Lett. 2018, 99, 1–5. [Google Scholar]
  20. Altmann, Y.; Halimi, A.; Dobigeon, N.; Tourneret, J.Y. Supervised Nonlinear Spectral Unmixing Using a Postnonlinear Mixing Model for Hyperspectral Imagery. IEEE Trans. Image Process. 2012, 21, 3017–3025. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  21. Zhong, L.; Luo, W.; Gao, L. A particle swarm optimization algorithm for unmixing the polynomial post-nonlinear mixing model. In Proceedings of the Congress on Image and Signal Process., BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 596–600. [Google Scholar]
  22. Pu, H.; Chen, Z.; Wang, B.; Xia, W. Constrained Least Squares Algorithms for Nonlinear Unmixing of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1287–1303. [Google Scholar] [CrossRef]
  23. Plaza, J.; Martínez, P.; Pérez, R.; Plaza, A. Nonlinear neural network mixture models for fractional abundance estimation in AVIRIS hyperspectral images. In Proceedings of the XIII NASA/Jet Propulsion Lab. Airborne Sci. Workshop, Pasadena, CA, USA, 31 March–2 April 2004. [Google Scholar]
  24. Licciardi, G.A.; Del Frate, F. Pixel unmixing in hyperspectral data by means of neural networks. IEEE Trans. Geosci. Remote sens. 2011, 49, 4163–4172. [Google Scholar] [CrossRef]
  25. Zhang, X.; Sun, Y.; Zhang, J.; Wu, P.; Jiao, L. Hyperspectral Unmixing via Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 99, 1–5. [Google Scholar] [CrossRef]
  26. Heylen, R.; Parente, M.; Gader, P. A Review of Nonlinear Hyperspectral Unmixing Methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1844–1868. [Google Scholar] [CrossRef]
  27. Broadwater, J.; Chellappa, R.; Banerjee, A.; Burlina, P. Kernel fully constrained least squares abundance estimates. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 4041–4044. [Google Scholar]
  28. Zhang, L.; Wu, B.; Huang, B.; Li, P. Nonlinear estimation of subpixel proportion via kernel least square regression. Int. J. Remote Sens. 2007, 28, 4157–4172. [Google Scholar] [CrossRef]
  29. Wang, W.; Qian, Y. Kernel based sparse NMF algorithm for hyperspectral unmixing. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6970–6973. [Google Scholar]
  30. Huang, R.; Li, X.; Zhao, L. Incremental kernel non-negative matrix factorization for hyperspectral unmixing. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6569–6572. [Google Scholar]
  31. Huang, R.; Li, X.; Zhao, L. Hyperspectral Unmixing Based on Incremental Kernel Nonnegative Matrix Factorization. IEEE Trans. Geosci. Remote Sens. 2018, 99, 1–18. [Google Scholar] [CrossRef]
  32. Chen, J.; Richard, C.; Honeine, P. Nonlinear unmixing of hyperspectral images with multi-kernel learning. In Proceedings of the 2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Shanghai, China, 4–7 June 2012; pp. 1–4. [Google Scholar]
  33. Chen, J.; Richard, C.; Honeine, P. Nonlinear Unmixing of Hyperspectral Data Based on a Linear-Mixture/Nonlinear-Fluctuation Model. IEEE Trans. Signal Process. 2013, 61, 480–492. [Google Scholar] [CrossRef]
  34. Chen, J.; Richard, C.; Honeine, P. Nonlinear estimation of material abundances in hyperspectral images with ℓ1-norm spatial regularization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2654–2665. [Google Scholar] [CrossRef]
  35. Altmann, Y.; Dobigeon, N.; McLaughlin, S.; Tourneret, J. Residual component analysis of hyperspectral images–application to joint nonlinear unmixing and nonlinearity detection. IEEE Trans. Image Process. 2014, 23, 2148–2158. [Google Scholar] [CrossRef] [PubMed]
  36. Altmann, Y.; Pereyra, M.; McLaughlin, S. Bayesian Nonlinear Hyperspectral Unmixing with Spatial Residual Component Analysis. IEEE Trans. Image Process. 2015, 1, 174–185. [Google Scholar] [CrossRef]
  37. Févotte, C.; Dobigeon, N. Nonlinear hyperspectral unmixing with robust nonnegative matrix factorization. IEEE Trans. Image Process. 2015, 24, 4810–4819. [Google Scholar] [CrossRef] [PubMed]
  38. Ammanouil, R.; Ferrari, A.; Richard, C.; Mathieu, S. Nonlinear unmixing of hyperspectral data with vector-valued kernel functions. IEEE Trans. Image Process. 2017, 26, 340–354. [Google Scholar] [CrossRef] [PubMed]
  39. Scholkopf, B.; Herbrich, R.; Smola, A.J. A generalized representer theorem. In Proceedings of the International Conference on Computational Learning Theory, Sydney, NSW, Australia, 8–10 July 2001; pp. 416–426. [Google Scholar]
  40. Goldstein, T.; Osher, S. The Split Bregman Method for L1-Regularized Problems. SIAM J. Img. Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  41. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  42. Iordache, M.D.; Bioucas-Dias, J.M.; Plaza, A. Sparse Unmixing of Hyperspectral Data. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2014–2039. [Google Scholar] [CrossRef] [Green Version]
  43. Zhao, M.; Chen, J. A dataset with ground-truth for hyperspectral unmixing. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5081–5084. [Google Scholar]
  44. Zhao, M.; Chen, J.; He, Z. A laboratory-created dataset with ground-truth for hyperspectral unmixing evaluation. arXiv, 2019; arXiv:1902.08347. [Google Scholar]
Figure 1. Ground-truth abundance maps associated to nine endmembers used in S 2 .
Figure 2. RMSE as a function of the regularization parameters for the proposed method with S 1 using Gaussian kernel.
Figure 3. RMSE as a function of the regularization parameters for the proposed method with S 1 using Polynomial kernel.
Figure 4. Laboratory-created data for unmixing performance evaluation (RGB images). Subfigures (a–d): 0.03 mm pure quartz sand in four colors, serving as pure materials for providing endmembers. Subfigures (e,f): uniform mixtures of sand of two (e) and three colors (f). Subfigures (g,h): mixtures with spatial patterns of sand of two (g) and three colors (h). Square regions of 60-by-60 pixels in the center of each subfigure are clipped out and used in the experiments.
Figure 5. Spectral signatures of seven materials extracted from the dataset. They form the endmember matrix that is used in Section 4.2.
Figure 6. Abundance maps of S 5 . From left to right columns: ground-truth, estimated abundances of FCLS, SUnSAL, K-Hype ( G ), NDU ( G ), IIKNMF ( G ), proposed ( G ). From top to bottom rows: distributions of red quartz sand and green quartz sand.
Figure 7. Abundance maps of S 6 . From left to right columns: ground-truth, estimated abundances of FCLS, SUnSAL, K-Hype ( G ), NDU ( G ), IIKNMF ( G ), proposed ( G ). From top to bottom rows: distributions of blue quartz sand, red quartz sand and green quartz sand.
Figure 8. AVIRIS Cuprite data (denoted by S 7 ) used in the experiment. (a) The false color image illustrated with bands { 30 , 40 , 80 } of the sub-image of the AVIRIS Cuprite data set. (b) Twelve endmembers used for unmixing.
Figure 9. Maps of reconstruction error. From left to right: FCLS, SUnSAL, K-Hype ( G ), NDU ( G ), IIKNMF ( G ), Proposed ( G ).
Figure 10. Abundance maps of selected materials of S 7 . From left to right: FCLS, SUnSAL, K-Hype ( G ), NDU ( G ), IIKNMF ( G ), Proposed ( G ). From top to bottom: the 1st endmember, the 6th endmember, the 10th endmember.
Table 1. RMSE comparison of S 1 .
Algorithm | GBM (SNR = 15 dB) | PNMM (SNR = 15 dB) | GBM (SNR = 30 dB) | PNMM (SNR = 30 dB)
FCLS | 0.0296 ± 0.00055 | 0.0319 ± 0.00052 | 0.0224 ± 0.00046 | 0.0235 ± 0.00033
SUnSAL | 0.0283 ± 0.00054 | 0.0321 ± 0.00054 | 0.0171 ± 0.00034 | 0.0214 ± 0.00034
K-Hype ( G ) | 0.0295 ± 0.00037 | 0.0319 ± 0.00036 | 0.0214 ± 0.00040 | 0.0244 ± 0.00032
K-Hype ( P ) | 0.0292 ± 0.00038 | 0.0322 ± 0.00036 | 0.0213 ± 0.00038 | 0.0254 ± 0.00032
NDU ( G ) | 0.0278 ± 0.00054 | 0.0311 ± 0.00050 | 0.0153 ± 0.00030 | 0.0192 ± 0.00028
NDU ( P ) | 0.0277 ± 0.00054 | 0.0309 ± 0.00052 | 0.0150 ± 0.00031 | 0.0193 ± 0.00028
Proposed ( G ) | 0.0279 ± 0.00051 | 0.0315 ± 0.00051 | 0.0152 ± 0.00029 | 0.0210 ± 0.00032
Proposed (auto) ( G ) | 0.0291 ± 0.00054 | 0.0315 ± 0.00051 | 0.0155 ± 0.00030 | 0.0210 ± 0.00032
Proposed ( P ) | 0.0278 ± 0.00052 | 0.0315 ± 0.00051 | 0.0150 ± 0.00032 | 0.0210 ± 0.00032
Table 2. RMSE comparison of S 2 .
Algorithm | GBM (SNR = 15 dB) | PNMM (SNR = 15 dB) | GBM (SNR = 30 dB) | PNMM (SNR = 30 dB)
FCLS | 0.0564 ± 0.0031 | 0.1185 ± 0.0082 | 0.0488 ± 0.0027 | 0.1139 ± 0.0070
SUnSAL | 0.0523 ± 0.0035 | 0.0839 ± 0.0060 | 0.0288 ± 0.0008 | 0.0561 ± 0.0016
K-Hype ( G ) | 0.0482 ± 0.0029 | 0.0726 ± 0.0051 | 0.0161 ± 0.0003 | 0.0338 ± 0.0011
K-Hype ( P ) | 0.0945 ± 0.0070 | 0.1346 ± 0.0107 | 0.0533 ± 0.0026 | 0.1218 ± 0.0078
NDU ( G ) | 0.0551 ± 0.0030 | 0.1107 ± 0.0090 | 0.0447 ± 0.0020 | 0.0888 ± 0.0070
NDU ( P ) | 0.0524 ± 0.0025 | 0.1127 ± 0.0093 | 0.0355 ± 0.0011 | 0.0825 ± 0.0068
IIKNMF ( G ) | 0.2396 ± 0.0488 | 0.2290 ± 0.0364 | 0.2418 ± 0.0472 | 0.2360 ± 0.0439
IIKNMF ( P ) | 0.2155 ± 0.0324 | 0.2066 ± 0.0253 | 0.2241 ± 0.0349 | 0.2107 ± 0.0342
Proposed ( G ) | 0.0529 ± 0.0039 | 0.0662 ± 0.0052 | 0.0278 ± 0.0008 | 0.0269 ± 0.0005
Proposed (auto) ( G ) | 0.0536 ± 0.0037 | 0.0665 ± 0.0053 | 0.0233 ± 0.0005 | 0.0271 ± 0.0006
Proposed ( P ) | 0.0515 ± 0.0037 | 0.0867 ± 0.0061 | 0.0159 ± 0.0003 | 0.0604 ± 0.0011
Table 3. RMSE comparison of S 3 , S 4 , S 5 , S 6 .
RMSE | FCLS (LMM) | SUnSAL (LMM) | K-Hype ( G ) | NDU ( G ) | IIKNMF ( G ) | Proposed ( G ) | Proposed (auto) ( G )
S 3 | 0.1192 ± 0.0054 | 0.1410 ± 0.0072 | 0.1445 ± 0.0053 | 0.1235 ± 0.0093 | 0.2213 ± 0.0646 | 0.1124 ± 0.0044 | 0.1138 ± 0.0045
S 4 | 0.0891 ± 0.0041 | 0.1020 ± 0.0049 | 0.0662 ± 0.0042 | 0.1032 ± 0.0112 | 0.1709 ± 0.0184 | 0.0573 ± 0.0025 | 0.0585 ± 0.0025
S 5 | 0.1619 ± 0.0279 | 0.1733 ± 0.0318 | 0.1802 ± 0.0229 | 0.1617 ± 0.0280 | 0.2502 ± 0.0753 | 0.1576 ± 0.0275 | 0.1585 ± 0.0283
S 6 | 0.1799 ± 0.0250 | 0.1817 ± 0.0252 | 0.1895 ± 0.0249 | 0.1789 ± 0.0249 | 0.2398 ± 0.0598 | 0.1699 ± 0.0242 | 0.1703 ± 0.0244
Table 4. RE comparison of S 7 .
Algorithm | FCLS (LMM) | SUnSAL (LMM) | K-Hype ( G ) | NDU ( G ) | IIKNMF ( G ) | Proposed ( G ) | K-Hype ( P ) | NDU ( P ) | IIKNMF ( P ) | Proposed ( P )
RE | 0.0056 | 0.0050 | 0.0027 | 0.0282 | 0.0164 | 0.0028 | 0.0021 | 0.0282 | 0.0519 | 0.0034

Share and Cite

MDPI and ACS Style

Li, Z.; Chen, J.; Rahardja, S. Kernel-Based Nonlinear Spectral Unmixing with Dictionary Pruning. Remote Sens. 2019, 11, 529. https://doi.org/10.3390/rs11050529
