Article

Hyperspectral Image Recovery Using Non-Convex Low-Rank Tensor Approximation

1 School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
2 School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(14), 2264; https://doi.org/10.3390/rs12142264
Submission received: 1 June 2020 / Revised: 10 July 2020 / Accepted: 13 July 2020 / Published: 15 July 2020

Abstract

Low-rank tensors have received increasing attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often leads to modeling bias. To achieve an unbiased approximation and improve robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. Firstly, a non-convex approximation of the tensor nuclear norm (NCTNN) is introduced for low-rank tensor completion. Secondly, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed noise. The two proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to exhibit the superiority of the proposed models over convex low-rank penalization methods in terms of accuracy and robustness.


1. Introduction

Hyperspectral images (HSIs) have drawn a lot of attention on account of their rich spectral information. However, HSI can easily be disturbed by many external factors, such as missing entries and noise, which not only degrade the visual quality of the image but also limit the precision of subsequent image interpretation and analysis. Therefore, HSI restoration has attracted increasing interest in recent years, and various HSI restoration models have been developed [1,2,3]. Among them, low-rank prior modeling, which exploits the spatial geometric similarity and spectral correlation of HSI, has become one of the most popular techniques [4]. To enhance structural details, a total variation regularized low-rank matrix factorization (LRTV) restoration model was proposed in Ref. [5], which obtains satisfying restoration results in the case of Gaussian noise. To remove mixed noise while enhancing edges and spectral signatures, a weighted total variation regularized low-rank (LRWTV) model was proposed [6]. Candes et al. employed robust principal component analysis (RPCA) to separate clean HSI from sparse corruptions in polynomial time [7]. Chang et al. established a low-rank-based single-image decomposition model (LRSID) to remove stripe noise in HSI [1].
With the development of low-rank approaches, non-convex low-rank approximation methods, including weighted nuclear norm minimization (WNNM) [8], Schatten p-norm minimization [9], weighted Schatten p-norm minimization (WSNM) [10] and the minimax concave penalty (MCP) function [11], have also been studied owing to their robustness and unbiasedness. Wang et al. [12] applied the MCP function to the rank minimization problem, obtaining a so-called γ-norm as a non-convex relaxation of the matrix rank. A non-convex relaxation low-rank HSI recovery model with weighted total variation regularization (NRLRWTV) [13] was proposed to achieve accurate estimation for HSI while capturing spatial and spectral information. NMC [14] was proposed as a non-convex matrix completion method to recover the observed image. By reweighting the singular values of matrices reconstructed from HSI, a non-convex, non-smooth minimization denoising method, named iteratively reweighted nuclear norm (IRNN), was developed [15].
However, as HSI is a 3D data cube, matrix-based methods may destroy its intrinsic spectral and spatial correlations. Therefore, tensor representations have been proposed, and low-rank tensor priors have been introduced to make full use of the inherent structure of HSI [16,17,18,19,20,21,22,23,24]. Different from the matrix rank, the tensor rank is not unique, owing to the non-uniqueness of tensor decompositions [16]. One of the most popular ways to define the tensor rank is through the unfolding matrices of the tensor. In this vein, the tensor n-rank based on Tucker decomposition (TD) was developed, and the low-rank tensor completion (LRTC) model was proposed to recover missing values in visual data [17]. As the singular value decomposition (SVD) used in LRTC is time-consuming or even inapplicable for large-scale problems, Xu proposed the TMac method [22], which applies low-rank matrix factorization to each unfolding mode of the tensor. Although the n-rank can flexibly exploit the correlation along each mode by adjusting the weights [23], as pointed out in [24,25], unfolding the tensor as a matrix destroys its intrinsic structure. Therefore, the tensor tubal rank (multi-rank) based on the tensor SVD (t-SVD) has received considerable attention [22,26,27,28,29,30,31,32,33,34,35]. In fact, minimizing the tubal rank (multi-rank) is NP-hard, so the tensor nuclear norm (TNN) was developed as its convex surrogate in [26], where the exact recovery property of the TNN-based tensor robust principal component analysis (TRPCA) model was also proved. Zhang et al. [30,31] proposed t-SVD-based tensor completion and TRPCA models for video restoration. Hu et al. [32] proposed a twist tensor nuclear norm (t-TNN) for video completion.
Although low-rank tensor models have been widely used, many of them are still built on the matrix rank, which depends on the nuclear norm of the matrix. Therefore, for tensor low-rank approximation, the non-convex techniques used for matrices can be extended to tensors [33,34,35]. Following this idea, several tensor low-rank relaxation methods have been developed. For instance, Cao [36] proposed a tensor completion model via a folded-concave penalty for estimating missing values in tensor data, in which the tensor n-rank is replaced. A non-convex logDet function was introduced as a smooth approximation of the tensor rank instead of the convex tensor nuclear norm [37]. Low-rank tensor completion using the MCP function as a relaxation of the tensor tubal rank was proposed in our previous paper [38]. Zhang [39] proposed a novel non-convex relaxation tensor completion method, together with a proximal linear minimization (PLM) algorithm. Cai [40] developed a t-Gamma norm-based non-convex tensor approximation, which better preserves the low-rank property of HSI. In Ref. [41], the Non-Convex Regularized Tensor (NORT) completion method was proposed to improve computational efficiency. Li [42] introduced a non-convex penalty on the tensor rank based on t-SVD, which can effectively avoid the bias of the l1-norm. However, by integrating the non-convex function in [43] with an overlapped nuclear norm [44], the operator in NORT is computed from the unfolding matrices of the tensor, which can destroy the intrinsic structure of the tensor.
Considering the aforementioned problems in low-rank approximation and HSI restoration, we propose non-convex low-rank tensor approximations for two cases of HSI restoration. The main process of this paper is summarized in Figure 1, and the paper is organized as follows. Section 2 introduces notations and preliminaries of tensors, where several algebraic structures of 3-order tensors are defined. In Section 3, the proposed non-convex tensor nuclear norm (tensor γ-norm, NCTNN) penalized tensor completion is developed. Meanwhile, the non-convex low-rank approximation is merged into tensor robust PCA (NCTRPCA), which separates low multi-rank components from sparse corruptions. Furthermore, an alternating direction method of multipliers (ADMM) algorithm [45] is developed to solve the two optimization models. Section 4 presents simulated and real experiments for recovering HSI from limited samples and noise corruptions; parameter analysis and discussions are also given. Conclusions are drawn in Section 5.

2. Notations and Preliminaries

Throughout this paper, we use $\mathcal{X}$, $X$, $\mathbf{x}$ and $x$ to denote a tensor, a matrix, a vector and a scalar, respectively. For a 3-order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the $(i,j,k)$-th entry is denoted as $\mathcal{X}_{ijk}$, and $X_{(i)}$ is the mode-$i$ matricization. We use the Matlab notation $\mathcal{X}(i,:,:)$, $\mathcal{X}(:,i,:)$ and $\mathcal{X}(:,:,i)$ to denote, respectively, the $i$-th horizontal, lateral and frontal slice. More often, the frontal slice $\mathcal{X}(:,:,i)$ is denoted as $X^{(i)}$, and the tube is denoted as $\mathcal{X}(i,j,:)$. The inner product of matrices $X$ and $Y$ in $\mathbb{R}^{n_1 \times n_2}$ is defined as $\langle X, Y\rangle = \mathrm{Tr}(X^{*}Y)$, where $X^{*}$ denotes the conjugate transpose of $X$ and $\mathrm{Tr}(\cdot)$ denotes the matrix trace. The inner product of two tensors $\mathcal{X}$ and $\mathcal{Y}$ in $\mathbb{R}^{n_1 \times n_2 \times n_3}$ is defined as $\langle \mathcal{X}, \mathcal{Y}\rangle = \sum_{i=1}^{n_3} \langle X^{(i)}, Y^{(i)}\rangle$.
Some norms of vectors, matrices and tensors are used. We denote the $\ell_1$-norm, Frobenius norm and nuclear norm as $\|\mathcal{X}\|_1 = \sum_{ijk} |\mathcal{X}_{ijk}|$, $\|\mathcal{X}\|_F = \sqrt{\sum_{ijk} |\mathcal{X}_{ijk}|^2}$ and $\|\mathcal{X}\|_* = \sum_i \|X^{(i)}\|_*$, respectively. The above norms reduce to the usual vector or matrix norms if $\mathcal{X}$ is a vector or a matrix. Specifically, the nuclear norm of a matrix $X$ is defined as $\|X\|_* = \sum_{i=1}^{r} \sigma_i(X)$, in which $\sigma_i(X)$ are the singular values of $X$ with $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r$, and $r$ is the rank of $X$.
The Fourier transform of a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ along the third mode is denoted by $\hat{\mathcal{X}}$. In particular, we denote $\bar{X}$ as the block diagonal matrix whose diagonal blocks are the frontal slices $\hat{X}^{(i)}$ of $\hat{\mathcal{X}}$, i.e.,
$$\bar{X} = \mathrm{blkdiag}(\hat{\mathcal{X}}) = \begin{bmatrix} \hat{X}^{(1)} & & & \\ & \hat{X}^{(2)} & & \\ & & \ddots & \\ & & & \hat{X}^{(n_3)} \end{bmatrix}$$
The block circulant matrix of a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ has size $n_1 n_3 \times n_2 n_3$, i.e.,
$$\mathrm{bcirc}(\mathcal{X}) = \begin{bmatrix} X^{(1)} & X^{(n_3)} & \cdots & X^{(2)} \\ X^{(2)} & X^{(1)} & \cdots & X^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ X^{(n_3)} & X^{(n_3-1)} & \cdots & X^{(1)} \end{bmatrix}$$
We also define the following operators:
$$\mathrm{unfold}(\mathcal{X}) = \begin{bmatrix} X^{(1)} \\ X^{(2)} \\ \vdots \\ X^{(n_3)} \end{bmatrix}, \quad \mathrm{fold}(\mathrm{unfold}(\mathcal{X})) = \mathcal{X}$$
Then, the t-product of two 3-order tensors can be defined as follows.
Definition 1.
(t-product) [19]: The t-product $\mathcal{X} * \mathcal{Y}$ of two tensors $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ and $\mathcal{Y} \in \mathbb{R}^{n_2 \times n_4 \times n_3}$ is the tensor of size $n_1 \times n_4 \times n_3$ given by $\mathcal{X} * \mathcal{Y} = \mathrm{fold}(\mathrm{bcirc}(\mathcal{X}) \cdot \mathrm{unfold}(\mathcal{Y}))$.
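To make this construction concrete, the following minimal NumPy sketch (ours, not the authors' code) computes the t-product slice-wise in the Fourier domain, which is equivalent to $\mathrm{fold}(\mathrm{bcirc}(\mathcal{X}) \cdot \mathrm{unfold}(\mathcal{Y}))$ but avoids forming the large block circulant matrix:

```python
import numpy as np

def t_product(X, Y):
    """t-product of X (n1 x n2 x n3) and Y (n2 x n4 x n3).
    A sketch assuming real tensors whose frontal slices are indexed
    along the third axis, as in the paper's notation."""
    Xf = np.fft.fft(X, axis=2)               # DFT along the third mode
    Yf = np.fft.fft(Y, axis=2)
    Zf = np.einsum('ijk,jlk->ilk', Xf, Yf)   # slice-wise matrix products
    return np.real(np.fft.ifft(Zf, axis=2))  # back to the spatial domain
```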
Definition 2.
(f-diagonal tensor) [19]: A tensor is called f-diagonal if each of its frontal slices is a diagonal matrix.
Theorem 1.
(t-SVD) [19]: For a tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, the t-SVD of $\mathcal{X}$ is given by $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$, where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal, and $\mathcal{S}$ is a rectangular f-diagonal tensor of size $n_1 \times n_2 \times n_3$.
Definition 3.
(Tensor multi rank and tubal rank) [33]: The tensor multi rank of $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is a vector $r \in \mathbb{R}^{n_3}$ whose $i$-th entry is the rank of the $i$-th frontal slice of $\hat{\mathcal{X}}$, i.e., $r_i = \mathrm{rank}(\hat{X}^{(i)})$.
The tensor tubal rank, denoted as $\mathrm{rank}_t(\mathcal{X})$, is defined as the number of nonzero singular tubes of $\mathcal{S}$, where $\mathcal{S}$ comes from the t-SVD $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$. That is,
$$\mathrm{rank}_t(\mathcal{X}) = \#\{i : \mathcal{S}(i,i,:) \neq 0\} = \max_i r_i$$
Definition 4.
(TNN norm) [33]: The tensor nuclear norm of $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, denoted as $\|\mathcal{X}\|_{TNN}$, is defined as the average of the nuclear norms of all the frontal slices of $\hat{\mathcal{X}}$, i.e.,
$$\|\mathcal{X}\|_{TNN} = \frac{1}{n_3} \sum_{k=1}^{n_3} \big\|\hat{X}^{(k)}\big\|_*$$
which can be rewritten as $\|\mathcal{X}\|_{TNN} = \frac{1}{n_3} \sum_{k=1}^{n_3} \sum_{r=1}^{r_k} \sigma_r(\hat{X}^{(k)})$, where $r_k$ is the rank of $\hat{X}^{(k)}$.
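As a quick numerical check of Definitions 3 and 4, the sketch below (assuming a real tensor and the slice convention above; the function name is ours) computes the TNN and the tubal rank from the SVDs of the Fourier-domain frontal slices:

```python
import numpy as np

def tnn_and_tubal_rank(X, tol=1e-8):
    """Return (TNN, tubal rank) of a 3-order tensor X per Definitions 3-4."""
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)
    # singular values of every frontal slice in the Fourier domain
    sv = [np.linalg.svd(Xf[:, :, k], compute_uv=False) for k in range(n3)]
    tnn = sum(s.sum() for s in sv) / n3                  # average nuclear norm
    tubal_rank = max(int((s > tol).sum()) for s in sv)   # max of the multi rank
    return tnn, tubal_rank
```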

3. Proposed Methods

Minimizing the rank is equivalent to minimizing the $\ell_0$-norm of the singular values; this is an NP-hard problem that can be handled by convex or non-convex relaxation methods. The nuclear norm and weighted nuclear norm are common convex surrogates, but they over-penalize larger singular values, which introduces an approximation bias. In contrast, a non-convex surrogate function, which keeps large singular values unchanged, gives a nearly unbiased approximation to the $\ell_0$-norm.

3.1. Non-Convex Low-Rank Relaxation

As investigated in [10], the γ-norm $\|X\|_\gamma$ (γ is a positive factor) of a matrix $X$ with rank $r$ is defined as the sum of MCP functions of the matrix singular values
$$\|X\|_\gamma = \sum_{i=1}^{r} \psi_{MCP}(\sigma_i(X))$$
with
$$\psi_{MCP}(t) = \begin{cases} \dfrac{\gamma \lambda^2}{2}, & |t| > \gamma\lambda \\[4pt] \lambda |t| - \dfrac{t^2}{2\gamma}, & \text{otherwise} \end{cases}$$
where $\psi_{MCP}(\cdot)$ is the so-called MCP function and $\lambda$ is a threshold parameter.
It can be derived that, as $\gamma \to \infty$, the non-convex MCP function tends to the scaled $\ell_1$-norm, i.e., $\psi_{MCP}(X) \to \lambda \|X\|_1$, which leads to a nearly optimal non-convex approximation to the $\ell_0$-norm. As such, the matrix γ-norm, defined by applying the MCP function to the singular values, generates a nearly optimal non-convex approximation to low rank (NRLR). Minimizing the γ-norm results in singular value soft thresholding (SVST). Compared with the popular singular value thresholding (SVT) of the matrix nuclear norm, SVST keeps the large singular values of a matrix unchanged, which gives near unbiasedness and higher accuracy, as illustrated in Figure 2.
Inspired by the matrix γ-norm, for a third-order tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$, we define the tensor γ-norm as follows:
$$\|\mathcal{X}\|_\gamma = \frac{1}{n_3} \sum_{k=1}^{n_3} \big\|\hat{X}^{(k)}\big\|_\gamma = \frac{1}{n_3} \sum_{k=1}^{n_3} \sum_{r=1}^{r_k} \psi_{MCP}\big(\sigma_r(\hat{X}^{(k)})\big) \tag{1}$$
where $\sigma_r(\hat{X}^{(k)})$ stands for the $r$-th singular value of the matrix $\hat{X}^{(k)}$. The defined tensor γ-norm is the average of the γ-norms of all the frontal slices of $\hat{\mathcal{X}}$ and can be computed as the sum of MCP functions with the singular values of $\hat{X}^{(k)}$ as variables, each a non-convex relaxation of the matrix nuclear norm. Therefore, the defined tensor γ-norm can be regarded as a nearly unbiased non-convex relaxation of the tensor rank, which approximates the tensor rank better than the TNN norm or other convex relaxations. Moreover, the proposed tensor γ-norm has some good properties, summarized in Theorem 2.
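For reference, a minimal sketch of Equation (1) follows (our illustration; the function names are not from the paper): the MCP penalty is applied to the singular values of each Fourier-domain frontal slice, and the slice values are averaged.

```python
import numpy as np

def psi_mcp(t, lam, gamma):
    """MCP penalty: lam*|t| - t^2/(2*gamma) below the knee gamma*lam,
    and the constant gamma*lam^2/2 above it."""
    t = np.abs(t)
    return np.where(t > gamma * lam, gamma * lam ** 2 / 2,
                    lam * t - t ** 2 / (2 * gamma))

def tensor_gamma_norm(X, lam, gamma):
    """Tensor gamma-norm of Equation (1) for a real 3-order tensor X."""
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)
    total = 0.0
    for k in range(n3):
        s = np.linalg.svd(Xf[:, :, k], compute_uv=False)
        total += psi_mcp(s, lam, gamma).sum()
    return total / n3
```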
Theorem 2.
For $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ with t-SVD $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$, where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal tensors and $\mathcal{S}$ is a rectangular f-diagonal tensor of size $n_1 \times n_2 \times n_3$, the proposed tensor norm $\|\mathcal{X}\|_\gamma$ satisfies:
(i) 
$\|\mathcal{X}\|_\gamma \geq 0$, and $\|\mathcal{X}\|_\gamma = 0$ iff $\mathcal{X} = 0$.
(ii) 
$\|\mathcal{X}\|_\gamma$ is increasing with respect to γ.
(iii) 
$\|\mathcal{X}\|_\gamma$ is a concave function of the singular values $\sigma_r(\hat{X}^{(k)})$.
(iv) 
$\|\mathcal{X}\|_\gamma$ is orthogonally (unitarily) invariant, i.e., $\|\mathcal{X}\|_\gamma = \|\mathcal{P} * \mathcal{X} * \mathcal{Q}^T\|_\gamma$, where $\mathcal{P} \in \mathbb{R}^{n_1 \times n_1 \times n_3}$ and $\mathcal{Q} \in \mathbb{R}^{n_2 \times n_2 \times n_3}$ are orthogonal.
Proof. 
(i)
Since $\psi_{MCP}(t) \geq 0$ and $\|\mathcal{X}\|_\gamma$ is a sum of values $\psi_{MCP}(t)$, we have $\|\mathcal{X}\|_\gamma \geq 0$. If $\mathcal{X} = 0$, then all $\sigma_r(\hat{X}^{(k)}) = 0$, which means $\|\mathcal{X}\|_\gamma = 0$. Conversely, $\|\mathcal{X}\|_\gamma = 0$ implies all $\sigma_r(\hat{X}^{(k)}) = 0$, that is, $\hat{\mathcal{X}} = 0$, so $\mathcal{X} = \mathrm{ifft}(\hat{\mathcal{X}}) = 0$.
(ii)
Because $\psi_{MCP}(t)$ is increasing with respect to γ and $\|\mathcal{X}\|_\gamma$ is a sum of values $\psi_{MCP}(t)$, it follows from the properties of $\psi_{MCP}(t)$ that $\|\mathcal{X}\|_\gamma$ is increasing with respect to γ.
(iii)
Because $\psi_{MCP}(t)$ is concave over $[0, \infty)$ and $\|\mathcal{X}\|_\gamma$ is a sum of values $\psi_{MCP}(t)$, regarding each $\sigma_r(\hat{X}^{(k)})$ as $t$ shows that $\|\mathcal{X}\|_\gamma$ is concave in $\sigma_r(\hat{X}^{(k)})$.
(iv)
Since $\mathcal{X}$ has the t-SVD $\mathcal{X} = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$, we have $\mathcal{P} * \mathcal{X} * \mathcal{Q}^T = (\mathcal{P} * \mathcal{U}) * \mathcal{S} * (\mathcal{V}^T * \mathcal{Q}^T)$. Because $\mathcal{S}$ is f-diagonal, we only need to verify that $\mathcal{P} * \mathcal{U}$ and $(\mathcal{V}^T * \mathcal{Q}^T)^T = \mathcal{Q} * \mathcal{V}$ are orthogonal:
$$(\mathcal{P} * \mathcal{U}) * (\mathcal{P} * \mathcal{U})^T = \mathcal{P} * \mathcal{U} * \mathcal{U}^T * \mathcal{P}^T = \mathcal{I}, \qquad (\mathcal{Q} * \mathcal{V}) * (\mathcal{Q} * \mathcal{V})^T = \mathcal{Q} * \mathcal{V} * \mathcal{V}^T * \mathcal{Q}^T = \mathcal{I}$$
So we have
$$\|\mathcal{P} * \mathcal{X} * \mathcal{Q}^T\|_\gamma = \frac{1}{n_3} \sum_{k=1}^{n_3} \sum_{r=1}^{r_k} \psi_{MCP}\big(\sigma_r(\hat{X}^{(k)})\big) = \|\mathcal{X}\|_\gamma$$
and hence $\|\mathcal{X}\|_\gamma$ is orthogonally invariant. □

3.2. Non-Convex Soft Shrinkage Operator of Tensor

As stated for the matrix γ-norm, minimizing the γ-norm yields a solution in the form of SVST. For the proposed tensor γ-norm minimization, we can infer an analogous result, as the following theorem shows.
Theorem 3.
The solution of the non-convex tensor minimization
$$\min_{\mathcal{X}} \|\mathcal{X}\|_\gamma \quad \mathrm{s.t.} \quad \mathcal{X} = \Psi \tag{2}$$
is a non-convex singular value soft shrinkage operator in the Fourier domain:
$$\mathcal{X}^* = \mathcal{D}^3_{\lambda,\gamma}(\Psi) = \mathcal{U} * (\mathcal{S} * \mathcal{M}) * \mathcal{V}^T \tag{3}$$
where $*$ denotes the t-product, $\mathcal{U}$, $\mathcal{S}$, $\mathcal{V}$ come from the t-SVD of $\Psi = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$, and $\mathcal{S}_{\lambda,\gamma} = \mathcal{S} * \mathcal{M}$ is an f-diagonal tensor with diagonal elements $D_{\lambda,\gamma}(\sigma_1), \ldots, D_{\lambda,\gamma}(\sigma_m)$, where
$$D_{\lambda,\gamma}(\sigma_i) = \begin{cases} \sigma_i, & \text{if } \sigma_i \geq \gamma \\ \dfrac{\sigma_i - \gamma}{1 - \lambda/\gamma}, & \text{if } \lambda \leq \sigma_i < \gamma \\ 0, & \text{otherwise} \end{cases}$$
Here $\mathcal{M}$ is an f-diagonal tensor whose $i$-th diagonal tube $\mathcal{M}(i,i,:)$ has Fourier transform $\hat{\mathcal{M}}(i,i,:) = [\hat{m}_{i1}, \ldots, \hat{m}_{in_3}]$, with elements
$$\hat{m}_{ik} = \begin{cases} 1, & \text{if } \hat{\mathcal{S}}(i,i,k) \geq \gamma \\ \dfrac{1 - \gamma/\hat{\mathcal{S}}(i,i,k)}{1 - \lambda/\gamma}, & \text{if } \lambda \leq \hat{\mathcal{S}}(i,i,k) < \gamma \\ 0, & \text{otherwise} \end{cases}, \quad k = 1, \ldots, n_3$$
Proof. 
Firstly, we consider a general penalized form of the minimization problem in Equation (2):
$$\phi(\mathcal{X}) = \frac{1}{2}\|\Psi - \mathcal{X}\|_F^2 + \lambda \|\mathcal{X}\|_\gamma \tag{4}$$
where $\|\cdot\|_F$ is the Frobenius norm.
We rewrite Equation (4) as a slice-wise problem in terms of the frontal slices of $\hat{\mathcal{X}}$:
$$\hat{X}^{(k)*} = \arg\min_{\hat{X}^{(k)}} \left\{ \frac{1}{2}\big\|\hat{\Psi}^{(k)} - \hat{X}^{(k)}\big\|_F^2 + \lambda \big\|\hat{X}^{(k)}\big\|_\gamma \right\} \tag{5}$$
where the matrix $\hat{\Psi}^{(k)}$ has the SVD $\hat{\Psi}^{(k)} = U S V^T$, with $U$, $V$ orthogonal matrices and $S$ a diagonal matrix with diagonal elements $\sigma_1, \ldots, \sigma_r$.
According to the matrix-based non-convex soft shrinkage operator in [44], the solution of Equation (5) is
$$\hat{X}^{(k)} = U D_{\lambda,\gamma}(S) V^T \tag{6}$$
where $D_{\lambda,\gamma}(S)$ applies the shrinkage of Theorem 3 to each singular value, i.e., $D_{\lambda,\gamma}(\sigma_i) = \sigma_i$ if $\sigma_i \geq \gamma$; $(\sigma_i - \gamma)/(1 - \lambda/\gamma)$ if $\lambda \leq \sigma_i < \gamma$; and $0$ otherwise.
Let
$$\bar{D}^1_{\lambda,\gamma} = \begin{cases} 1, & \text{if } \sigma_i \geq \gamma \\ \dfrac{1 - \gamma/\sigma_i}{1 - \lambda/\gamma}, & \text{if } \lambda \leq \sigma_i < \gamma \\ 0, & \text{otherwise} \end{cases}$$
Then $D_{\lambda,\gamma}(\sigma_i) = \sigma_i \times \bar{D}^1_{\lambda,\gamma}$, i.e., $D_{\lambda,\gamma}(S)$ multiplies the $i$-th singular value of $S$ by $\bar{D}^1_{\lambda,\gamma}$. In the following, we show how to obtain $\mathcal{X}$ from the slice-wise expression for $\hat{X}^{(k)}$ in Equation (6).
Assuming $\Psi$ has the t-SVD $\Psi = \mathcal{U} * \mathcal{S} * \mathcal{V}^T$ and $\mathcal{S}$ has Fourier transform $\hat{\mathcal{S}}$, the elements in the singular tubes of $\hat{\mathcal{X}}$ are produced by multiplying $\hat{\mathcal{S}}(i,i,k)$ by $\bar{D}^2_{\lambda,\gamma}$, where
$$\bar{D}^2_{\lambda,\gamma} = \begin{cases} 1, & \text{if } \hat{\mathcal{S}}(i,i,k) \geq \gamma \\ \dfrac{1 - \gamma/\hat{\mathcal{S}}(i,i,k)}{1 - \lambda/\gamma}, & \text{if } \lambda \leq \hat{\mathcal{S}}(i,i,k) < \gamma \\ 0, & \text{otherwise} \end{cases}$$
The above is carried out in the Fourier domain; in the spatial domain, an equivalent formulation is $\mathcal{S} * \mathcal{M}$, where $\mathcal{M}$ is an f-diagonal tensor whose $i$-th diagonal tube has Fourier transform $\hat{\mathcal{M}}(i,i,:) = \mathrm{fft}(\mathcal{M}(i,i,:), [\,], 3)$. The elements $\hat{m}_{ik}$ of $\hat{\mathcal{M}}(i,i,:)$ are computed as
$$\hat{m}_{ik} = \begin{cases} 1, & \text{if } \hat{\mathcal{S}}(i,i,k) \geq \gamma \\ \dfrac{1 - \gamma/\hat{\mathcal{S}}(i,i,k)}{1 - \lambda/\gamma}, & \text{if } \lambda \leq \hat{\mathcal{S}}(i,i,k) < \gamma \\ 0, & \text{otherwise} \end{cases}, \quad k = 1, \ldots, n_3$$
Hence we have $\mathcal{X} = \mathcal{U} * (\mathcal{S} * \mathcal{M}) * \mathcal{V}^T := \mathcal{D}^3_{\lambda,\gamma}(\Psi)$. □
This theorem shows that minimizing the proposed tensor γ-norm leads to SVST. As illustrated in Figure 2, SVST retains more of the large singular values of the frontal slices $\hat{X}^{(k)}$ than SVT does, thereby capturing the most important information of HSI. Taking advantage of this, we use the proposed tensor γ-norm as a non-convex low-rank regularizer in the following HSI restoration models.
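A slice-wise sketch of the operator $\mathcal{D}^3_{\lambda,\gamma}$ is given below; it is our illustration, not the authors' code. The band formula is implemented in the firm-thresholding form $(\sigma - \lambda)/(1 - \lambda/\gamma)$, an assumption made here so that the operator is continuous at both thresholds $\sigma = \lambda$ and $\sigma = \gamma$.

```python
import numpy as np

def svst_slicewise(Psi, lam, gamma):
    """Non-convex singular value soft shrinkage applied slice-wise in the
    Fourier domain: keep singular values >= gamma, shrink the band
    [lam, gamma) continuously, and zero values below lam."""
    n3 = Psi.shape[2]
    Pf = np.fft.fft(Psi, axis=2)
    Xf = np.empty_like(Pf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Pf[:, :, k], full_matrices=False)
        s_new = np.where(s >= gamma, s, (s - lam) / (1.0 - lam / gamma))
        s_new = np.where(s < lam, 0.0, s_new)
        Xf[:, :, k] = (U * s_new) @ Vh       # rebuild the shrunk slice
    return np.real(np.fft.ifft(Xf, axis=2))
```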

3.3. Tensor γ-Norm in HSI Recovery

3.3.1. Non-Convex Low-Rank Tensor Completion

Assume $P_\Omega(\mathcal{O})$ of size $n_1 \times n_2 \times n_3$ is the observed HSI on a set $\Omega$, which has missing values compared with the original HSI $\mathcal{X}$ of size $n_1 \times n_2 \times n_3$; $\Omega \subseteq [1, n_1] \times [1, n_2] \times [1, n_3]$ denotes the so-called sampling set. The objective of HSI completion is to estimate the original HSI $\mathcal{X}$ from the observed HSI $P_\Omega(\mathcal{O})$. The HSI completion problem can be formulated as follows:
$$[P_\Omega(\mathcal{O})]_{i_1 i_2 i_3} = \begin{cases} \mathcal{X}_{i_1 i_2 i_3}, & \text{if } (i_1, i_2, i_3) \in \Omega \\ 0, & \text{otherwise} \end{cases}$$
where $P_\Omega : \mathbb{R}^{n_1 \times n_2 \times n_3} \to \mathbb{R}^{n_1 \times n_2 \times n_3}$ is a projection operator and $(i_1, i_2, i_3)$ indexes the pixels of $\mathcal{X}$.
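In code, $P_\Omega$ is simply a masking operation; a short sketch with a hypothetical random sampling mask follows.

```python
import numpy as np

def project_omega(X, mask):
    """P_Omega: keep observed entries (mask == True), zero elsewhere.
    mask is a boolean tensor with the same n1 x n2 x n3 shape as X."""
    return np.where(mask, X, 0.0)

# Example: a 50% sampling-rate mask for a Washington DC Mall-sized tensor
rng = np.random.default_rng(0)
mask = rng.random((151, 151, 191)) < 0.5
```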
By incorporating the tensor γ-norm into the TNN minimization model [33] as a non-convex relaxation of low rank, a novel low-rank tensor completion method, named non-convex TNN (NCTNN) minimization, is developed as follows:
$$\min_{\mathcal{X}} \|\mathcal{X}\|_\gamma \quad \mathrm{s.t.} \quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{O}) \tag{7}$$
According to Theorem 3, the tensor γ-norm in problem (7) admits a solution via the SVST operator, as shown in Equation (3). As is well known, the low-rank prior captures the inherent spectral–spatial correlation of HSI, and the large singular values carry the most useful information. In SVST, the large singular values are left nearly untouched while the small ones are shrunk, which provides a nearly unbiased and more accurate low-rank approximation. Moreover, owing to the low-rank relaxation term, the model captures the spatial and spectral information of HSI simultaneously.
The optimization problem (7) is solved in the Fourier transform domain. Let $\mathcal{Y}$ be the available sampled data, $\mathcal{Y} = P_\Omega(\mathcal{O})$. Define $G = \mathcal{F}_3 \circ P_\Omega \circ \mathcal{F}_3^{-1}$, where $\mathcal{F}_3$ and $\mathcal{F}_3^{-1}$ are the Fourier and inverse Fourier transforms along the third dimension of a tensor, respectively. Then we have $\hat{\mathcal{Y}} = G(\hat{\mathcal{O}})$, where $\hat{\mathcal{Y}}$ and $\hat{\mathcal{O}}$ are the Fourier transforms of $\mathcal{Y}$ and $\mathcal{O}$ along the third mode. So Equation (7) is equivalent to the following:
$$\min_{\hat{\mathcal{X}}} \big\|\mathrm{blkdiag}(\hat{\mathcal{X}})\big\|_\gamma \quad \mathrm{s.t.} \quad \hat{\mathcal{Y}} = G(\hat{\mathcal{X}}) \tag{8}$$
The optimization problem (8) can be solved by various methods. Considering that the t-SVD converts TNN minimization into matrix rank minimization in the Fourier domain, the augmented Lagrangian of Equation (8) in the Fourier domain is written as follows:
$$l(\mathcal{X}, \mathcal{Z}, \mathcal{Q}) = \big\|\mathrm{blkdiag}(\hat{\mathcal{Z}})\big\|_\gamma + \mathbf{1}_{\mathcal{Y} = P_\Omega(\mathcal{X})} + \langle \mathcal{Q}, \hat{\mathcal{X}} - \hat{\mathcal{Z}} \rangle + \frac{\lambda}{2}\big\|\hat{\mathcal{X}} - \hat{\mathcal{Z}}\big\|_F^2 \tag{9}$$
where $\mathcal{Z}$ is an auxiliary variable satisfying $\hat{\mathcal{X}} = \hat{\mathcal{Z}}$, $\mathbf{1}$ is the indicator function, $\mathcal{Q}$ is the Lagrange multiplier, and $\lambda$ is a penalty parameter.
We then combine the t-SVD with the alternating direction method of multipliers (ADMM) [30,31,36,37,45] to solve the proposed model, as shown in Algorithm 1. The iteration stops when the error between two successive iterates is smaller than a constant $\varepsilon$, which is usually sufficiently small; the smaller $\varepsilon$ is, the more time the algorithm takes. In Algorithm 1, we set $\varepsilon = 10^{-6}$ empirically.
Algorithm 1 Solve NCTNN (7) by ADMM
Input: observed image $\mathcal{O}$, parameters $\lambda$, $r$, $\gamma$.
Initialize: $\mathcal{X} = 0$, $\mathcal{Z} = 0$, $\mathcal{Q} = 0$, $\varepsilon = 10^{-6}$, $k = 0$.
while not converged do
1. Update $\mathcal{X}^{k+1}$ by $\mathcal{X}^{k+1} = \mathcal{Z}^k - \mathcal{Q}^k$
2. Update $\mathcal{Z}^{k+1}$ by $\mathcal{Z}^{k+1} = \mathcal{D}^3_{\lambda,\gamma}(\mathcal{X}^{k+1} + \mathcal{Q}^k)$
3. Update $\mathcal{Q}^{k+1}$ by $\mathcal{Q}^{k+1} = \mathcal{Q}^k + (\mathcal{X}^{k+1} - \mathcal{Z}^{k+1})$
4. Check the convergence condition $\|\mathcal{Z}^{k+1} - \mathcal{Z}^k\|_F^2 \leq \varepsilon$
end while
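The sketch below mirrors Algorithm 1 in NumPy, reusing svst_slicewise from Section 3.2. The step that re-imposes the observed entries after the $\mathcal{X}$-update is our assumption, since the printed pseudocode leaves the handling of $P_\Omega$ implicit.

```python
import numpy as np

def nctnn_admm(O, mask, lam, gamma, eps=1e-6, max_iter=200):
    """ADMM sketch of Algorithm 1 (not the authors' code)."""
    X = np.zeros_like(O)
    Z = np.zeros_like(O)
    Q = np.zeros_like(O)
    for _ in range(max_iter):
        X = Z - Q
        X = np.where(mask, O, X)            # keep sampled pixels fixed (assumed)
        Z_new = svst_slicewise(X + Q, lam, gamma)
        Q = Q + (X - Z_new)
        if np.linalg.norm(Z_new - Z) ** 2 <= eps:   # convergence check
            return Z_new
        Z = Z_new
    return Z
```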

3.3.2. Non-Convex Tensor Robust Principal Component Analysis

Although the aforementioned NCTNN model can recover degraded HSI well, it may be weak at removing mixed noise, such as sparse noise, efficiently. Robust principal component analysis (RPCA) [7] fully accounts for the sparse noise in HSI by separating the image into a signal part and a sparse noise (outlier) part, applying low-rank and sparsity priors to the two parts, respectively. Unfortunately, RPCA in [7] treats HSI as a matrix, which destroys its intrinsic structure, while the tensor robust principal component analysis (TRPCA) associated with TNN in [33] may produce an image with residual noise on account of the limited low-rank approximation of TNN. Hence, to remove the sparse noise effectively, we introduce the proposed tensor γ-norm into the RPCA model, resulting in the second proposed method, non-convex tensor robust principal component analysis (NCTRPCA):
$$\min_{\mathcal{X}, \mathcal{S}} \|\mathcal{X}\|_\gamma + \lambda \|\mathcal{S}\|_{1,1,2} \quad \mathrm{s.t.} \quad \mathcal{O} = \mathcal{X} + \mathcal{S} \tag{10}$$
where $\mathcal{O} \in \mathbb{R}^{n_1 \times n_2 \times n_3}$ is the observed image, $n_1$ and $n_2$ are the spatial modes and $n_3$ is the spectral mode, $\mathcal{X}$ is the original image, $\mathcal{S}$ is the sparse noise, $\lambda$ is a parameter, and $\|\mathcal{S}\|_{1,1,2}$ is defined as $\sum_{i,j} \|\mathcal{S}(i,j,:)\|_F$.
The non-convex optimization problem in Equation (10) is solved by the same strategy as Algorithm 1, employing ADMM and the Fourier transform.
The augmented Lagrangian of Equation (10) can be written as follows:
$$l(\mathcal{X}, \mathcal{S}, \mathcal{W}) = \|\mathcal{X}\|_\gamma + \lambda \|\mathcal{S}\|_{1,1,2} + \langle \mathcal{W}, \mathcal{O} - \mathcal{X} - \mathcal{S} \rangle + \frac{\rho}{2}\|\mathcal{O} - \mathcal{X} - \mathcal{S}\|_F^2 \tag{11}$$
where $\mathcal{W}$ is the Lagrange multiplier and $\lambda$, $\rho$ are parameters.
Algorithm 2 summarizes the solution steps. As in Algorithm 1, the iteration termination condition is governed by $\tau_{\mathcal{X}}$ and $\tau_{\mathcal{S}}$, and we set $\tau_{\mathcal{X}} = \tau_{\mathcal{S}} = 10^{-3}$ empirically in Algorithm 2.
Algorithm 2 Solve NCTRPCA (10) by ADMM
Input: observed image $\mathcal{O}$, parameters $\lambda$, $\gamma$, $r$.
Initialize: $\mathcal{X} = 0$, $\mathcal{S} = 0$, $\mathcal{W} = 0$, $\rho = 2^3$, $k = 0$, $\tau_{\mathcal{X}} = 10^{-3}$, $\tau_{\mathcal{S}} = 10^{-3}$.
while not converged do
1. Update $\mathcal{X}^{k+1}$ by $\mathcal{X}^{k+1} = \mathcal{D}^3_{1/\rho,\gamma}(\mathcal{O} - \mathcal{S}^k - \mathcal{W}^k)$
2. Update $\mathcal{S}^{k+1}$ by $\mathcal{S}^{k+1} = D_{\lambda/\rho}(\mathcal{X}^{k+1} - \mathcal{O} + \mathcal{W}^k)$
3. Update $\mathcal{W}^{k+1}$ by $\mathcal{W}^{k+1} = \mathcal{W}^k + \mathcal{X}^{k+1} + \mathcal{S}^{k+1} - \mathcal{O}$
4. Check the convergence conditions $\|\mathcal{X}^{k+1} - \mathcal{X}^k\|_F^2 \leq \tau_{\mathcal{X}}$ and $\|\mathcal{S}^{k+1} - \mathcal{S}^k\|_F^2 \leq \tau_{\mathcal{S}}$
end while
where $D_{\lambda/\rho}(x) = \mathrm{sgn}(x)\,[|x| - \lambda/\rho]_+$ is the element-wise soft shrinkage operator.
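A NumPy sketch of Algorithm 2 follows, reusing svst_slicewise from Section 3.2. We write the updates in the standard scaled-multiplier form of ADMM (the printed steps absorb $\rho$ into $\mathcal{W}$), so the exact signs and scaling are our assumptions.

```python
import numpy as np

def soft_shrink(x, tau):
    """Element-wise soft shrinkage: sgn(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def nctrpca_admm(O, lam, gamma, rho, tau=1e-3, max_iter=200):
    """ADMM sketch of Algorithm 2 (not the authors' code)."""
    X = np.zeros_like(O)
    S = np.zeros_like(O)
    W = np.zeros_like(O)
    for _ in range(max_iter):
        X_new = svst_slicewise(O - S + W, 1.0 / rho, gamma)  # low-rank part
        S_new = soft_shrink(O - X_new + W, lam / rho)        # sparse part
        W = W + (O - X_new - S_new)                          # multiplier step
        dX = np.linalg.norm(X_new - X) ** 2
        dS = np.linalg.norm(S_new - S) ** 2
        X, S = X_new, S_new
        if dX <= tau and dS <= tau:
            break
    return X, S
```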

4. Experimental Results and Analysis

In this section, experiments are performed on three HSI datasets to verify the effectiveness of the proposed methods. Among them, Washington DC Mall, with a size of 151 × 151 × 191, and Pavia University, with a size of 200 × 200 × 103, are used for simulated experiments, while the HYDICE Urban dataset, with a size of 200 × 200 × 210, is used for real experiments. The gray values of each HSI band are normalized to [0, 1].
The parameters of the compared methods are tuned manually to their optimal values. In our methods, the rank r and the parameters λ and γ are set according to the parameter analysis given later on.
Four quantitative assessment indices are adopted: the relative square error (RSE), the mean peak signal-to-noise ratio (MPSNR), the mean structural similarity (MSSIM) and the mean spectral angle distance (MSAD).
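For reproducibility, the two less common indices can be computed as below; these are the standard definitions we assume (MPSNR averages the per-band PSNR, MSAD averages the per-pixel spectral angle in degrees), not code from the paper.

```python
import numpy as np

def mpsnr(ref, rec):
    """Mean PSNR over bands, assuming gray values normalized to [0, 1]."""
    n3 = ref.shape[2]
    mse = ((ref - rec) ** 2).reshape(-1, n3).mean(axis=0)  # per-band MSE
    return float(np.mean(10.0 * np.log10(1.0 / mse)))

def msad(ref, rec, eps=1e-12):
    """Mean spectral angle distance in degrees over all pixels."""
    r = ref.reshape(-1, ref.shape[2])
    x = rec.reshape(-1, rec.shape[2])
    cos = (r * x).sum(axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(x, axis=1) + eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean())
```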

4.1. Experiment 1: Comparative Experiments for NCTNN

To verify the effectiveness of NCTNN for HSI restoration, we applied several matrix and tensor completion methods, covering both non-convex and convex relaxations, to HSI recovery. The observed HSI is obtained by randomly sampling a certain percentage of elements from the original HSI tensor; a large percentage denotes a high sampling rate (SR) and vice versa. The Washington DC and Pavia University datasets are used in the experiments.

4.1.1. Non-Convex Methods Comparative Experiments

Firstly, the proposed NCTNN is compared with two non-convex methods, NMC [14] and NORT [41]; NMC is a matrix completion method, while NORT is a tensor completion method.
The matrix rank in NMC is 15, the tensor rank in NORT is r = [10, 10, 10], and in the proposed NCTNN method r = [25, 25, 25] and γ = 2.
Table 1 summarizes the MPSNR and MSSIM values obtained by the three methods on Pavia University at different sampling rates (SRs), with the best index in boldface. Clearly, NORT and NCTNN, the two tensor completion methods, handle HSI recovery better than the matrix-based NMC method. On average, NCTNN improves MPSNR and MSSIM over NORT by nearly 0.7535 dB and 0.0122, respectively. In addition, the MSAD values of NCTNN are the best for spectral preservation. Furthermore, the advantage of NCTNN grows as the sampling rate decreases.
Figure 3 shows the PSNR and SSIM curves of the three methods at a 50% sampling rate. NORT exhibits more abnormal bands than the other two methods, while the curves of NMC and NCTNN are much more stable, with NCTNN performing best.
The recovered images of the three methods for band 100 at a 50% sampling rate are shown in Figure 4. NMC leaves more missing pixels, whereas the two tensor-based methods recover most of the image information, and the NCTNN result shows more details in the red box.

4.1.2. Convex Methods Comparative Experiments

The state-of-the-art convex low-rank tensor relaxation methods, including LRTC [17], TMac [23] and TNN [30], are employed in this section. TNN applies the tensor tubal rank to tensor completion, while LRTC and TMac use the same tensor rank, defined via matrix factorization of each mode unfolding of the tensor. Moreover, LRTC provides algorithms for efficient or high-accuracy estimation, while TMac solves tensor completion with faster algorithms as well as rank-adjusting strategies.
The parameters are set empirically. In TMac, a smaller rank, e.g., r = [10, 10, 10], is used owing to its rank-adaptation scheme, while the other three methods share the same rank r = [20, 20, 20] for a relatively fair comparison. The parameters λ and γ in NCTNN are set as $\lambda = 1/\sqrt{n_1 n_2}$ and γ = 2.
Table 2 summarizes the RSE, MPSNR, MSSIM and MSAD values obtained by the four tensor completion methods on Washington DC Mall at different sampling rates. The indices of TNN and NCTNN are clearly superior to those of LRTC and TMac, mainly owing to the different definitions of tensor rank. The proposed NCTNN method achieves the best performance, with TNN second best. Compared with TNN, NCTNN improves MPSNR and MSSIM by nearly 0.01 dB and 0.002, respectively. Meanwhile, NCTNN also achieves the lowest MSAD, indicating an improvement in spectral fidelity.
To highlight the advantage of NCTNN over TNN minimization, the PSNR and SSIM curves for Washington DC Mall at a 30% sampling rate are given in Figure 5. The indices of NCTNN are greater than those of TNN, and the NCTNN curve is more stable, which further demonstrates the outstanding performance of our proposed method in data completion in terms of unbiasedness as well as robustness.
In Figure 5, the abnormal band 75 has a lower value than the other, normal bands. Figure 6 gives the corresponding restoration results. It can be clearly observed that TMac and LRTC perform worse in retaining edges and textures. Compared to NCTNN, TNN loses some important textural information. In general, owing to the non-convex relaxation of the tensor nuclear norm, NCTNN preserves the spatial geometric structures well, which further illustrates that NCTNN has the highest recovery ability and the best stability among the four methods.
To further verify the advantages of the proposed model, Figure 7 shows the spectral signatures before and after restoration of the Washington DC dataset at a 50% sampling rate. We choose the spectral signature of pixel (68, 42); the NCTNN curve is clearly the most similar to the original one, with the smallest amplitude of vibration, which shows that NCTNN improves spectral preservation as well as robustness.

4.1.3. Parameter Analysis

To illustrate the effect of rank selection on image restoration, we used the Washington DC Mall dataset (SR = 10%) in simulated restoration experiments with different r, where r ranges from [20,20,20] to [40,40,40]; the results are shown in Figure 8a,b. The overall trend is that MPSNR and MSSIM increase with r; when r > [25,25,25], the curves begin to decline. Therefore, r = [25,25,25] may serve as a reference for other datasets.
To study the ability of the proposed tensor γ-norm to approximate the tensor rank, we used the Washington DC Mall dataset (SR = 10%) in simulated restoration experiments with different γ, where γ ranges from 0.5 to 6; the results are shown in Figure 8c,d. When γ > 1, the effect of γ on the curves changes slowly, and the curves peak at γ = 2. Therefore, in Experiment 1, we fixed γ to 2 empirically.

4.2. Experiment 2: Comparative Experiments for NCTRPCA

In this experiment, the proposed NCTRPCA is compared with four methods: two convex methods, RPCA [7] and TRPCA [31], and two non-convex methods, IRNN [15] and NRLRWTV [13]. RPCA separates clean HSI from sparse corruptions with low-rank and sparsity regularization of the matrix; TRPCA is a convex relaxation based on the TNN norm of the tensor; IRNN is a non-convex, non-smooth minimization denoising method using an iteratively reweighted nuclear norm; and NRLRWTV combines a non-convex low-rank approximation with weighted total variation.
Both simulated and real HSI restoration experiments are performed on three different datasets, with parameters fixed empirically during the experiments. In IRNN, NRLRWTV and RPCA, r = 10 and γ = 2. For TRPCA, r = [10, 10, 10]. In our NCTRPCA method, r = [15, 15, 15], γ = 1.5 and $\lambda = 1/\sqrt{n_1 n_2}$.

4.2.1. Comparative Experiments

For the simulated experiments, the Washington DC Mall and Pavia University datasets are corrupted by mixed Gaussian and salt-and-pepper noise, with the same noise intensity added to each band. Several noise intensities are tested: for zero-mean Gaussian noise, the variances G are 0.025, 0.05, 0.075 and 0.1, while for salt-and-pepper noise, the percentages P are 0.05, 0.1, 0.15 and 0.2.
Table 3 summarizes the RSE, MPSNR, MSSIM and MSAD values after recovery of the Washington DC Mall and Pavia University datasets by the five methods. TRPCA and NCTRPCA are superior to the matrix-based methods in denoising performance, which benefits from the structural properties of the tensor. Meanwhile, compared with the convex TRPCA, the MPSNR and MSSIM indices of NCTRPCA are relatively large, indicating better preservation of spatial structure and spectral information.
For the Washington DC Mall dataset, Figure 9 shows the PSNR and SSIM curves recovered by the denoising methods at a noise intensity of G = 0.05 and P = 0.1. The curves of NRLRWTV and NCTRPCA are more stable, while the index values of NCTRPCA and TRPCA are higher than those of IRNN and NRLRWTV. This is because IRNN, NRLRWTV and RPCA treat HSI as matrices during model construction and solution, whereas NCTRPCA and TRPCA handle HSI as third-order tensors; the tensor representation makes better use of the structural information contained in the spatial similarity and spectral correlation of HSI. Figure 10 shows the recovered images of abnormal band 100 for the five denoising methods: the IRNN and RPCA images are very blurry, while the edges in NRLRWTV, TRPCA and NCTRPCA are clearer. However, the NRLRWTV results contain more residual noise, and TRPCA is somewhat blurry, as shown in the enlarged parts. The proposed NCTRPCA yields the restored image with the highest quality.
For the Pavia University dataset, the advantages of NCTRPCA and TRPCA over the matrix-based methods are illustrated in Figure 11 by the PSNR and SSIM curves with G = 0.05, P = 0.1. As shown previously, the tensor-based methods are more robust than the matrix-based ones. Meanwhile, the NCTRPCA curve has the highest values with the lowest fluctuation, which means it has the best denoising ability and unbiasedness. A visual comparison of band 87 is shown in Figure 12, along with the enlarged red box. Compared with the other four methods, NCTRPCA obtains a noise-free image while retaining more geometrical details.
For the real experiments, performed on the HYDICE Urban dataset to compare recovery results, the parameters are set the same as in the simulated experiments.
In the real experiments, the RSE, PSNR and SSIM indices cannot be computed owing to the absence of ground truth. However, the MSAD between the noisy and denoised images can be computed to measure spectral similarity. Table 4 therefore gives the MSAD after recovery of the HYDICE Urban dataset by the five methods. Compared with the convex methods, the non-convex methods perform better in spectral preservation, and TRPCA and NCTRPCA are superior to the matrix-based methods. Furthermore, NCTRPCA yields a smaller MSAD than TRPCA, which indicates better spectral preservation.
A visual comparison of spectral similarity is given by plotting the spectral signatures of the noisy and recovered images in Figure 13; the closer a curve is to the original, the higher the spectral similarity. As an illustration, pixel (60, 50) of HYDICE Urban is selected randomly. IRNN shows the largest spectral distortion, while the curves of the other non-convex methods are closer to the original. In addition, most of the curves are stable except at some abnormal bands. The results of RPCA and TRPCA, and of NRLRWTV and NCTRPCA, are pairwise comparable, owing to the same low-rank relaxations they employ. On the whole, NCTRPCA performs best in both visual and quantitative assessment, which further demonstrates the effectiveness of the proposed method in reducing spectral distortion.

4.2.2. Parameter Analysis

To illustrate the influence of rank selection on image restoration, we used the Washington DC Mall dataset (G = 0.025, P = 0.05) in noise removal simulation experiments with different r, where r ranges from [5,5,5] to [25,25,25]; the results are shown in Figure 14a,b. On the whole, MPSNR and MSSIM first increase and then decrease with increasing r. At r = [15,15,15], the MPSNR and MSSIM curves reach an approximate peak, and the model has the best denoising effect.
To study the approximation effect of the tensor γ-norm on the tensor rank, we used the Washington DC Mall dataset (G = 0.025, P = 0.05) in noise removal simulation experiments under different γ, with γ ranging from 0.5 to 2.5, as shown in Figure 14c,d. When γ > 1.5, the curves change slowly, and at γ = 1.5 both MPSNR and MSSIM reach their maximum. Therefore, we fixed γ to 1.5 in the experimental parameter selection.
λ adjusts the regularization of the sparse term. We used the Washington DC Mall dataset (G = 0.025, P = 0.05) in noise removal simulation experiments under different λ, where $\lambda = p/\sqrt{n_1 n_2}$ and p ranges from 0.5 to 15, as shown in Figure 14e,f. When p > 1, the MPSNR and MSSIM curves are relatively stable, that is, NCTRPCA is robust to the choice of λ. At p = 1, the curves reach their peak, and the best denoising performance is also obtained visually.
Finally, the time complexity is also analyzed. The proposed methods, along with the other tensor-based methods, take more time than the matrix-based methods. LRTC and TMac are faster than the other methods: LRTC uses the Nesterov strategy for the nonsmooth optimization, while TMac applies low-rank matrix factorization to speed up the algorithm. The remaining tensor-based methods, namely TNN, TRPCA, NCTNN and NCTRPCA, are mainly penalized by the high complexity of the SVD. More efficient algorithms for computing the SVD, such as randomized or parallel algorithms, deserve further investigation.

5. Conclusions

In this paper, to obtain unbiased HSI restoration, we introduce the MCP penalty as a new non-convex penalty into tensor TNN minimization and TRPCA. On the whole, convex low-rank approximation methods are inferior to non-convex ones, and matrix-based methods also perform worse than tensor-based ones. Compared with state-of-the-art non-convex and convex completion and denoising methods, the proposed NCTNN and NCTRPCA perform best in both visual and quantitative assessments, and achieve stable and robust results, especially in retaining spectral structure.
Although the proposed methods achieve satisfying restoration results, some aspects still need improvement. The low-rank-based methods focus on global correlation while ignoring nonlocal spatial similarity; nonlocal tensor patches, which have proven effective in image modeling, will be introduced into the proposed models in our future work. Besides, as mentioned above, the tensor-based methods are quite time-consuming, so efficient processing techniques, e.g., parallel and randomized algorithms, are our ongoing work.

Author Contributions

H.L. (Hanyang Li) conceptualized the work, carried out the data experiments and image processing. H.L. (Hongyi Liu) supervised the research and helped to write the manuscript. Z.W. (Zebin Wu) and Z.W. (Zhihui Wei) contributed to the review of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61971223, Grant 61772274, Grant 61671243 and Grant 61701238, in part by the Fundamental Research Funds for the Central Universities under Grant 30917015104, and in part by the Jiangsu Provincial Natural Science Foundation of China under Grant BK20170858.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, Y.; Yan, L.X.; Wu, T.; Zhong, S. Remote sensing image stripe noise Removal: From image decomposition perspective. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7018–7031. [Google Scholar] [CrossRef]
  2. Huang, Z.H.; Li, S.T.; Hu, F. Hyperspectral image denoising with multiscale low-rank matrix recovery. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017; pp. 5442–5445. [Google Scholar]
  3. Garzelli, A. A review of image fusion algorithms based on the super-resolution paradigm. Remote Sens. 2016, 8, 797. [Google Scholar] [CrossRef] [Green Version]
  4. Zhang, X.; Li, C.; Zhang, J.; Chen, Q.; Feng, J.; Jiao, L.; Zhou, H. Hyperspectral unmixing via low-rank representation with space consistency constraint and spectral library pruning. Remote Sens. 2018, 10, 339. [Google Scholar] [CrossRef] [Green Version]
  5. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  6. Liu, H.Y.; Sun, P.P.; Du, Q.; Wu, Z.B.; Wei, Z.H. Hyperspectral image restoration based on low-rank recovery with a local neighborhood weighted spectral-spatial total variation model. IEEE Trans. Geosci. Remote Sens. 2019, 57, 1409–1422. [Google Scholar] [CrossRef]
  7. Candes, E.J.; Li, X.D.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 1–37. [Google Scholar] [CrossRef]
  8. Gu, S.H.; Zhang, L.; Zuo, W.M.; Feng, X.C. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 2862–2869. [Google Scholar]
  9. Liu, L.; Huang, W.; Chen, D.R. Exact minimum rank approximation via Schatten p-norm minimization. J. Comput. Appl. Math 2014, 267, 218–227. [Google Scholar] [CrossRef]
  10. Xie, Y.; Gu, S.H.; Liu, Y.; Zuo, W.M.; Zhang, W.S.; Zhang, L. Weighted schatten p-norm minimization for image denoising and background subtraction. IEEE Trans. Image Process. 2016, 25, 4842–4857. [Google Scholar] [CrossRef] [Green Version]
  11. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942. [Google Scholar] [CrossRef] [Green Version]
  12. Wang, S.; Liu, D.; Zhang, Z. Nonconvex relaxation approaches to robust matrix recovery. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011. [Google Scholar]
  13. Li, H.Y.; Sun, P.P.; Liu, H.Y.; Wu, Z.B.; Wei, Z.H. Non-convex low-rank approximation for hypersectral image recovery with weighted total variation regularization. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 2733–2736. [Google Scholar]
  14. Wen, F.; Ying, R.; Liu, P.; Truong, T.-K. Matrix completion via nonconvex regularization: Convergence of the proximal gradient algorithm. arXiv 2019, arXiv:1903.00702. [Google Scholar]
  15. Lu, C.Y.; Tang, J.H.; Yan, S.C.; Lin, Z.C. Generalized nonconvex nonsmooth low-rank minimization. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 4130–4137. [Google Scholar]
  16. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. Siam Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  17. Liu, J.; Musialski, P.; Wonka, P.; Ye, J.P. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 208–220. [Google Scholar] [CrossRef] [PubMed]
  18. Deng, Y.J.; Li, H.C.; Fu, K.; Du, Q.; Emery, W.J. Tensor low-rank discriminant embedding for hyperspectral image dimensionality reduction. IEEE Trans. Geosci. Remote Sens. 2018, 56, 7183–7194. [Google Scholar] [CrossRef]
  19. Fan, H.Y.; Chen, Y.J.; Guo, Y.L.; Zhang, H.Y.; Kuang, G.Y. Hyperspectral image restoration using low-rank tensor recovery. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 2017, 10, 4589–4604. [Google Scholar] [CrossRef]
  20. Gandy, S.; Recht, B.; Yamada, I. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Probl. 2011, 27, 2–025010. [Google Scholar] [CrossRef] [Green Version]
  21. Kilmer, M.E.; Martin, C.D. Factorization strategies for third-order tensors. Linear Algebra Appl. 2011, 435, 641–658. [Google Scholar] [CrossRef] [Green Version]
  22. Xu, Y.Y.; Hao, R.R.; Yin, W.T.; Su, Z.X. Parallel matrix factorization for low-rank tensor completion. Inverse Probl. Imaging 2015, 9, 601–624. [Google Scholar] [CrossRef] [Green Version]
  23. Ng, M.K.P.; Yuan, Q.Q.; Yan, L.; Sun, J. An adaptive weighted tensor completion method for the recovery of remote sensing images with missing data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3367–3381. [Google Scholar] [CrossRef]
  24. Kilmer, M.E.; Braman, K.; Hao, N.; Hoover, R.C. Third-order tensors as operatorson matrices: A theoretical and computional framework with applications in imaging. Siam J. Matrix Anal. Appl. 2013, 34, 148–172. [Google Scholar] [CrossRef] [Green Version]
  25. Xie, Q.; Zhao, Q.; Meng, D.Y.; Xu, Z.B. Kronecker-basis-representation based tensor sparsity and its applications to tensor recovery. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1888–1902. [Google Scholar] [CrossRef]
  26. Lu, C.Y.; Feng, J.S.; Chen, Y.D.; Liu, W.; Lin, Z.C.; Yan, S.C. Tensor robust principal component analysis: Exact recovery of corrupted low-rank tensors via convex optimization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 5249–5257. [Google Scholar]
  27. Zhao, L.; Xu, Y.; Wei, Z.H.; Yu, R.P.; Qian, L. Hyperspectral image denoising via coupled spectral-spatial tensor representation. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4784–4787. [Google Scholar]
  28. Liu, Y.Y.; Shang, F.H. An Efficient matrix factorization method for tensor completion. IEEE Signal Process. Lett. 2013, 20, 307–310. [Google Scholar] [CrossRef]
  29. Goldfarb, D.; Qin, Z. Robust low-rank tensor recovery: Models and algorithms. Siam J. Matrix Anal. Appl. 2014, 35, 225–253. [Google Scholar] [CrossRef] [Green Version]
  30. Zhang, Z.M.; Aeron, S. Exact tensor completion using t-SVD. IEEE Trans. Signal Process. 2017, 65, 1511–1526. [Google Scholar] [CrossRef]
  31. Zhang, Z.M.; Ely, G.; Aeron, S.; Hao, N.; Kilmer, M. Novel methods for multilinear data completion and de-noising based on tensor-SVD. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 3842–3849. [Google Scholar]
  32. Hu, W.; Tao, D.; Zhang, W.; Xie, Y.; Yang, Y. The twist tensor nuclear norm for video completion. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2961–2973. [Google Scholar] [CrossRef] [PubMed]
  33. Zhou, P.; Feng, J. Outlier-robust tensor PCA. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–21 July 2017; pp. 2263–2271. [Google Scholar]
  34. Zhou, P.; Lu, C.; Lin, Z.; Zhang, C. Tensor factorization for low-rank tensor completion. IEEE Trans. Image Process. 2017, 27, 1152–1163. [Google Scholar] [CrossRef] [PubMed]
  35. Jiang, T.-X.; Huang, T.-Z.; Zhao, X.-L.; Deng, L.-J. A novel nonconvex approach to recover the low-tubal-rank tensor data: When t-SVD meets PSSV. arXiv 2017, arXiv:1712.05870. [Google Scholar]
  36. Cao, W.F.; Wang, Y.; Yang, C.; Chang, X.Y.; Han, Z.; Xu, Z.B. Folded-concave penalization approaches to tensor completion. Neurocomputing 2015, 152, 261–273. [Google Scholar] [CrossRef]
  37. Ji, T.Y.; Huang, T.Z.; Zhao, X.L.; Ma, T.H.; Deng, L.J. A non-convex tensor rank approximation for tensor completion. Appl. Math. Model. 2017, 48, 410–422. [Google Scholar] [CrossRef]
  38. Li, H.; Liu, H.; Zhang, J.; Wu, Z.; Wei, Z. Non-convex relaxation low-rank tensor completion for hyperspectral image recovery. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 1935–1938. [Google Scholar]
  39. Zhang, X.J. A nonconvex relaxation approach to low-rank tensor completion. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 1659–1671. [Google Scholar] [CrossRef]
  40. Cai, S.T.; Luo, Q.L.; Yang, M.; Li, W.; Xiao, M.Q. Tensor robust principal component analysis via non-convex low rank approximation. Appl. Sci. 2019, 9, 1411. [Google Scholar] [CrossRef] [Green Version]
  41. Yao, Q.; Kwok, J.T.-Y.; Han, B. Efficient nonconvex regularized tensor completion with structure-aware proximal iterations. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 7035–7044. [Google Scholar]
  42. Li, T.; Ma, J. Non-convex penalty for tensor completion and robust PCA. arXiv 2019, arXiv:1904.10165. [Google Scholar]
  43. Gui, H.; Gu, Q. Towards faster rates and oracle property for low-rank matrix estimation. arXiv 2015, arXiv:1505.04780. [Google Scholar]
  44. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208. [Google Scholar] [CrossRef]
  45. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
Figure 1. The flowchart of the proposed method.
Figure 2. Non-convex soft shrinkage and soft shrinkage (λ = 1).
Figure 3. PSNR and SSIM values of each band for Pavia University (SR = 50%): (a) PSNR; (b) SSIM.
Figure 4. Restoration results for Pavia University of band 100 (SR = 50%): (a) sampling; (b) NMC; (c) NORT; (d) NCTNN.
Figure 5. PSNR and SSIM values of each band for Washington DC Mall (SR = 30%): (a) PSNR; (b) SSIM.
Figure 6. Restoration results for Washington DC Mall of band 75 (SR = 10%): (a) sampling; (b) TMac; (c) LRTC; (d) TNN; (e) NCTNN.
Figure 7. Spectrum of pixel (68, 42) in the restoration results for Washington DC Mall (SR = 50%).
Figure 8. Sensitivity analysis of parameters r and γ for Washington DC Mall (SR = 10%): (a) MPSNR of r; (b) MSSIM of r; (c) MPSNR of γ; (d) MSSIM of γ.
Figure 9. PSNR and SSIM values of each band for Washington DC Mall (G = 0.05, P = 0.1): (a) PSNR; (b) SSIM.
Figure 10. Restoration results for Washington DC Mall of band 100: (a) G = 0.05, P = 0.1; (b) IRNN; (c) NRLRWTV; (d) RPCA; (e) TRPCA; (f) NCTRPCA.
Figure 11. PSNR and SSIM values of each band for Pavia University (G = 0.025, P = 0.05): (a) PSNR; (b) SSIM.
Figure 12. Restoration results for Pavia University of band 87: (a) G = 0.025, P = 0.05; (b) IRNN; (c) NRLRWTV; (d) RPCA; (e) TRPCA; (f) NCTRPCA.
Figure 13. Spectrum of pixel (60, 50) in the restoration results for HYDICE Urban.
Figure 14. Sensitivity analysis of parameters for Washington DC Mall (G = 0.025, P = 0.05): (a) MPSNR of r; (b) MSSIM of r; (c) MPSNR of γ; (d) MSSIM of γ; (e) MPSNR of λ; (f) MSSIM of λ.
Table 1. Quantitative evaluation of completion results for Pavia University.

SR     Index    NMC       NORT      NCTNN
10%    RSE      0.1715    0.1498    0.1311
       MPSNR    29.9454   31.0101   32.0785
       MSSIM    0.8010    0.8179    0.8480
       MSAD     10.7925   7.6400    7.4383
50%    RSE      0.1249    0.1061    0.1024
       MPSNR    32.9078   33.1604   33.7900
       MSSIM    0.8749    0.8773    0.8832
       MSAD     7.7770    6.1126    5.5622
70%    RSE      0.0504    0.0490    0.0466
       MPSNR    39.6411   39.9678   40.5303
       MSSIM    0.9653    0.9699    0.9704
       MSAD     3.2117    2.8011    2.6770
Table 2. Quantitative evaluation of completion results for Washington DC Mall.

SR     Index    TMac      LRTC      TNN       NCTNN
10%    RSE      0.1778    0.2549    0.1098    0.1085
       MPSNR    36.4864   34.5769   34.8567   34.8649
       MSSIM    0.8092    0.7499    0.8354    0.8378
       MSAD     10.6298   10.4791   6.1192    6.1147
30%    RSE      0.0796    0.1247    0.0528    0.0519
       MPSNR    38.2925   38.6530   39.9691   40.6430
       MSSIM    0.9284    0.9239    0.9428    0.9360
       MSAD     5.1722    5.6094    3.1803    3.0635
50%    RSE      0.1423    0.0750    0.0508    0.0405
       MPSNR    39.5542   38.9234   40.1398   40.8033
       MSSIM    0.9284    0.9099    0.9380    0.9361
       MSAD     6.5591    3.7911    3.1894    2.7657
80%    RSE      0.0637    0.0645    0.0613    0.0404
       MPSNR    42.1434   41.4434   40.5245   41.1576
       MSSIM    0.9268    0.9232    0.9316    0.9443
       MSAD     4.0437    3.6204    3.9243    2.6206
Table 3. Quantitative evaluation of denoising results for simulated experiments.

Dataset           Noise Level   Index    IRNN      NRLRWTV   RPCA      TRPCA     NCTRPCA
Washington DC     G = 0.025,    RSE      0.0829    0.0556    0.2210    0.0439    0.0402
                  P = 0.05      MPSNR    40.8370   42.7526   26.9754   44.6603   45.9740
                                MSSIM    0.9000    0.9323    0.5710    0.9774    0.9796
                                MSAD     3.7665    2.2682    10.8807   2.2678    2.2237
                  G = 0.05,     RSE      0.0800    0.0601    0.3466    0.0764    0.0599
                  P = 0.1       MPSNR    35.5132   38.0533   22.6086   43.3904   42.0658
                                MSSIM    0.7973    0.8511    0.3902    0.9542    0.9599
                                MSAD     5.3607    4.3987    18.7847   3.9803    3.9644
                  G = 0.075,    RSE      0.1293    0.0860    0.1670    0.1401    0.1293
                  P = 0.15      MPSNR    32.3048   34.8965   35.5580   40.1497   39.7160
                                MSSIM    0.7283    0.8013    0.8147    0.8775    0.9266
                                MSAD     7.7141    6.3412    9.5087    5.4784    5.4685
                  G = 0.1,      RSE      0.2085    0.1150    0.2041    0.1504    0.2085
                  P = 0.2       MPSNR    30.0710   32.8250   34.0711   37.7062   37.6901
                                MSSIM    0.6736    0.7707    0.7965    0.8716    0.8979
                                MSAD     9.7615    6.8418    12.0437   7.6575    6.8022
Pavia University  G = 0.025,    RSE      0.0688    0.0503    0.2233    0.0488    0.0469
                  P = 0.05      MPSNR    37.5136   39.6694   26.4945   40.0122   40.4803
                                MSSIM    0.9490    0.9701    0.6772    0.9655    0.9701
                                MSAD     3.5643    2.7967    8.0692    3.2130    2.7195
                  G = 0.05,     RSE      0.1398    0.0811    0.2762    0.0752    0.0724
                  P = 0.1       MPSNR    34.0721   36.3067   24.4730   35.9387   36.5155
                                MSSIM    0.8890    0.9314    0.6277    0.9266    0.9320
                                MSAD     8.0235    4.8143    10.5837   4.7153    4.7020
                  G = 0.075,    RSE      0.2242    0.1303    0.3670    0.1022    0.1068
                  P = 0.15      MPSNR    31.9070   32.8567   22.0202   33.1871   33.7116
                                MSSIM    0.8314    0.8752    0.5017    0.8774    0.8823
                                MSAD     11.8662   8.1859    14.4344   6.1167    5.6251
                  G = 0.1,      RSE      0.2871    0.1702    0.4830    0.1309    0.1149
                  P = 0.2       MPSNR    29.6851   29.9220   19.6690   31.0240   32.0927
                                MSSIM    0.7588    0.8004    0.3573    0.8197    0.8487
                                MSAD     15.6410   10.7094   18.5420   7.4184    6.8022
Table 4. MSAD values for the real experiments.

Dataset         Index   IRNN      NRLRWTV   RPCA      TRPCA     NCTRPCA
HYDICE Urban    MSAD    11.1242   8.8584    14.7328   8.1838    7.5465

