Article

Hyperspectral Image Denoising Based on Nonlocal Low-Rank and TV Regularization

1 School of Automation, Northwestern Polytechnical University, Xi’an 710072, China
2 Ministry of Basic Education, Sichuan Engineering Technical College, Deyang 618000, China
3 Department of Electronics and Informatics, Vrije Universiteit Brussel, 1050 Brussel, Belgium
4 School of Mechanical and Electrical Engineering, Xi’an University of Architecture and Technology, Xi’an 710075, China
5 Chinese Academy of Engineering, Beijing 100088, China
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 1956; https://doi.org/10.3390/rs12121956
Submission received: 12 May 2020 / Revised: 13 June 2020 / Accepted: 15 June 2020 / Published: 17 June 2020

Abstract

Hyperspectral image (HSI) acquisitions are degraded by various noises, among which additive Gaussian noise may be the worst case, as suggested by information theory. In this paper, we present a novel tensor-based HSI denoising approach that fully identifies the intrinsic structures of the clean HSI and the noise. Specifically, the HSI is first divided into local overlapping full-band patches (FBPs); then, for each reference patch, its nonlocal similar patches are grouped, unfolded, and stacked into a new third-order tensor. As this tensor shows a stronger low-rank property than the original degraded HSI, tensor weighted nuclear norm minimization (TWNNM) applied to the constructed tensor can effectively separate the low-rank clean HSI patches. In addition, a spatial–spectral total variation (SSTV) regularization strategy is utilized to ensure global smoothness in both the spatial and spectral domains. Our method thus models the spatial–spectral nonlocal self-similarity and the global spatial–spectral smoothness simultaneously. Experiments conducted on simulated and real datasets show the superiority of the proposed method.


1. Introduction

For various hyperspectral image (HSI) applications, it is important to fully exploit the useful spatial–spectral features of HSI. However, because of the limitations of the hyperspectral imaging system and the influence of the atmospheric environment and other transmission factors, the captured HSIs are always contaminated by various noises during image acquisition, among which Gaussian noise is the most common and most challenging [1]. This makes HSI denoising a necessary preprocessing step for HSI applications, including classification [2], super-resolution [3,4], compressive sensing [5,6], and so forth.
Traditionally, HSI can be denoised by a vector method [7] or a matrix method [8,9,10,11,12]. The vector method [7] unfolds all of the bands of the HSI into a single long vector. This kind of method has a high processing speed, at the cost of destroying the spatial structure and the spectral correlation. The matrix method can be divided into two categories: the band-by-band method and the tensor-matrixing method [8]. The former is a natural generalization of gray-level image processing. However, as it ignores the correlation between adjacent spectral bands, such a method cannot provide satisfactory results. Tensor-matrixing unfolds each band into a vector and cascades all of the vectors into a matrix (a sketch of both reshaping strategies is given below). Although this kind of method considers spectral correlation, the spatial structure can still be destroyed. Given the shortcomings of such methods, more effective strategies have been proposed that target the correlation in both the spatial and spectral domains. For example, a spatial–spectral wavelet shrinkage method was proposed in [9] to utilize the differences in both the spatial and spectral domains of HSI. To simultaneously utilize the spatial and spectral dependences in a unified probabilistic framework, Yuan et al. [10] proposed a spectral–spatial adaptive total variation model. Some advanced techniques from traditional image processing have also been adopted for HSI denoising, such as nonlocal similarity [11] and anisotropic diffusion [12].
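As a concrete illustration of these two reshaping strategies, the following minimal numpy sketch builds the band-by-band view and the tensor-matrixing view of an HSI cube; the cube size and values are placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical HSI cube: h x v pixels and b spectral bands (placeholder values).
h, v, b = 145, 145, 224
hsi = np.random.rand(h, v, b)

# Band-by-band view: each spectral band is treated as an independent grayscale image,
# so the correlation between adjacent bands is ignored.
bands = [hsi[:, :, k] for k in range(b)]     # b matrices of size h x v

# Tensor-matrixing view: every band is unfolded into a column vector and the vectors
# are cascaded into an (h*v) x b matrix; spectral correlation is kept, spatial structure is not.
unfolded = hsi.reshape(h * v, b)
```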
Low rank (LR) is an important property and common characteristic of HSI, and various approaches based on the LR constraint have been proposed for HSI denoising [13,14,15,16,17,18]. One of the popular approaches for the LR constraint is rank minimization [19,20], in which a nuclear norm [21] is applied in order to estimate the rank of a matrix. However, shrinking all of the singular values equally under the nuclear norm (NN) leads to over- or under-estimating the matrix rank. To overcome this problem, Gu et al. [19,20] proposed the weighted nuclear norm minimization (WNNM) model. From a physical point of view, each singular value has a specific physical meaning; WNNM considers that larger singular values carry more physical information, so each singular value should be treated differently. In particular, the large singular values of a clean image carry more physical information and should be shrunk less (i.e., assigned smaller weights), while the small singular values, which are dominated by noise, should be penalized more heavily. For better denoising results, WNNM is often combined with total variation (TV); the advantage of TV regularization is that it removes noise while keeping the edge texture of HSI. In [22,23], TV-regularized WNNM was proposed for HSI denoising by combining spatial low-rankness and spectral piecewise smoothness. Although these methods improve the denoising performance, they still have some disadvantages. Firstly, they deal with the spatial domain and the spectral domain separately, which may have adverse effects on noise removal. Secondly, these methods fail to fully exploit the prior knowledge on the intrinsic structures of HSI. The recent development of tensor techniques can tackle the aforementioned problems. For example, our previous work [24] enhanced the denoising performance by considering the global and nonlocal low-rank properties. The method in [25] integrated the structure tensor TV [26] into the WNNM model and outperformed the band-by-band TV-regularized WNNM method.
To overcome these drawbacks, we present a novel model that jointly considers the spatial nonlocal similarity and the high spectral correlation. Our contributions are summarized as follows.
First, each group of full band patches (FBPs) is collected by nearest neighbor search (NNS) [27], and the matrix-based WNNM is extended to tensor-based WNNM (TWNNM) so as to keep the multi-dimensional structure.
Second, to preserve finer structures, we use 3D weighted total variation regularization to exploit the prior local smoothness in the spatial–spectral domain.
Third, we propose a novel HSI denoising model by combining low-rank and TV, and the alternating direction method of multipliers (ADMM) is designed to solve the proposed model. We conduct experiments on both synthetic and real datasets so as to illustrate the validity and efficiency of the proposed method.
Figure 1 shows the flowchart of our method. It is noteworthy that the proposed TWNNM-TV is applied to the tensor constructed from each group of similar patches, not to the original degraded HSI. For the sake of brevity and readability, we omit the summation symbol Σ over all patch groups in the model.

2. Notations and Preliminaries

The mathematical symbols used in this paper and their explanations are listed in Table 1. For more details on tensor algebra, interested readers are referred to [28,29]. For a three-order tensor \mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3}, its block circulant matrix is defined as
\mathrm{bcirc}(\mathcal{X}) = \begin{bmatrix} X^{(1)} & X^{(n_3)} & \cdots & X^{(2)} \\ X^{(2)} & X^{(1)} & \cdots & X^{(3)} \\ \vdots & \vdots & \ddots & \vdots \\ X^{(n_3)} & X^{(n_3-1)} & \cdots & X^{(1)} \end{bmatrix}
where X^{(k)} = \mathcal{X}(:, :, k) is the k-th frontal slice of \mathcal{X}.
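As a sanity check of this definition, the block circulant matrix can be assembled from the frontal slices as in the following numpy sketch (illustrative only; it is not used by the denoising algorithm itself).

```python
import numpy as np

def bcirc(tensor):
    """Block circulant matrix of an n1 x n2 x n3 tensor built from its frontal slices.

    Block (i, j) is the frontal slice with index (i - j) mod n3, which reproduces the
    layout [X(1) X(n3) ... X(2); X(2) X(1) ... X(3); ...; X(n3) X(n3-1) ... X(1)].
    """
    n1, n2, n3 = tensor.shape
    slices = [tensor[:, :, k] for k in range(n3)]
    rows = [np.hstack([slices[(i - j) % n3] for j in range(n3)]) for i in range(n3)]
    return np.vstack(rows)   # shape: (n1 * n3) x (n2 * n3)
```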

3. Proposed Model

3.1. From WNNM to TWNNM

We consider the extension of nuclear norm minimization (NNM) to WNNM as follows [19,20]:
\min_{X} \|Y - X\|_F^2 + \|X\|_{w,*}, (1)
where \|X\|_{w,*} = \sum_i w_i \sigma_i(X) represents the weighted nuclear norm (WNN) of the matrix X, w = [w_1, w_2, \ldots, w_n] (w_i \ge 0) denotes the weight vector, and \sigma_i(X) is the i-th singular value of X. Problem (1) has the closed-form solution \hat{X} = U S_{w/2}(\Sigma) V^T, where Y = U \Sigma V^T is the singular value decomposition (SVD) of the matrix Y and S_{w/2}(\cdot) is a soft-thresholding operator defined as S_{w/2}(\Sigma)_{ii} = \max(\Sigma_{ii} - w_i/2, 0).
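This closed-form solution translates directly into code: compute the SVD of Y and soft-threshold each singular value by half of its weight. A minimal numpy sketch (the weight vector w is assumed to be given):

```python
import numpy as np

def wnnm(Y, w):
    """Closed-form solution of Problem (1): X = U S_{w/2}(Sigma) V^T."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - w / 2.0, 0.0)   # S_{w/2}: soft thresholding of singular values
    return (U * s_shrunk) @ Vt                # equivalent to U @ diag(s_shrunk) @ Vt
```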
HSIs collected by an optical imaging system are always contaminated by Gaussian noise [24]. The HSI data form a three-dimensional cube, which can be denoted as a three-order tensor \mathcal{X} = \{X_1, X_2, \ldots, X_b\} \in \mathbb{R}^{h \times v \times b}, where each matrix X_i \in \mathbb{R}^{h \times v} (i = 1, 2, \ldots, b) represents the i-th band of the HSI; h and v represent the height and width of each band, respectively; and the HSI has b spectral bands. As our degradation model considers only Gaussian noise, the additive degradation model is
\mathcal{Y} = \mathcal{X} + \mathcal{N}, (2)
where \mathcal{X}, \mathcal{Y}, \mathcal{N} \in \mathbb{R}^{h \times v \times b} denote the underlying clean HSI, the observed degraded HSI, and the Gaussian noise, respectively. According to our previous work [24], the tensor weighted nuclear norm minimization (TWNNM) model can be formulated as follows:
\min_{\mathcal{X}} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \|\mathcal{X}\|_{w,*}. (3)

3.2. Weighted Tensor Total Variation Regularization

Even though the method of [24] can remove most of the noise, there is still room for improvement. As 2D total variation (TV) has been shown to preserve the local spatial piecewise-smooth structure and suppress noise, it is widely applied in visual processing tasks [8,10]. An HSI has two spatial dimensions and one spectral dimension, and the clean spectral bands should also vary smoothly, so it is natural to use a 3D weighted TV to preserve both the spatial and the spectral smooth structure. It is defined as
\|\mathcal{X}\|_{3DWTV} = \lambda_1 \|D_h \mathcal{X}\|_1 + \lambda_2 \|D_v \mathcal{X}\|_1 + \lambda_3 \|D_p \mathcal{X}\|_1, (4)
where \lambda_1, \lambda_2, and \lambda_3 are three weight parameters, and D_h, D_v, and D_p are the differential operators along the spatial horizontal direction, the spatial vertical direction, and the spectral direction, respectively. Based on the notations in Section 2, D_h \mathcal{X}, D_v \mathcal{X}, and D_p \mathcal{X} at location (i, j, k) are given by
D_h \mathcal{X}_{ijk} = |\mathcal{X}_{i+1,j,k} - \mathcal{X}_{ijk}|, \quad D_v \mathcal{X}_{ijk} = |\mathcal{X}_{i,j+1,k} - \mathcal{X}_{ijk}|, \quad D_p \mathcal{X}_{ijk} = |\mathcal{X}_{i,j,k+1} - \mathcal{X}_{ijk}|. (5)
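For clarity, the 3D weighted TV above can be evaluated with forward differences as in the following sketch; periodic boundaries (np.roll) are assumed here so that the definition matches the FFT-based solver of Section 3.4, and the default weights are the values used in the experiments.

```python
import numpy as np

def weighted_tv_3d(X, lam1=1.0, lam2=1.0, lam3=0.4):
    """3D weighted TV value of an h x v x b cube using forward differences."""
    Dh = np.abs(np.roll(X, -1, axis=0) - X)   # spatial horizontal differences
    Dv = np.abs(np.roll(X, -1, axis=1) - X)   # spatial vertical differences
    Dp = np.abs(np.roll(X, -1, axis=2) - X)   # spectral differences
    return lam1 * Dh.sum() + lam2 * Dv.sum() + lam3 * Dp.sum()
```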

3.3. Nonlocal Low-Rank Tensor Construction

When constructing the nonlocal low-rank tensor, we use the traditional nearest neighbor search (NNS). For an individual reference FBP of size m × m × b, we use NNS to find its k most similar patches. Then, each FBP is unfolded into a matrix of size m^2 × 1 × b, and all k + 1 FBPs (including the reference one) are stacked into a three-order tensor of size m^2 × (k + 1) × b. This operation corresponds to the unfolding and stacking stages in Figure 1 (a sketch is given below). Note that the constructed three-order tensor jointly utilizes the spatial local sparsity, the nonlocal similarity in the spectral and spatial domains, and the high spectral correlation. All of the FBP groups denoised by the proposed TWNNM-TV are split into matrices of size m^2 × b; each matrix is folded back into an FBP of size m × m × b, and all of the FBPs are aggregated into the final denoised HSI. This operation corresponds to the splitting and folding stages in Figure 1.
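The grouping, unfolding, and stacking just described can be sketched as follows. The exhaustive search inside a local window stands in for the NNS of [27], and the window radius and Euclidean distance measure are our own assumptions for illustration.

```python
import numpy as np

def build_patch_group(hsi, ref_ij, m=10, k=40, search_radius=20):
    """Group the k full-band patches (FBPs) most similar to a reference FBP.

    Each m x m x b FBP is unfolded into an m*m x b matrix; the reference plus its
    k nearest neighbours are stacked into an m*m x (k+1) x b tensor (Section 3.3).
    """
    h, v, b = hsi.shape
    i0, j0 = ref_ij
    ref = hsi[i0:i0 + m, j0:j0 + m, :].reshape(m * m, b)

    candidates = []
    for i in range(max(0, i0 - search_radius), min(h - m, i0 + search_radius) + 1):
        for j in range(max(0, j0 - search_radius), min(v - m, j0 + search_radius) + 1):
            if (i, j) == (i0, j0):
                continue
            patch = hsi[i:i + m, j:j + m, :].reshape(m * m, b)
            candidates.append((np.linalg.norm(patch - ref), patch))
    candidates.sort(key=lambda t: t[0])          # smallest distance first

    group = [ref] + [p for _, p in candidates[:k]]   # k + 1 unfolded FBPs
    return np.stack(group, axis=1)                   # m*m x (k+1) x b tensor
```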
To illustrate that the patch groups have a stronger low-rank property than the original HSI, we plot the first 40 singular values of the patch groups (blue curve) and of the original patch (red curve) in Figure 2a. For a closer observation, the part with singular value indexes between 10 and 20 is zoomed in, as shown in Figure 2b. It can be seen from Figure 2a that the singular values of the patch groups are lower than those of the original HSI and that they decrease rapidly. This phenomenon demonstrates that the rank of the patch group is clearly lower than that of the original HSI, and the same conclusion can be drawn from Figure 2b. Therefore, we apply the LR constraint on the patch groups instead of on the original HSI.

3.4. Model Proposal and Optimization

Combining the low-rank prior (TWNNM) and the spatial–spectral smoothness prior (TV) of the image component, the final optimization model for HSI denoising is as follows:
\min_{\mathcal{X}} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \|\mathcal{X}\|_{w,*} + \lambda_1 \|D_h \mathcal{X}\|_1 + \lambda_2 \|D_v \mathcal{X}\|_1 + \lambda_3 \|D_p \mathcal{X}\|_1. (6)
The ADMM method [30] is used to solve the proposed Model (6). To this end, we introduce four auxiliary variables \mathcal{P}, D_1, D_2, D_3 into Model (6), which is equivalent to the following problem:
\min_{\mathcal{X}, \mathcal{P}, D_1, D_2, D_3} \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \|\mathcal{P}\|_{w,*} + \lambda_1 \|D_1\|_1 + \lambda_2 \|D_2\|_1 + \lambda_3 \|D_3\|_1, \quad \mathrm{s.t.}\ \mathcal{X} = \mathcal{P},\ D_1 = D_h \mathcal{X},\ D_2 = D_v \mathcal{X},\ D_3 = D_p \mathcal{X}. (7)
Problem (7) can be rewritten in its augmented Lagrangian form as follows:
L(\mathcal{X}, \mathcal{P}, D_1, D_2, D_3, \Lambda_i) = \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \|\mathcal{P}\|_{w,*} + \lambda_1 \|D_1\|_1 + \lambda_2 \|D_2\|_1 + \lambda_3 \|D_3\|_1 + \frac{\mu}{2}\left( \|\mathcal{X} - \mathcal{P} + \Lambda_1/\mu\|_F^2 + \|D_1 - D_h \mathcal{X} + \Lambda_2/\mu\|_F^2 + \|D_2 - D_v \mathcal{X} + \Lambda_3/\mu\|_F^2 + \|D_3 - D_p \mathcal{X} + \Lambda_4/\mu\|_F^2 \right), (8)
where \Lambda_i (i = 1, 2, 3, 4) are the Lagrange multipliers and \mu is a positive penalty parameter. For this multivariable optimization problem, the usual way is to fix all but one variable and optimize the variables alternately, one by one. The optimization process is summarized in Algorithm 1.
Algorithm 1 Optimization Process for the Proposed Solver
1: Input: noisy image \mathcal{Y}, regularization parameters \lambda_1 = 1, \lambda_2 = 1, \lambda_3 = 0.4, tolerance \varepsilon, k_{\max} = 100, \mu_{\max} = 10^6, \rho.
2: Initialize: \mathcal{X} = \mathcal{Y}, D_1 = D_2 = D_3 = 0, \mathcal{P} = 0, k = 0, \Lambda_i = 0 (i = 1, 2, 3, 4).
while not converged do
3: Update \mathcal{P} via \mathcal{P} = \mathrm{fold}\{S_{\nu,\omega}(\mathcal{X} + \Lambda_1/\mu)\}.
4: Update D_1 via D_1 = \mathrm{soft}(D_h \mathcal{X} + \Lambda_2/\mu, \lambda_1/\mu).
5: Update D_2 via D_2 = \mathrm{soft}(D_v \mathcal{X} + \Lambda_3/\mu, \lambda_2/\mu).
6: Update D_3 via D_3 = \mathrm{soft}(D_p \mathcal{X} + \Lambda_4/\mu, \lambda_3/\mu).
7: Compute \mathcal{X} via the 3D FFT: \mathcal{X} = \mathrm{ifftn}(C / ((1 + \mu)\mathbf{1} + \mu D)).
8: Update the Lagrange multipliers: \Lambda_1 = \Lambda_1 + \mu(\mathcal{X} - \mathcal{P}), \Lambda_2 = \Lambda_2 + \mu(D_1 - D_h \mathcal{X}), \Lambda_3 = \Lambda_3 + \mu(D_2 - D_v \mathcal{X}), \Lambda_4 = \Lambda_4 + \mu(D_3 - D_p \mathcal{X}).
9: Update the penalty parameter \mu = \min\{\rho\mu, \mu_{\max}\}.
end while
Output: the restoration result \mathcal{X}.
By fixing the other variables, each variable can be optimized as follows:
P1: \arg\min_{D_1}\ \lambda_1 \|D_1\|_1 + \frac{\mu}{2}\|D_1 - D_h \mathcal{X} + \Lambda_2/\mu\|_F^2
P2: \arg\min_{D_2}\ \lambda_2 \|D_2\|_1 + \frac{\mu}{2}\|D_2 - D_v \mathcal{X} + \Lambda_3/\mu\|_F^2
P3: \arg\min_{D_3}\ \lambda_3 \|D_3\|_1 + \frac{\mu}{2}\|D_3 - D_p \mathcal{X} + \Lambda_4/\mu\|_F^2
P4: \arg\min_{\mathcal{P}}\ \|\mathcal{P}\|_{w,*} + \frac{\mu}{2}\|\mathcal{X} - \mathcal{P} + \Lambda_1/\mu\|_F^2
P5: \arg\min_{\mathcal{X}}\ \frac{1}{2}\|\mathcal{Y} - \mathcal{X}\|_F^2 + \frac{\mu}{2}\left( \|\mathcal{X} - \mathcal{P} + \Lambda_1/\mu\|_F^2 + \|D_1 - D_h \mathcal{X} + \Lambda_2/\mu\|_F^2 + \|D_2 - D_v \mathcal{X} + \Lambda_3/\mu\|_F^2 + \|D_3 - D_p \mathcal{X} + \Lambda_4/\mu\|_F^2 \right)
The subproblems P1, P2, and P3 have the same form, and by using the soft-thresholding operator of [31], they can be updated as
D_1 = \mathrm{soft}(D_h \mathcal{X} + \Lambda_2/\mu, \lambda_1/\mu), \quad D_2 = \mathrm{soft}(D_v \mathcal{X} + \Lambda_3/\mu, \lambda_2/\mu), \quad D_3 = \mathrm{soft}(D_p \mathcal{X} + \Lambda_4/\mu, \lambda_3/\mu),
where \mathrm{soft}(r, \theta) = \mathrm{sign}(r)\max(|r| - \theta, 0).
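In code this operator is a single vectorized expression; a minimal numpy sketch:

```python
import numpy as np

def soft(r, theta):
    """Element-wise soft thresholding: soft(r, theta) = sign(r) * max(|r| - theta, 0)."""
    return np.sign(r) * np.maximum(np.abs(r) - theta, 0.0)
```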
The subproblem P5 can be solved via the following linear system:
\left[(1 + \mu)\mathcal{I} + \mu\left(D_h^T D_h + D_v^T D_v + D_p^T D_p\right)\right]\mathcal{X} = \mathcal{Y} + \mu(\mathcal{P} - \Lambda_1/\mu) + \mu D_h^T(D_1 + \Lambda_2/\mu) + \mu D_v^T(D_2 + \Lambda_3/\mu) + \mu D_p^T(D_3 + \Lambda_4/\mu),
where \mathcal{I} denotes the identity (unit) operator and D_h^T, D_v^T, and D_p^T represent the transposes (adjoints) of D_h, D_v, and D_p, respectively. Taking the periodic boundary condition for \mathcal{X} into consideration, \mathcal{X} can be updated efficiently via the 3D fast Fourier transform (FFT), as follows:
\mathcal{X} = \mathrm{ifftn}\left(\frac{C}{(1 + \mu)\mathbf{1} + \mu D}\right),
where C = \mathrm{fftn}\big(\mathcal{Y} + \mu(\mathcal{P} - \Lambda_1/\mu) + \mu D_h^T(D_1 + \Lambda_2/\mu) + \mu D_v^T(D_2 + \Lambda_3/\mu) + \mu D_p^T(D_3 + \Lambda_4/\mu)\big), D = |\mathrm{fftn}(D_h)|^2 + |\mathrm{fftn}(D_v)|^2 + |\mathrm{fftn}(D_p)|^2 collects the eigenvalues of D_h^T D_h + D_v^T D_v + D_p^T D_p under the periodic boundary condition, \mathbf{1} is the all-ones tensor, the division is element-wise, and \mathrm{fftn} and \mathrm{ifftn} denote the 3D FFT and its inverse, respectively.
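Under the periodic boundary condition, the linear system is diagonalized by the 3D FFT, so the X update costs one forward and one inverse transform per iteration. A sketch of this update, with the difference operators implemented as periodic forward differences:

```python
import numpy as np

def diff(X, axis):
    """Periodic forward difference along one axis (D_h, D_v or D_p)."""
    return np.roll(X, -1, axis=axis) - X

def diff_T(X, axis):
    """Adjoint (transpose) of the periodic forward difference."""
    return np.roll(X, 1, axis=axis) - X

def solve_x_fft(Y, P, D, Lam, mu):
    """X update of subproblem P5 via the 3D FFT (a sketch).

    D = [D1, D2, D3] and Lam = [Lam1, ..., Lam4] follow the notation of Algorithm 1.
    """
    rhs = Y + mu * (P - Lam[0] / mu)
    for i, ax in enumerate((0, 1, 2)):
        rhs += mu * diff_T(D[i] + Lam[i + 1] / mu, ax)

    # Eigenvalues of D_a^T D_a under periodic boundaries: |FFT of the difference kernel|^2.
    denom = (1.0 + mu) * np.ones(Y.shape)
    for ax in (0, 1, 2):
        kernel = np.zeros(Y.shape)
        index = [0, 0, 0]
        kernel[tuple(index)] = -1.0
        index[ax] = 1
        kernel[tuple(index)] = 1.0
        denom += mu * np.abs(np.fft.fftn(kernel)) ** 2

    return np.real(np.fft.ifftn(np.fft.fftn(rhs) / denom))
```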
For the subproblem P4, keeping Problem (1) in mind and following [32], its closed-form solution is \mathcal{P} = \mathrm{fold}\{S_{\nu,\omega}(\mathcal{X} + \Lambda_1/\mu)\}, where \nu = 1/\mu.
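A sketch of this proximal step follows the t-SVD route used in [24,32]: transform along the third mode with the FFT, apply weighted singular value thresholding to every frontal slice, and transform back. The weight rule w_i = C*sqrt(n)/(sigma_i + eps) is the common WNNM choice [19,20] and is our assumption here rather than a quotation from [32].

```python
import numpy as np

def prox_twnnm(T, nu, C=1.0, eps=1e-8):
    """Tensor WNNM proximal operator via t-SVD (a sketch).

    T  : third-order tensor, e.g. an m*m x (k+1) x b patch group.
    nu : threshold scale, nu = 1 / mu in Algorithm 1.
    """
    F = np.fft.fft(T, axis=2)                    # FFT along the third (tube) mode
    n = min(T.shape[0], T.shape[1])
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        w = C * np.sqrt(n) / (s + eps)           # assumed WNNM weight rule
        s = np.maximum(s - nu * w, 0.0)          # weighted thresholding, threshold nu * w_i
        F[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(F, axis=2))       # back to the original domain
```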

4. Experimental Results and Analysis

To evaluate our method for HSI denoising, we perform experiments on simulated and real-world data. The compared state-of-the-art denoising methods include TV-regularized low-rank matrix factorization (LRTV) [8], low-rank matrix recovery (LRMR) [18], automatic hyperspectral image restoration (HyRes) [33], noise-adjusted iterative low-rank matrix approximation (NAILRMA) [34], and total variation regularized low-rank tensor decomposition (LRTDTV) [35]. The codes of these methods were downloaded from the authors’ homepages, and their parameters were manually tuned to obtain their best results. For the weight parameters λi (i = 1, 2, 3) in the TV regularization, considering that λ1 and λ2 both control the spatial dimensions of the HSI, they are assigned the same weight. For simplicity, we set λ1 = λ2 = 1 and then tune λ3 according to the reconstruction performance; the best results are obtained with λ3 = 0.4.

4.1. Experiment with Simulated Data

In the experiment with simulated data, we add Gaussian noise of different intensities to the Indian Pines dataset [36], whose size is 145 × 145 × 224.
(1) Visual comparison: The denoising results of different methods on the 11th band are presented in Figure 3 and Figure 4, with Gaussian noise variances of 20 and 60, respectively. It can be seen from Figure 3b and Figure 4b that the clean HSIs suffer different degrees of degradation. When the noise variance is 20, the compared methods remove most of the Gaussian noise, but there is still obvious residual noise in LRMR and NAILRMA. When the variance is 60, obvious residual noise remains in all of the compared methods. It can be observed from the enlarged yellow squares in the top left corner of Figure 3 and the top right corner of Figure 4 that the results obtained by our method preserve clearer and sharper edges, whereas the results obtained by HyRes and LRTV show blurred edges or over-smoothing. In general, our method outperforms all of the compared methods at different noise levels.
(2) Quantitative comparison: Some frequently-used objective evaluation indexes are adopted, including the mean peak signal-to-noise ratio (MPSNR) [37], the mean structural similarity index (MSSIM) [37], the mean spectral angle mapper (MSAM) [38], and the Erreur Relative Globale Adimensionnelle de Synthese (ERGAS; relative dimensionless global error in synthesis in English) [39]. PSNR (its unit is dB) and SSIM are utilized to assess the similarity between the denoised image and the original image based on mean square error (MSE) and structural consistency, respectively. Larger values of MPSNR and MSSIM indicate that the results are better. ERGAS is used to measure the fidelity of the denoised image by calculating the weighted sums of the MSE of all the bands, while SAM denotes the average angle of spectrum vectors between the denoised HSI and its corresponding original image across all spatial positions. SAM fully reflects the spectral consistency of the denoised HSI with the original image. Smaller values of these two indexes represent better denoised results. The definitions of these indexes are as follows:
\mathrm{MPSNR} = \frac{1}{b}\sum_{i=1}^{b} 10\log_{10}\frac{255^2 \times n_i}{\|\hat{u}_i - u_i\|^2},
where b represents the number of spectral bands, \hat{u}_i and u_i are the i-th bands of the restored image and of the original clean image (they are of the same size), and n_i represents the total number of pixels of the band u_i.
\mathrm{SSIM} = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}, \quad \mathrm{MSSIM} = \frac{1}{b}\sum_{i=1}^{b}\mathrm{SSIM}_i,
where \mu_x and \mu_y represent the mean values of images x and y, \sigma_x^2 and \sigma_y^2 are their variances, \sigma_{xy} is the covariance of the two images, and C_1 and C_2 are constants.
\mathrm{SAM} = \arccos\left[\frac{\sum_{i=1}^{n} x_i y_i}{\sqrt{\sum_{i=1}^{n} x_i^2}\sqrt{\sum_{i=1}^{n} y_i^2}}\right],
where S_1 = (x_1, x_2, \ldots, x_n) and S_2 = (y_1, y_2, \ldots, y_n) represent the spectral vectors at the same spatial location in the denoised and original HSI.
\mathrm{ERGAS} = 100\,\frac{h}{v}\sqrt{\frac{1}{b}\sum_{i=1}^{b}\left(\frac{\mathrm{RMSE}(x_i)}{\mu(i)}\right)^2},
where h, v, and b are defined above, \mathrm{RMSE}(x_i) denotes the root-mean-square error (RMSE) of band x_i, and \mu(i) denotes the mean of the corresponding reference band y_i.
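The four indexes can be reproduced with numpy and scikit-image as in the sketch below. The images are assumed to be h × v × b arrays scaled to [0, 255], and the h/v factor in ERGAS equals 1 for the square Indian Pines scene.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate(clean, denoised):
    """MPSNR, MSSIM, MSAM (radians) and ERGAS for two h x v x b cubes in [0, 255]."""
    h, v, b = clean.shape

    mpsnr = np.mean([10 * np.log10(255.0 ** 2 /
                     np.mean((clean[:, :, i] - denoised[:, :, i]) ** 2))
                     for i in range(b)])
    mssim = np.mean([ssim(clean[:, :, i], denoised[:, :, i], data_range=255)
                     for i in range(b)])

    # SAM: average spectral angle over all spatial positions.
    x = clean.reshape(-1, b)
    y = denoised.reshape(-1, b)
    cos = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) *
                                   np.linalg.norm(y, axis=1) + 1e-12)
    msam = np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))

    # ERGAS: band-wise RMSE relative to the band means of the reference image.
    rmse = np.sqrt(np.mean((clean - denoised) ** 2, axis=(0, 1)))
    mean_band = np.mean(clean, axis=(0, 1))
    ergas = 100.0 * (h / v) * np.sqrt(np.mean((rmse / (mean_band + 1e-12)) ** 2))
    return mpsnr, mssim, msam, ergas
```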
It is easy to see from Table 2 that the MPSNR values of our method are 3–7 dB higher than the maximum PSNR values of the compared methods. For MSSIM, the TV-regularization methods (LRTV, LRTDTV, and our method) achieve better results than the other methods, but the MSSIM values of our method are still higher than those of LRTV and LRTDTV. This indicates that the denoising results of our method have a better visual effect, and this is consistent with what we see in Figure 3 and Figure 4. To take a closer look at the SSIM and PSNR values of all the bands, we use noise variance 60 (see Figure 5) as an example. The results from Figure 5 show that our method outperforms almost all of the compared methods for each band, except that the SSIM values of our method in some bands are lower than those of LRTV.
On account of the nonlocal similarity, it can be seen from Table 2 that the MSAM and ERGAS values of our method are much lower than those of the other five methods. This can be interpreted as our method being able to better maintain the spectral information. To demonstrate the spectral fidelity achieved by our method, Figure 6 shows the spectral reflectance curve at location (55, 55) for all of the compared methods. In Figure 6, the blue curve represents the spectral reflectance values of the original image, and the orange curve represents the denoised spectral reflectance. It is not difficult to see that the spectral curve obtained by the proposed method shows less spectral distortion and fits the original spectral curve better than those of the compared methods.

4.2. Real-World Data Experiments

The performance on the simulated dataset is evaluated in Section 4.1. In this section, we choose two widely used real-world HSI datasets to verify the denoising performance. The first one is the Hyperspectral Digital Imagery Collection Experiment (HYDICE) urban dataset [40] and the second one is the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Salinas dataset [36].

4.2.1. HYDICE Urban Dataset

The original dataset has 210 bands, and each band is a 307 pixel × 307 pixel grayscale image. In the experiment, we manually adjust the parameters of the compared methods accordingly to achieve the best results.
Figure 7 and Figure 8 display the denoising results of band 206 and band 138, respectively, obtained with the different methods. There is still plenty of residual noise in the results of HyRes, LRMR, NAILRMA, and LRTDTV. For LRTV, most of the noise is removed, but the result is over-smoothed. By considering the nonlocal low-rank property and spectral–spatial TV regularization, our proposed method shows superior performance in removing the Gaussian noise while preserving the spatial texture information and the spectral information.
Based on the above analysis, we further evaluate all of the denoising algorithms with the mean cross-track profile (MCTP) [24]. All of the MCTPs of the 206th spectral band after denoising, along with the original MCTP, are presented in Figure 9. The horizontal axis and the vertical axis in Figure 9 represent the column number and the mean digital number (MDN) value of each column, respectively. The existing noise leads to severe disturbances in the profile of the original image. After denoising, the disturbances are suppressed by the compared methods with different levels of success. In particular, the dead lines in Figure 7a are also eliminated, and the corresponding MCTPs are smoothed, as shown in the red circles in Figure 9. Evidently, our method provides a smoother mean profile, which is consistent with the results shown in Figure 7.

4.2.2. AVIRIS Salinas Dataset

This dataset contains 224 bands, and each band is a 512 × 217 pixel grayscale image. The original image is too large for display, so we extract a subimage of 300 × 217 pixels to show the denoising results. The second band of this dataset is contaminated by heavy Gaussian noise (Figure 10a), so we select this band to evaluate the denoising performance; the results are presented in Figure 10b–g. We can observe that the LRMR method completely fails to denoise this band. HyRes and NAILRMA can remove some noise, but obvious noise still remains. As for LRTV and LRTDTV, they over-smooth the image and distort the structure, as presented in Figure 10c,f, and thus fail to give satisfactory results. Figure 10g indicates that the proposed method can still keep sharp edges while removing heavy noise. All of the MDN curves of band 2 before and after denoising are presented in Figure 11. By comparison, our method achieves a better restoration result than the compared methods.

4.3. Discussion

(1) Parameter selection
The regularization parameters are discussed in Section 4.1. The other two parameters are related to the FBPs (of size m × m × b): the patch size m and the number k of similar patches in each group.
To determine the optimal values of m and k, the MPSNR index in the simulated experiments is used as the criterion. The curves of the MPSNR values versus m under three different noise variances are shown in Figure 12a, in which σ² denotes the noise variance. It can be seen that, in the interval from 5 to 10, the MPSNR value increases with m. The most likely reason is that the damaged structure can be better restored as m increases. As m continues to increase, the MPSNR value gradually decreases, which indicates that a reasonably satisfactory denoising effect can be obtained under an appropriate selection of m. When the variance is 50, the MPSNR is slightly higher at m = 11 than at m = 10; as the improvement is negligible, the patch size is set to m = 10 for all of the experiments.
Moreover, the number of similar FBPs (k) is evaluated with the parameter m fixed. The result in Figure 12b shows that the MPSNR index increases gradually until k = 40 and drops slowly thereafter. A possible explanation is that including too many patches weakens the low-rank property of the group. Furthermore, more FBPs mean a higher time cost, so we finally set the number of similar FBPs to 40.
(2) Convergence Analysis
To illustrate the convergence of the proposed method, the relative changes and MPSNR values versus the iteration number of our method are presented in Figure 13. It can be seen that the values of these two indexes tend to be stable after about 35 iterations, which clearly shows the convergence of our method.
(3) Operation time analysis
To compare the computational cost of the algorithms, we report their running times on the Salinas dataset. The running times (in seconds) are shown in Table 3. LRMR is the fastest, but its performance is the worst. Our algorithm is not the fastest, but it is also not the slowest.

5. Conclusions

To remove Gaussian noise, we propose a TV-regularized TWNNM model. In this model, we apply TWNNM to the HSI patch groups clustered by NNS to characterize the LR property of similar patches. Moreover, TV regularization is utilized not only to suppress noise, but also to keep the local smoothness in both the spatial and the spectral domains. Experiments on both simulated and real HSI datasets indicate that our method retains the detailed information of the image better while noise points are removed, which can be explained by the fact that the combined LR and smoothness priors of the image component have the ability to accurately suppress noise and keep the smooth structure. Our method outperforms the state-of-the-art methods both in visual quality and in the evaluation criteria. In future work, we will extend our method to other restoration tasks, such as magnetic resonance imaging (MRI) [41] and optical coherence tomography (OCT) images [42,43,44].

Author Contributions

X.K. conducted experiments, analyzed the results, and wrote the paper. Y.Z. conceived the experiments and was responsible for the research analysis. J.X., J.C.-W.C., Z.R., H.H., and J.Z. collected and processed the original data. All of the co-authors helped revise the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Shaanxi Key R&D Plan (2020ZDLGY07-11), the National Natural Science Foundation of China (61771391, 61371152), the Shenzhen Municipal Science and Technology Innovation Committee (JCYJ20170815162956949, JCYJ20180306171146740), the Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University (CX201917), and the Natural Science Basic Research Plan in Shaanxi Province of China (no. 2018JM6056).

Acknowledgments

We are grateful to the authors of the compared methods for providing the source codes.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Shomorony, I.; Avestimehr, A.S. Is Gaussian noise the worst-case additive noise in wireless networks? In Proceedings of the 2012 IEEE International Symposium on Information Theory Proceedings (ISIT), Cambridge, MA, USA, 1–6 July 2012.
2. Yang, J.; Zhao, Y.; Chan, J.C. Learning and Transferring Deep Joint Spectral–Spatial Features for Hyperspectral Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4729–4742.
3. Yi, C.; Zhao, Y.Q.; Chan, J.C.W. Spectral super-resolution for multispectral image based on spectral improvement strategy and spatial preservation strategy. IEEE Trans. Geosci. Remote Sens. 2019.
4. Bu, Y.; Zhao, Y.; Xue, J. Hyperspectral and Multispectral Image Fusion via Graph Laplacian-Guided Coupled Tensor Decomposition. IEEE Trans. Geosci. Remote Sens. 2020, 99, 1–15.
5. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Hyper-Laplacian Regularized Nonlocal Low-rank Matrix Recovery for Hyperspectral Image Compressive Sensing Reconstruction. Inf. Sci. 2019, 501, 406–420.
6. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonlocal Tensor Sparse Representation and Low-Rank Regularization for Hyperspectral Image Compressive Sensing Reconstruction. Remote Sens. 2019, 11, 193.
7. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
8. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188.
9. Othman, H.; Qian, S.-E. Noise reduction of hyperspectral imagery using hybrid spatial-spectral derivative-domain wavelet shrinkage. IEEE Trans. Geosci. Remote Sens. 2006, 44, 397–408.
10. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral–spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677.
11. Qian, Y.; Ye, M. Hyperspectral imagery restoration using nonlocal spectral-spatial structured sparse representation with noise estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 499–515.
12. Wang, Y.; Niu, R.; Yu, X. Anisotropic diffusion for hyperspectral imagery enhancement. IEEE Sens. J. 2010, 10, 469–477.
13. Wright, J.; Ganesh, A.; Rao, S.; Peng, Y.; Ma, Y. Robust Principal Component Analysis: Exact Recovery of Corrupted Low-Rank Matrices via Convex Optimization. In Proceedings of the Neural Information Processing Systems 2009, Vancouver, BC, Canada, 6–8 December 2009; pp. 2080–2088.
14. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.-W. Nonlocal Low-Rank Regularized Tensor Decomposition for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019.
15. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711.
16. Xue, J.; Zhao, Y.; Liao, W.; Kong, S.G. Joint Spatial and Spectral Low-Rank Regularization for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2017, 99, 1–19.
17. Zheng, Y.; Liu, G.; Sugimoto, S.; Yan, S.; Okutomi, M. Practical Low-Rank Matrix Approximation under Robust L1-Norm. In Proceedings of the 2012 Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA, 16–21 June 2012; pp. 1410–1417.
18. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4729–4743.
19. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869.
20. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208.
21. Cai, J.; Candes, E.J.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982.
22. Wu, Z.; Wang, Q.; Wu, Z.; Shen, Y. Total Variation-Regularized Weighted Nuclear Norm Minimization for Hyperspectral Image Mixed Denoising. J. Electron. Imaging 2016, 25, 13037.
23. Du, B.; Huang, Z.; Wang, N.; Zhang, Y.; Jia, X. Joint weighted nuclear norm and total variation regularization for hyperspectral image denoising. Int. J. Remote Sens. 2018, 39, 334–355.
24. Kong, X.; Zhao, Y.; Xue, J.; Chan, J.C.-W. Hyperspectral Image Denoising Using Global Weighted Tensor Norm Minimum and Nonlocal Low-Rank Approximation. Remote Sens. 2019, 11, 2281.
25. Wu, Z.; Wang, Q.; Jin, J.; Shen, Y. Structure tensor total variation regularized weighted nuclear norm minimization for hyperspectral image mixed denoising. Signal Process. 2017, 131, 202–219.
26. Lefkimmiatis, S.; Roussos, A.; Maragos, P.; Unser, M. Structure tensor total variation. SIAM J. Imaging Sci. 2015, 8, 1090–1122.
27. Berchtold, S.; Ertl, B.; Keim, D.A.; Kriegel, H.-P.; Seidl, T. Fast Nearest Neighbor Search in High-Dimensional Space. In Proceedings of the 14th International Conference on Data Engineering, Orlando, FL, USA, 23–27 February 1998; pp. 209–218.
28. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500.
29. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C.; Kong, S.G. Enhanced Sparsity Prior Model for Low-Rank Tensor Completion. IEEE Trans. Neural Netw. Learn. Syst. 2019, 12, 1–15.
30. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
31. Donoho, D.L. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
32. Zhang, C.; Hu, W.; Jin, T.; Mei, Z. Nonlocal image denoising via adaptive tensor nuclear norm minimization. Neural Comput. Appl. 2015.
33. Rasti, B.; Ulfarsson, M.O.; Ghamisi, P. Automatic hyperspectral image restoration using sparse and low-rank modeling. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2335–2339.
34. He, W.; Zhang, H.; Zhang, L.; Shen, H. Hyperspectral Image Denoising via Noise-Adjusted Iterative Low-Rank Matrix Approximation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3050–3061.
35. Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.-L.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 1227–1243.
36. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 11 May 2020).
37. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
38. Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among Semi-Arid Landscape Endmembers Using the Spectral Angle Mapper (SAM) Algorithm. In Proceedings of the 1992 Summaries of the 3rd Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; JPL: Pasadena, CA, USA; pp. 147–149.
39. Kong, X.; Zhao, Y.; Xue, J.; Chan, J.C.-W.; Kong, S.G. Global and Local Tensor Sparse Approximation Models for Hyperspectral Image Destriping. Remote Sens. 2020, 12, 704.
40. Available online: http://www.tec.army.mil/hypercube (accessed on 11 May 2020).
41. Yuan, J. MRI denoising via sparse tensors with reweighted regularization. Appl. Math. Model. 2019, 69, 552–562.
42. Turani, Z.; Fatemizadeh, E.; Blumetti, T.; Daveluy, S.; Moraes, A.F.; Chen, W.; Mehregan, D.; Andersen, P.E.; Nasiriavanaki, M. Optical Radiomic Signatures Derived from Optical Coherence Tomography Images to Improve Identification of Melanoma. Cancer Res. 2019, 79, 2021–2030.
43. Adabi, S.; Rashedi, E.; Clayton, A.; Mohebbi-Kalkhoran, H.; Chen, X.-W.; Conforto, S.; Nasiriavanaki, M. Learnable despeckling framework for optical coherence tomography images. J. Biomed. Opt. 2018, 23, 1–12.
44. Eybposh, M.H.; Turani, Z.; Mehregan, D.; Nasiriavanaki, M. Cluster-based filtering framework for speckle reduction in OCT images. Biomed. Opt. Express 2018, 9, 6359–6373.
Figure 1. The flowchart of the proposed method.
Figure 2. (a) Comparison of the low-rank property between the patch group and the original hyperspectral image (HSI). (b) Zoomed-in comparison of (a) when the singular number is between 10 and 20.
Figure 3. Denoised results of the 11th band by different methods when the variance is 20: (a) original, (b) noisy, (c) HyRes, (d) total variation-regularized low-rank matrix factorization (LRTV), (e) low-rank matrix recovery (LRMR), (f) noise-adjusted iterative low-rank matrix approximation (NAILRMA), (g) total variation regularized low-rank tensor decomposition (LRTDTV), and (h) proposed.
Figure 4. Denoised results of the 11th band by different methods when the variance is 60: (a) original, (b) noisy, (c) HyRes, (d) LRTV, (e) LRMR, (f) NAILRMA, (g) LRTDTV, and (h) proposed.
Figure 5. The peak signal-to-noise ratio (PSNR) value (a) and structural similarity index (SSIM) value (b) of each band when the noise variance is 60.
Figure 6. Full band spectral reflectance curve at spatial position (55, 55): (a) HyRes, (b) LRTV, (c) LRMR, (d) NAILRMA, (e) LRTDTV, and (f) proposed.
Figure 7. Denoised results by different methods of the 206th band in an urban dataset: (a) original, (b) HyRes, (c) LRTV, (d) LRMR, (e) NAILRMA, (f) LRTDTV, and (g) proposed.
Figure 8. Denoised results by different methods of the 138th band in an urban dataset: (a) original, (b) HyRes, (c) LRTV, (d) LRMR, (e) NAILRMA, (f) LRTDTV, and (g) proposed.
Figure 9. Vertical mean profiles of the denoising results of the 206th spectral band in the Hyperspectral Digital Imagery Collection Experiment (HYDICE) urban image: (a) original, (b) HyRes, (c) LRTV, (d) LRMR, (e) NAILRMA, (f) LRTDTV, and (g) proposed.
Figure 10. Denoised results by different methods of the second band in the Salinas dataset: (a) original, (b) HyRes, (c) LRTV, (d) LRMR, (e) NAILRMA, (f) LRTDTV, and (g) proposed.
Figure 11. Vertical mean profiles of the denoising results of the second spectral band in the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Salinas image: (a) original band, (b) HyRes, (c) LRTV, (d) LRMR, (e) NAILRMA, (f) LRTDTV, and (g) proposed.
Figure 12. PSNR value changes: (a) mean peak signal-to-noise ratio (MPSNR) versus patch size (m) and (b) MPSNR versus patch number (k).
Figure 13. Convergence analysis with the iteration number: (a) relative change \|\mathcal{X}^{k+1} - \mathcal{X}^{k}\|_F / \|\mathcal{X}^{k}\|_F and (b) MPSNR versus the iteration number on the simulated Indian Pines dataset.
Table 1. The main mathematical symbols used in the paper and their corresponding explanations.
Notation | Description
x, \mathbf{x}, X, \mathcal{X} | scalar, vector, matrix, tensor
x_i | i-th entry of a vector \mathbf{x}
x_{ij} | element (i, j) of a matrix X
\mathcal{X}_{ijk} | element (i, j, k) of a three-order tensor \mathcal{X}
\|\mathcal{X}\|_F = \sqrt{\sum_{i_1=1}^{I_1}\sum_{i_2=1}^{I_2}\cdots\sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N}^2} | Frobenius norm of an N-order tensor \mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}
Table 2. Quantitative comparison of all of the compared methods under different noise variances for the Indian Pines dataset.
Variance | Index | Noisy | HyRes | LRTV | LRMR | NAILRMA | LRTDTV | Proposed
10 | MPSNR | 26.780 | 42.395 | 41.066 | 42.503 | 43.088 | 43.503 | 49.79
10 | MSSIM | 0.6566 | 0.9874 | 0.9958 | 0.9849 | 0.9845 | 0.9976 | 0.9987
10 | MSAM | 0.0789 | 0.0116 | 0.108 | 0.0114 | 0.0104 | 0.0100 | 0.0038
10 | ERGAS | 91.77 | 16.02 | 20.36 | 15.25 | 14.22 | 15.17 | 7.22
20 | MPSNR | 20.763 | 37.469 | 38.815 | 36.522 | 38.214 | 41.092 | 43.330
20 | MSSIM | 0.4366 | 0.9665 | 0.9927 | 0.9472 | 0.9586 | 0.990 | 0.9953
20 | MSAM | 0.1565 | 0.0200 | 0.0143 | 0.0227 | 0.0174 | 0.0136 | 0.0090
20 | ERGAS | 183.45 | 28.26 | 25.82 | 30.412 | 25.22 | 19.32 | 14.891
30 | MPSNR | 17.24 | 34.75 | 35.65 | 32.95 | 34.97 | 37.84 | 40.16
30 | MSSIM | 0.3240 | 0.9417 | 0.9863 | 0.8949 | 0.9219 | 0.9685 | 0.9902
30 | MSAM | 0.2316 | 0.0270 | 0.0205 | 0.0343 | 0.0248 | 0.0204 | 0.0127
30 | ERGAS | 275.15 | 38.30 | 33.94 | 46.10 | 36.15 | 27.44 | 20.48
40 | MPSNR | 14.741 | 32.860 | 33.798 | 30.418 | 33.105 | 35.017 | 38.012
40 | MSSIM | 0.2528 | 0.9150 | 0.9787 | 0.8379 | 0.8933 | 0.931 | 0.9830
40 | MSAM | 0.3037 | 0.0332 | 0.0252 | 0.0461 | 0.0299 | 0.0285 | 0.0157
40 | ERGAS | 366.98 | 47.708 | 42.389 | 61.367 | 45.078 | 37.76 | 23.67
50 | MPSNR | 12.80 | 31.44 | 32.10 | 28.51 | 31.22 | 32.76 | 35.78
50 | MSSIM | 0.2031 | 0.8964 | 0.9697 | 0.7807 | 0.8524 | 0.8846 | 0.9734
50 | MSAM | 0.3719 | 0.0385 | 0.0309 | 0.0574 | 0.0367 | 0.0371 | 0.0212
50 | ERGAS | 458.73 | 56.21 | 50.51 | 76.09 | 55.77 | 49.12 | 34.81
60 | MPSNR | 11.219 | 30.279 | 30.898 | 26.981 | 29.723 | 30.821 | 34.983
60 | MSSIM | 0.1688 | 0.8687 | 0.9620 | 0.7259 | 0.8126 | 0.8339 | 0.9649
60 | MSAM | 0.4359 | 0.0438 | 0.0360 | 0.0683 | 0.0432 | 0.0464 | 0.0228
60 | ERGAS | 550.44 | 64.06 | 59.84 | 91.33 | 66.51 | 61.47 | 37.81
70 | MPSNR | 9.88 | 29.24 | 29.71 | 25.61 | 28.58 | 29.26 | 32.70
70 | MSSIM | 0.1388 | 0.8488 | 0.9514 | 0.6745 | 0.7812 | 0.7806 | 0.9521
70 | MSAM | 0.4964 | 0.0496 | 0.0428 | 0.0800 | 0.0485 | 0.0552 | 0.0321
70 | ERGAS | 642.04 | 71.60 | 69.80 | 105.44 | 75.02 | 72.97 | 48.95
80 | MPSNR | 8.717 | 28.459 | 28.787 | 24.572 | 27.582 | 27.939 | 32.202
80 | MSSIM | 0.1167 | 0.8262 | 0.9415 | 0.6297 | 0.7465 | 0.7314 | 0.9425
80 | MSAM | 0.5523 | 0.0539 | 0.0472 | 0.0906 | 0.0544 | 0.0646 | 0.0333
80 | ERGAS | 734.13 | 78.09 | 75.95 | 119.77 | 85.11 | 85.01 | 51.61
90 | MPSNR | 7.70 | 27.72 | 27.68 | 23.52 | 26.61 | 26.71 | 31.62
90 | MSSIM | 0.0992 | 0.7981 | 0.9272 | 0.5856 | 0.7122 | 0.6806 | 0.9288
90 | MSAM | 0.6044 | 0.0592 | 0.0584 | 0.1024 | 0.0606 | 0.0747 | 0.0354
90 | ERGAS | 825.26 | 84.92 | 86.87 | 135.19 | 95.01 | 97.08 | 54.24
100 | MPSNR | 6.78 | 27.10 | 26.60 | 22.62 | 25.71 | 25.72 | 31.13
100 | MSSIM | 0.0855 | 0.7827 | 0.9118 | 0.5463 | 0.6792 | 0.6396 | 0.9184
100 | MSAM | 0.6528 | 0.0632 | 0.0665 | 0.1136 | 0.0668 | 0.0835 | 0.0362
100 | ERGAS | 917.09 | 91.87 | 99.68 | 149.93 | 103.87 | 108.51 | 57.96
Table 3. Comparison of the running time.
Method | HyRes | LRMR | LRTV | NAILRMA | LRTDTV | Proposed
Time (seconds) | 290.20 | 9 | 483.42 | 439.21 | 942.92 | 587.08

Citation: Kong, X.; Zhao, Y.; Xue, J.; Chan, J.C.-W.; Ren, Z.; Huang, H.; Zang, J. Hyperspectral Image Denoising Based on Nonlocal Low-Rank and TV Regularization. Remote Sens. 2020, 12, 1956. https://doi.org/10.3390/rs12121956