Article

Hyperspectral Image Reconstruction Based on Spatial-Spectral Domains Low-Rank Sparse Representation

1 School of Geography, Liaoning Normal University, Dalian 116029, China
2 College of Information Science and Engineering, Northeastern University, Shenyang 110167, China
3 School of Computer and Information Technology, Liaoning Normal University, Dalian 116029, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(17), 4184; https://doi.org/10.3390/rs14174184
Submission received: 13 June 2022 / Revised: 3 August 2022 / Accepted: 22 August 2022 / Published: 25 August 2022
(This article belongs to the Section Remote Sensing Image Processing)

Abstract: The enormous amount of data generated by hyperspectral remote sensing images (HSI), combined with the limited and fragile bandwidth of the spatial channel, creates serious transmission, storage, and application challenges. HSI reconstruction based on compressed sensing has become a frontier area, and its effectiveness depends heavily on the exploitation and sparse representation of HSI information correlation. In this paper, we propose a low-rank sparse constrained HSI reconstruction model (LRCoSM) based on joint spatial-spectral HSI sparsity. In the spectral dimension, a spectral-domain sparsity measure and a representation of the joint spectral dimensional plane are proposed for the first time. A Gaussian mixture model (GMM) based on unsupervised adaptive parameter learning from external datasets is used to cluster similar patches of joint spectral plane features, capturing the correlation of spectral-dimension non-local structural image patches while performing low-rank decomposition of the clustered similar patches to extract feature information, effectively improving the low-rank approximate sparse representation of spectrally similar patches. In the spatial dimension, local-nonlocal HSI similarity is explored to refine the sparse prior constraints. The spectral and spatial sparse constraints together improve HSI reconstruction quality. Experimental results at various sampling rates on four publicly available datasets show that, compared with six currently popular reconstruction algorithms, the proposed algorithm obtains reconstructions with high PSNR and FSIM values and effectively preserves the spectral curves of few-band datasets, with strong robustness and generalization ability at different sampling rates and across datasets.

1. Introduction

Hyperspectral remote sensing images (HSI) make it possible to observe features at a high level of detail due to their 'atlas-integrated' and spectrally high-resolution characteristics. However, the enormous amount of data and the limited and fragile bandwidth of the spatial channel create great challenges for HSI transmission and reconstruction [1]. Limited by signal bandwidth and Nyquist's sampling theorem [2], early HSI reconstruction schemes based on signal sampling techniques struggled to guarantee the quality of the reconstructed images. With the continuous development of compressive sensing theory [3,4], researchers have directly sampled the features of sparse signals and achieved signal reconstruction from a small amount of uncorrelated observation information under prior sparsity constraints, which has greatly improved the quality of the reconstructed signals [5].
In recent years, the classical approach to obtaining reconstructed images has been to extract sparse feature information from the image by turning expert domain prior knowledge into regularization terms and solving an inverse optimization problem to reach an optimal reconstruction [6]. Early HSI reconstruction could use discrete Fourier transform (DFT) filters [7], discrete cosine transform (DCT) filters [8], discrete wavelet transform (DWT) filters [9], etc., to obtain sparse feature coefficients by changing the dictionary basis space. Subsequently, Kawakami et al. [10] proposed the classical OMP algorithm for solving sparse coefficients with greedy ideas, which is widely used in HSI reconstruction tasks. From the perspective of extracting similar texture structure features, Zhang et al. [11] used a principal component analysis (PCA) unsupervised dimensionality reduction algorithm to map HSI high-dimensional manifold feature information into a low-dimensional subspace, overcoming the curse of dimensionality while capturing super-pixel global and local features, and thus achieving a high-quality reconstruction effect. To explore the sparse properties of the HSI structure at a deeper level, Xu et al. [12] used the unsupervised K-means clustering method to group the existing spectral bands, combined with the OMP algorithm to reduce the dimensionality in batches and obtain the sparse coefficients, achieving prediction and reconstruction of the other bands. On this basis, Azimpour et al. [13] proposed a clustering method with variational Bayesian maximum posterior probability estimation to mine the global similarity properties of HSI in a statistical Bayesian framework and thus obtain better reconstruction results. In recent years, Gaussian mixture models have been widely used in image reconstruction, classification, and anomaly detection because, starting from Bayesian probability statistics, they can effectively estimate the feature information of a dataset for complex and variable feature classes. Qu et al. [14] used the GMM method to extract the anomalous feature pixels in each band and effectively implemented GMM-weighted HSI anomaly detection. Subsequently, Ma et al. [15] used super-pixel segmentation to mine spatial dimensional similarity based on a Gaussian mixture model to obtain homogeneous regions, combining low-rank attributes to effectively solve the hyperspectral unmixing problem; however, the work lacked an exploration of the sparsity of the HSI spectral dimensional structure.
Based on the sparsity of the spatially and spectrally highly redundant HSI in a specific dictionary basis, [16] lays the foundation for HSI reconstruction with compressive sensing. In order to achieve better reconstruction results, the sparse properties of HSI have been studied in different spatial and spectral dimensions, and corresponding HSI reconstruction methods have been proposed [17,18,19,20,21,22,23,24]. Reconstruction methods based on a combination of sparse representation and low-rank approximation have also received attention in recent years [25,26]. Xue et al. [27] proposed a spatial-spectral structured sparse low-rank representation model that learns the low-rank factors of the affinity matrix, based on the existence of spatial non-local similarity and spectral band correlation in HSI, and implemented HR-MSI and LR-HSI fusion as a representative case to obtain super-resolution HSI reconstructed images, which is of strong value for HSI sparse reconstruction tasks. Yi et al. [28] effectively merged HS and MS data information to obtain high-quality HSI reconstructed images by considering both spatial and spectral correlations and forming a regularization constraint on the overcomplete dictionary and low-rank features. This class of algorithms extracts features using sparse representation and low-rank approximation methods, then reconstructs the image with a fixed orthogonal base dictionary, yielding high-quality reconstruction when the extracted image features match the dictionary features. However, this consistency of features is difficult to achieve; for this reason, reversible projection matrix learning algorithms [29] and K-SVD algorithms [30] have been proposed to alternately train the dictionaries and the sparse coefficients of image patches for reconstruction purposes. Fotiadou et al. [31], for example, used sparse dictionary learning methods instead of the traditional fixed observation matrix. To better maintain the 3D structure of HSI, HSI reconstruction methods based on tensor representation have also developed well. Chen et al. proposed HSI reconstruction methods based on non-local tensor ring decomposition [32] and on weighted group sparse regularized low-rank tensor decomposition (LRTDGS) [33]. The former achieves image reconstruction by applying a low-rank constraint on the subspace, while the latter improves reconstruction quality through a low-rank Tucker decomposition that allows HSI to capture the spatial-spectral correlation in three dimensions, approximating the unfolded image with an $\ell_2$ norm. Xue et al. [34] used an $\ell_1$-based nuclear norm as a tensor sparse low-rank regularization term to characterize the spatial-spectral structural correlation, instead of setting an a priori rank in the original Tucker decomposition. The effectiveness of the above methods depends heavily on the mining and effective representation of HSI information correlation; the complexity of actual Earth observation scenarios, the multidimensionality of HSI data structures, and the high redundancy of the data leave much room for development in this research area.
Furthermore, the recent success of deep learning techniques in image vision has led to their application in the field of HSI reconstruction, where deep neural networks are used to mine HSI features, constrain the reconstruction process, and obtain reconstructed images by optimizing a heuristic network [35,36,37,38,39,40]. However, shortcomings such as an excessive training cost, weak generalization, and poor interpretability severely limit the usefulness of these methods. Current HSI reconstruction schemes based on sparse representation still need in-depth study, and the reconstruction quality of HSI still needs to be improved. In particular, how to preserve the structural features and spectral characteristics of the reconstructed HSI while improving the quality of the spatial information during reconstruction has not been well addressed. This paper proposes a low-rank sparse representation reconstruction model for HSI based on an in-depth study of the sparse properties of the joint spatial-spectral dimensions of HSI.
The main contributions of this paper are listed as follows:
  • Adjacent spectral dimensional planes are grouped and stitched together in a folded-fan fashion to form a joint spectral dimensional plane. This innovatively designed "joint spectral dimensional plane" structure leads to the conclusion that the joint spectral dimensional plane of HSI can capture similar patches of the HSI spectral-dimension non-local structure more effectively, laying the foundation for determining more effective sparsity constraints. This conclusion can be applied not only to the sparse reconstruction of HSI but also to other sparse-representation-based applications of HSI.
  • A Gaussian mixture model (GMM) based on unsupervised adaptive parameter learning from external datasets is proposed to guide the clustering of similar patches of joint spectral dimensional plane features, which not only reduces the setting of a priori domain hyperparameters but also breaks through the traditional non-local similar-patch search with fixed, small windows, further improving the low-rank approximate sparse representation of similar patches.
  • A low-rank approximate representation HSI sparse reconstruction model (LRCoSM) is designed with collaborative constraints from the spatial local-nonlocal correlation and the joint spectral dimensional plane structural correlation, which addresses the problem of insufficient band data and maintains the structural information of the HSI well while effectively improving the spatial quality of the reconstructed images.
The remainder of the paper is organized as follows. Section 2 presents the theoretical meaning and principles of compressed sensing for image processing in detail and demonstrates the existence of non-local similarity in HSI. Section 3 proposes the construction of the joint spectral dimensional plane and gives a flowchart of the proposed algorithm together with a detailed description of its implementation. Section 4 analyzes the proposed algorithm numerically and discusses it against the comparison algorithms. Section 5 concludes the paper.

2. Related Works

2.1. Compressed Sensing of Image Patches

Image sparse representation is the process of pre-selecting bases or dictionaries that can compress and sparsely represent an image, obtaining a matrix of sparse coefficients based on the image patch's own texture characteristics. Specifically, an image patch $X$ of size $m \times n$ can be represented by a linear combination of far fewer than $m \times n$ non-zero coefficients. A common representation is the decomposition of the two-dimensional image $X_{m \times n}$ into a linear combination of $K$ linearly independent unitary orthogonal basis matrices, as shown in Equation (1):
$X = \sum_{i=1}^{K} \theta_i \Psi_i = \Psi \theta$
where $\Psi = (\Psi_1, \Psi_2, \ldots, \Psi_K)$ denotes a series of orthogonal basis matrices and $\theta = [\theta_1, \theta_2, \ldots, \theta_K]^T$ denotes the sparse coefficients of the image $X$ under the projection of basis $\Psi$; $K$ is often referred to as the sparsity. The advent of dictionaries has broken the traditional limitation that basis functions cannot yield good sparse solutions, giving rise to the sparse decomposition of high-dimensional complex signals with over-complete dictionaries, in which the dictionary matrix has fewer rows than columns ($K > n$). Compressed sensing sampling differs from the traditional sampling approach by allowing sampling while compressing: the image patch $X$, vectorized to length $N$, is projected by an observation matrix of size $M \times N$, achieving a non-adaptive linear projection and yielding an observation vector $Y$, which can be written in matrix form as:
$Y = \Phi X$
Substituting Equation (1) into Equation (2) gives:
$Y = \Phi X = \Phi \Psi \theta = \Theta \theta$
where $\Phi$ denotes the measurement matrix and $\Psi$ the sparse transformation basis; multiplying the measurement matrix by the sparse transformation basis yields the sensing matrix $\Theta$. The process of recovering and reconstructing the original image $X$ is then expressed as:
$\min \|X\|_0 \quad \mathrm{s.t.} \quad \Phi X = y$
Most image reconstruction methods use the convex $\ell_1$-norm relaxation to solve the non-convex $\ell_0$-norm optimization problem, which is NP-hard: the unstable non-convex $\ell_0$ sparse term is replaced by the convex $\ell_1$ term to extract image features while removing redundant information, and the convex sparse term serves as an a priori constraint in a regularized reconstruction model that is solved iteratively. The optimal solution over the convergence interval then yields the best reconstructed estimate:
$\hat{X} = \arg\min_X \frac{1}{2}\|y - \Phi X\|_2^2 + \lambda \|X\|_1$
where the first term, called the residual term, measures the error between the reconstructed image $\hat{X}$ and the original image $X$ during iteration. When the iterations converge ($\|y - \Phi X\|_2 \le \alpha$ for a preset tolerance $\alpha$) or the maximum number of iterations is reached, the loop terminates and the reconstructed image is obtained. $\lambda$ ($\lambda > 0$) denotes the penalty parameter, which controls the sparsity of the sparse code $X$.
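To illustrate how a problem of the form of Equation (5) is typically solved, the following is a minimal sketch of the iterative soft-thresholding algorithm (ISTA), one standard solver for this $\ell_1$-regularized problem. It is not the solver used in this paper, and the step size, penalty, and dimensions are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, Phi, lam=0.1, n_iter=200):
    """Estimate a sparse x from compressed measurements y = Phi @ x (Equation (5))."""
    x = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2                # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)               # gradient of the residual term
        x = soft_threshold(x - grad / L, lam / L)  # gradient step + l1 proximal step
    return x

# Usage: recover a 10-sparse signal of length 256 from 100 random measurements.
rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.normal(size=10)
Phi = rng.normal(size=(100, 256)) / np.sqrt(100)
x_hat = ista(Phi @ x_true, Phi)
```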

2.2. Hyperspectral Image Band Non-Local Correlation

The features reflected by light emissivity in a single band of HSI have a regional periodic distribution, so the gray value of each pixel in the HSI also changes with the region, and pixels in the same region are strongly correlated with their neighbors. The search area can also be extended according to the principle of locality: a particular pixel at the center of a certain range is surrounded by patches of the same size with a very high degree of similarity, implying the existence of non-local correlation. Extending the non-local similarity property of a single image to the multi-band case, we chose bands 30, 40, 50, 60, 70, and 80 from the 102 bands captured by the ROSIS spectrometer at the Pavia Center (cropped from the upper-left corner, size $256 \times 256$, see Section 4.1) and selected representative locations of roofs, open spaces, and roads as the center coordinates of reference patches of size $8 \times 8$, as shown in Figure 1. At the same time, a search window of fixed size $10 \times 10$ is used to find image patches similar to the reference patch, and their similarity is measured using the average Euclidean distance between image patches, Equation (6):
$d(I_m, I_n) = \frac{\sqrt{\sum_{x=1}^{m} \sum_{y=1}^{n} \left[I_m(x,y) - I_n(x,y)\right]^2}}{m \times n \times q}$
where $I_m$ and $I_n$ denote the two image patches to be compared, both of size $m \times n$. Since the Euclidean distance correlation between pure black and pure white image patches is the weakest, we use their correlation coefficient $q$ as a normalization factor and assign it a value of 31.875. $d(I_m, I_n)$ denotes the similarity of the two image patches based on the normalized mean Euclidean distance: the smaller the value, the more similar the patches.
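To make the search procedure concrete, the following is a minimal sketch of the non-local similar-patch search using the normalized mean Euclidean distance of Equation (6). The window handling (a search radius around the reference patch) and the function names are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def patch_distance(p1, p2, q=31.875):
    """Normalized mean Euclidean distance (Equation (6)) between equal-size patches."""
    m, n = p1.shape
    return np.sqrt(np.sum((p1.astype(float) - p2.astype(float)) ** 2)) / (m * n * q)

def find_similar_patches(band, ref_yx, patch=8, window=10, top_k=5):
    """Return the top-k most similar patch coordinates inside the search window."""
    ry, rx = ref_yx
    ref = band[ry:ry + patch, rx:rx + patch]
    scores = []
    for y in range(max(0, ry - window), min(band.shape[0] - patch, ry + window) + 1):
        for x in range(max(0, rx - window), min(band.shape[1] - patch, rx + window) + 1):
            if (y, x) != (ry, rx):
                cand = band[y:y + patch, x:x + patch]
                scores.append((patch_distance(ref, cand), (y, x)))
    return [yx for _, yx in sorted(scores)[:top_k]]   # smaller distance = more similar
```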
In order to quantify the relationship between the distribution of non-locally similar patches and the degree of similarity between bands, we recorded the upper-left corner coordinates of the reference patches at the same location in each band and of their similar patches, as shown in Table 1. Three reference patches are listed: (19,49) on the school roof, identical in bands 30-60; (149,48) on the open space, identical in bands 40, 60, 70, and 80; and (150,208) on the road, whose similarity fluctuates somewhat because pedestrians and vehicles affect the reflectance at different times, although the overall location distribution varies little.
Based on the non-local correlation of HSI bands, this paper adopts the "overall non-local similarity patch" method of our previous paper [23] for processing the HSI spatial-domain correlation: the first band of each group is called the key band, a reference patch is identified and similar patches are searched for on the key band, and the patches at the same positions in the remaining bands of the group are then stacked together to form the overall non-local similarity patch, which in turn forms the sparse regularization constraint of the reconstruction model, as described in Section 3.4.1.

3. Methodology

In this paper, we propose a hyperspectral image reconstruction model (LRCoSM) with a joint spatial-spectral domain low-rank sparse representation to achieve high-quality sparse reconstruction of multi-band HSI at different sampling rates. The overall architecture of the model is shown in Figure 2. The model consists of four processes: first, combining compressed sensing theory, the original dataset is mapped to a sparse dimension at a low sampling rate to obtain the image feature representation; second, local-nonlocal similar image patch feature information is found in the spatial domain as the spatial sparse constraint regularization term; third, the "joint spectral dimensional plane" is constructed, and the low-rank sparse representation of similar image patches by the GMM yields the spectral domain feature information, which forms the spectral sparse constraint regularization term; and fourth, the reconstructed image is obtained by iteratively solving the constrained optimization problem and applying the inverse compressed sensing transform. In this section, the joint spectral dimensional plane is first defined, a Gaussian mixture model low-rank clustering algorithm is then proposed on this basis, and finally the reconstructed image is obtained within the compressive sensing framework.

3.1. Joint Spectral Dimensional Structure and Its Correlation

Based on the segmental smoothness of the HSI feature spectral curves, in our previous paper [24] we proposed the concept of the "spectral dimensional plane" in the HSI spectral domain and demonstrated that the spatially local correlation of an HSI band group yields similar spectral curves in the same row or column, and that there is also some correlation between the different bands of a single spectral curve. This paper extends that work by deep feature mining in the global-range spectral domain and proposes a "joint spectral dimensional plane" structure to detect and capture the correlation between long-range spectral curves of complex-scene HSI. We stitch the adjacent spectral curves in a grouped manner, as in Figure 3. For the hyperspectral image $X_{m \times n \times q} = (X_1, X_2, \ldots, X_q)$ ($m$ denotes the rows of the spatial dimensional plane, $n$ the columns, and $q$ the number of bands), the spatial dimensional single-band image is denoted as $x_{S_p\_k}$ ($S_p = m \times n$, $k \in \{1, 2, \ldots, q\}$), and the spectral dimensional plane as $x_{S_n\_j}$ ($S_n = m \times q$, $j \in \{1, 2, \ldots, n\}$). The spectral dimensional plane $x_{S_n\_1}$, where the first column of the spatial dimensional plane is located, is taken as the first reference dimensional plane, and $N$ consecutive spectral dimensional planes are selected to form the first experimental dataset. Since features are locally correlated in the spatial dimension, neighboring spectral dimensional planes are likewise correlated to some extent; we therefore perform a $\Gamma$ operation, a mirror inversion along the $m$-axis, on the spectral dimensional planes with $j = 2, 4, 6, \ldots, N$ to achieve an effective extension of the structural correlation. In the second group, $x_{S_n\_N+1}$ is used as the second reference dimensional plane and $N$ consecutive spectral dimensional planes are again selected to form the second experimental dataset, and so on, forming a total of $r = n/N$ experimental datasets. Usually the value of $N$ is a factor of $n$, denoted by the divisibility symbol "|"; if $n$ is not divisible by $N$, spectral dimensional planes of similar bands can be pieced together to obtain an integer number $r$ of joint spectral dimensional datasets. At the same time, to satisfy $n = q \times N$ as far as possible, the product of the number of bands $q$ and the selected $N$ spectral dimensional planes equals the number of original HSI columns $n$. The stitching rule is given in Equation (7):
$x_{S_n\_j}^{r} = \begin{cases} \Gamma\left(x_{S_n\_j}^{r}\right), & j = 2, 4, 6, \ldots, N \\ x_{S_n\_j}^{r}, & j = 1, 3, 5, \ldots, N-1 \end{cases} \qquad r = N \mid n, \quad n = q \times N$
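The following is a minimal sketch of constructing one joint spectral dimensional plane according to Equation (7), under our reading that each group stitches $N$ consecutive spectral dimensional planes (each of size $m \times q$) side by side and mirror-flips every even-indexed plane within the group along the $m$-axis; the exact stitching layout is an assumption.

```python
import numpy as np

def joint_spectral_plane(cube, r, N):
    """cube: HSI array of shape (m, n, q); returns the r-th joint plane, (m, N*q)."""
    m, n, q = cube.shape
    planes = []
    for j in range(r * N, (r + 1) * N):
        plane = cube[:, j, :]            # spectral dimensional plane x_{Sn_j}, (m, q)
        if (j - r * N) % 2 == 1:         # even-numbered plane within the group
            plane = plane[::-1, :]       # Gamma: mirror inversion along the m-axis
        planes.append(plane)
    return np.hstack(planes)             # folded-fan stitching into one joint plane

# Usage: the Pavia Center example of Section 3.1 (256 x 256 x 8 crop, N = 32)
# yields r = n / N = 8 joint planes, each of size 256 x 256.
```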
In order to construct an integer number of joint spectral planes and verify the existence of non-local correlations in the joint spectral planes with a low number of bands while recovering high-quality HSI, we selected the joint spectral unfolding and stitching example of Pavia Center with 8 bands (bands 85-92, $256 \times 256 \times 8$), letting $N = 32$ and thus constructing eight joint spectral dimensional datasets, as shown in Figure 4. Subsequently, the non-local extraction of structurally similar patches introduced with Equation (7) is performed on each of these eight experimental datasets.
The blue part of the image above is the search area of size $15 \times 15$; the red image patch is the reference patch and the green image patches are the similar patches, all of size $10 \times 10$. We selected the top five most similar patches in the search area and made the edge length of the image patch larger than the width of the single-band spectral plane. To further verify our conjecture, we extended the search area to the global range. Figure 5 gives an example of patch matching for some similar patches in the global range of the joint spectral dimension.
We take the upper-left corner reference coordinates of the joint spectral plane patches in Figure 5a-f as center points and use the Euclidean distance discriminant to draw a rainbow plot of similarity: the closer the blue part is to the reference coordinate point, the more similar it is. The similar patches in a single joint spectral plane are found to be discretely and irregularly distributed across rows and columns, which confirms the existence of non-local correlation in the joint spectral plane.

3.2. GMM Guides Image Patches Clustering

This paper uses a generative GMM to perform sparse a priori mining of the joint spectral dimensional structural correlation as a Bayesian posterior problem. For this purpose, the overlapping patches of a specific size on the joint spectral dimensional plane $\{x_{S_n\_j}^{r}\}$, searched by sliding at a certain step size, are taken as independent initial samples $x_i$, $i \in \{1, 2, 3, \ldots, j\}$. Assuming that all patches satisfy a Gaussian distribution, the probability density function (PDF) value of each sample is given by:
$g(x_i \mid \mu, \Sigma) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x_i - \mu)^T \Sigma^{-1} (x_i - \mu)\right)$
where $\mu$ and $\Sigma$ denote the mean and the symmetric positive definite covariance matrix of a single Gaussian distribution, respectively, and $D$ is the dimension of the space in which the sample lies; since the image patch is two-dimensional, $D = 2$ is chosen. Due to the large amount of texture information on the joint spectral dimensional plane, similar patches can be clustered into $K$ clusters, which means a single Gaussian distribution cannot adequately represent the probability of the category to which a patch belongs; the model is therefore taken as a linear superposition of $K$ Gaussian distribution functions, giving the Gaussian mixture model (GMM):
$P(x_i) = \sum_{k=1}^{K} \pi_k\, g(x_i \mid \mu_k, \Sigma_k)$
where $\mu_k$ and $\Sigma_k$ denote the parameters of the $k$-th Gaussian mixture component, and $\pi_k > 0$ denotes the weight of each Gaussian distribution, satisfying $\sum_{k=1}^{K} \pi_k = 1$. $P(x_i)$ is called the joint probability density function, giving the probability of each sample image patch with respect to each cluster center.
Further, in order to clarify from which specific Gaussian distribution a sample image patch comes and to which cluster it belongs, we introduce the discrete random hidden variable cluster labels $Z \in \{1, 2, \ldots, K\}$. Since the image patches are divided into $K$ clusters, each image patch corresponds to $K$ hidden variable labels, and we then solve for the posterior probability $P(Z = k \mid x_i)$ that the sample image patch $x_i$ belongs to the cluster whose label maximizes the probability density. From Bayes' theorem and Equation (9) we obtain:
$P(Z = k \mid x_i) = \frac{P(Z = k) \cdot P(x_i \mid Z = k)}{P(x_i)} = \frac{\pi_k \cdot P(x_i \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j P(x_i \mid \mu_j, \Sigma_j)}$
where $P(Z = k)$ denotes the prior probability, equal to the weight $\pi_k$ of the $k$-th Gaussian distribution; $P(x_i \mid Z = k)$ is the likelihood of the sample image patch $x_i$ given the hidden variable label, representing the Gaussian distribution to which the sample currently belongs; and $P(x_i)$ is the normalizing evidence factor, the sum of the joint probability densities over all $K$ clusters.
Since maximum likelihood estimation (MLE) does not yield an analytic solution in high-dimensional space, Equation (10) can be solved by taking expectations over the dependent variables $P(Z = k)$, $\mu_k$, $\Sigma_k$, and $\pi_k$ and maximizing the log-likelihood of the samples $x_i$ (Equation (11)) until convergence:
$L(x_i) = \sum_{i=1}^{j} \ln \sum_{k=1}^{K} \pi_k\, p(x_i \mid \mu_k, \Sigma_k)$
The parameters are then updated using the expectation-maximization (EM) iterative algorithm. The weight of each Gaussian distribution is updated according to Equation (12):
$\pi_k^{t+1} = \frac{\sum_{i=1}^{j} p^t(Z = k \mid x_i)}{j}$
where the weights of the initialized Gaussian distributions are all equal to $1/K$ and are updated to $\pi_k^{t+1}$ by averaging the posteriors of the sample components. Second, the sample means $\mu_k$ are initialized; unlike the conventional random initialization of $\mu_k$, we use the unsupervised hard-clustering Kmeans++ algorithm [41] to adaptively select the sample cluster centers, using the Euclidean distance to maximize the distance between cluster centers while minimizing the distance from each sample $x_i$ to the center $\mu_k$ within its cluster (see Equation (13)). Figure 6 presents the 30th-band pseudo-color image of KSC (see Section 4.1) and compares Kmeans clustering, Kmeans++ clustering, and Kmeans++-guided GMM clustering. When $K < 3$, algorithms (a) and (b) have low clustering accuracy; when $K \ge 3$, they suffer from clustering confusion. Using Kmeans++ to preprocess the parameter $\mu_k$ and guide the GMM clustering improves the result significantly, effectively improving the recognition of image edge texture information and avoiding clustering confusion.
$D(x_i) = \arg\min_{\mu_k} \|x_i - \mu_k\|_2^2, \quad i = 1, 2, \ldots, j$
Finally, the cluster centers $\mu_k$ obtained from the preprocessing are brought into the EM iterative algorithm to solve Equation (14). The analytical solution, Equation (15), is further obtained by taking the partial derivative with respect to $\Sigma_k$ in the maximum log-likelihood:
$\mu_k^{t+1} = \frac{\sum_{i=1}^{j} p^t(Z = k \mid x_i)\, x_i}{\sum_{i=1}^{j} p^t(Z = k \mid x_i)}$
$\Sigma_k^{t+1} = \frac{\sum_{i=1}^{j} (x_i - \mu_k)(x_i - \mu_k)^T\, p^t(Z = k \mid x_i)}{\sum_{i=1}^{j} p^t(Z = k \mid x_i)}$
In this way, the parameters $\mu_k$, $\Sigma_k$, and $\pi_k$ are continuously refined in the EM algorithm until the maximum number of iterations is reached or the log-likelihood function $L(x_i)$ converges, and finally all samples are divided into $K$ clusters.
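As a concrete illustration of this clustering stage, the following is a minimal sketch in which Kmeans++ supplies the initial means and EM refines the GMM parameters, as in Equations (11)-(15). scikit-learn's GaussianMixture (whose 'k-means++' initialization requires scikit-learn 1.1 or later) is used here as a stand-in for the paper's own EM implementation; the patch size follows Section 4.2, while the sliding step and the function name are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_patches(plane, patch=7, step=1, K=10, seed=0):
    """Cluster flattened overlapping patches of a joint spectral plane into K groups."""
    H, W = plane.shape
    patches = np.array([plane[y:y + patch, x:x + patch].ravel()
                        for y in range(0, H - patch + 1, step)
                        for x in range(0, W - patch + 1, step)])
    # Kmeans++ seeds the means; EM then refines weights, means, and covariances.
    gmm = GaussianMixture(n_components=K, covariance_type='full',
                          init_params='k-means++', max_iter=100, random_state=seed)
    labels = gmm.fit_predict(patches)    # hard labels: argmax_k P(Z = k | x_i)
    return patches, labels
```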

3.3. Low-Rank Sparse Representation of Clustered Image Patches

Based on the preceding clustering of similar patches in the joint spectral dimensional plane, a low-rank sparse approximation is used to reduce the dimensionality of the data and obtain the main features of the image. After GMM clustering, the similar patches in each of the $K$ clusters are vectorized by column stacking, as shown in Figure 7; each datum is a column vector, and the vectors are subsequently collected into matrices $N x_i = \{N_1 x_i, N_2 x_i, \ldots, N_K x_i\}$. On this basis, the collocation matrix $N_k x_i$ to be processed is decomposed by the matrix SVD into a low-rank matrix $Q_k$ and a sparse redundancy matrix $V_k$:
$N_k x_i = Q_k + V_k$
According to Equation (10), we extract all the clustered similar image patches in each of the $K$ clusters with maximized probability density values and convert the Bayesian maximum a posteriori clustering problem for the $N_k$ image patches $x_i$ into the following constrained problem of minimizing a low-rank sparse energy function $E(\hat{Q})$:
$E(\hat{Q}) = \arg\min_{Q_k} \sum_{k=1}^{K} \|N_k x_i - Q_k\|_F^2 + \tau \|Q_k\|_*$
where $\tau$ denotes the selected threshold percentage, an adjustable hyperparameter that controls the sparsity of the eigenvectors of the matrix $Q_k$. To obtain the low-rank matrix $Q_k$, an eigenvalue decomposition of the covariance matrix of the clustered sample data $N_k x_i$ is performed via the SVD, where $U$ denotes the left singular matrix whose rows are the eigenvectors $\{u_k\}$, $\Sigma = \mathrm{diag}(\alpha_1, \alpha_2, \ldots, \alpha_j)$ is the diagonal matrix, which in general contains the eigenvalues of the singular vectors ordered from highest to lowest energy, and $V^T$ is the right singular matrix whose columns are the eigenvectors $\{v_k\}$, as in Equation (18):
$C = \frac{1}{m} N_k x_i (N_k x_i)^T = \frac{1}{m} U \Sigma V^T V \Sigma U^T = \frac{1}{m} U \Sigma^2 U^T$
where $m$ denotes the number of similar patches in each cluster. The eigenvectors of the covariance matrix $C$ coincide with those of the left singular matrix $U$. Thus Equation (18) shows that the eigenvalues of the covariance matrix can be used for a low-rank approximation of the similar-patch sample matrix. The higher the structural relevance within an image patch, the more the energy is concentrated in the first few features. By setting a threshold to discard the low-energy eigenvalues in the diagonal matrix $\Sigma$, together with the corresponding row eigenvectors $\{u_k\}$ and column eigenvectors $\{v_k\}$, and combining Equations (11) and (17), the GMM can be used on each joint spectral dimensional plane to learn the clustered similar image patches while guiding the corresponding cluster's similar patches into the low-rank subspace optimization, which removes redundant information. The low-rank approximate sparse representation model of the joint spectral dimensional plane is thus transformed into an optimization problem with dual regularization constraints, GMM-guided clustering and the low-rank constraint:
$Y(\hat{x}_i, Z, \{\hat{Q}_k\}) = \arg\min_{x_i, Z, Q_k} \lambda \|Y - x_i\|_2^2 - \underbrace{L(z \mid x_i, \mu_k, \Sigma_k)}_{\sum_{i=1}^{j} \ln \sum_{k=1}^{K} \pi_k\, p(N_k x_i, Z \mid \mu_k, \Sigma_k)} + \sum_{k=1}^{K} \left(\|N_k x_i - Q_k\|_F^2 + \|Q_k\|_*\right)$
where $\lambda$ is a positive parameter, chosen as 0.18 in the experiments of this paper; by the log-likelihood formula, the second term of the equation carries a negative sign. The exact calculation procedure for Equation (19) can be found in the literature [42].
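For illustration, the following is a minimal sketch of the per-cluster low-rank step of Equations (16)-(18): the matrix of one cluster's vectorized patches is SVD-decomposed and the lowest-energy singular values are discarded. The 90% energy level follows Section 4.2; treating the threshold as a cumulative energy fraction is our assumption.

```python
import numpy as np

def low_rank_approx(Nk, tau=0.9):
    """Nk: (patch_dim, m) matrix of one cluster's patches; returns low-rank Q_k."""
    U, s, Vt = np.linalg.svd(Nk, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)      # cumulative singular-value energy
    rank = int(np.searchsorted(energy, tau)) + 1     # smallest rank keeping tau of energy
    # Reassemble the low-rank part; the residual N_k - Q_k is the redundancy V_k.
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
```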

3.4. Model Expression and Numerical Computation

3.4.1. Model Representation

The hyperspectral image is divided into $S_g$ band groups based on spectral correlation. For a band group $\{X_{m \times n \times q}^{(z)}\}$ ($z \in \{1, 2, \ldots, S_g\}$), where $q$ denotes the number of bands in each group, the "joint spectral dimensional plane" corresponding to band group $X_{m \times n \times q}^{(z)}$ can be obtained according to Equation (7), denoted $\{x_{S_n\_j}^{r}\}^{(z)}$, where $S_n\_j$ indexes the consecutively selected spectral dimensional planes; a total of $S_g$ sets of experiments are required.
1.
Spatial domain regularity constraints
The orthogonal transform $T_{3D}$ is performed after stacking the overall non-local 3D similar patches $E_v^{3D}$ of all band groups in the spatial dimension, as described in Section 2.2: a 2D wavelet transform is applied to each 2D image patch in the spatial dimension, followed by a 1D DCT transform between similar patches (see the colored part of Figure 7), assuming that the two classes of similar patches are of the same size, and the sparsity coefficients of the image patches in the transform domain are calculated. The sparse coefficients obtained from the transform domain express the sparse prior of the overall non-local correlated patches of the multi-band group [24]; a sketch of this transform is given after this list:
$f_1\left(X_{m \times n \times q}^{(z)}\right) \triangleq \left\|\Psi_{NLoc\_Spa\_Bloc}\left(X_{m \times n \times q}^{(z)}\right)\right\|_0 = \frac{\sum_{v=1}^{(m-i)(n-j)/S_k} \left\|T_{3D}\left(E_v^{3D(z)}\right)\right\|_0}{(m-i) \times (n-j) \times i \times j \times q}$
where $S_k$ denotes the number of patches closest to the selected reference patch in the spatial dimension according to the Euclidean distance, $i$ and $j$ denote the size of the reference patch ($i = j = 8$ in the experiments), and the denominator $(m-i) \times (n-j) \times i \times j \times q$ is the normalization factor. We use $f_1(X_{m \times n \times q}^{(z)})$ as the spatial domain regularization term of the proposed algorithm. A detailed description and solution of the spatial non-local correlation sparsity for HSI band groups can be found in our previous work [23].
2.
Spectral domain regularity constraints
From Figure 4, it can be seen that the texture features on the joint spectral dimensional plane are more obvious and the global distribution of similar patches has a certain regularity. Assuming that the parameters of each Gaussian distribution are learned adaptively by the EM algorithm, the image patches on the joint spectral dimensional plane $\{x_{S_n\_j}^{r}\}^{(z)}$ are divided into $K$ clusters by the Kmeans++ and GMM algorithms, and the eigenvalues are then solved by the low-rank SVD:
$f_2\left(x_{S_n\_j}^{r(z)}\right) \triangleq \left\|\Psi_{joi\_spa\_Bloc}\left(x_{m \times \frac{S_n}{q} \times \frac{n}{S_n}}^{(r)(z)}\right)\right\|_2 = \frac{\sum_{k=1}^{K} \left\|\hat{N}_k^{(z)} \hat{x}_i^{(z)}\right\|_2}{p}$
where the denominator $p$ is the normalization factor and $\hat{N}_k^{(z)} \hat{x}_i^{(z)}$ is the stitched image after low-rank clustering of each joint spectral dimensional plane in the multiple sets of experiments. We use $f_2(x_{S_n\_j}^{r})^{(z)}$ as the spectral domain regularization term of the reconstruction algorithm.
3.
The final form of the model
Based on the regularization constraint Equations (20) and (21) in the above two dimensions, the proposed joint spatial-spectral domain low-rank sparse representation HSI reconstruction model (LRCoSM) is expressed as:
$LRCoSM\left(X_{m \times n \times q}^{(z)}\right) = C_1^{(z)} f_1\left(X_{m \times n \times q}^{(z)}\right) + C_2^{(z)} f_2\left(x_{S_n\_j}^{(z)}\right)$
with $f_1$ and $f_2$ as defined in Equations (20) and (21). Here $C_1^{(z)}$ and $C_2^{(z)}$ are non-negative weighting coefficients that balance the contributions of the spatial domain and spectral domain regularization terms, satisfying $C_1^{(z)} + C_2^{(z)} = 1$. They are chosen as $(C_1^{(1)}, C_2^{(1)}) = (0.50, 0.50)$ in the experiments of this paper, and for $z \ge 2$ the weighting coefficients $C_1^{(z)}$ and $C_2^{(z)}$ of the current band group are determined according to Equation (23):
$C_1^{(z)} = \frac{f_1\left(X_{m \times n \times q}^{(z-1)}\right)}{f_1\left(X_{m \times n \times q}^{(z-1)}\right) + f_2\left(x_{S_n\_j}^{(z-1)}\right)}, \qquad C_2^{(z)} = \frac{f_2\left(x_{S_n\_j}^{(z-1)}\right)}{f_1\left(X_{m \times n \times q}^{(z-1)}\right) + f_2\left(x_{S_n\_j}^{(z-1)}\right)}$
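As referenced in the spatial domain constraint item above, the following is a minimal sketch of the $T_{3D}$ transform of Equation (20): a 2D wavelet transform of each stacked similar patch, a 1D DCT across the patch stack, and the $\ell_0$ count of the resulting coefficients as the sparsity measure. The 'haar' wavelet and the hard-threshold value eps are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from scipy.fft import dct

def wavelet_vec(p):
    """Flatten the 2D wavelet coefficients (approximation + details) of one patch."""
    cA, (cH, cV, cD) = pywt.dwt2(p, 'haar')
    return np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()])

def t3d_sparsity(stack, eps=1e-3):
    """stack: (S_k, i, j) array of similar patches; returns the l0 coefficient count."""
    coeffs_2d = np.array([wavelet_vec(p) for p in stack])  # 2D wavelet per patch
    coeffs_3d = dct(coeffs_2d, axis=0, norm='ortho')       # 1D DCT across patches
    return int(np.sum(np.abs(coeffs_3d) > eps))            # l0 count of T_3D coefficients
```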

3.4.2. Numerical Calculation and Algorithm Implementation of the Model

The spatially filtered non-local similar patches $E_v^{3D}$ and the joint-spectral low-rank clustered patches $\hat{N}_k \hat{x}_i$ are in turn expanded, left to right and top to bottom according to their indices, into the sparsity-constrained image $\hat{X}^{(z)}$; the colored part of Figure 7 shows the special case where the two classes of similar patches have the same size, $(i^{(z)} \times j^{(z)}) \times (N^{(z)} E_v^{3D} + N_k^{(z)} x_i^{(z)})$. The reconstruction is then transformed into the optimization of the following linear regression objective:
$\min_{\hat{X}^{(z)}, U_1^{(z)}} \frac{1}{2}\left\|A - \Phi \hat{X}^{(z)}\right\|_2^2 + \varepsilon \left[C_1^{(z)} f_1\left(U_1^{(z)}\right) + C_2^{(z)} f_2\left(U_2^{(z)}\right)\right] \quad \mathrm{s.t.} \quad \hat{X}^{(z)} = \hat{U}^{(z)}$
where $\Phi$ is the observation matrix and $A$ is the sampled data, each column of which corresponds to the measurement of the corresponding band image; the reconstructed image is obtained by the inverse compressed sensing transform of the sparsity-constrained image. $\varepsilon$ is a non-negative hyperparameter, and the intermediate variable $U_1^{(z)}$ is introduced such that $U_1^{(z)} = X_{m \times n \times q}^{(z)}$, while the corresponding $\{x_{S_n\_j}^{r}\}^{(z)}$ is denoted $U_2^{(z)}$ and $\hat{X}^{(z)}$ is denoted $\hat{U}^{(z)}$. Equation (24) is then transformed by the SBI algorithm into the following three iterative subproblems:
$\hat{X}^{(z)(t+1)} = \arg\min_{\hat{X}^{(z)}} \frac{1}{2}\left\|A - \Phi \hat{X}^{(z)}\right\|_2^2 + \frac{\delta}{2}\left\|\hat{X}^{(z)} - \hat{U}^{(z)} - \hat{B}\right\|_2^2$
$\hat{U}^{(z)(t+1)} = \arg\min_{\hat{U}^{(z)}} \frac{\zeta}{2}\left\|\hat{X}^{(z)(t+1)} - \hat{U}^{(z)} - \hat{B}^{(t)}\right\|_2^2 + \lambda\left[C_1^{(z)} f_1\left(U_1^{(z)}\right) + C_2^{(z)} f_2\left(U_2^{(z)}\right)\right]$
$\hat{B}^{(t+1)} = \hat{B}^{(t)} - \left(\hat{X}^{(z)(t+1)} - \hat{U}^{(z)(t+1)}\right)$
where δ and ζ are positive parameters, which are chosen to be 0.025 and 0.05 in the experiments of this paper. The specific calculation procedure of Equation (25) can be found in the literature [43].
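To make the iteration structure of Equation (25) concrete, the following is a minimal sketch of the SBI outer loop: a closed-form least-squares X-step, a U-step whose proximal operator is here a soft-thresholding placeholder standing in for the $f_1$/$f_2$ regularizers (not the paper's actual sub-solver), and the Bregman variable update. The values of $\delta$, $\zeta$, and $\lambda$ follow the paper; everything else is an assumption.

```python
import numpy as np

def sbi_reconstruct(A, Phi, delta=0.025, zeta=0.05, lam=0.18, n_iter=100):
    """Recover x from measurements A = Phi @ x with a split Bregman outer loop."""
    n = Phi.shape[1]
    x = Phi.T @ A                          # initial back-projection
    u = x.copy()
    b = np.zeros(n)
    M = Phi.T @ Phi + delta * np.eye(n)    # normal-equations matrix of the X-step
    for _ in range(n_iter):
        # X-step: quadratic subproblem, solved in closed form.
        x = np.linalg.solve(M, Phi.T @ A + delta * (u + b))
        # U-step: placeholder proximal operator in place of f_1/f_2.
        v = x - b
        u = np.sign(v) * np.maximum(np.abs(v) - lam / zeta, 0.0)
        # B-step: Bregman variable update.
        b = b - (x - u)
    return x
```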
Based on the previous discussion, the proposed HSI reconstruction steps are shown in Algorithm 1.
Algorithm 1: Joint spatial-spectral domain low-rank sparse representation HSI reconstruction model (LRCoSM)
Input: HSI image $X$ of original size $m \times n \times q$, sampling rate $r$, number of distribution clusters $K$.
Output: HSI reconstructed image $\hat{X}$ of size $m \times n \times q$ after removal of redundant information.
Initialization: means $\mu_i^0$ obtained by the Kmeans++ algorithm, random initialization of the covariance estimates $\Sigma_i^0$, and prior probability weights $\pi_k = 1/K$.
Step 1: for $i = 1 : iters$ do
    for $j = 1 : q$ do
        Compute the regularization term $f_1(X_{m \times n \times q})$ on the spatial dimensional plane via Equation (20).
    end
Step 2: for $j = 1 : q$ do
    for $e = 1 : S_n$ do
        Obtain the joint spectral dimensional plane $\{x_{S_n\_j}^{r}\}$ via Equation (7).
        repeat
            Step 3 (E-step): update $p(x_i)$ and evaluate the current parameter likelihood via Equations (8), (9), and (11).
            Step 4 (M-step): update $\pi_k^{t+1}$ via Equation (12), $\mu_k^{t+1}$ via Equation (14), and $\Sigma_k^{t+1}$ via Equation (15).
        until the parameters no longer change or the maximum number of iterations is reached.
        Cluster the sample points into $K$ classes via Equation (10).
    end
end
Step 5: Apply the low-rank sparse solution to the clustered similar patches via Equations (17) and (19);
    for $j = 1 : q$ do
        for $e = 1 : S_n$ do
            Put back the joint spectral dimensional plane $\{x_{S_n\_j}^{r}\}$.
        end
    end
Step 6: Using Equations (20) and (21) as regularization terms, obtain the reconstruction result via Equation (22).
end

4. Experimental Results

4.1. Datasets

We selected four types of hyperspectral images, shown in Figure 8, as test images: the Kennedy Space Center (KSC), Florida, USA [44]; Pavia Center (PA), Italy [45]; Cooke City, MT (CoC), USA [46]; and the Okavango Delta, Botswana (Bot) [47].
(1)
KSC dataset: taken by the NASA AVIRIS sensor. Its spatial area contains 512 × 614 pixels with obvious feature information such as buildings, bridges, and coastal zones. The spectral range is 400–2500 nm; after removing bands with a low signal-to-noise ratio and absorption noise, 176 bands remain for the experiment.
(2)
Pavia Center dataset: taken through the ROSIS spectrometer, the original image spatial size is 1096 × 715 pixels. It has a spectral range of 430–840 nm and contains mainly ground-truth samples of buildings, bare soil, meadows, and asphalt in 102 bands.
(3)
The Cooke City dataset: captured by the HyMap airborne hyperspectral imaging sensor, containing 280 × 800 pixels in space, capturing mainly mountains, houses, and vehicles, with a spectral range of 450–2480 nm and 126 effective bands.
(4)
Botswana dataset: captured by the Hyperion spectrometer, the original remote sensing image size is 1476 × 256 pixels, recording a variety of vegetation and water, etc. The spectral range is 400–2500 nm with a total of 145 spectral bands.

4.2. Parameters Settings

The comparison experiments were conducted on a computer with an Intel(R) Core(TM) i7-8750H CPU at 2.21 GHz and 16 GB RAM, using the 64-bit Windows 10 operating system and MATLAB R2019b simulation software; the comparison algorithm ISTA-Net was tested using Python 3.7 and the TensorFlow 2.1 framework, with an NVIDIA GTX1060 GPU for training.
(1)
Description of image patch size and band selection: in this experiment, in order to construct a joint spectral dimensional dataset $\{x_{S_n\_j}^{r}\}$ with equal numbers of rows and columns ($m = n$), all four types of hyperspectral images were cropped to the 256 × 256 pixel region in the upper-left corner as the experimental dataset, and 8 bands were selected for the experiment: bands 30–37 for KSC, bands 85–92 for Pavia Center, bands 50–57 for Cooke City, and bands 115–122 for Botswana, which also implies $N = 32$ as in Section 3.1. To save computational space, the hyperspectral images are divided into multiple patches for processing, and the spatial domain is searched for image patches with a sliding window of size 8 × 8, giving a total of 62,001 samples in a single band; to increase the number of clustered samples and improve accuracy, the joint spectral domain patch size is set to 7 × 7, with a total of 256,036 samples in a single joint spectral dimensional plane.
(2)
Key experimental parameters: before sampling, we randomly initialize the Gaussian measurement matrix $\Phi$, and the category of each sample is set to the component with the maximum PDF value, with $K = 10$. When performing the low-rank SVD decomposition, the threshold is chosen to retain 90% of the main diagonal eigenvalues, $D_\tau(\Sigma) = \mathrm{diag}(\{\alpha - 0.1\alpha\})$, and the SVT algorithm [48] is called to solve it, with the number of iterations $iter = 100$.

4.3. Numerical Statistics and Visualization

In order to compare and evaluate the compressed reconstructed images, three evaluation metrics are used in this section: the peak signal-to-noise ratio (PSNR) [49], the feature similarity index measure (FSIM) [50], and the spectral angle mapping (SAM) [51], given by Equations (26)–(28). We also record the running time of each algorithm, in seconds (s) by default, with the letter h denoting hours.
$PSNR = 10 \lg \frac{255 \times 255}{\frac{1}{mn} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[x(i,j) - \hat{x}(i,j)\right]^2}$
The denominator of Equation (26) represents the normalized difference between the original image and the reconstructed image, i.e., their mean squared error. In general, the higher the PSNR value, the closer the reconstructed image is to the original image and the better the reconstruction. The feature similarity index (FSIM), a quality evaluation based on feature similarity, is an authoritative measure of reconstruction effectiveness that computes the local similarity between the original image and the reconstructed image and expresses it numerically. It is defined in Equation (27):
$FSIM = \frac{\sum_{x \in \Omega} S_L(x(i,j)) \cdot PC_m(x(i,j))}{\sum_{x \in \Omega} PC_m(x(i,j))}$
where $\Omega$ represents the range of pixel values of the whole image, $x(i,j)$ represents the coordinates of the pixel to be measured, $PC_m$ represents the multiple phase congruency feature extraction, and $S_L$ represents the power-exponent coupling of gradient magnitude (GM) feature extraction and phase congruency extraction. FSIM takes values in the range (0,1); the closer the value is to 1, the higher the quality of the reconstructed image.
$SAM = \frac{1}{mn} \sum_{j=1}^{n} \sum_{i=1}^{m} \cos^{-1} \frac{\sum_{\alpha=1}^{q} I(i,j,\alpha)\, \hat{I}(i,j,\alpha)}{\sqrt{\sum_{\alpha=1}^{q} I(i,j,\alpha)^2} \sqrt{\sum_{\alpha=1}^{q} \hat{I}(i,j,\alpha)^2}}$
where $I(i,j,\alpha)$ represents a pixel in the $\alpha$-th band of the hyperspectral remote sensing image $X_{m \times n \times q}$ and $\hat{I}(i,j,\alpha)$ represents the pixel at the same position in the reconstructed image. The spectral angle takes values in $[0°, 90°]$; the smaller the spectral angle, the smaller the difference between the two spectral curves, indicating that the reconstructed image is closer to the original image and the reconstruction quality is higher.
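For reference, the following is a minimal sketch of the PSNR and SAM metrics of Equations (26) and (28) for 8-bit HSI cubes of shape (m, n, q); FSIM is omitted because its phase-congruency features require a dedicated implementation.

```python
import numpy as np

def psnr(x, x_hat):
    """Peak signal-to-noise ratio in dB between original and reconstructed images."""
    mse = np.mean((x.astype(float) - x_hat.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def sam(x, x_hat, eps=1e-12):
    """Mean spectral angle (degrees) between per-pixel spectra of two HSI cubes."""
    x, x_hat = x.astype(float), x_hat.astype(float)
    dot = np.sum(x * x_hat, axis=2)                  # per-pixel spectral dot product
    norms = np.linalg.norm(x, axis=2) * np.linalg.norm(x_hat, axis=2) + eps
    angles = np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0)))
    return float(np.mean(angles))
```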
The proposed algorithm was compared with six classical compressed sensing reconstruction algorithms: the improved SLF_GPSR [52], RCoS [43], HICoSM [22], TR [53], ISTA_Net [35], and TV [54]. Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16 show visualizations and numerical line charts for the various datasets at different sampling rates. In Figure 9, Figure 11, Figure 13, and Figure 15, we show one of the reconstructed images for each of the four datasets at sampling rates of 0.2, 0.3, and 0.4. The last column of each set is the ground truth (GT) for that band; the residual images between the reconstructed and original images are also given, plotted as rainbow plots where colors closer to blue mean smaller reconstruction error and colors closer to red mean larger error. In Figure 10, Figure 12, Figure 14 and Figure 16, we present line charts of the PSNR values of the multi-band comparison algorithms for the four datasets at different sampling rates.
In terms of the objective PSNR values of the reconstructed band images, the algorithm in this paper improves by approximately 10.59 dB, 4.57 dB, 4.76 dB, 16.35 dB, 6.51 dB, and 6.26 dB on average over the SLF_GPSR, RCoS, HiCoSM, TR, ISTA_Net, and TV algorithms at a 0.2 sampling rate. The average improvement over the comparison algorithms at a 0.3 sampling rate is approximately 10.42 dB, 5.37 dB, 5.38 dB, 15.07 dB, 5.44 dB, and 6.23 dB, with the PSNR values for each band shown in Table 2. The average improvement over the comparison algorithms at a 0.4 sampling rate is approximately 9.20 dB, 5.50 dB, 5.48 dB, 9.89 dB, 4.29 dB, and 6.46 dB. Table 3 shows the FSIM statistics for KSC at different sampling rates. The results are all greater than those of the other algorithms, and the spectral angle (SAM) results remain close to small values, with the LRCoSM algorithm reconstructing best.
With the PA dataset, the algorithm in this paper improves by approximately 16.57 dB, 8.77 dB, 8.90 dB, 14.71 dB, 12.00 dB, and 11.91 dB on average over the SLF_GPSR, RCoS, HiCoSM, TR, ISTA_Net, and TV algorithms at a sampling rate of 0.2. At a 0.3 sampling rate, the average improvement over the above algorithms is about 19.05 dB, 10.75 dB, 10.95 dB, 14.31 dB, 10.31 dB, and 13.77 dB; at a 0.4 sampling rate, about 19.07 dB, 12.56 dB, 12.89 dB, 8.71 dB, 9.55 dB, and 15.38 dB. The specific PSNR values for each band are shown in Table 4. Table 5 shows the FSIM statistics of the Pavia Center at different sampling rates. The results are all greater than those of the other algorithms, the SAM metric tends toward its minimum, and the LRCoSM algorithm reconstructs best.
Under The Cooke City dataset, the algorithm in this paper improves by approximately 20.70 dB, 9.63 dB, 9.56 dB, 11.02 dB, 14.39 dB, and 16.31 dB on average over the SLF_GPSR, RCoS, HiCoSM, TR, ISTA_Net, and TV algorithms at a 0.2 sampling rate, and the PSNR values of each band are shown in Table 6. At a 0.3 sampling rate, the average improvement over the above algorithms is approximately 21.45 dB, 10.80 dB, 10.56 dB, 7.28 dB, 11.19 dB, and 15.87 dB; at a 0.4 sampling rate, approximately 18.98 dB, 11.70 dB, 11.65 dB, 6.43 dB, 8.79 dB, and 17.18 dB. Table 7 shows the FSIM statistics for CoC at different sampling rates. The results are greater than those of the other algorithms, the SAM metric shows better retention of the spectral angle, and the LRCoSM algorithm reconstructs best.
With the Botswana dataset, the algorithm in this paper improves on average by approximately 11.41 dB, 3.41 dB, 3.43 dB, 2.87 dB, 5.75 dB, and 7.48 dB over the SLF_GPSR, RCoS, HiCoSM, TR, ISTA_Net, and TV algorithms at a sampling rate of 0.2. The average improvement over the above algorithms is about 9.29 dB, 3.81 dB, 3.76 dB, 0.65 dB, 3.94 dB, and 5.98 dB at a 0.3 sampling rate, and about 7.86 dB, 4.09 dB, 4.03 dB, 0.42 dB, 3.2 dB, and 6.62 dB at a 0.4 sampling rate; the PSNR values of each band are shown in Table 8. Table 9 shows the FSIM statistics for Bot at different sampling rates. The results are greater than those of the other algorithms, the SAM metric shows better retention of the spectral angle, and the LRCoSM algorithm reconstructs best.

4.4. Discussions and Analysis

After the numerical experiments and visualization analysis, it can be seen that the SLF_GPSR and TV algorithms are convex-optimization sparse reconstruction methods. Although they effectively reduce the computational effort, they are approximate solution methods, resulting in poor reconstruction quality, and are prone to motion artifacts while falling into locally optimal solutions. The HiCoSM algorithm is an improvement on the RCoS algorithm, introducing a predictive sparsity measure of spectral correlation on top of RCoS; however, it risks reconstruction prediction errors between long-range bands in the multi-band case and is computationally expensive. The TRLRF tensor ring decomposition algorithm decomposes the high-order, large-scale 3D matrix of a hyperspectral remote sensing image into multiple small-scale tensor factor products and imposes low-rank constraints on each factor; however, it requires a large amount of hyperparameter prior knowledge and does not generalize well.
Recent deep learning-based compressed sensing reconstruction algorithms, such as the typical ISTA-Net and OPINE-Net [55], obtain a convergent loss function for image reconstruction by training a symmetric deep neural network architecture such as a CNN with weights w, biases b, and an observation matrix Φ at different sampling rates. This type of algorithm has three shortcomings. First, it places high demands on hardware such as GPUs, and training takes more than 10 h. Second, it relies excessively on the categories and quantity of the original data: the tasks currently addressed by such algorithms are baseline problems on open-source datasets, and handling specific problems in specific domains is more difficult. Third, each sampling rate corresponds to an adaptively learned observation matrix Φ; other sampling rates require re-learning, and the training samples are fed into the network together with the encapsulated Φ. This process resembles a black box, with weak generalization and interpretability.
The proposed LRCoSM algorithm significantly improves the reconstruction results at different sampling rates and is strongly robust. In particular, it can construct a joint spectral dimensional plane and thus achieve a better sparse representation, especially when the number of bands is extremely limited. In this paper, building on our previous study [23], we innovatively proposed the joint spectral domain, broke the limitation of weak similarity between long-range spectral-curve image patches noted in [24], verified the existence of non-local similarity on the joint spectral plane, and combined the SBI optimization algorithm [43] to obtain a more adequate sparse prior for HSI. It should be noted that the a priori hyperparameter, the cluster number K, needs to be set manually with reference to the ground truth. Figure 17 shows scatter plots of the dimension-reduced clustering of the first group of KSC joint spectral dimensional planes. Different pixel colors represent image patches belonging to different clusters, and the horizontal and vertical coordinates represent the mapping to a fixed two-dimensional boundary range, which facilitates observing the looseness and accuracy of the cluster contours. Observing the purple sample cluster: when K = 5, the distribution of clustered samples is rather discrete; when K = 20, too many cluster labels lead to excessive aggregation, clustering confusion appears, and the computation cost is huge. The selection of K can refer to the ground-truth category labels of the image.
For datasets without ground truth, we experimented with different hyperparameter cluster numbers K and different low-rank principal diagonal thresholds using the proposed algorithm. Table 10 shows that the different datasets in this paper can be effectively reconstructed by the LRCoSM model under different cluster numbers K, but K = 10 achieves the best reconstruction for all dataset types: the reconstruction improves only when K is closest to the true ground-truth clustering. Too small a K gives insufficient clustering, while too large a K brings the obvious disadvantages of over-clustering and high time cost. When no ground truth is available, we suggest selecting a slightly larger estimated K and tuning downward. At the same time, Table 11 shows that the choice of low-rank principal diagonal threshold also affects the reconstruction of different datasets; with the other environmental variables fixed, the magnitude of the threshold is positively related to the reconstruction quality. Therefore, how to find the cluster number K closest to the actual ground truth, and how to choose an adaptive low-rank main-diagonal threshold scheme that removes more redundant information while maintaining high-quality reconstruction and improving the reconstruction of land-water boundaries and object edges across datasets, is the focus of our future research.

5. Conclusions

In this paper, a low-rank constrained sparse reconstruction model (LRCoSM) is proposed for the compressive reconstruction of HSI at low sampling rates. For the first time, a sparse measurement and representation of the spectral domain in the joint spectral dimensional plane is proposed, which effectively overcomes the impact of a restricted number of spectral bands on the reconstruction quality of HSI. On this basis, the existence of non-local correlation in the joint spectral dimension is verified, and a GMM adaptive unsupervised learning mechanism is proposed to guide image patch clustering, which expands the search range of non-local similar patches and improves the effectiveness of the low-rank sparse regular constraints formed after clustering. Experimental results on a large number of datasets at different sampling rates show that the proposed LRCoSM algorithm achieves better reconstruction quality and stronger generalization ability than currently popular and classical algorithms.

Author Contributions

Formal analysis, investigation, and writing—original draft preparation, S.X.; validation, data curation, S.W.; analysis and guidance, C.S.; conceptualization and methodology, X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 41971388) and the Innovation Team Support Program of Liaoning Higher Education Department (Grant No. LT2017013).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the editors and the reviewers for their valuable suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bacca, J.; Correa, C.V.; Arguello, H. Noniterative hyperspectral image reconstruction from compressive fused measurements. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1231–1239.
2. Wang, L.; Xiong, Z.; Huang, H.; Shi, G.; Wu, F.; Zeng, W. High-speed hyperspectral video acquisition by combining Nyquist and compressive sampling. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 857–870.
3. Donoho, D. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
4. Candes, E.; Wakin, M. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
5. Erkoc, M.E.; Karaboga, N. Multi-objective Sparse Signal Reconstruction in Compressed Sensing. In Nature-Inspired Metaheuristic Algorithms for Engineering Optimization Applications; Carbas, S., Toktas, A., Ustun, D., Eds.; Springer: Singapore, 2021; pp. 373–396.
6. Li, Y.; Dong, W.; Xie, X.; Shi, G.; Li, X.; Xu, D. Learning parametric sparse models for image super-resolution. Adv. Neural Inf. Process. Syst. 2016, 29, 4664–4672.
7. Chen, T.; Su, X.; Li, H.; Li, S.; Liu, J.; Zhang, G.; Feng, X.; Wang, S.; Liu, X.; Wang, Y.; et al. Learning a Fully Connected U-Net for Spectrum Reconstruction of Fourier Transform Imaging Spectrometers. Remote Sens. 2022, 14, 900.
8. Luo, H.; Zhang, N.; Wang, Y. Modified Smoothed Projected Landweber Algorithm for Adaptive Block Compressed Sensing Image Reconstruction. In Proceedings of the 2018 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–17 July 2018; pp. 430–434.
9. Matin, A.; Dai, B.; Huang, Y.; Wang, X. Ultrafast Imaging with Optical Encoding and Compressive Sensing. J. Light. Technol. 2018, 37, 761–768.
10. Kawakami, R.; Matsushita, Y.; Wright, J.; Ben-Ezra, M.; Tai, Y.; Ikeuchi, K. High-resolution hyperspectral imaging via matrix factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2329–2336.
11. Zhang, X.; Jiang, X.; Jiang, J.; Zhang, Y.; Liu, X.; Cai, Z. Spectral-Spatial and Superpixelwise PCA for Unsupervised Feature Extraction of Hyperspectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10.
12. Xu, P.; Chen, B.; Xue, L.; Zhang, J.; Zhu, L. A prediction-based spatial-spectral adaptive hyperspectral compressive sensing algorithm. Sensors 2018, 18, 3289.
13. Azimpour, P.; Bahraini, T.; Yazdi, H. Hyperspectral Image Denoising via Clustering-Based Latent Variable in Variational Bayesian Framework. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3266–3276.
14. Qu, J.; Du, Q.; Li, Y.; Tian, L.; Xia, H. Anomaly detection in hyperspectral imagery based on Gaussian mixture model. IEEE Trans. Geosci. Remote Sens. 2020, 59, 9504–9517.
15. Ma, Y.; Jin, Q.; Mei, X.; Dai, X.; Fan, F.; Li, H.; Huang, J. Hyperspectral unmixing with Gaussian mixture model and low-rank representation. Remote Sens. 2019, 11, 911.
16. Yin, J.; Sun, J.; Jia, X. Sparse analysis based on generalized Gaussian model for spectrum recovery with compressed sensing theory. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 8, 2752–2759.
17. Wang, Z.; He, M.; Ye, Z.; Nian, Y.; Qiao, L.; Chen, M. Exploring error of linear mixed model for hyperspectral image reconstruction from spectral compressive sensing. J. Appl. Remote Sens. 2019, 13, 036514.
18. Wang, Z.; Feng, Y.; Jia, Y. Spatio-spectral hybrid compressive sensing of hyperspectral imagery. Remote Sens. Lett. 2015, 6, 199–208.
19. Zhuang, L.; Ng, M.K.; Fu, X.; Bioucas, J.M. Hy-demosaicing: Hyperspectral blind reconstruction from random spectral projections. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15.
20. Wang, L.; Feng, Y.; Gao, Y.; Wang, Z.; He, M. Compressed sensing reconstruction of hyperspectral images based on spectral unmixing. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1266–1284.
21. Yang, F.; Chen, X.; Chai, L. Hyperspectral Image Destriping and Denoising Using Stripe and Spectral Low-Rank Matrix Recovery and Global Spatial-Spectral Total Variation. Remote Sens. 2021, 13, 827.
22. Wang, X.; Song, H.; Song, C.; Tao, J. Hyperspectral image compressed sensing model based on the collaborative sparsity of the intra-frame and inter-band. Sci. Sin. Inf. 2016, 46, 361–375.
23. Wang, X.; Wang, S.; Li, Y.; Xie, S.; Tao, J.; Song, D. Hyperspectral image sparse reconstruction model based on collaborative multidimensional correlation. Appl. Soft Comput. 2021, 105, 107250.
24. Wang, X.; Wang, S.; Xie, S.; Li, Y.; Tao, J.; Song, C. Spectral dimensional correlation and sparse reconstruction model of hyperspectral images. Sci. Sin. Inf. 2021, 51, 449–467.
25. Liu, Y.; Yuan, X.; Suo, J.; Brady, D.; Dai, Q. Rank minimization for snapshot compressive imaging. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 41, 2990–3006.
26. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 4729–4743.
27. Xue, J.; Zhao, Y.; Bu, Y.; Liao, W.; Chan, J.; Philips, W. Spatial-spectral structured sparse low-rank representation for hyperspectral image super-resolution. IEEE Trans. Image Process. 2021, 30, 3084–3097.
28. Yi, C.; Zhao, Y.; Chan, J. Hyperspectral image super-resolution based on spatial and spectral correlation fusion. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4165–4177.
29. Dai, S.; Liu, W.; Wang, Z.; Li, K. A Task-Driven Invertible Projection Matrix Learning Algorithm for Hyperspectral Compressed Sensing. Remote Sens. 2021, 13, 295.
30. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
31. Fotiadou, K.; Tsagkatakis, G.; Tsakalides, P. Spectral super resolution of hyperspectral images via coupled dictionary learning. IEEE Trans. Geosci. Remote Sens. 2018, 57, 2777–2797.
32. Chen, Y.; Huang, T.; He, W.; Yokoya, N.; Zhao, X. Hyperspectral image compressive sensing reconstruction using subspace-based nonlocal tensor ring decomposition. IEEE Trans. Image Process. 2020, 29, 6813–6828.
33. Chen, Y.; He, W.; Yokoya, N.; Huang, T. Hyperspectral image restoration using weighted group sparsity regularized low-rank tensor decomposition. IEEE Trans. Cybern. 2019, 50, 3556–3570.
34. Xue, J.; Zhao, Y.; Liao, W.; Chan, J.C. Nonlocal tensor sparse representation and low-rank regularization for hyperspectral image compressive sensing reconstruction. Remote Sens. 2019, 11, 193.
35. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable optimization-inspired deep network for image compressive sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1828–1837.
36. Fu, Y.; Zhang, T.; Wang, L.; Hua, H. Coded hyperspectral image reconstruction using deep external and internal learning. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3404–3420.
37. Wang, L.; Sun, C.; Fu, Y.; Kim, M.; Huang, H. Hyperspectral image reconstruction using a deep spatial-spectral prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–19 June 2019; pp. 8032–8041.
38. Huang, W.; Xu, Y.; Hu, X.; Wei, Z. Compressive hyperspectral image reconstruction based on spatial-spectral residual dense network. IEEE Geosci. Remote Sens. Lett. 2019, 17, 884–888.
39. Li, T.; Cai, Y.; Cai, Z.; Liu, X.; Hu, Q. Nonlocal band attention network for hyperspectral image band selection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3462–3474.
40. Li, T.; Gu, Y. Progressive Spatial–Spectral Joint Network for Hyperspectral Image Reconstruction. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
41. Zimichev, E.; Kazanskii, N.; Serafimovich, P. Spectral-spatial classification with k-means++ partitional clustering. Comput. Opt. 2014, 38, 281–286.
42. Chen, F.; Zhang, L.; Yu, H. External patch prior guided internal clustering for image denoising. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 603–611.
43. Zhang, J.; Zhao, D. Split Bregman Iteration based Collaborative Sparsity for Image Compressive Sensing Recovery. Intell. Comput. Appl. 2014, 4, 60–64.
44. Roy, S.; Hong, D.; Kar, P.; Wu, X.; Liu, X.; Zhao, D. Lightweight heterogeneous kernel convolution for hyperspectral image classification with noisy labels. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
45. Shafaey, M.; Salem, M.; Al-Berry, M.; Ebied, H.; Tolba, M. Review on Supervised and Unsupervised Deep Learning Techniques for Hyperspectral Images Classification. In Proceedings of the International Conference on Artificial Intelligence and Computer Vision (AICV), Settat, Morocco, 28–30 June 2021; pp. 66–74.
46. Bhandari, A.; Tiwari, K. Loss of target information in full pixel and subpixel target detection in hyperspectral data with and without dimensionality reduction. Evolv. Syst. 2021, 12, 239–254.
47. Li, Z.; Huang, H.; Zhang, Z.; Shi, G. Manifold-Based Multi-Deep Belief Network for Feature Extraction of Hyperspectral Image. Remote Sens. 2022, 14, 1484.
48. Cai, J.; Candès, E.; Shen, Z. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982.
49. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801.
50. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386.
51. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021.
52. Ji, Z.; Kong, F. Hyperspectral image compressed sensing based on linear filter between bands. Acta Photo. Sinica 2012, 41, 82.
53. Yuan, L.; Li, C.; Mandic, D.; Cao, J.; Zhao, Q. Tensor ring decomposition with rank minimization on latent space: An efficient approach for tensor completion. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Honolulu, HI, USA, 27 January–1 February 2019; pp. 9151–9158.
54. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530.
55. Zhang, J.; Zhao, C.; Gao, W. Optimization-inspired compact deep compressive sensing. IEEE J. Sel. Top. Signal Process. 2020, 14, 765–774.
Figure 1. Feature point selection for the Pavia Center dataset in a total of 102 bands taken by the ROSIS spectrometer; markers denote the roof coordinates (19,49), the open space coordinates (149,48), and the road coordinates (150,208): (a) Band 30; (b) Band 40; (c) Band 50; (d) Band 60; (e) Band 70; (f) Band 80.
Figure 2. The LRCoSM algorithm is divided into two dimensions. The overall non-local similar patches are found in the spatial dimension through filters as the first sparse regular term. The global similar patches are found in the joint spectral dimension using Kmeans++ and GMM clustering algorithms through low-rank decomposition as the second sparse regular term, and the sparsely reconstructed image is solved by iterations: (a) Sparse representation of the image; (b) Sparse reconstruction of the image.
Figure 3. Diagram of joint spectral dimensional planes grouping expansion stitching.
Figure 4. The non-local correlation of the joint spectral dimensional plane of groups (a–h).
Figure 5. Joint spectral dimensional plane global similarity patch matching point cloud map: (a) Coordinate point is (19,45); (b) coordinate point is (23,139); (c) coordinate point is (119,97); (d) coordinate point is (193,195); (e) coordinate point is (139,198); (f) coordinate point is (223,66).
Figure 6. Comparison of clustering effects of different algorithms under K-clusters. The black boxes represent the comparison areas: (a) Kmeans; (b) Kmeans++; (c) Kmeans++ and GMM.
Figure 7. Non-locally similar patches of spatially and spectrally dimensional planes are pulled into columns and stitched together.
Figure 8. 3D visualization of four types of hyperspectral remote sensing images: (a) KSC dataset; (b) Pavia Center dataset; (c) The Cooke City dataset; (d) Botswana dataset.
Figure 9. Comparison of the different algorithms for reconstructing images of the KSC when the sampling rate is 0.3: (a) SLF_GPSR; (b) RCoS; (c) HICoSM; (d) TR; (e) ISTA_Net; (f) TV; (g) LRCoSM; (h) GT.
Figure 10. Line graph of PSNR (dB) values for the KSC dataset that was reconstructed by different algorithms at other sampling rates: (a) rate = 0.2; (b) rate = 0.3; (c) rate = 0.4.
Figure 11. Comparison of different algorithms for reconstructing images of the PA when the sampling rate is 0.4: (a) SLF_GPSR; (b) RCoS; (c) HICoSM; (d) TR; (e) ISTA_Net; (f) TV; (g) LRCoSM; (h) GT.
Figure 12. Line graph of PSNR (dB) values for the PA dataset that was reconstructed by different algorithms at other sampling rates: (a) rate = 0.2; (b) rate = 0.3; (c) rate = 0.4.
Figure 13. Comparison of the different algorithms for reconstructing images of the CoC when the sampling rate is 0.2: (a) SLF_GPSR; (b) RCoS; (c) HICoSM; (d) TR; (e) ISTA_Net; (f) TV; (g) LRCoSM; (h) GT.
Figure 14. Line graph of PSNR (dB) values for the CoC dataset that was reconstructed by different algorithms at other sampling rates: (a) rate = 0.2; (b) rate = 0.3; (c) rate = 0.4.
Figure 15. Comparison of different algorithms for reconstructing images of the Bot when the sampling rate is 0.4: (a) SLF_GPSR; (b) RCoS; (c) HICoSM; (d) TR; (e) ISTA_Net; (f) TV; (g) LRCoSM; (h) GT.
Figure 16. Line graph of PSNR (dB) values for the Bot dataset that was reconstructed by different algorithms at other sampling rates: (a) rate = 0.2; (b) rate = 0.3; (c) rate = 0.4.
Figure 17. Two-dimensional mapping scatterplot of the joint KSC spectral dimensional plane for different K clusters: (a) K = 5; (b) K = 10; (c) K = 20.
Table 1. The most similar HSI non-locally similar patch coordinates of typical landforms such as roofs, open spaces, and roads at different wavebands are given according to the Euclidean distance formula.

| Reference Coordinates | Band Number | Coordinates of the Top Five Image Patches According to Similarity |
|---|---|---|
| (19,49) | 30 | (18,49) (20,49) (19,48) (18,48) (19,50) |
| | 40 | (18,49) (20,49) (19,48) (18,48) (19,50) |
| | 50 | (18,49) (20,49) (19,48) (18,48) (19,50) |
| | 60 | (18,49) (20,49) (19,48) (18,48) (19,50) |
| | 70 | (18,49) (20,49) (19,48) (19,50) (18,48) |
| | 80 | (18,49) (20,49) (19,48) (19,50) (18,48) |
| (149,48) | 30 | (148,48) (148,49) (148,47) (150,50) (147,47) |
| | 40 | (148,48) (150,50) (148,49) (150,49) (148,47) |
| | 50 | (150,50) (148,48) (148,49) (150,49) (148,47) |
| | 60 | (148,48) (150,50) (148,49) (150,49) (148,47) |
| | 70 | (148,48) (150,50) (148,49) (150,49) (148,47) |
| | 80 | (148,48) (150,50) (148,49) (150,49) (148,47) |
| (150,208) | 30 | (151,208) (150,209) (151,209) (155,211) (164,215) |
| | 40 | (151,208) (155,211) (150,209) (151,209) (155,210) |
| | 50 | (151,208) (155,211) (150,210) (151,209) (154,210) |
| | 60 | (151,208) (155,211) (154,210) (155,210) (151,209) |
| | 70 | (151,208) (164,215) (155,210) (155,211) (165,216) |
| | 80 | (151,208) (155,211) (150,209) (151,209) (156,211) |
Table 2. PSNR (dB) values for the band 30–37 KSC dataset at a 0.3 sampling rate.

| dB | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 |
|---|---|---|---|---|---|---|---|---|
| SLF_GPSR | 22.76 | 27.90 | 28.29 | 29.16 | 29.81 | 30.07 | 30.14 | 28.13 |
| RCoS | 32.83 | 32.81 | 33.06 | 33.54 | 33.86 | 33.88 | 33.55 | 33.13 |
| HICoSM | 32.83 | 32.79 | 33.10 | 33.62 | 33.92 | 33.77 | 33.46 | 33.09 |
| TR | 22.73 | 22.96 | 23.07 | 23.62 | 24.16 | 24.14 | 24.34 | 24.00 |
| ISTA_Net | 32.75 | 32.72 | 33.05 | 33.50 | 33.84 | 33.83 | 33.47 | 32.93 |
| TV | 32.57 | 32.49 | 32.60 | 32.74 | 32.92 | 32.58 | 32.17 | 31.70 |
| LRCoSM | 36.62 | 38.22 | 39.47 | 40.74 | 41.18 | 40.09 | 37.94 | 35.31 |
Table 3. The average FSIM and SAM as well as the algorithmic time statistics of the KSC dataset after seven algorithms with different sampling rates.

| Sampling Rate | Metric | SLF_GPSR | RCoS | HiCoSM | TR | ISTA-Net | TV | LRCoSM |
|---|---|---|---|---|---|---|---|---|
| 0.2 | FSIM | 0.77 | 0.88 | 0.89 | 0.78 | 0.78 | 0.84 | 0.94 |
| 0.2 | SAM | 14.8030 | 3.1140 | 3.4572 | 18.6800 | 5.0357 | 5.7238 | 3.5184 |
| 0.2 | Time | 6.53 s | 108.47 s | 109.00 s | 548.95 s | 13.6 h | 197.34 s | 4.04 h |
| 0.3 | FSIM | 0.83 | 0.91 | 0.92 | 0.79 | 0.90 | 0.88 | 0.97 |
| 0.3 | SAM | 12.1261 | 2.6975 | 2.9500 | 18.0189 | 3.1685 | 3.7016 | 3.0597 |
| 0.3 | Time | 6.22 s | 109.38 s | 121.39 s | 551.67 s | 13.5 h | 295.37 s | 3.64 h |
| 0.4 | FSIM | 0.89 | 0.94 | 0.94 | 0.88 | 0.94 | 0.91 | 0.98 |
| 0.4 | SAM | 7.7499 | 2.3868 | 2.6118 | 10.8569 | 2.4544 | 3.2386 | 2.8145 |
| 0.4 | Time | 6.31 s | 111.80 s | 113.76 s | 585.44 s | 13.0 h | 458.07 s | 3.86 h |
Table 4. PSNR (dB) values for the band 85–92 PA dataset at a 0.4 sampling rate.

| dB | 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 |
|---|---|---|---|---|---|---|---|---|
| SLF_GPSR | 22.59 | 24.30 | 24.39 | 24.35 | 24.27 | 24.25 | 24.27 | 24.28 |
| RCoS | 30.66 | 30.60 | 30.60 | 30.60 | 30.53 | 30.52 | 30.80 | 30.50 |
| HICoSM | 30.66 | 30.64 | 30.25 | 30.20 | 29.99 | 30.07 | 30.32 | 30.01 |
| TR | 34.26 | 34.31 | 34.50 | 34.49 | 34.66 | 34.63 | 34.08 | 34.65 |
| ISTA_Net | 33.74 | 33.67 | 33.68 | 33.63 | 33.55 | 33.55 | 33.56 | 33.51 |
| TV | 27.82 | 27.83 | 27.86 | 27.83 | 27.74 | 27.72 | 27.71 | 27.65 |
| LRCoSM | 37.76 | 43.01 | 45.43 | 46.43 | 46.19 | 45.34 | 43.06 | 38.09 |
Table 5. The average FSIM and SAM as well as the algorithmic time statistics of the PA dataset after seven algorithms with different sampling rates.

| Sampling Rate | Metric | SLF_GPSR | RCoS | HiCoSM | TR | ISTA-Net | TV | LRCoSM |
|---|---|---|---|---|---|---|---|---|
| 0.2 | FSIM | 0.70 | 0.88 | 0.88 | 0.82 | 0.67 | 0.79 | 0.98 |
| 0.2 | SAM | 14.9715 | 2.1600 | 2.9165 | 10.9885 | 2.9703 | 2.0679 | 3.7412 |
| 0.2 | Time | 4.72 s | 108.70 s | 109.68 s | 558.45 s | 13.6 h | 198.82 s | 6.30 h |
| 0.3 | FSIM | 0.75 | 0.92 | 0.91 | 0.88 | 0.90 | 0.84 | 0.99 |
| 0.3 | SAM | 12.0685 | 2.0605 | 2.8681 | 10.2932 | 2.4701 | 2.0046 | 3.4349 |
| 0.3 | Time | 4.59 s | 112.43 s | 110.03 s | 550.33 s | 13.5 h | 297.57 s | 6.12 h |
| 0.4 | FSIM | 0.82 | 0.94 | 0.93 | 0.97 | 0.96 | 0.88 | 0.99 |
| 0.4 | SAM | 4.7885 | 2.0608 | 2.9389 | 5.5989 | 2.1384 | 3.9672 | 3.0741 |
| 0.4 | Time | 4.66 s | 112.47 s | 114.92 s | 538.95 s | 13.0 h | 516.50 s | 5.95 h |
Table 6. PSNR (dB) values for the band 50–57 CoC dataset at a 0.2 sampling rate.

| dB | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 |
|---|---|---|---|---|---|---|---|---|
| SLF_GPSR | 10.68 | 19.09 | 20.63 | 20.64 | 20.66 | 20.65 | 20.64 | 20.65 |
| RCoS | 30.26 | 30.35 | 30.31 | 30.33 | 30.32 | 30.27 | 30.21 | 30.19 |
| HICoSM | 30.26 | 30.37 | 30.53 | 30.59 | 30.33 | 30.20 | 30.14 | 30.37 |
| TR | 28.86 | 28.83 | 28.85 | 28.94 | 28.83 | 28.95 | 28.89 | 29.01 |
| ISTA_Net | 25.83 | 25.92 | 25.89 | 25.90 | 25.89 | 25.83 | 25.78 | 25.77 |
| TV | 23.92 | 23.98 | 23.97 | 23.97 | 23.95 | 23.91 | 23.88 | 23.83 |
| LRCoSM | 37.27 | 40.28 | 41.38 | 41.83 | 42.00 | 41.40 | 40.07 | 37.72 |
Table 7. The average FSIM and SAM as well as the algorithmic time statistics of the CoC dataset after seven algorithms with different sampling rates.

| Sampling Rate | Metric | SLF_GPSR | RCoS | HiCoSM | TR | ISTA-Net | TV | LRCoSM |
|---|---|---|---|---|---|---|---|---|
| 0.2 | FSIM | 0.76 | 0.92 | 0.93 | 0.93 | 0.74 | 0.83 | 0.99 |
| 0.2 | SAM | 12.4156 | 0.4091 | 0.9412 | 3.2539 | 0.6093 | 0.5323 | 0.9757 |
| 0.2 | Time | 5.13 s | 108.39 s | 109.90 s | 558.08 s | 13.6 h | 197.38 s | 3.35 h |
| 0.3 | FSIM | 0.81 | 0.95 | 0.95 | 0.98 | 0.93 | 0.87 | 0.99 |
| 0.3 | SAM | 5.7284 | 0.3759 | 0.7528 | 1.9829 | 0.4640 | 0.3893 | 0.8702 |
| 0.3 | Time | 4.77 s | 110.42 s | 121.20 s | 551.69 s | 13.5 h | 421.12 s | 3.37 h |
| 0.4 | FSIM | 0.88 | 0.97 | 0.97 | 0.99 | 0.98 | 0.89 | 0.99 |
| 0.4 | SAM | 1.1161 | 0.3415 | 0.3416 | 1.4440 | 0.3649 | 0.3637 | 0.7722 |
| 0.4 | Time | 4.55 s | 92.28 s | 116.37 s | 569.76 s | 13.0 h | 955.26 s | 3.11 h |
Table 8. PSNR (dB) values for the band 115–122 Bot dataset at a 0.4 sampling rate.

| dB | 115 | 116 | 117 | 118 | 119 | 120 | 121 | 122 |
|---|---|---|---|---|---|---|---|---|
| SLF_GPSR | 25.23 | 28.52 | 28.72 | 30.05 | 29.89 | 29.05 | 29.15 | 30.00 |
| RCoS | 31.45 | 31.73 | 32.17 | 33.18 | 33.14 | 32.91 | 32.88 | 33.30 |
| HICoSM | 31.45 | 31.73 | 32.09 | 33.16 | 33.19 | 33.01 | 33.03 | 33.54 |
| TR | 33.93 | 34.93 | 35.84 | 37.35 | 37.17 | 36.89 | 36.77 | 37.26 |
| ISTA_Net | 32.28 | 32.48 | 33.03 | 34.41 | 34.28 | 34.09 | 34.05 | 34.59 |
| TV | 29.26 | 29.47 | 29.83 | 30.75 | 30.70 | 30.48 | 30.48 | 30.84 |
| LRCoSM | 33.60 | 35.15 | 36.48 | 38.60 | 38.58 | 38.04 | 37.53 | 36.84 |
Table 9. The average FSIM and SAM as well as the algorithmic time statistics of the Bot dataset after seven algorithms with different sampling rates.

| Sampling Rate | Metric | SLF_GPSR | RCoS | HiCoSM | TR | ISTA-Net | TV | LRCoSM |
|---|---|---|---|---|---|---|---|---|
| 0.2 | FSIM | 0.78 | 0.85 | 0.86 | 0.92 | 0.63 | 0.81 | 0.93 |
| 0.2 | SAM | 14.0492 | 2.5653 | 2.6522 | 4.0471 | 3.1180 | 2.8417 | 2.8129 |
| 0.2 | Time | 5.19 s | 108.38 s | 109.06 s | 550.00 s | 13.6 h | 198.87 s | 4.37 h |
| 0.3 | FSIM | 0.83 | 0.89 | 0.90 | 0.95 | 0.81 | 0.84 | 0.95 |
| 0.3 | SAM | 5.9134 | 2.2644 | 2.3336 | 3.2629 | 2.7003 | 2.3265 | 2.6394 |
| 0.3 | Time | 4.91 s | 107.79 s | 125.22 s | 559.54 s | 13.5 h | 293.58 s | 4.73 h |
| 0.4 | FSIM | 0.86 | 0.92 | 0.93 | 0.97 | 0.89 | 0.86 | 0.97 |
| 0.4 | SAM | 3.4510 | 2.0281 | 2.0493 | 2.3532 | 2.2328 | 2.2003 | 2.3477 |
| 0.4 | Time | 4.98 s | 119.39 s | 118.49 s | 512.88 s | 13.0 h | 435.70 s | 6.50 h |
Table 10. Reconstructed PSNR (dB) values for the 85–92 band Pavia dataset at a 0.4 sampling rate with different clustering clusters K.

| Datasets | K = 5 | K = 10 | K = 15 | K = 20 |
|---|---|---|---|---|
| Kennedy Space Center | 40.31 | 40.38 | 40.32 | 40.36 |
| Pavia Center | 40.58 | 43.16 | 40.63 | 40.67 |
| The Cooke City | 43.87 | 46.09 | 43.85 | 43.93 |
| Botswana | 36.63 | 36.85 | 36.62 | 36.64 |
Table 11. Reconstructed PSNR (dB) values for four datasets with a sampling rate of 0.4 for different Σ diagonal matrix eigenvalue low-rank thresholds.

| Datasets | 10% | 30% | 50% | 70% | 90% |
|---|---|---|---|---|---|
| Kennedy Space Center | 36.99 | 37.01 | 39.42 | 40.04 | 40.38 |
| Pavia Center | 34.31 | 34.33 | 37.47 | 39.16 | 43.16 |
| The Cooke City | 35.74 | 35.75 | 38.66 | 41.42 | 46.09 |
| Botswana | 35.35 | 35.40 | 35.89 | 36.62 | 36.85 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
