Article

Hyperspectral and Multispectral Image Fusion Using Coupled Non-Negative Tucker Tensor Decomposition

1 Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz 13876-71557, Iran
2 Imec-Vision Lab, Department of Physics, University of Antwerp, 2610 Antwerp, Belgium
* Author to whom correspondence should be addressed.
Remote Sens. 2021, 13(15), 2930; https://doi.org/10.3390/rs13152930
Submission received: 25 June 2021 / Revised: 16 July 2021 / Accepted: 21 July 2021 / Published: 26 July 2021

Abstract

Fusing a low spatial resolution hyperspectral image (HSI) with a high spatial resolution multispectral image (MSI), aiming to produce a super-resolution hyperspectral image, has recently attracted increasing research interest. In this paper, a novel approach based on coupled non-negative tensor decomposition is proposed. The proposed method performs a Tucker tensor factorization of a low resolution hyperspectral image and a high resolution multispectral image under the constraint of non-negative tensor decomposition (NTD). The conventional matrix factorization methods essentially lose spatio-spectral structure information when stacking the 3D data structure of a hyperspectral image into a matrix form. Moreover, the spectral, spatial, or their joint structural features have to be imposed from the outside as a constraint to well pose the matrix factorization problem. The proposed method has the advantage of preserving the spatio-spectral structure of hyperspectral images. In this paper, the NTD is directly imposed on the coupled tensors of the HSI and MSI. Hence, the intrinsic spatio-spectral structure of the HSI is represented without loss, and spatial and spectral information can be interdependently exploited. Furthermore, multilinear interactions of different modes of the HSIs can be exactly modeled with the core tensor of the Tucker tensor decomposition. The proposed method is straightforward and easy to implement. Unlike other state-of-the-art approaches, the complexity of the proposed approach is linear with the size of the HSI cube. Experiments on two well-known datasets give promising results when compared with some recent methods from the literature.

1. Introduction

Hyperspectral imagery utilizes a broad range of the electromagnetic spectrum to obtain information about the imaged scene, allowing better identification of materials or detection of processes. As each band of a HSI contains the spectral response to a narrow interval of the electromagnetic spectrum, reflectance has to be collected from a wider area of the scene, which decreases the spatial resolution of HSIs. HSI acquisition instruments are therefore severely limited in capturing high spatial resolution images, and there is always a tradeoff between spectral resolution and spatial resolution.
Hence, additional spatial information is always needed from an external source; using an auxiliary image such as a panchromatic image, a multispectral image, or an RGB image is a well-known approach. In recent years, there have been several attempts to fuse a high resolution multispectral image (HRMSI) with a low resolution hyperspectral image (LRHSI) [1,2,3,4,5,6,7,8,9,10,11,12,13] to produce a high spatio-spectral resolution HSI. Basically, all fusion approaches can be grouped into the following categories: methods using a Bayesian framework [6,11,14,15,16,17,18], matrix factorization based methods [4,5,12,19,20,21], tensor factorization based methods [1,3,9,22,23,24,25] and deep learning based methods [26,27,28].
Bayesian methods combine the observed LRHSI and HRMSI with prior information or regularization terms to build a statistical model for estimating a super-resolution HSI [29]. In [11], a Bayesian framework is introduced, based on a sparse representation along with the alternating direction method of multipliers (ADMM), to solve the fusion problem. In our previous work [15], smooth graph signal modeling is employed as a regularizer to incorporate the spatio-spectral joint structural features of the HSI. The method proposed in [6] uses spectral unmixing and sparse representations in a Bayesian framework to increase the spatial resolution of the HSI. The most important deficiency of the Bayesian framework is that it requires regularization terms that comprehensively represent the spatial information of HSIs, which is partially lost by matricizing the HSI.
As an alternative approach, matrix factorization methods have been widely applied to fuse the LRHSI with the HRMSI [4,5,12,19,20,21]. These methods describe the targeted high resolution HSI (HRHSI) as a product of a matrix of basis vectors of spectral signatures, learned from the high spectral resolution LRHSI, and a coefficient matrix, estimated from auxiliary data, such as a HRMSI. The method proposed in [4] exploits non-local self-similarity, based on a spatial and spectral sparse representation, to estimate both the basis vectors and the coefficient matrices. Ref. [5] employed a non-negative structured sparse representation to estimate the spectral basis vectors, and a structured sparse representation is utilized to determine the coefficient matrices. In [19,20], the spatial structure of the HRHSI is applied as a regularization to estimate the super-resolution HSI. The method proposed in [12] applied spectral unmixing to regularize the fusion problem. This method alternately updates the spectral basis vectors and coefficient matrices, subject to non-negativity and sum-to-one constraints. As HSIs are essentially non-negative data, non-negative matrix factorization (NMF) frameworks are quite compatible with the observed data. Therefore, the approach proposed in [21] alternately factorizes the LRHSI and HRMSI under non-negativity constraints, and multiplicative update rules (MUR) [30] are employed to estimate the two factorized matrices in each stage. Of note, in Bayesian frameworks and NMF-based approaches, the 3D data structure is stacked into a matrix form, which causes a loss of the neighborhood structure, smoothness, and continuity characteristics. To avoid this, tensors or multiway arrays have been frequently used in hyperspectral data analysis for the purposes of image classification [31,32,33], data compression [34,35], change detection [36,37], target and anomaly detection [36,38], and denoising [39,40]. Additionally, exploiting the capability of multilinear algebra on multiway array representations allows more flexibility in choosing constraints and describing data structures. Moreover, more general features can be extracted from the data compared with matrix-based approaches. More recently, tensor-based representations have been widely used for HSI super-resolution [1,9,22,24,41]. The representation of a HSI as a tensor of order three is a structural and natural model without any loss of information. In most of these methods, a low rank tensor representation is exploited to estimate the HRHSI. It offers the benefits of strong noise and memory usage reduction, and the extraction of discriminative features [41]. Accordingly, two well-known tensor decompositions, the Canonical Polyadic decomposition (CPD) and the Tucker decomposition, are frequently used to estimate HRHSIs [9,22,24,25,31,42]. The former decomposes a tensor into a sum of rank-one component tensors, while the latter factorizes the three-dimensional data into the multiplication (mode-n products) of four factors: a core tensor and three dictionary matrices, one for each mode (see Figure 1). In [22], a low rank tensor-train representation is incorporated for HSI super-resolution. In [23], a nonlocal sparse tensor factorization is proposed to model the non-local self-similarity of HSIs. In [24], a nonlocal coupled CPD framework is used to fuse hyperspectral and multispectral images. A spatio-spectral sparsity prior constraint is imposed on the core tensor in a coupled sparse tensor factorization method in [9].
In [38], a low-rank constraint is applied to the core tensor in the Tucker representation to detect anomalies in HSIs, effectively modeling the superposition of the spectral background and the anomaly signatures. A coupled CP decomposition based method is proposed in [43], which fuses the LRHSI and HRMSI to produce a HRHSI in a tensor framework. Note that the CP-based rank of a tensor is defined as the minimum number of rank-one component tensors that are summed to express that tensor, which is unknown and generally not easy to estimate. In the Tucker representation, on the other hand, there is just one component tensor, called the core tensor, which can comprehensively express the relations between the various modes and the multilinear interactions among them.
Moreover, with the recent success of deep learning techniques in various image processing tasks, research interest in deep learning-based image fusion has been increasing. In [27], a fusion method based on convolutional neural networks (CNN) was proposed, in which two CNN branches are devoted to the spectral features of the LRHSI and the spatial neighborhood features of the HRMSI, respectively. In [28], an unsupervised deep approach was proposed for the blind fusion of a HRMSI and a LRHSI, making no prior assumptions on the spatial and spectral degradation functions; instead, these are modeled with deep network structures. The largest deficiency of the deep learning-based methods is their requirement of huge amounts of training data, which are often practically unavailable. Furthermore, deep learning-based methods have limited generalization ability with regard to different sensor characteristics, such as the spectral range and the observational models.
As HSIs are naturally non-negative data, non-negativity is a strong prior to impose on the tensor factorization, which is expected to further improve the fusion performance. In this paper, we extend NMF to a tensor framework by applying a non-negative tensor decomposition. Contrary to the conventional NMF-based methods, where the spatial, spectral, or joint structural features must be additionally imposed as constraints, in this paper, the spatio-spectral joint structures of HSIs are preserved without having to impose such constraints. We optimally exploit the spectral properties of the LRHSI and the spatial properties of the HRMSI by directly applying a coupled (non-negative) Tucker decomposition to both images. We will refer to the proposed method as coupled non-negative Tucker decomposition (CNTD).
Using the Tucker tensor representation, the proposed method comprehensively models the multilinear modes of the HSI, where the core tensor precisely expresses the relations between the various modes. Therefore, we propose this tensor framework to benefit from its ability to preserve the spatio-spectral joint structures. Given the huge amount of data in hyperspectral image processing and analysis, computational efficiency is a significant factor. Thus, we propose an algorithm that is straightforward and easy to implement, with a complexity that is linear in the size of the hyperspectral data cube. The main contributions of this paper are highlighted below.
  • The application of non-negativity priors to the Tucker tensor decomposition of LRHSI and HRMSI, to estimate spectral and spatial mode-dictionaries in a Tucker model, respectively. To the best of our knowledge, this is the first time that a non-negative Tucker decomposition is used to represent hyperspectral images in a HSI fusion framework.
  • The preservation of spatio-spectral joint structures of HSIs without prior knowledge requirements and much lower information losses than matrix frameworks.
  • The construction of an algorithm with lower-order complexity than the state-of-the-art.
The remainder of this paper is organized as follows. Some preliminaries on tensors are given in Section 2. Section 3 formulates the HSI-MSI fusion framework. The proposed coupled non-negative tensor decomposition (CNTD) method for super-resolution HSI is introduced in Section 4. The complexity of the proposed method is elaborated in Section 5. Section 6 describes some experimental results on two well-known datasets, Pavia University and Indian Pines. Finally, conclusions and future work are described in Section 7.

2. Preliminaries on Tensors

Let us denote a tensor of order $N$ as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, having $N$ indices $i_1, i_2, \ldots, i_N$, and its elements by $x_{i_1 i_2 \cdots i_N}$, where $1 \le i_n \le I_n$. Tensor matricization unfolds a tensor of order $N$ into a matrix. The mode-$n$ matricization of $\mathcal{X}$ reorders the elements of $\mathcal{X}$ to form the matrix $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times I_1 I_2 \cdots I_{n-1} I_{n+1} \cdots I_N}$. Tensor matricization can be regarded as an extension of matrix vectorization.
The mode-$n$ product of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ and a matrix $\mathbf{A} \in \mathbb{R}^{J_n \times I_n}$ is defined by $\mathcal{M} = \mathcal{X} \times_n \mathbf{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times J_n \times \cdots \times I_N}$, and its entries are calculated by:
$$ m_{i_1 \cdots i_{n-1}\, j_n\, i_{n+1} \cdots i_N} = \sum_{i_n} x_{i_1 \cdots i_{n-1}\, i_n\, i_{n+1} \cdots i_N}\, a_{j_n i_n} \qquad (1) $$
The mode-$n$ product $\mathcal{X} \times_n \mathbf{A}$ can also be written in matrix form as $\mathbf{M}_{(n)} = \mathbf{A}\mathbf{X}_{(n)}$. The mode-$n$ product has two important properties: (i) the order of multiple mode-$n$ products along different modes is arbitrary:
$$ (\mathcal{X} \times_m \mathbf{A}) \times_n \mathbf{B} = (\mathcal{X} \times_n \mathbf{B}) \times_m \mathbf{A} \qquad (m \neq n) \qquad (2) $$
and (ii) for multiple mode-$n$ products along the same mode, the order matters:
$$ (\mathcal{X} \times_n \mathbf{A}) \times_n \mathbf{B} = \mathcal{X} \times_n (\mathbf{B}\mathbf{A}) \qquad (3) $$
The scalar product of two tensors $\mathcal{X}, \mathcal{Y}$ is defined as $\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1, i_2, \ldots, i_N} x_{i_1 i_2 \cdots i_N}\, y_{i_1 i_2 \cdots i_N}$. The Frobenius norm of a tensor $\mathcal{X}$ is defined as $\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}$.
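To make these operations concrete, the following minimal NumPy sketch implements the mode-$n$ matricization, its inverse, and the mode-$n$ product under the conventions above, and numerically checks property (2). The helper names (unfold, fold, mode_dot) and the column-major storage choice are our own illustrative assumptions, not notation from the paper.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization X_(n): the selected mode becomes the rows (0-indexed modes)."""
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order="F")

def fold(M, mode, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from its mode-n matricization."""
    others = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(np.reshape(M, [shape[mode]] + others, order="F"), 0, mode)

def mode_dot(T, A, mode):
    """Mode-n product T x_n A, computed as fold(A @ unfold(T, n))."""
    shape = list(T.shape)
    shape[mode] = A.shape[0]
    return fold(A @ unfold(T, mode), mode, shape)

# quick check of property (2): products along different modes commute
rng = np.random.default_rng(0)
X = rng.random((4, 5, 6))
A, B = rng.random((3, 4)), rng.random((2, 6))
lhs = mode_dot(mode_dot(X, A, 0), B, 2)
rhs = mode_dot(mode_dot(X, B, 2), A, 0)
print(np.allclose(lhs, rhs))   # True
```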
The Tucker decomposition of $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ expresses the tensor as mode products of a core tensor $\mathcal{U} \in \mathbb{R}^{K_1 \times K_2 \times \cdots \times K_N}$ and $N$ mode matrices $\mathbf{V}^{(n)} \in \mathbb{R}^{I_n \times K_n}$:
$$ \mathcal{X} = \mathcal{U} \times_1 \mathbf{V}^{(1)} \times_2 \mathbf{V}^{(2)} \cdots \times_N \mathbf{V}^{(N)} \qquad (4) $$
which has the element-wise form:
$$ x_{i_1 \cdots i_N} = \sum_{k_1} \cdots \sum_{k_N} u_{k_1 \cdots k_N}\, v^{(1)}_{i_1 k_1}\, v^{(2)}_{i_2 k_2} \cdots v^{(N)}_{i_N k_N} \qquad (5) $$
The mode-$n$ matricization of $\mathcal{X}$ is expressed through Kronecker products ($\otimes$) of the mode-$n$ matricization of the core tensor and the mode matrices, as:
$$ \mathbf{X}_{(n)} = \mathbf{V}^{(n)} \mathbf{U}_{(n)} \left[ \mathbf{V}^{(N)} \otimes \cdots \otimes \mathbf{V}^{(n+1)} \otimes \mathbf{V}^{(n-1)} \otimes \cdots \otimes \mathbf{V}^{(1)} \right]^{T} \qquad (6) $$
where $\mathbf{U}_{(n)}$ is the mode-$n$ matricization of the core tensor $\mathcal{U}$. The Kronecker product of two matrices $\mathbf{A} \in \mathbb{R}^{I \times J}$ and $\mathbf{B} \in \mathbb{R}^{K \times L}$ is a matrix $\mathbf{A} \otimes \mathbf{B} \in \mathbb{R}^{IK \times JL}$, defined as:
$$ \mathbf{A} \otimes \mathbf{B} = \begin{bmatrix} a_{11}\mathbf{B} & a_{12}\mathbf{B} & \cdots & a_{1J}\mathbf{B} \\ a_{21}\mathbf{B} & a_{22}\mathbf{B} & \cdots & a_{2J}\mathbf{B} \\ \vdots & \vdots & \ddots & \vdots \\ a_{I1}\mathbf{B} & a_{I2}\mathbf{B} & \cdots & a_{IJ}\mathbf{B} \end{bmatrix} \qquad (7) $$
The following Kronecker product properties and the vectorization operation ($\mathrm{vec}(\cdot)$) are used in this paper:
$$ \mathrm{vec}(\mathbf{U}\mathbf{A}\mathbf{V}^{T}) = (\mathbf{V} \otimes \mathbf{U})\,\mathrm{vec}(\mathbf{A}), \qquad (\mathbf{V} \otimes \mathbf{U})^{T} = \mathbf{V}^{T} \otimes \mathbf{U}^{T}, \qquad (\mathbf{V} \otimes \mathbf{U})(\mathbf{A} \otimes \mathbf{B}) = \mathbf{V}\mathbf{A} \otimes \mathbf{U}\mathbf{B} \qquad (8) $$
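As a quick sanity check on the Tucker model, the sketch below reuses the unfold/mode_dot helpers introduced above to build a small third-order tensor from a random core and mode matrices, and verifies the matricization identity (6) and the first vectorization identity in (8); all sizes are arbitrary toy values.

```python
import numpy as np
# Reuses the hypothetical unfold() and mode_dot() helpers from the previous sketch.

rng = np.random.default_rng(0)
U = rng.random((4, 3, 2))                                          # core tensor
V = [rng.random((5, 4)), rng.random((6, 3)), rng.random((7, 2))]   # mode matrices

# Tucker reconstruction, Eq. (4):  X = U x1 V1 x2 V2 x3 V3
X = mode_dot(mode_dot(mode_dot(U, V[0], 0), V[1], 1), V[2], 2)

# Mode-1 matricization identity, Eq. (6), for a third-order tensor
lhs = unfold(X, 0)
rhs = V[0] @ unfold(U, 0) @ np.kron(V[2], V[1]).T
print(np.allclose(lhs, rhs))                                       # True

# First vectorization identity in Eq. (8): vec(U A V^T) = (V kron U) vec(A)
A = rng.random((3, 4)); Um = rng.random((5, 3)); Vm = rng.random((6, 4))
vec = lambda M: M.flatten(order="F")                               # column-stacking vec
print(np.allclose(vec(Um @ A @ Vm.T), np.kron(Vm, Um) @ vec(A)))   # True
```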
All basic notations are presented in Table 1.

3. HSI-MSI Fusion Problem Formulation

In this paper, the HRHSI, LRHSI, and HRMSI are denoted as tensors of order three. The target HRHSI is denoted by $\mathcal{Z} \in \mathbb{R}^{W \times H \times S}$, where $W$, $H$ and $S$ are the width, height and number of spectral bands, respectively. The LRHSI and HRMSI are denoted by $\mathcal{Y}_h \in \mathbb{R}^{w \times h \times S}$ and $\mathcal{Y}_m \in \mathbb{R}^{W \times H \times s}$, respectively, with $w \ll W$, $h \ll H$ and $s \ll S$. In this paper, we aim at estimating a HRHSI in a fusion framework, based on the observations $\mathcal{Y}_h$ and $\mathcal{Y}_m$. In this section, we briefly describe the matrix factorization-based fusion scheme and then detail the tensor decomposition-based scheme.

3.1. Matrix Factorization-Based Fusion Scheme

In the matrix factorization-based fusion scheme, each spectral signature of the HRHSI is considered to be a linear mixture of a small number of basis vectors:
$$ \mathbf{Z}_{(3)} = \mathbf{E}\mathbf{A} \qquad (9) $$
where $\mathbf{Z}_{(3)} \in \mathbb{R}^{S \times WH}$ is the mode-three matricization of $\mathcal{Z}$, $\mathbf{E} \in \mathbb{R}^{S \times k}$ contains the basis vectors in its columns, and $\mathbf{A} \in \mathbb{R}^{k \times WH}$ is the coefficient matrix. Similarly, $\mathbf{Y}_{h(3)} \in \mathbb{R}^{S \times wh}$ and $\mathbf{Y}_{m(3)} \in \mathbb{R}^{s \times WH}$ are the mode-three matricizations of $\mathcal{Y}_h$ and $\mathcal{Y}_m$, respectively. Conventionally, they can be considered as spatially and spectrally down-sampled versions of the target HRHSI, respectively. Thus:
$$ \mathbf{Y}_{h(3)} = \mathbf{Z}_{(3)} \mathbf{M} \qquad (10) $$
where $\mathbf{M} \in \mathbb{R}^{WH \times wh}$ models the point spread function (PSF) and the spatial down-sampling of the hyperspectral sensor. It can be separated with respect to the width and height modes [9]:
$$ \mathbf{M} = (\mathbf{P}_2 \otimes \mathbf{P}_1)^{T} \qquad (11) $$
where $\mathbf{P}_1 \in \mathbb{R}^{w \times W}$ and $\mathbf{P}_2 \in \mathbb{R}^{h \times H}$ are separable spatial down-sampling operators along the width and height modes, respectively.
Similarly:
$$ \mathbf{Y}_{m(3)} = \mathbf{P}_3 \mathbf{Z}_{(3)} \qquad (12) $$
where $\mathbf{P}_3 \in \mathbb{R}^{s \times S}$ is a matrix modeling the spectral down-sampling of the multispectral sensor; its rows contain the spectral response functions (SRF) of the multispectral sensor.
Commonly, in the HSI-MSI fusion problem formulation, the SRF and PSF are assumed to be known [4,9,11,17,25]. The approach proposed in [19] can also be used to estimate the SRF and PSF from the observed data. Following (10):
$$ \mathbf{P}_3 \mathbf{Y}_{h(3)} = \mathbf{P}_3 \mathbf{Z}_{(3)} \mathbf{M} \qquad (13) $$
which, according to (12), can also be written as:
$$ \mathbf{P}_3 \mathbf{Y}_{h(3)} = \mathbf{Y}_{m(3)} \mathbf{M} \qquad (14) $$
In [19], an optimization framework based on quadratic regularization is presented to estimate $\mathbf{P}_3$ and $\mathbf{M}$ from the observed LRHSI and HRMSI:
$$ \min_{\mathbf{P}_3, \mathbf{M}} \; \left\| \mathbf{P}_3 \mathbf{Y}_{h(3)} - \mathbf{Y}_{m(3)} \mathbf{M} \right\|^{2} + \alpha_1 \Gamma_1(\mathbf{P}_3) + \alpha_2 \Gamma_2(\mathbf{M}) \qquad (15) $$
where $\Gamma_1(\cdot)$ and $\Gamma_2(\cdot)$ are quadratic regularizers, and $\alpha_1$ and $\alpha_2$ are the respective regularization parameters. See [19] for more details.

3.2. Tensor Decomposition-Based Fusion Scheme

As HSIs are naturally 3D data, a tensor is a more efficient representation than a matrix, and we can benefit from its ability to exploit the intrinsic structures of the HSI and the multilinear interactions between its different modes. Hence, in this paper, the target HRHSI $\mathcal{Z}$ is formally expressed by:
$$ \mathcal{Z} = \mathcal{C} \times_1 \mathbf{W} \times_2 \mathbf{H} \times_3 \mathbf{S} \qquad (16) $$
which is called the Tucker representation, where $\mathbf{W} \in \mathbb{R}^{W \times n_w}$, $\mathbf{H} \in \mathbb{R}^{H \times n_h}$ and $\mathbf{S} \in \mathbb{R}^{S \times n_s}$ are the width, height and spectral dictionary matrices, respectively, $n_w$, $n_h$ and $n_s$ are the numbers of dictionary atoms of each mode, and $\mathcal{C} \in \mathbb{R}^{n_w \times n_h \times n_s}$ is the core tensor that describes the interactions between the different modes.
The mode-$n$ ($n = 1, 2, 3$) matricizations of $\mathcal{Z}$ are given by:
$$ \mathbf{Z}_{(1)} = \mathbf{W}\,\mathbf{C}_{(1)} (\mathbf{S} \otimes \mathbf{H})^{T}, \qquad \mathbf{Z}_{(2)} = \mathbf{H}\,\mathbf{C}_{(2)} (\mathbf{S} \otimes \mathbf{W})^{T}, \qquad \mathbf{Z}_{(3)} = \mathbf{S}\,\mathbf{C}_{(3)} (\mathbf{H} \otimes \mathbf{W})^{T} \qquad (17) $$
Similarly, both the LRHSI and HRMSI are represented by:
$$ \mathcal{Y}_h = \mathcal{C} \times_1 \mathbf{W}_h \times_2 \mathbf{H}_h \times_3 \mathbf{S} + \mathcal{E}_h \qquad (18) $$
$$ \mathcal{Y}_m = \mathcal{C} \times_1 \mathbf{W} \times_2 \mathbf{H} \times_3 \mathbf{S}_m + \mathcal{E}_m \qquad (19) $$
where $\mathbf{W}_h \in \mathbb{R}^{w \times n_w}$, $\mathbf{H}_h \in \mathbb{R}^{h \times n_h}$ and $\mathbf{S}_m \in \mathbb{R}^{s \times n_s}$ are the corresponding width, height, and spectral dictionary matrices, respectively, and $\mathcal{E}_h \in \mathbb{R}^{w \times h \times S}$ and $\mathcal{E}_m \in \mathbb{R}^{W \times H \times s}$ denote independent and identically distributed (i.i.d.) noise on $\mathcal{Y}_h$ and $\mathcal{Y}_m$, respectively. As the LRHSI and HRMSI are the spatially and spectrally down-sampled forms of the HRHSI, respectively, one has:
$$ \mathbf{W}_h = \mathbf{P}_1 \mathbf{W} \qquad (20) $$
$$ \mathbf{H}_h = \mathbf{P}_2 \mathbf{H} \qquad (21) $$
$$ \mathbf{S}_m = \mathbf{P}_3 \mathbf{S} \qquad (22) $$
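To illustrate the observation model, the following sketch (reusing the hypothetical mode_dot helper from Section 2, with toy sizes, random non-negative factors, and the noise terms omitted) builds a HRHSI from a core tensor according to (16), generates the LRHSI and HRMSI through (18)–(22), and confirms that they equal the spatially and spectrally down-sampled HRHSI.

```python
import numpy as np
# Reuses the hypothetical mode_dot() helper from Section 2; toy sizes only.

rng = np.random.default_rng(1)
W_dim, H_dim, S_dim = 40, 40, 30          # HRHSI width, height, number of bands
w_dim, h_dim, s_dim = 10, 10, 4           # LRHSI spatial size, HRMSI band count
n_w, n_h, n_s = 8, 8, 5                   # dictionary atoms per mode

C = rng.random((n_w, n_h, n_s))                                # core tensor
W = rng.random((W_dim, n_w))                                   # width dictionary
H = rng.random((H_dim, n_h))                                   # height dictionary
S = rng.random((S_dim, n_s))                                   # spectral dictionary
Z = mode_dot(mode_dot(mode_dot(C, W, 0), H, 1), S, 2)          # HRHSI, Eq. (16)

P1 = rng.random((w_dim, W_dim))                                # width down-sampling
P2 = rng.random((h_dim, H_dim))                                # height down-sampling
P3 = rng.random((s_dim, S_dim))                                # SRF matrix

# Noise-free observations sharing the same core tensor, Eqs. (18)-(22)
Y_h = mode_dot(mode_dot(mode_dot(C, P1 @ W, 0), P2 @ H, 1), S, 2)   # LRHSI
Y_m = mode_dot(mode_dot(mode_dot(C, W, 0), H, 1), P3 @ S, 2)        # HRMSI

# Equivalently, down-sampling the HRHSI directly gives the same tensors
print(np.allclose(Y_h, mode_dot(mode_dot(Z, P1, 0), P2, 1)))   # True
print(np.allclose(Y_m, mode_dot(Z, P3, 2)))                    # True
```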

4. Proposed CNTD Approach

The goal of an LRHSI and HRMSI fusion framework is to estimate a high spatio-spectral resolution target HSI. Since $w \ll W$, $h \ll H$ and $s \ll S$, the super-resolution problem is severely ill-posed and prior information is needed to regularize the fusion problem. Orthogonality and statistical independence of the basis vectors in the Tucker representation, and sparsity, smoothness, and non-negativity of HSIs are some constraints that help to find a unique solution to the super-resolution problem of HSIs [44,45,46].
In this paper, we propose a new algorithm based on coupled non-negative Tucker decomposition (CNTD). The proposed method performs a non-negative Tucker tensor decomposition of the LRHSI and HRMSI. The original NMF method inherently loses spatio-spectral joint structure information when unfolding the 3D data into matrix form. To avoid this, in this paper, the NTD is imposed on both tensors of the HSI and MSI directly. The CNTD method effectively combines multiple data tensors, whereby the intrinsic spatio-spectral joint structures of the HSI are represented without loss and exploited interdependently. The proposed CNTD approach is depicted in Figure 2, illustrating the fusion of the spatial information of the HRMSI and the spectral information of the LRHSI to produce the HRHSI.
Considering (16), (18), and (19), the LRHSI and HRMSI fusion problem is formulated by the following constrained least square optimization problems:
$$ \min_{\mathcal{C},\, \mathbf{W}_h,\, \mathbf{H}_h,\, \mathbf{S}} \; \left\| \mathcal{Y}_h - \mathcal{C} \times_1 \mathbf{W}_h \times_2 \mathbf{H}_h \times_3 \mathbf{S} \right\|_F^{2} \quad \text{s.t.} \quad \mathcal{C}, \mathbf{W}_h, \mathbf{H}_h, \mathbf{S} \ge 0 \qquad (23) $$
$$ \min_{\mathcal{C},\, \mathbf{W},\, \mathbf{H},\, \mathbf{S}_m} \; \left\| \mathcal{Y}_m - \mathcal{C} \times_1 \mathbf{W} \times_2 \mathbf{H} \times_3 \mathbf{S}_m \right\|_F^{2} \quad \text{s.t.} \quad \mathcal{C}, \mathbf{W}, \mathbf{H}, \mathbf{S}_m \ge 0 \qquad (24) $$
Hyperspectral and multispectral data fusion based on NTD is carried out by estimating the corresponding dictionaries and the core tensor. The non-negative Tucker tensor decomposition is non-convex as a whole, but it is convex in each of its factors. Update rules for each mode-dictionary matrix are easily derived by matricizing the Tucker model along the corresponding mode, so each sub-problem can be treated as a conventional NMF, for which we use a block coordinate descent scheme: the cost functions are optimized with respect to each factor while keeping the others fixed. Traditionally in gradient descent, the learning rates are positive, but since the subtraction of terms in the update rules can lead to negative elements, Lee and Seung [30] proposed adaptive learning rates that avoid subtraction and thus the production of negative elements. Like conventional NMF, the resulting optimization algorithm is easy to implement and computationally efficient. NTD decomposes a non-negative data tensor into the multilinear product of a non-negative core tensor and non-negative mode-dictionary matrices [47]. To minimize the optimization problems (23) and (24), the multiplicative update rule (MUR) is applied, which follows directly from the NMF multiplicative rule, whose convergence to local optima under the non-negativity constraints has been proven in [30,48].
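For reference, a minimal sketch of the Lee and Seung multiplicative update rule for plain NMF is given below, since every sub-problem of the proposed scheme reduces to this form; the function name, the random initialization and the small epsilon added to the denominators are our own illustrative choices.

```python
import numpy as np

def nmf_mur(Y, k, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for Y ~ alpha @ beta with all factors >= 0."""
    rng = np.random.default_rng(seed)
    alpha = rng.random((Y.shape[0], k))
    beta = rng.random((k, Y.shape[1]))
    for _ in range(n_iter):
        alpha *= (Y @ beta.T) / (alpha @ beta @ beta.T + eps)    # update of the first factor
        beta *= (alpha.T @ Y) / (alpha.T @ alpha @ beta + eps)   # update of the second factor
    return alpha, beta
```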

4.1. Updating Mode-Dictionary Matrices

Update algorithms for each mode-dictionary matrix can be easily derived by matricizing the Tucker model along the corresponding mode. Lee and Seung's multiplicative update rule is a widely known approach owing to the simplicity of its implementation [30]. We use an extended MUR for the mode-dictionary matrices of $\mathcal{Y}_h$. The first-mode matricization of $\mathcal{Y}_h$ is given by:
$$ \mathbf{Y}_{h(1)} \;\approx\; \underbrace{\mathbf{W}_h}_{\substack{\text{1st factor of the}\\ \text{conventional NMF, } \alpha}} \; \underbrace{\mathbf{C}_{(1)} (\mathbf{S} \otimes \mathbf{H}_h)^{T}}_{\substack{\text{2nd factor of the}\\ \text{conventional NMF, } \beta}} \qquad (25) $$
where $\mathbf{Y}_{h(1)}$ and $\mathbf{C}_{(1)}$ are the first-mode matricizations of the LRHSI ($\mathcal{Y}_h$) and the core tensor ($\mathcal{C}$), respectively. Equation (25) can be rewritten as $\mathbf{Y}_{h(1)} \approx \alpha\beta$, just like the conventional NMF, for which the update rules are given by [49]:
$$ \alpha \leftarrow \alpha \odot \frac{\mathbf{Y}_{h(1)}\, \beta^{T}}{\alpha\beta\beta^{T}}, \qquad \beta \leftarrow \beta \odot \frac{\alpha^{T}\mathbf{Y}_{h(1)}}{\alpha^{T}\alpha\beta} \qquad (26) $$
Equation (25) can thus be treated as a conventional NMF, where each factor is updated using the MUR (26):
$$ \mathbf{W}_h \leftarrow \mathbf{W}_h \odot \frac{\mathbf{Y}_{h(1)} \left[\mathbf{C}_{(1)} (\mathbf{S} \otimes \mathbf{H}_h)^{T}\right]^{T}}{\mathbf{W}_h\, \mathbf{C}_{(1)} (\mathbf{S} \otimes \mathbf{H}_h)^{T} \left[\mathbf{C}_{(1)} (\mathbf{S} \otimes \mathbf{H}_h)^{T}\right]^{T}} \qquad (27) $$
where the fraction line denotes element-wise division and $\odot$ denotes the element-wise (Hadamard) product.
Similarly, the second-mode matricization of $\mathcal{Y}_h$ is given by:
$$ \mathbf{Y}_{h(2)} \;\approx\; \mathbf{H}_h\, \mathbf{C}_{(2)} (\mathbf{S} \otimes \mathbf{W}_h)^{T} \qquad (28) $$
and the second mode-dictionary matrix is updated as:
$$ \mathbf{H}_h \leftarrow \mathbf{H}_h \odot \frac{\mathbf{Y}_{h(2)} \left[\mathbf{C}_{(2)} (\mathbf{S} \otimes \mathbf{W}_h)^{T}\right]^{T}}{\mathbf{H}_h\, \mathbf{C}_{(2)} (\mathbf{S} \otimes \mathbf{W}_h)^{T} \left[\mathbf{C}_{(2)} (\mathbf{S} \otimes \mathbf{W}_h)^{T}\right]^{T}} \qquad (29) $$
Finally, the third-mode matricization of $\mathcal{Y}_h$ is given by $\mathbf{Y}_{h(3)} \approx \mathbf{S}\, \mathbf{C}_{(3)} (\mathbf{H}_h \otimes \mathbf{W}_h)^{T}$. Therefore, the spectral mode-dictionary $\mathbf{S}$ is updated as:
$$ \mathbf{S} \leftarrow \mathbf{S} \odot \frac{\mathbf{Y}_{h(3)} \left[\mathbf{C}_{(3)} (\mathbf{H}_h \otimes \mathbf{W}_h)^{T}\right]^{T}}{\mathbf{S}\, \mathbf{C}_{(3)} (\mathbf{H}_h \otimes \mathbf{W}_h)^{T} \left[\mathbf{C}_{(3)} (\mathbf{H}_h \otimes \mathbf{W}_h)^{T}\right]^{T}} \qquad (30) $$
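As an illustration, the update (27) takes only a few lines of NumPy once the unfold helper from the sketch in Section 2 is available; the epsilon guarding the element-wise division is our own safeguard, not part of the derivation.

```python
import numpy as np
# Reuses the hypothetical unfold() helper from the sketch in Section 2.

def update_width_dictionary(Y_h, C, W_h, H_h, S, eps=1e-9):
    """One multiplicative update of the width mode-dictionary W_h, Eq. (27)."""
    B = unfold(C, 0) @ np.kron(S, H_h).T        # C_(1) (S kron H_h)^T
    numer = unfold(Y_h, 0) @ B.T                # Y_h(1) B^T
    denom = W_h @ B @ B.T + eps                 # W_h B B^T
    return W_h * numer / denom                  # element-wise multiply and divide
```

The updates (29) and (30) follow the same pattern with the second- and third-mode matricizations.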

4.2. Updating Core Tensor

Applying (8) to (25):
$$ \mathrm{vec}(\mathbf{Y}_{h(1)}) = \mathrm{vec}\!\left(\mathbf{W}_h \mathbf{C}_{(1)} (\mathbf{S} \otimes \mathbf{H}_h)^{T}\right) = \underbrace{(\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)}_{\text{1st factor of the conventional NMF}} \; \underbrace{\mathrm{vec}(\mathbf{C}_{(1)})}_{\text{2nd factor of the conventional NMF}} \qquad (31) $$
which can be treated as a conventional NMF as well. Incorporating the MUR to update the core tensor ($\mathcal{C}$):
$$ \mathrm{vec}(\mathbf{C}_{(1)}) \leftarrow \mathrm{vec}(\mathbf{C}_{(1)}) \odot \frac{(\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)^{T}\,\mathrm{vec}(\mathbf{Y}_{h(1)})}{(\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)^{T}(\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)\,\mathrm{vec}(\mathbf{C}_{(1)})} \qquad (32) $$
Applying (8) to the numerator of (32):
$$ (\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)^{T}\,\mathrm{vec}(\mathbf{Y}_{h(1)}) = \left((\mathbf{S} \otimes \mathbf{H}_h)^{T} \otimes \mathbf{W}_h^{T}\right)\mathrm{vec}(\mathbf{Y}_{h(1)}) = \mathrm{vec}\!\left(\mathbf{W}_h^{T}\,\mathbf{Y}_{h(1)}\,(\mathbf{S}^{T} \otimes \mathbf{H}_h^{T})^{T}\right) = \mathrm{vec}\!\left(\left(\mathcal{Y}_h \times_1 \mathbf{W}_h^{T} \times_2 \mathbf{H}_h^{T} \times_3 \mathbf{S}^{T}\right)_{(1)}\right) \qquad (33) $$
and its denominator is given by:
$$ \begin{aligned} (\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)^{T}(\mathbf{S} \otimes \mathbf{H}_h \otimes \mathbf{W}_h)\,\mathrm{vec}(\mathbf{C}_{(1)}) &= \left((\mathbf{S} \otimes \mathbf{H}_h)^{T} \otimes \mathbf{W}_h^{T}\right)\left((\mathbf{S} \otimes \mathbf{H}_h) \otimes \mathbf{W}_h\right)\mathrm{vec}(\mathbf{C}_{(1)}) \\ &= \left[\left((\mathbf{S} \otimes \mathbf{H}_h)^{T}(\mathbf{S} \otimes \mathbf{H}_h)\right) \otimes \left(\mathbf{W}_h^{T}\mathbf{W}_h\right)\right]\mathrm{vec}(\mathbf{C}_{(1)}) \\ &= \mathrm{vec}\!\left(\left(\mathbf{W}_h^{T}\mathbf{W}_h\right)\mathbf{C}_{(1)}\left(\mathbf{S}^{T}\mathbf{S} \otimes \mathbf{H}_h^{T}\mathbf{H}_h\right)\right) \\ &= \mathrm{vec}\!\left(\left(\mathcal{C} \times_1 \mathbf{W}_h^{T}\mathbf{W}_h \times_2 \mathbf{H}_h^{T}\mathbf{H}_h \times_3 \mathbf{S}^{T}\mathbf{S}\right)_{(1)}\right) \end{aligned} \qquad (34) $$
As a result, the core tensor $\mathcal{C}$ is updated by:
$$ \mathcal{C} \leftarrow \mathcal{C} \odot \frac{\mathcal{Y}_h \times_1 \mathbf{W}_h^{T} \times_2 \mathbf{H}_h^{T} \times_3 \mathbf{S}^{T}}{\mathcal{C} \times_1 \mathbf{W}_h^{T}\mathbf{W}_h \times_2 \mathbf{H}_h^{T}\mathbf{H}_h \times_3 \mathbf{S}^{T}\mathbf{S}} \qquad (35) $$
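Since the update (35) only involves mode-$n$ products, it can be sketched directly with the mode_dot helper from the sketch in Section 2; as before, the epsilon in the denominator is our own numerical safeguard.

```python
import numpy as np
# Reuses the hypothetical mode_dot() helper from the sketch in Section 2.

def update_core(Y_h, C, W_h, H_h, S, eps=1e-9):
    """One multiplicative update of the core tensor C, Eq. (35)."""
    numer = mode_dot(mode_dot(mode_dot(Y_h, W_h.T, 0), H_h.T, 1), S.T, 2)
    denom = mode_dot(mode_dot(mode_dot(C, W_h.T @ W_h, 0), H_h.T @ H_h, 1), S.T @ S, 2)
    return C * numer / (denom + eps)            # element-wise multiply and divide
```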
When performing the MUR for $\mathcal{Y}_m$ in a similar way, the following update relations are obtained:
$$ \mathbf{W} \leftarrow \mathbf{W} \odot \frac{\mathbf{Y}_{m(1)} \left[\mathbf{C}_{(1)} (\mathbf{S}_m \otimes \mathbf{H})^{T}\right]^{T}}{\mathbf{W}\,\mathbf{C}_{(1)} (\mathbf{S}_m \otimes \mathbf{H})^{T}\left[\mathbf{C}_{(1)} (\mathbf{S}_m \otimes \mathbf{H})^{T}\right]^{T}} \qquad (36) $$
$$ \mathbf{H} \leftarrow \mathbf{H} \odot \frac{\mathbf{Y}_{m(2)} \left[\mathbf{C}_{(2)} (\mathbf{S}_m \otimes \mathbf{W})^{T}\right]^{T}}{\mathbf{H}\,\mathbf{C}_{(2)} (\mathbf{S}_m \otimes \mathbf{W})^{T}\left[\mathbf{C}_{(2)} (\mathbf{S}_m \otimes \mathbf{W})^{T}\right]^{T}} \qquad (37) $$
$$ \mathbf{S}_m \leftarrow \mathbf{S}_m \odot \frac{\mathbf{Y}_{m(3)} \left[\mathbf{C}_{(3)} (\mathbf{H} \otimes \mathbf{W})^{T}\right]^{T}}{\mathbf{S}_m\,\mathbf{C}_{(3)} (\mathbf{H} \otimes \mathbf{W})^{T}\left[\mathbf{C}_{(3)} (\mathbf{H} \otimes \mathbf{W})^{T}\right]^{T}} \qquad (38) $$
$$ \mathcal{C} \leftarrow \mathcal{C} \odot \frac{\mathcal{Y}_m \times_1 \mathbf{W}^{T} \times_2 \mathbf{H}^{T} \times_3 \mathbf{S}_m^{T}}{\mathcal{C} \times_1 \mathbf{W}^{T}\mathbf{W} \times_2 \mathbf{H}^{T}\mathbf{H} \times_3 \mathbf{S}_m^{T}\mathbf{S}_m} \qquad (39) $$
As the LRHSI and HRMSI contain, respectively, the spectral and spatial information of the target image, the spectral mode-dictionary $\mathbf{S}$ is initialized using the former, and the spatial mode-dictionary matrices of the height $\mathbf{H}$ and width $\mathbf{W}$ are initialized using the latter. The spectral mode-dictionary $\mathbf{S}$ is initialized using the simplex identification split augmented Lagrangian (SISAL) algorithm [50], which efficiently identifies a minimum volume simplex containing the LRHSI spectral vectors. The spatial mode-dictionary matrices $\mathbf{W}$ and $\mathbf{H}$ are initialized from the mode-one and mode-two matricizations of the HRMSI, respectively, via dictionary update cycles of the KSVD method [51]. The core tensor $\mathcal{C}$ is initialized using the ADMM framework presented in [9].
To fully exploit its spectral information, the proposed CNTD algorithm starts by applying the NTD to the LRHSI. $\mathbf{W}_h$ and $\mathbf{H}_h$ are initialized by (20) and (21), respectively, to inherit the reliable spatial information from the HRMSI. Then, $\mathbf{W}_h$, $\mathbf{H}_h$, $\mathbf{S}$ and $\mathcal{C}$ are alternately updated by (27), (29), (30), and (35), respectively, each while keeping the other variables fixed, until convergence of the objective function in (23).
The next step of the proposed algorithm is to apply the NTD to the HRMSI. $\mathbf{S}_m$ is initialized by (22), to exploit the spectral information of the LRHSI. In this optimization phase, $\mathbf{W}$, $\mathbf{H}$, $\mathbf{S}_m$ and $\mathcal{C}$ are alternately updated using (36)–(39), each while keeping the other variables fixed, until convergence of the objective function in (24). The super-resolution HSI is then calculated using the estimated core tensor and mode-dictionary matrices. Algorithm 1 gives the pseudocode of the proposed CNTD algorithm.
Algorithm 1: The proposed coupled non-negative tensor decomposition method.
Input: LRHSI ($\mathcal{Y}_h$), HRMSI ($\mathcal{Y}_m$).
Output: HRHSI ($\mathcal{Z}$).
Estimate the PSF ($\mathbf{P}_1$, $\mathbf{P}_2$) and SRF ($\mathbf{P}_3$) using the method from [19].
Initialize the core tensor ($\mathcal{C}$) via ADMM [9], the spectral mode-dictionary ($\mathbf{S}$) via SISAL [50], and the spatial mode-dictionaries ($\mathbf{W}$, $\mathbf{H}$) via DUC KSVD [51].
NTD for $\mathcal{Y}_h$:
  Initialize $\mathbf{W}_h$, $\mathbf{H}_h$ by (20), (21), respectively.
  Update $\mathbf{W}_h$, $\mathbf{H}_h$, $\mathbf{S}$ and $\mathcal{C}$ alternately by (27), (29), (30) and (35), respectively, until convergence of the objective function in (23).
NTD for $\mathcal{Y}_m$:
  Initialize $\mathbf{S}_m$ by (22).
  Update $\mathbf{W}$, $\mathbf{H}$, $\mathbf{S}_m$ and $\mathcal{C}$ alternately by (36)–(39) until convergence of the objective function in (24).
Use the estimated $\mathbf{W}$, $\mathbf{H}$, $\mathbf{S}$ and $\mathcal{C}$ to calculate the HRHSI ($\mathcal{Z}$) via the Tucker representation (16).
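The sketch below strings these pieces together into a compact, simplified version of Algorithm 1. It reuses the unfold/mode_dot helpers from the sketch in Section 2, replaces the SISAL, KSVD and ADMM initializations with random non-negative factors, and runs a fixed number of iterations instead of monitoring the objective functions, so it should be read as an illustration of the alternating scheme rather than the authors' implementation.

```python
import numpy as np
# Reuses the hypothetical unfold() and mode_dot() helpers from Section 2.

def mur_step(Y, C, A, B, D, eps=1e-9):
    """One pass of multiplicative updates for the model Y ~ C x1 A x2 B x3 D."""
    def upd(mode, M, M1, M2):
        K = unfold(C, mode) @ np.kron(M2, M1).T               # C_(n) (M2 kron M1)^T
        return M * (unfold(Y, mode) @ K.T) / (M @ K @ K.T + eps)
    A = upd(0, A, B, D)                                       # width-mode dictionary
    B = upd(1, B, A, D)                                       # height-mode dictionary
    D = upd(2, D, A, B)                                       # spectral-mode dictionary
    numer = mode_dot(mode_dot(mode_dot(Y, A.T, 0), B.T, 1), D.T, 2)
    denom = mode_dot(mode_dot(mode_dot(C, A.T @ A, 0), B.T @ B, 1), D.T @ D, 2)
    return C * numer / (denom + eps), A, B, D                 # core tensor updated last

def cntd(Y_h, Y_m, P1, P2, P3, n_w, n_h, n_s, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((Y_m.shape[0], n_w))
    H = rng.random((Y_m.shape[1], n_h))
    S = rng.random((Y_h.shape[2], n_s))
    C = rng.random((n_w, n_h, n_s))
    # Stage 1: NTD of the LRHSI; spatial dictionaries inherited via (20)-(21)
    W_h, H_h = P1 @ W, P2 @ H
    for _ in range(n_iter):
        C, W_h, H_h, S = mur_step(Y_h, C, W_h, H_h, S)
    # Stage 2: NTD of the HRMSI; spectral dictionary inherited via (22)
    S_m = P3 @ S
    for _ in range(n_iter):
        C, W, H, S_m = mur_step(Y_m, C, W, H, S_m)
    # Reconstruction of the HRHSI via the Tucker representation (16)
    return mode_dot(mode_dot(mode_dot(C, W, 0), H, 1), S, 2)
```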

5. Computational Complexity

In this section, we analyze the computational complexity of the proposed method. According to Algorithm 1, the proposed method includes two sub-optimization problems, which use MURs to estimate the NTD of $\mathcal{Y}_h$ and $\mathcal{Y}_m$. Each sub-optimization problem mainly contains four updating steps. In each step, the heaviest part is the multiplication of the matricized core tensor with the result of the Kronecker product. For optimizing the width dictionary, this term is given by $\mathbf{C}_{(1)}(\mathbf{S} \otimes \mathbf{H}_h)^{T}$, which has a complexity order of $O(n_w n_h n_s S h)$. For the height dictionary, the term $\mathbf{C}_{(2)}(\mathbf{S} \otimes \mathbf{W}_h)^{T}$ has a complexity order of $O(n_w n_s n_h S w)$. For the spectral dictionary, the term $\mathbf{C}_{(3)}(\mathbf{H}_h \otimes \mathbf{W}_h)^{T}$ has a complexity order of $O(n_w n_s n_h h w)$. Finally, the highest complexity order for the core tensor is $O(n_w n_s^2 n_h^2)$. Similarly, the heaviest parts of the second sub-optimization step have complexity orders $O(n_w n_h n_s s H)$, $O(n_w n_s n_h s W)$, $O(n_w n_s n_h H W)$, and $O(n_w n_s^2 n_h^2)$ for $\mathbf{W}$, $\mathbf{H}$, $\mathbf{S}_m$, and $\mathcal{C}$, respectively. Given that the LRHSI and HRMSI are the spatially and spectrally down-sampled versions of the HRHSI, respectively, $W$, $H$ and $S$ are multiples of $w$, $h$ and $s$, respectively. Therefore, the overall computational complexity of the proposed algorithm can be expressed as:
$$ O(n_w n_h n_s H S) + O(n_w n_h n_s W S) + O(n_w n_h n_s W H) + O(n_w n_h^2 n_s^2) \qquad (40) $$
From (40), one can observe that the overall computational complexity is linear in the size of the HSI cube ($W$, $H$, $S$). This is the same as that of the conventional NMF algorithm, owing to the fact that each update step of the proposed CNTD method can be considered as an NMF problem. Of note, the complexity of the proposed method is lower than that of the other state-of-the-art tensor factorization methods [4,9,23].

6. Experimental Observations and Results

6.1. Data Sets

The proposed CNTD method is evaluated on two well-known data sets, depicted in Figure 3. The first data set is the Pavia University image [52], captured by the Reflective Optics System Imaging Spectrometer (ROSIS) optical sensor over the University of Pavia, Italy. The reference HRHSI is a 120 × 120 × 93 image with a 1.3 m per pixel spatial resolution, covering the wavelength domain from 430 to 860 nm; bands affected by low SNR and water vapor absorption were removed. The 30 × 30 × 93 LRHSI is produced by applying a Gaussian spatial blurring filter to each band of the reference image and down-sampling by a factor of four in both the width and height directions. The HRMSI of size 120 × 120 × 4 is produced by filtering the reference image with the IKONOS-like reflectance spectral response function depicted in Figure 4 (see [19] for more details about the SRF and PSF).
The Indian Pines image is the second test data set. It was acquired by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [53] over the Indian Pines scene in North-Western Indiana. The reference image of size 120 × 120 × 224 covers the wavelength domain from 400 nm to 2500 nm with a 20 m per pixel spatial resolution. We reduced the number of bands to 185 after removing the water absorption and noisy bands (1–4, 104–115, 150–170, 223 and 224). The LRHSI of size 30 × 30 × 185 was constructed by applying the same blurring filter and down-sampling as for the first data set. The LANDSAT 7-like spectral response function, depicted in Figure 4, is used to produce a HRMSI of size 120 × 120 × 6. The reference image, LRHSI and HRMSI of both data sets are depicted in Figure 3.

6.2. Evaluation Criteria

The performance of the proposed method is evaluated using five different indices. The first index, which is a measure of spectral distortion, is the spectral angle mapper (SAM) in degrees. The angle between the pixel spectral responses $\hat{\mathbf{z}}_j$ and $\mathbf{z}_j$ of the estimated HRHSI ($\hat{\mathcal{Z}}$) and the reference image ($\mathcal{Z}$) is calculated and then averaged over all pixels. It is expressed as:
$$ \mathrm{SAM}(\mathcal{Z}, \hat{\mathcal{Z}}) = \frac{1}{WH} \sum_{j=1}^{WH} \arccos\!\left( \frac{\hat{\mathbf{z}}_j^{T} \mathbf{z}_j}{\|\hat{\mathbf{z}}_j\|_2 \, \|\mathbf{z}_j\|_2} \right) \qquad (41) $$
the ideal SAM value being zero.
The second index is the root mean squared error (RMSE), which evaluates the quality of the estimated HRHSI ($\hat{\mathcal{Z}}$) compared to the reference image ($\mathcal{Z}$). It is defined as:
$$ \mathrm{RMSE}(\mathcal{Z}, \hat{\mathcal{Z}}) = \sqrt{\frac{\|\mathcal{Z} - \hat{\mathcal{Z}}\|_F^{2}}{WHS}} \qquad (42) $$
The third evaluation index is the relative dimensionless global error in synthesis (ERGAS), which measures the spectral quality of the estimated HRHSI, and is defined as:
$$ \mathrm{ERGAS} = 100 \sqrt{\frac{wh}{WH}} \sqrt{\frac{1}{S} \sum_{i=1}^{S} \left( \frac{\mathrm{RMSE}(\hat{\mathcal{Z}}_{i,:}, \mathcal{Z}_{i,:})}{\mu_{\mathcal{Z}_{i,:}}} \right)^{2}} \qquad (43) $$
where $\hat{\mathcal{Z}}_{i,:}$ and $\mathcal{Z}_{i,:}$ denote the $i$th band of $\hat{\mathcal{Z}}$ and $\mathcal{Z}$, respectively, and $\mu_{\mathcal{Z}_{i,:}}$ is the mean of $\mathcal{Z}_{i,:}$. A lower ERGAS value means a lower spectral distortion between the estimated HRHSI ($\hat{\mathcal{Z}}$) and the reference image ($\mathcal{Z}$). In the case of perfect reconstruction, it is zero.
The degree of distortion (DD) is the fourth index, defined as:
$$ \mathrm{DD}(\mathcal{Z}, \hat{\mathcal{Z}}) = \frac{1}{WHS} \left\| \mathrm{vec}(\mathcal{Z}) - \mathrm{vec}(\hat{\mathcal{Z}}) \right\|_1 \qquad (44) $$
where $\|\cdot\|_1$ is the $\ell_1$ norm, and $\mathrm{vec}(\mathcal{Z})$ and $\mathrm{vec}(\hat{\mathcal{Z}})$ are the vectorizations of the tensors $\mathcal{Z}$ and $\hat{\mathcal{Z}}$, respectively. The smaller the value of DD, the lower the spectral distortion.
The fifth index is the universal image quality index (UIQI) [54]. It is calculated by averaging over $32 \times 32$ windows. The UIQI between the $i$th bands of $\hat{\mathcal{Z}}$ and $\mathcal{Z}$ is calculated by:
$$ \mathrm{UIQI}(\mathcal{Z}^{i}, \hat{\mathcal{Z}}^{i}) = \frac{1}{d} \sum_{j=1}^{d} \frac{\sigma_{\mathcal{Z}_j^{i} \hat{\mathcal{Z}}_j^{i}}}{\sigma_{\mathcal{Z}_j^{i}} \sigma_{\hat{\mathcal{Z}}_j^{i}}} \cdot \frac{2\,\mu_{\mathcal{Z}_j^{i}} \mu_{\hat{\mathcal{Z}}_j^{i}}}{\mu_{\mathcal{Z}_j^{i}}^{2} + \mu_{\hat{\mathcal{Z}}_j^{i}}^{2}} \cdot \frac{2\,\sigma_{\mathcal{Z}_j^{i}} \sigma_{\hat{\mathcal{Z}}_j^{i}}}{\sigma_{\mathcal{Z}_j^{i}}^{2} + \sigma_{\hat{\mathcal{Z}}_j^{i}}^{2}} \qquad (45) $$
where $d$ is the number of windows, $\hat{\mathcal{Z}}_j^{i}$ and $\mathcal{Z}_j^{i}$ denote the $j$th window of the $i$th band of $\hat{\mathcal{Z}}$ and $\mathcal{Z}$, respectively, $\sigma_{\mathcal{Z}_j^{i}\hat{\mathcal{Z}}_j^{i}}$ is the sample covariance between $\mathcal{Z}_j^{i}$ and $\hat{\mathcal{Z}}_j^{i}$, and $\mu_{\mathcal{Z}_j^{i}}$ and $\sigma_{\mathcal{Z}_j^{i}}$ are the mean and standard deviation of $\mathcal{Z}_j^{i}$, respectively. The UIQI between $\hat{\mathcal{Z}}$ and $\mathcal{Z}$ is the average UIQI value over all bands:
$$ \mathrm{UIQI}(\mathcal{Z}, \hat{\mathcal{Z}}) = \frac{1}{S} \sum_{i=1}^{S} \mathrm{UIQI}(\mathcal{Z}_{i,:}, \hat{\mathcal{Z}}_{i,:}) \qquad (46) $$
The ideal UIQI value is one.
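For completeness, most of these indices can be computed with a few NumPy routines. The sketch below assumes the reference and estimated images are stored as (W, H, S) arrays, covers SAM, RMSE, ERGAS and DD (UIQI additionally requires the windowed statistics), and is our own illustrative implementation rather than the evaluation code used in the paper.

```python
import numpy as np

def sam_degrees(Z, Z_hat, eps=1e-12):
    """Spectral angle mapper in degrees, averaged over all pixels."""
    A = Z.reshape(-1, Z.shape[2])
    B = Z_hat.reshape(-1, Z.shape[2])
    cos = np.sum(A * B, axis=1) / (np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def rmse(Z, Z_hat):
    """Root mean squared error over the whole cube."""
    return np.sqrt(np.sum((Z - Z_hat) ** 2) / Z.size)

def ergas(Z, Z_hat, ratio=4, eps=1e-12):
    """ERGAS; `ratio` is the spatial down-sampling factor, i.e. 1/ratio = sqrt(wh/WH)."""
    band_rmse = np.sqrt(np.mean((Z - Z_hat) ** 2, axis=(0, 1)))
    band_mean = np.mean(Z, axis=(0, 1))
    return 100.0 / ratio * np.sqrt(np.mean((band_rmse / (band_mean + eps)) ** 2))

def dd(Z, Z_hat):
    """Degree of distortion: mean absolute difference."""
    return np.mean(np.abs(Z - Z_hat))
```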
All of the experiments were performed using MATLAB version R2016a and were run on a computer with an Intel Core i5 central processing unit (CPU) at 3.4 GHz and 32 GB of random access memory (RAM).

6.3. Evaluation of the Parameters

In order to evaluate the sensitivity of the proposed CNTD approach with respect to its essential parameters, i.e., the numbers of mode (width, height, and spectral) dictionary atoms $n_w$, $n_h$ and $n_s$, the proposed method has been run for different numbers of mode-dictionary atoms. Figure 5a–c shows the RMSE of the estimated Pavia University and Indian Pines data sets as functions of the number of mode-dictionary atoms $n_w$, $n_h$ and $n_s$, respectively. As Figure 5a,b shows, the RMSE of both data sets strongly declines when $n_w$ and $n_h$ increase from 5 to 200. After that, the RMSE does not improve any further. Therefore, we set the values of $n_w$ and $n_h$ to 167 for both data sets in all remaining experiments. As can be seen from Figure 5c, the RMSE for Pavia University decreases while $n_s$ increases from 5 to 40. For Indian Pines, the RMSE decreases as $n_s$ increases from 5 to 100, after which it does not decrease any further. Hence, we set $n_s$ to 30 for both data sets. The proposed CNTD method requires larger width and height mode-dictionaries than the spectral mode-dictionary because HSI spectral vectors generally lie in a much lower-dimensional subspace.

6.4. Comparison with State of the Art Fusion Methods

The proposed fusion method is compared with state-of-the-art methods. In order to see the impact of the non-negativity priors, the CNTD method is compared with the tensor decomposition methods CSTF [9] and NLSTF [23], which do not include non-negativity. Additionally, the proposed CNTD approach is compared with the well-known matrix framework CNMF [21], aiming to evaluate the ability of the proposed CNTD approach to preserve the spatio-spectral structure. Furthermore, our Tucker based method is compared with a CP tensor decomposition method, referred to as STEREO [43]. Finally, we compare with the two-branch CNN method of [27].
The experiments validate the superiority of the proposed CNTD method on three aspects: the advantage of non-negative priors, the ability to preserve the spatio-spectral structure, and the computational complexity. RMSE, SAM, DD, ERGAS and UIQI for all approaches are shown in Table 2 and Table 3 for the Pavia University and Indian Pines data sets, respectively. To evaluate the computational complexity of the proposed method in comparison with the matrix-based method and the other tensor frameworks, the computation times of CNMF, CSTF, NLSTF and the proposed CNTD are shown in Table 4. The best values are depicted in bold.
It can be observed from Table 2 and Table 3 that the proposed method outperforms the other competing methods in terms of the RMSE, DD, and UIQI indices, and shows promising results for the other indices. The proposed method outperforms CNMF, CNN and STEREO in almost all of the indices on both data sets. Of note, the efficiency of the CNN method highly depends on the training sample rate, while a huge amount of hyperspectral training data is practically unavailable. Furthermore, the proposed CNTD method outperforms CSTF for most indices on both data sets. It also has better values of RMSE, DD, and UIQI than NLSTF.
As can be observed from Table 4, the proposed method is much more computationally efficient than the competing tensor-based approaches, and is comparable to the CNMF method. This agrees with the detailed description of the computational complexity in Section 5. In contrast to the proposed method, for some of the state-of-the-art methods, such as CSTF and NLSTF, the complexity increases faster than linearly with the size of the HSI cube.
In order to validate the performance with respect to preserving spatial structures, in Figure 6, band 30 of the LRHSI and the estimated HRHSI with CNMF, CSTF, NLSTF, CNN, STEREO and the proposed CNTD are compared with the reference HRHSI. It can be observed that the proposed CNTD approach can correctly estimate most of the spatial details of the HRHSI, though there are a few distortions in the fusion results. Additionally, the error images of band 30, which reflect the differences between the estimated HRHSI and the reference image of both data sets are shown in Figure 7. The error images of the LRHSI, CNMF, CSTF, NLSTF, CNN, STEREO and the proposed CNTD are depicted. The proposed approach estimates spatial details of the HRHSI with much lower error than the competing methods. With CNMF and CNN, the edge structures of the HSI are lost, while CSTF, NLSTF and STEREO suffer from errors in homogeneous regions. The proposed approach performs better in preserving the spatial structures of HSIs at both edges and homogeneous regions.

7. Conclusions

The main objective of this paper was to extend the matrix formulation of non-negative matrix factorization to a tensor framework for the purpose of hyperspectral and multispectral image fusion. We proposed a coupled non-negative tensor decomposition approach that can be treated as a conventional NMF-based model. The proposed approach performs a Tucker tensor factorization of a LRHSI and a HRMSI under the constraint of non-negative tensor decomposition. Unlike other state-of-the-art methods, the complexity of the proposed approach is linear in the size of the HSI cube. The proposed approach gives competitive results compared with the state-of-the-art fusion approaches. As a future plan, we will incorporate prior information, such as spectral self-similarity, sparsity, smoothness and local consistency, in the non-negative tensor decomposition, in order to find better, unique basis vectors for the Tucker representation. Furthermore, in this paper, we assumed the SRF and PSF to be known and the two input images to be registered; we will try to overcome these limitations in our future work.

Author Contributions

M.Z.: conceptualization, methodology, software, writing, review and editing. M.S.H.: supervision. K.K. and P.S.: investigation, writing, review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Jose Bioucas-Dias from the Instituto de Telecomunicações and Instituto Superior Técnico, Universidade de Lisboa, for sharing the IKONOS-like reflectance spectral responses, and J. Inglada from the Centre National d'Études Spatiales for providing the LANDSAT spectral responses used in the experiments. The authors also highly appreciate the time and consideration of the editors and the anonymous referees for their constructive suggestions that greatly improved the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Dian, R.; Fang, L.; Li, S. Hyperspectral image super-resolution via non-local sparse tensor factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5344–5353.
2. Kwan, C.; Choi, J.H.; Chan, S.H.; Zhou, J.; Budavari, B. A super-resolution and fusion approach to enhancing hyperspectral images. Remote Sens. 2018, 10, 1416.
3. Dian, R.; Li, S.; Fang, L.; Bioucas-Dias, J. Hyperspectral image super-resolution via local low-rank and sparse representations. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4003–4006.
4. Dian, R.; Li, S.; Fang, L.; Wei, Q. Multispectral and hyperspectral image fusion with spatial-spectral sparse representation. Inf. Fusion 2019, 49, 262–270.
5. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Trans. Image Process. 2016, 25, 2337–2352.
6. Ghasrodashti, E.; Karami, A.; Heylen, R.; Scheunders, P. Spatial resolution enhancement of hyperspectral images using spectral unmixing and Bayesian sparse representation. Remote Sens. 2017, 9, 541.
7. Joshi, M.V.; Upla, K.P. Multi-Resolution Image Fusion in Remote Sensing; Cambridge University Press: Cambridge, UK, 2019.
8. Lanaras, C.; Baltsavias, E.; Schindler, K. Hyperspectral super-resolution with spectral unmixing constraints. Remote Sens. 2017, 9, 1196.
9. Li, S.; Dian, R.; Fang, L.; Bioucas-Dias, J.M. Fusing hyperspectral and multispectral images via coupled sparse tensor factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130.
10. Li, X.; Tian, L.; Zhao, X.; Chen, X. A super resolution approach for spectral unmixing of remote sensing images. Int. J. Remote Sens. 2011, 32, 6091–6107.
11. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.-Y. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668.
12. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.-Y.; Chen, M.; Godsill, S. Multiband image fusion based on spectral unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7236–7249.
13. Yang, J.; Li, Y.; Chan, J.; Shen, Q. Image fusion for spatial enhancement of hyperspectral image via pixel group based non-local sparse representation. Remote Sens. 2017, 9, 53.
14. Akhtar, N.; Shafait, F.; Mian, A. Bayesian sparse representation for hyperspectral image super resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3631–3640.
15. Zare, M.; Helfroush, M.S.; Kazemi, K. Fusing hyperspectral and multispectral images using smooth graph signal modelling. Int. J. Remote Sens. 2020, 41, 8610–8630.
16. Hardie, R.C.; Eismann, M.T.; Wilson, G.L. MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor. IEEE Trans. Image Process. 2004, 13, 1174–1184.
17. Nezhad, Z.H.; Karami, A.; Heylen, R.; Scheunders, P. Fusion of hyperspectral and multispectral images using spectral unmixing and sparse coding. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2377–2389.
18. Wei, Q.; Dobigeon, N.; Tourneret, J.-Y. Bayesian fusion of multi-band images. IEEE J. Sel. Top. Signal Process. 2015, 9, 1117–1127.
19. Simões, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2014, 53, 3373–3388.
20. Veganzones, M.A.; Simoes, M.; Licciardi, G.; Yokoya, N.; Bioucas-Dias, J.M.; Chanussot, J. Hyperspectral super-resolution of locally low rank images from complementary multisource data. IEEE Trans. Image Process. 2015, 25, 274–288.
21. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537.
22. Dian, R.; Li, S.; Fang, L. Learning a low tensor-train rank representation for hyperspectral image super-resolution. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 2672–2683.
23. Dian, R.; Li, S.; Fang, L.; Lu, T.; Bioucas-Dias, J.M. Nonlocal sparse tensor factorization for semiblind hyperspectral and multispectral image fusion. IEEE Trans. Cybern. 2019, 50, 4469–4480.
24. Xu, Y.; Wu, Z.; Chanussot, J.; Comon, P.; Wei, Z. Nonlocal coupled tensor CP decomposition for hyperspectral and multispectral image fusion. IEEE Trans. Geosci. Remote Sens. 2020, 58, 348–362.
25. Zhang, G.; Fu, X.; Huang, K.; Wang, J. Hyperspectral super-resolution: A coupled nonnegative block-term tensor decomposition approach. In Proceedings of the 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Guadeloupe, France, 15–18 December 2019; pp. 470–474.
26. Sun, W.; Ren, K.; Meng, X.; Xiao, C.; Yang, G.; Peng, J. A band divide-and-conquer multispectral and hyperspectral image fusion method. IEEE Trans. Geosci. Remote Sens. 2021.
27. Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W. Hyperspectral and multispectral image fusion via deep two-branches convolutional neural network. Remote Sens. 2018, 10, 800.
28. Zhang, L.; Nie, J.; Wei, W.; Li, Y.; Zhang, Y. Deep blind hyperspectral image super-resolution. IEEE Trans. Neural Netw. Learn. Syst. 2021.
29. Loncan, L.; De Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simoes, M. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
30. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
31. Jouni, M.; Dalla Mura, M.; Comon, P. Hyperspectral image classification based on mathematical morphology and using tensor CP decomposition. Math. Morphol. Theory Appl. 2019, 1, 1–30.
32. Jouni, M.; Dalla Mura, M.; Comon, P. Classification of hyperspectral images as tensors using nonnegative CP decomposition. In Proceedings of the International Symposium on Mathematical Morphology and Its Applications to Signal and Image Processing, Saarbrücken, Germany, 8–10 July 2019; pp. 189–201.
33. Zhao, G.; Tu, B.; Fei, H.; Li, N.; Yang, X. Spatial-spectral classification of hyperspectral image via group tensor decomposition. Neurocomputing 2018, 316, 68–77.
34. Das, S. Hyperspectral image, video compression using sparse Tucker tensor decomposition. IET Image Process. 2021, 15, 964–973.
35. Fang, L.; He, N.; Lin, H. CP tensor-based compression of hyperspectral images. JOSA A 2017, 34, 252–258.
36. Huang, F.; Yu, Y.; Feng, T. Hyperspectral remote sensing image change detection based on tensor and deep learning. J. Vis. Commun. Image Represent. 2019, 58, 233–244.
37. Tan, J.; Zhang, J.; Zhang, Y. Target detection for polarized hyperspectral images based on tensor decomposition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 674–678.
38. Li, S.; Wang, W.; Qi, H.; Ayhan, B.; Kwan, C.; Vance, S. Low-rank tensor decomposition based anomaly detection for hyperspectral imagery. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 4525–4529.
39. Fan, H.; Li, C.; Guo, Y.; Kuang, G.; Ma, J. Spatial–spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6196–6213.
40. Zhang, H.; Liu, L.; He, W.; Zhang, L. Hyperspectral image denoising with total variation regularization and nonlocal low-rank tensor decomposition. IEEE Trans. Geosci. Remote Sens. 2019, 58, 3071–3084.
41. Qian, Y.; Xiong, F.; Zeng, S.; Zhou, J.; Tang, Y.Y. Matrix-vector nonnegative tensor factorization for blind unmixing of hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2017, 55, 1776–1792.
42. Kanatsoulis, C.I.; Fu, X.; Sidiropoulos, N.D.; Ma, W.-K. Hyperspectral super-resolution: Combining low rank tensor and matrix structure. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 3318–3322.
43. Kanatsoulis, C.I.; Fu, X.; Sidiropoulos, N.D.; Ma, W.-K. Hyperspectral super-resolution: A coupled tensor factorization approach. IEEE Trans. Signal Process. 2018, 66, 6503–6517.
44. Cichocki, A.; Mandic, D.; De Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, H.A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163.
45. Cichocki, A.; Zdunek, R.; Phan, A.H.; Amari, S.-I. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation; John Wiley & Sons: Hoboken, NJ, USA, 2009.
46. Zhou, G.; Cichocki, A. Fast and unique Tucker decompositions via multiway blind source separation. Bull. Pol. Acad. Sci. Tech. Sci. 2012, 60, 389–405.
47. Kim, Y.-D.; Choi, S. Nonnegative Tucker decomposition. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
48. Liu, X.; Xia, W.; Wang, B.; Zhang, L. An approach based on constrained nonnegative matrix factorization to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2010, 49, 757–772.
49. Burred, J.J. Detailed derivation of multiplicative update rules for NMF. Paris, France, 2014. Available online: https://www.semanticscholar.org/paper/Detailed-derivation-of-multiplicative-update-rules-Burred/3376b4df752f2428c451e530f9c6e0ce3a3f05e4 (accessed on 25 May 2021).
50. Bioucas-Dias, J.M. A variable splitting augmented Lagrangian approach to linear spectral unmixing. In Proceedings of the 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Grenoble, France, 26–28 August 2009; pp. 1–4.
51. Smith, L.N.; Elad, M. Improving dictionary learning: Multiple dictionary updates and coefficient reuse. IEEE Signal Process. Lett. 2012, 20, 79–82.
52. Dell'Acqua, F.; Gamba, P.; Ferrari, A.; Palmason, J.A.; Benediktsson, J.A.; Árnason, K. Exploiting spectral and spatial information in hyperspectral urban data with high resolution. IEEE Geosci. Remote Sens. Lett. 2004, 1, 322–326.
53. Green, R.O.; Eastwood, M.L.; Sarture, C.M.; Chrien, T.G.; Aronsson, M.; Chippendale, B.J.; Faust, J.A.; Pavri, B.E.; Chovit, C.J.; Solis, M. Imaging spectroscopy and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Remote Sens. Environ. 1998, 65, 227–248.
54. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
Figure 1. Tensor factorization; (a) 3-dimensional tensor, (b) Tucker tensor decomposition.
Figure 2. Illustration of the proposed CNTD method for hyperspectral and multispectral data fusion.
Figure 3. Composite color images of (a) HRMSI, (b) LRHSI, (c) reference images of the Pavia University data set (top row) and Indian Pines data set (bottom row).
Figure 4. Spectral response functions; (a) IKONOS-like spectral response function, (b) LANDSAT 7-like spectral response function.
Figure 5. The RMSE as a function of the number of atoms $n_w$, $n_h$ and $n_s$ for the proposed CNTD approach: (a) $n_w$; (b) $n_h$; (c) $n_s$.
Figure 6. Band 30 of the Pavia University (two top rows) and Indian Pines data sets (two bottom rows), respectively; the first and third rows: (a) LRHSI, (b) CNMF, (c) CSTF, (d) NLSTF. The second and fourth rows: (a) reference image, (b) CNN, (c) STEREO, (d) proposed CNTD.
Figure 7. The error images of the 30th band of the Pavia University (two top rows) and Indian Pines data sets (two bottom rows); the first and third rows show the error images of (a) LRHSI, (b) CNMF, (c) CSTF, and (d) NLSTF. The second and fourth rows show the error images of (a) the reference image, (b) CNN, (c) STEREO, and (d) the proposed CNTD method.
Table 1. Basic Notation.

Notation            Description
$\mathcal{X}$       Tensor
$\mathbf{X}$        Matrix
$x$                 Tensor element
$\mathbf{x}$        Spectral vector of a tensor
$X$                 Scalar
$\times_n$          Mode-$n$ product
$\otimes$           Kronecker product
$\odot$             Hadamard (element-wise) product
$\mathbf{X}_{(n)}$  Mode-$n$ matricization of tensor $\mathcal{X}$
$\mathbf{X}^{(n)}$  Mode-$n$ matrix in the Tucker decomposition
Table 2. Quantitative metrics of the different fusion methods on the Pavia University data set.

Method         RMSE    SAM     DD      ERGAS   UIQI
Ideal value    0.000   0.000   0.000   0.000   1.000
CNMF [21]      0.140   4.313   0.017   4.989   0.952
CSTF [9]       2.160   2.390   1.055   1.230   0.991
NLSTF [23]     1.452   0.964   0.846   0.520   0.993
CNN [27]       0.016   2.203   0.103   1.447   0.976
STEREO [43]    0.061   3.922   0.010   1.865   0.989
CNTD method    0.008   1.963   0.005   1.169   0.996
Table 3. Quantitative metrics of the different fusion methods on the Indian Pines data set.

Method         RMSE    SAM     DD      ERGAS   UIQI
Ideal value    0.000   0.000   0.000   0.000   1.000
CNMF [21]      0.054   2.142   0.008   1.789   0.954
CSTF [9]       1.533   1.363   0.997   1.082   0.974
NLSTF [23]     0.899   0.768   0.484   0.755   0.984
CNN [27]       0.013   2.270   2.090   1.060   0.820
STEREO [43]    0.042   2.303   0.007   2.538   0.932
CNTD method    0.009   1.661   0.006   1.249   0.972
Table 4. Computational time (s).

Method         Pavia University Data Set   Indian Pines Data Set
CNMF [21]      8.283                       10.660
NLSTF [23]     14.017                      16.503
CSTF [9]       90.191                      92.660
CNTD method    6.301                       9.508
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
