Article

Multi-Scale Residual Spectral–Spatial Attention Combined with Improved Transformer for Hyperspectral Image Classification

Aili Wang, Kang Zhang, Haibin Wu, Yuji Iwahori and Haisong Chen
1 Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
2 Computer Science, Chubu University, Kasugai 487-8501, Japan
3 School of Undergraduate Education, Shenzhen Polytechnic University, Shenzhen 518115, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(6), 1061; https://doi.org/10.3390/electronics13061061
Submission received: 25 January 2024 / Revised: 3 March 2024 / Accepted: 8 March 2024 / Published: 13 March 2024

Abstract:
Aiming to solve the problems that different spectral bands and spatial pixels contribute differently to hyperspectral image (HSI) classification and that sparse connectivity restricts the ability of convolutional neural networks to capture global dependencies, we propose an HSI classification model that combines multi-scale residual spectral–spatial attention with an improved transformer. First, in order to efficiently highlight discriminative spectral–spatial information, we propose a multi-scale residual spectral–spatial feature extraction module that preserves multi-scale information in a two-layer cascade structure, where the spectral–spatial features are refined by residual spectral–spatial attention during the feature-learning stage. In addition, to further capture sequential spectral relationships, we combine the advantages of Cross-Attention and Re-Attention to alleviate the computational burden and attention collapse issues, and propose the Cross-Re-Attention mechanism to obtain an improved transformer, which efficiently alleviates the heavy memory footprint and huge computational burden of the model. The experimental results show that the overall accuracy of the proposed model reaches 98.71%, 99.33%, and 99.72% on the Indian Pines, Kennedy Space Center, and XuZhou datasets, respectively. The proposed method is verified to be more accurate and effective than state-of-the-art models, showing that the concept of a hybrid architecture opens a new window for HSI classification.

1. Introduction

Hyperspectral images (HSIs) simultaneously contain high spatial resolution and continuous spectral bands of different objects, with the characteristic of “spectral image unity” [1,2]. They have been used in a wide variety of applications, such as urban management [3], geological exploration [4], and military surveys [5].
HSI classification is a foundational component of Earth-monitoring applications, with the main goal of assigning each pixel in the HSI to a specific land cover class, thus achieving precise identification and classification of surface cover. Initially, HSI classification mainly used traditional machine learning methods to extract features. Typically, these methods first adopted dimensionality reduction techniques to reduce spectral redundancy, such as principal component analysis (PCA) [6] and linear discriminant analysis (LDA) [7]. They then employed classifiers such as the K-nearest neighbor method [8], support vector machine [9], random forest [10], and decision tree [11] to classify the extracted features. Although traditional machine learning-based methods have made progress in improving classification performance, they often rely on hand-crafted features for HSI classification. With the rapid development of deep learning and practical progress in the task of HSI classification, deep learning-based methods fully absorb the early experience of HSI classification, combine spectral and spatial information to complete the classification task, and can directly extract effective deep features from the original image [12]. Chen et al. [13] first introduced deep learning into the field of HSI classification and used the unsupervised deep feature-learning model stacked autoencoder (SAE) to extract features from the original image, which improved the accuracy of HSI classification.
Due to the spectral and spatial heterogeneity in HSIs, it is difficult to accurately identify land cover types using only spectral information. Therefore, models that jointly extract spectral–spatial features from HSIs for classification have become a research focus. Convolutional neural networks (CNNs) have been widely used in the field of HSI classification to realize the joint extraction of spectral and spatial features [14]. Among these, the three-dimensional convolutional neural network (3D-CNN) achieves direct end-to-end deep spectral–spatial feature extraction on HSIs, providing a robust and reliable feature extraction mechanism [15,16]. Considering the importance of multi-scale information for improving network performance, Song et al. [17] proposed a deep feature fusion strategy that effectively fuses multi-scale feature representations by creating interconnections between different layers of information. Zhong et al. [18] proposed the spectral–spatial residual network (SSRN), which sequentially uses spectral residual blocks and spatial residual blocks to learn deep features from HSIs. Roy et al. [19] proposed the attention-based adaptive spectral–spatial kernel improved residual network (A2S2K-ResNet), which uses spectral attention to capture discriminative spectral–spatial features in an end-to-end training approach. In addition, attention mechanisms are widely used for HSI classification. Zhou et al. [20] designed a Cross-Attention Fusion module in an Attention Multihop Graph and Multiscale Convolutional Fusion Network (AMGCFN) to highlight important information and enhance feature fusion in different subnets. Guo et al. [21] proposed a global spatial feature representation model that learns global spatial features based on an encoder–decoder structure with channel attention and spatial attention. CNN-based approaches improve the local perception of the model through point-wise operations on pixels around the image, but they are limited by the kernel size and the number of network layers, resulting in an insufficient ability to capture global contextual feature information.
In recent years, some studies have introduced the transformer into HSI classification, first extracting features by convolution and then using the transformer to obtain contextual information [22]. Dosovitskiy et al. [23] proposed the Vision Transformer (ViT) with a dynamic and global receptive field, which performs well in image classification tasks and can learn the dependencies between different positions of the input image. ViT learns features mainly through the multi-head attention mechanism, which can extract global information from the non-overlapping parts of the image. Therefore, ViT can effectively capture long-range dependencies from the input images, enabling the network to parse information from a global perspective and thus effectively assisting in describing local semantic information [24]. For the task of feature classification in HSIs, applying the ViT to sequence data is more effective and flexible in analyzing the spectral data of HSIs [25]. Sun et al. [26] proposed the Spectral–Spatial Feature Tokenization Transformer (SSFTT) to obtain spectral–spatial features and high-level semantic information.
However, when applying ViT to the HSI classification task, a prominent issue is that the computational burden of the self-attention mechanism grows quadratically with the input size, and this computation hinders the inference speed of the model. Additionally, unlike CNNs, which can be expanded to deeper layers to improve performance, the performance of ViT saturates rapidly when it is expanded to deeper layers; this difficulty is mainly due to attention collapse, where the feature maps generated in deeper structures tend to become the same. To address the problem of computational burden, Zhang et al. [27] proposed a lightweight transformer (LiT) that achieves a balance between high computational efficiency and significant performance. Liu et al. [28] proposed the Swin Transformer, which uses shifted windows to capture global features. Meanwhile, Lin et al. [29] proposed Cross-Attention in Vision Transformer (CAT), which uses Cross-Attention to capture local information inside the feature map patches and captures global information between the feature map patches in a single channel. Both methods reduce the originally quadratic computation to linear complexity, which significantly reduces the computation of the transformer.
To address the problem of attention collapse, Zhou et al. [30] proposed the Re-Attention mechanism, which regenerates the attention maps between layers to enhance the diversity between layers, avoiding the problem of attention maps converging to the same values in deeper layers. Hybrid architectures combining the transformer and convolutions have garnered widespread attention in building lightweight, high-performance models. Some works have proposed a hybrid structure of a CNN and a transformer after analyzing the working principles of the CNN and the transformer in detail, where shallow features are extracted by the CNN and the extracted features are fed into a semantic tagger to tag the global semantic information [31,32,33].
Based on the above analysis, in this paper, an efficient multi-scale residual spectral–spatial attention combined with an improved transformer (RSSAT) is proposed for HSI classification. In RSSAT, we designed a multi-scale residual spectral–spatial feature extraction module to improve the discriminative power of the extracted features and to adaptively fuse the acquired spectral and spatial information. In addition, we designed an improved transformer to fully extract high-level semantic features and model long-range feature dependencies in HSI multidimensional data. Overall, our approach constructs a shallow-to-deep feature-learning model that effectively reduces the misclassification of small target samples. The main contributions of this paper are summarized as follows:
  • In order to fully extract HSI high-level semantic features as well as to enhance the effective representation of global contextual information, this paper combines the respective representational strengths of a CNN and a transformer and proposes a new HSI classification method called RSSAT. RSSAT has strong advantages in discriminative feature extraction and in capturing long-range dependencies, and achieves the best classification performance.
  • By investigating the characteristics of HSIs, a multi-scale residual spectral–spatial feature extraction module was designed. The module fully exploits the local information of HSIs in a two-layer cascade structure and selectively aggregates the information between spectral bands and spatial pixels to highlight discriminative information. The module alleviates the information loss in feature flow and retains more spectral and spatial information, reducing misclassification of small target samples and discrete samples.
  • In order to accurately capture long-range feature dependencies in HSI multidimensional datasets, we propose an improved transformer. For the transformer, we design the Cross-Re-Attention mechanism as an alternative to Self-Attention in the traditional transformer. The innovative strategy significantly enhances the model’s ability to learn high-level semantic features by introducing a learnable matrix that dynamically generates new attention mappings between each layer.
  • According to the experimental results, RSSAT significantly outperforms other state-of-the-art deep learning methods in terms of classification performance, especially when dealing with uneven samples, and achieves an excellent improvement in its classification accuracy.

2. Materials and Methods

Figure 1 shows the framework of the RSSAT model. In general, the architecture mainly includes a multi-scale residual spectral–spatial feature extraction module and an improved transformer module. The model integrates the advantages of the CNN and the transformer to enable feature extraction from shallow to deep layers, which allows the model to fully utilize the rich spectral–spatial information in HSIs and further improves its performance and robustness. In the model training process, first, after removing spectrally redundant bands by principal component analysis (PCA), the HSI data are fed into the convolution module to learn low-order features. Then, to enhance the spectral–spatial feature representation capability and robustness of the RSSAT model, residual spectral–spatial attention is embedded in the multi-scale residual feature-learning part. The multi-scale residual spectral–spatial feature extraction module re-adjusts and optimizes the extracted features through a two-level cascaded residual structure to highlight discriminative information. Meanwhile, the model can effectively establish channel connections between feature maps at different stages to enhance the convergence of RSSAT. Finally, we use the improved transformer to obtain long-distance dependencies of the sequential spectral features, and the resulting discriminative spectral–spatial features are employed to obtain the classification results.
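For concreteness, the following is a minimal sketch of the preprocessing stage described above: PCA reduces the spectral dimension, and patches centered on labeled pixels are extracted as model inputs. The number of retained components and the patch size are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(cube: np.ndarray, n_components: int = 30) -> np.ndarray:
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube with PCA."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    reduced = PCA(n_components=n_components, whiten=True).fit_transform(flat)
    return reduced.reshape(h, w, n_components)

def extract_patches(cube: np.ndarray, labels: np.ndarray, patch: int = 9):
    """Yield (patch, label) pairs for every labelled pixel (label 0 = background)."""
    pad = patch // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    for i, j in zip(*np.nonzero(labels)):
        window = padded[i:i + patch, j:j + patch, :]       # (patch, patch, C)
        yield window.transpose(2, 0, 1), labels[i, j] - 1  # channels-first, labels from 0
```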

2.1. Residual Spectral–Spatial Attention

For HSI pixel-level classification, there are two principles for the joint extraction of spectral and spatial information [34]:
Principle 1: Spectral information is the basis of HSI pixel-level classification and is the most discriminative information.
Principle 2: Effective spatial information for HSI pixel-level classification refers to the information carried by neighboring pixels that are similar to the center pixel.
Based on the above two principles, this paper embeds the residual spectral–spatial attention module into the multi-scale feature extraction part to achieve the realignment and optimization of the spectral and spatial features to highlight the discriminative information, thereby improving the accuracy and efficiency of the HSI classification. Figure 2 illustrates the structure of the proposed residual spectral–spatial attention module.
In this paper, we introduce the spectral–spatial attention module [35] and combine it with residual operations to create the residual spectral–spatial attention module, which enhances the feature extraction ability of RSSAT. First, we introduce the spectral attention module, which selects specific spectral bands from the input HSI. The module highlights the bands that are useful for the classification task and reduces the influence of irrelevant bands. Next, we introduce the spatial attention module, which achieves fine extraction of spatial information by adaptively strengthening neighboring pixels that belong to the same category as the center pixel and weakening pixels of different categories. The two attention modules are arranged in a specific order. Based on the given input or intermediate features, spectral attention weights are computed and applied to the relevant features, and the results are then used as inputs to the spatial attention module.
Spectral Attention: The core purpose of the spectral attention module is to highlight the spectral features that are critical for HSI classification. To refine and select features, the spectral attention map is generated using the relationships between the spectral bands of the features. The structure of the spectral attention module is given in Figure 3.
To aggregate information and infer finer spectral attention, an average pooling layer and a maximum pooling layer are employed, producing two different descriptors of the feature map. The kth channel of the average-pooled output is calculated by Equation (1), and the kth channel of the max-pooled output is calculated by Equation (2).
$y_{avg}^{se} = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} y_k(i,j)$ (1)
$y_{max}^{se} = \max(y_k)$ (2)
where $y_k(i,j)$ is the value at position $(i,j)$ of the $k$th channel; $y_{avg}^{se}$ and $y_{max}^{se}$ denote the outputs of the average pooling and maximum pooling, respectively; and $H$ and $W$ denote the height and width, respectively.
To fully capture the interrelationships between different spectral bands and improve the generalization ability of the model, the outputs of the average pooling layer and the maximum pooling layer are fed into a shared MLP containing two fully connected (FC) layers. A weight is then assigned to each spectral band through the sigmoid function. The output of the module is given as follows:
$F^{se} = \mathrm{Sigmoid}(\mathrm{MLP}(y_{avg}^{se}) + \mathrm{MLP}(y_{max}^{se}))$ (3)
where $F^{se}$ denotes the output of the spectral attention module.
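A minimal PyTorch sketch of the spectral (channel) attention described by Equations (1)–(3) is given below, assuming a 2D feature map of shape (B, C, H, W); the reduction ratio of the shared MLP is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SpectralAttention(nn.Module):
    """Spectral-band attention: average- and max-pooled descriptors (Eqs. (1)-(2))
    pass through a shared two-layer MLP and a sigmoid gate (Eq. (3))."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:          # x: (B, C, H, W)
        avg = x.mean(dim=(2, 3))                                  # Eq. (1): spatial average pooling
        mx = x.amax(dim=(2, 3))                                   # Eq. (2): spatial max pooling
        weights = torch.sigmoid(self.mlp(avg) + self.mlp(mx))     # Eq. (3)
        return x * weights[:, :, None, None]                      # re-weight each spectral band
```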
Spatial Attention: The spatial attention module aims to enhance the spatial information of neighboring pixels that have the same class label as the center pixel and to weaken the spatial information of pixels that have different class labels. The spatial attention module is shown in Figure 4.
To fully aggregate the spatial information, an average pooling layer and a maximum pooling layer are used to mine the target features. The spatial attention module takes the output of the spectral attention module and passes it through the maximum pooling and average pooling operations to obtain two new feature maps. Then, the information carried by the two feature maps is horizontally concatenated and input into the 7 × 7 convolution operation. Finally, the weight of attention is assigned to each pixel using a sigmoid function. The mathematical expressions are shown as follows:
$y_{avg}^{sa} = \frac{1}{C} \sum_{k=1}^{C} y_k^*(i,j)$ (4)
$y_{max}^{sa} = \max(y_k^*)$ (5)
$F^{sa} = \mathrm{Sigmoid}(\mathrm{Conv}_{7 \times 7}(\mathrm{Concat}(y_{avg}^{sa}, y_{max}^{sa})))$ (6)
where $y_{avg}^{sa}$ and $y_{max}^{sa}$ denote the outputs of the average pooling and maximum pooling over the channel dimension, respectively, $y_k^*$ is the $k$th channel of the input to the spatial attention module, and $F^{sa}$ denotes the output of the spatial attention module.
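Continuing the sketch above, the spatial attention of Equations (4)–(6) and the residual combination of the two modules could look as follows; this reuses the SpectralAttention class from the previous sketch, and the residual wiring is a plausible reading of Figure 2 rather than code from the authors.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Spatial attention: channel-wise average and max maps (Eqs. (4)-(5)) are
    concatenated and passed through a 7x7 convolution and a sigmoid (Eq. (6))."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:          # x: (B, C, H, W)
        avg = x.mean(dim=1, keepdim=True)                         # Eq. (4): average over channels
        mx = x.amax(dim=1, keepdim=True)                          # Eq. (5): max over channels
        weights = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))  # Eq. (6)
        return x * weights

class ResidualSpectralSpatialAttention(nn.Module):
    """Spectral attention followed by spatial attention, with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.spectral = SpectralAttention(channels)               # defined in the previous sketch
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.spatial(self.spectral(x))                 # residual connection
```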

2.2. Multi-Scale Residual Spectral-Spatial Feature Extraction Module

The multi-scale information enables the effective enhancement of the robustness and increases the classification accuracy of the model [36]. Therefore, in this work, we designed a two-tier cascaded multi-scale residual spectral–spatial feature extraction module to refine the multi-scale information to obtain enhanced discriminative spectral–spatial features. Figure 5 illustrates the structure of the module.
The module uses convolution kernels of different sizes to obtain a richer representation of the image and enhance the feature extraction capability of the model. In this work, we employed 1 × 1 × 1, 3 × 3 × 3, and 5 × 5 × 5 convolution kernels: the 1 × 1 × 1 convolution is employed to extract the global information of the image, while the 3 × 3 × 3 and 5 × 5 × 5 convolutions provide local information under different receptive fields. The proposed model uses a 3D convolution layer after the residual spectral–spatial attention module so that the spectral–spatial features extracted from the previous residual block are fused by 3D convolution. In this way, the following residual spectral–spatial attention module receives both the base features and the optimized features, which helps the model learn feature information better. Meanwhile, in order to obtain deeper feature information from each residual spectral–spatial attention module and enrich the learning hierarchy of the network, we apply residual learning outside each residual spectral–spatial attention module to achieve the effective transfer of features and take full advantage of the independence of different features to complete the global fusion of the features obtained from different residual blocks. Finally, the features and information at different scales are fused using a Concat stitching operation to make the acquired spectral and spatial features more comprehensive.
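The following is a minimal sketch, under assumed channel counts, of one multi-scale branch with a residual path and Concat fusion as described above; it illustrates the idea rather than reproducing the authors' exact module.

```python
import torch
import torch.nn as nn

class MultiScaleBlock3D(nn.Module):
    """Parallel 1x1x1, 3x3x3 and 5x5x5 3D convolutions, Concat fusion of the
    branch outputs, and a residual connection around the block."""
    def __init__(self, in_ch: int, branch_ch: int = 8):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv3d(in_ch, branch_ch, k, padding=k // 2) for k in (1, 3, 5)]
        )
        self.fuse = nn.Conv3d(3 * branch_ch, in_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:          # x: (B, C, bands, H, W)
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)  # Concat fusion
        return x + self.fuse(multi)                               # residual path

# Example: 16 feature channels, 30 spectral bands, a 9 x 9 spatial patch.
out = MultiScaleBlock3D(in_ch=16)(torch.randn(2, 16, 30, 9, 9))
```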

2.3. Improved Transformer

For the purpose of further obtaining the long-distance relationship of sequential spectra, this work uses the transformer to enable the model to parse semantic information from a global perspective. However, when applied to HSI classification tasks, the transformer mainly suffers from the following two problems:
(1) Transformer architectures require large quantities of data and computational resources for training and optimization. The computational complexity of multi-head self-attention (MHSA) in transformers grows quadratically with the input size. Therefore, using the transformer module on high-resolution images reduces computational efficiency and slows model inference. The computational cost can be expressed as:
$\mathrm{FLOPs}_{MHSA} = 4HWC^{2} + 2H^{2}W^{2}C$ (7)
where $H$ denotes the height of the input, $W$ denotes the width of the input, and $C$ denotes the number of channels in the input.
(2) Unlike CNNs, which can enhance performance by stacking additional convolutional layers, the performance of the transformer quickly saturates when scaling to deeper layers. The difficulty of scaling the transformer is mainly caused by the attention collapse problem. As the number of transformer layers increases, the attention maps gradually become similar, and beyond a certain depth they are basically the same. This suggests that MHSA may not be able to learn useful feature representations efficiently in deep transformer structures, so the model fails to obtain the desired performance gains [30].
Based on the above two points, this paper proposes a Cross-Re-Attention mechanism to alleviate the problems of attention collapse and the huge computational burden. Generating new attention maps between the layers of the transformer enhances the diversity of each layer and avoids the attention maps becoming similar at deep layers. Meanwhile, considering contextual information extraction and communication, attention is computed on single-channel feature maps, which significantly reduces the computation compared with attention over all channels. Figure 6 illustrates the framework of the improved transformer block.
Patch merging is applied to down-sample the input by a factor of two, diminishing the resolution and adjusting the number of channels. The Cross-Re-Attention block is composed of an Inner-Patch-Re-Attention (IPRA) block and a Cross-Patch-Re-Attention (CPRA) block. By stacking IPRA blocks and CPRA blocks, the module efficiently extracts and integrates features between pixels within a patch and between patches in a feature map. The IPRA part performs Re-Attention pixel by pixel within each patch, aiming to capture and utilize the relationships between pixels within the patch. This strategy not only significantly reduces the computational burden, but also greatly enhances the inference efficiency of the model. The computational cost is as follows:
$\mathrm{FLOPs}_{IPRA} = 4HWC^{2} + 2N^{2}HWC$ (8)
where $N$ denotes the patch size in IPRA. Compared with MHSA in the standard transformer, the computational complexity is reduced from quadratic to linear in the number of pixels.
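The difference between Equations (7) and (8) can be checked numerically with a short script; the spatial sizes, channel count, and patch size below are arbitrary illustrative values.

```python
def flops_mhsa(h: int, w: int, c: int) -> int:
    """Standard multi-head self-attention cost, Eq. (7): 4HWC^2 + 2(HW)^2 C."""
    return 4 * h * w * c ** 2 + 2 * (h * w) ** 2 * c

def flops_ipra(h: int, w: int, c: int, n: int) -> int:
    """Inner-Patch-Re-Attention cost, Eq. (8): 4HWC^2 + 2N^2 HWC."""
    return 4 * h * w * c ** 2 + 2 * n ** 2 * h * w * c

# Doubling the spatial resolution: MHSA grows roughly quadratically with the
# number of pixels, while IPRA grows only linearly.
for size in (16, 32, 64):
    print(size, flops_mhsa(size, size, 64), flops_ipra(size, size, 64, 4))
```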
In CNN-based networks, although the receptive field can be expanded by stacking convolutional kernels, sparse connectivity restricts global dependency capture and makes it difficult to expand the receptive field to the global range. In a transformer, however, a single-channel feature map inherently encompasses global information. The CPRA part takes each individual channel as one group input, and Re-Attention is performed within each group to exchange information across different patches and obtain global semantic information. Meanwhile, the attention maps are regenerated in each layer of the transformer to enhance their diversity across layers.
By virtue of the Cross-Re-Attention mechanism, the existing transformer model can be trained to obtain deep transformer models with linear growth in computation. Specifically, the method is based on the head-generated attention maps and generates new attention maps through dynamic aggregation. A learnable matrix, θ , is defined. This matrix is then used to map attention to a regenerated new matrix, which is multiplied with the V matrix in the transformer as follows:
$\mathrm{Attention}(Q, K, V) = \mathrm{Norm}\left(\theta^{T}\left(\mathrm{Softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)\right)\right)V$ (9)
where $d$ indicates the dimension of $K$. The Norm function is employed to reduce the layer-wise variance, and the Softmax function is employed to compute the weights on the values. $Q$ (Query), $K$ (Key), and $V$ (Value) are the projections of the tokens, i.e., the matrices obtained by multiplying the input vectors with the weight matrices learned during training.
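A minimal sketch of the re-attention step in Equation (9) is shown below, assuming a head-mixing matrix θ of shape (heads, heads) and batch normalization as the Norm operator; the head count, the initialization of θ, and the choice of normalization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ReAttention(nn.Module):
    """Scaled dot-product attention whose per-head attention maps are re-mixed
    by a learnable matrix theta and re-normalised before being applied to V,
    in the spirit of Eq. (9)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # theta mixes attention maps across heads; initialised near the identity.
        self.theta = nn.Parameter(torch.eye(heads) + 0.01 * torch.randn(heads, heads))
        self.norm = nn.BatchNorm2d(heads)                  # the Norm(.) operator
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, dim)
        b, n, d = x.shape
        qkv = self.qkv(x).reshape(b, n, 3, self.heads, d // self.heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)               # each: (B, heads, N, d_head)
        attn = (q @ k.transpose(-2, -1)) * self.scale      # QK^T / sqrt(d)
        attn = attn.softmax(dim=-1)
        attn = torch.einsum("hg,bgij->bhij", self.theta, attn)  # re-mix heads with theta
        attn = self.norm(attn)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.proj(out)

tokens = torch.randn(2, 81, 64)        # e.g. 81 tokens of dimension 64
out = ReAttention(dim=64, heads=4)(tokens)
```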

3. Results

For the purpose of validating the performance of the RSSAT model, three public HSI datasets were selected, namely Indian Pines, Kennedy Space Center (KSC), and XuZhou datasets. To better understand the RSSAT structure, we used ablation experiments to investigate the validity of each component of the model by removing different modules. Meanwhile, we visualized the HSI classification maps to compare the feature extraction capabilities of the proposed RSSAT and other SOTA methods.

3.1. Dataset Description and Experiment Design

(1)
The Indian Pines dataset was imaged by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) in 1992. A region of 145 × 145 pixels was selected for annotation and used as a test dataset for HSI classification. The imaging wavelength range of the dataset is 0.4–2.5 μm, and it continuously provides images in 220 hyperspectral bands with a spatial resolution of 20 m/pixel. Since bands 104–108, 150–163, and 220 lie in water absorption regions, these 20 bands are typically excluded from the research process, leaving 200 bands for analysis. The dataset contains 10,249 labeled samples and 16 vegetation classes. It is worth mentioning that the number of samples across these 16 classes of ground objects is unevenly distributed, making the dataset prone to mixed pixels, which poses a challenge for classification.
(2)
The KSC dataset was acquired by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) instrument. The dataset covers 224 spectral bands and 13 classes, and an area of 512 × 614 pixels was specially selected for detailed labeling to ensure the accuracy and usefulness of the data. It has an excellent spectral resolution of 10 nm over the range 0.4–2.5 μm, capturing subtle spectral differences and providing strong support for analysis. Meanwhile, the 18 m spatial resolution ensures the spatial accuracy of the image, fully reflecting the spatial characteristics of the features.
(3)
The XuZhou dataset was acquired in November 2014 in XuZhou City, Jiangsu Province, China. A test area of 500 × 260 pixels with 436 bands was selected for labeling to ensure classification accuracy. The test area, which is located near a coal mining area, is categorized into nine classes of ground objects.
Table 1, Table 2 and Table 3 report the classes and the number of available samples. The Indian Pines and KSC datasets have 10,249 and 5211 labeled samples, respectively, while the XuZhou dataset has 68,877 labeled samples. Since the XuZhou dataset has a much larger number of samples than the Indian Pines and KSC datasets, this work used different proportions of labeled samples as training strategies for the datasets and different numbers of training samples to validate the performance of the RSSAT method. On the Indian Pines and KSC datasets, 20% of the labeled samples were randomly selected for training, 10% for validation, and the remaining 70% for testing. For the XuZhou dataset, 10%, 10%, and 80% of the labeled pixels were randomly selected as the training set, validation set, and test set, respectively.
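A per-class random split matching these proportions can be implemented as follows; the function name and random seed are illustrative assumptions.

```python
import numpy as np

def split_indices(labels: np.ndarray, train: float = 0.2, val: float = 0.1, seed: int = 0):
    """Randomly split labelled pixels of each class into train/val/test index lists
    (e.g. 20%/10%/70% for Indian Pines and KSC, 10%/10%/80% for XuZhou)."""
    rng = np.random.default_rng(seed)
    flat = labels.ravel()
    train_idx, val_idx, test_idx = [], [], []
    for cls in np.unique(flat[flat > 0]):                 # label 0 = unlabelled background
        idx = rng.permutation(np.flatnonzero(flat == cls))
        n_tr, n_va = int(len(idx) * train), int(len(idx) * val)
        train_idx += idx[:n_tr].tolist()
        val_idx += idx[n_tr:n_tr + n_va].tolist()
        test_idx += idx[n_tr + n_va:].tolist()
    return train_idx, val_idx, test_idx
```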

3.2. Experiment Configuration

For a fair comparison, our experiments were conducted on an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10 GHz processor with 128 GB RAM and an NVIDIA GeForce RTX 2080Ti GPU, under Windows 10, using the PyTorch framework and Python 3.7. In order to minimize errors and contingencies, all experimental results are the average of 10 runs. For model training, all experiments used batch processing, with the training batch size set to 32 × 32. Meanwhile, the Adam optimizer was employed to learn the weights, and the initial learning rate was set to 0.003. To ensure that the model could be adequately trained and perform optimally, we set the maximum number of training epochs to 200 and adopted an early stopping strategy to avoid overfitting.
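A training-loop sketch reflecting this configuration is shown below; `model`, `train_loader`, and `val_loader` are assumed to exist, and the early-stopping patience is an illustrative assumption (the paper does not report it).

```python
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, device="cuda", patience=10):
    """Adam with an initial learning rate of 0.003, up to 200 epochs,
    early stopping on the validation loss."""
    model = model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
    criterion = nn.CrossEntropyLoss()
    best_val, stall = float("inf"), 0
    for epoch in range(200):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(x.to(device)), y.to(device)).item()
                           for x, y in val_loader) / len(val_loader)
        if val_loss < best_val:
            best_val, stall = val_loss, 0
        else:
            stall += 1
            if stall >= patience:          # early stopping
                break
```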

3.3. Experiment Comparison and Analysis

In this study, the classification performance of the proposed RSSAT was verified by comparison with SVM [37], 3D-CNN [14], the residual neural network (ResNet) [38], the Multi-Attention Fusion Network (MAFN) [39], the Spectral–Spatial Feature Tokenization Transformer (SSFTT) [26], SSRN [18], and the Dual-View Spectral and Global Spatial Feature Fusion Network (DSGSF) [21]. SVM is a traditional image classification method; the remaining models are deep learning-based algorithms that utilize deep neural networks for the HSI classification task. The experimental results were quantitatively evaluated by three metrics: overall accuracy (OA), average accuracy (AA), and the Kappa (K) coefficient. OA represents the percentage of correctly classified pixels among all classified pixels. The Kappa coefficient measures the agreement between the prediction results of a method and the true results. The value of the Kappa coefficient ranges from −1 to 1, where a value close to 1 indicates superior classification performance, a negative value indicates poor classification performance, and a value close to 0 indicates agreement no better than chance.
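For reference, the three metrics can be computed from predicted and true label vectors as follows (a generic sketch, not the authors' evaluation code).

```python
import numpy as np

def classification_metrics(y_true: np.ndarray, y_pred: np.ndarray):
    """Return overall accuracy (OA), average accuracy (AA) and the Kappa coefficient."""
    classes = np.unique(y_true)
    cm = np.zeros((len(classes), len(classes)), dtype=np.int64)   # confusion matrix
    for i, t in enumerate(classes):
        for j, p in enumerate(classes):
            cm[i, j] = np.sum((y_true == t) & (y_pred == p))
    oa = np.trace(cm) / cm.sum()                                  # correctly classified / all
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))                    # mean per-class accuracy
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / cm.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```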
Quantitative classification results for the evaluation indicators and the accuracy of each class are given in Table 4, Table 5 and Table 6, respectively (results are reported as the mean ± standard deviation over ten runs). Overall, it can be observed from all experimental results that our proposal yields the best accuracy and relatively low standard deviations. Specifically, on the Indian Pines dataset, RSSAT achieves 97.31% in terms of AA, while SVM, 3D-CNN, ResNet, MAFN, SSFTT, and SSRN achieve 79.76%, 76.95%, 92.64%, 96.65%, 96.08%, and 92.63%, respectively. On the Indian Pines, KSC, and XuZhou datasets, the increases in OA of our proposal over SSRN are 0.30%, 0.44%, and 0.09%. The proposed RSSAT method consistently outperforms SSRN, which is a strong argument for the superiority of our method in improving the representation of specific spectral–spatial features by readjusting the high spatial correlation contexts over spectral bands. Moreover, in the Indian Pines dataset, the 16 classes of samples are unevenly distributed in terms of quantity; for example, there are only 20 labeled samples for the 9th class (Oats), while the 11th class (Soybean-mintill) contains 2455 labeled samples. This uneven sample distribution presents a serious challenge for HSI classification. For the accuracy of the 9th class, SSFTT (67.22 ± 7.29), SSRN (58.94 ± 48.22), and other methods with otherwise good performance still fail to provide a good solution.
In our proposed RSSAT method, we use a two-tier cascaded multi-scale residual spectral–spatial feature-learning module that introduces a spectral–spatial attention mechanism. Meanwhile, we strategically embed ResBlocks to enhance the nonlinear representation capability. The module mitigates information loss in the feature stream, preserves more spatial information, and better addresses the challenge of scale diversity across different land cover types. As a result, RSSAT (85.32 ± 13.57) achieves the best classification result on the 9th class. At the same time, we obtained the best classification performance in terms of the overall evaluation metrics and the classification maps closest to the ground truth.
In addition, Figure 7, Figure 8 and Figure 9 show the learning curves of the proposed method. On the learning curves of these three datasets, as the number of epochs increases, both the loss values and the accuracy tend toward a smooth output. The maximum fluctuation in the loss values is less than 0.5, effectively demonstrating the excellent convergence of the model. Meanwhile, the gradual fitting of the accuracy curves visually demonstrates the remarkable generalization ability of the model. Based on the above analysis, our method exploits complementary hybrid blocks to enable efficient characterization of deep spectral–spatial features.

3.4. Visualization of Classification Maps

In order to visually demonstrate the effectiveness of the RSSAT method, we analyzed the classification results on the Indian Pines, KSC, and XuZhou datasets, as shown in Figure 10, Figure 11 and Figure 12. These classification maps show that RSSAT produces fewer misclassified pixels and cleaner boundaries than the other SOTA models. Therefore, we can conclude that the RSSAT method outperforms all the compared methods.

4. Discussions

4.1. Feature Visualization Analysis

For the purpose of investigating the feature representation capability of RSSAT, the t-distributed stochastic neighborhood embedding (t-SNE) algorithm [40] was used to visualize and compare the features extracted by ResNet and RSSAT in 2D space. As shown in Figure 13, Figure 14 and Figure 15, samples belonging to the same class are clearly clustered into groups, while samples of different classes are easily separated from each other. The visualization results show that the RSSAT method clusters the features more distinctly and effectively, which further proves that the method obtains an abstract representation of the spectral–spatial features of HSIs.
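The 2D feature visualization can be reproduced with scikit-learn's t-SNE as sketched below; the plotting details (marker size, colormap) are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features: np.ndarray, labels: np.ndarray, title: str) -> None:
    """Project deep features of shape (N, D) to 2D with t-SNE and colour by class."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab20")
    plt.title(title)
    plt.axis("off")
    plt.show()
```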

4.2. Time Cost Comparison

In order to comprehensively evaluate the efficiency of different methods in the HSI classification task, the running time and computational cost of each method are recorded in detail in Table 7. As seen from the data in the table, the training time of RSSAT is slightly longer compared to that of 3D-CNN, SSFTT, and SSRN. This is mainly attributed to the complexity of the RSSAT model design, which contains more layers, thus increasing the length of the training process to some extent. However, it is worth noting that RSSAT exhibits a significant advantage in classification accuracy. This performance enhancement, especially in the accurate classification of small target samples, compensates for its minor shortfall in training time. This balance between performance and efficiency of RSSAT is reasonable considering that classification accuracy is often a crucial metric in practical applications. Meanwhile, RSSAT shows significant advantages in both efficiency and performance compared to ResNet and MAFN. This further demonstrates that RSSAT is able to achieve superior classification performance with moderate computational cost, providing an efficient and feasible solution for the HSI classification task.
Overall, although RSSAT may not be the optimal choice from the perspectives of execution time and computational cost, its high-precision overall classification performance and its ability to accurately recognize small target samples make up for these shortcomings.

4.3. Different Numbers of Training Samples

In order to be closer to real-world application scenarios and to test the generalization ability of the model with limited data, we reduced the proportions of training and validation samples. The experimental results are shown in Table 8. Specifically, we randomly selected 5% of the samples in the Indian Pines dataset as the training set, 5% as the validation set, and the remaining samples as the test set. From the experimental results, the performance of each method degrades to a different degree as the number of training samples is reduced. Compared with the other methods, RSSAT still has obvious advantages with fewer samples, which demonstrates its superior generalization ability.

4.4. Ablation Experiments Analysis

In this experiment, we still used the three datasets as examples to perform ablation experiments to investigate the gain in each component when using our RSSAT by removing different modules. The relevant results are reported in Table 9.
(1)
In this work, we employed the SSRN model with multi-scale information integration as the basic model architecture (the experimental model was defined as Base).
(2)
For the purpose of verifying the validity of the residual spectral–spatial attention module in RSSAT, only the improved transformer was added to the Base (this experimental model is defined as Base+IT).
(3)
For the purpose of verifying the validity of the improved transformer module in RSSAT, only the multi-scale residual spectral–spatial attention module was added to the Base (this experimental model is defined as Base+RSS).
Specifically, Base+IT increased OA by 0.17%, 0.57%, and 0.57% over the Indian Pines, KSC, and XuZhou datasets, respectively, which showed that the transformer adequately captured contextual information, enabling the network to parse semantic information from a global perspective. Base+RSS improved OA by 0.26%, 0.43%, and 0.16% on different datasets, demonstrating that the multi-scale residual spectral–spatial feature extraction module helped the architecture to adaptively learn the important features of each spectral–spatial domain while emphasizing the information-rich features and suppressing less useful features.

5. Conclusions

In this paper, a novel hybrid architecture is proposed for HSI classification. Specifically, the proposed RSSAT method improves the representational ability of the extracted features and captures long-range relationships in the spectral domain by combining the strengths of a transformer and a CNN. In RSSAT, the residual spectral–spatial attention mechanism is embedded in the multi-scale feature-learning part for the joint extraction of spectral and spatial features on the selected multi-scale feature maps to highlight the discriminative information. Considering the approximately continuous nature of HSI spectra, we propose the Cross-Re-Attention mechanism to improve the standard transformer and enable deeper ViT training, which effectively alleviates the attention collapse and computational burden of ViT. Overall, RSSAT successfully extracts discriminative features in complex regions and significantly enhances remote contextual information in the spectral domain. The classification performance was evaluated on three challenging datasets: the overall accuracy of the RSSAT model was 98.71%, 99.33%, and 99.72%, and the average accuracy was 97.31%, 99.02%, and 99.72%, for the Indian Pines, KSC, and XuZhou datasets, respectively.
Since the number of samples in the Indian Pines dataset is small and unevenly distributed, there is still room for improvement in the classification performance of the RSSAT model. In future work, we will study methods such as data expansion, loss constraints between features and HSI data, and transformer optimization to facilitate the classification performance of a small-sample HSI dataset.

Author Contributions

Conceptualization, A.W., K.Z., H.W., Y.I. and H.C.; methodology, A.W., K.Z., H.W. and Y.I.; software, K.Z.; validation, K.Z.; writing—review and editing, A.W., K.Z., H.W., Y.I. and H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Key Research and Development Plan Project of Heilongjiang (JD2023SJ19) and the Natural Science Foundation of Heilongjiang Province (LH2023F034).

Data Availability Statement

Indian Pines and KSC: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 20 May 2011); Xuzhou: https://ieee-dataport.org/documents/xuzhou-hyspex-dataset (accessed on 2 November 2018).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, 110–122. [Google Scholar] [CrossRef]
  2. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28. [Google Scholar] [CrossRef]
  3. Yuen, P.; Richardson, M. An introduction to hyperspectral imaging and its application for security, surveillance and target acquisition. Imaging Sci. J. 2013, 58, 241–253. [Google Scholar] [CrossRef]
  4. Yang, X.; Yu, Y. Estimating soil salinity under various moisture conditions: An experimental study. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2525–2533. [Google Scholar] [CrossRef]
  5. Shimoni, M.; Haelterman, R.; Perneel, C. Hyperspectral Imaging for Military and Security Applications: Combining Myriad Processing and Sensing Techniques. IEEE Geosci. Remote Sens. Mag. 2019, 7, 101–117. [Google Scholar] [CrossRef]
  6. Licciardi, G.; Marpu, P.R.; Chanussot, J.; Benediktsson, J.A. Linear Versus Nonlinear PCA for the Classification of Hyperspectral Data Based on the Extended Morphological Profiles. IEEE Geosci. Remote Sens. Lett. 2012, 9, 447–451. [Google Scholar] [CrossRef]
  7. Bandos, T.V.; Bruzzone, L.; Camps-Valls, G. Classification of Hyperspectral Images with Regularized Linear Discriminant Analysis. IEEE Trans. Geosci. Remote Sens. 2009, 47, 862–873. [Google Scholar] [CrossRef]
  8. Blanzieri, E.; Melgani, F. Nearest neighbor classification of remote sensing images with the maximal margin principle. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1804–1811. [Google Scholar] [CrossRef]
  9. Xu, M.; Zhao, Q.; Jia, S. Multiview Spatial–Spectral Active Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5512415. [Google Scholar] [CrossRef]
  10. Liu, Z.; Tang, B.; He, X.; Qiu, Q.; Liu, F. Class-Specific Random Forest with Cross-Correlation Constraints for Spectral–Spatial Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 257–261. [Google Scholar] [CrossRef]
  11. Friedl, M.; Brodley, C. Decision tree classification of land cover from remotely sensed data. Remote Sens. Environ. 1997, 61, 399–409. [Google Scholar] [CrossRef]
  12. Kang, X.; Zhuo, B.; Duan, P. Dual-Path Network-Based Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2019, 16, 447–451. [Google Scholar] [CrossRef]
  13. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Observ Remote Sens. 2014, 7, 2094–2107. [Google Scholar] [CrossRef]
  14. Li, Y.; Zhang, H.; Shen, Q. Spectral–spatial classification of hyperspectral imagery with 3D convolutional neural network. Remote Sens. 2017, 9, 67. [Google Scholar] [CrossRef]
  15. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251. [Google Scholar] [CrossRef]
  16. Praveen, B.; Menon, V. Study of Spatial–Spectral Feature Extraction Frameworks With 3-D Convolutional Neural Network for Robust Hyperspectral Imagery Classification. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2021, 14, 1717–1727. [Google Scholar] [CrossRef]
  17. Song, W.; Li, S.; Fang, L.; Lu, T. Hyperspectral Image Classification with Deep Feature Fusion Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3173–3184. [Google Scholar] [CrossRef]
  18. Zhong, Z.; Li, J.; Luo, Z.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  19. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-Based Adaptive Spectral–Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843. [Google Scholar] [CrossRef]
  20. Zhou, H.; Luo, F.; Zhuang, H.; Weng, X.; Gong, X.; Lin, Z. Attention multihop graph and multiscale convolutional fusion network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5508614. [Google Scholar] [CrossRef]
  21. Guo, T.; Wang, R.; Luo, F.; Gong, X.; Zhang, L.; Gao, X. Dual-View Spectral and Global Spatial Feature Fusion Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5512913. [Google Scholar] [CrossRef]
  22. Chen, H.; Qi, Z.; Shi, Z. Remote sensing image change detection with transformers. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5607514. [Google Scholar] [CrossRef]
  23. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar] [CrossRef]
  24. Wang, W. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14408–14419. [Google Scholar]
  25. Song, R.; Feng, F.; Cheng, W.; Mu, Z.; Wang, X. BS2T: Bottleneck Spatial–Spectral Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5532117. [Google Scholar] [CrossRef]
  26. Sun, L.; Zhao, G.; Zheng, Y.; Wu, Z. Spectral–Spatial Feature Tokenization Transformer for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5522214. [Google Scholar] [CrossRef]
  27. Zhang, X.; Su, Y.; Gao, L.; Bruzzone, L.; Gu, X.; Tian, Q. A Lightweight Transformer Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5517617. [Google Scholar] [CrossRef]
  28. Liu, Z. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 9992–10002. [Google Scholar]
  29. Lin, H.; Cheng, X.; Wu, X.; Shen, D. CAT: Cross Attention in Vision Transformer. In Proceedings of the 2022 IEEE International Conference on Multimedia and Expo (ICME), Taipei, Taiwan, 18–22 July 2022; pp. 1–6. [Google Scholar] [CrossRef]
  30. Zhou, D.; Kang, B.; Jin, X. Deepvit: Towards deeper vision transformer. arXiv 2021, arXiv:2103.11886. [Google Scholar] [CrossRef]
  31. Graham, B. LeViT: A Vision Transformer in ConvNet’s Clothing for Faster Inference. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 12239–12249. [Google Scholar] [CrossRef]
  32. Ouyang, E.; Li, B.; Hu, W.; Zhang, G.; Zhao, L.; Wu, J. When multigranularity meets spatial–spectral attention: A hybrid transformer for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4401118. [Google Scholar] [CrossRef]
  33. Zu, B.; Li, Y.; Li, J.; He, Z.; Wang, H.; Wu, P. Cascaded convolution-based transformer with densely connected mechanism for spectral-spatial hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5513119. [Google Scholar] [CrossRef]
  34. Liu, H.; Li, W.; Xia, X.G.; Zhang, M.; Gao, C.Z.; Tao, R. Central Attention Network for Hyperspectral Imagery Classification. IEEE Trans. Neural Netw. Learn Syst. 2023, 34, 8989–9003. [Google Scholar] [CrossRef]
  35. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual Spectral-Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 449–462. [Google Scholar] [CrossRef]
  36. Xu, F.; Zhang, G.; Song, C.; Wang, H.; Mei, S. Multiscale and Cross-Level Attention Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5501615. [Google Scholar] [CrossRef]
  37. Waske, B.; van der Linden, S.; Benediktsson, J.A.; Rabe, A.; Hostert, P. Sensitivity of support vector machines to random feature selection in classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2880–2889. [Google Scholar] [CrossRef]
  38. He, K.; Zhang, M.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  39. Li, Z. Hyperspectral Image Classification with Multiattention Fusion Network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5503305. [Google Scholar] [CrossRef]
  40. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
Figure 1. Framework of the proposed RSSAT model for HSI classification.
Figure 2. Residual spectral–spatial attention module.
Figure 3. Spectral attention module.
Figure 4. Spatial attention module.
Figure 5. Multi-scale residual spectral–spatial feature extraction module.
Figure 6. The internal structure of the improved transformer block.
Figure 7. Learning curves for the Indian Pines dataset. (a) Valid loss vs. train loss in each epoch. (b) Valid accuracy vs. train accuracy in each epoch.
Figure 8. Learning curves for the KSC dataset. (a) Valid loss vs. train loss in each epoch. (b) Valid accuracy vs. train accuracy in each epoch.
Figure 9. Learning curves for the XuZhou dataset. (a) Valid loss vs. train loss in each epoch. (b) Valid accuracy vs. train accuracy in each epoch.
Figure 10. Classification maps of the Indian Pines dataset. (a) Ground truth. (b) SVM. (c) CNN. (d) ResNet. (e) MAFN. (f) SSFTT. (g) SSRN. (h) RSSAT.
Figure 11. Classification maps of the KSC dataset. (a) Ground truth. (b) SVM. (c) CNN. (d) ResNet. (e) MAFN. (f) SSFTT. (g) SSRN. (h) RSSAT.
Figure 12. Classification maps of the XuZhou dataset. (a) Ground truth. (b) SVM. (c) CNN. (d) ResNet. (e) MAFN. (f) SSFTT. (g) SSRN. (h) RSSAT.
Figure 13. Visualization of the 2D spectral–spatial features for the samples in the Indian Pines dataset via t-SNE. (a) ResNet. (b) RSSAT.
Figure 14. Visualization of the 2D spectral–spatial features for the samples in the KSC dataset via t-SNE. (a) ResNet. (b) RSSAT.
Figure 15. Visualization of the 2D spectral–spatial features for the samples in the XuZhou dataset via t-SNE. (a) ResNet. (b) RSSAT.
Table 1. Details of the Indian Pines dataset.
No. | Class | Sample Numbers
1 | Alfalfa | 46
2 | Corn-notill | 1428
3 | Corn-mintill | 830
4 | Corn | 237
5 | Grass-pasture | 483
6 | Grass-trees | 730
7 | Grass-pasture-mowed | 28
8 | Hay-windrowed | 478
9 | Oats | 20
10 | Soybean-notill | 972
11 | Soybean-mintill | 2455
12 | Soybean-clean | 593
13 | Wheat | 205
14 | Woods | 1265
15 | Buildings-Grass-Trees-Drives | 386
16 | Stone-Steel-Towers | 93
Total | | 10,249
(Color, false-color map, and ground-truth map columns: images.)
Table 2. Details of the KSC dataset.
No. | Class | Sample Numbers
1 | Scrub | 1997
2 | Willow | 3726
3 | Palm | 1976
4 | Pine | 1394
5 | Broadleaf | 2678
6 | Hardwood | 3979
7 | Swamp | 3579
8 | Graminoid | 11,213
9 | Spartina | 6197
10 | Cattail | 3249
11 | Salt | 1058
12 | Mud | 1908
13 | Water | 909
Total | | 5211
(Color, false-color map, and ground-truth map columns: images.)
Table 3. Details of the XuZhou dataset.
No. | Class | Sample Numbers
1 | Bareland-1 | 26,396
2 | Lakes | 4027
3 | Coals | 2783
4 | Cement | 5214
5 | Crops-1 | 13,184
6 | Trees | 2436
7 | Bareland-2 | 6990
8 | Crops-2 | 4777
9 | Red-tiles | 3070
Total | | 68,877
(Color, false-color map, and ground-truth map columns: images.)
Table 4. Quantitative classification performance of different methods on the Indian Pines dataset.
Class | SVM | 3D-CNN | ResNet | MAFN | SSFTT | SSRN | RSSAT
1 | 59.23 ± 19.05 | 81.73 ± 7.04 | 95.17 ± 10.06 | 95.57 ± 6.10 | 99.76 ± 0.73 | 77.21 ± 31.4 | 93.66 ± 9.42
2 | 71.14 ± 1.39 | 66.34 ± 7.96 | 90.65 ± 6.68 | 98.17 ± 2.11 | 94.85 ± 1.15 | 98.51 ± 1.36 | 98.80 ± 0.82
3 | 74.59 ± 1.81 | 72.20 ± 15.21 | 91.16 ± 5.82 | 96.68 ± 5.50 | 99.24 ± 0.47 | 97.81 ± 1.24 | 98.63 ± 0.75
4 | 60.65 ± 7.43 | 83.69 ± 7.94 | 92.25 ± 9.29 | 97.94 ± 2.62 | 99.30 ± 1.08 | 99.37 ± 0.84 | 98.64 ± 1.19
5 | 89.19 ± 3.15 | 94.91 ± 1.20 | 98.41 ± 1.17 | 98.58 ± 1.05 | 98.78 ± 0.94 | 95.91 ± 1.76 | 98.18 ± 1.60
6 | 88.63 ± 1.56 | 96.51 ± 1.37 | 97.05 ± 2.96 | 98.56 ± 1.88 | 99.37 ± 0.39 | 98.78 ± 1.43 | 99.22 ± 0.84
7 | 85.71 ± 8.31 | 88.56 ± 3.68 | 80.91 ± 31.15 | 94.94 ± 7.81 | 98.40 ± 3.20 | 70.01 ± 48.2 | 97.66 ± 7.00
8 | 90.19 ± 1.64 | 99.10 ± 0.85 | 96.08 ± 2.93 | 99.71 ± 0.49 | 99.79 ± 0.45 | 98.46 ± 2.27 | 99.82 ± 0.53
9 | 74.12 ± 13.49 | 64.01 ± 17.76 | 86.00 ± 24.67 | 81.89 ± 13.98 | 67.22 ± 7.29 | 58.94 ± 48.22 | 85.32 ± 13.57
10 | 75.28 ± 2.04 | 82.57 ± 5.35 | 91.40 ± 6.44 | 96.13 ± 3.54 | 97.54 ± 0.89 | 97.54 ± 1.61 | 98.12 ± 1.09
11 | 78.16 ± 1.19 | 64.15 ± 8.16 | 92.31 ± 5.22 | 98.61 ± 1.94 | 99.22 ± 0.35 | 99.22 ± 0.49 | 98.96 ± 0.73
12 | 72.68 ± 3.94 | 81.93 ± 5.41 | 90.8 ± 7.81 | 96.59 ± 3.26 | 95.96 ± 1.09 | 98.52 ± 0.95 | 98.09 ± 1.41
13 | 92.23 ± 2.85 | 99.02 ± 0.45 | 95.76 ± 4.37 | 98.92 ± 1.41 | 98.86 ± 0.71 | 98.67 ± 4.04 | 98.48 ± 2.19
14 | 91.66 ± 0.95 | 89.61 ± 4.56 | 93.38 ± 4.52 | 99.45 ± 0.36 | 99.38 ± 0.88 | 99.17 ± 1.01 | 99.39 ± 0.44
15 | 74.67 ± 6.95 | 87.44 ± 4.07 | 94.99 ± 5.48 | 97.94 ± 2.65 | 98.01 ± 1.21 | 99.22 ± 0.88 | 98.32 ± 1.62
16 | 98.10 ± 2.53 | 93.97 ± 5.14 | 95.58 ± 3.71 | 95.63 ± 3.64 | 91.69 ± 5.79 | 94.79 ± 6.14 | 95.68 ± 4.34
OA (%) | 79.92 ± 0.65 | 77.15 ± 2.69 | 92.40 ± 1.81 | 97.98 ± 0.38 | 98.07 ± 0.39 | 98.41 ± 0.44 | 98.71 ± 0.28
AA (%) | 79.76 ± 2.43 | 76.95 ± 2.54 | 92.64 ± 3.40 | 96.65 ± 1.10 | 96.08 ± 1.44 | 92.63 ± 6.20 | 97.31 ± 1.07
K × 100 | 76.99 ± 0.75 | 73.76 ± 2.96 | 91.31 ± 2.08 | 97.70 ± 0.43 | 97.85 ± 0.50 | 98.22 ± 0.51 | 98.53 ± 0.32
Table 5. Quantitative classification performance of different methods on the KSC dataset.
Class | SVM | 3D-CNN | ResNet | MAFN | SSFTT | SSRN | RSSAT
1 | 92.81 ± 0.79 | 94.29 ± 1.81 | 94.27 ± 7.83 | 97.85 ± 2.41 | 99.91 ± 0.14 | 98.95 ± 0.14 | 99.72 ± 0.56
2 | 86.62 ± 5.10 | 91.52 ± 4.65 | 91.07 ± 9.95 | 93.82 ± 4.88 | 93.89 ± 4.30 | 95.97 ± 9.53 | 94.21 ± 10.00
3 | 73.33 ± 8.35 | 88.81 ± 8.04 | 84.39 ± 10.48 | 84.62 ± 10.1 | 98.04 ± 2.82 | 97.32 ± 3.07 | 98.85 ± 1.94
4 | 54.48 ± 8.64 | 80.55 ± 9.36 | 74.18 ± 10.06 | 75.65 ± 15.45 | 98.21 ± 1.64 | 97.63 ± 2.26 | 98.57 ± 1.78
5 | 60.22 ± 12.13 | 81.06 ± 5.16 | 66.39 ± 20.92 | 81.01 ± 9.92 | 98.30 ± 2.36 | 98.63 ± 2.76 | 97.88 ± 4.08
6 | 65.47 ± 8.34 | 85.64 ± 8.41 | 86.83 ± 21.62 | 88.64 ± 21.18 | 99.88 ± 0.22 | 100 ± 0.00 | 99.94 ± 0.16
7 | 76.21 ± 3.83 | 92.76 ± 13.2 | 80.35 ± 28.52 | 88.19 ± 15.34 | 99.77 ± 0.68 | 83.60 ± 33.76 | 98.52 ± 3.43
8 | 86.60 ± 5.03 | 95.88 ± 1.95 | 92.10 ± 11.31 | 97.20 ± 2.56 | 99.82 ± 0.42 | 99.76 ± 0.21 | 99.74 ± 0.35
9 | 88.45 ± 2.66 | 97.39 ± 1.31 | 91.30 ± 12.74 | 94.32 ± 6.49 | 99.97 ± 0.07 | 99.85 ± 0.35 | 100.00 ± 0.00
10 | 96.30 ± 4.94 | 99.91 ± 0.20 | 98.51 ± 1.67 | 99.18 ± 0.91 | 99.96 ± 0.09 | 100 ± 0.00 | 100.00 ± 0.00
11 | 96.16 ± 1.52 | 98.12 ± 1.86 | 97.45 ± 7.33 | 99.76 ± 0.44 | 99.91 ± 0.26 | 99.63 ± 1.09 | 100.00 ± 0.00
12 | 93.61 ± 2.67 | 98.12 ± 1.86 | 93.50 ± 12.89 | 96.12 ± 2.70 | 99.89 ± 0.30 | 98.94 ± 2.46 | 99.87 ± 0.25
13 | 99.68 ± 0.68 | 99.97 ± 0.57 | 99.36 ± 0.77 | 99.68 ± 0.85 | 100 ± 0.00 | 100 ± 0.00 | 100.00 ± 0.00
OA (%) | 87.94 ± 1.57 | 95.49 ± 0.25 | 91.37 ± 4.92 | 94.09 ± 4.00 | 99.24 ± 0.14 | 98.89 ± 1.52 | 99.33 ± 0.57
AA (%) | 82.30 ± 2.49 | 92.01 ± 0.50 | 88.44 ± 7.97 | 92.41 ± 4.34 | 98.58 ± 0.22 | 97.79 ± 3.43 | 99.02 ± 0.76
K × 100 | 86.57 ± 1.75 | 94.86 ± 0.28 | 90.36 ± 5.54 | 93.44 ± 4.42 | 99.11 ± 0.15 | 98.77 ± 1.69 | 99.25 ± 0.64
Table 6. Quantitative classification performance of different methods on the XuZhou dataset.
Class | SVM | 3D-CNN | ResNet | MAFN | SSFTT | SSRN | RSSAT
1 | 96.98 ± 0.24 | 88.01 ± 4.82 | 99.75 ± 0.12 | 99.15 ± 0.51 | 99.59 ± 0.21 | 99.59 ± 0.29 | 99.91 ± 0.04
2 | 99.91 ± 0.05 | 94.96 ± 9.80 | 99.94 ± 0.06 | 99.46 ± 0.64 | 99.98 ± 0.03 | 99.91 ± 0.05 | 99.98 ± 0.02
3 | 95.47 ± 0.45 | 92.13 ± 5.09 | 99.96 ± 0.06 | 95.95 ± 6.68 | 99.54 ± 0.25 | 99.04 ± 0.47 | 100.00 ± 0.00
4 | 97.79 ± 0.69 | 89.12 ± 2.53 | 99.56 ± 0.21 | 97.41 ± 3.31 | 99.92 ± 0.08 | 99.82 ± 0.16 | 99.98 ± 0.02
5 | 96.14 ± 0.19 | 92.92 ± 2.04 | 98.39 ± 0.78 | 97.71 ± 1.35 | 99.56 ± 0.20 | 99.92 ± 0.10 | 99.19 ± 0.26
6 | 89.81 ± 0.62 | 89.31 ± 1.92 | 96.77 ± 2.27 | 94.65 ± 3.98 | 99.64 ± 0.17 | 99.78 ± 0.10 | 99.69 ± 0.23
7 | 90.27 ± 0.41 | 84.24 ± 1.47 | 97.21 ± 2.78 | 97.75 ± 1.38 | 99.90 ± 0.05 | 99.64 ± 0.41 | 99.81 ± 0.11
8 | 98.68 ± 0.17 | 92.64 ± 4.94 | 95.91 ± 9.54 | 98.11 ± 2.09 | 99.94 ± 0.13 | 98.23 ± 0.44 | 99.62 ± 0.13
9 | 98.38 ± 0.33 | 99.36 ± 0.33 | 99.22 ± 0.84 | 95.85 ± 7.96 | 99.62 ± 0.16 | 99.47 ± 0.42 | 99.33 ± 0.25
OA (%) | 96.23 ± 0.08 | 90.06 ± 2.67 | 98.72 ± 0.93 | 98.01 ± 1.34 | 99.68 ± 0.06 | 99.63 ± 0.09 | 99.72 ± 0.05
AA (%) | 95.94 ± 0.16 | 87.57 ± 1.77 | 98.52 ± 1.19 | 97.34 ± 2.32 | 99.72 ± 0.04 | 99.49 ± 0.15 | 99.73 ± 0.05
K × 100 | 95.21 ± 0.11 | 87.51 ± 3.22 | 98.38 ± 1.18 | 97.48 ± 1.68 | 99.60 ± 0.09 | 99.51 ± 0.12 | 99.64 ± 0.07
Table 7. Training time in minutes (m) and test time in seconds (s) between the comparison methods and the RSSAT method for three datasets.
Methods | Indian Pines: Params (M) / Training (m) / Test (s) | KSC: Params (M) / Training (m) / Test (s) | XuZhou: Params (M) / Training (m) / Test (s)
3D-CNN | 0.26 / 1.87 / 4.43 | 0.14 / 1.21 / 2.21 | 1.38 / 2.71 / 3.02
ResNet | 83.58 / 15.36 / 13.06 | 83.29 / 8.08 / 6.41 | 86.39 / 28.65 / 7.78
MAFN | 2.11 / 12.45 / 6.22 | 1.88 / 7.62 / 5.94 | 2.99 / 19.21 / 12.15
SSFTT | 0.61 / 3.47 / 3.18 | 0.42 / 2.64 / 8.41 | 1.06 / 5.12 / 5.47
SSRN | 1.39 / 11.50 / 4.67 | 1.25 / 5.09 / 2.26 | 2.77 / 16.34 / 5.41
RSSAT | 1.61 / 12.07 / 3.31 | 1.45 / 8.91 / 7.10 | 2.85 / 18.55 / 9.27
Table 8. Classification performance under 5% training samples for the Indian Pines dataset.
Method | ResNet | MAFN | SSFTT | SSRN | DSGSF | RSSAT
OA (%) | 92.87 | 95.75 | 95.26 | 95.14 | 97.68 | 97.61
AA (%) | 87.44 | 94.34 | 95.87 | 76.30 | 94.29 | 95.03
K × 100 | 91.83 | 95.14 | 89.86 | 94.45 | 97.36 | 97.38
Table 9. Ablation experiments for each component.
Dataset | Index | Base | Base+IT | Base+RSS | Base+RSS+IT
Indian Pines | OA (%) | 98.36 ± 0.47 | 98.53 ± 0.20 | 98.62 ± 0.51 | 98.71 ± 0.28
Indian Pines | AA (%) | 95.24 ± 2.79 | 97.25 ± 0.54 | 97.53 ± 1.32 | 97.31 ± 1.07
Indian Pines | K × 100 | 98.13 ± 0.53 | 98.32 ± 0.33 | 98.42 ± 0.58 | 98.53 ± 0.32
KSC | OA (%) | 98.68 ± 0.53 | 99.25 ± 0.60 | 99.11 ± 0.35 | 99.33 ± 0.57
KSC | AA (%) | 98.19 ± 2.23 | 98.82 ± 1.10 | 98.71 ± 0.32 | 99.02 ± 0.76
KSC | K × 100 | 98.53 ± 2.04 | 99.17 ± 0.67 | 99.01 ± 0.39 | 99.25 ± 0.64
XuZhou | OA (%) | 99.13 ± 0.63 | 99.70 ± 0.07 | 99.29 ± 0.03 | 99.72 ± 0.05
XuZhou | AA (%) | 98.72 ± 1.14 | 99.56 ± 0.14 | 98.99 ± 0.04 | 99.73 ± 0.05
XuZhou | K × 100 | 99.01 ± 0.81 | 99.62 ± 0.08 | 99.10 ± 0.04 | 99.64 ± 0.07
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
