Article

Fusion Classification of HSI and MSI Using a Spatial-Spectral Vision Transformer for Wetland Biodiversity Estimation

Yunhao Gao, Xiukai Song, Wei Li, Jianbu Wang, Jianlong He, Xiangyang Jiang and Yinyin Feng

1 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
2 Shandong Provincial Key Laboratory of Restoration for Marine Ecology, Shandong Marine Resources and Environment Research Institute, Yantai 264006, China
3 Lab of the Marine Physics and Remote Sensing, First Institute of Oceanography, Ministry of Natural Resources, Qingdao 266061, China
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(4), 850; https://doi.org/10.3390/rs14040850
Submission received: 10 January 2022 / Revised: 29 January 2022 / Accepted: 6 February 2022 / Published: 11 February 2022

Abstract

The rapid development of remote sensing technology provides abundant data for earth observation. Land-cover mapping indirectly enables biodiversity estimation at a coarse scale, so accurate land-cover mapping is a precondition of biodiversity estimation. However, the wetland environment is complex and its vegetation is mixed and patchy, which makes land-cover recognition based on remote sensing highly challenging. This paper constructs a systematic framework for multisource remote sensing image processing. First, the hyperspectral image (HSI) and multispectral image (MSI) are fused by a CNN-based method to obtain a fused image with high spatial-spectral resolution. Second, considering the sequentiality of spatial distribution and spectral response, a spatial-spectral vision transformer (SSViT) is designed to extract sequential relationships from the fused image. An external attention module is then utilized for feature integration, and pixel-wise prediction is performed to produce the land-cover map. Finally, the land-cover map and the benthos data collected at the field sites are analyzed jointly to reveal the distribution pattern of benthos. Experiments on ZiYuan1-02D data of the Yellow River estuary wetland demonstrate the effectiveness of the proposed framework compared with several related methods.

1. Introduction

Coastal wetlands are transitional areas between terrestrial and marine ecosystems, with complex environments and many elements to monitor [1]. Accurate biodiversity monitoring of coastal wetlands is of great significance for water conservation [2], biodiversity conservation [3], and blue carbon sink development [4]. Recently, however, natural factors and human activities have degraded habitats and biodiversity.
Traditional on-site monitoring collects data at stations and along sections, which is time-consuming and laborious. In contrast, remote sensing technology has the advantages of large-area coverage, spatio-temporal synchronization, and high spatial-spectral resolution [5], providing highly relevant information for a wide range of wetland monitoring applications. Biodiversity estimation based on remote sensing therefore achieves economical and real-time data collection, and many such methods have been developed in recent years.
The limitations of remote sensing, including resolution and sensor characteristics, restrict biodiversity estimation to a coarse scale [6]. Biodiversity is mainly divided into animal diversity and plant diversity. Land-cover mapping is one of the most widely used applications of optical remote sensing and directly serves plant diversity estimation. Moreover, given the limitations of remote sensing for monitoring animal diversity, land-cover mapping can also estimate animal diversity indirectly [7]. Biodiversity estimation based on remote sensing therefore relies on the interpretation of land-cover, for which hyperspectral images (HSI) have attracted significant attention [8]. Su et al. [9] designed an elastic network based on low-rank representation to classify HSI collected by the GaoFen-5 satellite, thereby achieving plant diversity estimation of a coastal wetland. Hong et al. [10] combined a convolutional neural network (CNN) and a graph convolutional network (GCN) to extract different types of features for urban land-cover classification. Zhang et al. [11] developed a transferred 3D-CNN for HSI classification, which alleviates the overfitting caused by insufficient labeled samples. Wang et al. [12] proposed a generative adversarial network (GAN) for land-cover recognition and achieved promising results with imbalanced samples. Zhu et al. [13] designed a spatial-temporal semantic segmentation model that harnesses temporal dependency for land-use and land-cover (LULC) classification. Zhang et al. [14] proposed a parcel-level ensemble method for land-cover classification based on Sentinel-1 synthetic aperture radar (SAR) time series and segmentation generated from GaoFen-6 images. In [15], land-cover in a coastal wetland was classified using an object-oriented random forest algorithm. In [16], a hierarchical classification framework (HCF) was developed for wetland classification; the HCF classifies land-cover from rough classes to their subtypes based on spectral, texture, and geometric features.
Generally speaking, HSIs exhibit great advantages in land-cover classification because they carry plentiful spectral information [17]. However, the spatial resolution of HSI is usually low because of signal-to-noise ratio requirements under long exposure [18]. In addition, the "different body with same spectrum" and "same body with different spectrum" phenomena in HSI deteriorate the interpretation. Therefore, joining the complementary merits of multisource data further improves the accuracy of land-cover classification [19]. In the past decade, extensive classification techniques have been successfully applied to multisource data [20,21,22]. Some machine learning methods rely on the support vector machine (SVM), extreme learning machine (ELM), and random forest (RF) [23,24,25]. More recently, deep learning has boosted classification performance in the remote sensing community. In [26], adaptive differential evolution was utilized to optimize the classification decision from different data sources. Rezaee et al. [27] employed a deep CNN to classify wetland land-cover on a large scale. In [28], a 3D-CNN was designed for multispectral image (MSI) classification to serve wetland feature monitoring. Zhao et al. [29] developed a hierarchical random walk network (HRWN) to exploit the spatial consistency of land-cover over HSI and light detection and ranging (LiDAR) data. In [30], a three-stream CNN was designed to fuse HSI and LiDAR data, in which a multi-sensor composite kernels (MCK) scheme was employed for feature integration. Xu et al. [31] developed a dual-tunnel CNN and a cascaded network, named two-branch CNN, for feature extraction, in which the multisource features were stacked for fusion. Liu et al. [32] improved the two-branch CNN through interclass sparsity-based discriminative least square regression (CS_DLSR), which encourages feature discrimination among different land-covers. Zhang et al. [33] designed an encoder–decoder to construct the latent representation between multisource data and then fused them for classification. In [34], a depthwise feature interaction network (DFINet) was developed for HSI and MSI classification in the Yellow River estuary wetland. In [35], a hierarchy-based classifier for urban vegetation classification was designed to incorporate canopy height features into spectral and textural data.
Despite the intense interest in multisource data classification, it remains a highly challenging problem. The primary challenges fall into two aspects: (1) data quality needs to be improved: higher spatial and spectral resolution is conducive to the extraction of texture and spectral features, which improves the final classification results; (2) sequential features need to be exploited: the continuity of spatial distribution and spectral curves enhances feature discrimination and classification performance.
To address the aforementioned challenges, a systematic framework for multisource remote sensing image processing is constructed. A typical CNN is used for data fusion, and then a spatial-spectral vision transformer (SSViT) is employed for the land-cover mapping that underpins biodiversity estimation. In stage 1, a CNN-based method fuses the HSI and MSI over the Yellow River estuary wetland, so that both the spatial and spectral resolution of the fused image are improved. In stage 2, the land-cover map of the fused image is generated by the SSViT, which exploits the sequential relationships of the spatial and spectral information, respectively; an external attention module is then adopted for feature integration. Finally, biodiversity estimation is achieved by combining the land-cover map with the benthic collection in the study area. Extensive experiments conducted on the Yellow River estuary dataset against several related methods reveal that the proposed framework provides competitive advantages in terms of data quality and classification accuracy.
The main contributions of the proposed method are summarized as follows:
  • A systematic framework including stage 1 (data fusion) and stage 2 (classification) is constructed for land-cover mapping, which serves as the precondition of biodiversity estimation. In fact, the relationship between land-cover and biomass is of utmost importance for remote sensing monitoring of biodiversity. In this paper, the coarse-scale biodiversity estimation of the Yellow River estuary dataset is indirectly achieved based on land-cover mapping.
  • The classification stage is crucial for information interpretation. To explore the spatial-spectral sequential features of the wetland, a spatial transformer and a spectral transformer, both with position embedding, are utilized to extract neighborhood correlations, which encourages discrimination between different classes. In addition, an external attention module is employed to enhance the spatial-spectral features; different from the self-attention module, the external attention module is optimized over the whole training set. Finally, pixel-wise prediction is performed for land-cover mapping.
The rest of this paper is organized as follows: Section 2 describes the study area and the considered data. Section 3 illustrates the proposed method in detail. Section 4 presents the experimental results on the Yellow River estuary dataset to validate the proposed method and reports the biodiversity estimation. Finally, conclusions are presented in Section 5.

2. Study Area and Data Description

The Yellow River delta wetland is located in Shandong Province, China (118°33′–119°20′E, 37°35′–38°12′N). It is an important ecological functional area of the Bohai Sea, and the estuarine wetland is a representative part of it [36]. In particular, the intertidal zone of the Yellow River estuary wetland hosts rich salt marsh vegetation and benthos and plays an important role in biodiversity protection and ecological restoration. However, the Yellow River delta wetland is facing reductions in natural area, biodiversity, and ecosystem service function.
Therefore, monitoring the species composition and spatio-temporal distribution in the study area promotes biodiversity estimation and protection. As shown in Figure 1, the intertidal zone north of the Yellow River is selected as the study area, and 11 sites are arranged for benthos collection. The coordinates and real landscape of the field sites are listed in Table 1.
In this paper, a mudflat quantitative sampler with a size of 0.25 m × 0.25 m × 0.3 m is utilized to collect intertidal biological samples according to the sites and numbers of quadrats listed in Table 1; the area of one quadrat is 0.0625 m². Note that Bullacta exarata and Mactra veneriformis usually live on the surface of the intertidal zone and are recorded directly through field observation. Some samples at Sites A2 and A3 were scoured away by the tide during sampling, so the remaining samples are recorded at a proportion of 80%. All samples retained after elutriation are transferred to sample bottles and further analyzed quantitatively under a stereomicroscope to obtain the species composition and species density.
In addition, the land-cover map is generated from multisource data, including HSI and MSI captured by the ZiYuan1-02D (ZY1-02D) satellite on 26 September 2020. The MSI includes eight multi-spectral scanner (MSS) bands with a 10 m ground sample distance (GSD). The HSI is obtained by the AHSI sensor with a spectral resolution of 10–20 nm. The parameters of the ZiYuan1-02D satellite are listed in Table 2. The multisource data are preprocessed by image registration, atmospheric correction, and radiometric calibration. To be specific, the HSI and MSI are transformed into the WGS 84 geographic coordinate system, and the images are registered with the automatic registration tool in ENVI. The Fast Line-of-Sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module of ENVI is employed for atmospheric correction, and the radiometric calibration is carried out using the gain and offset coefficients. The classification system is established based on the real landscape listed in Table 1. Here, five classes are selected for land-cover classification, and the cloud occlusion (white area in Figure 1a) is eliminated. The benthic data collected at the different sites are reported in Figure 2 and Table 3. Regions of interest are selected as the training set, with the ground truth annotated by experts with rich field knowledge, as listed in Table 4. Note that the Tamarix forest class mainly covers the distribution area of Tamarix chinensis, which is mixed with other vegetation such as Suaeda salsa and Phragmites australis.

3. Proposed Classification Framework

Consider a set of multisource remote sensing images from ZY1-02D, including a low-resolution HSI (LR-HSI) $X_{hsi} \in \mathbb{R}^{H \times W \times C_h}$ and a high-resolution MSI (HR-MSI) $X_{msi} \in \mathbb{R}^{3H \times 3W \times C_m}$. Here, $H$ and $W$ are the height and width of the LR-HSI, and $C_h = 166$ and $C_m = 8$ are the numbers of bands of the LR-HSI and HR-MSI, respectively.
Data fusion aims to integrate the spectral advantages of the LR-HSI and the spatial advantages of the HR-MSI, generating a fused image with high spatial-spectral resolution; classification is then conducted on the fused image for land-cover mapping. As shown in Figure 3, the proposed framework includes two stages: (1) in stage 1 (data fusion), the complementary spatial-spectral information is reconstructed by the CNN-based method; (2) in stage 2 (classification), the proposed SSViT, consisting of a spatial-spectral vision transformer and an external attention module, is employed to learn the sequential relationships of the spatial-spectral information for classification.

3.1. Data Fusion Based on HSI and MSI

Information fusion of HSI and MSI improves the spatial-spectral resolution of the fused image, which greatly benefits the subsequent interpretation. In [37], several methods were evaluated for fusing the HSI and MSI of ZY1-02D. Because the spatial and spectral response functions of the sensors are unavailable in this application, the performance of many fusion techniques is limited, whereas deep learning methods are able to alleviate this problem [38]. Therefore, a CNN-based method is introduced for information fusion of HSI and MSI. It is worth mentioning that other methods are also viable for data fusion, depending on practical demands.

3.1.1. Spatial and Spectral Sampling

CNN-based fusion methods require a proportionally large number of training samples for parameter optimization, but reference images are scarce in the wetland scene. Thus, spatial sampling and spectral sampling are applied to the LR-HSI to obtain degraded images: the LR-HSI serves as the reference image, and the degraded images serve as the training images during parameter optimization.
The spatial sampling applies a Gaussian blur and a downsampling operation to the LR-HSI according to the scale ratio between the HR-MSI and LR-HSI. The degraded MSI is generated by equal-interval band selection among the visible-to-near-infrared bands of the LR-HSI. The degraded HSI $X_{dh}$ and degraded MSI $X_{dm}$ are calculated as:
$$X_{dh} = \mathrm{Downsampling}\big(\mathrm{Gaussian}(X_{hsi}), 1/r\big), \quad (1)$$

$$X_{dm} = X_{hsi}(b), \quad (2)$$

where $X_{dh}$ is the degraded HSI, $\mathrm{Downsampling}(\cdot)$ denotes spatial downsampling by bilinear interpolation, and $\mathrm{Gaussian}(\cdot)$ denotes blurring by a Gaussian filter. $X_{dm}$ is sampled from the original LR-HSI along the spectrum, with $b = \mathrm{Rounding}(C_h/8) \cdot \omega + 1$ and $\omega = [0, 1, \ldots, C_m - 1]$, where $C_h = 76$ is the number of visible-to-near-infrared bands of $X_{hsi}$ and $\mathrm{Rounding}(\cdot)$ rounds toward zero.
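To make the two sampling operations concrete, a minimal PyTorch sketch is given below. The Gaussian kernel size and standard deviation are not reported in the paper and are placeholders here; `r` is the scale ratio between HR-MSI and LR-HSI (3 for ZY1-02D).

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def degrade_hsi(x_hsi: torch.Tensor, r: int = 3) -> torch.Tensor:
    """Spatial sampling (Equation (1)): Gaussian blur, then 1/r bilinear
    downsampling. x_hsi: (1, C_h, H, W) LR-HSI used as the reference image."""
    blurred = gaussian_blur(x_hsi, kernel_size=5, sigma=1.0)  # kernel/sigma assumed
    _, _, h, w = blurred.shape
    return F.interpolate(blurred, size=(h // r, w // r),
                         mode="bilinear", align_corners=False)

def degrade_msi(x_hsi: torch.Tensor, c_vnir: int = 76, c_m: int = 8) -> torch.Tensor:
    """Spectral sampling (Equation (2)): equal-interval band selection among
    the visible-to-near-infrared bands, b = Rounding(C_h/8)*omega + 1."""
    step = c_vnir // c_m                    # Rounding(76/8) = 9
    bands = [step * w for w in range(c_m)]  # 0-based indices of bands 1, 10, ..., 64
    return x_hsi[:, bands, :, :]
```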

3.1.2. Image Fusion Based on CNN

To achieve the complementary advantages of HSI and MSI, a CNN-based approach is designed for information fusion. Firstly, the preliminary fusion is applied for information integration. The preliminary fused image is denoted as:
$$X_{pre}(i) = \begin{cases} X_{dm}(i), & \text{if } i \in b, \\ \mathrm{Upsampling}\big(X_{dh}(i)\big), & \text{otherwise}, \end{cases} \quad (3)$$

where $X_{pre}$ is the preliminary fused image, $i = [1, 2, \ldots, C_h]$, and $\mathrm{Upsampling}(\cdot)$ denotes spatial upsampling by bilinear interpolation.
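A sketch of the band-replacement step in Equation (3), assuming 0-based band indices:

```python
import torch
import torch.nn.functional as F

def preliminary_fusion(x_dh: torch.Tensor, x_dm: torch.Tensor,
                       bands: list, size: tuple) -> torch.Tensor:
    """Equation (3): upsample the degraded HSI (1, C_h, H/r, W/r) to (H, W)
    and overwrite the channels at the selected positions b with the degraded
    MSI bands (1, C_m, H, W)."""
    x_pre = F.interpolate(x_dh, size=size, mode="bilinear", align_corners=False)
    x_pre[:, bands, :, :] = x_dm  # replace the b channels with the MSI bands
    return x_pre
```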
The preliminary fused image is filtered through a 3 × 3 convolutional layer (Conv) with stride 1, batch normalization (BN), and an activation layer (ReLU), and then two further such sequences with skip connections are applied for deeper fusion. The fusion loss function $L_f$ used for optimization is defined as:
$$L_f = \frac{1}{HWC_h} \sum_{h=1}^{H} \sum_{w=1}^{W} \sum_{c=1}^{C_h} \big\| X_{hsi}(h, w, c) - X_{fuse}(h, w, c) \big\|_2, \quad (4)$$

where $X_{fuse}$ is the fused image and $\|\cdot\|_2$ is the L2-norm.
Note that the three convolutional layers of the CNN-based method use a kernel size of 3 × 3 and a stride of 1 with padding, so the spatial size of the feature maps remains unchanged during training. The learning rate is set to $1 \times 10^{-4}$, and the network is trained with Adam for 500 epochs.
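The following sketch shows one plausible arrangement of the three Conv-BN-ReLU blocks with skip connections, together with the training setup described above. The exact placement of the skip connections and the channel widths are assumptions, since the paper only specifies the kernel size, stride, optimizer, learning rate, and number of epochs.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Three Conv(3x3, stride 1, padded)-BN-ReLU blocks; the last two blocks
    carry skip connections, so spatial size and band count are preserved.
    (Skip placement and channel width are assumptions for this sketch.)"""
    def __init__(self, bands: int = 166):
        super().__init__()
        def block():
            return nn.Sequential(nn.Conv2d(bands, bands, 3, stride=1, padding=1),
                                 nn.BatchNorm2d(bands), nn.ReLU(inplace=True))
        self.b1, self.b2, self.b3 = block(), block(), block()

    def forward(self, x_pre):
        f1 = self.b1(x_pre)
        f2 = self.b2(f1) + f1    # skip connection
        return self.b3(f2) + f2  # skip connection

def fusion_loss(x_fuse: torch.Tensor, x_hsi: torch.Tensor) -> torch.Tensor:
    """Equation (4): mean L2 distance between fused and reference responses."""
    return (x_fuse - x_hsi).abs().mean()

model = FusionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # 500 epochs in the paper
```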

3.2. Classification Based on SSViT

Accurate land-cover mapping is crucial for biodiversity estimation based on remote sensing, since the distribution of land-cover indirectly reflects biodiversity. To exploit the sequential relationships in spatial/spectral information, attention mechanisms have achieved promising performance [39]. The vision transformer (ViT) has boosted performance in computer vision, owing to position embedding and self-attention [40]. Inspired by the ViT, a spatial transformer and a spectral transformer assembled with an external attention module, named SSViT, are designed for land-cover classification. More specifically, the proposed SSViT is mainly composed of the spatial transformer and the spectral transformer, which extract the sequential relationships of the spatial and spectral information, respectively. The framework of the spatial/spectral transformer is illustrated in Figure 4.

3.2.1. The Spectral Transformer

HSIs sequentially record information across the electromagnetic spectrum, so the discrimination of spectral responses explicitly aids land-cover classification. However, the "different body with same spectrum" and "same body with different spectrum" phenomena deteriorate the classification results. Thus, a spectral transformer with a depth of 8 is designed to extract the relationships within the spectral information; its baseline with a depth of 1 is shown in Figure 4. First, image patches centered at pixels of the fused image are fed into the spectral transformer. Given an image patch $X \in \mathbb{R}^{r \times r \times C_h}$, it is filtered by sequential operations (Conv, BN, and ReLU) to generate the feature map $X_f \in \mathbb{R}^{r \times r \times c}$, which is reshaped into $n$ sub-patches $X_{spec} \in \mathbb{R}^{n \times (r^2 \cdot p_1)}$, where $n$ is the number of sub-patches (the sequence length for the transformer) and $p_1 = c/n$. After that, a trainable Linear projection layer maps $X_{spec}$ to $d$-dimensional vectors. To capture the relationships between the $n$ sub-patches, a learnable 1D position embedding is added to represent the image through the transformer, as illustrated in the green box of Figure 4.
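A sketch of this spectral tokenization is given below; the feature channel count c = 72 and token dimension d = 64 are assumed values (c is chosen to be divisible by n = 9).

```python
import torch
import torch.nn as nn

class SpectralTokenizer(nn.Module):
    """Reshapes a feature map X_f (B, c, r, r) into n spectral sub-patches of
    length r*r*p1 (p1 = c/n), projects them to d dimensions, and adds a
    learnable 1D position embedding."""
    def __init__(self, r: int = 15, c: int = 72, n: int = 9, d: int = 64):
        super().__init__()
        self.n, p1 = n, c // n
        self.proj = nn.Linear(r * r * p1, d)           # trainable Linear projection
        self.pos = nn.Parameter(torch.zeros(1, n, d))  # learnable position embedding

    def forward(self, x_f: torch.Tensor) -> torch.Tensor:
        tokens = x_f.reshape(x_f.shape[0], self.n, -1)  # group channels into n sets
        return self.proj(tokens) + self.pos
```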
The transformer encoder consists of multi-head attention, normalization layer (Layer Norm) and multilayer perceptron (MLP). The outputs of multi-head attention are calculated as:
$$f = A(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{d}}\right) V, \quad (5)$$
where $Q, K, V \in \mathbb{R}^{N \times d}$ are the query set, key set, and value set, respectively, and $N$ is the batch size. $A(\cdot)$ denotes the attention function, and $f \in \mathbb{R}^{N \times d}$ represents the attention feature, generated by weighting the values $V$ with the attention learned from $Q$ and $K$. Intuitively, multi-head attention helps the network capture richer information: it introduces several parallel heads, in each of which an independent scaled dot-product attention function $A(\cdot)$ generates the attention features. Therefore, the attention feature $f$ is redefined as:
$$f = [\mathrm{head}_1, \mathrm{head}_2, \ldots, \mathrm{head}_h]\, W^{o}, \quad (6)$$

$$\mathrm{head}_j = A\big(QW_j^{Q}, KW_j^{K}, VW_j^{V}\big), \quad j = [1, 2, \ldots, h], \quad (7)$$

where $W_j^{Q}, W_j^{K}, W_j^{V} \in \mathbb{R}^{d \times d_h}$ are the projection matrices of the $j$th head, $W^{o} \in \mathbb{R}^{hd_h \times d}$ is the output projection matrix, and $d_h = d/h$ is the feature dimension of each head.
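A compact PyTorch sketch of Equations (5)–(7) follows; the per-head scaling by sqrt(d_h) is the standard transformer convention, assumed here.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Scaled dot-product attention with h parallel heads (Equations (5)-(7));
    each head attends over the n sub-patch tokens."""
    def __init__(self, d: int = 64, h: int = 8):
        super().__init__()
        assert d % h == 0
        self.h, self.dh = h, d // h
        self.wq, self.wk, self.wv = (nn.Linear(d, d) for _ in range(3))
        self.wo = nn.Linear(d, d)  # output projection W^o

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, n, d)
        b, n, _ = x.shape
        def split(t):  # (B, n, d) -> (B, h, n, dh), one slice per head
            return t.view(b, n, self.h, self.dh).transpose(1, 2)
        q, k, v = split(self.wq(x)), split(self.wk(x)), split(self.wv(x))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.dh ** 0.5, dim=-1)
        f = (attn @ v).transpose(1, 2).reshape(b, n, -1)  # concatenate the heads
        return self.wo(f)
```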

3.2.2. The Spatial Transformer

The spatial distribution of wetland land-cover is continuous, and the image patches cover rich spatial information that can be considered sequential. Therefore, a spatial transformer is utilized to extract the relationships within the spatial information. In detail, the feature map $X_f \in \mathbb{R}^{r \times r \times c}$ is reshaped into $n$ sub-patches $X_{spa} \in \mathbb{R}^{n \times (p_2^2 \cdot c)}$, where $n$ is the number of sub-patches and $p_2 = r/\sqrt{n}$ is the spatial size of each sub-patch. Similarly, a trainable Linear projection layer maps $X_{spa}$ to $d$-dimensional vectors, and a learnable 1D position embedding is added to represent the image through the transformer. Finally, the relationships within the spatial information are obtained by the transformer; the detailed calculation is described in Section 3.2.1. It is worth noting that the depth of both the spatial and spectral transformers is set to 8.
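The spatial tokenization can be sketched with `nn.Unfold`, which cuts the feature map into non-overlapping p2 × p2 sub-patches (c = 72 and d = 64 are assumed values, as in the spectral sketch):

```python
import torch
import torch.nn as nn

class SpatialTokenizer(nn.Module):
    """Cuts the feature map (B, c, r, r) into n non-overlapping p2 x p2 spatial
    sub-patches (p2 = r / sqrt(n)), projects them to d dimensions, and adds a
    learnable 1D position embedding."""
    def __init__(self, r: int = 15, c: int = 72, n: int = 9, d: int = 64):
        super().__init__()
        p2 = r // int(n ** 0.5)  # 15 / 3 = 5
        self.unfold = nn.Unfold(kernel_size=p2, stride=p2)
        self.proj = nn.Linear(p2 * p2 * c, d)
        self.pos = nn.Parameter(torch.zeros(1, n, d))

    def forward(self, x_f: torch.Tensor) -> torch.Tensor:
        tokens = self.unfold(x_f).transpose(1, 2)  # (B, n, p2*p2*c)
        return self.proj(tokens) + self.pos
```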

3.2.3. The External Attention

To realize joint classification, an external attention module integrates the features extracted by the spatial and spectral transformers, as shown in Figure 5b. As in the self-attention (Figure 5a) of the spatial/spectral transformers, self-correlation is obtained through $Q$, $K$, and $V$ as computed in Equation (5). In external attention, two memory units ($M_K \in \mathbb{R}^{d \times 2d}$ and $M_V \in \mathbb{R}^{d \times 2d}$) replace $K$ and $V$ in the self-attention baseline and are optimized over the whole training set. The query is generated as $Q_e = \mathrm{Linear}(C(f_{spec}, f_{spa}))$, where $C(\cdot,\cdot)$ denotes concatenation, and $f_{spec}$ and $f_{spa}$ are the outputs of the spectral transformer and the spatial transformer, respectively. The relationship over the whole training set is then learned as follows:
$$f_e = A(Q_e, M_K, M_V) = \mathrm{softmax}\!\left(\frac{Q_e M_K^{T}}{\sqrt{d}}\right) M_V, \quad (8)$$

where $Q_e \in \mathbb{R}^{N \times 2d}$. Finally, the pixel-wise prediction is obtained by a Linear layer and a Softmax.
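A sketch of the external attention step in Equation (8), with the memory units realized as bias-free Linear layers (a common implementation choice, assumed here):

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Equation (8): the learnable memory units M_K and M_V replace the
    input-derived keys and values and are shared across the training set."""
    def __init__(self, d: int = 64):
        super().__init__()
        self.query = nn.Linear(2 * d, 2 * d)       # Q_e = Linear(concat(f_spec, f_spa))
        self.mk = nn.Linear(2 * d, d, bias=False)  # computes Q_e @ M_K^T
        self.mv = nn.Linear(d, 2 * d, bias=False)  # multiplies by M_V
        self.scale = d ** 0.5

    def forward(self, f_spec: torch.Tensor, f_spa: torch.Tensor) -> torch.Tensor:
        q = self.query(torch.cat([f_spec, f_spa], dim=-1))     # (N, 2d)
        attn = torch.softmax(self.mk(q) / self.scale, dim=-1)  # (N, d)
        return self.mv(attn)                                   # (N, 2d)
```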
The proposed SSViT is deployed on an Nvidia GTX 3080 GPU in PyTorch. The loss function is the cross-entropy loss defined in Equation (9), optimized by stochastic gradient descent (SGD). Specifically, the learning rate is $5 \times 10^{-4}$, and the number of epochs is 500. The batch size, momentum, and weight decay are set to 128, 0.9, and $5 \times 10^{-4}$, respectively:
$$L_c = -\frac{1}{N} \sum_{i=1}^{N} \sum_{m=1}^{M} y_i^{m} \log p_i^{m}, \quad (9)$$

where $N$ is the batch size, $M$ is the number of classes, and $y$ is the real label: $y_i^m = 1$ when the class of pixel $i$ is $m$ (whose predicted probability after Softmax is $p_i^m$), and $y_i^m = 0$ otherwise.
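The training configuration above can be reproduced with the following sketch; the model and data are stand-ins, not the actual SSViT or dataset.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(128, 5)          # stand-in for the assembled SSViT; 5 classes
criterion = nn.CrossEntropyLoss()  # Equation (9)
optimizer = torch.optim.SGD(model.parameters(), lr=5e-4,
                            momentum=0.9, weight_decay=5e-4)
loader = DataLoader(TensorDataset(torch.randn(1024, 128),         # stand-in features
                                  torch.randint(0, 5, (1024,))),  # stand-in labels
                    batch_size=128, shuffle=True)

for epoch in range(500):
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
```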

4. Experiments and Analysis

In this paper, the framework for multisource remote sensing image processing is developed for biodiversity estimation. A CNN-based method is employed for data fusion, whose performance is evaluated by visual comparison. Next, the proposed SSViT is utilized to classify land-cover in the intertidal zone of the Yellow River estuary wetland; the superiority of the proposed SSViT is measured by comparing its accuracy with that of several related methods. Finally, the correlation between land-cover and benthos is established using the samples collected at the selected sites, so that biodiversity estimation in the intertidal zone is achieved at a coarse scale.

4.1. The Performance of Data Fusion

HSI and MSI of ZY1-02D are utilized for collaborative land-cover classification. First, the complementary advantages of HSI and MSI are fused through the CNN-based method, and the resulting fused image with high spatial-spectral resolution is then classified to generate the land-cover map. In this paper, visual comparison and the spectral angle mapper (SAM) are used to measure the quality of the fused image. Since a reference image is not available, the spectral information of the original HSI is used as the reference spectrum. Table 5 reports the SAM of each class with respect to the reference HSI, and the visualized result of data fusion is shown in Figure 6. It is observed that the visual quality of the fused image is improved.
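For reference, the SAM between fused and reference spectra can be computed as follows; this is the standard definition, as the paper does not give its exact implementation.

```python
import torch
import torch.nn.functional as F

def spectral_angle(x: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Mean spectral angle (radians) between fused and reference spectra.
    x, ref: (N, C) matrices of pixel spectra."""
    cos = F.cosine_similarity(x, ref, dim=1).clamp(-1.0, 1.0)
    return torch.acos(cos).mean()
```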

4.2. Classification Performance

To validate the superiority of the proposed SSViT, several comparison methods are selected for experimental validation, including SVM [23], LBP-ELM [24], S2FL [41], Residual CNN [42], two-branch CNN [31], and DFINet [34]. Note that SVM and LBP-ELM are spectral classifiers, while the other comparison methods, together with the proposed SSViT, are spatial-spectral classifiers. S2FL, two-branch CNN, and DFINet are joint classification frameworks based on multisource feature fusion. Overall accuracy (OA), average accuracy (AA), and the kappa coefficient (Kappa) are utilized for quantitative assessment, computed as sketched below.
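All three metrics follow from the confusion matrix, as in this sketch:

```python
import numpy as np

def accuracy_metrics(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int):
    """OA, AA, and Cohen's kappa computed from the confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)          # rows: reference, cols: prediction
    oa = np.trace(cm) / cm.sum()
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))  # mean per-class accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa
```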

4.2.1. Analysis of the Image Patch Size

The spatial-spectral information boosts the performance of land-cover classification and is affected by the image patch size. The proposed SSViT extracts sequential features from image sub-patches, and the number of image sub-patches is set to 9; therefore, the image patch size r is set to a multiple of 3 ([9, 12, 15, 18, 21]). The relationship between OA and image patch size is shown in Figure 7. When r = 9 and r = 12, the classification results are not satisfactory, because the land-cover in the intertidal zone is mixed and patchy and a smaller image patch size leads to fragmented classification results. Conversely, when r > 15, the OA value decreases gradually: an excessive spatial neighborhood reduces the discrimination of spatial information and increases the computational burden of the model. Therefore, r = 15 is selected as the best choice.

4.2.2. Ablation Experiment

The spatial and spectral transformers are used to extract the sequential relationships of spatial and spectral information, and the external attention module is employed for feature integration. Next, the benefits of the different modules for the classification results are discussed. In Table 6, the spatial transformer and spectral transformer alone mine only the relationships of spatial or spectral information and obtain lower OA values. When combining the spatial transformer and spectral transformer, the OA value increases by at least 1.99%. The external attention module further improves the classification performance, giving the full model an additional OA gain of 0.91%. This confirms that spatial-spectral relationships tend to generate accurate land-cover mapping, and that feature integration through external attention exploits the relationships across the whole training set.

4.2.3. Classification Results on the Yellow River Estuary Dataset

Figure 8 presents the land-cover maps corresponding to the experiments reported in Table 7. The maps obtained by SVM, LBP-ELM, and S2FL tend to be rather noisy, resulting in serious landscape fragmentation: the continuity of land-cover distribution is ignored because only the spectrum is used. For Residual CNN, the boundaries between different types suffer from artifacts to some extent, mainly because spectral information is not specifically considered. In the maps produced by Two-branch CNN and DFINet, the fragmentation and artifacts are alleviated. Compared with the other methods, the proposed SSViT generates better results in terms of class consistency.
In Table 7, "HSI" denotes the original HSI upsampled by bilinear interpolation to the spatial size of the MSI, and "MSI" represents the original MSI. "Fused" represents the fused image generated by the CNN-based method in Section 3.1, and "HSI+MSI" indicates that the upsampled HSI and the original MSI are fed into the feature-fusion classifiers. From the experimental results, the fused image achieves better performance than HSI or MSI alone; for the proposed SSViT, the OA value increases by at least 2.23%. The fused image significantly improves the classification accuracy of Spartina alterniflora. In addition, deep learning methods outperform traditional methods in most cases. In particular, the two-branch CNN and DFINet achieve competitive results, but their improvement is limited. Generally speaking, the proposed SSViT outperforms the considered methods on most classes. The comparison demonstrates that the proposed SSViT is powerful in sequential feature extraction, which encourages discrimination between different classes.

4.3. Biodiversity Estimation

After obtaining the land-cover mapping, the biodiversity estimation is further achieved using biomass information of the 11 sites. The location and dominant species of different sites are shown in Figure 9, and the corresponding diversity index and species number are listed in Table 8.

4.3.1. Biodiversity Estimation in the Intertidal Zone

Species richness, species evenness, and the diversity index are introduced for biodiversity estimation in the intertidal zone. As listed in Table 8, sections A, B, and C are each sampled from the high-tide to the low-tide area. Considering the diversity index and dominant species, the species distribution is found to be continuous.
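The paper does not state which formulas underlie these three quantities; the sketch below assumes the common Shannon-Wiener index, Pielou evenness, and Margalef richness, computed from the per-species counts in Table 3 (the values in Table 8 may be based on density-weighted inputs or a different log base, so this is illustrative only).

```python
import numpy as np

def diversity_indices(counts: np.ndarray):
    """Shannon-Wiener index H', Pielou evenness J, and Margalef richness d
    from per-species individual counts at one site (assumed definitions)."""
    counts = counts[counts > 0].astype(float)
    n, s = counts.sum(), counts.size
    p = counts / n
    h = -np.sum(p * np.log(p))                 # Shannon-Wiener H'
    j = h / np.log(s) if s > 1 else 0.0        # Pielou evenness J = H'/ln S
    d = (s - 1) / np.log(n) if n > 1 else 0.0  # Margalef richness
    return h, j, d

# Example with the per-species counts of site A3 from Table 3:
print(diversity_indices(np.array([4, 4, 2, 1, 2, 1])))
```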
Combined with the land-cover map, it is found that most of the Tamarix forest is located in the high-tide area with lower biodiversity, and the dominant species there are mainly arthropods (crabs) such as Macrophthalmus japonicu and Helice tridens sheni. The Suaeda salsa area in the middle-tide zone (A2 and B2) shows relatively high biodiversity, while the Spartina alterniflora area is distributed in the low-tide zone (A3, B3, C2, C3). Tidal creeks are beneficial to high biodiversity: section D, selected in the intertidal zone near a tidal creek, has a relatively high diversity index and species number, which indicates that benthos diversity is closely related to the connectivity of tidal creeks. The low-tide sites (A3, B3, C2, C3) exhibit an increasing trend in diversity index and species number, as do the sites near the tidal creeks (D1 and D2).

4.3.2. Biodiversity Estimation in the Spartina alterniflora Area

Spartina alterniflora is one of the earliest invasive plant species in China. It competes with other vegetation in the intertidal zone, causing the disappearance of large areas of salt marsh plants. Moreover, with its developed root system, Spartina alterniflora destroys the habitat of offshore organisms and weakens the seawater exchange capacity, leading to a decline in water quality. Consequently, biodiversity in the Spartina alterniflora area is inevitably damaged to a certain extent. The biodiversity estimation in the Spartina alterniflora area is carried out according to the real landscape in Table 1.
According to the real landscape reported in Table 1, three typical sites, A3, C1, and C2, are selected, located in the sparse area and dense area of Spartina alterniflora and the mixed area of Spartina alterniflora and Suaeda salsa, respectively. As illustrated in Figure 10, the lowest species density and biomass are collected in the dense area, whereas in the sparse area they are relatively high. The diversity index of the mixed area is slightly higher than that of the dense area, mainly because Suaeda salsa plays a positive role in soil remediation [43]. Furthermore, the species richness, species evenness, and diversity index of the Spartina alterniflora area are discussed. As shown in Figure 11, the species richness and diversity index of the considered sites are consistent with the pattern in Figure 10. Note that site C1 has the highest species evenness, which measures the stability of biological communities; this is because the low benthos diversity in the dense area is difficult to change. In conclusion, Spartina alterniflora damages biodiversity, and its growth density is negatively correlated with biodiversity. Monitoring the expansion and management of Spartina alterniflora by remote sensing is therefore of great significance for wetland biodiversity protection.

4.3.3. Biodiversity Estimation in the Suaeda salsa Area

Suaeda salsa is widely distributed in the intertidal zone of the Yellow River estuary wetland and is conducive to the restoration of the ecological environment. As shown in Figure 12, the biodiversity estimation in the Suaeda salsa area is carried out according to the real landscape in Table 1, from which sites A2, B2, and B3 are used for diversity analysis. Site B2 has the largest Suaeda salsa coverage, followed by site A2, with site B3 the lowest. Both sites A2 and B2 are located in the middle-tide area, and it is found that a dense area of Suaeda salsa is conducive to the distribution of benthos. For sites B2 and B3 in section B, site B3 in the low-tide area does not show the expected increase in the diversity index, which indicates that the distribution of Suaeda salsa has a positive effect on the distribution of benthos.

4.4. Discussion

Considering the complex environment of wetlands, recognizing land-cover and estimating biodiversity with remote sensing remains challenging. In this paper, the designed remote sensing image processing framework is used to classify the land-cover of the study area, and the benthos diversity of the study area is estimated by integrating the collected benthos data with the land-cover map. In general, the proposed framework is capable of mining the sequential features of spatial-spectral information, which improves the precision of land-cover classification. Owing to the mixed pixels of medium-resolution remote sensing data, how to extract subspace information for classification still needs to be specially studied.
Different from [44,45], this paper realizes coarse-scale monitoring of biodiversity through the distribution law between benthos and land-cover; in other words, biodiversity estimation is indirectly achieved by land-cover mapping. It is observed that most of the Spartina alterniflora and mudflat are distributed in the low-tide area with developed tidal creeks, where the diversity of benthos is higher. The middle-tide area is also covered by the semidiurnal tide, and its land-cover is mainly salt-tolerant Suaeda salsa. Owing to the low frequency of tidal inundation in the high-tide area, Tamarix chinensis begins to grow there; the diversity of benthos in the high-tide area is the lowest, and the dominant species are arthropods such as crabs. In addition, it is found that Spartina alterniflora deteriorates the ecological environment and biodiversity. In the future, fine classification and time-series monitoring of land-cover combined with high-resolution remote sensing will become the focus of our work.

5. Conclusions

This paper constructed a systematic framework for multisource remote sensing image processing in the wetland scene. The proposed framework benefits from two aspects. On the one hand, a CNN-based fusion method has been applied to the multisource data, so that their complementary merits are assembled into the fused image; high-quality fused images consequently serve wetland monitoring. On the other hand, a spatial-spectral vision transformer (SSViT) has been designed for land-cover mapping: the sequential features of the fused image in the spatial and spectral dimensions are extracted by the spatial transformer and spectral transformer, respectively, and an external attention module then integrates the spatial-spectral features. In addition, the biodiversity estimation of the study area is achieved by combining the benthic data with the land-cover map.
Extensive experiments conducted on the Yellow River estuary dataset reveal the effectiveness of the established framework for multisource remote sensing images: it shows superior performance in terms of both data quality and classification accuracy. Combined with the benthic data, the biodiversity estimation of the study area is achieved.

Author Contributions

Conceptualization, Y.G., X.S. and W.L.; methodology and validation, Y.G., W.L. and J.W.; formal analysis, X.S., J.H. and X.J.; investigation, X.S. and Y.F.; writing—original draft preparation, Y.G. and W.L.; writing—review and editing, Y.G. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

Not applicable.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work is supported by the Beijing Natural Science Foundation (Grant No. JQ20021), the National Natural Science Foundation of China (Grant Nos. 61922013 and 62001023), and the China Postdoctoral Science Foundation (Grant No. BX20200058).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brisco, B.; Ahern, F.; Murnaghan, K.; White, L.; Canisus, F.; Lancaster, P. Seasonal Change in Wetland Coherence as an Aid to Wetland Monitoring. Remote Sens. 2017, 9, 158.
  2. Xia, Y.; Fang, C.; Lin, H.; Li, H.; Wu, B. Spatiotemporal Evolution of Wetland Eco-Hydrological Connectivity in the Poyang Lake Area Based on Long Time-Series Remote Sensing Images. Remote Sens. 2021, 13, 4812.
  3. López-Tapia, S.; Ruiz, P.; Smith, M.; Matthews, J.; Zercher, B.; Sydorenko, L.; Varia, N.; Jin, Y.; Wang, M.; Dunn, J.B.; et al. Machine learning with high-resolution aerial imagery and data fusion to improve and automate the detection of wetlands. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102581.
  4. Sun, S.; Wang, Y.; Song, Z.; Chen, C.; Zhang, Y.; Chen, X.; Chen, W.; Yuan, W.; Wu, X.; Ran, X.; et al. Modelling Aboveground Biomass Carbon Stock of the Bohai Rim Coastal Wetlands by Integrating Remote Sensing, Terrain, and Climate Data. Remote Sens. 2021, 13, 4321.
  5. Ma, Y.; Wei, J.; Tang, W.; Tang, R. Explicit and stepwise models for spatiotemporal fusion of remote sensing images with deep neural networks. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102611.
  6. Wang, R.; Gamon, J.A. Remote sensing of terrestrial plant biodiversity. Remote Sens. Environ. 2019, 231, 111218.
  7. Filipponi, F.; Valentini, E.; Nguyen Xuan, A.; Guerra, C.A.; Wolf, F.; Andrzejak, M.; Taramelli, A. Global MODIS Fraction of Green Vegetation Cover for Monitoring Abrupt and Gradual Vegetation Changes. Remote Sens. 2018, 10, 653.
  8. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2021, 1.
  9. Su, H.; Yao, W.; Wu, Z.; Zheng, P.; Du, Q. Kernel low-rank representation with elastic net for China coastal wetland land cover classification using GF-5 hyperspectral imagery. ISPRS J. Photogramm. Remote Sens. 2021, 171, 238–252.
  10. Hong, D.; Gao, L.; Yao, J.; Zhang, B.; Plaza, A.; Chanussot, J. Graph Convolutional Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5966–5978.
  11. Zhang, H.; Li, Y.; Jiang, Y.; Wang, P.; Shen, Q.; Shen, C. Hyperspectral Classification Based on Lightweight 3D-CNN With Transfer Learning. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5813–5828.
  12. Wang, Q.; Li, Q.; Li, X. A Fast Neighborhood Grouping Method for Hyperspectral Band Selection. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5028–5039.
  13. Zhu, Y.; Geiß, C.; So, E.; Jin, Y. Multitemporal Relearning with Convolutional LSTM Models for Land Use Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3251–3265.
  14. Zhang, M.; Lin, H. Wetland Classification Using Parcel-level Ensemble Algorithm based on GaoFen-6 Multispectral Imagery and Sentinel-1 Dataset. J. Hydrol. 2022, 127462.
  15. Zhang, X.; Xu, J.; Chen, Y.; Xu, K.; Wang, D. Coastal Wetland Classification with GF-3 Polarimetric SAR Imagery by Using Object-Oriented Random Forest Algorithm. Sensors 2021, 21, 3395.
  16. Jiao, L.; Sun, W.; Yang, G.; Ren, G.; Liu, Y. A Hierarchical Classification Framework of Satellite Multispectral/Hyperspectral Images for Mapping Coastal Wetlands. Remote Sens. 2019, 11, 2238.
  17. Li, S.; Song, W.; Fang, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Deep Learning for Hyperspectral Image Classification: An Overview. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6690–6709.
  18. Zhang, X.; Huang, W.; Wang, Q.; Li, X. SSR-NET: Spatial–Spectral Reconstruction Network for Hyperspectral and Multispectral Image Fusion. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5953–5965.
  19. Zhang, M.; Li, W.; Tao, R.; Li, H.; Du, Q. Information Fusion for Classification of Hyperspectral and LiDAR Data Using IP-CNN. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5506812.
  20. Meng, Y.; Rigall, E.; Chen, X.; Gao, F.; Dong, J.; Chen, S. Physics-Guided Generative Adversarial Networks for Sea Subsurface Temperature Prediction. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–14.
  21. Sahour, H.; Kemink, K.M.; O'Connell, J. Integrating SAR and Optical Remote Sensing for Conservation-Targeted Wetlands Mapping. Remote Sens. 2022, 14, 159.
  22. Zhou, R.; Yang, C.; Li, E.; Cai, X.; Yang, J.; Xia, Y. Object-Based Wetland Vegetation Classification Using Multi-Feature Selection of Unoccupied Aerial Vehicle RGB Imagery. Remote Sens. 2021, 13, 4910.
  23. Han, X.; Pan, J.; Devlin, A. Remote sensing study of wetlands in the Pearl River Delta during 1995–2015 with the support vector machine method. Front. Earth Sci. 2017, 12, 521–531.
  24. Li, W.; Chen, C.; Su, H.; Du, Q. Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3681–3693.
  25. Zhang, Y.; Cao, G.; Li, X.; Wang, B. Cascaded Random Forest for Hyperspectral Image Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1082–1094.
  26. Zhong, Y.; Cao, Q.; Zhao, J.; Ma, A.; Zhao, B.; Zhang, L. Optimal Decision Fusion for Urban Land-Use/Land-Cover Classification Based on Adaptive Differential Evolution Using Hyperspectral and LiDAR Data. Remote Sens. 2017, 9, 868.
  27. Rezaee, M.; Mahdianpari, M.; Zhang, Y.; Salehi, B. Deep convolutional neural network for complex wetland classification using optical remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3030–3039.
  28. Pan, H. A feature sequence-based 3D convolutional method for wetland classification from multispectral images. Remote Sens. Lett. 2020, 11, 837–846.
  29. Zhao, X.; Tao, R.; Li, W.; Li, H.C.; Du, Q.; Liao, W.; Philips, W. Joint Classification of Hyperspectral and LiDAR Data Using Hierarchical Random Walk and Deep CNN Architecture. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7355–7370.
  30. Li, H.; Ghamisi, P.; Soergel, U.; Zhu, X. Hyperspectral and LiDAR Fusion Using Deep Three-Stream Convolutional Neural Networks. Remote Sens. 2018, 10, 1649.
  31. Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource Remote Sensing Data Classification Based on Convolutional Neural Network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 937–949.
  32. Liu, C.; Tao, R.; Li, W.; Zhang, M.; Sun, W.; Du, Q. Joint Classification of Hyperspectral and Multispectral Images for Mapping Coastal Wetlands. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 982–996.
  33. Zhang, M.; Li, W.; Du, Q.; Gao, L.; Zhang, B. Feature Extraction for Classification of Hyperspectral and LiDAR Data Using Patch-to-Patch CNN. IEEE Trans. Cybern. 2020, 50, 100–111.
  34. Gao, Y.; Li, W.; Zhang, M.; Wang, J.; Sun, W.; Tao, R.; Du, Q. Hyperspectral and Multispectral Classification for Coastal Wetland Using Depthwise Feature Interaction Network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5512615.
  35. Zhao, S.; Jiang, X.; Li, G.; Chen, Y.; Lu, D. Integration of ZiYuan-3 multispectral and stereo imagery for mapping urban vegetation using the hierarchy-based classifier. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102594.
  36. Li, C.; Zhu, L.; Dai, Z.; Wu, Z. Study on Spatiotemporal Evolution of the Yellow River Delta Coastline from 1976 to 2020. Remote Sens. 2021, 13, 4789.
  37. Lu, H.; Qiao, D.; Li, Y.; Wu, S.; Deng, L. Fusion of China ZY-1 02D Hyperspectral Data and Multispectral Data: Which Methods Should Be Used? Remote Sens. 2021, 13, 2354.
  38. Zheng, K.; Gao, L.; Liao, W.; Hong, D.; Zhang, B.; Cui, X.; Chanussot, J. Coupled Convolutional Neural Network with Adaptive Response Function Learning for Unsupervised Hyperspectral Super Resolution. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2487–2502.
  39. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
  40. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  41. Hong, D.; Hu, J.; Yao, J.; Chanussot, J.; Zhu, X.X. Multimodal remote sensing benchmark datasets for land cover classification with a shared and specific feature learning model. ISPRS J. Photogramm. Remote Sens. 2021, 178, 68–80.
  42. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
  43. Yin, H.; Hu, Y.; Liu, M.; Li, C.; Chang, Y. Evolutions of 30-Year Spatio-Temporal Distribution and Influencing Factors of Suaeda salsa in Bohai Bay, China. Remote Sens. 2022, 14, 138.
  44. Fogarin, S.; Madricardo, F.; Zaggia, L.; Sigovini, M.; Montereale-Gavazzi, G.; Kruss, A.; Lorenzetti, G.; Manfé, G.; Petrizzo, A.; Molinaroli, E.; et al. Tidal Inlets in the Anthropocene: Geomorphology and Benthic Habitats of the Chioggia Inlet, Venice Lagoon (Italy). Earth Surf. Process. Landforms 2019, 44, 2297–2315.
  45. Trzcinska, K.; Tegowski, J.; Pocwiardowski, P.; Janowski, L.; Zdroik, J.; Kruss, A.; Rucinska, M.; Lubniewski, Z.; Schneider von Deimling, J. Measurement of Seafloor Acoustic Backscatter Angular Dependence at 150 kHz Using A Multibeam Echosounder. Remote Sens. 2021, 13, 4771.
Figure 1. Location of the study area. (a) the ground truth image; (b) the MSI captured by ZY1-02D; (c) the HSI captured by ZY1-02D.
Figure 2. The population and legend of benthos in the study area.
Figure 3. Framework of the proposed spatial-spectral vision transformer, which includes stage 1 (data fusion) and stage 2 (classification).
Figure 4. The framework of the spatial/spectral transformer with a depth of 1.
Figure 5. (a) Illustration of the self-attention; (b) illustration of the external attention.
Figure 6. Visualized results of data fusion using HSI and MSI.
Figure 7. Relationship between OA and image patch size.
Figure 8. Land-cover mapping using different methods on the Yellow River estuary dataset.
Figure 9. The dominant species of different sites in the intertidal zone of the Yellow River estuary wetland.
Figure 10. The species density and biomass (g/m²) of the Spartina alterniflora area.
Figure 11. The species richness, species evenness, and the diversity index of the Spartina alterniflora area.
Figure 12. The species richness, species evenness, and diversity index of the Suaeda salsa area.
Table 1. The coordinates and real landscape of the field sites.

| Site | Quadrats | Coordinate | Real Landscape |
|------|----------|------------|----------------|
| A1 | 2 | 119°9′45.360″E, 37°47′15.324″N | Tamarix chinensis |
| A2 | 1 | 119°9′52.740″E, 37°47′31.848″N | Suaeda salsa |
| A3 | 1 | 119°10′4.224″E, 37°47′57.082″N | Sparse area of Spartina alterniflora |
| B1 | 2 | 119°7′35.832″E, 37°47′8.052″N | Mixed area of Suaeda salsa and Tamarix chinensis |
| B2 | 4 | 119°7′40.620″E, 37°47′47.328″N | Dense area of Suaeda salsa |
| B3 | 2 | 119°7′48.000″E, 37°48′13.464″N | Sparse area of Suaeda salsa |
| C1 | 2 | 119°6′31.498″E, 37°49′35.351″N | Dense area of Spartina alterniflora |
| C2 | 2 | 119°6′39.232″E, 37°49′37.604″N | Mixed area of Suaeda salsa and Spartina alterniflora |
| C3 | 2 | 119°6′44.322″E, 37°49′34.668″N | Coastal beach in invasion area of Spartina alterniflora |
| D1 | 1 | 119°4′33.575″E, 37°49′6.440″N | Tidal creek in invasion area of Spartina alterniflora |
| D2 | 1 | 119°4′46.424″E, 37°50′10.171″N | Mudflat near tidal creek |
Table 2. The parameters of the ZiYuan1-02D satellite.

| Type | Band | Range of Wavelengths | Spatial Resolution | Spectral Resolution | Width |
|------|------|----------------------|--------------------|---------------------|-------|
| MSI | B02 | 452–521 nm | 10 m | – | 115 km |
| | B03 | 522–607 nm | | | |
| | B04 | 635–694 nm | | | |
| | B05 | 776–895 nm | | | |
| | B06 | 416–452 nm | | | |
| | B07 | 591–633 nm | | | |
| | B08 | 708–752 nm | | | |
| | B09 | 871–1047 nm | | | |
| HSI | – | 400–2500 nm | 30 m | 10–20 nm | 60 km |
Table 3. Benthic data collected at different sites.

| Sites | Species Name | Number | Species Density (#/m²) | Weight (g) | Biomass (g/m²) |
|-------|--------------|--------|------------------------|------------|----------------|
| A1 | Macrophthalmus japonicu | 2 | 16 | 1.335 | 10.682 |
| A2 | Glauconome primeana | 3 | 60 | 3.127 | 62.538 |
| | Macrophthalmus japonicu | 1 | 20 | 2.228 | 44.558 |
| | Perinereis aibuhitensis | 4 | 80 | 0.865 | 17.304 |
| A3 | Glauconome primeana | 4 | 80 | 3.514 | 70.280 |
| | Potamocorbula laevis | 4 | 80 | 0.256 | 5.110 |
| | Bullacta exarata | 2 | 2 | 4.776 | 4.776 |
| | Macrophthalmus japonicu | 1 | 20 | 3.565 | 71.306 |
| | Perinereis aibuhitensis | 2 | 40 | 0.858 | 17.166 |
| | Mactra veneriformis | 1 | 1 | 20.615 | 20.615 |
| B1 | Macrophthalmus japonicu | 2 | 16 | 1.254 | 10.028 |
| | Helice tridens sheni | 1 | 8 | 5.226 | 41.810 |
| B2 | Macrophthalmus japonicu | 6 | 24 | 8.138 | 32.552 |
| | Helice tridens sheni | 2 | 8 | 11.242 | 44.968 |
| | Perinereis aibuhitensis | 5 | 20 | 0.956 | 3.825 |
| | Mactra veneriformis | 1 | 1 | 12.578 | 12.578 |
| B3 | Glauconome primeana | 2 | 16 | 1.241 | 9.926 |
| | Corophium acherusicum | 12 | 96 | 0.023 | 0.186 |
| | Bullacta exarata | 1 | 0.5 | 1.487 | 0.743 |
| | Perinereis aibuhitensis | 3 | 24 | 0.151 | 1.204 |
| | Umbonium thomasi | 4 | 32 | 1.057 | 8.454 |
| C1 | Batillaria cumingi | 2 | 16 | 2.237 | 17.892 |
| | Macrophthalmus japonicu | 3 | 24 | 3.435 | 27.483 |
| | Perinereis aibuhitensis | 5 | 40 | 0.566 | 4.527 |
| C2 | Batillaria cumingi | 2 | 16 | 1.690 | 13.520 |
| | Macrophthalmus japonicu | 4 | 32 | 5.221 | 41.770 |
| | Helice tridens sheni | 1 | 8 | 0.825 | 6.602 |
| | Perinereis aibuhitensis | 5 | 40 | 0.882 | 7.056 |
| | Umbonium thomasi | 4 | 32 | 0.082 | 0.658 |
| C3 | Glauconome primeana | 18 | 144 | 10.163 | 406.504 |
| | Batillaria cumingi | 22 | 176 | 18.350 | 146.802 |
| | Bullacta exarata | 1 | 1 | 1.883 | 1.883 |
| | Helice tridens sheni | 2 | 16 | 10.215 | 81.720 |
| | Perinereis aibuhitensis | 15 | 120 | 1.439 | 11.510 |
| | Heteromastus filiformis | 4 | 32 | 0.005 | 0.184 |
| | Umbonium thomasi | 11 | 88 | 0.082 | 0.658 |
| D1 | Potamocorbula laevis | 4 | 64 | 0.256 | 4.088 |
| | Macrophthalmus japonicu | 1 | 16 | 3.251 | 52.016 |
| | Perinereis aibuhitensis | 2 | 32 | 0.021 | 0.331 |
| | Heteromastus filiformis | 8 | 128 | 0.004 | 0.062 |
| D2 | Glauconome primeana | 10 | 160 | 1.900 | 30.395 |
| | Chone collaris | 1 | 16 | 0.005 | 0.082 |
| | Corophium acherusicum | 16 | 256 | 0.033 | 0.526 |
| | Helice tridens sheni | 3 | 48 | 6.530 | 104.483 |
| | Perinereis aibuhitensis | 8 | 128 | 0.840 | 13.442 |
| | Heteromastus filiformis | 7 | 112 | 0.063 | 1.013 |
Table 4. Number of training and testing samples for the Yellow River estuary dataset.

| No. | Name | Training | Testing |
|-----|------|----------|---------|
| 1 | Spartina alterniflora | 735 | 39,784 |
| 2 | Suaeda salsa | 2519 | 118,213 |
| 3 | Tamarix forest | 1069 | 31,044 |
| 4 | Tidal creek | 529 | 15,673 |
| 5 | Mudflat | 702 | 24,592 |
| | Total | 5554 | 229,306 |
Table 5. The SAM of each class according to the reference HSI.

| Class No. | 1 | 2 | 3 | 4 | 5 |
|-----------|---|---|---|---|---|
| SAM | 0.1227 | 0.0782 | 0.0801 | 0.0971 | 0.0797 |
Table 6. Ablation experiment of the proposed SSViT on the Yellow River estuary dataset.

| Method | OA (%) |
|--------|--------|
| Spatial transformer | 83.48 |
| Spectral transformer | 83.37 |
| Without external attention | 85.47 |
| Full model | 86.38 |
Table 7. Class-specific classification accuracy (%) using different methods.

| No. | SVM (HSI) | SVM (MSI) | SVM (Fused) | LBP-ELM (HSI) | LBP-ELM (MSI) | LBP-ELM (Fused) | Residual CNN (Fused) | S2FL (HSI+MSI) | Two-Branch CNN (HSI+MSI) | DFINet (HSI+MSI) | SSViT (HSI) | SSViT (MSI) | SSViT (Fused) |
|-----|-----------|-----------|-------------|---------------|---------------|-----------------|----------------------|----------------|--------------------------|------------------|-------------|-------------|---------------|
| 1 | 90.95 | 83.68 | 93.75 | 90.52 | 83.44 | 93.68 | 91.66 | 89.74 | 92.82 | 92.89 | 93.23 | 92.92 | 93.45 |
| 2 | 82.04 | 90.18 | 83.58 | 86.53 | 93.89 | 89.38 | 83.53 | 90.35 | 87.82 | 86.10 | 84.33 | 85.36 | 87.96 |
| 3 | 71.41 | 48.68 | 62.23 | 71.45 | 44.01 | 65.16 | 78.60 | 65.80 | 68.04 | 76.20 | 76.24 | 75.23 | 77.78 |
| 4 | 79.05 | 71.36 | 72.42 | 69.74 | 64.43 | 63.61 | 80.57 | 68.46 | 86.03 | 84.40 | 81.89 | 81.62 | 82.43 |
| 5 | 70.30 | 36.82 | 78.08 | 69.17 | 21.44 | 75.31 | 74.83 | 57.19 | 80.28 | 78.41 | 77.36 | 77.05 | 80.69 |
| OA (%) | 80.68 | 76.43 | 81.10 | 82.17 | 75.54 | 83.58 | 83.14 | 81.87 | 85.08 | 85.00 | 83.87 | 84.15 | 86.38 |
| AA (%) | 78.75 | 66.14 | 78.01 | 77.48 | 61.44 | 77.43 | 81.84 | 74.31 | 83.00 | 83.60 | 82.61 | 82.44 | 84.46 |
| Kappa | 0.7185 | 0.6273 | 0.7209 | 0.7341 | 0.5987 | 0.7514 | 0.7559 | 0.7213 | 0.7792 | 0.7802 | 0.7649 | 0.7683 | 0.7994 |
Table 8. Diversity index and species number of different sites.

| Site | Diversity Index | Species Number |
|------|-----------------|----------------|
| A1 / A2 / A3 | 0 / 1.406 / 1.914 | 1 / 3 / 6 |
| B1 / B2 / B3 | 0.918 / 1.568 / 1.665 | 2 / 4 / 5 |
| C1 / C2 / C3 | 1.485 / 2.419 / 2.298 | 3 / 5 / 7 |
| D1 / D2 | 1.640 / 2.256 | 4 / 6 |