

New Advances in Hyperspectral–Multispectral Image Classification and Fusion Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 8662

Special Issue Editors


Guest Editor
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
Interests: hyperspectral image processing and applications; deep learning; multisensor data fusion

Guest Editor
IMAA-CNR, Via del Fosso del Cavaliere 100, 00133 Roma, Italy
Interests: remote sensing images; atmospheric corrections; radiative transfer; multispectral and hyperspectral imagers

Special Issue Information

Dear Colleagues,

Satellite imagery, such as multispectral/hyperspectral imagery, is a powerful source of information, as it offers a range of spatial, spectral, and temporal resolutions. The development of remote sensing image classifiers has also advanced rapidly in recent years. However, certain challenges remain in multispectral/hyperspectral image classification and fusion. For example, hyperspectral remote sensors collect images in hundreds of narrow, contiguous spectral channels, whereas multispectral remote sensors collect images in relatively broad bands. In addition, hyperspectral imagers tend to have a lower spatial resolution than multispectral imagers, which usually results in a trade-off between spectral and spatial resolution in applications. There is therefore an urgent need to develop image classification and fusion methods that can alleviate these problems.

It is our pleasure to announce the launch of a new Special Issue of Remote Sensing. Its goal is to collect the latest advances in multispectral/hyperspectral image classification and fusion, driven by recent developments in remote sensing technology, along with the technical advances and innovations that arise when the two data sources are combined.

Prof. Dr. Sen Jia
Dr. Federico Santini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multispectral/hyperspectral remote sensing
  • image classification
  • information fusion
  • feature extraction
  • machine learning/deep learning
  • applications in remote sensing

Published Papers (7 papers)


Research

24 pages, 19569 KiB  
Article
Fusion of Hyperspectral and Multispectral Images with Radiance Extreme Area Compensation
by Yihao Wang, Jianyu Chen, Xuanqin Mou, Tieqiao Chen, Junyu Chen, Jia Liu, Xiangpeng Feng, Haiwei Li, Geng Zhang, Shuang Wang, Siyuan Li and Yupeng Liu
Remote Sens. 2024, 16(7), 1248; https://doi.org/10.3390/rs16071248 - 31 Mar 2024
Viewed by 530
Abstract
Although the fusion of multispectral (MS) and hyperspectral (HS) images in remote sensing has become relatively mature, and different types of fusion methods have their own characteristics in terms of fusion effect, data dependency, and computational efficiency, few studies have focused on the impact of radiance extreme areas, which are widespread in real remotely sensed scenes. To this end, this paper proposes a novel method called radiance extreme area compensation fusion (RECF). Based on the architecture of spectral unmixing fusion, our method uses the reconstruction error map to construct local smoothing constraints during unmixing and utilizes the nearest-neighbor multispectral data to achieve optimal replacement compensation, thereby eliminating the impact of overexposed and underexposed areas in hyperspectral data on the fusion result. We compared the RECF method with 11 previously published methods on three airborne hyperspectral datasets and HJ2 satellite hyperspectral data and quantitatively evaluated them using five metrics, including PSNR and SAM. On the test dataset with extreme radiance interference, the proposed RECF method achieved strong overall evaluation results; for instance, PSNR reached 47.6076 and SAM reached 0.5964 on the Xiong’an dataset. The results also show that our method achieved better visual quality on both simulated and real datasets. Full article
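PSNR and SAM, the two metrics named in the abstract, are standard fusion-quality measures; a minimal numpy sketch of both (function names and the assumption of reflectance cubes scaled to a unit peak are ours, not from the paper) is:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB over the whole cube."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def sam(reference, estimate, eps=1e-12):
    """Mean spectral angle mapper in degrees.

    Both inputs are (rows, cols, bands) cubes; the angle between the two
    spectral vectors is computed per pixel, then averaged.
    """
    dot = np.sum(reference * estimate, axis=-1)
    norms = np.linalg.norm(reference, axis=-1) * np.linalg.norm(estimate, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))

# Identical cubes give an infinite PSNR and a zero angle, so perturb slightly.
rng = np.random.default_rng(0)
ref = rng.random((8, 8, 30))
est = ref + 0.01 * rng.standard_normal(ref.shape)
print(round(sam(ref, ref), 4))  # 0.0 for identical spectra
```

Higher PSNR and lower SAM both indicate a fused cube closer to the reference.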

21 pages, 12467 KiB  
Article
AL-MRIS: An Active Learning-Based Multipath Residual Involution Siamese Network for Few-Shot Hyperspectral Image Classification
by Jinghui Yang, Jia Qin, Jinxi Qian, Anqi Li and Liguo Wang
Remote Sens. 2024, 16(6), 990; https://doi.org/10.3390/rs16060990 - 12 Mar 2024
Viewed by 522
Abstract
In hyperspectral image (HSI) classification scenarios, deep learning-based methods have achieved excellent classification performance but often rely on large-scale training datasets to ensure accuracy. In practical applications, however, acquiring labeled hyperspectral samples is time consuming, labor intensive, and costly, which leads to a scarcity of labeled samples. Such few-shot conditions limit model training and ultimately affect HSI classification performance. To address these issues, an active learning (AL)-based multipath residual involution Siamese network for few-shot HSI classification (AL-MRIS) is proposed. First, an AL-based Siamese network framework is constructed. The Siamese network, which has a relatively low demand for sample data, is adopted for classification, and the AL strategy is integrated to select more representative samples, improving the model’s discriminative ability and reducing the cost of labeling samples in practice. Then, the multipath residual involution (MRIN) module is designed for the Siamese subnetwork to obtain comprehensive features of the HSI. The involution operation is used to capture fine-grained features and effectively aggregate the contextual semantic information of the HSI through dynamic weights. The MRIN module comprehensively considers local, dynamic, and global features through multipath residual connections, which improves the representation ability of HSIs. Moreover, a cosine distance-based contrastive loss is proposed for the Siamese network. By utilizing the directional similarity of high-dimensional HSI data, the discriminability of the Siamese classification network is improved. Extensive experimental results show that the proposed AL-MRIS method achieves excellent classification performance with few-shot training samples and, compared with several state-of-the-art classification methods, obtains the highest classification accuracy. Full article
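The abstract does not give the exact form of the cosine distance-based contrastive loss; the following is a generic sketch of a contrastive loss on cosine distance, with the margin value and the squared-hinge form chosen as illustrative assumptions rather than the paper's definition:

```python
import numpy as np

def cosine_contrastive_loss(z1, z2, same_class, margin=0.5):
    """Contrastive loss on the cosine distance between two embeddings.

    d = 1 - cos(z1, z2) lies in [0, 2]; pairs from the same class are
    pulled toward d = 0, pairs from different classes are pushed past
    the margin.
    """
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    d = 1.0 - cos
    if same_class:
        return d ** 2
    return max(margin - d, 0.0) ** 2

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
print(cosine_contrastive_loss(a, a, same_class=True))   # 0.0: identical directions
print(cosine_contrastive_loss(a, b, same_class=False))  # 0.0: already past the margin
```

Because the distance depends only on direction, the loss matches the abstract's point about exploiting directional similarity in high-dimensional spectra.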

25 pages, 22709 KiB  
Article
Hybrid Convolutional Network Combining Multiscale 3D Depthwise Separable Convolution and CBAM Residual Dilated Convolution for Hyperspectral Image Classification
by Yicheng Hu, Shufang Tian and Jia Ge
Remote Sens. 2023, 15(19), 4796; https://doi.org/10.3390/rs15194796 - 01 Oct 2023
Cited by 3 | Viewed by 1163
Abstract
In recent years, convolutional neural networks (CNNs) have been increasingly leveraged for the classification of hyperspectral imagery, displaying notable advancements. To address the issues of insufficient spectral and spatial information extraction and high computational complexity in hyperspectral image classification, we introduce MDRDNet, an integrated neural network model. This novel architecture comprises two main components: a Multiscale 3D Depthwise Separable Convolutional Network and a CBAM-augmented Residual Dilated Convolutional Network. The first component employs depthwise separable convolutions in a 3D setting to efficiently capture spatial–spectral characteristics, substantially reducing the computational burden associated with 3D convolutions. The second component enhances the network by integrating the Convolutional Block Attention Module (CBAM) with dilated convolutions via residual connections, effectively counteracting model degradation. We empirically evaluated MDRDNet's performance through comprehensive experiments on three publicly available datasets: Indian Pines, Pavia University, and Salinas. Our findings indicate that the overall accuracy of MDRDNet on the three datasets reached 98.83%, 99.81%, and 99.99%, respectively, higher than that of existing models. The proposed MDRDNet can therefore fully extract joint spatial–spectral information, offering a new approach to the heavy computational cost of 3D convolutions. Full article
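The parameter saving that motivates replacing a standard 3D convolution with a depthwise separable one can be verified with a simple weight count (the channel and kernel sizes below are illustrative, not taken from the paper):

```python
def conv3d_params(c_in, c_out, k):
    """Weights in a standard 3D convolution with a k*k*k kernel (bias omitted)."""
    return c_in * c_out * k ** 3

def depthwise_separable_3d_params(c_in, c_out, k):
    """Depthwise k*k*k filter per input channel, then a 1x1x1 pointwise mix."""
    return c_in * k ** 3 + c_in * c_out

c_in, c_out, k = 32, 64, 3
full = conv3d_params(c_in, c_out, k)                 # 55296 weights
sep = depthwise_separable_3d_params(c_in, c_out, k)  # 2912 weights
print(full, sep, round(full / sep, 1))               # roughly 19x fewer
```

The factorization trades a small loss of expressiveness for an order-of-magnitude reduction in weights, which is the "computational burden" argument in the abstract.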

23 pages, 9542 KiB  
Article
A Multipath and Multiscale Siamese Network Based on Spatial-Spectral Features for Few-Shot Hyperspectral Image Classification
by Jinghui Yang, Jia Qin, Jinxi Qian, Anqi Li and Liguo Wang
Remote Sens. 2023, 15(18), 4391; https://doi.org/10.3390/rs15184391 - 06 Sep 2023
Cited by 1 | Viewed by 838
Abstract
Deep learning has been demonstrated to be a powerful nonlinear modeling method with end-to-end optimization capabilities for hyperspectral images (HSIs). However, in real classification cases, obtaining labeled samples is often time-consuming and labor-intensive, resulting in few-shot training samples. To address this issue, a multipath and multiscale Siamese network based on spatial–spectral features for few-shot hyperspectral image classification (MMSN) is proposed. To conduct classification with few-shot training samples, a Siamese network framework with low dependence on sample information is adopted. In one subnetwork, a spatial attention module (DCAM), which combines dilated convolution and cosine similarity to comprehensively consider spatial–spectral weights, is designed first. Then, we propose a residual-dense hybrid module (RDHM), which merges three paths of features: grouped convolution-based local residual features, global residual features, and global dense features. The RDHM can effectively propagate and utilize features from different layers and enhance their expressive ability. Finally, we construct a multikernel depth feature extraction module (MDFE) that performs multiscale convolutions with multiple kernels and hierarchical skip connections across feature scales to improve the network's ability to capture details. Extensive experimental evidence shows that the proposed MMSN method exhibits superior performance with few-shot training samples, and its classification results surpass those of other state-of-the-art classification methods. Full article

23 pages, 1509 KiB  
Article
Shallow-Guided Transformer for Semantic Segmentation of Hyperspectral Remote Sensing Imagery
by Yuhan Chen, Pengyuan Liu, Jiechen Zhao, Kaijian Huang and Qingyun Yan
Remote Sens. 2023, 15(13), 3366; https://doi.org/10.3390/rs15133366 - 30 Jun 2023
Cited by 5 | Viewed by 1461
Abstract
Convolutional neural networks (CNNs) have achieved great progress in the classification of surface objects with hyperspectral data, but due to the limitations of convolutional operations, CNNs cannot effectively interact with contextual information. The Transformer succeeds in solving this problem and has thus been widely used to classify hyperspectral surface objects in recent years. However, the huge computational load of the Transformer poses a challenge in hyperspectral semantic segmentation tasks. In addition, using a single Transformer discards local correlations, making it ineffective for remote sensing tasks with small datasets. Therefore, we propose a new layered Transformer architecture that combines the Transformer with a CNN: it adopts a feature dimensionality reduction module and a Transformer-style CNN module to extract shallow features and construct texture constraints, and employs the original Transformer encoder to extract deep features. Furthermore, we designed a simple decoder to process shallow spatial detail information and deep semantic features separately. Experimental results on three publicly available hyperspectral datasets show that our proposed method has significant advantages over traditional CNN- and Transformer-based models. Full article

19 pages, 7641 KiB  
Article
DCCaps-UNet: A U-Shaped Hyperspectral Semantic Segmentation Model Based on the Depthwise Separable and Conditional Convolution Capsule Network
by Siqi Wei, Yafei Liu, Mengshan Li, Haijun Huang, Xin Zheng and Lixin Guan
Remote Sens. 2023, 15(12), 3177; https://doi.org/10.3390/rs15123177 - 19 Jun 2023
Cited by 3 | Viewed by 2009
Abstract
Traditional hyperspectral image semantic segmentation algorithms cannot fully utilize spatial information or achieve efficient segmentation with limited sample data. To solve these problems, a U-shaped hyperspectral semantic segmentation model (DCCaps-UNet) based on a depthwise separable and conditional convolution capsule network is proposed in this study. The whole network is an encoding–decoding structure: the encoding part fully extracts and fuses image features, and the decoding part reconstructs images by upsampling. In the encoding part, a dilated convolutional capsule block is proposed to fully acquire spatial information and deep features while reducing the computational cost of dynamic routing using a conditional sliding window. A depthwise separable block is constructed to replace the common convolution layer in the traditional capsule network and efficiently reduce network parameters. After principal component analysis (PCA) dimension reduction and patch preprocessing, the proposed model was tested on the Indian Pines and Pavia University public hyperspectral image datasets. The segmentation results for various ground objects were analyzed and compared with those of other semantic segmentation models. The proposed model outperformed the other methods and achieved higher segmentation accuracy with the same samples: Dice coefficients reached 0.9989 and 0.9999, and overall accuracy (OA) reached 99.92% and 100%, respectively, verifying the effectiveness of the proposed model. Full article
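PCA dimension reduction followed by patch extraction, mentioned as the preprocessing step above, is a common HSI pipeline; a minimal numpy sketch (the component count and patch size are illustrative, not the paper's settings) is:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project a (rows, cols, bands) cube onto its top principal components."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    flat = flat - flat.mean(axis=0)
    # Right singular vectors of the centred data are the PCA axes.
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ vt[:n_components].T).reshape(h, w, n_components)

def extract_patches(cube, patch):
    """Cut a patch x patch window around every interior pixel."""
    h, w, b = cube.shape
    r = patch // 2
    out = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            out.append(cube[i - r:i + r + 1, j - r:j + r + 1, :])
    return np.stack(out)

rng = np.random.default_rng(1)
cube = rng.random((10, 10, 50))       # toy cube: 10x10 pixels, 50 bands
reduced = pca_reduce(cube, 4)
patches = extract_patches(reduced, 5)
print(reduced.shape, patches.shape)   # (10, 10, 4) (36, 5, 5, 4)
```

Each patch then serves as one spatial–spectral training sample for the segmentation network.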

32 pages, 13615 KiB  
Article
Hyperspectral Image Classification Based on Multiscale Hybrid Networks and Attention Mechanisms
by Haizhu Pan, Xiaoyu Zhao, Haimiao Ge, Moqi Liu and Cuiping Shi
Remote Sens. 2023, 15(11), 2720; https://doi.org/10.3390/rs15112720 - 24 May 2023
Cited by 3 | Viewed by 1375
Abstract
Hyperspectral image (HSI) classification is one of the most crucial tasks in remote sensing processing. The attention mechanism is preferable to a convolutional neural network (CNN) due to its superior ability to express information during HSI processing. Recently, numerous methods combining CNNs and attention mechanisms have been applied to HSI classification. However, it remains challenging to achieve high-accuracy classification by fully extracting effective features from HSIs when labeled samples are limited. In this paper, we design a novel HSI classification network based on multiscale hybrid networks and attention mechanisms. The network consists of three subnetworks: a spectral–spatial feature extraction network, a spatial inverted pyramid network, and a classification network, which extract spectral–spatial features, extract spatial features, and produce classification results, respectively. The multiscale fusion network and the attention mechanisms complement each other by capturing local and global features separately. In the spatial pyramid network, multiscale spaces are formed through down-sampling, which reduces redundant information while retaining important information. This structure helps the network capture spatial features at different scales and improves classification accuracy. Experimental results on various public HSI datasets demonstrate that the designed network is highly competitive with current advanced approaches under the condition of insufficient samples. Full article
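Forming multiscale spaces by repeated down-sampling, as the spatial pyramid network does, can be sketched with simple average pooling (the pooling choice and level count here are assumptions for illustration, not the paper's design):

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling over the spatial axes of an (h, w, c) feature map."""
    h, w, c = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def spatial_pyramid(x, levels):
    """Return the feature map at successively halved spatial scales."""
    pyramid = [x]
    for _ in range(levels - 1):
        pyramid.append(avg_pool2(pyramid[-1]))
    return pyramid

x = np.ones((16, 16, 8))
scales = spatial_pyramid(x, 3)
print([s.shape for s in scales])  # [(16, 16, 8), (8, 8, 8), (4, 4, 8)]
```

Each coarser level averages away fine detail, which is how down-sampling drops redundancy while keeping the dominant spatial structure.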
