
Spectral-Spatial Segmentation and Classification of Remotely Sensed Hyperspectral Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 22172

Special Issue Editors


Guest Editor
Department of Biology, Chemistry, Pharmacy, Freie Universität Berlin, Berlin, Germany
Interests: pattern recognition; computer vision; remote sensing; hyperspectral image analysis; machine learning

Guest Editor
Institut National de la Recherche Scientifique (INRS), Centre Eau Terre Environnement, 490, rue de la Couronne, Québec, QC G1K 9A9, Canada
Interests: analysis of optical, hyperspectral and radar Earth observations through artificial intelligence and machine-learning approaches for urban and agro-environmental applications

Guest Editor
1. Machine Learning Group, Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf, 09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Vienna, Austria
Interests: machine and deep learning; image and signal processing; hyperspectral image analysis; multisensor data fusion

Guest Editor
CT Department, ICT Research Institute (ITRC), Tehran, Iran
Interests: machine and deep learning; image processing; hyperspectral image analysis

Guest Editor
Department of Geoscience and Remote Sensing (GRS), Delft University of Technology (TUDelft), Stevinweg 1, 2628 CN Delft, The Netherlands
Interests: quantitative remote sensing; radiative transfer models; inverse modeling

Special Issue Information

Dear Colleagues,

Hyperspectral imaging (HSI) has gained significant interest in recent decades, particularly in the field of remote sensing. Classification of HSI data plays a key role in remote-sensing applications and needs to be addressed efficiently.

The ever-growing resolution of remotely sensed data, and consequently the massive amount of acquired information, cannot be handled effectively by traditional pixel-based approaches alone. Challenges arise in particular when the spatial resolution of the images is very high and neighboring pixels are therefore highly correlated. Such data often contain unwanted details and information that can lead to ambiguities in classification, which calls for a more careful design of classification schemes.

One way to overcome this problem is to develop data analysis methods that fully exploit the different types of information present in remotely sensed HSI data. Spectral–spatial classifiers, which deal simultaneously with spectral, spatial, and/or textural features, have been developed and widely used in recent decades to cope with these challenges.

A common framework for spectral–spatial classification extracts spatial and spectral features directly from the hyperspectral data and suitably integrates them before feeding them into the classifier. An alternative is to use the spatial–spectral features to extract segments/objects before the classification step; the resulting segments/objects are then assigned by a classifier to one of the predefined classes. Quantitative evaluation of the resulting thematic class maps shows that classifying segments/objects rather than pixels often leads to higher classification accuracies and a considerable reduction in computational time.
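
To make the first framework above concrete, the following minimal sketch (in Python, under assumed data shapes and illustrative parameter choices such as the number of principal components and the filter window) stacks PCA-based spectral features with mean-filtered spatial features before pixel-wise classification; it illustrates the general idea rather than any specific published method.

```python
# Minimal sketch: PCA spectral features + mean-filtered spatial features,
# stacked into one feature vector per pixel for a pixel-wise classifier.
# Cube shape, component count, and window size are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def spectral_spatial_features(cube, n_components=20, window=7):
    """cube: (H, W, B) hyperspectral cube -> (H*W, 2*n_components) features."""
    H, W, B = cube.shape
    # Spectral part: PCA on the flattened per-pixel spectra.
    spectral = PCA(n_components=n_components).fit_transform(cube.reshape(-1, B))
    # Spatial part: mean-filter each principal-component image.
    comp_images = spectral.reshape(H, W, n_components)
    spatial = np.stack(
        [uniform_filter(comp_images[..., k], size=window) for k in range(n_components)],
        axis=-1,
    ).reshape(-1, n_components)
    return np.hstack([spectral, spatial])

# Usage (hypothetical data and labels):
#   from sklearn.svm import SVC
#   X = spectral_spatial_features(cube)                 # cube: (H, W, B)
#   clf = SVC(kernel="rbf").fit(X[train_idx], y_train)
#   class_map = clf.predict(X).reshape(H, W)
```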

This Special Issue is open to researchers working on hyperspectral data segmentation, classification, and feature extraction, including, but not limited to, the following topics:

  • Spectral–spatial classification of remote sensing data;
  • Object-based image analysis (OBIA);
  • Novel approaches for spectral–spatial feature extraction;
  • Innovative methods for integration of spectral and spatial information;
  • Spectral–spatial segmentation of hyperspectral imagery;
  • Machine learning/deep learning for spectral–spatial classification of remote sensing data.

Contributions that apply modern machine learning, and more specifically deep learning, to the development of spectral–spatial segmentation and classification approaches are particularly welcome.

Dr. Amin Zehtabian
Dr. Saeid Homayouni
Dr. Pedram Ghamisi
Dr. Fardin Mirzapour
Dr. Ali Mousivand
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remotely sensed hyperspectral images
  • Spectral–spatial classification
  • Object-based image analysis
  • Segmentation
  • Feature extraction
  • Machine and deep learning

Published Papers (9 papers)


Research


23 pages, 4511 KiB  
Article
Multi-Prior Graph Autoencoder with Ranking-Based Band Selection for Hyperspectral Anomaly Detection
by Nan Wang, Yuetian Shi, Haiwei Li, Geng Zhang, Siyuan Li and Xuebin Liu
Remote Sens. 2023, 15(18), 4430; https://doi.org/10.3390/rs15184430 - 08 Sep 2023
Cited by 1 | Viewed by 693
Abstract
Hyperspectral anomaly detection (HAD) is an important technique for identifying objects with spectral irregularity and can contribute to object-based image analysis. Recently, significant attention has been given to HAD methods based on autoencoders (AE). Nevertheless, due to a lack of prior information, the transfer of modeling capacity, and the “curse of dimensionality”, AE-based detectors still have limited performance. To address these drawbacks, we propose a Multi-Prior Graph Autoencoder (MPGAE) with ranking-based band selection for HAD. It comprises three main components: the ranking-based band selection component, the adaptive salient weight component, and the graph autoencoder. First, the ranking-based band selection component removes redundant spectral channels by ranking the bands with a piecewise-smoothness criterion. Then, the adaptive salient weight component adjusts the reconstruction ability of the AE based on the salient prior by calculating spectral-spatial features of the local context and the multivariate normal distribution of the background. Finally, to preserve the geometric structure in the latent space, the graph autoencoder detects anomalies by obtaining reconstruction errors with a superpixel-segmentation-based graph regularization. In particular, the loss function utilizes the ℓ2,1-norm and the adaptive salient weight to enhance the capacity for modeling anomaly patterns. Experimental results demonstrate that the proposed MPGAE effectively outperforms other state-of-the-art HAD detectors.
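
As a rough illustration of the reconstruction-error principle that AE-based HAD methods rely on, the sketch below scores each pixel by how poorly a low-dimensional model reconstructs its spectrum. PCA stands in for the graph autoencoder and a simple variance ranking stands in for the ranking-based band selection; none of the parameter values are taken from the paper.

```python
# Minimal sketch of reconstruction-error-based anomaly scoring. PCA stands in
# for the graph autoencoder and variance ranking stands in for the paper's
# ranking-based band selection; all values are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def anomaly_scores(cube, keep_bands=50, n_components=10):
    """cube: (H, W, B) hyperspectral cube -> (H, W) anomaly score map."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(float)
    # "Band selection": keep the highest-variance bands.
    keep = np.argsort(X.var(axis=0))[::-1][:keep_bands]
    Xs = X[:, keep]
    # Low-rank reconstruction: background reconstructs well, anomalies do not.
    pca = PCA(n_components=n_components).fit(Xs)
    recon = pca.inverse_transform(pca.transform(Xs))
    return np.linalg.norm(Xs - recon, axis=1).reshape(H, W)
```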

26 pages, 9082 KiB  
Article
Building Change Detection with Deep Learning by Fusing Spectral and Texture Features of Multisource Remote Sensing Images: A GF-1 and Sentinel 2B Data Case
by Junfu Fan, Mengzhen Zhang, Jiahao Chen, Jiwei Zuo, Zongwen Shi and Min Ji
Remote Sens. 2023, 15(9), 2351; https://doi.org/10.3390/rs15092351 - 29 Apr 2023
Viewed by 1772
Abstract
Building change detection is an important task in the remote sensing field, and the powerful feature extraction ability of deep neural network models offers strong advantages for this task. However, the datasets used for such studies are mostly three-band high-resolution remote sensing images from a single data source, and their limited spectral features restrict the development of building change detection from multisource remote sensing images. To investigate the influence of spectral and texture features on deep-learning-based building change detection, a multisource building change detection dataset (MS-HS BCD dataset) is produced in this paper using GF-1 high-resolution remote sensing images and Sentinel-2B multispectral remote sensing images. According to the different resolutions of the Sentinel-2B bands, eight multisource spectral data combinations are designed, and six advanced network models are selected for the experiments. The results show that, after adding multisource spectral and texture feature data, the detection performance of all six networks improves to different degrees. Taking the MSF-Net network as an example, the F1-score and IoU improved by 0.67% and 1.09%, respectively, compared with the high-resolution images alone, and by 7.57% and 6.21% compared with the multispectral images alone.
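
The sketch below illustrates, in simplified form, how a texture statistic can be stacked with spectral bands to form the multichannel input of a change-detection network. It uses skimage's GLCM utilities (graycomatrix/graycoprops, spelled greycomatrix/greycoprops in older releases); the scene-level contrast statistic and the band layout are illustrative assumptions, not the paper's feature design.

```python
# Minimal sketch: derive a GLCM texture statistic and append it to the spectral
# stack as an extra input channel for a change-detection network. The
# scene-level contrast value and the band layout are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # "greyco*" in old releases

def add_contrast_channel(bands):
    """bands: (H, W, C) spectral stack -> (H, W, C + 1) with a texture channel."""
    gray = bands.mean(axis=-1)
    gray = np.uint8(255 * (gray - gray.min()) / (np.ptp(gray) + 1e-8))
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    # Broadcast the statistic over the scene; a real pipeline would compute it
    # per window or per patch rather than once per image.
    texture = np.full(gray.shape, contrast, dtype=np.float32)
    return np.concatenate([bands, texture[..., None]], axis=-1)
```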

21 pages, 20168 KiB  
Article
Hyper-LGNet: Coupling Local and Global Features for Hyperspectral Image Classification
by Tianxiang Zhang, Wenxuan Wang, Jing Wang, Yuanxiu Cai, Zhifang Yang and Jiangyun Li
Remote Sens. 2022, 14(20), 5251; https://doi.org/10.3390/rs14205251 - 20 Oct 2022
Cited by 7 | Viewed by 1543
Abstract
Hyperspectral sensors capture imagery with rich spatial/spectral information and enable applications for high-level Earth observation missions, such as accurate land cover mapping and target/object detection. Currently, convolutional neural networks (CNNs) are good at coping with hyperspectral image processing tasks because of the strong spatial and spectral feature extraction ability brought by their hierarchical structures, but the convolution operation in CNNs is limited to local feature extraction in both dimensions. Meanwhile, the introduction of the Transformer structure has provided an opportunity to capture long-distance dependencies between tokens from a global perspective; however, Transformer-based methods have a restricted ability to extract local information because they lack the inductive bias that CNNs have. To make full use of the advantages of these two methods for hyperspectral image processing, a dual-flow architecture named Hyper-LGNet, which couples local and global features, is proposed by integrating CNN and Transformer branches to deal with HSI spatial-spectral information. In particular, a spatial-spectral feature fusion module (SSFFM) is designed to maximally integrate spectral and spatial information. Three mainstream hyperspectral datasets (Indian Pines, Pavia University and Houston 2013) are utilized to evaluate the proposed method's performance. Comparative results show that the proposed Hyper-LGNet achieves state-of-the-art performance in comparison with nine other approaches in terms of overall accuracy (OA), average accuracy (AA) and the kappa index. Consequently, it is anticipated that, by coupling CNN and Transformer structures, this study can provide novel insights into hyperspectral image analysis.
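
A minimal dual-branch sketch of this kind of CNN-plus-Transformer coupling is given below: a 2-D CNN branch extracts local features from a hyperspectral patch, a Transformer encoder branch treats each pixel's spectrum as a token, and the two feature vectors are concatenated for classification. Layer sizes and the plain concatenation fusion are assumptions for illustration and do not reproduce Hyper-LGNet or its SSFFM.

```python
# Minimal dual-branch sketch: a 2-D CNN branch for local features and a
# Transformer branch over per-pixel spectral tokens, fused by concatenation.
# All layer sizes and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class DualBranchHSI(nn.Module):
    def __init__(self, n_bands, n_classes, d_model=64):
        super().__init__()
        # Local branch: plain 2-D convolutions over the spectral channels.
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Global branch: each pixel's spectrum becomes one token.
        self.embed = nn.Linear(n_bands, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           dim_feedforward=128, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(64 + d_model, n_classes)

    def forward(self, x):                       # x: (B, bands, H, W) patch
        local = self.cnn(x).flatten(1)          # (B, 64)
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, bands)
        glob = self.transformer(self.embed(tokens)).mean(dim=1)  # (B, d_model)
        return self.head(torch.cat([local, glob], dim=1))

# logits = DualBranchHSI(n_bands=103, n_classes=9)(torch.randn(2, 103, 9, 9))
```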

22 pages, 3043 KiB  
Article
A 3-Stage Spectral-Spatial Method for Hyperspectral Image Classification
by Raymond H. Chan and Ruoning Li
Remote Sens. 2022, 14(16), 3998; https://doi.org/10.3390/rs14163998 - 17 Aug 2022
Cited by 3 | Viewed by 1784
Abstract
Hyperspectral images often have hundreds of spectral bands of different wavelengths captured by aircraft or satellites that record land coverage. Identifying detailed classes of pixels becomes feasible due to the enhancement in spectral and spatial resolution of hyperspectral images. In this work, we propose a novel framework that utilizes both spatial and spectral information for classifying pixels in hyperspectral images. The method consists of three stages. In the first, pre-processing stage, the Nested Sliding Window algorithm is used to reconstruct the original data by enhancing the consistency of neighboring pixels, and then Principal Component Analysis is used to reduce the dimension of the data. In the second stage, Support Vector Machines are trained to estimate the pixel-wise probability map of each class using the spectral information from the images. Finally, a smoothed total variation model is applied to ensure spatial connectivity in the classification map by smoothing the class probability tensor. We demonstrate the superiority of our method against three state-of-the-art algorithms on six benchmark hyperspectral datasets with 10 to 50 training labels per class. The results show that our method gives the best overall accuracy even with a very small set of labeled pixels. In particular, the gain in accuracy with respect to the other state-of-the-art algorithms increases as the number of labeled pixels decreases, so our method is particularly well suited to problems with a small training set. Hence, it is of great practical significance, since expert annotations are often expensive and difficult to collect.
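
The following sketch mirrors the three-stage structure in simplified form: dimensionality reduction, SVM class-probability estimation, and spatial smoothing of the probability maps. A Gaussian filter is used in place of the smoothed total-variation model and the nested-sliding-window pre-processing is omitted; component counts and the smoothing strength are illustrative assumptions.

```python
# Minimal sketch of the three-stage idea: dimensionality reduction, SVM class
# probabilities, then spatial smoothing of the probability maps. A Gaussian
# filter stands in for the smoothed total-variation model, and the nested
# sliding window is omitted; parameters are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def three_stage_classify(cube, train_idx, train_labels,
                         n_components=30, sigma=1.5):
    H, W, B = cube.shape
    X = PCA(n_components=n_components).fit_transform(cube.reshape(-1, B))
    svm = SVC(kernel="rbf", probability=True).fit(X[train_idx], train_labels)
    proba = svm.predict_proba(X).reshape(H, W, -1)          # (H, W, n_classes)
    smoothed = np.stack([gaussian_filter(proba[..., c], sigma)
                         for c in range(proba.shape[-1])], axis=-1)
    return smoothed.argmax(axis=-1)   # (H, W) map of indices into svm.classes_
```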

24 pages, 7125 KiB  
Article
Fusion of Multidimensional CNN and Handcrafted Features for Small-Sample Hyperspectral Image Classification
by Haojin Tang, Yanshan Li, Zhiquan Huang, Li Zhang and Weixin Xie
Remote Sens. 2022, 14(15), 3796; https://doi.org/10.3390/rs14153796 - 06 Aug 2022
Cited by 7 | Viewed by 1861
Abstract
Hyperspectral image (HSI) classification has attracted widespread attention in recent years. However, due to the complexity of the HSI acquisition environment, it is difficult to obtain a large number of labeled HSI samples. Therefore, how to effectively extract spatial–spectral features from small-scale training samples is a crucial issue in HSI classification. In this paper, a novel fusion framework for small-sample HSI classification is proposed to fully combine the advantages of multidimensional CNNs and handcrafted features. Firstly, a 3D fuzzy histogram of oriented gradients (3D-FHOG) descriptor is proposed to fully extract the handcrafted spatial–spectral features of HSI pixels, which is designed to be more robust to local spatial–spectral feature uncertainty. Secondly, a multidimensional Siamese network (MDSN), updated by minimizing both a contrastive loss and a classification loss, is designed to effectively exploit CNN-based spatial–spectral features from multiple dimensions. Finally, the proposed MDSN combined with 3D-FHOG is utilized for small-sample HSI classification to verify the effectiveness of the proposed fusion framework. The experimental results on three public data sets indicate that the proposed MDSN combined with 3D-FHOG is significantly better than representative handcrafted-feature-based and CNN-based methods, which in turn demonstrates the superiority of the proposed fusion framework.
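
As a simplified illustration of training a Siamese network with both a contrastive and a classification objective, the sketch below computes the combined loss for a pair of patches passed through a shared encoder. The encoder, margin, and loss weighting are assumptions, and the 3D-FHOG descriptor is not reproduced here.

```python
# Minimal sketch of one training step that jointly minimizes a contrastive loss
# and a classification loss with a shared encoder, in the spirit of a Siamese
# set-up; encoder, margin, and loss weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def siamese_step(encoder, classifier, x1, x2, same, y1, margin=1.0, alpha=0.5):
    """x1, x2: patch batches; same: 1.0 if a pair shares a class, else 0.0;
    y1: class labels of x1."""
    z1, z2 = encoder(x1), encoder(x2)
    d = F.pairwise_distance(z1, z2)
    contrastive = (same * d.pow(2) +
                   (1.0 - same) * F.relu(margin - d).pow(2)).mean()
    classification = F.cross_entropy(classifier(z1), y1)
    return classification + alpha * contrastive
```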

22 pages, 45165 KiB  
Article
A Spatial–Spectral Combination Method for Hyperspectral Band Selection
by Xizhen Han, Zhengang Jiang, Yuanyuan Liu, Jian Zhao, Qiang Sun and Yingzhi Li
Remote Sens. 2022, 14(13), 3217; https://doi.org/10.3390/rs14133217 - 04 Jul 2022
Cited by 4 | Viewed by 1825
Abstract
Hyperspectral images are characterized by hundreds of spectral bands and rich information, but a large amount of information redundancy exists among adjacent bands. In this study, a spatial–spectral combination method for hyperspectral band selection (SSCBS) is proposed to reduce this redundancy. First, the hyperspectral image is automatically divided into subspaces. Seven algorithms of four types are executed and compared, and the means algorithm proves the most suitable for subspace division of the input hyperspectral image while also being the fastest to compute. Then, for each subspace, the spatial–spectral combination method is adopted to select the best band, i.e., the band with the maximum information and the most prominent characteristics among the adjacent bands. Euclidean distance and spectral angle are used to measure the intraclass correlation and the interclass spectral specificity, respectively. Weight coefficients quantifying the intrinsic spatial–spectral relationship of pixels are constructed, and the optimal bands are then selected by combining the weight coefficients with the information entropy. Moreover, an automatic method is proposed to determine an appropriate number of selected bands, an aspect not considered in existing research. The experimental results show that, compared with competing methods, the SSCBS approach achieves the highest classification accuracy on the three benchmark datasets while requiring less computation time, demonstrating that the proposed SSCBS achieves satisfactory performance against state-of-the-art algorithms.
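
The sketch below illustrates only the entropy-based part of such a band-selection scheme: the spectrum is split into contiguous subspaces and the band with maximum information entropy is kept from each. The uniform subspace split is an assumption, and the spatial-spectral weight coefficients (Euclidean distance and spectral angle) of SSCBS are not modelled.

```python
# Minimal sketch: split the spectrum into contiguous subspaces and keep the
# band with maximum information entropy from each. The uniform split is an
# illustrative assumption; SSCBS's spatial-spectral weights are not modelled.
import numpy as np

def band_entropy(band, bins=256):
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_bands(cube, n_subspaces=20):
    """cube: (H, W, B) -> list of one max-entropy band index per subspace."""
    B = cube.shape[-1]
    assert n_subspaces <= B, "need at least one band per subspace"
    edges = np.linspace(0, B, n_subspaces + 1, dtype=int)
    selected = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        entropies = [band_entropy(cube[..., b]) for b in range(lo, hi)]
        selected.append(lo + int(np.argmax(entropies)))
    return selected
```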

26 pages, 12856 KiB  
Article
Weakly Supervised Classification of Hyperspectral Image Based on Complementary Learning
by Lingbo Huang, Yushi Chen and Xin He
Remote Sens. 2021, 13(24), 5009; https://doi.org/10.3390/rs13245009 - 09 Dec 2021
Cited by 6 | Viewed by 2646
Abstract
In recent years, supervised learning-based methods have achieved excellent performance for hyperspectral image (HSI) classification. However, the collection of labeled training samples is not only costly but also time-consuming, which usually leads to weak supervision, including inaccurate supervision, where mislabeled samples exist, and incomplete supervision, where unlabeled samples exist. Focusing on inaccurate and incomplete supervision, this paper investigates the weakly supervised classification of HSI. For inaccurate supervision, complementary learning (CL) is first introduced for HSI classification; then, a new method based on selective CL and a convolutional neural network (SeCL-CNN) is proposed for classification with noisy labels. For incomplete supervision, a data-augmentation-based method that combines mixup and Pseudo-Label (Mix-PL) is proposed, and a classification method that combines Mix-PL and CL (Mix-PL-CL) is then designed to achieve better semi-supervised classification of HSI. The proposed weakly supervised methods are evaluated on three widely used hyperspectral datasets (i.e., the Indian Pines, Houston, and Salinas datasets). The obtained results reveal that the proposed methods provide competitive results compared with state-of-the-art methods. For inaccurate supervision, the proposed SeCL-CNN outperforms the state-of-the-art method (i.e., SSDP-CNN) by 0.92%, 1.84%, and 1.75% in terms of OA on the three datasets when the noise ratio is 30%. For incomplete supervision, the proposed Mix-PL-CL outperforms the state-of-the-art method (i.e., AROC-DP) by 1.03%, 0.70%, and 0.82% in terms of OA on the three datasets with 25 training samples per class.
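
As a compact illustration of the complementary-learning idea for noisy labels, the sketch below penalizes the predicted probability of a class the sample is known not to belong to, rather than asserting its (possibly wrong) positive label. How complementary labels are drawn and how this mixes with ordinary cross-entropy are assumptions, not the SeCL-CNN procedure.

```python
# Minimal sketch of a complementary-label loss: it penalizes the predicted
# probability of a class the sample is known NOT to belong to, which is gentler
# than cross-entropy when the positive labels may be noisy.
import torch
import torch.nn.functional as F

def complementary_loss(logits, comp_labels, eps=1e-7):
    """logits: (B, n_classes); comp_labels: (B,) classes the samples are NOT in."""
    p = F.softmax(logits, dim=1)
    p_comp = p.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + eps).mean()

# Usage: draw a random complementary label different from the (possibly noisy)
# given label for each sample, then combine this term with ordinary
# cross-entropy on samples believed to be clean.
```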

Other


14 pages, 3121 KiB  
Technical Note
Potential Assessment of PRISMA Hyperspectral Imagery for Remote Sensing Applications
by Riyaaz Uddien Shaik, Shoba Periasamy and Weiping Zeng
Remote Sens. 2023, 15(5), 1378; https://doi.org/10.3390/rs15051378 - 28 Feb 2023
Cited by 11 | Viewed by 4601
Abstract
Hyperspectral imagery plays a vital role in precision agriculture, forestry, environmental, and geological applications. Over the past decade, extensive research has been carried out in the field of hyperspectral remote sensing. First introduced by the Italian Space Agency (ASI) in 2019, space-borne PRISMA hyperspectral imagery (PHSI) is taking the hyperspectral remote sensing research community into the next era due to its unprecedented spectral resolution of ≤12 nm. Given these abundant free data and their high spatial resolution, it is crucial to provide remote sensing researchers with information about the critical attributes of PRISMA imagery, which make it a highly viable solution for various land and water applications. Hence, in the present study, a SWOT analysis was performed for PHSI using recent case studies in order to exploit its potential for different remote sensing applications, such as snow, soil, water, natural gas, and vegetation. From this analysis, it was found that the reflectance spectra of PHSI, with their comprehensive coverage, have the greatest potential for extracting vegetation biophysical parameters compared with other applications. Although the possible use of these data was demonstrated in a few other applications, such as the identification of methane gases and soil mineral mapping, the data may not be suitable for continuous monitoring due to their limited acquisition, long revisit times, noisy bands, atmospheric interference, and computationally heavy processing, particularly when executing machine learning models. The potential applications of PHSI include large-scale and efficient mapping, technology transfer, and fusion with other remote sensing data, whereas the lifetime of the satellite and the need for interdisciplinary personnel pose challenges. Furthermore, some strategies to overcome these weaknesses and threats are described in our conclusions.

17 pages, 4055 KiB  
Technical Note
Multiscale Information Fusion for Hyperspectral Image Classification Based on Hybrid 2D-3D CNN
by Hang Gong, Qiuxia Li, Chunlai Li, Haishan Dai, Zhiping He, Wenjing Wang, Haoyang Li, Feng Han, Abudusalamu Tuniyazi and Tingkui Mu
Remote Sens. 2021, 13(12), 2268; https://doi.org/10.3390/rs13122268 - 09 Jun 2021
Cited by 38 | Viewed by 3265
Abstract
Hyperspectral images are widely used for classification due to their rich spectral information along with spatial information. To handle the high dimensionality and high nonlinearity of hyperspectral images, deep learning methods based on convolutional neural networks (CNNs) are widely used in hyperspectral classification applications. However, most CNN structures are stacked vertically and use a single size of convolutional kernel or pooling layer, which cannot fully mine the multiscale information in hyperspectral images. When such networks meet the practical challenge of a limited labeled hyperspectral dataset, i.e., the “small sample problem”, their classification accuracy and generalization ability are limited. In this paper, to tackle the small sample problem, we apply semantic segmentation to pixel-level hyperspectral classification, given the comparability of the two tasks. A lightweight, multiscale squeeze-and-excitation pyramid pooling network (MSPN) is proposed. It consists of a multiscale 3D CNN module, a squeeze-and-excitation module, and a pyramid pooling module with 2D CNNs. Such a hybrid 2D-3D-CNN MSPN framework can learn and fuse deeper hierarchical spatial–spectral features with fewer training samples. The proposed MSPN was tested on three publicly available hyperspectral classification datasets: Indian Pines, Salinas, and Pavia University. Using 5%, 0.5%, and 0.5% of the training samples of the three datasets, respectively, the classification accuracies of the MSPN were 96.09%, 97%, and 96.56%. In addition, we also selected the latest dataset with higher spatial resolution, named WHU-Hi-LongKou, as a challenge. Using only 0.1% of the training samples, we achieved a 97.31% classification accuracy, which is far superior to state-of-the-art hyperspectral classification methods.
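
A minimal hybrid 2D-3D sketch in the spirit of the description above is given below: a 3-D convolution mixes neighbouring spectral bands, the result is reshaped into 2-D feature maps, recalibrated by a squeeze-and-excitation block, and classified. The pyramid pooling module and all layer sizes of the actual MSPN are not reproduced; every dimension here is an illustrative assumption.

```python
# Minimal hybrid 2D-3D sketch: a 3-D convolution mixes neighbouring bands, the
# output is reshaped to 2-D feature maps, recalibrated by squeeze-and-excitation,
# and classified. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))              # channel attention weights
        return x * w.unsqueeze(-1).unsqueeze(-1)

class Hybrid3D2D(nn.Module):
    def __init__(self, n_bands, n_classes):
        super().__init__()
        self.conv3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1))
        self.conv2d = nn.Conv2d(8 * n_bands, 64, kernel_size=3, padding=1)
        self.se = SEBlock(64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, n_classes))

    def forward(self, x):                            # x: (B, bands, H, W) patch
        b, bands, h, w = x.shape
        f = torch.relu(self.conv3d(x.unsqueeze(1)))  # (B, 8, bands, H, W)
        f = f.reshape(b, 8 * bands, h, w)            # fold spectra into channels
        f = self.se(torch.relu(self.conv2d(f)))
        return self.head(f)

# logits = Hybrid3D2D(n_bands=30, n_classes=16)(torch.randn(2, 30, 9, 9))
```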