
Machine Learning for Remote Sensing Image/Signal Processing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 24317

Special Issue Editors


Prof. Dr. Pedro Latorre-Carmona
Guest Editor
Department of Computer Engineering, University of Burgos, Avda Cantabria s/n, 09006 Burgos, Spain
Interests: multispectral; colour and grey scale image processing; colorimetry; vision physics; pattern recognition

Prof. Dr. Antonio J. Plaza
Guest Editor
Department of Technology of Computers and Communications, University of Extremadura, 10003 Caceres, Spain
Interests: hyperspectral imaging; parallel computing; remote sensing; geoscience; GPU

Special Issue Information

Dear Colleagues,

Machine learning techniques have been applied in remote sensing for more than 20 years. We are, however, experiencing an explosion of new capabilities and application areas in which machine learning is playing, and will continue to play, a central role. In particular, new sensors with ever-increasing capabilities, together with new computing hardware and software, are allowing us to tackle problems that were very difficult to approach just a few years ago.

This Special Issue is aimed at presenting new machine learning techniques and new application areas in remote sensing. We particularly welcome papers focused on, although not limited to, one or more of the following topics:

  • Deep learning techniques for remote sensing
  • Machine learning techniques for inference and retrieval of bio–geo–physical variables
  • Machine learning for remote sensing data classification and regression
  • Multi-temporal and multi-sensor data fusion, assimilation, and processing
  • Machine learning platforms for big data and highly demanding remote sensing applications
  • Machine learning for multispectral and hyperspectral remote sensing platforms and applications
  • Machine learning for uncertainty analysis and assessment in remote sensing
  • Machine learning for remote sensing estimation and characterization of highly variable and dynamic earth processes

We would like this Special Issue to showcase the most up-to-date machine learning approaches used to solve problems of interest to the remote sensing community.

Prof. Dr. Pedro Latorre-Carmona
Prof. Dr. Antonio J. Plaza
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • inference and retrieval
  • classification
  • data fusion
  • high performance computing
  • multispectral and hyperspectral data processing
  • uncertainty analysis and assessment
  • dynamic earth processes

Published Papers (6 papers)


Research

22 pages, 94394 KiB  
Article
Densely Residual Network with Dual Attention for Hyperspectral Reconstruction from RGB Images
by Lixia Wang, Aditya Sole and Jon Yngve Hardeberg
Remote Sens. 2022, 14(13), 3128; https://doi.org/10.3390/rs14133128 - 29 Jun 2022
Cited by 2 | Viewed by 1916
Abstract
In the last several years, deep learning has been introduced to recover a hyperspectral image (HSI) from a single RGB image and has demonstrated good performance. In particular, attention mechanisms have further strengthened discriminative features, but most of them are learned by convolutions with limited receptive fields or incur a high computational cost, which hinders the function of attention modules. Furthermore, the performance of these deep learning methods is hampered by treating multi-level features equally. To this end, in this paper, based on multiple lightweight densely residual modules, we propose a densely residual network with dual attention (DRN-DA), which utilizes advanced attention and an adaptive fusion strategy for more efficient feature correlation learning and more powerful feature extraction. Specifically, an SE layer is applied to learn channel-wise dependencies, and dual downsampling spatial attention (DDSA) is developed to capture long-range spatial contextual information. All the intermediate-layer feature maps are adaptively fused. Experimental results on four data sets from the NTIRE 2018 and NTIRE 2020 Spectral Reconstruction Challenges demonstrate the superiority of the proposed DRN-DA over state-of-the-art methods (at least −6.19% and −1.43% on the NTIRE 2018 "Clean" and "Real World" tracks, and −6.85% and −5.30% on the NTIRE 2020 "Clean" and "Real World" tracks) in terms of mean relative absolute error. Full article
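
For readers unfamiliar with the channel-attention component mentioned in this abstract, a squeeze-and-excitation (SE) layer learns channel-wise dependencies by globally pooling each feature map and re-weighting the channels with a small bottleneck network. The PyTorch sketch below is a minimal, generic SE layer; the reduction ratio and layer sizes are assumptions, and it is not the authors' DRN-DA code.

# Minimal SE (squeeze-and-excitation) channel-attention layer; sizes are illustrative.
import torch
import torch.nn as nn

class SELayer(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(                        # excitation: channel-wise gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # reweight feature maps per channel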

23 pages, 12219 KiB  
Article
Registration of Multisensor Images through a Conditional Generative Adversarial Network and a Correlation-Type Similarity Measure
by Luca Maggiolo, David Solarna, Gabriele Moser and Sebastiano Bruno Serpico
Remote Sens. 2022, 14(12), 2811; https://doi.org/10.3390/rs14122811 - 11 Jun 2022
Cited by 5 | Viewed by 1662
Abstract
The automatic registration of multisensor remote sensing images is a highly challenging task due to the inherently different physical, statistical, and textural characteristics of the input data. Information-theoretic measures are often used to favor the comparison of local intensity distributions in the images. In this paper, a novel method based on the combination of a deep learning architecture and a correlation-type area-based functional is proposed for the registration of a multisensor pair of images, including an optical image and a synthetic aperture radar (SAR) image. The method makes use of a conditional generative adversarial network (cGAN) in order to address image-to-image translation across the optical and SAR data sources. Then, once the optical and SAR data are brought to a common domain, an area-based ℓ2 similarity measure is used together with the COBYLA constrained maximization algorithm for registration purposes. While correlation-type functionals are usually ineffective in the application to multisensor registration, exploiting the image-to-image translation capabilities of cGAN architectures allows moving the complexity of the comparison to the domain adaptation step, thus enabling the use of a simple ℓ2 similarity measure, favoring high computational efficiency, and opening the possibility to process a large amount of data at runtime. Experiments with multispectral and panchromatic optical data combined with SAR images suggest the effectiveness of this strategy and the capability of the proposed method to achieve more accurate registration as compared to state-of-the-art approaches. Full article
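
The key design choice here is that the cGAN translation moves the two sensors into a common domain, after which a very simple area-based ℓ2 measure, optimized with the derivative-free COBYLA algorithm, suffices for alignment. Below is a minimal, hedged sketch of that second step using SciPy; the pure-translation motion model and the function names are illustrative assumptions, not the authors' implementation.

# Sketch: align a cGAN-translated SAR image to an optical reference by minimizing an
# L2 (sum-of-squared-differences) criterion over a bounded shift with COBYLA.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

def l2_dissimilarity(params, reference, translated_sar):
    # Minimizing the sum of squared differences is equivalent to maximizing an
    # L2-type area-based similarity between the two images.
    dx, dy = params
    warped = nd_shift(translated_sar, (dy, dx), order=1, mode='nearest')
    return float(np.sum((reference - warped) ** 2))

def register(reference, translated_sar, max_shift=20.0):
    # COBYLA is derivative-free and handles inequality constraints, used here to
    # bound the admissible shift (in pixels).
    cons = [
        {'type': 'ineq', 'fun': lambda p: max_shift - abs(p[0])},
        {'type': 'ineq', 'fun': lambda p: max_shift - abs(p[1])},
    ]
    res = minimize(l2_dissimilarity, x0=[0.0, 0.0],
                   args=(reference, translated_sar),
                   method='COBYLA', constraints=cons)
    return res.x  # estimated (dx, dy) translation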

20 pages, 7801 KiB  
Article
HybridGBN-SR: A Deep 3D/2D Genome Graph-Based Network for Hyperspectral Image Classification
by Haron C. Tinega, Enqing Chen, Long Ma, Divinah O. Nyasaka and Richard M. Mariita
Remote Sens. 2022, 14(6), 1332; https://doi.org/10.3390/rs14061332 - 09 Mar 2022
Cited by 4 | Viewed by 2136
Abstract
The successful application of deep learning approaches in remote sensing image classification requires large hyperspectral image (HSI) datasets in order to learn discriminative spectral–spatial features simultaneously. To date, the HSI datasets available for image classification are too small to effectively train deep learning methods. This study proposes a deep 3D/2D genome graph-based network (abbreviated as HybridGBN-SR) that is computationally efficient and not prone to overfitting even with extremely few training samples. At the feature extraction level, HybridGBN-SR utilizes three-dimensional (3D) and two-dimensional (2D) Genoblocks trained with very few samples while improving HSI classification accuracy. The design of a Genoblock is based on a biological genome graph. The experimental results show that our model achieves better classification accuracy than the compared state-of-the-art methods on three publicly available HSI benchmark datasets: Indian Pines (IP), University of Pavia (UP), and Salinas Scene (SA). For instance, using only 5% of the labeled data for training on IP, and 1% on UP and SA, the overall classification accuracy of the proposed HybridGBN-SR is 97.42%, 97.85%, and 99.34%, respectively, which is better than that of the compared state-of-the-art methods. Full article
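
As background, hybrid 3D/2D networks of this kind typically apply 3D convolutions to learn joint spectral–spatial features from small HSI patches and then collapse the spectral axis into channels for 2D spatial refinement. The PyTorch sketch below illustrates that generic pattern only; the Genoblock topology itself is not reproduced, and all layer sizes are assumptions.

# Generic hybrid 3D/2D convolutional feature extractor for HSI patch classification.
import torch
import torch.nn as nn

class Hybrid3D2D(nn.Module):
    def __init__(self, bands: int, n_classes: int, patch: int = 9):
        super().__init__()
        # 3D convolutions learn joint spectral-spatial features along the band axis.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
        )
        # Collapse the spectral axis into channels and refine spatially in 2D.
        self.conv2d = nn.Sequential(
            nn.Conv2d(16 * bands, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(64 * patch * patch, n_classes)

    def forward(self, x):                       # x: (batch, 1, bands, patch, patch)
        f = self.conv3d(x)
        b, c, d, h, w = f.shape
        f = self.conv2d(f.reshape(b, c * d, h, w))
        return self.head(f.flatten(1))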

23 pages, 3872 KiB  
Article
Unsupervised Generative Adversarial Network with Background Enhancement and Irredundant Pooling for Hyperspectral Anomaly Detection
by Zhongwei Li, Shunxiao Shi, Leiquan Wang, Mingming Xu and Luyao Li
Remote Sens. 2022, 14(5), 1265; https://doi.org/10.3390/rs14051265 - 05 Mar 2022
Cited by 4 | Viewed by 2089
Abstract
Lately, generative adversarial network (GAN)-based methods have drawn extensive attention and achieved promising performance in the field of hyperspectral anomaly detection (HAD), owing to the powerful data generation capability of GANs. However, without considering background spatial features, most of these methods cannot obtain a GAN with a strong background generation ability. Moreover, they fail to address the disturbance caused by redundant information in the hyperspectral image (HSI) during the anomaly detection stage. To solve these issues, an unsupervised generative adversarial network with background spatial feature enhancement and irredundant pooling (BEGAIP) is proposed for HAD. To make better use of features, the idea of joint spatial and spectral feature extraction is also applied to the proposed model. Specifically, in the spatial branch, a new background spatial feature enhancement method is proposed to obtain a data set containing relatively pure background information, which is used to train the GAN and reconstruct a more faithful background image. In the spectral branch, irredundant pooling (IP) is introduced to remove redundant information, which also enhances the background spectral features. Finally, the features obtained from the spectral and spatial branches are combined for HAD. Experimental results on several HSI data sets show that the proposed model achieves better performance than other relevant algorithms. Full article
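
The detection stage shared by most GAN-based HAD methods scores each pixel by how poorly the background-trained generator reconstructs its spectrum. The NumPy sketch below shows that scoring step only, assuming a reconstructed background cube is already available; it is not the BEGAIP architecture itself, and the quantile threshold is an illustrative choice.

# Per-pixel anomaly scoring from a background reconstruction of an HSI cube.
import numpy as np

def anomaly_map(hsi: np.ndarray, reconstructed: np.ndarray) -> np.ndarray:
    """hsi, reconstructed: (rows, cols, bands) arrays; returns per-pixel anomaly scores."""
    # Spectral reconstruction error: background pixels are modeled well, anomalies are not.
    return np.linalg.norm(hsi - reconstructed, axis=-1)

def detect(hsi, reconstructed, quantile=0.995):
    scores = anomaly_map(hsi, reconstructed)
    threshold = np.quantile(scores, quantile)   # keep the top ~0.5% as candidate anomalies
    return scores, scores > threshold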

21 pages, 8829 KiB  
Article
Improved Transformer Net for Hyperspectral Image Classification
by Yuhao Qing, Wenyi Liu, Liuyan Feng and Wanjia Gao
Remote Sens. 2021, 13(11), 2216; https://doi.org/10.3390/rs13112216 - 05 Jun 2021
Cited by 104 | Viewed by 6002
Abstract
In recent years, deep learning has been successfully applied to hyperspectral image (HSI) classification problems, with several convolutional neural network (CNN)-based models achieving an appealing classification performance. However, due to the multi-band nature and the data redundancy of hyperspectral data, CNN models underperform in such a continuous data domain. Thus, in this article, we propose an end-to-end transformer model, entitled SAT Net, that is appropriate for HSI classification and relies on the self-attention mechanism. The proposed model uses the spectral attention mechanism and the self-attention mechanism to extract the spectral and spatial features of the HSI, respectively. Initially, the original HSI data are remapped into multiple vectors containing a series of planar 2D patches after passing through the spectral attention module. On each vector, we perform a linear transformation to compress the sequence vector length. During this process, we add a position-coding vector and a learnable embedding vector so as to capture the long-range relationships along the continuous spectrum of the HSI. Then, we employ several multi-head self-attention modules to extract the image features and complete the proposed network with a residual structure to alleviate the gradient dispersion and over-fitting problems. Finally, we employ a multilayer perceptron for the HSI classification. We evaluate SAT Net on three publicly available hyperspectral datasets and compare our classification performance against five current classification methods using several metrics, i.e., overall and average classification accuracy and the Kappa coefficient. Our trials demonstrate that SAT Net attains a competitive classification performance, highlighting that a self-attention transformer network is appealing for HSI classification. Full article
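
The pipeline described in this abstract (patch flattening, linear projection, a learnable class embedding plus position coding, multi-head self-attention, and an MLP classifier) follows the general vision-transformer recipe. The PyTorch sketch below is a compact, generic version of that recipe; the dimensions, depth, and head count are assumptions, not the SAT Net configuration.

# Minimal transformer-style classifier over a sequence of flattened HSI patches.
import torch
import torch.nn as nn

class TinyHSITransformer(nn.Module):
    def __init__(self, patch_dim: int, n_patches: int, n_classes: int, dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(patch_dim, dim)                  # linear patch embedding
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))        # learnable class embedding
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # position coding
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)    # multi-head self-attention
        self.head = nn.Linear(dim, n_classes)                  # classification head

    def forward(self, patches):                 # patches: (batch, n_patches, patch_dim)
        x = self.proj(patches)
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])               # classify from the class-token output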

25 pages, 2625 KiB  
Article
Rice-Yield Prediction with Multi-Temporal Sentinel-2 Data and 3D CNN: A Case Study in Nepal
by Ruben Fernandez-Beltran, Tina Baidar, Jian Kang and Filiberto Pla
Remote Sens. 2021, 13(7), 1391; https://doi.org/10.3390/rs13071391 - 04 Apr 2021
Cited by 43 | Viewed by 8648
Abstract
Crop yield estimation is a major issue in crop monitoring, and it remains particularly challenging in developing countries due to the problem of timely and adequate data availability. Whereas traditional agricultural systems mainly rely on scarce ground-survey data, freely available multi-temporal and multi-spectral remote sensing images are excellent tools to support these vulnerable systems by accurately monitoring and estimating crop yields before harvest. In this context, we introduce the use of Sentinel-2 (S2) imagery, with its medium spatial, spectral, and temporal resolutions, to estimate rice crop yields in Nepal as a case study. Firstly, we build a new large-scale rice crop database (RicePAL) composed of multi-temporal S2 and climate/soil data from the Terai districts of Nepal. Secondly, we propose a novel 3D convolutional neural network (CNN) adapted to these intrinsic data constraints for accurate rice crop yield estimation. Thirdly, we study the effect of different temporal, climate, and soil data configurations on the performance achieved by the proposed approach and by several state-of-the-art regression and CNN-based yield estimation methods. The extensive experiments conducted in this work demonstrate the suitability of the proposed CNN-based framework for rice crop yield estimation in the developing country of Nepal using S2 data. Full article
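
A 3D CNN for multi-temporal yield regression treats the acquisition dates as an extra convolutional axis, so spectral, spatial, and temporal patterns are learned jointly before a scalar yield is regressed. The PyTorch sketch below illustrates that idea under an assumed input layout and assumed layer sizes; it is not the model proposed in the paper.

# Small 3D CNN regressor over a multi-temporal Sentinel-2 block.
import torch
import torch.nn as nn

class Yield3DCNN(nn.Module):
    def __init__(self, n_bands: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),            # pool spatially, keep the temporal axis
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # global spatio-temporal pooling
        )
        self.regressor = nn.Linear(32, 1)       # single scalar: predicted yield per tile

    def forward(self, x):                       # x: (batch, n_bands, n_dates, height, width)
        return self.regressor(self.features(x).flatten(1))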
