

Advanced Machine Learning Approaches for Analysis of Remote Sensing Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 30 June 2024

Special Issue Editors

Guest Editor
Dr. Abdul Bais
Electronic Systems Engineering, Faculty of Engineering and Applied Science, University of Regina, Regina, SK S4S 0A2, Canada
Interests: image analysis; multimodal image fusion; computer vision; deep learning

Guest Editor
Dr. Keshav D. Singh
Agriculture and Agri-Food Canada (AAFC), Lethbridge Research Centre, 5403 1st Ave S., Lethbridge, AB, Canada
Interests: remote sensing; UAV imaging; plant phenomics; precision agriculture; crop mapping and big-data analytics

Guest Editor
Dr. Sajid Saleem
Department of Electrical Engineering, National University of Modern Languages, Islamabad, Pakistan
Interests: image analysis (multi-modal and multi-spectral); image matching and registration; feature points; semantic segmentation; deep learning

Special Issue Information

Dear Colleagues,

The quality (spatiotemporal resolution) and quantity of remote sensing data have increased many-fold in recent years. In parallel, machine learning and image processing methods for big data analytics have also improved drastically. These two developments are diversifying remote sensing applications in fields such as environmental science, agriculture, geoscience, and civil engineering. Mapping of non-linear relationships, object detection, and image segmentation are a few of the essential machine and deep learning tools that offer huge potential in remote sensing applications. In combination with traditional remote sensing, these advanced machine learning methods need to be adapted for multi-source data fusion, computer vision, and predictive analytics to further remote sensing image analysis.

The scale and complexity of machine learning approaches and the availability of multi-source remote sensing data pose significant challenges in handling big data and developing high-performance computational strategies for remote sensing applications. This calls for machine learning techniques that can handle big data, and for methods that fuse multi-source big data for improved performance in object detection, segmentation, classification, and other remote sensing applications.

This Special Issue presents the latest advances in machine learning algorithms, image processing techniques, and big data integration to improve AI-based remote sensing applications. We invite authors to submit all types of manuscripts, including original research, research concepts, communications, and reviews, mainly on (but not limited to) the following topics:

  • Imagery Data Analysis;
  • Remote Sensing;
  • Machine Learning;
  • Deep Learning;
  • Computer Vision;
  • Exploiting Big Data;
  • HPC and Predictive Analytics;
  • Multi-Source/Sensor Data Fusion;
  • Object Detection and Recognition;
  • Image Segmentation.

Dr. Abdul Bais
Dr. Keshav D. Singh
Dr. Sajid Saleem
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • imagery data analysis
  • remote sensing
  • machine learning
  • deep learning
  • computer vision
  • big data and predictive analytics
  • multi-source/sensor data fusion

Published Papers (8 papers)


Research

21 pages, 10230 KiB  
Article
A Super-Resolution Reconstruction Model for Remote Sensing Image Based on Generative Adversarial Networks
by Wenyi Hu, Lei Ju, Yujia Du and Yuxia Li
Remote Sens. 2024, 16(8), 1460; https://doi.org/10.3390/rs16081460 - 20 Apr 2024
Abstract
Reconstruction of remote sensing images using super-resolution is currently a prominent topic of study. Remote sensing data have a complex spatial distribution; compared with natural pictures, remote sensing pictures often contain subtler and more complicated information. Most super-resolution reconstruction algorithms cannot restore all the information contained in remote sensing images: the content of some areas in the reconstructed images may be too smooth, and some areas may even show color changes, resulting in lower-quality reconstructions. In response to these problems in current super-resolution reconstruction algorithms, this article proposes the SRGAN-MSAM-DRC model (SRGAN model with multi-scale attention mechanism and dense residual connection). The model is rooted in generative adversarial networks and incorporates multi-scale attention mechanisms and dense residual connections into the generator; residual blocks are also incorporated into the discriminator. We evaluate this model on real-world remote sensing image datasets, and the results indicate that the SRGAN-MSAM-DRC model improves all three evaluation metrics for super-resolution reconstructed images. Compared to the basic SRGAN model, the SSIM (structural similarity), PSNR (peak signal-to-noise ratio), and IE (image entropy) increase by 5.0%, 4.0%, and 4.1%, respectively. These results show that the quality of remote sensing images reconstructed by the SRGAN-MSAM-DRC model is better than that of the basic SRGAN model, and they verify that the model has good applicability and performance in super-resolution reconstruction of remote sensing images.
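For readers who want to reproduce the reported comparison, below is a minimal sketch of how the three evaluation metrics (SSIM, PSNR, and image entropy) can be computed for 8-bit grayscale images; the function names are illustrative assumptions, not taken from the paper's code.

```python
# Sketch of the three evaluation metrics cited in the abstract, assuming
# 8-bit grayscale inputs. Uses scikit-image for SSIM/PSNR and NumPy for
# a histogram-based Shannon entropy.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of the intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def evaluate_sr(reference: np.ndarray, reconstructed: np.ndarray) -> dict:
    """Compare a super-resolved image against its high-resolution reference."""
    return {
        "ssim": structural_similarity(reference, reconstructed, data_range=255),
        "psnr": peak_signal_noise_ratio(reference, reconstructed, data_range=255),
        "ie": image_entropy(reconstructed),
    }
```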

21 pages, 1350 KiB  
Article
S3L: Spectrum Transformer for Self-Supervised Learning in Hyperspectral Image Classification
by Hufeng Guo and Wenyi Liu
Remote Sens. 2024, 16(6), 970; https://doi.org/10.3390/rs16060970 - 10 Mar 2024
Abstract
In the realm of Earth observation and remote sensing data analysis, the advancement of hyperspectral imaging (HSI) classification technology is of paramount importance. Nevertheless, the intricate nature of hyperspectral data, coupled with the scarcity of labeled data, presents significant challenges in this domain. To mitigate these issues, we introduce a self-supervised learning algorithm predicated on a spectral transformer for HSI classification under conditions of limited labeled data, with the objective of enhancing the efficacy of HSI classification. The S3L algorithm operates in two distinct phases: pretraining and fine-tuning. During the pretraining phase, the algorithm learns the spatial representation of HSI from unlabeled data, utilizing a masking mechanism and a spectral transformer, thereby augmenting the sequence dependence of spectral features. Subsequently, in the fine-tuning phase, labeled data are employed to refine the pretrained weights, thereby improving the precision of HSI classification. Within the comprehensive encoder–decoder framework, we propose a novel spectral transformer module specifically engineered to synergize spatial feature extraction with spectral domain analysis. This module adeptly navigates the complex interplay among various spectral bands, capturing both global and sequential spectral dependencies. Uniquely, it incorporates a gated recurrent unit (GRU) layer within the encoder to enhance its ability to process spectral sequences. Our experimental evaluations across several public datasets reveal that the proposed method, distinguished by its spectral transformer, achieves superior classification performance, particularly in scenarios with limited labeled samples, outperforming existing state-of-the-art approaches.
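As a rough illustration of the encoder design described above (a transformer over the spectral band sequence, with a GRU layer to strengthen sequential spectral modeling), here is a hedged PyTorch sketch; all layer sizes, names, and the pooling choice are assumptions, not the authors' implementation.

```python
# Minimal sketch: treat a pixel's spectral bands as a token sequence,
# encode with a transformer for global band interactions, then pass
# through a GRU for sequence-order refinement.
import torch
import torch.nn as nn

class SpectralEncoder(nn.Module):
    def __init__(self, n_bands: int = 200, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # one token per spectral band
        self.pos = nn.Parameter(torch.zeros(1, n_bands, d_model))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.gru = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, spectra: torch.Tensor) -> torch.Tensor:
        # spectra: (batch, n_bands) reflectance values for one pixel
        x = self.embed(spectra.unsqueeze(-1)) + self.pos  # (batch, n_bands, d_model)
        x = self.transformer(x)   # global spectral dependencies
        x, _ = self.gru(x)        # sequential spectral dependencies
        return x.mean(dim=1)      # pooled spectral feature
```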

26 pages, 2982 KiB  
Article
FusionHeightNet: A Multi-Level Cross-Fusion Method from Multi-Source Remote Sensing Images for Urban Building Height Estimation
by Chao Ma, Yueting Zhang, Jiayi Guo, Guangyao Zhou and Xiurui Geng
Remote Sens. 2024, 16(6), 958; https://doi.org/10.3390/rs16060958 - 08 Mar 2024
Abstract
Extracting buildings in urban scenes from remote sensing images is crucial for the construction of digital cities, urban monitoring, urban planning, and autonomous driving. Traditional methods generally rely on shadow detection or stereo matching from multi-view high-resolution remote sensing images, which is cost-intensive. Recently, machine learning has provided solutions for the estimation of building heights from remote sensing images, but challenges remain due to the limited observation angles and image quality. The inherent lack of information in a single modality greatly limits the extraction precision. This article proposes an advanced method using multi-source remote sensing images for urban building height estimation, which is characterized by multi-level cross-fusion, multi-task joint learning of footprint extraction and height estimation, and the use of semantic information to refine the height estimation results. The complementary and effective features of synthetic aperture radar (SAR) and electro-optical (EO) images are transferred through multi-level cross-fusion. We use the semantic information of the footprint extraction branch to refine the height estimation results, enhancing the height results from coarse to fine. Finally, we evaluate our model on the SpaceNet 6 dataset and achieve 0.3849 and 0.7231 in the height estimation metric δ1 and the footprint extraction metric Dice, respectively, which indicates effective improvements compared to other methods.
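For context, the two reported metrics can be sketched as follows; the 1.25 ratio threshold for δ1 is the conventional choice in height/depth estimation and is an assumption here, as the abstract does not state it.

```python
# Sketch of the delta1 threshold accuracy (fraction of pixels whose
# predicted height is within a factor of 1.25 of the ground truth) and
# the Dice coefficient for binary footprint masks.
import numpy as np

def delta1(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> float:
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    return float(np.mean(ratio < 1.25))

def dice(pred_mask: np.ndarray, gt_mask: np.ndarray, eps: float = 1e-6) -> float:
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return float(2 * inter / (pred_mask.sum() + gt_mask.sum() + eps))
```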

12 pages, 12887 KiB  
Communication
A One-Class Classifier for the Detection of GAN Manipulated Multi-Spectral Satellite Images
by Lydia Abady, Giovanna Maria Dimitri and Mauro Barni
Remote Sens. 2024, 16(5), 781; https://doi.org/10.3390/rs16050781 - 24 Feb 2024
Abstract
Current image generative models have achieved remarkably realistic image quality, offering numerous academic and industrial applications. However, to ensure these models are used for benign purposes, it is essential to develop tools that reliably detect whether an image has been synthetically generated. Consequently, several detectors with excellent performance in computer vision applications have been developed. However, these detectors cannot be directly applied as they are to multi-spectral satellite images, necessitating the training of new models. While two-class classifiers generally achieve high detection accuracies, they struggle to generalize to image domains and generative architectures different from those encountered during training. In this paper, we propose a one-class classifier based on Vector Quantized Variational Autoencoder 2 (VQ-VAE 2) features to overcome the limitations of two-class classifiers. We start by highlighting the generalization problem faced by binary classifiers, demonstrated by training and testing an EfficientNet-B4 architecture on multiple multi-spectral datasets. We then illustrate that the VQ-VAE 2-based classifier, trained exclusively on pristine images, can detect images from different domains and generated by architectures not encountered during training. Finally, we conducted a head-to-head comparison between the two classifiers on the same generated datasets, emphasizing the superior generalization capabilities of the VQ-VAE 2-based detector: we obtained a probability of detection of 1 at a 0.05 false alarm rate for the blue and red channels when using the VQ-VAE 2-based detector, versus 0.72 with the EfficientNet-B4 classifier.
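The one-class decision rule implied by these numbers can be sketched as below: calibrate a threshold on pristine-image scores so that 5% of them are flagged (a 0.05 false alarm rate), then measure the detection probability on generated images. The scoring function itself is a placeholder here for the paper's VQ-VAE 2 feature-based score.

```python
# Sketch of one-class calibration at a fixed false alarm rate. Scores are
# assumed to be "higher = more likely synthetic"; how they are computed
# (e.g., from VQ-VAE 2 features) is outside this sketch.
import numpy as np

def calibrate_threshold(pristine_scores: np.ndarray, fpr: float = 0.05) -> float:
    # Flag the top-`fpr` tail of pristine scores, i.e., a 5% false alarm rate.
    return float(np.quantile(pristine_scores, 1.0 - fpr))

def probability_of_detection(generated_scores: np.ndarray, threshold: float) -> float:
    return float(np.mean(generated_scores > threshold))

# Usage sketch:
# thr = calibrate_threshold(pristine_scores)
# pd = probability_of_detection(generated_scores, thr)
```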

24 pages, 26896 KiB  
Article
Extraction of Tobacco Planting Information Based on UAV High-Resolution Remote Sensing Images
by Lei He, Kunwei Liao, Yuxia Li, Bin Li, Jinglin Zhang, Yong Wang, Liming Lu, Sichun Jian, Rui Qin and Xinjun Fu
Remote Sens. 2024, 16(2), 359; https://doi.org/10.3390/rs16020359 - 16 Jan 2024
Cited by 1
Abstract
Tobacco is a critical cash crop in China, so its growing status has received increasing attention. How to simultaneously acquire accurate plant area, row spacing, and plant spacing has been a key point for monitoring its growth status and predicting yield. However, accurately detecting small and densely arranged tobacco plants during the rosette stage poses a significant challenge. In Sichuan Province, the contours of scattered tobacco fields with different shapes are not well extracted, and methods that simultaneously extract crucial tobacco planting information, including area, row spacing, and plant spacing, are lacking. In view of these scientific problems, we propose a method to extract the planting information of tobacco at the rosette stage from Unmanned Aerial Vehicle (UAV) remote sensing images. A detection model, YOLOv8s-EFF, was constructed for the small and weak tobacco plants of the rosette stage. We propose an extraction algorithm for tobacco field area based on extended contours for fields of different shapes, together with a planting distance extraction algorithm based on tobacco plant coordinates. Four experimental areas were selected in Sichuan Province, and image processing and sample label production were carried out. Four isolated tobacco fields with different shapes in the four experimental areas were used to preliminarily verify the effectiveness of the proposed model and algorithms. The results show that the precision ranges of tobacco field area, row spacing, and plant spacing were 96.51~99.04%, 90.08~99.74%, and 94.69~99.15%, respectively. Another two experimental areas, in Jiange County, Guangyuan, and Dazhai, Gulin County, Luzhou, were selected to evaluate the accuracy of the proposed method in practical application. The results indicate that the average accuracy of tobacco field area, row spacing, and plant spacing extracted by this method reached 97.99%, 97.98%, and 98.31%, respectively, which proves that the planting information extraction method is valuable.
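As an illustration of the planting-distance step, the sketch below derives row spacing and plant spacing from detected plant center coordinates, under the simplifying assumption that rows run along the x-axis after orientation correction; the grouping tolerance and all names are hypothetical, not from the paper.

```python
# Sketch: group detected plant centers into rows by their y-coordinate,
# then measure mean row-to-row and plant-to-plant distances.
import numpy as np

def row_and_plant_spacing(centers: np.ndarray, row_tol: float = 0.3):
    """centers: (N, 2) array of (x, y) plant coordinates in meters."""
    order = np.argsort(centers[:, 1])
    ys = centers[order, 1]
    # Start a new row wherever the y-gap exceeds the tolerance.
    breaks = np.where(np.diff(ys) > row_tol)[0] + 1
    rows = np.split(order, breaks)
    row_y = np.array([centers[r, 1].mean() for r in rows])
    row_spacing = float(np.mean(np.diff(np.sort(row_y)))) if len(rows) > 1 else np.nan
    gaps = [np.diff(np.sort(centers[r, 0])) for r in rows if len(r) > 1]
    plant_spacing = float(np.mean(np.concatenate(gaps))) if gaps else np.nan
    return row_spacing, plant_spacing
```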

23 pages, 6663 KiB  
Article
CD-MQANet: Enhancing Multi-Objective Semantic Segmentation of Remote Sensing Images through Channel Creation and Dual-Path Encoding
by Jinglin Zhang, Yuxia Li, Bowei Zhang, Lei He, Yuan He, Wantao Deng, Yu Si, Zhonggui Tong, Yushu Gong and Kunwei Liao
Remote Sens. 2023, 15(18), 4520; https://doi.org/10.3390/rs15184520 - 14 Sep 2023
Abstract
As a crucial computer vision task, multi-objective semantic segmentation has attracted widespread attention and research in the field of remote sensing image analysis. This technology has important application value in fields such as land resource surveys, global change monitoring, urban planning, and environmental monitoring. However, multi-objective semantic segmentation of remote sensing images faces challenges such as complex surface features, complex spectral features, and a wide spatial range, resulting in differences in spatial and spectral dimensions among target features. To fully exploit and utilize spectral feature information, focusing on the information contained in the spatial and spectral dimensions of multi-spectral images, and integrating external information, this paper constructs the CD-MQANet network structure, where C represents the Channel Creator module and D represents the Dual-Path Encoder. The Channel Creator module (CCM) mainly includes two parts: a generator block and a spectral attention module. The generator block aims to generate spectral channels that can differentiate ground target types, while the spectral attention module enhances useful spectral information. The Dual-Path Encoder includes a channel encoder and a spatial encoder, intended to fully utilize the spectrally enhanced images while maintaining the spatial information of the original feature map. The decoder of CD-MQANet is a multitasking decoder composed of four types of attention, enhancing decoding capabilities. The loss function of CD-MQANet consists of three parts, computed from the intermediate results of the CCM, the intermediate results of the decoder, and the final segmentation results against the labels. We performed experiments on the Potsdam and Vaihingen datasets. Compared to the baseline MQANet model, CD-MQANet improved mean F1 and OA by 2.03% and 2.49%, respectively, on the Potsdam dataset, and by 1.42% and 1.25%, respectively, on the Vaihingen dataset. The effectiveness of CD-MQANet was also proven by comparative experiments with other studies. We also conducted a heatmap analysis of the attention mechanisms used in CD-MQANet and analyzed the intermediate results generated by the CCM and LAM. Both modules generated intermediate results that had a significant positive impact on segmentation.
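A hedged sketch of a three-term objective of the shape described above follows; the per-term weights and the use of cross-entropy for each supervision signal are assumptions, not the paper's exact formulation.

```python
# Sketch: combine supervision on the CCM's intermediate output, the
# decoder's intermediate output, and the final segmentation logits.
import torch
import torch.nn.functional as F

def composite_loss(ccm_logits, decoder_logits, final_logits, labels,
                   w=(0.3, 0.3, 1.0)):
    l_ccm = F.cross_entropy(ccm_logits, labels)      # supervises channel creation
    l_dec = F.cross_entropy(decoder_logits, labels)  # supervises intermediate decoding
    l_seg = F.cross_entropy(final_logits, labels)    # main segmentation loss
    return w[0] * l_ccm + w[1] * l_dec + w[2] * l_seg
```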

18 pages, 686 KiB  
Article
A Hybrid 3D–2D Feature Hierarchy CNN with Focal Loss for Hyperspectral Image Classification
by Xiaoyan Wen, Xiaodong Yu, Yufan Wang, Cuiping Yang and Yu Sun
Remote Sens. 2023, 15(18), 4439; https://doi.org/10.3390/rs15184439 - 09 Sep 2023
Cited by 1
Abstract
Hyperspectral image (HSI) classification has been extensively applied for analyzing remotely sensed images. HSI data consist of multiple bands that provide abundant spectral information. Convolutional neural networks (CNNs) have emerged as powerful deep learning methods for processing visual data, and recent work has shown impressive CNN results in HSI classification. In this paper, we propose a hierarchical neural network architecture called feature extraction with hybrid spectral CNN (FE-HybridSN) to extract superior spectral–spatial features. FE-HybridSN effectively captures more spectral–spatial information while reducing computational complexity. Considering the prevalent issue of class imbalance in the experimental datasets (IP, UP, SV) and in real-world hyperspectral datasets, we apply the focal loss to mitigate this problem: it reconstructs the loss function and facilitates effective achievement of the aforementioned goals. We propose a framework (FEHN-FL) that combines FE-HybridSN and the focal loss for HSI classification and then conduct extensive HSI classification experiments using three remote sensing datasets: Indian Pines (IP), University of Pavia (UP), and Salinas Scene (SV). Using cross-entropy loss as a baseline, we assess the hyperspectral classification performance of various backbone networks and examine the influence of different spatial sizes on classification accuracy. After incorporating the focal loss as our loss function, we not only compare the classification performance of the FE-HybridSN backbone network under different loss functions but also evaluate their convergence rates during training. The proposed classification framework demonstrates satisfactory performance compared to state-of-the-art end-to-end deep-learning-based methods such as 2D-CNN and 3D-CNN.
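Since the focal loss is central to this framework, here is a minimal PyTorch sketch of its standard form, which down-weights well-classified samples by the factor (1 - p_t)^γ so that training focuses on hard, minority-class samples; γ = 2 follows the original focal loss paper, and the α class-weighting term is omitted for brevity.

```python
# Sketch of the standard (alpha-free) multi-class focal loss.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    """logits: (N, C) class scores; targets: (N,) integer class labels."""
    log_pt = F.log_softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()  # probability assigned to the true class
    # (1 - pt)^gamma shrinks the loss of easy (high-pt) examples.
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```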

18 pages, 5137 KiB  
Article
MCSNet: A Radio Frequency Interference Suppression Network for Spaceborne SAR Images via Multi-Dimensional Feature Transform
by Xiuhe Li, Jinhe Ran, Hao Zhang and Shunjun Wei
Remote Sens. 2022, 14(24), 6337; https://doi.org/10.3390/rs14246337 - 14 Dec 2022
Cited by 5
Abstract
Spaceborne synthetic aperture radar (SAR) is a promising remote sensing technique, as it can produce high-resolution imagery over a wide area of surveillance with all-weather and all-day capabilities. However, a spaceborne SAR sensor may suffer from severe radio frequency interference (RFI) from signals in similar frequency bands, resulting in image quality degradation, blind spots, and target loss. To remove the RFI features present in spaceborne SAR images, we propose a multi-dimensional calibration and suppression network (MCSNet) to exploit feature learning of spaceborne SAR images and RFI. In this scheme, a joint model consisting of the spaceborne SAR image and the RFI is established based on the relationship between the SAR echo and the scattering matrix. Then, to suppress the RFI present in images, the main structure of MCSNet is constructed with a multi-dimensional and multi-channel strategy, wherein a feature calibration module (FCM) is designed for global depth feature extraction. In addition, MCSNet repeatedly performs planned mapping on the feature maps under the supervision of the SAR interference image, compensating for the discrepancies caused during RFI suppression. Finally, a detail restoration module based on the residual network is conceived to maintain the scattering characteristics of the underlying scene in interfered SAR images. Experiments on simulated data and Sentinel-1 data, including different landscapes and different forms of RFI, validate the effectiveness of the proposed method. The results demonstrate that MCSNet outperforms state-of-the-art methods and can greatly suppress RFI in spaceborne SAR.
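As a rough illustration of the detail restoration idea, the sketch below shows a generic residual restoration block in which an identity skip preserves the underlying scene and the convolutional body learns only the correction; the layer sizes are assumptions, and this is not the authors' MCSNet code.

```python
# Sketch of a generic residual restoration block for image feature maps.
import torch
import torch.nn as nn

class ResidualRestoration(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity skip keeps the scene content; the body learns the residual detail.
        return x + self.body(x)
```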
