
Advanced Machine Learning and Deep Learning Approaches for Remote Sensing III

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 7662

Special Issue Editor


Dr. Gwanggil Jeon
Guest Editor
Department of Embedded Systems Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea
Interests: remote sensing; deep learning; artificial intelligence; image processing; signal processing

Special Issue Information

Dear Colleagues,

Remote sensing is the acquisition of information about an object or phenomenon without making physical contact. Artificial intelligence, including machine learning and deep learning, has shown great potential to overcome the challenges of remote sensing signal, image, and video processing. These approaches require substantial computing power and therefore typically rely on GPUs. Thanks to sustained research efforts, recent advances in remote sensing have enabled high-resolution monitoring of the Earth on a global scale, providing a massive amount of Earth-observation data. We trust that artificial intelligence, machine learning, and deep learning approaches will provide promising tools for overcoming many challenges in remote sensing in terms of accuracy, reliability, and speed.

This Special Issue is the third edition of “Advanced Machine Learning for Time Series Remote Sensing Data Analysis”. In this third edition, we aim to introduce the latest advances and trends in advanced machine learning and deep learning techniques for remote sensing data processing. Papers of both a theoretical and an applied nature, as well as contributions presenting new advanced artificial intelligence and data science techniques to the remote sensing research community, are welcome.

Both original research articles and review articles are welcome for submission.

This Special Issue is the third edition, following the Special Issues “Advanced Machine Learning and Deep Learning Approaches for Remote Sensing” and “Advanced Machine Learning and Deep Learning Approaches for Remote Sensing II”.

Dr. Gwanggil Jeon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • remote sensing
  • signal/image processing
  • deep learning
  • artificial intelligence
  • time series processing

Published Papers (8 papers)


Research

25 pages, 7926 KiB  
Article
Generative Adversarial Network and Mutual-Point Learning Algorithm for Few-Shot Open-Set Classification of Hyperspectral Images
by Tuo Xu, Ying Wang, Jie Li and Yuefan Du
Remote Sens. 2024, 16(7), 1285; https://doi.org/10.3390/rs16071285 - 05 Apr 2024
Viewed by 443
Abstract
Existing approaches addressing the few-shot open-set recognition (FSOSR) challenge in hyperspectral images (HSIs) often encounter limitations stemming from sparse labels, restricted category numbers, and low openness. These limitations compromise stability and adaptability. In response, an open-set HSI classification algorithm based on data wandering (DW) is introduced in this research. Firstly, a K-class classifier suitable for a closed set is trained, and its internal encoder is leveraged to extract features and estimate the distribution of known categories. Subsequently, the classifier is fine-tuned based on feature distribution. To address the scarcity of samples, a sample density constraint based on the generative adversarial network (GAN) is employed to generate synthetic samples near the decision boundary. Simultaneously, a mutual-point learning method is incorporated to widen the class distance between known and unknown categories. In addition, a dynamic threshold method based on DW is devised to enhance the open-set performance. By categorizing drifting synthetic samples into known and unknown classes and retraining them together with the known samples, the closed-set classifier is optimized, and a (K + 1)-class open-set classifier is trained. The experimental results in this research demonstrate the superior FSOSR performance of the proposed method across three benchmark HSI datasets. Full article
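
To make the open-set mechanism described above concrete, the sketch below shows, in simplified form, how a closed-set feature extractor can be extended with an extra "unknown" logit and a rejection threshold at inference time. It is an illustrative, assumption-based outline rather than the authors' released code; the encoder, feature dimension, class count, and threshold value are all placeholders.

```python
# Simplified sketch (not the authors' code): extending a K-class closed-set
# classifier to a (K + 1)-class open-set classifier with a rejection threshold.
# The encoder, class count, and threshold value are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OpenSetClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, num_known: int):
        super().__init__()
        self.encoder = encoder                           # pretrained closed-set feature extractor
        self.head = nn.Linear(feat_dim, num_known + 1)   # extra logit for the "unknown" class

    def forward(self, x):
        return self.head(self.encoder(x))

@torch.no_grad()
def predict_open_set(model, x, num_known, tau=0.5):
    """Assign 'unknown' (index num_known) when the best known-class
    probability falls below the threshold tau."""
    probs = F.softmax(model(x), dim=1)
    known_probs, known_pred = probs[:, :num_known].max(dim=1)
    return torch.where(known_probs >= tau, known_pred,
                       torch.full_like(known_pred, num_known))

# Toy example on random "spectral" vectors (200 bands, 9 known classes):
encoder = nn.Sequential(nn.Linear(200, 64), nn.ReLU())
model = OpenSetClassifier(encoder, feat_dim=64, num_known=9)
print(predict_open_set(model, torch.randn(4, 200), num_known=9))
```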

24 pages, 82388 KiB  
Article
A Parameter-Free Pixel Correlation-Based Attention Module for Remote Sensing Object Detection
by Xin Guan, Yifan Dong, Weixian Tan, Yun Su and Pingping Huang
Remote Sens. 2024, 16(2), 312; https://doi.org/10.3390/rs16020312 - 12 Jan 2024
Viewed by 755
Abstract
Remote sensing image object detection is a challenging task in the field of computer vision due to the complex backgrounds and diverse arrangements of targets in remote sensing images, forming intricate scenes. To overcome this challenge, existing object detection models adopt rotated target detection methods. However, these methods often lead to a loss of semantic information during feature extraction, specifically regarding the insufficient consideration of element correlations. To solve this problem, this research introduces a novel attention module (EuPea) designed to effectively capture inter-elemental information in feature maps and generate more powerful feature maps for use in neural networks. In the EuPea attention mechanism, we integrate distance information and Pearson correlation coefficient information between elements in the feature map. Experimental results show that using either type of information individually can improve network performance, but their combination has a stronger effect, producing an attention-weighted feature map. This improvement effectively enhances the object detection performance of the model, enabling it to better comprehend information in remote sensing images. Concurrently, it also reduces missed detections and false alarms in object detection. Experimental results obtained on the DOTA, NWPU VHR-10, and DIOR datasets indicate that, compared with baseline RCNN models, our approach achieves respective improvements of 1.0%, 2.4%, and 1.8% in mean average precision (mAP). Full article
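
As a rough illustration of a parameter-free attention map built from distance and correlation cues, the sketch below compares each pixel's channel vector with the map's mean channel vector using a Pearson correlation term and a normalized Euclidean distance term. This is a simplified interpretation for intuition only; the actual EuPea formulation may combine these cues differently.

```python
# Minimal, parameter-free spatial attention sketch inspired by the idea of
# combining Euclidean-distance and Pearson-correlation cues between feature-map
# elements. Illustrative simplification, not the EuPea module itself: each
# pixel's channel vector is compared with the map's mean channel vector.
import torch
import torch.nn as nn

class DistanceCorrelationAttention(nn.Module):
    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        feats = x.flatten(2)                               # (B, C, H*W)
        mean_vec = feats.mean(dim=2, keepdim=True)         # (B, C, 1)

        # Pearson correlation of each pixel's channel vector with the mean vector.
        fc = feats - feats.mean(dim=1, keepdim=True)
        mc = mean_vec - mean_vec.mean(dim=1, keepdim=True)
        corr = (fc * mc).sum(dim=1) / (fc.norm(dim=1) * mc.norm(dim=1) + 1e-6)

        # Euclidean distance to the mean vector (closer -> higher weight).
        dist = (feats - mean_vec).norm(dim=1)               # (B, H*W)
        dist = dist / (dist.max(dim=1, keepdim=True).values + 1e-6)

        attn = torch.sigmoid(corr - dist).view(b, 1, h, w)
        return x * attn                                     # attention-weighted feature map

x = torch.randn(2, 64, 32, 32)
print(DistanceCorrelationAttention()(x).shape)              # torch.Size([2, 64, 32, 32])
```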

20 pages, 7008 KiB  
Article
A New Deep Neural Network Based on SwinT-FRM-ShipNet for SAR Ship Detection in Complex Near-Shore and Offshore Environments
by Zhuhao Lu, Pengfei Wang, Yajun Li and Baogang Ding
Remote Sens. 2023, 15(24), 5780; https://doi.org/10.3390/rs15245780 - 18 Dec 2023
Cited by 1 | Viewed by 772
Abstract
The advent of deep learning has significantly propelled the utilization of neural networks for Synthetic Aperture Radar (SAR) ship detection in recent years. However, there are two main obstacles in SAR detection. Challenge 1: The multiscale nature of SAR ships. Challenge 2: The influence of intricate near-shore environments and the interference of clutter noise in offshore areas, especially affecting small-ship detection. Existing neural network-based approaches attempt to tackle these challenges, yet they often fall short in effectively addressing small-ship detection across multiple scales and complex backgrounds simultaneously. To overcome these challenges, we propose a novel network called SwinT-FRM-ShipNet. Our method introduces an integrated feature extractor, Swin-T-YOLOv5l, which combines Swin Transformer and YOLOv5l. The extractor is designed to highlight the differences between the complex background and the target by encoding both local and global information. Additionally, a feature pyramid IEFR-FPN, consisting of the Information Enhancement Module (IEM) and the Feature Refinement Module (FRM), is proposed to enrich the flow of spatial contextual information, fuse multiresolution features, and refine representations of small and multiscale ships. Furthermore, we introduce recursive gated convolutional prediction heads (GCPH) to explore the potential of high-order spatial interactions and add a larger-sized prediction head to focus on small ships. Experimental results demonstrate the superior performance of our method compared to mainstream approaches on the SSDD and SAR-Ship-Dataset. Our method achieves an F1 score, mAP0.5, and mAP0.5:0.95 of 96.5% (+0.9%), 98.2% (+1.0%), and 75.4% (+3.3%), respectively, surpassing the most competitive algorithms. Full article
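
The sketch below illustrates the general idea of a gated convolutional prediction head attached to a high-resolution feature map so that small targets retain spatial detail. It uses a single gating step as a stand-in for the paper's recursive gated convolutions; the channel width, anchor count, and feature-map size are illustrative assumptions.

```python
# Simplified sketch of a gated convolutional prediction head. The paper's GCPH
# uses recursive gated convolutions; a single gating step stands in for the idea
# here, and the channel/anchor counts are illustrative assumptions.
import torch
import torch.nn as nn

class GatedConvHead(nn.Module):
    def __init__(self, in_ch: int, num_outputs: int):
        super().__init__()
        self.content = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.gate = nn.Conv2d(in_ch, in_ch, 3, padding=1)
        self.pred = nn.Conv2d(in_ch, num_outputs, 1)       # per-anchor box/cls outputs

    def forward(self, x):
        x = self.content(x) * torch.sigmoid(self.gate(x))  # spatial gating
        return self.pred(x)

# A larger-size (higher-resolution) head can be attached to a shallower, finer
# feature map so that small ships keep enough spatial detail:
p2 = torch.randn(1, 128, 160, 160)                         # high-resolution feature map (assumed)
head_small = GatedConvHead(128, num_outputs=3 * (5 + 1))   # e.g. 3 anchors, 1 class
print(head_small(p2).shape)
```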

19 pages, 6036 KiB  
Article
An Information Extraction Method for Industrial and Commercial Rooftop Photovoltaics Based on GaoFen-7 Remote Sensing Images
by Haoxiang Tao, Guojin He, Guizhou Wang, Ruiqing Yang, Xueli Peng and Ranyu Yin
Remote Sens. 2023, 15(24), 5744; https://doi.org/10.3390/rs15245744 - 15 Dec 2023
Viewed by 709
Abstract
With the increasing global focus on renewable energy, distributed rooftop photovoltaics (PVs) are gradually becoming an important form of energy generation. Effective monitoring of rooftop PVs can reveal their spatial distribution and installed capacity, which is the basis used by management departments to formulate regulatory policies. Because manual monitoring is time-consuming and labor-intensive, remote-sensing-based monitoring methods are attracting more attention. Currently, remote-sensing-based distributed rooftop PV monitoring methods mainly target household rooftop PVs, and most of them use aerial or satellite images with a resolution higher than 0.3 m; there is no research on industrial and commercial rooftop PVs. This study focuses on an information extraction method for distributed industrial and commercial rooftop PVs based on Gaofen-7 satellite imagery with a resolution of 0.65 m. First, a distributed industrial and commercial rooftop PV dataset based on Gaofen-7 imagery and optimized public PV datasets were constructed. Second, an advanced MANet model was proposed. Compared to MANet, the proposed model removed the downsample operation in the first stage of the encoder and added an auxiliary branch containing the Atrous Spatial Pyramid Pooling (ASPP) module in the decoder. Comparative experiments were conducted between the advanced MANet and state-of-the-art semantic segmentation models. On the Gaofen-7 satellite PV dataset, the Intersection over Union (IoU) of the advanced MANet in the test set was improved by 13.5%, 8.96%, 2.67%, 0.63%, and 0.75% over Deeplabv3+, U2net-lite, U2net-full, Unet, and MANet. In order to further verify the performance of the proposed model, experiments were conducted on the optimized public PV datasets. The IoU was improved by 3.18%, 3.78%, 3.29%, 4.98%, and 0.42%, demonstrating that it outperformed the other models. Full article
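
For readers unfamiliar with the ASPP component mentioned above, the sketch below shows a compact Atrous Spatial Pyramid Pooling block of the kind that can be attached as an auxiliary decoder branch. The channel sizes and dilation rates are illustrative assumptions rather than the paper's exact configuration.

```python
# Compact Atrous Spatial Pyramid Pooling (ASPP) block of the kind added as an
# auxiliary decoder branch in the modified MANet. Channel sizes and dilation
# rates here are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates
        ])
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * len(rates), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        # Parallel atrous convolutions see the same input at different rates,
        # then their outputs are concatenated and fused by a 1x1 projection.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 64, 64)
print(ASPP(256, 128)(x).shape)    # torch.Size([1, 128, 64, 64])
```

The appeal of such a branch is that the dilated convolutions enlarge the receptive field without downsampling, which suits rooftop-scale objects whose extent varies widely within one scene.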

23 pages, 65467 KiB  
Article
FAUNet: Frequency Attention U-Net for Parcel Boundary Delineation in Satellite Images
by Bahaa Awad and Isin Erer
Remote Sens. 2023, 15(21), 5123; https://doi.org/10.3390/rs15215123 - 26 Oct 2023
Viewed by 1060
Abstract
Parcel detection and boundary delineation play an important role in numerous remote sensing applications, such as yield estimation, crop type classification, and farmland management systems. Consequently, achieving accurate boundary delineation remains a prominent research area within remote sensing literature. In this study, we propose a straightforward yet highly effective method for boundary delineation that leverages frequency attention to enhance the precision of boundary detection. Our approach, named Frequency Attention U-Net (FAUNet), builds upon the foundational and successful U-Net architecture by incorporating a frequency-based attention gate to enhance edge detection performance. Unlike many similar boundary delineation methods that employ three segmentation masks, our network employs only two, resulting in a more streamlined post-processing workflow. The essence of frequency attention lies in the integration of a frequency gate utilizing a high-pass filter. This high-pass filter output accentuates the critical high-frequency components within feature maps, thereby significantly improving edge detection performance. Comparative evaluation of FAUNet against alternative models demonstrates its superiority across various pixel-based and object-based metrics. Notably, FAUNet achieves a pixel-based precision, F1 score, and IoU of 0.9047, 0.8692, and 0.7739, respectively. In terms of object-based metrics, FAUNet demonstrates minimal over-segmentation (OS) and under-segmentation (US) errors, with values of 0.0341 and 0.1390, respectively. Full article
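
A minimal sketch of a high-pass-filter-based frequency gate is given below: a fixed Laplacian kernel is applied depthwise to the feature map, and its sigmoid-squashed response gates the features, emphasizing high-frequency (edge) regions. The specific filter and wiring inside FAUNet may differ; this is an assumption made for illustration.

```python
# Minimal sketch of a frequency attention gate in the spirit of FAUNet: a fixed
# high-pass (Laplacian) filter is applied to the feature map and its response,
# squashed by a sigmoid, gates the features so edges are emphasized. The exact
# filter and wiring in FAUNet may differ; this is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyGate(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        lap = torch.tensor([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])
        # One fixed high-pass kernel per channel (depthwise, non-trainable).
        self.register_buffer("kernel", lap.repeat(channels, 1, 1, 1))
        self.channels = channels

    def forward(self, x):
        high = F.conv2d(x, self.kernel, padding=1, groups=self.channels)
        return x * torch.sigmoid(high)     # high-frequency regions pass through

x = torch.randn(2, 32, 128, 128)
print(FrequencyGate(32)(x).shape)          # torch.Size([2, 32, 128, 128])
```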

26 pages, 41363 KiB  
Article
Spectral Segmentation Multi-Scale Feature Extraction Residual Networks for Hyperspectral Image Classification
by Jiamei Wang, Jiansi Ren, Yinbin Peng and Meilin Shi
Remote Sens. 2023, 15(17), 4219; https://doi.org/10.3390/rs15174219 - 28 Aug 2023
Cited by 1 | Viewed by 925
Abstract
Hyperspectral image (HSI) classification is a vital task in hyperspectral image processing and applications. Convolutional neural networks (CNNs) are becoming an effective approach for categorizing hyperspectral remote sensing images as deep learning technology advances. However, traditional CNNs usually use a fixed kernel size, which limits the model’s capacity to capture diverse features and affects the classification accuracy. Based on this, we developed a spectral segmentation-based multi-scale spatial feature extraction residual network (MFERN) for hyperspectral image classification. MFERN divides the input data into multiple non-overlapping sub-bands along the spectral dimension, extracts features in parallel using the multi-scale spatial feature extraction module MSFE, and adds global branches on top of these to obtain global information across the full spectral range of the image. Finally, the extracted features are fused and fed into the classifier. Our MSFE module has multiple branches with increasing ranges of the receptive field (RF), enabling multi-scale spatial information extraction at both fine- and coarse-grained levels. We conducted extensive experiments on the Indian Pines (IP), Salinas (SA), and Pavia University (PU) HSI datasets. The experimental results show that our model has the best performance and robustness, and our proposed MFERN significantly outperforms other models in terms of classification accuracy, even with a small amount of training data. Full article
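
The spectral-segmentation idea can be sketched as follows: the spectral bands are split into non-overlapping groups, each group passes through a small multi-scale convolution branch, and a global branch sees all bands before the outputs are concatenated. The group count, kernel sizes, and channel widths below are assumptions, not MFERN's actual settings.

```python
# Illustrative sketch of the spectral-segmentation idea in MFERN: bands are
# split into non-overlapping groups, each group goes through a small multi-scale
# branch, and a global branch sees the full spectrum. Group count, kernel sizes,
# and channel widths are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class MultiScaleBranch(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.fine = nn.Conv2d(in_ch, out_ch, 3, padding=1)     # small receptive field
        self.coarse = nn.Conv2d(in_ch, out_ch, 5, padding=2)   # larger receptive field
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.fine(x) + self.coarse(x))

class SpectralSegmentationNet(nn.Module):
    def __init__(self, bands: int, groups: int = 4, width: int = 16):
        super().__init__()
        self.groups = groups
        sub = bands // groups
        self.branches = nn.ModuleList(
            [MultiScaleBranch(sub, width) for _ in range(groups)])
        self.global_branch = nn.Conv2d(bands, width, 1)         # full-spectrum context

    def forward(self, x):                                       # x: (B, bands, H, W)
        subs = torch.chunk(x, self.groups, dim=1)
        feats = [branch(s) for branch, s in zip(self.branches, subs)]
        feats.append(self.global_branch(x))
        return torch.cat(feats, dim=1)                          # fused feature stack

x = torch.randn(2, 200, 9, 9)                                   # e.g. a 200-band HSI patch
print(SpectralSegmentationNet(200)(x).shape)
```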

24 pages, 6952 KiB  
Article
A Cascade Network for Pattern Recognition Based on Radar Signal Characteristics in Noisy Environments
by Jingwei Xiong, Jifei Pan and Mingyang Du
Remote Sens. 2023, 15(16), 4083; https://doi.org/10.3390/rs15164083 - 19 Aug 2023
Cited by 2 | Viewed by 840
Abstract
Target recognition mainly focuses on three approaches: optical-image-based, echo-detection-based, and passive signal-analysis-based methods. Among them, the passive signal-based method is closely integrated with practical applications due to its strong environmental adaptability. Based on passive radar signal analysis, we design an “end-to-end” model that cascades a noise estimation network with a recognition network to identify working modes in noisy environments. The noise estimation network is implemented based on U-Net and adopts a feature extraction and reconstruction scheme to adaptively estimate the noise level of each sample, which helps the recognition network to reduce noise interference. Focusing on the characteristics of radar signals, the recognition network is realized based on the multi-scale convolutional attention network (MSCANet). Firstly, deep group convolution is used to isolate the channel interaction in the shallow network. Then, through the multi-scale convolution module, the finer-grained features of the signal are extracted without increasing the complexity of the model. Finally, the self-attention mechanism is used to suppress the influence of low-correlation and negative-correlation channels and spaces. This design overcomes the severe noise sensitivity of conventional methods. We validated the proposed method in 81 kinds of noise environments, achieving an average accuracy of 94.65%. Additionally, we discussed the performance of six machine learning algorithms and four deep learning algorithms. Compared to these methods, the proposed MSCANet achieved an accuracy improvement of approximately 17%. Our method demonstrates better generalization and robustness. Full article
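
The cascade structure can be outlined as below: a noise-estimation network predicts a noise map from the input representation, the estimate is removed, and a recognition network classifies the cleaned input end to end. Both sub-networks here are deliberately tiny placeholders rather than the paper's U-Net and MSCANet.

```python
# Schematic sketch of the cascade idea: a noise-estimation network predicts a
# noise map, the estimate is subtracted, and a recognition network classifies
# the cleaned input. Both sub-networks are tiny placeholders, not the paper's
# U-Net or MSCANet; input shape and mode count are illustrative assumptions.
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    def __init__(self, ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, ch, 3, padding=1))           # predicted noise map

    def forward(self, x):
        return self.net(x)

class CascadeRecognizer(nn.Module):
    def __init__(self, num_modes: int, ch: int = 1):
        super().__init__()
        self.noise = NoiseEstimator(ch)
        self.recognizer = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_modes))

    def forward(self, x):                               # x: time-frequency representation
        cleaned = x - self.noise(x)                     # remove the estimated noise
        return self.recognizer(cleaned)

x = torch.randn(4, 1, 64, 64)                           # e.g. spectrogram-like input
print(CascadeRecognizer(num_modes=8)(x).shape)          # torch.Size([4, 8])
```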

18 pages, 2126 KiB  
Article
DAFNet: A Novel Change-Detection Model for High-Resolution Remote-Sensing Imagery Based on Feature Difference and Attention Mechanism
by Chong Ma, Hongyang Yin, Liguo Weng, Min Xia and Haifeng Lin
Remote Sens. 2023, 15(15), 3896; https://doi.org/10.3390/rs15153896 - 06 Aug 2023
Cited by 2 | Viewed by 1405
Abstract
Change detection is an important component in the field of remote sensing. At present, deep-learning-based change-detection methods have acquired many breakthrough results. However, current algorithms still present issues such as target misdetection, false alarms, and blurry edges. To alleviate these problems, this work proposes a network based on feature differences and attention mechanisms. This network includes a Siamese architecture-encoding network that encodes images at different times, a Difference Feature-Extraction Module (DFEM) for extracting difference features from bitemporal images, an Attention-Regulation Module (ARM) for optimizing the extracted difference features through attention, and a Cross-Scale Feature-Fusion Module (CSFM) for merging features from different encoding stages. Experimental results demonstrate that this method effectively alleviates issues of target misdetection, false alarms, and blurry edges. Full article
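
A minimal Siamese difference-based change-detection sketch is shown below: a weight-shared encoder processes both acquisition dates, and the absolute difference of their features drives the change mask. The attention-regulation and cross-scale fusion stages of DAFNet are omitted, and the encoder and head are illustrative assumptions.

```python
# Minimal Siamese change-detection sketch illustrating the difference-feature
# idea: a shared encoder processes both dates and the absolute feature
# difference drives the change mask. DAFNet's ARM and CSFM stages are omitted;
# the encoder and head below are illustrative assumptions.
import torch
import torch.nn as nn

class SiameseDiffNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                   # shared weights for both dates
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(64, 1, 1)                 # per-pixel change logit

    def forward(self, t1, t2):
        diff = torch.abs(self.encoder(t1) - self.encoder(t2))   # difference features
        return self.head(diff)

t1, t2 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
print(SiameseDiffNet()(t1, t2).shape)                   # torch.Size([1, 1, 256, 256])
```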
