
New Deep Learning Paradigms for Multisource Remote Sensing Data Fusion and Classification

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 August 2024 | Viewed by 1579

Special Issue Editors


Guest Editor
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China
Interests: AI internet of things; machine learning; satellite remote sensing

Guest Editor
Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Kowloon TU428, Hong Kong
Interests: remote sensing; computer vision; deep learning

Guest Editor
Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
Interests: computer vision; pattern recognition; machine learning; deep learning; sensor reconfiguration; anomaly detection

Guest Editor
School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science and Technology, Nanjing 210044, China
Interests: deep learning; computer vision; remote sensing; semantic segmentation; transformer

Special Issue Information

Dear Colleagues,

Leveraging multisource remote sensing images for earth mapping and monitoring has drawn significant attention in large-scale geoscience applications. Numerous methods have been developed for data fusion and classification based on novel deep learning models in a fully supervised training manner. Although deep learning has shown dominance in multisource image fusion and classification, it still encounters several issues in practice, such as limited labeled samples in model training, weak representative capability for multisource data from heterogeneous domains, and performance degradation in cross-domain tasks. Some new learning paradigms have emerged to address these problems and to promote multimodal collaboration and cross-modal analysis in remote sensing, including self-supervised, weakly supervised, transfer, and federated learning. They significantly improve the generalization and robustness of deep learning models and provide new possibilities and challenges for the use of novel training and optimization algorithms in remote sensing.
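Of the paradigms named above, self-supervised learning can be made concrete with a small sketch. The snippet below is an illustrative, hypothetical example (not drawn from any paper in this issue): a symmetric InfoNCE objective that aligns embeddings of the same scene acquired by two different sensors, one common way to pretrain multisource encoders without labeled samples.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss aligning two modalities' embeddings.

    z_a, z_b: (batch, dim) embeddings of the same scenes from two sensors
    (e.g. optical and SAR). Matching rows are positives; every other row
    in the batch serves as a negative.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau          # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0))   # positives sit on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: random tensors stand in for modality-specific encoder outputs.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```

In practice the random tensors would be the outputs of two modality-specific encoders, and pretraining on this objective would precede supervised fine-tuning on the fusion or classification task.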

This Special Issue aims to highlight innovative research on novel deep learning paradigms for multisource remote sensing data fusion and classification. Topics may range from multisource data integration to downstream applications in remote sensing. Submissions presenting advanced deep models with new training and optimization strategies for remote sensing applications are welcome.

Articles may address, but are not limited to, the following topics:

  • Novel fusion strategies for multisource remote sensing data;
  • Weakly and self-supervised deep learning in remote sensing image classification;
  • Remote sensing data fusion with generative models;
  • Transfer learning in remote sensing image classification and segmentation;
  • Cross-modal analysis in remote sensing;
  • Land-cover/land-use mapping using multisource remote sensing data;
  • Multisource remote sensing applications.

Dr. Man On Pun
Dr. Xiaokang Zhang
Dr. Claudio Piciarelli
Dr. Libo Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-modal fusion
  • cross-modal analysis
  • deep transfer learning
  • domain adaptation
  • semi-supervised learning
  • active learning
  • federated learning
  • weakly supervised learning
  • self-supervised learning
  • few-shot learning
  • unsupervised representation learning
  • adversarial training
  • generative models
  • semantic segmentation
  • scene classification
  • land cover mapping
  • change detection
  • pansharpening

Published Papers (2 papers)


Research

22 pages, 10413 KiB  
Article
Bridging Domains and Resolutions: Deep Learning-Based Land Cover Mapping without Matched Labels
by Shuyi Cao, Yubin Tang, Enping Yan, Jiawei Jiang and Dengkui Mo
Remote Sens. 2024, 16(8), 1449; https://doi.org/10.3390/rs16081449 - 19 Apr 2024
Viewed by 327
Abstract
High-resolution land cover mapping is crucial in various disciplines but is often hindered by the lack of accurately matched labels. Our study introduces an innovative deep learning methodology for effective land cover mapping, independent of matched labels. The approach comprises three main components: (1) an advanced fully convolutional neural network, augmented with super-resolution features, to refine labels; (2) the application of an instance-batch normalization network (IBN), leveraging these enhanced labels from the source domain, to generate 2-m resolution land cover maps for test sites in the target domain; (3) noise assessment tests to evaluate the impact of varying noise levels on the model’s mapping accuracy using external labels. The model achieved an overall accuracy of 83.40% in the target domain using endogenous super-resolution labels. In contrast, employing exogenous, high-precision labels from the National Land Cover Database in the source domain led to a notable accuracy increase of 2.55%, reaching 85.48%. This improvement highlights the model’s enhanced generalizability and performance during domain shifts, attributed significantly to the IBN layer. Our findings reveal that, despite the absence of native high-precision labels, the utilization of high-quality external labels can substantially benefit the development of precise land cover mapping, underscoring their potential in scenarios with unmatched labels.
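The "instance-batch normalization network (IBN)" credited above with robustness under domain shift can be illustrated with a minimal sketch. The layer below follows the IBN-a design of Pan et al. (2018), which the abstract's naming suggests but does not confirm; treat the channel split and sizes as assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class IBN(nn.Module):
    """Instance-batch normalization layer (IBN-a style sketch).

    Half of the channels pass through InstanceNorm, which discards
    appearance/style statistics that vary across domains; the other half
    pass through BatchNorm, which preserves content statistics. This mix
    is what makes IBN layers attractive for cross-domain mapping.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.half = channels // 2
        self.inorm = nn.InstanceNorm2d(self.half, affine=True)
        self.bnorm = nn.BatchNorm2d(channels - self.half)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = torch.split(x, [self.half, x.size(1) - self.half], dim=1)
        return torch.cat([self.inorm(a), self.bnorm(b)], dim=1)

feat = torch.randn(4, 64, 32, 32)   # (batch, channels, H, W) feature map
out = IBN(64)(feat)                 # same shape, mixed normalization
```

In a backbone, such a layer would typically replace the plain BatchNorm in early residual blocks, where style variation between source and target imagery is strongest.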

20 pages, 1863 KiB  
Article
Denoising Diffusion Probabilistic Model with Adversarial Learning for Remote Sensing Super-Resolution
by Jialu Sui, Qianqian Wu and Man-On Pun
Remote Sens. 2024, 16(7), 1219; https://doi.org/10.3390/rs16071219 - 30 Mar 2024
Viewed by 627
Abstract
Single Image Super-Resolution (SISR) for image enhancement enables the generation of high spatial resolution in Remote Sensing (RS) images without incurring additional costs. This approach offers a practical solution to obtain high-resolution RS images, addressing challenges posed by the expense of acquisition equipment and unpredictable weather conditions. To address the over-smoothing of previous SISR models, the diffusion model has been incorporated into RS SISR to generate Super-Resolution (SR) images with enhanced textural details. In this paper, we propose a Diffusion model with an Adversarial Learning Strategy (DiffALS) to refine the generative capability of the diffusion model. DiffALS integrates an additional Noise Discriminator (ND) into the training process, applying an adversarial learning strategy to the data distribution learning. The ND guides noise prediction by considering the correspondence between the noisy images at successive steps, thereby enhancing the diversity of generated data and the detailed texture prediction of the diffusion model. Furthermore, considering that the diffusion model may exhibit suboptimal performance on traditional pixel-level metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), we showcase the effectiveness of DiffALS through downstream semantic segmentation applications. Extensive experiments demonstrate that the proposed model achieves remarkable accuracy and notable visual enhancements. Compared to other state-of-the-art methods, our model achieves improvements of 189 in Fréchet Inception Distance (FID) and 0.002 in Learned Perceptual Image Patch Similarity (LPIPS) on the SR dataset Alsat, and improvements of 0.4%, 0.3%, and 0.2% in F1 score, mIoU, and accuracy, respectively, on the segmentation dataset Vaihingen.
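The adversarial learning strategy described above can be sketched at a high level. The snippet below is a hypothetical toy version, not the paper's implementation: the networks are tiny fully-connected stand-ins for the real convolutional denoiser and Noise Discriminator, and the 0.1 adversarial weight and single-step noise-schedule value are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-ins for the denoiser and the Noise Discriminator (ND).
denoiser = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))
nd = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

x0 = torch.randn(8, 16)                  # clean (flattened) image patches
eps = torch.randn_like(x0)               # true diffusion noise
alpha_bar = torch.tensor(0.5)            # cumulative noise schedule at step t
xt = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps

eps_hat = denoiser(xt)                   # predicted noise

# Standard DDPM objective: match the injected noise.
l_ddpm = F.mse_loss(eps_hat, eps)

# Adversarial term: ND is trained to tell true noise from predicted noise;
# here the denoiser is rewarded when ND scores its prediction as "real".
l_adv = F.binary_cross_entropy_with_logits(nd(eps_hat), torch.ones(8, 1))

loss = l_ddpm + 0.1 * l_adv              # 0.1 is an assumed weighting
```

Training would alternate this generator step with a discriminator step that pushes `nd(eps)` toward "real" and `nd(eps_hat.detach())` toward "fake", in the usual GAN fashion.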
