Multisource Remote Sensing Image Interpretation and Application

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (25 March 2024) | Viewed by 4837

Special Issue Editors


Guest Editor
College of Information Science and Engineering, Hohai University, Nanjing 210098, China
Interests: deep learning; image processing; artificial intelligence

Guest Editor
Department of Aerospace and Geodesy, Technical University of Munich, Lise-Meitner-Str. 9, 85521 Ottobrunn, Germany
Interests: volunteered geographic information; geospatial machine learning; multi-sensor data fusion; geo-semantics; remote sensing

Guest Editor
College of Computer and Information, Hohai University, Nanjing 210098, China
Interests: remote sensing image processing; pansharpening; multimodal data fusion

Guest Editor
China Institute of Water Resources and Hydropower Research, Beijing 100038, China
Interests: deep learning; information fusion; image processing

Guest Editor
1. Helmholtz Institute Freiberg for Resource Technology, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), D-09599 Freiberg, Germany
2. Institute of Advanced Research in Artificial Intelligence (IARAI), 1030 Wien, Austria
Interests: hyperspectral image interpretation; multisensor and multitemporal data fusion

Guest Editor
College of Electrical and Information Engineering, Hunan University, Changsha, China
Interests: image processing; artificial intelligence

Special Issue Information

Dear Colleagues,

This Special Issue invites papers on multisource remote sensing image interpretation approaches and applications. Review and research articles on methodologies or applications, including their advantages and limitations, are welcome. We welcome contributions covering all aspects of multisource remote sensing interpretation and application; all contributions will undergo peer review.

Topics of particular interest include, but are not limited to:

  • Multisource remote sensing (panchromatic, multispectral, hyperspectral, LiDAR, SAR) data fusion;
  • Classification, pansharpening, and unmixing;
  • Remote sensing applications (e.g., river detection, road detection, flood detection, landslide monitoring, water resources management, land cover mapping, crop extraction);
  • Computer vision methods in multisource remote sensing image interpretation.

Prof. Dr. Hongmin Gao
Dr. Hao Li
Dr. Xin Li
Dr. Mingxiang Yang
Prof. Dr. Pedram Ghamisi
Dr. Bin Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Vision and language for Earth observation
  • Multisensor and multitemporal data fusion
  • Intelligent applications and multimedia processing
  • Deep fusion of multisource remote sensing data
  • Advanced machine learning methods

Published Papers (3 papers)


Research

18 pages, 38590 KiB  
Article
Extracting Plastic Greenhouses from Remote Sensing Images with a Novel U-FDS Net
by Yan Mo, Wanting Zhou and Wei Chen
Remote Sens. 2023, 15(24), 5736; https://doi.org/10.3390/rs15245736 - 15 Dec 2023
Viewed by 1038
Abstract
The fast and accurate extraction of plastic greenhouses over large areas is important for environmental and agricultural management. Traditional spectral index methods and object-based methods can suffer from poor transferability or high computational costs, and current deep learning-based algorithms are seldom specifically aimed at extracting plastic greenhouses at large scales. To extract plastic greenhouses at large scales with high accuracy, this study proposed a new deep learning-based network, U-FDS Net, specifically for plastic greenhouse extraction over large areas. U-FDS Net combines full-scale dense connections with adaptive deep supervision and has strong feature fusion capabilities, allowing more accurate extraction results. To test the extraction accuracy, this study compiled new greenhouse datasets covering Beijing and Shandong with more than 12,000 image samples in total. The results showed that the proposed U-FDS Net is particularly suitable for complex backgrounds and for reducing false positives on non-greenhouse ground objects, achieving the highest mIoU (mean intersection over union), an increase of ~2%. This study provides a high-performance method for plastic greenhouse extraction to support environmental management, pollution control and agricultural planning.
(This article belongs to the Special Issue Multisource Remote Sensing Image Interpretation and Application)
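The abstract above reports its headline result as mIoU (mean intersection over union). As a reference point for readers unfamiliar with the metric, the following is a minimal NumPy sketch of per-class IoU averaged over classes for a binary greenhouse/background map; the toy arrays are illustrative only and are not taken from the paper's Beijing or Shandong datasets.

```python
import numpy as np

def mean_iou(pred, target, num_classes=2):
    """Mean intersection over union over all classes present in either map.

    pred, target: integer class maps of identical shape
    (e.g. 0 = background, 1 = plastic greenhouse).
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:              # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny illustrative maps: one false-positive greenhouse pixel at (1, 1).
pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 1, 0], [0, 0, 0]])
print(mean_iou(pred, target))      # IoU is 3/4 for class 0, 2/3 for class 1
```

A "~2% mIoU increase", as reported, means this averaged ratio improves by about two percentage points over the compared baselines.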

18 pages, 9567 KiB  
Article
SiameseNet Based Fine-Grained Semantic Change Detection for High Resolution Remote Sensing Images
by Lili Zhang, Mengqi Xu, Gaoxu Wang, Rui Shi, Yi Xu and Ruijie Yan
Remote Sens. 2023, 15(24), 5631; https://doi.org/10.3390/rs15245631 - 05 Dec 2023
Viewed by 814
Abstract
Change detection in high resolution (HR) remote sensing images faces more challenges than in low resolution images because of the variation of land features, which prompts this research on faster and more accurate change detection methods. We propose a pixel-level semantic change detection method to solve fine-grained semantic change detection for HR remote sensing image pairs. It takes a lightweight semantic segmentation network (LightNet), built on the parameter-sharing SiameseNet, as the architecture to carry out pixel-level semantic segmentation of the dual-temporal image pairs, and achieves pixel-level change detection directly from the semantic comparison. LightNet consists of four long–short branches, each including lightweight dilated residual blocks and an information enhancement module. Feature information is transmitted, fused, and enhanced among the four branches: the two large-scale feature maps are fused and then enhanced via a channel information enhancement module, the two small-scale feature maps are fused and then enhanced via a spatial information enhancement module, and the four upsampled feature maps are finally concatenated to form the input of the Softmax. We used HR remote sensing images of Lake Erhai in Yunnan Province, China, collected by GF-2, to build a dataset with fine-grained semantic labels and dual-temporal image-pair labels to train our model. The experiments demonstrate the superiority of our method; the accuracies of LightNet and of the pixel-level semantic change detection are up to 89% and 86%, respectively.
(This article belongs to the Special Issue Multisource Remote Sensing Image Interpretation and Application)
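The core idea of the paper above, pixel-level change detection by direct semantic comparison, can be sketched compactly: because the Siamese branches share parameters, both dates are scored by the same segmentation function, and a pixel is "changed" when its predicted class differs between dates. The sketch below uses random stand-in score maps rather than LightNet outputs or GF-2 imagery, which are assumptions for illustration only.

```python
import numpy as np

def change_mask(logits_t1, logits_t2):
    """Pixel-level semantic change detection by semantic comparison.

    logits_t1, logits_t2: (H, W, C) class-score maps produced by the same
    (weight-shared) segmentation network for the two acquisition dates.
    Returns (change mask, labels at t1, labels at t2).
    """
    labels_t1 = logits_t1.argmax(axis=-1)
    labels_t2 = logits_t2.argmax(axis=-1)
    return labels_t1 != labels_t2, labels_t1, labels_t2

# Toy 2x2 maps with 3 classes; one pixel is forced to change class.
rng = np.random.default_rng(0)
t1 = rng.random((2, 2, 3))
t2 = t1.copy()                  # identical scores -> no change elsewhere
t1[0, 0] = [1.0, 0.0, 0.0]      # class 0 at date 1
t2[0, 0] = [0.0, 0.0, 1.0]      # class 2 at date 2
change, labels1, labels2 = change_mask(t1, t2)
print(change)
```

Comparing label maps rather than raw features is what makes the detected changes "semantic": the output says not only that a pixel changed but from which class to which.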

24 pages, 6085 KiB  
Article
SSCNet: A Spectrum-Space Collaborative Network for Semantic Segmentation of Remote Sensing Images
by Xin Li, Feng Xu, Xi Yong, Deqing Chen, Runliang Xia, Baoliu Ye, Hongmin Gao, Ziqi Chen and Xin Lyu
Remote Sens. 2023, 15(23), 5610; https://doi.org/10.3390/rs15235610 - 03 Dec 2023
Cited by 2 | Viewed by 1080
Abstract
Semantic segmentation plays a pivotal role in the intelligent interpretation of remote sensing images (RSIs). However, conventional methods predominantly focus on learning representations within the spatial domain, often resulting in suboptimal discriminative capabilities. Given the intrinsic spectral characteristics of RSIs, it becomes imperative to enhance the discriminative potential of these representations by integrating spectral context alongside spatial information. In this paper, we introduce the spectrum-space collaborative network (SSCNet), which is designed to capture both spectral and spatial dependencies, thereby elevating the quality of semantic segmentation in RSIs. Our innovative approach features a joint spectral–spatial attention module (JSSA) that concurrently employs spectral attention (SpeA) and spatial attention (SpaA). Instead of feature-level aggregation, we propose the fusion of attention maps to gather spectral and spatial contexts from their respective branches. Within SpeA, we calculate the position-wise spectral similarity using the complex spectral Euclidean distance (CSED) of the real and imaginary components of projected feature maps in the frequency domain. To comprehensively calculate both spectral and spatial losses, we introduce edge loss, Dice loss, and cross-entropy loss, subsequently merging them with appropriate weighting. Extensive experiments on the ISPRS Potsdam and LoveDA datasets underscore SSCNet's superior performance compared with several state-of-the-art methods. Furthermore, an ablation study confirms the efficacy of SpeA.
(This article belongs to the Special Issue Multisource Remote Sensing Image Interpretation and Application)
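The least familiar ingredient in the abstract above is the complex spectral Euclidean distance (CSED): a position-wise distance computed over the real and imaginary components of features projected into the frequency domain. The sketch below illustrates one plausible reading under stated assumptions: the frequency-domain projection is taken as an FFT along the channel axis, and similarity is formed as a softmax over negative distances; the paper's actual projection and normalisation may differ.

```python
import numpy as np

def csed(f1, f2):
    """Complex spectral Euclidean distance between two per-position feature
    vectors: Euclidean distance over the real and imaginary components of
    their frequency-domain projections (an FFT is assumed here)."""
    d = np.fft.fft(f1) - np.fft.fft(f2)
    return float(np.sqrt((d.real ** 2 + d.imag ** 2).sum()))

def spectral_similarity(feats):
    """Position-wise similarity matrix from CSED; smaller distance maps to
    larger attention weight via a softmax over negative distances."""
    flat = feats.reshape(-1, feats.shape[-1])           # (H*W, C)
    n = flat.shape[0]
    dist = np.array([[csed(flat[i], flat[j]) for j in range(n)]
                     for i in range(n)])
    e = np.exp(-dist)
    return e / e.sum(axis=1, keepdims=True)             # row-normalised

feats = np.arange(12, dtype=float).reshape(2, 2, 3)     # toy (H, W, C) map
att = spectral_similarity(feats)
print(att.shape)                                        # (4, 4) attention map
```

Each row of `att` is a distribution over all positions, which is the shape an attention map needs before it is fused with the spatial branch.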
