Explainable Deep Neural Networks for Remote Sensing Image Understanding II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 January 2024) | Viewed by 3605

Special Issue Editors

School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi’an 710021, China
Interests: image processing; machine learning
School of Geophysics and Geomatics, China University of Geosciences, Wuhan, China
Interests: intelligent representation and calculation of geological information; geological environment monitoring and evaluation; geospatial information
Department of Electronic and Electrical Engineering, Brunel University London, London, UK
Interests: signal processing; wireless communication; machine condition monitoring; biomedical signal processing; data analytics; machine learning; higher order statistics

Special Issue Information

Dear Colleagues,

Thank you for your excellent responses to the first volume of this Special Issue. In this second volume, we maintain the focus on explainable deep neural networks for remote sensing image understanding. Deep neural networks have been widely used in remote sensing image analysis and its applications, e.g., classification, detection, regression, and inversion. Although these networks have achieved considerable success in remote sensing image understanding, they still suffer from a black-box problem, since both the feature extraction and the classifier design are learned automatically. This problem seriously limits the development of deep learning and its applications in the field of remote sensing image understanding.

In recent years, many explainable deep neural network models have been reported by the machine learning community, such as channel attention, spatial attention, self-attention, and Transformer networks. These networks have, to some extent, promoted the development of explainable deep learning and addressed some important problems in remote sensing image analysis. On the other hand, remote sensing applications usually involve exact physical models, e.g., the radiative transfer model, the linear unmixing model, and spatiotemporal autocorrelation, which can also effectively model the formation process linking remote sensing data to land-cover observation and environmental parameter monitoring. How to effectively integrate popular deep neural networks with these traditional remote sensing physical models is currently the main challenge in remote sensing image understanding. Research on theoretically and physically explainable deep convolutional neural networks is therefore one of the most active topics and can offer important advantages in remote sensing image understanding applications.

This Special Issue aims to publish high-quality research papers as well as salient and informative review articles addressing emerging trends in remote sensing image understanding that combine explainable deep networks with remote sensing physical models. Original contributions, not currently under review in a journal or a conference, are solicited in relevant areas including, but not limited to, the following (an illustrative channel-attention sketch follows the list):

  • CNN/Transformer networks for object detection, segmentation, and recognition in remote sensing images.
  • The hybrid architecture of CNN and Transformer for remote sensing image applications.
  • Compact deep network models for remote sensing image applications.
  • Physical models integrated with deep convolutional neural networks for remote sensing image applications.
  • Hybrid models joining data-driven and model-driven approaches for remote sensing image applications.
  • Incorporating geographical laws into deep convolutional neural networks for remote sensing image applications.
  • Reviews/surveys of remote sensing image processing.
  • New remote sensing image datasets.
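
To make one of the attention mechanisms mentioned above concrete, the following is a minimal squeeze-and-excitation-style channel-attention block in PyTorch. It is an illustrative sketch only: the class name, reduction ratio, and layer choices are our own assumptions, not code from any paper in this Special Issue.

```python
# A minimal squeeze-and-excitation style channel-attention block (illustrative sketch).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(             # excitation: learn one weight per channel
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # reweight channels by learned importance

# Example: attend over the 64 channels of a feature map.
feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```

The learned per-channel weights are what give such blocks a degree of interpretability: inspecting them shows which feature channels the network emphasizes for a given input.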

Dr. Tao Lei
Prof. Dr. Lefei Zhang
Dr. Tao Chen
Prof. Dr. Asoke K. Nandi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • remote sensing image analysis
  • land-cover observation
  • earth environmental monitoring
  • attention mechanism
  • data-driven and model-driven
  • physical models

Published Papers (3 papers)


Research

19 pages, 6565 KiB  
Article
Recurrent Residual Deformable Conv Unit and Multi-Head with Channel Self-Attention Based on U-Net for Building Extraction from Remote Sensing Images
by Wenling Yu, Bo Liu, Hua Liu and Guohua Gou
Remote Sens. 2023, 15(20), 5048; https://doi.org/10.3390/rs15205048 - 20 Oct 2023
Cited by 1 | Viewed by 739
Abstract
Considering the challenges associated with accurately identifying building shape features and distinguishing between building and non-building features during the extraction of buildings from remote sensing images using deep learning, we propose a novel method for building extraction based on U-Net, incorporating a recurrent residual deformable convolution unit (RDCU) module and augmented multi-head self-attention (AMSA). By replacing conventional convolution modules with an RDCU, which adopts a deformable convolutional neural network within a residual network structure, the proposed method enhances the module’s capacity to learn intricate details such as building shapes. Furthermore, AMSA is introduced into the skip connection function to enhance feature expression and positions through content–position enhancement operations and content–content enhancement operations. Moreover, AMSA integrates an additional fusion channel attention mechanism to aid in identifying cross-channel feature expression differences. For the Massachusetts dataset, the proposed method achieves an Intersection over Union (IoU) score of 89.99%, a Pixel Accuracy (PA) score of 93.62%, and a Recall score of 89.22%. For the WHU Satellite dataset I, the proposed method achieves an IoU score of 86.47%, a PA score of 92.45%, and a Recall score of 91.62%. For the INRIA dataset, the proposed method achieves an IoU score of 80.47%, a PA score of 90.15%, and a Recall score of 85.42%.
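
As context for the RDCU idea, below is a hedged sketch of a residual unit built around a deformable convolution, using PyTorch and torchvision. The offset-predicting convolution, the normalization choice, and the omission of the recurrent part are illustrative assumptions, not the authors' implementation.

```python
# A hedged sketch of a residual unit around a deformable convolution,
# in the spirit of the RDCU described above (recurrence omitted).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformResidualUnit(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # A plain conv predicts (dy, dx) offsets for each of the k*k kernel sample points.
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.deform(x, self.offset(x))  # sampling grid adapts to shape cues
        out = self.relu(self.bn(out))
        return x + out                        # residual connection

feat = torch.randn(1, 32, 64, 64)
print(DeformResidualUnit(32)(feat).shape)  # torch.Size([1, 32, 64, 64])
```

Because the sampling locations are learned per pixel, such a unit can follow irregular building outlines more closely than a fixed convolution grid.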

25 pages, 39170 KiB  
Article
Urban Vegetation Extraction from High-Resolution Remote Sensing Imagery on SD-UNet and Vegetation Spectral Features
by Na Lin, Hailin Quan, Jing He, Shuangtao Li, Maochi Xiao, Bin Wang, Tao Chen, Xiaoai Dai, Jianping Pan and Nanjie Li
Remote Sens. 2023, 15(18), 4488; https://doi.org/10.3390/rs15184488 - 12 Sep 2023
Cited by 2 | Viewed by 1083
Abstract
Urban vegetation plays a crucial role in the urban ecological system, and the efficient, accurate extraction of urban vegetation information has been a pressing task. Although the development of deep learning brings great advantages for vegetation extraction, problems remain, such as ultra-fine vegetation omissions, a heavy computational burden, and unstable model performance. Therefore, a Separable Dense U-Net (SD-UNet) was proposed by introducing dense connections, separable convolutions, batch normalization layers, and the Tanh activation function into U-Net. Furthermore, the Fake sample set (NIR-RG), NDVI sample set (NDVI-RG), and True sample set (RGB) were established to train SD-UNet. The obtained models were validated and applied to four scenes (a high-density building area, a cloudy and misty area, a park, and a suburb) and two administrative divisions. The experimental results show that the Fake sample set can effectively improve the model’s vegetation extraction accuracy. SD-UNet achieves the highest accuracy compared with other methods (U-Net, SegNet, NDVI, RF) on the Fake sample set, with ACC, IoU, and Recall reaching 0.9581, 0.8977, and 0.9577, respectively. It can be concluded that SD-UNet trained on the Fake sample set not only benefits vegetation extraction but also has better generalization ability and transferability.
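
To illustrate the building blocks named in this abstract, here is a minimal depthwise-separable convolution block with batch normalization and the Tanh activation in PyTorch; the exact arrangement inside SD-UNet may differ, and the class name is our own.

```python
# A minimal depthwise-separable convolution block (illustrative sketch).
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        # Pointwise: 1x1 conv mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.Tanh()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# A 3x3 separable conv uses roughly in_ch*9 + in_ch*out_ch weights instead of
# in_ch*out_ch*9 for a standard conv, which is where the lighter computational
# burden mentioned in the abstract comes from.
x = torch.randn(1, 3, 128, 128)  # e.g. an NIR-R-G "Fake" composite
print(SeparableConvBlock(3, 64)(x).shape)  # torch.Size([1, 64, 128, 128])
```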

24 pages, 6273 KiB  
Article
C-RISE: A Post-Hoc Interpretation Method of Black-Box Models for SAR ATR
by Mingzhe Zhu, Jie Cheng, Tao Lei, Zhenpeng Feng, Xianda Zhou, Yuanjing Liu and Zhihan Chen
Remote Sens. 2023, 15(12), 3103; https://doi.org/10.3390/rs15123103 - 14 Jun 2023
Viewed by 975
Abstract
The integration of deep learning methods, especially Convolutional Neural Networks (CNNs), with Synthetic Aperture Radar Automatic Target Recognition (SAR ATR) has been widely deployed in the field of radar signal processing. Nevertheless, these methods are frequently regarded as black-box models due to the limited visual interpretation of their internal feature representation and parameter organization. In this paper, we propose an innovative approach named C-RISE, which builds upon the RISE algorithm to provide a post-hoc interpretation technique for black-box models used in SAR image target recognition. C-RISE generates saliency maps that effectively visualize the significance of each pixel. Our algorithm outperforms RISE by clustering masks that capture similar fusion features into distinct groups, enabling more appropriate weight distribution and increased focus on the target area. Furthermore, we employ Gaussian blur to process the masked area, preserving the original image structure with optimal consistency and integrity. C-RISE has been extensively evaluated through experiments, and the results demonstrate superior performance over other perturbation-based interpretation methods when applied to neural networks for SAR image target recognition. Moreover, our approach is highly robust and transferable compared with other interpretable algorithms, including white-box methods.
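
To show the flavor of RISE-style perturbation saliency with a Gaussian-blur fill (one of the two C-RISE ideas), here is a hedged PyTorch sketch. The mask-clustering step of C-RISE is omitted, and the function name and all parameters are illustrative assumptions rather than the authors' code.

```python
# A hedged sketch of RISE-style perturbation saliency with a Gaussian-blur
# baseline for the hidden regions (C-RISE's mask clustering is omitted).
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def rise_saliency(model, image, target_class, n_masks=500, grid=7, p_keep=0.5):
    """image: (1, C, H, W) tensor; returns an (H, W) saliency map."""
    _, _, h, w = image.shape
    blurred = gaussian_blur(image, kernel_size=21)  # structure-preserving baseline
    saliency = torch.zeros(h, w)
    for _ in range(n_masks):
        # Coarse random binary grid, upsampled into a smooth soft mask.
        m = (torch.rand(1, 1, grid, grid) < p_keep).float()
        m = F.interpolate(m, size=(h, w), mode="bilinear", align_corners=False)
        # Blur, rather than zero out, the hidden area.
        perturbed = image * m + blurred * (1 - m)
        with torch.no_grad():
            score = model(perturbed).softmax(dim=1)[0, target_class]
        saliency += score * m[0, 0]  # weight each mask by the model's confidence
    return saliency / n_masks
```

Pixels that remain visible in high-scoring masks accumulate large saliency values, which is what makes the resulting map a per-pixel importance estimate for the black-box classifier.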
