
Advanced Artificial Intelligence Algorithm for the Analysis of Remote Sensing Images II

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 February 2024) | Viewed by 14585

Special Issue Editors


Guest Editor
College of Electronic Science, National University of Defense Technology, Changsha 410073, China
Interests: remote sensing; SAR image processing; change detection; ground moving target indication; polarimetric SAR image classification

Guest Editor
College of Electronic Science, National University of Defense Technology, Changsha 410073, China
Interests: remote sensing; SAR image processing; SAR signal processing; object detection; image classification; feature extraction; simulation modeling

Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Interests: multitemporal SAR image processing; change detection; SAR image classification; object detection and tracking

Guest Editor
Senior Researcher, Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, Vas. Pavlou and I. Metaxa, 15236 Penteli, Greece
Interests: remote sensing; multispectral/hyperspectral imaging; imaging spectroscopy; optical/SAR sensors; image processing; geology; lithological and mineral mapping; terrestrial surface mapping

Special Issue Information

Dear Colleagues,

In the field of Earth observation, the massive volumes of remote sensing data acquired by satellites in orbit and by manned/unmanned aerial vehicles (UAVs) bring both opportunities and challenges for the analysis of remote sensing images. Artificial intelligence (AI) is an emerging technology well suited to big data applications, and how to interpret remote sensing images automatically, efficiently, and accurately remains a central and difficult topic in remote sensing research and applications. In recent years, AI, especially deep learning, has had a significant impact on the field, providing promising tools for overcoming many challenging issues in the analysis of remote sensing images in terms of accuracy and reliability.

In this Special Issue, we intend to compile a series of papers that merge the analysis and use of remote sensing images with AI techniques. We expect that new research will address practical problems in remote sensing image applications with the help of advanced AI methods.

Articles may address, but are not limited to, the following topics:

  • Advanced AI architectures for image classification;
  • Advanced AI-based target detection/recognition/tracking;
  • Change detection/semantic segmentation for remote sensing;
  • Multi-sensor data fusion/multi-modal data analysis;
  • Image super-resolution/restoration for remote sensing;
  • Unsupervised/weakly supervised learning for image processing;
  • Advanced AI techniques for remote sensing applications;
  • Clustering (including classic and more advanced tools, such as subspace clustering, clustering ensemble, etc.);
  • Spectral unmixing, adopting either linear or non-linear models, using Bayesian or non-Bayesian approaches for parameter estimation;
  • Dimensionality reduction;
  • Data transformations.

Prof. Dr. Gangyao Kuang
Dr. Siqian Zhang
Dr. Xin Su
Dr. Olga Sykioti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • image processing
  • target detection
  • change detection
  • data fusion
  • multispectral and hyperspectral images
  • synthetic aperture radar images
  • satellite video


Published Papers (10 papers)


Research


17 pages, 6627 KiB  
Article
Attention-Guided Fusion and Classification for Hyperspectral and LiDAR Data
by Jing Huang, Yinghao Zhang, Fang Yang and Li Chai
Remote Sens. 2024, 16(1), 94; https://doi.org/10.3390/rs16010094 - 25 Dec 2023
Cited by 1 | Viewed by 988
Abstract
The joint use of hyperspectral image (HSI) and Light Detection And Ranging (LiDAR) data has been widely applied for land cover classification because it can comprehensively represent the urban structures and land material properties. However, existing methods fail to combine the different image information effectively, which limits the semantic relevance of different data sources. To solve this problem, in this paper, an Attention-guided Fusion and Classification framework based on Convolutional Neural Network (AFC-CNN) is proposed to classify the land cover based on the joint use of HSI and LiDAR data. In the feature extraction module, AFC-CNN employs a three-dimensional convolutional neural network (3D-CNN) combined with a multi-scale structure to extract the spatial-spectral features of HSI, and uses a 2D-CNN to extract the spatial features from LiDAR data. Simultaneously, a spectral attention mechanism is adopted to assign weights to the spectral channels, and a cross attention mechanism is introduced to impart significant spatial weights from LiDAR to HSI, which enhances the interaction between HSI and LiDAR data and leverages the fusion information. The two feature branches are then concatenated and transferred to the feature fusion module for higher-level feature extraction and fusion. In the fusion module, AFC-CNN adopts depthwise separable convolutions connected through residual structures to obtain the advanced features, which helps reduce computational complexity and improve the fitting ability of the model. Finally, the fused features are sent to the linear classification module for final classification. Experimental results on three datasets, i.e., the Houston, MUUFL and Trento datasets, show that the proposed AFC-CNN framework achieves better classification accuracy than state-of-the-art algorithms. The overall accuracies of AFC-CNN on the Houston, MUUFL and Trento datasets are 94.2%, 95.3% and 99.5%, respectively. Full article
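The cross-attention step described above, in which LiDAR-derived spatial weights rescale the HSI features, can be illustrated with a minimal NumPy sketch. The function name, mean-pooling, and sigmoid gating here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cross_attention_fuse(hsi_feat, lidar_feat):
    """Weight HSI features by a spatial attention map derived from LiDAR.

    hsi_feat:   (H, W, C) spatial-spectral features from the HSI branch
    lidar_feat: (H, W, K) spatial features from the LiDAR branch
    """
    # Collapse LiDAR channels into a single spatial saliency map
    saliency = lidar_feat.mean(axis=-1)              # (H, W)
    # Sigmoid keeps weights in (0, 1): HSI features are rescaled, not replaced
    attn = 1.0 / (1.0 + np.exp(-saliency))           # (H, W)
    # Broadcast the spatial weights over every spectral channel of the HSI
    return hsi_feat * attn[..., None]                # (H, W, C)

fused = cross_attention_fuse(np.ones((4, 4, 8)), np.zeros((4, 4, 2)))
```

In an actual network the attention map would be learned; here a flat LiDAR input simply yields a uniform 0.5 weighting.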

25 pages, 5825 KiB  
Article
RAU-Net-Based Imaging Method for Spatial-Variant Correction and Denoising in Multiple-Input Multiple-Output Radar
by Jianfei Ren, Ying Luo, Changzhou Fan, Weike Feng, Linghua Su and Huan Wang
Remote Sens. 2024, 16(1), 80; https://doi.org/10.3390/rs16010080 - 24 Dec 2023
Viewed by 860
Abstract
The conventional back projection (BP) algorithm is an accurate time-domain algorithm widely used for multiple-input multiple-output (MIMO) radar imaging, owing to its independence of antenna array configuration. The time-delay curve correction back projection (TCC-BP) algorithm greatly reduces the computational complexity of BP but suffers from spatial-variant errors, sidelobe interference and background noise due to the use of coherent superposition of echo time-delay curves. In this article, a residual attention U-Net-based (RAU-Net) MIMO radar imaging method is proposed that adapts to complex noisy scenarios with spatial variation and sidelobe interference. On the basis of the U-Net underlying structure, we develop the RAU-Net with two modules, a residual unit with identity mapping and a dual attention module, to achieve spatial-variant resolution correction and denoising on real-world MIMO radar images. The network realizes MIMO radar imaging based on the TCC-BP algorithm and substantially reduces the total computational time of the BP algorithm while improving the imaging resolution and denoising capability. Extensive experiments on simulated and measured data demonstrate that the proposed method outperforms both traditional methods and learning-based imaging methods in terms of spatial-variant correction, denoising and computational complexity. Full article
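For readers unfamiliar with the baseline, the delay-and-sum principle behind BP imaging can be sketched as follows. This is a bare-bones illustration with assumed array and sampling parameters, not the TCC-BP or RAU-Net method itself:

```python
import numpy as np

def back_projection(echoes, tx_pos, rx_pos, pixels, fs, c=3e8):
    """Delay-and-sum BP: accumulate each echo at the pixel's round-trip delay.

    echoes:  (n_ch, n_samp) sampled echo per Tx/Rx channel
    tx_pos:  (n_ch, 2) transmit phase-centre positions
    rx_pos:  (n_ch, 2) receive phase-centre positions
    pixels:  (n_pix, 2) imaging-grid coordinates
    fs:      sampling rate in Hz
    """
    image = np.zeros(len(pixels))
    for ch in range(len(echoes)):
        # Two-way path length from Tx to each pixel and back to Rx
        d = (np.linalg.norm(pixels - tx_pos[ch], axis=1) +
             np.linalg.norm(pixels - rx_pos[ch], axis=1))
        idx = np.round(d / c * fs).astype(int)       # delay -> sample index
        idx = np.clip(idx, 0, echoes.shape[1] - 1)
        image += echoes[ch, idx]                     # coherent accumulation
    return image

# One channel, one point echo: only the pixel at the matching range lights up
echo = np.zeros((1, 64))
echo[0, 20] = 1.0
img = back_projection(echo, np.array([[0.0, 0.0]]), np.array([[0.0, 0.0]]),
                      np.array([[3.0, 0.0], [1.5, 0.0]]), fs=1e9)
```

The pixel 3 m away has a round-trip delay of 20 samples and matches the echo spike; the 1.5 m pixel does not. The per-pixel, per-channel loop is exactly the cost the TCC-BP and learning-based methods try to reduce.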

14 pages, 4101 KiB  
Communication
Shadow-Based False Target Identification for SAR Images
by Haoyu Zhang, Sinong Quan, Shiqi Xing, Junpeng Wang, Yongzhen Li and Ping Wang
Remote Sens. 2023, 15(21), 5259; https://doi.org/10.3390/rs15215259 - 6 Nov 2023
Viewed by 870
Abstract
In radar electronic countermeasures, as the difference between jamming and targets continues to decrease, traditional methods based on classical features are no longer able to meet the requirements of jamming detection. Compared with classical features such as texture, scale, and shape, shadow offers better discernability and separability. In this paper, target shadow is investigated and applied to detect jamming in Synthetic Aperture Radar (SAR) images, and a SAR false target identification method based on shadow features is proposed. First, a difference image is generated by change detection, which can extract the shadow region in single-time SAR images. Then, a three-step differentiation condition is proposed, which can distinguish false targets from real targets. Simulated experimental results show that the proposed method can effectively extract the shadow region in SAR images and accurately distinguish real and false targets. Furthermore, the potential of shadow in SAR image interpretation and electronic countermeasures is also demonstrated. Full article
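The difference image underlying such a change-detection step is often computed with a log-ratio operator, a common choice for SAR because speckle noise is multiplicative. A generic sketch, not necessarily the paper's exact formulation:

```python
import numpy as np

def log_ratio_difference(img_a, img_b, eps=1e-6):
    """Log-ratio change map; ratios turn multiplicative speckle into an
    additive disturbance, so unchanged areas cluster near zero."""
    return np.abs(np.log((img_a + eps) / (img_b + eps)))

a = np.full((3, 3), 4.0)
b = np.full((3, 3), 4.0)
b[1, 1] = 1.0                       # one darkened pixel, e.g. a shadow appearing
change = log_ratio_difference(a, b)
mask = change > 0.5                 # simple threshold on the difference image
```

Unchanged pixels give log(1) = 0, while the darkened pixel gives |log 4| ≈ 1.39 and survives the threshold.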

21 pages, 53845 KiB  
Article
A Five-Component Decomposition Method with General Rotated Dihedral Scattering Model and Cross-Pol Power Assignment
by Yancui Duan, Sinong Quan, Hui Fan, Zhenhai Xu and Shunping Xiao
Remote Sens. 2023, 15(18), 4512; https://doi.org/10.3390/rs15184512 - 13 Sep 2023
Viewed by 740
Abstract
Model-based polarimetric decomposition is extensively studied due to its simplicity and clear physical interpretation of Polarimetric Synthetic Aperture Radar (PolSAR) data. Though there are many fine basic scattering models and well-designed decomposition methods, the overestimation of volume scattering (OVS) may still occur for highly oriented buildings, resulting in severe scattering mechanism ambiguity. It is well known that not only vegetation areas but also oriented buildings may cause intense cross-pol power. To resolve the scattering mechanism ambiguity, an appropriate scattering model for oriented buildings and a feasible strategy to assign the cross-pol power between vegetation and oriented buildings are of equal importance. From this point of view, we propose a five-component decomposition method with a general rotated dihedral scattering model and an assignment strategy for cross-pol power. The general rotated dihedral scattering model is established to characterize the integral and internal cross-pol scattering from oriented buildings, while the assignment of cross-pol power between volume and rotated dihedral scattering is achieved by using an eigenvalue-based descriptor DOOB. In addition, a simple branch condition with explicit physical meaning is proposed for model parameter inversion. Experiments on spaceborne Radarsat-2 C band and airborne UAVSAR L band PolSAR datasets demonstrate the effectiveness and advantages of the proposed method in the quantitative characterization of scattering mechanisms, especially for highly oriented buildings. Full article

27 pages, 14986 KiB  
Article
Registration of Large Optical and SAR Images with Non-Flat Terrain by Investigating Reliable Sparse Correspondences
by Han Zhang, Lin Lei, Weiping Ni, Kenan Cheng, Tao Tang, Peizhong Wang and Gangyao Kuang
Remote Sens. 2023, 15(18), 4458; https://doi.org/10.3390/rs15184458 - 10 Sep 2023
Viewed by 915
Abstract
Optical and SAR image registration is the primary procedure to exploit the complementary information from the two different image modal types. Although extensive research has been conducted to narrow down the vast radiometric and geometric gaps so as to extract homogeneous characters for feature point matching, few works have considered the registration issue for non-flat terrains, which will bring in more difficulties for not only sparse feature point matching but also outlier removal and geometric relationship estimation. This article addresses these issues with a novel and effective optical-SAR image registration framework. Firstly, sparse feature points are detected based on the phase congruency moment map of the textureless SAR image (SAR-PC-Moment), which helps to identify salient local regions. Then a template matching process using very large local image patches is conducted, which increases the matching accuracy by a significant margin. Secondly, a mutual verification-based initial outlier removal method is proposed, which takes advantage of the different mechanisms of sparse and dense matching and requires no geometric consistency assumption within the inliers. These two procedures will produce a putative correspondence feature point (CP) set with a low outlier ratio and high reliability. In the third step, the putative CPs are used to segment the large input image of non-flat terrain into dozens of locally flat areas using a recursive random sample consensus (RANSAC) method, with each locally flat area co-registered using an affine transformation. As for the mountainous areas with sharp elevation variations, anchor CPs are first identified, and then optical flow-based pixelwise dense matching is conducted. 
In the experimental section, ablation studies using four precisely co-registered optical-SAR image pairs of flat terrain quantitatively verify the effectiveness of the proposed SAR-PC-Moment-based feature point detector, big template matching strategy, and mutual verification-based outlier removal method. Registration results on four 1 m-resolution non-flat image pairs prove that the proposed framework is able to produce robust and quite accurate registration results. Full article
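The recursive segmentation into locally flat areas builds on standard RANSAC affine estimation, which can be sketched as follows. This is a textbook single-pass version with assumed iteration counts and thresholds, not the authors' recursive variant:

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, thresh=1.0, seed=0):
    """Estimate a 2-D affine transform from noisy correspondences with RANSAC.

    src, dst: (N, 2) matched point coordinates.
    Returns (A, inlier_mask), where dst ~= [x, y, 1] @ A.T for inliers.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])         # homogeneous source points
    best_mask = np.zeros(n, bool)
    for _ in range(n_iter):
        pick = rng.choice(n, 3, replace=False)    # 3 points fix an affine map
        A, *_ = np.linalg.lstsq(X[pick], dst[pick], rcond=None)
        err = np.linalg.norm(X @ A - dst, axis=1)
        mask = err < thresh                       # points explained by this model
        if mask.sum() > best_mask.sum():
            best_mask = mask
    # Refit on all inliers for the final estimate
    A, *_ = np.linalg.lstsq(X[best_mask], dst[best_mask], rcond=None)
    return A.T, best_mask

# 7 points shifted by (5, -2) plus one gross outlier
pts = np.array([[0, 0], [1, 0], [0, 1], [2, 3], [4, 1], [3, 3], [5, 2], [1, 4]], float)
dst = pts + np.array([5.0, -2.0])
dst[0] = [100.0, 100.0]                           # outlier correspondence
A, mask = ransac_affine(pts, dst)
```

The recovered transform is the pure translation and the outlier is rejected; the paper's framework applies such fits recursively so each locally flat terrain patch gets its own affine model.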

18 pages, 11022 KiB  
Article
SAR and Optical Image Registration Based on Deep Learning with Co-Attention Matching Module
by Jiaxing Chen, Hongtu Xie, Lin Zhang, Jun Hu, Hejun Jiang and Guoqian Wang
Remote Sens. 2023, 15(15), 3879; https://doi.org/10.3390/rs15153879 - 4 Aug 2023
Cited by 2 | Viewed by 1462
Abstract
Image registration is the basis for the joint interpretation of synthetic aperture radar (SAR) and optical images. However, the significant nonlinear radiation difference (NRD) and the geometric imaging model difference render the registration quite challenging. To solve this problem, both traditional and deep learning methods are used to extract structural information with dense descriptions of the images, but they ignore that structural information of the image pair is coupled and often process images separately. In this paper, a deep learning-based registration method with a co-attention matching module (CAMM) for SAR and optical images is proposed, which integrates structural feature maps of the image pair to extract keypoints of a single image. First, joint feature detection and description are carried out densely in both images, for which the features are robust to radiation and geometric variation. Then, a CAMM is used to integrate both images’ structural features and generate the final keypoint feature maps so that the extracted keypoints are more distinctive and repeatable, which is beneficial to global registration. Finally, considering the difference in the imaging mechanism between SAR and optical images, this paper proposes a new sampling strategy that selects positive samples from the ground-truth position’s neighborhood and augments negative samples by randomly sampling distractors in the corresponding image, which makes positive samples more accurate and negative samples more abundant. The experimental results show that the proposed method can significantly improve the accuracy of SAR–optical image registration. Compared to the existing conventional and deep learning methods, the proposed method yields a detector with better repeatability and a descriptor with stronger modality-invariant feature representation. Full article

21 pages, 127743 KiB  
Article
Unsupervised Change Detection for VHR Remote Sensing Images Based on Temporal-Spatial-Structural Graphs
by Junzheng Wu, Weiping Ni, Hui Bian, Kenan Cheng, Qiang Liu, Xue Kong and Biao Li
Remote Sens. 2023, 15(7), 1770; https://doi.org/10.3390/rs15071770 - 25 Mar 2023
Cited by 1 | Viewed by 1761
Abstract
With the aim of automatically extracting fine change information from ground objects, change detection (CD) for very high resolution (VHR) remote sensing images is essential in various applications. However, the increase in spatial resolution, more complicated interactive relationships of ground objects, more evident diversity of spectra, and more severe speckle noise make accurately identifying relevant changes more challenging. To address these issues, an unsupervised temporal-spatial-structural graph is proposed for CD tasks. Treating each superpixel as a node of the graph, the structural information of ground objects presented by the parent–offspring relationships of coarse and fine segmentation scales is introduced to define the temporal-structural neighborhood, which is then incorporated with the spatial neighborhood to form the temporal-spatial-structural neighborhood. The graphs defined on such neighborhoods extend the interactive range among nodes from two dimensions to three, which can more fully exploit the structural and contextual information of bi-temporal images. Subsequently, a metric function is designed according to the spectral and structural similarity between graphs to measure the level of changes, which is more reasonable due to the comprehensive utilization of temporal-spatial-structural information. The experimental results on both VHR optical and SAR images demonstrate the superiority and effectiveness of the proposed method. Full article

20 pages, 5031 KiB  
Article
ResiDualGAN: Resize-Residual DualGAN for Cross-Domain Remote Sensing Images Semantic Segmentation
by Yang Zhao, Peng Guo, Zihao Sun, Xiuwan Chen and Han Gao
Remote Sens. 2023, 15(5), 1428; https://doi.org/10.3390/rs15051428 - 3 Mar 2023
Cited by 10 | Viewed by 2522
Abstract
The performance of a semantic segmentation model for remote sensing (RS) images pre-trained on an annotated dataset decreases greatly when tested on another, unannotated dataset because of the domain gap. Adversarial generative methods, e.g., DualGAN, are utilized for unpaired image-to-image translation to minimize the pixel-level domain gap, which is one of the common approaches for unsupervised domain adaptation (UDA). However, existing image translation methods face two problems when performing RS image translation: (1) they ignore the scale discrepancy between two RS datasets, which greatly affects the accuracy for scale-invariant objects; (2) they ignore the real-to-real nature of RS image translation, which introduces an unstable factor into the training of the models. In this paper, ResiDualGAN is proposed for RS image translation, where an in-network resizer module addresses the scale discrepancy of RS datasets and a residual connection strengthens the stability of real-to-real image translation and improves performance in cross-domain semantic segmentation tasks. Combined with an output space adaptation method, the proposed method greatly improves accuracy on common benchmarks, which demonstrates the superiority and reliability of ResiDualGAN. At the end of the paper, a thorough discussion is conducted to provide a reasonable explanation for the improvement of ResiDualGAN. Our source code is also available. Full article

18 pages, 3430 KiB  
Article
RoadFormer: Road Extraction Using a Swin Transformer Combined with a Spatial and Channel Separable Convolution
by Xiangzeng Liu, Ziyao Wang, Jinting Wan, Juli Zhang, Yue Xi, Ruyi Liu and Qiguang Miao
Remote Sens. 2023, 15(4), 1049; https://doi.org/10.3390/rs15041049 - 15 Feb 2023
Cited by 12 | Viewed by 2486
Abstract
The accurate detection and extraction of roads using remote sensing technology are crucial to the development of the transportation industry and intelligent perception tasks. Recently, in view of the advantages of CNNs in feature extraction, a series of CNN-based road extraction methods have been proposed. However, due to the limitation of kernel size, they are less effective at capturing long-range information and global context, which are crucial for road targets that are distributed over long distances and highly structured. To deal with this problem, a novel model named RoadFormer, with a Swin Transformer as the backbone, is developed in this paper. Firstly, to extract long-range information effectively, a Swin Transformer multi-scale encoder is adopted in our model. Secondly, to enhance the feature representation capability of the model, we design an innovative bottleneck module, in which a spatial and channel separable convolution is employed to obtain fine-grained and global features, and a dilated block is connected after the spatial convolution module to capture more complete road structures. Finally, a lightweight decoder consisting of transposed convolutions and skip connections generates the final extraction results. Extensive experimental results confirm the advantages of RoadFormer on the DeepGlobe and Massachusetts datasets. The comparative results of visualization and quantification demonstrate that our model outperforms comparable methods. Full article
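The spatial and channel separable convolution named in the title factorises a full convolution into a depthwise (spatial) pass followed by a pointwise (channel-mixing) pass. A naive loop-based NumPy sketch of the idea, written for clarity rather than speed and not the paper's implementation:

```python
import numpy as np

def separable_conv(x, depthwise_k, pointwise_w):
    """Spatial (depthwise) then channel (pointwise) convolution.

    x:           (H, W, C) feature map
    depthwise_k: (k, k, C) one spatial kernel per input channel
    pointwise_w: (C, C_out) 1x1 channel-mixing weights
    """
    k = depthwise_k.shape[0]
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))   # zero padding, same size out
    out = np.zeros_like(x)
    for i in range(H):                 # depthwise: each channel filtered alone
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]            # (k, k, C) window
            out[i, j, :] = (patch * depthwise_k).sum(axis=(0, 1))
    return out @ pointwise_w           # pointwise: 1x1 conv mixes channels

y = separable_conv(np.ones((4, 4, 1)), np.ones((3, 3, 1)), np.eye(1))
# interior pixels see the full 3x3 window of ones; corners see only 4 of them
```

Splitting the k×k×C×C_out kernel this way costs roughly k²·C + C·C_out multiplies per pixel instead of k²·C·C_out, which is why such factorised convolutions are popular in efficiency-minded designs.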

Other


20 pages, 9500 KiB  
Technical Note
End-to-End Detail-Enhanced Dehazing Network for Remote Sensing Images
by Weida Dong, Chunyan Wang, Hao Sun, Yunjie Teng, Huan Liu, Yue Zhang, Kailin Zhang, Xiaoyan Li and Xiping Xu
Remote Sens. 2024, 16(2), 225; https://doi.org/10.3390/rs16020225 - 6 Jan 2024
Cited by 1 | Viewed by 920
Abstract
Space probes are always obstructed by floating matter in the atmosphere (clouds, haze, rain, etc.) during imaging, resulting in the loss of a significant amount of detailed information in remote sensing images and severely reducing their quality. To address the problem of detailed information loss in remote sensing images, we propose an end-to-end detail enhancement network that directly removes haze from remote sensing images, restores the detailed information of the image, and improves its quality. To enhance the detailed information of the image, we design a multi-scale detail enhancement unit and a stepped attention detail enhancement unit. The former extracts multi-scale information from images, integrates global and local information, and constrains the haze to enhance the image details. The latter uses the attention mechanism to adaptively process the uneven haze distribution in remote sensing images at three depths: deep, middle and shallow. It focuses on effective information such as haze and high-frequency components to further enhance the detailed information of the image. In addition, we embed the designed parallel normalization module in the network to further improve the dehazing performance and robustness of the network. Experimental results on the SateHaze1k and HRSD datasets demonstrate that our method effectively handles remote sensing images obscured by various levels of haze, restores the detailed information of the images, and outperforms the current state-of-the-art haze removal methods. Full article
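The classical atmospheric scattering model that such dehazing networks implicitly invert is I = J·t + A·(1 − t), where J is the haze-free scene, t the transmission and A the airlight. A minimal sketch of the model and its inversion; the end-to-end network in the paper learns this mapping rather than using explicit t and A:

```python
import numpy as np

def apply_haze(J, t, A):
    """Atmospheric scattering model: observed I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model given transmission t and airlight A."""
    t = np.maximum(t, t_min)        # clamp t to avoid amplifying noise where t ~ 0
    return (I - A) / t + A

J = np.full((2, 2), 0.8)            # haze-free radiance
I = apply_haze(J, t=0.5, A=1.0)     # hazy observation, brightened toward airlight
restored = dehaze(I, t=0.5, A=1.0)  # recovers J when t and A are known
```

With ground-truth t and A the inversion is exact; the hard part, which learning-based methods like this one tackle, is that neither quantity is known for a real hazy image.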
