
Advances in Radar Imaging with Deep Learning Algorithms

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 26 May 2024

Special Issue Editor


Dr. Jean-Marc Le Caillec
Guest Editor
IMT Atlantique, 44300 Nantes, France
Interests: SAR; target detection; deep learning; radar processing; EM wave propagation and scattering; active sensor image processing; data fusion methods and metrics; explainable AI; non-Gaussian statistics

Special Issue Information

Dear Colleagues,

In recent decades, the radar remote sensing community has gathered an unprecedented amount of data, boosting the development of a growing number of applications in areas ranging from remote sensing of terrain and sea to medical imaging. The efficacy of radar imaging approaches is key to the development of advanced data processing strategies capable of detecting and tracking targets in challenging scenarios. For example, target detection using multiple-input multiple-output (MIMO) radars has recently gained popularity in radar research. Moreover, a set of new analytical tools has been proposed and applied to convolutional neural networks (CNNs) performing automatic target recognition on SAR datasets.

In this Special Issue, we intend to compile a series of papers that merge the analysis and use of radar images with AI techniques. We expect new research that will address practical problems in radar image applications with the help of advanced AI methods.

Articles may address, but are not limited to, the following topics:

  • Advanced AI-based target detection/recognition/tracking;
  • Radar image intelligent processing;
  • AI-based updates to SAR imaging algorithms;
  • Combination of advanced signal processing and artificial intelligence techniques;
  • New datasets for remote sensing image classification with deep learning;
  • New radar systems, such as MIMO radars, distributed radars, bistatic and multistatic radars, and so on.

Dr. Jean-Marc Le Caillec
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can access the submission form on the journal website. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic aperture radar (SAR)
  • target detection/recognition
  • radar imaging
  • non-linearity detection
  • deep learning
  • machine learning
  • artificial intelligence
  • signal and image processing
  • passive and active sensors
  • array clustering

Published Papers (5 papers)


Research

19 pages, 4839 KiB  
Article
Intelligent Reconstruction of Radar Composite Reflectivity Based on Satellite Observations and Deep Learning
by Jianyu Zhao, Jinkai Tan, Sheng Chen, Qiqiao Huang, Liang Gao, Yanping Li and Chunxia Wei
Remote Sens. 2024, 16(2), 275; https://doi.org/10.3390/rs16020275 - 10 Jan 2024
Abstract
Weather radar is a useful tool for monitoring and forecasting severe weather but has limited coverage due to beam blockage from mountainous terrain or other factors. To overcome this issue, an intelligent technology called “Echo Reconstruction UNet (ER-UNet)” is proposed in this study. It reconstructs radar composite reflectivity (CREF) using observations from Fengyun-4A geostationary satellites with broad coverage. In general, ER-UNet outperforms UNet in terms of root mean square error (RMSE), mean absolute error (MAE), structural similarity index (SSIM), probability of detection (POD), false alarm rate (FAR), critical success index (CSI), and Heidke skill score (HSS). Additionally, ER-UNet provides better reconstruction of CREF than the UNet model in terms of the intensity, location, and details of radar echoes (particularly strong echoes). ER-UNet can effectively reconstruct strong echoes and provide crucial decision-making information for early warning of severe weather.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
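The categorical skill scores cited in this abstract (POD, FAR, CSI, HSS) have standard contingency-table definitions in radar and forecast verification. Below is a minimal sketch of how they can be computed from thresholded reflectivity fields; the 35 dBZ event threshold is an illustrative assumption, not a value taken from the paper, and FAR is computed here as the commonly used false-alarm ratio.

```python
import numpy as np

def categorical_scores(pred, obs, threshold=35.0):
    """Contingency-table skill scores for reflectivity reconstruction.

    pred, obs : arrays of composite reflectivity (dBZ).
    threshold : dBZ level defining an 'event' (illustrative choice).
    """
    p = np.asarray(pred) >= threshold
    o = np.asarray(obs) >= threshold
    hits = np.sum(p & o)            # event predicted and observed
    misses = np.sum(~p & o)         # event observed but not predicted
    false_alarms = np.sum(p & ~o)   # event predicted but not observed
    correct_neg = np.sum(~p & ~o)   # non-event correctly predicted
    n = hits + misses + false_alarms + correct_neg

    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false-alarm ratio
    csi = hits / (hits + misses + false_alarms)   # critical success index
    # Heidke skill score: fraction of correct answers beyond random chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_neg + misses) * (correct_neg + false_alarms)) / n
    hss = (hits + correct_neg - expected) / (n - expected)
    return pod, far, csi, hss
```

All four scores are derived from the same four contingency counts, so a single pass over the thresholded fields suffices.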

19 pages, 23605 KiB  
Article
Above Ground Level Estimation of Airborne Synthetic Aperture Radar Altimeter by a Fully Supervised Altimetry Enhancement Network
by Mengmeng Duan, Yanxi Lu, Yao Wang, Gaozheng Liu, Longlong Tan, Yi Gao, Fang Li and Ge Jiang
Remote Sens. 2023, 15(22), 5404; https://doi.org/10.3390/rs15225404 - 17 Nov 2023
Abstract
Due to the lack of accurate labels for the airborne synthetic aperture radar altimeter (SARAL), the use of deep learning methods is limited for estimating the above ground level (AGL) of complicated landforms. In addition, the inherent additive and speckle noise definitely influences the intended delay/Doppler map (DDM); accurate AGL estimation becomes more challenging when using the feature extraction approach. In this paper, a generalized AGL estimation algorithm is proposed, based on a fully supervised altimetry enhancement network (FuSAE-net), where accurate labels are generated by a novel semi-analytical model. In such a case, there is no need for a fully analytical DDM model, and accurate labels are achieved without additive noise or speckle. Therefore, deep learning supervision is easy and accurate. Next, to further decrease the computational complexity for various landforms on the airborne platform, the network architecture is designed in a lightweight manner. Knowledge distillation has proven to be an effective and intuitive paradigm for building lightweight models. To significantly improve the performance of the compact student network, both the encoder and decoder of the teacher network are utilized during knowledge distillation under the supervision of labels. In the experiments, airborne raw radar altimeter data were used to examine the performance of the proposed algorithm. Comparisons with conventional methods in terms of both qualitative and quantitative aspects demonstrate the superiority of the proposed algorithm.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
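Knowledge distillation, as described in this abstract, trains a compact student under both the generated labels and the teacher network's outputs and intermediate features. The numpy sketch below shows the general shape of such a combined objective; the specific feature pairs and the weights alpha and beta are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, label,
                      student_feat, teacher_feat, alpha=0.7, beta=0.1):
    """Combined loss for training a compact student under a frozen teacher.

    student_out / teacher_out : network outputs (e.g., enhanced DDMs).
    label                     : clean reference produced by the label model.
    student_feat/teacher_feat : intermediate feature maps to align
                                (which layers to pair is an assumption).
    """
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    task = mse(student_out, label)          # supervised term against labels
    mimic = mse(student_out, teacher_out)   # output-level distillation
    feat = mse(student_feat, teacher_feat)  # feature-level distillation
    return alpha * task + (1 - alpha) * mimic + beta * feat
```

The balance between the supervised term and the two distillation terms is what lets the student inherit the teacher's behavior without matching its size.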

24 pages, 7236 KiB  
Article
Deep Learning-Based Enhanced ISAR-RID Imaging Method
by Xiurong Wang, Yongpeng Dai, Shaoqiu Song, Tian Jin and Xiaotao Huang
Remote Sens. 2023, 15(21), 5166; https://doi.org/10.3390/rs15215166 - 29 Oct 2023
Cited by 1
Abstract
This paper proposes a neural-network-based method that improves inverse synthetic aperture radar (ISAR) imaging by processing Range-Instantaneous Doppler (RID) images. ISAR is a significant imaging technique for moving targets. However, when imaging a moving target over a large accumulated angle, scatterers span several range bins and Doppler bins, so defocusing occurs in the results produced by the conventional Range Doppler Algorithm (RDA). Defocusing can be mitigated with the time-frequency analysis (TFA) method, but at the cost of resolution. The proposed method provides the neural network with more detail by using a string of RID image frames as input; as a consequence, it produces better resolution and avoids defocusing. Furthermore, we have developed a positional encoding method that precisely represents pixel positions while taking the features of ISAR images into account. To address the imbalance between target and non-target pixel counts in ISAR images, we additionally use the idea of Focal Loss to improve the Mean Squared Error (MSE) loss. We conduct experiments with simulated data of point targets and full-wave simulated data produced by FEKO to assess the efficacy of the proposed approach. The experimental results demonstrate that our approach can improve resolution while preventing defocusing in ISAR images.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
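The idea of borrowing Focal Loss to improve MSE is to down-weight pixels that are already easy (the large non-target background) so the sparse target pixels dominate the objective. One possible sketch of such a focal-weighted MSE is below; the exact weighting used in the paper may differ.

```python
import numpy as np

def focal_mse(pred, target, gamma=2.0, eps=1e-8):
    """MSE with a focal-style per-pixel weight.

    Pixels that are already well reconstructed (small error, typically the
    large empty background of an ISAR image) receive small weights, so the
    sparse target pixels dominate the loss. gamma controls the focusing
    strength. This weighting scheme is an illustration, not the paper's
    exact formulation.
    """
    err = np.abs(pred - target)
    weight = (err / (err.max() + eps)) ** gamma  # in [0, 1], largest on big errors
    return float(np.mean(weight * err ** 2))
```

With gamma = 0 the weights collapse to (almost) uniform and the loss reduces to ordinary MSE, which makes the focusing effect easy to ablate.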

30 pages, 38046 KiB  
Article
MosReformer: Reconstruction and Separation of Multiple Moving Targets for Staggered SAR Imaging
by Xin Qi, Yun Zhang, Yicheng Jiang, Zitao Liu and Chang Yang
Remote Sens. 2023, 15(20), 4911; https://doi.org/10.3390/rs15204911 - 11 Oct 2023
Abstract
Maritime moving target imaging using synthetic aperture radar (SAR) demands high resolution and wide swath (HRWS). Using a variable pulse repetition interval (PRI), staggered SAR can achieve seamless HRWS imaging. Reconstruction must be performed because the variable PRI causes echo pulse loss and nonuniformly sampled signals in azimuth, both of which result in spectrum aliasing. The existing reconstruction methods are designed for stationary scenes and have achieved impressive results. However, for moving targets, these methods inevitably introduce reconstruction errors. The target motion, coupled with non-uniform sampling, aggravates the spectral aliasing and degrades the reconstruction performance. This phenomenon becomes more severe in scenes involving multiple moving targets, since each target's distinct motion parameters affect spectrum aliasing in their own way, causing various aliasing effects to overlap. Consequently, it becomes difficult to reconstruct and separate the echoes of multiple moving targets with high precision in staggered mode. To this end, motivated by deep learning, this paper proposes a novel Transformer-based algorithm to image multiple moving targets in a staggered SAR system. The reconstruction and separation of the multiple moving targets are achieved through a proposed network named MosReFormer (Multiple moving target separation and reconstruction Transformer). Adopting a gated single-head Transformer network with convolution-augmented joint self-attention, the proposed MosReFormer network can mitigate the reconstruction errors and separate the signals of multiple moving targets simultaneously. Simulations and experiments on raw data show that the reconstructed and separated results are close to the ideal imaging results obtained with uniform azimuth sampling at constant PRI, verifying the feasibility and effectiveness of the proposed algorithm.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
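Reconstruction from staggered (variable-PRI) sampling can be illustrated with a simple linear-algebra baseline for stationary scenes: model the band-limited azimuth signal in a sinc basis on the uniform grid and solve for the uniform samples by least squares. This is only a reference-point sketch under a synthetic jittered-PRI pattern; MosReFormer itself is a learned Transformer, not this solver.

```python
import numpy as np

def resample_staggered(t, y, n_uniform, T):
    """Least-squares recovery of uniform azimuth samples x[n] from
    staggered measurements y(t_k), under the band-limited model
    y(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    grid = np.arange(n_uniform) * T
    A = np.sinc((t[:, None] - grid[None, :]) / T)  # interpolation matrix
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    return x

# Synthetic staggered acquisition: mean PRI below the uniform grid spacing,
# with random jitter playing the role of the PRI variation.
T, n_uniform, n_meas = 1.0, 64, 96
rng = np.random.default_rng(0)
t = np.arange(n_meas) * ((n_uniform - 1) * T / (n_meas - 1)) \
    + rng.uniform(-0.25, 0.25, n_meas)
x_true = np.cos(2 * np.pi * 0.1 * np.arange(n_uniform) * T)
y = np.sinc((t[:, None] - np.arange(n_uniform)[None, :] * T) / T) @ x_true
x_rec = resample_staggered(t, y, n_uniform, T)
```

Because the staggered measurements here are generated exactly from the sinc model and the system is overdetermined, the least-squares solve recovers the uniform samples; for real moving targets the model mismatch described in the abstract is precisely what breaks this kind of linear reconstruction.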

20 pages, 6178 KiB  
Article
Boosting SAR Aircraft Detection Performance with Multi-Stage Domain Adaptation Training
by Wenbo Yu, Jiamu Li, Zijian Wang and Zhongjun Yu
Remote Sens. 2023, 15(18), 4614; https://doi.org/10.3390/rs15184614 - 20 Sep 2023
Abstract
Deep learning has achieved significant success in various synthetic aperture radar (SAR) imagery interpretation tasks. However, automatic aircraft detection is still challenging due to the high labeling cost and limited data quantity. To address this issue, we propose a multi-stage domain adaptation training framework to efficiently transfer knowledge from optical imagery and boost SAR aircraft detection performance. To overcome the significant domain discrepancy between optical and SAR images, the training process is divided into three stages: image translation, domain adaptive pretraining, and domain adaptive finetuning. First, CycleGAN is used to translate optical images into SAR-style images and reduce global-level image divergence. Next, we propose multilayer feature alignment to further reduce the local-level feature distribution distance. By applying domain adversarial learning in both the pretrain and finetune stages, the detector can learn to extract domain-invariant features that are beneficial to the learning of generic aircraft characteristics. To evaluate the proposed method, extensive experiments were conducted on a self-built SAR aircraft detection dataset. The results indicate that with the proposed training framework, the average precision of Faster RCNN increased by 2.4, and that of YOLOv3 by 2.6, outperforming other domain adaptation methods. By reducing the domain discrepancy between optical and SAR imagery in three progressive stages, the proposed method can effectively mitigate the domain shift, thereby enhancing the efficiency of knowledge transfer. It greatly improves aircraft detection performance and offers an effective approach to the limited-training-data problem of SAR aircraft detection.
(This article belongs to the Special Issue Advances in Radar Imaging with Deep Learning Algorithms)
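The multilayer feature alignment step reduces the distance between optical-derived and SAR feature distributions; the paper does this adversarially. As a simple closed-form illustration of measuring such a feature-distribution discrepancy, the CORrelation ALignment (CORAL) distance compares second-order feature statistics between domains. This is a swapped-in criterion for illustration, not the paper's adversarial method.

```python
import numpy as np

def coral_distance(src_feat, tgt_feat):
    """CORAL distance: squared Frobenius gap between the feature
    covariances of two domains, normalized by 4*d^2. Small values mean
    the second-order feature statistics of the two domains are aligned.

    src_feat, tgt_feat : (n_samples, n_features) arrays.
    """
    d = src_feat.shape[1]
    cs = np.cov(src_feat, rowvar=False)  # source-domain feature covariance
    ct = np.cov(tgt_feat, rowvar=False)  # target-domain feature covariance
    return float(np.sum((cs - ct) ** 2) / (4 * d * d))
```

A criterion like this (or an adversarial domain classifier, as in the paper) can be evaluated at several feature layers and added to the detection loss, pushing the backbone toward domain-invariant features.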
