SAR Data Processing and Applications Based on Machine Learning Method

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 1 June 2024 | Viewed by 7033

Special Issue Editors


Guest Editor
School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
Interests: space-borne SAR signal processing; SAR image quality improvement; SAR image understanding

Guest Editor
College of Information Engineering, Capital Normal University, Beijing 100048, China
Interests: new space SAR system design; advanced radar signal processing; polar remote sensing

Guest Editor
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
Interests: SAR system design; image understanding; imaging detection and intelligent perception; trustworthy deep learning

Guest Editor
School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
Interests: moving target detection; machine learning method on SAR; SAR 3-D imaging

Guest Editor
School of Surveying and Geospatial Engineering, University of New South Wales (UNSW), Sydney, NSW 2052, Australia
Interests: information extraction from aerial and satellite images

Guest Editor
Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
Interests: synthetic aperture radar image enhancement; small-radar development; deep learning for SAR image enhancement; data interpretation; short-range radar development; radar signal processing; through-the-wall imaging; soil moisture estimation; machine vision

Special Issue Information

Dear Colleagues,

Due to the explosive growth of data and great advances in computing power, machine learning techniques are having a profound impact on the development of remote sensing, and remote sensing technology is transforming from model-driven to data-driven. Because of the unique nature of SAR signals and images, machine learning for SAR still offers many opportunities for exploration, which is also an exciting challenge. The non-linear modeling capability of machine learning has revealed further potential in SAR processing and applications, such as target recognition, scene classification, and the suppression of speckle noise and ambiguity energy. Next-generation SAR spacecraft may also carry edge computers built on AI platforms. In the future, machine learning for SAR will significantly reduce the workload of scientists and engineers, enhance the application value of SAR data, and even influence the underlying design concepts of novel space SAR systems.

Remote Sensing is an established scientific journal devoted to the science and application of remote sensing technologies, encouraging the presentation of unique processing techniques, applied results, and new findings. As a crucial aspect of remote sensing, SAR provides the microwave scattering characteristics of terrain, but it also poses great difficulties in processing and application because SAR imagery differs substantially from human visual habits. The combination of machine learning and SAR can better bridge this gap, again revealing many opportunities and challenges. We hope that this Special Issue will provide an overview of the impact of this technology and present exciting results on machine learning for SAR remote sensing.

This Special Issue is dedicated to advancing our knowledge in SAR data processing and applications based on machine learning methods. We invite the submission of review and regular papers on machine learning with SAR systems, 2D/3D/4D imaging methods, SAR image enhancement, SAR target detection and terrain classification, SAR image understanding, and trustworthy intelligence processing.

Prof. Dr. Jie Chen
Dr. Peng Xiao
Dr. Yanan You
Dr. Wei Yang
Prof. Dr. John Trinder
Prof. Dr. Dusan Gleich
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic aperture radar
  • machine learning
  • trustworthy deep learning
  • convolutional neural network
  • SAR imaging
  • SAR image target recognition
  • SAR image classification
  • SAR image understanding

Published Papers (8 papers)


Research


24 pages, 12173 KiB  
Article
Sea Clutter Suppression Based on Chaotic Prediction Model by Combining the Generator and Long Short-Term Memory Networks
by Jindong Yu, Baojing Pan, Ze Yu, Hongling Zhu, Hanfu Li, Chao Li and Hezhi Sun
Remote Sens. 2024, 16(7), 1260; https://doi.org/10.3390/rs16071260 - 02 Apr 2024
Viewed by 486
Abstract
Sea clutter usually greatly affects the target detection and identification performance of marine surveillance radars. In order to reduce the impact of sea clutter, a novel sea clutter suppression method based on chaos prediction is proposed in this paper. The method combines a generator trained by Generative Adversarial Networks (GAN) with a Long Short-Term Memory (LSTM) network to accomplish sea clutter prediction. By exploiting the generator’s ability to learn the distribution of unlabeled data, the accuracy of sea clutter prediction is improved compared with the classical LSTM-based model. Furthermore, effective suppression of sea clutter and improvements in the signal-to-clutter ratio of echo were achieved through clutter cancellation. Experimental results on real data demonstrated the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
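As a rough illustration of the prediction-and-cancellation idea described in the abstract, the sketch below predicts the next clutter sample with a plain LSTM and subtracts it from the incoming echo. The window length, layer sizes, and real/imaginary channel layout are assumptions made for the example, and the GAN-trained generator that the paper combines with the LSTM is omitted; this is not the authors' implementation.

```python
# Minimal sketch (assumptions noted above): one-step clutter prediction with an
# LSTM, followed by cancellation of the predicted clutter from the radar echo.
import torch
import torch.nn as nn

class ClutterPredictor(nn.Module):
    def __init__(self, window=32, hidden=64):
        super().__init__()
        # 2 input channels: real and imaginary parts of the clutter samples
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # predict the next complex sample

    def forward(self, past):               # past: (batch, window, 2)
        out, _ = self.lstm(past)
        return self.head(out[:, -1, :])    # (batch, 2)

def cancel(echo_next, predicted_clutter):
    # Residual contains mostly the target return plus prediction error,
    # which is what improves the signal-to-clutter ratio.
    return echo_next - predicted_clutter

model = ClutterPredictor()
past = torch.randn(8, 32, 2)               # synthetic placeholder data
echo_next = torch.randn(8, 2)
residual = cancel(echo_next, model(past))
print(residual.shape)                       # torch.Size([8, 2])
```

In the paper, the GAN-trained generator supplies distributional knowledge learned from unlabeled clutter that a plain LSTM such as this one does not have.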

25 pages, 16942 KiB  
Article
TAG-Net: Target Attitude Angle-Guided Network for Ship Detection and Classification in SAR Images
by Dece Pan, Youming Wu, Wei Dai, Tian Miao, Wenchao Zhao, Xin Gao and Xian Sun
Remote Sens. 2024, 16(6), 944; https://doi.org/10.3390/rs16060944 - 07 Mar 2024
Viewed by 660
Abstract
Synthetic aperture radar (SAR) ship detection and classification has gained unprecedented attention due to its important role in maritime transportation. Many deep learning-based detectors and classifiers have been successfully applied and achieved great progress. However, ships in SAR images present discrete and multi-centric features, and their scattering characteristics and edge information are sensitive to variations in target attitude angles (TAAs). These factors pose challenges for existing methods to obtain satisfactory results. To address these challenges, a novel target attitude angle-guided network (TAG-Net) is proposed in this article. The core idea of TAG-Net is to leverage TAA information as guidance and use an adaptive feature-level fusion strategy to dynamically learn more representative features that can handle the target imaging diversity caused by TAA. This is achieved through a TAA-aware feature modulation (TAFM) module. It uses the TAA information and foreground information as prior knowledge and establishes the relationship between the ship scattering characteristics and TAA information. This enables a reduction in the intra-class variability and highlights ship targets. Additionally, considering the different requirements of the detection and classification tasks for the scattering information, we propose a layer-wise attention-based task decoupling detection head (LATD). Unlike general deep learning methods that use shared features for both detection and classification tasks, LATD extracts multi-level features and uses layer attention to achieve feature decoupling and select the most suitable features for each task. Finally, we introduce a novel salient-enhanced feature balance module (SFB) to provide richer semantic information and capture the global context to highlight ships in complex scenes, effectively reducing the impact of background noise. A large-scale ship detection dataset (LSSDD+) is used to verify the effectiveness of TAG-Net, and our method achieves state-of-the-art performance. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
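The TAA-aware feature modulation described above conditions convolutional features on the target attitude angle. Below is a minimal, hypothetical sketch of one common way to do such conditioning, a FiLM-style per-channel scale and shift driven by the angle; the sin/cos encoding, layer sizes, and the assumption of one angle per image chip are illustrative only and are not taken from TAG-Net.

```python
# Illustrative sketch only: angle-conditioned channel-wise feature modulation.
import math
import torch
import torch.nn as nn

class TAAModulation(nn.Module):
    def __init__(self, channels=256, hidden=64):
        super().__init__()
        # Encode the attitude angle as (sin, cos) to avoid the 0/360 wrap-around.
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * channels),
        )

    def forward(self, feat, taa_rad):        # feat: (B, C, H, W), taa_rad: (B,)
        enc = torch.stack([torch.sin(taa_rad), torch.cos(taa_rad)], dim=-1)
        gamma, beta = self.mlp(enc).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]
        beta = beta[:, :, None, None]
        return feat * (1 + gamma) + beta      # channel-wise scale and shift

feat = torch.randn(4, 256, 32, 32)
taa = torch.rand(4) * 2 * math.pi             # placeholder attitude angles
print(TAAModulation()(feat, taa).shape)       # torch.Size([4, 256, 32, 32])
```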

20 pages, 7264 KiB  
Article
Synthetic Aperture Radar Image Compression Based on Low-Frequency Rejection and Quality Map Guidance
by Jiawen Deng and Lijia Huang
Remote Sens. 2024, 16(5), 891; https://doi.org/10.3390/rs16050891 - 02 Mar 2024
Viewed by 714
Abstract
Synthetic Aperture Radar (SAR) images are widely utilized in the field of remote sensing. However, there is a limited body of literature specifically addressing learning-based compression of SAR images. To address the escalating volume of SAR image data for storage and transmission, which necessitates more effective compression algorithms, this paper proposes a novel framework for compressing SAR images. Initially, we introduce a novel two-stage transformation-based approach aimed at suppressing the low-frequency components of the input data, thereby achieving a high information entropy and minimizing quantization losses. Subsequently, a quality map guidance image compression algorithm is introduced, involving the fusion of the input SAR images with a target-aware map. This fusion involves convolutional transformations to generate a compact latent representation, effectively exploring redundancies between focused and non-focused areas. To assess the algorithm’s performance, experiments are carried out on both the low-resolution Sentinel-1 dataset and the high-resolution QiLu-1 dataset. The results indicate that the low-frequency suppression algorithm significantly outperforms traditional processing algorithms by 3–8 dB when quantizing the input data, effectively preserving image features and improving image performance metrics. Furthermore, the quality map guidance image compression algorithm demonstrates a superior performance compared to the baseline model. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
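To make the low-frequency suppression idea concrete, here is a toy sketch that removes a smooth low-frequency component from a SAR amplitude image before 8-bit quantization, so the quantizer's dynamic range is spent on high-entropy detail. The Gaussian low-pass, the uniform 8-bit quantizer, and the random placeholder image are assumptions for illustration; the paper uses a learned two-stage transformation rather than this filter.

```python
# Toy illustration, not the paper's transform: low-frequency rejection before
# quantization of a SAR amplitude image.
import numpy as np
from scipy.ndimage import gaussian_filter

def suppress_low_freq(img, sigma=16.0):
    low = gaussian_filter(img, sigma)        # slowly varying background
    return img - low, low                    # residual carries the detail

def quantize_8bit(x):
    lo, hi = x.min(), x.max()
    q = np.round((x - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)
    return q, (lo, hi)

img = np.abs(np.random.randn(512, 512)).astype(np.float32)  # placeholder image
residual, low = suppress_low_freq(img)
q, scale = quantize_8bit(residual)
# The smooth low-frequency part is cheap to store or refit at the decoder;
# only the quantized high-entropy residual goes through the learned codec.
```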

20 pages, 4803 KiB  
Article
Pseudo-L0-Norm Fast Iterative Shrinkage Algorithm Network: Agile Synthetic Aperture Radar Imaging via Deep Unfolding Network
by Wenjiao Chen, Jiwen Geng, Fanjie Meng and Li Zhang
Remote Sens. 2024, 16(4), 671; https://doi.org/10.3390/rs16040671 - 13 Feb 2024
Viewed by 650
Abstract
A novel compressive sensing (CS) synthetic-aperture radar (SAR) called AgileSAR has been proposed to increase swath width for sparse scenes while preserving azimuthal resolution. AgileSAR overcomes the limitation of the Nyquist sampling theorem, so it acquires a smaller amount of data and has low system complexity. However, traditional CS optimization-based algorithms suffer from manual tuning and pre-definition of optimization parameters, and they generally involve high time and computational complexity for AgileSAR imaging. To address these issues, a pseudo-L0-norm fast iterative shrinkage algorithm network (pseudo-L0-norm FISTA-net) is proposed for AgileSAR imaging via a deep unfolding network in this paper. Firstly, a pseudo-L0-norm regularization model is built by taking an approximately fair penalization rule based on Bayesian estimation. Then, we unfold the operation process of FISTA into a data-driven deep network to solve the pseudo-L0-norm regularization model. The network’s parameters are automatically learned, and the learned network significantly increases imaging speed, so that it can improve the accuracy and efficiency of AgileSAR imaging. In addition, the nonlinearly sparsifying transform can learn more target details than the traditional sparsifying transform. Finally, simulation and real-data experiments demonstrate the superiority and efficiency of the pseudo-L0-norm FISTA-net for AgileSAR imaging. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
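Deep unfolding maps each FISTA iteration to a network layer whose step size and threshold are learned from data instead of tuned by hand. The sketch below shows one such layer, with the ordinary L1 soft-threshold standing in for the pseudo-L0-norm penalty and a small dense matrix standing in for the SAR observation operator; both substitutions, and the reuse of a single layer across iterations, are assumptions made to keep the example short.

```python
# Sketch of one unfolded FISTA layer with learnable step size and threshold.
import torch
import torch.nn as nn

class FISTALayer(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))    # learnable 1/L
        self.thresh = nn.Parameter(torch.tensor(0.01)) # learnable threshold

    def forward(self, x_prev, y, A, b, t_prev):
        grad = A.T @ (A @ y - b)                       # data-fidelity gradient
        v = y - self.step * grad
        x = torch.sign(v) * torch.clamp(v.abs() - self.thresh, min=0.0)
        t = (1 + (1 + 4 * t_prev ** 2).sqrt()) / 2     # Nesterov momentum
        y_next = x + ((t_prev - 1) / t) * (x - x_prev)
        return x, y_next, t

A = torch.randn(64, 128) / 128 ** 0.5                  # placeholder operator
b = torch.randn(64)
x = y = torch.zeros(128)
t = torch.tensor(1.0)
layer = FISTALayer()
for _ in range(5):                                     # 5 unfolded iterations
    x, y, t = layer(x, y, A, b, t)
```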

19 pages, 8454 KiB  
Article
DeepRED Based Sparse SAR Imaging
by Yao Zhao, Qingsong Liu, He Tian, Bingo Wing-Kuen Ling and Zhe Zhang
Remote Sens. 2024, 16(2), 212; https://doi.org/10.3390/rs16020212 - 05 Jan 2024
Cited by 1 | Viewed by 745
Abstract
The integration of deep neural networks into sparse synthetic aperture radar (SAR) imaging is explored to enhance SAR imaging performance and reduce the system’s sampling rate. However, the scarcity of training samples and mismatches between the training data and the SAR system pose significant challenges to the method’s further development. In this paper, we propose a novel SAR imaging approach based on deep image prior powered by RED (DeepRED), enabling unsupervised SAR imaging without the need for additional training data. Initially, DeepRED is introduced as the regularization technique within the sparse SAR imaging model. Subsequently, variable splitting and the alternating direction method of multipliers (ADMM) are employed to solve the imaging model, alternately updating the magnitude and phase of the SAR image. Additionally, the SAR echo simulation operator is utilized as an observation model to enhance computational efficiency. Through simulations and real data experiments, we demonstrate that our method maintains imaging quality and system downsampling rate on par with deep-neural-network-based sparse SAR imaging but without the requirement for training data. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
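For intuition, RED-style regularization penalizes an image x through a denoiser f, with a regularization gradient of approximately x - f(x). The sketch below takes plain gradient steps on a RED-regularized least-squares problem, using a Gaussian filter as a stand-in denoiser and a scaled dense matrix as a stand-in observation operator; the actual method instead uses ADMM with a deep image prior, the SAR echo simulation operator, and alternating magnitude/phase updates, so this is illustration only.

```python
# Schematic sketch only: gradient steps on a RED-regularized inverse problem.
import numpy as np
from scipy.ndimage import gaussian_filter

def red_step(x, A, b, lam=0.1, mu=0.05):
    denoised = gaussian_filter(x, sigma=1.0)          # placeholder denoiser f(x)
    data_grad = A.T @ (A @ x.ravel() - b)             # least-squares gradient
    reg_grad = (x - denoised).ravel()                 # RED gradient: x - f(x)
    return x - mu * (data_grad + lam * reg_grad).reshape(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))
A = rng.standard_normal((512, 32 * 32)) / np.sqrt(32 * 32)  # scaled for stability
b = A @ x.ravel() + 0.01 * rng.standard_normal(512)
for _ in range(20):
    x = red_step(x, A, b)
```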

28 pages, 17240 KiB  
Article
OEGR-DETR: A Novel Detection Transformer Based on Orientation Enhancement and Group Relations for SAR Object Detection
by Yunxiang Feng, Yanan You, Jing Tian and Gang Meng
Remote Sens. 2024, 16(1), 106; https://doi.org/10.3390/rs16010106 - 26 Dec 2023
Cited by 1 | Viewed by 1001
Abstract
Object detection in SAR images has always been a topic of great interest in the field of deep learning. Early works commonly focus on improving performance on convolutional neural network frameworks. More recent works continue this path and introduce the attention mechanisms of Transformers for better semantic interpretation. However, these methods fail to treat the Transformer itself as a detection framework and, therefore, lack the development of various details that contribute to the state-of-the-art performance of Transformers. In this work, we first base our work on a fully multi-scale Transformer-based detection framework, DETR (DEtection TRansformer), to utilize its superior detection performance. Secondly, to acquire rotation-related attributes for better representation of SAR objects, an Orientation Enhancement Module (OEM) is proposed to facilitate the enhancement of rotation characteristics. Then, to enable learning of more effective and discriminative representations of foreground objects and background noises, a contrastive-loss-based GRC Loss is proposed to preserve patterns of both categories. Moreover, so that comparisons are not restricted exclusively to maritime objects, we have also developed an open-source labeled vehicle dataset. Finally, we evaluate both detection performance and generalization ability on two well-known ship datasets and our vehicle dataset, and we demonstrate our method’s superior performance and generalization ability on these datasets. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
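The GRC Loss mentioned above builds on the standard contrastive-learning pattern of pulling same-class embeddings together and pushing different-class embeddings apart. The sketch below shows a generic InfoNCE-style supervised contrastive loss over object/background embeddings; it is not the GRC Loss itself, and the temperature, embedding size, and binary labels are assumptions chosen for illustration.

```python
# Generic supervised contrastive loss over foreground/background embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(embeddings, labels, temperature=0.1):
    z = F.normalize(embeddings, dim=-1)                # (N, D) unit vectors
    sim = z @ z.T / temperature                        # cosine similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                         # exclude self-pairs
    # Log-softmax over each row, with the diagonal removed from the denominator.
    self_mask = torch.eye(len(z), dtype=torch.bool)
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float("-inf")), dim=1, keepdim=True)
    n_pos = mask_pos.sum(1).clamp(min=1)
    return -(mask_pos * log_prob).sum(1).div(n_pos).mean()

emb = torch.randn(16, 128)
labels = torch.randint(0, 2, (16,))                    # 1 = object, 0 = background
print(contrastive_loss(emb, labels))
```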

26 pages, 36367 KiB  
Article
Intelligent Detection and Segmentation of Space-Borne SAR Radio Frequency Interference
by Jiayi Zhao, Yongliang Wang, Guisheng Liao, Xiaoning Liu, Kun Li, Chunyu Yu, Yang Zhai, Hang Xing and Xuepan Zhang
Remote Sens. 2023, 15(23), 5462; https://doi.org/10.3390/rs15235462 - 22 Nov 2023
Viewed by 774
Abstract
Space-borne synthetic aperture radar (SAR), as an all-weather observation sensor, is an important means in modern information electronic warfare. Since SAR is a broadband active radar system, radio frequency interference (RFI) in the same frequency band will affect the normal observation of the SAR system. To address the above problem, this research explores a quick and accurate method for detecting and segmenting RFI-contaminated images. The purpose of the current method is to quickly detect the existence of RFI and to locate it in massive SAR data. Based on deep learning, the method shown in this paper detects the existence of RFI by determining the presence or absence of interference in the image domain and then performs pixel-level image segmentation on Sentinel-1 RFI-affected quick-look images to locate RFI. Considering the need to quickly detect RFI in massive SAR data, an improved network based on MobileNet is proposed, which replaces some inverted residual blocks in the network with ghost blocks, reducing the number of network parameters and the inference time to 6.1 ms per image. Furthermore, this paper proposes an improved network called the Smart Interference Segmentation Network (SISNet), which is based on U2Net and replaces the convolution of the VGG blocks in U2Net with a residual convolution and introduces attention mechanisms and a modified RFB module to improve the segmentation mIoU to 87.46% on average. Experiment results and statistical analysis based on the MID dataset and PAIS dataset show that the proposed methods can achieve quicker detection than other CNNs while ensuring a certain accuracy and can significantly improve segmentation performance under the same conditions compared to the original U2Net and other semantic segmentation networks. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
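A ghost block of the kind referenced above generates part of its output channels with an ordinary convolution and the rest with a cheap depthwise convolution applied to those primary features, which is what cuts parameters and inference time. The sketch below follows the public GhostNet formulation; the 1x1/3x3 kernel sizes and the even split between primary and cheap channels are assumptions, not details taken from the paper.

```python
# Sketch of a GhostNet-style "ghost" module.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        primary = out_ch // 2
        cheap = out_ch - primary
        self.primary = nn.Sequential(                  # ordinary 1x1 convolution
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(                    # cheap depthwise 3x3 conv
            nn.Conv2d(primary, cheap, 3, padding=1, groups=primary, bias=False),
            nn.BatchNorm2d(cheap), nn.ReLU(inplace=True))

    def forward(self, x):
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)    # (B, out_ch, H, W)

x = torch.randn(1, 32, 64, 64)
print(GhostModule(32, 64)(x).shape)                    # torch.Size([1, 64, 64, 64])
```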

Other


20 pages, 13392 KiB  
Technical Note
A GAN-Based Augmentation Scheme for SAR Deceptive Jamming Templates with Shadows
by Shinan Lang, Guiqiang Li, Yi Liu, Wei Lu, Qunying Zhang and Kun Chao
Remote Sens. 2023, 15(19), 4756; https://doi.org/10.3390/rs15194756 - 28 Sep 2023
Cited by 2 | Viewed by 808
Abstract
To realize fast and effective synthetic aperture radar (SAR) deception jamming, a high-quality SAR deception jamming template library can be generated by performing sample augmentation on SAR deception jamming templates. However, the current sample augmentation schemes of SAR deception jamming templates face certain problems. First, the authenticity of the templates is low due to the lack of speckle noise. Second, the generated templates have a low similarity to the target and shadow areas of the input templates. To solve these problems, this study proposed a sample augmentation scheme based on generative adversarial networks, which can generate a high-quality library of SAR deception jamming templates with shadows. The proposed scheme solved the two aforementioned problems from the following aspects. First, the influence of the speckle noise was considered in the network to avoid the problem of reduced authenticity in the generated images. Second, a channel attention mechanism module was used to improve the network’s learning ability of the shadow features, which improved the similarity between the generated template and the shadow area in the input template. Finally, the single generative adversarial network (SinGAN) scheme, which is a generative adversarial network capable of image sample augmentation for a single SAR image, and the proposed scheme were compared regarding the equivalent number of looks and the structural similarity between the target and shadow in the sample augmentation results. The comparison results demonstrated that, compared to the templates generated by the SinGAN scheme, those generated by the proposed scheme had targets and shadow features similar to those of the original image and could incorporate speckle noise characteristics, resulting in a higher authenticity, which helps to achieve fast and effective SAR deception jamming. Full article
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)
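Two of the ingredients above, channel attention over shadow features and explicit speckle modeling, can be illustrated with the short sketch below: a squeeze-and-excitation style channel attention block (one common form of channel attention) and multiplicative gamma speckle with unit mean. Both are generic stand-ins chosen for the example and are assumptions; the paper's network and noise model may differ.

```python
# Generic channel attention block plus multiplicative speckle injection.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                              # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                # global average pooling
        return x * w[:, :, None, None]                 # reweight channels

def add_speckle(intensity, looks=1.0):
    # Multiplicative gamma speckle with unit mean (fully developed speckle,
    # assumed single-look intensity data).
    noise = torch.distributions.Gamma(looks, looks).sample(intensity.shape)
    return intensity * noise

feat = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feat).shape)                # torch.Size([2, 64, 32, 32])
template = torch.rand(1, 1, 128, 128)                  # placeholder template
speckled = add_speckle(template)
```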
