
Deep Learning for Radar and Sonar Image Processing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (1 December 2021) | Viewed by 44754

Special Issue Editors

Prof. Dr. Alexandre Baussard
Institut Charles Delaunay, Université de Technologie de Troyes, 10000 Troyes, France
Interests: electromagnetic and acoustic systems; inverse problems; machine learning; multiscale/multiresolution signal and image processing

Prof. Dr. Ming-Der Yang
Department of Civil Engineering, National Chung Hsing University, 250 Kuokuang Rd., Taichung 402, Taiwan
Interests: remote sensing; image processing; AI; UAVs; environmental monitoring; disaster damage assessment

Special Issue Information

Dear Colleagues,

Over the past few years, radar and sonar image processing and understanding, for both civilian and defense applications, have benefited from the breakthrough of artificial intelligence, especially deep learning. Unfortunately, specialists from the radar and sonar fields do not interact much with each other. The aim of this Special Issue is to increase these exchanges and to allow experts from other areas to understand the specificities of radar and sonar problems. Indeed, radar and sonar images have some particularities compared to common optical images. Thus, processing these data requires certain precautions, and specific developments must be made to address applications such as image segmentation or object detection. However, one of the main problems, especially in defense applications, is the lack of data. To overcome this problem, several solutions can be considered, such as image synthesis using generative adversarial networks (GANs) to create or enlarge training sets, domain adaptation, or transfer learning.
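As a concrete illustration of the transfer-learning route mentioned above, the minimal sketch below fine-tunes a backbone pretrained on optical imagery for a small sonar or radar classification set. It is only a sketch under stated assumptions (PyTorch and torchvision available; the class count and the choice of frozen layers are hypothetical), not a prescribed recipe.

```python
# Minimal transfer-learning sketch (assumption: PyTorch + torchvision >= 0.13).
# A backbone pretrained on optical imagery is fine-tuned on a small sonar/radar
# dataset; the number of classes is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4  # hypothetical number of sonar/radar target classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze early layers; only the last residual block and the new head are trained.
for name, param in model.named_parameters():
    if not name.startswith(("layer4", "fc")):
        param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
# A standard training loop over the (small) labeled set would follow.
```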

Topics for this Special Issue on deep learning for radar and sonar image processing include but are not limited to the following:

  • Image segmentation;
  • Object detection;
  • Object classification;
  • Image synthesis;
  • Domain adaptation;
  • Transfer learning;
  • Supervised, semisupervised, and unsupervised learning.

Prof. Dr. Alexandre Baussard
Prof. Dr. Ming-Der Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • radar
  • sonar
  • deep learning
  • segmentation
  • detection
  • classification
  • image synthesis
  • domain adaptation
  • transfer learning

Published Papers (11 papers)


Research

36 pages, 52271 KiB  
Article
Deep Learning Models for Passive Sonar Signal Classification of Military Data
by Júlio de Castro Vargas Fernandes, Natanael Nunes de Moura Junior and José Manoel de Seixas
Remote Sens. 2022, 14(11), 2648; https://doi.org/10.3390/rs14112648 - 01 Jun 2022
Cited by 9 | Viewed by 4764
Abstract
The noise radiated from ships can be used for their identification and classification using passive sonar systems. Several techniques have been proposed for military ship classification based on acoustic signatures, which can be acquired through controlled experiments performed in an acoustic lane. The cost of such data acquisition is a significant issue, since the ship and crew have to be dislocated from the fleet. In addition, the experiments have to be repeated for different operational conditions, taking a considerable amount of time. Even with this massive effort, the scarce amount of data produced by these controlled experiments may limit further detailed analyses. In this paper, deep learning models are used for full exploitation of such acquired data, envisaging passive sonar signal classification. A drawback of such models is their large number of parameters, which requires extensive data volumes for parameter tuning during the training phase. Thus, generative adversarial networks (GANs) are used to synthesize data so that a larger data volume can be produced for training the convolutional neural networks (CNNs) used for the classification task. Different GAN design approaches were evaluated, and both maximum-probability and class-expert strategies were exploited for signal classification. Special attention was paid to how expert knowledge can help in analyzing the performance of the various deep learning models through tests that mirror actual deployment. An accuracy as high as 99.0±0.4% was achieved using experimental data, which improves upon previous machine learning designs in the field.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
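For readers less familiar with GAN-based data synthesis, the sketch below shows a bare-bones adversarial training step for fixed-length spectral samples. It is a hedged illustration only: the network sizes, latent dimension, and sample dimension are assumptions, not the configuration used in the paper.

```python
# Bare-bones GAN training step (assumption: samples are fixed-length spectral
# vectors; all sizes are illustrative).
import torch
import torch.nn as nn

latent_dim, sample_dim = 64, 512
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, sample_dim))
D = nn.Sequential(nn.Linear(sample_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real):                                   # real: (batch, sample_dim)
    z = torch.randn(real.size(0), latent_dim)
    fake = G(z)

    # Discriminator: push real samples toward 1, synthesized samples toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward(); opt_d.step()

    # Generator: try to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```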

19 pages, 5529 KiB  
Article
Combinational Fusion and Global Attention of the Single-Shot Method for Synthetic Aperture Radar Ship Detection
by Libo Xu, Chaoyi Pang, Yan Guo and Zhenyu Shu
Remote Sens. 2021, 13(23), 4781; https://doi.org/10.3390/rs13234781 - 25 Nov 2021
Cited by 1 | Viewed by 1614
Abstract
Synthetic Aperture Radar (SAR), an active remote sensing imaging radar technology, has a certain surface-penetration ability and can work all day and in all weather conditions. It is widely applied in ship detection to quickly collect ship information on the ocean surface from SAR images. However, ship SAR images are often blurred, suffer from large noise interference, and contain many small targets, which pose challenges to popular one-stage detectors such as the single-shot multi-box detector (SSD). We designed a novel network structure, a combinational fusion SSD (CF-SSD), based on the framework of the original SSD, to solve these problems. It mainly includes three blocks, namely a combinational fusion (CF) block, a global attention module (GAM), and a mixed loss function block, to significantly improve the detection accuracy on SAR and remote sensing images while maintaining a fast inference speed. The CF block equips every feature map with the ability to detect objects of all sizes at different levels and forms a consistent and powerful detection structure that learns more useful information from SAR features. The GAM block produces attention weights and considers the channel attention information of various scale feature maps or cross-layer maps, so that it can obtain better feature representations from a global perspective. The mixed loss function block learns the positions of the ground-truth boxes more accurately by considering corner and center coordinates simultaneously. CF-SSD can effectively extract and fuse features, avoid the loss of small or blurred object information, and precisely locate object positions in SAR images. We conducted experiments on the SAR ship dataset SSDD and achieved 90.3% mAP with an inference speed close to that of the original SSD. We also tested our model on the remote sensing dataset NWPU VHR-10 and the common dataset VOC2007. The experimental results indicate that our proposed model simultaneously achieves excellent detection performance and high efficiency.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
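The idea of a "mixed" loss that penalizes corner and center coordinates simultaneously can be sketched as follows. This is an illustrative loss under assumed axis-aligned box conventions, not the authors' exact formulation.

```python
# Hedged sketch of a mixed box-regression loss combining corner and center/size terms.
import torch

def mixed_box_loss(pred, target, alpha=0.5):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2)."""
    corner_loss = torch.nn.functional.smooth_l1_loss(pred, target)

    # Convert to center/size form and penalize those coordinates as well.
    def to_cxcywh(b):
        cx = (b[:, 0] + b[:, 2]) / 2
        cy = (b[:, 1] + b[:, 3]) / 2
        w = (b[:, 2] - b[:, 0]).clamp(min=1e-6)
        h = (b[:, 3] - b[:, 1]).clamp(min=1e-6)
        return torch.stack([cx, cy, w, h], dim=1)

    center_loss = torch.nn.functional.smooth_l1_loss(to_cxcywh(pred), to_cxcywh(target))
    return alpha * corner_loss + (1 - alpha) * center_loss
```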

29 pages, 17943 KiB  
Article
Zero-Shot Pipeline Detection for Sub-Bottom Profiler Data Based on Imaging Principles
by Gen Zheng, Jianhu Zhao, Shaobo Li and Jie Feng
Remote Sens. 2021, 13(21), 4401; https://doi.org/10.3390/rs13214401 - 01 Nov 2021
Cited by 3 | Viewed by 3106
Abstract
With the increasing number of underwater pipeline investigation activities, research on automatic pipeline detection is of great significance. At this stage, object detection algorithms based on Deep Learning (DL) are widely used due to their ability to deal with various complex scenarios. However, DL algorithms require massive representative samples, which are difficult to obtain for pipeline detection with sub-bottom profiler (SBP) data. In this paper, a zero-shot pipeline detection method is proposed. First, an efficient sample synthesis method based on SBP imaging principles is proposed to generate samples. Then, the generated samples are used to train the YOLOv5s network, and a pipeline detection strategy is developed to meet real-time requirements. Finally, the trained model is tested with measured data. In the experiment, the trained model achieved a mAP@0.5 of 0.962, and the mean deviation of the predicted pipeline position was 0.23 pixels with a standard deviation of 1.94 pixels in the horizontal direction and 0.34 pixels with a standard deviation of 2.69 pixels in the vertical direction. In addition, the object detection speed also met the real-time requirements. The above results show that the proposed method has the potential to completely replace manual interpretation and has high application value.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)

23 pages, 9873 KiB  
Article
Horizon Picking from SBP Images Using Physicals-Combined Deep Learning
by Jie Feng, Jianhu Zhao, Gen Zheng and Shaobo Li
Remote Sens. 2021, 13(18), 3565; https://doi.org/10.3390/rs13183565 - 08 Sep 2021
Cited by 1 | Viewed by 1834
Abstract
Horizon picking from sub-bottom profiler (SBP) images is of great significance in marine shallow strata studies. However, the mainstream automatic picking methods cannot handle multiples well, and a group of parameters needs to be set manually. Considering the constant increase in the amount of SBP data and the high efficiency of deep learning (DL), we proposed a physicals-combined DL method to pick the horizons from SBP images. We adopted the DeepLabV3+ network to extract the horizons and multiples from SBP images. We generated a training dataset from the Jiaozhou Bay survey (Shandong, China) and the Zhujiang estuary survey (Guangzhou, China) to increase the applicability of the trained model. After the DL processing, we proposed a simulated Radon transform method to eliminate the surface-related multiples from the prediction by combining the designed pseudo-Radon transform and correlation analysis. We verified the proposed method using actual data (not involved in the training dataset) from Jiaozhou Bay and the Zhujiang estuary. The positions of the picked horizons are accurate, and multiples are suppressed.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)

28 pages, 3750 KiB  
Article
Real-Time Underwater Maritime Object Detection in Side-Scan Sonar Images Based on Transformer-YOLOv5
by Yongcan Yu, Jianhu Zhao, Quanhua Gong, Chao Huang, Gen Zheng and Jinye Ma
Remote Sens. 2021, 13(18), 3555; https://doi.org/10.3390/rs13183555 - 07 Sep 2021
Cited by 97 | Viewed by 11548
Abstract
To overcome the shortcomings of traditional manual detection of underwater targets in side-scan sonar (SSS) images, a real-time automatic target recognition (ATR) method is proposed in this paper. This method consists of image preprocessing, sampling, ATR by integration of a transformer module with YOLOv5s (TR–YOLOv5s), and target localization. Considering the target-sparse and feature-barren characteristics of SSS images, a novel TR–YOLOv5s network and a down-sampling principle are put forward, and an attention mechanism is introduced to meet the accuracy and efficiency requirements of underwater target recognition. Experiments verified that the proposed method achieved 85.6% mean average precision (mAP) and an 87.8% macro-F2 score, bringing 12.5% and 10.6% gains compared with a YOLOv5s network trained from scratch, with a real-time recognition speed of about 0.068 s per image.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)

24 pages, 5501 KiB  
Article
Unknown SAR Target Identification Method Based on Feature Extraction Network and KLD–RPA Joint Discrimination
by Zhiqiang Zeng, Jinping Sun, Congan Xu and Haiyang Wang
Remote Sens. 2021, 13(15), 2901; https://doi.org/10.3390/rs13152901 - 23 Jul 2021
Cited by 19 | Viewed by 2617
Abstract
Recently, deep learning (DL) has been successfully applied to automatic target recognition (ATR) tasks on synthetic aperture radar (SAR) images. However, limited by the lack of SAR image target datasets and the high cost of labeling, existing DL based approaches can only accurately recognize targets present in the training dataset. Therefore, high-precision identification of unknown SAR targets in practical applications is one of the important capabilities that a SAR–ATR system should be equipped with. To this end, we propose a novel DL based identification method for unknown SAR targets with joint discrimination. First, a feature extraction network (FEN) trained on a limited dataset is used to extract the SAR target features, and the unknown targets are roughly separated from the known targets by computing the Kullback–Leibler divergence (KLD) of the target feature vectors. For targets that cannot be distinguished by KLD, their feature vectors are reduced with t-distributed stochastic neighbor embedding (t-SNE) to calculate the relative position angle (RPA). Finally, the known and unknown targets are finely identified based on the RPA. Experimental results on the MSTAR dataset demonstrate that the proposed method achieves higher identification accuracy for unknown SAR targets than existing methods while maintaining high recognition accuracy for known targets.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
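A minimal sketch of the KLD-based screening step is given below: a test target is flagged as a candidate unknown when its feature distribution is far (in KL divergence) from every known-class prototype. The softmax-normalized features, prototype construction, and threshold value are assumptions for illustration, not details from the paper.

```python
# Hedged sketch of KLD screening for unknown targets (NumPy).
# Assumes non-negative feature vectors (e.g., softmax outputs of a trained
# feature-extraction network); the threshold is a hypothetical tuning parameter.
import numpy as np

def kld(p, q, eps=1e-12):
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def is_unknown(feature, class_prototypes, threshold=2.0):
    """feature: feature vector of a test target.
    class_prototypes: list of mean feature vectors of the known classes."""
    min_div = min(kld(feature, proto) for proto in class_prototypes)
    return min_div > threshold   # far from every known class -> candidate unknown
```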

19 pages, 2407 KiB  
Article
Fast Complex-Valued CNN for Radar Jamming Signal Recognition
by Haoyu Zhang, Lei Yu, Yushi Chen and Yinsheng Wei
Remote Sens. 2021, 13(15), 2867; https://doi.org/10.3390/rs13152867 - 22 Jul 2021
Cited by 22 | Viewed by 2913
Abstract
Jamming is a major threat to the survival of a radar system; therefore, recognition of the radar jamming signal type is an important part of radar countermeasures. Recently, convolutional neural networks (CNNs) have shown their effectiveness in radar signal processing, including jamming signal recognition. However, most existing CNN methods do not treat the radar jamming signal as complex-valued. In this study, a complex-valued CNN (CV-CNN) is investigated to fully explore the inherent characteristics of a radar jamming signal, and we find that it obtains better recognition accuracy than a real-valued CNN (RV-CNN). CV-CNNs contain more parameters, which increases inference time. To reduce parameter redundancy and speed up recognition, a fast CV-CNN (F-CV-CNN) based on pruning is proposed for fast radar jamming signal recognition. The experimental results show that the CV-CNN and F-CV-CNN methods obtain good recognition performance in terms of accuracy and speed. The proposed methods open a new window for future research and show the large potential of CV-CNN based methods for radar signal processing.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
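The core of a complex-valued convolution can be written in a few lines: real and imaginary parts are carried separately and combined according to complex multiplication, which also shows why a CV-CNN roughly doubles the parameter count of its real-valued counterpart. The sketch below assumes PyTorch and is not the authors' exact layer design.

```python
# Sketch of a complex-valued 1-D convolution layer (real/imaginary parts kept separate).
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # real-part weights
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)  # imaginary-part weights

    def forward(self, x_r, x_i):
        # (a + jb) * (w_r + j w_i) = (a*w_r - b*w_i) + j(a*w_i + b*w_r)
        y_r = self.conv_r(x_r) - self.conv_i(x_i)
        y_i = self.conv_i(x_r) + self.conv_r(x_i)
        return y_r, y_i
```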

22 pages, 8276 KiB  
Article
A Universal Automatic Bottom Tracking Method of Side Scan Sonar Data Based on Semantic Segmentation
by Gen Zheng, Hongmei Zhang, Yuqing Li and Jianhu Zhao
Remote Sens. 2021, 13(10), 1945; https://doi.org/10.3390/rs13101945 - 17 May 2021
Cited by 21 | Viewed by 5352
Abstract
Determining the altitude of a side-scan sonar (SSS) above the seabed is critical for correcting the geometric distortions in sonar images. Usually, a technique called bottom tracking is applied to estimate the distance between the sonar and the seafloor. However, traditional methods for bottom tracking often require pre-defined thresholds and complex optimization processes, which makes it difficult to achieve ideal results in complex underwater environments without manual intervention. In this paper, a universal automatic bottom tracking method based on semantic segmentation is proposed. First, the waterfall images generated from SSS backscatter sequences are labeled as water column (WC) and seabed parts, then split into specific patches to build the training dataset. Second, a symmetrical information synthesis module (SISM) is designed and added to DeepLabv3+, which not only weakens the strong echoes in the WC area but also gives the network the capability to consider the symmetry characteristic of bottom lines; most importantly, the independent module can easily be combined with other neural networks. The integrated network is then trained with the established dataset. Third, a coarse-to-fine segmentation strategy with the well-trained model is proposed to segment SSS waterfall images quickly and accurately. In addition, a fast bottom-line search algorithm is proposed to further reduce the time consumption of bottom tracking. Finally, the proposed method is validated with data measured by several commonly used SSSs in various underwater environments. The results show that the proposed method achieves a bottom tracking accuracy of 1.1 pixels mean error and 1.26 pixels standard deviation at a speed of 2128 pings/s and is robust to interference factors.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)

23 pages, 7492 KiB  
Article
Bottom Detection from Backscatter Data of Conventional Side Scan Sonars through 1D-UNet
by Jun Yan, Junxia Meng and Jianhu Zhao
Remote Sens. 2021, 13(5), 1024; https://doi.org/10.3390/rs13051024 - 08 Mar 2021
Cited by 17 | Viewed by 3391
Abstract
Conventional side-scan sonars, widely applied in many underwater research fields, require the sonar height above the seabed for geocoding seabed images. However, many interference factors, including compensation with unknown gains, suspended matter, etc., make bottom detection difficult. Existing methods require manual parameter setup or postprocessing, which limits automatic and real-time processing in complex situations. To solve this problem, a one-dimensional U-Net (1D-UNet) model for sea bottom detection from side-scan data and a bottom detection and tracking method based on 1D-UNet are proposed in this work. First, the basic theory of sonar bottom detection and the interference factors are introduced, which indicates that deep learning of the bottom is a feasible solution. Then, a 1D-UNet model for detecting the sea bottom position from side-scan backscatter strength sequences is proposed, and the structure and implementation of this model are illustrated in detail. Finally, bottom detection and tracking algorithms for a single ping and for continuous pings are presented on the basis of the proposed model. Measured side-scan sonar data from Meizhou Bay and Bayuquan District were selected to verify the model and methods. The 1D-UNet model was first trained and applied with the side-scan data from Meizhou Bay. The training and validation accuracies were 99.92% and 99.77%, respectively, and the sea bottom detection accuracy of the training survey line was 99.88%. The 1D-UNet model showed good robustness to the interference factors of bottom detection and fully real-time performance in comparison with other methods. Moreover, the trained 1D-UNet model was used to process the data from Bayuquan District to prove model generality. The proposed 1D-UNet model for bottom detection has been proven effective for side-scan sonar data and also has great potential for wider application to other types of sonar.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
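For orientation, a compact 1D-UNet that labels each sample of a backscatter-strength sequence (for example, water column versus seabed) might look like the sketch below. Channel widths, depth, and the two-class head are illustrative assumptions rather than the paper's configuration.

```python
# Compact 1D-UNet sketch for per-sample labeling of a backscatter-strength sequence.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv1d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class UNet1D(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool1d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose1d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)
        self.up1 = nn.ConvTranspose1d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)
        self.head = nn.Conv1d(16, n_classes, 1)

    def forward(self, x):                       # x: (batch, 1, ping_length)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                    # per-sample class logits

# logits = UNet1D()(torch.randn(8, 1, 1024))    # e.g., 1024 backscatter samples per ping
```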

21 pages, 1687 KiB  
Article
Multi-Block Mixed Sample Semi-Supervised Learning for SAR Target Recognition
by Ye Tian, Jianguo Sun, Pengyuan Qi, Guisheng Yin and Liguo Zhang
Remote Sens. 2021, 13(3), 361; https://doi.org/10.3390/rs13030361 - 21 Jan 2021
Cited by 5 | Viewed by 2142
Abstract
In recent years, synthetic aperture radar (SAR) automatic target recognition has played a crucial role in multiple fields and has received widespread attention. Compared with optical image recognition, which benefits from massive annotated data, the lack of sufficient labeled images limits the performance of deep learning based SAR automatic target recognition (ATR) methods. It is expensive and time-consuming to annotate targets in SAR images, while unsupervised SAR target recognition struggles to meet practical needs. In this situation, we propose a semi-supervised sample mixing method for SAR target recognition, named multi-block mixed (MBM), which can effectively utilize unlabeled samples. During the data preprocessing stage, a multi-block mixing method is used to interpolate small parts of the training images to generate new samples. Then, the new samples are used to improve the recognition accuracy of the model. To verify the effectiveness of the proposed method, experiments were carried out on the moving and stationary target acquisition and recognition (MSTAR) dataset. The experimental results fully demonstrate that the proposed MBM semi-supervised learning method can effectively address the problem of insufficient annotation in SAR datasets and can learn valuable information from unlabeled samples, thereby improving recognition performance.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
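A hedged sketch of block-wise sample mixing in this spirit is shown below: a few randomly chosen blocks of one SAR chip are replaced by (or could be interpolated with) the corresponding blocks of another chip to create a new sample. Block size and count are illustrative, not values from the paper.

```python
# Illustrative block-wise sample mixing (NumPy); not the authors' exact MBM procedure.
import numpy as np

def multi_block_mix(img_a, img_b, n_blocks=3, block=16, rng=None):
    """img_a, img_b: 2-D arrays of equal shape; returns a mixed copy of img_a."""
    if rng is None:
        rng = np.random.default_rng()
    mixed = img_a.copy()
    h, w = img_a.shape
    for _ in range(n_blocks):
        y = rng.integers(0, h - block + 1)
        x = rng.integers(0, w - block + 1)
        # Replace this block with the corresponding block of the second image.
        mixed[y:y + block, x:x + block] = img_b[y:y + block, x:x + block]
    return mixed
```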

18 pages, 3359 KiB  
Article
A Novel LSTM Model with Interaction Dual Attention for Radar Echo Extrapolation
by Chuyao Luo, Xutao Li, Yongliang Wen, Yunming Ye and Xiaofeng Zhang
Remote Sens. 2021, 13(2), 164; https://doi.org/10.3390/rs13020164 - 06 Jan 2021
Cited by 27 | Viewed by 3471
Abstract
Precipitation nowcasting is an important task in operational weather forecasting, and radar echo map extrapolation plays a vital role in it. Recently, deep learning techniques such as Convolutional Recurrent Neural Network (ConvRNN) models have been designed to solve this task. These models, albeit performing much better than conventional optical-flow based approaches, suffer from a common problem of underestimating the high-echo-value regions. This drawback is critical for precipitation nowcasting, as these regions often correspond to heavy rain that may cause natural disasters. In this paper, we propose a novel interaction dual attention long short-term memory (IDA-LSTM) model to address this drawback. In the method, an interaction framework is developed for the ConvRNN unit to fully exploit short-term context information by constructing a series of coupled convolutions on the input and hidden states. Moreover, a dual attention mechanism on channels and positions is developed to recall information forgotten over the long term. Comprehensive experiments were conducted on the CIKM AnalytiCup 2017 datasets, and the results show the effectiveness of IDA-LSTM in addressing the underestimation drawback. The extrapolation performance of IDA-LSTM is superior to that of state-of-the-art methods.
(This article belongs to the Special Issue Deep Learning for Radar and Sonar Image Processing)
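The notion of dual attention on channels and positions can be illustrated with a small standalone module such as the CBAM-style sketch below; it conveys the idea of re-weighting channels and then spatial positions, but it is not the IDA-LSTM authors' exact mechanism.

```python
# Illustrative channel + position (spatial) attention module (PyTorch).
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(), nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):                              # x: (B, C, H, W)
        x = x * self.channel_mlp(x)                    # re-weight channels
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        return x * self.spatial_conv(torch.cat([avg, mx], dim=1))  # re-weight positions
```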
