
Radar and Sonar Imaging and Processing Ⅲ

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (30 December 2022) | Viewed by 15452

Special Issue Editors


Guest Editor
Department of Geodesy, Faculty of Civil and Environmental Engineering, Gdansk University of Technology, Narutowicza St. 11/12, Gdansk, Poland
Interests: radar navigation; comparative (terrain-based) navigation; multi-sensor data fusion; radar and sonar target tracking; sonar imaging and understanding; MBES bathymetry; ASV; artificial neural networks; geoinformatics

Guest Editor
Department of Geoinformatics and Hydrography, Faculty of Navigation, Maritime University of Szczecin, 70-500 Szczecin, Poland
Interests: target tracking; data fusion; maritime radars; spatial analysis; artificial neural networks; mobile cartography

Special Issue Information

Dear Colleagues,

In recent years, radar and sonar technology has been at the center of several major developments in remote sensing, in both civilian and defense applications. It allows us to observe targets in space, on land, at sea, and under water, providing increasingly precise information about them. Although radar technology has been known for more than 100 years, it is still developing and is now implemented in many maritime, air, satellite, and land applications. New technologies such as sparse image reconstruction and multistatic active and passive SAR and ISAR imaging are improving image quality and broadening the scope of applications. The rapid development of three-dimensional automotive radars able to recognize different objects and assess the risk of collision is another example of the progress of this technology. In maritime radars, FMCW technology is becoming more and more popular alongside classical pulse radars. Sonar technology has likewise been in use for many decades, at first only in military systems; today, in 3D versions, it is used for many underwater tasks, such as underwater surface imaging, target detection, and tracking. The impact of sonar technologies has been growing, particularly at the start of the autonomous vehicle era. Recently, artificial intelligence has begun to influence radar and sonar image processing and understanding. Radar and sonar systems are mounted onboard smart and flexible platforms and on several types of unmanned vehicles. Both technologies focus on the remote detection of targets, and both face many common scientific challenges. Unfortunately, specialists from the radar and sonar fields do not interact with each other enough, slowing progress in both areas.

This Special Issue will report the latest advances and trends in the field of remote sensing for radar and sonar image processing, addressing original developments, new applications, and practical solutions to open questions. It will be the third installment, following the success of the previous two. The aim is to further increase data and knowledge exchange within the scientific community and to allow experts from other areas to understand radar and sonar problems. Topics for this Special Issue include, but are not limited to, the following:

  • 3D radar and 3D sonar imaging.
  • Artificial intelligence for radar and sonar data processing.
  • Automatic target detection and classification.
  • Automotive and maritime radar.
  • Ground penetrating radar application in civil engineering.
  • Interferometric methods.
  • Multi-sensor data fusion.
  • Passive and active radar imaging (SAR, ISAR).
  • Radar and sonar surveillance systems.
  • Radar and sonar target tracking and anti-collision algorithms and methods.
  • Radar and sonar technology for autonomous vehicles.
  • Radar sensors design and platform developments.
  • Side scan sonar, imaging sonar, chirp sonar, and forward-looking sonar.
  • Sonar image processing, data reduction, feature extraction, and image understanding.
  • Synergy between radar, sonar, and other sensors.

Prof. Dr. Andrzej Stateczny
Dr. Witold Kazimierski
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.


Published Papers (8 papers)


Research

20 pages, 6880 KiB  
Article
MV-GPRNet: Multi-View Subsurface Defect Detection Network for Airport Runway Inspection Based on GPR
by Nansha Li, Renbiao Wu, Haifeng Li, Huaichao Wang, Zhongcheng Gui and Dezhen Song
Remote Sens. 2022, 14(18), 4472; https://doi.org/10.3390/rs14184472 - 07 Sep 2022
Cited by 6 | Viewed by 2372
Abstract
The detection and restoration of subsurface defects are essential for ensuring the structural reliability of airport runways. Subsurface inspections can be performed with the aid of a robot equipped with a Ground Penetrating Radar (GPR). However, interpreting GPR data is extremely difficult, as GPR data usually contain severe clutter interference. In addition, many different types of subsurface defects present similar features in B-scan images, making them difficult to distinguish. This complicates later maintenance work, as different subsurface defects require different restoration measures. Thus, to automate the inspection process and improve defect identification accuracy, a novel deep learning algorithm, MV-GPRNet, is proposed. Instead of using GPR B-scan images only, as is traditional, MV-GPRNet utilizes multi-view GPR data to robustly detect defective regions despite significant interference. It fuses the 3D feature map from C-scan data with the 2D feature map from Top-scan data for defect classification and localization. With our runway inspection robot, a large number of real runway data sets from three international airports have been used to extensively test our method. Experimental results indicate that the proposed MV-GPRNet outperforms state-of-the-art (SOTA) approaches. In particular, MV-GPRNet achieves F1 scores of 91%, 69%, 90%, and 100% for voids, cracks, subsidences, and pipes, respectively.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
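The core multi-view idea — collapsing the 3D C-scan feature map onto the same spatial grid as the 2D Top-scan feature map and fusing the two — can be sketched in a few lines of NumPy. The shapes, the max-pooling over depth, and the channel concatenation below are illustrative assumptions, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature maps: a 3D C-scan feature volume (depth x H x W x C)
# and a 2D Top-scan feature map (H x W x C). All shapes are hypothetical.
cscan_feat = rng.standard_normal((16, 8, 8, 32))
topscan_feat = rng.standard_normal((8, 8, 16))

# Collapse the depth axis of the 3D map (max-pooling over depth) so both
# views share the same spatial grid, then concatenate along channels.
cscan_2d = cscan_feat.max(axis=0)                           # (8, 8, 32)
fused = np.concatenate([cscan_2d, topscan_feat], axis=-1)   # (8, 8, 48)

print(fused.shape)
```

A detection head would then classify and localize defects from the fused map; the fusion step itself is only a pooling-and-concatenation of the two views.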

23 pages, 8996 KiB  
Article
Extraction of Submarine Gas Plume Based on Multibeam Water Column Point Cloud Model
by Xin Ren, Dong Ding, Haosen Qin, Le Ma and Guangxue Li
Remote Sens. 2022, 14(17), 4387; https://doi.org/10.3390/rs14174387 - 03 Sep 2022
Cited by 3 | Viewed by 1753
Abstract
The gas plume is a direct manifestation of sea-floor cold seeps and one of the most significant indicators of the presence of gas hydrate reservoirs. Multibeam water column (MWC) data can be used to extract and identify gas plumes efficiently and accurately. Current research methods mostly start from the perspective of image theory and cannot identify the three-dimensional (3D) spatial structure of gas plumes, reducing the efficiency and accuracy of detection. Therefore, this paper proposes a method for identifying and extracting the gas plume based on an MWC point cloud model, which resolves the spatial position of the MWC data and constructs a 3D point cloud model of the MWC containing acoustic reflection intensity information. The method first suppresses noise in the 3D MWC point cloud based on symmetric subtraction and the Otsu algorithm, leveraging the noise distribution of the MWC and the reflection intensity characteristics of the gas plume. Then, it extracts point cloud clusters containing the gas plume using Density-Based Spatial Clustering of Applications with Noise (DBSCAN), according to the density difference between the gas plume point cloud and the background MWC point cloud, and identifies the clusters by feature matching based on fast point feature histograms (FPFHs). Finally, it extracts the gas plume point cloud set from the MWC. As evidenced by MWC data collected from gas hydrate enrichment zones in the Gulf of Mexico, the locations of gas plumes extracted by this method are highly consistent with those of gas leakage points measured during the cruise. Using this method, we obtained a point cloud data set of the gas plume for the first time and accurately characterized the 3D spatial morphology of the subsea gas plume, providing technical support for gas hydrate exploration, subsea gas seepage area delineation, and subsea seepage gas flux estimation.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
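The density-based clustering step can be illustrated with a minimal, from-scratch DBSCAN on a synthetic cloud: a dense "plume" blob embedded in sparse background noise. The `eps`/`min_pts` values and the synthetic geometry are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

def dbscan(points, eps=0.5, min_pts=10):
    """Minimal DBSCAN: returns one label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    # Brute-force pairwise distances (fine for a small illustrative cloud).
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                      # already labeled, or not a core point
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:                      # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])
        cluster += 1
    return labels

rng = np.random.default_rng(1)
plume = rng.normal(loc=[0.0, 0.0, 5.0], scale=0.2, size=(200, 3))  # dense blob
background = rng.uniform(-10, 10, size=(100, 3))                   # sparse noise
pts = np.vstack([plume, background])

labels = dbscan(pts, eps=0.5, min_pts=10)
```

The dense plume points end up in one cluster while the sparse background is flagged as noise, which is exactly the density contrast the paper exploits.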

18 pages, 4551 KiB  
Article
MIMO Radar Sparse Recovery Imaging with Wideband Interference Prediction
by Tao Pu, Ningning Tong, Weike Feng, Pengcheng Wan and Xiaowei Hu
Remote Sens. 2022, 14(15), 3774; https://doi.org/10.3390/rs14153774 - 05 Aug 2022
Cited by 3 | Viewed by 1210
Abstract
Multiple-input multiple-output (MIMO) radar three-dimensional (3D) imaging is widely applied in military and civil fields. However, MIMO radar is easily affected by wideband interference (WBI). To solve this problem, we propose a sparse recovery imaging method with WBI prediction based on the predictive recurrent neural network (PredRNN) and the tensor-based smooth L0 (TSL0) algorithm. First, we extract the time-frequency (TF) feature of historical measured WBI via the short-time Fourier transform (STFT). This allows PredRNN to exploit the spatiotemporal correlation of the WBI in the TF domain and predict the TF feature of the WBI in the future. Then, we adaptively design a random sparse stepped-frequency waveform by selecting frequencies that do not overlap with the WBI according to the predicted TF feature. Finally, we apply the TSL0 algorithm to reconstruct the 3D high-resolution target image from the sparse signal cube. Simulation results show the high performance and robustness of the proposed imaging method in the presence of different WBIs.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
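The first step — turning a measured interference record into a time-frequency feature via the STFT — can be sketched as follows; the window length, hop size, and the linear chirp standing in for a WBI sweep are illustrative choices, not the paper's settings:

```python
import numpy as np

def stft(x, win_len=64, hop=16):
    """Magnitude STFT with a Hann window (illustrative parameters)."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=-1)).T   # (freq bins, time frames)

# A linear chirp standing in for a wideband interference sweep.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 100 * t**2))       # 50 Hz -> 250 Hz sweep

tf = stft(x)
ridge = tf.argmax(axis=0)   # peak frequency bin per frame
```

The rising ridge in `tf` is the kind of TF structure PredRNN is trained to extrapolate one step ahead.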

24 pages, 19969 KiB  
Article
Bubble Plume Target Detection Method of Multibeam Water Column Images Based on Bags of Visual Word Features
by Junxia Meng, Jun Yan and Jianhu Zhao
Remote Sens. 2022, 14(14), 3296; https://doi.org/10.3390/rs14143296 - 08 Jul 2022
Cited by 5 | Viewed by 1660
Abstract
Bubble plumes, as the main manifestation of seabed gas leakage, play an important role in the exploration of natural gas hydrate and other resources. Multibeam water column images have been widely used to detect bubble plume targets in recent years because they wholly record water column and seabed backscatter strengths. However, strong noise in multibeam water column images causes many issues in target detection, and traditional target detection methods, mainly designed for optical images, are less efficient for noise-affected sonar images. To improve the detection accuracy of bubble plume targets in water column images, this study proposes a target detection method based on bag of visual words (BOVW) features and a support vector machine (SVM) classifier. First, the characteristics of bubble plume targets in water column images are analyzed, with the conclusion that BOVW features can well express the gray-scale, texture, and shape characteristics of bubble plumes. Second, the BOVW features are constructed through the steps of keypoint descriptor extraction, descriptor clustering, and feature encoding. Third, a quadratic SVM classifier is used for the recognition of target images. Finally, a procedure for bubble plume target detection in water column images is described. In an experiment using measured data from the Strait of Georgia, the proposed method achieved 98.6% recognition accuracy for bubble plume targets in validation sets and a 91.7% correct detection rate for targets in water column images. Comparison with other methods confirms the validity and accuracy of the proposed method and shows its potential applications in the exploration of and research on ocean resources.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
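The BOVW construction (clustering local descriptors into a visual vocabulary, then encoding an image as a histogram of nearest words) can be sketched with plain NumPy. The descriptor dimension, vocabulary size, and random "descriptors" below are placeholders, and the SVM stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means to learn the visual vocabulary."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return centers

def bovw_encode(descriptors, vocab):
    """Histogram of nearest visual words, L1-normalized."""
    d = np.linalg.norm(descriptors[:, None] - vocab[None], axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Hypothetical local descriptors pooled from many training patches.
train_desc = rng.standard_normal((500, 32))
vocab = kmeans(train_desc, k=16)

# Encode one "image" (its set of local descriptors) as a fixed-length vector.
img_desc = rng.standard_normal((80, 32))
feature = bovw_encode(img_desc, vocab)
```

The fixed-length `feature` vector is what a classifier such as the paper's quadratic SVM would consume.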

16 pages, 6100 KiB  
Article
Unambiguous ISAR Imaging Method for Complex Maneuvering Group Targets
by Fengkai Liu, Darong Huang, Xinrong Guo and Cunqian Feng
Remote Sens. 2022, 14(11), 2554; https://doi.org/10.3390/rs14112554 - 26 May 2022
Cited by 3 | Viewed by 1257
Abstract
In inverse synthetic aperture radar (ISAR) imaging, it is essential to deal with the Doppler ambiguity of group targets with complex maneuvers, in order to avoid bias of the estimated target positions away from their actual values. At the same time, migration through resolution cell (MTRC) cannot be compensated for in preprocessing while the Doppler ambiguity is present. Traditional ISAR imaging methods for maneuvering targets, however, struggle to handle the severe deformation and defocusing in the imaging results induced by the Doppler ambiguity and MTRC. In this paper, we propose a novel and effective ISAR imaging method that improves imaging quality by removing the Doppler ambiguity and compensating for the MTRC. Specifically, we first model the echo as a multi-component cubic phase signal (m-CPS) and design a high-order instantaneous autocorrelation function–generalized scaled Fourier transform (HIAF–GSCFT) to process the echo, which estimates the rotational parameters without MTRC compensation. Then, a maximum weighted contrast algorithm is used to remove the Doppler ambiguity, followed by reconstruction of the echo. Compared with existing methods, the proposed method accurately estimates the rotational parameters in the presence of MTRC and achieves high-quality ISAR images of group targets with complex maneuvers, free of Doppler ambiguity. Experiments on simulated and measured datasets validate its effectiveness and robustness for both single and group targets.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
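The Doppler ambiguity at the heart of the problem is simple to state: a pulse radar samples the Doppler phase once per pulse, so any Doppler frequency is only observed modulo the pulse repetition frequency (PRF). A tiny sketch (with an arbitrary PRF) shows the folding:

```python
# Doppler frequencies outside [-PRF/2, PRF/2) fold back into that band --
# the ambiguity the paper's maximum weighted contrast step resolves.
def apparent_doppler(f_true, prf):
    return (f_true + prf / 2) % prf - prf / 2

prf = 1000.0
inside = apparent_doppler(300.0, prf)    # within the unambiguous band
folded = apparent_doppler(700.0, prf)    # folds to a negative frequency
wrapped = apparent_doppler(1300.0, prf)  # wraps a full PRF back into band
```

Because `folded` and `wrapped` are indistinguishable from genuine in-band Doppler, position estimates are biased unless the ambiguity number is recovered first.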

21 pages, 11319 KiB  
Article
3D Sparse SAR Image Reconstruction Based on Cauchy Penalty and Convex Optimization
by Yangyang Wang, Zhiming He, Fan Yang, Qiangqiang Zeng and Xu Zhan
Remote Sens. 2022, 14(10), 2308; https://doi.org/10.3390/rs14102308 - 10 May 2022
Cited by 2 | Viewed by 1525
Abstract
Three-dimensional (3D) synthetic aperture radar (SAR) images can provide comprehensive 3D spatial information for environmental monitoring, high-dimensional mapping, and radar cross-section (RCS) measurement. However, SAR images obtained by the traditional matched filtering (MF) method have high sidelobes and are easily disturbed by noise. To obtain high-quality 3D SAR images, sparse signal processing has been applied to SAR imaging in recent years. However, the typical L1 regularization model is a biased estimator, which tends to underestimate the target intensity. Therefore, in this article, we present a 3D sparse SAR image reconstruction method combining the Cauchy penalty and an improved alternating direction method of multipliers (ADMM). The Cauchy penalty is a non-convex penalty function that can estimate the target intensity more accurately than L1. At the same time, the objective function maintains convexity via the convex non-convex (CNC) strategy. Compared with L1 regularization, the proposed method reconstructs the image more accurately and improves image quality. Finally, three indexes suitable for SAR images are used to evaluate the performance of the method under different conditions. Simulation and experimental results verify the effectiveness of the proposed method.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
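The bias argument can be made concrete by comparing the shrinkage force (penalty gradient) of the two penalties: L1 pulls on every coefficient with constant strength, while the Cauchy penalty's pull decays for large coefficients, so strong scatterers are underestimated less. The penalty scaling below is illustrative, not the paper's parameterization:

```python
import numpy as np

lam, gamma = 1.0, 1.0
x = np.array([0.5, 2.0, 10.0, 100.0])   # coefficient magnitudes

# Gradient of lam*|x|: constant shrinkage force regardless of magnitude.
l1_grad = lam * np.sign(x)

# Gradient of lam*log(1 + x^2/gamma^2): force decays as |x| grows,
# so large (bright) scatterers are barely shrunk -- less bias.
cauchy_grad = lam * 2 * x / (gamma**2 + x**2)

print(l1_grad, np.round(cauchy_grad, 4))
```

This vanishing force for large coefficients is what makes the penalty non-convex, and why the CNC strategy is needed to keep the overall objective convex.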

25 pages, 8607 KiB  
Article
A Multi-Domain Collaborative Transfer Learning Method with Multi-Scale Repeated Attention Mechanism for Underwater Side-Scan Sonar Image Classification
by Zhen Cheng, Guanying Huo and Haisen Li
Remote Sens. 2022, 14(2), 355; https://doi.org/10.3390/rs14020355 - 13 Jan 2022
Cited by 20 | Viewed by 2985
Abstract
Recognition and classification of underwater targets using side-scan sonar (SSS) images is a major challenge, because the strong speckle noise caused by seabed reverberation makes it difficult to extract discriminating, noise-free features of a target. Moreover, unlike the classification of optical images, which can use a large dataset to train the classifier, classification of SSS images usually has to exploit a very small training dataset, which may cause classifier overfitting. Compared with traditional feature extraction methods using descriptors—such as Haar, SIFT, and LBP—deep learning-based methods are more powerful in capturing discriminating features. After training on a large optical dataset, e.g., ImageNet, direct fine-tuning brings improvement to sonar image classification using a small SSS image dataset. However, due to the different statistical characteristics of optical and sonar images, transfer learning methods such as fine-tuning lack cross-domain adaptability and therefore cannot achieve very satisfactory results. In this paper, a multi-domain collaborative transfer learning (MDCTL) method with a multi-scale repeated attention mechanism (MSRAM) is proposed to improve the accuracy of underwater sonar image classification. In the MDCTL method, low-level characteristic similarity between SSS images and synthetic aperture radar (SAR) images, and high-level representation similarity between SSS images and optical images, are used together to enhance the feature extraction ability of the deep learning model. By using the different characteristics of multi-domain data to efficiently capture features useful for sonar image classification, MDCTL offers a new way of transfer learning. The MSRAM effectively combines multi-scale features to make the proposed model pay more attention to the shape details of the target while excluding noise. Classification experiments show that, using multi-domain datasets, the proposed method is more stable, with an overall accuracy of 99.21%, an improvement of 4.54% over the fine-tuned VGG19. Results given by diverse visualization methods also demonstrate that the MDCTL and MSRAM make the method more powerful in feature representation.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)

22 pages, 10080 KiB  
Article
Three-Dimensional Sparse SAR Imaging with Generalized Lq Regularization
by Yangyang Wang, Zhiming He, Xu Zhan, Yuanhua Fu and Liming Zhou
Remote Sens. 2022, 14(2), 288; https://doi.org/10.3390/rs14020288 - 09 Jan 2022
Cited by 6 | Viewed by 1637
Abstract
Three-dimensional (3D) synthetic aperture radar (SAR) imaging provides complete 3D spatial information and has been used in environmental monitoring in recent years. Compared with matched filtering (MF) algorithms, regularization techniques can improve image quality. However, due to their substantial computational cost, existing observation-matrix-based sparse imaging algorithms are difficult to apply to large-scene and 3D reconstructions. Therefore, in this paper, novel 3D sparse reconstruction algorithms with generalized Lq regularization are proposed. First, we combine majorization–minimization (MM) and L1 regularization (MM-L1) to improve SAR image quality. Next, we combine MM and L1/2 regularization (MM-L1/2) to achieve high-quality 3D images. Then, we present an algorithm combining MM and L0 regularization (MM-L0) to obtain 3D images. Finally, we present a generalized MM-Lq algorithm (GMM-Lq) for sparse SAR imaging problems with arbitrary q values (0 ≤ q ≤ 1). Compared with existing regularization techniques, the proposed algorithms can improve the performance of 3D SAR images while effectively reducing the amount of computation. Additionally, the reconstructed complex image retains phase information, which keeps the reconstructed SAR image suitable for interferometry applications. Simulation and experimental results verify the effectiveness of the algorithms.
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing Ⅲ)
