SAR Images Processing and Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 May 2023) | Viewed by 54230

Special Issue Editors


Guest Editor
School of Electronic Engineering, Xidian University, Xi’an 710071, China
Interests: high-resolution radar imaging; radar automatic target recognition

Guest Editor
College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China
Interests: SAR image interpretation; automatic target recognition; radar signal processing

Guest Editor
1. Suzhou Aerospace Information Research Institute, Suzhou 215124, China
2. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
Interests: spaceborne SAR and airborne SAR data processing; high quality SAR image products generation; SAR 2D/3D imaging; microwave vision

Guest Editor
Institute of Information and Navigation, Air Force Engineering University, Xi’an 710077, China
Interests: remote sensing; synthetic aperture radar (SAR); radar imaging; automatic target recognition (ATR)

Guest Editor
College of Information Science and Technology, Beijing University of Chemical Technology, Beijing 100029, China
Interests: synthetic aperture radar (SAR); radar remote sensing; polarimetric SAR; satellite image analysis

Special Issue Information

Dear Colleagues,

Synthetic aperture radar (SAR), an active microwave sensor, offers all-day, all-weather, and long-range observation capabilities. In recent years, it has achieved rapid progress in system design, signal processing, and information acquisition, and it plays significant roles in civil applications (e.g., topography, geology, and disaster monitoring) and military applications (e.g., battlefield reconnaissance and tactical assessment).

Currently, SAR is developing towards diversified platforms, comprehensive imaging modes, and advanced working systems, constantly producing massive volumes of SAR images. Although the performance of image formation and information acquisition can be boosted by advanced signal and image processing theories, we still face challenges concerning: (1) interference/noise mitigation and target enhancement in complex observation environments; (2) feature extraction and fusion of multi-domain, multi-source, and multi-scale SAR data; (3) accurate classification with unbalanced or scarce samples; and (4) the design of integrated image processing and interpretation architectures.

This Special Issue provides a platform for researchers to discuss the above significant challenges. Authors are encouraged to submit their latest research progress regarding new theories and technologies in SAR image processing and interpretation, as well as new applications in a wider range of fields covering a variety of SAR platforms, such as satellites, aircraft, and UAVs. We welcome topics including, but not limited to:

  • SAR image processing, including image formation, image denoising/interference mitigation, image enhancement, etc.
  • SAR image interpretation, including SAR image segmentation, feature extraction and fusion, SAR target detection and classification, change detection, etc.
  • Advanced SAR processing techniques, including few-shot and zero-shot recognition, incremental learning in an open environment, lightweight/physical interpretable deep neural networks, integrated imaging and detection/classification networks, etc.
  • New SAR image data sets.

Prof. Dr. Xueru Bai
Prof. Dr. Gangyao Kuang
Prof. Dr. Xiaolan Qiu
Prof. Dr. Ying Luo
Prof. Dr. Deliang Xiang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • synthetic aperture radar (SAR)
  • remote sensing
  • image processing
  • interference mitigation
  • image interpretation
  • target detection
  • classification/recognition
  • deep networks


Published Papers (33 papers)


Research

28 pages, 19618 KiB  
Article
Deep Image Prior Amplitude SAR Image Anonymization
by Edoardo Daniele Cannas, Sara Mandelli, Paolo Bestagini, Stefano Tubaro and Edward J. Delp
Remote Sens. 2023, 15(15), 3750; https://doi.org/10.3390/rs15153750 - 27 Jul 2023
Cited by 2 | Viewed by 1065
Abstract
This paper presents an extensive evaluation of the Deep Image Prior (DIP) technique for image inpainting on Synthetic Aperture Radar (SAR) images. SAR images are gaining popularity in various applications, but there may be a need to conceal certain regions of them. Image inpainting provides a solution for this. However, not all inpainting techniques are designed to work on SAR images. Some are intended for use on photographs, while others have to be specifically trained on top of a huge set of images. In this work, we evaluate the performance of the DIP technique that is capable of addressing these challenges: it can adapt to the image under analysis including SAR imagery; it does not require any training. Our results demonstrate that the DIP method achieves great performance in terms of objective and semantic metrics. This indicates that the DIP method is a promising approach for inpainting SAR images, and can provide high-quality results that meet the requirements of various applications. Full article
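The essence of DIP-based inpainting is optimizing a model against only the observed pixels so that its output plausibly fills the concealed region. The toy below uses a low-order polynomial basis in place of the CNN prior; it is an illustrative sketch of the masked-fitting idea, not the authors' method, and all names and values are synthetic.

```python
import numpy as np

# DIP-style masked fitting: fit a model to *observed* pixels only,
# then evaluate it inside the masked (anonymized) region.
rng = np.random.default_rng(0)
H = W = 32
y, x = np.mgrid[0:H, 0:W] / H                 # normalized coordinates
image = 1.0 + 0.5 * x + 0.3 * y + 0.2 * x * y  # smooth toy "scene"

mask = np.zeros((H, W), dtype=bool)
mask[10:20, 12:22] = True                      # region to conceal and re-synthesize

# Design matrix of low-order polynomial terms (stand-in for the CNN prior)
A = np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=-1).reshape(-1, 6)
obs = ~mask.ravel()
coef, *_ = np.linalg.lstsq(A[obs], image.ravel()[obs], rcond=None)

inpainted = image.copy().ravel()
inpainted[~obs] = A[~obs] @ coef               # fill only the hole
inpainted = inpainted.reshape(H, W)

err = np.abs(inpainted[mask] - image[mask]).max()
print(f"max reconstruction error in hole: {err:.2e}")
```

Because the toy image lies in the span of the basis, the hole is recovered almost exactly; a CNN prior plays the analogous role for real SAR amplitude images.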
(This article belongs to the Special Issue SAR Images Processing and Analysis)

24 pages, 5725 KiB  
Article
FedDAD: Solving the Islanding Problem of SAR Image Aircraft Detection Data
by Zhiwei Jia, Haoliang Zheng, Rongjie Wang and Wenguang Zhou
Remote Sens. 2023, 15(14), 3620; https://doi.org/10.3390/rs15143620 - 20 Jul 2023
Cited by 2 | Viewed by 979
Abstract
In aircraft feature detection, the difficulty of acquiring Synthetic Aperture Radar (SAR) images leads to the scarcity of some types of aircraft samples, and high privacy requirements give personal sample sets the characteristics of data silos. Existing data enhancement methods can alleviate the problem of data scarcity through feature reuse, but they remain powerless for data that are not involved in local training. To solve this problem, a new federated learning framework was proposed to address data scarcity and data silos through multi-client joint training and model aggregation. The commonly used federated averaging (FedAvg) algorithm is not effective for aircraft detection with unbalanced samples, so a federated distribution average deviation (FedDAD) algorithm, which is more suitable for aircraft detection in SAR images, was designed. Based on label distribution and client model quality, the contribution ratio of each client parameter is adaptively adjusted to optimize the global model. Client models trained through federated cooperation have an advantage in detecting aircraft with unknown scenarios or attitudes while remaining sensitive to local datasets. Based on the YOLOv5s algorithm, the feasibility of federated learning was verified on SAR image aircraft detection datasets, as was the portability of the FedDAD algorithm on public datasets. In tests based on the YOLOv5s algorithm, FedDAD outperformed FedAvg's mAP0.5–0.95 on the total test set of two SAR image aircraft detection datasets and far outperformed the locally centralized training model. Full article
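The aggregation step can be sketched as follows. Plain FedAvg weights each client update by its sample count; the variant below adds a hypothetical label-distribution deviation weight only to suggest the flavor of distribution-aware aggregation. The paper's exact FedDAD weighting rule is not reproduced here, and all client data are synthetic.

```python
import numpy as np

# Two toy clients, each with model parameters, a sample count,
# and a local label distribution over two classes.
clients = {
    "A": {"params": np.array([1.0, 2.0]), "n": 100, "labels": np.array([0.9, 0.1])},
    "B": {"params": np.array([3.0, 4.0]), "n": 300, "labels": np.array([0.5, 0.5])},
}
global_dist = np.array([0.6, 0.4])  # assumed global label distribution

def fedavg(clients):
    # Classic FedAvg: weight each client by its sample count.
    total = sum(c["n"] for c in clients.values())
    return sum(c["n"] / total * c["params"] for c in clients.values())

def feddad_like(clients, global_dist):
    # Hypothetical FedDAD-style rule: down-weight clients whose label
    # distribution deviates (L1 distance) from the global one.
    w = {k: c["n"] * np.exp(-np.abs(c["labels"] - global_dist).sum())
         for k, c in clients.items()}
    z = sum(w.values())
    return sum(w[k] / z * c["params"] for k, c in clients.items())

print(fedavg(clients))                    # weighted toward client B (more samples)
print(feddad_like(clients, global_dist))  # B gains further weight (closer to global dist)
```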

24 pages, 9563 KiB  
Article
Optical and SAR Image Registration Based on Pseudo-SAR Image Generation Strategy
by Canbin Hu, Runze Zhu, Xiaokun Sun, Xinwei Li and Deliang Xiang
Remote Sens. 2023, 15(14), 3528; https://doi.org/10.3390/rs15143528 - 13 Jul 2023
Viewed by 1352
Abstract
The registration of optical and SAR images has always been a challenging task due to the different imaging mechanisms of the corresponding sensors. To mitigate this difference, this paper proposes a registration algorithm based on a pseudo-SAR image generation strategy and an improved deep learning-based network. The method consists of two stages: a pseudo-SAR image generation strategy and an image registration network. In the pseudo-SAR image generation section, an improved Restormer network is used to convert optical images into pseudo-SAR images. An L2 loss function is adopted in the network; this loss function fluctuates less near the optimal point, making it easier for the model to reach the fitting state. In the registration part, the ROEWA operator is used to construct the Harris scale space for pseudo-SAR and real SAR images, respectively, and each extreme point in the scale space is extracted and added to the keypoint set. The image patches around the keypoints are selected and fed into the network to obtain the feature descriptors. The pseudo-SAR and real SAR images are matched according to the descriptors, and outliers are removed by the RANSAC algorithm to obtain the final registration result. The proposed method is tested on a public dataset. The experimental analysis shows that the average NCM surpasses that of similar methods by over 30%, and the average RMSE is lower than that of similar methods by more than 0.04. The results demonstrate that the proposed strategy is more robust than other state-of-the-art methods. Full article
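The outlier-removal step can be illustrated with a minimal RANSAC loop. The sketch below estimates a pure 2D translation between matched keypoints on synthetic data; the paper's pipeline estimates a richer transform from learned descriptors, so this is only the RANSAC idea in its simplest form.

```python
import numpy as np

# Synthetic matched keypoints: 40 pairs, of which 8 are bad matches.
rng = np.random.default_rng(1)
true_shift = np.array([5.0, -3.0])
src = rng.uniform(0, 100, size=(40, 2))
dst = src + true_shift
dst[:8] += rng.uniform(20, 40, size=(8, 2))   # corrupt 8 pairs (outliers)

def ransac_translation(src, dst, iters=200, tol=1.0):
    best_inliers, best_shift = np.zeros(len(src), bool), None
    for _ in range(iters):
        i = rng.integers(len(src))            # one correspondence fixes a translation
        shift = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + shift), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_shift = inliers, shift
    # refit on the full inlier set for the final estimate
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0), best_inliers

shift, inliers = ransac_translation(src, dst)
print(shift, inliers.sum())
```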

21 pages, 3220 KiB  
Article
Ship Detection in Low-Quality SAR Images via an Unsupervised Domain Adaption Method
by Xinyang Pu, Hecheng Jia, Yu Xin, Feng Wang and Haipeng Wang
Remote Sens. 2023, 15(13), 3326; https://doi.org/10.3390/rs15133326 - 29 Jun 2023
Cited by 1 | Viewed by 1141
Abstract
Ship detection in low-quality Synthetic Aperture Radar (SAR) images poses a persistent challenge. Noise signals in complex environments disrupt imaging conditions, hindering SAR systems from acquiring precise target information and thereby significantly compromising the performance of detectors. Some methods mitigate interference via denoising techniques, while others introduce noise using classical methods to learn target features in the presence of noise. This conundrum is regarded as a cross-domain problem in this paper; a ship detection method for low-quality images is proposed that learns target features and curbs the severe deterioration of detection performance by utilizing Generative Adversarial Networks (GANs). First, an image-to-image translation task is implemented using CycleGAN to generate low-quality SAR images with complex interference from the source domain to the target domain. Second, with annotation inheritance, these generated SAR images participate in a training process to improve the detection accuracy and model robustness. Multiple experiments indicate that the proposed method conspicuously improves the detection performance and efficaciously reduces the missed detection rate in the SAR ship detection task. This cross-domain approach achieved improvements of 11.0% mAP and 3.22% mAP on the GaoFen-3 ship dataset and SRSSD-V1.0, respectively. In addition, the characteristics and potential of near-shore and off-shore SAR image reconstruction with style transfer based on Generative Adversarial Networks were explored and analyzed in this work. Full article

13 pages, 1141 KiB  
Communication
Effects of Atmospheric Coherent Time on Inverse Synthetic Aperture Ladar Imaging through Atmospheric Turbulence
by Azezigul Abdukirim, Yichong Ren, Zhiwei Tao, Shiwei Liu, Yanling Li, Hanling Deng and Ruizhong Rao
Remote Sens. 2023, 15(11), 2883; https://doi.org/10.3390/rs15112883 - 01 Jun 2023
Cited by 3 | Viewed by 1077
Abstract
Inverse synthetic aperture ladar (ISAL) can achieve high-resolution images for long-range moving targets, while its performance is affected by atmospheric turbulence. In this paper, the dynamic evolution of atmospheric turbulence is studied by using an infinitely long phase screen (ILPS), and the atmospheric coherent time is defined to describe the variation speed of the phase fluctuation induced by atmospheric turbulence. The simulation results show that the temporal decoherence of the echo induced by turbulence causes phase fluctuation and introduces an extra random phase, which deteriorates the phase stability and makes coherent synthesis impossible. Thus, we evaluated its effects on ISAL imaging and found a method to mitigate the impact of turbulence on ISAL images. The phase compensation algorithm could correct the phase variation in different pulses instead of that within the same pulse. Therefore, the relationship between the atmospheric coherent time and pulse duration time (rather than that between the atmospheric coherent time and ISAL imaging time) ultimately determines the ISAL imaging quality. Furthermore, these adverse effects could be mitigated by increasing the atmospheric coherent time or decreasing the pulse duration time, which results in an improvement in the ISAL imaging quality. Full article

18 pages, 3275 KiB  
Article
A Statistical Analysis for Intensity Wavelength-Resolution SAR Difference Images
by Gustavo Henrique Mittmann Voigt, Dimas Irion Alves, Crístian Müller, Renato Machado, Lucas Pedroso Ramos, Viet Thuy Vu and Mats I. Pettersson
Remote Sens. 2023, 15(9), 2401; https://doi.org/10.3390/rs15092401 - 04 May 2023
Viewed by 1357
Abstract
This paper presents a statistical analysis of intensity wavelength-resolution synthetic aperture radar (SAR) difference images. In this analysis, Anderson Darling goodness-of-fit tests are performed, considering two different statistical distributions as candidates for modeling the clutter-plus-noise, i.e., the background statistics. The results show that the Gamma distribution is a good fit for the background of the tested SAR images, especially when compared with the Exponential distribution. Based on the results of this statistical analysis, a change detection application for the detection of concealed targets is presented. The adequate selection of the background distribution allows for the evaluated change detection method to achieve a better performance in terms of probability of detection and false alarm rate, even when compared with competitive performance change detection methods in the literature. For instance, in an experimental evaluation considering a data set obtained by the Coherent All Radio Band Sensing (CARABAS) II UWB SAR system, the evaluated change detection method reached a detection probability of 0.981 for a false alarm rate of 1/km2. Full article
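The underlying model-selection question, whether a Gamma or an Exponential distribution better describes the background, can be illustrated without the full Anderson-Darling machinery by comparing fitted log-likelihoods on synthetic clutter. This is only a sketch of the comparison, not the paper's test; the clutter samples and fit method (method of moments for the Gamma) are assumptions of the example.

```python
import numpy as np
from math import lgamma

# Synthetic background clutter drawn from a Gamma distribution.
rng = np.random.default_rng(2)
clutter = rng.gamma(shape=3.0, scale=2.0, size=5000)

def gamma_loglik(x):
    # Method-of-moments fit: shape k = mean^2/var, scale theta = var/mean,
    # then the Gamma log-density summed over the samples.
    m, v = x.mean(), x.var()
    k, theta = m * m / v, v / m
    return np.sum((k - 1) * np.log(x) - x / theta - k * np.log(theta) - lgamma(k))

def expon_loglik(x):
    # Exponential MLE: rate = 1/mean.
    lam = 1.0 / x.mean()
    return np.sum(np.log(lam) - lam * x)

# The Gamma fit should achieve the higher log-likelihood on this data.
print(gamma_loglik(clutter), expon_loglik(clutter))
```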

20 pages, 3607 KiB  
Article
SAR Image Quality Assessment: From Sample-Wise to Class-Wise
by Ziyi Yu, Ganggang Dong and Hongwei Liu
Remote Sens. 2023, 15(8), 2110; https://doi.org/10.3390/rs15082110 - 17 Apr 2023
Viewed by 1498
Abstract
Target recognition is the core application of radar image interpretation. In recent years, deep learning has become the mainstream solution. However, this family of methods depends heavily on large numbers of training samples, and limited samples may lead to problems such as underfitting and poor robustness. To address this, numerous generative models have been presented, and the generated samples play an important role in target recognition. It is therefore necessary to assess the quality of simulated images, yet few such studies exist in the preceding works. To fill the gap, a new evaluation strategy is proposed in this paper. The proposed method is composed of two schemes, a sample-wise assessment and a class-wise one, so the simulated images can be evaluated from two different perspectives. The sample-wise assessment combines the Fisher separability criterion, fuzzy comprehensive evaluation, the analytic hierarchy process, and image feature extraction into a unified framework. It is used to evaluate whether the relative intensity of the speckle noise of the SAR image and the target backscattering coefficients are well simulated. In contrast, the class-wise assessment is designed to compare the application capability of the simulated images holistically. Multiple comparative experiments are performed to verify the proposed method. Full article
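The Fisher separability criterion mentioned above has a simple one-dimensional form: the ratio of the squared between-class mean difference to the summed within-class variances of a feature. The snippet below is a minimal illustration on synthetic feature values, not the paper's full framework.

```python
import numpy as np

def fisher_criterion(f_a, f_b):
    # 1-D Fisher criterion: larger values mean the feature separates
    # the two sample groups better.
    return (f_a.mean() - f_b.mean()) ** 2 / (f_a.var() + f_b.var())

rng = np.random.default_rng(3)
# Features drawn from two groups with well-separated vs. overlapping means.
well_separated = fisher_criterion(rng.normal(0, 1, 1000), rng.normal(5, 1, 1000))
overlapping = fisher_criterion(rng.normal(0, 1, 1000), rng.normal(0.2, 1, 1000))
print(well_separated, overlapping)
```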

28 pages, 5324 KiB  
Article
Range-Doppler Based Moving Target Image Trace Analysis Method in Circular SAR
by Wenjie Shen, Yanping Wang, Yun Lin, Yang Li, Wen Jiang and Wen Hong
Remote Sens. 2023, 15(8), 2073; https://doi.org/10.3390/rs15082073 - 14 Apr 2023
Viewed by 1306
Abstract
The single-channel Circular Synthetic Aperture Radar (CSAR) has the advantage of continuous surveillance of a fixed scene of interest, which can provide high-frame-rate image sequences to detect ground moving targets. Recent image-sequence-based CSAR moving target detection methods utilize the fact that the target signal moves fast in the image sequence. Knowledge of the target's image trace (its moving trace in the image sequence, which is equivalent to a target signature's morphology in a full-aperture CSAR image) can help design better detection methods. However, previous signature morphology studies are based on linear-track geometry assumptions, which cannot handle CSAR's nonlinear track. Hence, this paper proposes a new image trace method based on the range-Doppler principle. The proposed method can derive the exact analytic function of an arbitrary moving target's image trace in CSAR. The method assumes that the radar operates in side-looking mode and that the target moves on the ground plane. It combines the range-Doppler equations (i.e., the iso-range and iso-Doppler relations) and the Cartesian transformation between the ground and radar coordinate systems to obtain the parametric functions of the image trace. Based on the method, three types of target motion (including linear and nonlinear motion) are analyzed. The proposed method is validated with both simulated and real data. Full article

22 pages, 11697 KiB  
Article
Target Scattering Feature Extraction Based on Parametric Model Using Multi-Aspect SAR Data
by Xiaoyang Yue, Fei Teng, Yun Lin and Wen Hong
Remote Sens. 2023, 15(7), 1883; https://doi.org/10.3390/rs15071883 - 31 Mar 2023
Cited by 1 | Viewed by 983
Abstract
Multi-aspect SAR observation can obtain the backscattering information of targets in the illuminated scene. Targets with different structures produce different backscattering responses, and analyzing these differences via the backscattering amplitude is a conventional approach. For point-like targets, one-dimensional backscattering curves can be used to analyze scattering characteristics, but it is difficult to analyze the overall structure of a target this way. Therefore, it is necessary to perform statistical analysis of the backscattering information in combination with the multi-aspect target area and to establish parameters that model the target area. In this paper, the algorithm fits the target area of the SAR scene with the G0 distribution, estimated via expectation maximization (EM). For different target types in the scene, the β and σ parameters obtained by the model, combined with the backscattering amplitude information, are used to characterize the target. The results show that full targets in multi-aspect SAR images can be differentiated by the two parameters, and the scattering of partial-target slices can be characterized using two parameters (amplitude difference from surrounding points and scattering energy). The parametric model quantitatively characterizes the scattering features at the target level, and changes in the parameters correspond to changes in the target's image features. C-band circular SAR data are used to validate the method. The experimental results give the parameter representation with a sampling window based on the analysis of target scattering, as well as parameter estimates that characterize partial-target scattering. Full article

21 pages, 27902 KiB  
Article
SD-CapsNet: A Siamese Dense Capsule Network for SAR Image Registration with Complex Scenes
by Bangjie Li, Dongdong Guan, Xiaolong Zheng, Zhengsheng Chen and Lefei Pan
Remote Sens. 2023, 15(7), 1871; https://doi.org/10.3390/rs15071871 - 31 Mar 2023
Cited by 2 | Viewed by 1335
Abstract
SAR image registration is the basis for applications such as change detection, image fusion, and three-dimensional reconstruction. Although CNN-based SAR image registration methods have achieved competitive results, they are insensitive to small displacement errors in matched point pairs and do not provide a comprehensive description of keypoint information in complex scenes. In addition, existing keypoint detectors are unable to obtain a uniform distribution of keypoints in SAR images with complex scenes. In this paper, we propose a texture constraint-based phase congruency (TCPC) keypoint detector that uses a rotation-invariant local binary pattern operator (RI-LBP) to remove keypoints that may be located at overlay or shadow locations. We then propose a Siamese dense capsule network (SD-CapsNet) to extract more accurate feature descriptors, and we define and verify that the feature descriptors in capsule form contain intensity, texture, orientation, and structure information that is useful for SAR image registration. In addition, we define a novel distance metric for the feature descriptors in capsule form and feed it into the Hard L2 loss function for model training. Experimental results for six pairs of SAR images demonstrate that, compared to other state-of-the-art methods, our proposed method achieves more robust results in complex scenes, with the number of correctly matched keypoint pairs (NCM) at least 2 to 3 times higher than that of the comparison methods and a root mean square error (RMSE) as much as 0.27 lower than that of the compared methods. Full article

21 pages, 3338 KiB  
Article
Video SAR Moving Target Shadow Detection Based on Intensity Information and Neighborhood Similarity
by Zhiguo Zhang, Wenjie Shen, Linghao Xia, Yun Lin, Shize Shang and Wen Hong
Remote Sens. 2023, 15(7), 1859; https://doi.org/10.3390/rs15071859 - 30 Mar 2023
Cited by 2 | Viewed by 1345
Abstract
Video Synthetic Aperture Radar (SAR) has shown great potential in moving target detection and tracking. At present, most existing detection methods focus on the intensity information of the moving target shadow. Owing to the mechanism of shadow formation, some shadows of moving targets present low contrast with blurred boundaries, and some objects with low reflectivity show similar features, which degrades the performance of these methods. To solve this problem, this paper proposes a new moving target shadow detection method, which consists of background modeling and shadow detection based on intensity information and neighborhood similarity (BIIANS). Firstly, in order to improve the efficiency of image sequence generation, a fast method based on the Back-projection imaging algorithm (f-BP) is proposed. Secondly, due to the low-rank characteristics of stationary objects and the sparsity characteristics of moving target shadows in the image sequence, this paper introduces the low-rank sparse decomposition (LRSD) method to perform background modeling, obtaining better background (static objects) and foreground (moving targets) images. Because the shadows of moving targets appear in the same position in the original and the corresponding foreground images, the similarity between them is high and independent of their intensity; the BIIANS method therefore obtains better shadow detection results. Real W-band data are used to verify the proposed method. The experimental results reveal that the proposed method performs better than the classical methods in suppressing false alarms and missed detections and in improving shadow integrity. Full article
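The LRSD background-modeling idea, where frames stacked as matrix columns decompose into a low-rank static background plus a sparse moving-shadow foreground, can be sketched with a truncated-SVD background and residual thresholding in place of the paper's solver. Everything below (matrix sizes, magnitudes, the rank-1 background) is a synthetic assumption for illustration.

```python
import numpy as np

# D = L + S: stack frames as columns; static background L is low-rank,
# moving-target "shadows" S are sparse.
rng = np.random.default_rng(4)
n_pixels, n_frames = 60, 30
profile = rng.normal(0, 10, n_pixels)
background = np.outer(profile, np.ones(n_frames))   # rank-1 static scene
S_true = np.zeros((n_pixels, n_frames))
idx = rng.integers(0, n_pixels * n_frames, 40)
S_true.flat[idx] = 5.0                              # sparse foreground pixels
D = background + S_true

def lrsd(D, rank=1, tau=2.5):
    # Truncated-SVD background model plus residual thresholding
    # (a simple stand-in for regularized low-rank sparse decomposition).
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank background
    R = D - L
    S = np.where(np.abs(R) > tau, R, 0.0)           # sparse foreground
    return L, S

L, S = lrsd(D)
print(np.count_nonzero(S), np.abs(L - background).mean())
```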

18 pages, 5544 KiB  
Article
A ViSAR Shadow-Detection Algorithm Based on LRSD Combined Trajectory Region Extraction
by Zhongzheng Yin, Mingjie Zheng and Yuwei Ren
Remote Sens. 2023, 15(6), 1542; https://doi.org/10.3390/rs15061542 - 11 Mar 2023
Cited by 1 | Viewed by 1271
Abstract
Shadow detection is a new method for video synthetic aperture radar moving target indication (ViSAR-GMTI). The shadow formed by target occlusion reflects the target's real position, whereas the defocusing or offset of the moving target during imaging makes the target itself difficult to identify. To achieve high-precision shadow detection, this paper proposes a video SAR moving target shadow-detection algorithm based on low-rank sparse decomposition combined with trajectory-area extraction. Based on the low-rank sparse decomposition (LRSD) model, the algorithm creates a new decomposition framework that incorporates total variation (TV) regularization and coherence-suppression terms to improve the decomposition effect, and a global constraint built from feature operators is used to suppress interference. In addition, a double-threshold trajectory segmentation and erroneous-trajectory elimination method further improves the detection performance. Finally, experiments were carried out on the video SAR data released by Sandia National Laboratories (SNL); the results, including comparative experiments, prove the effectiveness and detection performance of the proposed method. Full article

26 pages, 8134 KiB  
Article
Unsupervised SAR Image Change Detection Based on Structural Consistency and CFAR Threshold Estimation
by Jingxing Zhu, Feng Wang and Hongjian You
Remote Sens. 2023, 15(5), 1422; https://doi.org/10.3390/rs15051422 - 03 Mar 2023
Cited by 1 | Viewed by 1901
Abstract
Despite the remarkable progress made in recent years, the automatic detection of changes in synthetic aperture radar (SAR) images remains a difficult task because of speckle noise. This inherent multiplicative noise tends to increase false alarms and missed detections. As a solution, we developed an unsupervised method that detects SAR changes by analyzing structural differences. In this method, the spatial structure around a pixel is represented by a set of similarity weight vectors computed over the pixel's non-local scale. The difference image (DI) is then derived by measuring the structural consistency of corresponding pixels. To obtain an accurate structure, a new statistical distance that is insensitive to speckle noise is used to measure the similarity weights between patches; it is derived by applying the Nakagami–Rayleigh distribution to a statistical test and tailoring the approximation to change detection. A CFAR threshold estimator under the Rayleigh hypothesis is then employed to attenuate the effect of the unimodal histogram of the DI. The results indicate that the proposed method reduces the false alarm rate and improves the kappa and F1-scores, while providing satisfactory visual results. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
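A CFAR threshold under a Rayleigh background admits a closed form: for scale sigma, P_fa = exp(-T^2 / (2 sigma^2)), so T = sigma * sqrt(-2 ln P_fa). A minimal sketch of this generic Rayleigh-CFAR rule (not the authors' specific estimator; the function name is illustrative):

```python
import numpy as np

def rayleigh_cfar_threshold(samples, pfa):
    """CFAR threshold for a Rayleigh background.
    Moment estimate of the scale: E[x^2] = 2*sigma^2."""
    sigma = np.sqrt(np.mean(samples**2) / 2.0)
    return sigma * np.sqrt(-2.0 * np.log(pfa))

rng = np.random.default_rng(0)
background = rng.rayleigh(scale=1.5, size=100_000)
T = rayleigh_cfar_threshold(background, pfa=0.01)
fa_rate = np.mean(background > T)   # empirical false-alarm rate
```

With 100k background samples the empirical false-alarm rate lands very close to the design value of 0.01.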
27 pages, 16946 KiB  
Article
Ship Instance Segmentation Based on Rotated Bounding Boxes for SAR Images
by Xinpeng Yang, Qiang Zhang, Qiulei Dong, Zhen Han, Xiliang Luo and Dongdong Wei
Remote Sens. 2023, 15(5), 1324; https://doi.org/10.3390/rs15051324 - 27 Feb 2023
Cited by 4 | Viewed by 1645
Abstract
Ship instance segmentation in synthetic aperture radar (SAR) images is a challenging task that not only locates ships but also recovers their shapes as pixel-level masks. In ocean SAR images, however, the consistent reflective intensities of ships make different ships look alike, so they are hard to distinguish when densely packed. Especially when ships are inclined and have large aspect ratios, the horizontal bounding boxes (HB-Boxes) used by all instance-segmentation networks we know of inevitably contain redundant background, docks, and even other ships, which misleads the subsequent segmentation. To solve this problem, a novel ship instance-segmentation network, called SRNet, is proposed with rotated bounding boxes (RB-Boxes) as the foundation of segmentation. Aligned with the ships' directions, RB-Boxes can surround ships tightly, but even a minor deviation corrupts the integrity of the ships' masks. To improve the quality of the RB-Boxes, a dual feature alignment module (DAM) was designed to obtain representative features carrying the direction and shape information of ships. On account of the difference between the classification and regression tasks, two different sampling-location calculation strategies were used in the two convolutional kernels of the DAM, distributing these locations dynamically over the ships' bodies and along the ships' boundaries. Moreover, to improve training effectiveness, a new adaptive Intersection-over-Union threshold (AIoU) based on the aspect-ratio information of ships was proposed to increase the number of positive samples. To obtain the masks inside the RB-Boxes, a new Mask-segmentation Head (MaskHead) with a two-stage sampling process was explored. In experiments evaluating the RB-Boxes, the accuracy of the RB-Boxes output by the Detection Head (DetHead) of SRNet surpassed that of eight rotated object-detection networks. In experiments evaluating the final segmentation masks, the proposed SRNet achieved more accurate ship instance masks in SAR images than several classic and state-of-the-art instance-segmentation networks. Ablation studies demonstrated the effectiveness of the DAM in SRNet and of the AIoU for network training. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
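The geometry behind RB-Boxes is simple: a rotated box (cx, cy, w, h, theta) maps its four half-extent corners through a rotation matrix. A short sketch of this generic rotated-box geometry (not SRNet code; the function name is illustrative):

```python
import numpy as np

def rbox_corners(cx, cy, w, h, theta):
    """Corners of a rotated box (theta in radians), counter-clockwise
    starting from the top-right corner at theta = 0."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                      # 2D rotation
    half = np.array([[w, h], [-w, h], [-w, -h], [w, -h]]) / 2.0
    return half @ R.T + np.array([cx, cy])               # rotate, then shift

corners = rbox_corners(10, 20, 8, 4, 0.0)
```

At theta = 0 this reduces to the ordinary axis-aligned box, and rotation preserves each corner's distance from the center.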
20 pages, 6564 KiB  
Article
Detection and Monitoring of Small-Scale Diamond and Gold Mining Dredges Using Synthetic Aperture Radar on the Kadéï (Sangha) River, Central African Republic
by Marissa A. Alessi, Peter G. Chirico, Sindhuja Sunder and Kelsey L. O’Pry
Remote Sens. 2023, 15(4), 913; https://doi.org/10.3390/rs15040913 - 07 Feb 2023
Cited by 3 | Viewed by 2339
Abstract
Diamond and gold mining has been practiced by artisanal miners in the Central African Republic (CAR) for decades. The recent introduction of riverine dredges indicates a transition from artisanal/manual digging and sorting techniques to small-scale mining methods. This study implements a remote sensing analysis of Synthetic Aperture Radar (SAR) data to map gold and diamond dredges operating on the Kadéï (Sangha) river in the CAR. Riverine vessels are identified in Sentinel-1 SAR data between 2015 and 2019, and their activity levels are mapped over time. The number of active dredges identified on the river increased over the five years studied, with the largest increase occurring between 2016 and 2017. Detailing a method for mapping and monitoring riverine diamond and gold dredge mining is an important step in keeping up with evolving technologies and new areas of mineral exploitation and in helping address concerns over resource governance in remote and conflict-prone terrain. The use of SAR technology, with its weather-independence, broad coverage, and available wavelength combinations, allows for higher temporal resolution and improved vessel detection in the monitoring of small-scale mining (SSM) dredges. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
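Detecting bright metal vessels against dark, speckled water can be sketched as a robust global threshold. This is a simplified stand-in for the paper's Sentinel-1 processing chain; the function, constants, and synthetic scene are illustrative only.

```python
import numpy as np

def detect_bright_targets(img, k=5.0):
    """Flag pixels that stand out against the (dark) water background,
    using median/MAD statistics that are robust to a few bright targets."""
    water = np.median(img)                  # robust background level
    mad = np.median(np.abs(img - water))    # robust spread
    return img > water + k * 1.4826 * mad   # 1.4826*MAD ~ one std for Gaussian-like noise

rng = np.random.default_rng(1)
scene = rng.rayleigh(scale=0.2, size=(64, 64))   # speckled river surface
scene[30:33, 40:44] += 5.0                       # a dredge-sized bright target
mask = detect_bright_targets(scene)
```

All twelve target pixels clear the threshold while essentially no water pixels do.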
20 pages, 5630 KiB  
Article
D-MFPN: A Doppler Feature Matrix Fused with a Multilayer Feature Pyramid Network for SAR Ship Detection
by Yucheng Zhou, Kun Fu, Bing Han, Junxin Yang, Zongxu Pan, Yuxin Hu and Di Yin
Remote Sens. 2023, 15(3), 626; https://doi.org/10.3390/rs15030626 - 20 Jan 2023
Cited by 10 | Viewed by 1870
Abstract
Ship detection from synthetic aperture radar (SAR) images has become a major research field in recent years. It plays a major role in ocean monitoring, marine rescue, and marine safety warnings. However, some factors still restrict further improvements in detection performance, e.g., multi-scale ship variation and unfocused images caused by motion. To resolve these issues, this paper proposes a Doppler feature matrix fused with a multi-layer feature pyramid network (D-MFPN) for SAR ship detection. The D-MFPN takes single-look complex image data as input and consists of two branches: the image branch designs a multi-layer feature pyramid network that enhances the positioning capacity for large ships, combined with an attention module that refines the feature map's expressiveness, and the Doppler branch builds a feature matrix that characterizes the ship's motion state by estimating the Doppler center frequency and the frequency-modulation-rate offset. To confirm the validity of each branch, individual ablation experiments are conducted. The experimental results on the Gaofen-3 satellite ship dataset illustrate the D-MFPN's superior performance on defocused ship detection compared with six other competitive convolutional neural network (CNN)-based SAR ship detectors. These satisfactory results demonstrate the application value of deep-learning models fused with Doppler features in SAR ship detection. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
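Estimating a Doppler center frequency from complex azimuth samples can be illustrated with the classic pulse-pair (lag-one correlation) estimator, a textbook method and not necessarily the authors' exact implementation; the signal parameters below are made up.

```python
import numpy as np

def doppler_centroid(signal, prf):
    """Pulse-pair estimate: the Doppler center frequency is read off the
    phase of the lag-one autocorrelation of the azimuth signal."""
    acf = np.sum(signal[1:] * np.conj(signal[:-1]))
    return prf * np.angle(acf) / (2.0 * np.pi)

prf = 2000.0
fd_true = 321.0                      # well inside +/- prf/2, so no aliasing
t = np.arange(512) / prf
rng = np.random.default_rng(2)
noise = 0.1 * (rng.standard_normal(512) + 1j * rng.standard_normal(512))
echo = np.exp(2j * np.pi * fd_true * t) + noise
fd_hat = doppler_centroid(echo, prf)
```

The estimate is unambiguous only for |fd| < prf/2; beyond that it wraps, which is why operational estimators also resolve the ambiguity number.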
21 pages, 6377 KiB  
Article
Slip Models of the 2016 and 2022 Menyuan, China, Earthquakes, Illustrating Regional Tectonic Structures
by Donglin Wu, Chunyan Qu, Dezheng Zhao, Xinjian Shan and Han Chen
Remote Sens. 2022, 14(24), 6317; https://doi.org/10.3390/rs14246317 - 13 Dec 2022
Cited by 1 | Viewed by 1507
Abstract
As one of the large-scale block-bounding faults in the northeastern Tibetan Plateau, the Qilian-Haiyuan fault system accommodates a large portion of the north-eastward motion of the Tibetan Plateau. In 2016 and 2022, two strong earthquakes of Mw6.0 and Mw6.6 occurred in the Menyuan area near the Lenglongling fault (LLLF), at the western segment of the Qilian-Haiyuan fault. These two adjoining events, only 40 km apart, exhibited notable differences in focal mechanisms and rupture kinematics, indicating complex fault geometries and tectonic structures in the region that are still poorly known. Here, we obtained an interseismic velocity map spanning 2014–2020 in the Menyuan region using Sentinel-1 InSAR data to probe strain accumulation across the LLLF. We derived the coseismic deformation fields of the two Menyuan earthquakes from InSAR data and inverted them for slip distributions. We calculated Coulomb stress changes to examine the interaction and triggering relationship between the two ruptures and to assess regional seismic potential. We found that the 2016 earthquake was a buried thrust event on the northern LLLF, whilst the 2022 earthquake was a left-lateral strike-slip event at the western end of the LLLF. We found no evidence of a direct triggering relationship between the two spatiotemporally adjacent earthquakes. However, the 2022 earthquake caused a remarkable stress perturbation in the surrounding area. In particular, a large area of notable stress increase stands out along the Tuolaishan fault and the LLLF, likely posing a high seismic hazard in the region. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
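The Coulomb failure stress change used in such triggering analyses is the standard relation ΔCFS = Δτ + μ′·Δσn, with Δσn > 0 meaning unclamping; positive ΔCFS brings a receiver fault closer to failure. A minimal sketch (the effective friction μ′ = 0.4 is a commonly assumed value, not taken from this paper, and the numbers are illustrative):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Delta CFS = d_tau + mu' * d_sigma_n (all in MPa).
    d_sigma_n > 0 means unclamping; positive output promotes failure."""
    return d_shear + mu_eff * d_normal

# e.g. 0.05 MPa of shear loading with 0.02 MPa of unclamping:
dcfs = coulomb_stress_change(0.05, 0.02)
```

Values above roughly 0.01 MPa are often quoted as a plausible triggering level, though that threshold is itself debated.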
23 pages, 7051 KiB  
Article
LRFFNet: Large Receptive Field Feature Fusion Network for Semantic Segmentation of SAR Images in Building Areas
by Bo Peng, Wenyi Zhang, Yuxin Hu, Qingwei Chu and Qianqian Li
Remote Sens. 2022, 14(24), 6291; https://doi.org/10.3390/rs14246291 - 12 Dec 2022
Cited by 3 | Viewed by 1427
Abstract
There are limited studies on the semantic segmentation of high-resolution synthetic aperture radar (SAR) images in building areas, owing to speckle noise and geometric distortion. To address this challenge, we propose the large receptive field feature fusion network (LRFFNet), which contains a feature extractor, a cascade feature pyramid module (CFP), a large receptive field channel attention module (LFCA), and an auxiliary branch. SAR images contain only single-channel information and have a low signal-to-noise ratio, so using only one level of features from the feature extractor yields poor segmentation results; we therefore design the CFP module, which integrates different levels of features through multi-path connections. Because of geometric distortion in SAR images, structural and semantic information is not obvious. To pick out the feature channels useful for segmentation, we design the LFCA module, which reassigns channel weights through a channel attention mechanism with a large receptive field, helping the network focus on more effective channels. Since SAR images carry no color information and ground-object categories are prone to misidentification, we design the auxiliary branch, which uses a fully convolutional structure to optimize training and reduces the tendency to recognize objects outside building areas as buildings. Compared with state-of-the-art (SOTA) methods, our proposed network achieves higher scores on the evaluation indicators and shows excellent competitiveness. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
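Channel attention of the kind LFCA builds on can be sketched as a squeeze-and-excitation gate: global average pooling, a small bottleneck MLP, and a sigmoid weight per channel. This is a generic illustration in plain numpy; LFCA's large-receptive-field design is not reproduced, and all shapes and weights below are made up.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel reweighting:
    global average pool -> 2-layer MLP -> sigmoid gate per channel."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)      # (C,) global context
    hidden = np.maximum(w1 @ squeeze, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # per-channel weight in (0, 1)
    return feat * gate[:, None, None], gate

rng = np.random.default_rng(3)
feat = rng.standard_normal((16, 32, 32))            # C x H x W feature map
w1 = rng.standard_normal((4, 16)) * 0.1             # reduction ratio 4
w2 = rng.standard_normal((16, 4)) * 0.1
out, gate = channel_attention(feat, w1, w2)
```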
18 pages, 5367 KiB  
Article
Micro-Motion Parameter Extraction of Multi-Scattering-Point Target Based on Vortex Electromagnetic Wave Radar
by Lijun Bu, Yongzhong Zhu, Yijun Chen, Xiaoou Song, Yufei Yang and Yadan Zang
Remote Sens. 2022, 14(23), 5908; https://doi.org/10.3390/rs14235908 - 22 Nov 2022
Cited by 7 | Viewed by 1427
Abstract
In addition to the traditional linear Doppler shift, the angular Doppler shift in vortex electromagnetic wave (VEMW) radar systems carrying orbital angular momentum (OAM) can provide more accurate micro-motion parameters for target identification, especially the detailed features perpendicular to the radar line-of-sight (LOS) direction. In this paper, a micro-motion feature extraction method based on VEMW radar is proposed for a spinning target with multiple scattering points. First, a multi-scattering-point spinning-target detection model using vortex radar is established, and the mathematical mechanism of the echo signal's flash shift in the time-frequency (TF) domain is derived. Then, the linear Doppler shift is eliminated by interference processing with opposite dual-mode VEMWs. Subsequently, the TF flicker shift is focused onto the reference zero frequency by an iterative phase-compensation method, and the number of scattering points is estimated from the focusing effect. After this, the angular Doppler shift is separated through the constructed compensation phase, and the angular velocity, rotation radius, and initial phase of the target are estimated. Theoretical and simulation results verify the effectiveness of the proposed method; more accurate rotation parameters can be obtained in the multi-scattering-point case using the VEMW radar system. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
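The dual-mode interference step can be illustrated in a few lines: echoes from opposite OAM modes share the same linear Doppler term but carry opposite azimuthal phase, so their conjugate product cancels the linear Doppler and leaves twice the angular term. The signals and parameters below are synthetic and purely illustrative.

```python
import numpy as np

l, omega, fd = 3, 40.0, 500.0   # OAM mode order, spin rate (rad/s), linear Doppler (Hz)
fs = 8000.0
t = np.arange(1024) / fs
theta = omega * t                # spin angle of the scatterer

s_pos = np.exp(1j * (2 * np.pi * fd * t + l * theta))   # mode +l echo
s_neg = np.exp(1j * (2 * np.pi * fd * t - l * theta))   # mode -l echo

inter = s_pos * np.conj(s_neg)   # linear Doppler cancels; phase 2*l*theta remains
inst_freq = np.diff(np.unwrap(np.angle(inter))) * fs / (2 * np.pi)
```

The instantaneous frequency of the interference term is 2*l*omega / (2*pi), i.e. purely angular, with the 500 Hz linear Doppler gone.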
15 pages, 9314 KiB  
Article
BoxPaste: An Effective Data Augmentation Method for SAR Ship Detection
by Zhiling Suo, Yongbo Zhao, Sheng Chen and Yili Hu
Remote Sens. 2022, 14(22), 5761; https://doi.org/10.3390/rs14225761 - 15 Nov 2022
Cited by 12 | Viewed by 1699
Abstract
Data augmentation is a crucial technique for convolutional neural network (CNN)-based object detection. This work therefore proposes BoxPaste, a simple but powerful data augmentation method for ship detection in Synthetic Aperture Radar (SAR) imagery. BoxPaste crops ship objects from one SAR image using bounding-box annotations and pastes them onto another SAR image, artificially increasing the object density in each training image. We also examine the characteristics of the SAR ship detection task and draw a principle for designing SAR ship detectors: lighter models may perform better. Our data augmentation method and modified ship detector attain 95.5% Average Precision (AP) and 96.6% recall on the SAR Ship Detection Dataset (SSDD), 4.7% and 5.5% higher, respectively, than the fully convolutional one-stage (FCOS) baseline. We further combine our augmentation scheme with two current detectors, RetinaNet and adaptive training sample selection (ATSS), to validate its effectiveness; the resulting SAR-ATSS architecture achieves 96.3% AP with a ResNet-50 backbone. The experimental results show that the method significantly improves detection performance. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
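The augmentation itself is a crop-and-paste of annotated boxes. A minimal sketch of the idea (illustrative only, without the overlap checks and label bookkeeping a production version would need):

```python
import numpy as np

def box_paste(src, dst, box, offset):
    """Crop the bbox (x1, y1, x2, y2) from src and paste it into a copy
    of dst with its top-left corner at offset = (x, y)."""
    x1, y1, x2, y2 = box
    ox, oy = offset
    patch = src[y1:y2, x1:x2]
    out = dst.copy()
    out[oy:oy + patch.shape[0], ox:ox + patch.shape[1]] = patch
    return out

src = np.zeros((64, 64), dtype=np.float32)
src[10:20, 30:44] = 1.0                       # a "ship" with bbox (30, 10, 44, 20)
dst = np.zeros((64, 64), dtype=np.float32)
aug = box_paste(src, dst, (30, 10, 44, 20), (5, 50))
```

The pasted box's annotation simply moves with the offset, so training labels stay consistent.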
24 pages, 23909 KiB  
Article
EDTRS: A Superpixel Generation Method for SAR Images Segmentation Based on Edge Detection and Texture Region Selection
by Hang Yu, Haoran Jiang, Zhiheng Liu, Suiping Zhou and Xiangjie Yin
Remote Sens. 2022, 14(21), 5589; https://doi.org/10.3390/rs14215589 - 05 Nov 2022
Cited by 2 | Viewed by 1729
Abstract
The generation of superpixels is becoming a critical step in SAR image segmentation. However, most studies on superpixels have focused only on clustering methods, without considering the multiple features present in SAR images. Generating superpixels for complex scenes is a challenging task, and it is also time-consuming and inconvenient to manually adjust parameters to regularize superpixel shapes. To address these issues, we propose a new superpixel generation method for SAR images based on edge detection and texture region selection (EDTRS), which takes the different features of SAR images into account. Firstly, a Gaussian function is applied in the neighborhood of each pixel in eight directions, and a Sobel operator is used to determine the redefined region; 2D entropy is then introduced to adjust the edge map. Secondly, local outlier factor (LOF) detection is used to eliminate speckle-noise interference in SAR images. We judge whether the texture has periodicity and use the edge map to select an appropriate region from which to extract texture features for the target pixel; a gray-level co-occurrence matrix (GLCM) and principal component analysis (PCA) are combined for texture-feature extraction. Finally, we combine the extracted features with a novel approach and cluster the pixels with the K-means method. Experimental results on different SAR images show that the proposed method outperforms existing superpixel generation methods, with a 5–10% increase in accuracy, and produces more regular shapes. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
29 pages, 7600 KiB  
Article
Unblurring ISAR Imaging for Maneuvering Target Based on UFGAN
by Wenzhe Li, Yanxin Yuan, Yuanpeng Zhang and Ying Luo
Remote Sens. 2022, 14(20), 5270; https://doi.org/10.3390/rs14205270 - 21 Oct 2022
Cited by 5 | Viewed by 1908
Abstract
Inverse synthetic aperture radar (ISAR) imaging of maneuvering targets suffers from a time-varying Doppler frequency, which leaves the ISAR images blurred in the azimuth direction. Given that traditional imaging methods have poor imaging performance or low efficiency, and that existing deep-learning imaging methods cannot effectively reconstruct deblurred ISAR images that retain rich details and textures, an unblurring ISAR imaging method for maneuvering targets based on an advanced Transformer structure is proposed. We first present a pseudo-measured data generation method based on the DeepLabv3+ network and the Diamond-Square algorithm to build an ISAR training dataset that generalizes well to measured data. Next, adopting the locally-enhanced window Transformer block to capture local context as well as global dependencies, we construct a novel Uformer-based GAN (UFGAN) to restore deblurred ISAR images with rich details and textures from blurred imaging results. Simulated and measured experiments show that the proposed method achieves fast, high-quality imaging of maneuvering targets under low signal-to-noise ratio (SNR) and sparse-aperture conditions. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
18 pages, 4143 KiB  
Article
End-to-End Radar HRRP Target Recognition Based on Integrated Denoising and Recognition Network
by Xiaodan Liu, Li Wang and Xueru Bai
Remote Sens. 2022, 14(20), 5254; https://doi.org/10.3390/rs14205254 - 20 Oct 2022
Cited by 7 | Viewed by 1806
Abstract
For high-resolution range profile (HRRP) radar target recognition in a low signal-to-noise ratio (SNR) scenario, traditional methods frequently perform denoising and recognition separately. In addition, they assume equivalent contributions of the target and the noise regions during feature extraction and fail to capture the global dependency. To tackle these issues, an integrated denoising and recognition network, namely, IDR-Net, is proposed. The IDR-Net achieves denoising through the denoising module after adversarial training, and learns the global relationship of the generated HRRP sequence using the attention-augmented temporal encoder. Furthermore, a hybrid loss is proposed to integrate the denoising module and the recognition module, which enables end-to-end training, reduces the information loss during denoising, and boosts the recognition performance. The experimental results on the measured HRRPs of three types of aircraft demonstrate that IDR-Net obtains higher recognition accuracy and more robustness to noise than traditional methods. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
19 pages, 4504 KiB  
Article
Ground Positioning Method of Spaceborne SAR High-Resolution Sliding-Spot Mode Based on Antenna Pointing Vector
by Yingying Li, Hao Wu, Dadi Meng, Gemengyue Gao, Cuiping Lian and Xueying Wang
Remote Sens. 2022, 14(20), 5233; https://doi.org/10.3390/rs14205233 - 19 Oct 2022
Cited by 1 | Viewed by 1467
Abstract
As a new high-resolution spaceborne SAR observation mode, sliding-spot imaging is characterized by a large squint, a long aperture time, and azimuth aliasing; moreover, because of the dechirp operation in this mode's imaging algorithm, it is difficult to construct a direct range–Doppler equation for its geometric processing. In this paper, a conformation model based on the antenna pointing vector is presented. It fully accounts for the influence of the dechirp operation on the range image, starts from the relative position of the dechirped range-image points and the satellite, and establishes a strict conversion between image coordinates and geographic coordinates using accurate satellite–ground geometric conditions. The forward and inverse formulas for geometric processing of the sliding-spot mode are then given based on this model. Finally, geometric calibration and positioning experiments are executed under different conditions with field spaceborne SAR data. The results show that, after geometric errors caused by the SAR payload have been calibrated and other factors such as atmospheric delay, platform position, and elevation error have been compensated, the uncontrolled geometric positioning accuracy reaches 1–2 m, which fully proves the effectiveness of this method for the geometric positioning of high-resolution sliding-spot images. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
20 pages, 24131 KiB  
Article
A Refined Model for Quad-Polarimetric Reconstruction from Compact Polarimetric Data
by Rui Guo, Xiaopeng Zhao, Bo Zang, Yi Liang, Jian Bai and Liang Guo
Remote Sens. 2022, 14(20), 5226; https://doi.org/10.3390/rs14205226 - 19 Oct 2022
Cited by 1 | Viewed by 1150
Abstract
As a special dual-polarization technique, compact polarimetric (CP) synthetic aperture radar (SAR) has already been widely studied and installed on some spaceborne systems due to its superiority to quad-polarization; moreover, quad-pol information can be explored and reconstructed from the CP SAR data. In this paper, a refined model is proposed to estimate the quad-pol information for the CP mode. This model involves CP decomposition, wherein the polarization degree is introduced as the volume scattering model parameter. Moreover, a power-weighted model for the co-polarized coherence coefficient is proposed to avoid the iterative approach in pseudo-quad-pol information reconstruction. Experiments were implemented on the simulated Gaofen-3 and ALOS-2 data collected over San Francisco. Compared with typical reconstruction models, the proposed refined model shows its superiority in estimating the quad-pol information. Furthermore, terrain classification experiments using a complex-value convolutional neural network (CV-CNN) were performed on AIRSAR Flevoland data to validate the reconstruction effectiveness for classification applications. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
18 pages, 6073 KiB  
Article
Hierarchical Superpixel Segmentation for PolSAR Images Based on the Boruvka Algorithm
by Jie Deng, Wei Wang, Sinong Quan, Ronghui Zhan and Jun Zhang
Remote Sens. 2022, 14(19), 4721; https://doi.org/10.3390/rs14194721 - 21 Sep 2022
Cited by 2 | Viewed by 1379
Abstract
Superpixel segmentation of polarimetric synthetic aperture radar (PolSAR) images plays a key role in remote-sensing tasks such as ship detection and land-cover classification. However, existing methods cannot directly generate multi-scale superpixels in a hierarchical style, and running multi-scale segmentation separately takes a long time. In this article, we propose an effective and accurate hierarchical superpixel segmentation method by introducing a minimum spanning tree (MST) algorithm, the Boruvka algorithm. To accurately measure the difference between neighboring pixels, we derive scattering-mechanism information from the model-based refined 5-component decomposition (RFCD) and construct a comprehensive dissimilarity measure. In addition, an edge-strength map and a homogeneity measurement are used to exploit the structural and spatial-distribution information in the PolSAR image. On this basis, superpixels are generated using the distance metric within the MST framework. The proposed method maintains good segmentation accuracy at multiple scales and generates superpixels in real time. According to experimental results on the ESAR and AIRSAR datasets, our method is faster than current state-of-the-art algorithms and preserves more image detail across segmentation scales. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)
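Boruvka's algorithm repeatedly attaches each component to its cheapest outgoing edge, which is what makes hierarchical merging cheap. A compact sketch on a plain weighted graph (the paper's PolSAR dissimilarity measure is not modeled here; edges are just `(weight, u, v)` tuples):

```python
def boruvka_mst(n, edges):
    """Minimum spanning tree by Boruvka's algorithm.
    n: number of vertices; edges: list of (weight, u, v).
    Returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen, components = 0.0, [], n
    while components > 1:
        cheapest = {}                  # cheapest outgoing edge per component
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in cheapest or w < cheapest[r][0]:
                    cheapest[r] = (w, u, v)
        if not cheapest:
            break                      # graph is disconnected
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:               # re-check: a phase may merge twice
                parent[ru] = rv
                total += w
                chosen.append((u, v))
                components -= 1
    return total, chosen

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
total, tree = boruvka_mst(4, edges)
```

Each phase at least halves the number of components, giving the O(E log V) bound that makes multi-scale hierarchies fast.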
18 pages, 4438 KiB  
Article
Sparse SAR Imaging Method for Ground Moving Target via GMTSI-Net
by Luwei Chen, Jiacheng Ni, Ying Luo, Qifang He and Xiaofei Lu
Remote Sens. 2022, 14(17), 4404; https://doi.org/10.3390/rs14174404 - 04 Sep 2022
Cited by 5 | Viewed by 1618
Abstract
Ground moving targets (GMT) have velocity components in the range and azimuth directions that shift them from their true positions and defocus them in the azimuth direction during synthetic aperture radar (SAR) imaging. To address this problem and to compress the amount of echo data, a sparse SAR imaging method for ground moving targets is proposed. Specifically, we first construct a two-dimensional sparse observation model of the GMT based on matched-filter operators. The observation model is then solved by a deep network, the GMT sparse imaging network (GMTSI-Net), obtained mainly by unfolding an iterative soft-thresholding algorithm (ISTA)-based iterative solution. Furthermore, we design an adaptive unfolding module in the imaging network to improve its adaptability to echo data with different sampling ratios. The proposed imaging network achieves faster and more accurate SAR imaging of ground moving targets under low sampling ratios and signal-to-noise ratios (SNR). Simulated and measured data experiments demonstrate the imaging quality of the proposed method. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)

22 pages, 716 KiB  
Article
Multi-Aspect Convolutional-Transformer Network for SAR Automatic Target Recognition
by Siyuan Li, Zongxu Pan and Yuxin Hu
Remote Sens. 2022, 14(16), 3924; https://doi.org/10.3390/rs14163924 - 12 Aug 2022
Cited by 8 | Viewed by 1888
Abstract
In recent years, synthetic aperture radar (SAR) automatic target recognition (ATR) has been widely used in both military and civilian fields. Because SAR images are sensitive to the observation azimuth, a multi-aspect SAR image sequence contains more information for recognition than a single-aspect one. Current multi-aspect SAR target recognition methods mainly use recurrent neural networks (RNNs), which rely on the order between images and thus suffer from information loss. At the same time, training a deep learning model requires a large amount of data, but multi-aspect SAR images are expensive to obtain. This paper therefore proposes a multi-aspect SAR recognition method based on self-attention, which finds the correlations between the semantic information of the images. To improve the noise robustness of the proposed method and reduce its dependence on large amounts of data, a convolutional autoencoder (CAE) is designed to pretrain the feature extraction part of the network. Experimental results on the MSTAR dataset show that the proposed multi-aspect SAR target recognition method is superior under various operating conditions, performs well with few samples, and is highly robust to noise. Full article
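A minimal sketch of the scaled dot-product self-attention at the core of such a method, operating on a sequence of per-aspect feature vectors. The dimensions and weight matrices here are illustrative, not the paper's; the point is that every aspect attends to every other aspect, with no assumed ordering.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n_aspects, d) feature sequence; Wq/Wk/Wv: (d, d) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise similarities
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V, weights                     # attended features, weights
```

Unlike an RNN, permuting the input aspects only permutes the output rows, which is why the attention weights capture inter-aspect correlation without depending on acquisition order.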
(This article belongs to the Special Issue SAR Images Processing and Analysis)

23 pages, 8157 KiB  
Article
An Integrated Raw Data Simulator for Airborne Spotlight ECCM SAR
by Haemin Lee and Ki-Wan Kim
Remote Sens. 2022, 14(16), 3897; https://doi.org/10.3390/rs14163897 - 11 Aug 2022
Cited by 5 | Viewed by 1606
Abstract
Airborne synthetic aperture radar (SAR) systems often face the threats of interceptors or electronic countermeasures (ECM) and suffer from motion measurement errors. To design and analyze SAR systems under such threats and errors, an integrated raw data simulator is proposed for airborne spotlight electronic counter-countermeasure (ECCM) SAR. The raw data for reflected echo signals and jamming signals are generated with arbitrary waveforms to achieve pulse diversity. The echo signals are simulated from a scene model computed through the inverse polar reformatting of the reflectivity map, which is generated by applying noise-like speckle to an arbitrary grayscale optical image. The received jamming signals are generated by the jamming model, and their powers are determined by the jamming equivalent sigma zero (JESZ), a newly proposed quantitative measure for designing ECCM SAR systems. Phase errors due to navigation system inaccuracy are also considered in the simulator: navigation sensor errors are added in the motion measurement process, and the results are used for motion compensation. The validity and usefulness of the proposed simulator are verified through simulations of autofocus algorithms, SAR jamming, and SAR ECCM with pulse diversity. Various autofocus algorithms were run through the proposed simulator, and their performance trends proved similar to those obtained from real data in actual flight tests. The simulation results for SAR jamming and SAR ECCM indicate that the proposed JESZ is a well-defined measure for quantifying the power requirements of ECCM SAR systems and SAR jammers. Full article
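The waveform-generation step can be illustrated with a baseband linear-FM (chirp) pulse, the standard SAR transmit waveform; pulse diversity of the kind the simulator supports amounts to varying such parameters from pulse to pulse. This is a generic sketch, not the paper's code, and the parameter values in the usage are illustrative.

```python
import numpy as np

def lfm_pulse(bandwidth, pulse_width, fs):
    """Baseband linear-FM pulse s(t) = exp(j*pi*K*t^2), chirp rate K = B/T.

    bandwidth   -- swept bandwidth B in Hz
    pulse_width -- pulse duration T in seconds
    fs          -- complex sampling rate in Hz
    """
    n = int(round(pulse_width * fs))          # number of samples
    t = (np.arange(n) - n / 2) / fs           # time axis centered on the pulse
    k = bandwidth / pulse_width               # chirp rate (Hz/s)
    return np.exp(1j * np.pi * k * t ** 2)
```

For example, `lfm_pulse(100e6, 10e-6, 200e6)` produces a 10 µs, 100 MHz chirp sampled at 200 MHz; a pulse-diverse simulator would draw a different chirp rate or phase code for each transmitted pulse.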
(This article belongs to the Special Issue SAR Images Processing and Analysis)

21 pages, 3562 KiB  
Article
Few Shot Object Detection for SAR Images via Feature Enhancement and Dynamic Relationship Modeling
by Shiqi Chen, Jun Zhang, Ronghui Zhan, Rongqiang Zhu and Wei Wang
Remote Sens. 2022, 14(15), 3669; https://doi.org/10.3390/rs14153669 - 31 Jul 2022
Cited by 7 | Viewed by 2317
Abstract
Current synthetic aperture radar (SAR) image object detection methods require huge amounts of annotated data and can only detect the categories that appear in the training set. Because training samples are scarce in real applications, performance drops sharply on rare categories, which largely prevents the detection model from being robust. To tackle this problem, a novel few-shot SAR object detection framework is proposed. It is built upon a meta-learning architecture and aims at detecting objects of unseen classes given only a few annotated examples. Observing that the quality of support features determines the performance of the few-shot object detection task, we propose an attention mechanism that highlights class-specific features while softening irrelevant background information. Considering the variation between different support images, we also employ a support-guided module to enhance query features, thus generating higher-quality proposals more relevant to the support images. To further exploit the relevance between support and query images, which is ignored in single-class representation, a dynamic relationship learning paradigm is designed by constructing a graph convolutional network and imposing an orthogonality constraint on the hidden feature space, which together draw features from the same category closer and make those from different classes more separable. Comprehensive experiments on a self-constructed SAR multi-class object detection dataset demonstrate the effectiveness of our few-shot object detection framework in learning more generalized features that both enhance performance on novel classes and maintain performance on base classes. Full article
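The graph-convolution step in such a relationship-learning paradigm can be sketched as a single Kipf-and-Welling-style layer, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W). The nodes would be class or proposal features; the adjacency matrix and dimensions below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer with symmetric normalization.

    A -- (n, n) adjacency matrix (no self-loops; they are added here)
    H -- (n, d_in) node features
    W -- (d_in, d_out) learnable weights
    """
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)                          # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # propagate features over the normalized graph, then ReLU
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

Stacking two or three such layers lets each node's feature aggregate information from its graph neighborhood, which is how support-query relations can be modeled jointly rather than per class.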
(This article belongs to the Special Issue SAR Images Processing and Analysis)

22 pages, 6447 KiB  
Article
Built-Up Area Mapping for the Greater Bay Area in China from Spaceborne SAR Data Based on the PSDNet and Spatial Statistical Features
by Wei Zhang, Shengtao Lu, Deliang Xiang and Yi Su
Remote Sens. 2022, 14(14), 3428; https://doi.org/10.3390/rs14143428 - 16 Jul 2022
Cited by 1 | Viewed by 1481
Abstract
Acquiring built-up area (BA) information is essential to urban planning and sustainable development in the Greater Bay Area in China. In this paper, a pseudo-Siamese dense convolutional network, PSDNet, is proposed to automatically extract BAs from spaceborne synthetic aperture radar (SAR) data of the Greater Bay Area, considering the spatial statistical features and speckle features in SAR images. Local indicators of spatial association, including Moran's, Geary's, and Getis' statistics, together with the speckle divergence feature, are calculated for the SAR data to indicate potential BAs. The amplitude SAR images and the corresponding features are then used as the inputs to PSDNet. In this framework, a pseudo-Siamese network independently learns BA discrimination from the original SAR amplitude image and from the features. DenseNet is adopted as the backbone of each channel, improving efficiency while extracting deep BA features, and a multi-scale decoder enables the extraction of BAs at multiple scales. Sentinel-1 (S1) SAR data over the Greater Bay Area in China are used for experimental validation. Our BA extraction method achieves above 90% accuracy, comparable to a current urban extraction product, demonstrating that it can achieve BA mapping from spaceborne SAR data. Full article
(This article belongs to the Special Issue SAR Images Processing and Analysis)

17 pages, 10414 KiB  
Article
Physics-Based TOF Imaging Simulation for Space Targets Based on Improved Path Tracing
by Zhiqiang Yan, Hongyuan Wang, Xiang Liu, Qianhao Ning and Yinxi Lu
Remote Sens. 2022, 14(12), 2868; https://doi.org/10.3390/rs14122868 - 15 Jun 2022
Cited by 1 | Viewed by 1585
Abstract
Aiming at close-up space measurement with time-of-flight (TOF) cameras, and based on an analysis of the space background environment and the imaging characteristics of the TOF camera, a physics-based amplitude-modulated continuous wave (AMCW) TOF camera imaging simulation method for space targets using improved path tracing is proposed. Firstly, the microfacet bidirectional reflectance distribution function (BRDF) models of several typical space target surface materials are fitted to measured BRDF data in the TOF camera response band to make the simulation physics-based. Secondly, an improved path tracing algorithm is developed for the TOF camera by introducing a cosine component that characterizes the modulated light. An imaging link simulation model is then established that considers the coupled effects of the material BRDFs, suppression of background illumination (SBI), the optical system, the detector, the electronics, platform vibration, and noise, from which the simulated TOF camera images are obtained. Finally, ground tests were carried out, showing that the relative errors of the grey mean, grey variance, depth mean, and depth variance are 2.59%, 3.80%, 18.29%, and 14.58%, respectively; the MSE, SSIM, and PSNR results of our method are also better than those of the reference method. The ground test results verify the correctness of the proposed simulation model, which can provide image data support for ground testing of TOF camera algorithms for space targets. Full article
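The AMCW depth measurement that such a simulator models can be illustrated with the standard four-phase demodulation: four correlation samples at 0°, 90°, 180° and 270° of the modulation period yield the phase shift, and the phase yields the depth. This is a textbook sketch under ideal assumptions (no noise, single return), not the paper's model.

```python
import numpy as np

C_LIGHT = 3e8  # speed of light, m/s

def simulate_samples(depth, f_mod, amp=1.0, offset=0.5):
    """Ideal 4-bucket correlation samples for a target at the given depth."""
    phase = 4 * np.pi * f_mod * depth / C_LIGHT   # round-trip phase shift
    return [amp * np.cos(phase - i * np.pi / 2) + offset for i in range(4)]

def amcw_depth(samples, f_mod):
    """Recover depth from 4 samples: c0-c2 = 2A*cos(phi), c1-c3 = 2A*sin(phi)."""
    c0, c1, c2, c3 = samples
    phase = np.arctan2(c1 - c3, c0 - c2) % (2 * np.pi)
    return C_LIGHT * phase / (4 * np.pi * f_mod)
```

Note the differencing cancels the offset (ambient) term, which is the same role the SBI hardware plays; the unambiguous range is c / (2 f_mod), i.e., 7.5 m at 20 MHz.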
(This article belongs to the Special Issue SAR Images Processing and Analysis)

18 pages, 19581 KiB  
Article
Inshore Ship Detection in Large-Scale SAR Images Based on Saliency Enhancement and Bhattacharyya-like Distance
by Jianda Cheng, Deliang Xiang, Jiaxin Tang, Yanpeng Zheng, Dongdong Guan and Bin Du
Remote Sens. 2022, 14(12), 2832; https://doi.org/10.3390/rs14122832 - 13 Jun 2022
Cited by 6 | Viewed by 1852
Abstract
While the detection of offshore ships in synthetic aperture radar (SAR) images has been widely studied, inshore ship detection remains a challenging task. Owing to speckle noise and the high similarity between onshore buildings and inshore ships, traditional methods cannot detect inshore ships effectively. To improve detection performance for inshore ships, we propose a novel saliency enhancement algorithm based on the difference of anisotropic pyramid (DoAP). Considering the limitations of IoU in small-target detection, we design a detection framework based on the proposed Bhattacharyya-like distance (BLD). First, the anisotropic pyramid of the SAR image is constructed with a bilateral filter (BF). Then, the differences between the two finest scales and between the two coarsest scales are used to generate a saliency map, which enhances ship pixels and suppresses background clutter. Finally, the BLD replaces IoU in label assignment and non-maximum suppression to overcome the limitations of IoU for small-target detection. We embed the DoAP into the BLD-based detection framework to detect inshore ships in large-scale SAR images. Experimental results on the LS-SSDD-v1.0 dataset indicate that the proposed method outperforms the baseline state-of-the-art detection methods. Full article
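The BLD itself is the paper's own construction, but the idea of replacing IoU with a Gaussian-based distance can be sketched with the classical Bhattacharyya distance between 2-D Gaussians fitted to bounding boxes. Both the box-to-Gaussian mapping and the use of the plain Bhattacharyya distance here are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def box_to_gaussian(box):
    """Model a (cx, cy, w, h) box as a 2-D Gaussian: mean at the center,
    diagonal covariance with half-extents as standard deviations (assumed)."""
    cx, cy, w, h = box
    return np.array([cx, cy]), np.diag([(w / 2) ** 2, (h / 2) ** 2])

def bhattacharyya(box1, box2):
    """Bhattacharyya distance between the two box Gaussians; 0 for identical
    boxes, and smoothly increasing for shifted boxes even at zero overlap."""
    mu1, s1 = box_to_gaussian(box1)
    mu2, s2 = box_to_gaussian(box2)
    s = (s1 + s2) / 2
    d = mu2 - mu1
    term_mean = d @ np.linalg.inv(s) @ d / 8
    term_cov = 0.5 * np.log(np.linalg.det(s)
                            / np.sqrt(np.linalg.det(s1) * np.linalg.det(s2)))
    return term_mean + term_cov
```

Unlike IoU, this distance stays informative when small boxes do not overlap at all, which is the failure mode in label assignment and NMS that motivates replacing IoU for small targets.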
(This article belongs to the Special Issue SAR Images Processing and Analysis)
