Communication

Shadow-Based False Target Identification for SAR Images

Haoyu Zhang, Sinong Quan, Shiqi Xing, Junpeng Wang, Yongzhen Li and Ping Wang

1 State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410076, China
2 National Key Laboratory of Science and Technology on ATR, National University of Defense Technology, Changsha 410076, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(21), 5259; https://doi.org/10.3390/rs15215259
Submission received: 8 September 2023 / Revised: 30 October 2023 / Accepted: 5 November 2023 / Published: 6 November 2023

Abstract

In radar electronic countermeasures, as the difference between jamming and real targets continues to decrease, traditional methods based on classical features are no longer able to meet the requirements of jamming detection. Compared with classical features such as texture, scale, and shape, shadow has better discernibility and separability. In this paper, target shadow is investigated and applied to detect jamming in Synthetic Aperture Radar (SAR) images, and a SAR false target identification method based on shadow features is proposed. First, a difference image is generated by change detection, which can extract the shadow region in single-time SAR images. Then, a three-stage differentiation condition is proposed, which can distinguish false targets from real targets. Simulated experimental results show that the proposed method can effectively extract the shadow region in SAR images and accurately distinguish real and false targets. Furthermore, the potential of shadow in SAR image interpretation and electronic countermeasures is also demonstrated.

1. Introduction

As an active observation system that images objects with electromagnetic waves, Synthetic Aperture Radar (SAR) uses the synthetic aperture principle and pulse compression technology for radar target detection [1,2]. SAR has become the primary technology in the field of electronic countermeasures (ECM) due to its high operational capability and its favorable imaging characteristics of high resolution and all-day, all-weather operation [3].
SAR deceptive jamming technology intercepts radar signals and retransmits them after modulation, thereby generating realistic false targets and corresponding backgrounds in SAR images [4]. Accordingly, it degrades the imaging results and destroys image features, which seriously restricts SAR image interpretation tasks such as image segmentation, feature extraction, target detection, and recognition [5,6].
SAR images have a rich set of classical features, such as texture, scale, and shape, that describe both the global and local scattering characteristics of targets. From the perspective of SAR image processing and anti-jamming, it is useful to distinguish false targets from real ones using these features. Nevertheless, the latest jamming technology can generate false targets with higher resolution, finer details, and even scattering characteristics similar to those of real targets. Consequently, the differences between jamming and real scenes have diminished significantly; the image representations of false and real targets, as evaluated by classical features, are so similar that they cannot be accurately discriminated, which invalidates classical features for false target identification.
Several studies have been conducted on the detection of false targets in SAR images. Feng et al. [7] used cross-track interferometric SAR to detect false targets. Wang et al. [8] proposed an anti-deceptive jamming method for multistatic SAR to locate the deceptive jammer and eliminate false targets. Li et al. [9] detected false targets by multi-angle and multi-time SAR image registration. Zhao et al. [10] proposed a jamming detection operator in the SAR image domain to detect false targets. Blomerus et al. [11] introduced a feedback training method using a Bayesian convolutional neural network capable of discriminating between real and false targets in SAR target detection. Although these methods have achieved varying degrees of effectiveness, they neglect the nature of the target backscattering characteristics and do not sufficiently extract the distinctive features of real and false targets.
In view of the above-mentioned facts, it is urgent to explore deeper and more sophisticated features for false target detection. According to the principle of SAR imaging, a real target with a certain height above the ground prevents the SAR from receiving echoes from certain regions, generating a shadow around the real target [12]. Jamming signals only contribute additive noise and may not have the geographical conditions to produce shadows, nor can they generate shadows by adjusting the amplitude around the jamming area [13], which means that shadows have promising potential for identifying false targets. As an intrinsic feature that reflects the target geometry, a shadow provides a robust image representation of the observed target [14]. For example, Gao et al. [15] detect shadow regions using a dual-threshold Otsu method, which allows fast localization of incomplete targets. Papson et al. [16] use a hidden Markov model to detect shadow contours and classify targets. Choi et al. [12] propose a parallel region feature network to efficiently detect shadows. Zhang et al. [17] use a multi-resolution dense encoder and decoder network to automatically extract shadows. Despite a large number of previous results [18,19,20,21], the use of shadows for false target recognition has not been investigated.
In summary, this paper focuses on detecting false targets in SAR images and proposes a false target identification method based on shadows. On the one hand, a shadow extraction method based on a change detection technique is proposed, in which an image translation operation and a histogram-based threshold selection strategy are involved, enabling fast extraction of shadow regions. On the other hand, a shadow-based false target detection method is introduced, which uses a three-stage differentiation condition to robustly identify false targets by fully considering the geometric relationship (i.e., direction, position, and width) between targets and shadows.
Compared with traditional methods, the proposed method adopts an image inversion methodology and a change detection operator to accurately extract shadow regions, which overcomes the loss of spatial information and distinguishes weak-scattering pixels from shadows. More importantly, traditional methods identify false targets based only on the existence of shadows, whereas in our work a three-stage differentiation condition is proposed by adequately considering the geometric relationship; thus, the identification of false targets is more accurate and effective.
The main contributions of this paper can be summarized as follows:
  • By means of the methodology of change detection, a difference image is generated through an image translation manipulation to efficiently and effectively extract the shadow region in a SAR image. The generation does not require a separate slave image, which enhances its applicability and flexibility.
  • In the process of identifying false targets, a hierarchical discrimination technique is proposed. This technique is more sensitive than classical features, even to slight image disturbances, making it more effective in suppressing background noise and highlighting potential targets within the scene.
  • This work investigates the distribution of classical features over real and false targets, and a further comparative analysis verifies the effectiveness of shadows for false target discrimination. In addition, the feasibility of incorporating shadows into SAR image interpretation and ECM is demonstrated.

2. Shadow Extraction

Before extracting the shadow, it is necessary to slice the regions of interest (ROIs) so that the outer contours and inner pixels of the targets and jamming can be displayed. Since the sliced SAR image also contains a small amount of natural clutter in addition to the target and jamming, the extraction of shadows is closely related to the position, direction, and shape of the target and clutter.
Generally, the Otsu or CFAR method [22,23] is widely used to detect shadow regions in SAR images. However, the shadows of targets are usually blurred, which deteriorates the detection performance. Traditional methods often identify each shadow pixel separately, resulting in the loss of the spatial information contained in neighboring pixels. Moreover, the extracted shadow region may contain some non-shadow components with weak backscattering (such as roads, water, and bare soil). The change detection operator [24] is highly sensitive to high-intensity pixels, so it can detect change information accurately. Change detection is frequently used to detect changes in targets or scenes across different periods, which enables the analysis of data discrepancies over various timeframes and locations. However, change detection usually requires comparing distinct images acquired at different times, which poses a challenge when only a single-time image is available. To generate a difference image from a single-time image, an image translation strategy is applied to reasonably construct the reference and test images, which yields the proposed change-detection-based shadow extraction method.
A slice $S_O$ containing the suspected shadow region is obtained from the SAR image, with a size of $D \times L$. Shadow regions can then be highlighted by inverting $S_O$, which is denoted as $S_I$:
$$S_I = 255 - S_O. \qquad (1)$$
The purpose of the image inversion is to enhance the white or gray details in the dark regions of the SAR image. Assume that the border region of width $\Delta D$ around $S_I$ is set as the isolation region and that its internal region is intercepted as the reference image $S_R$, whose dimension is $(D - 2\Delta D) \times (L - 2\Delta D)$. Subsequently, four test images $S_T^{k}$ ($k = 1, 2, 3, 4$) can be obtained by translating the reference window in four diagonal directions (i.e., upper right, upper left, bottom right, bottom left). The above process is shown in Figure 1, and the size of each test image is identical to that of the reference image.
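As a concrete illustration, the following Python sketch implements one plausible reading of this slice preparation step under stated assumptions: the slice is treated as an 8-bit array, and the four test images are obtained by shifting the cropping window diagonally by ΔD within the inverted slice. The function and variable names (build_reference_and_tests, s_o, delta_d) are illustrative, not taken from the paper.

```python
import numpy as np

def build_reference_and_tests(s_o: np.ndarray, delta_d: int):
    """Invert an 8-bit slice, crop the reference image, and build four test images."""
    s_i = 255 - s_o.astype(np.int32)            # image inversion, Eq. (1)
    d, l = s_i.shape
    # reference image: interior region after removing the isolation border
    s_r = s_i[delta_d:d - delta_d, delta_d:l - delta_d]
    # test images: the same-sized window shifted to the four diagonal directions
    # (upper right, upper left, bottom right, bottom left)
    shifts = [(-delta_d, delta_d), (-delta_d, -delta_d),
              (delta_d, delta_d), (delta_d, -delta_d)]
    tests = []
    for dr, dc in shifts:
        r0, c0 = delta_d + dr, delta_d + dc
        tests.append(s_i[r0:r0 + s_r.shape[0], c0:c0 + s_r.shape[1]])
    return s_r, tests

# usage on a random 8-bit slice
s_o = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
s_r, s_t = build_reference_and_tests(s_o, delta_d=4)
```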
The flowchart of the proposed method, shown in Figure 2, can be divided into the following four parts: a difference image generation unit, a difference result calculation unit, a shadow detection threshold calculation unit, and a shadow region extraction unit.
Generally, change detection obtains the difference information between the reference and test images using a specific change detector. In this paper, the Likelihood Ratio Change Detector (LRCD) [25] is used, which can detect changes with high accuracy and is sensitive to changed information.
The LRCD $\eta$ is calculated by
$$\eta(i,j) = \frac{\sum_{p=-m}^{m}\sum_{q=-m}^{m} S_T(i+p,\,j+q)}{\sum_{p=-m}^{m}\sum_{q=-m}^{m} S_R(i+p,\,j+q)} + \frac{\sum_{p=-m}^{m}\sum_{q=-m}^{m} S_R(i+p,\,j+q)}{\sum_{p=-m}^{m}\sum_{q=-m}^{m} S_T(i+p,\,j+q)}, \qquad (2)$$
where a sliding window with a size of $M = 2m + 1$ defines the surrounding neighborhood of pixel $(i, j)$. The difference image $S_D$ can then be obtained by normalizing the LRCD:
$$S_D = \operatorname{round}\!\left(\frac{\eta(i,j) - \eta_{\min}}{\eta_{\max} - \eta_{\min}} \times 255\right), \qquad (3)$$
where $\operatorname{round}(\cdot)$ denotes the rounding-off function.
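A minimal Python sketch of Eqs. (2) and (3) is given below, assuming a reference image and one test image as produced above; the window half-width m and the small constant guarding against empty windows are illustrative choices, not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lrcd_difference_image(s_r: np.ndarray, s_t: np.ndarray, m: int = 2) -> np.ndarray:
    """Likelihood Ratio Change Detector (Eq. (2)) followed by 8-bit normalization (Eq. (3))."""
    size = 2 * m + 1
    eps = 1e-6                                           # guards against all-zero windows
    # local sums over the (2m+1) x (2m+1) neighborhood of each pixel
    sum_r = uniform_filter(s_r.astype(np.float64), size) * size ** 2
    sum_t = uniform_filter(s_t.astype(np.float64), size) * size ** 2
    eta = sum_t / (sum_r + eps) + sum_r / (sum_t + eps)  # Eq. (2)
    # Eq. (3): stretch eta to [0, 255] and round to the nearest integer
    s_d = np.round((eta - eta.min()) / (eta.max() - eta.min() + eps) * 255)
    return s_d.astype(np.uint8)
```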
Accordingly, the shadow detection threshold is calculated by analyzing the histogram of $S_D$, which contains a high-peak region and a low-peak region [25]. The high-peak region represents the invariant part, and the low-peak region represents the changing part. The gray value of the dividing point between the two regions is used to derive the test threshold for each test image. Assuming that the gray value corresponding to the peak point is $T_{\max}$, the ratio curve of the histogram at two adjacent gray-level values is computed by
$$R(i) = \begin{cases} \dfrac{N(i)}{N(i+1)}, & N(i) > 0 \ \text{and} \ N(i+1) > 0, \\ 1, & \text{otherwise}, \end{cases} \qquad (4)$$
where $N(i)$ is the number of pixels with gray value $i$ in the range $[T_{\max}, 255]$.
The dividing point can be selected as the first point satisfying $R(i) < 1$, which means that the histogram starts to enter the low-peak region from this point. Assume that the total number of pixels in $S_D$ whose values are larger than the dividing point is $N_T$. For the inverted image, let $N_O$ be the total number of pixels with values in the range $[T, 255]$. The first value of $T$ satisfying $N_O > N_T$ can be regarded as the test threshold $T_k$ of the corresponding test image. This procedure is repeated for all four test images, and the resulting test thresholds are arithmetically averaged to obtain the final detection threshold
$$T = 255 - \frac{1}{4}\sum_{k=1}^{4} T_k. \qquad (5)$$
According to the final detection threshold, the sliced image $S_O$ is binarized, which is given by
$$S_{OT}(i,j) = \begin{cases} 1, & S_O(i,j) \le T, \\ 0, & S_O(i,j) > T. \end{cases} \qquad (6)$$
Moreover, morphological processing and area filtering are performed to finely detect the shadow regions in the images.
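The threshold selection, binarization, and post-processing steps (Eqs. (4)-(6)) can be sketched in Python as follows. The minimum component area used for area filtering, the 3x3 structuring element, and the function names are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import binary_opening, label

def test_threshold(s_d: np.ndarray, s_i: np.ndarray) -> int:
    """Derive the test threshold T_k for one difference image (Eq. (4) and the N_O > N_T rule)."""
    hist = np.bincount(s_d.ravel(), minlength=256)
    t_max = int(np.argmax(hist))                        # gray value of the histogram peak
    divide = 255
    for i in range(t_max, 255):                         # Eq. (4): adjacent-bin ratio curve
        r = hist[i] / hist[i + 1] if hist[i] > 0 and hist[i + 1] > 0 else 1.0
        if r < 1.0:                                     # first entry into the low-peak region
            divide = i
            break
    n_t = int(np.sum(s_d > divide))                     # pixels beyond the dividing point
    # scan the inverted slice from 255 downwards until its tail count exceeds n_t
    hist_i = np.bincount(s_i.ravel().astype(np.int64), minlength=256)
    tail = 0
    for t in range(255, -1, -1):
        tail += hist_i[t]
        if tail > n_t:
            return t
    return 0

def extract_shadow(s_o: np.ndarray, s_i: np.ndarray, diff_images, min_area: int = 20) -> np.ndarray:
    """Average the test thresholds (Eq. (5)), binarize (Eq. (6)), and clean up the mask."""
    t_final = 255 - np.mean([test_threshold(s_d, s_i) for s_d in diff_images])
    mask = s_o <= t_final                               # shadow candidates are dark pixels
    mask = binary_opening(mask, structure=np.ones((3, 3)))   # morphological processing
    labels, num = label(mask)                           # area filtering on connected components
    sizes = np.bincount(labels.ravel())
    valid = [lab for lab in range(1, num + 1) if sizes[lab] >= min_area]
    return np.isin(labels, valid).astype(np.uint8)
```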

3. False Target Identification

For some scenes, a SAR image can be divided into three categories: targets, backgrounds, and shadows. A target with a certain height will prevent the region behind it along the beam orientation from being illuminated by sufficient radiation. Therefore, that region cannot generate enough backscatter and appears as a shadow in the SAR image, as shown in Figure 3.
Traditional shadow-based false target detection methods mainly extract shadow regions from the image and realize the discrimination by determining whether a shadow is present or not. However, the geometric relationship between targets and shadows should also be considered properly.
Although shadows result in the loss of scene information at the corresponding locations, they still have some characteristics that are helpful for SAR interpretation [26]. On the one hand, all shadows in a SAR image lie in the same direction as their corresponding targets; that is, the targets and shadows have a certain angular relationship with the beam direction [16]. On the other hand, shadows should theoretically be close to their corresponding targets, although, due to the imaging mechanism or the structural characteristics of the target itself, there may be a certain distance between a target and its shadow [27]. In addition, the widths of a target and its shadow are theoretically equal; in practice, the detection algorithm may split a target into several small parts or assign several targets to one large shadow region, but these width differences remain within a certain range.
Based on the relationship between targets and shadows, the characteristics of shadow signatures are summarized, and a three-stage discrimination method is proposed, as described below.
(1)
In a SAR image, shadows are generally located along the beam orientation and are distributed on the same side as their corresponding targets. Therefore, the beam orientation is necessary prior information to ascertain the shadow orientation.
The orientation relationship between the target and the shadow is shown in Figure 4. Assume that the coordinates of the target and shadow centroids are $C_t(x_t, y_t)$ and $C_s(x_s, y_s)$, respectively, and that the line of sight (LOS) is along the positive y-axis. When the radar beam illuminates the target, the corresponding shadow appears on the other side of the target. Therefore, it is anticipated that the vector $\overrightarrow{C_t C_s}$ linking the centroids of the target and the shadow will nearly match the beam direction $\overrightarrow{Oy}$. Equation (7) states that the detected shadow is expected to correspond to a real target if the cosine of the angle between the two vectors $\overrightarrow{C_t C_s}$ and $\overrightarrow{Oy}$ is no less than the threshold $\mu$. Since this angle should fall within the range $(-\pi/2, \pi/2)$, the cosine lies in $(0, 1]$, which means $\mu \le 1$. As a result, the threshold $\mu$ is finally set as 1 in this paper. According to the aforementioned characteristics [26], the first-stage identification condition is proposed as
$$\cos\!\left[\operatorname{ang}\!\left(\overrightarrow{C_t C_s},\, \overrightarrow{Oy}\right)\right] \ge \mu. \qquad (7)$$
(2)
It is known that a shadow is generally adjacent to its corresponding target. However, in some cases (such as when different imaging algorithms are adopted), there may be some spatial distance between the shadow and the target. Shadow pixels are usually distributed within a circle whose center is the centroid of the target and whose radius is the diameter of the target. The diameter of the target can be obtained by measuring the diagonal length of its minimum bounding rectangle [26]. Thus, the second-stage identification condition is proposed as
$$\operatorname{dis}(C_t, C_s) = \sqrt{(x_t - x_s)^2 + (y_t - y_s)^2} \le L_t, \qquad (8)$$
where $\operatorname{dis}(\cdot)$ denotes the Euclidean distance and $L_t$ is the target diameter.
(3)
Theoretically, the width of the shadow region should be equal to that of its corresponding target. The width is defined as the length of the longest line between the target edges in the direction perpendicular to the radar beam. Based on previous studies [10], it has been observed that the width of the shadow region is usually greater than half the width of the target. Thus, the third-stage identification condition is proposed as
$$W_s \ge \frac{W_t}{2}, \qquad (9)$$
where $W_s$ is the width of the shadow region and $W_t$ is the width of the target.
According to the above discussion, if no shadow is detected in the sliced image or the three-stage identification condition is not satisfied, the suspected target is identified as a false target; a minimal sketch of this three-stage check is given below.
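The following Python sketch assembles the three conditions under stated assumptions: centroids are given in (x, y) image coordinates, the beam (LOS) points along the positive y-axis, and a small numerical tolerance is added to the μ = 1 test to cope with discrete pixel grids. All names and the example values are illustrative.

```python
import numpy as np

def is_real_target(c_t, c_s, l_t, w_t, w_s, mu=1.0):
    """Return True if the target/shadow pair passes all three identification stages."""
    if c_s is None:                                   # no shadow detected at all
        return False
    v = np.asarray(c_s, float) - np.asarray(c_t, float)   # vector from target to shadow centroid
    beam = np.array([0.0, 1.0])                       # LOS along the positive y-axis
    # Stage 1, Eq. (7): shadow lies along the beam direction behind the target
    cos_ang = np.dot(v, beam) / (np.linalg.norm(v) + 1e-9)
    if cos_ang < mu - 1e-6:                           # small tolerance for pixel grids (assumption)
        return False
    # Stage 2, Eq. (8): shadow centroid within one target diameter of the target centroid
    if np.linalg.norm(v) > l_t:
        return False
    # Stage 3, Eq. (9): shadow at least half as wide as the target
    return w_s >= w_t / 2

# usage: a shadow directly behind the target along the beam, close and wide enough
print(is_real_target(c_t=(40, 30), c_s=(40, 52), l_t=25.0, w_t=12.0, w_s=8.0))  # True
```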

4. Experimental Results

4.1. Data Description

The experimental results reported in this paper are based on real and simulated data. The measured data are acquired from the Sandia Labs' MiniSAR dataset in the Ku-band with a resolution of 0.5 m × 0.5 m. The imaging scene mainly comprises tanks and missile launching vehicles on a sandy background. For the jamming scenario, a 2-D convolution jamming pattern [28] is implemented to generate false targets. The SAR simulation parameters are listed in Table 1. Before the implementation, Gaussian filtering and linear normalization are adopted to preprocess the SAR data. Notice that the preprocessing may reduce the radiometric resolution of SAR images; nevertheless, it has no influence on the outcome, since the focus of this work is target detection rather than pixel-level analysis.
The final simulated jammed SAR image is presented in Figure 5a. The targets of interest are outlined by blue rectangles, and two of them (i.e., slices 1 and 4) are false targets that serve as a reference. It can be seen that the false targets formed by 2-D convolution jamming are realistic. In particular, the false target in slice 1 is generated based on the template in slice 3, which is a real vehicle target; similarly, slice 4 is generated by employing the template in slice 6.
Specifically, false targets are similar to real targets in terms of backscattering intensity, imaging characteristics, geometry, and orientation. They are highly integrated with the background environment. In addition, it is difficult to distinguish the real targets from the false ones by visual interpretation. After morphological processing and area filtering, the target slices are given in Figure 5b.

4.2. Statistical Analysis Based on Classical Features

In Figure 5, slice 1 and slice 4 contain false targets, while the other slices contain real targets. To investigate the discrimination performance of classical features, we select slice 4 and slice 6 for statistical analysis and quantitative evaluation.
The selected slices are subjected to target detection, slice masking, morphological filtering, and calculation of the minimum outer rectangle. The procedures and results are shown in Figure 6, and it can be seen that the preprocessing results for the real and false targets are similar, except for the slight discrepancy in the target boundary.
After the preprocessing operations, the representative classical features of the real and false targets are analyzed separately. These features mainly include statistics-based and CFAR-based features, which characterize the target backscattering in terms of texture, shape, scale, and distribution. The resulting feature values are shown in Table 2. The main evaluation metrics are: (a) mean: a statistical measure of the average intensity of the pixels in the SAR image; (b) standard deviation: a statistical measure of the uniformity of the intensity distribution of the pixels in the SAR image; (c) weighted rank fill ratio: the ratio of the sum of the intensity values of the brighter pixels in the target region to the sum of the intensity values of all pixels; (d) mass characteristic: counts the maximum amount of target structure information in the target region; (e) diameter characteristic: the shortest diagonal length of the rectangle containing the target in the image after the filtering process; (f) maximum CFAR characteristic: counts the number of pixels whose intensity exceeds the maximum intensity; (g) mean CFAR characteristic: counts the number of pixels whose intensity is greater than the mean; (h) CFAR bright characteristic: the ratio of the number of pixels in the target region that exceed the threshold to the total number of pixels.
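For illustration, a brief Python sketch of how a few of these statistics could be computed over a target chip is given below; the top-10% fraction used for the "brighter pixels" in the weighted rank fill ratio and the fixed brightness threshold are assumptions for demonstration, not the paper's exact definitions.

```python
import numpy as np

def classical_features(chip: np.ndarray, mask: np.ndarray, bright_thresh: float = 100.0):
    """Compute a few of the listed statistics over the masked target region."""
    pix = chip[mask > 0].astype(np.float64)             # target-region pixel intensities
    mean, std = pix.mean(), pix.std()
    # weighted rank fill ratio: share of total intensity held by the brightest pixels
    k = max(1, int(0.1 * pix.size))                     # assumed top-10% definition
    wrfr = np.sort(pix)[-k:].sum() / pix.sum()
    mean_cfar = int(np.sum(pix > mean))                 # pixels brighter than the mean
    cfar_bright = np.sum(pix > bright_thresh) / pix.size    # share above a fixed threshold
    return {"mean": mean, "std": std, "weighted_rank_fill_ratio": wrfr,
            "mean_cfar": mean_cfar, "cfar_bright": cfar_bright}
```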
By comparing the relevant evaluations, it can be seen that the classical features of the real and false targets are very close, and it is difficult to separate them quantitatively. Therefore, it can be concluded that identifying false targets using traditional features may be ineffective.

4.3. Shadow Detection Results

As an example, Figure 7 presents the histograms of the difference images derived from the reference and the upper-right test images for the six target slices. It can be seen that the histograms contain a high-peak region and a low-peak region. The corresponding ratio curves of the histograms are given in Figure 8. It is intuitive that when the pixels of the difference images fall into the high-peak region, the neighborhood ratio satisfies $R(i) > 1$, and the first point with $R(i) < 1$ can be regarded as the dividing point, which is marked in Figure 8. This observation is consistent with the above theoretical analysis. The detection threshold corresponding to each slice is given in Table 3.
Applying the proposed shadow detection method, the extracted shadows in each slice are shown in Figure 9a. For comparison, the shadow detection results derived from the CFAR and dual-threshold Otsu methods are also presented in Figure 9b,c. It can be seen that both the CFAR and dual-threshold Otsu methods suffer from missed detections and false alarms. In comparison, the proposed method is more accurate, and the extracted shadow contours are complete and clear-cut.

4.4. False Target Identification

Based on the preliminarily detected shadows, the final identification results using the distinction conditions are presented in Figure 10, where Figure 10a gives the three-stage detection results. For targets 2, 3, 5, and 6, the geometric relationship between the detected shadow and the target itself satisfies the distinction condition; thus, they are recognized as real targets. In comparison, no shadow is located for target 1, while the geometric relationship does not satisfy the condition for target 4. As a result, targets 1 and 4 are identified as false targets.
To further demonstrate its effectiveness, we validate the proposed method on other datasets by changing the background and the target templates (e.g., bulldozers and trucks). The detection results are shown in Figure 11. It can be seen that the proposed method remains effective and accurate. To verify its superiority, the proposed method is compared with two commonly used methods (CFAR and the Gray-Level Co-occurrence Matrix (GLCM)), and the quantitative evaluation is given in Table 4, Table 5 and Table 6. It can be seen that the two comparison methods detect targets based only on intensity information, so they lose effectiveness in false target identification.

5. Conclusions

The SAR deceptive jamming technique has been widely applied in the field of radar electronic countermeasures, and separating jamming from real targets in SAR images is of significant importance for military surveillance and reconnaissance. In this paper, a SAR image false target identification method is proposed, which involves a change-detection-based shadow extraction technique and a geometric identification condition. For the former, an image translation strategy and a histogram ratio-curve-based threshold selection technique are designed to extract the shadows of candidate targets. For the latter, a three-stage identification method based on the geometric relationship between targets and shadows is proposed to identify false targets. Quantitative validations and comparisons on real and simulated data not only prove that the proposed method is capable of discriminating jamming from real targets, but also demonstrate that the shadow deserves to be further exploited for target recognition. Future work will focus on identifying false targets in scenes with weak backscattering intensity and on further utilizing the complex-valued data of SAR images on more datasets with additional targets and scenarios.

Author Contributions

Conceptualization, H.Z. and S.Q.; methodology, H.Z. and J.W.; software, H.Z.; validation, H.Z. and J.W.; Formal analysis, H.Z., S.Q. and J.W.; resources, S.Q.; data curation, H.Z.; writing—original draft, H.Z. and J.W.; writing—review & editing, H.Z., S.Q. and S.X.; visualization, H.Z.; supervision, S.X., Y.L. and P.W.; project administration, S.Q., S.X., Y.L. and P.W.; funding acquisition, S.Q. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No additional data are available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Papathanassiou, K.P. A Tutorial on Synthetic Aperture Radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef]
  2. Sun, H.; Shimada, M.; Xu, F. Recent Advances in Synthetic Aperture Radar Remote Sensing—Systems, Data Processing, and Applications. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2013–2016. [Google Scholar] [CrossRef]
  3. Yongzhen, L.; Datong, H.; Shiqi, X.; Xuesong, W. A review of synthetic aperture radar jamming technique. J. Radars 2020, 9, 753–764. [Google Scholar]
  4. Li, Y.-C. Active Deceptive Jamming against SAR Based on Convolutional Modulation. Master's Thesis, National University of Defense Technology, Changsha, China, 2013. [Google Scholar]
  5. Jiao, L.C.; Wang, S.; Hou, B. A Review of SAR Images Understanding and Interpretation. Chin. J. Electron. 2005, 33, 2423–2434. [Google Scholar]
  6. Singh, P.; Diwakar, M.; Shankar, A.; Shree, R.; Kumar, M. A Review on SAR Image and its Despeckling. Arch. Comput. Methods Eng. 2021, 28, 4633–4653. [Google Scholar] [CrossRef]
  7. Feng, Q.; Xu, H.; Wu, Z.; Liu, W. Deceptive jamming detection for SAR based on cross-track interferometry. Sensors 2018, 18, 2265. [Google Scholar] [CrossRef]
  8. Zhang, X.; Li, W.; Huang, C.; Wang, W.; Li, Z.; Wu, J. Three Dimensional Surface Reconstruction with Multistatic SAR. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 5207–5210. [Google Scholar]
  9. Xiong, B.; Chen, J.M.; Kuang, G. A change detection measure based on a likelihood ratio and statistical properties of SAR intensity images. Remote Sens. Lett. 2012, 3, 267–275. [Google Scholar] [CrossRef]
  10. Zhao, P.; Dai, D.; Wu, H.; Pang, B. Research on Repeater Jamming Detection Method in SAR Image Domain. In Proceedings of the 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; pp. 236–239. [Google Scholar]
  11. Blomerus, N.; Cilliers, J.; Nel, W.; Blasch, E.; de Villiers, P. Feedback-Assisted Automatic Target and Clutter Discrimination Using a Bayesian Convolutional Neural Network for Improved Explainability in SAR Applications. Remote Sens. 2022, 14, 6096. [Google Scholar] [CrossRef]
  12. Choi, J.-H.; Lee, M.-J.; Jeong, N.-H.; Lee, G.; Kim, K.-T. Fusion of target and shadow regions for improved SAR ATR. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5226217. [Google Scholar] [CrossRef]
  13. Tang, X.; Zhang, X.; Shi, J.; Wei, S.; Yu, L. SAR deception jamming target recognition based on the shadow feature. In Proceedings of the 2017 25th European Signal Processing Conference (EUSIPCO), Kos, Greece, 28 August–2 September 2017; pp. 2491–2495. [Google Scholar]
  14. Jahangir, M.; Blacknell, D.; Moate, C.; Hill, R. Extracting information from shadows in SAR imagery. In Proceedings of the 2007 International Conference on Machine Vision, Islamabad, Pakistan, 28–29 December 2007; pp. 107–112. [Google Scholar]
  15. Gao, F.; You, J.; Wang, J.; Sun, J.; Yang, E.; Zhou, H. A novel target detection method for SAR images based on shadow proposal and saliency analysis. Neurocomputing 2017, 267, 220–231. [Google Scholar] [CrossRef]
  16. Papson, S.; Narayanan, R.M. Classification via the shadow region in SAR imagery. IEEE Trans. Aerosp. Electron. Syst. 2012, 48, 969–980. [Google Scholar] [CrossRef]
  17. Zhang, P.; Chen, L.; Li, Z.; Xing, J.; Xing, X.; Yuan, Z. Automatic extraction of water and shadow from SAR images based on a multi-resolution dense encoder and decoder network. Sensors 2019, 19, 3576. [Google Scholar] [CrossRef] [PubMed]
  18. Zhang, X.; Yang, P.; Sun, H. An omega-k algorithm for multireceiver synthetic aperture sonar. Electron. Lett. 2023, 59, e12859. [Google Scholar] [CrossRef]
  19. Huang, P.; Yang, P. Synthetic aperture imagery for high-resolution imaging sonar. Front. Mar. Sci. 2022, 9, 1049761. [Google Scholar] [CrossRef]
  20. Xie, Z.; Xu, Z.; Han, S.; Zhu, J.; Huang, X. Modulus Constrained Minimax Radar Code Design Against Target Interpulse Fluctuation. IEEE Trans. Veh. Technol. 2023, 72, 1–6. [Google Scholar] [CrossRef]
  21. Zhu, J.; Song, Y.; Jiang, N.; Xie, Z.; Fan, C.; Huang, X. Enhanced Doppler Resolution and Sidelobe Suppression Performance for Golay Complementary Waveforms. Remote Sens. 2023, 15, 2452. [Google Scholar] [CrossRef]
  22. Li, H.-X.; Yu, X.-L.; Sun, X.-D.; Tian, J.-C.; Wang, X.-G. Shadow Detection in SAR Images: An OTSU-and CFAR-Based Method. In Proceedings of the IGARSS 2020-2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 2803–2806. [Google Scholar]
  23. Huang, Y.; Liu, F. Detecting cars in VHR SAR images via semantic CFAR algorithm. IEEE Geosci. Remote Sens. Lett. 2016, 13, 801–805. [Google Scholar] [CrossRef]
  24. Saha, S.; Shahzad, M.; Ebel, P.; Zhu, X.X. Supervised Change Detection Using Prechange Optical-SAR and Postchange SAR Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8170–8178. [Google Scholar] [CrossRef]
  25. Quan, S.; Xiong, B.; Zhang, S.; Yu, M.; Kuang, G. Adaptive and fast prescreening for SAR ATR via change detection technique. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1691–1695. [Google Scholar] [CrossRef]
  26. Li, H.; Yu, X.; Zou, L.; Zhou, Y.; Wang, X. A feed-forward framework integrating saliency and geometry discrimination for shadow detection in SAR images. IET Radar Sonar Navig. 2022, 16, 249–266. [Google Scholar] [CrossRef]
  27. Xu, H.; Yang, Z.; Chen, G.; Liao, G.; Tian, M. A ground moving target detection approach based on shadow feature with multichannel high-resolution synthetic aperture radar. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1572–1576. [Google Scholar] [CrossRef]
  28. Datong, H.; Shiqi, X.; Yemin, L.; Yongzhen, L.; Shunping, X. Fake SAR signal generation method based on noise convolution modulation. J. Radars 2020, 9, 898–907. [Google Scholar]
Figure 1. Generation of the difference image.
Figure 2. The flowchart of the proposed method.
Figure 3. Generation of the SAR shadow: principle of shadow formation.
Figure 4. Relationship between the target and shadow.
Figure 5. Simulated jammed SAR image: (a) original image interfered by deceptive jamming and the image whose interested targets are outlined; (b) slices that contain suspicious targets.
Figure 6. Target identification results (from left to right: SAR images, detection results, masking results, morphological filtering results, and calculation of minimum outer rectangle): (a) real targets; (b) false targets.
Figure 7. Partial histogram of difference images, which are derived from the reference and the upper right test images for six target slices.
Figure 8. Ratio curves of the histograms. The horizontal axis indicates the gray value in the range [0, 255], and the vertical axis indicates the ratio of the histogram at two adjacent gray-level values. The dividing point can be selected as the first point satisfying R(i) < 1.
Figure 9. Comparison of different shadow detection methods: (a) results of proposed method; (b) results of CFAR method; (c) results of dual-threshold Otsu method.
Figure 10. Identification process of false targets: (a) target slices and their internal shadows; (b) image of the identification result.
Figure 11. Identification of false targets with different backgrounds and targets. (a) Dataset #2. (b) Dataset #3.
Table 1. Simulation parameters of deceptive jamming.

Parameter             Value
Height                5 km
Carrier Frequency     16 GHz
Range Resolution      0.5 m
Bandwidth             265.8 GHz
Speed                 30 m/s
Pulse Width           3 μs
Azimuth Resolution    0.5 m
Jamming Pattern       2-D convolution
Table 2. Statistical analysis of the feature distribution of real and false targets.

Types of Features    Feature                                      Real Targets    False Targets
Texture Features     Mean                                         64.9469         64.6906
                     Standard Deviation                           26.4676         25.0982
                     Standard Deviation Characteristic            2.5894          2.6651
                     Weighted Rank Fill Ratio Characteristics     0.0924          0.0996
Shape Features       Mass Characteristics                         175             169
                     Diameter Characteristics                     26.0155         24.6383
Scale Features       Maximum CFAR Characteristics                 248             241
                     Mean CFAR Characteristics                    135.5397        130.4032
                     CFAR Bright Characteristics (100)            0.8492          0.7339
                     CFAR Bright Characteristics (150)            0.2937          0.2661
                     CFAR Bright Characteristics (200)            0.0794          0.0887
Table 3. Detection threshold of the proposed method.

Slice      Threshold
Slice 1    47
Slice 2    41.5
Slice 3    42.25
Slice 4    42.75
Slice 5    46.25
Slice 6    45.25
Table 4. Quantitative detection results with dataset #1.

Method      Real Targets    False Targets    Accuracy
Proposed    4               2                100%
CFAR        6               0                66.7%
GLCM        6               0                66.7%
Table 5. Quantitative detection results with dataset #2.

Method      Real Targets    False Targets    Accuracy
Proposed    3               1                100%
CFAR        4               0                75%
GLCM        4               0                75%
Table 6. Quantitative detection results with dataset #3.

Method      Real Targets    False Targets    Accuracy
Proposed    3               2                100%
CFAR        5               0                60%
GLCM        5               0                60%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
