Article

A Two-Stage STAP Method Based on Fine Doppler Localization and Sparse Bayesian Learning in the Presence of Arbitrary Array Errors

1
National Lab of Radar Signal Processing, Xidian University, Xi’an 710071, China
2
School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510275, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(1), 77; https://doi.org/10.3390/s22010077
Submission received: 16 November 2021 / Revised: 13 December 2021 / Accepted: 20 December 2021 / Published: 23 December 2021
(This article belongs to the Section Radar Sensors)

Abstract

In the presence of unknown array errors, sparse recovery based space-time adaptive processing (SR-STAP) methods usually use the ideal spatial steering vectors without array errors to construct the space-time dictionary directly; the resulting steering vector mismatch between the dictionary and the clutter data causes severe performance degradation of SR-STAP methods. To solve this problem, in this paper we propose a two-stage SR-STAP method for suppressing nonhomogeneous clutter in the presence of arbitrary array errors. In the first stage, a set of spatial steering vectors with array errors is estimated by fine Doppler localization, utilizing the spatial-temporal coupling property of the ground clutter. In the second stage, to solve the model mismatch problem caused by array errors, we directly use the spatial steering vectors obtained in the first stage to construct the space-time dictionary, and then the constructed dictionary and the multiple measurement vector sparse Bayesian learning (MSBL) algorithm are combined for space-time adaptive processing (STAP). The proposed SR-STAP method exhibits superior clutter suppression and target detection performance in the presence of arbitrary array errors. Simulation results validate the effectiveness of the proposed method.

1. Introduction

Space-time adaptive processing (STAP) [1,2,3,4,5,6,7,8] is an effective approach for ground clutter suppression and low-velocity target detection in airborne radars. The performance of STAP mainly depends on the estimation accuracy of the clutter plus noise covariance matrix (CCM) of the cell under test (CUT). Generally, independent and identically distributed (IID) target-free training samples adjacent to the CUT are used to estimate the CCM. According to the Reed–Mallett–Brennan (RMB) rule [9], to keep the output signal-to-clutter-plus-noise ratio (SCNR) loss within 3 dB, the number of IID training samples must be greater than twice the system degrees of freedom (DOFs). However, this requirement is hard to satisfy in practical heterogeneous and non-stationary clutter environments, which results in severe performance degradation of STAP algorithms.
Several low-sample methods have been developed to relieve the performance degradation caused by limited training data, such as reduced-dimension (RD) [10,11,12,13,14,15,16] algorithms, reduced-rank (RR) [17,18,19,20,21] algorithms, parametric adaptive matched filter (PAMF) algorithms [22,23], direct data domain (D3) [24,25] algorithms and knowledge-aided (KA) algorithms [26,27,28,29,30]. Although these algorithms reduce the number of required training samples, they suffer from some drawbacks: the sample requirement of RR and RD algorithms is still hard to satisfy, especially for large-scale systems; the model order of PAMF algorithms is hard to determine; the system DOFs are significantly reduced for D3 algorithms; and exact prior knowledge of the environment is hard to obtain for KA algorithms.
Recently, with the development of sparse recovery (SR) techniques, sparse recovery based space-time adaptive processing (SR-STAP) methods have been extensively researched [31,32,33,34,35,36,37,38,39]. By utilizing the intrinsic sparsity of the clutter in the angle-Doppler plane, SR-STAP recovers the clutter signal with a sparse coefficient vector over a uniformly discretized space-time dictionary. Compared with traditional STAP methods, SR-STAP exhibits better clutter suppression performance with very few training samples. Unfortunately, most SR algorithms, such as the iterative splitting and thresholding (IST) algorithm [40] and the homotopy algorithm [41], require fine tuning of one or more user parameters that significantly affect the recovery results. Sparse Bayesian learning (SBL) was proposed by Tipping and was introduced to sparse signal recovery by Wipf for the single measurement vector (SMV) case and the multiple measurement vector (MMV) case [42,43,44]. Different from general SR algorithms, SBL is parameter-independent, which guarantees the robustness of the algorithm in changing environments. Moreover, SBL achieves favorable performance when the dictionary is highly coherent, and its global minimum is always the sparsest solution. Thus, owing to its robustness and excellent performance, sparse Bayesian learning based space-time adaptive processing (SBL-STAP) [45,46] has received much attention.
However, SR-STAP methods rely on the accuracy of the sparse model and suffer performance degradation due to the model mismatch caused by array errors. Thus, several SR-STAP methods that can handle unknown array errors have been developed. A sparsity-based STAP method considering array gain/phase error (AGPE-STAP) is proposed in [47], which combines a conventional sparsity-based STAP method and a conventional array gain/phase error calibration method. A sparsity-based STAP method with array gain/phase (GP) error self-calibration is developed in [48], which iteratively solves an SR problem and an LS calibration problem. In [49], utilizing the specific structure of the mutual coupling matrix, a mutual coupling calibration method is developed for SBL-STAP by rearranging the received snapshots with a designed spatial-temporal selection matrix. In [50], under the framework of the alternating direction method (ADM), a constraint is added to the array GP errors, and the conventional sparsity-based STAP problem is transformed into a joint optimization problem of the angle-Doppler profile and the array GP errors. However, these SR-STAP methods rely on specific error models and are only suitable for gain/phase calibration or mutual coupling calibration. In practice, various array errors often act together and some errors are difficult to model; in that case, these methods are no longer effective. Thus, an SR-STAP method that can handle arbitrary array errors is urgently needed.
In this paper, we propose a two-stage SR-STAP method for suppressing nonhomogeneous clutter in the presence of arbitrary array errors. In our two-stage SR-STAP method, the radar operates in two modes. In the first stage, the radar operates in measurement mode, which requires a long coherent processing interval (CPI) to ensure sufficient Doppler resolution. Utilizing the spatial-temporal coupling property of the ground clutter, a set of spatial steering vectors with array errors is then estimated by fine Doppler localization. In the second stage, the radar operates in STAP mode. To solve the model mismatch problem caused by array errors, we directly use the spatial steering vectors obtained in the first stage to construct the space-time dictionary, and then the constructed dictionary and the MSBL algorithm are combined for STAP. The main contributions of this paper are summarized as follows.
(1) A new two-stage SR-STAP method is proposed. In the presence of arbitrary array errors, the proposed method obtains superior clutter suppression and target detection performance with limited training samples.
(2) A steering vector estimation method for arbitrary array errors is developed, which is based on the spatial-temporal coupling property of the ground clutter. Unlike many existing array calibration methods, which are only suitable for individual perturbations, the developed method can handle arbitrary array errors. Since it is free of any array error model and works directly on clutter data, it also avoids the model mismatch problem and adapts to changing scenes.
(3) The developed steering vector estimation method remains effective when intrinsic clutter motion (ICM) is present; the spatial steering vectors with array errors can still be well estimated when the pulse-to-pulse fluctuations are small.
The rest of the paper is organized as follows. In Section 2, the signal model with array errors is introduced. In Section 3, the proposed two-stage SR-STAP method is described. In Section 4, simulation results are provided to demonstrate the clutter suppression performance and target detection performance of the proposed method. Conclusions are drawn in Section 5.
Notation: Boldface small letters denote vectors and boldface capital letters denote matrices. $(\cdot)^T$ and $(\cdot)^H$ represent the transpose and the Hermitian transpose, respectively. $\mathbb{R}$, $\mathbb{R}^+$ and $\mathbb{C}$ represent the real field, the nonnegative real field and the complex field, respectively. The expectation operator is represented by $E\{\cdot\}$. The symbols $\otimes$ and $\odot$ denote the Kronecker product and the Hadamard product, respectively. $\mathrm{diag}(\cdot)$ represents a diagonal matrix with the entries of the argument vector on the diagonal. The $NK \times NK$ identity matrix is denoted by $\mathbf{I}_{NK}$. $\|\cdot\|_F$ denotes the Frobenius norm. $\|\cdot\|_{2,0}$ denotes a mixed norm defined as the number of non-zero $\ell_2$-norms of the row vectors.

2. Signal Model

Consider an airborne pulsed Doppler radar system that employs a side-looking uniform linear array (ULA) consisting of $N$ elements with an inter-element spacing $d$ and $K$ coherent pulses in a CPI at a constant pulse repetition frequency (PRF) $f_{PRF}$. Ignoring the influence of range ambiguity, the clutter plus noise echoes collected over all pulses, all elements and all range bins can be represented by
$$\mathbf{Y} = \left[\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_L\right] \tag{1}$$
where $\mathbf{y}_l$ is the clutter plus noise data snapshot with array errors of the lth range bin, given by
$$\mathbf{y}_l = \left[y_{11}^l, y_{21}^l, \ldots, y_{N1}^l, \ldots, y_{1K}^l, y_{2K}^l, \ldots, y_{NK}^l\right]^T = \sum_{i=1}^{N_c} \varsigma_{c,i}\,\mathbf{s}_{c,i} + \mathbf{n}_l = \sum_{i=1}^{N_c} \varsigma_{c,i}\,\mathbf{b}(f_{d,i}) \otimes \mathbf{a}(f_{s,i}) + \mathbf{n}_l \tag{2}$$
where $N_c$ is the number of independent clutter sources, $\varsigma_{c,i}$ is the random complex amplitude, $\mathbf{s}_{c,i} = \mathbf{b}(f_{d,i}) \otimes \mathbf{a}(f_{s,i})$ is the spatial-temporal steering vector with array errors of the ith clutter patch, $\mathbf{a}(f_{s,i}) = \mathbf{G}_{c,i}\bar{\mathbf{a}}(f_{s,i})$ is the spatial steering vector with array errors of the ith clutter patch, $\mathbf{G}_{c,i}$ is the array error matrix of the ith clutter patch, $\mathbf{n}_l$ is a Gaussian noise vector with zero mean and covariance matrix $\sigma^2\mathbf{I}$, $\sigma^2$ is the noise power, $\mathbf{I}$ is the identity matrix, and $\mathbf{b}(f_{d,i})$ and $\bar{\mathbf{a}}(f_{s,i})$ are the corresponding temporal steering vector and the ideal spatial steering vector without array errors, given by
$$\mathbf{b}(f_{d,i}) = \left[1, \exp(j2\pi f_{d,i}), \ldots, \exp\left(j2\pi (K-1) f_{d,i}\right)\right]^T \tag{3}$$
$$\bar{\mathbf{a}}(f_{s,i}) = \left[1, \exp(j2\pi f_{s,i}), \ldots, \exp\left(j2\pi (N-1) f_{s,i}\right)\right]^T \tag{4}$$
where $f_{s,i} = d\cos\phi_i/\lambda$ and $f_{d,i} = 2v_p\cos\phi_i/(\lambda f_{PRF})$ are the normalized spatial frequency and the normalized Doppler frequency of the ith clutter patch, $\phi_i$ is the corresponding spatial cone angle, $\lambda$ is the wavelength, and $v_p$ is the velocity of the platform.
In practice, the gain and delay of each sensor are usually not identical due to different aging rates or imperfect manufacturing, which causes gain and phase errors. These errors can be represented by an $N \times N$ complex diagonal matrix $\mathbf{G}_{gain}$ [4]
$$\mathbf{G}_{gain} = \mathrm{diag}\left([g_1, g_2, \ldots, g_N]\right) \tag{5}$$
where $g_n = (1+\Delta\alpha_n)e^{j\Delta\varphi_n}$, and $\Delta\alpha_n$ and $\Delta\varphi_n$ are the gain error and phase error of the nth sensor, respectively.
Due to the close spacing between sensors, the interactions among sensors generate mutual coupling, which can be represented by the following $N \times N$ symmetric Toeplitz matrix $\mathbf{G}_{mutual}$ [4]
$$\mathbf{G}_{mutual} =
\begin{bmatrix}
1 & c_1 & \cdots & c_q & & \mathbf{0}\\
c_1 & 1 & c_1 & & \ddots & \\
\vdots & c_1 & \ddots & \ddots & & c_q\\
c_q & & \ddots & \ddots & c_1 & \vdots\\
 & \ddots & & c_1 & 1 & c_1\\
\mathbf{0} & & c_q & \cdots & c_1 & 1
\end{bmatrix} \tag{6}$$
where $c_i\ (i = 1, 2, \ldots, q)$ denotes the complex mutual coupling coefficient and $q \ll N$, which means that the mutual coupling can be ignored when the element spacing is greater than $q$ inter-element spacings.
To obtain a specified array geometry, each sensor must be placed at its precise location. In practice, this requirement is sometimes difficult to satisfy, which causes sensor location errors. The error vector of the ith clutter patch caused by sensor location errors can be written as [4]
$$\mathbf{e}_{p,i} = \left[1, e^{j2\pi \Delta_1 \cos\phi_i/\lambda}, \ldots, e^{j2\pi \Delta_{N-1}\cos\phi_i/\lambda}\right]^T \tag{7}$$
where $\Delta_0 = 0$, and $\Delta_j\ (j = 1, 2, \ldots, N-1)$ are random numbers representing the location error of each sensor.
Let $\mathbf{G}_{others,i} \in \mathbb{C}^{N\times N}$ denote the other array perturbations encountered at the ith clutter patch; the array error matrix $\mathbf{G}_{c,i}$ can then be formulated as
$$\mathbf{G}_{c,i} = \mathbf{G}_{gain}\,\mathbf{G}_{mutual}\,\mathrm{diag}\left(\mathbf{e}_{p,i}\right)\mathbf{G}_{others,i} \tag{8}$$
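To make the error model in (2)-(8) concrete, the following minimal Python sketch builds the composite error matrix $\mathbf{G}_{c,i}$ and a perturbed space-time steering vector for one clutter patch. It assumes a half-wavelength ULA and the example error levels used later in Section 4; the function and variable names are ours, not part of the paper.

```python
import numpy as np

def array_error_matrix(N, d, lam, phi, rng):
    """Composite array error matrix (8): gain/phase (5), mutual coupling (6),
    sensor location errors (7). Error levels follow Section 4 as an example."""
    # gain/phase errors: delta_alpha ~ U(-0.1, 0.1), delta_phi ~ U(-10 deg, 10 deg)
    g = (1 + rng.uniform(-0.1, 0.1, N)) * np.exp(1j * np.deg2rad(rng.uniform(-10, 10, N)))
    g[0] = 1.0                                   # first element is the reference
    G_gain = np.diag(g)
    # mutual coupling: banded symmetric Toeplitz built from c_0 = 1, c_1, c_2
    c = np.array([1.0, 0.1250 + 0.2165j, 0.0866 - 0.0500j])
    col = np.zeros(N, dtype=complex)
    col[:len(c)] = c
    G_mutual = np.array([[col[abs(m - n)] for n in range(N)] for m in range(N)])
    # sensor location errors: Delta ~ U(-0.1 d, 0.1 d), Delta_0 = 0
    delta = rng.uniform(-0.1 * d, 0.1 * d, N)
    delta[0] = 0.0
    e_p = np.exp(1j * 2 * np.pi * delta * np.cos(phi) / lam)
    return G_gain @ G_mutual @ np.diag(e_p)      # "other" perturbations omitted here

rng = np.random.default_rng(0)
N, K, d, lam, vp, fprf = 8, 8, 0.15, 0.30, 150.0, 2000.0   # assumed half-wavelength spacing
phi = np.deg2rad(60.0)                                     # cone angle of one clutter patch
fs = d * np.cos(phi) / lam                                 # normalized spatial frequency
fd = 2 * vp * np.cos(phi) / (lam * fprf)                   # normalized Doppler frequency
a_bar = np.exp(1j * 2 * np.pi * fs * np.arange(N))         # ideal spatial steering vector (4)
b = np.exp(1j * 2 * np.pi * fd * np.arange(K))             # temporal steering vector (3)
G_c = array_error_matrix(N, d, lam, phi, rng)
s_c = np.kron(b, G_c @ a_bar)                              # perturbed space-time vector, as in (2)
```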

3. Proposed Method

In this section, we propose a two-stage SR-STAP method for suppressing nonhomogeneous clutter in the presence of arbitrary array errors.

3.1. Steering Vector Estimation

In the first stage, the radar operates in measurement mode. Assuming that the number of pulses in a CPI is $K_1$, $K_1$ should be a large value to ensure sufficient Doppler resolution. From (2), we get the clutter plus noise data snapshot of the lth range bin
$$\mathbf{y}_l = \sum_{i=1}^{N_c}\varsigma_{c,i}\,\mathbf{b}(f_{d,i}) \otimes \mathbf{a}(f_{s,i}) + \mathbf{n}_l \tag{9}$$
Without consideration of the ICM, the relationship between the spatial frequency $f_{s,i}$ and the temporal frequency $f_{d,i}$ is represented by
$$f_{s,i} = \frac{d f_{PRF}}{2 v_p} f_{d,i}, \quad i = 1, 2, \ldots, N_c \tag{10}$$
This means that clutter patches can be localized either by a spatial filter or by a Doppler filter. Generally, the number of pulses in a CPI is larger than the number of elements in the array, so it is easier to create a narrow Doppler filter. Moreover, achieving ultra-low sidelobes is more practical for a Doppler filter than for a spatial filter. Thus, clutter localization is preferably realized by fine Doppler localization. The kth Doppler filter output is given by
$$\begin{aligned}
\mathbf{X}_k(l) &= \mathbf{T}_k^H \mathbf{y}_l = \left(\mathbf{f}_k \otimes \mathbf{I}_N\right)^H \mathbf{y}_l \\
&= \sum_{i=1}^{N_c}\varsigma_{c,i}\left(\mathbf{f}_k\otimes\mathbf{I}_N\right)^H\left(\mathbf{b}(f_{d,i})\otimes\mathbf{a}(f_{s,i})\right) + \left(\mathbf{f}_k\otimes\mathbf{I}_N\right)^H\mathbf{n}_l \\
&= \sum_{i=1}^{N_c}\varsigma_{c,i}\left(\mathbf{f}_k^H\mathbf{b}(f_{d,i})\right)\otimes\left(\mathbf{I}_N\mathbf{a}(f_{s,i})\right) + \left(\mathbf{f}_k\otimes\mathbf{I}_N\right)^H\mathbf{n}_l \\
&= \sum_{i=1}^{N_c}\varsigma_{c,i}\,\mathbf{f}_k^H\mathbf{b}(f_{d,i})\,\mathbf{a}(f_{s,i}) + \tilde{\mathbf{n}}_l
\end{aligned} \tag{11}$$
where $\mathbf{T}_k = \mathbf{f}_k \otimes \mathbf{I}_N$ is the transformation matrix, $\mathbf{f}_k = \mathbf{t}_f \odot \mathbf{u}_k$ is the filter coefficient vector of the kth Doppler filter, $\mathbf{t}_f$ is an ultra-low sidelobe taper, $\mathbf{u}_k = \left[1, \exp(j2\pi k/K_1), \ldots, \exp\left(j2\pi k(K_1-1)/K_1\right)\right]^T$, $\tilde{\mathbf{n}}_l = \left(\mathbf{f}_k \otimes \mathbf{I}_N\right)^H\mathbf{n}_l$ is the additive Gaussian noise, and
$$p_{br}\left(f_{d,k}, f_{d,i}\right) = \mathbf{f}_k^H\mathbf{b}(f_{d,i}) \tag{12}$$
is the low-pass filter response with the passband
$$\left|f_{d,i} - f_{d,k}\right| \le \frac{D_w}{2} \tag{13}$$
where $f_{d,k}$ is the center frequency of the kth Doppler filter and $D_w$ is the Doppler frequency passband width (DFPW). Then, (11) can be recast as
$$\mathbf{X}_k(l) = \sum_{i=1}^{N_c}\varsigma_{c,i}\,p_{br}\left(f_{d,k}, f_{d,i}\right)\,\mathbf{a}(f_{s,i}) + \tilde{\mathbf{n}}_l \tag{14}$$
According to the Doppler frequency passband of the kth Doppler filter, we can obtain the associated spatial frequency passband of the clutter component by substituting (10) into (13)
$$\frac{d f_{PRF}}{2 v_p} f_{d,k} - \frac{d f_{PRF}}{4 v_p} D_w \le f_{s,i} \le \frac{d f_{PRF}}{2 v_p} f_{d,k} + \frac{d f_{PRF}}{4 v_p} D_w \tag{15}$$
The width of the spatial frequency passband is
$$\Delta = \frac{d f_{PRF}}{2 v_p} D_w \tag{16}$$
For a Doppler filter with ultra-low sidelobes, the gain of the stopband is negligible relative to the passband. Without consideration of the components in the stopband of the Doppler filter, (14) can be written as
$$\mathbf{X}_k(l) = \sum_{i=N_p^k}^{N_q^k} \xi_{c,i}\,\mathbf{a}(f_{s,i}) + \tilde{\mathbf{n}}_l \tag{17}$$
where $\xi_{c,i} = \varsigma_{c,i}\,p_{br}\left(f_{d,k}, f_{d,i}\right)$, and $N_p^k$ and $N_q^k$ are the boundary indexes of the spatial frequency passband corresponding to the kth Doppler filter.
Similar to the Doppler beam sharpening (DBS) radar, we define a sharpening ratio as
$$\kappa = \frac{\theta_{mainlobe}}{\Delta} = \frac{2 v_p}{N d f_{PRF} D_w} \tag{18}$$
where $\theta_{mainlobe}$ is the mainlobe beamwidth.
For an untapered Doppler filter, the distance between its two first nulls is $2/K_1$, which is larger than its DFPW. Therefore, when $K_1$ is large, a narrow Doppler filter with a small DFPW can be obtained. However, its sidelobe level is high (the first sidelobe is at −13.4 dB); thus, the sidelobe gain of an untapered Doppler filter cannot be ignored. A heavily tapered Doppler filter can obtain ultra-low sidelobes, but at the cost of a broadened mainlobe, thereby resulting in a larger DFPW. We define the DFPW as the width of the Doppler frequency range over which the power gain of the Doppler filter drops by less than 40 dB. For a Doppler filter with ultra-low sidelobes, the power gain is negligible outside this range. It is difficult to obtain the analytical DFPW of a tapered Doppler filter, but we can give a reasonable value based on experience. For example, when a Chebyshev taper with a sidelobe level of −80 dB is used, $5/K_1$ is a reasonable DFPW value by experience, i.e., $D_w = 5/K_1$. Substituting $D_w = 5/K_1$ into (16), we get the spatial frequency passband width corresponding to the DFPW of a Doppler filter with an 80 dB Chebyshev taper
$$\Delta = \frac{5 d f_{PRF}}{2 v_p K_1} \tag{19}$$
Substituting (19) into (18) yields
$$\kappa = \frac{\theta_{mainlobe}}{\Delta} = \frac{2 v_p K_1}{5 N d f_{PRF}} \tag{20}$$
Define the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ as
$$cc = \frac{\left|\mathbf{a}^H\!\left(f_{s,i}+\tfrac{\Delta}{2}\right)\mathbf{a}(f_{s,i})\right|}{\sqrt{\mathbf{a}^H\!\left(f_{s,i}+\tfrac{\Delta}{2}\right)\mathbf{a}\!\left(f_{s,i}+\tfrac{\Delta}{2}\right)}\sqrt{\mathbf{a}^H(f_{s,i})\,\mathbf{a}(f_{s,i})}} \tag{21}$$
Thus, as the number of pulses $K_1$ increases, the sharpening ratio $\kappa$ becomes larger and $\Delta$ becomes smaller; as a result, the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ becomes larger. Figure 1 depicts the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ versus the sharpening ratio $\kappa$, where $v_p = 150\ \mathrm{m/s}$, $N = 8$, $d = 0.15\ \mathrm{m}$, $f_{PRF} = 2000\ \mathrm{Hz}$ and $K_1$ changes from 40 to 520 at intervals of 40.
In Figure 1, the dotted line with symbol * shows the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ versus the sharpening ratio $\kappa$, and the dotted line with symbol ∘ denotes a threshold value. From Figure 1, we can observe that when the sharpening ratio $\kappa$ is larger than 6.4, the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ is greater than 0.99; i.e., in this case, if $\kappa$ is larger than 6.4 (the number of pulses in a CPI is larger than 256), $\mathbf{a}(f_{s,i}+\Delta/2)$ can be well approximated by $\mathbf{a}(f_{s,i})$.
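The trend in Figure 1 can be reproduced numerically: for each $K_1$, compute $\Delta$ from (19), $\kappa$ from (20), and the correlation coefficient (21) between $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$. A short sketch is given below, assuming an arbitrary spatial frequency, the parameters listed above, and $D_w = 5/K_1$; the names are ours.

```python
import numpy as np

def spatial_sv(N, fs):
    """Spatial steering vector of an N-element ULA at normalized frequency fs."""
    return np.exp(1j * 2 * np.pi * fs * np.arange(N))

def corr_coeff(a1, a2):
    """Correlation coefficient (21) between two spatial steering vectors."""
    return np.abs(a1.conj() @ a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))

N, d, vp, fprf = 8, 0.15, 150.0, 2000.0
fs0 = 0.2                                            # an arbitrary spatial frequency
for K1 in range(40, 521, 40):
    Dw = 5.0 / K1                                    # assumed DFPW of the -80 dB Chebyshev taper
    Delta = d * fprf * Dw / (2 * vp)                 # spatial passband width (16)/(19)
    kappa = 2 * vp * K1 / (5 * N * d * fprf)         # sharpening ratio (20)
    cc = corr_coeff(spatial_sv(N, fs0), spatial_sv(N, fs0 + Delta / 2))
    print(f"K1 = {K1:4d}  kappa = {kappa:5.2f}  cc = {cc:.4f}")
```

With these parameters the sharpening ratio is $\kappa = K_1/40$, so $K_1 = 256$ gives $\kappa = 6.4$ and a correlation coefficient of about 0.99, consistent with Figure 1.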
When the sharpening ratio $\kappa$ is large, $\mathbf{a}(f_{s,k}\pm\Delta/2)\approx\mathbf{a}(f_{s,k})$, and (17) can be simplified as
$$\mathbf{X}_k(l) = \mu_k(l)\,\mathbf{a}(f_{s,k}) + \tilde{\mathbf{n}}_l \tag{22}$$
where $\mu_k(l) = \sum_{i=N_p^k}^{N_q^k}\xi_{c,i}$, $f_{s,k}$ is the normalized spatial frequency corresponding to the center Doppler frequency of the kth Doppler filter, and $\mathbf{a}(f_{s,k})$ is the corresponding spatial steering vector.
To alleviate the adverse influence of $\tilde{\mathbf{n}}_l$, multiple range gates are utilized to estimate $\mathbf{a}(f_{s,k})$. According to (22), the covariance matrix of $\mathbf{X}_k(l)$ can be written as
$$\mathbf{R}_k(l) = E\left\{\mathbf{X}_k(l)\mathbf{X}_k^H(l)\right\} = \gamma_k^2(l)\,\mathbf{a}(f_{s,k})\mathbf{a}^H(f_{s,k}) + \tilde{\sigma}^2\mathbf{I}_N \tag{23}$$
where $\gamma_k^2(l) = E\left\{|\mu_k(l)|^2\right\}$, $E\left\{\tilde{\mathbf{n}}_l\tilde{\mathbf{n}}_l^H\right\} = \tilde{\sigma}^2\mathbf{I}_N$, and $\tilde{\sigma}^2$ is the additive noise power. In practice, $\mathbf{R}_k(l)$ is unknown and can be substituted by the sample covariance matrix, i.e.,
$$\hat{\mathbf{R}}_k(l) = \frac{1}{L}\sum_{l=1}^{L}\mathbf{X}_k(l)\mathbf{X}_k^H(l) \tag{24}$$
In the high clutter-to-noise ratio (CNR) case, $\gamma_k^2(l)/\tilde{\sigma}^2 \gg 1$ and it is valid to say that the number of large eigenvalues of $\hat{\mathbf{R}}_k(l)$ is 1. Thus, we can perform singular value decomposition (SVD) on $\hat{\mathbf{R}}_k(l)$, and $\mathbf{a}(f_{s,k})$ is estimated by the eigenvector associated with the largest eigenvalue.
When ICM is present, the pulse-to-pulse fluctuations will cause a broadening of the Doppler spectrum of a single clutter return and the relation in (10) does not hold. In this case, for a single clutter echo, its Doppler frequency range can be written as
$$f_{d,i} - \frac{D_b}{2} \le f_d \le f_{d,i} + \frac{D_b}{2} \tag{25}$$
where $D_b = \dfrac{2\sigma_v}{\lambda f_{PRF}}$ is the width of the Doppler spectrum and $\sigma_v$ is the velocity standard deviation caused by ICM [1].
By substituting (10) into (25), we get the associated spatial frequency range
$$\frac{d f_{PRF}}{2 v_p} f_{d,i} - \frac{d f_{PRF}}{4 v_p} D_b \le f_s \le \frac{d f_{PRF}}{2 v_p} f_{d,i} + \frac{d f_{PRF}}{4 v_p} D_b \tag{26}$$
And the width of the spatial frequency range is
$$\Delta_b = \frac{d f_{PRF}}{2 v_p} D_b \tag{27}$$
When the Doppler spectrum broadening caused by ICM is much smaller than the DFPW of the heavily tapered Doppler filter, i.e., $D_b \ll D_w$, the inequality $\Delta_b \ll \Delta$ holds. As a result, the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta_b/2)$ is approximately 1 when the sharpening ratio $\kappa$ is large, and in this case we can say that the Doppler frequency range given in (25) corresponds to a single spatial frequency $f_{s,i}$. Thus, when the velocity standard deviation $\sigma_v$ caused by ICM is small, the broadening of the Doppler spectrum has little effect on estimating the spatial steering vectors, and the proposed method for estimating the spatial steering vectors still works well.
In practice, clutter from the sidelobes and the nulls of the array pattern is much weaker than that from the mainlobe. Besides, the reflection coefficients are small in some unknown clutter areas. In addition, the adjacent range gates used to estimate $\mathbf{R}_k(l)$ may include strong moving targets and other unwanted components. In these cases, the estimation accuracy of $\mathbf{R}_k(l)$ or the condition $\gamma_k^2(l)/\tilde{\sigma}^2 \gg 1$ cannot be well guaranteed, which causes an inaccurate estimate of $\mathbf{a}(f_{s,k})$. Thus, beam scanning and secondary data selection are necessary. Figure 2 describes the process of beam scanning and fine Doppler localization. Firstly, to guarantee the gain of the array in all clutter regions, multiple beams, such as a group of $N$ orthogonal Fourier beams, are used to cover all the azimuth angles; thus, we can get the ground clutter data of all range gates under each spatial beam. Then, because the angle resolution in the spatial domain is low while the Doppler resolution in the temporal domain is high, a group of $K_1$ Doppler filters is used for better localization of the ground clutter. Thus, we can obtain the output data of all range gates under each heavily tapered Doppler filter by fine Doppler localization of the ground clutter data. Each spatial beam covers several Doppler filters and the $N$ spatial beams cover all $K_1$ Doppler filters, so the gain of the array in the clutter areas corresponding to the DFPW of each Doppler filter can be well guaranteed. Thus, by processing the output data of each heavily tapered Doppler filter in turn, a set of $K_1$ spatial steering vectors can be well estimated.
For secondary data selection, we define a power selection parameter $\varepsilon_l$
$$\varepsilon_l = \left|\hat{\mathbf{a}}_0^H(f_{s,k})\,\mathbf{X}_k(l)\right|^2 \tag{28}$$
and an angle selection parameter $\rho_l$
$$\rho_l = \frac{\left|\hat{\mathbf{a}}_0^H(f_{s,k})\,\mathbf{X}_k(l)\right|}{\sqrt{\hat{\mathbf{a}}_0^H(f_{s,k})\,\hat{\mathbf{a}}_0(f_{s,k})}\sqrt{\mathbf{X}_k^H(l)\,\mathbf{X}_k(l)}} \tag{29}$$
where $\hat{\mathbf{a}}_0(f_{s,k})$ is the initial spatial steering vector estimated by utilizing all range gates. According to the definitions of $\varepsilon_l$ and $\rho_l$ given in (28) and (29), $\varepsilon_l$ depends on both the direction and the amplitude of $\mathbf{X}_k(l)$, whereas $\rho_l$ depends only on the direction of $\mathbf{X}_k(l)$. Thus, we first use the power selection parameter $\varepsilon_l$ to pick out the range gates which may contain strong clutter or strong outliers. Then, we use the angle selection parameter $\rho_l$ to remove the possible outliers, such as strong moving targets or strong interference, whose directions are different from that of the clutter steering vector $\hat{\mathbf{a}}_0(f_{s,k})$. Thereafter, the range gates which may contain strong clutter are preserved and the possible outliers are removed.
Figure 3 plots the flow chart of the first stage of the proposed two-stage SR-STAP method, whose goal is to estimate a set of spatial steering vectors with array errors. Firstly, in the beam scanning step, we get the ground clutter data of all range gates given in (9) under the first spatial beam. Then, in the fine Doppler localization step, we obtain the output data of all range gates given in (11) under the first heavily tapered Doppler filter by fine Doppler localization of the ground clutter data. Next, in the secondary data selection step, we first obtain the initial estimated spatial steering vector by utilizing the output data of all range gates given in (11); we then use the power selection parameter $\varepsilon_l$ given in (28) to pick out the range gates which may contain strong clutter or strong outliers; finally, we use the angle selection parameter $\rho_l$ given in (29) to remove the range gates which may contain possible outliers. Then, in the steering vector estimation step, we calculate $\hat{\mathbf{R}}_k(l)$ by (24) utilizing the selected range gates and perform SVD on $\hat{\mathbf{R}}_k(l)$ to find the eigenvector $\hat{\mathbf{a}}(f_{s,k})$ associated with the largest eigenvalue, which is taken as the estimate of $\mathbf{a}(f_{s,k})$. Here we obtain the spatial steering vector with array errors corresponding to the first heavily tapered Doppler filter. We then judge whether all the Doppler channels contained in the current beam have been processed; if not, we set $k = k + 1$ and go back to the fine Doppler localization step. If so, we judge whether the beam scanning has been finished; if not, we set $n = n + 1$ and go back to the beam scanning step. When the beam scanning ends and all $K_1$ Doppler bins have been processed, a set of $K_1$ spatial steering vectors with array errors has been estimated by fine Doppler localization.
The procedures of the first stage of the proposed method are summarized as follows; a code sketch is given after the steps:
Step 1: Obtain the initial estimated spatial steering vector $\hat{\mathbf{a}}_0(f_{s,k})$ corresponding to the kth Doppler filter.
Step 2: Compute the values of the power selection parameter $\varepsilon_l$ and the angle selection parameter $\rho_l$ for all range gates according to (28) and (29).
Step 3: Find the $p$ range gates corresponding to the $p$ maximal values of $\varepsilon_l$ among all $L$ range gates, i.e., $\{l_1, l_2, \ldots, l_p\} = \arg\max_l \varepsilon_l,\ l = 1, 2, \ldots, L$.
Step 4: Find the $q$ range gates corresponding to the $q$ maximal values of $\rho_l$ among the $p$ range gates selected in Step 3, i.e., $\{\dot{l}_1, \dot{l}_2, \ldots, \dot{l}_q\} = \arg\max_l \rho_l,\ l = l_1, l_2, \ldots, l_p$.
Step 5: Calculate $\hat{\mathbf{R}}_k(l)$ given in (24) utilizing the $q$ range gates selected in Step 4. Perform SVD on $\hat{\mathbf{R}}_k(l)$ to find the eigenvector $\hat{\mathbf{a}}(f_{s,k})$ associated with the largest eigenvalue, which is taken as the estimate of $\mathbf{a}(f_{s,k})$.
Step 6: Go back to Step 1 until the beam scanning ends and all $K_1$ Doppler bins have been processed.
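A compact sketch of Steps 1-6 for one Doppler bin is given below, assuming the snapshots are arranged as in (2) and an 80 dB Chebyshev taper; the function name, the argument layout and the small regularization constant are our own choices, not part of the paper.

```python
import numpy as np
from scipy.signal.windows import chebwin

def estimate_spatial_sv(Y, N, K1, k, p, q):
    """Estimate a(f_{s,k}) for the k-th Doppler bin (Steps 1-5 above).

    Y : (N*K1, L) clutter-plus-noise snapshots, element index varying fastest.
    p, q : numbers of range gates kept by the power / angle selections.
    """
    L = Y.shape[1]
    # heavy-tapered Doppler filter f_k = t_f (Hadamard) u_k, -80 dB Chebyshev taper
    f_k = chebwin(K1, at=80) * np.exp(1j * 2 * np.pi * k * np.arange(K1) / K1)
    # fine Doppler localization: X_k(l) = (f_k kron I_N)^H y_l, stacked as (N, L)
    X = np.einsum('k,lkn->nl', f_k.conj(), Y.T.reshape(L, K1, N))
    # Step 1: initial estimate from all range gates (dominant eigenvector of (24))
    w0, V0 = np.linalg.eigh(X @ X.conj().T / L)
    a0 = V0[:, -1]
    # Steps 2-4: power selection (28) and angle selection (29)
    proj = np.abs(a0.conj() @ X)
    eps = proj ** 2
    rho = proj / (np.linalg.norm(a0) * np.linalg.norm(X, axis=0) + 1e-12)
    keep = np.argsort(eps)[-p:]                 # p gates with the largest eps_l
    keep = keep[np.argsort(rho[keep])[-q:]]     # q best-aligned gates among them
    # Step 5: sample covariance (24) over the selected gates and its decomposition
    R = X[:, keep] @ X[:, keep].conj().T / len(keep)
    w, V = np.linalg.eigh(R)
    return V[:, -1]                             # estimate of a(f_{s,k})
```

Looping $k$ over all $K_1$ Doppler bins (and over the spatial beams used for beam scanning) produces the full set of estimated steering vectors, as in Step 6.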

3.2. SR-STAP Method

In the second stage of our two-stage SR-STAP method, we first use the spatial steering vectors obtained in the first stage to construct the space-time dictionary; then, since the MSBL algorithm has been demonstrated to be a robust, sufficiently sparse and parameter-independent algorithm in the presence of noise, the existing multiple measurement vector sparse Bayesian learning based space-time adaptive processing (MSBL-STAP) [45] method is adopted.
In the second stage, the radar operates in STAP mode. Since a set of spatial steering vectors with array errors has already been measured in the first stage, high Doppler resolution is not needed in this mode; assuming that the pulse number in a CPI is $K_2$, in general $K_2 < K_1$. To solve the model mismatch problem caused by array errors, we select $N_s$ spatial steering vectors from the $K_1$ spatial steering vectors obtained in the first stage and use these selected steering vectors to construct the space-time dictionary. Then, the received data snapshots of $L$ range bins can be expressed by
$$\mathbf{Y} = \mathbf{D}\boldsymbol{\Psi} + \mathbf{N} \tag{30}$$
where $\boldsymbol{\Psi} = \left[\boldsymbol{\beta}_1, \boldsymbol{\beta}_2, \ldots, \boldsymbol{\beta}_L\right] \in \mathbb{C}^{N_sN_d \times L}$ is the solution matrix with each row representing a possible clutter source, $\mathbf{N} = \left[\mathbf{n}_1, \mathbf{n}_2, \ldots, \mathbf{n}_L\right] \in \mathbb{C}^{NK \times L}$ is a noise matrix whose entries are Gaussian with zero mean and variance $\sigma^2$, $\mathbf{D} = \left[\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_M\right]$ is the space-time dictionary, $M = N_sN_d$ is the number of grid points of the whole angle-Doppler plane, $N_s = \rho_s N\ (\rho_s > 1)$ is the number of angle bins (with $N_s \le K_1$), $N_d = \rho_d K_2\ (\rho_d > 1)$ is the number of Doppler bins, $\mathbf{s}_m = \mathbf{b}(f_{d,m}) \otimes \hat{\mathbf{a}}(f_{s,m})$ is the spatial-temporal steering vector with array errors of the mth grid point, $\hat{\mathbf{a}}(f_{s,m})$ is the estimated spatial steering vector with array errors of the mth grid point, and $\mathbf{b}(f_{d,m})$ is the temporal steering vector of the mth grid point.
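As an illustration (the function name and grid choices are ours), the dictionary $\mathbf{D}$ can be assembled by pairing each selected estimated spatial steering vector with every temporal steering vector on a uniform Doppler grid:

```python
import numpy as np

def build_dictionary(A_hat, K2, rho_d=4):
    """Space-time dictionary of (30): atoms s_m = b(f_{d,m}) kron a_hat(f_{s,m}).

    A_hat : (N, Ns) spatial steering vectors selected from the first stage.
    """
    Nd = rho_d * K2
    fd_grid = np.linspace(-0.5, 0.5, Nd, endpoint=False)           # Doppler bins
    B = np.exp(1j * 2 * np.pi * np.outer(np.arange(K2), fd_grid))  # (K2, Nd) temporal vectors
    atoms = [np.kron(B[:, j], A_hat[:, i])
             for i in range(A_hat.shape[1]) for j in range(Nd)]
    return np.stack(atoms, axis=1)                                 # (N*K2, Ns*Nd)
```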
The angle-Doppler profile $\boldsymbol{\Psi}$ is obtained by solving the following optimization problem
$$\min_{\boldsymbol{\Psi}}\ \left\|\mathbf{Y} - \mathbf{D}\boldsymbol{\Psi}\right\|_F^2 \quad \mathrm{s.t.}\ \left\|\boldsymbol{\Psi}\right\|_{2,0} \le r_s \tag{31}$$
where $r_s \in \mathbb{R}^+$ denotes the degrees of clutter sparsity (DOSs). A convex relaxation of (31) is
$$\min_{\lambda,\boldsymbol{\Psi}}\ \left\|\mathbf{Y} - \mathbf{D}\boldsymbol{\Psi}\right\|_F^2 + \sum_{m=1}^{M}\lambda\left\|\boldsymbol{\Psi}_{m\cdot}\right\|_2 \tag{32}$$
From a Bayesian perspective, (32) is equivalent to maximum a posteriori (MAP) estimation with the prior probability density function (PDF) $p(\boldsymbol{\Psi}) \propto \exp\left(-\sum_{m=1}^{M}\left\|\boldsymbol{\Psi}_{m\cdot}\right\|_2\right)$ [43]. According to the measurement model in (30), we get the Gaussian likelihood function
$$p\left(\mathbf{Y}\,|\,\boldsymbol{\Psi};\sigma^2\right) = \left(\pi\sigma^2\right)^{-NKL}\exp\left(-\sigma^{-2}\left\|\mathbf{Y}-\mathbf{D}\boldsymbol{\Psi}\right\|_F^2\right) \tag{33}$$
Assuming that each column of $\boldsymbol{\Psi}$ obeys a complex Gaussian prior
$$\boldsymbol{\beta}_l \sim \mathcal{CN}\left(\mathbf{0}, \boldsymbol{\Gamma}\right) \tag{34}$$
where $\mathbf{0}$ is a zero vector, $\boldsymbol{\Gamma} = \mathrm{diag}(\boldsymbol{\zeta})$, and $\boldsymbol{\zeta} = \left[\zeta_1, \zeta_2, \ldots, \zeta_M\right]$ are the hyperparameters controlling the prior covariance of $\boldsymbol{\beta}_l$, whose values can be viewed as the powers of the clutter sources. Then the prior PDF of $\boldsymbol{\Psi}$ can be represented as
$$p\left(\boldsymbol{\Psi};\boldsymbol{\Gamma}\right) = \pi^{-ML}\left|\boldsymbol{\Gamma}\right|^{-L}\exp\left(-\sum_{l=1}^{L}\boldsymbol{\beta}_l^H\boldsymbol{\Gamma}^{-1}\boldsymbol{\beta}_l\right) \tag{35}$$
Combining the prior and the likelihood, we get the posterior PDF of $\boldsymbol{\Psi}$
$$p\left(\boldsymbol{\Psi}\,|\,\mathbf{Y};\boldsymbol{\Gamma},\sigma^2\right) = \frac{p\left(\mathbf{Y}\,|\,\boldsymbol{\Psi};\sigma^2\right)p\left(\boldsymbol{\Psi};\boldsymbol{\Gamma}\right)}{\int p\left(\mathbf{Y}\,|\,\boldsymbol{\Psi};\sigma^2\right)p\left(\boldsymbol{\Psi};\boldsymbol{\Gamma}\right)\mathrm{d}\boldsymbol{\Psi}} \tag{36}$$
Actually, the sparsity profile $\boldsymbol{\Psi}$ is estimated by the posterior mean $\boldsymbol{\mu}$, whose value is modulated by the hyperparameter vector $\boldsymbol{\zeta}$ and $\sigma^2$. Thus, the task of estimating $\boldsymbol{\Psi}$ is shifted to estimating the hyperparameter vector $\boldsymbol{\zeta}$ and $\sigma^2$, which can be effectively accomplished by an expectation maximization (EM) algorithm. The procedures of the EM algorithm are described as follows.
E step: According to (33) and (35), the joint PDF of $\mathbf{Y}$ and $\boldsymbol{\Psi}$ at the $(j+1)$th step is given by
$$\begin{aligned}
p\left(\mathbf{Y},\boldsymbol{\Psi}_{j+1};\boldsymbol{\Gamma}_j,\sigma_j^2\right) &= p\left(\mathbf{Y}\,|\,\boldsymbol{\Psi}_{j+1};\sigma_j^2\right)p\left(\boldsymbol{\Psi}_{j+1};\boldsymbol{\Gamma}_j\right) = \pi^{-(NK+M)L}\left|\boldsymbol{\Gamma}_j\right|^{-L}\left(\sigma_j^2\right)^{-NKL}\\
&\quad\cdot\exp\left(-\sum_{l=1}^{L}\left[\sigma_j^{-2}\left(\mathbf{y}_l-\mathbf{D}\boldsymbol{\beta}_{j+1}(l)\right)^H\left(\mathbf{y}_l-\mathbf{D}\boldsymbol{\beta}_{j+1}(l)\right)+\boldsymbol{\beta}_{j+1}^H(l)\boldsymbol{\Gamma}_j^{-1}\boldsymbol{\beta}_{j+1}(l)\right]\right)
\end{aligned} \tag{37}$$
Then, the marginal PDF of $\mathbf{Y}$ at the $(j+1)$th step is represented as
$$\begin{aligned}
p\left(\mathbf{Y};\boldsymbol{\Gamma}_j,\sigma_j^2\right) &= \int p\left(\mathbf{Y},\boldsymbol{\Psi}_{j+1};\boldsymbol{\Gamma}_j,\sigma_j^2\right)\mathrm{d}\boldsymbol{\Psi}_{j+1}\\
&= \pi^{-NKL}\left|\sigma_j^2\mathbf{I}+\mathbf{D}\boldsymbol{\Gamma}_j\mathbf{D}^H\right|^{-L}\exp\left(-\sum_{l=1}^{L}\mathbf{y}_l^H\left(\sigma_j^2\mathbf{I}+\mathbf{D}\boldsymbol{\Gamma}_j\mathbf{D}^H\right)^{-1}\mathbf{y}_l\right)
\end{aligned} \tag{38}$$
By combining (37) and (38), we get the posterior PDF of $\boldsymbol{\Psi}$ at the $(j+1)$th step
$$\begin{aligned}
p\left(\boldsymbol{\Psi}_{j+1}\,|\,\mathbf{Y};\boldsymbol{\Gamma}_j,\sigma_j^2\right) &= \frac{p\left(\mathbf{Y},\boldsymbol{\Psi}_{j+1};\boldsymbol{\Gamma}_j,\sigma_j^2\right)}{p\left(\mathbf{Y};\boldsymbol{\Gamma}_j,\sigma_j^2\right)}\\
&= \pi^{-ML}\left|\boldsymbol{\Sigma}_{j+1}\right|^{-L}\exp\left(-\sum_{l=1}^{L}\left(\boldsymbol{\beta}_{j+1}(l)-\boldsymbol{\mu}_{j+1}(l)\right)^H\boldsymbol{\Sigma}_{j+1}^{-1}\left(\boldsymbol{\beta}_{j+1}(l)-\boldsymbol{\mu}_{j+1}(l)\right)\right)
\end{aligned} \tag{39}$$
where $\boldsymbol{\mu}_{j+1}$ is the mean matrix and $\boldsymbol{\Sigma}_{j+1}$ is the covariance matrix, given by
$$\boldsymbol{\mu}_{j+1} = \boldsymbol{\Gamma}_j\mathbf{D}^H\left(\sigma_j^2\mathbf{I}+\mathbf{D}\boldsymbol{\Gamma}_j\mathbf{D}^H\right)^{-1}\mathbf{Y} \tag{40}$$
$$\boldsymbol{\Sigma}_{j+1} = \boldsymbol{\Gamma}_j - \boldsymbol{\Gamma}_j\mathbf{D}^H\left(\sigma_j^2\mathbf{I}+\mathbf{D}\boldsymbol{\Gamma}_j\mathbf{D}^H\right)^{-1}\mathbf{D}\boldsymbol{\Gamma}_j \tag{41}$$
M step: In the M step, we estimate $\boldsymbol{\zeta}_{j+1}$ and $\sigma_{j+1}^2$ by Type-II maximum likelihood [42], i.e.,
$$\left\{\boldsymbol{\zeta}_{j+1},\sigma_{j+1}^2\right\} = \arg\max_{\boldsymbol{\Gamma},\sigma^2}\ E\left\{\ln p\left(\mathbf{Y},\boldsymbol{\Psi}_{j+1};\boldsymbol{\Gamma},\sigma^2\right)\right\} \tag{42}$$
Because of decoupling [43], (42) can be divided into two optimization problems
$$\boldsymbol{\zeta}_{j+1} = \arg\max_{\boldsymbol{\Gamma}}\ E\left\{\ln p\left(\boldsymbol{\Psi}_{j+1};\boldsymbol{\Gamma}\right)\right\} \tag{43}$$
$$\sigma_{j+1}^2 = \arg\max_{\sigma^2}\ E\left\{\ln p\left(\mathbf{Y}\,|\,\boldsymbol{\Psi}_{j+1};\sigma^2\right)\right\} \tag{44}$$
Substituting (35) into (43) yields
$$\zeta_{m,j+1} = \frac{1}{L}\sum_{l=1}^{L}\left|\mu_{m,j+1}(l)\right|^2 + \Sigma_{m,j+1} \tag{45}$$
where $\mu_{m,j+1}(l)$ is the mth component of $\boldsymbol{\mu}_{j+1}(l)$, and $\Sigma_{m,j+1}$ is the mth element of the main diagonal of $\boldsymbol{\Sigma}_{j+1}$.
Substituting (33) into (44) yields
$$\sigma_{j+1}^2 = \frac{\frac{1}{L}\left\|\mathbf{Y}-\mathbf{D}\boldsymbol{\mu}_{j+1}\right\|_F^2 + \sigma_j^2\sum_{m=1}^{M}\left(1-\Sigma_{m,j+1}/\zeta_{m,j}\right)}{NK} \tag{46}$$
The iterations for updating $\boldsymbol{\zeta}$ and $\sigma^2$ end when a predetermined criterion is satisfied, such as $\left\|\boldsymbol{\zeta}_{j+1}-\boldsymbol{\zeta}_j\right\|_2 / \left\|\boldsymbol{\zeta}_j\right\|_2 \le \delta$, where $\delta$ is a sufficiently small positive threshold. Then, the CCM can be calculated by
$$\mathbf{R}_{c+n} = \frac{1}{L}\sum_{l=1}^{L}\sum_{m=1}^{M}\left|\beta_m(l)\right|^2\mathbf{s}_m\mathbf{s}_m^H + \alpha\sigma^2\mathbf{I}_{NK} \tag{47}$$
where $\alpha$ is a real constant. Based on the minimum variance distortionless response (MVDR) principle, we get the optimal STAP weight vector
$$\mathbf{w} = \frac{\mathbf{R}_{c+n}^{-1}\mathbf{s}_t}{\mathbf{s}_t^H\mathbf{R}_{c+n}^{-1}\mathbf{s}_t} \tag{48}$$
where $\mathbf{s}_t = \mathbf{b}(f_{d,t}) \otimes \bar{\mathbf{a}}(f_{s,t})$ is the target spatial-temporal steering vector with normalized Doppler frequency $f_{d,t}$ and normalized spatial frequency $f_{s,t}$.
The procedures of the second stage of the proposed method are summarized as follows; a code sketch is given after the steps:
Step 1: Construct the dictionary $\mathbf{D}$ using the spatial steering vectors obtained in the first stage, and set the initial values $\boldsymbol{\zeta}_0 = \mathbf{1}$, $\sigma_0^2 = 0.1$.
Step 2: Compute the mean matrix $\boldsymbol{\mu}_{j+1}$ and the covariance matrix $\boldsymbol{\Sigma}_{j+1}$ using (40) and (41).
Step 3: Update $\boldsymbol{\zeta}_{j+1}$ and $\sigma_{j+1}^2$ using (45) and (46).
Step 4: Repeat Steps 2 and 3 until the predetermined criterion is satisfied.
Step 5: Calculate the CCM $\mathbf{R}_{c+n}$ using (47), where $\boldsymbol{\Psi} \approx \boldsymbol{\mu}$.
Step 6: Compute the optimal STAP weight $\mathbf{w}$ using (48).
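A minimal sketch of Steps 1-6 is given below, following the EM updates (40), (41), (45) and (46) and the covariance reconstruction (47)-(48); the small floor on the hyperparameters and the iteration cap are practical safeguards of ours, not part of the paper.

```python
import numpy as np

def msbl_stap(Y, D, delta=1e-3, max_iter=200, alpha=1.0):
    """Second-stage MSBL-STAP sketch. Y : (NK, L) snapshots, D : (NK, M) dictionary."""
    NK, L = Y.shape
    M = D.shape[1]
    zeta = np.ones(M)                     # Step 1: zeta_0 = 1
    sigma2 = 0.1                          #         sigma_0^2 = 0.1
    for _ in range(max_iter):
        Gamma = np.diag(zeta)
        C = sigma2 * np.eye(NK) + D @ Gamma @ D.conj().T
        Cinv = np.linalg.inv(C)
        mu = Gamma @ D.conj().T @ Cinv @ Y                               # (40)
        Sigma = Gamma - Gamma @ D.conj().T @ Cinv @ D @ Gamma            # (41)
        diagS = np.real(np.diag(Sigma))
        zeta_new = np.mean(np.abs(mu) ** 2, axis=1) + diagS              # (45)
        sigma2 = (np.linalg.norm(Y - D @ mu, 'fro') ** 2 / L
                  + sigma2 * np.sum(1 - diagS / zeta)) / NK              # (46)
        if np.linalg.norm(zeta_new - zeta) / np.linalg.norm(zeta) <= delta:  # Step 4
            zeta = zeta_new
            break
        zeta = np.maximum(zeta_new, 1e-12)
    # Step 5: CCM (47) with Psi approximated by the posterior mean mu
    power = np.mean(np.abs(mu) ** 2, axis=1)
    R = (D * power) @ D.conj().T + alpha * sigma2 * np.eye(NK)
    return R

def mvdr_weight(R, s_t):
    """Step 6: MVDR STAP weight (48)."""
    Rinv_s = np.linalg.solve(R, s_t)
    return Rinv_s / (s_t.conj() @ Rinv_s)
```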

4. Numerical Experiments

In this section, numerical experiments are conducted to assess the performance of the proposed method. The radar system parameters are given in Table 1. In the first stage, a Chebyshev taper with a sidelobe level of −80 dB is used and the sharpening ratio $\kappa$ is equal to 6.4. In the second stage, the discretized grids are set to $N_s = 32$ and $N_d = 32$, i.e., $\rho_s = \rho_d = 4$; the number of training samples and the iteration termination threshold of the MSBL-STAP algorithm are set to 10 and $\delta = 0.001$, respectively. We use the signal to interference plus noise ratio (SINR) loss as a measure of clutter suppression performance, which is calculated as the ratio of the output SINR to the signal to noise ratio (SNR) obtained by a matched filter in a noise-only environment, i.e.,
$$L_{SINR} = \frac{\sigma^2}{NK}\frac{\left|\mathbf{w}^H\mathbf{s}_t\right|^2}{\mathbf{w}^H\mathbf{R}\mathbf{w}} \tag{49}$$
where $\mathbf{w}$ is the STAP weight vector and $\mathbf{R}$ is the known CCM. We also evaluate the target detection performance by the probability of detection (PD) versus SNR curves, which are obtained with the adaptive matched filter (AMF) detector [51]. The probability of false alarm (PFA) is set to $10^{-3}$, the target is assumed to be in the main beam direction with normalized Doppler frequency 0.1, and the detection threshold and probability of detection estimates are based on $10^4$ samples. Besides, all the SINR loss results are averaged over 100 Monte Carlo runs and all the PD versus SNR curves are averaged over 1000 Monte Carlo trials.
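For reference, the SINR loss metric (49) can be evaluated as in the short helper below (a sketch of ours, assuming the known clutter plus noise covariance matrix $\mathbf{R}$ and noise power $\sigma^2$ are available from the simulation):

```python
import numpy as np

def sinr_loss_db(w, s_t, R, sigma2):
    """SINR loss (49): output SINR normalized by the matched-filter SNR (gain N*K)
    in a noise-only environment, returned in dB."""
    NK = len(s_t)
    loss = (sigma2 / NK) * np.abs(w.conj() @ s_t) ** 2 / np.real(w.conj() @ R @ w)
    return 10 * np.log10(loss)
```

Sweeping the target normalized Doppler frequency and recomputing $\mathbf{s}_t$ and $\mathbf{w}$ for each bin produces SINR-loss curves like those in Figure 5a.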
To demonstrate the performance of the proposed two-stage SR-STAP method in the presence of array errors, each perturbation is first considered separately, then their combined effects are demonstrated, and finally we also measure the effect of ICM on our two-stage SR-STAP method. We consider four cases in the simulation: (1) use the true spatial steering vectors with array errors to construct the space-time dictionary and perform MSBL-STAP, which is called TSV-MSBL; (2) use the estimated spatial steering vectors obtained from a single range gate to construct the space-time dictionary and perform MSBL-STAP, which is called SESV-MSBL; (3) use the estimated spatial steering vectors obtained from multiple range gates to construct the space-time dictionary and perform MSBL-STAP, which is called MESV-MSBL; (4) use the ideal spatial steering vectors without array errors to construct the space-time dictionary and perform MSBL-STAP, which is called ISV-MSBL.

4.1. Gain and Phase Errors

In this experiment, we verify the performance of the proposed two-stage SR-STAP method in the presence of gain and phase errors. $\mathbf{G}_{gain} = \mathrm{diag}([g_1, g_2, \ldots, g_N])$ is the error matrix, where $g_1 = 1$ and $g_i = (1+\Delta\alpha_i)e^{j\Delta\varphi_i}\ (i = 2, \ldots, N)$ is the gain and phase error of the ith element, with $\Delta\alpha_i$ and $\Delta\varphi_i$ following uniform distributions within $[-0.1, 0.1]$ and $[-10^\circ, 10^\circ]$, respectively [47]. Then, we can get the following equation
$$\hat{\mathbf{A}} = \hat{\mathbf{G}}_{gain}\mathbf{A} \tag{50}$$
where $\hat{\mathbf{A}} = \left[\hat{\mathbf{a}}(f_{s,1}), \hat{\mathbf{a}}(f_{s,2}), \ldots, \hat{\mathbf{a}}(f_{s,K_1})\right]$ is the matrix whose columns are the $K_1$ estimated spatial steering vectors with array errors in the first stage, $\mathbf{A} = \left[\bar{\mathbf{a}}(f_{s,1}), \bar{\mathbf{a}}(f_{s,2}), \ldots, \bar{\mathbf{a}}(f_{s,K_1})\right]$ is the matrix whose columns are the $K_1$ ideal spatial steering vectors without array errors, and $\hat{\mathbf{G}}_{gain}$ is the estimate of $\mathbf{G}_{gain}$. The least squares (LS) solution for $\hat{\mathbf{G}}_{gain}$ is given by
$$\hat{\mathbf{G}}_{gain} = \hat{\mathbf{A}}\mathbf{A}^H\left(\mathbf{A}\mathbf{A}^H\right)^{-1} \tag{51}$$
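A direct implementation of the LS calibration (51) is sketched below. In practice each estimated steering vector may first need to be normalized (e.g., so that its first element equals 1) to remove the arbitrary complex scale of the dominant eigenvector; for pure gain/phase errors the diagonal of the result gives the per-element estimates reported in Table 2. This normalization step is our reading, not stated explicitly in the text.

```python
import numpy as np

def ls_gain_phase_estimate(A_hat, A_bar):
    """LS estimate (51) of G_gain from the estimated (A_hat) and ideal (A_bar)
    spatial steering vectors, both of size N x K1."""
    A_hat = A_hat / A_hat[0, :]      # assumed: first element is the reference (g_1 = 1)
    G = A_hat @ A_bar.conj().T @ np.linalg.inv(A_bar @ A_bar.conj().T)
    return G                          # for pure gain/phase errors, G is (nearly) diagonal
```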
To show the performance loss of the proposed method under varying levels of amplitude and phase errors, twenty-one levels of amplitude and phase errors are defined as "level 1", "level 2", "level 3", ⋯, "level 20" and "level 21", corresponding to uniform distributions of (0, 0)/(0°, 0°), (−0.01, 0.01)/(−1°, 1°), (−0.02, 0.02)/(−2°, 2°), ⋯, (−0.19, 0.19)/(−19°, 19°) and (−0.2, 0.2)/(−20°, 20°), respectively. Figure 4 plots the average SINR loss versus the amplitude and phase error level; as shown in Figure 4, the higher the amplitude and phase error level, the more severe the performance loss of the proposed method. In general, a performance loss of less than 3 dB is acceptable. In Figure 4, the black dotted line with a square mark denotes a threshold value, which means that the average SINR loss is decreased by 3 dB compared with the OPT. From Figure 4, we can observe that even slight amplitude and phase errors cause a severe performance loss; specifically, the performance loss of the proposed two-stage method is greater than 3 dB when the amplitude and phase error level is greater than 2. Thus, we can say that when the error level is greater than 2, the performance of the proposed method is significantly deteriorated.
The gain and phase errors estimated by the proposed method are presented in Table 2; the estimated values are very close to the true ones.
The SINR loss curves in the presence of gain and phase errors are given in Figure 5a. As shown in Figure 5a, due to the steering vector mismatch between the dictionary and the clutter data, the clutter suppression performance of the ISV-MSBL method is much poorer than that of the TSV-MSBL method. By comparing the SINR loss curves of ISV-MSBL, SESV-MSBL, MESV-MSBL and the OPT, it is observed that the MESV-MSBL method achieves performance comparable to the OPT, which is better than that of the SESV-MSBL method and much better than that of the ISV-MSBL method. The results demonstrate that the gain and phase errors can be well calibrated by the developed steering vector estimation method. In the sidelobe region ($f_d = 0.4$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 14.1 dB and 16.3 dB, respectively. In the mainlobe region ($f_d = 0.1$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 8.5 dB and 15.8 dB, respectively. Whether in the mainlobe region or in the sidelobe region, the clutter suppression performance of the proposed method is significantly improved. The PD versus SNR curves in the presence of gain and phase errors are given in Figure 5b. As depicted in Figure 5b, the target detection performance of the MESV-MSBL method is close to the optimal performance, which is better than that of the SESV-MSBL method and much better than that of the ISV-MSBL method. Compared with the ISV-MSBL method, the slow-moving target detection performance of the proposed SESV-MSBL and MESV-MSBL methods is significantly improved.

4.2. Mutual Coupling

In this experiment, we verify the performance of the proposed two-stage SR-STAP method in the presence of mutual coupling. We assume that the mutual coupling can be ignored when the element spacing is greater than 1.5 wavelengths, which means that $q = 3$. We set the non-zero mutual coupling coefficients to 1, $0.1250 + 0.2165j$ and $0.0866 - 0.0500j$, respectively [49]. Following the same principle as the estimation of $\mathbf{G}_{gain}$, we can also obtain the estimate of the mutual coupling matrix $\mathbf{G}_{mutual}$ by Equation (51).
The mutual coupling coefficients estimated by the proposed method are presented in Table 3; the estimated values are also very close to the true ones.
The SINR loss curves in the presence of mutual coupling are depicted in Figure 6a, and the PD versus SNR curves in the presence of mutual coupling are depicted in Figure 6b. The superior clutter suppression performance and target detection performance of the proposed method are demonstrated. In the sidelobe region ($f_d = 0.4$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 19.3 dB and 21.7 dB, respectively. In the mainlobe region ($f_d = 0.1$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 17.2 dB and 27.6 dB, respectively.

4.3. Sensor Location Errors

In this experiment, we verify the performance of the proposed two-stage SR-STAP method in the presence of sensor location errors. $\boldsymbol{\Delta}_{PE} = \mathrm{diag}\left([\Delta_0, \Delta_1, \ldots, \Delta_{N-1}]\right)$ is the position error matrix, where $\Delta_0 = 0$ and $\Delta_{i-1}\ (i = 2, \ldots, N)$ is the position error of the ith element, which follows a uniform distribution within $[-0.1d, 0.1d]$ [52]. We can also utilize the $K_1$ estimated spatial steering vectors $\hat{\mathbf{a}}(f_{s,1}), \hat{\mathbf{a}}(f_{s,2}), \ldots, \hat{\mathbf{a}}(f_{s,K_1})$ and the $K_1$ ideal spatial steering vectors $\bar{\mathbf{a}}(f_{s,1}), \bar{\mathbf{a}}(f_{s,2}), \ldots, \bar{\mathbf{a}}(f_{s,K_1})$ to estimate $\Delta_{i-1}$, given by
$$\hat{\Delta}_{i-1} = \frac{1}{K_1}\sum_{k=1}^{K_1}\frac{\lambda\,\mathrm{angle}\!\left(\hat{a}_i(f_{s,k})\big/\bar{a}_i(f_{s,k})\right)}{2\pi\cos\phi_k} \tag{52}$$
where $\hat{a}_i(f_{s,k})$ and $\bar{a}_i(f_{s,k})$ are the ith elements of $\hat{\mathbf{a}}(f_{s,k})$ and $\bar{\mathbf{a}}(f_{s,k})$, respectively, and $\phi_k$ is the spatial cone angle corresponding to the center frequency of the kth Doppler filter.
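A minimal sketch of (52) follows, assuming the estimated steering vectors have been normalized so that their first element equals 1 (removing the arbitrary phase of the dominant eigenvector) and that only location errors are present, as in this experiment; the function name is ours.

```python
import numpy as np

def estimate_location_errors(A_hat, A_bar, phi, lam):
    """Estimate the sensor position offsets via (52).

    A_hat, A_bar : (N, K1) estimated and ideal spatial steering vectors.
    phi : (K1,) spatial cone angles of the Doppler-filter centers.
    """
    phase = np.angle(A_hat / A_bar)                       # per-element phase difference
    return np.mean(lam * phase / (2 * np.pi * np.cos(phi)), axis=1)
```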
The sensor location errors estimated by the proposed method are presented in Table 4; the estimated values match the true ones well.
The SINR loss curves in the presence of sensor location errors are depicted in Figure 7a, and the PD versus SNR curves in the presence of sensor location errors are depicted in Figure 7b. It is observed that the clutter suppression performance and the target detection performance of the proposed method are significantly improved when sensor location errors are present. In the sidelobe region ($f_d = 0.4$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 15.5 dB and 18.2 dB, respectively. In the mainlobe region ($f_d = 0.1$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 15.9 dB and 26.2 dB, respectively.

4.4. Arbitrary Array Errors

In this experiment, we model the arbitrary array errors as the combined effects of gain and phase errors, mutual coupling and sensor location errors; then, the performance of the proposed method in the presence of arbitrary array errors is demonstrated. The specific values of the array errors are the same as those in Section 4.1, Section 4.2 and Section 4.3.
The amplitudes and interferometric phases of all $K_1$ estimated spatial steering vectors with array errors in the first stage are given in Figure 8a,c, respectively. The amplitudes and interferometric phases of the $K_1$ true spatial steering vectors with array errors are given in Figure 8b,d, respectively. From Figure 8a,b, we can observe that the amplitudes of the estimated spatial steering vectors with array errors are close to those of the true spatial steering vectors. From Figure 8c,d, we can also observe that the interferometric phases of the estimated spatial steering vectors with array errors are very close to those of the true spatial steering vectors. Thus, we can say that the amplitude differences and the phase differences between the estimated and true spatial steering vectors of all Doppler bins are very small, i.e., a set of $K_1$ spatial steering vectors can be well estimated in the first stage of our two-stage SR-STAP method in the presence of arbitrary array errors.
Figure 9a,b show the amplitude differences and the phase differences between the estimated and true spatial steering vectors of all Doppler bins, respectively. As depicted in Figure 9, the amplitude differences and the phase differences are very small, i.e., the estimated spatial steering vectors are very close to the true spatial steering vectors. Thus, when arbitrary array errors are present, a set of spatial steering vectors can be well estimated in the first stage of our two-stage method. The results intuitively demonstrate the superior steering vector estimation performance of the proposed method. For clarity, Figure 10 shows the amplitudes and phases of the ideal steering vector, the true steering vector and the estimated steering vector of the 75th Doppler bin; the results indicate that the estimated steering vector is much closer to the true steering vector than the ideal steering vector, and the true steering vector can be well approximated by the estimated steering vector in the presence of arbitrary array errors.
The SINR loss curves and the PD versus SNR curves in the presence of arbitrary array errors are depicted in Figure 11a,b, respectively. From Figure 11, it is observed that the ISV-MSBL method suffers a severe performance degradation when arbitrary array errors are present. However, compared with the ISV-MSBL method, the clutter suppression performance and the target detection performance of the proposed SESV-MSBL and MESV-MSBL methods are significantly improved, and the MESV-MSBL method obtains performance comparable to the OPT. The reason is that the array errors are well calibrated by the developed steering vector estimation method, and thereby the mismatch problem between the clutter data and the space-time dictionary is well solved. The results further validate the superior performance of the proposed method. In the sidelobe region ($f_d = 0.4$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 20.5 dB and 23.8 dB, respectively. In the mainlobe region ($f_d = 0.1$), compared with the ISV-MSBL method, the output SINRs of the proposed SESV-MSBL and MESV-MSBL methods are increased by about 23.6 dB and 30.7 dB, respectively.
To better illustrate the advantage of the proposed method, Figure 12a–d plot the clutter Capon spectra of different STAP methods. From Figure 12a–c, we can observe that the spectrum of the MESV-MSBL method is the closest to the optimal spectrum, with little clutter power leakage, and the spectrum of the SESV-MSBL method is close to the optimal spectrum, with some clutter power leakage and a slight spectrum expansion. However, from Figure 12d, we can observe that the spectrum of the ISV-MSBL method has severe clutter power leakage and spectrum expansion. The reason is that if array calibration is not performed, the steering vector mismatch between the clutter data and the space-time dictionary prevents the clutter spectrum from being well estimated; thus, the clutter suppression performance and the slow-moving target detection performance of SR-STAP methods degrade significantly, because the adaptive pattern cannot suppress the clutter and protect the target well due to the widened or incorrect notches. That is why we must perform array calibration when applying sparse recovery techniques to STAP.
Figure 13 plots the average SINR loss versus the number of training samples used in the first stage. From Figure 13, we can see that when the number of training samples used in the first stage is larger than 100, the spatial steering vectors with array errors can be well estimated, and the MESV-MSBL method acquires performance comparable to the TSV-MSBL method and the OPT.
In Figure 14, we compare the clutter suppression performance of the proposed SESV-MSBL and MESV-MSBL methods with that of the AGPE-SR-STAP method [47], the MSB-SR-STAP method [49] and the IAD-SR-STAP method [48]. From Figure 14, we can observe that the SESV-MSBL and MESV-MSBL methods have narrower notches than the other STAP methods. The reason is that the AGPE-SR-STAP and IAD-SR-STAP methods are only suitable for gain/phase calibration and the MSB-SR-STAP method is only suitable for mutual coupling calibration. Thus, in the presence of arbitrary array errors, these methods are no longer effective.
Finally, we give two experiments to show how variations in the values of the system parameters in Table 1 affect the performance of the proposed method. Figure 15a plots the SINR loss curves of the MESV-MSBL method under different pulse numbers in a CPI in the first stage of the proposed two-stage SR-STAP method. From Figure 15a, we can observe that the more pulses in a CPI, the better the clutter suppression performance of the proposed MESV-MSBL method. The reason is that when the number of pulses $K_1$ in a CPI increases, the sharpening ratio $\kappa$ given in (20) becomes larger and the width of the spatial frequency passband $\Delta$ given in (19) becomes smaller; as a result, the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ given in (21) becomes larger. In other words, as the number of pulses $K_1$ increases, the spatial steering vectors are estimated more and more accurately in the first stage of our two-stage SR-STAP method, so the clutter suppression performance of the proposed MESV-MSBL method keeps improving. Thus, we can conclude that the system parameters determine the value of the sharpening ratio $\kappa$, and the value of the sharpening ratio $\kappa$ determines the clutter suppression performance of the proposed two-stage SR-STAP method. The greater the sharpening ratio $\kappa$, the better the clutter suppression performance of the proposed method. In general, as long as the system parameters guarantee that the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ given in (21) is greater than 0.95, the proposed two-stage SR-STAP method can obtain superior clutter suppression performance. To further confirm this conclusion, Figure 15b plots the SINR loss curves of the MESV-MSBL method under different platform velocities and pulse repetition frequencies. From Figure 15b, we can observe that when $v_p = 150\ \mathrm{m/s}$ and $f_{PRF} = 2000\ \mathrm{Hz}$, the proposed MESV-MSBL method achieves superior clutter suppression performance because the sharpening ratio $\kappa$ is high in this case. In addition, when $v_p = 120\ \mathrm{m/s}$ and $f_{PRF} = 1600\ \mathrm{Hz}$, although the system parameters have changed, the sharpening ratio $\kappa$ has not; thus, the proposed MESV-MSBL method still achieves superior clutter suppression performance. However, for $v_p = 120\ \mathrm{m/s}$, $f_{PRF} = 2000\ \mathrm{Hz}$ and $v_p = 20\ \mathrm{m/s}$, $f_{PRF} = 2000\ \mathrm{Hz}$, the clutter suppression performance of the proposed MESV-MSBL method becomes progressively worse because the sharpening ratio $\kappa$ becomes smaller and smaller.

4.5. Arbitrary Array Errors and Intrinsic Clutter Motion

In this experiment, we verify the performance of the proposed two-stage SR-STAP method in the presence of arbitrary array errors and ICM. As in Section 4.4, we model the arbitrary array errors as the combined effects of gain and phase errors, mutual coupling and sensor location errors. In this experiment, we only consider the ICM in the first stage of our two-stage method, i.e., we only measure the effect of ICM on estimating the spatial steering vectors in the first stage, without considering the clutter spectrum expansion problem caused by ICM in the second stage. In fact, this problem can be effectively handled by the covariance matrix taper (CMT) approach; the interested reader is referred to [53,54] for further details. The ICM model is given in [1]; the temporal autocorrelation of the fluctuations is Gaussian in shape
$$\gamma(k) = \exp\left(-\frac{8\pi^2\sigma_v^2 T_r^2}{\lambda^2}k^2\right) \tag{53}$$
where $\sigma_v$ is the velocity standard deviation and $T_r = 1/f_{PRF}$ is the pulse repetition interval.
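For completeness, the Gaussian autocorrelation (53) and a corresponding covariance matrix taper of the kind mentioned above can be generated as follows (a sketch under the CMT formulation of [53,54]; the parameter values are examples of ours):

```python
import numpy as np
from scipy.linalg import toeplitz

def icm_autocorrelation(K, sigma_v, fprf, lam):
    """Gaussian ICM temporal autocorrelation (53) for pulse lags 0..K-1."""
    Tr = 1.0 / fprf
    k = np.arange(K)
    return np.exp(-8 * np.pi ** 2 * sigma_v ** 2 * Tr ** 2 * k ** 2 / lam ** 2)

gamma = icm_autocorrelation(K=8, sigma_v=0.5, fprf=2000.0, lam=0.30)
T_temporal = toeplitz(gamma)                          # K x K temporal taper
T_spacetime = np.kron(T_temporal, np.ones((8, 8)))    # space-time taper for an N = 8 ULA
```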
The SINR loss curves of different methods in the presence of arbitrary array errors and ICM are depicted in Figure 16. As shown in Figure 16a, when $\sigma_v = 0.5\ \mathrm{m/s}$, because the broadening of the Doppler spectrum caused by ICM is much smaller than the DFPW of the heavily tapered Doppler filter, the proposed two-stage method still achieves superior performance. Specifically, in this experiment, when $\sigma_v = 0.5\ \mathrm{m/s}$, the width of the Doppler spectrum is $D_b = 2\sigma_v/(\lambda f_{PRF}) = 1/300$, and a reasonable DFPW value is $5/K_1$, i.e., $D_w = 5/256$; thus, the inequality $D_b \ll D_w$ holds and we can say that the broadening of the Doppler spectrum has little effect on estimating the spatial steering vectors. However, when $\sigma_v = 3\ \mathrm{m/s}$, the width of the Doppler spectrum is $D_b = 2\sigma_v/(\lambda f_{PRF}) = 6/300$ and the inequality $D_b \ll D_w$ no longer holds; thus, the broadening of the Doppler spectrum has some adverse effect on estimating the spatial steering vectors. As depicted in Figure 16b, when $\sigma_v = 3\ \mathrm{m/s}$, due to the severe temporal fluctuations, the notches of the proposed SESV-MSBL and MESV-MSBL methods spread. However, compared with the other SR-STAP methods, the proposed two-stage method still achieves better performance.
The existence of intrinsic clutter motion deteriorates the performance of the proposed two-stage SR-STAP method: the more severe the intrinsic clutter motion, the less accurate the estimation of the spatial steering vectors in the first stage of our two-stage method, and thereby the worse the algorithm performance. In general, a performance loss of less than 3 dB is acceptable. Figure 17 plots the average SINR loss versus the velocity standard deviation. As shown in Figure 17, when the velocity standard deviation is small, the proposed two-stage method still obtains near-optimal performance; however, when the velocity standard deviation becomes larger, the performance of the proposed method degrades due to the more severe pulse-to-pulse fluctuations. In Figure 17, the black dotted line with a square mark denotes a threshold value, which means that the average SINR loss is decreased by 3 dB compared with the OPT. From Figure 17, we can observe that the performance loss of the proposed two-stage method is less than 3 dB when the velocity standard deviation is less than 3.1 m/s. For land clutter, in some areas, such as rural and urban areas, the velocity standard deviation is usually very small; even in wooded terrain, it is generally less than 1 m/s [55]. From Figure 17, we can observe that the performance loss of the proposed two-stage method is less than 1 dB when the velocity standard deviation is less than 1 m/s. Therefore, the performance of the proposed two-stage method is satisfactory when the velocity standard deviation is less than 1 m/s. Thus, the proposed method is still effective for ground clutter suppression and ground moving target detection in the presence of arbitrary array errors and small ICM.

5. Conclusions

The model mismatch caused by array errors drastically degrades the clutter suppression performance and the target detection performance of SR-STAP methods. To solve this problem, a new two-stage SR-STAP method is proposed in this paper. In our two-stage SR-STAP method, firstly, based on the spatial-temporal coupling property of the ground clutter, we obtain a set of spatial steering vectors with array errors by fine Doppler localization; then, to solve the model mismatch problem caused by array errors, we directly use these spatial steering vectors to construct the space-time dictionary; finally, the constructed space-time dictionary and the MSBL algorithm are combined for space-time adaptive processing. The simulation results demonstrate that variations in the system parameters affect the performance of the proposed two-stage SR-STAP method: the system parameters determine the value of the sharpening ratio $\kappa$, and the value of the sharpening ratio $\kappa$ determines the performance of the proposed two-stage SR-STAP method. The greater the sharpening ratio $\kappa$, the better the clutter suppression and target detection performance of the proposed method. In general, as long as the system parameters guarantee that the correlation coefficient of $\mathbf{a}(f_{s,i})$ and $\mathbf{a}(f_{s,i}+\Delta/2)$ given in (21) is greater than 0.95, the proposed two-stage SR-STAP method can obtain favorable performance. In addition, the simulation results, obtained with the reasonable system parameters listed in Table 1, demonstrate that the spatial steering vectors with array errors can be well estimated in the first stage of our two-stage SR-STAP method when arbitrary array errors and small ICM are present, and that the proposed method achieves superior clutter suppression performance and target detection performance in the presence of arbitrary array errors.

Author Contributions

Conceptualization, K.L. and T.W.; investigation, J.C.; methodology, K.L. and J.W.; project administration, T.W.; software, K.L.; supervision, J.W.; visualization, K.L.; writing—original draft, K.L.; writing—review & editing, J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key R&D Program of China, grant number 2021YFA1000400.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ward, J. Space-Time Adaptive Processing for Airborne Radar; Technical Report; MIT Lincoln Laboratory: Lexington, MA, USA, 1998. [Google Scholar]
  2. Klemm, R. Principles of Space-Time Adaptive Processing; The Institution of Electrical Engineers: London, UK, 2002. [Google Scholar]
  3. Brennan, L.E.; Reed, L. Theory of adaptive radar. IEEE Trans. Aerosp. Electron. Syst. 1973, AES-9, 237–252. [Google Scholar] [CrossRef]
  4. Guerci, J.R. Space-Time Adaptive Processing for Radar; Artech House: Boston, MA, USA, 2014. [Google Scholar]
  5. Aboutanios, E.; Mulgrew, B. Hybrid detection approach for STAP in heterogeneous clutter. IEEE Trans. Aerosp. Electron. Syst. 2010, 46, 1021–1033. [Google Scholar] [CrossRef]
  6. Xiao, H.; Wang, T.; Zhang, S.; Wen, C. A robust refined training sample reweighting space-time adaptive processing method for airborne radar in heterogeneous environment. IET Radar Sonar Navig. 2021, 15, 310–322. [Google Scholar] [CrossRef]
  7. Zhu, S.; Liao, G.; Xu, J.; Huang, L.; So, H.C. Robust STAP based on magnitude and phase constrained iterative optimization. IEEE Sens. J. 2019, 19, 8650–8656. [Google Scholar] [CrossRef]
  8. Xiao, H.; Wang, T.; Wen, C.; Ren, B. A generalised eigenvalue reweighting covariance matrix estimation algorithm for airborne STAP radar in complex environment. IET Radar Sonar Navig. 2021, 15, 1309–1324. [Google Scholar] [CrossRef]
  9. Reed, I.S.; Mallett, J.D.; Brennan, L.E. Rapid convergence rate in adaptive arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, AES-10, 853–863. [Google Scholar] [CrossRef]
  10. Klemm, R. Adaptive airborne MTI: An auxiliary channel approach. IEE Proc. F Commun. Radar Signal Process. 1987, 134, 269–276. [Google Scholar] [CrossRef]
  11. DiPietro, R.C. Extended factored space-time processing for airborne radar systems. In Proceedings of the Twenty-Sixth Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA, 26–28 October 1992; pp. 425–426. [Google Scholar]
  12. Wang, H.; Cai, L. On adaptive spatial-temporal processing for airborne surveillance radar systems. IEEE Trans. Aerosp. Electron. Syst. 1994, 30, 660–670. [Google Scholar] [CrossRef]
  13. Tong, Y.; Wang, T.; Wu, J. Improving EFA-STAP performance using persymmetric covariance matrix estimation. IEEE Trans. Aerosp. Electron. Syst. 2015, 51, 924–936. [Google Scholar] [CrossRef]
  14. Zhang, W.; He, Z.; Li, J.; Liu, H.; Sun, Y. A method for finding best channels in beam-space post-Doppler reduced-dimension STAP. IEEE Trans. Aerosp. Electron. Syst. 2014, 50, 254–264. [Google Scholar] [CrossRef]
  15. Xie, L.; He, Z.; Tong, J.; Zhang, W. A recursive angle-Doppler channel selection method for reduced-dimension space-time adaptive processing. IEEE Trans. Aerosp. Electron. Syst. 2020, 56, 3985–4000. [Google Scholar] [CrossRef]
  16. Shi, J.; Xie, L.; Cheng, Z.; He, Z.; Zhang, W. Angle-Doppler Channel Selection Method for Reduced-Dimension STAP based on Sequential Convex Programming. IEEE Commun. Lett. 2021, 25, 3080–3084. [Google Scholar] [CrossRef]
  17. Haimovich, A. The eigencanceler: Adaptive radar by eigenanalysis methods. IEEE Trans. Aerosp. Electron. Syst. 1996, 32, 532–542. [Google Scholar] [CrossRef]
  18. Haimovich, A. Asymptotic distribution of the conditional signal-to-noise ratio in an eigenanalysis-based adaptive array. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 988–997. [Google Scholar]
  19. Goldstein, J.S.; Reed, I.S. Subspace selection for partially adaptive sensor array processing. IEEE Trans. Aerosp. Electron. Syst. 1997, 33, 539–544. [Google Scholar] [CrossRef]
  20. Goldstein, J.S.; Reed, I.S.; Zulch, P.A. Multistage partially adaptive STAP CFAR detection algorithm. IEEE Trans. Aerosp. Electron. Syst. 1999, 35, 645–661. [Google Scholar] [CrossRef]
  21. Wang, X.; Aboutanios, E.; Amin, M.G. Reduced-rank STAP for slow-moving target detection by antenna-pulse selection. IEEE Signal Process. Lett. 2015, 22, 1156–1160. [Google Scholar] [CrossRef]
  22. Roman, J.R.; Rangaswamy, M.; Davis, D.W.; Zhang, Q.; Himed, B.; Michels, J.H. Parametric adaptive matched filter for airborne radar applications. IEEE Trans. Aerosp. Electron. Syst. 2000, 36, 677–692. [Google Scholar] [CrossRef]
  23. Wang, P.; Li, H.; Himed, B. Parametric Rao tests for multichannel adaptive detection in partially homogeneous environment. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1850–1862. [Google Scholar] [CrossRef]
  24. Sarkar, T.K.; Wang, H.; Park, S.; Adve, R.; Koh, J.; Kim, K.; Zhang, Y.; Wicks, M.C.; Brown, R.D. A deterministic least-squares approach to space-time adaptive processing (STAP). IEEE Trans. Antennas Propag. 2001, 49, 91–103. [Google Scholar] [CrossRef] [Green Version]
  25. Cristallini, D.; Burger, W. A robust direct data domain approach for STAP. IEEE Trans. Signal Process. 2011, 60, 1283–1294. [Google Scholar] [CrossRef]
  26. Stoica, P.; Li, J.; Zhu, X.; Guerci, J.R. On using a priori knowledge in space-time adaptive processing. IEEE Trans. Signal Process. 2008, 56, 2598–2602. [Google Scholar] [CrossRef]
  27. Zhu, X.; Li, J.; Stoica, P. Knowledge-aided space-time adaptive processing. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1325–1336. [Google Scholar] [CrossRef]
  28. Riedl, M.; Potter, L.C. Multimodel shrinkage for knowledge-aided space-time adaptive processing. IEEE Trans. Aerosp. Electron. Syst. 2018, 54, 2601–2610. [Google Scholar] [CrossRef]
  29. Liu, M.; Zou, L.; Yu, X.; Zhou, Y.; Wang, X.; Tang, B. Knowledge aided covariance matrix estimation via Gaussian kernel function for airborne SR-STAP. IEEE Access 2020, 8, 5970–5978. [Google Scholar] [CrossRef]
  30. Tao, F.; Wang, T.; Wu, J.; Lin, X. A novel KA-STAP method based on Mahalanobis distance metric learning. Digital Signal Process. 2020, 97, 102613. [Google Scholar] [CrossRef]
  31. Sun, K.; Meng, H.; Wang, Y.; Wang, X. Direct data domain STAP using sparse representation of clutter spectrum. Signal Process. 2011, 91, 2222–2236. [Google Scholar] [CrossRef] [Green Version]
  32. Yang, Z.; Li, X.; Wang, H.; Jiang, W. On clutter sparsity analysis in space–time adaptive processing airborne radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218. [Google Scholar] [CrossRef]
  33. Duan, K.; Yuan, H.; Xu, H.; Liu, W.; Wang, Y. Sparsity-based non-stationary clutter suppression technique for airborne radar. IEEE Access 2018, 6, 56162–56169. [Google Scholar] [CrossRef]
  34. Yang, Z.; Wang, Z.; Liu, W.; de Lamare, R.C. Reduced-dimension space-time adaptive processing with sparse constraints on beam-Doppler selection. Signal Process. 2019, 157, 78–87. [Google Scholar] [CrossRef]
  35. Zhang, W.; An, R.; He, N.; He, Z.; Li, H. Reduced dimension STAP based on sparse recovery in heterogeneous clutter environments. IEEE Trans. Aerosp. Electron. Syst. 2019, 56, 785–795. [Google Scholar] [CrossRef]
  36. Wang, X.; Yang, Z.; Huang, J.; de Lamare, R.C. Robust two-stage reduced-dimension sparsity-aware STAP for airborne radar with coprime arrays. IEEE Trans. Signal Process. 2019, 68, 81–96. [Google Scholar] [CrossRef]
  37. Li, Z.; Wang, T. ADMM-Based Low-Complexity Off-Grid Space-Time Adaptive Processing Methods. IEEE Access 2020, 8, 206646–206658. [Google Scholar] [CrossRef]
  38. Su, Y.; Wang, T.; Tao, F.; Li, Z. A Grid-Less Total Variation Minimization-Based Space-Time Adaptive Processing for Airborne Radar. IEEE Access 2020, 8, 29334–29343. [Google Scholar] [CrossRef]
  39. Li, Z.; Wang, T.; Su, Y. A fast and gridless STAP algorithm based on mixed-norm minimisation and the alternating direction method of multipliers. IET Radar Sonar Navig. 2021. [Google Scholar] [CrossRef]
  40. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef] [Green Version]
  41. Yang, A.Y.; Sastry, S.S.; Ganesh, A.; Ma, Y. Fast Solution of L1-Norm Minimization Problems When the Solution May Be Sparse. IEEE Trans. Inf. Theory 2008, 54, 4789–4812. [Google Scholar]
  42. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244. [Google Scholar]
  43. Wipf, D.P.; Rao, B.D. Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process. 2004, 52, 2153–2164. [Google Scholar] [CrossRef]
  44. Wipf, D.P.; Rao, B.D. An empirical Bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Process. 2007, 55, 3704–3716. [Google Scholar] [CrossRef]
  45. Duan, K.; Wang, Z.; Xie, W.; Chen, H.; Wang, Y. Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553. [Google Scholar] [CrossRef]
  46. Wang, Z.; Xie, W.; Duan, K.; Wang, Y. Clutter suppression algorithm based on fast converging sparse Bayesian learning for airborne radar. Signal Process. 2017, 130, 159–168. [Google Scholar] [CrossRef]
  47. Zhu, Y.; Yang, Z.; Huang, J. Sparsity-based space-time adaptive processing considering array gain/phase error. In Proceedings of the CIE International Conference on Radar (RADAR), Guangzhou, China, 10–13 October 2016; pp. 1–4. [Google Scholar]
  48. Ma, Z.; Liu, Y.; Meng, H.; Wang, X. Sparse recovery-based space-time adaptive processing with array error self-calibration. Electron. Lett. 2014, 50, 952–954. [Google Scholar] [CrossRef]
  49. Li, Z.; Guo, Y.; Zhang, Y.; Zhou, H.; Zheng, G. Sparse Bayesian learning based space-time adaptive processing against unknown mutual coupling for airborne radar using middle subarray. IEEE Access 2018, 7, 6094–6108. [Google Scholar] [CrossRef]
  50. Yang, Z.; de Lamare, R.C.; Liu, W. Sparsity-based STAP using alternating direction method with gain/phase errors. IEEE Trans. Aerosp. Electron. Syst. 2017, 53, 2756–2768. [Google Scholar]
  51. Robey, F.C.; Fuhrmann, D.R.; Kelly, E.J.; Nitzberg, R. A CFAR adaptive matched filter detector. IEEE Trans. Aerosp. Electron. Syst. 1992, 28, 208–216. [Google Scholar] [CrossRef] [Green Version]
  52. Zhen, J.; Ding, Q. Calibration method of sensor location error in direction of arrival estimation for wideband signals. In Proceedings of the IEEE International Conference on Electronic Information and Communication Technology (ICEICT), Harbin, China, 20–22 August 2016; pp. 298–302. [Google Scholar]
  53. Guerci, J.R. Theory and application of covariance matrix tapers for robust adaptive beamforming. IEEE Trans. Signal Process. 1999, 47, 977–985. [Google Scholar] [CrossRef]
  54. Guerci, J.; Bergin, J. Principal components, covariance matrix tapers, and the subspace leakage problem. IEEE Trans. Aerosp. Electron. Syst. 2002, 38, 152–162. [Google Scholar] [CrossRef]
  55. Nathanson, F.E. Radar Design Principles, 2nd ed.; McGrawHill: New York, NY, USA, 1990. [Google Scholar]
Figure 1. Correlation coefficient of a(fs,i) and a(fs,i + Δ/2) versus the sharpening ratio.
Figure 2. Beam scanning and fine Doppler localization.
Figure 3. The flow chart of the first stage of the proposed two-stage SR-STAP method.
Figure 4. Average SINR loss versus the amplitude and phase errors level.
Figure 5. SINR loss curves and PD versus SNR curves in the presence of gain and phase errors. (a) SINR loss versus the normalized Doppler frequency in the presence of gain and phase errors; (b) PD versus SNR (fst = 0.1) in the presence of gain and phase errors.
Figure 6. SINR loss curves and PD versus SNR curves in the presence of mutual coupling. (a) SINR loss versus the normalized Doppler frequency in the presence of mutual coupling; (b) PD versus SNR (fst = 0.1) in the presence of mutual coupling.
Figure 7. SINR loss curves and PD versus SNR curves in the presence of sensor location errors. (a) SINR loss versus the normalized Doppler frequency in the presence of sensor location errors; (b) PD versus SNR (fst = 0.1) in the presence of sensor location errors.
Figure 8. Steering vector estimation results of all Doppler bins. (a) Amplitudes of estimated steering vectors; (b) Amplitudes of true steering vectors; (c) Interferometry phases of estimated steering vectors; (d) Interferometry phases of true steering vectors.
Figure 9. The amplitude differences and phase differences between the estimated spatial steering vectors and true spatial steering vectors (all Doppler bins). (a) Amplitude differences; (b) Phase differences.
Figure 10. Performance comparison on steering vector estimation. (a) Amplitude; (b) Phase. TSV denotes the true steering vector with array errors, ESV denotes the estimated steering vector with array errors, and ISV denotes the ideal steering vector without array errors.
Figure 11. SINR loss curves and PD versus SNR curves in the presence of arbitrary array errors. (a) SINR loss versus the normalized Doppler frequency in the presence of arbitrary array errors; (b) PD versus SNR (fst = 0.1) in the presence of arbitrary array errors.
Figure 12. Clutter Capon spectra of different methods. (a) OPT-STAP; (b) MESV-MSBL; (c) SESV-MSBL; (d) ISV-MSBL.
Figure 13. Average SINR loss versus the number of training samples used in the first stage.
Figure 14. SINR loss comparison of different methods in the presence of arbitrary array errors.
Figure 15. SINR loss curves of the MESV-MSBL method under different system parameters. (a) SINR loss curves of the MESV-MSBL method under different pulse numbers in a CPI in the first stage of the proposed two-stage SR-STAP method; (b) SINR loss curves of the MESV-MSBL method under different platform velocities and pulse repetition frequencies.
Figure 16. SINR loss comparison of different methods in the presence of arbitrary array errors and intrinsic clutter motion. (a) σv = 0.5 m/s; (b) σv = 3 m/s.
Figure 17. Average SINR loss versus the velocity standard deviation.
Table 1. Radar system parameters.
Parameter | Value
Bandwidth | 2.5 MHz
Wavelength | 0.3 m
Pulse repetition frequency | 2000 Hz
Platform velocity | 150 m/s
Platform height | 9 km
Element number | 8
Pulse number in the first stage | 256
Pulse number in the second stage | 8
CNR | 40 dB
Table 2. Gain and phase errors estimation.
Element | True | Estimated
g1 | 1 | 1
g2 | 0.9178 + j0.0642 | 0.9183 + j0.0644
g3 | 1.1288 + j0.0461 | 1.1298 + j0.0466
g4 | 0.8951 + j0.0941 | 0.8965 + j0.0944
g5 | 0.9277 + j0.0649 | 0.9291 + j0.0651
g6 | 0.8888 + j0.0466 | 0.8898 + j0.0469
g7 | 1.0946 + j0.0951 | 1.0955 + j0.0956
g8 | 0.8988 + j0.0471 | 0.8985 + j0.0471
Table 3. Mutual coupling estimation.
Coefficient | True | Estimated
c1 | 1 | 1
c2 | 0.1250 + j0.2165 | 0.1253 + j0.2169
c3 | 0.0866 + j0.0500 | 0.0869 + j0.0498
Table 4. Sensor location errors estimation.
Sensor | True (m) | Estimated (m)
Δ0 | 0 | 0
Δ1 | −0.0041 | −0.0042
Δ2 | 0.004 | 0.0041
Δ3 | 0.0003 | 0.0002
Δ4 | −0.014 | −0.0139
Δ5 | 0.0123 | 0.0122
Δ6 | −0.0021 | −0.0022
Δ7 | −0.0135 | −0.0135