Article

DeepRED Based Sparse SAR Imaging

Yao Zhao, Qingsong Liu, He Tian, Bingo Wing-Kuen Ling and Zhe Zhang *

1 Guangdong University of Technology, Guangzhou 510006, China
2 National Key Laboratory of Scattering and Radiation, Beijing 100854, China
3 Beijing Institute of Environment Features, Beijing 100854, China
4 Suzhou Key Laboratory of Microwave Imaging, Processing and Application Technology, Suzhou 215000, China
5 Suzhou Aerospace Information Research Institute, Suzhou 215000, China
6 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100190, China
7 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100190, China
8 National Key Laboratory of Microwave Imaging Technology, Beijing 100190, China
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(2), 212; https://doi.org/10.3390/rs16020212
Submission received: 25 November 2023 / Revised: 26 December 2023 / Accepted: 2 January 2024 / Published: 5 January 2024
(This article belongs to the Special Issue SAR Data Processing and Applications Based on Machine Learning Method)

Abstract

The integration of deep neural networks into sparse synthetic aperture radar (SAR) imaging is explored to enhance SAR imaging performance and reduce the system’s sampling rate. However, the scarcity of training samples and mismatches between the training data and the SAR system pose significant challenges to the method’s further development. In this paper, we propose a novel SAR imaging approach based on deep image prior powered by RED (DeepRED), enabling unsupervised SAR imaging without the need for additional training data. Initially, DeepRED is introduced as the regularization technique within the sparse SAR imaging model. Subsequently, variable splitting and the alternating direction method of multipliers (ADMM) are employed to solve the imaging model, alternately updating the magnitude and phase of the SAR image. Additionally, the SAR echo simulation operator is utilized as an observation model to enhance computational efficiency. Through simulations and real data experiments, we demonstrate that our method maintains imaging quality and system downsampling rate on par with deep-neural-network-based sparse SAR imaging but without the requirement for training data.

1. Introduction

Synthetic aperture radar (SAR), an active sensor renowned for its all-day and all-weather capabilities, plays a pivotal role in the realm of remote sensing [1]. Its applications span various domains, including topographic mapping, geological surveys, marine monitoring, agricultural and forestry assessments, disaster evaluation, and military reconnaissance. Sparse SAR imaging is a new theory for SAR imaging, which introduces sparse signal processing into SAR imaging, effectively improving image quality and system performance [2,3,4,5]. It formulates imaging as a sparse inverse problem to break through the system-complexity bottleneck of traditional SAR imaging. The theory has been effectively applied to various modes of SAR imaging, including stripmap SAR, ScanSAR, spotlight SAR, and TOPS SAR [6,7,8,9].
Regularization plays a crucial role in the success of sparse SAR imaging by improving the quality and stability of the reconstructed images, i.e., incorporating prior knowledge or constraints into the SAR image reconstruction process. There are several types of regularization techniques used in CS-SAR, as follows. Tikhonov regularization adds a penalty term to the reconstruction problem that encourages a smooth solution, reducing the impact of measurement noise and other distortions [10]. Total variation (TV) regularization promotes piecewise constant or piecewise smooth solutions, preserving edges and sharp transitions in the reconstructed image [11]. Sparsity-promoting regularization encourages solutions that are sparse in some domain, such as the wavelet or Fourier domain. Bayesian regularization incorporates a Bayesian prior into the reconstruction problem, encoding prior knowledge or constraints about the desired solution [12]. The choice of regularization method depends on the specific requirements of the application and the characteristics of the measurement data.
Deep neural networks, a current research hotspot, are also emerging as an effective regularization choice for solving sparse inverse problems, including applications in SAR imaging. Deep neural network (DNN) regularization uses a pre-trained DNN as a regularization term, leveraging the deep learning framework to learn the mapping between the undersampled measurement data and the desired high-resolution image [13,14]. By incorporating DNNs as regularization terms, CS-SAR can achieve improved imaging performance and robustness and can be applied to a wider range of imaging scenarios; a representative end-to-end DNN architecture for sparse SAR imaging is given in [15]. However, the performance of DNN-based SAR imaging is heavily dependent on the training data. A high-quality and diverse set of training data is crucial for ensuring the accurate and robust performance of the DNN in this imaging scenario. Poor-quality training data can result in incorrect mappings and reduced performance of the DNN. A lack of training data can result in underfitting, where the DNN is unable to learn the complex relationships, or overfitting, where the DNN memorizes the training data but does not generalize well to new data.
Deep image prior powered by RED (DeepRED) is a regularization model for solving sparse inverse problems that has attracted much attention in image processing [16]. DeepRED regularization merges the concepts of deep image prior (DIP) and regularization by denoising (RED). In this framework, DIP leverages the inherent structure of a deep network as a regularizer for inverse problems, while RED employs an explicit, image-adaptive, Laplacian-based regularization function. This fusion results in an overall objective function that is more transparent and well-defined. The key advantage of DeepRED is that it combines the strengths of DIP and RED, providing a flexible and powerful approach for image restoration and reconstruction. The deep neural network serves as a powerful prior, and the denoising algorithm provides effective regularization, resulting in robust and accurate image restoration results. DeepRED does not require explicit prior knowledge or training data specific to the task or imaging scenario. Instead, the method leverages the generic and rich prior knowledge learned from the large datasets of natural images to achieve robust and accurate image restoration results. DeepRED is an effective method for image restoration and reconstruction and has been applied to a wide range of tasks, including image deblurring, denoising, and super-resolution.
In this article, we combine DeepRED with sparse SAR imaging, using DeepRED as a regularization term to improve SAR imaging performance. The method does not require any training data yet achieves imaging performance comparable to supervised deep learning approaches. We first present a sparse SAR imaging model based on DeepRED; we then present a solution to this imaging model based on the ADMM algorithm; finally, we use simulation and real data experiments to illustrate the effectiveness of the algorithm. The application of DeepRED in SAR differs from applications such as MRI because the phase of a SAR image is highly random. As a result, a direct application of DeepRED fails, and the DeepRED constraint can only be imposed on the amplitude of the image.
We have innovatively applied the DeepRED algorithm to SAR imaging, breaking through the traditional confines of processing only amplitude images [16]. The crux of our innovation lies in the extension of the algorithm to concurrently handle both amplitude and phase information in SAR imaging, both of which are critical for the accuracy and completeness of SAR images. By iteratively updating the amplitude and phase of the image, our method not only significantly enhances the overall quality of SAR images but also achieves greater precision in capturing ground details and features. This research paves a new path in SAR imaging technology, propelling the field forward with fresh perspectives and methodologies.
The remaining content of this article is organized as follows. Section 2 describes the SAR echo signal model and the proposed DeepRED-based sparse SAR imaging method. Section 3 gives the simulation experimental results and the real data processing results. Section 4 gives the conclusions.

2. Materials and Methods

2.1. Signal Model

In SAR imaging, the complex-valued reflectivity matrix of the monitored area is given by $X \in \mathbb{C}^{N_P \times N_Q}$, and the collected 2-D echo data are symbolized by $Y \in \mathbb{C}^{M_\eta \times M_\tau}$. We introduce $x = \mathrm{vec}(X) \in \mathbb{C}^{N \times 1}$, where $N = N_P \times N_Q$, and $y = \mathrm{vec}(Y) \in \mathbb{C}^{M \times 1}$, where $M = M_\eta \times M_\tau$. The vectorization operation $\mathrm{vec}(\cdot)$ stacks the matrix columns in sequence. Both the two-dimensional variables $X$ and $Y$ and the one-dimensional variables $x$ and $y$ are introduced to account for the prior information associated with the two-dimensional image variables. The complex reflectivity $x$ can be expressed as the product of its magnitude component $x_m$ and its phase component $x_\theta$. In a single SAR image, the magnitude and phase components exhibit distinct properties. The phase of each pixel is typically randomized unless there is some coherence or consistent scattering mechanism at play; this randomness is due to the mixture of scattered signals returning to the radar sensor from various structures within each resolution cell. The magnitude component, on the other hand, is not only influenced by speckle noise but also exhibits a piecewise smooth nature, which allows prior knowledge about the magnitude component to be incorporated.
In SAR imaging, represented as a linear system in matrix form, the correlation between SAR echo data and the scene’s reflectivity is captured by the equation
$$y = \Phi x + n,$$
with $n \in \mathbb{C}^{M \times 1}$ signifying the additive noise and $\Phi \in \mathbb{C}^{M \times N}$ as the system’s measurement matrix.
The primary objective in SAR imaging is the recovery of $x$ from $y$ and $\Phi$. Matched-filtering-based SAR imaging is formulated as $\hat{x} = \Phi^H y$, where $\Phi^H$ represents the conjugate transpose of $\Phi$. Adherence to the Nyquist sampling theorem in both azimuth and range directions ensures $\Phi^H \Phi \approx I$ (the identity matrix), while deviation leads to pronounced sidelobes in the output. Direct inversions such as $x = \Phi^{-1} y$ are often ill-posed, particularly with poorly conditioned system matrices $\Phi$, so even minor errors in $y$ can become significant in the estimated $x$. To mitigate this, regularization techniques are applied, introducing constraints or prior knowledge into the solution process.
To achieve rapid imaging, we utilize the echo simulation operator instead of the compressive-sensing-based SAR measurement matrix. In this paper, we adopt a forward operator based on the range-Doppler algorithm (RDA), denoted as
$$\hat{x} = \mathcal{P}_{rd}(y),$$
where $\hat{x}$ denotes the reconstructed SAR image matrix, $y$ represents the echo data matrix, and $\mathcal{P}_{rd}$ signifies the RDA operation:
$$\mathcal{P}_{rd}(\cdot) = \mathcal{P}_a\big(\mathcal{C}\big(\mathcal{P}_r(\cdot)\big)\big).$$
Herein, $\mathcal{P}_r(\cdot)$ symbolizes the range compression operator, $\mathcal{C}(\cdot)$ designates the range migration correction operator, and $\mathcal{P}_a(\cdot)$ denotes the azimuth compression operator. The linearity of these three operators was established in [17]. The inverse process of the RDA-based forward operator is referred to as the echo simulation operator $\mathcal{S}(\cdot)$:
$$\mathcal{S}(\cdot) = \mathcal{P}_r^H\big(\mathcal{C}^{-1}\big(\mathcal{P}_a^H(\cdot)\big)\big).$$
The echo simulation operator serves as an approximation to the observation matrix; in other words, the relationship
$$y = \Phi x$$
can be approximately represented as
$$y = \mathcal{S}(x).$$
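As a concrete illustration of such an operator pair, the following is a minimal, self-contained Python sketch in which a single frequency-domain multiplication stands in for the full $\mathcal{P}_r$/$\mathcal{C}$/$\mathcal{P}_a$ chain; the transfer function `Hf` and both callables are toy stand-ins, not the actual RDA implementation. With orthonormal FFTs the two callables are exact adjoints, which can be checked with an inner-product test.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 64

# Toy 2-D transfer function standing in for the cascaded RDA operators
# (assumption: a single frequency-domain filter, not the real chirp/RCMC chain).
Hf = np.exp(1j * 2 * np.pi * rng.random((M, N)))

def echo_sim(x):
    """Toy echo simulation operator S(.): image -> simulated echo."""
    return np.fft.ifft2(np.fft.fft2(x, norm="ortho") * Hf, norm="ortho")

def rda_image(y):
    """Toy imaging operator P_rd(.): echo -> image (adjoint of echo_sim)."""
    return np.fft.ifft2(np.fft.fft2(y, norm="ortho") * np.conj(Hf), norm="ortho")

# Adjoint test: <S(x), y> should equal <x, P_rd(y)> for a linear adjoint pair.
x = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
print(abs(np.vdot(echo_sim(x), y) - np.vdot(x, rda_image(y))))  # ~1e-14
```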
Regularization-based SAR imaging can then be denoted as
$$\min_x \frac{1}{2}\big\|y - \mathcal{S}(x)\big\|_2^2 + \lambda R(x).$$
This formulation consists of two parts: the first term is the data fidelity term, which quantifies the difference between the observed echo data $y$ and the estimated model $\mathcal{S}(x)$; the second term is the regularization term $R(x)$, weighted by the parameter $\lambda$, which introduces a priori knowledge or constraints on the desired solution $x$.
In sparse SAR imaging, the $L_1$ regularization technique is applied to promote sparsity in the solution, and TV regularization is often used to preserve edges in the reconstructed image. When addressing SAR imaging in the wavelet domain, it is common to transform the image (or its data) into that domain, leveraging the inherent sparsity or compressibility of SAR images; in this setting, the $L_1$ regularization is applied to the wavelet coefficients. A minimal sketch of an $L_1$-regularized reconstruction follows.
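To make the formulation concrete, here is a minimal ISTA (iterative shrinkage-thresholding) sketch for the $L_1$-regularized problem, written against the toy `echo_sim`/`rda_image` operator pair above; the step size `eta` and iteration count are illustrative assumptions, not tuned values from the paper.

```python
import numpy as np

def soft_threshold(x, tau):
    """Complex soft-thresholding: shrinks magnitudes, keeps phase."""
    mag = np.abs(x)
    return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * x, 0)

def ista_sar(y, forward, adjoint, lam=0.01, eta=1.0, n_iter=100):
    """Minimize 0.5*||y - S(x)||^2 + lam*||x||_1 by proximal gradient descent."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = adjoint(forward(x) - y)                 # data-fidelity gradient
        x = soft_threshold(x - eta * grad, eta * lam)  # prox of the L1 term
    return x

# Usage with the toy operators defined earlier:
# x_hat = ista_sar(y, echo_sim, rda_image, lam=0.01)
```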

2.2. DeepRED Regularization

DeepRED combines DIP with RED, synergistically blending the strengths of both techniques for improved image reconstruction. The two components are described as follows.
DIP leverages the architecture of deep neural networks as a form of implicit regularization [18]. Unlike traditional deep learning techniques, DIP does not require training on a large dataset. Instead, it trains a network on a single image, aiming to fit the image as closely as possible. The idea is that the network’s structure inherently prevents it from fitting the noise, acting as a form of regularization:
$$\hat{\Theta} = \arg\min_\Theta \big\|f_\Theta(z) - w\big\|^2,$$
where $f_\Theta(z)$ represents the image denoised by DIP, $w$ is the given noisy image, and $f_\Theta$ is a deep neural network parameterized by $\Theta$. The term $\|f_\Theta(z) - w\|^2$ is the data term; it quantifies the degree of agreement between the reconstructed image and the observed data by measuring the discrepancy between the network’s output and the given image.
The network f Θ is initialized with random weights and is then trained to generate an image that closely resembles w. Due to the high capacity of deep networks, it is possible for f Θ to overfit to w. However, before overfitting to the noise or the corruptions in w, the network converges to a “natural” image, leveraging the prior implicitly encoded in its architecture. The main insight of DIP is that during the early stages of this training process, before the network starts to fit the noise, it captures the main structures and features of the image, essentially acting as a denoiser or reconstructor.
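The following is a minimal DIP sketch in PyTorch; the tiny convolutional network, fixed random input `z`, and early-stopping iteration count are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Assumed toy generator; DIP typically uses a deeper encoder-decoder network.
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)

w = torch.rand(1, 1, 128, 128)      # noisy magnitude image (placeholder data)
z = torch.randn(1, 32, 128, 128)    # fixed random input tensor
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(1000):               # stop early, before the net fits the noise
    opt.zero_grad()
    loss = ((net(z) - w) ** 2).mean()  # data term ||f_theta(z) - w||^2
    loss.backward()
    opt.step()

denoised = net(z).detach()           # f_theta(z) after early stopping
```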
RED is an image reconstruction framework that integrates external denoising methods into the reconstruction process [19]. The primary concept behind RED is to utilize the capabilities of off-the-shelf denoising algorithms as a regularization term, aiding in solving inverse problems in imaging. RED suggests the use of the following expression as the regularization term:
$$R_{\mathrm{RED}}(c) = \frac{1}{2}\, c^T\big(c - D(c)\big),$$
where $D(\cdot)$ is a denoiser. Within the RED framework, various denoising algorithms such as NLM [20] and BM3D [21] can be utilized, which achieve good denoising effects while ensuring computational speed. Specifically, in the experiments of this paper, NLM is used as $D(\cdot)$. By utilizing external denoisers, RED can easily incorporate the latest advancements in the field of denoising without significant alterations to the reconstruction algorithm. With the right choice of denoiser, RED is capable of achieving high-quality reconstructions that surpass the performance of traditional regularization methods.
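As a sketch of the RED regularizer under these definitions, the following uses scikit-image’s non-local means as $D(\cdot)$; the denoiser parameters (`h`, patch sizes) are assumptions, not the values used in the paper.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

def red_regularizer(c):
    """RED value R(c) = 0.5 * c^T (c - D(c)) with an NLM denoiser plugged in."""
    denoised = denoise_nl_means(c, h=0.05, patch_size=5, patch_distance=6)
    return 0.5 * np.dot(c.ravel(), (c - denoised).ravel())

# Under RED's assumptions, the gradient of R is simply c - D(c),
# which is what makes plugging denoisers into iterative solvers cheap.
def red_gradient(c):
    return c - denoise_nl_means(c, h=0.05, patch_size=5, patch_distance=6)
```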
DeepRED applied to SAR image denoising is given as follows:
$$\min_{c,\Theta} \frac{1}{2}\big\|f_\Theta(z) - w\big\|_2^2 + \frac{\lambda}{2}\, c^T\big(c - D(c)\big) \quad \mathrm{s.t.}\ c = f_\Theta(z),$$
where $w$ is the magnitude of the complex SAR image, and $z$ is the auxiliary variable.
The above problem can be solved by ADMM. To apply ADMM, we begin by forming the augmented Lagrangian for the problem [22]:
$$L(c, \Theta) = \frac{1}{2}\big\|f_\Theta(z) - w\big\|_2^2 + \frac{\lambda}{2}\, c^T\big(c - D(c)\big) + \mu\, u^T\big(f_\Theta(z) - c\big) + \frac{\mu}{2}\big\|c - f_\Theta(z)\big\|_2^2.$$
In this formula, the vector $u$ represents the Lagrange multipliers associated with the equality constraint, while $\mu$ is a selectable free parameter.
The above solution process is denoted as $\mathrm{DeepRED}(\cdot)$. Reference [16] provides an implementation that solves it with ADMM:
  • First, generate a $\Theta$ that brings $f_\Theta(z)$ close to $c - u$ using the optimization method of DIP:
    $$\min_\Theta \frac{1}{2}\big\|f_\Theta(z) - w\big\|_2^2 + \frac{\mu}{2}\big\|c - f_\Theta(z) - u\big\|_2^2.$$
  • With $\Theta$ and $u$ fixed, obtain $c$ by solving the following with a fixed-point strategy:
    $$\min_c \frac{\lambda}{2}\, c^T\big(c - D(c)\big) + \frac{\mu}{2}\big\|c - f_\Theta(z) - u\big\|_2^2.$$
    This amounts to iterating the following update $J$ times:
    $$c^{(j+1)} = \frac{1}{\lambda + \mu}\Big(\lambda\, D\big(c^{(j)}\big) + \mu\big(f_\Theta(z) + u\big)\Big).$$
  • Finally, update $u$ using $c$ and $\Theta$:
    $$u^{(k+1)} = u^{(k)} - c^{(k+1)} + f_\Theta(z).$$
Iterate the above steps until the convergence criteria are met.
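A minimal sketch of these three ADMM steps follows, reusing a DIP-style network `net` with fixed input `z` (as sketched above) and any 2-D denoiser callable `denoise`, such as the NLM wrapper from the RED sketch. The inner iteration counts and the parameters `lam` and `mu` are assumptions.

```python
import numpy as np
import torch

def deepred_admm(w, net, z, denoise, lam=0.5, mu=0.5, outer=20, dip_steps=50, J=3):
    """DeepRED ADMM: alternate DIP fitting, fixed-point c-update, dual update."""
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    w_t = torch.as_tensor(w, dtype=torch.float32)[None, None]
    c = np.array(w, dtype=np.float64)
    u = np.zeros_like(c)

    for _ in range(outer):
        # Step 1: fit theta so f_theta(z) tracks both w and (c - u).
        target = torch.as_tensor(c - u, dtype=torch.float32)[None, None]
        for _ in range(dip_steps):
            opt.zero_grad()
            out = net(z)
            loss = 0.5 * ((out - w_t) ** 2).sum() + mu / 2 * ((out - target) ** 2).sum()
            loss.backward()
            opt.step()
        f = net(z).detach().numpy()[0, 0]

        # Step 2: fixed-point iterations for c.
        for _ in range(J):
            c = (lam * denoise(c) + mu * (f + u)) / (lam + mu)

        # Step 3: dual ascent on the constraint c = f_theta(z).
        u = u - c + f
    return c
```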

2.3. DeepRED-Based SAR Imaging Model

This section delineates the application of the DeepRED regularization technique within SAR imaging, emphasizing the imaging model and algorithmic implementation. The imaging model under consideration integrates the DeepRED technique as a pivotal component to mitigate speckle noise while preserving the intrinsic details of the SAR imagery. We applied the optimization solution method from [23] to solve the model, subsequently obtaining the SAR imaging results.
Overall, for our SAR imaging problem, $y$ represents the measured echo data, and $x$ represents the imaging result; both are complex matrices. We introduce a regularization prior on the amplitude of the SAR image. Therefore, the objective function for imaging in this work can be expressed as follows:
$$\hat{x}_m, \hat{x}_\theta = \arg\min_{x_m, x_\theta} \big\|y - \mathcal{S}\big(x_\theta \circ f_\Theta(z)\big)\big\|_2^2 + \frac{\lambda}{2}\, x_m^T\big(x_m - D(x_m)\big) \quad \mathrm{s.t.}\ x_m = f_\Theta(z),$$
where $x_m$ and $x_\theta$ denote the amplitude and phase of $x$, respectively, and $\circ$ denotes the Hadamard (element-wise) product.
ADMM is an iterative algorithm that converges to the optimal solution by alternately updating the variables, auxiliary variables, and dual variables. The pseudo-code of this algorithm is detailed in Algorithm 1, which includes both the initialization step and the iterative updating process:
Algorithm 1: ADMM
         Input: echo data $y$, parameter $\rho$, number of iterations $T$
         Initialize auxiliary variables $v^{(0)} = u^{(0)} = 0$, $x^{(0)} = 0$
         for $k = 1$ to $T$ do
             $x^{(k+1)} = \mathrm{prox}_{f,\rho}\big(v^{(k)} - u^{(k)}\big)$
             $v^{(k+1)} = \mathrm{prox}_{R,\rho}\big(x_m^{(k)} + u^{(k)}\big)$
             $u^{(k+1)} = u^{(k)} + x^{(k+1)} - v^{(k+1)}$
         end for
         Output: SAR imaging result $\hat{x} = x^{(k)}$.
Here, the proximal operator is defined as
$$\mathrm{prox}_{f,\rho}\big(v^{(k)} - u^{(k)}\big) = \arg\min_x \Big( f(x) + \frac{\rho}{2}\big\|x - (v^{(k)} - u^{(k)})\big\|_2^2 \Big).$$
To reconstruct the complex-valued $x$, we need to estimate both its magnitude $x_m$ and its phase $x_\theta$. The phase vector is defined as $x_\theta = \mathrm{vec}(e^{j\angle x})$, where $\angle x$ denotes the phase of $x$, so that $x$ can be expressed as $x = x_\theta \circ x_m$, where $\circ$ denotes the Hadamard product. Each entry must therefore satisfy the constraint $|x_{\theta i}| = 1$. Thus, in the ADMM iterations, $x_\theta$ and $x_m$ are updated through the following steps:
$$x_\theta^{(k+1)} = \arg\min_{x_\theta} \big\|y - \mathcal{S}\big(x_m^{(k)} \circ x_\theta^{(k)}\big)\big\|_2^2 + \delta \sum_{i=1}^N \big(|x_{\theta i}| - 1\big)^2,$$
$$x_m^{(k+1)} = \arg\min_{x_m} \big\|y - \mathcal{S}\big(x_m^{(k)} \circ x_\theta^{(k)}\big)\big\|_2^2 + \frac{\rho}{2}\big\|x_m - (v^{(k)} - u^{(k)})\big\|_2^2.$$
The phase subproblem can be solved through the following iterative process:
$$G\, x_\theta^{(n+1)} = (H X_m)^H y + \lambda_\theta\, e^{j\angle x_\theta^{(n)}},$$
where
$$G = (H X_m)^H H X_m + \lambda_\theta I,$$
and where $X_m$ and $X_\theta$ represent $\mathrm{diag}(x_m)$ and $\mathrm{diag}(x_\theta)$, respectively, and $H$ represents the echo simulation operator applied to the diagonal matrix. The magnitude subproblem is then solved with the conjugate gradient algorithm applied to
$$\Big(\frac{\rho}{2} I + X_\theta^H H^H H X_\theta\Big)\hat{x}_m = X_\theta^H H^H y + \frac{\rho}{2}\big(v^{(k)} - u^{(k)}\big).$$
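A sketch of this conjugate gradient solve using matrix-free operators follows; the callables `H` and `H_adj` (the echo simulation operator and its adjoint) and the phase vector `x_theta` are assumed to be available from the surrounding iteration, and the iteration cap is an illustrative choice.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_magnitude(y, x_theta, H, H_adj, v, u, rho, n):
    """Solve (rho/2 I + X_th^H H^H H X_th) x_m = X_th^H H^H y + rho/2 (v - u)."""
    def matvec(x_m):
        # Apply the normal-equations operator without forming any matrix:
        # X_theta is diagonal, so it acts as element-wise multiplication.
        return rho / 2 * x_m + np.conj(x_theta) * H_adj(H(x_theta * x_m))

    A = LinearOperator((n, n), matvec=matvec, dtype=complex)
    b = np.conj(x_theta) * H_adj(y) + rho / 2 * (v - u)
    x_m, info = cg(A, b, maxiter=50)
    return np.real(x_m)  # magnitudes are real-valued
```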
The ADMM solution has a modular characteristic, in which each iterative step has an independent meaning. The $x$-update can be viewed as an inversion of the forward model $f(x)$, while the $v$-update can be regarded as a denoising step based on prior information. Considering $x_m^{(k+1)} + u^{(k)}$ as a noisy image, this step can be described as a denoising process with a prior in the following form:
$$v^{(k+1)} = \arg\min_v \frac{\lambda}{2}\, v^T\big(v - D(v)\big) + \frac{\rho}{2}\big\|v - (x_m^{(k+1)} + u^{(k)})\big\|_2^2, \quad \mathrm{s.t.}\ v = f_\Theta(z).$$
In our work, we replace this denoising step with DeepRED, so that it becomes
$$v^{(k+1)} = \mathrm{DeepRED}\big(x_m^{(k)} + u^{(k)}\big).$$
After the aforementioned improvements, the sparse SAR imaging framework based on DeepRED is illustrated in Figure 1; a high-level sketch of the resulting outer loop follows.
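The following is a minimal sketch of the resulting outer ADMM loop under the assumptions above; `update_phase` stands for the fixed-point phase solve, `solve_magnitude` is the CG solve sketched earlier, and `deepred` is the DeepRED routine from Section 2.2. All function names here are illustrative, not the authors’ released code.

```python
import numpy as np

def deepred_sar_imaging(y, H, H_adj, update_phase, solve_magnitude, deepred,
                        rho=1.0, T=30):
    """Outer ADMM loop: alternate phase update, magnitude update, DeepRED prior."""
    n = y.size
    x0 = H_adj(y)                               # RDA image as initialization
    x_m = np.abs(x0)                            # initial magnitude
    x_theta = np.exp(1j * np.angle(x0))         # initial unit-modulus phase
    v = np.zeros(n)
    u = np.zeros(n)

    for k in range(T):
        x_theta = update_phase(y, x_m, x_theta, H, H_adj)          # phase step
        x_m = solve_magnitude(y, x_theta, H, H_adj, v, u, rho, n)  # CG step
        v = deepred(x_m + u)        # DeepRED as the plug-in denoising prior
        u = u + x_m - v             # dual update
    return x_m * x_theta            # recombine magnitude and phase
```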

2.4. Evaluation Metrics

In this analysis, we evaluate the image quality of azimuth-range decouple-operator-based sparse SAR imaging by examining radiometric resolution, edge preservation, and spatial resolution.
  • Radiometric Resolution: The reconstruction quality of distributed targets is assessed using the equivalent number of looks (ENL) and the radiometric resolution (RR). Higher ENL and lower RR values indicate better reconstruction quality. The amplitude-averaging-based ENL is given by [24]
    $$\mathrm{ENL} = 0.5227^2 \times \frac{\mu(I)^2}{\sigma(I)^2}.$$
    The RR is defined as [25]
    $$\gamma(\mathrm{dB}) = 10 \log_{10}\Big(1 + \frac{1}{0.5227} \times \frac{\sigma(I)}{\mu(I)}\Big).$$
  • Average Edge Strength (AES): Suppressing speckle noise can have the side effect of blurring edges [26]. To assess the clarity of edges in our method, we utilize the average edge strength metric. Using the Sobel operator for edge detection, AES is computed as
    $$\mathrm{AES} = \frac{1}{N_{\mathrm{edges}}} \sum_{i,j} E(I_{i,j}) \sqrt{G_x(I_{i,j})^2 + G_y(I_{i,j})^2}.$$
    Here, $I_{i,j}$ represents the intensity (gray level) of the image at pixel position $(i,j)$; $E(I_{i,j})$ is the binary edge image obtained after processing with the Sobel operator, where edge pixels are marked as 1 and non-edge pixels as 0; $G_x(I_{i,j})$ and $G_y(I_{i,j})$ are the gradients calculated by the Sobel operator in the horizontal and vertical directions, respectively [27]; and $N_{\mathrm{edges}}$ is the total number of edge pixels. AES thus averages the gradient strength over all edge pixels, measuring the average strength of the detected edges.
  • Spatial Resolution: In sparse SAR imaging, the main lobe width (MLW) serves as a key metric for evaluating spatial resolution; a smaller MLW indicates better spatial resolution. Given that the system’s impulse response to an ideal point target approximates a $\mathrm{sinc}(\cdot)$ function in this imaging technique, the MLW is an effective gauge of spatial resolution. A minimal computation sketch for the ENL, RR, and AES metrics follows this list.
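A minimal sketch of the ENL, RR, and AES computations defined above; the Sobel edge-binarization threshold is an illustrative assumption, since the text does not specify one.

```python
import numpy as np
from scipy import ndimage

def enl_rr(region):
    """ENL and radiometric resolution (dB) of an amplitude-image region."""
    mu, sigma = region.mean(), region.std()
    enl = 0.5227**2 * mu**2 / sigma**2
    gamma = 10 * np.log10(1 + 1 / np.sqrt(enl))  # equals 1 + sigma/(0.5227*mu)
    return enl, gamma

def aes(image, edge_thresh=0.1):
    """Average edge strength via Sobel gradients over thresholded edge pixels."""
    gx = ndimage.sobel(image, axis=1)            # horizontal gradient
    gy = ndimage.sobel(image, axis=0)            # vertical gradient
    strength = np.sqrt(gx**2 + gy**2)
    edges = strength > edge_thresh * strength.max()  # assumed binarization rule
    return strength[edges].mean() if edges.any() else 0.0
```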
By evaluating the performance of sparse SAR imaging based on DeepRED in terms of ENL, RR, AES, and spatial resolution, we can better understand the advantages of this imaging technique in comparison to other methods.

3. Results

In this section, we evaluate the effectiveness of the proposed method using both simulated and real data. Our approach is compared with several methods, including the RDA method, the $L_1$ regularization method, the $L_1$&TV regularization method, and a pre-trained CNN regularization method using the DnCNN model. The efficacy of the DnCNN model as a prior has been validated previously [28]; for convenience, we refer to this as the CNN method, and to the method proposed in this paper as the DeepRED method. Specifically, in the simulation experiments, we present imaging results for distributed targets under various SNR settings to demonstrate the superiority of the proposed method. In the real-data experiments, we use RADARSAT data from regions with distinct characteristics for validation. Initially, we assess the imaging performance for sparse point targets in maritime ship areas, comparing the resolution advantages of our method under different sampling rates. Subsequent tests in plain regions compare the equivalent number of looks and the radiometric resolution, verifying the enhancements in imaging quality and smoothness achieved by our approach. Finally, we conduct imaging experiments in mountainous regions, comparing the AES of the imaging results of each method and validating the clarity of our approach in complex scenarios.

3.1. Simulation

Initially, a series of simulation experiments was devised to assess the reconstruction efficacy of our proposed method across varying SNRs. The constructed scenario spans 1024 × 1024 pixels, featuring a distributed target covering an area of 101 × 101 pixels at its core. In line with Oliver and Quegan’s work [29], an equivalent phase center was assigned to every pixel within the target area. The amplitude of each pixel follows an independent and identically distributed Rayleigh distribution (with mean $\mu = \sqrt{\pi \sigma_0}/2$ and variance $\sigma^2 = (1 - \frac{\pi}{4})\sigma_0$, where $\sigma_0$ is the backscattering coefficient), while the phase of each pixel is uniformly distributed over $U(-\pi, \pi)$. The ideal original image of the simulated data is a rectangular surface target, as shown in Figure 2a, which, for convenience, we refer to as the ground truth (GT). In SAR imaging, the presence of Rayleigh-distributed random variables fundamentally relates to the way SAR systems capture and process radar signals from the Earth’s surface, particularly in scenes with high scatterer density. In our simulation experiments, we selected two SNR settings for the raw echo data, specifically 30 dB and 0 dB.
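A sketch of this scene generation under the stated statistics follows; the scene and target sizes match the text, while the backscattering coefficient value is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.zeros((1024, 1024), dtype=complex)

sigma0 = 1.0                         # assumed backscattering coefficient
b = np.sqrt(sigma0 / 2)              # Rayleigh scale so that E[|A|^2] = sigma0

# 101 x 101 distributed target at the scene center.
r0 = (1024 - 101) // 2
amp = rng.rayleigh(scale=b, size=(101, 101))         # mean sqrt(pi*sigma0)/2
phase = rng.uniform(-np.pi, np.pi, size=(101, 101))  # U(-pi, pi) phase
scene[r0:r0 + 101, r0:r0 + 101] = amp * np.exp(1j * phase)

# Sanity check against the stated moments:
print(amp.mean(), np.sqrt(np.pi * sigma0) / 2)   # ~0.886
print(amp.var(), (1 - np.pi / 4) * sigma0)       # ~0.215
```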
In the first experiment, with an SNR of 30 dB, we reconstruct the target scene using the RDA, $L_1$ regularization, $L_1$&TV regularization, CNN, and proposed DeepRED methods. The imaging results are shown in Figure 2, and the corresponding range and azimuth slices in Figure 3. In the second experiment, with an SNR of 0 dB, we reconstruct the target scene using the same five methods; the imaging results are displayed in Figure 4, and the range and azimuth slices in Figure 5.
In Figure 2 and Figure 4, the results of the five imaging methods are presented alongside the ground truth. From Figure 2, it can be seen that the RDA algorithm produces a significant amount of coherent speckle noise in the imaging results, and comparison with Figure 4 shows that more coherent speckle appears as the SNR of the echo data decreases. The $L_1$ results in Figure 2 also contain some coherent speckle, but Figure 4 shows that the speckle does not worsen when the SNR decreases, indicating some speckle-suppression capability. In Figure 2, the $L_1$&TV regularization and CNN imaging methods both suppress coherent speckle noise more effectively, without producing very dense spots. However, according to Figure 4, although coherent speckle remains suppressed when the SNR decreases, the amplitude deviates somewhat from the GT. The proposed DeepRED imaging method shows good speckle suppression in Figure 2; in Figure 4, no more severe speckle appears at the lower SNR, and the image amplitude remains closer to the GT. Therefore, the proposed method produces good images under different echo noise conditions and exhibits better stability.
Compared to the results shown in Figure 3, Figure 5 presents outcomes under a different setup, in which the SNR of the echo data is reduced from 30 dB to 0 dB. Analyzing the results, it is first noticeable that the RDA method generates considerable noise. The $L_1$ regularization method shows a significant overall decrease in the amplitude of the imaging results. With the decrease in SNR, the $L_1$&TV regularization method also shows a slight reduction in image amplitude. Although the CNN method does not exhibit significant amplitude changes, the edges of the rectangular targets are visibly affected by noise. In contrast, the proposed DeepRED-based method shows no significant amplitude changes as the SNR decreases, indicating better stability than the other methods.
From the experimental results, we observe that, except for RDA and $L_1$ regularization, the methods maintain the uniformity and continuity of the reflectivity of the distributed targets. The CNN and DeepRED methods further improve the reconstruction accuracy over $L_1$&TV regularization, and the proposed method demonstrates the best clutter-suppression capability.

3.2. Real Data Experiments

The real-data experiments in this paper utilized SAR data from RADARSAT-1 over the Vancouver area; the scene data can be downloaded directly from the site of [30]. In our quantitative analysis, we first selected maritime ships as point targets and compared the MLW and clutter intensity of the imaging results from the various methods. We then chose a plain area as a distributed target and calculated the ENL and RR for the imaging results of each method. Finally, we conducted imaging experiments in mountainous regions and compared the AES of the imaging results. Through these three sets of real-scene experiments, we further validated the performance advantages of the proposed DeepRED regularization imaging method. Overall, our method demonstrates substantial improvements in multiple respects: a narrower MLW indicates better reconstruction accuracy, a higher ENL and lower RR signify significantly enhanced smoothness and noise resistance, and a higher AES indicates clearer imaging results. Our algorithm maintains excellent performance when faced with various downsampling ratios and noise levels, underscoring its robustness; it can generate high-quality reconstructions even under the challenging conditions of sparse data and high noise.

3.2.1. Experiment 1

In the first experiment, our focus is on evaluating the accuracy of point-target reconstruction. For this purpose, two points with strong scattering characteristics were selected. Under full-sampling conditions, several methods were used for scene reconstruction: the RDA, $L_1$ regularization, $L_1$&TV regularization, CNN, and our proposed DeepRED technique. The reconstruction results are displayed in Figure 6, where the subsequent images exhibit less noise and clutter than Figure 6a. Following this, we take slices through the target indicated in Figure 6a along the range and azimuth directions for each of the five methods. As shown in Figure 7, compared to the RDA reconstruction, all other algorithms suppress clutter and noise. Among them, the DeepRED method reconstructs the target’s reflectivity most accurately, exhibiting the least noise and the smallest MLW.
In the final phase, point target experiments were conducted with a 60% undersampling ratio to assess the impact of data undersampling. To simulate the undersampling, we randomly selected subsets from the fully sampled dataset, as no directly undersampled data was available. For reconstructing the scene, the same five techniques as used previously were applied. The results, depicted in Figure 8a, indicate that the RDA’s reconstruction under undersampling conditions leads to significant sidelobes, particularly in the azimuth direction. However, the other methods, including our proposed one, demonstrated varying degrees of success in reducing sidelobes, with ours showing the most notable enhancement. Additional slice analysis on the left target, as shown in Figure 9, reinforced the effectiveness of our approach.
By comparing Figure 8 with Figure 6, it can be observed that, in imaging the ship targets, the RDA algorithm produced significant sidelobes for both targets, which became more pronounced after the sampling rate was reduced. In Figure 6, the $L_1$, $L_1$&TV, and CNN methods all effectively reduce the sidelobes. However, Figure 8 shows that, at the reduced sampling rate, the sidelobe suppression of the $L_1$ regularization method is not as good as that of the $L_1$&TV and CNN methods. The DeepRED method proposed in this paper shows almost no significant sidelobes in Figure 6, and even at the reduced sampling rate in Figure 8 no significant sidelobes are evident, confirming that our method can effectively suppress sidelobes and maintain stability at lower sampling rates.

3.2.2. Experiment 2

In the subsequent section, our focus shifts to evaluating the distributed target’s reconstruction accuracy and uniformity. For this purpose, a relatively flat terrain within the entire scene was chosen as the subject of study. Each target zone within this area is marked with a red rectangular outline.
Initially, the RDA was employed to reconstruct the target scene. As depicted in Figure 10a, numerous speckles are evident in the image, disrupting the continuity and uniformity of the ground surface. The $L_1$ regularization method, which introduces sparsity via soft thresholding, produced an image in which speckle noise was only mildly suppressed, as shown in Figure 10b. Subsequently, we adopted an iterative approach incorporating both $L_1$ and TV regularization, as illustrated in Figure 10c. Comparing Figure 10a–c, it is evident that the outcome in Figure 10c exhibits enhanced uniformity and continuity. Following this, experiments were conducted using the CNN method, which, as shown in Figure 10d, achieved superior speckle-noise reduction and rendered the image smoother. Finally, imaging was executed using the DeepRED technique, with the results in Figure 10e suggesting that it offers the best denoising and smoothing effects. Quantitative analyses were subsequently carried out to substantiate these findings. For a more detailed quantitative assessment of bias and uniformity, three ground areas encircled by red rectangles were chosen; the ENL and RR were calculated for each area from the imaging results obtained previously, and the numerical findings are summarized in Table 1.
Analyzing the imaging outcomes depicted in Figure 10 and the statistical data in Table 1, it becomes evident that the DeepRED regularization method, as proposed in our study, not only mitigates noise and clutter but also lowers the variance of the reconstruction results, thereby enhancing the uniformity and continuity of the distributed target compared to the traditional RDA.
For a precise evaluation of reconstruction accuracy, we identified three specific areas, labeled A1/A2/A3, as experimental distributed targets in Figure 10a–e, each demarcated by a red rectangle. The mean and variance were calculated for these areas, and the ENL and RR were determined using the formulas in Section 2.4. A lower γ value corresponds to better radiometric resolution.
The $L_1$&TV regularization method enhances the radiometric resolution relative to the RDA method, whereas the bias induced by $L_1$ regularization somewhat degrades it. The CNN imaging method shows further improvements in ENL and radiometric resolution. Notably, the DeepRED method proposed in this paper attains the best ENL and radiometric resolution.

3.2.3. Experiment 3

In Experiment 3, we conducted imaging experiments on the mountainous part of the RADARSAT data, selecting a mountainous area within the complete scene as our subject. Different regularization terms lead to varying degrees of sparsity, influencing the continuity and integrity of the mountain imaging. The experimental results are shown in Figure 11.
Initially, we employed the RDA to reconstruct the target scene. As illustrated in the figure, the image contains numerous speckles, causing discontinuities in the mountains. We then reconstructed the scene using the $L_1$ regularization method; although it reduced speckle noise, the image remained discontinuous, and some target features were lost due to the soft-thresholding effect. The $L_1$&TV regularization method somewhat improved the image’s continuity, but at the expense of blurring the texture and losing detail. The CNN method significantly suppressed speckle noise and provided clear, detailed imaging, yet still exhibited discontinuities and missed certain target details. Finally, DeepRED produced clear images with notable speckle-noise suppression and the most complete preservation of the target.
In addition, we conducted a quantitative analysis of the AES for the mountainous region, with the results presented in Table 2. The method proposed in this paper yields the highest AES, indicating that our imaging results have clear edges: while effectively suppressing speckle noise, our method does not excessively blur edges. In contrast, the RDA result is blurred by speckle noise, and both the $L_1$ and $L_1$&TV regularization algorithms produce varying degrees of blurring, influenced by the choice of threshold values and the effects of regularization.

4. Discussion

In this study, we have introduced an innovative unsupervised approach to SAR imaging leveraging the DeepRED regularization method. The approach diverges from traditional methods by employing an RDA-based echo simulation operator instead of the conventional observation matrix, alongside an ADMM framework for decoupling. The method’s efficacy was rigorously tested through a series of simulation experiments and applied to RADARSAT data, benchmarked against the RDA, $L_1$ regularization, $L_1$&TV regularization, and CNN-based imaging methods.
The results of our experiments were revealing. For distributed targets, our technique showed a remarkable proficiency in suppressing speckle noise, leading to smoother imaging areas. In point-target imaging within the ship regions of the RADARSAT data, our method excelled in delineating clear edges and internal structures while exhibiting the lowest noise levels. Furthermore, our approach outperformed the others in imaging plain areas and mountainous terrain, displaying a superior equivalent number of looks and radiometric resolution while maintaining clarity without sacrificing target detail. This demonstrates not just the method’s effectiveness in noise suppression and edge preservation but also its versatility across different imaging scenarios.

5. Conclusions

The development and validation of the DeepRED-based sparse SAR imaging method represent a significant step forward in SAR imaging technology. This method has shown a unique capability to handle various imaging challenges, from speckle noise reduction to edge clarity and target integrity preservation. Its adaptability to different environments and noise levels underscores its potential as a robust tool for advanced SAR imaging applications. As SAR imaging continues to evolve, methodologies like ours will play a pivotal role in enhancing the clarity and accuracy of the images captured, thereby contributing significantly to fields like remote sensing and earth observation.

Author Contributions

Conceptualization, Y.Z., Z.Z. and H.T.; methodology, Y.Z. and Z.Z.; software, H.T. and Q.L.; writing—original draft preparation, Y.Z. and Q.L.; writing—review and editing, B.W.-K.L. and Y.Z.; supervision, Y.Z., B.W.-K.L. and Z.Z.; funding acquisition, Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Guangdong Province under Grant 2021A1515012009.

Data Availability Statement

The data utilized in this study were sourced from the RADARSAT dataset, which is openly accessible at the following URL: http://us.artechhouse.com/Assets/downloads/Cumming_058-3.zip, accessed on 1 January 2024. The methodology employed for data processing is based on the techniques outlined in the reference textbook, “Digital Processing of Synthetic Aperture Radar Data”, available for reference at https://us.artechhouse.com/Digital-Processing-of-Synthetic-Aperture-Radar-Data-P1549.aspx, accessed on 1 January 2024. This dataset and the associated processing methods were instrumental in the analysis and findings presented in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Curlander, J.C.; McDonough, R.N. Synthetic Aperture Radar: Systems and Signal Processing; Wiley: New York, NY, USA, 1991. [Google Scholar]
  2. Herman, M.A.; Strohmer, T. High-resolution radar via compressed sensing. IEEE Trans. Signal Process. 2009, 57, 2275–2284. [Google Scholar] [CrossRef]
  3. Onhon, N.Ö.; Cetin, M. A sparsity-driven approach for joint SAR imaging and phase error correction. IEEE Trans. Image Process. 2011, 21, 2075–2088. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, B.; Hong, W.; Wu, Y. Sparse microwave imaging: Principles and applications. Sci. China Inf. Sci. 2012, 55, 1722–1754. [Google Scholar] [CrossRef]
  5. Xu, G.; Zhang, B.; Yu, H.; Chen, J.; Xing, M.; Hong, W. Sparse Synthetic Aperture Radar Imaging From Compressed Sensing and Machine Learning: Theories, applications, and trends. IEEE Geosci. Remote Sens. Mag. 2022, 10, 32–69. [Google Scholar] [CrossRef]
  6. Zhang, J.; Lu, X.; Song, Y.; Yu, D.; Bi, H. Sparse Stripmap SAR Autofocusing Imaging Combining Phase Error Estimation and L1-Norm Regularization Reconstruction. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–12. [Google Scholar] [CrossRef]
  7. Bi, H.; Zhang, B.; Zhu, X.; Hong, W. Azimuth-range decouple-based L1 regularization method for wide ScanSAR imaging via extended chirp scaling. J. Appl. Remote Sens. 2017, 11, 015007. [Google Scholar] [CrossRef]
  8. Xu, Z.; Wei, Z.; Wu, C.; Zhang, B. Multichannel Sliding Spotlight SAR Imaging Based on Sparse Signal Processing. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 3703–3706. [Google Scholar] [CrossRef]
  9. Bi, H.; Zhang, B.; Zhu, X.X.; Jiang, C.; Wei, Z.; Hong, W. Lq Regularization Method for Spaceborne SCANSAR and TOPS SAR Imaging. In Proceedings of the EUSAR 2016: 11th European Conference on Synthetic Aperture Radar, Hamburg, Germany, 6–9 June 2016; pp. 1–4. [Google Scholar]
  10. Kang, M.S.; Kim, K.T. Compressive sensing based SAR imaging and autofocus using improved Tikhonov regularization. IEEE Sens. J. 2019, 19, 5529–5540. [Google Scholar] [CrossRef]
  11. Rodríguez, P. Total variation regularization algorithms for images corrupted with different noise models: A review. J. Electr. Comput. Eng. 2013, 2013, 10. [Google Scholar] [CrossRef]
  12. Autieri, R.; Ferraiuolo, G.; Pascazio, V. Bayesian Regularization in Nonlinear Imaging: Reconstructions From Experimental Data in Nonlinearized Microwave Tomography. IEEE Trans. Geosci. Remote Sens. 2011, 49, 801–813. [Google Scholar] [CrossRef]
  13. Ongie, G.; Jalal, A.; Metzler, C.A.; Baraniuk, R.G.; Dimakis, A.G.; Willett, R. Deep Learning Techniques for Inverse Problems in Imaging. IEEE J. Sel. Areas Inf. Theory 2020, 1, 39–56. [Google Scholar] [CrossRef]
  14. Kamilov, U.S.; Bouman, C.A.; Buzzard, G.T.; Wohlberg, B. Plug-and-Play Methods for Integrating Physical and Learned Models in Computational Imaging: Theory, algorithms, and applications. IEEE Signal Process. Mag. 2023, 40, 85–97. [Google Scholar] [CrossRef]
  15. Zhao, S.; Ni, J.; Liang, J.; Xiong, S.; Luo, Y. End-to-End SAR Deep Learning Imaging Method Based on Sparse Optimization. Remote Sens. 2021, 13, 4429. [Google Scholar] [CrossRef]
  16. Mataev, G.; Milanfar, P.; Elad, M. DeepRED: Deep image prior powered by RED. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1–10. [Google Scholar]
  17. Fang, J.; Xu, Z.; Zhang, B.; Hong, W.; Wu, Y. Fast Compressed Sensing SAR Imaging Based on Approximated Observation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 352–363. [Google Scholar] [CrossRef]
  18. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9446–9454. [Google Scholar]
  19. Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017, 10, 1804–1844. [Google Scholar] [CrossRef]
  20. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar]
  21. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  22. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A.T. Fast Image Recovery Using Variable Splitting and Constrained Optimization. IEEE Trans. Image Process. 2010, 19, 2345–2356. [Google Scholar] [CrossRef] [PubMed]
  23. Alver, M.B.; Saleem, A.; Çetin, M. Plug-and-Play Synthetic Aperture Radar Image Formation Using Deep Priors. IEEE Trans. Comput. Imaging 2021, 7, 43–57. [Google Scholar] [CrossRef]
  24. Lee, J.S.; Jurkevich, L.; Dewaele, P.; Wambacq, P.; Oosterlinck, A. Speckle filtering of synthetic aperture radar images: A review. Remote Sens. Rev. 1994, 8, 313–340. [Google Scholar] [CrossRef]
  25. Chen, Q.; Li, Z.; Zhang, P.; Tao, H.; Zeng, J. A preliminary evaluation of the GaoFen-3 SAR radiation characteristics in land surface and compared with Radarsat-2 and Sentinel-1A. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1040–1044. [Google Scholar] [CrossRef]
  26. Qiu, F.; Berglund, J.; Jensen, J.R.; Thakkar, P.; Ren, D. Speckle noise reduction in SAR imagery using a local adaptive median filter. GISci. Remote Sens. 2004, 41, 244–266. [Google Scholar] [CrossRef]
  27. Vincent, O.R.; Folorunso, O. A descriptive algorithm for sobel image edge detection. In Proceedings of the Informing Science & IT Education Conference (InSITE), Macon, GA, USA, 12–15 June 2009; Volume 40, pp. 97–107. [Google Scholar]
  28. Ryu, E.; Liu, J.; Wang, S.; Chen, X.; Wang, Z.; Yin, W. Plug-and-play methods provably converge with properly trained denoisers. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 5546–5557. [Google Scholar]
  29. Oliver, C.; Quegan, S. Understanding Synthetic Aperture Radar Images; SciTech Publishing: Raleigh, NC, USA, 2004. [Google Scholar]
  30. Cumming, I.G.; Wong, F.H. Digital Processing of Synthetic Aperture Radar Data; Artech House: Norwood, MA, USA, 2005. [Google Scholar]
Figure 1. Framework of ADMM Solution for SAR Imaging Leveraging DeepRED.
Figure 2. Reconstruction results of simulated scenes at SNR = 30 dB. (a) GT; (b) RDA; (c) $L_1$ regularization; (d) $L_1$&TV regularization; (e) CNN; (f) DeepRED. All images were simulated under the same conditions and plotted with the same color map to maintain consistency for comparison.
Figure 3. The range and azimuth direction slices of the results obtained by the five methods in processing the simulated data when SNR = 30 dB. (a) The slice along the range direction. (b) The slice along the azimuth direction.
Figure 4. Reconstruction results of simulated scenes at SNR = 0 dB. (a) GT; (b) RDA; (c) $L_1$ regularization; (d) $L_1$&TV regularization; (e) CNN; (f) DeepRED. All images were simulated under the same conditions and plotted with the same color map to maintain consistency for comparison.
Figure 5. The range and azimuth direction slices of the results obtained by the five methods in processing the simulated data when SNR = 0 dB. (a) The slice along the range direction. (b) The slice along the azimuth direction.
Figure 6. Reconstruction results of a real-scene point target under full sampling. (a) RDA; (b) $L_1$ regularization; (c) $L_1$&TV regularization; (d) CNN; (e) DeepRED.
Figure 7. Slices of the reconstructed point target results along the range and azimuth directions under full sampling conditions. (a) The slice along the azimuth direction. (b) The slice along the range direction.
Figure 8. Reconstruction results of a real-scene point target under random sampling at a 60% rate. (a) RDA; (b) $L_1$ regularization; (c) $L_1$&TV regularization; (d) CNN; (e) DeepRED.
Figure 9. Slices of the reconstructed point target results along the range and azimuth directions under 60% downsampling conditions. (a) The slice along the azimuth direction. (b) The slice along the range direction.
Figure 10. Reconstruction results of Experiment 2. (a) RDA; (b) $L_1$ regularization; (c) $L_1$&TV regularization; (d) CNN; (e) DeepRED.
Figure 11. Reconstruction results of Experiment 3. (a) RDA; (b) $L_1$ regularization; (c) $L_1$&TV regularization; (d) CNN; (e) DeepRED.
Table 1. Comparison of ENL and γ values for imaging results of different target scenes using the five methods in Experiment 2.

Target | Method | ENL | γ (dB)
A1 | RDA | 0.8587 | 3.1789
A1 | $L_1$ | 0.6865 | 3.4379
A1 | $L_1$&TV | 0.9268 | 3.0936
A1 | CNN | 1.0712 | 2.9363
A1 | DeepRED | 1.7595 | 2.4400
A2 | RDA | 0.9191 | 3.1029
A2 | $L_1$ | 0.3412 | 4.3327
A2 | $L_1$&TV | 0.7006 | 3.4138
A2 | CNN | 1.0026 | 3.0075
A2 | DeepRED | 4.5941 | 1.6630
A3 | RDA | 0.5239 | 3.7687
A3 | $L_1$ | 0.4126 | 4.0768
A3 | $L_1$&TV | 0.4549 | 3.9492
A3 | CNN | 0.6161 | 3.5679
A3 | DeepRED | 0.6379 | 3.5258
Table 2. AES of the target scene in Experiment 3 as imaged by the different methods.

Method | AES
RDA | 0.0789
$L_1$ | 0.0621
$L_1$&TV | 0.0757
CNN | 0.1129
DeepRED | 0.2144

