Article

A Robust InSAR Phase Unwrapping Method via Improving the pix2pix Network

1 School of Geosciences and Info-Physics, Central South University, Changsha 410083, China
2 Chinese Academy of Surveying and Mapping, Beijing 100036, China
3 College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(19), 4885; https://doi.org/10.3390/rs15194885
Submission received: 16 September 2023 / Revised: 30 September 2023 / Accepted: 2 October 2023 / Published: 9 October 2023
(This article belongs to the Section AI Remote Sensing)

Abstract

Phase unwrapping is the core step of InSAR (interferometric synthetic aperture radar) data processing, and its output directly determines the quality of the resulting products. Noise introduced by the SAR system and by interferometric processing is unavoidable; it causes local phase errors and limits the results of traditional unwrapping methods. With the successful application of deep learning in a variety of fields in recent years, new concepts for phase unwrapping have emerged. This research offers a one-step InSAR phase unwrapping method based on an improved pix2pix network model. We achieved our aim by upgrading the pix2pix network generator model and introducing the concept of quality map guidance. Phase unwrapping experiments on simulated and real InSAR data with different noise intensities were carried out to compare the method with other unwrapping methods. The experimental results demonstrate that the proposed method is superior to the other unwrapping methods and is robust to noise.

1. Introduction

Synthetic aperture radar interferometry (InSAR) combines the technological properties of interferometry with SAR imaging, can effectively extract three-dimensional terrain information and surface deformation, and has become one of the most extensively used remote sensing methods. InSAR measurement employs the interferometric phase difference of two SAR images, which is typically wrapped into (−π, π]. To derive the real phase value from the wrapped phase, an integer multiple of 2π is added to or subtracted from the wrapped phase to restore the corresponding integer period; this process is referred to as phase unwrapping.
Most phase unwrapping procedures are based on the assumption of phase continuity [1] (also known as the Itoh condition), which states that the absolute phase difference between any two adjacent pixels must be smaller than π; under this condition, the real phase can be readily retrieved. However, the real InSAR interferometric phase is affected by speckle and terrain, making it difficult to meet this condition. Traditional phase unwrapping techniques fall into three categories: path-following methods, optimization-based methods, and integrated denoising and unwrapping methods. Path-following methods are simple in principle: an appropriate integration path is found through the Itoh condition, usually guided by the residue distribution or a quality map, so that the integral over any closed path is zero and error propagation is avoided. This is the basis of algorithms such as the Goldstein branch-cut algorithm [2], quality-guided methods [3,4,5], and the region-growing method [6]. However, such methods are prone to introducing errors during the integration process and sometimes require the assistance of external data. Optimization-based methods minimize, under various objective functions, the difference between the phase gradient and the estimated gradient. These are global methods: although their time and space complexity is high and accuracy is traded against complexity, their global results are robust, which has made them widely used; examples include least squares (LS) and its improved algorithms [7,8,9,10], the network flow algorithm [11,12], and the graph cut algorithm [13]. Integrated denoising and unwrapping methods filter interferometric phase noise while performing phase unwrapping, effectively reducing the errors introduced in the interference process and offering strong robustness; examples include Kalman-filtering methods [14,15,16,17], the grid-filtering method [18], and particle filter methods [19,20]. In summary, all of these methods rely on the Itoh condition. Unfortunately, phase discontinuities produced by significant noise are usually present in genuine InSAR interferograms, making phase unwrapping difficult. Although innovative studies [21,22,23] continue to emerge, the interferometric phase discontinuity caused by various conditions remains a difficulty in phase unwrapping research.
Because of its data-driven architecture, deep-learning-based phase unwrapping has effectively overcome the limitations of the Itoh condition in recent years, as deep learning has been applied to SAR data more often, and several deep-learning-based InSAR phase unwrapping studies [24] have been conducted. One line of work applies deep learning to phase unwrapping preprocessing. Rouet-Leduc et al. [25] designed a deep convolutional autoencoder for atmospheric noise removal based on a large amount of InSAR time series data in 2019. In the same year, Sun et al. [26] and Sica et al. [27] addressed the phase filtering and coherence estimation problems based on CNN models and U-Net structures, respectively. Another type of deep-learning-based phase unwrapping is deep-learning-based path following. Zhou et al. [28] proposed a deep convolutional neural network (DCNN) for phase gradient estimation in 2020, called PGNet, which detects phase gradients by posing the task as a segmentation problem. In 2021, Zhang et al. [29] combined PGNet with the least squares algorithm, using the predicted phase gradient from PGNet as the input to the least squares unwrapping method, which significantly improved its efficiency and accuracy. In the same year, Zhou et al. [30] proposed BCNet, which transformed the polarity balance problem of residual points into a semantic segmentation problem; however, BCNet cannot theoretically ensure that all residual points are balanced, so post-processing is still necessary. Ferraioli et al. [31] analyzed the effectiveness of the BCNet unwrapping method combined with noise filtering on simulated data with different coherence coefficients; the results showed that a robust pre-phase-filtering step was superior to the direct BCNet method in terms of both unwrapping accuracy and processing time. In this type of method, the two key concepts of traditional phase unwrapping, residues and branch cuts, are integrated into deep learning methods. However, these methods do not yet possess end-to-end capabilities.
Unlike the above methods, deep-learning-based global phase unwrapping constructs a deep learning framework to obtain the unwrapped phase directly from the input interferogram. In terms of processing steps, these approaches can be broadly classified into two types. (1) The one-step phase unwrapping method. Wu et al. [32] proposed a network (PUNet) in 2020 for unwrapping interferogram patches cut around detected subsidence points. The key advantage of this network is that it treats the unwrapping problem as a regression problem, analogous to the standard least-squares phase unwrapping method. Zhou et al. [33] presented the PU-GAN unwrapping approach in 2022, which is based on a conditional generative adversarial network and treats unwrapping as an image-to-image conversion problem. (2) The two-step unwrapping approach differs from the deep-learning-based one-step method. In the first stage, it formulates the phase unwrapping problem as a semantic segmentation problem and classifies the pixels in the wrapped phase image that correspond to the same period into the same category. Unfortunately, because the segmentation results cannot guarantee that all pixels are correctly identified, some pixels will always exhibit phase jumps; as a result, post-processing is required in the second stage to identify and rectify these pixels. Spoorthi et al. [34] proposed PhaseNet in 2018, based on a fully convolutional network, which recovers the unwrapping result by predicting the number of wrapped phase periods at each pixel of the input wrapped phase image; however, it does not perform well in the case of sharp phase changes, so a cluster-based smoothing algorithm must be used for post-processing. Zhang and colleagues [35] created a similar network based on PhaseNet in 2019, which still requires post-processing even after adding an extra independent network to denoise the input wrapped phase. An evident downside of the two-step unwrapping approach is therefore that it cannot produce adequate results without post-processing. To address this issue, in 2020, Spoorthi et al. [36] upgraded PhaseNet to version 2.0 based on DenseNet [37]. Its main improvement over the original version is that the loss function of PhaseNet 2.0 has a clear unwrapping meaning: in addition to cross-entropy losses similar to the original version, it combines residual losses and L1 losses to reduce the post-processing steps.
The quality map is an index used to assess the quality of each pixel in the interferometric phase image; pixel values in the quality map typically lie between 0 and 1. Several types of information may be utilized to create a quality map [38,39,40]. The quality map is not only used in the quality-guided unwrapping approach but can also assist other methods [9,10,11].
This study investigates merging the concept of a quality map with deep learning and presents pu-pix2pix, a one-step unwrapping method. The pu-pix2pix model transforms the phase unwrapping problem into learning the mapping relationship between the interferogram and the real phase image. The model comprises a generator, a discriminator, and a loss function. The generator has a structure similar to U-Net. The coherence coefficient map is employed in the generator structure to impose conditional constraints on the input features, and the atrous spatial pyramid pooling (ASPP) module and bottleneck modules are incorporated, resulting in a more accurate unwrapped phase image. The discriminator uses PatchGAN to discriminate between real and fake values in the generator's phase image. The pu-pix2pix loss function combines an L1 loss function and an adversarial loss function. The L1 loss recovers the low-frequency part of the image, bringing the generated unwrapped phase closer to the real phase, whereas the adversarial loss drives the mutual confrontation between the two structures, resulting in higher-quality unwrapped phase images. Experiments using simulated and real-world InSAR interferometric data reveal that this approach can generate effective phase unwrapping results in a variety of noise situations.
This paper is organized as follows. Section 2 introduces the principle of phase unwrapping, the problem analysis, the proposed approach, and its loss function. Section 3 describes the dataset generation technique, the evaluation index for the unwrapping results, the experimental environment, and a series of experimental results on simulated and real InSAR data. Sections 4 and 5 comprise the discussion and the conclusions.

2. pu-pix2pix

In this section, we first introduce the principle of phase unwrapping. The structure and loss function of the pu-pix2pix model are then described in detail.

2.1. The Principle of Phase Unwrapping

In general, the phase value in the interferogram is nonlinearly wrapped into the interval (−π, π], generating the principal phase value, also known as the wrapped phase value. To obtain the real phase value from the wrapped phase, an integer multiple of 2π is added to or subtracted from the wrapped phase; the procedure of recovering the matching integer period, and hence the true phase, by numerical analysis or geometric methods is referred to as phase unwrapping. The wrapping function is as follows:
$\psi(t) = \phi(t) + 2\pi k(t)$ (1)
where $\psi(t)$ and $\phi(t)$ are the absolute phase and the wrapped phase, respectively, and $k(t)$ is an integer function.
The phase unwrapping method is based on the Itoh assumption [1], namely that the phase difference between adjacent pixels of the unwrapped phase does not exceed π. Based on this assumption, the main steps are as follows.
Defining the difference operator Δ, the phase difference between a pixel and its adjacent pixel is Δϕ(i):
$\Delta\phi(i) = \phi(i+1) - \phi(i) = \psi(i+1) - \psi(i) - \left[2\pi k(i+1) - 2\pi k(i)\right] = \Delta\psi(i) + 2\pi k_{\Delta}(i)$ (2)
where $k_{\Delta}(i) = k(i) - k(i+1)$.
W is defined as the wrapping operation; wrapping both sides of Equation (2) gives:
$W\{\Delta\phi(i)\} = W\{\Delta\psi(i) + 2\pi k_{\Delta}(i)\} = \Delta\psi(i) + 2\pi k_{\Delta}(i) + 2\pi k'(i)$ (3)
where $2\pi k'(i)$ is the integer multiple of 2π introduced by the wrapping operation.
By the definition of wrapping, both $W\{\Delta\phi(i)\}$ and $\Delta\psi(i)$ lie in the interval (−π, π], so:
$k_{\Delta}(i) + k'(i) = 0$ (4)
Therefore, Equation (3) reduces to:
$\Delta\psi(i) = W\{\Delta\phi(i)\}$ (5)
Given an initial real phase value of ψ(0) = ϕ(0) and applying Equation (5) pixel by pixel, the absolute phase value of the next pixel is:
$\psi(i+1) = \psi(i) + W\{\Delta\phi(i)\}$ (6)
Therefore, unwrapping can begin from a point with a known true phase, accumulating the wrapped phase differences of neighboring points until all pixels have been computed, at which point the unwrapping process is complete.
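The recursion in Equations (5) and (6) can be written down directly. The following is a minimal one-dimensional sketch, assuming the Itoh condition holds along the path; the function names and the ramp test signal are illustrative, not from the paper.

```python
import numpy as np

def wrap(phase):
    """Wrap phase values into the interval (-pi, pi]."""
    return np.pi - np.mod(np.pi - phase, 2 * np.pi)

def itoh_unwrap_1d(wrapped):
    """Integrate wrapped phase differences from a known starting point."""
    psi = np.empty_like(wrapped)
    psi[0] = wrapped[0]                            # psi(0) = phi(0)
    for i in range(len(wrapped) - 1):
        delta = wrap(wrapped[i + 1] - wrapped[i])  # Δψ(i) = W{Δφ(i)}, Eq. (5)
        psi[i + 1] = psi[i] + delta                # Eq. (6)
    return psi

# Example: recover a smooth ramp from its wrapped version.
true_phase = np.linspace(0.0, 12.0, 200)
recovered = itoh_unwrap_1d(wrap(true_phase))
assert np.allclose(recovered, true_phase)
```

In two dimensions, the same accumulation is carried out along an integration path, which is precisely where path-following methods differ in how they choose that path.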

2.2. pix2pix Model

A generative adversarial network (GAN) is an adversarial generation framework proposed by Goodfellow et al. [41] in 2014. A GAN estimates generative models through an adversarial process, is based on the concept of a two-person zero-sum game, and is mostly used to generate many types of images. Because the original GAN network is unconstrained, the images it generates are random, unexpected, and uncontrollable. In 2014, Mirza et al. [42] proposed the conditional generative adversarial network (CGAN), a fundamental modification of the GAN. Its basic idea is to feed attribute information into both G (the generator) and D (the discriminator) as a constraint, so that the expected samples can be produced more accurately and the legitimacy of the generated samples can be assessed.
The CGAN effectively converts unsupervised learning into supervised learning, allowing the network to learn according to our aims. The pix2pix network, introduced by Isola et al. [43] in 2016 on the basis of the CGAN, is an advanced GAN model framework designed specifically for the field of image conversion. Its primary purpose is to discover the mapping relationship between the images before and after conversion, and its input requires no additional restrictions. Figure 1 depicts its primary structure.

2.3. The Structure of pu-pix2pix

The pu-pix2pix generator structure combines a quality-map-processing operation with the initial feature-processing layer; the ASPP module replaces the original convolutional down-sampling layer; and bottleneck residual modules are included in the new structure's down-sampling process. While the model's training accuracy is maintained, the stability of the unwrapping result is increased. The improved generator structure is illustrated in Figure 2 below.
The structure of the generator can be divided into three parts. The first part uses the quality map as a conditioning constraint on the input features: the fringe pattern and the coherence coefficient map are spliced together, a 1 × 1 convolution restores the channel number of the spliced image, and the result is fed into the encoder, the second part of the structure. The encoder depth is 8, converting the input pixel space into a low-resolution, high-level feature space. A 4 × 4 convolution is repeated seven times; each convolution (except the first, which has no BatchNorm layer) is followed by a BatchNorm layer, a LeakyReLU layer with a slope of 0.2, and a bottleneck residual module, and the next encoder layer is entered via ASPP down-sampling. Convolution and ReLU are used in the encoder's last layer. The encoder raises the number of feature channels from 1 to 512 and reduces the feature size from 256 × 256 to 1 × 1. The decoder is the generator's third part; it works via seven 4 × 4 deconvolution layers, projecting the features learned by the encoder back into the pixel space. To retain details, each decoder group contains a deconvolution layer followed by a BatchNorm layer and a ReLU layer, and employs a skip connection to fuse the information with the corresponding encoder features. The first three groups also use a dropout layer with a probability of 0.5 to prevent over-fitting during training, while the last group directly employs a deconvolution layer and a tanh function layer. In the final group, the decoder reduces the number of feature channels from 512 to 1 and restores the size of the input image.
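As a concrete illustration of the first part, the sketch below shows the quality-map conditioning step in PyTorch: the wrapped interferogram and its coherence map are concatenated along the channel axis, and a 1 × 1 convolution restores the channel count before the encoder. Module and variable names are illustrative assumptions, not from the paper.

```python
import torch
import torch.nn as nn

class QualityMapConditioning(nn.Module):
    """Splice the wrapped interferogram with its coherence (quality) map, then
    restore the channel count with a 1 x 1 convolution before the encoder."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, wrapped_phase, coherence_map):
        x = torch.cat([wrapped_phase, coherence_map], dim=1)  # splice along channels
        return self.fuse(x)                                   # back to `channels`

cond = QualityMapConditioning()
wrapped = torch.randn(2, 1, 256, 256)   # wrapped interferogram batch
coherence = torch.rand(2, 1, 256, 256)  # coherence values in [0, 1]
print(cond(wrapped, coherence).shape)   # torch.Size([2, 1, 256, 256])
```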
Part 1 of the encoder is similar to, but also different from, the conditional input in the CGAN: the quality map here refers not only to the data labels and categories that the conditions in the CGAN represent, but also to a criterion that helps the model reduce the difference between the images before and after unwrapping. The atrous spatial pyramid pooling (ASPP) module replaces the down-sampling operation in the generator's encoder. It combines dilated-convolution feature maps with different sampling rates to capture context information, expanding the receptive field without sacrificing the feature spatial resolution. This is conducive to accurately extracting the wrapped interferogram's feature information and enhances the robustness of the phase unwrapping. However, an improper setting of the module's convolutional sampling rates can easily lead to the gridding effect shown in Figure 3, causing the loss of relevant information.
A reasonable sampling rate setting should satisfy the following Equation (7):
$M_i = \max\left[M_{i+1} - 2r_i,\; M_{i+1} - 2\left(M_{i+1} - r_i\right),\; r_i\right]$ (7)
where $M_i$ represents the maximum usable sampling rate for the i-th dilated convolution and $r_i$ represents the sampling rate of the i-th dilated convolution, with $M_n = r_n$ by default. It follows from the above equation that the sampling rates should not share a greatest common divisor greater than 1; otherwise, the gridding effect persists. Following this principle, the sampling rate combination for the ASPP module is reconfigured as 1, 2, 7, and 15. A schematic diagram of this module is shown in Figure 4.
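The sketch below, a hedged illustration rather than the authors' implementation, builds an ASPP block with the reconfigured rates (1, 2, 7, 15) and checks the common-divisor rule; the stride-2 branches make the module double as the encoder's down-sampling step, and the channel sizes and the concatenation-plus-1 × 1 fusion are assumptions.

```python
import math
from functools import reduce

import torch
import torch.nn as nn

RATES = (1, 2, 7, 15)
# Design rule derived from Equation (7): the rates must not share a common
# divisor greater than 1, or the gridding effect persists.
assert reduce(math.gcd, RATES) == 1

class ASPPDown(nn.Module):
    """Parallel dilated convolutions at several rates, fused by a 1 x 1 conv.
    With kernel 3, stride 2, and padding == dilation, every branch halves the
    spatial size identically, so the outputs can be concatenated."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=r, dilation=r)
            for r in RATES
        )
        self.fuse = nn.Conv2d(len(RATES) * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 64, 128, 128)
print(ASPPDown(64, 128)(x).shape)  # torch.Size([1, 128, 64, 64])
```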
The residual module is embedded in the generator before each down-sampling operation to avoid losing training-set characteristics when the network has many layers; it enables the network model to reduce the number of parameter calculations, prevents network degradation, and improves the training accuracy and efficiency. The structure of the residual module is shown in Figure 5.
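A minimal bottleneck residual block of this kind is sketched below; the 1 × 1 reduce, 3 × 3, 1 × 1 expand layout with an identity skip is standard, while the reduction ratio of 4 is an assumption.

```python
import torch
import torch.nn as nn

class BottleneckResidual(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 expand, plus an identity skip connection.
    The skip lets gradients bypass the body, countering network degradation."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction  # the "bottleneck" width
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # same shape in and out

print(BottleneckResidual(64)(torch.randn(1, 64, 32, 32)).shape)  # [1, 64, 32, 32]
```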
The pu-pix2pix discriminator is unchanged from pix2pix and employs a Markovian discriminator (PatchGAN) structure. The fundamental idea is to divide the input image into N × N patches, each independent of the others; judge whether each patch is real or fake; and generate a matrix of prediction probabilities. The final discriminator output is the average of these results. Figure 6 below depicts its structure.
PatchGAN is built on a fully convolutional architecture. The first discriminator layer consists of a 4 × 4 convolution layer and a LeakyReLU layer with a slope of 0.2; then the combination of a 4 × 4 convolution, BatchNorm, and LeakyReLU with a slope of 0.2 is repeated three times. Lastly, a convolution with a single output channel followed by a sigmoid activation maps the output to a 32 × 32 matrix. Each element of the output matrix represents the predicted value of the corresponding patch of the generated image, and all the matrix elements are averaged to give the discriminator's final output. PatchGAN maintains resolution and detail in image conversion and can judge the local unwrapping quality across the entire image, pushing the generator to create high-quality phase unwrapping results.
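The following PatchGAN sketch maps a concatenated (wrapped, unwrapped) pair to a 32 × 32 probability matrix. The channel widths and the kernel/padding choices in the last two layers are assumptions, chosen so that a 256 × 256 input yields the 32 × 32 map described above.

```python
import torch
import torch.nn as nn

def block(in_ch, out_ch, norm=True):
    """4x4 stride-2 convolution, optional BatchNorm, LeakyReLU(0.2)."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
    if norm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return layers

discriminator = nn.Sequential(
    *block(2, 64, norm=False),   # layer 1 has no BatchNorm; 256 -> 128
    *block(64, 128),             # 128 -> 64
    *block(128, 256),            # 64  -> 32
    nn.Conv2d(256, 512, kernel_size=3, stride=1, padding=1),
    nn.BatchNorm2d(512),
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(512, 1, kernel_size=3, stride=1, padding=1),
    nn.Sigmoid(),                # per-patch real/fake probabilities
)

pair = torch.randn(1, 2, 256, 256)  # wrapped phase + candidate unwrapped phase
scores = discriminator(pair)
print(scores.shape, scores.mean().item())  # [1, 1, 32, 32]; mean = final output
```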

2.4. Loss Function

pu-pix2pix combines an adversarial loss function and an L1 loss function. The adversarial loss is similar to that of other GAN models and can be expressed as:
$\min_G \max_D L_{CGAN}(G, D) = \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x,z}\left[\log\left(1 - D\left(x, G(x, z)\right)\right)\right]$ (8)
where x and y represent the image samples before and after unwrapping; $L_{CGAN}(G, D)$ represents the value function shared by G and D; $\mathbb{E}$ denotes the expected value over the training data; and z denotes the dropout noise applied to certain layers of the generator. G aims to minimize this objective while D tries to maximize it, that is, $\min_G \max_D L_{CGAN}(G, D)$.
The L1 norm is the loss function commonly used to measure image differences in deep learning. In phase unwrapping, the L1 loss pulls the unwrapped phase created by the generator closer to the genuine phase image. The function can be expressed as:
$L_{L1}(G) = \mathbb{E}_{x,y,z}\left[\left\lVert y - G(x, z)\right\rVert_1\right]$ (9)
The L1 loss function and the CGAN loss function are employed in tandem because the former recovers the low-frequency portion of the image and the latter recovers the high-frequency portion, improving the authenticity of the image generated by G. The final pix2pix objective is the minimax game between G and D under this regularizing constraint, which may be represented as:
$G^* = \arg\min_G \max_D L_{CGAN}(G, D) + \lambda L_{L1}(G)$ (10)
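In training code, Equations (8)-(10) reduce to two loss terms per step. The sketch below is a hedged rendering with binary cross-entropy on the PatchGAN scores; the weight λ = 100 follows the original pix2pix paper and is an assumption here, as the section does not state a value.

```python
import torch
import torch.nn as nn

bce, l1, lam = nn.BCELoss(), nn.L1Loss(), 100.0  # lam is an assumed lambda

def discriminator_loss(d_real, d_fake):
    # D maximizes log D(x, y) + log(1 - D(x, G(x, z))): label real pairs 1, fakes 0.
    return bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))

def generator_loss(d_fake, fake_unwrapped, real_unwrapped):
    # G minimizes the adversarial term plus lambda times the L1 term of Eq. (9).
    return bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake_unwrapped, real_unwrapped)

# Shapes follow the 32 x 32 PatchGAN output and 256 x 256 phase images.
d_fake = torch.rand(2, 1, 32, 32)
fake = torch.randn(2, 1, 256, 256)
real = torch.randn(2, 1, 256, 256)
print(generator_loss(d_fake, fake, real).item())
```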

3. Experiments and Results

In this section, the detailed dataset generation process, the performance evaluation index, and the experimental settings are described. Two experiments are performed to quantitatively and qualitatively verify the unwrapping performance of the proposed method. In the first experiment, the accuracy and robustness of the pu-pix2pix unwrapping method are evaluated using simulated interferometric data, and the results are compared with those of the quality guide, least squares, MCF (minimum cost flow), U-net, and pix2pix methods. In the second experiment, two sets of real data are used: the first is from ALOS (Advanced Land Observing Satellite) PALSAR, and the second consists of airborne C-band interferometric data obtained from an outfield experiment of the Radar Group of the Chinese Academy of Surveying and Mapping Sciences in Emeishan City, Sichuan Province. The results of the proposed unwrapping method are compared with those of the above unwrapping methods.

3.1. Dataset Generation

Deep learning requires a large number of training samples in its early stages, so the creation of datasets is a critical step. High-quality training samples improve the accuracy of the test results, whereas an insufficient number of training samples produces unreliable learning. To help the generator learn the direct mapping between input and output, the pu-pix2pix model must pre-import paired images (that is, images created by combining the interferogram and the unwrapped phase image). Because there are no publicly available datasets in the field of InSAR phase unwrapping, this paper uses simulated and real InSAR data to jointly construct the phase unwrapping datasets, which provide a sample basis for the application of deep learning, ensure diversity in the number and types of samples, and improve the generalization ability of the trained pu-pix2pix unwrapping method.
The interferometric phase primarily consists of a flat earth phase, a topography phase, a deformation phase, an atmospheric phase, and a noise phase, according to interferometric height measurement theory [44]. It can be expressed as:
$\phi_{int} = \phi_{ref} + \phi_{top} + \phi_{def} + \phi_{atm} + \phi_{noi}$ (11)
where $\phi_{ref}$ represents the flat earth phase; $\phi_{top}$ denotes the topography phase; $\phi_{def}$ is the deformation phase; $\phi_{atm}$ stands for the atmospheric phase; and $\phi_{noi}$ refers to the noise phase. The noise phase is random and appears in the phase model as high-frequency components that can be reduced by low-pass phase filtering. The atmospheric phase has significant spatial autocorrelation with topography and contributes low-frequency information, so corresponding filtering can help reduce atmospheric effects. The flat earth phase, topography phase, and deformation phase dominate the real interferometric phase.
The goal of our research is to directly recover the real phase from the wrapped interferogram, so the flat earth phase is not required; the dual-antenna imaging method is utilized to simulate the InSAR phase, and the deformation phase is also ignored. As a result, the simulated interferometric phase consists mostly of a topography phase, an atmospheric phase, and a noise phase. To reduce training memory requirements, the simulated phase images are divided into 256 × 256 blocks. To analyze the effect of noise, different noise strengths are added to the image blocks, with the mean coherence used as the indicator: the closer the coherence is to 1, the smaller the phase noise. A total of 1000 sets of simulated InSAR data are created. The steps for building the simulated data are as follows, with the simulation results shown in Figure 7; the legend on the right side of each phase plot maps color to phase value.
(1) Simulate the topography phase. There are two options: one is to specify the row/column size and elevation range and generate a simulated digital elevation model from a random polynomial to obtain the topography phase image; the other is to use existing DEM (digital elevation model) data, simulate the slant-range imaging process of dual-antenna SAR sensors with a specified baseline length, and obtain a phase image containing only the terrain contribution, which is taken as the true terrain phase image.
(2) Simulate the atmospheric phase noise. The power spectrum inversion method is used: a complex Gaussian random number matrix is filtered with a power spectrum function consistent with atmospheric turbulence, and the inverse Fourier transform then yields the atmospheric phase noise.
(3) Rewrap the phase. Combine the terrain phase with the atmospheric phase and wrap the result into (−π, π].
(4) Add noise. A gamma distribution is used to simulate InSAR phase noise, and the wrapped phase image is contaminated to obtain the noisy wrapped phase image.
(5) Merge the images. To meet the paired-image input requirement of the pu-pix2pix model, cut the phase image and the noisy wrapped phase image to 256 × 256 and combine them into 256 × 512 images. A minimal sketch of steps (3)-(5) follows this list.
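This is a hedged numpy sketch of steps (3)-(5), assuming the terrain-plus-atmosphere phase has already been simulated; the stand-in surface and the gamma-noise parameters (shape, scale, and zero-centering) are illustrative assumptions, not the authors' values.

```python
import numpy as np

rng = np.random.default_rng(0)

def wrap(phase):
    return np.pi - np.mod(np.pi - phase, 2 * np.pi)  # into (-pi, pi]

# Stand-in for the simulated terrain + atmospheric phase of steps (1)-(2).
true_phase = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1) * 0.01

wrapped = wrap(true_phase)                           # step (3): rewrap

shape_k, scale = 2.0, 0.4                            # scale controls noise strength
noise = rng.gamma(shape_k, scale, size=wrapped.shape) - shape_k * scale  # zero-mean
noisy_wrapped = wrap(wrapped + noise)                # step (4): add noise, rewrap

pair = np.concatenate([noisy_wrapped, true_phase], axis=1)  # step (5): paired image
print(pair.shape)                                    # (256, 512)
```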
The real-world data sources include Sentinel-1, ALOS PALSAR, and airborne SAR data. The real InSAR phase unwrapping datasets consist of the phase images before and after unwrapping produced during data processing. A total of 1000 sets of real InSAR data are generated for model training using a segmentation operation. The airborne InSAR data of the region are shown in Figure 8.

3.2. Performance Evaluation Index

Qualitative and quantitative methodologies are applied to assess the correctness of the proposed unwrapping methods. Qualitative evaluation refers to the visual appraisal of the unwrapping accuracy by inspecting the phase error map and the error statistical curve. The RMSE index is used to quantitatively evaluate the correctness of an unwrapping algorithm; it measures the degree of difference between images, and the smaller the RMSE, the better the unwrapping effect. It is calculated as follows:
$RMSE = \sqrt{\dfrac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(x_{i,j} - y_{i,j}\right)^2}$ (12)
where m and n represent the image size; (i, j) denotes the pixel position in the image; x stands for the reference image, in this case the correct unwrapped phase; and y represents the image to be evaluated, i.e., the phase image produced by each method.
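Equation (12) as a function, for reference; x is the reference unwrapped phase and y the result under evaluation.

```python
import numpy as np

def rmse(x: np.ndarray, y: np.ndarray) -> float:
    """Root-mean-square error between a reference x and a result y, Eq. (12)."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

print(rmse(np.zeros((4, 4)), np.full((4, 4), 2.0)))  # 2.0
```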

3.3. Experimental Settings

All our testing was carried out on a PC with an Intel Core i7-13700KF CPU, an NVIDIA GeForce RTX 3080Ti GPU, and 64 GB of memory. pu-pix2pix was trained for 300 epochs on the TensorFlow platform with a batch size of 2, using the Adam optimizer to speed up training. The learning rate was initially set to 0.0002 and decayed exponentially toward 0. The trained pu-pix2pix was then applied to both the simulated and the real-world data.
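For concreteness, the stated optimizer settings might be configured as below; PyTorch is used here only for consistency with the earlier sketches (the authors trained on TensorFlow), and the decay factor and Adam betas are assumptions.

```python
import torch

model = torch.nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in for the generator
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))
# gamma = 0.97 drives 2e-4 down to roughly 2e-8 over 300 epochs, i.e. toward 0.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.97)

for epoch in range(300):
    # ... one pass over the 256 x 256 training pairs with batch size 2 ...
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # ~2.1e-8
```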

3.4. Analysis of Unwrapping Results Based on Simulated Data

A set of phase data with a mean coherence coefficient of 0.7 and a size of 256 × 256 was simulated. Figure 9 depicts the simulation: Figure 9a shows the actual phase value, i.e., the ideal phase after unwrapping, whereas Figure 9b shows the phase after wrapping and noise addition. The above-mentioned methods were used to conduct unwrapping experiments on these simulated data. The results shown in Figure 10a-f correspond to the unwrapped phase image, the error with respect to the ideal phase, and the error statistics histogram for the quality guide, LS, MCF, U-net, and pix2pix methods and the method proposed in this paper. According to the error maps and error statistical histograms, the method suggested in this paper is more concentrated where the phase error is close to zero than the other five methods, and it has the best accuracy.
The results of each method are quantitatively evaluated using the RMSE, and the running times of each method are also tallied in Table 1. The quality guide and LS methods both produce poor unwrapping results. The RMSE between the deep-learning-based methods' unwrapping results and the original phase image is smaller than that of the traditional methods, and the RMSE of the proposed method's result is the best. In terms of time, the deep-learning-based methods consume significantly less time than the traditional methods.

3.5. Analysis of Unwrapping Results Based on Real Data

Two sets of real data are used for experimental verification. The first set is derived from ALOS PALSAR-2 interferometric data; the reference real phase is obtained from the ALOS 12.5 m DEM, and the mean coherence coefficient is 0.6, as shown in Figure 11. Figure 12 depicts the unwrapped phase results, phase error maps, and error statistical curves for the first set of real data, obtained by the six unwrapping methods without phase filtering. The unwrapped phase results of the quality guide and LS methods clearly differ significantly from the reference phase. Among the remaining four methods, the unwrapped phase image obtained using the proposed method is the closest to the reference phase, and its error curve is the most sharply concentrated at zero error. The method proposed in this paper thus has the highest unwrapping accuracy.
The RMSE index is employed for a quantitative examination of each method's unwrapping results, and the running time of each algorithm is recorded; Table 2 summarizes the findings. For both sets of real data, the proposed method has the lowest RMSE. In the first set, it reduces the RMSE by 0.03, 0.73, and 0.53 compared with the pix2pix, U-net, and MCF unwrapping methods, respectively. A similar comparison holds for the second set of experimental data: the RMSEs of the quality guide and LS methods are large, and compared to the MCF, U-net, and pix2pix methods, the RMSE of the proposed method is reduced by 0.68, 0.45, and 0.03, respectively. Considering only the running time, the three deep-learning-based methods take very similar times, ranging from 0.1 s to 0.2 s, far less than the three traditional unwrapping methods, which range from 1 s to 5 s. In terms of both unwrapping accuracy and time, the proposed approach remains the best choice.
The second set of real data comprises airborne interferometric data from a field flight by the research group. The real phase data are converted from DEM products of this area, and the average coherence coefficient of these experimental data is 0.6. The true phase of the data and their wrapped phase are shown in Figure 13. Figure 14 depicts the unwrapped phase results, phase error maps, and error statistical curves for the second set of real data, obtained using the above six unwrapping methods.
To test the noise resistance of the method, wrapped phase images with different noise intensities are obtained by adding simulated gamma-distributed phase noise to, and filtering, the first set of real phase data. The results are shown in Figure 15a, where the average coherence coefficient ranges from 0.5 to 0.9, with one phase image taken every 0.1. The aforementioned six methods are employed to unwrap these wrapped phases; the results for each method at each noise level are illustrated in Figure 15b-g, and Figure 16 depicts the statistical curves of the quantitative unwrapping indicator for the various methods. When the noise level in the real data steadily increases, the proposed method maintains high accuracy and good noise resistance. The same conclusion follows from the analysis of the two sets of different data: when confronted with data with different coherence coefficients, the RMSE of the proposed method's unwrapping result does not change significantly and stays in a relatively stable state, more stable than that of the other methods. The traditional methods' unwrapping process is more susceptible to noise in SAR interferometric data, whereas the proposed method directly learns the mapping relationship from the wrapped phase image to the unwrapped phase through the data-driven framework of deep learning; accordingly, its performance index is superior to the traditional unwrapping algorithms under all noise conditions in the experiment.

4. Discussion

This study presented pu-pix2pix, a one-step phase unwrapping method based on the pix2pix model. Although the method achieves some advances in phase unwrapping, it still has several weaknesses that need to be investigated and improved. The first is the variety of the InSAR dataset. The diversity of the dataset in this paper was substantially increased by merging real and simulated InSAR data. However, due to the side-looking imaging geometry of SAR, the geometric distortions of foreshortening, layover, and radar shadow in SAR images produce phase deformation, and a large number of additional datasets is still required to increase the network model's generalization capability for varied terrain situations. Second, the improved pix2pix-network-based phase unwrapping technique itself still has room for improvement: when the fringes in the wrapped phase image are dense, the deep learning unwrapping method proposed in this research struggles to produce the expected outcomes. The next step in resolving these issues is to broaden the training set categories. To produce good unwrapping results for wrapped phase images with dense fringes, block processing can be performed first, each area processed separately, and the results merged. The research in this paper is only at the experimental stage of phase unwrapping; although the proposed method achieved better results in the experiments, it has not yet been applied to engineering projects. Subsequently, the unwrapped results will be incorporated into the InSAR processing chain to generate DEM products.

5. Conclusions

This study proposed a robust InSAR phase unwrapping method that combines the pix2pix model with the quality guide concept to successfully mitigate the influence of interferogram noise during InSAR phase unwrapping. The pu-pix2pix method primarily alters the pix2pix network's generator. A preprocessing step was added before the original pix2pix generator's encoder-decoder structure, in which the coherence image conditions the wrapped phase input to lessen the image difference before and after unwrapping. To produce high-quality unwrapping results, the bottleneck residual module was integrated before each ASPP down-sampling step of the generator, preventing overfitting. As a one-step deep learning unwrapping method, pu-pix2pix transforms the phase unwrapping problem into the discovery of the mapping relationship between the wrapped phase and the unwrapped phase. The simulated and real-world data revealed that this approach has superior unwrapping accuracy and robustness against phase noise, surpassing several standard InSAR phase unwrapping algorithms in performance.
The quality guide concept was thus brought into a generative adversarial network by integrating it with the pix2pix model. To ensure its applicability to real data in the future, targeted enhancements will be required to apply it to large-scale phase data and to difficult phase and terrain situations in InSAR unwrapping processing.

Author Contributions

Conceptualization, L.Z. and G.H.; methodology, software, validation, writing—original draft preparation, writing—review and editing, visualization, data curation, L.Z.; supervision, G.H., Y.L. and W.H.; funding acquisition, G.H., S.Y. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2022YFB3901604; the National Natural Science Foundation of China, grant number 42171442; and the Chinese Academy of Surveying and Mapping Fundamental Research Project, grant number AR2105.

Data Availability Statement

Not applicable.

Acknowledgments

We thank all editors and reviewers for their valuable comments and suggestions for improving this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Itoh, K. Analysis of the phase unwrapping algorithm. Appl. Opt. 1982, 21, 2470. [Google Scholar] [CrossRef]
  2. Goldstein, R.M.; Zebker, H.A.; Werner, C.L. Satellite radar interferometry: Two-dimensional phase unwrapping. Radio Sci. 1988, 23, 713–720. [Google Scholar] [CrossRef]
  3. Lim, H.; Wei, X.; Huang, X. Two new practical methods for phase unwrapping. In Proceedings of the International Geoscience & Remote Sensing Symposium, Firenze, Italy, 10–14 July 1995; pp. 196–198. [Google Scholar]
  4. Flynn, T.J. Consistent 2-D phase unwrapping guided by a quality map. In Proceedings of the International Geoscience & Remote Sensing Symposium, Lincoln, NE, USA, 31 May 1996; pp. 2057–2059. [Google Scholar]
  5. Quiroga, J.A.; González-Cano, A.; Bernabeu, E. Phase-unwrapping algorithm based on an adaptive criterion. Appl. Opt. 1995, 34, 2560–2563. [Google Scholar] [CrossRef]
  6. Wei, X.; Cumming, I. A region-growing algorithm for InSAR phase unwrapping. IEEE Trans. Geosci. Remote Sens. 1999, 37, 124–134. [Google Scholar] [CrossRef]
  7. Chen, Q.; Yang, Y.; Liu, G.; Cheng, H.; Liu, W. InSAR Phase Unwrapping Using Least Squares Method with Integer Ambiguity Resolution and Edge Detection. Acta Geod. Cartogr. Sin. 2012, 41, 441–448. [Google Scholar]
  8. Takajo, H.; Takahashi, T. Least-squares phase estimation from the phase difference. J. Opt. Soc. Am. A 1988, 5, 1818–1827. [Google Scholar] [CrossRef]
  9. Pritt, M.D.; Shipman, J.S. Least-squares two-dimensional phase unwrapping using FFT’s. IEEE Trans. Geosci. Remote Sens. 1994, 32, 706–708. [Google Scholar] [CrossRef]
  10. Fried, D.L. Least-square fitting a wave-front distortion estimate to an array of phase-difference measurements. Opt. Soc. Am. J. 1977, 67, 370–375. [Google Scholar] [CrossRef]
  11. Costantini, M. A novel phase unwrapping method based on network programming. IEEE Trans. Geosci. Remote Sens. 1998, 36, 813–821. [Google Scholar] [CrossRef]
  12. Chen, C.W.; Zebker, H.A. Two-dimensional phase unwrapping with use of statistical models for cost functions in nonlinear optimization. J. Opt. Soc. Am. A 2001, 18, 338–351. [Google Scholar] [CrossRef]
  13. Bioucas-Dias, J.M.; Valadao, G. Phase Unwrapping via Graph Cuts. IEEE Trans. Image Process. 2007, 16, 698–709. [Google Scholar] [CrossRef] [PubMed]
  14. Xie, X.M.; Zeng, Q.N. Efficient and robust phase unwrapping algorithm based on unscented Kalman filter, the strategy of quantizing paths-guided map, and pixel classification strategy. Appl. Opt. 2015, 54, 9294–9307. [Google Scholar] [CrossRef] [PubMed]
  15. Xie, X.M.; Li, Y.H. Enhanced phase unwrapping algorithm based on unscented Kalman filter, enhanced phase gradient estimator, and path-following strategy. Appl. Opt. 2014, 53, 4049–4060. [Google Scholar] [CrossRef]
  16. Xie, X.; Pi, Y. Phase noise filtering and phase unwrapping method based on unscented Kalman filter. Syst. Eng. Electron. J. 2011, 22, 365–372. [Google Scholar] [CrossRef]
  17. Loffeld, O.; Nies, H.; Knedlik, S.; Yu, W. Phase Unwrapping for SAR Interferometry—A Data Fusion Approach by Kalman Filtering. IEEE Trans. Geosci. Remote Sens. 2007, 46, 47–58. [Google Scholar] [CrossRef]
  18. Martinez-Espla, J.J.; Martinez-Marin, T.; Lopez-Sanchez, J.M. Using a Grid-Based Filter to Solve InSAR Phase Unwrapping. IEEE Geosci. Remote Sens. Lett. 2008, 5, 147–151. [Google Scholar] [CrossRef]
  19. Martinez-Espla, J.J.; Martinez-Marin, T.; Lopez-Sanchez, J.M. An Optimized Algorithm for InSAR Phase Unwrapping Based on Particle Filtering, Matrix Pencil, and Region-Growing Techniques. IEEE Geosci. Remote Sens. Lett. 2009, 6, 835–839. [Google Scholar] [CrossRef]
  20. Martinez-Espla, J.J.; Martinez-Marin, T.; Lopez-Sanchez, J.M. A Particle Filter Approach for InSAR Phase Filtering and Unwrapping. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1197–1211. [Google Scholar] [CrossRef]
  21. Yu, H.; Lan, Y.; Yuan, Z.; Xu, J.; Lee, H. Phase Unwrapping in InSAR: A Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 40–58. [Google Scholar] [CrossRef]
  22. Pu, L.; Zhang, X.; Zhou, Z.; Li, L.; Zhou, L.; Shi, J.; Wei, S. A Robust InSAR Phase Unwrapping Method via Phase Gradient Estimation Network. Remote Sens. 2021, 13, 4564. [Google Scholar] [CrossRef]
  23. Gao, J.; Jiang, H.; Sun, Z.; Wang, R.; Han, Y. A Parallel InSAR Phase Unwrapping Method Based on Separated Continuous Regions. Remote Sens. 2023, 15, 1370. [Google Scholar] [CrossRef]
  24. Zhou, L.; Yu, H.; Lan, Y.; Xing, M. Artificial Intelligence Interferometric Synthetic Aperture Radar Phase Unwrapping: A Review. IEEE Geosci. Remote Sens. Mag. 2021, 9, 10–28. [Google Scholar]
  25. Rouet-leduc, B.; Dalaison, M.; Johnson, P.A.; Jolivet, R. Deep learning InSAR: Atmospheric noise removal and small deformation signal extraction from InSAR time series using a convolutional autoencoder. Am. Geophys. Union Fall Meet. 2019, 2019, G21A-07. [Google Scholar]
  26. Sun, X.; Zimmer, A.; Mukherjee, S.; Kottayil, N.K.; Ghuman, P.; Cheng, I. DeepInSAR: A Deep Learning Framework for SAR Interferometric Phase Restoration and Coherence Estimation. Remote Sens. 2020, 12, 2340. [Google Scholar] [CrossRef]
  27. Sica, F.; Gobbi, G.; Rizzoli, P.; Bruzzone, L. Φ-Net: Deep Residual Learning for InSAR Parameters Estimation. IEEE Trans. Geosci. Remote Sens. 2021, 59, 3917–3941. [Google Scholar] [CrossRef]
  28. Zhou, L.; Yu, H.; Yang, L. Deep Convolutional Neural Network-Based Robust Phase Gradient Estimation for Two-Dimensional Phase Unwrapping Using SAR Interferograms. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4653–4665. [Google Scholar] [CrossRef]
  29. Zhang, Z.; Qian, J.; Wang, Y.; Yang, X. An Improved Least Square Phase Unwrapping Algorithm Combined with Convolutional Neural Network. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3384–3387. [Google Scholar] [CrossRef]
  30. Zhou, L.; Yu, H.; Yang, L.; Xing, M. Deep Learning-Based Branch-Cut Method for InSAR Two-Dimensional Phase Unwrapping. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  31. Ferraioli, G.; Pascazio, V.; Schirinzi, G.; Vitale, S.; Xing, M.; Yu, H.; Zhou, L. Joint Phase Unwrapping and Speckle Filtering by Using Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2021, 3376–3379. [Google Scholar] [CrossRef]
  32. Wu, Z.; Zhang, H.; Wang, Y.; Wang, T.; Wang, R. A Deep Learning Based Method for Local Subsidence Detection and InSAR Phase Unwrapping: Application to Mining Deformation Monitoring. In Proceedings of the IGARSS 2020–2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 20–23. [Google Scholar] [CrossRef]
  33. Zhou, L.; Yu, H.; Pascazio, V.; Xing, M. PU-GAN: A One-Step 2-D InSAR Phase Unwrapping Based on Conditional Generative Adversarial Network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–10. [Google Scholar] [CrossRef]
  34. Spoorthi, G.E.; Gorthi, S.; Gorthi, R.K.S.S. PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping. IEEE Signal Process. Lett. 2018, 26, 54–58. [Google Scholar] [CrossRef]
  35. Zhang, J.; Tian, X.; Shao, J.; Luo, H.; Liang, R. Phase unwrapping in optical metrology via denoised and convolutional segmentation networks. Opt. Express 2019, 27, 14903–14912. [Google Scholar] [CrossRef]
  36. Spoorthi, G.E.; Gorthi, R.K.S.S.; Gorthi, S. PhaseNet 2.0: Phase Unwrapping of Noisy Data Based on Deep Learning Approach. IEEE Trans. Image Process. 2020, 29, 4862–4872. [Google Scholar] [CrossRef]
  37. Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 1175–1183. [Google Scholar] [CrossRef]
  38. Ghiglia, D.C.; Pritt, M. Two-Dimensional Phase Unwrapping: Theory, Algorithms, and Software; John Wiley & Sons, Inc.: New York, NY, USA, 1998. [Google Scholar]
  39. Liu, G.; Wang, R.; Deng, Y.K.; Chen, R.; Shao, Y.; Yuan, Z. A New Quality Map for 2-D Phase Unwrapping Based on Gray Level Co-Occurrence Matrix. IEEE Geosci. Remote Sens. Lett. 2013, 11, 444–448. [Google Scholar] [CrossRef]
  40. Kemao, Q.; Gao, W.; Wang, H. Windowed Fourier-filtered and quality-guided phase-unwrapping algorithm. Appl. Opt. 2008, 47, 5420–5428. [Google Scholar] [CrossRef] [PubMed]
  41. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
  42. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  43. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. arXiv 2017, arXiv:1611.07004. [Google Scholar]
  44. Xing, M.; Pascazio, V.; Yu, H. A Special Issue on Synthetic Aperture Radar Interferometry [From the Guest Editors]. IEEE Geosci. Remote Sens. Mag. 2020, 8, 6–7. [Google Scholar] [CrossRef]
Figure 1. The pix2pix network.
Figure 2. The generator structure.
Figure 3. Schematic diagram of the ASPP grid problem.
Figure 4. Atrous spatial pyramid pooling module with reasonable sampling settings.
Figure 5. The residual module.
Figure 6. The framework of the discriminator.
Figure 7. Generation of the simulated InSAR phase unwrapping datasets: (a) simulated topographic phase image; (b) simulated atmospheric phase image; (c) phase rewrapping map; (d) phase rewrapping map after noise; and (e) map merge.
Figure 8. Generation of the real InSAR phase unwrapping datasets: (a) the InSAR wrapped phase image of a certain area; (b) unwrapped phase image for the MCF method; and (c) map merge.
Figure 9. Simulated wrapped phase and true phase: (a) unwrapped phase and (b) wrapped phase.
Figure 10. Unwrapping results of simulated data: (a) unwrapped phase and error diagram of quality guide method; (b) unwrapped phase and error diagram of LS method; (c) unwrapped phase and error diagram of MCF method; (d) unwrapped phase and error diagram of U-net method; (e) unwrapped phase and error diagram of pix2pix method; and (f) unwrapped phase and error diagram of proposed method.
Figure 11. Real wrapped phase and its unwrapped phase of experimental data 1: (a) unwrapped phase and (b) wrapped phase.
Figure 12. Unwrapping result of real experimental data 1: (a) unwrapped phase and error diagram of quality guide method; (b) unwrapped phase and error diagram of LS method; (c) unwrapped phase and error diagram of MCF method; (d) unwrapped phase and error diagram of U-net method; (e) unwrapped phase and error diagram of pix2pix method; and (f) unwrapped phase and error diagram of proposed method.
Figure 13. Real wrapped phase and its unwrapped phase of experimental data 2: (a) unwrapped phase and (b) wrapped phase.
Figure 14. Unwrapping result of real experimental data 2: (a) unwrapped phase and error diagram of quality guide method; (b) unwrapped phase and error diagram of LS method; (c) unwrapped phase and error diagram of MCF method; (d) unwrapped phase and error diagram of U-net method; (e) unwrapped phase and error diagram of pix2pix method; and (f) unwrapped phase and error diagram of proposed method.
Figure 15. Wrapped phases with various noise levels on real data and the unwrapping results of the different methods: (a) wrapped phase with different noise intensities; (b) unwrapped phase of quality guide method; (c) unwrapped phase of LS method; (d) unwrapped phase of MCF method; (e) unwrapped phase of U-net method; (f) unwrapped phase of pix2pix method; and (g) unwrapped phase of proposed method.
Figure 16. The unwrapping error curves with various noise levels on real data.
Table 1. Evaluation of simulated data unwrapping results for each algorithm.

Method          RMSE (Rad)   Time (s)
Quality guide   5.5797       1.3982
LS              5.0503       1.1495
MCF             1.8911       3.2251
U-net           1.5424       0.1474
pix2pix         1.5514       0.1517
pu-pix2pix      1.5129       0.1526
Table 2. Evaluation of the real data unwrapping results of each algorithm.

                The First Set of Real Data   The Second Set of Real Data
Method          RMSE (Rad)   Time (s)        RMSE (Rad)   Time (s)
Quality guide   14.3751      1.4734          4.1260       1.3154
LS              13.0493      1.3549          9.6236       1.3241
MCF             2.7510       4.0452          2.8114       3.3454
U-net           2.9514       0.1621          2.5786       0.1579
pix2pix         2.2513       0.1645          2.1568       0.1631
pu-pix2pix      2.2207       0.1631          2.1321       0.1570

