Article

PIC-GAN: A Parallel Imaging Coupled Generative Adversarial Network for Accelerated Multi-Channel MRI Reconstruction

Jun Lv, Chengyan Wang and Guang Yang

1 School of Computer and Control Engineering, Yantai University, Yantai 264005, China
2 Human Phenome Institute, Fudan University, Shanghai 201203, China
3 Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
4 National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
* Authors to whom correspondence should be addressed.
Diagnostics 2021, 11(1), 61; https://doi.org/10.3390/diagnostics11010061
Submission received: 5 November 2020 / Revised: 28 December 2020 / Accepted: 29 December 2020 / Published: 2 January 2021
(This article belongs to the Special Issue Advanced Techniques in Body Magnetic Resonance Imaging)

Abstract
In this study, we propose a model combining parallel imaging (PI) with a generative adversarial network (GAN) architecture (PIC-GAN) for accelerated multi-channel magnetic resonance imaging (MRI) reconstruction. The model integrates data fidelity and regularization terms into the generator to benefit from multi-coil information and to provide an "end-to-end" reconstruction. In addition, to better preserve image details during reconstruction, we combine the adversarial loss with pixel-wise losses in both the image and frequency domains. The proposed PIC-GAN framework was evaluated on abdominal and knee MRI images using 2-, 4- and 6-fold accelerations with different undersampling patterns. The performance of PIC-GAN was compared to sparsity-based parallel imaging (L1-ESPIRiT), the variational network (VN) and a conventional GAN with single-channel images as input (zero-filled (ZF)-GAN). Experimental results show that PIC-GAN effectively reconstructs multi-channel MR images at a low noise level and with improved structural similarity. At an under-sampling factor of 6, PIC-GAN yielded the lowest Normalized Mean Square Error (× 10^5) (PIC-GAN: 0.58 ± 0.37, ZF-GAN: 1.93 ± 1.41, VN: 1.87 ± 1.28, L1-ESPIRiT: 2.49 ± 1.04 for abdominal MRI data; PIC-GAN: 0.80 ± 0.26, ZF-GAN: 0.93 ± 0.29, VN: 1.18 ± 0.31, L1-ESPIRiT: 1.28 ± 0.24 for knee MRI data) and the highest Peak Signal to Noise Ratio (PIC-GAN: 34.43 ± 1.92, ZF-GAN: 31.45 ± 4.00, VN: 29.26 ± 2.98, L1-ESPIRiT: 25.40 ± 1.88 for abdominal MRI data; PIC-GAN: 34.10 ± 1.09, ZF-GAN: 31.47 ± 1.05, VN: 30.01 ± 1.01, L1-ESPIRiT: 28.01 ± 0.98 for knee MRI data) compared to ZF-GAN, VN and L1-ESPIRiT. The proposed PIC-GAN framework shows superior reconstruction performance in terms of reducing aliasing artifacts and restoring tissue structures compared to other conventional and state-of-the-art reconstruction methods.

1. Introduction

Magnetic resonance imaging (MRI) is an important non-invasive imaging modality for in vivo clinical studies that offers preeminent soft tissue contrast without ionizing radiation. However, MRI suffers from long scan times, especially for high-resolution 3D/4D imaging sequences; the resulting patient discomfort and fatigue can introduce motion artifacts that degrade the quality of the reconstructed images. Accelerated acquisition and reconstruction are therefore crucial to improve the performance of current MR imaging techniques. Undersampling k-space is a widely used approach to reduce scan time, but it produces aliasing artifacts in the image domain when the data are reconstructed conventionally. Hence, various approaches have been explored to obtain accurate reconstructions without introducing aliasing artifacts, including parallel imaging (PI) and compressed sensing (CS).
PI [1] makes use of multi-channel k-space data for accelerated imaging. PI techniques can be divided into two categories: (1) methods performed in the image domain, which unfold or reverse the aliasing [2], and (2) methods performed in the k-space domain, which require the estimation of missing harmonic data before reconstruction [3]. Since fewer data are acquired in PI, the signal-to-noise ratio (SNR) is reduced. The SNR of the reconstructed image is related to both the acceleration factor (AF) and the geometry factor (g-factor) [4]. It is well known that the g-factor [5] depends on the geometrical distribution of the receiver coils as well as on the sampling pattern [6]. In contrast, CS algorithms adopt a nonlinear process to reconstruct images from undersampled k-space data. It is assumed that the signal is sparse [7,8] in a particular transform domain, e.g., via wavelets [9] or total variation [10,11,12,13], and that the artifacts generated by random sampling are incoherent [7]. Although PI- and CS-based methods can shorten the acquisition time, both require long reconstruction times due to their iterative computations.
Recent studies have demonstrated that deep learning-based MRI reconstruction algorithms are capable of recovering high-quality images from undersampled acquisitions with significantly reduced reconstruction time. Wang et al. [14] trained a convolutional neural network (CNN) architecture to identify the mapping between zero-filled (ZF) images and fully-sampled images. Sun et al. [15] presented ADMM-Net, which learned the parameters of the alternating direction method of multipliers (ADMM) algorithm via back-propagation. Schlemper et al. [16] introduced a deep cascade of CNNs that intercalated data consistency layers for dynamic 2D cardiac MRI reconstruction with Cartesian undersampling. Instead of learning the artifact-free images directly, Lee et al. [17] combined a CNN with PI to estimate the image-degrading patterns and then removed the corresponding artifacts. Furthermore, Lv et al. [18] developed a stack of autoencoders to remove streaking artifacts from radially undersampled free-breathing 3D abdominal MRI data. More recent studies [19,20] integrated the attention mechanism into CNNs for accelerated MRI reconstruction, improving the reconstruction outcome by taking advantage of long-range dependencies across images.
More recently, generative adversarial network (GAN)-based models have been exploited for MRI reconstruction. A GAN consists of a generator and a discriminator: the generator is trained to learn the distribution of the given dataset, while the discriminator is trained to distinguish the generated images from the real ones. Because the discriminator's error is backpropagated to the generator, the two networks are trained with conflicting objectives, giving rise to the adversarial loss. Compared to other loss functions, the adversarial loss can improve perceptual image quality. Shitrit et al. [21] presented a GAN-based model that reconstructs MR images directly from under-sampled k-space data, where the generator estimates the missing k-space data and the discriminator judges the generated samples against the real ones. Yang et al. [22] proposed a deep de-aliasing generative adversarial network named DAGAN, which adopted a residual U-Net as the generator with a loss function consisting of an image-domain loss, a frequency-domain loss, a perceptual loss and an adversarial loss. Quan et al. [23] proposed a GAN-based framework with a cyclic loss, composed of two consecutive networks: one reconstructs the under-sampled k-space data and the other refines the result. Jiang et al. [24] proposed a de-aliasing fine-tuning Wasserstein generative adversarial network (DA-FWGAN) for CS-MRI reconstruction, combining fine-tuning and the Wasserstein distance for training. In addition, Cole et al. [25] proposed an unsupervised GAN framework for MRI reconstruction that does not rely on fully-sampled datasets for supervision. Yuan et al. [26] developed a self-attention GAN that combines the self-attention mechanism with a relative average discriminator (SARA-GAN) for under-sampled k-space data reconstruction; thanks to the long-range global dependencies constructed by the self-attention module, this approach reconstructs images with more realistic details and higher quantitative metrics.
To the best of our knowledge, most previous approaches have used single-channel data for training. In practice, however, multi-channel acquisition provides abundant complementary information. Several endeavors have been made to extend single-channel CNN-based MRI reconstruction methods to multi-channel reconstruction. Hammernik et al. [27] presented a variational network (VN) for multi-channel MRI reconstruction. Subsequently, Zhou et al. [28] developed a PI-CNN reconstruction framework that utilized a cascaded structure intercalating CNN and PI data-consistency layers, allowing the network to make better use of the information from multiple coils; nevertheless, the multi-channel loss function was not integrated into the network architecture. Wang et al. [29] trained a deep complex CNN that yielded a direct mapping between aliased multi-channel images and fully-sampled multi-channel images. Unlike other networks for PI, no prior information (such as a sparse transform or coil sensitivities) was required, so this deep complex CNN framework provides an end-to-end network. It is of note that all these studies have focused on a single domain (either the image domain or the k-space domain).
In this study, we introduce a novel reconstruction framework named Parallel Imaging Coupled Generative Adversarial Network (PIC-GAN), developed to learn a unified model for improved multi-channel MRI reconstruction. We performed experiments on two MRI datasets (abdominal and knee) to validate the efficacy and generalization capacity of the proposed method with different acceleration factors and different sampling trajectories. In addition, we compared our model with the conventional sparsity-based parallel imaging method (L1-ESPIRiT), the VN model and a GAN with single-channel images as input (ZF-GAN).

2. Methods

2.1. Problem Formulation

The idea of PI is to incorporate coil sensitivity encoding into the reconstruction of multi-channel undersampled k-space data. The PI reconstruction can be formulated as an inverse problem, which can be described in matrix-vector form:

$$ y = Ex + n = R\mathcal{F}Sx + n, \tag{1} $$

where $y$ represents the k-space measurements, $x$ the image to be reconstructed, $n$ the noise, and $E$ the forward encoding operator, comprising the sampling trajectory $R$, the Fourier transform $\mathcal{F}$ and the coil sensitivities $S$.
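To make the forward model concrete, the following is a minimal NumPy sketch of the encoding operator $E = R\mathcal{F}S$ and its adjoint, assuming 2D Cartesian sampling; the array shapes and the centered FFT convention are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of Eq. (1): y = R F S x, with the coil dimension first.
# Shapes: x (H, W) complex image, sens (C, H, W) coil sensitivities,
# mask (H, W) binary sampling pattern R. All choices here are illustrative.
import numpy as np

def fft2c(img):
    # centered 2D FFT along the last two axes
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(img, axes=(-2, -1)),
                                       axes=(-2, -1), norm="ortho"), axes=(-2, -1))

def ifft2c(ksp):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp, axes=(-2, -1)),
                                        axes=(-2, -1), norm="ortho"), axes=(-2, -1))

def encode(x, sens, mask):
    """E x = R F S x: coil weighting, Fourier transform, then sampling."""
    return mask[None, ...] * fft2c(sens * x[None, ...])

def encode_adjoint(y, sens, mask):
    """E^H y = S^H F^H R^H y: the zero-filled, coil-combined reconstruction."""
    return np.sum(np.conj(sens) * ifft2c(mask[None, ...] * y), axis=0)
```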
The presence of the operator $E$ and the noise $n$ makes the solution of Equation (1) ill-posed [30]. Thus, Equation (1) is usually solved in an iterative manner with the inclusion of certain regularization terms:

$$ \min_x \frac{1}{2}\|Ex - y\|_2^2 + \sum_i \lambda_i R_i(x), \tag{2} $$

where $\|\cdot\|_2^2$ denotes the squared $\ell_2$ norm, $R_i$ represents the $i$-th regularization term and $\lambda_i$ the corresponding weighting parameter. The regularization term $R_i$ is typically chosen as an $\ell_1$ norm in CS reconstruction [31,32,33]. The ADMM algorithm [15] is usually employed to solve this optimization problem.
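As a concrete toy instance of Equation (2), the sketch below solves the $\ell_1$-regularized problem with the iterative soft-thresholding algorithm (ISTA), reusing the `encode`/`encode_adjoint` helpers from the previous sketch; in practice a wavelet or total-variation regularizer would replace the identity sparsifying transform used here for brevity, and the step size and $\lambda$ are illustrative.

```python
# ISTA for min_x 0.5*||Ex - y||_2^2 + lam*||x||_1 (toy identity-domain sparsity).
import numpy as np

def soft_threshold(z, t):
    # complex magnitude shrinkage: shrink |z| by t, keep the phase
    mag = np.abs(z)
    return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-12)) * z, 0.0)

def ista(y, sens, mask, lam=1e-3, step=1.0, n_iter=50):
    x = encode_adjoint(y, sens, mask)            # zero-filled initialization
    for _ in range(n_iter):
        grad = encode_adjoint(encode(x, sens, mask) - y, sens, mask)
        x = soft_threshold(x - step * grad, step * lam)   # gradient + prox
    return x
```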
Recently, with the introduction of deep learning, $R_i$ can be formulated as a CNN-based regularization term whose parameters are trained from an existing dataset:

$$ \min_x \frac{1}{2}\|Ex - y\|_2^2 + \lambda\,\|x - F_{\mathrm{CNN}}(x_u;\theta)\|_2^2. \tag{3} $$

Here, $x_u$ represents the undersampled image to be reconstructed, $F_{\mathrm{CNN}}(x_u;\theta)$ is the image generated by the CNN network and $\theta$ represents the optimal parameters of the trained CNN.
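One common way to use a CNN prior of the kind in Equation (3) is a hard data-consistency step (the limit of enforcing the acquired samples exactly): the CNN output is transformed to k-space and the measured samples are re-inserted. The sketch below, again reusing the helpers above, is an illustrative variant rather than the authors' exact layer.

```python
# Hard data consistency: keep measured k-space samples, take the CNN's
# prediction everywhere else (illustrative; assumes the helpers defined above).
import numpy as np

def data_consistency(x_cnn, y, sens, mask):
    k_cnn = fft2c(sens * x_cnn[None, ...])           # F S x_cnn on the full grid
    k_dc = np.where(mask[None, ...] > 0, y, k_cnn)   # re-insert measurements
    return np.sum(np.conj(sens) * ifft2c(k_dc), axis=0)
```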
Our objective is to train a generator $G$ that generates a reconstructed MR image $\hat{x}_u = G_{\theta_G}(x_u)$ from a zero-filled reconstruction $x_u$, under the constraint that $G_{\theta_G}(x_u)$ is indistinguishable from the image reconstructed from the fully-sampled k-space data ($\hat{x}$).

The objective of the discriminator $D$ is to maximize the log-likelihood of the conditional probability, with $D(G(x_u)) = 0$ and $D(\hat{x}) = 1$. This can be addressed by defining an adversarial loss $\mathcal{L}_{adv}$, written as a minimax problem between the generator $G_{\theta_G}$ and the discriminator $D_{\theta_D}$, parameterized by $\theta_G$ and $\theta_D$:

$$ \min_{\theta_G}\max_{\theta_D}\, \mathcal{L}_{adv}(\theta_D,\theta_G) = \mathbb{E}_{\hat{x}\sim P_{\mathrm{train}}(\hat{x})}\big[\log D_{\theta_D}(\hat{x})\big] + \mathbb{E}_{x_u\sim P_G(x_u)}\big[\log\big(1 - D_{\theta_D}(G_{\theta_G}(x_u))\big)\big]. \tag{4} $$

Here, $x_u$ is sampled from a fixed latent distribution $P_G(x_u)$ and the real samples $\hat{x}$ come from the real data distribution $P_{\mathrm{train}}(\hat{x})$. Once training converges, $G_{\theta_G}$ can generate images $G_{\theta_G}(x_u)$ similar to $\hat{x}$, and $D_{\theta_D}$ is unable to differentiate between them.
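A minimal TensorFlow 2 sketch of the alternating optimization of Equation (4) is given below; the `generator` and `discriminator` arguments stand for the networks described in the next section, the non-saturating generator loss is a common substitution for the raw minimax form, and none of this is the authors' released code.

```python
# Alternating GAN updates for Eq. (4), written with TF 2.x GradientTape.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

def train_step(generator, discriminator, x_u, x_gt):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        x_rec = generator(x_u, training=True)          # G_theta_G(x_u)
        d_real = discriminator(x_gt, training=True)    # should approach 1
        d_fake = discriminator(x_rec, training=True)   # should approach 0
        d_loss = (bce(tf.ones_like(d_real), d_real)
                  + bce(tf.zeros_like(d_fake), d_fake))
        g_loss = bce(tf.ones_like(d_fake), d_fake)     # non-saturating G loss
    d_grads = d_tape.gradient(d_loss, discriminator.trainable_variables)
    g_grads = g_tape.gradient(g_loss, generator.trainable_variables)
    d_opt.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_grads, generator.trainable_variables))
    return g_loss, d_loss
```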

2.2. The Proposed PIC-GAN Reconstruction Framework

The schema of the proposed PIC-GAN for multi-channel image reconstruction is illustrated in Figure 1. The detailed architectures of the G and D components are described as follows. The input to the generator is a single sensitivity-weighted combined image $x_u$, represented as two channels containing its real and imaginary parts.

A deep residual U-Net is adopted for the generator to improve learning robustness and accuracy. As shown in Figure 2, the generator G consists of a convolutional encoder network and a convolutional decoder network with multiple shortcut connections between them. The encoder blocks compress the input images and extract image features with strong robustness and spatial invariance, while the decoder blocks restore image features and increase image resolution. The shortcut connections (red lines in Figure 2) feed different levels of encoder features to the decoder to recover finer reconstruction details. The final result is obtained by adding the zero-filled image $x_u$ to the generator output $G(x_u)$. More specifically, each encoder or decoder block consists of four convolutional layers with a kernel size of 3 × 3 and varying numbers of feature maps (indicated under the blocks in Figure 2). The network ends with a convolutional layer without any activation that produces two output channels for the real and imaginary parts, respectively.
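The following Keras sketch mirrors the residual U-Net generator described above under stated assumptions: the four 3 × 3 convolutions per block and the residual addition of $x_u$ follow the text, while the feature-map widths (64-512), the ReLU activations, and the pooling/upsampling operators are illustrative choices, since Figure 2 is not reproduced here.

```python
# Sketch of the residual U-Net generator (assumptions noted in comments).
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # four 3x3 convolutions per block, as stated in the text; ReLU is assumed
    for _ in range(4):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_generator(h=256, w=256):
    inp = layers.Input((h, w, 2))                 # real + imaginary channels
    skips, x = [], inp
    for f in (64, 128, 256, 512):                 # encoder path (widths assumed)
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPool2D(2)(x)                # downsampling (assumed op)
    x = conv_block(x, 1024)                       # bottleneck (assumed)
    for f, s in zip((512, 256, 128, 64), reversed(skips)):  # decoder path
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, s])          # shortcut connection
        x = conv_block(x, f)
    out = layers.Conv2D(2, 3, padding="same")(x)  # no activation, 2 channels
    # refinement learning: the generator predicts only the missing content,
    # which is added back to the zero-filled input x_u
    return tf.keras.Model(inp, layers.Add()([inp, out]))
```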
A discriminator is connected to the generator output. The discriminator D network is composed of an encoding part similar to that of the generator G, consisting of six convolutional layers. Every convolutional layer except the last is followed by batch normalization (BN) and ReLU layers, and convolutions with a stride of 2 are used to reduce the image resolution. The first four layers use 64, 128, 256 and 512 feature maps with a kernel size of 3 × 3, while the last layer uses a kernel size of 1 × 1. The final layer simply averages the features of the preceding layer to obtain the decision variable for binary classification, without a soft-max operation. This output is used to calculate the adversarial loss $\mathcal{L}_{adv}$ introduced above.
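A corresponding sketch of the six-layer discriminator is below; the 3 × 3 and 1 × 1 kernels, the BN + ReLU pattern and the stride-2 downsampling follow the text, while the width of the fifth layer and the use of global average pooling to realize the final feature averaging are assumptions.

```python
# Sketch of the 6-layer convolutional discriminator (logit output, no soft-max).
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(h=256, w=256):
    inp = layers.Input((h, w, 2))
    x = inp
    for f in (64, 128, 256, 512):              # first four 3x3, stride-2 layers
        x = layers.Conv2D(f, 3, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    x = layers.Conv2D(512, 3, padding="same")(x)   # fifth layer (width assumed)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(1, 1, padding="same")(x)     # last layer: 1x1, no BN/ReLU
    out = layers.GlobalAveragePooling2D()(x)       # average to a single logit
    return tf.keras.Model(inp, out)
```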
In this study, we incorporate parallel imaging into the GAN paradigm to fully utilize the information acquired from all receiver coils. Meanwhile, the data consistency loss for training the generator G is defined in both the frequency and image domains, which aids optimization and exploits the complementary properties of the two domains. This loss consists of three parts (Figure 1): a pixel-wise image-domain mean absolute error (MAE) $\mathcal{L}_{iMAE}(\theta_G)$ and two frequency-domain MAE losses, $\mathcal{L}_{fMAE,R}(\theta_G)$ and $\mathcal{L}_{fMAE,1-R}(\theta_G)$. The three loss functions can be written as

$$ \mathcal{L}_{iMAE}(\theta_G) = \sum_q \big\| x_q - S_q \hat{x}_u \big\|_1, \tag{5} $$

$$ \mathcal{L}_{fMAE,R}(\theta_G) = \sum_q \big\| y_{R,q} - R\mathcal{F}S_q \hat{x}_u \big\|_1, \tag{6} $$

$$ \mathcal{L}_{fMAE,1-R}(\theta_G) = \sum_q \big\| y_{1-R,q} - (1-R)\mathcal{F}S_q \hat{x}_u \big\|_1. \tag{7} $$

Here, $q$ indexes the coil elements. The $\mathcal{L}_{iMAE}(\theta_G)$ term penalizes aliasing artifacts between the reconstructed image and its corresponding ground truth image. The $\mathcal{L}_{fMAE,R}(\theta_G)$ term guarantees that the reconstructed image produces an undersampled image matching the acquired k-space measurements ($y_R$), and the $\mathcal{L}_{fMAE,1-R}(\theta_G)$ term ensures that the difference between the unacquired k-space data ($y_{1-R}$) and the data interpolated from the reconstruction is minimal.
Together with $\mathcal{L}_{adv}$, the complete loss function can be written as:

$$ \mathcal{L}_{total} = \mathcal{L}_{adv}(\theta_D,\theta_G) + \alpha\,\mathcal{L}_{iMAE}(\theta_G) + \beta\,\mathcal{L}_{fMAE,R}(\theta_G) + \gamma\,\mathcal{L}_{fMAE,1-R}(\theta_G). \tag{8} $$

Here, $\alpha$, $\beta$ and $\gamma$ are hyper-parameters that control the trade-off between the terms. The adversarial loss $\mathcal{L}_{adv}$ encourages the reconstructed images to retain high perceptual quality and to preserve image details and textural information.
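The sketch below computes the three fidelity terms of Equations (5)-(7) and assembles Equation (8), assuming access to the fully-sampled coil images and k-space during training; `fft2c` is the centered FFT from the earlier sketch, the adversarial term is passed in from the GAN step above, and the weights follow the values reported in Section 2.2.2 (α = 1, β = γ = 10).

```python
# Data-fidelity terms of Eqs. (5)-(7) and the total loss of Eq. (8).
# Shapes: x_hat_u (H, W) reconstructed image, x_coils (C, H, W) ground-truth
# coil images, y_full (C, H, W) fully-sampled k-space, mask (H, W) pattern R.
import numpy as np

def pic_gan_losses(x_hat_u, x_coils, y_full, sens, mask,
                   l_adv=0.0, alpha=1.0, beta=10.0, gamma=10.0):
    coil_rec = sens * x_hat_u[None, ...]                      # S_q x_hat_u
    k_rec = fft2c(coil_rec)                                   # F S_q x_hat_u
    l_imae = np.sum(np.abs(x_coils - coil_rec))               # Eq. (5)
    l_fmae_r = np.sum(np.abs(mask[None] * (y_full - k_rec)))          # Eq. (6)
    l_fmae_1r = np.sum(np.abs((1 - mask)[None] * (y_full - k_rec)))   # Eq. (7)
    return l_adv + alpha * l_imae + beta * l_fmae_r + gamma * l_fmae_1r  # Eq. (8)
```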
It is well known that GAN models are hard to train [23] due to the alternating training of the adversarial components. Inspired by DAGAN [22], we incorporated refinement learning to stabilize the training of our model: we set $\hat{x}_u = G_{\theta_G}(x_u) + x_u$, so the generator only needs to produce the information that was not sampled, which greatly reduces the complexity of the model.

2.2.1. Datasets

To validate the efficacy and generalization capacity of our proposed method, publicly available abdominal [34] and knee [35] MRI datasets were used retrospectively. Both datasets were acquired on a GE 3.0 T whole-body scanner (GE Healthcare, Milwaukee, WI, USA). Using the same PIC-GAN architecture, we trained our model on each dataset and tested it independently on the corresponding test set.
The abdominal MRI dataset contains images acquired from 28 subjects with a 32-channel pediatric coil. The data were acquired with a 3D spoiled gradient-echo sequence with Poisson-disc random undersampling of the phase encodes. The imaging parameters were TE/TR = 1.128 ms/4.832 ms, field-of-view (FOV) = 38 × 38 cm², slice thickness = 2 mm, flip angle = 15°, bandwidth = ±64 kHz, matrix size = 308 × 230 × 156, and auto-calibration signal (ACS) region = 24 × 20.
The knee dataset consists of images acquired from 20 subjects with an 8-channel knee coil. The images were fully sampled using a 3D FSE CUBE sequence with proton density weighting. The imaging parameters were TE/TR = 0.944 ms/3.832 ms, FOV = 35 × 35 cm², slice thickness = 2 mm, flip angle = 15°, bandwidth = ±64 kHz, and matrix size = 192 × 224 × 184.
In this study, the real and imaginary components of the complex MR image $x_u$ were treated as two separate image channels. Of the 28 abdominal subjects, 26 were randomly selected for training and the remaining 2 were used for testing. For each subject, the 50 central slices were selected; thus, the training set contained 1300 slices and the test set 100 slices. Similarly, 18 of the 20 knee subjects were randomly selected for training, while the remaining 2 were used for testing. The 100 central slices were selected for each subject; therefore, the knee training and test sets contained 1800 and 200 images, respectively.
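For reproducibility, a sketch of the subject-wise split and central-slice selection described above is shown below; the volume layout (slices along the first axis) and the random seed are assumptions.

```python
# Subject-wise train/test split and central-slice selection (abdominal case:
# 28 subjects -> 26 train / 2 test, 50 central slices per subject).
import numpy as np

rng = np.random.default_rng(seed=0)          # seed is an arbitrary choice
subject_ids = rng.permutation(28)
train_ids, test_ids = subject_ids[:26], subject_ids[26:]

def central_slices(volume, n=50):
    """volume: (n_slices, H, W) complex array; returns the n central slices."""
    mid = volume.shape[0] // 2
    return volume[mid - n // 2 : mid + n // 2]
```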

2.2.2. Comparison Studies, Experimental Settings and Evaluation

The proposed PIC-GAN was tested on data with both regular and random Cartesian undersampling at 2×, 4× and 6× acceleration factors. Next, we evaluated the performance of PIC-GAN against previously proposed reconstruction methods, including L1-ESPIRiT, VN and ZF-GAN. The L1-ESPIRiT reconstruction was performed using the Berkeley Advanced Reconstruction Toolbox (BART) [36], with parameters optimized for the best SNR performance. The coil sensitivity maps were estimated with ESPIRiT [37] using 24 and 40 calibration lines for the abdominal and knee datasets, respectively.
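The regular and random Cartesian undersampling patterns used in the experiments can be sketched as 1D phase-encode masks, as below; keeping a fully-sampled central ACS block mirrors the calibration regions mentioned above, but the exact mask-generation details of the study are assumptions here.

```python
# Regular vs. random Cartesian undersampling masks along the phase-encode axis.
import numpy as np

def cartesian_mask(n_pe, accel, n_acs=24, random=False, seed=0):
    mask = np.zeros(n_pe, dtype=np.float32)
    if random:
        rng = np.random.default_rng(seed)
        picked = rng.choice(n_pe, size=n_pe // accel, replace=False)
        mask[picked] = 1.0              # uniform-random; a variable-density
                                        # scheme would re-weight the sampling
    else:
        mask[::accel] = 1.0             # regular: every accel-th line
    center = n_pe // 2
    mask[center - n_acs // 2 : center + n_acs // 2] = 1.0   # ACS block
    return mask      # broadcast along the frequency-encode direction as needed
```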
We trained the networks with the following hyper-parameters: α = 1 and β = γ = 10 for the PIC-GAN reconstruction. For the ZF-GAN method, reconstruction was performed without using sensitivity maps. The Adam optimizer was used for training, with a batch size of 32 and an initial learning rate of $10^{-4}$ that decreased monotonically over 2000 epochs. The model with the highest validation Peak Signal to Noise Ratio (PSNR) was selected for testing.
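The optimizer configuration described above could be set up as follows in TensorFlow 2; the paper states only a monotonic decay of the initial $10^{-4}$ learning rate over 2000 epochs, so the linear (polynomial) schedule and the final rate below are assumptions.

```python
# Adam with a monotonically decaying learning rate (batch size 32, 2000 epochs).
import tensorflow as tf

steps_per_epoch = 1300 // 32                      # e.g., abdominal training set
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-4,
    decay_steps=2000 * steps_per_epoch,
    end_learning_rate=1e-6)                       # final rate: an assumption
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```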
Experiments were carried out on a system equipped with NVIDIA Tesla V100 GPUs (4 cards, each with 16 GB memory) and a 32-core Intel Xeon Gold 6130 CPU at 2.10 GHz. Our PIC-GAN was implemented using Tensorpack [38] with the TensorFlow [39] library.
We evaluated the reconstruction results quantitatively in terms of Peak Signal to Noise Ratio (PSNR), Normalized Mean Square Error (NMSE) and Structural Similarity Index (SSIM). A paired Wilcoxon signed-rank test was conducted to compare the NMSE, PSNR and SSIM measurements between the different approaches; p < 0.05 was considered statistically significant.
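The evaluation can be sketched as below, assuming magnitude images normalized to [0, 1]; scikit-image supplies the PSNR/SSIM implementations, SciPy the paired Wilcoxon signed-rank test, and the NMSE definition shown is one common convention.

```python
# PSNR, SSIM and NMSE per slice, plus the paired Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def nmse(gt, pred):
    return np.linalg.norm(pred - gt) ** 2 / np.linalg.norm(gt) ** 2

def evaluate(gt, pred):
    return {"PSNR": peak_signal_noise_ratio(gt, pred, data_range=1.0),
            "SSIM": structural_similarity(gt, pred, data_range=1.0),
            "NMSE": nmse(gt, pred)}

# e.g., compare two methods over the test set (psnr_a, psnr_b: per-slice arrays)
# stat, p_value = wilcoxon(psnr_a, psnr_b)   # significant if p < 0.05
```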

3. Results

3.1. Reconstruction Results: Abdominal MRI Data

Figure 3 shows representative images reconstructed by ZF, L1-ESPIRiT, VN, ZF-GAN and PIC-GAN with 6-fold undersampling, compared to the ground truth (GT). In the 1st and 3rd rows, the liver and kidney regions are marked with red boxes. The ZF reconstruction was remarkably blurred. The zoomed-in error maps show that the liver vessels almost disappeared in the L1-ESPIRiT reconstruction. Moreover, the VN-reconstructed images contained substantial residual artifacts, visible in the error maps. The ZF-GAN results produced unnatural blocky patterns for vessels and appeared blurrier at image edges. Compared to the other methods, the PIC-GAN results had the least error and effectively removed the aliasing artifacts. Correspondingly, the proposed PIC-GAN method also performed best in terms of the PSNR and SSIM metrics. These observations correlate well with the numerical analysis shown in Table 1.

3.2. Reconstruction Results: Knee MRI Data

To better understand the refining procedure of our PIC-GAN, intermediate results from the reconstruction iterations are shown in Figure 4. We can observe a gradual improvement in reconstruction quality from epoch 0 to 2000, which is consistent with the quantitative results (PSNR and SSIM) shown in the sub-figures of Figure 4.
Figure 5 shows representative images reconstructed by ZF, L1-ESPIRiT, VN, ZF-GAN and the proposed PIC-GAN, compared to the GT. All four methods (L1-ESPIRiT, VN, ZF-GAN and the proposed PIC-GAN) achieved acceptable image quality at AF = 2. When 4-fold undersampling was applied, residual artifacts were clearly observed in the images reconstructed by VN. The images reconstructed by ZF-GAN appeared less noisy than those from L1-ESPIRiT and VN; however, they were over-smoothed, with blocky artifacts (yellow arrows) and obvious residual artifacts (green arrows), as shown in Figure 5. The proposed PIC-GAN, on the other hand, better preserved fine details and thus showed more accurate textures. PIC-GAN achieved the highest PSNR at acceleration factors up to 6, whereas the other methods missed some high-frequency texture details (green and yellow arrows). Compared to the other reconstruction approaches, PIC-GAN also yielded the lowest NMSE and the highest PSNR with regular under-sampling.
Figure 6 demonstrates the advantage of the proposed PIC-GAN method with different sampling patterns. The ZF-reconstructed images presented a significant amount of aliasing artifacts. Similarly, significant residual artifacts and amplified noise existed in the results obtained by L1-ESPIRiT. In the reconstruction produced by VN, fine texture details were missing, which might limit its clinical usage. The ZF-GAN images improved on the spatial homogeneity and sharpness of the VN reconstructions; however, they contained blurred vessels (green arrows) and blocky patterns (yellow arrows). PIC-GAN not only suppressed aliasing artifacts but also provided sharper edges and more realistic texture details. These observations are consistent with the quantitative results shown in Table 2.

3.3. Quantitative Evaluations

Table 1 and Table 2 show the quantitative metrics, including PSNR, SSIM, NMSE and reconstruction time, for all compared methods. The numbers in Table 1 and Table 2 represent the mean values and standard deviations of the corresponding metrics. Compared to the L1-ESPIRiT method, the CNN-based VN model and the single-channel deep learning method (ZF-GAN), the proposed PIC-GAN framework performed markedly better at all acceleration factors, demonstrating the effectiveness of our method.
As shown in Figure 7, the proposed PIC-GAN method significantly outperformed the L1-ESPIRiT, VN and ZF-GAN reconstructions at acceleration factors of 2, 4 and 6 with respect to all metrics (p < 0.01) for the abdominal data with regular Cartesian undersampling.
The reconstruction time of L1-ESPIRiT was measured with 30 iterations of conjugate gradient descent using the BART toolbox. For the abdominal data, it took about 66 s, which was 165 times longer than the GAN-based approaches. In contrast, the ZF-GAN and PIC-GAN methods took about 0.4 to 0.7 s to reconstruct a single slice, which is much more time-efficient. Similarly, as shown in Table 2, the reconstruction time of PIC-GAN was much shorter than that of L1-ESPIRiT for the knee data and comparable to that of the other deep learning methods.

4. Discussion

In this study, we have developed a PIC-GAN model incorporating PI and GAN to improve multi-channel MRI reconstruction. Experimental results show that PIC-GAN outperformed the conventional L1-ESPIRiT method and the state-of-the-art VN and ZF-GAN methods on all quantitative metrics. In addition, PIC-GAN reconstruction is faster than conventional L1-ESPIRiT, indicating its feasibility for real-time imaging.
Several novel GAN-based approaches have recently been proposed for MRI reconstruction. For example, the DA-FWGAN [24] architecture used a fine-tuning method for training the neural network and the Wasserstein distance as the discrepancy measure between the reference and reconstructed images. SARA-GAN [26] integrated the self-attention mechanism with a relative average discriminator to reconstruct images with more realistic details and better integrity. Meanwhile, in contrast to most supervised deep learning reconstruction methods, an unsupervised GAN-based approach [25] was proposed for accelerated imaging in settings where fully-sampled datasets are difficult to obtain. However, these approaches are limited to single-channel reconstruction, whereas modern MRI scanners are equipped with multi-channel coils; moreover, some artifact-removal techniques, e.g., motion correction [40], also rely on multi-channel acquisitions, so single-channel reconstructions are less realistic for clinical routine. Several methods have been explored to address this problem. The variational network [27] was proposed to learn an end-to-end reconstruction procedure for complex-valued multi-channel imaging. A similar result using a PI-CNN network was reported in [28], which integrated multi-channel k-space data and exploited it through PI; however, the PI algorithm was not incorporated into the optimization equation of the network but only treated as a regularization term. In addition, DeepcomplexMRI [29] was presented to directly map aliased multi-channel images to the reference images without requiring any prior information. Notably, the data fidelity term of all these approaches was defined in a single domain (either the image or the frequency domain). In our proposed PIC-GAN, we used a progressive refinement approach in both the frequency and image domains, which not only helps stabilize the optimization of the network but also makes full use of the complementary information of the two domains. More specifically, the loss function in the image domain reduces aliasing artifacts between the reconstructed images and their corresponding ground truth (i.e., fully-sampled reconstructions). In addition, we separated the k-space loss function into two parts: one guarantees that the reconstructed image generates a corresponding undersampled image matching the acquired k-space data, and the other minimizes the discrepancy between the missing data and the data interpolated by PIC-GAN in k-space. This ensures high-fidelity reconstructions at high acceleration factors.
It is worth noting that both ZF-GAN and PIC-GAN outperformed L1-ESPIRiT in terms of reconstruction robustness, speed and image quality. This is because the CS method is sensitive to the choice of regularization, whereas deep learning-based approaches do not need to impose a sparsity assumption: the networks automatically learn the underlying features and aliasing artifacts of the reconstructed image, making their performance more robust than conventional non-deep-learning CS techniques. Furthermore, the CS method treats each reconstruction as an individual nonlinear optimization problem, whereas deep learning-based methods learn the network parameters offline. Therefore, once the parameters of PIC-GAN are determined, reconstruction of unseen data with the same undersampling factor is extremely fast, since no iterative calculations are required. In addition, the experimental results show that our PIC-GAN can learn the mapping from undersampled, artifact-corrupted images to the GT images using different sampling patterns with a fixed undersampling factor, indicating that a fixed undersampling pattern is not a prerequisite for training the network.
Multi-channel imaging is widely used in current clinical practice. The multi-channel network achieves better performance than the combined single-channel reconstruction, demonstrating that incorporating the sensitivity maps within the network gives it an advantage over single-channel reconstruction. The results suggest that introducing sensitivity maps during training acts similarly to applying a low-pass filter, which discards high-frequency noise while still enabling a fairly clear image to be reconstructed. However, as the acceleration factor increases, the input k-space contains very few ACS lines, resulting in relatively poor-quality sensitivity maps for training. Possible extensions of PIC-GAN are therefore to improve the accuracy of the sensitivity map estimation or to incorporate a calibrationless algorithm [41,42] into the model.
This study has several limitations. First, system imperfections during data acquisition, e.g., gradient delays, B0 inhomogeneity and multiple projections with opposing orientations, were not considered in the current study; further studies should account for these physical imperfections. Second, the sample size was relatively small and included only healthy subjects. Future investigations should enlarge the sample size and validate the model on patients to assess its generalization performance. Third, a future study is warranted to evaluate the performance of the proposed PIC-GAN at higher acceleration rates. It should also be noted that, although we have reported the average reconstruction time in our comparison study, reconstruction efficiency also depends on the system configuration, e.g., the actual GPU allocated.

5. Conclusions

In conclusion, by coupling multi-channel information with GAN, our PIC-GAN model has been successfully evaluated on two MRI datasets. The proposed PIC-GAN method not only demonstrated superior reconstruction efficacy and generalization capacity, but also outperformed conventional L1-ESPIRiT and other deep learning-based algorithms at different acceleration factors. In terms of reconstruction efficiency, PIC-GAN remarkably reduces the reconstruction time for multi-channel data (from tens of seconds to under a second per slice) compared to iterative L1-ESPIRiT, which is promising for real-time imaging in many clinical applications.

Author Contributions

Conceptualization: J.L. and G.Y.; Methodology: J.L., C.W. and G.Y.; Implementation: J.L.; Writing—original draft preparation: J.L.; Writing—review and editing: G.Y. and C.W.; Visualization and Analysis: J.L. and G.Y.; Supervision: G.Y.; Funding acquisition: J.L. and G.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (No. 61902338), in part by IIAT Hangzhou, in part by the European Research Council Innovative Medicines Initiative on Development of Therapeutics and Diagnostics Combatting Coronavirus Infections Award 'DRAGON: rapiD and secuRe AI imaging based diaGnosis, stratification, fOllow-up, and preparedness for coronavirus paNdemics' [H2020-JTI-IMI2 101005122], and in part by the AI for Health Imaging Award 'CHAIMELEON: Accelerating the Lab to Market Transition of AI Tools for Cancer Management' [H2020-SC1-FA-DTS-2019-1 952172].

Institutional Review Board Statement

The studies involving human participants were reviewed and approved by the Ethics Committees of the providers of the public datasets.

Informed Consent Statement

Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found at: http://old.mridata.org/undersampled/abdomens and http://mridata.org/fullysampled/knees (accessed on 18 October 2020).

Acknowledgments

We appreciate the research groups of Michael Lustig at UC Berkeley and Shreyas Vasanawala at Stanford’s Lucille Packard Children’s Hospital for providing the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sodickson, D.K.; Manning, W.J. Simultaneous acquisition of spatial harmonics (SMASH): Fast imaging with radiofrequency coil arrays. Magn. Reson. Med. 1997, 38, 591–603.
  2. Pruessmann, K.P.; Weiger, M.; Scheidegger, M.B.; Boesiger, P. SENSE: Sensitivity encoding for fast MRI. Magn. Reson. Med. 1999, 42, 952–962.
  3. Griswold, M.A.; Jakob, P.M.; Heidemann, R.M.; Nittka, M.; Jellus, V.; Wang, J.; Kiefer, B.; Haase, A. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med. 2002, 47, 1202–1210.
  4. Deshmane, A.; Gulani, V.; Griswold, M.A.; Seiberlich, N. Parallel MR imaging. J. Magn. Reson. Imaging 2012, 36, 55–72.
  5. Robson, P.M.; Grant, A.K.; Madhuranthakam, A.J.; Lattanzi, R.; Sodickson, D.K.; McKenzie, C.A. Comprehensive quantification of signal-to-noise ratio and g-factor for image-based and k-space-based parallel imaging reconstructions. Magn. Reson. Med. 2008, 60, 895–907.
  6. Hamilton, J.; Franson, D.; Seiberlich, N. Recent advances in parallel imaging for MRI. Prog. Nucl. Magn. Reson. Spectrosc. 2017, 101, 71–95.
  7. Lustig, M.; Pauly, J.M. SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space. Magn. Reson. Med. 2010, 64, 457–471.
  8. Jung, H.; Sung, K.; Nayak, K.S.; Kim, E.Y.; Ye, J.C. k-t FOCUSS: A general compressed sensing framework for high resolution dynamic MRI. Magn. Reson. Med. 2009, 61, 103–116.
  9. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195.
  10. Haldar, J.P.; Zhuo, J. P-LORAKS: Low-rank modeling of local k-space neighborhoods with parallel imaging data. Magn. Reson. Med. 2015.
  11. Lingala, S.G.; Hu, Y.; Dibella, E.; Jacob, M. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans. Med. Imaging 2011, 30, 1042–1054.
  12. Sumbul, U.; Santos, J.M.; Pauly, J.M. A practical acceleration algorithm for real-time imaging. IEEE Trans. Med. Imaging 2009, 28, 2042–2051.
  13. Ravishankar, S.; Bresler, Y. MR image reconstruction from highly undersampled k-space data by dictionary learning. IEEE Trans. Med. Imaging 2011, 30, 1028–1041.
  14. Wang, S.; Su, Z.; Ying, L.; Peng, X.; Zhu, S.; Liang, F.; Feng, D.; Liang, D. Accelerating magnetic resonance imaging via deep learning. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 514–517.
  15. Sun, J.; Li, H.; Xu, Z. Deep ADMM-Net for compressive sensing MRI. Adv. Neural Inf. Process. Syst. 2016, 29, 10–18.
  16. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans. Med. Imaging 2017, 37, 491–503.
  17. Lee, D.; Yoo, J.; Ye, J.C. Deep artifact learning for compressed sensing and parallel MRI. arXiv 2017, arXiv:1703.01120.
  18. Lv, J.; Chen, K.; Yang, M.; Zhang, J.; Wang, X. Reconstruction of undersampled radial free-breathing 3D abdominal MRI using stacked convolutional auto-encoders. Med. Phys. 2018, 45, 2023–2032.
  19. Wu, Y.; Ma, Y.; Liu, J.; Du, J.; Xing, L. Self-attention convolutional neural network for improved MR image reconstruction. Inf. Sci. 2019, 490, 317–328.
  20. Guo, Y.; Wang, C.; Zhang, H.; Yang, G. Deep attentive Wasserstein generative adversarial networks for MRI reconstruction with recurrent context-awareness. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin, Germany, 2020; pp. 167–177.
  21. Shitrit, O.; Raviv, T.R. Accelerated magnetic resonance imaging by adversarial neural network. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin, Germany, 2017; pp. 30–38.
  22. Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.; Dragotti, P.L.; Ye, X.; Liu, F.; Arridge, S.; Keegan, J.; Guo, Y.; et al. DAGAN: Deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans. Med. Imaging 2017, 37, 1310–1321.
  23. Quan, T.M.; Nguyen-Duc, T.; Jeong, W.K. Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss. IEEE Trans. Med. Imaging 2018, 37, 1488–1497.
  24. Jiang, M.; Yuan, Z.; Yang, X.; Zhang, J.; Gong, Y.; Xia, L.; Li, T. Accelerating CS-MRI reconstruction with fine-tuning Wasserstein generative adversarial network. IEEE Access 2019, 7, 152347–152357.
  25. Cole, E.K.; Pauly, J.M.; Vasanawala, S.S.; Ong, F. Unsupervised MRI reconstruction with generative adversarial networks. arXiv 2020, arXiv:2008.13065.
  26. Yuan, Z.; Jiang, M.; Wang, Y.; Wei, B.; Li, Y.; Wang, P.; Menpes-Smith, W.; Niu, Z.; Yang, G. SARA-GAN: Self-attention and relative average discriminator based generative adversarial networks for fast compressed sensing MRI reconstruction. Front. Neuroinform. 2020, 14, 611666.
  27. Hammernik, K.; Klatzer, T.; Kobler, E.; Recht, M.P.; Sodickson, D.K.; Pock, T.; Knoll, F. Learning a variational network for reconstruction of accelerated MRI data. Magn. Reson. Med. 2018, 79, 3055–3071.
  28. Zhou, Z.; Han, F.; Ghodrati, V.; Gao, Y.; Yin, W.; Yang, Y.; Hu, P. Parallel imaging and convolutional neural network combined fast MR image reconstruction: Applications in low-latency accelerated real-time imaging. Med. Phys. 2019, 46, 3399–3413.
  29. Wang, S.; Cheng, H.; Ying, L.; Xiao, T.; Ke, Z.; Liu, X.; Zheng, H.; Liang, D. DeepcomplexMRI: Exploiting deep residual network for fast parallel MR imaging with complex convolution. Magn. Reson. Imaging 2020, 68, 136–147.
  30. Eksioglu, E.M. Decoupled algorithm for MRI reconstruction using nonlocal block matching model: BM3D-MRI. J. Math. Imaging Vis. 2016, 56, 430–440.
  31. Akçakaya, M.; Basha, T.A.; Chan, R.H.; Manning, W.J.; Nezafat, R. Accelerated isotropic sub-millimeter whole-heart coronary MRI: Compressed sensing versus parallel imaging. Magn. Reson. Med. 2014, 71, 815–822.
  32. Akçakaya, M.; Basha, T.A.; Goddu, B.; Goepfert, L.A.; Kissinger, K.V.; Tarokh, V.; Manning, W.J.; Nezafat, R. Low-dimensional-structure self-learning and thresholding: Regularization beyond compressed sensing for MRI reconstruction. Magn. Reson. Med. 2011, 66, 756–767.
  33. Feng, L.; Grimm, R.; Block, K.T.; Chandarana, H.; Kim, S.; Xu, J.; Axel, L.; Sodickson, D.K.; Otazo, R. Golden-angle radial sparse parallel MRI: Combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI. Magn. Reson. Med. 2014, 72, 707–717.
  34. Available online: http://old.mridata.org/undersampled/abdomens (accessed on 18 October 2020).
  35. Available online: http://mridata.org/fullysampled/knees (accessed on 18 October 2020).
  36. Tamir, J.I.; Ong, F.; Cheng, J.Y.; Uecker, M.; Lustig, M. Generalized magnetic resonance image reconstruction using the Berkeley Advanced Reconstruction Toolbox. In ISMRM Workshop on Data Sampling & Image Reconstruction, Sedona, AZ, USA, 2016.
  37. Uecker, M.; Lai, P.; Murphy, M.J.; Virtue, P.; Elad, M.; Pauly, J.M.; Vasanawala, S.S.; Lustig, M. ESPIRiT—An eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA. Magn. Reson. Med. 2014, 71, 990–1001.
  38. Tensorpack. 2016. Available online: https://github.com/tensorpack/ (accessed on 18 October 2020).
  39. TensorFlow. Available online: http://www.tensorflow.org/ (accessed on 18 October 2020).
  40. Wang, C.; Liang, Y.; Zhao, S.; Du, Y.P. Correction of out-of-FOV motion artifacts using convolutional neural network. Magn. Reson. Imaging 2020, 71, 93–102.
  41. Shin, P.J.; Larson, P.E.Z.; Ohliger, M.A.; Elad, M.; Pauly, J.M.; Vigneron, D.B.; Lustig, M. Calibrationless parallel imaging reconstruction based on structured low-rank matrix completion. Magn. Reson. Med. 2014, 72, 959–970.
  42. Schlemper, J.; Duan, J.; Ouyang, C.; Qin, C.; Caballero, J.; Hajnal, J.V.; Rueckert, D. Data consistency networks for (calibration-less) accelerated parallel MR image reconstruction. arXiv 2019, arXiv:1909.11795.
Figure 1. Schema of the proposed parallel imaging coupled generative adversarial network (PIC-GAN) reconstruction network.
Figure 2. The generator G consists of four encoder blocks followed by four corresponding decoder blocks. In addition, shortcut connections are applied to connect mirrored layers between the encoder and decoder paths.
Figure 3. Representative abdominal images reconstructed with acceleration factor AF = 6. The first and second rows depict reconstruction results for regular Cartesian sampling; the third and fourth rows depict the same for variable-density random sampling. The PIC-GAN reconstruction shows reduced artifacts compared to the other methods. (GT: Ground truth. ZF: Zero-filled. L1-ESPIRiT: Sparsity-based parallel imaging. VN: Variational network. ZF-GAN: Conventional GAN with single-channel images as input. PIC-GAN: Our proposed method. Red box: Zoomed-in area.)
Figure 4. Visualization of the intermediate results of our PIC-GAN reconstruction. (a) Undersampled image with an acceleration factor of 6× with regular (1st row) and random (3rd row) Cartesian sampling. (b–d) Results from intermediate steps 500 to 2000 of the reconstruction process. (e) Ground truth.
Figure 5. Comparison of different reconstruction methods at different acceleration factors for the knee dataset. From left to right, each column shows a selected knee image reconstructed using ZF, L1-ESPIRiT, VN, ZF-GAN and PIC-GAN, respectively, compared to the GT. (GT: Ground truth. ZF: Zero-filled. L1-ESPIRiT: Sparsity-based parallel imaging. VN: Variational network. ZF-GAN: Conventional GAN with single-channel images as input. PIC-GAN: Our proposed method. The ZF-GAN reconstructed images were over-smoothed, with blocky artifacts (yellow arrows) and obvious residual artifacts (green arrows).)
Figure 6. Representative knee images reconstructed with an acceleration factor of 6. The first and second rows show reconstruction results using regular Cartesian sampling; the third and fourth rows show results using variable-density random sampling. Zoomed-in views (red boxes) show that the proposed method yields both sharper and cleaner reconstructions than L1-ESPIRiT, VN and ZF-GAN. Both the ZF-GAN and PIC-GAN reconstructions significantly suppress the artifacts compared to ZF and L1-ESPIRiT. (GT: Ground truth. ZF: Zero-filled. L1-ESPIRiT: Sparsity-based parallel imaging. VN: Variational network. ZF-GAN: Conventional GAN with single-channel images as input. PIC-GAN: Our proposed method. ZF-GAN images contained blurred vessels (green arrows) and blocky patterns (yellow arrows).)
Figure 7. Performance comparisons (PSNR, SSIM and NMSE × 10^5) on abdominal MRI data with different acceleration factors. (GT: Ground truth. ZF: Zero-filled. L1-ESPIRiT: Sparsity-based parallel imaging. VN: Variational network. ZF-GAN: Conventional GAN with single-channel images as input. PIC-GAN: Our proposed method.)
Table 1. Performance comparisons (Normalized Mean Square Error (NMSE) × 10^5, Structural Similarity Index (SSIM), Peak Signal to Noise Ratio (PSNR) and average reconstruction time (s)) on abdominal magnetic resonance imaging (MRI) data with different acceleration factors. PIC-GAN outperformed the competing algorithms with significantly higher PSNR and SSIM and lower NMSE values (p < 0.05).

| R | Method | PSNR (Regular) | SSIM (Regular) | NMSE (Regular) | PSNR (Random) | SSIM (Random) | NMSE (Random) | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 2-fold | ZF | 28.03 ± 2.68 | 0.90 ± 0.01 | 1.74 ± 0.94 | 34.66 ± 2.98 | 0.95 ± 0.01 | 0.49 ± 0.33 | 0.05 ± 0.01 |
| | L1-ESPIRiT | 33.25 ± 2.34 | 0.80 ± 0.06 | 0.62 ± 0.25 | 33.69 ± 1.48 | 0.81 ± 0.03 | 0.50 ± 0.02 | 143.71 ± 1.20 |
| | VN | 34.99 ± 2.09 | 0.89 ± 0.03 | 0.51 ± 0.27 | 33.20 ± 2.82 | 0.90 ± 0.02 | 0.92 ± 0.63 | 0.38 ± 0.01 |
| | ZF-GAN | 34.91 ± 2.92 | 0.93 ± 0.05 | 0.60 ± 0.33 | 37.22 ± 1.77 | 0.96 ± 0.01 | 0.32 ± 0.09 | 0.37 ± 0.00 |
| | PIC-GAN | 36.60 ± 3.57 | 0.94 ± 0.02 | 0.49 ± 0.44 | 39.59 ± 2.64 | 0.97 ± 0.01 | 0.19 ± 0.13 | 0.69 ± 0.00 |
| 4-fold | ZF | 25.21 ± 3.13 | 0.81 ± 0.02 | 3.01 ± 1.87 | 27.31 ± 3.23 | 0.84 ± 0.02 | 0.21 ± 0.15 | 0.05 ± 0.01 |
| | L1-ESPIRiT | 27.69 ± 2.79 | 0.62 ± 0.11 | 1.81 ± 1.16 | 27.87 ± 0.78 | 0.70 ± 0.03 | 1.54 ± 0.46 | 143.01 ± 1.13 |
| | VN | 30.30 ± 2.88 | 0.85 ± 0.07 | 1.32 ± 1.10 | 30.72 ± 2.31 | 0.87 ± 0.02 | 1.12 ± 0.51 | 0.38 ± 0.00 |
| | ZF-GAN | 31.79 ± 2.95 | 0.86 ± 0.03 | 1.11 ± 1.06 | 32.95 ± 2.57 | 0.89 ± 0.02 | 0.92 ± 0.64 | 0.36 ± 0.00 |
| | PIC-GAN | 34.99 ± 2.09 | 0.89 ± 0.03 | 0.51 ± 0.27 | 33.20 ± 2.82 | 0.90 ± 0.02 | 0.92 ± 0.63 | 0.69 ± 0.01 |
| 6-fold | ZF | 24.71 ± 3.31 | 0.79 ± 0.03 | 3.34 ± 2.18 | 25.15 ± 3.37 | 0.79 ± 0.03 | 0.31 ± 0.21 | 0.05 ± 0.01 |
| | L1-ESPIRiT | 25.40 ± 1.88 | 0.66 ± 0.02 | 2.49 ± 1.04 | 25.71 ± 2.94 | 0.67 ± 0.01 | 2.49 ± 1.30 | 143.43 ± 2.18 |
| | VN | 29.26 ± 2.98 | 0.84 ± 0.04 | 1.87 ± 1.28 | 20.76 ± 2.64 | 0.84 ± 0.01 | 1.54 ± 0.97 | 0.39 ± 0.01 |
| | ZF-GAN | 31.45 ± 4.00 | 0.85 ± 0.06 | 1.93 ± 1.41 | 30.91 ± 2.72 | 0.85 ± 0.02 | 1.42 ± 1.01 | 0.40 ± 0.00 |
| | PIC-GAN | 34.43 ± 1.92 | 0.87 ± 0.05 | 0.58 ± 0.37 | 31.76 ± 3.04 | 0.86 ± 0.02 | 1.22 ± 0.97 | 0.68 ± 0.01 |
Table 2. Performance comparisons (NMSE × 10^5, SSIM, PSNR and average reconstruction time (s)) on knee MRI data with different acceleration factors. PIC-GAN outperformed the competing algorithms with significantly higher PSNR and SSIM and lower NMSE values (p < 0.05).

| R | Method | PSNR (Regular) | SSIM (Regular) | NMSE (Regular) | PSNR (Random) | SSIM (Random) | NMSE (Random) | Time (s) |
|---|---|---|---|---|---|---|---|---|
| 2-fold | ZF | 25.95 ± 1.42 | 0.83 ± 0.03 | 5.25 ± 1.21 | 25.94 ± 1.19 | 0.83 ± 0.01 | 5.28 ± 1.13 | 0.02 ± 0.01 |
| | L1-ESPIRiT | 31.60 ± 1.27 | 0.72 ± 0.01 | 0.89 ± 0.55 | 30.07 ± 1.00 | 0.73 ± 0.02 | 1.01 ± 0.61 | 67.18 ± 1.10 |
| | VN | 32.79 ± 1.42 | 0.85 ± 0.02 | 0.60 ± 0.12 | 32.54 ± 1.43 | 0.86 ± 0.01 | 0.57 ± 0.12 | 0.19 ± 0.01 |
| | ZF-GAN | 34.71 ± 1.31 | 0.86 ± 0.00 | 0.44 ± 0.08 | 34.45 ± 1.60 | 0.87 ± 0.00 | 0.39 ± 0.10 | 0.22 ± 0.01 |
| | PIC-GAN | 37.80 ± 1.02 | 0.91 ± 0.00 | 0.33 ± 0.09 | 37.98 ± 1.02 | 0.91 ± 0.00 | 0.10 ± 0.02 | 0.43 ± 0.01 |
| 4-fold | ZF | 24.27 ± 1.41 | 0.78 ± 0.03 | 8.05 ± 1.89 | 24.21 ± 1.23 | 0.78 ± 0.02 | 8.04 ± 1.89 | 0.02 ± 0.00 |
| | L1-ESPIRiT | 30.67 ± 1.38 | 0.59 ± 0.07 | 1.12 ± 0.57 | 28.98 ± 1.27 | 0.60 ± 0.01 | 1.27 ± 0.22 | 66.12 ± 1.13 |
| | VN | 31.65 ± 1.31 | 0.84 ± 0.02 | 0.82 ± 0.21 | 31.23 ± 1.26 | 0.83 ± 0.01 | 0.92 ± 0.20 | 0.19 ± 0.01 |
| | ZF-GAN | 33.28 ± 1.27 | 0.85 ± 0.01 | 0.69 ± 0.19 | 33.10 ± 1.26 | 0.84 ± 0.01 | 0.73 ± 0.17 | 0.21 ± 0.01 |
| | PIC-GAN | 36.49 ± 1.30 | 0.89 ± 0.01 | 0.46 ± 0.15 | 36.17 ± 0.94 | 0.88 ± 0.01 | 0.58 ± 0.12 | 0.44 ± 0.01 |
| 6-fold | ZF | 23.18 ± 1.45 | 0.75 ± 0.04 | 8.09 ± 1.91 | 22.44 ± 1.46 | 0.76 ± 0.04 | 8.98 ± 2.31 | 0.02 ± 0.00 |
| | L1-ESPIRiT | 28.01 ± 0.98 | 0.55 ± 0.00 | 1.28 ± 0.24 | 27.52 ± 1.09 | 0.57 ± 0.01 | 1.59 ± 0.10 | 66.02 ± 1.76 |
| | VN | 30.01 ± 1.01 | 0.81 ± 0.01 | 1.18 ± 0.31 | 28.54 ± 1.22 | 0.80 ± 0.00 | 0.98 ± 0.10 | 0.20 ± 0.01 |
| | ZF-GAN | 31.47 ± 1.05 | 0.82 ± 0.01 | 0.93 ± 0.29 | 30.48 ± 1.24 | 0.81 ± 0.01 | 0.86 ± 0.11 | 0.24 ± 0.01 |
| | PIC-GAN | 34.10 ± 1.09 | 0.86 ± 0.01 | 0.80 ± 0.26 | 33.85 ± 1.11 | 0.85 ± 0.00 | 0.81 ± 0.10 | 0.45 ± 0.01 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
