Article

Reference-Driven Compressed Sensing MR Image Reconstruction Using Deep Convolutional Neural Networks without Pre-Training

1 Key Laboratory of Complex System Optimization and Big Data Processing, Guangxi Colleges and Universities, Yulin Normal University, Yulin 537000, China
2 School of Physics and Telecommunication Engineering, Yulin Normal University, Yulin 537000, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(1), 308; https://doi.org/10.3390/s20010308
Submission received: 13 December 2019 / Revised: 3 January 2020 / Accepted: 4 January 2020 / Published: 6 January 2020
(This article belongs to the Special Issue Compressed Sensing in Biomedical Signal and Image Analysis)

Abstract:
Deep learning has proven able to reduce the scanning time of Magnetic Resonance Imaging (MRI) and to improve image reconstruction quality since it was introduced into Compressed Sensing MRI (CS-MRI). However, the need for large, high-quality, patient-based datasets for network training remains a challenge in clinical applications. In this paper, we propose a novel deep learning based compressed sensing MR image reconstruction method that does not require any pre-training procedure or training dataset, thereby largely reducing the dependence on patient-based datasets. The proposed method builds on the Deep Image Prior (DIP) framework and uses a high-resolution reference MR image as the input of the convolutional neural network in order to introduce a structural prior into the learning procedure. This reference-driven strategy improves the efficiency and effectiveness of network learning. We then add a k-space data correction step to enforce the consistency of the reconstructed k-space data with the measurements, which further improves the reconstruction accuracy. Experiments on in vivo MR datasets showed that the proposed method achieves more accurate reconstructions from undersampled k-space data.

1. Introduction

Magnetic Resonance Imaging (MRI) is an important non-invasive procedure that can provide critical structural, functional, and anatomical information about a patient. Nevertheless, the long scanning time may result in motion artifacts that degrade image quality and lead to misinterpretation of data, as well as sometimes cause discomfort for the patient. Accelerating data acquisition without degrading image reconstruction quality has always been one of the goals of MRI research. Compressed Sensing MRI (CS-MRI) [1,2,3,4] is an effective approach to reconstructing high-quality MR images from undersampled k-space data. CS-MRI utilizes the sparsity (or compressibility) of the MR image as prior information and builds the reconstruction model as the combination of a data fidelity term in k-space and a regularization constraint under some sparsifying operation. The prior used in classical CS-MRI can be sparsity in specific transform domains (e.g., gradient and wavelet) [2,5,6], as well as more flexible sparse representations learned from data via dictionary learning [7,8,9,10]. In addition, structural prior information is drawing increased attention, because it can be acquired from a known high-resolution reference image [11,12,13] and introduces support information [14,15] or structural sparsity (e.g., group sparsity and block sparsity) [16,17,18] into the reconstruction model based on the union of subspaces theory [19,20].
Over the past several years, deep learning has attracted a great deal of attention in the medical imaging field, because it achieves better performance than conventional model-based methods in denoising, segmentation, classification, and accelerated MRI tasks [21,22,23,24,25,26,27,28,29,30,31]. Due to its ability to learn from data, deep learning based CS-MRI also shows superior image reconstruction performance. However, the network training procedure usually requires large datasets, which is a challenge in clinical applications because large, high-quality, patient-based datasets can be difficult to obtain due to patient privacy concerns.
Recently, Ulyanov et al. proposed the Deep Image Prior (DIP) framework [32], which performs very well on imaging inverse problems without pre-training. In DIP, no pre-training dataset is needed: a convolutional neural network (CNN) is initialized with random parameters, and random noise serves as the network input. Research related to DIP has covered natural image denoising, inpainting, and super-resolution reconstruction [33,34], PET image reconstruction [35,36], and even compressed sensing recovery problems [37].
Leveraging the key concept of DIP, and aiming both to overcome the difficulty of MR dataset acquisition and to improve learning efficiency, we build on the DIP framework and introduce a structural prior provided by a high-resolution reference MR image with the same anatomical structure (which can usually be fully sampled in advance), yielding a reference-driven compressed sensing MR image reconstruction method. The proposed method achieves more accurate MR reconstruction than DIP. Our contributions can be summarized as follows.
(1) We propose a novel deep learning based compressed sensing MR image reconstruction method that does not require any pre-training procedure. This significantly reduces the dependence of traditional deep learning methods on datasets, which has always been a challenge in clinical applications.
(2) The proposed method utilizes high-resolution reference images as the input for CNNs, so that the structural similarity between the target and the reference MR image can be introduced as prior information into the network, which improves the efficiency of learning.
(3) The k-space data correction step is added to force the final reconstructed k-space data to be consistent with the prior measurement, which further improves the reconstruction accuracy.
The rest of this paper is organized as follows. Section 2 describes the proposed method in detail. Section 3 shows experimental results from three groups of in vivo MR scans, and data acquisition, undersampled masks, and network setup details are also included. Finally, conclusions are drawn in Section 4.

2. Methodology

2.1. Proposed Method

An overview of our proposed method is depicted in Figure 1. The reconstruction of the target MR image is achieved in two steps: (1) reference-driven network training with the DIP framework; and (2) data correction. In the first step, we learn the network’s parameters by solving an optimization problem and obtain the output MR image of the trained network. In the data correction step, we replace the k-space data of the output MR image at the sampled locations with the original undersampled measurements and finally reconstruct the target MR image. The following sections explain this method in detail.
A. Reference-driven network training with DIP framework
Let I_t ∈ ℂ^{N×N} denote the target MR image to be reconstructed and I_r ∈ ℂ^{N×N} denote a high-resolution reference MR image, acquired in advance, whose anatomical structure is similar to that of the target image. The proposed reference-driven network training with DIP can be formulated as the following optimization:

θ̂ = argmin_θ ‖y − F_u f(θ; I_r)‖₂²    (1)

where y ∈ ℂ^{N×1} is the vector of k-space measurements of the target MR image, F_u denotes the undersampled Fourier transform operator, and ‖·‖₂ is the ℓ₂ norm. f(θ; I_r) is an untrained deep CNN parametrized by θ, with the fully known reference image as input. The objective function in Equation (1) enforces data consistency between the CNN output and the k-space measurements. In other words, the parameters of the CNN are iteratively optimized so that the output of the network is as close to the target MR image as possible.
Then, we obtain the output Î_out of the trained CNN such that:

Î_out = f(θ̂; I_r)    (2)
With our proposed reference-driven method, the patient’s own MR image (the reference image) is used as the CNN input instead of random noise. Due to the structural similarity between the target and reference MR images, this strategy efficiently introduces the structural prior of the target image into the network training procedure.
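As a concrete illustration, the training step in Equations (1) and (2) can be sketched as a short PyTorch loop. This is a minimal sketch, not the paper's exact implementation: the Adam optimizer, iteration count, and the tiny network used in practice below are illustrative assumptions.

```python
import torch

def undersampled_fft(img, mask):
    # F_u: 2D FFT of the image followed by masking of the sampled k-space locations
    return torch.fft.fft2(img) * mask

def dip_train(net, ref, y, mask, n_iters=100, lr=0.01):
    """Minimize ||y - F_u f(theta; I_r)||_2^2 over the network weights (Eq. 1),
    then return the trained network's output f(theta_hat; I_r) (Eq. 2)."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        out = net(ref).squeeze()  # current estimate of the target image
        loss = (y - undersampled_fft(out, mask)).abs().pow(2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(ref).squeeze()  # I_out = f(theta_hat; I_r)
```

Unlike the original DIP, `ref` here is the fixed reference image rather than fixed random noise; everything else follows the standard DIP recipe.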
B. Data correction
Applying the data correction operator Cor(·) to the output of the network Î_out, we obtain new k-space data as follows:

y_new = Cor(Î_out) = Ū ⊙ F(Î_out) + y    (3)

Here, F denotes the Fourier transform, ⊙ denotes element-wise masking, y is the measurement of the target MR image collected at the k-space locations selected by the undersampled mask U (and zero elsewhere), and Ū denotes the complement of U. The k-space data correction in Equation (3) enforces consistency with the previously acquired measurements, so that the reconstruction error is concentrated on the missing k-space data. Experiments show that this strategy is highly effective. The final reconstruction is then obtained through the inverse Fourier transform of y_new:

Î_t = F⁻¹(y_new)    (4)
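In code, the correction of Equations (3) and (4) is a single masked replacement in k-space. A minimal NumPy sketch (the array shapes and mask below are illustrative):

```python
import numpy as np

def data_correction(i_out, y, mask):
    """Eq. (3): keep F(I_out) at unsampled k-space locations (complement of U)
    and insert the measurements y at the sampled locations U; Eq. (4): invert."""
    k_out = np.fft.fft2(i_out)
    y_new = np.where(mask, y, k_out)  # sampled -> y, unsampled -> F(I_out)
    return np.fft.ifft2(y_new)        # final reconstruction I_t
```

After this step, the reconstruction agrees exactly with the acquired measurements at every sampled location, so any residual error lives only in the unsampled part of k-space.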

2.2. Network Architecture

Figure 2 depicts the CNN architecture employed in our proposed method: an encoder-decoder (“hourglass”) architecture with skip connections, the same as in [32]. The skip connections (marked by yellow arrows) link the encoding path (upper side) and the decoding path (lower side) and allow the integration of features from different resolutions. The network consists of repeated applications of convolutional (Conv) layers, batch normalization (BN) layers, leaky rectified linear units (LeakyReLU), downsampling by strided convolution, and upsampling by bilinear interpolation. For simplicity, we denote the numbers of filters at depth i for downsampling, upsampling, and skip connections as n_d[i], n_u[i], and n_s[i], respectively, and the corresponding kernel sizes as k_d[i], k_u[i], and k_s[i]. The variable L is the maximal depth of the network.
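To make the architecture concrete, the following is a two-level PyTorch sketch of such a skip-connected hourglass. The channel counts and kernel sizes here are made up for illustration; the actual per-image settings are those listed in Table 1.

```python
import torch
from torch import nn

def down_block(c_in, c_out, k):
    # Conv with stride 2 (downsampling) -> BN -> LeakyReLU
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, stride=2, padding=k // 2),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

def conv_block(c_in, c_out, k):
    # Same-resolution Conv -> BN -> LeakyReLU
    return nn.Sequential(nn.Conv2d(c_in, c_out, k, padding=k // 2),
                         nn.BatchNorm2d(c_out), nn.LeakyReLU(0.2))

class Hourglass(nn.Module):
    """Two-level encoder-decoder with a skip connection, in the spirit of [32]."""
    def __init__(self, nd=(16, 32), ns=4, k=3):
        super().__init__()
        self.enc1 = down_block(1, nd[0], k)      # H -> H/2
        self.enc2 = down_block(nd[0], nd[1], k)  # H/2 -> H/4
        self.skip1 = conv_block(nd[0], ns, 1)    # 1x1-kernel skip branch
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec2 = conv_block(nd[1] + ns, nd[0], k)
        self.dec1 = conv_block(nd[0], nd[0], k)
        self.out = nn.Conv2d(nd[0], 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        # fuse the upsampled deep features with the skip branch from depth 1
        d2 = self.dec2(torch.cat([self.up(e2), self.skip1(e1)], dim=1))
        return self.out(self.dec1(self.up(d2)))
```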

3. Experiments and Results

In this section, we compare our proposed method with the state-of-the-art DIP method presented in [32] to confirm the former’s better performance. To ensure a fair comparison, the same network architecture was used for both methods. In addition, to show the effectiveness in increasing reconstruction quality from highly undersampled measurements, the zero-filling image is also shown for comparison.

3.1. Experimental Setup

A. Data acquisition
To demonstrate the performance of the proposed method, we performed simulations on three groups of compressible in vivo MR images, as shown in Figure 3. To simulate data acquisition, we undersampled the 2D discrete Fourier transform of MR images from in vivo scans. The first group of scanned data (Brain A) was acquired from a 3T Siemens MRI scanner using the GR sequence with a flip angle of 70° and TR/TE = 250/2.5 ms. The Field Of View (FOV) was 220 mm × 220 mm, and the slice thickness was 5.0 mm. The reference and target images were of size 512 × 512, as shown in Figure 3a,b. The second and third groups of scanned data (Brain B and Brain C) were also acquired from the 3T Siemens scanner, but using the SE sequence (120° flip angle, TR/TE = 4000/91 ms, 176 mm × 176 mm field of view, 5.0 mm slice thickness). The MR images in Brain B and Brain C were of size 256 × 256 and are shown in Figure 3c–f. Three different undersampling masks were used in our experiments: a radial mask, a Cartesian mask, and a variable density mask. These are shown in Figure 4.
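The paper does not spell out how the masks were generated. As one plausible construction (an assumption, not the authors' procedure), a Cartesian mask fully samples a few central phase-encoding lines and picks the remaining lines uniformly at random:

```python
import numpy as np

def cartesian_mask(n, rate, center_frac=0.08, seed=0):
    """Hypothetical Cartesian mask: keep the low-frequency center rows plus
    random other rows until `rate` of the n phase-encoding lines are sampled."""
    rng = np.random.default_rng(seed)
    n_keep = int(round(n * rate))
    n_center = max(1, int(n * center_frac))
    start = n // 2 - n_center // 2
    center = np.arange(start, start + n_center)
    rest = np.setdiff1d(np.arange(n), center)
    extra = rng.choice(rest, size=max(0, n_keep - n_center), replace=False)
    mask = np.zeros((n, n), dtype=bool)
    mask[np.concatenate([center, extra]), :] = True  # full rows in k-space
    return mask
```

Radial and variable-density masks follow the same idea of biasing samples toward the k-space center, only with different sampling geometries.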
B. Network training
The network architecture was described in Section 2.2. The network parameters θ were initialized randomly at the first iteration. Table 1 shows the hyperparameters for the experiments conducted on Brain A, Brain B, and Brain C.
The models were implemented on the Ubuntu 16.04 LTS (64 bit) operating system, running on an Intel Core i9-7920X 2.9 GHz CPU and an Nvidia GeForce GTX 1080Ti GPU (11 GB memory), in the open-source framework PyTorch with CUDA and cuDNN support.
C. Performance evaluation
To evaluate the quantitative performance of the proposed method, we measured the relative error, Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity Index (SSIM) [38], the last of which is widely used in the imaging field because it is consistent with human visual perception:
Relative error = ‖x̂ − x‖₂ / ‖x‖₂    (5)

PSNR = 10 · log₁₀( N · N · (MAX_x)² / Σᵢ₌₁ᴺ Σⱼ₌₁ᴺ [x̂(i,j) − x(i,j)]² )    (6)

SSIM = (2 μ_x μ_x̂ + c₁)(2 σ_{x x̂} + c₂) / [(μ_x² + μ_x̂² + c₁)(σ_x² + σ_x̂² + c₂)]    (7)

where x̂ and x denote the reconstructed image and the ground truth, both of size N × N, and MAX_x is the largest value in x. Moreover, in Equation (7), μ_x, μ_x̂, σ_x, and σ_x̂ represent the means and standard deviations of x and x̂, respectively, σ_{x x̂} denotes the cross-covariance between x and x̂, and the constants are c₁ = 0.01 and c₂ = 0.03.
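The three metrics of Equations (5)-(7) are straightforward to compute. A NumPy sketch follows; note that the SSIM here is evaluated globally over the whole image, a simplification of the usual windowed SSIM of [38]:

```python
import numpy as np

def relative_error(x_hat, x):
    # Eq. (5): l2 norm of the error over the l2 norm of the ground truth
    return np.linalg.norm(x_hat - x) / np.linalg.norm(x)

def psnr(x_hat, x):
    # Eq. (6): 10*log10(N*N*MAX^2 / summed squared error) = 10*log10(MAX^2 / MSE)
    mse = np.mean((x_hat - x) ** 2)
    return 10 * np.log10(x.max() ** 2 / mse)

def ssim_global(x_hat, x, c1=0.01, c2=0.03):
    # Eq. (7) with means, variances, and cross-covariance over the whole image
    mu_x, mu_h = x.mean(), x_hat.mean()
    cov = np.mean((x - mu_x) * (x_hat - mu_h))
    num = (2 * mu_x * mu_h + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_h ** 2 + c1) * (x.var() + x_hat.var() + c2)
    return num / den
```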

3.2. Results

A. Reconstruction under different sampling rates
Table 2 shows the quantitative performance of our proposed method, the classic DIP method, and zero-filling reconstruction on three groups of in vivo MR images at different sampling rates under the Cartesian mask. Due to the randomness involved in the training procedure (the initial network parameters for our method; both the initial network parameters and the network input for DIP), all results are averages over 30 runs. It can be seen that the proposed method achieved better performance, with lower relative errors and higher PSNR and SSIM values (marked in red), which means that the proposed method reconstructs the target MR image more accurately.
Figure 5, Figure 6 and Figure 7 show a visual comparison of the reconstructions under Cartesian undersampling. From these figures, it is obvious that our proposed method reconstructed higher-quality images with more structural detail and fewer artifacts. The corresponding error maps show that the images reconstructed by our proposed method were closer to the target image than those of the classic DIP method.
Table 3 shows the computational time at different sampling rates under the Cartesian mask for the DIP and proposed methods on Brain B and Brain C. Here, the computational time is the total cost of 5000 iterations. Compared to the DIP method, our proposed method did not save time, because the output of the network must be undersampled after each iteration to update the loss function. In spite of this, the significant improvement in reconstruction accuracy makes the proposed method attractive.
B. Reconstruction under different undersampled masks
To further demonstrate the effectiveness of the proposed method under different undersampled masks, we also used the radial and variable density undersampled masks to compare reconstruction performance. The quantitative results for the three groups of MR data are presented in Table 4. The proposed method still shows significantly improved performance under these sampling masks.
C. Convergence analysis
Here, we examine the convergence of the proposed method by conducting experiments on Brain A at different sampling rates under the Cartesian undersampled mask. The curves in Figure 8a,b present the relative errors and PSNR values (averaged over 30 runs) at every 100 iterations. The curves show that, as the number of iterations increased, the proposed method converged gradually and stably, with the relative error settling at a low value and the PSNR at a high value.
D. Anti-noise performance analysis
To evaluate the robustness of the proposed method against measurement noise, we performed experiments on Brain B with additive Gaussian noise. Figure 9 shows a comparison of the reconstructed images under the radial undersampled mask with a 30% sampling rate. The additive Gaussian noise (mean μ = 0, standard deviation σ = 1) is complex-valued because the MRI data in k-space are complex-valued. The target images reconstructed by the classical DIP method and the proposed method were both acceptable, but the proposed method achieved more accurate reconstruction with fewer artifacts. The quantitative results in Table 5 further support the improved performance of our proposed method in the presence of measurement noise.

4. Conclusions

In this paper, we proposed a novel deep learning based method for MR image reconstruction from undersampled k-space data that does not require patient-based training datasets. First, our proposed method reconstructs the target MR image within the DIP framework so as to reduce the dependence of learning on training datasets. Next, we use a known high-resolution reference MR image with a similar anatomical structure as the input of the CNN; this strategy introduces structural information and improves the efficiency of learning. The final k-space data correction step further increases reconstruction accuracy by enforcing data consistency. The experimental results demonstrated that the proposed method can successfully reconstruct MR images without pre-training and, compared with the conventional DIP method, further improves reconstruction quality in preserving texture details and removing artifacts.

Author Contributions

Conceptualization, D.Z. and F.Z.; methodology, D.Z. and F.Z.; software, D.Z. and Y.G.; investigation, D.Z.; writing–original draft preparation, D.Z.; writing–review and editing, Y.G.; funding acquisition, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 61527802, in part by the Key Science and Technology Project of Guangxi under Grant AB19110044, and in part by the Guangxi Natural Science Foundation Innovation Research Team Project under Grant 2016GXNSFGA380002.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed Sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82.
2. Lustig, M.; Donoho, D.L.; Pauly, J.M. Sparse MRI: The Application of Compressed Sensing for Rapid MR Imaging. Magn. Reson. Med. 2007, 58, 1182–1195.
3. Qu, X.B.; Cao, X.; Guo, D.; Hu, C.W.; Chen, Z. Combined Sparsifying Transforms for Compressed Sensing MRI. Electron. Lett. 2010, 46, 121–123.
4. Trzasko, J.; Manduca, A. Highly Undersampled Magnetic Resonance Image Reconstruction via Homotopic l0-Minimization. IEEE Trans. Med. Imaging 2009, 28, 106–121.
5. Kim, D.O.; Park, R.H. Evaluation of Image Quality Using Dual-Tree Complex Wavelet Transform and Compressive Sensing. Electron. Lett. 2010, 46, 494–495.
6. Qu, X.B.; Guo, D.; Ning, B.D.; Hou, Y.K.; Lin, Y.L.; Cai, S.H.; Chen, Z. Undersampled MRI Reconstruction with Patch-Based Directional Wavelets. Magn. Reson. Imaging 2012, 30, 964–977.
7. Zhan, Z.F.; Cai, J.F.; Guo, D.; Liu, Y.S.; Chen, Z.; Qu, X.B. Fast Multiclass Dictionaries Learning With Geometrical Directions in MRI Reconstruction. IEEE Trans. Biomed. Eng. 2016, 63, 1850–1861.
8. Liu, Q.G.; Wang, S.S.; Ying, L.; Peng, X.; Zhu, Y.J.; Liang, D. Adaptive Dictionary Learning in Sparse Gradient Domain for Image Recovery. IEEE Trans. Image Process. 2013, 22, 4652–4663.
9. Aharon, M.; Elad, M.; Bruckstein, A.M. The K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representations. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
10. Ophir, B.; Lustig, M.; Elad, M. Multi-Scale Dictionary Learning Using Wavelets. IEEE J. Sel. Top. Signal Process. 2011, 5, 1014–1024.
11. Du, H.Q.; Lam, F. Compressed Sensing MR Image Reconstruction Using a Motion-Compensated Reference. Magn. Reson. Imaging 2012, 30, 954–963.
12. Peng, X.; Du, H.Q.; Lam, F.; Babacan, D.; Liang, Z.P. Reference-Driven MR Image Reconstruction with Sparsity and Support Constraints. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Chicago, IL, USA, 30 March–2 April 2011; pp. 89–92.
13. Lam, F.; Haldar, J.P.; Liang, Z.P. Motion Compensation for Reference-Constrained Image Reconstruction from Limited Data. In Proceedings of the IEEE International Symposium on Biomedical Imaging, Chicago, IL, USA, 30 March–2 April 2011; pp. 73–76.
14. Vaswani, N.; Lu, W. Modified-CS: Modifying Compressive Sensing for Problems with Partially Known Support. IEEE Trans. Signal Process. 2010, 58, 4595–4607.
15. Manduca, A.; Trzasko, J.D.; Li, Z.B. Compressive Sensing of Images with A Priori Known Spatial Support. In Proceedings of SPIE, The International Society for Optical Engineering, San Diego, CA, USA, 22 March 2010.
16. Stojnic, M.; Parvaresh, F.; Hassibi, B. On the Reconstruction of Block-Sparse Signals with an Optimal Number of Measurements. IEEE Trans. Signal Process. 2009, 57, 3075–3085.
17. Usman, M.; Prieto, C.; Schaeffter, T.; Batchelor, P.G. k-t Group Sparse: A Method for Accelerating Dynamic MRI. Magn. Reson. Med. 2011, 66, 1163–1176.
18. Han, Y.; Du, H.Q.; Gao, X.Z.; Mei, W.B. MR Image Reconstruction Using Cosupport Constraints and Group Sparsity Regularisation. IET Image Process. 2017, 11, 155–163.
19. Blumensath, T. Sampling and Reconstructing Signals from a Union of Linear Subspaces. IEEE Trans. Inf. Theory 2011, 57, 4660–4671.
20. Eldar, Y.; Mishali, M. Robust Recovery of Signals from a Structured Union of Subspaces. IEEE Trans. Inf. Theory 2009, 55, 5302–5316.
21. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A Survey on Deep Learning in Medical Image Analysis. Med. Image Anal. 2017, 42, 60–88.
22. Wang, S.S.; Su, Z.H.; Ying, L.; Peng, X.; Zhu, S.; Liang, F.; Feng, D.G.; Liang, D. Accelerating Magnetic Resonance Imaging via Deep Learning. In Proceedings of the 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 514–517.
23. Lee, D.; Yoo, J.; Ye, J.C. Deep Residual Learning for Compressed Sensing MRI. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging, Melbourne, VIC, Australia, 18–21 April 2017; pp. 15–18.
24. Hyun, C.M.; Kim, H.P.; Lee, S.M.; Lee, S.; Seo, K. Deep Learning for Undersampled MRI Reconstruction. Phys. Med. Biol. 2018, 63, 135007.
25. Schlemper, J.; Caballero, J.; Hajnal, J.V.; Price, A.N.; Rueckert, D. A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 491–503.
26. Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.; Dragotti, P.L.; Ye, X.J.; Liu, F.D.; Arridge, S.; Keegan, J.; Guo, Y.K.; et al. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1310–1321.
27. Qin, C.; Schlemper, J.; Caballero, J.; Price, A.N.; Hajnal, J.V.; Rueckert, D. Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans. Med. Imaging 2019, 38, 280–290.
28. Yang, Y.; Sun, J.; Li, H.B.; Xu, Z.B. Deep ADMM-Net for Compressive Sensing MRI. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 10–18.
29. Hammernik, K.; Klatzer, T.; Kobler, E.; Recht, M.P.; Sodickson, D.K.; Pock, T.; Knoll, F. Learning a Variational Network for Reconstruction of Accelerated MRI Data. Magn. Reson. Med. 2018, 79, 3055–3071.
30. Aggarwal, H.K.; Mani, M.P.; Jacob, M. MoDL: Model-Based Deep Learning Architecture for Inverse Problems. IEEE Trans. Med. Imaging 2019, 38, 394–405.
31. Tezcan, K.C.; Baumgartner, C.F.; Luechinger, R.; Pruessmann, K.P.; Konukoglu, E. MR Image Reconstruction Using Deep Density Priors. IEEE Trans. Med. Imaging 2019, 38, 1633–1642.
32. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep Image Prior. arXiv 2017, arXiv:1711.10925v3.
33. Liu, J.M.; Sun, Y.; Xu, X.J.; Kamilov, U.S. Image Restoration Using Total Variation Regularized Deep Image Prior. arXiv 2018, arXiv:1810.12864.
34. Mataev, G.; Elad, M.; Milanfar, P. DeepRED: Deep Image Prior Powered by RED. arXiv 2019, arXiv:1903.10176.
35. Gong, K.; Catana, C.; Qi, J.Y.; Li, Q.Z. PET Image Reconstruction Using Deep Image Prior. IEEE Trans. Med. Imaging 2019, 38, 1655–1665.
36. Hashimoto, F.; Ohba, H.; Ote, K.; Teramoto, A.; Tsukada, H. Dynamic PET Image Denoising Using Deep Convolutional Neural Networks Without Prior Training Datasets. IEEE Access 2019, 7, 96594–96603.
37. Veen, D.V.; Jalal, A.; Soltanolkotabi, M.; Price, E.; Vishwanath, S.; Dimakis, A.G. Compressed Sensing with Deep Image Prior and Learned Regularization. arXiv 2018, arXiv:1806.06438.
38. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Overview of our proposed method. DIP, Deep Image Prior.
Figure 2. Network architecture [32] used in the proposed method.
Figure 3. The MR images. Brain A: the reference image (a) and target image (b); Brain B: the reference image (c) and target image (d); Brain C: the reference image (e) and target image (f).
Figure 4. The different undersampled masks with a sampling rate of 15%. From left to right: radial mask, variable density mask, and Cartesian mask.
Figure 5. Comparison of the reconstruction results of the target MR image: (a) in Brain A using the Cartesian undersampled mask with 20% sampling rate; (b) zero-filling reconstruction; (c) DIP reconstruction; (d) the proposed method reconstruction and corresponding error maps (e)–(g).
Figure 6. Comparison of the reconstruction results of the target MR image: (a) in Brain B using the Cartesian undersampled mask with 30% sampling rate; (b) zero-filling reconstruction; (c) DIP reconstruction; (d) the proposed method reconstruction and corresponding error maps (e)–(g).
Figure 7. Comparison of the reconstruction results of the target MR image: (a) in Brain C using the Cartesian undersampled mask with 30% sampling rate; (b) zero-filling reconstruction; (c) DIP reconstruction; (d) the proposed method reconstruction and corresponding error maps (e)–(g).
Figure 8. Results of the convergence for the proposed method at different sampling rates under the Cartesian undersampled mask: relative error curves (a) and PSNR curves (b).
Figure 9. Comparison of the reconstruction results of the target MR image: (a) in Brain B at a 30% sampling rate under the radial undersampled mask with additive Gaussian noise; (b) zero-filling reconstruction; (c) DIP reconstruction; (d) the proposed method reconstruction and corresponding error maps (e)–(g).
Table 1. Hyperparameter setting for the experiments.

Hyperparameters        Brain A                Brain B                       Brain C
L                      5                      6                             6
n_d                    [8, 16, 32, 64, 128]   [6, 32, 64, 128, 128, 128]    [6, 32, 64, 128, 128, 128]
n_u                    [8, 16, 32, 64, 128]   [6, 32, 64, 128, 128, 128]    [6, 32, 64, 128, 128, 128]
n_s                    [8, 8, 8, 8, 8]        [4, 4, 4, 4, 4, 4]            [4, 4, 4, 4, 4, 4]
k_d                    [3, 3, 3, 3, 3]        [3, 3, 3, 3, 3, 3]            [3, 3, 3, 3, 3, 3]
k_u                    [3, 3, 3, 3, 3]        [3, 3, 3, 3, 3, 3]            [3, 3, 3, 3, 3, 3]
k_s                    [1, 1, 1, 1, 1]        [1, 1, 1, 1, 1, 1]            [1, 1, 1, 1, 1, 1]
Number of iterations   5000                   5000                          5000
Learning rate          0.01                   0.01                          0.01
Table 2. Relative errors, PSNR, and SSIM values of reconstruction by different methods under the Cartesian undersampled mask.

                                 10%                               20%
Images    Methods           Rel. Error (%)  PSNR (dB)  SSIM    Rel. Error (%)  PSNR (dB)  SSIM
Brain A   Zero-filling      23.19           21.2991    0.6658  16.96           24.0131    0.7340
          DIP               17.76           23.6856    0.8169   6.59           32.4196    0.9505
          Proposed method    7.50           31.1077    0.9443   3.69           37.2870    0.9793
Brain B   Zero-filling      39.71           18.3797    0.5671  20.79           23.9985    0.7116
          DIP               34.94           19.5328    0.6738  13.86           27.5478    0.9023
          Proposed method   19.37           24.6170    0.8443   9.43           30.8447    0.9516
Brain C   Zero-filling      33.65           18.9034    0.5874  17.02           24.8250    0.7216
          DIP               31.13           19.5803    0.6770  13.28           27.0063    0.8877
          Proposed method   20.93           23.0289    0.8027  10.15           29.3177    0.9317

                                 30%                               40%
Images    Methods           Rel. Error (%)  PSNR (dB)  SSIM    Rel. Error (%)  PSNR (dB)  SSIM
Brain A   Zero-filling       5.92           33.1598    0.8215   4.27           35.9904    0.8409
          DIP                4.08           36.6919    0.9734   3.82           37.3045    0.9768
          Proposed method    2.31           41.3355    0.9900   2.02           42.5144    0.9918
Brain B   Zero-filling      20.93           23.9418    0.7185  10.70           29.7719    0.8024
          DIP                9.67           30.6624    0.9455   7.45           32.9221    0.9644
          Proposed method    7.39           32.9747    0.9665   5.58           35.4324    0.9781
Brain C   Zero-filling      16.42           25.1364    0.7408   8.94           30.4097    0.8071
          DIP               10.26           29.2226    0.9287   8.05           31.3531    0.9513
          Proposed method    7.83           31.5733    0.9559   5.83           34.1343    0.9718
Table 3. The computational time at different sampling rates under the Cartesian mask for the DIP and proposed methods.

Images    Methods           10%       20%       30%       40%
Brain B   DIP               3 m 12 s  3 m 9 s   3 m 14 s  3 m 4 s
          Proposed method   3 m 21 s  3 m 8 s   3 m 14 s  3 m 12 s
Brain C   DIP               3 m 7 s   3 m 19 s  3 m 9 s   3 m 8 s
          Proposed method   3 m 14 s  3 m 19 s  3 m 6 s   3 m 16 s
Table 4. Relative errors, PSNR, and SSIM values of reconstruction by different methods at a 20% sampling rate under the radial undersampled mask and variable density undersampled mask.

                                 Radial Mask (20%)                 Variable Density Mask (20%)
Images    Methods           Rel. Error (%)  PSNR (dB)  SSIM    Rel. Error (%)  PSNR (dB)  SSIM
Brain A   Zero-filling       6.03           33.0053    0.8902   8.61           29.9079    0.8346
          DIP                3.98           36.8254    0.9754   5.08           34.6761    0.9638
          Proposed method    2.23           41.6545    0.9897   2.93           39.2628    0.9830
Brain B   Zero-filling      17.61           25.4440    0.7424  22.49           23.3150    0.6674
          DIP                9.43           30.8724    0.9492  11.09           29.4899    0.9250
          Proposed method    3.98           36.8254    0.9754   5.08           34.6761    0.9638
Brain C   Zero-filling      14.57           26.1770    0.7744  18.41           24.1433    0.7102
          DIP                9.18           30.1892    0.9355  11.01           28.6124    0.9077
          Proposed method    7.27           32.2166    0.9597   8.13           31.2440    0.9458
Table 5. Relative errors, PSNR, and SSIM values of reconstruction for Brain B by different methods at a 30% sampling rate under the radial undersampled mask.

Methods           Rel. Error (%)  PSNR (dB)  SSIM
Zero-filling      11.28           29.3139    0.8694
DIP                8.61           31.6610    0.9339
Proposed method    6.98           33.4717    0.9490

Citation: Zhao, D.; Zhao, F.; Gan, Y. Reference-Driven Compressed Sensing MR Image Reconstruction Using Deep Convolutional Neural Networks without Pre-Training. Sensors 2020, 20, 308. https://doi.org/10.3390/s20010308