Article

Noise2Noise Improved by Trainable Wavelet Coefficients for PET Denoising

1 Medical Research Center, Institute of Radiation Medicine, Seoul National University, Seoul 03080, Korea
2 Brightonix Imaging Inc., Seoul 04782, Korea
3 Department of Bioengineering, Seoul National University, Seoul 03080, Korea
4 Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
5 Department of Biomedical Sciences, Seoul National University, Seoul 03080, Korea
* Author to whom correspondence should be addressed.
Electronics 2021, 10(13), 1529; https://doi.org/10.3390/electronics10131529
Submission received: 9 May 2021 / Revised: 18 June 2021 / Accepted: 21 June 2021 / Published: 24 June 2021
(This article belongs to the Special Issue Machine Learning for Medical Imaging Processing)

Abstract

The significant statistical noise and limited spatial resolution of positron emission tomography (PET) data in sinogram space result in degraded quality and accuracy of reconstructed images. Although high-dose radiotracers and long acquisition times improve PET image quality, they increase the patient's radiation exposure and the likelihood of patient motion during the scan. Recently, various data-driven techniques based on supervised deep neural network learning have made remarkable progress in reducing noise in images. However, these conventional techniques require clean target images, which are of limited availability for PET denoising. Therefore, in this study, we utilized the Noise2Noise framework, which requires only noisy image pairs for network training, to reduce the noise in PET images. A trainable wavelet transform was proposed to improve the performance of the network. The network was fed wavelet-decomposed images consisting of low- and high-pass components, and the inverse wavelet transform of the network output produced the denoised images. The proposed Noise2Noise filter with wavelet transforms outperformed the original Noise2Noise method in the suppression of artefacts and the preservation of abnormal uptakes. Quantitative analysis of the simulated PET uptake confirmed the improved performance of the proposed method compared with the original Noise2Noise technique. In the clinical data, 10 s images filtered with Noise2Noise were virtually equivalent to 300 s images filtered with a 6 mm Gaussian filter. Incorporating the wavelet transform into Noise2Noise network training improved the image contrast. In conclusion, the performance of Noise2Noise filtering for PET images was improved by incorporating a trainable wavelet transform into the self-supervised deep learning framework.

1. Introduction

Reconstructing clear positron emission tomography (PET) images from noisy observations without the loss of spatial resolution remains a challenge because of the severe noise corruption in raw PET data and the limited resolution of the scanner. For decades, numerous studies have attempted to address this problem using various statistical and numerical approaches and signal processing techniques [1,2,3,4,5,6,7,8,9,10,11]. Various filters such as bilateral, nonlocal means, and wavelet-based filters have been proposed to reduce the noise in the corrupted images without causing blur to the anatomical boundaries [1,2,3,10,11]. In addition, iterative reconstruction algorithms in Bayesian frameworks incorporating specifically designed regularization functions have also been proposed [5,10,12,13,14]. Consequently, these algorithms could significantly improve the quality of the reconstructed PET images by employing proper noise models and adequate optimization methods. However, several problems remain, such as the hyperparameter selection, optimal modeling of noise, and high computational burden. In recent years, data-driven machine learning techniques based on deep neural networks have made remarkable progress in performing many challenging signal and image processing tasks [15].
In general, supervised learning for image denoising requires a paired dataset of corrupted images and clean targets [6,16,17]. Unsupervised learning methods for PET images have also been proposed [7,8,18]; they utilize the deep image prior (DIP), which trains a convolutional neural network as the regularizer for a given cost function [19]. Recently, the Noise2Noise framework, which trains neural networks using only noisy images, was proposed [9,20]. The only mandatory requirement for successful Noise2Noise training is that the noise in the input and in the training target be independent and identically distributed. Once this requirement is satisfied, different noisy realizations of the same image can be fed to the neural network as input and target. List-mode PET data acquisition allows the generation of such independent noisy sinograms from different time frames. This is the main difference from the DIP-based methods above, which require only the corrupted input at the training phase. Moreover, DIP sometimes suffers from overfitting, in which the network eventually reproduces the corrupted input [19,21]. In this study, we applied Noise2Noise to list-mode PET data to determine whether this self-supervised denoising method is effective for short-scan PET data.
We also focused on enhancing the generalization ability of the neural network, as generalization is a major technical issue in deep learning-based medical image processing owing to the limited size of available datasets. Based on the proven efficacy of denoising an image after transforming it into a wavelet basis [22,23,24], we propose incorporating a wavelet transform (WT) with trainable coefficients into the neural network. Simulation and clinical data show that the proposed method improves the generalization ability of the trained network compared with the ordinary Noise2Noise method.

2. Materials and Methods

2.1. PET Data Model and Image Reconstruction

The PET measurement was modeled by the following equation:
$$y = Ax + r + s,$$
where $y$ is the observed data, $A$ is the projection matrix, $x$ is the desired image to be reconstructed, and $r$ and $s$ are the random and scatter components. Either an analytic reconstruction method, e.g., filtered back-projection, or an iterative algorithm can be applied to $y$ to find the target image $x$. In this study, we used the standard ordered-subset expectation-maximization (OS-EM) algorithm for image reconstruction. Although the noise in the sinogram is independent across bins, the noise in the reconstructed image may be correlated because of noise propagation through the reconstruction [25,26]. However, the following experimental results show that the method works well with both simulation and real data. We used six outer iterations and 21 subsets for both simulation and clinical data, and no post-filter was applied to the reconstructed images.
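As a rough illustration of this reconstruction step, the OS-EM update can be sketched with a toy system matrix (the matrix sizes, count level, and subset scheme below are illustrative assumptions only, not those of the actual scanner software):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system matrix A (n_bins x n_voxels) and ground-truth image x.
n_bins, n_voxels = 60, 20
A = rng.uniform(0.0, 1.0, size=(n_bins, n_voxels))
x_true = rng.uniform(0.5, 1.5, size=n_voxels)
y = rng.poisson(A @ x_true).astype(float)  # noisy sinogram, y ~ Poisson(Ax); r = s = 0 here

def os_em(y, A, n_iter=6, n_subsets=4):
    """Ordered-subset EM for the model y ~ Poisson(Ax)."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / np.maximum(As @ x, 1e-12)          # measured / expected counts
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)  # subset sensitivity
    return x

x_hat = os_em(y, A)
```

Each subset update multiplies the current estimate by the back-projected ratio of measured to expected counts, normalized by the subset sensitivity; the multiplicative form keeps the estimate non-negative, as in the paper's OS-EM setting.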

2.2. Noise2Noise Training and Trainable Wavelets

Unlike normal supervised learning for image denoising, Noise2Noise exploits two noisy realizations (never a clean target) in the training phase, as follows:
$$\arg\min_{\theta} \sum_{n=1}^{N} \left\| f_{\theta}(x_n + \epsilon_{n,1}) - (x_n + \epsilon_{n,2}) \right\|^2,$$
where $f$ is the neural network with parameters $\theta$, $x_n$ is the clean target image, and $\epsilon_{n,\cdot}$ is the independent noise of each image. In this study, $x_n + \epsilon_{n,\cdot}$ corresponds to the result of the OS-EM algorithm for a different time frame. Using list-mode PET data, we can divide the original reference scan into independent short-scan frames, satisfying the i.i.d. noise assumption of Noise2Noise in sinogram space.
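A minimal numerical sketch of why this objective works: for a pixel-wise estimate, minimizing the summed squared distance to many independent noisy realizations drives the estimate toward their mean, i.e., toward the clean image. The toy image, Gaussian noise model, and constant per-pixel estimate standing in for the network $f_\theta$ are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean image and K independent noisy realizations (e.g., short list-mode frames).
x_clean = rng.uniform(1.0, 2.0, size=(16, 16))
noisy = [x_clean + rng.normal(0.0, 0.5, x_clean.shape) for _ in range(200)]

# Noise2Noise-style objective for a pixel-wise estimate z:
#   sum_k || z - noisy_k ||^2, minimized by gradient descent.
z = np.zeros_like(x_clean)
lr = 0.005
for _ in range(500):
    grad = sum(2.0 * (z - n) for n in noisy) / len(noisy)
    z -= lr * grad
# z converges to the mean of the noisy realizations, which approximates x_clean
# because the noise is zero-mean and independent across realizations.
```

No clean target was ever used in the loss, yet the resulting estimate is far closer to the clean image than any single noisy realization.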
The generalization power of the trained network, which depends on the number of patients and list-mode time bins, is often limited in medical imaging applications. To improve generalization for a small dataset, we proposed applying the WT, which allows the images to be handled in multiple spectral bands. The network was fed the decomposed images consisting of low-pass and high-pass components, and the inverse WT converted the network output into a denoised image (Figure 1a). Accordingly, the overall training procedure can be described by:
$$\arg\min_{\theta_f,\,\theta_W,\,\theta_{W^*}} \sum_{n=1}^{N} \left\| W^{*}_{\theta_{W^*}}\!\left( f_{\theta_f}\!\left( W_{\theta_W}(x_n + \epsilon_{n,1}) \right) \right) - (x_n + \epsilon_{n,2}) \right\|^2,$$
where $W_{\theta_W}$ and $W^{*}_{\theta_{W^*}}$ are the forward and inverse WTs with coefficients $\theta_W$ and $\theta_{W^*}$. The WTs were initialized with the elements of the discrete Haar filter banks [27]. The single-level normalized low-pass and high-pass Haar filter banks before network training are defined by:
$$\theta_{\mathrm{low}} = \left[ \tfrac{1}{\sqrt{2}},\; \tfrac{1}{\sqrt{2}} \right], \qquad \theta_{\mathrm{high}} = \left[ \tfrac{1}{\sqrt{2}},\; -\tfrac{1}{\sqrt{2}} \right].$$
It is well known that the 1D forward WT is equivalent to a convolution between a given signal and the above filters, followed by dyadic downsampling. The inverse WT is computed by convolving the dyadically upsampled wavelet-domain signals with the transposed filter banks (Figure 1b). For the 2D WT, the low-pass and high-pass transforms are applied along each axis, producing four components: low-resolution (LL), vertical (LH), horizontal (HL), and diagonal (HH). In conventional wavelet theory using fixed filter banks, the inverse transform applies the same filters used for the forward transform. However, we used independent filters for the forward and inverse transforms; a detailed discussion can be found in Section 3. All the network parameters and wavelet coefficients were trained using the standard backpropagation algorithm [28].
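The 1D forward and inverse Haar transforms described above can be sketched in a few lines. For the two-tap Haar filters, convolution followed by dyadic downsampling reduces to operating on non-overlapping sample pairs; the filter values are the initial (pre-training) values from the equation above, and the trainable update is omitted:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
# Normalized Haar analysis filters (initial values of the trainable banks).
low = np.array([1.0, 1.0]) / SQRT2
high = np.array([1.0, -1.0]) / SQRT2

def haar_forward(signal):
    """1D forward WT: convolution with the filters + dyadic downsampling."""
    pairs = signal.reshape(-1, 2)      # non-overlapping pairs of samples
    return pairs @ low, pairs @ high   # approximation, detail

def haar_inverse(approx, detail):
    """1D inverse WT: dyadic upsampling + transposed filter banks."""
    out = np.empty(2 * approx.size)
    out[0::2] = approx * low[0] + detail * high[0]
    out[1::2] = approx * low[1] + detail * high[1]
    return out

x = np.array([4.0, 2.0, 5.0, 7.0])
a, d = haar_forward(x)
x_rec = haar_inverse(a, d)  # perfect reconstruction with orthonormal Haar banks
```

Applying the low- and high-pass pair along each image axis in turn yields the four 2D subbands (LL, LH, HL, HH) mentioned in the text.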
We used a U-Net-based convolutional neural network combined with the DenseNet architecture [29,30], which is widely used for various tasks in biomedical imaging [31,32,33,34,35]. The network parameters and wavelet coefficients were optimized using the Adam optimizer with a learning rate of 10^−4. The batch size was 8, and the network was implemented in TensorFlow on a GTX 1080 Ti GPU.

2.3. Simulation Data

Twenty segmented synthetic magnetic resonance images were collected from BrainWeb (Cocosco et al., 1997), and ground-truth PET images were generated from them. The PET uptake values assigned to the brain tissues were 0.5 for gray matter, 0.125 for white matter and background, and 0.75 for randomly located lesions in gray matter. After forward projection using a 2D matrix operation implemented in MATLAB 2018a (The MathWorks Inc., Natick, MA, USA), Poisson noise was added to the projection data to generate 100 independent realizations of the noisy sinogram with 1/30 of the reference counts. We trained the Noise2Noise network models with and without the trainable WT in 2.5D mode; three adjacent slices were fed into the networks together as different channels. The training and test sets comprised 1635 and 545 slices, obtained from fifteen and five image volumes, respectively. After training the neural networks, the normalized root-mean-square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity metric (SSIM) were calculated between the ground-truth PET image and the test data, as follows:
$$\mathrm{NRMSE} = \sqrt{ \frac{ \sum_{j \in \mathrm{ROI}} \left( x_j - \hat{x}_j \right)^2 }{ \sum_{j \in \mathrm{ROI}} \hat{x}_j^2 } },$$
$$\mathrm{PSNR} = 20 \times \log_{10} \frac{0.75}{\sqrt{\mathrm{MSE}}},$$
$$\mathrm{SSIM} = \frac{ \left( 2\mu_x \mu_y + c_1 \right)\left( 2\sigma_{xy} + c_2 \right) }{ \left( \mu_x^2 + \mu_y^2 + c_1 \right)\left( \sigma_x^2 + \sigma_y^2 + c_2 \right) },$$
where $x_j$ is the $j$-th voxel of the test image generated by the trained network, $\hat{x}_j$ is the corresponding ground-truth voxel, and $\mathrm{ROI}$ is the predefined region (the simulated abnormal lesions or the whole gray matter). $\mu$ and $\sigma$ denote the average and standard deviation of the images, and we used the default SSIM function of MATLAB 2018a.
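The NRMSE and PSNR definitions above translate directly into code. This is a small sketch: the toy images and ROI mask are illustrative, the peak of 0.75 matches the simulated lesion uptake, and SSIM is omitted since the paper used MATLAB's built-in function:

```python
import numpy as np

def nrmse(x, x_ref, roi):
    """NRMSE over an ROI mask, relative to the reference image energy."""
    diff = (x - x_ref)[roi]
    return np.sqrt(np.sum(diff**2) / np.sum(x_ref[roi]**2))

def psnr(x, x_ref, peak=0.75):
    """PSNR with the simulated peak uptake (0.75) as the signal maximum."""
    mse = np.mean((x - x_ref)**2)
    return 20.0 * np.log10(peak / np.sqrt(mse))

# Illustrative inputs: a flat reference and a uniformly biased test image.
x_ref = np.full((8, 8), 0.5)
x = x_ref + 0.01
roi = np.ones_like(x_ref, dtype=bool)
```

With this uniform 0.01 bias, the NRMSE is 0.01/0.5 = 0.02 and the PSNR is 20·log10(75) ≈ 37.5 dB.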

2.4. Clinical Data

We retrospectively used fourteen [18F]FDG brain scans (eight males and six females, age = 50.9 ± 21.6 years) acquired on a Biograph mCT 40 scanner (Siemens Healthcare, Knoxville, TN, USA). Nine scans were used to train the neural networks and five to test and evaluate their performance (981 and 545 slices, respectively). The list-mode PET data were acquired for 5 min in a single bed position, starting 60 min after the intravenous injection of 18F-FDG (5.18 MBq/kg). To obtain noisy sinograms, we divided the 5 min list-mode data into 10 s bins. The PET images were then reconstructed with the OS-EM algorithm (six iterations, 21 subsets, no post-filter) using the E7 tools provided by Siemens, in which CT-derived attenuation maps were used for attenuation and scatter correction. The matrix and voxel sizes of the reconstructed PET images were 200 × 200 × 109 and 2.04 × 2.04 × 2.03 mm3, respectively. The same training parameters were used to train the Noise2Noise networks with and without the WT. We also calculated the PSNR and SSIM for each test image, which consisted of 10 independent samples derived from the full-count data, after normalizing the images by the 99% value of the full-count image.
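The 10 s binning of the 5 min list-mode stream can be sketched as a simple partition of event timestamps. The synthetic timestamps below are an assumption for illustration; real list-mode data also carry crystal-pair indices and other event fields:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical list-mode stream: one timestamp (seconds) per coincidence event.
timestamps = np.sort(rng.uniform(0.0, 300.0, size=100_000))  # 5 min scan

def split_listmode(timestamps, frame_len=10.0, total=300.0):
    """Split a list-mode acquisition into independent fixed-length time frames."""
    edges = np.arange(0.0, total + frame_len, frame_len)
    return [timestamps[(timestamps >= t0) & (timestamps < t1)]
            for t0, t1 in zip(edges[:-1], edges[1:])]

frames = split_listmode(timestamps)  # 30 non-overlapping frames of 10 s each
```

Because the frames are disjoint in time, each frame's sinogram is an independent Poisson realization, which is what makes the Noise2Noise input/target pairing valid.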

3. Results and Discussion

Figure 2 shows the noisy simulation (input) and the denoised images obtained using a Gaussian filter and the Noise2Noise filters with and without the trainable WT. The proposed Noise2Noise filter with the WT outperformed the original Noise2Noise method in preserving the abnormal uptakes indicated by the red arrows. Moreover, increased uptake of normal gray matter tissue was observed with the original Noise2Noise method (blue arrows). Noise2Noise with the wavelet transform yielded smaller errors in the abnormal lesions and normal gray matter regions than the original Noise2Noise (Figure 3a,b). The quantitative analysis using the image quality metrics also confirmed the improved performance of the proposed method compared with the original Noise2Noise technique (Figure 3c,d). In the clinical data, the 10 s images filtered with the Noise2Noise model were the most similar to the 300 s images (Figure 4). The incorporation of the WT, which yields multiple downscaled images during Noise2Noise network training, resulted in improved image contrast (red arrows). Moreover, the quantitative analysis of the clinical data also showed that Noise2Noise with the wavelet transform outperformed the original Noise2Noise (Figure 5). The proposed Noise2Noise with the wavelet transform yielded the best results among the compared filtering methods.
One interesting property of the proposed network is that it does not obey the orthogonality of the Haar wavelet, because different filter banks are used for the forward and inverse wavelet transforms. This loss of mathematical exactness is a limitation of this study; nevertheless, the network produced the desired denoised images. Moreover, the updates of the filter banks were more efficient because they were not shared or overlapped in the backpropagation path at each iteration. If the same filter banks were used for both transforms during training, they would be updated twice, once for the forward and once for the inverse transform, resulting in an undesired change in the computed gradient.
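The orthogonality point can be illustrated numerically: with identical forward and inverse Haar banks, reconstruction is exact, whereas independently modified inverse banks (as can arise once the two sets of coefficients are trained separately) no longer invert the forward transform exactly. The 10% perturbation below is an arbitrary stand-in for training updates:

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
low = np.array([1.0, 1.0]) / SQRT2
high = np.array([1.0, -1.0]) / SQRT2

def forward(x, lo, hi):
    """Single-level 1D analysis with two-tap filters on non-overlapping pairs."""
    pairs = x.reshape(-1, 2)
    return pairs @ lo, pairs @ hi

def inverse(a, d, lo, hi):
    """Single-level 1D synthesis with (possibly different) two-tap filters."""
    out = np.empty(2 * a.size)
    out[0::2] = a * lo[0] + d * hi[0]
    out[1::2] = a * lo[1] + d * hi[1]
    return out

x = np.array([3.0, 1.0, 4.0, 1.0])
a, d = forward(x, low, high)
exact = inverse(a, d, low, high)            # same banks: perfect reconstruction
perturbed = inverse(a, d, low * 1.1, high)  # independent banks: no longer exact
```

This is the trade-off the paragraph describes: exact invertibility is sacrificed, but the forward and inverse coefficients receive separate, non-overlapping gradient updates.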
In this study, we evaluated only the Haar wavelet in the network because it is one of the simplest discrete wavelet transforms. However, various other wavelets have been proposed, including continuous wavelets [36,37]. In the future, we will investigate various wavelet transforms to find the best structure for the Noise2Noise framework.
The noise characteristics of PET images are complex, over-dispersed, and correlated [26]. Various studies have attempted to reduce the noise in reconstructed images by applying handcrafted priors or data-driven methods [6,7,8,16]. Most data-driven methods based on deep neural networks have focused on supervised learning, which requires clean target images. Recently, the Noise2Noise framework [20] has produced promising outcomes in image denoising without ground truth. The present study shows that the Noise2Noise method can mitigate the considerable noise in reconstructed PET images. Noise2Noise network training is simple and does not require designing a complex noise model of the image. Moreover, the incorporation of a WT with trainable coefficients led to improved generalization in Noise2Noise training, as shown in the clinical data experiments.

4. Conclusions

We proposed using the Noise2Noise framework to reduce the noise in low-count PET images. A discrete Haar wavelet transform was applied at the input and output of the neural network, where the input and target images were both noisy but had independent, identically distributed noise. The filter banks of the wavelets were updated during the training.
After training, both Noise2Noise with and without WT successfully alleviated the noise in the input. Furthermore, the performance of Noise2Noise filtering for PET images was improved by incorporating a trainable WT. In computer simulations, Noise2Noise with WT recovered the intensities of randomly added lesions better than the original Noise2Noise. Experiments using clinical images also showed an improved performance of Noise2Noise training with WT in terms of PSNR and SSIM. In the future, more rigorous evaluations will be performed using various wavelet transforms and radiotracers.

Author Contributions

Conceptualization, S.-K.K., S.-Y.Y. and J.-S.L.; methodology and investigation, S.-K.K., S.-Y.Y. and J.-S.L.; writing—original draft preparation, review, and editing; S.-K.K. and J.-S.L.; supervision and funding acquisition, J.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grants from the Korea Medical Device Development Fund grant funded by the Korea government (Project no. KMDF_PR_20200901_0006 and no. KMDF_PR_20200901_0028).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Seoul National University Hospital (H-1711-140-903, 17 December 2019).

Informed Consent Statement

Informed consent was waived because of the retrospective nature of the study and the analysis used anonymous clinical data.

Data Availability Statement

The data are not publicly available.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hofheinz, F.; Langner, J.; Beuthien-Baumann, B.; Oehme, L.; Steinbach, J.; Kotzerke, J.; van den Hoff, J. Suitability of bilateral filtering for edge-preserving noise reduction in PET. EJNMMI Res. 2011, 1, 23.
2. Dutta, J.; Leahy, R.M.; Li, Q. Non-Local Means Denoising of Dynamic PET Images. PLoS ONE 2013, 8, e81390.
3. Le Pogam, A.; Hanzouli, H.; Hatt, M.; Cheze Le Rest, C.; Visvikis, D. Denoising of PET images by combining wavelets and curvelets for improved preservation of resolution and quantitation. Med. Image Anal. 2013, 17, 877–891.
4. Tang, J.; Rahmim, A. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy. Phys. Med. Biol. 2015, 60, 31–48.
5. Wang, G.; Qi, J. Penalized likelihood PET image reconstruction using patch-based edge-preserving regularization. IEEE Trans. Med. Imaging 2012, 31, 2194–2204.
6. Gong, K.; Guan, J.; Liu, C.-C.; Qi, J. PET image denoising using a deep neural network through fine tuning. IEEE Trans. Radiat. Plasma Med. Sci. 2018, 3, 153–161.
7. Hashimoto, F.; Ohba, H.; Ote, K.; Teramoto, A.; Tsukada, H. Dynamic PET image denoising using deep convolutional neural networks without prior training datasets. IEEE Access 2019, 7, 96594–96603.
8. Cui, J.; Gong, K.; Guo, N.; Wu, C.; Meng, X.; Kim, K.; Zheng, K.; Wu, Z.; Fu, L.; Xu, B. PET image denoising using unsupervised deep learning. Eur. J. Nucl. Med. Mol. Imaging 2019, 46, 2780–2789.
9. Chan, C.; Zhou, J.; Yang, L.; Qi, W.; Asma, E. Noise to Noise Ensemble Learning for PET Image Denoising. In Proceedings of the 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), Manchester, UK, 26 October–2 November 2019; pp. 1–3.
10. Arabi, H.; Zaidi, H. Spatially guided nonlocal mean approach for denoising of PET images. Med. Phys. 2020, 47, 1656–1669.
11. Cheng, J.C.; Bevington, C.; Rahmim, A.; Klyuzhin, I.; Matthews, J.; Boellaard, R.; Sossi, V. Dynamic PET image reconstruction utilizing intrinsic data-driven HYPR4D denoising kernel. Med. Phys. 2021, 48, 2230–2244.
12. Tang, J.; Yang, B.; Wang, Y.; Ying, L. Sparsity-constrained PET image reconstruction with learned dictionaries. Phys. Med. Biol. 2016, 61, 6347–6368.
13. Schramm, G.; Holler, M.; Rezaei, A.; Vunckx, K.; Knoll, F.; Bredies, K.; Boada, F.; Nuyts, J. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 590–603.
14. Kang, S.K.; Lee, J.S. Anatomy-guided PET reconstruction using l1 Bowsher prior. Phys. Med. Biol. 2021, 66, 095010.
15. Xie, J.; Xu, L.; Chen, E. Image denoising and inpainting with deep neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Carson City, NV, USA, 3–6 December 2012; pp. 341–349.
16. Wang, Y.; Yu, B.; Wang, L.; Zu, C.; Lalush, D.S.; Lin, W.; Wu, X.; Zhou, J.; Shen, D.; Zhou, L. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. Neuroimage 2018, 174, 550–562.
17. Kim, K.; Wu, D.; Gong, K.; Dutta, J.; Kim, J.H.; Son, Y.D.; Kim, H.K.; El Fakhri, G.; Li, Q. Penalized PET reconstruction using deep learning prior and local linear fitting. IEEE Trans. Med. Imaging 2018, 37, 1478–1487.
18. Gong, K.; Catana, C.; Qi, J.; Li, Q. PET image reconstruction using deep image prior. IEEE Trans. Med. Imaging 2018, 38, 1655–1665.
19. Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9446–9454.
20. Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning image restoration without clean data. arXiv 2018, arXiv:1803.04189.
21. Cheng, Z.; Gadelha, M.; Maji, S.; Sheldon, D. A Bayesian perspective on the deep image prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019; pp. 5443–5451.
22. Kang, E.; Min, J.; Ye, J.C. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med. Phys. 2017, 44, e360–e375.
23. Huang, H.; He, R.; Sun, Z.; Tan, T. Wavelet-SRNet: A wavelet-based CNN for multi-scale face super resolution. In Proceedings of the IEEE International Conference on Computer Vision, Honolulu, HI, USA, 22–25 July 2017; pp. 1689–1697.
24. Luo, X.; Zhang, J.; Hong, M.; Qu, Y.; Xie, Y.; Li, C. Deep Wavelet Network with Domain Adaptation for Single Image Demoireing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 420–421.
25. Barrett, H.H.; Wilson, D.W.; Tsui, B.M. Noise properties of the EM algorithm. I. Theory. Phys. Med. Biol. 1994, 39, 833.
26. Teymurazyan, A.; Riauka, T.; Jans, H.S.; Robinson, D. Properties of Noise in Positron Emission Tomography Images Reconstructed with Filtered-Backprojection and Row-Action Maximum Likelihood Algorithm. J. Dig. Imaging 2013, 26, 447–456.
27. Akansu, A.N.; Haddad, R.A. Multiresolution Signal Decomposition: Transforms, Subbands, and Wavelets; Academic Press: Cambridge, MA, USA, 2001.
28. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv 2016, arXiv:1603.04467.
29. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
30. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. arXiv 2016, arXiv:1608.06993.
31. Hegazy, M.A.; Cho, M.H.; Cho, M.H.; Lee, S.Y. U-net based metal segmentation on projection domain for metal artifact reduction in dental CT. Biomed. Eng. Lett. 2019, 9, 375–385.
32. Hwang, D.; Kim, K.Y.; Kang, S.K.; Seo, S.; Paeng, J.C.; Lee, D.S.; Lee, J.S. Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning. J. Nucl. Med. 2018, 59, 1624–1629.
33. Park, J.; Bae, S.; Seo, S.; Park, S.; Bang, J.-I.; Han, J.H.; Lee, W.W.; Lee, J.S. Measurement of Glomerular Filtration Rate using Quantitative SPECT/CT and Deep-learning-based Kidney Segmentation. Sci. Rep. 2019, 9, 4223.
34. Lee, M.S.; Hwang, D.; Kim, J.H.; Lee, J.S. Deep-dose: A voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci. Rep. 2019, 9, 1–9.
35. Kang, S.K.; Shin, S.A.; Seo, S.; Byun, M.S.; Lee, D.Y.; Kim, Y.K.; Lee, D.S.; Lee, J.S. Deep learning-based 3D inpainting of brain MR images. Sci. Rep. 2021, 11, 1673.
36. Antoine, J.-P.; Carrette, P.; Murenzi, R.; Piette, B. Image analysis with two-dimensional continuous wavelet transform. Signal Process. 1993, 31, 241–272.
37. Mallat, S. Chapter 11—Denoising. In A Wavelet Tour of Signal Processing, 3rd ed.; Academic Press: Boston, MA, USA, 2009; pp. 535–610.
Figure 1. (a) Schematic of the proposed neural network. The forward and inverse wavelet transforms (WT), performed before and after the deep neural network, have trainable coefficients. To preserve the continuity of the 3D data, we used 2.5D input and output. (b) Schematic of the 1D wavelet transform and its inverse. * denotes convolution with the Haar filter banks (θ), which were learnable parameters during training. The arrows indicate dyadic downsampling (↓2) and upsampling (↑2).
Figure 2. Ground truth, noisy input, Gaussian-filtered, and denoised images using the Noise2Noise (N2N) without and with the incorporation of the trainable wavelet transform (WT) for simulation data.
Figure 3. Results of the quantitative analysis for each test subject with 100 noisy realizations using Noise2Noise with and without wavelet transform. (a) NRMSE (normalized root-mean-square error) for gray matter, (b) NRMSE for random lesions, (c) PSNR (peak signal-to-noise ratio), and (d) SSIM (structural similarity metric).
Figure 4. Full count image, noisy input, Gaussian-filtered, and denoised images using the Noise2Noise (N2N) without and with the incorporation of the trainable wavelet transform (WT) for clinical data. Red arrows indicate the region with decreased uptake in N2N without wavelet transform.
Figure 5. Results of the quantitative analysis for each test subject from the clinical dataset using Gaussian filtering, Noise2Noise (N2N) without wavelet transform, and Noise2Noise with wavelet transform. (a) PSNR and (b) SSIM.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Kang, S.-K.; Yie, S.-Y.; Lee, J.-S. Noise2Noise Improved by Trainable Wavelet Coefficients for PET Denoising. Electronics 2021, 10, 1529. https://doi.org/10.3390/electronics10131529
