PET Imaging with Deep Learning

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (20 November 2021) | Viewed by 22514

Special Issue Editors


E-Mail Website
Guest Editor
Faculty of Physical Sciences, Complutense University of Madrid, CEI Moncloa, 28040 Madrid, Spain
Interests: image reconstruction; PET; CT; US; deep learning
Special Issues, Collections and Topics in MDPI journals

E-Mail Website
Guest Editor
Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
Interests: signal and image processing; PET-MRI; attenuation correction; motion correction; deep learning

E-Mail Website
Guest Editor
Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
Interests: image reconstruction; PET system correction; dynamic PET; kinetic modeling; machine learning

E-Mail Website
Guest Editor
Department of Translational Biomedical Sciences, College of Medicine, Dong-A University, Busan 49201, Republic of Korea
Interests: brain; diseases; image classification; medical image processing; neurophysiology; positron emission tomography; biomedical MRI; cognition; computerised tomography; feature extraction; image segmentation; neural nets; unsupervised learning
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

Positron Emission Tomography (PET) is an imaging modality widely used in oncology, neurology, and cardiology, able to observe molecular-level activity inside tissue through the injection of specific radioactive tracers. Although PET has higher sensitivity than other imaging modalities, its image resolution and signal-to-noise ratio (SNR) remain low due to various physical degradation factors and the low number of detected coincidences. Improving PET image quality is therefore essential, especially in applications such as small-lesion detection, brain imaging, and longitudinal studies.

Machine learning is an exciting field with many promising applications in medical imaging. Deep-learning methods based on convolutional neural networks have already shown tremendous potential for data processing, image reconstruction, and image processing and analysis (denoising, classification, segmentation, synthesis). Some of these methods have already been applied successfully to improve PET imaging.

The purpose of this Special Issue is to provide an overview of the many applications of deep-learning methods across all steps of the PET imaging pipeline. Potential topics include, but are not limited to, improved PET signal detection, data denoising, data corrections (attenuation, scatter, motion, normalization, …), image reconstruction, image processing and quantification, and multimodality imaging.

Prof. Joaquin Lopez Herraiz
Prof. David Izquierdo Garcia
Dr. Kuang Gong
Dr. Do-Young Kang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Positron Emission Tomography (PET)
  • PET-CT
  • PET-MRI
  • PET-US
  • machine learning
  • Deep Learning (DL)
  • Neural Network (NN)
  • Convolutional Neural Network (CNN)
  • Generative-Adversarial Network (GAN)
  • spatial-temporal networks
  • attenuation correction
  • image reconstruction
  • image denoising
  • image segmentation
  • scatter correction
  • partial-volume correction
  • motion correction
  • position estimation of PET detectors
  • timing estimation of PET detectors
  • simulation
  • dynamic PET
  • kinetic modeling
  • super-resolution

Published Papers (5 papers)


Research

17 pages, 10207 KiB  
Article
Franken-CT: Head and Neck MR-Based Pseudo-CT Synthesis Using Diverse Anatomical Overlapping MR-CT Scans
by Pedro Miguel Martinez-Girones, Javier Vera-Olmos, Mario Gil-Correa, Ana Ramos, Lina Garcia-Cañamaque, David Izquierdo-Garcia, Norberto Malpica and Angel Torrado-Carvajal
Appl. Sci. 2021, 11(8), 3508; https://doi.org/10.3390/app11083508 - 14 Apr 2021
Cited by 8 | Viewed by 3229
Abstract
Typically, pseudo-Computerized Tomography (CT) synthesis schemes proposed in the literature rely on complete atlases acquired with the same field of view (FOV) as the input volume. However, clinical CTs are usually acquired in a reduced FOV to decrease patient ionization. In this work, we present the Franken-CT approach, showing how the use of a non-parametric atlas composed of diverse anatomical overlapping Magnetic Resonance (MR)-CT scans and deep learning methods based on the U-net architecture enable synthesizing extended head and neck pseudo-CTs. Visual inspection of the results shows the high quality of the pseudo-CT and the robustness of the method, which is able to capture the details of the bone contours despite synthesizing the resulting image from knowledge obtained from images acquired with a completely different FOV. The experimental Zero-Normalized Cross-Correlation (ZNCC) reports 0.9367 ± 0.0138 (mean ± SD) and 95% confidence interval (0.9221, 0.9512); the experimental Mean Absolute Error (MAE) reports 73.9149 ± 9.2101 HU and 95% confidence interval (66.3383, 81.4915); the Structural Similarity Index Measure (SSIM) reports 0.9943 ± 0.0009 and 95% confidence interval (0.9935, 0.9951); and the experimental Dice coefficient for bone tissue reports 0.7051 ± 0.1126 and 95% confidence interval (0.6125, 0.7977). The voxel-by-voxel correlation plot shows an excellent correlation between pseudo-CT and ground-truth CT Hounsfield Units (m = 0.87; adjusted R2 = 0.91; p < 0.001). The Bland–Altman plot shows that the average of the differences is low (−38.6471 ± 199.6100; 95% CI (−429.8827, 352.5884)). This work serves as a proof of concept to demonstrate the great potential of deep learning methods for pseudo-CT synthesis and their great potential using real clinical datasets. Full article
(This article belongs to the Special Issue PET Imaging with Deep Learning)
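The abstract above reports agreement between the pseudo-CT and the ground-truth CT using ZNCC and MAE, among other metrics. As an illustration only, those two metrics can be sketched in a few lines of NumPy on toy volumes (the arrays and noise levels below are invented for the example, not taken from the paper):

```python
import numpy as np

def zncc(a, b):
    """Zero-Normalized Cross-Correlation between two volumes."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def mae_hu(pseudo_ct, ct):
    """Mean Absolute Error in Hounsfield Units."""
    return float(np.mean(np.abs(pseudo_ct - ct)))

rng = np.random.default_rng(0)
ct = rng.normal(0.0, 300.0, size=(8, 8, 8))          # toy "ground-truth CT" in HU
pseudo = ct + rng.normal(0.0, 30.0, size=ct.shape)   # toy "pseudo-CT" with small error

print(zncc(ct, ct) > 0.999)   # identical volumes correlate perfectly
print(zncc(ct, pseudo) > 0.9) # a close synthesis keeps a high ZNCC
print(mae_hu(ct, ct) == 0.0)  # MAE vanishes for a perfect synthesis
```

ZNCC is insensitive to global offset and scale, which is why it is paired with MAE (absolute HU error) in the abstract's evaluation.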

12 pages, 8809 KiB  
Article
A Multi-Channel Uncertainty-Aware Multi-Resolution Network for MR to CT Synthesis
by Kerstin Klaser, Pedro Borges, Richard Shaw, Marta Ranzini, Marc Modat, David Atkinson, Kris Thielemans, Brian Hutton, Vicky Goh, Gary Cook, Jorge Cardoso and Sebastien Ourselin
Appl. Sci. 2021, 11(4), 1667; https://doi.org/10.3390/app11041667 - 12 Feb 2021
Cited by 8 | Viewed by 2585
Abstract
Synthesising computed tomography (CT) images from magnetic resonance images (MRI) plays an important role in the field of medical image analysis, both for quantification and diagnostic purposes. Convolutional neural networks (CNNs) have achieved state-of-the-art results in image-to-image translation for brain applications. However, synthesising whole-body images remains largely uncharted territory, involving many challenges, including large image size and limited field of view, complex spatial context, and anatomical differences between images acquired at different times. We propose the use of an uncertainty-aware multi-channel multi-resolution 3D cascade network specifically aiming for whole-body MR to CT synthesis. The Mean Absolute Error on the synthetic CT generated with the MultiResunc network (73.90 HU) is compared to multiple baseline CNNs like 3D U-Net (92.89 HU), HighRes3DNet (89.05 HU) and deep boosted regression (77.58 HU) and shows superior synthesis performance. We ultimately exploit the extrapolation properties of the MultiRes networks on sub-regions of the body. Full article
(This article belongs to the Special Issue PET Imaging with Deep Learning)
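The multi-resolution cascade described above lets a coarse stage capture global spatial context while a finer stage only has to model residual detail. A toy NumPy sketch of that coarse-plus-residual decomposition (the 2D slice, pooling factor, and sizes are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def downsample2(img):
    """Average-pool a 2D array by a factor of 2 in each dimension."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Nearest-neighbour upsampling by a factor of 2."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(1)
ct = rng.normal(0.0, 200.0, size=(16, 16))   # toy CT slice in HU

coarse = downsample2(ct)              # low-resolution stage sees global context
residual = ct - upsample2(coarse)     # fine stage only models what the coarse one missed
reconstruction = upsample2(coarse) + residual

print(np.allclose(reconstruction, ct))   # the decomposition is lossless
```

In the actual network each stage is a trained CNN rather than a fixed pooling operator; the sketch only shows why splitting the problem across resolutions is well posed.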

13 pages, 2842 KiB  
Article
Deep-Learning Based Positron Range Correction of PET Images
by Joaquín L. Herraiz, Adrián Bembibre and Alejandro López-Montes
Appl. Sci. 2021, 11(1), 266; https://doi.org/10.3390/app11010266 - 29 Dec 2020
Cited by 11 | Viewed by 3509
Abstract
Positron emission tomography (PET) is a molecular imaging technique that provides a 3D image of functional processes in the body in vivo. Some of the radionuclides proposed for PET imaging emit high-energy positrons, which travel some distance before they annihilate (positron range), creating significant blurring in the reconstructed images. Their large positron range compromises the achievable spatial resolution of the system, which is more significant when using high-resolution scanners designed for the imaging of small animals. In this work, we trained a deep neural network named Deep-PRC to correct PET images for positron range effects. Deep-PRC was trained with modeled cases using a realistic Monte Carlo simulation tool that considers the positron energy distribution and the materials and tissues it propagates into. Quantification of the reconstructed PET images corrected with Deep-PRC showed that it was able to restore the images by up to 95% without any significant noise increase. The proposed method, which is accessible via Github, can provide an accurate positron range correction in a few seconds for a typical PET acquisition. Full article
(This article belongs to the Special Issue PET Imaging with Deep Learning)
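Positron range acts as a blurring kernel applied to the true activity distribution, and undoing that blur is what Deep-PRC is trained for. A minimal NumPy illustration of the degradation itself, using an invented symmetric 1D range kernel (the actual correction in the paper is a trained CNN informed by Monte Carlo simulation, not shown here):

```python
import numpy as np

# Toy positron-range kernel: annihilations are displaced from the emission point.
kernel = np.array([0.05, 0.2, 0.5, 0.2, 0.05])   # assumed range profile, sums to 1

activity = np.zeros(32)
activity[16] = 100.0                              # a point source

blurred = np.convolve(activity, kernel, mode="same")  # what the scanner "sees"

# Peak recovery in the spirit of the paper's "restored by up to 95%" figure:
recovery = blurred.max() / activity.max()
print(recovery)   # prints 0.5: the point-source peak lost half its amplitude
```

Note that the blur conserves total counts (the kernel is normalized) but spreads them spatially, which is exactly why positron range degrades resolution without changing quantitative totals.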

13 pages, 2740 KiB  
Article
Direct Annihilation Position Classification Based on Deep Learning Using Paired Cherenkov Detectors: A Monte Carlo Study
by Kibo Ote, Ryosuke Ota, Fumio Hashimoto and Tomoyuki Hasegawa
Appl. Sci. 2020, 10(22), 7957; https://doi.org/10.3390/app10227957 - 10 Nov 2020
Cited by 2 | Viewed by 1924
Abstract
To apply deep learning to estimate the three-dimensional interaction position of a Cherenkov detector, an experimental measurement of the true depth of interaction is needed. This requires significant time and effort. Therefore, in this study, we propose a direct annihilation position classification method based on deep learning using paired Cherenkov detectors. The proposed method does not explicitly estimate the interaction position or time-of-flight information and instead directly estimates the annihilation position from the raw data of photon information measured by paired Cherenkov detectors. We validated the feasibility of the proposed method using Monte Carlo simulation data of point sources. A total of 125 point sources were arranged three-dimensionally with 5 mm intervals, and two Cherenkov detectors were placed face-to-face, 50 mm apart. The Cherenkov detector consisted of a monolithic PbF2 crystal with a size of 40 × 40 × 10 mm3 and a photodetector with a single photon time resolution (SPTR) of 0 to 100 picosecond (ps) and readout pitch of 0 to 10 mm. The proposed method obtained a classification accuracy of 80% and spatial resolution with a root mean square error of less than 1.5 mm when the SPTR was 10 ps and the readout pitch was 3 mm. Full article
(This article belongs to the Special Issue PET Imaging with Deep Learning)
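Because the 125 simulated sources sit on a fixed three-dimensional grid with 5 mm pitch, estimating the annihilation position reduces to a 125-way classification problem. A toy sketch of that discretization step (the network itself is not reproduced; the noisy estimate below is an invented stand-in for its output):

```python
import numpy as np

# Sources on a 5 x 5 x 5 grid with 5 mm pitch, as in the Monte Carlo setup.
grid_1d = np.arange(5) * 5.0   # 0, 5, 10, 15, 20 mm

def classify(position_mm):
    """Snap a continuous 3D position estimate to the nearest grid source."""
    return tuple(int(np.argmin(np.abs(grid_1d - p))) for p in position_mm)

true_class = classify((10.0, 5.0, 15.0))
noisy_estimate = (11.2, 4.1, 16.4)   # hypothetical network output, < 2.5 mm off per axis

print(classify(noisy_estimate) == true_class)   # still the correct class
```

As long as the per-axis error stays below half the 5 mm pitch, the estimate snaps to the correct source, which is the sense in which classification accuracy and mm-level RMSE are both reported in the abstract.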

12 pages, 3937 KiB  
Article
Depth of Interaction Estimation in a Preclinical PET Scanner Equipped with Monolithic Crystals Coupled to SiPMs Using a Deep Neural Network
by Amirhossein Sanaat and Habib Zaidi
Appl. Sci. 2020, 10(14), 4753; https://doi.org/10.3390/app10144753 - 10 Jul 2020
Cited by 36 | Viewed by 9640
Abstract
The scintillation light distribution produced by photodetectors in positron emission tomography (PET) provides the depth of interaction (DOI) information required for high-resolution imaging. The goal of positioning techniques is to reverse the photodetector signal’s pattern map to the coordinates of the incident photon energy position. By considering the DOI information, monolithic crystals offer good spatial, energy, and timing resolution along with high sensitivity. In this work, a supervised deep neural network was used for the approximation of DOI and to assess through Monte Carlo (MC) simulations the performance on a small-animal PET scanner consisting of ten 50 × 50 × 10 mm3 continuous Lutetium-Yttrium Oxyorthosilicate doped with Cerium (LYSO: Ce) crystals and 12 × 12 silicon photomultiplier (SiPM) arrays. The scintillation position was predicted by a multilayer perceptron neural network with 256 units and 4 layers whose inputs were the number of fired pixels on the SiPM plane and the total deposited energy. A GEANT4 MC code was used to generate training and test datasets by altering the photons’ incident position, energy, and direction, as well as readout of the photodetector output. The calculated spatial resolutions in the X-Y plane and along the Z-axis were 0.96 and 1.02 mm, respectively. Our results demonstrated that using a multilayer perceptron (MLP)-based positioning algorithm in the detector modules, constituting the PET scanner, enhances the spatial resolution by approximately 18% while the absolute sensitivity remains constant. The proposed algorithm proved its ability to predict the DOI for depth under 7 mm with an error below 8.7%. Full article
(This article belongs to the Special Issue PET Imaging with Deep Learning)
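The positioning network described above is a plain multilayer perceptron mapping two scalar inputs (number of fired SiPM pixels and total deposited energy) to a 3D scintillation position. A NumPy sketch of such a forward pass, with random stand-in weights in place of the trained parameters (shapes follow the abstract's 4 layers of 256 units; the weight values are meaningless):

```python
import numpy as np

rng = np.random.default_rng(42)

# 2 inputs -> four hidden layers of 256 units -> (x, y, z) output.
sizes = [2, 256, 256, 256, 256, 3]
weights = [rng.normal(0.0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def predict(n_fired_pixels, total_energy_kev):
    """Forward pass: (fired pixels, deposited energy) -> (x, y, z) in mm."""
    h = np.array([n_fired_pixels, total_energy_kev], dtype=float)
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ w + b, 0.0)       # ReLU hidden layers
    return h @ weights[-1] + biases[-1]      # linear output layer

xyz = predict(40, 511.0)
print(xyz.shape)   # prints (3,)
```

With only two input scalars the network is tiny by deep-learning standards, which is consistent with the abstract's point that DOI estimation can run inside each detector module.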
