Article

Effect of Denoising and Deblurring 18F-Fluorodeoxyglucose Positron Emission Tomography Images on a Deep Learning Model’s Classification Performance for Alzheimer’s Disease

1. Institute of Human Genomic Study, College of Medicine, Korea University Ansan Hospital, 123 Jeokgeum-ro, Danwon-gu, Ansan 15355, Korea
2. Department of Radiation Convergence Engineering, College of Software and Digital Healthcare Convergence, Yonsei University, 1, Yeonsedae-gil, Heungeop-myeon, Wonju 26493, Korea
3. Department of Integrative Medicine, Major in Digital Healthcare, Yonsei University College of Medicine, Unju-ro, Gangnam-gu, Seoul 06229, Korea
4. Department of Radiological Science, College of Health Science, Gachon University, 191, Hambakmoero, Yeonsu-gu, Incheon 21936, Korea
* Authors to whom correspondence should be addressed.
Metabolites 2022, 12(3), 231; https://doi.org/10.3390/metabo12030231
Submission received: 14 February 2022 / Revised: 2 March 2022 / Accepted: 4 March 2022 / Published: 7 March 2022
(This article belongs to the Special Issue Deep Learning for Metabolomics)

Abstract:
Alzheimer’s disease (AD) is the most common progressive neurodegenerative disease. 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) is widely used to predict AD with deep learning models; however, the effects of noise and blurring in 18F-FDG PET images have not been considered. We investigated the performance of a classification model based on a 3D deep convolutional neural network trained using raw, deblurred (by the fast total variation deblurring method), or denoised (by the median modified Wiener filter) 18F-FDG PET images, with or without cropping around the limbic system area. The classification model trained using denoised whole-brain 18F-FDG PET images achieved higher classification performance (0.75/0.65/0.79/0.39 for sensitivity/specificity/F1-score/Matthews correlation coefficient (MCC), respectively) than the models trained using raw or deblurred 18F-FDG PET images. The classification model trained using cropped raw 18F-FDG PET images achieved higher performance (0.78/0.63/0.81/0.40 for sensitivity/specificity/F1-score/MCC) than that trained using whole-brain images (0.72/0.32/0.71/0.10). Combining 18F-FDG PET image deblurring and cropping (0.89/0.67/0.88/0.57 for sensitivity/specificity/F1-score/MCC) was the most helpful for improving performance. For this model, class activation maps showed that the right middle frontal, middle temporal, insula, and hippocampus areas were the most predictive of AD. Our findings demonstrate that 18F-FDG PET image preprocessing and cropping improve the explainability and potential clinical applicability of deep learning models.


1. Introduction

Alzheimer’s disease (AD) is a progressive neurodegenerative disease characterized by cognitive decline and memory loss [1], and it is the most common cause of dementia, which causes disability and dependency in older people worldwide [2]. Cognitive decline may be associated with metabolic and neurotransmitter activities in the brain [3]. These changes in AD may start several years before the onset of clinical symptoms [4,5]. Thus, early detection of AD is important, as early treatment of this disease may delay its progression [2]. To acquire information on pathological processes related to AD, 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET), which reflects the glucose metabolism of cerebral neurons, is widely used [3,6]. However, since the earliest symptoms of AD, such as short-term memory loss, can be confused with symptoms resulting from aging, stress, or other brain disorders, it remains challenging to recognize AD before the manifestation of severe cognitive impairment with typical neuroimaging signs [7]. Additionally, the reliance on qualitative readings by specialists to recognize AD patterns is an issue in the clinical application of brain 18F-FDG PET [8]. Thus, a reliable approach for detecting AD in its early stages is urgently needed.
In nuclear medicine, recent developments in artificial intelligence (AI) methodology allow the extraction of brain metabolic activity features related to neurodegenerative disorders, and AI has been utilized to classify AD and normal conditions using 18F-FDG PET images [6,9,10]. Ding et al. reported that the classification accuracy for three groups (cognitively normal control, mild cognitive impairment (MCI), and AD groups) using a classification model of 18F-FDG PET images implemented with a convolutional neural network (CNN) was higher than that of radiologists [6]. Zhou et al. proposed a deep learning model to assist in the diagnosis of conversion from MCI to AD [9]. Although previous studies provided effective deep learning models to predict AD, they did not consider the effects of noise and blurring in the 18F-FDG PET images on the classification model. Deep learning models are susceptible to image noise and blurring, since adversarial images can lead to misclassification; this consequently reduces the performance and explainability of the model [11,12].
The origin of the PET signal is the radioactive decay of the labeled tracer, and the underlying decay process follows Poisson statistics [13]. Therefore, the noise of a PET image is determined by the number of registered counts, while the positron range and patient motion contribute to blurring [13]. Because the unavoidable noise and blurring of PET images reduce the signal-to-noise ratio and spatial resolution, correction and filtering are required to accurately recognize AD. Software and image processing algorithms to reduce noise and blurring in PET images of AD are constantly being developed. However, few studies have measured the change in AD classification accuracy according to the presence or absence of an image improvement algorithm in a deep learning pipeline. Furthermore, because irrelevant and redundant features degrade the accuracy and efficiency of a classification model [14,15], we compared the performance of the deep learning model using input features that included either the whole brain or just the regions of interest.
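The count dependence of Poisson noise mentioned above can be illustrated with a few lines of NumPy: the relative noise of a counting measurement scales as 1/√N, so acquisitions with fewer registered counts are intrinsically noisier. The activity levels below are arbitrary illustrative values, not ADNI acquisition parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_counts = np.full((64, 64), 20.0)          # hypothetical uniform activity map

low = rng.poisson(mean_counts)                 # low-count acquisition
high = rng.poisson(mean_counts * 100) / 100.0  # 100x the counts, rescaled to same mean

# Poisson std is sqrt(mean), so the rescaled high-count image is ~10x less noisy.
print(low.std(), high.std())
```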
This study aimed to investigate: (1) whether applying denoising and deblurring methods to 18F-FDG PET images improves the performance of AD classification, and (2) whether 18F-FDG PET image cropping improves the performance of AD classification, using a deep learning model modified from 3D-ResNet, which has recently been described as a powerful prediction model for 3D medical images [16]. We also used class activation maps [17] to investigate differences in explainability between models trained on raw images and those trained on denoised or deblurred images. Our primary hypothesis was that applying preprocessing (i.e., denoising or deblurring, and cropping) to the input features can improve the performance of the deep learning model. In the present study, we attempted to improve the deep learning-based assessment of 18F-FDG PET images, where current approaches are suboptimal for controlling image distortions, by systematically evaluating whether denoising and deblurring can improve model performance. Ultimately, such an application could be an effective tool for accurately classifying AD patients and cognitively normal controls.

2. Results

2.1. Demographic Characteristics and Clinical Assessments

Demographic data including age, sex, education, and neuropsychological cognitive assessment tests such as the Mini-Mental State Examination (MMSE), Clinical Dementia Rating (CDR) scale, and apolipoprotein E (APOE) ε4 genotyping characteristics are shown in Table 1. There were no differences in age, sex, or education level. However, the AD group was more likely to carry the APOE ε4 allele (p < 0.001) and have lower cognitive test performance results (i.e., MMSE and CDR, p < 0.001).

2.2. Classification Performance

Figure 1 shows the convergence curves of the loss function for the 3D deep CNN during training and testing based on raw, deblurred, or denoised whole-brain 18F-FDG PET images and 18F-FDG PET images cropped around the limbic system area. The training and testing losses of the 3D deep CNN converged by the 20th epoch regardless of the data processing applied.
The performance comparisons of the individual classification models trained using raw, denoised, or deblurred whole-brain 18F-FDG PET images to classify AD and cognitively normal conditions are summarized in Table 2. The classification model trained using deblurred whole-brain 18F-FDG PET images with a σb of 2 achieved the highest average sensitivity of 0.91, but not the highest specificity, accuracy, F1-score, or Matthews correlation coefficient (MCC). The classification model trained using denoised whole-brain 18F-FDG PET images achieved higher average specificity, accuracy, F1-score, and MCC than the models trained using raw or deblurred images. Although the deblurring method was helpful for improving classification sensitivity, the denoising method (σn = 3: 0.75/0.65/0.72/0.79/0.39 and σn = 5: 0.85/0.48/0.74/0.82/0.35 for sensitivity/specificity/accuracy/F1-score/MCC, respectively) was more effective than the deblurring method (σb = 1: 0.83/0.30/0.67/0.78/0.14 and σb = 2: 0.91/0.25/0.71/0.81/0.21) for classifying AD patients and cognitively normal controls using whole-brain 18F-FDG PET images.
The performance comparisons of individual classification models trained using raw, denoised, or deblurred 18F-FDG PET images cropped around the limbic system area to classify AD and normal conditions are summarized in Table 3. The classification model trained using raw 18F-FDG PET images cropped around the limbic system area achieved higher classification performance (0.78/0.63/0.74/0.81/0.40 for sensitivity/specificity/accuracy/F1-score/MCC, respectively) than that trained using whole-brain 18F-FDG PET images (0.72/0.32/0.60/0.71/0.10). This suggests that 18F-FDG PET image cropping is helpful for improving classification performance. The deblurring method with a σb of 2 (0.89/0.67/0.82/0.88/0.57) was the most helpful for improving the model’s classification performance using 18F-FDG PET images cropped around the limbic system area.

2.3. Activation Maps Associated with the Classification of Alzheimer’s Disease and Cognitively Normal Controls

Figure 2 presents the activation maps showing the brain regions that the 3D deep CNN learned as the most predictive of AD. Each map was averaged over the participants; image intensities with greater weights were more predictive of AD. In the classification model trained using whole-brain 18F-FDG PET images, a wide range of non-brain regions were included among the most predictive regions, regardless of the image processing applied. In contrast, non-brain regions were not included among the most predictive regions in the classification model trained using denoised or deblurred 18F-FDG PET images that had been cropped around the limbic system area.
In the classification model trained using deblurred and cropped 18F-FDG PET images, the right middle frontal gyrus, insula, middle temporal gyrus, and hippocampus area were the most predictive of AD. Moreover, the right thalamus, left caudate, and bilateral putamen areas were the most predictive of AD in the classification model trained using denoised and cropped 18F-FDG PET images.

3. Discussion

The present study provides three major findings supporting the superiority of the AD classification model trained using preprocessed 18F-FDG PET images cropped around the limbic system area over the other models (i.e., models trained using raw or preprocessed whole-brain 18F-FDG PET images, and the model trained using raw 18F-FDG PET images cropped around the limbic system area). First, denoising and deblurring methods are helpful for improving classification performance indicators, including sensitivity, specificity, accuracy, F1-score, and MCC. Second, cropping whole-brain 18F-FDG PET images around the limbic system area improved the model’s classification performance. Third, preprocessing the input improved the explainability of the model’s classification via the class activation map, which facilitates inspection of the inference process and clinical interpretation.
This study was motivated by the hypothesis that deep learning models are influenced by image quality, which varies depending on several factors. The performance of a deep learning model trained on an input dataset with noisy images can worsen when noisy features are captured [12,18,19]. Therefore, input image quality must be considered to improve the performance of a deep learning model [12,18,19]. The quality of PET images is influenced by various factors, including imaging system hardware, non-collinearity of the emitted photon pairs, intercrystal scatter, and crystal penetration [20,21]. The development of denoising and deblurring methods for PET imaging remains an important research avenue to facilitate clinical decision-making and interpretation [22,23]. In the present study, 18F-FDG PET images were obtained from 50 PET centers using nine different scanner models. Despite the use of a standardized imaging protocol, there is inter-scanner variability in 18F-FDG PET images due to differences in scintillator materials, scintillator size, image reconstruction algorithm, image size, and slice thickness [24] (Table 4). For this reason, the noise type or level of the PET images might have varied. In the present study, these unavoidable and varied noise characteristics of 18F-FDG PET images appear to have deteriorated the performance of the AD classification model, as the model trained using raw 18F-FDG PET images learned features of irrelevant or non-brain regions (see Figure 2). After preprocessing (i.e., deblurring and denoising), the performance of the classification model was significantly improved (MCC of 0.10 before preprocessing vs. MCC of up to 0.39 after preprocessing). Nevertheless, a few non-brain regions remained important for classifying AD patients and cognitively normal controls, which may still hamper clinical decision-making and interpretation. This suggests a need to minimize the confounders of the classification model.
Image cropping is advantageous, as it elucidates a more focused region of interest to facilitate feature extraction from images, and it allows for the reduction of noisy image components [25]. A previous study verified that a cropping-based image classification model improved classification accuracy [25]. In the present study, 18F-FDG PET images were cropped around the limbic system area to focus on this area and reduce noisy image components (see Figure 3). After 18F-FDG PET image cropping, the performance of the classification model was conspicuously improved (MCC = 0.1 before cropping vs. MCC = 0.4 after cropping raw images). After image preprocessing and cropping, the performance of the classification model was significantly improved (MCC = 0.1 before both image preprocessing and cropping vs. MCC = up to 0.57 after both image preprocessing and cropping). Furthermore, non-brain regions were not important for classifying AD patients and cognitively normal controls. These findings suggest that although either image preprocessing or cropping could improve the performance of the classification model, image preprocessing and cropping improved the performance and clinical applicability of the classification model.
Previous neuroimaging studies have demonstrated that AD is associated with brain structural or functional alterations in a wide range of brain areas, including the middle frontal gyrus, middle temporal gyrus, and limbic system areas such as the hippocampus, insula, thalamus, putamen, and caudate [26,27,28,29,30,31,32]. The middle frontal and middle temporal gyri are related to verbal short-term memory performance [31], and the limbic system area is known to be functionally related to cognitive, autonomic, emotional, and sensory processes [26,27,28]. Further, AD patients showed distinct patterns of cerebral glucose metabolism due to loss of synapses and neuropil, as well as functional impairment of the neurons in these areas [29,32]. These patterns allow for the differentiation of AD from cognitively normal controls using 18F-FDG PET images [33]. In the present study, the most predictive brain regions in the class activation map obtained from the classification model trained using preprocessed 18F-FDG PET images cropped around the limbic system area were largely consistent with previous studies. Our findings provide evidence that preprocessing the input features facilitates clinical decision-making and interpretation of the classification model based on 18F-FDG PET images.
The present study had several limitations. First, because we focused on the effects of preprocessing the input data on the performance of the AD classification model, other deep learning models were not considered for evaluating the classification performance. Although this is beyond the scope of the current study, it remains an important line of inquiry for future research. In addition, we did not consider regional harmonization and intensity normalization using state-of-the-art methods such as removal of artificial voxel effect by linear regression (RAVEL) and combining batches (ComBat) to reduce scanner effects and improve the reproducibility of image intensities [34,35]. Further studies applying such state-of-the-art methods to our proposed methods are needed. Nevertheless, we attempted to employ widely used models, including the 3D deep CNN model for training on 18F-FDG PET images, a fast total variation (TV)-l1 deblurring method, and the median modified Wiener filter (MMWF) for denoising. Second, the sample size (cognitively normal controls, n = 155; AD patients, n = 66) was not large enough to fully capture the heterogeneity of the 18F-FDG PET image data with the present deep learning technique, or to rule out over- or under-estimation of classification performance due to chance. To minimize this limitation, we used 3-fold cross-validation [36]. Third, since MCI is an unstable diagnosis whose accuracy depends on the source of the patients, the specific criteria used, the methods employed, and the length of follow-up [37], we did not consider classifying MCI. However, this remains an important line of inquiry for future research. Finally, careful attention is required in image restoration and in preparing the input values for the deep learning model.
In this study, a two-dimensional (2D) point-spread function (PSF) was used to improve the reliability of the results, because the PET reconstruction methods were not unified (see Table 4). Since we used a 3D deep learning model, the difference in results between inputs restored with a 2D versus a 3D PSF should be checked using PET images with a matched reconstruction algorithm. Moreover, we also derived results for various input types of the deep learning model (e.g., multiple inputs of deblurred and denoised images, deblurred images after denoising, and vice versa); however, no significant results or correlations were identified. Further research is being conducted to overcome this problem.

4. Materials and Methods

4.1. Data Acquisition and Preprocessing

All data used in this study were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (https://adni.loni.usc.edu, accessed on 31 October 2021). The ADNI was launched in 2003, with the primary goal of testing whether serial magnetic resonance imaging (MRI), PET, other biological dementia markers, and structured clinical and neuropsychological assessments could be combined to measure the progression of MCI and early AD.
In the present study, we included 155 cognitively healthy controls (age, 75.31 ± 6.57 years; men, 52.26%) and 66 patients (age, 74.44 ± 8.39 years; men, 50.00%) clinically diagnosed with AD from ADNI Phases 1, 2, or Go who underwent APOE testing, cognitive assessments, and 18F-FDG PET scans. Diagnostic criteria were available on the ADNI website. Briefly, participants with AD met the National Institute of Neurological Disorders and Stroke Alzheimer’s Disease and Related Disorder Association (NINDS-ADRDA) criteria for probable AD [38].
The 18F-FDG PET images with six 5-min frames were acquired beginning 30 min after injection of 5.0 ± 0.5 mCi (i.e., 185 MBq) of 18F-FDG using a Siemens scanner. Detailed information on FDG-PET image acquisition and preprocessing is available on the ADNI website (http://adni.loni.usc.edu/methods/documents/, accessed on 31 October 2021). All 18F-FDG PET images were spatially normalized to a standard space using SPM12 (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/, accessed on 24 November 2021). The processed images had an image size of 91 × 109 × 91 and a voxel size of 2 × 2 × 2 mm³. Lastly, each 3D 18F-FDG PET image was cropped around the limbic system area (i.e., including the hippocampus, amygdala, thalamus, and putamen, Figure 3), a known metabolic dysfunction-associated area in AD patients [39], using an automated anatomical labeling atlas 2 template [40] to investigate the effect of reducing irrelevant and redundant features on classification performance.
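Cropping a spatially normalized volume to an atlas-defined region can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the binary mask here is a hypothetical stand-in for the AAL2-derived limbic labels, and the margin parameter is an assumption.

```python
import numpy as np

def crop_to_mask(volume, mask, margin=2):
    """Crop a 3D volume to the bounding box of a binary ROI mask.

    In practice `mask` would come from an atlas (e.g. AAL2 limbic labels);
    here it is any boolean array with the same shape as `volume`.
    """
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# Toy example on the paper's 91 x 109 x 91 grid
vol = np.zeros((91, 109, 91))
mask = np.zeros_like(vol, dtype=bool)
mask[30:60, 40:70, 30:60] = True          # stand-in for the limbic ROI
cropped = crop_to_mask(vol, mask)
print(cropped.shape)  # (34, 34, 34)
```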
All individuals were randomly assigned to two groups: a training set (cognitively normal controls, n = 109; AD patients, n = 46) used to train the AD classification deep learning model on 18F-FDG PET images, and a testing set (cognitively normal controls, n = 46; AD patients, n = 20) used to confirm the convergence of the AD classification and to reproduce its accuracy.

4.2. Cognitive Assessment

The MMSE is an instrument for cognitive assessment, with raw scores ranging from 0 to 30 [41]. The CDR [42,43] is a clinician-rated staging cognitive assessment method that requires an individual severity rating in each of the following six domains: (1) memory; (2) orientation; (3) judgment and problem-solving; (4) community affairs; (5) home and hobbies; and (6) personal care. An overall global CDR score that indicates five stages of cognitive dysfunction (0 = no dementia, 0.5 = very mild dementia, 1 = mild dementia, 2 = moderate dementia, and 3 = severe dementia) may be calculated through the CDR scoring algorithm [43]. Alternatively, the Sum of Boxes scoring approach (CDRSB) yields scores from 0 to 18 by summing the CDR domain scores [44].

4.3. Image Restoration

Generally, PET image restoration should consider both the sinogram domain and the image (voxel) domain. Multiplicative image degradation arises from the inherent performance of the detector, which acts in the sinogram domain, and from the positron range, which acts in the image domain during reconstruction [20]. However, only image degradation in the image domain was considered here, owing to the limitations of the information provided by the ADNI. The degradation model in the image domain [45] can be written as shown in Equation (1):
g = p s f f + N ,
where g is the degraded image, psf is the PSF, which is the amount of blurring that degrades from the clean image, f, and N is the noise component. Here, represents a 2D convolution. Here, the PSF is defined as:
$$ \mathrm{psf}(m, n) = \exp\!\left(-\frac{m^2 + n^2}{2\sigma_b^2}\right), $$
where $\sigma_b$ is the standard deviation of the Gaussian kernel, and $m$ and $n$ are discrete indices in the image domain. When deblurring and denoising are implemented, perfect restoration may be difficult to achieve: although deblurring improves the resolution and sharpness of the image, it may also amplify the noise component in the high-frequency region. Therefore, we employed a fast TV-l1 deblurring method [46,47], which is well known to improve resolution while suppressing noise amplification. The objective function is expressed as follows:
$$ \hat{f} = \operatorname*{argmin}_{f} \left( \left\| \mathrm{psf} \otimes f - g \right\| + \lambda \left\| \nabla f \right\| \right), \tag{2} $$
where the objective function consists of the fidelity term $\| \mathrm{psf} \otimes f - g \|$ and the penalty term $\| \nabla f \|$. A balancing parameter, $\lambda$, is implemented to increase the signal-to-noise ratio ($\lambda = 0.01$ was used in this study). The aim is to find the $\hat{f}$ that minimizes the objective function without introducing additional artifacts (e.g., ringing artifacts [48]). Equation (2) is solved using a half-quadratic splitting approach [49].
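As an illustration, the following NumPy/SciPy sketch builds the Gaussian PSF defined above and minimizes a smoothed version of the TV-l1 objective by plain gradient descent. This is a didactic stand-in for the fast half-quadratic splitting solver cited in the paper, not a reimplementation of it: λ = 0.01 matches the text, while the step size, iteration count, and smoothing constant ε are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_psf(size, sigma_b):
    """2D Gaussian PSF from the paper's definition, normalized to sum to 1."""
    half = size // 2
    m, n = np.mgrid[-half:half + 1, -half:half + 1]
    psf = np.exp(-(m**2 + n**2) / (2.0 * sigma_b**2))
    return psf / psf.sum()

def tv_l1_objective(f, g, psf, lam, eps=1e-2):
    """Smoothed TV-l1 objective: |psf*f - g|_1 + lam * TV(f)."""
    r = convolve(f, psf, mode="nearest") - g
    gx, gy = np.gradient(f)
    return np.sqrt(r**2 + eps).sum() + lam * np.sqrt(gx**2 + gy**2 + eps).sum()

def tv_l1_deblur(g, psf, lam=0.01, step=0.1, iters=30, eps=1e-2):
    """Gradient descent on the smoothed TV-l1 objective (didactic, not fast)."""
    f = g.astype(float).copy()
    for _ in range(iters):
        r = convolve(f, psf, mode="nearest") - g
        w = r / np.sqrt(r**2 + eps)                    # grad of smoothed |r|
        grad_fid = convolve(w, psf[::-1, ::-1], mode="nearest")
        gx, gy = np.gradient(f)
        norm = np.sqrt(gx**2 + gy**2 + eps)
        div = np.gradient(gx / norm, axis=0) + np.gradient(gy / norm, axis=1)
        f -= step * (grad_fid - lam * div)             # TV gradient is -div(grad f / |grad f|)
    return f

# Blur a toy image, then deblur; the objective decreases from the blurred start.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[12:20, 12:20] = 1.0
psf = gaussian_psf(9, sigma_b=2.0)
g = convolve(truth, psf, mode="nearest") + 0.01 * rng.standard_normal(truth.shape)
f_hat = tv_l1_deblur(g, psf)
print(tv_l1_objective(f_hat, g, psf, 0.01) < tv_l1_objective(g, g, psf, 0.01))
```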
Nevertheless, conventional denoising methods have the limitation of reducing sharpness along with noise. Cannistraci et al. [50] introduced the median modified Wiener filter (MMWF), and Park et al. [51] showed improved image performance using the MMWF in gamma camera images; the MMWF is effective in suppressing the noise component while preserving object outlines as much as possible. The MMWF is represented as follows:
$$ b_{\mathrm{mmwf}}(n, m) = \bar{\mu} + \frac{\sigma_n^2 - \gamma^2}{\sigma_n^2}\left(\Omega(n, m) - \bar{\mu}\right), \quad n, m \in \eta, \tag{3} $$
where $\bar{\mu}$ and $\sigma_n^2$ are the local median and local variance around each pixel, $n$-by-$m$ is the size of the neighborhood area $\eta$ in the mask, $\Omega(n, m)$ represents each pixel in $\eta$, and $\gamma^2$ is the noise variance. Here, we used the average of all the local variances as $\gamma^2$.
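Equation (3) is the classical local Wiener filter with the local mean replaced by a local median. A minimal NumPy/SciPy sketch of that idea, assuming a square neighborhood and the average-of-local-variances noise estimate described above, might look like this (the clipping of the gain to [0, 1] is a standard numerical safeguard, not stated in the paper):

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def mmwf(image, size=5):
    """Median modified Wiener filter: local Wiener gain applied around
    the local median instead of the local mean (after Cannistraci et al.)."""
    med = median_filter(image, size=size)
    local_mean = uniform_filter(image, size=size)
    local_sqr = uniform_filter(image**2, size=size)
    local_var = np.maximum(local_sqr - local_mean**2, 0.0)   # sigma_n^2
    noise_var = local_var.mean()                             # gamma^2
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return med + gain * (image - med)

rng = np.random.default_rng(1)
noisy = 5.0 + rng.standard_normal((64, 64))
smoothed = mmwf(noisy, size=5)
print(smoothed.std() < noisy.std())  # noise is suppressed
```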

4.4. Proposed Framework

Figure 4 shows a simplified schematic illustration of the prediction of cognitively normal and AD conditions using the proposed framework. In brief, each 18F-FDG PET image for training underwent image restoration using Equation (2) or Equation (3); in some cases, image restoration was not applied, so that the prediction performance could also be evaluated under variations in image quality. Here, the $\sigma_b$ of the psf in Equation (2) was 2 and the size of $\Omega(n, m)$ was 5 × 5. This processing was applied to all slices with the same parameters, including $\sigma_b$ and the $\Omega(n, m)$ size. Following this, each model was trained using the 18F-FDG PET dataset predetermined by its image restoration components. Finally, cognitively normal or AD conditions could be predicted using a trained model under the same pre-training conditions. Moreover, a class activation map was also deduced to interpret the prediction decisions.
Figure 5 shows the proposed 3D deep CNN architecture, which consists of an input layer, hidden layers, and an output layer. In this network, a small block is composed of a convolution layer, a batch normalization (BN) layer, and an activation layer such as a rectified linear unit (ReLU). Additionally, residual blocks have branches such as convolution and BN layers, which use the concept-of-ensemble approach [52]. The loss function was cross-entropy, and the adaptive moment estimation (Adam) optimizer was used to update the parameters during back-propagation. The batch size was 4, the number of epochs was 20, and the initial learning rate was 10−4. The architecture of the network is summarized in Table 5.
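The "small block" and residual-block structure described above can be sketched in single-channel NumPy form. This is a didactic simplification, not the authors' MATLAB model: the real network is a multi-channel, learned 3D CNN, whereas here the convolution kernel is fixed, batch normalization is collapsed to a per-volume standardization, and the skip connection is a plain identity.

```python
import numpy as np
from scipy.ndimage import convolve

def conv_bn_relu(x, kernel, gamma=1.0, beta=0.0, eps=1e-5):
    """One 'small block': 3D convolution, normalization over the volume
    (a single-channel stand-in for batch norm), then ReLU."""
    y = convolve(x, kernel, mode="constant")
    y = gamma * (y - y.mean()) / np.sqrt(y.var() + eps) + beta
    return np.maximum(y, 0.0)

def residual_block(x, kernel):
    """Residual block: a main path of two small blocks plus an identity skip."""
    return conv_bn_relu(conv_bn_relu(x, kernel), kernel) + x

x = np.random.rand(8, 8, 8)
k = np.ones((3, 3, 3)) / 27.0          # fixed averaging kernel for illustration
out = residual_block(x, k)
print(out.shape)  # (8, 8, 8)
```

The skip connection means the block's output is the input plus a non-negative learned correction, which is what makes deep stacks of such blocks trainable.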
We implemented the proposed scheme on a standard workstation (OS: Windows 10; CPU: 2.13 GHz; RAM: 64 GB; GPU: Titan Xp, 12 GB), and the software used for image processing and deep learning was MATLAB (R2020b, MathWorks, Natick, MA, USA). Of the total data from 221 individuals, 70% (cognitively normal controls, n = 109; AD patients, n = 46) was used for training, and the remaining 30% (cognitively normal controls, n = 46; AD patients, n = 20) was used for testing.

4.5. Statistical Analysis

General and clinical characteristics were compared between the groups using the two-sample t-test or chi-square test.
The performance of the deep learning model was evaluated using the sensitivity, which is a measure of the true positive rate; the specificity, a measure of the true negative rate; and the accuracy, which is a measure of the proportion of correct classifications calculated from a confusion matrix for a two-class classification problem [53]. In the present study, since the participants were mainly cognitively normal controls (controls, 70%; AD patients, 30%; i.e., imbalanced data), the F1-scores and MCCs, which are effective measures for imbalanced datasets, were also used [54]. Moreover, since our data sample was not sufficient to represent the whole population, and since the performance of the classification model might be due to chance, we employed 3-fold cross-validation, a resampling procedure that could be used to evaluate our 3D deep-CNN model [36].
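The metrics above follow directly from the 2×2 confusion matrix. A small helper makes the definitions concrete (the confusion-matrix counts in the example are invented for illustration; they are not results from the paper):

```python
import numpy as np

def classification_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, accuracy, F1-score, and MCC
    from the cells of a two-class confusion matrix."""
    sens = tp / (tp + fn)                      # true positive rate
    spec = tn / (tn + fp)                      # true negative rate
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sens, spec, acc, f1, mcc

# Hypothetical counts for a test split of 20 AD patients and 46 controls
print([round(v, 2) for v in classification_metrics(tp=18, fp=15, fn=2, tn=31)])
```

Note how F1 ignores the true negatives entirely while MCC uses all four cells, which is why MCC is the more informative summary on an imbalanced dataset like this one.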

5. Conclusions

In summary, the present study provides data that explain the importance of preprocessing input data for training classification models using deep learning methods. Furthermore, our study demonstrates that preprocessing the input data improves the explainability and potential clinical applicability of the deep learning method. Therefore, our approach may encourage future studies to develop a classification model trained using 18F-FDG PET images using a deep learning method.

Author Contributions

Conceptualization, M.-H.L., K.K. and Y.L.; Methodology, M.-H.L., C.-S.Y. and K.K.; Software, K.K. and Y.L.; Validation, M.-H.L. and C.-S.Y.; Formal Analysis, K.K. and Y.L.; Investigation, K.K.; Writing—Original Draft Preparation, M.-H.L., C.-S.Y. and K.K.; Writing—Review and Editing, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by a grant from the National Research Foundation of Korea (NRF) funded by the Korean government (Grant No. NRF-2021R1F1A1061440).

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Gachon University (1044396-202110-HR-218-01).

Informed Consent Statement

A benchmark dataset, the Alzheimer’s Disease Neuroimaging Initiative (ADNI), which was used in our work, obtained informed consent from the participants. More information can be found in the following link: http://adni.loni.usc.edu/study-design/ (accessed on 16 November 2021).

Data Availability Statement

Data used in the preparation of this article were obtained from the ADNI database (adni.loni.usc.edu, accessed on 31 October 2021). The ADNI was launched in 2003 as a public–private partnership led by Principal Investigator Michael W. Weiner, MD. The primary goal of the ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessments can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD). For up-to-date information, see www.adni-info.org (accessed on 31 October 2021).

Acknowledgments

Data collection and sharing for this project was funded by the Alzheimer’s Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). The ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer’s Association; Alzheimer’s Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd. and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC; Johnson & Johnson Pharmaceutical Research & Development LLC; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research provides funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org, accessed on 31 October 2021). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer’s Therapeutic Research Institute at the University of Southern California. ADNI data were disseminated by the Laboratory for Neuro Imaging at the University of Southern California.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Convergence curves of the 3D deep convolutional neural network trained using raw, deblurred, or denoised whole-brain 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images (A) and 18F-FDG PET images cropped around the limbic system area (B). Loss is shown on a logarithmic y-axis (semi-logarithmic plot).
Figure 2. Activation maps showing the brain regions learned by the 3D deep convolutional neural network model as the most predictive of Alzheimer’s disease. Higher weights (red) indicate that the image intensity was more predictive of Alzheimer’s disease. Abbreviations: L, left; R, right.
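The maps in Figure 2 follow the standard class activation map (CAM) construction for a network that ends in global average pooling: the final convolutional feature maps are summed, weighted by the fully connected weights of the target class. A minimal sketch with toy feature maps and weights (illustrative values, not the study's trained parameters):

```python
def class_activation_map(feature_maps, class_weights):
    """CAM: per-pixel weighted sum of the final conv feature maps,
    using one class's fully connected weights.
    feature_maps: list of K maps, each an H x W nested list;
    class_weights: the K FC weights for the target class."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, wk in zip(feature_maps, class_weights):
        for y in range(h):
            for x in range(w):
                cam[y][x] += wk * fmap[y][x]
    return cam

# Toy 2x2 example: unit 0 fires top-left, unit 1 fires bottom-right
maps = [[[1.0, 0.0], [0.0, 0.0]],
        [[0.0, 0.0], [0.0, 1.0]]]
weights = [0.8, 0.2]  # hypothetical FC weights for the "AD" class
cam = class_activation_map(maps, weights)
# top-left carries the largest weight toward the AD score here
```

Because the weighting happens after global average pooling, the map localizes the spatial evidence for the target class without any additional training.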
Figure 3. Slices of interest, including the hippocampus, amygdala, thalamus, and putamen, that were used for the 18F-FDG PET image cropping.
Figure 4. Simplified diagram representing the proposed prediction process incorporating image restoration using the deep convolutional neural network trained using 18F-FDG PET images.
Figure 5. The structure of the proposed network. Most of the network parameters used in this study follow the notation in this figure.
Table 1. Demographic and clinical assessments.
|  | Cognitively Normal Controls (n = 155) | Patients with Alzheimer’s Disease (n = 66) | p |
| --- | --- | --- | --- |
| Age | 75.31 ± 6.57 | 74.44 ± 8.39 | 0.412 |
| Male sex | 81 (52.26) | 33 (50.00) | 0.759 |
| Education | 16.17 ± 2.89 | 15.35 ± 2.90 | 0.054 |
| APOE ε4, carriers | 43 (27.74) | 45 (68.18) | <0.001 |
| MMSE score | 29.03 ± 1.21 | 23.26 ± 2.15 | <0.001 |
| Global CDR score | 0.00 ± 0.00 | 0.80 ± 0.29 | <0.001 |
| CDR Sum of Boxes | 0.04 ± 0.15 | 4.53 ± 1.67 | <0.001 |
| Memory | 0.00 ± 0.00 | 1.05 ± 0.40 | <0.001 |
| Orientation | 0.00 ± 0.00 | 0.89 ± 0.40 | <0.001 |
| Judgment | 0.04 ± 0.13 | 0.87 ± 0.34 | <0.001 |
| Community affairs | 0.01 ± 0.08 | 0.76 ± 0.49 | <0.001 |
| Hobbies | 0.00 ± 0.00 | 0.73 ± 0.51 | <0.001 |
| Personal care | 0.00 ± 0.00 | 0.23 ± 0.42 | <0.001 |

Abbreviations: APOE, apolipoprotein E; MMSE, Mini-Mental State Examination; CDR, Clinical Dementia Rating scale.
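The p-values for the continuous variables in Table 1 can be reproduced approximately from the summary statistics. The excerpt does not state which test was used, so the pooled-variance two-sample t-test below is an assumption; for age it gives t ≈ 0.83 and a two-sided p ≈ 0.41, close to the reported 0.412.

```python
import math

def pooled_t(m1, s1, n1, m2, s2, n2):
    """Two-sample Student's t statistic with pooled variance."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Age: controls 75.31 +/- 6.57 (n = 155) vs. AD 74.44 +/- 8.39 (n = 66)
t = pooled_t(75.31, 6.57, 155, 74.44, 8.39, 66)     # ~0.83
# With 219 degrees of freedom, a normal approximation of the
# two-sided p-value is adequate:
p_approx = math.erfc(abs(t) / math.sqrt(2))         # ~0.41
```

The small residual difference from the published 0.412 comes from the normal approximation and rounding of the summary statistics.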
Table 2. Performance comparison of an individual classification model trained by raw, denoised, or deblurred whole-brain 18F-FDG PET images for the classification of Alzheimer’s disease and cognitively normal conditions.
|  | Sensitivity | Specificity | Accuracy | F1-Score | MCC |
| --- | --- | --- | --- | --- | --- |
| Raw 18F-FDG PET | 0.72 ± 0.07 | 0.32 ± 0.08 | 0.60 ± 0.07 | 0.71 ± 0.05 | 0.10 ± 0.09 |
| Deblurred 18F-FDG PET (σb = 1) | 0.83 ± 0.07 | 0.30 ± 0.09 | 0.67 ± 0.03 | 0.78 ± 0.03 | 0.14 ± 0.04 |
| Deblurred 18F-FDG PET (σb = 2) | 0.91 ± 0.08 | 0.25 ± 0.15 | 0.71 ± 0.05 | 0.81 ± 0.04 | 0.21 ± 0.15 |
| Denoised 18F-FDG PET (σn = 3) | 0.75 ± 0.08 | 0.65 ± 0.05 | 0.72 ± 0.05 | 0.79 ± 0.05 | 0.39 ± 0.08 |
| Denoised 18F-FDG PET (σn = 5) | 0.85 ± 0.06 | 0.48 ± 0.13 | 0.74 ± 0.05 | 0.82 ± 0.04 | 0.35 ± 0.14 |

Data are presented as means ± standard deviations over cross-validation folds. Abbreviations: MCC, Matthews correlation coefficient; 18F-FDG PET, 18F-fluorodeoxyglucose positron emission tomography.
Table 3. Performance comparison of an individual classification model trained using raw, denoised, or deblurred 18F-FDG PET images that were cropped around the limbic system area in order to classify Alzheimer’s disease and cognitively normal conditions.
|  | Sensitivity | Specificity | Accuracy | F1-Score | MCC |
| --- | --- | --- | --- | --- | --- |
| Raw 18F-FDG PET | 0.78 ± 0.06 | 0.63 ± 0.13 | 0.74 ± 0.03 | 0.81 ± 0.03 | 0.40 ± 0.09 |
| Deblurred 18F-FDG PET (σb = 1) | 0.85 ± 0.06 | 0.68 ± 0.06 | 0.80 ± 0.05 | 0.85 ± 0.04 | 0.53 ± 0.10 |
| Deblurred 18F-FDG PET (σb = 2) | 0.89 ± 0.06 | 0.67 ± 0.10 | 0.82 ± 0.07 | 0.88 ± 0.05 | 0.57 ± 0.17 |
| Denoised 18F-FDG PET (σn = 3) | 0.83 ± 0.08 | 0.50 ± 0.13 | 0.73 ± 0.08 | 0.81 ± 0.05 | 0.34 ± 0.19 |
| Denoised 18F-FDG PET (σn = 5) | 0.88 ± 0.06 | 0.63 ± 0.08 | 0.80 ± 0.03 | 0.86 ± 0.02 | 0.53 ± 0.05 |

Data are presented as means ± standard deviations over cross-validation folds. Abbreviations: MCC, Matthews correlation coefficient; 18F-FDG PET, 18F-fluorodeoxyglucose positron emission tomography.
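All the metrics in Tables 2 and 3 derive from the 2 × 2 confusion matrix; MCC in particular rewards balance across all four cells, which matters here because the classes are imbalanced (155 controls vs. 66 patients). A self-contained sketch with an illustrative confusion matrix (the study's per-fold counts are not given in this excerpt):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, accuracy, F1-score, and Matthews
    correlation coefficient (MCC) from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return sensitivity, specificity, accuracy, f1, mcc

# Illustrative counts only
sens, spec, acc, f1, mcc = classification_metrics(tp=9, fp=4, tn=6, fn=1)
# sens = 0.90, spec = 0.60, acc = 0.75, f1 ~ 0.78, mcc ~ 0.52
```

This is why the tables report MCC alongside accuracy: the raw whole-brain model in Table 2 reaches 0.60 accuracy and 0.72 sensitivity yet an MCC of only 0.10, revealing near-chance performance on the minority class.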
Table 4. Scanner models and inter-scanner differences in the 18F-FDG-PET scans.
| Scanner Model | Scintillator Material | Scintillator Size (mm³) | Reconstruction Algorithm | Image Size | Slice Thickness (mm) |
| --- | --- | --- | --- | --- | --- |
| Siemens HRRT | LSO | 2.1 × 2.1 × 20 | OSEM-3D | 256 × 256 × 207 | 1.2 |
| Siemens HR+ | BGO | 4.05 × 4.39 × 30 | FORE/OSEM-2D | 128 × 128 × 63 | 2.4 |
| Siemens Accel | LSO | 6.45 × 6.45 × 25 | FORE/OSEM-2D | 128 × 128 × 47 | 3.4 |
| Siemens Exact | BGO | 6.75 × 6.75 × 20 | FORE/OSEM-2D | 128 × 128 × 47 | 3.4 |
| Siemens SOMATOM Definition AS mCT | LSO | 4.0 × 4.0 × 20 | OSEM-3D | 400 × 400 × 109 | 2.0 |
| Siemens SOMATOM Definition AS mCT | LSO | 4.0 × 4.0 × 20 | OSEM-3D | 400 × 400 × 81 | 2.0 |
| Siemens Biograph 64 | LSO | 4.0 × 4.0 × 20 | OSEM-3D | 400 × 400 × 109 | 2.0 |
| Siemens Biograph mCT 20 | LSO | 4.0 × 4.0 × 20 | OSEM-3D | 256 × 256 × 81 | 2.0 |
| Siemens 1094 | LSO | 4.0 × 4.0 × 20 | OSEM-3D | 336 × 336 × 109 | 2.0 |

Abbreviations: FORE, Fourier rebinning; OSEM, ordered-subsets expectation maximization.
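Because the scanners in Table 4 produce different matrix sizes and slice thicknesses, the scans must be resampled to a common grid before training; the exact resampling scheme is not described in this excerpt. As an illustration of the idea only, a toy 1-D linear interpolation of an intensity profile from 2.4 mm to 1.2 mm slice spacing:

```python
def resample_1d(values, old_spacing, new_spacing):
    """Linearly interpolate a 1-D intensity profile (slice values at
    old_spacing mm apart) onto a grid with new_spacing mm spacing."""
    length = (len(values) - 1) * old_spacing        # physical extent in mm
    n_new = round(length / new_spacing) + 1
    out = []
    for i in range(n_new):
        pos = i * new_spacing / old_spacing         # position in old-grid index units
        lo = min(int(pos), len(values) - 2)
        frac = pos - lo
        out.append(values[lo] * (1 - frac) + values[lo + 1] * frac)
    return out

profile = [0.0, 2.0, 4.0]               # three slices, 2.4 mm apart
fine = resample_1d(profile, 2.4, 1.2)   # five slices, 1.2 mm apart
# fine ~ [0.0, 1.0, 2.0, 3.0, 4.0]
```

Real PET harmonization operates in 3-D and usually also matches spatial resolution across scanners, but the per-axis interpolation step is the same in spirit.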
Table 5. Architecture of the network.
| Layer | Shape | Filter or Pooling | Stride/Padding |
| --- | --- | --- | --- |
| Input | 128 × 128 × 79 × 1 or 128 × 128 × 10 × 1 | – | – |
| Conv | 64 × 64 × 40 × 64 or 64 × 64 × 5 × 64 | 7 × 7 × 7 | 2/3 |
| BN | – | – | – |
| ReLU | – | – | – |
| Max pooling | 32 × 32 × 20 × 64 or 32 × 32 × 3 × 64 | 3 × 3 × 3 | 2/1 |
| Conv and Conv-branch | 32 × 32 × 20 × 64 or 32 × 32 × 3 × 64 | 3 × 3 × 3 | 1/1 |
| BN and BN-branch | – | – | – |
| ReLU | – | – | – |
| Conv and Conv-branch | 16 × 16 × 10 × 128 or 16 × 16 × 2 × 128 | 3 × 3 × 3 | 1/1 |
| BN and BN-branch | – | – | – |
| ReLU | – | – | – |
| Global average pooling | 1 × 1 × 1 × 128 | – | – |
| Fully connected-128 | 1 × 1 × 1 × 128 | – | – |
| Fully connected-2 | 1 × 1 × 1 × 2 | – | – |
| Softmax | 1 × 1 × 1 × 2 | – | – |
| Classification output | 1 × 1 × 1 × 2 | – | – |

Abbreviations: Conv, convolution; BN, batch normalization; ReLU, rectified linear unit.
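The shapes in Table 5 follow the standard convolution/pooling output-size formula, out = ⌊(in + 2·pad − filter)/stride⌋ + 1. A short check of the first convolution (7 × 7 × 7, stride 2, padding 3) and the max-pooling stage (3 × 3 × 3, stride 2, padding 1) for both the whole-brain input (depth 79) and the cropped input (depth 10, consistent with the listed 64 × 64 × 5 conv output):

```python
def out_size(size, filt, stride, pad):
    """Output length of one axis after a conv or pooling layer:
    floor((size + 2*pad - filt) / stride) + 1."""
    return (size + 2 * pad - filt) // stride + 1

# First conv: 7x7x7 filter, stride 2, padding 3
assert out_size(128, 7, 2, 3) == 64   # in-plane: 128 -> 64
assert out_size(79, 7, 2, 3) == 40    # whole-brain depth: 79 -> 40
assert out_size(10, 7, 2, 3) == 5     # cropped depth: 10 -> 5

# Max pooling: 3x3x3 window, stride 2, padding 1
assert out_size(64, 3, 2, 1) == 32    # in-plane: 64 -> 32
assert out_size(40, 3, 2, 1) == 20    # whole-brain depth: 40 -> 20
assert out_size(5, 3, 2, 1) == 3      # cropped depth: 5 -> 3
```

Running the checks confirms that the listed strides and paddings reproduce the spatial dimensions in the table for both input sizes.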
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Lee, M.-H.; Yun, C.-S.; Kim, K.; Lee, Y. Effect of Denoising and Deblurring 18F-Fluorodeoxyglucose Positron Emission Tomography Images on a Deep Learning Model’s Classification Performance for Alzheimer’s Disease. Metabolites 2022, 12, 231. https://doi.org/10.3390/metabo12030231
