Medical Imaging and Machine Learning

A special issue of Cancers (ISSN 2072-6694). This special issue belongs to the section "Cancer Informatics and Big Data".

Deadline for manuscript submissions: closed (15 December 2021) | Viewed by 65744

Special Issue Editors


Guest Editor
Center for Biomedical Informatics and Information Technology, National Cancer Institute, NIH, Bethesda, MD, USA
Interests: cancer imaging; imaging informatics; applications and best practices of machine learning and artificial intelligence in cancer diagnosis and treatment

Co-Guest Editor
Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: biomedical data analysis; artificial intelligence; computer-aided diagnosis/prognosis; machine learning; deep learning; multiomics; organ segmentation; precision medicine; infrared imaging

Special Issue Information

Dear Colleagues,

Machine learning refers to a set of techniques, mathematical models, and algorithms that allow computers to learn by recognizing meaningful patterns in data, including biomedical images. It is a component of artificial intelligence because the extraction of expressive patterns from data is a hallmark of human intelligence. Over the past several decades, machine learning has proven to be a powerful tool for assisting medical professionals in the diagnosis and prognosis of various cancers across imaging modalities. Machine learning methods are now used in many medical fields, including radiology, oncology, pathology, and genetics.

While much effort has so far gone into introducing machine learning to the medical field, further development of existing methods and algorithms within medicine also plays a significant role. Recently, deep neural networks have revolutionized conventional machine learning across a vast variety of applications, including medicine; deep learning models trade greater model complexity for the ability to learn richer, higher-dimensional representations that capture finer detail. This Special Issue highlights advances in machine learning in cancer in all its diversity, covering both conventional and deep learning methods in oncology.

Dr. Keyvan Farahani
Guest Editor
Dr. Bardia Yousefi
Co-Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Cancers is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2900 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • supervised and unsupervised learning
  • kernel methods
  • deep neural networks
  • mathematical modeling
  • prediction
  • detection
  • diagnosis
  • omics
  • dimensionality reduction
  • federated learning

Published Papers (16 papers)


Research


16 pages, 4450 KiB  
Article
Dual-Intended Deep Learning Model for Breast Cancer Diagnosis in Ultrasound Imaging
by Nicolle Vigil, Madeline Barry, Arya Amini, Moulay Akhloufi, Xavier P. V. Maldague, Lan Ma, Lei Ren and Bardia Yousefi
Cancers 2022, 14(11), 2663; https://doi.org/10.3390/cancers14112663 - 27 May 2022
Cited by 15 | Viewed by 3155
Abstract
Automated medical data analysis plays a significant role in modern medicine and in cancer diagnosis/prognosis, where highly reliable and generalizable systems are needed. In this study, an automated breast cancer screening method in ultrasound imaging is proposed. A convolutional deep autoencoder model is presented for simultaneous segmentation and radiomic extraction: the model segments the breast lesions while concurrently extracting radiomic features. With our deep model, we perform breast lesion segmentation, which is linked to low-dimensional deep-radiomic extraction (four features). In parallel, we took high-dimensional conventional imaging throughputs and applied spectral embedding techniques to reduce their size from 354 to 12 radiomics. A total of 780 ultrasound images (437 benign, 210 malignant, and 133 normal) were used to train and validate the models in this study. To diagnose malignant lesions, we performed training, hyperparameter tuning, cross-validation, and testing with a random forest model. This resulted in a binary classification accuracy of 78.5% (65.1–84.1%) for the maximal (full multivariate) cross-validated model over a combination of radiomic groups. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
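The reduction-plus-classification step described above (spectral embedding of 354 conventional radiomics down to 12, then a cross-validated random forest) can be sketched with scikit-learn. The data below are synthetic stand-ins; only the dimensionalities are borrowed from the abstract, so the resulting accuracy is illustrative, not the paper's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for 354 conventional radiomic features per lesion.
X = rng.normal(size=(200, 354))
y = rng.integers(0, 2, size=200)
X[y == 1, :20] += 1.5  # make some features class-associated

# Non-linear dimensionality reduction: 354 features -> 12, as in the abstract.
X_low = SpectralEmbedding(n_components=12, random_state=0).fit_transform(X)

# Cross-validated random forest on the reduced feature set.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X_low, y, cv=5)
```

Spectral embedding is non-parametric, so in practice the reduction must be refit (or approximated) for unseen cases; the sketch above only shows the training-side computation.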

13 pages, 3273 KiB  
Article
Unsupervised Deep Learning Registration of Uterine Cervix Sequence Images
by Peng Guo, Zhiyun Xue, Sandeep Angara and Sameer K. Antani
Cancers 2022, 14(10), 2401; https://doi.org/10.3390/cancers14102401 - 13 May 2022
Cited by 1 | Viewed by 2348
Abstract
During a colposcopic examination of the uterine cervix for cervical cancer prevention, one or more digital images are typically acquired after the application of diluted acetic acid. An alternative approach is to acquire a sequence of images at fixed intervals during an examination before and after applying acetic acid. This approach is asserted to be more informative as it can capture dynamic pixel intensity variations on the cervical epithelium during the aceto-whitening reaction. However, the resulting time sequence images may not be spatially aligned due to the movement of the cervix with respect to the imaging device. Disease prediction using automated visual evaluation (AVE) techniques using multiple images could be adversely impacted without correction for this misalignment. The challenge is that there is no registration ground truth to help train a supervised-learning-based image registration algorithm. We present a novel unsupervised registration approach to align a sequence of digital cervix color images. The proposed deep-learning-based registration network consists of three branches and processes the red, green, and blue (RGB, respectively) channels of each input color image separately using an unsupervised strategy. Each network branch consists of a convolutional neural network (CNN) unit and a spatial transform unit. To evaluate the registration performance on a dataset that has no ground truth, we propose an evaluation strategy that is based on comparing automatic cervix segmentation masks in the registered sequence and the original sequence. The compared segmentation masks are generated by a fine-tuned transformer-based object detection model (DeTr). The segmentation model achieved Dice/IoU scores of 0.917/0.870 and 0.938/0.885, which are comparable to the performance of our previous model in two datasets. 
By comparing our segmentation on both original and registered time sequence images, we observed an average improvement in Dice scores of 12.62% following registration. Further, our approach achieved higher Dice and IoU scores and maintained full image integrity compared to a non-deep learning registration method on the same dataset. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
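The evaluation idea above, comparing segmentation masks from the original and registered sequences via Dice and IoU, comes down to a few array operations. A toy sketch with invented masks (not the paper's data):

```python
import numpy as np

def dice_iou(mask_a, mask_b):
    """Dice and IoU overlap between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    iou = inter / np.logical_or(a, b).sum()
    return dice, iou

# Toy cervix masks: the "registered" mask is the original shifted by 4 pixels.
original = np.zeros((64, 64), dtype=bool)
original[16:48, 16:48] = True
registered = np.zeros((64, 64), dtype=bool)
registered[20:52, 16:48] = True

dice, iou = dice_iou(original, registered)
```

In the paper's setup, the masks come from a detection model applied before and after registration; any residual misalignment then shows up directly as a drop in these overlap scores.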

16 pages, 1550 KiB  
Article
Fully Automatic Whole-Volume Tumor Segmentation in Cervical Cancer
by Erlend Hodneland, Satheshkumar Kaliyugarasan, Kari Strøno Wagner-Larsen, Njål Lura, Erling Andersen, Hauke Bartsch, Noeska Smit, Mari Kyllesø Halle, Camilla Krakstad, Alexander Selvikvåg Lundervold and Ingfrid Salvesen Haldorsen
Cancers 2022, 14(10), 2372; https://doi.org/10.3390/cancers14102372 - 11 May 2022
Cited by 8 | Viewed by 3075
Abstract
Uterine cervical cancer (CC) is the most common gynecologic malignancy worldwide. Whole-volume radiomic profiling from pelvic MRI may yield prognostic markers for tailoring treatment in CC. However, radiomic profiling relies on manual tumor segmentation which is unfeasible in the clinic. We present a fully automatic method for the 3D segmentation of primary CC lesions using state-of-the-art deep learning (DL) techniques. In 131 CC patients, the primary tumor was manually segmented on T2-weighted MRI by two radiologists (R1, R2). Patients were separated into a train/validation (n = 105) and a test (n = 26) cohort. The segmentation performance of the DL algorithm compared with R1/R2 was assessed with Dice coefficients (DSCs) and Hausdorff distances (HDs) in the test cohort. The trained DL network retrieved whole-volume tumor segmentations yielding median DSCs of 0.60 and 0.58 for DL compared with R1 (DL-R1) and R2 (DL-R2), respectively, whereas the DSC for R1-R2 was 0.78. Agreement for primary tumor volumes was excellent between raters (R1-R2: intraclass correlation coefficient (ICC) = 0.93), but lower for the DL algorithm and the raters (DL-R1: ICC = 0.43; DL-R2: ICC = 0.44). The developed DL algorithm enables the automated estimation of tumor size and primary CC tumor segmentation. However, segmentation agreement between raters is better than that between the DL algorithm and the raters. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
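The two agreement metrics reported above can be computed directly; SciPy ships a directed Hausdorff distance, which is symmetrized by taking the maximum of both directions. The contour point sets below are invented 2D stand-ins for a rater outline and a DL outline:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Invented boundary point sets for a rater contour and a DL contour.
rater = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
dl = np.array([[0.0, 0.2], [0.0, 1.1], [1.0, 0.0], [1.3, 1.0]])

# Symmetric Hausdorff distance: max of the two directed distances.
hd = max(directed_hausdorff(rater, dl)[0], directed_hausdorff(dl, rater)[0])
```

Unlike the Dice coefficient, the Hausdorff distance is driven by the single worst-matched boundary point, which is why the two metrics are usually reported together.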

12 pages, 3820 KiB  
Article
Segmentation Uncertainty Estimation as a Sanity Check for Image Biomarker Studies
by Ivan Zhovannik, Dennis Bontempi, Alessio Romita, Elisabeth Pfaehler, Sergey Primakov, Andre Dekker, Johan Bussink, Alberto Traverso and René Monshouwer
Cancers 2022, 14(5), 1288; https://doi.org/10.3390/cancers14051288 - 2 Mar 2022
Viewed by 2078
Abstract
Problem. Image biomarker analysis, also known as radiomics, is a tool for tissue characterization and treatment prognosis that relies on routinely acquired clinical images and delineations. Due to the uncertainty in image acquisition, processing, and segmentation (delineation) protocols, radiomics often lack reproducibility. Radiomics harmonization techniques have been proposed as a solution to reduce these sources of uncertainty and/or their influence on the prognostic model performance. A relevant question is how to estimate the protocol-induced uncertainty of a specific image biomarker, what the effect is on the model performance, and how to optimize the model given the uncertainty. Methods. Two non-small cell lung cancer (NSCLC) cohorts, composed of 421 and 240 patients, respectively, were used for training and testing. Per patient, a Monte Carlo algorithm was used to generate three hundred synthetic contours with a surface dice tolerance measure of less than 1.18 mm with respect to the original GTV. These contours were subsequently used to derive 104 radiomic features, which were ranked on their relative sensitivity to contour perturbation, expressed in the parameter η. The top four (low η) and the bottom four (high η) features were selected for two models based on the Cox proportional hazards model. To investigate the influence of segmentation uncertainty on the prognostic model, we trained and tested the setup in 5000 augmented realizations (using a Monte Carlo sampling method); the log-rank test was used to assess the stratification performance and its stability under segmentation uncertainty. Results. Although both the low and high η setups showed significant testing set log-rank p-values (p = 0.01) on the original GTV delineations (without segmentation uncertainty introduced), in the model with a high uncertainty-to-effect ratio, only around 30% of the augmented realizations resulted in model performance with p < 0.05 in the test set. In contrast, the low η setup performed with a log-rank p < 0.05 in 90% of the augmented realizations. Moreover, the high η setup classification was uncertain in its predictions for 50% of the subjects in the testing set (for an 80% agreement rate), whereas the low η setup was uncertain in only 10% of the cases. Discussion. Estimating image biomarker model performance based only on the original GTV segmentation, without considering segmentation uncertainty, may be deceiving. The model might show significant stratification performance yet be unstable under delineation variations, which are inherent to manual segmentation. Simulating segmentation uncertainty using the method described allows for more stable image biomarker estimation, selection, and model development. The segmentation uncertainty estimation method described here is universal and can be extended to estimate other protocol uncertainties (such as image acquisition and pre-processing). Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
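The Monte Carlo idea above, perturbing a delineation many times and ranking features by how much they move, can be sketched as follows. The one-pixel dilation/erosion perturbation and the coefficient-of-variation sensitivity proxy (standing in for the paper's η) are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def perturb(mask, rng):
    """Mimic delineation variability with a random one-pixel dilation or erosion."""
    if rng.random() < 0.5:
        return ndimage.binary_dilation(mask)
    return ndimage.binary_erosion(mask)

def features(mask):
    """Two toy 'radiomic' features: area and bounding-box elongation."""
    rows, cols = np.where(mask)
    area = float(mask.sum())
    elongation = (rows.ptp() + 1) / (cols.ptp() + 1)
    return np.array([area, elongation])

gtv = np.zeros((48, 48), dtype=bool)  # square stand-in for the original GTV
gtv[12:36, 12:36] = True

# 300 synthetic contours, echoing the paper's Monte Carlo setup.
vals = np.array([features(perturb(gtv, rng)) for _ in range(300)])

# Sensitivity proxy per feature: coefficient of variation across realizations.
eta = vals.std(axis=0) / np.abs(vals.mean(axis=0))
# Area is contour-sensitive; the elongation of a square is not.
```

Features with low sensitivity would then be preferred for model building, which is exactly the low-η versus high-η comparison the abstract describes.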

18 pages, 1175 KiB  
Article
Multitask Learning Radiomics on Longitudinal Imaging to Predict Survival Outcomes following Risk-Adaptive Chemoradiation for Non-Small Cell Lung Cancer
by Parisa Forouzannezhad, Dominic Maes, Daniel S. Hippe, Phawis Thammasorn, Reza Iranzad, Jie Han, Chunyan Duan, Xiao Liu, Shouyi Wang, W. Art Chaovalitwongse, Jing Zeng and Stephen R. Bowen
Cancers 2022, 14(5), 1228; https://doi.org/10.3390/cancers14051228 - 26 Feb 2022
Cited by 19 | Viewed by 3802
Abstract
Medical imaging provides quantitative and spatial information to evaluate treatment response in the management of patients with non-small cell lung cancer (NSCLC). High throughput extraction of radiomic features on these images can potentially phenotype tumors non-invasively and support risk stratification based on survival outcome prediction. The prognostic value of radiomics from different imaging modalities and time points prior to and during chemoradiation therapy of NSCLC, relative to conventional imaging biomarker or delta radiomics models, remains uncharacterized. We investigated the utility of multitask learning of multi-time point radiomic features, as opposed to single-task learning, for improving survival outcome prediction relative to conventional clinical imaging feature model benchmarks. Survival outcomes were prospectively collected for 45 patients with unresectable NSCLC enrolled on the FLARE-RT phase II trial of risk-adaptive chemoradiation and optional consolidation PD-L1 checkpoint blockade (NCT02773238). FDG-PET, CT, and perfusion SPECT imaging pretreatment and week 3 mid-treatment was performed and 110 IBSI-compliant pyradiomics shape-/intensity-/texture-based features from the metabolic tumor volume were extracted. Outcome modeling consisted of a fused Laplacian sparse group LASSO with component-wise gradient boosting survival regression in a multitask learning framework. Testing performance under stratified 10-fold cross-validation was evaluated for multitask learning radiomics of different imaging modalities and time points. Multitask learning models were benchmarked against conventional clinical imaging and delta radiomics models and evaluated with the concordance index (c-index) and index of prediction accuracy (IPA). FDG-PET radiomics had higher prognostic value for overall survival in test folds (c-index 0.71 [0.67, 0.75]) than CT radiomics (c-index 0.64 [0.60, 0.71]) or perfusion SPECT radiomics (c-index 0.60 [0.57, 0.63]). 
Multitask learning of pre-/mid-treatment FDG-PET radiomics (c-index 0.71 [0.67, 0.75]) outperformed benchmark clinical imaging (c-index 0.65 [0.59, 0.71]) and FDG-PET delta radiomics (c-index 0.52 [0.48, 0.58]) models. Similarly, the IPA for multitask learning FDG-PET radiomics (30%) was higher than clinical imaging (26%) and delta radiomics (15%) models. Radiomics models performed consistently under different voxel resampling conditions. Multitask learning radiomics for outcome modeling provides a clinical decision support platform that leverages longitudinal imaging information. This framework can reveal the relative importance of different imaging modalities and time points when designing risk-adaptive cancer treatment strategies. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
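The model comparisons above rely on the concordance index (c-index): the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times. A from-scratch Harrell's c-index on invented survival data:

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's c-index: fraction of comparable pairs ranked concordantly."""
    concordant, comparable = 0.0, 0.0
    for i in range(len(time)):
        for j in range(len(time)):
            # A pair is comparable if subject i fails first with an observed event.
            if time[i] < time[j] and event[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Invented survival data: higher predicted risk should mean earlier failure.
time = np.array([5.0, 8.0, 11.0, 14.0])
event = np.array([1, 1, 0, 1])  # 0 marks a censored observation
risk = np.array([2.0, 1.5, 1.0, 0.5])
cindex = concordance_index(time, event, risk)
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which frames the 0.52 to 0.71 range reported in the abstract.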

19 pages, 5407 KiB  
Article
The Impact of Resampling and Denoising Deep Learning Algorithms on Radiomics in Brain Metastases MRI
by Ilyass Moummad, Cyril Jaudet, Alexis Lechervy, Samuel Valable, Charlotte Raboutet, Zamila Soilihi, Juliette Thariat, Nadia Falzone, Joëlle Lacroix, Alain Batalla and Aurélien Corroyer-Dulmont
Cancers 2022, 14(1), 36; https://doi.org/10.3390/cancers14010036 - 22 Dec 2021
Cited by 8 | Viewed by 3221
Abstract
Background: Magnetic resonance imaging (MRI) is predominant in the therapeutic management of cancer patients; unfortunately, patients often wait a long time for an examination appointment. Therefore, new MRI devices include deep-learning (DL) solutions to save acquisition time. However, the impact of these algorithms on intensity and texture parameters has been poorly studied. The aim of this study was to evaluate the impact of resampling and denoising DL models on radiomics. Methods: A resampling and denoising DL model was developed on 14,243 T1 brain images from 1.5T-MRI. Radiomics were extracted from 40 brain metastases from 11 patients (2049 images). A total of 104 texture features of DL images were compared to the original images with paired t-tests, Pearson correlation, and the concordance correlation coefficient (CCC). Results: Images acquired in half the time show strong disparities with the originals concerning the radiomics, with significant differences and loss of correlation of 79.81% and 48.08%, respectively. Interestingly, DL models restore textures, with 46.15% of parameters unstable and 25.96% with low CCC, and without differences for the first-order intensity parameters. Conclusions: Resampling and denoising DL models reconstruct low-resolution and noisy MRI images acquired quickly into high-quality images. While fast MRI acquisition loses most of the radiomic features, DL models restore these parameters. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
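The concordance correlation coefficient (CCC) used above measures agreement, not just correlation: it penalizes shifts in mean and scale between the original and DL-reconstructed feature values. A minimal sketch on invented feature values:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Invented feature values: original images vs. DL-reconstructed images.
orig_vals = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
recon_vals = np.array([1.1, 2.0, 2.9, 4.2, 5.0])
agreement = ccc(orig_vals, recon_vals)
```

Two feature series can have a Pearson correlation of 1 yet a low CCC if one is systematically offset, which is why studies like this one report both.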

32 pages, 12587 KiB  
Article
Artificial Neural Networks Predicted the Overall Survival and Molecular Subtypes of Diffuse Large B-Cell Lymphoma Using a Pancancer Immune-Oncology Panel
by Joaquim Carreras, Shinichiro Hiraiwa, Yara Yukie Kikuti, Masashi Miyaoka, Sakura Tomita, Haruka Ikoma, Atsushi Ito, Yusuke Kondo, Giovanna Roncador, Juan F. Garcia, Kiyoshi Ando, Rifat Hamoudi and Naoya Nakamura
Cancers 2021, 13(24), 6384; https://doi.org/10.3390/cancers13246384 - 20 Dec 2021
Cited by 24 | Viewed by 4371
Abstract
Diffuse large B-cell lymphoma (DLBCL) is one of the most frequent subtypes of non-Hodgkin lymphomas. We used artificial neural networks (multilayer perceptron and radial basis function), machine learning, and conventional bioinformatics to predict the overall survival and molecular subtypes of DLBCL. The series included 106 cases and 730 genes of a pancancer immune-oncology panel (nCounter) as predictors. The multilayer perceptron predicted the outcome with high accuracy, with an area under the curve (AUC) of 0.98, and ranked all the genes according to their importance. In a multivariate analysis, ARG1, TNFSF12, REL, and NRP1 correlated with favorable survival (hazard risks: 0.3–0.5), and IFNA8, CASP1, and CTSG, with poor survival (hazard risks = 1.0–2.1). Gene set enrichment analysis (GSEA) showed enrichment toward poor prognosis. These high-risk genes were also associated with the gene expression of M2-like tumor-associated macrophages (CD163), and MYD88 expression. The prognostic relevance of this set of 7 genes was also confirmed within the IPI and MYC translocation strata, the EBER-negative cases, the DLBCL not-otherwise specified (NOS) (High-grade B-cell lymphoma with MYC and BCL2 and/or BCL6 rearrangements excluded), and an independent series of 414 cases of DLBCL in Europe and North America (GSE10846). The perceptron analysis also predicted molecular subtypes (based on the Lymph2Cx assay) with high accuracy (AUC = 1). STAT6, TREM2, and REL were associated with the germinal center B-cell (GCB) subtype, and CD37, GNLY, CD46, and IL17B were associated with the activated B-cell (ABC)/unspecified subtype. The GSEA had a sinusoidal-like plot with association to both molecular subtypes, and immunohistochemistry analysis confirmed the correlation of MAPK3 with the GCB subtype in another series of 96 cases (notably, MAPK3 also correlated with LMO2, but not with M2-like tumor-associated macrophage markers CD163, CSF1R, TNFAIP8, CASP8, PD-L1, PTX3, and IL-10). 
Finally, survival and molecular subtypes were successfully modeled using other machine learning techniques including logistic regression, discriminant analysis, SVM, CHAID, C5, C&R trees, KNN algorithm, and Bayesian network. In conclusion, prognoses and molecular subtypes were predicted with high accuracy using neural networks, and relevant genes were highlighted. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
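The first model above, a multilayer perceptron over a gene-expression panel evaluated by AUC, can be prototyped in a few lines of scikit-learn. The matrix below is synthetic (only the 106 x 730 shape is taken from the abstract), so the resulting AUC is illustrative only:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the nCounter panel: 106 cases x 730 genes.
X = rng.normal(size=(106, 730))
y = rng.integers(0, 2, size=106)
X[y == 1, :30] += 1.0  # make a subset of genes outcome-associated

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Small multilayer perceptron, evaluated by AUC as in the abstract.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, mlp.predict_proba(X_te)[:, 1])
```

With far more genes than cases, as here, held-out evaluation and feature-importance ranking (which the paper derives from the trained network) are essential to avoid overfitting.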

17 pages, 2322 KiB  
Article
Impact of Interobserver Variability in Manual Segmentation of Non-Small Cell Lung Cancer (NSCLC) Applying Low-Rank Radiomic Representation on Computed Tomography
by Michelle Hershman, Bardia Yousefi, Lacey Serletti, Maya Galperin-Aizenberg, Leonid Roshkovan, José Marcio Luna, Jeffrey C. Thompson, Charu Aggarwal, Erica L. Carpenter, Despina Kontos and Sharyn I. Katz
Cancers 2021, 13(23), 5985; https://doi.org/10.3390/cancers13235985 - 28 Nov 2021
Cited by 9 | Viewed by 3509
Abstract
This study tackles interobserver variability with respect to specialty training in manual segmentation of non-small cell lung cancer (NSCLC). The four readers included for segmentation were a data scientist (BY), a medical student (LS), a radiology trainee (MH), and a specialty-trained radiologist (SK), for a total of 293 patients from two publicly available databases. Sørensen–Dice (SD) coefficients and low-rank Pearson correlation coefficients (CC) of 429 radiomics were calculated to assess interobserver variability. Cox proportional hazard (CPH) models and Kaplan-Meier (KM) curves of overall survival (OS) prediction for each dataset were also generated. SD and CC for segmentations demonstrated high similarity, yielding, on average across both databases, SD: 0.79 and CC: 0.92 (BY-SK); SD: 0.81 and CC: 0.83 (LS-SK); and SD: 0.84 and CC: 0.91 (MH-SK). OS prediction through the maximal CPH model for the two datasets yielded c-statistics of 0.7 (95% CI) and 0.69 (95% CI) when radiomic and clinical variables (sex, stage/morphological status, and histology) were combined. KM curves also showed significant discrimination between high- and low-risk patients (p-value < 0.005). This supports that readers' level of training and clinical experience may not significantly influence the ability to extract accurate radiomic features for NSCLC on CT. This potentially allows flexibility in the training required to produce robust prognostic imaging biomarkers for potential clinical translation. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1

20 pages, 4996 KiB  
Article
An Automatic Detection and Localization of Mammographic Microcalcifications ROI with Multi-Scale Features Using the Radiomics Analysis Approach
by Tariq Mahmood, Jianqiang Li, Yan Pei, Faheem Akhtar, Azhar Imran and Muhammad Yaqub
Cancers 2021, 13(23), 5916; https://doi.org/10.3390/cancers13235916 - 24 Nov 2021
Cited by 12 | Viewed by 4427
Abstract
Microcalcifications in breast tissue can be an early sign of breast cancer and play a crucial role in breast cancer screening. This study proposes a radiomics approach based on advanced machine learning algorithms for diagnosing pathological microcalcifications in mammogram images and provides radiologists with a valuable decision support system (in regard to diagnosing patients). An adaptive enhancement method based on the contourlet transform is proposed to enhance microcalcifications and effectively suppress background and noise. Textural and statistical features are extracted from each wavelet layer's high-frequency coefficients to detect microcalcification regions. The top-hat morphological operator and wavelet transform segment the microcalcifications, yielding their exact locations. Finally, the proposed radiomic fusion algorithm is employed to classify the selected features into benign and malignant. The proposed model's diagnostic performance was evaluated on the MIAS dataset and compared with traditional machine learning models, such as the support vector machine, K-nearest neighbor, and random forest, using different evaluation parameters. Our proposed approach outperformed existing models in diagnosing microcalcification, achieving a 0.90 area under the curve, 0.98 sensitivity, and 0.98 accuracy. The experimental findings concur with expert observations, indicating that the proposed approach is highly effective and practical for the early diagnosis of breast microcalcifications, substantially improving the work efficiency of physicians. Full article
(This article belongs to the Special Issue Medical Imaging and Machine Learning​)

Figure 1
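The top-hat morphological operator mentioned above isolates small bright structures, such as microcalcifications, from a slowly varying background by subtracting a morphological opening. A sketch on a synthetic patch; the 5-pixel window and the 20.0 detection threshold are arbitrary illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy import ndimage

# Synthetic mammogram patch: a smooth intensity ramp plus two bright specks.
patch = np.outer(np.linspace(50.0, 100.0, 64), np.linspace(1.0, 1.2, 64))
patch[20, 20] += 40.0
patch[40, 45] += 40.0

# White top-hat: keeps structures smaller than the window while
# suppressing the slowly varying background.
tophat = ndimage.white_tophat(patch, size=5)
detections = tophat > 20.0
```

Because the opening removes anything narrower than the structuring window, the ramp cancels out almost exactly and only the two specks survive the threshold.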

13 pages, 2565 KiB  
Article
Discovering Digital Tumor Signatures—Using Latent Code Representations to Manipulate and Classify Liver Lesions
by Jens Kleesiek, Benedikt Kersjes, Kai Ueltzhöffer, Jacob M. Murray, Carsten Rother, Ullrich Köthe and Heinz-Peter Schlemmer
Cancers 2021, 13(13), 3108; https://doi.org/10.3390/cancers13133108 - 22 Jun 2021
Cited by 1 | Viewed by 2353
Abstract
Modern generative deep learning (DL) architectures allow for unsupervised learning of latent representations that can be exploited in several downstream tasks. Within the field of oncological medical imaging, we term these latent representations “digital tumor signatures” and hypothesize that they can be used, in analogy to radiomics features, to differentiate between lesions and normal liver tissue. Moreover, we conjecture that they can be used for the generation of synthetic data, specifically for the artificial insertion and removal of liver tumor lesions at user-defined spatial locations in CT images. Our approach utilizes an implicit autoencoder, an unsupervised model architecture that combines an autoencoder and two generative adversarial network (GAN)-like components. The model was trained on liver patches from 25 or 57 inhouse abdominal CT scans, depending on the experiment, demonstrating that only minimal data is required for synthetic image generation. The model was evaluated on a publicly available data set of 131 scans. We show that a PCA embedding of the latent representation captures the structure of the data, providing the foundation for the targeted insertion and removal of tumor lesions. To assess the quality of the synthetic images, we conducted two experiments with five radiologists. For experiment 1, only one rater and the ensemble-rater were marginally above the chance level in distinguishing real from synthetic data. For the second experiment, no rater was above the chance level. To illustrate that the “digital signatures” can also be used to differentiate lesion from normal tissue, we employed several machine learning methods. The best performing method, a LinearSVM, obtained 95% (97%) accuracy, 94% (95%) sensitivity, and 97% (99%) specificity, depending on whether all data or only normal appearing patches were used for training of the implicit autoencoder. 
Overall, we demonstrate that the proposed unsupervised learning paradigm can be utilized for the removal and insertion of liver lesions at user defined spatial locations and that the digital signatures can be used to discriminate between lesions and normal liver tissue in abdominal CT scans. Full article
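The downstream pipeline described above (a PCA embedding of the latent codes, then a linear SVM on the signatures) can be sketched with scikit-learn. The 64-dimensional latent vectors below are synthetic stand-ins, not outputs of the paper's implicit autoencoder:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-in "digital signatures": 64-dim latent vectors for normal vs. lesion
# patches (an implicit autoencoder would produce these from CT liver patches).
normal = rng.normal(0.0, 1.0, size=(200, 64))
lesion = rng.normal(0.8, 1.0, size=(200, 64))
X = np.vstack([normal, lesion])
y = np.array([0] * 200 + [1] * 200)

# PCA embedding of the latent space, as used to reveal the data's structure.
emb = PCA(n_components=2).fit_transform(X)

# Linear SVM on the raw signatures, mirroring the best-performing classifier.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = LinearSVC(max_iter=10000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

On real signatures, the same PCA embedding would also drive the targeted insertion and removal of lesions by moving within the latent space.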
(This article belongs to the Special Issue Medical Imaging and Machine Learning)

16 pages, 2614 KiB  
Article
3T MRI-Radiomic Approach to Predict for Lymph Node Status in Breast Cancer Patients
by Domiziana Santucci, Eliodoro Faiella, Ermanno Cordelli, Rosa Sicilia, Carlo de Felice, Bruno Beomonte Zobel, Giulio Iannello and Paolo Soda
Cancers 2021, 13(9), 2228; https://doi.org/10.3390/cancers13092228 - 6 May 2021
Cited by 19 | Viewed by 2175
Abstract
Background: Axillary lymph node (LN) status is one of the main breast cancer prognostic factors, and it is currently defined by invasive procedures. The aim of this study is to predict LN metastasis by combining MRI radiomics features with primary breast tumor histological features and patients’ clinical data. Methods: 99 lesions were evaluated on pre-treatment contrast-enhanced 3T MRI (DCE). All patients had a histologically proven invasive breast cancer and a defined LN status. Patients’ clinical data and tumor histological analyses were previously collected. For each tumor lesion, a semi-automatic segmentation was performed using the second phase of DCE-MRI, and each segmentation was optimized using a convex-hull algorithm. In addition to 14 semantic features and a ROI-volume/convex-hull-volume feature, 242 other quantitative features were extracted. A wrapper selection method selected the 15 most prognostic features (14 quantitative, 1 semantic), which were used to train the final learning model, a Random Forest classifier. Results: The classifier achieved an AUC of 0.856 (label = positive or negative). Each feature group on its own performed worse than the full signature. Conclusions: The combination of clinical, histological, and radiomics features of the primary breast cancer can accurately predict LN status in a non-invasive way. Full article
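A pipeline of this shape (wrapper feature selection down to 15 features, then a Random Forest scored by AUC) can be sketched with scikit-learn. The radiomic matrix below is synthetic, and RFE stands in for the paper's unspecified wrapper method:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the radiomic matrix: 99 lesions x 257 candidate
# features (242 quantitative plus the semantic/volume-ratio features).
X, y = make_classification(n_samples=99, n_features=257, n_informative=15,
                           random_state=0)

# Wrapper-style selection of the 15 most prognostic features.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=15, step=0.2).fit(X, y)
X_sel = X[:, selector.support_]

# Random Forest on the reduced signature, scored by cross-validated ROC-AUC.
auc = cross_val_score(rf, X_sel, y, cv=5, scoring="roc_auc").mean()
```

Selecting features inside each cross-validation fold (rather than once on all data, as in this compact sketch) would give a less optimistic AUC estimate.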

22 pages, 10078 KiB  
Article
Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data
by Laith Alzubaidi, Muthana Al-Amidie, Ahmed Al-Asadi, Amjad J. Humaidi, Omran Al-Shamma, Mohammed A. Fadhel, Jinglan Zhang, J. Santamaría and Ye Duan
Cancers 2021, 13(7), 1590; https://doi.org/10.3390/cancers13071590 - 30 Mar 2021
Cited by 137 | Viewed by 11950
Abstract
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds. More importantly, the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by taking deep learning models with knowledge from a previous task and fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural images, e.g., ImageNet, which has proven ineffective: the features learned from natural images do not match those needed for medical images, and the approach leads to the use of unnecessarily deep and elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train on the small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios, skin and breast cancer classification. According to the reported results, the proposed approach significantly improves the performance of both classification scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved an accuracy of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available while labeled data is limited, and that it can be utilized to improve performance on related tasks in the same domain. To demonstrate this, we fine-tuned the pretrained skin cancer model on foot-skin images to classify them into two classes, normal or abnormal (diabetic foot ulcer, DFU). This task achieved an F1-score of 86.0% when trained from scratch, 96.25% using transfer learning, and 99.25% using double-transfer learning. Full article

24 pages, 2277 KiB  
Article
The Role of [18F]Fluciclovine PET/CT in the Characterization of High-Risk Primary Prostate Cancer: Comparison with [11C]Choline PET/CT and Histopathological Analysis
by Lucia Zanoni, Riccardo Mei, Lorenzo Bianchi, Francesca Giunchi, Lorenzo Maltoni, Cristian Vincenzo Pultrone, Cristina Nanni, Irene Bossert, Antonella Matti, Riccardo Schiavina, Michelangelo Fiorentino, Cristina Fonti, Filippo Lodi, Antonietta D’Errico, Eugenio Brunocilla and Stefano Fanti
Cancers 2021, 13(7), 1575; https://doi.org/10.3390/cancers13071575 - 29 Mar 2021
Cited by 5 | Viewed by 2521
Abstract
The primary aim of the study was to evaluate the role of [18F]Fluciclovine PET/CT in the characterization of intra-prostatic lesions in high-risk primary PCa patients eligible for radical prostatectomy, in comparison with conventional [11C]Choline PET/CT and validated against prostatectomy pathologic examination. Secondary aims were to determine the performance of PET semi-quantitative parameters (SUVmax; target-to-background ratios [TBRs], using abdominal aorta, bone marrow, and liver as backgrounds) for malignant lesion detection (and their best cut-off values) and to search for predictive factors of malignancy. A six-sextant prostate template was created and used by PET readers and pathologists for data comparison and validation. PET visual and semi-quantitative analyses were performed first patient-based, blinded to histopathology, and subsequently lesion-based, un-blinded, according to the pathology reference template. Among the 19 patients included (mean age 63 years, 89% high-risk and 11% very-high-risk, mean PSA 9.15 ng/mL), 45 malignant and 31 benign lesions were found, and 19 healthy areas were selected (n = 95). For both tracers, the location of the “blinded” prostate SUVmax matched the lobe of the lesion with the highest pGS in 17/19 cases (89%). There was a direct correlation between [18F]Fluciclovine uptake values and pISUP. Overall, lesion-based (n = 95), the performance of PET semi-quantitative parameters, with either [18F]Fluciclovine or [11C]Choline, in detecting malignant/ISUP2-5/ISUP4-5 PCa lesions was moderate and similar (AUCs ≥ 0.70) but still inadequate (AUCs ≤ 0.81) as a standalone staging procedure. A [18F]Fluciclovine TBR-L3 ≥ 1.5 would depict a clinically significant lesion with a sensitivity of 85% and a specificity of 68%, whereas a SUVmax cut-off of 4 would identify an ISUP 4-5 lesion in all cases (sensitivity 100%), although with low specificity (52%). TBRs (especially with thresholds significantly higher than the aorta and slightly higher than bone marrow) may be complementary tools for malignancy targeting. Full article
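How a fixed cut-off such as TBR-L3 ≥ 1.5 translates into sensitivity and specificity can be sketched as follows; the TBR values and labels below are illustrative, not the study data:

```python
import numpy as np

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of a 'score >= cutoff' decision rule."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pred = scores >= cutoff
    sensitivity = (pred & labels).sum() / labels.sum()
    specificity = (~pred & ~labels).sum() / (~labels).sum()
    return float(sensitivity), float(specificity)

# Hypothetical TBR-L3 values and malignancy ground truth:
tbr = [2.1, 1.8, 1.6, 1.4, 0.9, 1.2, 1.7, 0.8]
malignant = [1, 1, 1, 1, 0, 0, 0, 0]
sens, spec = sens_spec(tbr, malignant, cutoff=1.5)  # 0.75, 0.75
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 − specificity yields the ROC curves behind the reported AUCs.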

16 pages, 5952 KiB  
Article
Artificial Intelligence Techniques for Prostate Cancer Detection through Dual-Channel Tissue Feature Engineering
by Cho-Hee Kim, Subrata Bhattacharjee, Deekshitha Prakash, Suki Kang, Nam-Hoon Cho, Hee-Cheol Kim and Heung-Kook Choi
Cancers 2021, 13(7), 1524; https://doi.org/10.3390/cancers13071524 - 26 Mar 2021
Cited by 14 | Viewed by 8217
Abstract
The optimal diagnostic and treatment strategies for prostate cancer (PCa) are constantly changing. Given the importance of accurate diagnosis, texture analysis of stained prostate tissues is important for automatic PCa detection. We used artificial intelligence (AI) techniques to classify dual-channel tissue features extracted separately from the hematoxylin and eosin (H&E) channels of stained tissue images. Tissue feature engineering was performed to extract first-order statistics (FOS)-based textural features from each stain channel, and classification between benign and malignant was carried out based on the important features. Recursive feature elimination (RFE) and one-way analysis of variance (ANOVA) were used to identify significant features, yielding the best five of the six extracted features. The AI techniques used in this study for binary classification (benign vs. malignant and low-grade vs. high-grade) were support vector machine (SVM), logistic regression (LR), bagging tree, boosting tree, and a dual-channel bidirectional long short-term memory (DC-BiLSTM) network, and a comparative analysis was carried out between these algorithms. Two different datasets were used for PCa classification: the first (private) was used for training and testing the AI models, and the second (public) was used only for testing, to evaluate model performance. The automatic AI classification system performed well and showed satisfactory results according to the hypothesis of this study. Full article
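Extracting first-order-statistic texture features from one stain channel might look like the following sketch. The abstract does not list the exact six features, so this set (mean, variance, skewness, kurtosis, histogram energy, histogram entropy) is a plausible assumption, and the random patch stands in for a deconvolved H or E channel:

```python
import numpy as np
from scipy import stats

def fos_features(channel):
    """A plausible first-order-statistic (FOS) feature set for one channel."""
    x = np.asarray(channel, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(x.mean()),
        "variance": float(x.var()),
        "skewness": float(stats.skew(x)),
        "kurtosis": float(stats.kurtosis(x)),
        "energy": float(np.sum(p ** 2)),            # histogram uniformity
        "entropy": float(-np.sum(p * np.log2(p))),  # histogram randomness
    }

rng = np.random.default_rng(0)
feats = fos_features(rng.integers(0, 256, size=(64, 64)))
```

Computing the same dictionary for the H and the E channel and concatenating the values gives the dual-channel feature vector that the classifiers would consume.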

30 pages, 9896 KiB  
Article
piNET–An Automated Proliferation Index Calculator Framework for Ki67 Breast Cancer Images
by Rokshana Stephny Geread, Abishika Sivanandarajah, Emily Rita Brouwer, Geoffrey A. Wood, Dimitrios Androutsos, Hala Faragalla and April Khademi
Cancers 2021, 13(1), 11; https://doi.org/10.3390/cancers13010011 - 22 Dec 2020
Cited by 14 | Viewed by 3469
Abstract
In this work, a novel proliferation index (PI) calculator for Ki67 images, called piNET, is proposed. It is successfully tested on four datasets from three scanners, comprising patches, tissue microarrays (TMAs), and whole slide images (WSIs), representing a diverse multi-centre dataset for evaluating Ki67 quantification. Compared to state-of-the-art methods, piNET consistently performs the best over all datasets, with an average PI difference of 5.603%, a PI accuracy rate of 86%, and a correlation coefficient R = 0.927. The success of the system can be attributed to several innovations. Firstly, the tool is built on deep learning, which can adapt to the wide variability of medical images, and Ki67 quantification is posed as a detection problem to mimic pathologists’ workflow, which improves accuracy and efficiency. Secondly, the system is trained purely on tumor cells, which reduces false positives from non-tumor cells without needing the usual prerequisite tumor segmentation step. Thirdly, the concept of learning background regions through weak supervision is introduced, by providing the system with ideal and non-ideal (artifact) patches, which further reduces false positives. Lastly, a novel hotspot analysis is proposed that allows automated methods to score patches from WSIs containing “significant” activity. Full article
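Once the detector has counted Ki67-positive and Ki67-negative tumor cells, the PI itself and the per-image error metric reported above are simple ratios; a sketch with hypothetical counts (piNET's actual contribution is the deep-learning detection of the cells):

```python
def proliferation_index(positive_cells, negative_cells):
    """Ki67 PI: percentage of tumor cells whose nuclei stain Ki67-positive."""
    total = positive_cells + negative_cells
    return 100.0 * positive_cells / total if total else 0.0

def pi_difference(pi_predicted, pi_reference):
    """Absolute PI difference, the per-image error metric reported above."""
    return abs(pi_predicted - pi_reference)

# Hypothetical counts from a detector such as piNET:
pi = proliferation_index(positive_cells=230, negative_cells=770)  # 23.0
err = pi_difference(pi, 25.0)                                     # 2.0
```

Averaging `pi_difference` over a dataset gives the "average PI difference" figure quoted in the abstract.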

Review


22 pages, 960 KiB  
Review
Radiomic and Volumetric Measurements as Clinical Trial Endpoints—A Comprehensive Review
by Ionut-Gabriel Funingana, Pubudu Piyatissa, Marika Reinius, Cathal McCague, Bristi Basu and Evis Sala
Cancers 2022, 14(20), 5076; https://doi.org/10.3390/cancers14205076 - 17 Oct 2022
Cited by 6 | Viewed by 2627
Abstract
Clinical trials for oncology drug development have long relied on surrogate outcome biomarkers that assess changes in tumor burden to accelerate drug registration (i.e., the Response Evaluation Criteria in Solid Tumors version 1.1 (RECIST v1.1)). Drug-induced reduction in tumor size is an imperfect surrogate marker for drug activity, and yet a radiologically determined objective response rate is a widely used endpoint for Phase 2 trials. With the addition of therapies targeting complex biological systems such as the immune system and DNA damage repair pathways, the incorporation of integrative response and outcome biomarkers may add more predictive value. We performed a review of the relevant literature in four representative tumor types (breast cancer, rectal cancer, lung cancer, and glioblastoma) to assess the preparedness of volumetric and radiomics metrics as clinical trial endpoints. We identified three key areas (segmentation, validation, and data sharing strategies) where concerted efforts are required to enable the progress of volumetric- and radiomics-based clinical trial endpoints toward wider clinical implementation. Full article
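For orientation, the RECIST v1.1 target-lesion logic that these volumetric and radiomic endpoints would extend can be sketched as a simplified categorizer. This sketch covers only the sum-of-longest-diameters rules and omits non-target lesions, new lesions, and response confirmation:

```python
def recist_response(baseline_sld, nadir_sld, current_sld):
    """Simplified RECIST v1.1 category from sums of longest diameters (mm)
    of target lesions; omits non-target/new-lesion rules and confirmation."""
    if current_sld == 0:
        return "CR"  # complete response: disappearance of all target lesions
    if current_sld >= 1.2 * nadir_sld and current_sld - nadir_sld >= 5:
        return "PD"  # >= 20% and >= 5 mm increase over the smallest (nadir) SLD
    if current_sld <= 0.7 * baseline_sld:
        return "PR"  # >= 30% decrease from the baseline SLD
    return "SD"      # stable disease

resp = recist_response(baseline_sld=100, nadir_sld=60, current_sld=65)  # "PR"
```

Volumetric and radiomics endpoints aim to replace these one-dimensional diameter sums with richer whole-tumor measurements, which is why segmentation quality and validation dominate the review's key areas.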
