AI as a Tool to Improve Hybrid Imaging in Cancer

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (30 June 2022) | Viewed by 50677

Special Issue Editor


Guest Editor: Dr. Malene Fischer
PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Hospital, Westminster Bridge Road, London, UK
Interests: hybrid imaging; PET/CT; PET/MR; oncology

Special Issue Information

Dear Colleagues,

Imaging plays a pivotal role in treating patients with cancer, and hybrid imaging captures key phenotypic presentations of the disease, informing staging and prognosis as well as characterisation of the tumour. A prerequisite for continued improvement in the treatment of cancer patients is the ability to stratify patients into ever smaller subpopulations, enabling interventions to be tailored to individual patients by balancing the potential benefit against the risk and severity of side effects. The recent shift in AI towards deep learning algorithms, which learn from examples rather than rule-based logic, has enabled studies demonstrating the potential predictive power of data-driven stratification that takes hundreds of variables into account. These new analytical methods allow us to harvest information not previously accessible or well understood, and offer new ways to improve image acquisition, reconstruction, and clinical workflows. This Special Issue will present up-to-date knowledge and examples of the use of AI in a wide range of applications within hybrid imaging, including tumour classification, segmentation, and multimodal data analysis, as well as in the preprocessing and reconstruction of images.

Dr. Malene Fischer
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial intelligence
  • Deep learning
  • Machine learning
  • Hybrid imaging
  • PET/CT
  • PET/MR
  • SPECT/CT
  • Multimodal imaging
  • Cancer
  • Oncology
  • Tumour segmentation
  • Tumour characterisation
  • Prediction
  • Multimodal data analysis
  • Image reconstruction


Published Papers (14 papers)


Research


13 pages, 2771 KiB  
Article
A Multi-Task Convolutional Neural Network for Lesion Region Segmentation and Classification of Non-Small Cell Lung Carcinoma
by Zhao Wang, Yuxin Xu, Linbo Tian, Qingjin Chi, Fengrong Zhao, Rongqi Xu, Guilei Jin, Yansong Liu, Junhui Zhen and Sasa Zhang
Diagnostics 2022, 12(8), 1849; https://doi.org/10.3390/diagnostics12081849 - 31 Jul 2022
Cited by 3 | Viewed by 1751
Abstract
Targeted therapy is an effective treatment for non-small cell lung cancer. Before treatment, pathologists need to confirm tumor morphology and type, which is time-consuming and highly repetitive. In this study, we propose a multi-task deep learning model based on a convolutional neural network for joint cancer lesion region segmentation and histological subtype classification, using magnified pathological tissue images. First, we constructed a shared feature extraction channel to extract abstract visual-space information for joint segmentation and classification learning. Then, the weighted losses of the segmentation and classification tasks were tuned to balance the computing bias of the multi-task model. We evaluated our model on a private in-house dataset of pathological tissue images collected from Qilu Hospital of Shandong University. The proposed approach achieved Dice similarity coefficients of 93.5% and 89.0% for segmenting squamous cell carcinoma (SCC) and adenocarcinoma (AD) specimens, respectively. In addition, it achieved an accuracy of 97.8% in classifying SCC vs. normal tissue and 100% in classifying AD vs. normal tissue. The experimental results demonstrate that our method outperforms other state-of-the-art methods and shows promising performance for both lesion region segmentation and subtype classification.
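To make the shared-feature design concrete, here is a minimal PyTorch sketch of a shared encoder feeding a segmentation head and a classification head, joined by a weighted loss. The layer sizes, class count, and the 0.7/0.3 weights are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with a segmentation head and a classification head."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(            # shared feature extraction channel
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)      # per-pixel lesion logits
        self.cls_head = nn.Sequential(           # histological subtype logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

def multitask_loss(seg_logits, cls_logits, seg_target, cls_target,
                   lambda_seg=0.7, lambda_cls=0.3):
    """Weighted joint loss; seg_target is a float mask shaped like seg_logits."""
    seg_loss = nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_target)
    cls_loss = nn.functional.cross_entropy(cls_logits, cls_target)
    return lambda_seg * seg_loss + lambda_cls * cls_loss
```

Tuning lambda_seg and lambda_cls against validation metrics is what the abstract refers to as balancing the computing bias of the multi-task model.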

16 pages, 4880 KiB  
Article
Breast Cancer Detection in Mammography Images Using Deep Convolutional Neural Networks and Fuzzy Ensemble Modeling Techniques
by Ayman Altameem, Chandrakanta Mahanty, Ramesh Chandra Poonia, Abdul Khader Jilani Saudagar and Raghvendra Kumar
Diagnostics 2022, 12(8), 1812; https://doi.org/10.3390/diagnostics12081812 - 28 Jul 2022
Cited by 32 | Viewed by 3029
Abstract
Breast cancer has evolved into the most lethal illness impacting women all over the globe. Detecting breast cancer early reduces mortality and increases the chances of a full recovery, and researchers all around the world are working on breast cancer screening tools based on medical imaging. Deep learning approaches have piqued the attention of many in the medical imaging field due to their rapid growth. In this research, mammography images were used to detect breast cancer. We used four mammography imaging datasets with a comparable number (1145) of normal, benign, and malignant images, and various deep CNN models (Inception V4, ResNet-164, VGG-11, and DenseNet121) as base classifiers. The proposed technique employs an ensemble approach in which the Gompertz function is used to build fuzzy rankings of the base classification techniques, and the decision scores of the base models are adaptively combined to construct the final predictions. The proposed fuzzy ensemble techniques outperform each individual transfer-learning methodology as well as several advanced ensemble strategies (weighted average, Sugeno integral) in terms of prediction accuracy. The suggested Inception V4 ensemble model with the fuzzy-rank-based Gompertz function has a 99.32% accuracy rate. We believe that the suggested approach will be of tremendous value to healthcare practitioners in identifying breast cancer patients early on, perhaps leading to an immediate diagnosis.
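Fuzzy-rank fusion with a Gompertz-type function can be sketched in NumPy as follows; the rank transform below is modelled on published fuzzy-rank ensembles and is an assumption, since the paper's exact function and constants are not restated here.

```python
import numpy as np

def gompertz_fuzzy_rank_fusion(prob_list):
    """Fuse softmax outputs of several base CNNs via Gompertz-style fuzzy ranks.

    prob_list: list of (n_samples, n_classes) arrays, one per base model.
    A high probability maps to a low fuzzy rank, so the class with the
    minimum summed rank wins.
    """
    fused = np.zeros_like(prob_list[0])
    for probs in prob_list:
        fused += 1.0 - np.exp(-np.exp(-2.0 * probs))  # high prob -> low rank
    return fused.argmin(axis=1)

# Illustrative use with three base classifiers, 2 samples, 3 classes:
p1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
p3 = np.array([[0.8, 0.1, 0.1], [0.1, 0.4, 0.5]])
print(gompertz_fuzzy_rank_fusion([p1, p2, p3]))       # -> [0 2]
```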

12 pages, 3340 KiB  
Article
Isolated Convolutional-Neural-Network-Based Deep-Feature Extraction for Brain Tumor Classification Using Shallow Classifier
by Yassir Edrees Almalki, Muhammad Umair Ali, Karam Dad Kallu, Manzar Masud, Amad Zafar, Sharifa Khalid Alduraibi, Muhammad Irfan, Mohammad Abd Alkhalik Basha, Hassan A. Alshamrani, Alaa Khalid Alduraibi and Mervat Aboualkheir
Diagnostics 2022, 12(8), 1793; https://doi.org/10.3390/diagnostics12081793 - 24 Jul 2022
Cited by 14 | Viewed by 2063
Abstract
In today’s world, a brain tumor is one of the most serious diseases; if it is detected at an advanced stage, it can lead to a very limited survival rate. Brain tumor classification is therefore crucial for appropriate therapeutic planning to improve patient quality of life. This research investigates a deep-feature-trained brain tumor detection and differentiation model using classical/linear machine learning classifiers (MLCs). Transfer learning is used to obtain deep brain magnetic resonance imaging (MRI) scan features from a constructed convolutional neural network (CNN). First, isolated CNNs of multiple depths (19, 22, and 25 layers) are constructed and trained to evaluate performance. The developed CNN models are then used to train the MLCs on deep features extracted via transfer learning. The available brain MRI datasets are employed to validate the proposed approach, and deep features from pre-trained models are also extracted to evaluate and compare their performance. The proposed CNN-deep-feature-trained support vector machine model yielded higher accuracy than other commonly used pre-trained deep-feature MLC training models, detecting and distinguishing brain tumors with 98% accuracy. It also achieved a good classification rate (97.2%) on an unseen dataset not used to train the model. Following extensive testing and analysis, the suggested technique may be helpful in assisting doctors in diagnosing brain tumors.
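A hedged sketch of the deep-feature-plus-MLC pipeline: a pre-trained backbone (VGG19 here, standing in for the authors' custom 19/22/25-layer CNNs, which are not public) supplies fixed features to a scikit-learn SVM.

```python
import torch, torchvision
from sklearn.svm import SVC

# Pre-trained backbone used as a frozen feature extractor.
backbone = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()

@torch.no_grad()
def deep_features(images):                       # images: (N, 3, 224, 224) tensor
    f = backbone(images)                         # (N, 512, 7, 7) feature maps
    return torch.flatten(torch.nn.functional.adaptive_avg_pool2d(f, 1), 1).numpy()

# X_train/X_test: preprocessed MRI slices; y_train/y_test: tumor labels
# svm = SVC(kernel="rbf").fit(deep_features(X_train), y_train)
# accuracy = svm.score(deep_features(X_test), y_test)
```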

14 pages, 3536 KiB  
Article
Radiomics Diagnostic Tool Based on Deep Learning for Colposcopy Image Classification
by Yuliana Jiménez Gaona, Darwin Castillo Malla, Bernardo Vega Crespo, María José Vicuña, Vivian Alejandra Neira, Santiago Dávila and Veronique Verhoeven
Diagnostics 2022, 12(7), 1694; https://doi.org/10.3390/diagnostics12071694 - 12 Jul 2022
Cited by 6 | Viewed by 3958
Abstract
Background: Colposcopy imaging is widely used to diagnose, treat, and follow up on premalignant and malignant lesions of the vulva, vagina, and cervix; accordingly, deep learning algorithms are being used widely in cervical cancer diagnosis tools. In this study, we developed and preliminarily validated a model based on a U-Net network plus an SVM to classify cervical lesions on colposcopy images. Methodology: Two sets of images were used: the Intel & MobileODT Cervical Cancer Screening public dataset, and a private dataset from a public hospital in Ecuador acquired during routine colposcopy after the application of acetic acid and lugol. For the latter, the corresponding clinical information was collected, specifically PAP-smear cytology and human papillomavirus (HPV) testing prior to colposcopy. The lesions of the cervix, or regions of interest, were segmented by the U-Net and classified by the SVM model. Results: The CAD system was evaluated for its ability to predict the risk of cervical cancer. The lesion segmentation results indicate a Dice coefficient of 50%, a precision of 65%, and an accuracy of 80%. The classification sensitivity, specificity, and accuracy were 70%, 48.8%, and 58%, respectively. Twenty randomly selected images were sent to 13 expert colposcopists for a statistical comparison between expert visual evaluation and the CAD tool (p-value of 0.597). Conclusion: The CAD system needs improvement but could be acceptable in settings where women have limited access to clinicians for the diagnosis, follow-up, and treatment of cervical cancer; better performance may be possible through the exploration of other deep learning methods with larger datasets.
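The segmentation figures reported above (Dice, precision, accuracy) all derive from pixel-wise confusion counts between the predicted and reference masks; a small NumPy sketch:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Pixel-wise Dice, precision, and accuracy for binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)
    precision = tp / (tp + fp + 1e-8)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return dice, precision, accuracy
```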

15 pages, 4226 KiB  
Article
End-to-End Calcification Distribution Pattern Recognition for Mammograms: An Interpretable Approach with GNN
by Melissa Min-Szu Yao, Hao Du, Mikael Hartman, Wing P. Chan and Mengling Feng
Diagnostics 2022, 12(6), 1376; https://doi.org/10.3390/diagnostics12061376 - 02 Jun 2022
Cited by 2 | Viewed by 4174
Abstract
Purpose: We aimed to develop a novel interpretable artificial intelligence (AI) model for the automatic detection and classification of calcification distribution patterns in mammographic images using a graph convolution approach. Materials and methods: Images from 292 patients with diagnosed breast cancers, showing calcifications according to the mammographic reports, were collected. The calcification distributions were classified as diffuse, segmental, regional, grouped, or linear. Excluded were mammograms with (1) breast cancer with multiple lexicons, such as mass, asymmetry, or architectural distortion, without calcifications; (2) hidden calcifications that were difficult to mark; or (3) incomplete medical records. Results: A graph-convolutional-network-based model was developed. A total of 581 mammographic images from the 292 cases were divided by calcification distribution pattern: diffuse (n = 67), regional (n = 115), grouped (n = 337), linear (n = 8), or segmental (n = 54). Classification performance was measured using precision, recall, F1 score, accuracy, and the multi-class area under the receiver operating characteristic curve. The proposed model achieved a precision of 0.522 ± 0.028, sensitivity of 0.643 ± 0.017, specificity of 0.847 ± 0.009, F1 score of 0.559 ± 0.018, accuracy of 64.325 ± 1.694%, and area under the curve of 0.745 ± 0.030, outperforming all baseline models. The predicted linear and diffuse classifications were highly similar to the ground truth, and the predicted grouped and regional classifications were also superior to the baseline models. The predictions are interpretable via visualization methods that highlight the important calcification nodes in the graphs. Conclusions: The proposed deep neural network framework automatically detects and classifies calcification distribution patterns on mammographic images highly suspicious for breast cancer. Further study of the AI model in an actual clinical setting and additional data collection will improve its performance.
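The core operation of a graph-convolutional classifier over calcifications can be sketched in a few lines of NumPy; treating each calcification as a node and spatial proximity as edges is an assumption here, since the paper's exact graph construction is not reproduced.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step over a calcification graph.

    A: (n, n) adjacency matrix (nodes = calcifications, edges = proximity).
    X: (n, d) node features; W: (d, d_out) learned weights.
    Standard Kipf-Welling propagation: D^-1/2 (A + I) D^-1/2 X W, then ReLU.
    """
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

# Stacking such layers and pooling node embeddings into one graph-level
# vector yields a distribution-pattern classifier of the kind described above.
```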

18 pages, 761 KiB  
Article
Bio-Imaging-Based Machine Learning Algorithm for Breast Cancer Detection
by Sadia Safdar, Muhammad Rizwan, Thippa Reddy Gadekallu, Abdul Rehman Javed, Mohammad Khalid Imam Rahmani, Khurram Jawad and Surbhi Bhatia
Diagnostics 2022, 12(5), 1134; https://doi.org/10.3390/diagnostics12051134 - 03 May 2022
Cited by 30 | Viewed by 3080
Abstract
Breast cancer is one of the most widespread diseases in women worldwide, with the second-largest mortality rate in women, especially in European countries. It occurs when malignant, cancerous lumps start to grow in the breast cells. Accurate and early diagnosis can help increase survival rates. A computer-aided detection (CAD) system is necessary for radiologists to differentiate between normal and abnormal cell growth. This research consists of two parts. The first part is a brief overview of the different image modalities, such as ultrasound, histography, and mammography, drawing on a wide range of research databases. The second part evaluates different machine learning techniques used to estimate breast cancer recurrence rates. The first step is preprocessing, including eliminating missing values, data noise, and transformation. The dataset is then split, with 60% used for training and the remaining 40% for testing. We focus on minimizing type I errors (the false-positive rate, FPR) and type II errors (the false-negative rate, FNR) to improve accuracy and sensitivity. Our proposed model uses machine learning techniques such as the support vector machine (SVM), logistic regression (LR), and K-nearest neighbors (KNN) to achieve better accuracy in breast cancer classification. We attain a highest accuracy of 97.7%, with an FPR of 0.01, an FNR of 0.03, and an area under the ROC curve (AUC) of 0.99. The results show that our proposed model successfully classifies breast tumors while overcoming previous research limitations. Finally, we summarize future trends and challenges in classification and segmentation for breast cancer detection.
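A minimal scikit-learn sketch of the described pipeline (60/40 split; SVM, LR, and KNN; FPR and FNR read off the confusion matrix), using the built-in Wisconsin breast cancer data as a stand-in for the study's dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

X, y = load_breast_cancer(return_X_y=True)       # stand-in features/labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

for name, clf in [("SVM", SVC()), ("LR", LogisticRegression(max_iter=5000)),
                  ("KNN", KNeighborsClassifier())]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(f"{name}: acc={(tp + tn) / len(y_te):.3f} "
          f"FPR={fp / (fp + tn):.3f} FNR={fn / (fn + tp):.3f}")
```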

12 pages, 1170 KiB  
Article
Glioma Tumors’ Classification Using Deep-Neural-Network-Based Features with SVM Classifier
by Ghazanfar Latif, Ghassen Ben Brahim, D. N. F. Awang Iskandar, Abul Bashar and Jaafar Alghazo
Diagnostics 2022, 12(4), 1018; https://doi.org/10.3390/diagnostics12041018 - 18 Apr 2022
Cited by 48 | Viewed by 3385
Abstract
The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose glioma brain tumors from multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis is lengthy as well as costly. With this type of cancerous disease, early detection increases the chances of suitable medical procedures leading to either a full recovery or the prolongation of the patient’s life, which has spurred efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This paper proposes a multi-class glioma tumor classification technique using deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features from the MR images, which are then fed to the SVM classifier. With the proposed technique, an accuracy of 96.19% was achieved for the HGG glioma type using the FLAIR modality, and 95.46% for the LGG glioma type using the T2 modality, for the classification of four glioma classes (edema, necrosis, enhancing, and non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset, and better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.
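The modality comparison reported above (FLAIR best for HGG, T2 for LGG) amounts to fitting one classifier per modality over the extracted deep features; a hedged scikit-learn sketch, with `feats` and `labels` as placeholders for the per-modality feature matrices and four-class labels:

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# feats[modality]: (n_samples, n_deep_features) array of CNN features;
# labels: the four glioma classes. Both are placeholders for extracted data.
for modality in ("FLAIR", "T1", "T1c", "T2"):
    acc = cross_val_score(SVC(kernel="rbf"), feats[modality], labels, cv=5).mean()
    print(f"{modality}: {acc:.4f}")              # compare accuracy per modality
```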

18 pages, 2105 KiB  
Article
A Non-Invasive Interpretable Diagnosis of Melanoma Skin Cancer Using Deep Learning and Ensemble Stacking of Machine Learning Models
by Iftiaz A. Alfi, Md. Mahfuzur Rahman, Mohammad Shorfuzzaman and Amril Nazir
Diagnostics 2022, 12(3), 726; https://doi.org/10.3390/diagnostics12030726 - 17 Mar 2022
Cited by 32 | Viewed by 6807
Abstract
A skin lesion is a portion of skin that exhibits abnormal growth compared to the surrounding skin. The ISIC 2018 lesion dataset has seven classes; a miniature version is also available with only two classes, malignant and benign. Malignant tumors are cancerous and can multiply and spread throughout the body at a much faster rate; benign tumors are non-cancerous. Early detection of a cancerous skin lesion is crucial for the patient’s survival. Deep learning and machine learning models play an essential role in the detection of skin lesions, but accuracies have so far been compromised by image occlusions and imbalanced datasets. In this paper, we introduce an interpretable method for the non-invasive diagnosis of melanoma skin cancer using deep learning and ensemble stacking of machine learning models. The dataset used to train the classifiers contains balanced images of benign and malignant skin moles. Hand-crafted features are used to train the machine-learning base models (logistic regression, SVM, random forest, KNN, and gradient boosting machine), whose predictions are used to train a level-one stacking model via cross-validation on the training set. Deep learning models (MobileNet, Xception, ResNet50, ResNet50V2, and DenseNet121), pre-trained on ImageNet, were used for transfer learning. Each model was evaluated as a classifier, and the deep learning models were then ensembled in different combinations and assessed. Furthermore, Shapley additive explanations are used to construct an interpretability approach that generates heatmaps identifying the parts of an image most suggestive of the illness, allowing dermatologists to understand the results of our model in a way that makes sense to them. For evaluation, we calculated the accuracy, F1 score, Cohen’s kappa, confusion matrix, and ROC curves, and identified the best model for classifying skin lesions.
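The described two-level stacking maps directly onto scikit-learn's StackingClassifier, which fits the level-one meta-learner on out-of-fold base-model predictions; the logistic-regression meta-learner below is an assumption, as the paper's choice is not restated here.

```python
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Level-0 base models trained on hand-crafted lesion features; cv=5 fits the
# meta-learner on cross-validated (out-of-fold) predictions, as described.
stack = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier()),
                ("gbm", GradientBoostingClassifier())],
    final_estimator=LogisticRegression(), cv=5)
# stack.fit(X_train, y_train); stack.predict(X_test)
```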

35 pages, 7402 KiB  
Article
Breast Cancer Mammograms Classification Using Deep Neural Network and Entropy-Controlled Whale Optimization Algorithm
by Saliha Zahoor, Umar Shoaib and Ikram Ullah Lali
Diagnostics 2022, 12(2), 557; https://doi.org/10.3390/diagnostics12020557 - 21 Feb 2022
Cited by 44 | Viewed by 5395
Abstract
Breast cancer has affected many women worldwide. Many computer-aided diagnosis (CAD) systems have been established for the detection and classification of breast cancer, because inspection of mammogram images by radiologists is a difficult and time-consuming task, and early diagnosis enables better treatment. There is still a need to improve existing CAD systems by incorporating new methods and technologies that provide more precise results. This paper investigates ways to prevent the disease as well as new classification methods to reduce the risk of breast cancer in women’s lives; feature optimization is performed to classify the results accurately, and the CAD system’s accuracy is improved by reducing false-positive rates. The Modified Entropy Whale Optimization Algorithm (MEWOA), based on fusion for deep feature extraction, is proposed to perform the classification. In the proposed method, fine-tuned MobileNetV2 and NASNet-Mobile are applied: features are extracted and optimized, and the optimized features are fused and further optimized using MEWOA. Finally, machine learning classifiers are applied to the optimized deep features to classify the breast cancer images. Three publicly available datasets are used to extract the features and perform the classification: INbreast, MIAS, and CBIS-DDSM. The maximum accuracies achieved are 99.7% on INbreast, 99.8% on MIAS, and 93.8% on CBIS-DDSM. Finally, a comparison with existing methods demonstrates that the proposed algorithm outperforms the other approaches.
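MEWOA itself is not reproduced here, but the surrounding fuse-then-select pattern can be sketched with a mutual-information filter as a plain stand-in for the entropy-controlled whale optimization; all variable names below are placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# f_mobilenet, f_nasnet: deep-feature matrices from the two fine-tuned
# backbones (placeholders); y: image labels. Serial fusion by concatenation,
# then a mutual-information filter in place of the authors' MEWOA selection.
fused = np.concatenate([f_mobilenet, f_nasnet], axis=1)
selected = SelectKBest(mutual_info_classif, k=500).fit_transform(fused, y)
# `selected` then feeds the machine-learning classifiers for the final labels.
```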

17 pages, 6085 KiB  
Article
Two-Stage Liver and Tumor Segmentation Algorithm Based on Convolutional Neural Network
by Lu Meng, Qianqian Zhang and Sihang Bu
Diagnostics 2021, 11(10), 1806; https://doi.org/10.3390/diagnostics11101806 - 29 Sep 2021
Cited by 19 | Viewed by 2759
Abstract
The liver is an essential metabolic organ of the human body, and malignant liver tumors seriously threaten human life. Segmentation algorithms for the liver and liver tumors are an essential branch of computer-aided diagnosis. This paper proposes a two-stage liver and tumor segmentation algorithm based on a convolutional neural network (CNN), with the stages being liver localization and tumor segmentation. In the liver localization stage, the network segments the liver region, adopting an encoding–decoding structure with long-distance feature fusion and using the spatial information of shallow features to improve liver identification. In the tumor segmentation stage, based on the liver segmentation results of the first stage, a CNN model was designed to accurately identify liver tumors using both the 2D image features and the 3D spatial features of the CT image slices. An attention mechanism is also used to improve the segmentation of small liver tumors. The proposed algorithm was tested on the public Liver Tumor Segmentation Challenge (LiTS) dataset, achieving a Dice coefficient of 0.967 for liver segmentation and 0.725 for tumor segmentation. The proposed algorithm accurately segments the liver and liver tumors in CT images and achieves the highest Dice coefficient among the compared state-of-the-art algorithms.
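The handoff between the two stages is essentially a mask-guided crop: the stage-one liver mask restricts the volume the tumor network sees. A hedged NumPy sketch (the margin value and function names are illustrative):

```python
import numpy as np

def crop_to_liver(volume, liver_mask, margin=10):
    """Stage 1 -> Stage 2 handoff: crop the CT volume to the liver bounding
    box (plus a safety margin) so the tumor network only sees liver tissue."""
    zs, ys, xs = np.nonzero(liver_mask)
    lo = np.maximum([zs.min(), ys.min(), xs.min()] - np.array(margin), 0)
    hi = np.minimum([zs.max(), ys.max(), xs.max()] + np.array(margin) + 1,
                    volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

# tumor_mask = tumor_net(crop_to_liver(ct_volume, liver_net(ct_volume)))
```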

13 pages, 2033 KiB  
Article
Radiomics and Artificial Intelligence for Outcome Prediction in Multiple Myeloma Patients Undergoing Autologous Transplantation: A Feasibility Study with CT Data
by Daniela Schenone, Alida Dominietto, Cristina Campi, Francesco Frassoni, Michele Cea, Sara Aquino, Emanuele Angelucci, Federica Rossi, Lorenzo Torri, Bianca Bignotti, Alberto Stefano Tagliafico and Michele Piana
Diagnostics 2021, 11(10), 1759; https://doi.org/10.3390/diagnostics11101759 - 24 Sep 2021
Cited by 9 | Viewed by 2125
Abstract
Multiple myeloma is a plasma cell dyscrasia characterized by focal and non-focal bone lesions. Radiomic techniques extract morphological information from computerized tomography images and exploit it for stratification and risk prediction, but few papers so far have applied radiomics to multiple myeloma. In a retrospective study approved by the institutional review board, n = 51 transplanted patients, of whom n = 33 (64%) had focal lesions, were analyzed via an open-source toolbox that extracted 109 radiomics features; a dedicated tool computed 24 additional features describing the whole-skeleton asset. Redundancy reduction was realized via correlation and principal component analysis. Fuzzy clustering (FC) and Hough transform filtering (HTF) allowed for patient stratification, with effectiveness assessed by four skill scores. The highest sensitivity and critical success index (CSI) were obtained when each patient was represented by 17 focal features selected via correlation with the 24 whole-skeleton features. These scores were higher than those associated with a standard cytogenetic classification. The Mann–Whitney U-test showed that three of the 17 imaging descriptors passed the null-hypothesis test. This AI-based interpretation of radiomics features stratified relapsed and non-relapsed MM patients, showing potential for determining prognostic image-based biomarkers in disease follow-up.
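Fuzzy clustering assigns each patient soft memberships rather than hard labels; below is a minimal NumPy fuzzy c-means, offered as a generic sketch rather than the study's exact pipeline (which also applies Hough-transform filtering).

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means for stratifying patients on reduced radiomics
    features. X: (n_patients, n_features). Returns soft memberships (n, c)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))           # random soft labels
    for _ in range(iters):
        W = U ** m                                       # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        inv = 1.0 / d ** (2 / (m - 1))                   # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U
```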

15 pages, 2069 KiB  
Article
Contrast Administration Impacts CT-Based Radiomics of Colorectal Liver Metastases and Non-Tumoral Liver Parenchyma Revealing the “Radiological” Tumour Microenvironment
by Francesco Fiz, Guido Costa, Nicolò Gennaro, Ludovico la Bella, Alexandra Boichuk, Martina Sollini, Letterio S. Politi, Luca Balzarini, Guido Torzilli, Arturo Chiti and Luca Viganò
Diagnostics 2021, 11(7), 1162; https://doi.org/10.3390/diagnostics11071162 - 25 Jun 2021
Cited by 16 | Viewed by 1953
Abstract
The impact of the contrast medium on the radiomic textural features (TFs) extracted from CT scans is unclear. We investigated the modification of the TFs of colorectal liver metastases (CLM), peritumoral tissue, and liver parenchyma. One hundred and sixty-two patients with 409 CLMs undergoing resection (2017–2020) at a single institution were considered. We analyzed the following volumes of interest (VOIs): the CLM (Tumor-VOI); a 5-mm parenchyma rim around the CLM (Margin-VOI); and a 2-mL sample of parenchyma distant from the CLM (Liver-VOI). Forty-five TFs were extracted from each VOI (LIFEx®). Contrast enhancement affected most TFs of the Tumor-VOI (71%) and Margin-VOI (62%), and some of those of the Liver-VOI (44%, p = 0.010). After contrast administration, entropy increased and energy decreased in the Tumor-VOI (0.93 ± 0.10 vs. 0.85 ± 0.14 in pre-contrast; 0.14 ± 0.03 vs. 0.18 ± 0.04, p < 0.001) and Margin-VOI (0.89 ± 0.11 vs. 0.85 ± 0.12; 0.16 ± 0.04 vs. 0.18 ± 0.04, p < 0.001), while remaining stable in the Liver-VOI. Comparing the VOIs, the pre-contrast Tumor- and Margin-VOI had similar entropy and energy (0.85/0.18 for both), while the Liver-VOI had lower values (0.76/0.21, p < 0.001). In the portal phase, a gradient was observed (entropy: Tumor > Margin > Liver; energy: Tumor < Margin < Liver, p < 0.001). Contrast enhancement thus affected the TFs of the CLM while leaving the entropy and energy of the parenchyma unmodified, and the TFs of the peritumoral tissue changed similarly to those of the Tumor-VOI despite its radiological appearance being equal to non-tumoral parenchyma.
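Entropy and energy, as first-order histogram features, can be computed directly from the grey-level distribution of a VOI; the bin count and log base below are common radiomics defaults, assumed rather than taken from the LIFEx configuration used in the study.

```python
import numpy as np

def first_order_entropy_energy(voi, bins=64):
    """Histogram-based entropy and energy (uniformity) of a VOI's voxels."""
    counts, _ = np.histogram(voi.ravel(), bins=bins)
    p = counts / counts.sum()                   # grey-level probabilities
    p_nz = p[p > 0]
    entropy = -(p_nz * np.log10(p_nz)).sum()    # disorder of grey levels
    energy = (p ** 2).sum()                     # uniformity of grey levels
    return entropy, energy
```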

15 pages, 1875 KiB  
Article
Does Anatomical Contextual Information Improve 3D U-Net-Based Brain Tumor Segmentation?
by Iulian Emil Tampu, Neda Haj-Hosseini and Anders Eklund
Diagnostics 2021, 11(7), 1159; https://doi.org/10.3390/diagnostics11071159 - 25 Jun 2021
Cited by 7 | Viewed by 2601
Abstract
Effective, robust, and automatic tools for brain tumor segmentation are needed for the extraction of information useful in treatment planning. Recently, convolutional neural networks have shown remarkable performance in the identification of tumor regions in magnetic resonance (MR) images. Context-aware artificial intelligence is an emerging concept for the development of deep learning applications for computer-aided medical image analysis, and a large portion of current research is devoted to new network architectures that improve segmentation accuracy through context-aware mechanisms. In this work, it is investigated whether the addition of contextual information from the brain anatomy, in the form of white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) masks and probability maps, improves U-Net-based brain tumor segmentation. The BraTS2020 dataset was used to train and test two standard 3D U-Net (nnU-Net) models that, in addition to the conventional MR image modalities, used the anatomical contextual information as extra channels in the form of binary masks (CIM) or probability maps (CIP). For comparison, a baseline model (BLM) using only the conventional MR image modalities was also trained. The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject. Median (mean) Dice scores of 90.2 (81.9), 90.2 (81.9), and 90.0 (82.1) were obtained on the official BraTS2020 validation dataset (125 subjects) for the BLM, CIM, and CIP, respectively. The results show no statistically significant difference in Dice scores between the baseline model and the contextual-information models (p > 0.05), even when comparing performance for high- and low-grade tumors independently. In the few low-grade cases where improvement was seen, the number of false positives was reduced. Moreover, no improvements were found in model training time or domain generalization. Only when compensating for fewer MR modalities per subject did the addition of anatomical contextual information significantly improve (p < 0.05) the segmentation of the whole tumor. In conclusion, there is no overall significant improvement in segmentation performance when using anatomical contextual information as extra channels, whether as binary WM, GM, and CSF masks or as probability maps.
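Adding anatomical context as extra channels is essentially a one-line change at the input layer: the WM/GM/CSF volumes are stacked alongside the MR modalities. A minimal sketch (array names are placeholders for co-registered volumes):

```python
import numpy as np

def build_input(t1, t1ce, t2, flair, wm, gm, csf):
    """Stack anatomical context as extra input channels for a 3D U-Net.

    Each argument is a co-registered 3D volume of shape (D, H, W); wm/gm/csf
    are binary masks (CIM) or tissue probability maps (CIP).
    """
    return np.stack([t1, t1ce, t2, flair, wm, gm, csf], axis=0)  # (7, D, H, W)

# The baseline model (BLM) simply omits the last three channels:
# baseline_input = np.stack([t1, t1ce, t2, flair], axis=0)
```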

Review


27 pages, 1591 KiB  
Review
Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities
by Huanye Li, Chau Hung Lee, David Chia, Zhiping Lin, Weimin Huang and Cher Heng Tan
Diagnostics 2022, 12(2), 289; https://doi.org/10.3390/diagnostics12020289 - 24 Jan 2022
Cited by 31 | Viewed by 6029
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) in the detection of prostate cancer have enabled its integration into clinical routines over the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows the superior soft-tissue contrast resolution of MRI to be combined with real-time anatomical depiction using ultrasound or computed tomography, enabling the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on the clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of these advancements and share insights into new opportunities in this field.
