Artificial Intelligence Applications in Medical Imaging

A special issue of Life (ISSN 2075-1729). This special issue belongs to the section "Radiobiology and Nuclear Medicine".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 23287

Special Issue Editors


Guest Editor: Prof. Dr. Ioana Andreea Gheonea
Department of Radiology and Medical Imaging, University of Medicine and Pharmacy of Craiova, 200533 Craiova, Romania
Interests: medical imaging; artificial intelligence; computer-assisted image analysis; computed tomography; magnetic resonance imaging; cerebral diseases; lung diseases; gastrointestinal diseases

Guest Editor: Prof. Dr. Boris Brkljacic
Department of Diagnostic and Interventional Radiology, School of Medicine, University of Zagreb, 10000 Zagreb, Croatia
Interests: breast imaging; vascular radiology; urogenital radiology

Special Issue Information

Dear Colleagues,

Artificial Intelligence simulates human intelligence processes with computer systems that can surpass human capabilities on specific tasks, and it has the potential to make many aspects of our lives easier, faster, safer, and more accurate. With its help, the entire landscape of medical diagnosis can be transformed, leading to earlier and more accurate disease detection, a wider range of therapeutic options, and increased life expectancy.

This Special Issue aims to gather a collection of state-of-the-art papers targeting novel Artificial Intelligence applications in medical imaging that can lead to a rapid diagnosis and improved diagnostic accuracy.

Prof. Dr. Ioana Andreea Gheonea
Prof. Dr. Boris Brkljacic
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Life is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • computed tomography
  • magnetic resonance imaging
  • breast imaging
  • gastrointestinal imaging
  • prostate imaging
  • musculoskeletal imaging
  • lung imaging
  • brain imaging

Published Papers (8 papers)


Research


16 pages, 3488 KiB  
Article
Brain Tumor Detection and Classification Using Fine-Tuned CNN with ResNet50 and U-Net Model: A Study on TCGA-LGG and TCIA Dataset for MRI Applications
by Abdullah A. Asiri, Ahmad Shaf, Tariq Ali, Muhammad Aamir, Muhammad Irfan, Saeed Alqahtani, Khlood M. Mehdar, Hanan Talal Halawani, Ali H. Alghamdi, Abdullah Fahad A. Alshamrani and Samar M. Alqhtani
Life 2023, 13(7), 1449; https://doi.org/10.3390/life13071449 - 26 Jun 2023
Cited by 9 | Viewed by 9489
Abstract
Brain tumors have become a leading cause of mortality worldwide. Tumor cells grow abnormally and damage the surrounding brain tissue. They can be cancerous or non-cancerous, and their symptoms vary with location, size, and type. Because of their complex and variable structure, detecting and classifying brain tumors accurately at an early stage is challenging. This research proposes an improved fine-tuned CNN model based on ResNet50 and U-Net to address this problem. The model works on the publicly available TCGA-LGG and TCIA datasets, which cover 120 patients. The proposed CNN and fine-tuned ResNet50 model detect and classify tumor versus no-tumor images, while the U-Net model is integrated for correct segmentation of the tumor regions. Performance was evaluated with accuracy, intersection over union (IoU), dice similarity coefficient (DSC), and similarity index (SI). The fine-tuned ResNet50 model achieved IoU: 0.91, DSC: 0.95, SI: 0.95, while U-Net with ResNet50 outperformed all other models, correctly classifying and segmenting the tumor region.
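The overlap metrics reported above (IoU and DSC) can be illustrated with a small sketch; the function name and the toy masks are our own, not from the paper:

```python
import numpy as np

def iou_and_dice(pred, truth):
    """Intersection over union and Dice similarity coefficient
    for two binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    total = pred.sum() + truth.sum()
    dice = 2 * inter / total if total else 1.0
    return iou, dice

# Toy 2x3 masks: 2 overlapping pixels, 4 pixels in the union
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
iou, dice = iou_and_dice(pred, truth)
```

Note that DSC is always at least as large as IoU for the same pair of masks, which is consistent with the 0.91 vs. 0.95 figures quoted above.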
(This article belongs to the Special Issue Artificial Intelligence Applications in Medical Imaging)

13 pages, 2895 KiB  
Article
Bi-DCNet: Bilateral Network with Dilated Convolutions for Left Ventricle Segmentation
by Zi Ye, Yogan Jaya Kumar, Fengyan Song, Guanxi Li and Suyu Zhang
Life 2023, 13(4), 1040; https://doi.org/10.3390/life13041040 - 18 Apr 2023
Viewed by 1038
Abstract
Left ventricular segmentation is a vital procedure for assessing cardiac systolic and diastolic function, and echocardiography is an indispensable diagnostic technique for this assessment. However, manually labeling the left ventricular region on echocardiography images is time-consuming and subject to observer bias. Recent research has demonstrated that deep learning can perform the segmentation automatically, but existing approaches still fail to exploit all the semantic information available during segmentation. This study proposes a deep neural network architecture based on BiSeNet, named Bi-DCNet. The model comprises a spatial path, responsible for acquiring low-level spatial features, and a context path, responsible for exploiting high-level contextual semantic features. It also integrates dilated convolutions into feature extraction to enlarge the receptive field and capture multi-scale information. The EchoNet-Dynamic dataset was used to assess the proposed model; this is the first bilateral-structured network applied to this large clinical video dataset for left ventricle segmentation. The method obtained a DSC of 0.9228 and an IoU of 0.8576, demonstrating the effectiveness of the structure.
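The receptive-field enlargement from dilated convolutions mentioned above can be sketched with the standard recurrence; the helper and the example layer stack are illustrative, not code from the paper:

```python
def receptive_field(layers):
    """Receptive field of a stack of convolution layers.
    Each layer is a (kernel_size, stride, dilation) tuple; each layer
    adds (kernel_size - 1) * dilation * cumulative_stride pixels."""
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Three 3x3 convs, stride 1, dilations 1/2/4: RF = 1 + 2 + 4 + 8 = 15,
# versus only 7 for the same stack with no dilation.
rf_dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])
rf_plain = receptive_field([(3, 1, 1)] * 3)
```

This is the motivation stated in the abstract: dilation widens the receptive field without extra parameters or downsampling.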

14 pages, 4604 KiB  
Article
Tumor Area Highlighting Using T2WI, ADC Map, and DWI Sequence Fusion on bpMRI Images for Better Prostate Cancer Diagnosis
by Rossy Vlăduț Teică, Mircea-Sebastian Șerbănescu, Lucian Mihai Florescu and Ioana Andreea Gheonea
Life 2023, 13(4), 910; https://doi.org/10.3390/life13040910 - 30 Mar 2023
Cited by 1 | Viewed by 1720
Abstract
Prostate cancer is the second most common cancer in men worldwide. Magnetic resonance imaging results are used to decide the indication, type, and location of a prostate biopsy, and they contribute information about the characterization and aggressiveness of detected cancers, including tumor progression over time. This study proposes a method to highlight prostate lesions with a high or very high risk of being malignant by overlaying the T2-weighted, apparent diffusion coefficient map, and diffusion-weighted sequences, using 204 pairs of slices from 80 examined patients. The fused output was reviewed by two radiologists, who segmented suspicious lesions and labeled them according to the Prostate Imaging Reporting and Data System (PI-RADS) score. Both radiologists found the algorithm useful as a "first opinion" and rated the quality of the highlighting at 9.2 and 9.3 on average, with an agreement of 0.96.
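The sequence-fusion idea can be sketched as a simple channel overlay, assuming co-registered slices; the function name and min-max normalization are an illustrative simplification, not the authors' exact algorithm:

```python
import numpy as np

def fuse_bpmri(t2w, adc, dwi):
    """Fuse three co-registered bpMRI slices into one RGB image by
    normalizing each sequence to [0, 1] and assigning it a channel."""
    def norm(x):
        x = x.astype(np.float64)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng else np.zeros_like(x)
    return np.stack([norm(t2w), norm(adc), norm(dwi)], axis=-1)

# Toy 2x2 slices standing in for real sequences
t2w = np.arange(4).reshape(2, 2)
adc = np.ones((2, 2))          # constant slice normalizes to zeros
dwi = np.arange(4)[::-1].reshape(2, 2)
fused = fuse_bpmri(t2w, adc, dwi)
```

Regions that are bright on DWI and dark on ADC (the diffusion-restriction pattern of suspicious lesions) then stand out with a distinct color in the fused image.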

14 pages, 3051 KiB  
Article
The Utility of Multimodal Imaging and Artificial Intelligence Algorithms for Overlying Two Volumes in the Decision Chain for the Treatment of Complex Pathologies in Interventional Neuroradiology—A Case Series Study
by Bogdan Valeriu Popa, Aurelian Costin Minoiu, Catalin Juratu, Cristina Fulgoi, Dragos Trifan, Adrian Tutelca, Dana Crisinescu, Dan Adrian Popica, Cristian Mihalea and Horia Ples
Life 2023, 13(3), 784; https://doi.org/10.3390/life13030784 - 14 Mar 2023
Viewed by 1345
Abstract
3D rotational angiography (3DRA) is increasingly used in routine neuroendovascular procedures, in particular where the analysis of two overlaid volume datasets helps plan the treatment strategy or confirm the optimal apposition of the intravascular devices used. The aim of this study is to identify and describe the decision algorithm in which the overlay of 3DRA volumes, high-resolution contrast-enhanced flat-panel-detector CT adapted for intravascular devices (VasoCT/DynaCT), non-enhanced flat-detector C-arm volume acquisition integrated with the angiography equipment (XperCT/DynaCT), and isovolumetric MRI volumes were used in treatments performed in a series of 29 patients. Two superposed 3DRA volumes were used in the treatment of aneurysms located at the junction of two vascular territories and of arteriovenous malformations with compartments fed from different vascular territories. Superposing a preoperatively acquired 3DRA volume on a postoperatively acquired VasoCT volume provides accurate information about the apposition of the neuroendovascular endoprostheses used in aneurysm treatment. The automatic overlay generated by the 3D workstation is particularly useful, but in about 50% of cases it requires manual, operator-dependent correction, which demands a certain level of experience. In our experience, multimodal imaging brings an important benefit both to the treatment decision algorithm and to the assessment of neuroendovascular treatment efficacy.

19 pages, 5885 KiB  
Article
Deep Learning Algorithms in the Automatic Segmentation of Liver Lesions in Ultrasound Investigations
by Mădălin Mămuleanu, Cristiana Marinela Urhuț, Larisa Daniela Săndulescu, Constantin Kamal, Ana-Maria Pătrașcu, Alin Gabriel Ionescu, Mircea-Sebastian Șerbănescu and Costin Teodor Streba
Life 2022, 12(11), 1877; https://doi.org/10.3390/life12111877 - 14 Nov 2022
Cited by 3 | Viewed by 1697
Abstract
Background: Ultrasound is one of the most widely used medical imaging investigations worldwide. It is non-invasive and effective for assessing liver tumors and other parenchymal changes. Methods: The aim of the study was to build a deep learning model for image segmentation in ultrasound video investigations. The dataset was provided by the University of Medicine and Pharmacy of Craiova, Romania and contained 50 video examinations from 49 patients. The mean age of the patients was 69.57 years. Regarding underlying liver disease, 36.73% had liver cirrhosis and 16.32% had chronic viral hepatitis (five patients with chronic hepatitis C and three with chronic hepatitis B). Frames were extracted and cropped from each examination, and an expert gastroenterologist labelled the lesions in each frame; the labels were then exported as binary images. A deep learning segmentation model (U-Net) was trained with focal Tversky loss as the loss function, yielding two models with two different parameter sets for the loss. The performance metrics observed were intersection over union (IoU), recall, and precision. Results: On the IoU metric, the first segmentation model performed better than the second: 0.8392 vs. 0.7990. The inference time for both models was between 32.15 and 77.59 milliseconds. Conclusions: Two segmentation models were obtained. They performed similarly during training and validation, but one was trained to focus on hard-to-predict labels. The proposed segmentation models can represent a first step toward automatically extracting time-intensity curves from CEUS examinations.
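A minimal numpy sketch of the focal Tversky loss used above; this is one common parameterization, and the weights and exponent here are illustrative defaults, not necessarily the parameter sets used in the study:

```python
import numpy as np

def focal_tversky_loss(pred, truth, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a predicted probability mask against a
    binary ground-truth mask. alpha weights false negatives and beta
    false positives (alpha > beta penalizes missed lesion pixels);
    gamma < 1 focuses training on hard, poorly-predicted examples."""
    pred = pred.ravel().astype(np.float64)
    truth = truth.ravel().astype(np.float64)
    tp = (pred * truth).sum()
    fn = ((1 - pred) * truth).sum()
    fp = (pred * (1 - truth)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

perfect = focal_tversky_loss(np.array([1., 0., 1.]), np.array([1., 0., 1.]))
missed = focal_tversky_loss(np.array([0., 0., 0.]), np.array([1., 0., 1.]))
```

Tuning alpha, beta, and gamma is what distinguishes a model that "focuses on hard-to-predict labels", as described for the second model in the abstract.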

20 pages, 18251 KiB  
Article
Combined Deep Learning Techniques for Mandibular Fracture Diagnosis Assistance
by Dong-Min Son, Yeong-Ah Yoon, Hyuk-Ju Kwon and Sung-Hak Lee
Life 2022, 12(11), 1711; https://doi.org/10.3390/life12111711 - 26 Oct 2022
Cited by 4 | Viewed by 2646
Abstract
Mandibular fractures are the most common fractures in dentistry. Since diagnosing a mandibular fracture is difficult from panoramic radiographic images alone, most doctors use cone beam computed tomography (CBCT) to identify the fracture location. In this study, a combined deep learning technique using YOLO and U-Net was applied as an auxiliary diagnostic method to locate mandibular fractures on panoramic images without CBCT. In a previous study, mandibular fracture diagnosis was performed with YOLO alone; the YOLOv4-based diagnosis module reached a precision score of approximately 97%, indicating almost no misdiagnosis. However, fractures in the symphysis, body, angle, and ramus tend to be distributed in the middle of the mandible, and owing to irregular fracture types and overlapping location information, the recall score was only approximately 79%, leaving many fractures undiagnosed; in many cases, even fractures clearly visible to the human eye were missed. To overcome these shortcomings, the number of undiagnosed fractures can be reduced by combining the U-Net and YOLOv4 learning modules. U-Net performs semantic segmentation and is therefore advantageous for fractures spread over a wide area, so the undiagnosed cases in the middle of the mandible, where YOLO was weak, were partly recovered by the U-Net module. The precision score of the combined module was 95%, similar to the previous method, while the recall score improved to 87% as the number of undiagnosed cases was reduced. This study improves the performance of a deep learning method for mandibular diagnosis, and it is anticipated that, as an auxiliary diagnostic tool, it will assist dentists in making diagnoses.
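The recall gain described above comes from taking the union of the two modules' detections; a toy sketch with hypothetical helper names and illustrative counts, not the paper's evaluation code:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and
    false-negative detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

def combine_detections(yolo_regions, unet_regions):
    """A fracture region counts as detected if either module flags it:
    the union trades a little precision for fewer undiagnosed cases."""
    return set(yolo_regions) | set(unet_regions)

# U-Net recovers "symphysis", which YOLO alone missed
detected = combine_detections({"angle", "body"}, {"body", "symphysis"})
precision, recall = precision_recall(tp=9, fp=1, fn=3)
```

This mirrors the reported trade-off: precision dipped slightly (97% to 95%) while recall rose from 79% to 87%.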

16 pages, 1258 KiB  
Article
Federated Learning Approach with Pre-Trained Deep Learning Models for COVID-19 Detection from Unsegmented CT Images
by Lucian Mihai Florescu, Costin Teodor Streba, Mircea-Sebastian Şerbănescu, Mădălin Mămuleanu, Dan Nicolae Florescu, Rossy Vlăduţ Teică, Raluca Elena Nica and Ioana Andreea Gheonea
Life 2022, 12(7), 958; https://doi.org/10.3390/life12070958 - 26 Jun 2022
Cited by 18 | Viewed by 2300
Abstract
(1) Background: Coronavirus disease 2019 (COVID-19) is an infectious disease caused by SARS-CoV-2. Reverse transcription polymerase chain reaction (RT-PCR) remains the current gold standard for detecting SARS-CoV-2 in nasopharyngeal swabs. In Romania, the first patient to have contracted COVID-19 was officially reported on 26 February 2020. (2) Methods: This study proposes a federated learning approach with pre-trained deep learning models for COVID-19 detection. Three clients were deployed locally, each with its own dataset; their goal was to collaborate in order to obtain a global model without sharing samples from their datasets. (3) Results: The algorithm we developed was connected to our internal picture archiving and communication system and, when run retrospectively, it found chest CT changes suggestive of COVID-19 in a patient investigated in our medical imaging department on 28 January 2020. (4) Conclusions: Based on our results, we recommend using automated AI-assisted software to detect COVID-19 from lung imaging changes as an adjuvant diagnostic method to the current gold standard (RT-PCR), in order to greatly enhance the management of these patients and limit the spread of the disease, both to the general population and to healthcare professionals.
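The client collaboration described here follows the federated averaging pattern: clients train locally and only model weights travel to the aggregator. A minimal FedAvg-style sketch, weighting each client by its local dataset size; the names and numbers are illustrative, not the study's implementation:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model weights into a global model.
    Each element of client_weights is a list of numpy arrays (one per
    layer); clients contribute proportionally to their dataset size,
    and no training samples ever leave the clients."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Two toy clients with a single one-layer "model" each
client_a = [np.array([0.0, 2.0])]
client_b = [np.array([2.0, 4.0])]
global_model = fed_avg([client_a, client_b], client_sizes=[1, 3])
```

With three hospital clients, as in the study, the same scheme lets a shared COVID-19 detector improve without any CT images being exchanged.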

Review


12 pages, 805 KiB  
Review
One Step Forward—The Current Role of Artificial Intelligence in Glioblastoma Imaging
by Costin Chirica, Danisia Haba, Elena Cojocaru, Andreea Isabela Mazga, Lucian Eva, Bogdan Ionut Dobrovat, Sabina Ioana Chirica, Ioana Stirban, Andreea Rotundu and Maria Magdalena Leon
Life 2023, 13(7), 1561; https://doi.org/10.3390/life13071561 - 14 Jul 2023
Cited by 2 | Viewed by 1736
Abstract
Artificial intelligence (AI) is rapidly being integrated into diagnostic methods across many branches of medicine. Significant progress has been made in tumor assessment using AI algorithms, and research is underway into how image analysis can provide information with diagnostic, prognostic, and treatment impact. Glioblastoma (GB) remains the most common primary malignant brain tumor, with a median survival of 15 months. This paper presents literature data on GB imaging and the contribution of AI to the characterization and tracking of GB, including recurrence. From an imaging point of view, the differential diagnosis of these tumors can be problematic, and AI algorithms can help. The integration of clinical, radiomic, and molecular markers via AI holds great potential for enhancing patient outcomes by distinguishing brain tumors from mimicking lesions, classifying and grading tumors, and evaluating them before and after treatment. AI can also aid in differentiating tumor recurrence from post-treatment alterations, which can be challenging with conventional imaging. Overall, the integration of AI into GB imaging has the potential to significantly improve patient outcomes through more accurate diagnosis, more precise treatment planning, and better monitoring of treatment response.