Artificial Intelligence in Biomedical Imaging

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (25 January 2024) | Viewed by 37699

Special Issue Editors

Guest Editor
Dr. Alan Wang
Auckland Bioengineering Institute, Faculty of Medical and Health Sciences, University of Auckland, Grafton, Auckland 1010, New Zealand
Interests: bioengineering; machine learning; artificial intelligence; biomedical imaging; intelligent and integrative medicine; computational radiology

Guest Editor
Dr. Sibusiso Mdletshe
Auckland Bioengineering Institute, Faculty of Medical and Health Sciences, University of Auckland, Grafton, Auckland 1010, New Zealand
Interests: medical imaging; collaborative learning; applied artificial intelligence; design science research; evidence-based practice

Guest Editor
Dr. Brady Williamson
Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
Interests: medical imaging; MRI; statistics

Guest Editor
Prof. Dr. Guizhi Xu
Key Laboratory of Electromagnetic Field and Electrical Apparatus Reliability, Hebei University of Technology, Tianjin 300130, China
Interests: medical devices; biomedical engineering; medical imaging

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is a transformative technology that is increasingly used in medicine, and biomedical imaging is among its most promising clinical applications. AI can help clinicians improve the accuracy and efficiency of diagnosing a wide range of illnesses, and AI solutions in biomedical imaging are expected to improve workflow efficiency without loss of accuracy. In cancer care, for example, AI-assisted imaging evaluation may enable earlier diagnosis and more precise tumor staging, while intelligent imaging-guided intervention may improve treatment precision. Despite rapid development and great potential, however, many difficulties and challenges still slow the implementation of AI in biomedical imaging. How to improve AI's credibility and increase the interpretability and transparency of AI models are questions that still require in-depth research. AI solutions can support radiologists and reduce their workload by increasing efficiency, accuracy, and standardization, ultimately improving patients' lives; nevertheless, the clinical implementation of AI in biomedical imaging remains limited in scale. The challenges of implementing AI in biomedical imaging therefore need to be addressed through further technical development.

We are pleased to present a new Special Issue on “Artificial Intelligence in Biomedical Imaging”. This Special Issue aims to highlight recent developments in AI-driven biomedical imaging technologies, including novel machine learning models, image analysis algorithms, imaging devices and sequences, optimization, effectiveness prediction, interpretability, and translatability. We are looking for articles on AI-driven biomedical imaging that can support better medical diagnosis and treatment for individual patients within healthcare systems.

Dr. Alan Wang
Dr. Sibusiso Mdletshe
Dr. Brady Williamson
Prof. Dr. Guizhi Xu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • biomedical imaging
  • medical image analysis
  • imaging device
  • imaging sequence
  • imaging-guided intervention
  • interpretability
  • translatability

Published Papers (24 papers)

Research

23 pages, 15127 KiB  
Article
Leveraging Multi-Annotator Label Uncertainties as Privileged Information for Acute Respiratory Distress Syndrome Detection in Chest X-ray Images
by Zijun Gao, Emily Wittrup and Kayvan Najarian
Bioengineering 2024, 11(2), 133; https://doi.org/10.3390/bioengineering11020133 - 29 Jan 2024
Cited by 1 | Viewed by 978
Abstract
Acute Respiratory Distress Syndrome (ARDS) is a life-threatening lung injury for which early diagnosis and evidence-based treatment can improve patient outcomes. Chest X-rays (CXRs) play a crucial role in the identification of ARDS; however, their interpretation can be difficult due to non-specific radiological features, uncertainty in disease staging, and inter-rater variability among clinical experts, leading to prominent label noise issues. To address these challenges, this study proposes a novel approach that leverages label uncertainty from multiple annotators to enhance ARDS detection in CXR images. Label uncertainty information is encoded and supplied to the model as privileged information, a form of information exclusively available during the training stage and not during inference. By incorporating the Transfer and Marginalized (TRAM) network and effective knowledge transfer mechanisms, the detection model achieved a mean testing AUROC of 0.850, an AUPRC of 0.868, and an F1 score of 0.797. After removing equivocal testing cases, the model attained an AUROC of 0.973, an AUPRC of 0.971, and an F1 score of 0.921. As a new approach to addressing label noise in medical image analysis, the proposed model outperformed the original TRAM, Confusion Estimation, and mean-aggregated label training. The overall findings highlight the effectiveness of the proposed methods in addressing label noise in CXRs for ARDS detection, with potential for use in other medical imaging domains that encounter similar challenges.
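
The core idea of privileged information can be illustrated with a minimal, hypothetical sketch (not the paper's actual TRAM implementation): an auxiliary uncertainty input is used only at training time and is simply dropped at inference. All module names and dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PrivilegedClassifier(nn.Module):
    """Toy learning-using-privileged-information (LUPI) classifier.

    `x` is an image feature vector; `u` encodes annotator label
    uncertainty and is only available during training. At inference
    the privileged branch is skipped (one simple marginalization choice).
    """
    def __init__(self, feat_dim=512, priv_dim=4, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.priv_encoder = nn.Sequential(nn.Linear(priv_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 2)          # ARDS vs. no ARDS

    def forward(self, x, u=None):
        h = self.backbone(x)
        if u is not None:                          # training: add privileged signal
            h = h + self.priv_encoder(u)
        return self.head(h)

model = PrivilegedClassifier()
x = torch.randn(8, 512)                            # dummy pretrained CXR features
u = torch.rand(8, 4)                               # e.g. per-annotator disagreement stats
loss = nn.CrossEntropyLoss()(model(x, u), torch.randint(0, 2, (8,)))
logits_test = model(torch.randn(8, 512))           # inference without privileged input
```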

14 pages, 2434 KiB  
Article
Toward Smart, Automated Junctional Tourniquets—AI Models to Interpret Vessel Occlusion at Physiological Pressure Points
by Guy Avital, Sofia I. Hernandez Torres, Zechariah J. Knowlton, Carlos Bedolla, Jose Salinas and Eric J. Snider
Bioengineering 2024, 11(2), 109; https://doi.org/10.3390/bioengineering11020109 - 24 Jan 2024
Viewed by 909
Abstract
Hemorrhage is the leading cause of preventable death in both civilian and military medicine. Junctional hemorrhages are especially difficult to manage since traditional tourniquet placement is often not possible. Ultrasound can be used to visualize and guide the caretaker to apply pressure at physiological pressure points to stop hemorrhage. However, this process is technically challenging, requiring the vessel to be properly positioned over rigid bony surfaces and sufficient pressure to be applied to maintain proper occlusion. As a first step toward automating this life-saving intervention, we demonstrate an artificial intelligence algorithm that classifies a vessel as patent or occluded, which can guide a user to apply the appropriate pressure required to stop flow. Neural network models were trained using images captured from a custom tissue-mimicking phantom and an ex vivo swine model of the inguinal region, as pressure was applied using an ultrasound probe with and without color Doppler overlays. Using these images, we developed an image classification algorithm suitable for determining patency or occlusion in an ultrasound image containing a color Doppler overlay. Separate AI models for the two test platforms detected occlusion status in test-image sets with more than 93% accuracy. In conclusion, this methodology can be utilized for guiding and monitoring proper vessel occlusion, which, when combined with automated actuation and other AI models, can allow for automated junctional tourniquet application.

24 pages, 6029 KiB  
Article
Multi-Instance Classification of Breast Tumor Ultrasound Images Using Convolutional Neural Networks and Transfer Learning
by Alexandru Ciobotaru, Maria Aurora Bota, Dan Ioan Goța and Liviu Cristian Miclea
Bioengineering 2023, 10(12), 1419; https://doi.org/10.3390/bioengineering10121419 - 13 Dec 2023
Viewed by 1086
Abstract
Background: Breast cancer is arguably one of the leading causes of death among women around the world. The automation of the early detection process and classification of breast masses has been a prominent focus for researchers in the past decade. The utilization of ultrasound imaging is prevalent in the diagnostic evaluation of breast cancer, with its predictive accuracy being dependent on the expertise of the specialist. Therefore, there is an urgent need for fast and reliable ultrasound image detection algorithms. Methods: This paper compares the efficiency of six state-of-the-art, fine-tuned deep learning models that classify breast tissue from ultrasound images into three classes (benign, malignant, and normal) using transfer learning. Additionally, the architecture of a custom model is introduced and trained from the ground up on a public dataset containing 780 images, which was further augmented to 3900 and 7800 images, respectively. The custom model was further validated on another private dataset containing 163 ultrasound images divided into two classes: benign and malignant. The pre-trained architectures used in this work are ResNet-50, Inception-V3, Inception-ResNet-V2, MobileNet-V2, VGG-16, and DenseNet-121. The performance evaluation metrics used in this study are precision, recall, F1-score, and specificity. Results: The experimental results show that the models trained on the augmented dataset with 7800 images obtained the best performance on the test set, with accuracies of 94.95 ± 0.64%, 97.69 ± 0.52%, 97.69 ± 0.13%, 97.77 ± 0.29%, 95.07 ± 0.41%, 98.11 ± 0.10%, and 96.75 ± 0.26% for ResNet-50, MobileNet-V2, Inception-ResNet-V2, VGG-16, Inception-V3, DenseNet-121, and our model, respectively. Conclusion: Our proposed model obtains competitive results, outperforming some state-of-the-art models in terms of accuracy and training time.
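
As a hedged illustration of the transfer-learning setup described above (not the authors' exact code), the sketch below fine-tunes an ImageNet-pretrained ResNet-50 for a three-class ultrasound problem; the frozen-backbone choice and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 (recent torchvision weights API)
# and replace its head for three classes: benign / malignant / normal.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 3)     # new trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)              # dummy batch of ultrasound frames
labels = torch.tensor([0, 1, 2, 1])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Full fine-tuning (unfreezing the backbone with a small learning rate) is a common alternative when the target dataset is large enough.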

17 pages, 6481 KiB  
Article
Brain-Inspired Spatio-Temporal Associative Memories for Neuroimaging Data Classification: EEG and fMRI
by Nikola K. Kasabov, Helena Bahrami, Maryam Doborjeh and Alan Wang
Bioengineering 2023, 10(12), 1341; https://doi.org/10.3390/bioengineering10121341 - 21 Nov 2023
Viewed by 978
Abstract
Humans learn from many information sources to make decisions. Once this information is learned in the brain, spatio-temporal associations are made, connecting all these sources (variables) in space and time, represented as brain connectivity. In reality, to make a decision, we usually have only part of the information, either as a limited number of variables, limited time to make the decision, or both. The brain functions as a spatio-temporal associative memory (STAM). Inspired by this ability of the human brain, a brain-inspired STAM was proposed earlier that utilized the NeuCube brain-inspired spiking neural network framework. Here, we applied the STAM framework to neuroimaging data, in the cases of EEG and fMRI, resulting in STAM-EEG and STAM-fMRI. This paper showed that once a NeuCube STAM classification model was trained on complete spatio-temporal EEG or fMRI data, it could be recalled using only part of the time series and/or only part of the variables used. We evaluated both temporal and spatial association and generalization accuracy accordingly. This was a pilot study that opens the field for the development of classification systems on other neuroimaging data, such as longitudinal MRI data, trained on complete data but recalled on partial data. Future research includes STAMs that work on data collected across different settings, in different labs and clinics, that may vary in terms of the variables and time of data collection, along with other parameters. The proposed STAM will be further investigated for the early diagnosis and prognosis of brain conditions and for diagnostic/prognostic marker discovery.

12 pages, 2438 KiB  
Article
Machine Learning Approaches to Differentiate Sellar-Suprasellar Cystic Lesions on Magnetic Resonance Imaging
by Chendan Jiang, Wentai Zhang, He Wang, Yixi Jiao, Yi Fang, Feng Feng, Ming Feng and Renzhi Wang
Bioengineering 2023, 10(11), 1295; https://doi.org/10.3390/bioengineering10111295 - 8 Nov 2023
Cited by 1 | Viewed by 1103
Abstract
Cystic lesions are common lesions of the sellar region with various pathological types, including pituitary apoplexy, Rathke's cleft cyst, cystic craniopharyngioma, etc. The suggested surgical approach is not the same for different cystic lesions, yet cystic lesions of different pathological types are hard to differentiate on MRI by visual inspection. This study aimed to distinguish different pathological types of cystic lesions in the sellar region using preoperative magnetic resonance imaging (MRI). Radiomics and deep learning approaches were used to extract features from gadolinium-enhanced MRIs of 399 patients enrolled at Peking Union Medical College Hospital over the past 15 years. Paired imaging differentiations were performed on four subtypes: pituitary apoplexy, cystic pituitary adenoma (cysticA), Rathke's cleft cyst, and cystic craniopharyngioma. The model achieved an average AUC value of 0.7685. The model based on a support vector machine could distinguish cystic craniopharyngioma from Rathke's cleft cyst with the highest AUC value of 0.8584. However, distinguishing cystic pituitary adenoma from pituitary apoplexy proved difficult and almost unclassifiable with any algorithm on any feature set, with an AUC value of only 0.6641. Finally, the proposed methods achieved an average accuracy of 0.7532, outperforming the traditional clinical knowledge-based method by about 8%. This study thus fills a gap in the existing literature and provides a non-invasive method for accurately differentiating between these lesions, which could improve preoperative diagnostic accuracy and help in planning surgery in clinical work.
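
A minimal sketch of the kind of pairwise radiomics classification reported above, assuming the radiomic features have already been extracted into a feature matrix; the data here are synthetic placeholders, not the study's.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: radiomic feature matrix for one lesion pair, e.g. cystic
# craniopharyngioma (1) vs. Rathke's cleft cyst (0); synthetic here.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))
y = rng.integers(0, 2, size=120)

# Standardize features, then fit an RBF support vector machine and
# report cross-validated AUC, mirroring the pairwise evaluation above.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")
```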

22 pages, 1342 KiB  
Article
Machine and Deep Learning Approaches Applied to Classify Gougerot–Sjögren Syndrome and Jointly Segment Salivary Glands
by Aurélien Olivier, Clément Hoffmann, Sandrine Jousse-Joulin, Ali Mansour, Luc Bressollette and Benoit Clement
Bioengineering 2023, 10(11), 1283; https://doi.org/10.3390/bioengineering10111283 - 3 Nov 2023
Viewed by 661
Abstract
Ultrasound imaging (US) is a promising tool for helping physicians and experts diagnose Gougerot–Sjögren syndrome (GSS). Our project focuses on the automatic detection of the presence of GSS using US. Ultrasound imaging suffers from a weak signal-to-noise ratio, so any classification or segmentation task based on these images is a difficult challenge. To address these two tasks, we evaluate different approaches: classification using a machine learning method with feature extraction based on a set of measurements following the radiomics guidance, and a deep-learning-based classification. We propose an innovative method to enhance the training of a deep neural network in two phases: multiple supervision using joint classification and segmentation, with segmentation implemented as pretraining. We highlight the fact that our learning methods provide segmentation results similar to those produced by human experts. We obtain proficient segmentation results for salivary glands and promising detection results for Gougerot–Sjögren syndrome, observing maximal accuracy with the model trained in two phases. Our experimental results corroborate the fact that deep learning and radiomics combined with ultrasound imaging can be a promising tool for the above-mentioned problems.

13 pages, 717 KiB  
Article
Med-cDiff: Conditional Medical Image Generation with Diffusion Models
by Alex Ling Yu Hung, Kai Zhao, Haoxin Zheng, Ran Yan, Steven S. Raman, Demetri Terzopoulos and Kyunghyun Sung
Bioengineering 2023, 10(11), 1258; https://doi.org/10.3390/bioengineering10111258 - 28 Oct 2023
Cited by 3 | Viewed by 3226
Abstract
Conditional image generation plays a vital role in medical image analysis, as it is effective in tasks such as super-resolution, denoising, and inpainting, among others. Diffusion models have been shown to perform at a state-of-the-art level in natural image generation, but they have not been thoroughly studied in medical image generation with specific conditions. Moreover, current medical image generation models have their own problems that limit their usage in various medical image generation tasks. In this paper, we introduce the use of conditional Denoising Diffusion Probabilistic Models (cDDPMs) for medical image generation, which achieve state-of-the-art performance on several medical image generation tasks.
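
The essence of conditional DDPM training can be sketched as follows: corrupt an image with noise at a random timestep and train a network, given the conditioning image, to predict that noise. This is a generic, hedged sketch; the tiny stand-in network and shapes are assumptions, not the Med-cDiff architecture.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # standard linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)      # cumulative signal retention

class TinyEpsNet(torch.nn.Module):
    """Stand-in for the denoising U-Net; the condition image is simply
    concatenated as an extra input channel (timestep embedding omitted)."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(2, 1, 3, padding=1)
    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond], dim=1))

def ddpm_loss(eps_model, x0, cond):
    """One conditional DDPM training step: noise x0, predict the noise."""
    t = torch.randint(0, T, (x0.size(0),))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps  # forward diffusion
    return F.mse_loss(eps_model(x_t, t, cond), eps)

x0 = torch.randn(4, 1, 32, 32)                     # target patch (e.g. high-res)
cond = torch.randn(4, 1, 32, 32)                   # conditioning patch (e.g. low-res)
loss = ddpm_loss(TinyEpsNet(), x0, cond)
```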

11 pages, 2341 KiB  
Article
An Integrated Multi-Channel Deep Neural Network for Mesial Temporal Lobe Epilepsy Identification Using Multi-Modal Medical Data
by Ruowei Qu, Xuan Ji, Shifeng Wang, Zhaonan Wang, Le Wang, Xinsheng Yang, Shaoya Yin, Junhua Gu, Alan Wang and Guizhi Xu
Bioengineering 2023, 10(10), 1234; https://doi.org/10.3390/bioengineering10101234 - 21 Oct 2023
Viewed by 1244
Abstract
Epilepsy is a chronic brain disease with recurrent seizures. Mesial temporal lobe epilepsy (MTLE) is the most common pathological cause of epilepsy. With the development of computer-aided diagnosis technology, many auxiliary diagnostic approaches based on deep learning algorithms have emerged. However, the causes of epilepsy are complex, and accurately distinguishing different types of epilepsy with a single mode of examination is challenging. In this study, our aim is to assess the combination of multi-modal epilepsy medical information from structural MRI, PET images, typical clinical symptoms, and personal demographic and cognitive data (PDC) by adopting a multi-channel 3D deep convolutional neural network and pre-training on PET images. The results show better diagnostic accuracy than using any single type of medical data alone. These findings reveal the potential of deep neural networks in multi-modal medical data fusion.
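
A hedged sketch of late-fusion multi-modal classification in the spirit described above: two small 3D-CNN branches (MRI, PET) and a tabular PDC vector concatenated before the classifier. Branch depths, dimensions, and the fusion scheme are illustrative assumptions, not the paper's exact network.

```python
import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    """Two 3D-CNN channels (MRI, PET) fused with a tabular PDC vector."""
    def __init__(self, pdc_dim=10):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten())   # -> (B, 16)
        self.mri, self.pet = branch(), branch()
        self.head = nn.Sequential(nn.Linear(16 + 16 + pdc_dim, 32),
                                  nn.ReLU(), nn.Linear(32, 2))

    def forward(self, mri, pet, pdc):
        # Concatenate per-modality embeddings with the tabular features.
        fused = torch.cat([self.mri(mri), self.pet(pet), pdc], dim=1)
        return self.head(fused)

logits = MultiModalNet()(torch.randn(2, 1, 64, 64, 64),   # dummy MRI volumes
                         torch.randn(2, 1, 64, 64, 64),   # dummy PET volumes
                         torch.randn(2, 10))              # dummy PDC vectors
```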

13 pages, 3557 KiB  
Article
Rapid Segmentation and Diagnosis of Breast Tumor Ultrasound Images at the Sonographer Level Using Deep Learning
by Lei Yang, Baichuan Zhang, Fei Ren, Jianwen Gu, Jiao Gao, Jihua Wu, Dan Li, Huaping Jia, Guangling Li, Jing Zong, Jing Zhang, Xiaoman Yang, Xueyuan Zhang, Baolin Du, Xiaowen Wang and Na Li
Bioengineering 2023, 10(10), 1220; https://doi.org/10.3390/bioengineering10101220 - 19 Oct 2023
Cited by 2 | Viewed by 1792
Abstract
Background: Breast cancer is one of the most common malignant tumors in women. Noninvasive ultrasound examination can identify mammary-gland-related diseases and performs well even in dense breast tissue, making it a preferred method for breast cancer screening and one of significant clinical value. However, the diagnosis of breast nodules or masses via ultrasound is performed by a doctor in real time, which is time-consuming and subjective. Junior doctors are prone to missed diagnoses, especially in remote areas or community hospitals where medical resources are limited, which poses great risks to patients' health. There is therefore an urgent need for fast and accurate ultrasound image analysis algorithms to assist diagnosis. Methods: We propose a breast-ultrasound-image-based assisted-diagnosis method built on convolutional neural networks that can effectively improve diagnostic speed and the early screening rate of breast cancer. Our method consists of two stages: tumor recognition and tumor classification. (1) Attention-based semantic segmentation is used to identify the location and size of the tumor; (2) the identified nodules are cropped to construct a training dataset, on which a convolutional neural network is trained to classify breast nodules as benign or malignant. We collected 2057 images from 1131 patients as the training and validation dataset, and 100 images from patients with confirmed pathological diagnoses were used as the test dataset. Results: The experimental results on this dataset show that the MIoU of tumor location recognition is 0.89 and the average accuracy of benign/malignant diagnosis is 97%. The diagnostic performance of the developed system is largely consistent with that of senior doctors and superior to that of junior doctors. In addition, the system can provide doctors with a preliminary diagnosis, enabling faster assessment. Conclusion: Our proposed method can effectively improve diagnostic speed and the early screening rate of breast cancer, providing a valuable aid for the ultrasonic diagnosis of breast cancer.

19 pages, 7067 KiB  
Article
Conditional Variational Autoencoder for Functional Connectivity Analysis of Autism Spectrum Disorder Functional Magnetic Resonance Imaging Data: A Comparative Study
by Mariia Sidulova and Chung Hyuk Park
Bioengineering 2023, 10(10), 1209; https://doi.org/10.3390/bioengineering10101209 - 16 Oct 2023
Viewed by 1348
Abstract
Generative models, such as Variational Autoencoders (VAEs), are increasingly employed for atypical pattern detection in brain imaging. During training, these models learn to capture the underlying patterns within “normal” brain images and generate new samples from those patterns. Neurodivergent states can be observed by measuring the dissimilarity between the generated/reconstructed images and the input images. This paper leverages VAEs to conduct Functional Connectivity (FC) analysis from functional Magnetic Resonance Imaging (fMRI) scans of individuals with Autism Spectrum Disorder (ASD), aiming to uncover atypical interconnectivity between brain regions. In the first part of our study, we compare multiple VAE architectures—Conditional VAE, Recurrent VAE, and a hybrid of a CNN in parallel with an RNN VAE—aiming to establish the effectiveness of VAEs in application to FC analysis. Given the nature of the disorder, ASD exhibits a higher prevalence among males than females. Therefore, in the second part of this paper, we investigate whether introducing phenotypic data can improve the performance of VAEs and, consequently, FC analysis. We compare our results with findings from previous studies in the literature. The results showed that the CNN-based VAE architecture is more effective for this application than the other models.
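
For readers unfamiliar with the conditional VAE idea, here is a minimal, hedged sketch: the condition (e.g. a phenotypic variable such as sex) is concatenated to both the encoder input and the latent code, and training minimizes reconstruction error plus a KL term. The dimensions and MLP design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Minimal conditional VAE: condition c is fed to encoder and decoder."""
    def __init__(self, x_dim=200, c_dim=2, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(x_dim + c_dim, 64)
        self.mu, self.logvar = nn.Linear(64, z_dim), nn.Linear(64, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 64), nn.ReLU(),
                                 nn.Linear(64, x_dim))

    def forward(self, x, c):
        h = F.relu(self.enc(torch.cat([x, c], dim=1)))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")             # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.randn(8, 200)                    # e.g. flattened FC matrix entries (dummy)
c = F.one_hot(torch.randint(0, 2, (8,)), 2).float()           # phenotypic condition
x_hat, mu, logvar = CVAE()(x, c)
loss = cvae_loss(x_hat, x, mu, logvar)
```

Reconstruction error on held-out scans can then serve as the dissimilarity measure described above.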

21 pages, 3967 KiB  
Article
Automatic Assessment of Transcatheter Aortic Valve Implantation Results on Four-Dimensional Computed Tomography Images Using Artificial Intelligence
by Laura Busto, César Veiga, José A. González-Nóvoa, Silvia Campanioni, Pablo Juan-Salvadores, Víctor Alfonso Jiménez Díaz, José Antonio Baz, José Luis Alba-Castro, Maximilian Kütting and Andrés Íñiguez
Bioengineering 2023, 10(10), 1206; https://doi.org/10.3390/bioengineering10101206 - 16 Oct 2023
Viewed by 1091
Abstract
Transcatheter aortic valve implantation (TAVI) is a procedure to treat severe aortic stenosis. Several clinical concerns related to potential complications after the procedure demand the analysis of computed tomography (CT) scans after TAVI to assess the implant's result. This work introduces a novel, fully automatic method for the analysis of post-TAVI 4D-CT scans to characterize the prosthesis and its relationship with the patient's anatomy. The method enables measurement extraction, including prosthesis volume, center of mass, cross-sectional area (CSA) along the prosthesis axis, and the CSA difference between the aortic root and prosthesis, with all variables studied throughout the cardiac cycle. The method has been implemented and evaluated on a cohort of 13 patients with five different prosthesis models, successfully extracting all the measurements from each patient in an automatic way. For Allegra patients, the mean of the obtained inner volume values ranged from 10,798.20 mm3 to 18,172.35 mm3, and the CSA in the maximum-diameter plane varied from 396.35 mm2 to 485.34 mm2. The adoption of this new method could provide clinically valuable information and contribute to the improvement of TAVI, significantly reducing the time and effort clinicians invest in image interpretation.

21 pages, 14293 KiB  
Article
Improving OCT Image Segmentation of Retinal Layers by Utilizing a Machine Learning Based Multistage System of Stacked Multiscale Encoders and Decoders
by Arunodhayan Sampath Kumar, Tobias Schlosser, Holger Langner, Marc Ritter and Danny Kowerko
Bioengineering 2023, 10(10), 1177; https://doi.org/10.3390/bioengineering10101177 - 10 Oct 2023
Cited by 2 | Viewed by 1559
Abstract
Optical coherence tomography (OCT)-based retinal imagery is often utilized to determine influential factors in patient progression and treatment, for which the retinal layers of the human eye are investigated to assess a patient's health status and eyesight. In this contribution, we propose a machine learning (ML)-based multistage system of stacked multiscale encoders and decoders for the image segmentation of OCT imagery of the retinal layers, enabling subsequent evaluation of physiological and pathological states. Our system's results highlight its benefits compared with currently investigated approaches, combining commonly deployed deep learning (DL) methods based on deep neural networks (DNNs). We conclude that stacking multiple multiscale encoders and decoders improves scores on the image segmentation task. Our retinal-layer-based segmentation achieves a final segmentation performance of up to 82.25 ± 0.74% for the Sørensen–Dice coefficient, outperforming the current best single-stage model, which scores 80.70 ± 0.20%, by 1.55% on the evaluated peripapillary OCT data set. Additionally, we provide results on the Duke SD-OCT, Heidelberg, and UMN data sets to illustrate our model's performance on especially noisy data sets.
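
The Sørensen–Dice coefficient used above is simple to compute from binary masks; a short generic reference implementation (not tied to this paper's evaluation code):

```python
import numpy as np

def dice(pred, target):
    """Sørensen–Dice coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom else 1.0

# Example: two overlapping toy masks.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(dice(a, b))  # 9 overlapping pixels, 16 + 16 total -> 2*9/32 = 0.5625
```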

12 pages, 3568 KiB  
Article
Microwave Breast Sensing via Deep Learning for Tumor Spatial Localization by Probability Maps
by Marijn Borghouts, Michele Ambrosanio, Stefano Franceschini, Maria Maddalena Autorino, Vito Pascazio and Fabio Baselice
Bioengineering 2023, 10(10), 1153; https://doi.org/10.3390/bioengineering10101153 - 2 Oct 2023
Viewed by 989
Abstract
Background: Microwave imaging (MWI) has emerged as a promising modality for breast cancer screening, offering cost-effective, rapid, safe, and comfortable exams. However, the practical application of MWI for tumor detection and localization is hampered by its inherent low resolution and low detection capability. Methods: This study aims to generate an accurate tumor probability map directly from the scattering matrix. This direct conversion makes the probability map independent of specific image formation techniques and thus potentially complementary to any image formation technique. An approach based on a convolutional neural network (CNN) is used to convert the scattering matrix into a tumor probability map. The proposed deep learning model is trained using a large, realistic numerical dataset of two-dimensional (2D) breast slices. The performance of the model is assessed through visual inspection and quantitative measures of predictive quality at various levels of detail. Results: The results demonstrate remarkably high accuracy (0.9995) in classifying profiles as healthy or diseased and demonstrate the model's ability to accurately locate the core of a single tumor (within 0.9 cm in most cases). Conclusion: Overall, this research demonstrates that a neural network (NN)-based approach for direct conversion from scattering matrices to tumor probability maps holds promise for advancing state-of-the-art tumor detection algorithms in the MWI domain.

17 pages, 3552 KiB  
Article
A Circular Box-Based Deep Learning Model for the Identification of Signet Ring Cells from Histopathological Images
by Saleh Albahli and Tahira Nazir
Bioengineering 2023, 10(10), 1147; https://doi.org/10.3390/bioengineering10101147 - 29 Sep 2023
Viewed by 779
Abstract
Signet ring cell (SRC) carcinoma is a particularly serious type of cancer and a leading cause of death all over the world. SRC carcinoma has a more deceptive onset than other carcinomas and is mostly encountered in its later stages. Recognizing SRCs at their initial stages is challenging because of variations in their appearance and size and changes in illumination, and it is costly because it requires medical experts. Timely diagnosis is important because the stage of the disease determines its severity, curability, and the survival rate of patients. To tackle these challenges, a deep learning (DL)-based methodology is proposed in this paper, i.e., a custom CircleNet with ResNet-34 for SRC recognition and classification. We chose this method because the CircleNet detector exploits the circular shape of SRCs. We utilized a challenging dataset for experimentation and performed augmentation to increase the number of dataset samples. The experiments were conducted using 35,000 images and attained 96.40% accuracy. A comparative analysis confirmed that our method outperforms the other methods.

13 pages, 1269 KiB  
Article
Self-Supervised Contrastive Learning to Predict the Progression of Alzheimer’s Disease with 3D Amyloid-PET
by Min Gu Kwak, Yi Su, Kewei Chen, David Weidman, Teresa Wu, Fleming Lure, Jing Li and for the Alzheimer’s Disease Neuroimaging Initiative
Bioengineering 2023, 10(10), 1141; https://doi.org/10.3390/bioengineering10101141 - 28 Sep 2023
Viewed by 1107
Abstract
Early diagnosis of Alzheimer's disease (AD) is an important task that facilitates the development of treatment and prevention strategies and may potentially improve patient outcomes. Neuroimaging has shown great promise, including amyloid-PET, which measures the accumulation of amyloid plaques in the brain—a hallmark of AD. It is desirable to train end-to-end deep learning models to predict the progression of AD for individuals at early stages based on 3D amyloid-PET. However, commonly used models are trained in a fully supervised manner and are inevitably biased toward the given label information. To this end, we propose a self-supervised contrastive learning method to accurately predict conversion to AD for individuals with mild cognitive impairment (MCI) using 3D amyloid-PET. The proposed method, SMoCo, uses both labeled and unlabeled data to capture general semantic representations underlying the images. As the downstream task is classification of converters vs. non-converters, unlike the general self-supervised learning problem that aims to generate task-agnostic representations, SMoCo additionally utilizes the label information in pre-training. To demonstrate the performance of our method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results confirmed that the proposed method provides appropriate data representations, resulting in accurate classification. SMoCo showed the best classification performance over the existing methods, with an AUROC of 85.17%, accuracy of 81.09%, sensitivity of 77.39%, and specificity of 82.17%. While self-supervised learning (SSL) has demonstrated great success in other application domains of computer vision, this study provides an initial investigation into using a self-supervised contrastive learning model, SMoCo, to effectively predict MCI conversion to AD based on 3D amyloid-PET.
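
The label-aware contrastive idea can be sketched with a generic supervised contrastive loss (a hedged stand-in, not SMoCo's momentum-encoder implementation): embeddings sharing a label are treated as positives, all others as negatives.

```python
import torch
import torch.nn.functional as F

def sup_contrastive_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: same-label embeddings are positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                              # pairwise similarity matrix
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~eye
    sim = sim.masked_fill(eye, float("-inf"))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.sum(1) > 0].mean()           # anchors with >= 1 positive

z = torch.randn(16, 128, requires_grad=True)           # dummy PET embeddings
labels = torch.randint(0, 2, (16,))                    # converter vs. non-converter
loss = sup_contrastive_loss(z, labels)
loss.backward()
```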

14 pages, 1016 KiB  
Article
SECP-Net: SE-Connection Pyramid Network for Segmentation of Organs at Risk with Nasopharyngeal Carcinoma
by Zexi Huang, Xin Yang, Sijuan Huang and Lihua Guo
Bioengineering 2023, 10(10), 1119; https://doi.org/10.3390/bioengineering10101119 - 24 Sep 2023
Viewed by 956
Abstract
Nasopharyngeal carcinoma (NPC) is a kind of malignant tumor. The accurate and automatic segmentation of computed tomography (CT) images of organs at risk (OARs) is clinically significant. In recent years, deep learning models represented by U-Net have been widely applied in medical image segmentation tasks, helping to reduce doctors' workload. In the OAR segmentation of NPC, the sizes of the OARs are variable, and some of their volumes are small. Traditional deep neural networks underperform in this segmentation task due to insufficient use of global and multi-size information. Therefore, a new SE-Connection Pyramid Network (SECP-Net) is proposed. To extract global and multi-size information, SECP-Net uses an SE-connection module and a pyramid structure to improve segmentation performance, especially for small organs. SECP-Net also uses an auto-context cascaded structure to further refine the segmentation results. Comparative experiments were conducted between SECP-Net and other recent methods on a private dataset of head-and-neck CT images and a public liver dataset. Five-fold cross-validation was used to evaluate performance based on two metrics, i.e., Dice and Jaccard similarity. The experimental results show that SECP-Net achieves state-of-the-art performance on these two challenging tasks.

16 pages, 2632 KiB  
Article
Dual-Guided Brain Diffusion Model: Natural Image Reconstruction from Human Visual Stimulus fMRI
by Lu Meng and Chuanhao Yang
Bioengineering 2023, 10(10), 1117; https://doi.org/10.3390/bioengineering10101117 - 24 Sep 2023
Cited by 2 | Viewed by 1470
Abstract
The reconstruction of visual stimuli from fMRI signals, which record brain activity, is a challenging task with crucial research value in the fields of neuroscience and machine learning. Previous studies tend to emphasize reconstructing either pixel-level features (contours, colors, etc.) or semantic features (object category) of the stimulus image, but typically these properties are not reconstructed together. In this context, we introduce a novel three-stage visual reconstruction approach called the Dual-guided Brain Diffusion Model (DBDM). Initially, we employ the Very Deep Variational Autoencoder (VDVAE) to reconstruct a coarse image from fMRI data, capturing the underlying details of the original image. Subsequently, the Bootstrapping Language-Image Pre-training (BLIP) model is utilized to provide a semantic annotation for each image. Finally, the image-to-image generation pipeline of the Versatile Diffusion (VD) model is utilized to recover natural images from the fMRI patterns, guided by both visual and semantic information. The experimental results demonstrate that DBDM surpasses previous approaches in both qualitative and quantitative comparisons. In particular, DBDM achieves the best performance in reconstructing the semantic details of the original image; the Inception, CLIP, and SwAV distances are 0.611, 0.225, and 0.405, respectively. This confirms the efficacy of our model and its potential to advance visual decoding research.

15 pages, 670 KiB  
Article
MWG-UNet: Hybrid Deep Learning Framework for Lung Fields and Heart Segmentation in Chest X-ray Images
by Yu Lyu and Xiaolin Tian
Bioengineering 2023, 10(9), 1091; https://doi.org/10.3390/bioengineering10091091 - 18 Sep 2023
Cited by 2 | Viewed by 1068
Abstract
Deep learning technology has achieved breakthrough research results in the fields of medical computer vision and image processing. Generative adversarial networks (GANs) have demonstrated strong image generation and expression abilities. This paper proposes a new method called MWG-UNet (multiple tasking Wasserstein generative adversarial network U-shape network) as a lung field and heart segmentation model, which takes advantage of an attention mechanism to enhance the segmentation accuracy of the generator and thereby improve performance. In particular, the Dice similarity, precision, and F1 score of the proposed method outperform other models, reaching 95.28%, 96.41%, and 95.90%, respectively, and the specificity surpasses the sub-optimal models by 0.28%, 0.90%, 0.24%, and 0.90%. However, the IoU is 0.69% below that of the optimal model. The results show that the proposed method has considerable ability in lung field segmentation. Our multi-organ segmentation results for the heart achieve Dice similarity and IoU values of 71.16% and 74.56%, and those for the lung fields achieve 85.18% and 81.36%.

17 pages, 7532 KiB  
Article
Clinical Interpretability of Deep Learning for Predicting Microvascular Invasion in Hepatocellular Carcinoma by Using Attention Mechanism
by Huayu You, Jifei Wang, Ruixia Ma, Yuying Chen, Lujie Li, Chenyu Song, Zhi Dong, Shiting Feng and Xiaoqi Zhou
Bioengineering 2023, 10(8), 948; https://doi.org/10.3390/bioengineering10080948 - 9 Aug 2023
Cited by 1 | Viewed by 1089
Abstract
Preoperative prediction of microvascular invasion (MVI) is essential for management decisions in hepatocellular carcinoma (HCC). Deep learning-based prediction models of MVI are numerous but lack clinical interpretation due to their “black-box” nature. Consequently, we aimed to use an attention-guided feature fusion network, including intra- and inter-attention modules, to solve this problem. This retrospective study recruited 210 HCC patients who underwent gadoxetate-enhanced MRI examination before surgery. The MRIs of the pre-contrast, arterial, portal, and hepatobiliary phases (HBP) were used to develop single-phase and multi-phase models. Attention weights provided by the attention modules were used to obtain visual explanations of predictive decisions. The four-phase fusion model achieved the highest area under the curve (AUC) of 0.92 (95% CI: 0.84–1.00), while the other models achieved AUCs of 0.75–0.91. Attention heatmaps of the collaborative-attention layers revealed that tumor margins in all phases and peritumoral areas in the arterial phase and HBP were salient regions for MVI prediction. Heatmaps of weights in the fully connected layers showed that the HBP contributed the most to MVI prediction. Our study is the first to implement self-attention and collaborative-attention to reveal the relationship between deep features and MVI, improving the clinical interpretability of prediction models. This interpretability offers radiologists and clinicians more confidence in applying deep learning models in clinical practice, helping formulate personalized therapies for HCC patients.

14 pages, 5007 KiB  
Article
Self-Supervision for Medical Image Classification: State-of-the-Art Performance with ~100 Labeled Training Samples per Class
by Maximilian Nielsen, Laura Wenderoth, Thilo Sentker and René Werner
Bioengineering 2023, 10(8), 895; https://doi.org/10.3390/bioengineering10080895 - 28 Jul 2023
Cited by 1 | Viewed by 1239
Abstract
Is self-supervised deep learning (DL) for medical image analysis already a serious alternative to the de facto standard of end-to-end trained supervised DL? We tackle this question for medical image classification, with a particular focus on one of the field's currently most limiting factors: the (non-)availability of labeled data. Based on three common medical imaging modalities (bone marrow microscopy, gastrointestinal endoscopy, dermoscopy) and publicly available data sets, we analyze the performance of self-supervised DL within the self-distillation with no labels (DINO) framework. After learning an image representation without the use of image labels, conventional machine learning classifiers are applied. The classifiers are fit using a systematically varied number of labeled data (1–1000 samples per class). Exploiting the learned image representation, we achieve state-of-the-art classification performance for all three imaging modalities and data sets with only a fraction (between 1% and 10%) of the available labeled data and about 100 labeled samples per class.
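
The downstream recipe described above is: freeze a self-supervised backbone, embed the images, and fit a classical classifier on a small labeled subset. A hedged sketch with synthetic stand-in features follows; the embedding dimension and classifier choice are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for embeddings from a frozen self-supervised (e.g. DINO) backbone;
# in practice X would come from backbone(images) with gradients disabled.
rng = np.random.default_rng(0)
n_classes, per_class, dim = 3, 100, 384        # ~100 labeled samples per class
X_train = rng.normal(size=(n_classes * per_class, dim))
y_train = np.repeat(np.arange(n_classes), per_class)

# Fit a lightweight classifier on the frozen representation.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = clf.predict(rng.normal(size=(20, dim)))
```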

13 pages, 1254 KiB  
Article
Prompt-Based Tuning of Transformer Models for Multi-Center Medical Image Segmentation of Head and Neck Cancer
by Numan Saeed, Muhammad Ridzuan, Roba Al Majzoub and Mohammad Yaqub
Bioengineering 2023, 10(7), 879; https://doi.org/10.3390/bioengineering10070879 - 24 Jul 2023
Cited by 1 | Viewed by 1815
Abstract
Medical image segmentation is a vital healthcare endeavor requiring precise and efficient models for appropriate diagnosis and treatment. Vision transformer (ViT)-based segmentation models have shown great performance in accomplishing this task. However, to build a powerful backbone, the self-attention block of a ViT requires large-scale pre-training data. The present method of modifying pre-trained models entails updating all or some of the backbone parameters. This paper proposes a novel fine-tuning strategy for adapting a pre-trained transformer-based segmentation model to data from a new medical center. This method introduces a small number of learnable parameters, termed prompts, into the input space (less than 1% of model parameters) while keeping the rest of the model parameters frozen. Extensive studies employing data from new unseen medical centers show that prompt-based fine-tuning of medical segmentation models provides excellent performance on the new-center data with a negligible drop on the old centers. Additionally, our strategy delivers great accuracy with minimal re-training on new-center data, significantly decreasing the computational and time costs of fine-tuning pre-trained models. Our source code will be made publicly available.
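
Visual prompt tuning can be sketched as follows: learnable prompt tokens are prepended to the patch-token sequence of a frozen transformer, and only the prompts (plus a small head) receive gradients. This generic sketch uses a classification head for brevity (the paper targets segmentation); the stand-in encoder, token counts, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class PromptedViT(nn.Module):
    """Wraps a frozen ViT-style encoder: only `prompts` and the head train."""
    def __init__(self, encoder, embed_dim=768, n_prompts=8, n_classes=2):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                      # freeze the backbone
        self.prompts = nn.Parameter(torch.zeros(1, n_prompts, embed_dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)
        self.head = nn.Linear(embed_dim, n_classes)

    def forward(self, tokens):                           # tokens: (B, N, D)
        b = tokens.size(0)
        x = torch.cat([self.prompts.expand(b, -1, -1), tokens], dim=1)
        x = self.encoder(x)                              # frozen transformer blocks
        return self.head(x.mean(dim=1))                  # pooled prediction

# Stand-in encoder: one transformer layer (real backbones are much deeper).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True), 1)
logits = PromptedViT(encoder)(torch.randn(2, 196, 768))  # dummy patch tokens
```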

18 pages, 1732 KiB  
Article
Predicting Recurrence in Pancreatic Ductal Adenocarcinoma after Radical Surgery Using an AX-Unet Pancreas Segmentation Model and Dynamic Nomogram
by Haixu Ni, Gonghai Zhou, Xinlong Chen, Jing Ren, Minqiang Yang, Yuhong Zhang, Qiyu Zhang, Lei Zhang, Chengsheng Mao and Xun Li
Bioengineering 2023, 10(7), 828; https://doi.org/10.3390/bioengineering10070828 - 11 Jul 2023
Cited by 1 | Viewed by 1356
Abstract
This study investigates the reliability of radiomic features extracted from contrast-enhanced computed tomography (CT) by AX-Unet, a pancreas segmentation model, for analysing the recurrence of pancreatic ductal adenocarcinoma (PDAC) after radical surgery. We trained an AX-Unet model to extract radiomic features from preoperative contrast-enhanced CT images on a training set of 205 PDAC patients. We then evaluated the segmentation ability of AX-Unet and the relationship between radiomic features and clinical characteristics on an independent test set of 64 patients with clear prognoses. Lasso regression analysis was used to screen for variables of interest affecting patients' post-operative recurrence, and Cox proportional hazards regression was used to screen for risk factors and create a nomogram prediction model. The proposed model achieved an accuracy of 85.9% for pancreas segmentation, meeting the requirements of most clinical applications. Radiomic features were found to be significantly correlated with clinical characteristics such as lymph node metastasis, resectability status, and abnormally elevated serum carbohydrate antigen 19-9 (CA 19-9) levels. Specifically, variance and entropy were associated with the recurrence rate (p < 0.05). The AUC for the nomogram predicting post-operative recurrence was 0.92 (95% CI: 0.78–0.99), and the C-index was 0.62 (95% CI: 0.48–0.78). The AX-Unet pancreas segmentation model shows promise for analysing recurrence risk factors after radical surgery for PDAC. Additionally, our findings suggest that a dynamic nomogram model based on AX-Unet can provide pancreatic oncologists with more accurate prognostic assessments for their patients.
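
Cox proportional hazards screening of the kind described above is commonly done with the lifelines library; a hedged toy sketch follows. The data frame is fabricated purely for illustration, and the covariate names merely mirror the abstract.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy recurrence data (fabricated): radiomic variance/entropy plus CA 19-9 status.
df = pd.DataFrame({
    "months_to_event": [6, 14, 9, 30, 22, 18, 11, 25],
    "recurred":        [1, 0, 1, 0, 1, 0, 1, 0],
    "variance":        [0.8, 0.3, 0.4, 0.2, 0.7, 0.6, 0.5, 0.1],
    "entropy":         [4.2, 3.1, 3.0, 2.8, 4.0, 3.6, 3.3, 2.5],
    "ca199_elevated":  [1, 0, 0, 0, 1, 1, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_event", event_col="recurred")
cph.print_summary()          # hazard ratios for each candidate risk factor
```

A nomogram then visualizes the fitted model's covariate contributions as additive point scales.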

12 pages, 8736 KiB  
Article
Image Translation of Breast Ultrasound to Pseudo Anatomical Display by CycleGAN
by Lilach Barkat, Moti Freiman and Haim Azhari
Bioengineering 2023, 10(3), 388; https://doi.org/10.3390/bioengineering10030388 - 22 Mar 2023
Cited by 1 | Viewed by 2024
Abstract
Ultrasound imaging is cost-effective, radiation-free, portable, and implemented routinely in clinical procedures. Nonetheless, image quality is characterized by a granulated appearance, a poor SNR, and speckle noise, and, specifically for breast tumors, the margins are commonly blurred and indistinct. Thus, there is a need to improve ultrasound image quality. We hypothesize that this can be achieved by translation into a more realistic display that mimics a pseudo-anatomical cut through the tissue, using a cycle generative adversarial network (CycleGAN). To train the CycleGAN for this translation, two datasets were used: “Breast Ultrasound Images” (BUSI) and a set of optical images of poultry breast tissues. The generated pseudo-anatomical images provide improved visual discrimination of the lesions through clearer border definition and pronounced contrast. To evaluate the preservation of the anatomical features, the lesions in both datasets were segmented and compared. This comparison yielded median Dice scores of 0.91 and 0.70, median center errors of 0.58% and 3.27%, and median area errors of 0.40% and 4.34% for benign and malignant lesions, respectively. In conclusion, the generated pseudo-anatomical images provide a more intuitive display, enhance tissue anatomy, and preserve tumor geometry, and they can potentially improve diagnoses and clinical outcomes.
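
The heart of CycleGAN is an adversarial loss plus a cycle-consistency loss enforcing that translating to the pseudo-anatomical domain and back recovers the input. A hedged, minimal sketch with stand-in single-layer "generators" follows (the real model uses deep encoder–decoder generators and patch discriminators).

```python
import torch
import torch.nn.functional as F

def cycle_losses(G, F_inv, D_anat, us, anat, lam=10.0):
    """Generator-side CycleGAN objective for unpaired US -> pseudo-anatomy.

    G:      ultrasound -> pseudo-anatomical generator
    F_inv:  pseudo-anatomical -> ultrasound generator
    D_anat: discriminator on the pseudo-anatomical domain
    """
    fake_anat = G(us)
    d_out = D_anat(fake_anat)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))       # LSGAN-style adversarial term
    cyc = F.l1_loss(F_inv(fake_anat), us) + F.l1_loss(G(F_inv(anat)), anat)
    return adv + lam * cyc                                # lam weights cycle consistency

# Single-conv stand-ins so the sketch runs end to end.
G = torch.nn.Conv2d(1, 1, 3, padding=1)
F_inv = torch.nn.Conv2d(1, 1, 3, padding=1)
D_anat = torch.nn.Conv2d(1, 1, 3, padding=1)
loss = cycle_losses(G, F_inv, D_anat,
                    torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```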

Other

20 pages, 1639 KiB  
Systematic Review
Machine Learning for Brain MRI Data Harmonisation: A Systematic Review
by Grace Wen, Vickie Shim, Samantha Jane Holdsworth, Justin Fernandez, Miao Qiao, Nikola Kasabov and Alan Wang
Bioengineering 2023, 10(4), 397; https://doi.org/10.3390/bioengineering10040397 - 23 Mar 2023
Cited by 3 | Viewed by 2941
Abstract
Background: Magnetic Resonance Imaging (MRI) data collected from multiple centres can be heterogeneous due to factors such as the scanner used and the site location. To reduce this heterogeneity, the data need to be harmonised. In recent years, machine learning (ML) has been used to solve different types of problems related to MRI data, showing great promise. Objective: This study explores how well various ML algorithms perform in harmonising MRI data, both implicitly and explicitly, by summarising the findings of relevant peer-reviewed articles. Furthermore, it provides guidelines for the use of current methods and identifies potential future research directions. Method: This review covers articles indexed in the PubMed, Web of Science, and IEEE databases up to June 2022. Data from the studies were analysed based on the criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Quality assessment questions were derived to assess the quality of the included publications. Results: A total of 41 articles published between 2015 and 2022 were identified and analysed. MRI data were found to be harmonised either implicitly (n = 21) or explicitly (n = 20). Three MRI modalities were identified: structural MRI (n = 28), diffusion MRI (n = 7), and functional MRI (n = 6). Conclusion: Various ML techniques have been employed to harmonise different types of MRI data. There is currently a lack of consistent evaluation methods and metrics across studies, and it is recommended that this issue be addressed in future studies. Harmonisation of MRI data using ML shows promise for improving performance on downstream ML tasks, while caution should be exercised when using ML-harmonised data for direct interpretation.
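
Explicit harmonisation is often a per-site location-scale adjustment; the widely used ComBat method additionally applies empirical Bayes shrinkage and preserves biological covariates. A deliberately simplified numpy sketch of the location-scale idea follows (illustrative only, not ComBat itself).

```python
import numpy as np

def locscale_harmonise(features, sites):
    """Simplified location-scale harmonisation: align each site's feature
    mean and variance to those of the pooled data."""
    out = features.astype(float).copy()
    gmean, gstd = features.mean(axis=0), features.std(axis=0)
    for s in np.unique(sites):
        idx = sites == s
        smean = features[idx].mean(axis=0)
        sstd = features[idx].std(axis=0) + 1e-8
        out[idx] = (features[idx] - smean) / sstd * gstd + gmean
    return out

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 1.0, (50, 5)),     # site A
                   rng.normal(0.5, 2.0, (50, 5))])    # site B (scanner shift)
sites = np.array(["A"] * 50 + ["B"] * 50)
harmonised = locscale_harmonise(feats, sites)         # site means/scales aligned
```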