AI in MRI: Frontiers and Applications

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (1 February 2023) | Viewed by 49018

Special Issue Editors

Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
Interests: molecular imaging; MRI; magnetic resonance fingerprinting; machine learning; chemical exchange saturation transfer

Guest Editor
Department of Electrical Engineering and Computer Sciences (EECS), UC Berkeley, Berkeley, CA, USA
Interests: MRI reconstruction; machine learning for medical imaging; dynamic MRI; pediatric MRI; data usage in AI for healthcare

Special Issue Information

Dear Colleagues,

Recent advances in artificial intelligence (AI) have created tremendous opportunities for improving the capabilities of medical imaging. Magnetic resonance imaging (MRI), in particular, is a highly versatile modality that may greatly benefit from the incorporation of AI at different points along the imaging pipeline. Recent studies have demonstrated the potential use of AI for scan-time reduction, improved disease classification/segmentation, extraction of quantitative parameters from low signal-to-noise-ratio images, and automatic protocol design. This Special Issue of Bioengineering, titled “AI in MRI: Frontiers and Applications”, will include new and exciting results from original research endeavors, reviews of the state of the art, and highlights of the upcoming challenges and opportunities in the field.

Dr. Or Perlman
Dr. Efrat Shimron
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • machine learning
  • magnetic resonance imaging (MRI)
  • deep learning
  • MRI reconstruction
  • lesion segmentation
  • disease classification
  • image registration
  • medical imaging
  • quantitative imaging

Published Papers (21 papers)


Editorial


9 pages, 242 KiB  
Editorial
AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow
by Efrat Shimron and Or Perlman
Bioengineering 2023, 10(4), 492; https://doi.org/10.3390/bioengineering10040492 - 20 Apr 2023
Cited by 2 | Viewed by 2822
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...] Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Research


13 pages, 2026 KiB  
Article
AFNet Algorithm for Automatic Amniotic Fluid Segmentation from Fetal MRI
by Alejo Costanzo, Birgit Ertl-Wagner and Dafna Sussman
Bioengineering 2023, 10(7), 783; https://doi.org/10.3390/bioengineering10070783 - 30 Jun 2023
Viewed by 1046
Abstract
Amniotic Fluid Volume (AFV) is a crucial fetal biomarker when diagnosing specific fetal abnormalities. This study proposes a novel Convolutional Neural Network (CNN) model, AFNet, for segmenting amniotic fluid (AF) to facilitate clinical AFV evaluation. AFNet was trained and tested on a manually segmented and radiologist-validated AF dataset. AFNet outperforms ResUNet++ by using efficient feature mapping in the attention block and transposing convolutions in the decoder. Our experimental results show that AFNet achieved a mean Intersection over Union (mIoU) of 93.38% on our dataset, thereby outperforming other state-of-the-art models. While AFNet achieves performance scores similar to those of the UNet++ model, it does so while using less than half as many parameters. By creating a detailed AF dataset with an improved CNN architecture, we enable the quantification of AFV in clinical practice, which can aid in diagnosing AF disorders during gestation. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
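As an illustrative aside (not the authors' code), the mean Intersection over Union (mIoU) metric reported for AFNet can be sketched in a few lines; the per-class loop and the skipping of classes absent from both masks are assumptions of this sketch:

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean Intersection over Union across classes, skipping classes
    absent from both masks (an assumption of this sketch)."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```

A perfect segmentation yields an mIoU of 1.0; partial overlaps average the per-class IoU scores.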

20 pages, 4358 KiB  
Article
Automatic Multiple Articulator Segmentation in Dynamic Speech MRI Using a Protocol Adaptive Stacked Transfer Learning U-NET Model
by Subin Erattakulangara, Karthika Kelat, David Meyer, Sarv Priya and Sajan Goud Lingala
Bioengineering 2023, 10(5), 623; https://doi.org/10.3390/bioengineering10050623 - 22 May 2023
Viewed by 1472
Abstract
Dynamic magnetic resonance imaging has emerged as a powerful modality for investigating upper-airway function during speech production. Analyzing the changes in the vocal tract airspace, including the position of soft-tissue articulators (e.g., the tongue and velum), enhances our understanding of speech production. The advent of various fast speech MRI protocols based on sparse sampling and constrained reconstruction has led to the creation of dynamic speech MRI datasets on the order of 80–100 image frames/second. In this paper, we propose a stacked transfer learning U-NET model to segment the deforming vocal tract in 2D mid-sagittal slices of dynamic speech MRI. Our approach leverages (a) low- and mid-level features and (b) high-level features. The low- and mid-level features are derived from models pre-trained on labeled open-source brain tumor MR and lung CT datasets, and an in-house airway labeled dataset. The high-level features are derived from labeled protocol-specific MR images. The applicability of our approach to segmenting dynamic datasets is demonstrated in data acquired from three fast speech MRI protocols: Protocol 1: 3 T-based radial acquisition scheme coupled with a non-linear temporal regularizer, where speakers were producing French speech tokens; Protocol 2: 1.5 T-based uniform density spiral acquisition scheme coupled with a temporal finite difference (FD) sparsity regularization, where speakers were producing fluent speech tokens in English; and Protocol 3: 3 T-based variable density spiral acquisition scheme coupled with manifold regularization, where speakers were producing various speech tokens from the International Phonetic Alphabet (IPA). Segments from our approach were compared to those from an expert human user (a vocologist), and the conventional U-NET model without transfer learning. Segmentations from a second expert human user (a radiologist) were used as ground truth.
Evaluations were performed using the quantitative DICE similarity metric, the Hausdorff distance metric, and segmentation count metric. This approach was successfully adapted to different speech MRI protocols with only a handful of protocol-specific images (e.g., of the order of 20 images), and provided accurate segmentations similar to those of an expert human. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
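The DICE similarity and Hausdorff distance metrics used in this evaluation can be sketched as follows (an illustrative brute-force version, not the authors' implementation; both masks are assumed non-empty):

```python
import numpy as np

def dice(a, b):
    """DICE similarity coefficient between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground pixels of two
    (non-empty) binary masks, via brute-force pairwise distances."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

DICE rewards overlapping area, while the Hausdorff distance penalizes the worst-case boundary deviation, so the two metrics are complementary.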

26 pages, 5061 KiB  
Article
Synthetic Inflammation Imaging with PatchGAN Deep Learning Networks
by Aniket A. Tolpadi, Johanna Luitjens, Felix G. Gassert, Xiaojuan Li, Thomas M. Link, Sharmila Majumdar and Valentina Pedoia
Bioengineering 2023, 10(5), 516; https://doi.org/10.3390/bioengineering10050516 - 25 Apr 2023
Viewed by 1765
Abstract
Background: Gadolinium (Gd)-enhanced Magnetic Resonance Imaging (MRI) is crucial in several applications, including oncology, cardiac imaging, and musculoskeletal inflammatory imaging. One use case is rheumatoid arthritis (RA), a widespread autoimmune condition for which Gd MRI is crucial in imaging synovial joint inflammation, but Gd administration has well-documented safety concerns. As such, algorithms that could synthetically generate post-contrast peripheral joint MR images from non-contrast MR sequences would have immense clinical utility. Moreover, while such algorithms have been investigated for other anatomies, they are largely unexplored for musculoskeletal applications such as RA, and efforts to understand trained models and improve trust in their predictions have been limited in medical imaging. Methods: A dataset of 27 RA patients was used to train algorithms that synthetically generated post-Gd IDEAL wrist coronal T1-weighted scans from pre-contrast scans. UNets and PatchGANs were trained, leveraging an anomaly-weighted L1 loss and global generative adversarial network (GAN) loss for the PatchGAN. Occlusion and uncertainty maps were also generated to understand model performance. Results: UNet synthetic post-contrast images exhibited lower normalized root mean square error (nRMSE) than PatchGAN in full volumes and the wrist, but PatchGAN outperformed UNet in synovial joints (UNet nRMSEs: volume = 6.29 ± 0.88, wrist = 4.36 ± 0.60, synovial = 26.18 ± 7.45; PatchGAN nRMSEs: volume = 6.72 ± 0.81, wrist = 6.07 ± 1.22, synovial = 23.14 ± 7.37; n = 7). Occlusion maps showed that synovial joints made substantial contributions to PatchGAN and UNet predictions, while uncertainty maps showed that PatchGAN predictions were more confident within those joints.
Conclusions: Both pipelines showed promising performance in synthesizing post-contrast images, but PatchGAN performance was stronger and more confident within synovial joints, where an algorithm like this would have maximal clinical utility. Image synthesis approaches are therefore promising for RA and synthetic inflammatory imaging. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
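The anomaly-weighted L1 loss is not specified in detail here; one plausible sketch, in which voxels flagged as anomalous (e.g., enhancing synovium) are upweighted by a hypothetical factor w, is:

```python
import numpy as np

def anomaly_weighted_l1(pred, target, anomaly_mask, w=10.0):
    """L1 loss with anomalous voxels upweighted by a factor w (an assumed
    hyperparameter); weights are normalized to keep the loss scale stable."""
    weights = np.where(anomaly_mask, w, 1.0)
    return float(np.sum(weights * np.abs(pred - target)) / np.sum(weights))
```

The weighting concentrates the training signal on clinically important regions that occupy only a small fraction of the field of view.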

19 pages, 9068 KiB  
Article
Federated End-to-End Unrolled Models for Magnetic Resonance Image Reconstruction
by Brett R. Levac, Marius Arvinte and Jonathan I. Tamir
Bioengineering 2023, 10(3), 364; https://doi.org/10.3390/bioengineering10030364 - 16 Mar 2023
Cited by 5 | Viewed by 2051
Abstract
Image reconstruction is the process of recovering an image from raw, under-sampled signal measurements, and is a critical step in diagnostic medical imaging, such as magnetic resonance imaging (MRI). Recently, data-driven methods have led to improved image quality in MRI reconstruction using a limited number of measurements, but these methods typically rely on the existence of a large, centralized database of fully sampled scans for training. In this work, we investigate federated learning for MRI reconstruction using end-to-end unrolled deep learning models as a means of training global models across multiple clients (data sites), while keeping individual scans local. We empirically identify a low-data regime across a large number of heterogeneous scans, where a small number of training samples per client are available and non-collaborative models lead to performance drops. In this regime, we investigate the performance of adaptive federated optimization algorithms as a function of client data distribution and communication budget. Experimental results show that adaptive optimization algorithms are well suited for the federated learning of unrolled models, even in a limited-data regime (50 slices per data site), and that client-sided personalization can improve reconstruction quality for clients that did not participate in training. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
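The federated averaging step at the heart of such training can be sketched as a minimal FedAvg-style round, assuming each client reports its parameter arrays and local sample count (this is a generic sketch, not the paper's adaptive optimizer):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One FedAvg round: average each parameter array across clients,
    weighted by the number of local training samples at each data site."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
```

Raw scans never leave the clients; only model parameters are communicated, which is the key privacy property motivating the federated setting.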

13 pages, 4671 KiB  
Article
Accelerated Diffusion-Weighted MRI of Rectal Cancer Using a Residual Convolutional Network
by Mohaddese Mohammadi, Elena A. Kaye, Or Alus, Youngwook Kee, Jennifer S. Golia Pernicka, Maria El Homsi, Iva Petkovska and Ricardo Otazo
Bioengineering 2023, 10(3), 359; https://doi.org/10.3390/bioengineering10030359 - 14 Mar 2023
Cited by 2 | Viewed by 1332
Abstract
This work presents a deep-learning-based denoising technique to accelerate the acquisition of high b-value diffusion-weighted MRI for rectal cancer. A denoising convolutional neural network (DCNN) with a combined L1–L2 loss function was developed to denoise high b-value diffusion-weighted MRI data acquired with fewer repetitions (NEX: number of excitations) using the low b-value image as an anatomical guide. DCNN was trained using 85 datasets acquired on patients with rectal cancer and tested on 20 different datasets with NEX = 1, 2, and 4, corresponding to acceleration factors of 16, 8, and 4, respectively. Image quality was assessed qualitatively by expert body radiologists. Reader 1 scored similar overall image quality between denoised images with NEX = 1 and NEX = 2, which were slightly lower than the reference. Reader 2 scored similar quality between NEX = 1 and the reference, and better quality for NEX = 2. Denoised images with fourfold acceleration (NEX = 4) received even higher scores than the reference, partly because gas-related motion in the rectum degrades longer acquisitions. The proposed deep learning denoising technique can enable eightfold acceleration with similar image quality (average image quality = 2.8 ± 0.5) and fourfold acceleration with higher image quality (3.0 ± 0.6) than the clinical standard (2.5 ± 0.8) for improved diagnosis of rectal cancer. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
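A combined L1–L2 loss of the kind described can be sketched as a convex combination of mean-absolute and mean-squared error (the mixing weight alpha is an assumption of this sketch, not a value from the paper):

```python
import numpy as np

def combined_l1_l2(pred, target, alpha=0.5):
    """Convex combination of mean-absolute (L1) and mean-squared (L2) error;
    alpha = 1 recovers pure L1, alpha = 0 pure L2."""
    l1 = np.mean(np.abs(pred - target))
    l2 = np.mean((pred - target) ** 2)
    return float(alpha * l1 + (1 - alpha) * l2)
```

Mixing the two terms is a common way to retain L2's smooth gradients while gaining L1's robustness to outlier voxels.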

12 pages, 6039 KiB  
Article
Synthesizing Complex-Valued Multicoil MRI Data from Magnitude-Only Images
by Nikhil Deveshwar, Abhejit Rajagopal, Sule Sahin, Efrat Shimron and Peder E. Z. Larson
Bioengineering 2023, 10(3), 358; https://doi.org/10.3390/bioengineering10030358 - 14 Mar 2023
Cited by 3 | Viewed by 2741
Abstract
Despite the proliferation of deep learning techniques for accelerated MRI acquisition and enhanced image reconstruction, the construction of large and diverse MRI datasets continues to pose a barrier to effective clinical translation of these technologies. One major challenge is in collecting the MRI raw data (required for image reconstruction) from clinical scanning, as only magnitude images are typically saved and used for clinical assessment and diagnosis. The image phase and multi-channel RF coil information are not retained when magnitude-only images are saved in clinical imaging archives. Additionally, preprocessing used for data in clinical imaging can lead to biased results. While several groups have begun concerted efforts to collect large amounts of MRI raw data, current databases are limited in the diversity of anatomy, pathology, annotations, and acquisition types they contain. To address this, we present a method for synthesizing realistic MR data from magnitude-only data, allowing for the use of diverse data from clinical imaging archives in advanced MRI reconstruction development. Our method uses a conditional GAN-based framework to generate synthetic phase images from input magnitude images. We then applied ESPIRiT to derive RF coil sensitivity maps from fully sampled real data to generate multi-coil data. The synthetic data generation method was evaluated by comparing image reconstruction results from training Variational Networks either with real data or synthetic data. We demonstrate that the Variational Network trained on synthetic MRI data from our method, consisting of GAN-derived synthetic phase and multi-coil information, outperformed Variational Networks trained on data with synthetic phase generated using current state-of-the-art methods. 
Additionally, we demonstrate that the Variational Networks trained with synthetic k-space data from our method perform comparably to image reconstruction networks trained on undersampled real k-space data. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
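The core synthesis step, combining a magnitude image with a (synthetic) phase map and per-coil sensitivity maps to produce multicoil k-space, can be sketched as follows (the FFT centering conventions are an assumption of this sketch):

```python
import numpy as np

def synthesize_multicoil_kspace(magnitude, phase, coil_maps):
    """Combine a magnitude image with a phase map into a complex image,
    modulate by per-coil sensitivity maps, and FFT to multicoil k-space."""
    complex_img = magnitude * np.exp(1j * phase)      # (H, W) complex image
    coil_imgs = coil_maps * complex_img[None, ...]    # (C, H, W) coil images
    k = np.fft.fft2(np.fft.ifftshift(coil_imgs, axes=(-2, -1)))
    return np.fft.fftshift(k, axes=(-2, -1))          # centered k-space
```

In the paper, the phase comes from a conditional GAN and the sensitivity maps from ESPIRiT; here both are simply function inputs.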

18 pages, 5241 KiB  
Article
Joint Cardiac T1 Mapping and Cardiac Cine Using Manifold Modeling
by Qing Zou, Sarv Priya, Prashant Nagpal and Mathews Jacob
Bioengineering 2023, 10(3), 345; https://doi.org/10.3390/bioengineering10030345 - 09 Mar 2023
Cited by 3 | Viewed by 1659
Abstract
The main focus of this work is to introduce a single free-breathing and ungated imaging protocol to jointly estimate cardiac function and myocardial T1 maps. We reconstruct a time series of images corresponding to k-space data from a free-breathing and ungated inversion recovery gradient echo sequence using a manifold algorithm. We model each image in the time series as a non-linear function of three variables: cardiac and respiratory phases and inversion time. The non-linear function is realized using a convolutional neural network (CNN) generator, while the CNN parameters, as well as the phase information, are estimated from the measured k-t space data. We use a dense conditional auto-encoder to estimate the cardiac and respiratory phases from the central multi-channel k-space samples acquired at each frame. The latent vectors of the auto-encoder are constrained to be bandlimited functions with appropriate frequency bands, which enables the disentanglement of the latent vectors into cardiac and respiratory phases, even when the data are acquired with intermittent inversion pulses. Once the phases are estimated, we pose the image recovery as the learning of the parameters of the CNN generator from the measured k-t space data. The learned CNN generator is used to generate synthetic data on demand by feeding it with appropriate latent vectors. The proposed approach capitalizes on the synergies between cine MRI and T1 mapping to reduce the scan time and improve patient comfort. The framework also enables the generation of synthetic breath-held cine movies with different inversion contrasts, which improves the visualization of the myocardium. In addition, the approach also enables the estimation of the T1 maps with specific phases, which is challenging with breath-held approaches. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

24 pages, 7768 KiB  
Article
K2S Challenge: From Undersampled K-Space to Automatic Segmentation
by Aniket A. Tolpadi, Upasana Bharadwaj, Kenneth T. Gao, Rupsa Bhattacharjee, Felix G. Gassert, Johanna Luitjens, Paula Giesler, Jan Nikolas Morshuis, Paul Fischer, Matthias Hein, Christian F. Baumgartner, Artem Razumov, Dmitry Dylov, Quintin van Lohuizen, Stefan J. Fransen, Xiaoxia Zhang, Radhika Tibrewala, Hector Lise de Moura, Kangning Liu, Marcelo V. W. Zibetti, Ravinder Regatte, Sharmila Majumdar and Valentina Pedoia
Bioengineering 2023, 10(2), 267; https://doi.org/10.3390/bioengineering10020267 - 18 Feb 2023
Cited by 5 | Viewed by 3500
Abstract
Magnetic Resonance Imaging (MRI) offers strong soft tissue contrast but suffers from long acquisition times and requires tedious annotation from radiologists. Traditionally, these challenges have been addressed separately with reconstruction and image analysis algorithms. To see if performance could be improved by treating both as end-to-end, we hosted the K2S challenge, in which challenge participants segmented knee bones and cartilage from 8× undersampled k-space. We curated the 300-patient K2S dataset of multicoil raw k-space and radiologist quality-checked segmentations. A total of 87 teams registered for the challenge, and there were 12 submissions, varying in methodology from serial reconstruction-then-segmentation pipelines, to end-to-end networks, to one submission that eschewed a reconstruction algorithm altogether. Four teams produced strong submissions, with the winner having a weighted Dice Similarity Coefficient of 0.910 ± 0.021 across knee bones and cartilage. Interestingly, there was no correlation between reconstruction and segmentation metrics. Further analysis showed the top four submissions were suitable for downstream biomarker analysis, largely preserving cartilage thicknesses and key bone shape features with respect to ground truth. K2S thus showed the value in considering reconstruction and image analysis as end-to-end tasks, as this leaves room for optimization while more realistically reflecting the long-term use case of tools being developed by the MR community. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

14 pages, 7374 KiB  
Article
Cascade of Denoising and Mapping Neural Networks for MRI R2* Relaxometry of Iron-Loaded Liver
by Qiqi Lu, Changqing Wang, Zifeng Lian, Xinyuan Zhang, Wei Yang, Qianjin Feng and Yanqiu Feng
Bioengineering 2023, 10(2), 209; https://doi.org/10.3390/bioengineering10020209 - 04 Feb 2023
Cited by 3 | Viewed by 1746
Abstract
MRI measurement of the effective transverse relaxation rate (R2*) is a reliable method for quantifying liver iron concentration. However, R2* mapping can be degraded by noise, especially in the case of iron overload. This study aimed to develop a deep learning method for MRI R2* relaxometry of an iron-loaded liver using a two-stage cascaded neural network. The proposed method, named CadamNet, combines two convolutional neural networks separately designed for image denoising and parameter mapping into a cascade framework, and the physics-based R2* decay model was incorporated in training the mapping network to enforce data consistency further. CadamNet was trained using simulated liver data with Rician noise, which was constructed from clinical liver data. The performance of CadamNet was quantitatively evaluated on simulated data with varying noise levels as well as clinical liver data and compared with the single-stage parameter mapping network (MappingNet) and two conventional model-based R2* mapping methods. CadamNet consistently achieved high-quality R2* maps and outperformed MappingNet at varying noise levels. Compared with conventional R2* mapping methods, CadamNet yielded R2* maps with lower errors, higher quality, and substantially increased efficiency. In conclusion, the proposed CadamNet enables accurate and efficient iron-loaded liver R2* mapping, especially in the presence of severe noise. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
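The underlying R2* decay model, S(TE) = S0 · exp(−R2* · TE), can be fitted classically by log-linear least squares; the following is a minimal sketch of this conventional baseline (not CadamNet itself), assuming noiseless positive signal:

```python
import numpy as np

def fit_r2star(te, signal):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-R2* * TE);
    returns (S0, R2*), with R2* in reciprocal units of TE."""
    A = np.vstack([np.ones_like(te), -te]).T   # model: log S = log S0 - R2* * TE
    coeffs, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
    return float(np.exp(coeffs[0])), float(coeffs[1])

te = np.array([0.001, 0.002, 0.004, 0.008])    # echo times in seconds
signal = 100.0 * np.exp(-200.0 * te)           # noiseless decay, R2* = 200 1/s
s0, r2s = fit_r2star(te, signal)
```

With Rician noise and fast decay (iron overload), this log-linear fit becomes biased, which is precisely the regime the learned denoising-plus-mapping cascade targets.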

22 pages, 8111 KiB  
Article
Improving Data-Efficiency and Robustness of Medical Imaging Segmentation Using Inpainting-Based Self-Supervised Learning
by Jeffrey Dominic, Nandita Bhaskhar, Arjun D. Desai, Andrew Schmidt, Elka Rubin, Beliz Gunel, Garry E. Gold, Brian A. Hargreaves, Leon Lenchik, Robert Boutin and Akshay S. Chaudhari
Bioengineering 2023, 10(2), 207; https://doi.org/10.3390/bioengineering10020207 - 04 Feb 2023
Cited by 3 | Viewed by 2384
Abstract
We systematically evaluate the training methodology and efficacy of two inpainting-based pretext tasks of context prediction and context restoration for medical image segmentation using self-supervised learning (SSL). Multiple versions of self-supervised U-Net models were trained to segment MRI and CT datasets, each using a different combination of design choices and pretext tasks to determine the effect of these design choices on segmentation performance. The optimal design choices were used to train SSL models that were then compared with baseline supervised models for computing clinically-relevant metrics in label-limited scenarios. We observed that SSL pretraining with context restoration using 32 × 32 patches and Poisson-disc sampling, transferring only the pretrained encoder weights, and fine-tuning immediately with an initial learning rate of 1 × 10⁻³ provided the most benefit over supervised learning for MRI and CT tissue segmentation accuracy (p < 0.001). For both datasets and most label-limited scenarios, scaling the size of unlabeled pretraining data resulted in improved segmentation performance. SSL models pretrained with this amount of data outperformed baseline supervised models in the computation of clinically-relevant metrics, especially when the performance of supervised learning was low. Our results demonstrate that SSL pretraining using inpainting-based pretext tasks can help increase the robustness of models in label-limited scenarios and reduce worst-case errors that occur with supervised learning. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
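The context-restoration pretext task corrupts an image by swapping patches and trains the model to undo the corruption; the following is a minimal sketch of the corruption step only (patch size, swap count, and the fact that patches may overlap are simplifications of this sketch):

```python
import numpy as np

def corrupt_by_patch_swap(img, patch=2, n_swaps=1, rng=None):
    """Context-restoration-style corruption: swap pairs of randomly chosen
    square patches (patches may overlap in this simplified sketch)."""
    rng = np.random.default_rng(rng)
    out = img.copy()
    h, w = img.shape
    for _ in range(n_swaps):
        (y1, x1), (y2, x2) = rng.integers(
            0, [h - patch + 1, w - patch + 1], size=(2, 2))
        block = out[y1:y1 + patch, x1:x1 + patch].copy()
        out[y1:y1 + patch, x1:x1 + patch] = out[y2:y2 + patch, x2:x2 + patch]
        out[y2:y2 + patch, x2:x2 + patch] = block
    return out
```

Because the intensity distribution is preserved while spatial context is scrambled, the network must learn anatomical structure to restore the original image, which is what transfers to segmentation.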

16 pages, 4963 KiB  
Article
Myocardial Segmentation of Tagged Magnetic Resonance Images with Transfer Learning Using Generative Cine-To-Tagged Dataset Transformation
by Arnaud P. Dhaene, Michael Loecher, Alexander J. Wilson and Daniel B. Ennis
Bioengineering 2023, 10(2), 166; https://doi.org/10.3390/bioengineering10020166 - 28 Jan 2023
Cited by 4 | Viewed by 1996
Abstract
The use of deep learning (DL) segmentation in cardiac MRI has the potential to streamline the radiology workflow, particularly for the measurement of myocardial strain. Recent efforts in DL motion tracking models have drastically reduced the time needed to measure the heart’s displacement field and the subsequent myocardial strain estimation. However, the selection of initial myocardial reference points is not automated and still requires manual input from domain experts. Segmentation of the myocardium is a key step for initializing reference points. While high-performing myocardial segmentation models exist for cine images, this is not the case for tagged images. In this work, we developed and compared two novel DL models (nnU-net and Segmentation ResNet VAE) for the segmentation of myocardium from tagged CMR images. We implemented two methods to transform cardiac cine images into tagged images, allowing us to leverage large public annotated cine datasets. The cine-to-tagged methods included (i) a novel physics-driven transformation model, and (ii) a generative adversarial network (GAN) style transfer model. We show that pretrained models perform better (+2.8 Dice coefficient percentage points) and converge faster (6×) than models trained from scratch. The best-performing method relies on a pretraining with an unpaired, unlabeled, and structure-preserving generative model trained to transform cine images into their tagged-appearing equivalents. Our state-of-the-art myocardium segmentation network reached a Dice coefficient of 0.828 and 95th percentile Hausdorff distance of 4.745 mm on a held-out test set. This performance is comparable to existing state-of-the-art segmentation networks for cine images. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)
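The physics-driven cine-to-tagged transformation is not detailed here; one common approximation is to modulate a cine frame with a SPAMM-like sinusoidal tag grid, as in this hypothetical sketch (tag spacing and tag contrast are assumed parameters, not values from the paper):

```python
import numpy as np

def apply_spamm_tags(cine_img, spacing=6, contrast=0.7):
    """Modulate a cine frame with a SPAMM-like sinusoidal tag grid;
    spacing (in pixels) and tag contrast are illustrative assumptions."""
    h, w = cine_img.shape
    y, x = np.mgrid[0:h, 0:w]
    grid = ((1 - contrast * np.cos(2 * np.pi * x / spacing) ** 2)
            * (1 - contrast * np.cos(2 * np.pi * y / spacing) ** 2))
    return cine_img * grid
```

Transforms of this kind let large annotated cine datasets stand in for scarce tagged data during pretraining.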

17 pages, 5231 KiB  
Article
Jointly Learning Non-Cartesian k-Space Trajectories and Reconstruction Networks for 2D and 3D MR Imaging through Projection
by Chaithya Giliyar Radhakrishna and Philippe Ciuciu
Bioengineering 2023, 10(2), 158; https://doi.org/10.3390/bioengineering10020158 - 24 Jan 2023
Cited by 6 | Viewed by 2275
Abstract
Compressed sensing in magnetic resonance imaging essentially involves the optimization of (1) the sampling pattern in k-space under MR hardware constraints and (2) image reconstruction from undersampled k-space data. Recently, deep learning methods have allowed the community to address both problems simultaneously, especially in the non-Cartesian acquisition setting. This work aims to contribute to this field by tackling some major concerns in existing approaches. In particular, current state-of-the-art learning methods seek hardware-compliant k-space sampling trajectories by enforcing the hardware constraints through additional penalty terms in the training loss. Through ablation studies, we instead show the benefit of using a projection step to enforce these constraints and demonstrate that the resulting k-space trajectories are more flexible under a projection-based scheme, which results in superior reconstructed image quality. In 2D studies, trajectories learned with our novel method yield improved image reconstruction quality at a 20-fold acceleration factor on the fastMRI dataset, with SSIM scores of nearly 0.92–0.95 in our retrospective studies compared to the corresponding Cartesian reference, and a 3–4 dB gain in PSNR compared to earlier state-of-the-art methods. Finally, we extend the algorithm to 3D and, by comparing optimization-based and learning-based projection schemes, show that trajectories from the data-driven joint learning method outperform model-based methods such as SPARKLING, with a 2 dB gain in PSNR and a 0.02 gain in SSIM. Full article
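The projection idea can be illustrated in highly simplified form: clip the discrete gradient amplitudes of a candidate trajectory to the hardware maximum and re-integrate. This toy projector handles only a maximum-gradient constraint (not the slew-rate constraint, and not the authors' actual projection algorithm); `gmax`, `dt`, and `gamma` are illustrative parameters:

```python
import numpy as np

def project_max_gradient(k, gmax, dt=1.0, gamma=1.0):
    """Project a discrete k-space trajectory onto a max-gradient constraint.

    k: (N, d) array of trajectory samples; the gradient between samples is
    g[i] = (k[i+1] - k[i]) / (gamma * dt).  Gradients whose norm exceeds
    gmax are scaled down, and the trajectory is re-integrated from k[0].
    """
    g = np.diff(k, axis=0) / (gamma * dt)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    scale = np.minimum(1.0, gmax / np.maximum(norms, 1e-12))
    g = g * scale
    return np.concatenate([k[:1], k[:1] + np.cumsum(g * gamma * dt, axis=0)], axis=0)
```

In contrast to a penalty term, which only discourages violations during training, a projection of this kind guarantees the returned trajectory satisfies the constraint exactly.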
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Graphical abstract

13 pages, 4863 KiB  
Article
Fet-Net Algorithm for Automatic Detection of Fetal Orientation in Fetal MRI
by Joshua Eisenstat, Matthias W. Wagner, Logi Vidarsson, Birgit Ertl-Wagner and Dafna Sussman
Bioengineering 2023, 10(2), 140; https://doi.org/10.3390/bioengineering10020140 - 20 Jan 2023
Cited by 2 | Viewed by 1818
Abstract
Identifying fetal orientation is essential for determining the mode of delivery and for sequence planning in fetal magnetic resonance imaging (MRI). This manuscript describes a deep learning algorithm named Fet-Net, composed of convolutional neural networks (CNNs), which allows for the automatic detection of fetal orientation from a two-dimensional (2D) MRI slice. The architecture consists of four convolutional layers, which feed into a simple artificial neural network. Compared with eleven other prominent CNNs (different versions of ResNet, VGG, Xception, and Inception), Fet-Net has fewer architectural layers and parameters. From 144 3D MRI datasets indicative of vertex, breech, oblique and transverse fetal orientations, 6120 2D MRI slices were extracted to train, validate and test Fet-Net. Despite its simpler architecture, Fet-Net demonstrated an average accuracy and F1 score of 97.68% and a loss of 0.06828 on the 6120 2D MRI slices during a 5-fold cross-validation experiment, outperforming all eleven prominent architectures (p < 0.05). An ablation study demonstrated the statistically significant contribution of each component to Fet-Net’s performance. Fet-Net remained robust in classification accuracy even when noise was introduced to the images, outperforming eight of the eleven prominent architectures. Fet-Net’s ability to automatically detect fetal orientation can profoundly decrease the time required for fetal MRI acquisition. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Graphical abstract

10 pages, 2798 KiB  
Article
Automated MRI Field of View Prescription from Region of Interest Prediction by Intra-Stack Attention Neural Network
by Ke Lei, Ali B. Syed, Xucheng Zhu, John M. Pauly and Shreyas V. Vasanawala
Bioengineering 2023, 10(1), 92; https://doi.org/10.3390/bioengineering10010092 - 10 Jan 2023
Cited by 7 | Viewed by 1768
Abstract
Manual prescription of the field of view (FOV) by MRI technologists is variable and prolongs the scanning process. Often, the FOV is too large or crops critical anatomy. We propose a deep learning framework, trained under radiologist supervision, for automating FOV prescription. An intra-stack shared feature-extraction network and an attention network are used to process a stack of 2D image inputs to generate scalars defining the location of a rectangular region of interest (ROI). The attention mechanism makes the model focus on a small number of informative slices in a stack. Then, the smallest FOV that keeps the network-predicted ROI free of aliasing is calculated by an algebraic operation derived from MR sampling theory. The framework’s performance is examined quantitatively with intersection over union (IoU) and positional pixel error, and qualitatively with a reader study. The proposed model achieves an average IoU of 0.867 and an average ROI position error of 9.06 out of 512 pixels on 80 test cases, significantly better than two baseline models and not significantly different from a radiologist. Finally, the FOV given by the proposed framework achieves an acceptance rate of 92% from an experienced radiologist. Full article
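The "smallest alias-free FOV" step rests on Cartesian sampling theory: tissue outside a FOV of width w folds back by multiples of w, so one seeks the smallest w (here centered on the ROI) whose fold-backs all land outside the ROI. The following 1D brute-force search is an illustrative reconstruction of that idea, not the paper's closed-form algebraic operation:

```python
import numpy as np

def min_alias_free_fov(obj_positions, roi, max_fov=4096):
    """Smallest 1D FOV (centered on the ROI) such that tissue outside the
    FOV folds back onto positions *outside* the ROI.

    obj_positions: iterable of signal-bearing pixel positions.
    roi: (lo, hi) inclusive interval that must stay alias-free.
    """
    lo, hi = roi
    center = 0.5 * (lo + hi)
    for w in range(int(hi - lo + 1), max_fov + 1):
        left = center - w / 2.0
        ok = True
        for x in obj_positions:
            folded = (x - left) % w + left   # position after wrap-around
            if x < left or x >= left + w:    # only out-of-FOV tissue aliases
                if lo <= folded <= hi:
                    ok = False
                    break
        if ok:
            return w
    return None
```

Note the result can be much smaller than the object itself: aliasing is acceptable as long as the wrapped signal does not overlap the ROI.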
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Figure 1

18 pages, 39769 KiB  
Article
De-Aliasing and Accelerated Sparse Magnetic Resonance Image Reconstruction Using Fully Dense CNN with Attention Gates
by Md. Biddut Hossain, Ki-Chul Kwon, Shariar Md Imtiaz, Oh-Seung Nam, Seok-Hee Jeon and Nam Kim
Bioengineering 2023, 10(1), 22; https://doi.org/10.3390/bioengineering10010022 - 22 Dec 2022
Cited by 5 | Viewed by 2202
Abstract
When sparsely sampled data are used to accelerate magnetic resonance imaging (MRI), conventional reconstruction approaches produce significant artifacts that obscure the content of the image. To remove aliasing artifacts, we propose an advanced convolutional neural network (CNN) called fully dense attention CNN (FDA-CNN). We updated the U-Net model with fully dense connectivity and an attention mechanism for MRI reconstruction. The main benefit of FDA-CNN is that an attention gate in each decoder layer improves learning by focusing on the relevant image features and yields better network generalization by suppressing irrelevant activations. Moreover, densely interconnected convolutional layers reuse feature maps and prevent the vanishing-gradient problem. Additionally, we implement a new, efficient under-sampling pattern in the phase-encoding direction that takes low and high frequencies from k-space both randomly and deterministically. The performance of FDA-CNN was evaluated quantitatively and qualitatively with three different sub-sampling masks and datasets. Compared with five current deep learning-based and two compressed sensing MRI reconstruction techniques, the proposed method performed better, reconstructing smoother and brighter images. Furthermore, FDA-CNN improved the mean PSNR by 2 dB, SSIM by 0.35, and VIFP by 0.37 compared with U-Net for an acceleration factor of 5. Full article
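An under-sampling mask of the general kind described, a deterministically kept low-frequency center plus randomly chosen high frequencies along the phase-encoding axis, could be generated as below. The parameters (`accel`, `center_frac`) and the exact layout are illustrative assumptions, not the paper's pattern:

```python
import numpy as np

def phase_encode_mask(n_pe, accel=5, center_frac=0.08, seed=0):
    """1D variable-density mask along the phase-encoding direction.

    The low-frequency center of k-space is kept deterministically (fully
    sampled); the remaining budget is spent on uniformly random high
    frequencies, for an overall acceleration of roughly `accel`.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_pe, dtype=bool)
    n_keep = max(1, n_pe // accel)
    n_center = min(n_keep, max(1, int(round(center_frac * n_pe))))
    c = n_pe // 2
    mask[c - n_center // 2 : c - n_center // 2 + n_center] = True  # non-random low freqs
    remaining = np.flatnonzero(~mask)
    extra = n_keep - mask.sum()
    if extra > 0:
        mask[rng.choice(remaining, size=extra, replace=False)] = True  # random high freqs
    return mask
```

In 2D Cartesian imaging, the same 1D mask is applied to every readout line, since only the phase-encoding direction is under-sampled.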
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Figure 1

16 pages, 8183 KiB  
Article
Wave-Encoded Model-Based Deep Learning for Highly Accelerated Imaging with Joint Reconstruction
by Jaejin Cho, Borjan Gagoski, Tae Hyung Kim, Qiyuan Tian, Robert Frost, Itthi Chatnuntawech and Berkin Bilgic
Bioengineering 2022, 9(12), 736; https://doi.org/10.3390/bioengineering9120736 - 29 Nov 2022
Cited by 7 | Viewed by 1863
Abstract
A recently introduced model-based deep learning (MoDL) technique successfully incorporates convolutional neural network (CNN)-based regularizers into physics-based parallel imaging reconstruction using a small number of network parameters. Wave-controlled aliasing in parallel imaging (CAIPI) is an emerging parallel imaging method that accelerates imaging acquisition by employing sinusoidal gradients in the phase- and slice/partition-encoding directions during the readout to take better advantage of 3D coil sensitivity profiles. We propose wave-encoded MoDL (wave-MoDL), combining the wave-encoding strategy with unrolled network constraints for highly accelerated 3D imaging while enforcing data consistency. We extend wave-MoDL to reconstruct multicontrast data with CAIPI sampling patterns, leveraging the similarity between multiple images to improve the reconstruction quality. We further exploit this to enable rapid quantitative imaging using an interleaved Look-Locker acquisition sequence with a T2 preparation pulse (3D-QALAS). Wave-MoDL enables a 40 s MPRAGE acquisition at 1 mm resolution at 16-fold acceleration. For quantitative imaging, wave-MoDL permits a 1 min 50 s acquisition for T1, T2, and proton density mapping at 1 mm resolution at 12-fold acceleration, from which contrast-weighted images can be synthesized as well. In conclusion, wave-MoDL allows rapid MR acquisition and high-fidelity image reconstruction and may facilitate clinical and neuroscientific applications by incorporating unrolled neural networks into wave-CAIPI reconstruction. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Graphical abstract

16 pages, 2542 KiB  
Article
SelfCoLearn: Self-Supervised Collaborative Learning for Accelerating Dynamic MR Imaging
by Juan Zou, Cheng Li, Sen Jia, Ruoyou Wu, Tingrui Pei, Hairong Zheng and Shanshan Wang
Bioengineering 2022, 9(11), 650; https://doi.org/10.3390/bioengineering9110650 - 04 Nov 2022
Cited by 6 | Viewed by 1639
Abstract
Recently, deep learning technology has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved. However, without fully sampled reference data for training, current approaches may have limited abilities in recovering fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction directly from undersampled k-space data. The proposed SelfCoLearn is equipped with three important components, namely, dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework is flexible and can be integrated into various model-based iterative unrolled networks. The proposed method was evaluated on an in vivo dataset and compared to four state-of-the-art methods. The results show that the proposed method possesses strong capabilities in capturing essential and inherent representations for direct reconstruction from undersampled k-space data and thus enables high-quality and fast dynamic MR imaging. Full article
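The re-undersampling augmentation can be sketched as splitting the acquired k-space locations into two overlapping subsets, one per collaborating network, whose union recovers the original mask. The keep probability `rho` and the forced-union rule below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def reundersample(mask, rho=0.6, seed=0):
    """Split an acquired k-space mask into two overlapping subsets.

    Each sampled location is kept in subset 1 with probability rho and in
    subset 2 with probability rho, independently, but every location is
    forced into at least one subset so their union equals the input mask.
    """
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(mask)
    in1 = rng.random(idx.size) < rho
    in2 = rng.random(idx.size) < rho
    neither = ~(in1 | in2)
    in1[neither] = True          # guarantee the union covers the full mask
    m1 = np.zeros_like(mask, dtype=bool)
    m2 = np.zeros_like(mask, dtype=bool)
    m1[idx[in1]] = True
    m2[idx[in2]] = True
    return m1, m2
```

Feeding the two further-undersampled inputs to two networks and penalizing disagreement on the original measurements is what makes the training self-supervised: no fully sampled reference is ever needed.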
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Figure 1

18 pages, 7138 KiB  
Article
Deep Learning-Based Water-Fat Separation from Dual-Echo Chemical Shift-Encoded Imaging
by Yan Wu, Marcus Alley, Zhitao Li, Keshav Datta, Zhifei Wen, Christopher Sandino, Ali Syed, Hongyi Ren, Lei Xing, Michael Lustig, John Pauly and Shreyas Vasanawala
Bioengineering 2022, 9(10), 579; https://doi.org/10.3390/bioengineering9100579 - 19 Oct 2022
Cited by 3 | Viewed by 2089
Abstract
Conventional water–fat separation approaches suffer from long computation times and are prone to water/fat swaps. To solve these problems, we propose a deep learning-based dual-echo water–fat separation method. With IRB approval, raw data from 68 pediatric clinically indicated dual-echo scans were analyzed, corresponding to 19,382 contrast-enhanced images. A densely connected hierarchical convolutional network was constructed, in which dual-echo images and corresponding echo times were used as input, and water/fat images obtained using the projected power method were regarded as references. Models were trained and tested on knee images with 8-fold cross-validation and validated on out-of-distribution data from the ankle, foot, and arm. Using the proposed method, the average computation time for a volumetric dataset with ~400 slices was reduced from 10 min to under one minute. High fidelity was achieved (correlation coefficient of 0.9969, L1 error of 0.0381, SSIM of 0.9740, PSNR of 58.6876) and water/fat swaps were mitigated. It is of particular interest that metal artifacts were substantially reduced, even when the training set contained no images with metallic implants. Using models trained with only contrast-enhanced images, water/fat images were predicted from non-contrast-enhanced images with high fidelity. The proposed water–fat separation method is thus fast, robust, and has the added capability to compensate for metal artifacts. Full article
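For context, the classical two-point Dixon relation that underlies dual-echo water–fat separation reduces to a simple sum and difference when the two echoes are acquired perfectly in phase and opposed phase and field inhomogeneity has already been corrected. (The paper itself uses the more robust projected power method to generate its reference labels; this idealized relation is shown only to convey the physics.)

```python
import numpy as np

def two_point_dixon(s_in, s_op):
    """Idealized two-point Dixon separation.

    Assumes echo 1 is acquired with water and fat in phase (s_in = W + F)
    and echo 2 with them opposed (s_op = W - F); the separation is then a
    sum and a difference.  Real data require B0/phase correction first.
    """
    water = 0.5 * (s_in + s_op)
    fat = 0.5 * (s_in - s_op)
    return water, fat
```

Water/fat swaps arise precisely because, on real data, the phase correction is ambiguous; a voxel resolved with the wrong phase exchanges the two outputs.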
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Graphical abstract

18 pages, 4553 KiB  
Article
Quantification of Intra-Muscular Adipose Infiltration in Calf/Thigh MRI Using Fully and Weakly Supervised Semantic Segmentation
by Rula Amer, Jannette Nassar, Amira Trabelsi, David Bendahan, Hayit Greenspan and Noam Ben-Eliezer
Bioengineering 2022, 9(7), 315; https://doi.org/10.3390/bioengineering9070315 - 14 Jul 2022
Cited by 7 | Viewed by 2143
Abstract
Purpose: Infiltration of fat into lower limb muscles is one of the key markers for the severity of muscle pathologies. The level of fat infiltration varies in its severity across and within patients, and it is traditionally estimated using visual radiologic inspection. Precise quantification of the severity and spatial distribution of this pathological process requires accurate segmentation of lower limb anatomy into muscle and fat. Methods: Quantitative magnetic resonance imaging (qMRI) of the calf and thigh muscles is one of the most effective techniques for estimating pathological accumulation of intra-muscular adipose tissue (IMAT) in muscular dystrophies. In this work, we present a new deep learning (DL) network tool for automated and robust segmentation of lower limb anatomy that is based on the quantification of MRI’s transverse (T2) relaxation time. The network was used to segment calf and thigh anatomies into viable muscle areas and IMAT using a weakly supervised learning process. A new disease biomarker was calculated, reflecting the level of abnormal fat infiltration and disease state. This biomarker was then applied on two patient populations suffering from dysferlinopathy and Charcot–Marie–Tooth (CMT) diseases. Results: Comparison of manual vs. automated segmentation of muscle anatomy, viable muscle areas, and IMAT produced high Dice similarity coefficients (DSCs) of 96.4%, 91.7%, and 93.3%, respectively. Linear regression between biomarker values calculated from ground-truth and from automatic segmentations produced high correlation coefficients of 97.7% and 95.9% for the dysferlinopathy and CMT patients, respectively. Conclusions: Using a combination of qMRI and DL-based segmentation, we present a new quantitative biomarker of disease severity. 
This biomarker is automatically calculated and, most importantly, provides a spatially global indication for the state of the disease across the entire thigh or calf. Full article
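A global biomarker of this kind can be sketched as the IMAT fraction of the whole muscle compartment, computed from the two segmentation masks. This is an illustrative formula under the assumption that the biomarker is an area ratio; the paper's exact definition may differ:

```python
import numpy as np

def fat_infiltration_fraction(imat_mask, muscle_mask):
    """A simple global fat-infiltration biomarker for a thigh/calf volume:
    the fraction of the total muscle compartment occupied by IMAT.

    imat_mask:   boolean mask of intra-muscular adipose tissue.
    muscle_mask: boolean mask of viable muscle.
    """
    imat = imat_mask.astype(bool).sum()
    viable = muscle_mask.astype(bool).sum()
    total = imat + viable
    return imat / total if total else 0.0
```

Because the masks cover the entire thigh or calf, the ratio summarizes the disease state globally rather than slice by slice, which is the property the conclusion emphasizes.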
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Graphical abstract

Review

Jump to: Editorial, Research

27 pages, 7834 KiB  
Review
Deep Learning-Based Reconstruction for Cardiac MRI: A Review
by Julio A. Oscanoa, Matthew J. Middione, Cagan Alkan, Mahmut Yurt, Michael Loecher, Shreyas S. Vasanawala and Daniel B. Ennis
Bioengineering 2023, 10(3), 334; https://doi.org/10.3390/bioengineering10030334 - 06 Mar 2023
Cited by 9 | Viewed by 4924
Abstract
Cardiac magnetic resonance (CMR) is an essential clinical tool for the assessment of cardiovascular disease. Deep learning (DL) has recently revolutionized the field through image reconstruction techniques that allow unprecedented data undersampling rates. These fast acquisitions have the potential to considerably impact the diagnosis and treatment of cardiovascular disease. Herein, we provide a comprehensive review of DL-based reconstruction methods for CMR. We place special emphasis on state-of-the-art unrolled networks, which are heavily based on a conventional image reconstruction framework. We review the main DL-based methods and connect them to the relevant conventional reconstruction theory. Next, we review several methods developed to tackle specific challenges that arise from the characteristics of CMR data. Then, we focus on DL-based methods developed for specific CMR applications, including flow imaging, late gadolinium enhancement, and quantitative tissue characterization. Finally, we discuss the pitfalls and future outlook of DL-based reconstructions in CMR, focusing on the robustness, interpretability, clinical deployment, and potential for new methods. Full article
(This article belongs to the Special Issue AI in MRI: Frontiers and Applications)

Figure 1
