Advances in Quantitative Imaging Analysis: From Theory to Practice

A special issue of BioMedInformatics (ISSN 2673-7426). This special issue belongs to the section "Imaging Informatics".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 6759

Special Issue Editors

Dr. Federico Mastroleo
1. Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy
2. Department of Translational Medicine, University of Piemonte Orientale (UPO), Via Solaroli 17, 28100 Novara, Italy
Interests: radiotherapy; artificial intelligence; machine learning; process mining; radiomics

Dr. Angela Ammirabile
1. Department of Diagnostic and Interventional Radiology, IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
2. Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
Interests: radiology; abdominal radiology; thoracic radiology; radiomics; quantitative imaging analysis; machine learning

Dr. Giulia Marvaso
Department of Radiotherapy, European Institute of Oncology (IEO) IRCCS, 20141 Milan, Italy
Interests: urological malignancies; radiation oncology; new fractionation protocols; treatment accuracy; patient’s quality of life; prognostic and predictive factors; SBRT hypofractionation; oligometastatic disease

Special Issue Information

Dear Colleagues,

This special issue on quantitative imaging analysis seeks to bring together the latest innovations and advancements in the field of imaging analysis. With a focus on quantitative methods for extracting meaningful information from images, it will cover a wide range of imaging modalities, including microscopy, medical imaging, and remote sensing.

The articles in this special issue will showcase the latest developments in quantitative imaging analysis, covering both theoretical and practical aspects. Experts will present their work on various topics, including image segmentation, registration, feature extraction, pattern recognition, and image-based modeling. The articles will highlight the challenges and opportunities in this rapidly evolving field, while promoting interdisciplinary collaboration among researchers from different disciplines.

This special issue will provide a comprehensive overview of the current state of the art in quantitative imaging analysis and aims to promote further research and development in the field. By presenting the latest advancements in imaging analysis, it will serve as a valuable resource for researchers, practitioners, and students interested in this area.

Dr. Federico Mastroleo
Dr. Angela Ammirabile
Dr. Giulia Marvaso
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. BioMedInformatics is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • quantitative analysis
  • imaging
  • image processing
  • computer vision
  • biomedical engineering
  • image segmentation
  • pattern recognition
  • radiomics

Published Papers (4 papers)


Research

21 pages, 6951 KiB  
Article
Enhancing Brain Tumor Classification with Transfer Learning across Multiple Classes: An In-Depth Analysis
by Syed Ahmmed, Prajoy Podder, M. Rubaiyat Hossain Mondal, S M Atikur Rahman, Somasundar Kannan, Md Junayed Hasan, Ali Rohan and Alexander E. Prosvirin
BioMedInformatics 2023, 3(4), 1124-1144; https://doi.org/10.3390/biomedinformatics3040068 - 06 Dec 2023
Cited by 4 | Viewed by 1269
Abstract
This study focuses on leveraging data-driven techniques to diagnose brain tumors from magnetic resonance imaging (MRI) images. Using deep learning (DL), we introduce and fine-tune two robust frameworks, ResNet 50 and Inception V3, specifically designed for the classification of brain MRI images. Building upon the previous success of ResNet 50 and Inception V3 in classifying other medical imaging datasets, our investigation encompasses datasets with distinct characteristics, including one with four classes and another with two. The primary contribution of our research lies in the meticulous curation of these paired datasets. We have also integrated essential techniques, including Early Stopping and ReduceLROnPlateau, to refine the models through hyperparameter optimization. This involved adding extra layers, experimenting with various loss functions and learning rates, and incorporating dropout layers and regularization to ensure model convergence in predictions. Furthermore, strategic enhancements, such as customized pooling and regularization layers, have significantly elevated the accuracy of our models, resulting in remarkable classification accuracy. Notably, the pairing of ResNet 50 with the Nadam optimizer yields extraordinary accuracy rates, reaching 99.34% for gliomas, 93.52% for meningiomas, 98.68% for non-tumorous images, and 97.70% for pituitary tumors. These results underscore the transformative potential of our custom-made approach, achieving an aggregate testing accuracy of 97.68% for these four distinct classes. In the two-class dataset, ResNet 50 with the Adam optimizer excels, demonstrating better precision, recall, and F1 score, and an overall accuracy of 99.84%. Moreover, it attains per-class accuracies of 99.62% for ‘Tumor Positive’ and 100% for ‘Tumor Negative’, underscoring a remarkable advancement in the realm of brain tumor categorization. This research underscores the innovative possibilities of DL models and our specialized optimization methods in the domain of diagnosing brain cancer from MRI images.
(This article belongs to the Special Issue Advances in Quantitative Imaging Analysis: From Theory to Practice)
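
Editorial note: for readers unfamiliar with the training setup described in the abstract above, the following is a minimal, illustrative Keras sketch of transfer learning with a ResNet 50 backbone, the Nadam optimizer, and the EarlyStopping and ReduceLROnPlateau callbacks it mentions. Image size, layer sizes, and hyperparameter values are placeholder assumptions, not the authors' actual configuration.

    import tensorflow as tf
    from tensorflow.keras import layers, models, callbacks

    # Pre-trained ResNet 50 backbone without its classification head.
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # train only the new head at first

    # Custom head: pooling, dropout, regularization, and a 4-way softmax
    # (glioma, meningioma, no tumor, pituitary).
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(256, activation="relu",
                     kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
        layers.Dense(4, activation="softmax"),
    ])

    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])

    # Callbacks named in the abstract: stop early and reduce the learning
    # rate when the validation loss plateaus.
    cbs = [
        callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                restore_best_weights=True),
        callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3),
    ]

    # model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=cbs)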

14 pages, 953 KiB  
Article
Federated Learning for Diabetic Retinopathy Detection Using Vision Transformers
by Mohamed Chetoui and Moulay A. Akhloufi
BioMedInformatics 2023, 3(4), 948-961; https://doi.org/10.3390/biomedinformatics3040058 - 01 Nov 2023
Cited by 1 | Viewed by 1464
Abstract
Diabetic retinopathy (DR), a common consequence of diabetes mellitus, causes lesions on the retina that impair vision. It can cause blindness if not detected in time. Unfortunately, DR cannot be reversed, and treatment simply keeps eyesight intact. The risk of vision loss can be considerably decreased with early detection and treatment of DR. Ophthalmologists must diagnose DR manually from retinal fundus images, which is time-consuming, labor-intensive, and costly. It is also more prone to error than computer-aided diagnosis methods. Deep learning has recently become one of the methods used most frequently to improve performance in a variety of fields, including medical image analysis and classification. In this paper, we develop a federated learning approach to detect diabetic retinopathy using four distributed institutions in order to build a robust model. Our federated learning approach is based on the Vision Transformer architecture to classify DR and Normal cases. Several performance measures were used, such as accuracy, area under the curve (AUC), sensitivity, and specificity. The results show an improvement of up to 3% in terms of accuracy with the proposed federated learning technique. The technique also addresses crucial issues such as data security, data access rights, and data protection.
(This article belongs to the Special Issue Advances in Quantitative Imaging Analysis: From Theory to Practice)
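
Editorial note: as background for the federated setting described above, the sketch below shows the core federated averaging (FedAvg) step in plain NumPy, in which each institution trains locally and only its model weights, weighted by local sample counts, are aggregated centrally. It is a generic illustration, not the authors' implementation, and omits the Vision Transformer itself and all communication and security machinery.

    import numpy as np

    def federated_average(client_weights, client_sizes):
        """Aggregate per-client weight lists into one global model.

        client_weights: list of weight lists (one list of np.ndarrays per client)
        client_sizes:   number of local training samples per client
        """
        total = float(sum(client_sizes))
        num_layers = len(client_weights[0])
        global_weights = []
        for layer in range(num_layers):
            # Weighted mean of this layer's parameters across clients.
            layer_avg = sum(w[layer] * (n / total)
                            for w, n in zip(client_weights, client_sizes))
            global_weights.append(layer_avg)
        return global_weights

    # Toy example with four "institutions" and a two-layer model.
    rng = np.random.default_rng(0)
    clients = [[rng.normal(size=(3, 3)), rng.normal(size=(3,))] for _ in range(4)]
    sizes = [120, 80, 200, 150]
    global_model = federated_average(clients, sizes)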

21 pages, 643 KiB  
Article
Multimodal Deep Learning Methods on Image and Textual Data to Predict Radiotherapy Structure Names
by Priyankar Bose, Pratip Rana, William C. Sleeman IV, Sriram Srinivasan, Rishabh Kapoor, Jatinder Palta and Preetam Ghosh
BioMedInformatics 2023, 3(3), 493-513; https://doi.org/10.3390/biomedinformatics3030034 - 25 Jun 2023
Cited by 1 | Viewed by 1835
Abstract
Physicians often label anatomical structure sets in Digital Imaging and Communications in Medicine (DICOM) images with nonstandard, arbitrary names. Hence, the standardization of these names for the Organs at Risk (OARs), Planning Target Volumes (PTVs), and ‘Other’ organs is a vital problem. This paper presents novel deep learning methods on structure sets by integrating multimodal data compiled from the radiotherapy centers of the US Veterans Health Administration (VHA) and Virginia Commonwealth University (VCU). These de-identified data comprise 16,290 prostate structures. Our method integrates the multimodal textual and imaging data with Convolutional Neural Network (CNN)-based deep learning approaches such as CNN, Visual Geometry Group (VGG) network, and Residual Network (ResNet), and shows improved results in prostate radiotherapy structure name standardization. Evaluation with the macro-averaged F1 score shows that our model with single-modal textual data usually performs better than previous studies. The models perform well on textual data alone, while the addition of imaging data shows that deep neural networks can achieve better performance by using information present in other modalities. Additionally, using masked images and masked doses along with text leads to better overall performance with the CNN-based architectures than using all the modalities together. Undersampling the majority class leads to further performance enhancement. The VGG network on the masked image-dose data combined with CNNs on the text data performs the best and represents the state of the art in this domain.
(This article belongs to the Special Issue Advances in Quantitative Imaging Analysis: From Theory to Practice)
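
Editorial note: to make the multimodal setup above more concrete, here is a minimal, hypothetical Keras sketch that fuses a small CNN branch over a masked image/dose slice with a text branch over tokenized structure names, then classifies into OAR / PTV / Other. Input shapes, vocabulary size, and layer sizes are illustrative assumptions and do not reflect the architecture actually used in the paper.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    # Image branch: a small CNN over a placeholder 64x64 two-channel
    # (masked image + masked dose) slice.
    img_in = layers.Input(shape=(64, 64, 2), name="image_dose")
    x = layers.Conv2D(32, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Text branch: embedding + pooling over tokenized structure-name text.
    txt_in = layers.Input(shape=(16,), name="structure_name_tokens")
    t = layers.Embedding(input_dim=5000, output_dim=64)(txt_in)
    t = layers.GlobalAveragePooling1D()(t)

    # Late fusion: concatenate the two feature vectors and classify
    # into OAR / PTV / Other (3 classes here for illustration).
    fused = layers.concatenate([x, t])
    fused = layers.Dense(128, activation="relu")(fused)
    out = layers.Dense(3, activation="softmax")(fused)

    model = Model(inputs=[img_in, txt_in], outputs=out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])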

17 pages, 3456 KiB  
Article
Generation of Musculoskeletal Ultrasound Images with Diffusion Models
by Sofoklis Katakis, Nikolaos Barotsis, Alexandros Kakotaritis, Panagiotis Tsiganos, George Economou, Elias Panagiotopoulos and George Panayiotakis
BioMedInformatics 2023, 3(2), 405-421; https://doi.org/10.3390/biomedinformatics3020027 - 23 May 2023
Cited by 1 | Viewed by 1545
Abstract
The recent advances in deep learning have revolutionised computer-aided diagnosis in medical imaging. However, deep learning approaches require significant amounts of data to unveil their full potential, which can be a challenge in some scientific fields, such as musculoskeletal ultrasound imaging, in which data privacy and security concerns can severely limit the acquisition and distribution of patients’ data. For this reason, different generative methods have been introduced to significantly reduce the required amount of real data by generating synthetic images that are almost indistinguishable from the real ones. In this study, diffusion models are employed to generate realistic data from a small set of musculoskeletal ultrasound images of four different muscles. Afterwards, the similarity of the generated and real images is assessed with different types of qualitative and quantitative metrics that correspond well with human judgement. In particular, the histograms of pixel intensities of the two sets of images demonstrate that the two distributions are statistically similar. Additionally, the well-established LPIPS, SSIM, FID, and PSNR metrics are used to quantify the similarity of these sets of images, and the two sets achieve extremely high similarity scores in all of them. Subsequently, high-level features are extracted from the two types of images and visualised in a two-dimensional space to inspect their structure and identify patterns. In this representation, the two sets of images are hard to distinguish. Finally, we perform a series of experiments to assess the impact of the generated data when training a highly efficient Attention-UNet for the important clinical application of muscle thickness measurement. Our results show that the synthetic data play a significant role in the model’s final performance and can lead to the improvement of deep learning systems in musculoskeletal ultrasound.
(This article belongs to the Special Issue Advances in Quantitative Imaging Analysis: From Theory to Practice)
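
Editorial note: for readers who want to reproduce the kind of similarity checks mentioned above, the sketch below compares a real and a synthetic grayscale image with SSIM and PSNR using scikit-image. The arrays are random placeholders standing in for ultrasound frames, and LPIPS and FID are omitted because they require additional learned-feature models.

    import numpy as np
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    def compare_images(real, synthetic):
        """Return (SSIM, PSNR) for two grayscale images scaled to [0, 1]."""
        ssim = structural_similarity(real, synthetic, data_range=1.0)
        psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
        return ssim, psnr

    # Placeholder arrays standing in for a real and a generated ultrasound frame.
    rng = np.random.default_rng(0)
    real = rng.random((256, 256))
    synthetic = np.clip(real + rng.normal(scale=0.05, size=real.shape), 0.0, 1.0)

    ssim, psnr = compare_images(real, synthetic)
    print(f"SSIM: {ssim:.3f}  PSNR: {psnr:.2f} dB")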
