Medical Imaging & Image Processing III

A special issue of Technologies (ISSN 2227-7080). This special issue belongs to the section "Information and Communication Technologies".

Deadline for manuscript submissions: 30 September 2024 | Viewed by 20147

Special Issue Editors


Prof. Dr. Yudong Zhang
Guest Editor
School of Informatics, University of Leicester, Informatics Building, Leicester LE1 7RH, UK
Interests: deep learning; artificial intelligence; machine learning

Dr. Zhengchao Dong
Guest Editor
1. Professor, Molecular Imaging and Neuropathology Division, Columbia University, New York, NY 10032, USA
2. Research Scientist, New York State Psychiatric Institute, New York, NY 10032, USA
Interests: magnetic resonance spectroscopy imaging

Special Issue Information

Dear Colleagues,

Medical imaging is becoming an essential component in various fields of biomedical research and clinical practice: neuroscientists detect regional metabolic brain activity from positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and magnetic resonance spectroscopic imaging (MRSI) scans. Biologists study cells and generate 3D confocal microscopy datasets. Virologists generate 3D reconstructions of viruses from micrographs. Radiologists identify and quantify tumors from MRI and computed tomography (CT) scans.

Image processing, in turn, covers the analysis, enhancement, and display of biomedical images. Image reconstruction and modeling techniques allow instant processing of 2D signals to create 3D images. Image processing and analysis can be used to determine the diameter, volume, and vasculature of a tumor or organ; the flow parameters of blood or other fluids; and microscopic changes that are not yet otherwise discernible. Image classification techniques help to detect subjects suffering from particular diseases and to localize disease-related regions.

This Special Issue aims to provide a diverse, but complementary, set of contributions to demonstrate new developments and applications in the field of medical imaging and image processing.

The previous Special Issues in this series can be found here:

https://www.mdpi.com/journal/technologies/special_issues/medical_imaging

https://www.mdpi.com/journal/technologies/special_issues/medical_imaging_II

Prof. Dr. Yudong Zhang
Dr. Zhengchao Dong
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Technologies is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical imaging
  • magnetic resonance imaging
  • neuroimaging
  • X-ray
  • computed tomography
  • mammography
  • image processing and analysis
  • computer vision
  • machine learning
  • artificial intelligence
  • deep learning

Published Papers (7 papers)


Editorial


4 pages, 451 KiB  
Editorial
Medical Imaging and Image Processing
by Yudong Zhang and Zhengchao Dong
Technologies 2023, 11(2), 54; https://doi.org/10.3390/technologies11020054 - 05 Apr 2023
Cited by 5 | Viewed by 4724
Abstract
Medical imaging (MI) [...]
(This article belongs to the Special Issue Medical Imaging & Image Processing III)

Research


42 pages, 13997 KiB  
Article
Multi-Scale CNN: An Explainable AI-Integrated Unique Deep Learning Framework for Lung-Affected Disease Classification
by Ovi Sarkar, Md. Robiul Islam, Md. Khalid Syfullah, Md. Tohidul Islam, Md. Faysal Ahamed, Mominul Ahsan and Julfikar Haider
Technologies 2023, 11(5), 134; https://doi.org/10.3390/technologies11050134 - 30 Sep 2023
Cited by 2 | Viewed by 2324
Abstract
Lung-related diseases continue to be a leading cause of global mortality. Timely and precise diagnosis is crucial to save lives, but the availability of testing equipment remains a challenge, often coupled with issues of reliability. Recent research has highlighted the potential of chest X-ray (CXR) images in identifying various lung diseases, including COVID-19, fibrosis, pneumonia, and more. In this comprehensive study, four publicly accessible datasets were combined to create a robust dataset comprising 6650 CXR images, categorized into seven distinct disease groups. To effectively distinguish between normal subjects and six different lung-related diseases (namely, bacterial pneumonia, COVID-19, fibrosis, lung opacity, tuberculosis, and viral pneumonia), a deep learning (DL) architecture called a Multi-Scale Convolutional Neural Network (MS-CNN) is introduced. The model is adapted to classify a large number of lung-disease classes, a persistent challenge in the field. While prior studies have demonstrated high accuracy in binary and limited-class scenarios, the proposed framework maintains this accuracy across a diverse range of lung conditions. The model harnesses the power of combining predictions from multiple feature maps at different resolution scales, significantly enhancing disease classification accuracy. The approach aims to shorten testing duration compared to state-of-the-art models, offering a potential solution toward expediting medical interventions for patients with lung-related diseases, and integrates explainable AI (XAI) to enhance prediction capability. The results demonstrated an impressive accuracy of 96.05%, with average values for precision, recall, F1-score, and AUC of 0.97, 0.95, 0.95, and 0.94, respectively, for the seven-class classification. The model exhibited exceptional performance across multi-class classifications, achieving accuracy rates of 100%, 99.65%, 99.21%, 98.67%, and 97.47% for two-, three-, four-, five-, and six-class scenarios, respectively. The novel approach not only surpasses many pre-existing state-of-the-art (SOTA) methodologies but also sets a new standard for the diagnosis of lung-affected diseases using multi-class CXR data. Furthermore, the integration of XAI techniques such as SHAP and Grad-CAM enhanced the transparency and interpretability of the model’s predictions. The findings hold immense promise for accelerating and improving the accuracy and confidence of diagnostic decisions in the field of lung disease identification.
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
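The fusion scheme described above (combining predictions from feature maps at different resolution scales) can be sketched briefly. The following is a minimal, hypothetical PyTorch illustration; the layer widths, depth, and the choice to fuse by averaging per-scale logits are assumptions for exposition, not the authors' exact MS-CNN:

```python
# Hypothetical sketch of multi-scale prediction fusion for 7-class CXR data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.stage1 = nn.Sequential(  # high-resolution, shallow features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(  # low-resolution, deep features
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        # One classifier head per scale; global pooling equalizes spatial size.
        self.heads = nn.ModuleList(
            nn.Linear(c, num_classes) for c in (32, 64, 128))

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        logits = [head(F.adaptive_avg_pool2d(f, 1).flatten(1))
                  for head, f in zip(self.heads, (f1, f2, f3))]
        return torch.stack(logits).mean(dim=0)  # fuse per-scale predictions

model = MultiScaleCNN()
print(model(torch.randn(2, 1, 224, 224)).shape)  # torch.Size([2, 7])
```

Averaging logits is only one plausible fusion strategy; concatenating the pooled features before a single head would be an equally reasonable reading of the abstract.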

17 pages, 6492 KiB  
Article
Multi-Classification of Lung Infections Using Improved Stacking Convolution Neural Network
by Usharani Bhimavarapu, Nalini Chintalapudi and Gopi Battineni
Technologies 2023, 11(5), 128; https://doi.org/10.3390/technologies11050128 - 17 Sep 2023
Cited by 1 | Viewed by 1542
Abstract
Lung disease is a respiratory illness that poses a high risk to people worldwide and includes pneumonia and COVID-19. As such, quick and precise identification of lung disease is vital in medical treatment. Early detection and diagnosis can significantly reduce the life-threatening nature of lung diseases and improve patients’ quality of life. Chest X-ray and computed tomography (CT) scan images are currently the best techniques to detect and diagnose lung infection. Increasing the number of chest X-ray or CT scan images available at training time helps address overfitting, which deteriorates the performance of the model and gives inaccurate results, and enables meaningful multi-class classification of lung diseases. This study reduces the overfitting issue and the computational complexity by proposing a new enhanced kernel convolution function. Alongside the enhanced kernel convolution function, this study used convolutional neural network (CNN) models to detect pneumonia and COVID-19. Each CNN model was applied to the collected dataset to extract features, which were later used as input to the classification models. This study shows that extracting deep features from the common layers of the CNN models increased the performance of the classification procedure. The multi-class classification improves the diagnostic performance, and the evaluation metrics improved significantly with the improved support vector machine (SVM). The best results were obtained using the improved SVM classifier fed with the features provided by the CNN, achieving a success rate of 99.8%.
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
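The pipeline above (deep features extracted from CNN layers, then classified with an SVM) can be sketched as follows. This assumes a torchvision backbone and scikit-learn's standard SVC; the paper's enhanced kernel convolution function is not reproduced here, and all names and shapes are illustrative:

```python
# Illustrative sketch: CNN feature extraction followed by an SVM classifier.
import numpy as np
import torch
import torchvision.models as models
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

backbone = models.resnet18(weights=None)  # or a pretrained weight enum
backbone.fc = torch.nn.Identity()         # expose 512-d penultimate features
backbone.eval()

def extract_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, H, W) preprocessed chest X-ray / CT tensors."""
    with torch.no_grad():
        return backbone(images).numpy()

# Dummy stand-ins for a labelled lung-image dataset with three classes.
X = extract_features(torch.randn(60, 3, 224, 224))
y = np.repeat([0, 1, 2], 20)  # e.g., normal / pneumonia / COVID-19

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y)
clf = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```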

27 pages, 27415 KiB  
Article
Efficient Deep Learning-Based Data-Centric Approach for Autism Spectrum Disorder Diagnosis from Facial Images Using Explainable AI
by Mohammad Shafiul Alam, Muhammad Mahbubur Rashid, Ahmed Rimaz Faizabadi, Hasan Firdaus Mohd Zaki, Tasfiq E. Alam, Md Shahin Ali, Kishor Datta Gupta and Md Manjurul Ahsan
Technologies 2023, 11(5), 115; https://doi.org/10.3390/technologies11050115 - 29 Aug 2023
Cited by 2 | Viewed by 2961
Abstract
This research describes an effective deep learning-based, data-centric approach for diagnosing autism spectrum disorder (ASD) from facial images. To classify ASD and non-ASD subjects, the method trains a convolutional neural network on a facial image dataset. As part of the data-centric approach, this research applies pre-processing and synthesizing of the training dataset. The trained model is subsequently evaluated on an independent test set in order to assess the performance metrics of the various data-centric approaches. The results reveal that the proposed method, which applies the pre-processing and augmentation approaches to the training dataset simultaneously, outperforms recent works, achieving an excellent 98.9% prediction accuracy, sensitivity, and specificity, with a 99.9% AUC. This work enhances the clarity and comprehensibility of the algorithm by integrating explainable AI techniques, providing clinicians with valuable and interpretable insights into the decision-making process of the ASD diagnosis model.
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
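The data-centric split described above (pre-process and augment the training set, leave the test set untouched apart from resizing) might look like this with torchvision transforms; the specific augmentations are assumptions, not the authors' exact pipeline:

```python
# Sketch: augmented training transform vs. plain evaluation transform.
from torchvision import transforms

_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225])

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),   # synthesize extra training samples
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    _normalize,
])

test_tf = transforms.Compose([           # no augmentation at evaluation time
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    _normalize,
])
```

Keeping the independent test set free of synthetic variation is what makes the comparison between data-centric variants in the abstract meaningful.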

18 pages, 2301 KiB  
Article
The U-Net Family for Epicardial Adipose Tissue Segmentation and Quantification in Low-Dose CT
by Lu Liu, Runlei Ma, Peter M. A. van Ooijen, Matthijs Oudkerk, Rozemarijn Vliegenthart, Raymond N. J. Veldhuis and Christoph Brune
Technologies 2023, 11(4), 104; https://doi.org/10.3390/technologies11040104 - 05 Aug 2023
Viewed by 1554
Abstract
Epicardial adipose tissue (EAT) is located between the visceral pericardium and the myocardium, and EAT volume is correlated with cardiovascular risk. Many deep learning-based automated EAT segmentation and quantification methods in the U-net family have been developed to reduce the workload for radiologists. The automatic assessment of EAT on non-contrast low-dose CT calcium score images poses a greater challenge than automatic assessment on coronary CT angiography, which requires a higher radiation dose to capture the intricate details of the coronary arteries. This study comprehensively examined and evaluated state-of-the-art segmentation methods while outlining future research directions. The dataset consisted of 154 non-contrast low-dose CT scans from the ROBINSCA study, with two types of labels: (a) the region inside the pericardium and (b) pixel-wise EAT labels. Four advanced methods from the U-net family were selected: 3D U-net, 3D attention U-net, an extended 3D attention U-net, and U-net++. For evaluation, both four-fold cross-validation and hold-out tests were performed. Agreement between the automatic segmentation/quantification and the manual quantification was evaluated with the Pearson correlation and Bland–Altman analysis. Generally, the models trained with label type (a) showed better performance than models trained with label type (b). The U-net++ model trained with label type (a) showed the best performance for segmentation and quantification, efficiently providing better EAT segmentation results (hold-out test: DSC = 80.18 ± 0.20%, mIoU = 67.13 ± 0.39%, sensitivity = 81.47 ± 0.43%, specificity = 99.64 ± 0.00%, Pearson correlation = 0.9405) and EAT volume than the other U-net-based networks and a recent EAT segmentation method. Interestingly, the findings indicate that 3D convolutional neural networks do not consistently outperform 2D networks in EAT segmentation and quantification. Moreover, utilizing labels representing the region inside the pericardium proved advantageous in training more accurate EAT segmentation models. These insights highlight the potential of deep learning-based methods for achieving robust EAT segmentation and quantification outcomes.
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
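For reference, the two overlap scores quoted in the abstract (DSC and mIoU) are standard metrics and can be computed for a binary EAT mask as below; this is a generic sketch, not the study's evaluation code:

```python
# Dice similarity coefficient (DSC) and IoU for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

pred = np.array([[0, 1, 1], [0, 1, 0]])    # predicted mask
target = np.array([[0, 1, 0], [0, 1, 1]])  # manual (reference) mask
print(dice(pred, target), iou(pred, target))  # ~0.667 and 0.5
```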

18 pages, 2498 KiB  
Article
Infrared Thermal Imaging and Artificial Neural Networks to Screen for Wrist Fractures in Pediatrics
by Olamilekan Shobayo, Reza Saatchi and Shammi Ramlakhan
Technologies 2022, 10(6), 119; https://doi.org/10.3390/technologies10060119 - 22 Nov 2022
Cited by 2 | Viewed by 1730
Abstract
Paediatric wrist fractures are commonly seen injuries at emergency departments. Around 50% of the X-rays taken to identify these injuries indicate no fracture. The aim of this study was to develop a model using infrared thermal imaging (IRTI) data and multilayer perceptron (MLP) neural networks as a screening tool to assist clinicians in deciding which patients require X-ray imaging to diagnose a fracture. Forty participants with wrist injury (19 with a fracture, 21 without, X-ray confirmed), mean age 10.50 years, were included. IRTI of both wrists was performed with the contralateral as reference. The injured wrist region of interest (ROI) was segmented and represented by the means of cells of 10 × 10 pixels. The fifty largest means were selected, the mean temperature of the contralateral ROI was subtracted, and they were expressed by their standard deviation, kurtosis, and interquartile range for MLP processing. Training and test files were created, consisting of randomly split 2/3 and 1/3 of the participants, respectively. To avoid bias of participant inclusion in the two files, the experiments were repeated 100 times, and the MLP outputs were averaged. The model’s sensitivity and specificity were 84.2% and 71.4%, respectively. Further work involves a larger sample size, adults, and other bone fractures.
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
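The feature-extraction chain in the abstract (10 × 10 pixel cell means, the fifty largest means, contralateral-mean subtraction, then standard deviation, kurtosis, and interquartile range) is concrete enough to sketch; the array shapes and SciPy helpers below are assumptions, not the study's code:

```python
# Sketch of the IRTI feature extraction described in the abstract.
import numpy as np
from scipy.stats import kurtosis, iqr

def wrist_features(roi: np.ndarray, contralateral_mean: float) -> np.ndarray:
    """roi: injured-wrist temperature map; returns the 3 MLP input features."""
    h = (roi.shape[0] // 10) * 10  # trim to whole 10x10 cells
    w = (roi.shape[1] // 10) * 10
    cells = roi[:h, :w].reshape(h // 10, 10, w // 10, 10)
    cell_means = cells.mean(axis=(1, 3)).ravel()  # mean of each 10x10 cell
    top50 = np.sort(cell_means)[-50:] - contralateral_mean  # reference-corrected
    return np.array([top50.std(), kurtosis(top50), iqr(top50)])

roi = np.random.normal(33.0, 0.5, size=(120, 160))  # synthetic thermal ROI
print(wrist_features(roi, contralateral_mean=32.8))
```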

17 pages, 8852 KiB  
Article
Deep Neural Network for Lung Image Segmentation on Chest X-ray
by Mahesh Chavan, Vijayakumar Varadarajan, Shilpa Gite and Ketan Kotecha
Technologies 2022, 10(5), 105; https://doi.org/10.3390/technologies10050105 - 30 Sep 2022
Cited by 10 | Viewed by 3533
Abstract
COVID-19 patients require effective diagnostic methods, which are currently in short supply. This study explains how to accurately identify the lung regions in chest X-ray scans of such patients. Images from X-rays or CT scans are critical in healthcare, and image classification and segmentation algorithms have been developed to help doctors save time and reduce manual errors during diagnosis. Over time, CNNs have consistently outperformed other image segmentation algorithms, and various architectures based on CNNs exist, such as ResNet, U-Net, and VGG-16. This paper merges the U-Net image segmentation network and the ResNet feature extraction network to construct the ResUNet++ network; its novelty lies in the detailed discussion of the UNet and ResUNet architectures and the implementation of ResUNet++ for lung image segmentation, which, to the authors’ knowledge, had not previously been applied to this task. The ResNet residual block helps to mitigate feature reduction issues. ResUNet++ was compared with two other popular segmentation architectures and performed well against the UNet and ResNet architectures, achieving high evaluation scores: a validation Dice coefficient of 96.36%, a validation mean IoU of 94.17%, and a validation binary accuracy of 98.07%. Both the UNet and ResNet models were run for the same number of epochs, and the ResUNet++ architecture achieved higher accuracy with fewer epochs. In addition, the ResUNet model gave higher accuracy (94%) than the UNet model (92%).
(This article belongs to the Special Issue Medical Imaging & Image Processing III)
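The residual block that distinguishes ResUNet-style encoders from a plain U-Net can be sketched as follows; the exact ordering of normalization and activation in the published architecture may differ from this assumed pre-activation layout:

```python
# Sketch of a pre-activation residual block as used in ResUNet-style encoders.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(),
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1))
        # 1x1 projection keeps channel counts compatible for the addition.
        self.shortcut = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.body(x) + self.shortcut(x)  # identity path eases training

block = ResidualBlock(64, 128)
print(block(torch.randn(1, 64, 96, 96)).shape)  # torch.Size([1, 128, 96, 96])
```

The skip connection is what the abstract credits with mitigating feature reduction: gradients and low-level detail can bypass the convolutions through the shortcut.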
