Advances in Biomedical Image Processing and Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Applied Biosciences and Bioengineering".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 24183

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Guest Editor: Dr. Michalis Vrigkas
Department of Communication and Digital Media, University of Western Macedonia, 52100 Kastoria, Greece
Interests: image processing; machine learning; computer vision; biomedical imaging; augmented/virtual reality

Guest Editor: Prof. Dr. Christophoros Nikou
Department of Computer Science and Engineering, University of Ioannina, 45110 Ioannina, Greece
Interests: image processing and analysis; computer vision; pattern recognition, with emphasis on biomedical applications

Guest Editor: Dr. Ioannis A. Kakadiaris
Department of Computer Science, University of Houston, Houston, TX 77204, USA
Interests: biomedical image analysis; biomedical computing; AI in healthcare; computer vision and biometrics

Special Issue Information

Dear Colleagues,

Biomedical image analysis plays a vital role in diagnosing numerous pathologies, ranging from infectious diseases to cancer. Advanced methodologies for signal and image processing, analysis, and biomedical analytics can be powerful tools for classifying medical data, identifying individualized health trends, and tracing the trajectories between normal and abnormal cases in many medical applications.

The rapid growth in algorithms and computing power in recent years has spurred the emergence of machine learning and image processing techniques as new tools that are rapidly entering every aspect of our lives, from intelligent personal assistants such as Siri, Alexa, and Google Home to self-driving cars. The medical community has begun taking advantage of these new possibilities to create new predictive models and improve existing ones. For example, robust and theoretically sound machine learning and image processing/analysis methods that solve learning tasks efficiently and intuitively have become widespread across different facets of biomedical imaging for identifying complex patterns. Likewise, advances in biomedical imaging may lead to new technologies for developing predictive models for disease and may guide the decision on who should receive preventive therapy.

To incorporate different aspects of health monitoring, authors are invited to submit papers reporting novel imaging methods with biomedical applications—in particular, exploration and research into the development of new algorithms for biomedical image processing and analysis—to this Special Issue. Topics of interest include, but are not limited to, the following:

  • Computer-aided diagnosis;
  • Imaging biomarkers;
  • Image reconstruction;
  • Image registration;
  • Image segmentation;
  • Integration of imaging with non-imaging biomarkers;
  • Interpretability and explainability of machine learning;
  • Machine learning for biomedical applications;
  • Advances in machine learning methods;
  • COVID-19 and imaging;
  • Biomedical and biological image processing;
  • Deep learning for biomedical imaging;
  • Histopathological image analysis;
  • Mixed, augmented, and virtual reality;
  • Visualization in biomedical imaging.

Research articles and reviews are both welcome.

Dr. Michalis Vrigkas
Prof. Dr. Christophoros Nikou
Dr. Ioannis A. Kakadiaris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical image processing
  • biomedical image analysis
  • biomedical imaging
  • machine learning
  • deep learning
  • mixed, augmented, and virtual reality

Published Papers (10 papers)


Research

14 pages, 2505 KiB  
Article
Optimizing Point Source Tracking in Awake Rat PET Imaging: A Comprehensive Study of Motion Detection and Best Correction Conditions
by Fernando Arias-Valcayo, Pablo Galve, Jose Manuel Udías, Juan José Vaquero, Manuel Desco and Joaquín L. Herraiz
Appl. Sci. 2023, 13(22), 12329; https://doi.org/10.3390/app132212329 - 15 Nov 2023
Viewed by 689
Abstract
Preclinical PET animal studies require immobilization of the animal, typically accomplished through the administration of anesthesia, which may affect the radiotracer biodistribution. The use of 18F point sources attached to the rat head is one of the most promising methods for motion compensation in awake rat PET studies. However, the presence of radioactive markers may degrade image quality. In this study, we aimed to investigate the most favorable conditions for preclinical PET studies using awake rats with attached point sources. First, we investigated the optimal activity conditions for the markers and the rat-injected tracer using Monte Carlo simulations to determine the parameters of maximum detectability without compromising image quality. We also scrutinized the impact of delayed-window correction for random events on marker detectability and overall image quality in these studies. Second, we present a method designed to mitigate the influence of rapid rat movements, which resulted in a loss of around 30% of events, primarily observed during the initial phase of data acquisition. We validated our study with PET acquisitions from an awake rat within the acceptable conditions of activity and motion compensation parameters. This acquisition revealed an 8% reduction in resolution compared to a sedated animal, along with a 6% decrease in signal-to-noise ratio (SNR). These outcomes affirm the viability of our method for conducting awake preclinical brain studies.
(This article belongs to the Special Issue Advances in Biomedical Image Processing and Analysis)
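The approach above tracks point sources attached to the head; recovering the rigid head motion from tracked marker positions is typically a least-squares (Kabsch) alignment. The sketch below is illustrative only, not the authors' implementation, and the marker data are synthetic:

```python
import numpy as np

def rigid_transform(ref, mov):
    """Least-squares rotation R and translation t with ref ~= mov @ R.T + t (Kabsch)."""
    c_ref, c_mov = ref.mean(axis=0), mov.mean(axis=0)
    H = (mov - c_mov).T @ (ref - c_ref)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_ref - R @ c_mov
    return R, t

# Four synthetic fiducial markers, then the same markers after a known motion.
rng = np.random.default_rng(0)
markers = rng.normal(size=(4, 3))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 0.1])
moved = markers @ R_true.T + t_true

R, t = rigid_transform(moved, markers)  # recovers the applied head motion
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In a real pipeline, the per-frame R and t would be used to reorient the list-mode events or image frames before reconstruction.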

21 pages, 2285 KiB  
Article
MDAU-Net: A Liver and Liver Tumor Segmentation Method Combining an Attention Mechanism and Multi-Scale Features
by Jinlin Ma, Mingge Xia, Ziping Ma and Zhiqing Jiu
Appl. Sci. 2023, 13(18), 10443; https://doi.org/10.3390/app131810443 - 18 Sep 2023
Cited by 1 | Viewed by 889
Abstract
In recent years, U-Net and its extended variants have made remarkable progress in liver and liver tumor segmentation. However, the limitations of single-path convolutional operations have hindered the full exploitation of valuable features and restricted their mobility within networks. Moreover, the semantic gap between shallow and deep features means that a simple shortcut connection is not enough. To address these issues and realize automatic liver and tumor segmentation in CT images, we introduce MDAU-Net, a multi-scale feature fusion segmentation method with dense connections and an attention mechanism. The network leverages the multi-head attention (MHA) mechanism and multi-scale feature fusion. First, we introduce a double-flow linear pooling enhancement unit to optimize the fusion of deep and shallow features while mitigating the semantic gap between them. Next, we propose a cascaded adaptive feature extraction unit, combining attention mechanisms with a series of dense connections to capture valuable information and encourage feature reuse. Additionally, we design a cross-level information interaction mechanism utilizing bidirectional residual connections to address the issue of forgetting a priori knowledge during training. Finally, we assessed MDAU-Net's performance on the LiTS and SLiver07 datasets. The experimental results demonstrate that MDAU-Net is well suited for liver and tumor segmentation tasks, outperforming existing widely used methods in terms of robustness and accuracy.

15 pages, 773 KiB  
Article
A Fuzzy Consensus Clustering Algorithm for MRI Brain Tissue Segmentation
by S. V. Aruna Kumar, Ehsan Yaghoubi and Hugo Proença
Appl. Sci. 2022, 12(15), 7385; https://doi.org/10.3390/app12157385 - 22 Jul 2022
Cited by 4 | Viewed by 1453
Abstract
Brain tissue segmentation is an important component of the clinical diagnosis of brain diseases using multi-modal magnetic resonance imaging (MRI). Many unsupervised methods for brain tissue segmentation have been proposed in the literature, the most common being K-means, expectation-maximization, and fuzzy clustering. Fuzzy clustering methods offer considerable benefits over the other approaches, as they can handle brain images that are complex, largely uncertain, and imprecise. However, this approach suffers from the intrinsic noise and intensity inhomogeneity (IIH) in the data resulting from the acquisition process. To resolve these issues, we propose a fuzzy consensus clustering algorithm that defines a membership function resulting from a voting scheme to cluster the pixels. In particular, we first pre-process the MRI data and employ several segmentation techniques based on traditional fuzzy sets and intuitionistic sets. Then, we adopt a voting scheme to fuse the results of the applied clustering methods. Finally, to evaluate the proposed method, we use well-known performance measures (boundary, overlap, and volume measures) on two publicly available datasets (OASIS and IBSR18). The experimental results show the superior performance of the proposed method in comparison with the recent state of the art. The proposed method also achieves better accuracy than existing methods on a real-world autism spectrum disorder detection problem.
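The voting scheme described above can be illustrated with a toy sketch: several fuzzy c-means runs produce hard labelings, which are aligned to a reference run and fused by majority vote. Everything here (1-D intensities, plain fuzzy c-means, overlap-based label alignment) is our own simplification for illustration, not the authors' algorithm:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means; returns the membership matrix U (n_samples x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return U

def consensus_vote(labelings):
    """Majority vote after aligning each labeling to the first by best overlap."""
    base = labelings[0]
    k = base.max() + 1
    aligned = [base]
    for lab in labelings[1:]:
        remap = {j: np.bincount(base[lab == j], minlength=k).argmax()
                 for j in range(k)}
        aligned.append(np.array([remap[j] for j in lab]))
    votes = np.stack(aligned)
    return np.array([np.bincount(col, minlength=k).argmax() for col in votes.T])

# Two well-separated 1-D "tissue intensity" clusters, clustered with three seeds.
X = np.concatenate([np.full(20, 0.1), np.full(20, 0.9)])[:, None]
labelings = [fuzzy_cmeans(X, 2, seed=s).argmax(axis=1) for s in (0, 1, 2)]
final = consensus_vote(labelings)
print(final[:20], final[20:])  # each block receives a single consistent label
```

The paper fuses soft membership functions rather than hard labels; the hard-vote version above only shows the consensus idea.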

10 pages, 2701 KiB  
Article
Comparison of Bone Segmentation Software over Different Anatomical Parts
by Claudio Belvedere, Maurizio Ortolani, Emanuela Marcelli, Barbara Bortolani, Katsiaryna Matsiushevich, Stefano Durante, Laura Cercenelli and Alberto Leardini
Appl. Sci. 2022, 12(12), 6097; https://doi.org/10.3390/app12126097 - 15 Jun 2022
Cited by 2 | Viewed by 2104
Abstract
Three-dimensional bone shape reconstruction is a fundamental step for any subject-specific musculoskeletal model. Typically, medical images are processed to reconstruct bone surfaces via slice-by-slice contour identification. Freeware software packages are available, but commercial ones must be used for the necessary certification in clinics. Commercial packages also require expensive hardware and demanding training, but they offer valuable tools. The aim of the present work is to report the performance of five commercial software packages (Mimics®, AmiraTM, D2PTM, SimplewareTM, and Segment 3D PrintTM), in particular the time to import and to create the model, the number of triangles in the mesh, and the STL file size. DICOM files of three different computed tomography scans from five different human anatomical areas were used for bone shape reconstruction with each of these packages. The same operator and the same hosting hardware were used for all analyses. The computational time differed between the packages, probably because of the pre-processing implied in this operation. The longer time-to-import observed in one package is likely due to volume rendering during uploading. A similar number of triangles per megabyte (approximately 20 thousand) was observed for the five commercial packages. The present work shows the good performance of these software packages, whose main features are better than those of previously analyzed freeware packages.

16 pages, 4294 KiB  
Article
LiverNet: Diagnosis of Liver Tumors in Human CT Images
by Khaled Alawneh, Hiam Alquran, Mohammed Alsalatie, Wan Azani Mustafa, Yazan Al-Issa, Amin Alqudah and Alaa Badarneh
Appl. Sci. 2022, 12(11), 5501; https://doi.org/10.3390/app12115501 - 29 May 2022
Cited by 10 | Viewed by 3685
Abstract
Liver cancer contributes to the increasing mortality rate in the world; early detection may therefore decrease morbidity and increase the chance of survival. This research offers a computer-aided diagnosis system that uses computed tomography scans to categorize hepatic tumors as benign or malignant. The 3D segmented liver from the LiTS17 dataset is passed through a convolutional neural network (CNN) to detect and classify the existing tumors as benign or malignant. In this work, we propose a novel light CNN with eight layers and just one convolutional layer to classify the segmented liver. The proposed model is used in two different tracks: the first track uses deep learning classification and achieves 95.6% accuracy, while the second track uses the automatically extracted features together with a support vector machine (SVM) classifier and achieves 100% accuracy. The proposed network is light, fast, reliable, and accurate, and it can be used by oncology specialists to simplify diagnosis. Furthermore, the proposed network achieves high accuracy without the curation of images, which reduces time and cost.

21 pages, 2512 KiB  
Article
Cephalometric Landmark Detection in Lateral Skull X-ray Images by Using Improved SpatialConfiguration-Net
by Martin Šavc, Gašper Sedej and Božidar Potočnik
Appl. Sci. 2022, 12(9), 4644; https://doi.org/10.3390/app12094644 - 05 May 2022
Cited by 3 | Viewed by 4897
Abstract
Accurate automated localization of cephalometric landmarks in skull X-ray images is the basis for planning orthodontic treatments, predicting skull growth, or diagnosing facial discrepancies. Such diagnoses require as many landmarks as possible to be detected on cephalograms. Today's best methods are adapted to accurately detect just 19 landmarks in images with limited variability. This paper describes the development of the SCN-EXT convolutional neural network (CNN), which is designed to localize 72 landmarks in strongly varying images. The proposed method is based on the SpatialConfiguration-Net network, upgraded by adding replications of the simpler local appearance and spatial configuration components. This modification increases the CNN's capacity without simultaneously increasing its number of free parameters. The effectiveness of our approach was confirmed experimentally on two datasets. The SCN-EXT method was around 4% behind the state of the art on the small ISBI database with 250 testing images and 19 cephalometric landmarks. On the other hand, our method surpassed the state of the art by a statistically significant margin of around 3% on the demanding AUDAX database with 4695 highly variable testing images and 72 landmarks. Increasing the CNN capacity as proposed is especially important for small learning sets and limited computing resources. Our algorithm is already used in orthodontic clinical practice.
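SpatialConfiguration-Net-style detectors regress one heatmap per landmark, and the final coordinates are read off each heatmap. A common read-out (illustrative only, not the paper's code) is the centre of mass in a window around the heatmap peak, which yields sub-pixel estimates:

```python
import numpy as np

def heatmap_to_landmark(h, r=2):
    """Sub-pixel landmark estimate: centre of mass in a window around the peak."""
    y, x = np.unravel_index(np.argmax(h), h.shape)
    y0, y1 = max(y - r, 0), min(y + r + 1, h.shape[0])
    x0, x1 = max(x - r, 0), min(x + r + 1, h.shape[1])
    patch = h[y0:y1, x0:x1]
    ys, xs = np.mgrid[y0:y1, x0:x1]
    return (ys * patch).sum() / patch.sum(), (xs * patch).sum() / patch.sum()

# Synthetic Gaussian heatmap peaked at row 30, column 42.
yy, xx = np.mgrid[0:64, 0:64]
h = np.exp(-((yy - 30.0) ** 2 + (xx - 42.0) ** 2) / (2.0 * 2.0 ** 2))
py, px = heatmap_to_landmark(h)
print(round(py, 3), round(px, 3))  # close to (30, 42)
```

One such read-out per landmark heatmap turns a 72-channel network output into 72 coordinate pairs.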

16 pages, 5351 KiB  
Article
A Dermoscopic Inspired System for Localization and Malignancy Classification of Melanocytic Lesions
by Sameena Pathan, Tanweer Ali, Shweta Vincent, Yashwanth Nanjappa, Rajiv Mohan David and Om Prakash Kumar
Appl. Sci. 2022, 12(9), 4243; https://doi.org/10.3390/app12094243 - 22 Apr 2022
Cited by 1 | Viewed by 1369
Abstract
This study aims at developing a clinically oriented automated diagnostic tool for distinguishing malignant melanocytic lesions from benign melanocytic nevi in diverse image databases. The accuracy of such systems is hampered by the presence of artifacts, smooth lesion boundaries, and subtlety in diagnostic features. The proposed framework therefore improves the accuracy of melanoma detection by incorporating the clinical aspects of dermoscopy. Two steps are taken to achieve this objective. First, artifact removal and lesion localization are performed. Second, various clinically significant features such as shape, color, texture, and pigment network are detected. The features are then reduced by checking their individual significance (i.e., hypothesis testing), and the reduced feature vectors are classified using an SVM classifier. Domain-specific features, rather than abstract image features, are used in this design, complementing the domain knowledge of an expert. The proposed approach is implemented on a multi-source dataset (PH2 + ISBI 2016 and 2017) of 515 annotated images, resulting in a sensitivity, specificity, and accuracy of 83.8%, 88.3%, and 86%, respectively. The experimental results are promising, and the approach can be applied to detect the asymmetry, pigment network, colors, and texture of lesions.
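The significance-based feature reduction mentioned above can be sketched as a per-feature two-sample test: features whose statistic does not separate the benign and malignant groups are dropped. The toy version below uses a Welch t-statistic on synthetic data; the threshold and data are illustrative assumptions, not the authors' choices:

```python
import numpy as np

def welch_t(a, b):
    """Per-feature Welch t-statistic between two groups (rows are samples)."""
    va = a.var(axis=0, ddof=1) / len(a)
    vb = b.var(axis=0, ddof=1) / len(b)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

# Synthetic feature vectors: only feature 2 actually separates the classes.
rng = np.random.default_rng(1)
benign = rng.normal(size=(60, 5))
malignant = rng.normal(size=(60, 5))
malignant[:, 2] += 3.0

t = welch_t(benign, malignant)
keep = np.flatnonzero(np.abs(t) > 3.0)  # crude cut-off standing in for a p-value test
print(keep)
```

In practice the cut-off would come from the t-distribution at a chosen significance level rather than a fixed value.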

23 pages, 24556 KiB  
Article
Automatic Breast Tumor Screening of Mammographic Images with Optimal Convolutional Neural Network
by Pi-Yun Chen, Xuan-Hao Zhang, Jian-Xing Wu, Ching-Chou Pai, Jin-Chyr Hsu, Chia-Hung Lin and Neng-Sheng Pai
Appl. Sci. 2022, 12(8), 4079; https://doi.org/10.3390/app12084079 - 18 Apr 2022
Cited by 10 | Viewed by 1721
Abstract
Mammography is a first-line imaging examination approach used for early breast tumor screening. Computational techniques based on deep-learning methods, such as convolutional neural networks (CNNs), are routinely used as classifiers for rapid automatic breast tumor screening in mammography examination. Classifying multiple feature maps on two-dimensional (2D) digital images, a multilayer CNN has multiple convolutional-pooling layers and fully connected networks, which can increase the screening accuracy and reduce the error rate. However, this multilayer architecture presents some limitations, such as high computational complexity, large-scale training dataset requirements, and poor suitability for real-time clinical applications. Hence, this study designs an optimal multilayer architecture for a CNN-based classifier for automatic breast tumor screening, consisting of three convolutional layers, two pooling layers, a flattening layer, and a classification layer. In the first convolutional layer, the proposed classifier performs a fractional-order convolutional process to enhance the image and remove unwanted noise, extracting the desired object's edges; in the second and third convolutional-pooling layers, two kernel convolution and pooling operations ensure the continuous enhancement and sharpening of the feature patterns for further extraction of the desired features at different scales and levels, while also reducing the dimensions of the feature patterns. In the classification layer, a multilayer network with an adaptive moment estimation algorithm refines the classifier's network parameters for mammography classification by separating tumor-free feature patterns from tumor feature patterns. Images were selected from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and K-fold cross-validation was performed. The experimental results indicate promising performance for automatic breast tumor screening in terms of recall (%), precision (%), accuracy (%), F1 score, and Youden's index.
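The K-fold cross-validation mentioned above partitions the images so that every sample is validated exactly once. A minimal, generic sketch of the splitting logic (not the paper's code):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle n sample indices and yield (train, val) index pairs for k folds."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]

n, k = 103, 5
covered = []
for train, val in kfold_indices(n, k):
    assert set(train).isdisjoint(val)     # no image appears in both splits
    covered.extend(val.tolist())
assert sorted(covered) == list(range(n))  # every image is validated exactly once
print("ok")
```

Each fold trains the classifier on the `train` indices and reports metrics on the held-out `val` indices; the K per-fold scores are then averaged.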

16 pages, 5517 KiB  
Article
The Fusion of MRI and CT Medical Images Using Variational Mode Decomposition
by Srinivasu Polinati, Durga Prasad Bavirisetti, Kandala N V P S Rajesh, Ganesh R Naik and Ravindra Dhuli
Appl. Sci. 2021, 11(22), 10975; https://doi.org/10.3390/app112210975 - 19 Nov 2021
Cited by 8 | Viewed by 2902
Abstract
In medical image processing, magnetic resonance imaging (MRI) and computed tomography (CT) modalities are widely used to extract soft and hard tissue information, respectively. However, with a single modality, it is very challenging to extract the pathological features required to identify suspicious tissue details. Over the past few decades, several medical image fusion methods have attempted to address this issue by combining complementary information from MRI and CT, but each has its advantages and drawbacks. In this work, we propose a new multimodal medical image fusion approach based on variational mode decomposition (VMD) and local energy maxima (LEM). With the help of VMD, we decompose the source images into several intrinsic mode functions (IMFs) to effectively extract edge details while avoiding boundary distortions. LEM is employed to carefully combine the IMFs based on local information, which plays a crucial role in the fused image quality by preserving the appropriate spatial information. The proposed method's performance is evaluated using various subjective and objective measures. The experimental analysis shows that the proposed method gives promising results compared with other existing and well-received fusion methods.
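The LEM fusion rule above selects, at each location, the source whose neighbourhood carries more energy. A toy sketch of that rule on plain images (illustrative only; the paper applies it to VMD mode functions, and the window size here is an assumption):

```python
import numpy as np

def local_energy(img, r=1):
    """Sum of squared intensities over a (2r+1) x (2r+1) window (zero-padded)."""
    p = np.pad(img.astype(float) ** 2, r)
    out = np.zeros(img.shape)
    h, w = img.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out

def lem_fuse(a, b, r=1):
    """Per pixel, keep the source whose neighbourhood carries more energy."""
    return np.where(local_energy(a, r) >= local_energy(b, r), a, b)

# Toy sources: one carries structure on the left, the other on the right.
a = np.zeros((8, 8)); a[:, :4] = 1.0
b = np.zeros((8, 8)); b[:, 4:] = 2.0
fused = lem_fuse(a, b)
print(fused[0, 0], fused[0, 7])  # left comes from a, right from b
```

In the full method, this selection is made per IMF and the fused image is reconstructed from the selected mode functions.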

20 pages, 6525 KiB  
Article
Consecutive Independence and Correlation Transform for Multimodal Data Fusion: Discovery of One-to-Many Associations in Structural and Functional Imaging Data
by Chunying Jia, Mohammad Abu Baker Siddique Akhonda, Yuri Levin-Schwartz, Qunfang Long, Vince D. Calhoun and Tülay Adali
Appl. Sci. 2021, 11(18), 8382; https://doi.org/10.3390/app11188382 - 09 Sep 2021
Cited by 3 | Viewed by 2335
Abstract
Brain signals can be measured using multiple imaging modalities, such as magnetic resonance imaging (MRI)-based techniques. Different modalities convey distinct yet complementary information; thus, their joint analyses can provide valuable insight into how the brain functions in both healthy and diseased conditions. Data-driven approaches have proven most useful for multimodal fusion, as they minimize assumptions imposed on the data, and a number of methods have been developed to uncover relationships across modalities. However, none of these methods, to the best of our knowledge, can discover "one-to-many associations", in which one component from one modality is linked with more than one component from another modality. Such associations are likely to exist, since the same brain region can be involved in multiple neurological processes. Additionally, most existing data fusion methods require the signal subspace order to be identical for all modalities, a severe restriction for real-world data of different modalities. Here, we propose a new fusion technique, the consecutive independence and correlation transform (C-ICT) model, which successively performs independent component analysis and independent vector analysis and is uniquely flexible in terms of the number of datasets, the signal subspace order, and the opportunity to find one-to-many associations. We apply C-ICT to fuse diffusion MRI, structural MRI, and functional MRI datasets collected from healthy controls (HCs) and patients with schizophrenia (SZs). We identify six interpretable triplets of components, each of which consists of three associated components from the three modalities. In addition, components from these triplets that show significant group differences between the HCs and SZs are identified; these could serve as putative biomarkers in schizophrenia.
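A "one-to-many association" as defined above can be found by correlating component loading profiles across modalities and keeping every pairing above a threshold. The sketch below is a toy illustration of the idea on synthetic loadings, not the C-ICT algorithm itself; the threshold is an assumption:

```python
import numpy as np

def one_to_many(A, B, thresh=0.7):
    """Map each component (column) of A to all components of B whose
    subject-loading profiles correlate above `thresh` in magnitude."""
    C = np.corrcoef(np.hstack([A, B]), rowvar=False)[:A.shape[1], A.shape[1]:]
    return {i: list(np.flatnonzero(np.abs(C[i]) > thresh))
            for i in range(A.shape[1])}

# Synthetic loadings: one profile in modality A drives two profiles in modality B.
rng = np.random.default_rng(2)
s = rng.normal(size=100)
A = np.column_stack([s, rng.normal(size=100)])
B = np.column_stack([s + 0.1 * rng.normal(size=100),
                     -s + 0.1 * rng.normal(size=100),
                     rng.normal(size=100)])
links = one_to_many(A, B)
print(links)  # component 0 of A links to components 0 and 1 of B
```

C-ICT obtains such linked components via ICA followed by IVA rather than a raw correlation threshold; the sketch only shows why one component can legitimately map to several.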
