Algorithms for Biomedical Image Analysis and Processing

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Algorithms for Multidisciplinary Applications".

Deadline for manuscript submissions: closed (15 June 2023) | Viewed by 25086

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
Institute for High-Performance Computing and Networking, National Research Council of Italy, via P. Castellino, 111, I-80131 Naples, Italy
Interests: computational data science; image processing; omics and imaging data integration

Special Issue Information

Dear Colleagues,

Biomedical imaging is a broad field concerning image capture for diagnostic and therapeutic purposes. Biomedical imaging technologies utilize X-rays (CT scans), magnetism (MRI), sound (ultrasound), radioactive pharmaceuticals (nuclear medicine: SPECT, PET), or light (endoscopy, OCT). Algorithms for processing and analyzing biomedical images are commonly used to visualize anatomical structures or assess the functionality of human organs, point out pathological regions, analyze biological and metabolic processes, set therapy plans, and carry out image-guided surgery.

At a different scale, microscopy images are generally produced using light microscopes, which provide structural and temporal information about biological specimens. In the most widely used light microscopy techniques, light is transmitted from a source on the opposite side of the specimen to the objective lens. By contrast, fluorescence microscopy collects the light emitted by fluorophores in the specimen after excitation at a different wavelength. Microscopy imaging requires methods for the quantitative, unbiased, and reproducible extraction of meaningful measurements to quantify morphological properties and investigate intra- and intercellular dynamics. New technologies have been developed to address this need, such as microscopy-based screening, sequencing, and imaging with automated analysis (including high-throughput screening and high-content screening), where basic image processing algorithms (e.g., denoising and segmentation) are fundamental tasks.

The large number of applications that rely on biomedical images increases the demand for efficient, accurate, and reliable algorithms for biomedical image processing and analysis, especially with the rising complexity of imaging technologies and the huge number of images to be processed.

This Special Issue aims to bring together both original research articles and topical reviews on algorithms for biomedical image processing and analysis techniques. Some basic techniques include deblurring, noise cleaning, filtering, 3D reconstruction from projection, segmentation, etc.

Submissions are welcome for algorithms based both on traditional approaches and on new machine learning techniques. Potential topics include but are not limited to:

  • Medical image analysis and processing;
  • Microscopy and histology image analysis and processing;
  • Computer-aided detection;
  • Computer-aided diagnosis;
  • Imaging biomarkers;
  • Reconstruction in emission tomography;
  • Computerized cell tracking;
  • Machine and deep learning for biomedical imaging;
  • Methods for combined imaging technologies.

Dr. Lucia Maddalena
Dr. Laura Antonelli
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deblurring
  • denoising
  • segmentation
  • classification
  • detection
  • tracking
  • lineage

Published Papers (11 papers)


Editorial

Jump to: Research, Review

4 pages, 173 KiB  
Editorial
Special Issue on “Algorithms for Biomedical Image Analysis and Processing”
by Laura Antonelli and Lucia Maddalena
Algorithms 2023, 16(12), 544; https://doi.org/10.3390/a16120544 - 28 Nov 2023
Viewed by 1060
Abstract
Biomedical imaging is a broad field concerning image capture for diagnostic and therapeutic purposes [...] Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)

Research

Jump to: Editorial, Review

16 pages, 2021 KiB  
Article
Clinical Validation of a New Enhanced Stent Imaging Method
by Chadi Ghafari, Khalil Houissa, Jo Dens, Claudiu Ungureanu, Peter Kayaert, Cyril Constant and Stéphane Carlier
Algorithms 2023, 16(6), 276; https://doi.org/10.3390/a16060276 - 30 May 2023
Cited by 1 | Viewed by 1269
Abstract
(1) Background: Stent underexpansion is the main cause of stent thrombosis and restenosis. Coronary angiography has limitations in the assessment of stent expansion. Enhanced stent imaging (ESI) methods allow a detailed visualization of stent deployment. We qualitatively compare image results from two ESI system vendors (StentBoost™ (SB) and CAAS StentEnhancer™ (SE)) and report quantitative results of deployed stent diameters by quantitative coronary angiography (QCA) and by SE. (2) Methods: The ESI systems from SB and SE were compared and graded by two blinded observers for four characteristics: (1) visualization of the proximal and distal edges of the stents; (2) visualization of the stent struts; (3) presence of underexpansion; and (4) calcifications. Stent diameters were quantitatively measured using dedicated QCA and SE software and compared to chart diameters according to the pressure of implantation. (3) Results: A total of 249 ESI sequences were qualitatively compared. Inter-observer variability was noted for strut visibility and total scores. Inter-observer agreement was found for the assessment of proximal stent edge and stent underexpansion. The predicted chart diameters were 0.31 ± 0.30 mm larger than SE diameters (p < 0.05). Stent diameters by SE after post-dilatation were 0.47 ± 0.31 mm smaller than the post-dilation balloon diameter (p < 0.05). SE-derived diameters significantly differed from QCA; by Bland–Altman analysis the bias was −0.37 ± 0.42 mm (p < 0.001). (4) Conclusions: SE provides an enhanced visualization and allows precise quantitative assessment of stent expansion without the limitations of QCA when overlapping coronary side branches are present. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
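The Bland–Altman analysis reported in this abstract is a standard way to quantify agreement between two measurement methods: the bias is the mean of the per-case differences, and the 95% limits of agreement are bias ± 1.96 × SD. A minimal sketch follows; the stent-diameter values below are synthetic illustrations, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland–Altman agreement: returns the bias (mean difference a - b)
    and the 95% limits of agreement (bias ± 1.96 * SD of differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic stent diameters (mm): two methods measuring the same 8 stents.
qca = np.array([2.9, 3.1, 2.7, 3.3, 2.8, 3.0, 3.2, 2.6])
se = qca + 0.37 + 0.05 * np.array([1, -1, 1, -1, 1, -1, 1, -1])
bias, (lo, hi) = bland_altman(se, qca)
```

With these synthetic values the bias recovers the 0.37 mm offset built into the data.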

23 pages, 2417 KiB  
Article
A Hybrid Direct Search and Model-Based Derivative-Free Optimization Method with Dynamic Decision Processing and Application in Solid-Tank Design
by Zhongda Huang, Andy Ogilvy, Steve Collins, Warren Hare, Michelle Hilts and Andrew Jirasek
Algorithms 2023, 16(2), 92; https://doi.org/10.3390/a16020092 - 07 Feb 2023
Viewed by 1076
Abstract
A derivative-free optimization (DFO) method is an optimization method that does not make use of derivative information in order to find the optimal solution. It is advantageous for solving real-world problems in which the only information available about the objective function is the output for a specific input. In this paper, we develop the framework for a DFO method called the DQL method. It is designed to be a versatile hybrid method capable of performing direct search, quadratic-model search, and line search all in the same method. We develop and test a series of different strategies within this framework. The benchmark results indicate that each of these strategies has distinct advantages and that no strategy is a clear overall winner in terms of both efficiency and robustness. We develop the Smart DQL method by allowing the method to determine the optimal search strategies in various circumstances. The Smart DQL method is applied to a problem of solid-tank design for 3D radiation dosimetry provided by the UBCO (University of British Columbia—Okanagan) 3D Radiation Dosimetry Research Group. Given the limited evaluation budget, the Smart DQL method produces high-quality solutions. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
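The direct-search component of such DFO methods can be illustrated with a minimal compass search, which polls the 2n coordinate directions and halves the step whenever no poll point improves the objective. This is a generic sketch of direct search, not the authors' DQL method, and the test function is invented.

```python
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    """Minimal direct (compass) search: poll the coordinate directions,
    move on improvement, halve the step when a full poll fails.
    Uses only function values, never derivatives."""
    x = np.asarray(x0, float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for i in range(x.size):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += s * step
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0  # refine the mesh when the poll fails
    return x, fx

# Invented smooth test objective with minimum at (1, -2).
x_opt, f_opt = compass_search(lambda v: (v[0] - 1)**2 + 4 * (v[1] + 2)**2,
                              [0.0, 0.0])
```

For a separable quadratic like this, a failed poll at step s certifies each coordinate is within s/2 of its optimum, so the final point is accurate to roughly the tolerance.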

14 pages, 356 KiB  
Article
Biomedical Image Classification via Dynamically Early Stopped Artificial Neural Network
by Giorgia Franchini, Micaela Verucchi, Ambra Catozzi, Federica Porta and Marco Prato
Algorithms 2022, 15(10), 386; https://doi.org/10.3390/a15100386 - 20 Oct 2022
Viewed by 1577
Abstract
It is well known that biomedical imaging analysis plays a crucial role in the healthcare sector and produces a huge quantity of data. These data can be exploited to study diseases and their evolution in a deeper way or to predict their onsets. In particular, image classification represents one of the main problems in the biomedical imaging context. Due to the data complexity, biomedical image classification can be carried out by trainable mathematical models, such as artificial neural networks. When employing a neural network, one of the main challenges is to determine the optimal duration of the training phase to achieve the best performance. This paper introduces a new adaptive early stopping technique that sets the optimal training time based on dynamic strategies for selecting the learning rate and the mini-batch size of the stochastic gradient method used as the optimizer. The numerical experiments, carried out on different artificial neural networks for image classification, show that the developed adaptive early stopping procedure matches the performance reported in the literature while finalizing the training in fewer epochs. The numerical examples were performed on the CIFAR100 dataset and on two distinct MedMNIST2D datasets, part of a large-scale lightweight benchmark for biomedical image classification. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
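For context, the common baseline such adaptive techniques improve upon is patience-based early stopping, which halts training once the validation loss stops improving for a fixed number of epochs. The sketch below shows only this standard baseline rule, not the paper's adaptive method, and the toy loss curve is invented.

```python
import numpy as np

class EarlyStopping:
    """Stop when validation loss has not improved by at least `min_delta`
    for `patience` consecutive epochs (a common baseline rule)."""
    def __init__(self, patience=5, min_delta=1e-4):
        self.patience, self.min_delta = patience, min_delta
        self.best = np.inf
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

# Toy validation curve: improves, then plateaus and starts to overfit.
losses = [1.0, 0.6, 0.4, 0.35, 0.34, 0.34, 0.35, 0.36, 0.36, 0.37, 0.38, 0.39]
stopper = EarlyStopping(patience=3)
stopped_at = None
for epoch, L in enumerate(losses):
    if stopper.step(L):
        stopped_at = epoch
        break
```

On this curve training halts at epoch 7, three epochs after the best loss of 0.34, rather than running all twelve epochs.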

15 pages, 6471 KiB  
Article
Impact of Iterative Bilateral Filtering on the Noise Power Spectrum of Computed Tomography Images
by Choirul Anam, Ariij Naufal, Heri Sutanto, Kusworo Adi and Geoff Dougherty
Algorithms 2022, 15(10), 374; https://doi.org/10.3390/a15100374 - 13 Oct 2022
Cited by 3 | Viewed by 2027
Abstract
A bilateral filter is a non-linear denoising algorithm that can reduce noise while preserving the edges. This study explores the characteristics of a bilateral filter in changing the noise and texture within computed tomography (CT) images in an iterative implementation. We collected images of a homogeneous Neusoft phantom scanned with tube currents of 77, 154, and 231 mAs. The images for each tube current were filtered five times with a configuration of sigma space (σd) = 2 pixels, sigma intensity (σr) = noise level, and a kernel of 5 × 5 pixels. To observe the noise texture in each filter iteration, the noise power spectrum (NPS) was obtained for the five slices of each dataset and averaged to generate a stable curve. The modulation-transfer function (MTF) was also measured from the original and the filtered images. Tests on an anthropomorphic phantom image were carried out to observe their impact on clinical scenarios. Noise measurements and visual observations of edge sharpness were performed on this image. Our results showed that the bilateral filter was effective in suppressing noise at high frequencies, which is confirmed by the sloping NPS curve for different tube currents. The peak frequency was shifted from about 0.2 to about 0.1 mm−1 for all tube currents, and the noise magnitude was reduced by more than 50% compared to the original images. The spatial resolution does not change with the number of iterations of the filter, which is confirmed by the constant values of MTF50 and MTF10. The test results on the anthropomorphic phantom image show a similar pattern, with noise reduced by up to 60% and object edges remaining sharp. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
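The filter configuration described in the abstract (5 × 5 kernel, σd = 2 pixels, σr set to the noise level, applied iteratively) can be sketched with a plain NumPy bilateral pass. The step-edge test image below is a stand-in for the phantom data, chosen so both noise suppression and edge preservation are easy to check.

```python
import numpy as np

def bilateral_pass(img, sigma_d=2.0, sigma_r=10.0, radius=2):
    """One pass of a bilateral filter with a (2*radius+1)^2 kernel:
    a spatial Gaussian times an intensity ("range") Gaussian, so pixels
    across a strong edge get near-zero weight and edges survive."""
    H, W = img.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_d**2))
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            range_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out

gen = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 100.0      # step edge
noisy = clean + gen.normal(0, 10.0, clean.shape)
filtered = noisy
for _ in range(5):  # iterative application, sigma_r matched to the noise level
    filtered = bilateral_pass(filtered, sigma_d=2.0, sigma_r=10.0)
```

After five iterations the noise in flat regions drops sharply while the step edge stays in place, mirroring the qualitative behavior the study reports.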

16 pages, 1677 KiB  
Article
Cancer Identification in Walker 256 Tumor Model Exploring Texture Properties Taken from Microphotograph of Rats Liver
by Mateus F. T. Carvalho, Sergio A. Silva, Jr., Carla Cristina O. Bernardo, Franklin César Flores, Juliana Vanessa C. M. Perles, Jacqueline Nelisis Zanoni and Yandre M. G. Costa
Algorithms 2022, 15(8), 268; https://doi.org/10.3390/a15080268 - 31 Jul 2022
Viewed by 1911
Abstract
Recent studies have been evaluating the presence of patterns associated with the occurrence of cancer in different types of tissue present in the individual affected by the disease. In this article, we describe preliminary results for the automatic detection of cancer (Walker 256 tumor) in laboratory animals using preclinical microphotograph images of the subject’s liver tissue. In the proposed approach, two different types of descriptors were explored to capture texture properties from the images, and we also evaluated the complementarity between them. The first texture descriptor experimented with is the widely known Local Phase Quantization (LPQ), which is a descriptor based on spectral information. The second one is built by the application of a granulometry given by a family of morphological filters. For classification, we evaluated the algorithms Support Vector Machine (SVM), k-Nearest Neighbor (k-NN) and Logistic Regression. Experiments carried out on a carefully curated dataset developed by the Enteric Neural Plasticity Laboratory of the State University of Maringá showed that both texture descriptors provide good results in this scenario. The accuracy rates obtained using the SVM classifier were 96.67% for the texture operator based on granulometry and 91.16% for the LPQ operator. The dataset was also made available as a contribution of this work. In addition, it is important to remark that the best overall result was obtained by combining classifiers created using both descriptors in a late fusion strategy, achieving an accuracy of 99.16%. The results obtained show that it is possible to automatically perform the identification of cancer in laboratory animals by exploring texture properties found in tissue taken from the liver. Moreover, we observed a high level of complementarity between the classifiers created using LPQ and granulometry properties in the application addressed here. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
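The late fusion strategy mentioned in the abstract combines the outputs of the per-descriptor classifiers rather than their features. A minimal sketch using a weighted average of class posteriors follows; the probability values and the equal 0.5 weighting are invented for illustration and are not the paper's fusion rule.

```python
import numpy as np

def late_fusion(proba_a, proba_b, w=0.5):
    """Late fusion of two classifiers: combine their class-probability
    outputs (here a weighted average) and take the argmax per sample."""
    fused = w * np.asarray(proba_a) + (1 - w) * np.asarray(proba_b)
    return fused.argmax(axis=1)

# Toy posteriors for 4 samples, 2 classes (healthy vs. tumor), from two
# hypothetical classifiers (e.g., one on LPQ, one on granulometry features).
p_lpq = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.55, 0.45]])
p_gran = np.array([[0.8, 0.2], [0.3, 0.7], [0.6, 0.4], [0.1, 0.9]])
pred = late_fusion(p_lpq, p_gran)
```

Note the third sample: the two classifiers disagree, and fusion resolves it by averaging their confidence, which is where the complementarity the authors observe pays off.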

13 pages, 1921 KiB  
Article
Semi-Automatic Multiparametric MR Imaging Classification Using Novel Image Input Sequences and 3D Convolutional Neural Networks
by Bochong Li, Ryo Oka, Ping Xuan, Yuichiro Yoshimura and Toshiya Nakaguchi
Algorithms 2022, 15(7), 248; https://doi.org/10.3390/a15070248 - 18 Jul 2022
Cited by 2 | Viewed by 1746
Abstract
The role of multi-parametric magnetic resonance imaging (mp-MRI) is becoming increasingly important in the diagnosis of the clinical severity of prostate cancer (PCa). However, mp-MRI images usually contain several unaligned 3D sequences, such as DWI image sequences and T2-weighted image sequences, and there are many images among the entirety of 3D sequence images that do not contain cancerous tissue, which affects the accuracy of large-scale prostate cancer detection. Therefore, there is a great need for a method that uses accurate computer-aided detection of mp-MRI images and minimizes the influence of useless features. Our proposed PCa detection method is divided into three stages: (i) multimodal image alignment, (ii) automatic cropping of the sequence images to the entire prostate region, and, finally, (iii) combining multiple modal images of each patient into novel 3D sequences and using 3D convolutional neural networks to learn the newly composed 3D sequences with different modal alignments. We arrange the different modal methods to make the model fully learn the cancerous tissue features; then, we predict the clinical severity of PCa and generate a 3D cancer response map for the 3D sequence images from the last convolution layer of the network. The prediction results and 3D response map help to understand the features that the model focuses on during the process of 3D-CNN feature learning. We applied our method to prostate cancer patient data from Toho hospital; the resulting AUC (0.85) was significantly higher than that of other methods. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)

18 pages, 300 KiB  
Article
Machine Learning and rs-fMRI to Identify Potential Brain Regions Associated with Autism Severity
by Igor D. Rodrigues, Emerson A. de Carvalho, Caio P. Santana and Guilherme S. Bastos
Algorithms 2022, 15(6), 195; https://doi.org/10.3390/a15060195 - 07 Jun 2022
Cited by 5 | Viewed by 2712
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized primarily by social impairments that manifest in different severity levels. In recent years, many studies have explored the use of machine learning (ML) and resting-state functional magnetic resonance images (rs-fMRI) to investigate the disorder. These approaches evaluate brain oxygen levels to indirectly measure brain activity and compare typical developmental subjects with ASD ones. However, none of these works have tried to classify the subjects into severity groups using ML exclusively applied to rs-fMRI data. Information on ASD severity is frequently available since some tools used to support ASD diagnosis also include a severity measurement as their outcomes. This is the case for the Autism Diagnostic Observation Schedule (ADOS), which splits the diagnosis into three groups: ‘autism’, ‘autism spectrum’, and ‘non-ASD’. Therefore, this paper aims to use ML and fMRI to identify potential brain regions as biomarkers of ASD severity. We used the ADOS score as a severity measurement standard. The experiment used fMRI data of 202 subjects with an ASD diagnosis and their ADOS scores available at the ABIDE I consortium to determine the correct ASD sub-class for each one. Our results suggest a functional difference between the ASD sub-classes by reaching 73.8% accuracy on cingulum regions. This shows the feasibility of classifying and characterizing ASD using rs-fMRI data, indicating potential areas that could lead to severity biomarkers in further research. However, we highlight the need for more studies to confirm our findings. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)

16 pages, 4962 KiB  
Article
Smart(Sampling)Augment: Optimal and Efficient Data Augmentation for Semantic Segmentation
by Misgana Negassi, Diane Wagner and Alexander Reiterer
Algorithms 2022, 15(5), 165; https://doi.org/10.3390/a15050165 - 16 May 2022
Cited by 8 | Viewed by 3166
Abstract
Data augmentation methods enrich datasets with augmented data to improve the performance of neural networks. Recently, automated data augmentation methods have emerged, which automatically design augmentation strategies. The existing work focuses on image classification and object detection, whereas we provide the first study on semantic image segmentation and introduce two new approaches: SmartAugment and SmartSamplingAugment. SmartAugment uses Bayesian Optimization to search a rich space of augmentation strategies and achieves new state-of-the-art performance in all semantic segmentation tasks we consider. SmartSamplingAugment, a simple parameter-free approach with a fixed augmentation strategy, competes in performance with the existing resource-intensive approaches and outperforms cheap state-of-the-art data augmentation methods. Furthermore, we analyze the impact, interaction, and importance of data augmentation hyperparameters and perform ablation studies, which confirm our design choices behind SmartAugment and SmartSamplingAugment. Lastly, we will provide our source code for reproducibility and to facilitate further research. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
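Automated augmentation methods search over a space of augmentation strategies. The toy sketch below samples and applies a simple policy (a random subset of operations plus a noise magnitude); it is a schematic stand-in for the idea of a searchable strategy space, not the SmartAugment or SmartSamplingAugment search space itself.

```python
import numpy as np

def sample_policy(rng, ops=("hflip", "vflip", "rot90", "noise")):
    """Sample a toy augmentation strategy: a random subset of operations
    and a random magnitude for the noise op. Stand-in for the richer
    strategy spaces explored by automated augmentation methods."""
    chosen = [op for op in ops if rng.random() < 0.5]
    return {"ops": chosen, "noise_sigma": float(rng.uniform(0.0, 0.1))}

def apply_policy(img, policy, rng):
    """Apply the sampled operations to a 2D image (label-preserving
    for segmentation when the same ops are applied to the mask)."""
    out = img.copy()
    if "hflip" in policy["ops"]:
        out = out[:, ::-1]
    if "vflip" in policy["ops"]:
        out = out[::-1, :]
    if "rot90" in policy["ops"]:
        out = np.rot90(out)
    if "noise" in policy["ops"]:
        out = out + rng.normal(0, policy["noise_sigma"], out.shape)
    return out

rng = np.random.default_rng(42)
img = np.arange(16.0).reshape(4, 4)
aug = apply_policy(img, sample_policy(rng), rng)
```

An automated method would score many such sampled policies by validation performance (e.g., via Bayesian Optimization) and keep the best one.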

Review

Jump to: Editorial, Research

8 pages, 1976 KiB  
Review
Polymer Models of Chromatin Imaging Data in Single Cells
by Mattia Conte, Andrea M. Chiariello, Alex Abraham, Simona Bianco, Andrea Esposito, Mario Nicodemi, Tommaso Matteuzzi and Francesca Vercellone
Algorithms 2022, 15(9), 330; https://doi.org/10.3390/a15090330 - 16 Sep 2022
Cited by 2 | Viewed by 1649
Abstract
Recent super-resolution imaging technologies enable tracing chromatin conformation with nanometer-scale precision at the single-cell level. They revealed, for example, that human chromosomes fold into a complex three-dimensional structure within the cell nucleus that is essential to establish biological activities, such as the regulation of the genes. Yet, to decode from imaging data the molecular mechanisms that shape the structure of the genome, quantitative methods are required. In this review, we consider models of polymer physics of chromosome folding that we benchmark against multiplexed FISH data available in human loci in IMR90 fibroblast cells. By combining polymer theory, numerical simulations and machine learning strategies, the predictions of the models are validated at the single-cell level, showing that chromosome structure is controlled by the interplay of distinct physical processes, such as active loop-extrusion and thermodynamic phase-separation. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)

22 pages, 1059 KiB  
Review
Artificial Intelligence for Cell Segmentation, Event Detection, and Tracking for Label-Free Microscopy Imaging
by Lucia Maddalena, Laura Antonelli, Alexandra Albu, Aroj Hada and Mario Rosario Guarracino
Algorithms 2022, 15(9), 313; https://doi.org/10.3390/a15090313 - 31 Aug 2022
Cited by 9 | Viewed by 3966
Abstract
Background: Time-lapse microscopy imaging is a key approach for an increasing number of biological and biomedical studies to observe the dynamic behavior of cells over time, which helps quantify important data, such as the number of cells and their sizes, shapes, and dynamic interactions across time. Label-free imaging is an essential strategy for such studies as it ensures that native cell behavior remains uninfluenced by the recording process. Computer vision and machine/deep learning approaches have made significant progress in this area. Methods: In this review, we present an overview of methods, software, data, and evaluation metrics for the automatic analysis of label-free microscopy imaging. We aim to provide the interested reader with a unique source of information, with links to more detailed material. Results: We review the most recent methods for cell segmentation, event detection, and tracking. Moreover, we provide lists of publicly available software and datasets. Finally, we summarize the metrics most frequently adopted for evaluating the methods under examination. Conclusions: We provide hints on open challenges and future research directions. Full article
(This article belongs to the Special Issue Algorithms for Biomedical Image Analysis and Processing)
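Among the evaluation metrics such a review covers, overlap measures like the Dice coefficient are ubiquitous for scoring cell segmentation against ground truth. A minimal implementation on binary masks, with an invented toy example of a shifted prediction:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    Ranges from 0 (no overlap) to 1 (identical masks)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True       # ground-truth cell
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True   # prediction, shifted
score = dice(pred, gt)
```

Here both masks cover 16 pixels and overlap on 9, giving a Dice score of 18/32 = 0.5625; a perfect prediction would score 1.0.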
