Computer Vision and Machine Learning in Medical Applications

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 30 June 2024 | Viewed by 6690

Special Issue Editor


Dr. Chunhung Richard Lin
Guest Editor
Department of Computer Science and Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
Interests: network; routing; computer networking; network architecture; network communication; QoS; networking; cloud computing; TCP; wireless computing

Special Issue Information

Dear Colleagues,

Recently, computer vision and machine learning have spread to almost all fields. In medical applications, an immense amount of data is being generated by distributed sensors and cameras, as well as multi-modal digital health platforms that support audio, video, image, and text. The availability of data from medical devices and digital record systems has greatly increased the potential for automated diagnosis. The past several years have witnessed an explosion of interest in, and a rapid development of, computer-aided medical investigations using MRI, CT, and X-ray images, as well as medical data. Researchers, having reached a deeper understanding of the methods, on one hand are proposing elegant ways to better integrate computer vision with machine learning in complex problems, and on the other hand are advancing the learning algorithms themselves.

This Special Issue focuses on computer vision and machine learning techniques for medical applications, including but not limited to:

  • Intelligent medical and health systems;
  • Novel theories and methods of deep learning for medical imaging;
  • Drug discovery with deep learning;
  • Pandemic (such as COVID-19) management with machine learning;
  • Health and medical behavior analytics with deep learning;
  • Un-, semi-, weakly, and fully supervised learning on medical data (text/images);
  • Generating diagnostic reports from medical images;
  • Machine learning for imbalanced medical datasets;
  • Summarization of clinical information;
  • Multimodal medical image analysis;
  • Data mining for medical information;
  • Organ and lesion segmentation/detection;
  • Machine learning for image classification with MRI/CT/PET;
  • Medical image enhancement/denoising;
  • Learning robust medical image representation with noisy annotation;
  • Predicting clinical outcomes from medical data;
  • Anomaly detection in medical images or data;
  • Active learning and life-long learning in medical computer vision;
  • User/patient psychometric modeling from video, image, audio, and text.

Dr. Chunhung Richard Lin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research


17 pages, 541 KiB  
Article
Utilizing Nearest-Neighbor Clustering for Addressing Imbalanced Datasets in Bioengineering
by Chih-Ming Huang, Chun-Hung Lin, Chuan-Sheng Hung, Wun-Hui Zeng, You-Cheng Zheng and Chih-Min Tsai
Bioengineering 2024, 11(4), 345; https://doi.org/10.3390/bioengineering11040345 - 31 Mar 2024
Viewed by 432
Abstract
Imbalance classification is common in scenarios like fault diagnosis, intrusion detection, and medical diagnosis, where obtaining abnormal data is difficult. This article addresses a one-class problem, implementing and refining the One-Class Nearest-Neighbor (OCNN) algorithm. The original inter-quartile range mechanism is replaced with the K-means with outlier removal (KMOR) algorithm for efficient outlier identification in the target class. Parameters are optimized by treating these outliers as non-target-class samples. A new algorithm, the Location-based Nearest-Neighbor (LBNN) algorithm, clusters one-class training data using KMOR and calculates the farthest distance and percentile for each test data point to determine if it belongs to the target class. Experiments cover parameter studies, validation on eight standard imbalanced datasets from KEEL, and three applications on real medical imbalanced datasets. Results show superior performance in precision, recall, and G-means compared to traditional classification models, making it effective for handling imbalanced data challenges.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
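The one-class scheme the abstract describes (cluster the target class, then accept a test point only if its distance falls within a per-cluster distance percentile) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `LBNNSketch` is hypothetical, plain k-means stands in for the KMOR algorithm, and the percentile acceptance rule is a simplified reading of the paper's "farthest distance and percentile" criterion.

```python
import numpy as np
from sklearn.cluster import KMeans

class LBNNSketch:
    """One-class classifier sketch: cluster the target class, then accept a
    test point if its distance to the nearest centroid does not exceed a
    chosen percentile of that cluster's own member distances."""

    def __init__(self, n_clusters=3, percentile=95):
        self.n_clusters = n_clusters
        self.percentile = percentile

    def fit(self, X):
        # KMeans as a stand-in for KMOR (which additionally removes outliers)
        self.km = KMeans(n_clusters=self.n_clusters, n_init=10, random_state=0).fit(X)
        labels = self.km.labels_
        # per-cluster distance threshold at the chosen percentile
        self.thresholds = np.empty(self.n_clusters)
        for k in range(self.n_clusters):
            d = np.linalg.norm(X[labels == k] - self.km.cluster_centers_[k], axis=1)
            self.thresholds[k] = np.percentile(d, self.percentile)
        return self

    def predict(self, X):
        # distance from each test point to every centroid
        d = np.linalg.norm(X[:, None, :] - self.km.cluster_centers_[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # target class iff within the nearest cluster's distance threshold
        return d[np.arange(len(X)), nearest] <= self.thresholds[nearest]
```

Fitting uses only target-class samples, which is what makes the approach suitable for severely imbalanced data: the minority (abnormal) class never needs to be seen during training.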

14 pages, 5257 KiB  
Article
Motion Artifact Reduction Using U-Net Model with Three-Dimensional Simulation-Based Datasets for Brain Magnetic Resonance Images
by Seong-Hyeon Kang and Youngjin Lee
Bioengineering 2024, 11(3), 227; https://doi.org/10.3390/bioengineering11030227 - 27 Feb 2024
Viewed by 856
Abstract
This study aimed to remove motion artifacts from brain magnetic resonance (MR) images using a U-Net model. In addition, a simulation method was proposed to enlarge the dataset required to train the U-Net model while avoiding the overfitting problem. The volume data were rotated and translated in three dimensions with random intensity and frequency, and the process was repeated for the number of slices in the volume data. Then, for every slice, a portion of the motion-free k-space data was replaced with motion-corrupted k-space data. Based on the transposed k-space data, MR images with motion artifacts and residual maps were acquired and used to construct datasets. For quantitative evaluation, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), coefficient of correlation (CC), and universal image quality index (UQI) were measured. The U-Net model trained on the residual map-based dataset showed the best performance across all evaluation factors; in particular, compared with the direct images, the RMSE, PSNR, CC, and UQI improved by approximately 5.35×, 1.51×, 1.12×, and 1.01×, respectively. In conclusion, the simulation-based dataset demonstrates that U-Net models can be effectively trained for motion artifact reduction.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
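The core simulation step (splicing motion-corrupted k-space lines into motion-free k-space) can be illustrated in 2D with NumPy. This is a hedged sketch of the general technique, not the paper's 3D pipeline: the function name `simulate_motion_artifact`, the use of `np.roll` as a rigid-translation stand-in, and the 30% line-corruption fraction are all illustrative assumptions.

```python
import numpy as np

def simulate_motion_artifact(image, shift=(4, 0), corrupt_frac=0.3, seed=0):
    """Sketch of the k-space corruption idea: translate the image, then
    splice a random fraction of the moved image's k-space lines into the
    motion-free k-space before reconstructing."""
    rng = np.random.default_rng(seed)
    moved = np.roll(image, shift, axis=(0, 1))          # rigid translation stand-in
    k_clean = np.fft.fftshift(np.fft.fft2(image))
    k_moved = np.fft.fftshift(np.fft.fft2(moved))
    n_lines = image.shape[0]
    corrupt = rng.choice(n_lines, size=int(corrupt_frac * n_lines), replace=False)
    k_mixed = k_clean.copy()
    k_mixed[corrupt, :] = k_moved[corrupt, :]            # lines acquired "after motion"
    artifact_img = np.abs(np.fft.ifft2(np.fft.ifftshift(k_mixed)))
    residual = artifact_img - image                      # residual map as a training target
    return artifact_img, residual
```

Because each k-space line mixes information from the whole image, replacing even a few lines produces the characteristic ghosting artifacts that the U-Net is then trained to remove, either directly or via the residual map.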

19 pages, 1076 KiB  
Article
A Comparison of the Impact of Pharmacological Treatments on Cardioversion, Rate Control, and Mortality in Data-Driven Atrial Fibrillation Phenotypes in Critical Care
by Alexander Lacki and Antonio Martinez-Millana
Bioengineering 2024, 11(3), 199; https://doi.org/10.3390/bioengineering11030199 - 20 Feb 2024
Viewed by 850
Abstract
Critical care physicians are commonly faced with patients exhibiting atrial fibrillation (AF), a cardiac arrhythmia with multifaceted origins. Recent investigations shed light on the heterogeneity among AF patients by uncovering unique AF phenotypes, characterized by differing treatment strategies and clinical outcomes. In this retrospective study encompassing 9401 AF patients in an intensive care cohort, we sought to identify differences in average treatment effects (ATEs) across different patient groups. We extract data from the MIMIC-III database, use hierarchical agglomerative clustering to identify patients’ phenotypes, and assign them to treatment groups based on their initial drug administration during AF episodes. The treatment options examined included beta blockers (BBs), potassium channel blockers (PCBs), calcium channel blockers (CCBs), and magnesium sulfate (MgS). Utilizing multiple imputation and inverse probability of treatment weighting, we estimate ATEs related to rhythm control, rate control, and mortality, approximated as hourly and daily rates (%/h, %/d). Our analysis unveiled four distinctive AF phenotypes: (1) postoperative hypertensive, (2) non-cardiovascular multimorbid, (3) cardiovascular multimorbid, and (4) valvulopathy atrial dilation. PCBs showed the highest cardioversion rates, ranging across phenotypes from 7.69%/h (5.80–9.22) to 11.6%/h (9.35–13.3). While CCBs demonstrated the highest effectiveness in controlling ventricular rates within the overall patient cohort, PCBs and MgS outperformed them in specific phenotypes. PCBs exhibited the most favorable mortality outcomes overall, except for the non-cardiovascular multimorbid cluster, where BBs displayed a lower mortality rate of 1.33%/d [1.04–1.93] compared to PCBs’ 1.68%/d [1.10–2.24]. The results of this study underscore the significant diversity in ATEs among individuals with AF and suggest that phenotype-based classification could be a valuable tool for physicians, providing personalized insights to inform clinical decision making.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
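The inverse probability of treatment weighting (IPTW) step mentioned in the abstract can be sketched in a few lines. This is a generic illustration of the technique under simplifying assumptions (binary treatment, a correctly specified logistic propensity model, simple score clipping), not the authors' multi-treatment, multiply-imputed analysis; the function name `iptw_ate` is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def iptw_ate(X, treated, outcome, clip=(0.01, 0.99)):
    """Minimal IPTW sketch: estimate propensity scores, clip them for
    stability, and take the weighted difference in mean outcomes."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    ps = np.clip(ps, *clip)
    # weight each patient by the inverse probability of the treatment received
    w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    mu1 = np.average(outcome[treated == 1], weights=w[treated == 1])
    mu0 = np.average(outcome[treated == 0], weights=w[treated == 0])
    return mu1 - mu0
```

The weighting creates a pseudo-population in which measured confounders are balanced across treatment groups, so the weighted mean difference approximates the average treatment effect rather than a confounded association.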

11 pages, 4128 KiB  
Article
GAN-Based Approach for Diabetic Retinopathy Retinal Vasculature Segmentation
by Anila Sebastian, Omar Elharrouss, Somaya Al-Maadeed and Noor Almaadeed
Bioengineering 2024, 11(1), 4; https://doi.org/10.3390/bioengineering11010004 - 21 Dec 2023
Cited by 1 | Viewed by 1035
Abstract
Most diabetes patients develop a condition known as diabetic retinopathy after having diabetes for a prolonged period. Due to this ailment, damaged blood vessels may occur behind the retina, which can even progress to a stage of losing vision. Hence, doctors advise diabetes patients to screen their retinas regularly. Examining the fundus for this requires a long time and there are few ophthalmologists available to check the ever-increasing number of diabetes patients. To address this issue, several computer-aided automated systems are being developed with the help of many techniques like deep learning. Extracting the retinal vasculature is a significant step that aids in developing such systems. This paper presents a GAN-based model to perform retinal vasculature segmentation. The model achieves good results on the ARIA, DRIVE, and HRF datasets.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)

13 pages, 3662 KiB  
Article
Prediction of Contaminated Areas Using Ultraviolet Fluorescence Markers for Medical Simulation: A Mobile Phone Application Approach
by Po-Wei Chiu, Chien-Te Hsu, Shao-Peng Huang, Wu-Yao Chiou and Chih-Hao Lin
Bioengineering 2023, 10(5), 530; https://doi.org/10.3390/bioengineering10050530 - 26 Apr 2023
Viewed by 1116
Abstract
The use of ultraviolet fluorescence markers in medical simulations has become popular in recent years, especially during the COVID-19 pandemic. Healthcare workers use ultraviolet fluorescence markers to replace pathogens or secretions, and then calculate the regions of contamination. Health providers can use bioimage processing software to calculate the area and quantity of fluorescent dyes. However, traditional image processing software has its limitations and lacks real-time capabilities, making it more suitable for laboratory use than for clinical settings. In this study, mobile phones were used to measure areas contaminated during medical treatment. During the research process, a mobile phone camera was used to photograph the contaminated regions at an orthogonal angle. The fluorescence marker-contaminated area and photographed image area were proportionally related. The areas of contaminated regions can be calculated using this relationship. We used Android Studio software to write a mobile application to convert photos and recreate the true contaminated area. In this application, color photographs are converted into grayscale, and then into black and white binary photographs using binarization. After this process, the fluorescence-contaminated area is calculated easily. The results of our study showed that within a limited distance (50–100 cm) and with controlled ambient light, the error in the calculated contamination area was 6%. This study provides a low-cost, easy, and ready-to-use tool for healthcare workers to estimate the area of fluorescent dye regions during medical simulations. This tool can promote medical education and training on infectious disease preparation.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
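The grayscale-then-binarize-then-count pipeline the abstract describes can be sketched with NumPy. This is an illustrative reading of the method, not the authors' Android implementation: the function name `contaminated_area`, the fixed luminance threshold, and the cm²-per-pixel calibration constant are assumptions (the real calibration would come from the proportional relationship between physical area and image area at a fixed, orthogonal camera distance).

```python
import numpy as np

def contaminated_area(rgb, threshold=128, cm2_per_pixel=0.0025):
    """Sketch of the app's pipeline: RGB -> grayscale -> binary mask ->
    pixel count scaled to physical area in cm^2."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # standard luminance conversion
    mask = gray >= threshold                        # binarization: bright = fluorescent
    return mask.sum() * cm2_per_pixel               # area proportional to pixel count
```

In practice the threshold would need tuning to the UV lighting conditions, which is consistent with the study's requirement of controlled ambient light and a limited 50–100 cm shooting distance.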

Other


26 pages, 2031 KiB  
Systematic Review
Artificial Intelligence Applications for Osteoporosis Classification Using Computed Tomography
by Wilson Ong, Ren Wei Liu, Andrew Makmur, Xi Zhen Low, Weizhong Jonathan Sng, Jiong Hao Tan, Naresh Kumar and James Thomas Patrick Decourcy Hallinan
Bioengineering 2023, 10(12), 1364; https://doi.org/10.3390/bioengineering10121364 - 27 Nov 2023
Cited by 2 | Viewed by 1648
Abstract
Osteoporosis, marked by low bone mineral density (BMD) and a high fracture risk, is a major health issue. Recent progress in medical imaging, especially CT scans, offers new ways of diagnosing and assessing osteoporosis. This review examines the use of AI analysis of CT scans to stratify BMD and diagnose osteoporosis. By summarizing the relevant studies, we aimed to assess the effectiveness, constraints, and potential impact of AI-based osteoporosis classification (severity) via CT. A systematic search of electronic databases (PubMed, MEDLINE, Web of Science, ClinicalTrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 39 articles were retrieved from the databases, and the key findings were compiled and summarized, including the regions analyzed, the type of CT imaging, and their efficacy in predicting BMD compared with conventional DXA studies. Important considerations and limitations are also discussed. The overall reported accuracy, sensitivity, and specificity of AI in classifying osteoporosis using CT images ranged from 61.8% to 99.4%, 41.0% to 100.0%, and 31.0% to 100.0%, respectively, with areas under the curve (AUCs) ranging from 0.582 to 0.994. While additional research is necessary to validate the clinical efficacy and reproducibility of these AI tools before incorporating them into routine clinical practice, these studies demonstrate the promising potential of using CT to opportunistically predict and classify osteoporosis without the need for DXA.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)
