Machine Learning and Artificial Intelligence for Biomedical Applications

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biomedical Engineering and Biomaterials".

Deadline for manuscript submissions: closed (15 November 2023) | Viewed by 30052

Special Issue Editors


Dr. Crescenzio Gallo
Guest Editor
Department of Clinical and Experimental Medicine, Università degli studi di Foggia, 71122 Foggia, FG, Italy
Interests: bioinformatics; artificial intelligence

Dr. Gianluca Zaza
Guest Editor
Department of Computer Science, University of Bari "Aldo Moro", 70125 Bari, Italy
Interests: artificial intelligence; bioinformatics

Special Issue Information

Dear Colleagues,

In recent years, advances in information technology have enabled several scientific breakthroughs. Among the first to benefit from improved hardware have been the developers of artificial intelligence algorithms, who have applied these algorithms across many scientific fields, including biomedicine. Biomedicine is a field of medicine that applies the principles of biology and the natural sciences to the development of technologies for health care. The combination of artificial intelligence algorithms and biomedicine has led to many applications, such as:

  • Image analysis of human organs via magnetic resonance imaging (MRI);
  • Study of DNA/RNA sequencing and the prediction of protein structures and their interactions;
  • Analysis of various biosignals, such as electroencephalograms (EEG), electromyograms (EMG), and electrocardiograms (ECG).


In this context, machine learning algorithms enable us to learn from observational data and construct highly accurate artificial intelligence models to support the physician. However, high accuracy alone may not be enough, as AI-based biomedical decisions must also be understandable to the physician. It is therefore necessary to equip machine learning methods with explanatory capabilities, leading to explainable artificial intelligence (XAI) techniques that enable physicians to understand the decisions suggested by the models they use.
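As a toy illustration of the kind of explanation called for above (not a method proposed in this Special Issue), permutation importance asks how much a model's accuracy drops when one input feature is shuffled; a minimal sketch in which the model, data, and feature names are all hypothetical:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: how much does shuffling one
    feature degrade accuracy? A larger drop means more important."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy feature j's signal
            drops.append(baseline - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy "diagnostic model" that only looks at feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
imp = permutation_importance(predict, X, y)
# feature 0 dominates; shuffling features 1 and 2 changes nothing
```

A physician-facing explanation would then report which measurements actually drove the model's decision, rather than the bare prediction.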

Dr. Crescenzio Gallo
Dr. Gianluca Zaza
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (19 papers)


Research


18 pages, 1362 KiB  
Article
Research on Multimodal Fusion of Temporal Electronic Medical Records
by Moxuan Ma, Muyu Wang, Binyu Gao, Yichen Li, Jun Huang and Hui Chen
Bioengineering 2024, 11(1), 94; https://doi.org/10.3390/bioengineering11010094 - 18 Jan 2024
Viewed by 1100
Abstract
The surge in deep learning-driven EMR research has centered on harnessing diverse data forms. Yet, the fusion of diverse modalities within time series data remains underexplored. This study probes a multimodal fusion approach, merging temporal and non-temporal clinical notes along with tabular data. We leveraged data from 1271 myocardial infarction and 6450 stroke inpatients at a Beijing tertiary hospital. Our dataset encompassed static and time series note data, coupled with static and time series table data. The temporal data underwent a preprocessing phase, padding to a 30-day interval and segmenting into 3-day sub-sequences. These were fed into a long short-term memory (LSTM) network for sub-sequence representation. Multimodal attention gates were implemented for both static and temporal subsequence representations, culminating in fused representations. An attention-backtracking module was introduced for the latter, adept at capturing long-term dependencies in temporal fused representations. The concatenated results were channeled into an LSTM to yield the ultimate fused representation. Initially, the two note modalities were designated as primary modes, and the proposed fusion model was then compared with comparative models, including recent ones such as Crossformer. The proposed model consistently exhibited superior predictive performance in both tasks, and removing the attention-backtracking module led to a performance decline. The proposed method not only effectively integrates data from the four modalities but also handles irregular time series data and lengthy clinical texts well. An effective method is provided that is expected to be more widely used in multimodal medical data representation. Full article
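The preprocessing step described in the abstract (padding each stay to a 30-day interval, then segmenting into 3-day sub-sequences) can be sketched as follows; the array shapes and feature count are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def pad_and_segment(ts, total_days=30, sub_len=3):
    """Zero-pad a (days, features) series to `total_days`, then split
    it into non-overlapping `sub_len`-day sub-sequences."""
    n_days, n_feat = ts.shape
    padded = np.zeros((total_days, n_feat))
    padded[:min(n_days, total_days)] = ts[:total_days]
    # -> (total_days // sub_len, sub_len, n_feat), one row per sub-sequence
    return padded.reshape(total_days // sub_len, sub_len, n_feat)

stay = np.random.rand(17, 5)      # a 17-day stay with 5 tabular features
subseqs = pad_and_segment(stay)   # 10 sub-sequences of 3 days each
```

Each 3-day sub-sequence would then be encoded (by an LSTM in the paper) before the attention-gated fusion stages.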

14 pages, 2584 KiB  
Article
High-Accuracy Neuro-Navigation with Computer Vision for Frameless Registration and Real-Time Tracking
by Isabella Chiurillo, Raahil M. Sha, Faith C. Robertson, Jian Liu, Jacqueline Li, Hieu Le Mau, Jose M. Amich, William B. Gormley and Roman Stolyarov
Bioengineering 2023, 10(12), 1401; https://doi.org/10.3390/bioengineering10121401 - 07 Dec 2023
Viewed by 1542
Abstract
For the past three decades, neurosurgeons have utilized cranial neuro-navigation systems, bringing millimetric accuracy to operating rooms worldwide. These systems require an operating room team, anesthesia, and, most critically, cranial fixation. As a result, treatments for acute neurosurgical conditions, performed urgently in emergency rooms or intensive care units on awake and non-immobilized patients, have not benefited from traditional neuro-navigation. These emergent procedures are performed freehand, guided only by anatomical landmarks with no navigation, resulting in inaccurate catheter placement and neurological deficits. A rapidly deployable image-guidance technology that offers highly accurate, real-time registration and is capable of tracking awake, moving patients is needed to improve patient safety. The Zeta Cranial Navigation System is currently the only non-fiducial-based, FDA-approved neuro-navigation device that performs real-time registration and continuous patient tracking. To assess this system’s performance, we performed registration and tracking of phantoms and human cadaver heads during controlled motions and various adverse surgical test conditions. As a result, we obtained millimetric or sub-millimetric target and surface registration accuracy. This rapid and accurate frameless neuro-navigation system for mobile subjects can enhance bedside procedure safety and expand the range of interventions performed with high levels of accuracy outside of an operating room. Full article
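The Zeta system's registration algorithm is not described here, but the standard way to quantify target registration accuracy is point-based rigid registration; a minimal sketch using the Kabsch algorithm on noise-free synthetic fiducials, with a hypothetical surgical target:

```python
import numpy as np

def rigid_register(P, Q):
    """Kabsch: best-fit rotation R and translation t mapping points P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    return R, cQ - R @ cP

# Simulate a known rotation + translation, recover it, and measure
# target registration error (TRE) at a held-out target point.
rng = np.random.default_rng(0)
P = rng.normal(size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_register(P, Q)
target = np.array([0.5, 0.5, 0.5])             # hypothetical target
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
# noise-free case: TRE is essentially zero by construction
```

With real measurement noise, TRE at clinically relevant targets is what "millimetric or sub-millimetric accuracy" refers to.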

15 pages, 699 KiB  
Article
Assessment of the Performances of the Protein Modeling Techniques Participating in CASP15 Using a Structure-Based Functional Site Prediction Approach: ResiRole
by Geoffrey J. Huang, Thomas K. Parry and William A. McLaughlin
Bioengineering 2023, 10(12), 1377; https://doi.org/10.3390/bioengineering10121377 - 30 Nov 2023
Viewed by 1039
Abstract
Background: Model quality assessments via computational methods which entail comparisons of the modeled structures to the experimentally determined structures are essential in the field of protein structure prediction. The assessments provide means to benchmark the accuracies of the modeling techniques and to aid with their development. We previously described the ResiRole method to gauge model quality principally based on the preservation of the structural characteristics described in SeqFEATURE functional site prediction models. Methods: We apply ResiRole to benchmark modeling group performances in the Critical Assessment of Structure Prediction experiment, round 15. To gauge model quality, a normalized Predicted Functional site Similarity Score (PFSS) was calculated as the average of one minus the absolute values of the differences of the functional site prediction probabilities, as found for the experimental structures versus those found at the corresponding sites in the structure models. Results: The average PFSS per modeling group (gPFSS) correlates with standard quality metrics, and can effectively be used to rank the accuracies of the groups. For the free modeling (FM) category, correlation coefficients of the Local Distance Difference Test (LDDT) and Global Distance Test-Total Score (GDT-TS) metrics with gPFSS were 0.98239 and 0.87691, respectively. An example finding for a specific group is that the gPFSS for EMBER3D was higher than expected based on the predictive relationship between gPFSS and LDDT. We infer the result is due to the use of constraints imprinted by function that are a part of the EMBER3D methodology. Also, we find functional site predictions that may guide further functional characterizations of the respective proteins. 
Conclusion: The gPFSS metric provides an effective means to assess and rank the performances of the structure prediction techniques according to their abilities to accurately recount the structural features at predicted functional sites. Full article
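The PFSS definition quoted above (the average of one minus the absolute differences of site prediction probabilities) reduces to a one-line computation; a minimal sketch with hypothetical site probabilities:

```python
import numpy as np

def pfss(p_exp, p_model):
    """Predicted Functional site Similarity Score: the average of
    1 - |p_experimental - p_model| over matched functional sites."""
    p_exp, p_model = np.asarray(p_exp), np.asarray(p_model)
    return np.mean(1.0 - np.abs(p_exp - p_model))

# Site-level prediction probabilities (hypothetical values):
exp_probs   = [0.90, 0.10, 0.75, 0.40]
model_probs = [0.85, 0.20, 0.70, 0.50]
score = pfss(exp_probs, model_probs)   # 1 - mean(|diff|) = 0.925
```

Averaging PFSS over all models from one group gives the gPFSS used for the rankings.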

18 pages, 4633 KiB  
Article
Gesture Classification in Electromyography Signals for Real-Time Prosthetic Hand Control Using a Convolutional Neural Network-Enhanced Channel Attention Model
by Guangjie Yu, Ziting Deng, Zhenchen Bao, Yue Zhang and Bingwei He
Bioengineering 2023, 10(11), 1324; https://doi.org/10.3390/bioengineering10111324 - 16 Nov 2023
Viewed by 1145
Abstract
Accurate and real-time gesture recognition is required for the autonomous operation of prosthetic hand devices. This study employs a convolutional neural network-enhanced channel attention (CNN-ECA) model to provide a unique approach for surface electromyography (sEMG) gesture recognition. The introduction of the ECA module improves the model’s capacity to extract features and focus on critical information in the sEMG data, thus simultaneously equipping the sEMG-controlled prosthetic hand systems with the characteristics of accurate gesture detection and real-time control. Furthermore, we suggest a preprocessing strategy for extracting envelope signals that incorporates Butterworth low-pass filtering and the fast Hilbert transform (FHT), which can successfully reduce noise interference and capture essential physiological information. Finally, the majority voting window technique is adopted to enhance the prediction results, further improving the accuracy and stability of the model. Overall, our multi-layered convolutional neural network model, in conjunction with envelope signal extraction and attention mechanisms, offers a promising and innovative approach for real-time control systems in prosthetic hands, allowing for precise fine motor actions. Full article
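The envelope-extraction and majority-voting steps can be sketched as below; this uses SciPy's standard Hilbert transform and Butterworth filter design as a stand-in for the paper's exact pipeline, with the cutoff, filter order, and window size chosen for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def emg_envelope(sig, fs, cutoff=5.0, order=4):
    """Hilbert-transform envelope smoothed by a zero-phase
    low-pass Butterworth filter."""
    env = np.abs(hilbert(sig))                 # instantaneous amplitude
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)                 # zero-phase smoothing

def majority_vote(preds, win=5):
    """Smooth per-frame gesture labels with a sliding majority window."""
    out = preds.copy()
    for i in range(len(preds)):
        w = preds[max(0, i - win // 2): i + win // 2 + 1]
        out[i] = np.bincount(w).argmax()
    return out

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
sig = np.sin(2 * np.pi * 80 * t) * (1 + 0.5 * np.sin(2 * np.pi * 2 * t))
env = emg_envelope(sig, fs)        # the slow 2 Hz modulation survives
labels = majority_vote(np.array([1, 1, 2, 1, 1, 3, 3, 3, 3, 3]))
```

The vote removes the spurious gesture label (the lone 2), which is the stability effect the abstract describes.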

12 pages, 7066 KiB  
Article
Artificial Intelligence Algorithms for Benign vs. Malignant Dermoscopic Skin Lesion Image Classification
by Francesca Brutti, Federica La Rosa, Linda Lazzeri, Chiara Benvenuti, Giovanni Bagnoni, Daniela Massi and Marco Laurino
Bioengineering 2023, 10(11), 1322; https://doi.org/10.3390/bioengineering10111322 - 16 Nov 2023
Viewed by 1018
Abstract
In recent decades, the incidence of melanoma has grown rapidly. Hence, early diagnosis is crucial to improving clinical outcomes. Here, we propose and compare a classical image analysis-based machine learning method with a deep learning one to automatically classify benign vs. malignant dermoscopic skin lesion images. The same dataset of 25,122 publicly available dermoscopic images was used to train both models, while a disjointed test set of 200 images was used for the evaluation phase. The training dataset was randomly divided into 10 datasets of 19,932 images to obtain an equal distribution between the two classes. By testing both models on the disjoint set, the deep learning-based method returned accuracy of 85.4 ± 3.2% and specificity of 75.5 ± 7.6%, while the machine learning one showed accuracy and specificity of 73.8 ± 1.1% and 44.5 ± 4.7%, respectively. Although both approaches performed well in the validation phase, the convolutional neural network outperformed the ensemble boosted tree classifier on the disjoint test set, showing better generalization ability. The integration of new melanoma detection algorithms with digital dermoscopic devices could enable a faster screening of the population, improve patient management, and achieve better survival rates. Full article

17 pages, 997 KiB  
Article
Enhancing Taxonomic Categorization of DNA Sequences with Deep Learning: A Multi-Label Approach
by Prommy Sultana Hossain, Kyungsup Kim, Jia Uddin, Md Abdus Samad and Kwonhue Choi
Bioengineering 2023, 10(11), 1293; https://doi.org/10.3390/bioengineering10111293 - 08 Nov 2023
Cited by 1 | Viewed by 1267
Abstract
The application of deep learning for taxonomic categorization of DNA sequences is investigated in this study. Two deep learning architectures, namely the Stacked Convolutional Autoencoder (SCAE) with Multilabel Extreme Learning Machine (MLELM) and the Variational Convolutional Autoencoder (VCAE) with MLELM, have been proposed. These designs provide precise feature maps for individual and inter-label interactions within DNA sequences, capturing their spatial and temporal properties. The collected features are subsequently fed into MLELM networks, which yield soft classification scores and hard labels. The proposed algorithms underwent thorough training and testing on unsupervised data, whereby one or more labels were concurrently taken into account. The introduction of the clade label resulted in improved accuracy for both models compared to the class or genus labels, probably owing to the occurrence of large clusters of similar nucleotides inside a DNA strand. In all circumstances, the VCAE-MLELM model consistently outperformed the SCAE-MLELM model. The best accuracy attained by the VCAE-MLELM model when the clade and family labels were combined was 94%. However, accuracy ratings for single-label categorization using either approach were less than 65%. The approach’s effectiveness is based on MLELM networks, which record connected patterns across classes for accurate label categorization. This study advances deep learning in biological taxonomy by emphasizing the significance of combining numerous labels for increased classification accuracy. Full article

14 pages, 676 KiB  
Article
Improved Network and Training Scheme for Cross-Trial Surface Electromyography (sEMG)-Based Gesture Recognition
by Qingfeng Dai, Yongkang Wong, Mohan Kankanhalli, Xiangdong Li and Weidong Geng
Bioengineering 2023, 10(9), 1101; https://doi.org/10.3390/bioengineering10091101 - 20 Sep 2023
Cited by 1 | Viewed by 1000
Abstract
To enhance the performance of surface electromyography (sEMG)-based gesture recognition, we propose a novel network-agnostic two-stage training scheme, called sEMGPoseMIM, that produces trial-invariant representations to be aligned with corresponding hand movements via cross-modal knowledge distillation. In the first stage, an sEMG encoder is trained via cross-trial mutual information maximization using the sEMG sequences sampled from the same time step but different trials in a contrastive learning manner. In the second stage, the learned sEMG encoder is fine-tuned with the supervision of gesture and hand movements in a knowledge-distillation manner. In addition, we propose a novel network called sEMGXCM as the sEMG encoder. Comprehensive experiments on seven sparse multichannel sEMG databases are conducted to demonstrate the effectiveness of the training scheme sEMGPoseMIM and the network sEMGXCM, which achieves an average improvement of +1.3% on the sparse multichannel sEMG databases compared to the existing methods. Furthermore, the comparison between training sEMGXCM and other existing networks from scratch shows that sEMGXCM outperforms the others by an average of +1.5%. Full article
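Cross-trial mutual information maximization "in a contrastive learning manner" is commonly implemented with an InfoNCE-style loss; a minimal NumPy sketch (not the authors' code — the batch size, temperature, and embeddings are illustrative):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: each anchor should be most similar to
    its own positive (same time step, different trial) within the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # diagonal = matched pairs

rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))
aligned = info_nce(z, z + 0.01 * rng.normal(size=(16, 8)))
random_ = info_nce(z, rng.normal(size=(16, 8)))
# aligned cross-trial pairs yield a much lower loss than unrelated pairs
```

Minimizing such a loss pushes the encoder toward trial-invariant sEMG representations, which is the goal of the paper's first stage.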

21 pages, 3609 KiB  
Article
Deep Learning of Speech Data for Early Detection of Alzheimer’s Disease in the Elderly
by Kichan Ahn, Minwoo Cho, Suk Wha Kim, Kyu Eun Lee, Yoojin Song, Seok Yoo, So Yeon Jeon, Jeong Lan Kim, Dae Hyun Yoon and Hyoun-Joong Kong
Bioengineering 2023, 10(9), 1093; https://doi.org/10.3390/bioengineering10091093 - 18 Sep 2023
Viewed by 1373
Abstract
Background: Alzheimer’s disease (AD) is the most common form of dementia, which makes the lives of patients and their families difficult for various reasons. Therefore, early detection of AD is crucial to alleviating the symptoms through medication and treatment. Objective: Given that AD strongly induces language disorders, this study aims to detect AD rapidly by analyzing the language characteristics. Materials and Methods: The mini-mental state examination for dementia screening (MMSE-DS), which is most commonly used in South Korean public health centers, is used to obtain negative answers based on the questionnaire. Among the acquired voices, significant questionnaires and answers are selected and converted into mel-frequency cepstral coefficient (MFCC)-based spectrogram images. After accumulating the significant answers, validated data augmentation was achieved using the Densenet121 model. Five deep learning models, Inception v3, VGG19, Xception, Resnet50, and Densenet121, were used to train and confirm the results. Results: Considering the amount of data, the results of the five-fold cross-validation are more significant than those of the hold-out method. Densenet121 exhibits a sensitivity of 0.9550, a specificity of 0.8333, and an accuracy of 0.9000 in a five-fold cross-validation to separate AD patients from the control group. Conclusions: The potential for remote health care can be increased by simplifying the AD screening process. Furthermore, by facilitating remote health care, the proposed method can enhance the accessibility of AD screening and increase the rate of early AD detection. Full article
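The reported sensitivity, specificity, and accuracy follow directly from confusion-matrix counts; a minimal sketch with made-up labels (1 = AD, 0 = control):

```python
import numpy as np

def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy for binary screening."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return (tp / (tp + fn),            # sensitivity (recall on AD)
            tn / (tn + fp),            # specificity
            (tp + tn) / len(y_true))   # accuracy

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec, acc = screening_metrics(y_true, y_pred)   # 0.75, 0.75, 0.75
```

In the paper these metrics are averaged across the five cross-validation folds rather than computed on a single hold-out split.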

25 pages, 8674 KiB  
Article
Multivariate CNN Model for Human Locomotion Activity Recognition with a Wearable Exoskeleton Robot
by Chang-Sik Son and Won-Seok Kang
Bioengineering 2023, 10(9), 1082; https://doi.org/10.3390/bioengineering10091082 - 13 Sep 2023
Cited by 1 | Viewed by 1005
Abstract
This study introduces a novel convolutional neural network (CNN) architecture, encompassing both single and multi-head designs, developed to identify a user’s locomotion activity while using a wearable lower limb robot. Our research involved 500 healthy adult participants in an activities of daily living (ADL) space, conducted from 1 September to 30 November 2022. We collected prospective data to identify five locomotion activities (level ground walking, stair ascent/descent, and ramp ascent/descent) across three terrains: flat ground, staircase, and ramp. To evaluate the predictive capabilities of the proposed CNN architectures, we compared its performance with three other models: one CNN and two hybrid models (CNN-LSTM and LSTM-CNN). Experiments were conducted using multivariate signals of various types obtained from electromyograms (EMGs) and the wearable robot. Our results reveal that the deeper CNN architecture significantly surpasses the performance of the three competing models. The proposed model, leveraging encoder data such as hip angles and velocities, along with postural signals such as roll, pitch, and yaw from the wearable lower limb robot, achieved superior performance with an inference speed of 1.14 s. Specifically, the F-measure performance of the proposed model reached 96.17%, compared to 90.68% for DDLMI, 94.41% for DeepConvLSTM, and 95.57% for LSTM-CNN, respectively. Full article

10 pages, 2532 KiB  
Article
Grad-CAM-Based Explainable Artificial Intelligence Related to Medical Text Processing
by Hongjian Zhang and Katsuhiko Ogasawara
Bioengineering 2023, 10(9), 1070; https://doi.org/10.3390/bioengineering10091070 - 10 Sep 2023
Cited by 4 | Viewed by 2263
Abstract
The opacity of deep learning makes its application challenging in the medical field. Therefore, there is a need to enable explainable artificial intelligence (XAI) in the medical field to ensure that models and their results can be explained in a manner that humans can understand. This study uses a high-accuracy computer vision algorithm model to transfer learning to medical text tasks and uses the explanatory visualization method known as gradient-weighted class activation mapping (Grad-CAM) to generate heat maps to ensure that the basis for decision-making can be provided intuitively or via the model. The system comprises four modules: pre-processing, word embedding, classifier, and visualization. We used Word2Vec and BERT to compare word embeddings and use ResNet and 1Dimension convolutional neural networks (CNN) to compare classifiers. Finally, the Bi-LSTM was used to perform text classification for direct comparison. With 25 epochs, the model that used pre-trained ResNet on the formalized text presented the best performance (recall of 90.9%, precision of 91.1%, and an F1 score of 90.2% weighted). This study uses ResNet to process medical texts through Grad-CAM-based explainable artificial intelligence and obtains a high-accuracy classification effect; at the same time, through Grad-CAM visualization, it intuitively shows the words to which the model pays attention when making predictions. Full article
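Grad-CAM's core computation — weighting each feature map by its globally averaged gradient, summing, and applying a ReLU — can be sketched in a few lines; the activations and gradients below are mock values for a hypothetical 1D text CNN, not outputs of the paper's model:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM relevance map for a 1D feature map of shape
    (channels, length): ReLU(sum_k alpha_k * A_k), with
    alpha_k the globally averaged gradient of channel k."""
    weights = gradients.mean(axis=1)               # alpha_k = GAP(dy/dA_k)
    cam = np.maximum(weights @ activations, 0.0)   # ReLU of weighted sum
    return cam / (cam.max() + 1e-8)                # normalise to [0, 1]

# Mock a 4-channel, 10-token feature map where token 3 drives the class.
acts = np.zeros((4, 10)); acts[:, 3] = 1.0
grads = np.ones((4, 10))
heat = grad_cam(acts, grads)       # token 3 gets the highest relevance
```

Overlaying such a map on the input text is what lets the clinician see which words the classifier attended to.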

14 pages, 5878 KiB  
Article
Novel Multivariable Evolutionary Algorithm-Based Method for Modal Reconstruction of the Corneal Surface from Sparse and Incomplete Point Clouds
by Francisco L. Sáez-Gutiérrez, Jose S. Velázquez, Jorge L. Alió del Barrio, Jorge L. Alio and Francisco Cavas
Bioengineering 2023, 10(8), 989; https://doi.org/10.3390/bioengineering10080989 - 21 Aug 2023
Cited by 2 | Viewed by 810
Abstract
Three-dimensional reconstruction of the corneal surface provides a powerful tool for managing corneal diseases. This study proposes a novel method for reconstructing the corneal surface from elevation point clouds, using modal schemes capable of reproducing corneal shapes using surface polynomial functions. The multivariable polynomial fitting was performed using a non-dominated sorting multivariable genetic algorithm (NS-MVGA). Standard reconstruction methods using least-squares discrete fitting (LSQ) and sequential quadratic programming (SQP) were compared with the evolutionary algorithm-based approach. The study included 270 corneal surfaces of 135 eyes of 102 patients (ages 11–63) sorted in two groups: control (66 eyes of 33 patients) and keratoconus (KC) (69 eyes of 69 patients). Tomographic information (Sirius, Costruzione Strumenti Oftalmici, Italy) was processed using Matlab. The goodness of fit for each method was evaluated using mean squared error (MSE), measured at the same nodes where the elevation data were collected. Polynomial fitting based on NS-MVGA improves MSE values by 86% compared to LSQ-based methods in healthy patients. Moreover, this new method improves aberrated surface reconstruction by an average value of 56% if compared with LSQ-based methods in keratoconus patients. Finally, significant improvements were also found in morpho-geometric parameters, such as asphericity and corneal curvature radii. Full article
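The LSQ baseline that the evolutionary method is compared against is a standard least-squares modal fit; a sketch on a synthetic elevation cloud (the polynomial basis and degree are illustrative, not the paper's exact scheme):

```python
import numpy as np

def fit_poly_surface(x, y, z, degree=4):
    """Least-squares modal fit z ~ sum c_ij * x^i * y^j with
    i + j <= degree, solved as a linear system."""
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

def mse_at_nodes(x, y, z, terms, coeffs):
    """Goodness of fit at the same nodes where elevation was measured."""
    zhat = sum(c * x**i * y**j for (i, j), c in zip(terms, coeffs))
    return np.mean((z - zhat) ** 2)

# Synthetic "corneal elevation" cloud from a known quadratic surface.
rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300)
z = 0.5 * x**2 + 0.3 * y**2 - 0.1 * x * y
terms, coeffs = fit_poly_surface(x, y, z, degree=2)
err = mse_at_nodes(x, y, z, terms, coeffs)   # ~0 on noise-free data
```

The genetic-algorithm variant replaces the closed-form solve with an evolutionary search over the coefficients, which is where the reported MSE gains come from.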

18 pages, 397 KiB  
Article
Detecting Dementia from Face-Related Features with Automated Computational Methods
by Chuheng Zheng, Mondher Bouazizi, Tomoaki Ohtsuki, Momoko Kitazawa, Toshiro Horigome and Taishiro Kishimoto
Bioengineering 2023, 10(7), 862; https://doi.org/10.3390/bioengineering10070862 - 20 Jul 2023
Cited by 1 | Viewed by 2072
Abstract
Alzheimer’s disease (AD) is a type of dementia that is more likely to occur as people age. It currently has no known cure. As the world’s population is aging quickly, early screening for AD has become increasingly important. Traditional screening methods such as brain scans or psychiatric tests are stressful and costly. The patients are likely to feel reluctant to such screenings and fail to receive timely intervention. While researchers have been exploring the use of language in dementia detection, less attention has been given to face-related features. The paper focuses on investigating how face-related features can aid in detecting dementia by exploring the PROMPT dataset that contains video data collected from patients with dementia during interviews. In this work, we extracted three types of features from the videos, including face mesh, Histogram of Oriented Gradients (HOG) features, and Action Units (AU). We trained traditional machine learning models and deep learning models on the extracted features and investigated their effectiveness in dementia detection. Our experiments show that the use of HOG features achieved the highest accuracy of 79% in dementia detection, followed by AU features with 71% accuracy, and face mesh features with 66% accuracy. Our results show that face-related features have the potential to be a crucial indicator in automated computational dementia detection. Full article

36 pages, 13076 KiB  
Article
EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset
by Akella Subrahmanya Narasimha Raju and Kaliyamurthy Venkatesh
Bioengineering 2023, 10(6), 738; https://doi.org/10.3390/bioengineering10060738 - 19 Jun 2023
Cited by 1 | Viewed by 1334
Abstract
Colorectal cancer is associated with a high mortality rate and significant patient risk. Images obtained during a colonoscopy are used to make a diagnosis, highlighting the importance of timely diagnosis and treatment. Using techniques of deep learning could enhance the diagnostic accuracy of existing systems. Using the most advanced deep learning techniques, a brand-new EnsemDeepCADx system for accurate colorectal cancer diagnosis has been developed. The optimal accuracy is achieved by combining Convolutional Neural Networks (CNNs) with transfer learning via bidirectional long short-term memory (BILSTM) and support vector machines (SVM). Four pre-trained CNN models comprise the ADaDR-22, ADaR-22, and DaRD-22 ensemble CNNs: AlexNet, DarkNet-19, DenseNet-201, and ResNet-50. In each of its stages, the CADx system is thoroughly evaluated. From the CKHK-22 mixed dataset, colour, greyscale, and local binary pattern (LBP) image datasets and features are utilised. In the second stage, the returned features are compared to a new feature fusion dataset using three distinct CNN ensembles. Next, they incorporate ensemble CNNs with SVM-based transfer learning by comparing raw features to feature fusion datasets. In the final stage of transfer learning, BILSTM and SVM are combined with a CNN ensemble. The testing accuracy for the ensemble fusion CNN DarD-22 using BILSTM and SVM on the original, grey, LBP, and feature fusion datasets was optimal (95.96%, 88.79%, 73.54%, and 97.89%). Comparing the outputs of all four feature datasets with those of the three ensemble CNNs at each stage enables the EnsemDeepCADx system to attain its highest level of accuracy. Full article
17 pages, 5681 KiB  
Article
Self-Attention MHDNet: A Novel Deep Learning Model for the Detection of R-Peaks in the Electrocardiogram Signals Corrupted with Magnetohydrodynamic Effect
by Moajjem Hossain Chowdhury, Muhammad E. H. Chowdhury, Muhammad Salman Khan, Md Asad Ullah, Sakib Mahmud, Amith Khandakar, Alvee Hassan, Anas M. Tahir and Anwarul Hasan
Bioengineering 2023, 10(5), 542; https://doi.org/10.3390/bioengineering10050542 - 28 Apr 2023
Cited by 3 | Viewed by 1461
Abstract
Magnetic resonance imaging (MRI) is commonly used in medical diagnosis and minimally invasive image-guided operations. During an MRI scan, the patient’s electrocardiogram (ECG) may be required for either gating or patient monitoring. However, the challenging environment of an MRI scanner, with its several types of magnetic fields, significantly distorts the collected ECG data through the magnetohydrodynamic (MHD) effect. These distortions can resemble irregular heartbeats, and they hamper the detection of QRS complexes and any more in-depth ECG-based diagnosis. This study aims to reliably detect R-peaks in ECG waveforms acquired in 3 Tesla (T) and 7 T magnetic fields. A novel model, Self-Attention MHDNet, is proposed to detect R-peaks from MHD-corrupted ECG signals through 1D segmentation. The proposed model achieves a recall and precision of 99.83% and 99.68%, respectively, on ECG data acquired in a 3 T setting, and of 99.87% and 99.78%, respectively, in a 7 T setting. The model can thus be used to accurately gate the trigger pulse for cardiovascular functional MRI.
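The 1D-segmentation formulation can be illustrated without the network itself: the model labels each ECG sample as peak-region or background, and R-peak locations are then recovered from that binary mask. A minimal post-processing sketch (the run-to-centre rule is an assumption, not necessarily the authors' exact decoding step):

```python
import numpy as np

def peaks_from_mask(mask):
    """Recover R-peak indices from a 1D segmentation mask (1 = peak region),
    taking the centre of each contiguous run of ones."""
    edges = np.diff(np.concatenate(([0], mask, [0])))
    starts = np.flatnonzero(edges == 1)   # run begins
    ends = np.flatnonzero(edges == -1)    # run ends (exclusive)
    return [(s + e - 1) // 2 for s, e in zip(starts, ends)]

mask = np.array([0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])
print(peaks_from_mask(mask))  # [3, 8]
```

Recall and precision are then computed by matching recovered peaks to annotated ones within a small tolerance window.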
25 pages, 8130 KiB  
Article
Histopathological Analysis for Detecting Lung and Colon Cancer Malignancies Using Hybrid Systems with Fused Features
by Mohammed Al-Jabbar, Mohammed Alshahrani, Ebrahim Mohammed Senan and Ibrahim Abdulrab Ahmed
Bioengineering 2023, 10(3), 383; https://doi.org/10.3390/bioengineering10030383 - 21 Mar 2023
Cited by 6 | Viewed by 2054
Abstract
Lung and colon cancer are among humanity’s most common and deadly cancers. In 2020, 4.19 million people were diagnosed with lung or colon cancer worldwide, and more than 2.7 million died. Some people develop lung and colon cancer simultaneously, owing to smoking, which causes lung cancer, combined with an unhealthy diet, which contributes to colon cancer. There are many techniques for diagnosing lung and colon cancer, most notably biopsy and its analysis in laboratories, but health centers and medical staff are scarce, especially in developing countries. Moreover, manual diagnosis takes a long time and is subject to differing opinions among doctors. Artificial intelligence techniques can address these challenges. In this study, three strategies, each with two systems, were developed for the early diagnosis of histological images from the LC25000 dataset. The histological images were enhanced, and the contrast of affected areas was increased. The GoogLeNet and VGG-19 models in all systems produced high-dimensional features, so redundant and unnecessary features were removed with the PCA method to reduce dimensionality while retaining the essential features. The first strategy diagnoses the histological images of the LC25000 dataset using an ANN with the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses an ANN with the combined features of GoogLeNet and VGG-19: one system reduces the dimensionality of each feature set before combining them, while the other combines the high-dimensional features first and then reduces the dimensionality. The third strategy uses an ANN with features of the CNN models (GoogLeNet and VGG-19) fused with handcrafted features. With the fused VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%.
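The two fusion orders in the second strategy (reduce-then-combine versus combine-then-reduce) can be sketched with PCA directly. Feature dimensions here are the standard GoogLeNet/VGG-19 pooled sizes, and the sample count and component counts are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Stand-ins for per-image CNN features.
feats_googlenet = rng.normal(size=(60, 1024))   # GoogLeNet pooled output
feats_vgg19 = rng.normal(size=(60, 4096))       # VGG-19 fc-layer output

# System A: reduce each CNN's features first, then combine.
a = np.hstack([PCA(n_components=25).fit_transform(feats_googlenet),
               PCA(n_components=25).fit_transform(feats_vgg19)])

# System B: combine the raw features first, then reduce.
b = PCA(n_components=50).fit_transform(np.hstack([feats_googlenet, feats_vgg19]))

print(a.shape, b.shape)  # both (60, 50)
```

Either reduced representation (optionally fused with handcrafted features) is then fed to the ANN classifier.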
9 pages, 244 KiB  
Article
Machine Learning and BMI Improve the Prognostic Value of GAP Index in Treated IPF Patients
by Donato Lacedonia, Cosimo Carlo De Pace, Gaetano Rea, Ludovica Capitelli, Crescenzio Gallo, Giulia Scioscia, Pasquale Tondo and Marialuisa Bocchino
Bioengineering 2023, 10(2), 251; https://doi.org/10.3390/bioengineering10020251 - 14 Feb 2023
Cited by 3 | Viewed by 1435
Abstract
Patients affected by idiopathic pulmonary fibrosis (IPF) have a high mortality rate in the first 2–5 years from diagnosis. It is therefore necessary to identify a prognostic indicator that can guide the care process. The Gender-Age-Physiology (GAP) index and staging system is an easy-to-calculate prediction tool, widely validated and largely used in clinical practice to estimate the risk of mortality of IPF patients at 1–3 years. In our study, we analyzed the GAP index through machine learning to assess any improvement in its predictive power in a large cohort of IPF patients treated with either pirfenidone or nintedanib, and we evaluated whether integrating additional parameters improves the prediction. As previously reported by Y. Suzuki et al., our data show that inclusion of the body mass index (BMI) is the best strategy to reinforce the GAP performance in IPF patients under treatment with the currently available anti-fibrotic drugs.
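For readers unfamiliar with the GAP index, the scoring can be sketched as a simple point sum. The point assignments below follow the commonly cited GAP scheme (gender, age bands, FVC % predicted, DLCO % predicted); they should be verified against the original GAP publication before any clinical use, and the function is an illustration, not part of this study's code:

```python
def gap_index(male, age, fvc_pct, dlco_pct=None):
    """GAP score (0-8) from gender, age and physiology (FVC% and DLCO% predicted).
    Point bands follow the commonly cited GAP scheme; verify before clinical use."""
    score = 1 if male else 0
    score += 0 if age <= 60 else (1 if age <= 65 else 2)
    score += 0 if fvc_pct > 75 else (1 if fvc_pct >= 50 else 2)
    if dlco_pct is None:      # DLCO could not be performed
        score += 3
    else:
        score += 0 if dlco_pct > 55 else (1 if dlco_pct >= 36 else 2)
    return score

def gap_stage(score):
    """Map a GAP score to stage I (0-3), II (4-5) or III (6-8)."""
    return "I" if score <= 3 else ("II" if score <= 5 else "III")

print(gap_index(male=True, age=70, fvc_pct=60, dlco_pct=40))  # 5 (stage II)
```

The study's machine-learning models take this score, plus extra covariates such as BMI, as input features.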
15 pages, 1902 KiB  
Article
Machine Learning-Based Respiration Rate and Blood Oxygen Saturation Estimation Using Photoplethysmogram Signals
by Md Nazmul Islam Shuzan, Moajjem Hossain Chowdhury, Muhammad E. H. Chowdhury, Murugappan Murugappan, Enamul Hoque Bhuiyan, Mohamed Arslane Ayari and Amith Khandakar
Bioengineering 2023, 10(2), 167; https://doi.org/10.3390/bioengineering10020167 - 28 Jan 2023
Cited by 12 | Viewed by 3271
Abstract
The continuous monitoring of respiratory rate (RR) and oxygen saturation (SpO2) is crucial for patients with cardiac, pulmonary, and surgical conditions. RR and SpO2 are used to assess the effectiveness of lung medications and ventilator support. In recent studies, the use of a photoplethysmogram (PPG) has been recommended for evaluating RR and SpO2. This research presents a novel method of estimating RR and SpO2 using machine learning models that incorporate PPG signal features. A number of established methods are used to extract meaningful features from the PPG. A feature selection approach was used to reduce the computational complexity and the possibility of overfitting. Nineteen regression models were trained separately for RR and for SpO2, from which the most appropriate model was selected. The Gaussian process regression model outperformed all the other models for both RR and SpO2 estimation. The mean absolute error (MAE) for RR was 0.89, while the root-mean-squared error (RMSE) was 1.41. For SpO2, the model had an RMSE of 0.98 and an MAE of 0.57. The proposed system is a state-of-the-art approach for estimating RR and SpO2 reliably from the PPG. If RR and SpO2 can be consistently and effectively derived from the PPG signal, patients can monitor their RR and SpO2 at lower cost and with less hassle.
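The winning model family, Gaussian process regression, is available off the shelf; a minimal sketch of fitting PPG-derived features to RR targets follows. The features and targets here are synthetic stand-ins, not the study's dataset, and the kernel choice is an assumption:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
# Stand-ins for selected PPG features (e.g. pulse-interval statistics).
X = rng.normal(size=(40, 5))
# Synthetic RR targets in breaths per minute, driven by the first feature.
rr = 12 + 2 * X[:, 0] + rng.normal(scale=0.1, size=40)

gpr = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, rr)
pred, std = gpr.predict(X[:5], return_std=True)  # mean and uncertainty
```

A practical advantage of Gaussian processes for monitoring is the per-prediction uncertainty (`std`), which can flag unreliable RR or SpO2 estimates.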
Review


44 pages, 9163 KiB  
Review
Deep Learning Approaches for Quantifying Ventilation Defects in Hyperpolarized Gas Magnetic Resonance Imaging of the Lung: A Review
by Ramtin Babaeipour, Alexei Ouriadov and Matthew S. Fox
Bioengineering 2023, 10(12), 1349; https://doi.org/10.3390/bioengineering10121349 - 23 Nov 2023
Viewed by 989
Abstract
This paper provides an in-depth overview of deep neural networks and their application in the segmentation and analysis of lung magnetic resonance imaging (MRI) scans, focusing specifically on hyperpolarized gas MRI and the quantification of lung ventilation defects. The fundamentals of deep neural networks are presented first, laying the groundwork for an exploration of their use in hyperpolarized gas MRI and the quantification of lung ventilation defects. Five distinct studies are then examined, each leveraging unique deep learning architectures and data augmentation techniques to optimize model performance. These studies encompass a range of approaches, including 3D convolutional neural networks, cascaded U-Net models, generative adversarial networks, and nnU-Net for hyperpolarized gas MRI segmentation. The findings highlight the potential of deep learning methods for the segmentation and analysis of lung MRI scans and emphasize the need for consensus on lung ventilation segmentation methods.
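Once the segmentation networks have produced masks, ventilation defects are typically quantified as a ventilation defect percentage (VDP): the fraction of the thoracic cavity with no hyperpolarized-gas signal. A minimal sketch of that final computation (the exact thresholding pipeline varies by study):

```python
import numpy as np

def ventilation_defect_percent(ventilated, cavity):
    """Ventilation defect percentage: share of the thoracic-cavity mask
    that shows no hyperpolarized-gas signal. Both inputs are boolean masks."""
    defect = cavity & ~ventilated
    return 100.0 * defect.sum() / cavity.sum()

cavity = np.ones((4, 4), dtype=bool)      # toy thoracic-cavity mask
ventilated = cavity.copy()
ventilated[0, :] = False                  # one unventilated row out of four
print(ventilation_defect_percent(ventilated, cavity))  # 25.0
```

The review's call for consensus matters precisely here: VDP values are only comparable across studies if the cavity and ventilation masks are defined the same way.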
15 pages, 2794 KiB  
Review
The Potential of Deep Learning to Advance Clinical Applications of Computational Biomechanics
by George A. Truskey
Bioengineering 2023, 10(9), 1066; https://doi.org/10.3390/bioengineering10091066 - 09 Sep 2023
Cited by 1 | Viewed by 1171
Abstract
When combined with patient information provided by advanced imaging techniques, computational biomechanics can provide detailed patient-specific information about the stresses and strains acting on tissues, which can be useful in diagnosing and assessing treatments for diseases and injuries. This approach is most advanced in cardiovascular applications but can be applied to other tissues. The challenges in advancing computational biomechanics toward real-time patient diagnostics and treatment include errors and missing information in the patient data, the large computational requirements of the numerical solutions to multiscale biomechanical equations, and the uncertainty over boundary conditions and constitutive relations. This review summarizes current efforts to use deep learning to address these challenges and to integrate large data sets and computational methods to enable real-time clinical information. Examples are drawn from cardiovascular fluid mechanics, soft-tissue mechanics, and bone biomechanics. The application of deep learning convolutional neural networks can reduce the time taken to complete image segmentation and the meshing and solution of finite element models, as well as improve the accuracy of inlet and outlet conditions. Such advances are likely to facilitate the adoption of these models to aid in the assessment of the severity of cardiovascular disease and the development of new surgical treatments.
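The core idea of replacing an expensive finite element solve with a learned model can be sketched as a surrogate regression: train on solver outputs offline, then predict in milliseconds at the bedside. Everything here is a hypothetical toy (input parameters, the analytic stand-in for the solver, and the network size), intended only to show the surrogate pattern:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
# Hypothetical geometry/boundary inputs: radius, wall thickness, inlet pressure.
X = rng.uniform(size=(200, 3))
# Synthetic stand-in for peak wall stress that an FE solver would compute.
y = X[:, 2] * X[:, 0] / (X[:, 1] + 0.1)

# Offline: fit the surrogate on precomputed solver runs.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)
# Online: near-instant stress estimates for new patient parameters.
pred = surrogate.predict(X[:5])
```

In the studies reviewed, the surrogates are typically convolutional networks acting on segmented images rather than small parameter vectors, but the offline-train/online-predict split is the same.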