Topic Editors

Department of Computer Science and Engineering, Division of Computer Science, The University of Aizu, Aizuwakamatsu 965-8580, Japan
School of Creative Technologies, University of Bolton, Bolton BL3 5AB, UK
School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu 632014, India
Global Information and Telecommunication Institute, Waseda University, Tokyo 169-8050, Japan

Machine Learning Techniques Driven Medicine Analysis

Abstract submission deadline
closed (10 May 2023)
Manuscript submission deadline
10 August 2023
Viewed by
32284

Topic Information

Dear Colleagues,

With billions of mobile devices in use worldwide, the cost of medical device connectors and sensors has fallen dramatically, and recording and transmitting medical data has never been easier. However, transforming physiological data into clinical information of real value requires artificial intelligence algorithms. Processing the big data implicit in biomedical time series and images, accounting for individual differences, identifying and extracting characteristic patterns of health function, and translating these patterns into guiding clinical information require an adequate knowledge base of physiology, advanced digital signal processing capabilities, and machine learning (e.g., deep learning) skills. Intelligent algorithms combined with new wearable, portable biosensors offer unprecedented opportunities for remote patient monitoring (i.e., in non-traditional clinical settings) and condition management. This Topic will focus on various aspects of information processing, including data pre-processing, visualization, regression, dimensionality reduction, feature selection, classification (e.g., logistic regression, SVMs, neural networks), and their role in healthcare decision support. The focus will be on computer tools and machine learning techniques, including machine learning fundamentals, classifiers, and deep learning, in conjunction with relevant theory, using the processing of medical datasets (e.g., medical time series) as an example and covering modern artificial intelligence and its biomedical applications.

Prof. Dr. Chunhua Su
Dr. Celestine Iwendi
Topic Editors
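As a minimal, self-contained illustration of the classification step mentioned in the call (logistic regression on physiological features), the sketch below trains an LR classifier with plain gradient descent. The feature names and all data are synthetic and purely illustrative:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient-descent logistic regression (no regularization)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Synthetic "patients": [scaled resting heart rate, scaled activity level]; label 1 = at risk.
random.seed(0)
X = [[random.gauss(1.0, 0.2), random.gauss(-1.0, 0.2)] for _ in range(50)] + \
    [[random.gauss(-1.0, 0.2), random.gauss(1.0, 0.2)] for _ in range(50)]
y = [1] * 50 + [0] * 50
w, b = train_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

A real pipeline would add the pre-processing, visualization, and feature-selection stages listed above before the classifier.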

Keywords

  • analysis and prediction for COVID-19 data
  • big data and IoT in medical applications
  • medical image processing
  • deep learning models in healthcare and biomedicine
  • machine learning approaches for medicine
  • IT-enabled healthcare services
  • complex health monitoring systems

Participating Journals

Journal (abbreviation)                 Impact Factor  CiteScore  Launched  First Decision (median)  APC
Applied Sciences (applsci)             2.838          3.7        2011      14.9 days                2300 CHF
Biomedicines (biomedicines)            4.757          3.0        2013      17.4 days                2200 CHF
BioMedInformatics (biomedinformatics)  -              -          2021      10.7 days                1000 CHF
Data (data)                            -              4.8        2016      20.9 days                1600 CHF
Life (life)                            3.253          1.9        2011      13.4 days                1800 CHF

Preprints is a platform dedicated to making early versions of research outputs permanently available and citable. MDPI journals allow posting on preprint servers such as Preprints.org prior to publication. For more details about preprints, please visit https://www.preprints.org.

Published Papers (18 papers)

Article
Intelligent Bi-LSTM with Architecture Optimization for Heart Disease Prediction in WBAN through Optimal Channel Selection and Feature Selection
Biomedicines 2023, 11(4), 1167; https://doi.org/10.3390/biomedicines11041167 - 13 Apr 2023
Viewed by 832
Abstract
Wireless Body Area Network (WBAN) is a trending technology of Wireless Sensor Networks (WSN) used to enhance healthcare systems. Such a system monitors individuals by observing their physical signals and reporting physical activity status as a low-cost wearable system, considered a practical solution for the continuous monitoring of cardiovascular health. Various studies have discussed the use of WBAN in Personal Health Monitoring (PHM) systems based on real-world health monitoring models. The major goal of WBAN is to offer early and fast analysis of individuals, but it cannot attain its full potential using conventional expert systems and data mining. Multiple studies have been performed on WBAN concerning routing, security, energy efficiency, etc. This paper proposes a new heart disease prediction framework under WBAN. Initially, standard patient data regarding heart disease are gathered from benchmark datasets using WBAN. Then, channel selection for data transmission is carried out through the Improved Dingo Optimizer (IDOX) algorithm using a multi-objective function. Through the selected channel, the data are transmitted for deep feature extraction using One-Dimensional Convolutional Neural Networks (1D-CNN) and an autoencoder. Then, optimal feature selection is performed with the IDOX algorithm to obtain the most suitable features. Finally, heart disease prediction is carried out by a Modified Bidirectional Long Short-Term Memory (M-BiLSTM) network, where the hyperparameters of the BiLSTM are tuned using the IDOX algorithm. The empirical outcomes show that the proposed method accurately categorizes a patient's health status based on abnormal vital signs, which is useful for providing proper medical care to patients.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
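The IDOX channel-selection objective is specific to the paper; as a generic, hypothetical illustration of scoring WBAN channels with a weighted multi-objective function (the channel names, metrics, and weights below are all invented):

```python
# Toy multi-objective channel selection: each candidate channel is scored on
# (signal quality, residual energy, interference); weights are illustrative.
channels = {
    "ch1": {"quality": 0.9, "energy": 0.6, "interference": 0.2},
    "ch2": {"quality": 0.7, "energy": 0.9, "interference": 0.1},
    "ch3": {"quality": 0.8, "energy": 0.5, "interference": 0.6},
}
# Interference carries a negative weight so it is penalized.
weights = {"quality": 0.5, "energy": 0.3, "interference": -0.2}

def score(ch):
    """Weighted sum of objectives; a metaheuristic like IDOX would search over such scores."""
    return sum(weights[k] * ch[k] for k in weights)

best = max(channels, key=lambda name: score(channels[name]))
print(best)  # ch2: high energy and low interference outweigh ch1's quality edge
```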

Article
UFODMV: Unsupervised Feature Selection for Online Dynamic Multi-Views
Appl. Sci. 2023, 13(7), 4310; https://doi.org/10.3390/app13074310 - 28 Mar 2023
Viewed by 543
Abstract
In most machine learning (ML) applications, data that arrive from heterogeneous views (i.e., multiple heterogeneous sources of data) are more likely to provide complementary information than a single view; hence, these are known as multi-view data. In real-world applications, such as web clustering, data arrive from diverse groups (i.e., sets of features) and therefore have heterogeneous properties; each feature group is referred to as a particular view. Although multi-view learning provides complementary information for machine learning algorithms, it results in high dimensionality. Feature selection is an efficient way to reduce this dimensionality by selecting only the representative features of the views. In this paper, an unsupervised feature selection method for online dynamic multi-views (UFODMV) is developed, a novel and efficient mechanism for the dynamic selection of features from multi-views in an unsupervised stream. UFODMV consists of a clustering-based feature selection mechanism enabling the dynamic selection of representative features and a merging process whereby both features and views are received incrementally in a streamed fashion over time. The experimental evaluation demonstrates that UFODMV achieves the best classification accuracy, outperforming well-known single-view and multi-view unsupervised feature selection methods, namely OMVFS, USSSF, and SPEC, by margins of 20% and 50%.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
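UFODMV's clustering-based mechanism is more elaborate, but the core idea of unsupervised selection of representative, non-redundant features can be sketched as a greedy variance-plus-correlation filter (the feature values below are invented):

```python
def variance(col):
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def select_features(columns, k, max_corr=0.95):
    """Greedy unsupervised selection: rank by variance, drop near-duplicate features."""
    ranked = sorted(columns, key=lambda name: variance(columns[name]), reverse=True)
    chosen = []
    for name in ranked:
        if all(abs(pearson(columns[name], columns[c])) < max_corr for c in chosen):
            chosen.append(name)
        if len(chosen) == k:
            break
    return chosen

view = {
    "f1": [1, 2, 3, 4, 5],
    "f2": [2, 4, 6, 8, 10],   # perfectly correlated with f1 (redundant)
    "f3": [5, 1, 4, 2, 3],
    "f4": [1, 1, 1, 1, 1],    # constant: zero variance, uninformative
}
selected = select_features(view, k=2)
print(selected)  # f2 wins on variance; f1 is dropped as redundant; f3 follows
```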

Article
Sample Size Analysis for Machine Learning Clinical Validation Studies
Biomedicines 2023, 11(3), 685; https://doi.org/10.3390/biomedicines11030685 - 23 Feb 2023
Cited by 1 | Viewed by 1018
Abstract
Background: Before integrating new machine learning (ML) algorithms into clinical practice, they must undergo validation. Validation studies require sample size estimates. Unlike hypothesis-testing studies seeking a p-value, the goal of validating predictive models is obtaining estimates of model performance. There is no standard tool for determining sample size estimates for clinical validation studies of machine learning models. Methods: Our open-source method, Sample Size Analysis for Machine Learning (SSAML), was described and tested on three previously published models: brain age to predict mortality (Cox proportional hazards), COVID hospitalization risk prediction (ordinal regression), and seizure risk forecasting (deep learning). Results: Minimum sample sizes were obtained in each dataset using standardized criteria. Discussion: SSAML provides a formal expectation of precision and accuracy at a desired confidence level. SSAML is open-source and agnostic to data type and ML model, and it can be used for clinical validation studies of ML models.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
(This article belongs to the Section Biomedical Engineering in Human Health)
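SSAML's exact criteria are defined in the paper; the underlying idea that the precision of a performance estimate improves with sample size can be sketched with a bootstrap confidence-interval half-width (the outcomes below are synthetic, at an assumed 85% accuracy):

```python
import random

def bootstrap_halfwidth(outcomes, n, reps=2000, alpha=0.05, seed=1):
    """Half-width of the bootstrap 95% CI for accuracy at sample size n."""
    rng = random.Random(seed)
    stats = sorted(sum(rng.choice(outcomes) for _ in range(n)) / n
                   for _ in range(reps))
    lo = stats[int(reps * alpha / 2)]
    hi = stats[int(reps * (1 - alpha / 2)) - 1]
    return (hi - lo) / 2

# outcomes: 1 = model correct on a case, 0 = wrong (synthetic, ~85% accuracy)
rng = random.Random(0)
outcomes = [1 if rng.random() < 0.85 else 0 for _ in range(5000)]
for n in (50, 200, 800):
    print(n, round(bootstrap_halfwidth(outcomes, n), 3))
```

The half-width shrinks roughly as 1/sqrt(n); a sample-size procedure inverts this, finding the smallest n whose interval meets the desired precision.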

Article
Enhanced Preprocessing Approach Using Ensemble Machine Learning Algorithms for Detecting Liver Disease
Biomedicines 2023, 11(2), 581; https://doi.org/10.3390/biomedicines11020581 - 16 Feb 2023
Cited by 2 | Viewed by 1073
Abstract
There has been a sharp increase in liver disease globally, and many people are dying without even knowing that they have it. As a result of its limited symptoms, it is extremely difficult to detect liver disease until the very last stage. In the event of early detection, patients can begin treatment earlier, thereby saving their lives. Ensemble learning algorithms have become increasingly popular since they perform better than traditional machine learning algorithms. In this context, this paper proposes a novel architecture based on ensemble learning and enhanced preprocessing to predict liver disease using the Indian Liver Patient Dataset (ILPD). Six ensemble learning algorithms are applied to the ILPD, and their results are compared to those obtained in existing studies. The proposed model uses several data preprocessing methods, such as data balancing, feature scaling, and feature selection, to improve the accuracy with appropriate imputations. Multivariate imputation is applied to fill in missing values. On skewed columns, the log1p transformation is applied, along with standardization, min–max scaling, maximum absolute scaling, and robust scaling techniques. Feature selection is carried out based on several methods, including univariate selection, feature importance, and the correlation matrix. The enhanced preprocessed data are trained on Gradient Boosting, XGBoost, Bagging, Random Forest, Extra Tree, and Stacking ensemble learning algorithms. The results of the six models are compared with each other, as well as with the models used in other research works. The proposed model using the Extra Tree classifier and Random Forest outperformed the other methods, with the highest testing accuracies of 91.82% and 86.06%, respectively, portraying our method as a real-world solution for detecting liver disease.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
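A minimal sketch of three of the preprocessing steps named above (log1p de-skewing, min–max scaling, and robust scaling), applied to made-up skewed lab values:

```python
import math

def log1p_scale(col):
    """log(1 + x) compresses heavy right tails in skewed lab values."""
    return [math.log1p(v) for v in col]

def min_max(col):
    """Rescale to [0, 1]."""
    lo, hi = min(col), max(col)
    return [(v - lo) / (hi - lo) for v in col]

def median(col):
    s = sorted(col)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def robust_scale(col):
    """Center on the median, scale by the IQR (resistant to outliers)."""
    s = sorted(col)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    iqr = (q3 - q1) or 1.0
    m = median(col)
    return [(v - m) / iqr for v in col]

bilirubin = [0.7, 0.9, 1.1, 1.3, 42.0]   # illustrative skewed lab values
scaled = min_max(log1p_scale(bilirubin))
robust = robust_scale(bilirubin)
print([round(v, 3) for v in scaled])
print([round(v, 3) for v in robust])
```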

Article
Comparative Analysis of Deep Learning Models Used in Impact Analysis of Coronavirus Chest X-ray Imaging
Biomedicines 2022, 10(11), 2791; https://doi.org/10.3390/biomedicines10112791 - 02 Nov 2022
Cited by 1 | Viewed by 1154
Abstract
The impact analysis of deep learning models for COVID-19-infected X-ray images is an extremely challenging task. Every model has unique capabilities that can provide suitable solutions for a given problem. This work analyzes various deep learning models used for classifying chest X-ray images. Their performance-defining factors, such as accuracy, F1-score, and training and validation loss, are tested with the support of the training dataset. These deep learning models are multi-layered architectures, and these parameters fluctuate based on the behavior of the layers, the learning rate, training efficiency, and over-fitting, which may in turn introduce sudden changes in training accuracy, testing accuracy, loss, validation loss, F1-score, etc. Some models, such as Xception, produce linear responses with respect to the training and testing data, but most models show variation in either the accuracy or the loss functions. This work performs a detailed experimental analysis of deep learning image classification models, compares them on the parameters above, and examines their responses with regard to accuracy and loss functions. It also assesses the suitability of these models for various applications based on these parameters, lists the challenges encountered in implementing and experimenting with the models, and provides solutions for enhancing their performance. The models used are ResNet, VGG16, ResNet with VGG, Inception V3, Xception with transfer learning, and a CNN. Each model is trained with more than 1500 chest X-ray images and tested with around 132 samples of the X-ray image dataset. The work analyzes the accuracy, F1-score, recall, and precision of these models, together with training accuracy, testing accuracy, loss, and validation loss, recording every epoch of every model to measure changes in these parameters during the experimental analysis. Finally, it provides insight for future research through the challenges identified, the research findings, and future directions.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
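The evaluation metrics compared above all follow from the confusion matrix; a minimal sketch with invented counts for a COVID-positive class on a 132-image test set:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts only (not from the paper): 40 true positives, 4 false
# positives, 6 false negatives, 82 true negatives.
p, r, f1, acc = metrics(tp=40, fp=4, fn=6, tn=82)
print(round(p, 3), round(r, 3), round(f1, 3), round(acc, 3))
```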

Article
Security of Blockchain and AI-Empowered Smart Healthcare: Application-Based Analysis
Appl. Sci. 2022, 12(21), 11039; https://doi.org/10.3390/app122111039 - 31 Oct 2022
Cited by 3 | Viewed by 2326
Abstract
A smart device carries a great amount of sensitive patient data as it offers innovative and enhanced functionalities in the smart healthcare system. Moreover, the components of healthcare systems are interconnected via the Internet, bringing significant changes to the delivery of healthcare services to individuals. However, easy access to healthcare services and applications has given rise to severe risks and vulnerabilities that hamper the performance of a smart healthcare system. Moreover, a large number of heterogeneous devices accumulate data that vary in terms of size and format, making it challenging to manage the data in the healthcare repository and secure it from attackers who seek to profit from the data. Thus, smart healthcare systems are susceptible to numerous security threats and risks, such as hardware- and software-based attacks, system-level attacks, and network attacks, that have the potential to place patients' lives at risk. An analysis of the literature revealed a research gap: most security surveys on the healthcare ecosystem examined only the security challenges and did not explore the possibility of integrating modern technologies to alleviate security issues in the smart healthcare system. Therefore, in this article, we conduct a comprehensive review of the most recent security challenges and their countermeasures in the smart healthcare environment. In addition, an artificial intelligence (AI) and blockchain-based secure architecture is proposed as a case study to analyse malware and network attacks on wearable devices. The proposed architecture is evaluated using various performance metrics, such as blockchain scalability, accuracy, and dynamic malware analysis. Lastly, we highlight different open issues and research challenges facing smart healthcare systems.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
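The proposed architecture itself is not reproduced here; as a generic sketch of the tamper-evidence property that makes hash-chained blocks attractive for healthcare records (the device names and readings are invented):

```python
import hashlib, json

def make_block(record, prev_hash):
    """Minimal hash-chained block; real blockchains add consensus, signatures, etc."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

genesis = make_block({"device": "wearable-01", "hr": 72}, prev_hash="0" * 64)
block2 = make_block({"device": "wearable-01", "hr": 75}, prev_hash=genesis["hash"])

# Tampering with the earlier record changes its hash, breaking the link stored
# in the next block, so the modification is detectable.
tampered_payload = json.dumps(
    {"record": {"device": "wearable-01", "hr": 60}, "prev": genesis["prev"]},
    sort_keys=True)
tamper_detected = hashlib.sha256(tampered_payload.encode()).hexdigest() != block2["prev"]
print(tamper_detected)  # True
```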

Article
A Modified LBP Operator-Based Optimized Fuzzy Art Map Medical Image Retrieval System for Disease Diagnosis and Prediction
Biomedicines 2022, 10(10), 2438; https://doi.org/10.3390/biomedicines10102438 - 29 Sep 2022
Cited by 2 | Viewed by 1146
Abstract
Medical records generated in hospitals are treasures for academic research and future reference. Medical Image Retrieval (MIR) systems contribute significantly to locating the relevant records required for a particular diagnosis, analysis, and treatment. An efficient classifier and an effective indexing technique are required for the storage and retrieval of medical images. In this paper, a retrieval framework is formulated by adopting a modified Local Binary Pattern feature (AvN-LBP) for indexing and an optimized Fuzzy Art Map (FAM) for classifying and searching medical images. The proposed indexing method extracts the LBP considering information from neighborhood pixels and is robust to background noise. The FAM network is optimized using the Differential Evolution (DE) algorithm (DEFAMNet) with a modified mutation operation to minimize the size of the network without compromising the classification accuracy. The performance of the proposed DEFAMNet is compared with that of other classifiers and descriptors; the classification accuracy of the proposed AvN-LBP operator with DEFAMNet is higher. The experimental results on three benchmark medical image datasets provide evidence that the proposed framework classifies medical images faster and more efficiently with lower computational cost.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
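AvN-LBP is the paper's modified operator; the classic 8-neighbour LBP it builds on can be sketched as follows (the pixel values are invented):

```python
def lbp_code(patch):
    """Classic 8-neighbour LBP for a 3x3 patch: threshold neighbours at the center."""
    center = patch[1][1]
    # Neighbours taken clockwise from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    # Pack the 8 threshold bits into a single 0-255 texture code.
    return sum(b << i for i, b in enumerate(bits))

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

The modified AvN-LBP replaces this raw thresholding with neighbourhood averaging to gain noise robustness; this sketch shows only the baseline encoding.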

Article
Quantitative Measurement of Spinal Cerebrospinal Fluid by Cascade Artificial Intelligence Models in Patients with Spontaneous Intracranial Hypotension
Biomedicines 2022, 10(8), 2049; https://doi.org/10.3390/biomedicines10082049 - 22 Aug 2022
Cited by 1 | Viewed by 1005
Abstract
Cerebrospinal fluid (CSF) hypovolemia is the core of spontaneous intracranial hypotension (SIH). More than 1000 magnetic resonance myelography (MRM) images are required to evaluate each subject, so an effective spinal CSF quantification method is needed. In this study, we proposed a cascade artificial intelligence (AI) model to automatically segment spinal CSF. From January 2014 to December 2019, patients with SIH and 12 healthy volunteers (HVs) were recruited. We evaluated the performance of AI models that combined object detection (YOLO v3) and semantic segmentation (U-net or U-net++). Network performance was evaluated using intersection over union (IoU). The best AI model was then used to quantify spinal CSF in patients. We obtained 25,603 slices of MRM images from 13 patients and 12 HVs and divided the images into training, validation, and test datasets with a ratio of 4:1:5. The IoU of cascade YOLO v3 plus U-net++ (0.9374) was the highest. Applying YOLO v3 plus U-net++ to another 13 SIH patients showed a significantly lower measured spinal CSF volume at disease onset (59.32 ± 10.94 mL) than during the recovery stage (70.61 ± 15.31 mL). The cascade AI model provided satisfactory performance with regard to the fully automatic segmentation of spinal CSF from MRM images, and the spinal CSF volume obtained through its measurements could reflect a patient's clinical status.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
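The IoU metric used to compare the segmentation models can be computed directly from binary masks; a toy sketch:

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks (flattened lists of 0/1)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    union = sum(a | b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 1.0

# Toy 1x6 "masks": 1 = pixel labelled as CSF.
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0]
print(round(iou(pred, truth), 3))  # 2 shared pixels / 4 in the union = 0.5
```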

Article
Unique Deep Radiomic Signature Shows NMN Treatment Reverses Morphology of Oocytes from Aged Mice
Biomedicines 2022, 10(7), 1544; https://doi.org/10.3390/biomedicines10071544 - 29 Jun 2022
Cited by 1 | Viewed by 1857
Abstract
The purpose of this study is to develop a deep radiomic signature based on an artificial intelligence (AI) model. This radiomic signature identifies oocyte morphological changes corresponding to reproductive aging in bright-field images captured by optical light microscopy. Oocytes were collected from three mouse groups: young (4- to 5-week-old) C57BL/6J female mice, aged (12-month-old) mice, and aged mice treated with the NAD+ precursor nicotinamide mononucleotide (NMN), a treatment recently shown to rejuvenate aspects of fertility in aged mice. We applied deep learning, swarm intelligence, and discriminative analysis to images of mouse oocytes taken by bright-field microscopy to identify a highly informative deep radiomic signature (DRS) of oocyte morphology. Predictive DRS accuracy was determined by evaluating sensitivity, specificity, and cross-validation, and was visualized using scatter plots of the data associated with the three groups: Young, Old, and Old + NMN. The DRS could successfully distinguish morphological changes in oocytes associated with maternal age with 92% accuracy (AUC ~1), reflecting the decline in oocyte quality. We then employed the DRS to evaluate the impact of treating reproductively aged mice with NMN. The DRS classified 60% of oocytes from NMN-treated aged mice as having a 'young' morphology. In conclusion, the DRS developed in this study successfully detected aging-related oocyte morphological changes. The significance of our approach is that the DRS applied to bright-field oocyte images will allow us to distinguish and select oocytes originally affected by reproductive aging and whose quality has been successfully restored by NMN therapy.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
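The AUC reported above is the probability that a randomly chosen positive case outscores a randomly chosen negative one; a minimal empirical-AUC sketch with invented classifier scores:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for "young-like" vs "old-like" oocytes.
young = [0.9, 0.8, 0.85, 0.6]
old   = [0.2, 0.4, 0.35, 0.7]
print(auc(young, old))  # 15 of 16 pairs ranked correctly -> 0.9375
```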

Article
A Novel CovidDetNet Deep Learning Model for Effective COVID-19 Infection Detection Using Chest Radiograph Images
Appl. Sci. 2022, 12(12), 6269; https://doi.org/10.3390/app12126269 - 20 Jun 2022
Cited by 13 | Viewed by 1803
Abstract
Suspected cases of COVID-19 must be detected quickly and accurately to avoid the transmission of COVID-19 on a large scale. Existing COVID-19 diagnostic tests are slow and take several hours to generate the required results. On the other hand, most chest radiographs take less than 15 min to complete. Therefore, chest radiographs can be utilized to create a solution for early and accurate COVID-19 detection and diagnosis, reducing COVID-19 patient treatment problems and saving time. For this purpose, CovidDetNet is proposed, which comprises ten learnable layers: nine convolutional layers and one fully connected layer. The architecture uses two activation functions, ReLU and Leaky ReLU, and two normalization operations, batch normalization and cross-channel normalization, making it a novel COVID-19 detection model. It is a novel deep learning-based approach that automatically and reliably detects COVID-19 using chest radiograph images. To this end, a fine-grained COVID-19 classification experiment is conducted to identify and classify chest radiograph images as normal, COVID-19 positive, or pneumonia. In addition, the performance of the proposed novel CovidDetNet deep learning model is evaluated on a standard COVID-19 Radiography Database. Moreover, we compared the performance of our approach with hybrid approaches in which deep learning models are used as feature extractors and support vector machines (SVM) as the classifier. Experimental results on the dataset showed the superiority of the proposed CovidDetNet model over the existing methods, outperforming the baseline hybrid deep learning-based models with a high accuracy of 98.40%.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
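The two activation functions named above differ only in how they treat negative inputs; a minimal sketch:

```python
def relu(x):
    """ReLU: zero for negative inputs, identity for positive ones."""
    return x if x > 0 else 0.0

def leaky_relu(x, slope=0.01):
    """Leaky ReLU keeps a small slope for negative inputs, so gradients never die."""
    return x if x > 0 else slope * x

print(relu(-3.0), leaky_relu(-3.0))   # negative input: clipped vs leaked
print(relu(2.0), leaky_relu(2.0))     # positive input: both are the identity
```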

Article
Human Angiotensin I-Converting Enzyme Produced by Different Cells: Classification of the SERS Spectra with Linear Discriminant Analysis
Biomedicines 2022, 10(6), 1389; https://doi.org/10.3390/biomedicines10061389 - 12 Jun 2022
Cited by 2 | Viewed by 1483
Abstract
Angiotensin I-converting enzyme (ACE) is a peptidase widely present in human tissues and biological fluids. ACE is a glycoprotein containing 17 potential N-glycosylation sites, which can be glycosylated in different ways due to post-translational modification of the protein in different cells. For the first time, surface-enhanced Raman scattering (SERS) spectra of human ACE from lungs, mainly produced by endothelial cells, ACE from the heart, produced by heart endothelial cells and myofibroblasts, and ACE from seminal fluid, produced by epithelial cells, have been compared with full assignment. The ability to separate the ACEs' SERS spectra was demonstrated using the linear discriminant analysis (LDA) method with high accuracy. The intervals in the spectra with the maximum contributions of the spectral features were determined, and their contribution to the spectrum of each separate ACE was evaluated. Around 25 spectral features forming three intervals were enough for successful separation of the spectra of the different ACEs; however, more spectral information could be obtained from analysis of 50 spectral features. Band assignment showed that several features did not correlate with band assignments to amino acids or peptides, which indicates a carbohydrate contribution to the final spectra. Analysis of SERS spectra could be beneficial for the detection of tissue-specific ACEs.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
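A full LDA needs the pooled within-class covariance; assuming an identity covariance, it reduces to the nearest-class-centroid rule sketched below. The three-band "spectra" and class labels are entirely invented:

```python
def centroid(spectra):
    """Per-band mean of a list of equal-length spectra."""
    return [sum(col) / len(spectra) for col in zip(*spectra)]

def classify(spectrum, c1, c2):
    """Nearest-centroid rule: LDA specialized to an identity pooled covariance."""
    d1 = sum((s - m) ** 2 for s, m in zip(spectrum, c1))
    d2 = sum((s - m) ** 2 for s, m in zip(spectrum, c2))
    return "lung" if d1 < d2 else "heart"

lung_train  = [[1.0, 0.2, 0.7], [0.9, 0.3, 0.8]]   # toy 3-band "spectra"
heart_train = [[0.2, 0.9, 0.4], [0.3, 1.0, 0.5]]
c_lung, c_heart = centroid(lung_train), centroid(heart_train)
print(classify([0.95, 0.25, 0.75], c_lung, c_heart))
```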

Article
Biomedical Signals for Healthcare Using Hadoop Infrastructure with Artificial Intelligence and Fuzzy Logic Interpretation
Appl. Sci. 2022, 12(10), 5097; https://doi.org/10.3390/app12105097 - 18 May 2022
Cited by 11 | Viewed by 1280
Abstract
In all developing countries, the application of biomedical signals has been growing, and there is potential interest in applying it to healthcare management systems. However, with the existing infrastructure, systems do not provide high-end support for the transfer of signals over a communication medium, as biomedical signals need to be classified at appropriate stages. Therefore, this article addresses the issues of physical infrastructure using Hadoop-based systems, where a four-layer model is created. The four-layer model is integrated with the Fuzzy Interface System Algorithm (FISA) with low robustness, and data transfers in these layers are carried out with reference health data collected at various treatment centers. This new flanged system model aims to minimize the loss functionalities present in biomedical signals, and an activation function is introduced at the middle stages. The effectiveness of the proposed model is simulated in MATLAB using a biomedical signal processing toolbox, where FISA proves to perform better in terms of signal strength, distance, and cost. As a comparative outcome, the proposed method outperforms conventional methods by an average of 78% under real-time conditions.
(This article belongs to the Topic Machine Learning Techniques Driven Medicine Analysis)
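Fuzzy systems like the one integrated above start from membership functions; a minimal sketch of a triangular membership function, with invented heart-rate fuzzy sets:

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, peak (1.0) at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative heart-rate fuzzy sets (bpm); the ranges are assumptions.
hr = 72
print(tri_membership(hr, 60, 75, 90))    # degree of membership in "normal"
print(tri_membership(hr, 80, 100, 120))  # degree of membership in "elevated"
```

A fuzzy inference system combines such degrees with rules (e.g., IF hr is elevated AND activity is low THEN alert) before defuzzifying to a crisp output.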

Article
Seizure Prediction Based on Transformer Using Scalp Electroencephalogram
Appl. Sci. 2022, 12(9), 4158; https://doi.org/10.3390/app12094158 - 20 Apr 2022
Cited by 10 | Viewed by 2551
Abstract
Epilepsy is a chronic and recurrent brain dysfunction disease. An acute epileptic attack will interfere with a patient's normal behavior and consciousness, having a great impact on their life. The purpose of this study was to design a seizure prediction model to improve the quality of patients' lives and assist doctors in making diagnostic decisions. This paper presents a transformer-based seizure prediction model. Firstly, the time-frequency characteristics of electroencephalogram (EEG) signals were extracted by short-time Fourier transform (STFT). Secondly, a three-tower transformer model was used to fuse and classify the features of the EEG signals. Finally, combined with the attention mechanism of transformer networks, the EEG signal was processed as a whole, which solves the problem of length limitations in deep learning models. Experiments were conducted on the Children's Hospital Boston and Massachusetts Institute of Technology (CHB-MIT) database to evaluate the performance of the model. The experimental results show that, compared with previous EEG classification models, our model makes better use of time, frequency, and channel information from EEG signals to improve the accuracy of seizure prediction.

Article
Deep Learning Architecture Optimization with Metaheuristic Algorithms for Predicting BRCA1/BRCA2 Pathogenicity NGS Analysis
BioMedInformatics 2022, 2(2), 244-267; https://doi.org/10.3390/biomedinformatics2020016 - 18 Apr 2022
Cited by 1 | Viewed by 1837
Abstract
Motivation: BRCA1 and BRCA2 are genes with tumor suppressor activity that are involved in a considerable number of biological processes. To help biologists with tumor classification, we developed a deep learning algorithm. A key question when constructing a neural network is how many hidden layers and neurons to use: while the number of inputs and outputs is defined by the problem, the number of hidden layers and the neurons making up each layer are difficult to choose, yet they strongly influence the performance of the system's predictions, and there are different methods for finding the optimal architecture. In this paper, we present two packages that we developed, based on the genetic algorithm (GA) and particle swarm optimization (PSO), to optimize the parameters of a neural network for predicting BRCA1 and BRCA2 pathogenicity. Results: We compare the results obtained by the two algorithms. We trained the deep learning models on datasets collected from our NGS analysis of the BRCA1 and BRCA2 genes, comprising 11,875 BRCA1 and BRCA2 variants. Our preliminary results show that PSO found a better architecture (number of hidden layers and neurons per layer) than grid search and the GA. Conclusions: The optimal architecture found by the PSO algorithm is composed of 6 hidden layers with 275 hidden nodes, achieving an accuracy of 0.98, a precision of 0.99, a recall of 0.98, and a specificity of 0.99.
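As a sketch of how PSO can search over (number of hidden layers, neurons per layer), the toy below optimizes a hypothetical stand-in `objective` whose minimum is placed at the paper's reported optimum (6 layers, 275 nodes); in a real run, each evaluation would train and validate a candidate network instead:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(layers, neurons):
    """Hypothetical stand-in for the validation error of a network with
    `layers` hidden layers of `neurons` nodes each (illustration only;
    the real objective trains and evaluates the model)."""
    return (layers - 6) ** 2 / 36 + 50 * (neurons - 275) ** 2 / 275 ** 2

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm over a 2-D box of architecture parameters."""
    lo, hi = np.array([1.0, 8.0]), np.array([12.0, 512.0])   # search bounds
    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(*p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(*p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return np.rint(gbest).astype(int)                         # integer architecture

layers, neurons = pso()
print(layers, neurons)
```

The expense of architecture search comes entirely from the objective: each swarm evaluation is a full train/validate cycle, which is why metaheuristics are compared against grid search in the paper.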

Article
Prediction of Emergency Cesarean Section Using Machine Learning Methods: Development and External Validation of a Nationwide Multicenter Dataset in Republic of Korea
Life 2022, 12(4), 604; https://doi.org/10.3390/life12040604 - 18 Apr 2022
Cited by 1 | Viewed by 2003
Abstract
This study was a multicenter retrospective cohort study of term nulliparous women who underwent labor, conducted to develop an automated machine learning model for predicting emergent cesarean section (CS) before the onset of labor. Nine machine learning methods were applied and compared for predicting emergent CS during active labor: logistic regression, random forest, Support Vector Machine (SVM), gradient boosting, extreme gradient boosting (XGBoost), light gradient boosting machine (LGBM), k-nearest neighbors (KNN), Voting, and Stacking. External validation was performed using a nationwide multicenter dataset for Korean fetal growth. A total of 6549 term nulliparous women were included in the analysis, and the emergent CS rate was 16.1%. The C-statistic values for KNN, Voting, XGBoost, Stacking, gradient boosting, random forest, LGBM, logistic regression, and SVM were 0.6, 0.69, 0.64, 0.59, 0.66, 0.68, 0.68, 0.7, and 0.69, respectively. The logistic regression model showed the best predictive performance, with an accuracy of 0.78. The machine learning model identified nine significant variables, including maternal age, height, pre-pregnancy weight, pregnancy-associated hypertension, gestational age, and fetal sonographic findings. The C-statistic value for the logistic regression machine learning model in the external validation set (1391 term nulliparous women) was 0.69, with an overall accuracy of 0.68, a specificity of 0.83, and a sensitivity of 0.41. Machine learning algorithms using clinical and sonographic parameters at near term could be useful tools for predicting the individual risk of emergent CS during active labor in nulliparous women.
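The C-statistic used to rank the nine models is the ROC AUC, i.e., the probability that a randomly chosen positive case is scored above a randomly chosen negative one; a minimal sketch with made-up labels and predicted probabilities:

```python
def c_statistic(y_true, y_score):
    """C-statistic (ROC AUC): fraction of positive/negative pairs in which
    the positive case receives the higher score (ties count as half)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels (1 = emergent CS) and predicted probabilities
labels = [0, 0, 1, 1, 0, 1]
probs = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]
print(round(c_statistic(labels, probs), 3))  # → 0.889
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which is why the reported 0.6 to 0.7 range indicates only modest discriminative ability.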

Article
Attention V-Net: A Modified V-Net Architecture for Left Atrial Segmentation
Appl. Sci. 2022, 12(8), 3764; https://doi.org/10.3390/app12083764 - 08 Apr 2022
Cited by 1 | Viewed by 1092
Abstract
We propose a fully convolutional neural network based on the attention mechanism for 3D medical image segmentation tasks. It can adaptively learn to highlight the salient features of images that are useful for segmentation. Some prior methods enhance accuracy using multi-scale feature fusion or dilated convolution, which is largely hand-crafted and lacks flexibility within the model itself. Other works have therefore proposed a 2D attention gate module, but they process 2D medical slice images and ignore the correlation between consecutive slices in a 3D sequence. In contrast, a 3D attention gate can make comprehensive use of all three dimensions of a medical image. In this paper, we propose the Attention V-Net architecture, which uses the 3D attention gate module, and apply it to a left atrium segmentation framework based on semi-supervised learning. The proposed method is evaluated on the dataset of the 2018 Left Atrial Challenge. The experimental results show that Attention V-Net obtains improved performance under evaluation indicators such as Dice, Jaccard, ASD (average surface distance), and 95HD (95% Hausdorff distance). The results indicate that our model can effectively improve the accuracy of left atrial segmentation, laying the foundation for subsequent work such as atrial reconstruction. Our model is also of great significance for assisting doctors in treating cardiovascular diseases.
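The Dice and Jaccard indicators mentioned above are simple overlap ratios between predicted and ground-truth masks; a minimal sketch on toy 3D volumes (the cube masks below are illustrative only):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index (IoU): |A∩B| / |A∪B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()

# Toy 3D volumes: two partially overlapping 8x8x8 cubes
pred = np.zeros((16, 16, 16)); pred[2:10, 2:10, 2:10] = 1
gt = np.zeros((16, 16, 16));   gt[4:12, 4:12, 4:12] = 1
print(round(dice(pred, gt), 3), round(jaccard(pred, gt), 3))  # → 0.422 0.267
```

The two metrics are monotonically related (J = D / (2 − D)), while ASD and 95HD complement them by measuring boundary distance rather than volume overlap.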

Article
Predicting Childhood Obesity Using Machine Learning: Practical Considerations
BioMedInformatics 2022, 2(1), 184-203; https://doi.org/10.3390/biomedinformatics2010012 - 08 Mar 2022
Cited by 2 | Viewed by 3451
Abstract
Previous studies demonstrate the feasibility of predicting obesity using various machine learning techniques; however, these studies do not address the limitations of these methods in real-life settings, where the data available for a given child may vary. We investigated the medical history required for machine learning models to accurately predict body mass index (BMI) during early childhood. Within a longitudinal dataset of children aged 0–4 years, we developed predictive models based on long short-term memory (LSTM), a recurrent neural network architecture, using historical EHR data from 2 to 8 clinical encounters to estimate child BMI. We developed separate, sex-stratified models using 80% of the data for training and 20% for external validation, and evaluated model performance using K-fold cross-validation, the mean absolute error (MAE), and Pearson’s correlation coefficient (R2). Two history encounters and a 4-month prediction horizon yielded a high prediction error and a low correlation between predicted and actual BMI (MAE of 1.60 for girls and 1.49 for boys). Model performance improved with additional history encounters; the improvement was not significant beyond five. The combined model outperformed the sex-stratified models, with an MAE of 0.98 (SD 0.03) and R2 = 0.72. Our models show that five history encounters are sufficient to predict BMI prior to age 4 for both boys and girls. Moreover, starting from an initial dataset with more than 269 exposure variables, we identified a limited set of 24 variables that can facilitate BMI prediction in early childhood: nine of these variables are collected once, and the remaining 15 need to be updated at each visit.
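The MAE and R2 metrics used to evaluate the BMI models are straightforward to compute; a minimal sketch (the BMI values below are made up for illustration):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error of predicted vs. observed BMI."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def pearson_r2(y_true, y_pred):
    """Squared Pearson correlation between predictions and observations."""
    return np.corrcoef(y_true, y_pred)[0, 1] ** 2

# Made-up BMI values for five children (illustration only)
bmi_true = np.array([16.2, 15.8, 17.1, 16.5, 15.2])
bmi_pred = np.array([16.0, 15.5, 17.4, 16.9, 15.0])
print(round(mae(bmi_true, bmi_pred), 2), round(pearson_r2(bmi_true, bmi_pred), 2))
```

MAE is in BMI units (so the reported 0.98 is an average error of about one BMI point), while R2 is unitless and measures how much of the variance in observed BMI the predictions track.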

Article
An Artificial Intelligence-Enabled ECG Algorithm for the Prediction and Localization of Angiography-Proven Coronary Artery Disease
Biomedicines 2022, 10(2), 394; https://doi.org/10.3390/biomedicines10020394 - 07 Feb 2022
Cited by 1 | Viewed by 2136
Abstract
(1) Background: The value of using artificial intelligence (AI) with electrocardiograms (ECGs) for diagnosing significant coronary artery disease (CAD) is unknown. We tested the hypothesis that using AI to read ECGs could identify significant CAD and determine which vessel was obstructed. (2) Methods: We collected ECG data from a multi-center retrospective cohort in Taiwan, comprising patients with significant CAD documented by invasive coronary angiography and control patients, from 1 January 2018 to 31 December 2020. (3) Results: We trained convolutional neural network (CNN) models to identify patients with significant CAD (>70% stenosis), using 12,954 ECGs from 2303 patients with CAD and 2090 ECGs from 1053 patients without CAD. The macro-average area under the ROC curve (AUC) for detecting CAD was 0.869 for the image-input CNN model. For detecting individual coronary artery obstructions, the AUC was 0.885 for the left anterior descending artery, 0.776 for the right coronary artery, 0.816 for the left circumflex artery, and 1.0 for no coronary artery obstruction. The macro-average AUC increased to 0.973 when the ECG had features of myocardial ischemia. (4) Conclusions: We show for the first time that an AI-enhanced CNN model reading the standard 12-lead ECG allows the ECG to serve as a powerful screening tool for identifying significant CAD and localizing the coronary obstruction. It could easily be implemented in health check-ups for asymptomatic patients and in identifying patients at high risk of future coronary events.
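Macro-averaging the AUC, as reported above, means computing a one-vs-rest AUC per class (here, per obstructed vessel) and averaging them with equal class weight; a sketch with hypothetical labels and per-class scores:

```python
import numpy as np

def auc(y_true, y_score):
    """Binary ROC AUC via its rank interpretation (ties count as half)."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def macro_auc(labels, scores):
    """Macro-average AUC: one one-vs-rest AUC per class, averaged with equal
    class weight (unlike micro-averaging, which pools all decisions)."""
    classes = np.unique(labels)
    return float(np.mean([auc((labels == c).astype(int), scores[:, i])
                          for i, c in enumerate(classes)]))

# Hypothetical 3-class example: per-ECG scores for three obstruction classes
labels = np.array([0, 0, 1, 1, 2, 2])
scores = np.array([[0.8, 0.1, 0.1],
                   [0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.6, 0.25, 0.15],
                   [0.1, 0.2, 0.7],
                   [0.2, 0.2, 0.6]])
print(macro_auc(labels, scores))  # → 0.9375
```

Equal class weighting means a rare class (such as an uncommonly obstructed vessel) counts as much as a frequent one, which suits per-vessel localization reporting.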
