Mathematical Modeling Using Deep Learning with Applications in Biology and Medicine

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematical Biology".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 10122

Special Issue Editors


Guest Editor
School of Public Health and Preventive Medicine, Monash University, Melbourne 3004, Australia
Interests: data mining; machine learning; deep learning; natural language processing; medical data; biostatistics; public health; healthcare

Special Issue Information

Dear Colleagues,

This Special Issue aims to publish original research articles covering advances in mathematical modeling using deep learning (DL) with applications in biology and medicine. Recent years have seen rapid advances in the application of machine learning, and DL in particular, to problems in biology and medicine. The increasing collection and integration of large biological and patient data sets over the last decade has driven a rise in the use of DL techniques in these fields. For example, such techniques have been applied during the recent COVID-19 pandemic to help fight the virus, and to omics data to address the problems posed by the complex organization of biological processes in relation to cardiovascular disease. The field of deep learning continues to grow around models such as pre-trained language models (PLMs), transformer-based architectures such as the vision transformer (ViT), generative adversarial networks (GANs), and explainable artificial intelligence (XAI). Topical areas for DL research include causality, as well as explainability and interpretability that are relevant and insightful for a broader research readership and for clinicians. This Special Issue focuses on the application of DL in biology and medicine. Potential topics include but are not limited to:

  • Natural language processing (NLP);
  • Deep neural networks, e.g., CNNs, RNNs and their variants;
  • Generative adversarial networks (GANs);
  • Deep reinforcement learning algorithms;
  • Few-shot learning (FSL);
  • GAN-based data augmentation;
  • Causality in machine learning;
  • Explainable artificial intelligence (XAI);
  • Explainable deep generative models;
  • Self- and semi-supervised learning;
  • Transfer learning;
  • Pretrained language models (PLMs) (e.g., transformer and its descendants) and their application in vision.

Dr. Joanna Dipnall
Dr. Lan Du
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mathematics
  • machine learning
  • deep learning
  • biology
  • medicine
  • neural networks
  • transformer
  • reinforcement learning
  • causality

Published Papers (5 papers)

Research

25 pages, 3651 KiB  
Article
Classification of Epileptic Seizure Types Using Multiscale Convolutional Neural Network and Long Short-Term Memory
by Hend Alshaya and Muhammad Hussain
Mathematics 2023, 11(17), 3656; https://doi.org/10.3390/math11173656 - 24 Aug 2023
Cited by 2 | Viewed by 871
Abstract
The accurate classification of seizure types using electroencephalography (EEG) signals plays a vital role in determining a precise treatment plan and therapy for epilepsy patients. Among the available deep network models, Convolutional Neural Networks (CNNs) are the most widely adopted models for learning and representing EEG signals. However, typical CNNs have high computational complexity, leading to overfitting problems. This paper proposes the design of two effective, lightweight deep network models: the 1D multiscale neural network (1D-MSCNet) model and the Long Short-Term Memory (LSTM)-based compact CNN (EEG-LSTMNet) model. The 1D-MSCNet model comprises three modules: a spectral–temporal convolution module, a spatial convolution module, and a classification module. It extracts features from input EEG trials at multiple frequency/time ranges and identifies relationships between the spatial distributions of their channels. The EEG-LSTMNet model includes three convolutional layers, namely temporal, depthwise, and separable layers, a single LSTM layer, and two fully connected classification layers to extract discriminative EEG feature representations. Both models were applied to the same EEG trials collected from the Temple University Hospital (TUH) database. Results revealed F1-score values of 96.9% and 98.4% for the 1D-MSCNet and EEG-LSTMNet, respectively. Based on the demonstrated outcomes, both models outperform related state-of-the-art methods because their architectures adopt 1D modules and layers that reduce the computational effort needed, solve the overfitting problem, and enhance classification efficiency. Hence, both models could be valuable additions for neurologists to help them decide upon precise treatments and drugs for patients depending on their type of seizure.
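
To make the architecture description above concrete, the following is a minimal PyTorch sketch of a compact CNN-LSTM classifier in the spirit of EEG-LSTMNet (temporal, depthwise, and separable convolutions followed by an LSTM and fully connected layers). It is not the authors' implementation; the filter counts, kernel sizes, 20-channel input, and 7-class output are illustrative assumptions.

```python
# A minimal sketch, not the published EEG-LSTMNet; layer widths, kernel sizes,
# the 20-channel input, and the 7-class output are illustrative assumptions.
import torch
import torch.nn as nn

class CompactEEGConvLSTM(nn.Module):
    def __init__(self, n_channels=20, n_classes=7, n_filters=16, lstm_hidden=32):
        super().__init__()
        # Temporal convolution along the time axis of each EEG channel
        self.temporal = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(n_filters),
        )
        # Depthwise convolution mixes information across the EEG channels
        self.depthwise = nn.Sequential(
            nn.Conv2d(n_filters, n_filters * 2, kernel_size=(n_channels, 1),
                      groups=n_filters, bias=False),
            nn.BatchNorm2d(n_filters * 2), nn.ELU(), nn.AvgPool2d((1, 4)),
        )
        # Separable convolution = depthwise convolution + pointwise (1x1) convolution
        self.separable = nn.Sequential(
            nn.Conv2d(n_filters * 2, n_filters * 2, kernel_size=(1, 16),
                      padding=(0, 8), groups=n_filters * 2, bias=False),
            nn.Conv2d(n_filters * 2, n_filters * 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(n_filters * 2), nn.ELU(), nn.AvgPool2d((1, 8)),
        )
        self.lstm = nn.LSTM(n_filters * 2, lstm_hidden, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(lstm_hidden, lstm_hidden), nn.ELU(),
            nn.Linear(lstm_hidden, n_classes),
        )

    def forward(self, x):                    # x: (batch, 1, channels, time)
        x = self.separable(self.depthwise(self.temporal(x)))
        x = x.squeeze(2).permute(0, 2, 1)    # -> (batch, time_steps, features)
        _, (h_n, _) = self.lstm(x)
        return self.classifier(h_n[-1])      # class logits per trial

logits = CompactEEGConvLSTM()(torch.randn(4, 1, 20, 1024))  # 4 trials, 20 channels
```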

14 pages, 1182 KiB  
Article
PreRadE: Pretraining Tasks on Radiology Images and Reports Evaluation Framework
by Matthew Coleman, Joanna F. Dipnall, Myong Chol Jung and Lan Du
Mathematics 2022, 10(24), 4661; https://doi.org/10.3390/math10244661 - 08 Dec 2022
Viewed by 1201
Abstract
Recently, self-supervised pretraining of transformers has gained considerable attention in analyzing electronic medical records. However, a systematic evaluation of different pretraining tasks in radiology applications using both images and radiology reports is still lacking. We propose PreRadE, a simple proof-of-concept framework that enables novel evaluation of pretraining tasks in a controlled environment. We investigated the three most commonly used pretraining tasks (MLM, Masked Language Modelling; MFR, Masked Feature Regression; and ITM, Image to Text Matching) and their combinations against downstream radiology classification on MIMIC-CXR, a medical chest X-ray imaging and radiology text report dataset. Our experiments in the multimodal setting show that (1) pretraining with MLM yields the greatest benefit to classification performance, largely due to the task-relevant information learned from the radiology reports, and (2) pretraining with only a single task can introduce variation in classification performance across different fine-tuning episodes, suggesting that composite task objectives incorporating both image and text modalities are better suited to generating reliably performant models.
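
For readers unfamiliar with how such pretraining objectives are combined, here is a minimal sketch of a composite loss over MLM, MFR, and ITM terms. It is not the PreRadE code; the encoder outputs, head dimensions, vocabulary size, and equal loss weights are illustrative assumptions.

```python
# A minimal sketch of a composite multimodal pretraining objective; not the
# PreRadE implementation. Head sizes and loss weights are illustrative assumptions.
import torch.nn as nn
import torch.nn.functional as F

class CompositePretrainLoss(nn.Module):
    def __init__(self, hidden=256, vocab_size=30522, img_feat_dim=1024):
        super().__init__()
        self.mlm_head = nn.Linear(hidden, vocab_size)    # predicts masked report tokens
        self.mfr_head = nn.Linear(hidden, img_feat_dim)  # regresses masked image features
        self.itm_head = nn.Linear(hidden, 2)             # image-text matched vs. mismatched

    def forward(self, text_states, masked_token_ids, text_mask,
                img_states, masked_img_feats, img_mask,
                cls_state, itm_labels, weights=(1.0, 1.0, 1.0)):
        # MLM: cross-entropy on masked text positions only
        mlm = F.cross_entropy(self.mlm_head(text_states)[text_mask],
                              masked_token_ids[text_mask])
        # MFR: regress the original visual features at masked image positions
        mfr = F.mse_loss(self.mfr_head(img_states)[img_mask],
                         masked_img_feats[img_mask])
        # ITM: binary classification from the joint [CLS] representation
        itm = F.cross_entropy(self.itm_head(cls_state), itm_labels)
        w_mlm, w_mfr, w_itm = weights
        return w_mlm * mlm + w_mfr * mfr + w_itm * itm
```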

27 pages, 9648 KiB  
Article
Deep Learning Cascaded Feature Selection Framework for Breast Cancer Classification: Hybrid CNN with Univariate-Based Approach
by Nagwan Abdel Samee, Ghada Atteia, Souham Meshoul, Mugahed A. Al-antari and Yasser M. Kadah
Mathematics 2022, 10(19), 3631; https://doi.org/10.3390/math10193631 - 04 Oct 2022
Cited by 25 | Viewed by 2651
Abstract
With the help of machine learning, many of the problems that have plagued mammography in the past have been solved. Effective prediction models need many normal and tumor samples. For medical applications such as a breast cancer diagnosis framework, it is difficult to gather labeled training data and construct effective learning frameworks. Transfer learning is an emerging strategy that has recently been used to tackle the scarcity of medical data by transferring pre-trained convolutional network knowledge into the medical domain. Despite the strong reputation of transfer learning based on pre-trained Convolutional Neural Networks (CNNs) for medical imaging, several hurdles still stand in the way of prominent breast cancer classification performance. In this paper, we attempt to solve the Feature Dimensionality Curse (FDC) problem of the deep features derived from transfer learning with pre-trained CNNs. This problem arises from the high dimensionality of the extracted deep features relative to the small number of available medical data samples. Therefore, a novel deep learning cascaded feature selection framework is proposed based on pre-trained deep convolutional networks as well as a univariate-based paradigm. The AlexNet, VGG, and GoogleNet deep learning models are selected and used to extract the shallow and deep features from the INbreast mammograms, whereas the univariate strategy helps to overcome the dimensionality curse and multicollinearity issues for the extracted features. The key features selected via the univariate approach are statistically significant (p-value ≤ 0.05) and can efficiently train the classification models. Using such optimal features, the proposed framework achieved promising evaluation performance: 98.50% accuracy, 98.06% sensitivity, 98.99% specificity, and 98.98% precision. Such performance could help develop a practical and reliable computer-aided diagnosis (CAD) framework for breast cancer classification.
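
As a rough illustration of the cascaded idea described above (deep features from a pretrained CNN followed by univariate selection at p ≤ 0.05), the sketch below uses torchvision and scikit-learn. It is not the authors' pipeline; the AlexNet backbone, the ANOVA F-test, and the SVM classifier are illustrative assumptions.

```python
# A minimal sketch of "pretrained-CNN features + univariate selection"; not the
# authors' pipeline. Assumes torchvision >= 0.13 and scikit-learn are installed.
import numpy as np
import torch
import torchvision.models as models
from sklearn.feature_selection import f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def deep_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) preprocessed mammogram patches."""
    backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    backbone.classifier = backbone.classifier[:-1]   # drop the final FC layer
    backbone.eval()
    with torch.no_grad():
        return backbone(images).numpy()              # (N, 4096) deep features

def select_significant(X: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    """Keep only features whose univariate ANOVA F-test p-value is <= alpha."""
    _, p_values = f_classif(X, y)
    keep = p_values <= alpha
    return X[:, keep], keep

# Hypothetical usage with preprocessed images and binary labels:
# X = deep_features(image_batch)
# X_sel, mask = select_significant(X, labels)
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_sel, labels)
```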

23 pages, 16498 KiB  
Article
Fracture Recognition in Paediatric Wrist Radiographs: An Object Detection Approach
by Franko Hržić, Sebastian Tschauner, Erich Sorantin and Ivan Štajduhar
Mathematics 2022, 10(16), 2939; https://doi.org/10.3390/math10162939 - 15 Aug 2022
Cited by 6 | Viewed by 2129
Abstract
Wrist fractures are commonly diagnosed using X-ray imaging, supplemented by magnetic resonance imaging and computed tomography when required. Radiologists can sometimes overlook fractures because they are difficult to spot. In contrast, some fractures can be spotted easily and only slow down the radiologists because of the reporting systems. We propose a machine learning model based on the YOLOv4 method that can help solve these issues. Rigorous testing on three levels showed that the YOLOv4-based model obtained significantly better results than the state-of-the-art method based on the U-Net model. In the comparison against five radiologists, the YOLO 512 Anchor model-AI (the best-performing YOLOv4-based model) was significantly better than four of the radiologists (AI AUC-ROC = 0.965, radiologist average AUC-ROC = 0.831 ± 0.075). Furthermore, we have shown that three out of five radiologists significantly improved their performance when aided by the AI model. Finally, we compared our work with other related work and discussed what to consider when building an ML-based predictive model for wrist fracture detection. All our findings are based on a complex dataset of 19,700 pediatric X-ray images.
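
One way to compare a box-level detector with human readers at the image level, as the AUC-ROC figures above suggest, is to reduce each image's detections to a single fracture score. The sketch below illustrates that idea only; it is not the authors' evaluation protocol, and the detection format and max-confidence scoring rule are assumptions.

```python
# A minimal sketch of turning object-detection output into an image-level
# ROC-AUC; an assumed evaluation idea, not the authors' exact protocol.
from sklearn.metrics import roc_auc_score

def image_level_scores(detections_per_image):
    """detections_per_image: list of lists of (class_name, confidence, bbox).
    The image-level fracture score is the highest fracture-box confidence."""
    scores = []
    for dets in detections_per_image:
        fracture_confs = [conf for cls, conf, _ in dets if cls == "fracture"]
        scores.append(max(fracture_confs, default=0.0))
    return scores

# Hypothetical usage with ground-truth image labels (1 = fracture present):
# auc = roc_auc_score(y_true, image_level_scores(all_detections))
```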

14 pages, 600 KiB  
Article
Quantitative Analysis of Anesthesia Recovery Time by Machine Learning Prediction Models
by Shumin Yang, Huaying Li, Zhizhe Lin, Youyi Song, Cheng Lin and Teng Zhou
Mathematics 2022, 10(15), 2772; https://doi.org/10.3390/math10152772 - 04 Aug 2022
Viewed by 2228
Abstract
It is important for anesthesiologists to have a precise grasp of a patient's recovery time after anesthesia. Accurate prediction of anesthesia recovery time can support anesthesiologist decision-making during surgery and help reduce surgical risk for patients. However, few effective models have been proposed to solve this problem for anesthesiologists. In this paper, we seek effective forecasting methods. First, we collected anesthesia data from 1824 patients at an eye center and then performed data preprocessing, extracting 85 variables for predicting recovery time from anesthesia. Second, we used machine learning methods, including Bayesian ridge, LightGBM, random forest, support vector regression, and extreme gradient boosting, to extract predictive information from these variables. We also designed simple deep learning prediction models, including linear residual neural networks and jumping-knowledge linear neural networks. Lastly, we performed a comparative experiment with the above methods on the dataset. The experiments demonstrate that the machine learning methods perform better than the deep learning models on this small number of samples. We find that random forest and XGBoost are more effective than the other methods at extracting information from these variables about postoperative anesthesia recovery time.
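
A minimal sketch of the kind of model comparison described above is given below, using cross-validated mean absolute error over several regressors. It is not the paper's code; the metric, the 5-fold setup, the hyperparameters, and the availability of the xgboost and lightgbm packages are assumptions.

```python
# A minimal sketch of comparing regressors for recovery-time prediction; not
# the paper's code. Metric, CV setup, and hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from xgboost import XGBRegressor      # assumes the xgboost package is installed
from lightgbm import LGBMRegressor    # assumes the lightgbm package is installed

def compare_models(X: np.ndarray, y: np.ndarray, cv: int = 5):
    """X: (n_patients, 85) preprocessed variables; y: recovery time (e.g., minutes)."""
    models = {
        "bayesian_ridge": BayesianRidge(),
        "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
        "svr": SVR(kernel="rbf", C=10.0),
        "xgboost": XGBRegressor(n_estimators=300, learning_rate=0.05),
        "lightgbm": LGBMRegressor(n_estimators=300, learning_rate=0.05),
    }
    results = {}
    for name, model in models.items():
        # cross_val_score returns negative MAE, so negate it for reporting
        mae = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
        results[name] = (mae.mean(), mae.std())
    return results
```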
